ai-infra · AI · Local Setup · Qwen · Ollama · OpenCode · Coding Assistant

Running Qwen 2.5 Coder Locally with OpenCode: A Private Offline AI Coding Assistant

A complete setup guide for running Qwen 2.5 Coder locally via Ollama and connecting it to OpenCode (an open-source AI coding CLI) to create a private, offline-capable AI pair programmer in your terminal.

April 20, 2026·1 min read


What I Did

  • Installed Ollama using the official install script (macOS/Linux) or Homebrew.
  • Pulled the Qwen 2.5 Coder model with `ollama pull qwen2.5-coder`.
  • Installed OpenCode via npm or Homebrew.
  • Configured OpenCode's global JSON config file to point at the local Ollama server and the Qwen model.
  • Increased `num_ctx` in the config so the model gets a larger context window (more working memory for code), at the cost of extra RAM/VRAM.
  • Started the Ollama server with `ollama serve` before launching OpenCode.
  • Ran `opencode` directly in the project directory.
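The global config step can look like the following. This is a sketch based on OpenCode's provider configuration; the file path (`~/.config/opencode/opencode.json` on most systems), the `baseURL`, and the model entry all assume a default local Ollama install, so check OpenCode's docs for your version:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen2.5-coder": {
          "name": "Qwen 2.5 Coder"
        }
      }
    }
  }
}
```

The model key should match the name shown by `ollama list`, otherwise OpenCode will ask Ollama for a model it doesn't have.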

Key Technical Findings

  • The context window (`num_ctx`) matters: Ollama's default is small, so long files and multi-file context get silently truncated unless you raise it, at the cost of more memory.
  • Together, Ollama and OpenCode provide a capable, fully private AI coding setup with no cloud dependency: prompts and code never leave the machine.
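One way to raise `num_ctx` is Ollama's Modelfile mechanism, sketched below. The value 16384 and the derived model name `qwen2.5-coder-16k` are just examples; pick a context size your RAM/VRAM can handle:

```shell
# Write a Modelfile that bumps the context window for the pulled model.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder
PARAMETER num_ctx 16384
EOF

# Build a derived model with the larger context (requires Ollama installed):
# ollama create qwen2.5-coder-16k -f Modelfile
```

After `ollama create`, reference the new model name (here `qwen2.5-coder-16k`) in the OpenCode config instead of the base model.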

Commands Used

Install Ollama: `curl -fsSL https://ollama.com/install.sh | sh` or `brew install ollama`
Pull Qwen model: `ollama pull qwen2.5-coder`
Install OpenCode: `npm install -g opencode-ai` or `brew install sst/tap/opencode`
Verify Ollama: `ollama --version`
List models: `ollama list`
Verify OpenCode: `opencode --version`
Start Ollama server: `ollama serve`
Run OpenCode in project directory: `cd ~/projects/my-project && opencode`
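Before pointing OpenCode at the server, it's worth checking that Ollama is actually answering. Here is a minimal smoke test against Ollama's HTTP API; it assumes the default port 11434, and the helper names `build_payload` and `ask_ollama` are mine, not part of any library:

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # default Ollama server address

def build_payload(prompt: str, model: str = "qwen2.5-coder") -> dict:
    """Non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "qwen2.5-coder") -> str:
    """Send one generate request and return the model's text response."""
    req = urllib.request.Request(
        f"{OLLAMA}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage, once `ollama serve` is running:
# print(ask_ollama("Write a one-line Python hello world."))
```

If this returns text, OpenCode's `baseURL` pointing at the same host and port should work too.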

Next Steps

  • Install and configure Ollama.
  • Pull the Qwen 2.5 Coder model.
  • Install and configure OpenCode.
  • Test the setup with a range of coding tasks.