# Oh-My-OpenCode Configuration

Highly opinionated, but adjustable to taste.

## Quick Start

Most users don't need to configure anything manually. Run the interactive installer:

```sh
bunx oh-my-opencode install
```

It asks about your providers (Claude, OpenAI, Gemini, etc.) and generates an optimal config automatically.

Want to customize? Here are the common patterns:
```jsonc
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",

  // Override specific agent models
  "agents": {
    "oracle": { "model": "openai/gpt-5.2" },             // Use GPT for debugging
    "librarian": { "model": "zai-coding-plan/glm-4.7" }, // Cheap model for research
    "explore": { "model": "opencode/gpt-5-nano" }        // Free model for grep
  },

  // Override category models (used by task)
  "categories": {
    "quick": { "model": "opencode/gpt-5-nano" },             // Fast/cheap for trivial tasks
    "visual-engineering": { "model": "google/gemini-3-pro" } // Gemini for UI
  }
}
```
**Find available models:** Run `opencode models` to see all models in your environment.
## Config File Locations

Config files are read in priority order:

1. Project config: `.opencode/oh-my-opencode.jsonc` or `.opencode/oh-my-opencode.json` (prefers `.jsonc` when both exist)
2. User config (platform-specific; prefers `.jsonc` when both exist):

| Platform | User Config Path |
|---|---|
| Windows | `~/.config/opencode/oh-my-opencode.jsonc` (preferred) or `~/.config/opencode/oh-my-opencode.json`; falls back to `%APPDATA%\opencode\oh-my-opencode.jsonc` / `%APPDATA%\opencode\oh-my-opencode.json` |
| macOS/Linux | `~/.config/opencode/oh-my-opencode.jsonc` (preferred) or `~/.config/opencode/oh-my-opencode.json` (fallback) |
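For example, a project-level config that pins a single agent's model can be this small (the model name is reused from the examples in this document; substitute whatever `opencode models` lists in your environment):

```jsonc
// .opencode/oh-my-opencode.jsonc — committed with the project,
// takes priority over the user-level config
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
  "agents": {
    "explore": { "model": "opencode/gpt-5-nano" }
  }
}
```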
Schema autocomplete is supported:

```jsonc
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json"
}
```
## JSONC Support

The oh-my-opencode configuration file supports JSONC (JSON with Comments):

- Line comments: `// comment`
- Block comments: `/* comment */`
- Trailing commas: `{ "key": "value", }`

When both `oh-my-opencode.jsonc` and `oh-my-opencode.json` exist, `.jsonc` takes priority.
Example with comments:

```jsonc
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",

  /* Agent overrides - customize models for specific tasks */
  "agents": {
    "oracle": {
      "model": "openai/gpt-5.2" // GPT for strategic reasoning
    },
    "explore": {
      "model": "opencode/gpt-5-nano" // Free & fast for exploration
    },
  },
}
```
## Google Auth

**Recommended:** For Google Gemini authentication, install the `opencode-antigravity-auth` plugin (`@latest`). It provides multi-account load balancing, variant-based thinking levels, a dual quota system (Antigravity + Gemini CLI), and active maintenance. See Installation > Google Gemini.
## Ollama Provider

**IMPORTANT:** When using Ollama as a provider, you must disable streaming to avoid JSON parsing errors.

### Required Configuration

```jsonc
{
  "agents": {
    "explore": {
      "model": "ollama/qwen3-coder",
      "stream": false
    }
  }
}
```
### Why `stream: false` Is Required

Ollama returns NDJSON (newline-delimited JSON) when streaming is enabled, but the Claude Code SDK expects a single JSON object. This causes `JSON Parse error: Unexpected EOF` when agents attempt tool calls.

Example of the problem:

```jsonc
// Ollama streaming response (NDJSON - multiple lines)
{"message":{"tool_calls":[...]}, "done":false}
{"message":{"content":""}, "done":true}

// Claude Code SDK expects (single JSON object)
{"message":{"tool_calls":[...], "content":""}, "done":true}
```
### Supported Models

Common Ollama models that work with oh-my-opencode:

| Model | Best For | Configuration |
|---|---|---|
| `ollama/qwen3-coder` | Code generation, build fixes | `{"model": "ollama/qwen3-coder", "stream": false}` |
| `ollama/ministral-3:14b` | Exploration, codebase search | `{"model": "ollama/ministral-3:14b", "stream": false}` |
| `ollama/lfm2.5-thinking` | Documentation, writing | `{"model": "ollama/lfm2.5-thinking", "stream": false}` |
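Putting the table together, here is a sketch of a fully local setup. The agent-to-model pairing is illustrative (based on the "Best For" column above), not an upstream recommendation:

```jsonc
{
  "agents": {
    // Every Ollama-backed agent needs stream: false (see above)
    "explore":   { "model": "ollama/ministral-3:14b", "stream": false },
    "librarian": { "model": "ollama/lfm2.5-thinking", "stream": false },
    "oracle":    { "model": "ollama/qwen3-coder",     "stream": false }
  }
}
```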
### Troubleshooting

If you encounter `JSON Parse error: Unexpected EOF`:

1. Verify `stream: false` is set in your agent configuration
2. Check Ollama is running: `curl http://localhost:11434/api/tags`
3. Test with curl:

   ```sh
   curl -s http://localhost:11434/api/chat \
     -d '{"model": "qwen3-coder", "messages": [{"role": "user", "content": "Hello"}], "stream": false}'
   ```

4. See detailed troubleshooting: docs/troubleshooting/ollama-streaming-issue.md
### Future SDK Fix

The proper long-term fix requires the Claude Code SDK to parse NDJSON responses correctly. Until then, use `stream: false` as a workaround.

Tracking: https://github.com/code-yeongyu/oh-my-opencode/issues/1124
## Agents

Override built-in agent settings:

```jsonc
{
  "agents": {
    "explore": {
      "model": "anthropic/claude-haiku-4-5",
      "temperature": 0.5
    },
    "multimodal-looker": {
      "disable": true
    }
  }
}
```

Each agent supports: `model`, `fallback_models`, `temperature`, `top_p`, `prompt`, `prompt_append`, `tools`, `disable`, `description`, `mode`, `color`, `permission`, `category`, `variant`, `maxTokens`, `thinking`, `reasoningEffort`, `textVerbosity`, `providerOptions`.
### Additional Agent Options

| Option | Type | Description |
|---|---|---|
| `fallback_models` | string/array | Fallback models for runtime switching on API errors. Single string or array of model strings. |
| `category` | string | Category name to inherit model and other settings from category defaults |
| `variant` | string | Model variant (e.g., `max`, `high`, `medium`, `low`, `xhigh`) |
| `maxTokens` | number | Maximum tokens for response. Passed directly to the OpenCode SDK. |
| `thinking` | object | Extended thinking configuration for Anthropic models. See Thinking Options below. |
| `reasoningEffort` | string | OpenAI reasoning effort level. Values: `low`, `medium`, `high`, `xhigh`. |
| `textVerbosity` | string | Text verbosity level. Values: `low`, `medium`, `high`. |
| `providerOptions` | object | Provider-specific options passed directly to the OpenCode SDK. |
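As a sketch, several of these options combined on one agent. Whether every option is meaningful for a given provider is provider-dependent, and the values here are illustrative:

```jsonc
{
  "agents": {
    "oracle": {
      "model": "openai/gpt-5.2",
      "fallback_models": ["anthropic/claude-opus-4-6"], // tried on retryable API errors
      "reasoningEffort": "high",     // OpenAI-style reasoning effort
      "textVerbosity": "low",        // terse responses
      "maxTokens": 32000             // passed through to the OpenCode SDK
    }
  }
}
```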
### Thinking Options (Anthropic)

```jsonc
{
  "agents": {
    "oracle": {
      "thinking": {
        "type": "enabled",
        "budgetTokens": 200000
      }
    }
  }
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `type` | string | - | `enabled` or `disabled` |
| `budgetTokens` | number | - | Maximum token budget for extended thinking |
Use `prompt_append` to add extra instructions without replacing the default system prompt:

```jsonc
{
  "agents": {
    "librarian": {
      "prompt_append": "Always use the elisp-dev-mcp for Emacs Lisp documentation lookups."
    }
  }
}
```
You can also override settings for `Sisyphus` (the main orchestrator) and `build` (the default agent) using the same options.
### Permission Options

Fine-grained control over what agents can do:

```jsonc
{
  "agents": {
    "explore": {
      "permission": {
        "edit": "deny",
        "bash": "ask",
        "webfetch": "allow"
      }
    }
  }
}
```
| Permission | Description | Values |
|---|---|---|
| `edit` | File editing permission | `ask` / `allow` / `deny` |
| `bash` | Bash command execution | `ask` / `allow` / `deny`, or per-command: `{ "git": "allow", "rm": "deny" }` |
| `webfetch` | Web request permission | `ask` / `allow` / `deny` |
| `doom_loop` | Allow overriding infinite-loop detection | `ask` / `allow` / `deny` |
| `external_directory` | Access files outside the project root | `ask` / `allow` / `deny` |
Or disable agents via `disabled_agents` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:

```jsonc
{
  "disabled_agents": ["oracle", "multimodal-looker"]
}
```

Available agents: `sisyphus`, `hephaestus`, `prometheus`, `oracle`, `librarian`, `explore`, `multimodal-looker`, `metis`, `momus`, `atlas`
## Built-in Skills

Oh My OpenCode includes built-in skills that provide additional capabilities:

- **playwright** (default) / **agent-browser**: Browser automation for web scraping, testing, screenshots, and browser interactions. See Browser Automation for switching between providers.
- **git-master**: Git expert for atomic commits, rebase/squash, and history search (blame, bisect, `log -S`). STRONGLY RECOMMENDED: use with `task(category='quick', load_skills=['git-master'], ...)` to save context.
Disable built-in skills via `disabled_skills` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:

```jsonc
{
  "disabled_skills": ["playwright"]
}
```

Available built-in skills: `playwright`, `agent-browser`, `git-master`
## Skills Configuration

Configure advanced skills settings, including custom skill sources, enabling/disabling specific skills, and defining custom skills.

```jsonc
{
  "skills": {
    "sources": [
      { "path": "./custom-skills", "recursive": true },
      "https://example.com/skill.yaml"
    ],
    "enable": ["my-custom-skill"],
    "disable": ["other-skill"],
    "my-skill": {
      "description": "Custom skill description",
      "template": "Custom prompt template",
      "from": "source-file.ts",
      "model": "custom/model",
      "agent": "custom-agent",
      "subtask": true,
      "argument-hint": "usage hint",
      "license": "MIT",
      "compatibility": ">= 3.0.0",
      "metadata": {
        "author": "Your Name"
      },
      "allowed-tools": ["tool1", "tool2"]
    }
  }
}
```
### Sources

Load skills from local directories or remote URLs:

```jsonc
{
  "skills": {
    "sources": [
      { "path": "./custom-skills", "recursive": true },
      { "path": "./single-skill.yaml" },
      "https://example.com/skill.yaml",
      "https://raw.githubusercontent.com/user/repo/main/skills/*"
    ]
  }
}
```
| Option | Default | Description |
|---|---|---|
| `path` | - | Local file/directory path or remote URL |
| `recursive` | `false` | Recursively load from directory |
| `glob` | - | Glob pattern for file selection |
### Enable/Disable Skills

```jsonc
{
  "skills": {
    "enable": ["skill-1", "skill-2"],
    "disable": ["disabled-skill"]
  }
}
```
### Custom Skill Definition

Define custom skills directly in your config:

| Option | Default | Description |
|---|---|---|
| `description` | - | Human-readable description of the skill |
| `template` | - | Custom prompt template for the skill |
| `from` | - | Source file to load the template from |
| `model` | - | Override model for this skill |
| `agent` | - | Override agent for this skill |
| `subtask` | `false` | Whether to run as a subtask |
| `argument-hint` | - | Hint for how to use the skill |
| `license` | - | Skill license |
| `compatibility` | - | Required oh-my-opencode version compatibility |
| `metadata` | - | Additional metadata as key-value pairs |
| `allowed-tools` | - | Array of tools this skill is allowed to use |
Example: custom skill

```jsonc
{
  "skills": {
    "data-analyst": {
      "description": "Specialized for data analysis tasks",
      "template": "You are a data analyst. Focus on statistical analysis, visualization, and data interpretation.",
      "model": "openai/gpt-5.2",
      "allowed-tools": ["read", "bash", "lsp_diagnostics"]
    }
  }
}
```
## Browser Automation

Choose between two browser automation providers:

| Provider | Interface | Features | Installation |
|---|---|---|---|
| **playwright** (default) | MCP tools | Playwright MCP server with structured tool calls | Auto-installed via `npx` |
| **agent-browser** | Bash CLI | Vercel's CLI with session management, parallel browsers | Requires `bun add -g agent-browser` |

Switch providers via `browser_automation_engine` in `oh-my-opencode.json`:

```jsonc
{
  "browser_automation_engine": {
    "provider": "agent-browser"
  }
}
```
### Playwright (Default)

Uses the official Playwright MCP server (`@playwright/mcp`). Browser automation happens through structured MCP tool calls.

### agent-browser

Uses Vercel's agent-browser CLI. Key advantages:

- **Session management**: Run multiple isolated browser instances with the `--session` flag
- **Persistent profiles**: Keep browser state across restarts with `--profile`
- **Snapshot-based workflow**: Get element refs via `snapshot -i`, then interact with `@e1`, `@e2`, etc.
- **CLI-first**: All commands via Bash - great for scripting
Installation required:

```sh
bun add -g agent-browser
agent-browser install   # Download Chromium
```

Example workflow:

```sh
agent-browser open https://example.com
agent-browser snapshot -i                  # Get interactive elements with refs
agent-browser fill @e1 "user@example.com"
agent-browser click @e2
agent-browser screenshot result.png
agent-browser close
```
## Tmux Integration

Run background subagents in separate tmux panes for visual multi-agent execution: see your agents working in parallel, each in its own terminal pane.

Enable tmux integration via `tmux` in `oh-my-opencode.json`:

```jsonc
{
  "tmux": {
    "enabled": true,
    "layout": "main-vertical",
    "main_pane_size": 60,
    "main_pane_min_width": 120,
    "agent_pane_min_width": 40
  }
}
```
| Option | Default | Description |
|---|---|---|
| `enabled` | `false` | Enable tmux subagent pane spawning. Only works when running inside an existing tmux session. |
| `layout` | `main-vertical` | Tmux layout for agent panes. See Layout Options below. |
| `main_pane_size` | `60` | Main pane size as a percentage (20-80). |
| `main_pane_min_width` | `120` | Minimum width for the main pane, in columns. |
| `agent_pane_min_width` | `40` | Minimum width for each agent pane, in columns. |
### Layout Options

| Layout | Description |
|---|---|
| `main-vertical` | Main pane left, agent panes stacked on the right (default) |
| `main-horizontal` | Main pane top, agent panes stacked on the bottom |
| `tiled` | All panes in an equal-sized grid |
| `even-horizontal` | All panes in a horizontal row |
| `even-vertical` | All panes in a vertical stack |
### Requirements

- **Must run inside tmux**: The feature only activates when OpenCode is already running inside a tmux session
- **Tmux installed**: Requires `tmux` to be available in `PATH`
- **Server mode**: OpenCode must run with the `--port` flag to enable subagent pane spawning
### How It Works

When `tmux.enabled` is `true` and you're inside a tmux session:

- Background agents (via `task(run_in_background=true)`) spawn in new tmux panes
- Each pane shows the subagent's real-time output
- Panes are automatically closed when the subagent completes
- The layout is automatically adjusted based on your configuration
### Running OpenCode with Tmux Subagent Support

To enable tmux subagent panes, OpenCode must run in server mode with the `--port` flag. This starts an HTTP server that subagent panes connect to via `opencode attach`.

Basic setup:

```sh
# Start a tmux session
tmux new -s dev

# Run OpenCode in server mode (port 4096)
opencode --port 4096

# Now background agents will appear in separate panes
```
### Recommended: Shell Function

For convenience, create a shell function that automatically handles tmux sessions and port allocation. Here's an example for Fish shell:

```fish
# ~/.config/fish/config.fish
function oc
    set base_name (basename (pwd))
    # md5 is the macOS command; on Linux, use md5sum (as in the Bash version below)
    set path_hash (echo (pwd) | md5 | cut -c1-4)
    set session_name "$base_name-$path_hash"

    # Find an available port starting from 4096
    function __oc_find_port
        set port 4096
        while test $port -lt 5096
            if not lsof -i :$port >/dev/null 2>&1
                echo $port
                return 0
            end
            set port (math $port + 1)
        end
        echo 4096
    end

    set oc_port (__oc_find_port)
    set -x OPENCODE_PORT $oc_port

    if set -q TMUX
        # Already inside tmux - just run with the port
        opencode --port $oc_port $argv
    else
        # Create a tmux session and run opencode
        set oc_cmd "OPENCODE_PORT=$oc_port opencode --port $oc_port $argv; exec fish"
        if tmux has-session -t "$session_name" 2>/dev/null
            tmux new-window -t "$session_name" -c (pwd) "$oc_cmd"
            tmux attach-session -t "$session_name"
        else
            tmux new-session -s "$session_name" -c (pwd) "$oc_cmd"
        end
    end

    functions -e __oc_find_port
end
```
Bash/Zsh equivalent:

```sh
# ~/.bashrc or ~/.zshrc
oc() {
  local base_name=$(basename "$PWD")
  local path_hash=$(echo "$PWD" | md5sum | cut -c1-4)
  local session_name="${base_name}-${path_hash}"

  # Find an available port
  local port=4096
  while [ $port -lt 5096 ]; do
    if ! lsof -i :$port >/dev/null 2>&1; then
      break
    fi
    port=$((port + 1))
  done

  export OPENCODE_PORT=$port

  if [ -n "$TMUX" ]; then
    opencode --port $port "$@"
  else
    local oc_cmd="OPENCODE_PORT=$port opencode --port $port $*; exec $SHELL"
    if tmux has-session -t "$session_name" 2>/dev/null; then
      tmux new-window -t "$session_name" -c "$PWD" "$oc_cmd"
      tmux attach-session -t "$session_name"
    else
      tmux new-session -s "$session_name" -c "$PWD" "$oc_cmd"
    fi
  fi
}
```
How subagent panes work:

1. The main OpenCode instance starts an HTTP server on the specified port (e.g., `http://localhost:4096`)
2. When a background agent spawns, Oh My OpenCode creates a new tmux pane
3. The pane runs: `opencode attach http://localhost:4096 --session <session-id>`
4. Each subagent pane shows real-time streaming output
5. Panes are automatically closed when the subagent completes
Environment variables:

| Variable | Description |
|---|---|
| `OPENCODE_PORT` | Default port for the HTTP server (used if `--port` is not specified) |
### Server Mode Reference

OpenCode's server mode exposes an HTTP API for programmatic interaction:

```sh
# Standalone server (no TUI)
opencode serve --port 4096

# TUI with server (recommended for tmux integration)
opencode --port 4096
```

| Flag | Default | Description |
|---|---|---|
| `--port` | `4096` | Port for the HTTP server |
| `--hostname` | `127.0.0.1` | Hostname to listen on |

For more details, see the OpenCode Server documentation.
## Git Master

Configure git-master skill behavior:

```jsonc
{
  "git_master": {
    "commit_footer": true,
    "include_co_authored_by": true
  }
}
```

| Option | Default | Description |
|---|---|---|
| `commit_footer` | `true` | Adds an "Ultraworked with Sisyphus" footer to commit messages. |
| `include_co_authored_by` | `true` | Adds a `Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>` trailer to commits. |
## Sisyphus Agent

When enabled (the default), Sisyphus provides a powerful orchestrator with optional specialized agents:

- **Sisyphus**: Primary orchestrator agent (Claude Opus 4.6)
- **OpenCode-Builder**: OpenCode's default build agent, renamed due to SDK limitations (disabled by default)
- **Prometheus (Planner)**: OpenCode's default plan agent with work-planner methodology (enabled by default)
- **Metis (Plan Consultant)**: Pre-planning analysis agent that identifies hidden requirements and AI failure points

Configuration options:

```jsonc
{
  "sisyphus_agent": {
    "disabled": false,
    "default_builder_enabled": false,
    "planner_enabled": true,
    "replace_plan": true
  }
}
```
Example: enable OpenCode-Builder:

```jsonc
{
  "sisyphus_agent": {
    "default_builder_enabled": true
  }
}
```

This enables the OpenCode-Builder agent alongside Sisyphus. The default build agent is always demoted to subagent mode when Sisyphus is enabled.

Example: disable all Sisyphus orchestration:

```jsonc
{
  "sisyphus_agent": {
    "disabled": true
  }
}
```
You can also customize Sisyphus agents like other agents:

```jsonc
{
  "agents": {
    "Sisyphus": {
      "model": "anthropic/claude-sonnet-4",
      "temperature": 0.3
    },
    "OpenCode-Builder": {
      "model": "anthropic/claude-opus-4"
    },
    "Prometheus (Planner)": {
      "model": "openai/gpt-5.2"
    },
    "Metis (Plan Consultant)": {
      "model": "anthropic/claude-sonnet-4-6"
    }
  }
}
```
| Option | Default | Description |
|---|---|---|
| `disabled` | `false` | When `true`, disables all Sisyphus orchestration and restores the original build/plan agents as primary. |
| `default_builder_enabled` | `false` | When `true`, enables the OpenCode-Builder agent (same as OpenCode's build agent, renamed due to SDK limitations). Disabled by default. |
| `planner_enabled` | `true` | When `true`, enables the Prometheus (Planner) agent with work-planner methodology. Enabled by default. |
| `replace_plan` | `true` | When `true`, demotes the default plan agent to subagent mode. Set to `false` to keep both Prometheus (Planner) and the default plan agent available. |
## Background Tasks

Configure concurrency limits for background agent tasks. This controls how many parallel background agents can run simultaneously.

```jsonc
{
  "background_task": {
    "defaultConcurrency": 5,
    "staleTimeoutMs": 180000,
    "providerConcurrency": {
      "anthropic": 3,
      "openai": 5,
      "google": 10
    },
    "modelConcurrency": {
      "anthropic/claude-opus-4-6": 2,
      "google/gemini-3-flash": 10
    }
  }
}
```
| Option | Default | Description |
|---|---|---|
| `defaultConcurrency` | - | Default maximum concurrent background tasks for all providers/models |
| `staleTimeoutMs` | `180000` | Stale timeout in milliseconds - interrupt tasks with no activity for this duration (minimum: 60000 = 1 minute) |
| `providerConcurrency` | - | Per-provider concurrency limits. Keys are provider names (e.g., `anthropic`, `openai`, `google`) |
| `modelConcurrency` | - | Per-model concurrency limits. Keys are full model names (e.g., `anthropic/claude-opus-4-6`). Overrides provider limits. |

Priority order: `modelConcurrency` > `providerConcurrency` > `defaultConcurrency`
Use cases:

- Limit expensive models (e.g., Opus) to prevent cost spikes
- Allow more concurrent tasks for fast/cheap models (e.g., Gemini Flash)
- Respect provider rate limits by setting provider-level caps
## Runtime Fallback

Automatically switch to backup models when the primary model encounters retryable API errors (rate limits, overload, etc.) or provider key misconfiguration errors (for example, a missing API key). This keeps conversations running without manual intervention.

```jsonc
{
  "runtime_fallback": {
    "enabled": true,
    "retry_on_errors": [400, 429, 503, 529],
    "max_fallback_attempts": 3,
    "cooldown_seconds": 60,
    "timeout_seconds": 30,
    "notify_on_fallback": true
  }
}
```
| Option | Default | Description |
|---|---|---|
| `enabled` | `true` | Enable runtime fallback |
| `retry_on_errors` | `[400, 429, 503, 529]` | HTTP status codes that trigger fallback (rate limit, service unavailable). Also supports certain classified provider errors (for example, a missing API key) that do not expose HTTP status codes. |
| `max_fallback_attempts` | `3` | Maximum fallback attempts per session (1-20) |
| `cooldown_seconds` | `60` | Cooldown in seconds before retrying a failed model |
| `timeout_seconds` | `30` | Timeout in seconds for an in-flight fallback request before forcing the next fallback model. ⚠️ Set to `0` to disable auto-retry signal detection (see below). |
| `notify_on_fallback` | `true` | Show a toast notification when switching to a fallback model |
### timeout_seconds: Understanding the 0 Value

⚠️ **IMPORTANT**: Setting `timeout_seconds: 0` disables auto-retry signal detection. This is a critical behavior change:

| Setting | Behavior |
|---|---|
| `timeout_seconds: 30` (default) | ✅ Full fallback coverage: error-based fallback (429, 503, etc.) plus auto-retry signal detection (provider messages like "retrying in 8h") |
| `timeout_seconds: 0` | ⚠️ Limited fallback: only error-based fallback works. Provider retry messages are completely ignored. Timeout-based escalation is disabled. |
When `timeout_seconds: 0`:

- ✅ HTTP errors (429, 503, 529) still trigger fallback
- ✅ Provider key errors (missing API key) still trigger fallback
- ❌ Provider retry messages ("retrying in Xh") are ignored
- ❌ Timeout-based escalation is disabled
- ❌ Hanging requests do not advance to the next fallback model

**Recommendation:** Use a non-zero value (e.g., 30 seconds) to enable full fallback coverage. Only set it to `0` if you explicitly want to disable auto-retry signal detection.
### How It Works

1. When an API error matching `retry_on_errors` occurs (or a classified provider key error, such as a missing API key), the hook intercepts it
2. The next request automatically uses the next available model from `fallback_models`
3. Failed models enter a cooldown period before being retried
4. If `timeout_seconds > 0` and a fallback provider hangs, the timeout advances to the next fallback model
5. An optional toast notification informs you of the model switch
### Configuring Fallback Models

Define `fallback_models` at the agent or category level:

```jsonc
{
  "agents": {
    "sisyphus": {
      "model": "anthropic/claude-opus-4-5",
      "fallback_models": ["openai/gpt-5.2", "google/gemini-3-pro"]
    }
  },
  "categories": {
    "ultrabrain": {
      "model": "openai/gpt-5.2-codex",
      "fallback_models": ["anthropic/claude-opus-4-5", "google/gemini-3-pro"]
    }
  }
}
```

When the primary model fails:

1. First fallback: `openai/gpt-5.2`
2. Second fallback: `google/gemini-3-pro`
3. After `max_fallback_attempts`, returns to the primary model
## Categories

Categories enable domain-specific task delegation via the `task` tool. Each category applies runtime presets (model, temperature, prompt additions) when calling the Sisyphus-Junior agent.

### Built-in Categories

All 8 categories come with optimal model defaults, but you must configure them to use those defaults:
| Category | Built-in Default Model | Description |
|---|---|---|
| `visual-engineering` | `google/gemini-3-pro` (high) | Frontend, UI/UX, design, styling, animation |
| `ultrabrain` | `openai/gpt-5.3-codex` (xhigh) | Deep logical reasoning, complex architecture decisions |
| `deep` | `openai/gpt-5.3-codex` (medium) | Goal-oriented autonomous problem-solving, thorough research before action |
| `artistry` | `google/gemini-3-pro` (high) | Highly creative/artistic tasks, novel ideas |
| `quick` | `anthropic/claude-haiku-4-5` | Trivial tasks - single file changes, typo fixes, simple modifications |
| `unspecified-low` | `anthropic/claude-sonnet-4-6` | Tasks that don't fit other categories, low effort required |
| `unspecified-high` | `anthropic/claude-opus-4-6` (max) | Tasks that don't fit other categories, high effort required |
| `writing` | `kimi-for-coding/k2p5` | Documentation, prose, technical writing |
### ⚠️ Critical: Model Resolution Priority

Categories DO NOT use their built-in defaults unless configured. Model resolution follows this priority:

1. User-configured model (in `oh-my-opencode.json`)
2. Category's built-in default (if you add the category to your config)
3. System default model (from `opencode.json`)

Example problem:

```jsonc
// opencode.json
{ "model": "anthropic/claude-sonnet-4-6" }

// oh-my-opencode.json (empty categories section)
{}

// Result: ALL categories use claude-sonnet-4-6 (wasteful!)
// - quick tasks use Sonnet instead of Haiku (expensive)
// - ultrabrain uses Sonnet instead of GPT-5.2 (inferior reasoning)
// - visual tasks use Sonnet instead of Gemini (suboptimal for UI)
```
### Recommended Configuration

To use optimal models for each category, add them to your config:

```jsonc
{
  "categories": {
    "visual-engineering": {
      "model": "google/gemini-3-pro"
    },
    "ultrabrain": {
      "model": "openai/gpt-5.3-codex",
      "variant": "xhigh"
    },
    "deep": {
      "model": "openai/gpt-5.3-codex",
      "variant": "medium"
    },
    "artistry": {
      "model": "google/gemini-3-pro",
      "variant": "high"
    },
    "quick": {
      "model": "anthropic/claude-haiku-4-5" // Fast + cheap for trivial tasks
    },
    "unspecified-low": {
      "model": "anthropic/claude-sonnet-4-6"
    },
    "unspecified-high": {
      "model": "anthropic/claude-opus-4-6",
      "variant": "max"
    },
    "writing": {
      "model": "kimi-for-coding/k2p5"
    }
  }
}
```

Only configure categories you have access to. Unconfigured categories fall back to your system default model.
Usage
// Via task tool
task(category="visual-engineering", prompt="Create a responsive dashboard component")
task(category="ultrabrain", prompt="Design the payment processing flow")
// Or target a specific agent directly (bypasses categories)
task(agent="oracle", prompt="Review this architecture")
### Custom Categories

Add your own categories or override built-in ones:

```jsonc
{
  "categories": {
    "data-science": {
      "model": "anthropic/claude-sonnet-4-6",
      "temperature": 0.2,
      "prompt_append": "Focus on data analysis, ML pipelines, and statistical methods."
    },
    "visual-engineering": {
      "model": "google/gemini-3-pro-preview",
      "prompt_append": "Use shadcn/ui components and Tailwind CSS."
    }
  }
}
```

Each category supports: `model`, `fallback_models`, `temperature`, `top_p`, `maxTokens`, `thinking`, `reasoningEffort`, `textVerbosity`, `tools`, `prompt_append`, `variant`, `description`, `is_unstable_agent`.
### Additional Category Options

| Option | Type | Default | Description |
|---|---|---|---|
| `fallback_models` | string/array | - | Fallback models for runtime switching on API errors. Single string or array of model strings. |
| `description` | string | - | Human-readable description of the category's purpose. Shown in the delegate_task prompt. |
| `is_unstable_agent` | boolean | `false` | Mark the agent as unstable - forces background mode for monitoring. Auto-enabled for `gemini` models. |
## Model Resolution System

At runtime, Oh My OpenCode uses a 3-step resolution process to determine which model to use for each agent and category. This happens dynamically, based on your configuration and the available models.

### Overview

**Problem:** Users have different provider configurations. The system needs to select the best available model for each task at runtime.

**Solution:** A simple 3-step resolution flow:

1. **User Override**: If you specify a model in `oh-my-opencode.json`, use exactly that
2. **Provider Fallback**: Try each provider in the requirement's priority order until one is available
3. **System Default**: Fall back to OpenCode's configured default model
### Resolution Flow

```text
┌─────────────────────────────────────────────────────────────────┐
│                    MODEL RESOLUTION FLOW                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Step 1: USER OVERRIDE                                          │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │ User specified model in oh-my-opencode.json?            │    │
│  │   YES → Use exactly as specified                        │    │
│  │   NO  → Continue to Step 2                              │    │
│  └─────────────────────────────────────────────────────────┘    │
│                          │                                      │
│                          ▼                                      │
│  Step 2: PROVIDER PRIORITY FALLBACK                             │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │ For each provider in requirement.providers order:       │    │
│  │                                                         │    │
│  │  Example for Sisyphus:                                  │    │
│  │    anthropic → github-copilot → opencode → antigravity  │    │
│  │      │            │               │          │          │    │
│  │      ▼            ▼               ▼          ▼          │    │
│  │    Try: anthropic/claude-opus-4-6                       │    │
│  │    Try: github-copilot/claude-opus-4-6                  │    │
│  │    Try: opencode/claude-opus-4-6                        │    │
│  │    ...                                                  │    │
│  │                                                         │    │
│  │  Found in available models? → Return matched model      │    │
│  │  Not found? → Try next provider                         │    │
│  └─────────────────────────────────────────────────────────┘    │
│                          │                                      │
│                          ▼ (all providers exhausted)            │
│  Step 3: SYSTEM DEFAULT                                         │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │ Return systemDefaultModel (from opencode.json)          │    │
│  └─────────────────────────────────────────────────────────┘    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
### Agent Provider Chains

Each agent has a defined provider priority chain. The system tries providers in order until it finds an available model:

| Agent | Model (no prefix) | Provider Priority Chain |
|---|---|---|
| Sisyphus | `claude-opus-4-6` | anthropic/github-copilot/opencode → kimi-for-coding → opencode → zai-coding-plan → opencode |
| Hephaestus | `gpt-5.3-codex` | openai/github-copilot/opencode (requires provider) |
| oracle | `gpt-5.2` | openai/github-copilot/opencode → google/github-copilot/opencode → anthropic/github-copilot/opencode |
| librarian | `glm-4.7` | zai-coding-plan → opencode → anthropic/github-copilot/opencode |
| explore | `grok-code-fast-1` | github-copilot → anthropic/opencode → opencode |
| multimodal-looker | `gemini-3-flash` | google/github-copilot/opencode → openai/github-copilot/opencode → zai-coding-plan → kimi-for-coding → opencode → anthropic/github-copilot/opencode → opencode |
| Prometheus (Planner) | `claude-opus-4-6` | anthropic/github-copilot/opencode → kimi-for-coding → opencode → openai/github-copilot/opencode → google/github-copilot/opencode |
| Metis (Plan Consultant) | `claude-opus-4-6` | anthropic/github-copilot/opencode → kimi-for-coding → opencode → openai/github-copilot/opencode → google/github-copilot/opencode |
| Momus (Plan Reviewer) | `gpt-5.2` | openai/github-copilot/opencode → anthropic/github-copilot/opencode → google/github-copilot/opencode |
| Atlas | `k2p5` | kimi-for-coding → opencode → anthropic/github-copilot/opencode → openai/github-copilot/opencode → google/github-copilot/opencode |
Category Provider Chains
Categories follow the same resolution logic:
| Category | Model (no prefix) | Provider Priority Chain |
|---|---|---|
| visual-engineering | gemini-3-pro | google/github-copilot/opencode → zai-coding-plan → anthropic/github-copilot/opencode → kimi-for-coding |
| ultrabrain | gpt-5.3-codex | openai/github-copilot/opencode → google/github-copilot/opencode → anthropic/github-copilot/opencode |
| deep | gpt-5.3-codex | openai/github-copilot/opencode → anthropic/github-copilot/opencode → google/github-copilot/opencode |
| artistry | gemini-3-pro | google/github-copilot/opencode → anthropic/github-copilot/opencode → openai/github-copilot/opencode |
| quick | claude-haiku-4-5 | anthropic/github-copilot/opencode → google/github-copilot/opencode → opencode |
| unspecified-low | claude-sonnet-4-6 | anthropic/github-copilot/opencode → openai/github-copilot/opencode → google/github-copilot/opencode |
| unspecified-high | claude-opus-4-6 | anthropic/github-copilot/opencode → openai/github-copilot/opencode → google/github-copilot/opencode |
| writing | k2p5 | kimi-for-coding → google/github-copilot/opencode → anthropic/github-copilot/opencode |
Checking Your Configuration
Use the doctor command to see how models resolve with your current configuration:
bunx oh-my-opencode doctor --verbose
The "Model Resolution" check shows:
- Each agent/category's model requirement
- Provider fallback chain
- User overrides (if configured)
- Effective resolution path
Manual Override
Override any agent or category model in oh-my-opencode.json:
{
"agents": {
"Sisyphus": {
"model": "anthropic/claude-sonnet-4-6"
},
"oracle": {
"model": "openai/o3"
}
},
"categories": {
"visual-engineering": {
"model": "anthropic/claude-opus-4-6"
}
}
}
When you specify a model override, it takes precedence (Step 1) and the provider fallback chain is skipped entirely.
Hooks
Disable specific built-in hooks via disabled_hooks in ~/.config/opencode/oh-my-opencode.json or .opencode/oh-my-opencode.json:
{
"disabled_hooks": ["comment-checker", "agent-usage-reminder"]
}
Available hooks: todo-continuation-enforcer, context-window-monitor, session-recovery, session-notification, comment-checker, grep-output-truncator, tool-output-truncator, directory-agents-injector, directory-readme-injector, empty-task-response-detector, think-mode, anthropic-context-window-limit-recovery, rules-injector, background-notification, auto-update-checker, startup-toast, keyword-detector, agent-usage-reminder, non-interactive-env, interactive-bash-session, compaction-context-injector, thinking-block-validator, claude-code-hooks, ralph-loop, preemptive-compaction, auto-slash-command, sisyphus-junior-notepad, no-sisyphus-gpt, start-work, runtime-fallback
Note on directory-agents-injector: This hook is automatically disabled when running on OpenCode 1.1.37+ because OpenCode now has native support for dynamically resolving AGENTS.md files from subdirectories (PR #10678). This prevents duplicate AGENTS.md injection. For older OpenCode versions, the hook remains active to provide the same functionality.
Note on no-sisyphus-gpt: Disabling this hook is STRONGLY discouraged. Sisyphus is NOT optimized for GPT models — running Sisyphus with GPT performs worse than vanilla Codex and wastes your money. This hook automatically switches to Hephaestus when a GPT model is detected, which is the correct agent for GPT. Only disable this if you fully understand the consequences.
Note on auto-update-checker and startup-toast: The startup-toast hook is a sub-feature of auto-update-checker. To disable only the startup toast notification while keeping update checking enabled, add "startup-toast" to disabled_hooks. To disable all update checking features (including the toast), add "auto-update-checker" to disabled_hooks.
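For example, to keep update checking enabled but suppress only the toast:

```jsonc
{
  // update checks still run; only the startup notification is silenced
  "disabled_hooks": ["startup-toast"]
}
```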
Hashline Edit
Oh My OpenCode replaces OpenCode's built-in Edit tool with a hash-anchored version that uses LINE#ID references (e.g. 5#VK) instead of bare line numbers. This prevents stale-line edits by validating the content hash before applying each change.
Enabled by default. Set hashline_edit: false to opt out and restore standard file editing.
{
"hashline_edit": false
}
| Option | Default | Description |
|---|---|---|
| hashline_edit | true | Enable hash-anchored Edit tool and companion hooks. When false, falls back to standard editing without hash validation. |
When enabled, two companion hooks are also active:
hashline-read-enhancer— AppendsLINE#ID:contentannotations toReadoutput so agents always have fresh anchors.hashline-edit-diff-enhancer— Shows a unified diff inEdit/Writeoutput for immediate change visibility.
To disable only the hooks while keeping the hash-anchored Edit tool:
{
"disabled_hooks": ["hashline-read-enhancer", "hashline-edit-diff-enhancer"]
}
## Disabled Commands
Disable specific built-in commands via `disabled_commands` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:
```json
{
  "disabled_commands": ["init-deep", "start-work"]
}
```
Available commands: `init-deep`, `start-work`
Comment Checker
Configure comment-checker hook behavior. The comment checker warns when excessive comments are added to code.
{
"comment_checker": {
"custom_prompt": "Your custom warning message. Use {{comments}} placeholder for detected comments XML."
}
}
| Option | Default | Description |
|---|---|---|
| custom_prompt | - | Custom warning message to replace the default. Use the {{comments}} placeholder. |
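A concrete override might look like this; the message text here is purely illustrative, and {{comments}} is substituted with the detected-comments XML:

```jsonc
{
  "comment_checker": {
    // illustrative prompt text; write whatever suits your team
    "custom_prompt": "Review these comments and delete any that restate the code:\n{{comments}}"
  }
}
```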
Notification
Configure notification behavior for background task completion.
{
"notification": {
"force_enable": true
}
}
| Option | Default | Description |
|---|---|---|
| force_enable | false | Force enable session-notification even if external notification plugins are detected. |
Sisyphus Tasks
Configure Sisyphus Tasks system for advanced task management.
{
"sisyphus": {
"tasks": {
"enabled": false,
"storage_path": ".sisyphus/tasks",
"claude_code_compat": false
}
}
}
Tasks Configuration
| Option | Default | Description |
|---|---|---|
| enabled | false | Enable the Sisyphus Tasks system |
| storage_path | .sisyphus/tasks | Storage path for tasks (relative to project root) |
| claude_code_compat | false | Enable Claude Code path compatibility mode |
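The smallest opt-in keeps the default storage path and compatibility settings:

```jsonc
{
  "sisyphus": {
    // storage_path and claude_code_compat fall back to their defaults
    "tasks": { "enabled": true }
  }
}
```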
MCPs
The Exa, Context7, and grep.app MCPs are enabled by default.
- websearch: Real-time web search powered by Exa AI - searches the web and returns relevant content
- context7: Fetches up-to-date official documentation for libraries
- grep_app: Ultra-fast code search across millions of public GitHub repositories via grep.app
Don't want them? Disable via disabled_mcps in ~/.config/opencode/oh-my-opencode.json or .opencode/oh-my-opencode.json:
{
"disabled_mcps": ["websearch", "context7", "grep_app"]
}
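The list is granular, so you can keep some MCPs and drop others. For example, to disable only grep.app while keeping web search and documentation lookup:

```jsonc
{
  // websearch and context7 remain enabled
  "disabled_mcps": ["grep_app"]
}
```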
LSP
OpenCode provides LSP tools for analysis.
Oh My OpenCode adds refactoring tools (rename, code actions).
All OpenCode LSP configs and custom settings (from opencode.jsonc / opencode.json) are supported, plus additional Oh My OpenCode-specific settings.
For config discovery, .jsonc takes precedence over .json when both exist (applies to both opencode.* and oh-my-opencode.*).
Add LSP servers via the lsp option in ~/.config/opencode/oh-my-opencode.jsonc / ~/.config/opencode/oh-my-opencode.json or .opencode/oh-my-opencode.jsonc / .opencode/oh-my-opencode.json:
{
"lsp": {
"typescript-language-server": {
"command": ["typescript-language-server", "--stdio"],
"extensions": [".ts", ".tsx"],
"priority": 10
},
"pylsp": {
"disabled": true
}
}
}
Each server supports: command, extensions, priority, env, initialization, disabled.
| Option | Type | Default | Description |
|---|---|---|---|
| command | array | - | Command to start the LSP server (executable + args) |
| extensions | array | - | File extensions this server handles (e.g., [".ts", ".tsx"]) |
| priority | number | - | Server priority when multiple servers match a file |
| env | object | - | Environment variables for the LSP server (key-value pairs) |
| initialization | object | - | Custom initialization options passed to the LSP server |
| disabled | boolean | false | Whether to disable this LSP server |
Example with advanced options:
{
"lsp": {
"typescript-language-server": {
"command": ["typescript-language-server", "--stdio"],
"extensions": [".ts", ".tsx"],
"priority": 10,
"env": {
"NODE_OPTIONS": "--max-old-space-size=4096"
},
"initialization": {
"preferences": {
"includeInlayParameterNameHints": "all",
"includeInlayFunctionParameterTypeHints": true
}
}
}
}
}
Experimental
Opt-in experimental features that may change or be removed in future versions. Use with caution.
{
"experimental": {
"truncate_all_tool_outputs": true,
"aggressive_truncation": true,
"auto_resume": true,
"disable_omo_env": false,
"dynamic_context_pruning": {
"enabled": false,
"notification": "detailed",
"turn_protection": {
"enabled": true,
"turns": 3
},
"protected_tools": ["task", "todowrite", "lsp_rename"],
"strategies": {
"deduplication": {
"enabled": true
},
"supersede_writes": {
"enabled": true,
"aggressive": false
},
"purge_errors": {
"enabled": true,
"turns": 5
}
}
}
}
}
| Option | Default | Description |
|---|---|---|
| truncate_all_tool_outputs | false | Truncates ALL tool outputs instead of just the whitelisted tools (Grep, Glob, LSP, AST-grep). The tool output truncator itself is enabled by default; disable it via disabled_hooks. |
| aggressive_truncation | false | When the token limit is exceeded, aggressively truncates tool outputs to fit within limits; more aggressive than the default truncation behavior. Falls back to summarize/revert if truncation is insufficient. |
| auto_resume | false | Automatically resumes the session after successful recovery from thinking-block errors or thinking-disabled violations. Extracts the last user message and continues. |
| disable_omo_env | false | When true, disables generation of the auto-injected <omo-env> block (date, time, timezone, locale), which improves cache hit rate and reduces API cost. When unset or false, current behavior is preserved. |
| dynamic_context_pruning | See below | Automatically manages context window usage. See Dynamic Context Pruning below. |
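For instance, a config that opts into only the env-block suppression described above, leaving every other experimental feature off:

```jsonc
{
  "experimental": {
    // omitted experimental keys keep their defaults (false)
    "disable_omo_env": true
  }
}
```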
Dynamic Context Pruning
Dynamic context pruning manages the context window automatically by intelligently pruning old tool outputs. This helps maintain performance in long sessions.
{
"experimental": {
"dynamic_context_pruning": {
"enabled": false,
"notification": "detailed",
"turn_protection": {
"enabled": true,
"turns": 3
},
"protected_tools": ["task", "todowrite", "todoread", "lsp_rename", "session_read", "session_write", "session_search"],
"strategies": {
"deduplication": {
"enabled": true
},
"supersede_writes": {
"enabled": true,
"aggressive": false
},
"purge_errors": {
"enabled": true,
"turns": 5
}
}
}
}
}
| Option | Default | Description |
|---|---|---|
| enabled | false | Enable dynamic context pruning |
| notification | detailed | Notification level: off, minimal, or detailed |
| turn_protection | See below | Turn protection settings that prevent pruning of recent tool outputs |
Turn Protection
| Option | Default | Description |
|---|---|---|
| enabled | true | Enable turn protection |
| turns | 3 | Number of recent turns to protect from pruning (1-10) |
Protected Tools
Tools that should never be pruned (default):
["task", "todowrite", "todoread", "lsp_rename", "session_read", "session_write", "session_search"]
Pruning Strategies
| Strategy | Option | Default | Description |
|---|---|---|---|
| deduplication | enabled | true | Remove duplicate tool calls (same tool + same args) |
| supersede_writes | enabled | true | Prune write inputs when the file is subsequently read |
| supersede_writes | aggressive | false | Aggressive mode: prune any write if ANY subsequent read exists |
| purge_errors | enabled | true | Prune errored tool inputs after N turns |
| purge_errors | turns | 5 | Number of turns before pruning errors (1-20) |
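A minimal opt-in keeps every strategy at the defaults listed above:

```jsonc
{
  "experimental": {
    // all strategy, notification, and protection keys fall back to their defaults
    "dynamic_context_pruning": { "enabled": true }
  }
}
```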
Warning: These features are experimental and may cause unexpected behavior. Enable only if you understand the implications.
Environment Variables
| Variable | Description |
|---|---|
| OPENCODE_CONFIG_DIR | Override the OpenCode configuration directory. Useful for profile isolation with tools like OCX ghost mode. |
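For example, to run OpenCode against an isolated profile (the opencode-work directory name here is just an illustration):

```shell
# Point OpenCode at an isolated profile directory (example path)
export OPENCODE_CONFIG_DIR="$HOME/.config/opencode-work"
echo "$OPENCODE_CONFIG_DIR"
```

OpenCode launched from that shell should then read its configuration from the overridden directory instead of the default one.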