docs: comprehensive update for v3.14.0 features

- Document object-style fallback_models with per-model settings
- Add package rename compatibility layer docs (oh-my-opencode → oh-my-openagent)
- Update agent-model-matching with Hephaestus gpt-5.4 default
- Document MiniMax M2.5 → M2.7 upgrade across agents
- Add agent priority/order deterministic Tab cycling docs
- Document file:// URI support for agent prompt field
- Add doctor legacy package name warning docs
- Update CLI reference with new doctor checks
- Document model settings compatibility resolver
This commit is contained in:
YeonGyu-Kim
2026-03-27 12:20:40 +09:00
parent 1c9f4148d0
commit a2c7fed9d4
6 changed files with 296 additions and 120 deletions

View File

@@ -111,6 +111,8 @@ Fetch the installation guide and follow it:
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
```
**Note**: The published package and binary name is still `oh-my-opencode`. Inside `opencode.json`, the compatibility layer now prefers the plugin entry `oh-my-openagent`, while legacy `oh-my-opencode` entries still load with a warning. Plugin config files still commonly use `oh-my-opencode.json` or `oh-my-opencode.jsonc`; both legacy and renamed basenames are recognized during the transition.
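As a minimal sketch of what that looks like during the transition (hypothetical `opencode.json` fragment; only the `plugin` array is shown):
```jsonc
{
  // preferred plugin entry during the rename transition
  "plugin": ["oh-my-openagent"]
  // a legacy "oh-my-opencode" entry would still load, with a warning
}
```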
---
## Skip This README
@@ -273,11 +275,11 @@ To remove oh-my-opencode:
1. **Remove the plugin from your OpenCode config**
Edit `~/.config/opencode/opencode.json` (or `opencode.jsonc`) and remove `"oh-my-opencode"` from the `plugin` array:
Edit `~/.config/opencode/opencode.json` (or `opencode.jsonc`) and remove either `"oh-my-openagent"` or the legacy `"oh-my-opencode"` entry from the `plugin` array:
```bash
# Using jq
jq '.plugin = [.plugin[] | select(. != "oh-my-opencode")]' \
jq '.plugin = [.plugin[] | select(. != "oh-my-openagent" and . != "oh-my-opencode")]' \
~/.config/opencode/opencode.json > /tmp/oc.json && \
mv /tmp/oc.json ~/.config/opencode/opencode.json
```
@@ -285,11 +287,13 @@ To remove oh-my-opencode:
2. **Remove configuration files (optional)**
```bash
# Remove user config
rm -f ~/.config/opencode/oh-my-opencode.json ~/.config/opencode/oh-my-opencode.jsonc
# Remove plugin config files recognized during the compatibility window
rm -f ~/.config/opencode/oh-my-openagent.jsonc ~/.config/opencode/oh-my-openagent.json \
~/.config/opencode/oh-my-opencode.jsonc ~/.config/opencode/oh-my-opencode.json
# Remove project config (if exists)
rm -f .opencode/oh-my-opencode.json .opencode/oh-my-opencode.jsonc
rm -f .opencode/oh-my-openagent.jsonc .opencode/oh-my-openagent.json \
.opencode/oh-my-opencode.jsonc .opencode/oh-my-opencode.json
```
3. **Verify removal**
@@ -315,6 +319,10 @@ See full [Features Documentation](docs/reference/features.md).
- **Built-in MCPs**: websearch (Exa), context7 (docs), grep_app (GitHub search)
- **Session Tools**: List, read, search, and analyze session history
- **Productivity Features**: Ralph Loop, Todo Enforcer, Comment Checker, Think Mode, and more
- **Doctor Command**: Built-in diagnostics (`bunx oh-my-opencode doctor`) verify plugin registration, config, models, and environment
- **Model Fallbacks**: `fallback_models` can mix plain model strings with per-fallback object settings in the same array
- **File Prompts**: Load prompts from files with `file://` support in agent configurations
- **Session Recovery**: Automatic recovery from session errors, context window limits, and API failures
- **Model Setup**: Agent-model matching is built into the [Installation Guide](docs/guide/installation.md#step-5-understand-your-model-setup)
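The `file://` prompt support listed above can be sketched as a plugin config fragment (agent name and path are illustrative; the `prompt` field is the same one used for inline prompt overrides):
```jsonc
{
  "agents": {
    "prometheus": {
      // load the agent's prompt from a local file instead of inlining it
      "prompt": "file://./prompts/prometheus.md"
    }
  }
}
```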
## Configuration
@@ -324,7 +332,7 @@ Opinionated defaults, adjustable if you insist.
See [Configuration Documentation](docs/reference/configuration.md).
**Quick Overview:**
- **Config Locations**: `.opencode/oh-my-opencode.jsonc` or `.opencode/oh-my-opencode.json` (project), `~/.config/opencode/oh-my-opencode.jsonc` or `~/.config/opencode/oh-my-opencode.json` (user)
- **Config Locations**: The compatibility layer recognizes both `oh-my-openagent.json[c]` and legacy `oh-my-opencode.json[c]` plugin config files. Existing installs still commonly use the legacy basename.
- **JSONC Support**: Comments and trailing commas supported
- **Agents**: Override models, temperatures, prompts, and permissions for any agent
- **Built-in Skills**: `playwright` (browser automation), `git-master` (atomic commits)

View File

@@ -92,8 +92,8 @@ These agents do grep, search, and retrieval. They intentionally use the fastest,
| Agent | Role | Fallback Chain | Notes |
| --------------------- | ------------------ | ---------------------------------------------- | ----------------------------------------------------- |
| **Explore** | Fast codebase grep | Grok Code Fast → opencode-go/minimax-m2.7-highspeed → MiniMax M2.7 → Haiku → GPT-5-Nano | Speed is everything. Fire 10 in parallel. |
| **Librarian** | Docs/code search | opencode-go/minimax-m2.7 → MiniMax M2.7-highspeed → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. |
| **Explore** | Fast codebase grep | Grok Code Fast → opencode-go/minimax-m2.7 → opencode/minimax-m2.5 → Haiku → GPT-5-Nano | Speed is everything. Fire 10 in parallel. |
| **Librarian** | Docs/code search | opencode-go/minimax-m2.7 → opencode/minimax-m2.5 → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. |
| **Multimodal Looker** | Vision/screenshots | GPT-5.4 → opencode-go/kimi-k2.5 → GLM-4.6v → GPT-5-Nano | Uses the first available multimodal-capable fallback. |
| **Sisyphus-Junior** | Category executor | Claude Sonnet → opencode-go/kimi-k2.5 → GPT-5.4 → MiniMax M2.7 → Big Pickle | Handles delegated category tasks. Sonnet-tier default. |
@@ -131,8 +131,8 @@ Principle-driven, explicit reasoning, deep technical capability. Best for agents
| **Gemini 3.1 Pro** | Excels at visual/frontend tasks. Different reasoning style. Default for `visual-engineering` and `artistry`. |
| **Gemini 3 Flash** | Fast. Good for doc search and light tasks. |
| **Grok Code Fast 1** | Blazing fast code grep. Default for Explore agent. |
| **MiniMax M2.7** | Fast and smart. Good for utility tasks and search/retrieval. Upgraded from M2.5 with better reasoning. |
| **MiniMax M2.7 Highspeed** | Ultra-fast variant. Optimized for latency-sensitive tasks like codebase grep. |
| **MiniMax M2.7** | Fast and smart. Used where provider catalogs expose the newer MiniMax line, especially through OpenCode Go. |
| **MiniMax M2.7 Highspeed** | Ultra-fast variant. You may still see it in older docs, logs, or provider catalogs during the transition. |
### OpenCode Go
@@ -144,11 +144,11 @@ A premium subscription tier ($10/month) that provides reliable access to Chinese
| ------------------------ | --------------------------------------------------------------------- |
| **opencode-go/kimi-k2.5** | Vision-capable, Claude-like reasoning. Used by Sisyphus, Atlas, Sisyphus-Junior, Multimodal Looker. |
| **opencode-go/glm-5** | Text-only orchestration model. Used by Oracle, Prometheus, Metis, Momus. |
| **opencode-go/minimax-m2.7** | Ultra-cheap, fast responses. Used by Librarian, Explore, Atlas, Sisyphus-Junior for utility work. |
| **opencode-go/minimax-m2.7** | Ultra-cheap, fast responses. Used by Librarian, Explore, Atlas, and Sisyphus-Junior for utility work. |
**When It Gets Used:**
OpenCode Go models appear in fallback chains as intermediate options. They bridge the gap between premium Claude access and free-tier alternatives. The system tries OpenCode Go models before falling back to free tiers (MiniMax M2.7-highspeed, Big Pickle) or GPT alternatives.
OpenCode Go models appear in fallback chains as intermediate options. They bridge the gap between premium Claude access and free-tier alternatives. The system tries OpenCode Go models before falling back to cheaper provider-specific entries like MiniMax or Big Pickle, then GPT alternatives where applicable.
**Go-Only Scenarios:**
@@ -156,7 +156,7 @@ Some model identifiers like `k2p5` (paid Kimi K2.5) and `glm-5` may only be avai
### About Free-Tier Fallbacks
You may see model names like `kimi-k2.5-free`, `minimax-m2.7-highspeed`, or `big-pickle` (GLM 4.6) in the source code or logs. These are free-tier or speed-optimized versions of the same model families. They exist as lower-priority entries in fallback chains.
You may see model names like `kimi-k2.5-free`, `minimax-m2.7`, `minimax-m2.5`, or `big-pickle` (GLM 4.6) in the source code or logs. These are provider-specific or speed-optimized entries in fallback chains. The exact MiniMax model can differ by provider catalog.
You don't need to configure them. The system includes them so it degrades gracefully when you don't have every paid subscription. If you have the paid version, the paid version is always preferred.
@@ -187,7 +187,7 @@ See the [Orchestration System Guide](./orchestration.md) for how agents dispatch
```jsonc
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-openagent.schema.json",
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json",
"agents": {
// Main orchestrator: Claude Opus or Kimi K2.5 work best
@@ -255,7 +255,15 @@ Run `opencode models` to see available models, `opencode auth login` to authenti
### How Model Resolution Works
Each agent has a fallback chain. The system tries models in priority order until it finds one available through your connected providers. You don't need to configure providers per model — just authenticate (`opencode auth login`) and the system figures out which models are available and where.
Each agent has a fallback chain. The system tries models in priority order until it finds one available through your connected providers. You don't need to configure providers per model. Just authenticate (`opencode auth login`) and the system figures out which models are available and where.
Core-agent Tab cycling is now deterministic. The fixed priority order is Sisyphus, Hephaestus, Prometheus, and Atlas, followed by the remaining agents.
Your explicit configuration always wins. If you set a specific model for an agent, that choice takes precedence even when resolution data is cold.
Variant and `reasoningEffort` overrides are normalized to model-supported values, so cross-provider overrides degrade gracefully instead of failing hard.
To see which models your agents will actually use, run `bunx oh-my-opencode doctor`. This shows effective model resolution based on your current authentication and config.
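A hypothetical per-agent override illustrating the rules above (agent name, model ID, and exact field spellings are assumptions based on examples elsewhere in these docs):
```jsonc
{
  "agents": {
    "librarian": {
      // an explicit choice always wins over fallback-chain resolution
      "model": "opencode-go/minimax-m2.7",
      // normalized to the nearest model-supported value across providers
      "reasoningEffort": "low"
    }
  }
}
```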
```
Agent Request → User Override (if configured) → Fallback Chain → System Default

View File

@@ -5,7 +5,7 @@
Paste this into your llm agent session:
```
Install and configure oh-my-openagent by following the instructions here:
Install and configure oh-my-opencode by following the instructions here:
https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
```
@@ -14,8 +14,8 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/do
Run the interactive installer:
```bash
bunx oh-my-openagent install # recommended
npx oh-my-openagent install # alternative
bunx oh-my-opencode install # recommended
npx oh-my-opencode install # alternative
```
> **Note**: The CLI ships with standalone binaries for all major platforms. No runtime (Bun/Node.js) is required for CLI execution after installation.
@@ -26,6 +26,8 @@ Follow the prompts to configure your Claude, ChatGPT, and Gemini subscriptions.
After you install it, you can read this [overview guide](./overview.md) to understand more.
The published package and local binary are still `oh-my-opencode`. Inside `opencode.json`, the compatibility layer now prefers the plugin entry `oh-my-openagent`, while legacy `oh-my-opencode` entries still load with a warning. Plugin config loading recognizes both `oh-my-openagent.json[c]` and `oh-my-opencode.json[c]` during the transition.
## For LLM Agents
> **IMPORTANT: Use `curl` to fetch this file, NOT WebFetch.** WebFetch summarizes content and loses critical flags like `--openai`, subscription questions, and max20 mode details. Always use:
@@ -34,7 +36,7 @@ After you install it, you can read this [overview guide](./overview.md) to under
> curl -fsSL https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
> ```
If you're an LLM Agent helping set up oh-my-openagent, Welcome! You'll now help your user to install the best agent harness, OhMyOpenAgent. Tell user this with greetings, with test 'oMoMoMoMo...'
If you're an LLM Agent helping set up oh-my-opencode, welcome. You'll help your user install the harness and verify the rename-compatibility setup cleanly. Greet the user with the text 'oMoMoMoMo...'
follow these steps:
@@ -96,19 +98,19 @@ Spawn a subagent to handle installation and report back - to save context.
Based on user's answers, run the CLI installer with appropriate flags:
```bash
bunx oh-my-openagent install --no-tui --claude=<yes|no|max20> --gemini=<yes|no> --copilot=<yes|no> [--openai=<yes|no>] [--opencode-go=<yes|no>] [--opencode-zen=<yes|no>] [--zai-coding-plan=<yes|no>]
bunx oh-my-opencode install --no-tui --claude=<yes|no|max20> --gemini=<yes|no> --copilot=<yes|no> [--openai=<yes|no>] [--opencode-go=<yes|no>] [--opencode-zen=<yes|no>] [--zai-coding-plan=<yes|no>]
```
**Examples:**
- User has all native subscriptions: `bunx oh-my-openagent install --no-tui --claude=max20 --openai=yes --gemini=yes --copilot=no`
- User has only Claude: `bunx oh-my-openagent install --no-tui --claude=yes --gemini=no --copilot=no`
- User has Claude + OpenAI: `bunx oh-my-openagent install --no-tui --claude=yes --openai=yes --gemini=no --copilot=no`
- User has only GitHub Copilot: `bunx oh-my-openagent install --no-tui --claude=no --gemini=no --copilot=yes`
- User has Z.ai for Librarian: `bunx oh-my-openagent install --no-tui --claude=yes --gemini=no --copilot=no --zai-coding-plan=yes`
- User has only OpenCode Zen: `bunx oh-my-openagent install --no-tui --claude=no --gemini=no --copilot=no --opencode-zen=yes`
- User has OpenCode Go only: `bunx oh-my-openagent install --no-tui --claude=no --openai=no --gemini=no --copilot=no --opencode-go=yes`
- User has no subscriptions: `bunx oh-my-openagent install --no-tui --claude=no --gemini=no --copilot=no`
- User has all native subscriptions: `bunx oh-my-opencode install --no-tui --claude=max20 --openai=yes --gemini=yes --copilot=no`
- User has only Claude: `bunx oh-my-opencode install --no-tui --claude=yes --gemini=no --copilot=no`
- User has Claude + OpenAI: `bunx oh-my-opencode install --no-tui --claude=yes --openai=yes --gemini=no --copilot=no`
- User has only GitHub Copilot: `bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=yes`
- User has Z.ai for Librarian: `bunx oh-my-opencode install --no-tui --claude=yes --gemini=no --copilot=no --zai-coding-plan=yes`
- User has only OpenCode Zen: `bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=no --opencode-zen=yes`
- User has OpenCode Go only: `bunx oh-my-opencode install --no-tui --claude=no --openai=no --gemini=no --copilot=no --opencode-go=yes`
- User has no subscriptions: `bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=no`
The CLI will:
@@ -120,8 +122,17 @@ The CLI will:
```bash
opencode --version # Should be 1.0.150 or higher
cat ~/.config/opencode/opencode.json # Should contain "oh-my-openagent" in plugin array
cat ~/.config/opencode/opencode.json # Should contain "oh-my-openagent" in plugin array, or the legacy "oh-my-opencode" entry while you are still migrating
```
#### Run Doctor Verification
After installation, verify everything is working correctly:
```bash
bunx oh-my-opencode doctor
```
This checks your environment, authentication status, and shows which models each agent will actually use.
### Step 4: Configure Authentication
@@ -154,9 +165,9 @@ First, add the opencode-antigravity-auth plugin:
You'll also need full model settings in `opencode.json`.
Read the [opencode-antigravity-auth documentation](https://github.com/NoeFabris/opencode-antigravity-auth), copy the full model configuration from the README, and merge carefully to avoid breaking the user's existing setup. The plugin now uses a **variant system** — models like `antigravity-gemini-3-pro` support `low`/`high` variants instead of separate `-low`/`-high` model entries.
##### oh-my-openagent Agent Model Override
##### Plugin config model override
The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-openagent.json` (or `.opencode/oh-my-openagent.json`):
The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in your plugin config file. Existing installs still commonly use `oh-my-opencode.json` or `.opencode/oh-my-opencode.json`, while the compatibility layer also recognizes `oh-my-openagent.json[c]`.
```json
{
@@ -201,7 +212,7 @@ GitHub Copilot is supported as a **fallback provider** when native providers are
##### Model Mappings
When GitHub Copilot is the best available provider, oh-my-openagent uses these model assignments:
When GitHub Copilot is the best available provider, the compatibility layer resolves these assignments:
| Agent | Model |
| ------------- | --------------------------------- |
@@ -227,23 +238,22 @@ If Z.ai is your main provider, the most important fallbacks are:
#### OpenCode Zen
OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gpt-5.3-codex`, `opencode/gpt-5-nano`, `opencode/glm-5`, `opencode/big-pickle`, and `opencode/minimax-m2.7-highspeed`.
OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gpt-5.3-codex`, `opencode/gpt-5-nano`, `opencode/glm-5`, `opencode/big-pickle`, and `opencode/minimax-m2.5`.
When OpenCode Zen is the best available provider (no native or Copilot), these models are used:
When OpenCode Zen is the best available provider, these are the most relevant source-backed examples:
| Agent | Model |
| ------------- | ---------------------------------------------------- |
| **Sisyphus** | `opencode/claude-opus-4-6` |
| **Oracle** | `opencode/gpt-5.4` |
| **Explore** | `opencode/gpt-5-nano` |
| **Librarian** | `opencode/minimax-m2.7-highspeed` / `opencode/big-pickle` |
| **Explore** | `opencode/claude-haiku-4-5` |
##### Setup
Run the installer and select "Yes" for OpenCode Zen:
```bash
bunx oh-my-openagent install
bunx oh-my-opencode install
# Select your subscriptions (Claude, ChatGPT, Gemini)
# When prompted about an OpenCode Zen subscription → Select "Yes"
```
@@ -251,7 +261,7 @@ bunx oh-my-openagent install
Or use non-interactive mode:
```bash
bunx oh-my-openagent install --no-tui --claude=no --openai=no --gemini=no --copilot=yes
bunx oh-my-opencode install --no-tui --claude=no --openai=no --gemini=no --copilot=no --opencode-zen=yes
```
Then authenticate with GitHub:
@@ -263,7 +273,7 @@ opencode auth login
### Step 5: Understand Your Model Setup
You've just configured oh-my-openagent. Here's what got set up and why.
You've just configured oh-my-opencode. Here's what got set up and why.
#### Model Families: What You're Working With
@@ -296,8 +306,8 @@ Not all models behave the same way. Understanding which models are "similar" hel
| --------------------- | -------------------------------- | ----------------------------------------------------------- |
| **Gemini 3.1 Pro** | google, github-copilot, opencode | Excels at visual/frontend tasks. Different reasoning style. |
| **Gemini 3 Flash** | google, github-copilot, opencode | Fast, good for doc search and light tasks. |
| **MiniMax M2.7** | venice, opencode-go | Fast and smart. Good for utility tasks. Upgraded from M2.5. |
| **MiniMax M2.7 Highspeed** | opencode | Ultra-fast MiniMax variant. Optimized for latency. |
| **MiniMax M2.7** | venice, opencode-go | Fast and smart. Good for utility tasks where the provider catalog exposes M2.7. |
| **MiniMax M2.5** | opencode | Legacy OpenCode catalog entry still used in some fallback chains for compatibility. |
**Speed-Focused Models**:
@@ -305,7 +315,7 @@ Not all models behave the same way. Understanding which models are "similar" hel
| ----------------------- | ---------------------- | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| **Grok Code Fast 1** | github-copilot, venice | Very fast | Optimized for code grep/search. Default for Explore. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast | Good balance of speed and intelligence. |
| **MiniMax M2.7 Highspeed** | opencode | Very fast | Ultra-fast MiniMax variant. Smart for its speed class. |
| **MiniMax M2.5** | opencode | Very fast | Legacy OpenCode catalog entry that still appears in some utility fallback chains. |
| **GPT-5.3-codex-spark** | openai | Extremely fast | Blazing fast but compacts so aggressively that oh-my-openagent's context management doesn't work well with it. Not recommended for omo agents. |
#### What Each Agent Does and Which Model It Got
@@ -316,7 +326,7 @@ Based on your subscriptions, here's how the agents were configured:
| Agent | Role | Default Chain | What It Does |
| ------------ | ---------------- | ----------------------------------------------- | ---------------------------------------------------------------------------------------- |
| **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. **Never use GPT — no GPT prompt exists.** |
| **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GPT-5.4 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. Claude-family models are still preferred, but GPT-5.4 now has a dedicated prompt path. |
| **Metis** | Plan review | Opus (max) → Kimi K2.5 → GPT-5.4 → Gemini 3.1 Pro | Reviews Prometheus plans for gaps. |
**Dual-Prompt Agents** (auto-switch between Claude and GPT prompts):
@@ -328,7 +338,7 @@ Priority: **Claude > GPT > Claude-like models**
| Agent | Role | Default Chain | GPT Prompt? |
| -------------- | ----------------- | ---------------------------------------------------------- | ---------------------------------------------------------------- |
| **Prometheus** | Strategic planner | Opus (max) → **GPT-5.4 (high)** → Kimi K2.5 → Gemini 3.1 Pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
| **Atlas** | Todo orchestrator | **Kimi K2.5** → Sonnet → GPT-5.4 | Yes GPT-optimized todo management |
| **Atlas** | Todo orchestrator | **Claude Sonnet 4.6** → Kimi K2.5 → GPT-5.4 | Yes - GPT-optimized todo management |
**GPT-Native Agents** (built for GPT, don't override to Claude):
@@ -344,9 +354,9 @@ These agents do search, grep, and retrieval. They intentionally use fast, cheap
| Agent | Role | Default Chain | Design Rationale |
| --------------------- | ------------------ | ---------------------------------------------------------------------- | -------------------------------------------------------------- |
| **Explore** | Fast codebase grep | Grok Code Fast → MiniMax M2.7-highspeed → MiniMax M2.7 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
| **Librarian** | Docs/code search | MiniMax M2.7 → MiniMax M2.7-highspeed → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. MiniMax is fast. |
| **Multimodal Looker** | Vision/screenshots | Kimi K2.5 → Kimi Free → Gemini Flash → GPT-5.4 → GLM-4.6v | Kimi excels at multimodal understanding. |
| **Explore** | Fast codebase grep | Grok Code Fast → OpenCode Go MiniMax M2.7 → OpenCode MiniMax M2.5 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
| **Librarian** | Docs/code search | OpenCode Go MiniMax M2.7 → OpenCode MiniMax M2.5 → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. MiniMax is fast where the provider catalog supports it. |
| **Multimodal Looker** | Vision/screenshots | GPT-5.4 → Kimi K2.5 → GLM-4.6v → GPT-5-Nano | GPT-5.4 now leads the default vision path when available. |
#### Why Different Models Need Different Prompts
@@ -365,7 +375,7 @@ This is why Prometheus and Atlas ship separate prompts per model family — they
#### Custom Model Configuration
If the user wants to override which model an agent uses, you can customize in `oh-my-openagent.json`:
If the user wants to override which model an agent uses, you can customize in your plugin config file. Existing installs still commonly use `oh-my-opencode.json`, while the compatibility layer also recognizes `oh-my-openagent.json[c]`.
```jsonc
{
@@ -400,7 +410,7 @@ GPT (5.3-codex, 5.2) > Claude Opus (decent fallback) > Gemini (acceptable)
**Dangerous** (no prompt support):
- Sisyphus → GPT: **No GPT prompt. Will degrade significantly.**
- Sisyphus → older GPT models: **Still a bad fit. GPT-5.4 is the only dedicated GPT prompt path.**
- Hephaestus → Claude: **Built for Codex. Claude can't replicate this.**
- Explore → Opus: **Massive cost waste. Explore needs speed, not intelligence.**
- Librarian → Opus: **Same. Doc search doesn't need Opus-level reasoning.**
@@ -462,3 +472,7 @@ Tell the user of following:
4. Want your own agent catalog setup? I can read the [docs](docs/guide/agent-model-matching.md) and set it up for you after a quick interview!
That's it. The agent will figure out the rest and handle everything automatically.
#### Advanced Configuration
You can customize agent models and fallback chains in your config. The `fallback_models` field accepts either a single string or an array that mixes strings and per-model objects with settings like `variant` and `temperature`. See the [Configuration Reference](../reference/configuration.md) and example configs in `docs/examples/` for details.
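A sketch of the mixed-array form (the `variant` and `temperature` field names come from this section; the `model` key on the object entry is an assumption):
```jsonc
{
  "agents": {
    "oracle": {
      "fallback_models": [
        // plain string entry
        "opencode-go/glm-5",
        // object entry carrying per-fallback settings
        { "model": "opencode/gpt-5.4", "variant": "high", "temperature": 0.3 }
      ]
    }
  }
}
```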

View File

@@ -1,15 +1,15 @@
# CLI Reference
Complete reference for the `oh-my-openagent` command-line interface.
Complete reference for the published `oh-my-opencode` CLI. During the rename transition, OpenCode plugin registration now prefers `oh-my-openagent` inside `opencode.json`.
## Basic Usage
```bash
# Display help
bunx oh-my-openagent
bunx oh-my-opencode
# Or with npx
npx oh-my-openagent
npx oh-my-opencode
```
## Commands
@@ -27,20 +27,20 @@ npx oh-my-openagent
## install
Interactive installation tool for initial Oh-My-OpenAgent setup. Provides a TUI based on `@clack/prompts`.
Interactive installation tool for initial Oh My OpenCode setup. Provides a TUI based on `@clack/prompts`.
### Usage
```bash
bunx oh-my-openagent install
bunx oh-my-opencode install
```
### Installation Process
1. **Provider Selection**: Choose your AI provider (Claude, ChatGPT, or Gemini)
2. **API Key Input**: Enter the API key for your selected provider
3. **Configuration File Creation**: Generates `opencode.json` or `oh-my-openagent.json` files
4. **Plugin Registration**: Automatically registers the oh-my-openagent plugin in OpenCode settings
3. **Configuration File Creation**: Writes the plugin config file used by the current install path. Existing installs still commonly use `oh-my-opencode.json`, while renamed `oh-my-openagent.json[c]` files are also recognized.
4. **Plugin Registration**: Registers `oh-my-openagent` in OpenCode settings, or upgrades a legacy `oh-my-opencode` entry during the compatibility window
### Options
@@ -53,12 +53,18 @@ bunx oh-my-openagent install
## doctor
Diagnoses your environment to ensure Oh-My-OpenAgent is functioning correctly. Performs 17+ health checks.
Diagnoses your environment to ensure Oh My OpenCode is functioning correctly. Performs 17+ health checks covering installation, configuration, authentication, dependencies, and tools.
The doctor command detects common issues including:
- Legacy plugin entry references in `opencode.json` (warns when `oh-my-opencode` is still used instead of `oh-my-openagent`)
- Configuration file validity and JSONC parsing errors
- Model resolution and fallback chain verification
- API key validity for configured providers
- Missing or misconfigured MCP servers
### Usage
```bash
bunx oh-my-openagent doctor
bunx oh-my-opencode doctor
```
### Diagnostic Categories
@@ -83,7 +89,7 @@ bunx oh-my-openagent doctor
### Example Output
```
oh-my-openagent doctor
oh-my-opencode doctor
┌──────────────────────────────────────────────────┐
│ Oh-My-OpenAgent Doctor │
@@ -94,7 +100,8 @@ Installation
✓ Plugin registered in opencode.json
Configuration
✓ oh-my-openagent.json is valid
✓ oh-my-opencode.jsonc is valid
✓ Model resolution: all agents have valid fallback chains
⚠ categories.visual-engineering: using default model
Authentication
@@ -109,7 +116,6 @@ Dependencies
Summary: 10 passed, 1 warning, 1 failed
```
---
## run
@@ -119,7 +125,7 @@ Executes OpenCode sessions and monitors task completion.
### Usage
```bash
bunx oh-my-openagent run [prompt]
bunx oh-my-opencode run [prompt]
```
### Options
@@ -148,16 +154,16 @@ Manages OAuth 2.1 authentication for remote MCP servers.
```bash
# Login to an OAuth-protected MCP server
bunx oh-my-openagent mcp oauth login <server-name> --server-url https://api.example.com
bunx oh-my-opencode mcp oauth login <server-name> --server-url https://api.example.com
# Login with explicit client ID and scopes
bunx oh-my-openagent mcp oauth login my-api --server-url https://api.example.com --client-id my-client --scopes "read,write"
bunx oh-my-opencode mcp oauth login my-api --server-url https://api.example.com --client-id my-client --scopes "read,write"
# Remove stored OAuth tokens
bunx oh-my-openagent mcp oauth logout <server-name>
bunx oh-my-opencode mcp oauth logout <server-name>
# Check OAuth token status
bunx oh-my-openagent mcp oauth status [server-name]
bunx oh-my-opencode mcp oauth status [server-name]
```
### Options
@@ -178,8 +184,18 @@ Tokens are stored in `~/.config/opencode/mcp-oauth.json` with `0600` permissions
The CLI searches for configuration files in the following locations (in priority order):
1. **Project Level**: `.opencode/oh-my-openagent.json`
2. **User Level**: `~/.config/opencode/oh-my-openagent.json`
1. **Project Level**: `.opencode/oh-my-openagent.jsonc`, `.opencode/oh-my-openagent.json`, `.opencode/oh-my-opencode.jsonc`, or `.opencode/oh-my-opencode.json`
2. **User Level**: `~/.config/opencode/oh-my-openagent.jsonc`, `~/.config/opencode/oh-my-openagent.json`, `~/.config/opencode/oh-my-opencode.jsonc`, or `~/.config/opencode/oh-my-opencode.json`
**Naming Note**: The published package and binary are still `oh-my-opencode`. Inside `opencode.json`, the compatibility layer now prefers the plugin entry `oh-my-openagent`. Plugin config loading recognizes both `oh-my-openagent.*` and legacy `oh-my-opencode.*` basenames. If both basenames exist in the same directory, the legacy `oh-my-opencode.*` file currently wins.
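One plausible reading of that precedence, sketched as a hypothetical shell helper (per the note above, the legacy basename wins within a directory; how basename precedence interacts with the `.jsonc`-over-`.json` rule is an assumption here):

```shell
# Return the plugin config file that would win in a single directory.
# Legacy basename first (per the naming note), .jsonc before .json.
pick_config() {
  dir="$1"
  for base in oh-my-opencode oh-my-openagent; do
    for ext in jsonc json; do
      if [ -f "$dir/$base.$ext" ]; then
        printf '%s\n' "$dir/$base.$ext"
        return 0
      fi
    done
  done
  return 1
}
```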
### Filename Compatibility
Both `.jsonc` and `.json` extensions are supported. JSONC (JSON with Comments) is preferred as it allows:
- Comments (both `//` and `/* */` styles)
- Trailing commas in arrays and objects
If both `.jsonc` and `.json` exist in the same directory, the `.jsonc` file takes precedence.
### JSONC Support
@@ -219,31 +235,40 @@ bun install -g opencode@latest
```bash
# Reinstall plugin
bunx oh-my-openagent install
bunx oh-my-opencode install
```
### Doctor Check Failures
```bash
# Diagnose with detailed information
bunx oh-my-openagent doctor --verbose
bunx oh-my-opencode doctor --verbose
# Check specific category only
bunx oh-my-openagent doctor --category authentication
bunx oh-my-opencode doctor --category authentication
```
### "Using legacy package name" Warning
The doctor warns if it finds the legacy plugin entry `oh-my-opencode` in `opencode.json`. Update the plugin array to the canonical `oh-my-openagent` entry:
```bash
# Replace the legacy plugin entry in user config
jq '.plugin = (.plugin // [] | map(if . == "oh-my-opencode" then "oh-my-openagent" else . end))' \
~/.config/opencode/opencode.json > /tmp/opencode.json && mv /tmp/opencode.json ~/.config/opencode/opencode.json
```
---
## Non-Interactive Mode
Use the `--no-tui` option for CI/CD environments.
Use JSON output for CI or scripted diagnostics.
```bash
# Run doctor in CI environment
bunx oh-my-openagent doctor --no-tui --json
bunx oh-my-opencode doctor --json
# Save results to file
bunx oh-my-openagent doctor --json > doctor-report.json
bunx oh-my-opencode doctor --json > doctor-report.json
```
---


@@ -1,6 +1,6 @@
# Configuration Reference
Complete reference for `oh-my-openagent.jsonc` configuration. This document covers every available option with examples.
Complete reference for Oh My OpenCode plugin configuration. During the rename transition, the runtime recognizes both `oh-my-openagent.json[c]` and legacy `oh-my-opencode.json[c]` files.
---
@@ -42,27 +42,28 @@ Complete reference for `oh-my-openagent.jsonc` configuration. This document cove
### File Locations
Priority order (project overrides user):
User config is loaded first, then project config overrides it. In each directory, the compatibility layer recognizes both the renamed and legacy basenames.
1. `.opencode/oh-my-openagent.jsonc` / `.opencode/oh-my-openagent.json`
1. Project config: `.opencode/oh-my-openagent.json[c]` or `.opencode/oh-my-opencode.json[c]`
2. User config (`.jsonc` preferred over `.json`):
| Platform | Path |
| ----------- | ----------------------------------------- |
| macOS/Linux | `~/.config/opencode/oh-my-openagent.jsonc` |
| Windows | `%APPDATA%\opencode\oh-my-openagent.jsonc` |
| Platform | Path candidates |
| ----------- | --------------- |
| macOS/Linux | `~/.config/opencode/oh-my-openagent.json[c]`, `~/.config/opencode/oh-my-opencode.json[c]` |
| Windows | `%APPDATA%\opencode\oh-my-openagent.json[c]`, `%APPDATA%\opencode\oh-my-opencode.json[c]` |
**Rename compatibility:** OpenCode plugin registration now prefers `oh-my-openagent`, while legacy `oh-my-opencode` entries and config basenames still load during the transition. If both plugin config basenames exist in the same directory, the legacy `oh-my-opencode.*` file currently wins.
JSONC supports `// line comments`, `/* block comments */`, and trailing commas.
Enable schema autocomplete:
```json
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-openagent.schema.json"
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json"
}
```
Run `bunx oh-my-openagent install` for guided setup. Run `opencode models` to list available models.
Run `bunx oh-my-opencode install` for guided setup. Run `opencode models` to list available models.
### Quick Start Example
@@ -70,7 +71,7 @@ Here's a practical starting configuration:
```jsonc
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-openagent.schema.json",
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json",
"agents": {
// Main orchestrator: Claude Opus or Kimi K2.5 work best
@@ -93,19 +94,19 @@ Here's a practical starting configuration:
},
"categories": {
// quick trivial tasks
// quick - trivial tasks
"quick": { "model": "opencode/gpt-5-nano" },
// unspecified-low moderate tasks
// unspecified-low - moderate tasks
"unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },
// unspecified-high complex work
// unspecified-high - complex work
"unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
// writing docs/prose
// writing - docs/prose
"writing": { "model": "google/gemini-3-flash" },
// visual-engineering Gemini dominates visual tasks
// visual-engineering - Gemini dominates visual tasks
"visual-engineering": {
"model": "google/gemini-3.1-pro",
"variant": "high",
@@ -159,24 +160,24 @@ Disable agents entirely: `{ "disabled_agents": ["oracle", "multimodal-looker"] }
#### Agent Options
| Option | Type | Description |
| ----------------- | ------------- | ------------------------------------------------------ |
| `model` | string | Model override (`provider/model`) |
| `fallback_models` | string\|array | Fallback models on API errors |
| `temperature` | number | Sampling temperature |
| `top_p` | number | Top-p sampling |
| `prompt` | string | Replace system prompt |
| `prompt_append` | string | Append to system prompt |
| Option | Type | Description |
| ----------------- | -------------- | --------------------------------------------------------------- |
| `model` | string | Model override (`provider/model`) |
| `fallback_models` | string\|array | Fallback models on API errors. Arrays can mix plain strings and per-model objects |
| `temperature` | number | Sampling temperature |
| `top_p` | number | Top-p sampling |
| `prompt` | string | Replace system prompt. Supports `file://` URIs |
| `prompt_append` | string | Append to system prompt. Supports `file://` URIs |
| `tools` | array | Allowed tools list |
| `disable` | boolean | Disable this agent |
| `mode` | string | Agent mode |
| `color` | string | UI color |
| `permission` | object | Per-tool permissions (see below) |
| `category` | string | Inherit model from category |
| `variant` | string | Model variant: `max`, `high`, `medium`, `low`, `xhigh` |
| `variant` | string | Model variant: `max`, `high`, `medium`, `low`, `xhigh`. Normalized to supported values |
| `maxTokens` | number | Max response tokens |
| `thinking` | object | Anthropic extended thinking |
| `reasoningEffort` | string | OpenAI reasoning: `low`, `medium`, `high`, `xhigh` |
| `reasoningEffort` | string | OpenAI reasoning: `low`, `medium`, `high`, `xhigh`. Normalized to supported values |
| `textVerbosity` | string | Text verbosity: `low`, `medium`, `high` |
| `providerOptions` | object | Provider-specific options |
@@ -216,6 +217,58 @@ Control what tools an agent can use:
| `doom_loop` | `ask` / `allow` / `deny` |
| `external_directory` | `ask` / `allow` / `deny` |
#### Fallback Models with Per-Model Settings
`fallback_models` accepts either a single model string or an array. Array entries can be plain strings or objects with individual model settings:
```jsonc
{
"agents": {
"sisyphus": {
"model": "anthropic/claude-opus-4-6",
"fallback_models": [
// Simple string fallback
"openai/gpt-5.4",
// Object with per-model settings
{
"model": "google/gemini-3.1-pro",
"variant": "high",
"temperature": 0.2
},
{
"model": "anthropic/claude-sonnet-4-6",
"thinking": { "type": "enabled", "budgetTokens": 64000 }
}
]
}
}
}
```
Object entries support: `model`, `variant`, `reasoningEffort`, `temperature`, `top_p`, `maxTokens`, `thinking`.
#### File URIs for Prompts
Both `prompt` and `prompt_append` support loading content from files via `file://` URIs:
```jsonc
{
"agents": {
"sisyphus": {
"prompt_append": "file:///absolute/path/to/prompt.txt"
},
"oracle": {
"prompt": "file://./relative/to/project/prompt.md"
},
"explore": {
"prompt_append": "file://~/home/dir/prompt.txt"
}
}
}
```
Paths can be absolute (`file:///abs/path`), relative to project root (`file://./rel/path`), or home-relative (`file://~/home/path`).
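The three path forms can be resolved as in the following sketch — `resolvePromptPath` is illustrative, not the plugin's actual loader:

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Sketch of file:// prompt resolution for the three documented forms:
// absolute, project-relative, and home-relative.
function resolvePromptPath(uri: string, projectRoot: string): string {
  const rest = uri.slice("file://".length);
  if (rest.startsWith("~")) {
    // file://~/dir/prompt.txt -> relative to the home directory
    return path.join(os.homedir(), rest.slice(1));
  }
  if (rest.startsWith("./") || rest.startsWith("../")) {
    // file://./rel/path -> relative to the project root
    return path.resolve(projectRoot, rest);
  }
  // file:///abs/path -> already an absolute path
  return rest;
}
```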
### Categories
Domain-specific model delegation used by the `task()` tool. When Sisyphus delegates work, it picks a category, not a model name.
@@ -240,16 +293,16 @@ Domain-specific model delegation used by the `task()` tool. When Sisyphus delega
| Option | Type | Default | Description |
| ------------------- | ------------- | ------- | ------------------------------------------------------------------- |
| `model` | string | - | Model override |
| `fallback_models` | string\|array | - | Fallback models on API errors |
| `fallback_models` | string\|array | - | Fallback models on API errors. Arrays can mix plain strings and per-model objects |
| `temperature` | number | - | Sampling temperature |
| `top_p` | number | - | Top-p sampling |
| `maxTokens` | number | - | Max response tokens |
| `thinking` | object | - | Anthropic extended thinking |
| `reasoningEffort` | string | - | OpenAI reasoning effort |
| `reasoningEffort` | string | - | OpenAI reasoning effort. Unsupported values are normalized |
| `textVerbosity` | string | - | Text verbosity |
| `tools` | array | - | Allowed tools |
| `prompt_append` | string | - | Append to system prompt |
| `variant` | string | - | Model variant |
| `variant` | string | - | Model variant. Unsupported values are normalized |
| `description` | string | - | Shown in `task()` tool prompt |
| `is_unstable_agent` | boolean | `false` | Force background mode + monitoring. Auto-enabled for Gemini models. |
@@ -259,9 +312,20 @@ Disable categories: `{ "disabled_categories": ["ultrabrain"] }`
3-step priority at runtime:
1. **User override** model set in config → used exactly as-is
2. **Provider fallback chain** tries each provider in priority order until available
3. **System default** falls back to OpenCode's configured default model
1. **User override** - model set in config → used exactly as-is. Even on cold cache (first run without model availability data), explicit user configuration takes precedence over hardcoded fallback chains
2. **Provider fallback chain** - tries each provider in priority order until available
3. **System default** - falls back to OpenCode's configured default model
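The three-step order above can be sketched as follows — the types and names are illustrative, not the actual runtime API:

```typescript
interface ResolveInput {
  userOverride?: string;   // step 1: explicit model from config
  providerChain: string[]; // step 2: provider fallback chain, in priority order
  available: Set<string>;  // models known to be available
  systemDefault: string;   // step 3: OpenCode's configured default
}

function resolveModel(input: ResolveInput): string {
  // 1. User override wins unconditionally, even on a cold availability cache.
  if (input.userOverride) return input.userOverride;
  // 2. First available model in the provider chain.
  for (const model of input.providerChain) {
    if (input.available.has(model)) return model;
  }
  // 3. Fall back to the system default.
  return input.systemDefault;
}
```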
#### Model Settings Compatibility
`variant` and `reasoningEffort` values are automatically normalized to what each model supports. If you specify a variant or reasoning effort level that a model does not support, it is adjusted to the closest supported value rather than causing errors.
Examples:
- Claude models do not support `reasoningEffort` - it is removed automatically
- GPT-4.1 does not support reasoning - `reasoningEffort` is removed
- o-series models support `none` through `high` - `xhigh` is downgraded to `high`
- GPT-5 supports `none`, `minimal`, `low`, `medium`, `high`, `xhigh` - all pass through
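The rules above amount to a small normalization function. The sketch below is illustrative — the model matching is simplified to substring checks and does not mirror the real resolver:

```typescript
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

// Returns the adjusted effort, or undefined when the setting should be dropped.
function normalizeReasoningEffort(model: string, effort: Effort): Effort | undefined {
  if (model.includes("claude") || model.includes("gpt-4.1")) {
    return undefined; // no reasoning support: the setting is removed
  }
  if (/^o\d/.test(model.split("/").pop() ?? "")) {
    return effort === "xhigh" ? "high" : effort; // o-series caps out at "high"
  }
  return effort; // e.g. the GPT-5 family passes every level through
}
```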
#### Agent Provider Chains
@@ -270,9 +334,9 @@ Disable categories: `{ "disabled_categories": ["ultrabrain"] }`
| **Sisyphus** | `claude-opus-4-6` | `claude-opus-4-6` → `glm-5` → `big-pickle` |
| **Hephaestus** | `gpt-5.4` | `gpt-5.4` |
| **oracle** | `gpt-5.4` | `gpt-5.4` → `gemini-3.1-pro` → `claude-opus-4-6` |
| **librarian** | `minimax-m2.7` | `minimax-m2.7` → `minimax-m2.7-highspeed` → `claude-haiku-4-5` → `gpt-5-nano` |
| **explore** | `grok-code-fast-1` | `grok-code-fast-1` → `minimax-m2.7-highspeed` → `minimax-m2.7` → `claude-haiku-4-5` → `gpt-5-nano` |
| **multimodal-looker** | `gpt-5.3-codex` | `gpt-5.3-codex` → `k2p5` → `gemini-3-flash` → `glm-4.6v` → `gpt-5-nano` |
| **librarian** | `minimax-m2.7` | `opencode-go/minimax-m2.7` → `opencode/minimax-m2.5` → `claude-haiku-4-5` → `gpt-5-nano` |
| **explore** | `grok-code-fast-1` | `grok-code-fast-1` → `opencode-go/minimax-m2.7` → `opencode/minimax-m2.5` → `claude-haiku-4-5` → `gpt-5-nano` |
| **multimodal-looker** | `gpt-5.4` | `gpt-5.4` → `k2p5` → `glm-4.6v` → `gpt-5-nano` |
| **Prometheus** | `claude-opus-4-6` | `claude-opus-4-6` → `gpt-5.4` → `gemini-3.1-pro` |
| **Metis** | `claude-opus-4-6` | `claude-opus-4-6` → `gpt-5.4` → `gemini-3.1-pro` |
| **Momus** | `gpt-5.4` | `gpt-5.4` → `claude-opus-4-6` → `gemini-3.1-pro` |
@@ -291,7 +355,7 @@ Disable categories: `{ "disabled_categories": ["ultrabrain"] }`
| **unspecified-high** | `claude-opus-4-6` | `claude-opus-4-6` → `gpt-5.4 (high)` → `glm-5` → `k2p5` → `kimi-k2.5` |
| **writing** | `gemini-3-flash` | `gemini-3-flash` → `claude-sonnet-4-6` → `minimax-m2.7` |
Run `bunx oh-my-openagent doctor --verbose` to see effective model resolution for your config.
Run `bunx oh-my-opencode doctor --verbose` to see effective model resolution for your config.
---
@@ -425,9 +489,10 @@ Available hooks: `todo-continuation-enforcer`, `context-window-monitor`, `sessio
**Notes:**
- `directory-agents-injector` auto-disabled on OpenCode 1.1.37+ (native AGENTS.md support)
- `no-sisyphus-gpt` **do not disable**. It blocks incompatible GPT models for Sisyphus while allowing the dedicated GPT-5.4 prompt path.
- `directory-agents-injector` - auto-disabled on OpenCode 1.1.37+ (native AGENTS.md support)
- `no-sisyphus-gpt` - **do not disable**. It blocks incompatible GPT models for Sisyphus while allowing the dedicated GPT-5.4 prompt path.
- `startup-toast` is a sub-feature of `auto-update-checker`. Disable just the toast by adding `startup-toast` to `disabled_hooks`.
- `session-recovery` - automatically recovers from recoverable session errors (missing tool results, unavailable tools, thinking block violations). Shows toast notifications during recovery. Enable `experimental.auto_resume` for automatic retry after recovery.
### Commands
@@ -504,7 +569,7 @@ Force-enable session notifications:
{ "notification": { "force_enable": true } }
```
`force_enable` (`false`) force session-notification even if external notification plugins are detected.
`force_enable` (`false`) - force session-notification even if external notification plugins are detected.
### MCPs


@@ -6,15 +6,16 @@ Oh-My-OpenAgent provides 11 specialized AI agents. Each has distinct expertise,
### Core Agents
Core-agent tab cycling is deterministic. The fixed priority order is Sisyphus, Hephaestus, Prometheus, and Atlas. Remaining agents follow after that stable core ordering.
| Agent | Model | Purpose |
| --------------------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Sisyphus** | `claude-opus-4-6` | The default orchestrator. Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: `glm-5` → `big-pickle`. |
| **Hephaestus** | `gpt-5.4` | The Legitimate Craftsman. Autonomous deep worker inspired by AmpCode's deep mode. Goal-oriented execution with thorough research before action. Explores codebase patterns, completes tasks end-to-end without premature stopping. Named after the Greek god of forge and craftsmanship. Requires a GPT-capable provider. |
| **Oracle** | `gpt-5.4` | Architecture decisions, code review, debugging. Read-only consultation with stellar logical reasoning and deep analysis. Inspired by AmpCode. Fallback: `gemini-3.1-pro``claude-opus-4-6`. |
| **Librarian** | `minimax-m2.7` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: `minimax-m2.7-highspeed` → `claude-haiku-4-5` → `gpt-5-nano`. |
| **Explore** | `grok-code-fast-1` | Fast codebase exploration and contextual grep. Fallback: `minimax-m2.7-highspeed` → `minimax-m2.7` → `claude-haiku-4-5` → `gpt-5-nano`. |
| **Multimodal-Looker** | `gpt-5.3-codex` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: `k2p5` → `gemini-3-flash` → `glm-4.6v` → `gpt-5-nano`. |
| **Librarian** | `minimax-m2.7` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Primary OpenCode Go path uses MiniMax M2.7. Other provider catalogs may still fall back to MiniMax M2.5, then `claude-haiku-4-5` and `gpt-5-nano`. |
| **Explore** | `grok-code-fast-1` | Fast codebase exploration and contextual grep. Primary path stays on Grok Code Fast 1. MiniMax M2.7 is now used where provider catalogs expose it, while some OpenCode fallback paths still use MiniMax M2.5 for catalog compatibility. |
| **Multimodal-Looker** | `gpt-5.4` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: `k2p5` → `glm-4.6v` → `gpt-5-nano`. |
### Planning Agents
| Agent | Model | Purpose |
@@ -89,8 +90,9 @@ When running inside tmux:
- Watch multiple agents work in real-time
- Each pane shows agent output live
- Auto-cleanup when agents complete
- **Stable agent ordering**: the core tab cycle stays deterministic with Sisyphus, Hephaestus, Prometheus, and Atlas first
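The stable ordering can be sketched as a sort with a fixed core rank. Note the alphabetical tail is an assumption for illustration — the docs only fix the core order:

```typescript
// Core agents cycle first, in this fixed order; the rest follow deterministically.
const CORE_ORDER = ["sisyphus", "hephaestus", "prometheus", "atlas"];

function tabCycleOrder(agents: string[]): string[] {
  const rank = (a: string) => {
    const i = CORE_ORDER.indexOf(a.toLowerCase());
    return i === -1 ? CORE_ORDER.length : i; // non-core agents share the tail rank
  };
  // Stable, deterministic: core rank first, then name as a tie-breaker.
  return [...agents].sort((a, b) => rank(a) - rank(b) || a.localeCompare(b));
}
```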
Customize agent models, prompts, and permissions in `oh-my-openagent.json`.
Customize agent models, prompts, and permissions in `oh-my-opencode.jsonc`.
## Category System
@@ -129,7 +131,7 @@ task({
### Custom Categories
You can define custom categories in `oh-my-openagent.json`.
You can define custom categories in your plugin config file. During the rename transition, both `oh-my-openagent.json[c]` and legacy `oh-my-opencode.json[c]` basenames are recognized.
#### Category Configuration Schema
@@ -188,6 +190,60 @@ When you use a Category, a special agent called **Sisyphus-Junior** performs the
- **Characteristic**: Cannot **re-delegate** tasks to other agents.
- **Purpose**: Prevents infinite delegation loops and ensures focus on the assigned task.
## Advanced Configuration
### Fallback Models
Configure per-agent fallback chains with arrays that can mix plain model strings and per-model objects:
```jsonc
{
"agents": {
"sisyphus": {
"fallback_models": [
"opencode/glm-5",
{ "model": "openai/gpt-5.4", "variant": "high" },
{ "model": "anthropic/claude-sonnet-4-6", "thinking": { "type": "enabled", "budgetTokens": 64000 } }
]
}
}
}
```
When a model errors, the runtime moves through the configured fallback array in order. Object entries let you tune the backup model itself instead of only swapping the model name.
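Internally, mixed string and object entries reduce to one shape — the sketch below is illustrative, not the plugin's actual types:

```typescript
interface FallbackEntry {
  model: string;
  variant?: string;
  reasoningEffort?: string;
  temperature?: number;
  top_p?: number;
  maxTokens?: number;
  thinking?: { type: string; budgetTokens?: number };
}

// Plain strings become minimal entries; objects pass through with their settings.
function normalizeFallbacks(entries: Array<string | FallbackEntry>): FallbackEntry[] {
  return entries.map((e) => (typeof e === "string" ? { model: e } : e));
}
```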
### File-Based Prompts
Load agent system prompts from external files using `file://` URLs:
```jsonc
{
"agents": {
"sisyphus": {
"prompt": "file:///path/to/custom-prompt.md"
}
}
}
```
Useful for:
- Version controlling prompts separately from config
- Sharing prompts across projects
- Keeping configuration files concise
The file content is loaded at runtime and injected as the agent's system prompt.
### Session Recovery
The system automatically recovers from common session failures without user intervention:
- **Missing tool results**: Reconstructs recoverable tool state and skips invalid tool-part IDs instead of failing the whole recovery pass
- **Thinking block violations**: Recovers from API thinking block mismatches
- **Empty messages**: Reconstructs message history when content is missing
- **Context window limits**: Gracefully handles Claude context window exceeded errors with intelligent compaction
- **JSON parse errors**: Recovers from malformed tool outputs
Recovery happens transparently during agent execution. You see the result, not the failure.
## Skills
Skills provide specialized workflows with embedded MCP servers and detailed instructions. A Skill is a mechanism that injects **specialized knowledge (Context)** and **tools (MCP)** for specific domains into agents.
@@ -844,7 +900,7 @@ When a skill MCP has `oauth` configured:
Pre-authenticate via CLI:
```bash
bunx oh-my-openagent mcp oauth login <server-name> --server-url https://api.example.com
bunx oh-my-opencode mcp oauth login <server-name> --server-url https://api.example.com
```
## Context Injection