docs: comprehensive update for v3.14.0 features
- Document object-style fallback_models with per-model settings
- Add package rename compatibility layer docs (oh-my-opencode → oh-my-openagent)
- Update agent-model-matching with Hephaestus gpt-5.4 default
- Document MiniMax M2.5 → M2.7 upgrade across agents
- Add agent priority/order deterministic Tab cycling docs
- Document file:// URI support for agent prompt field
- Add doctor legacy package name warning docs
- Update CLI reference with new doctor checks
- Document model settings compatibility resolver
@@ -92,8 +92,8 @@ These agents do grep, search, and retrieval. They intentionally use the fastest,

| Agent | Role | Fallback Chain | Notes |
| --------------------- | ------------------ | ---------------------------------------------- | ----------------------------------------------------- |
-| **Explore** | Fast codebase grep | Grok Code Fast → opencode-go/minimax-m2.7-highspeed → MiniMax M2.7 → Haiku → GPT-5-Nano | Speed is everything. Fire 10 in parallel. |
-| **Librarian** | Docs/code search | opencode-go/minimax-m2.7 → MiniMax M2.7-highspeed → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. |
+| **Explore** | Fast codebase grep | Grok Code Fast → opencode-go/minimax-m2.7 → opencode/minimax-m2.5 → Haiku → GPT-5-Nano | Speed is everything. Fire 10 in parallel. |
+| **Librarian** | Docs/code search | opencode-go/minimax-m2.7 → opencode/minimax-m2.5 → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. |
| **Multimodal Looker** | Vision/screenshots | GPT-5.4 → opencode-go/kimi-k2.5 → GLM-4.6v → GPT-5-Nano | Uses the first available multimodal-capable fallback. |
| **Sisyphus-Junior** | Category executor | Claude Sonnet → opencode-go/kimi-k2.5 → GPT-5.4 → MiniMax M2.7 → Big Pickle | Handles delegated category tasks. Sonnet-tier default. |

@@ -131,8 +131,8 @@ Principle-driven, explicit reasoning, deep technical capability. Best for agents

| **Gemini 3.1 Pro** | Excels at visual/frontend tasks. Different reasoning style. Default for `visual-engineering` and `artistry`. |
| **Gemini 3 Flash** | Fast. Good for doc search and light tasks. |
| **Grok Code Fast 1** | Blazing fast code grep. Default for Explore agent. |
-| **MiniMax M2.7** | Fast and smart. Good for utility tasks and search/retrieval. Upgraded from M2.5 with better reasoning. |
-| **MiniMax M2.7 Highspeed** | Ultra-fast variant. Optimized for latency-sensitive tasks like codebase grep. |
+| **MiniMax M2.7** | Fast and smart. Used where provider catalogs expose the newer MiniMax line, especially through OpenCode Go. |
+| **MiniMax M2.7 Highspeed** | Ultra-fast variant. You may still see it in older docs, logs, or provider catalogs during the transition. |

### OpenCode Go
@@ -144,11 +144,11 @@ A premium subscription tier ($10/month) that provides reliable access to Chinese

| ------------------------ | --------------------------------------------------------------------- |
| **opencode-go/kimi-k2.5** | Vision-capable, Claude-like reasoning. Used by Sisyphus, Atlas, Sisyphus-Junior, Multimodal Looker. |
| **opencode-go/glm-5** | Text-only orchestration model. Used by Oracle, Prometheus, Metis, Momus. |
-| **opencode-go/minimax-m2.7** | Ultra-cheap, fast responses. Used by Librarian, Explore, Atlas, Sisyphus-Junior for utility work. |
+| **opencode-go/minimax-m2.7** | Ultra-cheap, fast responses. Used by Librarian, Explore, Atlas, and Sisyphus-Junior for utility work. |

**When It Gets Used:**

-OpenCode Go models appear in fallback chains as intermediate options. They bridge the gap between premium Claude access and free-tier alternatives. The system tries OpenCode Go models before falling back to free tiers (MiniMax M2.7-highspeed, Big Pickle) or GPT alternatives.
+OpenCode Go models appear in fallback chains as intermediate options. They bridge the gap between premium Claude access and free-tier alternatives. The system tries OpenCode Go models before falling back to cheaper provider-specific entries like MiniMax or Big Pickle, then GPT alternatives where applicable.

**Go-Only Scenarios:**

@@ -156,7 +156,7 @@ Some model identifiers like `k2p5` (paid Kimi K2.5) and `glm-5` may only be avai

### About Free-Tier Fallbacks

-You may see model names like `kimi-k2.5-free`, `minimax-m2.7-highspeed`, or `big-pickle` (GLM 4.6) in the source code or logs. These are free-tier or speed-optimized versions of the same model families. They exist as lower-priority entries in fallback chains.
+You may see model names like `kimi-k2.5-free`, `minimax-m2.7`, `minimax-m2.5`, or `big-pickle` (GLM 4.6) in the source code or logs. These are provider-specific or speed-optimized entries in fallback chains. The exact MiniMax model can differ by provider catalog.

You don't need to configure them. The system includes them so it degrades gracefully when you don't have every paid subscription. If you have the paid version, the paid version is always preferred.

@@ -187,7 +187,7 @@ See the [Orchestration System Guide](./orchestration.md) for how agents dispatch
```jsonc
{
-  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-openagent.schema.json",
+  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json",

  "agents": {
    // Main orchestrator: Claude Opus or Kimi K2.5 work best
@@ -255,7 +255,15 @@ Run `opencode models` to see available models, `opencode auth login` to authenti

### How Model Resolution Works

-Each agent has a fallback chain. The system tries models in priority order until it finds one available through your connected providers. You don't need to configure providers per model — just authenticate (`opencode auth login`) and the system figures out which models are available and where.
+Each agent has a fallback chain. The system tries models in priority order until it finds one available through your connected providers. You don't need to configure providers per model. Just authenticate (`opencode auth login`) and the system figures out which models are available and where.

Core-agent tab cycling is now deterministic. The fixed priority order is Sisyphus, Hephaestus, Prometheus, and Atlas, then the remaining agents follow.
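
The fixed order above amounts to a stable sort: core agents first, in a fixed priority, then everyone else in their original relative order. A minimal illustrative sketch in TypeScript, not the actual implementation (agent names come from the list above; the function itself is assumed):

```typescript
// Illustrative sketch only: deterministic Tab cycling as a stable sort.
// The core priority order is taken from the docs; everything else is assumed.
const CORE_ORDER = ["sisyphus", "hephaestus", "prometheus", "atlas"];

function tabOrder(agents: string[]): string[] {
  const rank = (name: string): number => {
    const i = CORE_ORDER.indexOf(name);
    return i === -1 ? CORE_ORDER.length : i; // non-core agents rank last
  };
  // sort() is stable in modern runtimes, so non-core agents keep their
  // original relative order while core agents float to the front.
  return [...agents].sort((a, b) => rank(a) - rank(b));
}

console.log(tabOrder(["explore", "atlas", "librarian", "sisyphus"]));
// → ["sisyphus", "atlas", "explore", "librarian"]
```

Because the sort key is fixed, the same agent set always cycles in the same order.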

Your explicit configuration always wins. If you set a specific model for an agent, that choice takes precedence even when resolution data is cold.

Variant and `reasoningEffort` overrides are normalized to model-supported values, so cross-provider overrides degrade gracefully instead of failing hard.
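
Taken together, the rules above boil down to: explicit override first, then the first available model in the fallback chain, then a system default, with effort overrides clamped to supported values. A hedged TypeScript sketch, with all names hypothetical:

```typescript
// Illustrative sketch only; function names and signatures are assumed.
function resolveModel(
  override: string | undefined, // user-configured model, always wins
  chain: string[],              // agent's fallback chain, priority order
  available: Set<string>,       // models reachable via connected providers
  systemDefault: string,
): string {
  if (override) return override; // explicit config wins, even when cold
  return chain.find((m) => available.has(m)) ?? systemDefault;
}

// Overrides are clamped to what the resolved model supports, so a
// cross-provider override degrades gracefully instead of failing hard.
function normalizeEffort(effort: string, supported: string[]): string {
  return supported.includes(effort) ? effort : supported[supported.length - 1];
}

console.log(resolveModel(undefined, ["opus", "kimi-k2.5", "glm-5"], new Set(["glm-5"]), "big-pickle"));
// → "glm-5"
console.log(normalizeEffort("high", ["low", "medium"]));
// → "medium"
```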

To see which models your agents will actually use, run `bunx oh-my-opencode doctor`. This shows effective model resolution based on your current authentication and config.

```
Agent Request → User Override (if configured) → Fallback Chain → System Default
```

@@ -5,7 +5,7 @@

Paste this into your LLM agent session:

```
-Install and configure oh-my-openagent by following the instructions here:
+Install and configure oh-my-opencode by following the instructions here:
https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
```

@@ -14,8 +14,8 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/do

Run the interactive installer:

```bash
-bunx oh-my-openagent install # recommended
-npx oh-my-openagent install # alternative
+bunx oh-my-opencode install # recommended
+npx oh-my-opencode install # alternative
```

> **Note**: The CLI ships with standalone binaries for all major platforms. No runtime (Bun/Node.js) is required for CLI execution after installation.
@@ -26,6 +26,8 @@ Follow the prompts to configure your Claude, ChatGPT, and Gemini subscriptions.
After you install it, you can read this [overview guide](./overview.md) to understand more.
The published package and local binary are still `oh-my-opencode`. Inside `opencode.json`, the compatibility layer now prefers the plugin entry `oh-my-openagent`, while legacy `oh-my-opencode` entries still load with a warning. Plugin config loading recognizes both `oh-my-openagent.json[c]` and `oh-my-opencode.json[c]` during the transition.
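
The lookup described above can be pictured as a candidate scan, preferred names before legacy ones. A hedged TypeScript sketch (the file names come from the text; the function and warning wording are assumptions, not the real implementation):

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical sketch of the transition-period lookup: the preferred name
// wins, legacy names still load but trigger a warning.
const CONFIG_CANDIDATES = [
  "oh-my-openagent.json",
  "oh-my-openagent.jsonc",
  "oh-my-opencode.json",  // legacy
  "oh-my-opencode.jsonc", // legacy
];

function findPluginConfig(dir: string): string | undefined {
  for (const name of CONFIG_CANDIDATES) {
    const candidate = path.join(dir, name);
    if (fs.existsSync(candidate)) {
      if (name.startsWith("oh-my-opencode")) {
        console.warn(`legacy config name ${name}; consider renaming`);
      }
      return candidate;
    }
  }
  return undefined; // no plugin config found; defaults apply
}
```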
## For LLM Agents
> **IMPORTANT: Use `curl` to fetch this file, NOT WebFetch.** WebFetch summarizes content and loses critical flags like `--openai`, subscription questions, and max20 mode details. Always use:

@@ -34,7 +36,7 @@ After you install it, you can read this [overview guide](./overview.md) to under

> ```
> curl -fsSL https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
> ```

-If you're an LLM Agent helping set up oh-my-openagent, Welcome! You'll now help your user to install the best agent harness, OhMyOpenAgent. Tell user this with greetings, with test 'oMoMoMoMo...'
+If you're an LLM Agent helping set up oh-my-opencode, welcome. You'll help your user install the harness and verify the rename-compat setup cleanly. Greet the user with the text 'oMoMoMoMo...'

follow these steps:

@@ -96,19 +98,19 @@ Spawn a subagent to handle installation and report back - to save context.

Based on the user's answers, run the CLI installer with appropriate flags:

```bash
-bunx oh-my-openagent install --no-tui --claude=<yes|no|max20> --gemini=<yes|no> --copilot=<yes|no> [--openai=<yes|no>] [--opencode-go=<yes|no>] [--opencode-zen=<yes|no>] [--zai-coding-plan=<yes|no>]
+bunx oh-my-opencode install --no-tui --claude=<yes|no|max20> --gemini=<yes|no> --copilot=<yes|no> [--openai=<yes|no>] [--opencode-go=<yes|no>] [--opencode-zen=<yes|no>] [--zai-coding-plan=<yes|no>]
```

**Examples:**

-- User has all native subscriptions: `bunx oh-my-openagent install --no-tui --claude=max20 --openai=yes --gemini=yes --copilot=no`
-- User has only Claude: `bunx oh-my-openagent install --no-tui --claude=yes --gemini=no --copilot=no`
-- User has Claude + OpenAI: `bunx oh-my-openagent install --no-tui --claude=yes --openai=yes --gemini=no --copilot=no`
-- User has only GitHub Copilot: `bunx oh-my-openagent install --no-tui --claude=no --gemini=no --copilot=yes`
-- User has Z.ai for Librarian: `bunx oh-my-openagent install --no-tui --claude=yes --gemini=no --copilot=no --zai-coding-plan=yes`
-- User has only OpenCode Zen: `bunx oh-my-openagent install --no-tui --claude=no --gemini=no --copilot=no --opencode-zen=yes`
-- User has OpenCode Go only: `bunx oh-my-openagent install --no-tui --claude=no --openai=no --gemini=no --copilot=no --opencode-go=yes`
-- User has no subscriptions: `bunx oh-my-openagent install --no-tui --claude=no --gemini=no --copilot=no`
+- User has all native subscriptions: `bunx oh-my-opencode install --no-tui --claude=max20 --openai=yes --gemini=yes --copilot=no`
+- User has only Claude: `bunx oh-my-opencode install --no-tui --claude=yes --gemini=no --copilot=no`
+- User has Claude + OpenAI: `bunx oh-my-opencode install --no-tui --claude=yes --openai=yes --gemini=no --copilot=no`
+- User has only GitHub Copilot: `bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=yes`
+- User has Z.ai for Librarian: `bunx oh-my-opencode install --no-tui --claude=yes --gemini=no --copilot=no --zai-coding-plan=yes`
+- User has only OpenCode Zen: `bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=no --opencode-zen=yes`
+- User has OpenCode Go only: `bunx oh-my-opencode install --no-tui --claude=no --openai=no --gemini=no --copilot=no --opencode-go=yes`
+- User has no subscriptions: `bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=no`

The CLI will:

@@ -120,8 +122,17 @@ The CLI will:

```bash
opencode --version # Should be 1.0.150 or higher
-cat ~/.config/opencode/opencode.json # Should contain "oh-my-openagent" in plugin array
+cat ~/.config/opencode/opencode.json # Should contain "oh-my-openagent" in plugin array, or the legacy "oh-my-opencode" entry while you are still migrating
```

#### Run Doctor Verification

After installation, verify everything is working correctly:

```bash
bunx oh-my-opencode doctor
```

This checks your environment, authentication status, and shows which models each agent will actually use.
### Step 4: Configure Authentication
@@ -154,9 +165,9 @@ First, add the opencode-antigravity-auth plugin:
You'll also need full model settings in `opencode.json`.
Read the [opencode-antigravity-auth documentation](https://github.com/NoeFabris/opencode-antigravity-auth), copy the full model configuration from the README, and merge carefully to avoid breaking the user's existing setup. The plugin now uses a **variant system** — models like `antigravity-gemini-3-pro` support `low`/`high` variants instead of separate `-low`/`-high` model entries.
-##### oh-my-openagent Agent Model Override
+##### Plugin config model override

-The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-openagent.json` (or `.opencode/oh-my-openagent.json`):
+The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in your plugin config file. Existing installs still commonly use `oh-my-opencode.json` or `.opencode/oh-my-opencode.json`, while the compatibility layer also recognizes `oh-my-openagent.json[c]`.

```json
{
@@ -201,7 +212,7 @@ GitHub Copilot is supported as a **fallback provider** when native providers are
##### Model Mappings
-When GitHub Copilot is the best available provider, oh-my-openagent uses these model assignments:
+When GitHub Copilot is the best available provider, the compatibility layer resolves these assignments:

| Agent | Model |
| ------------- | --------------------------------- |
@@ -227,23 +238,22 @@ If Z.ai is your main provider, the most important fallbacks are:
#### OpenCode Zen

-OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gpt-5.3-codex`, `opencode/gpt-5-nano`, `opencode/glm-5`, `opencode/big-pickle`, and `opencode/minimax-m2.7-highspeed`.
+OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gpt-5.3-codex`, `opencode/gpt-5-nano`, `opencode/glm-5`, `opencode/big-pickle`, and `opencode/minimax-m2.5`.

-When OpenCode Zen is the best available provider (no native or Copilot), these models are used:
+When OpenCode Zen is the best available provider, these are the most relevant source-backed examples:

| Agent | Model |
| ------------- | ---------------------------------------------------- |
| **Sisyphus** | `opencode/claude-opus-4-6` |
| **Oracle** | `opencode/gpt-5.4` |
-| **Explore** | `opencode/gpt-5-nano` |
-| **Librarian** | `opencode/minimax-m2.7-highspeed` / `opencode/big-pickle` |
+| **Explore** | `opencode/claude-haiku-4-5` |

##### Setup
Run the installer and select "Yes" for GitHub Copilot:
```bash
-bunx oh-my-openagent install
+bunx oh-my-opencode install
# Select your subscriptions (Claude, ChatGPT, Gemini)
# When prompted: "Do you have a GitHub Copilot subscription?" → Select "Yes"
```
@@ -251,7 +261,7 @@ bunx oh-my-openagent install
Or use non-interactive mode:
```bash
-bunx oh-my-openagent install --no-tui --claude=no --openai=no --gemini=no --copilot=yes
+bunx oh-my-opencode install --no-tui --claude=no --openai=no --gemini=no --copilot=yes
```

Then authenticate with GitHub:
@@ -263,7 +273,7 @@ opencode auth login
### Step 5: Understand Your Model Setup
-You've just configured oh-my-openagent. Here's what got set up and why.
+You've just configured oh-my-opencode. Here's what got set up and why.

#### Model Families: What You're Working With
@@ -296,8 +306,8 @@ Not all models behave the same way. Understanding which models are "similar" hel
| --------------------- | -------------------------------- | ----------------------------------------------------------- |
| **Gemini 3.1 Pro** | google, github-copilot, opencode | Excels at visual/frontend tasks. Different reasoning style. |
| **Gemini 3 Flash** | google, github-copilot, opencode | Fast, good for doc search and light tasks. |
-| **MiniMax M2.7** | venice, opencode-go | Fast and smart. Good for utility tasks. Upgraded from M2.5. |
-| **MiniMax M2.7 Highspeed** | opencode | Ultra-fast MiniMax variant. Optimized for latency. |
+| **MiniMax M2.7** | venice, opencode-go | Fast and smart. Good for utility tasks where the provider catalog exposes M2.7. |
+| **MiniMax M2.5** | opencode | Legacy OpenCode catalog entry still used in some fallback chains for compatibility. |

**Speed-Focused Models**:
@@ -305,7 +315,7 @@ Not all models behave the same way. Understanding which models are "similar" hel
| ----------------------- | ---------------------- | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| **Grok Code Fast 1** | github-copilot, venice | Very fast | Optimized for code grep/search. Default for Explore. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast | Good balance of speed and intelligence. |
-| **MiniMax M2.7 Highspeed** | opencode | Very fast | Ultra-fast MiniMax variant. Smart for its speed class. |
+| **MiniMax M2.5** | opencode | Very fast | Legacy OpenCode catalog entry that still appears in some utility fallback chains. |
| **GPT-5.3-codex-spark** | openai | Extremely fast | Blazing fast but compacts so aggressively that oh-my-openagent's context management doesn't work well with it. Not recommended for omo agents. |

#### What Each Agent Does and Which Model It Got
@@ -316,7 +326,7 @@ Based on your subscriptions, here's how the agents were configured:
| Agent | Role | Default Chain | What It Does |
| ------------ | ---------------- | ----------------------------------------------- | ---------------------------------------------------------------------------------------- |
-| **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. **Never use GPT — no GPT prompt exists.** |
+| **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GPT-5.4 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. Claude-family models are still preferred, but GPT-5.4 now has a dedicated prompt path. |
| **Metis** | Plan review | Opus (max) → Kimi K2.5 → GPT-5.4 → Gemini 3.1 Pro | Reviews Prometheus plans for gaps. |

**Dual-Prompt Agents** (auto-switch between Claude and GPT prompts):
@@ -328,7 +338,7 @@ Priority: **Claude > GPT > Claude-like models**
| Agent | Role | Default Chain | GPT Prompt? |
| -------------- | ----------------- | ---------------------------------------------------------- | ---------------------------------------------------------------- |
| **Prometheus** | Strategic planner | Opus (max) → **GPT-5.4 (high)** → Kimi K2.5 → Gemini 3.1 Pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
-| **Atlas** | Todo orchestrator | **Kimi K2.5** → Sonnet → GPT-5.4 | Yes — GPT-optimized todo management |
+| **Atlas** | Todo orchestrator | **Claude Sonnet 4.6** → Kimi K2.5 → GPT-5.4 | Yes - GPT-optimized todo management |

**GPT-Native Agents** (built for GPT, don't override to Claude):
@@ -344,9 +354,9 @@ These agents do search, grep, and retrieval. They intentionally use fast, cheap
| Agent | Role | Default Chain | Design Rationale |
| --------------------- | ------------------ | ---------------------------------------------------------------------- | -------------------------------------------------------------- |
-| **Explore** | Fast codebase grep | Grok Code Fast → MiniMax M2.7-highspeed → MiniMax M2.7 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
-| **Librarian** | Docs/code search | MiniMax M2.7 → MiniMax M2.7-highspeed → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. MiniMax is fast. |
-| **Multimodal Looker** | Vision/screenshots | Kimi K2.5 → Kimi Free → Gemini Flash → GPT-5.4 → GLM-4.6v | Kimi excels at multimodal understanding. |
+| **Explore** | Fast codebase grep | Grok Code Fast → OpenCode Go MiniMax M2.7 → OpenCode MiniMax M2.5 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
+| **Librarian** | Docs/code search | OpenCode Go MiniMax M2.7 → OpenCode MiniMax M2.5 → Haiku → GPT-5-Nano | Doc retrieval doesn't need deep reasoning. MiniMax is fast where the provider catalog supports it. |
+| **Multimodal Looker** | Vision/screenshots | GPT-5.4 → Kimi K2.5 → GLM-4.6v → GPT-5-Nano | GPT-5.4 now leads the default vision path when available. |

#### Why Different Models Need Different Prompts
@@ -365,7 +375,7 @@ This is why Prometheus and Atlas ship separate prompts per model family — they
#### Custom Model Configuration
-If the user wants to override which model an agent uses, you can customize in `oh-my-openagent.json`:
+If the user wants to override which model an agent uses, you can customize in your plugin config file. Existing installs still commonly use `oh-my-opencode.json`, while the compatibility layer also recognizes `oh-my-openagent.json[c]`.

```jsonc
{
@@ -400,7 +410,7 @@ GPT (5.3-codex, 5.2) > Claude Opus (decent fallback) > Gemini (acceptable)
**Dangerous** (no prompt support):
-- Sisyphus → GPT: **No GPT prompt. Will degrade significantly.**
+- Sisyphus → older GPT models: **Still a bad fit. GPT-5.4 is the only dedicated GPT prompt path.**
- Hephaestus → Claude: **Built for Codex. Claude can't replicate this.**
- Explore → Opus: **Massive cost waste. Explore needs speed, not intelligence.**
- Librarian → Opus: **Same. Doc search doesn't need Opus-level reasoning.**
@@ -462,3 +472,7 @@ Tell the user of following:
4. Want your own agent catalog setup? I can read the [docs](docs/guide/agent-model-matching.md) and set it up for you after a quick interview!
That's it. The agent will figure out the rest and handle everything automatically.
#### Advanced Configuration
You can customize agent models and fallback chains in your config. The `fallback_models` field accepts either a single string or an array that mixes strings and per-model objects with settings like `variant` and `temperature`. See the [Configuration Reference](../reference/configuration.md) and example configs in `docs/examples/` for details.
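
For example, a mixed `fallback_models` entry might look like the following. This is a hedged sketch based on the description above; the agent name and surrounding keys are illustrative, so treat the Configuration Reference as authoritative:

```jsonc
{
  "agents": {
    // Illustrative agent entry; keys other than fallback_models,
    // variant, and temperature are assumptions.
    "librarian": {
      "model": "opencode-go/minimax-m2.7",
      "fallback_models": [
        "opencode/minimax-m2.5",
        {
          "model": "anthropic/claude-haiku-4-5",
          "variant": "low",     // normalized if the model doesn't support it
          "temperature": 0.2
        },
        "opencode/gpt-5-nano"
      ]
    }
  }
}
```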