feat(categories): change quick category default model from claude-haiku-4-5 to gpt-5.4-mini
GPT-5.4-mini provides stronger reasoning at comparable speed and cost. Haiku remains the next fallback priority in the chain.

Changes:
- DEFAULT_CATEGORIES quick model: anthropic/claude-haiku-4-5 → openai/gpt-5.4-mini
- Fallback chain: gpt-5.4-mini → haiku → gemini-3-flash → minimax-m2.5 → gpt-5-nano
- OpenAI-only catalog: quick uses gpt-5.4-mini directly
- Think-mode: add gpt-5-4-mini and gpt-5-4-nano high variants
- Update all documentation references
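The headline change can be sketched as a before/after of the default category map. The `DEFAULT_CATEGORIES` name comes from the diff below; the `CategoryConfig` shape is an assumption for illustration, and only the `quick` entry is shown:

```typescript
// Sketch of the quick-category default change (CategoryConfig is assumed).
type CategoryConfig = { model: string; variant?: string };

const DEFAULT_CATEGORIES: Record<string, CategoryConfig> = {
  // before: quick: { model: "anthropic/claude-haiku-4-5" },
  quick: { model: "openai/gpt-5.4-mini" },
};

console.log(DEFAULT_CATEGORIES.quick.model); // "openai/gpt-5.4-mini"
```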
@@ -121,6 +121,7 @@ Principle-driven, explicit reasoning, deep technical capability. Best for agents

| ----------------- | ----------------------------------------------------------------------------------------------- |
| **GPT-5.3 Codex** | Deep coding powerhouse. Autonomous exploration. Required for Hephaestus. |
| **GPT-5.4** | High intelligence, strategic reasoning. Default for Oracle, Momus, and a key fallback for Prometheus / Atlas. Uses xhigh variant for Momus. |
+| **GPT-5.4 Mini** | Fast + strong reasoning. Good for lightweight autonomous tasks. Default for quick category. |
| **GPT-5-Nano** | Ultra-cheap, fast. Good for simple utility tasks. |

### Other Models
@@ -170,7 +171,7 @@ When agents delegate work, they don't pick a model name — they pick a **category

| `ultrabrain` | Maximum reasoning needed | GPT-5.4 → Gemini 3.1 Pro → Claude Opus → opencode-go/glm-5 |
| `deep` | Deep coding, complex logic | GPT-5.3 Codex → Claude Opus → Gemini 3.1 Pro |
| `artistry` | Creative, novel approaches | Gemini 3.1 Pro → Claude Opus → GPT-5.4 |
-| `quick` | Simple, fast tasks | Claude Haiku → Gemini Flash → opencode-go/minimax-m2.5 → GPT-5-Nano |
+| `quick` | Simple, fast tasks | GPT-5.4 Mini → Claude Haiku → Gemini Flash → opencode-go/minimax-m2.5 → GPT-5-Nano |
| `unspecified-high` | General complex work | Claude Opus → GPT-5.4 → GLM 5 → K2P5 → opencode-go/glm-5 → Kimi K2.5 |
| `unspecified-low` | General standard work | Claude Sonnet → GPT-5.3 Codex → opencode-go/kimi-k2.5 → Gemini Flash |
| `writing` | Text, docs, prose | Gemini Flash → opencode-go/kimi-k2.5 → Claude Sonnet |
@@ -287,6 +287,7 @@ Not all models behave the same way. Understanding which models are "similar" hel

| ----------------- | -------------------------------- | ------------------------------------------------- |
| **GPT-5.3-codex** | openai, github-copilot, opencode | Deep coding powerhouse. Required for Hephaestus. |
| **GPT-5.4** | openai, github-copilot, opencode | High intelligence. Default for Oracle. |
+| **GPT-5.4 Mini** | openai, github-copilot, opencode | Fast + strong reasoning. Default for quick category. |
| **GPT-5-Nano** | opencode | Ultra-cheap, fast. Good for simple utility tasks. |

**Different-Behavior Models**:
@@ -298,7 +298,7 @@ task({ category: "quick", prompt: "..." }); // "Just get it done fast"

| `visual-engineering` | Gemini 3.1 Pro | Frontend, UI/UX, design, styling, animation |
| `ultrabrain` | GPT-5.4 (xhigh) | Deep logical reasoning, complex architecture decisions |
| `artistry` | Gemini 3.1 Pro (high) | Highly creative or artistic tasks, novel ideas |
-| `quick` | Claude Haiku 4.5 | Trivial tasks - single file changes, typo fixes |
+| `quick` | GPT-5.4 Mini | Trivial tasks - single file changes, typo fixes |
| `deep` | GPT-5.3 Codex (medium) | Goal-oriented autonomous problem-solving, thorough research |
| `unspecified-low` | Claude Sonnet 4.6 | Tasks that don't fit other categories, low effort |
| `unspecified-high` | Claude Opus 4.6 (max) | Tasks that don't fit other categories, high effort |
@@ -41,7 +41,7 @@ We used to call this "Claude Code on steroids." That was wrong.

This isn't about making Claude Code better. It's about breaking free from the idea that one model, one provider, one way of working is enough. Anthropic wants you locked in. OpenAI wants you locked in. Everyone wants you locked in.

-Oh My OpenCode doesn't play that game. It orchestrates across models, picking the right brain for the right job. Claude for orchestration. GPT for deep reasoning. Gemini for frontend. Haiku for quick tasks. All working together, automatically.
+Oh My OpenCode doesn't play that game. It orchestrates across models, picking the right brain for the right job. Claude for orchestration. GPT for deep reasoning. Gemini for frontend. GPT-5.4 Mini for quick tasks. All working together, automatically.

---
@@ -99,9 +99,9 @@ Use Hephaestus when you need deep architectural reasoning, complex debugging acr

**Why this beats vanilla Codex CLI:**

-- **Multi-model orchestration.** Pure Codex is single-model. OmO routes different tasks to different models automatically. GPT for deep reasoning. Gemini for frontend. Haiku for speed. The right brain for the right job.
+- **Multi-model orchestration.** Pure Codex is single-model. OmO routes different tasks to different models automatically. GPT for deep reasoning. Gemini for frontend. GPT-5.4 Mini for speed. The right brain for the right job.
- **Background agents.** Fire 5+ agents in parallel. Something Codex simply cannot do. While one agent writes code, another researches patterns, another checks documentation. Like a real dev team.
-- **Category system.** Tasks are routed by intent, not model name. `visual-engineering` gets Gemini. `ultrabrain` gets GPT-5.4. `quick` gets Haiku. No manual juggling.
+- **Category system.** Tasks are routed by intent, not model name. `visual-engineering` gets Gemini. `ultrabrain` gets GPT-5.4. `quick` gets GPT-5.4 Mini. No manual juggling.
- **Accumulated wisdom.** Subagents learn from previous results. Conventions discovered in task 1 are passed to task 5. Mistakes made early aren't repeated. The system gets smarter as it works.
### Prometheus: The Strategic Planner

@@ -195,8 +195,8 @@ You can override specific agents or categories in your config:

// General high-effort work
"unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },

-// Quick tasks: use the cheapest models
-"quick": { "model": "anthropic/claude-haiku-4-5" },
+// Quick tasks: use GPT-5.4-mini (fast and cheap)
+"quick": { "model": "openai/gpt-5.4-mini" },

// Deep reasoning: GPT-5.4
"ultrabrain": { "model": "openai/gpt-5.4", "variant": "xhigh" },
@@ -228,7 +228,7 @@ Domain-specific model delegation used by the `task()` tool. When Sisyphus delega

| `ultrabrain` | `openai/gpt-5.4` (xhigh) | Deep logical reasoning, complex architecture |
| `deep` | `openai/gpt-5.3-codex` (medium) | Autonomous problem-solving, thorough research |
| `artistry` | `google/gemini-3.1-pro` (high) | Creative/unconventional approaches |
-| `quick` | `anthropic/claude-haiku-4-5` | Trivial tasks, typo fixes, single-file changes |
+| `quick` | `openai/gpt-5.4-mini` | Trivial tasks, typo fixes, single-file changes |
| `unspecified-low` | `anthropic/claude-sonnet-4-6` | General tasks, low effort |
| `unspecified-high` | `anthropic/claude-opus-4-6` (max) | General tasks, high effort |
| `writing` | `google/gemini-3-flash` | Documentation, prose, technical writing |
@@ -286,7 +286,7 @@ Disable categories: `{ "disabled_categories": ["ultrabrain"] }`

| **ultrabrain** | `gpt-5.4` | `gpt-5.4` → `gemini-3.1-pro` → `claude-opus-4-6` |
| **deep** | `gpt-5.3-codex` | `gpt-5.3-codex` → `claude-opus-4-6` → `gemini-3.1-pro` |
| **artistry** | `gemini-3.1-pro` | `gemini-3.1-pro` → `claude-opus-4-6` → `gpt-5.4` |
-| **quick** | `claude-haiku-4-5` | `claude-haiku-4-5` → `gemini-3-flash` → `gpt-5-nano` |
+| **quick** | `gpt-5.4-mini` | `gpt-5.4-mini` → `claude-haiku-4-5` → `gemini-3-flash` → `minimax-m2.5` → `gpt-5-nano` |
| **unspecified-low** | `claude-sonnet-4-6` | `claude-sonnet-4-6` → `gpt-5.3-codex` → `gemini-3-flash` |
| **unspecified-high** | `claude-opus-4-6` | `claude-opus-4-6` → `gpt-5.4 (high)` → `glm-5` → `k2p5` → `kimi-k2.5` |
| **writing** | `gemini-3-flash` | `gemini-3-flash` → `claude-sonnet-4-6` |
@@ -111,7 +111,7 @@ By combining these two concepts, you can generate optimal agents through `task`.

| `ultrabrain` | `openai/gpt-5.4` (xhigh) | Deep logical reasoning, complex architecture decisions requiring extensive analysis |
| `deep` | `openai/gpt-5.3-codex` (medium) | Goal-oriented autonomous problem-solving. Thorough research before action. For hairy problems requiring deep understanding. |
| `artistry` | `google/gemini-3.1-pro` (high) | Highly creative/artistic tasks, novel ideas |
-| `quick` | `anthropic/claude-haiku-4-5` | Trivial tasks - single file changes, typo fixes, simple modifications |
+| `quick` | `openai/gpt-5.4-mini` | Trivial tasks - single file changes, typo fixes, simple modifications |
| `unspecified-low` | `anthropic/claude-sonnet-4-6` | Tasks that don't fit other categories, low effort required |
| `unspecified-high` | `anthropic/claude-opus-4-6` (max) | Tasks that don't fit other categories, high effort required |
| `writing` | `google/gemini-3-flash` | Documentation, prose, technical writing |
@@ -248,8 +248,7 @@ exports[`generateModelConfig single native provider uses OpenAI models when only

"variant": "medium",
},
"quick": {
-"model": "openai/gpt-5.3-codex",
-"variant": "low",
+"model": "openai/gpt-5.4-mini",
},
"ultrabrain": {
"model": "openai/gpt-5.4",
@@ -334,8 +333,7 @@ exports[`generateModelConfig single native provider uses OpenAI models with isMa

"variant": "medium",
},
"quick": {
-"model": "openai/gpt-5.3-codex",
-"variant": "low",
+"model": "openai/gpt-5.4-mini",
},
"ultrabrain": {
"model": "openai/gpt-5.4",
@@ -533,7 +531,7 @@ exports[`generateModelConfig all native providers uses preferred models from fal

"variant": "medium",
},
"quick": {
-"model": "anthropic/claude-haiku-4-5",
+"model": "openai/gpt-5.4-mini",
},
"ultrabrain": {
"model": "openai/gpt-5.4",
@@ -608,7 +606,7 @@ exports[`generateModelConfig all native providers uses preferred models with isM

"variant": "medium",
},
"quick": {
-"model": "anthropic/claude-haiku-4-5",
+"model": "openai/gpt-5.4-mini",
},
"ultrabrain": {
"model": "openai/gpt-5.4",
@@ -684,7 +682,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on

"variant": "medium",
},
"quick": {
-"model": "opencode/claude-haiku-4-5",
+"model": "opencode/gpt-5.4-mini",
},
"ultrabrain": {
"model": "opencode/gpt-5.4",
@@ -759,7 +757,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is

"variant": "medium",
},
"quick": {
-"model": "opencode/claude-haiku-4-5",
+"model": "opencode/gpt-5.4-mini",
},
"ultrabrain": {
"model": "opencode/gpt-5.4",
@@ -830,7 +828,7 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models when

"variant": "high",
},
"quick": {
-"model": "github-copilot/claude-haiku-4.5",
+"model": "github-copilot/gpt-5.4-mini",
},
"ultrabrain": {
"model": "github-copilot/gemini-3.1-pro-preview",
@@ -900,7 +898,7 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models with

"variant": "high",
},
"quick": {
-"model": "github-copilot/claude-haiku-4.5",
+"model": "github-copilot/gpt-5.4-mini",
},
"ultrabrain": {
"model": "github-copilot/gemini-3.1-pro-preview",
@@ -1092,7 +1090,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen

"variant": "medium",
},
"quick": {
-"model": "anthropic/claude-haiku-4-5",
+"model": "opencode/gpt-5.4-mini",
},
"ultrabrain": {
"model": "opencode/gpt-5.4",
@@ -1167,7 +1165,7 @@ exports[`generateModelConfig mixed provider scenarios uses OpenAI + Copilot comb

"variant": "medium",
},
"quick": {
-"model": "github-copilot/claude-haiku-4.5",
+"model": "openai/gpt-5.4-mini",
},
"ultrabrain": {
"model": "openai/gpt-5.4",
@@ -1375,7 +1373,7 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider

"variant": "medium",
},
"quick": {
-"model": "github-copilot/claude-haiku-4.5",
+"model": "github-copilot/gpt-5.4-mini",
},
"ultrabrain": {
"model": "opencode/gpt-5.4",
@@ -1453,7 +1451,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe

"variant": "medium",
},
"quick": {
-"model": "anthropic/claude-haiku-4-5",
+"model": "openai/gpt-5.4-mini",
},
"ultrabrain": {
"model": "openai/gpt-5.4",
@@ -1531,7 +1529,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is

"variant": "medium",
},
"quick": {
-"model": "anthropic/claude-haiku-4-5",
+"model": "openai/gpt-5.4-mini",
},
"ultrabrain": {
"model": "openai/gpt-5.4",
@@ -40,7 +40,7 @@ describe("generateModelConfig OpenAI-only model catalog", () => {

// #then
expect(result.categories?.artistry).toEqual({ model: "openai/gpt-5.4", variant: "xhigh" })
-expect(result.categories?.quick).toEqual({ model: "openai/gpt-5.3-codex", variant: "low" })
+expect(result.categories?.quick).toEqual({ model: "openai/gpt-5.4-mini" })
expect(result.categories?.["visual-engineering"]).toEqual({ model: "openai/gpt-5.4", variant: "high" })
expect(result.categories?.writing).toEqual({ model: "openai/gpt-5.4", variant: "medium" })
})

@@ -55,6 +55,6 @@ describe("generateModelConfig OpenAI-only model catalog", () => {

// #then
expect(result.agents?.explore).toEqual({ model: "opencode-go/minimax-m2.5" })
expect(result.agents?.librarian).toEqual({ model: "opencode-go/minimax-m2.5" })
-expect(result.categories?.quick).toEqual({ model: "opencode-go/minimax-m2.5" })
+expect(result.categories?.quick).toEqual({ model: "openai/gpt-5.4-mini" })
})
})
@@ -7,7 +7,7 @@ const OPENAI_ONLY_AGENT_OVERRIDES: Record<string, AgentConfig> = {

const OPENAI_ONLY_CATEGORY_OVERRIDES: Record<string, CategoryConfig> = {
artistry: { model: "openai/gpt-5.4", variant: "xhigh" },
-quick: { model: "openai/gpt-5.3-codex", variant: "low" },
+quick: { model: "openai/gpt-5.4-mini" },
"visual-engineering": { model: "openai/gpt-5.4", variant: "high" },
writing: { model: "openai/gpt-5.4", variant: "medium" },
}
@@ -53,6 +53,22 @@ describe("think-mode switcher", () => {

expect(variant).toBe("gpt-5-4-high")
})

+it("should handle gpt-5.4-mini model", () => {
+// given a GPT-5.4-mini model ID
+const variant = getHighVariant("gpt-5.4-mini")
+
+// then should return high variant
+expect(variant).toBe("gpt-5-4-mini-high")
+})
+
+it("should handle gpt-5.4-nano model", () => {
+// given a GPT-5.4-nano model ID
+const variant = getHighVariant("gpt-5.4-nano")
+
+// then should return high variant
+expect(variant).toBe("gpt-5-4-nano-high")
+})

it("should handle dots in GPT-5.1 codex variants", () => {
// given a GPT-5.1-codex model ID
const variant = getHighVariant("gpt-5.1-codex")
@@ -65,6 +65,8 @@ const HIGH_VARIANT_MAP: Record<string, string> = {

"gpt-5-4": "gpt-5-4-high",
"gpt-5-4-chat-latest": "gpt-5-4-chat-latest-high",
"gpt-5-4-pro": "gpt-5-4-pro-high",
+"gpt-5-4-mini": "gpt-5-4-mini-high",
+"gpt-5-4-nano": "gpt-5-4-nano-high",
// Antigravity (Google)
"antigravity-gemini-3-1-pro": "antigravity-gemini-3-1-pro-high",
"antigravity-gemini-3-flash": "antigravity-gemini-3-flash-high",
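For context, the map above is consumed by `getHighVariant` in the think-mode switcher. A minimal sketch of that lookup, assuming a dot-to-dash normalization step implied by the "should handle dots" test case (the real function body is not shown in this diff, and only a few map entries are reproduced):

```typescript
// Hypothetical sketch: map keys use dashes ("gpt-5-4-mini") while model IDs
// may use dots ("gpt-5.4-mini"), so the lookup normalizes dots to dashes.
const HIGH_VARIANT_MAP: Record<string, string> = {
  "gpt-5-4": "gpt-5-4-high",
  "gpt-5-4-mini": "gpt-5-4-mini-high",
  "gpt-5-4-nano": "gpt-5-4-nano-high",
};

function getHighVariant(modelId: string): string | undefined {
  const key = modelId.replace(/\./g, "-"); // "gpt-5.4-mini" -> "gpt-5-4-mini"
  return HIGH_VARIANT_MAP[key];
}
```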
@@ -361,19 +361,23 @@ describe("CATEGORY_MODEL_REQUIREMENTS", () => {

expect(fifth.model).toBe("k2p5")
})

-test("quick has valid fallbackChain with claude-haiku-4-5 as primary", () => {
+test("quick has valid fallbackChain with gpt-5.4-mini as primary and claude-haiku-4-5 as secondary", () => {
// given - quick category requirement
const quick = CATEGORY_MODEL_REQUIREMENTS["quick"]

// when - accessing quick requirement
-// then - fallbackChain exists with claude-haiku-4-5 as first entry
+// then - fallbackChain exists with gpt-5.4-mini as first entry, haiku as second
expect(quick).toBeDefined()
expect(quick.fallbackChain).toBeArray()
-expect(quick.fallbackChain.length).toBeGreaterThan(0)
+expect(quick.fallbackChain.length).toBeGreaterThan(1)

const primary = quick.fallbackChain[0]
-expect(primary.model).toBe("claude-haiku-4-5")
-expect(primary.providers[0]).toBe("anthropic")
+expect(primary.model).toBe("gpt-5.4-mini")
+expect(primary.providers).toContain("openai")
+
+const secondary = quick.fallbackChain[1]
+expect(secondary.model).toBe("claude-haiku-4-5")
+expect(secondary.providers).toContain("anthropic")
})

test("unspecified-low has valid fallbackChain with claude-sonnet-4-6 as primary", () => {
@@ -251,6 +251,10 @@ export const CATEGORY_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {

},
quick: {
fallbackChain: [
+{
+providers: ["openai", "github-copilot", "opencode"],
+model: "gpt-5.4-mini",
+},
{
providers: ["anthropic", "github-copilot", "opencode"],
model: "claude-haiku-4-5",
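Each `fallbackChain` entry above pairs a model with the providers that can serve it. A hedged sketch of how such a chain might be resolved against a user's configured providers — `resolveModel` is hypothetical, since the diff shows only the data shape, not the resolution logic:

```typescript
// Sketch of fallback-chain resolution (resolveModel is an illustrative name).
interface ChainEntry {
  providers: string[];
  model: string;
}

// Entries mirror the quick category's chain from the diff above.
const quickChain: ChainEntry[] = [
  { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.4-mini" },
  { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-haiku-4-5" },
];

// Walk the chain and return the first model whose provider is available.
function resolveModel(chain: ChainEntry[], available: Set<string>): string | undefined {
  for (const entry of chain) {
    const provider = entry.providers.find((p) => available.has(p));
    if (provider) return `${provider}/${entry.model}`;
  }
  return undefined;
}

console.log(resolveModel(quickChain, new Set(["anthropic"]))); // "anthropic/claude-haiku-4-5"
```

With only `anthropic` configured, quick falls through to Haiku; once `openai` is present, GPT-5.4 Mini wins as primary.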
@@ -95,7 +95,7 @@

| ultrabrain | gpt-5.4 xhigh | Hard logic |
| deep | gpt-5.3-codex medium | Autonomous problem-solving |
| artistry | gemini-3.1-pro high | Creative approaches |
-| quick | claude-haiku-4-5 | Trivial tasks |
+| quick | gpt-5.4-mini | Trivial tasks |
| unspecified-low | claude-sonnet-4-6 | Moderate effort |
| unspecified-high | claude-opus-4-6 max | High effort |
| writing | kimi-k2p5 | Documentation |
@@ -149,9 +149,9 @@ Approach:

</Category_Context>

<Caller_Warning>
-THIS CATEGORY USES A LESS CAPABLE MODEL (claude-haiku-4-5).
+THIS CATEGORY USES A SMALLER/FASTER MODEL (gpt-5.4-mini).

-The model executing this task has LIMITED reasoning capacity. Your prompt MUST be:
+The model executing this task is optimized for speed over depth. Your prompt MUST be:

**EXHAUSTIVELY EXPLICIT** - Leave NOTHING to interpretation:
1. MUST DO: List every required action as atomic, numbered steps
@@ -159,10 +159,9 @@ The model executing this task has LIMITED reasoning capacity. Your prompt MUST b

3. EXPECTED OUTPUT: Describe exact success criteria with concrete examples

**WHY THIS MATTERS:**
-- Less capable models WILL deviate without explicit guardrails
-- Vague instructions → unpredictable results
-- Implicit expectations → missed requirements
+- Smaller models benefit from explicit guardrails
+- Vague instructions may lead to unpredictable results
+- Implicit expectations may be missed

**PROMPT STRUCTURE (MANDATORY):**
\`\`\`
TASK: [One-sentence goal]
@@ -287,7 +286,7 @@ export const DEFAULT_CATEGORIES: Record<string, CategoryConfig> = {

ultrabrain: { model: "openai/gpt-5.4", variant: "xhigh" },
deep: { model: "openai/gpt-5.3-codex", variant: "medium" },
artistry: { model: "google/gemini-3.1-pro", variant: "high" },
-quick: { model: "anthropic/claude-haiku-4-5" },
+quick: { model: "openai/gpt-5.4-mini" },
"unspecified-low": { model: "anthropic/claude-sonnet-4-6" },
"unspecified-high": { model: "anthropic/claude-opus-4-6", variant: "max" },
writing: { model: "kimi-for-coding/k2p5" },