Compare commits

...

46 Commits

Author SHA1 Message Date
github-actions[bot]
c77c9ceb53 release: v3.1.9 2026-01-30 14:15:54 +00:00
YeonGyu-Kim
8c2625cfb0 🏆 test: optimize test suite with FakeTimers and race condition fixes (#1284)
* fix: exclude prompt/permission from plan agent config

The plan agent should inherit only the model settings from Prometheus,
not the prompt or permissions. This ensures the plan agent keeps
OpenCode's default behavior while overriding only the model.
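
A minimal sketch of the intended shape, assuming illustrative field names (not taken from the diff):

```typescript
// Hypothetical sketch: the plan agent copies only the model settings from
// Prometheus; prompt and permission deliberately fall back to OpenCode defaults.
const prometheusConfig = {
  model: "anthropic/claude-opus-4-5",
  variant: "max",
  prompt: "…",                  // NOT inherited by the plan agent
  permission: { edit: "deny" }, // NOT inherited by the plan agent
}

const planAgentConfig = {
  model: prometheusConfig.model,
  variant: prometheusConfig.variant,
}
```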

* test(todo-continuation-enforcer): use FakeTimers for 15x faster tests

- Add custom FakeTimers implementation (~100 lines)
- Replace all real setTimeout waits with fakeTimers.advanceBy()
- Test time: 104.6s → 7.01s
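
A FakeTimers helper along these lines would fit the description (a sketch; the PR's ~100-line version also handles clearTimeout and nested scheduling):

```typescript
// Hypothetical minimal FakeTimers: swap setTimeout for a virtual clock.
type Scheduled = { due: number; fn: () => void }

export function createFakeTimers() {
  const queue: Scheduled[] = []
  let now = 0
  const realSetTimeout = globalThis.setTimeout

  // Patch setTimeout so code under test schedules onto the virtual clock.
  globalThis.setTimeout = ((fn: () => void, ms = 0) => {
    queue.push({ due: now + ms, fn })
    return 0
  }) as unknown as typeof setTimeout

  return {
    // Advance the virtual clock and fire every callback that has come due.
    advanceBy(ms: number) {
      now += ms
      const due = queue.filter((t) => t.due <= now).sort((a, b) => a.due - b.due)
      for (const t of due) queue.splice(queue.indexOf(t), 1)
      for (const t of due) t.fn()
    },
    restore() {
      globalThis.setTimeout = realSetTimeout
    },
  }
}
```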

* test(callback-server): fix race conditions with Promise.all and Bun.fetch

- Use Bun.fetch.bind(Bun) to avoid globalThis.fetch mock interference
- Use Promise.all pattern for concurrent fetch/waitForCallback
- Add Bun.sleep(10) in afterEach for port release
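
The race-free pattern could look roughly like this (`server` and its route are placeholders for the suite's helpers):

```typescript
import { test, expect } from "bun:test"

// Placeholder for the suite's callback-server helper.
declare const server: { url: string; waitForCallback(): Promise<unknown> }

test("callback is received without racing the request", async () => {
  const fetch = Bun.fetch.bind(Bun) // real fetch, immune to a mocked globalThis.fetch

  // Start waiting *before* firing the request, then await both together,
  // so the callback can never arrive before the listener is attached.
  const [response] = await Promise.all([
    fetch(`${server.url}/callback`),
    server.waitForCallback(),
  ])
  expect(response.ok).toBe(true)
})
```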

* test(concurrency): replace placeholder assertions with getCount checks

Replace 6 meaningless expect(true).toBe(true) assertions with
actual getCount() verifications to improve test quality
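
For illustration (the `getCount` signature is assumed from the commit message):

```typescript
import { test, expect } from "bun:test"

declare function getCount(): number // placeholder for the suite's helper

test("records every concurrent operation", () => {
  // Before: expect(true).toBe(true), an assertion that can never fail.
  // After: verify the work actually happened.
  expect(getCount()).toBe(6)
})
```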

* refactor(config-handler): simplify planDemoteConfig creation

Remove the unnecessary IIFE and destructuring; use a direct spread instead

* test(executor): use FakeTimeouts for faster tests

- Add custom FakeTimeouts implementation
- Replace setTimeout waits with fakeTimeouts.advanceBy()
- Test time reduced from ~26s to ~6.8s

* test: fix gemini model mock for artistry unstable mode

* test: fix model list mock payload shape

* test: mock provider models for artistry category

---------

Co-authored-by: justsisyphus <justsisyphus@users.noreply.github.com>
2026-01-30 22:10:52 +09:00
github-actions[bot]
3ced20d1ab @kunal70006 has signed the CLA in code-yeongyu/oh-my-opencode#1282 2026-01-30 09:56:07 +00:00
github-actions[bot]
fb02cc9e95 @Zacks-Zhang has signed the CLA in code-yeongyu/oh-my-opencode#1280 2026-01-30 08:51:59 +00:00
justsisyphus
80ee52fe3b fix: improve model resolution with client API fallback and explicit model passing
- fetchAvailableModels now falls back to client.model.list() when the cache is empty
- Resolution order: provider-models cache → models.json → client API (3-tier fallback)
- look-at tool now explicitly passes the registered agent's model to session.prompt
- Ensures multimodal-looker uses the correctly resolved model (e.g., gemini-3-flash-preview)
- Add comprehensive tests for fuzzy matching and fallback scenarios
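
A condensed sketch of that resolution order (helper names are placeholders; the real signatures live in src/shared):

```typescript
// Hypothetical 3-tier fallback for model discovery.
type Client = { model: { list(): Promise<{ id: string }[]> } }

// Placeholders standing in for the real cache/file readers:
function readProviderModelsCache(): string[] { return [] }
function readModelsJson(): string[] { return [] }

export async function fetchAvailableModels(client: Client): Promise<string[]> {
  const cached = readProviderModelsCache() // tier 1: provider-models cache
  if (cached.length > 0) return cached

  const fromDisk = readModelsJson()        // tier 2: bundled models.json
  if (fromDisk.length > 0) return fromDisk

  const live = await client.model.list()   // tier 3: live client API
  return live.map((m) => m.id)
}
```
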
2026-01-30 16:57:21 +09:00
github-actions[bot]
2f7e188cb5 @Hisir0909 has signed the CLA in code-yeongyu/oh-my-opencode#1275 2026-01-30 07:33:44 +00:00
justsisyphus
f8be01c6dd test: update Atlas fallback test and misc code improvements
- Update Atlas fallback test to expect k2p5 as primary (kimi-for-coding)

- Minor improvements to connected-providers-cache and utils

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-30 16:19:02 +09:00
justsisyphus
0dbec08923 feat(cli): add kimi-for-coding provider to model fallback
- Add kimiForCoding field to ProviderAvailability interface

- Add kimi-for-coding provider mapping in isProviderAvailable

- Include kimi-for-coding in Sisyphus fallback chain for non-max plan
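
In sketch form (the surrounding fields are illustrative; only kimiForCoding comes from the commit message):

```typescript
// Hypothetical shape of the availability mapping.
interface ProviderAvailability {
  anthropic: boolean
  openai: boolean
  kimiForCoding: boolean // newly added field
}

function isProviderAvailable(provider: string, avail: ProviderAvailability): boolean {
  switch (provider) {
    case "anthropic":       return avail.anthropic
    case "openai":          return avail.openai
    case "kimi-for-coding": return avail.kimiForCoding // new mapping
    default:                return false
  }
}
```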

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-30 16:19:02 +09:00
justsisyphus
691fa8b815 refactor(sisyphus-junior): extract MODE constant and add export
- Add AgentMode type import and MODE constant

- Export mode on createSisyphusJuniorAgentWithOverrides function

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-30 16:19:02 +09:00
justsisyphus
a73d806d4e docs: update explore agent model and category descriptions
- Change explore agent from Grok Code to Claude Haiku 4.5

- Update deep category description for clarity

- Fix Momus fallback chain order

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-30 16:19:02 +09:00
justsisyphus
a424f81cd5 docs: update Sisyphus fallback chain across all documentation
Update Sisyphus fallback chain to include gpt-5.2-codex and gemini-3-pro

Files: AGENTS.md, README*.md, src/agents/AGENTS.md

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-30 16:19:02 +09:00
justsisyphus
1187a02020 fix: Atlas respects fallbackChain, always refresh provider-models cache
- Remove uiSelectedModel from Atlas model resolution (use k2p5 as primary)
- Always overwrite provider-models.json on session start to prevent stale cache
2026-01-30 16:19:02 +09:00
Junho Yeo
3074434887 fix: use correct gh api command for starring repo (#1274)
`gh repo star` is not a valid GitHub CLI command.
Use `gh api --silent --method PUT /user/starred/OWNER/REPO` instead.
2026-01-30 15:58:56 +09:00
justsisyphus
6bb2854162 Merge branch 'omo-avail' into dev 2026-01-30 15:28:20 +09:00
justsisyphus
e08904a27a feat: add artistry category to ultrawork-mode specialist delegation
- Add oracle vs artistry distinction in MANDATORY CERTAINTY PROTOCOL
- Update WHEN IN DOUBT examples with both delegation options
- Add artistry to IF YOU ENCOUNTER A BLOCKER section
- Add 'Hard problem (non-conventional)' row to AGENTS UTILIZATION table
- Update analyze-mode message with artistry specialist option

Oracle: conventional problems (architecture, debugging, complex logic)
Artistry: non-conventional problems (different approach needed)
2026-01-30 15:19:38 +09:00
justsisyphus
0188d69233 test: add requiresModel and isModelAvailable tests 2026-01-30 15:11:32 +09:00
justsisyphus
2c74f608f0 feat(delegate-task, agents): check requiresModel for conditional activation 2026-01-30 15:11:27 +09:00
justsisyphus
baefd16b3f feat(shared): add requiresModel field and isModelAvailable helper 2026-01-30 15:11:19 +09:00
justsisyphus
b1b4578906 feat: add opencode/kimi-k2.5-free fallback and prioritize kimi for atlas 2026-01-30 15:10:38 +09:00
justsisyphus
9d20a5b11c feat: add kimi-for-coding provider to installer and fix model ID to k2p5 2026-01-30 15:08:26 +09:00
justsisyphus
d2d8d1a782 feat: add kimi-k2.5 to agent fallback chains and update model catalog
- sisyphus: opus → kimi-k2.5 → glm-4.7 → gpt-5.2-codex → gemini-3-pro
- atlas: sonnet-4-5 → kimi-k2.5 → gpt-5.2 → gemini-3-pro
- prometheus/metis: opus → kimi-k2.5 → gpt-5.2 → gemini-3-pro
- multimodal-looker: gemini-flash → gpt-5.2 → glm-4.6v → kimi-k2.5 → haiku → gpt-5-nano
- visual-engineering: remove gpt-5.2 from chain
- ultrabrain: reorder to gpt-5.2-codex → gemini-3-pro → opus
- Add cross-provider fuzzy match for model resolution
- Update all documentation (AGENTS.md, features.md, configurations.md, category-skill-guide.md)
2026-01-30 14:53:50 +09:00
justsisyphus
10bdb6c694 chore: update artistry category description for creative problem-solving 2026-01-30 14:53:50 +09:00
justsisyphus
5f243e2d3a chore: add glm-4.7 to visual-engineering fallback chain 2026-01-30 14:53:50 +09:00
justsisyphus
82a47ff928 chore: add code style requirements to ultrabrain prompt
- MUST search existing codebase for patterns before writing code
- MUST match project's existing conventions
- MUST write readable, human-friendly code
2026-01-30 14:53:50 +09:00
justsisyphus
c06f38693e refactor: revamp ultrabrain category with deep work mindset
- Add variant: max to ultrabrain's gemini-3-pro fallback entry
- Rename STRATEGIC_CATEGORY_PROMPT_APPEND to ULTRABRAIN_CATEGORY_PROMPT_APPEND
- Keep original strategic advisor prompt content (no micromanagement instructions)
- Update description: use only for genuinely hard tasks, give clear goals only
- Update tests to match renamed constant
2026-01-30 14:53:50 +09:00
justsisyphus
6e9cb7ecd8 chore: add variant max to momus opus-4-5 fallback entry 2026-01-30 14:53:50 +09:00
justsisyphus
b731399edf chore: prioritize gemini-3-pro over opus in oracle fallback chain
- Move gemini-3-pro above claude-opus-4-5 in oracle's fallbackChain
- Add variant: "max" to gemini-3-pro entry
2026-01-30 14:53:50 +09:00
github-actions[bot]
0a28f6a790 @gabriel-ecegi has signed the CLA in code-yeongyu/oh-my-opencode#1271 2026-01-30 05:13:19 +00:00
justsisyphus
4e529b74e0 revert: remove unnecessary NODE_AUTH_TOKEN from publish.yml (OIDC works) 2026-01-30 13:54:46 +09:00
justsisyphus
90eec0a369 fix: add NODE_AUTH_TOKEN env to main publish workflow 2026-01-30 13:50:55 +09:00
justsisyphus
3b5d18e6bf fix(agents): exclude subagents from UI model selection override
Subagents (explore, librarian, oracle, etc.) now use their own fallback
chain instead of inheriting the UI-selected model. This fixes the issue
where the explore agent was incorrectly using Opus instead of Haiku.

- Add AgentMode type and static mode property to AgentFactory
- Each agent declares its own mode via factory.mode = MODE pattern
- createBuiltinAgents() checks source.mode before passing uiSelectedModel
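
Condensed, the gating amounts to something like this (a sketch; the actual change appears in the createBuiltinAgents diff further down):

```typescript
type AgentMode = "primary" | "subagent" | "all"
type AgentFactory = ((model: string) => unknown) & { mode: AgentMode }

// Only primary agents (sisyphus, atlas) inherit the UI-selected model;
// subagents fall through to their own fallback chains.
function pickUiModel(source: AgentFactory, uiSelectedModel?: string): string | undefined {
  return source.mode === "primary" ? uiSelectedModel : undefined
}
```
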
2026-01-30 13:49:40 +09:00
justsisyphus
67aeb9cb8c chore: replace big-pickle model with glm-4.7-free 2026-01-30 13:44:04 +09:00
justsisyphus
b1c1f02172 fix: add NODE_AUTH_TOKEN env to publish step 2026-01-30 13:36:20 +09:00
justsisyphus
2b39d119cd fix: restore registry-url for npm auth with new granular token 2026-01-30 13:21:35 +09:00
justsisyphus
afa2ece847 fix: remove registry-url from setup-node to enable OIDC auth 2026-01-30 13:11:44 +09:00
justsisyphus
390c25197f fix: manually create .npmrc without token for OIDC
setup-node with registry-url injects the NODE_AUTH_TOKEN secret, which has been revoked.
Create .npmrc manually with an empty _authToken to force OIDC authentication.
2026-01-30 12:57:15 +09:00
justsisyphus
9e07b143df fix: match main workflow's OIDC setup exactly
The main workflow works with registry-url + NPM_CONFIG_PROVENANCE.
Removed all extra env vars and debugging steps to match that working pattern.
2026-01-30 12:52:57 +09:00
justsisyphus
ad95880198 fix(start-work): restore atlas agent and proper model fallback chain
- Restore agent: 'atlas' in start-work command (removed by PR #1201)
- Fix model-resolver to properly iterate through fallback chain providers
- Remove broken parent model inheritance that bypassed fallback logic
- Add model-suggestion-retry for runtime API failures (cherry-pick 800846c1)

Fixes #1200
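
The chain iteration reduces to something like this (a sketch, not the resolver's real signature):

```typescript
// Try each fallback-chain entry in order; the first available model wins.
function resolveFromChain(chain: string[], available: Set<string>): string | undefined {
  for (const candidate of chain) {
    if (available.has(candidate)) return candidate
  }
  return undefined // caller can fall back to an ultimate default
}
```
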
2026-01-30 12:52:46 +09:00
justsisyphus
86088d3a6e fix: remove registry-url to enable npm OIDC auto-detection
- Remove registry-url from setup-node (was injecting NODE_AUTH_TOKEN)
- Add npm version check and auto-upgrade for OIDC support (11.5.1+)
- Add explicit --registry flag to npm publish
- Remove empty NODE_AUTH_TOKEN/NPM_CONFIG_USERCONFIG env vars that were breaking OIDC
2026-01-30 12:47:15 +09:00
justsisyphus
ae8a6c5eb8 refactor: replace console.log/warn/error with file-based log() for silent logging
Replace all console output with the shared logger, which writes to
/tmp/oh-my-opencode.log instead of stdout/stderr.

Files changed:
- index.ts: console.warn → log()
- hook-message-injector/injector.ts: console.warn → log()
- lsp/client.ts: console.error → log()
- ast-grep/downloader.ts: console.log/error → log()
- session-recovery/index.ts: console.error → log()
- comment-checker/downloader.ts: console.log/error → log()

CLI tools (install.ts, doctor, etc.) retain console output for UX.
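
A minimal sketch of such a file-based log() (the shared logger's actual implementation may differ):

```typescript
import { appendFileSync } from "node:fs"

const LOG_FILE = "/tmp/oh-my-opencode.log"

// Append to the log file instead of stdout/stderr so plugin output
// never pollutes the interactive UI.
export function log(...args: unknown[]): void {
  const line = `[${new Date().toISOString()}] ${args.map(String).join(" ")}\n`
  appendFileSync(LOG_FILE, line)
}
```
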
2026-01-30 12:45:37 +09:00
justsisyphus
db538c7e6b fix(ci): override env vars to disable token auth, force OIDC 2026-01-30 12:41:00 +09:00
justsisyphus
dfed2abd3e fix(ci): also remove NPM_CONFIG_USERCONFIG .npmrc and unset tokens for OIDC 2026-01-30 12:37:12 +09:00
justsisyphus
300a3fdc14 fix(ci): remove .npmrc to enable pure OIDC auth for npm publish 2026-01-30 12:33:51 +09:00
justsisyphus
c993cf007f fix(ci): remove registry-url to use pure OIDC auth for npm publish 2026-01-30 12:29:33 +09:00
justsisyphus
3d7de0a050 fix(publish-platform): use 7z on Windows, simplify skip logic 2026-01-30 12:25:30 +09:00
justsisyphus
8e19ffdce4 ci(publish-platform): separate build/publish jobs with OIDC provenance
- Split into two jobs: build (compile binaries) and publish (npm publish)
- Build job uploads compressed artifacts (tar.gz/zip)
- Publish job downloads artifacts and uses OIDC Trusted Publishing
- Removes the NODE_AUTH_TOKEN dependency; uses npm provenance instead
- Increases the timeout for large binary uploads (40-120MB)
- Increases build parallelism to 7 (all platforms simultaneously)
- Fixes the npm classic token deprecation issue

Benefits:
- Fresh OIDC token at publish time avoids timeout issues
- No token rotation needed (OIDC is ephemeral)
- Build failures isolated from publish failures
- Artifacts can be reused if publish fails
2026-01-30 12:21:24 +09:00
69 changed files with 2114 additions and 417 deletions

View File

@@ -28,16 +28,20 @@ permissions:
id-token: write
jobs:
publish-platform:
# Use windows-latest for Windows to avoid cross-compilation segfault (oven-sh/bun#18416)
# Fixes: #873, #844
# =============================================================================
# Job 1: Build binaries for all platforms
# - Windows builds on windows-latest (avoid bun cross-compile segfault)
# - All other platforms build on ubuntu-latest
# - Uploads compressed artifacts for the publish job
# =============================================================================
build:
runs-on: ${{ matrix.platform == 'windows-x64' && 'windows-latest' || 'ubuntu-latest' }}
defaults:
run:
shell: bash
strategy:
fail-fast: false
max-parallel: 2
max-parallel: 7
matrix:
platform: [darwin-arm64, darwin-x64, linux-x64, linux-arm64, linux-x64-musl, linux-arm64-musl, windows-x64]
steps:
@@ -47,11 +51,6 @@ jobs:
with:
bun-version: latest
- uses: actions/setup-node@v4
with:
node-version: "24"
registry-url: "https://registry.npmjs.org"
- name: Install dependencies
run: bun install
env:
@@ -63,15 +62,20 @@ jobs:
PKG_NAME="oh-my-opencode-${{ matrix.platform }}"
VERSION="${{ inputs.version }}"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/${PKG_NAME}/${VERSION}")
# Convert platform name for output (replace - with _)
PLATFORM_KEY="${{ matrix.platform }}"
PLATFORM_KEY="${PLATFORM_KEY//-/_}"
if [ "$STATUS" = "200" ]; then
echo "skip=true" >> $GITHUB_OUTPUT
echo "skip_${PLATFORM_KEY}=true" >> $GITHUB_OUTPUT
echo "✓ ${PKG_NAME}@${VERSION} already published"
else
echo "skip=false" >> $GITHUB_OUTPUT
echo "skip_${PLATFORM_KEY}=false" >> $GITHUB_OUTPUT
echo "→ ${PKG_NAME}@${VERSION} needs publishing"
fi
- name: Update version
- name: Update version in package.json
if: steps.check.outputs.skip != 'true'
run: |
VERSION="${{ inputs.version }}"
@@ -99,15 +103,109 @@ jobs:
fi
bun build src/cli/index.ts --compile --minify --target=$TARGET --outfile=$OUTPUT
echo "Built binary:"
ls -lh "$OUTPUT"
- name: Compress binary
if: steps.check.outputs.skip != 'true'
run: |
PLATFORM="${{ matrix.platform }}"
cd packages/${PLATFORM}
if [ "$PLATFORM" = "windows-x64" ]; then
# Windows: use 7z (pre-installed on windows-latest)
7z a -tzip ../../binary-${PLATFORM}.zip bin/ package.json
else
# Unix: use tar.gz
tar -czvf ../../binary-${PLATFORM}.tar.gz bin/ package.json
fi
cd ../..
echo "Compressed artifact:"
ls -lh binary-${PLATFORM}.*
- name: Upload artifact
if: steps.check.outputs.skip != 'true'
uses: actions/upload-artifact@v4
with:
name: binary-${{ matrix.platform }}
path: |
binary-${{ matrix.platform }}.tar.gz
binary-${{ matrix.platform }}.zip
retention-days: 1
if-no-files-found: error
# =============================================================================
# Job 2: Publish all platforms using OIDC/Provenance
# - Runs on ubuntu-latest for ALL platforms (just downloading artifacts)
# - Uses npm Trusted Publishing (OIDC) - no NODE_AUTH_TOKEN needed
# - Fresh OIDC token at publish time avoids timeout issues
# =============================================================================
publish:
needs: build
runs-on: ubuntu-latest
strategy:
fail-fast: false
max-parallel: 2
matrix:
platform: [darwin-arm64, darwin-x64, linux-x64, linux-arm64, linux-x64-musl, linux-arm64-musl, windows-x64]
steps:
- name: Check if already published
id: check
run: |
PKG_NAME="oh-my-opencode-${{ matrix.platform }}"
VERSION="${{ inputs.version }}"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/${PKG_NAME}/${VERSION}")
if [ "$STATUS" = "200" ]; then
echo "skip=true" >> $GITHUB_OUTPUT
echo "✓ ${PKG_NAME}@${VERSION} already published, skipping"
else
echo "skip=false" >> $GITHUB_OUTPUT
echo "→ ${PKG_NAME}@${VERSION} will be published"
fi
- name: Download artifact
if: steps.check.outputs.skip != 'true'
uses: actions/download-artifact@v4
with:
name: binary-${{ matrix.platform }}
path: .
- name: Extract artifact
if: steps.check.outputs.skip != 'true'
run: |
PLATFORM="${{ matrix.platform }}"
mkdir -p packages/${PLATFORM}
if [ "$PLATFORM" = "windows-x64" ]; then
unzip binary-${PLATFORM}.zip -d packages/${PLATFORM}/
else
tar -xzvf binary-${PLATFORM}.tar.gz -C packages/${PLATFORM}/
fi
echo "Extracted contents:"
ls -la packages/${PLATFORM}/
ls -la packages/${PLATFORM}/bin/
- uses: actions/setup-node@v4
if: steps.check.outputs.skip != 'true'
with:
node-version: "24"
registry-url: "https://registry.npmjs.org"
- name: Publish ${{ matrix.platform }}
if: steps.check.outputs.skip != 'true'
run: |
cd packages/${{ matrix.platform }}
TAG_ARG=""
if [ -n "${{ inputs.dist_tag }}" ]; then
TAG_ARG="--tag ${{ inputs.dist_tag }}"
fi
npm publish --access public $TAG_ARG
npm publish --access public --provenance $TAG_ARG
env:
NPM_CONFIG_PROVENANCE: false
NODE_AUTH_TOKEN: ${{ secrets.NODE_AUTH_TOKEN }}
NPM_CONFIG_PROVENANCE: true
timeout-minutes: 15

View File

@@ -98,13 +98,13 @@ oh-my-opencode/
| Agent | Model | Purpose |
|-------|-------|---------|
| Sisyphus | anthropic/claude-opus-4-5 | Primary orchestrator |
| Atlas | anthropic/claude-opus-4-5 | Master orchestrator |
| Sisyphus | anthropic/claude-opus-4-5 | Primary orchestrator (fallback: kimi-k2.5 → glm-4.7 → gpt-5.2-codex → gemini-3-pro) |
| Atlas | anthropic/claude-sonnet-4-5 | Master orchestrator (fallback: kimi-k2.5 → gpt-5.2) |
| oracle | openai/gpt-5.2 | Consultation, debugging |
| librarian | opencode/big-pickle | Docs, GitHub search |
| explore | opencode/gpt-5-nano | Fast codebase grep |
| librarian | zai-coding-plan/glm-4.7 | Docs, GitHub search (fallback: glm-4.7-free) |
| explore | anthropic/claude-haiku-4-5 | Fast codebase grep (fallback: gpt-5-mini → gpt-5-nano) |
| multimodal-looker | google/gemini-3-flash | PDF/image analysis |
| Prometheus | anthropic/claude-opus-4-5 | Strategic planning |
| Prometheus | anthropic/claude-opus-4-5 | Strategic planning (fallback: kimi-k2.5 → gpt-5.2) |
## COMMANDS

View File

@@ -189,7 +189,7 @@ Windows から Linux に初めて乗り換えた時のこと、自分の思い
- Oracle: 設計、デバッグ (GPT 5.2 Medium)
- Frontend UI/UX Engineer: フロントエンド開発 (Gemini 3 Pro)
- Librarian: 公式ドキュメント、オープンソース実装、コードベース探索 (Claude Sonnet 4.5)
- Explore: 超高速コードベース探索 (Contextual Grep) (Grok Code)
- Explore: 超高速コードベース探索 (Contextual Grep) (Claude Haiku 4.5)
- Full LSP / AstGrep Support: 決定的にリファクタリングしましょう。
- Todo Continuation Enforcer: 途中で諦めたら、続行を強制します。これがシジフォスに岩を転がし続けさせる秘訣です。
- Comment Checker: AIが過剰なコメントを付けないようにします。シジフォスが生成したコードは、人間が書いたものと区別がつかないべきです。

View File

@@ -197,7 +197,7 @@ Hey please read this readme and tell me why it is different from other agent har
- Oracle: 디자인, 디버깅 (GPT 5.2 Medium)
- Frontend UI/UX Engineer: 프론트엔드 개발 (Gemini 3 Pro)
- Librarian: 공식 문서, 오픈 소스 구현, 코드베이스 탐색 (Claude Sonnet 4.5)
- Explore: 엄청나게 빠른 코드베이스 탐색 (Contextual Grep) (Grok Code)
- Explore: 엄청나게 빠른 코드베이스 탐색 (Contextual Grep) (Claude Haiku 4.5)
- 완전한 LSP / AstGrep 지원: 결정적으로 리팩토링합니다.
- TODO 연속 강제: 에이전트가 중간에 멈추면 계속하도록 강제합니다. **이것이 Sisyphus가 그 바위를 굴리게 하는 것입니다.**
- 주석 검사기: AI가 과도한 주석을 추가하는 것을 방지합니다. Sisyphus가 생성한 코드는 인간이 작성한 것과 구별할 수 없어야 합니다.

View File

@@ -196,7 +196,7 @@ Meet our main agent: Sisyphus (Opus 4.5 High). Below are the tools Sisyphus uses
- Oracle: Design, debugging (GPT 5.2 Medium)
- Frontend UI/UX Engineer: Frontend development (Gemini 3 Pro)
- Librarian: Official docs, open source implementations, codebase exploration (Claude Sonnet 4.5)
- Explore: Blazing fast codebase exploration (Contextual Grep) (Grok Code)
- Explore: Blazing fast codebase exploration (Contextual Grep) (Claude Haiku 4.5)
- Full LSP / AstGrep Support: Refactor decisively.
- Todo Continuation Enforcer: Forces the agent to continue if it quits halfway. **This is what keeps Sisyphus rolling that boulder.**
- Comment Checker: Prevents AI from adding excessive comments. Code generated by Sisyphus should be indistinguishable from human-written code.

View File

@@ -193,7 +193,7 @@
- Oracle:设计、调试 (GPT 5.2 Medium)
- Frontend UI/UX Engineer:前端开发 (Gemini 3 Pro)
- Librarian:官方文档、开源实现、代码库探索 (Claude Sonnet 4.5)
- Explore:极速代码库探索(上下文感知 Grep)(Grok Code)
- Explore:极速代码库探索(上下文感知 Grep)(Claude Haiku 4.5)
- 完整 LSP / AstGrep 支持:果断重构。
- Todo 继续执行器:如果智能体中途退出,强制它继续。**这就是让 Sisyphus 继续推动巨石的关键。**
- 注释检查器:防止 AI 添加过多注释。Sisyphus 生成的代码应该与人类编写的代码无法区分。

View File

@@ -23,6 +23,7 @@ A Category is an agent configuration preset optimized for specific domains.
|----------|---------------|-----------|
| `visual-engineering` | `google/gemini-3-pro` | Frontend, UI/UX, design, styling, animation |
| `ultrabrain` | `openai/gpt-5.2-codex` (xhigh) | Deep logical reasoning, complex architecture decisions requiring extensive analysis |
| `deep` | `openai/gpt-5.2-codex` (medium) | Goal-oriented autonomous problem-solving. Thorough research before action. For hairy problems requiring deep understanding. |
| `artistry` | `google/gemini-3-pro` (max) | Highly creative/artistic tasks, novel ideas |
| `quick` | `anthropic/claude-haiku-4-5` | Trivial tasks - single file changes, typo fixes, simple modifications |
| `unspecified-low` | `anthropic/claude-sonnet-4-5` | Tasks that don't fit other categories, low effort required |

View File

@@ -894,15 +894,15 @@ Each agent has a defined provider priority chain. The system tries providers in
| Agent | Model (no prefix) | Provider Priority Chain |
|-------|-------------------|-------------------------|
| **Sisyphus** | `claude-opus-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **oracle** | `gpt-5.2` | openai → anthropic → google → github-copilot → opencode |
| **librarian** | `big-pickle` | opencode → github-copilot → anthropic |
| **explore** | `gpt-5-nano` | anthropic → opencode |
| **multimodal-looker** | `gemini-3-flash` | google → openai → zai-coding-plan → anthropic → opencode |
| **Prometheus (Planner)** | `claude-opus-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **Metis (Plan Consultant)** | `claude-sonnet-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **Momus (Plan Reviewer)** | `claude-opus-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **Atlas** | `claude-sonnet-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **Sisyphus** | `claude-opus-4-5` | anthropic → kimi-for-coding → zai-coding-plan → openai → google |
| **oracle** | `gpt-5.2` | openai → google → anthropic |
| **librarian** | `glm-4.7` | zai-coding-plan → opencode → anthropic |
| **explore** | `claude-haiku-4-5` | anthropic → github-copilot → opencode |
| **multimodal-looker** | `gemini-3-flash` | google → openai → zai-coding-plan → kimi-for-coding → anthropic → opencode |
| **Prometheus (Planner)** | `claude-opus-4-5` | anthropic → kimi-for-coding → openai → google |
| **Metis (Plan Consultant)** | `claude-opus-4-5` | anthropic → kimi-for-coding → openai → google |
| **Momus (Plan Reviewer)** | `gpt-5.2` | openai → anthropic → google |
| **Atlas** | `claude-sonnet-4-5` | anthropic → kimi-for-coding → openai → google |
### Category Provider Chains
@@ -910,13 +910,14 @@ Categories follow the same resolution logic:
| Category | Model (no prefix) | Provider Priority Chain |
|----------|-------------------|-------------------------|
| **visual-engineering** | `gemini-3-pro` | google → openai → anthropic → github-copilot → opencode |
| **ultrabrain** | `gpt-5.2-codex` | openai → anthropic → google → github-copilot → opencode |
| **artistry** | `gemini-3-pro` | google → openai → anthropic → github-copilot → opencode |
| **quick** | `claude-haiku-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **unspecified-low** | `claude-sonnet-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **unspecified-high** | `claude-opus-4-5` | anthropic → github-copilot → opencode → antigravity → google |
| **writing** | `gemini-3-flash` | google → openai → anthropic → github-copilot → opencode |
| **visual-engineering** | `gemini-3-pro` | google → anthropic → zai-coding-plan |
| **ultrabrain** | `gpt-5.2-codex` | openai → google → anthropic |
| **deep** | `gpt-5.2-codex` | openai → anthropic → google |
| **artistry** | `gemini-3-pro` | google → anthropic → openai |
| **quick** | `claude-haiku-4-5` | anthropic → google → opencode |
| **unspecified-low** | `claude-sonnet-4-5` | anthropic → openai → google |
| **unspecified-high** | `claude-opus-4-5` | anthropic → openai → google |
| **writing** | `gemini-3-flash` | google → anthropic → zai-coding-plan → openai |
### Checking Your Configuration

View File

@@ -10,19 +10,19 @@ Oh-My-OpenCode provides 10 specialized AI agents. Each has distinct expertise, o
| Agent | Model | Purpose |
|-------|-------|---------|
| **Sisyphus** | `anthropic/claude-opus-4-5` | **The default orchestrator.** Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). |
| **Sisyphus** | `anthropic/claude-opus-4-5` | **The default orchestrator.** Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: kimi-k2.5 → glm-4.7 → gpt-5.2-codex → gemini-3-pro. |
| **oracle** | `openai/gpt-5.2` | Architecture decisions, code review, debugging. Read-only consultation - stellar logical reasoning and deep analysis. Inspired by AmpCode. |
| **librarian** | `opencode/big-pickle` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Inspired by AmpCode. |
| **explore** | `opencode/gpt-5-nano` | Fast codebase exploration and contextual grep. Uses Gemini 3 Flash when Antigravity auth is configured, Haiku when Claude max20 is available, otherwise Grok. Inspired by Claude Code. |
| **multimodal-looker** | `google/gemini-3-flash` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Saves tokens by having another agent process media. |
| **librarian** | `zai-coding-plan/glm-4.7` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: glm-4.7-free → claude-sonnet-4-5. |
| **explore** | `anthropic/claude-haiku-4-5` | Fast codebase exploration and contextual grep. Fallback: gpt-5-mini → gpt-5-nano. |
| **multimodal-looker** | `google/gemini-3-flash` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: gpt-5.2 → glm-4.6v → kimi-k2.5 → claude-haiku-4-5 → gpt-5-nano. |
### Planning Agents
| Agent | Model | Purpose |
|-------|-------|---------|
| **Prometheus** | `anthropic/claude-opus-4-5` | Strategic planner with interview mode. Creates detailed work plans through iterative questioning. |
| **Metis** | `anthropic/claude-sonnet-4-5` | Plan consultant - pre-planning analysis. Identifies hidden intentions, ambiguities, and AI failure points. |
| **Momus** | `anthropic/claude-sonnet-4-5` | Plan reviewer - validates plans against clarity, verifiability, and completeness standards. |
| **Prometheus** | `anthropic/claude-opus-4-5` | Strategic planner with interview mode. Creates detailed work plans through iterative questioning. Fallback: kimi-k2.5 → gpt-5.2 → gemini-3-pro. |
| **Metis** | `anthropic/claude-opus-4-5` | Plan consultant - pre-planning analysis. Identifies hidden intentions, ambiguities, and AI failure points. Fallback: kimi-k2.5 → gpt-5.2 → gemini-3-pro. |
| **Momus** | `openai/gpt-5.2` | Plan reviewer - validates plans against clarity, verifiability, and completeness standards. Fallback: gpt-5.2 → claude-opus-4-5 → gemini-3-pro. |
### Invoking Agents

View File

@@ -213,7 +213,7 @@ If Z.ai is the only provider available, all agents will use GLM models:
#### OpenCode Zen
OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-5`, `opencode/gpt-5.2`, `opencode/gpt-5-nano`, and `opencode/big-pickle`.
OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-5`, `opencode/gpt-5.2`, `opencode/gpt-5-nano`, and `opencode/glm-4.7-free`.
When OpenCode Zen is the best available provider (no native or Copilot), these models are used:
@@ -222,7 +222,7 @@ When OpenCode Zen is the best available provider (no native or Copilot), these m
| **Sisyphus** | `opencode/claude-opus-4-5` |
| **Oracle** | `opencode/gpt-5.2` |
| **Explore** | `opencode/gpt-5-nano` |
| **Librarian** | `opencode/big-pickle` |
| **Librarian** | `opencode/glm-4.7-free` |
##### Setup
@@ -278,7 +278,7 @@ After everything is set up and working, ask the user:
If the user agrees, run:
```bash
gh repo star code-yeongyu/oh-my-opencode
gh api --silent --method PUT /user/starred/code-yeongyu/oh-my-opencode >/dev/null 2>&1 || true
```
**Important**: Only run the command if the user explicitly says yes. Never run it automatically without consent.

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode",
"version": "3.1.8",
"version": "3.1.9",
"description": "The Best AI Agent Harness - Batteries-Included OpenCode Plugin with Multi-Model Orchestration, Parallel Background Agents, and Crafted LSP/AST Tools",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -74,13 +74,13 @@
"typescript": "^5.7.3"
},
"optionalDependencies": {
"oh-my-opencode-darwin-arm64": "3.1.8",
"oh-my-opencode-darwin-x64": "3.1.8",
"oh-my-opencode-linux-arm64": "3.1.8",
"oh-my-opencode-linux-arm64-musl": "3.1.8",
"oh-my-opencode-linux-x64": "3.1.8",
"oh-my-opencode-linux-x64-musl": "3.1.8",
"oh-my-opencode-windows-x64": "3.1.8"
"oh-my-opencode-darwin-arm64": "3.1.9",
"oh-my-opencode-darwin-x64": "3.1.9",
"oh-my-opencode-linux-arm64": "3.1.9",
"oh-my-opencode-linux-arm64-musl": "3.1.9",
"oh-my-opencode-linux-x64": "3.1.9",
"oh-my-opencode-linux-x64-musl": "3.1.9",
"oh-my-opencode-windows-x64": "3.1.9"
},
"trustedDependencies": [
"@ast-grep/cli",

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode-darwin-arm64",
"version": "3.1.8",
"version": "3.1.9",
"description": "Platform-specific binary for oh-my-opencode (darwin-arm64)",
"license": "MIT",
"repository": {

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode-darwin-x64",
"version": "3.1.8",
"version": "3.1.9",
"description": "Platform-specific binary for oh-my-opencode (darwin-x64)",
"license": "MIT",
"repository": {

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode-linux-arm64-musl",
"version": "3.1.8",
"version": "3.1.9",
"description": "Platform-specific binary for oh-my-opencode (linux-arm64-musl)",
"license": "MIT",
"repository": {

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode-linux-arm64",
"version": "3.1.8",
"version": "3.1.9",
"description": "Platform-specific binary for oh-my-opencode (linux-arm64)",
"license": "MIT",
"repository": {

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode-linux-x64-musl",
"version": "3.1.8",
"version": "3.1.9",
"description": "Platform-specific binary for oh-my-opencode (linux-x64-musl)",
"license": "MIT",
"repository": {

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode-linux-x64",
"version": "3.1.8",
"version": "3.1.9",
"description": "Platform-specific binary for oh-my-opencode (linux-x64)",
"license": "MIT",
"repository": {

View File

@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode-windows-x64",
"version": "3.1.8",
"version": "3.1.9",
"description": "Platform-specific binary for oh-my-opencode (windows-x64)",
"license": "MIT",
"repository": {

View File

@@ -975,6 +975,38 @@
"created_at": "2026-01-29T17:03:24Z",
"repoId": 1108837393,
"pullRequestNo": 1254
},
{
"name": "gabriel-ecegi",
"id": 35489017,
"comment_id": 3821842363,
"created_at": "2026-01-30T05:13:15Z",
"repoId": 1108837393,
"pullRequestNo": 1271
},
{
"name": "Hisir0909",
"id": 76634394,
"comment_id": 3822248445,
"created_at": "2026-01-30T07:20:09Z",
"repoId": 1108837393,
"pullRequestNo": 1275
},
{
"name": "Zacks-Zhang",
"id": 16462428,
"comment_id": 3822585754,
"created_at": "2026-01-30T08:51:49Z",
"repoId": 1108837393,
"pullRequestNo": 1280
},
{
"name": "kunal70006",
"id": 62700112,
"comment_id": 3822849937,
"created_at": "2026-01-30T09:55:57Z",
"repoId": 1108837393,
"pullRequestNo": 1282
}
]
}

View File

@@ -25,15 +25,15 @@ agents/
## AGENT MODELS
| Agent | Model | Temp | Purpose |
|-------|-------|------|---------|
| Sisyphus | anthropic/claude-opus-4-5 | 0.1 | Primary orchestrator |
| Atlas | anthropic/claude-opus-4-5 | 0.1 | Master orchestrator |
| Sisyphus | anthropic/claude-opus-4-5 | 0.1 | Primary orchestrator (fallback: kimi-k2.5 → glm-4.7 → gpt-5.2-codex → gemini-3-pro) |
| Atlas | anthropic/claude-sonnet-4-5 | 0.1 | Master orchestrator (fallback: kimi-k2.5 → gpt-5.2) |
| oracle | openai/gpt-5.2 | 0.1 | Consultation, debugging |
| librarian | opencode/big-pickle | 0.1 | Docs, GitHub search |
| explore | opencode/gpt-5-nano | 0.1 | Fast contextual grep |
| librarian | zai-coding-plan/glm-4.7 | 0.1 | Docs, GitHub search (fallback: glm-4.7-free) |
| explore | anthropic/claude-haiku-4-5 | 0.1 | Fast contextual grep (fallback: gpt-5-mini → gpt-5-nano) |
| multimodal-looker | google/gemini-3-flash | 0.1 | PDF/image analysis |
| Prometheus | anthropic/claude-opus-4-5 | 0.1 | Strategic planning |
| Metis | anthropic/claude-sonnet-4-5 | 0.3 | Pre-planning analysis |
| Momus | anthropic/claude-sonnet-4-5 | 0.1 | Plan validation |
| Prometheus | anthropic/claude-opus-4-5 | 0.1 | Strategic planning (fallback: kimi-k2.5 → gpt-5.2) |
| Metis | anthropic/claude-opus-4-5 | 0.3 | Pre-planning analysis (fallback: kimi-k2.5 → gpt-5.2) |
| Momus | openai/gpt-5.2 | 0.1 | Plan validation (fallback: claude-opus-4-5) |
| Sisyphus-Junior | anthropic/claude-sonnet-4-5 | 0.1 | Category-spawned executor |
## HOW TO ADD

View File

@@ -1,5 +1,7 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import type { AgentMode, AgentPromptMetadata } from "./types"
const MODE: AgentMode = "primary"
import type { AvailableAgent, AvailableSkill, AvailableCategory } from "./dynamic-agent-prompt-builder"
import { buildCategorySkillsDelegationGuide } from "./dynamic-agent-prompt-builder"
import type { CategoryConfig } from "../config/schema"
@@ -530,7 +532,7 @@ export function createAtlasAgent(ctx: OrchestratorContext): AgentConfig {
return {
description:
"Orchestrates work via delegate_task() to complete ALL tasks in a todo list until fully done. (Atlas - OhMyOpenCode)",
mode: "primary" as const,
mode: MODE,
...(ctx.model ? { model: ctx.model } : {}),
temperature: 0.1,
prompt: buildDynamicOrchestratorPrompt(ctx),
@@ -539,6 +541,7 @@ export function createAtlasAgent(ctx: OrchestratorContext): AgentConfig {
...restrictions,
} as AgentConfig
}
createAtlasAgent.mode = MODE
export const atlasPromptMetadata: AgentPromptMetadata = {
category: "advisor",

View File

@@ -1,7 +1,9 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import type { AgentMode, AgentPromptMetadata } from "./types"
import { createAgentToolRestrictions } from "../shared/permission-compat"
const MODE: AgentMode = "subagent"
export const EXPLORE_PROMPT_METADATA: AgentPromptMetadata = {
category: "exploration",
cost: "FREE",
@@ -34,7 +36,7 @@ export function createExploreAgent(model: string): AgentConfig {
return {
description:
'Contextual grep for codebases. Answers "Where is X?", "Which file has Y?", "Find the code that does Z". Fire multiple in parallel for broad searches. Specify thoroughness: "quick" for basic, "medium" for moderate, "very thorough" for comprehensive analysis. (Explore - OhMyOpenCode)',
mode: "subagent" as const,
mode: MODE,
model,
temperature: 0.1,
...restrictions,
@@ -119,4 +121,4 @@ Use the right tool for the job:
Flood with parallel calls. Cross-validate findings across multiple tools.`,
}
}
createExploreAgent.mode = MODE

View File

@@ -1,7 +1,9 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import type { AgentMode, AgentPromptMetadata } from "./types"
import { createAgentToolRestrictions } from "../shared/permission-compat"
const MODE: AgentMode = "subagent"
export const LIBRARIAN_PROMPT_METADATA: AgentPromptMetadata = {
category: "exploration",
cost: "CHEAP",
@@ -31,7 +33,7 @@ export function createLibrarianAgent(model: string): AgentConfig {
return {
description:
"Specialized codebase understanding agent for multi-repository analysis, searching remote codebases, retrieving official documentation, and finding implementation examples using GitHub CLI, Context7, and Web Search. MUST BE USED when users ask to look up code in remote repositories, explain library internals, or find usage examples in open source. (Librarian - OhMyOpenCode)",
mode: "subagent" as const,
mode: MODE,
model,
temperature: 0.1,
...restrictions,
@@ -323,4 +325,4 @@ grep_app_searchGitHub(query: "useQuery")
`,
}
}
createLibrarianAgent.mode = MODE

View File

@@ -1,7 +1,9 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import type { AgentMode, AgentPromptMetadata } from "./types"
import { createAgentToolRestrictions } from "../shared/permission-compat"
const MODE: AgentMode = "subagent"
/**
* Metis - Plan Consultant Agent
*
@@ -311,7 +313,7 @@ export function createMetisAgent(model: string): AgentConfig {
return {
description:
"Pre-planning consultant that analyzes requests to identify hidden intentions, ambiguities, and AI failure points. (Metis - OhMyOpenCode)",
mode: "subagent" as const,
mode: MODE,
model,
temperature: 0.3,
...metisRestrictions,
@@ -319,7 +321,7 @@ export function createMetisAgent(model: string): AgentConfig {
thinking: { type: "enabled", budgetTokens: 32000 },
} as AgentConfig
}
createMetisAgent.mode = MODE
export const metisPromptMetadata: AgentPromptMetadata = {
category: "advisor",

View File

@@ -1,8 +1,10 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import type { AgentMode, AgentPromptMetadata } from "./types"
import { isGptModel } from "./types"
import { createAgentToolRestrictions } from "../shared/permission-compat"
const MODE: AgentMode = "subagent"
/**
* Momus - Plan Reviewer Agent
*
@@ -400,7 +402,7 @@ export function createMomusAgent(model: string): AgentConfig {
const base = {
description:
"Expert reviewer for evaluating work plans against rigorous clarity, verifiability, and completeness standards. (Momus - OhMyOpenCode)",
mode: "subagent" as const,
mode: MODE,
model,
temperature: 0.1,
...restrictions,
@@ -413,7 +415,7 @@ export function createMomusAgent(model: string): AgentConfig {
return { ...base, thinking: { type: "enabled", budgetTokens: 32000 } } as AgentConfig
}
createMomusAgent.mode = MODE
export const momusPromptMetadata: AgentPromptMetadata = {
category: "advisor",

View File

@@ -1,7 +1,9 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import type { AgentMode, AgentPromptMetadata } from "./types"
import { createAgentToolAllowlist } from "../shared/permission-compat"
const MODE: AgentMode = "subagent"
export const MULTIMODAL_LOOKER_PROMPT_METADATA: AgentPromptMetadata = {
category: "utility",
cost: "CHEAP",
@@ -15,7 +17,7 @@ export function createMultimodalLookerAgent(model: string): AgentConfig {
return {
description:
"Analyze media files (PDFs, images, diagrams) that require interpretation beyond raw text. Extracts specific information or summaries from documents, describes visual content. Use when you need analyzed/extracted data rather than literal file contents. (Multimodal-Looker - OhMyOpenCode)",
mode: "subagent" as const,
mode: MODE,
model,
temperature: 0.1,
...restrictions,
@@ -53,4 +55,4 @@ Response rules:
Your output goes straight to the main agent for continued work.`,
}
}
createMultimodalLookerAgent.mode = MODE

View File

@@ -1,8 +1,10 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import type { AgentMode, AgentPromptMetadata } from "./types"
import { isGptModel } from "./types"
import { createAgentToolRestrictions } from "../shared/permission-compat"
const MODE: AgentMode = "subagent"
export const ORACLE_PROMPT_METADATA: AgentPromptMetadata = {
category: "advisor",
cost: "EXPENSIVE",
@@ -106,7 +108,7 @@ export function createOracleAgent(model: string): AgentConfig {
const base = {
description:
"Read-only consultation agent. High-IQ reasoning specialist for debugging hard problems and high-difficulty architecture design. (Oracle - OhMyOpenCode)",
mode: "subagent" as const,
mode: MODE,
model,
temperature: 0.1,
...restrictions,
@@ -119,4 +121,5 @@ export function createOracleAgent(model: string): AgentConfig {
return { ...base, thinking: { type: "enabled", budgetTokens: 32000 } } as AgentConfig
}
createOracleAgent.mode = MODE

View File

@@ -1,4 +1,5 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentMode } from "./types"
import { isGptModel } from "./types"
import type { AgentOverrideConfig } from "../config/schema"
import {
@@ -6,6 +7,8 @@ import {
type PermissionValue,
} from "../shared/permission-compat"
const MODE: AgentMode = "subagent"
const SISYPHUS_JUNIOR_PROMPT = `<Role>
Sisyphus-Junior - Focused executor from OhMyOpenCode.
Execute tasks directly. NEVER delegate or spawn other agents.
@@ -85,7 +88,7 @@ export function createSisyphusJuniorAgentWithOverrides(
const base: AgentConfig = {
description: override?.description ??
"Focused task executor. Same discipline, no delegation. (Sisyphus-Junior - OhMyOpenCode)",
mode: "subagent" as const,
mode: MODE,
model,
temperature,
maxTokens: 64000,
@@ -107,3 +110,5 @@ export function createSisyphusJuniorAgentWithOverrides(
thinking: { type: "enabled", budgetTokens: 32000 },
} as AgentConfig
}
createSisyphusJuniorAgentWithOverrides.mode = MODE

View File

@@ -1,5 +1,8 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentMode } from "./types"
import { isGptModel } from "./types"
const MODE: AgentMode = "primary"
import type { AvailableAgent, AvailableTool, AvailableSkill, AvailableCategory } from "./dynamic-agent-prompt-builder"
import {
buildKeyTriggersSection,
@@ -434,7 +437,7 @@ export function createSisyphusAgent(
const base = {
description:
"Powerful AI orchestrator. Plans obsessively with todos, assesses search complexity before exploration, delegates strategically via category+skills combinations. Uses explore for internal code (parallel-friendly), librarian for external docs. (Sisyphus - OhMyOpenCode)",
mode: "primary" as const,
mode: MODE,
model,
maxTokens: 64000,
prompt,
@@ -448,3 +451,4 @@ export function createSisyphusAgent(
return { ...base, thinking: { type: "enabled", budgetTokens: 32000 } }
}
createSisyphusAgent.mode = MODE

View File

@@ -1,6 +1,20 @@
import type { AgentConfig } from "@opencode-ai/sdk"
export type AgentFactory = (model: string) => AgentConfig
/**
* Agent mode determines UI model selection behavior:
* - "primary": Respects user's UI-selected model (sisyphus, atlas)
* - "subagent": Uses own fallback chain, ignores UI selection (oracle, explore, etc.)
* - "all": Available in both contexts (OpenCode compatibility)
*/
export type AgentMode = "primary" | "subagent" | "all"
/**
* Agent factory function with static mode property.
* Mode is exposed as static property for pre-instantiation access.
*/
export type AgentFactory = ((model: string) => AgentConfig) & {
mode: AgentMode
}
/**
* Agent category for grouping in Sisyphus prompt sections

View File

@@ -10,7 +10,7 @@ import { createMetisAgent } from "./metis"
import { createAtlasAgent } from "./atlas"
import { createMomusAgent } from "./momus"
import type { AvailableAgent, AvailableCategory, AvailableSkill } from "./dynamic-agent-prompt-builder"
import { deepMerge, fetchAvailableModels, resolveModelWithFallback, AGENT_MODEL_REQUIREMENTS, findCaseInsensitive, includesCaseInsensitive, readConnectedProvidersCache } from "../shared"
import { deepMerge, fetchAvailableModels, resolveModelWithFallback, AGENT_MODEL_REQUIREMENTS, findCaseInsensitive, includesCaseInsensitive, readConnectedProvidersCache, isModelAvailable } from "../shared"
import { DEFAULT_CATEGORIES, CATEGORY_DESCRIPTIONS } from "../tools/delegate-task/constants"
import { resolveMultipleSkills } from "../features/opencode-skill-loader/skill-content"
import { createBuiltinSkills } from "../features/builtin-skills"
@@ -222,11 +222,20 @@ export async function createBuiltinAgents(
if (agentName === "atlas") continue
if (includesCaseInsensitive(disabledAgents, agentName)) continue
const override = findCaseInsensitive(agentOverrides, agentName)
const requirement = AGENT_MODEL_REQUIREMENTS[agentName]
const resolution = resolveModelWithFallback({
uiSelectedModel,
const override = findCaseInsensitive(agentOverrides, agentName)
const requirement = AGENT_MODEL_REQUIREMENTS[agentName]
// Check if agent requires a specific model
if (requirement?.requiresModel && availableModels) {
if (!isModelAvailable(requirement.requiresModel, availableModels)) {
continue
}
}
const isPrimaryAgent = isFactory(source) && source.mode === "primary"
const resolution = resolveModelWithFallback({
uiSelectedModel: isPrimaryAgent ? uiSelectedModel : undefined,
userModel: override?.model,
fallbackChain: requirement?.fallbackChain,
availableModels,
@@ -320,7 +329,7 @@ export async function createBuiltinAgents(
const atlasRequirement = AGENT_MODEL_REQUIREMENTS["atlas"]
const atlasResolution = resolveModelWithFallback({
uiSelectedModel,
// NOTE: Atlas does NOT use uiSelectedModel - respects its own fallbackChain (k2p5 primary)
userModel: orchestratorOverride?.model,
fallbackChain: atlasRequirement?.fallbackChain,
availableModels,

View File

@@ -5,54 +5,57 @@ exports[`generateModelConfig no providers available returns ULTIMATE_FALLBACK fo
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"explore": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"momus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"multimodal-looker": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"oracle": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"prometheus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"sisyphus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
},
"categories": {
"artistry": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"deep": {
"model": "opencode/glm-4.7-free",
},
"quick": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"ultrabrain": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"unspecified-high": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"unspecified-low": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"visual-engineering": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"writing": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
},
}
@@ -77,6 +80,7 @@ exports[`generateModelConfig single native provider uses Claude models when only
},
"momus": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"multimodal-looker": {
"model": "anthropic/claude-haiku-4-5",
@@ -98,6 +102,10 @@ exports[`generateModelConfig single native provider uses Claude models when only
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"deep": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
@@ -141,6 +149,7 @@ exports[`generateModelConfig single native provider uses Claude models with isMa
},
"momus": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"multimodal-looker": {
"model": "anthropic/claude-haiku-4-5",
@@ -163,6 +172,10 @@ exports[`generateModelConfig single native provider uses Claude models with isMa
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"deep": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
@@ -199,7 +212,7 @@ exports[`generateModelConfig single native provider uses OpenAI models when only
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "openai/gpt-5.2",
@@ -229,8 +242,12 @@ exports[`generateModelConfig single native provider uses OpenAI models when only
"artistry": {
"model": "openai/gpt-5.2",
},
"deep": {
"model": "openai/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"ultrabrain": {
"model": "openai/gpt-5.2-codex",
@@ -245,8 +262,7 @@ exports[`generateModelConfig single native provider uses OpenAI models when only
"variant": "medium",
},
"visual-engineering": {
"model": "openai/gpt-5.2",
"variant": "high",
"model": "opencode/glm-4.7-free",
},
"writing": {
"model": "openai/gpt-5.2",
@@ -266,7 +282,7 @@ exports[`generateModelConfig single native provider uses OpenAI models with isMa
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "openai/gpt-5.2",
@@ -296,8 +312,12 @@ exports[`generateModelConfig single native provider uses OpenAI models with isMa
"artistry": {
"model": "openai/gpt-5.2",
},
"deep": {
"model": "openai/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"ultrabrain": {
"model": "openai/gpt-5.2-codex",
@@ -312,8 +332,7 @@ exports[`generateModelConfig single native provider uses OpenAI models with isMa
"variant": "medium",
},
"visual-engineering": {
"model": "openai/gpt-5.2",
"variant": "high",
"model": "opencode/glm-4.7-free",
},
"writing": {
"model": "openai/gpt-5.2",
@@ -333,7 +352,7 @@ exports[`generateModelConfig single native provider uses Gemini models when only
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "google/gemini-3-pro",
@@ -348,6 +367,7 @@ exports[`generateModelConfig single native provider uses Gemini models when only
},
"oracle": {
"model": "google/gemini-3-pro",
"variant": "max",
},
"prometheus": {
"model": "google/gemini-3-pro",
@@ -361,11 +381,16 @@ exports[`generateModelConfig single native provider uses Gemini models when only
"model": "google/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "google/gemini-3-pro",
"variant": "max",
},
"quick": {
"model": "google/gemini-3-flash",
},
"ultrabrain": {
"model": "google/gemini-3-pro",
"variant": "max",
},
"unspecified-high": {
"model": "google/gemini-3-flash",
@@ -394,7 +419,7 @@ exports[`generateModelConfig single native provider uses Gemini models with isMa
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "google/gemini-3-pro",
@@ -409,6 +434,7 @@ exports[`generateModelConfig single native provider uses Gemini models with isMa
},
"oracle": {
"model": "google/gemini-3-pro",
"variant": "max",
},
"prometheus": {
"model": "google/gemini-3-pro",
@@ -422,11 +448,16 @@ exports[`generateModelConfig single native provider uses Gemini models with isMa
"model": "google/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "google/gemini-3-pro",
"variant": "max",
},
"quick": {
"model": "google/gemini-3-flash",
},
"ultrabrain": {
"model": "google/gemini-3-pro",
"variant": "max",
},
"unspecified-high": {
"model": "google/gemini-3-pro",
@@ -485,6 +516,10 @@ exports[`generateModelConfig all native providers uses preferred models from fal
"model": "google/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "openai/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
@@ -550,6 +585,10 @@ exports[`generateModelConfig all native providers uses preferred models with isM
"model": "google/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "openai/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
@@ -579,13 +618,13 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "opencode/claude-sonnet-4-5",
"model": "opencode/kimi-k2.5-free",
},
"explore": {
"model": "opencode/claude-haiku-4-5",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "opencode/claude-opus-4-5",
@@ -615,6 +654,10 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on
"model": "opencode/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "opencode/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "opencode/claude-haiku-4-5",
},
@@ -643,13 +686,13 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "opencode/claude-sonnet-4-5",
"model": "opencode/kimi-k2.5-free",
},
"explore": {
"model": "opencode/claude-haiku-4-5",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "opencode/claude-opus-4-5",
@@ -680,6 +723,10 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is
"model": "opencode/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "opencode/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "opencode/claude-haiku-4-5",
},
@@ -745,6 +792,10 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models when
"model": "github-copilot/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "github-copilot/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "github-copilot/claude-haiku-4.5",
},
@@ -810,6 +861,10 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models with
"model": "github-copilot/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "github-copilot/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "github-copilot/claude-haiku-4.5",
},
@@ -839,7 +894,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian whe
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"explore": {
"model": "opencode/gpt-5-nano",
@@ -848,42 +903,45 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian whe
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"momus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"multimodal-looker": {
"model": "zai-coding-plan/glm-4.6v",
},
"oracle": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"prometheus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"sisyphus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
},
"categories": {
"artistry": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"deep": {
"model": "opencode/glm-4.7-free",
},
"quick": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"ultrabrain": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"unspecified-high": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"unspecified-low": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"visual-engineering": {
"model": "opencode/big-pickle",
"model": "zai-coding-plan/glm-4.7",
},
"writing": {
"model": "zai-coding-plan/glm-4.7",
@@ -897,7 +955,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian wit
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"explore": {
"model": "opencode/gpt-5-nano",
@@ -906,19 +964,19 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian wit
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"momus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"multimodal-looker": {
"model": "zai-coding-plan/glm-4.6v",
},
"oracle": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"prometheus": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"sisyphus": {
"model": "zai-coding-plan/glm-4.7",
@@ -926,22 +984,25 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian wit
},
"categories": {
"artistry": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"deep": {
"model": "opencode/glm-4.7-free",
},
"quick": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"ultrabrain": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"unspecified-high": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"unspecified-low": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"visual-engineering": {
"model": "opencode/big-pickle",
"model": "zai-coding-plan/glm-4.7",
},
"writing": {
"model": "zai-coding-plan/glm-4.7",
@@ -955,13 +1016,13 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "anthropic/claude-sonnet-4-5",
"model": "opencode/kimi-k2.5-free",
},
"explore": {
"model": "anthropic/claude-haiku-4-5",
},
"librarian": {
"model": "opencode/big-pickle",
"model": "opencode/glm-4.7-free",
},
"metis": {
"model": "anthropic/claude-opus-4-5",
@@ -991,6 +1052,10 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen
"model": "opencode/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "opencode/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
@@ -1055,6 +1120,10 @@ exports[`generateModelConfig mixed provider scenarios uses OpenAI + Copilot comb
"model": "github-copilot/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "openai/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "github-copilot/claude-haiku-4.5",
},
@@ -1097,6 +1166,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + ZAI combinat
},
"momus": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"multimodal-looker": {
"model": "zai-coding-plan/glm-4.6v",
@@ -1118,6 +1188,10 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + ZAI combinat
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"deep": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
@@ -1161,12 +1235,13 @@ exports[`generateModelConfig mixed provider scenarios uses Gemini + Claude combi
},
"momus": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"multimodal-looker": {
"model": "google/gemini-3-flash",
},
"oracle": {
"model": "anthropic/claude-opus-4-5",
"model": "google/gemini-3-pro",
"variant": "max",
},
"prometheus": {
@@ -1182,11 +1257,15 @@ exports[`generateModelConfig mixed provider scenarios uses Gemini + Claude combi
"model": "google/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "anthropic/claude-opus-4-5",
"variant": "max",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
"ultrabrain": {
"model": "anthropic/claude-opus-4-5",
"model": "google/gemini-3-pro",
"variant": "max",
},
"unspecified-high": {
@@ -1210,7 +1289,7 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "github-copilot/claude-sonnet-4.5",
"model": "opencode/kimi-k2.5-free",
},
"explore": {
"model": "opencode/claude-haiku-4-5",
@@ -1246,6 +1325,10 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider
"model": "github-copilot/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "github-copilot/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "github-copilot/claude-haiku-4.5",
},
@@ -1274,7 +1357,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "anthropic/claude-sonnet-4-5",
"model": "opencode/kimi-k2.5-free",
},
"explore": {
"model": "anthropic/claude-haiku-4-5",
@@ -1310,6 +1393,10 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe
"model": "google/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "openai/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
@@ -1338,7 +1425,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "anthropic/claude-sonnet-4-5",
"model": "opencode/kimi-k2.5-free",
},
"explore": {
"model": "anthropic/claude-haiku-4-5",
@@ -1375,6 +1462,10 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is
"model": "google/gemini-3-pro",
"variant": "max",
},
"deep": {
"model": "openai/gpt-5.2-codex",
"variant": "medium",
},
"quick": {
"model": "anthropic/claude-haiku-4-5",
},
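
Across all of the snapshot updates above, the same two changes repeat: the ultimate fallback moves from `opencode/big-pickle` to `opencode/glm-4.7-free` (with atlas preferring `opencode/kimi-k2.5-free` where available), and every provider combination gains a new `deep` category. A representative slice of the resulting `categories` block, with values taken from the snapshots above:

```
// Shape only — concrete models vary by provider combination (see snapshots).
const categories = {
  ultrabrain: { model: "opencode/gemini-3-pro", variant: "max" },
  deep: { model: "opencode/gpt-5.2-codex", variant: "medium" }, // new in this change
  quick: { model: "opencode/claude-haiku-4-5" },
}
```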

View File

@@ -250,6 +250,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
@@ -271,6 +272,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
@@ -290,6 +292,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: true,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
@@ -309,6 +312,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
@@ -316,7 +320,7 @@ describe("generateOmoConfig - model fallback system", () => {
// #then should use ultimate fallback for all agents
expect(result.$schema).toBe("https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json")
expect((result.agents as Record<string, { model: string }>).sisyphus.model).toBe("opencode/big-pickle")
expect((result.agents as Record<string, { model: string }>).sisyphus.model).toBe("opencode/glm-4.7-free")
})
test("uses zai-coding-plan/glm-4.7 for librarian when Z.ai available", () => {
@@ -329,6 +333,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: true,
hasKimiForCoding: false,
}
// #when generating config
@@ -350,6 +355,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
@@ -373,6 +379,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
@@ -392,6 +399,7 @@ describe("generateOmoConfig - model fallback system", () => {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config

View File

@@ -598,27 +598,28 @@ export function addProviderConfig(config: InstallConfig): ConfigMergeResult {
}
}
function detectProvidersFromOmoConfig(): { hasOpenAI: boolean; hasOpencodeZen: boolean; hasZaiCodingPlan: boolean } {
function detectProvidersFromOmoConfig(): { hasOpenAI: boolean; hasOpencodeZen: boolean; hasZaiCodingPlan: boolean; hasKimiForCoding: boolean } {
const omoConfigPath = getOmoConfig()
if (!existsSync(omoConfigPath)) {
return { hasOpenAI: true, hasOpencodeZen: true, hasZaiCodingPlan: false }
return { hasOpenAI: true, hasOpencodeZen: true, hasZaiCodingPlan: false, hasKimiForCoding: false }
}
try {
const content = readFileSync(omoConfigPath, "utf-8")
const omoConfig = parseJsonc<Record<string, unknown>>(content)
if (!omoConfig || typeof omoConfig !== "object") {
return { hasOpenAI: true, hasOpencodeZen: true, hasZaiCodingPlan: false }
return { hasOpenAI: true, hasOpencodeZen: true, hasZaiCodingPlan: false, hasKimiForCoding: false }
}
const configStr = JSON.stringify(omoConfig)
const hasOpenAI = configStr.includes('"openai/')
const hasOpencodeZen = configStr.includes('"opencode/')
const hasZaiCodingPlan = configStr.includes('"zai-coding-plan/')
const hasKimiForCoding = configStr.includes('"kimi-for-coding/')
return { hasOpenAI, hasOpencodeZen, hasZaiCodingPlan }
return { hasOpenAI, hasOpencodeZen, hasZaiCodingPlan, hasKimiForCoding }
} catch {
return { hasOpenAI: true, hasOpencodeZen: true, hasZaiCodingPlan: false }
return { hasOpenAI: true, hasOpencodeZen: true, hasZaiCodingPlan: false, hasKimiForCoding: false }
}
}
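
The detection above is a plain substring search over the serialized config rather than a structural walk; it works because any model reference keeps its quoted provider prefix after `JSON.stringify`. A quick illustration:

```
// Any "<provider>/<model>" value re-serializes with the quote before the
// prefix intact, so includes('"kimi-for-coding/') is a reliable signal.
const omoConfig = { agents: { sisyphus: { model: "kimi-for-coding/k2p5" } } }
const configStr = JSON.stringify(omoConfig)
console.log(configStr.includes('"kimi-for-coding/')) // true
console.log(configStr.includes('"zai-coding-plan/')) // false
```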
@@ -632,6 +633,7 @@ export function detectCurrentConfig(): DetectedConfig {
hasCopilot: false,
hasOpencodeZen: true,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
const { format, path } = detectConfigFormat()
@@ -655,10 +657,11 @@ export function detectCurrentConfig(): DetectedConfig {
// Gemini auth plugin detection still works via plugin presence
result.hasGemini = plugins.some((p) => p.startsWith("opencode-antigravity-auth"))
const { hasOpenAI, hasOpencodeZen, hasZaiCodingPlan } = detectProvidersFromOmoConfig()
const { hasOpenAI, hasOpencodeZen, hasZaiCodingPlan, hasKimiForCoding } = detectProvidersFromOmoConfig()
result.hasOpenAI = hasOpenAI
result.hasOpencodeZen = hasOpencodeZen
result.hasZaiCodingPlan = hasZaiCodingPlan
result.hasKimiForCoding = hasKimiForCoding
return result
}

View File

@@ -30,6 +30,7 @@ program
.option("--copilot <value>", "GitHub Copilot subscription: no, yes")
.option("--opencode-zen <value>", "OpenCode Zen access: no, yes (default: no)")
.option("--zai-coding-plan <value>", "Z.ai Coding Plan subscription: no, yes (default: no)")
.option("--kimi-for-coding <value>", "Kimi For Coding subscription: no, yes (default: no)")
.option("--skip-auth", "Skip authentication setup hints")
.addHelpText("after", `
Examples:
@@ -37,13 +38,14 @@ Examples:
$ bunx oh-my-opencode install --no-tui --claude=max20 --openai=yes --gemini=yes --copilot=no
$ bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=yes --opencode-zen=yes
Model Providers (Priority: Native > Copilot > OpenCode Zen > Z.ai):
Model Providers (Priority: Native > Copilot > OpenCode Zen > Z.ai > Kimi):
Claude Native anthropic/ models (Opus, Sonnet, Haiku)
OpenAI Native openai/ models (GPT-5.2 for Oracle)
Gemini Native google/ models (Gemini 3 Pro, Flash)
Copilot github-copilot/ models (fallback)
OpenCode Zen opencode/ models (opencode/claude-opus-4-5, etc.)
Z.ai zai-coding-plan/glm-4.7 (Librarian priority)
Kimi kimi-for-coding/k2p5 (Sisyphus/Prometheus fallback)
`)
.action(async (options) => {
const args: InstallArgs = {
@@ -54,6 +56,7 @@ Model Providers (Priority: Native > Copilot > OpenCode Zen > Z.ai):
copilot: options.copilot,
opencodeZen: options.opencodeZen,
zaiCodingPlan: options.zaiCodingPlan,
kimiForCoding: options.kimiForCoding,
skipAuth: options.skipAuth ?? false,
}
const exitCode = await install(args)
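
The new option slots into the existing non-TUI flow; composing the flags documented above, an invocation might look like:

```
$ bunx oh-my-opencode install --no-tui --claude=no --copilot=no --kimi-for-coding=yes
```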

View File

@@ -45,6 +45,7 @@ function formatConfigSummary(config: InstallConfig): string {
lines.push(formatProvider("GitHub Copilot", config.hasCopilot, "fallback"))
lines.push(formatProvider("OpenCode Zen", config.hasOpencodeZen, "opencode/ models"))
lines.push(formatProvider("Z.ai Coding Plan", config.hasZaiCodingPlan, "Librarian/Multimodal"))
lines.push(formatProvider("Kimi For Coding", config.hasKimiForCoding, "Sisyphus/Prometheus fallback"))
lines.push("")
lines.push(color.dim("─".repeat(40)))
@@ -141,6 +142,10 @@ function validateNonTuiArgs(args: InstallArgs): { valid: boolean; errors: string
errors.push(`Invalid --zai-coding-plan value: ${args.zaiCodingPlan} (expected: no, yes)`)
}
if (args.kimiForCoding !== undefined && !["no", "yes"].includes(args.kimiForCoding)) {
errors.push(`Invalid --kimi-for-coding value: ${args.kimiForCoding} (expected: no, yes)`)
}
return { valid: errors.length === 0, errors }
}
@@ -153,10 +158,11 @@ function argsToConfig(args: InstallArgs): InstallConfig {
hasCopilot: args.copilot === "yes",
hasOpencodeZen: args.opencodeZen === "yes",
hasZaiCodingPlan: args.zaiCodingPlan === "yes",
hasKimiForCoding: args.kimiForCoding === "yes",
}
}
function detectedToInitialValues(detected: DetectedConfig): { claude: ClaudeSubscription; openai: BooleanArg; gemini: BooleanArg; copilot: BooleanArg; opencodeZen: BooleanArg; zaiCodingPlan: BooleanArg } {
function detectedToInitialValues(detected: DetectedConfig): { claude: ClaudeSubscription; openai: BooleanArg; gemini: BooleanArg; copilot: BooleanArg; opencodeZen: BooleanArg; zaiCodingPlan: BooleanArg; kimiForCoding: BooleanArg } {
let claude: ClaudeSubscription = "no"
if (detected.hasClaude) {
claude = detected.isMax20 ? "max20" : "yes"
@@ -169,6 +175,7 @@ function detectedToInitialValues(detected: DetectedConfig): { claude: ClaudeSubs
copilot: detected.hasCopilot ? "yes" : "no",
opencodeZen: detected.hasOpencodeZen ? "yes" : "no",
zaiCodingPlan: detected.hasZaiCodingPlan ? "yes" : "no",
kimiForCoding: detected.hasKimiForCoding ? "yes" : "no",
}
}
@@ -178,7 +185,7 @@ async function runTuiMode(detected: DetectedConfig): Promise<InstallConfig | nul
const claude = await p.select({
message: "Do you have a Claude Pro/Max subscription?",
options: [
{ value: "no" as const, label: "No", hint: "Will use opencode/big-pickle as fallback" },
{ value: "no" as const, label: "No", hint: "Will use opencode/glm-4.7-free as fallback" },
{ value: "yes" as const, label: "Yes (standard)", hint: "Claude Opus 4.5 for orchestration" },
{ value: "max20" as const, label: "Yes (max20 mode)", hint: "Full power with Claude Sonnet 4.5 for Librarian" },
],
@@ -260,6 +267,20 @@ async function runTuiMode(detected: DetectedConfig): Promise<InstallConfig | nul
return null
}
const kimiForCoding = await p.select({
message: "Do you have a Kimi For Coding subscription?",
options: [
{ value: "no" as const, label: "No", hint: "Will use other configured providers" },
{ value: "yes" as const, label: "Yes", hint: "Kimi K2.5 for Sisyphus/Prometheus fallback" },
],
initialValue: initial.kimiForCoding,
})
if (p.isCancel(kimiForCoding)) {
p.cancel("Installation cancelled.")
return null
}
return {
hasClaude: claude !== "no",
isMax20: claude === "max20",
@@ -268,6 +289,7 @@ async function runTuiMode(detected: DetectedConfig): Promise<InstallConfig | nul
hasCopilot: copilot === "yes",
hasOpencodeZen: opencodeZen === "yes",
hasZaiCodingPlan: zaiCodingPlan === "yes",
hasKimiForCoding: kimiForCoding === "yes",
}
}
@@ -363,7 +385,7 @@ async function runNonTuiInstall(args: InstallArgs): Promise<number> {
}
if (!config.hasClaude && !config.hasOpenAI && !config.hasGemini && !config.hasCopilot && !config.hasOpencodeZen) {
printWarning("No model providers configured. Using opencode/big-pickle as fallback.")
printWarning("No model providers configured. Using opencode/glm-4.7-free as fallback.")
}
console.log(`${SYMBOLS.star} ${color.bold(color.green(isUpdate ? "Configuration updated!" : "Installation complete!"))}`)
@@ -378,7 +400,7 @@ async function runNonTuiInstall(args: InstallArgs): Promise<number> {
)
console.log(`${SYMBOLS.star} ${color.yellow("If you found this helpful, consider starring the repo!")}`)
console.log(` ${color.dim("gh repo star code-yeongyu/oh-my-opencode")}`)
console.log(` ${color.dim("gh api --silent --method PUT /user/starred/code-yeongyu/oh-my-opencode >/dev/null 2>&1 || true")}`)
console.log()
console.log(color.dim("oMoMoMoMo... Enjoy!"))
console.log()
@@ -480,7 +502,7 @@ export async function install(args: InstallArgs): Promise<number> {
}
if (!config.hasClaude && !config.hasOpenAI && !config.hasGemini && !config.hasCopilot && !config.hasOpencodeZen) {
p.log.warn("No model providers configured. Using opencode/big-pickle as fallback.")
p.log.warn("No model providers configured. Using opencode/glm-4.7-free as fallback.")
}
p.note(formatConfigSummary(config), isUpdate ? "Updated Configuration" : "Installation Complete")
@@ -496,7 +518,7 @@ export async function install(args: InstallArgs): Promise<number> {
)
p.log.message(`${color.yellow("★")} If you found this helpful, consider starring the repo!`)
p.log.message(` ${color.dim("gh repo star code-yeongyu/oh-my-opencode")}`)
p.log.message(` ${color.dim("gh api --silent --method PUT /user/starred/code-yeongyu/oh-my-opencode >/dev/null 2>&1 || true")}`)
p.outro(color.green("oMoMoMoMo... Enjoy!"))

View File

@@ -12,6 +12,7 @@ function createConfig(overrides: Partial<InstallConfig> = {}): InstallConfig {
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
...overrides,
}
}

View File

@@ -14,6 +14,7 @@ interface ProviderAvailability {
opencodeZen: boolean
copilot: boolean
zai: boolean
kimiForCoding: boolean
isMaxPlan: boolean
}
@@ -36,7 +37,7 @@ export interface GeneratedOmoConfig {
const ZAI_MODEL = "zai-coding-plan/glm-4.7"
const ULTIMATE_FALLBACK = "opencode/big-pickle"
const ULTIMATE_FALLBACK = "opencode/glm-4.7-free"
const SCHEMA_URL = "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json"
function toProviderAvailability(config: InstallConfig): ProviderAvailability {
@@ -49,6 +50,7 @@ function toProviderAvailability(config: InstallConfig): ProviderAvailability {
opencodeZen: config.hasOpencodeZen,
copilot: config.hasCopilot,
zai: config.hasZaiCodingPlan,
kimiForCoding: config.hasKimiForCoding,
isMaxPlan: config.isMax20,
}
}
@@ -61,6 +63,7 @@ function isProviderAvailable(provider: string, avail: ProviderAvailability): boo
"github-copilot": avail.copilot,
opencode: avail.opencodeZen,
"zai-coding-plan": avail.zai,
"kimi-for-coding": avail.kimiForCoding,
}
return mapping[provider] ?? false
}
@@ -102,6 +105,8 @@ function getSisyphusFallbackChain(isMaxPlan: boolean): FallbackEntry[] {
// For non-max plan, use sonnet instead of opus
return [
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-sonnet-4-5" },
{ providers: ["kimi-for-coding"], model: "k2p5" },
{ providers: ["opencode"], model: "kimi-k2.5-free" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
]
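
The chain entries above pair an ordered provider list with a model name; presumably the resolver walks the chain and takes the first entry whose provider passes `isProviderAvailable`. A minimal sketch of that walk — the loop itself is an assumption, only the shapes come from this diff:

```
interface FallbackEntry { providers: string[]; model: string; variant?: string }

// Assumed resolution: the first available provider in the first satisfiable
// entry wins. With only kimi-for-coding configured (no Anthropic, Copilot,
// or OpenCode Zen), the non-max chain above would land on "kimi-for-coding/k2p5".
function resolveFallback(
  chain: FallbackEntry[],
  isAvailable: (provider: string) => boolean,
): { model: string; variant?: string } | undefined {
  for (const entry of chain) {
    const provider = entry.providers.find(isAvailable)
    if (provider) return { model: `${provider}/${entry.model}`, variant: entry.variant }
  }
  return undefined
}
```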
@@ -115,7 +120,8 @@ export function generateModelConfig(config: InstallConfig): GeneratedOmoConfig {
avail.native.gemini ||
avail.opencodeZen ||
avail.copilot ||
avail.zai
avail.zai ||
avail.kimiForCoding
if (!hasAnyProvider) {
return {

View File

@@ -9,6 +9,7 @@ export interface InstallArgs {
copilot?: BooleanArg
opencodeZen?: BooleanArg
zaiCodingPlan?: BooleanArg
kimiForCoding?: BooleanArg
skipAuth?: boolean
}
@@ -20,6 +21,7 @@ export interface InstallConfig {
hasCopilot: boolean
hasOpencodeZen: boolean
hasZaiCodingPlan: boolean
hasKimiForCoding: boolean
}
export interface ConfigMergeResult {
@@ -37,4 +39,5 @@ export interface DetectedConfig {
hasCopilot: boolean
hasOpencodeZen: boolean
hasZaiCodingPlan: boolean
hasKimiForCoding: boolean
}

View File

@@ -187,6 +187,7 @@ export const CategoryConfigSchema = z.object({
export const BuiltinCategoryNameSchema = z.enum([
"visual-engineering",
"ultrabrain",
"deep",
"artistry",
"quick",
"unspecified-low",

View File

@@ -176,8 +176,8 @@ describe("ConcurrencyManager.acquire/release", () => {
await manager.acquire("model-a")
await manager.acquire("model-a")
// #then - both resolved without waiting
expect(true).toBe(true)
// #then - both resolved without waiting, count should be 2
expect(manager.getCount("model-a")).toBe(2)
})
test("should allow acquires up to default limit of 5", async () => {
@@ -190,8 +190,8 @@ describe("ConcurrencyManager.acquire/release", () => {
await manager.acquire("model-a")
await manager.acquire("model-a")
// #then - all 5 resolved
expect(true).toBe(true)
// #then - all 5 resolved, count should be 5
expect(manager.getCount("model-a")).toBe(5)
})
test("should queue when limit reached", async () => {
@@ -276,8 +276,8 @@ describe("ConcurrencyManager.acquire/release", () => {
manager.release("model-a")
await manager.acquire("model-a")
// #then
expect(true).toBe(true)
// #then - count should be 1 after re-acquiring
expect(manager.getCount("model-a")).toBe(1)
})
test("should handle release when no acquire", () => {
@@ -288,21 +288,21 @@ describe("ConcurrencyManager.acquire/release", () => {
// #when - release without acquire
manager.release("model-a")
// #then - should not throw
expect(true).toBe(true)
// #then - count should be 0 (no negative count)
expect(manager.getCount("model-a")).toBe(0)
})
test("should handle release when no prior acquire", () => {
// #given - default config
// #when - release without acquire
manager.release("model-a")
// #when - release without acquire
manager.release("model-a")
// #then - should not throw
expect(true).toBe(true)
})
// #then - count should be 0 (no negative count)
expect(manager.getCount("model-a")).toBe(0)
})
test("should handle multiple acquires and releases correctly", async () => {
test("should handle multiple acquires and releases correctly", async () => {
// #given
const config: BackgroundTaskConfig = { defaultConcurrency: 3 }
manager = new ConcurrencyManager(config)
@@ -317,11 +317,11 @@ describe("ConcurrencyManager.acquire/release", () => {
manager.release("model-a")
manager.release("model-a")
// Should be able to acquire again
await manager.acquire("model-a")
// Should be able to acquire again
await manager.acquire("model-a")
// #then
expect(true).toBe(true)
// #then - count should be 1 after re-acquiring
expect(manager.getCount("model-a")).toBe(1)
})
test("should use model-specific limit for acquire", async () => {

View File

@@ -5,7 +5,7 @@ import type {
LaunchInput,
ResumeInput,
} from "./types"
import { log, getAgentToolRestrictions } from "../../shared"
import { log, getAgentToolRestrictions, promptWithModelSuggestionRetry } from "../../shared"
import { ConcurrencyManager } from "./concurrency"
import type { BackgroundTaskConfig, TmuxConfig } from "../../config/schema"
import { isInsideTmux } from "../../shared/tmux"
@@ -307,7 +307,7 @@ export class BackgroundManager {
: undefined
const launchVariant = input.model?.variant
this.client.session.prompt({
promptWithModelSuggestionRetry(this.client, {
path: { id: sessionID },
body: {
agent: input.agent,
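
Note that `promptWithModelSuggestionRetry` itself is not shown in this diff (it comes from `shared/model-suggestion-retry`, exported further down). Judging only by its name and the unchanged call shape, a speculative sketch — every detail here is an assumption:

```
// Speculative shape only; the real helper is not part of this diff.
// Hypothetical: pull a suggested model out of an error payload, if present.
function extractSuggestedModel(err: unknown): string | undefined {
  return (err as { suggestedModel?: string })?.suggestedModel
}

async function promptWithModelSuggestionRetry(client: any, request: any) {
  try {
    return await client.session.prompt(request)
  } catch (err) {
    const suggested = extractSuggestedModel(err)
    if (!suggested) throw err
    // retry once with the model the server suggested
    return client.session.prompt({ ...request, body: { ...request.body, model: suggested } })
  }
}
```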

View File

@@ -55,6 +55,7 @@ ${REFACTOR_TEMPLATE}
},
"start-work": {
description: "(builtin) Start Sisyphus work session from Prometheus plan",
agent: "atlas",
template: `<command-instruction>
${START_WORK_TEMPLATE}
</command-instruction>

View File

@@ -2,6 +2,7 @@ import { existsSync, mkdirSync, readFileSync, readdirSync, writeFileSync } from
import { join } from "node:path"
import { MESSAGE_STORAGE, PART_STORAGE } from "./constants"
import type { MessageMeta, OriginalMessageContext, TextPart, ToolPermission } from "./types"
import { log } from "../../shared/logger"
export interface StoredMessage {
agent?: string
@@ -117,7 +118,7 @@ export function injectHookMessage(
): boolean {
// Validate hook content to prevent empty message injection
if (!hookContent || hookContent.trim().length === 0) {
console.warn("[hook-message-injector] Attempted to inject empty hook content, skipping injection", {
log("[hook-message-injector] Attempted to inject empty hook content, skipping injection", {
sessionID,
hasAgent: !!originalMessage.agent,
hasModel: !!(originalMessage.model?.providerID && originalMessage.model?.modelID)

View File

@@ -1,6 +1,8 @@
import { afterEach, describe, expect, it } from "bun:test"
import { findAvailablePort, startCallbackServer, type CallbackServer } from "./callback-server"
const nativeFetch = Bun.fetch.bind(Bun)
describe("findAvailablePort", () => {
it("returns the start port when it is available", async () => {
//#given
@@ -34,9 +36,11 @@ describe("findAvailablePort", () => {
describe("startCallbackServer", () => {
let server: CallbackServer | null = null
afterEach(() => {
afterEach(async () => {
server?.close()
server = null
// Allow time for port to be released before next test
await Bun.sleep(10)
})
it("starts server and returns port", async () => {
@@ -57,9 +61,12 @@ describe("startCallbackServer", () => {
const callbackUrl = `http://127.0.0.1:${server.port}/oauth/callback?code=test-code&state=test-state`
//#when
const fetchPromise = fetch(callbackUrl)
const result = await server.waitForCallback()
const response = await fetchPromise
// Use Promise.all to ensure fetch and waitForCallback run concurrently
// This prevents a race condition where waitForCallback blocks before fetch starts
const [result, response] = await Promise.all([
server.waitForCallback(),
nativeFetch(callbackUrl)
])
//#then
expect(result).toEqual({ code: "test-code", state: "test-state" })
@@ -73,7 +80,7 @@ describe("startCallbackServer", () => {
server = await startCallbackServer()
//#when
const response = await fetch(`http://127.0.0.1:${server.port}/other`)
const response = await nativeFetch(`http://127.0.0.1:${server.port}/other`)
//#then
expect(response.status).toBe(404)
@@ -85,7 +92,7 @@ describe("startCallbackServer", () => {
const callbackRejection = server.waitForCallback().catch((e: Error) => e)
//#when
const response = await fetch(`http://127.0.0.1:${server.port}/oauth/callback?state=s`)
const response = await nativeFetch(`http://127.0.0.1:${server.port}/oauth/callback?state=s`)
//#then
expect(response.status).toBe(400)
@@ -100,7 +107,7 @@ describe("startCallbackServer", () => {
const callbackRejection = server.waitForCallback().catch((e: Error) => e)
//#when
const response = await fetch(`http://127.0.0.1:${server.port}/oauth/callback?code=c`)
const response = await nativeFetch(`http://127.0.0.1:${server.port}/oauth/callback?code=c`)
//#then
expect(response.status).toBe(400)
@@ -120,7 +127,7 @@ describe("startCallbackServer", () => {
//#then
try {
await fetch(`http://127.0.0.1:${port}/oauth/callback?code=c&state=s`)
await nativeFetch(`http://127.0.0.1:${port}/oauth/callback?code=c&state=s`)
expect(true).toBe(false)
} catch (error) {
expect(error).toBeDefined()
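
The pattern these edits converge on, condensed: start the listener and the request together, and use a bound native fetch so suite-level `globalThis.fetch` mocks cannot intercept the request:

```
// nativeFetch dodges globalThis.fetch mocks; Promise.all guarantees the
// request is in flight while waitForCallback is listening.
const nativeFetch = Bun.fetch.bind(Bun)
const [result, response] = await Promise.all([
  server.waitForCallback(),
  nativeFetch(callbackUrl),
])
expect(result).toEqual({ code: "test-code", state: "test-state" })
```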

View File

@@ -1,11 +1,83 @@
import { describe, test, expect, mock, beforeEach, spyOn } from "bun:test"
import { afterEach, beforeEach, describe, expect, mock, spyOn, test } from "bun:test"
import { executeCompact } from "./executor"
import type { AutoCompactState } from "./types"
import * as storage from "./storage"
type TimerCallback = (...args: any[]) => void
interface FakeTimeouts {
advanceBy: (ms: number) => Promise<void>
restore: () => void
}
function createFakeTimeouts(): FakeTimeouts {
let now = 0
let nextId = 1
const timers = new Map<number, { id: number; time: number; callback: TimerCallback; args: any[] }>()
const cleared = new Set<number>()
const original = {
setTimeout: globalThis.setTimeout,
clearTimeout: globalThis.clearTimeout,
}
const normalizeDelay = (delay?: number) => {
if (typeof delay !== "number" || !Number.isFinite(delay)) return 0
return delay < 0 ? 0 : delay
}
globalThis.setTimeout = ((callback: TimerCallback, delay?: number, ...args: any[]) => {
const id = nextId++
timers.set(id, {
id,
time: now + normalizeDelay(delay),
callback,
args,
})
return id as unknown as ReturnType<typeof setTimeout>
}) as typeof setTimeout
globalThis.clearTimeout = ((id?: number) => {
if (typeof id !== "number") return
cleared.add(id)
timers.delete(id)
}) as typeof clearTimeout
const advanceBy = async (ms: number) => {
const target = now + Math.max(0, ms)
while (true) {
let next: { id: number; time: number; callback: TimerCallback; args: any[] } | undefined
for (const timer of timers.values()) {
if (timer.time <= target && (!next || timer.time < next.time)) {
next = timer
}
}
if (!next) break
now = next.time
timers.delete(next.id)
if (!cleared.has(next.id)) {
next.callback(...next.args)
}
cleared.delete(next.id)
await Promise.resolve()
}
now = target
await Promise.resolve()
}
const restore = () => {
globalThis.setTimeout = original.setTimeout
globalThis.clearTimeout = original.clearTimeout
}
return { advanceBy, restore }
}
describe("executeCompact lock management", () => {
let autoCompactState: AutoCompactState
let mockClient: any
let fakeTimeouts: FakeTimeouts
const sessionID = "test-session-123"
const directory = "/test/dir"
const msg = { providerID: "anthropic", modelID: "claude-opus-4-5" }
@@ -32,6 +104,12 @@ describe("executeCompact lock management", () => {
showToast: mock(() => Promise.resolve()),
},
}
fakeTimeouts = createFakeTimeouts()
})
afterEach(() => {
fakeTimeouts.restore()
})
test("clears lock on successful summarize completion", async () => {
@@ -216,7 +294,7 @@ describe("executeCompact lock management", () => {
await executeCompact(sessionID, msg, autoCompactState, mockClient, directory)
// Wait for setTimeout callback
await new Promise((resolve) => setTimeout(resolve, 600))
await fakeTimeouts.advanceBy(600)
// #then: Lock should be cleared
// The continuation happens in setTimeout, but lock is cleared in finally before that
@@ -288,7 +366,7 @@ describe("executeCompact lock management", () => {
await executeCompact(sessionID, msg, autoCompactState, mockClient, directory)
// Wait for setTimeout callback
await new Promise((resolve) => setTimeout(resolve, 600))
await fakeTimeouts.advanceBy(600)
// #then: Truncation was attempted
expect(truncateSpy).toHaveBeenCalled()
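
Minimal usage of the `createFakeTimeouts` helper above, for reference:

```
const fake = createFakeTimeouts() // patches globalThis.setTimeout/clearTimeout
let fired = false
setTimeout(() => { fired = true }, 500)
await fake.advanceBy(499) // virtual clock at 499ms — callback not yet due
console.assert(fired === false)
await fake.advanceBy(1)   // reaches 500ms — callback runs here
console.assert(fired === true)
fake.restore()            // undo the patch (done in afterEach above)
```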

View File

@@ -4,6 +4,7 @@ import { join } from "path"
import { homedir, tmpdir } from "os"
import { createRequire } from "module"
import { extractZip } from "../../shared"
import { log } from "../../shared/logger"
const DEBUG = process.env.COMMENT_CHECKER_DEBUG === "1"
const DEBUG_FILE = join(tmpdir(), "comment-checker-debug.log")
@@ -127,7 +128,7 @@ export async function downloadCommentChecker(): Promise<string | null> {
const downloadUrl = `https://github.com/${REPO}/releases/download/v${version}/${assetName}`
debugLog(`Downloading from: ${downloadUrl}`)
console.log(`[oh-my-opencode] Downloading comment-checker binary...`)
log(`[oh-my-opencode] Downloading comment-checker binary...`)
try {
// Ensure cache directory exists
@@ -166,14 +167,14 @@ export async function downloadCommentChecker(): Promise<string | null> {
}
debugLog(`Successfully downloaded binary to: ${binaryPath}`)
console.log(`[oh-my-opencode] comment-checker binary ready.`)
log(`[oh-my-opencode] comment-checker binary ready.`)
return binaryPath
} catch (err) {
debugLog(`Failed to download: ${err}`)
console.error(`[oh-my-opencode] Failed to download comment-checker: ${err instanceof Error ? err.message : err}`)
console.error(`[oh-my-opencode] Comment checking disabled.`)
log(`[oh-my-opencode] Failed to download comment-checker: ${err instanceof Error ? err.message : err}`)
log(`[oh-my-opencode] Comment checking disabled.`)
return null
}
}

View File

@@ -180,7 +180,9 @@ ${ULTRAWORK_PLANNER_SECTION}
1. **THINK DEEPLY** - What is the user's TRUE intent? What problem are they REALLY trying to solve?
2. **EXPLORE THOROUGHLY** - Fire explore/librarian agents to gather ALL relevant context
3. **CONSULT ORACLE** - For architecture decisions, complex logic, or when you're stuck
3. **CONSULT SPECIALISTS** - For hard/complex tasks, DO NOT struggle alone. Delegate:
- **Oracle**: Conventional problems - architecture, debugging, complex logic
- **Artistry**: Non-conventional problems - different approach needed, unusual constraints
4. **ASK THE USER** - If ambiguity remains after exploration, ASK. Don't guess.
**SIGNS YOU ARE NOT READY TO IMPLEMENT:**
@@ -194,7 +196,10 @@ ${ULTRAWORK_PLANNER_SECTION}
\`\`\`
delegate_task(agent="explore", prompt="Find [X] patterns in codebase", background=true)
delegate_task(agent="librarian", prompt="Find docs/examples for [Y]", background=true)
delegate_task(agent="oracle", prompt="Review my approach: [describe plan]")
// Hard problem? DON'T struggle alone:
delegate_task(agent="oracle", prompt="...") // conventional: architecture, debugging
delegate_task(category="artistry", prompt="...") // non-conventional: needs different approach
\`\`\`
**ONLY AFTER YOU HAVE:**
@@ -229,7 +234,7 @@ delegate_task(agent="oracle", prompt="Review my approach: [describe plan]")
**IF YOU ENCOUNTER A BLOCKER:**
1. **DO NOT** give up
2. **DO NOT** deliver a compromised version
3. **DO** consult oracle for solutions
3. **DO** consult specialists (oracle for conventional, artistry for non-conventional)
4. **DO** ask the user for guidance
5. **DO** explore alternative approaches
@@ -298,7 +303,8 @@ delegate_task(session_id="ses_abc123", prompt="Here's my answer to your question
| Codebase exploration | delegate_task(subagent_type="explore", run_in_background=true) | Parallel, context-efficient |
| Documentation lookup | delegate_task(subagent_type="librarian", run_in_background=true) | Specialized knowledge |
| Planning | delegate_task(subagent_type="plan") | Parallel task graph + structured TODO list |
| Architecture/Debugging | delegate_task(subagent_type="oracle") | High-IQ reasoning |
| Hard problem (conventional) | delegate_task(subagent_type="oracle") | Architecture, debugging, complex logic |
| Hard problem (non-conventional) | delegate_task(category="artistry", load_skills=[...]) | Different approach needed |
| Implementation | delegate_task(category="...", load_skills=[...]) | Domain-optimized models |
**CATEGORY + SKILL DELEGATION:**
@@ -490,8 +496,9 @@ CONTEXT GATHERING (parallel):
- 1-2 librarian agents (if external library involved)
- Direct tools: Grep, AST-grep, LSP for targeted searches
IF COMPLEX (architecture, multi-system, debugging after 2+ failures):
- Consult oracle for strategic guidance
IF COMPLEX - DO NOT STRUGGLE ALONE. Consult specialists:
- **Oracle**: Conventional problems (architecture, debugging, complex logic)
- **Artistry**: Non-conventional problems (different approach needed)
SYNTHESIZE findings before proceeding.`,
},

View File

@@ -16,6 +16,7 @@ import {
stripThinkingParts,
} from "./storage"
import type { MessageData, ResumeConfig } from "./types"
import { log } from "../../shared/logger"
export interface SessionRecoveryOptions {
experimental?: ExperimentalConfig
@@ -414,7 +415,7 @@ export function createSessionRecoveryHook(ctx: PluginInput, options?: SessionRec
return success
} catch (err) {
console.error("[session-recovery] Recovery failed:", err)
log("[session-recovery] Recovery failed:", err)
return false
} finally {
processingErrors.delete(assistantMsgID)

View File

@@ -4,9 +4,123 @@ import type { BackgroundManager } from "../features/background-agent"
import { setMainSession, subagentSessions, _resetForTesting } from "../features/claude-code-session-state"
import { createTodoContinuationEnforcer } from "./todo-continuation-enforcer"
type TimerCallback = (...args: any[]) => void
interface FakeTimers {
advanceBy: (ms: number, advanceClock?: boolean) => Promise<void>
restore: () => void
}
function createFakeTimers(): FakeTimers {
const originalNow = Date.now()
let clockNow = originalNow
let timerNow = 0
let nextId = 1
const timers = new Map<number, { id: number; time: number; interval: number | null; callback: TimerCallback; args: any[] }>()
const cleared = new Set<number>()
const original = {
setTimeout: globalThis.setTimeout,
clearTimeout: globalThis.clearTimeout,
setInterval: globalThis.setInterval,
clearInterval: globalThis.clearInterval,
dateNow: Date.now,
}
const normalizeDelay = (delay?: number) => {
if (typeof delay !== "number" || !Number.isFinite(delay)) return 0
return delay < 0 ? 0 : delay
}
const schedule = (callback: TimerCallback, delay: number | undefined, interval: number | null, args: any[]) => {
const id = nextId++
timers.set(id, {
id,
time: timerNow + normalizeDelay(delay),
interval,
callback,
args,
})
return id
}
const clear = (id: number | undefined) => {
if (typeof id !== "number") return
cleared.add(id)
timers.delete(id)
}
globalThis.setTimeout = ((callback: TimerCallback, delay?: number, ...args: any[]) => {
return schedule(callback, delay, null, args) as unknown as ReturnType<typeof setTimeout>
}) as typeof setTimeout
globalThis.setInterval = ((callback: TimerCallback, delay?: number, ...args: any[]) => {
const interval = normalizeDelay(delay)
return schedule(callback, delay, interval, args) as unknown as ReturnType<typeof setInterval>
}) as typeof setInterval
globalThis.clearTimeout = ((id?: number) => {
clear(id)
}) as typeof clearTimeout
globalThis.clearInterval = ((id?: number) => {
clear(id)
}) as typeof clearInterval
Date.now = () => clockNow
const advanceBy = async (ms: number, advanceClock: boolean = false) => {
const clamped = Math.max(0, ms)
const target = timerNow + clamped
if (advanceClock) {
clockNow += clamped
}
while (true) {
let next: { id: number; time: number; interval: number | null; callback: TimerCallback; args: any[] } | undefined
for (const timer of timers.values()) {
if (timer.time <= target && (!next || timer.time < next.time)) {
next = timer
}
}
if (!next) break
timerNow = next.time
timers.delete(next.id)
next.callback(...next.args)
if (next.interval !== null && !cleared.has(next.id)) {
timers.set(next.id, {
id: next.id,
time: timerNow + next.interval,
interval: next.interval,
callback: next.callback,
args: next.args,
})
} else {
cleared.delete(next.id)
}
await Promise.resolve()
}
timerNow = target
await Promise.resolve()
}
const restore = () => {
globalThis.setTimeout = original.setTimeout
globalThis.clearTimeout = original.clearTimeout
globalThis.setInterval = original.setInterval
globalThis.clearInterval = original.clearInterval
Date.now = original.dateNow
}
return { advanceBy, restore }
}
describe("todo-continuation-enforcer", () => {
let promptCalls: Array<{ sessionID: string; agent?: string; model?: { providerID?: string; modelID?: string }; text: string }>
let toastCalls: Array<{ title: string; message: string }>
let fakeTimers: FakeTimers
interface MockMessage {
info: {
@@ -60,6 +174,7 @@ describe("todo-continuation-enforcer", () => {
}
beforeEach(() => {
fakeTimers = createFakeTimers()
_resetForTesting()
promptCalls = []
toastCalls = []
@@ -67,6 +182,7 @@ describe("todo-continuation-enforcer", () => {
})
afterEach(() => {
fakeTimers.restore()
_resetForTesting()
})
@@ -85,12 +201,12 @@ describe("todo-continuation-enforcer", () => {
})
// #then - countdown toast shown
await new Promise(r => setTimeout(r, 100))
await fakeTimers.advanceBy(100)
expect(toastCalls.length).toBeGreaterThanOrEqual(1)
expect(toastCalls[0].title).toBe("Todo Continuation")
// #then - after countdown, continuation injected
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
expect(promptCalls.length).toBe(1)
expect(promptCalls[0].text).toContain("TODO CONTINUATION")
})
@@ -112,7 +228,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation injected
expect(promptCalls).toHaveLength(0)
@@ -132,7 +248,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation injected
expect(promptCalls).toHaveLength(0)
@@ -150,7 +266,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID: otherSession } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation injected
expect(promptCalls).toHaveLength(0)
@@ -170,7 +286,7 @@ describe("todo-continuation-enforcer", () => {
})
// #then - continuation injected for background task session
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
expect(promptCalls.length).toBe(1)
expect(promptCalls[0].sessionID).toBe(bgTaskSession)
})
@@ -190,7 +306,7 @@ describe("todo-continuation-enforcer", () => {
})
// #when - wait past grace period (500ms), then user sends message
await new Promise(r => setTimeout(r, 600))
await fakeTimers.advanceBy(600, true)
await hook.handler({
event: {
type: "message.updated",
@@ -199,7 +315,7 @@ describe("todo-continuation-enforcer", () => {
})
// #then - wait past countdown time and verify no injection (countdown was cancelled)
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
expect(promptCalls).toHaveLength(0)
})
@@ -223,9 +339,9 @@ describe("todo-continuation-enforcer", () => {
},
})
// #then - countdown should continue (message was ignored)
// #then - countdown should continue (message was ignored)
// wait past 2s countdown and verify injection happens
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
expect(promptCalls).toHaveLength(1)
})
@@ -242,7 +358,7 @@ describe("todo-continuation-enforcer", () => {
})
// #when - assistant starts responding
await new Promise(r => setTimeout(r, 500))
await fakeTimers.advanceBy(500)
await hook.handler({
event: {
type: "message.part.updated",
@@ -250,7 +366,7 @@ describe("todo-continuation-enforcer", () => {
},
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation injected (cancelled)
expect(promptCalls).toHaveLength(0)
@@ -269,12 +385,12 @@ describe("todo-continuation-enforcer", () => {
})
// #when - tool starts executing
await new Promise(r => setTimeout(r, 500))
await fakeTimers.advanceBy(500)
await hook.handler({
event: { type: "tool.execute.before", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation injected (cancelled)
expect(promptCalls).toHaveLength(0)
@@ -295,7 +411,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation injected
expect(promptCalls).toHaveLength(0)
@@ -317,7 +433,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected
expect(promptCalls.length).toBe(1)
@@ -336,12 +452,12 @@ describe("todo-continuation-enforcer", () => {
})
// #when - session is deleted during countdown
await new Promise(r => setTimeout(r, 500))
await fakeTimers.advanceBy(500)
await hook.handler({
event: { type: "session.deleted", properties: { info: { id: sessionID } } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation injected (cleaned up)
expect(promptCalls).toHaveLength(0)
@@ -362,7 +478,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 100))
await fakeTimers.advanceBy(100)
expect(toastCalls.length).toBeGreaterThanOrEqual(1)
})
@@ -379,7 +495,7 @@ describe("todo-continuation-enforcer", () => {
})
// #then - multiple toast updates during countdown (2s countdown = 2 toasts: "2s" and "1s")
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
expect(toastCalls.length).toBeGreaterThanOrEqual(2)
expect(toastCalls[0].message).toContain("2s")
})
@@ -395,7 +511,7 @@ describe("todo-continuation-enforcer", () => {
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3500))
await fakeTimers.advanceBy(3500)
// #then - first injection happened
expect(promptCalls.length).toBe(1)
@@ -404,7 +520,7 @@ describe("todo-continuation-enforcer", () => {
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3500))
await fakeTimers.advanceBy(3500)
// #then - second injection also happened (no throttle blocking)
expect(promptCalls.length).toBe(2)
@@ -439,7 +555,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
// #then - continuation injected (non-abort errors don't block)
expect(promptCalls.length).toBe(1)
@@ -472,7 +588,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (last message was aborted)
expect(promptCalls).toHaveLength(0)
@@ -490,12 +606,12 @@ describe("todo-continuation-enforcer", () => {
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - session goes idle
// #when - session goes idle
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected (no abort)
expect(promptCalls.length).toBe(1)
@@ -518,7 +634,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected (last message is user, not aborted assistant)
expect(promptCalls.length).toBe(1)
@@ -541,7 +657,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (abort error detected)
expect(promptCalls).toHaveLength(0)
@@ -566,12 +682,12 @@ describe("todo-continuation-enforcer", () => {
},
})
// #when - session goes idle immediately after
// #when - session goes idle immediately after
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (abort detected via event)
expect(promptCalls).toHaveLength(0)
@@ -601,7 +717,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (abort detected via event)
expect(promptCalls).toHaveLength(0)
@@ -627,13 +743,13 @@ describe("todo-continuation-enforcer", () => {
})
// #when - wait >3s then idle fires
await new Promise(r => setTimeout(r, 3100))
await fakeTimers.advanceBy(3100, true)
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected (abort flag is stale)
expect(promptCalls.length).toBeGreaterThan(0)
@@ -659,7 +775,7 @@ describe("todo-continuation-enforcer", () => {
})
// #when - user sends new message (clears abort flag)
await new Promise(r => setTimeout(r, 600))
await fakeTimers.advanceBy(600)
await hook.handler({
event: {
type: "message.updated",
@@ -672,7 +788,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected (abort flag was cleared by user activity)
expect(promptCalls.length).toBeGreaterThan(0)
@@ -710,7 +826,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected (abort flag was cleared by assistant activity)
expect(promptCalls.length).toBeGreaterThan(0)
@@ -748,7 +864,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected (abort flag was cleared by tool execution)
expect(promptCalls.length).toBeGreaterThan(0)
@@ -778,7 +894,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (event-based detection wins over API)
expect(promptCalls).toHaveLength(0)
@@ -800,7 +916,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (API fallback detected the abort)
expect(promptCalls).toHaveLength(0)
@@ -820,7 +936,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
// #then - prompt call made, model is undefined when no context (expected behavior)
expect(promptCalls.length).toBe(1)
@@ -867,7 +983,7 @@ describe("todo-continuation-enforcer", () => {
// #when - session goes idle
await hook.handler({ event: { type: "session.idle", properties: { sessionID } } })
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
// #then - model should be extracted from assistant message's flat modelID/providerID
expect(promptCalls.length).toBe(1)
@@ -919,7 +1035,7 @@ describe("todo-continuation-enforcer", () => {
// #when - session goes idle
await hook.handler({ event: { type: "session.idle", properties: { sessionID } } })
await new Promise(r => setTimeout(r, 2500))
await fakeTimers.advanceBy(2500)
// #then - continuation uses Sisyphus (skipped compaction agent)
expect(promptCalls.length).toBe(1)
@@ -964,7 +1080,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (compaction is in default skipAgents)
expect(promptCalls).toHaveLength(0)
@@ -1010,7 +1126,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - no continuation (prometheus found after filtering compaction, prometheus is in skipAgents)
expect(promptCalls).toHaveLength(0)
@@ -1057,7 +1173,7 @@ describe("todo-continuation-enforcer", () => {
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
await fakeTimers.advanceBy(3000)
// #then - continuation injected (no agents to skip)
expect(promptCalls.length).toBe(1)
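
This `createFakeTimers` variant goes further than the executor suite's helper: it also patches `setInterval`/`clearInterval` and `Date.now`, and the optional second argument to `advanceBy` moves the wall clock along with the timers — which is what lets the stale-abort-flag test above simulate more than 3s of real time:

```
const timers = createFakeTimers()
const before = Date.now()            // patched — returns the fake wall clock
await timers.advanceBy(3100, true)   // fire due timers AND advance Date.now()
console.assert(Date.now() - before === 3100)
await timers.advanceBy(2500)         // fire due timers only; wall clock frozen
console.assert(Date.now() - before === 3100)
timers.restore()
```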

View File

@@ -118,7 +118,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
if (externalNotifier.detected && !forceEnable) {
// External notification plugin detected - skip our notification to avoid conflicts
console.warn(getNotificationConflictWarning(externalNotifier.pluginName!));
log(getNotificationConflictWarning(externalNotifier.pluginName!));
log("session-notification disabled due to external notifier conflict", {
detected: externalNotifier.pluginName,
allPlugins: externalNotifier.allPlugins,

View File

@@ -46,7 +46,7 @@ describe("Agent Config Integration", () => {
const config = {
sisyphus: { model: "anthropic/claude-opus-4-5" },
oracle: { model: "openai/gpt-5.2" },
librarian: { model: "opencode/big-pickle" },
librarian: { model: "opencode/glm-4.7-free" },
}
// #when - migration is applied
@@ -65,7 +65,7 @@ describe("Agent Config Integration", () => {
Sisyphus: { model: "anthropic/claude-opus-4-5" },
oracle: { model: "openai/gpt-5.2" },
"Prometheus (Planner)": { model: "anthropic/claude-opus-4-5" },
librarian: { model: "opencode/big-pickle" },
librarian: { model: "opencode/glm-4.7-free" },
}
// #when - migration is applied

View File

@@ -159,13 +159,13 @@ export async function updateConnectedProvidersCache(client: {
writeConnectedProvidersCache(connected)
// Also update provider-models cache if model.list is available
// Always update provider-models cache (overwrite with fresh data)
let modelsByProvider: Record<string, string[]> = {}
if (client.model?.list) {
try {
const modelsResult = await client.model.list()
const models = modelsResult.data ?? []
const modelsByProvider: Record<string, string[]> = {}
for (const model of models) {
if (!modelsByProvider[model.provider]) {
modelsByProvider[model.provider] = []
@@ -173,19 +173,21 @@ export async function updateConnectedProvidersCache(client: {
modelsByProvider[model.provider].push(model.id)
}
writeProviderModelsCache({
models: modelsByProvider,
connected,
})
log("[connected-providers-cache] Provider-models cache updated", {
log("[connected-providers-cache] Fetched models from API", {
providerCount: Object.keys(modelsByProvider).length,
totalModels: models.length,
})
} catch (modelErr) {
log("[connected-providers-cache] Error fetching models", { error: String(modelErr) })
log("[connected-providers-cache] Error fetching models, writing empty cache", { error: String(modelErr) })
}
} else {
log("[connected-providers-cache] client.model.list not available, writing empty cache")
}
writeProviderModelsCache({
models: modelsByProvider,
connected,
})
} catch (err) {
log("[connected-providers-cache] Error updating cache", { error: String(err) })
}
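
The subtle behavioral change here: the cache write moved outside the success branch, so a failed or missing `model.list` now overwrites any stale cache with an empty map, which downstream readers treat as "fall back to models.json". A self-contained sketch of the pattern (function and parameter names are illustrative, not the module's API):

```
// Unconditional write is the point: failure leaves an honest empty cache
// rather than stale models from a previous run.
async function refreshProviderModels(
  listModels: (() => Promise<Array<{ provider: string; id: string }>>) | undefined,
  write: (models: Record<string, string[]>) => void,
): Promise<void> {
  let byProvider: Record<string, string[]> = {}
  if (listModels) {
    try {
      for (const m of await listModels()) {
        (byProvider[m.provider] ??= []).push(m.id)
      }
    } catch {
      byProvider = {} // fetch failed — write empty, don't keep stale data
    }
  }
  write(byProvider)
}
```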

View File

@@ -32,3 +32,4 @@ export * from "./connected-providers-cache"
export * from "./case-insensitive"
export * from "./session-utils"
export * from "./tmux"
export * from "./model-suggestion-retry"

View File

@@ -2,7 +2,7 @@ import { describe, it, expect, beforeEach, afterEach } from "bun:test"
import { mkdtempSync, writeFileSync, rmSync } from "fs"
import { tmpdir } from "os"
import { join } from "path"
import { fetchAvailableModels, fuzzyMatchModel, getConnectedProviders, __resetModelCache } from "./model-availability"
import { fetchAvailableModels, fuzzyMatchModel, getConnectedProviders, __resetModelCache, isModelAvailable } from "./model-availability"
describe("fetchAvailableModels", () => {
let tempDir: string
@@ -59,6 +59,28 @@ describe("fetchAvailableModels", () => {
expect(result.size).toBe(0)
})
it("#given connectedProviders unknown but client can list #when fetchAvailableModels called with client #then returns models from API filtered by connected providers", async () => {
const client = {
provider: {
list: async () => ({ data: { connected: ["openai"] } }),
},
model: {
list: async () => ({
data: [
{ id: "gpt-5.2-codex", provider: "openai" },
{ id: "gemini-3-pro", provider: "google" },
],
}),
},
}
const result = await fetchAvailableModels(client)
expect(result).toBeInstanceOf(Set)
expect(result.has("openai/gpt-5.2-codex")).toBe(true)
expect(result.has("google/gemini-3-pro")).toBe(false)
})
it("#given cache file not found #when fetchAvailableModels called with connectedProviders #then returns empty Set", async () => {
const result = await fetchAvailableModels(undefined, { connectedProviders: ["openai"] })
@@ -66,6 +88,28 @@ describe("fetchAvailableModels", () => {
expect(result.size).toBe(0)
})
it("#given cache missing but client can list #when fetchAvailableModels called with connectedProviders #then returns models from API", async () => {
const client = {
provider: {
list: async () => ({ data: { connected: ["openai", "google"] } }),
},
model: {
list: async () => ({
data: [
{ id: "gpt-5.2-codex", provider: "openai" },
{ id: "gemini-3-pro", provider: "google" },
],
}),
},
}
const result = await fetchAvailableModels(client, { connectedProviders: ["openai", "google"] })
expect(result).toBeInstanceOf(Set)
expect(result.has("openai/gpt-5.2-codex")).toBe(true)
expect(result.has("google/gemini-3-pro")).toBe(true)
})
it("#given cache read twice #when second call made with same providers #then reads fresh each time", async () => {
writeModelsCache({
openai: { id: "openai", models: { "gpt-5.2": { id: "gpt-5.2" } } },
@@ -122,6 +166,19 @@ describe("fuzzyMatchModel", () => {
expect(result).toBe("openai/gpt-5.2")
})
// #given available model with preview suffix
// #when searching with provider-prefixed base model
// #then return preview model
it("should match preview suffix for gemini-3-flash", () => {
const available = new Set(["google/gemini-3-flash-preview"])
const result = fuzzyMatchModel(
"google/gemini-3-flash",
available,
["google"],
)
expect(result).toBe("google/gemini-3-flash-preview")
})
// #given available models with partial matches
// #when searching for a substring
// #then return exact match if it exists
@@ -547,13 +604,13 @@ describe("fetchAvailableModels with provider-models cache (whitelist-filtered)",
it("should prefer provider-models cache over models.json", async () => {
writeProviderModelsCache({
models: {
opencode: ["big-pickle", "gpt-5-nano"],
opencode: ["glm-4.7-free", "gpt-5-nano"],
anthropic: ["claude-opus-4-5"]
},
connected: ["opencode", "anthropic"]
})
writeModelsCache({
opencode: { models: { "big-pickle": {}, "gpt-5-nano": {}, "gpt-5.2": {} } },
opencode: { models: { "glm-4.7-free": {}, "gpt-5-nano": {}, "gpt-5.2": {} } },
anthropic: { models: { "claude-opus-4-5": {}, "claude-sonnet-4-5": {} } }
})
@@ -562,19 +619,40 @@ describe("fetchAvailableModels with provider-models cache (whitelist-filtered)",
})
expect(result.size).toBe(3)
expect(result.has("opencode/big-pickle")).toBe(true)
expect(result.has("opencode/glm-4.7-free")).toBe(true)
expect(result.has("opencode/gpt-5-nano")).toBe(true)
expect(result.has("anthropic/claude-opus-4-5")).toBe(true)
expect(result.has("opencode/gpt-5.2")).toBe(false)
expect(result.has("anthropic/claude-sonnet-4-5")).toBe(false)
})
//#given provider-models cache exists but has no models (API failure)
//#when fetchAvailableModels called
//#then falls back to models.json so fuzzy matching can still work
it("should fall back to models.json when provider-models cache is empty", async () => {
writeProviderModelsCache({
models: {
},
connected: ["google"],
})
writeModelsCache({
google: { models: { "gemini-3-flash-preview": {} } },
})
const availableModels = await fetchAvailableModels(undefined, {
connectedProviders: ["google"],
})
const match = fuzzyMatchModel("google/gemini-3-flash", availableModels, ["google"])
expect(match).toBe("google/gemini-3-flash-preview")
})
//#given only models.json exists (no provider-models cache)
//#when fetchAvailableModels called
//#then falls back to models.json (no whitelist filtering)
it("should fallback to models.json when provider-models cache not found", async () => {
writeModelsCache({
opencode: { models: { "big-pickle": {}, "gpt-5-nano": {}, "gpt-5.2": {} } },
opencode: { models: { "glm-4.7-free": {}, "gpt-5-nano": {}, "gpt-5.2": {} } },
})
const result = await fetchAvailableModels(undefined, {
@@ -582,7 +660,7 @@ describe("fetchAvailableModels with provider-models cache (whitelist-filtered)",
})
expect(result.size).toBe(3)
expect(result.has("opencode/big-pickle")).toBe(true)
expect(result.has("opencode/glm-4.7-free")).toBe(true)
expect(result.has("opencode/gpt-5-nano")).toBe(true)
expect(result.has("opencode/gpt-5.2")).toBe(true)
})
@@ -593,7 +671,7 @@ describe("fetchAvailableModels with provider-models cache (whitelist-filtered)",
it("should filter by connectedProviders even with provider-models cache", async () => {
writeProviderModelsCache({
models: {
opencode: ["big-pickle"],
opencode: ["glm-4.7-free"],
anthropic: ["claude-opus-4-5"],
google: ["gemini-3-pro"]
},
@@ -605,8 +683,43 @@ describe("fetchAvailableModels with provider-models cache (whitelist-filtered)",
})
expect(result.size).toBe(1)
expect(result.has("opencode/big-pickle")).toBe(true)
expect(result.has("opencode/glm-4.7-free")).toBe(true)
expect(result.has("anthropic/claude-opus-4-5")).toBe(false)
expect(result.has("google/gemini-3-pro")).toBe(false)
})
})
describe("isModelAvailable", () => {
it("returns true when model exists via fuzzy match", () => {
// #given
const available = new Set(["openai/gpt-5.2-codex", "anthropic/claude-opus-4-5"])
// #when
const result = isModelAvailable("gpt-5.2-codex", available)
// #then
expect(result).toBe(true)
})
it("returns false when model not found", () => {
// #given
const available = new Set(["anthropic/claude-opus-4-5"])
// #when
const result = isModelAvailable("gpt-5.2-codex", available)
// #then
expect(result).toBe(false)
})
it("returns false for empty available set", () => {
// #given
const available = new Set<string>()
// #when
const result = isModelAvailable("gpt-5.2-codex", available)
// #then
expect(result).toBe(false)
})
})
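
For orientation, a small usage sketch of the two helpers these tests exercise (model names illustrative; behavior as asserted above):

const available = new Set(["google/gemini-3-flash-preview", "openai/gpt-5.2-codex"])

// fuzzyMatchModel tolerates suffixes such as "-preview", optionally scoped to providers.
fuzzyMatchModel("google/gemini-3-flash", available, ["google"]) // => "google/gemini-3-flash-preview"

// isModelAvailable is the thin wrapper defined below: fuzzy match by name, no provider filter.
isModelAvailable("gpt-5.2-codex", available) // => true
isModelAvailable("claude-opus-4-5", available) // => false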

View File

@@ -87,6 +87,20 @@ export function fuzzyMatchModel(
return result
}
/**
* Check if a target model is available (fuzzy match by model name, no provider filtering)
*
* @param targetModel - Model name to check (e.g., "gpt-5.2-codex")
* @param availableModels - Set of available models in "provider/model" format
* @returns true if model is available, false otherwise
*/
export function isModelAvailable(
targetModel: string,
availableModels: Set<string>,
): boolean {
return fuzzyMatchModel(targetModel, availableModels) !== null
}
export async function getConnectedProviders(client: any): Promise<string[]> {
if (!client?.provider?.list) {
log("[getConnectedProviders] client.provider.list not available")
@@ -105,85 +119,144 @@ export async function getConnectedProviders(client: any): Promise<string[]> {
}
export async function fetchAvailableModels(
_client?: any,
client?: any,
options?: { connectedProviders?: string[] | null }
): Promise<Set<string>> {
const connectedProvidersUnknown = options?.connectedProviders === null || options?.connectedProviders === undefined
let connectedProviders = options?.connectedProviders ?? null
let connectedProvidersUnknown = connectedProviders === null
log("[fetchAvailableModels] CALLED", {
connectedProvidersUnknown,
connectedProviders: options?.connectedProviders
})
if (connectedProvidersUnknown && client) {
const liveConnected = await getConnectedProviders(client)
if (liveConnected.length > 0) {
connectedProviders = liveConnected
connectedProvidersUnknown = false
log("[fetchAvailableModels] connected providers fetched from client", { count: liveConnected.length })
}
}
if (connectedProvidersUnknown) {
if (client?.model?.list) {
const modelSet = new Set<string>()
try {
const modelsResult = await client.model.list()
const models = modelsResult.data ?? []
for (const model of models) {
if (model?.provider && model?.id) {
modelSet.add(`${model.provider}/${model.id}`)
}
}
log("[fetchAvailableModels] fetched models from client without provider filter", {
count: modelSet.size,
})
return modelSet
} catch (err) {
log("[fetchAvailableModels] client.model.list error", { error: String(err) })
}
}
log("[fetchAvailableModels] connected providers unknown, returning empty set for fallback resolution")
return new Set<string>()
}
const connectedProviders = options!.connectedProviders!
const connectedSet = new Set(connectedProviders)
const connectedProvidersList = connectedProviders ?? []
const connectedSet = new Set(connectedProvidersList)
const modelSet = new Set<string>()
const providerModelsCache = readProviderModelsCache()
if (providerModelsCache) {
log("[fetchAvailableModels] using provider-models cache (whitelist-filtered)")
for (const [providerId, modelIds] of Object.entries(providerModelsCache.models)) {
if (!connectedSet.has(providerId)) {
continue
const providerCount = Object.keys(providerModelsCache.models).length
if (providerCount === 0) {
log("[fetchAvailableModels] provider-models cache empty, falling back to models.json")
} else {
log("[fetchAvailableModels] using provider-models cache (whitelist-filtered)")
for (const [providerId, modelIds] of Object.entries(providerModelsCache.models)) {
if (!connectedSet.has(providerId)) {
continue
}
for (const modelId of modelIds) {
modelSet.add(`${providerId}/${modelId}`)
}
}
for (const modelId of modelIds) {
modelSet.add(`${providerId}/${modelId}`)
log("[fetchAvailableModels] parsed from provider-models cache", {
count: modelSet.size,
connectedProviders: connectedProvidersList.slice(0, 5)
})
if (modelSet.size > 0) {
return modelSet
}
log("[fetchAvailableModels] provider-models cache produced no models for connected providers, falling back to models.json")
}
log("[fetchAvailableModels] parsed from provider-models cache", {
count: modelSet.size,
connectedProviders: connectedProviders.slice(0, 5)
})
return modelSet
}
log("[fetchAvailableModels] provider-models cache not found, falling back to models.json")
const cacheFile = join(getOpenCodeCacheDir(), "models.json")
if (!existsSync(cacheFile)) {
log("[fetchAvailableModels] models.json cache file not found, returning empty set")
return modelSet
}
log("[fetchAvailableModels] models.json cache file not found, falling back to client")
} else {
try {
const content = readFileSync(cacheFile, "utf-8")
const data = JSON.parse(content) as Record<string, { id?: string; models?: Record<string, { id?: string }> }>
try {
const content = readFileSync(cacheFile, "utf-8")
const data = JSON.parse(content) as Record<string, { id?: string; models?: Record<string, { id?: string }> }>
const providerIds = Object.keys(data)
log("[fetchAvailableModels] providers found in models.json", { count: providerIds.length, providers: providerIds.slice(0, 10) })
const providerIds = Object.keys(data)
log("[fetchAvailableModels] providers found in models.json", { count: providerIds.length, providers: providerIds.slice(0, 10) })
for (const providerId of providerIds) {
if (!connectedSet.has(providerId)) {
continue
}
for (const providerId of providerIds) {
if (!connectedSet.has(providerId)) {
continue
const provider = data[providerId]
const models = provider?.models
if (!models || typeof models !== "object") continue
for (const modelKey of Object.keys(models)) {
modelSet.add(`${providerId}/${modelKey}`)
}
}
const provider = data[providerId]
const models = provider?.models
if (!models || typeof models !== "object") continue
log("[fetchAvailableModels] parsed models from models.json (NO whitelist filtering)", {
count: modelSet.size,
connectedProviders: connectedProvidersList.slice(0, 5)
})
for (const modelKey of Object.keys(models)) {
modelSet.add(`${providerId}/${modelKey}`)
if (modelSet.size > 0) {
return modelSet
}
} catch (err) {
log("[fetchAvailableModels] error", { error: String(err) })
}
log("[fetchAvailableModels] parsed models from models.json (NO whitelist filtering)", {
count: modelSet.size,
connectedProviders: connectedProviders.slice(0, 5)
})
return modelSet
} catch (err) {
log("[fetchAvailableModels] error", { error: String(err) })
return modelSet
}
if (client?.model?.list) {
try {
const modelsResult = await client.model.list()
const models = modelsResult.data ?? []
for (const model of models) {
if (!model?.provider || !model?.id) continue
if (connectedSet.has(model.provider)) {
modelSet.add(`${model.provider}/${model.id}`)
}
}
log("[fetchAvailableModels] fetched models from client (filtered)", {
count: modelSet.size,
connectedProviders: connectedProvidersList.slice(0, 5),
})
} catch (err) {
log("[fetchAvailableModels] client.model.list error", { error: String(err) })
}
}
return modelSet
}
export function __resetModelCache(): void {}
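
The rewritten function above amounts to a three-tier lookup. A condensed sketch, where the three helper names are hypothetical stand-ins for the inline logic shown in the diff:

async function resolveAvailableModels(client: any, connected: string[]): Promise<Set<string>> {
  const fromProviderCache = readWhitelistedProviderModels(connected) // tier 1: provider-models cache
  if (fromProviderCache.size > 0) return fromProviderCache
  const fromModelsJson = readModelsJsonModels(connected)             // tier 2: models.json on disk
  if (fromModelsJson.size > 0) return fromModelsJson
  return listModelsViaClient(client, connected)                      // tier 3: live client.model.list()
}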

View File

@@ -141,19 +141,19 @@ describe("AGENT_MODEL_REQUIREMENTS", () => {
expect(primary.providers[0]).toBe("openai")
})
test("atlas has valid fallbackChain with claude-sonnet-4-5 as primary", () => {
test("atlas has valid fallbackChain with k2p5 as primary (kimi-for-coding prioritized)", () => {
// #given - atlas agent requirement
const atlas = AGENT_MODEL_REQUIREMENTS["atlas"]
// #when - accessing Atlas requirement
// #then - fallbackChain exists with claude-sonnet-4-5 as first entry
// #then - fallbackChain exists with k2p5 as first entry (kimi-for-coding prioritized)
expect(atlas).toBeDefined()
expect(atlas.fallbackChain).toBeArray()
expect(atlas.fallbackChain.length).toBeGreaterThan(0)
const primary = atlas.fallbackChain[0]
expect(primary.model).toBe("claude-sonnet-4-5")
expect(primary.providers[0]).toBe("anthropic")
expect(primary.model).toBe("k2p5")
expect(primary.providers[0]).toBe("kimi-for-coding")
})
test("all 9 builtin agents have valid fallbackChain arrays", () => {
@@ -208,6 +208,22 @@ describe("CATEGORY_MODEL_REQUIREMENTS", () => {
expect(primary.providers[0]).toBe("openai")
})
test("deep has valid fallbackChain with gpt-5.2-codex as primary", () => {
// #given - deep category requirement
const deep = CATEGORY_MODEL_REQUIREMENTS["deep"]
// #when - accessing deep requirement
// #then - fallbackChain exists with gpt-5.2-codex as first entry, medium variant
expect(deep).toBeDefined()
expect(deep.fallbackChain).toBeArray()
expect(deep.fallbackChain.length).toBeGreaterThan(0)
const primary = deep.fallbackChain[0]
expect(primary.variant).toBe("medium")
expect(primary.model).toBe("gpt-5.2-codex")
expect(primary.providers[0]).toBe("openai")
})
test("visual-engineering has valid fallbackChain with gemini-3-pro as primary", () => {
// #given - visual-engineering category requirement
const visualEngineering = CATEGORY_MODEL_REQUIREMENTS["visual-engineering"]
@@ -300,11 +316,12 @@ describe("CATEGORY_MODEL_REQUIREMENTS", () => {
expect(primary.providers[0]).toBe("google")
})
test("all 7 categories have valid fallbackChain arrays", () => {
// #given - list of 7 category names
test("all 8 categories have valid fallbackChain arrays", () => {
// #given - list of 8 category names
const expectedCategories = [
"visual-engineering",
"ultrabrain",
"deep",
"artistry",
"quick",
"unspecified-low",
@@ -316,7 +333,7 @@ describe("CATEGORY_MODEL_REQUIREMENTS", () => {
const definedCategories = Object.keys(CATEGORY_MODEL_REQUIREMENTS)
// #then - all categories present with valid fallbackChain
expect(definedCategories).toHaveLength(7)
expect(definedCategories).toHaveLength(8)
for (const category of expectedCategories) {
const requirement = CATEGORY_MODEL_REQUIREMENTS[category]
expect(requirement).toBeDefined()
@@ -353,7 +370,7 @@ describe("FallbackEntry type", () => {
// #given - a FallbackEntry without variant
const entry: FallbackEntry = {
providers: ["opencode", "anthropic"],
model: "big-pickle",
model: "glm-4.7-free",
}
// #when - accessing variant
@@ -383,7 +400,7 @@ describe("ModelRequirement type", () => {
test("ModelRequirement variant is optional", () => {
// #given - a ModelRequirement without top-level variant
const requirement: ModelRequirement = {
fallbackChain: [{ providers: ["opencode"], model: "big-pickle" }],
fallbackChain: [{ providers: ["opencode"], model: "glm-4.7-free" }],
}
// #when - accessing variant
@@ -407,20 +424,38 @@ describe("ModelRequirement type", () => {
}
})
test("all fallbackChain entries have non-empty providers array", () => {
// #given - all agent and category requirements
const allRequirements = [
...Object.values(AGENT_MODEL_REQUIREMENTS),
...Object.values(CATEGORY_MODEL_REQUIREMENTS),
]
test("all fallbackChain entries have non-empty providers array", () => {
// #given - all agent and category requirements
const allRequirements = [
...Object.values(AGENT_MODEL_REQUIREMENTS),
...Object.values(CATEGORY_MODEL_REQUIREMENTS),
]
// #when - checking each entry in fallbackChain
// #then - all have non-empty providers array
for (const req of allRequirements) {
for (const entry of req.fallbackChain) {
expect(entry.providers).toBeArray()
expect(entry.providers.length).toBeGreaterThan(0)
}
}
})
})
describe("requiresModel field in categories", () => {
test("deep category has requiresModel set to gpt-5.2-codex", () => {
// #given
const deep = CATEGORY_MODEL_REQUIREMENTS["deep"]
// #when / #then
expect(deep.requiresModel).toBe("gpt-5.2-codex")
})
test("artistry category has requiresModel set to gemini-3-pro", () => {
// #given
const artistry = CATEGORY_MODEL_REQUIREMENTS["artistry"]
// #when / #then
expect(artistry.requiresModel).toBe("gemini-3-pro")
})
})

View File

@@ -7,12 +7,15 @@ export type FallbackEntry = {
export type ModelRequirement = {
fallbackChain: FallbackEntry[]
variant?: string // Default variant (used when entry doesn't specify one)
requiresModel?: string // If set, only activates when this model is available (fuzzy match)
}
export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
sisyphus: {
fallbackChain: [
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["kimi-for-coding"], model: "k2p5" },
{ providers: ["opencode"], model: "kimi-k2.5-free" },
{ providers: ["zai-coding-plan"], model: "glm-4.7" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2-codex", variant: "medium" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
@@ -21,14 +24,14 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
oracle: {
fallbackChain: [
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "max" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
],
},
librarian: {
fallbackChain: [
{ providers: ["zai-coding-plan"], model: "glm-4.7" },
{ providers: ["opencode"], model: "big-pickle" },
{ providers: ["opencode"], model: "glm-4.7-free" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-sonnet-4-5" },
],
},
@@ -44,6 +47,8 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-flash" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2" },
{ providers: ["zai-coding-plan"], model: "glm-4.6v" },
{ providers: ["kimi-for-coding"], model: "k2p5" },
{ providers: ["opencode"], model: "kimi-k2.5-free" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-haiku-4-5" },
{ providers: ["opencode"], model: "gpt-5-nano" },
],
@@ -51,6 +56,8 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
prometheus: {
fallbackChain: [
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["kimi-for-coding"], model: "k2p5" },
{ providers: ["opencode"], model: "kimi-k2.5-free" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
],
@@ -58,6 +65,8 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
metis: {
fallbackChain: [
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["kimi-for-coding"], model: "k2p5" },
{ providers: ["opencode"], model: "kimi-k2.5-free" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "max" },
],
@@ -65,12 +74,14 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
momus: {
fallbackChain: [
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "medium" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "max" },
],
},
atlas: {
fallbackChain: [
{ providers: ["kimi-for-coding"], model: "k2p5" },
{ providers: ["opencode"], model: "kimi-k2.5-free" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-sonnet-4-5" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
@@ -83,23 +94,32 @@ export const CATEGORY_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
fallbackChain: [
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
{ providers: ["zai-coding-plan"], model: "glm-4.7" },
],
},
ultrabrain: {
fallbackChain: [
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2-codex", variant: "xhigh" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
],
},
artistry: {
fallbackChain: [
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "max" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2" },
],
},
deep: {
fallbackChain: [
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2-codex", variant: "medium" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "max" },
],
requiresModel: "gpt-5.2-codex",
},
artistry: {
fallbackChain: [
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "max" },
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-5", variant: "max" },
{ providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2" },
],
requiresModel: "gemini-3-pro",
},
quick: {
fallbackChain: [
{ providers: ["anthropic", "github-copilot", "opencode"], model: "claude-haiku-4-5" },

View File

@@ -388,6 +388,85 @@ describe("resolveModelWithFallback", () => {
expect(result!.model).toBe("anthropic/claude-opus-4-5")
expect(result!.source).toBe("provider-fallback")
})
test("cross-provider fuzzy match when preferred provider unavailable (librarian scenario)", () => {
// #given - glm-4.7 is defined for zai-coding-plan, but only opencode has it
const input: ExtendedModelResolutionInput = {
fallbackChain: [
{ providers: ["zai-coding-plan"], model: "glm-4.7" },
{ providers: ["anthropic"], model: "claude-sonnet-4-5" },
],
availableModels: new Set(["opencode/glm-4.7", "anthropic/claude-sonnet-4-5"]),
systemDefaultModel: "google/gemini-3-pro",
}
// #when
const result = resolveModelWithFallback(input)
// #then - should find glm-4.7 from opencode via cross-provider fuzzy match
expect(result!.model).toBe("opencode/glm-4.7")
expect(result!.source).toBe("provider-fallback")
expect(logSpy).toHaveBeenCalledWith("Model resolved via fallback chain (cross-provider fuzzy match)", {
model: "glm-4.7",
match: "opencode/glm-4.7",
variant: undefined,
})
})
test("prefers specified provider over cross-provider match", () => {
// #given - both zai-coding-plan and opencode have glm-4.7
const input: ExtendedModelResolutionInput = {
fallbackChain: [
{ providers: ["zai-coding-plan"], model: "glm-4.7" },
],
availableModels: new Set(["zai-coding-plan/glm-4.7", "opencode/glm-4.7"]),
systemDefaultModel: "google/gemini-3-pro",
}
// #when
const result = resolveModelWithFallback(input)
// #then - should prefer zai-coding-plan (specified provider) over opencode
expect(result!.model).toBe("zai-coding-plan/glm-4.7")
expect(result!.source).toBe("provider-fallback")
})
test("cross-provider match preserves variant from entry", () => {
// #given - entry has variant, model found via cross-provider
const input: ExtendedModelResolutionInput = {
fallbackChain: [
{ providers: ["zai-coding-plan"], model: "glm-4.7", variant: "high" },
],
availableModels: new Set(["opencode/glm-4.7"]),
systemDefaultModel: "google/gemini-3-pro",
}
// #when
const result = resolveModelWithFallback(input)
// #then - variant should be preserved
expect(result!.model).toBe("opencode/glm-4.7")
expect(result!.variant).toBe("high")
})
test("cross-provider match tries next entry if no match found anywhere", () => {
// #given - first entry model not available anywhere, second entry available
const input: ExtendedModelResolutionInput = {
fallbackChain: [
{ providers: ["zai-coding-plan"], model: "nonexistent-model" },
{ providers: ["anthropic"], model: "claude-sonnet-4-5" },
],
availableModels: new Set(["anthropic/claude-sonnet-4-5"]),
systemDefaultModel: "google/gemini-3-pro",
}
// #when
const result = resolveModelWithFallback(input)
// #then - should fall through to second entry
expect(result!.model).toBe("anthropic/claude-sonnet-4-5")
expect(result!.source).toBe("provider-fallback")
})
})
describe("Step 4: System default fallback (no availability match)", () => {
@@ -626,6 +705,103 @@ describe("resolveModelWithFallback", () => {
})
})
describe("categoryDefaultModel (fuzzy matching for category defaults)", () => {
test("applies fuzzy matching to categoryDefaultModel when userModel not provided", () => {
// #given - gemini-3-pro is the category default, but only gemini-3-pro-preview is available
const input: ExtendedModelResolutionInput = {
categoryDefaultModel: "google/gemini-3-pro",
fallbackChain: [
{ providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
],
availableModels: new Set(["google/gemini-3-pro-preview", "anthropic/claude-opus-4-5"]),
systemDefaultModel: "anthropic/claude-sonnet-4-5",
}
// #when
const result = resolveModelWithFallback(input)
// #then - should fuzzy match gemini-3-pro → gemini-3-pro-preview
expect(result!.model).toBe("google/gemini-3-pro-preview")
expect(result!.source).toBe("category-default")
})
test("categoryDefaultModel uses exact match when available", () => {
// #given - exact match exists
const input: ExtendedModelResolutionInput = {
categoryDefaultModel: "google/gemini-3-pro",
fallbackChain: [
{ providers: ["google"], model: "gemini-3-pro" },
],
availableModels: new Set(["google/gemini-3-pro", "google/gemini-3-pro-preview"]),
systemDefaultModel: "anthropic/claude-sonnet-4-5",
}
// #when
const result = resolveModelWithFallback(input)
// #then - should use exact match
expect(result!.model).toBe("google/gemini-3-pro")
expect(result!.source).toBe("category-default")
})
test("categoryDefaultModel falls through to fallbackChain when no match in availableModels", () => {
// #given - categoryDefaultModel has no match, but fallbackChain does
const input: ExtendedModelResolutionInput = {
categoryDefaultModel: "google/gemini-3-pro",
fallbackChain: [
{ providers: ["anthropic"], model: "claude-opus-4-5" },
],
availableModels: new Set(["anthropic/claude-opus-4-5"]),
systemDefaultModel: "system/default",
}
// #when
const result = resolveModelWithFallback(input)
// #then - should fall through to fallbackChain
expect(result!.model).toBe("anthropic/claude-opus-4-5")
expect(result!.source).toBe("provider-fallback")
})
test("userModel takes priority over categoryDefaultModel", () => {
// #given - both userModel and categoryDefaultModel provided
const input: ExtendedModelResolutionInput = {
userModel: "anthropic/claude-opus-4-5",
categoryDefaultModel: "google/gemini-3-pro",
fallbackChain: [
{ providers: ["google"], model: "gemini-3-pro" },
],
availableModels: new Set(["google/gemini-3-pro-preview", "anthropic/claude-opus-4-5"]),
systemDefaultModel: "system/default",
}
// #when
const result = resolveModelWithFallback(input)
// #then - userModel wins
expect(result!.model).toBe("anthropic/claude-opus-4-5")
expect(result!.source).toBe("override")
})
test("categoryDefaultModel works when availableModels is empty but connected provider exists", () => {
// #given - no availableModels but connected provider cache exists
const cacheSpy = spyOn(connectedProvidersCache, "readConnectedProvidersCache").mockReturnValue(["google"])
const input: ExtendedModelResolutionInput = {
categoryDefaultModel: "google/gemini-3-pro",
availableModels: new Set(),
systemDefaultModel: "anthropic/claude-sonnet-4-5",
}
// #when
const result = resolveModelWithFallback(input)
// #then - should use categoryDefaultModel since google is connected
expect(result!.model).toBe("google/gemini-3-pro")
expect(result!.source).toBe("category-default")
cacheSpy.mockRestore()
})
})
describe("Optional systemDefaultModel", () => {
test("returns undefined when systemDefaultModel is undefined and no fallback found", () => {
// #given

View File

@@ -11,6 +11,7 @@ export type ModelResolutionInput = {
export type ModelSource =
| "override"
| "category-default"
| "provider-fallback"
| "system-default"
@@ -23,6 +24,7 @@ export type ModelResolutionResult = {
export type ExtendedModelResolutionInput = {
uiSelectedModel?: string
userModel?: string
categoryDefaultModel?: string
fallbackChain?: FallbackEntry[]
availableModels: Set<string>
systemDefaultModel?: string
@@ -44,7 +46,7 @@ export function resolveModel(input: ModelResolutionInput): string | undefined {
export function resolveModelWithFallback(
input: ExtendedModelResolutionInput,
): ModelResolutionResult | undefined {
const { uiSelectedModel, userModel, fallbackChain, availableModels, systemDefaultModel } = input
const { uiSelectedModel, userModel, categoryDefaultModel, fallbackChain, availableModels, systemDefaultModel } = input
// Step 1: UI Selection (highest priority - respects user's model choice in OpenCode UI)
const normalizedUiModel = normalizeModel(uiSelectedModel)
@@ -53,14 +55,43 @@ export function resolveModelWithFallback(
return { model: normalizedUiModel, source: "override" }
}
// Step 2: Config Override (from oh-my-opencode.json)
// Step 2: Config Override (from oh-my-opencode.json user config)
const normalizedUserModel = normalizeModel(userModel)
if (normalizedUserModel) {
log("Model resolved via config override", { model: normalizedUserModel })
return { model: normalizedUserModel, source: "override" }
}
// Step 3: Provider fallback chain (with availability check)
// Step 2.5: Category Default Model (from DEFAULT_CATEGORIES, with fuzzy matching)
const normalizedCategoryDefault = normalizeModel(categoryDefaultModel)
if (normalizedCategoryDefault) {
if (availableModels.size > 0) {
const parts = normalizedCategoryDefault.split("/")
const providerHint = parts.length >= 2 ? [parts[0]] : undefined
const match = fuzzyMatchModel(normalizedCategoryDefault, availableModels, providerHint)
if (match) {
log("Model resolved via category default (fuzzy matched)", { original: normalizedCategoryDefault, matched: match })
return { model: match, source: "category-default" }
}
} else {
const connectedProviders = readConnectedProvidersCache()
if (connectedProviders === null) {
log("Model resolved via category default (no cache, first run)", { model: normalizedCategoryDefault })
return { model: normalizedCategoryDefault, source: "category-default" }
}
const parts = normalizedCategoryDefault.split("/")
if (parts.length >= 2) {
const provider = parts[0]
if (connectedProviders.includes(provider)) {
log("Model resolved via category default (connected provider)", { model: normalizedCategoryDefault })
return { model: normalizedCategoryDefault, source: "category-default" }
}
}
}
log("Category default model not available, falling through to fallback chain", { model: normalizedCategoryDefault })
}
// Step 3: Provider fallback chain (exact match → fuzzy match → next provider)
if (fallbackChain && fallbackChain.length > 0) {
if (availableModels.size === 0) {
const connectedProviders = readConnectedProvidersCache()
@@ -73,7 +104,7 @@ export function resolveModelWithFallback(
for (const provider of entry.providers) {
if (connectedSet.has(provider)) {
const model = `${provider}/${entry.model}`
log("Model resolved via fallback chain (no model cache, using connected provider)", {
log("Model resolved via fallback chain (connected provider)", {
provider,
model: entry.model,
variant: entry.variant,
@@ -84,19 +115,31 @@ export function resolveModelWithFallback(
}
log("No connected provider found in fallback chain, falling through to system default")
}
}
} else {
for (const entry of fallbackChain) {
// Step 1: Try with provider filter (preferred providers first)
for (const provider of entry.providers) {
const fullModel = `${provider}/${entry.model}`
const match = fuzzyMatchModel(fullModel, availableModels, [provider])
if (match) {
log("Model resolved via fallback chain (availability confirmed)", { provider, model: entry.model, match, variant: entry.variant })
return { model: match, source: "provider-fallback", variant: entry.variant }
}
}
for (const entry of fallbackChain) {
for (const provider of entry.providers) {
const fullModel = `${provider}/${entry.model}`
const match = fuzzyMatchModel(fullModel, availableModels, [provider])
if (match) {
log("Model resolved via fallback chain (availability confirmed)", { provider, model: entry.model, match, variant: entry.variant })
return { model: match, source: "provider-fallback", variant: entry.variant }
// Step 2: Try without provider filter (cross-provider fuzzy match)
const crossProviderMatch = fuzzyMatchModel(entry.model, availableModels)
if (crossProviderMatch) {
log("Model resolved via fallback chain (cross-provider fuzzy match)", {
model: entry.model,
match: crossProviderMatch,
variant: entry.variant,
})
return { model: crossProviderMatch, source: "provider-fallback", variant: entry.variant }
}
}
log("No available model found in fallback chain, falling through to system default")
}
log("No available model found in fallback chain, falling through to system default")
}
// Step 4: System default (if provided)
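
Reading the steps together, a usage sketch with the new Step 2.5 in play (inputs illustrative, mirroring the category-default test earlier in this diff):

const result = resolveModelWithFallback({
  // Step 1: uiSelectedModel, if set, would win outright.
  // Step 2: userModel (config override) would win next.
  categoryDefaultModel: "google/gemini-3-pro",       // Step 2.5: fuzzy matched
  fallbackChain: [{ providers: ["google"], model: "gemini-3-pro" }], // Step 3
  availableModels: new Set(["google/gemini-3-pro-preview"]),
  systemDefaultModel: "anthropic/claude-sonnet-4-5", // Step 4: last resort
})
// => { model: "google/gemini-3-pro-preview", source: "category-default" }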

View File

@@ -0,0 +1,401 @@
import { describe, it, expect, mock } from "bun:test"
import { parseModelSuggestion, promptWithModelSuggestionRetry } from "./model-suggestion-retry"
describe("parseModelSuggestion", () => {
describe("structured NamedError format", () => {
it("should extract suggestion from ProviderModelNotFoundError", () => {
//#given a structured NamedError with suggestions
const error = {
name: "ProviderModelNotFoundError",
data: {
providerID: "anthropic",
modelID: "claude-sonet-4",
suggestions: ["claude-sonnet-4", "claude-sonnet-4-5"],
},
}
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should return the first suggestion
expect(result).toEqual({
providerID: "anthropic",
modelID: "claude-sonet-4",
suggestion: "claude-sonnet-4",
})
})
it("should return null when suggestions array is empty", () => {
//#given a NamedError with empty suggestions
const error = {
name: "ProviderModelNotFoundError",
data: {
providerID: "anthropic",
modelID: "claude-sonet-4",
suggestions: [],
},
}
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should return null
expect(result).toBeNull()
})
it("should return null when suggestions field is missing", () => {
//#given a NamedError without suggestions
const error = {
name: "ProviderModelNotFoundError",
data: {
providerID: "anthropic",
modelID: "claude-sonet-4",
},
}
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should return null
expect(result).toBeNull()
})
})
describe("nested error format", () => {
it("should extract suggestion from nested data.error", () => {
//#given an error with nested NamedError in data field
const error = {
data: {
name: "ProviderModelNotFoundError",
data: {
providerID: "openai",
modelID: "gpt-5",
suggestions: ["gpt-5.2"],
},
},
}
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should extract from nested structure
expect(result).toEqual({
providerID: "openai",
modelID: "gpt-5",
suggestion: "gpt-5.2",
})
})
it("should extract suggestion from nested error field", () => {
//#given an error with nested NamedError in error field
const error = {
error: {
name: "ProviderModelNotFoundError",
data: {
providerID: "google",
modelID: "gemini-3-flsh",
suggestions: ["gemini-3-flash"],
},
},
}
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should extract from nested error field
expect(result).toEqual({
providerID: "google",
modelID: "gemini-3-flsh",
suggestion: "gemini-3-flash",
})
})
})
describe("string message format", () => {
it("should parse suggestion from error message string", () => {
//#given an Error with model-not-found message and suggestion
const error = new Error(
"Model not found: anthropic/claude-sonet-4. Did you mean: claude-sonnet-4, claude-sonnet-4-5?"
)
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should extract from message string
expect(result).toEqual({
providerID: "anthropic",
modelID: "claude-sonet-4",
suggestion: "claude-sonnet-4",
})
})
it("should parse from plain string error", () => {
//#given a plain string error message
const error =
"Model not found: openai/gtp-5. Did you mean: gpt-5?"
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should extract from string
expect(result).toEqual({
providerID: "openai",
modelID: "gtp-5",
suggestion: "gpt-5",
})
})
it("should parse from object with message property", () => {
//#given an object with message property
const error = {
message: "Model not found: google/gemini-3-flsh. Did you mean: gemini-3-flash?",
}
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should extract from message property
expect(result).toEqual({
providerID: "google",
modelID: "gemini-3-flsh",
suggestion: "gemini-3-flash",
})
})
it("should return null when message has no suggestion", () => {
//#given an error without Did you mean
const error = new Error("Model not found: anthropic/nonexistent.")
//#when parsing the error
const result = parseModelSuggestion(error)
//#then should return null
expect(result).toBeNull()
})
})
describe("edge cases", () => {
it("should return null for null error", () => {
//#given null
//#when parsing
const result = parseModelSuggestion(null)
//#then should return null
expect(result).toBeNull()
})
it("should return null for undefined error", () => {
//#given undefined
//#when parsing
const result = parseModelSuggestion(undefined)
//#then should return null
expect(result).toBeNull()
})
it("should return null for unrelated error", () => {
//#given an unrelated error
const error = new Error("Connection timeout")
//#when parsing
const result = parseModelSuggestion(error)
//#then should return null
expect(result).toBeNull()
})
it("should return null for empty object", () => {
//#given empty object
//#when parsing
const result = parseModelSuggestion({})
//#then should return null
expect(result).toBeNull()
})
})
})
describe("promptWithModelSuggestionRetry", () => {
it("should succeed on first try without retry", async () => {
//#given a client where prompt succeeds
const promptMock = mock(() => Promise.resolve())
const client = { session: { prompt: promptMock } }
//#when calling promptWithModelSuggestionRetry
await promptWithModelSuggestionRetry(client as any, {
path: { id: "session-1" },
body: {
parts: [{ type: "text", text: "hello" }],
model: { providerID: "anthropic", modelID: "claude-sonnet-4" },
},
})
//#then should call prompt exactly once
expect(promptMock).toHaveBeenCalledTimes(1)
})
it("should retry with suggestion on model-not-found error", async () => {
//#given a client that fails first with model-not-found, then succeeds
const promptMock = mock()
.mockRejectedValueOnce({
name: "ProviderModelNotFoundError",
data: {
providerID: "anthropic",
modelID: "claude-sonet-4",
suggestions: ["claude-sonnet-4"],
},
})
.mockResolvedValueOnce(undefined)
const client = { session: { prompt: promptMock } }
//#when calling promptWithModelSuggestionRetry
await promptWithModelSuggestionRetry(client as any, {
path: { id: "session-1" },
body: {
agent: "explore",
parts: [{ type: "text", text: "hello" }],
model: { providerID: "anthropic", modelID: "claude-sonet-4" },
},
})
//#then should call prompt twice - first with original, then with suggestion
expect(promptMock).toHaveBeenCalledTimes(2)
const retryCall = promptMock.mock.calls[1][0]
expect(retryCall.body.model).toEqual({
providerID: "anthropic",
modelID: "claude-sonnet-4",
})
})
it("should throw original error when no suggestion available", async () => {
//#given a client that fails with a non-model-not-found error
const originalError = new Error("Connection refused")
const promptMock = mock().mockRejectedValueOnce(originalError)
const client = { session: { prompt: promptMock } }
//#when calling promptWithModelSuggestionRetry
//#then should throw the original error
await expect(
promptWithModelSuggestionRetry(client as any, {
path: { id: "session-1" },
body: {
parts: [{ type: "text", text: "hello" }],
model: { providerID: "anthropic", modelID: "claude-sonnet-4" },
},
})
).rejects.toThrow("Connection refused")
expect(promptMock).toHaveBeenCalledTimes(1)
})
it("should throw original error when retry also fails", async () => {
//#given a client that fails with model-not-found, retry also fails
const modelNotFoundError = {
name: "ProviderModelNotFoundError",
data: {
providerID: "anthropic",
modelID: "claude-sonet-4",
suggestions: ["claude-sonnet-4"],
},
}
const retryError = new Error("Still not found")
const promptMock = mock()
.mockRejectedValueOnce(modelNotFoundError)
.mockRejectedValueOnce(retryError)
const client = { session: { prompt: promptMock } }
//#when calling promptWithModelSuggestionRetry
//#then should throw the retry error (not the original)
await expect(
promptWithModelSuggestionRetry(client as any, {
path: { id: "session-1" },
body: {
parts: [{ type: "text", text: "hello" }],
model: { providerID: "anthropic", modelID: "claude-sonet-4" },
},
})
).rejects.toThrow("Still not found")
expect(promptMock).toHaveBeenCalledTimes(2)
})
it("should preserve other body fields during retry", async () => {
//#given a client that fails first with model-not-found
const promptMock = mock()
.mockRejectedValueOnce({
name: "ProviderModelNotFoundError",
data: {
providerID: "anthropic",
modelID: "claude-sonet-4",
suggestions: ["claude-sonnet-4"],
},
})
.mockResolvedValueOnce(undefined)
const client = { session: { prompt: promptMock } }
//#when calling with additional body fields
await promptWithModelSuggestionRetry(client as any, {
path: { id: "session-1" },
body: {
agent: "explore",
system: "You are a helpful agent",
tools: { task: false },
parts: [{ type: "text", text: "hello" }],
model: { providerID: "anthropic", modelID: "claude-sonet-4" },
variant: "max",
},
})
//#then retry call should preserve all fields except corrected model
const retryCall = promptMock.mock.calls[1][0]
expect(retryCall.body.agent).toBe("explore")
expect(retryCall.body.system).toBe("You are a helpful agent")
expect(retryCall.body.tools).toEqual({ task: false })
expect(retryCall.body.variant).toBe("max")
expect(retryCall.body.model).toEqual({
providerID: "anthropic",
modelID: "claude-sonnet-4",
})
})
it("should handle string error message with suggestion", async () => {
//#given a client that fails with a string error containing suggestion
const promptMock = mock()
.mockRejectedValueOnce(
new Error("Model not found: anthropic/claude-sonet-4. Did you mean: claude-sonnet-4?")
)
.mockResolvedValueOnce(undefined)
const client = { session: { prompt: promptMock } }
//#when calling promptWithModelSuggestionRetry
await promptWithModelSuggestionRetry(client as any, {
path: { id: "session-1" },
body: {
parts: [{ type: "text", text: "hello" }],
model: { providerID: "anthropic", modelID: "claude-sonet-4" },
},
})
//#then should retry with suggested model
expect(promptMock).toHaveBeenCalledTimes(2)
const retryCall = promptMock.mock.calls[1][0]
expect(retryCall.body.model.modelID).toBe("claude-sonnet-4")
})
it("should not retry when no model in original request", async () => {
//#given a client that fails with model-not-found but original has no model param
const modelNotFoundError = new Error(
"Model not found: anthropic/claude-sonet-4. Did you mean: claude-sonnet-4?"
)
const promptMock = mock().mockRejectedValueOnce(modelNotFoundError)
const client = { session: { prompt: promptMock } }
//#when calling without model in body
//#then should throw without retrying
await expect(
promptWithModelSuggestionRetry(client as any, {
path: { id: "session-1" },
body: {
parts: [{ type: "text", text: "hello" }],
},
})
).rejects.toThrow()
expect(promptMock).toHaveBeenCalledTimes(1)
})
})

View File

@@ -0,0 +1,111 @@
import type { createOpencodeClient } from "@opencode-ai/sdk"
import { log } from "./logger"
type Client = ReturnType<typeof createOpencodeClient>
export interface ModelSuggestionInfo {
providerID: string
modelID: string
suggestion: string
}
function extractMessage(error: unknown): string {
if (typeof error === "string") return error
if (error instanceof Error) return error.message
if (typeof error === "object" && error !== null) {
const obj = error as Record<string, unknown>
if (typeof obj.message === "string") return obj.message
try {
return JSON.stringify(error)
} catch {
return ""
}
}
return String(error)
}
export function parseModelSuggestion(error: unknown): ModelSuggestionInfo | null {
if (!error) return null
if (typeof error === "object") {
const errObj = error as Record<string, unknown>
if (errObj.name === "ProviderModelNotFoundError" && typeof errObj.data === "object" && errObj.data !== null) {
const data = errObj.data as Record<string, unknown>
const suggestions = data.suggestions
if (Array.isArray(suggestions) && suggestions.length > 0 && typeof suggestions[0] === "string") {
return {
providerID: String(data.providerID ?? ""),
modelID: String(data.modelID ?? ""),
suggestion: suggestions[0],
}
}
return null
}
for (const key of ["data", "error", "cause"] as const) {
const nested = errObj[key]
if (nested && typeof nested === "object") {
const result = parseModelSuggestion(nested)
if (result) return result
}
}
}
const message = extractMessage(error)
if (!message) return null
const modelMatch = message.match(/model not found:\s*([^/\s]+)\s*\/\s*([^.\s]+)/i)
const suggestionMatch = message.match(/did you mean:\s*([^,?]+)/i)
if (modelMatch && suggestionMatch) {
return {
providerID: modelMatch[1].trim(),
modelID: modelMatch[2].trim(),
suggestion: suggestionMatch[1].trim(),
}
}
return null
}
interface PromptBody {
model?: { providerID: string; modelID: string }
[key: string]: unknown
}
interface PromptArgs {
path: { id: string }
body: PromptBody
[key: string]: unknown
}
export async function promptWithModelSuggestionRetry(
client: Client,
args: PromptArgs,
): Promise<void> {
try {
await client.session.prompt(args as Parameters<typeof client.session.prompt>[0])
} catch (error) {
const suggestion = parseModelSuggestion(error)
if (!suggestion || !args.body.model) {
throw error
}
log("[model-suggestion-retry] Model not found, retrying with suggestion", {
original: `${suggestion.providerID}/${suggestion.modelID}`,
suggested: suggestion.suggestion,
})
await client.session.prompt({
...args,
body: {
...args.body,
model: {
providerID: suggestion.providerID,
modelID: suggestion.suggestion,
},
},
} as Parameters<typeof client.session.prompt>[0])
}
}
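
A usage sketch of the helper above; the session id is a placeholder and the model id is deliberately misspelled to trigger the retry path:

await promptWithModelSuggestionRetry(client, {
  path: { id: "ses_example" },
  body: {
    parts: [{ type: "text", text: "hello" }],
    model: { providerID: "anthropic", modelID: "claude-sonet-4" }, // typo on purpose
  },
})
// On ProviderModelNotFoundError (or a "Did you mean: ..." message), the prompt is
// re-issued exactly once with the first suggested modelID; every other body field
// is preserved on the retry, and any other failure is rethrown untouched.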

View File

@@ -3,6 +3,7 @@ import { join } from "path"
import { homedir } from "os"
import { createRequire } from "module"
import { extractZip } from "../../shared"
import { log } from "../../shared/logger"
const REPO = "ast-grep/ast-grep"
@@ -63,7 +64,7 @@ export async function downloadAstGrep(version: string = DEFAULT_VERSION): Promis
const platformInfo = PLATFORM_MAP[platformKey]
if (!platformInfo) {
console.error(`[oh-my-opencode] Unsupported platform for ast-grep: ${platformKey}`)
log(`[oh-my-opencode] Unsupported platform for ast-grep: ${platformKey}`)
return null
}
@@ -79,7 +80,7 @@ export async function downloadAstGrep(version: string = DEFAULT_VERSION): Promis
const assetName = `app-${arch}-${os}.zip`
const downloadUrl = `https://github.com/${REPO}/releases/download/${version}/${assetName}`
console.log(`[oh-my-opencode] Downloading ast-grep binary...`)
log(`[oh-my-opencode] Downloading ast-grep binary...`)
try {
if (!existsSync(cacheDir)) {
@@ -106,11 +107,11 @@ export async function downloadAstGrep(version: string = DEFAULT_VERSION): Promis
chmodSync(binaryPath, 0o755)
}
console.log(`[oh-my-opencode] ast-grep binary ready.`)
log(`[oh-my-opencode] ast-grep binary ready.`)
return binaryPath
} catch (err) {
console.error(
log(
`[oh-my-opencode] Failed to download ast-grep: ${err instanceof Error ? err.message : err}`
)
return null

View File

@@ -14,8 +14,14 @@ Design-first mindset:
AVOID: Generic fonts, purple gradients on white, predictable layouts, cookie-cutter patterns.
</Category_Context>`
export const STRATEGIC_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on BUSINESS LOGIC / ARCHITECTURE tasks.
export const ULTRABRAIN_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on DEEP LOGICAL REASONING / COMPLEX ARCHITECTURE tasks.
**CRITICAL - CODE STYLE REQUIREMENTS (NON-NEGOTIABLE)**:
1. BEFORE writing ANY code, SEARCH the existing codebase to find similar patterns/styles
2. Your code MUST match the project's existing conventions - blend in seamlessly
3. Write READABLE code that humans can easily understand - no clever tricks
4. If unsure about style, explore more files until you find the pattern
Strategic advisor mindset:
- Bias toward simplicity: least complex solution that fulfills requirements
@@ -153,11 +159,43 @@ Approach:
- Documentation, READMEs, articles, technical writing
</Category_Context>`
export const DEEP_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on GOAL-ORIENTED AUTONOMOUS tasks.
**CRITICAL - AUTONOMOUS EXECUTION MINDSET (NON-NEGOTIABLE)**:
You are NOT an interactive assistant. You are an autonomous problem-solver.
**BEFORE making ANY changes**:
1. SILENTLY explore the codebase extensively (5-15 minutes of reading is normal)
2. Read related files, trace dependencies, understand the full context
3. Build a complete mental model of the problem space
4. DO NOT ask clarifying questions - the goal is already defined
**Autonomous executor mindset**:
- You receive a GOAL, not step-by-step instructions
- Figure out HOW to achieve the goal yourself
- Thorough research before any action
- Fix hairy problems that require deep understanding
- Work independently without frequent check-ins
**Approach**:
- Explore extensively, understand deeply, then act decisively
- Prefer comprehensive solutions over quick patches
- If the goal is unclear, make reasonable assumptions and proceed
- Document your reasoning in code comments only when non-obvious
**Response format**:
- Minimal status updates (user trusts your autonomy)
- Focus on results, not play-by-play progress
- Report completion with summary of changes made
</Category_Context>`
export const DEFAULT_CATEGORIES: Record<string, CategoryConfig> = {
"visual-engineering": { model: "google/gemini-3-pro" },
ultrabrain: { model: "openai/gpt-5.2-codex", variant: "xhigh" },
deep: { model: "openai/gpt-5.2-codex", variant: "medium" },
artistry: { model: "google/gemini-3-pro", variant: "max" },
quick: { model: "anthropic/claude-haiku-4-5" },
"unspecified-low": { model: "anthropic/claude-sonnet-4-5" },
@@ -167,7 +205,8 @@ export const DEFAULT_CATEGORIES: Record<string, CategoryConfig> = {
export const CATEGORY_PROMPT_APPENDS: Record<string, string> = {
"visual-engineering": VISUAL_CATEGORY_PROMPT_APPEND,
ultrabrain: STRATEGIC_CATEGORY_PROMPT_APPEND,
ultrabrain: ULTRABRAIN_CATEGORY_PROMPT_APPEND,
deep: DEEP_CATEGORY_PROMPT_APPEND,
artistry: ARTISTRY_CATEGORY_PROMPT_APPEND,
quick: QUICK_CATEGORY_PROMPT_APPEND,
"unspecified-low": UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND,
@@ -177,8 +216,9 @@ export const CATEGORY_PROMPT_APPENDS: Record<string, string> = {
export const CATEGORY_DESCRIPTIONS: Record<string, string> = {
"visual-engineering": "Frontend, UI/UX, design, styling, animation",
ultrabrain: "Deep logical reasoning, complex architecture decisions requiring extensive analysis",
artistry: "Highly creative/artistic tasks, novel ideas",
ultrabrain: "Use ONLY for genuinely hard, logic-heavy tasks. Give clear goals only, not step-by-step instructions.",
deep: "Goal-oriented autonomous problem-solving. Thorough research before action. For hairy problems requiring deep understanding.",
artistry: "Complex problem-solving with unconventional, creative approaches - beyond standard patterns",
quick: "Trivial tasks - single file changes, typo fixes, simple modifications",
"unspecified-low": "Tasks that don't fit other categories, low effort required",
"unspecified-high": "Tasks that don't fit other categories, high effort required",

View File

@@ -51,6 +51,16 @@ describe("sisyphus-task", () => {
expect(category.model).toBe("openai/gpt-5.2-codex")
expect(category.variant).toBe("xhigh")
})
test("deep category has model and variant config", () => {
// #given
const category = DEFAULT_CATEGORIES["deep"]
// #when / #then
expect(category).toBeDefined()
expect(category.model).toBe("openai/gpt-5.2-codex")
expect(category.variant).toBe("medium")
})
})
describe("CATEGORY_PROMPT_APPENDS", () => {
@@ -63,14 +73,23 @@ describe("sisyphus-task", () => {
expect(promptAppend).toContain("Design-first")
})
test("ultrabrain category has strategic prompt", () => {
test("ultrabrain category has deep logical reasoning prompt", () => {
// #given
const promptAppend = CATEGORY_PROMPT_APPENDS["ultrabrain"]
// #when / #then
expect(promptAppend).toContain("BUSINESS LOGIC")
expect(promptAppend).toContain("DEEP LOGICAL REASONING")
expect(promptAppend).toContain("Strategic advisor")
})
test("deep category has goal-oriented autonomous prompt", () => {
// #given
const promptAppend = CATEGORY_PROMPT_APPENDS["deep"]
// #when / #then
expect(promptAppend).toContain("GOAL-ORIENTED")
expect(promptAppend).toContain("autonomous")
})
})
describe("CATEGORY_DESCRIPTIONS", () => {
@@ -283,6 +302,36 @@ describe("sisyphus-task", () => {
expect(result).toBeNull()
})
test("blocks requiresModel when availability is known and missing the required model", () => {
// #given
const categoryName = "deep"
const availableModels = new Set<string>(["anthropic/claude-opus-4-5"])
// #when
const result = resolveCategoryConfig(categoryName, {
systemDefaultModel: SYSTEM_DEFAULT_MODEL,
availableModels,
})
// #then
expect(result).toBeNull()
})
test("blocks requiresModel when availability is empty", () => {
// #given
const categoryName = "deep"
const availableModels = new Set<string>()
// #when
const result = resolveCategoryConfig(categoryName, {
systemDefaultModel: SYSTEM_DEFAULT_MODEL,
availableModels,
})
// #then
expect(result).toBeNull()
})
test("returns default model from DEFAULT_CATEGORIES for builtin category", () => {
// #given
const categoryName = "visual-engineering"
@@ -1110,7 +1159,7 @@ describe("sisyphus-task", () => {
const mockClient = {
app: { agents: async () => ({ data: [] }) },
config: { get: async () => ({ data: { model: SYSTEM_DEFAULT_MODEL } }) },
model: { list: async () => [{ id: "google/gemini-3-pro" }] },
model: { list: async () => ({ data: [{ provider: "google", id: "gemini-3-pro" }] }) },
session: {
get: async () => ({ data: { directory: "/project" } }),
create: async () => ({ data: { id: "ses_unstable_gemini" } }),
@@ -1276,6 +1325,13 @@ describe("sisyphus-task", () => {
test("artistry category (gemini) with run_in_background=false should force background but wait for result", async () => {
// #given - artistry also uses gemini model
const { createDelegateTask } = require("./tools")
const providerModelsSpy = spyOn(connectedProvidersCache, "readProviderModelsCache").mockReturnValue({
connected: ["anthropic", "google", "openai"],
updatedAt: new Date().toISOString(),
models: {
google: ["gemini-3-pro", "gemini-3-flash"],
},
})
let launchCalled = false
const mockManager = {
@@ -1294,7 +1350,7 @@ describe("sisyphus-task", () => {
const mockClient = {
app: { agents: async () => ({ data: [] }) },
config: { get: async () => ({ data: { model: SYSTEM_DEFAULT_MODEL } }) },
model: { list: async () => [{ id: "google/gemini-3-pro" }] },
model: { list: async () => ({ data: [{ provider: "google", id: "gemini-3-pro" }] }) },
session: {
get: async () => ({ data: { directory: "/project" } }),
create: async () => ({ data: { id: "ses_artistry_gemini" } }),
@@ -1336,6 +1392,7 @@ describe("sisyphus-task", () => {
expect(launchCalled).toBe(true)
expect(result).toContain("SUPERVISED TASK COMPLETED")
expect(result).toContain("Artistry result here")
providerModelsSpy.mockRestore()
}, { timeout: 20000 })
test("writing category (gemini-flash) with run_in_background=false should force background but wait for result", async () => {

View File

@@ -12,8 +12,8 @@ import { discoverSkills } from "../../features/opencode-skill-loader"
import { getTaskToastManager } from "../../features/task-toast-manager"
import type { ModelFallbackInfo } from "../../features/task-toast-manager/types"
import { subagentSessions, getSessionAgent } from "../../features/claude-code-session-state"
-import { log, getAgentToolRestrictions, resolveModel, getOpenCodeConfigPaths, findByNameCaseInsensitive, equalsIgnoreCase } from "../../shared"
-import { fetchAvailableModels } from "../../shared/model-availability"
+import { log, getAgentToolRestrictions, resolveModel, getOpenCodeConfigPaths, findByNameCaseInsensitive, equalsIgnoreCase, promptWithModelSuggestionRetry } from "../../shared"
+import { fetchAvailableModels, isModelAvailable } from "../../shared/model-availability"
import { readConnectedProvidersCache } from "../../shared/connected-providers-cache"
import { resolveModelWithFallback } from "../../shared/model-resolver"
import { CATEGORY_MODEL_REQUIREMENTS } from "../../shared/model-requirements"
@@ -117,9 +117,20 @@ export function resolveCategoryConfig(
userCategories?: CategoriesConfig
inheritedModel?: string
systemDefaultModel?: string
availableModels?: Set<string>
}
): { config: CategoryConfig; promptAppend: string; model: string | undefined } | null {
-const { userCategories, inheritedModel, systemDefaultModel } = options
+const { userCategories, inheritedModel, systemDefaultModel, availableModels } = options
// Check if category requires a specific model
const categoryReq = CATEGORY_MODEL_REQUIREMENTS[categoryName]
if (categoryReq?.requiresModel && availableModels) {
if (!isModelAvailable(categoryReq.requiresModel, availableModels)) {
log(`[resolveCategoryConfig] Category ${categoryName} requires ${categoryReq.requiresModel} but not available`)
return null
}
}
const defaultConfig = DEFAULT_CATEGORIES[categoryName]
const userConfig = userCategories?.[categoryName]
const defaultPromptAppend = CATEGORY_PROMPT_APPENDS[categoryName] ?? ""
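The availability gate above delegates to isModelAvailable, whose body is not part of this diff. A minimal sketch of the assumed contract, treating fuzzy matching on the bare model id as an assumption rather than confirmed behavior:

// Sketch only: returns true when the required id (e.g. "google/gemini-3-pro")
// matches an entry of the available set exactly or by bare model id.
function isModelAvailableSketch(required: string, available: Set<string>): boolean {
  if (available.has(required)) return true
  const bareId = required.includes("/") ? required.split("/")[1] : required
  for (const candidate of available) {
    const candidateId = candidate.includes("/") ? candidate.split("/")[1] : candidate
    if (candidateId === bareId) return true
  }
  return false
}

Under this reading, the two "blocks requiresModel" tests above pass: a set containing only anthropic/claude-opus-4-5, or an empty set, cannot satisfy whatever model CATEGORY_MODEL_REQUIREMENTS pins for deep, so resolveCategoryConfig returns null.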
@@ -522,11 +533,12 @@ To continue this session: session_id="${args.session_id}"`
connectedProviders: connectedProviders ?? undefined
})
-const resolved = resolveCategoryConfig(args.category, {
-userCategories,
-inheritedModel,
-systemDefaultModel,
-})
+const resolved = resolveCategoryConfig(args.category, {
+userCategories,
+inheritedModel,
+systemDefaultModel,
+availableModels,
+})
if (!resolved) {
return `Unknown category: "${args.category}". Available: ${Object.keys({ ...DEFAULT_CATEGORIES, ...userCategories }).join(", ")}`
}
@@ -541,7 +553,8 @@ To continue this session: session_id="${args.session_id}"`
}
} else {
const resolution = resolveModelWithFallback({
-userModel: userCategories?.[args.category]?.model ?? resolved.model ?? sisyphusJuniorModel,
+userModel: userCategories?.[args.category]?.model,
+categoryDefaultModel: resolved.model ?? sisyphusJuniorModel,
fallbackChain: requirement.fallbackChain,
availableModels,
systemDefaultModel,
@@ -555,18 +568,19 @@ To continue this session: session_id="${args.session_id}"`
return `Invalid model format "${actualModel}". Expected "provider/model" format (e.g., "anthropic/claude-sonnet-4-5").`
}
let type: "user-defined" | "inherited" | "category-default" | "system-default"
switch (source) {
case "override":
type = "user-defined"
break
case "provider-fallback":
type = "category-default"
break
case "system-default":
type = "system-default"
break
}
let type: "user-defined" | "inherited" | "category-default" | "system-default"
switch (source) {
case "override":
type = "user-defined"
break
case "category-default":
case "provider-fallback":
type = "category-default"
break
case "system-default":
type = "system-default"
break
}
modelInfo = { model: actualModel, type, source }
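Taken together, the two hunks above split the old single userModel coalescing into an explicit user override plus a category default, and the expanded switch maps the resolver's category-default source onto the same toast type as provider-fallback. A hedged usage sketch (argument names mirror the diff; the returned shape is assumed from how the switch consumes it):

import { resolveModelWithFallback } from "../../shared/model-resolver"

// Illustrative values only.
const resolution = resolveModelWithFallback({
  userModel: undefined,                        // no per-category user override
  categoryDefaultModel: "google/gemini-3-pro", // resolved.model ?? sisyphusJuniorModel
  fallbackChain: ["google/gemini-3-pro", "google/gemini-3-flash"],
  availableModels: new Set(["google/gemini-3-flash"]),
  systemDefaultModel: "anthropic/claude-opus-4-5",
})
// Assumed result shape: { model: "google/gemini-3-flash", source: "provider-fallback" },
// which the switch above reports as type "category-default".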
@@ -819,12 +833,6 @@ Create the work plan directly - that's your job as the planning agent.`
// If we can't fetch agents, proceed anyway - the session.prompt will fail with a clearer error
}
-// When using subagent_type directly, inherit parent model so agents don't default
-// to their hardcoded models (like grok-code) which may not be available
-if (parentModel) {
-categoryModel = parentModel
-modelInfo = { model: `${parentModel.providerID}/${parentModel.modelID}`, type: "inherited" }
-}
}
const systemContent = buildSystemContent({ skillContent, categoryPromptAppend, agentName: agentToUse })
@@ -953,7 +961,7 @@ To continue this session: session_id="${task.sessionID}"`
try {
const allowDelegateTask = isPlanAgent(agentToUse)
-await client.session.prompt({
+await promptWithModelSuggestionRetry(client, {
path: { id: sessionID },
body: {
agent: agentToUse,
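Both this call site and the look-at one below now go through promptWithModelSuggestionRetry from ../../shared. Its implementation is not shown in this diff; the following is only a sketch of the assumed contract, retrying session.prompt once when the server's error suggests an alternative model (the error format here is a guess):

// Assumed behavior only; error parsing and shapes are illustrative.
async function promptWithModelSuggestionRetrySketch(
  client: { session: { prompt: (input: any) => Promise<any> } },
  input: { path: { id: string }; body: Record<string, unknown> },
): Promise<any> {
  try {
    return await client.session.prompt(input)
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error)
    const suggested = /try "([^"]+\/[^"]+)"/.exec(message)?.[1]
    if (!suggested) throw error
    const [providerID, modelID] = suggested.split("/")
    return client.session.prompt({ ...input, body: { ...input.body, model: { providerID, modelID } } })
  }
}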

View File

@@ -146,4 +146,62 @@ describe("look-at tool", () => {
expect(result).toContain("Network connection failed")
})
})
describe("createLookAt model passthrough", () => {
// #given multimodal-looker agent has resolved model info
// #when the LookAt tool is executed
// #then the model info should be passed to session.prompt
test("passes multimodal-looker model to session.prompt when available", async () => {
let promptBody: any
const mockClient = {
app: {
agents: async () => ({
data: [
{
name: "multimodal-looker",
mode: "subagent",
model: { providerID: "google", modelID: "gemini-3-flash" },
},
],
}),
},
session: {
get: async () => ({ data: { directory: "/project" } }),
create: async () => ({ data: { id: "ses_model_passthrough" } }),
prompt: async (input: any) => {
promptBody = input.body
return { data: {} }
},
messages: async () => ({
data: [
{ info: { role: "assistant", time: { created: 1 } }, parts: [{ type: "text", text: "done" }] },
],
}),
},
}
const tool = createLookAt({
client: mockClient,
directory: "/project",
} as any)
const toolContext = {
sessionID: "parent-session",
messageID: "parent-message",
agent: "sisyphus",
abort: new AbortController().signal,
}
await tool.execute(
{ file_path: "/test/file.png", goal: "analyze image" },
toolContext
)
expect(promptBody.model).toEqual({
providerID: "google",
modelID: "gemini-3-flash",
})
})
})
})

View File

@@ -3,7 +3,7 @@ import { pathToFileURL } from "node:url"
import { tool, type PluginInput, type ToolDefinition } from "@opencode-ai/plugin"
import { LOOK_AT_DESCRIPTION, MULTIMODAL_LOOKER_AGENT } from "./constants"
import type { LookAtArgs } from "./types"
import { log } from "../../shared/logger"
import { findByNameCaseInsensitive, log, promptWithModelSuggestionRetry } from "../../shared"
interface LookAtArgsWithAlias extends LookAtArgs {
path?: string
@@ -130,9 +130,34 @@ Original error: ${createResult.error}`
const sessionID = createResult.data.id
log(`[look_at] Created session: ${sessionID}`)
let agentModel: { providerID: string; modelID: string } | undefined
let agentVariant: string | undefined
try {
const agentsResult = await ctx.client.app?.agents?.()
type AgentInfo = {
name: string
mode?: "subagent" | "primary" | "all"
model?: { providerID: string; modelID: string }
variant?: string
}
const agents = ((agentsResult as { data?: AgentInfo[] })?.data ?? agentsResult) as AgentInfo[] | undefined
if (agents?.length) {
const matchedAgent = findByNameCaseInsensitive(agents, MULTIMODAL_LOOKER_AGENT)
if (matchedAgent?.model) {
agentModel = matchedAgent.model
}
if (matchedAgent?.variant) {
agentVariant = matchedAgent.variant
}
}
} catch (error) {
log("[look_at] Failed to resolve multimodal-looker model info", error)
}
log(`[look_at] Sending prompt with file passthrough to session ${sessionID}`)
try {
-await ctx.client.session.prompt({
+await promptWithModelSuggestionRetry(ctx.client, {
path: { id: sessionID },
body: {
agent: MULTIMODAL_LOOKER_AGENT,
@@ -146,6 +171,8 @@ Original error: ${createResult.error}`
{ type: "text", text: prompt },
{ type: "file", mime: mimeType, url: pathToFileURL(args.file_path).href, filename },
],
...(agentModel ? { model: { providerID: agentModel.providerID, modelID: agentModel.modelID } } : {}),
...(agentVariant ? { variant: agentVariant } : {}),
},
})
} catch (promptError) {
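With the two conditional spreads above, model and variant only appear in the prompt body when the agent registry actually resolved them. A sketch of the two resulting payloads, with illustrative values:

// When multimodal-looker resolved with a model (values illustrative):
const bodyWithModel = {
  agent: "multimodal-looker",
  parts: [
    { type: "text", text: "analyze image" },
    { type: "file", mime: "image/png", url: "file:///test/file.png", filename: "file.png" },
  ],
  model: { providerID: "google", modelID: "gemini-3-flash" },
}
// When the lookup failed, both keys are simply absent:
const bodyWithoutModel = { agent: "multimodal-looker", parts: bodyWithModel.parts }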

View File

@@ -11,6 +11,7 @@ import {
} from "vscode-jsonrpc/node"
import { getLanguageId } from "./config"
import type { Diagnostic, ResolvedServer } from "./types"
import { log } from "../../shared/logger"
interface ManagedClient {
client: LSPClient
@@ -306,7 +307,7 @@ export class LSPClient {
})
this.connection.onError((error) => {
console.error("LSP connection error:", error)
log("LSP connection error:", error)
})
this.connection.listen()
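Routing the connection error through the shared log keeps it in the plugin's own log stream rather than the console. The logger's body is not shown here; a minimal sketch assuming it serializes arguments and appends to a plugin log file (path and format are assumptions):

import { appendFileSync } from "node:fs"

// Sketch of the assumed ../../shared/logger behavior; the real helper may differ.
function logSketch(...args: unknown[]): void {
  const line = args
    .map((a) => (a instanceof Error ? a.stack ?? a.message : typeof a === "string" ? a : JSON.stringify(a)))
    .join(" ")
  appendFileSync("/tmp/oh-my-opencode.log", `${new Date().toISOString()} ${line}\n`)
}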