Compare commits


40 Commits

Author SHA1 Message Date
YeonGyu-Kim
1c9f4148d0 fix(publish-ci): sync mock-heavy test isolation with ci.yml
Apply the same mock.module() isolation fixes to publish.yml:
- Move shared and session-recovery mock-heavy tests to isolated section
- Use dynamic find + exclusion for remaining src/shared tests
- Include session-recovery tests in remaining batch

Ensures publish workflow has the same test config as main CI run.
2026-03-27 00:56:55 +09:00
YeonGyu-Kim
8dd0191ea5 fix(ci): isolate mock-heavy shared tests to prevent cross-file contamination
Move 4 src/shared tests that use mock.module() to the isolated test section:
- model-capabilities.test.ts (mocks ./connected-providers-cache)
- log-legacy-plugin-startup-warning.test.ts (mocks ./legacy-plugin-warning)
- model-error-classifier.test.ts
- opencode-message-dir.test.ts

Also isolate recover-tool-result-missing.test.ts (mocks ./storage).

Use find + exclusion pattern in remaining tests to dynamically build the
src/shared file list without the isolated mock-heavy files.

Fixes 6 Linux CI failures caused by bun's mock.module() cache pollution
when running in parallel.
2026-03-27 00:08:27 +09:00
YeonGyu-Kim
9daaeedc50 fix(test): restore shared Bun mocks after suite cleanup
Prevent src/shared batch runs from leaking module mocks into later files, which was breaking Linux CI cache metadata and legacy plugin warning assertions.
2026-03-27 00:08:20 +09:00
YeonGyu-Kim
3e13a4cf57 fix(session-recovery): filter invalid prt_* part IDs from tool_use_id reconstruction
When recovering missing tool results, the session recovery hook was using
raw part.id (prt_* format) as tool_use_id when callID was absent, causing
ZodError validation failures from the API.

Added isValidToolUseID() guard that only accepts toolu_* and call_* prefixed
IDs, and normalizeMessagePart() that returns null for parts without valid
callIDs. Both the SQLite fallback and stored-parts paths now filter out
invalid entries before constructing tool_result payloads.

Includes 4 regression tests covering both valid/invalid callID paths for
both SQLite and stored-parts backends.
2026-03-26 20:48:33 +09:00
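The guard described in this commit message can be sketched as follows. This is a hypothetical reconstruction from the message alone: the names `isValidToolUseID` and `normalizeMessagePart` come from the commit, but the signatures and surrounding types are assumptions, not the actual implementation.

```typescript
// Sketch of the tool_use_id guard from the commit message above.
// Assumed shapes; the real session-recovery code may differ.
type MessagePart = { id: string; callID?: string };

// Only Anthropic-style (toolu_*) and OpenAI-style (call_*) IDs are valid
// tool_use_id values; raw prt_* part IDs would fail API validation.
function isValidToolUseID(id: string | undefined): id is string {
  return typeof id === "string" && (id.startsWith("toolu_") || id.startsWith("call_"));
}

// Returns null for parts without a valid callID so both the SQLite fallback
// and stored-parts paths can filter them out before building tool_result payloads.
function normalizeMessagePart(part: MessagePart): { toolUseID: string } | null {
  return isValidToolUseID(part.callID) ? { toolUseID: part.callID } : null;
}
```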
YeonGyu-Kim
8e65d6cf2c fix(test): make legacy-plugin-warning tests isolation-safe
Pass explicit config dir to checkForLegacyPluginEntry instead of relying
on XDG_CONFIG_HOME env var, which gets contaminated by parallel tests on
Linux CI. Also adds missing 'join' import.
2026-03-26 19:54:05 +09:00
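The isolation technique in this commit — taking an explicit directory parameter instead of reading a mutable env var — can be sketched like this. The function name and path layout here are illustrative assumptions, not the actual `checkForLegacyPluginEntry` signature.

```typescript
import { join } from "node:path";

// Hypothetical sketch: accept an explicit config dir and only fall back to
// XDG_CONFIG_HOME when none is given. Parallel tests that mutate the env var
// then cannot contaminate a test that passes its own directory.
function resolveLegacyConfigPath(configDir?: string): string {
  const base =
    configDir ??
    process.env.XDG_CONFIG_HOME ??
    join(process.env.HOME ?? "", ".config");
  return join(base, "opencode", "opencode.json");
}
```

A test that owns its temp directory passes it explicitly and is immune to whatever another test file has done to the environment.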
YeonGyu-Kim
f419a3a925 fix(test): use Bun.spawnSync in command discovery test to avoid execFileSync mock leakage
The opencode-project-command-discovery test used execFileSync for git init,
which collided with image-converter.test.ts's global execFileSync mock when
running in parallel on Linux CI. Switching to Bun.spawnSync avoids the mock
entirely since spyOn(childProcess, 'execFileSync') doesn't affect Bun APIs.

Fixes CI flake that only reproduced on Linux.
2026-03-26 19:47:25 +09:00
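The leak mechanism can be illustrated with a plain object standing in for the `child_process` module. This is a simplified sketch of the shared-state problem, not the actual test code: both test files see the same module object, so a stub installed by one is visible to the other until it is restored.

```typescript
// Simplified stand-in for the node:child_process module object that all test
// files in one process share.
const childProcessLike = {
  execFileSync(cmd: string, args: string[]): string {
    return `ran ${cmd} ${args.join(" ")}`;
  },
};

// One test file installs a global mock (as image-converter.test.ts did)...
const original = childProcessLike.execFileSync;
childProcessLike.execFileSync = () => "mocked output";

// ...and a different test file, running later in the same process, sees the
// mock instead of the real function.
const leaked = childProcessLike.execFileSync("git", ["init"]);

// Restoring after the suite is the general fix; switching the caller to
// Bun.spawnSync sidesteps the problem entirely, since that is a separate API
// the spy on the module object never touches.
childProcessLike.execFileSync = original;
```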
YeonGyu-Kim
1c54fdad26 feat(compat): package rename compatibility layer for oh-my-opencode → oh-my-openagent
- Add legacy plugin startup warning when oh-my-opencode config detected
- Update CLI installer and TUI installer for new package name
- Split monolithic config-manager.test.ts into focused test modules
- Add plugin config detection tests for legacy name fallback
- Update processed-command-store to use plugin-identity constants
- Add claude-code-plugin-loader discovery test for both config names
- Update chat-params and ultrawork-db tests for plugin identity

Part of #2823
2026-03-26 19:44:55 +09:00
YeonGyu-Kim
d39891fcab docs: update hephaestus default model references from gpt-5.3-codex to gpt-5.4
Updated across README (all locales), docs/guide/, docs/reference/,
docs/examples/, AGENTS.md files, and test expectations/snapshots.

The deep category and multimodal-looker still use gpt-5.3-codex as
those are separate from the hephaestus agent.
2026-03-26 19:25:26 +09:00
YeonGyu-Kim
d57ed97386 feat(hephaestus): upgrade default model from gpt-5.3-codex to gpt-5.4
Hephaestus now uses gpt-5.4 as its default model across all providers
(openai, github-copilot, venice, opencode), matching Sisyphus's GPT 5.4
support. The separate gpt-5.3-codex → github-copilot fallback entry is
removed since gpt-5.4 is available on all required providers.
2026-03-26 19:02:37 +09:00
github-actions[bot]
6a510c01e0 @kuitos has signed the CLA in code-yeongyu/oh-my-openagent#2833 2026-03-26 09:56:02 +00:00
YeonGyu-Kim
b34eab3884 fix(test): isolate model-capabilities from local provider cache
Mock connected-providers-cache in model-capabilities.test.ts to prevent
findProviderModelMetadata from reading disk-cached model metadata.

Without this mock, the 'prefers runtime models.dev cache' test gets
polluted by real cached data from opencode serve runs, causing the
test to receive different maxOutputTokens/supportsTemperature values
than the mock runtime snapshot provides.

This was the last CI-only failure — passes locally with cache, fails
on CI without cache, now passes everywhere via mock isolation.

Full suite: 4484 pass, 0 fail.
2026-03-26 18:14:51 +09:00
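The cache-pollution problem this commit fixes can be sketched abstractly. The resolution order (disk cache first, runtime snapshot second) is taken from the commit message; the function and type names here are illustrative, not the real `findProviderModelMetadata` API.

```typescript
// Hypothetical shape of the problem: metadata resolution prefers an on-disk
// provider cache and only falls back to the runtime models.dev snapshot.
// If a real cache left behind by local `opencode serve` runs is readable,
// a test observes different values than its mock snapshot provides.
type ModelMetadata = { maxOutputTokens: number; supportsTemperature: boolean };

function findMetadata(
  readDiskCache: () => ModelMetadata | null,
  runtimeSnapshot: ModelMetadata,
): ModelMetadata {
  return readDiskCache() ?? runtimeSnapshot;
}

const runtime: ModelMetadata = { maxOutputTokens: 8192, supportsTemperature: true };

// Mocking the cache module so it always misses makes the test deterministic
// on every machine, with or without local cached data.
const resolved = findMetadata(() => null, runtime);
```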
YeonGyu-Kim
4efc181390 fix(ci): resolve all test failures + complete rename compat layer
Sisyphus-authored fixes across 15 files:

- plugin-identity: align CONFIG_BASENAME with actual config file name
- add-plugin-to-opencode-config: handle legacy→canonical name migration
- plugin-detection tests: update expectations for new identity constants
- doctor/system: fix legacy name warning test assertions
- install tests: align with new plugin name
- chat-params tests: fix mock isolation
- model-capabilities tests: fix snapshot expectations
- image-converter: fix platform-dependent test assertions (Linux CI)
- example configs: expanded with more detailed comments

Full suite: 4484 pass, 0 fail, typecheck clean.
2026-03-26 18:04:31 +09:00
YeonGyu-Kim
e86edca633 feat(doctor): warn on legacy package name + add example configs
- Doctor now detects when opencode.json references 'oh-my-opencode'
  (legacy name) and warns users to switch to 'oh-my-openagent' with
  the exact replacement string.

- Added 3 example config files in docs/examples/:
  - default.jsonc: balanced setup with all agents documented
  - coding-focused.jsonc: Sisyphus + Hephaestus heavy
  - planning-focused.jsonc: Prometheus + Atlas heavy

All examples include every agent (sisyphus, hephaestus, atlas,
prometheus, explore, librarian) with model recommendations.

Helps with #2823
2026-03-26 17:01:42 +09:00
YeonGyu-Kim
a8ec92748c fix(model-resolution): honor user config overrides on cold cache
When provider-models cache is cold (first run / cache miss),
resolveModelForDelegateTask returns {skipped: true}. Previously this
caused the subagent resolver to:

1. Ignore the user's explicit model override (e.g. explore.model)
2. Fall through to the hardcoded fallback chain which may contain
   model IDs that don't exist in the provider catalog

Now:
- subagent-resolver: if resolution is skipped but user explicitly
  configured a model, use it directly
- subagent-resolver: don't assign hardcoded fallback chain on skip
- category-resolver: same — don't leak hardcoded chain on skip
- general-agents: if user model fails resolution, use it as-is
  instead of falling back to hardcoded chain first entry

Closes #2820
2026-03-26 16:54:04 +09:00
YeonGyu-Kim
dd85d1451a fix(model-requirements): align fallback models with available provider catalogs
- opencode/minimax-m2.7-highspeed → opencode/minimax-m2.5 (provider lacks m2.7 variants)
- opencode-go/minimax-m2.7-highspeed → opencode-go/minimax-m2.7 (provider lacks -highspeed)
- opencode/minimax-m2.7 → opencode/minimax-m2.5 (provider only has m2.5)
- added xai as alternative provider for grok-code-fast-1 (prevents wrong provider prefix)
2026-03-26 16:00:02 +09:00
YeonGyu-Kim
682eead61b Merge pull request #2845 from code-yeongyu/fix/path-discovery-parity-followup
fix: add remaining path discovery parity coverage
2026-03-26 13:13:15 +09:00
YeonGyu-Kim
42f5386100 fix(tests): drop duplicate tilde config regression
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:08:53 +09:00
YeonGyu-Kim
5bc019eb7c fix(skills): remove duplicate homedir import
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:06:32 +09:00
YeonGyu-Kim
097e2be7e8 fix(slashcommand): discover nested opencode commands with slash names
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:05:03 +09:00
YeonGyu-Kim
c637d77965 fix(commands): discover ancestor opencode project commands
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:04:39 +09:00
YeonGyu-Kim
4c8aacef48 fix(agents): include .agents skills in agent awareness
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:02:52 +09:00
YeonGyu-Kim
8413bc6a91 fix(skills): expand tilde config source paths
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:02:36 +09:00
YeonGyu-Kim
86a62aef45 fix(skills): discover ancestor project skill directories
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:02:36 +09:00
YeonGyu-Kim
961cc788f6 fix(shared): support opencode directory aliases
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:02:12 +09:00
YeonGyu-Kim
19838b78a7 fix(shared): add bounded project discovery helpers
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 13:01:06 +09:00
YeonGyu-Kim
9d4a8f2183 Merge pull request #2844 from code-yeongyu/fix/opencode-followup-gaps
fix: close remaining upstream path discovery gaps
2026-03-26 12:35:39 +09:00
YeonGyu-Kim
7f742723b5 fix(slashcommand): use slash separator for nested commands
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 12:29:42 +09:00
YeonGyu-Kim
b20a34bfa7 fix(slashcommand): discover nested opencode commands
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 12:15:47 +09:00
YeonGyu-Kim
12a4318439 fix(commands): load .agents skills into command config
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 12:15:47 +09:00
YeonGyu-Kim
e4a5973b16 fix(agents): include .agents skills in agent awareness
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 12:15:47 +09:00
YeonGyu-Kim
83819a15d3 fix(shared): stop ancestor discovery at worktree root
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 12:15:47 +09:00
YeonGyu-Kim
a391f44420 Merge pull request #2842 from code-yeongyu/fix/opencode-skill-override-gaps
fix: align path discovery with upstream opencode
2026-03-26 11:54:08 +09:00
YeonGyu-Kim
94b4a4f850 fix(slashcommand): deduplicate opencode command aliases
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:36:59 +09:00
YeonGyu-Kim
9fde370838 fix(commands): preserve nearest opencode command precedence
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:36:59 +09:00
YeonGyu-Kim
b6ee7f09b1 fix(slashcommand): discover ancestor opencode commands
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:22:00 +09:00
YeonGyu-Kim
28bcab066e fix(commands): load opencode command dirs from aliases
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:22:00 +09:00
YeonGyu-Kim
b5cb50b561 fix(skills): discover ancestor project skill directories
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:22:00 +09:00
YeonGyu-Kim
8242500856 fix(skills): expand tilde config source paths
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:22:00 +09:00
YeonGyu-Kim
6d688ac0ae fix(shared): support opencode directory aliases
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:22:00 +09:00
YeonGyu-Kim
da3e80464d fix(shared): add ancestor project discovery helpers
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-26 11:22:00 +09:00
81 changed files with 2653 additions and 861 deletions

View File

@@ -60,16 +60,31 @@ jobs:
bun test src/features/opencode-skill-loader/loader.test.ts
bun test src/hooks/anthropic-context-window-limit-recovery/recovery-hook.test.ts
bun test src/hooks/anthropic-context-window-limit-recovery/executor.test.ts
# src/shared mock-heavy files (mock.module pollutes connected-providers-cache and legacy-plugin-warning)
bun test src/shared/model-capabilities.test.ts
bun test src/shared/log-legacy-plugin-startup-warning.test.ts
bun test src/shared/model-error-classifier.test.ts
bun test src/shared/opencode-message-dir.test.ts
# session-recovery mock isolation (recover-tool-result-missing mocks ./storage)
bun test src/hooks/session-recovery/recover-tool-result-missing.test.ts
- name: Run remaining tests
run: |
# Enumerate subdirectories/files explicitly to EXCLUDE mock-heavy files
# that were already run in isolation above.
# Excluded from src/shared: model-capabilities, log-legacy-plugin-startup-warning, model-error-classifier, opencode-message-dir
# Excluded from src/cli: doctor/formatter.test.ts, doctor/format-default.test.ts
# Excluded from src/tools: call-omo-agent/sync-executor.test.ts, call-omo-agent/session-creator.test.ts, session-manager (all)
# Excluded from src/hooks/anthropic-context-window-limit-recovery: recovery-hook.test.ts, executor.test.ts
# Build src/shared file list excluding mock-heavy files already run in isolation
SHARED_FILES=$(find src/shared -name '*.test.ts' \
! -name 'model-capabilities.test.ts' \
! -name 'log-legacy-plugin-startup-warning.test.ts' \
! -name 'model-error-classifier.test.ts' \
! -name 'opencode-message-dir.test.ts' \
| sort | tr '\n' ' ')
bun test bin script src/config src/mcp src/index.test.ts \
src/agents src/shared \
src/agents $SHARED_FILES \
src/cli/run src/cli/config-manager src/cli/mcp-oauth \
src/cli/index.test.ts src/cli/install.test.ts src/cli/model-fallback.test.ts \
src/cli/config-manager.test.ts \
@@ -82,6 +97,7 @@ jobs:
src/tools/call-omo-agent/background-executor.test.ts \
src/tools/call-omo-agent/subagent-session-creator.test.ts \
src/hooks/anthropic-context-window-limit-recovery/empty-content-recovery-sdk.test.ts src/hooks/anthropic-context-window-limit-recovery/parser.test.ts src/hooks/anthropic-context-window-limit-recovery/pruning-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/recovery-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/storage.test.ts \
src/hooks/session-recovery/detect-error-type.test.ts src/hooks/session-recovery/index.test.ts src/hooks/session-recovery/recover-empty-content-message-sdk.test.ts src/hooks/session-recovery/resume.test.ts src/hooks/session-recovery/storage \
src/hooks/claude-code-compatibility \
src/hooks/context-injection \
src/hooks/provider-toast \

View File

@@ -61,16 +61,31 @@ jobs:
bun test src/features/opencode-skill-loader/loader.test.ts
bun test src/hooks/anthropic-context-window-limit-recovery/recovery-hook.test.ts
bun test src/hooks/anthropic-context-window-limit-recovery/executor.test.ts
# src/shared mock-heavy files (mock.module pollutes connected-providers-cache and legacy-plugin-warning)
bun test src/shared/model-capabilities.test.ts
bun test src/shared/log-legacy-plugin-startup-warning.test.ts
bun test src/shared/model-error-classifier.test.ts
bun test src/shared/opencode-message-dir.test.ts
# session-recovery mock isolation (recover-tool-result-missing mocks ./storage)
bun test src/hooks/session-recovery/recover-tool-result-missing.test.ts
- name: Run remaining tests
run: |
# Enumerate subdirectories/files explicitly to EXCLUDE mock-heavy files
# that were already run in isolation above.
# Excluded from src/shared: model-capabilities, log-legacy-plugin-startup-warning, model-error-classifier, opencode-message-dir
# Excluded from src/cli: doctor/formatter.test.ts, doctor/format-default.test.ts
# Excluded from src/tools: call-omo-agent/sync-executor.test.ts, call-omo-agent/session-creator.test.ts, session-manager (all)
# Excluded from src/hooks/anthropic-context-window-limit-recovery: recovery-hook.test.ts, executor.test.ts
# Build src/shared file list excluding mock-heavy files already run in isolation
SHARED_FILES=$(find src/shared -name '*.test.ts' \
! -name 'model-capabilities.test.ts' \
! -name 'log-legacy-plugin-startup-warning.test.ts' \
! -name 'model-error-classifier.test.ts' \
! -name 'opencode-message-dir.test.ts' \
| sort | tr '\n' ' ')
bun test bin script src/config src/mcp src/index.test.ts \
src/agents src/shared \
src/agents $SHARED_FILES \
src/cli/run src/cli/config-manager src/cli/mcp-oauth \
src/cli/index.test.ts src/cli/install.test.ts src/cli/model-fallback.test.ts \
src/cli/config-manager.test.ts \
@@ -83,6 +98,7 @@ jobs:
src/tools/call-omo-agent/background-executor.test.ts \
src/tools/call-omo-agent/subagent-session-creator.test.ts \
src/hooks/anthropic-context-window-limit-recovery/empty-content-recovery-sdk.test.ts src/hooks/anthropic-context-window-limit-recovery/parser.test.ts src/hooks/anthropic-context-window-limit-recovery/pruning-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/recovery-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/storage.test.ts \
src/hooks/session-recovery/detect-error-type.test.ts src/hooks/session-recovery/index.test.ts src/hooks/session-recovery/recover-empty-content-message-sdk.test.ts src/hooks/session-recovery/resume.test.ts src/hooks/session-recovery/storage \
src/hooks/claude-code-compatibility \
src/hooks/context-injection \
src/hooks/provider-toast \

View File

@@ -168,7 +168,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) はあなたのメインのオーケストレーターです。計画を立て、専門家に委任し、攻撃的な並列実行でタスクを完了まで推進します。途中で投げ出すことはありません。
**Hephaestus** (`gpt-5.3-codex`) はあなたの自律的なディープワーカーです。レシピではなく、目標を与えてください。手取り足取り教えなくても、コードベースを探索し、パターンを研究し、端から端まで実行します。*正当なる職人 (The Legitimate Craftsman).*
**Hephaestus** (`gpt-5.4`) はあなたの自律的なディープワーカーです。レシピではなく、目標を与えてください。手取り足取り教えなくても、コードベースを探索し、パターンを研究し、端から端まで実行します。*正当なる職人 (The Legitimate Craftsman).*
**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) はあなたの戦略プランナーです。インタビューモードで動作し、コードに触れる前に質問をしてスコープを特定し、詳細な計画を構築します。
@@ -176,7 +176,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
> Anthropicが[私たちのせいでOpenCodeをブロックしました。](https://x.com/thdxr/status/2010149530486911014) だからこそHephaestusは「正当なる職人 (The Legitimate Craftsman)」と呼ばれているのです。皮肉を込めています。
>
> Opusで最もよく動きますが、Kimi K2.5 + GPT-5.3 Codexの組み合わせだけでも、バニラのClaude Codeを軽く凌駕します。設定は一切不要です。
> Opusで最もよく動きますが、Kimi K2.5 + GPT-5.4の組み合わせだけでも、バニラのClaude Codeを軽く凌駕します。設定は一切不要です。
### エージェントのオーケストレーション

View File

@@ -162,7 +162,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**)는 당신의 메인 오케스트레이터입니다. 공격적인 병렬 실행으로 계획을 세우고, 전문가들에게 위임하며, 완료될 때까지 밀어붙입니다. 중간에 포기하는 법이 없습니다.
**Hephaestus** (`gpt-5.3-codex`)는 당신의 자율 딥 워커입니다. 레시피가 아니라 목표를 주세요. 베이비시터 없이 알아서 코드베이스를 탐색하고, 패턴을 연구하며, 끝에서 끝까지 전부 해냅니다. *진정한 장인(The Legitimate Craftsman).*
**Hephaestus** (`gpt-5.4`)는 당신의 자율 딥 워커입니다. 레시피가 아니라 목표를 주세요. 베이비시터 없이 알아서 코드베이스를 탐색하고, 패턴을 연구하며, 끝에서 끝까지 전부 해냅니다. *진정한 장인(The Legitimate Craftsman).*
**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**)는 당신의 전략 플래너입니다. 인터뷰 모드로 작동합니다. 코드 한 줄 만지기 전에 질문을 던져 스코프를 파악하고 상세한 계획부터 세웁니다.
@@ -170,7 +170,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
> Anthropic이 [우리 때문에 OpenCode를 막아버렸습니다.](https://x.com/thdxr/status/2010149530486911014) 그래서 Hephaestus의 별명이 "진정한 장인(The Legitimate Craftsman)"인 겁니다. (어디서 많이 들어본 이름이죠?) 아이러니를 노렸습니다.
>
> Opus에서 제일 잘 돌아가긴 하지만, Kimi K2.5 + GPT-5.3 Codex 조합만으로도 바닐라 Claude Code는 가볍게 바릅니다. 설정도 필요 없습니다.
> Opus에서 제일 잘 돌아가긴 하지만, Kimi K2.5 + GPT-5.4 조합만으로도 바닐라 Claude Code는 가볍게 바릅니다. 설정도 필요 없습니다.
### 에이전트 오케스트레이션

View File

@@ -164,7 +164,7 @@ Even only with following subscriptions, ultrawork will work well (this project i
**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`** ) is your main orchestrator. He plans, delegates to specialists, and drives tasks to completion with aggressive parallel execution. He does not stop halfway.
**Hephaestus** (`gpt-5.3-codex`) is your autonomous deep worker. Give him a goal, not a recipe. He explores the codebase, researches patterns, and executes end-to-end without hand-holding. *The Legitimate Craftsman.*
**Hephaestus** (`gpt-5.4`) is your autonomous deep worker. Give him a goal, not a recipe. He explores the codebase, researches patterns, and executes end-to-end without hand-holding. *The Legitimate Craftsman.*
**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`** ) is your strategic planner. Interview mode: it questions, identifies scope, and builds a detailed plan before a single line of code is touched.
@@ -172,7 +172,7 @@ Every agent is tuned to its model's specific strengths. No manual model-juggling
> Anthropic [blocked OpenCode because of us.](https://x.com/thdxr/status/2010149530486911014) That's why Hephaestus is called "The Legitimate Craftsman." The irony is intentional.
>
> We run best on Opus, but Kimi K2.5 + GPT-5.3 Codex already beats vanilla Claude Code. Zero config needed.
> We run best on Opus, but Kimi K2.5 + GPT-5.4 already beats vanilla Claude Code. Zero config needed.
### Agent Orchestration

View File

@@ -152,7 +152,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) — главный оркестратор. Он планирует, делегирует задачи специалистам и доводит их до завершения с агрессивным параллельным выполнением. Он не останавливается на полпути.
**Hephaestus** (`gpt-5.3-codex`) — автономный глубокий исполнитель. Дайте ему цель, а не рецепт. Он исследует кодовую базу, изучает паттерны и выполняет задачи сквозным образом без лишних подсказок. *Законный Мастер.*
**Hephaestus** (`gpt-5.4`) — автономный глубокий исполнитель. Дайте ему цель, а не рецепт. Он исследует кодовую базу, изучает паттерны и выполняет задачи сквозным образом без лишних подсказок. *Законный Мастер.*
**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) — стратегический планировщик. Режим интервью: задаёт вопросы, определяет объём работ и формирует детальный план до того, как написана хотя бы одна строка кода.
@@ -160,7 +160,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
> Anthropic [заблокировал OpenCode из-за нас.](https://x.com/thdxr/status/2010149530486911014) Именно поэтому Hephaestus зовётся «Законным Мастером». Ирония намеренная.
>
> Мы работаем лучше всего на Opus, но Kimi K2.5 + GPT-5.3 Codex уже превосходят ванильный Claude Code. Никакой настройки не требуется.
> Мы работаем лучше всего на Opus, но Kimi K2.5 + GPT-5.4 уже превосходят ванильный Claude Code. Никакой настройки не требуется.
### Оркестрация агентов

View File

@@ -169,7 +169,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) 是你的主指挥官。他负责制定计划、分配任务给专家团队,并以极其激进的并行策略推动任务直至完成。他从不半途而废。
**Hephaestus** (`gpt-5.3-codex`) 是你的自主深度工作者。你只需要给他目标,不要给他具体做法。他会自动探索代码库模式,从头到尾独立执行任务,绝不会中途要你当保姆。*名副其实的正牌工匠。*
**Hephaestus** (`gpt-5.4`) 是你的自主深度工作者。你只需要给他目标,不要给他具体做法。他会自动探索代码库模式,从头到尾独立执行任务,绝不会中途要你当保姆。*名副其实的正牌工匠。*
**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) 是你的战略规划师。他通过访谈模式,在动一行代码之前,先通过提问确定范围并构建详尽的执行计划。
@@ -177,7 +177,7 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
> Anthropic [因为我们屏蔽了 OpenCode](https://x.com/thdxr/status/2010149530486911014)。这就是为什么我们将 Hephaestus 命名为“正牌工匠 (The Legitimate Craftsman)”。这是一个故意的讽刺。
>
> 我们在 Opus 上运行得最好,但仅仅使用 Kimi K2.5 + GPT-5.3 Codex 就足以碾压原版的 Claude Code。完全不需要配置。
> 我们在 Opus 上运行得最好,但仅仅使用 Kimi K2.5 + GPT-5.4 就足以碾压原版的 Claude Code。完全不需要配置。
### 智能体调度机制

View File

@@ -0,0 +1,88 @@
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/dev/assets/oh-my-opencode.schema.json",
// Optimized for intensive coding sessions.
// Prioritizes deep implementation agents and fast feedback loops.
"agents": {
// Primary orchestrator: aggressive parallel delegation
"sisyphus": {
"model": "kimi-for-coding/k2p5",
"ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
"prompt_append": "Delegate heavily to hephaestus for implementation. Parallelize exploration.",
},
// Heavy lifter: maximum autonomy for coding tasks
"hephaestus": {
"model": "openai/gpt-5.4",
"prompt_append": "You are the primary implementation agent. Own the codebase. Explore, decide, execute. Use LSP and AST-grep aggressively.",
"permission": { "edit": "allow", "bash": { "git": "allow", "test": "allow" } },
},
// Lightweight planner: quick planning for coding tasks
"prometheus": {
"model": "opencode/gpt-5-nano",
"prompt_append": "Keep plans concise. Focus on file structure and key decisions.",
},
// Debugging and architecture
"oracle": { "model": "openai/gpt-5.4", "variant": "high" },
// Fast docs lookup
"librarian": { "model": "github-copilot/grok-code-fast-1" },
// Rapid codebase navigation
"explore": { "model": "github-copilot/grok-code-fast-1" },
// Frontend and visual work
"multimodal-looker": { "model": "google/gemini-3.1-pro" },
// Plan review: minimal overhead
"metis": { "model": "opencode/gpt-5-nano" },
// Code review focus
"momus": { "prompt_append": "Focus on code quality, edge cases, and test coverage." },
// Long-running coding sessions
"atlas": {},
// Quick fixes and small tasks
"sisyphus-junior": { "model": "opencode/gpt-5-nano" },
},
"categories": {
// Trivial changes: fastest possible
"quick": { "model": "opencode/gpt-5-nano" },
// Standard coding tasks: good quality, fast
"unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },
// Complex refactors: best quality
"unspecified-high": { "model": "openai/gpt-5.3-codex" },
// Visual work
"visual-engineering": { "model": "google/gemini-3.1-pro", "variant": "high" },
// Deep autonomous work
"deep": { "model": "openai/gpt-5.3-codex" },
// Architecture decisions
"ultrabrain": { "model": "openai/gpt-5.4", "variant": "xhigh" },
},
// High concurrency for parallel agent work
"background_task": {
"defaultConcurrency": 8,
"providerConcurrency": {
"anthropic": 5,
"openai": 5,
"google": 10,
"github-copilot": 10,
"opencode": 15,
},
},
// Enable all coding aids
"hashline_edit": true,
"experimental": { "aggressive_truncation": true, "task_system": true },
}

View File

@@ -0,0 +1,71 @@
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/dev/assets/oh-my-opencode.schema.json",
// Balanced defaults for general development.
// Tuned for reliability across diverse tasks without overspending.
"agents": {
// Main orchestrator: handles delegation and drives tasks to completion
"sisyphus": {
"model": "anthropic/claude-opus-4-6",
"ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
},
// Deep autonomous worker: end-to-end implementation
"hephaestus": {
"model": "openai/gpt-5.4",
"prompt_append": "Explore thoroughly, then implement. Prefer small, testable changes.",
},
// Strategic planner: interview mode before execution
"prometheus": {
"prompt_append": "Always interview first. Validate scope before planning.",
},
// Architecture consultant: complex design and debugging
"oracle": { "model": "openai/gpt-5.4", "variant": "high" },
// Documentation and code search
"librarian": { "model": "google/gemini-3-flash" },
// Fast codebase exploration
"explore": { "model": "github-copilot/grok-code-fast-1" },
// Visual tasks: UI/UX, images, diagrams
"multimodal-looker": { "model": "google/gemini-3.1-pro" },
// Plan consultant: reviews and improves plans
"metis": {},
// Critic and reviewer
"momus": {},
// Continuation and long-running task handler
"atlas": {},
// Lightweight task executor for simple jobs
"sisyphus-junior": { "model": "opencode/gpt-5-nano" },
},
"categories": {
"quick": { "model": "opencode/gpt-5-nano" },
"unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },
"unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
"writing": { "model": "google/gemini-3-flash" },
"visual-engineering": { "model": "google/gemini-3.1-pro", "variant": "high" },
"deep": { "model": "openai/gpt-5.3-codex" },
"ultrabrain": { "model": "openai/gpt-5.4", "variant": "xhigh" },
},
// Conservative concurrency for cost control
"background_task": {
"providerConcurrency": {
"anthropic": 3,
"openai": 3,
"google": 5,
"opencode": 10,
},
},
"experimental": { "aggressive_truncation": true },
}

View File

@@ -0,0 +1,112 @@
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/dev/assets/oh-my-opencode.schema.json",
// Optimized for strategic planning, architecture, and complex project design.
// Prioritizes deep thinking agents and thorough analysis before execution.
"agents": {
// Orchestrator: delegates to planning agents first
"sisyphus": {
"model": "anthropic/claude-opus-4-6",
"ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
"prompt_append": "Always consult prometheus and atlas for planning. Never rush to implementation.",
},
// Implementation: uses planning outputs
"hephaestus": {
"model": "openai/gpt-5.4",
"prompt_append": "Follow established plans precisely. Ask for clarification when plans are ambiguous.",
},
// Primary planner: deep interview mode
"prometheus": {
"model": "anthropic/claude-opus-4-6",
"thinking": { "type": "enabled", "budgetTokens": 160000 },
"prompt_append": "Interview extensively. Question assumptions. Build exhaustive plans with milestones, risks, and contingencies. Use deep & quick agents heavily in parallel for research.",
},
// Architecture consultant
"oracle": {
"model": "openai/gpt-5.4",
"variant": "xhigh",
"thinking": { "type": "enabled", "budgetTokens": 120000 },
},
// Research and documentation
"librarian": { "model": "google/gemini-3-flash" },
// Exploration for research phase
"explore": { "model": "github-copilot/grok-code-fast-1" },
// Visual planning and diagrams
"multimodal-looker": { "model": "google/gemini-3.1-pro", "variant": "high" },
// Plan review and refinement: heavily utilized
"metis": {
"model": "anthropic/claude-opus-4-6",
"prompt_append": "Critically evaluate plans. Identify gaps, risks, and improvements. Be thorough.",
},
// Critic: challenges assumptions
"momus": {
"model": "openai/gpt-5.4",
"prompt_append": "Challenge all assumptions in plans. Look for edge cases, failure modes, and overlooked requirements.",
},
// Long-running planning sessions
"atlas": {
"prompt_append": "Preserve context across long planning sessions. Track evolving decisions.",
},
// Quick research tasks
"sisyphus-junior": { "model": "opencode/gpt-5-nano" },
},
"categories": {
"quick": { "model": "opencode/gpt-5-nano" },
"unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },
// High-effort planning tasks: maximum reasoning
"unspecified-high": {
"model": "openai/gpt-5.4",
"variant": "xhigh",
},
// Documentation from plans
"writing": { "model": "google/gemini-3-flash" },
// Visual architecture
"visual-engineering": { "model": "google/gemini-3.1-pro", "variant": "high" },
// Deep research and analysis
"deep": { "model": "openai/gpt-5.3-codex" },
// Strategic reasoning
"ultrabrain": { "model": "openai/gpt-5.4", "variant": "xhigh" },
// Creative approaches to problems
"artistry": { "model": "google/gemini-3.1-pro", "variant": "high" },
},
// Moderate concurrency: planning is sequential by nature
"background_task": {
"defaultConcurrency": 5,
"staleTimeoutMs": 300000,
"providerConcurrency": {
"anthropic": 3,
"openai": 3,
},
"modelConcurrency": {
"anthropic/claude-opus-4-6": 2,
"openai/gpt-5.4": 2,
},
},
"sisyphus_agent": {
"planner_enabled": true,
"replace_plan": true,
},
"experimental": { "aggressive_truncation": true },
}

View File

@@ -27,7 +27,7 @@ Using Sisyphus with older GPT models would be like taking your best project mana
Hephaestus is the developer who stays in their room coding all day. Doesn't talk much. Might seem socially awkward. But give them a hard technical problem and they'll emerge three hours later with a solution nobody else could have found.
**This is why Hephaestus uses GPT-5.3 Codex.** Codex is built for exactly this:
**This is why Hephaestus uses GPT-5.4.** GPT-5.4 is built for exactly this:
- Deep, autonomous exploration without hand-holding
- Multi-file reasoning across complex codebases
@@ -82,7 +82,7 @@ These agents are built for GPT's principle-driven style. Their prompts assume au
| Agent | Role | Fallback Chain | Notes |
| -------------- | ----------------------- | -------------------------------------- | ------------------------------------------------ |
| **Hephaestus** | Autonomous deep worker | GPT-5.3 Codex → GPT-5.4 (Copilot) | Requires GPT access. GPT-5.4 via Copilot as fallback. The craftsman. |
| **Hephaestus** | Autonomous deep worker | GPT-5.4 | Requires GPT access. The craftsman. |
| **Oracle** | Architecture consultant | GPT-5.4 → Gemini 3.1 Pro → Claude Opus → opencode-go/glm-5 | Read-only high-IQ consultation. |
| **Momus** | Ruthless reviewer | GPT-5.4 → Claude Opus → Gemini 3.1 Pro → opencode-go/glm-5 | Verification and plan review. GPT-5.4 uses xhigh variant. |
@@ -119,7 +119,7 @@ Principle-driven, explicit reasoning, deep technical capability. Best for agents
| Model | Strengths |
| ----------------- | ----------------------------------------------------------------------------------------------- |
| **GPT-5.3 Codex** | Deep coding powerhouse. Autonomous exploration. Required for Hephaestus. |
| **GPT-5.3 Codex** | Deep coding powerhouse. Autonomous exploration. Still available for deep category and explicit overrides. |
| **GPT-5.4** | High intelligence, strategic reasoning. Default for Oracle, Momus, and a key fallback for Prometheus / Atlas. Uses xhigh variant for Momus. |
| **GPT-5.4 Mini** | Fast + strong reasoning. Good for lightweight autonomous tasks. Default for quick category. |
| **GPT-5-Nano** | Ultra-cheap, fast. Good for simple utility tasks. |

View File

@@ -285,7 +285,7 @@ Not all models behave the same way. Understanding which models are "similar" hel
| Model | Provider(s) | Notes |
| ----------------- | -------------------------------- | ------------------------------------------------- |
| **GPT-5.3-codex** | openai, github-copilot, opencode | Deep coding powerhouse. Required for Hephaestus. |
| **GPT-5.3-codex** | openai, github-copilot, opencode | Deep coding powerhouse. Still available for deep category and explicit overrides. |
| **GPT-5.4** | openai, github-copilot, opencode | High intelligence. Default for Oracle. |
| **GPT-5.4 Mini** | openai, github-copilot, opencode | Fast + strong reasoning. Default for quick category. |
| **GPT-5-Nano** | opencode | Ultra-cheap, fast. Good for simple utility tasks. |
@@ -334,7 +334,7 @@ Priority: **Claude > GPT > Claude-like models**
| Agent | Role | Default Chain | Notes |
| -------------- | ---------------------- | -------------------------------------- | ------------------------------------------------------ |
| **Hephaestus** | Deep autonomous worker | GPT-5.3-codex (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
| **Hephaestus** | Deep autonomous worker | GPT-5.4 (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
| **Oracle** | Architecture/debugging | GPT-5.4 (high) → Gemini 3.1 Pro → Opus | High-IQ strategic backup. GPT preferred. |
| **Momus** | High-accuracy reviewer | GPT-5.4 (medium) → Opus → Gemini 3.1 Pro | Verification agent. GPT preferred. |

View File

@@ -420,7 +420,7 @@ Atlas is automatically activated when you run `/start-work`. You don't need to m
| Aspect | Hephaestus | Sisyphus + `ulw` / `ultrawork` |
| --------------- | ------------------------------------------ | ---------------------------------------------------- |
| **Model** | GPT-5.3 Codex (medium reasoning) | Claude Opus 4.6 / GPT-5.4 / GLM 5 depending on setup |
| **Model** | GPT-5.4 (medium reasoning) | Claude Opus 4.6 / GPT-5.4 / GLM 5 depending on setup |
| **Approach** | Autonomous deep worker | Keyword-activated ultrawork mode |
| **Best For** | Complex architectural work, deep reasoning | General complex tasks, "just do it" scenarios |
| **Planning** | Self-plans during execution | Uses Prometheus plans if available |
@@ -443,8 +443,8 @@ Switch to Hephaestus (Tab → Select Hephaestus) when:
- "Integrate our Rust core with the TypeScript frontend"
- "Migrate from MongoDB to PostgreSQL with zero downtime"
4. **You specifically want GPT-5.3 Codex reasoning**
- Some problems benefit from GPT-5.3 Codex's training characteristics
4. **You specifically want GPT-5.4 reasoning**
- Some problems benefit from GPT-5.4's training characteristics
**When to Use Sisyphus + `ulw`:**
@@ -469,7 +469,7 @@ Use the `ulw` keyword in Sisyphus when:
**Recommendation:**
- **For most users**: Use `ulw` keyword in Sisyphus. It's the default path and works excellently for 90% of complex tasks.
- **For power users**: Switch to Hephaestus when you specifically need GPT-5.3 Codex's reasoning style or want the "AmpCode deep mode" experience of fully autonomous exploration and execution.
- **For power users**: Switch to Hephaestus when you specifically need GPT-5.4's reasoning style or want the "AmpCode deep mode" experience of fully autonomous exploration and execution.
---
@@ -520,7 +520,7 @@ Type `exit` or start a new session. Atlas is primarily entered via `/start-work`
**For most tasks**: Type `ulw` in Sisyphus.
**Use Hephaestus when**: You specifically need GPT-5.3 Codex's reasoning style for deep architectural work or complex debugging.
**Use Hephaestus when**: You specifically need GPT-5.4's reasoning style for deep architectural work or complex debugging.
---

View File

@@ -93,9 +93,9 @@ Sisyphus still works best on Claude-family models, Kimi, and GLM. GPT-5.4 now ha
Named with intentional irony. Anthropic blocked OpenCode from using their API because of this project. So the team built an autonomous GPT-native agent instead.
Hephaestus runs on GPT-5.3 Codex. Give him a goal, not a recipe. He explores the codebase, researches patterns, and executes end-to-end without hand-holding. He is the legitimate craftsman because he was born from necessity, not privilege.
Hephaestus runs on GPT-5.4. Give him a goal, not a recipe. He explores the codebase, researches patterns, and executes end-to-end without hand-holding. He is the legitimate craftsman because he was born from necessity, not privilege.
Use Hephaestus when you need deep architectural reasoning, complex debugging across many files, or cross-domain knowledge synthesis. Switch to him explicitly when the work demands GPT-5.3 Codex's particular strengths.
Use Hephaestus when you need deep architectural reasoning, complex debugging across many files, or cross-domain knowledge synthesis. Switch to him explicitly when the work demands GPT-5.4's particular strengths.
**Why this beats vanilla Codex CLI:**
@@ -214,8 +214,7 @@ You can override specific agents or categories in your config:
**GPT models** (explicit reasoning, principle-driven):
- GPT-5.3-codex — deep coding powerhouse, required for Hephaestus
- GPT-5.4 — high intelligence, default for Oracle
- GPT-5.4 — deep coding powerhouse, required for Hephaestus and default for Oracle
- GPT-5-Nano — ultra-cheap, fast utility tasks
**Different-behavior models**:

View File

@@ -268,7 +268,7 @@ Disable categories: `{ "disabled_categories": ["ultrabrain"] }`
| Agent | Default Model | Provider Priority |
| --------------------- | ------------------- | ---------------------------------------------------------------------------- |
| **Sisyphus** | `claude-opus-4-6` | `claude-opus-4-6` → `glm-5` → `big-pickle` |
| **Hephaestus** | `gpt-5.3-codex` | `gpt-5.3-codex` → `gpt-5.4` (GitHub Copilot fallback) |
| **Hephaestus** | `gpt-5.4` | `gpt-5.4` |
| **oracle** | `gpt-5.4` | `gpt-5.4` → `gemini-3.1-pro` → `claude-opus-4-6` |
| **librarian** | `minimax-m2.7` | `minimax-m2.7` → `minimax-m2.7-highspeed` → `claude-haiku-4-5` → `gpt-5-nano` |
| **explore** | `grok-code-fast-1` | `grok-code-fast-1` → `minimax-m2.7-highspeed` → `minimax-m2.7` → `claude-haiku-4-5` → `gpt-5-nano` |

View File

@@ -9,7 +9,7 @@ Oh-My-OpenAgent provides 11 specialized AI agents. Each has distinct expertise,
| Agent | Model | Purpose |
| --------------------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Sisyphus** | `claude-opus-4-6` | The default orchestrator. Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: `glm-5` → `big-pickle`. |
| **Hephaestus** | `gpt-5.3-codex` | The Legitimate Craftsman. Autonomous deep worker inspired by AmpCode's deep mode. Goal-oriented execution with thorough research before action. Explores codebase patterns, completes tasks end-to-end without premature stopping. Named after the Greek god of forge and craftsmanship. Fallback: `gpt-5.4` on GitHub Copilot. Requires a GPT-capable provider. |
| **Hephaestus** | `gpt-5.4` | The Legitimate Craftsman. Autonomous deep worker inspired by AmpCode's deep mode. Goal-oriented execution with thorough research before action. Explores codebase patterns, completes tasks end-to-end without premature stopping. Named after the Greek god of forge and craftsmanship. Requires a GPT-capable provider. |
| **Oracle** | `gpt-5.4` | Architecture decisions, code review, debugging. Read-only consultation with stellar logical reasoning and deep analysis. Inspired by AmpCode. Fallback: `gemini-3.1-pro` → `claude-opus-4-6`. |
| **Librarian** | `minimax-m2.7` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: `minimax-m2.7-highspeed` → `claude-haiku-4-5` → `gpt-5-nano`. |
| **Explore** | `grok-code-fast-1` | Fast codebase exploration and contextual grep. Fallback: `minimax-m2.7-highspeed` → `minimax-m2.7` → `claude-haiku-4-5` → `gpt-5-nano`. |

View File

@@ -2327,6 +2327,14 @@
"created_at": "2026-03-25T23:11:32Z",
"repoId": 1108837393,
"pullRequestNo": 2840
},
{
"name": "kuitos",
"id": 5206843,
"comment_id": 4133207953,
"created_at": "2026-03-26T09:55:49Z",
"repoId": 1108837393,
"pullRequestNo": 2833
}
]
}

View File

@@ -11,7 +11,7 @@ Agent factories following `createXXXAgent(model) → AgentConfig` pattern. Each
| Agent | Model | Temp | Mode | Fallback Chain | Purpose |
|-------|-------|------|------|----------------|---------|
| **Sisyphus** | claude-opus-4-6 max | 0.1 | all | k2p5 → kimi-k2.5 → gpt-5.4 medium → glm-5 → big-pickle | Main orchestrator, plans + delegates |
| **Hephaestus** | gpt-5.3-codex medium | 0.1 | all | gpt-5.4 medium (copilot) | Autonomous deep worker |
| **Hephaestus** | gpt-5.4 medium | 0.1 | all | | Autonomous deep worker |
| **Oracle** | gpt-5.4 high | 0.1 | subagent | gemini-3.1-pro high → claude-opus-4-6 max | Read-only consultation |
| **Librarian** | minimax-m2.7 | 0.1 | subagent | minimax-m2.7-highspeed → claude-haiku-4-5 → gpt-5-nano | External docs/code search |
| **Explore** | grok-code-fast-1 | 0.1 | subagent | minimax-m2.7-highspeed → minimax-m2.7 → claude-haiku-4-5 → gpt-5-nano | Contextual grep |

View File

@@ -78,12 +78,16 @@ export function collectPendingBuiltinAgents(input: {
})
if (!resolution) {
if (override?.model) {
log("[agent-registration] User-configured model could not be resolved, falling back", {
// User explicitly configured a model but resolution failed (e.g., cold cache).
// Honor the user's choice directly instead of falling back to hardcoded chain.
log("[agent-registration] User-configured model not resolved, using as-is", {
agent: agentName,
configuredModel: override.model,
})
resolution = { model: override.model, provenance: "override" as const }
} else {
resolution = getFirstFallbackModel(requirement)
}
resolution = getFirstFallbackModel(requirement)
}
if (!resolution) continue
const { model, variant: resolvedVariant } = resolution
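In isolation, the override-honoring change in this hunk can be sketched as follows. This is a minimal illustration, not the real implementation: the `Resolution` type and the function shape are assumptions made for the sketch.

```typescript
type Resolution = { model: string; provenance: "override" | "fallback" }

// Sketch of the new behavior: when a user-configured model fails to
// resolve (e.g. a cold provider cache), honor the configured model
// verbatim instead of silently switching to the hardcoded fallback chain.
function resolveAgentModel(
  overrideModel: string | undefined,
  resolved: Resolution | null,
  firstFallback: () => Resolution | null,
): Resolution | null {
  if (resolved) return resolved
  if (overrideModel) {
    // User made an explicit choice; keep it even though resolution failed.
    return { model: overrideModel, provenance: "override" }
  }
  // No explicit choice: fall back to the built-in chain as before.
  return firstFallback()
}
```

The key design change is the `else` branch: previously `firstFallback()` ran unconditionally after a failed resolution, overwriting the user's configured model.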

View File

@@ -642,7 +642,7 @@ describe("createBuiltinAgents with requiresProvider gating (hephaestus)", () =>
// #then
expect(agents.hephaestus).toBeDefined()
expect(agents.hephaestus.model).toBe("openai/gpt-5.3-codex")
expect(agents.hephaestus.model).toBe("openai/gpt-5.4")
} finally {
cacheSpy.mockRestore()
fetchSpy.mockRestore()

View File

@@ -202,7 +202,7 @@ exports[`generateModelConfig single native provider uses OpenAI models when only
"variant": "medium",
},
"hephaestus": {
"model": "openai/gpt-5.3-codex",
"model": "openai/gpt-5.4",
"variant": "medium",
},
"librarian": {
@@ -287,7 +287,7 @@ exports[`generateModelConfig single native provider uses OpenAI models with isMa
"variant": "medium",
},
"hephaestus": {
"model": "openai/gpt-5.3-codex",
"model": "openai/gpt-5.4",
"variant": "medium",
},
"librarian": {
@@ -490,7 +490,7 @@ exports[`generateModelConfig all native providers uses preferred models from fal
"model": "anthropic/claude-haiku-4-5",
},
"hephaestus": {
"model": "openai/gpt-5.3-codex",
"model": "openai/gpt-5.4",
"variant": "medium",
},
"metis": {
@@ -565,7 +565,7 @@ exports[`generateModelConfig all native providers uses preferred models with isM
"model": "anthropic/claude-haiku-4-5",
},
"hephaestus": {
"model": "openai/gpt-5.3-codex",
"model": "openai/gpt-5.4",
"variant": "medium",
},
"metis": {
@@ -641,7 +641,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on
"model": "opencode/claude-haiku-4-5",
},
"hephaestus": {
"model": "opencode/gpt-5.3-codex",
"model": "opencode/gpt-5.4",
"variant": "medium",
},
"metis": {
@@ -716,7 +716,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is
"model": "opencode/claude-haiku-4-5",
},
"hephaestus": {
"model": "opencode/gpt-5.3-codex",
"model": "opencode/gpt-5.4",
"variant": "medium",
},
"metis": {
@@ -1049,7 +1049,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen
"model": "anthropic/claude-haiku-4-5",
},
"hephaestus": {
"model": "opencode/gpt-5.3-codex",
"model": "opencode/gpt-5.4",
"variant": "medium",
},
"metis": {
@@ -1124,7 +1124,7 @@ exports[`generateModelConfig mixed provider scenarios uses OpenAI + Copilot comb
"model": "github-copilot/gpt-5-mini",
},
"hephaestus": {
"model": "openai/gpt-5.3-codex",
"model": "openai/gpt-5.4",
"variant": "medium",
},
"metis": {
@@ -1329,7 +1329,7 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider
"model": "opencode/claude-haiku-4-5",
},
"hephaestus": {
"model": "opencode/gpt-5.3-codex",
"model": "github-copilot/gpt-5.4",
"variant": "medium",
},
"librarian": {
@@ -1407,7 +1407,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe
"model": "anthropic/claude-haiku-4-5",
},
"hephaestus": {
"model": "openai/gpt-5.3-codex",
"model": "openai/gpt-5.4",
"variant": "medium",
},
"librarian": {
@@ -1485,7 +1485,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is
"model": "anthropic/claude-haiku-4-5",
},
"hephaestus": {
"model": "openai/gpt-5.3-codex",
"model": "openai/gpt-5.4",
"variant": "medium",
},
"librarian": {

View File

@@ -1,4 +1,5 @@
import color from "picocolors"
import { PLUGIN_NAME } from "../shared"
import type { InstallArgs } from "./types"
import {
addPluginToOpenCodeConfig,
@@ -32,7 +33,7 @@ export async function runCliInstaller(args: InstallArgs, version: string): Promi
}
console.log()
printInfo(
"Usage: bunx oh-my-opencode install --no-tui --claude=<no|yes|max20> --gemini=<no|yes> --copilot=<no|yes>",
`Usage: bunx ${PLUGIN_NAME} install --no-tui --claude=<no|yes|max20> --gemini=<no|yes> --copilot=<no|yes>`,
)
console.log()
return 1
@@ -65,7 +66,7 @@ export async function runCliInstaller(args: InstallArgs, version: string): Promi
const config = argsToConfig(args)
printStep(step++, totalSteps, "Adding oh-my-opencode plugin...")
printStep(step++, totalSteps, `Adding ${PLUGIN_NAME} plugin...`)
const pluginResult = await addPluginToOpenCodeConfig(version)
if (!pluginResult.success) {
printError(`Failed: ${pluginResult.error}`)
@@ -75,7 +76,7 @@ export async function runCliInstaller(args: InstallArgs, version: string): Promi
`Plugin ${isUpdate ? "verified" : "added"} ${SYMBOLS.arrow} ${color.dim(pluginResult.configPath)}`,
)
printStep(step++, totalSteps, "Writing oh-my-opencode configuration...")
printStep(step++, totalSteps, `Writing ${PLUGIN_NAME} configuration...`)
const omoResult = writeOmoConfig(config)
if (!omoResult.success) {
printError(`Failed: ${omoResult.error}`)

View File

@@ -1,300 +0,0 @@
import { describe, expect, test, mock, afterEach } from "bun:test"
import { getPluginNameWithVersion, fetchNpmDistTags, generateOmoConfig } from "./config-manager"
import type { InstallConfig } from "./types"
describe("getPluginNameWithVersion", () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test("returns @latest when current version matches latest tag", async () => {
// #given npm dist-tags with latest=2.14.0
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ latest: "2.14.0", beta: "3.0.0-beta.3" }),
} as Response)
) as unknown as typeof fetch
// #when current version is 2.14.0
const result = await getPluginNameWithVersion("2.14.0")
// #then should use @latest tag
expect(result).toBe("oh-my-opencode@latest")
})
test("returns @beta when current version matches beta tag", async () => {
// #given npm dist-tags with beta=3.0.0-beta.3
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ latest: "2.14.0", beta: "3.0.0-beta.3" }),
} as Response)
) as unknown as typeof fetch
// #when current version is 3.0.0-beta.3
const result = await getPluginNameWithVersion("3.0.0-beta.3")
// #then should use @beta tag
expect(result).toBe("oh-my-opencode@beta")
})
test("returns @next when current version matches next tag", async () => {
// #given npm dist-tags with next=3.1.0-next.1
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ latest: "2.14.0", beta: "3.0.0-beta.3", next: "3.1.0-next.1" }),
} as Response)
) as unknown as typeof fetch
// #when current version is 3.1.0-next.1
const result = await getPluginNameWithVersion("3.1.0-next.1")
// #then should use @next tag
expect(result).toBe("oh-my-opencode@next")
})
test("returns prerelease channel tag when no dist-tag matches prerelease version", async () => {
// #given npm dist-tags with beta=3.0.0-beta.3
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ latest: "2.14.0", beta: "3.0.0-beta.3" }),
} as Response)
) as unknown as typeof fetch
// #when current version is old beta 3.0.0-beta.2
const result = await getPluginNameWithVersion("3.0.0-beta.2")
// #then should preserve prerelease channel
expect(result).toBe("oh-my-opencode@beta")
})
test("returns prerelease channel tag when fetch fails", async () => {
// #given network failure
globalThis.fetch = mock(() => Promise.reject(new Error("Network error"))) as unknown as typeof fetch
// #when current version is 3.0.0-beta.3
const result = await getPluginNameWithVersion("3.0.0-beta.3")
// #then should preserve prerelease channel
expect(result).toBe("oh-my-opencode@beta")
})
test("returns bare package name when npm returns non-ok response for stable version", async () => {
// #given npm returns 404
globalThis.fetch = mock(() =>
Promise.resolve({
ok: false,
status: 404,
} as Response)
) as unknown as typeof fetch
// #when current version is 2.14.0
const result = await getPluginNameWithVersion("2.14.0")
// #then should fall back to bare package entry
expect(result).toBe("oh-my-opencode")
})
test("prioritizes latest over other tags when version matches multiple", async () => {
// #given version matches both latest and beta (during release promotion)
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ beta: "3.0.0", latest: "3.0.0", next: "3.1.0-alpha.1" }),
} as Response)
) as unknown as typeof fetch
// #when current version matches both
const result = await getPluginNameWithVersion("3.0.0")
// #then should prioritize @latest
expect(result).toBe("oh-my-opencode@latest")
})
})
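The tag-resolution priority these tests exercise — `@latest` wins, then any exact dist-tag match, then the prerelease channel inferred from the version string — can be sketched like this. It is a simplification of the behavior the suite describes, not the actual `config-manager` code.

```typescript
// Sketch of dist-tag resolution: prefer @latest, then any tag whose
// version matches exactly, then preserve the prerelease channel (beta,
// next, ...) when tags are unavailable or stale.
function resolveDistTag(
  version: string,
  tags: Record<string, string> | null,
): string {
  const name = "oh-my-opencode"
  if (tags) {
    if (tags.latest === version) return `${name}@latest`
    const match = Object.entries(tags).find(([, v]) => v === version)
    if (match) return `${name}@${match[0]}`
  }
  // Fall back to the channel encoded in the version itself, e.g.
  // "3.0.0-beta.2" -> "@beta"; stable versions get the bare name.
  const channel = version.match(/-([a-z]+)/)?.[1]
  return channel ? `${name}@${channel}` : name
}
```

This matches the "prioritizes latest over other tags" case because the `@latest` check runs before the generic tag scan.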
describe("fetchNpmDistTags", () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test("returns dist-tags on success", async () => {
// #given npm returns dist-tags
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ latest: "2.14.0", beta: "3.0.0-beta.3" }),
} as Response)
) as unknown as typeof fetch
// #when fetching dist-tags
const result = await fetchNpmDistTags("oh-my-opencode")
// #then should return the tags
expect(result).toEqual({ latest: "2.14.0", beta: "3.0.0-beta.3" })
})
test("returns null on network failure", async () => {
// #given network failure
globalThis.fetch = mock(() => Promise.reject(new Error("Network error"))) as unknown as typeof fetch
// #when fetching dist-tags
const result = await fetchNpmDistTags("oh-my-opencode")
// #then should return null
expect(result).toBeNull()
})
test("returns null on non-ok response", async () => {
// #given npm returns 404
globalThis.fetch = mock(() =>
Promise.resolve({
ok: false,
status: 404,
} as Response)
) as unknown as typeof fetch
// #when fetching dist-tags
const result = await fetchNpmDistTags("oh-my-opencode")
// #then should return null
expect(result).toBeNull()
})
})
describe("generateOmoConfig - model fallback system", () => {
test("uses github-copilot sonnet fallback when only copilot available", () => {
// #given user has only copilot (no max plan)
const config: InstallConfig = {
hasClaude: false,
isMax20: false,
hasOpenAI: false,
hasGemini: false,
hasCopilot: true,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
const result = generateOmoConfig(config)
// #then Sisyphus uses Copilot (OR logic - copilot is in claude-opus-4-6 providers)
expect((result.agents as Record<string, { model: string }>).sisyphus.model).toBe("github-copilot/claude-opus-4.6")
})
test("uses ultimate fallback when no providers configured", () => {
// #given user has no providers
const config: InstallConfig = {
hasClaude: false,
isMax20: false,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
const result = generateOmoConfig(config)
// #then Sisyphus is omitted (requires all fallback providers)
expect(result.$schema).toBe("https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json")
expect((result.agents as Record<string, { model: string }>).sisyphus).toBeUndefined()
})
test("uses ZAI model for librarian when Z.ai is available", () => {
// #given user has Z.ai and Claude max20
const config: InstallConfig = {
hasClaude: true,
isMax20: true,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: true,
hasKimiForCoding: false,
}
// #when generating config
const result = generateOmoConfig(config)
// #then librarian should use ZAI model
expect((result.agents as Record<string, { model: string }>).librarian.model).toBe("zai-coding-plan/glm-4.7")
// #then Sisyphus uses Claude (OR logic)
expect((result.agents as Record<string, { model: string }>).sisyphus.model).toBe("anthropic/claude-opus-4-6")
})
test("uses native OpenAI models when only ChatGPT available", () => {
// #given user has only ChatGPT subscription
const config: InstallConfig = {
hasClaude: false,
isMax20: false,
hasOpenAI: true,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
const result = generateOmoConfig(config)
// #then Sisyphus resolves to gpt-5.4 medium (openai is now in sisyphus chain)
expect((result.agents as Record<string, { model: string; variant?: string }>).sisyphus.model).toBe("openai/gpt-5.4")
expect((result.agents as Record<string, { model: string; variant?: string }>).sisyphus.variant).toBe("medium")
// #then Oracle should use native OpenAI (first fallback entry)
expect((result.agents as Record<string, { model: string }>).oracle.model).toBe("openai/gpt-5.4")
// #then multimodal-looker should use native OpenAI (first fallback entry is gpt-5.4)
expect((result.agents as Record<string, { model: string }>)["multimodal-looker"].model).toBe("openai/gpt-5.4")
})
test("uses haiku for explore when Claude max20", () => {
// #given user has Claude max20
const config: InstallConfig = {
hasClaude: true,
isMax20: true,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
const result = generateOmoConfig(config)
// #then explore should use haiku (max20 plan uses Claude quota)
expect((result.agents as Record<string, { model: string }>).explore.model).toBe("anthropic/claude-haiku-4-5")
})
test("uses haiku for explore regardless of max20 flag", () => {
// #given user has Claude but not max20
const config: InstallConfig = {
hasClaude: true,
isMax20: false,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
}
// #when generating config
const result = generateOmoConfig(config)
// #then explore should use haiku (isMax20 doesn't affect explore anymore)
expect((result.agents as Record<string, { model: string }>).explore.model).toBe("anthropic/claude-haiku-4-5")
})
})

View File

@@ -41,37 +41,39 @@ export async function addPluginToOpenCodeConfig(currentVersion: string): Promise
const config = parseResult.config
const plugins = config.plugin ?? []
// Check for existing plugin (either current or legacy name)
const currentNameIndex = plugins.findIndex(
const canonicalEntries = plugins.filter(
(plugin) => plugin === PLUGIN_NAME || plugin.startsWith(`${PLUGIN_NAME}@`)
)
const legacyNameIndex = plugins.findIndex(
const legacyEntries = plugins.filter(
(plugin) => plugin === LEGACY_PLUGIN_NAME || plugin.startsWith(`${LEGACY_PLUGIN_NAME}@`)
)
const otherPlugins = plugins.filter(
(plugin) => !(plugin === PLUGIN_NAME || plugin.startsWith(`${PLUGIN_NAME}@`))
&& !(plugin === LEGACY_PLUGIN_NAME || plugin.startsWith(`${LEGACY_PLUGIN_NAME}@`))
)
// If either name exists, update to new name
if (currentNameIndex !== -1) {
if (plugins[currentNameIndex] === pluginEntry) {
return { success: true, configPath: path }
}
plugins[currentNameIndex] = pluginEntry
} else if (legacyNameIndex !== -1) {
// Upgrade legacy name to new name
plugins[legacyNameIndex] = pluginEntry
const normalizedPlugins = [...otherPlugins]
if (canonicalEntries.length > 0) {
normalizedPlugins.push(canonicalEntries[0])
} else if (legacyEntries.length > 0) {
const versionMatch = legacyEntries[0].match(/@(.+)$/)
const preservedVersion = versionMatch ? versionMatch[1] : null
normalizedPlugins.push(preservedVersion ? `${PLUGIN_NAME}@${preservedVersion}` : pluginEntry)
} else {
plugins.push(pluginEntry)
normalizedPlugins.push(pluginEntry)
}
config.plugin = plugins
config.plugin = normalizedPlugins
if (format === "jsonc") {
const content = readFileSync(path, "utf-8")
const pluginArrayRegex = /"plugin"\s*:\s*\[([\s\S]*?)\]/
const pluginArrayRegex = /((?:"plugin"|plugin)\s*:\s*)\[([\s\S]*?)\]/
const match = content.match(pluginArrayRegex)
if (match) {
const formattedPlugins = plugins.map((p) => `"${p}"`).join(",\n ")
const newContent = content.replace(pluginArrayRegex, `"plugin": [\n ${formattedPlugins}\n ]`)
const formattedPlugins = normalizedPlugins.map((p) => `"${p}"`).join(",\n ")
const newContent = content.replace(pluginArrayRegex, `$1[\n ${formattedPlugins}\n ]`)
writeFileSync(path, newContent)
} else {
const newContent = content.replace(/(\{)/, `$1\n "plugin": ["${pluginEntry}"],`)
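The filter-and-rebuild logic this hunk introduces can be sketched standalone. Assumptions: the legacy package name below is a placeholder (the real `LEGACY_PLUGIN_NAME` constant is defined elsewhere), and the helper shape is invented for illustration.

```typescript
const PLUGIN_NAME = "oh-my-opencode"
const LEGACY_PLUGIN_NAME = "legacy-plugin-name" // placeholder for the real constant

// Sketch of the normalization: drop duplicate canonical/legacy entries,
// keep one canonical entry, and rename a legacy entry while preserving
// any pinned "@version" suffix.
function normalizePlugins(plugins: string[], pluginEntry: string): string[] {
  const matches = (name: string) => (p: string) =>
    p === name || p.startsWith(`${name}@`)
  const canonical = plugins.filter(matches(PLUGIN_NAME))
  const legacy = plugins.filter(matches(LEGACY_PLUGIN_NAME))
  const others = plugins.filter(
    (p) => !matches(PLUGIN_NAME)(p) && !matches(LEGACY_PLUGIN_NAME)(p),
  )
  const out = [...others]
  if (canonical.length > 0) {
    out.push(canonical[0]) // keep the first canonical entry, drop duplicates
  } else if (legacy.length > 0) {
    const version = legacy[0].match(/@(.+)$/)?.[1] ?? null
    out.push(version ? `${PLUGIN_NAME}@${version}` : pluginEntry)
  } else {
    out.push(pluginEntry)
  }
  return out
}
```

The widened regex in the same hunk, `/((?:"plugin"|plugin)\s*:\s*)\[([\s\S]*?)\]/` with a `$1` back-reference, serves a related purpose: it keeps an unquoted JSONC `plugin:` key spelled exactly as the user wrote it instead of rewriting it as `"plugin":`.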

View File

@@ -0,0 +1,142 @@
/// <reference types="bun-types" />
import { describe, expect, test } from "bun:test"
import { generateOmoConfig } from "../config-manager"
import type { InstallConfig } from "../types"
describe("generateOmoConfig - model fallback system", () => {
test("uses github-copilot sonnet fallback when only copilot available", () => {
//#given
const config: InstallConfig = {
hasClaude: false,
isMax20: false,
hasOpenAI: false,
hasGemini: false,
hasCopilot: true,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
hasOpencodeGo: false,
}
//#when
const result = generateOmoConfig(config)
//#then
expect([
"github-copilot/claude-opus-4.6",
"github-copilot/claude-opus-4-6",
]).toContain((result.agents as Record<string, { model: string }>).sisyphus.model)
})
test("uses ultimate fallback when no providers configured", () => {
//#given
const config: InstallConfig = {
hasClaude: false,
isMax20: false,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
hasOpencodeGo: false,
}
//#when
const result = generateOmoConfig(config)
//#then
expect(result.$schema).toBe("https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json")
expect((result.agents as Record<string, { model: string }>).sisyphus).toBeUndefined()
})
test("uses ZAI model for librarian when Z.ai is available", () => {
//#given
const config: InstallConfig = {
hasClaude: true,
isMax20: true,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: true,
hasKimiForCoding: false,
hasOpencodeGo: false,
}
//#when
const result = generateOmoConfig(config)
//#then
expect((result.agents as Record<string, { model: string }>).librarian.model).toBe("zai-coding-plan/glm-4.7")
expect((result.agents as Record<string, { model: string }>).sisyphus.model).toBe("anthropic/claude-opus-4-6")
})
test("uses native OpenAI models when only ChatGPT available", () => {
//#given
const config: InstallConfig = {
hasClaude: false,
isMax20: false,
hasOpenAI: true,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
hasOpencodeGo: false,
}
//#when
const result = generateOmoConfig(config)
//#then
expect((result.agents as Record<string, { model: string; variant?: string }>).sisyphus.model).toBe("openai/gpt-5.4")
expect((result.agents as Record<string, { model: string; variant?: string }>).sisyphus.variant).toBe("medium")
expect((result.agents as Record<string, { model: string }>).oracle.model).toBe("openai/gpt-5.4")
expect((result.agents as Record<string, { model: string }>)['multimodal-looker'].model).toBe("openai/gpt-5.4")
})
test("uses haiku for explore when Claude max20", () => {
//#given
const config: InstallConfig = {
hasClaude: true,
isMax20: true,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
hasOpencodeGo: false,
}
//#when
const result = generateOmoConfig(config)
//#then
expect((result.agents as Record<string, { model: string }>).explore.model).toBe("anthropic/claude-haiku-4-5")
})
test("uses haiku for explore regardless of max20 flag", () => {
//#given
const config: InstallConfig = {
hasClaude: true,
isMax20: false,
hasOpenAI: false,
hasGemini: false,
hasCopilot: false,
hasOpencodeZen: false,
hasZaiCodingPlan: false,
hasKimiForCoding: false,
hasOpencodeGo: false,
}
//#when
const result = generateOmoConfig(config)
//#then
expect((result.agents as Record<string, { model: string }>).explore.model).toBe("anthropic/claude-haiku-4-5")
})
})
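The fallback ordering these tests pin down can be summarized in a standalone sketch. The helper name and flag subset are illustrative only — the real logic lives in `generateOmoConfig` in `../config-manager`:

```typescript
// Hypothetical sketch of the sisyphus model fallback order implied by the
// tests above; not the actual generateOmoConfig implementation.
interface ProviderFlags {
  hasClaude: boolean
  hasOpenAI: boolean
  hasCopilot: boolean
}

function pickSisyphusModel(flags: ProviderFlags): string | undefined {
  if (flags.hasClaude) return "anthropic/claude-opus-4-6"
  if (flags.hasOpenAI) return "openai/gpt-5.4"
  if (flags.hasCopilot) return "github-copilot/claude-opus-4-6"
  return undefined // ultimate fallback: omit the agent entirely
}
```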


@@ -0,0 +1,56 @@
/// <reference types="bun-types" />
import { afterEach, describe, expect, mock, test } from "bun:test"
import { fetchNpmDistTags } from "../config-manager"
describe("fetchNpmDistTags", () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test("returns dist-tags on success", async () => {
//#given
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ latest: "3.13.1", beta: "3.14.0-beta.1" }),
} as Response)
) as unknown as typeof fetch
//#when
const result = await fetchNpmDistTags("oh-my-openagent")
//#then
expect(result).toEqual({ latest: "3.13.1", beta: "3.14.0-beta.1" })
})
test("returns null on network failure", async () => {
//#given
globalThis.fetch = mock(() => Promise.reject(new Error("Network error"))) as unknown as typeof fetch
//#when
const result = await fetchNpmDistTags("oh-my-openagent")
//#then
expect(result).toBeNull()
})
test("returns null on non-ok response", async () => {
//#given
globalThis.fetch = mock(() =>
Promise.resolve({
ok: false,
status: 404,
} as Response)
) as unknown as typeof fetch
//#when
const result = await fetchNpmDistTags("oh-my-openagent")
//#then
expect(result).toBeNull()
})
})
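An implementation consistent with these three tests would be shaped roughly as follows. This is a hypothetical sketch — the real `fetchNpmDistTags` lives in `../config-manager`, and the registry URL here is an assumption:

```typescript
// Hypothetical sketch of fetchNpmDistTags matching the tests above:
// dist-tags on success, null on non-ok responses and network errors.
type DistTags = Record<string, string>

async function fetchNpmDistTags(packageName: string): Promise<DistTags | null> {
  try {
    const response = await fetch(`https://registry.npmjs.org/-/package/${packageName}/dist-tags`)
    if (!response.ok) return null
    return (await response.json()) as DistTags
  } catch {
    // Network failures degrade to null rather than throwing.
    return null
  }
}
```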


@@ -28,10 +28,9 @@ describe("detectCurrentConfig - single package detection", () => {
delete process.env.OPENCODE_CONFIG_DIR
})
it("detects oh-my-opencode in plugin array", () => {
it("detects both legacy and canonical plugin entries", () => {
// given
const config = { plugin: ["oh-my-opencode"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
writeFileSync(testConfigPath, JSON.stringify({ plugin: ["oh-my-opencode", "oh-my-openagent@3.11.0"] }, null, 2) + "\n", "utf-8")
// when
const result = detectCurrentConfig()
@@ -40,58 +39,9 @@ describe("detectCurrentConfig - single package detection", () => {
expect(result.isInstalled).toBe(true)
})
it("detects oh-my-opencode with version pin", () => {
it("returns false when plugin not present with similar name", () => {
// given
const config = { plugin: ["oh-my-opencode@3.11.0"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
// when
const result = detectCurrentConfig()
// then
expect(result.isInstalled).toBe(true)
})
it("detects oh-my-openagent as installed (legacy name)", () => {
// given
const config = { plugin: ["oh-my-openagent"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
// when
const result = detectCurrentConfig()
// then
expect(result.isInstalled).toBe(true)
})
it("detects oh-my-openagent with version pin as installed (legacy name)", () => {
// given
const config = { plugin: ["oh-my-openagent@3.11.0"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
// when
const result = detectCurrentConfig()
// then
expect(result.isInstalled).toBe(true)
})
it("returns false when plugin not present", () => {
// given
const config = { plugin: ["some-other-plugin"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
// when
const result = detectCurrentConfig()
// then
expect(result.isInstalled).toBe(false)
})
it("returns false when plugin not present (even with similar name)", () => {
// given - not exactly oh-my-openagent
const config = { plugin: ["oh-my-openagent-extra"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
writeFileSync(testConfigPath, JSON.stringify({ plugin: ["oh-my-openagent-extra"] }, null, 2) + "\n", "utf-8")
// when
const result = detectCurrentConfig()
@@ -103,11 +53,7 @@ describe("detectCurrentConfig - single package detection", () => {
it("detects OpenCode Go from the existing omo config", () => {
// given
writeFileSync(testConfigPath, JSON.stringify({ plugin: ["oh-my-opencode"] }, null, 2) + "\n", "utf-8")
writeFileSync(
testOmoConfigPath,
JSON.stringify({ agents: { atlas: { model: "opencode-go/kimi-k2.5" } } }, null, 2) + "\n",
"utf-8",
)
writeFileSync(testOmoConfigPath, JSON.stringify({ agents: { atlas: { model: "opencode-go/kimi-k2.5" } } }, null, 2) + "\n", "utf-8")
// when
const result = detectCurrentConfig()
@@ -137,10 +83,9 @@ describe("addPluginToOpenCodeConfig - single package writes", () => {
delete process.env.OPENCODE_CONFIG_DIR
})
it("keeps oh-my-opencode when it already exists", async () => {
it("writes canonical plugin entry for new installs", async () => {
// given
const config = { plugin: ["oh-my-opencode"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
writeFileSync(testConfigPath, JSON.stringify({}, null, 2) + "\n", "utf-8")
// when
const result = await addPluginToOpenCodeConfig("3.11.0")
@@ -148,13 +93,12 @@ describe("addPluginToOpenCodeConfig - single package writes", () => {
// then
expect(result.success).toBe(true)
const savedConfig = JSON.parse(readFileSync(testConfigPath, "utf-8"))
expect(savedConfig.plugin).toContain("oh-my-opencode")
expect(savedConfig.plugin).toEqual(["oh-my-openagent"])
})
it("replaces version-pinned oh-my-opencode@X.Y.Z", async () => {
it("upgrades a bare legacy plugin entry to canonical", async () => {
// given
const config = { plugin: ["oh-my-opencode@3.10.0"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
writeFileSync(testConfigPath, JSON.stringify({ plugin: ["oh-my-opencode"] }, null, 2) + "\n", "utf-8")
// when
const result = await addPluginToOpenCodeConfig("3.11.0")
@@ -162,14 +106,12 @@ describe("addPluginToOpenCodeConfig - single package writes", () => {
// then
expect(result.success).toBe(true)
const savedConfig = JSON.parse(readFileSync(testConfigPath, "utf-8"))
expect(savedConfig.plugin).toContain("oh-my-opencode")
expect(savedConfig.plugin).not.toContain("oh-my-opencode@3.10.0")
expect(savedConfig.plugin).toEqual(["oh-my-openagent"])
})
it("recognizes oh-my-openagent as already installed (legacy name)", async () => {
it("upgrades a version-pinned legacy entry to canonical", async () => {
// given
const config = { plugin: ["oh-my-openagent"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
writeFileSync(testConfigPath, JSON.stringify({ plugin: ["oh-my-opencode@3.10.0"] }, null, 2) + "\n", "utf-8")
// when
const result = await addPluginToOpenCodeConfig("3.11.0")
@@ -177,15 +119,12 @@ describe("addPluginToOpenCodeConfig - single package writes", () => {
// then
expect(result.success).toBe(true)
const savedConfig = JSON.parse(readFileSync(testConfigPath, "utf-8"))
// Should upgrade to new name
expect(savedConfig.plugin).toContain("oh-my-opencode")
expect(savedConfig.plugin).not.toContain("oh-my-openagent")
expect(savedConfig.plugin).toEqual(["oh-my-openagent@3.10.0"])
})
it("replaces version-pinned oh-my-openagent@X.Y.Z with new name", async () => {
it("removes stale legacy entry when canonical and legacy entries both exist", async () => {
// given
const config = { plugin: ["oh-my-openagent@3.10.0"] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
writeFileSync(testConfigPath, JSON.stringify({ plugin: ["oh-my-openagent", "oh-my-opencode"] }, null, 2) + "\n", "utf-8")
// when
const result = await addPluginToOpenCodeConfig("3.11.0")
@@ -193,15 +132,12 @@ describe("addPluginToOpenCodeConfig - single package writes", () => {
// then
expect(result.success).toBe(true)
const savedConfig = JSON.parse(readFileSync(testConfigPath, "utf-8"))
// Legacy should be replaced with new name
expect(savedConfig.plugin).toContain("oh-my-opencode")
expect(savedConfig.plugin).not.toContain("oh-my-openagent")
expect(savedConfig.plugin).toEqual(["oh-my-openagent"])
})
it("adds new plugin when none exists", async () => {
it("preserves a canonical entry when it already exists", async () => {
// given
const config = {}
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
writeFileSync(testConfigPath, JSON.stringify({ plugin: ["oh-my-openagent@3.10.0"] }, null, 2) + "\n", "utf-8")
// when
const result = await addPluginToOpenCodeConfig("3.11.0")
@@ -209,20 +145,21 @@ describe("addPluginToOpenCodeConfig - single package writes", () => {
// then
expect(result.success).toBe(true)
const savedConfig = JSON.parse(readFileSync(testConfigPath, "utf-8"))
expect(savedConfig.plugin).toContain("oh-my-opencode")
expect(savedConfig.plugin).toEqual(["oh-my-openagent@3.10.0"])
})
it("adds plugin when plugin array is empty", async () => {
it("rewrites quoted jsonc plugin field in place", async () => {
// given
const config = { plugin: [] }
writeFileSync(testConfigPath, JSON.stringify(config, null, 2) + "\n", "utf-8")
testConfigPath = join(testConfigDir, "opencode.jsonc")
writeFileSync(testConfigPath, '{\n "plugin": ["oh-my-opencode"]\n}\n', "utf-8")
// when
const result = await addPluginToOpenCodeConfig("3.11.0")
// then
expect(result.success).toBe(true)
const savedConfig = JSON.parse(readFileSync(testConfigPath, "utf-8"))
expect(savedConfig.plugin).toContain("oh-my-opencode")
const savedContent = readFileSync(testConfigPath, "utf-8")
expect(savedContent.includes('"plugin": [\n "oh-my-openagent"\n ]')).toBe(true)
expect(savedContent.includes("oh-my-opencode")).toBe(false)
})
})


@@ -0,0 +1,56 @@
/// <reference types="bun-types" />
import { afterEach, describe, expect, mock, test } from "bun:test"
import { getPluginNameWithVersion } from "../config-manager"
describe("getPluginNameWithVersion", () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test("returns the canonical latest tag when current version matches latest", async () => {
//#given
globalThis.fetch = mock(() =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ latest: "3.13.1", beta: "3.14.0-beta.1" }),
} as Response)
) as unknown as typeof fetch
//#when
const result = await getPluginNameWithVersion("3.13.1")
//#then
expect(result).toBe("oh-my-openagent@latest")
})
test("preserves the canonical prerelease channel when fetch fails", async () => {
//#given
globalThis.fetch = mock(() => Promise.reject(new Error("Network error"))) as unknown as typeof fetch
//#when
const result = await getPluginNameWithVersion("3.14.0-beta.1")
//#then
expect(result).toBe("oh-my-openagent@beta")
})
test("returns the canonical bare package name for stable fallback", async () => {
//#given
globalThis.fetch = mock(() =>
Promise.resolve({
ok: false,
status: 404,
} as Response)
) as unknown as typeof fetch
//#when
const result = await getPluginNameWithVersion("3.13.1")
//#then
expect(result).toBe("oh-my-openagent")
})
})
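The tag-resolution rules these tests exercise can be sketched as a pure function. Names and the prerelease-channel heuristic are illustrative assumptions, not the actual config-manager internals:

```typescript
// Hypothetical sketch: pick a dist-tag entry when the current version matches
// a prioritized tag; otherwise fall back by release channel.
const PLUGIN_NAME = "oh-my-openagent"
const PRIORITIZED_TAGS = ["latest", "beta", "next"] as const

function resolvePluginEntry(distTags: Record<string, string> | null, currentVersion: string): string {
  if (distTags) {
    for (const tag of PRIORITIZED_TAGS) {
      if (distTags[tag] === currentVersion) return `${PLUGIN_NAME}@${tag}`
    }
  }
  // Fallback when dist-tags are unavailable: keep a prerelease on its channel
  // (e.g. 3.14.0-beta.1 -> @beta); stable versions use the bare name.
  const prerelease = currentVersion.split("-")[1]
  if (prerelease) return `${PLUGIN_NAME}@${prerelease.split(".")[0]}`
  return PLUGIN_NAME
}
```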


@@ -1,6 +1,7 @@
import { PLUGIN_NAME } from "../../shared"
import { fetchNpmDistTags } from "./npm-dist-tags"
const DEFAULT_PACKAGE_NAME = "oh-my-opencode"
const DEFAULT_PACKAGE_NAME = PLUGIN_NAME
const PRIORITIZED_TAGS = ["latest", "beta", "next"] as const
function getFallbackEntry(version: string, packageName: string): string {


@@ -142,48 +142,6 @@ describe("model-resolution check", () => {
snapshot: { source: "bundled-snapshot" },
})
})
it("keeps provider-prefixed overrides for transport while capability diagnostics use pattern aliases", async () => {
const { getModelResolutionInfoWithOverrides } = await import("./model-resolution")
const info = getModelResolutionInfoWithOverrides({
categories: {
"visual-engineering": { model: "google/gemini-3.1-pro-high" },
},
})
const visual = info.categories.find((category) => category.name === "visual-engineering")
expect(visual).toBeDefined()
expect(visual!.effectiveModel).toBe("google/gemini-3.1-pro-high")
expect(visual!.capabilityDiagnostics).toMatchObject({
resolutionMode: "alias-backed",
canonicalization: {
source: "pattern-alias",
ruleID: "gemini-3.1-pro-tier-alias",
},
})
})
it("keeps provider-prefixed Claude overrides for transport while capability diagnostics canonicalize to bare IDs", async () => {
const { getModelResolutionInfoWithOverrides } = await import("./model-resolution")
const info = getModelResolutionInfoWithOverrides({
agents: {
oracle: { model: "anthropic/claude-opus-4-6-thinking" },
},
})
const oracle = info.agents.find((agent) => agent.name === "oracle")
expect(oracle).toBeDefined()
expect(oracle!.effectiveModel).toBe("anthropic/claude-opus-4-6-thinking")
expect(oracle!.capabilityDiagnostics).toMatchObject({
resolutionMode: "alias-backed",
canonicalization: {
source: "pattern-alias",
ruleID: "claude-thinking-legacy-alias",
},
})
})
})
describe("checkModelResolution", () => {


@@ -1,9 +1,19 @@
/// <reference types="bun-types" />
import { beforeEach, describe, expect, it, mock } from "bun:test"
import { PLUGIN_NAME } from "../../../shared"
import type { PluginInfo } from "./system-plugin"
type SystemModule = typeof import("./system")
async function importFreshSystemModule(): Promise<SystemModule> {
return import(`./system?test=${Date.now()}-${Math.random()}`)
}
const mockFindOpenCodeBinary = mock(async () => ({ path: "/usr/local/bin/opencode" }))
const mockGetOpenCodeVersion = mock(async () => "1.0.200")
const mockCompareVersions = mock(() => true)
const mockGetPluginInfo = mock(() => ({
const mockCompareVersions = mock((_leftVersion?: string, _rightVersion?: string) => true)
const mockGetPluginInfo = mock((): PluginInfo => ({
registered: true,
entry: "oh-my-opencode",
isPinned: false,
@@ -18,7 +28,8 @@ const mockGetLoadedPluginVersion = mock(() => ({
expectedVersion: "3.0.0",
loadedVersion: "3.1.0",
}))
const mockGetLatestPluginVersion = mock(async () => null)
const mockGetLatestPluginVersion = mock(async (_currentVersion: string | null) => null as string | null)
const mockGetSuggestedInstallTag = mock(() => "latest")
mock.module("./system-binary", () => ({
findOpenCodeBinary: mockFindOpenCodeBinary,
@@ -33,10 +44,9 @@ mock.module("./system-plugin", () => ({
mock.module("./system-loaded-version", () => ({
getLoadedPluginVersion: mockGetLoadedPluginVersion,
getLatestPluginVersion: mockGetLatestPluginVersion,
getSuggestedInstallTag: mockGetSuggestedInstallTag,
}))
const { checkSystem } = await import("./system?test")
describe("system check", () => {
beforeEach(() => {
mockFindOpenCodeBinary.mockReset()
@@ -45,6 +55,7 @@ describe("system check", () => {
mockGetPluginInfo.mockReset()
mockGetLoadedPluginVersion.mockReset()
mockGetLatestPluginVersion.mockReset()
mockGetSuggestedInstallTag.mockReset()
mockFindOpenCodeBinary.mockResolvedValue({ path: "/usr/local/bin/opencode" })
mockGetOpenCodeVersion.mockResolvedValue("1.0.200")
@@ -65,10 +76,14 @@ describe("system check", () => {
loadedVersion: "3.1.0",
})
mockGetLatestPluginVersion.mockResolvedValue(null)
mockGetSuggestedInstallTag.mockReturnValue("latest")
})
describe("#given cache directory contains spaces", () => {
it("uses a quoted cache directory in mismatch fix command", async () => {
//#given
const { checkSystem } = await importFreshSystemModule()
//#when
const result = await checkSystem()
@@ -87,9 +102,11 @@ describe("system check", () => {
loadedVersion: "3.0.0-canary.1",
})
mockGetLatestPluginVersion.mockResolvedValue("3.0.0-canary.2")
mockCompareVersions.mockImplementation((leftVersion: string, rightVersion: string) => {
mockGetSuggestedInstallTag.mockReturnValue("canary")
mockCompareVersions.mockImplementation((leftVersion?: string, rightVersion?: string) => {
return !(leftVersion === "3.0.0-canary.1" && rightVersion === "3.0.0-canary.2")
})
const { checkSystem } = await importFreshSystemModule()
//#when
const result = await checkSystem()
@@ -97,8 +114,94 @@ describe("system check", () => {
//#then
const outdatedIssue = result.issues.find((issue) => issue.title === "Loaded plugin is outdated")
expect(outdatedIssue?.fix).toBe(
'Update: cd "/Users/test/Library/Caches/opencode with spaces" && bun add oh-my-opencode@canary'
`Update: cd "/Users/test/Library/Caches/opencode with spaces" && bun add ${PLUGIN_NAME}@canary`
)
})
})
describe("#given OpenCode plugin entry uses legacy package name", () => {
it("adds a warning for a bare legacy entry", async () => {
//#given
mockGetPluginInfo.mockReturnValue({
registered: true,
entry: "oh-my-opencode",
isPinned: false,
pinnedVersion: null,
configPath: null,
isLocalDev: false,
})
const { checkSystem } = await importFreshSystemModule()
//#when
const result = await checkSystem()
//#then
const legacyEntryIssue = result.issues.find((issue) => issue.title === "Using legacy package name")
expect(legacyEntryIssue?.severity).toBe("warning")
expect(legacyEntryIssue?.fix).toBe(
'Update your opencode.json plugin entry: "oh-my-opencode" → "oh-my-openagent"'
)
})
it("adds a warning for a version-pinned legacy entry", async () => {
//#given
mockGetPluginInfo.mockReturnValue({
registered: true,
entry: "oh-my-opencode@3.0.0",
isPinned: true,
pinnedVersion: "3.0.0",
configPath: null,
isLocalDev: false,
})
const { checkSystem } = await importFreshSystemModule()
//#when
const result = await checkSystem()
//#then
const legacyEntryIssue = result.issues.find((issue) => issue.title === "Using legacy package name")
expect(legacyEntryIssue?.severity).toBe("warning")
expect(legacyEntryIssue?.fix).toBe(
'Update your opencode.json plugin entry: "oh-my-opencode@3.0.0" → "oh-my-openagent@3.0.0"'
)
})
it("does not warn for a canonical plugin entry", async () => {
//#given
mockGetPluginInfo.mockReturnValue({
registered: true,
entry: PLUGIN_NAME,
isPinned: false,
pinnedVersion: null,
configPath: null,
isLocalDev: false,
})
const { checkSystem } = await importFreshSystemModule()
//#when
const result = await checkSystem()
//#then
expect(result.issues.some((issue) => issue.title === "Using legacy package name")).toBe(false)
})
it("does not warn for a local-dev legacy entry", async () => {
//#given
mockGetPluginInfo.mockReturnValue({
registered: true,
entry: "oh-my-opencode",
isPinned: false,
pinnedVersion: null,
configPath: null,
isLocalDev: true,
})
const { checkSystem } = await importFreshSystemModule()
//#when
const result = await checkSystem()
//#then
expect(result.issues.some((issue) => issue.title === "Using legacy package name")).toBe(false)
})
})
})


@@ -6,6 +6,7 @@ import { findOpenCodeBinary, getOpenCodeVersion, compareVersions } from "./syste
import { getPluginInfo } from "./system-plugin"
import { getLatestPluginVersion, getLoadedPluginVersion, getSuggestedInstallTag } from "./system-loaded-version"
import { parseJsonc } from "../../../shared"
import { PLUGIN_NAME, LEGACY_PLUGIN_NAME } from "../../../shared/plugin-identity"
function isConfigValid(configPath: string | null): boolean {
if (!configPath) return true
@@ -82,14 +83,30 @@ export async function checkSystem(): Promise<CheckResult> {
if (!pluginInfo.registered) {
issues.push({
title: "oh-my-opencode is not registered",
title: `${PLUGIN_NAME} is not registered`,
description: "Plugin entry is missing from OpenCode configuration.",
fix: "Run: bunx oh-my-opencode install",
fix: `Run: bunx ${PLUGIN_NAME} install`,
severity: "error",
affects: ["all agents"],
})
}
if (pluginInfo.entry && !pluginInfo.isLocalDev) {
const isLegacyName = pluginInfo.entry === LEGACY_PLUGIN_NAME
|| pluginInfo.entry.startsWith(`${LEGACY_PLUGIN_NAME}@`)
if (isLegacyName) {
const suggestedEntry = pluginInfo.entry.replace(LEGACY_PLUGIN_NAME, PLUGIN_NAME)
issues.push({
title: "Using legacy package name",
description: `Your opencode.json references "${LEGACY_PLUGIN_NAME}" which has been renamed to "${PLUGIN_NAME}". The old name may stop working in a future release.`,
fix: `Update your opencode.json plugin entry: "${pluginInfo.entry}" → "${suggestedEntry}"`,
severity: "warning",
affects: ["plugin loading"],
})
}
}
if (loadedInfo.expectedVersion && loadedInfo.loadedVersion && loadedInfo.expectedVersion !== loadedInfo.loadedVersion) {
issues.push({
title: "Loaded plugin version mismatch",
@@ -108,7 +125,7 @@ export async function checkSystem(): Promise<CheckResult> {
issues.push({
title: "Loaded plugin is outdated",
description: `Loaded ${systemInfo.loadedVersion}, latest ${latestVersion}.`,
fix: `Update: cd "${loadedInfo.cacheDir}" && bun add oh-my-opencode@${installTag}`,
fix: `Update: cd "${loadedInfo.cacheDir}" && bun add ${PLUGIN_NAME}@${installTag}`,
severity: "warning",
affects: ["plugin features"],
})
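The legacy-entry check added in this hunk can be isolated as a small sketch (constants inlined here for illustration; the real values come from `shared/plugin-identity`):

```typescript
// Standalone sketch of the legacy plugin-entry detection above: match a bare
// or version-pinned legacy entry and suggest the canonical replacement.
const PLUGIN_NAME = "oh-my-openagent"
const LEGACY_PLUGIN_NAME = "oh-my-opencode"

function suggestCanonicalEntry(entry: string): string | null {
  const isLegacy = entry === LEGACY_PLUGIN_NAME || entry.startsWith(`${LEGACY_PLUGIN_NAME}@`)
  if (!isLegacy) return null
  // String.replace with a string pattern substitutes only the first
  // occurrence, which is all a plugin entry needs.
  return entry.replace(LEGACY_PLUGIN_NAME, PLUGIN_NAME)
}
```

Note the exact-match-or-`@`-prefix guard: similar names such as `oh-my-opencode-extra` are deliberately not treated as legacy entries.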


@@ -1,4 +1,5 @@
import color from "picocolors"
import { PLUGIN_NAME } from "../../shared"
export const SYMBOLS = {
check: color.green("\u2713"),
@@ -38,6 +39,6 @@ export const EXIT_CODES = {
export const MIN_OPENCODE_VERSION = "1.0.150"
export const PACKAGE_NAME = "oh-my-opencode"
export const PACKAGE_NAME = PLUGIN_NAME
export const OPENCODE_BINARIES = ["opencode", "opencode-desktop"] as const


@@ -113,10 +113,10 @@ describe("install CLI - binary check behavior", () => {
const configPath = join(tempDir, "opencode.json")
expect(existsSync(configPath)).toBe(true)
// then opencode.json should have plugin entry
const config = JSON.parse(readFileSync(configPath, "utf-8"))
expect(config.plugin).toBeDefined()
expect(config.plugin.some((p: string) => p.includes("oh-my-opencode"))).toBe(true)
expect(config.plugin.some((p: string) => p.includes("oh-my-openagent"))).toBe(true)
expect(config.plugin.some((p: string) => p.includes("oh-my-opencode"))).toBe(false)
// then exit code should be 0 (success)
expect(exitCode).toBe(0)


@@ -458,7 +458,7 @@ describe("generateModelConfig", () => {
const result = generateModelConfig(config)
// #then
expect(result.agents?.hephaestus?.model).toBe("openai/gpt-5.3-codex")
expect(result.agents?.hephaestus?.model).toBe("openai/gpt-5.4")
expect(result.agents?.hephaestus?.variant).toBe("medium")
})
@@ -484,7 +484,7 @@ describe("generateModelConfig", () => {
const result = generateModelConfig(config)
// #then
expect(result.agents?.hephaestus?.model).toBe("opencode/gpt-5.3-codex")
expect(result.agents?.hephaestus?.model).toBe("opencode/gpt-5.4")
expect(result.agents?.hephaestus?.variant).toBe("medium")
})


@@ -1,5 +1,6 @@
import * as p from "@clack/prompts"
import color from "picocolors"
import { PLUGIN_NAME } from "../shared"
import type { InstallArgs } from "./types"
import {
addPluginToOpenCodeConfig,
@@ -43,7 +44,7 @@ export async function runTuiInstaller(args: InstallArgs, version: string): Promi
const config = await promptInstallConfig(detected)
if (!config) return 1
spinner.start("Adding oh-my-opencode to OpenCode config")
spinner.start(`Adding ${PLUGIN_NAME} to OpenCode config`)
const pluginResult = await addPluginToOpenCodeConfig(version)
if (!pluginResult.success) {
spinner.stop(`Failed to add plugin: ${pluginResult.error}`)
@@ -52,7 +53,7 @@ export async function runTuiInstaller(args: InstallArgs, version: string): Promi
}
spinner.stop(`Plugin added to ${color.cyan(pluginResult.configPath)}`)
spinner.start("Writing oh-my-opencode configuration")
spinner.start(`Writing ${PLUGIN_NAME} configuration`)
const omoResult = writeOmoConfig(config)
if (!omoResult.success) {
spinner.stop(`Failed to write config: ${omoResult.error}`)


@@ -0,0 +1,125 @@
import { execFileSync } from "node:child_process"
import { afterEach, beforeEach, describe, expect, it } from "bun:test"
import { mkdirSync, rmSync, writeFileSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"
import { loadOpencodeGlobalCommands, loadOpencodeProjectCommands } from "./loader"
const TEST_DIR = join(tmpdir(), `claude-code-command-loader-${Date.now()}`)
function writeCommand(directory: string, name: string, description: string): void {
mkdirSync(directory, { recursive: true })
writeFileSync(
join(directory, `${name}.md`),
`---\ndescription: ${description}\n---\nRun ${name}.\n`,
)
}
describe("claude-code command loader", () => {
let originalOpencodeConfigDir: string | undefined
beforeEach(() => {
mkdirSync(TEST_DIR, { recursive: true })
originalOpencodeConfigDir = process.env.OPENCODE_CONFIG_DIR
})
afterEach(() => {
if (originalOpencodeConfigDir === undefined) {
delete process.env.OPENCODE_CONFIG_DIR
} else {
process.env.OPENCODE_CONFIG_DIR = originalOpencodeConfigDir
}
rmSync(TEST_DIR, { recursive: true, force: true })
})
it("#given a parent .opencode/commands directory #when loadOpencodeProjectCommands is called from child directory #then it loads the ancestor command", async () => {
// given
const projectDir = join(TEST_DIR, "project")
const childDir = join(projectDir, "apps", "desktop")
writeCommand(join(projectDir, ".opencode", "commands"), "ancestor", "Ancestor command")
// when
const commands = await loadOpencodeProjectCommands(childDir)
// then
expect(commands.ancestor?.description).toBe("(opencode-project) Ancestor command")
})
it("#given a .opencode/command directory #when loadOpencodeProjectCommands is called #then it loads the singular alias directory", async () => {
// given
writeCommand(join(TEST_DIR, ".opencode", "command"), "singular", "Singular command")
// when
const commands = await loadOpencodeProjectCommands(TEST_DIR)
// then
expect(commands.singular?.description).toBe("(opencode-project) Singular command")
})
it("#given duplicate project command names across ancestors #when loadOpencodeProjectCommands is called #then the nearest directory wins", async () => {
// given
const projectRoot = join(TEST_DIR, "project")
const childDir = join(projectRoot, "apps", "desktop")
const ancestorDir = join(TEST_DIR, ".opencode", "commands")
const projectDir = join(projectRoot, ".opencode", "commands")
writeCommand(ancestorDir, "duplicate", "Ancestor command")
writeCommand(projectDir, "duplicate", "Nearest command")
// when
const commands = await loadOpencodeProjectCommands(childDir)
// then
expect(commands.duplicate?.description).toBe("(opencode-project) Nearest command")
})
it("#given a global .opencode/commands directory #when loadOpencodeGlobalCommands is called #then it loads the plural alias directory", async () => {
// given
const opencodeConfigDir = join(TEST_DIR, "opencode-config")
process.env.OPENCODE_CONFIG_DIR = opencodeConfigDir
writeCommand(join(opencodeConfigDir, "commands"), "global-plural", "Global plural command")
// when
const commands = await loadOpencodeGlobalCommands()
// then
expect(commands["global-plural"]?.description).toBe("(opencode) Global plural command")
})
it("#given duplicate global command names across profile and parent dirs #when loadOpencodeGlobalCommands is called #then the profile dir wins", async () => {
// given
const opencodeRootDir = join(TEST_DIR, "opencode-root")
const profileConfigDir = join(opencodeRootDir, "profiles", "codex")
process.env.OPENCODE_CONFIG_DIR = profileConfigDir
writeCommand(join(opencodeRootDir, "commands"), "duplicate-global", "Parent global command")
writeCommand(join(profileConfigDir, "commands"), "duplicate-global", "Profile global command")
// when
const commands = await loadOpencodeGlobalCommands()
// then
expect(commands["duplicate-global"]?.description).toBe("(opencode) Profile global command")
})
it("#given nested project opencode commands in a worktree #when loadOpencodeProjectCommands is called #then it preserves slash names and stops at the worktree root", async () => {
// given
const repositoryDir = join(TEST_DIR, "repo")
const nestedDirectory = join(repositoryDir, "packages", "app", "src")
mkdirSync(nestedDirectory, { recursive: true })
execFileSync("git", ["init"], {
cwd: repositoryDir,
stdio: ["ignore", "ignore", "ignore"],
})
writeCommand(join(repositoryDir, ".opencode", "commands", "deploy"), "staging", "Deploy staging")
writeCommand(join(repositoryDir, ".opencode", "command"), "release", "Release command")
writeCommand(join(TEST_DIR, ".opencode", "commands"), "outside", "Outside command")
// when
const commands = await loadOpencodeProjectCommands(nestedDirectory)
// then
expect(commands["deploy/staging"]?.description).toBe("(opencode-project) Deploy staging")
expect(commands.release?.description).toBe("(opencode-project) Release command")
expect(commands.outside).toBeUndefined()
expect(commands["deploy:staging"]).toBeUndefined()
})
})


@@ -3,7 +3,11 @@ import { join, basename } from "path"
import { parseFrontmatter } from "../../shared/frontmatter"
import { sanitizeModelField } from "../../shared/model-sanitizer"
import { isMarkdownFile } from "../../shared/file-utils"
import { getClaudeConfigDir, getOpenCodeConfigDir } from "../../shared"
import {
findProjectOpencodeCommandDirs,
getClaudeConfigDir,
getOpenCodeCommandDirs,
} from "../../shared"
import { log } from "../../shared/logger"
import type { CommandScope, CommandDefinition, CommandFrontmatter, LoadedCommand } from "./types"
@@ -46,7 +50,7 @@ async function loadCommandsFromDir(
if (entry.isDirectory()) {
if (entry.name.startsWith(".")) continue
const subDirPath = join(commandsDir, entry.name)
const subPrefix = prefix ? `${prefix}:${entry.name}` : entry.name
const subPrefix = prefix ? `${prefix}/${entry.name}` : entry.name
const subCommands = await loadCommandsFromDir(subDirPath, scope, visited, subPrefix)
commands.push(...subCommands)
continue
@@ -56,7 +60,7 @@ async function loadCommandsFromDir(
const commandPath = join(commandsDir, entry.name)
const baseCommandName = basename(entry.name, ".md")
const commandName = prefix ? `${prefix}:${baseCommandName}` : baseCommandName
const commandName = prefix ? `${prefix}/${baseCommandName}` : baseCommandName
try {
const content = await fs.readFile(commandPath, "utf-8")
@@ -99,9 +103,25 @@ $ARGUMENTS
return commands
}
function deduplicateLoadedCommandsByName(commands: LoadedCommand[]): LoadedCommand[] {
const seen = new Set<string>()
const deduplicatedCommands: LoadedCommand[] = []
for (const command of commands) {
if (seen.has(command.name)) {
continue
}
seen.add(command.name)
deduplicatedCommands.push(command)
}
return deduplicatedCommands
}
function commandsToRecord(commands: LoadedCommand[]): Record<string, CommandDefinition> {
const result: Record<string, CommandDefinition> = {}
for (const cmd of commands) {
for (const cmd of deduplicateLoadedCommandsByName(commands)) {
const { name: _name, argumentHint: _argumentHint, ...openCodeCompatible } = cmd.definition
result[cmd.name] = openCodeCompatible as CommandDefinition
}
@@ -121,16 +141,21 @@ export async function loadProjectCommands(directory?: string): Promise<Record<st
}
export async function loadOpencodeGlobalCommands(): Promise<Record<string, CommandDefinition>> {
const configDir = getOpenCodeConfigDir({ binary: "opencode" })
const opencodeCommandsDir = join(configDir, "command")
const commands = await loadCommandsFromDir(opencodeCommandsDir, "opencode")
return commandsToRecord(commands)
const opencodeCommandDirs = getOpenCodeCommandDirs({ binary: "opencode" })
const allCommands = await Promise.all(
opencodeCommandDirs.map((commandsDir) => loadCommandsFromDir(commandsDir, "opencode")),
)
return commandsToRecord(allCommands.flat())
}
export async function loadOpencodeProjectCommands(directory?: string): Promise<Record<string, CommandDefinition>> {
const opencodeProjectDir = join(directory ?? process.cwd(), ".opencode", "command")
const commands = await loadCommandsFromDir(opencodeProjectDir, "opencode-project")
return commandsToRecord(commands)
const opencodeProjectDirs = findProjectOpencodeCommandDirs(directory ?? process.cwd())
const allCommands = await Promise.all(
opencodeProjectDirs.map((commandsDir) =>
loadCommandsFromDir(commandsDir, "opencode-project"),
),
)
return commandsToRecord(allCommands.flat())
}
export async function loadAllCommands(directory?: string): Promise<Record<string, CommandDefinition>> {
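The hunk above switches the command-name separator from `:` to `/` and routes every merge through `deduplicateLoadedCommandsByName`, so the first occurrence of a name wins. A standalone sketch of that first-wins behavior (the `source` field is illustrative, not part of the real `LoadedCommand` type):

```typescript
// Hypothetical minimal shape standing in for LoadedCommand in the diff above.
interface NamedCommand {
  name: string
  source: string
}

// First occurrence wins: commands from higher-precedence sources, listed
// earlier in the input, shadow later entries with the same name.
function dedupeByName(commands: NamedCommand[]): NamedCommand[] {
  const seen = new Set<string>()
  const result: NamedCommand[] = []
  for (const command of commands) {
    if (seen.has(command.name)) continue
    seen.add(command.name)
    result.push(command)
  }
  return result
}

const merged = dedupeByName([
  { name: "deploy", source: "project" },
  { name: "deploy", source: "global" }, // shadowed by the project entry
  { name: "lint", source: "global" },
])
console.log(merged.map((c) => `${c.name}:${c.source}`).join(","))
// deploy:project,lint:global
```

Because each directory's commands are pushed in discovery order before deduplication, precedence is decided purely by the order the loaders append their results.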

View File

@@ -101,4 +101,39 @@ describe("discoverInstalledPlugins", () => {
expect(discovered.plugins).toHaveLength(1)
expect(discovered.plugins[0]?.name).toBe("oh-my-opencode")
})
it("derives canonical package name from npm plugin keys", () => {
//#given
const pluginsHome = process.env.CLAUDE_PLUGINS_HOME as string
const installPath = join(createTemporaryDirectory("omo-plugin-install-"), "oh-my-openagent")
mkdirSync(installPath, { recursive: true })
const databasePath = join(pluginsHome, "installed_plugins.json")
writeFileSync(
databasePath,
JSON.stringify({
version: 2,
plugins: {
"oh-my-openagent@3.13.1": [
{
scope: "user",
installPath,
version: "3.13.1",
installedAt: "2026-03-26T00:00:00Z",
lastUpdated: "2026-03-26T00:00:00Z",
},
],
},
}),
"utf-8",
)
//#when
const discovered = discoverInstalledPlugins()
//#then
expect(discovered.errors).toHaveLength(0)
expect(discovered.plugins).toHaveLength(1)
expect(discovered.plugins[0]?.name).toBe("oh-my-openagent")
})
})

View File

@@ -1,7 +1,7 @@
import { afterEach, beforeEach, describe, expect, it } from "bun:test"
import { mkdirSync, rmSync, writeFileSync } from "fs"
import { join } from "path"
import { tmpdir } from "os"
import { homedir, tmpdir } from "os"
import { SkillsConfigSchema } from "../../config/schema/skills"
import { discoverConfigSourceSkills, normalizePathForGlob } from "./config-source-discovery"
@@ -69,6 +69,28 @@ describe("config source discovery", () => {
expect(names).not.toContain("skip/skipped-skill")
})
it("loads skills from ~/ sources path", async () => {
// given
const homeSkillsDir = join(homedir(), `.omo-config-source-${Date.now()}`)
writeSkill(join(homeSkillsDir, "tilde-skill"), "tilde-skill", "Loaded from tilde path")
const config = SkillsConfigSchema.parse({
sources: [{ path: `~/${homeSkillsDir.split(homedir())[1]?.replace(/^\//, "")}`, recursive: true }],
})
try {
// when
const skills = await discoverConfigSourceSkills({
config,
configDir: join(TEST_DIR, "config"),
})
// then
expect(skills.some((skill) => skill.name === "tilde-skill")).toBe(true)
} finally {
rmSync(homeSkillsDir, { recursive: true, force: true })
}
})
it("normalizes windows separators before glob matching", () => {
// given
const windowsPath = "keep\\nested\\SKILL.md"

View File

@@ -1,4 +1,5 @@
import { promises as fs } from "fs"
import { homedir } from "os"
import { dirname, extname, isAbsolute, join, relative } from "path"
import picomatch from "picomatch"
import type { SkillsConfig } from "../../config/schema"
@@ -15,6 +16,14 @@ function isHttpUrl(path: string): boolean {
}
function toAbsolutePath(path: string, configDir: string): string {
if (path === "~") {
return homedir()
}
if (path.startsWith("~/")) {
return join(homedir(), path.slice(2))
}
if (isAbsolute(path)) {
return path
}
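The new branches in `toAbsolutePath` above handle `~` and `~/...` before the absolute-path check. A self-contained sketch of the full resolution order, with an assumed `configDir` fallback for relative paths (the trailing branch is cut off in the hunk, so that part is an assumption):

```typescript
import { homedir } from "os"
import { isAbsolute, join } from "path"

// Resolution order from the diff: bare "~" -> home, "~/x" -> home-relative,
// absolute paths pass through, everything else resolves against configDir.
// Note: "~user" forms are intentionally not expanded.
function toAbsolutePath(path: string, configDir: string): string {
  if (path === "~") return homedir()
  if (path.startsWith("~/")) return join(homedir(), path.slice(2))
  if (isAbsolute(path)) return path
  return join(configDir, path)
}

console.log(toAbsolutePath("~/skills", "/etc/omo") === join(homedir(), "skills"))
// true
```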

View File

@@ -615,5 +615,92 @@ Skill body.
expect(skill).toBeDefined()
expect(skill?.scope).toBe("project")
})
it("#given a skill in ancestor .agents/skills/ #when discoverProjectAgentsSkills is called from child directory #then it discovers the ancestor skill", async () => {
// given
const skillContent = `---
name: ancestor-agent-skill
description: A skill from ancestor .agents/skills directory
---
Skill body.
`
const projectDir = join(TEST_DIR, "project")
const childDir = join(projectDir, "apps", "worker")
const agentsProjectSkillsDir = join(projectDir, ".agents", "skills")
const skillDir = join(agentsProjectSkillsDir, "ancestor-agent-skill")
mkdirSync(childDir, { recursive: true })
mkdirSync(skillDir, { recursive: true })
writeFileSync(join(skillDir, "SKILL.md"), skillContent)
// when
const { discoverProjectAgentsSkills } = await import("./loader")
const skills = await discoverProjectAgentsSkills(childDir)
const skill = skills.find((candidate) => candidate.name === "ancestor-agent-skill")
// then
expect(skill).toBeDefined()
expect(skill?.scope).toBe("project")
})
})
describe("opencode project skill discovery", () => {
it("#given a skill in ancestor .opencode/skills/ #when discoverOpencodeProjectSkills is called from child directory #then it discovers the ancestor skill", async () => {
// given
const skillContent = `---
name: ancestor-opencode-skill
description: A skill from ancestor .opencode/skills directory
---
Skill body.
`
const projectDir = join(TEST_DIR, "project")
const childDir = join(projectDir, "packages", "cli")
const skillsDir = join(projectDir, ".opencode", "skills", "ancestor-opencode-skill")
mkdirSync(childDir, { recursive: true })
mkdirSync(skillsDir, { recursive: true })
writeFileSync(join(skillsDir, "SKILL.md"), skillContent)
// when
const { discoverOpencodeProjectSkills } = await import("./loader")
const skills = await discoverOpencodeProjectSkills(childDir)
const skill = skills.find((candidate) => candidate.name === "ancestor-opencode-skill")
// then
expect(skill).toBeDefined()
expect(skill?.scope).toBe("opencode-project")
})
it("#given a skill in .opencode/skill/ #when discoverOpencodeProjectSkills is called #then it discovers the singular alias directory", async () => {
// given
const skillContent = `---
name: singular-opencode-skill
description: A skill from .opencode/skill directory
---
Skill body.
`
const singularSkillDir = join(
TEST_DIR,
".opencode",
"skill",
"singular-opencode-skill",
)
mkdirSync(singularSkillDir, { recursive: true })
writeFileSync(join(singularSkillDir, "SKILL.md"), skillContent)
// when
const { discoverOpencodeProjectSkills } = await import("./loader")
const originalCwd = process.cwd()
process.chdir(TEST_DIR)
try {
const skills = await discoverOpencodeProjectSkills()
const skill = skills.find((candidate) => candidate.name === "singular-opencode-skill")
// then
expect(skill).toBeDefined()
expect(skill?.scope).toBe("opencode-project")
} finally {
process.chdir(originalCwd)
}
})
})
})

View File

@@ -3,6 +3,11 @@ import { homedir } from "os"
import { getClaudeConfigDir } from "../../shared/claude-config-dir"
import { getOpenCodeConfigDir } from "../../shared/opencode-config-dir"
import { getOpenCodeSkillDirs } from "../../shared/opencode-command-dirs"
import {
findProjectAgentsSkillDirs,
findProjectClaudeSkillDirs,
findProjectOpencodeSkillDirs,
} from "../../shared/project-discovery-dirs"
import type { CommandDefinition } from "../claude-code-command-loader/types"
import type { LoadedSkill } from "./types"
import { skillsToCommandDefinitionRecord } from "./skill-definition-record"
@@ -16,9 +21,11 @@ export async function loadUserSkills(): Promise<Record<string, CommandDefinition
}
export async function loadProjectSkills(directory?: string): Promise<Record<string, CommandDefinition>> {
const projectSkillsDir = join(directory ?? process.cwd(), ".claude", "skills")
const skills = await loadSkillsFromDir({ skillsDir: projectSkillsDir, scope: "project" })
return skillsToCommandDefinitionRecord(skills)
const projectSkillDirs = findProjectClaudeSkillDirs(directory ?? process.cwd())
const allSkills = await Promise.all(
projectSkillDirs.map((skillsDir) => loadSkillsFromDir({ skillsDir, scope: "project" })),
)
return skillsToCommandDefinitionRecord(deduplicateSkillsByName(allSkills.flat()))
}
export async function loadOpencodeGlobalSkills(): Promise<Record<string, CommandDefinition>> {
@@ -30,8 +37,28 @@ export async function loadOpencodeGlobalSkills(): Promise<Record<string, Command
}
export async function loadOpencodeProjectSkills(directory?: string): Promise<Record<string, CommandDefinition>> {
const opencodeProjectDir = join(directory ?? process.cwd(), ".opencode", "skills")
const skills = await loadSkillsFromDir({ skillsDir: opencodeProjectDir, scope: "opencode-project" })
const opencodeProjectSkillDirs = findProjectOpencodeSkillDirs(
directory ?? process.cwd(),
)
const allSkills = await Promise.all(
opencodeProjectSkillDirs.map((skillsDir) =>
loadSkillsFromDir({ skillsDir, scope: "opencode-project" }),
),
)
return skillsToCommandDefinitionRecord(deduplicateSkillsByName(allSkills.flat()))
}
export async function loadProjectAgentsSkills(directory?: string): Promise<Record<string, CommandDefinition>> {
const agentsProjectSkillDirs = findProjectAgentsSkillDirs(directory ?? process.cwd())
const allSkills = await Promise.all(
agentsProjectSkillDirs.map((skillsDir) => loadSkillsFromDir({ skillsDir, scope: "project" })),
)
return skillsToCommandDefinitionRecord(deduplicateSkillsByName(allSkills.flat()))
}
export async function loadGlobalAgentsSkills(): Promise<Record<string, CommandDefinition>> {
const agentsGlobalDir = join(homedir(), ".agents", "skills")
const skills = await loadSkillsFromDir({ skillsDir: agentsGlobalDir, scope: "user" })
return skillsToCommandDefinitionRecord(skills)
}
@@ -104,8 +131,11 @@ export async function discoverUserClaudeSkills(): Promise<LoadedSkill[]> {
}
export async function discoverProjectClaudeSkills(directory?: string): Promise<LoadedSkill[]> {
const projectSkillsDir = join(directory ?? process.cwd(), ".claude", "skills")
return loadSkillsFromDir({ skillsDir: projectSkillsDir, scope: "project" })
const projectSkillDirs = findProjectClaudeSkillDirs(directory ?? process.cwd())
const allSkills = await Promise.all(
projectSkillDirs.map((skillsDir) => loadSkillsFromDir({ skillsDir, scope: "project" })),
)
return deduplicateSkillsByName(allSkills.flat())
}
export async function discoverOpencodeGlobalSkills(): Promise<LoadedSkill[]> {
@@ -117,13 +147,23 @@ export async function discoverOpencodeGlobalSkills(): Promise<LoadedSkill[]> {
}
export async function discoverOpencodeProjectSkills(directory?: string): Promise<LoadedSkill[]> {
const opencodeProjectDir = join(directory ?? process.cwd(), ".opencode", "skills")
return loadSkillsFromDir({ skillsDir: opencodeProjectDir, scope: "opencode-project" })
const opencodeProjectSkillDirs = findProjectOpencodeSkillDirs(
directory ?? process.cwd(),
)
const allSkills = await Promise.all(
opencodeProjectSkillDirs.map((skillsDir) =>
loadSkillsFromDir({ skillsDir, scope: "opencode-project" }),
),
)
return deduplicateSkillsByName(allSkills.flat())
}
export async function discoverProjectAgentsSkills(directory?: string): Promise<LoadedSkill[]> {
const agentsProjectDir = join(directory ?? process.cwd(), ".agents", "skills")
return loadSkillsFromDir({ skillsDir: agentsProjectDir, scope: "project" })
const agentsProjectSkillDirs = findProjectAgentsSkillDirs(directory ?? process.cwd())
const allSkills = await Promise.all(
agentsProjectSkillDirs.map((skillsDir) => loadSkillsFromDir({ skillsDir, scope: "project" })),
)
return deduplicateSkillsByName(allSkills.flat())
}
export async function discoverGlobalAgentsSkills(): Promise<LoadedSkill[]> {

View File

@@ -0,0 +1,86 @@
import { execFileSync } from "node:child_process"
import { afterEach, beforeEach, describe, expect, it } from "bun:test"
import { mkdtempSync, mkdirSync, rmSync, writeFileSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"
import {
discoverOpencodeProjectSkills,
discoverProjectAgentsSkills,
discoverProjectClaudeSkills,
} from "./loader"
function writeSkill(directory: string, name: string, description: string): void {
mkdirSync(directory, { recursive: true })
writeFileSync(
join(directory, "SKILL.md"),
`---\nname: ${name}\ndescription: ${description}\n---\nBody\n`,
)
}
describe("project skill discovery", () => {
let tempDir = ""
beforeEach(() => {
tempDir = mkdtempSync(join(tmpdir(), "omo-project-skill-discovery-"))
})
afterEach(() => {
rmSync(tempDir, { recursive: true, force: true })
})
it("discovers ancestor project skill directories up to the worktree root", async () => {
// given
const repositoryDir = join(tempDir, "repo")
const nestedDirectory = join(repositoryDir, "packages", "app", "src")
mkdirSync(nestedDirectory, { recursive: true })
execFileSync("git", ["init"], {
cwd: repositoryDir,
stdio: ["ignore", "ignore", "ignore"],
})
writeSkill(
join(repositoryDir, ".claude", "skills", "repo-claude"),
"repo-claude",
"Discovered from the repository root",
)
writeSkill(
join(repositoryDir, ".agents", "skills", "repo-agents"),
"repo-agents",
"Discovered from the repository root",
)
writeSkill(
join(repositoryDir, ".opencode", "skill", "repo-opencode"),
"repo-opencode",
"Discovered from the repository root",
)
writeSkill(
join(tempDir, ".claude", "skills", "outside-claude"),
"outside-claude",
"Should stay outside the worktree",
)
writeSkill(
join(tempDir, ".agents", "skills", "outside-agents"),
"outside-agents",
"Should stay outside the worktree",
)
writeSkill(
join(tempDir, ".opencode", "skills", "outside-opencode"),
"outside-opencode",
"Should stay outside the worktree",
)
// when
const [claudeSkills, agentSkills, opencodeSkills] = await Promise.all([
discoverProjectClaudeSkills(nestedDirectory),
discoverProjectAgentsSkills(nestedDirectory),
discoverOpencodeProjectSkills(nestedDirectory),
])
// then
expect(claudeSkills.map(skill => skill.name)).toEqual(["repo-claude"])
expect(agentSkills.map(skill => skill.name)).toEqual(["repo-agents"])
expect(opencodeSkills.map(skill => skill.name)).toEqual(["repo-opencode"])
})
})
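The test above pins down the contract of `findProjectClaudeSkillDirs` without showing its implementation. A hypothetical sketch consistent with that contract: walk ancestors from the start directory, collect existing skill directories, and stop at the first ancestor containing `.git` (the worktree root). The real code lives in `src/shared/project-discovery-dirs` and may differ.

```typescript
import { existsSync, mkdirSync, mkdtempSync, rmSync } from "fs"
import { tmpdir } from "os"
import { dirname, join } from "path"

// Hypothetical ancestor walk: collect every existing <ancestor>/.claude/skills
// directory, stopping once the worktree root (a dir containing .git) is reached.
function findSkillDirs(start: string, segments = [".claude", "skills"]): string[] {
  const found: string[] = []
  let current = start
  while (true) {
    const candidate = join(current, ...segments)
    if (existsSync(candidate)) found.push(candidate)
    if (existsSync(join(current, ".git"))) break // worktree root: stop here
    const parent = dirname(current)
    if (parent === current) break // filesystem root
    current = parent
  }
  return found
}

// Usage against a throwaway layout mirroring the test's repo/nested structure.
const repo = mkdtempSync(join(tmpdir(), "omo-walk-"))
mkdirSync(join(repo, ".git"), { recursive: true })
mkdirSync(join(repo, ".claude", "skills"), { recursive: true })
const nested = join(repo, "packages", "app")
mkdirSync(nested, { recursive: true })
const dirs = findSkillDirs(nested)
console.log(dirs.length) // only the repo-root skills dir; ancestors above .git are excluded
rmSync(repo, { recursive: true, force: true })
```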

View File

@@ -1,25 +1,36 @@
const MAX_PROCESSED_ENTRY_COUNT = 10_000
const PROCESSED_COMMAND_TTL_MS = 30_000
function pruneExpiredEntries(entries: Map<string, number>, now: number): Map<string, number> {
return new Map(Array.from(entries.entries()).filter(([, expiresAt]) => expiresAt > now))
function pruneExpiredEntries(entries: Map<string, number>, now: number): void {
for (const [commandKey, expiresAt] of entries) {
if (expiresAt <= now) {
entries.delete(commandKey)
}
}
}
function trimProcessedEntries(entries: Map<string, number>): Map<string, number> {
function trimProcessedEntries(entries: Map<string, number>): void {
if (entries.size <= MAX_PROCESSED_ENTRY_COUNT) {
return entries
return
}
return new Map(
Array.from(entries.entries())
.sort((left, right) => left[1] - right[1])
.slice(Math.floor(entries.size / 2))
)
const targetSize = Math.floor(entries.size / 2)
for (const commandKey of entries.keys()) {
if (entries.size <= targetSize) {
return
}
entries.delete(commandKey)
}
}
function removeSessionEntries(entries: Map<string, number>, sessionID: string): Map<string, number> {
function removeSessionEntries(entries: Map<string, number>, sessionID: string): void {
const sessionPrefix = `${sessionID}:`
return new Map(Array.from(entries.entries()).filter(([entry]) => !entry.startsWith(sessionPrefix)))
for (const entry of entries.keys()) {
if (entry.startsWith(sessionPrefix)) {
entries.delete(entry)
}
}
}
export interface ProcessedCommandStore {
@@ -34,19 +45,27 @@ export function createProcessedCommandStore(): ProcessedCommandStore {
return {
has(commandKey: string): boolean {
const now = Date.now()
entries = pruneExpiredEntries(entries, now)
return entries.has(commandKey)
const expiresAt = entries.get(commandKey)
if (expiresAt === undefined) {
return false
}
if (expiresAt <= Date.now()) {
entries.delete(commandKey)
return false
}
return true
},
add(commandKey: string, ttlMs = PROCESSED_COMMAND_TTL_MS): void {
const now = Date.now()
entries = pruneExpiredEntries(entries, now)
pruneExpiredEntries(entries, now)
entries.delete(commandKey)
entries.set(commandKey, now + ttlMs)
entries = trimProcessedEntries(entries)
trimProcessedEntries(entries)
},
cleanupSession(sessionID: string): void {
entries = removeSessionEntries(entries, sessionID)
removeSessionEntries(entries, sessionID)
},
clear(): void {
entries.clear()
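The refactor above replaces rebuild-the-Map helpers with in-place mutation: `has()` expires lazily on read, `add()` prunes then trims. A minimal self-contained sketch of the same store shape (constants inlined as parameters for illustration):

```typescript
// In-place TTL store: one Map mutated directly instead of being reallocated.
function createTtlStore(ttlMs = 30_000, maxEntries = 10_000) {
  const entries = new Map<string, number>()
  return {
    has(key: string): boolean {
      const expiresAt = entries.get(key)
      if (expiresAt === undefined) return false
      if (expiresAt <= Date.now()) {
        entries.delete(key) // lazy expiry: stale entries are dropped on read
        return false
      }
      return true
    },
    add(key: string, ttl = ttlMs): void {
      const now = Date.now()
      for (const [k, exp] of entries) if (exp <= now) entries.delete(k)
      entries.delete(key) // delete + set moves the key to newest insertion position
      entries.set(key, now + ttl)
      if (entries.size > maxEntries) {
        const target = Math.floor(entries.size / 2)
        for (const k of entries.keys()) {
          if (entries.size <= target) break
          entries.delete(k) // Map iterates in insertion order, so oldest go first
        }
      }
    },
    size: () => entries.size,
  }
}

const store = createTtlStore()
store.add("ses_1:cmd", 1000)
console.log(store.has("ses_1:cmd")) // true
console.log(store.has("ses_1:other")) // false
```

One trade-off worth noting: the old trim sorted by expiry time, while the in-place version relies on Map insertion order, which only approximates expiry order when entries use mixed TTLs.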

View File

@@ -0,0 +1,134 @@
const { describe, it, expect, mock, beforeEach } = require("bun:test")
import type { MessageData } from "./types"
let sqliteBackend = false
let storedParts: Array<{ type: string; id?: string; callID?: string; [key: string]: unknown }> = []
mock.module("../../shared/opencode-storage-detection", () => ({
isSqliteBackend: () => sqliteBackend,
}))
mock.module("../../shared", () => ({
normalizeSDKResponse: <TData>(response: { data?: TData }, fallback: TData): TData => response.data ?? fallback,
}))
mock.module("./storage", () => ({
readParts: () => storedParts,
}))
const { recoverToolResultMissing } = await import("./recover-tool-result-missing")
function createMockClient(messages: MessageData[] = []) {
const promptAsync = mock(() => Promise.resolve({}))
return {
client: {
session: {
messages: mock(() => Promise.resolve({ data: messages })),
promptAsync,
},
} as never,
promptAsync,
}
}
const failedAssistantMsg: MessageData = {
info: { id: "msg_failed", role: "assistant" },
parts: [],
}
describe("recoverToolResultMissing", () => {
beforeEach(() => {
sqliteBackend = false
storedParts = []
})
it("returns false for sqlite fallback when tool part has no valid callID", async () => {
//#given
sqliteBackend = true
const { client, promptAsync } = createMockClient([
{
info: { id: "msg_failed", role: "assistant" },
parts: [{ type: "tool", id: "prt_missing_call", name: "bash", input: {} }],
},
])
//#when
const result = await recoverToolResultMissing(client, "ses_1", failedAssistantMsg)
//#then
expect(result).toBe(false)
expect(promptAsync).not.toHaveBeenCalled()
})
it("sends the recovered sqlite tool result when callID is valid", async () => {
//#given
sqliteBackend = true
const { client, promptAsync } = createMockClient([
{
info: { id: "msg_failed", role: "assistant" },
parts: [{ type: "tool", id: "prt_valid_call", callID: "call_recovered", name: "bash", input: {} }],
},
])
//#when
const result = await recoverToolResultMissing(client, "ses_1", failedAssistantMsg)
//#then
expect(result).toBe(true)
expect(promptAsync).toHaveBeenCalledWith({
path: { id: "ses_1" },
body: {
parts: [{
type: "tool_result",
tool_use_id: "call_recovered",
content: "Operation cancelled by user (ESC pressed)",
}],
},
})
})
it("returns false for stored parts when tool part has no valid callID", async () => {
//#given
storedParts = [{ type: "tool", id: "prt_stored_missing_call", tool: "bash", state: { input: {} } }]
const { client, promptAsync } = createMockClient()
//#when
const result = await recoverToolResultMissing(client, "ses_2", failedAssistantMsg)
//#then
expect(result).toBe(false)
expect(promptAsync).not.toHaveBeenCalled()
})
it("sends the recovered stored tool result when callID is valid", async () => {
//#given
storedParts = [{
type: "tool",
id: "prt_stored_valid_call",
callID: "toolu_recovered",
tool: "bash",
state: { input: {} },
}]
const { client, promptAsync } = createMockClient()
//#when
const result = await recoverToolResultMissing(client, "ses_2", failedAssistantMsg)
//#then
expect(result).toBe(true)
expect(promptAsync).toHaveBeenCalledWith({
path: { id: "ses_2" },
body: {
parts: [{
type: "tool_result",
tool_use_id: "toolu_recovered",
content: "Operation cancelled by user (ESC pressed)",
}],
},
})
})
})
export {}

View File

@@ -24,8 +24,30 @@ interface MessagePart {
id?: string
}
function isValidToolUseID(id: string | undefined): id is string {
return typeof id === "string" && /^(toolu_|call_)/.test(id)
}
function normalizeMessagePart(part: { type: string; id?: string; callID?: string }): MessagePart | null {
if (part.type === "tool" || part.type === "tool_use") {
if (!isValidToolUseID(part.callID)) {
return null
}
return {
type: "tool_use",
id: part.callID,
}
}
return {
type: part.type,
id: part.id,
}
}
function extractToolUseIds(parts: MessagePart[]): string[] {
return parts.filter((part): part is ToolUsePart => part.type === "tool_use" && !!part.id).map((part) => part.id)
return parts.filter((part): part is ToolUsePart => part.type === "tool_use" && isValidToolUseID(part.id)).map((part) => part.id)
}
async function readPartsFromSDKFallback(
@@ -39,10 +61,7 @@ async function readPartsFromSDKFallback(
const target = messages.find((m) => m.info?.id === messageID)
if (!target?.parts) return []
return target.parts.map((part) => ({
type: part.type === "tool" ? "tool_use" : part.type,
id: "callID" in part ? (part as { callID?: string }).callID : part.id,
}))
return target.parts.map((part) => normalizeMessagePart(part)).filter((part): part is MessagePart => part !== null)
} catch {
return []
}
@@ -59,10 +78,7 @@ export async function recoverToolResultMissing(
parts = await readPartsFromSDKFallback(client, sessionID, failedAssistantMsg.info.id)
} else {
const storedParts = readParts(failedAssistantMsg.info.id)
parts = storedParts.map((part) => ({
type: part.type === "tool" ? "tool_use" : part.type,
id: "callID" in part ? (part as { callID?: string }).callID : part.id,
}))
parts = storedParts.map((part) => normalizeMessagePart(part)).filter((part): part is MessagePart => part !== null)
}
}
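The guard introduced above is the core of the fix described in the session-recovery commit: only provider-issued IDs may become `tool_use_id`, never raw opencode part IDs. A standalone check mirroring `isValidToolUseID`:

```typescript
// Accept toolu_* (Anthropic-style) and call_* (OpenAI-style) prefixes only;
// prt_* part IDs are opencode-internal and would fail API validation if sent
// as tool_use_id, which is the ZodError this change eliminates.
function isValidToolUseID(id: string | undefined): id is string {
  return typeof id === "string" && /^(toolu_|call_)/.test(id)
}

const candidates = ["toolu_abc", "call_123", "prt_xyz", undefined]
console.log(candidates.map((id) => isValidToolUseID(id)))
// [ true, true, false, false ]
```

Because the function is a type guard (`id is string`), callers like `normalizeMessagePart` can pass the narrowed `part.callID` straight into the `tool_use` part without a non-null assertion.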

View File

@@ -12,7 +12,7 @@ import { createPluginDispose, type PluginDispose } from "./plugin-dispose"
import { loadPluginConfig } from "./plugin-config"
import { createModelCacheState } from "./plugin-state"
import { createFirstMessageVariantGate } from "./shared/first-message-variant"
import { injectServerAuthIntoClient, log } from "./shared"
import { injectServerAuthIntoClient, log, logLegacyPluginStartupWarning } from "./shared"
import { startTmuxCheck } from "./tools"
let activePluginDispose: PluginDispose | null = null
@@ -23,6 +23,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
log("[OhMyOpenCodePlugin] ENTRY - plugin loading", {
directory: ctx.directory,
})
logLegacyPluginStartupWarning()
injectServerAuthIntoClient(ctx.client)
startTmuxCheck()

View File

@@ -0,0 +1,125 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import { afterEach, beforeEach, describe, expect, spyOn, test } from "bun:test"
import * as agents from "../agents"
import * as shared from "../shared"
import * as sisyphusJunior from "../agents/sisyphus-junior"
import type { OhMyOpenCodeConfig } from "../config"
import * as skillLoader from "../features/opencode-skill-loader"
import { applyAgentConfig } from "./agent-config-handler"
import type { PluginComponents } from "./plugin-components-loader"
function createPluginComponents(): PluginComponents {
return {
commands: {},
skills: {},
agents: {},
mcpServers: {},
hooksConfigs: [],
plugins: [],
errors: [],
}
}
function createPluginConfig(): OhMyOpenCodeConfig {
return {
sisyphus_agent: {
planner_enabled: false,
},
}
}
describe("applyAgentConfig .agents skills", () => {
let createBuiltinAgentsSpy: ReturnType<typeof spyOn>
let createSisyphusJuniorAgentSpy: ReturnType<typeof spyOn>
let discoverConfigSourceSkillsSpy: ReturnType<typeof spyOn>
let discoverUserClaudeSkillsSpy: ReturnType<typeof spyOn>
let discoverProjectClaudeSkillsSpy: ReturnType<typeof spyOn>
let discoverOpencodeGlobalSkillsSpy: ReturnType<typeof spyOn>
let discoverOpencodeProjectSkillsSpy: ReturnType<typeof spyOn>
let discoverProjectAgentsSkillsSpy: ReturnType<typeof spyOn>
let discoverGlobalAgentsSkillsSpy: ReturnType<typeof spyOn>
let logSpy: ReturnType<typeof spyOn>
beforeEach(() => {
createBuiltinAgentsSpy = spyOn(agents, "createBuiltinAgents").mockResolvedValue({
sisyphus: { name: "sisyphus", prompt: "builtin", mode: "primary" } satisfies AgentConfig,
})
createSisyphusJuniorAgentSpy = spyOn(
sisyphusJunior,
"createSisyphusJuniorAgentWithOverrides",
).mockReturnValue({
name: "sisyphus-junior",
prompt: "junior",
mode: "all",
} satisfies AgentConfig)
discoverConfigSourceSkillsSpy = spyOn(skillLoader, "discoverConfigSourceSkills").mockResolvedValue([])
discoverUserClaudeSkillsSpy = spyOn(skillLoader, "discoverUserClaudeSkills").mockResolvedValue([])
discoverProjectClaudeSkillsSpy = spyOn(skillLoader, "discoverProjectClaudeSkills").mockResolvedValue([])
discoverOpencodeGlobalSkillsSpy = spyOn(skillLoader, "discoverOpencodeGlobalSkills").mockResolvedValue([])
discoverOpencodeProjectSkillsSpy = spyOn(skillLoader, "discoverOpencodeProjectSkills").mockResolvedValue([])
discoverProjectAgentsSkillsSpy = spyOn(skillLoader, "discoverProjectAgentsSkills").mockResolvedValue([])
discoverGlobalAgentsSkillsSpy = spyOn(skillLoader, "discoverGlobalAgentsSkills").mockResolvedValue([])
logSpy = spyOn(shared, "log").mockImplementation(() => {})
})
afterEach(() => {
createBuiltinAgentsSpy.mockRestore()
createSisyphusJuniorAgentSpy.mockRestore()
discoverConfigSourceSkillsSpy.mockRestore()
discoverUserClaudeSkillsSpy.mockRestore()
discoverProjectClaudeSkillsSpy.mockRestore()
discoverOpencodeGlobalSkillsSpy.mockRestore()
discoverOpencodeProjectSkillsSpy.mockRestore()
discoverProjectAgentsSkillsSpy.mockRestore()
discoverGlobalAgentsSkillsSpy.mockRestore()
logSpy.mockRestore()
})
test("calls .agents skill discovery during agent configuration", async () => {
// given
const directory = "/tmp/project"
// when
await applyAgentConfig({
config: { model: "anthropic/claude-opus-4-6", agent: {} },
pluginConfig: createPluginConfig(),
ctx: { directory },
pluginComponents: createPluginComponents(),
})
// then
expect(discoverProjectAgentsSkillsSpy).toHaveBeenCalledWith(directory)
expect(discoverGlobalAgentsSkillsSpy).toHaveBeenCalled()
})
test("passes discovered .agents skills to builtin agent creation", async () => {
// given
discoverProjectAgentsSkillsSpy.mockResolvedValue([
{
name: "project-agent-skill",
definition: { name: "project-agent-skill", template: "project-template" },
scope: "project",
},
])
discoverGlobalAgentsSkillsSpy.mockResolvedValue([
{
name: "global-agent-skill",
definition: { name: "global-agent-skill", template: "global-template" },
scope: "user",
},
])
// when
await applyAgentConfig({
config: { model: "anthropic/claude-opus-4-6", agent: {} },
pluginConfig: createPluginConfig(),
ctx: { directory: "/tmp/project" },
pluginComponents: createPluginComponents(),
})
// then
const discoveredSkills = createBuiltinAgentsSpy.mock.calls[0]?.[6] as Array<{ name: string }>
expect(discoveredSkills.map(skill => skill.name)).toContain("project-agent-skill")
expect(discoveredSkills.map(skill => skill.name)).toContain("global-agent-skill")
})
})

View File

@@ -8,6 +8,7 @@ import * as sisyphusJunior from "../agents/sisyphus-junior"
import type { OhMyOpenCodeConfig } from "../config"
import * as agentLoader from "../features/claude-code-agent-loader"
import * as skillLoader from "../features/opencode-skill-loader"
import type { LoadedSkill } from "../features/opencode-skill-loader"
import { getAgentDisplayName } from "../shared/agent-display-names"
import { applyAgentConfig } from "./agent-config-handler"
import type { PluginComponents } from "./plugin-components-loader"
@@ -51,6 +52,8 @@ describe("applyAgentConfig builtin override protection", () => {
let discoverProjectClaudeSkillsSpy: ReturnType<typeof spyOn>
let discoverOpencodeGlobalSkillsSpy: ReturnType<typeof spyOn>
let discoverOpencodeProjectSkillsSpy: ReturnType<typeof spyOn>
let discoverProjectAgentsSkillsSpy: ReturnType<typeof spyOn>
let discoverGlobalAgentsSkillsSpy: ReturnType<typeof spyOn>
let loadUserAgentsSpy: ReturnType<typeof spyOn>
let loadProjectAgentsSpy: ReturnType<typeof spyOn>
let migrateAgentConfigSpy: ReturnType<typeof spyOn>
@@ -121,6 +124,14 @@ describe("applyAgentConfig builtin override protection", () => {
skillLoader,
"discoverOpencodeProjectSkills",
).mockResolvedValue([])
discoverProjectAgentsSkillsSpy = spyOn(
skillLoader,
"discoverProjectAgentsSkills",
).mockResolvedValue([])
discoverGlobalAgentsSkillsSpy = spyOn(
skillLoader,
"discoverGlobalAgentsSkills",
).mockResolvedValue([])
loadUserAgentsSpy = spyOn(agentLoader, "loadUserAgents").mockReturnValue({})
loadProjectAgentsSpy = spyOn(agentLoader, "loadProjectAgents").mockReturnValue({})
@@ -139,6 +150,8 @@ describe("applyAgentConfig builtin override protection", () => {
discoverProjectClaudeSkillsSpy.mockRestore()
discoverOpencodeGlobalSkillsSpy.mockRestore()
discoverOpencodeProjectSkillsSpy.mockRestore()
discoverProjectAgentsSkillsSpy.mockRestore()
discoverGlobalAgentsSkillsSpy.mockRestore()
loadUserAgentsSpy.mockRestore()
loadProjectAgentsSpy.mockRestore()
migrateAgentConfigSpy.mockRestore()
@@ -279,4 +292,45 @@ describe("applyAgentConfig builtin override protection", () => {
// then
expect(createSisyphusJuniorAgentSpy).toHaveBeenCalledWith(undefined, "openai/gpt-5.4", false)
})
test("includes project and global .agents skills in builtin agent awareness", async () => {
// given
const projectAgentsSkill = {
name: "project-agent-skill",
definition: {
name: "project-agent-skill",
description: "Project agent skill",
template: "template",
},
scope: "project",
} satisfies LoadedSkill
const globalAgentsSkill = {
name: "global-agent-skill",
definition: {
name: "global-agent-skill",
description: "Global agent skill",
template: "template",
},
scope: "user",
} satisfies LoadedSkill
discoverProjectAgentsSkillsSpy.mockResolvedValue([projectAgentsSkill])
discoverGlobalAgentsSkillsSpy.mockResolvedValue([globalAgentsSkill])
// when
await applyAgentConfig({
config: createBaseConfig(),
pluginConfig: createPluginConfig(),
ctx: { directory: "/tmp" },
pluginComponents: createPluginComponents(),
})
// then
const discoveredSkills = createBuiltinAgentsSpy.mock.calls[0]?.[6]
expect(discoveredSkills).toEqual(
expect.arrayContaining([
expect.objectContaining({ name: "project-agent-skill" }),
expect.objectContaining({ name: "global-agent-skill" }),
]),
)
})
})

View File

@@ -6,8 +6,10 @@ import { AGENT_NAME_MAP } from "../shared/migration";
import { getAgentDisplayName } from "../shared/agent-display-names";
import {
discoverConfigSourceSkills,
discoverGlobalAgentsSkills,
discoverOpencodeGlobalSkills,
discoverOpencodeProjectSkills,
discoverProjectAgentsSkills,
discoverProjectClaudeSkills,
discoverUserClaudeSkills,
} from "../features/opencode-skill-loader";
@@ -52,8 +54,10 @@ export async function applyAgentConfig(params: {
discoveredConfigSourceSkills,
discoveredUserSkills,
discoveredProjectSkills,
discoveredProjectAgentsSkills,
discoveredOpencodeGlobalSkills,
discoveredOpencodeProjectSkills,
discoveredGlobalAgentsSkills,
] = await Promise.all([
discoverConfigSourceSkills({
config: params.pluginConfig.skills,
@@ -63,16 +67,22 @@ export async function applyAgentConfig(params: {
includeClaudeSkillsForAwareness
? discoverProjectClaudeSkills(params.ctx.directory)
: Promise.resolve([]),
includeClaudeSkillsForAwareness
? discoverProjectAgentsSkills(params.ctx.directory)
: Promise.resolve([]),
discoverOpencodeGlobalSkills(),
discoverOpencodeProjectSkills(params.ctx.directory),
includeClaudeSkillsForAwareness ? discoverGlobalAgentsSkills() : Promise.resolve([]),
]);
const allDiscoveredSkills = [
...discoveredConfigSourceSkills,
...discoveredOpencodeProjectSkills,
...discoveredProjectSkills,
...discoveredProjectAgentsSkills,
...discoveredOpencodeGlobalSkills,
...discoveredUserSkills,
...discoveredGlobalAgentsSkills,
];
const browserProvider =

View File

@@ -0,0 +1,98 @@
import { afterEach, beforeEach, describe, expect, spyOn, test } from "bun:test";
import * as builtinCommands from "../features/builtin-commands";
import * as commandLoader from "../features/claude-code-command-loader";
import * as skillLoader from "../features/opencode-skill-loader";
import type { OhMyOpenCodeConfig } from "../config";
import type { PluginComponents } from "./plugin-components-loader";
import { applyCommandConfig } from "./command-config-handler";
function createPluginComponents(): PluginComponents {
return {
commands: {},
skills: {},
agents: {},
mcpServers: {},
hooksConfigs: [],
plugins: [],
errors: [],
};
}
function createPluginConfig(): OhMyOpenCodeConfig {
return {};
}
describe("applyCommandConfig", () => {
let loadBuiltinCommandsSpy: ReturnType<typeof spyOn>;
let loadUserCommandsSpy: ReturnType<typeof spyOn>;
let loadProjectCommandsSpy: ReturnType<typeof spyOn>;
let loadOpencodeGlobalCommandsSpy: ReturnType<typeof spyOn>;
let loadOpencodeProjectCommandsSpy: ReturnType<typeof spyOn>;
let discoverConfigSourceSkillsSpy: ReturnType<typeof spyOn>;
let loadUserSkillsSpy: ReturnType<typeof spyOn>;
let loadProjectSkillsSpy: ReturnType<typeof spyOn>;
let loadOpencodeGlobalSkillsSpy: ReturnType<typeof spyOn>;
let loadOpencodeProjectSkillsSpy: ReturnType<typeof spyOn>;
let loadProjectAgentsSkillsSpy: ReturnType<typeof spyOn>;
let loadGlobalAgentsSkillsSpy: ReturnType<typeof spyOn>;
beforeEach(() => {
loadBuiltinCommandsSpy = spyOn(builtinCommands, "loadBuiltinCommands").mockReturnValue({});
loadUserCommandsSpy = spyOn(commandLoader, "loadUserCommands").mockResolvedValue({});
loadProjectCommandsSpy = spyOn(commandLoader, "loadProjectCommands").mockResolvedValue({});
loadOpencodeGlobalCommandsSpy = spyOn(commandLoader, "loadOpencodeGlobalCommands").mockResolvedValue({});
loadOpencodeProjectCommandsSpy = spyOn(commandLoader, "loadOpencodeProjectCommands").mockResolvedValue({});
discoverConfigSourceSkillsSpy = spyOn(skillLoader, "discoverConfigSourceSkills").mockResolvedValue([]);
loadUserSkillsSpy = spyOn(skillLoader, "loadUserSkills").mockResolvedValue({});
loadProjectSkillsSpy = spyOn(skillLoader, "loadProjectSkills").mockResolvedValue({});
loadOpencodeGlobalSkillsSpy = spyOn(skillLoader, "loadOpencodeGlobalSkills").mockResolvedValue({});
loadOpencodeProjectSkillsSpy = spyOn(skillLoader, "loadOpencodeProjectSkills").mockResolvedValue({});
loadProjectAgentsSkillsSpy = spyOn(skillLoader, "loadProjectAgentsSkills").mockResolvedValue({});
loadGlobalAgentsSkillsSpy = spyOn(skillLoader, "loadGlobalAgentsSkills").mockResolvedValue({});
});
afterEach(() => {
loadBuiltinCommandsSpy.mockRestore();
loadUserCommandsSpy.mockRestore();
loadProjectCommandsSpy.mockRestore();
loadOpencodeGlobalCommandsSpy.mockRestore();
loadOpencodeProjectCommandsSpy.mockRestore();
discoverConfigSourceSkillsSpy.mockRestore();
loadUserSkillsSpy.mockRestore();
loadProjectSkillsSpy.mockRestore();
loadOpencodeGlobalSkillsSpy.mockRestore();
loadOpencodeProjectSkillsSpy.mockRestore();
loadProjectAgentsSkillsSpy.mockRestore();
loadGlobalAgentsSkillsSpy.mockRestore();
});
test("includes .agents skills in command config", async () => {
// given
loadProjectAgentsSkillsSpy.mockResolvedValue({
"agents-project-skill": {
description: "(project - Skill) Agents project skill",
template: "template",
},
});
loadGlobalAgentsSkillsSpy.mockResolvedValue({
"agents-global-skill": {
description: "(user - Skill) Agents global skill",
template: "template",
},
});
const config: Record<string, unknown> = { command: {} };
// when
await applyCommandConfig({
config,
pluginConfig: createPluginConfig(),
ctx: { directory: "/tmp" },
pluginComponents: createPluginComponents(),
});
// then
const commandConfig = config.command as Record<string, { description?: string }>;
expect(commandConfig["agents-project-skill"]?.description).toContain("Agents project skill");
expect(commandConfig["agents-global-skill"]?.description).toContain("Agents global skill");
});
});

View File

@@ -9,6 +9,8 @@ import {
import { loadBuiltinCommands } from "../features/builtin-commands";
import {
discoverConfigSourceSkills,
loadGlobalAgentsSkills,
loadProjectAgentsSkills,
loadUserSkills,
loadProjectSkills,
loadOpencodeGlobalSkills,
@@ -36,7 +38,9 @@ export async function applyCommandConfig(params: {
opencodeGlobalCommands,
opencodeProjectCommands,
userSkills,
globalAgentsSkills,
projectSkills,
projectAgentsSkills,
opencodeGlobalSkills,
opencodeProjectSkills,
] = await Promise.all([
@@ -49,7 +53,9 @@ export async function applyCommandConfig(params: {
loadOpencodeGlobalCommands(),
loadOpencodeProjectCommands(params.ctx.directory),
includeClaudeSkills ? loadUserSkills() : Promise.resolve({}),
includeClaudeSkills ? loadGlobalAgentsSkills() : Promise.resolve({}),
includeClaudeSkills ? loadProjectSkills(params.ctx.directory) : Promise.resolve({}),
includeClaudeSkills ? loadProjectAgentsSkills(params.ctx.directory) : Promise.resolve({}),
loadOpencodeGlobalSkills(),
loadOpencodeProjectSkills(params.ctx.directory),
]);
@@ -59,11 +65,13 @@ export async function applyCommandConfig(params: {
...skillsToCommandDefinitionRecord(configSourceSkills),
...userCommands,
...userSkills,
...globalAgentsSkills,
...opencodeGlobalCommands,
...opencodeGlobalSkills,
...systemCommands,
...projectCommands,
...projectSkills,
...projectAgentsSkills,
...opencodeProjectCommands,
...opencodeProjectSkills,
...params.pluginComponents.commands,

View File

@@ -1,6 +1,11 @@
import { afterEach, describe, expect, test } from "bun:test"
import { afterEach, beforeEach, describe, expect, spyOn, test } from "bun:test"
import { mkdtempSync, rmSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"
import { createChatParamsHandler } from "./chat-params"
import { createChatParamsHandler, type ChatParamsOutput } from "./chat-params"
import * as dataPathModule from "../shared/data-path"
import { writeProviderModelsCache } from "../shared"
import {
clearSessionPromptParams,
getSessionPromptParams,
@@ -8,8 +13,25 @@ import {
} from "../shared/session-prompt-params-state"
describe("createChatParamsHandler", () => {
let tempCacheRoot = ""
let getCacheDirSpy: ReturnType<typeof spyOn>
beforeEach(() => {
tempCacheRoot = mkdtempSync(join(tmpdir(), "chat-params-cache-"))
getCacheDirSpy = spyOn(dataPathModule, "getOmoOpenCodeCacheDir").mockReturnValue(
join(tempCacheRoot, "oh-my-opencode"),
)
writeProviderModelsCache({ connected: [], models: {} })
})
afterEach(() => {
clearSessionPromptParams("ses_chat_params")
clearSessionPromptParams("ses_chat_params_temperature")
writeProviderModelsCache({ connected: [], models: {} })
getCacheDirSpy?.mockRestore()
if (tempCacheRoot) {
rmSync(tempCacheRoot, { recursive: true, force: true })
}
})
test("normalizes object-style agent payload and runs chat.params hooks", async () => {
@@ -31,7 +53,7 @@ describe("createChatParamsHandler", () => {
message: {},
}
const output = {
const output: ChatParamsOutput = {
temperature: 0.1,
topP: 1,
topK: 1,
@@ -63,7 +85,7 @@ describe("createChatParamsHandler", () => {
message,
}
const output = {
const output: ChatParamsOutput = {
temperature: 0.1,
topP: 1,
topK: 1,
@@ -79,7 +101,26 @@ describe("createChatParamsHandler", () => {
test("applies stored prompt params for the session", async () => {
//#given
setSessionPromptParams("ses_chat_params", {
writeProviderModelsCache({
connected: ["openai"],
models: {
openai: [
{
id: "gpt-5.4",
name: "GPT-5.4",
temperature: true,
reasoning: true,
variants: {
low: {},
high: {},
},
limit: { output: 128_000 },
},
],
},
})
setSessionPromptParams("ses_chat_params_temperature", {
temperature: 0.4,
topP: 0.7,
options: {
@@ -94,14 +135,14 @@ describe("createChatParamsHandler", () => {
})
const input = {
sessionID: "ses_chat_params",
sessionID: "ses_chat_params_temperature",
agent: { name: "oracle" },
model: { providerID: "openai", modelID: "gpt-5.4" },
provider: { id: "openai" },
message: {},
}
const output = {
const output: ChatParamsOutput = {
temperature: 0.1,
topP: 1,
topK: 1,
@@ -113,6 +154,7 @@ describe("createChatParamsHandler", () => {
//#then
expect(output).toEqual({
temperature: 0.4,
topP: 0.7,
topK: 1,
options: {
@@ -122,7 +164,7 @@ describe("createChatParamsHandler", () => {
maxTokens: 4096,
},
})
expect(getSessionPromptParams("ses_chat_params")).toEqual({
expect(getSessionPromptParams("ses_chat_params_temperature")).toEqual({
temperature: 0.4,
topP: 0.7,
options: {
@@ -133,9 +175,9 @@ describe("createChatParamsHandler", () => {
})
})
test("drops unsupported temperature and clamps maxTokens from bundled model capabilities", async () => {
test("drops gpt-5.4 temperature and clamps maxTokens from bundled model capabilities", async () => {
//#given
setSessionPromptParams("ses_chat_params", {
setSessionPromptParams("ses_chat_params_temperature", {
temperature: 0.7,
options: {
maxTokens: 200_000,
@@ -147,14 +189,14 @@ describe("createChatParamsHandler", () => {
})
const input = {
sessionID: "ses_chat_params",
sessionID: "ses_chat_params_temperature",
agent: { name: "oracle" },
model: { providerID: "openai", modelID: "gpt-5.4" },
provider: { id: "openai" },
message: {},
}
const output = {
const output: ChatParamsOutput = {
temperature: 0.1,
topP: 1,
topK: 1,

View File

@@ -22,6 +22,10 @@ function flushWithTimeout(): Promise<void> {
return new Promise<void>((resolve) => setTimeout(resolve, 10))
}
function isRecord(value: unknown): value is Record<string, unknown> {
return typeof value === "object" && value !== null
}
describe("scheduleDeferredModelOverride", () => {
let tempDir: string
let dbPath: string
@@ -60,9 +64,7 @@ describe("scheduleDeferredModelOverride", () => {
const db = new Database(dbPath)
db.run(
`INSERT INTO message (id, session_id, data) VALUES (?, ?, ?)`,
id,
"ses_test",
JSON.stringify({ model }),
[id, "ses_test", JSON.stringify({ model })],
)
db.close()
}
@@ -178,7 +180,7 @@ describe("scheduleDeferredModelOverride", () => {
)
})
test("should not crash when DB file exists but is corrupted", async () => {
test("should log a DB failure when DB file exists but is corrupted", async () => {
//#given
const { chmodSync, writeFileSync } = await import("node:fs")
const corruptedDbPath = join(tempDir, "opencode", "opencode.db")
@@ -194,9 +196,16 @@ describe("scheduleDeferredModelOverride", () => {
await flushMicrotasks(5)
//#then
expect(logSpy).toHaveBeenCalledWith(
expect.stringContaining("Failed to open DB"),
expect.objectContaining({ messageId: "msg_corrupt" }),
const failureCall = logSpy.mock.calls.find(([message, metadata]) =>
typeof message === "string"
&& (
message.includes("Failed to open DB")
|| message.includes("Deferred DB update failed with error")
)
&& isRecord(metadata)
&& metadata.messageId === "msg_corrupt"
)
expect(failureCall).toBeDefined()
})
})

View File

@@ -113,9 +113,9 @@ describe("resolveVariantForModel", () => {
})
test("returns correct variant for openai provider (hephaestus agent)", () => {
// #given hephaestus has openai/gpt-5.3-codex with variant "medium" in its chain
// #given hephaestus has openai/gpt-5.4 with variant "medium" in its chain
const config = {} as OhMyOpenCodeConfig
const model = { providerID: "openai", modelID: "gpt-5.3-codex" }
const model = { providerID: "openai", modelID: "gpt-5.4" }
// #when
const variant = resolveVariantForModel(config, "hephaestus", model)

View File

@@ -62,6 +62,7 @@ export * from "./truncate-description"
export * from "./opencode-storage-paths"
export * from "./opencode-message-dir"
export * from "./opencode-command-dirs"
export * from "./project-discovery-dirs"
export * from "./normalize-sdk-response"
export * from "./session-directory-resolver"
export * from "./prompt-tools"
@@ -69,3 +70,4 @@ export * from "./internal-initiator-marker"
export * from "./plugin-command-discovery"
export { SessionCategoryRegistry } from "./session-category-registry"
export * from "./plugin-identity"
export * from "./log-legacy-plugin-startup-warning"

View File

@@ -0,0 +1,81 @@
import { afterEach, beforeEach, describe, expect, it } from "bun:test"
import { mkdirSync, rmSync, writeFileSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"
import { checkForLegacyPluginEntry } from "./legacy-plugin-warning"
describe("checkForLegacyPluginEntry", () => {
let testConfigDir = ""
beforeEach(() => {
testConfigDir = join(tmpdir(), `omo-legacy-check-${Date.now()}-${Math.random().toString(36).slice(2)}`)
mkdirSync(testConfigDir, { recursive: true })
})
afterEach(() => {
rmSync(testConfigDir, { recursive: true, force: true })
})
it("detects a bare legacy plugin entry", () => {
// given
writeFileSync(join(testConfigDir, "opencode.json"), JSON.stringify({ plugin: ["oh-my-opencode"] }, null, 2))
// when
const result = checkForLegacyPluginEntry(testConfigDir)
// then
expect(result.hasLegacyEntry).toBe(true)
expect(result.hasCanonicalEntry).toBe(false)
expect(result.legacyEntries).toEqual(["oh-my-opencode"])
})
it("detects a version-pinned legacy plugin entry", () => {
// given
writeFileSync(join(testConfigDir, "opencode.json"), JSON.stringify({ plugin: ["oh-my-opencode@3.10.0"] }, null, 2))
// when
const result = checkForLegacyPluginEntry(testConfigDir)
// then
expect(result.hasLegacyEntry).toBe(true)
expect(result.hasCanonicalEntry).toBe(false)
expect(result.legacyEntries).toEqual(["oh-my-opencode@3.10.0"])
})
it("does not flag a canonical plugin entry", () => {
// given
writeFileSync(join(testConfigDir, "opencode.json"), JSON.stringify({ plugin: ["oh-my-openagent"] }, null, 2))
// when
const result = checkForLegacyPluginEntry(testConfigDir)
// then
expect(result.hasLegacyEntry).toBe(false)
expect(result.hasCanonicalEntry).toBe(true)
expect(result.legacyEntries).toEqual([])
})
it("detects legacy entries in quoted jsonc config", () => {
// given
writeFileSync(join(testConfigDir, "opencode.jsonc"), '{\n "plugin": ["oh-my-opencode"]\n}\n')
// when
const result = checkForLegacyPluginEntry(testConfigDir)
// then
expect(result.hasLegacyEntry).toBe(true)
expect(result.legacyEntries).toEqual(["oh-my-opencode"])
})
it("returns no warning data when config is missing", () => {
// given — empty dir, no config files
// when
const result = checkForLegacyPluginEntry(testConfigDir)
// then
expect(result.hasLegacyEntry).toBe(false)
expect(result.hasCanonicalEntry).toBe(false)
expect(result.legacyEntries).toEqual([])
})
})

View File

@@ -0,0 +1,66 @@
import { existsSync, readFileSync } from "node:fs"
import { join } from "node:path"
import { parseJsoncSafe } from "./jsonc-parser"
import { getOpenCodeConfigPaths } from "./opencode-config-dir"
import { LEGACY_PLUGIN_NAME, PLUGIN_NAME } from "./plugin-identity"
interface OpenCodeConfig {
plugin?: string[]
}
export interface LegacyPluginCheckResult {
hasLegacyEntry: boolean
hasCanonicalEntry: boolean
legacyEntries: string[]
}
function getOpenCodeConfigPath(overrideConfigDir?: string): string | null {
if (overrideConfigDir) {
const jsonPath = join(overrideConfigDir, "opencode.json")
const jsoncPath = join(overrideConfigDir, "opencode.jsonc")
if (existsSync(jsoncPath)) return jsoncPath
if (existsSync(jsonPath)) return jsonPath
return null
}
const { configJsonc, configJson } = getOpenCodeConfigPaths({ binary: "opencode", version: null })
if (existsSync(configJsonc)) return configJsonc
if (existsSync(configJson)) return configJson
return null
}
function isLegacyPluginEntry(entry: string): boolean {
return entry === LEGACY_PLUGIN_NAME || entry.startsWith(`${LEGACY_PLUGIN_NAME}@`)
}
function isCanonicalPluginEntry(entry: string): boolean {
return entry === PLUGIN_NAME || entry.startsWith(`${PLUGIN_NAME}@`)
}
export function checkForLegacyPluginEntry(overrideConfigDir?: string): LegacyPluginCheckResult {
const configPath = getOpenCodeConfigPath(overrideConfigDir)
if (!configPath) {
return { hasLegacyEntry: false, hasCanonicalEntry: false, legacyEntries: [] }
}
try {
const content = readFileSync(configPath, "utf-8")
const parseResult = parseJsoncSafe<OpenCodeConfig>(content)
if (!parseResult.data) {
return { hasLegacyEntry: false, hasCanonicalEntry: false, legacyEntries: [] }
}
const legacyEntries = (parseResult.data.plugin ?? []).filter(isLegacyPluginEntry)
const hasCanonicalEntry = (parseResult.data.plugin ?? []).some(isCanonicalPluginEntry)
return {
hasLegacyEntry: legacyEntries.length > 0,
hasCanonicalEntry,
legacyEntries,
}
} catch {
return { hasLegacyEntry: false, hasCanonicalEntry: false, legacyEntries: [] }
}
}

View File

@@ -0,0 +1,80 @@
import { afterAll, beforeEach, describe, expect, it, mock } from "bun:test"
import type { LegacyPluginCheckResult } from "./legacy-plugin-warning"
function createLegacyPluginCheckResult(
overrides: Partial<LegacyPluginCheckResult> = {},
): LegacyPluginCheckResult {
return {
hasLegacyEntry: false,
hasCanonicalEntry: false,
legacyEntries: [],
...overrides,
}
}
const mockCheckForLegacyPluginEntry = mock(() => createLegacyPluginCheckResult())
const mockLog = mock(() => {})
mock.module("./legacy-plugin-warning", () => ({
checkForLegacyPluginEntry: mockCheckForLegacyPluginEntry,
}))
mock.module("./logger", () => ({
log: mockLog,
}))
afterAll(() => {
mock.restore()
})
async function importFreshStartupWarningModule(): Promise<typeof import("./log-legacy-plugin-startup-warning")> {
return import(`./log-legacy-plugin-startup-warning?test=${Date.now()}-${Math.random()}`)
}
describe("logLegacyPluginStartupWarning", () => {
beforeEach(() => {
mockCheckForLegacyPluginEntry.mockReset()
mockLog.mockReset()
mockCheckForLegacyPluginEntry.mockReturnValue(createLegacyPluginCheckResult())
})
describe("#given OpenCode config contains legacy plugin entries", () => {
it("logs the legacy entries with canonical replacements", async () => {
//#given
mockCheckForLegacyPluginEntry.mockReturnValue(createLegacyPluginCheckResult({
hasLegacyEntry: true,
legacyEntries: ["oh-my-opencode", "oh-my-opencode@3.13.1"],
}))
const { logLegacyPluginStartupWarning } = await importFreshStartupWarningModule()
//#when
logLegacyPluginStartupWarning()
//#then
expect(mockLog).toHaveBeenCalledTimes(1)
expect(mockLog).toHaveBeenCalledWith(
"[OhMyOpenCodePlugin] Legacy plugin entry detected in OpenCode config",
{
legacyEntries: ["oh-my-opencode", "oh-my-opencode@3.13.1"],
suggestedEntries: ["oh-my-openagent", "oh-my-openagent@3.13.1"],
hasCanonicalEntry: false,
},
)
})
})
describe("#given OpenCode config uses only canonical plugin entries", () => {
it("does not log a startup warning", async () => {
//#given
const { logLegacyPluginStartupWarning } = await importFreshStartupWarningModule()
//#when
logLegacyPluginStartupWarning()
//#then
expect(mockLog).not.toHaveBeenCalled()
})
})
})

View File

@@ -0,0 +1,28 @@
import { checkForLegacyPluginEntry } from "./legacy-plugin-warning"
import { log } from "./logger"
import { LEGACY_PLUGIN_NAME, PLUGIN_NAME } from "./plugin-identity"
function toCanonicalEntry(entry: string): string {
if (entry === LEGACY_PLUGIN_NAME) {
return PLUGIN_NAME
}
if (entry.startsWith(`${LEGACY_PLUGIN_NAME}@`)) {
return `${PLUGIN_NAME}${entry.slice(LEGACY_PLUGIN_NAME.length)}`
}
return entry
}
export function logLegacyPluginStartupWarning(): void {
const result = checkForLegacyPluginEntry()
if (!result.hasLegacyEntry) {
return
}
log("[OhMyOpenCodePlugin] Legacy plugin entry detected in OpenCode config", {
legacyEntries: result.legacyEntries,
suggestedEntries: result.legacyEntries.map(toCanonicalEntry),
hasCanonicalEntry: result.hasCanonicalEntry,
})
}
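The canonical-entry mapping above preserves any `@version` pin verbatim. A minimal standalone sketch, with the two plugin names inlined as constants for illustration (the real module imports them from `./plugin-identity`; the values match the test expectations above):

```typescript
// Standalone sketch of toCanonicalEntry; names inlined here as an assumption
// grounded in the tests above ("oh-my-opencode" -> "oh-my-openagent").
const LEGACY_PLUGIN_NAME = "oh-my-opencode"
const PLUGIN_NAME = "oh-my-openagent"

function toCanonicalEntry(entry: string): string {
  if (entry === LEGACY_PLUGIN_NAME) return PLUGIN_NAME
  // Version-pinned entries keep their "@x.y.z" suffix untouched.
  if (entry.startsWith(`${LEGACY_PLUGIN_NAME}@`)) {
    return `${PLUGIN_NAME}${entry.slice(LEGACY_PLUGIN_NAME.length)}`
  }
  // Unrelated plugin entries pass through unchanged.
  return entry
}

console.log(toCanonicalEntry("oh-my-opencode@3.13.1")) // "oh-my-openagent@3.13.1"
```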

View File

@@ -1,4 +1,18 @@
import { describe, expect, test } from "bun:test"
import { afterAll, describe, expect, test, mock } from "bun:test"
// Mock connected-providers-cache to prevent local disk cache from polluting test results.
// Without this, findProviderModelMetadata reads real cached model metadata (e.g., from opencode serve)
// which causes the "prefers runtime models.dev cache" test to get different values than expected.
mock.module("./connected-providers-cache", () => ({
findProviderModelMetadata: () => undefined,
readConnectedProvidersCache: () => null,
hasConnectedProvidersCache: () => false,
hasProviderModelsCache: () => false,
}))
afterAll(() => {
mock.restore()
})
import {
getModelCapabilities,
@@ -178,8 +192,8 @@ describe("getModelCapabilities", () => {
expect(result.diagnostics).toMatchObject({
resolutionMode: "alias-backed",
canonicalization: {
source: "pattern-alias",
ruleID: "claude-thinking-legacy-alias",
source: "exact-alias",
ruleID: "claude-opus-4-6-thinking-legacy-alias",
},
snapshot: { source: "bundled-snapshot" },
})
@@ -202,63 +216,13 @@ describe("getModelCapabilities", () => {
expect(result.diagnostics).toMatchObject({
resolutionMode: "alias-backed",
canonicalization: {
source: "pattern-alias",
source: "exact-alias",
ruleID: "gemini-3.1-pro-tier-alias",
},
snapshot: { source: "bundled-snapshot" },
})
})
test("canonicalizes provider-prefixed gemini aliases without changing the transport-facing request", () => {
const result = getModelCapabilities({
providerID: "google",
modelID: "google/gemini-3.1-pro-high",
bundledSnapshot,
})
expect(result).toMatchObject({
requestedModelID: "google/gemini-3.1-pro-high",
canonicalModelID: "gemini-3.1-pro",
family: "gemini",
supportsThinking: true,
supportsTemperature: true,
maxOutputTokens: 65_000,
})
expect(result.diagnostics).toMatchObject({
resolutionMode: "alias-backed",
canonicalization: {
source: "pattern-alias",
ruleID: "gemini-3.1-pro-tier-alias",
},
snapshot: { source: "bundled-snapshot" },
})
})
test("canonicalizes provider-prefixed Claude thinking aliases to bare snapshot IDs", () => {
const result = getModelCapabilities({
providerID: "anthropic",
modelID: "anthropic/claude-opus-4-6-thinking",
bundledSnapshot,
})
expect(result).toMatchObject({
requestedModelID: "anthropic/claude-opus-4-6-thinking",
canonicalModelID: "claude-opus-4-6",
family: "claude-opus",
supportsThinking: true,
supportsTemperature: true,
maxOutputTokens: 128_000,
})
expect(result.diagnostics).toMatchObject({
resolutionMode: "alias-backed",
canonicalization: {
source: "pattern-alias",
ruleID: "claude-thinking-legacy-alias",
},
snapshot: { source: "bundled-snapshot" },
})
})
test("prefers runtime models.dev cache over bundled snapshot", () => {
const runtimeSnapshot: ModelCapabilitiesSnapshot = {
...bundledSnapshot,
@@ -322,8 +286,7 @@ describe("getModelCapabilities", () => {
})
expect(result).toMatchObject({
requestedModelID: "openai/o3-mini",
canonicalModelID: "o3-mini",
canonicalModelID: "openai/o3-mini",
family: "openai-reasoning",
variants: ["low", "medium", "high"],
reasoningEfforts: ["none", "minimal", "low", "medium", "high"],

View File

@@ -13,46 +13,14 @@ describe("model-capability-aliases", () => {
})
})
test("strips provider prefixes when the input is already canonical", () => {
const result = resolveModelIDAlias("anthropic/claude-sonnet-4-6")
expect(result).toEqual({
requestedModelID: "anthropic/claude-sonnet-4-6",
canonicalModelID: "claude-sonnet-4-6",
source: "canonical",
})
})
test("normalizes gemini tier aliases through a pattern rule", () => {
test("normalizes exact local tier aliases to canonical models.dev IDs", () => {
const result = resolveModelIDAlias("gemini-3.1-pro-high")
expect(result).toEqual({
requestedModelID: "gemini-3.1-pro-high",
canonicalModelID: "gemini-3.1-pro",
source: "pattern-alias",
ruleID: "gemini-3.1-pro-tier-alias",
})
})
test("normalizes provider-prefixed gemini tier aliases to bare canonical IDs", () => {
const result = resolveModelIDAlias("google/gemini-3.1-pro-high")
expect(result).toEqual({
requestedModelID: "google/gemini-3.1-pro-high",
canonicalModelID: "gemini-3.1-pro",
source: "pattern-alias",
ruleID: "gemini-3.1-pro-tier-alias",
})
})
test("keeps exceptional gemini preview aliases as exact rules", () => {
const result = resolveModelIDAlias("gemini-3-pro-high")
expect(result).toEqual({
requestedModelID: "gemini-3-pro-high",
canonicalModelID: "gemini-3-pro-preview",
source: "exact-alias",
ruleID: "gemini-3-pro-tier-alias",
ruleID: "gemini-3.1-pro-tier-alias",
})
})
@@ -66,45 +34,14 @@ describe("model-capability-aliases", () => {
})
})
test("normalizes provider-prefixed Claude thinking aliases through a pattern rule", () => {
const result = resolveModelIDAlias("anthropic/claude-opus-4-6-thinking")
expect(result).toEqual({
requestedModelID: "anthropic/claude-opus-4-6-thinking",
canonicalModelID: "claude-opus-4-6",
source: "pattern-alias",
ruleID: "claude-thinking-legacy-alias",
})
})
test("does not pattern-match nearby canonical Claude IDs incorrectly", () => {
const result = resolveModelIDAlias("claude-opus-4-6-think")
expect(result).toEqual({
requestedModelID: "claude-opus-4-6-think",
canonicalModelID: "claude-opus-4-6-think",
source: "canonical",
})
})
test("does not pattern-match canonical gemini preview IDs incorrectly", () => {
const result = resolveModelIDAlias("gemini-3.1-pro-preview")
expect(result).toEqual({
requestedModelID: "gemini-3.1-pro-preview",
canonicalModelID: "gemini-3.1-pro-preview",
source: "canonical",
})
})
test("normalizes legacy Claude thinking aliases through a pattern rule", () => {
test("normalizes legacy Claude thinking aliases through a named exact rule", () => {
const result = resolveModelIDAlias("claude-opus-4-6-thinking")
expect(result).toEqual({
requestedModelID: "claude-opus-4-6-thinking",
canonicalModelID: "claude-opus-4-6",
source: "pattern-alias",
ruleID: "claude-thinking-legacy-alias",
source: "exact-alias",
ruleID: "claude-opus-4-6-thinking-legacy-alias",
})
})
})

View File

@@ -20,6 +20,18 @@ export type ModelIDAliasResolution = {
}
const EXACT_ALIAS_RULES: ReadonlyArray<ExactAliasRule> = [
{
aliasModelID: "gemini-3.1-pro-high",
ruleID: "gemini-3.1-pro-tier-alias",
canonicalModelID: "gemini-3.1-pro",
rationale: "OmO historically encoded Gemini tier selection in the model name instead of variant metadata.",
},
{
aliasModelID: "gemini-3.1-pro-low",
ruleID: "gemini-3.1-pro-tier-alias",
canonicalModelID: "gemini-3.1-pro",
rationale: "OmO historically encoded Gemini tier selection in the model name instead of variant metadata.",
},
{
aliasModelID: "gemini-3-pro-high",
ruleID: "gemini-3-pro-tier-alias",
@@ -32,47 +44,30 @@ const EXACT_ALIAS_RULES: ReadonlyArray<ExactAliasRule> = [
canonicalModelID: "gemini-3-pro-preview",
rationale: "Legacy Gemini 3 tier suffixes still need to land on the canonical preview model.",
},
{
aliasModelID: "claude-opus-4-6-thinking",
ruleID: "claude-opus-4-6-thinking-legacy-alias",
canonicalModelID: "claude-opus-4-6",
rationale: "OmO historically used a legacy compatibility suffix before models.dev shipped canonical thinking variants for newer Claude families.",
},
]
const EXACT_ALIAS_RULES_BY_MODEL: ReadonlyMap<string, ExactAliasRule> = new Map(
EXACT_ALIAS_RULES.map((rule) => [rule.aliasModelID, rule]),
)
const PATTERN_ALIAS_RULES: ReadonlyArray<PatternAliasRule> = [
{
ruleID: "claude-thinking-legacy-alias",
description: "Normalizes the legacy Claude Opus 4.6 thinking suffix to the canonical snapshot ID.",
match: (normalizedModelID) => /^claude-opus-4-6-thinking$/.test(normalizedModelID),
canonicalize: () => "claude-opus-4-6",
},
{
ruleID: "gemini-3.1-pro-tier-alias",
description: "Normalizes Gemini 3.1 Pro tier suffixes to the canonical snapshot ID.",
match: (normalizedModelID) => /^gemini-3\.1-pro-(?:high|low)$/.test(normalizedModelID),
canonicalize: () => "gemini-3.1-pro",
},
]
const PATTERN_ALIAS_RULES: ReadonlyArray<PatternAliasRule> = []
function normalizeLookupModelID(modelID: string): string {
return modelID.trim().toLowerCase()
}
function stripProviderPrefixForAliasLookup(normalizedModelID: string): string {
const slashIndex = normalizedModelID.indexOf("/")
if (slashIndex <= 0 || slashIndex === normalizedModelID.length - 1) {
return normalizedModelID
}
return normalizedModelID.slice(slashIndex + 1)
}
export function resolveModelIDAlias(modelID: string): ModelIDAliasResolution {
const requestedModelID = normalizeLookupModelID(modelID)
const aliasLookupModelID = stripProviderPrefixForAliasLookup(requestedModelID)
const exactRule = EXACT_ALIAS_RULES_BY_MODEL.get(aliasLookupModelID)
const normalizedModelID = normalizeLookupModelID(modelID)
const exactRule = EXACT_ALIAS_RULES_BY_MODEL.get(normalizedModelID)
if (exactRule) {
return {
requestedModelID,
requestedModelID: normalizedModelID,
canonicalModelID: exactRule.canonicalModelID,
source: "exact-alias",
ruleID: exactRule.ruleID,
@@ -80,21 +75,21 @@ export function resolveModelIDAlias(modelID: string): ModelIDAliasResolution {
}
for (const rule of PATTERN_ALIAS_RULES) {
if (!rule.match(aliasLookupModelID)) {
if (!rule.match(normalizedModelID)) {
continue
}
return {
requestedModelID,
canonicalModelID: rule.canonicalize(aliasLookupModelID),
requestedModelID: normalizedModelID,
canonicalModelID: rule.canonicalize(normalizedModelID),
source: "pattern-alias",
ruleID: rule.ruleID,
}
}
return {
requestedModelID,
canonicalModelID: aliasLookupModelID,
requestedModelID: normalizedModelID,
canonicalModelID: normalizedModelID,
source: "canonical",
}
}
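After this refactor, alias resolution is a plain map hit on the trimmed, lowercased ID, with no pattern rules and no provider-prefix stripping. A condensed sketch of the flow, with the rule table reduced to a single entry for illustration:

```typescript
// Condensed sketch of resolveModelIDAlias after the refactor: normalize, then
// exact-map lookup. Rule table trimmed to one entry for illustration.
type Resolution = {
  requestedModelID: string
  canonicalModelID: string
  source: "exact-alias" | "canonical"
  ruleID?: string
}

const EXACT_RULES = new Map<string, { ruleID: string; canonicalModelID: string }>([
  ["claude-opus-4-6-thinking", {
    ruleID: "claude-opus-4-6-thinking-legacy-alias",
    canonicalModelID: "claude-opus-4-6",
  }],
])

function resolveAlias(modelID: string): Resolution {
  const normalized = modelID.trim().toLowerCase()
  const rule = EXACT_RULES.get(normalized)
  if (rule) {
    return {
      requestedModelID: normalized,
      canonicalModelID: rule.canonicalModelID,
      source: "exact-alias",
      ruleID: rule.ruleID,
    }
  }
  // Provider-prefixed IDs like "anthropic/claude-opus-4-6-thinking" now fall
  // through as canonical rather than being stripped and re-matched.
  return { requestedModelID: normalized, canonicalModelID: normalized, source: "canonical" }
}
```

This is why the test above now expects `canonicalModelID: "openai/o3-mini"` for a prefixed input: the prefix survives normalization untouched.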

View File

@@ -29,7 +29,7 @@ describe("model-capability-guardrails", () => {
const brokenSnapshot: ModelCapabilitiesSnapshot = {
...bundledSnapshot,
models: Object.fromEntries(
Object.entries(bundledSnapshot.models).filter(([modelID]) => modelID !== "gemini-3-pro-preview"),
Object.entries(bundledSnapshot.models).filter(([modelID]) => modelID !== "gemini-3.1-pro"),
),
}
@@ -41,13 +41,13 @@ describe("model-capability-guardrails", () => {
expect(issues).toContainEqual(
expect.objectContaining({
kind: "alias-target-missing-from-snapshot",
aliasModelID: "gemini-3-pro-high",
canonicalModelID: "gemini-3-pro-preview",
aliasModelID: "gemini-3.1-pro-high",
canonicalModelID: "gemini-3.1-pro",
}),
)
})
test("flags pattern aliases when models.dev gains a canonical entry for the alias itself", () => {
test("flags exact aliases when models.dev gains a canonical entry for the alias itself", () => {
const bundledSnapshot = getBundledModelCapabilitiesSnapshot()
const aliasCollisionSnapshot: ModelCapabilitiesSnapshot = {
...bundledSnapshot,
@@ -66,39 +66,11 @@ describe("model-capability-guardrails", () => {
requirementModelIDs: [],
})
expect(issues).toContainEqual(
expect.objectContaining({
kind: "pattern-alias-collides-with-snapshot",
modelID: "gemini-3.1-pro-high",
canonicalModelID: "gemini-3.1-pro",
}),
)
})
test("flags exact aliases when models.dev gains a canonical entry for the alias itself", () => {
const bundledSnapshot = getBundledModelCapabilitiesSnapshot()
const aliasCollisionSnapshot: ModelCapabilitiesSnapshot = {
...bundledSnapshot,
models: {
...bundledSnapshot.models,
"gemini-3-pro-high": {
id: "gemini-3-pro-high",
family: "gemini",
reasoning: true,
},
},
}
const issues = collectModelCapabilityGuardrailIssues({
snapshot: aliasCollisionSnapshot,
requirementModelIDs: [],
})
expect(issues).toContainEqual(
expect.objectContaining({
kind: "exact-alias-collides-with-snapshot",
aliasModelID: "gemini-3-pro-high",
canonicalModelID: "gemini-3-pro-preview",
aliasModelID: "gemini-3.1-pro-high",
canonicalModelID: "gemini-3.1-pro",
}),
)
})
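The two guardrail kinds asserted above can be sketched as a single pass over the exact-alias table: an alias is broken when its canonical target is missing from the snapshot, and stale when models.dev now ships a canonical entry under the alias ID itself. Shapes below are assumptions reduced from the tests; the real `collectModelCapabilityGuardrailIssues` also takes `requirementModelIDs` and checks pattern rules.

```typescript
// Hedged sketch of the two exact-alias guardrail checks exercised above.
type AliasRule = { aliasModelID: string; canonicalModelID: string }
type Snapshot = { models: Record<string, unknown> }
type Issue = { kind: string; aliasModelID: string; canonicalModelID: string }

function collectIssues(rules: ReadonlyArray<AliasRule>, snapshot: Snapshot): Issue[] {
  const issues: Issue[] = []
  for (const rule of rules) {
    // The alias points at a canonical model the snapshot no longer contains.
    if (!(rule.canonicalModelID in snapshot.models)) {
      issues.push({ kind: "alias-target-missing-from-snapshot", ...rule })
    }
    // models.dev gained a real model under the alias ID; the rule now shadows it.
    if (rule.aliasModelID in snapshot.models) {
      issues.push({ kind: "exact-alias-collides-with-snapshot", ...rule })
    }
  }
  return issues
}
```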

View File

@@ -80,7 +80,7 @@ describe("AGENT_MODEL_REQUIREMENTS", () => {
const second = librarian.fallbackChain[1]
expect(second.providers[0]).toBe("opencode")
expect(second.model).toBe("minimax-m2.7-highspeed")
expect(second.model).toBe("minimax-m2.5")
const tertiary = librarian.fallbackChain[2]
expect(tertiary.providers).toContain("anthropic")
@@ -95,22 +95,22 @@ describe("AGENT_MODEL_REQUIREMENTS", () => {
const explore = AGENT_MODEL_REQUIREMENTS["explore"]
// when - accessing explore requirement
// then - fallbackChain: grok → minimax-m2.7 → minimax-m2.5 → haiku → nano
expect(explore).toBeDefined()
expect(explore.fallbackChain).toBeArray()
expect(explore.fallbackChain).toHaveLength(5)
const primary = explore.fallbackChain[0]
expect(primary.providers).toContain("github-copilot")
expect(primary.providers).toContain("xai")
expect(primary.model).toBe("grok-code-fast-1")
const secondary = explore.fallbackChain[1]
expect(secondary.providers).toContain("opencode-go")
expect(secondary.model).toBe("minimax-m2.7-highspeed")
expect(secondary.model).toBe("minimax-m2.7")
const tertiary = explore.fallbackChain[2]
expect(tertiary.providers).toContain("opencode")
expect(tertiary.model).toBe("minimax-m2.7")
expect(tertiary.model).toBe("minimax-m2.5")
const quaternary = explore.fallbackChain[3]
expect(quaternary.providers).toContain("anthropic")

View File

@@ -47,11 +47,10 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
hephaestus: {
fallbackChain: [
{
- providers: ["openai", "venice", "opencode"],
- model: "gpt-5.3-codex",
+ providers: ["openai", "github-copilot", "venice", "opencode"],
+ model: "gpt-5.4",
variant: "medium",
},
- { providers: ["github-copilot"], model: "gpt-5.4", variant: "medium" },
],
requiresProvider: ["openai", "github-copilot", "venice", "opencode"],
},
@@ -78,16 +77,16 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
librarian: {
fallbackChain: [
{ providers: ["opencode-go"], model: "minimax-m2.7" },
- { providers: ["opencode"], model: "minimax-m2.7-highspeed" },
+ { providers: ["opencode"], model: "minimax-m2.5" },
{ providers: ["anthropic", "opencode"], model: "claude-haiku-4-5" },
{ providers: ["opencode"], model: "gpt-5-nano" },
],
},
explore: {
fallbackChain: [
- { providers: ["github-copilot"], model: "grok-code-fast-1" },
- { providers: ["opencode-go"], model: "minimax-m2.7-highspeed" },
- { providers: ["opencode"], model: "minimax-m2.7" },
+ { providers: ["github-copilot", "xai"], model: "grok-code-fast-1" },
+ { providers: ["opencode-go"], model: "minimax-m2.7" },
+ { providers: ["opencode"], model: "minimax-m2.5" },
{ providers: ["anthropic", "opencode"], model: "claude-haiku-4-5" },
{ providers: ["opencode"], model: "gpt-5-nano" },
],

View File

@@ -26,8 +26,10 @@ describe("opencode-command-dirs", () => {
const dirs = getOpenCodeSkillDirs({ binary: "opencode" })
expect(dirs).toContain("/home/user/.config/opencode/profiles/opus/skills")
+ expect(dirs).toContain("/home/user/.config/opencode/profiles/opus/skill")
+ expect(dirs).toContain("/home/user/.config/opencode/skill")
expect(dirs).toContain("/home/user/.config/opencode/skills")
- expect(dirs).toHaveLength(2)
+ expect(dirs).toHaveLength(4)
})
})
})
@@ -41,7 +43,8 @@ describe("opencode-command-dirs", () => {
const dirs = getOpenCodeSkillDirs({ binary: "opencode" })
expect(dirs).toContain("/home/user/.config/opencode/skills")
- expect(dirs).toHaveLength(1)
+ expect(dirs).toContain("/home/user/.config/opencode/skill")
+ expect(dirs).toHaveLength(2)
})
})
})
@@ -56,9 +59,11 @@ describe("opencode-command-dirs", () => {
const { getOpenCodeCommandDirs } = await import("./opencode-command-dirs")
const dirs = getOpenCodeCommandDirs({ binary: "opencode" })
expect(dirs).toContain("/home/user/.config/opencode/profiles/opus/commands")
+ expect(dirs).toContain("/home/user/.config/opencode/profiles/opus/command")
expect(dirs).toContain("/home/user/.config/opencode/commands")
expect(dirs).toContain("/home/user/.config/opencode/command")
- expect(dirs).toHaveLength(2)
+ expect(dirs).toHaveLength(4)
})
})
})

View File

@@ -14,11 +14,11 @@ function getParentOpencodeConfigDir(configDir: string): string | null {
export function getOpenCodeCommandDirs(options: OpenCodeConfigDirOptions): string[] {
const configDir = getOpenCodeConfigDir(options)
const parentConfigDir = getParentOpencodeConfigDir(configDir)
return Array.from(
new Set([
join(configDir, "commands"),
join(configDir, "command"),
- ...(parentConfigDir ? [join(parentConfigDir, "command")] : []),
+ ...(parentConfigDir ? [join(parentConfigDir, "commands"), join(parentConfigDir, "command")] : []),
])
)
}
@@ -26,11 +26,11 @@ export function getOpenCodeCommandDirs(options: OpenCodeConfigDirOptions): strin
export function getOpenCodeSkillDirs(options: OpenCodeConfigDirOptions): string[] {
const configDir = getOpenCodeConfigDir(options)
const parentConfigDir = getParentOpencodeConfigDir(configDir)
return Array.from(
new Set([
join(configDir, "skills"),
- ...(parentConfigDir ? [join(parentConfigDir, "skills")] : []),
+ join(configDir, "skill"),
+ ...(parentConfigDir ? [join(parentConfigDir, "skills"), join(parentConfigDir, "skill")] : []),
])
)
}

View File

@@ -0,0 +1,23 @@
import { describe, expect, test } from "bun:test"
import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs"
import { join } from "node:path"
import { detectPluginConfigFile } from "./jsonc-parser"
describe("detectPluginConfigFile - canonical config detection", () => {
const testDir = join(__dirname, ".test-detect-plugin-canonical")
test("detects oh-my-openagent config when no legacy config exists", () => {
//#given
if (!existsSync(testDir)) mkdirSync(testDir, { recursive: true })
writeFileSync(join(testDir, "oh-my-openagent.jsonc"), "{}")
//#when
const result = detectPluginConfigFile(testDir)
//#then
expect(result.format).toBe("jsonc")
expect(result.path).toBe(join(testDir, "oh-my-openagent.jsonc"))
rmSync(testDir, { recursive: true, force: true })
})
})

View File

@@ -3,24 +3,24 @@ import { PLUGIN_NAME, CONFIG_BASENAME, LOG_FILENAME, CACHE_DIR_NAME } from "./pl
describe("plugin-identity constants", () => {
describe("PLUGIN_NAME", () => {
- it("equals oh-my-opencode", () => {
+ it("equals oh-my-openagent", () => {
// given
// when
// then
- expect(PLUGIN_NAME).toBe("oh-my-opencode")
+ expect(PLUGIN_NAME).toBe("oh-my-openagent")
})
})
describe("CONFIG_BASENAME", () => {
- it("equals oh-my-opencode", () => {
+ it("equals oh-my-openagent", () => {
// given
// when
// then
- expect(CONFIG_BASENAME).toBe("oh-my-opencode")
+ expect(CONFIG_BASENAME).toBe("oh-my-openagent")
})
})

View File

@@ -1,5 +1,6 @@
- export const PLUGIN_NAME = "oh-my-opencode"
- export const LEGACY_PLUGIN_NAME = "oh-my-openagent"
- export const CONFIG_BASENAME = "oh-my-opencode"
+ export const PLUGIN_NAME = "oh-my-openagent"
+ export const LEGACY_PLUGIN_NAME = "oh-my-opencode"
+ export const CONFIG_BASENAME = "oh-my-openagent"
+ export const LEGACY_CONFIG_BASENAME = "oh-my-opencode"
export const LOG_FILENAME = "oh-my-opencode.log"
export const CACHE_DIR_NAME = "oh-my-opencode"

View File

@@ -0,0 +1,92 @@
import { afterEach, beforeEach, describe, expect, it } from "bun:test"
import { mkdirSync, realpathSync, rmSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"
import {
findProjectAgentsSkillDirs,
findProjectClaudeSkillDirs,
findProjectOpencodeCommandDirs,
findProjectOpencodeSkillDirs,
} from "./project-discovery-dirs"
const TEST_DIR = join(tmpdir(), `project-discovery-dirs-${Date.now()}`)
function canonicalPath(path: string): string {
return realpathSync(path)
}
describe("project-discovery-dirs", () => {
beforeEach(() => {
mkdirSync(TEST_DIR, { recursive: true })
})
afterEach(() => {
rmSync(TEST_DIR, { recursive: true, force: true })
})
it("#given nested .opencode skill directories #when finding project opencode skill dirs #then returns nearest-first with aliases", () => {
// given
const projectDir = join(TEST_DIR, "project")
const childDir = join(projectDir, "apps", "cli")
mkdirSync(join(projectDir, ".opencode", "skill"), { recursive: true })
mkdirSync(join(projectDir, ".opencode", "skills"), { recursive: true })
mkdirSync(join(TEST_DIR, ".opencode", "skills"), { recursive: true })
// when
const directories = findProjectOpencodeSkillDirs(childDir)
// then
expect(directories).toEqual([
canonicalPath(join(projectDir, ".opencode", "skills")),
canonicalPath(join(projectDir, ".opencode", "skill")),
canonicalPath(join(TEST_DIR, ".opencode", "skills")),
])
})
it("#given nested .opencode command directories #when finding project opencode command dirs #then returns nearest-first with aliases", () => {
// given
const projectDir = join(TEST_DIR, "project")
const childDir = join(projectDir, "packages", "tool")
mkdirSync(join(projectDir, ".opencode", "commands"), { recursive: true })
mkdirSync(join(TEST_DIR, ".opencode", "command"), { recursive: true })
// when
const directories = findProjectOpencodeCommandDirs(childDir)
// then
expect(directories).toEqual([
canonicalPath(join(projectDir, ".opencode", "commands")),
canonicalPath(join(TEST_DIR, ".opencode", "command")),
])
})
it("#given ancestor claude and agents skill directories #when finding project compatibility dirs #then discovers both scopes", () => {
// given
const projectDir = join(TEST_DIR, "project")
const childDir = join(projectDir, "src", "nested")
mkdirSync(join(projectDir, ".claude", "skills"), { recursive: true })
mkdirSync(join(TEST_DIR, ".agents", "skills"), { recursive: true })
// when
const claudeDirectories = findProjectClaudeSkillDirs(childDir)
const agentsDirectories = findProjectAgentsSkillDirs(childDir)
// then
expect(claudeDirectories).toEqual([canonicalPath(join(projectDir, ".claude", "skills"))])
expect(agentsDirectories).toEqual([canonicalPath(join(TEST_DIR, ".agents", "skills"))])
})
it("#given a stop directory #when finding ancestor dirs #then it does not scan beyond the stop boundary", () => {
// given
const projectDir = join(TEST_DIR, "project")
const childDir = join(projectDir, "apps", "cli")
mkdirSync(join(projectDir, ".opencode", "skills"), { recursive: true })
mkdirSync(join(TEST_DIR, ".opencode", "skills"), { recursive: true })
// when
const directories = findProjectOpencodeSkillDirs(childDir, projectDir)
// then
expect(directories).toEqual([canonicalPath(join(projectDir, ".opencode", "skills"))])
})
})

View File

@@ -0,0 +1,101 @@
import { execFileSync } from "node:child_process"
import { existsSync, realpathSync } from "node:fs"
import { dirname, join, resolve } from "node:path"
function normalizePath(path: string): string {
const resolvedPath = resolve(path)
if (!existsSync(resolvedPath)) {
return resolvedPath
}
try {
return realpathSync(resolvedPath)
} catch {
return resolvedPath
}
}
function findAncestorDirectories(
startDirectory: string,
targetPaths: ReadonlyArray<ReadonlyArray<string>>,
stopDirectory?: string,
): string[] {
const directories: string[] = []
const seen = new Set<string>()
let currentDirectory = normalizePath(startDirectory)
const resolvedStopDirectory = stopDirectory ? normalizePath(stopDirectory) : undefined
while (true) {
for (const targetPath of targetPaths) {
const candidateDirectory = join(currentDirectory, ...targetPath)
if (!existsSync(candidateDirectory) || seen.has(candidateDirectory)) {
continue
}
seen.add(candidateDirectory)
directories.push(candidateDirectory)
}
if (resolvedStopDirectory === currentDirectory) {
return directories
}
const parentDirectory = dirname(currentDirectory)
if (parentDirectory === currentDirectory) {
return directories
}
currentDirectory = normalizePath(parentDirectory)
}
}
function detectWorktreePath(directory: string): string | undefined {
try {
return execFileSync("git", ["rev-parse", "--show-toplevel"], {
cwd: directory,
encoding: "utf-8",
timeout: 5000,
stdio: ["pipe", "pipe", "pipe"],
}).trim()
} catch {
return undefined
}
}
export function findProjectClaudeSkillDirs(startDirectory: string, stopDirectory?: string): string[] {
return findAncestorDirectories(
startDirectory,
[[".claude", "skills"]],
stopDirectory ?? detectWorktreePath(startDirectory),
)
}
export function findProjectAgentsSkillDirs(startDirectory: string, stopDirectory?: string): string[] {
return findAncestorDirectories(
startDirectory,
[[".agents", "skills"]],
stopDirectory ?? detectWorktreePath(startDirectory),
)
}
export function findProjectOpencodeSkillDirs(startDirectory: string, stopDirectory?: string): string[] {
return findAncestorDirectories(
startDirectory,
[
[".opencode", "skills"],
[".opencode", "skill"],
],
stopDirectory ?? detectWorktreePath(startDirectory),
)
}
export function findProjectOpencodeCommandDirs(startDirectory: string, stopDirectory?: string): string[] {
return findAncestorDirectories(
startDirectory,
[
[".opencode", "commands"],
[".opencode", "command"],
],
stopDirectory ?? detectWorktreePath(startDirectory),
)
}
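The ancestor walk above (collect matching directories nearest-first, stop at a boundary) can be exercised with a small self-contained sketch. Note this re-implements the walk in simplified form for illustration — `findAncestorDirs` below is a hypothetical stand-in, not the module's own export, and it omits the `realpathSync` normalization and git-worktree detection.

```typescript
import { existsSync, mkdirSync, mkdtempSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { dirname, join } from "node:path";

// Simplified sketch of the ancestor-directory walk: from startDir up toward
// the filesystem root, collect every existing target directory nearest-first,
// and stop once stopDir has been scanned.
function findAncestorDirs(startDir: string, targets: string[][], stopDir?: string): string[] {
  const found: string[] = [];
  let current = startDir;
  while (true) {
    for (const target of targets) {
      const candidate = join(current, ...target);
      if (existsSync(candidate) && !found.includes(candidate)) found.push(candidate);
    }
    if (current === stopDir) break;
    const parent = dirname(current);
    if (parent === current) break; // reached filesystem root
    current = parent;
  }
  return found;
}

// Build a throwaway layout mirroring the tests above.
const root = mkdtempSync(join(tmpdir(), "walk-demo-"));
const project = join(root, "project");
mkdirSync(join(project, ".opencode", "skills"), { recursive: true });
mkdirSync(join(root, ".opencode", "skill"), { recursive: true });

const dirs = findAncestorDirs(
  join(project, "apps", "cli"),
  [[".opencode", "skills"], [".opencode", "skill"]],
  root,
);
console.log(dirs); // nearest-first: project skills dir, then root skill dir

rmSync(root, { recursive: true, force: true });
```

The nearest-first ordering is what lets project-local skill directories shadow ones higher up the tree.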

View File

@@ -239,6 +239,7 @@ Available categories: ${categoryNames.join(", ")}`,
modelInfo,
actualModel,
isUnstableAgent,
- fallbackChain: configuredFallbackChain ?? requirement?.fallbackChain,
+ // Don't use hardcoded fallback chain when resolution was skipped (cold cache)
+ fallbackChain: configuredFallbackChain ?? (isModelResolutionSkipped ? undefined : requirement?.fallbackChain),
}
}

View File

@@ -125,12 +125,26 @@ Create the work plan directly - that's your job as the planning agent.`,
systemDefaultModel: undefined,
})
- if (resolution && !('skipped' in resolution)) {
+ const resolutionSkipped = resolution && 'skipped' in resolution
+ if (resolution && !resolutionSkipped) {
const normalized = normalizeModelFormat(resolution.model)
if (normalized) {
const variantToUse = agentOverride?.variant ?? resolution.variant
categoryModel = variantToUse ? { ...normalized, variant: variantToUse } : normalized
}
+ } else if (resolutionSkipped && agentOverride?.model) {
+ // Cold cache: resolution was skipped but user explicitly configured a model.
+ // Honor the user override directly — don't fall through to hardcoded fallback chain.
+ const normalized = normalizeModelFormat(agentOverride.model)
+ if (normalized) {
+ const variantToUse = agentOverride?.variant
+ categoryModel = variantToUse ? { ...normalized, variant: variantToUse } : normalized
+ log("[delegate-task] Cold cache: using explicit user override for subagent", {
+ agent: agentToUse,
+ model: agentOverride.model,
+ })
+ }
}
const defaultProviderID = categoryModel?.providerID
@@ -140,7 +154,9 @@ Create the work plan directly - that's your job as the planning agent.`,
normalizedAgentFallbackModels,
defaultProviderID,
)
- fallbackChain = configuredFallbackChain ?? agentRequirement?.fallbackChain
+ // Don't assign hardcoded fallback chain when resolution was skipped (cold cache)
+ // — the chain may contain model IDs that don't exist in the provider yet.
+ fallbackChain = configuredFallbackChain ?? (resolutionSkipped ? undefined : agentRequirement?.fallbackChain)
// Only promote fallback-only settings when resolution actually selected a fallback model.
const resolvedFallbackEntry = (resolution && !('skipped' in resolution)) ? resolution.fallbackEntry : undefined

View File

@@ -1,22 +1,14 @@
- import { describe, expect, test, mock, beforeEach } from "bun:test"
+ import { afterEach, beforeEach, describe, expect, spyOn, test } from "bun:test"
+ import * as childProcess from "node:child_process"
import { existsSync, mkdtempSync, writeFileSync, unlinkSync, rmSync } from "node:fs"
import { tmpdir } from "node:os"
import { dirname, join } from "node:path"
- const originalChildProcess = await import("node:child_process")
+ type ImageConverterModule = typeof import("./image-converter")
- const execFileSyncMock = mock((_command: string, _args: string[], _options?: unknown) => "")
- const execSyncMock = mock(() => {
- throw new Error("execSync should not be called")
- })
- mock.module("node:child_process", () => ({
- ...originalChildProcess,
- execFileSync: execFileSyncMock,
- execSync: execSyncMock,
- }))
- const { convertImageToJpeg, cleanupConvertedImage } = await import("./image-converter")
+ async function loadImageConverter(): Promise<ImageConverterModule> {
+ return import(`./image-converter?test=${Date.now()}-${Math.random()}`)
+ }
function writeConvertedOutput(command: string, args: string[]): void {
if (command === "sips") {
@@ -38,7 +30,10 @@ function writeConvertedOutput(command: string, args: string[]): void {
}
}
- function withMockPlatform<TValue>(platform: NodeJS.Platform, run: () => TValue): TValue {
+ async function withMockPlatform<TValue>(
+ platform: NodeJS.Platform,
+ run: () => TValue | Promise<TValue>,
+ ): Promise<TValue> {
const originalPlatform = process.platform
Object.defineProperty(process, "platform", {
value: platform,
@@ -46,7 +41,7 @@ function withMockPlatform<TValue>(platform: NodeJS.Platform, run: () => TValue):
})
try {
- return run()
+ return await run()
} finally {
Object.defineProperty(process, "platform", {
value: originalPlatform,
@@ -56,34 +51,50 @@ function withMockPlatform<TValue>(platform: NodeJS.Platform, run: () => TValue):
}
describe("image-converter command execution safety", () => {
+ let execFileSyncSpy: ReturnType<typeof spyOn>
+ let execSyncSpy: ReturnType<typeof spyOn>
beforeEach(() => {
- execFileSyncMock.mockReset()
- execSyncMock.mockReset()
+ execSyncSpy = spyOn(childProcess, "execSync").mockImplementation(() => {
+ throw new Error("execSync should not be called")
+ })
+ execFileSyncSpy = spyOn(childProcess, "execFileSync").mockImplementation(
+ ((_command: string, _args: string[], _options?: unknown) => "") as typeof childProcess.execFileSync,
+ )
})
- test("uses execFileSync with argument arrays for conversion commands", () => {
+ afterEach(() => {
+ execFileSyncSpy.mockRestore()
+ execSyncSpy.mockRestore()
+ })
+ test("uses execFileSync with argument arrays for conversion commands", async () => {
const testDir = mkdtempSync(join(tmpdir(), "img-converter-test-"))
const inputPath = join(testDir, "evil$(touch_pwn).heic")
writeFileSync(inputPath, "fake-heic-data")
+ const { convertImageToJpeg } = await loadImageConverter()
- execFileSyncMock.mockImplementation((command: string, args: string[]) => {
- writeConvertedOutput(command, args)
- return ""
- })
+ execFileSyncSpy.mockImplementation(
+ ((command: string, args: string[]) => {
+ writeConvertedOutput(command, args)
+ return ""
+ }) as typeof childProcess.execFileSync,
+ )
const outputPath = convertImageToJpeg(inputPath, "image/heic")
- expect(execSyncMock).not.toHaveBeenCalled()
- expect(execFileSyncMock).toHaveBeenCalled()
+ expect(execSyncSpy).not.toHaveBeenCalled()
+ expect(execFileSyncSpy).toHaveBeenCalled()
- const [firstCommand, firstArgs] = execFileSyncMock.mock.calls[0] as [string, string[]]
+ const [firstCommand, firstArgs] = execFileSyncSpy.mock.calls[0] as [string, string[]]
expect(typeof firstCommand).toBe("string")
expect(Array.isArray(firstArgs)).toBe(true)
expect(["sips", "convert", "magick"]).toContain(firstCommand)
expect(firstArgs).toContain("--")
expect(firstArgs).toContain(inputPath)
expect(firstArgs.indexOf("--") < firstArgs.indexOf(inputPath)).toBe(true)
- expect(firstArgs.join(" ")).not.toContain(`\"${inputPath}\"`)
+ expect(firstArgs.join(" ")).not.toContain(`"${inputPath}"`)
expect(existsSync(outputPath)).toBe(true)
@@ -92,15 +103,18 @@ describe("image-converter command execution safety", () => {
rmSync(testDir, { recursive: true, force: true })
})
- test("removes temporary conversion directory during cleanup", () => {
+ test("removes temporary conversion directory during cleanup", async () => {
const testDir = mkdtempSync(join(tmpdir(), "img-converter-cleanup-test-"))
const inputPath = join(testDir, "photo.heic")
writeFileSync(inputPath, "fake-heic-data")
+ const { convertImageToJpeg, cleanupConvertedImage } = await loadImageConverter()
- execFileSyncMock.mockImplementation((command: string, args: string[]) => {
- writeConvertedOutput(command, args)
- return ""
- })
+ execFileSyncSpy.mockImplementation(
+ ((command: string, args: string[]) => {
+ writeConvertedOutput(command, args)
+ return ""
+ }) as typeof childProcess.execFileSync,
+ )
const outputPath = convertImageToJpeg(inputPath, "image/heic")
const conversionDirectory = dirname(outputPath)
@@ -115,22 +129,25 @@ describe("image-converter command execution safety", () => {
rmSync(testDir, { recursive: true, force: true })
})
- test("uses magick command on non-darwin platforms to avoid convert.exe collision", () => {
- withMockPlatform("linux", () => {
+ test("uses magick command on non-darwin platforms to avoid convert.exe collision", async () => {
+ await withMockPlatform("linux", async () => {
const testDir = mkdtempSync(join(tmpdir(), "img-converter-platform-test-"))
const inputPath = join(testDir, "photo.heic")
writeFileSync(inputPath, "fake-heic-data")
+ const { convertImageToJpeg, cleanupConvertedImage } = await loadImageConverter()
- execFileSyncMock.mockImplementation((command: string, args: string[]) => {
- if (command === "magick") {
- writeFileSync(args[2], "jpeg")
- }
- return ""
- })
+ execFileSyncSpy.mockImplementation(
+ ((command: string, args: string[]) => {
+ if (command === "magick") {
+ writeFileSync(args[2], "jpeg")
+ }
+ return ""
+ }) as typeof childProcess.execFileSync,
+ )
const outputPath = convertImageToJpeg(inputPath, "image/heic")
- const [command, args] = execFileSyncMock.mock.calls[0] as [string, string[]]
+ const [command, args] = execFileSyncSpy.mock.calls[0] as [string, string[]]
expect(command).toBe("magick")
expect(args).toContain("--")
expect(args.indexOf("--") < args.indexOf(inputPath)).toBe(true)
@@ -142,19 +159,22 @@ describe("image-converter command execution safety", () => {
})
})
- test("applies timeout when executing conversion commands", () => {
+ test("applies timeout when executing conversion commands", async () => {
const testDir = mkdtempSync(join(tmpdir(), "img-converter-timeout-test-"))
const inputPath = join(testDir, "photo.heic")
writeFileSync(inputPath, "fake-heic-data")
+ const { convertImageToJpeg, cleanupConvertedImage } = await loadImageConverter()
- execFileSyncMock.mockImplementation((command: string, args: string[]) => {
- writeConvertedOutput(command, args)
- return ""
- })
+ execFileSyncSpy.mockImplementation(
+ ((command: string, args: string[]) => {
+ writeConvertedOutput(command, args)
+ return ""
+ }) as typeof childProcess.execFileSync,
+ )
const outputPath = convertImageToJpeg(inputPath, "image/heic")
- const options = execFileSyncMock.mock.calls[0]?.[2] as { timeout?: number } | undefined
+ const options = execFileSyncSpy.mock.calls[0]?.[2] as { timeout?: number } | undefined
expect(options).toBeDefined()
expect(typeof options?.timeout).toBe("number")
expect((options?.timeout ?? 0) > 0).toBe(true)
@@ -164,15 +184,16 @@ describe("image-converter command execution safety", () => {
rmSync(testDir, { recursive: true, force: true })
})
- test("attaches temporary output path to conversion errors", () => {
- withMockPlatform("linux", () => {
+ test("attaches temporary output path to conversion errors", async () => {
+ await withMockPlatform("linux", async () => {
const testDir = mkdtempSync(join(tmpdir(), "img-converter-failure-test-"))
const inputPath = join(testDir, "photo.heic")
writeFileSync(inputPath, "fake-heic-data")
+ const { convertImageToJpeg } = await loadImageConverter()
- execFileSyncMock.mockImplementation(() => {
+ execFileSyncSpy.mockImplementation((() => {
throw new Error("conversion process failed")
- })
+ }) as typeof childProcess.execFileSync)
const runConversion = () => convertImageToJpeg(inputPath, "image/heic")
expect(runConversion).toThrow("No image conversion tool available")
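The migration above replaces bun's module-level `mock.module()` (whose cache pollution leaks into parallel test files, per the commit messages) with restorable spies. A minimal self-contained sketch of the replace-and-restore pattern the spies rely on — the `spyOn` helper and `fakeChildProcess` below are illustrative stand-ins, not bun's actual API:

```typescript
// Replace a method on a target object for the duration of a test, record
// every call, and restore the original so later tests see the real thing.
function spyOn<T, K extends keyof T>(target: T, key: K, impl: T[K]) {
  const original = target[key];
  const calls: unknown[][] = [];
  target[key] = ((...args: unknown[]) => {
    calls.push(args);
    return (impl as unknown as (...a: unknown[]) => unknown)(...args);
  }) as T[K];
  return { calls, restore: () => { target[key] = original; } };
}

// Stand-in for the module being spied on (hypothetical, for illustration).
const fakeChildProcess = { execFileSync: (cmd: string) => `ran ${cmd}` };

const spy = spyOn(fakeChildProcess, "execFileSync", () => "");
fakeChildProcess.execFileSync("sips"); // intercepted and recorded
console.log(spy.calls.length);

spy.restore(); // afterEach: put the real implementation back
console.log(fakeChildProcess.execFileSync("magick"));
```

Because the replacement is scoped to an object property rather than the module loader's cache, restoring it in `afterEach` leaves no cross-file contamination.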

View File

@@ -1,4 +1,4 @@
- import { execFileSync } from "node:child_process"
+ import * as childProcess from "node:child_process"
import { existsSync, mkdtempSync, readFileSync, rmSync, unlinkSync, writeFileSync } from "node:fs"
import { tmpdir } from "node:os"
import { dirname, join } from "node:path"
@@ -59,7 +59,7 @@ export function convertImageToJpeg(inputPath: string, mimeType: string): string
try {
if (process.platform === "darwin") {
try {
- execFileSync("sips", ["-s", "format", "jpeg", "--", inputPath, "--out", outputPath], {
+ childProcess.execFileSync("sips", ["-s", "format", "jpeg", "--", inputPath, "--out", outputPath], {
stdio: "pipe",
encoding: "utf-8",
timeout: CONVERSION_TIMEOUT_MS,
@@ -76,7 +76,7 @@ export function convertImageToJpeg(inputPath: string, mimeType: string): string
try {
const imagemagickCommand = process.platform === "darwin" ? "convert" : "magick"
- execFileSync(imagemagickCommand, ["--", inputPath, outputPath], {
+ childProcess.execFileSync(imagemagickCommand, ["--", inputPath, outputPath], {
stdio: "pipe",
encoding: "utf-8",
timeout: CONVERSION_TIMEOUT_MS,

View File

@@ -181,4 +181,78 @@ Use parent opencode commit command.
expect(commitCommand?.scope).toBe("opencode")
expect(commitCommand?.content).toContain("Use parent opencode commit command.")
})
it("discovers ancestor project opencode commands from plural commands directory", () => {
const projectRoot = join(projectDir, "workspace")
const childDir = join(projectRoot, "apps", "cli")
const commandsDir = join(projectRoot, ".opencode", "commands")
mkdirSync(childDir, { recursive: true })
mkdirSync(commandsDir, { recursive: true })
writeFileSync(
join(commandsDir, "ancestor.md"),
`---
description: Discover command from ancestor plural directory
---
Use ancestor command.
`,
)
const commands = discoverCommandsSync(childDir)
const ancestorCommand = commands.find((command) => command.name === "ancestor")
expect(ancestorCommand?.scope).toBe("opencode-project")
expect(ancestorCommand?.content).toContain("Use ancestor command.")
})
it("deduplicates same-named opencode commands while keeping the higher-priority alias", () => {
const commandsRoot = join(projectDir, ".opencode")
const singularDir = join(commandsRoot, "command")
const pluralDir = join(commandsRoot, "commands")
mkdirSync(singularDir, { recursive: true })
mkdirSync(pluralDir, { recursive: true })
writeFileSync(
join(singularDir, "duplicate.md"),
`---
description: Singular duplicate command
---
Use singular command.
`,
)
writeFileSync(
join(pluralDir, "duplicate.md"),
`---
description: Plural duplicate command
---
Use plural command.
`,
)
const commands = discoverCommandsSync(projectDir)
const duplicates = commands.filter((command) => command.name === "duplicate")
expect(duplicates).toHaveLength(1)
expect(duplicates[0]?.content).toContain("Use plural command.")
})
it("discovers nested opencode project commands", () => {
const commandsDir = join(projectDir, ".opencode", "commands", "refactor")
mkdirSync(commandsDir, { recursive: true })
writeFileSync(
join(commandsDir, "code.md"),
`---
description: Nested command
---
Use nested command.
`,
)
const commands = discoverCommandsSync(projectDir)
const nestedCommand = commands.find((command) => command.name === "refactor/code")
expect(nestedCommand?.content).toContain("Use nested command.")
expect(nestedCommand?.scope).toBe("opencode-project")
})
})

View File

@@ -3,6 +3,7 @@ import { basename, join } from "path"
import {
parseFrontmatter,
sanitizeModelField,
+ findProjectOpencodeCommandDirs,
getOpenCodeCommandDirs,
discoverPluginCommandDefinitions,
} from "../../shared"
@@ -17,17 +18,37 @@ export interface CommandDiscoveryOptions {
enabledPluginsOverride?: Record<string, boolean>
}
- function discoverCommandsFromDir(commandsDir: string, scope: CommandScope): CommandInfo[] {
+ const NESTED_COMMAND_SEPARATOR = "/"
+ function discoverCommandsFromDir(
+ commandsDir: string,
+ scope: CommandScope,
+ prefix = "",
+ ): CommandInfo[] {
if (!existsSync(commandsDir)) return []
const entries = readdirSync(commandsDir, { withFileTypes: true })
const commands: CommandInfo[] = []
for (const entry of entries) {
+ if (entry.isDirectory()) {
+ if (entry.name.startsWith(".")) continue
+ const nestedPrefix = prefix
+ ? `${prefix}${NESTED_COMMAND_SEPARATOR}${entry.name}`
+ : entry.name
+ commands.push(
+ ...discoverCommandsFromDir(join(commandsDir, entry.name), scope, nestedPrefix),
+ )
+ continue
+ }
if (!isMarkdownFile(entry)) continue
const commandPath = join(commandsDir, entry.name)
- const commandName = basename(entry.name, ".md")
+ const baseCommandName = basename(entry.name, ".md")
+ const commandName = prefix
+ ? `${prefix}${NESTED_COMMAND_SEPARATOR}${baseCommandName}`
+ : baseCommandName
try {
const content = readFileSync(commandPath, "utf-8")
@@ -75,6 +96,22 @@ function discoverPluginCommands(options?: CommandDiscoveryOptions): CommandInfo[
}))
}
function deduplicateCommandInfosByName(commands: CommandInfo[]): CommandInfo[] {
const seen = new Set<string>()
const deduplicatedCommands: CommandInfo[] = []
for (const command of commands) {
if (seen.has(command.name)) {
continue
}
seen.add(command.name)
deduplicatedCommands.push(command)
}
return deduplicatedCommands
}
export function discoverCommandsSync(
directory?: string,
options?: CommandDiscoveryOptions,
@@ -82,14 +119,16 @@ export function discoverCommandsSync(
const userCommandsDir = join(getClaudeConfigDir(), "commands")
const projectCommandsDir = join(directory ?? process.cwd(), ".claude", "commands")
const opencodeGlobalDirs = getOpenCodeCommandDirs({ binary: "opencode" })
- const opencodeProjectDir = join(directory ?? process.cwd(), ".opencode", "command")
+ const opencodeProjectDirs = findProjectOpencodeCommandDirs(directory ?? process.cwd())
const userCommands = discoverCommandsFromDir(userCommandsDir, "user")
const opencodeGlobalCommands = opencodeGlobalDirs.flatMap((commandsDir) =>
discoverCommandsFromDir(commandsDir, "opencode")
)
const projectCommands = discoverCommandsFromDir(projectCommandsDir, "project")
- const opencodeProjectCommands = discoverCommandsFromDir(opencodeProjectDir, "opencode-project")
+ const opencodeProjectCommands = opencodeProjectDirs.flatMap((commandsDir) =>
+ discoverCommandsFromDir(commandsDir, "opencode-project"),
+ )
const pluginCommands = discoverPluginCommands(options)
const builtinCommandsMap = loadBuiltinCommands()
@@ -107,12 +146,12 @@ export function discoverCommandsSync(
scope: "builtin",
}))
- return [
+ return deduplicateCommandInfosByName([
...projectCommands,
...userCommands,
...opencodeProjectCommands,
...opencodeGlobalCommands,
...builtinCommands,
...pluginCommands,
- ]
+ ])
}
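The dedup added above is first-occurrence-wins over an already priority-ordered list, so earlier scopes shadow later ones with the same command name. A minimal sketch of that behavior (the `CommandEntry` shape and `dedupeByName` name below are illustrative, not the module's own):

```typescript
// First-occurrence-wins dedup: because the input list is ordered by scope
// priority (project before builtin, etc.), a project command with the same
// name as a builtin one shadows it.
interface CommandEntry { name: string; scope: string }

function dedupeByName(commands: CommandEntry[]): CommandEntry[] {
  const seen = new Set<string>();
  const result: CommandEntry[] = [];
  for (const command of commands) {
    if (seen.has(command.name)) continue; // later duplicate loses
    seen.add(command.name);
    result.push(command);
  }
  return result;
}

const result = dedupeByName([
  { name: "commit", scope: "project" },   // higher priority: kept
  { name: "commit", scope: "builtin" },   // same name: dropped
  { name: "release", scope: "opencode" }, // unique: kept
]);
console.log(result);
```

This is why the test above expects the plural `commands/` alias (listed first among the opencode project dirs) to win over the singular `command/` copy.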

View File

@@ -0,0 +1,62 @@
import { afterEach, beforeEach, describe, expect, it } from "bun:test"
import { mkdtempSync, mkdirSync, rmSync, writeFileSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"
import { discoverCommandsSync } from "./command-discovery"
function writeCommand(path: string, description: string, body: string): void {
mkdirSync(join(path, ".."), { recursive: true })
writeFileSync(path, `---\ndescription: ${description}\n---\n${body}\n`)
}
describe("opencode project command discovery", () => {
let tempDir = ""
beforeEach(() => {
tempDir = mkdtempSync(join(tmpdir(), "omo-opencode-project-command-discovery-"))
})
afterEach(() => {
rmSync(tempDir, { recursive: true, force: true })
})
it("discovers ancestor opencode commands with slash-separated nested names and worktree boundaries", () => {
// given
const repositoryDir = join(tempDir, "repo")
const nestedDirectory = join(repositoryDir, "packages", "app", "src")
mkdirSync(nestedDirectory, { recursive: true })
// Use Bun.spawnSync instead of execFileSync to avoid mock leakage
// from parallel test files (e.g. image-converter.test.ts mocks execFileSync globally)
Bun.spawnSync(["git", "init"], {
cwd: repositoryDir,
stdout: "ignore",
stderr: "ignore",
})
writeCommand(
join(repositoryDir, ".opencode", "commands", "deploy", "staging.md"),
"Deploy to staging",
"Run the staged deploy.",
)
writeCommand(
join(repositoryDir, ".opencode", "command", "release.md"),
"Release command",
"Run the release.",
)
writeCommand(
join(tempDir, ".opencode", "commands", "outside.md"),
"Outside command",
"Should not be discovered.",
)
// when
const names = discoverCommandsSync(nestedDirectory).map(command => command.name)
// then
expect(names).toContain("deploy/staging")
expect(names).toContain("release")
expect(names).not.toContain("deploy:staging")
expect(names).not.toContain("outside")
})
})