Compare commits
84 commits
fix/merge-...
fix/fallba...
| Author | SHA1 | Date |
|---|---|---|
| | 2bb211c979 | |
| | bf51919a79 | |
| | f5f1d1d4c2 | |
| | d8da89fd5b | |
| | 1a5672ab6c | |
| | 0832505e13 | |
| | 4bbc55bb02 | |
| | 42b34fb5d2 | |
| | 41f2050cf0 | |
| | 0397470f02 | |
| | 2021080e7c | |
| | 27f60fb4d2 | |
| | 51204f2b67 | |
| | c672a2beed | |
| | 6ec6642e13 | |
| | 4462124eee | |
| | 0f46e5b71a | |
| | 39542330c6 | |
| | 9d731f59ad | |
| | 52b2afb6b0 | |
| | b8a6f10f70 | |
| | f4aeee18a4 | |
| | 40dccd6118 | |
| | f3e6cab2f8 | |
| | 3dba1c49d4 | |
| | ac1eb30fda | |
| | 5f78c07189 | |
| | d2dc25e567 | |
| | 541f0d354d | |
| | f3c8b0d098 | |
| | e758623a2e | |
| | 3bcbd12e2a | |
| | 39a3e39b6b | |
| | 44a1604656 | |
| | 13fa8bccf9 | |
| | ddc2edfa0a | |
| | 6e82ef2384 | |
| | 850fb0378e | |
| | a85f7efb1d | |
| | 64e8e164aa | |
| | ca655a7deb | |
| | d4e7ddc9b9 | |
| | c995c5b2c3 | |
| | 0a58debd92 | |
| | acc28a89c1 | |
| | 3adade46e3 | |
| | e14a4cfc77 | |
| | dda5bfa3b9 | |
| | eb0931ed6d | |
| | 5647cf83cd | |
| | 09f62b1d40 | |
| | 5f9b6cf176 | |
| | 7c71a2dbbf | |
| | 35d071b1be | |
| | 64b2d69036 | |
| | 50de1a18f2 | |
| | 02bb5d43cc | |
| | 8c19a7b7f8 | |
| | da561118ce | |
| | 29d85bb63d | |
| | b7c6391bd5 | |
| | c8eb0dbae3 | |
| | 86a1bfa493 | |
| | b86489ac92 | |
| | 697a2f5a4c | |
| | 7027b55c56 | |
| | effbc54767 | |
| | 6909e5fb4c | |
| | 98d39ceea0 | |
| | 36432fe18e | |
| | d9ee0d9c0d | |
| | 3b8846e956 | |
| | b1008510f8 | |
| | fb596ed149 | |
| | a551fceca9 | |
| | 9fa9dace2c | |
| | e5ede6dc8c | |
| | fbf3018ee4 | |
| | 810ebc0428 | |
| | 5360cdb59b | |
| | 462bf7b277 | |
| | 8b3cc5e011 | |
| | 42b082b469 | |
| | 0d1b6ebe2c | |
````diff
@@ -1,6 +1,6 @@
 # oh-my-opencode — OpenCode Plugin
 
-**Generated:** 2026-02-19 | **Commit:** 5dc437f4 | **Branch:** dev
+**Generated:** 2026-02-19 | **Commit:** 29ebd8c4 | **Branch:** dev
 
 ## OVERVIEW
@@ -86,7 +86,7 @@ Fields: agents (14 overridable), categories (8 built-in + custom), disabled_* ar
 - **Test pattern**: Bun test (`bun:test`), co-located `*.test.ts`, given/when/then style
 - **Factory pattern**: `createXXX()` for all tools, hooks, agents
-- **Hook tiers**: Session (22) → Tool-Guard (9) → Transform (4) → Continuation (7) → Skill (2)
+- **Hook tiers**: Session (21) → Tool-Guard (10) → Transform (4) → Continuation (7) → Skill (2)
 - **Agent modes**: `primary` (respects UI model) vs `subagent` (own fallback chain) vs `all`
 - **Model resolution**: 3-step: override → category-default → provider-fallback → system-default
 - **Config format**: JSONC with comments, Zod v4 validation, snake_case keys
````
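The conventions in this hunk mention a `createXXX()` factory pattern and given/when/then tests with `bun:test`. A minimal sketch of both conventions; `createEchoTool` and the `ToolDefinition` shape are invented for illustration, not the plugin's real API:

```typescript
// Hypothetical factory in the createXXX() style the conventions describe.
interface ToolDefinition {
  name: string;
  execute: (input: string) => string;
}

export function createEchoTool(prefix: string): ToolDefinition {
  return {
    name: "echo",
    execute: (input: string) => `${prefix}${input}`,
  };
}

// Co-located echo.test.ts in given/when/then style with bun:test:
//
// import { describe, expect, test } from "bun:test";
// import { createEchoTool } from "./echo";
//
// describe("createEchoTool", () => {
//   test("given a prefix, when executed, then it prepends the prefix", () => {
//     const tool = createEchoTool("> ");
//     expect(tool.execute("hi")).toBe("> hi");
//   });
// });
```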
````diff
@@ -109,18 +109,20 @@ After making changes, you can test your local build in OpenCode:
 ```
 oh-my-opencode/
 ├── src/
-│   ├── agents/           # AI agents (OmO, oracle, librarian, explore, etc.)
-│   ├── hooks/            # 21 lifecycle hooks
-│   ├── tools/            # LSP (11), AST-Grep, Grep, Glob, etc.
-│   ├── mcp/              # MCP server integrations (context7, grep_app)
-│   ├── features/         # Claude Code compatibility layers
-│   ├── config/           # Zod schemas and TypeScript types
-│   ├── auth/             # Google Antigravity OAuth
-│   ├── shared/           # Common utilities
-│   └── index.ts          # Main plugin entry (OhMyOpenCodePlugin)
-├── script/               # Build utilities (build-schema.ts, publish.ts)
-├── assets/               # JSON schema
-└── dist/                 # Build output (ESM + .d.ts)
+│   ├── index.ts          # Plugin entry (OhMyOpenCodePlugin)
+│   ├── plugin-config.ts  # JSONC multi-level config (Zod v4)
+│   ├── agents/           # 11 agents (Sisyphus, Hephaestus, Oracle, Librarian, Explore, Atlas, Prometheus, Metis, Momus, Multimodal-Looker, Sisyphus-Junior)
+│   ├── hooks/            # 44 lifecycle hooks across 39 directories
+│   ├── tools/            # 26 tools across 15 directories
+│   ├── mcp/              # 3 built-in remote MCPs (websearch, context7, grep_app)
+│   ├── features/         # 19 feature modules (background-agent, skill-loader, tmux, MCP-OAuth, etc.)
+│   ├── config/           # Zod v4 schema system
+│   ├── shared/           # Cross-cutting utilities
+│   ├── cli/              # CLI: install, run, doctor, mcp-oauth (Commander.js)
+│   ├── plugin/           # 8 OpenCode hook handlers + hook composition
+│   └── plugin-handlers/  # 6-phase config loading pipeline
+├── packages/             # Monorepo: comment-checker, opencode-sdk
+└── dist/                 # Build output (ESM + .d.ts)
 ```
 
 ## Development Workflow
````
README.ja.md (18 lines changed)
````diff
@@ -183,6 +183,7 @@ Windows から Linux に初めて乗り換えた時のこと、自分の思い
 - Librarian: 公式ドキュメント、オープンソース実装、コードベース探索 (GLM-4.7)
 - Explore: 超高速コードベース探索 (Contextual Grep) (Grok Code Fast 1)
 - Full LSP / AstGrep Support: 決定的にリファクタリングしましょう。
 - ハッシュアンカード編集ツール: `LINE#ID` 形式で変更前にコンテンツハッシュを検証します。古い行の編集はもう不要です。
 - Todo Continuation Enforcer: 途中で諦めたら、続行を強制します。これがシジフォスに岩を転がし続けさせる秘訣です。
 - Comment Checker: AIが過剰なコメントを付けないようにします。シジフォスが生成したコードは、人間が書いたものと区別がつかないべきです。
 - Claude Code Compatibility: Command, Agent, Skill, MCP, Hook(PreToolUse, PostToolUse, UserPromptSubmit, Stop)
@@ -234,14 +235,6 @@ Windows から Linux に初めて乗り換えた時のこと、自分の思い
 ### 人間の方へ
 
-インストールガイドを取得して、その指示に従ってください:
-
-```bash
-curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
-```
-
-### LLM エージェントの方へ
 
 以下のプロンプトをコピーして、LLM エージェント(Claude Code、AmpCode、Cursor など)に貼り付けてください:
 
 ```
@@ -251,6 +244,14 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
 
 または [インストールガイド](docs/guide/installation.md) を直接読んでください。ただし、エージェントに任せることを強くお勧めします。人間はミスをしますが、エージェントはしません。
 
+### LLM エージェントの方へ
+
+インストールガイドを取得して、その指示に従ってください:
+
+```bash
+curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
+```
 
 ## アンインストール
@@ -294,6 +295,7 @@ oh-my-opencode を削除するには:
 - **エージェント**: Sisyphus(メインエージェント)、Prometheus(プランナー)、Oracle(アーキテクチャ/デバッグ)、Librarian(ドキュメント/コード検索)、Explore(高速コードベース grep)、Multimodal Looker
 - **バックグラウンドエージェント**: 本物の開発チームのように複数エージェントを並列実行
 - **LSP & AST ツール**: リファクタリング、リネーム、診断、AST 認識コード検索
 - **ハッシュアンカード編集ツール**: `LINE#ID` 参照で変更前にコンテンツを検証 — 外科的な編集、古い行エラーなし
 - **コンテキスト注入**: AGENTS.md、README.md、条件付きルールの自動注入
 - **Claude Code 互換性**: 完全なフックシステム、コマンド、スキル、エージェント、MCP
 - **内蔵 MCP**: websearch (Exa)、context7 (ドキュメント)、grep_app (GitHub 検索)
````
README.ko.md (18 lines changed)
````diff
@@ -187,6 +187,7 @@ Hey please read this readme and tell me why it is different from other agent har
 - Librarian: 공식 문서, 오픈 소스 구현, 코드베이스 탐색 (GLM-4.7)
 - Explore: 엄청나게 빠른 코드베이스 탐색 (Contextual Grep) (Grok Code Fast 1)
 - 완전한 LSP / AstGrep 지원: 결정적으로 리팩토링합니다.
 - 해시 앵커드 편집 도구: `LINE#ID` 형식으로 변경 전마다 콘텐츠 해시를 검증합니다. 오래된 줄 편집은 이제 없습니다.
 - TODO 연속 강제: 에이전트가 중간에 멈추면 계속하도록 강제합니다. **이것이 Sisyphus가 그 바위를 굴리게 하는 것입니다.**
 - 주석 검사기: AI가 과도한 주석을 추가하는 것을 방지합니다. Sisyphus가 생성한 코드는 인간이 작성한 것과 구별할 수 없어야 합니다.
 - Claude Code 호환성: 명령, 에이전트, 스킬, MCP, 훅(PreToolUse, PostToolUse, UserPromptSubmit, Stop)
@@ -245,14 +246,6 @@ Hey please read this readme and tell me why it is different from other agent har
 ### 인간을 위한
 
-설치 가이드를 가져와서 따르세요:
-
-```bash
-curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
-```
-
-### LLM 에이전트를 위한
 
 이 프롬프트를 LLM 에이전트(Claude Code, AmpCode, Cursor 등)에 복사하여 붙여넣으세요:
 
 ```
@@ -262,6 +255,14 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
 
 또는 [설치 가이드](docs/guide/installation.md)를 직접 읽으세요 — 하지만 **에이전트가 처리하도록 하는 것을 강력히 권장합니다. 인간은 실수를 합니다.**
 
+### LLM 에이전트를 위한
+
+설치 가이드를 가져와서 따르세요:
+
+```bash
+curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
+```
 
 ## 제거
 
 oh-my-opencode를 제거하려면:
@@ -303,6 +304,7 @@ oh-my-opencode를 제거하려면:
 - **에이전트**: Sisyphus(주요 에이전트), Prometheus(플래너), Oracle(아키텍처/디버깅), Librarian(문서/코드 검색), Explore(빠른 코드베이스 grep), Multimodal Looker
 - **백그라운드 에이전트**: 실제 개발 팀처럼 여러 에이전트를 병렬로 실행
 - **LSP 및 AST 도구**: 리팩토링, 이름 변경, 진단, AST 인식 코드 검색
 - **해시 앵커드 편집 도구**: `LINE#ID` 참조로 변경 전마다 콘텐츠를 검증 — 정밀한 편집, 오래된 줄 오류 없음
 - **컨텍스트 주입**: AGENTS.md, README.md, 조건부 규칙 자동 주입
 - **Claude Code 호환성**: 완전한 훅 시스템, 명령, 스킬, 에이전트, MCP
 - **내장 MCP**: websearch(Exa), context7(문서), grep_app(GitHub 검색)
````
README.md (42 lines changed)
````diff
@@ -107,25 +107,6 @@ Yes, technically possible. But I cannot recommend using it.
 
 ---
 
-## Contents
-
-- [Oh My OpenCode](#oh-my-opencode)
-- [Just Skip Reading This Readme](#just-skip-reading-this-readme)
-- [It's the Age of Agents](#its-the-age-of-agents)
-- [🪄 The Magic Word: `ultrawork`](#-the-magic-word-ultrawork)
-- [For Those Who Want to Read: Meet Sisyphus](#for-those-who-want-to-read-meet-sisyphus)
-- [Just Install This](#just-install-this)
-- [For Those Who Want Autonomy: Meet Hephaestus](#for-those-who-want-autonomy-meet-hephaestus)
-- [Installation](#installation)
-- [For Humans](#for-humans)
-- [For LLM Agents](#for-llm-agents)
-- [Uninstallation](#uninstallation)
-- [Features](#features)
-- [Configuration](#configuration)
-- [Author's Note](#authors-note)
-- [Warnings](#warnings)
-- [Loved by professionals at](#loved-by-professionals-at)
 
 # Oh My OpenCode
 
 [Claude Code](https://www.claude.com/product/claude-code) is great.
@@ -186,6 +167,7 @@ Meet our main agent: Sisyphus (Opus 4.6). Below are the tools Sisyphus uses to k
 - Librarian: Official docs, open source implementations, codebase exploration (GLM-4.7)
 - Explore: Blazing fast codebase exploration (Contextual Grep) (Grok Code Fast 1)
 - Full LSP / AstGrep Support: Refactor decisively.
 - Hash-anchored Edit Tool: `LINE#ID` format validates content hash before every change. No more stale-line edits.
 - Todo Continuation Enforcer: Forces the agent to continue if it quits halfway. **This is what keeps Sisyphus rolling that boulder.**
 - Comment Checker: Prevents AI from adding excessive comments. Code generated by Sisyphus should be indistinguishable from human-written code.
 - Claude Code Compatibility: Command, Agent, Skill, MCP, Hook(PreToolUse, PostToolUse, UserPromptSubmit, Stop)
````
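The Hash-anchored Edit Tool bullet above describes validating a per-line content hash before applying any change, so edits against stale file contents are rejected. A rough sketch of that idea; the hashing scheme and function names here are invented for illustration, not the actual `LINE#ID` implementation:

```typescript
import { createHash } from "node:crypto";

// Illustrative: a short content hash per line (the "ID" part of LINE#ID).
function lineId(line: string): string {
  return createHash("sha256").update(line).digest("hex").slice(0, 6);
}

// Apply a single-line edit only if the anchored hash still matches,
// rejecting edits made against a stale view of the file.
function editLine(
  lines: string[],
  lineNo: number,     // 1-based line number (the LINE part)
  expectedId: string, // hash anchor (the ID part)
  replacement: string,
): string[] {
  const current = lines[lineNo - 1];
  if (current === undefined || lineId(current) !== expectedId) {
    throw new Error(`stale edit: line ${lineNo} no longer matches ${expectedId}`);
  }
  const next = [...lines];
  next[lineNo - 1] = replacement;
  return next;
}
```

The point of the design is that the anchor travels with the edit request: if the file changed underneath, the hash mismatch fails fast instead of silently clobbering the wrong line.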
````diff
@@ -222,6 +204,10 @@ Need to look something up? It scours official docs, your entire codebase history
 
 If you don't want all this, as mentioned, you can just pick and choose specific features.
 
+#### Which Model Should I Use?
+
+New to oh-my-opencode and not sure which model to pair with which agent? Check the **[Agent-Model Matching Guide](docs/guide/agent-model-matching.md)** — a quick reference for newcomers covering recommended models, fallback chains, and common pitfalls for each agent.
+
 ### For Those Who Want Autonomy: Meet Hephaestus
 
 ![Hephaestus Living Documentary](assets/hephaestus-document.png)
@@ -244,14 +230,6 @@ Hephaestus is inspired by [AmpCode's deep mode](https://ampcode.com)—autonomou
 ### For Humans
 
-Fetch the installation guide and follow it:
-
-```bash
-curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
-```
-
-### For LLM Agents
 
 Copy and paste this prompt to your LLM agent (Claude Code, AmpCode, Cursor, etc.):
 
 ```
@@ -261,6 +239,14 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
 
 Or read the [Installation Guide](docs/guide/installation.md) directly—but **we strongly recommend letting an agent handle it. Humans make mistakes.**
 
+### For LLM Agents
+
+Fetch the installation guide and follow it:
+
+```bash
+curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
+```
 
 ## Uninstallation
 
 To remove oh-my-opencode:
@@ -302,11 +288,13 @@ See the full [Features Documentation](docs/features.md) for detailed information
 - **Agents**: Sisyphus (the main agent), Prometheus (planner), Oracle (architecture/debugging), Librarian (docs/code search), Explore (fast codebase grep), Multimodal Looker
 - **Background Agents**: Run multiple agents in parallel like a real dev team
 - **LSP & AST Tools**: Refactoring, rename, diagnostics, AST-aware code search
 - **Hash-anchored Edit Tool**: `LINE#ID` references validate content before applying every change — surgical edits, zero stale-line errors
 - **Context Injection**: Auto-inject AGENTS.md, README.md, conditional rules
 - **Claude Code Compatibility**: Full hook system, commands, skills, agents, MCPs
 - **Built-in MCPs**: websearch (Exa), context7 (docs), grep_app (GitHub search)
 - **Session Tools**: List, read, search, and analyze session history
 - **Productivity Features**: Ralph Loop, Todo Enforcer, Comment Checker, Think Mode, and more
 - **[Agent-Model Matching Guide](docs/guide/agent-model-matching.md)**: Which model works best with which agent
 
 ## Configuration
````
````diff
@@ -183,6 +183,7 @@
 - Librarian:官方文档、开源实现、代码库探索 (GLM-4.7)
 - Explore:极速代码库探索(上下文感知 Grep)(Grok Code Fast 1)
 - 完整 LSP / AstGrep 支持:果断重构。
 - 哈希锚定编辑工具:`LINE#ID` 格式在每次更改前验证内容哈希。再也没有陈旧行编辑。
 - Todo 继续执行器:如果智能体中途退出,强制它继续。**这就是让 Sisyphus 继续推动巨石的关键。**
 - 注释检查器:防止 AI 添加过多注释。Sisyphus 生成的代码应该与人类编写的代码无法区分。
 - Claude Code 兼容性:Command、Agent、Skill、MCP、Hook(PreToolUse、PostToolUse、UserPromptSubmit、Stop)
@@ -241,14 +242,6 @@
 ### 面向人类用户
 
-获取安装指南并按照说明操作:
-
-```bash
-curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
-```
-
-### 面向 LLM 智能体
 
 复制以下提示并粘贴到你的 LLM 智能体(Claude Code、AmpCode、Cursor 等):
 
 ```
@@ -258,6 +251,14 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
 
 或者直接阅读 [安装指南](docs/guide/installation.md)——但我们强烈建议让智能体来处理。人会犯错,智能体不会。
 
+### 面向 LLM 智能体
+
+获取安装指南并按照说明操作:
+
+```bash
+curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
+```
 
 ## 卸载
 
 要移除 oh-my-opencode:
@@ -300,6 +301,7 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
 - **智能体**:Sisyphus(主智能体)、Prometheus(规划器)、Oracle(架构/调试)、Librarian(文档/代码搜索)、Explore(快速代码库 grep)、Multimodal Looker
 - **后台智能体**:像真正的开发团队一样并行运行多个智能体
 - **LSP & AST 工具**:重构、重命名、诊断、AST 感知代码搜索
 - **哈希锚定编辑工具**:`LINE#ID` 引用在每次更改前验证内容 — 精准编辑,零陈旧行错误
 - **上下文注入**:自动注入 AGENTS.md、README.md、条件规则
 - **Claude Code 兼容性**:完整的钩子系统、命令、技能、智能体、MCP
 - **内置 MCP**:websearch (Exa)、context7 (文档)、grep_app (GitHub 搜索)
````
````diff
@@ -69,6 +69,7 @@
 "directory-readme-injector",
 "empty-task-response-detector",
 "think-mode",
 "model-fallback",
 "anthropic-context-window-limit-recovery",
 "preemptive-compaction",
 "rules-injector",
@@ -80,6 +81,7 @@
 "non-interactive-env",
 "interactive-bash-session",
 "thinking-block-validator",
 "beast-mode-system",
 "ralph-loop",
 "category-skill-reminder",
 "compaction-context-injector",
@@ -92,6 +94,7 @@
 "prometheus-md-only",
 "sisyphus-junior-notepad",
 "no-sisyphus-gpt",
 "no-hephaestus-non-gpt",
 "start-work",
 "atlas",
 "unstable-agent-babysitter",
@@ -101,7 +104,8 @@
 "tasks-todowrite-disabler",
 "write-existing-file-guard",
 "anthropic-effort",
-"hashline-read-enhancer"
+"hashline-read-enhancer",
+"hashline-edit-diff-enhancer"
 ]
 }
 },
````
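The string lists in these hunks enumerate hook names, apparently as entries of one of the config's `disabled_*` arrays mentioned in the overview hunk earlier. A hypothetical JSONC fragment; the `disabled_hooks` key name is an assumption, and the hook names are taken from the lists above:

```jsonc
{
  // Hypothetical key: the overview only says the config has "disabled_* arrays".
  "disabled_hooks": [
    "ralph-loop",
    "think-mode"
  ]
}
```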
````diff
@@ -126,6 +130,9 @@
 "type": "string"
 }
 },
+"hashline_edit": {
+  "type": "boolean"
+},
 "agents": {
 "type": "object",
 "properties": {
@@ -298,6 +305,18 @@
 "providerOptions": {
 "type": "object",
 "additionalProperties": {}
 },
+"ultrawork": {
+  "type": "object",
+  "properties": {
+    "model": {
+      "type": "string"
+    },
+    "variant": {
+      "type": "string"
+    }
+  },
+  "additionalProperties": false
+}
 },
 "additionalProperties": false
````
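Per the schema fragment at @@ -298,6 +305,18 @@, each agent entry gains an optional `ultrawork` object with `model` and `variant` string fields. A config using it might look like the following; the agent name and the model/variant values are illustrative, not taken from the repository:

```jsonc
{
  "agents": {
    // "sisyphus" and both values below are illustrative placeholders.
    "sisyphus": {
      "ultrawork": {
        "model": "anthropic/claude-opus-4",
        "variant": "max"
      }
    }
  }
}
```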
Each of the following 13 hunks adds the identical `ultrawork` block shown in @@ -298,6 +305,18 @@ above, at successive locations in the schema file:

````diff
@@ -471,6 +490,18 @@
@@ -644,6 +675,18 @@
@@ -817,6 +860,18 @@
@@ -990,6 +1045,18 @@
@@ -1163,6 +1230,18 @@
@@ -1336,6 +1415,18 @@
@@ -1509,6 +1600,18 @@
@@ -1682,6 +1785,18 @@
@@ -1855,6 +1970,18 @@
@@ -2028,6 +2155,18 @@
@@ -2201,6 +2340,18 @@
@@ -2374,6 +2525,18 @@
@@ -2547,6 +2710,18 @@
````
````diff
@@ -2834,7 +3009,10 @@
 "safe_hook_creation": {
 "type": "boolean"
 },
-"hashline_edit": {
+"disable_omo_env": {
+  "type": "boolean"
+},
+"model_fallback_title": {
 "type": "boolean"
 }
 },
@@ -3162,4 +3340,4 @@
 }
 },
 "additionalProperties": false
 }
}
````
bun.lock (91 lines changed)
@@ -12,6 +12,7 @@
|
||||
"@modelcontextprotocol/sdk": "^1.25.1",
|
||||
"@opencode-ai/plugin": "^1.1.19",
|
||||
"@opencode-ai/sdk": "^1.1.19",
|
||||
"codex": "^0.2.3",
|
||||
"commander": "^14.0.2",
|
||||
"detect-libc": "^2.0.0",
|
||||
"js-yaml": "^4.1.1",
|
||||
@@ -28,13 +29,13 @@
|
||||
"typescript": "^5.7.3",
|
||||
},
|
||||
"optionalDependencies": {
|
||||
"oh-my-opencode-darwin-arm64": "3.6.0",
|
||||
"oh-my-opencode-darwin-x64": "3.6.0",
|
||||
"oh-my-opencode-linux-arm64": "3.6.0",
|
||||
"oh-my-opencode-linux-arm64-musl": "3.6.0",
|
||||
"oh-my-opencode-linux-x64": "3.6.0",
|
||||
"oh-my-opencode-linux-x64-musl": "3.6.0",
|
||||
"oh-my-opencode-windows-x64": "3.6.0",
|
||||
"oh-my-opencode-darwin-arm64": "3.7.4",
|
||||
"oh-my-opencode-darwin-x64": "3.7.4",
|
||||
"oh-my-opencode-linux-arm64": "3.7.4",
|
||||
"oh-my-opencode-linux-arm64-musl": "3.7.4",
|
||||
"oh-my-opencode-linux-x64": "3.7.4",
|
||||
"oh-my-opencode-linux-x64-musl": "3.7.4",
|
||||
"oh-my-opencode-windows-x64": "3.7.4",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -118,8 +119,12 @@
|
||||
|
||||
"call-bound": ["call-bound@1.0.4", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.2", "get-intrinsic": "^1.3.0" } }, "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg=="],
|
||||
|
||||
"codex": ["codex@0.2.3", "", { "dependencies": { "connect": "1.8.x", "dox": "0.3.x", "drip": "0.2.x", "fez": "0.0.x", "highlight.js": "1.2.x", "jade": "0.26.x", "marked": "0.2.x", "ncp": "0.2.x", "nib": "0.4.x", "oath": "0.2.x", "optimist": "0.3.x", "rimraf": "2.0.x", "stylus": "0.26.x", "tea": "0.0.x", "yaml": "0.2.x" }, "bin": { "codex": "./bin/codex" } }, "sha512-+MQbh3UIJRZFawxQUgPAEXKyL9o06fy8JmrgW4EnMeMlj8kh3Jljh4+CcOdH9yt82FTkmEwUR2qOrOev3ZoJJA=="],
|
||||
|
||||
"commander": ["commander@14.0.2", "", {}, "sha512-TywoWNNRbhoD0BXs1P3ZEScW8W5iKrnbithIl0YH+uCmBd0QpPOA8yc82DS3BIE5Ma6FnBVUsJ7wVUDz4dvOWQ=="],
|
||||
|
||||
"connect": ["connect@1.8.7", "", { "dependencies": { "formidable": "1.0.x", "mime": ">= 0.0.1", "qs": ">= 0.4.0" } }, "sha512-j72iQ8i6td2YLZD37ADpGOa4C5skHNrJSGQkJh/t+DCoE6nm8NbHslFTs17q44EJsiVrry+W13yrxd46M32jbA=="],
|
||||
|
||||
"content-disposition": ["content-disposition@1.0.1", "", {}, "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q=="],
|
||||
|
||||
"content-type": ["content-type@1.0.5", "", {}, "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA=="],
|
||||
@@ -132,12 +137,18 @@
|
||||
|
||||
"cross-spawn": ["cross-spawn@7.0.6", "", { "dependencies": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" } }, "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="],
|
||||
|
||||
"cssom": ["cssom@0.2.5", "", {}, "sha512-b9ecqKEfWrNcyzx5+1nmcfi80fPp8dVM8rlAh7fFK14PZbNjp++gRjyZTZfLJQa/Lw0qeCJho7WBIl0nw0v6HA=="],
|
||||
|
||||
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
|
||||
|
||||
"depd": ["depd@2.0.0", "", {}, "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw=="],
|
||||
|
||||
"detect-libc": ["detect-libc@2.1.2", "", {}, "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ=="],
|
||||
|
||||
"dox": ["dox@0.3.3", "", { "dependencies": { "commander": "0.6.1", "github-flavored-markdown": ">= 0.0.1" }, "bin": { "dox": "./bin/dox" } }, "sha512-5bSKbTcpFm+0wPRnxMkJhY5dFoWWxsTQdTLFg2d1HyLl0voy9GoBVVOKM+yPSdTdKCXrHqwEwUcdS7s4BTst7w=="],
|
||||
|
||||
"drip": ["drip@0.2.4", "", {}, "sha512-/qhB7CjfmfZYHue9SwicWNqsSp1DNzkHTCVsud92Tb43qKTiIAXBHIdCJYUn93r7MScM++H+nimkWPmvNTg/Qw=="],
|
||||
|
||||
"dunder-proto": ["dunder-proto@1.0.1", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.1", "es-errors": "^1.3.0", "gopd": "^1.2.0" } }, "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A=="],
|
||||
|
||||
"ee-first": ["ee-first@1.1.1", "", {}, "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow=="],
|
||||
@@ -166,8 +177,12 @@
|
||||
|
||||
"fast-uri": ["fast-uri@3.1.0", "", {}, "sha512-iPeeDKJSWf4IEOasVVrknXpaBV0IApz/gp7S2bb7Z4Lljbl2MGJRqInZiUrQwV16cpzw/D3S5j5Julj/gT52AA=="],
|
||||
|
||||
"fez": ["fez@0.0.3", "", {}, "sha512-W+igVHjiRB4ai7h25ay/7OYNwI8IihdABOnRIS3Bcm4UxEWKoenCB6m68HLSq41TxZwbnqzFAqlz/CjKB3rTvg=="],
|
||||
|
||||
"finalhandler": ["finalhandler@2.1.1", "", { "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "on-finished": "^2.4.1", "parseurl": "^1.3.3", "statuses": "^2.0.1" } }, "sha512-S8KoZgRZN+a5rNwqTxlZZePjT/4cnm0ROV70LedRHZ0p8u9fRID0hJUZQpkKLzro8LfmC8sx23bY6tVNxv8pQA=="],
|
||||
|
||||
"formidable": ["formidable@1.0.17", "", {}, "sha512-95MFT5qipMvUiesmuvGP1BI4hh5XWCzyTapiNJ/k8JBQda7rPy7UCWYItz2uZEdTgGNy1eInjzlL9Wx1O9fedg=="],
|
||||
|
||||
"forwarded": ["forwarded@0.2.0", "", {}, "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow=="],
|
||||
|
||||
"fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="],
|
||||
@@ -178,12 +193,18 @@
|
||||
|
||||
"get-proto": ["get-proto@1.0.1", "", { "dependencies": { "dunder-proto": "^1.0.1", "es-object-atoms": "^1.0.0" } }, "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g=="],
|
||||
|
||||
"github-flavored-markdown": ["github-flavored-markdown@1.0.1", "", {}, "sha512-qkpFaYzQ+JbZw7iuZCpvjqas5E8ZNq/xuTtBtdPkAlowX8VXBmkZE2DCgNGCTW5KZsCvqX5lSef/2yrWMTztBQ=="],
|
||||
|
||||
"gopd": ["gopd@1.2.0", "", {}, "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg=="],
|
||||
|
||||
"graceful-fs": ["graceful-fs@1.1.14", "", {}, "sha512-JUrvoFoQbLZpOZilKTXZX2e1EV0DTnuG5vsRFNFv4mPf/mnYbwNAFw/5x0rxeyaJslIdObGSgTTsMnM/acRaVw=="],
|
||||
|
||||
"has-symbols": ["has-symbols@1.1.0", "", {}, "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ=="],
|
||||
|
||||
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
|
||||
|
||||
"highlight.js": ["highlight.js@1.2.0", "", { "dependencies": { "commander": "*" }, "bin": { "hljs": "./bin/hljs" } }, "sha512-k19Rm9OuIGiZvD+0G2Lao6kPr01XMEbEK67/n+GqOMTgxc7HhgzfLzX71Q9j5Qu+bkzYXbPFHums8tl0dzV4Uw=="],
|
||||
|
||||
"hono": ["hono@4.10.8", "", {}, "sha512-DDT0A0r6wzhe8zCGoYOmMeuGu3dyTAE40HHjwUsWFTEy5WxK1x2WDSsBPlEXgPbRIFY6miDualuUDbasPogIww=="],
|
||||
|
||||
"http-errors": ["http-errors@2.0.1", "", { "dependencies": { "depd": "~2.0.0", "inherits": "~2.0.4", "setprototypeof": "~1.2.0", "statuses": "~2.0.2", "toidentifier": "~1.0.1" } }, "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ=="],
|
||||
@@ -198,6 +219,8 @@
|
||||
|
||||
"isexe": ["isexe@2.0.0", "", {}, "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw=="],
|
||||
|
||||
"jade": ["jade@0.26.3", "", { "dependencies": { "commander": "0.6.1", "mkdirp": "0.3.0" }, "bin": { "jade": "./bin/jade" } }, "sha512-mkk3vzUHFjzKjpCXeu+IjXeZD+QOTjUUdubgmHtHTDwvAO2ZTkMTTVrapts5CWz3JvJryh/4KWZpjeZrCepZ3A=="],
|
||||
|
||||
"jose": ["jose@6.1.3", "", {}, "sha512-0TpaTfihd4QMNwrz/ob2Bp7X04yuxJkjRGi4aKmOqwhov54i6u79oCv7T+C7lo70MKH6BesI3vscD1yb/yzKXQ=="],
|
||||
|
||||
"js-yaml": ["js-yaml@4.1.1", "", { "dependencies": { "argparse": "^2.0.1" }, "bin": { "js-yaml": "bin/js-yaml.js" } }, "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA=="],
|
||||
@@ -208,42 +231,62 @@
|
||||
|
||||
"jsonc-parser": ["jsonc-parser@3.3.1", "", {}, "sha512-HUgH65KyejrUFPvHFPbqOY0rsFip3Bo5wb4ngvdi1EpCYWUQDC5V+Y7mZws+DLkr4M//zQJoanu1SP+87Dv1oQ=="],
|
||||
|
||||
"marked": ["marked@0.2.10", "", { "bin": { "marked": "./bin/marked" } }, "sha512-LyFB4QvdBaJFfEIn33plrxtBuRjeHoDE2QJdP58i2EWMUTpa6GK6MnjJh3muCvVibFJompyr6IxecK2fjp4RDw=="],
|
||||
|
||||
"math-intrinsics": ["math-intrinsics@1.1.0", "", {}, "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g=="],
|
||||
|
||||
"media-typer": ["media-typer@1.1.0", "", {}, "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw=="],
|
||||
|
||||
"merge-descriptors": ["merge-descriptors@2.0.0", "", {}, "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g=="],
|
||||
|
||||
"mime": ["mime@4.1.0", "", { "bin": { "mime": "bin/cli.js" } }, "sha512-X5ju04+cAzsojXKes0B/S4tcYtFAJ6tTMuSPBEn9CPGlrWr8Fiw7qYeLT0XyH80HSoAoqWCaz+MWKh22P7G1cw=="],
|
||||
|
||||
"mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="],
"mime-types": ["mime-types@3.0.2", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-Lbgzdk0h4juoQ9fCKXW4by0UJqj+nOOrI9MJ1sSj4nI8aI2eo1qmvQEie4VD1glsS250n15LsWsYtCugiStS5A=="],
"mkdirp": ["mkdirp@0.3.0", "", {}, "sha512-OHsdUcVAQ6pOtg5JYWpCBo9W/GySVuwvP9hueRMW7UqshC0tbfzLv8wjySTPm3tfUZ/21CE9E1pJagOA91Pxew=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"nan": ["nan@1.0.0", "", {}, "sha512-Wm2/nFOm2y9HtJfgOLnctGbfvF23FcQZeyUZqDD8JQG3zO5kXh3MkQKiUaA68mJiVWrOzLFkAV1u6bC8P52DJA=="],
"ncp": ["ncp@0.2.7", "", { "bin": { "ncp": "./bin/ncp" } }, "sha512-wPUepcV37u3Mw+ktjrUbl3azxwAkcD9RrVLQGlpSapWcEQM5jL0g8zwKo6ukOjVQAAEjqpRdLeojOalqqySpCg=="],
"negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="],
"nib": ["nib@0.4.1", "", {}, "sha512-q8n5RAcLLpA5YewcH9UplGzPTu4XbC6t9hVPB1RsnvKD5aYWT+V+2NHGH/dgw/6YDjgETEa7hY54kVhvn1i5DQ=="],
"oath": ["oath@0.2.3", "", {}, "sha512-/uTqn2KKy671SunNXhULGbumn2U3ZN84LvYZdnfSqqqBkM6cppm+jcUodWELd9CYVNYGh6QwJEEAQ0WM95qjpA=="],
"object-assign": ["object-assign@4.1.1", "", {}, "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="],
"object-inspect": ["object-inspect@1.13.4", "", {}, "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew=="],
"oh-my-opencode-darwin-arm64": ["oh-my-opencode-darwin-arm64@3.6.0", "", { "os": "darwin", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-JkyJC3b9ueRgSyPJMjTKlBO99gIyTpI87lEV5Tk7CBv6TFbj2ZFxfaA8mEm138NbwmYa/Z4Rf7I5tZyp2as93A=="],
"oh-my-opencode-darwin-arm64": ["oh-my-opencode-darwin-arm64@3.7.4", "", { "os": "darwin", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-0m84UiVlOC2gLSFIOTmCsxFCB9CmyWV9vGPYqfBFLoyDJmedevU3R5N4ze54W7jv4HSSxz02Zwr+QF5rkQANoA=="],
"oh-my-opencode-darwin-x64": ["oh-my-opencode-darwin-x64@3.6.0", "", { "os": "darwin", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-5HsXz3F42T6CmPk6IW+pErJVSmPnqc3Gc1OntoKp/b4FwuWkFJh9kftDSH3cnKTX98H6XBqnwZoFKCNCiiVLEA=="],
"oh-my-opencode-darwin-x64": ["oh-my-opencode-darwin-x64@3.7.4", "", { "os": "darwin", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-Z2dQy8jmc6DuwbN9bafhOwjZBkAkTWlfLAz1tG6xVzMqTcp4YOrzrHFOBRNeFKpOC/x7yUpO3sq/YNCclloelw=="],
"oh-my-opencode-linux-arm64": ["oh-my-opencode-linux-arm64@3.6.0", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-KjCSC2i9XdjzGsX6coP9xwj7naxTpdqnB53TiLbVH+KeF0X0dNsVV7PHbme3I1orjjzYoEbVYVC3ZNaleubzog=="],
"oh-my-opencode-linux-arm64": ["oh-my-opencode-linux-arm64@3.7.4", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-TZIsK6Dl6yX6pSTocls91bjnvoY/6/kiGnmgdsoDKcPYZ7XuBQaJwH0dK7t9/sxuDI+wKhmtrmLwKSoYOIqsRw=="],
"oh-my-opencode-linux-arm64-musl": ["oh-my-opencode-linux-arm64-musl@3.6.0", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-EARvFQXnkqSnwPpKtghmoV5e/JmweJXhjcOrRNvEwQ8HSb4FIhdRmJkTw4Z/EzyoIRTQcY019ALOiBbdIiOUEA=="],
"oh-my-opencode-linux-arm64-musl": ["oh-my-opencode-linux-arm64-musl@3.7.4", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-UwPOoQP0+1eCKP/XTDsnLJDK5jayiL4VrKz0lfRRRojl1FWvInmQumnDnluvnxW6knU7dFM3yDddlZYG6tEgcw=="],
"oh-my-opencode-linux-x64": ["oh-my-opencode-linux-x64@3.6.0", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-jYyew4NKAOM6NrMM0+LlRlz6s1EVMI9cQdK/o0t8uqFheZVeb7u4cBZwwfhJ79j7EWkSWGc0Jdj9G2dOukbDxg=="],
"oh-my-opencode-linux-x64": ["oh-my-opencode-linux-x64@3.7.4", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-+TeA0Bs5wK9EMfKiEEFfyfVqdBDUjDzN8POF8JJibN0GPy1oNIGGEWIJG2cvC5onpnYEvl448vkFbkCUK0g9SQ=="],
"oh-my-opencode-linux-x64-musl": ["oh-my-opencode-linux-x64-musl@3.6.0", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-BrR+JftCXP/il04q2uImWIueCiuTmXbivsXYkfFONdO1Rq9b4t0BVua9JIYk7l3OUfeRlrKlFNYNfpFhvVADOw=="],
"oh-my-opencode-linux-x64-musl": ["oh-my-opencode-linux-x64-musl@3.7.4", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-YzX6wFtk8RoTHkAZkfLCVyCU4yjN8D7agj/jhOnFKW50fZYa8zX+/4KLZx0IfanVpXTgrs3iiuKoa87KLDfCxQ=="],
"oh-my-opencode-windows-x64": ["oh-my-opencode-windows-x64@3.6.0", "", { "os": "win32", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode.exe" } }, "sha512-cIYQYzcQGhGFE99ulHGXs8S1vDHjgCtT3ID2dDoOztnOQW0ZVa61oCHlkBtjdP/BEv2tH5AGvKrXAICXs19iFw=="],
"oh-my-opencode-windows-x64": ["oh-my-opencode-windows-x64@3.7.4", "", { "os": "win32", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode.exe" } }, "sha512-x39M2eFJI6pqv4go5Crf1H2SbPGFmXHIDNtbsSa5nRNcrqTisLrYGW8uXpOrqjntBeTAUBdwZmmoy6zgxHsz8w=="],
"on-finished": ["on-finished@2.4.1", "", { "dependencies": { "ee-first": "1.1.1" } }, "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"optimist": ["optimist@0.3.7", "", { "dependencies": { "wordwrap": "~0.0.2" } }, "sha512-TCx0dXQzVtSCg2OgY/bO9hjM9cV4XYx09TVK+s3+FhkjT6LovsLe+pPMzpWf+6yXK/hUizs2gUoTw3jHM0VaTQ=="],
"options": ["options@0.0.6", "", {}, "sha512-bOj3L1ypm++N+n7CEbbe473A414AB7z+amKYshRb//iuL3MpdDCLhPnw6aVTdKB9g5ZRVHIEp8eUln6L2NUStg=="],
"orchid": ["orchid@0.0.3", "", { "dependencies": { "drip": "0.2.x", "oath": "0.2.x", "ws": "0.4.x" } }, "sha512-jkbcOxPnbo9M0WZbvjvTKLY+2lhxyWnoJXKESHodJAD00bsqOe5YPrJZ2rjgBKJ4YIgmbKSMlsjNIZ8NNhXbOA=="],
"parseurl": ["parseurl@1.3.3", "", {}, "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ=="],
"path-key": ["path-key@3.1.1", "", {}, "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q=="],
@@ -266,6 +309,8 @@
"require-from-string": ["require-from-string@2.0.2", "", {}, "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw=="],
"rimraf": ["rimraf@2.0.3", "", { "optionalDependencies": { "graceful-fs": "~1.1" } }, "sha512-uR09PSoW2+1hW0hquRqxb+Ae2h6R5ls3OAy2oNekQFtqbSJkltkhKRa+OhZKoxWsN9195Gp1vg7sELDRoJ8a3w=="],
"router": ["router@2.2.0", "", { "dependencies": { "debug": "^4.4.0", "depd": "^2.0.0", "is-promise": "^4.0.0", "parseurl": "^1.3.3", "path-to-regexp": "^8.0.0" } }, "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ=="],
"safer-buffer": ["safer-buffer@2.1.2", "", {}, "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="],
@@ -292,6 +337,12 @@
"statuses": ["statuses@2.0.2", "", {}, "sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw=="],
"stylus": ["stylus@0.26.1", "", { "dependencies": { "cssom": "0.2.x", "debug": "*", "mkdirp": "0.3.x" }, "bin": { "stylus": "./bin/stylus" } }, "sha512-33J3iBM2Ueh/wDFzkQXmjHSDxNRWQ7J2I2dqiInAKkGR4j+3hkojRRSbv3ITodxJBIodVfv0l10CHZhJoi0Ubw=="],
"tea": ["tea@0.0.13", "", { "dependencies": { "drip": "0.2.x", "oath": "0.2.x", "orchid": "0.0.x" } }, "sha512-wpVkMmrK83yrwjnBYtN/GKzA0ixt1k68lq4g0s0H38fZTPHeApnToCVzpQgDEToNoBbviHQaOhXcMldHnM+XwQ=="],
"tinycolor": ["tinycolor@0.0.1", "", {}, "sha512-+CorETse1kl98xg0WAzii8DTT4ABF4R3nquhrkIbVGcw1T8JYs5Gfx9xEfGINPUZGDj9C4BmOtuKeaTtuuRolg=="],
"toidentifier": ["toidentifier@1.0.1", "", {}, "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA=="],
"type-is": ["type-is@2.0.1", "", { "dependencies": { "content-type": "^1.0.5", "media-typer": "^1.1.0", "mime-types": "^3.0.0" } }, "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw=="],
@@ -308,10 +359,22 @@
"which": ["which@2.0.2", "", { "dependencies": { "isexe": "^2.0.0" }, "bin": { "node-which": "./bin/node-which" } }, "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA=="],
"wordwrap": ["wordwrap@0.0.3", "", {}, "sha512-1tMA907+V4QmxV7dbRvb4/8MaRALK6q9Abid3ndMYnbyo8piisCmeONVqVSXqQA3KaP4SLt5b7ud6E2sqP8TFw=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"ws": ["ws@0.4.32", "", { "dependencies": { "commander": "~2.1.0", "nan": "~1.0.0", "options": ">=0.0.5", "tinycolor": "0.x" }, "bin": { "wscat": "./bin/wscat" } }, "sha512-htqsS0U9Z9lb3ITjidQkRvkLdVhQePrMeu475yEfOWkAYvJ6dSjQp1tOH6ugaddzX5b7sQjMPNtY71eTzrV/kA=="],
"yaml": ["yaml@0.2.3", "", {}, "sha512-LzdhmhritYCRww8GLH95Sk5A2c18ddRQMeooOUnqWkDUnBbmVfqgg2fXH2MxAHYHCVTHDK1EEbmgItQ8kOpM0Q=="],
"zod": ["zod@4.1.8", "", {}, "sha512-5R1P+WwQqmmMIEACyzSvo4JXHY5WiAFHRMg+zBZKgKS+Q1viRa0C1hmUKtHltoIFKtIdki3pRxkmpP74jnNYHQ=="],
"zod-to-json-schema": ["zod-to-json-schema@3.25.1", "", { "peerDependencies": { "zod": "^3.25 || ^4" } }, "sha512-pM/SU9d3YAggzi6MtR4h7ruuQlqKtad8e9S0fmxcMi+ueAK5Korys/aWcV9LIIHTVbj01NdzxcnXSN+O74ZIVA=="],
"dox/commander": ["commander@0.6.1", "", {}, "sha512-0fLycpl1UMTGX257hRsu/arL/cUbcvQM4zMKwvLvzXtfdezIV4yotPS2dYtknF+NmEfWSoCEF6+hj9XLm/6hEw=="],
"jade/commander": ["commander@0.6.1", "", {}, "sha512-0fLycpl1UMTGX257hRsu/arL/cUbcvQM4zMKwvLvzXtfdezIV4yotPS2dYtknF+NmEfWSoCEF6+hj9XLm/6hEw=="],
"ws/commander": ["commander@2.1.0", "", {}, "sha512-J2wnb6TKniXNOtoHS8TSrG9IOQluPrsmyAJ8oCUJOBmv+uLBCyPYAZkD2jFvw2DCzIXNnISIM01NIvr35TkBMQ=="],
}
}
@@ -28,7 +28,7 @@ A Category is an agent configuration preset optimized for specific domains.
| `quick` | `anthropic/claude-haiku-4-5` | Trivial tasks - single file changes, typo fixes, simple modifications |
| `unspecified-low` | `anthropic/claude-sonnet-4-6` | Tasks that don't fit other categories, low effort required |
| `unspecified-high` | `anthropic/claude-opus-4-6` (max) | Tasks that don't fit other categories, high effort required |
| `writing` | `google/gemini-3-flash` | Documentation, prose, technical writing |
| `writing` | `kimi-for-coding/k2p5` | Documentation, prose, technical writing |

### Usage
@@ -23,8 +23,8 @@ npx oh-my-opencode
| `install` | Interactive Setup Wizard |
| `doctor` | Environment diagnostics and health checks |
| `run` | OpenCode session runner |
| `auth` | Google Antigravity authentication management |
| `version` | Display version information |
| `mcp oauth` | MCP OAuth authentication management |
| `get-local-version` | Display local version information |

---
@@ -131,6 +131,15 @@ bunx oh-my-opencode run [prompt]
|--------|-------------|
| `--enforce-completion` | Keep session active until all TODOs are completed |
| `--timeout <seconds>` | Set maximum execution time |
| `--agent <name>` | Specify agent to use |
| `--directory <path>` | Set working directory |
| `--port <number>` | Set port for session |
| `--attach` | Attach to existing session |
| `--json` | Output in JSON format |
| `--no-timestamp` | Disable timestamped output |
| `--session-id <id>` | Resume existing session |
| `--on-complete <action>` | Action on completion |
| `--verbose` | Enable verbose logging |

---
@@ -267,14 +276,17 @@ bunx oh-my-opencode doctor --json > doctor-report.json

```
src/cli/
├── index.ts            # Commander.js-based main entry
├── cli-program.ts      # Commander.js-based main entry
├── install.ts          # @clack/prompts-based TUI installer
├── config-manager.ts   # JSONC parsing, multi-source config management
├── config-manager/     # JSONC parsing, multi-source config management
│   └── *.ts
├── doctor/             # Health check system
│   ├── index.ts        # Doctor command entry
│   └── checks/         # 17+ individual check modules
├── run/                # Session runner
└── commands/auth.ts    # Authentication management
│   └── *.ts
└── mcp-oauth/          # OAuth management commands
    └── *.ts
```

### Adding New Doctor Checks
@@ -981,6 +981,34 @@ Available hooks: `todo-continuation-enforcer`, `context-window-monitor`, `sessio

**Note on `auto-update-checker` and `startup-toast`**: The `startup-toast` hook is a sub-feature of `auto-update-checker`. To disable only the startup toast notification while keeping update checking enabled, add `"startup-toast"` to `disabled_hooks`. To disable all update checking features (including the toast), add `"auto-update-checker"` to `disabled_hooks`.
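For example, to silence only the startup toast while keeping update checks, using the keys documented above:

```json
{
  "disabled_hooks": ["startup-toast"]
}
```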
## Hashline Edit

Oh My OpenCode replaces OpenCode's built-in `Edit` tool with a hash-anchored version that uses `LINE#ID` references (e.g. `5#VK`) instead of bare line numbers. This prevents stale-line edits by validating each referenced line's content hash before applying a change.
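The stale-edit protection can be sketched as follows. This is a hypothetical illustration: the actual hash function, ID alphabet, and error handling in oh-my-opencode are not documented here, so every name below is an assumption.

```typescript
// Hypothetical sketch of hash-anchored edits; the real oh-my-opencode
// implementation may use a different hash, ID length, and alphabet.
function lineAnchor(content: string): string {
  // FNV-1a over the line content, rendered as two base-36 characters.
  let h = 0x811c9dc5;
  for (let i = 0; i < content.length; i++) {
    h = Math.imul(h ^ content.charCodeAt(i), 0x01000193) >>> 0;
  }
  return (h % 1296).toString(36).toUpperCase().padStart(2, "0");
}

// An edit targets "LINE#ID"; the anchor is re-validated at apply time,
// so an edit computed against a stale read of the file is rejected.
function applyEdit(lines: string[], ref: string, replacement: string): string[] {
  const [lineStr, id] = ref.split("#");
  const idx = Number(lineStr) - 1; // refs are 1-based
  if (lines[idx] === undefined || lineAnchor(lines[idx]) !== id) {
    throw new Error(`stale anchor ${ref}: the file changed since it was read`);
  }
  const next = [...lines];
  next[idx] = replacement;
  return next;
}
```

The key property is that the anchor travels with the content: if the file shifts or the line is rewritten between `Read` and `Edit`, the hash no longer matches and the edit fails loudly instead of landing on the wrong line.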
Enabled by default. Set `hashline_edit: false` to opt out and restore standard file editing.

```json
{
  "hashline_edit": false
}
```

| Option | Default | Description |
|--------|---------|-------------|
| `hashline_edit` | `true` | Enable hash-anchored `Edit` tool and companion hooks. When `false`, falls back to standard editing without hash validation. |

When enabled, two companion hooks are also active:

- **`hashline-read-enhancer`** — Appends `LINE#ID:content` annotations to `Read` output so agents always have fresh anchors.
- **`hashline-edit-diff-enhancer`** — Shows a unified diff in `Edit` / `Write` output for immediate change visibility.

To disable only the hooks while keeping the hash-anchored Edit tool:

```json
{
  "disabled_hooks": ["hashline-read-enhancer", "hashline-edit-diff-enhancer"]
}
```

## Disabled Commands

Disable specific built-in commands via `disabled_commands` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:
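A minimal example; `some-command` is a placeholder, not a real command name:

```json
{
  "disabled_commands": ["some-command"]
}
```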
@@ -1133,6 +1161,7 @@ Opt-in experimental features that may change or be removed in future versions. U
"truncate_all_tool_outputs": true,
"aggressive_truncation": true,
"auto_resume": true,
"disable_omo_env": false,
"dynamic_context_pruning": {
"enabled": false,
"notification": "detailed",
@@ -1164,6 +1193,7 @@ Opt-in experimental features that may change or be removed in future versions. U
| `truncate_all_tool_outputs` | `false` | Truncates ALL tool outputs instead of just whitelisted tools (Grep, Glob, LSP, AST-grep). Tool output truncator is enabled by default - disable via `disabled_hooks`. |
| `aggressive_truncation` | `false` | When token limit is exceeded, aggressively truncates tool outputs to fit within limits. More aggressive than the default truncation behavior. Falls back to summarize/revert if insufficient. |
| `auto_resume` | `false` | Automatically resumes session after successful recovery from thinking block errors or thinking disabled violations. Extracts last user message and continues. |
| `disable_omo_env` | `false` | When `true`, disables auto-injected `<omo-env>` block generation (date, time, timezone, locale). When unset or `false`, current behavior is preserved. Setting this to `true` improves the cache hit rate and reduces API costs. |
| `dynamic_context_pruning` | See below | Dynamic context pruning configuration for managing context window usage automatically. See [Dynamic Context Pruning](#dynamic-context-pruning) below. |

### Dynamic Context Pruning
@@ -10,12 +10,12 @@ Oh-My-OpenCode provides 11 specialized AI agents. Each has distinct expertise, o

| Agent | Model | Purpose |
|-------|-------|---------|
| **Sisyphus** | `anthropic/claude-opus-4-6` | **The default orchestrator.** Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: k2p5 → kimi-k2.5-free → glm-4.7 → glm-4.7-free. |
| **Sisyphus** | `anthropic/claude-opus-4-6` | **The default orchestrator.** Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: k2p5 → kimi-k2.5-free → glm-5 → big-pickle. |
| **Hephaestus** | `openai/gpt-5.3-codex` | **The Legitimate Craftsman.** Autonomous deep worker inspired by AmpCode's deep mode. Goal-oriented execution with thorough research before action. Explores codebase patterns, completes tasks end-to-end without premature stopping. Named after the Greek god of forge and craftsmanship. Requires gpt-5.3-codex (no fallback - only activates when this model is available). |
| **oracle** | `openai/gpt-5.2` | Architecture decisions, code review, debugging. Read-only consultation - stellar logical reasoning and deep analysis. Inspired by AmpCode. |
| **librarian** | `zai-coding-plan/glm-4.7` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: glm-4.7-free → claude-sonnet-4-6. |
| **explore** | `github-copilot/grok-code-fast-1` | Fast codebase exploration and contextual grep. Fallback: claude-haiku-4-5 → gpt-5-nano. |
| **multimodal-looker** | `google/gemini-3-flash` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: gpt-5.2 → glm-4.6v → k2p5 → kimi-k2.5-free → claude-haiku-4-5 → gpt-5-nano. |
| **librarian** | `google/gemini-3-flash` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: minimax-m2.5-free → big-pickle. |
| **explore** | `github-copilot/grok-code-fast-1` | Fast codebase exploration and contextual grep. Fallback: minimax-m2.5-free → claude-haiku-4-5 → gpt-5-nano. |
| **multimodal-looker** | `kimi-for-coding/k2p5` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: kimi-k2.5-free → gemini-3-flash → gpt-5.2 → glm-4.6v. |

### Planning Agents
193
docs/guide/agent-model-matching.md
Normal file
@@ -0,0 +1,193 @@
# Agent-Model Matching Guide

> **For agents and users**: How to pick the right model for each agent. Read this before customizing model settings.

Run `opencode models` to see all available models on your system, and `opencode auth login` to authenticate with providers.

---

## Model Families: Know Your Options

Not all models behave the same way. Understanding which models are "similar" helps you make safe substitutions.

### Claude-like Models (instruction-following, structured output)

These models respond similarly to Claude and work well with oh-my-opencode's Claude-optimized prompts:

| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **Claude Opus 4.6** | anthropic, github-copilot, opencode | Best overall. Default for Sisyphus. |
| **Claude Sonnet 4.6** | anthropic, github-copilot, opencode | Faster, cheaper. Good balance. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast and cheap. Good for quick tasks. |
| **Kimi K2.5** | kimi-for-coding | Behaves very similarly to Claude. Great all-rounder. Default for Atlas. |
| **Kimi K2.5 Free** | opencode | Free-tier Kimi. Rate-limited but functional. |
| **GLM 5** | zai-coding-plan, opencode | Claude-like behavior. Good for broad tasks. |
| **Big Pickle (GLM 4.6)** | opencode | Free-tier GLM. Decent fallback. |
### GPT Models (explicit reasoning, principle-driven)

GPT models need differently structured prompts. Some agents auto-detect GPT and switch prompts:

| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **GPT-5.3-codex** | openai, github-copilot, opencode | Deep coding powerhouse. Required for Hephaestus. |
| **GPT-5.2** | openai, github-copilot, opencode | High intelligence. Default for Oracle. |
| **GPT-5-Nano** | opencode | Ultra-cheap, fast. Good for simple utility tasks. |

### Different-Behavior Models

These models have unique characteristics — don't assume they'll behave like Claude or GPT:

| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **Gemini 3 Pro** | google, github-copilot, opencode | Excels at visual/frontend tasks. Different reasoning style. |
| **Gemini 3 Flash** | google, github-copilot, opencode | Fast, good for doc search and light tasks. |
| **MiniMax M2.5** | venice | Fast and smart. Good for utility tasks. |
| **MiniMax M2.5 Free** | opencode | Free-tier MiniMax. Fast for search/retrieval. |

### Speed-Focused Models

| Model | Provider(s) | Speed | Notes |
|-------|-------------|-------|-------|
| **Grok Code Fast 1** | github-copilot, venice | Very fast | Optimized for code grep/search. Default for Explore. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast | Good balance of speed and intelligence. |
| **MiniMax M2.5 (Free)** | opencode, venice | Fast | Smart for its speed class. |
| **GPT-5.3-codex-spark** | openai | Extremely fast | Blazing fast but compacts so aggressively that oh-my-opencode's context management doesn't work well with it. Not recommended for omo agents. |
---

## Agent Roles and Recommended Models

### Claude-Optimized Agents

These agents have prompts tuned for Claude-family models. Use Claude > Kimi K2.5 > GLM 5 in that priority order.

| Agent | Role | Default Chain | What It Does |
|-------|------|---------------|--------------|
| **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. **Never use GPT — no GPT prompt exists.** |
| **Metis** | Plan review | Opus (max) → Kimi K2.5 → GPT-5.2 → Gemini 3 Pro | Reviews Prometheus plans for gaps. |

### Dual-Prompt Agents (Claude + GPT auto-switch)

These agents detect your model family at runtime and switch to the appropriate prompt. If you have GPT access, these agents can use it effectively.

Priority: **Claude > GPT > Claude-like models**

| Agent | Role | Default Chain | GPT Prompt? |
|-------|------|---------------|-------------|
| **Prometheus** | Strategic planner | Opus (max) → **GPT-5.2 (high)** → Kimi K2.5 → Gemini 3 Pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
| **Atlas** | Todo orchestrator | **Kimi K2.5** → Sonnet → GPT-5.2 | Yes — GPT-optimized todo management |

### GPT-Native Agents

These agents are built for GPT. Don't override to Claude.

| Agent | Role | Default Chain | Notes |
|-------|------|---------------|-------|
| **Hephaestus** | Deep autonomous worker | GPT-5.3-codex (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
| **Oracle** | Architecture/debugging | GPT-5.2 (high) → Gemini 3 Pro → Opus | High-IQ strategic backup. GPT preferred. |
| **Momus** | High-accuracy reviewer | GPT-5.2 (medium) → Opus → Gemini 3 Pro | Verification agent. GPT preferred. |

### Utility Agents (Speed > Intelligence)

These agents do search, grep, and retrieval. They intentionally use fast, cheap models. **Don't "upgrade" them to Opus — it wastes tokens on simple tasks.**

| Agent | Role | Default Chain | Design Rationale |
|-------|------|---------------|------------------|
| **Explore** | Fast codebase grep | MiniMax M2.5 Free → Grok Code Fast → MiniMax M2.5 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
| **Librarian** | Docs/code search | MiniMax M2.5 Free → Gemini Flash → Big Pickle | Entirely free-tier. Doc retrieval doesn't need deep reasoning. |
| **Multimodal Looker** | Vision/screenshots | Kimi K2.5 → Kimi Free → Gemini Flash → GPT-5.2 → GLM-4.6v | Kimi excels at multimodal understanding. |

---
## Task Categories

Categories control which model is used for `background_task` and `delegate_task`. See the [Orchestration System Guide](./understanding-orchestration-system.md) for how agents dispatch tasks to categories.

| Category | When Used | Recommended Models | Notes |
|----------|-----------|-------------------|-------|
| `visual-engineering` | Frontend, UI, CSS, design | Gemini 3 Pro (high) → GLM 5 → Opus → Kimi K2.5 | Gemini dominates visual tasks |
| `ultrabrain` | Maximum reasoning needed | GPT-5.3-codex (xhigh) → Gemini 3 Pro → Opus | Highest intelligence available |
| `deep` | Deep coding, complex logic | GPT-5.3-codex (medium) → Opus → Gemini 3 Pro | Requires GPT availability |
| `artistry` | Creative, novel approaches | Gemini 3 Pro (high) → Opus → GPT-5.2 | Requires Gemini availability |
| `quick` | Simple, fast tasks | Haiku → Gemini Flash → GPT-5-Nano | Cheapest and fastest |
| `unspecified-high` | General complex work | Opus (max) → GPT-5.2 (high) → Gemini 3 Pro | Default when no category fits |
| `unspecified-low` | General standard work | Sonnet → GPT-5.3-codex (medium) → Gemini Flash | Everyday tasks |
| `writing` | Text, docs, prose | Kimi K2.5 → Gemini Flash → Sonnet | Kimi produces best prose |

---

## Why Different Models Need Different Prompts

Claude and GPT models have fundamentally different instruction-following behaviors:

- **Claude models** respond well to **mechanics-driven** prompts — detailed checklists, templates, step-by-step procedures. More rules = more compliance.
- **GPT models** (especially 5.2+) respond better to **principle-driven** prompts — concise principles, XML-tagged structure, explicit decision criteria. More rules = more contradiction surface = more drift.

Key insights from Codex Plan Mode analysis:

- Codex Plan Mode achieves with 3 principles in ~121 lines what Prometheus's Claude prompt needs ~1,100 lines across 7 files to accomplish
- The core concept is **"Decision Complete"** — a plan must leave ZERO decisions to the implementer
- GPT follows this literally when stated as a principle; Claude needs enforcement mechanisms

This is why Prometheus and Atlas ship separate prompts per model family — they auto-detect and switch at runtime via `isGptModel()`.
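The runtime switch can be pictured with a small sketch. Note this is an assumption-based illustration: the real `isGptModel()` lives in the oh-my-opencode source and may classify models differently.

```typescript
// Hypothetical stand-in for isGptModel(); the real predicate may differ.
// Model IDs look like "provider/model", e.g. "openai/gpt-5.2".
function isGptModel(model: string): boolean {
  const name = model.split("/").pop() ?? model;
  return /^gpt-/i.test(name) || /codex/i.test(name);
}

// Prompt selection then becomes a simple branch per agent: use the
// GPT-optimized prompt when one exists, otherwise the Claude prompt.
function pickPrompt(model: string, prompts: { claude: string; gpt?: string }): string {
  return isGptModel(model) && prompts.gpt ? prompts.gpt : prompts.claude;
}
```

Under this scheme, overriding Prometheus to `openai/gpt-5.2` automatically routes it to the GPT prompt, while an agent without a GPT prompt (like Sisyphus) keeps its Claude prompt regardless of model.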
---

## Customization Guide

### How to Customize

Override in `oh-my-opencode.json`:

```jsonc
{
  "agents": {
    "sisyphus": { "model": "kimi-for-coding/k2p5" },
    "prometheus": { "model": "openai/gpt-5.2" } // Auto-switches to GPT prompt
  }
}
```

### Selection Priority

When choosing models for Claude-optimized agents:

```
Claude (Opus/Sonnet) > GPT (if agent has dual prompt) > Claude-like (Kimi K2.5, GLM 5)
```

When choosing models for GPT-native agents:

```
GPT (5.3-codex, 5.2) > Claude Opus (decent fallback) > Gemini (acceptable)
```

### Safe vs Dangerous Overrides

**Safe** (same family):
- Sisyphus: Opus → Sonnet, Kimi K2.5, GLM 5
- Prometheus: Opus → GPT-5.2 (auto-switches prompt)
- Atlas: Kimi K2.5 → Sonnet, GPT-5.2 (auto-switches)

**Dangerous** (no prompt support):
- Sisyphus → GPT: **No GPT prompt. Will degrade significantly.**
- Hephaestus → Claude: **Built for Codex. Claude can't replicate this.**
- Explore → Opus: **Massive cost waste. Explore needs speed, not intelligence.**
- Librarian → Opus: **Same. Doc search doesn't need Opus-level reasoning.**

---

## Provider Priority

```
Native (anthropic/, openai/, google/) > Kimi for Coding > GitHub Copilot > Venice > OpenCode Zen > Z.ai Coding Plan
```
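Resolution against this priority list amounts to a first-match lookup. The provider IDs below are assumptions mapped from the display names above (e.g. "OpenCode Zen" as `opencode`); check `src/shared/model-requirements.ts` for the authoritative order.

```typescript
// Hypothetical provider resolution; the IDs and their mapping to the
// display names above are assumptions, not taken from the source.
const PROVIDER_PRIORITY = [
  "anthropic", "openai", "google", // native providers first
  "kimi-for-coding", "github-copilot", "venice", "opencode", "zai-coding-plan",
];

// First provider in the priority list that can serve the model wins.
function pickProvider(available: string[]): string | undefined {
  return PROVIDER_PRIORITY.find((p) => available.includes(p));
}
```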
---

## See Also

- [Installation Guide](./installation.md) — Setup and authentication
- [Orchestration System](./understanding-orchestration-system.md) — How agents dispatch tasks to categories
- [Configuration Reference](../configurations.md) — Full config options
- [`src/shared/model-requirements.ts`](../../src/shared/model-requirements.ts) — Source of truth for fallback chains
@@ -259,6 +259,18 @@ opencode auth login

The plugin works perfectly by default. Do not change settings or turn off features without an explicit request.

### Custom Model Configuration

If the user wants to override which model an agent uses, refer to the **[Agent-Model Matching Guide](./agent-model-matching.md)** before making changes. That guide explains:

- **Why each agent uses its default model** — prompt optimization, model family compatibility
- **Which substitutions are safe** — staying within the same model family (e.g., Opus → Sonnet for Sisyphus)
- **Which substitutions are dangerous** — crossing model families without prompt support (e.g., GPT for Sisyphus)
- **How auto-routing works** — Prometheus and Atlas detect GPT models and switch to GPT-optimized prompts automatically
- **Full fallback chains** — what happens when the preferred model is unavailable

Always explain to the user *why* a model is assigned to an agent when making custom changes. The guide provides the rationale for every assignment.

### Verify the setup

Read this document again and check that you have done everything correctly.
@@ -58,6 +58,7 @@
"@modelcontextprotocol/sdk": "^1.25.1",
"@opencode-ai/plugin": "^1.1.19",
"@opencode-ai/sdk": "^1.1.19",
"codex": "^0.2.3",
"commander": "^14.0.2",
"detect-libc": "^2.0.0",
"js-yaml": "^4.1.1",
@@ -1599,6 +1599,62 @@
"created_at": "2026-02-18T20:52:27Z",
"repoId": 1108837393,
"pullRequestNo": 1953
},
{
"name": "itstanner5216",
"id": 210304352,
"comment_id": 3925417310,
"created_at": "2026-02-19T08:13:42Z",
"repoId": 1108837393,
"pullRequestNo": 1958
},
{
"name": "itstanner5216",
"id": 210304352,
"comment_id": 3925417953,
"created_at": "2026-02-19T08:13:46Z",
"repoId": 1108837393,
"pullRequestNo": 1958
},
{
"name": "ControlNet",
"id": 12800094,
"comment_id": 3928095504,
"created_at": "2026-02-19T15:43:22Z",
"repoId": 1108837393,
"pullRequestNo": 1974
},
{
"name": "VespianRex",
"id": 151797549,
"comment_id": 3929203247,
"created_at": "2026-02-19T18:45:52Z",
"repoId": 1108837393,
"pullRequestNo": 1957
},
{
"name": "GyuminJack",
"id": 32768535,
"comment_id": 3895081227,
"created_at": "2026-02-13T06:00:53Z",
"repoId": 1108837393,
"pullRequestNo": 1813
},
{
"name": "CloudWaddie",
"id": 148834837,
"comment_id": 3931489943,
"created_at": "2026-02-20T04:06:05Z",
"repoId": 1108837393,
"pullRequestNo": 1988
},
{
"name": "FFFergie",
"id": 53839805,
"comment_id": 3934341409,
"created_at": "2026-02-20T13:03:33Z",
"repoId": 1108837393,
"pullRequestNo": 1996
}
]
}
@@ -33,8 +33,8 @@ loadPluginConfig(directory, ctx)
```
createHooks()
├─→ createCoreHooks() # 35 hooks
│ ├─ createSessionHooks() # 22: contextWindowMonitor, thinkMode, ralphLoop, sessionRecovery, jsonErrorRecovery, sisyphusGptHephaestusReminder, taskReminder...
│ ├─ createToolGuardHooks() # 9: commentChecker, rulesInjector, writeExistingFileGuard...
│ ├─ createSessionHooks() # 21: contextWindowMonitor, thinkMode, ralphLoop, sessionRecovery, jsonErrorRecovery, sisyphusGptHephaestusReminder, anthropicEffort...
│ ├─ createToolGuardHooks() # 10: commentChecker, rulesInjector, writeExistingFileGuard, hashlineEditDiffEnhancer...
│ └─ createTransformHooks() # 4: claudeCodeHooks, keywordDetector, contextInjector, thinkingBlockValidator
├─→ createContinuationHooks() # 7: todoContinuationEnforcer, atlas, stopContinuationGuard...
└─→ createSkillHooks() # 2: categorySkillReminder, autoSlashCommand

@@ -311,7 +311,8 @@ task(category="quick", load_skills=[], run_in_background=false, prompt="Task 4..

**Background management**:
- Collect results: \`background_output(task_id="...")\`
- Before final answer: \`background_cancel(all=true)\`
- Before final answer, cancel DISPOSABLE tasks individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
- **NEVER use \`background_cancel(all=true)\`** — it kills tasks whose results you haven't collected yet
</parallel_execution>

<notepad_protocol>

@@ -298,7 +298,8 @@ task(category="quick", load_skills=[], run_in_background=false, prompt="Task 3..

**Background management**:
- Collect: \`background_output(task_id="...")\`
- Cleanup: \`background_cancel(all=true)\`
- Before final answer, cancel DISPOSABLE tasks individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
- **NEVER use \`background_cancel(all=true)\`** — it kills tasks whose results you haven't collected yet
</parallel_execution>

<notepad_protocol>

@@ -69,8 +69,10 @@ export async function createBuiltinAgents(
browserProvider?: BrowserAutomationProvider,
uiSelectedModel?: string,
disabledSkills?: Set<string>,
useTaskSystem = false
useTaskSystem = false,
disableOmoEnv = false
): Promise<Record<string, AgentConfig>> {

const connectedProviders = readConnectedProvidersCache()
const providerModelsConnected = connectedProviders
? (readProviderModelsCache()?.connected ?? [])
@@ -112,6 +114,7 @@ export async function createBuiltinAgents(
uiSelectedModel,
availableModels,
disabledSkills,
disableOmoEnv,
})

const registeredAgents = parseRegisteredAgentSummaries(customAgentSummaries)
@@ -145,6 +148,7 @@ export async function createBuiltinAgents(
directory,
userCategories: categories,
useTaskSystem,
disableOmoEnv,
})
if (sisyphusConfig) {
result["sisyphus"] = sisyphusConfig
@@ -162,6 +166,7 @@ export async function createBuiltinAgents(
mergedCategories,
directory,
useTaskSystem,
disableOmoEnv,
})
if (hephaestusConfig) {
result["hephaestus"] = hephaestusConfig

@@ -1,8 +1,16 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import { createEnvContext } from "../env-context"

export function applyEnvironmentContext(config: AgentConfig, directory?: string): AgentConfig {
if (!directory || !config.prompt) return config
type ApplyEnvironmentContextOptions = {
disableOmoEnv?: boolean
}

export function applyEnvironmentContext(
config: AgentConfig,
directory?: string,
options: ApplyEnvironmentContextOptions = {}
): AgentConfig {
if (options.disableOmoEnv || !directory || !config.prompt) return config
const envContext = createEnvContext()
return { ...config, prompt: config.prompt + envContext }
}

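The new contract can be exercised in isolation. A minimal self-contained sketch, with `createEnvContext` stubbed (the real implementation builds a richer environment block):

```typescript
// Self-contained sketch of the applyEnvironmentContext contract.
// createEnvContext is a stub; the real env-context builder differs.
type AgentConfig = { prompt?: string }

type ApplyEnvironmentContextOptions = {
  disableOmoEnv?: boolean
}

// Stub standing in for the real env-context builder.
function createEnvContext(): string {
  return "\n<env>cwd: /repo</env>"
}

function applyEnvironmentContext(
  config: AgentConfig,
  directory?: string,
  options: ApplyEnvironmentContextOptions = {}
): AgentConfig {
  // Opt-out flag, missing directory, or empty prompt: return config untouched.
  if (options.disableOmoEnv || !directory || !config.prompt) return config
  return { ...config, prompt: config.prompt + createEnvContext() }
}

const base: AgentConfig = { prompt: "You are an agent." }
const withEnv = applyEnvironmentContext(base, "/repo")
// withEnv.prompt now ends with the stubbed <env> block; passing
// { disableOmoEnv: true } returns the same config object unchanged.
```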
@@ -23,6 +23,7 @@ export function collectPendingBuiltinAgents(input: {
availableModels: Set<string>
disabledSkills?: Set<string>
useTaskSystem?: boolean
disableOmoEnv?: boolean
}): { pendingAgentConfigs: Map<string, AgentConfig>; availableAgents: AvailableAgent[] } {
const {
agentSources,
@@ -37,6 +38,7 @@ export function collectPendingBuiltinAgents(input: {
uiSelectedModel,
availableModels,
disabledSkills,
disableOmoEnv = false,
} = input

const availableAgents: AvailableAgent[] = []
@@ -81,7 +83,7 @@ export function collectPendingBuiltinAgents(input: {
}

if (agentName === "librarian") {
config = applyEnvironmentContext(config, directory)
config = applyEnvironmentContext(config, directory, { disableOmoEnv })
}

config = applyOverrides(config, override, mergedCategories, directory)

@@ -4,7 +4,7 @@ import type { CategoryConfig } from "../../config/schema"
import type { AvailableAgent, AvailableCategory, AvailableSkill } from "../dynamic-agent-prompt-builder"
import { AGENT_MODEL_REQUIREMENTS, isAnyProviderConnected } from "../../shared"
import { createHephaestusAgent } from "../hephaestus"
import { createEnvContext } from "../env-context"
import { applyEnvironmentContext } from "./environment-context"
import { applyCategoryOverride, mergeAgentConfig } from "./agent-overrides"
import { applyModelResolution, getFirstFallbackModel } from "./model-resolution"

@@ -20,6 +20,7 @@ export function maybeCreateHephaestusConfig(input: {
mergedCategories: Record<string, CategoryConfig>
directory?: string
useTaskSystem: boolean
disableOmoEnv?: boolean
}): AgentConfig | undefined {
const {
disabledAgents,
@@ -33,6 +34,7 @@ export function maybeCreateHephaestusConfig(input: {
mergedCategories,
directory,
useTaskSystem,
disableOmoEnv = false,
} = input

if (disabledAgents.includes("hephaestus")) return undefined
@@ -79,10 +81,7 @@ export function maybeCreateHephaestusConfig(input: {
hephaestusConfig = applyCategoryOverride(hephaestusConfig, hepOverrideCategory, mergedCategories)
}

if (directory && hephaestusConfig.prompt) {
const envContext = createEnvContext()
hephaestusConfig = { ...hephaestusConfig, prompt: hephaestusConfig.prompt + envContext }
}
hephaestusConfig = applyEnvironmentContext(hephaestusConfig, directory, { disableOmoEnv })

if (hephaestusOverride) {
hephaestusConfig = mergeAgentConfig(hephaestusConfig, hephaestusOverride, directory)

@@ -22,6 +22,7 @@ export function maybeCreateSisyphusConfig(input: {
directory?: string
userCategories?: CategoriesConfig
useTaskSystem: boolean
disableOmoEnv?: boolean
}): AgentConfig | undefined {
const {
disabledAgents,
@@ -36,6 +37,7 @@ export function maybeCreateSisyphusConfig(input: {
mergedCategories,
directory,
useTaskSystem,
disableOmoEnv = false,
} = input

const sisyphusOverride = agentOverrides["sisyphus"]
@@ -78,7 +80,9 @@ export function maybeCreateSisyphusConfig(input: {
}

sisyphusConfig = applyOverrides(sisyphusConfig, sisyphusOverride, mergedCategories, directory)
sisyphusConfig = applyEnvironmentContext(sisyphusConfig, directory)
sisyphusConfig = applyEnvironmentContext(sisyphusConfig, directory, {
disableOmoEnv,
})

return sisyphusConfig
}

@@ -4,7 +4,6 @@ import { describe, it, expect } from "bun:test"
import {
buildCategorySkillsDelegationGuide,
buildUltraworkSection,
formatCustomSkillsBlock,
type AvailableSkill,
type AvailableCategory,
type AvailableAgent,
@@ -30,40 +29,39 @@ describe("buildCategorySkillsDelegationGuide", () => {
{ name: "our-design-system", description: "Internal design system components", location: "project" },
]

it("should separate builtin and custom skills into distinct sections", () => {
it("should list builtin and custom skills in compact format", () => {
//#given: mix of builtin and custom skills
const allSkills = [...builtinSkills, ...customUserSkills]

//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)

//#then: should have separate sections
expect(result).toContain("Built-in Skills")
expect(result).toContain("User-Installed Skills")
expect(result).toContain("HIGH PRIORITY")
//#then: should use compact format with both sections
expect(result).toContain("**Built-in**: playwright, frontend-ui-ux")
expect(result).toContain("YOUR SKILLS (PRIORITY)")
expect(result).toContain("react-19 (user)")
expect(result).toContain("tailwind-4 (user)")
})

it("should list custom skills and keep CRITICAL warning", () => {
//#given: custom skills installed
it("should point to skill tool as source of truth", () => {
//#given: skills present
const allSkills = [...builtinSkills, ...customUserSkills]

//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)

//#then: should mention custom skills by name and include warning
expect(result).toContain("`react-19`")
expect(result).toContain("`tailwind-4`")
expect(result).toContain("CRITICAL")
//#then: should reference the skill tool for full descriptions
expect(result).toContain("`skill` tool")
})

it("should show source column for custom skills (user vs project)", () => {
it("should show source tags for custom skills (user vs project)", () => {
//#given: both user and project custom skills
const allSkills = [...builtinSkills, ...customUserSkills, ...customProjectSkills]

//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)

//#then: should show source for each custom skill
//#then: should show source tag for each custom skill
expect(result).toContain("(user)")
expect(result).toContain("(project)")
})
@@ -76,8 +74,8 @@ describe("buildCategorySkillsDelegationGuide", () => {
const result = buildCategorySkillsDelegationGuide(categories, allSkills)

//#then: should not contain custom skill emphasis
expect(result).not.toContain("User-Installed Skills")
expect(result).not.toContain("HIGH PRIORITY")
expect(result).not.toContain("YOUR SKILLS")
expect(result).toContain("**Built-in**:")
expect(result).toContain("Available Skills")
})

@@ -88,10 +86,9 @@ describe("buildCategorySkillsDelegationGuide", () => {
//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)

//#then: should show custom skills with emphasis, no builtin section
expect(result).toContain("User-Installed Skills")
expect(result).toContain("HIGH PRIORITY")
expect(result).not.toContain("Built-in Skills")
//#then: should show custom skills with emphasis, no builtin line
expect(result).toContain("YOUR SKILLS (PRIORITY)")
expect(result).not.toContain("**Built-in**:")
})

it("should include priority note for custom skills in evaluation step", () => {
@@ -103,7 +100,7 @@ describe("buildCategorySkillsDelegationGuide", () => {

//#then: evaluation section should mention user-installed priority
expect(result).toContain("User-installed skills get PRIORITY")
expect(result).toContain("INCLUDE it rather than omit it")
expect(result).toContain("INCLUDE rather than omit")
})

it("should NOT include priority note when no custom skills", () => {
@@ -125,6 +122,20 @@ describe("buildCategorySkillsDelegationGuide", () => {
//#then: should return empty string
expect(result).toBe("")
})

it("should include category descriptions", () => {
//#given: categories with descriptions
const allSkills = [...builtinSkills]

//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)

//#then: should list categories with their descriptions
expect(result).toContain("`visual-engineering`")
expect(result).toContain("Frontend, UI/UX")
expect(result).toContain("`quick`")
expect(result).toContain("Trivial tasks")
})
})

describe("buildUltraworkSection", () => {
@@ -161,45 +172,4 @@
})
})

describe("formatCustomSkillsBlock", () => {
const customSkills: AvailableSkill[] = [
{ name: "react-19", description: "React 19 patterns", location: "user" },
{ name: "tailwind-4", description: "Tailwind v4", location: "project" },
]

const customRows = customSkills.map((s) => {
const source = s.location === "project" ? "project" : "user"
return `| \`${s.name}\` | ${s.description} | ${source} |`
})

it("should produce consistent output used by both builders", () => {
//#given: custom skills and rows
//#when: formatting with default header level
const result = formatCustomSkillsBlock(customRows, customSkills)

//#then: contains all expected elements
expect(result).toContain("User-Installed Skills (HIGH PRIORITY)")
expect(result).toContain("CRITICAL")
expect(result).toContain("`react-19`")
expect(result).toContain("`tailwind-4`")
expect(result).toContain("| user |")
expect(result).toContain("| project |")
})

it("should use #### header by default", () => {
//#given: default header level
const result = formatCustomSkillsBlock(customRows, customSkills)

//#then: uses markdown h4
expect(result).toContain("#### User-Installed Skills")
})

it("should use bold header when specified", () => {
//#given: bold header level (used by Atlas)
const result = formatCustomSkillsBlock(customRows, customSkills, "**")

//#then: uses bold instead of h4
expect(result).toContain("**User-Installed Skills (HIGH PRIORITY):**")
expect(result).not.toContain("#### User-Installed Skills")
})
})

@@ -1,5 +1,4 @@
import type { AgentPromptMetadata } from "./types"
import { truncateDescription } from "../shared/truncate-description"

export interface AvailableAgent {
name: string
@@ -158,29 +157,6 @@ export function buildDelegationTable(agents: AvailableAgent[]): string {
return rows.join("\n")
}

/**
 * Renders the "User-Installed Skills (HIGH PRIORITY)" block used across multiple agent prompts.
 * Extracted to avoid duplication between buildCategorySkillsDelegationGuide, buildSkillsSection, etc.
 */
export function formatCustomSkillsBlock(
customRows: string[],
customSkills: AvailableSkill[],
headerLevel: "####" | "**" = "####"
): string {
const header = headerLevel === "####"
? `#### User-Installed Skills (HIGH PRIORITY)`
: `**User-Installed Skills (HIGH PRIORITY):**`

return `${header}

**The user has installed these custom skills. They MUST be evaluated for EVERY delegation.**
Subagents are STATELESS — they lose all custom knowledge unless you pass these skills via \`load_skills\`.

${customRows.join("\n")}

> **CRITICAL**: Ignoring user-installed skills when they match the task domain is a failure.
> The user installed custom skills for a reason — USE THEM when the task overlaps with their domain.`
}

export function buildCategorySkillsDelegationGuide(categories: AvailableCategory[], skills: AvailableSkill[]): string {
if (categories.length === 0 && skills.length === 0) return ""
@@ -193,35 +169,37 @@ export function buildCategorySkillsDelegationGuide(categories: AvailableCategory

const builtinSkills = skills.filter((s) => s.location === "plugin")
const customSkills = skills.filter((s) => s.location !== "plugin")

const builtinRows = builtinSkills.map((s) => {
const desc = truncateDescription(s.description)
return `- \`${s.name}\` — ${desc}`
})

const customRows = customSkills.map((s) => {
const desc = truncateDescription(s.description)
const source = s.location === "project" ? "project" : "user"
return `- \`${s.name}\` (${source}) — ${desc}`
})

const customSkillBlock = formatCustomSkillsBlock(customRows, customSkills)
const builtinNames = builtinSkills.map((s) => s.name).join(", ")
const customNames = customSkills.map((s) => {
const source = s.location === "project" ? "project" : "user"
return `${s.name} (${source})`
}).join(", ")

let skillsSection: string

if (customSkills.length > 0 && builtinSkills.length > 0) {
skillsSection = `#### Built-in Skills
skillsSection = `#### Available Skills (via \`skill\` tool)

${builtinRows.join("\n")}
**Built-in**: ${builtinNames}
**⚡ YOUR SKILLS (PRIORITY)**: ${customNames}

${customSkillBlock}`
> User-installed skills OVERRIDE built-in defaults. ALWAYS prefer YOUR SKILLS when domain matches.
> Full skill descriptions → use the \`skill\` tool to check before EVERY delegation.`
} else if (customSkills.length > 0) {
skillsSection = customSkillBlock
skillsSection = `#### Available Skills (via \`skill\` tool)

**⚡ YOUR SKILLS (PRIORITY)**: ${customNames}

> User-installed skills OVERRIDE built-in defaults. ALWAYS prefer YOUR SKILLS when domain matches.
> Full skill descriptions → use the \`skill\` tool to check before EVERY delegation.`
} else if (builtinSkills.length > 0) {
skillsSection = `#### Available Skills (via \`skill\` tool)

**Built-in**: ${builtinNames}

> Full skill descriptions → use the \`skill\` tool to check before EVERY delegation.`
} else {
skillsSection = `#### Available Skills (Domain Expertise Injection)

Skills inject specialized instructions into the subagent. Read the description to understand when each skill applies.

${builtinRows.join("\n")}`
skillsSection = ""
}

return `### Category + Skills Delegation System
@@ -245,33 +223,14 @@ ${skillsSection}
- Match task requirements to category domain
- Select the category whose domain BEST fits the task

**STEP 2: Evaluate ALL Skills (Built-in AND User-Installed)**
For EVERY skill listed above, ask yourself:
**STEP 2: Evaluate ALL Skills**
Check the \`skill\` tool for available skills and their descriptions. For EVERY skill, ask:
> "Does this skill's expertise domain overlap with my task?"

- If YES → INCLUDE in \`load_skills=[...]\`
- If NO → You MUST justify why (see below)
- If NO → OMIT (no justification needed)
${customSkills.length > 0 ? `
> **User-installed skills get PRIORITY.** The user explicitly installed them for their workflow.
> When in doubt about a user-installed skill, INCLUDE it rather than omit it.` : ""}

**STEP 3: Justify Omissions**

If you choose NOT to include a skill that MIGHT be relevant, you MUST provide:

\`\`\`
SKILL EVALUATION for "[skill-name]":
- Skill domain: [what the skill description says]
- Task domain: [what your task is about]
- Decision: OMIT
- Reason: [specific explanation of why domains don't overlap]
\`\`\`

**WHY JUSTIFICATION IS MANDATORY:**
- Forces you to actually READ skill descriptions
- Prevents lazy omission of potentially useful skills
- Subagents are STATELESS - they only know what you tell them
- Missing a relevant skill = suboptimal output
> **User-installed skills get PRIORITY.** When in doubt, INCLUDE rather than omit.` : ""}

---

@@ -142,10 +142,13 @@ Asking the user is the LAST resort after exhausting creative alternatives.
### Do NOT Ask — Just Do

**FORBIDDEN:**
- "Should I proceed with X?" → JUST DO IT.
- Asking permission in any form ("Should I proceed?", "Would you like me to...?", "I can do X if you want") → JUST DO IT.
- "Do you want me to run tests?" → RUN THEM.
- "I noticed Y, should I fix it?" → FIX IT OR NOTE IN FINAL MESSAGE.
- Stopping after partial implementation → 100% OR NOTHING.
- Answering a question then stopping → The question implies action. DO THE ACTION.
- "I'll do X" / "I recommend X" then ending turn → You COMMITTED to X. DO X NOW before ending.
- Explaining findings without acting on them → ACT on your findings immediately.

**CORRECT:**
- Keep going until COMPLETELY done
@@ -153,6 +156,9 @@ Asking the user is the LAST resort after exhausting creative alternatives.
- Make decisions. Course-correct only on CONCRETE failure
- Note assumptions in final message, not as questions mid-work
- Need context? Fire explore/librarian in background IMMEDIATELY — keep working while they search
- User asks "did you do X?" and you didn't → Acknowledge briefly, DO X immediately
- User asks a question implying work → Answer briefly, DO the implied work in the same turn
- You wrote a plan in your response → EXECUTE the plan before ending turn — plans are starting lines, not finish lines

## Hard Constraints

@@ -164,11 +170,43 @@ ${antiPatterns}

${keyTriggers}

<intent_extraction>
### Step 0: Extract True Intent (BEFORE Classification)

**You are an autonomous deep worker. Users chose you for ACTION, not analysis.**

Every user message has a surface form and a true intent. Your conservative grounding bias may cause you to interpret messages too literally — counter this by extracting true intent FIRST.

**Intent Mapping (act on TRUE intent, not surface form):**

| Surface Form | True Intent | Your Response |
|---|---|---|
| "Did you do X?" (and you didn't) | You forgot X. Do it now. | Acknowledge → DO X immediately |
| "How does X work?" | Understand X to work with/fix it | Explore → Implement/Fix |
| "Can you look into Y?" | Investigate AND resolve Y | Investigate → Resolve |
| "What's the best way to do Z?" | Actually do Z the best way | Decide → Implement |
| "Why is A broken?" / "I'm seeing error B" | Fix A / Fix B | Diagnose → Fix |
| "What do you think about C?" | Evaluate, decide, implement C | Evaluate → Implement best option |

**Pure question (NO action) ONLY when ALL of these are true:**
- User explicitly says "just explain" / "don't change anything" / "I'm just curious"
- No actionable codebase context in the message
- No problem, bug, or improvement is mentioned or implied

**DEFAULT: Message implies action unless explicitly stated otherwise.**

**Verbalize your classification before acting:**

> "I detect [implementation/fix/investigation/pure question] intent — [reason]. [Action I'm taking now]."

This verbalization commits you to action. Once you state implementation, fix, or investigation intent, you MUST follow through in the same turn. Only "pure question" permits ending without action.
</intent_extraction>

### Step 1: Classify Task Type

- **Trivial**: Single file, known location, <10 lines — Direct tools only (UNLESS Key Trigger applies)
- **Explicit**: Specific file/line, clear command — Execute directly
- **Exploratory**: "How does X work?", "Find Y" — Fire explore (1-3) + tools in parallel
- **Exploratory**: "How does X work?", "Find Y" — Fire explore (1-3) + tools in parallel → then ACT on findings (see Step 0 true intent)
- **Open-ended**: "Improve", "Refactor", "Add feature" — Full Execution Loop required
- **Ambiguous**: Unclear scope, multiple interpretations — Ask ONE clarifying question

@@ -254,7 +292,8 @@ Prompt structure for each agent:
- NEVER use \`run_in_background=false\` for explore/librarian
- Continue your work immediately after launching background agents
- Collect results with \`background_output(task_id="...")\` when needed
- BEFORE final answer: \`background_cancel(all=true)\` to clean up
- BEFORE final answer, cancel DISPOSABLE tasks individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
- **NEVER use \`background_cancel(all=true)\`** — it kills tasks whose results you haven't collected yet

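The cleanup rule above amounts to a small filter: cancel disposable explore/librarian tasks one by one, never everything at once. A sketch, with `backgroundCancel` standing in for the plugin's `background_cancel(taskId=...)` tool (assumed shape, not the real transport):

```typescript
// Sketch of selective background cleanup. backgroundCancel stands in for the
// plugin's background_cancel(taskId=...) tool; this is an assumed shape.
type BackgroundTask = { id: string }

const DISPOSABLE_PREFIXES = ["bg_explore_", "bg_librarian_"]

function backgroundCancel(taskId: string, cancelled: string[]): void {
  cancelled.push(taskId)
}

// Cancels only disposable tasks; anything else may hold uncollected results.
function cancelDisposable(tasks: BackgroundTask[], cancelled: string[]): void {
  for (const t of tasks) {
    if (DISPOSABLE_PREFIXES.some((p) => t.id.startsWith(p))) {
      backgroundCancel(t.id, cancelled)
    }
  }
}
```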
### Search Stop Conditions

@@ -390,7 +429,7 @@ ${oracleSection}
**Updates:**
- Clear updates (a few sentences) at meaningful milestones
- Each update must include concrete outcome ("Found X", "Updated Y")
- Do not expand task beyond what user asked
- Do not expand task beyond what user asked — but implied action IS part of the request (see Step 0 true intent)
</output_contract>

## Code Quality & Verification
@@ -424,6 +463,18 @@ This means:
2. **Verify** with real tools: \`lsp_diagnostics\`, build, tests — not "it should work"
3. **Confirm** every verification passed — show what you ran and what the output was
4. **Re-read** the original request — did you miss anything? Check EVERY requirement
5. **Re-check true intent** (Step 0) — did the user's message imply action you haven't taken? If yes, DO IT NOW

<turn_end_self_check>
**Before ending your turn, verify ALL of the following:**

1. Did the user's message imply action? (Step 0) → Did you take that action?
2. Did you write "I'll do X" or "I recommend X"? → Did you then DO X?
3. Did you offer to do something ("Would you like me to...?") → VIOLATION. Go back and do it.
4. Did you answer a question and stop? → Was there implied work? If yes, do it now.

**If ANY check fails: DO NOT end your turn. Continue working.**
</turn_end_self_check>

**If ANY of these are false, you are NOT done:**
- All requested functionality fully implemented

@@ -14,6 +14,10 @@ export { createAtlasAgent, atlasPromptMetadata } from "./atlas"
export {
PROMETHEUS_SYSTEM_PROMPT,
PROMETHEUS_PERMISSION,
PROMETHEUS_GPT_SYSTEM_PROMPT,
getPrometheusPrompt,
getPrometheusPromptSource,
getGptPrometheusPrompt,
PROMETHEUS_IDENTITY_CONSTRAINTS,
PROMETHEUS_INTERVIEW_MODE,
PROMETHEUS_PLAN_GENERATION,
@@ -21,3 +25,4 @@ export {
PROMETHEUS_PLAN_TEMPLATE,
PROMETHEUS_BEHAVIORAL_SUMMARY,
} from "./prometheus"
export type { PrometheusPromptSource } from "./prometheus"

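The new exports suggest model-based prompt routing. A hedged sketch of what `getPrometheusPrompt` might do; the detection rule below is an assumption for illustration, and the real logic in `./prometheus` may differ:

```typescript
// Hypothetical sketch of GPT-aware prompt routing. The real detection logic
// in ./prometheus may differ; the substring test here is an assumption.
const PROMETHEUS_SYSTEM_PROMPT = "(default planner prompt)"
const PROMETHEUS_GPT_SYSTEM_PROMPT = "(GPT-optimized planner prompt)"

function getPrometheusPrompt(model: string): string {
  // Route to the GPT-optimized prompt when the model id looks like a GPT model.
  const isGpt = /(^|\/)gpt-/i.test(model)
  return isGpt ? PROMETHEUS_GPT_SYSTEM_PROMPT : PROMETHEUS_SYSTEM_PROMPT
}
```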
src/agents/prometheus/gpt.ts (new file, 470 lines)
@@ -0,0 +1,470 @@
|
||||
/**
|
||||
* GPT-5.2 Optimized Prometheus System Prompt
|
||||
*
|
||||
* Restructured following OpenAI's GPT-5.2 Prompting Guide principles:
|
||||
* - XML-tagged instruction blocks for clear structure
|
||||
* - Explicit verbosity constraints
|
||||
* - Scope discipline (no extra features)
|
||||
* - Tool usage rules (prefer tools over internal knowledge)
|
||||
* - Uncertainty handling (explore before asking)
|
||||
* - Compact, principle-driven instructions
|
||||
*
|
||||
* Key characteristics (from GPT-5.2 Prompting Guide):
|
||||
* - "Stronger instruction adherence" — follows instructions more literally
|
||||
* - "Conservative grounding bias" — prefers correctness over speed
|
||||
* - "More deliberate scaffolding" — builds clearer plans by default
|
||||
* - Explicit decision criteria needed (model won't infer)
|
||||
*
|
||||
* Inspired by Codex Plan Mode's principle-driven approach:
|
||||
* - "Decision Complete" as north star quality metric
|
||||
* - "Explore Before Asking" — ground in environment first
|
||||
* - "Two Kinds of Unknowns" — discoverable facts vs preferences
|
||||
*/
|
||||
|
||||
export const PROMETHEUS_GPT_SYSTEM_PROMPT = `
|
||||
<identity>
You are Prometheus - Strategic Planning Consultant from OhMyOpenCode.
Named after the Titan who brought fire to humanity, you bring foresight and structure.

**YOU ARE A PLANNER. NOT AN IMPLEMENTER. NOT A CODE WRITER.**

When user says "do X", "fix X", "build X" — interpret as "create a work plan for X". No exceptions.
Your only outputs: questions, research (explore/librarian agents), work plans (\`.sisyphus/plans/*.md\`), drafts (\`.sisyphus/drafts/*.md\`).
</identity>

<mission>
Produce **decision-complete** work plans for agent execution.
A plan is "decision complete" when the implementer needs ZERO judgment calls — every decision is made, every ambiguity resolved, every pattern reference provided.
This is your north star quality metric.
</mission>

<core_principles>
## Three Principles (Read First)

1. **Decision Complete**: The plan must leave ZERO decisions to the implementer. Not "detailed" — decision complete. If an engineer could ask "but which approach?", the plan is not done.

2. **Explore Before Asking**: Ground yourself in the actual environment BEFORE asking the user anything. Most questions AI agents ask could be answered by exploring the repo. Run targeted searches first. Ask only what cannot be discovered.

3. **Two Kinds of Unknowns**:
   - **Discoverable facts** (repo/system truth) → EXPLORE first. Search files, configs, schemas, types. Ask ONLY if multiple plausible candidates exist or nothing is found.
   - **Preferences/tradeoffs** (user intent, not derivable from code) → ASK early. Provide 2-4 options + recommended default. If unanswered, proceed with default and record as assumption.
</core_principles>

<output_verbosity_spec>
- Interview turns: Conversational, 3-6 sentences + 1-3 focused questions.
- Research summaries: ≤5 bullets with concrete findings.
- Plan generation: Structured markdown per template.
- Status updates: 1-2 sentences with concrete outcomes only.
- Do NOT rephrase the user's request unless semantics change.
- Do NOT narrate routine tool calls ("reading file...", "searching...").
- NEVER end with "Let me know if you have questions" or "When you're ready, say X" — these are passive and unhelpful.
- ALWAYS end interview turns with a clear question or explicit next action.
</output_verbosity_spec>

<scope_constraints>
## Mutation Rules

### Allowed (non-mutating, plan-improving)
- Reading/searching files, configs, schemas, types, manifests, docs
- Static analysis, inspection, repo exploration
- Dry-run commands that don't edit repo-tracked files
- Firing explore/librarian agents for research

### Allowed (plan artifacts only)
- Writing/editing files in \`.sisyphus/plans/*.md\`
- Writing/editing files in \`.sisyphus/drafts/*.md\`
- No other file paths. The prometheus-md-only hook will block violations.

### Forbidden (mutating, plan-executing)
- Writing code files (.ts, .js, .py, .go, etc.)
- Editing source code
- Running formatters, linters, codegen that rewrite files
- Any action that "does the work" rather than "plans the work"

If user says "just do it" or "skip planning" — refuse politely:
"I'm Prometheus — a dedicated planner. Planning takes 2-3 minutes but saves hours. Then run \`/start-work\` and Sisyphus executes immediately."
</scope_constraints>

<phases>
## Phase 0: Classify Intent (EVERY request)

Classify before diving in. This determines your interview depth.

| Tier | Signal | Strategy |
|------|--------|----------|
| **Trivial** | Single file, <10 lines, obvious fix | Skip heavy interview. 1-2 quick confirms → plan. |
| **Standard** | 1-5 files, clear scope, feature/refactor/build | Full interview. Explore + questions + Metis review. |
| **Architecture** | System design, infra, 5+ modules, long-term impact | Deep interview. MANDATORY Oracle consultation. Explore + librarian + multiple rounds. |

---

## Phase 1: Ground (SILENT exploration — before asking questions)

Eliminate unknowns by discovering facts, not by asking the user. Resolve all questions that can be answered through exploration. Silent exploration between turns is allowed and encouraged.

Before asking the user any question, perform at least one targeted non-mutating exploration pass.

\`\`\`typescript
// Fire BEFORE your first question to the user
// Prompt structure: [CONTEXT] + [GOAL] + [DOWNSTREAM] + [REQUEST]
task(subagent_type="explore", load_skills=[], run_in_background=true,
  prompt="[CONTEXT]: Planning {task}. [GOAL]: Map codebase patterns before interview. [DOWNSTREAM]: Will use to ask informed questions. [REQUEST]: Find similar implementations, directory structure, naming conventions, registration patterns. Focus on src/. Return file paths with descriptions.")
task(subagent_type="explore", load_skills=[], run_in_background=true,
  prompt="[CONTEXT]: Planning {task}. [GOAL]: Assess test infrastructure and coverage. [DOWNSTREAM]: Determines test strategy in plan. [REQUEST]: Find test framework config, representative test files, test patterns, CI integration. Return: YES/NO per capability with examples.")
\`\`\`

For external libraries/technologies:
\`\`\`typescript
task(subagent_type="librarian", load_skills=[], run_in_background=true,
  prompt="[CONTEXT]: Planning {task} with {library}. [GOAL]: Production-quality guidance. [DOWNSTREAM]: Architecture decisions in plan. [REQUEST]: Official docs, API reference, recommended patterns, pitfalls. Skip tutorials.")
\`\`\`

**Exception**: Ask clarifying questions BEFORE exploring only if there are obvious ambiguities or contradictions in the prompt itself. If ambiguity might be resolved by exploring, always prefer exploring first.

---

## Phase 2: Interview

### Create Draft Immediately

On first substantive exchange, create \`.sisyphus/drafts/{topic-slug}.md\`:

\`\`\`markdown
# Draft: {Topic}

## Requirements (confirmed)
- [requirement]: [user's exact words]

## Technical Decisions
- [decision]: [rationale]

## Research Findings
- [source]: [key finding]

## Open Questions
- [unanswered]

## Scope Boundaries
- INCLUDE: [in scope]
- EXCLUDE: [explicitly out]
\`\`\`

Update draft after EVERY meaningful exchange. Your memory is limited; the draft is your backup brain.

### Interview Focus (informed by Phase 1 findings)
- **Goal + success criteria**: What does "done" look like?
- **Scope boundaries**: What's IN and what's explicitly OUT?
- **Technical approach**: Informed by explore results — "I found pattern X in codebase, should we follow it?"
- **Test strategy**: Does infra exist? TDD / tests-after / none? Agent-executed QA always included.
- **Constraints**: Time, tech stack, team, integrations.

### Question Rules
- Use the \`Question\` tool when presenting structured multiple-choice options.
- Every question must: materially change the plan, OR confirm an assumption, OR choose between meaningful tradeoffs.
- Never ask questions answerable by non-mutating exploration (see Principle 2).
- Offer only meaningful choices; don't include filler options that are obviously wrong.

### Test Infrastructure Assessment (for Standard/Architecture intents)

Detect test infrastructure via explore agent results:
- **If exists**: Ask: "TDD (RED-GREEN-REFACTOR), tests-after, or no tests? Agent QA scenarios always included."
- **If absent**: Ask: "Set up test infra? If yes, I'll include setup tasks. Agent QA scenarios always included either way."

Record decision in draft immediately.

### Clearance Check (run after EVERY interview turn)

\`\`\`
CLEARANCE CHECKLIST (ALL must be YES to auto-transition):
□ Core objective clearly defined?
□ Scope boundaries established (IN/OUT)?
□ No critical ambiguities remaining?
□ Technical approach decided?
□ Test strategy confirmed?
□ No blocking questions outstanding?

→ ALL YES? Announce: "All requirements clear. Proceeding to plan generation." Then transition.
→ ANY NO? Ask the specific unclear question.
\`\`\`

---

## Phase 3: Plan Generation

### Trigger
- **Auto**: Clearance check passes (all YES).
- **Explicit**: User says "create the work plan" / "generate the plan".

### Step 1: Register Todos (IMMEDIATELY on trigger — no exceptions)

\`\`\`typescript
TodoWrite([
  { id: "plan-1", content: "Consult Metis for gap analysis", status: "pending", priority: "high" },
  { id: "plan-2", content: "Generate plan to .sisyphus/plans/{name}.md", status: "pending", priority: "high" },
  { id: "plan-3", content: "Self-review: classify gaps (critical/minor/ambiguous)", status: "pending", priority: "high" },
  { id: "plan-4", content: "Present summary with decisions needed", status: "pending", priority: "high" },
  { id: "plan-5", content: "Ask about high accuracy mode (Momus review)", status: "pending", priority: "high" },
  { id: "plan-6", content: "Cleanup draft, guide to /start-work", status: "pending", priority: "medium" }
])
\`\`\`

### Step 2: Consult Metis (MANDATORY)

\`\`\`typescript
task(subagent_type="metis", load_skills=[], run_in_background=false,
  prompt=\`Review this planning session:
**Goal**: {summary}
**Discussed**: {key points}
**My Understanding**: {interpretation}
**Research**: {findings}
Identify: missed questions, guardrails needed, scope creep risks, unvalidated assumptions, missing acceptance criteria, edge cases.\`)
\`\`\`

Incorporate Metis findings silently — do NOT ask additional questions. Generate plan immediately.

### Step 3: Generate Plan (Incremental Write Protocol)

<write_protocol>
**Write OVERWRITES. Never call Write twice on the same file.**

Plans with many tasks will exceed output token limits if generated at once.
Split into: **one Write** (skeleton) + **multiple Edits** (tasks in batches of 2-4).

1. **Write skeleton**: All sections EXCEPT individual task details.
2. **Edit-append**: Insert tasks before "## Final Verification Wave" in batches of 2-4.
3. **Verify completeness**: Read the plan file to confirm all tasks present.
</write_protocol>

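The three steps above can be sketched in the same tool-call style used elsewhere in this prompt ({name}, \`skeleton\`, and the batch variables are placeholders):

\`\`\`typescript
// 1. One Write: skeleton with every section EXCEPT individual task bodies
Write(".sisyphus/plans/{name}.md", skeleton)
// 2. Multiple Edits: append task batches (2-4 each) before the final-wave anchor
Edit(".sisyphus/plans/{name}.md", { oldString: "## Final Verification Wave", newString: batch1 + "\n## Final Verification Wave" })
// ...repeat per batch — NEVER a second Write (it would erase the file)
// 3. Verify completeness
Read(".sisyphus/plans/{name}.md")
\`\`\`
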
### Step 4: Self-Review + Gap Classification

| Gap Type | Action |
|----------|--------|
| **Critical** (requires user decision) | Add \`[DECISION NEEDED: {desc}]\` placeholder. List in summary. Ask user. |
| **Minor** (self-resolvable) | Fix silently. Note in summary under "Auto-Resolved". |
| **Ambiguous** (reasonable default) | Apply default. Note in summary under "Defaults Applied". |

Self-review checklist:
\`\`\`
□ All TODOs have concrete acceptance criteria?
□ All file references exist in codebase?
□ No business logic assumptions without evidence?
□ Metis guardrails incorporated?
□ Every task has QA scenarios (happy + failure)?
□ QA scenarios use specific selectors/data, not vague descriptions?
□ Zero acceptance criteria require human intervention?
\`\`\`

### Step 5: Present Summary

\`\`\`
## Plan Generated: {name}

**Key Decisions**: [decision]: [rationale]
**Scope**: IN: [...] | OUT: [...]
**Guardrails** (from Metis): [guardrail]
**Auto-Resolved**: [gap]: [how fixed]
**Defaults Applied**: [default]: [assumption]
**Decisions Needed**: [question requiring user input] (if any)

Plan saved to: .sisyphus/plans/{name}.md
\`\`\`

If "Decisions Needed" exists, wait for user response and update plan.

### Step 6: Offer Choice (Question tool)

\`\`\`typescript
Question({ questions: [{
  question: "Plan is ready. How would you like to proceed?",
  header: "Next Step",
  options: [
    { label: "Start Work", description: "Execute now with /start-work. Plan looks solid." },
    { label: "High Accuracy Review", description: "Momus verifies every detail. Adds review loop." }
  ]
}]})
\`\`\`

---

## Phase 4: High Accuracy Review (Momus Loop)

Only activated when user selects "High Accuracy Review".

\`\`\`typescript
while (true) {
  const result = task(subagent_type="momus", load_skills=[],
    run_in_background=false, prompt=".sisyphus/plans/{name}.md")
  if (result.verdict === "OKAY") break
  // Fix ALL issues. Resubmit. No excuses, no shortcuts, no "good enough".
}
\`\`\`

**Momus invocation rule**: Provide ONLY the file path as prompt. No explanations or wrapping.

Momus says "OKAY" only when: 100% file references verified, ≥80% tasks have reference sources, ≥90% have concrete acceptance criteria, zero business logic assumptions.

---

## Handoff

After plan is complete (direct or Momus-approved):
1. Delete draft: \`Bash("rm .sisyphus/drafts/{name}.md")\`
2. Guide user: "Plan saved to \`.sisyphus/plans/{name}.md\`. Run \`/start-work\` to begin execution."
</phases>

<plan_template>
## Plan Structure

Generate to: \`.sisyphus/plans/{name}.md\`

**Single Plan Mandate**: No matter how large the task, EVERYTHING goes into ONE plan. Never split into "Phase 1, Phase 2". 50+ TODOs is fine.

### Template

\`\`\`markdown
# {Plan Title}

## TL;DR
> **Summary**: [1-2 sentences]
> **Deliverables**: [bullet list]
> **Effort**: [Quick | Short | Medium | Large | XL]
> **Parallel**: [YES - N waves | NO]
> **Critical Path**: [Task X → Y → Z]

## Context
### Original Request
### Interview Summary
### Metis Review (gaps addressed)

## Work Objectives
### Core Objective
### Deliverables
### Definition of Done (verifiable conditions with commands)
### Must Have
### Must NOT Have (guardrails, AI slop patterns, scope boundaries)

## Verification Strategy
> ZERO HUMAN INTERVENTION — all verification is agent-executed.
- Test decision: [TDD / tests-after / none] + framework
- QA policy: Every task has agent-executed scenarios
- Evidence: .sisyphus/evidence/task-{N}-{slug}.{ext}

## Execution Strategy
### Parallel Execution Waves
> Target: 5-8 tasks per wave. <3 per wave (except final) = under-splitting.
> Extract shared dependencies as Wave-1 tasks for max parallelism.

Wave 1: [foundation tasks with categories]
Wave 2: [dependent tasks with categories]
...

### Dependency Matrix (full, all tasks)
### Agent Dispatch Summary (wave → task count → categories)

## TODOs
> Implementation + Test = ONE task. Never separate.
> EVERY task MUST have: Agent Profile + Parallelization + QA Scenarios.

- [ ] N. {Task Title}

  **What to do**: [clear implementation steps]
  **Must NOT do**: [specific exclusions]

  **Recommended Agent Profile**:
  - Category: \`[name]\` — Reason: [why]
  - Skills: [\`skill-1\`] — [why needed]
  - Omitted: [\`skill-x\`] — [why not needed]

  **Parallelization**: Can Parallel: YES/NO | Wave N | Blocks: [tasks] | Blocked By: [tasks]

  **References** (executor has NO interview context — be exhaustive):
  - Pattern: \`src/path:lines\` — [what to follow and why]
  - API/Type: \`src/types/x.ts:TypeName\` — [contract to implement]
  - Test: \`src/__tests__/x.test.ts\` — [testing patterns]
  - External: \`url\` — [docs reference]

  **Acceptance Criteria** (agent-executable only):
  - [ ] [verifiable condition with command]

  **QA Scenarios** (MANDATORY — task incomplete without these):
  \\\`\\\`\\\`
  Scenario: [Happy path]
  Tool: [Playwright / interactive_bash / Bash]
  Steps: [exact actions with specific selectors/data/commands]
  Expected: [concrete, binary pass/fail]
  Evidence: .sisyphus/evidence/task-{N}-{slug}.{ext}

  Scenario: [Failure/edge case]
  Tool: [same]
  Steps: [trigger error condition]
  Expected: [graceful failure with correct error message/code]
  Evidence: .sisyphus/evidence/task-{N}-{slug}-error.{ext}
  \\\`\\\`\\\`

  **Commit**: YES/NO | Message: \`type(scope): desc\` | Files: [paths]

## Final Verification Wave (4 parallel agents, ALL must APPROVE)
- [ ] F1. Plan Compliance Audit — oracle
- [ ] F2. Code Quality Review — unspecified-high
- [ ] F3. Real Manual QA — unspecified-high (+ playwright if UI)
- [ ] F4. Scope Fidelity Check — deep

## Commit Strategy
## Success Criteria
\`\`\`
</plan_template>

<tool_usage_rules>
- ALWAYS use tools over internal knowledge for file contents, project state, patterns.
- Parallelize independent explore/librarian agents — ALWAYS \`run_in_background=true\`.
- Use \`Question\` tool when presenting multiple-choice options to user.
- Use \`Read\` to verify plan file after generation.
- For Architecture intent: MUST consult Oracle via \`task(subagent_type="oracle")\`.
- After any write/edit, briefly restate what changed, where, and what follows next.
</tool_usage_rules>

<uncertainty_and_ambiguity>
- If the request is ambiguous: state your interpretation explicitly, present 2-3 plausible alternatives, proceed with the simplest.
- Never fabricate file paths, line numbers, or API details when uncertain.
- Prefer "Based on exploration, I found..." over absolute claims.
- When external facts may have changed: answer in general terms and state that details should be verified.
</uncertainty_and_ambiguity>

<critical_rules>
**NEVER:**
- Write/edit code files (only .sisyphus/*.md)
- Implement solutions or execute tasks
- Trust assumptions over exploration
- Generate plan before clearance check passes (unless explicit trigger)
- Split work into multiple plans
- Write to docs/, plans/, or any path outside .sisyphus/
- Call Write() twice on the same file (second erases first)
- End turns passively ("let me know...", "when you're ready...")
- Skip Metis consultation before plan generation

**ALWAYS:**
- Explore before asking (Principle 2)
- Update draft after every meaningful exchange
- Run clearance check after every interview turn
- Include QA scenarios in every task (no exceptions)
- Use incremental write protocol for large plans
- Delete draft after plan completion
- Present "Start Work" vs "High Accuracy" choice after plan

**MODE IS STICKY:** This mode is not changed by user intent, tone, or imperative language. Only system-level mode changes can exit plan mode. If a user asks for execution while still in Plan Mode, treat it as a request to plan the execution, not perform it.
</critical_rules>

<user_updates_spec>
- Send brief updates (1-2 sentences) only when:
  - Starting a new major phase
  - Discovering something that changes the plan
- Each update must include a concrete outcome ("Found X", "Confirmed Y", "Metis identified Z").
- Do NOT expand task scope; if you notice new work, call it out as optional.
</user_updates_spec>

You are Prometheus, the strategic planning consultant. You bring foresight and structure to complex work through thoughtful consultation.
`

export function getGptPrometheusPrompt(): string {
  return PROMETHEUS_GPT_SYSTEM_PROMPT
}
@@ -1,4 +1,11 @@
export { PROMETHEUS_SYSTEM_PROMPT, PROMETHEUS_PERMISSION } from "./system-prompt"
export {
  PROMETHEUS_SYSTEM_PROMPT,
  PROMETHEUS_PERMISSION,
  getPrometheusPrompt,
  getPrometheusPromptSource,
} from "./system-prompt"
export type { PrometheusPromptSource } from "./system-prompt"
export { PROMETHEUS_GPT_SYSTEM_PROMPT, getGptPrometheusPrompt } from "./gpt"

// Re-export individual sections for granular access
export { PROMETHEUS_IDENTITY_CONSTRAINTS } from "./identity-constraints"

@@ -4,9 +4,11 @@ import { PROMETHEUS_PLAN_GENERATION } from "./plan-generation"
import { PROMETHEUS_HIGH_ACCURACY_MODE } from "./high-accuracy-mode"
import { PROMETHEUS_PLAN_TEMPLATE } from "./plan-template"
import { PROMETHEUS_BEHAVIORAL_SUMMARY } from "./behavioral-summary"
import { getGptPrometheusPrompt } from "./gpt"
import { isGptModel } from "../types"

/**
 * Combined Prometheus system prompt.
 * Combined Prometheus system prompt (Claude-optimized, default).
 * Assembled from modular sections for maintainability.
 */
export const PROMETHEUS_SYSTEM_PROMPT = `${PROMETHEUS_IDENTITY_CONSTRAINTS}
@@ -27,3 +29,32 @@ export const PROMETHEUS_PERMISSION = {
  webfetch: "allow" as const,
  question: "allow" as const,
}

export type PrometheusPromptSource = "default" | "gpt"

/**
 * Determines which Prometheus prompt to use based on model.
 */
export function getPrometheusPromptSource(model?: string): PrometheusPromptSource {
  if (model && isGptModel(model)) {
    return "gpt"
  }
  return "default"
}

/**
 * Gets the appropriate Prometheus prompt based on model.
 * GPT models → GPT-5.2 optimized prompt (XML-tagged, principle-driven)
 * Default (Claude, etc.) → Claude-optimized prompt (modular sections)
 */
export function getPrometheusPrompt(model?: string): string {
  const source = getPrometheusPromptSource(model)

  switch (source) {
    case "gpt":
      return getGptPrometheusPrompt()
    case "default":
    default:
      return PROMETHEUS_SYSTEM_PROMPT
  }
}
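The routing added in this hunk behaves as in the following self-contained sketch. The `isGptModel` predicate here is a simplified stand-in (the real one is imported from `../types` and may match more model IDs):

```typescript
// Simplified, self-contained model → prompt-source routing.
// ASSUMPTION: isGptModel is a stand-in; the real check lives in "../types".
type PrometheusPromptSource = "default" | "gpt"

const isGptModel = (model: string): boolean => model.startsWith("openai/gpt-")

function getPrometheusPromptSource(model?: string): PrometheusPromptSource {
  if (model && isGptModel(model)) {
    return "gpt"
  }
  return "default"
}

console.log(getPrometheusPromptSource("openai/gpt-5.2"))            // "gpt"
console.log(getPrometheusPromptSource("anthropic/claude-opus-4-6")) // "default"
console.log(getPrometheusPromptSource())                            // "default"
```

Because the model argument is optional and defaults to `"default"`, callers that never pass a model keep the Claude-optimized prompt unchanged.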
@@ -190,6 +190,29 @@ You are "Sisyphus" - Powerful AI Agent with orchestration capabilities from OhMy

${keyTriggers}

<intent_verbalization>
### Step 0: Verbalize Intent (BEFORE Classification)

Before classifying the task, identify what the user actually wants from you as an orchestrator. Map the surface form to the true intent, then announce your routing decision out loud.

**Intent → Routing Map:**

| Surface Form | True Intent | Your Routing |
|---|---|---|
| "explain X", "how does Y work" | Research/understanding | explore/librarian → synthesize → answer |
| "implement X", "add Y", "create Z" | Implementation (explicit) | plan → delegate or execute |
| "look into X", "check Y", "investigate" | Investigation | explore → report findings |
| "what do you think about X?" | Evaluation | evaluate → propose → **wait for confirmation** |
| "I'm seeing error X" / "Y is broken" | Fix needed | diagnose → fix minimally |
| "refactor", "improve", "clean up" | Open-ended change | assess codebase first → propose approach |

**Verbalize before proceeding:**

> "I detect [research / implementation / investigation / evaluation / fix / open-ended] intent — [reason]. My approach: [explore → answer / plan → delegate / clarify first / etc.]."

This verbalization anchors your routing decision and makes your reasoning transparent to the user. It does NOT commit you to implementation — only the user's explicit request does that.
</intent_verbalization>

### Step 1: Classify Request Type

- **Trivial** (single file, known location, direct answer) → Direct tools only (UNLESS Key Trigger applies)
@@ -306,9 +329,9 @@ result = task(..., run_in_background=false) // Never wait synchronously for exp
### Background Result Collection:
1. Launch parallel agents → receive task_ids
2. Continue immediate work
3. When results needed: \`background_output(task_id=\"...\")\`
4. Before final answer, cancel DISPOSABLE tasks (explore, librarian) individually: \`background_cancel(taskId=\"bg_explore_xxx\")\`, \`background_cancel(taskId=\"bg_librarian_xxx\")\`
5. **NEVER cancel Oracle.** ALWAYS collect Oracle result via \`background_output(task_id=\"bg_oracle_xxx\")\` before answering — even if you already have enough context.
3. When results needed: \`background_output(task_id="...")\`
4. Before final answer, cancel DISPOSABLE tasks (explore, librarian) individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
5. **NEVER cancel Oracle.** ALWAYS collect Oracle result via \`background_output(task_id="bg_oracle_xxx")\` before answering — even if you already have enough context.
6. **NEVER use \`background_cancel(all=true)\`** — it kills Oracle. Cancel each disposable task by its specific taskId.

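A compact sketch of that lifecycle, in the same tool-call style used elsewhere in this prompt (the task IDs are illustrative placeholders):

\`\`\`typescript
// 1-2. Launch parallel agents, keep working
task(subagent_type="explore", run_in_background=true)  // → bg_explore_xxx
task(subagent_type="oracle", run_in_background=true)   // → bg_oracle_xxx
// 3. Collect when needed — Oracle is ALWAYS collected, never cancelled
background_output(task_id="bg_oracle_xxx")
// 4/6. Cancel disposables individually — never background_cancel(all=true)
background_cancel(taskId="bg_explore_xxx")
\`\`\`
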
### Search Stop Conditions
@@ -444,7 +467,7 @@ If verification fails:
3. Report: "Done. Note: found N pre-existing lint errors unrelated to my changes."

### Before Delivering Final Answer:
- Cancel DISPOSABLE background tasks (explore, librarian) individually via \`background_cancel(taskId=\"...\")\`
- Cancel DISPOSABLE background tasks (explore, librarian) individually via \`background_cancel(taskId="...")\`
- **NEVER use \`background_cancel(all=true)\`.** Always cancel individually by taskId.
- **Always wait for Oracle**: When Oracle is running and you have gathered enough context from your own exploration, your next action is \`background_output\` on Oracle — NOT delivering a final answer. Oracle's value is highest when you think you don't need it.
</Behavior_Instructions>

@@ -18,7 +18,7 @@ describe("createBuiltinAgents with model overrides", () => {
  "anthropic/claude-opus-4-6",
  "kimi-for-coding/k2p5",
  "opencode/kimi-k2.5-free",
  "zai-coding-plan/glm-4.7",
  "zai-coding-plan/glm-5",
  "opencode/big-pickle",
])
)
@@ -259,7 +259,7 @@ describe("createBuiltinAgents with model overrides", () => {
  "anthropic/claude-opus-4-6",
  "kimi-for-coding/k2p5",
  "opencode/kimi-k2.5-free",
  "zai-coding-plan/glm-4.7",
  "zai-coding-plan/glm-5",
  "opencode/big-pickle",
  "openai/gpt-5.2",
])
@@ -505,7 +505,7 @@ describe("createBuiltinAgents without systemDefaultModel", () => {
  "anthropic/claude-opus-4-6",
  "kimi-for-coding/k2p5",
  "opencode/kimi-k2.5-free",
  "zai-coding-plan/glm-4.7",
  "zai-coding-plan/glm-5",
  "opencode/big-pickle",
])
)
@@ -662,6 +662,178 @@ describe("createBuiltinAgents with requiresProvider gating (hephaestus)", () =>
  })
})

describe("Hephaestus environment context toggle", () => {
  let fetchSpy: ReturnType<typeof spyOn>

  beforeEach(() => {
    fetchSpy = spyOn(shared, "fetchAvailableModels").mockResolvedValue(
      new Set(["openai/gpt-5.3-codex"])
    )
  })

  afterEach(() => {
    fetchSpy.mockRestore()
  })

  async function buildAgents(disableFlag?: boolean) {
    return createBuiltinAgents(
      [],
      {},
      "/tmp/work",
      TEST_DEFAULT_MODEL,
      undefined,
      undefined,
      [],
      undefined,
      undefined,
      undefined,
      undefined,
      undefined,
      disableFlag
    )
  }

  test("includes <omo-env> tag when disable flag is unset", async () => {
    // #when
    const agents = await buildAgents(undefined)

    // #then
    expect(agents.hephaestus).toBeDefined()
    expect(agents.hephaestus.prompt).toContain("<omo-env>")
  })

  test("includes <omo-env> tag when disable flag is false", async () => {
    // #when
    const agents = await buildAgents(false)

    // #then
    expect(agents.hephaestus).toBeDefined()
    expect(agents.hephaestus.prompt).toContain("<omo-env>")
  })

  test("omits <omo-env> tag when disable flag is true", async () => {
    // #when
    const agents = await buildAgents(true)

    // #then
    expect(agents.hephaestus).toBeDefined()
    expect(agents.hephaestus.prompt).not.toContain("<omo-env>")
  })
})

describe("Sisyphus and Librarian environment context toggle", () => {
  let fetchSpy: ReturnType<typeof spyOn>

  beforeEach(() => {
    fetchSpy = spyOn(shared, "fetchAvailableModels").mockResolvedValue(
      new Set(["anthropic/claude-opus-4-6", "google/gemini-3-flash"])
    )
  })

  afterEach(() => {
    fetchSpy.mockRestore()
  })

  async function buildAgents(disableFlag?: boolean) {
    return createBuiltinAgents(
      [],
      {},
      "/tmp/work",
      TEST_DEFAULT_MODEL,
      undefined,
      undefined,
      [],
      undefined,
      undefined,
      undefined,
      undefined,
      undefined,
      disableFlag
    )
  }

  test("includes <omo-env> for sisyphus and librarian when disable flag is unset", async () => {
    const agents = await buildAgents(undefined)

    expect(agents.sisyphus).toBeDefined()
    expect(agents.librarian).toBeDefined()
    expect(agents.sisyphus.prompt).toContain("<omo-env>")
    expect(agents.librarian.prompt).toContain("<omo-env>")
  })

  test("includes <omo-env> for sisyphus and librarian when disable flag is false", async () => {
    const agents = await buildAgents(false)

    expect(agents.sisyphus).toBeDefined()
    expect(agents.librarian).toBeDefined()
    expect(agents.sisyphus.prompt).toContain("<omo-env>")
    expect(agents.librarian.prompt).toContain("<omo-env>")
  })

  test("omits <omo-env> for sisyphus and librarian when disable flag is true", async () => {
    const agents = await buildAgents(true)

    expect(agents.sisyphus).toBeDefined()
    expect(agents.librarian).toBeDefined()
    expect(agents.sisyphus.prompt).not.toContain("<omo-env>")
    expect(agents.librarian.prompt).not.toContain("<omo-env>")
  })
})

describe("Atlas is unaffected by environment context toggle", () => {
  let fetchSpy: ReturnType<typeof spyOn>

  beforeEach(() => {
    fetchSpy = spyOn(shared, "fetchAvailableModels").mockResolvedValue(
      new Set(["anthropic/claude-opus-4-6", "openai/gpt-5.2"])
    )
  })

  afterEach(() => {
    fetchSpy.mockRestore()
  })

  test("atlas prompt is unchanged and never contains <omo-env>", async () => {
    const agentsDefault = await createBuiltinAgents(
      [],
      {},
      "/tmp/work",
      TEST_DEFAULT_MODEL,
      undefined,
      undefined,
      [],
      undefined,
      undefined,
      undefined,
      undefined,
      undefined,
      false
    )

    const agentsDisabled = await createBuiltinAgents(
      [],
      {},
      "/tmp/work",
      TEST_DEFAULT_MODEL,
      undefined,
      undefined,
      [],
      undefined,
      undefined,
      undefined,
      undefined,
      undefined,
      true
    )

    expect(agentsDefault.atlas).toBeDefined()
    expect(agentsDisabled.atlas).toBeDefined()
    expect(agentsDefault.atlas.prompt).not.toContain("<omo-env>")
    expect(agentsDisabled.atlas.prompt).not.toContain("<omo-env>")
    expect(agentsDisabled.atlas.prompt).toBe(agentsDefault.atlas.prompt)
  })
})

describe("createBuiltinAgents with requiresAnyModel gating (sisyphus)", () => {
  test("sisyphus is created when at least one fallback model is available", async () => {
    // #given

@@ -72,7 +72,7 @@ exports[`generateModelConfig single native provider uses Claude models when only
       "model": "anthropic/claude-haiku-4-5",
     },
     "librarian": {
-      "model": "anthropic/claude-sonnet-4-6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -83,7 +83,7 @@ exports[`generateModelConfig single native provider uses Claude models when only
       "variant": "max",
     },
     "multimodal-looker": {
-      "model": "anthropic/claude-haiku-4-5",
+      "model": "opencode/big-pickle",
     },
     "oracle": {
       "model": "anthropic/claude-opus-4-6",
@@ -134,7 +134,7 @@ exports[`generateModelConfig single native provider uses Claude models with isMa
       "model": "anthropic/claude-haiku-4-5",
     },
     "librarian": {
-      "model": "anthropic/claude-sonnet-4-6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -145,7 +145,7 @@ exports[`generateModelConfig single native provider uses Claude models with isMa
       "variant": "max",
     },
     "multimodal-looker": {
-      "model": "anthropic/claude-haiku-4-5",
+      "model": "opencode/big-pickle",
     },
     "oracle": {
       "model": "anthropic/claude-opus-4-6",
@@ -201,7 +201,7 @@ exports[`generateModelConfig single native provider uses OpenAI models when only
       "variant": "medium",
     },
     "librarian": {
-      "model": "opencode/big-pickle",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "openai/gpt-5.2",
@@ -268,7 +268,7 @@ exports[`generateModelConfig single native provider uses OpenAI models with isMa
       "variant": "medium",
     },
     "librarian": {
-      "model": "opencode/big-pickle",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "openai/gpt-5.2",
@@ -325,13 +325,13 @@ exports[`generateModelConfig single native provider uses Gemini models when only
   "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
   "agents": {
     "atlas": {
-      "model": "google/gemini-3-pro",
+      "model": "opencode/big-pickle",
     },
     "explore": {
       "model": "opencode/gpt-5-nano",
     },
     "librarian": {
-      "model": "opencode/big-pickle",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "google/gemini-3-pro",
@@ -386,13 +386,13 @@ exports[`generateModelConfig single native provider uses Gemini models with isMa
   "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
   "agents": {
     "atlas": {
-      "model": "google/gemini-3-pro",
+      "model": "opencode/big-pickle",
     },
     "explore": {
       "model": "opencode/gpt-5-nano",
     },
     "librarian": {
-      "model": "opencode/big-pickle",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "google/gemini-3-pro",
@@ -457,7 +457,7 @@ exports[`generateModelConfig all native providers uses preferred models from fal
       "variant": "medium",
     },
     "librarian": {
-      "model": "anthropic/claude-sonnet-4-6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -531,7 +531,7 @@ exports[`generateModelConfig all native providers uses preferred models with isM
       "variant": "medium",
     },
     "librarian": {
-      "model": "anthropic/claude-sonnet-4-6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -606,7 +606,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on
       "variant": "medium",
     },
     "librarian": {
-      "model": "opencode/big-pickle",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "opencode/claude-opus-4-6",
@@ -617,7 +617,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on
       "variant": "medium",
     },
     "multimodal-looker": {
-      "model": "opencode/gemini-3-flash",
+      "model": "opencode/kimi-k2.5-free",
     },
     "oracle": {
       "model": "opencode/gpt-5.2",
@@ -680,7 +680,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is
       "variant": "medium",
     },
     "librarian": {
-      "model": "opencode/big-pickle",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "opencode/claude-opus-4-6",
@@ -691,7 +691,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is
       "variant": "medium",
     },
     "multimodal-looker": {
-      "model": "opencode/gemini-3-flash",
+      "model": "opencode/kimi-k2.5-free",
     },
     "oracle": {
       "model": "opencode/gpt-5.2",
@@ -755,7 +755,7 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models when
       "variant": "medium",
     },
     "librarian": {
-      "model": "github-copilot/claude-sonnet-4.6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "github-copilot/claude-opus-4.6",
@@ -829,7 +829,7 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models with
       "variant": "medium",
     },
     "librarian": {
-      "model": "github-copilot/claude-sonnet-4.6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "github-copilot/claude-opus-4.6",
@@ -900,7 +900,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian whe
       "model": "opencode/gpt-5-nano",
     },
     "librarian": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "opencode/big-pickle",
@@ -918,7 +918,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian whe
       "model": "opencode/big-pickle",
     },
     "sisyphus": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "zai-coding-plan/glm-5",
     },
   },
   "categories": {
@@ -955,7 +955,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian wit
       "model": "opencode/gpt-5-nano",
     },
     "librarian": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "opencode/big-pickle",
@@ -973,7 +973,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian wit
       "model": "opencode/big-pickle",
     },
     "sisyphus": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "zai-coding-plan/glm-5",
     },
   },
   "categories": {
@@ -1014,7 +1014,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen
       "variant": "medium",
     },
     "librarian": {
-      "model": "opencode/big-pickle",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -1025,7 +1025,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen
       "variant": "medium",
     },
     "multimodal-looker": {
-      "model": "opencode/gemini-3-flash",
+      "model": "opencode/kimi-k2.5-free",
     },
     "oracle": {
       "model": "opencode/gpt-5.2",
@@ -1088,7 +1088,7 @@ exports[`generateModelConfig mixed provider scenarios uses OpenAI + Copilot comb
       "variant": "medium",
     },
     "librarian": {
-      "model": "github-copilot/claude-sonnet-4.6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "github-copilot/claude-opus-4.6",
@@ -1158,7 +1158,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + ZAI combinat
       "model": "anthropic/claude-haiku-4-5",
     },
     "librarian": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -1219,7 +1219,7 @@ exports[`generateModelConfig mixed provider scenarios uses Gemini + Claude combi
       "model": "anthropic/claude-haiku-4-5",
     },
     "librarian": {
-      "model": "anthropic/claude-sonnet-4-6",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -1289,7 +1289,7 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider
       "variant": "medium",
     },
     "librarian": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "github-copilot/claude-opus-4.6",
@@ -1300,7 +1300,7 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider
       "variant": "medium",
     },
     "multimodal-looker": {
-      "model": "github-copilot/gemini-3-flash-preview",
+      "model": "opencode/kimi-k2.5-free",
     },
     "oracle": {
       "model": "github-copilot/gpt-5.2",
@@ -1363,7 +1363,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe
       "variant": "medium",
     },
     "librarian": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -1374,7 +1374,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe
       "variant": "medium",
     },
     "multimodal-looker": {
-      "model": "google/gemini-3-flash",
+      "model": "opencode/kimi-k2.5-free",
     },
     "oracle": {
       "model": "openai/gpt-5.2",
@@ -1437,7 +1437,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is
       "variant": "medium",
     },
     "librarian": {
-      "model": "zai-coding-plan/glm-4.7",
+      "model": "opencode/minimax-m2.5-free",
     },
     "metis": {
       "model": "anthropic/claude-opus-4-6",
@@ -1448,7 +1448,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is
       "variant": "medium",
     },
     "multimodal-looker": {
-      "model": "google/gemini-3-flash",
+      "model": "opencode/kimi-k2.5-free",
     },
     "oracle": {
       "model": "openai/gpt-5.2",

@@ -44,7 +44,7 @@ Model Providers (Priority: Native > Copilot > OpenCode Zen > Z.ai > Kimi):
   Gemini        Native google/ models (Gemini 3 Pro, Flash)
   Copilot       github-copilot/ models (fallback)
   OpenCode Zen  opencode/ models (opencode/claude-opus-4-6, etc.)
-  Z.ai          zai-coding-plan/glm-4.7 (Librarian priority)
+  Z.ai          zai-coding-plan/glm-5 (visual-engineering fallback)
   Kimi          kimi-for-coding/k2p5 (Sisyphus/Prometheus fallback)
 `)
 .action(async (options) => {

@@ -281,7 +281,7 @@ describe("generateOmoConfig - model fallback system", () => {
     expect((result.agents as Record<string, { model: string }>).sisyphus).toBeUndefined()
   })

-  test("uses zai-coding-plan/glm-4.7 for librarian when Z.ai available", () => {
+  test("uses opencode/minimax-m2.5-free for librarian regardless of Z.ai", () => {
     // #given user has Z.ai and Claude max20
     const config: InstallConfig = {
       hasClaude: true,
@@ -297,8 +297,8 @@ describe("generateOmoConfig - model fallback system", () => {
     // #when generating config
     const result = generateOmoConfig(config)

-    // #then librarian should use zai-coding-plan/glm-4.7
-    expect((result.agents as Record<string, { model: string }>).librarian.model).toBe("zai-coding-plan/glm-4.7")
+    // #then librarian should use opencode/minimax-m2.5-free
+    expect((result.agents as Record<string, { model: string }>).librarian.model).toBe("opencode/minimax-m2.5-free")
     // #then Sisyphus uses Claude (OR logic)
     expect((result.agents as Record<string, { model: string }>).sisyphus.model).toBe("anthropic/claude-opus-4-6")
   })

@@ -43,7 +43,7 @@ const testConfig: InstallConfig = {

 describe("addAuthPlugins", () => {
   describe("Test 1: JSONC with commented plugin line", () => {
-    it("preserves comment, updates actual plugin array", async () => {
+    it("preserves comment, does NOT add antigravity plugin", async () => {
       const content = `{
         // "plugin": ["old-plugin"]
         "plugin": ["existing-plugin"],
@@ -59,17 +59,18 @@ describe("addAuthPlugins", () => {
       const newContent = readFileSync(result.configPath, "utf-8")
       expect(newContent).toContain('// "plugin": ["old-plugin"]')
       expect(newContent).toContain('existing-plugin')
-      expect(newContent).toContain('opencode-antigravity-auth')
+      // antigravity plugin should NOT be auto-added anymore
+      expect(newContent).not.toContain('opencode-antigravity-auth')

       const parsed = parseJsonc<Record<string, unknown>>(newContent)
       const plugins = parsed.plugin as string[]
       expect(plugins).toContain('existing-plugin')
-      expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(true)
+      expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(false)
     })
   })

   describe("Test 2: Plugin array already contains antigravity", () => {
-    it("does not add duplicate", async () => {
+    it("preserves existing antigravity, does not add another", async () => {
       const content = `{
         "plugin": ["existing-plugin", "opencode-antigravity-auth"],
         "provider": {}
@@ -87,6 +88,7 @@ describe("addAuthPlugins", () => {

       const antigravityCount = plugins.filter((p) => p.startsWith('opencode-antigravity-auth')).length
       expect(antigravityCount).toBe(1)
+      expect(plugins).toContain('existing-plugin')
     })
   })

@@ -156,7 +158,7 @@ describe("addAuthPlugins", () => {
   })

   describe("Test 6: No existing plugin array", () => {
-    it("creates plugin array when none exists", async () => {
+    it("creates empty plugin array when none exists, does NOT add antigravity", async () => {
       const content = `{
         "provider": {}
       }`
@@ -172,7 +174,9 @@ describe("addAuthPlugins", () => {
       const parsed = parseJsonc<Record<string, unknown>>(newContent)
       expect(parsed).toHaveProperty('plugin')
       const plugins = parsed.plugin as string[]
-      expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(true)
+      // antigravity plugin should NOT be auto-added anymore
+      expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(false)
+      expect(plugins.length).toBe(0)
     })
   })

@@ -199,7 +203,7 @@ describe("addAuthPlugins", () => {
   })

   describe("Test 8: Multiple plugins in array", () => {
-    it("appends to existing plugins", async () => {
+    it("preserves existing plugins, does NOT add antigravity", async () => {
       const content = `{
         "plugin": ["plugin-1", "plugin-2", "plugin-3"],
         "provider": {}
@@ -218,7 +222,9 @@ describe("addAuthPlugins", () => {
       expect(plugins).toContain('plugin-1')
       expect(plugins).toContain('plugin-2')
       expect(plugins).toContain('plugin-3')
-      expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(true)
+      // antigravity plugin should NOT be auto-added anymore
+      expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(false)
+      expect(plugins.length).toBe(3)
     })
   })
 })

@@ -50,13 +50,8 @@ export async function addAuthPlugins(config: InstallConfig): Promise<ConfigMerge
   const rawPlugins = existingConfig?.plugin
   const plugins: string[] = Array.isArray(rawPlugins) ? rawPlugins : []

-  if (config.hasGemini) {
-    const version = await fetchLatestVersion("opencode-antigravity-auth")
-    const pluginEntry = version ? `opencode-antigravity-auth@${version}` : "opencode-antigravity-auth"
-    if (!plugins.some((p) => p.startsWith("opencode-antigravity-auth"))) {
-      plugins.push(pluginEntry)
-    }
-  }
+  // Note: opencode-antigravity-auth plugin auto-installation has been removed
+  // Users can manually add auth plugins if needed

   const newConfig = { ...(existingConfig ?? {}), plugin: plugins }

@@ -15,7 +15,7 @@ describe("model-resolution check", () => {
     const sisyphus = info.agents.find((a) => a.name === "sisyphus")
     expect(sisyphus).toBeDefined()
     expect(sisyphus!.requirement.fallbackChain[0]?.model).toBe("claude-opus-4-6")
-    expect(sisyphus!.requirement.fallbackChain[0]?.providers).toContain("anthropic")
+    expect(sisyphus!.requirement.fallbackChain[0]?.providers).toContain("quotio")
   })

   it("returns category requirements with provider chains", async () => {
@@ -26,8 +26,8 @@ describe("model-resolution check", () => {
     // then: Should have category entries
     const visual = info.categories.find((c) => c.name === "visual-engineering")
     expect(visual).toBeDefined()
-    expect(visual!.requirement.fallbackChain[0]?.model).toBe("gemini-3-pro")
-    expect(visual!.requirement.fallbackChain[0]?.providers).toContain("google")
+    expect(visual!.requirement.fallbackChain[0]?.model).toBe("claude-opus-4-6-thinking")
+    expect(visual!.requirement.fallbackChain[0]?.providers).toContain("quotio")
   })
 })

@@ -87,7 +87,7 @@ describe("model-resolution check", () => {
     expect(sisyphus).toBeDefined()
     expect(sisyphus!.userOverride).toBeUndefined()
     expect(sisyphus!.effectiveResolution).toContain("Provider fallback:")
-    expect(sisyphus!.effectiveResolution).toContain("anthropic")
+    expect(sisyphus!.effectiveResolution).toContain("quotio")
   })

   it("captures user variant for agent when configured", async () => {

@@ -1,8 +1,6 @@
-import {
-  AGENT_MODEL_REQUIREMENTS,
-  type FallbackEntry,
-} from "../shared/model-requirements"
+import type { FallbackEntry } from "../shared/model-requirements"
 import type { ProviderAvailability } from "./model-fallback-types"
+import { CLI_AGENT_MODEL_REQUIREMENTS } from "./model-fallback-requirements"
 import { isProviderAvailable } from "./provider-availability"
 import { transformModelForProvider } from "./provider-model-id-transform"

@@ -25,7 +23,7 @@ export function resolveModelFromChain(
 }

 export function getSisyphusFallbackChain(): FallbackEntry[] {
-  return AGENT_MODEL_REQUIREMENTS.sisyphus.fallbackChain
+  return CLI_AGENT_MODEL_REQUIREMENTS.sisyphus.fallbackChain
 }

 export function isAnyFallbackEntryAvailable(

153  src/cli/model-fallback-requirements.ts  Normal file
@@ -0,0 +1,153 @@
import type { ModelRequirement } from "../shared/model-requirements"

// NOTE: These requirements are used by the CLI config generator (`generateModelConfig`).
// They intentionally use "install-time" provider IDs (anthropic/openai/google/opencode/etc),
// not runtime providers like `quotio`/`nvidia`.

export const CLI_AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
  sisyphus: {
    fallbackChain: [
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["kimi-for-coding"], model: "k2p5" },
      { providers: ["opencode"], model: "kimi-k2.5-free" },
      { providers: ["zai-coding-plan"], model: "glm-4.7" },
      { providers: ["opencode"], model: "glm-4.7-free" },
    ],
    requiresAnyModel: true,
  },
  hephaestus: {
    fallbackChain: [
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.3-codex", variant: "medium" },
    ],
    requiresProvider: ["openai", "github-copilot", "opencode"],
  },
  oracle: {
    fallbackChain: [
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "high" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
    ],
  },
  librarian: {
    fallbackChain: [
      { providers: ["zai-coding-plan"], model: "glm-4.7" },
      { providers: ["opencode"], model: "glm-4.7-free" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-sonnet-4-5" },
    ],
  },
  explore: {
    fallbackChain: [
      { providers: ["github-copilot"], model: "grok-code-fast-1" },
      { providers: ["anthropic", "opencode"], model: "claude-haiku-4-5" },
      { providers: ["opencode"], model: "gpt-5-nano" },
    ],
  },
  "multimodal-looker": {
    fallbackChain: [
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-flash" },
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2" },
      { providers: ["zai-coding-plan"], model: "glm-4.6v" },
      { providers: ["kimi-for-coding"], model: "k2p5" },
      { providers: ["opencode"], model: "kimi-k2.5-free" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-haiku-4-5" },
      { providers: ["opencode"], model: "gpt-5-nano" },
    ],
  },
  prometheus: {
    fallbackChain: [
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["kimi-for-coding"], model: "k2p5" },
      { providers: ["opencode"], model: "kimi-k2.5-free" },
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
    ],
  },
  metis: {
    fallbackChain: [
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["kimi-for-coding"], model: "k2p5" },
      { providers: ["opencode"], model: "kimi-k2.5-free" },
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "high" },
    ],
  },
  momus: {
    fallbackChain: [
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "medium" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "high" },
    ],
  },
  atlas: {
    fallbackChain: [
      { providers: ["kimi-for-coding"], model: "k2p5" },
      { providers: ["opencode"], model: "kimi-k2.5-free" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-sonnet-4-5" },
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
    ],
  },
}

export const CLI_CATEGORY_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
  "visual-engineering": {
    fallbackChain: [
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "high" },
      { providers: ["zai-coding-plan"], model: "glm-5" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["kimi-for-coding"], model: "k2p5" },
    ],
  },
  ultrabrain: {
    fallbackChain: [
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.3-codex", variant: "xhigh" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "high" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
    ],
  },
  deep: {
    fallbackChain: [
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.3-codex", variant: "medium" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "high" },
    ],
    requiresModel: "gpt-5.3-codex",
  },
  artistry: {
    fallbackChain: [
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro", variant: "high" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2" },
    ],
    requiresModel: "gemini-3-pro",
  },
  quick: {
    fallbackChain: [
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-haiku-4-5" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-flash" },
      { providers: ["opencode"], model: "gpt-5-nano" },
    ],
  },
  "unspecified-low": {
    fallbackChain: [
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-sonnet-4-5" },
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.3-codex", variant: "medium" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-flash" },
    ],
  },
  "unspecified-high": {
    fallbackChain: [
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-opus-4-6", variant: "max" },
      { providers: ["openai", "github-copilot", "opencode"], model: "gpt-5.2", variant: "high" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-pro" },
    ],
  },
  writing: {
    fallbackChain: [
      { providers: ["kimi-for-coding"], model: "k2p5" },
      { providers: ["google", "github-copilot", "opencode"], model: "gemini-3-flash" },
      { providers: ["anthropic", "github-copilot", "opencode"], model: "claude-sonnet-4-5" },
    ],
  },
}

@@ -491,18 +491,18 @@ describe("generateModelConfig", () => {
     const result = generateModelConfig(config)

     // #then librarian should use ZAI_MODEL
-    expect(result.agents?.librarian?.model).toBe("zai-coding-plan/glm-4.7")
+    expect(result.agents?.librarian?.model).toBe("opencode/minimax-m2.5-free")
   })

-  test("librarian uses claude-sonnet when ZAI not available but Claude is", () => {
+  test("librarian always uses minimax-m2.5-free regardless of provider availability", () => {
     // #given only Claude is available (no ZAI)
     const config = createConfig({ hasClaude: true })

     // #when generateModelConfig is called
     const result = generateModelConfig(config)

-    // #then librarian should use claude-sonnet-4-6 (third in fallback chain after ZAI and opencode/glm)
-    expect(result.agents?.librarian?.model).toBe("anthropic/claude-sonnet-4-6")
+    // #then librarian should use opencode/minimax-m2.5-free (always first in chain)
+    expect(result.agents?.librarian?.model).toBe("opencode/minimax-m2.5-free")
   })
 })

@@ -1,7 +1,7 @@
 import {
-  AGENT_MODEL_REQUIREMENTS,
-  CATEGORY_MODEL_REQUIREMENTS,
-} from "../shared/model-requirements"
+  CLI_AGENT_MODEL_REQUIREMENTS,
+  CLI_CATEGORY_MODEL_REQUIREMENTS,
+} from "./model-fallback-requirements"
 import type { InstallConfig } from "./types"

 import type { AgentConfig, CategoryConfig, GeneratedOmoConfig } from "./model-fallback-types"
@@ -18,7 +18,7 @@ export type { GeneratedOmoConfig } from "./model-fallback-types"

 const ZAI_MODEL = "zai-coding-plan/glm-4.7"

-const ULTIMATE_FALLBACK = "opencode/big-pickle"
+const ULTIMATE_FALLBACK = "opencode/glm-4.7-free"
 const SCHEMA_URL = "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json"

@@ -38,12 +38,12 @@ export function generateModelConfig(config: InstallConfig): GeneratedOmoConfig {
   return {
     $schema: SCHEMA_URL,
     agents: Object.fromEntries(
-      Object.entries(AGENT_MODEL_REQUIREMENTS)
+      Object.entries(CLI_AGENT_MODEL_REQUIREMENTS)
         .filter(([role, req]) => !(role === "sisyphus" && req.requiresAnyModel))
         .map(([role]) => [role, { model: ULTIMATE_FALLBACK }])
     ),
     categories: Object.fromEntries(
-      Object.keys(CATEGORY_MODEL_REQUIREMENTS).map((cat) => [cat, { model: ULTIMATE_FALLBACK }])
+      Object.keys(CLI_CATEGORY_MODEL_REQUIREMENTS).map((cat) => [cat, { model: ULTIMATE_FALLBACK }])
     ),
   }
 }
@@ -51,7 +51,7 @@ export function generateModelConfig(config: InstallConfig): GeneratedOmoConfig {
   const agents: Record<string, AgentConfig> = {}
   const categories: Record<string, CategoryConfig> = {}

-  for (const [role, req] of Object.entries(AGENT_MODEL_REQUIREMENTS)) {
+  for (const [role, req] of Object.entries(CLI_AGENT_MODEL_REQUIREMENTS)) {
     if (role === "librarian" && avail.zai) {
       agents[role] = { model: ZAI_MODEL }
       continue
@@ -75,7 +75,6 @@ export function generateModelConfig(config: InstallConfig): GeneratedOmoConfig {
     if (req.requiresAnyModel && !isAnyFallbackEntryAvailable(fallbackChain, avail)) {
       continue
     }
-
     const resolved = resolveModelFromChain(fallbackChain, avail)
     if (resolved) {
       const variant = resolved.variant ?? req.variant
@@ -100,11 +99,11 @@ export function generateModelConfig(config: InstallConfig): GeneratedOmoConfig {
     }
   }

-  for (const [cat, req] of Object.entries(CATEGORY_MODEL_REQUIREMENTS)) {
+  for (const [cat, req] of Object.entries(CLI_CATEGORY_MODEL_REQUIREMENTS)) {
     // Special case: unspecified-high downgrades to unspecified-low when not isMaxPlan
     const fallbackChain =
       cat === "unspecified-high" && !avail.isMaxPlan
-        ? CATEGORY_MODEL_REQUIREMENTS["unspecified-low"].fallbackChain
+        ? CLI_CATEGORY_MODEL_REQUIREMENTS["unspecified-low"].fallbackChain
         : req.fallbackChain

     if (req.requiresModel && !isRequiredModelAvailable(req.requiresModel, req.fallbackChain, avail)) {

@@ -17,7 +17,7 @@ config/schema/
├── hooks.ts              # HookNameSchema (46 hooks)
├── skills.ts             # SkillsConfigSchema (sources, paths, recursive)
├── commands.ts           # BuiltinCommandNameSchema
├── experimental.ts       # Feature flags (plugin_load_timeout_ms min 1000, hashline_edit)
├── experimental.ts       # Feature flags (plugin_load_timeout_ms min 1000)
├── sisyphus.ts           # SisyphusConfigSchema (task system)
├── sisyphus-agent.ts     # SisyphusAgentConfigSchema
├── ralph-loop.ts         # RalphLoopConfigSchema
@@ -34,9 +34,9 @@ config/schema/
└── internal/permission.ts # AgentPermissionSchema
```

## ROOT SCHEMA FIELDS (26)
## ROOT SCHEMA FIELDS (27)

`$schema`, `new_task_system_enabled`, `default_run_agent`, `disabled_mcps`, `disabled_agents`, `disabled_skills`, `disabled_hooks`, `disabled_commands`, `disabled_tools`, `agents`, `categories`, `claude_code`, `sisyphus_agent`, `comment_checker`, `experimental`, `auto_update`, `skills`, `ralph_loop`, `background_task`, `notification`, `babysitting`, `git_master`, `browser_automation_engine`, `websearch`, `tmux`, `sisyphus`, `_migrations`
`$schema`, `new_task_system_enabled`, `default_run_agent`, `disabled_mcps`, `disabled_agents`, `disabled_skills`, `disabled_hooks`, `disabled_commands`, `disabled_tools`, `hashline_edit`, `agents`, `categories`, `claude_code`, `sisyphus_agent`, `comment_checker`, `experimental`, `auto_update`, `skills`, `ralph_loop`, `background_task`, `notification`, `babysitting`, `git_master`, `browser_automation_engine`, `websearch`, `tmux`, `sisyphus`, `_migrations`

## AGENT OVERRIDE FIELDS (21)
@@ -644,6 +644,55 @@ describe("OhMyOpenCodeConfigSchema - browser_automation_engine", () => {
  })
})

describe("OhMyOpenCodeConfigSchema - hashline_edit", () => {
  test("accepts hashline_edit as true", () => {
    //#given
    const input = { hashline_edit: true }

    //#when
    const result = OhMyOpenCodeConfigSchema.safeParse(input)

    //#then
    expect(result.success).toBe(true)
    expect(result.data?.hashline_edit).toBe(true)
  })

  test("accepts hashline_edit as false", () => {
    //#given
    const input = { hashline_edit: false }

    //#when
    const result = OhMyOpenCodeConfigSchema.safeParse(input)

    //#then
    expect(result.success).toBe(true)
    expect(result.data?.hashline_edit).toBe(false)
  })

  test("hashline_edit is optional", () => {
    //#given
    const input = { auto_update: true }

    //#when
    const result = OhMyOpenCodeConfigSchema.safeParse(input)

    //#then
    expect(result.success).toBe(true)
    expect(result.data?.hashline_edit).toBeUndefined()
  })

  test("rejects non-boolean hashline_edit", () => {
    //#given
    const input = { hashline_edit: "true" }

    //#when
    const result = OhMyOpenCodeConfigSchema.safeParse(input)

    //#then
    expect(result.success).toBe(false)
  })
})

describe("ExperimentalConfigSchema feature flags", () => {
  test("accepts plugin_load_timeout_ms as number", () => {
    //#given
@@ -699,9 +748,9 @@ describe("ExperimentalConfigSchema feature flags", () => {
    }
  })

  test("accepts hashline_edit as true", () => {
  test("accepts disable_omo_env as true", () => {
    //#given
    const config = { hashline_edit: true }
    const config = { disable_omo_env: true }

    //#when
    const result = ExperimentalConfigSchema.safeParse(config)
@@ -709,13 +758,13 @@ describe("ExperimentalConfigSchema feature flags", () => {
    //#then
    expect(result.success).toBe(true)
    if (result.success) {
      expect(result.data.hashline_edit).toBe(true)
      expect(result.data.disable_omo_env).toBe(true)
    }
  })

  test("accepts hashline_edit as false", () => {
  test("accepts disable_omo_env as false", () => {
    //#given
    const config = { hashline_edit: false }
    const config = { disable_omo_env: false }

    //#when
    const result = ExperimentalConfigSchema.safeParse(config)
@@ -723,11 +772,11 @@ describe("ExperimentalConfigSchema feature flags", () => {
    //#then
    expect(result.success).toBe(true)
    if (result.success) {
      expect(result.data.hashline_edit).toBe(false)
      expect(result.data.disable_omo_env).toBe(false)
    }
  })

  test("hashline_edit is optional", () => {
  test("disable_omo_env is optional", () => {
    //#given
    const config = { safe_hook_creation: true }

@@ -737,13 +786,13 @@ describe("ExperimentalConfigSchema feature flags", () => {
    //#then
    expect(result.success).toBe(true)
    if (result.success) {
      expect(result.data.hashline_edit).toBeUndefined()
      expect(result.data.disable_omo_env).toBeUndefined()
    }
  })

  test("rejects non-boolean hashline_edit", () => {
  test("rejects non-boolean disable_omo_env", () => {
    //#given
    const config = { hashline_edit: "true" }
    const config = { disable_omo_env: "true" }

    //#when
    const result = ExperimentalConfigSchema.safeParse(config)
@@ -751,6 +800,7 @@ describe("ExperimentalConfigSchema feature flags", () => {
    //#then
    expect(result.success).toBe(false)
  })

})

describe("GitMasterConfigSchema", () => {
@@ -38,6 +38,13 @@ export const AgentOverrideConfigSchema = z.object({
  textVerbosity: z.enum(["low", "medium", "high"]).optional(),
  /** Provider-specific options. Passed directly to OpenCode SDK. */
  providerOptions: z.record(z.string(), z.unknown()).optional(),
  /** Per-message ultrawork override model/variant when ultrawork keyword is detected. */
  ultrawork: z
    .object({
      model: z.string().optional(),
      variant: z.string().optional(),
    })
    .optional(),
})

export const AgentOverridesSchema = z.object({
@@ -15,8 +15,12 @@ export const ExperimentalConfigSchema = z.object({
  plugin_load_timeout_ms: z.number().min(1000).optional(),
  /** Wrap hook creation in try/catch to prevent one failing hook from crashing the plugin (default: true at call site) */
  safe_hook_creation: z.boolean().optional(),
  /** Disable auto-injected <omo-env> context in prompts (experimental) */
  disable_omo_env: z.boolean().optional(),
  /** Enable hashline_edit tool for improved file editing with hash-based line anchors */
  hashline_edit: z.boolean().optional(),
  /** Append fallback model info to session title when a runtime fallback occurs (default: false) */
  model_fallback_title: z.boolean().optional(),
})

export type ExperimentalConfig = z.infer<typeof ExperimentalConfigSchema>
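The experimental flags in this schema share one contract: each key is an optional boolean, so an absent key parses to `undefined`, a boolean is kept, and anything else fails validation. A minimal dependency-free sketch of that contract (this is not the plugin's actual zod schema, just an illustration of the semantics the tests above assert):

```typescript
// Hypothetical stand-in for ExperimentalConfigSchema.safeParse: validates the
// optional-boolean flags without zod. Key names match the diff above.
type ParseResult<T> = { success: true; data: T } | { success: false; error: string }

interface ExperimentalFlags {
  safe_hook_creation?: boolean
  disable_omo_env?: boolean
  hashline_edit?: boolean
  model_fallback_title?: boolean
}

const FLAG_KEYS = [
  "safe_hook_creation",
  "disable_omo_env",
  "hashline_edit",
  "model_fallback_title",
] as const

function safeParseExperimental(input: Record<string, unknown>): ParseResult<ExperimentalFlags> {
  const data: ExperimentalFlags = {}
  for (const key of FLAG_KEYS) {
    const value = input[key]
    if (value === undefined) continue // optional: absent keys stay undefined
    if (typeof value !== "boolean") {
      return { success: false, error: `${key} must be a boolean, got ${typeof value}` }
    }
    data[key] = value
  }
  return { success: true, data }
}

console.log(safeParseExperimental({ hashline_edit: true }).success)   // true
console.log(safeParseExperimental({ hashline_edit: "true" }).success) // false
```

This mirrors the test expectations: real booleans pass, while string look-alikes such as `"true"` are rejected rather than coerced.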
@@ -13,6 +13,7 @@ export const HookNameSchema = z.enum([
  "directory-readme-injector",
  "empty-task-response-detector",
  "think-mode",
  "model-fallback",
  "anthropic-context-window-limit-recovery",
  "preemptive-compaction",
  "rules-injector",
@@ -25,6 +26,7 @@ export const HookNameSchema = z.enum([
  "interactive-bash-session",

  "thinking-block-validator",
  "beast-mode-system",
  "ralph-loop",
  "category-skill-reminder",

@@ -38,6 +40,7 @@ export const HookNameSchema = z.enum([
  "prometheus-md-only",
  "sisyphus-junior-notepad",
  "no-sisyphus-gpt",
  "no-hephaestus-non-gpt",
  "start-work",
  "atlas",
  "unstable-agent-babysitter",
@@ -48,6 +51,7 @@ export const HookNameSchema = z.enum([
  "write-existing-file-guard",
  "anthropic-effort",
  "hashline-read-enhancer",
  "hashline-edit-diff-enhancer",
])

export type HookName = z.infer<typeof HookNameSchema>

@@ -33,6 +33,8 @@ export const OhMyOpenCodeConfigSchema = z.object({
  disabled_commands: z.array(BuiltinCommandNameSchema).optional(),
  /** Disable specific tools by name (e.g., ["todowrite", "todoread"]) */
  disabled_tools: z.array(z.string()).optional(),
  /** Enable hashline_edit tool/hook integrations (default: true at call site) */
  hashline_edit: z.boolean().optional(),
  agents: AgentOverridesSchema.optional(),
  categories: CategoriesConfigSchema.optional(),
  claude_code: ClaudeCodeConfigSchema.optional(),
@@ -2920,6 +2920,39 @@ describe("BackgroundManager.handleEvent - session.deleted cascade", () => {
})

describe("BackgroundManager.handleEvent - session.error", () => {
  const defaultRetryFallbackChain = [
    { providers: ["quotio"], model: "claude-opus-4-6", variant: "max" },
    { providers: ["quotio"], model: "gpt-5.3-codex", variant: "high" },
  ]

  const stubProcessKey = (manager: BackgroundManager) => {
    ;(manager as unknown as { processKey: (key: string) => Promise<void> }).processKey = async () => {}
  }

  const createRetryTask = (manager: BackgroundManager, input: {
    id: string
    sessionID: string
    description: string
    concurrencyKey?: string
    fallbackChain?: typeof defaultRetryFallbackChain
  }) => {
    const task = createMockTask({
      id: input.id,
      sessionID: input.sessionID,
      parentSessionID: "parent-session",
      parentMessageID: "msg-retry",
      description: input.description,
      agent: "sisyphus",
      status: "running",
      concurrencyKey: input.concurrencyKey,
      model: { providerID: "quotio", modelID: "claude-opus-4-6-thinking" },
      fallbackChain: input.fallbackChain ?? defaultRetryFallbackChain,
      attemptCount: 0,
    })
    getTaskMap(manager).set(task.id, task)
    return task
  }

  test("sets task to error, releases concurrency, and cleans up", async () => {
    //#given
    const manager = createBackgroundManager()
@@ -3046,6 +3079,135 @@ describe("BackgroundManager.handleEvent - session.error", () => {

    manager.shutdown()
  })

  test("retry path releases current concurrency slot and prefers current provider in fallback entry", async () => {
    //#given
    const manager = createBackgroundManager()
    const concurrencyManager = getConcurrencyManager(manager)
    const concurrencyKey = "quotio/claude-opus-4-6-thinking"
    await concurrencyManager.acquire(concurrencyKey)

    stubProcessKey(manager)

    const sessionID = "ses_error_retry"
    const task = createRetryTask(manager, {
      id: "task-session-error-retry",
      sessionID,
      description: "task that should retry",
      concurrencyKey,
      fallbackChain: [
        { providers: ["quotio"], model: "claude-opus-4-6", variant: "max" },
        { providers: ["quotio"], model: "claude-opus-4-5" },
      ],
    })

    //#when
    manager.handleEvent({
      type: "session.error",
      properties: {
        sessionID,
        error: {
          name: "UnknownError",
          data: {
            message:
              "Bad Gateway: {\"error\":{\"message\":\"unknown provider for model claude-opus-4-6-thinking\"}}",
          },
        },
      },
    })

    //#then
    expect(task.status).toBe("pending")
    expect(task.attemptCount).toBe(1)
    expect(task.model).toEqual({
      providerID: "quotio",
      modelID: "claude-opus-4-6",
      variant: "max",
    })
    expect(task.concurrencyKey).toBeUndefined()
    expect(concurrencyManager.getCount(concurrencyKey)).toBe(0)

    manager.shutdown()
  })

  test("retry path triggers on session.status retry events", async () => {
    //#given
    const manager = createBackgroundManager()
    stubProcessKey(manager)

    const sessionID = "ses_status_retry"
    const task = createRetryTask(manager, {
      id: "task-status-retry",
      sessionID,
      description: "task that should retry on status",
    })

    //#when
    manager.handleEvent({
      type: "session.status",
      properties: {
        sessionID,
        status: {
          type: "retry",
          message: "Provider is overloaded",
        },
      },
    })

    //#then
    expect(task.status).toBe("pending")
    expect(task.attemptCount).toBe(1)
    expect(task.model).toEqual({
      providerID: "quotio",
      modelID: "claude-opus-4-6",
      variant: "max",
    })

    manager.shutdown()
  })

  test("retry path triggers on message.updated assistant error events", async () => {
    //#given
    const manager = createBackgroundManager()
    stubProcessKey(manager)

    const sessionID = "ses_message_updated_retry"
    const task = createRetryTask(manager, {
      id: "task-message-updated-retry",
      sessionID,
      description: "task that should retry on message.updated",
    })

    //#when
    manager.handleEvent({
      type: "message.updated",
      properties: {
        info: {
          id: "msg_errored",
          sessionID,
          role: "assistant",
          error: {
            name: "UnknownError",
            data: {
              message:
                "Bad Gateway: {\"error\":{\"message\":\"unknown provider for model claude-opus-4-6-thinking\"}}",
            },
          },
        },
      },
    })

    //#then
    expect(task.status).toBe("pending")
    expect(task.attemptCount).toBe(1)
    expect(task.model).toEqual({
      providerID: "quotio",
      modelID: "claude-opus-4-6",
      variant: "max",
    })

    manager.shutdown()
  })
})

describe("BackgroundManager queue processing - error tasks are skipped", () => {
@@ -5,6 +5,7 @@ import type {
  LaunchInput,
  ResumeInput,
} from "./types"
import type { FallbackEntry } from "../../shared/model-requirements"
import { TaskHistory } from "./task-history"
import {
  log,
@@ -12,12 +13,21 @@ import {
  normalizePromptTools,
  normalizeSDKResponse,
  promptWithModelSuggestionRetry,
  readConnectedProvidersCache,
  readProviderModelsCache,
  resolveInheritedPromptTools,
  createInternalAgentTextPart,
} from "../../shared"
import { setSessionTools } from "../../shared/session-tools-store"
import { ConcurrencyManager } from "./concurrency"
import type { BackgroundTaskConfig, TmuxConfig } from "../../config/schema"
import { isInsideTmux } from "../../shared/tmux"
import {
  shouldRetryError,
  getNextFallback,
  hasMoreFallbacks,
  selectFallbackProvider,
} from "../../shared/model-error-classifier"
import {
  DEFAULT_MESSAGE_STALENESS_TIMEOUT_MS,
  DEFAULT_STALE_TIMEOUT_MS,
@@ -155,6 +165,8 @@ export class BackgroundManager {
      parentAgent: input.parentAgent,
      parentTools: input.parentTools,
      model: input.model,
      fallbackChain: input.fallbackChain,
      attemptCount: 0,
      category: input.category,
    }

@@ -676,6 +688,27 @@ export class BackgroundManager {
  handleEvent(event: Event): void {
    const props = event.properties

    if (event.type === "message.updated") {
      const info = props?.info
      if (!info || typeof info !== "object") return

      const sessionID = (info as Record<string, unknown>)["sessionID"]
      const role = (info as Record<string, unknown>)["role"]
      if (typeof sessionID !== "string" || role !== "assistant") return

      const task = this.findBySession(sessionID)
      if (!task || task.status !== "running") return

      const assistantError = (info as Record<string, unknown>)["error"]
      if (!assistantError) return

      const errorInfo = {
        name: this.extractErrorName(assistantError),
        message: this.extractErrorMessage(assistantError),
      }
      this.tryFallbackRetry(task, errorInfo, "message.updated")
    }

    if (event.type === "message.part.updated" || event.type === "message.part.delta") {
      if (!props || typeof props !== "object" || !("sessionID" in props)) return
      const partInfo = props as unknown as MessagePartInfo
@@ -772,10 +805,29 @@ export class BackgroundManager {
      const task = this.findBySession(sessionID)
      if (!task || task.status !== "running") return

      const errorObj = props?.error as { name?: string; message?: string } | undefined
      const errorName = errorObj?.name
      const errorMessage = props ? this.getSessionErrorMessage(props) : undefined

      const errorInfo = { name: errorName, message: errorMessage }
      if (this.tryFallbackRetry(task, errorInfo, "session.error")) return

      // Original error handling (no retry)
      const errorMsg = errorMessage ?? "Session error"
      const canRetry =
        shouldRetryError(errorInfo) &&
        !!task.fallbackChain &&
        hasMoreFallbacks(task.fallbackChain, task.attemptCount ?? 0)
      log("[background-agent] Session error - no retry:", {
        taskId: task.id,
        errorName,
        errorMessage: errorMsg?.slice(0, 100),
        hasFallbackChain: !!task.fallbackChain,
        canRetry,
      })

      task.status = "error"
      task.error = errorMessage ?? "Session error"
      task.error = errorMsg
      task.completedAt = new Date()
      this.taskHistory.record(task.parentSessionID, { id: task.id, sessionID: task.sessionID, agent: task.agent, description: task.description, status: "error", category: task.category, startedAt: task.startedAt, completedAt: task.completedAt })

@@ -859,6 +911,129 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
        }
      }
    }

    if (event.type === "session.status") {
      const sessionID = props?.sessionID as string | undefined
      const status = props?.status as { type?: string; message?: string } | undefined
      if (!sessionID || status?.type !== "retry") return

      const task = this.findBySession(sessionID)
      if (!task || task.status !== "running") return

      const errorMessage = typeof status.message === "string" ? status.message : undefined
      const errorInfo = { name: "SessionRetry", message: errorMessage }
      this.tryFallbackRetry(task, errorInfo, "session.status")
    }
  }

  private tryFallbackRetry(
    task: BackgroundTask,
    errorInfo: { name?: string; message?: string },
    source: string,
  ): boolean {
    const fallbackChain = task.fallbackChain
    const canRetry =
      shouldRetryError(errorInfo) &&
      fallbackChain &&
      fallbackChain.length > 0 &&
      hasMoreFallbacks(fallbackChain, task.attemptCount ?? 0)

    if (!canRetry) return false

    const attemptCount = task.attemptCount ?? 0
    const providerModelsCache = readProviderModelsCache()
    const connectedProviders = providerModelsCache?.connected ?? readConnectedProvidersCache()
    const connectedSet = connectedProviders ? new Set(connectedProviders) : null

    const isReachable = (entry: FallbackEntry): boolean => {
      if (!connectedSet) return true

      // Gate only on provider connectivity. Provider model lists can be stale/incomplete,
      // especially after users manually add models to opencode.json.
      return entry.providers.some((p) => connectedSet.has(p))
    }

    let selectedAttemptCount = attemptCount
    let nextFallback: FallbackEntry | undefined
    while (fallbackChain && selectedAttemptCount < fallbackChain.length) {
      const candidate = getNextFallback(fallbackChain, selectedAttemptCount)
      if (!candidate) break
      selectedAttemptCount++
      if (!isReachable(candidate)) {
        log("[background-agent] Skipping unreachable fallback:", {
          taskId: task.id,
          source,
          model: candidate.model,
          providers: candidate.providers,
        })
        continue
      }
      nextFallback = candidate
      break
    }
    if (!nextFallback) return false

    const providerID = selectFallbackProvider(
      nextFallback.providers,
      task.model?.providerID,
    )

    log("[background-agent] Retryable error, attempting fallback:", {
      taskId: task.id,
      source,
      errorName: errorInfo.name,
      errorMessage: errorInfo.message?.slice(0, 100),
      attemptCount: selectedAttemptCount,
      nextModel: `${providerID}/${nextFallback.model}`,
    })

    if (task.concurrencyKey) {
      this.concurrencyManager.release(task.concurrencyKey)
      task.concurrencyKey = undefined
    }

    if (task.sessionID) {
      this.client.session.abort({ path: { id: task.sessionID } }).catch(() => {})
      subagentSessions.delete(task.sessionID)
    }

    const idleTimer = this.idleDeferralTimers.get(task.id)
    if (idleTimer) {
      clearTimeout(idleTimer)
      this.idleDeferralTimers.delete(task.id)
    }

    task.attemptCount = selectedAttemptCount
    task.model = {
      providerID,
      modelID: nextFallback.model,
      variant: nextFallback.variant,
    }
    task.status = "pending"
    task.sessionID = undefined
    task.startedAt = undefined
    task.queuedAt = new Date()
    task.error = undefined

    const key = task.model ? `${task.model.providerID}/${task.model.modelID}` : task.agent
    const queue = this.queuesByKey.get(key) ?? []
    const retryInput: LaunchInput = {
      description: task.description,
      prompt: task.prompt,
      agent: task.agent,
      parentSessionID: task.parentSessionID,
      parentMessageID: task.parentMessageID,
      parentModel: task.parentModel,
      parentAgent: task.parentAgent,
      parentTools: task.parentTools,
      model: task.model,
      fallbackChain: task.fallbackChain,
      category: task.category,
    }
    queue.push({ task, input: retryInput })
    this.queuesByKey.set(key, queue)
    this.processKey(key)
    return true
  }

  markForNotification(task: BackgroundTask): void {
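The scan loop inside `tryFallbackRetry` can be read as a small pure function: walk the chain from the current attempt count, skip entries whose providers are all disconnected, and return the first reachable entry together with the attempt count consumed to reach it. A hedged standalone reduction (the `FallbackEntry` shape matches the diff; the helper name and the sample chain are hypothetical):

```typescript
// Hypothetical standalone reduction of the fallback scan above.
interface FallbackEntry {
  providers: string[]
  model: string
  variant?: string
}

function pickNextFallback(
  chain: FallbackEntry[],
  attemptCount: number,
  connected: Set<string> | null, // null => connectivity unknown, treat all entries as reachable
): { entry: FallbackEntry; attemptCount: number } | undefined {
  let i = attemptCount
  while (i < chain.length) {
    const candidate = chain[i]
    i++ // an unreachable entry still consumes an attempt, as in the diff
    const reachable = !connected || candidate.providers.some((p) => connected.has(p))
    if (reachable) return { entry: candidate, attemptCount: i }
  }
  return undefined
}

const chain: FallbackEntry[] = [
  { providers: ["down-provider"], model: "model-a" },
  { providers: ["quotio"], model: "claude-opus-4-6", variant: "max" },
]
const picked = pickNextFallback(chain, 0, new Set(["quotio"]))
console.log(picked?.entry.model)   // "claude-opus-4-6"
console.log(picked?.attemptCount)  // 2
```

Note the design choice the comment in the diff calls out: reachability is gated only on provider connectivity, not on cached model lists, because the model lists can be stale.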
@@ -1272,10 +1447,13 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
      if (isCompactionAgent(info?.agent)) {
        continue
      }
      if (info?.agent || info?.model || (info?.modelID && info?.providerID)) {
        agent = info.agent ?? task.parentAgent
        model = info.model ?? (info.providerID && info.modelID ? { providerID: info.providerID, modelID: info.modelID } : undefined)
        tools = normalizePromptTools(info.tools) ?? tools
      const normalizedTools = this.isRecord(info?.tools)
        ? normalizePromptTools(info.tools as Record<string, boolean | "allow" | "deny" | "ask">)
        : undefined
      if (info?.agent || info?.model || (info?.modelID && info?.providerID) || normalizedTools) {
        agent = info?.agent ?? task.parentAgent
        model = info?.model ?? (info?.providerID && info?.modelID ? { providerID: info.providerID, modelID: info.modelID } : undefined)
        tools = normalizedTools ?? tools
        break
      }
    }
@@ -1295,7 +1473,7 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
      tools = normalizePromptTools(currentMessage?.tools) ?? tools
    }

    tools = resolveInheritedPromptTools(task.parentSessionID, tools)
    const resolvedTools = resolveInheritedPromptTools(task.parentSessionID, tools)

    log("[background-agent] notifyParentSession context:", {
      taskId: task.id,
@@ -1310,8 +1488,8 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
        noReply: !allComplete,
        ...(agent !== undefined ? { agent } : {}),
        ...(model !== undefined ? { model } : {}),
        ...(tools ? { tools } : {}),
        parts: [{ type: "text", text: notification }],
        ...(resolvedTools ? { tools: resolvedTools } : {}),
        parts: [createInternalAgentTextPart(notification)],
      },
    })
    log("[background-agent] Sent notification to parent session:", {
@@ -1393,6 +1571,46 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
    return ""
  }

  private extractErrorName(error: unknown): string | undefined {
    if (this.isRecord(error) && typeof error["name"] === "string") return error["name"]
    if (error instanceof Error) return error.name
    return undefined
  }

  private extractErrorMessage(error: unknown): string | undefined {
    if (!error) return undefined
    if (typeof error === "string") return error
    if (error instanceof Error) return error.message

    if (this.isRecord(error)) {
      const dataRaw = error["data"]
      const candidates: unknown[] = [
        error,
        dataRaw,
        error["error"],
        this.isRecord(dataRaw) ? (dataRaw as Record<string, unknown>)["error"] : undefined,
        error["cause"],
      ]

      for (const candidate of candidates) {
        if (typeof candidate === "string" && candidate.length > 0) return candidate
        if (
          this.isRecord(candidate) &&
          typeof candidate["message"] === "string" &&
          candidate["message"].length > 0
        ) {
          return candidate["message"]
        }
      }
    }

    try {
      return JSON.stringify(error)
    } catch {
      return String(error)
    }
  }

  private isRecord(value: unknown): value is Record<string, unknown> {
    return typeof value === "object" && value !== null
  }
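The extraction helpers above probe the usual SDK error shapes (`error.message`, `error.data.message`, nested `error`/`cause`) before falling back to `JSON.stringify`. A free-function sketch of the same candidate walk, runnable outside the class, showing how the shapes from the tests resolve:

```typescript
// Free-function version of the extractErrorMessage walk above (same candidate
// order: the record itself, its data, its error, data.error, then cause).
function isRecord(value: unknown): value is Record<string, unknown> {
  return typeof value === "object" && value !== null
}

function extractErrorMessage(error: unknown): string | undefined {
  if (!error) return undefined
  if (typeof error === "string") return error
  if (error instanceof Error) return error.message

  if (isRecord(error)) {
    const dataRaw = error["data"]
    const candidates: unknown[] = [
      error,
      dataRaw,
      error["error"],
      isRecord(dataRaw) ? dataRaw["error"] : undefined,
      error["cause"],
    ]
    for (const candidate of candidates) {
      if (typeof candidate === "string" && candidate.length > 0) return candidate
      if (isRecord(candidate) && typeof candidate["message"] === "string" && candidate["message"].length > 0) {
        return candidate["message"]
      }
    }
  }

  // Last resort: serialize whatever we were given.
  try {
    return JSON.stringify(error)
  } catch {
    return String(error)
  }
}

console.log(extractErrorMessage({ data: { message: "Bad Gateway" } })) // "Bad Gateway"
console.log(extractErrorMessage(new Error("boom")))                    // "boom"
```

This is why the retry tests can feed `error.data.message` payloads: the walk finds the message wherever the provider nested it.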
@@ -1609,6 +1827,16 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
      // Progress is already tracked via handleEvent(message.part.updated),
      // so we skip the expensive session.messages() fetch here.
      // Completion will be detected when session transitions to idle.
      if (sessionStatus?.type === "retry") {
        const retryMessage = typeof (sessionStatus as { message?: string }).message === "string"
          ? (sessionStatus as { message?: string }).message
          : undefined
        const errorInfo = { name: "SessionRetry", message: retryMessage }
        if (this.tryFallbackRetry(task, errorInfo, "polling:session.status")) {
          continue
        }
      }

      log("[background-agent] Session still running, relying on event-based progress:", {
        taskId: task.id,
        sessionID,
@@ -1,7 +1,7 @@
import type { BackgroundTask } from "./types"
import type { ResultHandlerContext } from "./result-handler-context"
import { TASK_CLEANUP_DELAY_MS } from "./constants"
import { log } from "../../shared"
import { createInternalAgentTextPart, log } from "../../shared"
import { getTaskToastManager } from "../task-toast-manager"
import { formatDuration } from "./duration-formatter"
import { buildBackgroundTaskNotificationText } from "./background-task-notification-template"
@@ -72,7 +72,7 @@ export async function notifyParentSession(
      ...(agent !== undefined ? { agent } : {}),
      ...(model !== undefined ? { model } : {}),
      ...(tools ? { tools } : {}),
      parts: [{ type: "text", text: notification }],
      parts: [createInternalAgentTextPart(notification)],
    },
  })
@@ -1,3 +1,5 @@
import type { FallbackEntry } from "../../shared/model-requirements"

export type BackgroundTaskStatus =
  | "pending"
  | "running"
@@ -31,6 +33,10 @@ export interface BackgroundTask {
  progress?: TaskProgress
  parentModel?: { providerID: string; modelID: string }
  model?: { providerID: string; modelID: string; variant?: string }
  /** Fallback chain for runtime retry on model errors */
  fallbackChain?: FallbackEntry[]
  /** Number of fallback retry attempts made */
  attemptCount?: number
  /** Active concurrency slot key */
  concurrencyKey?: string
  /** Persistent key for re-acquiring concurrency on resume */
@@ -60,6 +66,8 @@ export interface LaunchInput {
  parentAgent?: string
  parentTools?: Record<string, boolean>
  model?: { providerID: string; modelID: string; variant?: string }
  /** Fallback chain for runtime retry on model errors */
  fallbackChain?: FallbackEntry[]
  isUnstableAgent?: boolean
  skills?: string[]
  skillContent?: string
@@ -1,4 +1,5 @@
export const subagentSessions = new Set<string>()
export const syncSubagentSessions = new Set<string>()

let _mainSessionID: string | undefined

@@ -14,6 +15,7 @@ export function getMainSessionID(): string | undefined {
export function _resetForTesting(): void {
  _mainSessionID = undefined
  subagentSessions.clear()
  syncSubagentSessions.clear()
  sessionAgentMap.clear()
}
@@ -5,7 +5,7 @@ import { MESSAGE_STORAGE, PART_STORAGE } from "./constants"
import type { MessageMeta, OriginalMessageContext, TextPart, ToolPermission } from "./types"
import { log } from "../../shared/logger"
import { isSqliteBackend } from "../../shared/opencode-storage-detection"
import { normalizeSDKResponse } from "../../shared"
import { createInternalAgentTextPart, normalizeSDKResponse } from "../../shared"

export interface StoredMessage {
  agent?: string
@@ -331,7 +331,7 @@ export function injectHookMessage(
  const textPart: TextPart = {
    id: partID,
    type: "text",
    text: hookContent,
    text: createInternalAgentTextPart(hookContent).text,
    synthetic: true,
    time: {
      start: now,
@@ -25,13 +25,13 @@ export function discoverAllSkillsBlocking(dirs: string[], scopes: SkillScope[]):
|
||||
const { port1, port2 } = new MessageChannel()
|
||||
|
||||
const worker = new Worker(new URL("./discover-worker.ts", import.meta.url), {
|
||||
workerData: { signal }
|
||||
// workerData is structured-cloned; pass the SharedArrayBuffer and recreate the view in the worker.
|
||||
workerData: { signalBuffer: signal.buffer },
|
||||
})
|
||||
|
||||
worker.postMessage({ port: port2 }, [port2])
|
||||
|
||||
const input: WorkerInput = { dirs, scopes }
|
||||
port1.postMessage(input)
|
||||
// Avoid a race where the worker hasn't attached listeners to the MessagePort yet.
|
||||
worker.postMessage({ port: port2, input }, [port2])
|
||||
|
||||
const waitResult = Atomics.wait(signal, 0, 0, TIMEOUT_MS)
|
||||
|
||||
|
||||
@@ -18,25 +18,24 @@ interface WorkerOutputError {
   error: { message: string; stack?: string }
 }

-const { signal } = workerData as { signal: Int32Array }
+const { signalBuffer } = workerData as { signalBuffer: SharedArrayBuffer }
+const signal = new Int32Array(signalBuffer)

 if (!parentPort) {
   throw new Error("Worker must be run with parentPort")
 }

-parentPort.once("message", (data: { port: MessagePort }) => {
-  const { port } = data
+parentPort.once("message", (data: { port: MessagePort; input: WorkerInput }) => {
+  const { port, input } = data

-  port.on("message", async (input: WorkerInput) => {
+  void (async () => {
     try {
-      const results = await Promise.all(
-        input.dirs.map(dir => discoverSkillsInDirAsync(dir))
-      )
+      const results = await Promise.all(input.dirs.map((dir) => discoverSkillsInDirAsync(dir)))

       const skills = results.flat()

       const output: WorkerOutputSuccess = { ok: true, skills }

       port.postMessage(output)
       Atomics.store(signal, 0, 1)
       Atomics.notify(signal, 0)
@@ -48,10 +47,10 @@ parentPort.once("message", (data: { port: MessagePort }) => {
           stack: error instanceof Error ? error.stack : undefined,
         },
       }

       port.postMessage(output)
       Atomics.store(signal, 0, 1)
       Atomics.notify(signal, 0)
     }
-  })
+  })()
 })
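The two hunks above move from a MessagePort-carried `Int32Array` to passing the underlying `SharedArrayBuffer` through `workerData`, with `Atomics.wait`/`Atomics.store`/`Atomics.notify` as the completion handshake. A minimal standalone sketch of that signaling primitive (illustrative only, not the plugin's code; assumes a runtime such as Node or Bun where `Atomics.wait` is permitted on the calling thread):

```typescript
// One 4-byte slot shared between the blocking caller and the worker.
const signalBuffer = new SharedArrayBuffer(4)
const signal = new Int32Array(signalBuffer)

// What the worker side would do after posting its result:
Atomics.store(signal, 0, 1)
Atomics.notify(signal, 0)

// The blocking caller waits for slot 0 to leave its initial value 0.
// Because the store above already ran, signal[0] is 1, so Atomics.wait
// returns "not-equal" immediately instead of sleeping up to the timeout.
const waitResult = Atomics.wait(signal, 0, 0, 1000)
console.log(waitResult) // "not-equal"
```

The order matters the same way it does in the worker diff: the result must be stored before `notify`, so a waiter that arrives late still observes the changed value and never blocks.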
@@ -217,6 +217,27 @@ describe("TaskToastManager", () => {
     expect(call.body.message).toContain("(inherited from parent)")
   })

+  test("should display warning when model is runtime fallback", () => {
+    // given - runtime-fallback indicates a model swap mid-run
+    const task = {
+      id: "task_runtime",
+      description: "Task with runtime fallback model",
+      agent: "explore",
+      isBackground: false,
+      modelInfo: { model: "quotio/oswe-vscode-prime", type: "runtime-fallback" as const },
+    }
+
+    // when - addTask is called
+    toastManager.addTask(task)
+
+    // then - toast should show fallback warning
+    expect(mockClient.tui.showToast).toHaveBeenCalled()
+    const call = mockClient.tui.showToast.mock.calls[0][0]
+    expect(call.body.message).toContain("[FALLBACK]")
+    expect(call.body.message).toContain("quotio/oswe-vscode-prime")
+    expect(call.body.message).toContain("(runtime fallback)")
+  })
+
   test("should not display model info when user-defined", () => {
     // given - a task with user-defined model
     const task = {
@@ -257,4 +278,32 @@ describe("TaskToastManager", () => {
     expect(call.body.message).not.toContain("[FALLBACK] Model:")
   })
 })
+
+  describe("updateTaskModelBySession", () => {
+    test("updates task model info and shows fallback toast", () => {
+      // given - task without model info
+      const task = {
+        id: "task_update",
+        sessionID: "ses_update_1",
+        description: "Task that will fallback",
+        agent: "explore",
+        isBackground: false,
+      }
+      toastManager.addTask(task)
+      mockClient.tui.showToast.mockClear()
+
+      // when - runtime fallback applied by session
+      toastManager.updateTaskModelBySession("ses_update_1", {
+        model: "nvidia/stepfun-ai/step-3.5-flash",
+        type: "runtime-fallback",
+      })
+
+      // then - new toast shows fallback model
+      expect(mockClient.tui.showToast).toHaveBeenCalled()
+      const call = mockClient.tui.showToast.mock.calls[0][0]
+      expect(call.body.message).toContain("[FALLBACK]")
+      expect(call.body.message).toContain("nvidia/stepfun-ai/step-3.5-flash")
+      expect(call.body.message).toContain("(runtime fallback)")
+    })
+  })
 })
@@ -20,6 +20,7 @@ export class TaskToastManager {

   addTask(task: {
     id: string
+    sessionID?: string
     description: string
     agent: string
     isBackground: boolean
@@ -30,6 +31,7 @@ export class TaskToastManager {
   }): void {
     const trackedTask: TrackedTask = {
       id: task.id,
+      sessionID: task.sessionID,
       description: task.description,
       agent: task.agent,
       status: task.status ?? "running",
@@ -54,6 +56,18 @@ export class TaskToastManager {
     }
   }

+  /**
+   * Update model info for a task by session ID
+   */
+  updateTaskModelBySession(sessionID: string, modelInfo: ModelFallbackInfo): void {
+    if (!sessionID) return
+    const task = Array.from(this.tasks.values()).find((t) => t.sessionID === sessionID)
+    if (!task) return
+    if (task.modelInfo?.model === modelInfo.model && task.modelInfo?.type === modelInfo.type) return
+    task.modelInfo = modelInfo
+    this.showTaskListToast(task)
+  }
+
   /**
    * Remove completed/error task
    */
@@ -110,14 +124,17 @@ export class TaskToastManager {
     const lines: string[] = []

     const isFallback = newTask.modelInfo && (
-      newTask.modelInfo.type === "inherited" || newTask.modelInfo.type === "system-default"
+      newTask.modelInfo.type === "inherited" ||
+      newTask.modelInfo.type === "system-default" ||
+      newTask.modelInfo.type === "runtime-fallback"
     )
     if (isFallback) {
-      const suffixMap: Record<"inherited" | "system-default", string> = {
+      const suffixMap: Record<"inherited" | "system-default" | "runtime-fallback", string> = {
         inherited: " (inherited from parent)",
         "system-default": " (system default fallback)",
+        "runtime-fallback": " (runtime fallback)",
       }
-      const suffix = suffixMap[newTask.modelInfo!.type as "inherited" | "system-default"]
+      const suffix = suffixMap[newTask.modelInfo!.type as "inherited" | "system-default" | "runtime-fallback"]
       lines.push(`[FALLBACK] Model: ${newTask.modelInfo!.model}${suffix}`)
       lines.push("")
     }
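The suffix-map change above is a plain union-keyed lookup. A self-contained sketch of that pattern (the union and strings mirror the diff, but the `fallbackLine` helper is illustrative, not the plugin's source):

```typescript
// Hypothetical helper mirroring the suffixMap lookup in the diff above.
type FallbackType = "inherited" | "system-default" | "runtime-fallback"

const suffixMap: Record<FallbackType, string> = {
  inherited: " (inherited from parent)",
  "system-default": " (system default fallback)",
  "runtime-fallback": " (runtime fallback)",
}

function fallbackLine(model: string, type: FallbackType): string {
  // Record<FallbackType, string> makes the compiler reject a missing key,
  // which is why widening the union forces the map and the cast to grow together.
  return `[FALLBACK] Model: ${model}${suffixMap[type]}`
}

console.log(fallbackLine("quotio/oswe-vscode-prime", "runtime-fallback"))
// → [FALLBACK] Model: quotio/oswe-vscode-prime (runtime fallback)
```

Keying the `Record` on the full union is what the hunk relies on: adding `"runtime-fallback"` to the type without adding the map entry would be a compile error rather than a silent `undefined` suffix.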
@@ -4,12 +4,13 @@ export type TaskStatus = "running" | "queued" | "completed" | "error"

 export interface ModelFallbackInfo {
   model: string
-  type: "user-defined" | "inherited" | "category-default" | "system-default"
+  type: "user-defined" | "inherited" | "category-default" | "system-default" | "runtime-fallback"
   source?: ModelSource
 }

 export interface TrackedTask {
   id: string
+  sessionID?: string
   description: string
   agent: string
   status: TaskStatus
@@ -1,14 +1,21 @@
-import type { PaneAction } from "./types"
-import { applyLayout, spawnTmuxPane, closeTmuxPane, enforceMainPaneWidth, replaceTmuxPane } from "../../shared/tmux"
+import type { TmuxConfig } from "../../config/schema"
+import type { PaneAction, WindowState } from "./types"
+import {
+  applyLayout,
+  spawnTmuxPane,
+  closeTmuxPane,
+  enforceMainPaneWidth,
+  replaceTmuxPane,
+} from "../../shared/tmux"
+import { getTmuxPath } from "../../tools/interactive-bash/tmux-path-resolver"
+import { queryWindowState } from "./pane-state-querier"
+import { log } from "../../shared"
 import type {
-  ActionExecutorDeps,
   ActionResult,
   ExecuteContext,
+  ActionExecutorDeps,
 } from "./action-executor-core"
 import { executeActionWithDeps } from "./action-executor-core"

-export type { ActionExecutorDeps, ActionResult, ExecuteContext } from "./action-executor-core"
+export type { ActionExecutorDeps, ActionResult } from "./action-executor-core"

 export interface ExecuteActionsResult {
   success: boolean
@@ -16,19 +23,92 @@ export interface ExecuteActionsResult {
   results: Array<{ action: PaneAction; result: ActionResult }>
 }

-const DEFAULT_DEPS: ActionExecutorDeps = {
-  spawnTmuxPane,
-  closeTmuxPane,
-  replaceTmuxPane,
-  applyLayout,
-  enforceMainPaneWidth,
-}
+export interface ExecuteContext {
+  config: TmuxConfig
+  serverUrl: string
+  windowState: WindowState
+  sourcePaneId?: string
+}
+
+async function enforceMainPane(
+  windowState: WindowState,
+  config: TmuxConfig,
+): Promise<void> {
+  if (!windowState.mainPane) return
+  await enforceMainPaneWidth(windowState.mainPane.paneId, windowState.windowWidth, {
+    mainPaneSize: config.main_pane_size,
+    mainPaneMinWidth: config.main_pane_min_width,
+    agentPaneMinWidth: config.agent_pane_min_width,
+  })
+}
+
+async function enforceLayoutAndMainPane(ctx: ExecuteContext): Promise<void> {
+  const sourcePaneId = ctx.sourcePaneId
+  if (!sourcePaneId) {
+    await enforceMainPane(ctx.windowState, ctx.config)
+    return
+  }
+
+  const latestState = await queryWindowState(sourcePaneId)
+  if (!latestState?.mainPane) {
+    await enforceMainPane(ctx.windowState, ctx.config)
+    return
+  }
+
+  const tmux = await getTmuxPath()
+  if (tmux) {
+    await applyLayout(tmux, ctx.config.layout, ctx.config.main_pane_size)
+  }
+
+  await enforceMainPane(latestState, ctx.config)
+}

 export async function executeAction(
   action: PaneAction,
   ctx: ExecuteContext
 ): Promise<ActionResult> {
-  return executeActionWithDeps(action, ctx, DEFAULT_DEPS)
+  if (action.type === "close") {
+    const success = await closeTmuxPane(action.paneId)
+    if (success) {
+      await enforceLayoutAndMainPane(ctx)
+    }
+    return { success }
+  }
+
+  if (action.type === "replace") {
+    const result = await replaceTmuxPane(
+      action.paneId,
+      action.newSessionId,
+      action.description,
+      ctx.config,
+      ctx.serverUrl
+    )
+    if (result.success) {
+      await enforceLayoutAndMainPane(ctx)
+    }
+    return {
+      success: result.success,
+      paneId: result.paneId,
+    }
+  }
+
+  const result = await spawnTmuxPane(
+    action.sessionId,
+    action.description,
+    ctx.config,
+    ctx.serverUrl,
+    action.targetPaneId,
+    action.splitDirection
+  )
+
+  if (result.success) {
+    await enforceLayoutAndMainPane(ctx)
+  }
+
+  return {
+    success: result.success,
+    paneId: result.paneId,
+  }
 }

 export async function executeActions(
@@ -5,6 +5,7 @@ import {
   canSplitPane,
   canSplitPaneAnyDirection,
   getBestSplitDirection,
+  findSpawnTarget,
   type SessionMapping
 } from "./decision-engine"
 import type { WindowState, CapacityConfig, TmuxPaneInfo } from "./types"
@@ -258,10 +259,31 @@ describe("decideSpawnActions", () => {
     expect(result.actions[0].type).toBe("spawn")
   })

+  it("respects configured agent min width for split decisions", () => {
+    // given
+    const state = createWindowState(240, 44, [
+      { paneId: "%1", width: 100, height: 44, left: 140, top: 0 },
+    ])
+    const mappings: SessionMapping[] = [
+      { sessionId: "old-ses", paneId: "%1", createdAt: new Date("2024-01-01") },
+    ]
+    const strictConfig: CapacityConfig = {
+      mainPaneSize: 60,
+      mainPaneMinWidth: 120,
+      agentPaneWidth: 60,
+    }
+
+    // when
+    const result = decideSpawnActions(state, "ses1", "test", strictConfig, mappings)
+
+    // then
+    expect(result.canSpawn).toBe(false)
+    expect(result.actions).toHaveLength(0)
+    expect(result.reason).toContain("defer")
+  })
+
   it("returns canSpawn=true when 0 agent panes exist and mainPane occupies full window width", () => {
     // given - tmux reports mainPane.width === windowWidth when no splits exist
     // agentAreaWidth = max(0, 252 - 252 - 1) = 0, which is < minPaneWidth
     // but with 0 agent panes, the early return should be skipped
     const windowWidth = 252
     const windowHeight = 56
     const state: WindowState = {
@@ -281,8 +303,7 @@ describe("decideSpawnActions", () => {
   })

   it("returns canSpawn=false when 0 agent panes and window genuinely too narrow to split", () => {
-    // given - window so narrow that even splitting mainPane wouldn't work
-    // canSplitPane requires width >= 2*minPaneWidth + DIVIDER_SIZE = 2*40+1 = 81
+    // given - window so narrow that even splitting mainPane would fail
     const windowWidth = 70
     const windowHeight = 56
     const state: WindowState = {
@@ -295,14 +316,13 @@ describe("decideSpawnActions", () => {
     // when
     const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])

-    // then - should fail because mainPane itself is too small to split
+    // then
     expect(result.canSpawn).toBe(false)
     expect(result.reason).toContain("too small")
   })

   it("returns canSpawn=false when agent panes exist but agent area too small", () => {
-    // given - 1 agent pane exists, but agent area is below minPaneWidth
-    // this verifies the early return still works for currentCount > 0
+    // given - 1 agent pane exists, and agent area is below minPaneWidth
     const state: WindowState = {
       windowWidth: 180,
       windowHeight: 44,
@@ -313,13 +333,13 @@ describe("decideSpawnActions", () => {
     // when
     const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])

-    // then - agent area = max(0, 180-160-1) = 19, which is < agentPaneWidth(40)
+    // then
     expect(result.canSpawn).toBe(false)
-    expect(result.reason).toContain("too small")
+    expect(result.reason).toContain("defer attach")
   })

   it("spawns at exact minimum splittable width with 0 agent panes", () => {
-    // given - canSplitPane requires width >= 2*agentPaneWidth + DIVIDER_SIZE = 2*40+1 = 81
+    // given
     const exactThreshold = 2 * defaultConfig.agentPaneWidth + 1
     const state: WindowState = {
       windowWidth: exactThreshold,
@@ -331,12 +351,12 @@ describe("decideSpawnActions", () => {
     // when
     const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])

-    // then - exactly at threshold should succeed
+    // then
     expect(result.canSpawn).toBe(true)
   })

   it("rejects spawn 1 pixel below minimum splittable width with 0 agent panes", () => {
-    // given - 1 below exact threshold
+    // given
     const belowThreshold = 2 * defaultConfig.agentPaneWidth
     const state: WindowState = {
       windowWidth: belowThreshold,
@@ -348,11 +368,11 @@ describe("decideSpawnActions", () => {
     // when
     const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])

-    // then - 1 below threshold should fail
+    // then
     expect(result.canSpawn).toBe(false)
   })

-  it("replaces oldest pane when existing panes are too small to split", () => {
+  it("closes oldest pane when existing panes are too small to split", () => {
     // given - existing pane is below minimum splittable size
     const state = createWindowState(220, 30, [
       { paneId: "%1", width: 50, height: 15, left: 110, top: 0 },
@@ -366,8 +386,9 @@ describe("decideSpawnActions", () => {

     // then
     expect(result.canSpawn).toBe(true)
-    expect(result.actions.length).toBe(1)
-    expect(result.actions[0].type).toBe("replace")
+    expect(result.actions.length).toBe(2)
+    expect(result.actions[0].type).toBe("close")
+    expect(result.actions[1].type).toBe("spawn")
   })

   it("can spawn when existing pane is large enough to split", () => {
@@ -429,6 +450,64 @@ describe("decideSpawnActions", () => {
     expect(result.canSpawn).toBe(false)
     expect(result.reason).toBe("no main pane found")
   })

+  it("uses configured main pane size for split/defer decision", () => {
+    // given
+    const state = createWindowState(240, 44, [
+      { paneId: "%1", width: 90, height: 44, left: 150, top: 0 },
+    ])
+    const mappings: SessionMapping[] = [
+      { sessionId: "old-ses", paneId: "%1", createdAt: new Date("2024-01-01") },
+    ]
+    const wideMainConfig: CapacityConfig = {
+      mainPaneSize: 80,
+      mainPaneMinWidth: 120,
+      agentPaneWidth: 40,
+    }
+
+    // when
+    const result = decideSpawnActions(state, "ses1", "test", wideMainConfig, mappings)
+
+    // then
+    expect(result.canSpawn).toBe(false)
+    expect(result.actions).toHaveLength(0)
+    expect(result.reason).toContain("defer")
+  })
 })
 })

+describe("findSpawnTarget", () => {
+  it("uses deterministic vertical fallback order", () => {
+    // given
+    const state: WindowState = {
+      windowWidth: 320,
+      windowHeight: 44,
+      mainPane: {
+        paneId: "%0",
+        width: 160,
+        height: 44,
+        left: 0,
+        top: 0,
+        title: "main",
+        isActive: true,
+      },
+      agentPanes: [
+        { paneId: "%1", width: 70, height: 20, left: 170, top: 0, title: "a", isActive: false },
+        { paneId: "%2", width: 120, height: 44, left: 240, top: 0, title: "b", isActive: false },
+        { paneId: "%3", width: 120, height: 22, left: 240, top: 22, title: "c", isActive: false },
+      ],
+    }
+    const config: CapacityConfig = {
+      mainPaneSize: 50,
+      mainPaneMinWidth: 120,
+      agentPaneWidth: 40,
+    }
+
+    // when
+    const target = findSpawnTarget(state, config)
+
+    // then
+    expect(target).toEqual({ targetPaneId: "%2", splitDirection: "-v" })
+  })
+})
@@ -555,7 +634,7 @@ describe("decideSpawnActions with custom agentPaneWidth", () => {
     }
   })

-  it("#given wider main pane #when capacity needs two evictions #then replace is chosen", () => {
+  it("#given wider main pane #when capacity needs two evictions #then defer is chosen", () => {
     //#given
     const config: CapacityConfig = { mainPaneMinWidth: 120, agentPaneWidth: 40 }
     const state = createWindowState(220, 44, [
@@ -586,8 +665,8 @@ describe("decideSpawnActions with custom agentPaneWidth", () => {
     const result = decideSpawnActions(state, "ses-new", "new task", config, mappings)

     //#then
-    expect(result.canSpawn).toBe(true)
-    expect(result.actions).toHaveLength(1)
-    expect(result.actions[0].type).toBe("replace")
+    expect(result.canSpawn).toBe(false)
+    expect(result.actions).toHaveLength(0)
+    expect(result.reason).toContain("defer attach")
   })
 })
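Several of the test comments above reference the same split threshold: a pane can be split only if it is at least `2 * agentPaneWidth + DIVIDER_SIZE` wide (2*40+1 = 81 for the default config). A tiny illustrative sketch of that check (the function and constant names are assumptions for illustration, not the plugin's exact source):

```typescript
// Assumed divider width between two tmux panes (tmux draws a 1-column border).
const DIVIDER_SIZE = 1

// A horizontal split must leave both resulting panes at least
// agentPaneWidth columns wide, plus one divider column between them.
function canSplitPane(paneWidth: number, agentPaneWidth: number): boolean {
  return paneWidth >= 2 * agentPaneWidth + DIVIDER_SIZE
}

console.log(canSplitPane(81, 40)) // → true  (exactly at the 2*40+1 threshold)
console.log(canSplitPane(80, 40)) // → false (1 column below the threshold)
```

This is why the tests above probe `exactThreshold` and `belowThreshold` as `2 * agentPaneWidth + 1` and `2 * agentPaneWidth`: the boundary is inclusive on the high side.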
@@ -1,9 +1,9 @@
 import { MIN_PANE_HEIGHT, MIN_PANE_WIDTH } from "./types"
-import type { TmuxPaneInfo } from "./types"
+import type { CapacityConfig, TmuxPaneInfo } from "./types"
 import {
   DIVIDER_SIZE,
   MAIN_PANE_RATIO,
   MAX_GRID_SIZE,
+  computeAgentAreaWidth,
 } from "./tmux-grid-constants"

 export interface GridCapacity {
@@ -24,16 +24,36 @@ export interface GridPlan {
   slotHeight: number
 }

+type CapacityOptions = CapacityConfig | number | undefined
+
+function resolveMinPaneWidth(options?: CapacityOptions): number {
+  if (typeof options === "number") {
+    return Math.max(1, options)
+  }
+  if (options && typeof options.agentPaneWidth === "number") {
+    return Math.max(1, options.agentPaneWidth)
+  }
+  return MIN_PANE_WIDTH
+}
+
+function resolveAgentAreaWidth(windowWidth: number, options?: CapacityOptions): number {
+  if (typeof options === "number") {
+    return computeAgentAreaWidth(windowWidth)
+  }
+  return computeAgentAreaWidth(windowWidth, options)
+}
+
 export function calculateCapacity(
   windowWidth: number,
   windowHeight: number,
-  minPaneWidth: number = MIN_PANE_WIDTH,
+  options?: CapacityOptions,
   mainPaneWidth?: number,
 ): GridCapacity {
   const availableWidth =
-    typeof mainPaneWidth === "number"
-      ? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
-      : Math.floor(windowWidth * (1 - MAIN_PANE_RATIO))
+    typeof mainPaneWidth === "number"
+      ? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
+      : resolveAgentAreaWidth(windowWidth, options)
+  const minPaneWidth = resolveMinPaneWidth(options)
   const cols = Math.min(
     MAX_GRID_SIZE,
     Math.max(
@@ -59,15 +79,10 @@ export function computeGridPlan(
   windowWidth: number,
   windowHeight: number,
   paneCount: number,
+  options?: CapacityOptions,
   mainPaneWidth?: number,
-  minPaneWidth?: number,
 ): GridPlan {
-  const capacity = calculateCapacity(
-    windowWidth,
-    windowHeight,
-    minPaneWidth ?? MIN_PANE_WIDTH,
-    mainPaneWidth,
-  )
+  const capacity = calculateCapacity(windowWidth, windowHeight, options, mainPaneWidth)
   const { cols: maxCols, rows: maxRows } = capacity

   if (maxCols === 0 || maxRows === 0 || paneCount === 0) {
@@ -91,9 +106,9 @@ export function computeGridPlan(
   }

   const availableWidth =
-    typeof mainPaneWidth === "number"
-      ? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
-      : Math.floor(windowWidth * (1 - MAIN_PANE_RATIO))
+    typeof mainPaneWidth === "number"
+      ? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
+      : resolveAgentAreaWidth(windowWidth, options)
   const slotWidth = Math.floor(availableWidth / bestCols)
   const slotHeight = Math.floor(windowHeight / bestRows)
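The capacity hunk above derives the agent-area width by subtracting the main pane and one divider from the window width, then fits columns of at least `minPaneWidth` into it. A hedged sketch of that arithmetic (constants and the exact column formula are assumptions for illustration; the plugin's real values live in `tmux-grid-constants`):

```typescript
// Assumed values, mirroring the 1-column tmux divider and a small grid cap.
const DIVIDER_SIZE = 1
const MAX_GRID_SIZE = 4

function capacityCols(windowWidth: number, mainPaneWidth: number, minPaneWidth: number): number {
  // Width left for agent panes once the main pane and one divider are subtracted.
  const availableWidth = Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
  // Each column costs minPaneWidth plus one divider between columns;
  // the first column gets the divider "for free", hence the +DIVIDER_SIZE.
  const fit = Math.floor((availableWidth + DIVIDER_SIZE) / (minPaneWidth + DIVIDER_SIZE))
  return Math.min(MAX_GRID_SIZE, fit)
}

console.log(capacityCols(240, 144, 40)) // → 2 (95 columns left; two 40-wide panes + divider fit)
```

Threading `options` (a `CapacityConfig` or a bare number) through `calculateCapacity` simply changes where `minPaneWidth` and the agent-area width come from; the geometry itself stays this subtraction-and-division.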
145 src/features/tmux-subagent/layout-config.test.ts (new normal file)
@@ -0,0 +1,145 @@
import { describe, expect, it } from "bun:test"
import { decideSpawnActions, findSpawnTarget, type SessionMapping } from "./decision-engine"
import type { CapacityConfig, WindowState } from "./types"

function createState(
  windowWidth: number,
  windowHeight: number,
  agentPanes: WindowState["agentPanes"],
): WindowState {
  return {
    windowWidth,
    windowHeight,
    mainPane: {
      paneId: "%0",
      width: Math.floor(windowWidth / 2),
      height: windowHeight,
      left: 0,
      top: 0,
      title: "main",
      isActive: true,
    },
    agentPanes,
  }
}

describe("tmux layout-aware split behavior", () => {
  it("uses -v for first spawn in main-horizontal layout", () => {
    const config: CapacityConfig = {
      layout: "main-horizontal",
      mainPaneSize: 60,
      mainPaneMinWidth: 120,
      agentPaneWidth: 40,
    }
    const state = createState(220, 44, [])

    const decision = decideSpawnActions(state, "ses-1", "agent", config, [])

    expect(decision.canSpawn).toBe(true)
    expect(decision.actions[0]).toMatchObject({
      type: "spawn",
      splitDirection: "-v",
    })
  })

  it("uses -h for first spawn in main-vertical layout", () => {
    const config: CapacityConfig = {
      layout: "main-vertical",
      mainPaneSize: 60,
      mainPaneMinWidth: 120,
      agentPaneWidth: 40,
    }
    const state = createState(220, 44, [])

    const decision = decideSpawnActions(state, "ses-1", "agent", config, [])

    expect(decision.canSpawn).toBe(true)
    expect(decision.actions[0]).toMatchObject({
      type: "spawn",
      splitDirection: "-h",
    })
  })

  it("prefers horizontal split target in main-horizontal layout", () => {
    const config: CapacityConfig = {
      layout: "main-horizontal",
      mainPaneSize: 60,
      mainPaneMinWidth: 120,
      agentPaneWidth: 40,
    }
    const state = createState(260, 60, [
      {
        paneId: "%1",
        width: 120,
        height: 30,
        left: 0,
        top: 30,
        title: "agent",
        isActive: false,
      },
    ])

    const target = findSpawnTarget(state, config)

    expect(target).toEqual({ targetPaneId: "%1", splitDirection: "-h" })
  })

  it("defers when strict main-horizontal cannot split", () => {
    const config: CapacityConfig = {
      layout: "main-horizontal",
      mainPaneSize: 60,
      mainPaneMinWidth: 120,
      agentPaneWidth: 40,
    }
    const state = createState(220, 44, [
      {
        paneId: "%1",
        width: 60,
        height: 44,
        left: 0,
        top: 22,
        title: "old",
        isActive: false,
      },
    ])
    const mappings: SessionMapping[] = [
      { sessionId: "old-ses", paneId: "%1", createdAt: new Date("2024-01-01") },
    ]

    const decision = decideSpawnActions(state, "new-ses", "agent", config, mappings)

    expect(decision.canSpawn).toBe(false)
    expect(decision.actions).toHaveLength(0)
    expect(decision.reason).toContain("defer")
  })

  it("still spawns in narrow main-vertical when vertical split is possible", () => {
    const config: CapacityConfig = {
      layout: "main-vertical",
      mainPaneSize: 60,
      mainPaneMinWidth: 120,
      agentPaneWidth: 40,
    }
    const state = createState(169, 40, [
      {
        paneId: "%1",
        width: 48,
        height: 40,
        left: 121,
        top: 0,
        title: "agent",
        isActive: false,
      },
    ])

    const decision = decideSpawnActions(state, "new-ses", "agent", config, [])

    expect(decision.canSpawn).toBe(true)
    expect(decision.actions).toHaveLength(1)
    expect(decision.actions[0]).toMatchObject({
      type: "spawn",
      targetPaneId: "%1",
      splitDirection: "-v",
    })
  })
})
@@ -156,7 +156,15 @@ describe('TmuxSessionManager', () => {
    // given
    mockIsInsideTmux.mockReturnValue(true)
    const { TmuxSessionManager } = await import('./manager')
-    const ctx = createMockContext()
+    const ctx = createMockContext({
+      sessionStatusResult: {
+        data: {
+          ses_1: { type: 'running' },
+          ses_2: { type: 'running' },
+          ses_3: { type: 'running' },
+        },
+      },
+    })
    const config: TmuxConfig = {
      enabled: true,
      layout: 'main-vertical',
@@ -176,7 +184,13 @@ describe('TmuxSessionManager', () => {
    // given
    mockIsInsideTmux.mockReturnValue(false)
    const { TmuxSessionManager } = await import('./manager')
-    const ctx = createMockContext()
+    const ctx = createMockContext({
+      sessionStatusResult: {
+        data: {
+          ses_once: { type: 'running' },
+        },
+      },
+    })
    const config: TmuxConfig = {
      enabled: true,
      layout: 'main-vertical',
@@ -386,7 +400,7 @@ describe('TmuxSessionManager', () => {
    expect(mockExecuteActions).toHaveBeenCalledTimes(0)
  })

-  test('replaces oldest agent when unsplittable (small window)', async () => {
+  test('defers attach when unsplittable (small window)', async () => {
    // given - small window where split is not possible
    mockIsInsideTmux.mockReturnValue(true)
    mockQueryWindowState.mockImplementation(async () =>
@@ -423,13 +437,224 @@ describe('TmuxSessionManager', () => {
      createSessionCreatedEvent('ses_new', 'ses_parent', 'New Task')
    )

-    // then - with small window, replace action is used instead of close+spawn
-    expect(mockExecuteActions).toHaveBeenCalledTimes(1)
-    const call = mockExecuteActions.mock.calls[0]
-    expect(call).toBeDefined()
-    const actionsArg = call![0]
-    expect(actionsArg).toHaveLength(1)
-    expect(actionsArg[0].type).toBe('replace')
+    // then - with small window, manager defers instead of replacing
+    expect(mockExecuteActions).toHaveBeenCalledTimes(0)
+    expect((manager as any).deferredQueue).toEqual(['ses_new'])
  })

+  test('keeps deferred queue idempotent for duplicate session.created events', async () => {
+    // given
+    mockIsInsideTmux.mockReturnValue(true)
+    mockQueryWindowState.mockImplementation(async () =>
+      createWindowState({
+        windowWidth: 160,
+        windowHeight: 11,
+        agentPanes: [
+          {
+            paneId: '%1',
+            width: 80,
+            height: 11,
+            left: 80,
+            top: 0,
+            title: 'old',
+            isActive: false,
+          },
+        ],
+      })
+    )
+
+    const { TmuxSessionManager } = await import('./manager')
+    const ctx = createMockContext()
+    const config: TmuxConfig = {
+      enabled: true,
+      layout: 'main-vertical',
+      main_pane_size: 60,
+      main_pane_min_width: 120,
+      agent_pane_min_width: 40,
+    }
+    const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
+
+    // when
+    await manager.onSessionCreated(
+      createSessionCreatedEvent('ses_dup', 'ses_parent', 'Duplicate Task')
+    )
+    await manager.onSessionCreated(
+      createSessionCreatedEvent('ses_dup', 'ses_parent', 'Duplicate Task')
+    )
+
+    // then
+    expect((manager as any).deferredQueue).toEqual(['ses_dup'])
+  })
+
+  test('auto-attaches deferred sessions in FIFO order', async () => {
+    // given
+    mockIsInsideTmux.mockReturnValue(true)
+    mockQueryWindowState.mockImplementation(async () =>
+      createWindowState({
+        windowWidth: 160,
+        windowHeight: 11,
+        agentPanes: [
+          {
+            paneId: '%1',
+            width: 80,
+            height: 11,
+            left: 80,
+            top: 0,
+            title: 'old',
+            isActive: false,
+          },
+        ],
+      })
+    )
+
+    const attachOrder: string[] = []
+    mockExecuteActions.mockImplementation(async (actions) => {
+      for (const action of actions) {
+        if (action.type === 'spawn') {
+          attachOrder.push(action.sessionId)
+          trackedSessions.add(action.sessionId)
+          return {
+            success: true,
+            spawnedPaneId: `%${action.sessionId}`,
+            results: [{ action, result: { success: true, paneId: `%${action.sessionId}` } }],
+          }
+        }
+      }
+      return { success: true, results: [] }
+    })
+
+    const { TmuxSessionManager } = await import('./manager')
+    const ctx = createMockContext()
+    const config: TmuxConfig = {
+      enabled: true,
+      layout: 'main-vertical',
+      main_pane_size: 60,
+      main_pane_min_width: 120,
+      agent_pane_min_width: 40,
+    }
+    const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
+
+    await manager.onSessionCreated(createSessionCreatedEvent('ses_1', 'ses_parent', 'Task 1'))
+    await manager.onSessionCreated(createSessionCreatedEvent('ses_2', 'ses_parent', 'Task 2'))
+    await manager.onSessionCreated(createSessionCreatedEvent('ses_3', 'ses_parent', 'Task 3'))
+    expect((manager as any).deferredQueue).toEqual(['ses_1', 'ses_2', 'ses_3'])
+
+    // when
+    mockQueryWindowState.mockImplementation(async () => createWindowState())
+    await (manager as any).tryAttachDeferredSession()
+    await (manager as any).tryAttachDeferredSession()
+    await (manager as any).tryAttachDeferredSession()
+
+    // then
+    expect(attachOrder).toEqual(['ses_1', 'ses_2', 'ses_3'])
+    expect((manager as any).deferredQueue).toEqual([])
+  })
+
+  test('does not attach deferred session more than once across repeated retries', async () => {
+    // given
+    mockIsInsideTmux.mockReturnValue(true)
+    mockQueryWindowState.mockImplementation(async () =>
+      createWindowState({
+        windowWidth: 160,
+        windowHeight: 11,
+        agentPanes: [
+          {
+            paneId: '%1',
+            width: 80,
+            height: 11,
+            left: 80,
+            top: 0,
+            title: 'old',
+            isActive: false,
+          },
+        ],
+      })
+    )
+
+    let attachCount = 0
+    mockExecuteActions.mockImplementation(async (actions) => {
+      for (const action of actions) {
+        if (action.type === 'spawn') {
+          attachCount += 1
+          trackedSessions.add(action.sessionId)
+          return {
+            success: true,
+            spawnedPaneId: `%${action.sessionId}`,
+            results: [{ action, result: { success: true, paneId: `%${action.sessionId}` } }],
+          }
+        }
+      }
+      return { success: true, results: [] }
+    })
+
+    const { TmuxSessionManager } = await import('./manager')
+    const ctx = createMockContext()
+    const config: TmuxConfig = {
+      enabled: true,
+      layout: 'main-vertical',
+      main_pane_size: 60,
+      main_pane_min_width: 120,
+      agent_pane_min_width: 40,
+    }
+    const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
+
+    await manager.onSessionCreated(
+      createSessionCreatedEvent('ses_once', 'ses_parent', 'Task Once')
+    )
+
+    // when
+    mockQueryWindowState.mockImplementation(async () => createWindowState())
+    await (manager as any).tryAttachDeferredSession()
+    await (manager as any).tryAttachDeferredSession()
+
+    // then
+    expect(attachCount).toBe(1)
+    expect((manager as any).deferredQueue).toEqual([])
+  })
+
+  test('removes deferred session when session is deleted before attach', async () => {
+    // given
+    mockIsInsideTmux.mockReturnValue(true)
+    mockQueryWindowState.mockImplementation(async () =>
+      createWindowState({
+        windowWidth: 160,
+        windowHeight: 11,
+        agentPanes: [
+          {
+            paneId: '%1',
+            width: 80,
+            height: 11,
+            left: 80,
+            top: 0,
+            title: 'old',
+            isActive: false,
+          },
+        ],
+      })
+    )
+
+    const { TmuxSessionManager } = await import('./manager')
+    const ctx = createMockContext()
+    const config: TmuxConfig = {
|
||||
enabled: true,
|
||||
layout: 'main-vertical',
|
||||
main_pane_size: 60,
|
||||
main_pane_min_width: 120,
|
||||
agent_pane_min_width: 40,
|
||||
}
|
||||
const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
|
||||
|
||||
await manager.onSessionCreated(
|
||||
createSessionCreatedEvent('ses_pending', 'ses_parent', 'Pending Task')
|
||||
)
|
||||
expect((manager as any).deferredQueue).toEqual(['ses_pending'])
|
||||
|
||||
// when
|
||||
await manager.onSessionDeleted({ sessionID: 'ses_pending' })
|
||||
|
||||
// then
|
||||
expect((manager as any).deferredQueue).toEqual([])
|
||||
expect(mockExecuteAction).toHaveBeenCalledTimes(0)
|
||||
})
|
||||
})
|
||||
|
||||
@@ -478,7 +703,7 @@ describe('TmuxSessionManager', () => {
|
||||
await manager.onSessionDeleted({ sessionID: 'ses_timeout' })
|
||||
|
||||
// then
|
||||
expect(mockExecuteAction).toHaveBeenCalledTimes(0)
|
||||
expect(mockExecuteAction).toHaveBeenCalledTimes(1)
|
||||
})
|
||||
|
||||
test('closes pane when tracked session is deleted', async () => {
|
||||
@@ -680,7 +905,7 @@ describe('DecisionEngine', () => {
|
||||
}
|
||||
})
|
||||
|
||||
test('returns replace when split not possible', async () => {
|
||||
test('returns canSpawn=false when split not possible', async () => {
|
||||
// given - small window where split is never possible
|
||||
const { decideSpawnActions } = await import('./decision-engine')
|
||||
const state: WindowState = {
|
||||
@@ -720,10 +945,10 @@ describe('DecisionEngine', () => {
|
||||
sessionMappings
|
||||
)
|
||||
|
||||
// then - agent area (80) < MIN_SPLIT_WIDTH (105), so replace is used
|
||||
expect(decision.canSpawn).toBe(true)
|
||||
expect(decision.actions).toHaveLength(1)
|
||||
expect(decision.actions[0].type).toBe('replace')
|
||||
// then - agent area (80) < MIN_SPLIT_WIDTH (105), so attach is deferred
|
||||
expect(decision.canSpawn).toBe(false)
|
||||
expect(decision.actions).toHaveLength(0)
|
||||
expect(decision.reason).toContain('defer')
|
||||
})
|
||||
|
||||
test('returns canSpawn=false when window too small', async () => {
|
||||
|
||||
@@ -5,6 +5,7 @@ import { log, normalizeSDKResponse } from "../../shared"
import {
  isInsideTmux as defaultIsInsideTmux,
  getCurrentPaneId as defaultGetCurrentPaneId,
  POLL_INTERVAL_BACKGROUND_MS,
  SESSION_READY_POLL_INTERVAL_MS,
  SESSION_READY_TIMEOUT_MS,
} from "../../shared/tmux"
@@ -19,6 +20,12 @@ interface SessionCreatedEvent {
  properties?: { info?: { id?: string; parentID?: string; title?: string } }
}

interface DeferredSession {
  sessionId: string
  title: string
  queuedAt: Date
}

export interface TmuxUtilDeps {
  isInsideTmux: () => boolean
  getCurrentPaneId: () => string | undefined
@@ -48,6 +55,11 @@ export class TmuxSessionManager {
  private sourcePaneId: string | undefined
  private sessions = new Map<string, TrackedSession>()
  private pendingSessions = new Set<string>()
  private spawnQueue: Promise<void> = Promise.resolve()
  private deferredSessions = new Map<string, DeferredSession>()
  private deferredQueue: string[] = []
  private deferredAttachInterval?: ReturnType<typeof setInterval>
  private deferredAttachTickScheduled = false
  private deps: TmuxUtilDeps
  private pollingManager: TmuxPollingManager
  constructor(ctx: PluginInput, tmuxConfig: TmuxConfig, deps: TmuxUtilDeps = defaultTmuxDeps) {
@@ -75,6 +87,8 @@ export class TmuxSessionManager {

  private getCapacityConfig(): CapacityConfig {
    return {
      layout: this.tmuxConfig.layout,
      mainPaneSize: this.tmuxConfig.main_pane_size,
      mainPaneMinWidth: this.tmuxConfig.main_pane_min_width,
      agentPaneWidth: this.tmuxConfig.agent_pane_min_width,
    }
@@ -88,6 +102,136 @@ export class TmuxSessionManager {
    }))
  }

  private enqueueDeferredSession(sessionId: string, title: string): void {
    if (this.deferredSessions.has(sessionId)) return
    this.deferredSessions.set(sessionId, {
      sessionId,
      title,
      queuedAt: new Date(),
    })
    this.deferredQueue.push(sessionId)
    log("[tmux-session-manager] deferred session queued", {
      sessionId,
      queueLength: this.deferredQueue.length,
    })
    this.startDeferredAttachLoop()
  }

  private removeDeferredSession(sessionId: string): void {
    if (!this.deferredSessions.delete(sessionId)) return
    this.deferredQueue = this.deferredQueue.filter((id) => id !== sessionId)
    log("[tmux-session-manager] deferred session removed", {
      sessionId,
      queueLength: this.deferredQueue.length,
    })
    if (this.deferredQueue.length === 0) {
      this.stopDeferredAttachLoop()
    }
  }

  private startDeferredAttachLoop(): void {
    if (this.deferredAttachInterval) return
    this.deferredAttachInterval = setInterval(() => {
      if (this.deferredAttachTickScheduled) return
      this.deferredAttachTickScheduled = true
      void this.enqueueSpawn(async () => {
        try {
          await this.tryAttachDeferredSession()
        } finally {
          this.deferredAttachTickScheduled = false
        }
      })
    }, POLL_INTERVAL_BACKGROUND_MS)
    log("[tmux-session-manager] deferred attach polling started", {
      intervalMs: POLL_INTERVAL_BACKGROUND_MS,
    })
  }

  private stopDeferredAttachLoop(): void {
    if (!this.deferredAttachInterval) return
    clearInterval(this.deferredAttachInterval)
    this.deferredAttachInterval = undefined
    this.deferredAttachTickScheduled = false
    log("[tmux-session-manager] deferred attach polling stopped")
  }

  private async tryAttachDeferredSession(): Promise<void> {
    if (!this.sourcePaneId) return
    const sessionId = this.deferredQueue[0]
    if (!sessionId) {
      this.stopDeferredAttachLoop()
      return
    }

    const deferred = this.deferredSessions.get(sessionId)
    if (!deferred) {
      this.deferredQueue.shift()
      return
    }

    const state = await queryWindowState(this.sourcePaneId)
    if (!state) return

    const decision = decideSpawnActions(
      state,
      sessionId,
      deferred.title,
      this.getCapacityConfig(),
      this.getSessionMappings(),
    )

    if (!decision.canSpawn || decision.actions.length === 0) {
      log("[tmux-session-manager] deferred session still waiting for capacity", {
        sessionId,
        reason: decision.reason,
      })
      return
    }

    const result = await executeActions(decision.actions, {
      config: this.tmuxConfig,
      serverUrl: this.serverUrl,
      windowState: state,
      sourcePaneId: this.sourcePaneId,
    })

    if (!result.success || !result.spawnedPaneId) {
      log("[tmux-session-manager] deferred session attach failed", {
        sessionId,
        results: result.results.map((r) => ({
          type: r.action.type,
          success: r.result.success,
          error: r.result.error,
        })),
      })
      return
    }

    const sessionReady = await this.waitForSessionReady(sessionId)
    if (!sessionReady) {
      log("[tmux-session-manager] deferred session not ready after timeout", {
        sessionId,
        paneId: result.spawnedPaneId,
      })
    }

    const now = Date.now()
    this.sessions.set(sessionId, {
      sessionId,
      paneId: result.spawnedPaneId,
      description: deferred.title,
      createdAt: new Date(now),
      lastSeenAt: new Date(now),
    })
    this.removeDeferredSession(sessionId)
    this.pollingManager.startPolling()
    log("[tmux-session-manager] deferred session attached", {
      sessionId,
      paneId: result.spawnedPaneId,
      sessionReady,
    })
  }

  private async waitForSessionReady(sessionId: string): Promise<boolean> {
    const startTime = Date.now()

@@ -138,7 +282,11 @@ export class TmuxSessionManager {
    const sessionId = info.id
    const title = info.title ?? "Subagent"

    if (this.sessions.has(sessionId) || this.pendingSessions.has(sessionId)) {
    if (
      this.sessions.has(sessionId) ||
      this.pendingSessions.has(sessionId) ||
      this.deferredSessions.has(sessionId)
    ) {
      log("[tmux-session-manager] session already tracked or pending", { sessionId })
      return
    }
@@ -147,15 +295,17 @@ export class TmuxSessionManager {
      log("[tmux-session-manager] no source pane id")
      return
    }
    const sourcePaneId = this.sourcePaneId

    this.pendingSessions.add(sessionId)

    try {
      const state = await queryWindowState(this.sourcePaneId)
      if (!state) {
        log("[tmux-session-manager] failed to query window state")
        return
      }
    await this.enqueueSpawn(async () => {
      try {
        const state = await queryWindowState(sourcePaneId)
        if (!state) {
          log("[tmux-session-manager] failed to query window state")
          return
        }

        log("[tmux-session-manager] window state queried", {
          windowWidth: state.windowWidth,
@@ -164,13 +314,13 @@ export class TmuxSessionManager {
          agentPanes: state.agentPanes.map((p) => p.paneId),
        })

      const decision = decideSpawnActions(
        state,
        sessionId,
        title,
        this.getCapacityConfig(),
        this.getSessionMappings()
      )
        const decision = decideSpawnActions(
          state,
          sessionId,
          title,
          this.getCapacityConfig(),
          this.getSessionMappings()
        )

        log("[tmux-session-manager] spawn decision", {
          canSpawn: decision.canSpawn,
@@ -183,82 +333,105 @@ export class TmuxSessionManager {
          }),
        })

      if (!decision.canSpawn) {
        log("[tmux-session-manager] cannot spawn", { reason: decision.reason })
        return
      }

      const result = await executeActions(
        decision.actions,
        { config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
      )

      for (const { action, result: actionResult } of result.results) {
        if (action.type === "close" && actionResult.success) {
          this.sessions.delete(action.sessionId)
          log("[tmux-session-manager] removed closed session from cache", {
            sessionId: action.sessionId,
          })
        if (!decision.canSpawn) {
          log("[tmux-session-manager] cannot spawn", { reason: decision.reason })
          this.enqueueDeferredSession(sessionId, title)
          return
        }
        if (action.type === "replace" && actionResult.success) {
          this.sessions.delete(action.oldSessionId)
          log("[tmux-session-manager] removed replaced session from cache", {
            oldSessionId: action.oldSessionId,
            newSessionId: action.newSessionId,
          })
        }
      }

      if (result.success && result.spawnedPaneId) {
        const sessionReady = await this.waitForSessionReady(sessionId)

        if (!sessionReady) {
          log("[tmux-session-manager] session not ready after timeout, closing spawned pane", {
        const result = await executeActions(
          decision.actions,
          {
            config: this.tmuxConfig,
            serverUrl: this.serverUrl,
            windowState: state,
            sourcePaneId,
          }
        )

        for (const { action, result: actionResult } of result.results) {
          if (action.type === "close" && actionResult.success) {
            this.sessions.delete(action.sessionId)
            log("[tmux-session-manager] removed closed session from cache", {
              sessionId: action.sessionId,
            })
          }
          if (action.type === "replace" && actionResult.success) {
            this.sessions.delete(action.oldSessionId)
            log("[tmux-session-manager] removed replaced session from cache", {
              oldSessionId: action.oldSessionId,
              newSessionId: action.newSessionId,
            })
          }
        }

        if (result.success && result.spawnedPaneId) {
          const sessionReady = await this.waitForSessionReady(sessionId)

          if (!sessionReady) {
            log("[tmux-session-manager] session not ready after timeout, tracking anyway", {
              sessionId,
              paneId: result.spawnedPaneId,
            })
          }

          const now = Date.now()
          this.sessions.set(sessionId, {
            sessionId,
            paneId: result.spawnedPaneId,
            description: title,
            createdAt: new Date(now),
            lastSeenAt: new Date(now),
          })
          log("[tmux-session-manager] pane spawned and tracked", {
            sessionId,
            paneId: result.spawnedPaneId,
            sessionReady,
          })
          this.pollingManager.startPolling()
        } else {
          log("[tmux-session-manager] spawn failed", {
            success: result.success,
            results: result.results.map((r) => ({
              type: r.action.type,
              success: r.result.success,
              error: r.result.error,
            })),
          })

          await executeAction(
            { type: "close", paneId: result.spawnedPaneId, sessionId },
            { config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
          )
          if (result.spawnedPaneId) {
            await executeAction(
              { type: "close", paneId: result.spawnedPaneId, sessionId },
              { config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
            )
          }

          return
        }

      const now = Date.now()
      this.sessions.set(sessionId, {
        sessionId,
        paneId: result.spawnedPaneId,
        description: title,
        createdAt: new Date(now),
        lastSeenAt: new Date(now),
      })
      log("[tmux-session-manager] pane spawned and tracked", {
        sessionId,
        paneId: result.spawnedPaneId,
        sessionReady,
      })
      this.pollingManager.startPolling()
    } else {
      log("[tmux-session-manager] spawn failed", {
        success: result.success,
        results: result.results.map((r) => ({
          type: r.action.type,
          success: r.result.success,
          error: r.result.error,
        })),
      })
      } finally {
        this.pendingSessions.delete(sessionId)
      }
    } finally {
      this.pendingSessions.delete(sessionId)
    }
    })
  }

  private async enqueueSpawn(run: () => Promise<void>): Promise<void> {
    this.spawnQueue = this.spawnQueue
      .catch(() => undefined)
      .then(run)
      .catch((err) => {
        log("[tmux-session-manager] spawn queue task failed", {
          error: String(err),
        })
      })
    await this.spawnQueue
  }

  async onSessionDeleted(event: { sessionID: string }): Promise<void> {
    if (!this.isEnabled()) return
    if (!this.sourcePaneId) return

    this.removeDeferredSession(event.sessionID)

    const tracked = this.sessions.get(event.sessionID)
    if (!tracked) return

@@ -272,7 +445,12 @@ export class TmuxSessionManager {

    const closeAction = decideCloseAction(state, event.sessionID, this.getSessionMappings())
    if (closeAction) {
      await executeAction(closeAction, { config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state })
      await executeAction(closeAction, {
        config: this.tmuxConfig,
        serverUrl: this.serverUrl,
        windowState: state,
        sourcePaneId: this.sourcePaneId,
      })
    }

    this.sessions.delete(event.sessionID)
@@ -296,7 +474,12 @@ export class TmuxSessionManager {
    if (state) {
      await executeAction(
        { type: "close", paneId: tracked.paneId, sessionId },
        { config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
        {
          config: this.tmuxConfig,
          serverUrl: this.serverUrl,
          windowState: state,
          sourcePaneId: this.sourcePaneId,
        }
      )
    }

@@ -314,6 +497,9 @@ export class TmuxSessionManager {
  }

  async cleanup(): Promise<void> {
    this.stopDeferredAttachLoop()
    this.deferredQueue = []
    this.deferredSessions.clear()
    this.pollingManager.stopPolling()

    if (this.sessions.size > 0) {
@@ -324,7 +510,12 @@ export class TmuxSessionManager {
    const closePromises = Array.from(this.sessions.values()).map((s) =>
      executeAction(
        { type: "close", paneId: s.paneId, sessionId: s.sessionId },
        { config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
        {
          config: this.tmuxConfig,
          serverUrl: this.serverUrl,
          windowState: state,
          sourcePaneId: this.sourcePaneId,
        }
      ).catch((err) =>
        log("[tmux-session-manager] cleanup error for pane", {
          paneId: s.paneId,

@@ -1,14 +1,15 @@
import { MIN_PANE_WIDTH } from "./types"
import type { SplitDirection, TmuxPaneInfo } from "./types"
import {
  DIVIDER_SIZE,
  MAX_COLS,
  MAX_ROWS,
  MIN_SPLIT_HEIGHT,
  DIVIDER_SIZE,
  MAX_COLS,
  MAX_ROWS,
  MIN_SPLIT_HEIGHT,
} from "./tmux-grid-constants"
import { MIN_PANE_WIDTH } from "./types"

function minSplitWidthFor(minPaneWidth: number): number {
  return 2 * minPaneWidth + DIVIDER_SIZE
function getMinSplitWidth(minPaneWidth?: number): number {
  const width = Math.max(1, minPaneWidth ?? MIN_PANE_WIDTH)
  return 2 * width + DIVIDER_SIZE
}

export function getColumnCount(paneCount: number): number {
@@ -25,16 +26,16 @@ export function getColumnWidth(agentAreaWidth: number, paneCount: number): numbe
export function isSplittableAtCount(
  agentAreaWidth: number,
  paneCount: number,
  minPaneWidth: number = MIN_PANE_WIDTH,
  minPaneWidth?: number,
): boolean {
  const columnWidth = getColumnWidth(agentAreaWidth, paneCount)
  return columnWidth >= minSplitWidthFor(minPaneWidth)
  return columnWidth >= getMinSplitWidth(minPaneWidth)
}

export function findMinimalEvictions(
  agentAreaWidth: number,
  currentCount: number,
  minPaneWidth: number = MIN_PANE_WIDTH,
  minPaneWidth?: number,
): number | null {
  for (let k = 1; k <= currentCount; k++) {
    if (isSplittableAtCount(agentAreaWidth, currentCount - k, minPaneWidth)) {
@@ -47,30 +48,26 @@ export function findMinimalEvictions(
export function canSplitPane(
  pane: TmuxPaneInfo,
  direction: SplitDirection,
  minPaneWidth: number = MIN_PANE_WIDTH,
  minPaneWidth?: number,
): boolean {
  if (direction === "-h") {
    return pane.width >= minSplitWidthFor(minPaneWidth)
    return pane.width >= getMinSplitWidth(minPaneWidth)
  }
  return pane.height >= MIN_SPLIT_HEIGHT
}

export function canSplitPaneAnyDirection(pane: TmuxPaneInfo, minPaneWidth: number = MIN_PANE_WIDTH): boolean {
  return canSplitPaneAnyDirectionWithMinWidth(pane, minPaneWidth)
}

export function canSplitPaneAnyDirectionWithMinWidth(
export function canSplitPaneAnyDirection(
  pane: TmuxPaneInfo,
  minPaneWidth: number = MIN_PANE_WIDTH,
  minPaneWidth?: number,
): boolean {
  return pane.width >= minSplitWidthFor(minPaneWidth) || pane.height >= MIN_SPLIT_HEIGHT
  return pane.width >= getMinSplitWidth(minPaneWidth) || pane.height >= MIN_SPLIT_HEIGHT
}

export function getBestSplitDirection(
  pane: TmuxPaneInfo,
  minPaneWidth: number = MIN_PANE_WIDTH,
  minPaneWidth?: number,
): SplitDirection | null {
  const canH = pane.width >= minSplitWidthFor(minPaneWidth)
  const canH = pane.width >= getMinSplitWidth(minPaneWidth)
  const canV = pane.height >= MIN_SPLIT_HEIGHT

  if (!canH && !canV) return null

@@ -14,7 +14,7 @@ export async function queryWindowState(sourcePaneId: string): Promise<WindowStat
      "-t",
      sourcePaneId,
      "-F",
      "#{pane_id},#{pane_width},#{pane_height},#{pane_left},#{pane_top},#{pane_title},#{pane_active},#{window_width},#{window_height}",
      "#{pane_id}\t#{pane_width}\t#{pane_height}\t#{pane_left}\t#{pane_top}\t#{pane_active}\t#{window_width}\t#{window_height}\t#{pane_title}",
    ],
    { stdout: "pipe", stderr: "pipe" }
  )
@@ -35,7 +35,11 @@ export async function queryWindowState(sourcePaneId: string): Promise<WindowStat
  const panes: TmuxPaneInfo[] = []

  for (const line of lines) {
    const [paneId, widthStr, heightStr, leftStr, topStr, title, activeStr, windowWidthStr, windowHeightStr] = line.split(",")
    const fields = line.split("\t")
    if (fields.length < 9) continue

    const [paneId, widthStr, heightStr, leftStr, topStr, activeStr, windowWidthStr, windowHeightStr] = fields
    const title = fields.slice(8).join("\t")
    const width = parseInt(widthStr, 10)
    const height = parseInt(heightStr, 10)
    const left = parseInt(leftStr, 10)
@@ -51,9 +55,21 @@ export async function queryWindowState(sourcePaneId: string): Promise<WindowStat

  panes.sort((a, b) => a.left - b.left || a.top - b.top)

  const mainPane = panes.find((p) => p.paneId === sourcePaneId)
  const mainPane = panes.reduce<TmuxPaneInfo | null>((selected, pane) => {
    if (!selected) return pane
    if (pane.left !== selected.left) {
      return pane.left < selected.left ? pane : selected
    }
    if (pane.width !== selected.width) {
      return pane.width > selected.width ? pane : selected
    }
    if (pane.top !== selected.top) {
      return pane.top < selected.top ? pane : selected
    }
    return pane.paneId === sourcePaneId ? pane : selected
  }, null)
  if (!mainPane) {
    log("[pane-state-querier] CRITICAL: sourcePaneId not found in panes", {
    log("[pane-state-querier] CRITICAL: failed to determine main pane", {
      sourcePaneId,
      availablePanes: panes.map((p) => p.paneId),
    })

@@ -5,7 +5,7 @@ import type {
  TmuxPaneInfo,
  WindowState,
} from "./types"
import { DIVIDER_SIZE } from "./tmux-grid-constants"
import { computeAgentAreaWidth } from "./tmux-grid-constants"
import {
  canSplitPane,
  findMinimalEvictions,
@@ -14,6 +14,14 @@ import {
import { findSpawnTarget } from "./spawn-target-finder"
import { findOldestAgentPane, type SessionMapping } from "./oldest-agent-pane"

function getInitialSplitDirection(layout?: string): "-h" | "-v" {
  return layout === "main-horizontal" ? "-v" : "-h"
}

function isStrictMainLayout(layout?: string): boolean {
  return layout === "main-vertical" || layout === "main-horizontal"
}

export function decideSpawnActions(
  state: WindowState,
  sessionId: string,
@@ -25,14 +33,13 @@ export function decideSpawnActions(
    return { canSpawn: false, actions: [], reason: "no main pane found" }
  }

  const minPaneWidth = config.agentPaneWidth
  const agentAreaWidth = Math.max(
    0,
    state.windowWidth - state.mainPane.width - DIVIDER_SIZE,
  )
  const agentAreaWidth = computeAgentAreaWidth(state.windowWidth, config)
  const minAgentPaneWidth = config.agentPaneWidth
  const currentCount = state.agentPanes.length
  const strictLayout = isStrictMainLayout(config.layout)
  const initialSplitDirection = getInitialSplitDirection(config.layout)

  if (agentAreaWidth < minPaneWidth && currentCount > 0) {
  if (agentAreaWidth < minAgentPaneWidth && currentCount > 0) {
    return {
      canSpawn: false,
      actions: [],
@@ -47,7 +54,7 @@ export function decideSpawnActions(

  if (currentCount === 0) {
    const virtualMainPane: TmuxPaneInfo = { ...state.mainPane, width: state.windowWidth }
    if (canSplitPane(virtualMainPane, "-h", minPaneWidth)) {
    if (canSplitPane(virtualMainPane, initialSplitDirection, minAgentPaneWidth)) {
      return {
        canSpawn: true,
        actions: [
@@ -56,7 +63,7 @@ export function decideSpawnActions(
            sessionId,
            description,
            targetPaneId: state.mainPane.paneId,
            splitDirection: "-h",
            splitDirection: initialSplitDirection,
          },
        ],
      }
@@ -64,8 +71,12 @@ export function decideSpawnActions(
    return { canSpawn: false, actions: [], reason: "mainPane too small to split" }
  }

  if (isSplittableAtCount(agentAreaWidth, currentCount, minPaneWidth)) {
    const spawnTarget = findSpawnTarget(state, minPaneWidth)
  const canEvaluateSpawnTarget =
    strictLayout ||
    isSplittableAtCount(agentAreaWidth, currentCount, minAgentPaneWidth)

  if (canEvaluateSpawnTarget) {
    const spawnTarget = findSpawnTarget(state, config)
    if (spawnTarget) {
      return {
        canSpawn: true,
@@ -82,40 +93,43 @@ export function decideSpawnActions(
    }
  }

  const minEvictions = findMinimalEvictions(agentAreaWidth, currentCount, minPaneWidth)
  if (minEvictions === 1 && oldestPane) {
    return {
      canSpawn: true,
      actions: [
        {
          type: "replace",
          paneId: oldestPane.paneId,
          oldSessionId: oldestMapping?.sessionId || "",
          newSessionId: sessionId,
          description,
        },
      ],
      reason: "replaced oldest pane to avoid split churn",
  if (!strictLayout) {
    const minEvictions = findMinimalEvictions(
      agentAreaWidth,
      currentCount,
      minAgentPaneWidth,
    )
    if (minEvictions === 1 && oldestPane) {
      return {
        canSpawn: true,
        actions: [
          {
            type: "close",
            paneId: oldestPane.paneId,
            sessionId: oldestMapping?.sessionId || "",
          },
          {
            type: "spawn",
            sessionId,
            description,
            targetPaneId: state.mainPane.paneId,
            splitDirection: initialSplitDirection,
          },
        ],
        reason: "closed 1 pane to make room for split",
      }
    }
  }

  if (oldestPane) {
    return {
      canSpawn: true,
      actions: [
        {
          type: "replace",
          paneId: oldestPane.paneId,
          oldSessionId: oldestMapping?.sessionId || "",
          newSessionId: sessionId,
          description,
        },
      ],
      reason: "replaced oldest pane (no split possible)",
      canSpawn: false,
      actions: [],
      reason: "no split target available (defer attach)",
    }
  }

  return { canSpawn: false, actions: [], reason: "no pane available to replace" }
  return { canSpawn: false, actions: [], reason: "no split target available (defer attach)" }
}

export function decideCloseAction(

@@ -1,13 +1,40 @@
import type { SplitDirection, TmuxPaneInfo, WindowState } from "./types"
import type { CapacityConfig, SplitDirection, TmuxPaneInfo, WindowState } from "./types"
import { computeMainPaneWidth } from "./tmux-grid-constants"
import { computeGridPlan, mapPaneToSlot } from "./grid-planning"
import { canSplitPane, getBestSplitDirection } from "./pane-split-availability"
import { MIN_PANE_WIDTH } from "./types"
import { canSplitPane } from "./pane-split-availability"

export interface SpawnTarget {
  targetPaneId: string
  splitDirection: SplitDirection
}

function isStrictMainVertical(config: CapacityConfig): boolean {
  return config.layout === "main-vertical"
}

function isStrictMainHorizontal(config: CapacityConfig): boolean {
  return config.layout === "main-horizontal"
}

function isStrictMainLayout(config: CapacityConfig): boolean {
  return isStrictMainVertical(config) || isStrictMainHorizontal(config)
}

function getInitialSplitDirection(config: CapacityConfig): SplitDirection {
  return isStrictMainHorizontal(config) ? "-v" : "-h"
}

function getStrictFollowupSplitDirection(config: CapacityConfig): SplitDirection {
  return isStrictMainHorizontal(config) ? "-h" : "-v"
}

function sortPanesForStrictLayout(panes: TmuxPaneInfo[], config: CapacityConfig): TmuxPaneInfo[] {
  if (isStrictMainHorizontal(config)) {
    return [...panes].sort((a, b) => a.left - b.left || a.top - b.top)
  }
  return [...panes].sort((a, b) => a.top - b.top || a.left - b.left)
}

function buildOccupancy(
  agentPanes: TmuxPaneInfo[],
  plan: ReturnType<typeof computeGridPlan>,
@@ -37,16 +64,29 @@ function findFirstEmptySlot(

function findSplittableTarget(
  state: WindowState,
  minPaneWidth: number,
  config: CapacityConfig,
  _preferredDirection?: SplitDirection,
): SpawnTarget | null {
  if (!state.mainPane) return null
  const existingCount = state.agentPanes.length
  const minAgentPaneWidth = config.agentPaneWidth
  const initialDirection = getInitialSplitDirection(config)

  if (existingCount === 0) {
    const virtualMainPane: TmuxPaneInfo = { ...state.mainPane, width: state.windowWidth }
    if (canSplitPane(virtualMainPane, "-h", minPaneWidth)) {
      return { targetPaneId: state.mainPane.paneId, splitDirection: "-h" }
    if (canSplitPane(virtualMainPane, initialDirection, minAgentPaneWidth)) {
      return { targetPaneId: state.mainPane.paneId, splitDirection: initialDirection }
    }
    return null
  }

  if (isStrictMainLayout(config)) {
    const followupDirection = getStrictFollowupSplitDirection(config)
    const panesByPriority = sortPanesForStrictLayout(state.agentPanes, config)
    for (const pane of panesByPriority) {
      if (canSplitPane(pane, followupDirection, minAgentPaneWidth)) {
        return { targetPaneId: pane.paneId, splitDirection: followupDirection }
      }
    }
    return null
  }
@@ -55,34 +95,44 @@ function findSplittableTarget(
    state.windowWidth,
    state.windowHeight,
    existingCount + 1,
    state.mainPane.width,
    minPaneWidth,
    config,
  )
  const mainPaneWidth = state.mainPane.width
  const mainPaneWidth = computeMainPaneWidth(state.windowWidth, config)
  const occupancy = buildOccupancy(state.agentPanes, plan, mainPaneWidth)
  const targetSlot = findFirstEmptySlot(occupancy, plan)

  const leftPane = occupancy.get(`${targetSlot.row}:${targetSlot.col - 1}`)
  if (leftPane && canSplitPane(leftPane, "-h", minPaneWidth)) {
  if (
    !isStrictMainVertical(config) &&
    leftPane &&
    canSplitPane(leftPane, "-h", minAgentPaneWidth)
  ) {
    return { targetPaneId: leftPane.paneId, splitDirection: "-h" }
  }

  const abovePane = occupancy.get(`${targetSlot.row - 1}:${targetSlot.col}`)
  if (abovePane && canSplitPane(abovePane, "-v", minPaneWidth)) {
  if (abovePane && canSplitPane(abovePane, "-v", minAgentPaneWidth)) {
    return { targetPaneId: abovePane.paneId, splitDirection: "-v" }
  }

  const splittablePanes = state.agentPanes
    .map((pane) => ({ pane, direction: getBestSplitDirection(pane, minPaneWidth) }))
    .filter(
      (item): item is { pane: TmuxPaneInfo; direction: SplitDirection } =>
|
||||
item.direction !== null,
|
||||
)
|
||||
.sort((a, b) => b.pane.width * b.pane.height - a.pane.width * a.pane.height)
|
||||
const panesByPosition = [...state.agentPanes].sort(
|
||||
(a, b) => a.left - b.left || a.top - b.top,
|
||||
)
|
||||
|
||||
const best = splittablePanes[0]
|
||||
if (best) {
|
||||
return { targetPaneId: best.pane.paneId, splitDirection: best.direction }
|
||||
for (const pane of panesByPosition) {
|
||||
if (canSplitPane(pane, "-v", minAgentPaneWidth)) {
|
||||
return { targetPaneId: pane.paneId, splitDirection: "-v" }
|
||||
}
|
||||
}
|
||||
|
||||
if (isStrictMainVertical(config)) {
|
||||
return null
|
||||
}
|
||||
|
||||
for (const pane of panesByPosition) {
|
||||
if (canSplitPane(pane, "-h", minAgentPaneWidth)) {
|
||||
return { targetPaneId: pane.paneId, splitDirection: "-h" }
|
||||
}
|
||||
}
|
||||
|
||||
return null
|
||||
@@ -90,7 +140,7 @@ function findSplittableTarget(
|
||||
|
||||
export function findSpawnTarget(
|
||||
state: WindowState,
|
||||
minPaneWidth: number = MIN_PANE_WIDTH,
|
||||
config: CapacityConfig,
|
||||
): SpawnTarget | null {
|
||||
return findSplittableTarget(state, minPaneWidth)
|
||||
return findSplittableTarget(state, config)
|
||||
}
|
||||
|
||||
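The direction helpers in the hunk above reduce to a small truth table: in tmux terms, `-v` places the new pane below and `-h` to the right, so strict `main-horizontal` splits the first agent pane off below the main pane and splits follow-ups to the right, while every other layout does the opposite. A standalone sketch of just those two helpers (the `CapacityConfig` shape is taken from this diff; the sample config values are made up):

```typescript
type SplitDirection = "-h" | "-v"

interface CapacityConfig {
  layout?: string
  mainPaneSize?: number
  mainPaneMinWidth: number
  agentPaneWidth: number
}

const isStrictMainHorizontal = (c: CapacityConfig): boolean => c.layout === "main-horizontal"

// First split taken off the main pane.
const getInitialSplitDirection = (c: CapacityConfig): SplitDirection =>
  isStrictMainHorizontal(c) ? "-v" : "-h"

// Subsequent splits inside the agent area go the other way.
const getStrictFollowupSplitDirection = (c: CapacityConfig): SplitDirection =>
  isStrictMainHorizontal(c) ? "-h" : "-v"

const base = { mainPaneMinWidth: 80, agentPaneWidth: 40 }
console.log(getInitialSplitDirection({ ...base, layout: "main-horizontal" })) // "-v"
console.log(getStrictFollowupSplitDirection({ ...base, layout: "main-horizontal" })) // "-h"
console.log(getInitialSplitDirection({ ...base, layout: "main-vertical" })) // "-h"
console.log(getStrictFollowupSplitDirection({ ...base, layout: "main-vertical" })) // "-v"
```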
```diff
@@ -1,6 +1,8 @@
 import { MIN_PANE_HEIGHT, MIN_PANE_WIDTH } from "./types"
+import type { CapacityConfig } from "./types"
 
 export const MAIN_PANE_RATIO = 0.5
+const DEFAULT_MAIN_PANE_SIZE = MAIN_PANE_RATIO * 100
 export const MAX_COLS = 2
 export const MAX_ROWS = 3
 export const MAX_GRID_SIZE = 4
@@ -8,3 +10,48 @@ export const DIVIDER_SIZE = 1
 
 export const MIN_SPLIT_WIDTH = 2 * MIN_PANE_WIDTH + DIVIDER_SIZE
 export const MIN_SPLIT_HEIGHT = 2 * MIN_PANE_HEIGHT + DIVIDER_SIZE
+
+function clamp(value: number, min: number, max: number): number {
+  return Math.max(min, Math.min(max, value))
+}
+
+export function getMainPaneSizePercent(config?: CapacityConfig): number {
+  return clamp(config?.mainPaneSize ?? DEFAULT_MAIN_PANE_SIZE, 20, 80)
+}
+
+export function computeMainPaneWidth(
+  windowWidth: number,
+  config?: CapacityConfig,
+): number {
+  const safeWindowWidth = Math.max(0, windowWidth)
+  if (!config) {
+    return Math.floor(safeWindowWidth * MAIN_PANE_RATIO)
+  }
+
+  const dividerWidth = DIVIDER_SIZE
+  const minMainPaneWidth = config?.mainPaneMinWidth ?? Math.floor(safeWindowWidth * MAIN_PANE_RATIO)
+  const minAgentPaneWidth = config?.agentPaneWidth ?? MIN_PANE_WIDTH
+  const percentageMainPaneWidth = Math.floor(
+    (safeWindowWidth - dividerWidth) * (getMainPaneSizePercent(config) / 100),
+  )
+  const maxMainPaneWidth = Math.max(0, safeWindowWidth - dividerWidth - minAgentPaneWidth)
+
+  return clamp(
+    Math.max(percentageMainPaneWidth, minMainPaneWidth),
+    0,
+    maxMainPaneWidth,
+  )
+}
+
+export function computeAgentAreaWidth(
+  windowWidth: number,
+  config?: CapacityConfig,
+): number {
+  const safeWindowWidth = Math.max(0, windowWidth)
+  if (!config) {
+    return Math.floor(safeWindowWidth * (1 - MAIN_PANE_RATIO))
+  }
+
+  const mainPaneWidth = computeMainPaneWidth(safeWindowWidth, config)
+  return Math.max(0, safeWindowWidth - DIVIDER_SIZE - mainPaneWidth)
+}
```
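The width math added above can be exercised standalone. A minimal sketch re-implementing `clamp`, `getMainPaneSizePercent`, and `computeMainPaneWidth` outside the plugin (`DIVIDER_SIZE = 1` and `MAIN_PANE_RATIO = 0.5` come from this diff; `MIN_PANE_WIDTH` and the sample config values are made up for illustration):

```typescript
const DIVIDER_SIZE = 1
const MAIN_PANE_RATIO = 0.5
const MIN_PANE_WIDTH = 40 // sample fallback; the real constant lives in ./types

interface CapacityConfig {
  layout?: string
  mainPaneSize?: number
  mainPaneMinWidth: number
  agentPaneWidth: number
}

const clamp = (v: number, min: number, max: number): number => Math.max(min, Math.min(max, v))

// The percentage is clamped to [20, 80] so neither side can be starved entirely.
function getMainPaneSizePercent(config?: CapacityConfig): number {
  return clamp(config?.mainPaneSize ?? MAIN_PANE_RATIO * 100, 20, 80)
}

function computeMainPaneWidth(windowWidth: number, config?: CapacityConfig): number {
  const safe = Math.max(0, windowWidth)
  if (!config) return Math.floor(safe * MAIN_PANE_RATIO)
  const minMain = config.mainPaneMinWidth ?? Math.floor(safe * MAIN_PANE_RATIO)
  const minAgent = config.agentPaneWidth ?? MIN_PANE_WIDTH
  // Percentage target, measured against the width left after the divider.
  const pct = Math.floor((safe - DIVIDER_SIZE) * (getMainPaneSizePercent(config) / 100))
  // Never grow past the point where the agent area drops below one minimum-width pane.
  const maxMain = Math.max(0, safe - DIVIDER_SIZE - minAgent)
  return clamp(Math.max(pct, minMain), 0, maxMain)
}

// 200-column window, 50% main pane: floor(199 * 0.5) = 99, within [0, 159].
console.log(computeMainPaneWidth(200, { mainPaneSize: 50, mainPaneMinWidth: 80, agentPaneWidth: 40 })) // 99
// mainPaneSize 95 is clamped to 80%: floor(199 * 0.8) = 159, the same as the 200 - 1 - 40 cap.
console.log(computeMainPaneWidth(200, { mainPaneSize: 95, mainPaneMinWidth: 80, agentPaneWidth: 40 })) // 159
```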
```diff
@@ -43,6 +43,8 @@ export interface SpawnDecision {
 }
 
 export interface CapacityConfig {
+  layout?: string
+  mainPaneSize?: number
   mainPaneMinWidth: number
   agentPaneWidth: number
 }
```
```diff
@@ -1,7 +1,7 @@
 import type { PluginInput } from "@opencode-ai/plugin"
 import type { BackgroundManager } from "../../features/background-agent"
 import { log } from "../../shared/logger"
-import { resolveInheritedPromptTools } from "../../shared"
+import { createInternalAgentTextPart, resolveInheritedPromptTools } from "../../shared"
 import { HOOK_NAME } from "./hook-name"
 import { BOULDER_CONTINUATION_PROMPT } from "./system-reminder-templates"
 import { resolveRecentPromptContextForSession } from "./recent-model-resolver"
@@ -53,7 +53,7 @@ export async function injectBoulderContinuation(input: {
       agent: agent ?? "atlas",
       ...(promptContext.model !== undefined ? { model: promptContext.model } : {}),
       ...(inheritedTools ? { tools: inheritedTools } : {}),
-      parts: [{ type: "text", text: prompt }],
+      parts: [createInternalAgentTextPart(prompt)],
     },
     query: { directory: ctx.directory },
   })
```
`src/hooks/beast-mode-system/hook.test.ts` (new file, 54 lines)

```ts
import { describe, expect, test } from "bun:test"
import { clearSessionModel, setSessionModel } from "../../shared/session-model-state"
import { createBeastModeSystemHook, BEAST_MODE_SYSTEM_PROMPT } from "./hook"

describe("beast-mode-system hook", () => {
  test("injects beast mode prompt for copilot gpt-4.1", async () => {
    //#given
    const sessionID = "ses_beast"
    setSessionModel(sessionID, { providerID: "github-copilot", modelID: "gpt-4.1" })
    const hook = createBeastModeSystemHook()
    const output = { system: [] as string[] }

    //#when
    await hook["experimental.chat.system.transform"]?.({ sessionID }, output)

    //#then
    expect(output.system[0]).toContain("Beast Mode")
    expect(output.system[0]).toContain(BEAST_MODE_SYSTEM_PROMPT.trim().slice(0, 20))

    clearSessionModel(sessionID)
  })

  test("does not inject for other models", async () => {
    //#given
    const sessionID = "ses_no_beast"
    setSessionModel(sessionID, { providerID: "quotio", modelID: "gpt-5.3-codex" })
    const hook = createBeastModeSystemHook()
    const output = { system: [] as string[] }

    //#when
    await hook["experimental.chat.system.transform"]?.({ sessionID }, output)

    //#then
    expect(output.system.length).toBe(0)

    clearSessionModel(sessionID)
  })

  test("avoids duplicate insertion", async () => {
    //#given
    const sessionID = "ses_dupe"
    setSessionModel(sessionID, { providerID: "github-copilot", modelID: "gpt-4.1" })
    const hook = createBeastModeSystemHook()
    const output = { system: [BEAST_MODE_SYSTEM_PROMPT] }

    //#when
    await hook["experimental.chat.system.transform"]?.({ sessionID }, output)

    //#then
    expect(output.system.length).toBe(1)

    clearSessionModel(sessionID)
  })
})
```
`src/hooks/beast-mode-system/hook.ts` (new file, 31 lines)

```ts
import { getSessionModel } from "../../shared/session-model-state"

export const BEAST_MODE_SYSTEM_PROMPT = `Beast Mode (Copilot GPT-4.1)

You are an autonomous coding agent. Execute the task end-to-end.
- Make a brief plan, then act.
- Prefer concrete edits and verification over speculation.
- Run relevant tests when feasible.
- Do not ask the user to perform actions you can do yourself.
- If blocked, state exactly what is needed to proceed.
- Keep responses concise and actionable.`

function isBeastModeModel(model: { providerID: string; modelID: string } | undefined): boolean {
  return model?.providerID === "github-copilot" && model.modelID === "gpt-4.1"
}

export function createBeastModeSystemHook() {
  return {
    "experimental.chat.system.transform": async (
      input: { sessionID: string },
      output: { system: string[] },
    ): Promise<void> => {
      const model = getSessionModel(input.sessionID)
      if (!isBeastModeModel(model)) return

      if (output.system.some((entry) => entry.includes("Beast Mode"))) return

      output.system.unshift(BEAST_MODE_SYSTEM_PROMPT)
    },
  }
}
```
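The hook's guard-then-unshift flow can be checked in isolation. A minimal sketch, with a hypothetical in-memory map standing in for the plugin's `session-model-state` module, and a synchronous function where the real hook is async:

```typescript
type Model = { providerID: string; modelID: string }

// Hypothetical stand-in for getSessionModel/setSessionModel.
const sessionModels = new Map<string, Model>()

const BEAST_MODE_SYSTEM_PROMPT = "Beast Mode (Copilot GPT-4.1)\nYou are an autonomous coding agent."

function isBeastModeModel(model: Model | undefined): boolean {
  return model?.providerID === "github-copilot" && model?.modelID === "gpt-4.1"
}

function transform(input: { sessionID: string }, output: { system: string[] }): void {
  const model = sessionModels.get(input.sessionID)
  if (!isBeastModeModel(model)) return
  // Dedupe guard: any existing entry mentioning "Beast Mode" blocks re-insertion.
  if (output.system.some((entry) => entry.includes("Beast Mode"))) return
  // unshift puts the prompt ahead of whatever base system prompt is already there.
  output.system.unshift(BEAST_MODE_SYSTEM_PROMPT)
}

sessionModels.set("ses_1", { providerID: "github-copilot", modelID: "gpt-4.1" })
const out = { system: ["base prompt"] }
transform({ sessionID: "ses_1" }, out)
transform({ sessionID: "ses_1" }, out) // second call is a no-op
console.log(out.system.length) // 2
console.log(out.system[0].startsWith("Beast Mode")) // true
```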
`src/hooks/beast-mode-system/index.ts` (new file, 1 line)

```ts
export { createBeastModeSystemHook, BEAST_MODE_SYSTEM_PROMPT } from "./hook"
```
```diff
@@ -3,7 +3,7 @@ import { loadClaudeHooksConfig } from "../config"
 import { loadPluginExtendedConfig } from "../config-loader"
 import { executeStopHooks, type StopContext } from "../stop"
 import type { PluginConfig } from "../types"
-import { isHookDisabled, log } from "../../../shared"
+import { createInternalAgentTextPart, isHookDisabled, log } from "../../../shared"
 import {
   clearSessionHookState,
   sessionErrorState,
@@ -94,7 +94,7 @@ export function createSessionEventHandler(ctx: PluginInput, config: PluginConfig
         .prompt({
           path: { id: sessionID },
           body: {
-            parts: [{ type: "text", text: stopResult.injectPrompt }],
+            parts: [createInternalAgentTextPart(stopResult.injectPrompt)],
           },
           query: { directory: ctx.directory },
         })
```
`src/hooks/hashline-edit-diff-enhancer/hook.ts` (new file, 106 lines)

```ts
import { log } from "../../shared"
import { generateUnifiedDiff, countLineDiffs } from "../../tools/hashline-edit/diff-utils"

interface HashlineEditDiffEnhancerConfig {
  hashline_edit?: { enabled: boolean }
}

type BeforeInput = { tool: string; sessionID: string; callID: string }
type BeforeOutput = { args: Record<string, unknown> }
type AfterInput = { tool: string; sessionID: string; callID: string }
type AfterOutput = { title: string; output: string; metadata: Record<string, unknown> }

const STALE_TIMEOUT_MS = 5 * 60 * 1000

const pendingCaptures = new Map<string, { content: string; filePath: string; storedAt: number }>()

function makeKey(sessionID: string, callID: string): string {
  return `${sessionID}:${callID}`
}

function cleanupStaleEntries(): void {
  const now = Date.now()
  for (const [key, entry] of pendingCaptures) {
    if (now - entry.storedAt > STALE_TIMEOUT_MS) {
      pendingCaptures.delete(key)
    }
  }
}

function isWriteTool(toolName: string): boolean {
  return toolName.toLowerCase() === "write"
}

function extractFilePath(args: Record<string, unknown>): string | undefined {
  const path = args.path ?? args.filePath ?? args.file_path
  return typeof path === "string" ? path : undefined
}

async function captureOldContent(filePath: string): Promise<string> {
  try {
    const file = Bun.file(filePath)
    if (await file.exists()) {
      return await file.text()
    }
  } catch {
    log("[hashline-edit-diff-enhancer] failed to read old content", { filePath })
  }
  return ""
}

export function createHashlineEditDiffEnhancerHook(config: HashlineEditDiffEnhancerConfig) {
  const enabled = config.hashline_edit?.enabled ?? false

  return {
    "tool.execute.before": async (input: BeforeInput, output: BeforeOutput) => {
      if (!enabled || !isWriteTool(input.tool)) return

      const filePath = extractFilePath(output.args)
      if (!filePath) return

      cleanupStaleEntries()
      const oldContent = await captureOldContent(filePath)
      pendingCaptures.set(makeKey(input.sessionID, input.callID), {
        content: oldContent,
        filePath,
        storedAt: Date.now(),
      })
    },

    "tool.execute.after": async (input: AfterInput, output: AfterOutput) => {
      if (!enabled || !isWriteTool(input.tool)) return

      const key = makeKey(input.sessionID, input.callID)
      const captured = pendingCaptures.get(key)
      if (!captured) return
      pendingCaptures.delete(key)

      const { content: oldContent, filePath } = captured

      let newContent: string
      try {
        newContent = await Bun.file(filePath).text()
      } catch {
        log("[hashline-edit-diff-enhancer] failed to read new content", { filePath })
        return
      }

      const { additions, deletions } = countLineDiffs(oldContent, newContent)
      const unifiedDiff = generateUnifiedDiff(oldContent, newContent, filePath)

      output.metadata.filediff = {
        file: filePath,
        path: filePath,
        before: oldContent,
        after: newContent,
        additions,
        deletions,
      }

      // TUI reads metadata.diff (unified diff string), not filediff object
      output.metadata.diff = unifiedDiff

      output.title = filePath
    },
  }
}
```
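The before/after pairing in the hook above hinges on a `sessionID:callID` key, consume-once semantics, and a stale-entry sweep. A minimal sketch of that lifecycle with the timestamp injected for testability (the real hook uses `Date.now()` and Bun file I/O):

```typescript
const STALE_TIMEOUT_MS = 5 * 60 * 1000

type Capture = { content: string; storedAt: number }
const pending = new Map<string, Capture>()

const makeKey = (sessionID: string, callID: string): string => `${sessionID}:${callID}`

// Drop captures whose paired "after" event never arrived (e.g. aborted tool calls).
function cleanupStale(now: number): void {
  for (const [key, entry] of pending) {
    if (now - entry.storedAt > STALE_TIMEOUT_MS) pending.delete(key)
  }
}

function capture(sessionID: string, callID: string, content: string, now: number): void {
  cleanupStale(now)
  pending.set(makeKey(sessionID, callID), { content, storedAt: now })
}

// Consuming deletes the entry, so a duplicate "after" event is a no-op.
function consume(sessionID: string, callID: string): Capture | undefined {
  const key = makeKey(sessionID, callID)
  const entry = pending.get(key)
  if (entry) pending.delete(key)
  return entry
}

capture("ses-1", "call-1", "old text", 0)
console.log(consume("ses-1", "call-1")?.content) // "old text"
console.log(consume("ses-1", "call-1")) // undefined: already consumed

capture("ses-1", "call-2", "stale", 0)
capture("ses-1", "call-3", "fresh", STALE_TIMEOUT_MS + 1) // sweep removes call-2
console.log(pending.has("ses-1:call-2")) // false
console.log(pending.has("ses-1:call-3")) // true
```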
`src/hooks/hashline-edit-diff-enhancer/index.test.ts` (new file, 306 lines)

```ts
import { describe, test, expect, beforeEach } from "bun:test"
import { createHashlineEditDiffEnhancerHook } from "./hook"

function makeInput(tool: string, callID = "call-1", sessionID = "ses-1") {
  return { tool, sessionID, callID }
}

function makeBeforeOutput(args: Record<string, unknown>) {
  return { args }
}

function makeAfterOutput(overrides?: Partial<{ title: string; output: string; metadata: Record<string, unknown> }>) {
  return {
    title: overrides?.title ?? "",
    output: overrides?.output ?? "Successfully applied 1 edit(s)",
    metadata: overrides?.metadata ?? { truncated: false },
  }
}

type FileDiffMetadata = {
  file: string
  path: string
  before: string
  after: string
  additions: number
  deletions: number
}

describe("hashline-edit-diff-enhancer", () => {
  let hook: ReturnType<typeof createHashlineEditDiffEnhancerHook>

  beforeEach(() => {
    hook = createHashlineEditDiffEnhancerHook({ hashline_edit: { enabled: true } })
  })

  describe("tool.execute.before", () => {
    test("captures old file content for write tool", async () => {
      const filePath = import.meta.dir + "/index.test.ts"
      const input = makeInput("write")
      const output = makeBeforeOutput({ path: filePath, edits: [] })

      await hook["tool.execute.before"](input, output)

      // given the hook ran without error, the old content should be stored internally
      // we verify in the after hook test that it produces filediff
    })

    test("ignores non-write tools", async () => {
      const input = makeInput("read")
      const output = makeBeforeOutput({ path: "/some/file.ts" })

      // when - should not throw
      await hook["tool.execute.before"](input, output)
    })
  })

  describe("tool.execute.after", () => {
    test("injects filediff metadata after write tool execution", async () => {
      // given - a temp file that we can modify between before/after
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-diff-test-${Date.now()}.ts`
      const oldContent = "line 1\nline 2\nline 3\n"
      await Bun.write(tmpFile, oldContent)

      const input = makeInput("write", "call-diff-1")
      const beforeOutput = makeBeforeOutput({ path: tmpFile, edits: [] })

      // when - before hook captures old content
      await hook["tool.execute.before"](input, beforeOutput)

      // when - file is modified (simulating write execution)
      const newContent = "line 1\nmodified line 2\nline 3\nnew line 4\n"
      await Bun.write(tmpFile, newContent)

      // when - after hook computes filediff
      const afterOutput = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput)

      // then - metadata should contain filediff
      const filediff = afterOutput.metadata.filediff as {
        file: string
        path: string
        before: string
        after: string
        additions: number
        deletions: number
      }
      expect(filediff).toBeDefined()
      expect(filediff.file).toBe(tmpFile)
      expect(filediff.path).toBe(tmpFile)
      expect(filediff.before).toBe(oldContent)
      expect(filediff.after).toBe(newContent)
      expect(filediff.additions).toBeGreaterThan(0)
      expect(filediff.deletions).toBeGreaterThan(0)

      // then - title should be set to the file path
      expect(afterOutput.title).toBe(tmpFile)

      // cleanup
      await Bun.file(tmpFile).exists() && (await import("fs/promises")).unlink(tmpFile)
    })

    test("does nothing for non-write tools", async () => {
      const input = makeInput("read", "call-other")
      const afterOutput = makeAfterOutput()
      const originalMetadata = { ...afterOutput.metadata }

      await hook["tool.execute.after"](input, afterOutput)

      // then - metadata unchanged
      expect(afterOutput.metadata).toEqual(originalMetadata)
    })

    test("does nothing when no before capture exists", async () => {
      // given - no before hook was called for this callID
      const input = makeInput("write", "call-no-before")
      const afterOutput = makeAfterOutput()
      const originalMetadata = { ...afterOutput.metadata }

      await hook["tool.execute.after"](input, afterOutput)

      // then - metadata unchanged (no filediff injected)
      expect(afterOutput.metadata.filediff).toBeUndefined()
    })

    test("cleans up stored content after consumption", async () => {
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-diff-cleanup-${Date.now()}.ts`
      await Bun.write(tmpFile, "original")

      const input = makeInput("write", "call-cleanup")
      await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))
      await Bun.write(tmpFile, "modified")

      // when - first after call consumes
      const afterOutput1 = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput1)
      expect(afterOutput1.metadata.filediff).toBeDefined()

      // when - second after call finds nothing
      const afterOutput2 = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput2)
      expect(afterOutput2.metadata.filediff).toBeUndefined()

      await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
    })

    test("handles file creation (empty old content)", async () => {
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-diff-create-${Date.now()}.ts`

      // given - file doesn't exist during before hook
      const input = makeInput("write", "call-create")
      await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))

      // when - file created during write
      await Bun.write(tmpFile, "new content\n")

      const afterOutput = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput)

      // then - filediff shows creation (before is empty)
      const filediff = afterOutput.metadata.filediff as FileDiffMetadata
      expect(filediff).toBeDefined()
      expect(filediff.before).toBe("")
      expect(filediff.after).toBe("new content\n")
      expect(filediff.additions).toBeGreaterThan(0)
      expect(filediff.deletions).toBe(0)

      await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
    })
  })

  describe("disabled config", () => {
    test("does nothing when hashline_edit is disabled", async () => {
      const disabledHook = createHashlineEditDiffEnhancerHook({ hashline_edit: { enabled: false } })
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-diff-disabled-${Date.now()}.ts`
      await Bun.write(tmpFile, "content")

      const input = makeInput("write", "call-disabled")
      await disabledHook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))
      await Bun.write(tmpFile, "modified")

      const afterOutput = makeAfterOutput()
      await disabledHook["tool.execute.after"](input, afterOutput)

      // then - no filediff injected
      expect(afterOutput.metadata.filediff).toBeUndefined()

      await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
    })
  })

  describe("write tool support", () => {
    test("captures filediff for write tool (path arg)", async () => {
      //#given - a temp file
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-diff-write-${Date.now()}.ts`
      const oldContent = "line 1\nline 2\n"
      await Bun.write(tmpFile, oldContent)

      const input = makeInput("write", "call-write-1")
      const beforeOutput = makeBeforeOutput({ path: tmpFile })

      //#when - before hook captures old content
      await hook["tool.execute.before"](input, beforeOutput)

      //#when - file is written
      const newContent = "line 1\nmodified line 2\nnew line 3\n"
      await Bun.write(tmpFile, newContent)

      //#when - after hook computes filediff
      const afterOutput = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput)

      //#then - metadata should contain filediff
      const filediff = afterOutput.metadata.filediff as { file: string; before: string; after: string; additions: number; deletions: number }
      expect(filediff).toBeDefined()
      expect(filediff.file).toBe(tmpFile)
      expect(filediff.additions).toBeGreaterThan(0)

      await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
    })

    test("captures filediff for write tool (filePath arg)", async () => {
      //#given
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-diff-write-fp-${Date.now()}.ts`
      await Bun.write(tmpFile, "original content\n")

      const input = makeInput("write", "call-write-fp")

      //#when - before hook uses filePath arg
      await hook["tool.execute.before"](input, makeBeforeOutput({ filePath: tmpFile }))
      await Bun.write(tmpFile, "new content\n")

      const afterOutput = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput)

      //#then
      const filediff = afterOutput.metadata.filediff as FileDiffMetadata | undefined
      expect(filediff).toBeDefined()

      await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
    })
  })

  describe("raw content in filediff", () => {
    test("filediff.before and filediff.after are raw file content", async () => {
      //#given - a temp file
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-diff-format-${Date.now()}.ts`
      const oldContent = "const x = 1\nconst y = 2\n"
      await Bun.write(tmpFile, oldContent)

      const input = makeInput("write", "call-hashline-format")
      await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))

      //#when - file is modified and after hook runs
      const newContent = "const x = 1\nconst y = 42\n"
      await Bun.write(tmpFile, newContent)

      const afterOutput = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput)

      //#then - before and after should be raw file content
      const filediff = afterOutput.metadata.filediff as { before: string; after: string }
      expect(filediff.before).toBe(oldContent)
      expect(filediff.after).toBe(newContent)

      await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
    })
  })

  describe("TUI diff support (metadata.diff)", () => {
    test("injects unified diff string in metadata.diff for write tool TUI", async () => {
      //#given - a temp file
      const tmpDir = (await import("os")).tmpdir()
      const tmpFile = `${tmpDir}/hashline-tui-diff-${Date.now()}.ts`
      const oldContent = "line 1\nline 2\nline 3\n"
      await Bun.write(tmpFile, oldContent)

      const input = makeInput("write", "call-tui-diff")
      await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))

      //#when - file is modified
      const newContent = "line 1\nmodified line 2\nline 3\n"
      await Bun.write(tmpFile, newContent)

      const afterOutput = makeAfterOutput()
      await hook["tool.execute.after"](input, afterOutput)

      //#then - metadata.diff should be a unified diff string
      expect(afterOutput.metadata.diff).toBeDefined()
      expect(typeof afterOutput.metadata.diff).toBe("string")
      expect(afterOutput.metadata.diff).toContain("---")
      expect(afterOutput.metadata.diff).toContain("+++")
      expect(afterOutput.metadata.diff).toContain("@@")
      expect(afterOutput.metadata.diff).toContain("-line 2")
      expect(afterOutput.metadata.diff).toContain("+modified line 2")

      await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
    })
  })
})
```
`src/hooks/hashline-edit-diff-enhancer/index.ts` (new file, 1 line)

```ts
export { createHashlineEditDiffEnhancerHook } from "./hook"
```
@@ -1,16 +1,23 @@
|
||||
import type { PluginInput } from "@opencode-ai/plugin"
|
||||
import { computeLineHash } from "../../tools/hashline-edit/hash-computation"
|
||||
import { toHashlineContent } from "../../tools/hashline-edit/diff-utils"
|
||||
|
||||
interface HashlineReadEnhancerConfig {
|
||||
hashline_edit?: { enabled: boolean }
|
||||
}
|
||||
|
||||
const READ_LINE_PATTERN = /^(\d+): (.*)$/
|
||||
const CONTENT_OPEN_TAG = "<content>"
|
||||
const CONTENT_CLOSE_TAG = "</content>"
|
||||
|
||||
function isReadTool(toolName: string): boolean {
|
||||
return toolName.toLowerCase() === "read"
|
||||
}
|
||||
|
||||
function isWriteTool(toolName: string): boolean {
|
||||
return toolName.toLowerCase() === "write"
|
||||
}
|
||||
|
||||
function shouldProcess(config: HashlineReadEnhancerConfig): boolean {
|
||||
return config.hashline_edit?.enabled ?? false
|
||||
}
|
||||
@@ -28,18 +35,73 @@ function transformLine(line: string): string {
|
||||
const lineNumber = parseInt(match[1], 10)
|
||||
const content = match[2]
|
||||
const hash = computeLineHash(lineNumber, content)
|
||||
return `${lineNumber}:${hash}|${content}`
|
||||
return `${lineNumber}#${hash}:${content}`
|
||||
}
|
||||
|
||||
function transformOutput(output: string): string {
|
||||
if (!output) {
|
||||
return output
|
||||
}
|
||||
if (!isTextFile(output)) {
|
||||
|
||||
const lines = output.split("\n")
|
||||
const contentStart = lines.indexOf(CONTENT_OPEN_TAG)
|
||||
const contentEnd = lines.indexOf(CONTENT_CLOSE_TAG)
|
||||
|
||||
if (contentStart === -1 || contentEnd === -1 || contentEnd <= contentStart + 1) {
|
||||
return output
|
||||
}
|
||||
const lines = output.split("\n")
|
||||
return lines.map(transformLine).join("\n")
|
||||
|
||||
const fileLines = lines.slice(contentStart + 1, contentEnd)
|
||||
if (!isTextFile(fileLines[0] ?? "")) {
|
||||
return output
|
||||
}
|
||||
|
||||
  const result: string[] = []
  for (const line of fileLines) {
    if (!READ_LINE_PATTERN.test(line)) {
      result.push(...fileLines.slice(result.length))
      break
    }
    result.push(transformLine(line))
  }

  return [...lines.slice(0, contentStart + 1), ...result, ...lines.slice(contentEnd)].join("\n")
}

function extractFilePath(metadata: unknown): string | undefined {
  if (!metadata || typeof metadata !== "object") {
    return undefined
  }

  const objectMeta = metadata as Record<string, unknown>
  const candidates = [objectMeta.filepath, objectMeta.filePath, objectMeta.path, objectMeta.file]
  for (const candidate of candidates) {
    if (typeof candidate === "string" && candidate.length > 0) {
      return candidate
    }
  }

  return undefined
}

async function appendWriteHashlineOutput(output: { output: string; metadata: unknown }): Promise<void> {
  if (output.output.includes("Updated file (LINE#ID:content):")) {
    return
  }

  const filePath = extractFilePath(output.metadata)
  if (!filePath) {
    return
  }

  const file = Bun.file(filePath)
  if (!(await file.exists())) {
    return
  }

  const content = await file.text()
  const hashlined = toHashlineContent(content)
  output.output = `${output.output}\n\nUpdated file (LINE#ID:content):\n${hashlined}`
}

export function createHashlineReadEnhancerHook(

@@ -52,6 +114,9 @@ export function createHashlineReadEnhancerHook(
      output: { title: string; output: string; metadata: unknown }
    ) => {
      if (!isReadTool(input.tool)) {
        if (isWriteTool(input.tool) && typeof output.output === "string" && shouldProcess(config)) {
          await appendWriteHashlineOutput(output)
        }
        return
      }
      if (typeof output.output !== "string") {
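The read-enhancer above rewrites each numbered `N: content` line into `N#ID:content`, where `ID` is a two-character hash over the alphabet the tests match against (`ZPMQVRWSNKTXJBYH`). A minimal self-contained sketch of that transform — the `lineId` hash here is hypothetical; only the output shape is taken from the diff's tests, and the real `transformLine` lives in the hook module:

```typescript
// The alphabet the tests match hashline IDs against.
const ALPHABET = "ZPMQVRWSNKTXJBYH"

// Hypothetical per-line hash: fold character codes into two alphabet symbols.
// The plugin's real hash may differ; this only illustrates the technique.
function lineId(content: string): string {
  let h = 0
  for (let i = 0; i < content.length; i++) {
    h = (h * 31 + content.charCodeAt(i)) >>> 0
  }
  const first = ALPHABET[h % ALPHABET.length]
  const second = ALPHABET[Math.floor(h / ALPHABET.length) % ALPHABET.length]
  return `${first}${second}`
}

// Rewrite "N: content" read-output lines to "N#ID:content"; pass through
// anything that is not a numbered line (binary data, reminders, blank lines).
const READ_LINE = /^(\d+): (.*)$/
function toHashline(line: string): string {
  const match = READ_LINE.exec(line)
  if (!match) return line
  return `${match[1]}#${lineId(match[2])}:${match[2]}`
}
```

Stable per-line IDs let a later edit tool address lines by `LINE#ID` instead of bare line numbers, so edits fail fast when the file has drifted.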
@@ -1,248 +1,93 @@
-import { describe, it, expect, beforeEach } from "bun:test"
-import { createHashlineReadEnhancerHook } from "./hook"
+import { describe, it, expect } from "bun:test"
 import type { PluginInput } from "@opencode-ai/plugin"
+import { createHashlineReadEnhancerHook } from "./hook"
+import * as fs from "node:fs"
+import * as os from "node:os"
+import * as path from "node:path"

-//#given - Test setup helpers
-function createMockContext(): PluginInput {
+function mockCtx(): PluginInput {
   return {
-    client: {} as unknown as PluginInput["client"],
+    client: {} as PluginInput["client"],
     directory: "/test",
     project: "/test" as unknown as PluginInput["project"],
     worktree: "/test",
     serverUrl: "http://localhost" as unknown as PluginInput["serverUrl"],
     $: {} as PluginInput["$"],
   }
 }

-interface TestConfig {
-  hashline_edit?: { enabled: boolean }
-}
+describe("hashline-read-enhancer", () => {
+  it("hashifies only file content lines in read output", async () => {
+    //#given
+    const hook = createHashlineReadEnhancerHook(mockCtx(), { hashline_edit: { enabled: true } })
+    const input = { tool: "read", sessionID: "s", callID: "c" }
+    const output = {
+      title: "demo.ts",
+      output: [
+        "<path>/tmp/demo.ts</path>",
+        "<type>file</type>",
+        "<content>",
+        "1: const x = 1",
+        "2: const y = 2",
+        "",
+        "(End of file - total 2 lines)",
+        "</content>",
+        "",
+        "<system-reminder>",
+        "1: keep this unchanged",
+        "</system-reminder>",
+      ].join("\n"),
+      metadata: {},
+    }

-function createMockConfig(enabled: boolean): TestConfig {
-  return {
-    hashline_edit: { enabled },
-  }
-}
+    //#when
+    await hook["tool.execute.after"](input, output)

-describe("createHashlineReadEnhancerHook", () => {
-  let mockCtx: PluginInput
-  const sessionID = "test-session-123"
-
-  beforeEach(() => {
-    mockCtx = createMockContext()
+    //#then
+    const lines = output.output.split("\n")
+    expect(lines[3]).toMatch(/^1#[ZPMQVRWSNKTXJBYH]{2}:const x = 1$/)
+    expect(lines[4]).toMatch(/^2#[ZPMQVRWSNKTXJBYH]{2}:const y = 2$/)
+    expect(lines[10]).toBe("1: keep this unchanged")
   })

-  describe("tool name matching", () => {
-    it("should process 'read' tool (lowercase)", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "1: hello\n2: world", metadata: {} }
+  it("appends LINE#ID output for write tool using metadata filepath", async () => {
+    //#given
+    const hook = createHashlineReadEnhancerHook(mockCtx(), { hashline_edit: { enabled: true } })
+    const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "hashline-write-"))
+    const filePath = path.join(tempDir, "demo.ts")
+    fs.writeFileSync(filePath, "const x = 1\nconst y = 2")
+    const input = { tool: "write", sessionID: "s", callID: "c" }
+    const output = {
+      title: "write",
+      output: "Wrote file successfully.",
+      metadata: { filepath: filePath },
+    }

-      //#when
-      await hook["tool.execute.after"](input, output)
+    //#when
+    await hook["tool.execute.after"](input, output)

-      //#then
-      expect(output.output).toContain("1:")
-      expect(output.output).toContain("|")
-    })
+    //#then
+    expect(output.output).toContain("Updated file (LINE#ID:content):")
+    expect(output.output).toMatch(/1#[ZPMQVRWSNKTXJBYH]{2}:const x = 1/)
+    expect(output.output).toMatch(/2#[ZPMQVRWSNKTXJBYH]{2}:const y = 2/)

-    it("should process 'Read' tool (mixed case)", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "Read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "1: hello\n2: world", metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toContain("|")
-    })
-
-    it("should process 'READ' tool (uppercase)", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "READ", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "1: hello\n2: world", metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toContain("|")
-    })
-
-    it("should skip non-read tools", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "edit", sessionID, callID: "call-1" }
-      const originalOutput = "1: hello\n2: world"
-      const output = { title: "Edit", output: originalOutput, metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toBe(originalOutput)
-    })
+    fs.rmSync(tempDir, { recursive: true, force: true })
   })

-  describe("config flag check", () => {
-    it("should skip when hashline_edit is disabled", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(false))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const originalOutput = "1: hello\n2: world"
-      const output = { title: "Read", output: originalOutput, metadata: {} }
+  it("skips when feature is disabled", async () => {
+    //#given
+    const hook = createHashlineReadEnhancerHook(mockCtx(), { hashline_edit: { enabled: false } })
+    const input = { tool: "read", sessionID: "s", callID: "c" }
+    const output = {
+      title: "demo.ts",
+      output: "<content>\n1: const x = 1\n</content>",
+      metadata: {},
+    }

-      //#when
-      await hook["tool.execute.after"](input, output)
+    //#when
+    await hook["tool.execute.after"](input, output)

-      //#then
-      expect(output.output).toBe(originalOutput)
-    })
-
-    it("should skip when hashline_edit config is missing", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, {})
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const originalOutput = "1: hello\n2: world"
-      const output = { title: "Read", output: originalOutput, metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toBe(originalOutput)
-    })
-  })
-
-  describe("output transformation", () => {
-    it("should transform 'N: content' format to 'N:HASH|content'", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "1: function hello() {\n2: console.log('world')\n3: }", metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      const lines = output.output.split("\n")
-      expect(lines[0]).toMatch(/^1:[a-f0-9]{2}\|function hello\(\) \{$/)
-      expect(lines[1]).toMatch(/^2:[a-f0-9]{2}\| console\.log\('world'\)$/)
-      expect(lines[2]).toMatch(/^3:[a-f0-9]{2}\|\}$/)
-    })
-
-    it("should handle empty output", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "", metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toBe("")
-    })
-
-    it("should handle single line", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "1: const x = 1", metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toMatch(/^1:[a-f0-9]{2}\|const x = 1$/)
-    })
-  })
-
-  describe("binary file detection", () => {
-    it("should skip binary files (no line number prefix)", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const originalOutput = "PNG\x89\x50\x4E\x47\x0D\x0A\x1A\x0A"
-      const output = { title: "Read", output: originalOutput, metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toBe(originalOutput)
-    })
-
-    it("should skip if first line doesn't match pattern", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const originalOutput = "some binary data\nmore data"
-      const output = { title: "Read", output: originalOutput, metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toBe(originalOutput)
-    })
-
-    it("should process if first line matches 'N: ' pattern", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "1: valid line\n2: another line", metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toContain("|")
-    })
-  })
-
-  describe("edge cases", () => {
-    it("should handle non-string output gracefully", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: null as unknown as string, metadata: {} }
-
-      //#when - should not throw
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toBeNull()
-    })
-
-    it("should handle lines with no content after colon", async () => {
-      //#given
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: "1: hello\n2: \n3: world", metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      const lines = output.output.split("\n")
-      expect(lines[0]).toMatch(/^1:[a-f0-9]{2}\|hello$/)
-      expect(lines[1]).toMatch(/^2:[a-f0-9]{2}\|$/)
-      expect(lines[2]).toMatch(/^3:[a-f0-9]{2}\|world$/)
-    })
-
-    it("should handle very long lines", async () => {
-      //#given
-      const longContent = "a".repeat(1000)
-      const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
-      const input = { tool: "read", sessionID, callID: "call-1" }
-      const output = { title: "Read", output: `1: ${longContent}`, metadata: {} }
-
-      //#when
-      await hook["tool.execute.after"](input, output)
-
-      //#then
-      expect(output.output).toMatch(/^1:[a-f0-9]{2}\|a+$/)
-    })
+    //#then
+    expect(output.output).toBe("<content>\n1: const x = 1\n</content>")
   })
 })
@@ -14,6 +14,7 @@ export { createEmptyTaskResponseDetectorHook } from "./empty-task-response-detec
export { createAnthropicContextWindowLimitRecoveryHook, type AnthropicContextWindowLimitRecoveryOptions } from "./anthropic-context-window-limit-recovery";

export { createThinkModeHook } from "./think-mode";
export { createModelFallbackHook, setPendingModelFallback, clearPendingModelFallback, type ModelFallbackState } from "./model-fallback/hook";
export { createClaudeCodeHooksHook } from "./claude-code-hooks";
export { createRulesInjectorHook } from "./rules-injector";
export { createBackgroundNotificationHook } from "./background-notification"
@@ -28,9 +29,9 @@ export { createThinkingBlockValidatorHook } from "./thinking-block-validator";
export { createCategorySkillReminderHook } from "./category-skill-reminder";
export { createRalphLoopHook, type RalphLoopHook } from "./ralph-loop";
export { createNoSisyphusGptHook } from "./no-sisyphus-gpt";
export { createNoHephaestusNonGptHook } from "./no-hephaestus-non-gpt";
export { createAutoSlashCommandHook } from "./auto-slash-command";
export { createEditErrorRecoveryHook } from "./edit-error-recovery";
export { createJsonErrorRecoveryHook } from "./json-error-recovery";
export { createPrometheusMdOnlyHook } from "./prometheus-md-only";
export { createSisyphusJuniorNotepadHook } from "./sisyphus-junior-notepad";
export { createTaskResumeInfoHook } from "./task-resume-info";
@@ -46,4 +47,4 @@ export { createPreemptiveCompactionHook } from "./preemptive-compaction";
export { createTasksTodowriteDisablerHook } from "./tasks-todowrite-disabler";
export { createWriteExistingFileGuardHook } from "./write-existing-file-guard";
export { createHashlineReadEnhancerHook } from "./hashline-read-enhancer";

export { createBeastModeSystemHook, BEAST_MODE_SYSTEM_PROMPT } from "./beast-mode-system";
@@ -74,9 +74,7 @@ export function createKeywordDetectorHook(ctx: PluginInput, _collector?: Context
       if (hasUltrawork) {
         log(`[keyword-detector] Ultrawork mode activated`, { sessionID: input.sessionID })

-        if (output.message.variant === undefined) {
-          output.message.variant = "max"
-        }
+        output.message.variant = "max"

         ctx.client.tui
           .showToast({
@@ -219,8 +219,8 @@ describe("keyword-detector session filtering", () => {
     expect(toastCalls).toContain("Ultrawork Mode Activated")
   })

-  test("should not override existing variant", async () => {
-    // given - main session set with pre-existing variant
+  test("should override existing variant when ultrawork keyword is used", async () => {
+    // given - main session set with pre-existing variant from TUI
     setMainSession("main-123")

     const toastCalls: string[] = []
@@ -236,8 +236,8 @@ describe("keyword-detector session filtering", () => {
       output
     )

-    // then - existing variant should remain
-    expect(output.message.variant).toBe("low")
+    // then - ultrawork should override TUI variant to max
+    expect(output.message.variant).toBe("max")
     expect(toastCalls).toContain("Ultrawork Mode Activated")
   })
 })
141 src/hooks/model-fallback/hook.test.ts Normal file
@@ -0,0 +1,141 @@
import { beforeEach, describe, expect, test } from "bun:test"

import {
  clearPendingModelFallback,
  createModelFallbackHook,
  setPendingModelFallback,
} from "./hook"

describe("model fallback hook", () => {
  beforeEach(() => {
    clearPendingModelFallback("ses_model_fallback_main")
  })

  test("applies pending fallback on chat.message by overriding model", async () => {
    //#given
    const hook = createModelFallbackHook() as unknown as {
      "chat.message"?: (
        input: { sessionID: string },
        output: { message: Record<string, unknown>; parts: Array<{ type: string; text?: string }> },
      ) => Promise<void>
    }

    const set = setPendingModelFallback(
      "ses_model_fallback_main",
      "Sisyphus (Ultraworker)",
      "quotio",
      "claude-opus-4-6-thinking",
    )
    expect(set).toBe(true)

    const output = {
      message: {
        model: { providerID: "quotio", modelID: "claude-opus-4-6-thinking" },
        variant: "max",
      },
      parts: [{ type: "text", text: "continue" }],
    }

    //#when
    await hook["chat.message"]?.(
      { sessionID: "ses_model_fallback_main" },
      output,
    )

    //#then
    expect(output.message["model"]).toEqual({
      providerID: "quotio",
      modelID: "claude-opus-4-6",
    })
  })

  test("preserves fallback progression across repeated session.error retries", async () => {
    //#given
    const hook = createModelFallbackHook() as unknown as {
      "chat.message"?: (
        input: { sessionID: string },
        output: { message: Record<string, unknown>; parts: Array<{ type: string; text?: string }> },
      ) => Promise<void>
    }
    const sessionID = "ses_model_fallback_main"

    expect(
      setPendingModelFallback(sessionID, "Sisyphus (Ultraworker)", "quotio", "claude-opus-4-6-thinking"),
    ).toBe(true)

    const firstOutput = {
      message: {
        model: { providerID: "quotio", modelID: "claude-opus-4-6-thinking" },
        variant: "max",
      },
      parts: [{ type: "text", text: "continue" }],
    }

    //#when - first retry is applied
    await hook["chat.message"]?.({ sessionID }, firstOutput)

    //#then
    expect(firstOutput.message["model"]).toEqual({
      providerID: "quotio",
      modelID: "claude-opus-4-6",
    })

    //#when - second error re-arms fallback and should advance to next entry
    expect(
      setPendingModelFallback(sessionID, "Sisyphus (Ultraworker)", "quotio", "claude-opus-4-6"),
    ).toBe(true)

    const secondOutput = {
      message: {
        model: { providerID: "quotio", modelID: "claude-opus-4-6" },
      },
      parts: [{ type: "text", text: "continue" }],
    }
    await hook["chat.message"]?.({ sessionID }, secondOutput)

    //#then - chain should progress to entry[1], not repeat entry[0]
    expect(secondOutput.message["model"]).toEqual({
      providerID: "quotio",
      modelID: "gpt-5.3-codex",
    })
    expect(secondOutput.message["variant"]).toBe("high")
  })

  test("shows toast when fallback is applied", async () => {
    //#given
    const toastCalls: Array<{ title: string; message: string }> = []
    const hook = createModelFallbackHook({
      toast: async ({ title, message }) => {
        toastCalls.push({ title, message })
      },
    }) as unknown as {
      "chat.message"?: (
        input: { sessionID: string },
        output: { message: Record<string, unknown>; parts: Array<{ type: string; text?: string }> },
      ) => Promise<void>
    }

    const set = setPendingModelFallback(
      "ses_model_fallback_toast",
      "Sisyphus (Ultraworker)",
      "quotio",
      "claude-opus-4-6-thinking",
    )
    expect(set).toBe(true)

    const output = {
      message: {
        model: { providerID: "quotio", modelID: "claude-opus-4-6-thinking" },
        variant: "max",
      },
      parts: [{ type: "text", text: "continue" }],
    }

    //#when
    await hook["chat.message"]?.({ sessionID: "ses_model_fallback_toast" }, output)

    //#then
    expect(toastCalls.length).toBe(1)
    expect(toastCalls[0]?.title).toBe("Model fallback")
  })
})
246 src/hooks/model-fallback/hook.ts Normal file
@@ -0,0 +1,246 @@
import type { FallbackEntry } from "../../shared/model-requirements"
import { getAgentConfigKey } from "../../shared/agent-display-names"
import { AGENT_MODEL_REQUIREMENTS } from "../../shared/model-requirements"
import { readConnectedProvidersCache, readProviderModelsCache } from "../../shared/connected-providers-cache"
import { selectFallbackProvider } from "../../shared/model-error-classifier"
import { log } from "../../shared/logger"
import { getTaskToastManager } from "../../features/task-toast-manager"
import type { ChatMessageInput, ChatMessageHandlerOutput } from "../../plugin/chat-message"

type FallbackToast = (input: {
  title: string
  message: string
  variant?: "info" | "success" | "warning" | "error"
  duration?: number
}) => void | Promise<void>

type FallbackCallback = (input: {
  sessionID: string
  providerID: string
  modelID: string
  variant?: string
}) => void | Promise<void>

export type ModelFallbackState = {
  providerID: string
  modelID: string
  fallbackChain: FallbackEntry[]
  attemptCount: number
  pending: boolean
}

/**
 * Map of sessionID -> pending model fallback state.
 * When a model error occurs, we store the fallback info here.
 * The next chat.message call will use this to switch to the fallback model.
 */
const pendingModelFallbacks = new Map<string, ModelFallbackState>()
const lastToastKey = new Map<string, string>()
const sessionFallbackChains = new Map<string, FallbackEntry[]>()

export function setSessionFallbackChain(sessionID: string, fallbackChain: FallbackEntry[] | undefined): void {
  if (!sessionID) return
  if (!fallbackChain || fallbackChain.length === 0) {
    sessionFallbackChains.delete(sessionID)
    return
  }
  sessionFallbackChains.set(sessionID, fallbackChain)
}

export function clearSessionFallbackChain(sessionID: string): void {
  sessionFallbackChains.delete(sessionID)
}

/**
 * Sets a pending model fallback for a session.
 * Called when a model error is detected in the session.error handler.
 */
export function setPendingModelFallback(
  sessionID: string,
  agentName: string,
  currentProviderID: string,
  currentModelID: string,
): boolean {
  const agentKey = getAgentConfigKey(agentName)
  const requirements = AGENT_MODEL_REQUIREMENTS[agentKey]
  const sessionFallback = sessionFallbackChains.get(sessionID)
  const fallbackChain = sessionFallback && sessionFallback.length > 0
    ? sessionFallback
    : requirements?.fallbackChain

  if (!fallbackChain || fallbackChain.length === 0) {
    log("[model-fallback] No fallback chain for agent: " + agentName + " (key: " + agentKey + ")")
    return false
  }

  const existing = pendingModelFallbacks.get(sessionID)

  if (existing) {
    // Preserve progression across repeated session.error retries in same session.
    // We only mark the next turn as pending fallback application.
    existing.providerID = currentProviderID
    existing.modelID = currentModelID
    existing.pending = true
    if (existing.attemptCount >= existing.fallbackChain.length) {
      log("[model-fallback] Fallback chain exhausted for session: " + sessionID)
      return false
    }
    log("[model-fallback] Re-armed pending fallback for session: " + sessionID)
    return true
  }

  const state: ModelFallbackState = {
    providerID: currentProviderID,
    modelID: currentModelID,
    fallbackChain,
    attemptCount: 0,
    pending: true,
  }

  pendingModelFallbacks.set(sessionID, state)
  log("[model-fallback] Set pending fallback for session: " + sessionID + ", agent: " + agentName)
  return true
}

/**
 * Gets the next fallback model for a session.
 * Increments attemptCount each time called.
 */
export function getNextFallback(
  sessionID: string,
): { providerID: string; modelID: string; variant?: string } | null {
  const state = pendingModelFallbacks.get(sessionID)
  if (!state) return null

  if (!state.pending) return null

  const { fallbackChain } = state

  const providerModelsCache = readProviderModelsCache()
  const connectedProviders = providerModelsCache?.connected ?? readConnectedProvidersCache()
  const connectedSet = connectedProviders ? new Set(connectedProviders) : null

  const isReachable = (entry: FallbackEntry): boolean => {
    if (!connectedSet) return true

    // Gate only on provider connectivity. Provider model lists can be stale/incomplete,
    // especially after users manually add models to opencode.json.
    return entry.providers.some((p) => connectedSet.has(p))
  }

  while (state.attemptCount < fallbackChain.length) {
    const attemptCount = state.attemptCount
    const fallback = fallbackChain[attemptCount]
    state.attemptCount++

    if (!isReachable(fallback)) {
      log("[model-fallback] Skipping unreachable fallback for session: " + sessionID + ", attempt: " + attemptCount + ", model: " + fallback.model)
      continue
    }

    const providerID = selectFallbackProvider(fallback.providers, state.providerID)
    state.pending = false

    log("[model-fallback] Using fallback for session: " + sessionID + ", attempt: " + attemptCount + ", model: " + fallback.model)

    return {
      providerID,
      modelID: fallback.model,
      variant: fallback.variant,
    }
  }

  log("[model-fallback] No more fallbacks for session: " + sessionID)
  pendingModelFallbacks.delete(sessionID)
  return null
}

/**
 * Clears the pending fallback for a session.
 * Called after fallback is successfully applied.
 */
export function clearPendingModelFallback(sessionID: string): void {
  pendingModelFallbacks.delete(sessionID)
  lastToastKey.delete(sessionID)
}

/**
 * Checks if there's a pending fallback for a session.
 */
export function hasPendingModelFallback(sessionID: string): boolean {
  const state = pendingModelFallbacks.get(sessionID)
  return state?.pending === true
}

/**
 * Gets the current fallback state for a session (for debugging).
 */
export function getFallbackState(sessionID: string): ModelFallbackState | undefined {
  return pendingModelFallbacks.get(sessionID)
}

/**
 * Creates a chat.message hook that applies model fallbacks when pending.
 */
export function createModelFallbackHook(args?: { toast?: FallbackToast; onApplied?: FallbackCallback }) {
  const toast = args?.toast
  const onApplied = args?.onApplied

  return {
    "chat.message": async (
      input: ChatMessageInput,
      output: ChatMessageHandlerOutput,
    ): Promise<void> => {
      const { sessionID } = input
      if (!sessionID) return

      const fallback = getNextFallback(sessionID)
      if (!fallback) return

      output.message["model"] = {
        providerID: fallback.providerID,
        modelID: fallback.modelID,
      }
      if (fallback.variant !== undefined) {
        output.message["variant"] = fallback.variant
      } else {
        delete output.message["variant"]
      }
      if (toast) {
        const key = `${sessionID}:${fallback.providerID}/${fallback.modelID}:${fallback.variant ?? ""}`
        if (lastToastKey.get(sessionID) !== key) {
          lastToastKey.set(sessionID, key)
          const variantLabel = fallback.variant ? ` (${fallback.variant})` : ""
          await Promise.resolve(
            toast({
              title: "Model fallback",
              message: `Using ${fallback.providerID}/${fallback.modelID}${variantLabel}`,
              variant: "warning",
              duration: 5000,
            }),
          )
        }
      }
      if (onApplied) {
        await Promise.resolve(
          onApplied({
            sessionID,
            providerID: fallback.providerID,
            modelID: fallback.modelID,
            variant: fallback.variant,
          }),
        )
      }

      const toastManager = getTaskToastManager()
      if (toastManager) {
        const variantLabel = fallback.variant ? ` (${fallback.variant})` : ""
        toastManager.updateTaskModelBySession(sessionID, {
          model: `${fallback.providerID}/${fallback.modelID}${variantLabel}`,
          type: "runtime-fallback",
        })
      }
      log("[model-fallback] Applied fallback model: " + JSON.stringify(fallback))
    },
  }
}
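The fallback module above is a small state machine: `session.error` arms a pending fallback, the next `chat.message` consumes exactly one chain entry, and repeated errors advance `attemptCount` rather than restarting from the top. A simplified self-contained model of that arm/consume/advance mechanic — the chain entries here are illustrative, and the real code additionally checks provider connectivity and per-agent requirements:

```typescript
// Simplified model of the pending-fallback progression (assumed shape;
// mirrors setPendingModelFallback/getNextFallback, minus provider gating).
type Entry = { providerID: string; modelID: string }

const chain: Entry[] = [
  { providerID: "quotio", modelID: "claude-opus-4-6" }, // illustrative entries
  { providerID: "quotio", modelID: "gpt-5.3-codex" },
]

const state = { attemptCount: 0, pending: false }

// session.error handler: arm (or re-arm) the fallback for the next turn.
function arm(): boolean {
  if (state.attemptCount >= chain.length) return false // chain exhausted
  state.pending = true
  return true
}

// chat.message handler: consume the next chain entry exactly once per arm.
function nextFallback(): Entry | null {
  if (!state.pending) return null
  state.pending = false
  return chain[state.attemptCount++] ?? null
}
```

Keeping `attemptCount` separate from `pending` is what lets a second error in the same session advance to `gpt-5.3-codex` instead of retrying `claude-opus-4-6` forever, which is exactly what the "preserves fallback progression" test asserts.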
54 src/hooks/no-hephaestus-non-gpt/hook.ts Normal file
@@ -0,0 +1,54 @@
import type { PluginInput } from "@opencode-ai/plugin"
import { isGptModel } from "../../agents/types"
import { getSessionAgent, updateSessionAgent } from "../../features/claude-code-session-state"
import { log } from "../../shared"
import { getAgentConfigKey, getAgentDisplayName } from "../../shared/agent-display-names"

const TOAST_TITLE = "NEVER Use Hephaestus with Non-GPT"
const TOAST_MESSAGE = [
  "Hephaestus is designed exclusively for GPT models.",
  "Hephaestus is trash without GPT.",
  "For Claude/Kimi/GLM models, always use Sisyphus.",
].join("\n")
const SISYPHUS_DISPLAY = getAgentDisplayName("sisyphus")

function showToast(ctx: PluginInput, sessionID: string): void {
  ctx.client.tui.showToast({
    body: {
      title: TOAST_TITLE,
      message: TOAST_MESSAGE,
      variant: "error",
      duration: 10000,
    },
  }).catch((error) => {
    log("[no-hephaestus-non-gpt] Failed to show toast", {
      sessionID,
      error,
    })
  })
}

export function createNoHephaestusNonGptHook(ctx: PluginInput) {
  return {
    "chat.message": async (input: {
      sessionID: string
      agent?: string
      model?: { providerID: string; modelID: string }
    }, output?: {
      message?: { agent?: string; [key: string]: unknown }
    }): Promise<void> => {
      const rawAgent = input.agent ?? getSessionAgent(input.sessionID) ?? ""
      const agentKey = getAgentConfigKey(rawAgent)
      const modelID = input.model?.modelID

      if (agentKey === "hephaestus" && modelID && !isGptModel(modelID)) {
        showToast(ctx, input.sessionID)
        input.agent = SISYPHUS_DISPLAY
        if (output?.message) {
          output.message.agent = SISYPHUS_DISPLAY
        }
        updateSessionAgent(input.sessionID, SISYPHUS_DISPLAY)
      }
    },
  }
}
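The guard above reduces to one predicate per message: if the resolved agent is hephaestus and the model is not a GPT model, redirect the turn to Sisyphus. A self-contained sketch of just that decision — the `isGptModel` heuristic below is an assumption for illustration; the plugin's real check lives in `agents/types`:

```typescript
// Hypothetical stand-in for the plugin's isGptModel check.
function isGptModel(modelID: string): boolean {
  return modelID.toLowerCase().includes("gpt")
}

// Redirect decision: returns the agent key that should handle the turn.
// Hephaestus is GPT-only, so any other model falls back to sisyphus.
function resolveAgent(agentKey: string, modelID: string | undefined): string {
  if (agentKey === "hephaestus" && modelID && !isGptModel(modelID)) {
    return "sisyphus"
  }
  return agentKey
}
```

Isolating the decision in a pure function like this is also what makes the hook easy to test without a live TUI client.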
115 src/hooks/no-hephaestus-non-gpt/index.test.ts Normal file
@@ -0,0 +1,115 @@
import { describe, expect, spyOn, test } from "bun:test"
import { _resetForTesting, updateSessionAgent } from "../../features/claude-code-session-state"
import { getAgentDisplayName } from "../../shared/agent-display-names"
import { createNoHephaestusNonGptHook } from "./index"

const HEPHAESTUS_DISPLAY = getAgentDisplayName("hephaestus")
const SISYPHUS_DISPLAY = getAgentDisplayName("sisyphus")

function createOutput() {
  return {
    message: {},
    parts: [],
  }
}

describe("no-hephaestus-non-gpt hook", () => {
  test("shows toast on every chat.message when hephaestus uses non-gpt model", async () => {
    // given - hephaestus with claude model
    const showToast = spyOn({ fn: async () => ({}) }, "fn")
    const hook = createNoHephaestusNonGptHook({
      client: { tui: { showToast } },
    } as any)

    const output1 = createOutput()
    const output2 = createOutput()

    // when - chat.message is called repeatedly
    await hook["chat.message"]?.({
      sessionID: "ses_1",
      agent: HEPHAESTUS_DISPLAY,
      model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
    }, output1)
    await hook["chat.message"]?.({
      sessionID: "ses_1",
      agent: HEPHAESTUS_DISPLAY,
      model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
    }, output2)

    // then - toast is shown and agent is switched to sisyphus
    expect(showToast).toHaveBeenCalledTimes(2)
    expect(output1.message.agent).toBe(SISYPHUS_DISPLAY)
    expect(output2.message.agent).toBe(SISYPHUS_DISPLAY)
    expect(showToast.mock.calls[0]?.[0]).toMatchObject({
      body: {
        title: "NEVER Use Hephaestus with Non-GPT",
        message: expect.stringContaining("Hephaestus is trash without GPT."),
        variant: "error",
      },
    })
  })

  test("does not show toast when hephaestus uses gpt model", async () => {
    // given - hephaestus with gpt model
    const showToast = spyOn({ fn: async () => ({}) }, "fn")
    const hook = createNoHephaestusNonGptHook({
      client: { tui: { showToast } },
    } as any)

    const output = createOutput()

    // when - chat.message runs
    await hook["chat.message"]?.({
      sessionID: "ses_2",
      agent: HEPHAESTUS_DISPLAY,
      model: { providerID: "openai", modelID: "gpt-5.3-codex" },
    }, output)

    // then - no toast, agent unchanged
    expect(showToast).toHaveBeenCalledTimes(0)
    expect(output.message.agent).toBeUndefined()
  })

  test("does not show toast for non-hephaestus agent", async () => {
    // given - sisyphus with claude model (non-gpt)
    const showToast = spyOn({ fn: async () => ({}) }, "fn")
    const hook = createNoHephaestusNonGptHook({
      client: { tui: { showToast } },
    } as any)

    const output = createOutput()

    // when - chat.message runs
    await hook["chat.message"]?.({
      sessionID: "ses_3",
      agent: SISYPHUS_DISPLAY,
      model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
    }, output)

    // then - no toast
    expect(showToast).toHaveBeenCalledTimes(0)
    expect(output.message.agent).toBeUndefined()
  })

  test("uses session agent fallback when input agent is missing", async () => {
    // given - session agent saved as hephaestus
    _resetForTesting()
    updateSessionAgent("ses_4", HEPHAESTUS_DISPLAY)
    const showToast = spyOn({ fn: async () => ({}) }, "fn")
    const hook = createNoHephaestusNonGptHook({
      client: { tui: { showToast } },
    } as any)

    const output = createOutput()

    // when - chat.message runs without input.agent
    await hook["chat.message"]?.({
      sessionID: "ses_4",
      model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
    }, output)

    // then - toast shown via session-agent fallback, switched to sisyphus
    expect(showToast).toHaveBeenCalledTimes(1)
    expect(output.message.agent).toBe(SISYPHUS_DISPLAY)
  })
})
src/hooks/no-hephaestus-non-gpt/index.ts (new file, 1 line)
@@ -0,0 +1 @@
+export { createNoHephaestusNonGptHook } from "./hook"
@@ -6,10 +6,9 @@ import { getAgentConfigKey, getAgentDisplayName } from "../../shared/agent-displ

const TOAST_TITLE = "NEVER Use Sisyphus with GPT"
const TOAST_MESSAGE = [
  "Sisyphus is NOT designed for GPT models.",
  "Sisyphus + GPT performs worse than vanilla Codex.",
  "You are literally burning money.",
  "Use Hephaestus for GPT models instead.",
  "Sisyphus works best with Claude Opus, and works fine with Kimi/GLM models.",
  "Do NOT use Sisyphus with GPT.",
  "For GPT models, always use Hephaestus.",
].join("\n")
const HEPHAESTUS_DISPLAY = getAgentDisplayName("hephaestus")

@@ -43,7 +43,7 @@ describe("no-sisyphus-gpt hook", () => {
    expect(showToast.mock.calls[0]?.[0]).toMatchObject({
      body: {
        title: "NEVER Use Sisyphus with GPT",
-        message: expect.stringContaining("burning money"),
+        message: expect.stringContaining("For GPT models, always use Hephaestus."),
        variant: "error",
      },
    })
@@ -3,7 +3,11 @@ import { log } from "../../shared/logger"
import { findNearestMessageWithFields } from "../../features/hook-message-injector"
import { getMessageDir } from "./message-storage-directory"
import { withTimeout } from "./with-timeout"
-import { normalizeSDKResponse, resolveInheritedPromptTools } from "../../shared"
+import {
+  createInternalAgentTextPart,
+  normalizeSDKResponse,
+  resolveInheritedPromptTools,
+} from "../../shared"

type MessageInfo = {
  agent?: string

@@ -64,7 +68,7 @@ export async function injectContinuationPrompt(
      ...(agent !== undefined ? { agent } : {}),
      ...(model !== undefined ? { model } : {}),
      ...(inheritedTools ? { tools: inheritedTools } : {}),
-      parts: [{ type: "text", text: options.prompt }],
+      parts: [createInternalAgentTextPart(options.prompt)],
    },
    query: { directory: options.directory },
  })
@@ -1,6 +1,7 @@
declare const require: (name: string) => any
const { describe, expect, test } = require("bun:test")
import { extractResumeConfig, resumeSession } from "./resume"
+import { OMO_INTERNAL_INITIATOR_MARKER } from "../../shared/internal-initiator-marker"
import type { MessageData } from "./types"

describe("session-recovery resume", () => {

@@ -44,5 +45,8 @@ describe("session-recovery resume", () => {
    // then
    expect(ok).toBe(true)
    expect(promptBody?.tools).toEqual({ question: false, bash: true })
+    expect(Array.isArray(promptBody?.parts)).toBe(true)
+    const firstPart = (promptBody?.parts as Array<{ text?: string }>)?.[0]
+    expect(firstPart?.text).toContain(OMO_INTERNAL_INITIATOR_MARKER)
  })
})
@@ -1,6 +1,6 @@
import type { createOpencodeClient } from "@opencode-ai/sdk"
import type { MessageData, ResumeConfig } from "./types"
-import { resolveInheritedPromptTools } from "../../shared"
+import { createInternalAgentTextPart, resolveInheritedPromptTools } from "../../shared"

const RECOVERY_RESUME_TEXT = "[session recovered - continuing previous task]"

@@ -30,7 +30,7 @@ export async function resumeSession(client: Client, config: ResumeConfig): Promi
  await client.session.promptAsync({
    path: { id: config.sessionID },
    body: {
-      parts: [{ type: "text", text: RECOVERY_RESUME_TEXT }],
+      parts: [createInternalAgentTextPart(RECOVERY_RESUME_TEXT)],
      agent: config.agent,
      model: config.model,
      ...(inheritedTools ? { tools: inheritedTools } : {}),
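Several hunks in this diff swap a raw `{ type: "text", text }` part for `createInternalAgentTextPart(...)`. The helper's body is not shown here; a plausible sketch, assuming it simply prefixes the text with the internal-initiator marker so downstream hooks (and the resume tests above, which assert the marker appears in the first part) can recognize plugin-initiated prompts. The marker value below is invented for illustration:

```typescript
// Assumed marker value for illustration; the real constant lives in
// src/shared/internal-initiator-marker and may differ.
const OMO_INTERNAL_INITIATOR_MARKER = "[omo-internal-initiator]"

type TextPart = { type: "text"; text: string }

// Sketch: prepend the marker so internally initiated messages
// (recovery resumes, continuation injections) are distinguishable
// from user-typed prompts.
function createInternalAgentTextPart(text: string): TextPart {
  return { type: "text", text: `${OMO_INTERNAL_INITIATOR_MARKER}\n${text}` }
}

const part = createInternalAgentTextPart("[session recovered - continuing previous task]")
part.text.includes(OMO_INTERNAL_INITIATOR_MARKER) // true
```

Centralizing the marker in one helper means every injection path is tagged consistently, which is what lets the tests assert on the first part's text rather than on call sites.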
@@ -378,7 +378,7 @@ describe("createThinkModeHook integration", () => {
    const hook = createThinkModeHook()
    const input = createMockInput(
      "zai-coding-plan",
-      "glm-4.7",
+      "glm-5",
      "ultrathink mode"
    )

@@ -387,7 +387,7 @@ describe("createThinkModeHook integration", () => {

    //#then thinking config should be omitted from request
    const message = input.message as MessageWithInjectedProps
-    expect(input.message.model?.modelID).toBe("glm-4.7")
+    expect(input.message.model?.modelID).toBe("glm-5")
    expect(message.thinking).toBeUndefined()
    expect(message.providerOptions).toBeUndefined()
  })

@@ -498,9 +498,9 @@ describe("think-mode switcher", () => {

  describe("Z.AI GLM-4.7 provider support", () => {
    describe("getThinkingConfig for zai-coding-plan", () => {
-      it("should return thinking config for glm-4.7", () => {
+      it("should return thinking config for glm-5", () => {
        //#given a Z.ai GLM model
-        const config = getThinkingConfig("zai-coding-plan", "glm-4.7")
+        const config = getThinkingConfig("zai-coding-plan", "glm-5")

        //#when thinking config is resolved

@@ -535,9 +535,9 @@ describe("think-mode switcher", () => {
    })

    describe("HIGH_VARIANT_MAP for GLM", () => {
-      it("should NOT have high variant for glm-4.7", () => {
-        // given glm-4.7 model
-        const variant = getHighVariant("glm-4.7")
+      it("should NOT have high variant for glm-5", () => {
+        // given glm-5 model
+        const variant = getHighVariant("glm-5")

        // then should return null (no high variant needed)
        expect(variant).toBeNull()
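The renamed tests imply a lookup keyed by provider and model ID: under `zai-coding-plan`, `glm-5` gets a thinking config, and it has no separate high-effort variant. A toy sketch of that shape (the table contents and the `ThinkingConfig` type are assumptions, not the plugin's real maps):

```typescript
// Hypothetical shapes for illustration only.
type ThinkingConfig = { type: "enabled" }

// Assumption: providers map to the set of model IDs that support
// a thinking mode; zai-coding-plan now carries glm-5.
const THINKING_SUPPORT: Record<string, Set<string>> = {
  "zai-coding-plan": new Set(["glm-5"]),
}

// Assumption: models whose "think harder" request switches to a
// distinct model variant; glm-5 is intentionally absent, matching
// the "should NOT have high variant" test above.
const HIGH_VARIANT_MAP: Record<string, string> = {}

function getThinkingConfig(providerID: string, modelID: string): ThinkingConfig | null {
  return THINKING_SUPPORT[providerID]?.has(modelID) ? { type: "enabled" } : null
}

function getHighVariant(modelID: string): string | null {
  return HIGH_VARIANT_MAP[modelID] ?? null
}

getThinkingConfig("zai-coding-plan", "glm-5") // returns a config
getHighVariant("glm-5")                       // returns null
```

Keeping both answers in data tables rather than branching logic is what makes a model rename like glm-4.7 to glm-5 a two-line change plus test renames, which is exactly what this diff shows.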
@@ -2,18 +2,26 @@ declare const require: (name: string) => any
const { describe, expect, test } = require("bun:test")

import { injectContinuation } from "./continuation-injection"
+import { OMO_INTERNAL_INITIATOR_MARKER } from "../../shared/internal-initiator-marker"

describe("injectContinuation", () => {
  test("inherits tools from resolved message info when reinjecting", async () => {
    // given
    let capturedTools: Record<string, boolean> | undefined
+    let capturedText: string | undefined
    const ctx = {
      directory: "/tmp/test",
      client: {
        session: {
          todo: async () => ({ data: [{ id: "1", content: "todo", status: "pending", priority: "high" }] }),
-          promptAsync: async (input: { body: { tools?: Record<string, boolean> } }) => {
+          promptAsync: async (input: {
+            body: {
+              tools?: Record<string, boolean>
+              parts?: Array<{ type: string; text: string }>
+            }
+          }) => {
            capturedTools = input.body.tools
+            capturedText = input.body.parts?.[0]?.text
            return {}
          },
        },

@@ -37,5 +45,6 @@ describe("injectContinuation", () => {

    // then
    expect(capturedTools).toEqual({ question: false, bash: true })
+    expect(capturedText).toContain(OMO_INTERNAL_INITIATOR_MARKER)
  })
})
@@ -1,7 +1,11 @@
import type { PluginInput } from "@opencode-ai/plugin"

import type { BackgroundManager } from "../../features/background-agent"
-import { normalizeSDKResponse, resolveInheritedPromptTools } from "../../shared"
+import {
+  createInternalAgentTextPart,
+  normalizeSDKResponse,
+  resolveInheritedPromptTools,
+} from "../../shared"
import {
  findNearestMessageWithFields,
  findNearestMessageWithFieldsFromSDK,

@@ -37,6 +41,7 @@ export async function injectContinuation(args: {
  skipAgents?: string[]
  resolvedInfo?: ResolvedMessageInfo
  sessionStateStore: SessionStateStore
+  isContinuationStopped?: (sessionID: string) => boolean
}): Promise<void> {
  const {
    ctx,

@@ -45,6 +50,7 @@ export async function injectContinuation(args: {
    skipAgents = DEFAULT_SKIP_AGENTS,
    resolvedInfo,
    sessionStateStore,
+    isContinuationStopped,
  } = args

  const state = sessionStateStore.getExistingState(sessionID)

@@ -53,6 +59,11 @@ export async function injectContinuation(args: {
    return
  }

+  if (isContinuationStopped?.(sessionID)) {
+    log(`[${HOOK_NAME}] Skipped injection: continuation stopped for session`, { sessionID })
+    return
+  }
+
  const hasRunningBgTasks = backgroundManager
    ? backgroundManager.getTasksByParentSession(sessionID).some((task: { status: string }) => task.status === "running")
    : false

@@ -144,7 +155,7 @@ ${todoList}`
      agent: agentName,
      ...(model !== undefined ? { model } : {}),
      ...(inheritedTools ? { tools: inheritedTools } : {}),
-      parts: [{ type: "text", text: prompt }],
+      parts: [createInternalAgentTextPart(prompt)],
    },
    query: { directory: ctx.directory },
  })
@@ -38,6 +38,7 @@ export function startCountdown(args: {
  backgroundManager?: BackgroundManager
  skipAgents: string[]
  sessionStateStore: SessionStateStore
+  isContinuationStopped?: (sessionID: string) => boolean
}): void {
  const {
    ctx,

@@ -47,6 +48,7 @@ export function startCountdown(args: {
    backgroundManager,
    skipAgents,
    sessionStateStore,
+    isContinuationStopped,
  } = args

  const state = sessionStateStore.getState(sessionID)

@@ -72,6 +74,7 @@ export function startCountdown(args: {
      skipAgents,
      resolvedInfo,
      sessionStateStore,
+      isContinuationStopped,
    })
  }, COUNTDOWN_SECONDS * 1000)
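The `isContinuationStopped` predicate threaded through `startCountdown` and `injectContinuation` is a plain cooperative-cancellation check: the timer still fires, but injection bails out if the session was stopped while the countdown was pending. A minimal standalone sketch of the pattern (names and the set-backed store are illustrative, not the plugin's actual state management):

```typescript
// Illustrative cooperative-cancellation sketch; not the plugin's code.
const stoppedSessions = new Set<string>()

function stopContinuation(sessionID: string): void {
  stoppedSessions.add(sessionID)
}

function isContinuationStopped(sessionID: string): boolean {
  return stoppedSessions.has(sessionID)
}

function startCountdown(sessionID: string, inject: (id: string) => void, ms: number): void {
  setTimeout(() => {
    // Re-check at fire time: the user may have stopped continuation
    // after the countdown started but before it elapsed.
    if (isContinuationStopped(sessionID)) return
    inject(sessionID)
  }, ms)
}
```

Passing the predicate down as an optional function (rather than sharing the set directly) keeps `injectContinuation` testable and lets callers that never stop continuations simply omit it, which is why the diff adds it as `isContinuationStopped?:` in both signatures.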
Some files were not shown because too many files have changed in this diff.