Compare commits

..

7 Commits

Author SHA1 Message Date
YeonGyu-Kim
17d3184e7a fix(skill): unify command listing format and trim verbose description 2026-02-19 13:53:15 +09:00
Bo Li
e70539fe5f refactor(prompt): dedupe repeated skill guidance blocks 2026-02-19 13:46:33 +09:00
Bo Li
c13886e13a fix(skill): keep no-skills wording compatible with tests 2026-02-19 13:46:33 +09:00
Bo Li
05d7c3f462 fix(skill): eagerly build description for preloaded skills 2026-02-19 13:46:33 +09:00
Bo Li
e00275c07d refactor: merge slashcommand tool into skill tool
Per reviewer feedback (code-yeongyu), keep the 'skill' tool as the main
tool and merge slashcommand functionality INTO it, rather than the reverse.

Changes:
- skill/tools.ts: Add command discovery (discoverCommandsSync) support;
  handle both SKILL.md skills and .omo/commands/ slash commands in a single
  tool; show combined listing in tool description
- skill/types.ts: Add 'commands' option to SkillLoadOptions
- skill/constants.ts: Update description to mention both skills and commands
- plugin/tool-registry.ts: Replace createSlashcommandTool with createSkillTool;
  register tool as 'skill' instead of 'slashcommand'
- tools/index.ts: Export createSkillTool instead of createSlashcommandTool
- plugin/tool-execute-before.ts: Update tool name checks from 'slashcommand'
  to 'skill'; update arg name from 'command' to 'name'
- agents/dynamic-agent-prompt-builder.ts: Categorize 'skill' tool as 'command'
- tools/skill-mcp/tools.ts: Update hint message to reference 'skill' tool
- hooks/auto-slash-command/executor.ts: Update error message

The slashcommand/ module files are kept (they provide shared utilities used
by the skill tool), but the slashcommand tool itself is no longer registered.
2026-02-19 13:46:33 +09:00
Bo Li
70c033060b fix: preserve git-master config defaults and tighten type safety 2026-02-19 13:46:33 +09:00
Bo Li
8fee45401d refactor: merge skill tool into slashcommand to reduce system prompt size 2026-02-19 13:46:33 +09:00
170 changed files with 2403 additions and 10500 deletions
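The merge described in e00275c07d (one tool serving both SKILL.md skills and `.omo/commands/` slash commands, with a combined listing in the tool description and the arg renamed from `command` to `name`) might be sketched roughly as below. The shapes and names here (`Skill`, `SlashCommand`, the return values) are illustrative assumptions, not the project's actual API:

```typescript
// Hypothetical shapes; the real types live in skill/types.ts.
interface Skill { name: string; description: string }
interface SlashCommand { name: string; description: string }

// One tool handles both sources, as the commit describes for skill/tools.ts.
function createSkillTool(skills: Skill[], commands: SlashCommand[]) {
  const byName = new Map<string, { kind: "skill" | "command"; description: string }>();
  for (const s of skills) byName.set(s.name, { kind: "skill", description: s.description });
  for (const c of commands) byName.set(c.name, { kind: "command", description: c.description });

  return {
    // Combined listing shown in the tool description.
    describe(): string {
      return [...byName.entries()]
        .map(([name, e]) => `${name} (${e.kind}): ${e.description}`)
        .join("\n");
    },
    // Arg is 'name', not 'command', per the tool-execute-before.ts change.
    execute(args: { name: string }): string {
      const entry = byName.get(args.name);
      if (!entry) throw new Error(`unknown skill or command: ${args.name}`);
      return `loaded ${entry.kind} '${args.name}'`;
    },
  };
}
```

A single registry keyed by name is what lets the slashcommand tool disappear from registration while its commands stay reachable.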

View File

@@ -1,6 +1,6 @@
# oh-my-opencode — OpenCode Plugin
-**Generated:** 2026-02-19 | **Commit:** 29ebd8c4 | **Branch:** dev
+**Generated:** 2026-02-19 | **Commit:** 5dc437f4 | **Branch:** dev
## OVERVIEW
@@ -86,7 +86,7 @@ Fields: agents (14 overridable), categories (8 built-in + custom), disabled_* ar
- **Test pattern**: Bun test (`bun:test`), co-located `*.test.ts`, given/when/then style
- **Factory pattern**: `createXXX()` for all tools, hooks, agents
-- **Hook tiers**: Session (21) → Tool-Guard (10) → Transform (4) → Continuation (7) → Skill (2)
+- **Hook tiers**: Session (22) → Tool-Guard (9) → Transform (4) → Continuation (7) → Skill (2)
- **Agent modes**: `primary` (respects UI model) vs `subagent` (own fallback chain) vs `all`
- **Model resolution**: 3-step: override → category-default → provider-fallback → system-default
- **Config format**: JSONC with comments, Zod v4 validation, snake_case keys
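The resolution order listed above (override → category-default → provider-fallback → system-default) amounts to a first-defined-wins cascade. A minimal sketch, assuming a flattened config shape (the field names and the placeholder model id are illustrative, not the project's actual resolver):

```typescript
// Hypothetical flattened view of the config for one agent.
interface ModelConfig {
  override?: string;          // explicit per-agent model
  categoryDefault?: string;   // default for the agent's category
  providerFallback?: string;  // head of the provider fallback chain
}

const SYSTEM_DEFAULT = "provider/system-default-model"; // placeholder id

// First defined value in the documented order wins.
function resolveModel(cfg: ModelConfig): string {
  return cfg.override ?? cfg.categoryDefault ?? cfg.providerFallback ?? SYSTEM_DEFAULT;
}
```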

View File

@@ -109,20 +109,18 @@ After making changes, you can test your local build in OpenCode:
```
oh-my-opencode/
├── src/
│ ├── index.ts # Plugin entry (OhMyOpenCodePlugin)
│ ├── plugin-config.ts # JSONC multi-level config (Zod v4)
│ ├── agents/ # 11 agents (Sisyphus, Hephaestus, Oracle, Librarian, Explore, Atlas, Prometheus, Metis, Momus, Multimodal-Looker, Sisyphus-Junior)
│ ├── hooks/ # 44 lifecycle hooks across 39 directories
│ ├── tools/ # 26 tools across 15 directories
│ ├── mcp/ # 3 built-in remote MCPs (websearch, context7, grep_app)
│ ├── features/ # 19 feature modules (background-agent, skill-loader, tmux, MCP-OAuth, etc.)
│ ├── config/ # Zod v4 schema system
│ ├── shared/ # Cross-cutting utilities
│ ├── cli/ # CLI: install, run, doctor, mcp-oauth (Commander.js)
│ ├── plugin/ # 8 OpenCode hook handlers + hook composition
│ └── plugin-handlers/ # 6-phase config loading pipeline
├── packages/ # Monorepo: comment-checker, opencode-sdk
└── dist/ # Build output (ESM + .d.ts)
│ ├── agents/ # AI agents (OmO, oracle, librarian, explore, etc.)
│ ├── hooks/ # 21 lifecycle hooks
│ ├── tools/ # LSP (11), AST-Grep, Grep, Glob, etc.
│ ├── mcp/ # MCP server integrations (context7, grep_app)
│ ├── features/ # Claude Code compatibility layers
│ ├── config/ # Zod schemas and TypeScript types
│ ├── auth/ # Google Antigravity OAuth
│ ├── shared/ # Common utilities
│ └── index.ts # Main plugin entry (OhMyOpenCodePlugin)
├── script/ # Build utilities (build-schema.ts, publish.ts)
├── assets/ # JSON schema
└── dist/ # Build output (ESM + .d.ts)
```
## Development Workflow

View File

@@ -183,7 +183,6 @@ Windows から Linux に初めて乗り換えた時のこと、自分の思い
- Librarian: 公式ドキュメント、オープンソース実装、コードベース探索 (GLM-4.7)
- Explore: 超高速コードベース探索 (Contextual Grep) (Grok Code Fast 1)
- Full LSP / AstGrep Support: 決定的にリファクタリングしましょう。
- ハッシュアンカード編集ツール: `LINE#ID` 形式で変更前にコンテンツハッシュを検証します。古い行の編集はもう不要です。
- Todo Continuation Enforcer: 途中で諦めたら、続行を強制します。これがシジフォスに岩を転がし続けさせる秘訣です。
- Comment Checker: AIが過剰なコメントを付けないようにします。シジフォスが生成したコードは、人間が書いたものと区別がつかないべきです。
- Claude Code Compatibility: Command, Agent, Skill, MCP, Hook(PreToolUse, PostToolUse, UserPromptSubmit, Stop)
@@ -235,6 +234,14 @@ Windows から Linux に初めて乗り換えた時のこと、自分の思い
### 人間の方へ
インストールガイドを取得して、その指示に従ってください:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
### LLM エージェントの方へ
以下のプロンプトをコピーして、LLM エージェントClaude Code、AmpCode、Cursor など)に貼り付けてください:
```
@@ -244,14 +251,6 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
または [インストールガイド](docs/guide/installation.md) を直接読んでください。ただし、エージェントに任せることを強くお勧めします。人間はミスをしますが、エージェントはしません。
### LLM エージェントの方へ
インストールガイドを取得して、その指示に従ってください:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
## アンインストール
@@ -295,7 +294,6 @@ oh-my-opencode を削除するには:
- **エージェント**: Sisyphusメインエージェント、Prometheusプランナー、Oracleアーキテクチャ/デバッグ、Librarianドキュメント/コード検索、Explore高速コードベース grep、Multimodal Looker
- **バックグラウンドエージェント**: 本物の開発チームのように複数エージェントを並列実行
- **LSP & AST ツール**: リファクタリング、リネーム、診断、AST 認識コード検索
- **ハッシュアンカード編集ツール**: `LINE#ID` 参照で変更前にコンテンツを検証 — 外科的な編集、古い行エラーなし
- **コンテキスト注入**: AGENTS.md、README.md、条件付きルールの自動注入
- **Claude Code 互換性**: 完全なフックシステム、コマンド、スキル、エージェント、MCP
- **内蔵 MCP**: websearch (Exa)、context7 (ドキュメント)、grep_app (GitHub 検索)

View File

@@ -187,7 +187,6 @@ Hey please read this readme and tell me why it is different from other agent har
- Librarian: 공식 문서, 오픈 소스 구현, 코드베이스 탐색 (GLM-4.7)
- Explore: 엄청나게 빠른 코드베이스 탐색 (Contextual Grep) (Grok Code Fast 1)
- 완전한 LSP / AstGrep 지원: 결정적으로 리팩토링합니다.
- 해시 앵커드 편집 도구: `LINE#ID` 형식으로 변경 전마다 콘텐츠 해시를 검증합니다. 오래된 줄 편집은 이제 없습니다.
- TODO 연속 강제: 에이전트가 중간에 멈추면 계속하도록 강제합니다. **이것이 Sisyphus가 그 바위를 굴리게 하는 것입니다.**
- 주석 검사기: AI가 과도한 주석을 추가하는 것을 방지합니다. Sisyphus가 생성한 코드는 인간이 작성한 것과 구별할 수 없어야 합니다.
- Claude Code 호환성: 명령, 에이전트, 스킬, MCP, 훅(PreToolUse, PostToolUse, UserPromptSubmit, Stop)
@@ -246,6 +245,14 @@ Hey please read this readme and tell me why it is different from other agent har
### 인간을 위한
설치 가이드를 가져와서 따르세요:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
### LLM 에이전트를 위한
이 프롬프트를 LLM 에이전트(Claude Code, AmpCode, Cursor 등)에 복사하여 붙여넣으세요:
```
@@ -255,14 +262,6 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
또는 [설치 가이드](docs/guide/installation.md)를 직접 읽으세요 — 하지만 **에이전트가 처리하도록 하는 것을 강력히 권장합니다. 인간은 실수를 합니다.**
### LLM 에이전트를 위한
설치 가이드를 가져와서 따르세요:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
## 제거
oh-my-opencode를 제거하려면:
@@ -304,7 +303,6 @@ oh-my-opencode를 제거하려면:
- **에이전트**: Sisyphus(주요 에이전트), Prometheus(플래너), Oracle(아키텍처/디버깅), Librarian(문서/코드 검색), Explore(빠른 코드베이스 grep), Multimodal Looker
- **백그라운드 에이전트**: 실제 개발 팀처럼 여러 에이전트를 병렬로 실행
- **LSP 및 AST 도구**: 리팩토링, 이름 변경, 진단, AST 인식 코드 검색
- **해시 앵커드 편집 도구**: `LINE#ID` 참조로 변경 전마다 콘텐츠를 검증 — 정밀한 편집, 오래된 줄 오류 없음
- **컨텍스트 주입**: AGENTS.md, README.md, 조건부 규칙 자동 주입
- **Claude Code 호환성**: 완전한 훅 시스템, 명령, 스킬, 에이전트, MCP
- **내장 MCP**: websearch(Exa), context7(문서), grep_app(GitHub 검색)

View File

@@ -107,6 +107,25 @@ Yes, technically possible. But I cannot recommend using it.
---
## Contents
- [Oh My OpenCode](#oh-my-opencode)
- [Just Skip Reading This Readme](#just-skip-reading-this-readme)
- [It's the Age of Agents](#its-the-age-of-agents)
- [🪄 The Magic Word: `ultrawork`](#-the-magic-word-ultrawork)
- [For Those Who Want to Read: Meet Sisyphus](#for-those-who-want-to-read-meet-sisyphus)
- [Just Install This](#just-install-this)
- [For Those Who Want Autonomy: Meet Hephaestus](#for-those-who-want-autonomy-meet-hephaestus)
- [Installation](#installation)
- [For Humans](#for-humans)
- [For LLM Agents](#for-llm-agents)
- [Uninstallation](#uninstallation)
- [Features](#features)
- [Configuration](#configuration)
- [Author's Note](#authors-note)
- [Warnings](#warnings)
- [Loved by professionals at](#loved-by-professionals-at)
# Oh My OpenCode
[Claude Code](https://www.claude.com/product/claude-code) is great.
@@ -167,7 +186,6 @@ Meet our main agent: Sisyphus (Opus 4.6). Below are the tools Sisyphus uses to k
- Librarian: Official docs, open source implementations, codebase exploration (GLM-4.7)
- Explore: Blazing fast codebase exploration (Contextual Grep) (Grok Code Fast 1)
- Full LSP / AstGrep Support: Refactor decisively.
- Hash-anchored Edit Tool: `LINE#ID` format validates content hash before every change. No more stale-line edits.
- Todo Continuation Enforcer: Forces the agent to continue if it quits halfway. **This is what keeps Sisyphus rolling that boulder.**
- Comment Checker: Prevents AI from adding excessive comments. Code generated by Sisyphus should be indistinguishable from human-written code.
- Claude Code Compatibility: Command, Agent, Skill, MCP, Hook(PreToolUse, PostToolUse, UserPromptSubmit, Stop)
@@ -204,10 +222,6 @@ Need to look something up? It scours official docs, your entire codebase history
If you don't want all this, as mentioned, you can just pick and choose specific features.
#### Which Model Should I Use?
New to oh-my-opencode and not sure which model to pair with which agent? Check the **[Agent-Model Matching Guide](docs/guide/agent-model-matching.md)** — a quick reference for newcomers covering recommended models, fallback chains, and common pitfalls for each agent.
### For Those Who Want Autonomy: Meet Hephaestus
![Meet Hephaestus](.github/assets/hephaestus.png)
@@ -230,6 +244,14 @@ Hephaestus is inspired by [AmpCode's deep mode](https://ampcode.com)—autonomou
### For Humans
Fetch the installation guide and follow it:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
### For LLM Agents
Copy and paste this prompt to your LLM agent (Claude Code, AmpCode, Cursor, etc.):
```
@@ -239,14 +261,6 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
Or read the [Installation Guide](docs/guide/installation.md) directly—but **we strongly recommend letting an agent handle it. Humans make mistakes.**
### For LLM Agents
Fetch the installation guide and follow it:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
## Uninstallation
To remove oh-my-opencode:
@@ -288,13 +302,11 @@ See the full [Features Documentation](docs/features.md) for detailed information
- **Agents**: Sisyphus (the main agent), Prometheus (planner), Oracle (architecture/debugging), Librarian (docs/code search), Explore (fast codebase grep), Multimodal Looker
- **Background Agents**: Run multiple agents in parallel like a real dev team
- **LSP & AST Tools**: Refactoring, rename, diagnostics, AST-aware code search
- **Hash-anchored Edit Tool**: `LINE#ID` references validate content before applying every change — surgical edits, zero stale-line errors
- **Context Injection**: Auto-inject AGENTS.md, README.md, conditional rules
- **Claude Code Compatibility**: Full hook system, commands, skills, agents, MCPs
- **Built-in MCPs**: websearch (Exa), context7 (docs), grep_app (GitHub search)
- **Session Tools**: List, read, search, and analyze session history
- **Productivity Features**: Ralph Loop, Todo Enforcer, Comment Checker, Think Mode, and more
- **[Agent-Model Matching Guide](docs/guide/agent-model-matching.md)**: Which model works best with which agent
## Configuration

View File

@@ -183,7 +183,6 @@
- Librarian官方文档、开源实现、代码库探索 (GLM-4.7)
- Explore极速代码库探索上下文感知 Grep(Grok Code Fast 1)
- 完整 LSP / AstGrep 支持:果断重构。
- 哈希锚定编辑工具:`LINE#ID` 格式在每次更改前验证内容哈希。再也没有陈旧行编辑。
- Todo 继续执行器:如果智能体中途退出,强制它继续。**这就是让 Sisyphus 继续推动巨石的关键。**
- 注释检查器:防止 AI 添加过多注释。Sisyphus 生成的代码应该与人类编写的代码无法区分。
- Claude Code 兼容性Command、Agent、Skill、MCP、HookPreToolUse、PostToolUse、UserPromptSubmit、Stop
@@ -242,6 +241,14 @@
### 面向人类用户
获取安装指南并按照说明操作:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
### 面向 LLM 智能体
复制以下提示并粘贴到你的 LLM 智能体Claude Code、AmpCode、Cursor 等):
```
@@ -251,14 +258,6 @@ https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/
或者直接阅读 [安装指南](docs/guide/installation.md)——但我们强烈建议让智能体来处理。人会犯错,智能体不会。
### 面向 LLM 智能体
获取安装指南并按照说明操作:
```bash
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```
## 卸载
要移除 oh-my-opencode
@@ -301,7 +300,6 @@ curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads
- **智能体**Sisyphus主智能体、Prometheus规划器、Oracle架构/调试、Librarian文档/代码搜索、Explore快速代码库 grep、Multimodal Looker
- **后台智能体**:像真正的开发团队一样并行运行多个智能体
- **LSP & AST 工具**重构、重命名、诊断、AST 感知代码搜索
- **哈希锚定编辑工具**`LINE#ID` 引用在每次更改前验证内容 — 精准编辑,零陈旧行错误
- **上下文注入**:自动注入 AGENTS.md、README.md、条件规则
- **Claude Code 兼容性**完整的钩子系统、命令、技能、智能体、MCP
- **内置 MCP**websearch (Exa)、context7 (文档)、grep_app (GitHub 搜索)

View File

@@ -92,7 +92,6 @@
"prometheus-md-only",
"sisyphus-junior-notepad",
"no-sisyphus-gpt",
"no-hephaestus-non-gpt",
"start-work",
"atlas",
"unstable-agent-babysitter",
@@ -100,11 +99,9 @@
"task-resume-info",
"stop-continuation-guard",
"tasks-todowrite-disabler",
"runtime-fallback",
"write-existing-file-guard",
"anthropic-effort",
"hashline-read-enhancer",
"hashline-edit-diff-enhancer"
"hashline-read-enhancer"
]
}
},
@@ -129,9 +126,6 @@
"type": "string"
}
},
"hashline_edit": {
"type": "boolean"
},
"agents": {
"type": "object",
"properties": {
@@ -141,19 +135,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -317,18 +298,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -339,19 +308,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -515,18 +471,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -537,19 +481,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -713,18 +644,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -735,19 +654,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -911,18 +817,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -933,19 +827,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -1109,18 +990,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -1131,19 +1000,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -1307,18 +1163,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -1329,19 +1173,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -1505,18 +1336,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -1527,19 +1346,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -1703,18 +1509,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -1725,19 +1519,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -1901,18 +1682,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -1923,19 +1692,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -2099,18 +1855,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -2121,19 +1865,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -2297,18 +2028,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -2319,19 +2038,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -2495,18 +2201,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -2517,19 +2211,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -2693,18 +2374,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -2715,19 +2384,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -2891,18 +2547,6 @@
"providerOptions": {
"type": "object",
"additionalProperties": {}
},
"ultrawork": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
}
},
"additionalProperties": false
}
},
"additionalProperties": false
@@ -2921,19 +2565,6 @@
"model": {
"type": "string"
},
"fallback_models": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"variant": {
"type": "string"
},
@@ -3203,7 +2834,7 @@
"safe_hook_creation": {
"type": "boolean"
},
"disable_omo_env": {
"hashline_edit": {
"type": "boolean"
}
},
@@ -3343,37 +2974,6 @@
],
"additionalProperties": false
},
"runtime_fallback": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean"
},
"retry_on_errors": {
"type": "array",
"items": {
"type": "number"
}
},
"max_fallback_attempts": {
"type": "number",
"minimum": 1,
"maximum": 20
},
"cooldown_seconds": {
"type": "number",
"minimum": 0
},
"timeout_seconds": {
"type": "number",
"minimum": 0
},
"notify_on_fallback": {
"type": "boolean"
}
},
"additionalProperties": false
},
"background_task": {
"type": "object",
"properties": {
@@ -3562,4 +3162,4 @@
}
},
"additionalProperties": false
}
}
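The `fallback_models` field removed throughout this schema accepted either a single string or an array of strings (`"anyOf": [string, string[]]`), so a consumer would typically normalize it to one shape before walking the chain. A sketch under that assumption (the function name is illustrative):

```typescript
// Mirrors the removed schema shape: a model id or a list of model ids.
type FallbackModels = string | string[];

// Normalize to an array so downstream fallback logic deals with one shape.
function normalizeFallbackModels(value?: FallbackModels): string[] {
  if (value === undefined) return [];
  return Array.isArray(value) ? value : [value];
}
```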

View File

@@ -12,7 +12,6 @@
"@modelcontextprotocol/sdk": "^1.25.1",
"@opencode-ai/plugin": "^1.1.19",
"@opencode-ai/sdk": "^1.1.19",
"codex": "^0.2.3",
"commander": "^14.0.2",
"detect-libc": "^2.0.0",
"js-yaml": "^4.1.1",
@@ -29,13 +28,13 @@
"typescript": "^5.7.3",
},
"optionalDependencies": {
"oh-my-opencode-darwin-arm64": "3.7.4",
"oh-my-opencode-darwin-x64": "3.7.4",
"oh-my-opencode-linux-arm64": "3.7.4",
"oh-my-opencode-linux-arm64-musl": "3.7.4",
"oh-my-opencode-linux-x64": "3.7.4",
"oh-my-opencode-linux-x64-musl": "3.7.4",
"oh-my-opencode-windows-x64": "3.7.4",
"oh-my-opencode-darwin-arm64": "3.6.0",
"oh-my-opencode-darwin-x64": "3.6.0",
"oh-my-opencode-linux-arm64": "3.6.0",
"oh-my-opencode-linux-arm64-musl": "3.6.0",
"oh-my-opencode-linux-x64": "3.6.0",
"oh-my-opencode-linux-x64-musl": "3.6.0",
"oh-my-opencode-windows-x64": "3.6.0",
},
},
},
@@ -119,12 +118,8 @@
"call-bound": ["call-bound@1.0.4", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.2", "get-intrinsic": "^1.3.0" } }, "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg=="],
"codex": ["codex@0.2.3", "", { "dependencies": { "connect": "1.8.x", "dox": "0.3.x", "drip": "0.2.x", "fez": "0.0.x", "highlight.js": "1.2.x", "jade": "0.26.x", "marked": "0.2.x", "ncp": "0.2.x", "nib": "0.4.x", "oath": "0.2.x", "optimist": "0.3.x", "rimraf": "2.0.x", "stylus": "0.26.x", "tea": "0.0.x", "yaml": "0.2.x" }, "bin": { "codex": "./bin/codex" } }, "sha512-+MQbh3UIJRZFawxQUgPAEXKyL9o06fy8JmrgW4EnMeMlj8kh3Jljh4+CcOdH9yt82FTkmEwUR2qOrOev3ZoJJA=="],
"commander": ["commander@14.0.2", "", {}, "sha512-TywoWNNRbhoD0BXs1P3ZEScW8W5iKrnbithIl0YH+uCmBd0QpPOA8yc82DS3BIE5Ma6FnBVUsJ7wVUDz4dvOWQ=="],
"connect": ["connect@1.8.7", "", { "dependencies": { "formidable": "1.0.x", "mime": ">= 0.0.1", "qs": ">= 0.4.0" } }, "sha512-j72iQ8i6td2YLZD37ADpGOa4C5skHNrJSGQkJh/t+DCoE6nm8NbHslFTs17q44EJsiVrry+W13yrxd46M32jbA=="],
"content-disposition": ["content-disposition@1.0.1", "", {}, "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q=="],
"content-type": ["content-type@1.0.5", "", {}, "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA=="],
@@ -137,18 +132,12 @@
"cross-spawn": ["cross-spawn@7.0.6", "", { "dependencies": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" } }, "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="],
"cssom": ["cssom@0.2.5", "", {}, "sha512-b9ecqKEfWrNcyzx5+1nmcfi80fPp8dVM8rlAh7fFK14PZbNjp++gRjyZTZfLJQa/Lw0qeCJho7WBIl0nw0v6HA=="],
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"depd": ["depd@2.0.0", "", {}, "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw=="],
"detect-libc": ["detect-libc@2.1.2", "", {}, "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ=="],
"dox": ["dox@0.3.3", "", { "dependencies": { "commander": "0.6.1", "github-flavored-markdown": ">= 0.0.1" }, "bin": { "dox": "./bin/dox" } }, "sha512-5bSKbTcpFm+0wPRnxMkJhY5dFoWWxsTQdTLFg2d1HyLl0voy9GoBVVOKM+yPSdTdKCXrHqwEwUcdS7s4BTst7w=="],
"drip": ["drip@0.2.4", "", {}, "sha512-/qhB7CjfmfZYHue9SwicWNqsSp1DNzkHTCVsud92Tb43qKTiIAXBHIdCJYUn93r7MScM++H+nimkWPmvNTg/Qw=="],
"dunder-proto": ["dunder-proto@1.0.1", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.1", "es-errors": "^1.3.0", "gopd": "^1.2.0" } }, "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A=="],
"ee-first": ["ee-first@1.1.1", "", {}, "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow=="],
@@ -177,12 +166,8 @@
"fast-uri": ["fast-uri@3.1.0", "", {}, "sha512-iPeeDKJSWf4IEOasVVrknXpaBV0IApz/gp7S2bb7Z4Lljbl2MGJRqInZiUrQwV16cpzw/D3S5j5Julj/gT52AA=="],
"fez": ["fez@0.0.3", "", {}, "sha512-W+igVHjiRB4ai7h25ay/7OYNwI8IihdABOnRIS3Bcm4UxEWKoenCB6m68HLSq41TxZwbnqzFAqlz/CjKB3rTvg=="],
"finalhandler": ["finalhandler@2.1.1", "", { "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "on-finished": "^2.4.1", "parseurl": "^1.3.3", "statuses": "^2.0.1" } }, "sha512-S8KoZgRZN+a5rNwqTxlZZePjT/4cnm0ROV70LedRHZ0p8u9fRID0hJUZQpkKLzro8LfmC8sx23bY6tVNxv8pQA=="],
"formidable": ["formidable@1.0.17", "", {}, "sha512-95MFT5qipMvUiesmuvGP1BI4hh5XWCzyTapiNJ/k8JBQda7rPy7UCWYItz2uZEdTgGNy1eInjzlL9Wx1O9fedg=="],
"forwarded": ["forwarded@0.2.0", "", {}, "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow=="],
"fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="],
@@ -193,18 +178,12 @@
"get-proto": ["get-proto@1.0.1", "", { "dependencies": { "dunder-proto": "^1.0.1", "es-object-atoms": "^1.0.0" } }, "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g=="],
"github-flavored-markdown": ["github-flavored-markdown@1.0.1", "", {}, "sha512-qkpFaYzQ+JbZw7iuZCpvjqas5E8ZNq/xuTtBtdPkAlowX8VXBmkZE2DCgNGCTW5KZsCvqX5lSef/2yrWMTztBQ=="],
"gopd": ["gopd@1.2.0", "", {}, "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg=="],
"graceful-fs": ["graceful-fs@1.1.14", "", {}, "sha512-JUrvoFoQbLZpOZilKTXZX2e1EV0DTnuG5vsRFNFv4mPf/mnYbwNAFw/5x0rxeyaJslIdObGSgTTsMnM/acRaVw=="],
"has-symbols": ["has-symbols@1.1.0", "", {}, "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ=="],
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
"highlight.js": ["highlight.js@1.2.0", "", { "dependencies": { "commander": "*" }, "bin": { "hljs": "./bin/hljs" } }, "sha512-k19Rm9OuIGiZvD+0G2Lao6kPr01XMEbEK67/n+GqOMTgxc7HhgzfLzX71Q9j5Qu+bkzYXbPFHums8tl0dzV4Uw=="],
"hono": ["hono@4.10.8", "", {}, "sha512-DDT0A0r6wzhe8zCGoYOmMeuGu3dyTAE40HHjwUsWFTEy5WxK1x2WDSsBPlEXgPbRIFY6miDualuUDbasPogIww=="],
"http-errors": ["http-errors@2.0.1", "", { "dependencies": { "depd": "~2.0.0", "inherits": "~2.0.4", "setprototypeof": "~1.2.0", "statuses": "~2.0.2", "toidentifier": "~1.0.1" } }, "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ=="],
@@ -219,8 +198,6 @@
"isexe": ["isexe@2.0.0", "", {}, "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw=="],
"jade": ["jade@0.26.3", "", { "dependencies": { "commander": "0.6.1", "mkdirp": "0.3.0" }, "bin": { "jade": "./bin/jade" } }, "sha512-mkk3vzUHFjzKjpCXeu+IjXeZD+QOTjUUdubgmHtHTDwvAO2ZTkMTTVrapts5CWz3JvJryh/4KWZpjeZrCepZ3A=="],
"jose": ["jose@6.1.3", "", {}, "sha512-0TpaTfihd4QMNwrz/ob2Bp7X04yuxJkjRGi4aKmOqwhov54i6u79oCv7T+C7lo70MKH6BesI3vscD1yb/yzKXQ=="],
"js-yaml": ["js-yaml@4.1.1", "", { "dependencies": { "argparse": "^2.0.1" }, "bin": { "js-yaml": "bin/js-yaml.js" } }, "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA=="],
@@ -231,62 +208,42 @@
"jsonc-parser": ["jsonc-parser@3.3.1", "", {}, "sha512-HUgH65KyejrUFPvHFPbqOY0rsFip3Bo5wb4ngvdi1EpCYWUQDC5V+Y7mZws+DLkr4M//zQJoanu1SP+87Dv1oQ=="],
"marked": ["marked@0.2.10", "", { "bin": { "marked": "./bin/marked" } }, "sha512-LyFB4QvdBaJFfEIn33plrxtBuRjeHoDE2QJdP58i2EWMUTpa6GK6MnjJh3muCvVibFJompyr6IxecK2fjp4RDw=="],
"math-intrinsics": ["math-intrinsics@1.1.0", "", {}, "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g=="],
"media-typer": ["media-typer@1.1.0", "", {}, "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw=="],
"merge-descriptors": ["merge-descriptors@2.0.0", "", {}, "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g=="],
"mime": ["mime@4.1.0", "", { "bin": { "mime": "bin/cli.js" } }, "sha512-X5ju04+cAzsojXKes0B/S4tcYtFAJ6tTMuSPBEn9CPGlrWr8Fiw7qYeLT0XyH80HSoAoqWCaz+MWKh22P7G1cw=="],
"mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="],
"mime-types": ["mime-types@3.0.2", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-Lbgzdk0h4juoQ9fCKXW4by0UJqj+nOOrI9MJ1sSj4nI8aI2eo1qmvQEie4VD1glsS250n15LsWsYtCugiStS5A=="],
"mkdirp": ["mkdirp@0.3.0", "", {}, "sha512-OHsdUcVAQ6pOtg5JYWpCBo9W/GySVuwvP9hueRMW7UqshC0tbfzLv8wjySTPm3tfUZ/21CE9E1pJagOA91Pxew=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"nan": ["nan@1.0.0", "", {}, "sha512-Wm2/nFOm2y9HtJfgOLnctGbfvF23FcQZeyUZqDD8JQG3zO5kXh3MkQKiUaA68mJiVWrOzLFkAV1u6bC8P52DJA=="],
"ncp": ["ncp@0.2.7", "", { "bin": { "ncp": "./bin/ncp" } }, "sha512-wPUepcV37u3Mw+ktjrUbl3azxwAkcD9RrVLQGlpSapWcEQM5jL0g8zwKo6ukOjVQAAEjqpRdLeojOalqqySpCg=="],
"negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="],
"nib": ["nib@0.4.1", "", {}, "sha512-q8n5RAcLLpA5YewcH9UplGzPTu4XbC6t9hVPB1RsnvKD5aYWT+V+2NHGH/dgw/6YDjgETEa7hY54kVhvn1i5DQ=="],
"oath": ["oath@0.2.3", "", {}, "sha512-/uTqn2KKy671SunNXhULGbumn2U3ZN84LvYZdnfSqqqBkM6cppm+jcUodWELd9CYVNYGh6QwJEEAQ0WM95qjpA=="],
"object-assign": ["object-assign@4.1.1", "", {}, "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="],
"object-inspect": ["object-inspect@1.13.4", "", {}, "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew=="],
"oh-my-opencode-darwin-arm64": ["oh-my-opencode-darwin-arm64@3.7.4", "", { "os": "darwin", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-0m84UiVlOC2gLSFIOTmCsxFCB9CmyWV9vGPYqfBFLoyDJmedevU3R5N4ze54W7jv4HSSxz02Zwr+QF5rkQANoA=="],
"oh-my-opencode-darwin-arm64": ["oh-my-opencode-darwin-arm64@3.6.0", "", { "os": "darwin", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-JkyJC3b9ueRgSyPJMjTKlBO99gIyTpI87lEV5Tk7CBv6TFbj2ZFxfaA8mEm138NbwmYa/Z4Rf7I5tZyp2as93A=="],
"oh-my-opencode-darwin-x64": ["oh-my-opencode-darwin-x64@3.7.4", "", { "os": "darwin", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-Z2dQy8jmc6DuwbN9bafhOwjZBkAkTWlfLAz1tG6xVzMqTcp4YOrzrHFOBRNeFKpOC/x7yUpO3sq/YNCclloelw=="],
"oh-my-opencode-darwin-x64": ["oh-my-opencode-darwin-x64@3.6.0", "", { "os": "darwin", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-5HsXz3F42T6CmPk6IW+pErJVSmPnqc3Gc1OntoKp/b4FwuWkFJh9kftDSH3cnKTX98H6XBqnwZoFKCNCiiVLEA=="],
"oh-my-opencode-linux-arm64": ["oh-my-opencode-linux-arm64@3.7.4", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-TZIsK6Dl6yX6pSTocls91bjnvoY/6/kiGnmgdsoDKcPYZ7XuBQaJwH0dK7t9/sxuDI+wKhmtrmLwKSoYOIqsRw=="],
"oh-my-opencode-linux-arm64": ["oh-my-opencode-linux-arm64@3.6.0", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-KjCSC2i9XdjzGsX6coP9xwj7naxTpdqnB53TiLbVH+KeF0X0dNsVV7PHbme3I1orjjzYoEbVYVC3ZNaleubzog=="],
"oh-my-opencode-linux-arm64-musl": ["oh-my-opencode-linux-arm64-musl@3.7.4", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-UwPOoQP0+1eCKP/XTDsnLJDK5jayiL4VrKz0lfRRRojl1FWvInmQumnDnluvnxW6knU7dFM3yDddlZYG6tEgcw=="],
"oh-my-opencode-linux-arm64-musl": ["oh-my-opencode-linux-arm64-musl@3.6.0", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-EARvFQXnkqSnwPpKtghmoV5e/JmweJXhjcOrRNvEwQ8HSb4FIhdRmJkTw4Z/EzyoIRTQcY019ALOiBbdIiOUEA=="],
"oh-my-opencode-linux-x64": ["oh-my-opencode-linux-x64@3.7.4", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-+TeA0Bs5wK9EMfKiEEFfyfVqdBDUjDzN8POF8JJibN0GPy1oNIGGEWIJG2cvC5onpnYEvl448vkFbkCUK0g9SQ=="],
"oh-my-opencode-linux-x64": ["oh-my-opencode-linux-x64@3.6.0", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-jYyew4NKAOM6NrMM0+LlRlz6s1EVMI9cQdK/o0t8uqFheZVeb7u4cBZwwfhJ79j7EWkSWGc0Jdj9G2dOukbDxg=="],
"oh-my-opencode-linux-x64-musl": ["oh-my-opencode-linux-x64-musl@3.7.4", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-YzX6wFtk8RoTHkAZkfLCVyCU4yjN8D7agj/jhOnFKW50fZYa8zX+/4KLZx0IfanVpXTgrs3iiuKoa87KLDfCxQ=="],
"oh-my-opencode-linux-x64-musl": ["oh-my-opencode-linux-x64-musl@3.6.0", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-BrR+JftCXP/il04q2uImWIueCiuTmXbivsXYkfFONdO1Rq9b4t0BVua9JIYk7l3OUfeRlrKlFNYNfpFhvVADOw=="],
"oh-my-opencode-windows-x64": ["oh-my-opencode-windows-x64@3.7.4", "", { "os": "win32", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode.exe" } }, "sha512-x39M2eFJI6pqv4go5Crf1H2SbPGFmXHIDNtbsSa5nRNcrqTisLrYGW8uXpOrqjntBeTAUBdwZmmoy6zgxHsz8w=="],
"oh-my-opencode-windows-x64": ["oh-my-opencode-windows-x64@3.6.0", "", { "os": "win32", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode.exe" } }, "sha512-cIYQYzcQGhGFE99ulHGXs8S1vDHjgCtT3ID2dDoOztnOQW0ZVa61oCHlkBtjdP/BEv2tH5AGvKrXAICXs19iFw=="],
"on-finished": ["on-finished@2.4.1", "", { "dependencies": { "ee-first": "1.1.1" } }, "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"optimist": ["optimist@0.3.7", "", { "dependencies": { "wordwrap": "~0.0.2" } }, "sha512-TCx0dXQzVtSCg2OgY/bO9hjM9cV4XYx09TVK+s3+FhkjT6LovsLe+pPMzpWf+6yXK/hUizs2gUoTw3jHM0VaTQ=="],
"options": ["options@0.0.6", "", {}, "sha512-bOj3L1ypm++N+n7CEbbe473A414AB7z+amKYshRb//iuL3MpdDCLhPnw6aVTdKB9g5ZRVHIEp8eUln6L2NUStg=="],
"orchid": ["orchid@0.0.3", "", { "dependencies": { "drip": "0.2.x", "oath": "0.2.x", "ws": "0.4.x" } }, "sha512-jkbcOxPnbo9M0WZbvjvTKLY+2lhxyWnoJXKESHodJAD00bsqOe5YPrJZ2rjgBKJ4YIgmbKSMlsjNIZ8NNhXbOA=="],
"parseurl": ["parseurl@1.3.3", "", {}, "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ=="],
"path-key": ["path-key@3.1.1", "", {}, "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q=="],
@@ -309,8 +266,6 @@
"require-from-string": ["require-from-string@2.0.2", "", {}, "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw=="],
"rimraf": ["rimraf@2.0.3", "", { "optionalDependencies": { "graceful-fs": "~1.1" } }, "sha512-uR09PSoW2+1hW0hquRqxb+Ae2h6R5ls3OAy2oNekQFtqbSJkltkhKRa+OhZKoxWsN9195Gp1vg7sELDRoJ8a3w=="],
"router": ["router@2.2.0", "", { "dependencies": { "debug": "^4.4.0", "depd": "^2.0.0", "is-promise": "^4.0.0", "parseurl": "^1.3.3", "path-to-regexp": "^8.0.0" } }, "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ=="],
"safer-buffer": ["safer-buffer@2.1.2", "", {}, "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="],
@@ -337,12 +292,6 @@
"statuses": ["statuses@2.0.2", "", {}, "sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw=="],
"stylus": ["stylus@0.26.1", "", { "dependencies": { "cssom": "0.2.x", "debug": "*", "mkdirp": "0.3.x" }, "bin": { "stylus": "./bin/stylus" } }, "sha512-33J3iBM2Ueh/wDFzkQXmjHSDxNRWQ7J2I2dqiInAKkGR4j+3hkojRRSbv3ITodxJBIodVfv0l10CHZhJoi0Ubw=="],
"tea": ["tea@0.0.13", "", { "dependencies": { "drip": "0.2.x", "oath": "0.2.x", "orchid": "0.0.x" } }, "sha512-wpVkMmrK83yrwjnBYtN/GKzA0ixt1k68lq4g0s0H38fZTPHeApnToCVzpQgDEToNoBbviHQaOhXcMldHnM+XwQ=="],
"tinycolor": ["tinycolor@0.0.1", "", {}, "sha512-+CorETse1kl98xg0WAzii8DTT4ABF4R3nquhrkIbVGcw1T8JYs5Gfx9xEfGINPUZGDj9C4BmOtuKeaTtuuRolg=="],
"toidentifier": ["toidentifier@1.0.1", "", {}, "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA=="],
"type-is": ["type-is@2.0.1", "", { "dependencies": { "content-type": "^1.0.5", "media-typer": "^1.1.0", "mime-types": "^3.0.0" } }, "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw=="],
@@ -359,22 +308,10 @@
"which": ["which@2.0.2", "", { "dependencies": { "isexe": "^2.0.0" }, "bin": { "node-which": "./bin/node-which" } }, "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA=="],
"wordwrap": ["wordwrap@0.0.3", "", {}, "sha512-1tMA907+V4QmxV7dbRvb4/8MaRALK6q9Abid3ndMYnbyo8piisCmeONVqVSXqQA3KaP4SLt5b7ud6E2sqP8TFw=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"ws": ["ws@0.4.32", "", { "dependencies": { "commander": "~2.1.0", "nan": "~1.0.0", "options": ">=0.0.5", "tinycolor": "0.x" }, "bin": { "wscat": "./bin/wscat" } }, "sha512-htqsS0U9Z9lb3ITjidQkRvkLdVhQePrMeu475yEfOWkAYvJ6dSjQp1tOH6ugaddzX5b7sQjMPNtY71eTzrV/kA=="],
"yaml": ["yaml@0.2.3", "", {}, "sha512-LzdhmhritYCRww8GLH95Sk5A2c18ddRQMeooOUnqWkDUnBbmVfqgg2fXH2MxAHYHCVTHDK1EEbmgItQ8kOpM0Q=="],
"zod": ["zod@4.1.8", "", {}, "sha512-5R1P+WwQqmmMIEACyzSvo4JXHY5WiAFHRMg+zBZKgKS+Q1viRa0C1hmUKtHltoIFKtIdki3pRxkmpP74jnNYHQ=="],
"zod-to-json-schema": ["zod-to-json-schema@3.25.1", "", { "peerDependencies": { "zod": "^3.25 || ^4" } }, "sha512-pM/SU9d3YAggzi6MtR4h7ruuQlqKtad8e9S0fmxcMi+ueAK5Korys/aWcV9LIIHTVbj01NdzxcnXSN+O74ZIVA=="],
"dox/commander": ["commander@0.6.1", "", {}, "sha512-0fLycpl1UMTGX257hRsu/arL/cUbcvQM4zMKwvLvzXtfdezIV4yotPS2dYtknF+NmEfWSoCEF6+hj9XLm/6hEw=="],
"jade/commander": ["commander@0.6.1", "", {}, "sha512-0fLycpl1UMTGX257hRsu/arL/cUbcvQM4zMKwvLvzXtfdezIV4yotPS2dYtknF+NmEfWSoCEF6+hj9XLm/6hEw=="],
"ws/commander": ["commander@2.1.0", "", {}, "sha512-J2wnb6TKniXNOtoHS8TSrG9IOQluPrsmyAJ8oCUJOBmv+uLBCyPYAZkD2jFvw2DCzIXNnISIM01NIvr35TkBMQ=="],
}
}


@@ -28,7 +28,7 @@ A Category is an agent configuration preset optimized for specific domains.
| `quick` | `anthropic/claude-haiku-4-5` | Trivial tasks - single file changes, typo fixes, simple modifications |
| `unspecified-low` | `anthropic/claude-sonnet-4-6` | Tasks that don't fit other categories, low effort required |
| `unspecified-high` | `anthropic/claude-opus-4-6` (max) | Tasks that don't fit other categories, high effort required |
| `writing` | `kimi-for-coding/k2p5` | Documentation, prose, technical writing |
| `writing` | `google/gemini-3-flash` | Documentation, prose, technical writing |
### Usage


@@ -23,8 +23,8 @@ npx oh-my-opencode
| `install` | Interactive Setup Wizard |
| `doctor` | Environment diagnostics and health checks |
| `run` | OpenCode session runner |
| `mcp oauth` | MCP OAuth authentication management |
| `get-local-version` | Display local version information |
| `auth` | Google Antigravity authentication management |
| `version` | Display version information |
---
@@ -131,15 +131,6 @@ bunx oh-my-opencode run [prompt]
|--------|-------------|
| `--enforce-completion` | Keep session active until all TODOs are completed |
| `--timeout <seconds>` | Set maximum execution time |
| `--agent <name>` | Specify agent to use |
| `--directory <path>` | Set working directory |
| `--port <number>` | Set port for session |
| `--attach` | Attach to existing session |
| `--json` | Output in JSON format |
| `--no-timestamp` | Disable timestamped output |
| `--session-id <id>` | Resume existing session |
| `--on-complete <action>` | Action on completion |
| `--verbose` | Enable verbose logging |
---
@@ -276,17 +267,14 @@ bunx oh-my-opencode doctor --json > doctor-report.json
```
src/cli/
├── cli-program.ts # Commander.js-based main entry
├── index.ts # Commander.js-based main entry
├── install.ts # @clack/prompts-based TUI installer
├── config-manager/ # JSONC parsing, multi-source config management
│ └── *.ts
├── config-manager.ts # JSONC parsing, multi-source config management
├── doctor/ # Health check system
│ ├── index.ts # Doctor command entry
│ └── checks/ # 17+ individual check modules
├── run/ # Session runner
│ └── *.ts
└── mcp-oauth/ # OAuth management commands
└── *.ts
└── commands/auth.ts # Authentication management
```
### Adding New Doctor Checks


@@ -163,20 +163,19 @@ Override built-in agent settings:
}
```
Each agent supports: `model`, `fallback_models`, `temperature`, `top_p`, `prompt`, `prompt_append`, `tools`, `disable`, `description`, `mode`, `color`, `permission`, `category`, `variant`, `maxTokens`, `thinking`, `reasoningEffort`, `textVerbosity`, `providerOptions`.
Each agent supports: `model`, `temperature`, `top_p`, `prompt`, `prompt_append`, `tools`, `disable`, `description`, `mode`, `color`, `permission`, `category`, `variant`, `maxTokens`, `thinking`, `reasoningEffort`, `textVerbosity`, `providerOptions`.
### Additional Agent Options
| Option | Type | Description |
| ------------------- | -------------- | ----------------------------------------------------------------------------------------------- |
| `fallback_models` | string/array | Fallback models for runtime switching on API errors. Single string or array of model strings. |
| `category` | string | Category name to inherit model and other settings from category defaults |
| `variant` | string | Model variant (e.g., `max`, `high`, `medium`, `low`, `xhigh`) |
| `maxTokens` | number | Maximum tokens for response. Passed directly to OpenCode SDK. |
| `thinking` | object | Extended thinking configuration for Anthropic models. See [Thinking Options](#thinking-options) below. |
| `reasoningEffort` | string | OpenAI reasoning effort level. Values: `low`, `medium`, `high`, `xhigh`. |
| `textVerbosity` | string | Text verbosity level. Values: `low`, `medium`, `high`. |
| `providerOptions` | object | Provider-specific options passed directly to OpenCode SDK. |
| Option | Type | Description |
| ------------------- | ------- | ----------------------------------------------------------------------------------------------- |
| `category` | string | Category name to inherit model and other settings from category defaults |
| `variant` | string | Model variant (e.g., `max`, `high`, `medium`, `low`, `xhigh`) |
| `maxTokens` | number | Maximum tokens for response. Passed directly to OpenCode SDK. |
| `thinking` | object | Extended thinking configuration for Anthropic models. See [Thinking Options](#thinking-options) below. |
| `reasoningEffort` | string | OpenAI reasoning effort level. Values: `low`, `medium`, `high`, `xhigh`. |
| `textVerbosity` | string | Text verbosity level. Values: `low`, `medium`, `high`. |
| `providerOptions` | object | Provider-specific options passed directly to OpenCode SDK. |
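Taken together, a per-agent override combining these options might look like the following sketch (illustrative values, not shipped defaults):

```json
{
  "agents": {
    "oracle": {
      "category": "ultrabrain",
      "variant": "high",
      "maxTokens": 32000,
      "reasoningEffort": "high",
      "textVerbosity": "low"
    }
  }
}
```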
#### Thinking Options (Anthropic)
@@ -715,84 +714,6 @@ Configure concurrency limits for background agent tasks. This controls how many
- Allow more concurrent tasks for fast/cheap models (e.g., Gemini Flash)
- Respect provider rate limits by setting provider-level caps
## Runtime Fallback
Automatically switch to backup models when the primary model encounters retryable API errors (rate limits, overload, etc.) or provider key misconfiguration errors (for example, missing API key). This keeps conversations running without manual intervention.
```json
{
"runtime_fallback": {
"enabled": true,
"retry_on_errors": [400, 429, 503, 529],
"max_fallback_attempts": 3,
"cooldown_seconds": 60,
"timeout_seconds": 30,
"notify_on_fallback": true
}
}
```
| Option | Default | Description |
| ----------------------- | ---------------------- | --------------------------------------------------------------------------- |
| `enabled` | `true` | Enable runtime fallback |
| `retry_on_errors` | `[400, 429, 503, 529]` | HTTP status codes that trigger fallback (rate limit, service unavailable). Also supports certain classified provider errors (for example, missing API key) that do not expose HTTP status codes. |
| `max_fallback_attempts` | `3` | Maximum fallback attempts per session (1-20) |
| `cooldown_seconds` | `60` | Cooldown in seconds before retrying a failed model |
| `timeout_seconds` | `30` | Timeout in seconds for an in-flight fallback request before forcing the next fallback model. **⚠️ Set to `0` to disable auto-retry signal detection** (see below). |
| `notify_on_fallback` | `true` | Show toast notification when switching to a fallback model |
### timeout_seconds: Understanding the 0 Value
**⚠️ IMPORTANT**: Setting `timeout_seconds: 0` **disables auto-retry signal detection**. This is a critical behavior change:
| Setting | Behavior |
|---------|----------|
| `timeout_seconds: 30` (default) | ✅ **Full fallback coverage**: Error-based fallback (429, 503, etc.) + auto-retry signal detection (provider messages like "retrying in 8h") |
| `timeout_seconds: 0` | ⚠️ **Limited fallback**: Only error-based fallback works. Provider retry messages are **completely ignored**. Timeout-based escalation is **disabled**. |
**When `timeout_seconds: 0`:**
- ✅ HTTP errors (429, 503, 529) still trigger fallback
- ✅ Provider key errors (missing API key) still trigger fallback
- ❌ Provider retry messages ("retrying in Xh") are **ignored**
- ❌ Timeout-based escalation is **disabled**
- ❌ Hanging requests do **not** advance to the next fallback model
**Recommendation**: Use a non-zero value (e.g., `30` seconds) to enable full fallback coverage. Only set to `0` if you explicitly want to disable auto-retry signal detection.
### How It Works
1. When an API error matching `retry_on_errors` occurs (or a classified provider key error such as missing API key), the hook intercepts it
2. The next request automatically uses the next available model from `fallback_models`
3. Failed models enter a cooldown period before being retried
4. If `timeout_seconds > 0` and a fallback provider hangs, timeout advances to the next fallback model
5. Toast notification (optional) informs you of the model switch
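The cooldown step above can be sketched as follows. This is a minimal illustration of the selection logic (the types and function names are assumptions for the example, not the actual hook implementation): skip any fallback model still inside its cooldown window and return the next available one.

```typescript
// Illustrative sketch of cooldown-aware fallback selection (assumed shapes,
// not the real hook code).
interface ModelState {
  name: string;
  failedAt?: number; // epoch ms of the model's last failure, if any
}

function nextFallback(
  models: ModelState[],
  cooldownSeconds: number,
  now: number = Date.now(),
): string | null {
  for (const m of models) {
    const coolingDown =
      m.failedAt !== undefined && now - m.failedAt < cooldownSeconds * 1000;
    // First model not in cooldown wins; order encodes fallback priority.
    if (!coolingDown) return m.name;
  }
  return null; // every fallback is cooling down; caller retries the primary
}
```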
### Configuring Fallback Models
Define `fallback_models` at the agent or category level:
```json
{
"agents": {
"sisyphus": {
"model": "anthropic/claude-opus-4-5",
"fallback_models": ["openai/gpt-5.2", "google/gemini-3-pro"]
}
},
"categories": {
"ultrabrain": {
"model": "openai/gpt-5.2-codex",
"fallback_models": ["anthropic/claude-opus-4-5", "google/gemini-3-pro"]
}
}
}
```
When the primary model fails:
1. First fallback: `openai/gpt-5.2`
2. Second fallback: `google/gemini-3-pro`
3. After `max_fallback_attempts`, returns to primary model
## Categories
Categories enable domain-specific task delegation via the `task` tool. Each category applies runtime presets (model, temperature, prompt additions) when calling the `Sisyphus-Junior` agent.
@@ -909,75 +830,15 @@ Add your own categories or override built-in ones:
}
```
Each category supports: `model`, `fallback_models`, `temperature`, `top_p`, `maxTokens`, `thinking`, `reasoningEffort`, `textVerbosity`, `tools`, `prompt_append`, `variant`, `description`, `is_unstable_agent`.
Each category supports: `model`, `temperature`, `top_p`, `maxTokens`, `thinking`, `reasoningEffort`, `textVerbosity`, `tools`, `prompt_append`, `variant`, `description`, `is_unstable_agent`.
### Additional Category Options
| Option | Type | Default | Description |
| ------------------- | ------------ | ------- | --------------------------------------------------------------------------------------------------- |
| `fallback_models` | string/array | - | Fallback models for runtime switching on API errors. Single string or array of model strings. |
| `description` | string | - | Human-readable description of the category's purpose. Shown in delegate_task prompt. |
| `is_unstable_agent` | boolean | `false` | Mark agent as unstable - forces background mode for monitoring. Auto-enabled for gemini models. |
| Option | Type | Default | Description |
| ------------------ | ------- | ------- | --------------------------------------------------------------------------------------------------- |
| `description` | string | - | Human-readable description of the category's purpose. Shown in task prompt. |
| `is_unstable_agent`| boolean | `false` | Mark agent as unstable - forces background mode for monitoring. Auto-enabled for gemini models. |
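A minimal category override using the options above might look like this (illustrative values):

```json
{
  "categories": {
    "writing": {
      "model": "google/gemini-3-flash",
      "temperature": 0.7,
      "description": "Documentation, prose, technical writing",
      "is_unstable_agent": false
    }
  }
}
```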
## Runtime Fallback
Automatically switch to backup models when the primary model encounters retryable API errors (rate limits, overload, etc.) or provider key misconfiguration errors (for example, missing API key). This keeps conversations running without manual intervention.
```json
{
"runtime_fallback": {
"enabled": true,
"retry_on_errors": [429, 503, 529],
"max_fallback_attempts": 3,
"cooldown_seconds": 60,
"timeout_seconds": 30,
"notify_on_fallback": true
}
}
```
| Option | Default | Description |
| ----------------------- | ----------------- | --------------------------------------------------------------------------- |
| `enabled` | `true` | Enable runtime fallback |
| `retry_on_errors` | `[429, 503, 529]` | HTTP status codes that trigger fallback (rate limit, service unavailable). Also supports certain classified provider errors (for example, missing API key) that do not expose HTTP status codes. |
| `max_fallback_attempts` | `3` | Maximum fallback attempts per session (1-10) |
| `cooldown_seconds` | `60` | Cooldown in seconds before retrying a failed model |
| `timeout_seconds` | `30` | Timeout in seconds for an in-flight fallback request before forcing the next fallback model. Set to `0` to disable timeout-based fallback and provider quota retry signal detection. |
| `notify_on_fallback` | `true` | Show toast notification when switching to a fallback model |
### How It Works
1. When an API error matching `retry_on_errors` occurs (or a classified provider key error such as missing API key), the hook intercepts it
2. The next request automatically uses the next available model from `fallback_models`
3. Failed models enter a cooldown period before being retried
4. If a fallback provider hangs, timeout advances to the next fallback model
5. Toast notification (optional) informs you of the model switch
### Configuring Fallback Models
Define `fallback_models` at the agent or category level:
```json
{
"agents": {
"sisyphus": {
"model": "anthropic/claude-opus-4-5",
"fallback_models": ["openai/gpt-5.2", "google/gemini-3-pro"]
}
},
"categories": {
"ultrabrain": {
"model": "openai/gpt-5.2-codex",
"fallback_models": ["anthropic/claude-opus-4-5", "google/gemini-3-pro"]
}
}
}
```
When the primary model fails:
1. First fallback: `openai/gpt-5.2`
2. Second fallback: `google/gemini-3-pro`
3. After `max_fallback_attempts`, returns to primary model
## Model Resolution System
At runtime, Oh My OpenCode uses a 3-step resolution process to determine which model to use for each agent and category. This happens dynamically based on your configuration and available models.
@@ -1112,7 +973,7 @@ Disable specific built-in hooks via `disabled_hooks` in `~/.config/opencode/oh-m
}
```
Available hooks: `todo-continuation-enforcer`, `context-window-monitor`, `session-recovery`, `session-notification`, `comment-checker`, `grep-output-truncator`, `tool-output-truncator`, `directory-agents-injector`, `directory-readme-injector`, `empty-task-response-detector`, `think-mode`, `anthropic-context-window-limit-recovery`, `rules-injector`, `background-notification`, `auto-update-checker`, `startup-toast`, `keyword-detector`, `agent-usage-reminder`, `non-interactive-env`, `interactive-bash-session`, `compaction-context-injector`, `thinking-block-validator`, `claude-code-hooks`, `ralph-loop`, `preemptive-compaction`, `auto-slash-command`, `sisyphus-junior-notepad`, `no-sisyphus-gpt`, `start-work`, `runtime-fallback`
Available hooks: `todo-continuation-enforcer`, `context-window-monitor`, `session-recovery`, `session-notification`, `comment-checker`, `grep-output-truncator`, `tool-output-truncator`, `directory-agents-injector`, `directory-readme-injector`, `empty-task-response-detector`, `think-mode`, `anthropic-context-window-limit-recovery`, `rules-injector`, `background-notification`, `auto-update-checker`, `startup-toast`, `keyword-detector`, `agent-usage-reminder`, `non-interactive-env`, `interactive-bash-session`, `compaction-context-injector`, `thinking-block-validator`, `claude-code-hooks`, `ralph-loop`, `preemptive-compaction`, `auto-slash-command`, `sisyphus-junior-notepad`, `no-sisyphus-gpt`, `start-work`
**Note on `directory-agents-injector`**: This hook is **automatically disabled** when running on OpenCode 1.1.37+ because OpenCode now has native support for dynamically resolving AGENTS.md files from subdirectories (PR #10678). This prevents duplicate AGENTS.md injection. For older OpenCode versions, the hook remains active to provide the same functionality.
@@ -1120,34 +981,6 @@ Available hooks: `todo-continuation-enforcer`, `context-window-monitor`, `sessio
**Note on `auto-update-checker` and `startup-toast`**: The `startup-toast` hook is a sub-feature of `auto-update-checker`. To disable only the startup toast notification while keeping update checking enabled, add `"startup-toast"` to `disabled_hooks`. To disable all update checking features (including the toast), add `"auto-update-checker"` to `disabled_hooks`.
## Hashline Edit
Oh My OpenCode replaces OpenCode's built-in `Edit` tool with a hash-anchored version that uses `LINE#ID` references (e.g. `5#VK`) instead of bare line numbers. This prevents stale-line edits by validating content hash before applying each change.
Enabled by default. Set `hashline_edit: false` to opt out and restore standard file editing.
```json
{
"hashline_edit": false
}
```
| Option | Default | Description |
|--------|---------|-------------|
| `hashline_edit` | `true` | Enable hash-anchored `Edit` tool and companion hooks. When `false`, falls back to standard editing without hash validation. |
When enabled, two companion hooks are also active:
- **`hashline-read-enhancer`** — Appends `LINE#ID:content` annotations to `Read` output so agents always have fresh anchors.
- **`hashline-edit-diff-enhancer`** — Shows a unified diff in `Edit` / `Write` output for immediate change visibility.
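To make the `LINE#ID` idea concrete, here is one way such an anchor could be derived — an assumed scheme for illustration only, not the actual implementation: a short hash of the line's content lets the Edit tool reject a change whose target line has drifted since it was last read.

```typescript
// Illustrative sketch (assumed scheme): derive a short content-hash anchor
// like "5#VK" for a given line. A stale line yields a different ID, so the
// edit is rejected instead of silently applied to the wrong content.
import { createHash } from "node:crypto";

function lineAnchor(lineNumber: number, content: string): string {
  const id = createHash("sha256")
    .update(content)
    .digest("base64url")
    .slice(0, 2)
    .toUpperCase();
  return `${lineNumber}#${id}`;
}
```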
To disable only the hooks while keeping the hash-anchored Edit tool:
```json
{
"disabled_hooks": ["hashline-read-enhancer", "hashline-edit-diff-enhancer"]
}
```
## Disabled Commands
Disable specific built-in commands via `disabled_commands` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:
@@ -1300,7 +1133,6 @@ Opt-in experimental features that may change or be removed in future versions. U
"truncate_all_tool_outputs": true,
"aggressive_truncation": true,
"auto_resume": true,
"disable_omo_env": false,
"dynamic_context_pruning": {
"enabled": false,
"notification": "detailed",
@@ -1332,7 +1164,6 @@ Opt-in experimental features that may change or be removed in future versions. U
| `truncate_all_tool_outputs` | `false` | Truncates ALL tool outputs instead of just whitelisted tools (Grep, Glob, LSP, AST-grep). Tool output truncator is enabled by default - disable via `disabled_hooks`. |
| `aggressive_truncation` | `false` | When token limit is exceeded, aggressively truncates tool outputs to fit within limits. More aggressive than the default truncation behavior. Falls back to summarize/revert if insufficient. |
| `auto_resume` | `false` | Automatically resumes session after successful recovery from thinking block errors or thinking disabled violations. Extracts last user message and continues. |
| `disable_omo_env` | `false` | When `true`, disables auto-injected `<omo-env>` block generation (date, time, timezone, locale). When unset or `false`, current behavior is preserved. Setting this to `true` will improve the cache hit rate and reduce the API cost. |
| `dynamic_context_pruning` | See below | Dynamic context pruning configuration for managing context window usage automatically. See [Dynamic Context Pruning](#dynamic-context-pruning) below. |
### Dynamic Context Pruning


@@ -10,12 +10,12 @@ Oh-My-OpenCode provides 11 specialized AI agents. Each has distinct expertise, o
| Agent | Model | Purpose |
|-------|-------|---------|
| **Sisyphus** | `anthropic/claude-opus-4-6` | **The default orchestrator.** Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: k2p5 → kimi-k2.5-free → glm-5 → big-pickle. |
| **Sisyphus** | `anthropic/claude-opus-4-6` | **The default orchestrator.** Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: k2p5 → kimi-k2.5-free → glm-4.7 → glm-4.7-free. |
| **Hephaestus** | `openai/gpt-5.3-codex` | **The Legitimate Craftsman.** Autonomous deep worker inspired by AmpCode's deep mode. Goal-oriented execution with thorough research before action. Explores codebase patterns, completes tasks end-to-end without premature stopping. Named after the Greek god of forge and craftsmanship. Requires gpt-5.3-codex (no fallback - only activates when this model is available). |
| **oracle** | `openai/gpt-5.2` | Architecture decisions, code review, debugging. Read-only consultation - stellar logical reasoning and deep analysis. Inspired by AmpCode. |
| **librarian** | `google/gemini-3-flash` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: minimax-m2.5-free → big-pickle. |
| **explore** | `github-copilot/grok-code-fast-1` | Fast codebase exploration and contextual grep. Fallback: minimax-m2.5-free → claude-haiku-4-5 → gpt-5-nano. |
| **multimodal-looker** | `kimi-for-coding/k2p5` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: kimi-k2.5-free → gemini-3-flash → gpt-5.2 → glm-4.6v. |
| **librarian** | `zai-coding-plan/glm-4.7` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: glm-4.7-free → claude-sonnet-4-6. |
| **explore** | `github-copilot/grok-code-fast-1` | Fast codebase exploration and contextual grep. Fallback: claude-haiku-4-5 → gpt-5-nano. |
| **multimodal-looker** | `google/gemini-3-flash` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: gpt-5.2 → glm-4.6v → k2p5 → kimi-k2.5-free → claude-haiku-4-5 → gpt-5-nano. |
### Planning Agents
@@ -352,7 +352,6 @@ Hooks intercept and modify behavior at key points in the agent lifecycle.
| **session-recovery** | Stop | Recovers from session errors - missing tool results, thinking block issues, empty messages. |
| **anthropic-context-window-limit-recovery** | Stop | Handles Claude context window limits gracefully. |
| **background-compaction** | Stop | Auto-compacts sessions hitting token limits. |
| **runtime-fallback** | Event | Automatically switches to backup models on retryable API errors (e.g., 429, 503, 529), provider key misconfiguration errors (e.g., missing API key), and auto-retry signals (when `timeout_seconds > 0`). Configurable retry logic with per-model cooldown. See [Runtime Fallback Configuration](configurations.md#runtime-fallback) for details on `timeout_seconds` behavior. |
#### Truncation & Context Management


@@ -1,193 +0,0 @@
# Agent-Model Matching Guide
> **For agents and users**: How to pick the right model for each agent. Read this before customizing model settings.
Run `opencode models` to see all available models on your system, and `opencode auth login` to authenticate with providers.
---
## Model Families: Know Your Options
Not all models behave the same way. Understanding which models are "similar" helps you make safe substitutions.
### Claude-like Models (instruction-following, structured output)
These models respond similarly to Claude and work well with oh-my-opencode's Claude-optimized prompts:
| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **Claude Opus 4.6** | anthropic, github-copilot, opencode | Best overall. Default for Sisyphus. |
| **Claude Sonnet 4.6** | anthropic, github-copilot, opencode | Faster, cheaper. Good balance. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast and cheap. Good for quick tasks. |
| **Kimi K2.5** | kimi-for-coding | Behaves very similarly to Claude. Great all-rounder. Default for Atlas. |
| **Kimi K2.5 Free** | opencode | Free-tier Kimi. Rate-limited but functional. |
| **GLM 5** | zai-coding-plan, opencode | Claude-like behavior. Good for broad tasks. |
| **Big Pickle (GLM 4.6)** | opencode | Free-tier GLM. Decent fallback. |
### GPT Models (explicit reasoning, principle-driven)
GPT models need differently structured prompts. Some agents auto-detect GPT and switch prompts:
| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **GPT-5.3-codex** | openai, github-copilot, opencode | Deep coding powerhouse. Required for Hephaestus. |
| **GPT-5.2** | openai, github-copilot, opencode | High intelligence. Default for Oracle. |
| **GPT-5-Nano** | opencode | Ultra-cheap, fast. Good for simple utility tasks. |
### Different-Behavior Models
These models have unique characteristics — don't assume they'll behave like Claude or GPT:
| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **Gemini 3 Pro** | google, github-copilot, opencode | Excels at visual/frontend tasks. Different reasoning style. |
| **Gemini 3 Flash** | google, github-copilot, opencode | Fast, good for doc search and light tasks. |
| **MiniMax M2.5** | venice | Fast and smart. Good for utility tasks. |
| **MiniMax M2.5 Free** | opencode | Free-tier MiniMax. Fast for search/retrieval. |
### Speed-Focused Models
| Model | Provider(s) | Speed | Notes |
|-------|-------------|-------|-------|
| **Grok Code Fast 1** | github-copilot, venice | Very fast | Optimized for code grep/search. Default for Explore. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast | Good balance of speed and intelligence. |
| **MiniMax M2.5 (Free)** | opencode, venice | Fast | Smart for its speed class. |
| **GPT-5.3-codex-spark** | openai | Extremely fast | Blazing fast but compacts so aggressively that oh-my-opencode's context management doesn't work well with it. Not recommended for omo agents. |
---
## Agent Roles and Recommended Models
### Claude-Optimized Agents
These agents have prompts tuned for Claude-family models. Use Claude > Kimi K2.5 > GLM 5 in that priority order.
| Agent | Role | Default Chain | What It Does |
|-------|------|---------------|--------------|
| **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. **Never use GPT — no GPT prompt exists.** |
| **Metis** | Plan review | Opus (max) → Kimi K2.5 → GPT-5.2 → Gemini 3 Pro | Reviews Prometheus plans for gaps. |
### Dual-Prompt Agents (Claude + GPT auto-switch)
These agents detect your model family at runtime and switch to the appropriate prompt. If you have GPT access, these agents can use it effectively.
Priority: **Claude > GPT > Claude-like models**
| Agent | Role | Default Chain | GPT Prompt? |
|-------|------|---------------|-------------|
| **Prometheus** | Strategic planner | Opus (max) → **GPT-5.2 (high)** → Kimi K2.5 → Gemini 3 Pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
| **Atlas** | Todo orchestrator | **Kimi K2.5** → Sonnet → GPT-5.2 | Yes — GPT-optimized todo management |
### GPT-Native Agents
These agents are built for GPT. Don't override to Claude.
| Agent | Role | Default Chain | Notes |
|-------|------|---------------|-------|
| **Hephaestus** | Deep autonomous worker | GPT-5.3-codex (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
| **Oracle** | Architecture/debugging | GPT-5.2 (high) → Gemini 3 Pro → Opus | High-IQ strategic backup. GPT preferred. |
| **Momus** | High-accuracy reviewer | GPT-5.2 (medium) → Opus → Gemini 3 Pro | Verification agent. GPT preferred. |
### Utility Agents (Speed > Intelligence)
These agents do search, grep, and retrieval. They intentionally use fast, cheap models. **Don't "upgrade" them to Opus — it wastes tokens on simple tasks.**
| Agent | Role | Default Chain | Design Rationale |
|-------|------|---------------|------------------|
| **Explore** | Fast codebase grep | MiniMax M2.5 Free → Grok Code Fast → MiniMax M2.5 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
| **Librarian** | Docs/code search | MiniMax M2.5 Free → Gemini Flash → Big Pickle | Entirely free-tier. Doc retrieval doesn't need deep reasoning. |
| **Multimodal Looker** | Vision/screenshots | Kimi K2.5 → Kimi Free → Gemini Flash → GPT-5.2 → GLM-4.6v | Kimi excels at multimodal understanding. |
---
## Task Categories
Categories control which model is used for `background_task` and `delegate_task`. See the [Orchestration System Guide](./understanding-orchestration-system.md) for how agents dispatch tasks to categories.
| Category | When Used | Recommended Models | Notes |
|----------|-----------|-------------------|-------|
| `visual-engineering` | Frontend, UI, CSS, design | Gemini 3 Pro (high) → GLM 5 → Opus → Kimi K2.5 | Gemini dominates visual tasks |
| `ultrabrain` | Maximum reasoning needed | GPT-5.3-codex (xhigh) → Gemini 3 Pro → Opus | Highest intelligence available |
| `deep` | Deep coding, complex logic | GPT-5.3-codex (medium) → Opus → Gemini 3 Pro | Requires GPT availability |
| `artistry` | Creative, novel approaches | Gemini 3 Pro (high) → Opus → GPT-5.2 | Requires Gemini availability |
| `quick` | Simple, fast tasks | Haiku → Gemini Flash → GPT-5-Nano | Cheapest and fastest |
| `unspecified-high` | General complex work | Opus (max) → GPT-5.2 (high) → Gemini 3 Pro | Default when no category fits |
| `unspecified-low` | General standard work | Sonnet → GPT-5.3-codex (medium) → Gemini Flash | Everyday tasks |
| `writing` | Text, docs, prose | Kimi K2.5 → Gemini Flash → Sonnet | Kimi produces best prose |
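For example, a delegation into a category follows the `task(...)` shape used throughout the agent prompts; the category and skill names here are illustrative, taken from the tables in this guide:

```
task(
  category="visual-engineering",
  load_skills=["frontend-ui-ux"],
  run_in_background=false,
  prompt="Polish the settings page layout; match the existing design tokens."
)
```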
---
## Why Different Models Need Different Prompts
Claude and GPT models have fundamentally different instruction-following behaviors:
- **Claude models** respond well to **mechanics-driven** prompts — detailed checklists, templates, step-by-step procedures. More rules = more compliance.
- **GPT models** (especially 5.2+) respond better to **principle-driven** prompts — concise principles, XML-tagged structure, explicit decision criteria. More rules = more contradiction surface = more drift.
Key insight from Codex Plan Mode analysis:
- Codex Plan Mode achieves with 3 principles in ~121 lines what Prometheus's Claude prompt needs ~1,100 lines across 7 files to accomplish
- The core concept is **"Decision Complete"** — a plan must leave ZERO decisions to the implementer
- GPT follows this literally when stated as a principle; Claude needs enforcement mechanisms
This is why Prometheus and Atlas ship separate prompts per model family — they auto-detect and switch at runtime via `isGptModel()`.
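As a rough illustration, the runtime switch could look like the sketch below. `isGptModel()` exists in the codebase, but the body shown here is an assumption; the real implementation may use different heuristics:

```typescript
// Hypothetical sketch of GPT-family detection. Assumption: the check is a
// simple substring match on the model id; the real isGptModel() may differ.
function isGptModel(modelId: string): boolean {
  const id = modelId.toLowerCase()
  return id.includes("gpt-") || id.includes("codex")
}

// Hypothetical prompt selection: dual-prompt agents pick the prompt variant
// that matches the detected model family.
function selectPrompt(modelId: string, claudePrompt: string, gptPrompt: string): string {
  return isGptModel(modelId) ? gptPrompt : claudePrompt
}
```

The point is that the switch happens per resolved model, so a user override like `"prometheus": { "model": "openai/gpt-5.2" }` automatically gets the GPT prompt.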
---
## Customization Guide
### How to Customize
Override in `oh-my-opencode.json`:
```jsonc
{
"agents": {
"sisyphus": { "model": "kimi-for-coding/k2p5" },
"prometheus": { "model": "openai/gpt-5.2" } // Auto-switches to GPT prompt
}
}
```
### Selection Priority
When choosing models for Claude-optimized agents:
```
Claude (Opus/Sonnet) > GPT (if agent has dual prompt) > Claude-like (Kimi K2.5, GLM 5)
```
When choosing models for GPT-native agents:
```
GPT (5.3-codex, 5.2) > Claude Opus (decent fallback) > Gemini (acceptable)
```
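Whichever priority order applies, fallback resolution itself is mechanical: walk the chain in order and take the first connected model. A hypothetical sketch (the actual chains and resolution logic live in `src/shared/model-requirements.ts` and may differ):

```typescript
// Hypothetical fallback resolver: returns the first model in the priority
// chain that is available, or undefined if none are connected.
function resolveModel(chain: string[], available: Set<string>): string | undefined {
  return chain.find((model) => available.has(model))
}
```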
### Safe vs Dangerous Overrides
**Safe** (same family):
- Sisyphus: Opus → Sonnet, Kimi K2.5, GLM 5
- Prometheus: Opus → GPT-5.2 (auto-switches prompt)
- Atlas: Kimi K2.5 → Sonnet, GPT-5.2 (auto-switches)
**Dangerous** (no prompt support):
- Sisyphus → GPT: **No GPT prompt. Will degrade significantly.**
- Hephaestus → Claude: **Built for Codex. Claude can't replicate this.**
- Explore → Opus: **Massive cost waste. Explore needs speed, not intelligence.**
- Librarian → Opus: **Same. Doc search doesn't need Opus-level reasoning.**
---
## Provider Priority
```
Native (anthropic/, openai/, google/) > Kimi for Coding > GitHub Copilot > Venice > OpenCode Zen > Z.ai Coding Plan
```
---
## See Also
- [Installation Guide](./installation.md) — Setup and authentication
- [Orchestration System](./understanding-orchestration-system.md) — How agents dispatch tasks to categories
- [Configuration Reference](../configurations.md) — Full config options
- [`src/shared/model-requirements.ts`](../../src/shared/model-requirements.ts) — Source of truth for fallback chains


@@ -259,18 +259,6 @@ opencode auth login
The plugin works perfectly by default. Do not change settings or turn off features without an explicit request.
### Custom Model Configuration
If the user wants to override which model an agent uses, refer to the **[Agent-Model Matching Guide](./agent-model-matching.md)** before making changes. That guide explains:
- **Why each agent uses its default model** — prompt optimization, model family compatibility
- **Which substitutions are safe** — staying within the same model family (e.g., Opus → Sonnet for Sisyphus)
- **Which substitutions are dangerous** — crossing model families without prompt support (e.g., GPT for Sisyphus)
- **How auto-routing works** — Prometheus and Atlas detect GPT models and switch to GPT-optimized prompts automatically
- **Full fallback chains** — what happens when the preferred model is unavailable
Always explain to the user *why* a model is assigned to an agent when making custom changes. The guide provides the rationale for every assignment.
### Verify the setup
Read this document again and check that you have done everything correctly.

issue-1501-analysis.md (new file)

@@ -0,0 +1,357 @@
# Issue #1501 Analysis Report: ULW Mode PLAN AGENT Infinite Loop
## 📋 Issue Summary
**Symptoms:**
- In ULW (ultrawork) mode, the PLAN AGENT gets stuck in an infinite loop
- After analysis/exploration completes, it only keeps generating plans
- Requests fire every minute with very small token counts
**Expected behavior:**
- A solution document is generated after exploration completes
---
## 🔍 Root Cause Analysis
### File: `src/tools/delegate-task/constants.ts`
#### Core Problem
`PLAN_AGENT_SYSTEM_PREPEND` (constants.ts lines 234-269) had a structural flaw:
1. **Assumes Interactive Mode**
```
2. After gathering context, ALWAYS present:
- Uncertainties: List of unclear points
- Clarifying Questions: Specific questions to resolve uncertainties
3. ITERATE until ALL requirements are crystal clear:
- Do NOT proceed to planning until you have 100% clarity
- Ask the user to confirm your understanding
```
2. **No Exit Condition**
- The "100% clarity" requirement cannot be measured objectively
- Asking the user for confirmation is impossible in ULW mode
- This leads to an infinite loop
3. **No ULW Mode Detection**
- Does not distinguish the case where it runs as a subagent
- Always tries to operate in interactive mode
### Why Did the Infinite Loop Occur?
```
ULW mode starts
→ Sisyphus invokes the Plan Agent (as a subagent)
→ Plan Agent: "100% clarity required"
→ Generates clarifying questions
→ No user present (subagent)
→ Tries to generate a plan again
→ "still unclear"
→ Infinite loop
```
**Key point:** The Plan Agent was designed to converse with the user, but in ULW mode it runs as a subagent with no user present.
---
## ✅ Applied Fix
### Changes (constants.ts)
#### 1. Added a SUBAGENT MODE DETECTION Section
```typescript
SUBAGENT MODE DETECTION (CRITICAL):
If you received a detailed prompt with gathered context from a parent orchestrator (e.g., Sisyphus):
- You are running as a SUBAGENT
- You CANNOT directly interact with the user
- DO NOT ask clarifying questions - proceed with available information
- Make reasonable assumptions for minor ambiguities
- Generate the plan based on the provided context
```
#### 2. Context Gathering Protocol Changes
```diff
- 1. Launch background agents to gather context:
+ 1. Launch background agents to gather context (ONLY if not already provided):
```
**Effect:** Prevents duplicate gathering when Sisyphus has already collected context
#### 3. Clarifying Questions → Assumptions
```diff
- 2. After gathering context, ALWAYS present:
- - Uncertainties: List of unclear points
- - Clarifying Questions: Specific questions
+ 2. After gathering context, assess clarity:
+ - User Request Summary: Concise restatement
+ - Assumptions Made: List any assumptions for unclear points
```
**Effect:** Documents assumptions instead of asking questions
#### 4. Infinite Loop Prevention: Explicit Exit Conditions
```diff
- 3. ITERATE until ALL requirements are crystal clear:
- - Do NOT proceed to planning until you have 100% clarity
- - Ask the user to confirm your understanding
- - Resolve every ambiguity before generating the work plan
+ 3. PROCEED TO PLAN GENERATION when:
+ - Core objective is understood (even if some details are ambiguous)
+ - You have gathered context via explore/librarian (or context was provided)
+ - You can make reasonable assumptions for remaining ambiguities
+
+ DO NOT loop indefinitely waiting for perfect clarity.
+ DOCUMENT assumptions in the plan so they can be validated during execution.
```
**Effect:**
- Removes the "100% clarity" requirement
- Provides objective entry conditions
- Explicitly forbids infinite looping
- Documents assumptions in the plan so they can be validated during execution
#### 5. Philosophy Change
```diff
- REMEMBER: Vague requirements lead to failed implementations.
+ REMEMBER: A plan with documented assumptions is better than no plan.
```
**Effect:** Perfectionism → Pragmatism
---
## 🎯 Resolution Mechanism
### Before (infinite loop)
```
Plan Agent starts
Context gathering
Are requirements clear?
↓ NO
Generate clarifying questions
Wait for user response (there is none)
Try the plan again
(repeats forever)
```
### After (normal termination)
```
Plan Agent starts
Subagent mode detected?
↓ YES
Context already provided? → YES
Core objective understood? → YES
Reasonable assumptions possible? → YES
Generate plan (document assumptions)
Done ✓
```
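The exit conditions in the "After" flow can be restated as a guard. The following is a hypothetical encoding for clarity only; the actual fix is prompt text in `PLAN_AGENT_SYSTEM_PREPEND`, not executable code:

```typescript
// Hypothetical encoding of the Plan Agent's exit conditions; illustrative only.
interface PlanState {
  isSubagent: boolean
  contextProvided: boolean
  coreObjectiveUnderstood: boolean
  canAssumeRemaining: boolean
}

function shouldGeneratePlan(s: PlanState): boolean {
  // In subagent mode, never wait on the user: proceed once the core
  // objective is understood and remaining gaps can be covered by assumptions.
  if (s.isSubagent) return s.coreObjectiveUnderstood && s.canAssumeRemaining
  // In interactive mode, also require that context has been gathered.
  return s.coreObjectiveUnderstood && s.contextProvided && s.canAssumeRemaining
}
```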
---
## 📊 Impact Analysis
### Problems Resolved
1. **ULW mode infinite loop** ✓
2. **Blocking when Sisyphus invokes the Plan Agent** ✓
3. **Repeated small-token requests** ✓
4. **Retries every minute** ✓
### No Side Effects
- Interactive mode (direct conversation with the user) still works
- Behavior changes only in subagent mode
- Backward compatibility is preserved
### Additional Improvements
- Assumptions are explicitly documented in the plan
- They can be validated during execution
- The workflow is more pragmatic
---
## 🧪 Verification
### Test Scenarios
1. **Invoke the Plan Agent in ULW mode**
```bash
oh-my-opencode run "Complex task requiring planning. ulw"
```
- Expected: plan is generated, then the agent terminates normally
- Check: no infinite loop
2. **Interactive mode (must be unchanged)**
```bash
oh-my-opencode run --agent prometheus "Design X"
```
- Expected: clarifying questions are still possible
- Check: conversation with the user works
3. **Subagent with context already provided**
- Expected: context gathering is skipped
- Check: no duplicate exploration
---
## 📝 Modified Files
```
src/tools/delegate-task/constants.ts
```
### Diff Summary
```diff
@@ -234,22 +234,32 @@ export const PLAN_AGENT_SYSTEM_PREPEND = `<system>
+SUBAGENT MODE DETECTION (CRITICAL):
+[subagent detection and handling logic]
+
MANDATORY CONTEXT GATHERING PROTOCOL:
-1. Launch background agents to gather context:
+1. Launch background agents (ONLY if not already provided):
-2. After gathering context, ALWAYS present:
- - Uncertainties
- - Clarifying Questions
+2. After gathering context, assess clarity:
+ - Assumptions Made
-3. ITERATE until ALL requirements are crystal clear:
- - Do NOT proceed until 100% clarity
- - Ask user to confirm
+3. PROCEED TO PLAN GENERATION when:
+ - Core objective understood
+ - Context gathered
+ - Reasonable assumptions possible
+
+ DO NOT loop indefinitely.
+ DOCUMENT assumptions.
```
---
## 🚀 Recommendations
### Immediate Actions
1. ✅ **Fix applied** - constants.ts updated
2. ⏳ **Run tests** - verify behavior in ULW mode
3. ⏳ **Open a PR** - request code review
### Future Improvements
1. **Standardize subagent context**
- Pass an explicit flag when invoking as a subagent
- Consider adding an `is_subagent: true` parameter
2. **Assumptions validation workflow**
- Add a mechanism to validate assumptions during plan execution
- Re-plan when incorrect assumptions are detected
3. **Timeout mechanism**
- Force-terminate the Plan Agent if it runs longer than X minutes
- Generate a fallback plan
4. **Add monitoring**
- Measure Plan Agent execution time
- Log iteration counts
- Detect infinite loops early
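The timeout mechanism suggested above could be sketched as a generic wrapper like this (names are illustrative, not from the actual codebase):

```typescript
// Hypothetical timeout wrapper for long-running agent work: resolves with a
// fallback result if the work does not finish within the given window.
async function withTimeout<T>(work: Promise<T>, ms: number, fallback: () => T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback()), ms)
  })
  try {
    return await Promise.race([work, timeout])
  } finally {
    // Clear the timer so a finished fast path does not keep the process alive.
    clearTimeout(timer)
  }
}
```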
---
## 📖 Related Code Structure
### Call Stack
```
Sisyphus (ULW mode)
task(category="deep", ...)
executor.ts: executeBackgroundContinuation()
prompt-builder.ts: buildSystemContent()
constants.ts: PLAN_AGENT_SYSTEM_PREPEND (problem location)
Plan Agent runs
```
### Key Functions
1. **executor.ts:587** - `isPlanAgent()` check
2. **prompt-builder.ts:11** - Plan Agent prepend injection
3. **constants.ts:234** - PLAN_AGENT_SYSTEM_PREPEND definition
---
## 🎓 Lessons
### Design Lessons
1. **Dual Mode Support**
- Distinguishing interactive from autonomous mode is essential
- Make the context-passing contract explicit
2. **Avoid Perfectionism in Agents**
- Avoid subjective conditions like "100% clarity"
- Provide clear, objective exit conditions
3. **Document Uncertainties**
- Document uncertainty instead of hiding it
- Make validation possible during execution
4. **Infinite Loop Prevention**
- Every loop needs an explicit exit condition
- Set a timeout or a max iteration count
---
## 🔗 참고 자료
- **Issue:** #1501 - [Bug]: ULW mode will 100% cause PLAN AGENT to get stuck
- **Files Modified:** `src/tools/delegate-task/constants.ts`
- **Related Concepts:** Ultrawork mode, Plan Agent, Subagent delegation
- **Agent Architecture:** Sisyphus → Prometheus → Atlas workflow
---
## ✅ Conclusion
**Root Cause:** The Plan Agent assumed interactive mode, but in ULW mode it runs as a subagent where user interaction is impossible. Its "100% clarity" requirement then produced an infinite loop.
**Solution:** Added subagent mode detection, removed clarifying questions, provided explicit exit conditions, and introduced documented assumptions.
**Result:** In ULW mode the Plan Agent now generates a plan and terminates normally. The infinite loop is resolved.
---
**Status:** ✅ Fixed
**Tested:** ⏳ Pending
**Deployed:** ⏳ Pending
**Analyst:** Sisyphus (oh-my-opencode ultrawork mode)
**Date:** 2026-02-05
**Session:** fast-ember


@@ -58,7 +58,6 @@
"@modelcontextprotocol/sdk": "^1.25.1",
"@opencode-ai/plugin": "^1.1.19",
"@opencode-ai/sdk": "^1.1.19",
"codex": "^0.2.3",
"commander": "^14.0.2",
"detect-libc": "^2.0.0",
"js-yaml": "^4.1.1",


@@ -1599,62 +1599,6 @@
"created_at": "2026-02-18T20:52:27Z",
"repoId": 1108837393,
"pullRequestNo": 1953
},
{
"name": "itstanner5216",
"id": 210304352,
"comment_id": 3925417310,
"created_at": "2026-02-19T08:13:42Z",
"repoId": 1108837393,
"pullRequestNo": 1958
},
{
"name": "itstanner5216",
"id": 210304352,
"comment_id": 3925417953,
"created_at": "2026-02-19T08:13:46Z",
"repoId": 1108837393,
"pullRequestNo": 1958
},
{
"name": "ControlNet",
"id": 12800094,
"comment_id": 3928095504,
"created_at": "2026-02-19T15:43:22Z",
"repoId": 1108837393,
"pullRequestNo": 1974
},
{
"name": "VespianRex",
"id": 151797549,
"comment_id": 3929203247,
"created_at": "2026-02-19T18:45:52Z",
"repoId": 1108837393,
"pullRequestNo": 1957
},
{
"name": "GyuminJack",
"id": 32768535,
"comment_id": 3895081227,
"created_at": "2026-02-13T06:00:53Z",
"repoId": 1108837393,
"pullRequestNo": 1813
},
{
"name": "CloudWaddie",
"id": 148834837,
"comment_id": 3931489943,
"created_at": "2026-02-20T04:06:05Z",
"repoId": 1108837393,
"pullRequestNo": 1988
},
{
"name": "FFFergie",
"id": 53839805,
"comment_id": 3934341409,
"created_at": "2026-02-20T13:03:33Z",
"repoId": 1108837393,
"pullRequestNo": 1996
}
]
}


@@ -33,8 +33,8 @@ loadPluginConfig(directory, ctx)
```
createHooks()
├─→ createCoreHooks() # 35 hooks
│ ├─ createSessionHooks() # 21: contextWindowMonitor, thinkMode, ralphLoop, sessionRecovery, jsonErrorRecovery, sisyphusGptHephaestusReminder, anthropicEffort...
│ ├─ createToolGuardHooks() # 10: commentChecker, rulesInjector, writeExistingFileGuard, hashlineEditDiffEnhancer...
│ ├─ createSessionHooks() # 22: contextWindowMonitor, thinkMode, ralphLoop, sessionRecovery, jsonErrorRecovery, sisyphusGptHephaestusReminder, taskReminder...
│ ├─ createToolGuardHooks() # 9: commentChecker, rulesInjector, writeExistingFileGuard...
│ └─ createTransformHooks() # 4: claudeCodeHooks, keywordDetector, contextInjector, thinkingBlockValidator
├─→ createContinuationHooks() # 7: todoContinuationEnforcer, atlas, stopContinuationGuard...
└─→ createSkillHooks() # 2: categorySkillReminder, autoSlashCommand


@@ -311,8 +311,7 @@ task(category="quick", load_skills=[], run_in_background=false, prompt="Task 4..
**Background management**:
- Collect results: \`background_output(task_id="...")\`
- Before final answer, cancel DISPOSABLE tasks individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
- **NEVER use \`background_cancel(all=true)\`** — it kills tasks whose results you haven't collected yet
- Before final answer: \`background_cancel(all=true)\`
</parallel_execution>
<notepad_protocol>


@@ -298,8 +298,7 @@ task(category="quick", load_skills=[], run_in_background=false, prompt="Task 3..
**Background management**:
- Collect: \`background_output(task_id="...")\`
- Before final answer, cancel DISPOSABLE tasks individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
- **NEVER use \`background_cancel(all=true)\`** — it kills tasks whose results you haven't collected yet
- Cleanup: \`background_cancel(all=true)\`
</parallel_execution>
<notepad_protocol>


@@ -69,10 +69,8 @@ export async function createBuiltinAgents(
browserProvider?: BrowserAutomationProvider,
uiSelectedModel?: string,
disabledSkills?: Set<string>,
useTaskSystem = false,
disableOmoEnv = false
useTaskSystem = false
): Promise<Record<string, AgentConfig>> {
const connectedProviders = readConnectedProvidersCache()
const providerModelsConnected = connectedProviders
? (readProviderModelsCache()?.connected ?? [])
@@ -114,7 +112,6 @@ export async function createBuiltinAgents(
uiSelectedModel,
availableModels,
disabledSkills,
disableOmoEnv,
})
const registeredAgents = parseRegisteredAgentSummaries(customAgentSummaries)
@@ -148,7 +145,6 @@ export async function createBuiltinAgents(
directory,
userCategories: categories,
useTaskSystem,
disableOmoEnv,
})
if (sisyphusConfig) {
result["sisyphus"] = sisyphusConfig
@@ -166,7 +162,6 @@ export async function createBuiltinAgents(
mergedCategories,
directory,
useTaskSystem,
disableOmoEnv,
})
if (hephaestusConfig) {
result["hephaestus"] = hephaestusConfig


@@ -1,16 +1,8 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import { createEnvContext } from "../env-context"
type ApplyEnvironmentContextOptions = {
disableOmoEnv?: boolean
}
export function applyEnvironmentContext(
config: AgentConfig,
directory?: string,
options: ApplyEnvironmentContextOptions = {}
): AgentConfig {
if (options.disableOmoEnv || !directory || !config.prompt) return config
export function applyEnvironmentContext(config: AgentConfig, directory?: string): AgentConfig {
if (!directory || !config.prompt) return config
const envContext = createEnvContext()
return { ...config, prompt: config.prompt + envContext }
}


@@ -23,7 +23,6 @@ export function collectPendingBuiltinAgents(input: {
availableModels: Set<string>
disabledSkills?: Set<string>
useTaskSystem?: boolean
disableOmoEnv?: boolean
}): { pendingAgentConfigs: Map<string, AgentConfig>; availableAgents: AvailableAgent[] } {
const {
agentSources,
@@ -38,7 +37,6 @@ export function collectPendingBuiltinAgents(input: {
uiSelectedModel,
availableModels,
disabledSkills,
disableOmoEnv = false,
} = input
const availableAgents: AvailableAgent[] = []
@@ -83,7 +81,7 @@ export function collectPendingBuiltinAgents(input: {
}
if (agentName === "librarian") {
config = applyEnvironmentContext(config, directory, { disableOmoEnv })
config = applyEnvironmentContext(config, directory)
}
config = applyOverrides(config, override, mergedCategories, directory)


@@ -4,7 +4,7 @@ import type { CategoryConfig } from "../../config/schema"
import type { AvailableAgent, AvailableCategory, AvailableSkill } from "../dynamic-agent-prompt-builder"
import { AGENT_MODEL_REQUIREMENTS, isAnyProviderConnected } from "../../shared"
import { createHephaestusAgent } from "../hephaestus"
import { applyEnvironmentContext } from "./environment-context"
import { createEnvContext } from "../env-context"
import { applyCategoryOverride, mergeAgentConfig } from "./agent-overrides"
import { applyModelResolution, getFirstFallbackModel } from "./model-resolution"
@@ -20,7 +20,6 @@ export function maybeCreateHephaestusConfig(input: {
mergedCategories: Record<string, CategoryConfig>
directory?: string
useTaskSystem: boolean
disableOmoEnv?: boolean
}): AgentConfig | undefined {
const {
disabledAgents,
@@ -34,7 +33,6 @@ export function maybeCreateHephaestusConfig(input: {
mergedCategories,
directory,
useTaskSystem,
disableOmoEnv = false,
} = input
if (disabledAgents.includes("hephaestus")) return undefined
@@ -81,7 +79,10 @@ export function maybeCreateHephaestusConfig(input: {
hephaestusConfig = applyCategoryOverride(hephaestusConfig, hepOverrideCategory, mergedCategories)
}
hephaestusConfig = applyEnvironmentContext(hephaestusConfig, directory, { disableOmoEnv })
if (directory && hephaestusConfig.prompt) {
const envContext = createEnvContext()
hephaestusConfig = { ...hephaestusConfig, prompt: hephaestusConfig.prompt + envContext }
}
if (hephaestusOverride) {
hephaestusConfig = mergeAgentConfig(hephaestusConfig, hephaestusOverride, directory)


@@ -22,7 +22,6 @@ export function maybeCreateSisyphusConfig(input: {
directory?: string
userCategories?: CategoriesConfig
useTaskSystem: boolean
disableOmoEnv?: boolean
}): AgentConfig | undefined {
const {
disabledAgents,
@@ -37,7 +36,6 @@ export function maybeCreateSisyphusConfig(input: {
mergedCategories,
directory,
useTaskSystem,
disableOmoEnv = false,
} = input
const sisyphusOverride = agentOverrides["sisyphus"]
@@ -80,9 +78,7 @@ export function maybeCreateSisyphusConfig(input: {
}
sisyphusConfig = applyOverrides(sisyphusConfig, sisyphusOverride, mergedCategories, directory)
sisyphusConfig = applyEnvironmentContext(sisyphusConfig, directory, {
disableOmoEnv,
})
sisyphusConfig = applyEnvironmentContext(sisyphusConfig, directory)
return sisyphusConfig
}


@@ -4,6 +4,7 @@ import { describe, it, expect } from "bun:test"
import {
buildCategorySkillsDelegationGuide,
buildUltraworkSection,
formatCustomSkillsBlock,
type AvailableSkill,
type AvailableCategory,
type AvailableAgent,
@@ -29,39 +30,40 @@ describe("buildCategorySkillsDelegationGuide", () => {
{ name: "our-design-system", description: "Internal design system components", location: "project" },
]
it("should list builtin and custom skills in compact format", () => {
it("should separate builtin and custom skills into distinct sections", () => {
//#given: mix of builtin and custom skills
const allSkills = [...builtinSkills, ...customUserSkills]
//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)
//#then: should use compact format with both sections
expect(result).toContain("**Built-in**: playwright, frontend-ui-ux")
expect(result).toContain("YOUR SKILLS (PRIORITY)")
expect(result).toContain("react-19 (user)")
expect(result).toContain("tailwind-4 (user)")
//#then: should have separate sections
expect(result).toContain("Built-in Skills")
expect(result).toContain("User-Installed Skills")
expect(result).toContain("HIGH PRIORITY")
})
it("should point to skill tool as source of truth", () => {
//#given: skills present
it("should list custom skills and keep CRITICAL warning", () => {
//#given: custom skills installed
const allSkills = [...builtinSkills, ...customUserSkills]
//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)
//#then: should reference the skill tool for full descriptions
expect(result).toContain("`skill` tool")
//#then: should mention custom skills by name and include warning
expect(result).toContain("`react-19`")
expect(result).toContain("`tailwind-4`")
expect(result).toContain("CRITICAL")
})
it("should show source tags for custom skills (user vs project)", () => {
it("should show source column for custom skills (user vs project)", () => {
//#given: both user and project custom skills
const allSkills = [...builtinSkills, ...customUserSkills, ...customProjectSkills]
//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)
//#then: should show source tag for each custom skill
//#then: should show source for each custom skill
expect(result).toContain("(user)")
expect(result).toContain("(project)")
})
@@ -74,8 +76,8 @@ describe("buildCategorySkillsDelegationGuide", () => {
const result = buildCategorySkillsDelegationGuide(categories, allSkills)
//#then: should not contain custom skill emphasis
expect(result).not.toContain("YOUR SKILLS")
expect(result).toContain("**Built-in**:")
expect(result).not.toContain("User-Installed Skills")
expect(result).not.toContain("HIGH PRIORITY")
expect(result).toContain("Available Skills")
})
@@ -86,9 +88,10 @@ describe("buildCategorySkillsDelegationGuide", () => {
//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)
//#then: should show custom skills with emphasis, no builtin line
expect(result).toContain("YOUR SKILLS (PRIORITY)")
expect(result).not.toContain("**Built-in**:")
//#then: should show custom skills with emphasis, no builtin section
expect(result).toContain("User-Installed Skills")
expect(result).toContain("HIGH PRIORITY")
expect(result).not.toContain("Built-in Skills")
})
it("should include priority note for custom skills in evaluation step", () => {
@@ -100,7 +103,7 @@ describe("buildCategorySkillsDelegationGuide", () => {
//#then: evaluation section should mention user-installed priority
expect(result).toContain("User-installed skills get PRIORITY")
expect(result).toContain("INCLUDE rather than omit")
expect(result).toContain("INCLUDE it rather than omit it")
})
it("should NOT include priority note when no custom skills", () => {
@@ -122,20 +125,6 @@ describe("buildCategorySkillsDelegationGuide", () => {
//#then: should return empty string
expect(result).toBe("")
})
it("should include category descriptions", () => {
//#given: categories with descriptions
const allSkills = [...builtinSkills]
//#when: building the delegation guide
const result = buildCategorySkillsDelegationGuide(categories, allSkills)
//#then: should list categories with their descriptions
expect(result).toContain("`visual-engineering`")
expect(result).toContain("Frontend, UI/UX")
expect(result).toContain("`quick`")
expect(result).toContain("Trivial tasks")
})
})
describe("buildUltraworkSection", () => {
@@ -172,4 +161,45 @@ describe("buildUltraworkSection", () => {
})
})
describe("formatCustomSkillsBlock", () => {
const customSkills: AvailableSkill[] = [
{ name: "react-19", description: "React 19 patterns", location: "user" },
{ name: "tailwind-4", description: "Tailwind v4", location: "project" },
]
const customRows = customSkills.map((s) => {
const source = s.location === "project" ? "project" : "user"
return `| \`${s.name}\` | ${s.description} | ${source} |`
})
it("should produce consistent output used by both builders", () => {
//#given: custom skills and rows
//#when: formatting with default header level
const result = formatCustomSkillsBlock(customRows, customSkills)
//#then: contains all expected elements
expect(result).toContain("User-Installed Skills (HIGH PRIORITY)")
expect(result).toContain("CRITICAL")
expect(result).toContain("`react-19`")
expect(result).toContain("`tailwind-4`")
expect(result).toContain("| user |")
expect(result).toContain("| project |")
})
it("should use #### header by default", () => {
//#given: default header level
const result = formatCustomSkillsBlock(customRows, customSkills)
//#then: uses markdown h4
expect(result).toContain("#### User-Installed Skills")
})
it("should use bold header when specified", () => {
//#given: bold header level (used by Atlas)
const result = formatCustomSkillsBlock(customRows, customSkills, "**")
//#then: uses bold instead of h4
expect(result).toContain("**User-Installed Skills (HIGH PRIORITY):**")
expect(result).not.toContain("#### User-Installed Skills")
})
})


@@ -1,4 +1,5 @@
import type { AgentPromptMetadata } from "./types"
import { truncateDescription } from "../shared/truncate-description"
export interface AvailableAgent {
name: string
@@ -157,6 +158,29 @@ export function buildDelegationTable(agents: AvailableAgent[]): string {
return rows.join("\n")
}
/**
* Renders the "User-Installed Skills (HIGH PRIORITY)" block used across multiple agent prompts.
* Extracted to avoid duplication between buildCategorySkillsDelegationGuide, buildSkillsSection, etc.
*/
export function formatCustomSkillsBlock(
customRows: string[],
customSkills: AvailableSkill[],
headerLevel: "####" | "**" = "####"
): string {
const header = headerLevel === "####"
? `#### User-Installed Skills (HIGH PRIORITY)`
: `**User-Installed Skills (HIGH PRIORITY):**`
return `${header}
**The user has installed these custom skills. They MUST be evaluated for EVERY delegation.**
Subagents are STATELESS — they lose all custom knowledge unless you pass these skills via \`load_skills\`.
${customRows.join("\n")}
> **CRITICAL**: Ignoring user-installed skills when they match the task domain is a failure.
> The user installed custom skills for a reason — USE THEM when the task overlaps with their domain.`
}
export function buildCategorySkillsDelegationGuide(categories: AvailableCategory[], skills: AvailableSkill[]): string {
if (categories.length === 0 && skills.length === 0) return ""
@@ -169,37 +193,35 @@ export function buildCategorySkillsDelegationGuide(categories: AvailableCategory
const builtinSkills = skills.filter((s) => s.location === "plugin")
const customSkills = skills.filter((s) => s.location !== "plugin")
const builtinNames = builtinSkills.map((s) => s.name).join(", ")
const customNames = customSkills.map((s) => {
const source = s.location === "project" ? "project" : "user"
return `${s.name} (${source})`
}).join(", ")
const builtinRows = builtinSkills.map((s) => {
const desc = truncateDescription(s.description)
return `- \`${s.name}\`${desc}`
})
const customRows = customSkills.map((s) => {
const desc = truncateDescription(s.description)
const source = s.location === "project" ? "project" : "user"
return `- \`${s.name}\` (${source}) — ${desc}`
})
const customSkillBlock = formatCustomSkillsBlock(customRows, customSkills)
let skillsSection: string
if (customSkills.length > 0 && builtinSkills.length > 0) {
skillsSection = `#### Available Skills (via \`skill\` tool)
skillsSection = `#### Built-in Skills
**Built-in**: ${builtinNames}
**⚡ YOUR SKILLS (PRIORITY)**: ${customNames}
${builtinRows.join("\n")}
> User-installed skills OVERRIDE built-in defaults. ALWAYS prefer YOUR SKILLS when domain matches.
> Full skill descriptions → use the \`skill\` tool to check before EVERY delegation.`
${customSkillBlock}`
} else if (customSkills.length > 0) {
skillsSection = `#### Available Skills (via \`skill\` tool)
**⚡ YOUR SKILLS (PRIORITY)**: ${customNames}
> User-installed skills OVERRIDE built-in defaults. ALWAYS prefer YOUR SKILLS when domain matches.
> Full skill descriptions → use the \`skill\` tool to check before EVERY delegation.`
} else if (builtinSkills.length > 0) {
skillsSection = `#### Available Skills (via \`skill\` tool)
**Built-in**: ${builtinNames}
> Full skill descriptions → use the \`skill\` tool to check before EVERY delegation.`
skillsSection = customSkillBlock
} else {
skillsSection = ""
skillsSection = `#### Available Skills (Domain Expertise Injection)
Skills inject specialized instructions into the subagent. Read the description to understand when each skill applies.
${builtinRows.join("\n")}`
}
return `### Category + Skills Delegation System
@@ -223,14 +245,33 @@ ${skillsSection}
- Match task requirements to category domain
- Select the category whose domain BEST fits the task
**STEP 2: Evaluate ALL Skills**
Check the \`skill\` tool for available skills and their descriptions. For EVERY skill, ask:
**STEP 2: Evaluate ALL Skills (Built-in AND User-Installed)**
For EVERY skill listed above, ask yourself:
> "Does this skill's expertise domain overlap with my task?"
- If YES → INCLUDE in \`load_skills=[...]\`
- If NO → OMIT (no justification needed)
- If NO → You MUST justify why (see below)
${customSkills.length > 0 ? `
> **User-installed skills get PRIORITY.** When in doubt, INCLUDE rather than omit.` : ""}
> **User-installed skills get PRIORITY.** The user explicitly installed them for their workflow.
> When in doubt about a user-installed skill, INCLUDE it rather than omit it.` : ""}
**STEP 3: Justify Omissions**
If you choose NOT to include a skill that MIGHT be relevant, you MUST provide:
\`\`\`
SKILL EVALUATION for "[skill-name]":
- Skill domain: [what the skill description says]
- Task domain: [what your task is about]
- Decision: OMIT
- Reason: [specific explanation of why domains don't overlap]
\`\`\`
**WHY JUSTIFICATION IS MANDATORY:**
- Forces you to actually READ skill descriptions
- Prevents lazy omission of potentially useful skills
- Subagents are STATELESS - they only know what you tell them
- Missing a relevant skill = suboptimal output
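The include/omit rule above can be sketched as a small helper. This is a hypothetical illustration, not code from the plugin; the `evaluateSkill` name and the domain-string matching are assumptions standing in for the model's judgment:

```typescript
// Hypothetical sketch of the STEP 2/3 rule: include on domain overlap,
// otherwise produce an explicit omission justification.
interface SkillEvaluation {
  name: string
  decision: "INCLUDE" | "OMIT"
  reason?: string
}

function evaluateSkill(
  name: string,
  skillDomains: string[],
  taskDomains: string[],
): SkillEvaluation {
  // stands in for "does this skill's expertise domain overlap with my task?"
  const overlaps = skillDomains.some((d) => taskDomains.includes(d))
  if (overlaps) return { name, decision: "INCLUDE" }
  return {
    name,
    decision: "OMIT",
    reason: `skill domain [${skillDomains.join(", ")}] does not overlap task domain [${taskDomains.join(", ")}]`,
  }
}
```

Note that OMIT always carries a reason, mirroring the mandatory-justification rule.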
---


@@ -142,13 +142,10 @@ Asking the user is the LAST resort after exhausting creative alternatives.
### Do NOT Ask — Just Do
**FORBIDDEN:**
- Asking permission in any form ("Should I proceed?", "Would you like me to...?", "I can do X if you want") → JUST DO IT.
- "Should I proceed with X?" → JUST DO IT.
- "Do you want me to run tests?" → RUN THEM.
- "I noticed Y, should I fix it?" → FIX IT OR NOTE IN FINAL MESSAGE.
- Stopping after partial implementation → 100% OR NOTHING.
- Answering a question then stopping → The question implies action. DO THE ACTION.
- "I'll do X" / "I recommend X" then ending turn → You COMMITTED to X. DO X NOW before ending.
- Explaining findings without acting on them → ACT on your findings immediately.
**CORRECT:**
- Keep going until COMPLETELY done
@@ -156,9 +153,6 @@ Asking the user is the LAST resort after exhausting creative alternatives.
- Make decisions. Course-correct only on CONCRETE failure
- Note assumptions in final message, not as questions mid-work
- Need context? Fire explore/librarian in background IMMEDIATELY — keep working while they search
- User asks "did you do X?" and you didn't → Acknowledge briefly, DO X immediately
- User asks a question implying work → Answer briefly, DO the implied work in the same turn
- You wrote a plan in your response → EXECUTE the plan before ending turn — plans are starting lines, not finish lines
## Hard Constraints
@@ -170,43 +164,11 @@ ${antiPatterns}
${keyTriggers}
<intent_extraction>
### Step 0: Extract True Intent (BEFORE Classification)
**You are an autonomous deep worker. Users chose you for ACTION, not analysis.**
Every user message has a surface form and a true intent. Your conservative grounding bias may cause you to interpret messages too literally — counter this by extracting true intent FIRST.
**Intent Mapping (act on TRUE intent, not surface form):**
| Surface Form | True Intent | Your Response |
|---|---|---|
| "Did you do X?" (and you didn't) | You forgot X. Do it now. | Acknowledge → DO X immediately |
| "How does X work?" | Understand X to work with/fix it | Explore → Implement/Fix |
| "Can you look into Y?" | Investigate AND resolve Y | Investigate → Resolve |
| "What's the best way to do Z?" | Actually do Z the best way | Decide → Implement |
| "Why is A broken?" / "I'm seeing error B" | Fix A / Fix B | Diagnose → Fix |
| "What do you think about C?" | Evaluate, decide, implement C | Evaluate → Implement best option |
**Pure question (NO action) ONLY when ALL of these are true:**
- User explicitly says "just explain" / "don't change anything" / "I'm just curious"
- No actionable codebase context in the message
- No problem, bug, or improvement is mentioned or implied
**DEFAULT: Message implies action unless explicitly stated otherwise.**
**Verbalize your classification before acting:**
> "I detect [implementation/fix/investigation/pure question] intent — [reason]. [Action I'm taking now]."
This verbalization commits you to action. Once you state implementation, fix, or investigation intent, you MUST follow through in the same turn. Only "pure question" permits ending without action.
</intent_extraction>
### Step 1: Classify Task Type
- **Trivial**: Single file, known location, <10 lines — Direct tools only (UNLESS Key Trigger applies)
- **Explicit**: Specific file/line, clear command — Execute directly
- **Exploratory**: "How does X work?", "Find Y" — Fire explore (1-3) + tools in parallel → then ACT on findings (see Step 0 true intent)
- **Exploratory**: "How does X work?", "Find Y" — Fire explore (1-3) + tools in parallel
- **Open-ended**: "Improve", "Refactor", "Add feature" — Full Execution Loop required
- **Ambiguous**: Unclear scope, multiple interpretations — Ask ONE clarifying question
@@ -292,8 +254,7 @@ Prompt structure for each agent:
- NEVER use \`run_in_background=false\` for explore/librarian
- Continue your work immediately after launching background agents
- Collect results with \`background_output(task_id="...")\` when needed
- BEFORE final answer, cancel DISPOSABLE tasks individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
- **NEVER use \`background_cancel(all=true)\`** — it kills tasks whose results you haven't collected yet
- BEFORE final answer: \`background_cancel(all=true)\` to clean up
### Search Stop Conditions
@@ -429,7 +390,7 @@ ${oracleSection}
**Updates:**
- Clear updates (a few sentences) at meaningful milestones
- Each update must include concrete outcome ("Found X", "Updated Y")
- Do not expand task beyond what user asked — but implied action IS part of the request (see Step 0 true intent)
- Do not expand task beyond what user asked
</output_contract>
## Code Quality & Verification
@@ -463,18 +424,6 @@ This means:
2. **Verify** with real tools: \`lsp_diagnostics\`, build, tests — not "it should work"
3. **Confirm** every verification passed — show what you ran and what the output was
4. **Re-read** the original request — did you miss anything? Check EVERY requirement
5. **Re-check true intent** (Step 0) — did the user's message imply action you haven't taken? If yes, DO IT NOW
<turn_end_self_check>
**Before ending your turn, verify ALL of the following:**
1. Did the user's message imply action? (Step 0) → Did you take that action?
2. Did you write "I'll do X" or "I recommend X"? → Did you then DO X?
3. Did you offer to do something ("Would you like me to...?") → VIOLATION. Go back and do it.
4. Did you answer a question and stop? → Was there implied work? If yes, do it now.
**If ANY check fails: DO NOT end your turn. Continue working.**
</turn_end_self_check>
**If ANY of these are false, you are NOT done:**
- All requested functionality fully implemented


@@ -14,10 +14,6 @@ export { createAtlasAgent, atlasPromptMetadata } from "./atlas"
export {
PROMETHEUS_SYSTEM_PROMPT,
PROMETHEUS_PERMISSION,
PROMETHEUS_GPT_SYSTEM_PROMPT,
getPrometheusPrompt,
getPrometheusPromptSource,
getGptPrometheusPrompt,
PROMETHEUS_IDENTITY_CONSTRAINTS,
PROMETHEUS_INTERVIEW_MODE,
PROMETHEUS_PLAN_GENERATION,
@@ -25,4 +21,3 @@ export {
PROMETHEUS_PLAN_TEMPLATE,
PROMETHEUS_BEHAVIORAL_SUMMARY,
} from "./prometheus"
export type { PrometheusPromptSource } from "./prometheus"


@@ -1,470 +0,0 @@
/**
* GPT-5.2 Optimized Prometheus System Prompt
*
* Restructured following OpenAI's GPT-5.2 Prompting Guide principles:
* - XML-tagged instruction blocks for clear structure
* - Explicit verbosity constraints
* - Scope discipline (no extra features)
* - Tool usage rules (prefer tools over internal knowledge)
* - Uncertainty handling (explore before asking)
* - Compact, principle-driven instructions
*
* Key characteristics (from GPT-5.2 Prompting Guide):
* - "Stronger instruction adherence" — follows instructions more literally
* - "Conservative grounding bias" — prefers correctness over speed
* - "More deliberate scaffolding" — builds clearer plans by default
* - Explicit decision criteria needed (model won't infer)
*
* Inspired by Codex Plan Mode's principle-driven approach:
* - "Decision Complete" as north star quality metric
* - "Explore Before Asking" — ground in environment first
* - "Two Kinds of Unknowns" — discoverable facts vs preferences
*/
export const PROMETHEUS_GPT_SYSTEM_PROMPT = `
<identity>
You are Prometheus - Strategic Planning Consultant from OhMyOpenCode.
Named after the Titan who brought fire to humanity, you bring foresight and structure.
**YOU ARE A PLANNER. NOT AN IMPLEMENTER. NOT A CODE WRITER.**
When user says "do X", "fix X", "build X" — interpret as "create a work plan for X". No exceptions.
Your only outputs: questions, research (explore/librarian agents), work plans (\`.sisyphus/plans/*.md\`), drafts (\`.sisyphus/drafts/*.md\`).
</identity>
<mission>
Produce **decision-complete** work plans for agent execution.
A plan is "decision complete" when the implementer needs ZERO judgment calls — every decision is made, every ambiguity resolved, every pattern reference provided.
This is your north star quality metric.
</mission>
<core_principles>
## Three Principles (Read First)
1. **Decision Complete**: The plan must leave ZERO decisions to the implementer. Not "detailed" — decision complete. If an engineer could ask "but which approach?", the plan is not done.
2. **Explore Before Asking**: Ground yourself in the actual environment BEFORE asking the user anything. Most questions AI agents ask could be answered by exploring the repo. Run targeted searches first. Ask only what cannot be discovered.
3. **Two Kinds of Unknowns**:
- **Discoverable facts** (repo/system truth) → EXPLORE first. Search files, configs, schemas, types. Ask ONLY if multiple plausible candidates exist or nothing is found.
- **Preferences/tradeoffs** (user intent, not derivable from code) → ASK early. Provide 2-4 options + recommended default. If unanswered, proceed with default and record as assumption.
</core_principles>
<output_verbosity_spec>
- Interview turns: Conversational, 3-6 sentences + 1-3 focused questions.
- Research summaries: ≤5 bullets with concrete findings.
- Plan generation: Structured markdown per template.
- Status updates: 1-2 sentences with concrete outcomes only.
- Do NOT rephrase the user's request unless semantics change.
- Do NOT narrate routine tool calls ("reading file...", "searching...").
- NEVER end with "Let me know if you have questions" or "When you're ready, say X" — these are passive and unhelpful.
- ALWAYS end interview turns with a clear question or explicit next action.
</output_verbosity_spec>
<scope_constraints>
## Mutation Rules
### Allowed (non-mutating, plan-improving)
- Reading/searching files, configs, schemas, types, manifests, docs
- Static analysis, inspection, repo exploration
- Dry-run commands that don't edit repo-tracked files
- Firing explore/librarian agents for research
### Allowed (plan artifacts only)
- Writing/editing files in \`.sisyphus/plans/*.md\`
- Writing/editing files in \`.sisyphus/drafts/*.md\`
- No other file paths. The prometheus-md-only hook will block violations.
### Forbidden (mutating, plan-executing)
- Writing code files (.ts, .js, .py, .go, etc.)
- Editing source code
- Running formatters, linters, codegen that rewrite files
- Any action that "does the work" rather than "plans the work"
If user says "just do it" or "skip planning" — refuse politely:
"I'm Prometheus — a dedicated planner. Planning takes 2-3 minutes but saves hours. Then run \`/start-work\` and Sisyphus executes immediately."
</scope_constraints>
<phases>
## Phase 0: Classify Intent (EVERY request)
Classify before diving in. This determines your interview depth.
| Tier | Signal | Strategy |
|------|--------|----------|
| **Trivial** | Single file, <10 lines, obvious fix | Skip heavy interview. 1-2 quick confirms → plan. |
| **Standard** | 1-5 files, clear scope, feature/refactor/build | Full interview. Explore + questions + Metis review. |
| **Architecture** | System design, infra, 5+ modules, long-term impact | Deep interview. MANDATORY Oracle consultation. Explore + librarian + multiple rounds. |
---
## Phase 1: Ground (SILENT exploration — before asking questions)
Eliminate unknowns by discovering facts, not by asking the user. Resolve all questions that can be answered through exploration. Silent exploration between turns is allowed and encouraged.
Before asking the user any question, perform at least one targeted non-mutating exploration pass.
\`\`\`typescript
// Fire BEFORE your first question to the user
// Prompt structure: [CONTEXT] + [GOAL] + [DOWNSTREAM] + [REQUEST]
task(subagent_type="explore", load_skills=[], run_in_background=true,
prompt="[CONTEXT]: Planning {task}. [GOAL]: Map codebase patterns before interview. [DOWNSTREAM]: Will use to ask informed questions. [REQUEST]: Find similar implementations, directory structure, naming conventions, registration patterns. Focus on src/. Return file paths with descriptions.")
task(subagent_type="explore", load_skills=[], run_in_background=true,
prompt="[CONTEXT]: Planning {task}. [GOAL]: Assess test infrastructure and coverage. [DOWNSTREAM]: Determines test strategy in plan. [REQUEST]: Find test framework config, representative test files, test patterns, CI integration. Return: YES/NO per capability with examples.")
\`\`\`
For external libraries/technologies:
\`\`\`typescript
task(subagent_type="librarian", load_skills=[], run_in_background=true,
prompt="[CONTEXT]: Planning {task} with {library}. [GOAL]: Production-quality guidance. [DOWNSTREAM]: Architecture decisions in plan. [REQUEST]: Official docs, API reference, recommended patterns, pitfalls. Skip tutorials.")
\`\`\`
**Exception**: Ask clarifying questions BEFORE exploring only if there are obvious ambiguities or contradictions in the prompt itself. If ambiguity might be resolved by exploring, always prefer exploring first.
---
## Phase 2: Interview
### Create Draft Immediately
On first substantive exchange, create \`.sisyphus/drafts/{topic-slug}.md\`:
\`\`\`markdown
# Draft: {Topic}
## Requirements (confirmed)
- [requirement]: [user's exact words]
## Technical Decisions
- [decision]: [rationale]
## Research Findings
- [source]: [key finding]
## Open Questions
- [unanswered]
## Scope Boundaries
- INCLUDE: [in scope]
- EXCLUDE: [explicitly out]
\`\`\`
Update draft after EVERY meaningful exchange. Your memory is limited; the draft is your backup brain.
### Interview Focus (informed by Phase 1 findings)
- **Goal + success criteria**: What does "done" look like?
- **Scope boundaries**: What's IN and what's explicitly OUT?
- **Technical approach**: Informed by explore results — "I found pattern X in codebase, should we follow it?"
- **Test strategy**: Does infra exist? TDD / tests-after / none? Agent-executed QA always included.
- **Constraints**: Time, tech stack, team, integrations.
### Question Rules
- Use the \`Question\` tool when presenting structured multiple-choice options.
- Every question must: materially change the plan, OR confirm an assumption, OR choose between meaningful tradeoffs.
- Never ask questions answerable by non-mutating exploration (see Principle 2).
- Offer only meaningful choices; don't include filler options that are obviously wrong.
### Test Infrastructure Assessment (for Standard/Architecture intents)
Detect test infrastructure via explore agent results:
- **If exists**: Ask: "TDD (RED-GREEN-REFACTOR), tests-after, or no tests? Agent QA scenarios always included."
- **If absent**: Ask: "Set up test infra? If yes, I'll include setup tasks. Agent QA scenarios always included either way."
Record decision in draft immediately.
### Clearance Check (run after EVERY interview turn)
\`\`\`
CLEARANCE CHECKLIST (ALL must be YES to auto-transition):
□ Core objective clearly defined?
□ Scope boundaries established (IN/OUT)?
□ No critical ambiguities remaining?
□ Technical approach decided?
□ Test strategy confirmed?
□ No blocking questions outstanding?
→ ALL YES? Announce: "All requirements clear. Proceeding to plan generation." Then transition.
→ ANY NO? Ask the specific unclear question.
\`\`\`
---
## Phase 3: Plan Generation
### Trigger
- **Auto**: Clearance check passes (all YES).
- **Explicit**: User says "create the work plan" / "generate the plan".
### Step 1: Register Todos (IMMEDIATELY on trigger — no exceptions)
\`\`\`typescript
TodoWrite([
{ id: "plan-1", content: "Consult Metis for gap analysis", status: "pending", priority: "high" },
{ id: "plan-2", content: "Generate plan to .sisyphus/plans/{name}.md", status: "pending", priority: "high" },
{ id: "plan-3", content: "Self-review: classify gaps (critical/minor/ambiguous)", status: "pending", priority: "high" },
{ id: "plan-4", content: "Present summary with decisions needed", status: "pending", priority: "high" },
{ id: "plan-5", content: "Ask about high accuracy mode (Momus review)", status: "pending", priority: "high" },
{ id: "plan-6", content: "Cleanup draft, guide to /start-work", status: "pending", priority: "medium" }
])
\`\`\`
### Step 2: Consult Metis (MANDATORY)
\`\`\`typescript
task(subagent_type="metis", load_skills=[], run_in_background=false,
prompt=\`Review this planning session:
**Goal**: {summary}
**Discussed**: {key points}
**My Understanding**: {interpretation}
**Research**: {findings}
Identify: missed questions, guardrails needed, scope creep risks, unvalidated assumptions, missing acceptance criteria, edge cases.\`)
\`\`\`
Incorporate Metis findings silently — do NOT ask additional questions. Generate plan immediately.
### Step 3: Generate Plan (Incremental Write Protocol)
<write_protocol>
**Write OVERWRITES. Never call Write twice on the same file.**
Plans with many tasks will exceed output token limits if generated at once.
Split into: **one Write** (skeleton) + **multiple Edits** (tasks in batches of 2-4).
1. **Write skeleton**: All sections EXCEPT individual task details.
2. **Edit-append**: Insert tasks before "## Final Verification Wave" in batches of 2-4.
3. **Verify completeness**: Read the plan file to confirm all tasks present.
</write_protocol>
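As a rough model of the protocol, the `Write`/`Edit` tool semantics below are assumptions simplified to string operations; this is a sketch of the idea, not the tools' actual API:

```typescript
// Simplified model of the write protocol: one overwrite-style Write for the
// skeleton, then batched insertions before the Final Verification Wave anchor.
const ANCHOR = "## Final Verification Wave"

function writeSkeleton(title: string): string {
  // step 1: every section EXCEPT individual task details
  return `# ${title}\n\n## TODOs\n\n${ANCHOR}\n`
}

function editAppendTasks(plan: string, tasks: string[], batchSize = 3): string {
  // step 2: insert tasks before the anchor in batches of 2-4
  for (let i = 0; i < tasks.length; i += batchSize) {
    const batch = tasks.slice(i, i + batchSize).join("\n") + "\n"
    plan = plan.replace(ANCHOR, batch + ANCHOR)
  }
  return plan
}
```

Because each batch is inserted before the same anchor, task order is preserved and no batch overwrites an earlier one, which is the point of splitting one Write into many Edits.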
### Step 4: Self-Review + Gap Classification
| Gap Type | Action |
|----------|--------|
| **Critical** (requires user decision) | Add \`[DECISION NEEDED: {desc}]\` placeholder. List in summary. Ask user. |
| **Minor** (self-resolvable) | Fix silently. Note in summary under "Auto-Resolved". |
| **Ambiguous** (reasonable default) | Apply default. Note in summary under "Defaults Applied". |
Self-review checklist:
\`\`\`
□ All TODOs have concrete acceptance criteria?
□ All file references exist in codebase?
□ No business logic assumptions without evidence?
□ Metis guardrails incorporated?
□ Every task has QA scenarios (happy + failure)?
□ QA scenarios use specific selectors/data, not vague descriptions?
□ Zero acceptance criteria require human intervention?
\`\`\`
### Step 5: Present Summary
\`\`\`
## Plan Generated: {name}
**Key Decisions**: [decision]: [rationale]
**Scope**: IN: [...] | OUT: [...]
**Guardrails** (from Metis): [guardrail]
**Auto-Resolved**: [gap]: [how fixed]
**Defaults Applied**: [default]: [assumption]
**Decisions Needed**: [question requiring user input] (if any)
Plan saved to: .sisyphus/plans/{name}.md
\`\`\`
If "Decisions Needed" exists, wait for user response and update plan.
### Step 6: Offer Choice (Question tool)
\`\`\`typescript
Question({ questions: [{
question: "Plan is ready. How would you like to proceed?",
header: "Next Step",
options: [
{ label: "Start Work", description: "Execute now with /start-work. Plan looks solid." },
{ label: "High Accuracy Review", description: "Momus verifies every detail. Adds review loop." }
]
}]})
\`\`\`
---
## Phase 4: High Accuracy Review (Momus Loop)
Only activated when user selects "High Accuracy Review".
\`\`\`typescript
while (true) {
const result = task(subagent_type="momus", load_skills=[],
run_in_background=false, prompt=".sisyphus/plans/{name}.md")
if (result.verdict === "OKAY") break
// Fix ALL issues. Resubmit. No excuses, no shortcuts, no "good enough".
}
\`\`\`
**Momus invocation rule**: Provide ONLY the file path as prompt. No explanations or wrapping.
Momus says "OKAY" only when: 100% file references verified, ≥80% tasks have reference sources, ≥90% have concrete acceptance criteria, zero business logic assumptions.
---
## Handoff
After plan is complete (direct or Momus-approved):
1. Delete draft: \`Bash("rm .sisyphus/drafts/{name}.md")\`
2. Guide user: "Plan saved to \`.sisyphus/plans/{name}.md\`. Run \`/start-work\` to begin execution."
</phases>
<plan_template>
## Plan Structure
Generate to: \`.sisyphus/plans/{name}.md\`
**Single Plan Mandate**: No matter how large the task, EVERYTHING goes into ONE plan. Never split into "Phase 1, Phase 2". 50+ TODOs is fine.
### Template
\`\`\`markdown
# {Plan Title}
## TL;DR
> **Summary**: [1-2 sentences]
> **Deliverables**: [bullet list]
> **Effort**: [Quick | Short | Medium | Large | XL]
> **Parallel**: [YES - N waves | NO]
> **Critical Path**: [Task X → Y → Z]
## Context
### Original Request
### Interview Summary
### Metis Review (gaps addressed)
## Work Objectives
### Core Objective
### Deliverables
### Definition of Done (verifiable conditions with commands)
### Must Have
### Must NOT Have (guardrails, AI slop patterns, scope boundaries)
## Verification Strategy
> ZERO HUMAN INTERVENTION — all verification is agent-executed.
- Test decision: [TDD / tests-after / none] + framework
- QA policy: Every task has agent-executed scenarios
- Evidence: .sisyphus/evidence/task-{N}-{slug}.{ext}
## Execution Strategy
### Parallel Execution Waves
> Target: 5-8 tasks per wave. <3 per wave (except final) = under-splitting.
> Extract shared dependencies as Wave-1 tasks for max parallelism.
Wave 1: [foundation tasks with categories]
Wave 2: [dependent tasks with categories]
...
### Dependency Matrix (full, all tasks)
### Agent Dispatch Summary (wave → task count → categories)
## TODOs
> Implementation + Test = ONE task. Never separate.
> EVERY task MUST have: Agent Profile + Parallelization + QA Scenarios.
- [ ] N. {Task Title}
**What to do**: [clear implementation steps]
**Must NOT do**: [specific exclusions]
**Recommended Agent Profile**:
- Category: \`[name]\` — Reason: [why]
- Skills: [\`skill-1\`] — [why needed]
- Omitted: [\`skill-x\`] — [why not needed]
**Parallelization**: Can Parallel: YES/NO | Wave N | Blocks: [tasks] | Blocked By: [tasks]
**References** (executor has NO interview context — be exhaustive):
- Pattern: \`src/path:lines\` — [what to follow and why]
- API/Type: \`src/types/x.ts:TypeName\` — [contract to implement]
- Test: \`src/__tests__/x.test.ts\` — [testing patterns]
- External: \`url\` — [docs reference]
**Acceptance Criteria** (agent-executable only):
- [ ] [verifiable condition with command]
**QA Scenarios** (MANDATORY — task incomplete without these):
\\\`\\\`\\\`
Scenario: [Happy path]
Tool: [Playwright / interactive_bash / Bash]
Steps: [exact actions with specific selectors/data/commands]
Expected: [concrete, binary pass/fail]
Evidence: .sisyphus/evidence/task-{N}-{slug}.{ext}
Scenario: [Failure/edge case]
Tool: [same]
Steps: [trigger error condition]
Expected: [graceful failure with correct error message/code]
Evidence: .sisyphus/evidence/task-{N}-{slug}-error.{ext}
\\\`\\\`\\\`
**Commit**: YES/NO | Message: \`type(scope): desc\` | Files: [paths]
## Final Verification Wave (4 parallel agents, ALL must APPROVE)
- [ ] F1. Plan Compliance Audit — oracle
- [ ] F2. Code Quality Review — unspecified-high
- [ ] F3. Real Manual QA — unspecified-high (+ playwright if UI)
- [ ] F4. Scope Fidelity Check — deep
## Commit Strategy
## Success Criteria
\`\`\`
</plan_template>
<tool_usage_rules>
- ALWAYS use tools over internal knowledge for file contents, project state, patterns.
- Parallelize independent explore/librarian agents — ALWAYS \`run_in_background=true\`.
- Use \`Question\` tool when presenting multiple-choice options to user.
- Use \`Read\` to verify plan file after generation.
- For Architecture intent: MUST consult Oracle via \`task(subagent_type="oracle")\`.
- After any write/edit, briefly restate what changed, where, and what follows next.
</tool_usage_rules>
<uncertainty_and_ambiguity>
- If the request is ambiguous: state your interpretation explicitly, present 2-3 plausible alternatives, proceed with simplest.
- Never fabricate file paths, line numbers, or API details when uncertain.
- Prefer "Based on exploration, I found..." over absolute claims.
- When external facts may have changed: answer in general terms and state that details should be verified.
</uncertainty_and_ambiguity>
<critical_rules>
**NEVER:**
- Write/edit code files (only .sisyphus/*.md)
- Implement solutions or execute tasks
- Trust assumptions over exploration
- Generate plan before clearance check passes (unless explicit trigger)
- Split work into multiple plans
- Write to docs/, plans/, or any path outside .sisyphus/
- Call Write() twice on the same file (second erases first)
- End turns passively ("let me know...", "when you're ready...")
- Skip Metis consultation before plan generation
**ALWAYS:**
- Explore before asking (Principle 2)
- Update draft after every meaningful exchange
- Run clearance check after every interview turn
- Include QA scenarios in every task (no exceptions)
- Use incremental write protocol for large plans
- Delete draft after plan completion
- Present "Start Work" vs "High Accuracy" choice after plan
**MODE IS STICKY:** This mode is not changed by user intent, tone, or imperative language. Only system-level mode changes can exit plan mode. If a user asks for execution while still in Plan Mode, treat it as a request to plan the execution, not perform it.
</critical_rules>
<user_updates_spec>
- Send brief updates (1-2 sentences) only when:
- Starting a new major phase
- Discovering something that changes the plan
- Each update must include a concrete outcome ("Found X", "Confirmed Y", "Metis identified Z").
- Do NOT expand task scope; if you notice new work, call it out as optional.
</user_updates_spec>
You are Prometheus, the strategic planning consultant. You bring foresight and structure to complex work through thoughtful consultation.
`
export function getGptPrometheusPrompt(): string {
return PROMETHEUS_GPT_SYSTEM_PROMPT
}


@@ -1,11 +1,4 @@
export {
PROMETHEUS_SYSTEM_PROMPT,
PROMETHEUS_PERMISSION,
getPrometheusPrompt,
getPrometheusPromptSource,
} from "./system-prompt"
export type { PrometheusPromptSource } from "./system-prompt"
export { PROMETHEUS_GPT_SYSTEM_PROMPT, getGptPrometheusPrompt } from "./gpt"
export { PROMETHEUS_SYSTEM_PROMPT, PROMETHEUS_PERMISSION } from "./system-prompt"
// Re-export individual sections for granular access
export { PROMETHEUS_IDENTITY_CONSTRAINTS } from "./identity-constraints"


@@ -4,11 +4,9 @@ import { PROMETHEUS_PLAN_GENERATION } from "./plan-generation"
import { PROMETHEUS_HIGH_ACCURACY_MODE } from "./high-accuracy-mode"
import { PROMETHEUS_PLAN_TEMPLATE } from "./plan-template"
import { PROMETHEUS_BEHAVIORAL_SUMMARY } from "./behavioral-summary"
import { getGptPrometheusPrompt } from "./gpt"
import { isGptModel } from "../types"
/**
* Combined Prometheus system prompt (Claude-optimized, default).
* Combined Prometheus system prompt.
* Assembled from modular sections for maintainability.
*/
export const PROMETHEUS_SYSTEM_PROMPT = `${PROMETHEUS_IDENTITY_CONSTRAINTS}
@@ -29,32 +27,3 @@ export const PROMETHEUS_PERMISSION = {
webfetch: "allow" as const,
question: "allow" as const,
}
export type PrometheusPromptSource = "default" | "gpt"
/**
* Determines which Prometheus prompt to use based on model.
*/
export function getPrometheusPromptSource(model?: string): PrometheusPromptSource {
if (model && isGptModel(model)) {
return "gpt"
}
return "default"
}
/**
* Gets the appropriate Prometheus prompt based on model.
* GPT models → GPT-5.2 optimized prompt (XML-tagged, principle-driven)
* Default (Claude, etc.) → Claude-optimized prompt (modular sections)
*/
export function getPrometheusPrompt(model?: string): string {
const source = getPrometheusPromptSource(model)
switch (source) {
case "gpt":
return getGptPrometheusPrompt()
case "default":
default:
return PROMETHEUS_SYSTEM_PROMPT
}
}


@@ -190,29 +190,6 @@ You are "Sisyphus" - Powerful AI Agent with orchestration capabilities from OhMy
${keyTriggers}
<intent_verbalization>
### Step 0: Verbalize Intent (BEFORE Classification)
Before classifying the task, identify what the user actually wants from you as an orchestrator. Map the surface form to the true intent, then announce your routing decision out loud.
**Intent → Routing Map:**
| Surface Form | True Intent | Your Routing |
|---|---|---|
| "explain X", "how does Y work" | Research/understanding | explore/librarian → synthesize → answer |
| "implement X", "add Y", "create Z" | Implementation (explicit) | plan → delegate or execute |
| "look into X", "check Y", "investigate" | Investigation | explore → report findings |
| "what do you think about X?" | Evaluation | evaluate → propose → **wait for confirmation** |
| "I'm seeing error X" / "Y is broken" | Fix needed | diagnose → fix minimally |
| "refactor", "improve", "clean up" | Open-ended change | assess codebase first → propose approach |
**Verbalize before proceeding:**
> "I detect [research / implementation / investigation / evaluation / fix / open-ended] intent — [reason]. My approach: [explore → answer / plan → delegate / clarify first / etc.]."
This verbalization anchors your routing decision and makes your reasoning transparent to the user. It does NOT commit you to implementation — only the user's explicit request does that.
</intent_verbalization>
### Step 1: Classify Request Type
- **Trivial** (single file, known location, direct answer) → Direct tools only (UNLESS Key Trigger applies)
@@ -329,9 +306,9 @@ result = task(..., run_in_background=false) // Never wait synchronously for exp
### Background Result Collection:
1. Launch parallel agents → receive task_ids
2. Continue immediate work
3. When results needed: \`background_output(task_id="...")\`
4. Before final answer, cancel DISPOSABLE tasks (explore, librarian) individually: \`background_cancel(taskId="bg_explore_xxx")\`, \`background_cancel(taskId="bg_librarian_xxx")\`
5. **NEVER cancel Oracle.** ALWAYS collect Oracle result via \`background_output(task_id="bg_oracle_xxx")\` before answering — even if you already have enough context.
3. When results needed: \`background_output(task_id=\"...\")\`
4. Before final answer, cancel DISPOSABLE tasks (explore, librarian) individually: \`background_cancel(taskId=\"bg_explore_xxx\")\`, \`background_cancel(taskId=\"bg_librarian_xxx\")\`
5. **NEVER cancel Oracle.** ALWAYS collect Oracle result via \`background_output(task_id=\"bg_oracle_xxx\")\` before answering — even if you already have enough context.
6. **NEVER use \`background_cancel(all=true)\`** — it kills Oracle. Cancel each disposable task by its specific taskId.
### Search Stop Conditions
@@ -467,7 +444,7 @@ If verification fails:
3. Report: "Done. Note: found N pre-existing lint errors unrelated to my changes."
### Before Delivering Final Answer:
- Cancel DISPOSABLE background tasks (explore, librarian) individually via \`background_cancel(taskId="...")\`
- Cancel DISPOSABLE background tasks (explore, librarian) individually via \`background_cancel(taskId=\"...\")\`
- **NEVER use \`background_cancel(all=true)\`.** Always cancel individually by taskId.
- **Always wait for Oracle**: When Oracle is running and you have gathered enough context from your own exploration, your next action is \`background_output\` on Oracle — NOT delivering a final answer. Oracle's value is highest when you think you don't need it.
</Behavior_Instructions>
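The disposable-vs-Oracle cancellation policy above can be sketched as a tiny tracker. This is illustrative only: the `bg_*` task IDs follow the prompt's examples, but the tracker class itself is hypothetical and stands in for the real `background_cancel` / `background_output` tooling.

```typescript
// Illustrative sketch of the cancellation policy described above.
// The tracker class is hypothetical; only the policy it encodes mirrors
// the prompt: cancel disposables individually, never cancel Oracle,
// and never use a cancel-all operation.
type TaskKind = "explore" | "librarian" | "oracle";

class BackgroundTaskTracker {
  private tasks = new Map<string, TaskKind>();

  register(taskId: string, kind: TaskKind): void {
    this.tasks.set(taskId, kind);
  }

  // Task IDs that are safe to cancel before delivering a final answer.
  disposableIds(): string[] {
    return [...this.tasks]
      .filter(([, kind]) => kind !== "oracle")
      .map(([id]) => id);
  }

  // Oracle results must always be collected, never cancelled.
  mustCollectIds(): string[] {
    return [...this.tasks]
      .filter(([, kind]) => kind === "oracle")
      .map(([id]) => id);
  }
}

const tracker = new BackgroundTaskTracker();
tracker.register("bg_explore_1", "explore");
tracker.register("bg_librarian_1", "librarian");
tracker.register("bg_oracle_1", "oracle");
```

Cancelling by iterating `disposableIds()` gives the same effect as the prompt's per-taskId rule while making the "never cancel Oracle" invariant structural rather than behavioral.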

View File

@@ -100,7 +100,6 @@ export type AgentName = BuiltinAgentName
export type AgentOverrideConfig = Partial<AgentConfig> & {
prompt_append?: string
variant?: string
fallback_models?: string | string[]
}
export type AgentOverrides = Partial<Record<OverridableAgentName, AgentOverrideConfig>>

View File

@@ -18,7 +18,7 @@ describe("createBuiltinAgents with model overrides", () => {
"anthropic/claude-opus-4-6",
"kimi-for-coding/k2p5",
"opencode/kimi-k2.5-free",
"zai-coding-plan/glm-5",
"zai-coding-plan/glm-4.7",
"opencode/big-pickle",
])
)
@@ -51,7 +51,7 @@ describe("createBuiltinAgents with model overrides", () => {
expect(agents.sisyphus.thinking).toBeUndefined()
})
test("Atlas uses uiSelectedModel", async () => {
test("Atlas uses uiSelectedModel when provided", async () => {
// #given
const fetchSpy = spyOn(shared, "fetchAvailableModels").mockResolvedValue(
new Set(["openai/gpt-5.2", "anthropic/claude-sonnet-4-6"])
@@ -259,7 +259,7 @@ describe("createBuiltinAgents with model overrides", () => {
"anthropic/claude-opus-4-6",
"kimi-for-coding/k2p5",
"opencode/kimi-k2.5-free",
"zai-coding-plan/glm-5",
"zai-coding-plan/glm-4.7",
"opencode/big-pickle",
"openai/gpt-5.2",
])
@@ -505,7 +505,7 @@ describe("createBuiltinAgents without systemDefaultModel", () => {
"anthropic/claude-opus-4-6",
"kimi-for-coding/k2p5",
"opencode/kimi-k2.5-free",
"zai-coding-plan/glm-5",
"zai-coding-plan/glm-4.7",
"opencode/big-pickle",
])
)
@@ -662,178 +662,6 @@ describe("createBuiltinAgents with requiresProvider gating (hephaestus)", () =>
})
})
describe("Hephaestus environment context toggle", () => {
let fetchSpy: ReturnType<typeof spyOn>
beforeEach(() => {
fetchSpy = spyOn(shared, "fetchAvailableModels").mockResolvedValue(
new Set(["openai/gpt-5.3-codex"])
)
})
afterEach(() => {
fetchSpy.mockRestore()
})
async function buildAgents(disableFlag?: boolean) {
return createBuiltinAgents(
[],
{},
"/tmp/work",
TEST_DEFAULT_MODEL,
undefined,
undefined,
[],
undefined,
undefined,
undefined,
undefined,
undefined,
disableFlag
)
}
test("includes <omo-env> tag when disable flag is unset", async () => {
// #when
const agents = await buildAgents(undefined)
// #then
expect(agents.hephaestus).toBeDefined()
expect(agents.hephaestus.prompt).toContain("<omo-env>")
})
test("includes <omo-env> tag when disable flag is false", async () => {
// #when
const agents = await buildAgents(false)
// #then
expect(agents.hephaestus).toBeDefined()
expect(agents.hephaestus.prompt).toContain("<omo-env>")
})
test("omits <omo-env> tag when disable flag is true", async () => {
// #when
const agents = await buildAgents(true)
// #then
expect(agents.hephaestus).toBeDefined()
expect(agents.hephaestus.prompt).not.toContain("<omo-env>")
})
})
describe("Sisyphus and Librarian environment context toggle", () => {
let fetchSpy: ReturnType<typeof spyOn>
beforeEach(() => {
fetchSpy = spyOn(shared, "fetchAvailableModels").mockResolvedValue(
new Set(["anthropic/claude-opus-4-6", "google/gemini-3-flash"])
)
})
afterEach(() => {
fetchSpy.mockRestore()
})
async function buildAgents(disableFlag?: boolean) {
return createBuiltinAgents(
[],
{},
"/tmp/work",
TEST_DEFAULT_MODEL,
undefined,
undefined,
[],
undefined,
undefined,
undefined,
undefined,
undefined,
disableFlag
)
}
test("includes <omo-env> for sisyphus and librarian when disable flag is unset", async () => {
const agents = await buildAgents(undefined)
expect(agents.sisyphus).toBeDefined()
expect(agents.librarian).toBeDefined()
expect(agents.sisyphus.prompt).toContain("<omo-env>")
expect(agents.librarian.prompt).toContain("<omo-env>")
})
test("includes <omo-env> for sisyphus and librarian when disable flag is false", async () => {
const agents = await buildAgents(false)
expect(agents.sisyphus).toBeDefined()
expect(agents.librarian).toBeDefined()
expect(agents.sisyphus.prompt).toContain("<omo-env>")
expect(agents.librarian.prompt).toContain("<omo-env>")
})
test("omits <omo-env> for sisyphus and librarian when disable flag is true", async () => {
const agents = await buildAgents(true)
expect(agents.sisyphus).toBeDefined()
expect(agents.librarian).toBeDefined()
expect(agents.sisyphus.prompt).not.toContain("<omo-env>")
expect(agents.librarian.prompt).not.toContain("<omo-env>")
})
})
describe("Atlas is unaffected by environment context toggle", () => {
let fetchSpy: ReturnType<typeof spyOn>
beforeEach(() => {
fetchSpy = spyOn(shared, "fetchAvailableModels").mockResolvedValue(
new Set(["anthropic/claude-opus-4-6", "openai/gpt-5.2"])
)
})
afterEach(() => {
fetchSpy.mockRestore()
})
test("atlas prompt is unchanged and never contains <omo-env>", async () => {
const agentsDefault = await createBuiltinAgents(
[],
{},
"/tmp/work",
TEST_DEFAULT_MODEL,
undefined,
undefined,
[],
undefined,
undefined,
undefined,
undefined,
undefined,
false
)
const agentsDisabled = await createBuiltinAgents(
[],
{},
"/tmp/work",
TEST_DEFAULT_MODEL,
undefined,
undefined,
[],
undefined,
undefined,
undefined,
undefined,
undefined,
true
)
expect(agentsDefault.atlas).toBeDefined()
expect(agentsDisabled.atlas).toBeDefined()
expect(agentsDefault.atlas.prompt).not.toContain("<omo-env>")
expect(agentsDisabled.atlas.prompt).not.toContain("<omo-env>")
expect(agentsDisabled.atlas.prompt).toBe(agentsDefault.atlas.prompt)
})
})
describe("createBuiltinAgents with requiresAnyModel gating (sisyphus)", () => {
test("sisyphus is created when at least one fallback model is available", async () => {
// #given

View File

@@ -72,7 +72,7 @@ exports[`generateModelConfig single native provider uses Claude models when only
"model": "anthropic/claude-haiku-4-5",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "anthropic/claude-sonnet-4-6",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -83,7 +83,7 @@ exports[`generateModelConfig single native provider uses Claude models when only
"variant": "max",
},
"multimodal-looker": {
"model": "opencode/big-pickle",
"model": "anthropic/claude-haiku-4-5",
},
"oracle": {
"model": "anthropic/claude-opus-4-6",
@@ -134,7 +134,7 @@ exports[`generateModelConfig single native provider uses Claude models with isMa
"model": "anthropic/claude-haiku-4-5",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "anthropic/claude-sonnet-4-6",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -145,7 +145,7 @@ exports[`generateModelConfig single native provider uses Claude models with isMa
"variant": "max",
},
"multimodal-looker": {
"model": "opencode/big-pickle",
"model": "anthropic/claude-haiku-4-5",
},
"oracle": {
"model": "anthropic/claude-opus-4-6",
@@ -201,7 +201,7 @@ exports[`generateModelConfig single native provider uses OpenAI models when only
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "opencode/big-pickle",
},
"metis": {
"model": "openai/gpt-5.2",
@@ -268,7 +268,7 @@ exports[`generateModelConfig single native provider uses OpenAI models with isMa
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "opencode/big-pickle",
},
"metis": {
"model": "openai/gpt-5.2",
@@ -325,13 +325,13 @@ exports[`generateModelConfig single native provider uses Gemini models when only
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "opencode/big-pickle",
"model": "google/gemini-3-pro",
},
"explore": {
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "opencode/big-pickle",
},
"metis": {
"model": "google/gemini-3-pro",
@@ -386,13 +386,13 @@ exports[`generateModelConfig single native provider uses Gemini models with isMa
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"agents": {
"atlas": {
"model": "opencode/big-pickle",
"model": "google/gemini-3-pro",
},
"explore": {
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "opencode/big-pickle",
},
"metis": {
"model": "google/gemini-3-pro",
@@ -457,7 +457,7 @@ exports[`generateModelConfig all native providers uses preferred models from fal
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "anthropic/claude-sonnet-4-6",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -531,7 +531,7 @@ exports[`generateModelConfig all native providers uses preferred models with isM
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "anthropic/claude-sonnet-4-6",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -606,7 +606,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "opencode/big-pickle",
},
"metis": {
"model": "opencode/claude-opus-4-6",
@@ -617,7 +617,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models when on
"variant": "medium",
},
"multimodal-looker": {
"model": "opencode/kimi-k2.5-free",
"model": "opencode/gemini-3-flash",
},
"oracle": {
"model": "opencode/gpt-5.2",
@@ -680,7 +680,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "opencode/big-pickle",
},
"metis": {
"model": "opencode/claude-opus-4-6",
@@ -691,7 +691,7 @@ exports[`generateModelConfig fallback providers uses OpenCode Zen models with is
"variant": "medium",
},
"multimodal-looker": {
"model": "opencode/kimi-k2.5-free",
"model": "opencode/gemini-3-flash",
},
"oracle": {
"model": "opencode/gpt-5.2",
@@ -755,7 +755,7 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models when
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "github-copilot/claude-sonnet-4.6",
},
"metis": {
"model": "github-copilot/claude-opus-4.6",
@@ -829,7 +829,7 @@ exports[`generateModelConfig fallback providers uses GitHub Copilot models with
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "github-copilot/claude-sonnet-4.6",
},
"metis": {
"model": "github-copilot/claude-opus-4.6",
@@ -900,7 +900,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian whe
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "opencode/big-pickle",
@@ -918,7 +918,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian whe
"model": "opencode/big-pickle",
},
"sisyphus": {
"model": "zai-coding-plan/glm-5",
"model": "zai-coding-plan/glm-4.7",
},
},
"categories": {
@@ -955,7 +955,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian wit
"model": "opencode/gpt-5-nano",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "opencode/big-pickle",
@@ -973,7 +973,7 @@ exports[`generateModelConfig fallback providers uses ZAI model for librarian wit
"model": "opencode/big-pickle",
},
"sisyphus": {
"model": "zai-coding-plan/glm-5",
"model": "zai-coding-plan/glm-4.7",
},
},
"categories": {
@@ -1014,7 +1014,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "opencode/big-pickle",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -1025,7 +1025,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + OpenCode Zen
"variant": "medium",
},
"multimodal-looker": {
"model": "opencode/kimi-k2.5-free",
"model": "opencode/gemini-3-flash",
},
"oracle": {
"model": "opencode/gpt-5.2",
@@ -1088,7 +1088,7 @@ exports[`generateModelConfig mixed provider scenarios uses OpenAI + Copilot comb
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "github-copilot/claude-sonnet-4.6",
},
"metis": {
"model": "github-copilot/claude-opus-4.6",
@@ -1158,7 +1158,7 @@ exports[`generateModelConfig mixed provider scenarios uses Claude + ZAI combinat
"model": "anthropic/claude-haiku-4-5",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -1219,7 +1219,7 @@ exports[`generateModelConfig mixed provider scenarios uses Gemini + Claude combi
"model": "anthropic/claude-haiku-4-5",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "anthropic/claude-sonnet-4-6",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -1289,7 +1289,7 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "github-copilot/claude-opus-4.6",
@@ -1300,7 +1300,7 @@ exports[`generateModelConfig mixed provider scenarios uses all fallback provider
"variant": "medium",
},
"multimodal-looker": {
"model": "opencode/kimi-k2.5-free",
"model": "github-copilot/gemini-3-flash-preview",
},
"oracle": {
"model": "github-copilot/gpt-5.2",
@@ -1363,7 +1363,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -1374,7 +1374,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers togethe
"variant": "medium",
},
"multimodal-looker": {
"model": "opencode/kimi-k2.5-free",
"model": "google/gemini-3-flash",
},
"oracle": {
"model": "openai/gpt-5.2",
@@ -1437,7 +1437,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is
"variant": "medium",
},
"librarian": {
"model": "opencode/minimax-m2.5-free",
"model": "zai-coding-plan/glm-4.7",
},
"metis": {
"model": "anthropic/claude-opus-4-6",
@@ -1448,7 +1448,7 @@ exports[`generateModelConfig mixed provider scenarios uses all providers with is
"variant": "medium",
},
"multimodal-looker": {
"model": "opencode/kimi-k2.5-free",
"model": "google/gemini-3-flash",
},
"oracle": {
"model": "openai/gpt-5.2",

View File

@@ -44,7 +44,7 @@ Model Providers (Priority: Native > Copilot > OpenCode Zen > Z.ai > Kimi):
Gemini Native google/ models (Gemini 3 Pro, Flash)
Copilot github-copilot/ models (fallback)
OpenCode Zen opencode/ models (opencode/claude-opus-4-6, etc.)
Z.ai zai-coding-plan/glm-5 (visual-engineering fallback)
Z.ai zai-coding-plan/glm-4.7 (Librarian priority)
Kimi kimi-for-coding/k2p5 (Sisyphus/Prometheus fallback)
`)
.action(async (options) => {

View File

@@ -281,7 +281,7 @@ describe("generateOmoConfig - model fallback system", () => {
expect((result.agents as Record<string, { model: string }>).sisyphus).toBeUndefined()
})
test("uses opencode/minimax-m2.5-free for librarian regardless of Z.ai", () => {
test("uses zai-coding-plan/glm-4.7 for librarian when Z.ai available", () => {
// #given user has Z.ai and Claude max20
const config: InstallConfig = {
hasClaude: true,
@@ -297,8 +297,8 @@ describe("generateOmoConfig - model fallback system", () => {
// #when generating config
const result = generateOmoConfig(config)
// #then librarian should use opencode/minimax-m2.5-free
expect((result.agents as Record<string, { model: string }>).librarian.model).toBe("opencode/minimax-m2.5-free")
// #then librarian should use zai-coding-plan/glm-4.7
expect((result.agents as Record<string, { model: string }>).librarian.model).toBe("zai-coding-plan/glm-4.7")
// #then Sisyphus uses Claude (OR logic)
expect((result.agents as Record<string, { model: string }>).sisyphus.model).toBe("anthropic/claude-opus-4-6")
})

View File

@@ -43,7 +43,7 @@ const testConfig: InstallConfig = {
describe("addAuthPlugins", () => {
describe("Test 1: JSONC with commented plugin line", () => {
it("preserves comment, does NOT add antigravity plugin", async () => {
it("preserves comment, updates actual plugin array", async () => {
const content = `{
// "plugin": ["old-plugin"]
"plugin": ["existing-plugin"],
@@ -59,18 +59,17 @@ describe("addAuthPlugins", () => {
const newContent = readFileSync(result.configPath, "utf-8")
expect(newContent).toContain('// "plugin": ["old-plugin"]')
expect(newContent).toContain('existing-plugin')
// antigravity plugin should NOT be auto-added anymore
expect(newContent).not.toContain('opencode-antigravity-auth')
expect(newContent).toContain('opencode-antigravity-auth')
const parsed = parseJsonc<Record<string, unknown>>(newContent)
const plugins = parsed.plugin as string[]
expect(plugins).toContain('existing-plugin')
expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(false)
expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(true)
})
})
describe("Test 2: Plugin array already contains antigravity", () => {
it("preserves existing antigravity, does not add another", async () => {
it("does not add duplicate", async () => {
const content = `{
"plugin": ["existing-plugin", "opencode-antigravity-auth"],
"provider": {}
@@ -88,7 +87,6 @@ describe("addAuthPlugins", () => {
const antigravityCount = plugins.filter((p) => p.startsWith('opencode-antigravity-auth')).length
expect(antigravityCount).toBe(1)
expect(plugins).toContain('existing-plugin')
})
})
@@ -158,7 +156,7 @@ describe("addAuthPlugins", () => {
})
describe("Test 6: No existing plugin array", () => {
it("creates empty plugin array when none exists, does NOT add antigravity", async () => {
it("creates plugin array when none exists", async () => {
const content = `{
"provider": {}
}`
@@ -174,9 +172,7 @@ describe("addAuthPlugins", () => {
const parsed = parseJsonc<Record<string, unknown>>(newContent)
expect(parsed).toHaveProperty('plugin')
const plugins = parsed.plugin as string[]
// antigravity plugin should NOT be auto-added anymore
expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(false)
expect(plugins.length).toBe(0)
expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(true)
})
})
@@ -203,7 +199,7 @@ describe("addAuthPlugins", () => {
})
describe("Test 8: Multiple plugins in array", () => {
it("preserves existing plugins, does NOT add antigravity", async () => {
it("appends to existing plugins", async () => {
const content = `{
"plugin": ["plugin-1", "plugin-2", "plugin-3"],
"provider": {}
@@ -222,9 +218,7 @@ describe("addAuthPlugins", () => {
expect(plugins).toContain('plugin-1')
expect(plugins).toContain('plugin-2')
expect(plugins).toContain('plugin-3')
// antigravity plugin should NOT be auto-added anymore
expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(false)
expect(plugins.length).toBe(3)
expect(plugins.some((p) => p.startsWith('opencode-antigravity-auth'))).toBe(true)
})
})
})

View File

@@ -50,8 +50,13 @@ export async function addAuthPlugins(config: InstallConfig): Promise<ConfigMerge
const rawPlugins = existingConfig?.plugin
const plugins: string[] = Array.isArray(rawPlugins) ? rawPlugins : []
// Note: opencode-antigravity-auth plugin auto-installation has been removed
// Users can manually add auth plugins if needed
if (config.hasGemini) {
const version = await fetchLatestVersion("opencode-antigravity-auth")
const pluginEntry = version ? `opencode-antigravity-auth@${version}` : "opencode-antigravity-auth"
if (!plugins.some((p) => p.startsWith("opencode-antigravity-auth"))) {
plugins.push(pluginEntry)
}
}
const newConfig = { ...(existingConfig ?? {}), plugin: plugins }
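The restored hunk's `startsWith` dedupe guard can be exercised in isolation. A minimal sketch, with `fetchLatestVersion` replaced by a plain `version` argument since the real call resolves the package version over the network:

```typescript
// Minimal re-implementation of the dedupe guard from addAuthPlugins.
// The version lookup is passed in rather than fetched; everything else
// mirrors the hunk: build a pinned or bare entry, append only if no
// opencode-antigravity-auth entry (pinned or not) is already present.
function addAntigravityPlugin(
  plugins: string[],
  version: string | null
): string[] {
  const entry = version
    ? `opencode-antigravity-auth@${version}`
    : "opencode-antigravity-auth";
  if (!plugins.some((p) => p.startsWith("opencode-antigravity-auth"))) {
    return [...plugins, entry];
  }
  return plugins;
}
```

The `startsWith` check is what makes Test 2 pass: a pre-existing unpinned entry blocks a pinned one from being appended, so the array never holds two variants of the same plugin.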

View File

@@ -491,18 +491,18 @@ describe("generateModelConfig", () => {
const result = generateModelConfig(config)
// #then librarian should use ZAI_MODEL
expect(result.agents?.librarian?.model).toBe("opencode/minimax-m2.5-free")
expect(result.agents?.librarian?.model).toBe("zai-coding-plan/glm-4.7")
})
test("librarian always uses minimax-m2.5-free regardless of provider availability", () => {
test("librarian uses claude-sonnet when ZAI not available but Claude is", () => {
// #given only Claude is available (no ZAI)
const config = createConfig({ hasClaude: true })
// #when generateModelConfig is called
const result = generateModelConfig(config)
// #then librarian should use opencode/minimax-m2.5-free (always first in chain)
expect(result.agents?.librarian?.model).toBe("opencode/minimax-m2.5-free")
// #then librarian should use claude-sonnet-4-6 (third in fallback chain after ZAI and opencode/glm)
expect(result.agents?.librarian?.model).toBe("anthropic/claude-sonnet-4-6")
})
})

View File

@@ -16,7 +16,7 @@ import {
export type { GeneratedOmoConfig } from "./model-fallback-types"
const LIBRARIAN_MODEL = "opencode/minimax-m2.5-free"
const ZAI_MODEL = "zai-coding-plan/glm-4.7"
const ULTIMATE_FALLBACK = "opencode/big-pickle"
const SCHEMA_URL = "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json"
@@ -52,8 +52,8 @@ export function generateModelConfig(config: InstallConfig): GeneratedOmoConfig {
const categories: Record<string, CategoryConfig> = {}
for (const [role, req] of Object.entries(AGENT_MODEL_REQUIREMENTS)) {
if (role === "librarian") {
agents[role] = { model: LIBRARIAN_MODEL }
if (role === "librarian" && avail.zai) {
agents[role] = { model: ZAI_MODEL }
continue
}
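The changed branch only special-cases librarian when Z.ai is available; otherwise it falls through to the generic requirement chain. A reduced sketch of the resulting selection order, where the availability flags and the hard-coded chain are simplified stand-ins for the real `AGENT_MODEL_REQUIREMENTS` lookup:

```typescript
// Simplified stand-in for the librarian selection implied by the updated
// tests and snapshots: Z.ai first, then Claude Sonnet, then the ultimate
// fallback. The real code consults AGENT_MODEL_REQUIREMENTS instead of
// hard-coding this order.
interface Availability {
  zai: boolean;
  claude: boolean;
}

function pickLibrarianModel(avail: Availability): string {
  if (avail.zai) return "zai-coding-plan/glm-4.7";
  if (avail.claude) return "anthropic/claude-sonnet-4-6";
  return "opencode/big-pickle";
}
```

This matches the snapshot updates above: ZAI-enabled configs pin librarian to `zai-coding-plan/glm-4.7`, Claude-only configs fall back to `anthropic/claude-sonnet-4-6`, and provider-less configs land on `opencode/big-pickle`.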

View File

@@ -17,7 +17,7 @@ config/schema/
├── hooks.ts # HookNameSchema (46 hooks)
├── skills.ts # SkillsConfigSchema (sources, paths, recursive)
├── commands.ts # BuiltinCommandNameSchema
├── experimental.ts # Feature flags (plugin_load_timeout_ms min 1000)
├── experimental.ts # Feature flags (plugin_load_timeout_ms min 1000, hashline_edit)
├── sisyphus.ts # SisyphusConfigSchema (task system)
├── sisyphus-agent.ts # SisyphusAgentConfigSchema
├── ralph-loop.ts # RalphLoopConfigSchema
@@ -34,9 +34,9 @@ config/schema/
└── internal/permission.ts # AgentPermissionSchema
```
## ROOT SCHEMA FIELDS (27)
## ROOT SCHEMA FIELDS (26)
`$schema`, `new_task_system_enabled`, `default_run_agent`, `disabled_mcps`, `disabled_agents`, `disabled_skills`, `disabled_hooks`, `disabled_commands`, `disabled_tools`, `hashline_edit`, `agents`, `categories`, `claude_code`, `sisyphus_agent`, `comment_checker`, `experimental`, `auto_update`, `skills`, `ralph_loop`, `background_task`, `notification`, `babysitting`, `git_master`, `browser_automation_engine`, `websearch`, `tmux`, `sisyphus`, `_migrations`
`$schema`, `new_task_system_enabled`, `default_run_agent`, `disabled_mcps`, `disabled_agents`, `disabled_skills`, `disabled_hooks`, `disabled_commands`, `disabled_tools`, `agents`, `categories`, `claude_code`, `sisyphus_agent`, `comment_checker`, `experimental`, `auto_update`, `skills`, `ralph_loop`, `background_task`, `notification`, `babysitting`, `git_master`, `browser_automation_engine`, `websearch`, `tmux`, `sisyphus`, `_migrations`
## AGENT OVERRIDE FIELDS (21)

View File

@@ -11,8 +11,6 @@ export {
RalphLoopConfigSchema,
TmuxConfigSchema,
TmuxLayoutSchema,
RuntimeFallbackConfigSchema,
FallbackModelsSchema,
} from "./schema"
export type {
@@ -31,6 +29,4 @@ export type {
TmuxLayout,
SisyphusConfig,
SisyphusTasksConfig,
RuntimeFallbackConfig,
FallbackModels,
} from "./schema"

View File

@@ -644,55 +644,6 @@ describe("OhMyOpenCodeConfigSchema - browser_automation_engine", () => {
})
})
describe("OhMyOpenCodeConfigSchema - hashline_edit", () => {
test("accepts hashline_edit as true", () => {
//#given
const input = { hashline_edit: true }
//#when
const result = OhMyOpenCodeConfigSchema.safeParse(input)
//#then
expect(result.success).toBe(true)
expect(result.data?.hashline_edit).toBe(true)
})
test("accepts hashline_edit as false", () => {
//#given
const input = { hashline_edit: false }
//#when
const result = OhMyOpenCodeConfigSchema.safeParse(input)
//#then
expect(result.success).toBe(true)
expect(result.data?.hashline_edit).toBe(false)
})
test("hashline_edit is optional", () => {
//#given
const input = { auto_update: true }
//#when
const result = OhMyOpenCodeConfigSchema.safeParse(input)
//#then
expect(result.success).toBe(true)
expect(result.data?.hashline_edit).toBeUndefined()
})
test("rejects non-boolean hashline_edit", () => {
//#given
const input = { hashline_edit: "true" }
//#when
const result = OhMyOpenCodeConfigSchema.safeParse(input)
//#then
expect(result.success).toBe(false)
})
})
describe("ExperimentalConfigSchema feature flags", () => {
test("accepts plugin_load_timeout_ms as number", () => {
//#given
@@ -748,9 +699,9 @@ describe("ExperimentalConfigSchema feature flags", () => {
}
})
test("accepts disable_omo_env as true", () => {
test("accepts hashline_edit as true", () => {
//#given
const config = { disable_omo_env: true }
const config = { hashline_edit: true }
//#when
const result = ExperimentalConfigSchema.safeParse(config)
@@ -758,13 +709,13 @@ describe("ExperimentalConfigSchema feature flags", () => {
//#then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.disable_omo_env).toBe(true)
expect(result.data.hashline_edit).toBe(true)
}
})
test("accepts disable_omo_env as false", () => {
test("accepts hashline_edit as false", () => {
//#given
const config = { disable_omo_env: false }
const config = { hashline_edit: false }
//#when
const result = ExperimentalConfigSchema.safeParse(config)
@@ -772,11 +723,11 @@ describe("ExperimentalConfigSchema feature flags", () => {
//#then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.disable_omo_env).toBe(false)
expect(result.data.hashline_edit).toBe(false)
}
})
test("disable_omo_env is optional", () => {
test("hashline_edit is optional", () => {
//#given
const config = { safe_hook_creation: true }
@@ -786,13 +737,13 @@ describe("ExperimentalConfigSchema feature flags", () => {
//#then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.disable_omo_env).toBeUndefined()
expect(result.data.hashline_edit).toBeUndefined()
}
})
test("rejects non-boolean disable_omo_env", () => {
test("rejects non-boolean hashline_edit", () => {
//#given
const config = { disable_omo_env: "true" }
const config = { hashline_edit: "true" }
//#when
const result = ExperimentalConfigSchema.safeParse(config)
@@ -800,7 +751,6 @@ describe("ExperimentalConfigSchema feature flags", () => {
//#then
expect(result.success).toBe(false)
})
})
describe("GitMasterConfigSchema", () => {

View File

@@ -9,13 +9,11 @@ export * from "./schema/comment-checker"
export * from "./schema/commands"
export * from "./schema/dynamic-context-pruning"
export * from "./schema/experimental"
export * from "./schema/fallback-models"
export * from "./schema/git-master"
export * from "./schema/hooks"
export * from "./schema/notification"
export * from "./schema/oh-my-opencode-config"
export * from "./schema/ralph-loop"
export * from "./schema/runtime-fallback"
export * from "./schema/skills"
export * from "./schema/sisyphus"
export * from "./schema/sisyphus-agent"

View File

@@ -1,11 +1,9 @@
import { z } from "zod"
import { FallbackModelsSchema } from "./fallback-models"
import { AgentPermissionSchema } from "./internal/permission"
export const AgentOverrideConfigSchema = z.object({
/** @deprecated Use `category` instead. Model is inherited from category defaults. */
model: z.string().optional(),
fallback_models: FallbackModelsSchema.optional(),
variant: z.string().optional(),
/** Category name to inherit model and other settings from CategoryConfig */
category: z.string().optional(),
@@ -40,13 +38,6 @@ export const AgentOverrideConfigSchema = z.object({
textVerbosity: z.enum(["low", "medium", "high"]).optional(),
/** Provider-specific options. Passed directly to OpenCode SDK. */
providerOptions: z.record(z.string(), z.unknown()).optional(),
/** Per-message ultrawork override model/variant when ultrawork keyword is detected. */
ultrawork: z
.object({
model: z.string().optional(),
variant: z.string().optional(),
})
.optional(),
})
export const AgentOverridesSchema = z.object({

View File

@@ -1,11 +1,9 @@
import { z } from "zod"
import { FallbackModelsSchema } from "./fallback-models"
export const CategoryConfigSchema = z.object({
/** Human-readable description of the category's purpose. Shown in task prompt. */
description: z.string().optional(),
model: z.string().optional(),
fallback_models: FallbackModelsSchema.optional(),
variant: z.string().optional(),
temperature: z.number().min(0).max(2).optional(),
top_p: z.number().min(0).max(1).optional(),

View File

@@ -15,8 +15,8 @@ export const ExperimentalConfigSchema = z.object({
plugin_load_timeout_ms: z.number().min(1000).optional(),
/** Wrap hook creation in try/catch to prevent one failing hook from crashing the plugin (default: true at call site) */
safe_hook_creation: z.boolean().optional(),
/** Disable auto-injected <omo-env> context in prompts (experimental) */
disable_omo_env: z.boolean().optional(),
/** Enable hashline_edit tool for improved file editing with hash-based line anchors */
hashline_edit: z.boolean().optional(),
})
export type ExperimentalConfig = z.infer<typeof ExperimentalConfigSchema>
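The relocated `hashline_edit` flag behaves like any other optional boolean on the experimental schema. A dependency-free sketch of the same contract the migrated tests assert (the real schema uses zod's `z.boolean().optional()`; this hand-rolled check only mimics its accept/reject behavior for that one field):

```typescript
// Hand-rolled check mirroring ExperimentalConfigSchema's hashline_edit
// contract: the field may be absent, and when present it must be a
// boolean — a string like "true" is rejected.
function validateHashlineEdit(config: Record<string, unknown>): boolean {
  const value = config["hashline_edit"];
  return value === undefined || typeof value === "boolean";
}
```

This is the same accept/reject split the test migration above preserves while moving the flag from the root schema into `experimental`.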

View File

@@ -1,5 +0,0 @@
import { z } from "zod"
export const FallbackModelsSchema = z.union([z.string(), z.array(z.string())])
export type FallbackModels = z.infer<typeof FallbackModelsSchema>

View File

@@ -38,7 +38,6 @@ export const HookNameSchema = z.enum([
"prometheus-md-only",
"sisyphus-junior-notepad",
"no-sisyphus-gpt",
"no-hephaestus-non-gpt",
"start-work",
"atlas",
"unstable-agent-babysitter",
@@ -46,11 +45,9 @@ export const HookNameSchema = z.enum([
"task-resume-info",
"stop-continuation-guard",
"tasks-todowrite-disabler",
"runtime-fallback",
"write-existing-file-guard",
"anthropic-effort",
"hashline-read-enhancer",
"hashline-edit-diff-enhancer",
])
export type HookName = z.infer<typeof HookNameSchema>

View File

@@ -14,7 +14,6 @@ import { GitMasterConfigSchema } from "./git-master"
import { HookNameSchema } from "./hooks"
import { NotificationConfigSchema } from "./notification"
import { RalphLoopConfigSchema } from "./ralph-loop"
import { RuntimeFallbackConfigSchema } from "./runtime-fallback"
import { SkillsConfigSchema } from "./skills"
import { SisyphusConfigSchema } from "./sisyphus"
import { SisyphusAgentConfigSchema } from "./sisyphus-agent"
@@ -34,8 +33,6 @@ export const OhMyOpenCodeConfigSchema = z.object({
disabled_commands: z.array(BuiltinCommandNameSchema).optional(),
/** Disable specific tools by name (e.g., ["todowrite", "todoread"]) */
disabled_tools: z.array(z.string()).optional(),
/** Enable hashline_edit tool/hook integrations (default: true at call site) */
hashline_edit: z.boolean().optional(),
agents: AgentOverridesSchema.optional(),
categories: CategoriesConfigSchema.optional(),
claude_code: ClaudeCodeConfigSchema.optional(),
@@ -45,7 +42,6 @@ export const OhMyOpenCodeConfigSchema = z.object({
auto_update: z.boolean().optional(),
skills: SkillsConfigSchema.optional(),
ralph_loop: RalphLoopConfigSchema.optional(),
runtime_fallback: RuntimeFallbackConfigSchema.optional(),
background_task: BackgroundTaskConfigSchema.optional(),
notification: NotificationConfigSchema.optional(),
babysitting: BabysittingConfigSchema.optional(),

View File

@@ -1,18 +0,0 @@
import { z } from "zod"
export const RuntimeFallbackConfigSchema = z.object({
/** Enable runtime fallback (default: true) */
enabled: z.boolean().optional(),
/** HTTP status codes that trigger fallback (default: [400, 429, 503, 529]) */
retry_on_errors: z.array(z.number()).optional(),
/** Maximum fallback attempts per session (default: 3) */
max_fallback_attempts: z.number().min(1).max(20).optional(),
/** Cooldown in seconds before retrying a failed model (default: 60) */
cooldown_seconds: z.number().min(0).optional(),
/** Session-level timeout in seconds to advance fallback when provider hangs (default: 30). Set to 0 to disable auto-retry signal detection (only error-based fallback remains active). */
timeout_seconds: z.number().min(0).optional(),
/** Show toast notification when switching to fallback model (default: true) */
notify_on_fallback: z.boolean().optional(),
})
export type RuntimeFallbackConfig = z.infer<typeof RuntimeFallbackConfigSchema>

View File

@@ -13,10 +13,8 @@ import {
normalizeSDKResponse,
promptWithModelSuggestionRetry,
resolveInheritedPromptTools,
createInternalAgentTextPart,
} from "../../shared"
import { setSessionTools } from "../../shared/session-tools-store"
import { SessionCategoryRegistry } from "../../shared/session-category-registry"
import { ConcurrencyManager } from "./concurrency"
import type { BackgroundTaskConfig, TmuxConfig } from "../../config/schema"
import { isInsideTmux } from "../../shared/tmux"
@@ -860,7 +858,6 @@ export class BackgroundManager {
subagentSessions.delete(task.sessionID)
}
}
SessionCategoryRegistry.remove(sessionID)
}
}
@@ -1024,8 +1021,6 @@ export class BackgroundManager {
this.client.session.abort({
path: { id: task.sessionID },
}).catch(() => {})
SessionCategoryRegistry.remove(task.sessionID)
}
if (options?.skipNotification) {
@@ -1173,8 +1168,6 @@ export class BackgroundManager {
this.client.session.abort({
path: { id: task.sessionID },
}).catch(() => {})
SessionCategoryRegistry.remove(task.sessionID)
}
try {
@@ -1318,7 +1311,7 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
...(agent !== undefined ? { agent } : {}),
...(model !== undefined ? { model } : {}),
...(tools ? { tools } : {}),
parts: [createInternalAgentTextPart(notification)],
parts: [{ type: "text", text: notification }],
},
})
log("[background-agent] Sent notification to parent session:", {
@@ -1477,7 +1470,6 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
this.tasks.delete(taskId)
if (task.sessionID) {
subagentSessions.delete(task.sessionID)
SessionCategoryRegistry.remove(task.sessionID)
}
}
}

View File

@@ -1,7 +1,7 @@
import type { BackgroundTask } from "./types"
import type { ResultHandlerContext } from "./result-handler-context"
import { TASK_CLEANUP_DELAY_MS } from "./constants"
import { createInternalAgentTextPart, log } from "../../shared"
import { log } from "../../shared"
import { getTaskToastManager } from "../task-toast-manager"
import { formatDuration } from "./duration-formatter"
import { buildBackgroundTaskNotificationText } from "./background-task-notification-template"
@@ -72,7 +72,7 @@ export async function notifyParentSession(
...(agent !== undefined ? { agent } : {}),
...(model !== undefined ? { model } : {}),
...(tools ? { tools } : {}),
parts: [createInternalAgentTextPart(notification)],
parts: [{ type: "text", text: notification }],
},
})
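Aside: the change above sends a plain `{ type: "text" }` part instead of a wrapped internal-agent part, using conditional spreads to omit absent fields. A minimal sketch of that payload-building idiom (shapes and names are illustrative, not this SDK's actual types):

```typescript
// Build a prompt body where optional fields are omitted entirely rather
// than sent as undefined, and the notification rides as a plain text part.
interface TextPart { type: "text"; text: string }

function buildNotificationBody(
  notification: string,
  agent?: string,
  model?: string,
): { agent?: string; model?: string; parts: TextPart[] } {
  return {
    ...(agent !== undefined ? { agent } : {}),
    ...(model !== undefined ? { model } : {}),
    parts: [{ type: "text", text: notification }],
  };
}
```

The conditional spread keeps keys out of the serialized body entirely, which matters for APIs that treat a present-but-undefined field differently from an absent one.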

View File

@@ -5,7 +5,7 @@ import { MESSAGE_STORAGE, PART_STORAGE } from "./constants"
import type { MessageMeta, OriginalMessageContext, TextPart, ToolPermission } from "./types"
import { log } from "../../shared/logger"
import { isSqliteBackend } from "../../shared/opencode-storage-detection"
import { createInternalAgentTextPart, normalizeSDKResponse } from "../../shared"
import { normalizeSDKResponse } from "../../shared"
export interface StoredMessage {
agent?: string
@@ -331,7 +331,7 @@ export function injectHookMessage(
const textPart: TextPart = {
id: partID,
type: "text",
text: createInternalAgentTextPart(hookContent).text,
text: hookContent,
synthetic: true,
time: {
start: now,

View File

@@ -1,21 +1,14 @@
import type { TmuxConfig } from "../../config/schema"
import type { PaneAction, WindowState } from "./types"
import {
applyLayout,
spawnTmuxPane,
closeTmuxPane,
enforceMainPaneWidth,
replaceTmuxPane,
} from "../../shared/tmux"
import { getTmuxPath } from "../../tools/interactive-bash/tmux-path-resolver"
import { queryWindowState } from "./pane-state-querier"
import type { PaneAction } from "./types"
import { applyLayout, spawnTmuxPane, closeTmuxPane, enforceMainPaneWidth, replaceTmuxPane } from "../../shared/tmux"
import { log } from "../../shared"
import type {
ActionResult,
ActionExecutorDeps,
ActionResult,
ExecuteContext,
} from "./action-executor-core"
import { executeActionWithDeps } from "./action-executor-core"
export type { ActionExecutorDeps, ActionResult } from "./action-executor-core"
export type { ActionExecutorDeps, ActionResult, ExecuteContext } from "./action-executor-core"
export interface ExecuteActionsResult {
success: boolean
@@ -23,92 +16,19 @@ export interface ExecuteActionsResult {
results: Array<{ action: PaneAction; result: ActionResult }>
}
export interface ExecuteContext {
config: TmuxConfig
serverUrl: string
windowState: WindowState
sourcePaneId?: string
}
async function enforceMainPane(
windowState: WindowState,
config: TmuxConfig,
): Promise<void> {
if (!windowState.mainPane) return
await enforceMainPaneWidth(windowState.mainPane.paneId, windowState.windowWidth, {
mainPaneSize: config.main_pane_size,
mainPaneMinWidth: config.main_pane_min_width,
agentPaneMinWidth: config.agent_pane_min_width,
})
}
async function enforceLayoutAndMainPane(ctx: ExecuteContext): Promise<void> {
const sourcePaneId = ctx.sourcePaneId
if (!sourcePaneId) {
await enforceMainPane(ctx.windowState, ctx.config)
return
}
const latestState = await queryWindowState(sourcePaneId)
if (!latestState?.mainPane) {
await enforceMainPane(ctx.windowState, ctx.config)
return
}
const tmux = await getTmuxPath()
if (tmux) {
await applyLayout(tmux, ctx.config.layout, ctx.config.main_pane_size)
}
await enforceMainPane(latestState, ctx.config)
const DEFAULT_DEPS: ActionExecutorDeps = {
spawnTmuxPane,
closeTmuxPane,
replaceTmuxPane,
applyLayout,
enforceMainPaneWidth,
}
export async function executeAction(
action: PaneAction,
ctx: ExecuteContext
): Promise<ActionResult> {
if (action.type === "close") {
const success = await closeTmuxPane(action.paneId)
if (success) {
await enforceLayoutAndMainPane(ctx)
}
return { success }
}
if (action.type === "replace") {
const result = await replaceTmuxPane(
action.paneId,
action.newSessionId,
action.description,
ctx.config,
ctx.serverUrl
)
if (result.success) {
await enforceLayoutAndMainPane(ctx)
}
return {
success: result.success,
paneId: result.paneId,
}
}
const result = await spawnTmuxPane(
action.sessionId,
action.description,
ctx.config,
ctx.serverUrl,
action.targetPaneId,
action.splitDirection
)
if (result.success) {
await enforceLayoutAndMainPane(ctx)
}
return {
success: result.success,
paneId: result.paneId,
}
return executeActionWithDeps(action, ctx, DEFAULT_DEPS)
}
export async function executeActions(

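Aside: the refactor above collapses the per-action branches into a single call through `executeActionWithDeps` with a `DEFAULT_DEPS` bundle, so tests can substitute fakes. A toy sketch of that injected-defaults pattern, using hypothetical names rather than this repo's real signatures:

```typescript
// Core logic takes its side-effecting collaborators as a parameter;
// the public entry point binds the real implementations once.
interface Deps {
  spawnPane: (sessionId: string) => { success: boolean; paneId?: string };
  closePane: (paneId: string) => boolean;
}

type Action =
  | { type: "spawn"; sessionId: string }
  | { type: "close"; paneId: string };

function executeWithDeps(action: Action, deps: Deps): { success: boolean; paneId?: string } {
  if (action.type === "close") {
    return { success: deps.closePane(action.paneId) };
  }
  return deps.spawnPane(action.sessionId);
}

// In the real module these would call tmux; here they are stubs.
const DEFAULT_DEPS: Deps = {
  spawnPane: (id) => ({ success: true, paneId: `%${id}` }),
  closePane: () => true,
};

function executeAction(action: Action) {
  return executeWithDeps(action, DEFAULT_DEPS);
}
```

Tests exercise `executeWithDeps` with fakes; production code only ever sees `executeAction`.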
View File

@@ -5,7 +5,6 @@ import {
canSplitPane,
canSplitPaneAnyDirection,
getBestSplitDirection,
findSpawnTarget,
type SessionMapping
} from "./decision-engine"
import type { WindowState, CapacityConfig, TmuxPaneInfo } from "./types"
@@ -259,31 +258,10 @@ describe("decideSpawnActions", () => {
expect(result.actions[0].type).toBe("spawn")
})
it("respects configured agent min width for split decisions", () => {
// given
const state = createWindowState(240, 44, [
{ paneId: "%1", width: 100, height: 44, left: 140, top: 0 },
])
const mappings: SessionMapping[] = [
{ sessionId: "old-ses", paneId: "%1", createdAt: new Date("2024-01-01") },
]
const strictConfig: CapacityConfig = {
mainPaneSize: 60,
mainPaneMinWidth: 120,
agentPaneWidth: 60,
}
// when
const result = decideSpawnActions(state, "ses1", "test", strictConfig, mappings)
// then
expect(result.canSpawn).toBe(false)
expect(result.actions).toHaveLength(0)
expect(result.reason).toContain("defer")
})
it("returns canSpawn=true when 0 agent panes exist and mainPane occupies full window width", () => {
// given - tmux reports mainPane.width === windowWidth when no splits exist
// agentAreaWidth = max(0, 252 - 252 - 1) = 0, which is < minPaneWidth
// but with 0 agent panes, the early return should be skipped
const windowWidth = 252
const windowHeight = 56
const state: WindowState = {
@@ -303,7 +281,8 @@ describe("decideSpawnActions", () => {
})
it("returns canSpawn=false when 0 agent panes and window genuinely too narrow to split", () => {
// given - window so narrow that even splitting mainPane would fail
// given - window so narrow that even splitting mainPane wouldn't work
// canSplitPane requires width >= 2*minPaneWidth + DIVIDER_SIZE = 2*40+1 = 81
const windowWidth = 70
const windowHeight = 56
const state: WindowState = {
@@ -316,13 +295,14 @@ describe("decideSpawnActions", () => {
// when
const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])
// then
// then - should fail because mainPane itself is too small to split
expect(result.canSpawn).toBe(false)
expect(result.reason).toContain("too small")
})
it("returns canSpawn=false when agent panes exist but agent area too small", () => {
// given - 1 agent pane exists, and agent area is below minPaneWidth
// given - 1 agent pane exists, but agent area is below minPaneWidth
// this verifies the early return still works for currentCount > 0
const state: WindowState = {
windowWidth: 180,
windowHeight: 44,
@@ -333,13 +313,13 @@ describe("decideSpawnActions", () => {
// when
const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])
// then
// then - agent area = max(0, 180-160-1) = 19, which is < agentPaneWidth(40)
expect(result.canSpawn).toBe(false)
expect(result.reason).toContain("defer attach")
expect(result.reason).toContain("too small")
})
it("spawns at exact minimum splittable width with 0 agent panes", () => {
// given
// given - canSplitPane requires width >= 2*agentPaneWidth + DIVIDER_SIZE = 2*40+1 = 81
const exactThreshold = 2 * defaultConfig.agentPaneWidth + 1
const state: WindowState = {
windowWidth: exactThreshold,
@@ -351,12 +331,12 @@ describe("decideSpawnActions", () => {
// when
const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])
// then
// then - exactly at threshold should succeed
expect(result.canSpawn).toBe(true)
})
it("rejects spawn 1 pixel below minimum splittable width with 0 agent panes", () => {
// given
// given - 1 below exact threshold
const belowThreshold = 2 * defaultConfig.agentPaneWidth
const state: WindowState = {
windowWidth: belowThreshold,
@@ -368,11 +348,11 @@ describe("decideSpawnActions", () => {
// when
const result = decideSpawnActions(state, "ses1", "test", defaultConfig, [])
// then
// then - 1 below threshold should fail
expect(result.canSpawn).toBe(false)
})
it("closes oldest pane when existing panes are too small to split", () => {
it("replaces oldest pane when existing panes are too small to split", () => {
// given - existing pane is below minimum splittable size
const state = createWindowState(220, 30, [
{ paneId: "%1", width: 50, height: 15, left: 110, top: 0 },
@@ -386,9 +366,8 @@ describe("decideSpawnActions", () => {
// then
expect(result.canSpawn).toBe(true)
expect(result.actions.length).toBe(2)
expect(result.actions[0].type).toBe("close")
expect(result.actions[1].type).toBe("spawn")
expect(result.actions.length).toBe(1)
expect(result.actions[0].type).toBe("replace")
})
it("can spawn when existing pane is large enough to split", () => {
@@ -450,64 +429,6 @@ describe("decideSpawnActions", () => {
expect(result.canSpawn).toBe(false)
expect(result.reason).toBe("no main pane found")
})
it("uses configured main pane size for split/defer decision", () => {
// given
const state = createWindowState(240, 44, [
{ paneId: "%1", width: 90, height: 44, left: 150, top: 0 },
])
const mappings: SessionMapping[] = [
{ sessionId: "old-ses", paneId: "%1", createdAt: new Date("2024-01-01") },
]
const wideMainConfig: CapacityConfig = {
mainPaneSize: 80,
mainPaneMinWidth: 120,
agentPaneWidth: 40,
}
// when
const result = decideSpawnActions(state, "ses1", "test", wideMainConfig, mappings)
// then
expect(result.canSpawn).toBe(false)
expect(result.actions).toHaveLength(0)
expect(result.reason).toContain("defer")
})
})
})
describe("findSpawnTarget", () => {
it("uses deterministic vertical fallback order", () => {
// given
const state: WindowState = {
windowWidth: 320,
windowHeight: 44,
mainPane: {
paneId: "%0",
width: 160,
height: 44,
left: 0,
top: 0,
title: "main",
isActive: true,
},
agentPanes: [
{ paneId: "%1", width: 70, height: 20, left: 170, top: 0, title: "a", isActive: false },
{ paneId: "%2", width: 120, height: 44, left: 240, top: 0, title: "b", isActive: false },
{ paneId: "%3", width: 120, height: 22, left: 240, top: 22, title: "c", isActive: false },
],
}
const config: CapacityConfig = {
mainPaneSize: 50,
mainPaneMinWidth: 120,
agentPaneWidth: 40,
}
// when
const target = findSpawnTarget(state, config)
// then
expect(target).toEqual({ targetPaneId: "%2", splitDirection: "-v" })
})
})
@@ -634,7 +555,7 @@ describe("decideSpawnActions with custom agentPaneWidth", () => {
}
})
it("#given wider main pane #when capacity needs two evictions #then defer is chosen", () => {
it("#given wider main pane #when capacity needs two evictions #then replace is chosen", () => {
//#given
const config: CapacityConfig = { mainPaneMinWidth: 120, agentPaneWidth: 40 }
const state = createWindowState(220, 44, [
@@ -665,8 +586,8 @@ describe("decideSpawnActions with custom agentPaneWidth", () => {
const result = decideSpawnActions(state, "ses-new", "new task", config, mappings)
//#then
expect(result.canSpawn).toBe(false)
expect(result.actions).toHaveLength(0)
expect(result.reason).toContain("defer attach")
expect(result.canSpawn).toBe(true)
expect(result.actions).toHaveLength(1)
expect(result.actions[0].type).toBe("replace")
})
})
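Aside: the updated expectations above encode the new policy — when no existing pane is wide enough to split, the oldest agent pane is replaced in place rather than deferring the attach (or closing and respawning). A toy sketch of that decision rule, with simplified types and hypothetical names:

```typescript
// Split if some pane can hold two minimum-width halves plus a divider;
// otherwise replace the oldest pane instead of deferring.
interface Pane { paneId: string; width: number; createdAt: number }

type SpawnDecision =
  | { type: "spawn"; targetPaneId: string }
  | { type: "replace"; paneId: string }
  | null;

function decide(panes: Pane[], minWidth: number, dividerSize = 1): SpawnDecision {
  const splittable = panes.find((p) => p.width >= 2 * minWidth + dividerSize);
  if (splittable) return { type: "spawn", targetPaneId: splittable.paneId };
  const oldest = [...panes].sort((a, b) => a.createdAt - b.createdAt)[0];
  return oldest ? { type: "replace", paneId: oldest.paneId } : null;
}
```

With `minWidth = 40`, a 90-column pane clears the `2*40+1 = 81` threshold and is split; a window of only 50–60-column panes falls through to replacing the oldest one.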

View File

@@ -1,9 +1,9 @@
import { MIN_PANE_HEIGHT, MIN_PANE_WIDTH } from "./types"
import type { CapacityConfig, TmuxPaneInfo } from "./types"
import type { TmuxPaneInfo } from "./types"
import {
DIVIDER_SIZE,
MAIN_PANE_RATIO,
MAX_GRID_SIZE,
computeAgentAreaWidth,
} from "./tmux-grid-constants"
export interface GridCapacity {
@@ -24,36 +24,16 @@ export interface GridPlan {
slotHeight: number
}
type CapacityOptions = CapacityConfig | number | undefined
function resolveMinPaneWidth(options?: CapacityOptions): number {
if (typeof options === "number") {
return Math.max(1, options)
}
if (options && typeof options.agentPaneWidth === "number") {
return Math.max(1, options.agentPaneWidth)
}
return MIN_PANE_WIDTH
}
function resolveAgentAreaWidth(windowWidth: number, options?: CapacityOptions): number {
if (typeof options === "number") {
return computeAgentAreaWidth(windowWidth)
}
return computeAgentAreaWidth(windowWidth, options)
}
export function calculateCapacity(
windowWidth: number,
windowHeight: number,
options?: CapacityOptions,
minPaneWidth: number = MIN_PANE_WIDTH,
mainPaneWidth?: number,
): GridCapacity {
const availableWidth =
typeof mainPaneWidth === "number"
? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
: resolveAgentAreaWidth(windowWidth, options)
const minPaneWidth = resolveMinPaneWidth(options)
typeof mainPaneWidth === "number"
? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
: Math.floor(windowWidth * (1 - MAIN_PANE_RATIO))
const cols = Math.min(
MAX_GRID_SIZE,
Math.max(
@@ -79,10 +59,15 @@ export function computeGridPlan(
windowWidth: number,
windowHeight: number,
paneCount: number,
options?: CapacityOptions,
mainPaneWidth?: number,
minPaneWidth?: number,
): GridPlan {
const capacity = calculateCapacity(windowWidth, windowHeight, options, mainPaneWidth)
const capacity = calculateCapacity(
windowWidth,
windowHeight,
minPaneWidth ?? MIN_PANE_WIDTH,
mainPaneWidth,
)
const { cols: maxCols, rows: maxRows } = capacity
if (maxCols === 0 || maxRows === 0 || paneCount === 0) {
@@ -106,9 +91,9 @@ export function computeGridPlan(
}
const availableWidth =
typeof mainPaneWidth === "number"
? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
: resolveAgentAreaWidth(windowWidth, options)
typeof mainPaneWidth === "number"
? Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE)
: Math.floor(windowWidth * (1 - MAIN_PANE_RATIO))
const slotWidth = Math.floor(availableWidth / bestCols)
const slotHeight = Math.floor(windowHeight / bestRows)
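Aside: the capacity math above reserves the main pane plus a one-cell divider, then fits minimum-width columns into what remains. A self-contained sketch of that arithmetic (constant and function names are illustrative):

```typescript
// Agent area = window minus main pane minus one divider column.
// Each additional column also costs one divider, so n columns need
// n*minWidth + (n-1)*DIVIDER_SIZE columns of space.
const DIVIDER_SIZE = 1;

function agentAreaWidth(windowWidth: number, mainPaneWidth: number): number {
  return Math.max(0, windowWidth - mainPaneWidth - DIVIDER_SIZE);
}

function maxColumns(windowWidth: number, mainPaneWidth: number, minPaneWidth: number): number {
  const area = agentAreaWidth(windowWidth, mainPaneWidth);
  // n*min + (n-1)*div <= area  =>  n <= (area + div) / (min + div)
  return Math.max(0, Math.floor((area + DIVIDER_SIZE) / (minPaneWidth + DIVIDER_SIZE)));
}
```

For a 240-column window with a 160-column main pane, the agent area is `240 - 160 - 1 = 79`, which fits exactly one 40-column pane; widening the window to 300 columns yields an area of 139 and room for three.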

View File

@@ -1,145 +0,0 @@
import { describe, expect, it } from "bun:test"
import { decideSpawnActions, findSpawnTarget, type SessionMapping } from "./decision-engine"
import type { CapacityConfig, WindowState } from "./types"
function createState(
windowWidth: number,
windowHeight: number,
agentPanes: WindowState["agentPanes"],
): WindowState {
return {
windowWidth,
windowHeight,
mainPane: {
paneId: "%0",
width: Math.floor(windowWidth / 2),
height: windowHeight,
left: 0,
top: 0,
title: "main",
isActive: true,
},
agentPanes,
}
}
describe("tmux layout-aware split behavior", () => {
it("uses -v for first spawn in main-horizontal layout", () => {
const config: CapacityConfig = {
layout: "main-horizontal",
mainPaneSize: 60,
mainPaneMinWidth: 120,
agentPaneWidth: 40,
}
const state = createState(220, 44, [])
const decision = decideSpawnActions(state, "ses-1", "agent", config, [])
expect(decision.canSpawn).toBe(true)
expect(decision.actions[0]).toMatchObject({
type: "spawn",
splitDirection: "-v",
})
})
it("uses -h for first spawn in main-vertical layout", () => {
const config: CapacityConfig = {
layout: "main-vertical",
mainPaneSize: 60,
mainPaneMinWidth: 120,
agentPaneWidth: 40,
}
const state = createState(220, 44, [])
const decision = decideSpawnActions(state, "ses-1", "agent", config, [])
expect(decision.canSpawn).toBe(true)
expect(decision.actions[0]).toMatchObject({
type: "spawn",
splitDirection: "-h",
})
})
it("prefers horizontal split target in main-horizontal layout", () => {
const config: CapacityConfig = {
layout: "main-horizontal",
mainPaneSize: 60,
mainPaneMinWidth: 120,
agentPaneWidth: 40,
}
const state = createState(260, 60, [
{
paneId: "%1",
width: 120,
height: 30,
left: 0,
top: 30,
title: "agent",
isActive: false,
},
])
const target = findSpawnTarget(state, config)
expect(target).toEqual({ targetPaneId: "%1", splitDirection: "-h" })
})
it("defers when strict main-horizontal cannot split", () => {
const config: CapacityConfig = {
layout: "main-horizontal",
mainPaneSize: 60,
mainPaneMinWidth: 120,
agentPaneWidth: 40,
}
const state = createState(220, 44, [
{
paneId: "%1",
width: 60,
height: 44,
left: 0,
top: 22,
title: "old",
isActive: false,
},
])
const mappings: SessionMapping[] = [
{ sessionId: "old-ses", paneId: "%1", createdAt: new Date("2024-01-01") },
]
const decision = decideSpawnActions(state, "new-ses", "agent", config, mappings)
expect(decision.canSpawn).toBe(false)
expect(decision.actions).toHaveLength(0)
expect(decision.reason).toContain("defer")
})
it("still spawns in narrow main-vertical when vertical split is possible", () => {
const config: CapacityConfig = {
layout: "main-vertical",
mainPaneSize: 60,
mainPaneMinWidth: 120,
agentPaneWidth: 40,
}
const state = createState(169, 40, [
{
paneId: "%1",
width: 48,
height: 40,
left: 121,
top: 0,
title: "agent",
isActive: false,
},
])
const decision = decideSpawnActions(state, "new-ses", "agent", config, [])
expect(decision.canSpawn).toBe(true)
expect(decision.actions).toHaveLength(1)
expect(decision.actions[0]).toMatchObject({
type: "spawn",
targetPaneId: "%1",
splitDirection: "-v",
})
})
})

View File

@@ -156,15 +156,7 @@ describe('TmuxSessionManager', () => {
// given
mockIsInsideTmux.mockReturnValue(true)
const { TmuxSessionManager } = await import('./manager')
const ctx = createMockContext({
sessionStatusResult: {
data: {
ses_1: { type: 'running' },
ses_2: { type: 'running' },
ses_3: { type: 'running' },
},
},
})
const ctx = createMockContext()
const config: TmuxConfig = {
enabled: true,
layout: 'main-vertical',
@@ -184,13 +176,7 @@ describe('TmuxSessionManager', () => {
// given
mockIsInsideTmux.mockReturnValue(false)
const { TmuxSessionManager } = await import('./manager')
const ctx = createMockContext({
sessionStatusResult: {
data: {
ses_once: { type: 'running' },
},
},
})
const ctx = createMockContext()
const config: TmuxConfig = {
enabled: true,
layout: 'main-vertical',
@@ -400,7 +386,7 @@ describe('TmuxSessionManager', () => {
expect(mockExecuteActions).toHaveBeenCalledTimes(0)
})
test('defers attach when unsplittable (small window)', async () => {
test('replaces oldest agent when unsplittable (small window)', async () => {
// given - small window where split is not possible
mockIsInsideTmux.mockReturnValue(true)
mockQueryWindowState.mockImplementation(async () =>
@@ -437,224 +423,13 @@ describe('TmuxSessionManager', () => {
createSessionCreatedEvent('ses_new', 'ses_parent', 'New Task')
)
// then - with small window, manager defers instead of replacing
expect(mockExecuteActions).toHaveBeenCalledTimes(0)
expect((manager as any).deferredQueue).toEqual(['ses_new'])
})
test('keeps deferred queue idempotent for duplicate session.created events', async () => {
// given
mockIsInsideTmux.mockReturnValue(true)
mockQueryWindowState.mockImplementation(async () =>
createWindowState({
windowWidth: 160,
windowHeight: 11,
agentPanes: [
{
paneId: '%1',
width: 80,
height: 11,
left: 80,
top: 0,
title: 'old',
isActive: false,
},
],
})
)
const { TmuxSessionManager } = await import('./manager')
const ctx = createMockContext()
const config: TmuxConfig = {
enabled: true,
layout: 'main-vertical',
main_pane_size: 60,
main_pane_min_width: 120,
agent_pane_min_width: 40,
}
const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
// when
await manager.onSessionCreated(
createSessionCreatedEvent('ses_dup', 'ses_parent', 'Duplicate Task')
)
await manager.onSessionCreated(
createSessionCreatedEvent('ses_dup', 'ses_parent', 'Duplicate Task')
)
// then
expect((manager as any).deferredQueue).toEqual(['ses_dup'])
})
test('auto-attaches deferred sessions in FIFO order', async () => {
// given
mockIsInsideTmux.mockReturnValue(true)
mockQueryWindowState.mockImplementation(async () =>
createWindowState({
windowWidth: 160,
windowHeight: 11,
agentPanes: [
{
paneId: '%1',
width: 80,
height: 11,
left: 80,
top: 0,
title: 'old',
isActive: false,
},
],
})
)
const attachOrder: string[] = []
mockExecuteActions.mockImplementation(async (actions) => {
for (const action of actions) {
if (action.type === 'spawn') {
attachOrder.push(action.sessionId)
trackedSessions.add(action.sessionId)
return {
success: true,
spawnedPaneId: `%${action.sessionId}`,
results: [{ action, result: { success: true, paneId: `%${action.sessionId}` } }],
}
}
}
return { success: true, results: [] }
})
const { TmuxSessionManager } = await import('./manager')
const ctx = createMockContext()
const config: TmuxConfig = {
enabled: true,
layout: 'main-vertical',
main_pane_size: 60,
main_pane_min_width: 120,
agent_pane_min_width: 40,
}
const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
await manager.onSessionCreated(createSessionCreatedEvent('ses_1', 'ses_parent', 'Task 1'))
await manager.onSessionCreated(createSessionCreatedEvent('ses_2', 'ses_parent', 'Task 2'))
await manager.onSessionCreated(createSessionCreatedEvent('ses_3', 'ses_parent', 'Task 3'))
expect((manager as any).deferredQueue).toEqual(['ses_1', 'ses_2', 'ses_3'])
// when
mockQueryWindowState.mockImplementation(async () => createWindowState())
await (manager as any).tryAttachDeferredSession()
await (manager as any).tryAttachDeferredSession()
await (manager as any).tryAttachDeferredSession()
// then
expect(attachOrder).toEqual(['ses_1', 'ses_2', 'ses_3'])
expect((manager as any).deferredQueue).toEqual([])
})
test('does not attach deferred session more than once across repeated retries', async () => {
// given
mockIsInsideTmux.mockReturnValue(true)
mockQueryWindowState.mockImplementation(async () =>
createWindowState({
windowWidth: 160,
windowHeight: 11,
agentPanes: [
{
paneId: '%1',
width: 80,
height: 11,
left: 80,
top: 0,
title: 'old',
isActive: false,
},
],
})
)
let attachCount = 0
mockExecuteActions.mockImplementation(async (actions) => {
for (const action of actions) {
if (action.type === 'spawn') {
attachCount += 1
trackedSessions.add(action.sessionId)
return {
success: true,
spawnedPaneId: `%${action.sessionId}`,
results: [{ action, result: { success: true, paneId: `%${action.sessionId}` } }],
}
}
}
return { success: true, results: [] }
})
const { TmuxSessionManager } = await import('./manager')
const ctx = createMockContext()
const config: TmuxConfig = {
enabled: true,
layout: 'main-vertical',
main_pane_size: 60,
main_pane_min_width: 120,
agent_pane_min_width: 40,
}
const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
await manager.onSessionCreated(
createSessionCreatedEvent('ses_once', 'ses_parent', 'Task Once')
)
// when
mockQueryWindowState.mockImplementation(async () => createWindowState())
await (manager as any).tryAttachDeferredSession()
await (manager as any).tryAttachDeferredSession()
// then
expect(attachCount).toBe(1)
expect((manager as any).deferredQueue).toEqual([])
})
test('removes deferred session when session is deleted before attach', async () => {
// given
mockIsInsideTmux.mockReturnValue(true)
mockQueryWindowState.mockImplementation(async () =>
createWindowState({
windowWidth: 160,
windowHeight: 11,
agentPanes: [
{
paneId: '%1',
width: 80,
height: 11,
left: 80,
top: 0,
title: 'old',
isActive: false,
},
],
})
)
const { TmuxSessionManager } = await import('./manager')
const ctx = createMockContext()
const config: TmuxConfig = {
enabled: true,
layout: 'main-vertical',
main_pane_size: 60,
main_pane_min_width: 120,
agent_pane_min_width: 40,
}
const manager = new TmuxSessionManager(ctx, config, mockTmuxDeps)
await manager.onSessionCreated(
createSessionCreatedEvent('ses_pending', 'ses_parent', 'Pending Task')
)
expect((manager as any).deferredQueue).toEqual(['ses_pending'])
// when
await manager.onSessionDeleted({ sessionID: 'ses_pending' })
// then
expect((manager as any).deferredQueue).toEqual([])
expect(mockExecuteAction).toHaveBeenCalledTimes(0)
// then - with small window, replace action is used instead of close+spawn
expect(mockExecuteActions).toHaveBeenCalledTimes(1)
const call = mockExecuteActions.mock.calls[0]
expect(call).toBeDefined()
const actionsArg = call![0]
expect(actionsArg).toHaveLength(1)
expect(actionsArg[0].type).toBe('replace')
})
})
@@ -703,7 +478,7 @@ describe('TmuxSessionManager', () => {
await manager.onSessionDeleted({ sessionID: 'ses_timeout' })
// then
expect(mockExecuteAction).toHaveBeenCalledTimes(1)
expect(mockExecuteAction).toHaveBeenCalledTimes(0)
})
test('closes pane when tracked session is deleted', async () => {
@@ -905,7 +680,7 @@ describe('DecisionEngine', () => {
}
})
test('returns canSpawn=false when split not possible', async () => {
test('returns replace when split not possible', async () => {
// given - small window where split is never possible
const { decideSpawnActions } = await import('./decision-engine')
const state: WindowState = {
@@ -945,10 +720,10 @@ describe('DecisionEngine', () => {
sessionMappings
)
// then - agent area (80) < MIN_SPLIT_WIDTH (105), so attach is deferred
expect(decision.canSpawn).toBe(false)
expect(decision.actions).toHaveLength(0)
expect(decision.reason).toContain('defer')
// then - agent area (80) < MIN_SPLIT_WIDTH (105), so replace is used
expect(decision.canSpawn).toBe(true)
expect(decision.actions).toHaveLength(1)
expect(decision.actions[0].type).toBe('replace')
})
test('returns canSpawn=false when window too small', async () => {

View File

@@ -5,7 +5,6 @@ import { log, normalizeSDKResponse } from "../../shared"
import {
isInsideTmux as defaultIsInsideTmux,
getCurrentPaneId as defaultGetCurrentPaneId,
POLL_INTERVAL_BACKGROUND_MS,
SESSION_READY_POLL_INTERVAL_MS,
SESSION_READY_TIMEOUT_MS,
} from "../../shared/tmux"
@@ -20,12 +19,6 @@ interface SessionCreatedEvent {
properties?: { info?: { id?: string; parentID?: string; title?: string } }
}
interface DeferredSession {
sessionId: string
title: string
queuedAt: Date
}
export interface TmuxUtilDeps {
isInsideTmux: () => boolean
getCurrentPaneId: () => string | undefined
@@ -55,11 +48,6 @@ export class TmuxSessionManager {
private sourcePaneId: string | undefined
private sessions = new Map<string, TrackedSession>()
private pendingSessions = new Set<string>()
private spawnQueue: Promise<void> = Promise.resolve()
private deferredSessions = new Map<string, DeferredSession>()
private deferredQueue: string[] = []
private deferredAttachInterval?: ReturnType<typeof setInterval>
private deferredAttachTickScheduled = false
private deps: TmuxUtilDeps
private pollingManager: TmuxPollingManager
constructor(ctx: PluginInput, tmuxConfig: TmuxConfig, deps: TmuxUtilDeps = defaultTmuxDeps) {
@@ -87,8 +75,6 @@ export class TmuxSessionManager {
private getCapacityConfig(): CapacityConfig {
return {
layout: this.tmuxConfig.layout,
mainPaneSize: this.tmuxConfig.main_pane_size,
mainPaneMinWidth: this.tmuxConfig.main_pane_min_width,
agentPaneWidth: this.tmuxConfig.agent_pane_min_width,
}
@@ -102,136 +88,6 @@ export class TmuxSessionManager {
}))
}
private enqueueDeferredSession(sessionId: string, title: string): void {
if (this.deferredSessions.has(sessionId)) return
this.deferredSessions.set(sessionId, {
sessionId,
title,
queuedAt: new Date(),
})
this.deferredQueue.push(sessionId)
log("[tmux-session-manager] deferred session queued", {
sessionId,
queueLength: this.deferredQueue.length,
})
this.startDeferredAttachLoop()
}
private removeDeferredSession(sessionId: string): void {
if (!this.deferredSessions.delete(sessionId)) return
this.deferredQueue = this.deferredQueue.filter((id) => id !== sessionId)
log("[tmux-session-manager] deferred session removed", {
sessionId,
queueLength: this.deferredQueue.length,
})
if (this.deferredQueue.length === 0) {
this.stopDeferredAttachLoop()
}
}
private startDeferredAttachLoop(): void {
if (this.deferredAttachInterval) return
this.deferredAttachInterval = setInterval(() => {
if (this.deferredAttachTickScheduled) return
this.deferredAttachTickScheduled = true
void this.enqueueSpawn(async () => {
try {
await this.tryAttachDeferredSession()
} finally {
this.deferredAttachTickScheduled = false
}
})
}, POLL_INTERVAL_BACKGROUND_MS)
log("[tmux-session-manager] deferred attach polling started", {
intervalMs: POLL_INTERVAL_BACKGROUND_MS,
})
}
private stopDeferredAttachLoop(): void {
if (!this.deferredAttachInterval) return
clearInterval(this.deferredAttachInterval)
this.deferredAttachInterval = undefined
this.deferredAttachTickScheduled = false
log("[tmux-session-manager] deferred attach polling stopped")
}
private async tryAttachDeferredSession(): Promise<void> {
if (!this.sourcePaneId) return
const sessionId = this.deferredQueue[0]
if (!sessionId) {
this.stopDeferredAttachLoop()
return
}
const deferred = this.deferredSessions.get(sessionId)
if (!deferred) {
this.deferredQueue.shift()
return
}
const state = await queryWindowState(this.sourcePaneId)
if (!state) return
const decision = decideSpawnActions(
state,
sessionId,
deferred.title,
this.getCapacityConfig(),
this.getSessionMappings(),
)
if (!decision.canSpawn || decision.actions.length === 0) {
log("[tmux-session-manager] deferred session still waiting for capacity", {
sessionId,
reason: decision.reason,
})
return
}
const result = await executeActions(decision.actions, {
config: this.tmuxConfig,
serverUrl: this.serverUrl,
windowState: state,
sourcePaneId: this.sourcePaneId,
})
if (!result.success || !result.spawnedPaneId) {
log("[tmux-session-manager] deferred session attach failed", {
sessionId,
results: result.results.map((r) => ({
type: r.action.type,
success: r.result.success,
error: r.result.error,
})),
})
return
}
const sessionReady = await this.waitForSessionReady(sessionId)
if (!sessionReady) {
log("[tmux-session-manager] deferred session not ready after timeout", {
sessionId,
paneId: result.spawnedPaneId,
})
}
const now = Date.now()
this.sessions.set(sessionId, {
sessionId,
paneId: result.spawnedPaneId,
description: deferred.title,
createdAt: new Date(now),
lastSeenAt: new Date(now),
})
this.removeDeferredSession(sessionId)
this.pollingManager.startPolling()
log("[tmux-session-manager] deferred session attached", {
sessionId,
paneId: result.spawnedPaneId,
sessionReady,
})
}
private async waitForSessionReady(sessionId: string): Promise<boolean> {
const startTime = Date.now()
@@ -282,11 +138,7 @@ export class TmuxSessionManager {
const sessionId = info.id
const title = info.title ?? "Subagent"
if (
this.sessions.has(sessionId) ||
this.pendingSessions.has(sessionId) ||
this.deferredSessions.has(sessionId)
) {
if (this.sessions.has(sessionId) || this.pendingSessions.has(sessionId)) {
log("[tmux-session-manager] session already tracked or pending", { sessionId })
return
}
@@ -295,17 +147,15 @@ export class TmuxSessionManager {
log("[tmux-session-manager] no source pane id")
return
}
const sourcePaneId = this.sourcePaneId
this.pendingSessions.add(sessionId)
await this.enqueueSpawn(async () => {
try {
const state = await queryWindowState(sourcePaneId)
if (!state) {
log("[tmux-session-manager] failed to query window state")
return
}
try {
const state = await queryWindowState(this.sourcePaneId)
if (!state) {
log("[tmux-session-manager] failed to query window state")
return
}
log("[tmux-session-manager] window state queried", {
windowWidth: state.windowWidth,
@@ -314,13 +164,13 @@ export class TmuxSessionManager {
agentPanes: state.agentPanes.map((p) => p.paneId),
})
const decision = decideSpawnActions(
state,
sessionId,
title,
this.getCapacityConfig(),
this.getSessionMappings()
)
log("[tmux-session-manager] spawn decision", {
canSpawn: decision.canSpawn,
@@ -333,105 +183,82 @@ export class TmuxSessionManager {
}),
})
if (!decision.canSpawn) {
log("[tmux-session-manager] cannot spawn", { reason: decision.reason })
this.enqueueDeferredSession(sessionId, title)
return
}
const result = await executeActions(
decision.actions,
{
config: this.tmuxConfig,
serverUrl: this.serverUrl,
windowState: state,
sourcePaneId,
}
)
for (const { action, result: actionResult } of result.results) {
if (action.type === "close" && actionResult.success) {
this.sessions.delete(action.sessionId)
log("[tmux-session-manager] removed closed session from cache", {
sessionId: action.sessionId,
})
}
if (action.type === "replace" && actionResult.success) {
this.sessions.delete(action.oldSessionId)
log("[tmux-session-manager] removed replaced session from cache", {
oldSessionId: action.oldSessionId,
newSessionId: action.newSessionId,
})
}
}
if (result.success && result.spawnedPaneId) {
const sessionReady = await this.waitForSessionReady(sessionId)
if (!sessionReady) {
log("[tmux-session-manager] session not ready after timeout, tracking anyway", {
sessionId,
paneId: result.spawnedPaneId,
})
}
const now = Date.now()
this.sessions.set(sessionId, {
sessionId,
paneId: result.spawnedPaneId,
description: title,
createdAt: new Date(now),
lastSeenAt: new Date(now),
})
log("[tmux-session-manager] pane spawned and tracked", {
sessionId,
paneId: result.spawnedPaneId,
sessionReady,
})
this.pollingManager.startPolling()
} else {
log("[tmux-session-manager] spawn failed", {
success: result.success,
results: result.results.map((r) => ({
type: r.action.type,
success: r.result.success,
error: r.result.error,
})),
})
if (result.spawnedPaneId) {
await executeAction(
{ type: "close", paneId: result.spawnedPaneId, sessionId },
{ config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
)
}
return
}
} finally {
this.pendingSessions.delete(sessionId)
if (!decision.canSpawn) {
log("[tmux-session-manager] cannot spawn", { reason: decision.reason })
return
}
})
}
private async enqueueSpawn(run: () => Promise<void>): Promise<void> {
this.spawnQueue = this.spawnQueue
.catch(() => undefined)
.then(run)
.catch((err) => {
log("[tmux-session-manager] spawn queue task failed", {
error: String(err),
const result = await executeActions(
decision.actions,
{ config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
)
for (const { action, result: actionResult } of result.results) {
if (action.type === "close" && actionResult.success) {
this.sessions.delete(action.sessionId)
log("[tmux-session-manager] removed closed session from cache", {
sessionId: action.sessionId,
})
}
if (action.type === "replace" && actionResult.success) {
this.sessions.delete(action.oldSessionId)
log("[tmux-session-manager] removed replaced session from cache", {
oldSessionId: action.oldSessionId,
newSessionId: action.newSessionId,
})
}
}
if (result.success && result.spawnedPaneId) {
const sessionReady = await this.waitForSessionReady(sessionId)
if (!sessionReady) {
log("[tmux-session-manager] session not ready after timeout, closing spawned pane", {
sessionId,
paneId: result.spawnedPaneId,
})
await executeAction(
{ type: "close", paneId: result.spawnedPaneId, sessionId },
{ config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
)
return
}
const now = Date.now()
this.sessions.set(sessionId, {
sessionId,
paneId: result.spawnedPaneId,
description: title,
createdAt: new Date(now),
lastSeenAt: new Date(now),
})
})
await this.spawnQueue
log("[tmux-session-manager] pane spawned and tracked", {
sessionId,
paneId: result.spawnedPaneId,
sessionReady,
})
this.pollingManager.startPolling()
} else {
log("[tmux-session-manager] spawn failed", {
success: result.success,
results: result.results.map((r) => ({
type: r.action.type,
success: r.result.success,
error: r.result.error,
})),
})
}
} finally {
this.pendingSessions.delete(sessionId)
}
}
async onSessionDeleted(event: { sessionID: string }): Promise<void> {
if (!this.isEnabled()) return
if (!this.sourcePaneId) return
this.removeDeferredSession(event.sessionID)
const tracked = this.sessions.get(event.sessionID)
if (!tracked) return
@@ -445,12 +272,7 @@ export class TmuxSessionManager {
const closeAction = decideCloseAction(state, event.sessionID, this.getSessionMappings())
if (closeAction) {
await executeAction(closeAction, {
config: this.tmuxConfig,
serverUrl: this.serverUrl,
windowState: state,
sourcePaneId: this.sourcePaneId,
})
await executeAction(closeAction, { config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state })
}
this.sessions.delete(event.sessionID)
@@ -474,12 +296,7 @@ export class TmuxSessionManager {
if (state) {
await executeAction(
{ type: "close", paneId: tracked.paneId, sessionId },
{
config: this.tmuxConfig,
serverUrl: this.serverUrl,
windowState: state,
sourcePaneId: this.sourcePaneId,
}
{ config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
)
}
@@ -497,9 +314,6 @@ export class TmuxSessionManager {
}
async cleanup(): Promise<void> {
this.stopDeferredAttachLoop()
this.deferredQueue = []
this.deferredSessions.clear()
this.pollingManager.stopPolling()
if (this.sessions.size > 0) {
@@ -510,12 +324,7 @@ export class TmuxSessionManager {
const closePromises = Array.from(this.sessions.values()).map((s) =>
executeAction(
{ type: "close", paneId: s.paneId, sessionId: s.sessionId },
{
config: this.tmuxConfig,
serverUrl: this.serverUrl,
windowState: state,
sourcePaneId: this.sourcePaneId,
}
{ config: this.tmuxConfig, serverUrl: this.serverUrl, windowState: state }
).catch((err) =>
log("[tmux-session-manager] cleanup error for pane", {
paneId: s.paneId,
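The `enqueueSpawn` chain in this file serializes all spawn work onto a single promise so tmux actions never interleave, while a rejection in one task cannot poison the chain for later tasks. A minimal standalone sketch of that pattern (class and names are illustrative, not from the diff):

```typescript
// Serialize async tasks on one promise chain. Each new task is appended
// to the tail; the leading catch() swallows the previous task's failure
// so a single error does not block everything queued after it.
class TaskQueue {
  private tail: Promise<void> = Promise.resolve()

  enqueue(run: () => Promise<void>): Promise<void> {
    this.tail = this.tail
      .catch(() => undefined) // previous failure already logged; keep the chain alive
      .then(run)
      .catch((err) => {
        console.log("task failed:", String(err))
      })
    return this.tail
  }
}
```

Awaiting the returned promise, as `enqueueSpawn` does, resolves only after that task (and everything queued before it) has settled.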

View File

@@ -1,15 +1,14 @@
import { MIN_PANE_WIDTH } from "./types"
import type { SplitDirection, TmuxPaneInfo } from "./types"
import {
DIVIDER_SIZE,
MAX_COLS,
MAX_ROWS,
MIN_SPLIT_HEIGHT,
} from "./tmux-grid-constants"
import { MIN_PANE_WIDTH } from "./types"
function getMinSplitWidth(minPaneWidth?: number): number {
const width = Math.max(1, minPaneWidth ?? MIN_PANE_WIDTH)
return 2 * width + DIVIDER_SIZE
function minSplitWidthFor(minPaneWidth: number): number {
return 2 * minPaneWidth + DIVIDER_SIZE
}
export function getColumnCount(paneCount: number): number {
@@ -26,16 +25,16 @@ export function getColumnWidth(agentAreaWidth: number, paneCount: number): numbe
export function isSplittableAtCount(
agentAreaWidth: number,
paneCount: number,
minPaneWidth?: number,
minPaneWidth: number = MIN_PANE_WIDTH,
): boolean {
const columnWidth = getColumnWidth(agentAreaWidth, paneCount)
return columnWidth >= getMinSplitWidth(minPaneWidth)
return columnWidth >= minSplitWidthFor(minPaneWidth)
}
export function findMinimalEvictions(
agentAreaWidth: number,
currentCount: number,
minPaneWidth?: number,
minPaneWidth: number = MIN_PANE_WIDTH,
): number | null {
for (let k = 1; k <= currentCount; k++) {
if (isSplittableAtCount(agentAreaWidth, currentCount - k, minPaneWidth)) {
@@ -48,26 +47,30 @@ export function findMinimalEvictions(
export function canSplitPane(
pane: TmuxPaneInfo,
direction: SplitDirection,
minPaneWidth?: number,
minPaneWidth: number = MIN_PANE_WIDTH,
): boolean {
if (direction === "-h") {
return pane.width >= getMinSplitWidth(minPaneWidth)
return pane.width >= minSplitWidthFor(minPaneWidth)
}
return pane.height >= MIN_SPLIT_HEIGHT
}
export function canSplitPaneAnyDirection(
export function canSplitPaneAnyDirection(pane: TmuxPaneInfo, minPaneWidth: number = MIN_PANE_WIDTH): boolean {
return canSplitPaneAnyDirectionWithMinWidth(pane, minPaneWidth)
}
export function canSplitPaneAnyDirectionWithMinWidth(
pane: TmuxPaneInfo,
minPaneWidth?: number,
minPaneWidth: number = MIN_PANE_WIDTH,
): boolean {
return pane.width >= getMinSplitWidth(minPaneWidth) || pane.height >= MIN_SPLIT_HEIGHT
return pane.width >= minSplitWidthFor(minPaneWidth) || pane.height >= MIN_SPLIT_HEIGHT
}
export function getBestSplitDirection(
pane: TmuxPaneInfo,
minPaneWidth?: number,
minPaneWidth: number = MIN_PANE_WIDTH,
): SplitDirection | null {
const canH = pane.width >= getMinSplitWidth(minPaneWidth)
const canH = pane.width >= minSplitWidthFor(minPaneWidth)
const canV = pane.height >= MIN_SPLIT_HEIGHT
if (!canH && !canV) return null
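`minSplitWidthFor` encodes the invariant behind every horizontal-split check above: splitting a pane must leave two children at or above the minimum width, plus one column for the divider. A self-contained sketch (the `MIN_PANE_WIDTH` value here is illustrative; the real constant lives in `./types`):

```typescript
const DIVIDER_SIZE = 1
const MIN_PANE_WIDTH = 20 // illustrative default

function minSplitWidthFor(minPaneWidth: number): number {
  // two child panes at minimum width, plus one column for the divider
  return 2 * minPaneWidth + DIVIDER_SIZE
}

function canSplitHorizontally(paneWidth: number, minPaneWidth: number = MIN_PANE_WIDTH): boolean {
  return paneWidth >= minSplitWidthFor(minPaneWidth)
}
```

With a 20-column minimum, a pane needs at least 41 columns before `-h` is allowed; a 40-column pane cannot split.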

View File

@@ -14,7 +14,7 @@ export async function queryWindowState(sourcePaneId: string): Promise<WindowStat
"-t",
sourcePaneId,
"-F",
"#{pane_id}\t#{pane_width}\t#{pane_height}\t#{pane_left}\t#{pane_top}\t#{pane_active}\t#{window_width}\t#{window_height}\t#{pane_title}",
"#{pane_id},#{pane_width},#{pane_height},#{pane_left},#{pane_top},#{pane_title},#{pane_active},#{window_width},#{window_height}",
],
{ stdout: "pipe", stderr: "pipe" }
)
@@ -35,11 +35,7 @@ export async function queryWindowState(sourcePaneId: string): Promise<WindowStat
const panes: TmuxPaneInfo[] = []
for (const line of lines) {
const fields = line.split("\t")
if (fields.length < 9) continue
const [paneId, widthStr, heightStr, leftStr, topStr, activeStr, windowWidthStr, windowHeightStr] = fields
const title = fields.slice(8).join("\t")
const [paneId, widthStr, heightStr, leftStr, topStr, title, activeStr, windowWidthStr, windowHeightStr] = line.split(",")
const width = parseInt(widthStr, 10)
const height = parseInt(heightStr, 10)
const left = parseInt(leftStr, 10)
@@ -55,21 +51,9 @@ export async function queryWindowState(sourcePaneId: string): Promise<WindowStat
panes.sort((a, b) => a.left - b.left || a.top - b.top)
const mainPane = panes.reduce<TmuxPaneInfo | null>((selected, pane) => {
if (!selected) return pane
if (pane.left !== selected.left) {
return pane.left < selected.left ? pane : selected
}
if (pane.width !== selected.width) {
return pane.width > selected.width ? pane : selected
}
if (pane.top !== selected.top) {
return pane.top < selected.top ? pane : selected
}
return pane.paneId === sourcePaneId ? pane : selected
}, null)
const mainPane = panes.find((p) => p.paneId === sourcePaneId)
if (!mainPane) {
log("[pane-state-querier] CRITICAL: failed to determine main pane", {
log("[pane-state-querier] CRITICAL: sourcePaneId not found in panes", {
sourcePaneId,
availablePanes: panes.map((p) => p.paneId),
})
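One caveat with the comma-delimited format adopted above: tmux pane titles may themselves contain commas, which would shift every field after the title. The removed tab-based code avoided this by keeping the title as the final field and rejoining the tail. A sketch of that safer parse (field order simplified; assumed, not taken verbatim from the diff):

```typescript
// Parse one `list-panes -F` line of the form "pane_id<TAB>pane_width<TAB>pane_title".
// Because the title is last, delimiter characters inside it can be rejoined.
function parsePaneLine(line: string): { paneId: string; width: number; title: string } | null {
  const fields = line.split("\t")
  if (fields.length < 3) return null
  const [paneId, widthStr] = fields
  const width = parseInt(widthStr, 10)
  if (Number.isNaN(width)) return null
  const title = fields.slice(2).join("\t") // tolerate tabs embedded in the title
  return { paneId, width, title }
}
```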

View File

@@ -5,7 +5,7 @@ import type {
TmuxPaneInfo,
WindowState,
} from "./types"
import { computeAgentAreaWidth } from "./tmux-grid-constants"
import { DIVIDER_SIZE } from "./tmux-grid-constants"
import {
canSplitPane,
findMinimalEvictions,
@@ -14,14 +14,6 @@ import {
import { findSpawnTarget } from "./spawn-target-finder"
import { findOldestAgentPane, type SessionMapping } from "./oldest-agent-pane"
function getInitialSplitDirection(layout?: string): "-h" | "-v" {
return layout === "main-horizontal" ? "-v" : "-h"
}
function isStrictMainLayout(layout?: string): boolean {
return layout === "main-vertical" || layout === "main-horizontal"
}
export function decideSpawnActions(
state: WindowState,
sessionId: string,
@@ -33,13 +25,14 @@ export function decideSpawnActions(
return { canSpawn: false, actions: [], reason: "no main pane found" }
}
const agentAreaWidth = computeAgentAreaWidth(state.windowWidth, config)
const minAgentPaneWidth = config.agentPaneWidth
const minPaneWidth = config.agentPaneWidth
const agentAreaWidth = Math.max(
0,
state.windowWidth - state.mainPane.width - DIVIDER_SIZE,
)
const currentCount = state.agentPanes.length
const strictLayout = isStrictMainLayout(config.layout)
const initialSplitDirection = getInitialSplitDirection(config.layout)
if (agentAreaWidth < minAgentPaneWidth && currentCount > 0) {
if (agentAreaWidth < minPaneWidth && currentCount > 0) {
return {
canSpawn: false,
actions: [],
@@ -54,7 +47,7 @@ export function decideSpawnActions(
if (currentCount === 0) {
const virtualMainPane: TmuxPaneInfo = { ...state.mainPane, width: state.windowWidth }
if (canSplitPane(virtualMainPane, initialSplitDirection, minAgentPaneWidth)) {
if (canSplitPane(virtualMainPane, "-h", minPaneWidth)) {
return {
canSpawn: true,
actions: [
@@ -63,7 +56,7 @@ export function decideSpawnActions(
sessionId,
description,
targetPaneId: state.mainPane.paneId,
splitDirection: initialSplitDirection,
splitDirection: "-h",
},
],
}
@@ -71,12 +64,8 @@ export function decideSpawnActions(
return { canSpawn: false, actions: [], reason: "mainPane too small to split" }
}
const canEvaluateSpawnTarget =
strictLayout ||
isSplittableAtCount(agentAreaWidth, currentCount, minAgentPaneWidth)
if (canEvaluateSpawnTarget) {
const spawnTarget = findSpawnTarget(state, config)
if (isSplittableAtCount(agentAreaWidth, currentCount, minPaneWidth)) {
const spawnTarget = findSpawnTarget(state, minPaneWidth)
if (spawnTarget) {
return {
canSpawn: true,
@@ -93,43 +82,40 @@ export function decideSpawnActions(
}
}
if (!strictLayout) {
const minEvictions = findMinimalEvictions(
agentAreaWidth,
currentCount,
minAgentPaneWidth,
)
if (minEvictions === 1 && oldestPane) {
return {
canSpawn: true,
actions: [
{
type: "close",
paneId: oldestPane.paneId,
sessionId: oldestMapping?.sessionId || "",
},
{
type: "spawn",
sessionId,
description,
targetPaneId: state.mainPane.paneId,
splitDirection: initialSplitDirection,
},
],
reason: "closed 1 pane to make room for split",
}
const minEvictions = findMinimalEvictions(agentAreaWidth, currentCount, minPaneWidth)
if (minEvictions === 1 && oldestPane) {
return {
canSpawn: true,
actions: [
{
type: "replace",
paneId: oldestPane.paneId,
oldSessionId: oldestMapping?.sessionId || "",
newSessionId: sessionId,
description,
},
],
reason: "replaced oldest pane to avoid split churn",
}
}
if (oldestPane) {
return {
canSpawn: false,
actions: [],
reason: "no split target available (defer attach)",
canSpawn: true,
actions: [
{
type: "replace",
paneId: oldestPane.paneId,
oldSessionId: oldestMapping?.sessionId || "",
newSessionId: sessionId,
description,
},
],
reason: "replaced oldest pane (no split possible)",
}
}
return { canSpawn: false, actions: [], reason: "no split target available (defer attach)" }
return { canSpawn: false, actions: [], reason: "no pane available to replace" }
}
export function decideCloseAction(
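`findMinimalEvictions`, used above to choose between replacing the oldest pane and deferring, searches for the smallest number of panes to close so the remaining layout becomes splittable again. A sketch under a simplified equal-column model (the real grid uses row/column planning; the column math here is an assumption for illustration):

```typescript
const DIVIDER_SIZE = 1

// Width each column gets when `count` panes share the agent area equally.
function columnWidth(areaWidth: number, count: number): number {
  if (count <= 0) return areaWidth
  return Math.floor((areaWidth - (count - 1) * DIVIDER_SIZE) / count)
}

function isSplittableAtCount(areaWidth: number, count: number, minPaneWidth: number): boolean {
  return columnWidth(areaWidth, count) >= 2 * minPaneWidth + DIVIDER_SIZE
}

// Smallest k such that closing k panes makes one more split possible; null if none.
function findMinimalEvictions(areaWidth: number, currentCount: number, minPaneWidth: number): number | null {
  for (let k = 1; k <= currentCount; k++) {
    if (isSplittableAtCount(areaWidth, currentCount - k, minPaneWidth)) return k
  }
  return null
}
```

A result of exactly 1 maps to the single `replace` action above; anything else falls through to the replace-oldest or no-capacity branches.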

View File

@@ -1,40 +1,13 @@
import type { CapacityConfig, SplitDirection, TmuxPaneInfo, WindowState } from "./types"
import { computeMainPaneWidth } from "./tmux-grid-constants"
import type { SplitDirection, TmuxPaneInfo, WindowState } from "./types"
import { computeGridPlan, mapPaneToSlot } from "./grid-planning"
import { canSplitPane } from "./pane-split-availability"
import { canSplitPane, getBestSplitDirection } from "./pane-split-availability"
import { MIN_PANE_WIDTH } from "./types"
export interface SpawnTarget {
targetPaneId: string
splitDirection: SplitDirection
}
function isStrictMainVertical(config: CapacityConfig): boolean {
return config.layout === "main-vertical"
}
function isStrictMainHorizontal(config: CapacityConfig): boolean {
return config.layout === "main-horizontal"
}
function isStrictMainLayout(config: CapacityConfig): boolean {
return isStrictMainVertical(config) || isStrictMainHorizontal(config)
}
function getInitialSplitDirection(config: CapacityConfig): SplitDirection {
return isStrictMainHorizontal(config) ? "-v" : "-h"
}
function getStrictFollowupSplitDirection(config: CapacityConfig): SplitDirection {
return isStrictMainHorizontal(config) ? "-h" : "-v"
}
function sortPanesForStrictLayout(panes: TmuxPaneInfo[], config: CapacityConfig): TmuxPaneInfo[] {
if (isStrictMainHorizontal(config)) {
return [...panes].sort((a, b) => a.left - b.left || a.top - b.top)
}
return [...panes].sort((a, b) => a.top - b.top || a.left - b.left)
}
function buildOccupancy(
agentPanes: TmuxPaneInfo[],
plan: ReturnType<typeof computeGridPlan>,
@@ -64,29 +37,16 @@ function findFirstEmptySlot(
function findSplittableTarget(
state: WindowState,
config: CapacityConfig,
minPaneWidth: number,
_preferredDirection?: SplitDirection,
): SpawnTarget | null {
if (!state.mainPane) return null
const existingCount = state.agentPanes.length
const minAgentPaneWidth = config.agentPaneWidth
const initialDirection = getInitialSplitDirection(config)
if (existingCount === 0) {
const virtualMainPane: TmuxPaneInfo = { ...state.mainPane, width: state.windowWidth }
if (canSplitPane(virtualMainPane, initialDirection, minAgentPaneWidth)) {
return { targetPaneId: state.mainPane.paneId, splitDirection: initialDirection }
}
return null
}
if (isStrictMainLayout(config)) {
const followupDirection = getStrictFollowupSplitDirection(config)
const panesByPriority = sortPanesForStrictLayout(state.agentPanes, config)
for (const pane of panesByPriority) {
if (canSplitPane(pane, followupDirection, minAgentPaneWidth)) {
return { targetPaneId: pane.paneId, splitDirection: followupDirection }
}
if (canSplitPane(virtualMainPane, "-h", minPaneWidth)) {
return { targetPaneId: state.mainPane.paneId, splitDirection: "-h" }
}
return null
}
@@ -95,44 +55,34 @@ function findSplittableTarget(
state.windowWidth,
state.windowHeight,
existingCount + 1,
config,
state.mainPane.width,
minPaneWidth,
)
const mainPaneWidth = computeMainPaneWidth(state.windowWidth, config)
const mainPaneWidth = state.mainPane.width
const occupancy = buildOccupancy(state.agentPanes, plan, mainPaneWidth)
const targetSlot = findFirstEmptySlot(occupancy, plan)
const leftPane = occupancy.get(`${targetSlot.row}:${targetSlot.col - 1}`)
if (
!isStrictMainVertical(config) &&
leftPane &&
canSplitPane(leftPane, "-h", minAgentPaneWidth)
) {
if (leftPane && canSplitPane(leftPane, "-h", minPaneWidth)) {
return { targetPaneId: leftPane.paneId, splitDirection: "-h" }
}
const abovePane = occupancy.get(`${targetSlot.row - 1}:${targetSlot.col}`)
if (abovePane && canSplitPane(abovePane, "-v", minAgentPaneWidth)) {
if (abovePane && canSplitPane(abovePane, "-v", minPaneWidth)) {
return { targetPaneId: abovePane.paneId, splitDirection: "-v" }
}
const panesByPosition = [...state.agentPanes].sort(
(a, b) => a.left - b.left || a.top - b.top,
)
const splittablePanes = state.agentPanes
.map((pane) => ({ pane, direction: getBestSplitDirection(pane, minPaneWidth) }))
.filter(
(item): item is { pane: TmuxPaneInfo; direction: SplitDirection } =>
item.direction !== null,
)
.sort((a, b) => b.pane.width * b.pane.height - a.pane.width * a.pane.height)
for (const pane of panesByPosition) {
if (canSplitPane(pane, "-v", minAgentPaneWidth)) {
return { targetPaneId: pane.paneId, splitDirection: "-v" }
}
}
if (isStrictMainVertical(config)) {
return null
}
for (const pane of panesByPosition) {
if (canSplitPane(pane, "-h", minAgentPaneWidth)) {
return { targetPaneId: pane.paneId, splitDirection: "-h" }
}
const best = splittablePanes[0]
if (best) {
return { targetPaneId: best.pane.paneId, splitDirection: best.direction }
}
return null
@@ -140,7 +90,7 @@ function findSplittableTarget(
export function findSpawnTarget(
state: WindowState,
config: CapacityConfig,
minPaneWidth: number = MIN_PANE_WIDTH,
): SpawnTarget | null {
return findSplittableTarget(state, config)
return findSplittableTarget(state, minPaneWidth)
}
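The fallback path added above ranks every splittable pane by area and splits the largest one in its best direction. A standalone sketch of that selection (types simplified; the split-threshold constants are illustrative):

```typescript
type Pane = { paneId: string; width: number; height: number }
type Dir = "-h" | "-v"

const MIN_SPLIT_WIDTH = 41 // illustrative: 2 * minPaneWidth + divider
const MIN_SPLIT_HEIGHT = 9 // illustrative

// Pick the direction with more room; null when the pane fits neither split.
function bestSplitDirection(p: Pane): Dir | null {
  const canH = p.width >= MIN_SPLIT_WIDTH
  const canV = p.height >= MIN_SPLIT_HEIGHT
  if (!canH && !canV) return null
  return canH && (!canV || p.width >= p.height) ? "-h" : "-v"
}

function pickSpawnTarget(panes: Pane[]): { paneId: string; direction: Dir } | null {
  const candidates = panes
    .map((p) => ({ p, d: bestSplitDirection(p) }))
    .filter((x): x is { p: Pane; d: Dir } => x.d !== null)
    .sort((a, b) => b.p.width * b.p.height - a.p.width * a.p.height) // largest area first
  const best = candidates[0]
  return best ? { paneId: best.p.paneId, direction: best.d } : null
}
```

Sorting by area rather than position keeps the grid balanced: the biggest pane always absorbs the next split.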

View File

@@ -1,8 +1,6 @@
import { MIN_PANE_HEIGHT, MIN_PANE_WIDTH } from "./types"
import type { CapacityConfig } from "./types"
export const MAIN_PANE_RATIO = 0.5
const DEFAULT_MAIN_PANE_SIZE = MAIN_PANE_RATIO * 100
export const MAX_COLS = 2
export const MAX_ROWS = 3
export const MAX_GRID_SIZE = 4
@@ -10,48 +8,3 @@ export const DIVIDER_SIZE = 1
export const MIN_SPLIT_WIDTH = 2 * MIN_PANE_WIDTH + DIVIDER_SIZE
export const MIN_SPLIT_HEIGHT = 2 * MIN_PANE_HEIGHT + DIVIDER_SIZE
function clamp(value: number, min: number, max: number): number {
return Math.max(min, Math.min(max, value))
}
export function getMainPaneSizePercent(config?: CapacityConfig): number {
return clamp(config?.mainPaneSize ?? DEFAULT_MAIN_PANE_SIZE, 20, 80)
}
export function computeMainPaneWidth(
windowWidth: number,
config?: CapacityConfig,
): number {
const safeWindowWidth = Math.max(0, windowWidth)
if (!config) {
return Math.floor(safeWindowWidth * MAIN_PANE_RATIO)
}
const dividerWidth = DIVIDER_SIZE
const minMainPaneWidth = config?.mainPaneMinWidth ?? Math.floor(safeWindowWidth * MAIN_PANE_RATIO)
const minAgentPaneWidth = config?.agentPaneWidth ?? MIN_PANE_WIDTH
const percentageMainPaneWidth = Math.floor(
(safeWindowWidth - dividerWidth) * (getMainPaneSizePercent(config) / 100),
)
const maxMainPaneWidth = Math.max(0, safeWindowWidth - dividerWidth - minAgentPaneWidth)
return clamp(
Math.max(percentageMainPaneWidth, minMainPaneWidth),
0,
maxMainPaneWidth,
)
}
export function computeAgentAreaWidth(
windowWidth: number,
config?: CapacityConfig,
): number {
const safeWindowWidth = Math.max(0, windowWidth)
if (!config) {
return Math.floor(safeWindowWidth * (1 - MAIN_PANE_RATIO))
}
const mainPaneWidth = computeMainPaneWidth(safeWindowWidth, config)
return Math.max(0, safeWindowWidth - DIVIDER_SIZE - mainPaneWidth)
}

View File

@@ -43,8 +43,6 @@ export interface SpawnDecision {
}
export interface CapacityConfig {
layout?: string
mainPaneSize?: number
mainPaneMinWidth: number
agentPaneWidth: number
}

View File

@@ -9,45 +9,6 @@
## HOOK TIERS
### Tier 1: Session Hooks (22) — `create-session-hooks.ts`
## STRUCTURE
```
hooks/
├── atlas/ # Main orchestration (757 lines)
├── anthropic-context-window-limit-recovery/ # Auto-summarize
├── todo-continuation-enforcer.ts # Force TODO completion
├── ralph-loop/ # Self-referential dev loop
├── claude-code-hooks/ # settings.json compat layer - see AGENTS.md
├── comment-checker/ # Prevents AI slop
├── auto-slash-command/ # Detects /command patterns
├── rules-injector/ # Conditional rules
├── directory-agents-injector/ # Auto-injects AGENTS.md
├── directory-readme-injector/ # Auto-injects README.md
├── edit-error-recovery/ # Recovers from failures
├── thinking-block-validator/ # Ensures valid <thinking>
├── context-window-monitor.ts # Reminds of headroom
├── session-recovery/ # Auto-recovers from crashes
├── think-mode/ # Dynamic thinking budget
├── keyword-detector/ # ultrawork/search/analyze modes
├── background-notification/ # OS notification
├── prometheus-md-only/ # Planner read-only mode
├── agent-usage-reminder/ # Specialized agent hints
├── auto-update-checker/ # Plugin update check
├── tool-output-truncator.ts # Prevents context bloat
├── compaction-context-injector/ # Injects context on compaction
├── delegate-task-retry/ # Retries failed delegations
├── interactive-bash-session/ # Tmux session management
├── non-interactive-env/ # Non-TTY environment handling
├── start-work/ # Sisyphus work session starter
├── task-resume-info/ # Resume info for cancelled tasks
├── question-label-truncator/ # Auto-truncates question labels
├── category-skill-reminder/ # Reminds of category skills
├── empty-task-response-detector.ts # Detects empty responses
├── sisyphus-junior-notepad/ # Sisyphus Junior notepad
├── stop-continuation-guard/ # Guards stop continuation
├── subagent-question-blocker/ # Blocks subagent questions
├── runtime-fallback/ # Auto-switch models on API errors
└── index.ts # Hook aggregation + registration
```
| Hook | Event | Purpose |
|------|-------|---------|

View File

@@ -1,7 +1,7 @@
import type { PluginInput } from "@opencode-ai/plugin"
import type { BackgroundManager } from "../../features/background-agent"
import { log } from "../../shared/logger"
import { createInternalAgentTextPart, resolveInheritedPromptTools } from "../../shared"
import { resolveInheritedPromptTools } from "../../shared"
import { HOOK_NAME } from "./hook-name"
import { BOULDER_CONTINUATION_PROMPT } from "./system-reminder-templates"
import { resolveRecentPromptContextForSession } from "./recent-model-resolver"
@@ -53,7 +53,7 @@ export async function injectBoulderContinuation(input: {
agent: agent ?? "atlas",
...(promptContext.model !== undefined ? { model: promptContext.model } : {}),
...(inheritedTools ? { tools: inheritedTools } : {}),
parts: [createInternalAgentTextPart(prompt)],
parts: [{ type: "text", text: prompt }],
},
query: { directory: ctx.directory },
})

View File

@@ -3,7 +3,7 @@ import { loadClaudeHooksConfig } from "../config"
import { loadPluginExtendedConfig } from "../config-loader"
import { executeStopHooks, type StopContext } from "../stop"
import type { PluginConfig } from "../types"
import { createInternalAgentTextPart, isHookDisabled, log } from "../../../shared"
import { isHookDisabled, log } from "../../../shared"
import {
clearSessionHookState,
sessionErrorState,
@@ -94,7 +94,7 @@ export function createSessionEventHandler(ctx: PluginInput, config: PluginConfig
.prompt({
path: { id: sessionID },
body: {
parts: [createInternalAgentTextPart(stopResult.injectPrompt)],
parts: [{ type: "text", text: stopResult.injectPrompt }],
},
query: { directory: ctx.directory },
})

View File

@@ -1,106 +0,0 @@
import { log } from "../../shared"
import { generateUnifiedDiff, countLineDiffs } from "../../tools/hashline-edit/diff-utils"
interface HashlineEditDiffEnhancerConfig {
hashline_edit?: { enabled: boolean }
}
type BeforeInput = { tool: string; sessionID: string; callID: string }
type BeforeOutput = { args: Record<string, unknown> }
type AfterInput = { tool: string; sessionID: string; callID: string }
type AfterOutput = { title: string; output: string; metadata: Record<string, unknown> }
const STALE_TIMEOUT_MS = 5 * 60 * 1000
const pendingCaptures = new Map<string, { content: string; filePath: string; storedAt: number }>()
function makeKey(sessionID: string, callID: string): string {
return `${sessionID}:${callID}`
}
function cleanupStaleEntries(): void {
const now = Date.now()
for (const [key, entry] of pendingCaptures) {
if (now - entry.storedAt > STALE_TIMEOUT_MS) {
pendingCaptures.delete(key)
}
}
}
function isWriteTool(toolName: string): boolean {
return toolName.toLowerCase() === "write"
}
function extractFilePath(args: Record<string, unknown>): string | undefined {
const path = args.path ?? args.filePath ?? args.file_path
return typeof path === "string" ? path : undefined
}
async function captureOldContent(filePath: string): Promise<string> {
try {
const file = Bun.file(filePath)
if (await file.exists()) {
return await file.text()
}
} catch {
log("[hashline-edit-diff-enhancer] failed to read old content", { filePath })
}
return ""
}
export function createHashlineEditDiffEnhancerHook(config: HashlineEditDiffEnhancerConfig) {
const enabled = config.hashline_edit?.enabled ?? false
return {
"tool.execute.before": async (input: BeforeInput, output: BeforeOutput) => {
if (!enabled || !isWriteTool(input.tool)) return
const filePath = extractFilePath(output.args)
if (!filePath) return
cleanupStaleEntries()
const oldContent = await captureOldContent(filePath)
pendingCaptures.set(makeKey(input.sessionID, input.callID), {
content: oldContent,
filePath,
storedAt: Date.now(),
})
},
"tool.execute.after": async (input: AfterInput, output: AfterOutput) => {
if (!enabled || !isWriteTool(input.tool)) return
const key = makeKey(input.sessionID, input.callID)
const captured = pendingCaptures.get(key)
if (!captured) return
pendingCaptures.delete(key)
const { content: oldContent, filePath } = captured
let newContent: string
try {
newContent = await Bun.file(filePath).text()
} catch {
log("[hashline-edit-diff-enhancer] failed to read new content", { filePath })
return
}
const { additions, deletions } = countLineDiffs(oldContent, newContent)
const unifiedDiff = generateUnifiedDiff(oldContent, newContent, filePath)
output.metadata.filediff = {
file: filePath,
path: filePath,
before: oldContent,
after: newContent,
additions,
deletions,
}
// TUI reads metadata.diff (unified diff string), not filediff object
output.metadata.diff = unifiedDiff
output.title = filePath
},
}
}
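The deleted hook above pairs `tool.execute.before` with `tool.execute.after` through a Map keyed by `sessionID:callID`, evicting entries whose after-hook never fired. The core capture/consume pattern in isolation (names mirror the deleted code; the helpers are a sketch, not the original module):

```typescript
const STALE_TIMEOUT_MS = 5 * 60 * 1000

type Capture = { content: string; storedAt: number }
const pending = new Map<string, Capture>()

function makeKey(sessionID: string, callID: string): string {
  return `${sessionID}:${callID}`
}

// before-hook side: remember state, dropping entries the after-hook abandoned
function store(sessionID: string, callID: string, content: string, now = Date.now()): void {
  for (const [key, entry] of pending) {
    if (now - entry.storedAt > STALE_TIMEOUT_MS) pending.delete(key)
  }
  pending.set(makeKey(sessionID, callID), { content, storedAt: now })
}

// after-hook side: one-shot read, so a retry of the same call cannot reuse stale state
function consume(sessionID: string, callID: string): string | undefined {
  const key = makeKey(sessionID, callID)
  const entry = pending.get(key)
  if (entry) pending.delete(key)
  return entry?.content
}
```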

View File

@@ -1,306 +0,0 @@
import { describe, test, expect, beforeEach } from "bun:test"
import { createHashlineEditDiffEnhancerHook } from "./hook"
function makeInput(tool: string, callID = "call-1", sessionID = "ses-1") {
return { tool, sessionID, callID }
}
function makeBeforeOutput(args: Record<string, unknown>) {
return { args }
}
function makeAfterOutput(overrides?: Partial<{ title: string; output: string; metadata: Record<string, unknown> }>) {
return {
title: overrides?.title ?? "",
output: overrides?.output ?? "Successfully applied 1 edit(s)",
metadata: overrides?.metadata ?? { truncated: false },
}
}
type FileDiffMetadata = {
file: string
path: string
before: string
after: string
additions: number
deletions: number
}
describe("hashline-edit-diff-enhancer", () => {
let hook: ReturnType<typeof createHashlineEditDiffEnhancerHook>
beforeEach(() => {
hook = createHashlineEditDiffEnhancerHook({ hashline_edit: { enabled: true } })
})
describe("tool.execute.before", () => {
test("captures old file content for write tool", async () => {
const filePath = import.meta.dir + "/index.test.ts"
const input = makeInput("write")
const output = makeBeforeOutput({ path: filePath, edits: [] })
await hook["tool.execute.before"](input, output)
// given the hook ran without error, the old content should be stored internally
// we verify in the after hook test that it produces filediff
})
test("ignores non-write tools", async () => {
const input = makeInput("read")
const output = makeBeforeOutput({ path: "/some/file.ts" })
// when - should not throw
await hook["tool.execute.before"](input, output)
})
})
describe("tool.execute.after", () => {
test("injects filediff metadata after write tool execution", async () => {
// given - a temp file that we can modify between before/after
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-diff-test-${Date.now()}.ts`
const oldContent = "line 1\nline 2\nline 3\n"
await Bun.write(tmpFile, oldContent)
const input = makeInput("write", "call-diff-1")
const beforeOutput = makeBeforeOutput({ path: tmpFile, edits: [] })
// when - before hook captures old content
await hook["tool.execute.before"](input, beforeOutput)
// when - file is modified (simulating write execution)
const newContent = "line 1\nmodified line 2\nline 3\nnew line 4\n"
await Bun.write(tmpFile, newContent)
// when - after hook computes filediff
const afterOutput = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput)
// then - metadata should contain filediff
const filediff = afterOutput.metadata.filediff as FileDiffMetadata
expect(filediff).toBeDefined()
expect(filediff.file).toBe(tmpFile)
expect(filediff.path).toBe(tmpFile)
expect(filediff.before).toBe(oldContent)
expect(filediff.after).toBe(newContent)
expect(filediff.additions).toBeGreaterThan(0)
expect(filediff.deletions).toBeGreaterThan(0)
// then - title should be set to the file path
expect(afterOutput.title).toBe(tmpFile)
// cleanup
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
test("does nothing for non-write tools", async () => {
const input = makeInput("read", "call-other")
const afterOutput = makeAfterOutput()
const originalMetadata = { ...afterOutput.metadata }
await hook["tool.execute.after"](input, afterOutput)
// then - metadata unchanged
expect(afterOutput.metadata).toEqual(originalMetadata)
})
test("does nothing when no before capture exists", async () => {
// given - no before hook was called for this callID
const input = makeInput("write", "call-no-before")
const afterOutput = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput)
// then - no filediff injected
expect(afterOutput.metadata.filediff).toBeUndefined()
})
test("cleans up stored content after consumption", async () => {
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-diff-cleanup-${Date.now()}.ts`
await Bun.write(tmpFile, "original")
const input = makeInput("write", "call-cleanup")
await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))
await Bun.write(tmpFile, "modified")
// when - first after call consumes
const afterOutput1 = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput1)
expect(afterOutput1.metadata.filediff).toBeDefined()
// when - second after call finds nothing
const afterOutput2 = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput2)
expect(afterOutput2.metadata.filediff).toBeUndefined()
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
test("handles file creation (empty old content)", async () => {
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-diff-create-${Date.now()}.ts`
// given - file doesn't exist during before hook
const input = makeInput("write", "call-create")
await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))
// when - file created during write
await Bun.write(tmpFile, "new content\n")
const afterOutput = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput)
// then - filediff shows creation (before is empty)
const filediff = afterOutput.metadata.filediff as FileDiffMetadata
expect(filediff).toBeDefined()
expect(filediff.before).toBe("")
expect(filediff.after).toBe("new content\n")
expect(filediff.additions).toBeGreaterThan(0)
expect(filediff.deletions).toBe(0)
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
})
describe("disabled config", () => {
test("does nothing when hashline_edit is disabled", async () => {
const disabledHook = createHashlineEditDiffEnhancerHook({ hashline_edit: { enabled: false } })
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-diff-disabled-${Date.now()}.ts`
await Bun.write(tmpFile, "content")
const input = makeInput("write", "call-disabled")
await disabledHook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))
await Bun.write(tmpFile, "modified")
const afterOutput = makeAfterOutput()
await disabledHook["tool.execute.after"](input, afterOutput)
// then - no filediff injected
expect(afterOutput.metadata.filediff).toBeUndefined()
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
})
describe("write tool support", () => {
test("captures filediff for write tool (path arg)", async () => {
//#given - a temp file
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-diff-write-${Date.now()}.ts`
const oldContent = "line 1\nline 2\n"
await Bun.write(tmpFile, oldContent)
const input = makeInput("write", "call-write-1")
const beforeOutput = makeBeforeOutput({ path: tmpFile })
//#when - before hook captures old content
await hook["tool.execute.before"](input, beforeOutput)
//#when - file is written
const newContent = "line 1\nmodified line 2\nnew line 3\n"
await Bun.write(tmpFile, newContent)
//#when - after hook computes filediff
const afterOutput = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput)
//#then - metadata should contain filediff
const filediff = afterOutput.metadata.filediff as FileDiffMetadata
expect(filediff).toBeDefined()
expect(filediff.file).toBe(tmpFile)
expect(filediff.additions).toBeGreaterThan(0)
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
test("captures filediff for write tool (filePath arg)", async () => {
//#given
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-diff-write-fp-${Date.now()}.ts`
await Bun.write(tmpFile, "original content\n")
const input = makeInput("write", "call-write-fp")
//#when - before hook uses filePath arg
await hook["tool.execute.before"](input, makeBeforeOutput({ filePath: tmpFile }))
await Bun.write(tmpFile, "new content\n")
const afterOutput = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput)
//#then
const filediff = afterOutput.metadata.filediff as FileDiffMetadata | undefined
expect(filediff).toBeDefined()
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
})
describe("raw content in filediff", () => {
test("filediff.before and filediff.after are raw file content", async () => {
//#given - a temp file
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-diff-format-${Date.now()}.ts`
const oldContent = "const x = 1\nconst y = 2\n"
await Bun.write(tmpFile, oldContent)
const input = makeInput("write", "call-hashline-format")
await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))
//#when - file is modified and after hook runs
const newContent = "const x = 1\nconst y = 42\n"
await Bun.write(tmpFile, newContent)
const afterOutput = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput)
//#then - before and after should be raw file content
const filediff = afterOutput.metadata.filediff as { before: string; after: string }
expect(filediff.before).toBe(oldContent)
expect(filediff.after).toBe(newContent)
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
})
describe("TUI diff support (metadata.diff)", () => {
test("injects unified diff string in metadata.diff for write tool TUI", async () => {
//#given - a temp file
const tmpDir = (await import("os")).tmpdir()
const tmpFile = `${tmpDir}/hashline-tui-diff-${Date.now()}.ts`
const oldContent = "line 1\nline 2\nline 3\n"
await Bun.write(tmpFile, oldContent)
const input = makeInput("write", "call-tui-diff")
await hook["tool.execute.before"](input, makeBeforeOutput({ path: tmpFile }))
//#when - file is modified
const newContent = "line 1\nmodified line 2\nline 3\n"
await Bun.write(tmpFile, newContent)
const afterOutput = makeAfterOutput()
await hook["tool.execute.after"](input, afterOutput)
//#then - metadata.diff should be a unified diff string
expect(afterOutput.metadata.diff).toBeDefined()
expect(typeof afterOutput.metadata.diff).toBe("string")
expect(afterOutput.metadata.diff).toContain("---")
expect(afterOutput.metadata.diff).toContain("+++")
expect(afterOutput.metadata.diff).toContain("@@")
expect(afterOutput.metadata.diff).toContain("-line 2")
expect(afterOutput.metadata.diff).toContain("+modified line 2")
await (await import("fs/promises")).unlink(tmpFile).catch(() => {})
})
})
})

View File

@@ -1 +0,0 @@
export { createHashlineEditDiffEnhancerHook } from "./hook"

View File

@@ -1,23 +1,16 @@
import type { PluginInput } from "@opencode-ai/plugin"
import { computeLineHash } from "../../tools/hashline-edit/hash-computation"
import { toHashlineContent } from "../../tools/hashline-edit/diff-utils"
interface HashlineReadEnhancerConfig {
hashline_edit?: { enabled: boolean }
}
const READ_LINE_PATTERN = /^(\d+): (.*)$/
const CONTENT_OPEN_TAG = "<content>"
const CONTENT_CLOSE_TAG = "</content>"
function isReadTool(toolName: string): boolean {
return toolName.toLowerCase() === "read"
}
function isWriteTool(toolName: string): boolean {
return toolName.toLowerCase() === "write"
}
function shouldProcess(config: HashlineReadEnhancerConfig): boolean {
return config.hashline_edit?.enabled ?? false
}
@@ -35,73 +28,18 @@ function transformLine(line: string): string {
const lineNumber = parseInt(match[1], 10)
const content = match[2]
const hash = computeLineHash(lineNumber, content)
return `${lineNumber}#${hash}:${content}`
return `${lineNumber}:${hash}|${content}`
}
function transformOutput(output: string): string {
if (!output) {
return output
}
if (!isTextFile(output)) {
return output
}
const lines = output.split("\n")
const contentStart = lines.indexOf(CONTENT_OPEN_TAG)
const contentEnd = lines.indexOf(CONTENT_CLOSE_TAG)
if (contentStart === -1 || contentEnd === -1 || contentEnd <= contentStart + 1) {
return output
}
const fileLines = lines.slice(contentStart + 1, contentEnd)
if (!isTextFile(fileLines[0] ?? "")) {
return output
}
const result: string[] = []
for (const line of fileLines) {
if (!READ_LINE_PATTERN.test(line)) {
result.push(...fileLines.slice(result.length))
break
}
result.push(transformLine(line))
}
return [...lines.slice(0, contentStart + 1), ...result, ...lines.slice(contentEnd)].join("\n")
}
function extractFilePath(metadata: unknown): string | undefined {
if (!metadata || typeof metadata !== "object") {
return undefined
}
const objectMeta = metadata as Record<string, unknown>
const candidates = [objectMeta.filepath, objectMeta.filePath, objectMeta.path, objectMeta.file]
for (const candidate of candidates) {
if (typeof candidate === "string" && candidate.length > 0) {
return candidate
}
}
return undefined
}
async function appendWriteHashlineOutput(output: { output: string; metadata: unknown }): Promise<void> {
if (output.output.includes("Updated file (LINE#ID:content):")) {
return
}
const filePath = extractFilePath(output.metadata)
if (!filePath) {
return
}
const file = Bun.file(filePath)
if (!(await file.exists())) {
return
}
const content = await file.text()
const hashlined = toHashlineContent(content)
output.output = `${output.output}\n\nUpdated file (LINE#ID:content):\n${hashlined}`
return lines.map(transformLine).join("\n")
}
export function createHashlineReadEnhancerHook(
@@ -114,9 +52,6 @@ export function createHashlineReadEnhancerHook(
output: { title: string; output: string; metadata: unknown }
) => {
if (!isReadTool(input.tool)) {
if (isWriteTool(input.tool) && typeof output.output === "string" && shouldProcess(config)) {
await appendWriteHashlineOutput(output)
}
return
}
if (typeof output.output !== "string") {

View File

@@ -1,93 +1,248 @@
import { describe, it, expect } from "bun:test"
import type { PluginInput } from "@opencode-ai/plugin"
import { describe, it, expect, beforeEach } from "bun:test"
import { createHashlineReadEnhancerHook } from "./hook"
import * as fs from "node:fs"
import * as os from "node:os"
import * as path from "node:path"
import type { PluginInput } from "@opencode-ai/plugin"
function mockCtx(): PluginInput {
//#given - Test setup helpers
function createMockContext(): PluginInput {
return {
client: {} as PluginInput["client"],
client: {} as unknown as PluginInput["client"],
directory: "/test",
project: "/test" as unknown as PluginInput["project"],
worktree: "/test",
serverUrl: "http://localhost" as unknown as PluginInput["serverUrl"],
$: {} as PluginInput["$"],
}
}
describe("hashline-read-enhancer", () => {
it("hashifies only file content lines in read output", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx(), { hashline_edit: { enabled: true } })
const input = { tool: "read", sessionID: "s", callID: "c" }
const output = {
title: "demo.ts",
output: [
"<path>/tmp/demo.ts</path>",
"<type>file</type>",
"<content>",
"1: const x = 1",
"2: const y = 2",
"",
"(End of file - total 2 lines)",
"</content>",
"",
"<system-reminder>",
"1: keep this unchanged",
"</system-reminder>",
].join("\n"),
metadata: {},
}
interface TestConfig {
hashline_edit?: { enabled: boolean }
}
//#when
await hook["tool.execute.after"](input, output)
function createMockConfig(enabled: boolean): TestConfig {
return {
hashline_edit: { enabled },
}
}
//#then
const lines = output.output.split("\n")
expect(lines[3]).toMatch(/^1#[ZPMQVRWSNKTXJBYH]{2}:const x = 1$/)
expect(lines[4]).toMatch(/^2#[ZPMQVRWSNKTXJBYH]{2}:const y = 2$/)
expect(lines[10]).toBe("1: keep this unchanged")
describe("createHashlineReadEnhancerHook", () => {
let mockCtx: PluginInput
const sessionID = "test-session-123"
beforeEach(() => {
mockCtx = createMockContext()
})
it("appends LINE#ID output for write tool using metadata filepath", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx(), { hashline_edit: { enabled: true } })
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "hashline-write-"))
const filePath = path.join(tempDir, "demo.ts")
fs.writeFileSync(filePath, "const x = 1\nconst y = 2")
const input = { tool: "write", sessionID: "s", callID: "c" }
const output = {
title: "write",
output: "Wrote file successfully.",
metadata: { filepath: filePath },
}
describe("tool name matching", () => {
it("should process 'read' tool (lowercase)", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: "1: hello\n2: world", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toContain("Updated file (LINE#ID:content):")
expect(output.output).toMatch(/1#[ZPMQVRWSNKTXJBYH]{2}:const x = 1/)
expect(output.output).toMatch(/2#[ZPMQVRWSNKTXJBYH]{2}:const y = 2/)
//#then
expect(output.output).toContain("1:")
expect(output.output).toContain("|")
})
fs.rmSync(tempDir, { recursive: true, force: true })
it("should process 'Read' tool (mixed case)", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "Read", sessionID, callID: "call-1" }
const output = { title: "Read", output: "1: hello\n2: world", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toContain("|")
})
it("should process 'READ' tool (uppercase)", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "READ", sessionID, callID: "call-1" }
const output = { title: "Read", output: "1: hello\n2: world", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toContain("|")
})
it("should skip non-read tools", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "edit", sessionID, callID: "call-1" }
const originalOutput = "1: hello\n2: world"
const output = { title: "Edit", output: originalOutput, metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toBe(originalOutput)
})
})
it("skips when feature is disabled", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx(), { hashline_edit: { enabled: false } })
const input = { tool: "read", sessionID: "s", callID: "c" }
const output = {
title: "demo.ts",
output: "<content>\n1: const x = 1\n</content>",
metadata: {},
}
describe("config flag check", () => {
it("should skip when hashline_edit is disabled", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(false))
const input = { tool: "read", sessionID, callID: "call-1" }
const originalOutput = "1: hello\n2: world"
const output = { title: "Read", output: originalOutput, metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toBe("<content>\n1: const x = 1\n</content>")
//#then
expect(output.output).toBe(originalOutput)
})
it("should skip when hashline_edit config is missing", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, {})
const input = { tool: "read", sessionID, callID: "call-1" }
const originalOutput = "1: hello\n2: world"
const output = { title: "Read", output: originalOutput, metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toBe(originalOutput)
})
})
describe("output transformation", () => {
it("should transform 'N: content' format to 'N:HASH|content'", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: "1: function hello() {\n2: console.log('world')\n3: }", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
const lines = output.output.split("\n")
expect(lines[0]).toMatch(/^1:[a-f0-9]{2}\|function hello\(\) \{$/)
expect(lines[1]).toMatch(/^2:[a-f0-9]{2}\| console\.log\('world'\)$/)
expect(lines[2]).toMatch(/^3:[a-f0-9]{2}\|\}$/)
})
it("should handle empty output", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: "", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toBe("")
})
it("should handle single line", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: "1: const x = 1", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toMatch(/^1:[a-f0-9]{2}\|const x = 1$/)
})
})
describe("binary file detection", () => {
it("should skip binary files (no line number prefix)", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const originalOutput = "PNG\x89\x50\x4E\x47\x0D\x0A\x1A\x0A"
const output = { title: "Read", output: originalOutput, metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toBe(originalOutput)
})
it("should skip if first line doesn't match pattern", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const originalOutput = "some binary data\nmore data"
const output = { title: "Read", output: originalOutput, metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toBe(originalOutput)
})
it("should process if first line matches 'N: ' pattern", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: "1: valid line\n2: another line", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toContain("|")
})
})
describe("edge cases", () => {
it("should handle non-string output gracefully", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: null as unknown as string, metadata: {} }
//#when - should not throw
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toBeNull()
})
it("should handle lines with no content after colon", async () => {
//#given
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: "1: hello\n2: \n3: world", metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
const lines = output.output.split("\n")
expect(lines[0]).toMatch(/^1:[a-f0-9]{2}\|hello$/)
expect(lines[1]).toMatch(/^2:[a-f0-9]{2}\|$/)
expect(lines[2]).toMatch(/^3:[a-f0-9]{2}\|world$/)
})
it("should handle very long lines", async () => {
//#given
const longContent = "a".repeat(1000)
const hook = createHashlineReadEnhancerHook(mockCtx, createMockConfig(true))
const input = { tool: "read", sessionID, callID: "call-1" }
const output = { title: "Read", output: `1: ${longContent}`, metadata: {} }
//#when
await hook["tool.execute.after"](input, output)
//#then
expect(output.output).toMatch(/^1:[a-f0-9]{2}\|a+$/)
})
})
})

View File

@@ -28,7 +28,6 @@ export { createThinkingBlockValidatorHook } from "./thinking-block-validator";
export { createCategorySkillReminderHook } from "./category-skill-reminder";
export { createRalphLoopHook, type RalphLoopHook } from "./ralph-loop";
export { createNoSisyphusGptHook } from "./no-sisyphus-gpt";
export { createNoHephaestusNonGptHook } from "./no-hephaestus-non-gpt";
export { createAutoSlashCommandHook } from "./auto-slash-command";
export { createEditErrorRecoveryHook } from "./edit-error-recovery";
export { createJsonErrorRecoveryHook } from "./json-error-recovery";
@@ -45,7 +44,6 @@ export { createCompactionTodoPreserverHook } from "./compaction-todo-preserver";
export { createUnstableAgentBabysitterHook } from "./unstable-agent-babysitter";
export { createPreemptiveCompactionHook } from "./preemptive-compaction";
export { createTasksTodowriteDisablerHook } from "./tasks-todowrite-disabler";
export { createRuntimeFallbackHook, type RuntimeFallbackHook, type RuntimeFallbackOptions } from "./runtime-fallback";
export { createWriteExistingFileGuardHook } from "./write-existing-file-guard";
export { createHashlineReadEnhancerHook } from "./hashline-read-enhancer";
export { createHashlineEditDiffEnhancerHook } from "./hashline-edit-diff-enhancer";

View File

@@ -74,7 +74,9 @@ export function createKeywordDetectorHook(ctx: PluginInput, _collector?: Context
if (hasUltrawork) {
log(`[keyword-detector] Ultrawork mode activated`, { sessionID: input.sessionID })
output.message.variant = "max"
if (output.message.variant === undefined) {
output.message.variant = "max"
}
ctx.client.tui
.showToast({

View File

@@ -219,8 +219,8 @@ describe("keyword-detector session filtering", () => {
expect(toastCalls).toContain("Ultrawork Mode Activated")
})
test("should override existing variant when ultrawork keyword is used", async () => {
// given - main session set with pre-existing variant from TUI
test("should not override existing variant", async () => {
// given - main session set with pre-existing variant
setMainSession("main-123")
const toastCalls: string[] = []
@@ -236,8 +236,8 @@ describe("keyword-detector session filtering", () => {
output
)
// then - ultrawork should override TUI variant to max
expect(output.message.variant).toBe("max")
// then - existing variant should remain
expect(output.message.variant).toBe("low")
expect(toastCalls).toContain("Ultrawork Mode Activated")
})
})

View File

@@ -1,54 +0,0 @@
import type { PluginInput } from "@opencode-ai/plugin"
import { isGptModel } from "../../agents/types"
import { getSessionAgent, updateSessionAgent } from "../../features/claude-code-session-state"
import { log } from "../../shared"
import { getAgentConfigKey, getAgentDisplayName } from "../../shared/agent-display-names"
const TOAST_TITLE = "NEVER Use Hephaestus with Non-GPT"
const TOAST_MESSAGE = [
"Hephaestus is designed exclusively for GPT models.",
"Hephaestus is trash without GPT.",
"For Claude/Kimi/GLM models, always use Sisyphus.",
].join("\n")
const SISYPHUS_DISPLAY = getAgentDisplayName("sisyphus")
function showToast(ctx: PluginInput, sessionID: string): void {
ctx.client.tui.showToast({
body: {
title: TOAST_TITLE,
message: TOAST_MESSAGE,
variant: "error",
duration: 10000,
},
}).catch((error) => {
log("[no-hephaestus-non-gpt] Failed to show toast", {
sessionID,
error,
})
})
}
export function createNoHephaestusNonGptHook(ctx: PluginInput) {
return {
"chat.message": async (input: {
sessionID: string
agent?: string
model?: { providerID: string; modelID: string }
}, output?: {
message?: { agent?: string; [key: string]: unknown }
}): Promise<void> => {
const rawAgent = input.agent ?? getSessionAgent(input.sessionID) ?? ""
const agentKey = getAgentConfigKey(rawAgent)
const modelID = input.model?.modelID
if (agentKey === "hephaestus" && modelID && !isGptModel(modelID)) {
showToast(ctx, input.sessionID)
input.agent = SISYPHUS_DISPLAY
if (output?.message) {
output.message.agent = SISYPHUS_DISPLAY
}
updateSessionAgent(input.sessionID, SISYPHUS_DISPLAY)
}
},
}
}

View File

@@ -1,115 +0,0 @@
import { describe, expect, spyOn, test } from "bun:test"
import { _resetForTesting, updateSessionAgent } from "../../features/claude-code-session-state"
import { getAgentDisplayName } from "../../shared/agent-display-names"
import { createNoHephaestusNonGptHook } from "./index"
const HEPHAESTUS_DISPLAY = getAgentDisplayName("hephaestus")
const SISYPHUS_DISPLAY = getAgentDisplayName("sisyphus")
function createOutput() {
return {
message: {},
parts: [],
}
}
describe("no-hephaestus-non-gpt hook", () => {
test("shows toast on every chat.message when hephaestus uses non-gpt model", async () => {
// given - hephaestus with claude model
const showToast = spyOn({ fn: async () => ({}) }, "fn")
const hook = createNoHephaestusNonGptHook({
client: { tui: { showToast } },
} as any)
const output1 = createOutput()
const output2 = createOutput()
// when - chat.message is called repeatedly
await hook["chat.message"]?.({
sessionID: "ses_1",
agent: HEPHAESTUS_DISPLAY,
model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
}, output1)
await hook["chat.message"]?.({
sessionID: "ses_1",
agent: HEPHAESTUS_DISPLAY,
model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
}, output2)
// then - toast is shown and agent is switched to sisyphus
expect(showToast).toHaveBeenCalledTimes(2)
expect(output1.message.agent).toBe(SISYPHUS_DISPLAY)
expect(output2.message.agent).toBe(SISYPHUS_DISPLAY)
expect(showToast.mock.calls[0]?.[0]).toMatchObject({
body: {
title: "NEVER Use Hephaestus with Non-GPT",
message: expect.stringContaining("Hephaestus is trash without GPT."),
variant: "error",
},
})
})
test("does not show toast when hephaestus uses gpt model", async () => {
// given - hephaestus with gpt model
const showToast = spyOn({ fn: async () => ({}) }, "fn")
const hook = createNoHephaestusNonGptHook({
client: { tui: { showToast } },
} as any)
const output = createOutput()
// when - chat.message runs
await hook["chat.message"]?.({
sessionID: "ses_2",
agent: HEPHAESTUS_DISPLAY,
model: { providerID: "openai", modelID: "gpt-5.3-codex" },
}, output)
// then - no toast, agent unchanged
expect(showToast).toHaveBeenCalledTimes(0)
expect(output.message.agent).toBeUndefined()
})
test("does not show toast for non-hephaestus agent", async () => {
// given - sisyphus with claude model (non-gpt)
const showToast = spyOn({ fn: async () => ({}) }, "fn")
const hook = createNoHephaestusNonGptHook({
client: { tui: { showToast } },
} as any)
const output = createOutput()
// when - chat.message runs
await hook["chat.message"]?.({
sessionID: "ses_3",
agent: SISYPHUS_DISPLAY,
model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
}, output)
// then - no toast
expect(showToast).toHaveBeenCalledTimes(0)
expect(output.message.agent).toBeUndefined()
})
test("uses session agent fallback when input agent is missing", async () => {
// given - session agent saved as hephaestus
_resetForTesting()
updateSessionAgent("ses_4", HEPHAESTUS_DISPLAY)
const showToast = spyOn({ fn: async () => ({}) }, "fn")
const hook = createNoHephaestusNonGptHook({
client: { tui: { showToast } },
} as any)
const output = createOutput()
// when - chat.message runs without input.agent
await hook["chat.message"]?.({
sessionID: "ses_4",
model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
}, output)
// then - toast shown via session-agent fallback, switched to sisyphus
expect(showToast).toHaveBeenCalledTimes(1)
expect(output.message.agent).toBe(SISYPHUS_DISPLAY)
})
})

View File

@@ -1 +0,0 @@
export { createNoHephaestusNonGptHook } from "./hook"

View File

@@ -6,9 +6,10 @@ import { getAgentConfigKey, getAgentDisplayName } from "../../shared/agent-displ
const TOAST_TITLE = "NEVER Use Sisyphus with GPT"
const TOAST_MESSAGE = [
"Sisyphus works best with Claude Opus, and works fine with Kimi/GLM models.",
"Do NOT use Sisyphus with GPT.",
"For GPT models, always use Hephaestus.",
"Sisyphus is NOT designed for GPT models.",
"Sisyphus + GPT performs worse than vanilla Codex.",
"You are literally burning money.",
"Use Hephaestus for GPT models instead.",
].join("\n")
const HEPHAESTUS_DISPLAY = getAgentDisplayName("hephaestus")

View File

@@ -43,7 +43,7 @@ describe("no-sisyphus-gpt hook", () => {
expect(showToast.mock.calls[0]?.[0]).toMatchObject({
body: {
title: "NEVER Use Sisyphus with GPT",
message: expect.stringContaining("For GPT models, always use Hephaestus."),
message: expect.stringContaining("burning money"),
variant: "error",
},
})

View File

@@ -3,11 +3,7 @@ import { log } from "../../shared/logger"
import { findNearestMessageWithFields } from "../../features/hook-message-injector"
import { getMessageDir } from "./message-storage-directory"
import { withTimeout } from "./with-timeout"
import {
createInternalAgentTextPart,
normalizeSDKResponse,
resolveInheritedPromptTools,
} from "../../shared"
import { normalizeSDKResponse, resolveInheritedPromptTools } from "../../shared"
type MessageInfo = {
agent?: string
@@ -68,7 +64,7 @@ export async function injectContinuationPrompt(
...(agent !== undefined ? { agent } : {}),
...(model !== undefined ? { model } : {}),
...(inheritedTools ? { tools: inheritedTools } : {}),
parts: [createInternalAgentTextPart(options.prompt)],
parts: [{ type: "text", text: options.prompt }],
},
query: { directory: options.directory },
})

View File

@@ -1,54 +0,0 @@
import { getSessionAgent } from "../../features/claude-code-session-state"
export const AGENT_NAMES = [
"sisyphus",
"oracle",
"librarian",
"explore",
"prometheus",
"atlas",
"metis",
"momus",
"hephaestus",
"sisyphus-junior",
"build",
"plan",
"multimodal-looker",
]
export const agentPattern = new RegExp(
`\\b(${AGENT_NAMES
.sort((a, b) => b.length - a.length)
.map((a) => a.replace(/-/g, "\\-"))
.join("|")})\\b`,
"i",
)
export function detectAgentFromSession(sessionID: string): string | undefined {
const match = sessionID.match(agentPattern)
if (match) {
return match[1].toLowerCase()
}
return undefined
}
export function normalizeAgentName(agent: string | undefined): string | undefined {
if (!agent) return undefined
const normalized = agent.toLowerCase().trim()
if (AGENT_NAMES.includes(normalized)) {
return normalized
}
const match = normalized.match(agentPattern)
if (match) {
return match[1].toLowerCase()
}
return undefined
}
export function resolveAgentForSession(sessionID: string, eventAgent?: string): string | undefined {
return (
normalizeAgentName(eventAgent) ??
normalizeAgentName(getSessionAgent(sessionID)) ??
detectAgentFromSession(sessionID)
)
}
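The longest-name-first sort in `agentPattern` above matters: without it, `sisyphus` would shadow `sisyphus-junior` in the alternation. A minimal standalone sketch of that behavior (name list trimmed, session IDs invented for illustration):

```typescript
// Sketch of the agent-name pattern from agent-resolver (list trimmed).
// Longer names sort first so "sisyphus-junior" is tried before "sisyphus".
const NAMES = ["sisyphus", "oracle", "sisyphus-junior"]
const pattern = new RegExp(
  `\\b(${[...NAMES]
    .sort((a, b) => b.length - a.length)
    .map((n) => n.replace(/-/g, "\\-"))
    .join("|")})\\b`,
  "i",
)
const match = "ses-sisyphus-junior-01ABC".match(pattern)
console.log(match?.[1].toLowerCase()) // "sisyphus-junior", not "sisyphus"
```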

View File

@@ -1,213 +0,0 @@
import type { HookDeps } from "./types"
import { HOOK_NAME } from "./constants"
import { log } from "../../shared/logger"
import { normalizeAgentName, resolveAgentForSession } from "./agent-resolver"
import { getSessionAgent } from "../../features/claude-code-session-state"
import { getFallbackModelsForSession } from "./fallback-models"
import { prepareFallback } from "./fallback-state"
import { SessionCategoryRegistry } from "../../shared/session-category-registry"
const SESSION_TTL_MS = 30 * 60 * 1000
export function createAutoRetryHelpers(deps: HookDeps) {
const { ctx, config, options, sessionStates, sessionLastAccess, sessionRetryInFlight, sessionAwaitingFallbackResult, sessionFallbackTimeouts, pluginConfig } = deps
const abortSessionRequest = async (sessionID: string, source: string): Promise<void> => {
try {
await ctx.client.session.abort({ path: { id: sessionID } })
log(`[${HOOK_NAME}] Aborted in-flight session request (${source})`, { sessionID })
} catch (error) {
log(`[${HOOK_NAME}] Failed to abort in-flight session request (${source})`, {
sessionID,
error: String(error),
})
}
}
const clearSessionFallbackTimeout = (sessionID: string) => {
const timer = sessionFallbackTimeouts.get(sessionID)
if (timer) {
clearTimeout(timer)
sessionFallbackTimeouts.delete(sessionID)
}
}
const scheduleSessionFallbackTimeout = (sessionID: string, resolvedAgent?: string) => {
clearSessionFallbackTimeout(sessionID)
const timeoutMs = options?.session_timeout_ms ?? config.timeout_seconds * 1000
if (timeoutMs <= 0) return
const timer = setTimeout(async () => {
sessionFallbackTimeouts.delete(sessionID)
const state = sessionStates.get(sessionID)
if (!state) return
if (sessionRetryInFlight.has(sessionID)) {
log(`[${HOOK_NAME}] Overriding in-flight retry due to session timeout`, { sessionID })
}
await abortSessionRequest(sessionID, "session.timeout")
sessionRetryInFlight.delete(sessionID)
if (state.pendingFallbackModel) {
state.pendingFallbackModel = undefined
}
const fallbackModels = getFallbackModelsForSession(sessionID, resolvedAgent, pluginConfig)
if (fallbackModels.length === 0) return
log(`[${HOOK_NAME}] Session fallback timeout reached`, {
sessionID,
timeoutSeconds: config.timeout_seconds,
currentModel: state.currentModel,
})
const result = prepareFallback(sessionID, state, fallbackModels, config)
if (result.success && result.newModel) {
await autoRetryWithFallback(sessionID, result.newModel, resolvedAgent, "session.timeout")
}
}, timeoutMs)
sessionFallbackTimeouts.set(sessionID, timer)
}
const autoRetryWithFallback = async (
sessionID: string,
newModel: string,
resolvedAgent: string | undefined,
source: string,
): Promise<void> => {
if (sessionRetryInFlight.has(sessionID)) {
log(`[${HOOK_NAME}] Retry already in flight, skipping (${source})`, { sessionID })
return
}
const modelParts = newModel.split("/")
if (modelParts.length < 2) {
log(`[${HOOK_NAME}] Invalid model format (missing provider prefix): ${newModel}`)
return
}
const fallbackModelObj = {
providerID: modelParts[0],
modelID: modelParts.slice(1).join("/"),
}
sessionRetryInFlight.add(sessionID)
try {
const messagesResp = await ctx.client.session.messages({
path: { id: sessionID },
query: { directory: ctx.directory },
})
const msgs = (messagesResp as {
data?: Array<{
info?: Record<string, unknown>
parts?: Array<{ type?: string; text?: string }>
}>
}).data
const lastUserMsg = msgs?.filter((m) => m.info?.role === "user").pop()
const lastUserPartsRaw =
lastUserMsg?.parts ??
(lastUserMsg?.info?.parts as Array<{ type?: string; text?: string }> | undefined)
if (lastUserPartsRaw && lastUserPartsRaw.length > 0) {
log(`[${HOOK_NAME}] Auto-retrying with fallback model (${source})`, {
sessionID,
model: newModel,
})
const retryParts = lastUserPartsRaw
.filter((p) => p.type === "text" && typeof p.text === "string" && p.text.length > 0)
.map((p) => ({ type: "text" as const, text: p.text! }))
if (retryParts.length > 0) {
const retryAgent = resolvedAgent ?? getSessionAgent(sessionID)
sessionAwaitingFallbackResult.add(sessionID)
scheduleSessionFallbackTimeout(sessionID, retryAgent)
await ctx.client.session.promptAsync({
path: { id: sessionID },
body: {
...(retryAgent ? { agent: retryAgent } : {}),
model: fallbackModelObj,
parts: retryParts,
},
query: { directory: ctx.directory },
})
}
} else {
log(`[${HOOK_NAME}] No user message found for auto-retry (${source})`, { sessionID })
}
} catch (retryError) {
log(`[${HOOK_NAME}] Auto-retry failed (${source})`, { sessionID, error: String(retryError) })
} finally {
const state = sessionStates.get(sessionID)
if (state?.pendingFallbackModel === newModel) {
state.pendingFallbackModel = undefined
}
sessionRetryInFlight.delete(sessionID)
}
}
const resolveAgentForSessionFromContext = async (
sessionID: string,
eventAgent?: string,
): Promise<string | undefined> => {
const resolved = resolveAgentForSession(sessionID, eventAgent)
if (resolved) return resolved
try {
const messagesResp = await ctx.client.session.messages({
path: { id: sessionID },
query: { directory: ctx.directory },
})
const msgs = (messagesResp as { data?: Array<{ info?: Record<string, unknown> }> }).data
if (!msgs || msgs.length === 0) return undefined
for (let i = msgs.length - 1; i >= 0; i--) {
const info = msgs[i]?.info
const infoAgent = typeof info?.agent === "string" ? info.agent : undefined
const normalized = normalizeAgentName(infoAgent)
if (normalized) {
return normalized
}
}
} catch {
return undefined
}
return undefined
}
const cleanupStaleSessions = () => {
const now = Date.now()
let cleanedCount = 0
for (const [sessionID, lastAccess] of sessionLastAccess.entries()) {
if (now - lastAccess > SESSION_TTL_MS) {
sessionStates.delete(sessionID)
sessionLastAccess.delete(sessionID)
sessionRetryInFlight.delete(sessionID)
sessionAwaitingFallbackResult.delete(sessionID)
clearSessionFallbackTimeout(sessionID)
SessionCategoryRegistry.remove(sessionID)
cleanedCount++
}
}
if (cleanedCount > 0) {
log(`[${HOOK_NAME}] Cleaned up ${cleanedCount} stale session states`)
}
}
return {
abortSessionRequest,
clearSessionFallbackTimeout,
scheduleSessionFallbackTimeout,
autoRetryWithFallback,
resolveAgentForSessionFromContext,
cleanupStaleSessions,
}
}
export type AutoRetryHelpers = ReturnType<typeof createAutoRetryHelpers>
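`autoRetryWithFallback` above splits the fallback model string on `/`: the first segment is the provider and the remainder, which may itself contain slashes, is the model ID. A standalone sketch of that parsing (the model name is a hypothetical illustration, not a real configured fallback):

```typescript
// Mirrors the provider/model split in autoRetryWithFallback.
const toModelRef = (model: string) => {
  const parts = model.split("/")
  if (parts.length < 2) return undefined // missing provider prefix
  return { providerID: parts[0], modelID: parts.slice(1).join("/") }
}
console.log(toModelRef("openrouter/anthropic/claude-sonnet"))
// { providerID: "openrouter", modelID: "anthropic/claude-sonnet" }
console.log(toModelRef("no-prefix")) // undefined
```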

View File

@@ -1,62 +0,0 @@
import type { HookDeps } from "./types"
import { HOOK_NAME } from "./constants"
import { log } from "../../shared/logger"
import { createFallbackState } from "./fallback-state"
export function createChatMessageHandler(deps: HookDeps) {
const { config, sessionStates, sessionLastAccess } = deps
return async (
input: { sessionID: string; agent?: string; model?: { providerID: string; modelID: string } },
output: { message: { model?: { providerID: string; modelID: string } }; parts?: Array<{ type: string; text?: string }> }
) => {
if (!config.enabled) return
const { sessionID } = input
let state = sessionStates.get(sessionID)
if (!state) return
sessionLastAccess.set(sessionID, Date.now())
const requestedModel = input.model
? `${input.model.providerID}/${input.model.modelID}`
: undefined
if (requestedModel && requestedModel !== state.currentModel) {
if (state.pendingFallbackModel && state.pendingFallbackModel === requestedModel) {
state.pendingFallbackModel = undefined
return
}
log(`[${HOOK_NAME}] Detected manual model change, resetting fallback state`, {
sessionID,
from: state.currentModel,
to: requestedModel,
})
state = createFallbackState(requestedModel)
sessionStates.set(sessionID, state)
return
}
if (state.currentModel === state.originalModel) return
const activeModel = state.currentModel
log(`[${HOOK_NAME}] Applying fallback model override`, {
sessionID,
from: input.model,
to: activeModel,
})
if (output.message && activeModel) {
const parts = activeModel.split("/")
if (parts.length >= 2) {
output.message.model = {
providerID: parts[0],
modelID: parts.slice(1).join("/"),
}
}
}
}
}

View File

@@ -1,44 +0,0 @@
/**
* Runtime Fallback Hook - Constants
*
* Default values and configuration constants for the runtime fallback feature.
*/
import type { RuntimeFallbackConfig } from "../../config"
/**
* Default configuration values for runtime fallback
*/
export const DEFAULT_CONFIG: Required<RuntimeFallbackConfig> = {
enabled: true,
retry_on_errors: [400, 429, 503, 529],
max_fallback_attempts: 3,
cooldown_seconds: 60,
timeout_seconds: 30,
notify_on_fallback: true,
}
/**
* Error patterns that indicate rate limiting or temporary failures
* These are checked in addition to HTTP status codes
*/
export const RETRYABLE_ERROR_PATTERNS = [
/rate.?limit/i,
/too.?many.?requests/i,
/quota.?exceeded/i,
/usage\s+limit\s+has\s+been\s+reached/i,
/service.?unavailable/i,
/overloaded/i,
/temporarily.?unavailable/i,
/try.?again/i,
/credit.*balance.*too.*low/i,
/insufficient.?(?:credits?|funds?|balance)/i,
/(?:^|\s)429(?:\s|$)/,
/(?:^|\s)503(?:\s|$)/,
/(?:^|\s)529(?:\s|$)/,
]
/**
* Hook name for identification and logging
*/
export const HOOK_NAME = "runtime-fallback"
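The patterns above are matched against the lowercased error message when no status code is available; a single hit marks the error retryable. A minimal sketch of that check, using a subset of the list:

```typescript
// Subset of RETRYABLE_ERROR_PATTERNS, consumed the same way as in
// isRetryableError: any one pattern matching makes the error retryable.
const PATTERNS = [/rate.?limit/i, /quota.?exceeded/i, /(?:^|\s)429(?:\s|$)/]
const isRetryableMessage = (message: string) =>
  PATTERNS.some((p) => p.test(message))
console.log(isRetryableMessage("Rate limit reached, please retry")) // true
console.log(isRetryableMessage("SyntaxError: unexpected token")) // false
```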

View File

@@ -1,169 +0,0 @@
import { DEFAULT_CONFIG, RETRYABLE_ERROR_PATTERNS } from "./constants"
export function getErrorMessage(error: unknown): string {
if (!error) return ""
if (typeof error === "string") return error.toLowerCase()
const errorObj = error as Record<string, unknown>
const paths = [
errorObj.data,
errorObj.error,
errorObj,
(errorObj.data as Record<string, unknown>)?.error,
]
for (const obj of paths) {
if (obj && typeof obj === "object") {
const msg = (obj as Record<string, unknown>).message
if (typeof msg === "string" && msg.length > 0) {
return msg.toLowerCase()
}
}
}
try {
return JSON.stringify(error).toLowerCase()
} catch {
return ""
}
}
export function extractStatusCode(error: unknown, retryOnErrors?: number[]): number | undefined {
if (!error) return undefined
const errorObj = error as Record<string, unknown>
const statusCode = errorObj.statusCode ?? errorObj.status ?? (errorObj.data as Record<string, unknown>)?.statusCode
if (typeof statusCode === "number") {
return statusCode
}
const codes = retryOnErrors ?? DEFAULT_CONFIG.retry_on_errors
const pattern = new RegExp(`\\b(${codes.join("|")})\\b`)
const message = getErrorMessage(error)
const statusMatch = message.match(pattern)
if (statusMatch) {
return parseInt(statusMatch[1], 10)
}
return undefined
}
export function extractErrorName(error: unknown): string | undefined {
if (!error || typeof error !== "object") return undefined
const errorObj = error as Record<string, unknown>
const directName = errorObj.name
if (typeof directName === "string" && directName.length > 0) {
return directName
}
const nestedError = errorObj.error as Record<string, unknown> | undefined
const nestedName = nestedError?.name
if (typeof nestedName === "string" && nestedName.length > 0) {
return nestedName
}
const dataError = (errorObj.data as Record<string, unknown> | undefined)?.error as Record<string, unknown> | undefined
const dataErrorName = dataError?.name
if (typeof dataErrorName === "string" && dataErrorName.length > 0) {
return dataErrorName
}
return undefined
}
export function classifyErrorType(error: unknown): string | undefined {
const message = getErrorMessage(error)
const errorName = extractErrorName(error)?.toLowerCase()
if (
errorName?.includes("loadapi") ||
(/api.?key.?is.?missing/i.test(message) && /environment variable/i.test(message))
) {
return "missing_api_key"
}
if (/api.?key/i.test(message) && /must be a string/i.test(message)) {
return "invalid_api_key"
}
if (errorName?.includes("unknownerror") && /model\s+not\s+found/i.test(message)) {
return "model_not_found"
}
return undefined
}
export interface AutoRetrySignal {
signal: string
}
export const AUTO_RETRY_PATTERNS: Array<(combined: string) => boolean> = [
(combined) => /retrying\s+in/i.test(combined),
(combined) =>
/(?:too\s+many\s+requests|quota\s*exceeded|usage\s+limit|rate\s+limit|limit\s+reached)/i.test(combined),
]
export function extractAutoRetrySignal(info: Record<string, unknown> | undefined): AutoRetrySignal | undefined {
if (!info) return undefined
const candidates: string[] = []
const directStatus = info.status
if (typeof directStatus === "string") candidates.push(directStatus)
const summary = info.summary
if (typeof summary === "string") candidates.push(summary)
const message = info.message
if (typeof message === "string") candidates.push(message)
const details = info.details
if (typeof details === "string") candidates.push(details)
const combined = candidates.join("\n")
if (!combined) return undefined
const isAutoRetry = AUTO_RETRY_PATTERNS.every((test) => test(combined))
if (isAutoRetry) {
return { signal: combined }
}
return undefined
}
export function containsErrorContent(
parts: Array<{ type?: string; text?: string }> | undefined
): { hasError: boolean; errorMessage?: string } {
if (!parts || parts.length === 0) return { hasError: false }
const errorParts = parts.filter((p) => p.type === "error")
if (errorParts.length > 0) {
const errorMessages = errorParts.map((p) => p.text).filter((text): text is string => typeof text === "string")
const errorMessage = errorMessages.length > 0 ? errorMessages.join("\n") : undefined
return { hasError: true, errorMessage }
}
return { hasError: false }
}
export function isRetryableError(error: unknown, retryOnErrors: number[]): boolean {
const statusCode = extractStatusCode(error, retryOnErrors)
const message = getErrorMessage(error)
const errorType = classifyErrorType(error)
if (errorType === "missing_api_key") {
return true
}
if (errorType === "model_not_found") {
return true
}
if (statusCode && retryOnErrors.includes(statusCode)) {
return true
}
return RETRYABLE_ERROR_PATTERNS.some((pattern) => pattern.test(message))
}
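When the error object carries no numeric `statusCode`/`status` field, `extractStatusCode` above falls back to scanning the message text for one of the configured codes. A condensed sketch of that fallback path:

```typescript
// Condensed from extractStatusCode: build a \b(400|429|...)\b pattern from
// the configured retryable codes and grep the error message for it.
const codes = [400, 429, 503, 529]
const codePattern = new RegExp(`\\b(${codes.join("|")})\\b`)
const statusFromMessage = (message: string): number | undefined => {
  const m = message.toLowerCase().match(codePattern)
  return m ? parseInt(m[1], 10) : undefined
}
console.log(statusFromMessage("Provider returned 429 Too Many Requests")) // 429
console.log(statusFromMessage("connection reset")) // undefined
```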

View File

@@ -1,187 +0,0 @@
import type { HookDeps } from "./types"
import type { AutoRetryHelpers } from "./auto-retry"
import { HOOK_NAME } from "./constants"
import { log } from "../../shared/logger"
import { extractStatusCode, extractErrorName, classifyErrorType, isRetryableError } from "./error-classifier"
import { createFallbackState, prepareFallback } from "./fallback-state"
import { getFallbackModelsForSession } from "./fallback-models"
import { SessionCategoryRegistry } from "../../shared/session-category-registry"
export function createEventHandler(deps: HookDeps, helpers: AutoRetryHelpers) {
const { config, pluginConfig, sessionStates, sessionLastAccess, sessionRetryInFlight, sessionAwaitingFallbackResult, sessionFallbackTimeouts } = deps
const handleSessionCreated = (props: Record<string, unknown> | undefined) => {
const sessionInfo = props?.info as { id?: string; model?: string } | undefined
const sessionID = sessionInfo?.id
const model = sessionInfo?.model
if (sessionID && model) {
log(`[${HOOK_NAME}] Session created with model`, { sessionID, model })
sessionStates.set(sessionID, createFallbackState(model))
sessionLastAccess.set(sessionID, Date.now())
}
}
const handleSessionDeleted = (props: Record<string, unknown> | undefined) => {
const sessionInfo = props?.info as { id?: string } | undefined
const sessionID = sessionInfo?.id
if (sessionID) {
log(`[${HOOK_NAME}] Cleaning up session state`, { sessionID })
sessionStates.delete(sessionID)
sessionLastAccess.delete(sessionID)
sessionRetryInFlight.delete(sessionID)
sessionAwaitingFallbackResult.delete(sessionID)
helpers.clearSessionFallbackTimeout(sessionID)
SessionCategoryRegistry.remove(sessionID)
}
}
const handleSessionStop = async (props: Record<string, unknown> | undefined) => {
const sessionID = props?.sessionID as string | undefined
if (!sessionID) return
helpers.clearSessionFallbackTimeout(sessionID)
if (sessionRetryInFlight.has(sessionID)) {
await helpers.abortSessionRequest(sessionID, "session.stop")
}
sessionRetryInFlight.delete(sessionID)
sessionAwaitingFallbackResult.delete(sessionID)
const state = sessionStates.get(sessionID)
if (state?.pendingFallbackModel) {
state.pendingFallbackModel = undefined
}
log(`[${HOOK_NAME}] Cleared fallback retry state on session.stop`, { sessionID })
}
const handleSessionIdle = (props: Record<string, unknown> | undefined) => {
const sessionID = props?.sessionID as string | undefined
if (!sessionID) return
if (sessionAwaitingFallbackResult.has(sessionID)) {
log(`[${HOOK_NAME}] session.idle while awaiting fallback result; keeping timeout armed`, { sessionID })
return
}
const hadTimeout = sessionFallbackTimeouts.has(sessionID)
helpers.clearSessionFallbackTimeout(sessionID)
sessionRetryInFlight.delete(sessionID)
const state = sessionStates.get(sessionID)
if (state?.pendingFallbackModel) {
state.pendingFallbackModel = undefined
}
if (hadTimeout) {
log(`[${HOOK_NAME}] Cleared fallback timeout after session completion`, { sessionID })
}
}
const handleSessionError = async (props: Record<string, unknown> | undefined) => {
const sessionID = props?.sessionID as string | undefined
const error = props?.error
const agent = props?.agent as string | undefined
if (!sessionID) {
log(`[${HOOK_NAME}] session.error without sessionID, skipping`)
return
}
const resolvedAgent = await helpers.resolveAgentForSessionFromContext(sessionID, agent)
sessionAwaitingFallbackResult.delete(sessionID)
helpers.clearSessionFallbackTimeout(sessionID)
log(`[${HOOK_NAME}] session.error received`, {
sessionID,
agent,
resolvedAgent,
statusCode: extractStatusCode(error, config.retry_on_errors),
errorName: extractErrorName(error),
errorType: classifyErrorType(error),
})
if (!isRetryableError(error, config.retry_on_errors)) {
log(`[${HOOK_NAME}] Error not retryable, skipping fallback`, {
sessionID,
retryable: false,
statusCode: extractStatusCode(error, config.retry_on_errors),
errorName: extractErrorName(error),
errorType: classifyErrorType(error),
})
return
}
let state = sessionStates.get(sessionID)
const fallbackModels = getFallbackModelsForSession(sessionID, resolvedAgent, pluginConfig)
if (fallbackModels.length === 0) {
log(`[${HOOK_NAME}] No fallback models configured`, { sessionID, agent })
return
}
if (!state) {
const currentModel = props?.model as string | undefined
if (currentModel) {
state = createFallbackState(currentModel)
sessionStates.set(sessionID, state)
sessionLastAccess.set(sessionID, Date.now())
} else {
const detectedAgent = resolvedAgent
const agentConfig = detectedAgent
? pluginConfig?.agents?.[detectedAgent as keyof typeof pluginConfig.agents]
: undefined
const agentModel = agentConfig?.model as string | undefined
if (agentModel) {
log(`[${HOOK_NAME}] Derived model from agent config`, { sessionID, agent: detectedAgent, model: agentModel })
state = createFallbackState(agentModel)
sessionStates.set(sessionID, state)
sessionLastAccess.set(sessionID, Date.now())
} else {
log(`[${HOOK_NAME}] No model info available, cannot fallback`, { sessionID })
return
}
}
} else {
sessionLastAccess.set(sessionID, Date.now())
}
const result = prepareFallback(sessionID, state, fallbackModels, config)
if (result.success && config.notify_on_fallback) {
await deps.ctx.client.tui
.showToast({
body: {
title: "Model Fallback",
message: `Switching to ${result.newModel?.split("/").pop() || result.newModel} for next request`,
variant: "warning",
duration: 5000,
},
})
.catch(() => {})
}
if (result.success && result.newModel) {
await helpers.autoRetryWithFallback(sessionID, result.newModel, resolvedAgent, "session.error")
}
if (!result.success) {
log(`[${HOOK_NAME}] Fallback preparation failed`, { sessionID, error: result.error })
}
}
return async ({ event }: { event: { type: string; properties?: unknown } }) => {
if (!config.enabled) return
const props = event.properties as Record<string, unknown> | undefined
if (event.type === "session.created") { handleSessionCreated(props); return }
if (event.type === "session.deleted") { handleSessionDeleted(props); return }
if (event.type === "session.stop") { await handleSessionStop(props); return }
if (event.type === "session.idle") { handleSessionIdle(props); return }
if (event.type === "session.error") { await handleSessionError(props); return }
}
}

View File

@@ -1,69 +0,0 @@
import type { OhMyOpenCodeConfig } from "../../config"
import { AGENT_NAMES, agentPattern } from "./agent-resolver"
import { HOOK_NAME } from "./constants"
import { log } from "../../shared/logger"
import { SessionCategoryRegistry } from "../../shared/session-category-registry"
import { normalizeFallbackModels } from "../../shared/model-resolver"
export function getFallbackModelsForSession(
sessionID: string,
agent: string | undefined,
pluginConfig: OhMyOpenCodeConfig | undefined
): string[] {
if (!pluginConfig) return []
const sessionCategory = SessionCategoryRegistry.get(sessionID)
if (sessionCategory && pluginConfig.categories?.[sessionCategory]) {
const categoryConfig = pluginConfig.categories[sessionCategory]
if (categoryConfig?.fallback_models) {
return normalizeFallbackModels(categoryConfig.fallback_models) ?? []
}
}
const tryGetFallbackFromAgent = (agentName: string): string[] | undefined => {
const agentConfig = pluginConfig.agents?.[agentName as keyof typeof pluginConfig.agents]
if (!agentConfig) return undefined
if (agentConfig?.fallback_models) {
return normalizeFallbackModels(agentConfig.fallback_models)
}
const agentCategory = agentConfig?.category
if (agentCategory && pluginConfig.categories?.[agentCategory]) {
const categoryConfig = pluginConfig.categories[agentCategory]
if (categoryConfig?.fallback_models) {
return normalizeFallbackModels(categoryConfig.fallback_models)
}
}
return undefined
}
if (agent) {
const result = tryGetFallbackFromAgent(agent)
if (result) return result
}
const sessionAgentMatch = sessionID.match(agentPattern)
if (sessionAgentMatch) {
const detectedAgent = sessionAgentMatch[1].toLowerCase()
const result = tryGetFallbackFromAgent(detectedAgent)
if (result) return result
}
const sisyphusFallback = tryGetFallbackFromAgent("sisyphus")
if (sisyphusFallback) {
log(`[${HOOK_NAME}] Using sisyphus fallback models (no agent detected)`, { sessionID })
return sisyphusFallback
}
for (const agentName of AGENT_NAMES) {
const result = tryGetFallbackFromAgent(agentName)
if (result) {
log(`[${HOOK_NAME}] Using ${agentName} fallback models (no agent detected)`, { sessionID })
return result
}
}
return []
}
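`getFallbackModelsForSession` above resolves fallbacks in a fixed precedence: session category, then the explicit agent, then an agent name parsed from the session ID, then the sisyphus defaults, then the first agent with any fallbacks configured. The nullish-coalescing shape of that lookup, condensed (the real function also normalizes each list and follows the agent-to-category indirection; values here are placeholders):

```typescript
// Condensed precedence chain from getFallbackModelsForSession.
const resolve = (
  categoryModels?: string[],
  agentModels?: string[],
  defaults?: string[],
): string[] => categoryModels ?? agentModels ?? defaults ?? []
console.log(resolve(undefined, ["provider/model-a"], ["provider/model-b"]))
// ["provider/model-a"] — agent config wins once category is absent
console.log(resolve()) // [] — nothing configured, no fallback attempted
```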

View File

@@ -1,74 +0,0 @@
import type { FallbackState, FallbackResult } from "./types"
import { HOOK_NAME } from "./constants"
import { log } from "../../shared/logger"
import type { RuntimeFallbackConfig } from "../../config"
export function createFallbackState(originalModel: string): FallbackState {
return {
originalModel,
currentModel: originalModel,
fallbackIndex: -1,
failedModels: new Map<string, number>(),
attemptCount: 0,
pendingFallbackModel: undefined,
}
}
export function isModelInCooldown(model: string, state: FallbackState, cooldownSeconds: number): boolean {
const failedAt = state.failedModels.get(model)
if (failedAt === undefined) return false
const cooldownMs = cooldownSeconds * 1000
return Date.now() - failedAt < cooldownMs
}
export function findNextAvailableFallback(
state: FallbackState,
fallbackModels: string[],
cooldownSeconds: number
): string | undefined {
for (let i = state.fallbackIndex + 1; i < fallbackModels.length; i++) {
const candidate = fallbackModels[i]
if (!isModelInCooldown(candidate, state, cooldownSeconds)) {
return candidate
}
log(`[${HOOK_NAME}] Skipping fallback model in cooldown`, { model: candidate, index: i })
}
return undefined
}
export function prepareFallback(
sessionID: string,
state: FallbackState,
fallbackModels: string[],
config: Required<RuntimeFallbackConfig>
): FallbackResult {
if (state.attemptCount >= config.max_fallback_attempts) {
log(`[${HOOK_NAME}] Max fallback attempts reached`, { sessionID, attempts: state.attemptCount })
return { success: false, error: "Max fallback attempts reached", maxAttemptsReached: true }
}
const nextModel = findNextAvailableFallback(state, fallbackModels, config.cooldown_seconds)
if (!nextModel) {
log(`[${HOOK_NAME}] No available fallback models`, { sessionID })
return { success: false, error: "No available fallback models (all in cooldown or exhausted)" }
}
log(`[${HOOK_NAME}] Preparing fallback`, {
sessionID,
from: state.currentModel,
to: nextModel,
attempt: state.attemptCount + 1,
})
const failedModel = state.currentModel
const now = Date.now()
state.fallbackIndex = fallbackModels.indexOf(nextModel)
state.failedModels.set(failedModel, now)
state.attemptCount++
state.currentModel = nextModel
state.pendingFallbackModel = nextModel
return { success: true, newModel: nextModel }
}
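`isModelInCooldown` above keeps a just-failed model out of rotation for `cooldown_seconds`, and `findNextAvailableFallback` skips any candidate still inside that window. A self-contained sketch (the failure timestamp is simulated 30 seconds in the past; model names are placeholders):

```typescript
// Cooldown window check, as in isModelInCooldown.
const failedModels = new Map<string, number>()
failedModels.set("provider/model-a", Date.now() - 30_000) // failed 30s ago
const inCooldown = (model: string, cooldownSeconds: number): boolean => {
  const failedAt = failedModels.get(model)
  if (failedAt === undefined) return false
  return Date.now() - failedAt < cooldownSeconds * 1000
}
console.log(inCooldown("provider/model-a", 60)) // true: 60s window still open
console.log(inCooldown("provider/model-a", 10)) // false: 10s window elapsed
console.log(inCooldown("provider/model-b", 60)) // false: never failed
```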

View File

@@ -1,67 +0,0 @@
import type { PluginInput } from "@opencode-ai/plugin"
import type { HookDeps, RuntimeFallbackHook, RuntimeFallbackOptions } from "./types"
import { DEFAULT_CONFIG, HOOK_NAME } from "./constants"
import { log } from "../../shared/logger"
import { loadPluginConfig } from "../../plugin-config"
import { createAutoRetryHelpers } from "./auto-retry"
import { createEventHandler } from "./event-handler"
import { createMessageUpdateHandler } from "./message-update-handler"
import { createChatMessageHandler } from "./chat-message-handler"
export function createRuntimeFallbackHook(
ctx: PluginInput,
options?: RuntimeFallbackOptions
): RuntimeFallbackHook {
const config = {
enabled: options?.config?.enabled ?? DEFAULT_CONFIG.enabled,
retry_on_errors: options?.config?.retry_on_errors ?? DEFAULT_CONFIG.retry_on_errors,
max_fallback_attempts: options?.config?.max_fallback_attempts ?? DEFAULT_CONFIG.max_fallback_attempts,
cooldown_seconds: options?.config?.cooldown_seconds ?? DEFAULT_CONFIG.cooldown_seconds,
timeout_seconds: options?.config?.timeout_seconds ?? DEFAULT_CONFIG.timeout_seconds,
notify_on_fallback: options?.config?.notify_on_fallback ?? DEFAULT_CONFIG.notify_on_fallback,
}
let pluginConfig = options?.pluginConfig
if (!pluginConfig) {
try {
pluginConfig = loadPluginConfig(ctx.directory, ctx)
} catch {
log(`[${HOOK_NAME}] Plugin config not available`)
}
}
const deps: HookDeps = {
ctx,
config,
options,
pluginConfig,
sessionStates: new Map(),
sessionLastAccess: new Map(),
sessionRetryInFlight: new Set(),
sessionAwaitingFallbackResult: new Set(),
sessionFallbackTimeouts: new Map(),
}
const helpers = createAutoRetryHelpers(deps)
const baseEventHandler = createEventHandler(deps, helpers)
const messageUpdateHandler = createMessageUpdateHandler(deps, helpers)
const chatMessageHandler = createChatMessageHandler(deps)
const cleanupInterval = setInterval(helpers.cleanupStaleSessions, 5 * 60 * 1000)
cleanupInterval.unref()
const eventHandler = async ({ event }: { event: { type: string; properties?: unknown } }) => {
if (event.type === "message.updated") {
if (!config.enabled) return
const props = event.properties as Record<string, unknown> | undefined
await messageUpdateHandler(props)
return
}
await baseEventHandler({ event })
}
return {
event: eventHandler,
"chat.message": chatMessageHandler,
} as RuntimeFallbackHook
}
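The config block above merges with `??` rather than `||`, so explicit falsy values such as `timeout_seconds: 0` (which disables the session fallback timeout) survive the merge instead of being replaced by the default. A reduced sketch of that pattern:

```typescript
// Reduced version of the config merge in createRuntimeFallbackHook.
// ?? only falls back on null/undefined, so 0 and false are preserved.
const DEFAULTS = { enabled: true, timeout_seconds: 30 }
const mergeConfig = (user?: { enabled?: boolean; timeout_seconds?: number }) => ({
  enabled: user?.enabled ?? DEFAULTS.enabled,
  timeout_seconds: user?.timeout_seconds ?? DEFAULTS.timeout_seconds,
})
console.log(mergeConfig({ timeout_seconds: 0 })) // { enabled: true, timeout_seconds: 0 }
console.log(mergeConfig()) // { enabled: true, timeout_seconds: 30 }
```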

File diff suppressed because it is too large

View File

@@ -1,2 +0,0 @@
export { createRuntimeFallbackHook } from "./hook"
export type { RuntimeFallbackHook, RuntimeFallbackOptions } from "./types"

View File

@@ -1,216 +0,0 @@
import type { HookDeps } from "./types"
import type { AutoRetryHelpers } from "./auto-retry"
import { HOOK_NAME } from "./constants"
import { log } from "../../shared/logger"
import { extractStatusCode, extractErrorName, classifyErrorType, isRetryableError, extractAutoRetrySignal, containsErrorContent } from "./error-classifier"
import { createFallbackState, prepareFallback } from "./fallback-state"
import { getFallbackModelsForSession } from "./fallback-models"
export function hasVisibleAssistantResponse(extractAutoRetrySignalFn: typeof extractAutoRetrySignal) {
return async (
ctx: HookDeps["ctx"],
sessionID: string,
_info: Record<string, unknown> | undefined,
): Promise<boolean> => {
try {
const messagesResp = await ctx.client.session.messages({
path: { id: sessionID },
query: { directory: ctx.directory },
})
const msgs = (messagesResp as {
data?: Array<{
info?: Record<string, unknown>
parts?: Array<{ type?: string; text?: string }>
}>
}).data
if (!msgs || msgs.length === 0) return false
const lastAssistant = [...msgs].reverse().find((m) => m.info?.role === "assistant")
if (!lastAssistant) return false
if (lastAssistant.info?.error) return false
const parts = lastAssistant.parts ??
(lastAssistant.info?.parts as Array<{ type?: string; text?: string }> | undefined)
const textFromParts = (parts ?? [])
.filter((p) => p.type === "text" && typeof p.text === "string")
.map((p) => p.text!.trim())
.filter((text) => text.length > 0)
.join("\n")
if (!textFromParts) return false
if (extractAutoRetrySignalFn({ message: textFromParts })) return false
return true
} catch {
return false
}
}
}
export function createMessageUpdateHandler(deps: HookDeps, helpers: AutoRetryHelpers) {
const { ctx, config, pluginConfig, sessionStates, sessionLastAccess, sessionRetryInFlight, sessionAwaitingFallbackResult } = deps
const checkVisibleResponse = hasVisibleAssistantResponse(extractAutoRetrySignal)
return async (props: Record<string, unknown> | undefined) => {
const info = props?.info as Record<string, unknown> | undefined
const sessionID = info?.sessionID as string | undefined
const retrySignalResult = extractAutoRetrySignal(info)
const retrySignal = retrySignalResult?.signal
const timeoutEnabled = config.timeout_seconds > 0
const parts = props?.parts as Array<{ type?: string; text?: string }> | undefined
const errorContentResult = containsErrorContent(parts)
const error = info?.error ??
(retrySignal && timeoutEnabled ? { name: "ProviderRateLimitError", message: retrySignal } : undefined) ??
(errorContentResult.hasError ? { name: "MessageContentError", message: errorContentResult.errorMessage || "Message contains error content" } : undefined)
const role = info?.role as string | undefined
const model = info?.model as string | undefined
if (sessionID && role === "assistant" && !error) {
      if (!sessionAwaitingFallbackResult.has(sessionID)) {
        return
      }
      const hasVisible = await checkVisibleResponse(ctx, sessionID, info)
      if (!hasVisible) {
        log(`[${HOOK_NAME}] Assistant update observed without visible final response; keeping fallback timeout`, {
          sessionID,
          model,
        })
        return
      }
      sessionAwaitingFallbackResult.delete(sessionID)
      helpers.clearSessionFallbackTimeout(sessionID)
      const state = sessionStates.get(sessionID)
      if (state?.pendingFallbackModel) {
        state.pendingFallbackModel = undefined
      }
      log(`[${HOOK_NAME}] Assistant response observed; cleared fallback timeout`, { sessionID, model })
      return
    }
    if (sessionID && role === "assistant" && error) {
      sessionAwaitingFallbackResult.delete(sessionID)
      if (sessionRetryInFlight.has(sessionID) && !retrySignal) {
        log(`[${HOOK_NAME}] message.updated fallback skipped (retry in flight)`, { sessionID })
        return
      }
      if (retrySignal && sessionRetryInFlight.has(sessionID) && timeoutEnabled) {
        log(`[${HOOK_NAME}] Overriding in-flight retry due to provider auto-retry signal`, {
          sessionID,
          model,
        })
        await helpers.abortSessionRequest(sessionID, "message.updated.retry-signal")
        sessionRetryInFlight.delete(sessionID)
      }
      if (retrySignal && timeoutEnabled) {
        log(`[${HOOK_NAME}] Detected provider auto-retry signal`, { sessionID, model })
      }
      if (!retrySignal) {
        helpers.clearSessionFallbackTimeout(sessionID)
      }
      log(`[${HOOK_NAME}] message.updated with assistant error`, {
        sessionID,
        model,
        statusCode: extractStatusCode(error, config.retry_on_errors),
        errorName: extractErrorName(error),
        errorType: classifyErrorType(error),
      })
      if (!isRetryableError(error, config.retry_on_errors)) {
        log(`[${HOOK_NAME}] message.updated error not retryable, skipping fallback`, {
          sessionID,
          statusCode: extractStatusCode(error, config.retry_on_errors),
          errorName: extractErrorName(error),
          errorType: classifyErrorType(error),
        })
        return
      }
      let state = sessionStates.get(sessionID)
      const agent = info?.agent as string | undefined
      const resolvedAgent = await helpers.resolveAgentForSessionFromContext(sessionID, agent)
      const fallbackModels = getFallbackModelsForSession(sessionID, resolvedAgent, pluginConfig)
      if (fallbackModels.length === 0) {
        return
      }
      if (!state) {
        let initialModel = model
        if (!initialModel) {
          const detectedAgent = resolvedAgent
          const agentConfig = detectedAgent
            ? pluginConfig?.agents?.[detectedAgent as keyof typeof pluginConfig.agents]
            : undefined
          const agentModel = agentConfig?.model as string | undefined
          if (agentModel) {
            log(`[${HOOK_NAME}] Derived model from agent config for message.updated`, {
              sessionID,
              agent: detectedAgent,
              model: agentModel,
            })
            initialModel = agentModel
          }
        }
        if (!initialModel) {
          log(`[${HOOK_NAME}] message.updated missing model info, cannot fallback`, {
            sessionID,
            errorName: extractErrorName(error),
            errorType: classifyErrorType(error),
          })
          return
        }
        state = createFallbackState(initialModel)
        sessionStates.set(sessionID, state)
        sessionLastAccess.set(sessionID, Date.now())
      } else {
        sessionLastAccess.set(sessionID, Date.now())
        if (state.pendingFallbackModel) {
          if (retrySignal && timeoutEnabled) {
            log(`[${HOOK_NAME}] Clearing pending fallback due to provider auto-retry signal`, {
              sessionID,
              pendingFallbackModel: state.pendingFallbackModel,
            })
            state.pendingFallbackModel = undefined
          } else {
            log(`[${HOOK_NAME}] message.updated fallback skipped (pending fallback in progress)`, {
              sessionID,
              pendingFallbackModel: state.pendingFallbackModel,
            })
            return
          }
        }
      }
      const result = prepareFallback(sessionID, state, fallbackModels, config)
      if (result.success && config.notify_on_fallback) {
        await deps.ctx.client.tui
          .showToast({
            body: {
              title: "Model Fallback",
              message: `Switching to ${result.newModel?.split("/").pop() || result.newModel} for next request`,
              variant: "warning",
              duration: 5000,
            },
          })
          .catch(() => {})
      }
      if (result.success && result.newModel) {
        await helpers.autoRetryWithFallback(sessionID, result.newModel, resolvedAgent, "message.updated")
      }
    }
  }
}
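The error branch above gates fallback on `isRetryableError(error, config.retry_on_errors)`, whose implementation is outside this diff. As a hedged sketch of what a status-code-based check like it might look like (the error shape and the meaning of `retry_on_errors` are assumptions, not the actual helpers):

```typescript
// Assumption: errors carry an HTTP-like numeric status, and retry_on_errors
// is a list of retryable status codes. The real helpers are not in this diff.
function extractStatusCode(error: unknown): number | undefined {
  if (typeof error === "object" && error !== null && "status" in error) {
    const status = (error as { status?: unknown }).status
    return typeof status === "number" ? status : undefined
  }
  return undefined
}

function isRetryableError(error: unknown, retryOnErrors: number[]): boolean {
  const status = extractStatusCode(error)
  return status !== undefined && retryOnErrors.includes(status)
}
```

Under this sketch, a 429 with `retry_on_errors: [429, 529]` would trigger the fallback path, while a 400 or a non-object error would be skipped.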


@@ -1,41 +0,0 @@
-import type { PluginInput } from "@opencode-ai/plugin"
-import type { RuntimeFallbackConfig, OhMyOpenCodeConfig } from "../../config"
-
-export interface FallbackState {
-  originalModel: string
-  currentModel: string
-  fallbackIndex: number
-  failedModels: Map<string, number>
-  attemptCount: number
-  pendingFallbackModel?: string
-}
-
-export interface FallbackResult {
-  success: boolean
-  newModel?: string
-  error?: string
-  maxAttemptsReached?: boolean
-}
-
-export interface RuntimeFallbackOptions {
-  config?: RuntimeFallbackConfig
-  pluginConfig?: OhMyOpenCodeConfig
-  session_timeout_ms?: number
-}
-
-export interface RuntimeFallbackHook {
-  event: (input: { event: { type: string; properties?: unknown } }) => Promise<void>
-  "chat.message"?: (input: { sessionID: string; agent?: string; model?: { providerID: string; modelID: string } }, output: { message: { model?: { providerID: string; modelID: string } }; parts?: Array<{ type: string; text?: string }> }) => Promise<void>
-}
-
-export interface HookDeps {
-  ctx: PluginInput
-  config: Required<RuntimeFallbackConfig>
-  options: RuntimeFallbackOptions | undefined
-  pluginConfig: OhMyOpenCodeConfig | undefined
-  sessionStates: Map<string, FallbackState>
-  sessionLastAccess: Map<string, number>
-  sessionRetryInFlight: Set<string>
-  sessionAwaitingFallbackResult: Set<string>
-  sessionFallbackTimeouts: Map<string, ReturnType<typeof setTimeout>>
-}
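For illustration only: a minimal sketch of how a `prepareFallback`-style helper could drive the `FallbackState`/`FallbackResult` shapes from this deleted types file. The real `prepareFallback` takes `(sessionID, state, fallbackModels, config)` and is not shown in this diff, so the signature and logic below are assumptions.

```typescript
// Sketch only: the shapes mirror FallbackState/FallbackResult from the diff,
// but this logic is a guess at the real (unshown) prepareFallback.
interface FallbackState {
  originalModel: string
  currentModel: string
  fallbackIndex: number
  failedModels: Map<string, number>
  attemptCount: number
  pendingFallbackModel?: string
}

interface FallbackResult {
  success: boolean
  newModel?: string
  error?: string
  maxAttemptsReached?: boolean
}

function prepareFallback(state: FallbackState, fallbackModels: string[], maxAttempts: number): FallbackResult {
  if (state.attemptCount >= maxAttempts) {
    return { success: false, maxAttemptsReached: true }
  }
  // Count the failure against the model we are leaving.
  state.failedModels.set(state.currentModel, (state.failedModels.get(state.currentModel) ?? 0) + 1)
  if (state.fallbackIndex >= fallbackModels.length) {
    return { success: false, error: "no fallback models left" }
  }
  const newModel = fallbackModels[state.fallbackIndex]
  state.fallbackIndex += 1
  state.attemptCount += 1
  // The hook clears pendingFallbackModel once a visible response arrives.
  state.pendingFallbackModel = newModel
  return { success: true, newModel }
}
```

This matches the bookkeeping the hook code relies on: a pending model that blocks concurrent fallbacks, and a failure count per model.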


@@ -1,7 +1,6 @@
 declare const require: (name: string) => any
 const { describe, expect, test } = require("bun:test")

 import { extractResumeConfig, resumeSession } from "./resume"
-import { OMO_INTERNAL_INITIATOR_MARKER } from "../../shared/internal-initiator-marker"
 import type { MessageData } from "./types"
 describe("session-recovery resume", () => {
@@ -45,8 +44,5 @@ describe("session-recovery resume", () => {
     // then
     expect(ok).toBe(true)
     expect(promptBody?.tools).toEqual({ question: false, bash: true })
-    expect(Array.isArray(promptBody?.parts)).toBe(true)
-    const firstPart = (promptBody?.parts as Array<{ text?: string }>)?.[0]
-    expect(firstPart?.text).toContain(OMO_INTERNAL_INITIATOR_MARKER)
   })
 })


@@ -1,6 +1,6 @@
 import type { createOpencodeClient } from "@opencode-ai/sdk"
 import type { MessageData, ResumeConfig } from "./types"
-import { createInternalAgentTextPart, resolveInheritedPromptTools } from "../../shared"
+import { resolveInheritedPromptTools } from "../../shared"

 const RECOVERY_RESUME_TEXT = "[session recovered - continuing previous task]"
@@ -30,7 +30,7 @@ export async function resumeSession(client: Client, config: ResumeConfig): Promi
   await client.session.promptAsync({
     path: { id: config.sessionID },
     body: {
-      parts: [createInternalAgentTextPart(RECOVERY_RESUME_TEXT)],
+      parts: [{ type: "text", text: RECOVERY_RESUME_TEXT }],
       agent: config.agent,
       model: config.model,
       ...(inheritedTools ? { tools: inheritedTools } : {}),
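The hunk above inlines the recovery text as a plain `{ type: "text" }` part instead of going through the removed `createInternalAgentTextPart` helper. As a sketch of the resulting prompt body (the `buildResumeBody` helper below is hypothetical, introduced only to show the shape):

```typescript
const RECOVERY_RESUME_TEXT = "[session recovered - continuing previous task]"

// Hypothetical helper, not in the diff: shows the body shape passed to
// client.session.promptAsync after the change.
function buildResumeBody(
  agent: string | undefined,
  model: string | undefined,
  inheritedTools?: Record<string, boolean>,
): { parts: Array<{ type: string; text: string }>; agent?: string; model?: string; tools?: Record<string, boolean> } {
  return {
    parts: [{ type: "text", text: RECOVERY_RESUME_TEXT }],
    agent,
    model,
    // Only include tools when the parent session supplied them.
    ...(inheritedTools ? { tools: inheritedTools } : {}),
  }
}
```

The conditional spread keeps `tools` out of the body entirely when nothing was inherited, rather than sending `tools: undefined`.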
