Compare commits
27 Commits
| SHA1 |
|---|
| 6e5edafeee |
| bfb5d43bc2 |
| 385e8a97b0 |
| 7daabf9617 |
| 5fbcb88a3f |
| daa5f6ee5b |
| 4d66ea9730 |
| 4d4273603a |
| 7b7c14301e |
| e3be656f86 |
| c11cb2e3f1 |
| 195e8dcb17 |
| 284e7f5bc3 |
| 465c9e511f |
| 18d134fa57 |
| 092718f82d |
| 19f504fcfa |
| 49f3be5a1f |
| 6d6102f1ff |
| 1d7e534b92 |
| 17b7dd396e |
| 889d80d0ca |
| 87e229fb62 |
| 78514ec6d4 |
| 1c12925c9e |
| 262f0c3f1f |
| aace1982ec |
@@ -1,7 +1,7 @@
 # PROJECT KNOWLEDGE BASE

-**Generated:** 2025-12-24T17:07:00+09:00
-**Commit:** 0172241
+**Generated:** 2025-12-28T17:15:00+09:00
+**Commit:** f5b74d5
 **Branch:** dev

 ## OVERVIEW
@@ -396,8 +396,8 @@ gh repo star code-yeongyu/oh-my-opencode

 - **Sisyphus** (`anthropic/claude-opus-4-5`): **The default agent.** A powerful AI orchestrator for OpenCode. Plans, delegates, and executes complex tasks using specialized subagents. Emphasizes background task delegation and todo-based workflows. Uses Claude Opus 4.5 with extended thinking (32k token budget) for maximum reasoning capability.
 - **oracle** (`openai/gpt-5.2`): Expert advisor for architecture, code review, and strategy. Leverages GPT-5.2's outstanding logical reasoning and deep analytical ability. Inspired by AmpCode.
-- **librarian** (`anthropic/claude-sonnet-4-5`): Handles multi-repo analysis, documentation lookup, and research into implementation examples. Uses Claude Sonnet 4.5 for deep codebase understanding, GitHub research, and evidence-based answers. Inspired by AmpCode.
-- **explore** (`opencode/grok-code`): Fast codebase exploration and file pattern matching. Claude Code uses Haiku, but we use Grok: it is currently free, extremely fast, and smart enough for file-exploration tasks. Inspired by Claude Code.
+- **librarian** (`anthropic/claude-sonnet-4-5` or `google/gemini-3-flash`): Handles multi-repo analysis, documentation lookup, and research into implementation examples. Uses Gemini 3 Flash when Antigravity auth is configured, otherwise Claude Sonnet 4.5, for deep codebase understanding, GitHub research, and evidence-based answers. Inspired by AmpCode.
+- **explore** (`opencode/grok-code`, `google/gemini-3-flash`, or `anthropic/claude-haiku-4-5`): Fast codebase exploration and file pattern matching. Uses Gemini 3 Flash when Antigravity auth is configured, Haiku when Claude max20 is available, otherwise Grok. Inspired by Claude Code.
 - **frontend-ui-ux-engineer** (`google/gemini-3-pro-preview`): A designer turned developer. Builds stunning UIs. Uses Gemini, which excels at generating beautiful, creative UI code.
 - **document-writer** (`google/gemini-3-pro-preview`): A technical-writing expert. Gemini is a wordsmith and writes prose that flows.
 - **multimodal-looker** (`google/gemini-3-flash`): Specialist agent for interpreting visual content. Analyzes PDFs, images, and diagrams to extract information.
@@ -857,7 +857,7 @@ All LSP configurations and custom settings supported by OpenCode

 | `aggressive_truncation` | `false` | When the token limit is exceeded, aggressively truncates tool outputs to fit within the limit. More aggressive than the default truncation. Falls back to summarize/restore if that is not enough. |
 | `auto_resume` | `false` | Automatically resumes the session after successful recovery from a thinking block error or thinking disabled violation. Extracts the last user message and continues. |
 | `truncate_all_tool_outputs` | `true` | Dynamically truncates all tool outputs based on context-window usage to keep prompts from growing too long. Set to `false` to disable if you need full tool outputs. |
-| `dcp_on_compaction_failure` | `false` | When enabled, DCP (Dynamic Context Pruning) runs only after compaction (summarization) fails, then retries compaction. DCP does not run during normal operation. Enable for smarter recovery when hitting token limits. |
+| `dcp_for_compaction` | `false` | When enabled, DCP (Dynamic Context Pruning) runs first when a token-limit error occurs, and compaction runs afterwards. Once DCP has pruned unneeded context, compaction proceeds immediately. Enable for smarter recovery when hitting token limits. |

 **Warning**: These features are experimental and may cause unexpected behavior. Enable them only if you understand the implications.
@@ -393,8 +393,8 @@ gh repo star code-yeongyu/oh-my-opencode

 - **Sisyphus** (`anthropic/claude-opus-4-5`): **The default agent.** A powerful AI orchestrator for OpenCode. Plans, delegates, and executes complex tasks using specialized subagents. Emphasizes background task delegation and todo-driven workflows. Uses Claude Opus 4.5 with extended thinking (32k budget) for maximum reasoning capability.
 - **oracle** (`openai/gpt-5.2`): Expert advisor for architecture, code review, and strategy. Leverages GPT-5.2's outstanding logical reasoning and deep analysis. Inspired by AmpCode.
-- **librarian** (`anthropic/claude-sonnet-4-5`): Handles multi-repo analysis, documentation lookup, and implementation examples. Uses Claude Sonnet 4.5 for deep codebase understanding, GitHub research, and evidence-based answers. Inspired by AmpCode.
-- **explore** (`opencode/grok-code`): Fast codebase exploration and file pattern matching. Claude Code uses Haiku, but we use Grok: it is currently free, extremely fast, and smart enough for file-exploration work. Inspired by Claude Code.
+- **librarian** (`anthropic/claude-sonnet-4-5` or `google/gemini-3-flash`): Handles multi-repo analysis, documentation lookup, and implementation examples. Uses Gemini 3 Flash when Antigravity auth is configured, otherwise Claude Sonnet 4.5, for deep codebase understanding, GitHub research, and evidence-based answers. Inspired by AmpCode.
+- **explore** (`opencode/grok-code`, `google/gemini-3-flash`, or `anthropic/claude-haiku-4-5`): Fast codebase exploration and file pattern matching. Uses Gemini 3 Flash when Antigravity auth is configured, Haiku when Claude max20 is available, otherwise Grok. Inspired by Claude Code.
 - **frontend-ui-ux-engineer** (`google/gemini-3-pro-preview`): A designer-turned-developer persona. Builds stunning UIs. Uses Gemini, which excels at generating beautiful, creative UI code.
 - **document-writer** (`google/gemini-3-pro-preview`): A technical-writing expert persona. Gemini is a born writer — the prose is superb.
 - **multimodal-looker** (`google/gemini-3-flash`): Specialist agent for interpreting visual content. Analyzes PDFs, images, and diagrams to extract information.
@@ -851,7 +851,7 @@ All LSP configurations and custom settings supported by OpenCode (opencode.js

 | `aggressive_truncation` | `false` | When the token limit is exceeded, aggressively truncates tool outputs to fit within the limit. More aggressive than the default truncation. Falls back to summarize/recover if that is not enough. |
 | `auto_resume` | `false` | Automatically resumes the session after successful recovery from a thinking block error or thinking disabled violation. Extracts the last user message and continues. |
 | `truncate_all_tool_outputs` | `true` | Dynamically truncates all tool outputs based on context-window usage to keep prompts from growing too long. Set to `false` to disable if you need full tool outputs. |
-| `dcp_on_compaction_failure` | `false` | When enabled, DCP (Dynamic Context Pruning) runs only after compaction (summarization) fails, then retries compaction. DCP does not run during normal operation. Enable for smarter recovery when hitting token limits. |
+| `dcp_for_compaction` | `false` | When enabled, DCP (Dynamic Context Pruning) runs first when a token-limit error occurs, followed by compaction. Once DCP has pruned unnecessary context, compaction proceeds immediately. Enable for smarter recovery when hitting token limits. |

 **Warning**: These features are experimental and may cause unexpected behavior. Enable them only if you understand the implications.
@@ -465,8 +465,8 @@ To remove oh-my-opencode:

 - **Sisyphus** (`anthropic/claude-opus-4-5`): **The default agent.** A powerful AI orchestrator for OpenCode. Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Emphasizes background task delegation and todo-driven workflow. Uses Claude Opus 4.5 with extended thinking (32k budget) for maximum reasoning capability.
 - **oracle** (`openai/gpt-5.2`): Architecture, code review, strategy. Uses GPT-5.2 for its stellar logical reasoning and deep analysis. Inspired by AmpCode.
-- **librarian** (`anthropic/claude-sonnet-4-5`): Multi-repo analysis, doc lookup, implementation examples. Uses Claude Sonnet 4.5 for deep codebase understanding and GitHub research with evidence-based answers. Inspired by AmpCode.
-- **explore** (`opencode/grok-code`): Fast codebase exploration and pattern matching. Claude Code uses Haiku; we use Grok—it's free, blazing fast, and plenty smart for file traversal. Inspired by Claude Code.
+- **librarian** (`anthropic/claude-sonnet-4-5` or `google/gemini-3-flash`): Multi-repo analysis, doc lookup, implementation examples. Uses Gemini 3 Flash when Antigravity auth is configured, otherwise Claude Sonnet 4.5 for deep codebase understanding and GitHub research with evidence-based answers. Inspired by AmpCode.
+- **explore** (`opencode/grok-code`, `google/gemini-3-flash`, or `anthropic/claude-haiku-4-5`): Fast codebase exploration and pattern matching. Uses Gemini 3 Flash when Antigravity auth is configured, Haiku when Claude max20 is available, otherwise Grok. Inspired by Claude Code.
 - **frontend-ui-ux-engineer** (`google/gemini-3-pro-high`): A designer turned developer. Builds gorgeous UIs. Gemini excels at creative, beautiful UI code.
 - **document-writer** (`google/gemini-3-flash`): Technical writing expert. Gemini is a wordsmith—writes prose that flows.
 - **multimodal-looker** (`google/gemini-3-flash`): Visual content specialist. Analyzes PDFs, images, diagrams to extract information.
@@ -953,7 +953,7 @@ Opt-in experimental features that may change or be removed in future versions. U

 | `aggressive_truncation` | `false` | When token limit is exceeded, aggressively truncates tool outputs to fit within limits. More aggressive than the default truncation behavior. Falls back to summarize/revert if insufficient. |
 | `auto_resume` | `false` | Automatically resumes session after successful recovery from thinking block errors or thinking disabled violations. Extracts the last user message and continues. |
 | `truncate_all_tool_outputs` | `true` | Dynamically truncates ALL tool outputs based on context window usage to prevent prompts from becoming too long. Disable by setting to `false` if you need full tool outputs. |
-| `dcp_on_compaction_failure` | `false` | When enabled, Dynamic Context Pruning (DCP) runs only after compaction (summarize) fails, then retries compaction. DCP does NOT run during normal operations. Enable this for smarter recovery when hitting token limits. |
+| `dcp_for_compaction` | `false` | When enabled, Dynamic Context Pruning (DCP) runs FIRST when token limit errors occur, before attempting compaction. DCP prunes redundant context, then compaction runs immediately. Enable this for smarter recovery when hitting token limits. |

 **Warning**: These features are experimental and may cause unexpected behavior. Enable only if you understand the implications.
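If these flags live in an `experimental` block of the plugin configuration, as the table suggests, a recovery-oriented setup might be sketched as follows. The object shape below is an assumption inferred from the flag names in the table, not a verified schema — check the actual config schema before copying it:

```typescript
// Hypothetical experimental block, mirroring the flags documented above.
// Field names come from the table; the surrounding structure is assumed.
const experimental = {
  aggressive_truncation: false,    // keep the default truncation behavior
  auto_resume: false,              // do not auto-resume after thinking-block recovery
  truncate_all_tool_outputs: true, // dynamic truncation stays on (the default)
  dcp_for_compaction: true,        // run DCP first on token-limit errors
}

// Collect the flags that are switched on.
const enabled = Object.entries(experimental)
  .filter(([, value]) => value)
  .map(([key]) => key)
```

Note that `dcp_for_compaction` and `dcp_on_compaction_failure` describe opposite orderings of DCP relative to compaction, so enabling both at once is unlikely to be meaningful.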
@@ -404,8 +404,8 @@ gh repo star code-yeongyu/oh-my-opencode

 - **Sisyphus** (`anthropic/claude-opus-4-5`): **The default agent.** A powerful AI orchestrator built for OpenCode. Directs specialized subagents to get complex tasks done. Focuses on background task delegation and todo-driven workflows. Runs Claude Opus 4.5 with extended thinking (32k token budget) for maximum reasoning power.
 - **oracle** (`openai/gpt-5.2`): Architect, code reviewer, strategist. GPT-5.2's logical reasoning and deep analysis are the real deal. A nod to AmpCode.
-- **librarian** (`anthropic/claude-sonnet-4-5`): Multi-repo analysis, doc lookup, example hunting. Claude Sonnet 4.5 digs deep into codebases and GitHub, and its answers always come with evidence. A nod to AmpCode.
-- **explore** (`opencode/grok-code`): Blazing-fast codebase scanning and pattern matching. Claude Code uses Haiku; we use Grok — free, fast, and plenty good for scanning files. A nod to Claude Code.
+- **librarian** (`anthropic/claude-sonnet-4-5` or `google/gemini-3-flash`): Multi-repo analysis, doc lookup, example hunting. Uses Gemini 3 Flash when Antigravity auth is configured, otherwise Claude Sonnet 4.5, to dig deep into codebases and GitHub with evidence-backed answers. A nod to AmpCode.
+- **explore** (`opencode/grok-code`, `google/gemini-3-flash`, or `anthropic/claude-haiku-4-5`): Blazing-fast codebase scanning and pattern matching. Uses Gemini 3 Flash when Antigravity auth is configured, Haiku when Claude max20 is available, otherwise Grok. A nod to Claude Code.
 - **frontend-ui-ux-engineer** (`google/gemini-3-pro-preview`): A designer turned programmer. The UIs are genuinely gorgeous. Gemini is a natural at creative, beautiful UI code.
 - **document-writer** (`google/gemini-3-pro-preview`): Technical writing expert. Gemini has a way with words — everything it writes reads smoothly.
 - **multimodal-looker** (`google/gemini-3-flash`): Visual content expert. PDFs, images, diagrams — one look and it pulls out what's inside.
@@ -857,7 +857,7 @@ Oh My OpenCode gives you refactoring tools (rename, code actions).

 | `aggressive_truncation` | `false` | When the token limit is exceeded, aggressively truncates tool outputs to fit. More aggressive than the default truncation. Falls back to summarize/restore if that is not enough. |
 | `auto_resume` | `false` | Automatically resumes the session after successful recovery from a thinking block error or thinking disabled violation. Picks up from the last user message and continues. |
 | `truncate_all_tool_outputs` | `true` | Dynamically truncates all tool outputs based on context-window usage to keep prompts from growing too long. Set to `false` to disable if you need full tool outputs. |
-| `dcp_on_compaction_failure` | `false` | When enabled, DCP (Dynamic Context Pruning) runs only after compaction (summarization) fails, then retries compaction. DCP does not run otherwise. Enable for smarter recovery when hitting token limits. |
+| `dcp_for_compaction` | `false` | When enabled, DCP (Dynamic Context Pruning) runs first when a token-limit error occurs, then compaction follows. Once DCP clears out unneeded context, compaction proceeds immediately. Enable for smarter recovery when hitting token limits. |

 **Warning**: These features are experimental and may cause unexpected behavior. Enable them only if you understand the impact.
@@ -64,6 +64,15 @@
         ]
       }
     },
+    "disabled_commands": {
+      "type": "array",
+      "items": {
+        "type": "string",
+        "enum": [
+          "init-deep"
+        ]
+      }
+    },
     "agents": {
       "type": "object",
       "properties": {
@@ -1375,6 +1384,14 @@
       }
     }
   },
+  "comment_checker": {
+    "type": "object",
+    "properties": {
+      "custom_prompt": {
+        "type": "string"
+      }
+    }
+  },
   "experimental": {
     "type": "object",
     "properties": {
@@ -1487,7 +1504,7 @@
       }
     }
   },
-  "dcp_on_compaction_failure": {
+  "dcp_for_compaction": {
     "type": "boolean"
   }
 }
4
bun.lock
@@ -8,7 +8,7 @@
     "@ast-grep/cli": "^0.40.0",
     "@ast-grep/napi": "^0.40.0",
     "@clack/prompts": "^0.11.0",
-    "@code-yeongyu/comment-checker": "^0.6.0",
+    "@code-yeongyu/comment-checker": "^0.6.1",
     "@openauthjs/openauth": "^0.4.3",
     "@opencode-ai/plugin": "^1.0.162",
     "@opencode-ai/sdk": "^1.0.162",
@@ -73,7 +73,7 @@

     "@clack/prompts": ["@clack/prompts@0.11.0", "", { "dependencies": { "@clack/core": "0.5.0", "picocolors": "^1.0.0", "sisteransi": "^1.0.5" } }, "sha512-pMN5FcrEw9hUkZA4f+zLlzivQSeQf5dRGJjSUbvVYDLvpKCdQx5OaknvKzgbtXOizhP+SJJJjqEbOe55uKKfAw=="],

-    "@code-yeongyu/comment-checker": ["@code-yeongyu/comment-checker@0.6.0", "", { "os": [ "linux", "win32", "darwin", ], "cpu": [ "x64", "arm64", ], "bin": { "comment-checker": "bin/comment-checker" } }, "sha512-VtDPrhbUJcb5BIS18VMcY/N/xSLbMr6dpU9MO1NYQyEDhI4pSIx07K4gOlCutG/nHVCjO+HEarn8rttODP+5UA=="],
+    "@code-yeongyu/comment-checker": ["@code-yeongyu/comment-checker@0.6.1", "", { "os": [ "linux", "win32", "darwin", ], "cpu": [ "x64", "arm64", ], "bin": { "comment-checker": "bin/comment-checker" } }, "sha512-BBremX+Y5aW8sTzlhHrLsKParupYkPOVUYmq9STrlWvBvfAme6w5IWuZCLl6nHIQScRDdvGdrAjPycJC86EZFA=="],

     "@openauthjs/openauth": ["@openauthjs/openauth@0.4.3", "", { "dependencies": { "@standard-schema/spec": "1.0.0-beta.3", "aws4fetch": "1.0.20", "jose": "5.9.6" }, "peerDependencies": { "arctic": "^2.2.2", "hono": "^4.0.0" } }, "sha512-RlnjqvHzqcbFVymEwhlUEuac4utA5h4nhSK/i2szZuQmxTIqbGUxZ+nM+avM+VV4Ing+/ZaNLKILoXS3yrkOOw=="],
@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode",
-  "version": "2.6.0",
+  "version": "2.7.0",
   "description": "OpenCode plugin - custom agents (oracle, librarian) and enhanced features",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -53,7 +53,7 @@
     "@ast-grep/cli": "^0.40.0",
     "@ast-grep/napi": "^0.40.0",
     "@clack/prompts": "^0.11.0",
-    "@code-yeongyu/comment-checker": "^0.6.0",
+    "@code-yeongyu/comment-checker": "^0.6.1",
     "@openauthjs/openauth": "^0.4.3",
     "@opencode-ai/plugin": "^1.0.162",
     "@opencode-ai/sdk": "^1.0.162",
@@ -55,6 +55,14 @@
       "created_at": "2025-12-27T14:49:05Z",
       "repoId": 1108837393,
       "pullRequestNo": 281
+    },
+    {
+      "name": "devxoul",
+      "id": 931655,
+      "comment_id": 3694098760,
+      "created_at": "2025-12-27T17:05:50Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 288
     }
   ]
 }
89
src/agents/AGENTS.md
Normal file
@@ -0,0 +1,89 @@
# AGENTS KNOWLEDGE BASE

## OVERVIEW

AI agent definitions for multi-model orchestration. 7 specialized agents: Sisyphus (orchestrator), oracle (strategy), librarian (research), explore (grep), frontend-ui-ux-engineer, document-writer, multimodal-looker.

## STRUCTURE

```
agents/
├── sisyphus.ts                  # Primary orchestrator (Claude Opus 4.5)
├── oracle.ts                    # Strategic advisor (GPT-5.2)
├── librarian.ts                 # Multi-repo research (Claude Sonnet 4.5)
├── explore.ts                   # Fast codebase grep (Grok Code)
├── frontend-ui-ux-engineer.ts   # UI generation (Gemini 3 Pro)
├── document-writer.ts           # Technical docs (Gemini 3 Flash)
├── multimodal-looker.ts         # PDF/image analysis (Gemini 3 Flash)
├── build-prompt.ts              # Shared build agent prompt
├── plan-prompt.ts               # Shared plan agent prompt
├── types.ts                     # AgentModelConfig interface
├── utils.ts                     # createBuiltinAgents(), getAgentName()
└── index.ts                     # builtinAgents export
```

## AGENT MODELS

| Agent | Default Model | Fallback | Purpose |
|-------|---------------|----------|---------|
| Sisyphus | anthropic/claude-opus-4-5 | - | Primary orchestrator with extended thinking |
| oracle | openai/gpt-5.2 | - | Architecture, debugging, code review |
| librarian | anthropic/claude-sonnet-4-5 | google/gemini-3-flash | Docs, OSS research, GitHub examples |
| explore | opencode/grok-code | google/gemini-3-flash, anthropic/claude-haiku-4-5 | Fast contextual grep |
| frontend-ui-ux-engineer | google/gemini-3-pro-preview | - | UI/UX code generation |
| document-writer | google/gemini-3-pro-preview | - | Technical writing |
| multimodal-looker | google/gemini-3-flash | - | PDF/image analysis |

## HOW TO ADD AN AGENT

1. Create `src/agents/my-agent.ts`:

```typescript
import type { AgentConfig } from "@opencode-ai/sdk"

export const myAgent: AgentConfig = {
  model: "provider/model-name",
  temperature: 0.1,
  system: "Agent system prompt...",
  tools: { include: ["tool1", "tool2"] }, // or exclude: [...]
}
```

2. Add to `builtinAgents` in `src/agents/index.ts`
3. Update `types.ts` if adding new config options

## AGENT CONFIG OPTIONS

| Option | Type | Description |
|--------|------|-------------|
| model | string | Model identifier (provider/model-name) |
| temperature | number | 0.0-1.0, most use 0.1 for consistency |
| system | string | System prompt (can be multiline template literal) |
| tools | object | `{ include: [...] }` or `{ exclude: [...] }` |
| top_p | number | Optional nucleus sampling |
| maxTokens | number | Optional max output tokens |

## MODEL FALLBACK LOGIC

`createBuiltinAgents()` in utils.ts handles model fallback:

1. Check user config override (`agents.{name}.model`)
2. Check installer settings (claude max20, gemini antigravity)
3. Use default model

**Fallback order for explore**:
- If gemini antigravity enabled → `google/gemini-3-flash`
- If claude max20 enabled → `anthropic/claude-haiku-4-5`
- Default → `opencode/grok-code` (free)
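The precedence above can be condensed into a small resolver. This is an illustrative sketch only — `createBuiltinAgents()` in utils.ts is the real implementation, and the flag names below are invented for the example:

```typescript
// Illustrative model resolution for the explore agent.
// Step 1: user override; Step 2: installer settings; Step 3: free default.
interface InstallerFlags {
  geminiAntigravity: boolean // hypothetical name for the antigravity setting
  claudeMax20: boolean       // hypothetical name for the max20 setting
}

function resolveExploreModel(userOverride: string | undefined, flags: InstallerFlags): string {
  if (userOverride) return userOverride                       // user config wins
  if (flags.geminiAntigravity) return "google/gemini-3-flash" // antigravity enabled
  if (flags.claudeMax20) return "anthropic/claude-haiku-4-5"  // max20 enabled
  return "opencode/grok-code"                                 // free default
}
```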
## ANTI-PATTERNS (AGENTS)

- **High temperature**: Don't use >0.3 for code-related agents
- **Broad tool access**: Prefer explicit `include` over unrestricted access
- **Monolithic prompts**: Keep prompts focused; delegate to specialized agents
- **Missing fallbacks**: Consider free/cheap fallbacks for rate-limited models

## SHARED PROMPTS

- **build-prompt.ts**: Base prompt for build agents (OpenCode default + Sisyphus variants)
- **plan-prompt.ts**: Base prompt for plan agents (Planner-Sisyphus)

Used by `src/index.ts` when creating Builder-Sisyphus and Planner-Sisyphus variants.
@@ -17,16 +17,15 @@
  * Debug logging available via ANTIGRAVITY_DEBUG=1 environment variable.
  */

-import { ANTIGRAVITY_ENDPOINT_FALLBACKS, ANTIGRAVITY_DEFAULT_PROJECT_ID } from "./constants"
-import { fetchProjectContext, clearProjectContextCache } from "./project"
-import { isTokenExpired, refreshAccessToken, parseStoredToken, formatTokenForStorage } from "./token"
+import { ANTIGRAVITY_ENDPOINT_FALLBACKS } from "./constants"
+import { fetchProjectContext, clearProjectContextCache, invalidateProjectContextByRefreshToken } from "./project"
+import { isTokenExpired, refreshAccessToken, parseStoredToken, formatTokenForStorage, AntigravityTokenRefreshError } from "./token"
 import { transformRequest } from "./request"
 import { convertRequestBody, hasOpenAIMessages } from "./message-converter"
 import {
   transformResponse,
   transformStreamingResponse,
   isStreamingResponse,
   extractSignatureFromSsePayload,
 } from "./response"
 import { normalizeToolsForGemini, type OpenAITool } from "./tools"
 import { extractThinkingBlocks, shouldIncludeThinking, transformResponseThinking } from "./thinking"
@@ -391,7 +390,6 @@ export function createAntigravityFetch(
     try {
       const newTokens = await refreshAccessToken(refreshParts.refreshToken, clientId, clientSecret)

-      // Update cached tokens
       cachedTokens = {
         type: "antigravity",
         access_token: newTokens.access_token,
@@ -400,10 +398,8 @@ export function createAntigravityFetch(
         timestamp: Date.now(),
       }

-      // Clear project context cache on token refresh
-      clearProjectContextCache()

       // Format and save new tokens
       const formattedRefresh = formatTokenForStorage(
         newTokens.refresh_token,
         refreshParts.projectId || "",
@@ -418,6 +414,16 @@ export function createAntigravityFetch(

       debugLog("Token refreshed successfully")
     } catch (error) {
+      if (error instanceof AntigravityTokenRefreshError) {
+        if (error.isInvalidGrant) {
+          debugLog(`[REFRESH] Token revoked (invalid_grant), clearing caches`)
+          invalidateProjectContextByRefreshToken(refreshParts.refreshToken)
+          clearProjectContextCache()
+        }
+        throw new Error(
+          `Antigravity: Token refresh failed: ${error.description || error.message}${error.code ? ` (${error.code})` : ""}`
+        )
+      }
       throw new Error(
         `Antigravity: Token refresh failed: ${error instanceof Error ? error.message : "Unknown error"}`
       )
@@ -535,11 +541,33 @@ export function createAntigravityFetch(
           debugLog("[401] Token refreshed, retrying request...")
           return executeWithEndpoints()
         } catch (refreshError) {
+          if (refreshError instanceof AntigravityTokenRefreshError) {
+            if (refreshError.isInvalidGrant) {
+              debugLog(`[401] Token revoked (invalid_grant), clearing caches`)
+              invalidateProjectContextByRefreshToken(refreshParts.refreshToken)
+              clearProjectContextCache()
+            }
+            debugLog(`[401] Token refresh failed: ${refreshError.description || refreshError.message}`)
+            return new Response(
+              JSON.stringify({
+                error: {
+                  message: refreshError.description || refreshError.message,
+                  type: refreshError.isInvalidGrant ? "token_revoked" : "unauthorized",
+                  code: refreshError.code || "token_refresh_failed",
+                },
+              }),
+              {
+                status: 401,
+                statusText: "Unauthorized",
+                headers: { "Content-Type": "application/json" },
+              }
+            )
+          }
           debugLog(`[401] Token refresh failed: ${refreshError instanceof Error ? refreshError.message : "Unknown error"}`)
           return new Response(
             JSON.stringify({
               error: {
-                message: `Token refresh failed: ${refreshError instanceof Error ? refreshError.message : "Unknown error"}`,
+                message: refreshError instanceof Error ? refreshError.message : "Unknown error",
                 type: "unauthorized",
                 code: "token_refresh_failed",
               },
@@ -267,3 +267,8 @@ export function clearProjectContextCache(accessToken?: string): void {
     projectContextCache.clear()
   }
 }
+
+export function invalidateProjectContextByRefreshToken(_refreshToken: string): void {
+  projectContextCache.clear()
+  debugLog(`[invalidateProjectContextByRefreshToken] Cleared all project context cache due to refresh token invalidation`)
+}
@@ -1,8 +1,3 @@
-/**
- * Antigravity token management utilities.
- * Handles token expiration checking, refresh, and storage format parsing.
- */
-
 import {
   ANTIGRAVITY_CLIENT_ID,
   ANTIGRAVITY_CLIENT_SECRET,
@@ -13,33 +8,86 @@ import type {
   AntigravityRefreshParts,
   AntigravityTokenExchangeResult,
   AntigravityTokens,
+  OAuthErrorPayload,
+  ParsedOAuthError,
 } from "./types"

-/**
- * Check if the access token is expired.
- * Includes a 60-second safety buffer to refresh before actual expiration.
- *
- * @param tokens - The Antigravity tokens to check
- * @returns true if the token is expired or will expire within the buffer period
- */
-export function isTokenExpired(tokens: AntigravityTokens): boolean {
-  // Calculate when the token expires (timestamp + expires_in in ms)
-  // timestamp is in milliseconds, expires_in is in seconds
-  const expirationTime = tokens.timestamp + tokens.expires_in * 1000
-
-  // Check if current time is past (expiration - buffer)
-  return Date.now() >= expirationTime - ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
-}
+export class AntigravityTokenRefreshError extends Error {
+  code?: string
+  description?: string
+  status: number
+  statusText: string
+  responseBody?: string
+
+  constructor(options: {
+    message: string
+    code?: string
+    description?: string
+    status: number
+    statusText: string
+    responseBody?: string
+  }) {
+    super(options.message)
+    this.name = "AntigravityTokenRefreshError"
+    this.code = options.code
+    this.description = options.description
+    this.status = options.status
+    this.statusText = options.statusText
+    this.responseBody = options.responseBody
+  }
+
+  get isInvalidGrant(): boolean {
+    return this.code === "invalid_grant"
+  }
+
+  get isNetworkError(): boolean {
+    return this.status === 0
+  }
+}
+
+function parseOAuthErrorPayload(text: string | undefined): ParsedOAuthError {
+  if (!text) {
+    return {}
+  }
+
+  try {
+    const payload = JSON.parse(text) as OAuthErrorPayload
+    let code: string | undefined
+
+    if (typeof payload.error === "string") {
+      code = payload.error
+    } else if (payload.error && typeof payload.error === "object") {
+      code = payload.error.status ?? payload.error.code
+    }
+
+    return {
+      code,
+      description: payload.error_description,
+    }
+  } catch {
+    return { description: text }
+  }
+}
+
+export function isTokenExpired(tokens: AntigravityTokens): boolean {
+  const expirationTime = tokens.timestamp + tokens.expires_in * 1000
+  return Date.now() >= expirationTime - ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
+}

-/**
- * Refresh an access token using a refresh token.
- * Exchanges the refresh token for a new access token via Google's OAuth endpoint.
- *
- * @param refreshToken - The refresh token to use
- * @param clientId - Optional custom client ID (defaults to ANTIGRAVITY_CLIENT_ID)
- * @param clientSecret - Optional custom client secret (defaults to ANTIGRAVITY_CLIENT_SECRET)
- * @returns Token exchange result with new access token, or throws on error
- */
+const MAX_REFRESH_RETRIES = 3
+const INITIAL_RETRY_DELAY_MS = 1000
+
+function calculateRetryDelay(attempt: number): number {
+  return Math.min(INITIAL_RETRY_DELAY_MS * Math.pow(2, attempt), 10000)
+}
+
+function isRetryableError(status: number): boolean {
+  if (status === 0) return true
+  if (status === 429) return true
+  if (status >= 500 && status < 600) return true
+  return false
+}
+
 export async function refreshAccessToken(
   refreshToken: string,
   clientId: string = ANTIGRAVITY_CLIENT_ID,
@@ -52,35 +100,81 @@ export async function refreshAccessToken(
     client_secret: clientSecret,
   })

-  const response = await fetch(GOOGLE_TOKEN_URL, {
-    method: "POST",
-    headers: {
-      "Content-Type": "application/x-www-form-urlencoded",
-    },
-    body: params,
-  })
+  let lastError: AntigravityTokenRefreshError | undefined
+
+  for (let attempt = 0; attempt <= MAX_REFRESH_RETRIES; attempt++) {
+    try {
+      const response = await fetch(GOOGLE_TOKEN_URL, {
+        method: "POST",
+        headers: {
+          "Content-Type": "application/x-www-form-urlencoded",
+        },
+        body: params,
+      })
+
+      if (response.ok) {
+        const data = (await response.json()) as {
+          access_token: string
+          refresh_token?: string
+          expires_in: number
+          token_type: string
+        }
+
+        return {
+          access_token: data.access_token,
+          refresh_token: data.refresh_token || refreshToken,
+          expires_in: data.expires_in,
+          token_type: data.token_type,
+        }
+      }
+
+      const responseBody = await response.text().catch(() => undefined)
+      const parsed = parseOAuthErrorPayload(responseBody)
+
+      lastError = new AntigravityTokenRefreshError({
+        message: parsed.description || `Token refresh failed: ${response.status} ${response.statusText}`,
+        code: parsed.code,
+        description: parsed.description,
+        status: response.status,
+        statusText: response.statusText,
+        responseBody,
+      })
+
+      if (parsed.code === "invalid_grant") {
+        throw lastError
+      }
+
+      if (!isRetryableError(response.status)) {
+        throw lastError
+      }
+
+      if (attempt < MAX_REFRESH_RETRIES) {
+        const delay = calculateRetryDelay(attempt)
+        await new Promise((resolve) => setTimeout(resolve, delay))
+      }
+    } catch (error) {
+      if (error instanceof AntigravityTokenRefreshError) {
+        throw error
+      }
+
+      lastError = new AntigravityTokenRefreshError({
+        message: error instanceof Error ? error.message : "Network error during token refresh",
+        status: 0,
+        statusText: "Network Error",
+      })
+
+      if (attempt < MAX_REFRESH_RETRIES) {
+        const delay = calculateRetryDelay(attempt)
+        await new Promise((resolve) => setTimeout(resolve, delay))
+      }
+    }
+  }
+
+  throw lastError || new AntigravityTokenRefreshError({
+    message: "Token refresh failed after all retries",
+    status: 0,
+    statusText: "Max Retries Exceeded",
+  })

-  if (!response.ok) {
-    const errorText = await response.text().catch(() => "Unknown error")
-    throw new Error(
-      `Token refresh failed: ${response.status} ${response.statusText} - ${errorText}`
-    )
-  }
-
-  const data = (await response.json()) as {
-    access_token: string
-    refresh_token?: string
-    expires_in: number
-    token_type: string
-  }
-
-  return {
-    access_token: data.access_token,
-    // Google may return a new refresh token, fall back to the original
-    refresh_token: data.refresh_token || refreshToken,
-    expires_in: data.expires_in,
-    token_type: data.token_type,
-  }
 }

 /**
@@ -194,3 +194,20 @@ export interface AntigravityRefreshParts {
   projectId?: string
   managedProjectId?: string
 }
+
+/**
+ * OAuth error payload from Google
+ * Google returns errors in multiple formats, this handles all of them
+ */
+export interface OAuthErrorPayload {
+  error?: string | { status?: string; code?: string; message?: string }
+  error_description?: string
+}
+
+/**
+ * Parsed OAuth error with normalized fields
+ */
+export interface ParsedOAuthError {
+  code?: string
+  description?: string
+}
@@ -146,9 +146,16 @@ export function generateOmoConfig(installConfig: InstallConfig): Record<string,

   if (!installConfig.hasClaude) {
     agents["Sisyphus"] = { model: "opencode/big-pickle" }
   }

   if (installConfig.hasGemini) {
     agents["librarian"] = { model: "google/gemini-3-flash" }
     agents["explore"] = { model: "google/gemini-3-flash" }
-  } else {
-    agents["librarian"] = { model: "opencode/big-pickle" }
+  } else if (installConfig.hasClaude && installConfig.isMax20) {
+    agents["explore"] = { model: "anthropic/claude-haiku-4-5" }
+  } else if (!installConfig.isMax20) {
+    agents["librarian"] = { model: "opencode/big-pickle" }
     agents["explore"] = { model: "opencode/big-pickle" }
   }

   if (!installConfig.hasChatGPT) {
@@ -5,6 +5,7 @@ export {
  McpNameSchema,
  AgentNameSchema,
  HookNameSchema,
  BuiltinCommandNameSchema,
  SisyphusAgentConfigSchema,
  ExperimentalConfigSchema,
} from "./schema"
@@ -16,6 +17,7 @@ export type {
  McpName,
  AgentName,
  HookName,
  BuiltinCommandName,
  SisyphusAgentConfig,
  ExperimentalConfig,
  DynamicContextPruningConfig,
@@ -67,6 +67,10 @@ export const HookNameSchema = z.enum([
  "thinking-block-validator",
])

export const BuiltinCommandNameSchema = z.enum([
  "init-deep",
])

export const AgentOverrideConfigSchema = z.object({
  model: z.string().optional(),
  temperature: z.number().min(0).max(2).optional(),
@@ -115,6 +119,11 @@ export const SisyphusAgentConfigSchema = z.object({
  replace_plan: z.boolean().optional(),
})

export const CommentCheckerConfigSchema = z.object({
  /** Custom prompt to replace the default warning message. Use {{comments}} placeholder for detected comments XML. */
  custom_prompt: z.string().optional(),
})

export const DynamicContextPruningConfigSchema = z.object({
  /** Enable dynamic context pruning (default: false) */
  enabled: z.boolean().default(false),
@@ -162,8 +171,8 @@ export const ExperimentalConfigSchema = z.object({
  truncate_all_tool_outputs: z.boolean().default(true),
  /** Dynamic context pruning configuration */
  dynamic_context_pruning: DynamicContextPruningConfigSchema.optional(),
  /** Run DCP only when compaction (summarize) fails, then retry compaction (default: false) */
  dcp_on_compaction_failure: z.boolean().optional(),
  /** Enable DCP (Dynamic Context Pruning) for compaction - runs first when token limit exceeded (default: false) */
  dcp_for_compaction: z.boolean().optional(),
})

export const OhMyOpenCodeConfigSchema = z.object({
@@ -171,10 +180,12 @@ export const OhMyOpenCodeConfigSchema = z.object({
  disabled_mcps: z.array(McpNameSchema).optional(),
  disabled_agents: z.array(BuiltinAgentNameSchema).optional(),
  disabled_hooks: z.array(HookNameSchema).optional(),
  disabled_commands: z.array(BuiltinCommandNameSchema).optional(),
  agents: AgentOverridesSchema.optional(),
  claude_code: ClaudeCodeConfigSchema.optional(),
  google_auth: z.boolean().optional(),
  sisyphus_agent: SisyphusAgentConfigSchema.optional(),
  comment_checker: CommentCheckerConfigSchema.optional(),
  experimental: ExperimentalConfigSchema.optional(),
  auto_update: z.boolean().optional(),
})
@@ -184,7 +195,9 @@ export type AgentOverrideConfig = z.infer<typeof AgentOverrideConfigSchema>
export type AgentOverrides = z.infer<typeof AgentOverridesSchema>
export type AgentName = z.infer<typeof AgentNameSchema>
export type HookName = z.infer<typeof HookNameSchema>
export type BuiltinCommandName = z.infer<typeof BuiltinCommandNameSchema>
export type SisyphusAgentConfig = z.infer<typeof SisyphusAgentConfigSchema>
export type CommentCheckerConfig = z.infer<typeof CommentCheckerConfigSchema>
export type ExperimentalConfig = z.infer<typeof ExperimentalConfigSchema>
export type DynamicContextPruningConfig = z.infer<typeof DynamicContextPruningConfigSchema>
@@ -12,10 +12,12 @@ features/
│   ├── manager.ts                 # Task lifecycle, notifications
│   ├── manager.test.ts
│   └── types.ts
├── builtin-commands/              # Built-in slash command definitions
├── claude-code-agent-loader/      # Load agents from ~/.claude/agents/*.md
├── claude-code-command-loader/    # Load commands from ~/.claude/commands/*.md
├── claude-code-mcp-loader/        # Load MCPs from .mcp.json
│   └── env-expander.ts            # ${VAR} expansion
├── claude-code-plugin-loader/     # Load external plugins from installed_plugins.json
├── claude-code-session-state/     # Session state persistence
├── claude-code-skill-loader/      # Load skills from ~/.claude/skills/*/SKILL.md
└── hook-message-injector/         # Inject messages into conversation
@@ -325,6 +325,7 @@ export class BackgroundManager {

    log("[background-agent] Sending notification to parent session:", { parentSessionID: task.parentSessionID })

    const taskId = task.id
    setTimeout(async () => {
      try {
        const messageDir = getMessageDir(task.parentSessionID)
@@ -344,10 +345,13 @@ export class BackgroundManager {
          },
          query: { directory: this.directory },
        })
        this.clearNotificationsForTask(taskId)
        log("[background-agent] Successfully sent prompt to parent session:", { parentSessionID: task.parentSessionID })
      } catch (error) {
        log("[background-agent] prompt failed:", String(error))
      } finally {
        this.tasks.delete(taskId)
        log("[background-agent] Removed completed task from memory:", taskId)
      }
    }, 200)
  }
35 src/features/builtin-commands/commands.ts Normal file
@@ -0,0 +1,35 @@
import type { CommandDefinition } from "../claude-code-command-loader"
import type { BuiltinCommandName, BuiltinCommands } from "./types"
import { INIT_DEEP_TEMPLATE } from "./templates/init-deep"

const BUILTIN_COMMAND_DEFINITIONS: Record<BuiltinCommandName, Omit<CommandDefinition, "name">> = {
  "init-deep": {
    description: "(builtin) Initialize hierarchical AGENTS.md knowledge base",
    template: `<command-instruction>
${INIT_DEEP_TEMPLATE}
</command-instruction>

<user-request>
$ARGUMENTS
</user-request>`,
    argumentHint: "[--create-new] [--max-depth=N]",
  },
}

export function loadBuiltinCommands(
  disabledCommands?: BuiltinCommandName[]
): BuiltinCommands {
  const disabled = new Set(disabledCommands ?? [])
  const commands: BuiltinCommands = {}

  for (const [name, definition] of Object.entries(BUILTIN_COMMAND_DEFINITIONS)) {
    if (!disabled.has(name as BuiltinCommandName)) {
      commands[name] = {
        name,
        ...definition,
      }
    }
  }

  return commands
}
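`loadBuiltinCommands` implements an opt-out registry: every builtin is included unless its name appears in the disabled list. A self-contained illustration of the same pattern, with the types trimmed down from the actual module (only `name` and `description` kept):

```typescript
// Simplified from the module above: name → definition map, with opt-out filtering.
type CommandDefinition = { name: string; description: string }

const DEFINITIONS: Record<string, Omit<CommandDefinition, "name">> = {
  "init-deep": { description: "(builtin) Initialize hierarchical AGENTS.md knowledge base" },
}

function loadBuiltinCommands(disabled?: string[]): Record<string, CommandDefinition> {
  const skip = new Set(disabled ?? [])
  const commands: Record<string, CommandDefinition> = {}
  for (const [name, definition] of Object.entries(DEFINITIONS)) {
    // Re-attach the map key as the `name` field on the way out
    if (!skip.has(name)) commands[name] = { name, ...definition }
  }
  return commands
}
```

Storing definitions without their `name` and re-attaching it from the map key keeps the key and the field from drifting apart.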
2 src/features/builtin-commands/index.ts Normal file
@@ -0,0 +1,2 @@
export * from "./types"
export * from "./commands"
299 src/features/builtin-commands/templates/init-deep.ts Normal file
@@ -0,0 +1,299 @@
|
||||
export const INIT_DEEP_TEMPLATE = `# Initialize Deep Knowledge Base
|
||||
|
||||
Generate comprehensive AGENTS.md files across project hierarchy. Combines root-level project knowledge (gen-knowledge) with complexity-based subdirectory documentation (gen-knowledge-deep).
|
||||
|
||||
## Usage
|
||||
|
||||
\`\`\`
|
||||
/init-deep # Analyze and generate hierarchical AGENTS.md
|
||||
/init-deep --create-new # Force create from scratch (ignore existing)
|
||||
/init-deep --max-depth=2 # Limit to N directory levels (default: 3)
|
||||
\`\`\`
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
|
||||
- **Telegraphic Style**: Sacrifice grammar for concision ("Project uses React" → "React 18")
|
||||
- **Predict-then-Compare**: Predict standard → find actual → document ONLY deviations
|
||||
- **Hierarchy Aware**: Parent covers general, children cover specific
|
||||
- **No Redundancy**: Child AGENTS.md NEVER repeats parent content
|
||||
|
||||
---
|
||||
|
||||
## Process
|
||||
|
||||
<critical>
|
||||
**MANDATORY: TodoWrite for ALL phases. Mark in_progress → completed in real-time.**
|
||||
</critical>
|
||||
|
||||
### Phase 0: Initialize
|
||||
|
||||
\`\`\`
|
||||
TodoWrite([
|
||||
{ id: "p1-analysis", content: "Parallel project structure & complexity analysis", status: "pending", priority: "high" },
|
||||
{ id: "p2-scoring", content: "Score directories, determine AGENTS.md locations", status: "pending", priority: "high" },
|
||||
{ id: "p3-root", content: "Generate root AGENTS.md with Predict-then-Compare", status: "pending", priority: "high" },
|
||||
{ id: "p4-subdirs", content: "Generate subdirectory AGENTS.md files in parallel", status: "pending", priority: "high" },
|
||||
{ id: "p5-review", content: "Review, deduplicate, validate all files", status: "pending", priority: "medium" }
|
||||
])
|
||||
\`\`\`
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Parallel Project Analysis
|
||||
|
||||
**Mark "p1-analysis" as in_progress.**
|
||||
|
||||
Launch **ALL tasks simultaneously**:
|
||||
|
||||
<parallel-tasks>
|
||||
|
||||
### Structural Analysis (bash - run in parallel)
|
||||
\`\`\`bash
|
||||
# Task A: Directory depth analysis
|
||||
find . -type d -not -path '*/\\.*' -not -path '*/node_modules/*' -not -path '*/venv/*' -not -path '*/__pycache__/*' -not -path '*/dist/*' -not -path '*/build/*' | awk -F/ '{print NF-1}' | sort -n | uniq -c
|
||||
|
||||
# Task B: File count per directory
|
||||
find . -type f -not -path '*/\\.*' -not -path '*/node_modules/*' -not -path '*/venv/*' -not -path '*/__pycache__/*' | sed 's|/[^/]*$||' | sort | uniq -c | sort -rn | head -30
|
||||
|
||||
# Task C: Code concentration
|
||||
find . -type f \\( -name "*.py" -o -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.go" -o -name "*.rs" -o -name "*.java" \\) -not -path '*/node_modules/*' -not -path '*/venv/*' | sed 's|/[^/]*$||' | sort | uniq -c | sort -rn | head -20
|
||||
|
||||
# Task D: Existing knowledge files
|
||||
find . -type f \\( -name "AGENTS.md" -o -name "CLAUDE.md" \\) -not -path '*/node_modules/*' 2>/dev/null
|
||||
\`\`\`
|
||||
|
||||
### Context Gathering (Explore agents - background_task in parallel)
|
||||
|
||||
\`\`\`
|
||||
background_task(agent="explore", prompt="Project structure: PREDICT standard {lang} patterns → FIND package.json/pyproject.toml/go.mod → REPORT deviations only")
|
||||
|
||||
background_task(agent="explore", prompt="Entry points: PREDICT typical (main.py, index.ts) → FIND actual → REPORT non-standard organization")
|
||||
|
||||
background_task(agent="explore", prompt="Conventions: FIND .cursor/rules, .cursorrules, eslintrc, pyproject.toml → REPORT project-specific rules DIFFERENT from defaults")
|
||||
|
||||
background_task(agent="explore", prompt="Anti-patterns: FIND comments with 'DO NOT', 'NEVER', 'ALWAYS', 'LEGACY', 'DEPRECATED' → REPORT forbidden patterns")
|
||||
|
||||
background_task(agent="explore", prompt="Build/CI: FIND .github/workflows, Makefile, justfile → REPORT non-standard build/deploy patterns")
|
||||
|
||||
background_task(agent="explore", prompt="Test patterns: FIND pytest.ini, jest.config, test structure → REPORT unique testing conventions")
|
||||
\`\`\`
|
||||
|
||||
</parallel-tasks>
|
||||
|
||||
**Collect all results. Mark "p1-analysis" as completed.**
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Complexity Scoring & Location Decision
|
||||
|
||||
**Mark "p2-scoring" as in_progress.**
|
||||
|
||||
### Scoring Matrix
|
||||
|
||||
| Factor | Weight | Threshold |
|
||||
|--------|--------|-----------|
|
||||
| File count | 3x | >20 files = high |
|
||||
| Subdirectory count | 2x | >5 subdirs = high |
|
||||
| Code file ratio | 2x | >70% code = high |
|
||||
| Unique patterns | 1x | Has own config |
|
||||
| Module boundary | 2x | Has __init__.py/index.ts |
|
||||
|
||||
### Decision Rules
|
||||
|
||||
| Score | Action |
|
||||
|-------|--------|
|
||||
| **Root (.)** | ALWAYS create AGENTS.md |
|
||||
| **High (>15)** | Create dedicated AGENTS.md |
|
||||
| **Medium (8-15)** | Create if distinct domain |
|
||||
| **Low (<8)** | Skip, parent sufficient |
|
||||
|
||||
### Output Format
|
||||
|
||||
\`\`\`
|
||||
AGENTS_LOCATIONS = [
|
||||
{ path: ".", type: "root" },
|
||||
{ path: "src/api", score: 18, reason: "high complexity, 45 files" },
|
||||
{ path: "src/hooks", score: 12, reason: "distinct domain, unique patterns" },
|
||||
]
|
||||
\`\`\`
|
||||
|
||||
**Mark "p2-scoring" as completed.**
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Generate Root AGENTS.md
|
||||
|
||||
**Mark "p3-root" as in_progress.**
|
||||
|
||||
Root AGENTS.md gets **full treatment** with Predict-then-Compare synthesis.
|
||||
|
||||
### Required Sections
|
||||
|
||||
\`\`\`markdown
|
||||
# PROJECT KNOWLEDGE BASE
|
||||
|
||||
**Generated:** {TIMESTAMP}
|
||||
**Commit:** {SHORT_SHA}
|
||||
**Branch:** {BRANCH}
|
||||
|
||||
## OVERVIEW
|
||||
|
||||
{1-2 sentences: what project does, core tech stack}
|
||||
|
||||
## STRUCTURE
|
||||
|
||||
\\\`\\\`\\\`
|
||||
{project-root}/
|
||||
├── {dir}/ # {non-obvious purpose only}
|
||||
└── {entry} # entry point
|
||||
\\\`\\\`\\\`
|
||||
|
||||
## WHERE TO LOOK
|
||||
|
||||
| Task | Location | Notes |
|
||||
|------|----------|-------|
|
||||
| Add feature X | \\\`src/x/\\\` | {pattern hint} |
|
||||
|
||||
## CONVENTIONS
|
||||
|
||||
{ONLY deviations from standard - skip generic advice}
|
||||
|
||||
- **{rule}**: {specific detail}
|
||||
|
||||
## ANTI-PATTERNS (THIS PROJECT)
|
||||
|
||||
{Things explicitly forbidden HERE}
|
||||
|
||||
- **{pattern}**: {why} → {alternative}
|
||||
|
||||
## UNIQUE STYLES
|
||||
|
||||
{Project-specific coding styles}
|
||||
|
||||
- **{style}**: {how different}
|
||||
|
||||
## COMMANDS
|
||||
|
||||
\\\`\\\`\\\`bash
|
||||
{dev-command}
|
||||
{test-command}
|
||||
{build-command}
|
||||
\\\`\\\`\\\`
|
||||
|
||||
## NOTES
|
||||
|
||||
{Gotchas, non-obvious info}
|
||||
\`\`\`
|
||||
|
||||
### Quality Gates
|
||||
|
||||
- [ ] Size: 50-150 lines
|
||||
- [ ] No generic advice ("write clean code")
|
||||
- [ ] No obvious info ("tests/ has tests")
|
||||
- [ ] Every item is project-specific
|
||||
|
||||
**Mark "p3-root" as completed.**
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Generate Subdirectory AGENTS.md
|
||||
|
||||
**Mark "p4-subdirs" as in_progress.**
|
||||
|
||||
For each location in AGENTS_LOCATIONS (except root), launch **parallel document-writer agents**:
|
||||
|
||||
\`\`\`typescript
|
||||
for (const loc of AGENTS_LOCATIONS.filter(l => l.path !== ".")) {
|
||||
background_task({
|
||||
agent: "document-writer",
|
||||
prompt: \\\`
|
||||
Generate AGENTS.md for: \${loc.path}
|
||||
|
||||
CONTEXT:
|
||||
- Complexity reason: \${loc.reason}
|
||||
- Parent AGENTS.md: ./AGENTS.md (already covers project overview)
|
||||
|
||||
CRITICAL RULES:
|
||||
1. Focus ONLY on this directory's specific context
|
||||
2. NEVER repeat parent AGENTS.md content
|
||||
3. Shorter is better - 30-80 lines max
|
||||
4. Telegraphic style - sacrifice grammar
|
||||
|
||||
REQUIRED SECTIONS:
|
||||
- OVERVIEW (1 line: what this directory does)
|
||||
- STRUCTURE (only if >5 subdirs)
|
||||
- WHERE TO LOOK (directory-specific tasks)
|
||||
- CONVENTIONS (only if DIFFERENT from root)
|
||||
- ANTI-PATTERNS (directory-specific only)
|
||||
|
||||
OUTPUT: Write to \${loc.path}/AGENTS.md
|
||||
\\\`
|
||||
})
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
**Wait for all agents. Mark "p4-subdirs" as completed.**
|
||||
|
||||
---
|
||||
|
||||
## Phase 5: Review & Deduplicate
|
||||
|
||||
**Mark "p5-review" as in_progress.**
|
||||
|
||||
### Validation Checklist
|
||||
|
||||
For EACH generated AGENTS.md:
|
||||
|
||||
| Check | Action if Fail |
|
||||
|-------|----------------|
|
||||
| Contains generic advice | REMOVE the line |
|
||||
| Repeats parent content | REMOVE the line |
|
||||
| Missing required section | ADD it |
|
||||
| Over 150 lines (root) / 80 lines (subdir) | TRIM |
|
||||
| Verbose explanations | REWRITE telegraphic |
|
||||
|
||||
### Cross-Reference Validation
|
||||
|
||||
\`\`\`
|
||||
For each child AGENTS.md:
|
||||
For each line in child:
|
||||
If similar line exists in parent:
|
||||
REMOVE from child (parent already covers)
|
||||
\`\`\`
|
||||
|
||||
**Mark "p5-review" as completed.**
|
||||
|
||||
---
|
||||
|
||||
## Final Report
|
||||
|
||||
\`\`\`
|
||||
=== init-deep Complete ===
|
||||
|
||||
Files Generated:
|
||||
✓ ./AGENTS.md (root, {N} lines)
|
||||
✓ ./src/hooks/AGENTS.md ({N} lines)
|
||||
✓ ./src/tools/AGENTS.md ({N} lines)
|
||||
|
||||
Directories Analyzed: {N}
|
||||
AGENTS.md Created: {N}
|
||||
Total Lines: {N}
|
||||
|
||||
Hierarchy:
|
||||
./AGENTS.md
|
||||
├── src/hooks/AGENTS.md
|
||||
└── src/tools/AGENTS.md
|
||||
\`\`\`
|
||||
|
||||
---
|
||||
|
||||
## Anti-Patterns for THIS Command
|
||||
|
||||
- **Over-documenting**: Not every directory needs AGENTS.md
|
||||
- **Redundancy**: Child must NOT repeat parent
|
||||
- **Generic content**: Remove anything that applies to ALL projects
|
||||
- **Sequential execution**: MUST use parallel agents
|
||||
- **Deep nesting**: Rarely need AGENTS.md at depth 4+
|
||||
- **Verbose style**: "This directory contains..." → just list it`
|
||||
9 src/features/builtin-commands/types.ts Normal file
@@ -0,0 +1,9 @@
import type { CommandDefinition } from "../claude-code-command-loader"

export type BuiltinCommandName = "init-deep"

export interface BuiltinCommandConfig {
  disabled_commands?: BuiltinCommandName[]
}

export type BuiltinCommands = Record<string, CommandDefinition>
@@ -14,6 +14,7 @@ import type { AgentFrontmatter } from "../claude-code-agent-loader/types"
import type { ClaudeCodeMcpConfig, McpServerConfig } from "../claude-code-mcp-loader/types"
import type {
  InstalledPluginsDatabase,
  PluginInstallation,
  PluginManifest,
  LoadedPlugin,
  PluginLoadResult,
@@ -134,6 +135,15 @@ function isPluginEnabled(
  return true
}

function extractPluginEntries(
  db: InstalledPluginsDatabase
): Array<[string, PluginInstallation | undefined]> {
  if (db.version === 1) {
    return Object.entries(db.plugins).map(([key, installation]) => [key, installation])
  }
  return Object.entries(db.plugins).map(([key, installations]) => [key, installations[0]])
}

export function discoverInstalledPlugins(options?: PluginLoaderOptions): PluginLoadResult {
  const db = loadInstalledPlugins()
  const settings = loadClaudeSettings()
@@ -147,15 +157,14 @@ export function discoverInstalledPlugins(options?: PluginLoaderOptions): PluginL
  const settingsEnabledPlugins = settings?.enabledPlugins
  const overrideEnabledPlugins = options?.enabledPluginsOverride

  for (const [pluginKey, installation] of extractPluginEntries(db)) {
    if (!installation) continue

    if (!isPluginEnabled(pluginKey, settingsEnabledPlugins, overrideEnabledPlugins)) {
      log(`Plugin disabled: ${pluginKey}`)
      continue
    }

    const { installPath, scope, version } = installation

    if (!existsSync(installPath)) {
@@ -20,14 +20,29 @@ export interface PluginInstallation {
  isLocal?: boolean
}

/**
 * Installed plugins database v1 (legacy)
 * plugins stored as direct objects
 */
export interface InstalledPluginsDatabaseV1 {
  version: 1
  plugins: Record<string, PluginInstallation>
}

/**
 * Installed plugins database v2 (current)
 * plugins stored as arrays
 */
export interface InstalledPluginsDatabaseV2 {
  version: 2
  plugins: Record<string, PluginInstallation[]>
}

/**
 * Installed plugins database structure
 * Located at ~/.claude/plugins/installed_plugins.json
 */
export type InstalledPluginsDatabase = InstalledPluginsDatabaseV1 | InstalledPluginsDatabaseV2

/**
 * Plugin author information
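The v1/v2 split works because the literal `version` field lets TypeScript narrow the union: inside a `version === 1` branch the compiler knows `plugins` holds single objects, and in the other branch arrays. A self-contained sketch of that narrowing, with the types trimmed to the fields that matter (field names simplified from the diff):

```typescript
interface Installation { installPath: string }
interface DbV1 { version: 1; plugins: Record<string, Installation> }
interface DbV2 { version: 2; plugins: Record<string, Installation[]> }
type Db = DbV1 | DbV2

// The literal `version` field narrows the union,
// so each branch sees the right plugin value shape.
function firstInstallations(db: Db): Array<[string, Installation | undefined]> {
  if (db.version === 1) {
    return Object.entries(db.plugins).map(([key, installation]) => [key, installation])
  }
  return Object.entries(db.plugins).map(([key, installations]) => [key, installations[0]])
}
```

Callers downstream only need the `installation | undefined` pair, which is why both database versions can share one loader loop.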
@@ -27,6 +27,7 @@ hooks/
├── rules-injector/                # Conditional rules from .claude/rules/
├── session-recovery/              # Recover from session errors
├── think-mode/                    # Auto-detect thinking triggers
├── thinking-block-validator/      # Validate thinking blocks in messages
├── context-window-monitor.ts      # Monitor context usage (standalone)
├── empty-task-response-detector.ts
├── session-notification.ts        # OS notify on idle (standalone)
@@ -1,7 +1,7 @@
import { join } from "node:path";
import { getOpenCodeStorageDir } from "../../shared/data-path";

export const OPENCODE_STORAGE = getOpenCodeStorageDir();
export const AGENT_USAGE_REMINDER_STORAGE = join(
  OPENCODE_STORAGE,
  "agent-usage-reminder",
@@ -21,6 +21,8 @@ import {
} from "../session-recovery/storage";
import { log } from "../../shared/logger";

const PLACEHOLDER_TEXT = "[user interrupted]";

type Client = {
  session: {
    messages: (opts: {
@@ -103,6 +105,36 @@ function getOrCreateDcpState(
  return state;
}

function sanitizeEmptyMessagesBeforeSummarize(sessionID: string): number {
  const emptyMessageIds = findEmptyMessages(sessionID);
  if (emptyMessageIds.length === 0) {
    return 0;
  }

  let fixedCount = 0;
  for (const messageID of emptyMessageIds) {
    const replaced = replaceEmptyTextParts(messageID, PLACEHOLDER_TEXT);
    if (replaced) {
      fixedCount++;
    } else {
      const injected = injectTextPart(sessionID, messageID, PLACEHOLDER_TEXT);
      if (injected) {
        fixedCount++;
      }
    }
  }

  if (fixedCount > 0) {
    log("[auto-compact] pre-summarize sanitization fixed empty messages", {
      sessionID,
      fixedCount,
      totalEmpty: emptyMessageIds.length,
    });
  }

  return fixedCount;
}

async function getLastMessagePair(
  sessionID: string,
  client: Client,
@@ -326,6 +358,104 @@ export async function executeCompact(
  const errorData = autoCompactState.errorDataBySession.get(sessionID);
  const truncateState = getOrCreateTruncateState(autoCompactState, sessionID);

  // DCP FIRST - run before any other recovery attempts when token limit exceeded
  const dcpState = getOrCreateDcpState(autoCompactState, sessionID);
  if (
    experimental?.dcp_for_compaction &&
    !dcpState.attempted &&
    errorData?.currentTokens &&
    errorData?.maxTokens &&
    errorData.currentTokens > errorData.maxTokens
  ) {
    dcpState.attempted = true;
    log("[auto-compact] DCP triggered FIRST on token limit error", {
      sessionID,
      currentTokens: errorData.currentTokens,
      maxTokens: errorData.maxTokens,
    });

    const dcpConfig = experimental.dynamic_context_pruning ?? {
      enabled: true,
      notification: "detailed" as const,
      protected_tools: ["task", "todowrite", "todoread", "lsp_rename", "lsp_code_action_resolve"],
    };

    try {
      const pruningResult = await executeDynamicContextPruning(
        sessionID,
        dcpConfig,
        client
      );

      if (pruningResult.itemsPruned > 0) {
        dcpState.itemsPruned = pruningResult.itemsPruned;
        log("[auto-compact] DCP successful, proceeding to compaction", {
          itemsPruned: pruningResult.itemsPruned,
          tokensSaved: pruningResult.totalTokensSaved,
        });

        await (client as Client).tui
          .showToast({
            body: {
              title: "Dynamic Context Pruning",
              message: `Pruned ${pruningResult.itemsPruned} items (~${Math.round(pruningResult.totalTokensSaved / 1000)}k tokens). Running compaction...`,
              variant: "success",
              duration: 3000,
            },
          })
          .catch(() => {});

        // After DCP, immediately try summarize
        const providerID = msg.providerID as string | undefined;
        const modelID = msg.modelID as string | undefined;

        if (providerID && modelID) {
          try {
            sanitizeEmptyMessagesBeforeSummarize(sessionID);

            await (client as Client).tui
              .showToast({
                body: {
                  title: "Auto Compact",
                  message: "Summarizing session after DCP...",
                  variant: "warning",
                  duration: 3000,
                },
              })
              .catch(() => {});

            await (client as Client).session.summarize({
              path: { id: sessionID },
              body: { providerID, modelID },
              query: { directory },
            });

            clearSessionState(autoCompactState, sessionID);

            setTimeout(async () => {
              try {
                await (client as Client).session.prompt_async({
                  path: { sessionID },
                  body: { parts: [{ type: "text", text: "Continue" }] },
                  query: { directory },
                });
              } catch {}
            }, 500);
            return;
          } catch (summarizeError) {
            log("[auto-compact] summarize after DCP failed, continuing recovery", {
              error: String(summarizeError),
            });
          }
        }
      } else {
        log("[auto-compact] DCP did not prune any items", { sessionID });
      }
    } catch (error) {
      log("[auto-compact] DCP failed", { error: String(error) });
    }
  }

  if (
    experimental?.aggressive_truncation &&
    errorData?.currentTokens &&
@@ -523,6 +653,8 @@ export async function executeCompact(

  if (providerID && modelID) {
    try {
      sanitizeEmptyMessagesBeforeSummarize(sessionID);

      await (client as Client).tui
        .showToast({
          body: {
@@ -582,67 +714,6 @@ export async function executeCompact(
    }
  }

  // Try DCP after summarize fails - only once per compaction cycle
  const dcpState = getOrCreateDcpState(autoCompactState, sessionID);
  if (experimental?.dcp_on_compaction_failure && !dcpState.attempted) {
    dcpState.attempted = true;
    log("[auto-compact] attempting DCP after summarize failed", { sessionID });

    const dcpConfig = experimental.dynamic_context_pruning ?? {
      enabled: true,
      notification: "detailed" as const,
      protected_tools: ["task", "todowrite", "todoread", "lsp_rename", "lsp_code_action_resolve"],
    };

    try {
      const pruningResult = await executeDynamicContextPruning(
        sessionID,
        dcpConfig,
        client
      );

      if (pruningResult.itemsPruned > 0) {
        dcpState.itemsPruned = pruningResult.itemsPruned;
        log("[auto-compact] DCP successful, retrying compaction", {
          itemsPruned: pruningResult.itemsPruned,
          tokensSaved: pruningResult.totalTokensSaved,
        });

        await (client as Client).tui
          .showToast({
            body: {
              title: "Dynamic Context Pruning",
              message: `Pruned ${pruningResult.itemsPruned} items (~${Math.round(pruningResult.totalTokensSaved / 1000)}k tokens). Retrying compaction...`,
              variant: "success",
              duration: 3000,
            },
          })
          .catch(() => {});

        // Reset retry state to allow compaction to retry summarize
        retryState.attempt = 0;

        setTimeout(() => {
          executeCompact(
            sessionID,
            msg,
            autoCompactState,
            client,
            directory,
            experimental,
          );
        }, 500);
        return;
      } else {
        log("[auto-compact] DCP did not prune any items, continuing to revert", { sessionID });
      }
    } catch (error) {
      log("[auto-compact] DCP failed, continuing to revert", {
        error: String(error),
      });
    }
  }

  const fallbackState = getOrCreateFallbackState(autoCompactState, sessionID);

  if (fallbackState.revertAttempt < FALLBACK_CONFIG.maxRevertAttempts) {
@@ -26,6 +26,7 @@ const TOKEN_LIMIT_KEYWORDS = [
  "context length",
  "too many tokens",
  "non-empty content",
  "invalid_request_error",
]

const MESSAGE_INDEX_PATTERN = /messages\.(\d+)/
@@ -114,9 +115,10 @@ export function parseAnthropicTokenLimitError(err: unknown): ParsedTokenLimitErr
  if (typeof responseBody === "string") {
    try {
      const jsonPatterns = [
        // Greedy match to last } for nested JSON
        /data:\s*(\{[\s\S]*\})\s*$/m,
        /(\{"type"\s*:\s*"error"[\s\S]*\})/,
        /(\{[\s\S]*"error"[\s\S]*\})/,
      ]

      for (const pattern of jsonPatterns) {
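The switch from lazy (`[\s\S]*?`) to greedy (`[\s\S]*`) quantifiers matters for nested JSON: a lazy match stops at the first `}`, truncating the outer object, while the greedy form runs to the last one. A sketch of the difference (the sample payload is illustrative, not a real Anthropic error body):

```typescript
// Nested JSON: the error object contains an inner object.
const body = '{"type": "error", "error": {"message": "prompt is too long"}}'

const lazy = /(\{"type"\s*:\s*"error"[\s\S]*?\})/   // stops at the FIRST }
const greedy = /(\{"type"\s*:\s*"error"[\s\S]*\})/  // runs to the LAST }

const lazyMatch = body.match(lazy)?.[1]
const greedyMatch = body.match(greedy)?.[1]
// lazyMatch drops the final } and fails JSON.parse; greedyMatch is the full object.
```

The trade-off: greedy patterns assume the error object is the last JSON on the line, which holds for a single SSE `data:` payload but would over-capture if unrelated text followed.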
@@ -3,19 +3,13 @@ import { join } from "node:path"
import type { PruningState, ToolCallSignature } from "./pruning-types"
import { estimateTokens } from "./pruning-types"
import { log } from "../../shared/logger"
import { MESSAGE_STORAGE } from "../../features/hook-message-injector"

export interface DeduplicationConfig {
  enabled: boolean
  protectedTools?: string[]
}

interface ToolPart {
  type: string
  callID?: string
@@ -3,6 +3,7 @@ import { join } from "node:path"
import type { PruningState, ErroredToolCall } from "./pruning-types"
import { estimateTokens } from "./pruning-types"
import { log } from "../../shared/logger"
import { MESSAGE_STORAGE } from "../../features/hook-message-injector"

export interface PurgeErrorsConfig {
  enabled: boolean
@@ -10,13 +11,6 @@ export interface PurgeErrorsConfig {
  protectedTools?: string[]
}

interface ToolPart {
  type: string
  callID?: string
@@ -3,13 +3,7 @@ import { join } from "node:path"
import type { PruningState } from "./pruning-types"
import { estimateTokens } from "./pruning-types"
import { log } from "../../shared/logger"
import { MESSAGE_STORAGE } from "../../features/hook-message-injector"

function getMessageDir(sessionID: string): string | null {
  if (!existsSync(MESSAGE_STORAGE)) return null
```diff
@@ -3,19 +3,13 @@ import { join } from "node:path"
 import type { PruningState, FileOperation } from "./pruning-types"
 import { estimateTokens } from "./pruning-types"
 import { log } from "../../shared/logger"
+import { MESSAGE_STORAGE } from "../../features/hook-message-injector"

 export interface SupersedeWritesConfig {
   enabled: boolean
   aggressive: boolean
 }

-const MESSAGE_STORAGE = join(
-  process.env.HOME || process.env.USERPROFILE || "",
-  ".config",
-  "opencode",
-  "sessions"
-)
-
 interface ToolPart {
   type: string
   callID?: string
```
```diff
@@ -1,19 +1,8 @@
 import { existsSync, readdirSync, readFileSync, writeFileSync } from "node:fs"
-import { homedir } from "node:os"
 import { join } from "node:path"
-import { xdgData } from "xdg-basedir"
-
-let OPENCODE_STORAGE = join(xdgData ?? "", "opencode", "storage")
-
-// Fix for macOS where xdg-basedir points to ~/Library/Application Support
-// but OpenCode (cli) uses ~/.local/share
-if (process.platform === "darwin" && !existsSync(OPENCODE_STORAGE)) {
-  const localShare = join(homedir(), ".local", "share", "opencode", "storage")
-  if (existsSync(localShare)) {
-    OPENCODE_STORAGE = localShare
-  }
-}
+import { getOpenCodeStorageDir } from "../../shared/data-path"

+const OPENCODE_STORAGE = getOpenCodeStorageDir()
 const MESSAGE_STORAGE = join(OPENCODE_STORAGE, "message")
 const PART_STORAGE = join(OPENCODE_STORAGE, "part")
```
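The hunk above swaps the inline storage-path resolution for a shared `getOpenCodeStorageDir()` helper from `shared/data-path`. A minimal sketch of what that helper presumably centralizes, based only on the inline logic being removed here (`resolveStorageDir` is a hypothetical stand-in name, and `xdgData` is passed in to keep the sketch dependency-free):

```typescript
import { existsSync } from "node:fs"
import { homedir } from "node:os"
import { join } from "node:path"

// Hypothetical sketch: resolve the OpenCode storage directory from the
// xdg-basedir data path, with the macOS fallback the removed inline code had.
function resolveStorageDir(xdgData: string | undefined): string {
  let dir = join(xdgData ?? "", "opencode", "storage")
  // On macOS, xdg-basedir reports ~/Library/Application Support,
  // but the OpenCode CLI writes to ~/.local/share.
  if (process.platform === "darwin" && !existsSync(dir)) {
    const localShare = join(homedir(), ".local", "share", "opencode", "storage")
    if (existsSync(localShare)) dir = localShare
  }
  return dir
}

console.log(resolveStorageDir(process.env.XDG_DATA_HOME))
```

Centralizing this means every feature's constants file (the hunks below) can drop its own copy of the xdg logic.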
```diff
@@ -142,8 +142,9 @@ export interface CheckResult {
  * Run comment-checker CLI with given input.
  * @param input Hook input to check
  * @param cliPath Optional explicit path to CLI binary
+ * @param customPrompt Optional custom prompt to replace default warning message
  */
-export async function runCommentChecker(input: HookInput, cliPath?: string): Promise<CheckResult> {
+export async function runCommentChecker(input: HookInput, cliPath?: string, customPrompt?: string): Promise<CheckResult> {
   const binaryPath = cliPath ?? resolvedCliPath ?? COMMENT_CHECKER_CLI_PATH

   if (!binaryPath) {
@@ -160,7 +161,12 @@ export async function runCommentChecker(input: HookInput, cliPath?: string): Pro
   debugLog("running comment-checker with input:", jsonInput.substring(0, 200))

   try {
-    const proc = spawn([binaryPath], {
+    const args = [binaryPath]
+    if (customPrompt) {
+      args.push("--prompt", customPrompt)
+    }
+
+    const proc = spawn(args, {
       stdin: "pipe",
       stdout: "pipe",
       stderr: "pipe",
```
```diff
@@ -1,5 +1,6 @@
 import type { PendingCall } from "./types"
 import { runCommentChecker, getCommentCheckerPath, startBackgroundInit, type HookInput } from "./cli"
+import type { CommentCheckerConfig } from "../../config/schema"

 import * as fs from "fs"
 import { existsSync } from "fs"
@@ -20,6 +21,7 @@ const pendingCalls = new Map<string, PendingCall>()
 const PENDING_CALL_TTL = 60_000

 let cliPathPromise: Promise<string | null> | null = null
+let cleanupIntervalStarted = false

 function cleanupOldPendingCalls(): void {
   const now = Date.now()
@@ -30,10 +32,13 @@ function cleanupOldPendingCalls(): void {
   }
 }

-setInterval(cleanupOldPendingCalls, 10_000)
+export function createCommentCheckerHooks(config?: CommentCheckerConfig) {
+  debugLog("createCommentCheckerHooks called", { config })

-export function createCommentCheckerHooks() {
-  debugLog("createCommentCheckerHooks called")
+  if (!cleanupIntervalStarted) {
+    cleanupIntervalStarted = true
+    setInterval(cleanupOldPendingCalls, 10_000)
+  }

   // Start background CLI initialization (may trigger lazy download)
   startBackgroundInit()
@@ -123,7 +128,7 @@ export function createCommentCheckerHooks() {

       // CLI mode only
       debugLog("using CLI:", cliPath)
-      await processWithCli(input, pendingCall, output, cliPath)
+      await processWithCli(input, pendingCall, output, cliPath, config?.custom_prompt)
     } catch (err) {
       debugLog("tool.execute.after failed:", err)
     }
@@ -135,7 +140,8 @@ async function processWithCli(
   input: { tool: string; sessionID: string; callID: string },
   pendingCall: PendingCall,
   output: { output: string },
-  cliPath: string
+  cliPath: string,
+  customPrompt?: string
 ): Promise<void> {
   debugLog("using CLI mode with path:", cliPath)

@@ -154,7 +160,7 @@ async function processWithCli(
     },
   }

-  const result = await runCommentChecker(hookInput, cliPath)
+  const result = await runCommentChecker(hookInput, cliPath, customPrompt)

   if (result.hasComments && result.message) {
     debugLog("CLI detected comments, appending message")
```
```diff
@@ -1,7 +1,7 @@
 import { join } from "node:path";
-import { xdgData } from "xdg-basedir";
+import { getOpenCodeStorageDir } from "../../shared/data-path";

-export const OPENCODE_STORAGE = join(xdgData ?? "", "opencode", "storage");
+export const OPENCODE_STORAGE = getOpenCodeStorageDir();
 export const AGENTS_INJECTOR_STORAGE = join(
   OPENCODE_STORAGE,
   "directory-agents",
```
```diff
@@ -1,7 +1,7 @@
 import { join } from "node:path";
-import { xdgData } from "xdg-basedir";
+import { getOpenCodeStorageDir } from "../../shared/data-path";

-export const OPENCODE_STORAGE = join(xdgData ?? "", "opencode", "storage");
+export const OPENCODE_STORAGE = getOpenCodeStorageDir();
 export const README_INJECTOR_STORAGE = join(
   OPENCODE_STORAGE,
   "directory-readme",
```
```diff
@@ -1,7 +1,7 @@
 import { join } from "node:path";
-import { xdgData } from "xdg-basedir";
+import { getOpenCodeStorageDir } from "../../shared/data-path";

-export const OPENCODE_STORAGE = join(xdgData ?? "", "opencode", "storage");
+export const OPENCODE_STORAGE = getOpenCodeStorageDir();
 export const INTERACTIVE_BASH_SESSION_STORAGE = join(
   OPENCODE_STORAGE,
   "interactive-bash-session",
```
```diff
@@ -1,7 +1,7 @@
 import { join } from "node:path";
-import { xdgData } from "xdg-basedir";
+import { getOpenCodeStorageDir } from "../../shared/data-path";

-export const OPENCODE_STORAGE = join(xdgData ?? "", "opencode", "storage");
+export const OPENCODE_STORAGE = getOpenCodeStorageDir();
 export const RULES_INJECTOR_STORAGE = join(OPENCODE_STORAGE, "rules-injector");

 export const PROJECT_MARKERS = [
```
```diff
@@ -1,7 +1,7 @@
 import { join } from "node:path"
-import { xdgData } from "xdg-basedir"
+import { getOpenCodeStorageDir } from "../../shared/data-path"

-export const OPENCODE_STORAGE = join(xdgData ?? "", "opencode", "storage")
+export const OPENCODE_STORAGE = getOpenCodeStorageDir()
 export const MESSAGE_STORAGE = join(OPENCODE_STORAGE, "message")
 export const PART_STORAGE = join(OPENCODE_STORAGE, "part")
```
```diff
@@ -135,7 +135,16 @@ export function findEmptyMessageByIndex(sessionID: string, targetIndex: number):
   const messages = readMessages(sessionID)

   // API index may differ from storage index due to system messages
-  const indicesToTry = [targetIndex, targetIndex - 1, targetIndex - 2]
+  const indicesToTry = [
+    targetIndex,
+    targetIndex - 1,
+    targetIndex + 1,
+    targetIndex - 2,
+    targetIndex + 2,
+    targetIndex - 3,
+    targetIndex - 4,
+    targetIndex - 5,
+  ]

   for (const idx of indicesToTry) {
     if (idx < 0 || idx >= messages.length) continue
```
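The widened `indicesToTry` list above fuzzes around the reported index because the API's message index can drift from the storage index by a few positions. The same nearest-candidate search can be sketched generically (`findNearby` is a hypothetical helper for illustration, not part of the codebase):

```typescript
// Sketch of the widened index search: try the reported index first, then
// nearby offsets in the same order as the hunk above, skipping anything
// out of range, and return the first item matching the predicate.
function findNearby<T>(items: T[], target: number, pred: (x: T) => boolean): T | undefined {
  const offsets = [0, -1, +1, -2, +2, -3, -4, -5]
  for (const off of offsets) {
    const idx = target + off
    if (idx < 0 || idx >= items.length) continue
    if (pred(items[idx])) return items[idx]
  }
  return undefined
}

console.log(findNearby([1, 2, 0, 4], 1, (x) => x === 0)) // → 0 (found at offset +1)
```

Biasing the offsets toward negative values matches the comment in the hunk: system messages inserted earlier in the session push the storage index below the API index.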
```diff
@@ -51,14 +51,15 @@ function isExtendedThinkingModel(modelID: string): boolean {
 }

 /**
- * Check if a message has tool parts (tool_use)
+ * Check if a message has any content parts (tool_use, text, or other non-thinking content)
  */
-function hasToolParts(parts: Part[]): boolean {
+function hasContentParts(parts: Part[]): boolean {
   if (!parts || parts.length === 0) return false

   return parts.some((part: Part) => {
     const type = part.type as string
-    return type === "tool" || type === "tool_use"
+    // Include tool parts and text parts (anything that's not thinking/reasoning)
+    return type === "tool" || type === "tool_use" || type === "text"
   })
 }
@@ -154,8 +155,8 @@ export function createThinkingBlockValidatorHook(): MessagesTransformHook {
     // Only check assistant messages
     if (msg.info.role !== "assistant") continue

-    // Check if message has tool parts but doesn't start with thinking
-    if (hasToolParts(msg.parts) && !startsWithThinkingBlock(msg.parts)) {
+    // Check if message has content parts but doesn't start with thinking
+    if (hasContentParts(msg.parts) && !startsWithThinkingBlock(msg.parts)) {
       // Find thinking content from previous turns
       const previousThinking = findPreviousThinkingContent(messages, i)
```
```diff
@@ -8,7 +8,6 @@ import {
 } from "../features/hook-message-injector"
 import type { BackgroundManager } from "../features/background-agent"
 import { log } from "../shared/logger"
-import { isNonInteractive } from "./non-interactive-env/detector"

 const HOOK_NAME = "todo-continuation-enforcer"
```
```diff
@@ -37,6 +36,32 @@ Incomplete tasks remain in your todo list. Continue working on the next pending
 - Mark each task complete when finished
 - Do not stop until all tasks are done`

+const COUNTDOWN_SECONDS = 2
+const TOAST_DURATION_MS = 900
+const MIN_INJECTION_INTERVAL_MS = 10_000
+
+// ============================================================================
+// STATE MACHINE TYPES
+// ============================================================================
+
+type SessionMode =
+  | "idle"         // Observed idle, no countdown started yet
+  | "countingDown" // Waiting N seconds before injecting
+  | "injecting"    // Currently calling session.prompt
+  | "recovering"   // Session recovery in progress (external control)
+  | "errorBypass"  // Bypass mode after session.error/interrupt
+
+interface SessionState {
+  version: number // Monotonic generation token - increment to invalidate pending callbacks
+  mode: SessionMode
+  timer?: ReturnType<typeof setTimeout> // Pending countdown timer
+  lastAttemptedAt?: number // Timestamp of last injection attempt (throttle all attempts)
+}
+
+// ============================================================================
+// HELPER FUNCTIONS
+// ============================================================================
+
 function getMessageDir(sessionID: string): string | null {
   if (!existsSync(MESSAGE_STORAGE)) return null
```
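The `SessionState.version` field introduced above implements a generation-token pattern: every invalidation bumps the counter, scheduled callbacks capture the value at scheduling time, and any callback that fires after the counter has moved on aborts instead of acting on stale state. A minimal self-contained sketch of the pattern (names here are illustrative, not the hook's actual API):

```typescript
// Generation-token sketch: invalidate() bumps `version`; schedule() captures
// the current version and the returned callback becomes a no-op if a newer
// event has invalidated it in the meantime.
type State = { version: number; mode: string }

const state: State = { version: 0, mode: "idle" }

function invalidate(s: State): void {
  s.version++ // any pending callback now holds a stale token
  s.mode = "idle"
}

function schedule(s: State, fn: () => void): () => void {
  s.version++
  s.mode = "countingDown"
  const captured = s.version
  return () => {
    if (s.version !== captured) return // stale: a newer event invalidated us
    fn()
  }
}

const run = schedule(state, () => { state.mode = "injecting" })
invalidate(state) // e.g. a user message arrives during the countdown
run()             // aborts silently: version mismatch
console.log(state.mode) // stays "idle"
```

This is why the real hook re-checks `state.version !== capturedVersion` after every `await`: each async boundary is another window in which the token can be invalidated.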
```diff
@@ -68,104 +93,354 @@ function detectInterrupt(error: unknown): boolean {
   return false
 }

-const COUNTDOWN_SECONDS = 2
-const TOAST_DURATION_MS = 900 // Slightly less than 1s so toasts don't overlap
-
-interface CountdownState {
-  secondsRemaining: number
-  intervalId: ReturnType<typeof setInterval>
+function getIncompleteCount(todos: Todo[]): number {
+  return todos.filter(t => t.status !== "completed" && t.status !== "cancelled").length
 }

+// ============================================================================
+// MAIN IMPLEMENTATION
+// ============================================================================
+
 export function createTodoContinuationEnforcer(
   ctx: PluginInput,
   options: TodoContinuationEnforcerOptions = {}
 ): TodoContinuationEnforcer {
   const { backgroundManager } = options
-  const remindedSessions = new Set<string>()
-  const interruptedSessions = new Set<string>()
-  const errorSessions = new Set<string>()
-  const recoveringSessions = new Set<string>()
-  const pendingCountdowns = new Map<string, CountdownState>()
-  const preemptivelyInjectedSessions = new Set<string>()
+
+  // Single source of truth: per-session state machine
+  const sessions = new Map<string, SessionState>()
+
+  // ==========================================================================
+  // STATE HELPERS
+  // ==========================================================================
+
+  function getOrCreateState(sessionID: string): SessionState {
+    let state = sessions.get(sessionID)
+    if (!state) {
+      state = { version: 0, mode: "idle" }
+      sessions.set(sessionID, state)
+    }
+    return state
+  }
+
+  function clearTimer(state: SessionState): void {
+    if (state.timer) {
+      clearTimeout(state.timer)
+      state.timer = undefined
+    }
+  }
+
+  /**
+   * Invalidate any pending or in-flight operation by incrementing version.
+   * ALWAYS bumps version regardless of current mode to prevent last-mile races.
+   */
+  function invalidate(sessionID: string, reason: string): void {
+    const state = sessions.get(sessionID)
+    if (!state) return
+
+    // Skip if in recovery mode (external control)
+    if (state.mode === "recovering") return
+
+    state.version++
+    clearTimer(state)
+
+    if (state.mode !== "idle" && state.mode !== "errorBypass") {
+      log(`[${HOOK_NAME}] Invalidated`, { sessionID, reason, prevMode: state.mode, newVersion: state.version })
+      state.mode = "idle"
+    }
+  }
+
+  /**
+   * Check if this is the main session (not a subagent session).
+   */
+  function isMainSession(sessionID: string): boolean {
+    const mainSessionID = getMainSessionID()
+    // If no main session is set, allow all. If set, only allow main.
+    return !mainSessionID || sessionID === mainSessionID
+  }
+
+  // ==========================================================================
+  // EXTERNAL API
+  // ==========================================================================
+
   const markRecovering = (sessionID: string): void => {
-    recoveringSessions.add(sessionID)
+    const state = getOrCreateState(sessionID)
+    invalidate(sessionID, "entering recovery mode")
+    state.mode = "recovering"
     log(`[${HOOK_NAME}] Session marked as recovering`, { sessionID })
   }

   const markRecoveryComplete = (sessionID: string): void => {
-    recoveringSessions.delete(sessionID)
+    const state = sessions.get(sessionID)
+    if (state && state.mode === "recovering") {
+      state.mode = "idle"
+      log(`[${HOOK_NAME}] Session recovery complete`, { sessionID })
+    }
   }

+  // ==========================================================================
+  // TOAST HELPER
+  // ==========================================================================
+
+  async function showCountdownToast(seconds: number, incompleteCount: number): Promise<void> {
+    await ctx.client.tui.showToast({
+      body: {
+        title: "Todo Continuation",
+        message: `Resuming in ${seconds}s... (${incompleteCount} tasks remaining)`,
+        variant: "warning" as const,
+        duration: TOAST_DURATION_MS,
+      },
+    }).catch(() => {})
+  }
+
+  // ==========================================================================
+  // CORE INJECTION LOGIC
+  // ==========================================================================
+
+  async function executeInjection(sessionID: string, capturedVersion: number): Promise<void> {
+    const state = sessions.get(sessionID)
+    if (!state) return
+
+    // Version check: if version changed since we started, abort
+    if (state.version !== capturedVersion) {
+      log(`[${HOOK_NAME}] Injection aborted: version mismatch`, {
+        sessionID, capturedVersion, currentVersion: state.version
+      })
+      return
+    }
+
+    // Mode check: must still be in countingDown mode
+    if (state.mode !== "countingDown") {
+      log(`[${HOOK_NAME}] Injection aborted: mode changed`, {
+        sessionID, mode: state.mode
+      })
+      return
+    }
+
+    // Throttle check: minimum interval between injection attempts
+    if (state.lastAttemptedAt) {
+      const elapsed = Date.now() - state.lastAttemptedAt
+      if (elapsed < MIN_INJECTION_INTERVAL_MS) {
+        log(`[${HOOK_NAME}] Injection throttled: too soon since last injection`, {
+          sessionID, elapsedMs: elapsed, minIntervalMs: MIN_INJECTION_INTERVAL_MS
+        })
+        state.mode = "idle"
+        return
+      }
+    }
+
+    state.mode = "injecting"
+
+    // Re-verify todos (CRITICAL: always re-check before injecting)
+    let todos: Todo[] = []
+    try {
+      const response = await ctx.client.session.todo({ path: { id: sessionID } })
+      todos = (response.data ?? response) as Todo[]
+    } catch (err) {
+      log(`[${HOOK_NAME}] Failed to fetch todos for injection`, { sessionID, error: String(err) })
+      state.mode = "idle"
+      return
+    }
+
+    // Version check again after async operation
+    if (state.version !== capturedVersion) {
+      log(`[${HOOK_NAME}] Injection aborted after todo fetch: version mismatch`, { sessionID })
+      state.mode = "idle"
+      return
+    }
+
+    const incompleteCount = getIncompleteCount(todos)
+    if (incompleteCount === 0) {
+      log(`[${HOOK_NAME}] No incomplete todos at injection time`, { sessionID, total: todos.length })
+      state.mode = "idle"
+      return
+    }
+
+    // Skip entirely if background tasks are running (no false positives)
+    const hasRunningBgTasks = backgroundManager
+      ? backgroundManager.getTasksByParentSession(sessionID).some((t) => t.status === "running")
+      : false
+
+    if (hasRunningBgTasks) {
+      log(`[${HOOK_NAME}] Skipped: background tasks still running`, { sessionID })
+      state.mode = "idle"
+      return
+    }
+
+    // Get previous message agent info
+    const messageDir = getMessageDir(sessionID)
+    const prevMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
+
+    // Check write permission
+    const agentHasWritePermission = !prevMessage?.tools ||
+      (prevMessage.tools.write !== false && prevMessage.tools.edit !== false)
+
+    if (!agentHasWritePermission) {
+      log(`[${HOOK_NAME}] Skipped: agent lacks write permission`, {
+        sessionID, agent: prevMessage?.agent, tools: prevMessage?.tools
+      })
+      state.mode = "idle"
+      return
+    }
+
+    // Plan mode agents only analyze and plan, not implement - skip todo continuation
+    const agentName = prevMessage?.agent?.toLowerCase() ?? ""
+    const isPlanModeAgent = agentName === "plan" || agentName === "planner-sisyphus"
+    if (isPlanModeAgent) {
+      log(`[${HOOK_NAME}] Skipped: plan mode agent detected`, {
+        sessionID, agent: prevMessage?.agent
+      })
+      state.mode = "idle"
+      return
+    }
+
+    const prompt = `${CONTINUATION_PROMPT}\n\n[Status: ${todos.length - incompleteCount}/${todos.length} completed, ${incompleteCount} remaining]`
+
+    // Final version check right before API call (last-mile race mitigation)
+    if (state.version !== capturedVersion) {
+      log(`[${HOOK_NAME}] Injection aborted: version changed before API call`, { sessionID })
+      state.mode = "idle"
+      return
+    }
+
+    // Set lastAttemptedAt BEFORE calling API (throttle attempts, not just successes)
+    state.lastAttemptedAt = Date.now()
+
+    try {
+      log(`[${HOOK_NAME}] Injecting continuation prompt`, {
+        sessionID,
+        agent: prevMessage?.agent,
+        incompleteCount
+      })
+
+      await ctx.client.session.prompt({
+        path: { id: sessionID },
+        body: {
+          agent: prevMessage?.agent,
+          parts: [{ type: "text", text: prompt }],
+        },
+        query: { directory: ctx.directory },
+      })
+
+      log(`[${HOOK_NAME}] Continuation prompt injected successfully`, { sessionID })
+    } catch (err) {
+      log(`[${HOOK_NAME}] Prompt injection failed`, { sessionID, error: String(err) })
+    }
+
+    state.mode = "idle"
+  }
+
+  // ==========================================================================
+  // COUNTDOWN STARTER
+  // ==========================================================================
+
+  function startCountdown(sessionID: string, incompleteCount: number): void {
+    const state = getOrCreateState(sessionID)
+
+    // Cancel any existing countdown
+    invalidate(sessionID, "starting new countdown")
+
+    // Increment version for this new countdown
+    state.version++
+    state.mode = "countingDown"
+    const capturedVersion = state.version
+
+    log(`[${HOOK_NAME}] Starting countdown`, {
+      sessionID,
+      seconds: COUNTDOWN_SECONDS,
+      version: capturedVersion,
+      incompleteCount
+    })
+
+    // Show initial toast
+    showCountdownToast(COUNTDOWN_SECONDS, incompleteCount)
+
+    // Show countdown toasts
+    let secondsRemaining = COUNTDOWN_SECONDS
+    const toastInterval = setInterval(() => {
+      // Check if countdown was cancelled
+      if (state.version !== capturedVersion) {
+        clearInterval(toastInterval)
+        return
+      }
+      secondsRemaining--
+      if (secondsRemaining > 0) {
+        showCountdownToast(secondsRemaining, incompleteCount)
+      }
+    }, 1000)
+
+    // Schedule the injection
+    state.timer = setTimeout(() => {
+      clearInterval(toastInterval)
+      clearTimer(state)
+      executeInjection(sessionID, capturedVersion)
+    }, COUNTDOWN_SECONDS * 1000)
+  }
+
+  // ==========================================================================
+  // EVENT HANDLER
+  // ==========================================================================
+
   const handler = async ({ event }: { event: { type: string; properties?: unknown } }): Promise<void> => {
     const props = event.properties as Record<string, unknown> | undefined

+    // -------------------------------------------------------------------------
+    // SESSION.ERROR - Enter error bypass mode
+    // -------------------------------------------------------------------------
     if (event.type === "session.error") {
       const sessionID = props?.sessionID as string | undefined
-      if (sessionID) {
-        const isInterrupt = detectInterrupt(props?.error)
-        errorSessions.add(sessionID)
-        if (isInterrupt) {
-          interruptedSessions.add(sessionID)
-        }
-        log(`[${HOOK_NAME}] session.error received`, { sessionID, isInterrupt, error: props?.error })
-
-        const countdown = pendingCountdowns.get(sessionID)
-        if (countdown) {
-          clearInterval(countdown.intervalId)
-          pendingCountdowns.delete(sessionID)
-        }
-      }
+      if (!sessionID) return
+
+      const isInterrupt = detectInterrupt(props?.error)
+      const state = getOrCreateState(sessionID)
+
+      invalidate(sessionID, isInterrupt ? "user interrupt" : "session error")
+      state.mode = "errorBypass"
+
+      log(`[${HOOK_NAME}] session.error received`, { sessionID, isInterrupt, error: props?.error })
       return
     }

+    // -------------------------------------------------------------------------
+    // SESSION.IDLE - Main trigger for todo continuation
+    // -------------------------------------------------------------------------
     if (event.type === "session.idle") {
       const sessionID = props?.sessionID as string | undefined
       if (!sessionID) return

       log(`[${HOOK_NAME}] session.idle received`, { sessionID })

-      const mainSessionID = getMainSessionID()
-      if (mainSessionID && sessionID !== mainSessionID) {
-        log(`[${HOOK_NAME}] Skipped: not main session`, { sessionID, mainSessionID })
+      // Skip if not main session
+      if (!isMainSession(sessionID)) {
+        log(`[${HOOK_NAME}] Skipped: not main session`, { sessionID })
         return
       }

-      const existingCountdown = pendingCountdowns.get(sessionID)
-      if (existingCountdown) {
-        clearInterval(existingCountdown.intervalId)
-        pendingCountdowns.delete(sessionID)
-        log(`[${HOOK_NAME}] Cancelled existing countdown`, { sessionID })
-      }
+      const state = getOrCreateState(sessionID)

-      // Check if session is in recovery mode - if so, skip entirely without clearing state
-      if (recoveringSessions.has(sessionID)) {
+      // Skip if in recovery mode
+      if (state.mode === "recovering") {
         log(`[${HOOK_NAME}] Skipped: session in recovery mode`, { sessionID })
         return
       }

-      const shouldBypass = interruptedSessions.has(sessionID) || errorSessions.has(sessionID)
-
-      if (shouldBypass) {
-        interruptedSessions.delete(sessionID)
-        errorSessions.delete(sessionID)
-        log(`[${HOOK_NAME}] Skipped: error/interrupt bypass`, { sessionID })
+      // Skip if in error bypass mode (DO NOT clear - wait for user message)
+      if (state.mode === "errorBypass") {
+        log(`[${HOOK_NAME}] Skipped: error bypass (awaiting user message to resume)`, { sessionID })
         return
       }

-      if (remindedSessions.has(sessionID)) {
-        log(`[${HOOK_NAME}] Skipped: already reminded this session`, { sessionID })
+      // Skip if already counting down or injecting
+      if (state.mode === "countingDown" || state.mode === "injecting") {
+        log(`[${HOOK_NAME}] Skipped: already ${state.mode}`, { sessionID })
         return
       }

-      // Check for incomplete todos BEFORE starting countdown
+      // Fetch todos
       let todos: Todo[] = []
       try {
-        log(`[${HOOK_NAME}] Fetching todos for session`, { sessionID })
-        const response = await ctx.client.session.todo({
-          path: { id: sessionID },
-        })
+        const response = await ctx.client.session.todo({ path: { id: sessionID } })
         todos = (response.data ?? response) as Todo[]
         log(`[${HOOK_NAME}] Todo API response`, { sessionID, todosCount: todos?.length ?? 0 })
       } catch (err) {
         log(`[${HOOK_NAME}] Todo API error`, { sessionID, error: String(err) })
         return
```
```diff
@@ -176,231 +451,107 @@ export function createTodoContinuationEnforcer(
         return
       }

-      const incomplete = todos.filter(
-        (t) => t.status !== "completed" && t.status !== "cancelled"
-      )
-
-      if (incomplete.length === 0) {
+      const incompleteCount = getIncompleteCount(todos)
+      if (incompleteCount === 0) {
         log(`[${HOOK_NAME}] All todos completed`, { sessionID, total: todos.length })
         return
       }

-      log(`[${HOOK_NAME}] Found incomplete todos, starting countdown`, { sessionID, incomplete: incomplete.length, total: todos.length })
+      // Skip if background tasks are running (avoid toast spam with no injection)
+      const hasRunningBgTasks = backgroundManager
+        ? backgroundManager.getTasksByParentSession(sessionID).some((t) => t.status === "running")
+        : false

-      const showCountdownToast = async (seconds: number): Promise<void> => {
-        await ctx.client.tui.showToast({
-          body: {
-            title: "Todo Continuation",
-            message: `Resuming in ${seconds}s... (${incomplete.length} tasks remaining)`,
-            variant: "warning" as const,
-            duration: TOAST_DURATION_MS,
-          },
-        }).catch(() => {})
+      if (hasRunningBgTasks) {
+        log(`[${HOOK_NAME}] Skipped: background tasks still running`, { sessionID })
+        return
       }

-      const executeAfterCountdown = async (): Promise<void> => {
-        pendingCountdowns.delete(sessionID)
-        log(`[${HOOK_NAME}] Countdown finished, executing continuation`, { sessionID })
+      log(`[${HOOK_NAME}] Found incomplete todos`, {
+        sessionID,
+        incomplete: incompleteCount,
+        total: todos.length
+      })

-        // Re-check conditions after countdown
-        if (recoveringSessions.has(sessionID)) {
-          log(`[${HOOK_NAME}] Abort: session entered recovery mode during countdown`, { sessionID })
-          return
-        }
-
-        if (interruptedSessions.has(sessionID) || errorSessions.has(sessionID)) {
-          log(`[${HOOK_NAME}] Abort: error/interrupt occurred during countdown`, { sessionID })
-          interruptedSessions.delete(sessionID)
-          errorSessions.delete(sessionID)
-          return
-        }
-
-        let freshTodos: Todo[] = []
-        try {
-          log(`[${HOOK_NAME}] Re-verifying todos after countdown`, { sessionID })
-          const response = await ctx.client.session.todo({
-            path: { id: sessionID },
-          })
-          freshTodos = (response.data ?? response) as Todo[]
-          log(`[${HOOK_NAME}] Fresh todo count`, { sessionID, todosCount: freshTodos?.length ?? 0 })
-        } catch (err) {
-          log(`[${HOOK_NAME}] Failed to re-verify todos`, { sessionID, error: String(err) })
-          return
-        }
-
-        const freshIncomplete = freshTodos.filter(
-          (t) => t.status !== "completed" && t.status !== "cancelled"
-        )
-
-        if (freshIncomplete.length === 0) {
-          log(`[${HOOK_NAME}] Abort: no incomplete todos after countdown`, { sessionID, total: freshTodos.length })
-          return
-        }
-
-        log(`[${HOOK_NAME}] Confirmed incomplete todos, proceeding with injection`, { sessionID, incomplete: freshIncomplete.length, total: freshTodos.length })
-
-        remindedSessions.add(sessionID)
-
-        try {
-          // Get previous message's agent info to respect agent mode
-          const messageDir = getMessageDir(sessionID)
-          const prevMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
-
-          const agentHasWritePermission = !prevMessage?.tools || (prevMessage.tools.write !== false && prevMessage.tools.edit !== false)
-          if (!agentHasWritePermission) {
-            log(`[${HOOK_NAME}] Skipped: previous agent lacks write permission`, { sessionID, agent: prevMessage?.agent, tools: prevMessage?.tools })
-            remindedSessions.delete(sessionID)
-            return
-          }
-
-          log(`[${HOOK_NAME}] Injecting continuation prompt`, { sessionID, agent: prevMessage?.agent })
-          await ctx.client.session.prompt({
-            path: { id: sessionID },
-            body: {
-              agent: prevMessage?.agent,
-              parts: [
-                {
-                  type: "text",
-                  text: `${CONTINUATION_PROMPT}\n\n[Status: ${freshTodos.length - freshIncomplete.length}/${freshTodos.length} completed, ${freshIncomplete.length} remaining]`,
-                },
-              ],
-            },
-            query: { directory: ctx.directory },
-          })
-          log(`[${HOOK_NAME}] Continuation prompt injected successfully`, { sessionID })
-        } catch (err) {
-          log(`[${HOOK_NAME}] Prompt injection failed`, { sessionID, error: String(err) })
-          remindedSessions.delete(sessionID)
-        }
-      }
-
-      let secondsRemaining = COUNTDOWN_SECONDS
-      showCountdownToast(secondsRemaining).catch(() => {})
-
-      const intervalId = setInterval(() => {
-        secondsRemaining--
-
-        if (secondsRemaining <= 0) {
-          clearInterval(intervalId)
-          pendingCountdowns.delete(sessionID)
-          executeAfterCountdown()
-          return
-        }
-
-        const countdown = pendingCountdowns.get(sessionID)
-        if (!countdown) {
-          clearInterval(intervalId)
-          return
-        }
-
-        countdown.secondsRemaining = secondsRemaining
-        showCountdownToast(secondsRemaining).catch(() => {})
-      }, 1000)
-
-      pendingCountdowns.set(sessionID, { secondsRemaining, intervalId })
+      startCountdown(sessionID, incompleteCount)
       return
     }

+    // -------------------------------------------------------------------------
+    // MESSAGE.UPDATED - Cancel countdown on activity
+    // -------------------------------------------------------------------------
     if (event.type === "message.updated") {
       const info = props?.info as Record<string, unknown> | undefined
       const sessionID = info?.sessionID as string | undefined
       const role = info?.role as string | undefined
-      const finish = info?.finish as string | undefined
-      log(`[${HOOK_NAME}] message.updated received`, { sessionID, role, finish })
-
-      if (sessionID && role === "user") {
-        const countdown = pendingCountdowns.get(sessionID)
-        if (countdown) {
-          clearInterval(countdown.intervalId)
-          pendingCountdowns.delete(sessionID)
-          log(`[${HOOK_NAME}] Cancelled countdown on user message`, { sessionID })

+      if (!sessionID) return
+
+      // User message: Always cancel countdown and clear errorBypass
+      if (role === "user") {
+        const state = sessions.get(sessionID)
```
|
||||
if (state?.mode === "errorBypass") {
|
||||
state.mode = "idle"
|
||||
log(`[${HOOK_NAME}] User message cleared errorBypass mode`, { sessionID })
|
||||
}
|
||||
remindedSessions.delete(sessionID)
|
||||
preemptivelyInjectedSessions.delete(sessionID)
|
||||
invalidate(sessionID, "user message received")
|
||||
return
|
||||
}
|
||||
|
||||
if (sessionID && role === "assistant" && finish) {
|
||||
remindedSessions.delete(sessionID)
|
||||
preemptivelyInjectedSessions.delete(sessionID)
|
||||
log(`[${HOOK_NAME}] Cleared reminded/preemptive state on assistant finish`, { sessionID })
|
||||
|
||||
const isTerminalFinish = finish && !["tool-calls", "unknown"].includes(finish)
|
||||
if (isTerminalFinish && isNonInteractive()) {
|
||||
log(`[${HOOK_NAME}] Terminal finish in non-interactive mode`, { sessionID, finish })
|
||||
|
||||
const mainSessionID = getMainSessionID()
|
||||
if (mainSessionID && sessionID !== mainSessionID) {
|
||||
log(`[${HOOK_NAME}] Skipped preemptive: not main session`, { sessionID, mainSessionID })
|
||||
return
|
||||
}
|
||||
|
||||
if (preemptivelyInjectedSessions.has(sessionID)) {
|
||||
log(`[${HOOK_NAME}] Skipped preemptive: already injected`, { sessionID })
|
||||
return
|
||||
}
|
||||
|
||||
if (recoveringSessions.has(sessionID) || errorSessions.has(sessionID) || interruptedSessions.has(sessionID)) {
|
||||
log(`[${HOOK_NAME}] Skipped preemptive: session in error/recovery state`, { sessionID })
|
||||
return
|
||||
}
|
||||
|
||||
const hasRunningBgTasks = backgroundManager
|
||||
? backgroundManager.getTasksByParentSession(sessionID).some((t) => t.status === "running")
|
||||
: false
|
||||
|
||||
let hasIncompleteTodos = false
|
||||
try {
|
||||
const response = await ctx.client.session.todo({ path: { id: sessionID } })
|
||||
const todos = (response.data ?? response) as Todo[]
|
||||
hasIncompleteTodos = todos?.some((t) => t.status !== "completed" && t.status !== "cancelled") ?? false
|
||||
} catch {
|
||||
log(`[${HOOK_NAME}] Failed to fetch todos for preemptive check`, { sessionID })
|
||||
}
|
||||
|
||||
if (hasRunningBgTasks || hasIncompleteTodos) {
|
||||
log(`[${HOOK_NAME}] Preemptive injection needed`, { sessionID, hasRunningBgTasks, hasIncompleteTodos })
|
||||
preemptivelyInjectedSessions.add(sessionID)
|
||||
|
||||
try {
|
||||
const messageDir = getMessageDir(sessionID)
|
||||
const prevMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
|
||||
|
||||
const prompt = hasRunningBgTasks
|
||||
? "[SYSTEM] Background tasks are still running. Wait for their completion before proceeding."
|
||||
: CONTINUATION_PROMPT
|
||||
|
||||
await ctx.client.session.prompt({
|
||||
path: { id: sessionID },
|
||||
body: {
|
||||
agent: prevMessage?.agent,
|
||||
parts: [{ type: "text", text: prompt }],
|
||||
},
|
||||
query: { directory: ctx.directory },
|
||||
})
|
||||
log(`[${HOOK_NAME}] Preemptive injection successful`, { sessionID })
|
||||
} catch (err) {
|
||||
log(`[${HOOK_NAME}] Preemptive injection failed`, { sessionID, error: String(err) })
|
||||
preemptivelyInjectedSessions.delete(sessionID)
|
||||
}
|
||||
}
|
||||
}
|
||||
// Assistant message WITHOUT finish: Agent is working, cancel countdown
|
||||
if (role === "assistant" && !finish) {
|
||||
invalidate(sessionID, "assistant is working (streaming)")
|
||||
return
|
||||
}
|
||||
|
||||
// Assistant message WITH finish: Agent finished a turn (let session.idle handle it)
|
||||
if (role === "assistant" && finish) {
|
||||
log(`[${HOOK_NAME}] Assistant turn finished`, { sessionID, finish })
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// MESSAGE.PART.UPDATED - Cancel countdown on streaming activity
|
||||
// -------------------------------------------------------------------------
|
||||
if (event.type === "message.part.updated") {
|
||||
const info = props?.info as Record<string, unknown> | undefined
|
||||
const sessionID = info?.sessionID as string | undefined
|
||||
const role = info?.role as string | undefined
|
||||
|
||||
if (sessionID && role === "assistant") {
|
||||
invalidate(sessionID, "assistant streaming")
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// TOOL EVENTS - Cancel countdown when tools are executing
|
||||
// -------------------------------------------------------------------------
|
||||
if (event.type === "tool.execute.before" || event.type === "tool.execute.after") {
|
||||
const sessionID = props?.sessionID as string | undefined
|
||||
if (sessionID) {
|
||||
invalidate(sessionID, `tool execution (${event.type})`)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// SESSION.DELETED - Cleanup
|
||||
// -------------------------------------------------------------------------
|
||||
if (event.type === "session.deleted") {
|
||||
const sessionInfo = props?.info as { id?: string } | undefined
|
||||
if (sessionInfo?.id) {
|
||||
remindedSessions.delete(sessionInfo.id)
|
||||
interruptedSessions.delete(sessionInfo.id)
|
||||
errorSessions.delete(sessionInfo.id)
|
||||
recoveringSessions.delete(sessionInfo.id)
|
||||
preemptivelyInjectedSessions.delete(sessionInfo.id)
|
||||
|
||||
const countdown = pendingCountdowns.get(sessionInfo.id)
|
||||
if (countdown) {
|
||||
clearInterval(countdown.intervalId)
|
||||
pendingCountdowns.delete(sessionInfo.id)
|
||||
const state = sessions.get(sessionInfo.id)
|
||||
if (state) {
|
||||
clearTimer(state)
|
||||
}
|
||||
sessions.delete(sessionInfo.id)
|
||||
log(`[${HOOK_NAME}] Session deleted, state cleaned up`, { sessionID: sessionInfo.id })
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
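The countdown above pairs a `setInterval` tick with a `pendingCountdowns` map so any session activity (a user message, streaming, a tool call) can cancel the pending injection. A minimal, timer-free sketch of the same cancellable-countdown pattern, with ticks driven manually so the behavior is deterministic (all names here are illustrative, not the plugin's API):

```typescript
// Minimal model of the cancellable-countdown pattern: a map of pending
// countdowns, a tick that fires the action at zero, and a cancel path.
type Countdown = { secondsRemaining: number }

const pending = new Map<string, Countdown>()

function start(sessionID: string, seconds: number): void {
  pending.set(sessionID, { secondsRemaining: seconds })
}

// Equivalent of the setInterval callback: returns true when the countdown
// completed and the follow-up action (prompt injection) should run.
function tick(sessionID: string): boolean {
  const countdown = pending.get(sessionID)
  if (!countdown) return false
  countdown.secondsRemaining--
  if (countdown.secondsRemaining <= 0) {
    pending.delete(sessionID)
    return true
  }
  return false
}

// Equivalent of invalidate(): any activity removes the pending countdown.
function cancel(sessionID: string): void {
  pending.delete(sessionID)
}

start("ses_1", 3)
const fired: boolean[] = [tick("ses_1"), tick("ses_1"), tick("ses_1")]
// fired → [false, false, true]: the action runs only when the count hits zero.

start("ses_2", 3)
tick("ses_2")
cancel("ses_2")
const firedAfterCancel = tick("ses_2") // → false: cancellation wins
```

The real hook additionally re-verifies the todo list after the countdown (the `freshTodos` block above) so a race between countdown expiry and todo completion cannot inject a stale prompt.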
src/index.ts
@@ -32,6 +32,7 @@ import {
  loadOpencodeGlobalCommands,
  loadOpencodeProjectCommands,
} from "./features/claude-code-command-loader";
import { loadBuiltinCommands } from "./features/builtin-commands";

import {
  loadUserAgents,
@@ -169,6 +170,12 @@ function mergeConfigs(
        ...(override.disabled_hooks ?? []),
      ]),
    ],
    disabled_commands: [
      ...new Set([
        ...(base.disabled_commands ?? []),
        ...(override.disabled_commands ?? []),
      ]),
    ],
    claude_code: deepMerge(base.claude_code, override.claude_code),
  };
}
@@ -233,7 +240,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
    : null;

  const commentChecker = isHookEnabled("comment-checker")
    ? createCommentCheckerHooks()
    ? createCommentCheckerHooks(pluginConfig.comment_checker)
    : null;
  const toolOutputTruncator = isHookEnabled("tool-output-truncator")
    ? createToolOutputTruncatorHook(ctx, { experimental: pluginConfig.experimental })
@@ -510,12 +517,14 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
    ...pluginComponents.mcpServers,
  };

  const builtinCommands = loadBuiltinCommands(pluginConfig.disabled_commands);
  const userCommands = (pluginConfig.claude_code?.commands ?? true) ? loadUserCommands() : {};
  const opencodeGlobalCommands = loadOpencodeGlobalCommands();
  const systemCommands = config.command ?? {};
  const projectCommands = (pluginConfig.claude_code?.commands ?? true) ? loadProjectCommands() : {};
  const opencodeProjectCommands = loadOpencodeProjectCommands();
  config.command = {
    ...builtinCommands,
    ...userCommands,
    ...opencodeGlobalCommands,
    ...systemCommands,
@@ -632,6 +641,7 @@ export type {
  AgentOverrides,
  McpName,
  HookName,
  BuiltinCommandName,
} from "./config";

// NOTE: Do NOT export functions from main index.ts!
@@ -2,27 +2,20 @@ import * as path from "node:path"
import * as os from "node:os"

/**
 * Returns the user-level data directory based on the OS.
 * - Linux/macOS: XDG_DATA_HOME or ~/.local/share
 * - Windows: %LOCALAPPDATA%
 * Returns the user-level data directory.
 * Matches OpenCode's behavior via xdg-basedir:
 * - All platforms: XDG_DATA_HOME or ~/.local/share
 *
 * This follows XDG Base Directory specification on Unix systems
 * and Windows conventions on Windows.
 * Note: OpenCode uses xdg-basedir which returns ~/.local/share on ALL platforms
 * including Windows, so we match that behavior exactly.
 */
export function getDataDir(): string {
  if (process.platform === "win32") {
    // Windows: Use %LOCALAPPDATA% (e.g., C:\Users\Username\AppData\Local)
    return process.env.LOCALAPPDATA ?? path.join(os.homedir(), "AppData", "Local")
  }

  // Unix: Use XDG_DATA_HOME or fallback to ~/.local/share
  return process.env.XDG_DATA_HOME ?? path.join(os.homedir(), ".local", "share")
}

/**
 * Returns the OpenCode storage directory path.
 * - Linux/macOS: ~/.local/share/opencode/storage
 * - Windows: %LOCALAPPDATA%\opencode\storage
 * All platforms: ~/.local/share/opencode/storage
 */
export function getOpenCodeStorageDir(): string {
  return path.join(getDataDir(), "opencode", "storage")
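The hunk above drops the Windows-specific `%LOCALAPPDATA%` branch so the plugin resolves the same directory OpenCode's xdg-basedir dependency does. A small self-contained sketch of the resulting resolution order (the function body is inlined here for illustration, taking the environment as a parameter so it can be exercised directly):

```typescript
import * as path from "node:path"
import * as os from "node:os"

// Inlined copy of the post-change resolution order: XDG_DATA_HOME wins,
// otherwise ~/.local/share - on every platform, matching xdg-basedir.
function resolveDataDir(env: Record<string, string | undefined>): string {
  return env.XDG_DATA_HOME ?? path.join(os.homedir(), ".local", "share")
}

const explicit = resolveDataDir({ XDG_DATA_HOME: "/tmp/xdg-data" })
// explicit → "/tmp/xdg-data"

const fallback = resolveDataDir({})
// fallback ends with ".local/share" under the home directory, on any OS
```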
@@ -100,8 +100,6 @@ export function setSgCliPath(path: string): void {
  resolvedCliPath = path
}

export const SG_CLI_PATH = getSgCliPath()

// CLI supported languages (25 total)
export const CLI_LANGUAGES = [
  "bash",
@@ -184,21 +182,20 @@ export interface EnvironmentCheckResult {
 * Call this at startup to provide early feedback about missing dependencies.
 */
export function checkEnvironment(): EnvironmentCheckResult {
  const cliPath = getSgCliPath()
  const result: EnvironmentCheckResult = {
    cli: {
      available: false,
      path: SG_CLI_PATH,
      path: cliPath,
    },
    napi: {
      available: false,
    },
  }

  // Check CLI availability
  if (existsSync(SG_CLI_PATH)) {
  if (existsSync(cliPath)) {
    result.cli.available = true
  } else if (SG_CLI_PATH === "sg") {
    // Fallback path - try which/where to find in PATH
  } else if (cliPath === "sg") {
    try {
      const { spawnSync } = require("child_process")
      const whichResult = spawnSync(process.platform === "win32" ? "where" : "which", ["sg"], {
@@ -213,7 +210,7 @@ export function checkEnvironment(): EnvironmentCheckResult {
      result.cli.error = "Failed to check sg availability"
    }
  } else {
    result.cli.error = `Binary not found: ${SG_CLI_PATH}`
    result.cli.error = `Binary not found: ${cliPath}`
  }

  // Check NAPI availability
@@ -37,6 +37,37 @@ function formatDuration(start: Date, end?: Date): string {
  }
}

type ToolContextWithMetadata = {
  sessionID: string
  messageID: string
  agent: string
  abort: AbortSignal
  metadata?: (input: { title?: string; metadata?: Record<string, unknown> }) => void
}

export function createBackgroundTask(manager: BackgroundManager): ToolDefinition {
  return tool({
    description: BACKGROUND_TASK_DESCRIPTION,
@@ -46,12 +54,14 @@ export function createBackgroundTask(manager: BackgroundManager): ToolDefinition
      agent: tool.schema.string().describe("Agent type to use (any registered agent)"),
    },
    async execute(args: BackgroundTaskArgs, toolContext) {
      const ctx = toolContext as ToolContextWithMetadata

      if (!args.agent || args.agent.trim() === "") {
        return `❌ Agent parameter is required. Please specify which agent to use (e.g., "explore", "librarian", "build", etc.)`
      }

      try {
        const messageDir = getMessageDir(toolContext.sessionID)
        const messageDir = getMessageDir(ctx.sessionID)
        const prevMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
        const parentModel = prevMessage?.model?.providerID && prevMessage?.model?.modelID
          ? { providerID: prevMessage.model.providerID, modelID: prevMessage.model.modelID }
@@ -61,11 +71,16 @@ export function createBackgroundTask(manager: BackgroundManager): ToolDefinition
          description: args.description,
          prompt: args.prompt,
          agent: args.agent.trim(),
          parentSessionID: toolContext.sessionID,
          parentMessageID: toolContext.messageID,
          parentSessionID: ctx.sessionID,
          parentMessageID: ctx.messageID,
          parentModel,
        })

        ctx.metadata?.({
          title: args.description,
          metadata: { sessionId: task.sessionID },
        })

        return `Background task launched successfully.

Task ID: ${task.id}
@@ -4,6 +4,14 @@ import type { CallOmoAgentArgs } from "./types"
import type { BackgroundManager } from "../../features/background-agent"
import { log } from "../../shared/logger"

type ToolContextWithMetadata = {
  sessionID: string
  messageID: string
  agent: string
  abort: AbortSignal
  metadata?: (input: { title?: string; metadata?: Record<string, unknown> }) => void
}

export function createCallOmoAgent(
  ctx: PluginInput,
  backgroundManager: BackgroundManager
@@ -27,6 +35,7 @@ export function createCallOmoAgent(
      session_id: tool.schema.string().describe("Existing Task session to continue").optional(),
    },
    async execute(args: CallOmoAgentArgs, toolContext) {
      const toolCtx = toolContext as ToolContextWithMetadata
      log(`[call_omo_agent] Starting with agent: ${args.subagent_type}, background: ${args.run_in_background}`)

      if (!ALLOWED_AGENTS.includes(args.subagent_type as typeof ALLOWED_AGENTS[number])) {
@@ -37,17 +46,17 @@ export function createCallOmoAgent(
        if (args.session_id) {
          return `Error: session_id is not supported in background mode. Use run_in_background=false to continue an existing session.`
        }
        return await executeBackground(args, toolContext, backgroundManager)
        return await executeBackground(args, toolCtx, backgroundManager)
      }

      return await executeSync(args, toolContext, ctx)
      return await executeSync(args, toolCtx, ctx)
    },
  })
}

async function executeBackground(
  args: CallOmoAgentArgs,
  toolContext: { sessionID: string; messageID: string },
  toolContext: ToolContextWithMetadata,
  manager: BackgroundManager
): Promise<string> {
  try {
@@ -59,6 +68,11 @@ async function executeBackground(
      parentMessageID: toolContext.messageID,
    })

    toolContext.metadata?.({
      title: args.description,
      metadata: { sessionId: task.sessionID },
    })

    return `Background agent task launched successfully.

Task ID: ${task.id}
@@ -79,7 +93,7 @@ Use \`background_output\` tool with task_id="${task.id}" to check progress:

async function executeSync(
  args: CallOmoAgentArgs,
  toolContext: { sessionID: string },
  toolContext: ToolContextWithMetadata,
  ctx: PluginInput
): Promise<string> {
  let sessionID: string
@@ -112,6 +126,11 @@ async function executeSync(
    log(`[call_omo_agent] Created session: ${sessionID}`)
  }

  toolContext.metadata?.({
    title: args.description,
    metadata: { sessionId: sessionID },
  })

  log(`[call_omo_agent] Sending prompt to session ${sessionID}`)
  log(`[call_omo_agent] Prompt text:`, args.prompt.substring(0, 100))
@@ -1,9 +1,8 @@
import { spawn, type Subprocess } from "bun"
import { readFileSync } from "fs"
import { extname, resolve } from "path"
import type { ResolvedServer } from "./config"
import { getLanguageId } from "./config"
import type { Diagnostic } from "./types"
import type { Diagnostic, ResolvedServer } from "./types"

interface ManagedClient {
  client: LSPClient
@@ -1,16 +1,8 @@
import { existsSync, readFileSync } from "fs"
import { join } from "path"
import { homedir } from "os"
import { BUILTIN_SERVERS, EXT_TO_LANG } from "./constants"

export interface ResolvedServer {
  id: string
  command: string[]
  extensions: string[]
  priority: number
  env?: Record<string, string>
  initialization?: Record<string, unknown>
}
import { BUILTIN_SERVERS, EXT_TO_LANG, LSP_INSTALL_HINTS } from "./constants"
import type { ResolvedServer, ServerLookupResult } from "./types"

interface LspEntry {
  disabled?: boolean
@@ -120,23 +112,47 @@ function getMergedServers(): ServerWithSource[] {
  })
}

export function findServerForExtension(ext: string): ResolvedServer | null {
export function findServerForExtension(ext: string): ServerLookupResult {
  const servers = getMergedServers()

  for (const server of servers) {
    if (server.extensions.includes(ext) && isServerInstalled(server.command)) {
      return {
        id: server.id,
        command: server.command,
        extensions: server.extensions,
        priority: server.priority,
        env: server.env,
        initialization: server.initialization,
        status: "found",
        server: {
          id: server.id,
          command: server.command,
          extensions: server.extensions,
          priority: server.priority,
          env: server.env,
          initialization: server.initialization,
        },
      }
    }
  }

  return null
  for (const server of servers) {
    if (server.extensions.includes(ext)) {
      const installHint =
        LSP_INSTALL_HINTS[server.id] || `Install '${server.command[0]}' and ensure it's in your PATH`
      return {
        status: "not_installed",
        server: {
          id: server.id,
          command: server.command,
          extensions: server.extensions,
        },
        installHint,
      }
    }
  }

  const availableServers = [...new Set(servers.map((s) => s.id))]
  return {
    status: "not_configured",
    extension: ext,
    availableServers,
  }
}

export function getLanguageId(ext: string): string {
@@ -40,6 +40,37 @@ export const DEFAULT_MAX_REFERENCES = 200
export const DEFAULT_MAX_SYMBOLS = 200
export const DEFAULT_MAX_DIAGNOSTICS = 200

export const LSP_INSTALL_HINTS: Record<string, string> = {
  typescript: "npm install -g typescript-language-server typescript",
  deno: "Install Deno from https://deno.land",
  vue: "npm install -g @vue/language-server",
  eslint: "npm install -g vscode-langservers-extracted",
  oxlint: "npm install -g oxlint",
  biome: "npm install -g @biomejs/biome",
  gopls: "go install golang.org/x/tools/gopls@latest",
  "ruby-lsp": "gem install ruby-lsp",
  basedpyright: "pip install basedpyright",
  pyright: "pip install pyright",
  ty: "pip install ty",
  ruff: "pip install ruff",
  "elixir-ls": "See https://github.com/elixir-lsp/elixir-ls",
  zls: "See https://github.com/zigtools/zls",
  csharp: "dotnet tool install -g csharp-ls",
  fsharp: "dotnet tool install -g fsautocomplete",
  "sourcekit-lsp": "Included with Xcode or Swift toolchain",
  rust: "rustup component add rust-analyzer",
  clangd: "See https://clangd.llvm.org/installation",
  svelte: "npm install -g svelte-language-server",
  astro: "npm install -g @astrojs/language-server",
  "bash-ls": "npm install -g bash-language-server",
  jdtls: "See https://github.com/eclipse-jdtls/eclipse.jdt.ls",
  "yaml-ls": "npm install -g yaml-language-server",
  "lua-ls": "See https://github.com/LuaLS/lua-language-server",
  php: "npm install -g intelephense",
  dart: "Included with Dart SDK",
  "terraform-ls": "See https://github.com/hashicorp/terraform-ls",
}

// Synced with OpenCode's server.ts
// https://github.com/sst/opencode/blob/main/packages/opencode/src/lsp/server.ts
export const BUILTIN_SERVERS: Record<string, Omit<LSPServerConfig, "id">> = {
@@ -135,3 +135,23 @@ export interface CodeAction {
  command?: Command
  data?: unknown
}

export interface ServerLookupInfo {
  id: string
  command: string[]
  extensions: string[]
}

export type ServerLookupResult =
  | { status: "found"; server: ResolvedServer }
  | { status: "not_configured"; extension: string; availableServers: string[] }
  | { status: "not_installed"; server: ServerLookupInfo; installHint: string }

export interface ResolvedServer {
  id: string
  command: string[]
  extensions: string[]
  priority: number
  env?: Record<string, string>
  initialization?: Record<string, unknown>
}
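The `ServerLookupResult` union added above is discriminated on `status`, so callers can branch exhaustively without null checks. A sketch of how a caller might consume it (the union shape mirrors the hunk, with the `found` variant's server type trimmed to keep the example self-contained; `describe` is a hypothetical helper, not part of the plugin):

```typescript
interface ServerLookupInfo {
  id: string
  command: string[]
  extensions: string[]
}

// Mirrors the union from the hunk; the "found" server is trimmed to one field.
type ServerLookupResult =
  | { status: "found"; server: { id: string } }
  | { status: "not_configured"; extension: string; availableServers: string[] }
  | { status: "not_installed"; server: ServerLookupInfo; installHint: string }

// The `status` discriminant narrows each branch, and the `never` default
// makes the switch exhaustive at compile time.
function describe(result: ServerLookupResult): string {
  switch (result.status) {
    case "found":
      return `using ${result.server.id}`
    case "not_configured":
      return `no server for ${result.extension}`
    case "not_installed":
      return `install ${result.server.id}: ${result.installHint}`
    default: {
      const exhaustive: never = result
      return exhaustive
    }
  }
}

const msg = describe({
  status: "not_installed",
  server: { id: "gopls", command: ["gopls"], extensions: [".go"] },
  installHint: "go install golang.org/x/tools/gopls@latest",
})
// msg → "install gopls: go install golang.org/x/tools/gopls@latest"
```

Adding a fourth variant to the union would now be a compile error at the `never` assignment until every caller handles it, which is the main payoff of this refactor over the old `ResolvedServer | null` return type.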
@@ -17,6 +17,7 @@ import type {
  TextEdit,
  CodeAction,
  Command,
  ServerLookupResult,
} from "./types"

export function findWorkspaceRoot(filePath: string): string {
@@ -40,15 +41,51 @@ export function findWorkspaceRoot(filePath: string): string {
  return require("path").dirname(resolve(filePath))
}

export function formatServerLookupError(result: Exclude<ServerLookupResult, { status: "found" }>): string {
  if (result.status === "not_installed") {
    const { server, installHint } = result
    return [
      `LSP server '${server.id}' is configured but NOT INSTALLED.`,
      ``,
      `Command not found: ${server.command[0]}`,
      ``,
      `To install:`,
      `  ${installHint}`,
      ``,
      `Supported extensions: ${server.extensions.join(", ")}`,
      ``,
      `After installation, the server will be available automatically.`,
      `Run 'lsp_servers' tool to verify installation status.`,
    ].join("\n")
  }

  return [
    `No LSP server configured for extension: ${result.extension}`,
    ``,
    `Available servers: ${result.availableServers.slice(0, 10).join(", ")}${result.availableServers.length > 10 ? "..." : ""}`,
    ``,
    `To add a custom server, configure 'lsp' in oh-my-opencode.json:`,
    `  {`,
    `    "lsp": {`,
    `      "my-server": {`,
    `        "command": ["my-lsp", "--stdio"],`,
    `        "extensions": ["${result.extension}"]`,
    `      }`,
    `    }`,
    `  }`,
  ].join("\n")
}

export async function withLspClient<T>(filePath: string, fn: (client: LSPClient) => Promise<T>): Promise<T> {
  const absPath = resolve(filePath)
  const ext = extname(absPath)
  const server = findServerForExtension(ext)
  const result = findServerForExtension(ext)

  if (!server) {
    throw new Error(`No LSP server configured for extension: ${ext}`)
  if (result.status !== "found") {
    throw new Error(formatServerLookupError(result))
  }

  const server = result.server
  const root = findWorkspaceRoot(absPath)
  const client = await lspManager.getClient(root, server)
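The `withLspClient` change above swaps a bare null check for the structured lookup result, so the thrown error can carry the install hint instead of just naming the extension. A trimmed before/after sketch of that error path (types and messages abbreviated; this is not the real module):

```typescript
// Abbreviated stand-in for the lookup result used in the hunk above.
type LookupResult =
  | { status: "found"; serverId: string }
  | { status: "not_installed"; serverId: string; installHint: string }

// Before: callers only learned that no server was found for the extension.
function oldError(ext: string): Error {
  return new Error(`No LSP server configured for extension: ${ext}`)
}

// After: the structured result lets the error include the remedy.
function newError(result: Exclude<LookupResult, { status: "found" }>): Error {
  return new Error(
    `LSP server '${result.serverId}' is configured but NOT INSTALLED.\nTo install:\n  ${result.installHint}`
  )
}

const err = newError({
  status: "not_installed",
  serverId: "gopls",
  installHint: "go install golang.org/x/tools/gopls@latest",
})
// err.message now tells the agent exactly how to fix the environment.
```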
@@ -23,7 +23,8 @@ mock.module("./constants", () => ({
  TOOL_NAME_PREFIX: "session_",
}))

const { getAllSessions, getMessageDir, sessionExists, readSessionMessages, readSessionTodos, getSessionInfo } = await import("./storage")
const { getAllSessions, getMessageDir, sessionExists, readSessionMessages, readSessionTodos, getSessionInfo } =
  await import("./storage")

describe("session-manager storage", () => {
  beforeEach(() => {
@@ -43,48 +44,61 @@ describe("session-manager storage", () => {
    }
  })

  test("getAllSessions returns empty array when no sessions exist", () => {
    const sessions = getAllSessions()

  test("getAllSessions returns empty array when no sessions exist", async () => {
    // #when
    const sessions = await getAllSessions()

    // #then
    expect(Array.isArray(sessions)).toBe(true)
    expect(sessions).toEqual([])
  })

  test("getMessageDir finds session in direct path", () => {
    // #given
    const sessionID = "ses_test123"
    const sessionPath = join(TEST_MESSAGE_STORAGE, sessionID)
    mkdirSync(sessionPath, { recursive: true })
    writeFileSync(join(sessionPath, "msg_001.json"), JSON.stringify({ id: "msg_001", role: "user" }))

    // #when
    const result = getMessageDir(sessionID)

    // #then
    expect(result).toBe(sessionPath)
  })

  test("sessionExists returns false for non-existent session", () => {
    // #when
    const exists = sessionExists("ses_nonexistent")

    // #then
    expect(exists).toBe(false)
  })

  test("sessionExists returns true for existing session", () => {
    // #given
    const sessionID = "ses_exists"
    const sessionPath = join(TEST_MESSAGE_STORAGE, sessionID)
    mkdirSync(sessionPath, { recursive: true })
    writeFileSync(join(sessionPath, "msg_001.json"), JSON.stringify({ id: "msg_001" }))

    // #when
    const exists = sessionExists(sessionID)

    // #then
    expect(exists).toBe(true)
  })

  test("readSessionMessages returns empty array for non-existent session", () => {
    const messages = readSessionMessages("ses_nonexistent")

  test("readSessionMessages returns empty array for non-existent session", async () => {
    // #when
    const messages = await readSessionMessages("ses_nonexistent")

    // #then
    expect(messages).toEqual([])
  })

  test("readSessionMessages sorts messages by timestamp", () => {
  test("readSessionMessages sorts messages by timestamp", async () => {
    // #given
    const sessionID = "ses_test123"
    const sessionPath = join(TEST_MESSAGE_STORAGE, sessionID)
    mkdirSync(sessionPath, { recursive: true })
@@ -98,26 +112,33 @@ describe("session-manager storage", () => {
      JSON.stringify({ id: "msg_001", role: "user", time: { created: 1000 } })
    )

    const messages = readSessionMessages(sessionID)

    // #when
    const messages = await readSessionMessages(sessionID)

    // #then
    expect(messages.length).toBe(2)
    expect(messages[0].id).toBe("msg_001")
    expect(messages[1].id).toBe("msg_002")
  })

  test("readSessionTodos returns empty array when no todos exist", () => {
    const todos = readSessionTodos("ses_nonexistent")

  test("readSessionTodos returns empty array when no todos exist", async () => {
    // #when
    const todos = await readSessionTodos("ses_nonexistent")

    // #then
    expect(todos).toEqual([])
  })

  test("getSessionInfo returns null for non-existent session", () => {
    const info = getSessionInfo("ses_nonexistent")

  test("getSessionInfo returns null for non-existent session", async () => {
    // #when
    const info = await getSessionInfo("ses_nonexistent")

    // #then
    expect(info).toBeNull()
  })

  test("getSessionInfo aggregates session metadata correctly", () => {
  test("getSessionInfo aggregates session metadata correctly", async () => {
    // #given
    const sessionID = "ses_test123"
    const sessionPath = join(TEST_MESSAGE_STORAGE, sessionID)
    mkdirSync(sessionPath, { recursive: true })
@@ -142,8 +163,10 @@ describe("session-manager storage", () => {
      })
    )

    const info = getSessionInfo(sessionID)

    // #when
    const info = await getSessionInfo(sessionID)

    // #then
    expect(info).not.toBeNull()
    expect(info?.id).toBe(sessionID)
    expect(info?.message_count).toBe(2)
@@ -1,23 +1,25 @@
-import { existsSync, readdirSync, readFileSync } from "node:fs"
+import { existsSync, readdirSync } from "node:fs"
+import { readdir, readFile } from "node:fs/promises"
 import { join } from "node:path"
 import { MESSAGE_STORAGE, PART_STORAGE, TODO_DIR, TRANSCRIPT_DIR } from "./constants"
 import type { SessionMessage, SessionInfo, TodoItem } from "./types"

-export function getAllSessions(): string[] {
+export async function getAllSessions(): Promise<string[]> {
   if (!existsSync(MESSAGE_STORAGE)) return []

   const sessions: string[] = []

-  function scanDirectory(dir: string): void {
+  async function scanDirectory(dir: string): Promise<void> {
     try {
-      for (const entry of readdirSync(dir, { withFileTypes: true })) {
+      const entries = await readdir(dir, { withFileTypes: true })
+      for (const entry of entries) {
         if (entry.isDirectory()) {
           const sessionPath = join(dir, entry.name)
-          const files = readdirSync(sessionPath)
+          const files = await readdir(sessionPath)
           if (files.some((f) => f.endsWith(".json"))) {
             sessions.push(entry.name)
           } else {
-            scanDirectory(sessionPath)
+            await scanDirectory(sessionPath)
           }
         }
       }
@@ -26,7 +28,7 @@ export function getAllSessions(): string[] {
     }
   }

-  scanDirectory(MESSAGE_STORAGE)
+  await scanDirectory(MESSAGE_STORAGE)
   return [...new Set(sessions)]
 }

@@ -38,11 +40,15 @@ export function getMessageDir(sessionID: string): string {
     return directPath
   }

-  for (const dir of readdirSync(MESSAGE_STORAGE)) {
-    const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
-    if (existsSync(sessionPath)) {
-      return sessionPath
+  try {
+    for (const dir of readdirSync(MESSAGE_STORAGE)) {
+      const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
+      if (existsSync(sessionPath)) {
+        return sessionPath
+      }
     }
+  } catch {
+    return ""
   }

   return ""
@@ -52,29 +58,34 @@ export function sessionExists(sessionID: string): boolean {
   return getMessageDir(sessionID) !== ""
 }

-export function readSessionMessages(sessionID: string): SessionMessage[] {
+export async function readSessionMessages(sessionID: string): Promise<SessionMessage[]> {
   const messageDir = getMessageDir(sessionID)
   if (!messageDir || !existsSync(messageDir)) return []

   const messages: SessionMessage[] = []
-  for (const file of readdirSync(messageDir)) {
-    if (!file.endsWith(".json")) continue
-    try {
-      const content = readFileSync(join(messageDir, file), "utf-8")
-      const meta = JSON.parse(content)
-
-      const parts = readParts(meta.id)
-
-      messages.push({
-        id: meta.id,
-        role: meta.role,
-        agent: meta.agent,
-        time: meta.time,
-        parts,
-      })
-    } catch {
-      continue
+  try {
+    const files = await readdir(messageDir)
+    for (const file of files) {
+      if (!file.endsWith(".json")) continue
+      try {
+        const content = await readFile(join(messageDir, file), "utf-8")
+        const meta = JSON.parse(content)
+
+        const parts = await readParts(meta.id)
+
+        messages.push({
+          id: meta.id,
+          role: meta.role,
+          agent: meta.agent,
+          time: meta.time,
+          parts,
+        })
+      } catch {
+        continue
+      }
     }
+  } catch {
+    return []
   }

   return messages.sort((a, b) => {
@@ -85,65 +96,75 @@ export function readSessionMessages(sessionID: string): SessionMessage[] {
   })
 }

-function readParts(messageID: string): Array<{ id: string; type: string; [key: string]: unknown }> {
+async function readParts(messageID: string): Promise<Array<{ id: string; type: string; [key: string]: unknown }>> {
   const partDir = join(PART_STORAGE, messageID)
   if (!existsSync(partDir)) return []

   const parts: Array<{ id: string; type: string; [key: string]: unknown }> = []
-  for (const file of readdirSync(partDir)) {
-    if (!file.endsWith(".json")) continue
-    try {
-      const content = readFileSync(join(partDir, file), "utf-8")
-      parts.push(JSON.parse(content))
-    } catch {
-      continue
+  try {
+    const files = await readdir(partDir)
+    for (const file of files) {
+      if (!file.endsWith(".json")) continue
+      try {
+        const content = await readFile(join(partDir, file), "utf-8")
+        parts.push(JSON.parse(content))
+      } catch {
+        continue
+      }
     }
+  } catch {
+    return []
   }

   return parts.sort((a, b) => a.id.localeCompare(b.id))
 }

-export function readSessionTodos(sessionID: string): TodoItem[] {
+export async function readSessionTodos(sessionID: string): Promise<TodoItem[]> {
   if (!existsSync(TODO_DIR)) return []

-  const todoFiles = readdirSync(TODO_DIR).filter((f) => f.includes(sessionID) && f.endsWith(".json"))
-
-  for (const file of todoFiles) {
-    try {
-      const content = readFileSync(join(TODO_DIR, file), "utf-8")
-      const data = JSON.parse(content)
-      if (Array.isArray(data)) {
-        return data.map((item) => ({
-          id: item.id || "",
-          content: item.content || "",
-          status: item.status || "pending",
-          priority: item.priority,
-        }))
-      }
-    } catch {
-      continue
-    }
+  try {
+    const allFiles = await readdir(TODO_DIR)
+    const todoFiles = allFiles.filter((f) => f.includes(sessionID) && f.endsWith(".json"))
+
+    for (const file of todoFiles) {
+      try {
+        const content = await readFile(join(TODO_DIR, file), "utf-8")
+        const data = JSON.parse(content)
+        if (Array.isArray(data)) {
+          return data.map((item) => ({
+            id: item.id || "",
+            content: item.content || "",
+            status: item.status || "pending",
+            priority: item.priority,
+          }))
+        }
+      } catch {
+        continue
+      }
+    }
+  } catch {
+    return []
   }

   return []
 }

-export function readSessionTranscript(sessionID: string): number {
+export async function readSessionTranscript(sessionID: string): Promise<number> {
   if (!existsSync(TRANSCRIPT_DIR)) return 0

   const transcriptFile = join(TRANSCRIPT_DIR, `${sessionID}.jsonl`)
   if (!existsSync(transcriptFile)) return 0

   try {
-    const content = readFileSync(transcriptFile, "utf-8")
+    const content = await readFile(transcriptFile, "utf-8")
     return content.trim().split("\n").filter(Boolean).length
   } catch {
     return 0
   }
 }

-export function getSessionInfo(sessionID: string): SessionInfo | null {
-  const messages = readSessionMessages(sessionID)
+export async function getSessionInfo(sessionID: string): Promise<SessionInfo | null> {
+  const messages = await readSessionMessages(sessionID)
   if (messages.length === 0) return null

   const agentsUsed = new Set<string>()
@@ -159,8 +180,8 @@ export function getSessionInfo(sessionID: string): SessionInfo | null {
     }
   }

-  const todos = readSessionTodos(sessionID)
-  const transcriptEntries = readSessionTranscript(sessionID)
+  const todos = await readSessionTodos(sessionID)
+  const transcriptEntries = await readSessionTranscript(sessionID)

   return {
     id: sessionID,

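The storage layer above converts every synchronous `fs` call to its `node:fs/promises` counterpart and propagates `async`/`await` up through the callers. The recursive-scan pattern from `getAllSessions` can be exercised standalone; a minimal sketch, where the fixture directory layout (`scan-demo-*/project/ses_abc/msg.json`) is illustrative and not taken from the repo:

```typescript
import { mkdirSync, writeFileSync, mkdtempSync } from "node:fs"
import { readdir } from "node:fs/promises"
import { join } from "node:path"
import { tmpdir } from "node:os"

// Same shape as getAllSessions above: recurse until a directory that
// contains .json files is found, and collect that directory's name.
async function findJsonDirs(root: string): Promise<string[]> {
  const found: string[] = []
  async function scan(dir: string): Promise<void> {
    const entries = await readdir(dir, { withFileTypes: true })
    for (const entry of entries) {
      if (!entry.isDirectory()) continue
      const path = join(dir, entry.name)
      const files = await readdir(path)
      if (files.some((f) => f.endsWith(".json"))) {
        found.push(entry.name)
      } else {
        await scan(path)
      }
    }
  }
  await scan(root)
  return [...new Set(found)]
}

// Illustrative fixture: root/project/ses_abc/msg.json
const root = mkdtempSync(join(tmpdir(), "scan-demo-"))
mkdirSync(join(root, "project", "ses_abc"), { recursive: true })
writeFileSync(join(root, "project", "ses_abc", "msg.json"), "{}")

findJsonDirs(root).then((dirs) => console.log(dirs.join(","))) // → ses_abc
```

Unlike the refactored code, this sketch lets `readdir` errors propagate; the diff wraps each scan in `try`/`catch` so an unreadable directory is skipped rather than failing the whole listing.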
@@ -6,8 +6,25 @@ import {
   SESSION_INFO_DESCRIPTION,
 } from "./constants"
 import { getAllSessions, getSessionInfo, readSessionMessages, readSessionTodos, sessionExists } from "./storage"
-import { filterSessionsByDate, formatSessionInfo, formatSessionList, formatSessionMessages, formatSearchResults, searchInSession } from "./utils"
-import type { SessionListArgs, SessionReadArgs, SessionSearchArgs, SessionInfoArgs } from "./types"
+import {
+  filterSessionsByDate,
+  formatSessionInfo,
+  formatSessionList,
+  formatSessionMessages,
+  formatSearchResults,
+  searchInSession,
+} from "./utils"
+import type { SessionListArgs, SessionReadArgs, SessionSearchArgs, SessionInfoArgs, SearchResult } from "./types"
+
+const SEARCH_TIMEOUT_MS = 60_000
+const MAX_SESSIONS_TO_SCAN = 50
+
+function withTimeout<T>(promise: Promise<T>, ms: number, operation: string): Promise<T> {
+  return Promise.race([
+    promise,
+    new Promise<T>((_, reject) => setTimeout(() => reject(new Error(`${operation} timed out after ${ms}ms`)), ms)),
+  ])
+}

 export const session_list: ToolDefinition = tool({
   description: SESSION_LIST_DESCRIPTION,
@@ -18,17 +35,17 @@ export const session_list: ToolDefinition = tool({
   },
   execute: async (args: SessionListArgs, _context) => {
     try {
-      let sessions = getAllSessions()
+      let sessions = await getAllSessions()

       if (args.from_date || args.to_date) {
-        sessions = filterSessionsByDate(sessions, args.from_date, args.to_date)
+        sessions = await filterSessionsByDate(sessions, args.from_date, args.to_date)
       }

       if (args.limit && args.limit > 0) {
         sessions = sessions.slice(0, args.limit)
       }

-      return formatSessionList(sessions)
+      return await formatSessionList(sessions)
     } catch (e) {
       return `Error: ${e instanceof Error ? e.message : String(e)}`
     }
@@ -49,13 +66,13 @@ export const session_read: ToolDefinition = tool({
         return `Session not found: ${args.session_id}`
       }

-      let messages = readSessionMessages(args.session_id)
+      let messages = await readSessionMessages(args.session_id)

       if (args.limit && args.limit > 0) {
         messages = messages.slice(0, args.limit)
       }

-      const todos = args.include_todos ? readSessionTodos(args.session_id) : undefined
+      const todos = args.include_todos ? await readSessionTodos(args.session_id) : undefined

       return formatSessionMessages(messages, args.include_todos, todos)
     } catch (e) {
@@ -74,13 +91,31 @@ export const session_search: ToolDefinition = tool({
   },
   execute: async (args: SessionSearchArgs, _context) => {
     try {
-      const sessions = args.session_id ? [args.session_id] : getAllSessions()
-
-      const allResults = sessions.flatMap((sid) => searchInSession(sid, args.query, args.case_sensitive))
-
-      const limited = args.limit && args.limit > 0 ? allResults.slice(0, args.limit) : allResults.slice(0, 20)
-
-      return formatSearchResults(limited)
+      const resultLimit = args.limit && args.limit > 0 ? args.limit : 20
+
+      const searchOperation = async (): Promise<SearchResult[]> => {
+        if (args.session_id) {
+          return searchInSession(args.session_id, args.query, args.case_sensitive, resultLimit)
+        }
+
+        const allSessions = await getAllSessions()
+        const sessionsToScan = allSessions.slice(0, MAX_SESSIONS_TO_SCAN)
+
+        const allResults: SearchResult[] = []
+        for (const sid of sessionsToScan) {
+          if (allResults.length >= resultLimit) break
+
+          const remaining = resultLimit - allResults.length
+          const sessionResults = await searchInSession(sid, args.query, args.case_sensitive, remaining)
+          allResults.push(...sessionResults)
+        }
+
+        return allResults.slice(0, resultLimit)
+      }
+
+      const results = await withTimeout(searchOperation(), SEARCH_TIMEOUT_MS, "Search")
+
+      return formatSearchResults(results)
     } catch (e) {
       return `Error: ${e instanceof Error ? e.message : String(e)}`
     }
@@ -94,7 +129,7 @@ export const session_info: ToolDefinition = tool({
   },
   execute: async (args: SessionInfoArgs, _context) => {
     try {
-      const info = getSessionInfo(args.session_id)
+      const info = await getSessionInfo(args.session_id)

       if (!info) {
         return `Session not found: ${args.session_id}`

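The new `withTimeout` helper bounds the whole search with `Promise.race`: whichever settles first wins, either the wrapped promise or the rejecting timer. A standalone sketch (the helper matches the one in the diff above; the demo promises are illustrative):

```typescript
// Rejects if the wrapped promise has not settled within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number, operation: string): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`${operation} timed out after ${ms}ms`)), ms),
    ),
  ])
}

async function demo(): Promise<void> {
  // Settles before the deadline: the value passes through unchanged.
  console.log(await withTimeout(Promise.resolve("done"), 1_000, "fast op"))

  // Never settles: the timer rejects after 50 ms.
  try {
    await withTimeout(new Promise<string>(() => {}), 50, "stalled op")
  } catch (e) {
    console.log((e as Error).message) // → stalled op timed out after 50ms
  }
}

demo()
```

One caveat of this pattern: the losing timer is never cleared, so in Node a pending timeout can keep the event loop alive until it fires. For a tool call bounded at 60 s that is usually acceptable; clearing the timer in a `finally` would avoid it.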
@@ -1,21 +1,39 @@
 import { describe, test, expect } from "bun:test"
-import { formatSessionList, formatSessionMessages, formatSessionInfo, formatSearchResults, filterSessionsByDate, searchInSession } from "./utils"
+import {
+  formatSessionList,
+  formatSessionMessages,
+  formatSessionInfo,
+  formatSearchResults,
+  filterSessionsByDate,
+  searchInSession,
+} from "./utils"
 import type { SessionInfo, SessionMessage, SearchResult } from "./types"

 describe("session-manager utils", () => {
-  test("formatSessionList handles empty array", () => {
-    const result = formatSessionList([])
+  test("formatSessionList handles empty array", async () => {
+    // #given
+    const sessions: string[] = []
+
+    // #when
+    const result = await formatSessionList(sessions)
+
+    // #then
     expect(result).toContain("No sessions found")
   })

   test("formatSessionMessages handles empty array", () => {
-    const result = formatSessionMessages([])
+    // #given
+    const messages: SessionMessage[] = []
+
+    // #when
+    const result = formatSessionMessages(messages)
+
+    // #then
     expect(result).toContain("No messages")
   })

   test("formatSessionMessages includes message content", () => {
+    // #given
     const messages: SessionMessage[] = [
       {
         id: "msg_001",
@@ -24,14 +42,17 @@ describe("session-manager utils", () => {
         parts: [{ id: "prt_001", type: "text", text: "Hello world" }],
       },
     ]
+
+    // #when
     const result = formatSessionMessages(messages)
+
+    // #then
     expect(result).toContain("user")
     expect(result).toContain("Hello world")
   })

   test("formatSessionMessages includes todos when requested", () => {
+    // #given
     const messages: SessionMessage[] = [
       {
         id: "msg_001",
@@ -40,20 +61,22 @@ describe("session-manager utils", () => {
         parts: [{ id: "prt_001", type: "text", text: "Test" }],
       },
     ]
+
     const todos = [
       { id: "1", content: "Task 1", status: "completed" as const },
       { id: "2", content: "Task 2", status: "pending" as const },
     ]
+
+    // #when
     const result = formatSessionMessages(messages, true, todos)
+
+    // #then
     expect(result).toContain("Todos")
     expect(result).toContain("Task 1")
     expect(result).toContain("Task 2")
   })

   test("formatSessionInfo includes all metadata", () => {
+    // #given
     const info: SessionInfo = {
       id: "ses_test123",
       message_count: 42,
@@ -65,9 +88,11 @@ describe("session-manager utils", () => {
       todos: [{ id: "1", content: "Test", status: "pending" }],
       transcript_entries: 123,
     }
+
+    // #when
     const result = formatSessionInfo(info)
+
+    // #then
     expect(result).toContain("ses_test123")
     expect(result).toContain("42")
     expect(result).toContain("build, oracle")
@@ -75,12 +100,18 @@ describe("session-manager utils", () => {
   })

   test("formatSearchResults handles empty array", () => {
-    const result = formatSearchResults([])
+    // #given
+    const results: SearchResult[] = []
+
+    // #when
+    const result = formatSearchResults(results)
+
+    // #then
     expect(result).toContain("No matches")
   })

   test("formatSearchResults formats matches correctly", () => {
+    // #given
     const results: SearchResult[] = [
       {
         session_id: "ses_test123",
@@ -91,9 +122,11 @@ describe("session-manager utils", () => {
         timestamp: Date.now(),
       },
     ]
+
+    // #when
     const result = formatSearchResults(results)
+
+    // #then
     expect(result).toContain("Found 1 matches")
     expect(result).toContain("ses_test123")
     expect(result).toContain("msg_001")
@@ -101,17 +134,26 @@ describe("session-manager utils", () => {
     expect(result).toContain("Matches: 3")
   })

-  test("filterSessionsByDate filters correctly", () => {
+  test("filterSessionsByDate filters correctly", async () => {
+    // #given
     const sessionIDs = ["ses_001", "ses_002", "ses_003"]

-    const result = filterSessionsByDate(sessionIDs)
-
+    // #when
+    const result = await filterSessionsByDate(sessionIDs)
+
+    // #then
     expect(Array.isArray(result)).toBe(true)
   })

-  test("searchInSession finds matches case-insensitively", () => {
-    const results = searchInSession("ses_nonexistent", "test", false)
+  test("searchInSession finds matches case-insensitively", async () => {
+    // #given
+    const sessionID = "ses_nonexistent"
+    const query = "test"
+
+    // #when
+    const results = await searchInSession(sessionID, query, false)
+
+    // #then
     expect(Array.isArray(results)).toBe(true)
     expect(results.length).toBe(0)
   })

@@ -1,12 +1,14 @@
 import type { SessionInfo, SessionMessage, SearchResult } from "./types"
 import { getSessionInfo, readSessionMessages } from "./storage"

-export function formatSessionList(sessionIDs: string[]): string {
+export async function formatSessionList(sessionIDs: string[]): Promise<string> {
   if (sessionIDs.length === 0) {
     return "No sessions found."
   }

-  const infos = sessionIDs.map((id) => getSessionInfo(id)).filter((info): info is SessionInfo => info !== null)
+  const infos = (await Promise.all(sessionIDs.map((id) => getSessionInfo(id)))).filter(
+    (info): info is SessionInfo => info !== null
+  )

   if (infos.length === 0) {
     return "No valid sessions found."
@@ -39,7 +41,11 @@ export function formatSessionList(sessionIDs: string[]): string {
   return [formatRow(headers), separator, ...rows.map(formatRow)].join("\n")
 }

-export function formatSessionMessages(messages: SessionMessage[], includeTodos?: boolean, todos?: Array<{id: string; content: string; status: string}>): string {
+export function formatSessionMessages(
+  messages: SessionMessage[],
+  includeTodos?: boolean,
+  todos?: Array<{ id: string; content: string; status: string }>
+): string {
   if (messages.length === 0) {
     return "No messages found in this session."
   }
@@ -116,32 +122,46 @@ export function formatSearchResults(results: SearchResult[]): string {
   return lines.join("\n")
 }

-export function filterSessionsByDate(sessionIDs: string[], fromDate?: string, toDate?: string): string[] {
+export async function filterSessionsByDate(
+  sessionIDs: string[],
+  fromDate?: string,
+  toDate?: string
+): Promise<string[]> {
   if (!fromDate && !toDate) return sessionIDs

   const from = fromDate ? new Date(fromDate) : null
   const to = toDate ? new Date(toDate) : null

-  return sessionIDs.filter((id) => {
-    const info = getSessionInfo(id)
-    if (!info || !info.last_message) return false
-
-    if (from && info.last_message < from) return false
-    if (to && info.last_message > to) return false
-
-    return true
-  })
+  const results: string[] = []
+  for (const id of sessionIDs) {
+    const info = await getSessionInfo(id)
+    if (!info || !info.last_message) continue
+
+    if (from && info.last_message < from) continue
+    if (to && info.last_message > to) continue
+
+    results.push(id)
+  }
+
+  return results
 }

-export function searchInSession(sessionID: string, query: string, caseSensitive = false): SearchResult[] {
-  const messages = readSessionMessages(sessionID)
+export async function searchInSession(
+  sessionID: string,
+  query: string,
+  caseSensitive = false,
+  maxResults?: number
+): Promise<SearchResult[]> {
+  const messages = await readSessionMessages(sessionID)
   const results: SearchResult[] = []

   const searchQuery = caseSensitive ? query : query.toLowerCase()

   for (const msg of messages) {
+    if (maxResults && results.length >= maxResults) break
+
     let matchCount = 0
-    let excerpts: string[] = []
+    const excerpts: string[] = []

     for (const part of msg.parts) {
       if (part.type === "text" && part.text) {
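`formatSessionList` now resolves all `getSessionInfo` calls concurrently with `Promise.all` and strips the `null`s with a user-defined type guard, which is what lets the compiler narrow `(SessionInfo | null)[]` down to `SessionInfo[]`. The idiom in isolation, with a stand-in `getInfo` rather than the real `getSessionInfo`:

```typescript
interface Info {
  id: string
  count: number
}

// Stand-in for getSessionInfo: resolves to null for unknown ids.
async function getInfo(id: string): Promise<Info | null> {
  return id.startsWith("ses_") ? { id, count: 1 } : null
}

async function main(): Promise<void> {
  const ids = ["ses_a", "bogus", "ses_b"]
  // Promise.all runs the lookups concurrently and preserves input order;
  // the `info is Info` predicate narrows (Info | null)[] to Info[].
  const infos = (await Promise.all(ids.map((id) => getInfo(id)))).filter(
    (info): info is Info => info !== null,
  )
  console.log(infos.map((i) => i.id).join(",")) // → ses_a,ses_b
}

main()
```

A plain `.filter((info) => info !== null)` would behave the same at runtime but, on older TypeScript versions without inferred type predicates, would leave the element type as `Info | null`.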