Compare commits
21 Commits
v3.0.0-bet...v3.0.0-bet

| SHA1 |
|---|
| 2e1b467de4 |
| e180d295bb |
| 93e59da9d6 |
| 358bd8d7fa |
| 78d67582d6 |
| 54575ad259 |
| 045fa79d92 |
| 6ded689d08 |
| 45d660176e |
| ffbab8f316 |
| e203130ed8 |
| 0631865c16 |
| 2b036e7476 |
| 84e1ee09f0 |
| 3d5319a72d |
| 325ce1212b |
| 66f8946ff1 |
| 22619d137e |
| 000a61c961 |
| 9f07aae0a1 |
| d7326e1eeb |
.github/workflows/sisyphus-agent.yml (vendored, 4 changes)

@@ -430,6 +430,10 @@ jobs:

2. **CREATE TODOS IMMEDIATELY**: Right after reading, create your todo list using todo tools.
   - First todo: "Summarize issue/PR context and requirements"
   - Break down ALL work into atomic, verifiable steps
   - **GIT WORKFLOW (MANDATORY for implementation tasks)**: ALWAYS include these final todos:
     - "Create new branch from origin/BRANCH_PLACEHOLDER (NEVER push directly to BRANCH_PLACEHOLDER)"
     - "Commit changes"
     - "Create PR to BRANCH_PLACEHOLDER branch"
   - Plan everything BEFORE starting any work

---
README.ja.md (86 changes)

@@ -28,7 +28,29 @@

> Install `oh-my-opencode` and code like you're on steroids. Run agents in the background, and call specialized agents like oracle, librarian, and frontend engineer. Get carefully crafted LSP/AST tools, curated MCPs, and a full Claude Code compatibility layer, all in a single line.

**Note: Do not use expensive models for librarian. This is not only unhelpful to you, but also burdens LLM providers. Use models like Claude Haiku, Gemini Flash, GLM 4.7, or MiniMax instead.**

# Claude OAuth Access Notice

## TL;DR

> Q. Can I use oh-my-opencode?

Yes.

> Q. Can I use it with my Claude Code subscription?

Yes, it is technically possible, but I cannot recommend it.

## Details

> As of January 2026, Anthropic has restricted third-party OAuth access, citing ToS violations.
>
> [**Anthropic has cited this project, oh-my-opencode, as justification for blocking opencode.**](https://x.com/thdxr/status/2010149530486911014)
>
> Indeed, plugins that spoof Claude Code's OAuth request signatures exist in the community.
>
> These tools may work regardless of technical detectability, but users should be aware of the ToS implications, and I personally cannot recommend using them.
>
> This project is not responsible for any issues arising from the use of unofficial tools, and **we do not have any custom implementations of those OAuth systems.**

<div align="center">

@@ -91,8 +113,7 @@

- [4.2 Google Gemini (Antigravity OAuth)](#42-google-gemini-antigravity-oauth)
  - [4.2.1 Model Configuration](#421-モデル設定)
  - [4.2.2 oh-my-opencode Agent Model Override](#422-oh-my-opencode-エージェントモデルのオーバーライド)
- [4.3 OpenAI (ChatGPT Plus/Pro)](#43-openai-chatgpt-pluspro)
  - [Model Configuration](#モデル設定)

- [⚠️ Warning](#️-注意)
- [Verify the setup](#セットアップの確認)
- [Say "Congratulations! 🎉" to the user](#ユーザーにおめでとうございますと伝える)

@@ -354,37 +375,46 @@ opencode auth login

**Multi-Account Load Balancing**: The plugin supports up to 10 Google accounts. When one account hits its rate limit, it automatically switches to the next account.

#### 4.3 OpenAI (ChatGPT Plus/Pro)
#### 4.3 GitHub Copilot (Fallback Provider)

First, add the opencode-openai-codex-auth plugin:
GitHub Copilot is supported as a **fallback provider** for when the native providers (Claude, ChatGPT, Gemini) are unavailable. The installer configures Copilot with lower priority than the native providers.

```json
{
  "plugin": [
    "oh-my-opencode",
    "opencode-openai-codex-auth@4.3.0"
  ]
}
```

**Priority**: Native providers (Claude/ChatGPT/Gemini) > GitHub Copilot > Free models

##### Model Mappings

When GitHub Copilot is enabled, oh-my-opencode uses the following model assignments:

| Agent | Model |
|--------------|--------|
| **Sisyphus** | `github-copilot/claude-opus-4.5` |
| **Oracle** | `github-copilot/gpt-5.2` |
| **Explore** | `grok code` (default) |
| **Librarian** | `glm 4.7 free` (default) |

GitHub Copilot acts as a proxy provider, routing requests to the underlying models based on your subscription.

##### Setup

Run the installer and select "Yes" for GitHub Copilot:

```bash
bunx oh-my-opencode install
# Select your subscriptions (Claude, ChatGPT, Gemini)
# When prompted: "Do you have a GitHub Copilot subscription?" → Select "Yes"
```

##### Model Configuration
Or use non-interactive mode:

You'll also need to configure full model settings in `opencode.json`.
Read the [opencode-openai-codex-auth documentation](https://github.com/numman-ali/opencode-openai-codex-auth), copy the provider/models config from [`config/opencode-modern.json`](https://github.com/numman-ali/opencode-openai-codex-auth/blob/main/config/opencode-modern.json) (OpenCode v1.0.210+) or [`config/opencode-legacy.json`](https://github.com/numman-ali/opencode-openai-codex-auth/blob/main/config/opencode-legacy.json) (older versions), and merge it carefully so you don't break the user's existing setup.

```bash
bunx oh-my-opencode install --no-tui --claude=no --chatgpt=no --gemini=no --copilot=yes
```

**Available models**: `openai/gpt-5.2`, `openai/gpt-5.2-codex`, `openai/gpt-5.1-codex-max`, `openai/gpt-5.1-codex`, `openai/gpt-5.1-codex-mini`, `openai/gpt-5.1`

**Variants** (OpenCode v1.0.210+): Use the `--variant=<none|low|medium|high|xhigh>` option to control reasoning effort.

Then authenticate:
Then authenticate with GitHub:

```bash
opencode auth login
# Select Provider: OpenAI
# Select Login method: ChatGPT Plus/Pro (Codex Subscription)
# Guide the user through the OAuth flow in the browser
# Wait for completion
# Verify success and report to the user
# Select: GitHub → Authenticate via OAuth
```

@@ -518,17 +548,13 @@ Ask @explore for the policy on this feature

That feature you use in your editor? Other agents can't touch it.
Hand your best tools to your best colleagues. Now agents can properly refactor, navigate, and analyze.

- **lsp_hover**: Get type info, docs, and signatures at a position
- **lsp_goto_definition**: Jump to a symbol's definition
- **lsp_find_references**: Find all usages across the workspace
- **lsp_document_symbols**: Get a file's symbol outline
- **lsp_workspace_symbols**: Search symbols by name across the project
- **lsp_symbols**: Get symbols from a file (scope='document') or search the whole workspace (scope='workspace')
- **lsp_diagnostics**: Get errors/warnings before building
- **lsp_servers**: List available LSP servers
- **lsp_prepare_rename**: Validate a rename operation
- **lsp_rename**: Rename a symbol across the workspace
- **lsp_code_actions**: Get available quick fixes/refactorings
- **lsp_code_action_resolve**: Apply a code action
- **ast_grep_search**: AST-aware code pattern search (25 languages)
- **ast_grep_replace**: AST-aware code replacement
README.md (92 changes)

@@ -6,7 +6,7 @@

> [!TIP]
>
> [](https://github.com/code-yeongyu/oh-my-opencode/releases/tag/v3.0.0-beta.1)
> > **The Orchestrator is now available in beta. Use `oh-my-opencode@3.0.0-beta.1` to install it.**
> > **The Orchestrator is now available in beta. Use `oh-my-opencode@3.0.0-beta.6` to install it.**
>
> Be with us!
>

@@ -28,8 +28,29 @@

> This is coding on steroids—`oh-my-opencode` in action. Run background agents, call specialized agents like oracle, librarian, and frontend engineer. Use crafted LSP/AST tools, curated MCPs, and a full Claude Code compatibility layer.

# Claude OAuth Access Notice

**Notice: Do not use expensive models for librarian. This is not only unhelpful to you, but also burdens LLM providers. Use models like Claude Haiku, Gemini Flash, GLM 4.7, or MiniMax instead.**

## TL;DR

> Q. Can I use oh-my-opencode?

Yes.

> Q. Can I use it with my Claude Code subscription?

Yes, it is technically possible, but I cannot recommend it.

## FULL

> As of January 2026, Anthropic has restricted third-party OAuth access, citing ToS violations.
>
> [**Anthropic has cited this project, oh-my-opencode, as justification for blocking opencode.**](https://x.com/thdxr/status/2010149530486911014)
>
> Indeed, some plugins that spoof Claude Code's OAuth request signatures exist in the community.
>
> These tools may work regardless of technical detectability, but users should be aware of the ToS implications, and I personally cannot recommend using them.
>
> This project is not responsible for any issues arising from the use of unofficial tools, and **we do not have any custom implementations of those OAuth systems.**

<div align="center">

@@ -76,6 +97,9 @@

## Contents

- [Claude OAuth Access Notice](#claude-oauth-access-notice)
- [Reviews](#reviews)
- [Contents](#contents)
- [Oh My OpenCode](#oh-my-opencode)
- [Just Skip Reading This Readme](#just-skip-reading-this-readme)
- [It's the Age of Agents](#its-the-age-of-agents)

@@ -94,8 +118,9 @@

- [Google Gemini (Antigravity OAuth)](#google-gemini-antigravity-oauth)
  - [Model Configuration](#model-configuration)
  - [oh-my-opencode Agent Model Override](#oh-my-opencode-agent-model-override)
- [OpenAI (ChatGPT Plus/Pro)](#openai-chatgpt-pluspro)
  - [Model Configuration](#model-configuration-1)
- [GitHub Copilot (Fallback Provider)](#github-copilot-fallback-provider)
  - [Model Mappings](#model-mappings)
  - [Setup](#setup)
- [⚠️ Warning](#️-warning)
- [Verify the setup](#verify-the-setup)
- [Say 'Congratulations! 🎉' to the user](#say-congratulations--to-the-user)

@@ -381,37 +406,46 @@ opencode auth login

**Multi-Account Load Balancing**: The plugin supports up to 10 Google accounts. When one account hits rate limits, it automatically switches to the next available account.

#### OpenAI (ChatGPT Plus/Pro)
#### GitHub Copilot (Fallback Provider)

First, add the opencode-openai-codex-auth plugin:
GitHub Copilot is supported as a **fallback provider** when native providers (Claude, ChatGPT, Gemini) are unavailable. The installer configures Copilot with lower priority than native providers.

```json
{
  "plugin": [
    "oh-my-opencode",
    "opencode-openai-codex-auth@4.3.0"
  ]
}
```

**Priority**: Native providers (Claude/ChatGPT/Gemini) > GitHub Copilot > Free models

##### Model Mappings

When GitHub Copilot is enabled, oh-my-opencode uses these model assignments:

| Agent | Model |
| ------------- | -------------------------------- |
| **Sisyphus** | `github-copilot/claude-opus-4.5` |
| **Oracle** | `github-copilot/gpt-5.2` |
| **Explore** | `grok code` (default) |
| **Librarian** | `glm 4.7 free` (default) |

GitHub Copilot acts as a proxy provider, routing requests to underlying models based on your subscription.

##### Setup

Run the installer and select "Yes" for GitHub Copilot:

```bash
bunx oh-my-opencode install
# Select your subscriptions (Claude, ChatGPT, Gemini)
# When prompted: "Do you have a GitHub Copilot subscription?" → Select "Yes"
```

##### Model Configuration
Or use non-interactive mode:

You'll also need full model settings in `opencode.json`.
Read the [opencode-openai-codex-auth documentation](https://github.com/numman-ali/opencode-openai-codex-auth), copy the provider/models config from [`config/opencode-modern.json`](https://github.com/numman-ali/opencode-openai-codex-auth/blob/main/config/opencode-modern.json) (for OpenCode v1.0.210+) or [`config/opencode-legacy.json`](https://github.com/numman-ali/opencode-openai-codex-auth/blob/main/config/opencode-legacy.json) (for older versions), and merge carefully to avoid breaking the user's existing setup.

```bash
bunx oh-my-opencode install --no-tui --claude=no --chatgpt=no --gemini=no --copilot=yes
```

**Available models**: `openai/gpt-5.2`, `openai/gpt-5.2-codex`, `openai/gpt-5.1-codex-max`, `openai/gpt-5.1-codex`, `openai/gpt-5.1-codex-mini`, `openai/gpt-5.1`

**Variants** (OpenCode v1.0.210+): Use `--variant=<none|low|medium|high|xhigh>` to control reasoning effort.

Then authenticate:
Then authenticate with GitHub:

```bash
opencode auth login
# Interactive Terminal: Provider: Select OpenAI
# Interactive Terminal: Login method: Select ChatGPT Plus/Pro (Codex Subscription)
# Interactive Terminal: Guide user through OAuth flow in browser
# Wait for completion
# Verify success and confirm with user
# Select: GitHub → Authenticate via OAuth
```

@@ -541,17 +575,13 @@ Syntax highlighting, autocomplete, refactoring, navigation, analysis—and now a

The features in your editor? Other agents can't touch them.
Hand your best tools to your best colleagues. Now they can properly refactor, navigate, and analyze.

- **lsp_hover**: Type info, docs, signatures at position
- **lsp_goto_definition**: Jump to symbol definition
- **lsp_find_references**: Find all usages across workspace
- **lsp_document_symbols**: Get file symbol outline
- **lsp_workspace_symbols**: Search symbols by name across project
- **lsp_symbols**: Get symbols from file (scope='document') or search across workspace (scope='workspace')
- **lsp_diagnostics**: Get errors/warnings before build
- **lsp_servers**: List available LSP servers
- **lsp_prepare_rename**: Validate rename operation
- **lsp_rename**: Rename symbol across workspace
- **lsp_code_actions**: Get available quick fixes/refactorings
- **lsp_code_action_resolve**: Apply code action
- **ast_grep_search**: AST-aware code pattern search (25 languages)
- **ast_grep_replace**: AST-aware code replacement
- **call_omo_agent**: Spawn specialized explore/librarian agents. Supports `run_in_background` parameter for async execution.
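The `--variant` option maps a single flag to a bundle of reasoning options. As an illustration only, here is a sketch of that mapping in TypeScript, mirroring the `variants` table defined for `gpt-5.2` in the provider config later in this diff (the function name and shape are hypothetical, not part of the installer):

```typescript
// Sketch: how a --variant value corresponds to request options, mirroring
// the gpt-5.2 variants in CODEX_PROVIDER_CONFIG shown later in this diff.
type Variant = "none" | "low" | "medium" | "high" | "xhigh";

function variantOptions(v: Variant) {
  return {
    reasoningEffort: v,
    // Per the variants table, high/xhigh request detailed reasoning
    // summaries; the lower settings use "auto".
    reasoningSummary: v === "high" || v === "xhigh" ? "detailed" : "auto",
    textVerbosity: "medium",
  };
}
```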
@@ -28,8 +28,29 @@

> This is cheat-level coding: `oh-my-opencode` in action. Run background agents, and call specialized agents like oracle, librarian, and frontend engineer. Use carefully crafted LSP/AST tools, curated MCPs, and a full Claude Code compatibility layer.

# Claude OAuth Access Notice

**Notice: Do not use expensive models for librarian. This is not only unhelpful to you, but also burdens LLM providers. Use models like Claude Haiku, Gemini Flash, GLM 4.7, or MiniMax instead.**

## TL;DR

> Q. Can I use oh-my-opencode?

Yes.

> Q. Can I use it with my Claude Code subscription?

Yes, it is technically possible, but I do not recommend it.

## Details

> As of January 2026, Anthropic has restricted third-party OAuth access, citing ToS violations.
>
> [**Anthropic has cited this project, oh-my-opencode, as justification for blocking opencode.**](https://x.com/thdxr/status/2010149530486911014)
>
> Indeed, some plugins that forge Claude Code's OAuth request signatures do exist in the community.
>
> These tools may work regardless of technical detectability, but users should be aware of the ToS implications, and I personally do not recommend using them.
>
> This project is not responsible for any issues arising from the use of unofficial tools, and **we do not have any custom implementations of those OAuth systems.**

<div align="center">

@@ -93,8 +114,7 @@

- [Google Gemini (Antigravity OAuth)](#google-gemini-antigravity-oauth)
  - [Model Configuration](#模型配置)
  - [oh-my-opencode Agent Model Override](#oh-my-opencode-智能体模型覆盖)
- [OpenAI (ChatGPT Plus/Pro)](#openai-chatgpt-pluspro)
  - [Model Configuration](#模型配置-1)

- [⚠️ Warning](#️-警告)
- [Verify the setup](#验证安装)
- [Say "Congratulations! 🎉" to the user](#向用户说-恭喜)

@@ -380,37 +400,46 @@ opencode auth login

**Multi-Account Load Balancing**: The plugin supports up to 10 Google accounts. When one account hits its rate limit, it automatically switches to the next available account.

#### OpenAI (ChatGPT Plus/Pro)
#### GitHub Copilot (Fallback Provider)

First, add the opencode-openai-codex-auth plugin:
GitHub Copilot is supported as a **fallback provider**, used when the native providers (Claude, ChatGPT, Gemini) are unavailable. The installer configures Copilot with lower priority than the native providers.

```json
{
  "plugin": [
    "oh-my-opencode",
    "opencode-openai-codex-auth@4.3.0"
  ]
}
```

**Priority**: Native providers (Claude/ChatGPT/Gemini) > GitHub Copilot > Free models

##### Model Mappings

When GitHub Copilot is enabled, oh-my-opencode uses the following model assignments:

| Agent | Model |
|------|------|
| **Sisyphus** | `github-copilot/claude-opus-4.5` |
| **Oracle** | `github-copilot/gpt-5.2` |
| **Explore** | `grok code` (default) |
| **Librarian** | `glm 4.7 free` (default) |

GitHub Copilot acts as a proxy provider, routing requests to the underlying models based on your subscription.

##### Setup

Run the installer and select "Yes" for GitHub Copilot:

```bash
bunx oh-my-opencode install
# Select your subscriptions (Claude, ChatGPT, Gemini)
# When prompted: "Do you have a GitHub Copilot subscription?" → Select "Yes"
```

##### Model Configuration
Or use non-interactive mode:

You'll also need to configure full model settings in `opencode.json`.
Read the [opencode-openai-codex-auth documentation](https://github.com/numman-ali/opencode-openai-codex-auth), copy the provider/models config from [`config/opencode-modern.json`](https://github.com/numman-ali/opencode-openai-codex-auth/blob/main/config/opencode-modern.json) (for OpenCode v1.0.210+) or [`config/opencode-legacy.json`](https://github.com/numman-ali/opencode-openai-codex-auth/blob/main/config/opencode-legacy.json) (for older versions), and merge it carefully to avoid breaking the user's existing setup.

```bash
bunx oh-my-opencode install --no-tui --claude=no --chatgpt=no --gemini=no --copilot=yes
```

**Available models**: `openai/gpt-5.2`, `openai/gpt-5.2-codex`, `openai/gpt-5.1-codex-max`, `openai/gpt-5.1-codex`, `openai/gpt-5.1-codex-mini`, `openai/gpt-5.1`

**Variants** (OpenCode v1.0.210+): Use `--variant=<none|low|medium|high|xhigh>` to control reasoning effort.

Then authenticate:
Then authenticate with GitHub:

```bash
opencode auth login
# Interactive terminal: Provider: select OpenAI
# Interactive terminal: Login method: select ChatGPT Plus/Pro (Codex Subscription)
# Interactive terminal: guide the user through the OAuth flow in the browser
# Wait for completion
# Verify success and confirm with the user
# Select: GitHub → authenticate via OAuth
```

@@ -540,17 +569,13 @@ gh repo star code-yeongyu/oh-my-opencode

The features in your editor? Other agents can't touch them.
Hand your best tools to your best colleagues. Now they can properly refactor, navigate, and analyze.

- **lsp_hover**: Type info, docs, and signatures at a position
- **lsp_goto_definition**: Jump to a symbol's definition
- **lsp_find_references**: Find all usages across the workspace
- **lsp_document_symbols**: Get a file's symbol outline
- **lsp_workspace_symbols**: Search symbols by name across the project
- **lsp_symbols**: Get symbols from a file (scope='document') or search the workspace (scope='workspace')
- **lsp_diagnostics**: Get errors/warnings before building
- **lsp_servers**: List available LSP servers
- **lsp_prepare_rename**: Validate a rename operation
- **lsp_rename**: Rename a symbol across the workspace
- **lsp_code_actions**: Get available quick fixes/refactorings
- **lsp_code_action_resolve**: Apply a code action
- **ast_grep_search**: AST-aware code pattern search (25 languages)
- **ast_grep_replace**: AST-aware code replacement
- **call_omo_agent**: Spawn specialized explore/librarian agents. Supports the `run_in_background` parameter for async execution.
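The `call_omo_agent` tool described above takes an agent kind plus a prompt, and can run asynchronously. As a hypothetical sketch of the argument shape (only the agent kinds and the `run_in_background` parameter come from the README; the interface name and exact field layout are assumptions):

```typescript
// Hypothetical argument shape for call_omo_agent, based only on what the
// README states: explore/librarian agent kinds and a run_in_background flag.
interface CallOmoAgentArgs {
  agent: "explore" | "librarian";
  prompt: string;
  run_in_background?: boolean; // async execution, per the README
}

// Example prompt taken from the librarian usage shown in this diff.
const example: CallOmoAgentArgs = {
  agent: "librarian",
  prompt: "Find open source implementations of [feature]",
  run_in_background: true,
};
```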
@@ -2181,7 +2181,6 @@

        "todowrite",
        "todoread",
        "lsp_rename",
        "lsp_code_action_resolve",
        "session_read",
        "session_write",
        "session_search"
bun.lock (8 changes)

@@ -11,8 +11,8 @@

      "@code-yeongyu/comment-checker": "^0.6.1",
      "@modelcontextprotocol/sdk": "^1.25.1",
      "@openauthjs/openauth": "^0.4.3",
      "@opencode-ai/plugin": "^1.1.1",
      "@opencode-ai/sdk": "^1.1.1",
      "@opencode-ai/plugin": "^1.1.19",
      "@opencode-ai/sdk": "^1.1.19",
      "commander": "^14.0.2",
      "hono": "^4.10.4",
      "js-yaml": "^4.1.1",

@@ -85,9 +85,9 @@

    "@openauthjs/openauth": ["@openauthjs/openauth@0.4.3", "", { "dependencies": { "@standard-schema/spec": "1.0.0-beta.3", "aws4fetch": "1.0.20", "jose": "5.9.6" }, "peerDependencies": { "arctic": "^2.2.2", "hono": "^4.0.0" } }, "sha512-RlnjqvHzqcbFVymEwhlUEuac4utA5h4nhSK/i2szZuQmxTIqbGUxZ+nM+avM+VV4Ing+/ZaNLKILoXS3yrkOOw=="],

    "@opencode-ai/plugin": ["@opencode-ai/plugin@1.1.1", "", { "dependencies": { "@opencode-ai/sdk": "1.1.1", "zod": "4.1.8" } }, "sha512-OZGvpDal8YsSo6dnatHfwviSToGZ6mJJyEKZGxUyWDuGCP7VhcoPkoM16ktl7TCVHkDK+TdwY9tKzkzFqQNc5w=="],
    "@opencode-ai/plugin": ["@opencode-ai/plugin@1.1.19", "", { "dependencies": { "@opencode-ai/sdk": "1.1.19", "zod": "4.1.8" } }, "sha512-Q6qBEjHb/dJMEw4BUqQxEswTMxCCHUpFMMb6jR8HTTs8X/28XRkKt5pHNPA82GU65IlSoPRph+zd8LReBDN53Q=="],

    "@opencode-ai/sdk": ["@opencode-ai/sdk@1.1.1", "", {}, "sha512-PfXujMrHGeMnpS8Gd2BXSY+zZajlztcAvcokf06NtAhd0Mbo/hCLXgW0NBCQ+3FX3e/G2PNwz2DqMdtzyIZaCQ=="],
    "@opencode-ai/sdk": ["@opencode-ai/sdk@1.1.19", "", {}, "sha512-XhZhFuvlLCqDpvNtUEjOsi/wvFj3YCXb1dySp+OONQRMuHlorNYnNa7P2A2ntKuhRdGT1Xt5na0nFzlUyNw+4A=="],

    "@oslojs/asn1": ["@oslojs/asn1@1.0.0", "", { "dependencies": { "@oslojs/binary": "1.0.0" } }, "sha512-zw/wn0sj0j0QKbIXfIlnEcTviaCzYOY3V5rAyjR6YtOByFtJiT574+8p9Wlach0lZH9fddD4yb9laEAIl4vXQA=="],
@@ -1,6 +1,6 @@

{
  "name": "oh-my-opencode",
  "version": "3.0.0-beta.6",
  "version": "3.0.0-beta.7",
  "description": "The Best AI Agent Harness - Batteries-Included OpenCode Plugin with Multi-Model Orchestration, Parallel Background Agents, and Crafted LSP/AST Tools",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",

@@ -52,8 +52,8 @@

    "@code-yeongyu/comment-checker": "^0.6.1",
    "@modelcontextprotocol/sdk": "^1.25.1",
    "@openauthjs/openauth": "^0.4.3",
    "@opencode-ai/plugin": "^1.1.1",
    "@opencode-ai/sdk": "^1.1.1",
    "@opencode-ai/plugin": "^1.1.19",
    "@opencode-ai/sdk": "^1.1.19",
    "commander": "^14.0.2",
    "hono": "^4.10.4",
    "js-yaml": "^4.1.1",
@@ -495,6 +495,30 @@

      "created_at": "2026-01-14T01:57:52Z",
      "repoId": 1108837393,
      "pullRequestNo": 760
    },
    {
      "name": "0Jaeyoung0",
      "id": 67817265,
      "comment_id": 3747909072,
      "created_at": "2026-01-14T05:56:13Z",
      "repoId": 1108837393,
      "pullRequestNo": 774
    },
    {
      "name": "MotorwaySouth9",
      "id": 205539026,
      "comment_id": 3748060487,
      "created_at": "2026-01-14T06:50:26Z",
      "repoId": 1108837393,
      "pullRequestNo": 776
    },
    {
      "name": "dang232",
      "id": 92773067,
      "comment_id": 3748235411,
      "created_at": "2026-01-14T07:41:50Z",
      "repoId": 1108837393,
      "pullRequestNo": 777
    }
  ]
}
@@ -1449,6 +1449,7 @@ export function createOrchestratorSisyphusAgent(ctx?: OrchestratorContext): Agen

    temperature: 0.1,
    prompt: buildDynamicOrchestratorPrompt(ctx),
    thinking: { type: "enabled", budgetTokens: 32000 },
    color: "#10B981",
    ...restrictions,
  } as AgentConfig
}
@@ -479,6 +479,7 @@ sisyphus_task(agent="librarian", prompt="Find open source implementations of [fe

- Maintain conversational tone
- Use gathered evidence to inform suggestions
- Ask questions that help user articulate needs
- **Use the \`Question\` tool when presenting multiple options** (structured UI for selection)
- Confirm understanding before proceeding
- **Update draft file after EVERY meaningful exchange** (see Rule 6)
@@ -1,6 +1,7 @@

import { describe, expect, test } from "bun:test"

import { ANTIGRAVITY_PROVIDER_CONFIG } from "./config-manager"
import { ANTIGRAVITY_PROVIDER_CONFIG, generateOmoConfig } from "./config-manager"
import type { InstallConfig } from "./types"

describe("config-manager ANTIGRAVITY_PROVIDER_CONFIG", () => {
  test("Gemini models include full spec (limit + modalities)", () => {

@@ -32,3 +33,133 @@ describe("config-manager ANTIGRAVITY_PROVIDER_CONFIG", () => {

    }
  })
})

describe("generateOmoConfig - GitHub Copilot fallback", () => {
  test("frontend-ui-ux-engineer uses Copilot when no native providers", () => {
    // #given user has only Copilot (no Claude, ChatGPT, Gemini)
    const config: InstallConfig = {
      hasClaude: false,
      isMax20: false,
      hasChatGPT: false,
      hasGemini: false,
      hasCopilot: true,
    }

    // #when generating config
    const result = generateOmoConfig(config)

    // #then frontend-ui-ux-engineer should use Copilot Gemini
    const agents = result.agents as Record<string, { model?: string }>
    expect(agents["frontend-ui-ux-engineer"]?.model).toBe("github-copilot/gemini-3-pro-preview")
  })

  test("document-writer uses Copilot when no native providers", () => {
    // #given user has only Copilot
    const config: InstallConfig = {
      hasClaude: false,
      isMax20: false,
      hasChatGPT: false,
      hasGemini: false,
      hasCopilot: true,
    }

    // #when generating config
    const result = generateOmoConfig(config)

    // #then document-writer should use Copilot Gemini Flash
    const agents = result.agents as Record<string, { model?: string }>
    expect(agents["document-writer"]?.model).toBe("github-copilot/gemini-3-flash-preview")
  })

  test("multimodal-looker uses Copilot when no native providers", () => {
    // #given user has only Copilot
    const config: InstallConfig = {
      hasClaude: false,
      isMax20: false,
      hasChatGPT: false,
      hasGemini: false,
      hasCopilot: true,
    }

    // #when generating config
    const result = generateOmoConfig(config)

    // #then multimodal-looker should use Copilot Gemini Flash
    const agents = result.agents as Record<string, { model?: string }>
    expect(agents["multimodal-looker"]?.model).toBe("github-copilot/gemini-3-flash-preview")
  })

  test("explore uses Copilot grok-code when no native providers", () => {
    // #given user has only Copilot
    const config: InstallConfig = {
      hasClaude: false,
      isMax20: false,
      hasChatGPT: false,
      hasGemini: false,
      hasCopilot: true,
    }

    // #when generating config
    const result = generateOmoConfig(config)

    // #then explore should use Copilot Grok
    const agents = result.agents as Record<string, { model?: string }>
    expect(agents["explore"]?.model).toBe("github-copilot/grok-code-fast-1")
  })

  test("native Gemini takes priority over Copilot for frontend-ui-ux-engineer", () => {
    // #given user has both Gemini and Copilot
    const config: InstallConfig = {
      hasClaude: false,
      isMax20: false,
      hasChatGPT: false,
      hasGemini: true,
      hasCopilot: true,
    }

    // #when generating config
    const result = generateOmoConfig(config)

    // #then native Gemini should be used (NOT Copilot)
    const agents = result.agents as Record<string, { model?: string }>
    expect(agents["frontend-ui-ux-engineer"]?.model).toBe("google/antigravity-gemini-3-pro-high")
  })

  test("native Claude takes priority over Copilot for frontend-ui-ux-engineer", () => {
    // #given user has Claude and Copilot but no Gemini
    const config: InstallConfig = {
      hasClaude: true,
      isMax20: false,
      hasChatGPT: false,
      hasGemini: false,
      hasCopilot: true,
    }

    // #when generating config
    const result = generateOmoConfig(config)

    // #then native Claude should be used (NOT Copilot)
    const agents = result.agents as Record<string, { model?: string }>
    expect(agents["frontend-ui-ux-engineer"]?.model).toBe("anthropic/claude-opus-4-5")
  })

  test("categories use Copilot models when no native Gemini", () => {
    // #given user has Copilot but no Gemini
    const config: InstallConfig = {
      hasClaude: false,
      isMax20: false,
      hasChatGPT: false,
      hasGemini: false,
      hasCopilot: true,
    }

    // #when generating config
    const result = generateOmoConfig(config)

    // #then categories should use Copilot models
    const categories = result.categories as Record<string, { model?: string }>
    expect(categories?.["visual-engineering"]?.model).toBe("github-copilot/gemini-3-pro-preview")
    expect(categories?.["artistry"]?.model).toBe("github-copilot/gemini-3-pro-preview")
    expect(categories?.["writing"]?.model).toBe("github-copilot/gemini-3-flash-preview")
  })
})
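The fallback rules the tests above assert (native Gemini first, then native Claude, then GitHub Copilot, then the free model) can be condensed into a standalone sketch. This is an illustration, not the project's `generateOmoConfig`; the field names follow the test fixtures, and only one agent's selection is shown:

```typescript
// Illustrative selector for the frontend-ui-ux-engineer model, condensing
// the priority chain asserted by the tests: native providers beat the
// GitHub Copilot fallback, which beats the free model.
interface InstallConfig {
  hasClaude: boolean;
  isMax20: boolean;
  hasChatGPT: boolean;
  hasGemini: boolean;
  hasCopilot: boolean;
}

function pickFrontendModel(c: InstallConfig): string {
  if (c.hasGemini) return "google/antigravity-gemini-3-pro-high"; // native Gemini first
  if (c.hasClaude) return "anthropic/claude-opus-4-5";            // then native Claude
  if (c.hasCopilot) return "github-copilot/gemini-3-pro-preview"; // Copilot fallback
  return "opencode/glm-4.7-free";                                 // free model last
}
```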
@@ -270,7 +270,9 @@ export function generateOmoConfig(installConfig: InstallConfig): Record<string,
|
||||
const agents: Record<string, Record<string, unknown>> = {}
|
||||
|
||||
if (!installConfig.hasClaude) {
|
||||
agents["Sisyphus"] = { model: "opencode/glm-4.7-free" }
|
||||
agents["Sisyphus"] = {
|
||||
model: installConfig.hasCopilot ? "github-copilot/claude-opus-4.5" : "opencode/glm-4.7-free",
|
||||
}
|
||||
}
|
||||
|
||||
agents["librarian"] = { model: "opencode/glm-4.7-free" }
|
||||
@@ -281,38 +283,56 @@ export function generateOmoConfig(installConfig: InstallConfig): Record<string,
|
||||
agents["explore"] = { model: "google/antigravity-gemini-3-flash" }
|
||||
} else if (installConfig.hasClaude && installConfig.isMax20) {
|
||||
agents["explore"] = { model: "anthropic/claude-haiku-4-5" }
|
||||
} else if (installConfig.hasCopilot) {
|
||||
agents["explore"] = { model: "github-copilot/grok-code-fast-1" }
|
||||
} else {
|
||||
agents["explore"] = { model: "opencode/glm-4.7-free" }
|
||||
}
|
||||
|
||||
if (!installConfig.hasChatGPT) {
|
||||
agents["oracle"] = {
|
||||
model: installConfig.hasClaude ? "anthropic/claude-opus-4-5" : "opencode/glm-4.7-free",
|
||||
}
|
||||
const oracleFallback = installConfig.hasCopilot
|
||||
? "github-copilot/gpt-5.2"
|
||||
: installConfig.hasClaude
|
||||
? "anthropic/claude-opus-4-5"
|
||||
: "opencode/glm-4.7-free"
|
||||
agents["oracle"] = { model: oracleFallback }
|
||||
}
|
||||
|
||||
if (installConfig.hasGemini) {
|
||||
  agents["frontend-ui-ux-engineer"] = { model: "google/antigravity-gemini-3-pro-high" }
  agents["document-writer"] = { model: "google/antigravity-gemini-3-flash" }
  agents["multimodal-looker"] = { model: "google/antigravity-gemini-3-flash" }
} else if (installConfig.hasClaude) {
  agents["frontend-ui-ux-engineer"] = { model: "anthropic/claude-opus-4-5" }
  agents["document-writer"] = { model: "anthropic/claude-opus-4-5" }
  agents["multimodal-looker"] = { model: "anthropic/claude-opus-4-5" }
} else if (installConfig.hasCopilot) {
  agents["frontend-ui-ux-engineer"] = { model: "github-copilot/gemini-3-pro-preview" }
  agents["document-writer"] = { model: "github-copilot/gemini-3-flash-preview" }
  agents["multimodal-looker"] = { model: "github-copilot/gemini-3-flash-preview" }
} else {
  const fallbackModel = installConfig.hasClaude ? "anthropic/claude-opus-4-5" : "opencode/glm-4.7-free"
  agents["frontend-ui-ux-engineer"] = { model: fallbackModel }
  agents["document-writer"] = { model: fallbackModel }
  agents["multimodal-looker"] = { model: fallbackModel }
  agents["frontend-ui-ux-engineer"] = { model: "opencode/glm-4.7-free" }
  agents["document-writer"] = { model: "opencode/glm-4.7-free" }
  agents["multimodal-looker"] = { model: "opencode/glm-4.7-free" }
}

if (Object.keys(agents).length > 0) {
  config.agents = agents
}

// Categories: override model for Antigravity auth (gemini-3-pro-preview → gemini-3-pro-high)
// Categories: override model for Antigravity auth or GitHub Copilot fallback
if (installConfig.hasGemini) {
  config.categories = {
    "visual-engineering": { model: "google/gemini-3-pro-high" },
    artistry: { model: "google/gemini-3-pro-high" },
    writing: { model: "google/gemini-3-flash-high" },
  }
} else if (installConfig.hasCopilot) {
  config.categories = {
    "visual-engineering": { model: "github-copilot/gemini-3-pro-preview" },
    artistry: { model: "github-copilot/gemini-3-pro-preview" },
    writing: { model: "github-copilot/gemini-3-flash-preview" },
  }
}

return config
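The provider selection above follows a fixed priority chain: Antigravity Gemini first, then Claude, then GitHub Copilot, and finally the free tier. A minimal standalone sketch of that chain (model IDs copied from the diff; the helper itself is illustrative, not part of the codebase):

```typescript
// Illustrative helper (not in the repo): resolve the frontend agent's model
// with the same Gemini → Claude → Copilot → free-tier priority as above.
type InstallFlags = { hasGemini: boolean; hasClaude: boolean; hasCopilot: boolean }

function pickFrontendModel(flags: InstallFlags): string {
  if (flags.hasGemini) return "google/antigravity-gemini-3-pro-high"
  if (flags.hasClaude) return "anthropic/claude-opus-4-5"
  if (flags.hasCopilot) return "github-copilot/gemini-3-pro-preview"
  return "opencode/glm-4.7-free" // no provider configured → free fallback
}

console.log(pickFrontendModel({ hasGemini: false, hasClaude: false, hasCopilot: true }))
```

Because Copilot sits below the native providers, enabling it only changes the result when neither Gemini nor Claude is configured.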
@@ -431,11 +451,7 @@ export async function addAuthPlugins(config: InstallConfig): Promise<ConfigMerge
    }
  }

  if (config.hasChatGPT) {
    if (!plugins.some((p) => p.startsWith("opencode-openai-codex-auth"))) {
      plugins.push("opencode-openai-codex-auth")
    }
  }

  const newConfig = { ...(existingConfig ?? {}), plugin: plugins }
  writeFileSync(path, JSON.stringify(newConfig, null, 2) + "\n")
@@ -545,54 +561,7 @@ export const ANTIGRAVITY_PROVIDER_CONFIG = {
  },
}

const CODEX_PROVIDER_CONFIG = {
  openai: {
    name: "OpenAI",
    options: {
      reasoningEffort: "medium",
      reasoningSummary: "auto",
      textVerbosity: "medium",
      include: ["reasoning.encrypted_content"],
      store: false,
    },
    models: {
      "gpt-5.2": {
        name: "GPT 5.2 (OAuth)",
        limit: { context: 272000, output: 128000 },
        modalities: { input: ["text", "image"], output: ["text"] },
        variants: {
          none: { reasoningEffort: "none", reasoningSummary: "auto", textVerbosity: "medium" },
          low: { reasoningEffort: "low", reasoningSummary: "auto", textVerbosity: "medium" },
          medium: { reasoningEffort: "medium", reasoningSummary: "auto", textVerbosity: "medium" },
          high: { reasoningEffort: "high", reasoningSummary: "detailed", textVerbosity: "medium" },
          xhigh: { reasoningEffort: "xhigh", reasoningSummary: "detailed", textVerbosity: "medium" },
        },
      },
      "gpt-5.2-codex": {
        name: "GPT 5.2 Codex (OAuth)",
        limit: { context: 272000, output: 128000 },
        modalities: { input: ["text", "image"], output: ["text"] },
        variants: {
          low: { reasoningEffort: "low", reasoningSummary: "auto", textVerbosity: "medium" },
          medium: { reasoningEffort: "medium", reasoningSummary: "auto", textVerbosity: "medium" },
          high: { reasoningEffort: "high", reasoningSummary: "detailed", textVerbosity: "medium" },
          xhigh: { reasoningEffort: "xhigh", reasoningSummary: "detailed", textVerbosity: "medium" },
        },
      },
      "gpt-5.1-codex-max": {
        name: "GPT 5.1 Codex Max (OAuth)",
        limit: { context: 272000, output: 128000 },
        modalities: { input: ["text", "image"], output: ["text"] },
        variants: {
          low: { reasoningEffort: "low", reasoningSummary: "detailed", textVerbosity: "medium" },
          medium: { reasoningEffort: "medium", reasoningSummary: "detailed", textVerbosity: "medium" },
          high: { reasoningEffort: "high", reasoningSummary: "detailed", textVerbosity: "medium" },
          xhigh: { reasoningEffort: "xhigh", reasoningSummary: "detailed", textVerbosity: "medium" },
        },
      },
    },
  },
}

export function addProviderConfig(config: InstallConfig): ConfigMergeResult {
  try {
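Each model in the removed CODEX_PROVIDER_CONFIG carries a per-variant table of reasoning settings alongside provider-level defaults. One plausible way such a table is consumed is a shallow merge of the variant over the base options; this merge strategy is an assumption for illustration, not documented opencode behavior:

```typescript
// Hypothetical consumer of a variants table like the one above: a variant's
// fields shadow the provider-level defaults via shallow object spread.
const baseOptions = { reasoningEffort: "medium", reasoningSummary: "auto", textVerbosity: "medium" }

const variants: Record<string, Partial<typeof baseOptions>> = {
  high: { reasoningEffort: "high", reasoningSummary: "detailed" },
  xhigh: { reasoningEffort: "xhigh", reasoningSummary: "detailed" },
}

function optionsFor(variant?: string) {
  // Unknown or missing variant → fall back to the base options unchanged.
  return variant && variants[variant] ? { ...baseOptions, ...variants[variant] } : baseOptions
}
```

Note that fields the variant does not mention (here `textVerbosity`) keep their base value, matching how the tables above only repeat the fields that differ.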
@@ -622,10 +591,6 @@ export function addProviderConfig(config: InstallConfig): ConfigMergeResult {
    providers.google = ANTIGRAVITY_PROVIDER_CONFIG.google
  }

  if (config.hasChatGPT) {
    providers.openai = CODEX_PROVIDER_CONFIG.openai
  }

  if (Object.keys(providers).length > 0) {
    newConfig.provider = providers
  }
@@ -648,6 +613,7 @@ export function detectCurrentConfig(): DetectedConfig {
    isMax20: true,
    hasChatGPT: true,
    hasGemini: false,
    hasCopilot: false,
  }

  const { format, path } = detectConfigFormat()
@@ -669,7 +635,6 @@ export function detectCurrentConfig(): DetectedConfig {
  }

  result.hasGemini = plugins.some((p) => p.startsWith("opencode-antigravity-auth"))
  result.hasChatGPT = plugins.some((p) => p.startsWith("opencode-openai-codex-auth"))

  const omoConfigPath = getOmoConfig()
  if (!existsSync(omoConfigPath)) {
@@ -708,6 +673,11 @@ export function detectCurrentConfig(): DetectedConfig {
      result.hasChatGPT = false
    }

    const hasAnyCopilotModel = Object.values(agents).some(
      (agent) => agent?.model?.startsWith("github-copilot/")
    )
    result.hasCopilot = hasAnyCopilotModel

  } catch {
    /* intentionally empty - malformed omo config returns defaults from opencode config detection */
  }
@@ -38,6 +38,7 @@ function formatConfigSummary(config: InstallConfig): string {
  lines.push(formatProvider("Claude", config.hasClaude, claudeDetail))
  lines.push(formatProvider("ChatGPT", config.hasChatGPT))
  lines.push(formatProvider("Gemini", config.hasGemini))
  lines.push(formatProvider("GitHub Copilot", config.hasCopilot, "fallback provider"))

  lines.push("")
  lines.push(color.dim("─".repeat(40)))
@@ -46,8 +47,8 @@ function formatConfigSummary(config: InstallConfig): string {
  lines.push(color.bold(color.white("Agent Configuration")))
  lines.push("")

  const sisyphusModel = config.hasClaude ? "claude-opus-4-5" : "glm-4.7-free"
  const oracleModel = config.hasChatGPT ? "gpt-5.2" : (config.hasClaude ? "claude-opus-4-5" : "glm-4.7-free")
  const sisyphusModel = config.hasClaude ? "claude-opus-4-5" : (config.hasCopilot ? "github-copilot/claude-opus-4.5" : "glm-4.7-free")
  const oracleModel = config.hasChatGPT ? "gpt-5.2" : (config.hasCopilot ? "github-copilot/gpt-5.2" : (config.hasClaude ? "claude-opus-4-5" : "glm-4.7-free"))
  const librarianModel = "glm-4.7-free"
  const frontendModel = config.hasGemini ? "antigravity-gemini-3-pro-high" : (config.hasClaude ? "claude-opus-4-5" : "glm-4.7-free")
@@ -130,6 +131,12 @@ function validateNonTuiArgs(args: InstallArgs): { valid: boolean; errors: string
    errors.push(`Invalid --gemini value: ${args.gemini} (expected: no, yes)`)
  }

  if (args.copilot === undefined) {
    errors.push("--copilot is required (values: no, yes)")
  } else if (!["no", "yes"].includes(args.copilot)) {
    errors.push(`Invalid --copilot value: ${args.copilot} (expected: no, yes)`)
  }

  return { valid: errors.length === 0, errors }
}
@@ -139,10 +146,11 @@ function argsToConfig(args: InstallArgs): InstallConfig {
    isMax20: args.claude === "max20",
    hasChatGPT: args.chatgpt === "yes",
    hasGemini: args.gemini === "yes",
    hasCopilot: args.copilot === "yes",
  }
}

function detectedToInitialValues(detected: DetectedConfig): { claude: ClaudeSubscription; chatgpt: BooleanArg; gemini: BooleanArg } {
function detectedToInitialValues(detected: DetectedConfig): { claude: ClaudeSubscription; chatgpt: BooleanArg; gemini: BooleanArg; copilot: BooleanArg } {
  let claude: ClaudeSubscription = "no"
  if (detected.hasClaude) {
    claude = detected.isMax20 ? "max20" : "yes"
@@ -152,6 +160,7 @@ function detectedToInitialValues(detected: DetectedConfig): { claude: ClaudeSubs
    claude,
    chatgpt: detected.hasChatGPT ? "yes" : "no",
    gemini: detected.hasGemini ? "yes" : "no",
    copilot: detected.hasCopilot ? "yes" : "no",
  }
}
@@ -201,11 +210,26 @@ async function runTuiMode(detected: DetectedConfig): Promise<InstallConfig | nul
    return null
  }

  const copilot = await p.select({
    message: "Do you have a GitHub Copilot subscription?",
    options: [
      { value: "no" as const, label: "No", hint: "Only native providers will be used" },
      { value: "yes" as const, label: "Yes", hint: "Fallback option when native providers unavailable" },
    ],
    initialValue: initial.copilot,
  })

  if (p.isCancel(copilot)) {
    p.cancel("Installation cancelled.")
    return null
  }

  return {
    hasClaude: claude !== "no",
    isMax20: claude === "max20",
    hasChatGPT: chatgpt === "yes",
    hasGemini: gemini === "yes",
    hasCopilot: copilot === "yes",
  }
}
@@ -218,7 +242,7 @@ async function runNonTuiInstall(args: InstallArgs): Promise<number> {
      console.log(`  ${SYMBOLS.bullet} ${err}`)
    }
    console.log()
    printInfo("Usage: bunx oh-my-opencode install --no-tui --claude=<no|yes|max20> --chatgpt=<no|yes> --gemini=<no|yes>")
    printInfo("Usage: bunx oh-my-opencode install --no-tui --claude=<no|yes|max20> --chatgpt=<no|yes> --gemini=<no|yes> --copilot=<no|yes>")
    console.log()
    return 1
  }
@@ -257,7 +281,7 @@ async function runNonTuiInstall(args: InstallArgs): Promise<number> {
  }
  printSuccess(`Plugin ${isUpdate ? "verified" : "added"} ${SYMBOLS.arrow} ${color.dim(pluginResult.configPath)}`)

  if (config.hasGemini || config.hasChatGPT) {
  if (config.hasGemini) {
    printStep(step++, totalSteps, "Adding auth plugins...")
    const authResult = await addAuthPlugins(config)
    if (!authResult.success) {
@@ -287,25 +311,10 @@ async function runNonTuiInstall(args: InstallArgs): Promise<number> {

  printBox(formatConfigSummary(config), isUpdate ? "Updated Configuration" : "Installation Complete")

  if (!config.hasClaude && !config.hasChatGPT && !config.hasGemini) {
  if (!config.hasClaude && !config.hasChatGPT && !config.hasGemini && !config.hasCopilot) {
    printWarning("No model providers configured. Using opencode/glm-4.7-free as fallback.")
  }

  if ((config.hasClaude || config.hasChatGPT || config.hasGemini) && !args.skipAuth) {
    console.log(color.bold("Next Steps - Authenticate your providers:"))
    console.log()
    if (config.hasClaude) {
      console.log(`  ${SYMBOLS.arrow} ${color.dim("opencode auth login")} ${color.gray("(select Anthropic → Claude Pro/Max)")}`)
    }
    if (config.hasChatGPT) {
      console.log(`  ${SYMBOLS.arrow} ${color.dim("opencode auth login")} ${color.gray("(select OpenAI → ChatGPT Plus/Pro)")}`)
    }
    if (config.hasGemini) {
      console.log(`  ${SYMBOLS.arrow} ${color.dim("opencode auth login")} ${color.gray("(select Google → OAuth with Antigravity)")}`)
    }
    console.log()
  }

  console.log(`${SYMBOLS.star} ${color.bold(color.green(isUpdate ? "Configuration updated!" : "Installation complete!"))}`)
  console.log(`  Run ${color.cyan("opencode")} to start!`)
  console.log()
@@ -323,6 +332,17 @@ async function runNonTuiInstall(args: InstallArgs): Promise<number> {
  console.log(color.dim("oMoMoMoMo... Enjoy!"))
  console.log()

  if ((config.hasClaude || config.hasChatGPT || config.hasGemini || config.hasCopilot) && !args.skipAuth) {
    printBox(
      `Run ${color.cyan("opencode auth login")} and select your provider:\n` +
        (config.hasClaude ? `  ${SYMBOLS.bullet} Anthropic ${color.gray("→ Claude Pro/Max")}\n` : "") +
        (config.hasChatGPT ? `  ${SYMBOLS.bullet} OpenAI ${color.gray("→ ChatGPT Plus/Pro")}\n` : "") +
        (config.hasGemini ? `  ${SYMBOLS.bullet} Google ${color.gray("→ OAuth with Antigravity")}\n` : "") +
        (config.hasCopilot ? `  ${SYMBOLS.bullet} GitHub ${color.gray("→ Copilot")}` : ""),
      "🔐 Authenticate Your Providers"
    )
  }

  return 0
}
@@ -368,7 +388,7 @@ export async function install(args: InstallArgs): Promise<number> {
  }
  s.stop(`Plugin added to ${color.cyan(pluginResult.configPath)}`)

  if (config.hasGemini || config.hasChatGPT) {
  if (config.hasGemini) {
    s.start("Adding auth plugins (fetching latest versions)")
    const authResult = await addAuthPlugins(config)
    if (!authResult.success) {
|
||||
}
|
||||
s.stop(`Config written to ${color.cyan(omoResult.configPath)}`)
|
||||
|
||||
if (!config.hasClaude && !config.hasChatGPT && !config.hasGemini) {
|
||||
if (!config.hasClaude && !config.hasChatGPT && !config.hasGemini && !config.hasCopilot) {
|
||||
p.log.warn("No model providers configured. Using opencode/glm-4.7-free as fallback.")
|
||||
}
|
||||
|
||||
p.note(formatConfigSummary(config), isUpdate ? "Updated Configuration" : "Installation Complete")
|
||||
|
||||
if ((config.hasClaude || config.hasChatGPT || config.hasGemini) && !args.skipAuth) {
|
||||
const steps: string[] = []
|
||||
if (config.hasClaude) {
|
||||
steps.push(`${color.dim("opencode auth login")} ${color.gray("(select Anthropic → Claude Pro/Max)")}`)
|
||||
}
|
||||
if (config.hasChatGPT) {
|
||||
steps.push(`${color.dim("opencode auth login")} ${color.gray("(select OpenAI → ChatGPT Plus/Pro)")}`)
|
||||
}
|
||||
if (config.hasGemini) {
|
||||
steps.push(`${color.dim("opencode auth login")} ${color.gray("(select Google → OAuth with Antigravity)")}`)
|
||||
}
|
||||
p.note(steps.join("\n"), "Next Steps - Authenticate your providers")
|
||||
}
|
||||
|
||||
p.log.success(color.bold(isUpdate ? "Configuration updated!" : "Installation complete!"))
|
||||
p.log.message(`Run ${color.cyan("opencode")} to start!`)
|
||||
|
||||
@@ -432,5 +438,22 @@ export async function install(args: InstallArgs): Promise<number> {

  p.outro(color.green("oMoMoMoMo... Enjoy!"))

  if ((config.hasClaude || config.hasChatGPT || config.hasGemini || config.hasCopilot) && !args.skipAuth) {
    const providers: string[] = []
    if (config.hasClaude) providers.push(`Anthropic ${color.gray("→ Claude Pro/Max")}`)
    if (config.hasChatGPT) providers.push(`OpenAI ${color.gray("→ ChatGPT Plus/Pro")}`)
    if (config.hasGemini) providers.push(`Google ${color.gray("→ OAuth with Antigravity")}`)
    if (config.hasCopilot) providers.push(`GitHub ${color.gray("→ Copilot")}`)

    console.log()
    console.log(color.bold("🔐 Authenticate Your Providers"))
    console.log()
    console.log(`  Run ${color.cyan("opencode auth login")} and select:`)
    for (const provider of providers) {
      console.log(`  ${SYMBOLS.bullet} ${provider}`)
    }
    console.log()
  }

  return 0
}
@@ -6,6 +6,7 @@ export interface InstallArgs {
  claude?: ClaudeSubscription
  chatgpt?: BooleanArg
  gemini?: BooleanArg
  copilot?: BooleanArg
  skipAuth?: boolean
}

@@ -14,6 +15,7 @@ export interface InstallConfig {
  isMax20: boolean
  hasChatGPT: boolean
  hasGemini: boolean
  hasCopilot: boolean
}

export interface ConfigMergeResult {
@@ -28,4 +30,5 @@ export interface DetectedConfig {
  isMax20: boolean
  hasChatGPT: boolean
  hasGemini: boolean
  hasCopilot: boolean
}
@@ -198,7 +198,7 @@ export const DynamicContextPruningConfigSchema = z.object({
  /** Tools that should never be pruned */
  protected_tools: z.array(z.string()).default([
    "task", "todowrite", "todoread",
    "lsp_rename", "lsp_code_action_resolve",
    "lsp_rename",
    "session_read", "session_write", "session_search",
  ]),
  /** Pruning strategies configuration */
@@ -675,93 +675,140 @@ describe("LaunchInput.skillContent", () => {
  })
})

describe("BackgroundManager.notifyParentSession - agent context preservation", () => {
  test("should not pass agent field when parentAgent is undefined", async () => {
    // #given
interface CurrentMessage {
  agent?: string
  model?: { providerID?: string; modelID?: string }
}

describe("BackgroundManager.notifyParentSession - dynamic message lookup", () => {
  test("should use currentMessage model/agent when available", async () => {
    // #given - currentMessage has model and agent
    const task: BackgroundTask = {
      id: "task-no-agent",
      id: "task-1",
      sessionID: "session-child",
      parentSessionID: "session-parent",
      parentMessageID: "msg-parent",
      description: "task without agent context",
      description: "task with dynamic lookup",
      prompt: "test",
      agent: "explore",
      status: "completed",
      startedAt: new Date(),
      completedAt: new Date(),
      parentAgent: undefined,
      parentModel: { providerID: "anthropic", modelID: "claude-opus" },
      parentAgent: "OldAgent",
      parentModel: { providerID: "old", modelID: "old-model" },
    }
    const currentMessage: CurrentMessage = {
      agent: "Sisyphus",
      model: { providerID: "anthropic", modelID: "claude-opus-4-5" },
    }

    // #when
    const promptBody = buildNotificationPromptBody(task)
    const promptBody = buildNotificationPromptBody(task, currentMessage)

    // #then
    expect("agent" in promptBody).toBe(false)
    expect(promptBody.model).toEqual({ providerID: "anthropic", modelID: "claude-opus" })
  })

  test("should include agent field when parentAgent is defined", async () => {
    // #given
    const task: BackgroundTask = {
      id: "task-with-agent",
      sessionID: "session-child",
      parentSessionID: "session-parent",
      parentMessageID: "msg-parent",
      description: "task with agent context",
      prompt: "test",
      agent: "explore",
      status: "completed",
      startedAt: new Date(),
      completedAt: new Date(),
      parentAgent: "Sisyphus",
      parentModel: { providerID: "anthropic", modelID: "claude-opus" },
    }

    // #when
    const promptBody = buildNotificationPromptBody(task)

    // #then
    // #then - uses currentMessage values, not task.parentModel/parentAgent
    expect(promptBody.agent).toBe("Sisyphus")
    expect(promptBody.model).toEqual({ providerID: "anthropic", modelID: "claude-opus-4-5" })
  })

  test("should not pass model field when parentModel is undefined", async () => {
  test("should fallback to parentAgent when currentMessage.agent is undefined", async () => {
    // #given
    const task: BackgroundTask = {
      id: "task-no-model",
      id: "task-2",
      sessionID: "session-child",
      parentSessionID: "session-parent",
      parentMessageID: "msg-parent",
      description: "task without model context",
      description: "task fallback agent",
      prompt: "test",
      agent: "explore",
      status: "completed",
      startedAt: new Date(),
      completedAt: new Date(),
      parentAgent: "Sisyphus",
      parentAgent: "FallbackAgent",
      parentModel: undefined,
    }
    const currentMessage: CurrentMessage = { agent: undefined, model: undefined }

    // #when
    const promptBody = buildNotificationPromptBody(task)
    const promptBody = buildNotificationPromptBody(task, currentMessage)

    // #then
    // #then - falls back to task.parentAgent
    expect(promptBody.agent).toBe("FallbackAgent")
    expect("model" in promptBody).toBe(false)
  })

  test("should not pass model when currentMessage.model is incomplete", async () => {
    // #given - model missing modelID
    const task: BackgroundTask = {
      id: "task-3",
      sessionID: "session-child",
      parentSessionID: "session-parent",
      parentMessageID: "msg-parent",
      description: "task incomplete model",
      prompt: "test",
      agent: "explore",
      status: "completed",
      startedAt: new Date(),
      completedAt: new Date(),
      parentAgent: "Sisyphus",
      parentModel: { providerID: "anthropic", modelID: "claude-opus" },
    }
    const currentMessage: CurrentMessage = {
      agent: "Sisyphus",
      model: { providerID: "anthropic" },
    }

    // #when
    const promptBody = buildNotificationPromptBody(task, currentMessage)

    // #then - model not passed due to incomplete data
    expect(promptBody.agent).toBe("Sisyphus")
    expect("model" in promptBody).toBe(false)
  })

  test("should handle null currentMessage gracefully", async () => {
    // #given - no message found (messageDir lookup failed)
    const task: BackgroundTask = {
      id: "task-4",
      sessionID: "session-child",
      parentSessionID: "session-parent",
      parentMessageID: "msg-parent",
      description: "task no message",
      prompt: "test",
      agent: "explore",
      status: "completed",
      startedAt: new Date(),
      completedAt: new Date(),
      parentAgent: "Sisyphus",
      parentModel: { providerID: "anthropic", modelID: "claude-opus" },
    }

    // #when
    const promptBody = buildNotificationPromptBody(task, null)

    // #then - falls back to task.parentAgent, no model
    expect(promptBody.agent).toBe("Sisyphus")
    expect("model" in promptBody).toBe(false)
  })
})

function buildNotificationPromptBody(task: BackgroundTask): Record<string, unknown> {
function buildNotificationPromptBody(
  task: BackgroundTask,
  currentMessage: CurrentMessage | null
): Record<string, unknown> {
  const body: Record<string, unknown> = {
    parts: [{ type: "text", text: `[BACKGROUND TASK COMPLETED] Task "${task.description}" finished.` }],
  }

  if (task.parentAgent !== undefined) {
    body.agent = task.parentAgent
  }
  const agent = currentMessage?.agent ?? task.parentAgent
  const model = currentMessage?.model?.providerID && currentMessage?.model?.modelID
    ? { providerID: currentMessage.model.providerID, modelID: currentMessage.model.modelID }
    : undefined

  if (task.parentModel?.providerID && task.parentModel?.modelID) {
    body.model = { providerID: task.parentModel.providerID, modelID: task.parentModel.modelID }
  if (agent !== undefined) {
    body.agent = agent
  }
  if (model !== undefined) {
    body.model = model
  }

  return body
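The rewritten helper above resolves notification context with a "current first, stale fallback" rule: prefer the parent session's current agent/model, fall back to the snapshot captured at task creation, and drop any incomplete model object. A self-contained sketch of just that rule (types simplified from the test file):

```typescript
type Model = { providerID?: string; modelID?: string }
type CurrentMessage = { agent?: string; model?: Model } | null

// Prefer the live message context; fall back to the agent captured at task
// creation. A model missing providerID or modelID is dropped entirely
// rather than sent half-filled.
function resolveContext(current: CurrentMessage, staleAgent?: string) {
  const agent = current?.agent ?? staleAgent
  const providerID = current?.model?.providerID
  const modelID = current?.model?.modelID
  const model = providerID && modelID ? { providerID, modelID } : undefined
  return { agent, model }
}
```

The asymmetry is deliberate: the agent falls back to the stale value, but the model does not fall back to an incomplete current one.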
@@ -11,6 +11,9 @@ import type { BackgroundTaskConfig } from "../../config/schema"

import { subagentSessions } from "../claude-code-session-state"
import { getTaskToastManager } from "../task-toast-manager"
import { findNearestMessageWithFields, MESSAGE_STORAGE } from "../hook-message-injector"
import { existsSync, readdirSync } from "node:fs"
import { join } from "node:path"

const TASK_TTL_MS = 30 * 60 * 1000
const MIN_STABILITY_TIME_MS = 10 * 1000 // Must run at least 10s before stability detection kicks in
@@ -638,13 +641,32 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
</system-reminder>`
}

// Inject notification via session.prompt with noReply
// Dynamically lookup the parent session's current message context
// This ensures we use the CURRENT model/agent, not the stale one from task creation time
const messageDir = getMessageDir(task.parentSessionID)
const currentMessage = messageDir ? findNearestMessageWithFields(messageDir) : null

const agent = currentMessage?.agent ?? task.parentAgent
const model = currentMessage?.model?.providerID && currentMessage?.model?.modelID
  ? { providerID: currentMessage.model.providerID, modelID: currentMessage.model.modelID }
  : undefined

log("[background-agent] notifyParentSession context:", {
  taskId: task.id,
  messageDir: !!messageDir,
  currentAgent: currentMessage?.agent,
  currentModel: currentMessage?.model,
  resolvedAgent: agent,
  resolvedModel: model,
})

try {
  await this.client.session.prompt({
    path: { id: task.parentSessionID },
    body: {
      noReply: !allComplete, // Silent unless all complete
      agent: task.parentAgent,
      noReply: !allComplete,
      ...(agent !== undefined ? { agent } : {}),
      ...(model !== undefined ? { model } : {}),
      parts: [{ type: "text", text: notification }],
    },
  })
@@ -839,3 +861,16 @@ if (lastMessage) {
    }
  }
}

function getMessageDir(sessionID: string): string | null {
  if (!existsSync(MESSAGE_STORAGE)) return null

  const directPath = join(MESSAGE_STORAGE, sessionID)
  if (existsSync(directPath)) return directPath

  for (const dir of readdirSync(MESSAGE_STORAGE)) {
    const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
    if (existsSync(sessionPath)) return sessionPath
  }
  return null
}
@@ -117,13 +117,13 @@ If \`--create-new\`: Read all existing first (preserve context) → then delete
lsp_servers() # Check availability

# Entry points (parallel)
lsp_document_symbols(filePath="src/index.ts")
lsp_document_symbols(filePath="main.py")
lsp_symbols(filePath="src/index.ts", scope="document")
lsp_symbols(filePath="main.py", scope="document")

# Key symbols (parallel)
lsp_workspace_symbols(filePath=".", query="class")
lsp_workspace_symbols(filePath=".", query="interface")
lsp_workspace_symbols(filePath=".", query="function")
lsp_symbols(filePath=".", scope="workspace", query="class")
lsp_symbols(filePath=".", scope="workspace", query="interface")
lsp_symbols(filePath=".", scope="workspace", query="function")

# Centrality for top exports
lsp_find_references(filePath="...", line=X, character=Y)
@@ -148,20 +148,15 @@ While background agents are running, use direct tools:
### LSP Tools for Precise Analysis:

```typescript
// Get symbol information at target location
lsp_hover(filePath, line, character) // Type info, docs, signatures

// Find definition(s)
lsp_goto_definition(filePath, line, character) // Where is it defined?

// Find ALL usages across workspace
lsp_find_references(filePath, line, character, includeDeclaration=true)

// Get file structure
lsp_document_symbols(filePath) // Hierarchical outline

// Search symbols by name
lsp_workspace_symbols(filePath, query="[target_symbol]")
// Get file structure (scope='document') or search symbols (scope='workspace')
lsp_symbols(filePath, scope="document") // Hierarchical outline
lsp_symbols(filePath, scope="workspace", query="[target_symbol]") // Search by name

// Get current diagnostics
lsp_diagnostics(filePath) // Errors, warnings before we start
@@ -593,7 +588,7 @@ You already know these tools. Use them intelligently:

## LSP Tools
Leverage the full LSP toolset (\`lsp_*\`) for precision analysis. Key patterns:
- **Understand before changing**: \`lsp_hover\`, \`lsp_goto_definition\` to grasp context
- **Understand before changing**: \`lsp_goto_definition\` to grasp context
- **Impact analysis**: \`lsp_find_references\` to map all usages before modification
- **Safe refactoring**: \`lsp_prepare_rename\` → \`lsp_rename\` for symbol renames
- **Continuous verification**: \`lsp_diagnostics\` after every change
@@ -320,7 +320,6 @@ export async function executeCompact(
    "todowrite",
    "todoread",
    "lsp_rename",
    "lsp_code_action_resolve",
  ],
};
@@ -11,7 +11,6 @@ const DEFAULT_PROTECTED_TOOLS = new Set([
  "todowrite",
  "todoread",
  "lsp_rename",
  "lsp_code_action_resolve",
  "session_read",
  "session_write",
  "session_search",
@@ -1,5 +1,6 @@
import { existsSync, readFileSync } from "node:fs"
import type { PluginInput } from "@opencode-ai/plugin"
import { existsSync, readFileSync, readdirSync } from "node:fs"
import { join } from "node:path"
import { log } from "../../shared/logger"
import { readState, writeState, clearState, incrementIteration } from "./storage"
import {
@@ -9,6 +10,18 @@ import {
} from "./constants"
import type { RalphLoopState, RalphLoopOptions } from "./types"
import { getTranscriptPath as getDefaultTranscriptPath } from "../claude-code-hooks/transcript"
import { findNearestMessageWithFields, MESSAGE_STORAGE } from "../../features/hook-message-injector"

function getMessageDir(sessionID: string): string | null {
  if (!existsSync(MESSAGE_STORAGE)) return null
  const directPath = join(MESSAGE_STORAGE, sessionID)
  if (existsSync(directPath)) return directPath
  for (const dir of readdirSync(MESSAGE_STORAGE)) {
    const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
    if (existsSync(sessionPath)) return sessionPath
  }
  return null
}

export * from "./types"
export * from "./constants"
@@ -302,9 +315,18 @@ export function createRalphLoopHook(
      .catch(() => {})

    try {
      const messageDir = getMessageDir(sessionID)
      const currentMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
      const agent = currentMessage?.agent
      const model = currentMessage?.model?.providerID && currentMessage?.model?.modelID
        ? { providerID: currentMessage.model.providerID, modelID: currentMessage.model.modelID }
        : undefined

      await ctx.client.session.prompt({
        path: { id: sessionID },
        body: {
          ...(agent !== undefined ? { agent } : {}),
          ...(model !== undefined ? { model } : {}),
          parts: [{ type: "text", text: continuationPrompt }],
        },
        query: { directory: ctx.directory },
@@ -407,10 +407,17 @@ export function createSisyphusOrchestratorHook(
    try {
      log(`[${HOOK_NAME}] Injecting boulder continuation`, { sessionID, planName, remaining })

      const messageDir = getMessageDir(sessionID)
      const currentMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
      const model = currentMessage?.model?.providerID && currentMessage?.model?.modelID
        ? { providerID: currentMessage.model.providerID, modelID: currentMessage.model.modelID }
        : undefined

      await ctx.client.session.prompt({
        path: { id: sessionID },
        body: {
          agent: "orchestrator-sisyphus",
          ...(model !== undefined ? { model } : {}),
          parts: [{ type: "text", text: prompt }],
        },
        query: { directory: ctx.directory },
@@ -13,8 +13,7 @@ const TRUNCATABLE_TOOLS = [
  "Glob",
  "safe_glob",
  "lsp_find_references",
  "lsp_document_symbols",
  "lsp_workspace_symbols",
  "lsp_symbols",
  "lsp_diagnostics",
  "ast_grep_search",
  "interactive_bash",
@@ -253,7 +253,7 @@ export function createConfigHandler(deps: ConfigHandlerDeps) {
    : {};

  const planDemoteConfig = replacePlan
    ? { mode: "subagent" as const, hidden: true }
    ? { mode: "subagent" as const }
    : undefined;

  config.agent = {
@@ -305,6 +305,12 @@ export function createConfigHandler(deps: ConfigHandlerDeps) {
      call_omo_agent: false,
    };
  }
  if (agentResult["Prometheus (Planner)"]) {
    (agentResult["Prometheus (Planner)"] as { tools?: Record<string, unknown> }).tools = {
      ...(agentResult["Prometheus (Planner)"] as { tools?: Record<string, unknown> }).tools,
      call_omo_agent: false,
    };
  }

  config.permission = {
    ...(config.permission as Record<string, unknown>),
@@ -457,13 +457,13 @@ describe("migrateConfigFile with backup", () => {
})
})

test("creates backup file with timestamp when migration needed", () => {
// #given: Config file path and config needing migration
test("creates backup file with timestamp when legacy migration needed", () => {
// #given: Config file path with legacy agent names needing migration
const testConfigPath = "/tmp/test-config-migration.json"
const testConfigContent = globalThis.JSON.stringify({ agents: { oracle: { model: "openai/gpt-5.2" } } }, null, 2)
const testConfigContent = globalThis.JSON.stringify({ agents: { omo: { model: "test" } } }, null, 2)
const rawConfig: Record<string, unknown> = {
agents: {
oracle: { model: "openai/gpt-5.2" },
omo: { model: "test" },
},
}
@@ -492,70 +492,54 @@ describe("migrateConfigFile with backup", () => {
expect(backupContent).toBe(testConfigContent)
})

test("deletes agent config when all fields match category defaults", () => {
// #given: Config with agent matching category defaults
const testConfigPath = "/tmp/test-config-delete.json"
test("preserves model setting without auto-conversion to category", () => {
// #given: Config with model setting (should NOT be converted to category)
const testConfigPath = "/tmp/test-config-preserve-model.json"
const rawConfig: Record<string, unknown> = {
agents: {
oracle: {
model: "openai/gpt-5.2",
temperature: 0.1,
},
"multimodal-looker": { model: "anthropic/claude-haiku-4-5" },
oracle: { model: "openai/gpt-5.2" },
"my-custom-agent": { model: "google/gemini-3-pro-preview" },
},
}

fs.writeFileSync(testConfigPath, globalThis.JSON.stringify({ agents: { oracle: { model: "openai/gpt-5.2" } } }, null, 2))
fs.writeFileSync(testConfigPath, globalThis.JSON.stringify(rawConfig, null, 2))
cleanupPaths.push(testConfigPath)

// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)

// #then: Agent should be deleted (matches strategic category defaults)
expect(needsWrite).toBe(true)
// #then: No migration needed - model settings should be preserved as-is
expect(needsWrite).toBe(false)

const migratedConfig = JSON.parse(fs.readFileSync(testConfigPath, "utf-8"))
expect(migratedConfig.agents).toEqual({})

const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
backupFiles.forEach((f) => cleanupPaths.push(path.join(dir, f)))
const agents = rawConfig.agents as Record<string, Record<string, unknown>>
expect(agents["multimodal-looker"].model).toBe("anthropic/claude-haiku-4-5")
expect(agents.oracle.model).toBe("openai/gpt-5.2")
expect(agents["my-custom-agent"].model).toBe("google/gemini-3-pro-preview")
})

test("keeps agent config with category when fields differ from defaults", () => {
// #given: Config with agent having custom temperature override
const testConfigPath = "/tmp/test-config-keep.json"
test("preserves category setting when explicitly set", () => {
// #given: Config with explicit category setting
const testConfigPath = "/tmp/test-config-preserve-category.json"
const rawConfig: Record<string, unknown> = {
agents: {
oracle: {
model: "openai/gpt-5.2",
temperature: 0.5,
},
"multimodal-looker": { category: "quick" },
oracle: { category: "ultrabrain" },
},
}

fs.writeFileSync(testConfigPath, globalThis.JSON.stringify({ agents: { oracle: { model: "openai/gpt-5.2" } } }, null, 2))
fs.writeFileSync(testConfigPath, globalThis.JSON.stringify(rawConfig, null, 2))
cleanupPaths.push(testConfigPath)

// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)

// #then: Agent should be kept with category and custom override
expect(needsWrite).toBe(true)
// #then: No migration needed - category settings should be preserved as-is
expect(needsWrite).toBe(false)

const migratedConfig = JSON.parse(fs.readFileSync(testConfigPath, "utf-8"))
const agents = migratedConfig.agents as Record<string, unknown>
expect(agents.oracle).toBeDefined()
expect((agents.oracle as Record<string, unknown>).category).toBe("ultrabrain")
expect((agents.oracle as Record<string, unknown>).temperature).toBe(0.5)
expect((agents.oracle as Record<string, unknown>).model).toBeUndefined()

const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
backupFiles.forEach((f) => cleanupPaths.push(path.join(dir, f)))
const agents = rawConfig.agents as Record<string, Record<string, unknown>>
expect(agents["multimodal-looker"].category).toBe("quick")
expect(agents.oracle.category).toBe("ultrabrain")
})

test("does not write when no migration needed", () => {
@@ -583,56 +567,5 @@ describe("migrateConfigFile with backup", () => {
expect(backupFiles.length).toBe(0)
})

test("handles multiple agent migrations correctly", () => {
// #given: Config with multiple agents needing migration
const testConfigPath = "/tmp/test-config-multi-agent.json"
const rawConfig: Record<string, unknown> = {
agents: {
oracle: { model: "openai/gpt-5.2" },
librarian: { model: "anthropic/claude-sonnet-4-5" },
frontend: {
model: "google/gemini-3-pro-preview",
temperature: 0.9,
},
},
}

fs.writeFileSync(
testConfigPath,
globalThis.JSON.stringify(
{
agents: {
oracle: { model: "openai/gpt-5.2" },
librarian: { model: "anthropic/claude-sonnet-4-5" },
frontend: { model: "google/gemini-3-pro-preview" },
},
},
null,
2,
),
)
cleanupPaths.push(testConfigPath)

// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)

// #then: Should migrate correctly
expect(needsWrite).toBe(true)

const migratedConfig = JSON.parse(fs.readFileSync(testConfigPath, "utf-8"))
const agents = migratedConfig.agents as Record<string, unknown>

expect(agents.oracle).toBeUndefined()
expect(agents.librarian).toBeUndefined()

expect(agents.frontend).toBeDefined()
expect((agents.frontend as Record<string, unknown>).category).toBe("visual-engineering")
expect((agents.frontend as Record<string, unknown>).temperature).toBe(0.9)

const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
backupFiles.forEach((f) => cleanupPaths.push(path.join(dir, f)))
})
})
@@ -22,6 +22,21 @@ export const AGENT_NAME_MAP: Record<string, string> = {
"multimodal-looker": "multimodal-looker",
}

export const BUILTIN_AGENT_NAMES = new Set([
"Sisyphus",
"oracle",
"librarian",
"explore",
"frontend-ui-ux-engineer",
"document-writer",
"multimodal-looker",
"Metis (Plan Consultant)",
"Momus (Plan Reviewer)",
"Prometheus (Planner)",
"orchestrator-sisyphus",
"build",
])

// Migration map: old hook names → new hook names (for backward compatibility)
export const HOOK_NAME_MAP: Record<string, string> = {
// Legacy names (backward compatibility)
@@ -117,21 +132,7 @@ export function migrateConfigFile(configPath: string, rawConfig: Record<string,
}
}

if (rawConfig.agents && typeof rawConfig.agents === "object") {
const agents = rawConfig.agents as Record<string, Record<string, unknown>>
for (const [name, config] of Object.entries(agents)) {
const { migrated, changed } = migrateAgentConfigToCategory(config)
if (changed) {
const category = migrated.category as string
if (shouldDeleteAgentConfig(migrated, category)) {
delete agents[name]
} else {
agents[name] = migrated
}
needsWrite = true
}
}
}

if (rawConfig.omo_agent) {
rawConfig.sisyphus_agent = rawConfig.omo_agent
@@ -30,7 +30,7 @@ tools/
## TOOL CATEGORIES
| Category | Tools | Purpose |
|----------|-------|---------|
| LSP | lsp_hover, lsp_goto_definition, lsp_find_references, lsp_diagnostics, lsp_rename, etc. | IDE-grade code intelligence (11 tools) |
| LSP | lsp_goto_definition, lsp_find_references, lsp_symbols, lsp_diagnostics, lsp_rename, etc. | IDE-grade code intelligence (7 tools) |
| AST | ast_grep_search, ast_grep_replace | Structural pattern matching/rewriting |
| Search | grep, glob | Timeout-safe file and content search |
| Session | session_list, session_read, session_search, session_info | History navigation and retrieval |
@@ -1,15 +1,11 @@
import {
lsp_hover,
lsp_goto_definition,
lsp_find_references,
lsp_document_symbols,
lsp_workspace_symbols,
lsp_symbols,
lsp_diagnostics,
lsp_servers,
lsp_prepare_rename,
lsp_rename,
lsp_code_actions,
lsp_code_action_resolve,
lspManager,
} from "./lsp"
@@ -60,17 +56,13 @@ export function createBackgroundTools(manager: BackgroundManager, client: Openco
}

export const builtinTools: Record<string, ToolDefinition> = {
lsp_hover,
lsp_goto_definition,
lsp_find_references,
lsp_document_symbols,
lsp_workspace_symbols,
lsp_symbols,
lsp_diagnostics,
lsp_servers,
lsp_prepare_rename,
lsp_rename,
lsp_code_actions,
lsp_code_action_resolve,
ast_grep_search,
ast_grep_replace,
grep,
@@ -7,19 +7,16 @@ import {
} from "./constants"
import {
withLspClient,
formatHoverResult,
formatLocation,
formatDocumentSymbol,
formatSymbolInfo,
formatDiagnostic,
filterDiagnosticsBySeverity,
formatPrepareRenameResult,
formatCodeActions,
applyWorkspaceEdit,
formatApplyResult,
} from "./utils"
import type {
HoverResult,
Location,
LocationLink,
DocumentSymbol,
@@ -28,33 +25,10 @@ import type {
PrepareRenameResult,
PrepareRenameDefaultBehavior,
WorkspaceEdit,
CodeAction,
Command,
} from "./types"
export const lsp_hover: ToolDefinition = tool({
description: "Get type info, docs, and signature for a symbol at position.",
args: {
filePath: tool.schema.string(),
line: tool.schema.number().min(1).describe("1-based"),
character: tool.schema.number().min(0).describe("0-based"),
},
execute: async (args, context) => {
try {
const result = await withLspClient(args.filePath, async (client) => {
return (await client.hover(args.filePath, args.line, args.character)) as HoverResult | null
})
const output = formatHoverResult(result)
return output
} catch (e) {
const output = `Error: ${e instanceof Error ? e.message : String(e)}`
return output
}
},
})

export const lsp_goto_definition: ToolDefinition = tool({
description: "Jump to symbol definition. Find WHERE something is defined.",
args: {
@@ -129,75 +103,68 @@ export const lsp_find_references: ToolDefinition = tool({
},
})

export const lsp_document_symbols: ToolDefinition = tool({
description: "Get hierarchical outline of all symbols in a file.",
export const lsp_symbols: ToolDefinition = tool({
description: "Get symbols from file (document) or search across workspace. Use scope='document' for file outline, scope='workspace' for project-wide symbol search.",
args: {
filePath: tool.schema.string(),
filePath: tool.schema.string().describe("File path for LSP context"),
scope: tool.schema.enum(["document", "workspace"]).default("document").describe("'document' for file symbols, 'workspace' for project-wide search"),
query: tool.schema.string().optional().describe("Symbol name to search (required for workspace scope)"),
limit: tool.schema.number().optional().describe("Max results (default 50)"),
},
execute: async (args, context) => {
try {
const result = await withLspClient(args.filePath, async (client) => {
return (await client.documentSymbols(args.filePath)) as DocumentSymbol[] | SymbolInfo[] | null
})
const scope = args.scope ?? "document"

if (scope === "workspace") {
if (!args.query) {
return "Error: 'query' is required for workspace scope"
}

const result = await withLspClient(args.filePath, async (client) => {
return (await client.workspaceSymbols(args.query!)) as SymbolInfo[] | null
})

if (!result || result.length === 0) {
const output = "No symbols found"
return output
}
if (!result || result.length === 0) {
return "No symbols found"
}

const total = result.length
const truncated = total > DEFAULT_MAX_SYMBOLS
const limited = truncated ? result.slice(0, DEFAULT_MAX_SYMBOLS) : result

const lines: string[] = []
if (truncated) {
lines.push(`Found ${total} symbols (showing first ${DEFAULT_MAX_SYMBOLS}):`)
}

if ("range" in limited[0]) {
lines.push(...(limited as DocumentSymbol[]).map((s) => formatDocumentSymbol(s)))
const total = result.length
const limit = Math.min(args.limit ?? DEFAULT_MAX_SYMBOLS, DEFAULT_MAX_SYMBOLS)
const truncated = total > limit
const limited = result.slice(0, limit)
const lines = limited.map(formatSymbolInfo)
if (truncated) {
lines.unshift(`Found ${total} symbols (showing first ${limit}):`)
}
return lines.join("\n")
} else {
lines.push(...(limited as SymbolInfo[]).map(formatSymbolInfo))
const result = await withLspClient(args.filePath, async (client) => {
return (await client.documentSymbols(args.filePath)) as DocumentSymbol[] | SymbolInfo[] | null
})

if (!result || result.length === 0) {
return "No symbols found"
}

const total = result.length
const limit = Math.min(args.limit ?? DEFAULT_MAX_SYMBOLS, DEFAULT_MAX_SYMBOLS)
const truncated = total > limit
const limited = truncated ? result.slice(0, limit) : result

const lines: string[] = []
if (truncated) {
lines.push(`Found ${total} symbols (showing first ${limit}):`)
}

if ("range" in limited[0]) {
lines.push(...(limited as DocumentSymbol[]).map((s) => formatDocumentSymbol(s)))
} else {
lines.push(...(limited as SymbolInfo[]).map(formatSymbolInfo))
}
return lines.join("\n")
}
return lines.join("\n")
} catch (e) {
const output = `Error: ${e instanceof Error ? e.message : String(e)}`
return output
}
},
})
export const lsp_workspace_symbols: ToolDefinition = tool({
description: "Search symbols by name across ENTIRE workspace.",
args: {
filePath: tool.schema.string(),
query: tool.schema.string().describe("Symbol name (fuzzy match)"),
limit: tool.schema.number().optional().describe("Max results"),
},
execute: async (args, context) => {
try {
const result = await withLspClient(args.filePath, async (client) => {
return (await client.workspaceSymbols(args.query)) as SymbolInfo[] | null
})

if (!result || result.length === 0) {
const output = "No symbols found"
return output
}

const total = result.length
const limit = Math.min(args.limit ?? DEFAULT_MAX_SYMBOLS, DEFAULT_MAX_SYMBOLS)
const truncated = total > limit
const limited = result.slice(0, limit)
const lines = limited.map(formatSymbolInfo)
if (truncated) {
lines.unshift(`Found ${total} symbols (showing first ${limit}):`)
}
const output = lines.join("\n")
return output
} catch (e) {
const output = `Error: ${e instanceof Error ? e.message : String(e)}`
return output
return `Error: ${e instanceof Error ? e.message : String(e)}`
}
},
})
@@ -317,89 +284,3 @@ export const lsp_rename: ToolDefinition = tool({
}
},
})

export const lsp_code_actions: ToolDefinition = tool({
description: "Get available quick fixes, refactorings, and source actions (organize imports, fix all).",
args: {
filePath: tool.schema.string(),
startLine: tool.schema.number().min(1).describe("1-based"),
startCharacter: tool.schema.number().min(0).describe("0-based"),
endLine: tool.schema.number().min(1).describe("1-based"),
endCharacter: tool.schema.number().min(0).describe("0-based"),
kind: tool.schema
.enum([
"quickfix",
"refactor",
"refactor.extract",
"refactor.inline",
"refactor.rewrite",
"source",
"source.organizeImports",
"source.fixAll",
])
.optional()
.describe("Filter by code action kind"),
},
execute: async (args, context) => {
try {
const only = args.kind ? [args.kind] : undefined
const result = await withLspClient(args.filePath, async (client) => {
return (await client.codeAction(
args.filePath,
args.startLine,
args.startCharacter,
args.endLine,
args.endCharacter,
only
)) as (CodeAction | Command)[] | null
})
const output = formatCodeActions(result)
return output
} catch (e) {
const output = `Error: ${e instanceof Error ? e.message : String(e)}`
return output
}
},
})
export const lsp_code_action_resolve: ToolDefinition = tool({
description: "Resolve and APPLY a code action from lsp_code_actions.",
args: {
filePath: tool.schema.string(),
codeAction: tool.schema.string().describe("Code action JSON from lsp_code_actions"),
},
execute: async (args, context) => {
try {
const codeAction = JSON.parse(args.codeAction) as CodeAction
const resolved = await withLspClient(args.filePath, async (client) => {
return (await client.codeActionResolve(codeAction)) as CodeAction | null
})

if (!resolved) {
const output = "Failed to resolve code action"
return output
}

const lines: string[] = []
lines.push(`Action: ${resolved.title}`)
if (resolved.kind) lines.push(`Kind: ${resolved.kind}`)

if (resolved.edit) {
const result = applyWorkspaceEdit(resolved.edit)
lines.push(formatApplyResult(result))
} else {
lines.push("No edit to apply")
}

if (resolved.command) {
lines.push(`Command: ${resolved.command.title} (${resolved.command.command}) - not executed`)
}

const output = lines.join("\n")
return output
} catch (e) {
const output = `Error: ${e instanceof Error ? e.message : String(e)}`
return output
}
},
})
@@ -218,9 +218,18 @@ Use `background_output` with task_id="${task.id}" to check progress.`
})

try {
const resumeMessageDir = getMessageDir(args.resume)
const resumeMessage = resumeMessageDir ? findNearestMessageWithFields(resumeMessageDir) : null
const resumeAgent = resumeMessage?.agent
const resumeModel = resumeMessage?.model?.providerID && resumeMessage?.model?.modelID
? { providerID: resumeMessage.model.providerID, modelID: resumeMessage.model.modelID }
: undefined

await client.session.prompt({
path: { id: args.resume },
body: {
...(resumeAgent !== undefined ? { agent: resumeAgent } : {}),
...(resumeModel !== undefined ? { model: resumeModel } : {}),
tools: {
task: false,
sisyphus_task: false,