Compare commits


105 Commits

Author SHA1 Message Date
github-actions[bot]
ac80e268d0 release: v3.0.0-beta.6 2026-01-14 02:03:03 +00:00
justsisyphus
4d9c664694 ci: improve publish workflow UX with beta release example (#760)
* ci: improve publish workflow UX with beta release example

* fix: remove non-existent google-auth.ts from build, add missing --external flag

---------

Co-authored-by: justsisyphus <justsisyphus@users.noreply.github.com>
2026-01-14 10:59:33 +09:00
github-actions[bot]
34863a77ef @justsisyphus has signed the CLA in code-yeongyu/oh-my-opencode#760 2026-01-14 01:58:03 +00:00
github-actions[bot]
a1b881f38e @abhijit360 has signed the CLA in code-yeongyu/oh-my-opencode#759 2026-01-14 01:55:26 +00:00
Kenny
2b556e0f6c Merge pull request #754 from code-yeongyu/fix/sisyphus-task-sync-mode-tests
fix(sisyphus-task): guard client.session.get and update sync mode tests
2026-01-13 13:44:17 -05:00
Kenny
444b7ce991 fix(sisyphus-task): guard client.session.get and update sync mode tests
- Add guard clause to check if client.session.get exists before calling
- Update 4 sync mode tests to properly mock session.get
- Fixes test failures from PR #731 directory inheritance feature
2026-01-13 13:41:41 -05:00
Kenny
31c5951dfc Merge pull request #731 from oussamadouhou/fix/background-task-directory-inheritance
fix(background-agent): inherit parent session directory for background tasks
2026-01-13 12:38:38 -05:00
github-actions[bot]
84dcb32608 @kdcokenny has signed the CLA in code-yeongyu/oh-my-opencode#731 2026-01-13 17:13:49 +00:00
Kenny
f2dc61f1a3 Merge pull request #750 from code-yeongyu/refactor/remove-builtin-google-auth 2026-01-13 12:04:09 -05:00
Kenny
d99d79aebf chore: delete Google Antigravity OAuth implementation (~8k lines) 2026-01-13 11:55:03 -05:00
Kenny
c78661b1f2 chore: remove deprecated Google Antigravity OAuth code 2026-01-13 11:20:55 -05:00
Kenny
1f47ea9937 Merge pull request #753 from code-yeongyu/fix/todo-continuation-abort
fix(todo-continuation): implement hybrid abort detection for reliable ESC ESC handling
2026-01-13 11:00:01 -05:00
sisyphus-dev-ai
3920f843af fix(todo-continuation): implement hybrid abort detection
- Add event-based abort detection as primary method
- Keep API-based detection as fallback
- Track abort events via session.error with 3s time window
- Clear abort flag on user/assistant activity and tool execution
- Add comprehensive tests for hybrid approach (8 new test cases)
- All 663 tests pass

Fixes #577
2026-01-13 10:48:41 -05:00
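The hybrid abort detection described in this commit can be sketched roughly as follows. The class and method names here are illustrative, not the plugin's actual identifiers; only the 3-second window and the clear-on-activity behaviour come from the commit message.

```typescript
// Illustrative sketch of time-window abort tracking (names hypothetical).
const ABORT_WINDOW_MS = 3_000;

class AbortTracker {
  private lastAbortAt: number | null = null;

  // Called when a session.error event looks like a user abort (ESC ESC).
  recordAbort(now: number = Date.now()): void {
    this.lastAbortAt = now;
  }

  // Called on user/assistant activity or tool execution: the session is
  // clearly alive again, so any stale abort flag is cleared.
  clear(): void {
    this.lastAbortAt = null;
  }

  // Primary, event-based check: was an abort seen within the last 3s?
  wasRecentlyAborted(now: number = Date.now()): boolean {
    return this.lastAbortAt !== null && now - this.lastAbortAt <= ABORT_WINDOW_MS;
  }
}
```

The API-based detection mentioned in the commit would remain as a fallback for aborts that never surface as events.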
Kenny
42de7c3e40 Merge pull request #749 from Momentum96/fix/categories-deep-merge
Fix categories not being deep merged in mergeConfigs
2026-01-13 09:56:17 -05:00
Kenny
1a3fb0035b Merge pull request #745 from LTS2/test/deep-merge-unit-tests
test(shared): add unit tests for deep-merge utility
2026-01-13 09:50:59 -05:00
GeonWoo Jeon
6d4cebd17f Fix categories not being deep merged in mergeConfigs
When merging user and project configs, categories were simply
spread instead of deep merged. This caused user-level category
model settings to be completely overwritten by project-level
configs, even when the project config only specified partial
overrides like temperature.

Add deepMerge for categories field and comprehensive tests.
2026-01-13 23:06:48 +09:00
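The bug this commit fixes is easy to reproduce in miniature: a shallow spread replaces the whole category entry, while a deep merge keeps user-level fields that the project config does not override. The `deepMerge` below is a simplified stand-in for the shared utility, not the actual implementation.

```typescript
// Simplified stand-in for the shared deep-merge utility.
type Json = { [key: string]: unknown };

function deepMerge(base: Json, override: Json): Json {
  const out: Json = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      existing && typeof existing === "object" && !Array.isArray(existing)
    ) {
      out[key] = deepMerge(existing as Json, value as Json);
    } else {
      out[key] = value;
    }
  }
  return out;
}

const userCategories = { ultrabrain: { model: "anthropic/claude-opus-4-5", temperature: 0.1 } };
const projectCategories = { ultrabrain: { temperature: 0.3 } };

// Shallow spread: the user's model setting is silently lost.
const shallow = { ...userCategories, ...projectCategories };
// Deep merge: model survives, temperature is overridden.
const deep = deepMerge(userCategories, projectCategories);
```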
Kenny
3afdaadaad refactor: remove built-in Google auth in favor of external plugin
- Delete src/auth/antigravity/ directory (28 files)
- Delete src/google-auth.ts standalone wrapper
- Delete src/cli/commands/auth.ts CLI command
- Remove google_auth config option from schema
- Update CLI to remove auth command registration
- Update config-manager to remove google_auth handling
- Update documentation to reference external opencode-antigravity-auth plugin only
- Regenerate JSON schema

Users should install the opencode-antigravity-auth plugin for Gemini authentication.

BREAKING CHANGE: The google_auth config option is removed. Use the external plugin instead.
2026-01-13 09:03:43 -05:00
ewjin
2042a29877 test(shared): add unit tests for deep-merge utility
Add comprehensive unit tests for the deep-merge.ts utility functions:

- isPlainObject: 11 test cases covering null, undefined, primitives,
  Array, Date, RegExp, and plain objects
- deepMerge: 15 test cases covering:
  - Basic object merging
  - Deep nested object merging
  - Edge cases (undefined handling)
  - Array replacement behavior
  - Prototype pollution protection (DANGEROUS_KEYS)
  - MAX_DEPTH limit handling
2026-01-13 22:23:11 +09:00
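Two of the behaviours those tests cover can be sketched independently of the actual `deep-merge.ts` source: plain-object detection (which must reject arrays, `Date`, and `RegExp`) and the `DANGEROUS_KEYS` prototype-pollution guard. Both bodies here are illustrative.

```typescript
// Illustrative versions of the two behaviours under test.
const DANGEROUS_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function isPlainObject(value: unknown): value is Record<string, unknown> {
  if (typeof value !== "object" || value === null) return false;
  const proto = Object.getPrototypeOf(value);
  // Arrays, Date, RegExp, class instances all have other prototypes.
  return proto === Object.prototype || proto === null;
}

function safeAssign(
  target: Record<string, unknown>,
  source: Record<string, unknown>
): void {
  for (const key of Object.keys(source)) {
    if (DANGEROUS_KEYS.has(key)) continue; // prototype-pollution protection
    target[key] = source[key];
  }
}
```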
github-actions[bot]
c6fb5e58c8 @haal-laah has signed the CLA in code-yeongyu/oh-my-opencode#739 2026-01-13 13:21:45 +00:00
github-actions[bot]
2dd9cf7b88 @LTS2 has signed the CLA in code-yeongyu/oh-my-opencode#745 2026-01-13 12:57:54 +00:00
YeonGyu-Kim
d68f90f796 feat(agents): enable call_omo_agent for Sisyphus-Junior subagents
Allow Sisyphus-Junior (category-based tasks) to spawn explore/librarian
agents via call_omo_agent for research capabilities.

Changes:
- Remove call_omo_agent from BLOCKED_TOOLS in sisyphus-junior.ts
- Update prompt to show ALLOWED status for call_omo_agent
- Remove global call_omo_agent blocking in config-handler.ts
- Keep blocking for orchestrator-sisyphus (use sisyphus_task instead)
- Keep runtime recursion prevention in index.ts for explore/librarian

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-13 21:00:00 +09:00
YeonGyu-Kim
e6e25e6d93 fix(agents): enable call_omo_agent for background agents while restricting recursive calls
- Enable call_omo_agent tool for skill execution in BackgroundManager
- Enable call_omo_agent tool for agent execution in BackgroundManager
- Enable call_omo_agent tool for sisyphus_task resume operations
- Enable call_omo_agent tool for sisyphus_task category-based delegation
- Restrict recursive task and sisyphus_task calls to prevent loops
- Allows background agents to delegate to other agents cleanly

🤖 Generated with OhMyOpenCode assistance
2026-01-13 21:00:00 +09:00
YeonGyu-Kim
0c996669b0 Revert "fix(agents): use createAgentToolRestrictions for Sisyphus call_omo_agent deny"
This reverts commit 9011111eb0575fcdc630fd33043e5524640adfe0.
2026-01-13 21:00:00 +09:00
YeonGyu-Kim
8916a32ea0 fix(agents): use createAgentToolRestrictions for Sisyphus call_omo_agent deny
Use version-aware permission system instead of hardcoded tools object.
This ensures call_omo_agent is properly denied on both old (tools) and
new (permission) OpenCode versions.
2026-01-13 21:00:00 +09:00
YeonGyu-Kim
cddbd0d945 refactor(agents): move question permission from orchestrator to prometheus
Restrict question tool to primary agents only:
- Remove from orchestrator-sisyphus (subagent orchestration)
- Add to prometheus (planner needs to ask clarifying questions)
2026-01-13 21:00:00 +09:00
YeonGyu-Kim
9e8173593f fix(background-agent): improve task completion detection and concurrency release
- manager.ts: Release concurrency key immediately on task completion, not after retention
- call-omo-agent: Add polling loop for sync agent completion detection
- sisyphus-task: Add abort handling, improve poll logging for debugging
2026-01-13 21:00:00 +09:00
YeonGyu-Kim
d9ab6ab99b docs: update AGENTS.md hierarchy with latest structure and line counts
- Root: Add Prometheus/Metis/Momus agents, MCP architecture, 82 test files
- agents/: Document 7-section delegation and wisdom notepad
- auth/: Multi-account load balancing (10 accounts), endpoint fallback
- features/: Update background-agent 825 lines, builtin-skills 1230 lines
- hooks/: 22+ hooks with event timing details
- tools/: sisyphus-task 583 lines, LSP client 632 lines
- cli/: config-manager 725 lines, 17+ doctor checks
- shared/: Cross-cutting utilities with usage patterns
2026-01-13 21:00:00 +09:00
YeonGyu-Kim
cf53b2b51a feat(agents): enable question tool permission for Sisyphus agents
Allow Sisyphus and orchestrator-sisyphus agents to use OpenCode's
question tool for interactive user prompts. OpenCode defaults
question permission to "deny" for all agents except build/plan.
2026-01-13 21:00:00 +09:00
Kenny
cf66a86e16 Merge pull request #560 from code-yeongyu/fix/install-preserve-config
fix(cli): preserve user config on reinstall
2026-01-13 07:22:51 -05:00
Nguyen Khac Trung Kien
d2a5f47f1c Merge pull request #677 from jkoelker/fix/add-variant-support 2026-01-13 12:48:57 +07:00
Oussama Douhou
9e98cef182 fix(background-agent): inherit parent session directory for background tasks
Background tasks were defaulting to $HOME instead of the parent session's
working directory. This caused background agents to scan the entire home
directory instead of the project directory, leading to:
- High CPU/memory load from scanning unrelated files
- Permission errors on system directories
- Task failures and timeouts

The fix retrieves the parent session's directory before creating a new
background session and passes it via the query.directory parameter.

Files modified:
- manager.ts: Look up parent session directory in launch()
- call-omo-agent/tools.ts: Same fix for sync mode
- look-at/tools.ts: Same fix for look_at tool
- sisyphus-task/tools.ts: Same fix + interface update for directory prop
- index.ts: Pass directory to sisyphusTask factory
2026-01-13 06:27:56 +01:00
Jason Kölker
2b8853cbac feat(config): add model variant support
Allow optional model variant config for agents and categories.
Propagate category variants into task model payloads so
category-driven runs inherit provider-specific variants.

Closes: #647
2026-01-13 04:37:51 +00:00
Kenny
f9fce50144 Merge pull request #728 from code-yeongyu/fix/sisyphus-orchestrator-test-assertion
fix(test): update sisyphus-orchestrator test assertion
2026-01-12 23:06:36 -05:00
Kenny
d1ffecd887 fix(test): update sisyphus-orchestrator test to expect preserved subagent response
The implementation preserves original subagent responses for debugging failed tasks.
Updated test assertion from .not.toContain() to .toContain() to match this behavior.
2026-01-12 23:04:34 -05:00
Kenny
d9aabb33fd Merge pull request #709 from Momentum96/fix/skill-lazy-loading
fix(skill-loader): implement eager loading for skills
2026-01-12 22:50:31 -05:00
Kenny
79bd75b3db refactor(skill-loader): eager loading with atomic file reads
- Extract body during initial parseFrontmatter call
- Rename lazyContent → eagerLoader with rationale comment
- Eliminates redundant file read and race condition
2026-01-12 22:46:28 -05:00
Kenny
14dc8ee8df Merge pull request #698 from chilipvlmer/fix/preserve-subagent-response
fix(sisyphus-orchestrator): preserve subagent response in output transformation
2026-01-12 22:16:59 -05:00
Kenny
6ea63706db Merge pull request #726 from code-yeongyu/fix/todowrite-agent-friendly-errors
fix(hooks): throw agent-friendly errors when todowrite receives invalid input
2026-01-12 22:11:28 -05:00
Kenny
864656475a fix: only append ellipsis when string exceeds 100 chars 2026-01-12 22:05:21 -05:00
YeonGyu-Kim
9048b616e3 Merge pull request #727 from code-yeongyu/feat/disable-call-omo-agent-default
feat(tools): disable call_omo_agent by default, enable via sisyphus_task
2026-01-13 11:26:27 +09:00
YeonGyu-Kim
4fe4fb1adf feat(tools): disable call_omo_agent by default, enable via sisyphus_task 2026-01-13 11:21:01 +09:00
Kenny
04ae3642d9 fix(hooks): throw agent-friendly errors when todowrite receives invalid input 2026-01-12 21:19:05 -05:00
Victor Sumner
70d604e0e4 fix(sisyphus-junior): use categoryConfig.model instead of hardcoded sonnet-4.5 (#718) 2026-01-13 09:58:05 +09:00
Nguyen Khac Trung Kien
8d65748ad3 fix(prometheus): prevent agent fallback to build in background tasks (#695) 2026-01-13 09:39:25 +09:00
Kenny
2314a0d371 fix(glob): default hidden=true and follow=true to align with OpenCode (#720)
- Add follow?: boolean option to GlobOptions interface
- Change buildRgArgs to use !== false pattern for hidden and follow flags
- Change buildFindArgs to use === false pattern, add -L for symlinks
- Change buildPowerShellCommand to use !== false pattern for hidden
- Remove -FollowSymlink from PowerShell (unsupported in PS 5.1)
- Export build functions for testing
- Add comprehensive BDD-style tests (18 tests, 21 assertions)

Note: Symlink following via -FollowSymlink is not supported in Windows
PowerShell 5.1. OpenCode auto-downloads ripgrep which handles symlinks
via --follow flag. PowerShell fallback is a safety net that rarely triggers.

Fixes #631
2026-01-13 09:24:07 +09:00
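The `!== false` pattern mentioned in this commit is the standard way to default a boolean option to `true` while still letting callers opt out explicitly. A simplified `buildRgArgs` shows the idea; the `--hidden` and `--follow` flags are real ripgrep flags, but the builder itself is a sketch.

```typescript
// Default-true flag pattern: enabled unless the caller passes `false`.
interface GlobOptions {
  hidden?: boolean;
  follow?: boolean;
}

function buildRgArgs(pattern: string, options: GlobOptions = {}): string[] {
  const args = ["--files", "--glob", pattern];
  if (options.hidden !== false) args.push("--hidden"); // include dotfiles by default
  if (options.follow !== false) args.push("--follow"); // follow symlinks by default
  return args;
}
```

Note that a plain `if (options.hidden)` would flip the default to `false` whenever the option is omitted, which is exactly the misalignment with OpenCode this commit fixes.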
github-actions[bot]
e620b546ab @dante01yoon has signed the CLA in code-yeongyu/oh-my-opencode#710 2026-01-12 12:39:03 +00:00
Ivan Marshall Widjaja
0fada4d0fc fix(config): allow Sisyphus-Junior agent customization via oh-my-opencode.json (#648)
* fix(config): allow Sisyphus-Junior agent customization via oh-my-opencode.json

Allow users to configure Sisyphus-Junior agent via agents["Sisyphus-Junior"]
in oh-my-opencode.json, removing hardcoded defaults while preserving safety
constraints.
Closes #623
Changes:
- Add "Sisyphus-Junior" to AgentOverridesSchema and OverridableAgentNameSchema
- Create createSisyphusJuniorAgentWithOverrides() helper with guardrails
- Update config-handler to use override helper instead of hardcoded values
- Fix README category wording (runtime presets, not separate agents)
Honored override fields:
- model, temperature, top_p, tools, permission, description, color, prompt_append
Safety guardrails enforced post-merge:
- mode forced to "subagent" (cannot change)
- prompt is append-only (base discipline text preserved)
- blocked tools (task, sisyphus_task, call_omo_agent) always denied
- disable: true ignores override block, uses defaults
Category interaction:
- sisyphus_task(category=...) runs use the base Sisyphus-Junior agent config
- Category model/temperature overrides take precedence at request time
- To change model for a category, set categories.<cat>.model (not agent override)
- Categories are runtime presets applied to Sisyphus-Junior, not separate agents
Tests: 15 new tests in sisyphus-junior.test.ts, 3 new schema tests

Co-Authored-By: Sisyphus <sisyphus@mengmota.com>

* test(sisyphus-junior): add guard assertion for prompt anchor text

Add validation that baseEndIndex is not -1 before using it for ordering
assertion. Previously, if "Dense > verbose." text changed in the base
prompt, indexOf would return -1 and any positive appendIndex would pass.

Co-Authored-By: Sisyphus <sisyphus@mengmota.com>

---------

Co-authored-by: Sisyphus <sisyphus@mengmota.com>
2026-01-12 17:46:47 +09:00
github-actions[bot]
c79235744b @Momentum96 has signed the CLA in code-yeongyu/oh-my-opencode#709 2026-01-12 08:33:54 +00:00
Momentum96
6bbe69a72a fix(skill-loader): implement eager loading to resolve empty slash commands 2026-01-12 17:27:54 +09:00
Sanyue
5b8c6c70b2 docs: add localized Chinese translation for oh-my-opencode README (#696) 2026-01-12 17:27:23 +09:00
Ivan Marshall Widjaja
179f57fa96 fix(sisyphus_task): resolve sync mode JSON parse error (#708) 2026-01-12 17:26:32 +09:00
YeonGyu-Kim
f83b22c4de fix(cli/run): properly serialize error objects to prevent [object Object] output
- Add serializeError utility to handle Error instances, plain objects, and nested message paths
- Fix handleSessionError to use serializeError instead of naive String() conversion
- Fix runner.ts catch block to use serializeError for detailed error messages
- Add session.error case to logEventVerbose for better error visibility
- Add comprehensive tests for serializeError function

Fixes error logging in sisyphus-agent workflow where errors were displayed as '[object Object]'
2026-01-12 14:49:07 +09:00
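A minimal `serializeError` of the kind described might look like the sketch below: `Error` instances keep their message, plain objects are searched for a (possibly nested) `message` field, and everything else is JSON-stringified rather than coerced with `String()`, which is what produced `[object Object]`. This is an assumption about the shape of the utility, not the actual implementation.

```typescript
// Hypothetical serializeError: avoid "[object Object]" from String(err).
function serializeError(err: unknown): string {
  if (err instanceof Error) return err.message;
  if (typeof err === "object" && err !== null) {
    const obj = err as Record<string, unknown>;
    if (typeof obj.message === "string") return obj.message;
    const nested = obj.error as Record<string, unknown> | undefined;
    if (nested && typeof nested.message === "string") return nested.message;
    try {
      return JSON.stringify(err);
    } catch {
      return Object.prototype.toString.call(err);
    }
  }
  return String(err);
}
```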
YeonGyu-Kim
965bb2dd10 chore(ci): remove pinned OpenCode version in sisyphus-agent workflow
Use default installer which installs latest version instead of
fallback to hardcoded v1.0.204.
2026-01-12 14:34:06 +09:00
Ivan Marshall Widjaja
f9dca8d877 fix(config): resolve category to model for Prometheus (Planner) agent (#652)
* fix(config): resolve category to model for Prometheus (Planner) agent

When Prometheus (Planner) was configured with only a category (e.g.,
"ultrabrain") and no explicit model, the category was ignored and the
agent fell back to the hardcoded default "anthropic/claude-opus-4-5".
Add resolveModelFromCategoryWithUserOverride() helper that checks user
categories first, then DEFAULT_CATEGORIES, to resolve category names
to their corresponding models. Apply this resolution when building
the Prometheus agent configuration.

Co-Authored-By: Sisyphus <sisyphus@mengmota.com>

* fix(test): use actual implementation instead of local duplicate

Co-Authored-By: Sisyphus <sisyphus@mengmota.com>

* fix(config): apply all category properties, not just model for Prometheus (Planner)

The resolveModelFromCategoryWithUserOverride() helper only extracted
the model field from CategoryConfig, ignoring critical properties like
temperature, top_p, tools, maxTokens, thinking, reasoningEffort, and
textVerbosity. This caused categories like "ultrabrain" (temperature:
0.1) to run with incorrect default temperatures.

Refactor resolveModelFromCategoryWithUserOverride() to
resolveCategoryConfig() that returns the full CategoryConfig. Update
Prometheus (Planner) configuration to apply all category properties
(temperature, top_p, tools, etc.) when a category is specified, matching
the pattern established in Sisyphus-Junior. Explicit overrides still
take precedence during merge.

Co-Authored-By: Sisyphus <sisyphus@mengmota.com>

---------

Co-authored-by: Sisyphus <sisyphus@mengmota.com>
2026-01-12 12:04:55 +09:00
github-actions[bot]
91c490a358 @chilipvlmer has signed the CLA in code-yeongyu/oh-my-opencode#698 2026-01-11 18:24:57 +00:00
chilipvlmer
aa44c54068 fix(sisyphus-orchestrator): preserve subagent response in output transformation 2026-01-11 19:18:28 +01:00
github-actions[bot]
945b090b1b @Sanyue0v0 has signed the CLA in code-yeongyu/oh-my-opencode#696 2026-01-11 17:37:22 +00:00
Gladdonilli
05cd133e2a fix(git-master): inject user config into skill prompt (#656) 2026-01-11 19:02:36 +09:00
yimingll
8ed3f7e03b fix: LSP tools Windows compatibility - use pathToFileURL for proper URI generation (#689) 2026-01-11 19:01:54 +09:00
github-actions[bot]
42e5b5bf44 @yimingll has signed the CLA in code-yeongyu/oh-my-opencode#689 2026-01-11 10:01:05 +00:00
sisyphus-dev-ai
8320c7cf2d fix(cli): integrate channel-based updates in doctor and get-local-version
Update CLI commands to use channel-aware version fetching:
- doctor check now reports channel in error messages
- get-local-version uses channel from pinned version

Depends on channel detection from previous commit.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-11 09:56:23 +00:00
sisyphus-dev-ai
612e9b3e03 fix(auto-update): implement channel-based version fetching
Add support for npm dist-tag channels (@beta, @next, @canary) in auto-update mechanism. Users pinned to oh-my-opencode@beta now correctly fetch and compare against beta channel instead of stable latest.

- Add extractChannel() to detect channel from version string
- Modify getLatestVersion() to accept channel parameter
- Update auto-update flow to use channel-aware fetching
- Add comprehensive tests for channel detection and fetching
- Resolves #687

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-11 09:56:09 +00:00
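An `extractChannel` along the lines this commit describes could map a prerelease identifier like `beta` in `3.0.0-beta.6` to the npm dist-tag to fetch against, falling back to `latest` for plain versions. The regex here is an assumption about the implementation.

```typescript
// Illustrative channel extraction from a semver-style version string.
function extractChannel(version: string): string {
  const match = version.match(/-([a-z]+)(?:\.\d+)?$/i);
  return match ? match[1].toLowerCase() : "latest";
}
```

With this in place, `getLatestVersion(extractChannel(pinned))` would query `oh-my-opencode@beta` for users on a beta pin instead of comparing against the stable `latest` tag.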
Ivan Marshall Widjaja
f27e93bcc8 fix(agents): relax Momus input validation and tighten Prometheus Momus calls to avoid false rejections (#659) 2026-01-11 18:30:29 +09:00
popododo0720
10a5bab94d fix: use version-aware zip extraction on Windows (#563) 2026-01-11 18:21:48 +09:00
Sangrak Choi
f615b012e7 fix: run build before npm publish to include correct version (#653) 2026-01-11 18:20:44 +09:00
Ashir
0809de8262 fix(skill-mcp): handle pre-parsed object arguments in parseArguments (#675) 2026-01-11 18:18:32 +09:00
Coaspe
24bdc7ea77 fix(prompts): add missing opening <Role> tag to Sisyphus system prompt (#682) 2026-01-11 18:15:44 +09:00
github-actions[bot]
8ff159bc2e release: v3.0.0-beta.5 2026-01-11 06:31:07 +00:00
YeonGyu-Kim
49b0b5e085 fix(prometheus-md-only): allow nested project paths with .sisyphus directory
Use regex /\.sisyphus[/\\]/i instead of checking first path segment. This fixes Windows paths where ctx.directory is parent of the actual project (e.g., project\.sisyphus\drafts\...).

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-11 15:28:09 +09:00
github-actions[bot]
1132be370c @Coaspe has signed the CLA in code-yeongyu/oh-my-opencode#682 2026-01-11 06:04:07 +00:00
github-actions[bot]
f240dbb7ee release: v3.0.0-beta.4 2026-01-11 05:46:20 +00:00
YeonGyu-Kim
571810f1e7 fix(sisyphus-orchestrator): add cross-platform path validation for Windows support
Add isSisyphusPath() helper function that handles both forward slashes (Unix) and backslashes (Windows) using regex pattern /\.sisyphus[/\\]/.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-01-11 14:42:53 +09:00
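The cross-platform check is small enough to show in full: one regex matching `.sisyphus` followed by either separator, anywhere in the path, covers both Unix forward slashes and Windows backslashes.

```typescript
// Matches ".sisyphus/" (Unix) or ".sisyphus\" (Windows) anywhere in a path.
const SISYPHUS_RE = /\.sisyphus[/\\]/;

function isSisyphusPath(filePath: string): boolean {
  return SISYPHUS_RE.test(filePath);
}
```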
YeonGyu-Kim
83d958580f librarian notice 2026-01-11 14:33:17 +09:00
github-actions[bot]
f1e7b6ab1e @aw338WoWmUI has signed the CLA in code-yeongyu/oh-my-opencode#681 2026-01-11 05:03:55 +00:00
YeonGyu-Kim
1bbb61b1c2 fix(context-injector): inject via chat.message after claudeCodeHooks
- Revert messages.transform-only approach (experimental hook unreliable)
- Inject context in chat.message after claudeCodeHooks runs
- Order: keywordDetector → claudeCodeHooks → contextInjector
- Works independently of claude-code-hooks being enabled/disabled
- Ultrawork content now reliably injected to model
2026-01-11 12:33:20 +09:00
YeonGyu-Kim
2a95c91cab fix(context-injector): inject only via messages.transform to preserve UI
- Remove contextInjector call from chat.message hook chain
- Context injection now only happens in messages.transform hook
- This ensures UI displays original user message while model receives prepended context
- Fixes bug where commit message promised clone behavior but implementation mutated directly
2026-01-11 12:23:13 +09:00
Jeremy Gollehon
307d583ad6 fix(prometheus-md-only): cross-platform path validation for Windows support (#630) (#649)
Replace brittle string checks with robust path.resolve/relative validation:

- Fix Windows backslash paths (.sisyphus\plans\x.md) being incorrectly blocked
- Fix case-sensitive extension check (.MD now accepted)
- Add workspace confinement (block paths outside root even if containing .sisyphus)
- Block nested .sisyphus directories (only first segment allowed)
- Block path traversal attempts (.sisyphus/../secrets.md)
- Use ALLOWED_EXTENSIONS and ALLOWED_PATH_PREFIX constants (case-insensitive)

The new isAllowedFile() uses Node's path module for cross-platform compatibility
instead of string includes/endsWith which failed on Windows separators.
2026-01-11 12:21:50 +09:00
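The `path.resolve`/`path.relative` approach listed above can be sketched as follows: resolve the candidate against the workspace root, reject anything that escapes it, then require the first segment of the relative path to be `.sisyphus` (and no later segment to repeat it) plus a case-insensitive markdown extension. The constant names follow the commit message; the function body is an illustration, not the actual `isAllowedFile()`.

```typescript
import path from "node:path";

const ALLOWED_EXTENSIONS = [".md"];
const ALLOWED_PATH_PREFIX = ".sisyphus";

// Illustrative cross-platform validation (not the actual implementation).
function isAllowedFile(root: string, candidate: string): boolean {
  const resolved = path.resolve(root, candidate);
  const relative = path.relative(path.resolve(root), resolved);
  // Paths that escape the workspace come back as "../…" (or absolute).
  if (relative.startsWith("..") || path.isAbsolute(relative)) return false;
  const segments = relative.split(path.sep);
  if (segments[0].toLowerCase() !== ALLOWED_PATH_PREFIX) return false;
  // Only the first segment may be .sisyphus (no nested .sisyphus dirs).
  if (segments.slice(1).some((s) => s.toLowerCase() === ALLOWED_PATH_PREFIX)) return false;
  return ALLOWED_EXTENSIONS.includes(path.extname(relative).toLowerCase());
}
```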
YeonGyu-Kim
ce5315fbd0 refactor(keyword-detector): decouple from claude-code-hooks via ContextCollector pipeline
- keyword-detector now registers keywords to ContextCollector
- context-injector consumes and injects via chat.message hook
- Removed keyword detection logic from claude-code-hooks
- Hook order: keyword-detector → context-injector → claude-code-hooks
- ultrawork now works even when claude-code-hooks is disabled
2026-01-11 12:06:16 +09:00
Kenny
1c262a65fe feat: add OPENCODE_CONFIG_DIR environment variable support (#629)
- Add env var check to getCliConfigDir() for config directory override
- Update detectExistingConfigDir() to include env var path in locations
- Add comprehensive tests (7 test cases)
- Document in README

Closes #627
2026-01-11 11:48:36 +09:00
Arthur Andrade
0c127879c0 fix(lsp): cleanup orphaned LSP servers on session.deleted (#676)
* fix(lsp): cleanup orphaned LSP servers on session.deleted

When parallel background agent tasks complete, their LSP servers (for
repos cloned to /tmp/) remain running until a 5-minute idle timeout.
This causes memory accumulation with heavy parallel Sisyphus usage,
potentially leading to OOM crashes.

This change adds cleanupTempDirectoryClients() to LSPServerManager
(matching the pattern used by SkillMcpManager.disconnectSession())
and calls it on session.deleted events.

The cleanup targets idle LSP clients (refCount=0) for temporary
directories (/tmp/, /var/folders/) where agent tasks clone repos.

* chore: retrigger CI checks
2026-01-11 11:45:38 +09:00
Nguyen Khac Trung Kien
65a6a702ec Fix flowchart syntax in orchestration guide (#679)
Updated the flowchart syntax in the orchestration guide.
2026-01-11 11:45:13 +09:00
github-actions[bot]
60f4cd4fac release: v3.0.0-beta.3 2026-01-11 02:40:31 +00:00
github-actions[bot]
5f823b0f8e release: v2.14.1 2026-01-11 02:23:00 +00:00
YeonGyu-Kim
e35a488cf6 fix(test): extend timeout for resume sync test
MIN_STABILITY_TIME_MS is 5000ms in implementation, but test timeout was only 5000ms.
Extended to 10000ms to allow proper polling completion.
2026-01-11 11:20:00 +09:00
YeonGyu-Kim
adb1a9fcb9 docs: fix model names in config examples to use valid antigravity models 2026-01-11 11:14:15 +09:00
YeonGyu-Kim
9bfed238b9 docs: update agent model catalog - librarian now uses GLM-4.7 Free 2026-01-11 11:11:34 +09:00
YeonGyu-Kim
61abd553fb fix wrong merge. 2026-01-11 11:07:46 +09:00
github-actions[bot]
6425d9d97e @KNN-07 has signed the CLA in code-yeongyu/oh-my-opencode#679 2026-01-11 01:11:47 +00:00
github-actions[bot]
d57744905f @arthur404dev has signed the CLA in code-yeongyu/oh-my-opencode#676 2026-01-10 23:51:55 +00:00
github-actions[bot]
c7ae2d7be6 @ashir6892 has signed the CLA in code-yeongyu/oh-my-opencode#675 2026-01-10 19:50:19 +00:00
github-actions[bot]
358f7f439d @kargnas has signed the CLA in code-yeongyu/oh-my-opencode#653 2026-01-10 10:25:35 +00:00
github-actions[bot]
4fde139dd8 @GollyJer has signed the CLA in code-yeongyu/oh-my-opencode#649 2026-01-10 09:57:54 +00:00
github-actions[bot]
b10703ec9a @imarshallwidjaja has signed the CLA in code-yeongyu/oh-my-opencode#648 2026-01-10 07:58:53 +00:00
Brian Li
8b12257729 fix: remove author name from agent system prompts (#634)
The author name "Named by [YeonGyu Kim]" in the Sisyphus role section
causes LLMs to sometimes infer Korean language output, even when the
user's locale is en-US.

This happens because the model sees a Korean name in the system prompt
and may interpret it as a signal to respond in Korean.

Removing the author attribution from the runtime prompt fixes this issue.
The attribution is preserved in README, LICENSE, and package.json.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 14:11:48 +09:00
github-actions[bot]
7536a12754 @Luodian has signed the CLA in code-yeongyu/oh-my-opencode#634 2026-01-10 05:01:31 +00:00
Gladdonilli
0fb765732a fix: improve background task completion detection and message extraction (#638)
* fix: background task completion detection and silent notifications

- Fix TS2742 by adding explicit ToolDefinition type annotations
- Add stability detection (3 consecutive stable polls after 10s minimum)
- Remove early continue when sessionStatus is undefined
- Add silent notification system via tool.execute.after hook injection
- Change task retention from 200ms to 5 minutes for background_output retrieval
- Fix formatTaskResult to sort messages by time descending

Fixes hanging background tasks that never complete due to missing sessionStatus.

* fix: improve background task completion detection and message extraction

- Add stability-based completion detection (10s min + 3 stable polls)
- Fix message extraction to recognize 'reasoning' parts from thinking models
- Switch from promptAsync() to prompt() for proper agent initialization
- Remove model parameter from prompt body (use agent's configured model)
- Add fire-and-forget prompt pattern for sisyphus_task sync mode
- Add silent notification via tool.execute.after hook injection
- Fix indentation issues in manager.ts and index.ts

Incorporates fixes from:
- PR #592: Stability detection mechanism
- PR #610: Model parameter passing (partially)
- PR #628: Completion detection improvements

Known limitation: Thinking models (e.g. claude-*-thinking-*) cause
JSON Parse errors in child sessions. Use non-thinking models for
background agents until OpenCode core resolves this.

* fix: add tool_result handling and pendingByParent tracking for resume/external tasks

Addresses code review feedback from PR #638:

P1: Add tool_result type to validateSessionHasOutput() to prevent
    false negatives for tool-only background tasks that would otherwise
    timeout after 30 minutes despite having valid results.

P2: Add pendingByParent tracking to resume() and registerExternalTask()
    to prevent premature 'ALL COMPLETE' notifications when mixing
    launched and resumed tasks.

* fix: address code review feedback - log messages, model passthrough, sorting, race condition

- Fix misleading log messages: 'promptAsync' -> 'prompt (fire-and-forget)'
- Restore model passthrough in launch() for Sisyphus category configs
- Fix call-omo-agent sorting: use time.created number instead of String(time)
- Fix race condition: check promptError inside polling loop, not just after 100ms
2026-01-10 14:00:25 +09:00
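The stability-based completion detection threaded through this PR can be reduced to a small state machine: a task only counts as done after a minimum runtime AND several consecutive polls in which the session's message count stopped changing. The 10-second minimum and three-stable-polls thresholds come from the commit messages; the class itself is an illustrative sketch.

```typescript
// Illustrative stability detector (thresholds from the commit messages).
const MIN_RUNTIME_MS = 10_000;
const REQUIRED_STABLE_POLLS = 3;

class StabilityDetector {
  private lastCount = -1;
  private stablePolls = 0;

  constructor(private readonly startedAt: number) {}

  // Feed one poll's message count; returns true once the task looks complete.
  poll(messageCount: number, now: number): boolean {
    if (messageCount === this.lastCount) {
      this.stablePolls += 1;
    } else {
      this.lastCount = messageCount;
      this.stablePolls = 0; // activity resumed; restart the stability count
    }
    return (
      now - this.startedAt >= MIN_RUNTIME_MS &&
      this.stablePolls >= REQUIRED_STABLE_POLLS
    );
  }
}
```

This avoids the hang described in the PR, where tasks with a missing `sessionStatus` never fired a completion signal at all.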
github-actions[bot]
d4c8ec6690 @ElwinLiu has signed the CLA in code-yeongyu/oh-my-opencode#645 2026-01-10 04:32:31 +00:00
github-actions[bot]
d6416082a2 @kdcokenny has signed the CLA in code-yeongyu/oh-my-opencode#629 2026-01-09 12:54:17 +00:00
github-actions[bot]
e6aaf57a21 @SJY0917032 has signed the CLA in code-yeongyu/oh-my-opencode#625 2026-01-09 10:01:29 +00:00
YeonGyu-Kim
5242f3daef fix(docs): correct plan invocation syntax from /plan to @plan
OpenCode uses @agent-name syntax for agent invocation, not /command.
The /plan command does not exist - it should be @plan to invoke
the Prometheus planner agent.
2026-01-09 17:45:25 +09:00
YeonGyu-Kim
3f2ded54ee fix(docs): escape special chars in Mermaid diagram
Quote node label containing special characters to prevent
Mermaid lexer error on line 9.
2026-01-09 17:24:03 +09:00
YeonGyu-Kim
aa5018583e docs(orchestration): add TL;DR section for quick reference 2026-01-09 16:47:04 +09:00
YeonGyu-Kim
185d4e1e54 test(ralph-loop): add tests for loop restart scenarios
- Add test for starting new loop while previous loop active (different session)
- Add test for restarting loop in same session
- Verifies startLoop properly overwrites state and resets iteration
2026-01-09 16:39:53 +09:00
YeonGyu-Kim
79e9fd82c5 fix(background-agent): preserve parent agent context in completion notifications
When parentAgent is undefined, omit the agent field entirely from
session.prompt body instead of passing undefined. This prevents the
OpenCode SDK from falling back to defaultAgent(), which would change
the parent session's agent context.

Changes:
- manager.ts: Build prompt body conditionally, only include agent/model
  when defined
- background-task/tools.ts: Use ctx.agent as primary source for
  parentAgent (consistent with sisyphus-task)
- registerExternalTask: Add parentAgent parameter support
- Added tests for agent context preservation scenarios
2026-01-09 15:53:55 +09:00
sisyphus-dev-ai
7853f1f4bf fix(cli): preserve user config on reinstall
Previously, the install command would delete the entire 'agents' object
from the user's oh-my-opencode config before merging new install settings.
This caused all user customizations to be lost on reinstall.

Fixed by removing the 'delete existing.agents' line and relying on the
existing deepMerge function to properly merge configs, preserving user
customizations while updating only the fields specified by the installer.

Fixes #556
2026-01-07 04:38:08 +00:00
130 changed files with 5529 additions and 9716 deletions


@@ -8,12 +8,13 @@ on:
description: "Bump major, minor, or patch"
required: true
type: choice
default: patch
options:
- major
- minor
- patch
- minor
- major
version:
description: "Override version (optional)"
description: "Override version (e.g., 3.0.0-beta.6 for beta release). Takes precedence over bump."
required: false
type: string
@@ -104,9 +105,9 @@ jobs:
- name: Build
run: |
echo "=== Running bun build (main) ==="
bun build src/index.ts src/google-auth.ts --outdir dist --target bun --format esm --external @ast-grep/napi
bun build src/index.ts --outdir dist --target bun --format esm --external @ast-grep/napi
echo "=== Running bun build (CLI) ==="
bun build src/cli/index.ts --outdir dist/cli --target bun --format esm
bun build src/cli/index.ts --outdir dist/cli --target bun --format esm --external @ast-grep/napi
echo "=== Running tsc ==="
tsc --emitDeclarationOnly
echo "=== Running build:schema ==="


@@ -89,15 +89,15 @@ jobs:
echo "Installing OpenCode..."
curl -fsSL https://opencode.ai/install -o /tmp/opencode-install.sh
# Try default installer first, fallback to pinned version if it fails
# Try default installer first, fallback to re-download if it fails
if file /tmp/opencode-install.sh | grep -q "shell script\|text"; then
if ! bash /tmp/opencode-install.sh 2>&1; then
echo "Default installer failed, trying with pinned version..."
bash /tmp/opencode-install.sh --version 1.0.204
echo "Default installer failed, trying direct install..."
bash <(curl -fsSL https://opencode.ai/install)
fi
else
echo "Download corrupted, trying direct install with pinned version..."
bash <(curl -fsSL https://opencode.ai/install) --version 1.0.204
echo "Download corrupted, trying direct install..."
bash <(curl -fsSL https://opencode.ai/install)
fi
fi
opencode --version


@@ -1,7 +1,7 @@
# PROJECT KNOWLEDGE BASE
**Generated:** 2026-01-09T15:38:00+09:00
**Commit:** 0581793
**Generated:** 2026-01-13T14:45:00+09:00
**Commit:** e47b5514
**Branch:** dev
## OVERVIEW
@@ -13,16 +13,16 @@ OpenCode plugin implementing Claude Code/AmpCode features. Multi-model agent orc
```
oh-my-opencode/
├── src/
│ ├── agents/ # AI agents (7): Sisyphus, oracle, librarian, explore, frontend, document-writer, multimodal-looker
│ ├── hooks/ # 22 lifecycle hooks - see src/hooks/AGENTS.md
│ ├── agents/ # AI agents (7+): Sisyphus, oracle, librarian, explore, frontend, document-writer, multimodal-looker, prometheus, metis, momus
│ ├── hooks/ # 22+ lifecycle hooks - see src/hooks/AGENTS.md
│ ├── tools/ # LSP, AST-Grep, Grep, Glob, session mgmt - see src/tools/AGENTS.md
│ ├── features/ # Claude Code compat layer - see src/features/AGENTS.md
│ ├── auth/ # Google Antigravity OAuth - see src/auth/AGENTS.md
│ ├── shared/ # Cross-cutting utilities - see src/shared/AGENTS.md
│ ├── cli/ # CLI installer, doctor - see src/cli/AGENTS.md
│ ├── mcp/ # MCP configs: context7, grep_app
│ ├── config/ # Zod schema, TypeScript types
│ └── index.ts # Main plugin entry (548 lines)
│ ├── mcp/ # MCP configs: context7, grep_app, websearch
│ ├── config/ # Zod schema (12k lines), TypeScript types
│ └── index.ts # Main plugin entry (563 lines)
├── script/ # build-schema.ts, publish.ts, generate-changelog.ts
├── assets/ # JSON schema
└── dist/ # Build output (ESM + .d.ts)
@@ -50,7 +50,7 @@ oh-my-opencode/
| Shared utilities | `src/shared/` | Cross-cutting utilities |
| Slash commands | `src/hooks/auto-slash-command/` | Auto-detect and execute `/command` patterns |
| Ralph Loop | `src/hooks/ralph-loop/` | Self-referential dev loop until completion |
| Orchestrator | `src/hooks/sisyphus-orchestrator/` | Main orchestration hook (660 lines) |
| Orchestrator | `src/hooks/sisyphus-orchestrator/` | Main orchestration hook (677 lines) |
## TDD (Test-Driven Development)
@@ -83,7 +83,7 @@ oh-my-opencode/
- **Build**: `bun build` (ESM) + `tsc --emitDeclarationOnly`
- **Exports**: Barrel pattern in index.ts; explicit named exports for tools/hooks
- **Naming**: kebab-case directories, createXXXHook/createXXXTool factories
- **Testing**: BDD comments `#given/#when/#then`, TDD workflow (RED-GREEN-REFACTOR)
- **Testing**: BDD comments `#given/#when/#then`, TDD workflow (RED-GREEN-REFACTOR), 82 test files
- **Temperature**: 0.1 for code agents, max 0.3
## ANTI-PATTERNS (THIS PROJECT)
@@ -122,13 +122,16 @@ oh-my-opencode/
| Agent | Default Model | Purpose |
|-------|---------------|---------|
| Sisyphus | anthropic/claude-opus-4-5 | Primary orchestrator |
| Sisyphus | anthropic/claude-opus-4-5 | Primary orchestrator with extended thinking |
| oracle | openai/gpt-5.2 | Read-only consultation. High-IQ debugging, architecture |
| librarian | anthropic/claude-sonnet-4-5 | Multi-repo analysis, docs |
| librarian | opencode/glm-4.7-free | Multi-repo analysis, docs |
| explore | opencode/grok-code | Fast codebase exploration |
| frontend-ui-ux-engineer | google/gemini-3-pro-preview | UI generation |
| document-writer | google/gemini-3-pro-preview | Technical docs |
| multimodal-looker | google/gemini-3-flash | PDF/image analysis |
| Prometheus (Planner) | anthropic/claude-opus-4-5 | Strategic planning, interview-driven |
| Metis (Plan Consultant) | anthropic/claude-sonnet-4-5 | Pre-planning analysis |
| Momus (Plan Reviewer) | anthropic/claude-sonnet-4-5 | Plan validation |
## COMMANDS
@@ -137,7 +140,7 @@ bun run typecheck # Type check
bun run build # ESM + declarations + schema
bun run rebuild # Clean + Build
bun run build:schema # Schema only
bun test # Run tests (76 test files, 2559+ BDD assertions)
bun test # Run tests (82 test files, 2559+ BDD assertions)
```
## DEPLOYMENT
@@ -160,23 +163,38 @@ bun test # Run tests (76 test files, 2559+ BDD assertions)
| File | Lines | Description |
|------|-------|-------------|
| `src/agents/orchestrator-sisyphus.ts` | 1484 | Orchestrator agent, complex delegation |
| `src/agents/orchestrator-sisyphus.ts` | 1486 | Orchestrator agent, 7-section delegation, accumulated wisdom |
| `src/features/builtin-skills/skills.ts` | 1230 | Skill definitions (frontend-ui-ux, playwright) |
| `src/agents/prometheus-prompt.ts` | 982 | Planning agent system prompt |
| `src/auth/antigravity/fetch.ts` | 798 | Token refresh, URL rewriting |
| `src/auth/antigravity/thinking.ts` | 755 | Thinking block extraction |
| `src/cli/config-manager.ts` | 725 | JSONC parsing, env detection |
| `src/hooks/sisyphus-orchestrator/index.ts` | 660 | Orchestrator hook impl |
| `src/agents/sisyphus.ts` | 641 | Main Sisyphus prompt |
| `src/tools/lsp/client.ts` | 612 | LSP protocol, JSON-RPC |
| `src/features/background-agent/manager.ts` | 608 | Task lifecycle |
| `src/auth/antigravity/response.ts` | 599 | Response transformation, streaming |
| `src/hooks/anthropic-context-window-limit-recovery/executor.ts` | 556 | Multi-stage recovery |
| `src/index.ts` | 548 | Main plugin, all hook/tool init |
| `src/agents/prometheus-prompt.ts` | 988 | Planning agent, interview mode, multi-agent validation |
| `src/auth/antigravity/fetch.ts` | 798 | Token refresh, multi-account rotation, endpoint fallback |
| `src/auth/antigravity/thinking.ts` | 755 | Thinking block extraction, signature management |
| `src/cli/config-manager.ts` | 725 | JSONC parsing, multi-level config, env detection |
| `src/hooks/sisyphus-orchestrator/index.ts` | 677 | Orchestrator hook impl |
| `src/agents/sisyphus.ts` | 643 | Main Sisyphus prompt |
| `src/tools/lsp/client.ts` | 632 | LSP protocol, JSON-RPC |
| `src/features/background-agent/manager.ts` | 825 | Task lifecycle, concurrency |
| `src/auth/antigravity/response.ts` | 598 | Response transformation, streaming |
| `src/tools/sisyphus-task/tools.ts` | 583 | Category-based task delegation |
| `src/index.ts` | 563 | Main plugin, all hook/tool init |
| `src/hooks/anthropic-context-window-limit-recovery/executor.ts` | 555 | Multi-stage recovery |
## MCP ARCHITECTURE
Three-tier MCP system:
1. **Built-in**: `websearch` (Exa), `context7` (docs), `grep_app` (GitHub search)
2. **Claude Code compatible**: `.mcp.json` files with `${VAR}` expansion
3. **Skill-embedded**: YAML frontmatter in skills (e.g., playwright)
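Tier 2 above follows the Claude Code `.mcp.json` convention with `${VAR}` expansion. A minimal sketch of such a file — the server name, package, and variable below are hypothetical, not taken from this repo:

```jsonc
// .mcp.json (project root) — hypothetical example
{
  "mcpServers": {
    "docs": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"],
      // ${DOCS_API_KEY} is expanded from the environment when the config is loaded
      "env": { "API_KEY": "${DOCS_API_KEY}" }
    }
  }
}
```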
## CONFIG SYSTEM
- **Zod validation**: `src/config/schema.ts` (12k lines)
- **JSONC support**: Comments and trailing commas
- **Multi-level**: User (`~/.config/opencode/`) → Project (`.opencode/`)
- **CLI doctor**: Validates config and reports errors
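Putting the points above together: because the loader parses JSONC, a user-level config can carry comments and trailing commas. A minimal sketch — the agent override shown is illustrative:

```jsonc
// ~/.config/opencode/oh-my-opencode.json — comments are valid JSONC
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
  "agents": {
    "explore": { "model": "opencode/grok-code" },
  }, // trailing commas are tolerated
}
```

Project-level `.opencode/oh-my-opencode.json` uses the same shape.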
## NOTES
- **Testing**: Bun native test (`bun test`), BDD-style `#given/#when/#then`, 76 test files
- **Testing**: Bun native test (`bun test`), BDD-style `#given/#when/#then`, 82 test files
- **OpenCode**: Requires >= 1.0.150
- **Multi-lang docs**: README.md (EN), README.ko.md (KO), README.ja.md (JA), README.zh-cn.md (ZH-CN)
- **Config**: `~/.config/opencode/oh-my-opencode.json` (user) or `.opencode/oh-my-opencode.json` (project)


@@ -28,7 +28,7 @@
> `oh-my-opencode` をインストールして、ドーピングしたかのようにコーディングしましょう。バックグラウンドでエージェントを走らせ、oracle、librarian、frontend engineer のような専門エージェントを呼び出してください。丹精込めて作られた LSP/AST ツール、厳選された MCP、そして完全な Claude Code 互換レイヤーを、たった一行で手に入れましょう。
**今すぐ始めましょう。ChatGPT、Claude、Gemini のサブスクリプションで使えます**
**注意: librarianには高価なモデルを使用しないでください。これはあなたにとって役に立たないだけでなく、LLMプロバイダーにも負担をかけます。代わりにClaude Haiku、Gemini Flash、GLM 4.7、MiniMaxなどのモデルを使用してください**
<div align="center">
@@ -76,6 +76,7 @@
- [Oh My OpenCode](#oh-my-opencode)
- [この Readme は読まなくていいです](#この-readme-は読まなくていいです)
- [エージェントの時代ですから](#エージェントの時代ですから)
- [🪄 魔法の言葉:`ultrawork`](#-魔法の言葉ultrawork)
- [読みたい方のために:シジフォスに会う](#読みたい方のためにシジフォスに会う)
- [インストールするだけで。](#インストールするだけで)
- [インストール](#インストール)
@@ -88,10 +89,15 @@
- [ステップ 4: 認証情報の設定](#ステップ-4-認証情報の設定)
- [4.1 Anthropic (Claude)](#41-anthropic-claude)
- [4.2 Google Gemini (Antigravity OAuth)](#42-google-gemini-antigravity-oauth)
- [4.2.1 モデル設定](#421-モデル設定)
- [4.2.2 oh-my-opencode エージェントモデルのオーバーライド](#422-oh-my-opencode-エージェントモデルのオーバーライド)
- [4.3 OpenAI (ChatGPT Plus/Pro)](#43-openai-chatgpt-pluspro)
- [モデル設定](#モデル設定)
- [⚠️ 注意](#-注意)
- [セットアップの確認](#セットアップの確認)
- [ユーザーに「おめでとうございます!🎉」と伝える](#ユーザーにおめでとうございますと伝える)
- [無料広告](#無料広告)
- [スターをお願いする ⭐](#スターをお願いする-)
- [複雑すぎますか?](#複雑すぎますか)
- [アンインストール](#アンインストール)
- [機能](#機能)
@@ -99,7 +105,8 @@
- [バックグラウンドエージェント: 本当のチームのように働く](#バックグラウンドエージェント-本当のチームのように働く)
- [ツール: 同僚にはもっと良い道具を](#ツール-同僚にはもっと良い道具を)
- [なぜあなただけ IDE を使っているのですか?](#なぜあなただけ-ide-を使っているのですか)
- [Context is all you need.](#context-is-all-you-need)
- [セッション管理](#セッション管理)
- [Context Is All You Need](#context-is-all-you-need)
- [マルチモーダルを活用し、トークンは節約する](#マルチモーダルを活用しトークンは節約する)
- [止まらないエージェントループ](#止まらないエージェントループ)
- [Claude Code 互換性: さらば Claude Code、ようこそ OpenCode](#claude-code-互換性-さらば-claude-codeようこそ-opencode)
@@ -109,16 +116,20 @@
- [互換性トグル](#互換性トグル)
- [エージェントのためだけでなく、あなたのために](#エージェントのためだけでなくあなたのために)
- [設定](#設定)
- [JSONC のサポート](#jsonc-のサポート)
- [Google Auth](#google-auth)
- [Agents](#agents)
- [Permission オプション](#permission-オプション)
- [Sisyphus Agent](#sisyphus-agent)
- [Background Tasks](#background-tasks)
- [Hooks](#hooks)
- [MCPs](#mcps)
- [LSP](#lsp)
- [Experimental](#experimental)
- [作者のノート](#作者のノート)
- [注意](#注意)
- [こちらの企業の専門家にご愛用いただいています](#こちらの企業の専門家にご愛用いただいています)
- [スポンサー](#スポンサー)
# Oh My OpenCode
@@ -316,15 +327,14 @@ opencode auth login
#### 4.2.2 oh-my-opencode エージェントモデルのオーバーライド
`opencode-antigravity-auth` プラグインは内蔵 Google auth と異なるモデル名を使用します。`oh-my-opencode.json`(または `.opencode/oh-my-opencode.json`)でエージェントモデルをオーバーライドし、内蔵 `google_auth` を無効化してください:
`opencode-antigravity-auth` プラグインは特定のモデル名を使用します。エージェントモデルを `oh-my-opencode.json`(または `.opencode/oh-my-opencode.json`)でオーバーライドしてください:
```json
{
"google_auth": false,
"agents": {
"frontend-ui-ux-engineer": { "model": "google/gemini-3-pro-high" },
"document-writer": { "model": "google/gemini-3-flash" },
"multimodal-looker": { "model": "google/gemini-3-flash" }
"frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
"document-writer": { "model": "google/antigravity-gemini-3-flash" },
"multimodal-looker": { "model": "google/antigravity-gemini-3-flash" }
}
}
```
@@ -382,7 +392,7 @@ opencode auth login
**ユーザーが明示的に要求しない限り、モデル設定を変更したり、機能エージェント、フック、MCPを無効化しないでください。**
プラグインはデフォルトで完璧に動作します。推奨される `google_auth: true` 設定を除き、明示的な要求なしに他の設定を変更したり機能をオフにしないでください。
プラグインはデフォルトで完璧に動作します。`opencode-antigravity-auth` を使用する場合は、上記の通りエージェントモデルをオーバーライドしてください。明示的な要求なしに他の設定を変更したり機能をオフにしないでください。
### セットアップの確認
@@ -461,7 +471,7 @@ oh-my-opencode を削除するには:
- **Sisyphus** (`anthropic/claude-opus-4-5`): **デフォルトエージェントです。** OpenCode のための強力な AI オーケストレーターです。専門のサブエージェントを活用して、複雑なタスクを計画、委任、実行します。バックグラウンドタスクへの委任と Todo ベースのワークフローを重視します。最大の推論能力を発揮するため、Claude Opus 4.5 と拡張思考 (32k token budget) を使用します。
- **oracle** (`openai/gpt-5.2`): アーキテクチャ、コードレビュー、戦略立案のための専門アドバイザー。GPT-5.2 の卓越した論理的推論と深い分析能力を活用します。AmpCode からインスピレーションを得ました。
- **librarian** (`anthropic/claude-sonnet-4-5` または `google/gemini-3-flash`): マルチリポジトリ分析、ドキュメント検索、実装例の調査を担当。Antigravity 認証が設定されている場合は Gemini 3 Flash を使用し、それ以外は Claude Sonnet 4.5 を使用して、深いコードベース理解と GitHub リサーチ、根拠に基づいた回答を提供します。AmpCode からインスピレーションを得ました。
- **librarian** (`opencode/glm-4.7-free`): マルチリポジトリ分析、ドキュメント検索、実装例の調査を担当。GLM-4.7 Free を使用して、深いコードベース理解と GitHub リサーチ、根拠に基づいた回答を提供します。AmpCode からインスピレーションを得ました。
- **explore** (`opencode/grok-code`、`google/gemini-3-flash`、または `anthropic/claude-haiku-4-5`): 高速なコードベース探索、ファイルパターンマッチング。Antigravity 認証が設定されている場合は Gemini 3 Flash を使用し、Claude max20 が利用可能な場合は Haiku を使用し、それ以外は Grok を使います。Claude Code からインスピレーションを得ました。
- **frontend-ui-ux-engineer** (`google/gemini-3-pro-preview`): 開発者に転身したデザイナーという設定です。素晴らしい UI を作ります。美しく独創的な UI コードを生成することに長けた Gemini を使用します。
- **document-writer** (`google/gemini-3-pro-preview`): テクニカルライティングの専門家という設定です。Gemini は文筆家であり、流れるような文章を書きます。
@@ -721,10 +731,10 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
1. `.opencode/oh-my-opencode.json` (プロジェクト)
2. ユーザー設定(プラットフォーム別):
| プラットフォーム | ユーザー設定パス |
|------------------|------------------|
| **Windows** | `~/.config/opencode/oh-my-opencode.json` (推奨) または `%APPDATA%\opencode\oh-my-opencode.json` (fallback) |
| **macOS/Linux** | `~/.config/opencode/oh-my-opencode.json` |
| プラットフォーム | ユーザー設定パス |
| ---------------- | ---------------------------------------------------------------------------------------------------------- |
| **Windows** | `~/.config/opencode/oh-my-opencode.json` (推奨) または `%APPDATA%\opencode\oh-my-opencode.json` (fallback) |
| **macOS/Linux** | `~/.config/opencode/oh-my-opencode.json` |
スキーマ自動補完がサポートされています:
@@ -748,10 +758,7 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
```jsonc
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
// Antigravity OAuth 経由で Google Gemini を有効にする
"google_auth": false,
/* エージェントのオーバーライド - 特定のタスクに合わせてモデルをカスタマイズ */
"agents": {
"oracle": {
@@ -768,27 +775,18 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
**推奨**: 外部の [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) プラグインを使用してください。マルチアカウントロードバランシング、より多くのモデル(Antigravity 経由の Claude を含む)、活発なメンテナンスを提供します。[インストール > Google Gemini](#42-google-gemini-antigravity-oauth) を参照。
`opencode-antigravity-auth` 使用時は内蔵 auth を無効化し、`oh-my-opencode.json` でエージェントモデルをオーバーライドしてください:
`opencode-antigravity-auth` 使用時は `oh-my-opencode.json` でエージェントモデルをオーバーライドしてください:
```json
{
"google_auth": false,
"agents": {
"frontend-ui-ux-engineer": { "model": "google/gemini-3-pro-high" },
"document-writer": { "model": "google/gemini-3-flash" },
"multimodal-looker": { "model": "google/gemini-3-flash" }
"frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
"document-writer": { "model": "google/antigravity-gemini-3-flash" },
"multimodal-looker": { "model": "google/antigravity-gemini-3-flash" }
}
}
```
**代替案**: 内蔵 Antigravity OAuth を有効化(単一アカウント、Gemini モデルのみ):
```json
{
"google_auth": true
}
```
### Agents
内蔵エージェント設定をオーバーライドできます:
@@ -841,13 +839,13 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
}
```
| Permission | 説明 | 値 |
|------------|------|----|
| `edit` | ファイル編集権限 | `ask` / `allow` / `deny` |
| `bash` | Bash コマンド実行権限 | `ask` / `allow` / `deny` またはコマンド別: `{ "git": "allow", "rm": "deny" }` |
| `webfetch` | ウェブアクセス権限 | `ask` / `allow` / `deny` |
| `doom_loop` | 無限ループ検知のオーバーライド許可 | `ask` / `allow` / `deny` |
| `external_directory` | プロジェクトルート外へのファイルアクセス | `ask` / `allow` / `deny` |
| Permission | 説明 | 値 |
| -------------------- | ---------------------------------------- | ----------------------------------------------------------------------------- |
| `edit` | ファイル編集権限 | `ask` / `allow` / `deny` |
| `bash` | Bash コマンド実行権限 | `ask` / `allow` / `deny` またはコマンド別: `{ "git": "allow", "rm": "deny" }` |
| `webfetch` | ウェブアクセス権限 | `ask` / `allow` / `deny` |
| `doom_loop` | 無限ループ検知のオーバーライド許可 | `ask` / `allow` / `deny` |
| `external_directory` | プロジェクトルート外へのファイルアクセス | `ask` / `allow` / `deny` |
または `~/.config/opencode/oh-my-opencode.json` か `.opencode/oh-my-opencode.json` の `disabled_agents` を使用して無効化できます:
@@ -925,12 +923,12 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
}
```
| オプション | デフォルト | 説明 |
| --------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `disabled` | `false` | `true` の場合、すべての Sisyphus オーケストレーションを無効化し、元の build/plan をプライマリとして復元します。 |
| `default_builder_enabled` | `false` | `true` の場合、OpenCode-Builder エージェントを有効化します(OpenCode build と同じ、SDK 制限により名前変更)。デフォルトでは無効です。 |
| `planner_enabled` | `true` | `true` の場合、Prometheus (Planner) エージェントを有効化します(work-planner 方法論を含む)。デフォルトで有効です。 |
| `replace_plan` | `true` | `true` の場合、デフォルトのプランエージェントをサブエージェントモードに降格させます。`false` に設定すると、Prometheus (Planner) とデフォルトのプランの両方を利用できます。 |
| オプション | デフォルト | 説明 |
| ------------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `disabled` | `false` | `true` の場合、すべての Sisyphus オーケストレーションを無効化し、元の build/plan をプライマリとして復元します。 |
| `default_builder_enabled` | `false` | `true` の場合、OpenCode-Builder エージェントを有効化します(OpenCode build と同じ、SDK 制限により名前変更)。デフォルトでは無効です。 |
| `planner_enabled` | `true` | `true` の場合、Prometheus (Planner) エージェントを有効化します(work-planner 方法論を含む)。デフォルトで有効です。 |
| `replace_plan` | `true` | `true` の場合、デフォルトのプランエージェントをサブエージェントモードに降格させます。`false` に設定すると、Prometheus (Planner) とデフォルトのプランの両方を利用できます。 |
### Background Tasks
@@ -953,10 +951,10 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
}
```
| オプション | デフォルト | 説明 |
| --------------------- | ---------- | -------------------------------------------------------------------------------------------------------------- |
| `defaultConcurrency` | - | すべてのプロバイダー/モデルに対するデフォルトの最大同時バックグラウンドタスク数 |
| `providerConcurrency` | - | プロバイダーごとの同時実行制限。キーはプロバイダー名(例:`anthropic`、`openai`、`google`) |
| オプション | デフォルト | 説明 |
| --------------------- | ---------- | --------------------------------------------------------------------------------------------------------------------- |
| `defaultConcurrency` | - | すべてのプロバイダー/モデルに対するデフォルトの最大同時バックグラウンドタスク数 |
| `providerConcurrency` | - | プロバイダーごとの同時実行制限。キーはプロバイダー名(例:`anthropic`、`openai`、`google`) |
| `modelConcurrency` | - | モデルごとの同時実行制限。キーは完全なモデル名(例:`anthropic/claude-opus-4-5`)。プロバイダー制限より優先されます。 |
**優先順位**: `modelConcurrency` > `providerConcurrency` > `defaultConcurrency`
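The precedence rule above can be sketched as a single block. The option names come from the table above; the numbers are illustrative, and the wrapper object (elided here) is the background-tasks config section shown in the example earlier:

```jsonc
{
  // modelConcurrency (most specific) > providerConcurrency > defaultConcurrency
  "defaultConcurrency": 4,
  "providerConcurrency": { "anthropic": 2 },
  "modelConcurrency": { "anthropic/claude-opus-4-5": 1 }
}
```

With these values, `anthropic/claude-opus-4-5` tasks are capped at 1 concurrent run, other Anthropic models at 2, and everything else at 4.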
@@ -1035,13 +1033,13 @@ OpenCode でサポートされるすべての LSP 構成およびカスタム設
}
```
| オプション | デフォルト | 説明 |
| --------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `preemptive_compaction_threshold` | `0.85` | プリエンプティブコンパクションをトリガーする閾値(0.5-0.95)。`preemptive-compaction` フックはデフォルトで有効です。このオプションで閾値をカスタマイズできます。 |
| オプション | デフォルト | 説明 |
| --------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `preemptive_compaction_threshold` | `0.85` | プリエンプティブコンパクションをトリガーする閾値(0.5-0.95)。`preemptive-compaction` フックはデフォルトで有効です。このオプションで閾値をカスタマイズできます。 |
| `truncate_all_tool_outputs` | `false` | ホワイトリストのツール(Grep、Glob、LSP、AST-grep)だけでなく、すべてのツール出力を切り詰めます。Tool output truncator はデフォルトで有効です - `disabled_hooks`で無効化できます。 |
| `aggressive_truncation` | `false` | トークン制限を超えた場合、ツール出力を積極的に切り詰めて制限内に収めます。デフォルトの切り詰めより積極的です。不十分な場合は要約/復元にフォールバックします。 |
| `auto_resume` | `false` | thinking block エラーや thinking disabled violation からの回復成功後、自動的にセッションを再開します。最後のユーザーメッセージを抽出して続行します。 |
| `dcp_for_compaction` | `false` | コンパクション用 DCP(動的コンテキスト整理)を有効化 - トークン制限超過時に最初に実行されます。コンパクション前に重複したツール呼び出しと古いツール出力を整理します。 |
| `aggressive_truncation` | `false` | トークン制限を超えた場合、ツール出力を積極的に切り詰めて制限内に収めます。デフォルトの切り詰めより積極的です。不十分な場合は要約/復元にフォールバックします。 |
| `auto_resume` | `false` | thinking block エラーや thinking disabled violation からの回復成功後、自動的にセッションを再開します。最後のユーザーメッセージを抽出して続行します。 |
| `dcp_for_compaction` | `false` | コンパクション用 DCP(動的コンテキスト整理)を有効化 - トークン制限超過時に最初に実行されます。コンパクション前に重複したツール呼び出しと古いツール出力を整理します。 |
**警告**:これらの機能は実験的であり、予期しない動作を引き起こす可能性があります。影響を理解した場合にのみ有効にしてください。


@@ -29,10 +29,7 @@
> This is coding on steroids—`oh-my-opencode` in action. Run background agents, call specialized agents like oracle, librarian, and frontend engineer. Use crafted LSP/AST tools, curated MCPs, and a full Claude Code compatibility layer.
No stupid token consumption massive subagents here. No bloat tools here.
**Certified, Verified, Tested, Actually Useful Harness in Production, after $24,000 worth of tokens spent.**
**START WITH YOUR ChatGPT, Claude, Gemini SUBSCRIPTIONS. WE ALL COVER THEM.**
**Notice: Do not use expensive models for librarian. This is not only unhelpful to you, but also burdens LLM providers. Use models like Claude Haiku, Gemini Flash, GLM 4.7, or MiniMax instead.**
<div align="center">
@@ -128,6 +125,7 @@ No stupid token consumption massive subagents here. No bloat tools here.
- [Agents](#agents)
- [Permission Options](#permission-options)
- [Built-in Skills](#built-in-skills)
- [Git Master](#git-master)
- [Sisyphus Agent](#sisyphus-agent)
- [Background Tasks](#background-tasks)
- [Categories](#categories)
@@ -135,6 +133,7 @@ No stupid token consumption massive subagents here. No bloat tools here.
- [MCPs](#mcps)
- [LSP](#lsp)
- [Experimental](#experimental)
- [Environment Variables](#environment-variables)
- [Author's Note](#authors-note)
- [Warnings](#warnings)
- [Loved by professionals at](#loved-by-professionals-at)
@@ -355,15 +354,14 @@ Read the [opencode-antigravity-auth documentation](https://github.com/NoeFabris/
##### oh-my-opencode Agent Model Override
The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`) and disable the built-in `google_auth`:
The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`):
```json
{
"google_auth": false,
"agents": {
"frontend-ui-ux-engineer": { "model": "google/gemini-3-pro-high" },
"document-writer": { "model": "google/gemini-3-flash" },
"multimodal-looker": { "model": "google/gemini-3-flash" }
"frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
"document-writer": { "model": "google/antigravity-gemini-3-flash" },
"multimodal-looker": { "model": "google/antigravity-gemini-3-flash" }
}
}
```
@@ -421,7 +419,7 @@ opencode auth login
**Unless the user explicitly requests it, do not change model settings or disable features (agents, hooks, MCPs).**
The plugin works perfectly by default. Except for the recommended `google_auth: true` setting, do not change other settings or turn off features without an explicit request.
The plugin works perfectly by default. Do not change settings or turn off features without an explicit request.
### Verify the setup
@@ -499,9 +497,9 @@ To remove oh-my-opencode:
- **Sisyphus** (`anthropic/claude-opus-4-5`): **The default agent.** A powerful AI orchestrator for OpenCode. Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Emphasizes background task delegation and todo-driven workflow. Uses Claude Opus 4.5 with extended thinking (32k budget) for maximum reasoning capability.
- **oracle** (`openai/gpt-5.2`): Architecture, code review, strategy. Uses GPT-5.2 for its stellar logical reasoning and deep analysis. Inspired by AmpCode.
- **librarian** (`anthropic/claude-sonnet-4-5` or `google/gemini-3-flash`): Multi-repo analysis, doc lookup, implementation examples. Uses Gemini 3 Flash when Antigravity auth is configured, otherwise Claude Sonnet 4.5 for deep codebase understanding and GitHub research with evidence-based answers. Inspired by AmpCode.
- **librarian** (`opencode/glm-4.7-free`): Multi-repo analysis, doc lookup, implementation examples. Uses GLM-4.7 Free for deep codebase understanding and GitHub research with evidence-based answers. Inspired by AmpCode.
- **explore** (`opencode/grok-code`, `google/gemini-3-flash`, or `anthropic/claude-haiku-4-5`): Fast codebase exploration and pattern matching. Uses Gemini 3 Flash when Antigravity auth is configured, Haiku when Claude max20 is available, otherwise Grok. Inspired by Claude Code.
- **frontend-ui-ux-engineer** (`google/gemini-3-pro-high`): A designer turned developer. Builds gorgeous UIs. Gemini excels at creative, beautiful UI code.
- **frontend-ui-ux-engineer** (`google/gemini-3-pro-preview`): A designer turned developer. Builds gorgeous UIs. Gemini excels at creative, beautiful UI code.
- **document-writer** (`google/gemini-3-flash`): Technical writing expert. Gemini is a wordsmith—writes prose that flows.
- **multimodal-looker** (`google/gemini-3-flash`): Visual content specialist. Analyzes PDFs, images, diagrams to extract information.
@@ -805,9 +803,6 @@ When both `oh-my-opencode.jsonc` and `oh-my-opencode.json` files exist, `.jsonc`
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
// Enable Google Gemini via Antigravity OAuth
"google_auth": false,
/* Agent overrides - customize models for specific tasks */
"agents": {
"oracle": {
@@ -822,28 +817,7 @@ When both `oh-my-opencode.jsonc` and `oh-my-opencode.json` files exist, `.jsonc`
### Google Auth
**Recommended**: Use the external [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) plugin. It provides multi-account load balancing, more models (including Claude via Antigravity), and active maintenance. See [Installation > Google Gemini](#google-gemini-antigravity-oauth).
When using `opencode-antigravity-auth`, disable the built-in auth and override agent models in `oh-my-opencode.json`:
```json
{
"google_auth": false,
"agents": {
"frontend-ui-ux-engineer": { "model": "google/gemini-3-pro-high" },
"document-writer": { "model": "google/gemini-3-flash" },
"multimodal-looker": { "model": "google/gemini-3-flash" }
}
}
```
**Alternative**: Enable built-in Antigravity OAuth (single account, Gemini models only):
```json
{
"google_auth": true
}
```
**Recommended**: For Google Gemini authentication, install the [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) plugin. It provides multi-account load balancing, more models (including Claude via Antigravity), and active maintenance. See [Installation > Google Gemini](#google-gemini-antigravity-oauth).
### Agents
@@ -945,10 +919,10 @@ Configure git-master skill behavior:
}
```
| Option | Default | Description |
| ------ | ------- | ----------- |
| `commit_footer` | `true` | Adds "Ultraworked with Sisyphus" footer to commit messages. |
| `include_co_authored_by` | `true` | Adds `Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>` trailer to commits. |
| Option | Default | Description |
| ------------------------ | ------- | -------------------------------------------------------------------------------- |
| `commit_footer` | `true` | Adds "Ultraworked with Sisyphus" footer to commit messages. |
| `include_co_authored_by` | `true` | Adds `Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>` trailer to commits. |
### Sisyphus Agent
@@ -1016,12 +990,12 @@ You can also customize Sisyphus agents like other agents:
}
```
| Option | Default | Description |
| --------------------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `disabled` | `false` | When `true`, disables all Sisyphus orchestration and restores original build/plan as primary. |
| `default_builder_enabled` | `false` | When `true`, enables OpenCode-Builder agent (same as OpenCode build, renamed due to SDK limitations). Disabled by default. |
| `planner_enabled` | `true` | When `true`, enables Prometheus (Planner) agent with work-planner methodology. Enabled by default. |
| `replace_plan` | `true` | When `true`, demotes default plan agent to subagent mode. Set to `false` to keep both Prometheus (Planner) and default plan available. |
| Option | Default | Description |
| ------------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| `disabled` | `false` | When `true`, disables all Sisyphus orchestration and restores original build/plan as primary. |
| `default_builder_enabled` | `false` | When `true`, enables OpenCode-Builder agent (same as OpenCode build, renamed due to SDK limitations). Disabled by default. |
| `planner_enabled` | `true` | When `true`, enables Prometheus (Planner) agent with work-planner methodology. Enabled by default. |
| `replace_plan` | `true` | When `true`, demotes default plan agent to subagent mode. Set to `false` to keep both Prometheus (Planner) and default plan available. |
### Background Tasks
@@ -1059,14 +1033,14 @@ Configure concurrency limits for background agent tasks. This controls how many
### Categories
Categories enable domain-specific task delegation via the `sisyphus_task` tool. Each category pre-configures a specialized `Sisyphus-Junior-{category}` agent with optimized model settings and prompts.
Categories enable domain-specific task delegation via the `sisyphus_task` tool. Each category applies runtime presets (model, temperature, prompt additions) when calling the `Sisyphus-Junior` agent.
**Default Categories:**
| Category | Model | Description |
|----------|-------|-------------|
| `visual` | `google/gemini-3-pro-preview` | Frontend, UI/UX, design-focused tasks. High creativity (temp 0.7). |
| `business-logic` | `openai/gpt-5.2` | Backend logic, architecture, strategic reasoning. Low creativity (temp 0.1). |
| Category | Model | Description |
| ---------------- | ----------------------------- | ---------------------------------------------------------------------------- |
| `visual` | `google/gemini-3-pro-preview` | Frontend, UI/UX, design-focused tasks. High creativity (temp 0.7). |
| `business-logic` | `openai/gpt-5.2` | Backend logic, architecture, strategic reasoning. Low creativity (temp 0.1). |
**Usage:**
@@ -1092,7 +1066,7 @@ Add custom categories in `oh-my-opencode.json`:
"prompt_append": "Focus on data analysis, ML pipelines, and statistical methods."
},
"visual": {
"model": "google/gemini-3-pro-high",
"model": "google/gemini-3-pro-preview",
"prompt_append": "Use shadcn/ui components and Tailwind CSS."
}
}
@@ -1181,6 +1155,12 @@ Opt-in experimental features that may change or be removed in future versions. U
**Warning**: These features are experimental and may cause unexpected behavior. Enable only if you understand the implications.
### Environment Variables
| Variable | Description |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `OPENCODE_CONFIG_DIR` | Override the OpenCode configuration directory. Useful for profile isolation with tools like [OCX](https://github.com/kdcokenny/ocx) ghost mode. |
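A hedged sketch of profile isolation with the variable above — the directory path is an assumption for illustration, and `opencode` itself is not invoked here:

```shell
# Create and select an isolated OpenCode config directory for a "work" profile.
# The path below is illustrative only.
export OPENCODE_CONFIG_DIR="$HOME/.config/opencode-work"
mkdir -p "$OPENCODE_CONFIG_DIR"
# Any opencode invocation in this shell now reads config from the isolated directory.
echo "$OPENCODE_CONFIG_DIR"
```

Running `opencode` in the same shell would then read its configuration (including `oh-my-opencode.json`) from that directory instead of the default `~/.config/opencode/`.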
## Author's Note


@@ -102,6 +102,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -225,6 +228,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -348,6 +354,135 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": {
"type": "number",
"minimum": 0,
"maximum": 2
},
"top_p": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"prompt": {
"type": "string"
},
"prompt_append": {
"type": "string"
},
"tools": {
"type": "object",
"propertyNames": {
"type": "string"
},
"additionalProperties": {
"type": "boolean"
}
},
"disable": {
"type": "boolean"
},
"description": {
"type": "string"
},
"mode": {
"type": "string",
"enum": [
"subagent",
"primary",
"all"
]
},
"color": {
"type": "string",
"pattern": "^#[0-9A-Fa-f]{6}$"
},
"permission": {
"type": "object",
"properties": {
"edit": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
"bash": {
"anyOf": [
{
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
{
"type": "object",
"propertyNames": {
"type": "string"
},
"additionalProperties": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
}
}
]
},
"webfetch": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
"doom_loop": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
"external_directory": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
}
}
}
}
},
"Sisyphus-Junior": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -471,6 +606,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -594,6 +732,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -717,6 +858,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -840,6 +984,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -963,6 +1110,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -1086,6 +1236,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -1209,6 +1362,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -1332,6 +1488,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -1455,6 +1614,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -1578,6 +1740,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -1701,6 +1866,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"category": {
"type": "string"
},
@@ -1831,6 +1999,9 @@
"model": {
"type": "string"
},
"variant": {
"type": "string"
},
"temperature": {
"type": "number",
"minimum": 0,
@@ -1928,9 +2099,6 @@
}
}
},
"sisyphus_agent": {
"type": "object",
"properties": {


@@ -1,5 +1,26 @@
# Oh-My-OpenCode Orchestration Guide
## TL;DR - When to Use What
| Complexity | Approach | When to Use |
|------------|----------|-------------|
| **Simple** | Just prompt | Simple tasks, quick fixes, single-file changes |
| **Complex + Lazy** | Just type `ulw` or `ultrawork` | Complex tasks where explaining context is tedious. Agent figures it out. |
| **Complex + Precise** | `@plan` → `/start-work` | Precise, multi-step work requiring true orchestration. Prometheus plans, Sisyphus executes. |
**Decision Flow:**
```
Is it a quick fix or simple task?
└─ YES → Just prompt normally
└─ NO → Is explaining the full context tedious?
└─ YES → Type "ulw" and let the agent figure it out
└─ NO → Do you need precise, verifiable execution?
└─ YES → Use @plan for Prometheus planning, then /start-work
└─ NO → Just use "ulw"
```
---
This document provides a comprehensive guide to the orchestration system that implements Oh-My-OpenCode's core philosophy: **"Separation of Planning and Execution"**.
## 1. Overview
@@ -16,7 +37,7 @@ Oh-My-OpenCode solves this by clearly separating two roles:
## 2. Overall Architecture
```mermaid
flowchart TD
User[User Request] --> Prometheus
subgraph Planning Phase
@@ -24,10 +45,10 @@ graph TD
Metis --> Prometheus
Prometheus --> Momus[Momus<br>Reviewer]
Momus --> Prometheus
Prometheus --> PlanFile["/.sisyphus/plans/{name}.md"]
end
PlanFile --> StartWork[//start-work/]
StartWork --> BoulderState[boulder.json]
subgraph Execution Phase
@@ -93,9 +114,9 @@ When the user enters `/start-work`, the execution phase begins.
## 5. Commands and Usage
### `@plan [request]`
Invokes Prometheus to start a planning session.
- Example: `@plan "I want to refactor the authentication system to NextAuth"`
### `/start-work`
Executes the generated plan.


@@ -1,6 +1,6 @@
{
"name": "oh-my-opencode",
"version": "3.0.0-beta.6",
"description": "The Best AI Agent Harness - Batteries-Included OpenCode Plugin with Multi-Model Orchestration, Parallel Background Agents, and Crafted LSP/AST Tools",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -16,14 +16,10 @@
"types": "./dist/index.d.ts",
"import": "./dist/index.js"
},
"./schema.json": "./dist/oh-my-opencode.schema.json"
},
"scripts": {
"build": "bun build src/index.ts --outdir dist --target bun --format esm --external @ast-grep/napi && tsc --emitDeclarationOnly && bun build src/cli/index.ts --outdir dist/cli --target bun --format esm --external @ast-grep/napi && bun run build:schema",
"build:schema": "bun run script/build-schema.ts",
"clean": "rm -rf dist",
"prepublishOnly": "bun run clean && bun run build",


@@ -114,6 +114,9 @@ function getDistTag(version: string): string | null {
}
async function buildAndPublish(version: string): Promise<void> {
console.log("\nBuilding before publish...")
await $`bun run clean && bun run build`
console.log("\nPublishing to npm...")
const distTag = getDistTag(version)
const tagArgs = distTag ? ["--tag", distTag] : []
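For context: given the version strings above, `getDistTag` plausibly extracts the semver prerelease identifier so that `3.0.0-beta.6` publishes under the `beta` dist-tag while stable versions get none. A hedged sketch; the actual implementation in the publish script may differ:

```typescript
// Illustrative only: map a semver prerelease identifier to an npm dist-tag.
// "3.0.0-beta.6" yields "beta"; "3.0.0" yields null (publish under "latest").
function getDistTag(version: string): string | null {
  const match = version.match(/^\d+\.\d+\.\d+-([a-z]+)/)
  return match ? match[1] : null
}
```

`npm publish --tag beta` then keeps prereleases from overwriting the `latest` tag.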


@@ -319,6 +319,182 @@
"created_at": "2026-01-08T20:18:27Z",
"repoId": 1108837393,
"pullRequestNo": 603
},
{
"name": "SJY0917032",
"id": 88534701,
"comment_id": 3728199745,
"created_at": "2026-01-09T10:01:19Z",
"repoId": 1108837393,
"pullRequestNo": 625
},
{
"name": "kdcokenny",
"id": 99611484,
"comment_id": 3728801075,
"created_at": "2026-01-09T12:54:05Z",
"repoId": 1108837393,
"pullRequestNo": 629
},
{
"name": "ElwinLiu",
"id": 87802244,
"comment_id": 3731812585,
"created_at": "2026-01-10T04:32:16Z",
"repoId": 1108837393,
"pullRequestNo": 645
},
{
"name": "Luodian",
"id": 15847405,
"comment_id": 3731833107,
"created_at": "2026-01-10T05:01:16Z",
"repoId": 1108837393,
"pullRequestNo": 634
},
{
"name": "imarshallwidjaja",
"id": 60992624,
"comment_id": 3732124681,
"created_at": "2026-01-10T07:58:43Z",
"repoId": 1108837393,
"pullRequestNo": 648
},
{
"name": "GollyJer",
"id": 689204,
"comment_id": 3732253764,
"created_at": "2026-01-10T09:33:21Z",
"repoId": 1108837393,
"pullRequestNo": 649
},
{
"name": "kargnas",
"id": 1438533,
"comment_id": 3732344143,
"created_at": "2026-01-10T10:25:25Z",
"repoId": 1108837393,
"pullRequestNo": 653
},
{
"name": "ashir6892",
"id": 52703606,
"comment_id": 3733435826,
"created_at": "2026-01-10T19:50:07Z",
"repoId": 1108837393,
"pullRequestNo": 675
},
{
"name": "arthur404dev",
"id": 59490008,
"comment_id": 3733697071,
"created_at": "2026-01-10T23:51:44Z",
"repoId": 1108837393,
"pullRequestNo": 676
},
{
"name": "KNN-07",
"id": 55886589,
"comment_id": 3733788592,
"created_at": "2026-01-11T01:11:38Z",
"repoId": 1108837393,
"pullRequestNo": 679
},
{
"name": "aw338WoWmUI",
"id": 121638634,
"comment_id": 3734013343,
"created_at": "2026-01-11T04:56:38Z",
"repoId": 1108837393,
"pullRequestNo": 681
},
{
"name": "Coaspe",
"id": 76432686,
"comment_id": 3734070196,
"created_at": "2026-01-11T06:03:57Z",
"repoId": 1108837393,
"pullRequestNo": 682
},
{
"name": "yimingll",
"id": 116444509,
"comment_id": 3734341425,
"created_at": "2026-01-11T10:00:54Z",
"repoId": 1108837393,
"pullRequestNo": 689
},
{
"name": "Sanyue0v0",
"id": 177394511,
"comment_id": 3735145789,
"created_at": "2026-01-11T17:37:13Z",
"repoId": 1108837393,
"pullRequestNo": 696
},
{
"name": "chilipvlmer",
"id": 100484914,
"comment_id": 3735268635,
"created_at": "2026-01-11T18:19:56Z",
"repoId": 1108837393,
"pullRequestNo": 698
},
{
"name": "Momentum96",
"id": 31430161,
"comment_id": 3737397810,
"created_at": "2026-01-12T08:33:44Z",
"repoId": 1108837393,
"pullRequestNo": 709
},
{
"name": "dante01yoon",
"id": 6510430,
"comment_id": 3738360375,
"created_at": "2026-01-12T12:38:47Z",
"repoId": 1108837393,
"pullRequestNo": 710
},
{
"name": "LTS2",
"id": 24840361,
"comment_id": 3743927388,
"created_at": "2026-01-13T11:57:10Z",
"repoId": 1108837393,
"pullRequestNo": 745
},
{
"name": "haal-laah",
"id": 122613332,
"comment_id": 3742477826,
"created_at": "2026-01-13T07:26:35Z",
"repoId": 1108837393,
"pullRequestNo": 739
},
{
"name": "oussamadouhou",
"id": 16113844,
"comment_id": 3742035216,
"created_at": "2026-01-13T05:31:56Z",
"repoId": 1108837393,
"pullRequestNo": 731
},
{
"name": "abhijit360",
"id": 23292258,
"comment_id": 3747332060,
"created_at": "2026-01-14T01:55:14Z",
"repoId": 1108837393,
"pullRequestNo": 759
},
{
"name": "justsisyphus",
"id": 254807767,
"comment_id": 3747336906,
"created_at": "2026-01-14T01:57:52Z",
"repoId": 1108837393,
"pullRequestNo": 760
}
]
}


@@ -1,25 +1,23 @@
# AGENTS KNOWLEDGE BASE
## OVERVIEW
AI agent definitions for multi-model orchestration, delegating tasks to specialized experts.
## STRUCTURE
```
agents/
├── orchestrator-sisyphus.ts # Orchestrator agent (1486 lines) - 7-section delegation, wisdom
├── sisyphus.ts # Main Sisyphus prompt (643 lines)
├── sisyphus-junior.ts # Junior variant for delegated tasks
├── oracle.ts # Strategic advisor (GPT-5.2)
├── librarian.ts # Multi-repo research (GLM-4.7-free)
├── explore.ts # Fast codebase grep (Grok Code)
├── frontend-ui-ux-engineer.ts # UI generation (Gemini 3 Pro)
├── document-writer.ts # Technical docs (Gemini 3 Pro)
├── multimodal-looker.ts # PDF/image analysis (Gemini 3 Flash)
├── prometheus-prompt.ts # Planning agent prompt (988 lines) - interview mode
├── metis.ts # Plan Consultant agent - pre-planning analysis
├── momus.ts # Plan Reviewer agent - plan validation
├── build-prompt.ts # Shared build agent prompt
├── plan-prompt.ts # Shared plan agent prompt
├── types.ts # AgentModelConfig interface
@@ -28,69 +26,35 @@ agents/
```
## AGENT MODELS
| Agent | Default Model | Purpose |
|-------|---------------|---------|
| Sisyphus | claude-opus-4-5 | Primary orchestrator. 32k extended thinking budget. |
| oracle | openai/gpt-5.2 | High-IQ debugging, architecture, strategic consultation. |
| librarian | glm-4.7-free | Multi-repo analysis, docs research, GitHub examples. |
| explore | grok-code | Fast contextual grep. Fallbacks: Gemini-3-Flash, Haiku-4-5. |
| frontend-ui-ux | gemini-3-pro | Production-grade UI/UX generation and styling. |
| document-writer | gemini-3-pro | Technical writing, guides, API documentation. |
| Prometheus | claude-opus-4-5 | Strategic planner. Interview mode, orchestrates Metis/Momus. |
| Metis | claude-sonnet-4-5 | Plan Consultant. Pre-planning risk/requirement analysis. |
| Momus | claude-sonnet-4-5 | Plan Reviewer. Validation and quality enforcement. |
## HOW TO ADD AN AGENT
1. Create `src/agents/my-agent.ts` exporting `AgentConfig`.
2. Add to `builtinAgents` in `src/agents/index.ts`.
3. Update `types.ts` if adding new config interfaces.
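Step 1 can be sketched as below. `AgentConfigLike` is a local structural stand-in for the SDK's `AgentConfig` (the real type comes from `@opencode-ai/sdk`; the fields shown mirror config options used in this repo but should be verified against the SDK):

```typescript
// Minimal stand-in for @opencode-ai/sdk's AgentConfig (illustrative only).
interface AgentConfigLike {
  model: string                   // "provider/model-name"
  temperature?: number            // keep <= 0.3 for code agents (see anti-patterns)
  prompt?: string                 // system prompt, may be a multiline template literal
  tools?: Record<string, boolean> // explicit allow/deny per tool
}

const myAgent: AgentConfigLike = {
  model: "provider/model-name",
  temperature: 0.1,
  prompt: "You are a focused, single-purpose agent...",
  tools: { read: true, task: false },
}
```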
## MODEL FALLBACK LOGIC
`createBuiltinAgents()` handles resolution:
1. User config override (`agents.{name}.model`).
2. Environment-specific settings (max20, antigravity).
3. Hardcoded defaults in `index.ts`.
`createBuiltinAgents()` in utils.ts handles model fallback:
1. Check user config override (`agents.{name}.model`)
2. Check installer settings (claude max20, gemini antigravity)
3. Use default model
**Fallback order for explore**:
- If gemini antigravity enabled → `google/gemini-3-flash`
- If claude max20 enabled → `anthropic/claude-haiku-4-5`
- Default → `opencode/grok-code` (free)
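That resolution order can be sketched as follows; the function name and settings shape are illustrative, not the actual `utils.ts` API:

```typescript
// Illustrative: resolve the explore agent's model per the fallback order above.
interface InstallerSettings {
  geminiAntigravity: boolean // gemini antigravity install detected
  claudeMax20: boolean       // claude max20 install detected
}

function resolveExploreModel(
  userOverride: string | undefined, // agents.explore.model from user config
  settings: InstallerSettings,
): string {
  if (userOverride) return userOverride                          // 1. user config wins
  if (settings.geminiAntigravity) return "google/gemini-3-flash" // 2. installer settings
  if (settings.claudeMax20) return "anthropic/claude-haiku-4-5"
  return "opencode/grok-code"                                    // 3. free default
}
```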
## ANTI-PATTERNS
- **Trusting reports**: NEVER trust subagent self-reports; always verify outputs.
- **High temp**: Don't use >0.3 for code agents (Sisyphus/Prometheus use 0.1).
- **Sequential calls**: Prefer `sisyphus_task` with `run_in_background` for parallelism.
## SHARED PROMPTS
- **build-prompt.ts**: Unified base for Sisyphus and Builder variants.
- **plan-prompt.ts**: Core planning logic shared across planning agents.
- **orchestrator-sisyphus.ts**: Uses a 7-section prompt structure and "wisdom notepad" to preserve learnings across turns.


@@ -1,7 +1,7 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
const DEFAULT_MODEL = "opencode/glm-4.7-free"
export const LIBRARIAN_PROMPT_METADATA: AgentPromptMetadata = {
category: "exploration",
@@ -129,15 +129,15 @@ Tool 3: grep_app_searchGitHub(query: "usage pattern", language: ["TypeScript"])
\`\`\`
Step 1: Clone to temp directory
gh repo clone owner/repo \${TMPDIR:-/tmp}/repo-name -- --depth 1
Step 2: Get commit SHA for permalinks
cd \${TMPDIR:-/tmp}/repo-name && git rev-parse HEAD
Step 3: Find the implementation
- grep/ast_grep_search for function/class
- read the specific file
- git blame for context if needed
Step 4: Construct permalink
https://github.com/owner/repo/blob/<sha>/path/to/file#L10-L20
\`\`\`
@@ -272,7 +272,7 @@ Use OS-appropriate temp directory:
| Request Type | Minimum Parallel Calls | Doc Discovery First? |
|--------------|------------------------|----------------------|
| TYPE B (Implementation) | 2-3 | NO |
| TYPE C (Context) | 2-3 | NO |
| TYPE D (Comprehensive) | 3-5 | YES (Phase 0.5 first) |
**Doc Discovery is SEQUENTIAL** (websearch → version check → sitemap → investigate).
**Main phase is PARALLEL** once you know where to look.
@@ -308,7 +308,7 @@ grep_app_searchGitHub(query: "useQuery")
## COMMUNICATION RULES
1. **NO TOOL NAMES**: Say "I'll search the codebase" not "I'll use grep_app"
2. **NO PREAMBLE**: Answer directly, skip "I'll help you with..."
3. **ALWAYS CITE**: Every code claim needs a permalink
4. **USE MARKDOWN**: Code blocks with language identifiers
5. **BE CONCISE**: Facts > opinions, evidence > speculation

`src/agents/momus.test.ts` (new file, 57 lines)

@@ -0,0 +1,57 @@
import { describe, test, expect } from "bun:test"
import { MOMUS_SYSTEM_PROMPT } from "./momus"
function escapeRegExp(value: string) {
return value.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")
}
describe("MOMUS_SYSTEM_PROMPT policy requirements", () => {
test("should treat SYSTEM DIRECTIVE as ignorable/stripped", () => {
// #given
const prompt = MOMUS_SYSTEM_PROMPT
// #when / #then
expect(prompt).toContain("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]")
// Should explicitly mention stripping or ignoring these
expect(prompt.toLowerCase()).toMatch(/ignore|strip|system directive/)
})
test("should extract paths containing .sisyphus/plans/ and ending in .md", () => {
// #given
const prompt = MOMUS_SYSTEM_PROMPT
// #when / #then
expect(prompt).toContain(".sisyphus/plans/")
expect(prompt).toContain(".md")
// New extraction policy should be mentioned
expect(prompt.toLowerCase()).toMatch(/extract|search|find path/)
})
test("should NOT teach that 'Please review' is INVALID (conversational wrapper allowed)", () => {
// #given
const prompt = MOMUS_SYSTEM_PROMPT
// #when / #then
// In RED phase, this will FAIL because current prompt explicitly lists this as INVALID
const invalidExample = "Please review .sisyphus/plans/plan.md"
const rejectionTeaching = new RegExp(
`reject.*${escapeRegExp(invalidExample)}`,
"i",
)
// We want the prompt to NOT reject this anymore.
// If it's still in the "INVALID" list, this test should fail.
expect(prompt).not.toMatch(rejectionTeaching)
})
test("should handle ambiguity (2+ paths) and 'no path found' rejection", () => {
// #given
const prompt = MOMUS_SYSTEM_PROMPT
// #when / #then
// Should mention what happens when multiple paths are found
expect(prompt.toLowerCase()).toMatch(/multiple|ambiguous|2\+|two/)
// Should mention rejection if no path found
expect(prompt.toLowerCase()).toMatch(/no.*path.*found|reject.*no.*path/)
})
})


@@ -22,10 +22,7 @@ const DEFAULT_MODEL = "openai/gpt-5.2"
export const MOMUS_SYSTEM_PROMPT = `You are a work plan review expert. You review the provided work plan (.sisyphus/plans/{name}.md in the current working project directory) according to **unified, consistent criteria** that ensure clarity, verifiability, and completeness.
**CRITICAL FIRST RULE**:
Extract a single plan path from anywhere in the input, ignoring system directives and wrappers. If exactly one \`.sisyphus/plans/*.md\` path exists, this is VALID input and you must read it. If no plan path exists or multiple plan paths exist, reject per Step 0. If the path points to a YAML plan file (\`.yml\` or \`.yaml\`), reject it as non-reviewable.
**WHY YOU'VE BEEN SUMMONED - THE CONTEXT**:
@@ -121,61 +118,64 @@ You will be provided with the path to the work plan file (typically \`.sisyphus/
**BEFORE you read any files**, you MUST first validate the format of the input prompt you received from the user.
**VALID INPUT EXAMPLES (ACCEPT THESE)**:
- \`.sisyphus/plans/my-plan.md\` [O] ACCEPT - file path anywhere in input
- \`/path/to/project/.sisyphus/plans/my-plan.md\` [O] ACCEPT - absolute plan path
- \`Please review .sisyphus/plans/plan.md\` [O] ACCEPT - conversational wrapper allowed
- \`<system-reminder>...</system-reminder>\\n.sisyphus/plans/plan.md\` [O] ACCEPT - system directives + plan path
- \`[analyze-mode]\\n...context...\\n.sisyphus/plans/plan.md\` [O] ACCEPT - bracket-style directives + plan path
- \`[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]\\n---\\n- injected planning metadata\\n---\\nPlease review .sisyphus/plans/plan.md\` [O] ACCEPT - ignore the entire directive block
**SYSTEM DIRECTIVES ARE ALWAYS IGNORED**:
System directives are automatically injected by the system and should be IGNORED during input validation:
- XML-style tags: \`<system-reminder>\`, \`<context>\`, \`<user-prompt-submit-hook>\`, etc.
- Bracket-style blocks: \`[analyze-mode]\`, \`[search-mode]\`, \`[SYSTEM DIRECTIVE...]\`, \`[SYSTEM REMINDER...]\`, etc.
- \`[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]\` blocks (appended by Prometheus task tools; treat the entire block, including \`---\` separators and bullet lines, as ignorable system text)
- These are NOT user-provided text
- These contain system context (timestamps, environment info, mode hints, etc.)
- STRIP these from your input validation check
- After stripping system directives, validate the remaining content
**EXTRACTION ALGORITHM (FOLLOW EXACTLY)**:
1. Ignore injected system directive blocks, especially \`[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]\` (remove the whole block, including \`---\` separators and bullet lines).
2. Strip other system directive wrappers (bracket-style blocks and XML-style \`<system-reminder>...</system-reminder>\` tags).
3. Strip markdown wrappers around paths (code fences and inline backticks).
4. Extract plan paths by finding all substrings containing \`.sisyphus/plans/\` and ending in \`.md\`.
5. If exactly 1 match → ACCEPT and proceed to Step 1 using that path.
6. If 0 matches → REJECT with: "no plan path found" (no path found).
7. If 2+ matches → REJECT with: "ambiguous: multiple plan paths".
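The algorithm above can be sketched in TypeScript. This is an illustration of the described behavior only; the shipped Momus prompt specifies rules for a language model, not code:

```typescript
// Illustrative implementation of the Momus plan-path extraction algorithm.
type Extraction =
  | { status: "ok"; path: string }
  | { status: "no plan path found" }
  | { status: "ambiguous: multiple plan paths" }

function extractPlanPath(input: string): Extraction {
  const stripped = input
    // 1-2. drop XML-style and bracket-style system directive wrappers
    .replace(/<system-reminder>[\s\S]*?<\/system-reminder>/g, "")
    .replace(/^\[[^\]\n]*\][^\n]*$/gm, "")
    // 3. strip markdown wrappers (backticks)
    .replace(/`/g, "")
  // 4. collect substrings containing .sisyphus/plans/ and ending in .md
  const matches = stripped.match(/\S*\.sisyphus\/plans\/\S*?\.md/g) ?? []
  const unique = [...new Set(matches)]
  if (unique.length === 1) return { status: "ok", path: unique[0] }   // 5. accept
  if (unique.length === 0) return { status: "no plan path found" }    // 6. reject
  return { status: "ambiguous: multiple plan paths" }                 // 7. reject
}
```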
**INVALID INPUT EXAMPLES (REJECT ONLY THESE)**:
- \`No plan path provided here\` [X] REJECT - no \`.sisyphus/plans/*.md\` path
- \`Compare .sisyphus/plans/first.md and .sisyphus/plans/second.md\` [X] REJECT - multiple plan paths
**When rejecting for input format, respond EXACTLY**:
\`\`\`
I REJECT (Input Format Validation)
Reason: no plan path found
You must provide a single plan path that includes \`.sisyphus/plans/\` and ends in \`.md\`.
Valid format: .sisyphus/plans/plan.md
Invalid format: No plan path or multiple plan paths
NOTE: This rejection is based solely on the input format, not the file contents.
The file itself has not been evaluated yet.
\`\`\`
Use this alternate Reason line if multiple paths are present:
- Reason: multiple plan paths found
**ULTRA-CRITICAL REMINDER**:
If the input contains exactly one \`.sisyphus/plans/*.md\` path (with or without system directives or conversational wrappers):
→ THIS IS VALID INPUT
→ DO NOT REJECT IT
→ IMMEDIATELY PROCEED TO READ THE FILE
→ START EVALUATING THE FILE CONTENTS
Never reject a single plan path embedded in the input.
Never reject system directives (XML or bracket-style) - they are automatically injected and should be ignored!
**IMPORTANT - Response Language**: Your evaluation output MUST match the language used in the work plan content:
- Match the language of the plan in your evaluation output
- If the plan is written in English → Write your entire evaluation in English
@@ -262,7 +262,7 @@ The plan should enable a developer to:
## Review Process
### Step 0: Validate Input Format (MANDATORY FIRST STEP)
Extract the plan path from anywhere in the input. If exactly one \`.sisyphus/plans/*.md\` path is found, ACCEPT and continue. If none are found, REJECT with "no plan path found". If multiple are found, REJECT with "ambiguous: multiple plan paths".
### Step 1: Read the Work Plan
- Load the file from the path provided


@@ -131,8 +131,9 @@ ${rows.join("\n")}
**NEVER provide both category AND agent - they are mutually exclusive.**`
}
export const ORCHESTRATOR_SISYPHUS_SYSTEM_PROMPT = `
<Role>
You are "Sisyphus" - Powerful AI Agent with orchestration capabilities from OhMyOpenCode.
**Why Sisyphus?**: Humans roll their boulder every day. So do you. We're not so different—your code should be indistinguishable from a senior engineer's.
@@ -1440,7 +1441,6 @@ export function createOrchestratorSisyphusAgent(ctx?: OrchestratorContext): Agen
"task",
"call_omo_agent",
])
return {
description:
"Orchestrates work via sisyphus_task() to complete ALL tasks in a todo list until fully done",


@@ -0,0 +1,22 @@
import { describe, test, expect } from "bun:test"
import { PROMETHEUS_SYSTEM_PROMPT } from "./prometheus-prompt"
describe("PROMETHEUS_SYSTEM_PROMPT Momus invocation policy", () => {
test("should direct providing ONLY the file path string when invoking Momus", () => {
// #given
const prompt = PROMETHEUS_SYSTEM_PROMPT
// #when / #then
// Should mention Momus and providing only the path
expect(prompt.toLowerCase()).toMatch(/momus.*only.*path|path.*only.*momus/)
})
test("should forbid wrapping Momus invocation in explanations or markdown", () => {
// #given
const prompt = PROMETHEUS_SYSTEM_PROMPT
// #when / #then
// Should mention not wrapping or using markdown for the path
expect(prompt.toLowerCase()).toMatch(/not.*wrap|no.*explanation|no.*markdown/)
})
})


@@ -651,6 +651,12 @@ while (true) {
- Momus is the gatekeeper
- Your job is to satisfy Momus, not to argue with it
5. **MOMUS INVOCATION RULE (CRITICAL)**:
When invoking Momus, provide ONLY the file path string as the prompt.
- Do NOT wrap in explanations, markdown, or conversational text.
- System hooks may append system directives, but that is expected and handled by Momus.
- Example invocation: \`prompt=".sisyphus/plans/{name}.md"\`
### What "OKAY" Means
Momus only says "OKAY" when:
@@ -974,9 +980,11 @@ This will:
/**
* Prometheus planner permission configuration.
* Allows write/edit for plan files (.md only, enforced by prometheus-md-only hook).
* Question permission allows agent to ask user questions via OpenCode's QuestionTool.
*/
export const PROMETHEUS_PERMISSION = {
edit: "allow" as const,
bash: "allow" as const,
webfetch: "allow" as const,
question: "allow" as const,
}


@@ -0,0 +1,232 @@
import { describe, expect, test } from "bun:test"
import { createSisyphusJuniorAgentWithOverrides, SISYPHUS_JUNIOR_DEFAULTS } from "./sisyphus-junior"
describe("createSisyphusJuniorAgentWithOverrides", () => {
describe("honored fields", () => {
test("applies model override", () => {
// #given
const override = { model: "openai/gpt-5.2" }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.model).toBe("openai/gpt-5.2")
})
test("applies temperature override", () => {
// #given
const override = { temperature: 0.5 }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.temperature).toBe(0.5)
})
test("applies top_p override", () => {
// #given
const override = { top_p: 0.9 }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.top_p).toBe(0.9)
})
test("applies description override", () => {
// #given
const override = { description: "Custom description" }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.description).toBe("Custom description")
})
test("applies color override", () => {
// #given
const override = { color: "#FF0000" }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.color).toBe("#FF0000")
})
test("appends prompt_append to base prompt", () => {
// #given
const override = { prompt_append: "Extra instructions here" }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.prompt).toContain("You work ALONE")
expect(result.prompt).toContain("Extra instructions here")
})
})
describe("defaults", () => {
test("uses default model when no override", () => {
// #given
const override = {}
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.model).toBe(SISYPHUS_JUNIOR_DEFAULTS.model)
})
test("uses default temperature when no override", () => {
// #given
const override = {}
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.temperature).toBe(SISYPHUS_JUNIOR_DEFAULTS.temperature)
})
})
describe("disable semantics", () => {
test("disable: true causes override block to be ignored", () => {
// #given
const override = {
disable: true,
model: "openai/gpt-5.2",
temperature: 0.9,
}
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then - defaults should be used, not the overrides
expect(result.model).toBe(SISYPHUS_JUNIOR_DEFAULTS.model)
expect(result.temperature).toBe(SISYPHUS_JUNIOR_DEFAULTS.temperature)
})
})
describe("constrained fields", () => {
test("mode is forced to subagent", () => {
// #given
const override = { mode: "primary" as const }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.mode).toBe("subagent")
})
test("prompt override is ignored (discipline text preserved)", () => {
// #given
const override = { prompt: "Completely new prompt that replaces everything" }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.prompt).toContain("You work ALONE")
expect(result.prompt).not.toBe("Completely new prompt that replaces everything")
})
})
describe("tool safety (task/sisyphus_task blocked, call_omo_agent allowed)", () => {
test("task and sisyphus_task remain blocked, call_omo_agent is allowed via tools format", () => {
// #given
const override = {
tools: {
task: true,
sisyphus_task: true,
call_omo_agent: true,
read: true,
},
}
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
const tools = result.tools as Record<string, boolean> | undefined
const permission = result.permission as Record<string, string> | undefined
if (tools) {
expect(tools.task).toBe(false)
expect(tools.sisyphus_task).toBe(false)
// call_omo_agent is NOW ALLOWED for subagents to spawn explore/librarian
expect(tools.call_omo_agent).toBe(true)
expect(tools.read).toBe(true)
}
if (permission) {
expect(permission.task).toBe("deny")
expect(permission.sisyphus_task).toBe("deny")
// call_omo_agent is NOW ALLOWED for subagents to spawn explore/librarian
expect(permission.call_omo_agent).toBe("allow")
}
})
test("task and sisyphus_task remain blocked when using permission format override", () => {
// #given
const override = {
permission: {
task: "allow",
sisyphus_task: "allow",
call_omo_agent: "allow",
read: "allow",
},
} as { permission: Record<string, string> }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override as Parameters<typeof createSisyphusJuniorAgentWithOverrides>[0])
// #then - task/sisyphus_task blocked, but call_omo_agent allowed for explore/librarian spawning
const tools = result.tools as Record<string, boolean> | undefined
const permission = result.permission as Record<string, string> | undefined
if (tools) {
expect(tools.task).toBe(false)
expect(tools.sisyphus_task).toBe(false)
expect(tools.call_omo_agent).toBe(true)
}
if (permission) {
expect(permission.task).toBe("deny")
expect(permission.sisyphus_task).toBe("deny")
expect(permission.call_omo_agent).toBe("allow")
}
})
})
describe("prompt composition", () => {
test("base prompt contains discipline constraints", () => {
// #given
const override = {}
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
expect(result.prompt).toContain("Sisyphus-Junior")
expect(result.prompt).toContain("You work ALONE")
expect(result.prompt).toContain("BLOCKED ACTIONS")
})
test("prompt_append is added after base prompt", () => {
// #given
const override = { prompt_append: "CUSTOM_MARKER_FOR_TEST" }
// #when
const result = createSisyphusJuniorAgentWithOverrides(override)
// #then
const baseEndIndex = result.prompt!.indexOf("Dense > verbose.")
const appendIndex = result.prompt!.indexOf("CUSTOM_MARKER_FOR_TEST")
expect(baseEndIndex).not.toBe(-1) // Guard: anchor text must exist in base prompt
expect(appendIndex).toBeGreaterThan(baseEndIndex)
})
})
})


@@ -1,9 +1,10 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import { isGptModel } from "./types"
import type { CategoryConfig } from "../config/schema"
import type { AgentOverrideConfig, CategoryConfig } from "../config/schema"
import {
createAgentToolRestrictions,
migrateAgentConfig,
supportsNewPermissionSystem,
} from "../shared/permission-compat"
const SISYPHUS_JUNIOR_PROMPT = `<Role>
@@ -14,11 +15,10 @@ Execute tasks directly. NEVER delegate or spawn other agents.
<Critical_Constraints>
BLOCKED ACTIONS (will fail if attempted):
- task tool: BLOCKED
- sisyphus_task tool: BLOCKED
- sisyphus_task tool: BLOCKED (already blocked above, but explicit)
- call_omo_agent tool: BLOCKED
- sisyphus_task tool: BLOCKED
You work ALONE. No delegation. No background tasks. Execute directly.
ALLOWED: call_omo_agent - You CAN spawn explore/librarian agents for research.
You work ALONE for implementation. No delegation of implementation tasks.
</Critical_Constraints>
<Work_Context>
@@ -75,7 +75,75 @@ function buildSisyphusJuniorPrompt(promptAppend?: string): string {
}
// Core tools that Sisyphus-Junior must NEVER have access to
const BLOCKED_TOOLS = ["task", "sisyphus_task", "call_omo_agent"]
// Note: call_omo_agent is ALLOWED so subagents can spawn explore/librarian
const BLOCKED_TOOLS = ["task", "sisyphus_task"]
export const SISYPHUS_JUNIOR_DEFAULTS = {
model: "anthropic/claude-sonnet-4-5",
temperature: 0.1,
} as const
export function createSisyphusJuniorAgentWithOverrides(
override: AgentOverrideConfig | undefined
): AgentConfig {
if (override?.disable) {
override = undefined
}
const model = override?.model ?? SISYPHUS_JUNIOR_DEFAULTS.model
const temperature = override?.temperature ?? SISYPHUS_JUNIOR_DEFAULTS.temperature
const promptAppend = override?.prompt_append
const prompt = buildSisyphusJuniorPrompt(promptAppend)
const baseRestrictions = createAgentToolRestrictions(BLOCKED_TOOLS)
let toolsConfig: Record<string, unknown> = {}
if (supportsNewPermissionSystem()) {
const userPermission = (override?.permission ?? {}) as Record<string, string>
const basePermission = (baseRestrictions as { permission: Record<string, string> }).permission
const merged: Record<string, string> = { ...userPermission }
for (const tool of BLOCKED_TOOLS) {
merged[tool] = "deny"
}
merged.call_omo_agent = "allow"
toolsConfig = { permission: { ...merged, ...basePermission } }
} else {
const userTools = override?.tools ?? {}
const baseTools = (baseRestrictions as { tools: Record<string, boolean> }).tools
const merged: Record<string, boolean> = { ...userTools }
for (const tool of BLOCKED_TOOLS) {
merged[tool] = false
}
merged.call_omo_agent = true
toolsConfig = { tools: { ...merged, ...baseTools } }
}
const base: AgentConfig = {
description: override?.description ??
"Sisyphus-Junior - Focused task executor. Same discipline, no delegation.",
mode: "subagent" as const,
model,
temperature,
maxTokens: 64000,
prompt,
color: override?.color ?? "#20B2AA",
...toolsConfig,
}
if (override?.top_p !== undefined) {
base.top_p = override.top_p
}
if (isGptModel(model)) {
return { ...base, reasoningEffort: "medium" } as AgentConfig
}
return {
...base,
thinking: { type: "enabled", budgetTokens: 32000 },
} as AgentConfig
}
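Under the merge logic above, a user override can enable extra tools but can never re-enable the blocked ones, and `call_omo_agent` is always forced on. A minimal standalone sketch of that rule (illustrative only, not the exported implementation):

```typescript
// Hypothetical sketch of the override-merge rule: user-supplied tool flags
// are honored, except blocked tools are always forced off and
// call_omo_agent is always forced on.
const BLOCKED_TOOLS = ["task", "sisyphus_task"] as const

function mergeToolOverrides(
  userTools: Record<string, boolean>
): Record<string, boolean> {
  const merged: Record<string, boolean> = { ...userTools }
  for (const tool of BLOCKED_TOOLS) {
    merged[tool] = false // never overridable
  }
  merged.call_omo_agent = true // allowed so subagents can spawn explore/librarian
  return merged
}
```

For example, `mergeToolOverrides({ task: true, read: true })` keeps `read: true` but still yields `task: false`.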
export function createSisyphusJuniorAgent(
categoryConfig: CategoryConfig,
@@ -83,13 +151,13 @@ export function createSisyphusJuniorAgent(
): AgentConfig {
const prompt = buildSisyphusJuniorPrompt(promptAppend)
const model = categoryConfig.model
const baseRestrictions = createAgentToolRestrictions(BLOCKED_TOOLS)
const mergedConfig = migrateAgentConfig({
...baseRestrictions,
...(categoryConfig.tools ? { tools: categoryConfig.tools } : {}),
})
const base: AgentConfig = {
description:
"Sisyphus-Junior - Focused task executor. Same discipline, no delegation.",


@@ -18,7 +18,6 @@ const DEFAULT_MODEL = "anthropic/claude-opus-4-5"
const SISYPHUS_ROLE_SECTION = `<Role>
You are "Sisyphus" - Powerful AI Agent with orchestration capabilities from OhMyOpenCode.
Named by [YeonGyu Kim](https://github.com/code-yeongyu).
**Why Sisyphus?**: Humans roll their boulder every day. So do you. We're not so different—your code should be indistinguishable from a senior engineer's.
@@ -619,6 +618,9 @@ export function createSisyphusAgent(
? buildDynamicSisyphusPrompt(availableAgents, tools, skills)
: buildDynamicSisyphusPrompt([], tools, skills)
// Note: question permission allows agent to ask user questions via OpenCode's QuestionTool
// SDK type doesn't include 'question' yet, but OpenCode runtime supports it
const permission = { question: "allow" } as AgentConfig["permission"]
const base = {
description:
"Sisyphus - Powerful AI orchestrator from OhMyOpenCode. Plans obsessively with todos, assesses search complexity before exploration, delegates strategically to specialized agents. Uses explore for internal code (parallel-friendly), librarian only for external docs, and always delegates UI work to frontend engineer.",
@@ -627,6 +629,7 @@ export function createSisyphusAgent(
maxTokens: 64000,
prompt,
color: "#00CED1",
permission,
tools: { call_omo_agent: false },
}


@@ -76,6 +76,7 @@ export type AgentName = BuiltinAgentName
export type AgentOverrideConfig = Partial<AgentConfig> & {
prompt_append?: string
variant?: string
}
export type AgentOverrides = Partial<Record<OverridableAgentName, AgentOverrideConfig>>


@@ -127,6 +127,31 @@ describe("buildAgent with category and skills", () => {
expect(agent.temperature).toBe(0.7)
})
test("agent with category inherits variant", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
category: "custom-category",
}) as AgentConfig,
}
const categories = {
"custom-category": {
model: "openai/gpt-5.2",
variant: "xhigh",
},
}
// #when
const agent = buildAgent(source["test-agent"], undefined, categories)
// #then
expect(agent.model).toBe("openai/gpt-5.2")
expect(agent.variant).toBe("xhigh")
})
test("agent with skills has content prepended to prompt", () => {
// #given
const source = {


@@ -1,5 +1,6 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { BuiltinAgentName, AgentOverrideConfig, AgentOverrides, AgentFactory, AgentPromptMetadata } from "./types"
import type { CategoriesConfig, CategoryConfig } from "../config/schema"
import { createSisyphusAgent } from "./sisyphus"
import { createOracleAgent, ORACLE_PROMPT_METADATA } from "./oracle"
import { createLibrarianAgent, LIBRARIAN_PROMPT_METADATA } from "./librarian"
@@ -47,12 +48,19 @@ function isFactory(source: AgentSource): source is AgentFactory {
return typeof source === "function"
}
export function buildAgent(source: AgentSource, model?: string): AgentConfig {
export function buildAgent(
source: AgentSource,
model?: string,
categories?: CategoriesConfig
): AgentConfig {
const base = isFactory(source) ? source(model) : source
const categoryConfigs: Record<string, CategoryConfig> = categories
? { ...DEFAULT_CATEGORIES, ...categories }
: DEFAULT_CATEGORIES
const agentWithCategory = base as AgentConfig & { category?: string; skills?: string[] }
const agentWithCategory = base as AgentConfig & { category?: string; skills?: string[]; variant?: string }
if (agentWithCategory.category) {
const categoryConfig = DEFAULT_CATEGORIES[agentWithCategory.category]
const categoryConfig = categoryConfigs[agentWithCategory.category]
if (categoryConfig) {
if (!base.model) {
base.model = categoryConfig.model
@@ -60,6 +68,9 @@ export function buildAgent(source: AgentSource, model?: string): AgentConfig {
if (base.temperature === undefined && categoryConfig.temperature !== undefined) {
base.temperature = categoryConfig.temperature
}
if (base.variant === undefined && categoryConfig.variant !== undefined) {
base.variant = categoryConfig.variant
}
}
}
@@ -118,11 +129,16 @@ export function createBuiltinAgents(
disabledAgents: BuiltinAgentName[] = [],
agentOverrides: AgentOverrides = {},
directory?: string,
systemDefaultModel?: string
systemDefaultModel?: string,
categories?: CategoriesConfig
): Record<string, AgentConfig> {
const result: Record<string, AgentConfig> = {}
const availableAgents: AvailableAgent[] = []
const mergedCategories = categories
? { ...DEFAULT_CATEGORIES, ...categories }
: DEFAULT_CATEGORIES
for (const [name, source] of Object.entries(agentSources)) {
const agentName = name as BuiltinAgentName
@@ -133,7 +149,7 @@ export function createBuiltinAgents(
const override = agentOverrides[agentName]
const model = override?.model
let config = buildAgent(source, model)
let config = buildAgent(source, model, mergedCategories)
if (agentName === "librarian" && directory && config.prompt) {
const envContext = createEnvContext()


@@ -1,61 +0,0 @@
# AUTH KNOWLEDGE BASE
## OVERVIEW
Google Antigravity OAuth for Gemini models. Token management, fetch interception, thinking block extraction.
## STRUCTURE
```
auth/
└── antigravity/
├── plugin.ts # Main export, hooks registration (554 lines)
├── oauth.ts # OAuth flow, token acquisition
├── token.ts # Token storage, refresh logic
├── fetch.ts # Fetch interceptor (798 lines)
├── response.ts # Response transformation (599 lines)
├── thinking.ts # Thinking block extraction (755 lines)
├── thought-signature-store.ts # Signature caching
├── message-converter.ts # Format conversion
├── accounts.ts # Multi-account management
├── browser.ts # Browser automation for OAuth
├── cli.ts # CLI interaction
├── request.ts # Request building
├── project.ts # Project ID management
├── storage.ts # Token persistence
├── tools.ts # OAuth tool registration
├── constants.ts # API endpoints, model mappings
└── types.ts
```
## KEY COMPONENTS
| File | Purpose |
|------|---------|
| fetch.ts | URL rewriting, token injection, retries |
| thinking.ts | Extract `<antThinking>` blocks |
| response.ts | Streaming SSE parsing |
| oauth.ts | Browser-based OAuth flow |
| token.ts | Token persistence, expiry |
## HOW IT WORKS
1. **Intercept**: fetch.ts intercepts Anthropic/Google requests
2. **Rewrite**: URLs → Antigravity proxy endpoints
3. **Auth**: Bearer token from stored OAuth credentials
4. **Response**: Streaming parsed, thinking blocks extracted
5. **Transform**: Normalized for OpenCode
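The intercept → rewrite → auth steps above can be sketched as a thin fetch wrapper. This is illustrative only: the proxy base URL, the URL test, and the wrapper shape are assumptions for the example, not the real `fetch.ts`.

```typescript
// Illustrative sketch of steps 1-3 (intercept, rewrite, inject token).
// PROXY_BASE is a placeholder, not a real endpoint.
const PROXY_BASE = "https://example-antigravity-proxy.invalid"

function rewriteUrl(url: string): string {
  // Step 2: route Google/Anthropic API calls through the proxy
  return /googleapis\.com|anthropic\.com/.test(url)
    ? PROXY_BASE + new URL(url).pathname
    : url
}

function withAuth(
  fetchImpl: (url: string, init?: { headers?: Record<string, string> }) => Promise<unknown>,
  getToken: () => string
) {
  return (url: string, init?: { headers?: Record<string, string> }) => {
    // Steps 1-3: intercept the call, rewrite the URL, inject the bearer token
    return fetchImpl(rewriteUrl(url), {
      ...init,
      headers: { ...(init?.headers ?? {}), Authorization: `Bearer ${getToken()}` },
    })
  }
}
```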
## FEATURES
- Multi-account (up to 10 Google accounts)
- Auto-fallback on rate limit
- Thinking blocks preserved
- Antigravity proxy for AI Studio access
## ANTI-PATTERNS
- Direct API calls (use fetch interceptor)
- Tokens in code (use token.ts storage)
- Ignoring refresh (check expiry first)
- Blocking on OAuth (always async)

File diff suppressed because it is too large


@@ -1,244 +0,0 @@
import { saveAccounts } from "./storage"
import { parseStoredToken, formatTokenForStorage } from "./token"
import {
MODEL_FAMILIES,
type AccountStorage,
type AccountMetadata,
type AccountTier,
type AntigravityRefreshParts,
type ModelFamily,
type RateLimitState,
} from "./types"
export interface ManagedAccount {
index: number
parts: AntigravityRefreshParts
access?: string
expires?: number
rateLimits: RateLimitState
lastUsed: number
email?: string
tier?: AccountTier
}
interface AuthDetails {
refresh: string
access: string
expires: number
}
interface OAuthAuthDetails {
type: "oauth"
refresh: string
access: string
expires: number
}
function isRateLimitedForFamily(account: ManagedAccount, family: ModelFamily): boolean {
const resetTime = account.rateLimits[family]
return resetTime !== undefined && Date.now() < resetTime
}
export class AccountManager {
private accounts: ManagedAccount[] = []
private currentIndex = 0
private activeIndex = 0
constructor(auth: AuthDetails, storedAccounts?: AccountStorage | null) {
if (storedAccounts && storedAccounts.accounts.length > 0) {
const validActiveIndex =
typeof storedAccounts.activeIndex === "number" &&
storedAccounts.activeIndex >= 0 &&
storedAccounts.activeIndex < storedAccounts.accounts.length
? storedAccounts.activeIndex
: 0
this.activeIndex = validActiveIndex
this.currentIndex = validActiveIndex
this.accounts = storedAccounts.accounts.map((acc, index) => ({
index,
parts: {
refreshToken: acc.refreshToken,
projectId: acc.projectId,
managedProjectId: acc.managedProjectId,
},
access: index === validActiveIndex ? auth.access : acc.accessToken,
expires: index === validActiveIndex ? auth.expires : acc.expiresAt,
rateLimits: acc.rateLimits ?? {},
lastUsed: 0,
email: acc.email,
tier: acc.tier,
}))
} else {
this.activeIndex = 0
this.currentIndex = 0
const parts = parseStoredToken(auth.refresh)
this.accounts.push({
index: 0,
parts,
access: auth.access,
expires: auth.expires,
rateLimits: {},
lastUsed: 0,
})
}
}
getAccountCount(): number {
return this.accounts.length
}
getCurrentAccount(): ManagedAccount | null {
if (this.activeIndex >= 0 && this.activeIndex < this.accounts.length) {
return this.accounts[this.activeIndex] ?? null
}
return null
}
getAccounts(): ManagedAccount[] {
return [...this.accounts]
}
getCurrentOrNextForFamily(family: ModelFamily): ManagedAccount | null {
for (const account of this.accounts) {
this.clearExpiredRateLimits(account)
}
const current = this.getCurrentAccount()
if (current) {
if (!isRateLimitedForFamily(current, family)) {
const betterTierAvailable =
current.tier !== "paid" &&
this.accounts.some((a) => a.tier === "paid" && !isRateLimitedForFamily(a, family))
if (!betterTierAvailable) {
current.lastUsed = Date.now()
return current
}
}
}
const next = this.getNextForFamily(family)
if (next) {
this.activeIndex = next.index
}
return next
}
getNextForFamily(family: ModelFamily): ManagedAccount | null {
const available = this.accounts.filter((a) => !isRateLimitedForFamily(a, family))
if (available.length === 0) {
return null
}
const paidAvailable = available.filter((a) => a.tier === "paid")
const pool = paidAvailable.length > 0 ? paidAvailable : available
const account = pool[this.currentIndex % pool.length]
if (!account) {
return null
}
this.currentIndex++
account.lastUsed = Date.now()
return account
}
markRateLimited(account: ManagedAccount, retryAfterMs: number, family: ModelFamily): void {
account.rateLimits[family] = Date.now() + retryAfterMs
}
clearExpiredRateLimits(account: ManagedAccount): void {
const now = Date.now()
for (const family of MODEL_FAMILIES) {
if (account.rateLimits[family] !== undefined && now >= account.rateLimits[family]!) {
delete account.rateLimits[family]
}
}
}
addAccount(
parts: AntigravityRefreshParts,
access?: string,
expires?: number,
email?: string,
tier?: AccountTier
): void {
this.accounts.push({
index: this.accounts.length,
parts,
access,
expires,
rateLimits: {},
lastUsed: 0,
email,
tier,
})
}
removeAccount(index: number): boolean {
if (index < 0 || index >= this.accounts.length) {
return false
}
this.accounts.splice(index, 1)
if (index < this.activeIndex) {
this.activeIndex--
} else if (index === this.activeIndex) {
this.activeIndex = Math.min(this.activeIndex, Math.max(0, this.accounts.length - 1))
}
if (index < this.currentIndex) {
this.currentIndex--
} else if (index === this.currentIndex) {
this.currentIndex = Math.min(this.currentIndex, Math.max(0, this.accounts.length - 1))
}
for (let i = 0; i < this.accounts.length; i++) {
this.accounts[i]!.index = i
}
return true
}
async save(path?: string): Promise<void> {
const storage: AccountStorage = {
version: 1,
accounts: this.accounts.map((acc) => ({
email: acc.email ?? "",
tier: acc.tier ?? "free",
refreshToken: acc.parts.refreshToken,
projectId: acc.parts.projectId ?? "",
managedProjectId: acc.parts.managedProjectId,
accessToken: acc.access ?? "",
expiresAt: acc.expires ?? 0,
rateLimits: acc.rateLimits,
})),
activeIndex: Math.max(0, this.activeIndex),
}
await saveAccounts(storage, path)
}
toAuthDetails(): OAuthAuthDetails {
const current = this.getCurrentAccount() ?? this.accounts[0]
if (!current) {
throw new Error("No accounts available")
}
const allRefreshTokens = this.accounts
.map((acc) => formatTokenForStorage(acc.parts.refreshToken, acc.parts.projectId ?? "", acc.parts.managedProjectId))
.join("|||")
return {
type: "oauth",
refresh: allRefreshTokens,
access: current.access ?? "",
expires: current.expires ?? 0,
}
}
}
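The selection policy in `getNextForFamily` (skip rate-limited accounts, prefer the paid tier, round-robin within the chosen pool) can be reduced to a small pure function. This is a hypothetical simplification for illustration; the type and function names are not from the deleted module.

```typescript
// Minimal sketch of rate-limit-aware, paid-tier-preferring rotation.
type Family = "claude" | "gemini"
interface Acct {
  tier: "free" | "paid"
  rateLimits: Partial<Record<Family, number>> // family -> reset timestamp (ms)
}

function pickAccount(
  accounts: Acct[],
  family: Family,
  cursor: number,
  now: number
): Acct | null {
  // Drop accounts still rate-limited for this model family
  const available = accounts.filter((a) => {
    const reset = a.rateLimits[family]
    return reset === undefined || now >= reset
  })
  if (available.length === 0) return null
  // Prefer paid accounts when any are available
  const paid = available.filter((a) => a.tier === "paid")
  const pool = paid.length > 0 ? paid : available
  return pool[cursor % pool.length] ?? null
}
```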


@@ -1,37 +0,0 @@
import { describe, it, expect, mock, spyOn } from "bun:test"
import { openBrowserURL } from "./browser"
describe("openBrowserURL", () => {
it("returns true when browser opens successfully", async () => {
// #given
const url = "https://accounts.google.com/oauth"
// #when
const result = await openBrowserURL(url)
// #then
expect(typeof result).toBe("boolean")
})
it("returns false when open throws an error", async () => {
// #given
const invalidUrl = ""
// #when
const result = await openBrowserURL(invalidUrl)
// #then
expect(typeof result).toBe("boolean")
})
it("handles URL with special characters", async () => {
// #given
const urlWithParams = "https://accounts.google.com/oauth?state=abc123&redirect_uri=http://localhost:51121"
// #when
const result = await openBrowserURL(urlWithParams)
// #then
expect(typeof result).toBe("boolean")
})
})


@@ -1,51 +0,0 @@
/**
* Cross-platform browser opening utility.
* Uses the "open" npm package for reliable cross-platform support.
*
* Supports: macOS, Windows, Linux (including WSL)
*/
import open from "open"
/**
* Debug logging helper.
* Only logs when ANTIGRAVITY_DEBUG=1
*/
function debugLog(message: string): void {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-browser] ${message}`)
}
}
/**
* Opens a URL in the user's default browser.
*
* Cross-platform support:
* - macOS: uses `open` command
* - Windows: uses `start` command
* - Linux: uses `xdg-open` command
* - WSL: uses Windows PowerShell
*
* @param url - The URL to open in the browser
* @returns Promise<boolean> - true if browser opened successfully, false otherwise
*
* @example
* ```typescript
* const success = await openBrowserURL("https://accounts.google.com/oauth...")
* if (!success) {
* console.log("Please open this URL manually:", url)
* }
* ```
*/
export async function openBrowserURL(url: string): Promise<boolean> {
debugLog(`Opening browser: ${url}`)
try {
await open(url)
debugLog("Browser opened successfully")
return true
} catch (error) {
debugLog(`Failed to open browser: ${error instanceof Error ? error.message : String(error)}`)
return false
}
}


@@ -1,156 +0,0 @@
import { describe, it, expect, beforeEach, afterEach, mock } from "bun:test"
const CANCEL = Symbol("cancel")
type ConfirmFn = (options: unknown) => Promise<boolean | typeof CANCEL>
type SelectFn = (options: unknown) => Promise<"free" | "paid" | typeof CANCEL>
const confirmMock = mock<ConfirmFn>(async () => false)
const selectMock = mock<SelectFn>(async () => "free")
const cancelMock = mock<(message?: string) => void>(() => {})
mock.module("@clack/prompts", () => {
return {
confirm: confirmMock,
select: selectMock,
isCancel: (value: unknown) => value === CANCEL,
cancel: cancelMock,
}
})
function setIsTty(isTty: boolean): () => void {
const original = Object.getOwnPropertyDescriptor(process.stdout, "isTTY")
Object.defineProperty(process.stdout, "isTTY", {
configurable: true,
value: isTty,
})
return () => {
if (original) {
Object.defineProperty(process.stdout, "isTTY", original)
} else {
// Best-effort restore: remove overridden property
// eslint-disable-next-line @typescript-eslint/no-dynamic-delete
delete (process.stdout as unknown as { isTTY?: unknown }).isTTY
}
}
}
describe("src/auth/antigravity/cli", () => {
let restoreIsTty: (() => void) | null = null
beforeEach(() => {
confirmMock.mockReset()
selectMock.mockReset()
cancelMock.mockReset()
restoreIsTty?.()
restoreIsTty = null
})
afterEach(() => {
restoreIsTty?.()
restoreIsTty = null
})
it("promptAddAnotherAccount returns confirm result in TTY", async () => {
// #given
restoreIsTty = setIsTty(true)
confirmMock.mockResolvedValueOnce(true)
const { promptAddAnotherAccount } = await import("./cli")
// #when
const result = await promptAddAnotherAccount(2)
// #then
expect(result).toBe(true)
expect(confirmMock).toHaveBeenCalledTimes(1)
})
it("promptAddAnotherAccount returns false in TTY when confirm is false", async () => {
// #given
restoreIsTty = setIsTty(true)
confirmMock.mockResolvedValueOnce(false)
const { promptAddAnotherAccount } = await import("./cli")
// #when
const result = await promptAddAnotherAccount(2)
// #then
expect(result).toBe(false)
expect(confirmMock).toHaveBeenCalledTimes(1)
})
it("promptAddAnotherAccount returns false in non-TTY", async () => {
// #given
restoreIsTty = setIsTty(false)
const { promptAddAnotherAccount } = await import("./cli")
// #when
const result = await promptAddAnotherAccount(3)
// #then
expect(result).toBe(false)
expect(confirmMock).toHaveBeenCalledTimes(0)
})
it("promptAddAnotherAccount handles cancel", async () => {
// #given
restoreIsTty = setIsTty(true)
confirmMock.mockResolvedValueOnce(CANCEL)
const { promptAddAnotherAccount } = await import("./cli")
// #when
const result = await promptAddAnotherAccount(1)
// #then
expect(result).toBe(false)
})
it("promptAccountTier returns selected tier in TTY", async () => {
// #given
restoreIsTty = setIsTty(true)
selectMock.mockResolvedValueOnce("paid")
const { promptAccountTier } = await import("./cli")
// #when
const result = await promptAccountTier()
// #then
expect(result).toBe("paid")
expect(selectMock).toHaveBeenCalledTimes(1)
})
it("promptAccountTier returns free in non-TTY", async () => {
// #given
restoreIsTty = setIsTty(false)
const { promptAccountTier } = await import("./cli")
// #when
const result = await promptAccountTier()
// #then
expect(result).toBe("free")
expect(selectMock).toHaveBeenCalledTimes(0)
})
it("promptAccountTier handles cancel", async () => {
// #given
restoreIsTty = setIsTty(true)
selectMock.mockResolvedValueOnce(CANCEL)
const { promptAccountTier } = await import("./cli")
// #when
const result = await promptAccountTier()
// #then
expect(result).toBe("free")
})
})


@@ -1,37 +0,0 @@
import { confirm, select, isCancel } from "@clack/prompts"
export async function promptAddAnotherAccount(currentCount: number): Promise<boolean> {
if (!process.stdout.isTTY) {
return false
}
const result = await confirm({
message: `Add another Google account?\nCurrently have ${currentCount} accounts (max 10)`,
})
if (isCancel(result)) {
return false
}
return result
}
export async function promptAccountTier(): Promise<"free" | "paid"> {
if (!process.stdout.isTTY) {
return "free"
}
const tier = await select({
message: "Select account tier",
options: [
{ value: "free" as const, label: "Free" },
{ value: "paid" as const, label: "Paid" },
],
})
if (isCancel(tier)) {
return "free"
}
return tier
}


@@ -1,69 +0,0 @@
import { describe, it, expect } from "bun:test"
import {
ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS,
ANTIGRAVITY_ENDPOINT_FALLBACKS,
ANTIGRAVITY_CALLBACK_PORT,
} from "./constants"
describe("Antigravity Constants", () => {
describe("ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS", () => {
it("should be 60 seconds (60,000ms) to refresh before expiry", () => {
// #given
const SIXTY_SECONDS_MS = 60 * 1000 // 60,000
// #when
const actual = ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
// #then
expect(actual).toBe(SIXTY_SECONDS_MS)
})
})
describe("ANTIGRAVITY_ENDPOINT_FALLBACKS", () => {
it("should have exactly 3 endpoints (sandbox → daily → prod)", () => {
// #given
const expectedCount = 3
// #when
const actual = ANTIGRAVITY_ENDPOINT_FALLBACKS
// #then
expect(actual).toHaveLength(expectedCount)
})
it("should have sandbox endpoint first", () => {
// #then
expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[0]).toBe(
"https://daily-cloudcode-pa.sandbox.googleapis.com"
)
})
it("should have daily endpoint second", () => {
// #then
expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[1]).toBe(
"https://daily-cloudcode-pa.googleapis.com"
)
})
it("should have prod endpoint third", () => {
// #then
expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[2]).toBe(
"https://cloudcode-pa.googleapis.com"
)
})
it("should NOT include autopush endpoint", () => {
// #then
const endpointsJoined = ANTIGRAVITY_ENDPOINT_FALLBACKS.join(",")
const hasAutopush = endpointsJoined.includes("autopush-cloudcode-pa")
expect(hasAutopush).toBe(false)
})
})
describe("ANTIGRAVITY_CALLBACK_PORT", () => {
it("should be 51121 to match CLIProxyAPI", () => {
// #then
expect(ANTIGRAVITY_CALLBACK_PORT).toBe(51121)
})
})
})


@@ -1,267 +0,0 @@
/**
* Antigravity OAuth configuration constants.
* Values sourced from cliproxyapi/sdk/auth/antigravity.go
*
* ## Logging Policy
*
* All console logging in antigravity modules follows a consistent policy:
*
* - **Debug logs**: Guard with `if (process.env.ANTIGRAVITY_DEBUG === "1")`
* - Includes: info messages, warnings, non-fatal errors
* - Enable debugging: `ANTIGRAVITY_DEBUG=1 opencode`
*
* - **Fatal errors**: None currently. All errors are handled by returning
* appropriate error responses to OpenCode's auth system.
*
* This policy ensures production silence while enabling verbose debugging
* when needed for troubleshooting OAuth flows.
*/
// OAuth 2.0 Client Credentials
export const ANTIGRAVITY_CLIENT_ID =
"1071006060591-tmhssin2h21lcre235vtolojh4g403ep.apps.googleusercontent.com"
export const ANTIGRAVITY_CLIENT_SECRET = "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"
// OAuth Callback
export const ANTIGRAVITY_CALLBACK_PORT = 51121
export const ANTIGRAVITY_REDIRECT_URI = `http://localhost:${ANTIGRAVITY_CALLBACK_PORT}/oauth-callback`
// OAuth Scopes
export const ANTIGRAVITY_SCOPES = [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/userinfo.profile",
"https://www.googleapis.com/auth/cclog",
"https://www.googleapis.com/auth/experimentsandconfigs",
] as const
// API Endpoint Fallbacks - matches CLIProxyAPI antigravity_executor.go:1192-1201
// Claude models only available on SANDBOX endpoints (429 quota vs 404 not found)
export const ANTIGRAVITY_ENDPOINT_FALLBACKS = [
"https://daily-cloudcode-pa.sandbox.googleapis.com",
"https://daily-cloudcode-pa.googleapis.com",
"https://cloudcode-pa.googleapis.com",
] as const
// API Version
export const ANTIGRAVITY_API_VERSION = "v1internal"
// Request Headers
export const ANTIGRAVITY_HEADERS = {
"User-Agent": "google-api-nodejs-client/9.15.1",
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
"Client-Metadata": JSON.stringify({
ideType: "IDE_UNSPECIFIED",
platform: "PLATFORM_UNSPECIFIED",
pluginType: "GEMINI",
}),
} as const
// Default Project ID (fallback when loadCodeAssist API fails)
// From opencode-antigravity-auth reference implementation
export const ANTIGRAVITY_DEFAULT_PROJECT_ID = "rising-fact-p41fc"
// Google OAuth endpoints
export const GOOGLE_AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
export const GOOGLE_TOKEN_URL = "https://oauth2.googleapis.com/token"
export const GOOGLE_USERINFO_URL = "https://www.googleapis.com/oauth2/v1/userinfo"
// Token refresh buffer (refresh 60 seconds before expiry)
export const ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS = 60_000
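The buffer is applied when deciding whether a cached access token is still usable: a token is treated as expired 60 seconds early so the refresh happens while the old token is still valid. A minimal sketch of that check (the function name is illustrative, not part of the deleted module):

```typescript
// Minimal sketch: consider a token expired BUFFER ms before actual expiry,
// so a refresh is triggered while the old token still works.
const ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS = 60_000

function needsRefresh(expiresAtMs: number, nowMs: number = Date.now()): boolean {
  return nowMs >= expiresAtMs - ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
}
```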
// Default thought signature to skip validation (CLIProxyAPI approach)
export const SKIP_THOUGHT_SIGNATURE_VALIDATOR = "skip_thought_signature_validator"
// ============================================================================
// System Prompt - Sourced from CLIProxyAPI antigravity_executor.go:1049-1050
// ============================================================================
export const ANTIGRAVITY_SYSTEM_PROMPT = `<identity>
You are Antigravity, a powerful agentic AI coding assistant designed by the Google Deepmind team working on Advanced Agentic Coding.
You are pair programming with a USER to solve their coding task. The task may require creating a new codebase, modifying or debugging an existing codebase, or simply answering a question.
The USER will send you requests, which you must always prioritize addressing. Along with each USER request, we will attach additional metadata about their current state, such as what files they have open and where their cursor is.
This information may or may not be relevant to the coding task, it is up for you to decide.
</identity>
<tool_calling>
Call tools as you normally would. The following list provides additional guidance to help you avoid errors:
- **Absolute paths only**. When using tools that accept file path arguments, ALWAYS use the absolute file path.
</tool_calling>
<web_application_development>
## Technology Stack
Your web applications should be built using the following technologies:
1. **Core**: Use HTML for structure and Javascript for logic.
2. **Styling (CSS)**: Use Vanilla CSS for maximum flexibility and control. Avoid using TailwindCSS unless the USER explicitly requests it; in this case, first confirm which TailwindCSS version to use.
3. **Web App**: If the USER specifies that they want a more complex web app, use a framework like Next.js or Vite. Only do this if the USER explicitly requests a web app.
4. **New Project Creation**: If you need to use a framework for a new app, use \`npx\` with the appropriate script, but there are some rules to follow:
- Use \`npx -y\` to automatically install the script and its dependencies
- You MUST run the command with \`--help\` flag to see all available options first
- Initialize the app in the current directory with \`./\` (example: \`npx -y create-vite-app@latest ./\`)
</web_application_development>
`
// ============================================================================
// Thinking Configuration - Sourced from CLIProxyAPI internal/util/gemini_thinking.go:481-487
// ============================================================================
/**
* Maps reasoning_effort UI values to thinking budget tokens.
*
* Key notes:
* - `none: 0` is a sentinel value meaning "delete thinkingConfig entirely"
* - `auto: -1` triggers dynamic budget calculation based on context
* - All other values represent actual thinking budget in tokens
*/
export const REASONING_EFFORT_BUDGET_MAP: Record<string, number> = {
none: 0, // Special: DELETE thinkingConfig entirely
auto: -1, // Dynamic calculation
minimal: 512,
low: 1024,
medium: 8192,
high: 24576,
xhigh: 32768,
}
/**
* Model-specific thinking configuration.
*
* thinkingType:
* - "numeric": Uses thinkingBudget (number) - Gemini 2.5, Claude via Antigravity
* - "levels": Uses thinkingLevel (string) - Gemini 3
*
* zeroAllowed:
* - true: Budget can be 0 (thinking disabled)
* - false: Minimum budget enforced (cannot disable thinking)
*/
export interface AntigravityModelConfig {
thinkingType: "numeric" | "levels"
min: number
max: number
zeroAllowed: boolean
levels?: string[] // lowercase only: "low", "high" (NOT "LOW", "HIGH")
}
/**
* Thinking configuration per model.
* Keys are normalized model IDs (no provider prefix, no variant suffix).
*
* Config lookup uses pattern matching fallback:
* - includes("gemini-3") → Gemini 3 (levels)
* - includes("gemini-2.5") → Gemini 2.5 (numeric)
* - includes("claude") → Claude via Antigravity (numeric)
*/
export const ANTIGRAVITY_MODEL_CONFIGS: Record<string, AntigravityModelConfig> = {
"gemini-2.5-flash": {
thinkingType: "numeric",
min: 0,
max: 24576,
zeroAllowed: true,
},
"gemini-2.5-flash-lite": {
thinkingType: "numeric",
min: 0,
max: 24576,
zeroAllowed: true,
},
"gemini-2.5-computer-use-preview-10-2025": {
thinkingType: "numeric",
min: 128,
max: 32768,
zeroAllowed: false,
},
"gemini-3-pro-preview": {
thinkingType: "levels",
min: 128,
max: 32768,
zeroAllowed: false,
levels: ["low", "high"],
},
"gemini-3-flash-preview": {
thinkingType: "levels",
min: 128,
max: 32768,
zeroAllowed: false,
levels: ["minimal", "low", "medium", "high"],
},
"gemini-claude-sonnet-4-5-thinking": {
thinkingType: "numeric",
min: 1024,
max: 200000,
zeroAllowed: false,
},
"gemini-claude-opus-4-5-thinking": {
thinkingType: "numeric",
min: 1024,
max: 200000,
zeroAllowed: false,
},
}
// ============================================================================
// Model ID Normalization
// ============================================================================
/**
* Normalizes model ID for config lookup.
*
* Algorithm:
* 1. Strip provider prefix (e.g., "google/")
* 2. Strip "antigravity-" prefix
* 3. Strip UI variant suffixes (-high, -low, -thinking-*)
*
* Examples:
* - "google/antigravity-gemini-3-pro-high" → "gemini-3-pro"
* - "antigravity-gemini-3-flash-preview" → "gemini-3-flash-preview"
* - "gemini-2.5-flash" → "gemini-2.5-flash"
* - "gemini-claude-sonnet-4-5-thinking-high" → "gemini-claude-sonnet-4-5"
*/
export function normalizeModelId(model: string): string {
let normalized = model
// 1. Strip provider prefix (e.g., "google/")
if (normalized.includes("/")) {
normalized = normalized.split("/").pop() || normalized
}
// 2. Strip "antigravity-" prefix
if (normalized.startsWith("antigravity-")) {
normalized = normalized.substring("antigravity-".length)
}
// 3. Strip UI variant suffixes (-high, -low, -thinking-*)
normalized = normalized.replace(/-thinking-(low|medium|high)$/, "")
normalized = normalized.replace(/-(high|low)$/, "")
return normalized
}
export const ANTIGRAVITY_SUPPORTED_MODELS = [
"gemini-2.5-flash",
"gemini-2.5-flash-lite",
"gemini-2.5-computer-use-preview-10-2025",
"gemini-3-pro-preview",
"gemini-3-flash-preview",
"gemini-claude-sonnet-4-5-thinking",
"gemini-claude-opus-4-5-thinking",
] as const
// ============================================================================
// Model Alias Mapping (for Antigravity API)
// ============================================================================
/**
* Converts UI model names to Antigravity API model names.
*
* NOTE: Tested 2026-01-08 - Gemini 3 models work with -preview suffix directly.
* The CLIProxyAPI transformations (gemini-3-pro-high, gemini-3-flash) return 404.
* Claude models return 404 on all endpoints (may require special access/quota).
*/
export function alias2ModelName(modelName: string): string {
if (modelName.startsWith("gemini-claude-")) {
return modelName.substring("gemini-".length)
}
return modelName
}

View File

@@ -1,798 +0,0 @@
/**
* Antigravity Fetch Interceptor
*
* Creates a custom fetch function that:
* - Checks token expiration and auto-refreshes
* - Rewrites URLs to Antigravity endpoints
* - Applies request transformation (including tool normalization)
* - Applies response transformation (including thinking extraction)
* - Implements endpoint fallback (daily → autopush → prod)
*
* **Body Type Assumption:**
* This interceptor assumes `init.body` is a JSON string (OpenAI format).
* Non-string bodies (ReadableStream, Blob, FormData, URLSearchParams, etc.)
* are passed through unchanged to the original fetch to avoid breaking
* other requests that may not be OpenAI-format API calls.
*
* Debug logging available via ANTIGRAVITY_DEBUG=1 environment variable.
*/
import { ANTIGRAVITY_ENDPOINT_FALLBACKS } from "./constants"
import { fetchProjectContext, clearProjectContextCache, invalidateProjectContextByRefreshToken } from "./project"
import { isTokenExpired, refreshAccessToken, parseStoredToken, formatTokenForStorage, AntigravityTokenRefreshError } from "./token"
import { AccountManager, type ManagedAccount } from "./accounts"
import { loadAccounts } from "./storage"
import type { ModelFamily } from "./types"
import { transformRequest } from "./request"
import { convertRequestBody, hasOpenAIMessages } from "./message-converter"
import {
transformResponse,
transformStreamingResponse,
isStreamingResponse,
} from "./response"
import { normalizeToolsForGemini, type OpenAITool } from "./tools"
import { extractThinkingBlocks, shouldIncludeThinking, transformResponseThinking, extractThinkingConfig, applyThinkingConfigToRequest } from "./thinking"
import {
getThoughtSignature,
setThoughtSignature,
getOrCreateSessionId,
} from "./thought-signature-store"
import type { AntigravityTokens } from "./types"
/**
* Auth interface matching OpenCode's auth system
*/
interface Auth {
access?: string
refresh?: string
expires?: number
}
/**
* Client interface for auth operations
*/
interface AuthClient {
set(providerId: string, auth: Auth): Promise<void>
}
/**
* Debug logging helper
* Only logs when ANTIGRAVITY_DEBUG=1
*/
function debugLog(message: string): void {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-fetch] ${message}`)
}
}
function isRetryableError(status: number): boolean {
if (status === 0) return true
if (status === 429) return true
if (status >= 500 && status < 600) return true
return false
}
function getModelFamilyFromModelName(modelName: string): ModelFamily | null {
const lower = modelName.toLowerCase()
if (lower.includes("claude") || lower.includes("anthropic")) return "claude"
if (lower.includes("flash")) return "gemini-flash"
if (lower.includes("gemini")) return "gemini-pro"
return null
}
function getModelFamilyFromUrl(url: string): ModelFamily {
if (url.includes("claude")) return "claude"
if (url.includes("flash")) return "gemini-flash"
return "gemini-pro"
}
function getModelFamily(url: string, init?: RequestInit): ModelFamily {
if (init?.body && typeof init.body === "string") {
try {
const body = JSON.parse(init.body) as Record<string, unknown>
if (typeof body.model === "string") {
const fromModel = getModelFamilyFromModelName(body.model)
if (fromModel) return fromModel
}
} catch {}
}
return getModelFamilyFromUrl(url)
}
const GCP_PERMISSION_ERROR_PATTERNS = [
"PERMISSION_DENIED",
"does not have permission",
"Cloud AI Companion API has not been used",
"has not been enabled",
] as const
function isGcpPermissionError(text: string): boolean {
return GCP_PERMISSION_ERROR_PATTERNS.some((pattern) => text.includes(pattern))
}
function calculateRetryDelay(attempt: number): number {
return Math.min(200 * Math.pow(2, attempt), 2000)
}
async function isRetryableResponse(response: Response): Promise<boolean> {
if (isRetryableError(response.status)) return true
if (response.status === 403) {
try {
const text = await response.clone().text()
if (text.includes("SUBSCRIPTION_REQUIRED") || text.includes("Gemini Code Assist license")) {
debugLog(`[RETRY] 403 SUBSCRIPTION_REQUIRED detected, will retry with next endpoint`)
return true
}
} catch {}
}
return false
}
interface AttemptFetchOptions {
endpoint: string
url: string
init: RequestInit
accessToken: string
projectId: string
sessionId: string
modelName?: string
thoughtSignature?: string
}
interface RateLimitInfo {
type: "rate-limited"
retryAfterMs: number
status: number
}
type AttemptFetchResult = Response | null | "pass-through" | "needs-refresh" | RateLimitInfo
async function attemptFetch(
options: AttemptFetchOptions
): Promise<AttemptFetchResult> {
const { endpoint, url, init, accessToken, projectId, sessionId, modelName, thoughtSignature } =
options
debugLog(`Trying endpoint: ${endpoint}`)
try {
const rawBody = init.body
if (rawBody !== undefined && typeof rawBody !== "string") {
debugLog(`Non-string body detected (${typeof rawBody}), signaling pass-through`)
return "pass-through"
}
let parsedBody: Record<string, unknown> = {}
if (rawBody) {
try {
parsedBody = JSON.parse(rawBody) as Record<string, unknown>
} catch {
parsedBody = {}
}
}
debugLog(`[BODY] Keys: ${Object.keys(parsedBody).join(", ")}`)
debugLog(`[BODY] Has contents: ${!!parsedBody.contents}, Has messages: ${!!parsedBody.messages}`)
if (parsedBody.contents) {
const contents = parsedBody.contents as Array<Record<string, unknown>>
debugLog(`[BODY] contents length: ${contents.length}`)
contents.forEach((c, i) => {
debugLog(`[BODY] contents[${i}].role: ${c.role}, parts: ${JSON.stringify(c.parts).substring(0, 200)}`)
})
}
if (parsedBody.tools && Array.isArray(parsedBody.tools)) {
const normalizedTools = normalizeToolsForGemini(parsedBody.tools as OpenAITool[])
if (normalizedTools) {
parsedBody.tools = normalizedTools
}
}
if (hasOpenAIMessages(parsedBody)) {
debugLog(`[CONVERT] Converting OpenAI messages to Gemini contents`)
parsedBody = convertRequestBody(parsedBody, thoughtSignature)
debugLog(`[CONVERT] After conversion - Has contents: ${!!parsedBody.contents}`)
}
const transformed = transformRequest({
url,
body: parsedBody,
accessToken,
projectId,
sessionId,
modelName,
endpointOverride: endpoint,
thoughtSignature,
})
// Apply thinking config from reasoning_effort (from think-mode hook)
const effectiveModel = modelName || transformed.body.model
const thinkingConfig = extractThinkingConfig(
parsedBody,
parsedBody.generationConfig as Record<string, unknown> | undefined,
parsedBody,
)
if (thinkingConfig) {
debugLog(`[THINKING] Applying thinking config for model: ${effectiveModel}`)
applyThinkingConfigToRequest(
transformed.body as unknown as Record<string, unknown>,
effectiveModel,
thinkingConfig,
)
debugLog(`[THINKING] Thinking config applied successfully`)
}
debugLog(`[REQ] streaming=${transformed.streaming}, url=${transformed.url}`)
const maxPermissionRetries = 10
for (let attempt = 0; attempt <= maxPermissionRetries; attempt++) {
const response = await fetch(transformed.url, {
method: init.method || "POST",
headers: transformed.headers,
body: JSON.stringify(transformed.body),
signal: init.signal,
})
debugLog(
`[RESP] status=${response.status} content-type=${response.headers.get("content-type") ?? ""} url=${response.url}`
)
if (response.status === 401) {
debugLog(`[401] Unauthorized response detected, signaling token refresh needed`)
return "needs-refresh"
}
if (response.status === 403) {
try {
const text = await response.clone().text()
if (isGcpPermissionError(text)) {
if (attempt < maxPermissionRetries) {
const delay = calculateRetryDelay(attempt)
debugLog(`[RETRY] GCP permission error, retry ${attempt + 1}/${maxPermissionRetries} after ${delay}ms`)
await new Promise((resolve) => setTimeout(resolve, delay))
continue
}
debugLog(`[RETRY] GCP permission error, max retries exceeded`)
}
} catch {}
}
if (response.status === 429) {
const retryAfter = response.headers.get("retry-after")
let retryAfterMs = 60000
if (retryAfter) {
const parsed = parseInt(retryAfter, 10)
if (!isNaN(parsed) && parsed > 0) {
retryAfterMs = parsed * 1000
} else {
const httpDate = Date.parse(retryAfter)
if (!isNaN(httpDate)) {
retryAfterMs = Math.max(0, httpDate - Date.now())
}
}
}
debugLog(`[429] Rate limited, retry-after: ${retryAfterMs}ms`)
await response.body?.cancel()
return { type: "rate-limited" as const, retryAfterMs, status: 429 }
}
if (response.status >= 500 && response.status < 600) {
debugLog(`[5xx] Server error ${response.status}, marking for rotation`)
await response.body?.cancel()
return { type: "rate-limited" as const, retryAfterMs: 300000, status: response.status }
}
if (!response.ok && (await isRetryableResponse(response))) {
debugLog(`Endpoint failed: ${endpoint} (status: ${response.status}), trying next`)
return null
}
return response
}
return null
} catch (error) {
debugLog(
`Endpoint failed: ${endpoint} (${error instanceof Error ? error.message : "Unknown error"}), trying next`
)
return null
}
}
interface GeminiResponsePart {
thoughtSignature?: string
thought_signature?: string
functionCall?: Record<string, unknown>
text?: string
[key: string]: unknown
}
interface GeminiResponseCandidate {
content?: {
parts?: GeminiResponsePart[]
[key: string]: unknown
}
[key: string]: unknown
}
interface GeminiResponseBody {
candidates?: GeminiResponseCandidate[]
[key: string]: unknown
}
function extractSignatureFromResponse(parsed: GeminiResponseBody): string | undefined {
if (!parsed.candidates || !Array.isArray(parsed.candidates)) {
return undefined
}
for (const candidate of parsed.candidates) {
const parts = candidate.content?.parts
if (!parts || !Array.isArray(parts)) {
continue
}
for (const part of parts) {
const sig = part.thoughtSignature || part.thought_signature
if (sig && typeof sig === "string") {
return sig
}
}
}
return undefined
}
async function transformResponseWithThinking(
response: Response,
modelName: string,
fetchInstanceId: string
): Promise<Response> {
const streaming = isStreamingResponse(response)
let result
if (streaming) {
result = await transformStreamingResponse(response)
} else {
result = await transformResponse(response)
}
if (streaming) {
return result.response
}
try {
const text = await result.response.clone().text()
debugLog(`[TSIG][RESP] Response text length: ${text.length}`)
const parsed = JSON.parse(text) as GeminiResponseBody
debugLog(`[TSIG][RESP] Parsed keys: ${Object.keys(parsed).join(", ")}`)
debugLog(`[TSIG][RESP] Has candidates: ${!!parsed.candidates}, count: ${parsed.candidates?.length ?? 0}`)
const signature = extractSignatureFromResponse(parsed)
debugLog(`[TSIG][RESP] Signature extracted: ${signature ? signature.substring(0, 30) + "..." : "NONE"}`)
if (signature) {
setThoughtSignature(fetchInstanceId, signature)
debugLog(`[TSIG][STORE] Stored signature for ${fetchInstanceId}`)
} else {
debugLog(`[TSIG][WARN] No signature found in response!`)
}
if (shouldIncludeThinking(modelName)) {
const thinkingResult = extractThinkingBlocks(parsed)
if (thinkingResult.hasThinking) {
const transformed = transformResponseThinking(parsed)
return new Response(JSON.stringify(transformed), {
status: result.response.status,
statusText: result.response.statusText,
headers: result.response.headers,
})
}
}
} catch {}
return result.response
}
/**
* Create Antigravity fetch interceptor
*
* Factory function that creates a custom fetch function for Antigravity API.
* Handles token management, request/response transformation, and endpoint fallback.
*
* @param getAuth - Async function to retrieve current auth state
* @param client - Auth client for saving updated tokens
* @param providerId - Provider identifier (e.g., "google")
* @param clientId - Optional custom client ID for token refresh (defaults to ANTIGRAVITY_CLIENT_ID)
* @param clientSecret - Optional custom client secret for token refresh (defaults to ANTIGRAVITY_CLIENT_SECRET)
* @returns Custom fetch function compatible with standard fetch signature
*
* @example
* ```typescript
* const customFetch = createAntigravityFetch(
* () => auth(),
* client,
* "google",
* "custom-client-id",
* "custom-client-secret"
* )
*
* // Use like standard fetch
* const response = await customFetch("https://api.example.com/chat", {
* method: "POST",
* body: JSON.stringify({ messages: [...] })
* })
* ```
*/
export function createAntigravityFetch(
getAuth: () => Promise<Auth>,
client: AuthClient,
providerId: string,
clientId?: string,
clientSecret?: string,
accountManager?: AccountManager | null
): (url: string, init?: RequestInit) => Promise<Response> {
let cachedTokens: AntigravityTokens | null = null
let cachedProjectId: string | null = null
let lastAccountIndex: number | null = null
const fetchInstanceId = crypto.randomUUID()
let manager: AccountManager | null = accountManager || null
let accountsLoaded = false
const fetchFn = async (url: string, init: RequestInit = {}): Promise<Response> => {
debugLog(`Intercepting request to: ${url}`)
// Get current auth state
const auth = await getAuth()
if (!auth.access || !auth.refresh) {
throw new Error("Antigravity: No authentication tokens available")
}
// Parse stored token format
let refreshParts = parseStoredToken(auth.refresh)
if (!accountsLoaded && !manager && auth.refresh) {
try {
const storedAccounts = await loadAccounts()
if (storedAccounts) {
manager = new AccountManager(
{ refresh: auth.refresh, access: auth.access || "", expires: auth.expires || 0 },
storedAccounts
)
debugLog(`[ACCOUNTS] Loaded ${manager.getAccountCount()} accounts from storage`)
}
} catch (error) {
debugLog(`[ACCOUNTS] Failed to load accounts, falling back to single-account: ${error instanceof Error ? error.message : "Unknown"}`)
}
accountsLoaded = true
}
let currentAccount: ManagedAccount | null = null
if (manager) {
const family = getModelFamily(url, init)
currentAccount = manager.getCurrentOrNextForFamily(family)
if (currentAccount) {
debugLog(`[ACCOUNTS] Using account ${currentAccount.index + 1}/${manager.getAccountCount()} for ${family}`)
if (lastAccountIndex === null || lastAccountIndex !== currentAccount.index) {
if (lastAccountIndex !== null) {
debugLog(`[ACCOUNTS] Account changed from ${lastAccountIndex + 1} to ${currentAccount.index + 1}, clearing cached state`)
} else if (cachedProjectId) {
debugLog(`[ACCOUNTS] First account introduced, clearing cached state`)
}
cachedProjectId = null
cachedTokens = null
}
lastAccountIndex = currentAccount.index
if (currentAccount.access && currentAccount.expires) {
auth.access = currentAccount.access
auth.expires = currentAccount.expires
}
refreshParts = {
refreshToken: currentAccount.parts.refreshToken,
projectId: currentAccount.parts.projectId,
managedProjectId: currentAccount.parts.managedProjectId,
}
}
}
// Build initial token state
if (!cachedTokens) {
cachedTokens = {
type: "antigravity",
access_token: auth.access,
refresh_token: refreshParts.refreshToken,
expires_in: auth.expires ? Math.floor((auth.expires - Date.now()) / 1000) : 3600,
timestamp: auth.expires ? auth.expires - 3600 * 1000 : Date.now(),
}
} else {
// Update with fresh values
cachedTokens.access_token = auth.access
cachedTokens.refresh_token = refreshParts.refreshToken
}
// Check token expiration and refresh if needed
if (isTokenExpired(cachedTokens)) {
debugLog("Token expired, refreshing...")
try {
const newTokens = await refreshAccessToken(refreshParts.refreshToken, clientId, clientSecret)
cachedTokens = {
type: "antigravity",
access_token: newTokens.access_token,
refresh_token: newTokens.refresh_token,
expires_in: newTokens.expires_in,
timestamp: Date.now(),
}
clearProjectContextCache()
const formattedRefresh = formatTokenForStorage(
newTokens.refresh_token,
refreshParts.projectId || "",
refreshParts.managedProjectId
)
await client.set(providerId, {
access: newTokens.access_token,
refresh: formattedRefresh,
expires: Date.now() + newTokens.expires_in * 1000,
})
debugLog("Token refreshed successfully")
} catch (error) {
if (error instanceof AntigravityTokenRefreshError) {
if (error.isInvalidGrant) {
debugLog(`[REFRESH] Token revoked (invalid_grant), clearing caches`)
invalidateProjectContextByRefreshToken(refreshParts.refreshToken)
clearProjectContextCache()
}
throw new Error(
`Antigravity: Token refresh failed: ${error.description || error.message}${error.code ? ` (${error.code})` : ""}`
)
}
throw new Error(
`Antigravity: Token refresh failed: ${error instanceof Error ? error.message : "Unknown error"}`
)
}
}
// Fetch project ID via loadCodeAssist (CLIProxyAPI approach)
if (!cachedProjectId) {
const projectContext = await fetchProjectContext(cachedTokens.access_token)
cachedProjectId = projectContext.cloudaicompanionProject || ""
debugLog(`[PROJECT] Fetched project ID: "${cachedProjectId}"`)
}
const projectId = cachedProjectId
debugLog(`[PROJECT] Using project ID: "${projectId}"`)
// Extract model name from request body
let modelName: string | undefined
if (init.body) {
try {
const body =
typeof init.body === "string"
? (JSON.parse(init.body) as Record<string, unknown>)
: (init.body as unknown as Record<string, unknown>)
if (typeof body.model === "string") {
modelName = body.model
}
} catch {
// Ignore parsing errors
}
}
const maxEndpoints = Math.min(ANTIGRAVITY_ENDPOINT_FALLBACKS.length, 3)
const sessionId = getOrCreateSessionId(fetchInstanceId)
const thoughtSignature = getThoughtSignature(fetchInstanceId)
debugLog(`[TSIG][GET] sessionId=${sessionId}, signature=${thoughtSignature ? thoughtSignature.substring(0, 20) + "..." : "none"}`)
let hasRefreshedFor401 = false
const executeWithEndpoints = async (): Promise<Response> => {
for (let i = 0; i < maxEndpoints; i++) {
const endpoint = ANTIGRAVITY_ENDPOINT_FALLBACKS[i]
const response = await attemptFetch({
endpoint,
url,
init,
accessToken: cachedTokens!.access_token,
projectId,
sessionId,
modelName,
thoughtSignature,
})
if (response === "pass-through") {
debugLog("Non-string body detected, passing through with auth headers")
const headersWithAuth = {
...init.headers,
Authorization: `Bearer ${cachedTokens!.access_token}`,
}
return fetch(url, { ...init, headers: headersWithAuth })
}
if (response === "needs-refresh") {
if (hasRefreshedFor401) {
debugLog("[401] Already refreshed once, returning unauthorized error")
return new Response(
JSON.stringify({
error: {
message: "Authentication failed after token refresh",
type: "unauthorized",
code: "token_refresh_failed",
},
}),
{
status: 401,
statusText: "Unauthorized",
headers: { "Content-Type": "application/json" },
}
)
}
debugLog("[401] Refreshing token and retrying...")
hasRefreshedFor401 = true
try {
const newTokens = await refreshAccessToken(
refreshParts.refreshToken,
clientId,
clientSecret
)
cachedTokens = {
type: "antigravity",
access_token: newTokens.access_token,
refresh_token: newTokens.refresh_token,
expires_in: newTokens.expires_in,
timestamp: Date.now(),
}
clearProjectContextCache()
const formattedRefresh = formatTokenForStorage(
newTokens.refresh_token,
refreshParts.projectId || "",
refreshParts.managedProjectId
)
await client.set(providerId, {
access: newTokens.access_token,
refresh: formattedRefresh,
expires: Date.now() + newTokens.expires_in * 1000,
})
debugLog("[401] Token refreshed, retrying request...")
return executeWithEndpoints()
} catch (refreshError) {
if (refreshError instanceof AntigravityTokenRefreshError) {
if (refreshError.isInvalidGrant) {
debugLog(`[401] Token revoked (invalid_grant), clearing caches`)
invalidateProjectContextByRefreshToken(refreshParts.refreshToken)
clearProjectContextCache()
}
debugLog(`[401] Token refresh failed: ${refreshError.description || refreshError.message}`)
return new Response(
JSON.stringify({
error: {
message: refreshError.description || refreshError.message,
type: refreshError.isInvalidGrant ? "token_revoked" : "unauthorized",
code: refreshError.code || "token_refresh_failed",
},
}),
{
status: 401,
statusText: "Unauthorized",
headers: { "Content-Type": "application/json" },
}
)
}
debugLog(`[401] Token refresh failed: ${refreshError instanceof Error ? refreshError.message : "Unknown error"}`)
return new Response(
JSON.stringify({
error: {
message: refreshError instanceof Error ? refreshError.message : "Unknown error",
type: "unauthorized",
code: "token_refresh_failed",
},
}),
{
status: 401,
statusText: "Unauthorized",
headers: { "Content-Type": "application/json" },
}
)
}
}
if (response && typeof response === "object" && "type" in response && response.type === "rate-limited") {
const rateLimitInfo = response as RateLimitInfo
const family = getModelFamily(url, init)
if (rateLimitInfo.retryAfterMs > 5000 && manager && currentAccount) {
manager.markRateLimited(currentAccount, rateLimitInfo.retryAfterMs, family)
await manager.save()
debugLog(`[RATE-LIMIT] Account ${currentAccount.index + 1} rate-limited for ${family}, rotating...`)
const nextAccount = manager.getCurrentOrNextForFamily(family)
if (nextAccount && nextAccount.index !== currentAccount.index) {
debugLog(`[RATE-LIMIT] Switched to account ${nextAccount.index + 1}`)
return fetchFn(url, init)
}
}
const isLastEndpoint = i === maxEndpoints - 1
if (isLastEndpoint) {
const isServerError = rateLimitInfo.status >= 500
debugLog(`[RATE-LIMIT] No alternative account or endpoint, returning ${rateLimitInfo.status}`)
return new Response(
JSON.stringify({
error: {
message: isServerError
? `Server error (${rateLimitInfo.status}). Retry after ${Math.ceil(rateLimitInfo.retryAfterMs / 1000)} seconds`
: `Rate limited. Retry after ${Math.ceil(rateLimitInfo.retryAfterMs / 1000)} seconds`,
type: isServerError ? "server_error" : "rate_limit",
code: isServerError ? "server_error" : "rate_limited",
},
}),
{
status: rateLimitInfo.status,
statusText: isServerError ? "Server Error" : "Too Many Requests",
headers: {
"Content-Type": "application/json",
"Retry-After": String(Math.ceil(rateLimitInfo.retryAfterMs / 1000)),
},
}
)
}
debugLog(`[RATE-LIMIT] No alternative account available, trying next endpoint`)
continue
}
if (response && response instanceof Response) {
debugLog(`Success with endpoint: ${endpoint}`)
const transformedResponse = await transformResponseWithThinking(
response,
modelName || "",
fetchInstanceId
)
return transformedResponse
}
}
const errorMessage = `All Antigravity endpoints failed after ${maxEndpoints} attempts`
debugLog(errorMessage)
return new Response(
JSON.stringify({
error: {
message: errorMessage,
type: "endpoint_failure",
code: "all_endpoints_failed",
},
}),
{
status: 503,
statusText: "Service Unavailable",
headers: { "Content-Type": "application/json" },
}
)
}
return executeWithEndpoints()
}
return fetchFn
}
/**
* Type export for createAntigravityFetch return type
*/
export type AntigravityFetch = (url: string, init?: RequestInit) => Promise<Response>

View File

@@ -1,13 +0,0 @@
export * from "./types"
export * from "./constants"
export * from "./oauth"
export * from "./token"
export * from "./project"
export * from "./request"
export * from "./response"
export * from "./tools"
export * from "./thinking"
export * from "./thought-signature-store"
export * from "./message-converter"
export * from "./fetch"
export * from "./plugin"

View File

@@ -1,306 +0,0 @@
/**
* Antigravity Integration Tests - End-to-End
*
* Tests the complete request transformation pipeline:
* - Request parsing and model extraction
* - System prompt injection (handled by transformRequest)
* - Thinking config application (handled by applyThinkingConfigToRequest)
* - Body wrapping for Antigravity API format
*/
import { describe, it, expect } from "bun:test"
import { transformRequest } from "./request"
import { extractThinkingConfig, applyThinkingConfigToRequest } from "./thinking"
describe("Antigravity Integration - End-to-End", () => {
describe("Thinking Config Integration", () => {
it("Gemini 3 with reasoning_effort='high' → thinkingLevel='high'", () => {
// #given
const inputBody: Record<string, unknown> = {
model: "gemini-3-pro-preview",
reasoning_effort: "high",
messages: [{ role: "user", content: "test" }],
}
// #when
const transformed = transformRequest({
url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-3-pro-preview:generateContent",
body: inputBody,
accessToken: "test-token",
projectId: "test-project",
sessionId: "test-session",
modelName: "gemini-3-pro-preview",
})
const thinkingConfig = extractThinkingConfig(
inputBody,
inputBody.generationConfig as Record<string, unknown> | undefined,
inputBody,
)
if (thinkingConfig) {
applyThinkingConfigToRequest(
transformed.body as unknown as Record<string, unknown>,
"gemini-3-pro-preview",
thinkingConfig,
)
}
// #then
const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
expect(thinkingConfigResult?.thinkingLevel).toBe("high")
expect(thinkingConfigResult?.thinkingBudget).toBeUndefined()
const systemInstruction = transformed.body.request.systemInstruction as Record<string, unknown> | undefined
const parts = systemInstruction?.parts as Array<{ text: string }> | undefined
expect(parts?.[0]?.text).toContain("<identity>")
})
it("Gemini 2.5 with reasoning_effort='high' → thinkingBudget=24576", () => {
// #given
const inputBody: Record<string, unknown> = {
model: "gemini-2.5-flash",
reasoning_effort: "high",
messages: [{ role: "user", content: "test" }],
}
// #when
const transformed = transformRequest({
url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-2.5-flash:generateContent",
body: inputBody,
accessToken: "test-token",
projectId: "test-project",
sessionId: "test-session",
modelName: "gemini-2.5-flash",
})
const thinkingConfig = extractThinkingConfig(
inputBody,
inputBody.generationConfig as Record<string, unknown> | undefined,
inputBody,
)
if (thinkingConfig) {
applyThinkingConfigToRequest(
transformed.body as unknown as Record<string, unknown>,
"gemini-2.5-flash",
thinkingConfig,
)
}
// #then
const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
expect(thinkingConfigResult?.thinkingBudget).toBe(24576)
expect(thinkingConfigResult?.thinkingLevel).toBeUndefined()
})
it("reasoning_effort='none' → thinkingConfig deleted", () => {
// #given
const inputBody: Record<string, unknown> = {
model: "gemini-2.5-flash",
reasoning_effort: "none",
messages: [{ role: "user", content: "test" }],
}
// #when
const transformed = transformRequest({
url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-2.5-flash:generateContent",
body: inputBody,
accessToken: "test-token",
projectId: "test-project",
sessionId: "test-session",
modelName: "gemini-2.5-flash",
})
const thinkingConfig = extractThinkingConfig(
inputBody,
inputBody.generationConfig as Record<string, unknown> | undefined,
inputBody,
)
if (thinkingConfig) {
applyThinkingConfigToRequest(
transformed.body as unknown as Record<string, unknown>,
"gemini-2.5-flash",
thinkingConfig,
)
}
// #then
const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
expect(genConfig?.thinkingConfig).toBeUndefined()
})
it("Claude via Antigravity with reasoning_effort='high'", () => {
// #given
const inputBody: Record<string, unknown> = {
model: "gemini-claude-sonnet-4-5",
reasoning_effort: "high",
messages: [{ role: "user", content: "test" }],
}
// #when
const transformed = transformRequest({
url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-claude-sonnet-4-5:generateContent",
body: inputBody,
accessToken: "test-token",
projectId: "test-project",
sessionId: "test-session",
modelName: "gemini-claude-sonnet-4-5",
})
const thinkingConfig = extractThinkingConfig(
inputBody,
inputBody.generationConfig as Record<string, unknown> | undefined,
inputBody,
)
if (thinkingConfig) {
applyThinkingConfigToRequest(
transformed.body as unknown as Record<string, unknown>,
"gemini-claude-sonnet-4-5",
thinkingConfig,
)
}
// #then
const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
expect(thinkingConfigResult?.thinkingBudget).toBe(24576)
})
it("System prompt not duplicated on retry", () => {
// #given
const inputBody: Record<string, unknown> = {
model: "gemini-3-pro-high",
reasoning_effort: "high",
messages: [{ role: "user", content: "test" }],
}
// #when - First transformation
const firstOutput = transformRequest({
url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-3-pro-high:generateContent",
body: inputBody,
accessToken: "test-token",
projectId: "test-project",
sessionId: "test-session",
modelName: "gemini-3-pro-high",
})
// Extract thinking config and apply to first output (simulating what fetch.ts does)
const thinkingConfig = extractThinkingConfig(
inputBody,
inputBody.generationConfig as Record<string, unknown> | undefined,
inputBody,
)
if (thinkingConfig) {
applyThinkingConfigToRequest(
firstOutput.body as unknown as Record<string, unknown>,
"gemini-3-pro-high",
thinkingConfig,
)
}
// #then
const systemInstruction = firstOutput.body.request.systemInstruction as Record<string, unknown> | undefined
const parts = systemInstruction?.parts as Array<{ text: string }> | undefined
const identityCount = parts?.filter((p) => p.text.includes("<identity>")).length ?? 0
expect(identityCount).toBe(1) // Should have exactly ONE <identity> block
})
it("reasoning_effort='low' for Gemini 3 → thinkingLevel='low'", () => {
// #given
const inputBody: Record<string, unknown> = {
model: "gemini-3-flash-preview",
reasoning_effort: "low",
messages: [{ role: "user", content: "test" }],
}
// #when
const transformed = transformRequest({
url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-3-flash-preview:generateContent",
body: inputBody,
accessToken: "test-token",
projectId: "test-project",
sessionId: "test-session",
modelName: "gemini-3-flash-preview",
})
const thinkingConfig = extractThinkingConfig(
inputBody,
inputBody.generationConfig as Record<string, unknown> | undefined,
inputBody,
)
if (thinkingConfig) {
applyThinkingConfigToRequest(
transformed.body as unknown as Record<string, unknown>,
"gemini-3-flash-preview",
thinkingConfig,
)
}
// #then
const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
expect(thinkingConfigResult?.thinkingLevel).toBe("low")
})
it("Full pipeline: transformRequest + thinking config preserves all fields", () => {
// #given
const inputBody: Record<string, unknown> = {
model: "gemini-2.5-flash",
reasoning_effort: "medium",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Write a function" },
],
generationConfig: {
temperature: 0.7,
maxOutputTokens: 1000,
},
}
// #when
const transformed = transformRequest({
url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-2.5-flash:generateContent",
body: inputBody,
accessToken: "test-token",
projectId: "test-project",
sessionId: "test-session",
modelName: "gemini-2.5-flash",
})
const thinkingConfig = extractThinkingConfig(
inputBody,
inputBody.generationConfig as Record<string, unknown> | undefined,
inputBody,
)
if (thinkingConfig) {
applyThinkingConfigToRequest(
transformed.body as unknown as Record<string, unknown>,
"gemini-2.5-flash",
thinkingConfig,
)
}
// #then
// Verify basic structure is preserved
expect(transformed.body.project).toBe("test-project")
expect(transformed.body.model).toBe("gemini-2.5-flash")
expect(transformed.body.userAgent).toBe("antigravity")
expect(transformed.body.request.sessionId).toBe("test-session")
// Verify generation config is preserved
const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
expect(genConfig?.temperature).toBe(0.7)
expect(genConfig?.maxOutputTokens).toBe(1000)
// Verify thinking config is applied
const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
expect(thinkingConfigResult?.thinkingBudget).toBe(8192)
expect(thinkingConfigResult?.include_thoughts).toBe(true)
// Verify system prompt is injected
const systemInstruction = transformed.body.request.systemInstruction as Record<string, unknown> | undefined
const parts = systemInstruction?.parts as Array<{ text: string }> | undefined
expect(parts?.[0]?.text).toContain("<identity>")
})
})
})

View File

@@ -1,206 +0,0 @@
/**
* OpenAI → Gemini message format converter
*
* Converts OpenAI-style messages to Gemini contents format,
* injecting thoughtSignature into functionCall parts.
*/
import { SKIP_THOUGHT_SIGNATURE_VALIDATOR } from "./constants"
function debugLog(message: string): void {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-converter] ${message}`)
}
}
interface OpenAIMessage {
role: "system" | "user" | "assistant" | "tool"
content?: string | OpenAIContentPart[]
tool_calls?: OpenAIToolCall[]
tool_call_id?: string
name?: string
}
interface OpenAIContentPart {
type: string
text?: string
image_url?: { url: string }
[key: string]: unknown
}
interface OpenAIToolCall {
id: string
type: "function"
function: {
name: string
arguments: string
}
}
interface GeminiPart {
text?: string
functionCall?: {
name: string
args: Record<string, unknown>
}
functionResponse?: {
name: string
response: Record<string, unknown>
}
inlineData?: {
mimeType: string
data: string
}
thoughtSignature?: string
[key: string]: unknown
}
interface GeminiContent {
role: "user" | "model"
parts: GeminiPart[]
}
export function convertOpenAIToGemini(
messages: OpenAIMessage[],
thoughtSignature?: string
): GeminiContent[] {
debugLog(`Converting ${messages.length} messages, signature: ${thoughtSignature ? "present" : "none"}`)
const contents: GeminiContent[] = []
for (const msg of messages) {
if (msg.role === "system") {
contents.push({
role: "user",
parts: [{ text: typeof msg.content === "string" ? msg.content : "" }],
})
continue
}
if (msg.role === "user") {
const parts = convertContentToParts(msg.content)
contents.push({ role: "user", parts })
continue
}
if (msg.role === "assistant") {
const parts: GeminiPart[] = []
if (msg.content) {
parts.push(...convertContentToParts(msg.content))
}
if (msg.tool_calls && msg.tool_calls.length > 0) {
for (const toolCall of msg.tool_calls) {
let args: Record<string, unknown> = {}
try {
args = JSON.parse(toolCall.function.arguments)
} catch {
args = {}
}
const part: GeminiPart = {
functionCall: {
name: toolCall.function.name,
args,
},
}
// Always inject signature: use provided or default to skip validator (CLIProxyAPI approach)
part.thoughtSignature = thoughtSignature || SKIP_THOUGHT_SIGNATURE_VALIDATOR
debugLog(`Injected signature into functionCall: ${toolCall.function.name} (${thoughtSignature ? "provided" : "default"})`)
parts.push(part)
}
}
if (parts.length > 0) {
contents.push({ role: "model", parts })
}
continue
}
if (msg.role === "tool") {
let response: Record<string, unknown> = {}
try {
response = typeof msg.content === "string"
? JSON.parse(msg.content)
: { result: msg.content }
} catch {
response = { result: msg.content }
}
const toolName = msg.name || "unknown"
contents.push({
role: "user",
parts: [{
functionResponse: {
name: toolName,
response,
},
}],
})
continue
}
}
debugLog(`Converted to ${contents.length} content blocks`)
return contents
}
function convertContentToParts(content: string | OpenAIContentPart[] | undefined): GeminiPart[] {
if (!content) {
return [{ text: "" }]
}
if (typeof content === "string") {
return [{ text: content }]
}
const parts: GeminiPart[] = []
for (const part of content) {
if (part.type === "text" && part.text) {
parts.push({ text: part.text })
} else if (part.type === "image_url" && part.image_url?.url) {
const url = part.image_url.url
if (url.startsWith("data:")) {
const match = url.match(/^data:([^;]+);base64,(.+)$/)
if (match) {
parts.push({
inlineData: {
mimeType: match[1],
data: match[2],
},
})
}
}
}
}
return parts.length > 0 ? parts : [{ text: "" }]
}
export function hasOpenAIMessages(body: Record<string, unknown>): boolean {
return Array.isArray(body.messages) && body.messages.length > 0
}
export function convertRequestBody(
body: Record<string, unknown>,
thoughtSignature?: string
): Record<string, unknown> {
if (!hasOpenAIMessages(body)) {
debugLog("No messages array found, returning body as-is")
return body
}
const messages = body.messages as OpenAIMessage[]
const contents = convertOpenAIToGemini(messages, thoughtSignature)
const converted = { ...body }
delete converted.messages
converted.contents = contents
debugLog(`Converted body: messages → contents (${contents.length} blocks)`)
return converted
}
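The assistant tool-call branch of the converter can be sketched in isolation. This is a minimal, self-contained reproduction of that mapping, not the module itself: `SKIP_VALIDATOR` is a placeholder standing in for the real `SKIP_THOUGHT_SIGNATURE_VALIDATOR` imported from `./constants`, whose actual value is not shown in this diff.

```typescript
// Minimal sketch of the tool-call → functionCall mapping above.
// SKIP_VALIDATOR is a placeholder assumption; the real constant lives in "./constants".
const SKIP_VALIDATOR = "skip-thought-signature-validator"

interface ToolCall {
  id: string
  type: "function"
  function: { name: string; arguments: string }
}

function toolCallToGeminiPart(toolCall: ToolCall, thoughtSignature?: string) {
  let args: Record<string, unknown> = {}
  try {
    args = JSON.parse(toolCall.function.arguments)
  } catch {
    // Malformed JSON falls back to empty args, matching the converter
  }
  return {
    functionCall: { name: toolCall.function.name, args },
    // A signature is always injected: the provided one, or the skip-validator default
    thoughtSignature: thoughtSignature ?? SKIP_VALIDATOR,
  }
}

const part = toolCallToGeminiPart({
  id: "call_1",
  type: "function",
  function: { name: "get_weather", arguments: '{"city":"Seoul"}' },
})
console.log(part.functionCall.args["city"]) // "Seoul"
console.log(part.thoughtSignature === SKIP_VALIDATOR) // true
```

The same fallback-to-default pattern is what lets retried requests pass Gemini's signature validation even when no upstream signature is available.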

View File

@@ -1,262 +0,0 @@
import { describe, it, expect, beforeEach, afterEach, mock } from "bun:test"
import { buildAuthURL, exchangeCode, startCallbackServer } from "./oauth"
import { ANTIGRAVITY_CLIENT_ID, GOOGLE_TOKEN_URL, ANTIGRAVITY_CALLBACK_PORT } from "./constants"
describe("OAuth PKCE Removal", () => {
describe("buildAuthURL", () => {
it("should NOT include code_challenge parameter", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.has("code_challenge")).toBe(false)
})
it("should NOT include code_challenge_method parameter", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.has("code_challenge_method")).toBe(false)
})
it("should include state parameter for CSRF protection", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
const state = url.searchParams.get("state")
// #then
expect(state).toBeTruthy()
})
it("should have state as simple random string (not JSON/base64)", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
const state = url.searchParams.get("state")!
// #then - positive assertions for simple random string
expect(state.length).toBeGreaterThanOrEqual(16)
expect(state.length).toBeLessThanOrEqual(64)
// Should be URL-safe (alphanumeric, no special chars like { } " :)
expect(state).toMatch(/^[a-zA-Z0-9_-]+$/)
// Should NOT contain JSON indicators
expect(state).not.toContain("{")
expect(state).not.toContain("}")
expect(state).not.toContain('"')
})
it("should include access_type=offline", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.get("access_type")).toBe("offline")
})
it("should include prompt=consent", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.get("prompt")).toBe("consent")
})
it("should NOT return verifier property (PKCE removed)", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
// #then
expect(result).not.toHaveProperty("verifier")
expect(result).toHaveProperty("url")
expect(result).toHaveProperty("state")
})
it("should return state that matches URL state param", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(result.state).toBe(url.searchParams.get("state")!)
})
})
describe("exchangeCode", () => {
let originalFetch: typeof fetch
beforeEach(() => {
originalFetch = globalThis.fetch
})
afterEach(() => {
globalThis.fetch = originalFetch
})
it("should NOT send code_verifier in token exchange", async () => {
// #given
let capturedBody: string | null = null
globalThis.fetch = mock(async (url: string, init?: RequestInit) => {
if (url === GOOGLE_TOKEN_URL) {
capturedBody = init?.body as string
return new Response(JSON.stringify({
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: 3600,
token_type: "Bearer"
}))
}
return new Response("", { status: 404 })
}) as unknown as typeof fetch
// #when
await exchangeCode("test-code", "http://localhost:51121/oauth-callback")
// #then
expect(capturedBody).toBeTruthy()
const params = new URLSearchParams(capturedBody!)
expect(params.has("code_verifier")).toBe(false)
})
it("should send required OAuth parameters", async () => {
// #given
let capturedBody: string | null = null
globalThis.fetch = mock(async (url: string, init?: RequestInit) => {
if (url === GOOGLE_TOKEN_URL) {
capturedBody = init?.body as string
return new Response(JSON.stringify({
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: 3600,
token_type: "Bearer"
}))
}
return new Response("", { status: 404 })
}) as unknown as typeof fetch
// #when
await exchangeCode("test-code", "http://localhost:51121/oauth-callback")
// #then
const params = new URLSearchParams(capturedBody!)
expect(params.get("grant_type")).toBe("authorization_code")
expect(params.get("code")).toBe("test-code")
expect(params.get("client_id")).toBe(ANTIGRAVITY_CLIENT_ID)
expect(params.get("redirect_uri")).toBe("http://localhost:51121/oauth-callback")
})
})
describe("State/CSRF Validation", () => {
it("should generate unique state for each call", async () => {
// #given
const projectId = "test-project"
// #when
const result1 = await buildAuthURL(projectId)
const result2 = await buildAuthURL(projectId)
// #then
expect(result1.state).not.toBe(result2.state)
})
})
describe("startCallbackServer Port Handling", () => {
it("should prefer port 51121", () => {
// #given
// Port 51121 should be free
// #when
const handle = startCallbackServer()
// #then
// If 51121 is available, should use it
// If not available, should use valid fallback
expect(handle.port).toBeGreaterThan(0)
expect(handle.port).toBeLessThan(65536)
handle.close()
})
it("should return actual bound port", () => {
// #when
const handle = startCallbackServer()
// #then
expect(typeof handle.port).toBe("number")
expect(handle.port).toBeGreaterThan(0)
handle.close()
})
it("should fallback to OS-assigned port if 51121 is occupied (EADDRINUSE)", async () => {
// #given - Occupy port 51121 first
const blocker = Bun.serve({
port: ANTIGRAVITY_CALLBACK_PORT,
fetch: () => new Response("blocked")
})
try {
// #when
const handle = startCallbackServer()
// #then
expect(handle.port).not.toBe(ANTIGRAVITY_CALLBACK_PORT)
expect(handle.port).toBeGreaterThan(0)
handle.close()
} finally {
// Cleanup blocker
blocker.stop()
}
})
it("should cleanup server on close", () => {
// #given
const handle = startCallbackServer()
const port = handle.port
// #when
handle.close()
// #then - port should be released (can bind again)
const testServer = Bun.serve({ port, fetch: () => new Response("test") })
expect(testServer.port).toBe(port)
testServer.stop()
})
it("should provide redirect URI with actual port", () => {
// #given
const handle = startCallbackServer()
// #then
expect(handle.redirectUri).toBe(`http://localhost:${handle.port}/oauth-callback`)
handle.close()
})
})
})

View File

@@ -1,285 +0,0 @@
/**
* Antigravity OAuth 2.0 flow implementation.
* Handles Google OAuth for Antigravity authentication.
*/
import {
ANTIGRAVITY_CLIENT_ID,
ANTIGRAVITY_CLIENT_SECRET,
ANTIGRAVITY_REDIRECT_URI,
ANTIGRAVITY_SCOPES,
ANTIGRAVITY_CALLBACK_PORT,
GOOGLE_AUTH_URL,
GOOGLE_TOKEN_URL,
GOOGLE_USERINFO_URL,
} from "./constants"
import type {
AntigravityTokenExchangeResult,
AntigravityUserInfo,
} from "./types"
/**
* Result from building an OAuth authorization URL.
*/
export interface AuthorizationResult {
/** Full OAuth URL to open in browser */
url: string
/** State for CSRF protection */
state: string
}
/**
* Result from the OAuth callback server.
*/
export interface CallbackResult {
/** Authorization code from Google */
code: string
/** State parameter from callback */
state: string
/** Error message if any */
error?: string
}
export async function buildAuthURL(
projectId?: string,
clientId: string = ANTIGRAVITY_CLIENT_ID,
port: number = ANTIGRAVITY_CALLBACK_PORT
): Promise<AuthorizationResult> {
const state = crypto.randomUUID().replace(/-/g, "")
const redirectUri = `http://localhost:${port}/oauth-callback`
const url = new URL(GOOGLE_AUTH_URL)
url.searchParams.set("client_id", clientId)
url.searchParams.set("redirect_uri", redirectUri)
url.searchParams.set("response_type", "code")
url.searchParams.set("scope", ANTIGRAVITY_SCOPES.join(" "))
url.searchParams.set("state", state)
url.searchParams.set("access_type", "offline")
url.searchParams.set("prompt", "consent")
return {
url: url.toString(),
state,
}
}
/**
* Exchange authorization code for tokens.
*
* @param code - Authorization code from OAuth callback
* @param redirectUri - OAuth redirect URI
* @param clientId - Optional custom client ID (defaults to ANTIGRAVITY_CLIENT_ID)
* @param clientSecret - Optional custom client secret (defaults to ANTIGRAVITY_CLIENT_SECRET)
* @returns Token exchange result with access and refresh tokens
*/
export async function exchangeCode(
code: string,
redirectUri: string,
clientId: string = ANTIGRAVITY_CLIENT_ID,
clientSecret: string = ANTIGRAVITY_CLIENT_SECRET
): Promise<AntigravityTokenExchangeResult> {
const params = new URLSearchParams({
client_id: clientId,
client_secret: clientSecret,
code,
grant_type: "authorization_code",
redirect_uri: redirectUri,
})
const response = await fetch(GOOGLE_TOKEN_URL, {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: params,
})
if (!response.ok) {
const errorText = await response.text()
throw new Error(`Token exchange failed: ${response.status} - ${errorText}`)
}
const data = (await response.json()) as {
access_token: string
refresh_token: string
expires_in: number
token_type: string
}
return {
access_token: data.access_token,
refresh_token: data.refresh_token,
expires_in: data.expires_in,
token_type: data.token_type,
}
}
/**
* Fetch user info from Google's userinfo API.
*
* @param accessToken - Valid access token
* @returns User info containing email
*/
export async function fetchUserInfo(
accessToken: string
): Promise<AntigravityUserInfo> {
const response = await fetch(`${GOOGLE_USERINFO_URL}?alt=json`, {
headers: {
Authorization: `Bearer ${accessToken}`,
},
})
if (!response.ok) {
throw new Error(`Failed to fetch user info: ${response.status}`)
}
const data = (await response.json()) as {
email?: string
name?: string
picture?: string
}
return {
email: data.email || "",
name: data.name,
picture: data.picture,
}
}
export interface CallbackServerHandle {
port: number
redirectUri: string
waitForCallback: () => Promise<CallbackResult>
close: () => void
}
export function startCallbackServer(
timeoutMs: number = 5 * 60 * 1000
): CallbackServerHandle {
let server: ReturnType<typeof Bun.serve> | null = null
let timeoutId: ReturnType<typeof setTimeout> | null = null
let resolveCallback: ((result: CallbackResult) => void) | null = null
let rejectCallback: ((error: Error) => void) | null = null
const cleanup = () => {
if (timeoutId) {
clearTimeout(timeoutId)
timeoutId = null
}
if (server) {
server.stop()
server = null
}
}
const fetchHandler = (request: Request): Response => {
const url = new URL(request.url)
if (url.pathname === "/oauth-callback") {
const code = url.searchParams.get("code") || ""
const state = url.searchParams.get("state") || ""
const error = url.searchParams.get("error") || undefined
let responseBody: string
if (code && !error) {
responseBody =
"<html><body><h1>Login successful</h1><p>You can close this window.</p></body></html>"
} else {
responseBody =
"<html><body><h1>Login failed</h1><p>Please check the CLI output.</p></body></html>"
}
setTimeout(() => {
cleanup()
if (resolveCallback) {
resolveCallback({ code, state, error })
}
}, 100)
return new Response(responseBody, {
status: 200,
headers: { "Content-Type": "text/html" },
})
}
return new Response("Not Found", { status: 404 })
}
try {
server = Bun.serve({
port: ANTIGRAVITY_CALLBACK_PORT,
fetch: fetchHandler,
})
} catch {
// Preferred port unavailable (likely EADDRINUSE) — fall back to an OS-assigned port
server = Bun.serve({
port: 0,
fetch: fetchHandler,
})
}
const actualPort = server.port as number
const redirectUri = `http://localhost:${actualPort}/oauth-callback`
const waitForCallback = (): Promise<CallbackResult> => {
return new Promise((resolve, reject) => {
resolveCallback = resolve
rejectCallback = reject
timeoutId = setTimeout(() => {
cleanup()
reject(new Error("OAuth callback timeout"))
}, timeoutMs)
})
}
return {
port: actualPort,
redirectUri,
waitForCallback,
close: cleanup,
}
}
export async function performOAuthFlow(
projectId?: string,
openBrowser?: (url: string) => Promise<void>,
clientId: string = ANTIGRAVITY_CLIENT_ID,
clientSecret: string = ANTIGRAVITY_CLIENT_SECRET
): Promise<{
tokens: AntigravityTokenExchangeResult
userInfo: AntigravityUserInfo
state: string
}> {
const serverHandle = startCallbackServer()
try {
const auth = await buildAuthURL(projectId, clientId, serverHandle.port)
if (openBrowser) {
await openBrowser(auth.url)
}
const callback = await serverHandle.waitForCallback()
if (callback.error) {
throw new Error(`OAuth error: ${callback.error}`)
}
if (!callback.code) {
throw new Error("No authorization code received")
}
if (callback.state !== auth.state) {
throw new Error("State mismatch - possible CSRF attack")
}
const redirectUri = `http://localhost:${serverHandle.port}/oauth-callback`
const tokens = await exchangeCode(callback.code, redirectUri, clientId, clientSecret)
const userInfo = await fetchUserInfo(tokens.access_token)
return { tokens, userInfo, state: auth.state }
} catch (err) {
serverHandle.close()
throw err
}
}
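The form body that `exchangeCode` posts to the token endpoint can be illustrated with placeholder values. All values below are illustrative, not real credentials; the point is the parameter set, including the absence of `code_verifier` now that PKCE is removed:

```typescript
// Illustrative token-exchange form body; every value here is a placeholder.
const params = new URLSearchParams({
  client_id: "example-client-id",
  client_secret: "example-client-secret",
  code: "example-auth-code",
  grant_type: "authorization_code",
  redirect_uri: "http://localhost:51121/oauth-callback",
})
console.log(params.get("grant_type")) // "authorization_code"
console.log(params.has("code_verifier")) // false — PKCE removed, client_secret used instead
```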

View File

@@ -1,554 +0,0 @@
/**
* Google Antigravity Auth Plugin for OpenCode
*
* Provides OAuth authentication for Google models via Antigravity API.
* This plugin integrates with OpenCode's auth system to enable:
* - OAuth 2.0 flow for Google authentication (state-based CSRF protection; PKCE removed)
* - Automatic token refresh
* - Request/response transformation for Antigravity API
*
* @example
* ```json
* // opencode.json
* {
* "plugin": ["oh-my-opencode"],
* "provider": {
* "google": {
* "options": {
* "clientId": "custom-client-id",
* "clientSecret": "custom-client-secret"
* }
* }
* }
* }
* ```
*/
import type { Auth, Provider } from "@opencode-ai/sdk"
import type { AuthHook, AuthOuathResult, PluginInput } from "@opencode-ai/plugin"
import { ANTIGRAVITY_CLIENT_ID, ANTIGRAVITY_CLIENT_SECRET } from "./constants"
import {
buildAuthURL,
exchangeCode,
startCallbackServer,
fetchUserInfo,
} from "./oauth"
import { createAntigravityFetch } from "./fetch"
import { fetchProjectContext } from "./project"
import { formatTokenForStorage, parseStoredToken } from "./token"
import { AccountManager } from "./accounts"
import { loadAccounts } from "./storage"
import { promptAddAnotherAccount, promptAccountTier } from "./cli"
import { openBrowserURL } from "./browser"
import type { AccountTier, AntigravityRefreshParts } from "./types"
/**
* Provider ID for Google models
* Antigravity is an auth method for Google, not a separate provider
*/
const GOOGLE_PROVIDER_ID = "google"
/**
* Maximum number of Google accounts that can be added
*/
const MAX_ACCOUNTS = 10
/**
* Type guard to check if auth is OAuth type
*/
function isOAuthAuth(
auth: Auth
): auth is { type: "oauth"; access: string; refresh: string; expires: number } {
return auth.type === "oauth"
}
/**
* Creates the Google Antigravity OAuth plugin for OpenCode.
*
* This factory function creates an auth plugin that:
* 1. Provides OAuth flow for Google authentication
* 2. Creates a custom fetch interceptor for Antigravity API
* 3. Handles token management and refresh
*
* @param input - Plugin input containing the OpenCode client
* @returns Hooks object with auth configuration
*
* @example
* ```typescript
* // Used by OpenCode automatically when plugin is loaded
* const hooks = await createGoogleAntigravityAuthPlugin({ client, ... })
* ```
*/
export async function createGoogleAntigravityAuthPlugin({
client,
}: PluginInput): Promise<{ auth: AuthHook }> {
// Cache for custom credentials from provider.options
// These are populated by loader() and used by authorize()
// Falls back to defaults if loader hasn't been called yet
let cachedClientId: string = ANTIGRAVITY_CLIENT_ID
let cachedClientSecret: string = ANTIGRAVITY_CLIENT_SECRET
const authHook: AuthHook = {
/**
* Provider identifier - must be "google" as Antigravity is
* an auth method for Google models, not a separate provider
*/
provider: GOOGLE_PROVIDER_ID,
/**
* Loader function called when auth is needed.
* Reads credentials from provider.options and creates custom fetch.
*
* @param auth - Function to retrieve current auth state
* @param provider - Provider configuration including options
* @returns Object with custom fetch function
*/
loader: async (
auth: () => Promise<Auth>,
provider: Provider
): Promise<Record<string, unknown>> => {
const currentAuth = await auth()
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log("[antigravity-plugin] loader called")
console.log("[antigravity-plugin] auth type:", currentAuth?.type)
console.log("[antigravity-plugin] auth keys:", Object.keys(currentAuth || {}))
}
if (!isOAuthAuth(currentAuth)) {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log("[antigravity-plugin] NOT OAuth auth, returning empty")
}
return {}
}
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log("[antigravity-plugin] OAuth auth detected, creating custom fetch")
}
let accountManager: AccountManager | null = null
try {
const storedAccounts = await loadAccounts()
if (storedAccounts) {
accountManager = new AccountManager(currentAuth, storedAccounts)
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-plugin] Loaded ${accountManager.getAccountCount()} accounts from storage`)
}
} else if (currentAuth.refresh.includes("|||")) {
const tokens = currentAuth.refresh.split("|||")
const firstToken = tokens[0]!
accountManager = new AccountManager(
{ refresh: firstToken, access: currentAuth.access || "", expires: currentAuth.expires || 0 },
null
)
for (let i = 1; i < tokens.length; i++) {
const parts = parseStoredToken(tokens[i]!)
accountManager.addAccount(parts)
}
await accountManager.save()
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log("[antigravity-plugin] Migrated multi-account auth to storage")
}
}
} catch (error) {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error(
`[antigravity-plugin] Failed to load accounts: ${
error instanceof Error ? error.message : "Unknown error"
}`
)
}
}
cachedClientId =
(provider.options?.clientId as string) || ANTIGRAVITY_CLIENT_ID
cachedClientSecret =
(provider.options?.clientSecret as string) || ANTIGRAVITY_CLIENT_SECRET
// Log if using custom credentials (for debugging)
if (
process.env.ANTIGRAVITY_DEBUG === "1" &&
(cachedClientId !== ANTIGRAVITY_CLIENT_ID ||
cachedClientSecret !== ANTIGRAVITY_CLIENT_SECRET)
) {
console.log(
"[antigravity-plugin] Using custom credentials from provider.options"
)
}
// Create adapter for client.auth.set that matches fetch.ts AuthClient interface
const authClient = {
set: async (
providerId: string,
authData: { access?: string; refresh?: string; expires?: number }
) => {
await client.auth.set({
body: {
type: "oauth",
access: authData.access || "",
refresh: authData.refresh || "",
expires: authData.expires || 0,
},
path: { id: providerId },
})
},
}
// Create auth getter that returns compatible format for fetch.ts
const getAuth = async (): Promise<{
access?: string
refresh?: string
expires?: number
}> => {
const authState = await auth()
if (isOAuthAuth(authState)) {
return {
access: authState.access,
refresh: authState.refresh,
expires: authState.expires,
}
}
return {}
}
const antigravityFetch = createAntigravityFetch(
getAuth,
authClient,
GOOGLE_PROVIDER_ID,
cachedClientId,
cachedClientSecret
)
return {
fetch: antigravityFetch,
apiKey: "antigravity-oauth",
accountManager,
}
},
/**
* Authentication methods available for this provider.
* Only OAuth is supported - no prompts for credentials.
*/
methods: [
{
type: "oauth",
label: "OAuth with Google (Antigravity)",
// NO prompts - credentials come from provider.options or defaults
// OAuth flow starts immediately when user selects this method
/**
* Starts the OAuth authorization flow.
* Opens browser for Google OAuth and waits for callback.
* Supports multi-account flow with prompts for additional accounts.
*
* @returns Authorization result with URL and callback
*/
authorize: async (): Promise<AuthOuathResult> => {
const serverHandle = startCallbackServer()
const { url, state: expectedState } = await buildAuthURL(undefined, cachedClientId, serverHandle.port)
const browserOpened = await openBrowserURL(url)
return {
url,
instructions: browserOpened
? "Opening browser for sign-in. We'll automatically detect when you're done."
: "Please open the URL above in your browser to sign in.",
method: "auto",
callback: async () => {
try {
const result = await serverHandle.waitForCallback()
if (result.error) {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error(`[antigravity-plugin] OAuth error: ${result.error}`)
}
return { type: "failed" as const }
}
if (!result.code) {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error("[antigravity-plugin] No authorization code received")
}
return { type: "failed" as const }
}
if (result.state !== expectedState) {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error("[antigravity-plugin] State mismatch - possible CSRF attack")
}
return { type: "failed" as const }
}
const redirectUri = `http://localhost:${serverHandle.port}/oauth-callback`
const tokens = await exchangeCode(result.code, redirectUri, cachedClientId, cachedClientSecret)
if (!tokens.refresh_token) {
serverHandle.close()
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error("[antigravity-plugin] OAuth response missing refresh_token")
}
return { type: "failed" as const }
}
let email: string | undefined
try {
const userInfo = await fetchUserInfo(tokens.access_token)
email = userInfo.email
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-plugin] Authenticated as: ${email}`)
}
} catch {
// User info is optional
}
const projectContext = await fetchProjectContext(tokens.access_token)
const projectId = projectContext.cloudaicompanionProject || ""
const tier = await promptAccountTier()
const expires = Date.now() + tokens.expires_in * 1000
const accounts: Array<{
parts: AntigravityRefreshParts
access: string
expires: number
email?: string
tier: AccountTier
projectId: string
}> = [{
parts: {
refreshToken: tokens.refresh_token,
projectId,
managedProjectId: projectContext.managedProjectId,
},
access: tokens.access_token,
expires,
email,
tier,
projectId,
}]
await client.tui.showToast({
body: {
message: `Account 1 authenticated${email ? ` (${email})` : ""}`,
variant: "success",
},
})
while (accounts.length < MAX_ACCOUNTS) {
const addAnother = await promptAddAnotherAccount(accounts.length)
if (!addAnother) break
const additionalServerHandle = startCallbackServer()
const { url: additionalUrl, state: expectedAdditionalState } = await buildAuthURL(
undefined,
cachedClientId,
additionalServerHandle.port
)
const additionalBrowserOpened = await openBrowserURL(additionalUrl)
if (!additionalBrowserOpened) {
await client.tui.showToast({
body: {
message: `Please open in browser: ${additionalUrl}`,
variant: "warning",
},
})
}
try {
const additionalResult = await additionalServerHandle.waitForCallback()
if (additionalResult.error || !additionalResult.code) {
additionalServerHandle.close()
await client.tui.showToast({
body: {
message: "Skipping this account...",
variant: "warning",
},
})
continue
}
if (additionalResult.state !== expectedAdditionalState) {
additionalServerHandle.close()
await client.tui.showToast({
body: {
message: "State mismatch, skipping...",
variant: "warning",
},
})
continue
}
const additionalRedirectUri = `http://localhost:${additionalServerHandle.port}/oauth-callback`
const additionalTokens = await exchangeCode(
additionalResult.code,
additionalRedirectUri,
cachedClientId,
cachedClientSecret
)
if (!additionalTokens.refresh_token) {
additionalServerHandle.close()
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error("[antigravity-plugin] Additional account OAuth response missing refresh_token")
}
await client.tui.showToast({
body: {
message: "Account missing refresh token, skipping...",
variant: "warning",
},
})
continue
}
let additionalEmail: string | undefined
try {
const additionalUserInfo = await fetchUserInfo(additionalTokens.access_token)
additionalEmail = additionalUserInfo.email
} catch {
// User info is optional
}
const additionalProjectContext = await fetchProjectContext(additionalTokens.access_token)
const additionalProjectId = additionalProjectContext.cloudaicompanionProject || ""
const additionalTier = await promptAccountTier()
const additionalExpires = Date.now() + additionalTokens.expires_in * 1000
accounts.push({
parts: {
refreshToken: additionalTokens.refresh_token,
projectId: additionalProjectId,
managedProjectId: additionalProjectContext.managedProjectId,
},
access: additionalTokens.access_token,
expires: additionalExpires,
email: additionalEmail,
tier: additionalTier,
projectId: additionalProjectId,
})
additionalServerHandle.close()
await client.tui.showToast({
body: {
message: `Account ${accounts.length} authenticated${additionalEmail ? ` (${additionalEmail})` : ""}`,
variant: "success",
},
})
} catch (error) {
additionalServerHandle.close()
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error(
`[antigravity-plugin] Additional account OAuth failed: ${
error instanceof Error ? error.message : "Unknown error"
}`
)
}
await client.tui.showToast({
body: {
message: "Failed to authenticate additional account, skipping...",
variant: "warning",
},
})
continue
}
}
const firstAccount = accounts[0]!
try {
const accountManager = new AccountManager(
{
refresh: formatTokenForStorage(
firstAccount.parts.refreshToken,
firstAccount.projectId,
firstAccount.parts.managedProjectId
),
access: firstAccount.access,
expires: firstAccount.expires,
},
null
)
for (let i = 1; i < accounts.length; i++) {
const acc = accounts[i]!
accountManager.addAccount(
acc.parts,
acc.access,
acc.expires,
acc.email,
acc.tier
)
}
const currentAccount = accountManager.getCurrentAccount()
if (currentAccount) {
currentAccount.email = firstAccount.email
currentAccount.tier = firstAccount.tier
}
await accountManager.save()
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-plugin] Saved ${accounts.length} accounts to storage`)
}
} catch (error) {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error(
`[antigravity-plugin] Failed to save accounts: ${
error instanceof Error ? error.message : "Unknown error"
}`
)
}
}
const allRefreshTokens = accounts
.map((acc) => formatTokenForStorage(
acc.parts.refreshToken,
acc.projectId,
acc.parts.managedProjectId
))
.join("|||")
serverHandle.close()
return {
type: "success" as const,
access: firstAccount.access,
refresh: allRefreshTokens,
expires: firstAccount.expires,
}
} catch (error) {
serverHandle.close()
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error(
`[antigravity-plugin] OAuth flow failed: ${
error instanceof Error ? error.message : "Unknown error"
}`
)
}
return { type: "failed" as const }
}
},
}
},
},
],
}
return {
auth: authHook,
}
}
/**
* Default export for OpenCode plugin system
*/
export default createGoogleAntigravityAuthPlugin
/**
* Named export for explicit imports
*/
export const GoogleAntigravityAuthPlugin = createGoogleAntigravityAuthPlugin
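The `authorize` flow above packs every authenticated account into a single refresh string: per-account tokens produced by `formatTokenForStorage` are joined with `|||` (see the `allRefreshTokens` step). A minimal sketch of that packing, with a hypothetical stand-in for `formatTokenForStorage` since its real encoding is not shown in this diff:

```typescript
// Hypothetical stand-in for formatTokenForStorage: the real field encoding is
// not visible in this diff, so "::" as an intra-account separator is purely
// illustrative. Only the "|||" join between accounts comes from the source.
function formatTokenForStorage(refresh: string, projectId: string, managedProjectId?: string): string {
  return [refresh, projectId, managedProjectId ?? ""].join("::")
}

const accounts = [
  { refresh: "rt-1", projectId: "proj-a", managed: "mgd-a" },
  { refresh: "rt-2", projectId: "proj-b", managed: undefined as string | undefined },
]

// Mirror of the allRefreshTokens step: one flat string, accounts joined by "|||".
const packed = accounts
  .map((a) => formatTokenForStorage(a.refresh, a.projectId, a.managed))
  .join("|||")
// packed === "rt-1::proj-a::mgd-a|||rt-2::proj-b::"
```

Splitting the stored string on `|||` recovers one entry per account, which is presumably how the refresh side restores multi-account state.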


@@ -1,274 +0,0 @@
/**
* Antigravity project context management.
* Handles fetching GCP project ID via Google's loadCodeAssist API.
* For FREE tier users, onboards via onboardUser API to get server-assigned managed project ID.
* Reference: https://github.com/shekohex/opencode-google-antigravity-auth
*/
import {
ANTIGRAVITY_ENDPOINT_FALLBACKS,
ANTIGRAVITY_API_VERSION,
ANTIGRAVITY_HEADERS,
ANTIGRAVITY_DEFAULT_PROJECT_ID,
} from "./constants"
import type {
AntigravityProjectContext,
AntigravityLoadCodeAssistResponse,
AntigravityOnboardUserPayload,
AntigravityUserTier,
} from "./types"
const projectContextCache = new Map<string, AntigravityProjectContext>()
function debugLog(message: string): void {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-project] ${message}`)
}
}
const CODE_ASSIST_METADATA = {
ideType: "IDE_UNSPECIFIED",
platform: "PLATFORM_UNSPECIFIED",
pluginType: "GEMINI",
} as const
function extractProjectId(
project: string | { id: string } | undefined
): string | undefined {
if (!project) return undefined
if (typeof project === "string") {
const trimmed = project.trim()
return trimmed || undefined
}
if (typeof project === "object" && "id" in project) {
const id = project.id
if (typeof id === "string") {
const trimmed = id.trim()
return trimmed || undefined
}
}
return undefined
}
function getDefaultTierId(allowedTiers?: AntigravityUserTier[]): string | undefined {
if (!allowedTiers || allowedTiers.length === 0) return undefined
for (const tier of allowedTiers) {
if (tier?.isDefault) return tier.id
}
return allowedTiers[0]?.id
}
function isFreeTier(tierId: string | undefined): boolean {
if (!tierId) return true // No tier = assume free tier (default behavior)
const lower = tierId.toLowerCase()
return lower.startsWith("free") // covers "free", "free-tier", etc.
}
function wait(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms))
}
async function callLoadCodeAssistAPI(
accessToken: string,
projectId?: string
): Promise<AntigravityLoadCodeAssistResponse | null> {
const metadata: Record<string, string> = { ...CODE_ASSIST_METADATA }
if (projectId) metadata.duetProject = projectId
const requestBody: Record<string, unknown> = { metadata }
if (projectId) requestBody.cloudaicompanionProject = projectId
const headers: Record<string, string> = {
Authorization: `Bearer ${accessToken}`,
"Content-Type": "application/json",
"User-Agent": ANTIGRAVITY_HEADERS["User-Agent"],
"X-Goog-Api-Client": ANTIGRAVITY_HEADERS["X-Goog-Api-Client"],
"Client-Metadata": ANTIGRAVITY_HEADERS["Client-Metadata"],
}
for (const baseEndpoint of ANTIGRAVITY_ENDPOINT_FALLBACKS) {
const url = `${baseEndpoint}/${ANTIGRAVITY_API_VERSION}:loadCodeAssist`
debugLog(`[loadCodeAssist] Trying: ${url}`)
try {
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify(requestBody),
})
if (!response.ok) {
debugLog(`[loadCodeAssist] Failed: ${response.status} ${response.statusText}`)
continue
}
const data = (await response.json()) as AntigravityLoadCodeAssistResponse
debugLog(`[loadCodeAssist] Success: ${JSON.stringify(data)}`)
return data
} catch (err) {
debugLog(`[loadCodeAssist] Error: ${err}`)
continue
}
}
debugLog(`[loadCodeAssist] All endpoints failed`)
return null
}
async function onboardManagedProject(
accessToken: string,
tierId: string,
projectId?: string,
attempts = 10,
delayMs = 5000
): Promise<string | undefined> {
debugLog(`[onboardUser] Starting with tierId=${tierId}, projectId=${projectId || "none"}`)
const metadata: Record<string, string> = { ...CODE_ASSIST_METADATA }
if (projectId) metadata.duetProject = projectId
const requestBody: Record<string, unknown> = { tierId, metadata }
if (!isFreeTier(tierId)) {
if (!projectId) {
debugLog(`[onboardUser] Non-FREE tier requires projectId, returning undefined`)
return undefined
}
requestBody.cloudaicompanionProject = projectId
}
const headers: Record<string, string> = {
Authorization: `Bearer ${accessToken}`,
"Content-Type": "application/json",
"User-Agent": ANTIGRAVITY_HEADERS["User-Agent"],
"X-Goog-Api-Client": ANTIGRAVITY_HEADERS["X-Goog-Api-Client"],
"Client-Metadata": ANTIGRAVITY_HEADERS["Client-Metadata"],
}
debugLog(`[onboardUser] Request body: ${JSON.stringify(requestBody)}`)
for (let attempt = 0; attempt < attempts; attempt++) {
debugLog(`[onboardUser] Attempt ${attempt + 1}/${attempts}`)
for (const baseEndpoint of ANTIGRAVITY_ENDPOINT_FALLBACKS) {
const url = `${baseEndpoint}/${ANTIGRAVITY_API_VERSION}:onboardUser`
debugLog(`[onboardUser] Trying: ${url}`)
try {
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify(requestBody),
})
if (!response.ok) {
const errorText = await response.text().catch(() => "")
debugLog(`[onboardUser] Failed: ${response.status} ${response.statusText} - ${errorText}`)
continue
}
const payload = (await response.json()) as AntigravityOnboardUserPayload
debugLog(`[onboardUser] Response: ${JSON.stringify(payload)}`)
const managedProjectId = payload.response?.cloudaicompanionProject?.id
if (payload.done && managedProjectId) {
debugLog(`[onboardUser] Success! Got managed project ID: ${managedProjectId}`)
return managedProjectId
}
if (payload.done && projectId) {
debugLog(`[onboardUser] Done but no managed ID, using original: ${projectId}`)
return projectId
}
debugLog(`[onboardUser] Not done yet, payload.done=${payload.done}`)
} catch (err) {
debugLog(`[onboardUser] Error: ${err}`)
continue
}
}
if (attempt < attempts - 1) {
debugLog(`[onboardUser] Waiting ${delayMs}ms before next attempt...`)
await wait(delayMs)
}
}
debugLog(`[onboardUser] All attempts exhausted, returning undefined`)
return undefined
}
export async function fetchProjectContext(
accessToken: string
): Promise<AntigravityProjectContext> {
debugLog(`[fetchProjectContext] Starting...`)
const cached = projectContextCache.get(accessToken)
if (cached) {
debugLog(`[fetchProjectContext] Returning cached result: ${JSON.stringify(cached)}`)
return cached
}
const loadPayload = await callLoadCodeAssistAPI(accessToken)
// If loadCodeAssist returns a project ID, use it directly
if (loadPayload?.cloudaicompanionProject) {
const projectId = extractProjectId(loadPayload.cloudaicompanionProject)
debugLog(`[fetchProjectContext] loadCodeAssist returned project: ${projectId}`)
if (projectId) {
const result: AntigravityProjectContext = { cloudaicompanionProject: projectId }
projectContextCache.set(accessToken, result)
debugLog(`[fetchProjectContext] Using loadCodeAssist project ID: ${projectId}`)
return result
}
}
// No project ID from loadCodeAssist - try with fallback project ID
if (!loadPayload) {
debugLog(`[fetchProjectContext] loadCodeAssist returned null, trying with fallback project ID`)
const fallbackPayload = await callLoadCodeAssistAPI(accessToken, ANTIGRAVITY_DEFAULT_PROJECT_ID)
const fallbackProjectId = extractProjectId(fallbackPayload?.cloudaicompanionProject)
if (fallbackProjectId) {
const result: AntigravityProjectContext = { cloudaicompanionProject: fallbackProjectId }
projectContextCache.set(accessToken, result)
debugLog(`[fetchProjectContext] Using fallback project ID: ${fallbackProjectId}`)
return result
}
debugLog(`[fetchProjectContext] Fallback also failed, using default: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
}
const currentTierId = loadPayload.currentTier?.id
debugLog(`[fetchProjectContext] currentTier: ${currentTierId}, allowedTiers: ${JSON.stringify(loadPayload.allowedTiers)}`)
if (currentTierId && !isFreeTier(currentTierId)) {
// PAID tier - still use fallback if no project provided
debugLog(`[fetchProjectContext] PAID tier detected (${currentTierId}), using fallback: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
}
const defaultTierId = getDefaultTierId(loadPayload.allowedTiers)
const tierId = defaultTierId ?? "free-tier"
debugLog(`[fetchProjectContext] Resolved tierId: ${tierId}`)
if (!isFreeTier(tierId)) {
debugLog(`[fetchProjectContext] Non-FREE tier (${tierId}) without project, using fallback: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
}
// FREE tier - onboard to get server-assigned managed project ID
debugLog(`[fetchProjectContext] FREE tier detected (${tierId}), calling onboardUser...`)
const managedProjectId = await onboardManagedProject(accessToken, tierId)
if (managedProjectId) {
const result: AntigravityProjectContext = {
cloudaicompanionProject: managedProjectId,
managedProjectId,
}
projectContextCache.set(accessToken, result)
debugLog(`[fetchProjectContext] Got managed project ID: ${managedProjectId}`)
return result
}
debugLog(`[fetchProjectContext] Failed to get managed project ID, using fallback: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
}
export function clearProjectContextCache(accessToken?: string): void {
if (accessToken) {
projectContextCache.delete(accessToken)
} else {
projectContextCache.clear()
}
}
export function invalidateProjectContextByRefreshToken(_refreshToken: string): void {
projectContextCache.clear()
debugLog(`[invalidateProjectContextByRefreshToken] Cleared all project context cache due to refresh token invalidation`)
}
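Taken together, `getDefaultTierId` and `isFreeTier` above decide whether the FREE-tier onboarding path is taken. A standalone sketch of those rules (the types are pared down from `AntigravityUserTier`, and `startsWith("free")` already subsumes the source's explicit `"free"`/`"free-tier"` checks):

```typescript
// Pared-down sketch of the tier-resolution rules; names mirror the source.
interface UserTier { id: string; isDefault?: boolean }

// Pick the tier flagged isDefault, else fall back to the first allowed tier.
function getDefaultTierId(allowed?: UserTier[]): string | undefined {
  if (!allowed || allowed.length === 0) return undefined
  return allowed.find((t) => t?.isDefault)?.id ?? allowed[0]?.id
}

// A missing tier ID is treated as free; any "free*" ID counts as free.
function isFreeTier(tierId: string | undefined): boolean {
  if (!tierId) return true
  return tierId.toLowerCase().startsWith("free")
}

// Example: "free-tier" is the flagged default, so onboarding would run.
const tier = getDefaultTierId([{ id: "standard" }, { id: "free-tier", isDefault: true }])
// tier === "free-tier"; isFreeTier(tier) === true
```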


@@ -1,224 +0,0 @@
import { describe, it, expect } from "bun:test"
import { ANTIGRAVITY_SYSTEM_PROMPT } from "./constants"
import { injectSystemPrompt, wrapRequestBody } from "./request"
describe("injectSystemPrompt", () => {
describe("basic injection", () => {
it("should inject system prompt into empty request", () => {
// #given
const wrappedBody = {
project: "test-project",
model: "gemini-3-pro-preview",
request: {} as Record<string, unknown>,
}
// #when
injectSystemPrompt(wrappedBody)
// #then
const req = wrappedBody.request as { systemInstruction?: { role: string; parts: Array<{ text: string }> } }
expect(req).toHaveProperty("systemInstruction")
expect(req.systemInstruction?.role).toBe("user")
expect(req.systemInstruction?.parts).toBeDefined()
expect(Array.isArray(req.systemInstruction?.parts)).toBe(true)
expect(req.systemInstruction?.parts?.length).toBe(1)
expect(req.systemInstruction?.parts?.[0]?.text).toContain("<identity>")
})
it("should inject system prompt with correct structure", () => {
// #given
const wrappedBody = {
project: "test-project",
model: "gemini-3-pro-preview",
request: {
contents: [{ role: "user", parts: [{ text: "Hello" }] }],
} as Record<string, unknown>,
}
// #when
injectSystemPrompt(wrappedBody)
// #then
const req = wrappedBody.request as { systemInstruction?: { role: string; parts: Array<{ text: string }> } }
expect(req.systemInstruction).toEqual({
role: "user",
parts: [{ text: ANTIGRAVITY_SYSTEM_PROMPT }],
})
})
})
describe("prepend to existing systemInstruction", () => {
it("should prepend Antigravity prompt before existing systemInstruction parts", () => {
// #given
const wrappedBody = {
project: "test-project",
model: "gemini-3-pro-preview",
request: {
systemInstruction: {
role: "user",
parts: [{ text: "existing system prompt" }],
},
} as Record<string, unknown>,
}
// #when
injectSystemPrompt(wrappedBody)
// #then
const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
expect(req.systemInstruction?.parts?.length).toBe(2)
expect(req.systemInstruction?.parts?.[0]?.text).toBe(ANTIGRAVITY_SYSTEM_PROMPT)
expect(req.systemInstruction?.parts?.[1]?.text).toBe("existing system prompt")
})
it("should preserve multiple existing parts when prepending", () => {
// #given
const wrappedBody = {
project: "test-project",
model: "gemini-3-pro-preview",
request: {
systemInstruction: {
role: "user",
parts: [
{ text: "first existing part" },
{ text: "second existing part" },
],
},
} as Record<string, unknown>,
}
// #when
injectSystemPrompt(wrappedBody)
// #then
const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
expect(req.systemInstruction?.parts?.length).toBe(3)
expect(req.systemInstruction?.parts?.[0]?.text).toBe(ANTIGRAVITY_SYSTEM_PROMPT)
expect(req.systemInstruction?.parts?.[1]?.text).toBe("first existing part")
expect(req.systemInstruction?.parts?.[2]?.text).toBe("second existing part")
})
})
describe("duplicate prevention", () => {
it("should not inject if <identity> marker already exists in first part", () => {
// #given
const wrappedBody = {
project: "test-project",
model: "gemini-3-pro-preview",
request: {
systemInstruction: {
role: "user",
parts: [{ text: "some prompt with <identity> marker already" }],
},
} as Record<string, unknown>,
}
// #when
injectSystemPrompt(wrappedBody)
// #then
const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
expect(req.systemInstruction?.parts?.length).toBe(1)
expect(req.systemInstruction?.parts?.[0]?.text).toBe("some prompt with <identity> marker already")
})
it("should inject if <identity> marker is not in first part", () => {
// #given
const wrappedBody = {
project: "test-project",
model: "gemini-3-pro-preview",
request: {
systemInstruction: {
role: "user",
parts: [
{ text: "not the identity marker" },
{ text: "some <identity> in second part" },
],
},
} as Record<string, unknown>,
}
// #when
injectSystemPrompt(wrappedBody)
// #then
const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
expect(req.systemInstruction?.parts?.length).toBe(3)
expect(req.systemInstruction?.parts?.[0]?.text).toBe(ANTIGRAVITY_SYSTEM_PROMPT)
})
})
describe("edge cases", () => {
it("should handle request without request field", () => {
// #given
const wrappedBody: { project: string; model: string; request?: Record<string, unknown> } = {
project: "test-project",
model: "gemini-3-pro-preview",
}
// #when
injectSystemPrompt(wrappedBody)
// #then - should not throw, should not modify
expect(wrappedBody).not.toHaveProperty("systemInstruction")
})
it("should handle request with non-object request field", () => {
// #given
const wrappedBody: { project: string; model: string; request?: unknown } = {
project: "test-project",
model: "gemini-3-pro-preview",
request: "not an object",
}
// #when
injectSystemPrompt(wrappedBody)
// #then - should not throw
})
})
})
describe("wrapRequestBody", () => {
it("should create wrapped body with correct structure", () => {
// #given
const body = {
model: "gemini-3-pro-preview",
contents: [{ role: "user", parts: [{ text: "Hello" }] }],
}
const projectId = "test-project"
const modelName = "gemini-3-pro-preview"
const sessionId = "test-session"
// #when
const result = wrapRequestBody(body, projectId, modelName, sessionId)
// #then
expect(result).toHaveProperty("project", projectId)
expect(result).toHaveProperty("model", "gemini-3-pro-preview")
expect(result).toHaveProperty("request")
expect(result.request).toHaveProperty("sessionId", sessionId)
expect(result.request).toHaveProperty("contents")
expect(result.request.contents).toEqual(body.contents)
expect(result.request).not.toHaveProperty("model") // model should be moved to outer
})
it("should include systemInstruction in wrapped request", () => {
// #given
const body = {
model: "gemini-3-pro-preview",
contents: [{ role: "user", parts: [{ text: "Hello" }] }],
}
const projectId = "test-project"
const modelName = "gemini-3-pro-preview"
const sessionId = "test-session"
// #when
const result = wrapRequestBody(body, projectId, modelName, sessionId)
// #then
const req = result.request as { systemInstruction?: { parts: Array<{ text: string }> } }
expect(req).toHaveProperty("systemInstruction")
expect(req.systemInstruction?.parts?.[0]?.text).toContain("<identity>")
})
})


@@ -1,378 +0,0 @@
/**
* Antigravity request transformer.
* Transforms OpenAI-format requests to Antigravity format.
* Does NOT handle tool normalization (handled by tools.ts in Task 9).
*/
import {
ANTIGRAVITY_API_VERSION,
ANTIGRAVITY_ENDPOINT_FALLBACKS,
ANTIGRAVITY_HEADERS,
ANTIGRAVITY_SYSTEM_PROMPT,
SKIP_THOUGHT_SIGNATURE_VALIDATOR,
alias2ModelName,
} from "./constants"
import type { AntigravityRequestBody } from "./types"
/**
* Result of request transformation including URL, headers, and body.
*/
export interface TransformedRequest {
/** Transformed URL for Antigravity API */
url: string
/** Request headers including Authorization and Antigravity-specific headers */
headers: Record<string, string>
/** Transformed request body in Antigravity format */
body: AntigravityRequestBody
/** Whether this is a streaming request */
streaming: boolean
}
/**
* Build Antigravity-specific request headers.
* Includes Authorization, User-Agent, X-Goog-Api-Client, and Client-Metadata.
*
* @param accessToken - OAuth access token for Authorization header
* @returns Headers object with all required Antigravity headers
*/
export function buildRequestHeaders(accessToken: string): Record<string, string> {
return {
Authorization: `Bearer ${accessToken}`,
"Content-Type": "application/json",
"User-Agent": ANTIGRAVITY_HEADERS["User-Agent"],
"X-Goog-Api-Client": ANTIGRAVITY_HEADERS["X-Goog-Api-Client"],
"Client-Metadata": ANTIGRAVITY_HEADERS["Client-Metadata"],
}
}
/**
* Extract model name from request body.
* OpenAI-format requests include model in the body.
*
* @param body - Request body that may contain a model field
* @returns Model name or undefined if not found
*/
export function extractModelFromBody(
body: Record<string, unknown>
): string | undefined {
const model = body.model
if (typeof model === "string" && model.trim()) {
return model.trim()
}
return undefined
}
/**
* Extract model name from URL path.
* Handles Google Generative Language API format: /models/{model}:{action}
*
* @param url - Request URL to parse
* @returns Model name or undefined if not found
*/
export function extractModelFromUrl(url: string): string | undefined {
// Match Google's API format: /models/gemini-3-pro:generateContent
const match = url.match(/\/models\/([^:]+):/)
if (match && match[1]) {
return match[1]
}
return undefined
}
/**
* Determine the action type from the URL path.
* E.g., generateContent, streamGenerateContent
*
* @param url - Request URL to parse
* @returns Action name or undefined if not found
*/
export function extractActionFromUrl(url: string): string | undefined {
// Match Google's API format: /models/gemini-3-pro:generateContent
const match = url.match(/\/models\/[^:]+:(\w+)/)
if (match && match[1]) {
return match[1]
}
return undefined
}
/**
* Check if a URL is targeting Google's Generative Language API.
*
* @param url - URL to check
* @returns true if this is a Google Generative Language API request
*/
export function isGenerativeLanguageRequest(url: string): boolean {
return url.includes("generativelanguage.googleapis.com")
}
/**
* Build Antigravity API URL for the given action.
*
* @param baseEndpoint - Base Antigravity endpoint URL (from fallbacks)
* @param action - API action (e.g., generateContent, streamGenerateContent)
* @param streaming - Whether to append SSE query parameter
* @returns Formatted Antigravity API URL
*/
export function buildAntigravityUrl(
baseEndpoint: string,
action: string,
streaming: boolean
): string {
const query = streaming ? "?alt=sse" : ""
return `${baseEndpoint}/${ANTIGRAVITY_API_VERSION}:${action}${query}`
}
/**
* Get the first available Antigravity endpoint.
* Can be used with fallback logic in fetch.ts.
*
* @returns Default (first) Antigravity endpoint
*/
export function getDefaultEndpoint(): string {
return ANTIGRAVITY_ENDPOINT_FALLBACKS[0]
}
function generateRequestId(): string {
return `agent-${crypto.randomUUID()}`
}
/**
* Inject ANTIGRAVITY_SYSTEM_PROMPT into request.systemInstruction.
* Prepends Antigravity prompt before any existing systemInstruction.
* Prevents duplicate injection by checking for <identity> marker.
*
* CRITICAL: Modifies wrappedBody.request.systemInstruction (NOT outer body!)
*
* @param wrappedBody - The wrapped request body with request field
*/
export function injectSystemPrompt(wrappedBody: { request?: unknown }): void {
if (!wrappedBody.request || typeof wrappedBody.request !== "object") {
return
}
const req = wrappedBody.request as Record<string, unknown>
// Check for duplicate injection - if <identity> marker exists in first part, skip
if (req.systemInstruction && typeof req.systemInstruction === "object") {
const existing = req.systemInstruction as Record<string, unknown>
if (existing.parts && Array.isArray(existing.parts)) {
const firstPart = existing.parts[0]
if (firstPart && typeof firstPart === "object" && "text" in firstPart) {
const text = (firstPart as { text: string }).text
if (text.includes("<identity>")) {
return // Already injected, skip
}
}
}
}
// Build new parts array - Antigravity prompt first, then existing parts
const newParts: Array<{ text: string }> = [{ text: ANTIGRAVITY_SYSTEM_PROMPT }]
// Append existing parts after the Antigravity prompt, if systemInstruction exists with parts
if (req.systemInstruction && typeof req.systemInstruction === "object") {
const existing = req.systemInstruction as Record<string, unknown>
if (existing.parts && Array.isArray(existing.parts)) {
for (const part of existing.parts) {
if (part && typeof part === "object" && "text" in part) {
newParts.push(part as { text: string })
}
}
}
}
// Set the new systemInstruction
req.systemInstruction = {
role: "user",
parts: newParts,
}
}
export function wrapRequestBody(
body: Record<string, unknown>,
projectId: string,
modelName: string,
sessionId: string
): AntigravityRequestBody {
const requestPayload = { ...body }
delete requestPayload.model
let normalizedModel = modelName
if (normalizedModel.startsWith("antigravity-")) {
normalizedModel = normalizedModel.substring("antigravity-".length)
}
const apiModel = alias2ModelName(normalizedModel)
debugLog(`[MODEL] input="${modelName}" → normalized="${normalizedModel}" → api="${apiModel}"`)
const requestObj = {
...requestPayload,
sessionId,
toolConfig: {
...(requestPayload.toolConfig as Record<string, unknown> || {}),
functionCallingConfig: {
mode: "VALIDATED",
},
},
}
delete (requestObj as Record<string, unknown>).safetySettings
const wrappedBody: AntigravityRequestBody = {
project: projectId,
model: apiModel,
userAgent: "antigravity",
requestType: "agent",
requestId: generateRequestId(),
request: requestObj,
}
injectSystemPrompt(wrappedBody)
return wrappedBody
}
interface ContentPart {
functionCall?: Record<string, unknown>
thoughtSignature?: string
[key: string]: unknown
}
interface ContentBlock {
role?: string
parts?: ContentPart[]
[key: string]: unknown
}
function debugLog(message: string): void {
if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.log(`[antigravity-request] ${message}`)
}
}
export function injectThoughtSignatureIntoFunctionCalls(
body: Record<string, unknown>,
signature: string | undefined
): Record<string, unknown> {
// Always use skip validator as fallback (CLIProxyAPI approach)
const effectiveSignature = signature || SKIP_THOUGHT_SIGNATURE_VALIDATOR
debugLog(`[TSIG][INJECT] signature=${effectiveSignature.substring(0, 30)}... (${signature ? "provided" : "default"})`)
debugLog(`[TSIG][INJECT] body keys: ${Object.keys(body).join(", ")}`)
const contents = body.contents as ContentBlock[] | undefined
if (!contents || !Array.isArray(contents)) {
debugLog(`[TSIG][INJECT] No contents array! Has messages: ${!!body.messages}`)
return body
}
debugLog(`[TSIG][INJECT] Found ${contents.length} content blocks`)
let injectedCount = 0
const modifiedContents = contents.map((content) => {
if (!content.parts || !Array.isArray(content.parts)) {
return content
}
const modifiedParts = content.parts.map((part) => {
if (part.functionCall && !part.thoughtSignature) {
injectedCount++
return {
...part,
thoughtSignature: effectiveSignature,
}
}
return part
})
return { ...content, parts: modifiedParts }
})
debugLog(`[TSIG][INJECT] injected signature into ${injectedCount} functionCall(s)`)
return { ...body, contents: modifiedContents }
}
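A self-contained sketch of the injection rule above: every `functionCall` part that lacks a `thoughtSignature` receives the effective signature, while parts that already carry one (and non-call parts) pass through untouched. The types are pared down from the source's `ContentPart`/`ContentBlock`:

```typescript
// Simplified re-implementation of injectThoughtSignatureIntoFunctionCalls for
// illustration; the fallback-to-skip-validator logic is omitted here.
interface Part { functionCall?: object; thoughtSignature?: string; text?: string }
interface Content { role?: string; parts?: Part[] }

function injectSignature(contents: Content[], sig: string): Content[] {
  return contents.map((c) => ({
    ...c,
    parts: c.parts?.map((p) =>
      // Only untagged functionCall parts get the signature.
      p.functionCall && !p.thoughtSignature ? { ...p, thoughtSignature: sig } : p
    ),
  }))
}

const out = injectSignature(
  [{ role: "model", parts: [{ functionCall: { name: "ls" } }, { text: "hi" }] }],
  "sig-123",
)
// out[0].parts[0] gains thoughtSignature; the text part is unchanged.
```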
/**
* Detect if request is for streaming.
* Checks both action name and request body for stream flag.
*
* @param url - Request URL
* @param body - Request body
* @returns true if streaming is requested
*/
export function isStreamingRequest(
url: string,
body: Record<string, unknown>
): boolean {
// Check URL action
const action = extractActionFromUrl(url)
if (action === "streamGenerateContent") {
return true
}
// Check body for stream flag
if (body.stream === true) {
return true
}
return false
}
export interface TransformRequestOptions {
url: string
body: Record<string, unknown>
accessToken: string
projectId: string
sessionId: string
modelName?: string
endpointOverride?: string
thoughtSignature?: string
}
export function transformRequest(options: TransformRequestOptions): TransformedRequest {
const {
url,
body,
accessToken,
projectId,
sessionId,
modelName,
endpointOverride,
thoughtSignature,
} = options
const effectiveModel =
modelName || extractModelFromBody(body) || extractModelFromUrl(url) || "gemini-3-pro-high"
const streaming = isStreamingRequest(url, body)
const action = streaming ? "streamGenerateContent" : "generateContent"
const endpoint = endpointOverride || getDefaultEndpoint()
const transformedUrl = buildAntigravityUrl(endpoint, action, streaming)
const headers = buildRequestHeaders(accessToken)
if (streaming) {
headers["Accept"] = "text/event-stream"
}
const bodyWithSignature = injectThoughtSignatureIntoFunctionCalls(body, thoughtSignature)
const wrappedBody = wrapRequestBody(bodyWithSignature, projectId, effectiveModel, sessionId)
return {
url: transformedUrl,
headers,
body: wrappedBody,
streaming,
}
}
/**
* Prepare request headers for streaming responses.
* Adds Accept header for SSE format.
*
* @param headers - Existing headers object
* @returns Headers with streaming support
*/
export function addStreamingHeaders(
headers: Record<string, string>
): Record<string, string> {
return {
...headers,
Accept: "text/event-stream",
}
}
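Putting the URL helpers above together: parse the model and action out of an incoming Generative Language URL, then rebuild the Antigravity-style URL. The base endpoint and API version below are placeholders, not the real `ANTIGRAVITY_ENDPOINT_FALLBACKS`/`ANTIGRAVITY_API_VERSION` values:

```typescript
// Regexes mirror the source helpers; endpoint/version are illustrative only.
function extractModelFromUrl(url: string): string | undefined {
  return url.match(/\/models\/([^:]+):/)?.[1]
}
function extractActionFromUrl(url: string): string | undefined {
  return url.match(/\/models\/[^:]+:(\w+)/)?.[1]
}
function buildAntigravityUrl(base: string, version: string, action: string, streaming: boolean): string {
  // Streaming requests get the SSE query parameter, as in the source.
  return `${base}/${version}:${action}${streaming ? "?alt=sse" : ""}`
}

const incoming = "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro:streamGenerateContent"
const model = extractModelFromUrl(incoming)   // "gemini-3-pro"
const action = extractActionFromUrl(incoming) // "streamGenerateContent"
const finalUrl = buildAntigravityUrl("https://example.invalid", "v1internal", action!, true)
// finalUrl === "https://example.invalid/v1internal:streamGenerateContent?alt=sse"
```

Note that the rebuilt URL drops the model from the path entirely; in this scheme the model travels in the wrapped request body instead.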


@@ -1,598 +0,0 @@
/**
* Antigravity Response Handler
* Transforms Antigravity/Gemini API responses to OpenAI-compatible format
*
* Key responsibilities:
* - Non-streaming response transformation
* - SSE streaming response transformation (buffered - see transformStreamingResponse)
* - Error response handling with retry-after extraction
* - Usage metadata extraction from x-antigravity-* headers
*/
import type { AntigravityError, AntigravityUsage } from "./types"
/**
* Usage metadata extracted from Antigravity response headers
*/
export interface AntigravityUsageMetadata {
cachedContentTokenCount?: number
totalTokenCount?: number
promptTokenCount?: number
candidatesTokenCount?: number
}
/**
* Transform result with response and metadata
*/
export interface TransformResult {
response: Response
usage?: AntigravityUsageMetadata
retryAfterMs?: number
error?: AntigravityError
}
/**
* Extract usage metadata from Antigravity response headers
*
* Antigravity sets these headers:
* - x-antigravity-cached-content-token-count
* - x-antigravity-total-token-count
* - x-antigravity-prompt-token-count
* - x-antigravity-candidates-token-count
*
* @param headers - Response headers
* @returns Usage metadata if found
*/
export function extractUsageFromHeaders(headers: Headers): AntigravityUsageMetadata | undefined {
const cached = headers.get("x-antigravity-cached-content-token-count")
const total = headers.get("x-antigravity-total-token-count")
const prompt = headers.get("x-antigravity-prompt-token-count")
const candidates = headers.get("x-antigravity-candidates-token-count")
// Return undefined if no usage headers found
if (!cached && !total && !prompt && !candidates) {
return undefined
}
const usage: AntigravityUsageMetadata = {}
if (cached) {
const parsed = parseInt(cached, 10)
if (!isNaN(parsed)) {
usage.cachedContentTokenCount = parsed
}
}
if (total) {
const parsed = parseInt(total, 10)
if (!isNaN(parsed)) {
usage.totalTokenCount = parsed
}
}
if (prompt) {
const parsed = parseInt(prompt, 10)
if (!isNaN(parsed)) {
usage.promptTokenCount = parsed
}
}
if (candidates) {
const parsed = parseInt(candidates, 10)
if (!isNaN(parsed)) {
usage.candidatesTokenCount = parsed
}
}
return Object.keys(usage).length > 0 ? usage : undefined
}
/**
* Extract retry-after value from error response
*
* Antigravity returns retry info in error.details array:
* {
* error: {
* details: [{
* "@type": "type.googleapis.com/google.rpc.RetryInfo",
* "retryDelay": "5.123s"
* }]
* }
* }
*
* Also checks standard Retry-After header.
*
* @param response - Response object (for headers)
* @param errorBody - Parsed error body (optional)
* @returns Retry after value in milliseconds, or undefined
*/
export function extractRetryAfterMs(
response: Response,
errorBody?: Record<string, unknown>,
): number | undefined {
// First, check standard Retry-After header
const retryAfterHeader = response.headers.get("Retry-After")
if (retryAfterHeader) {
const seconds = parseFloat(retryAfterHeader)
if (!isNaN(seconds) && seconds > 0) {
return Math.ceil(seconds * 1000)
}
}
// Check retry-after-ms header (set by some transformers)
const retryAfterMsHeader = response.headers.get("retry-after-ms")
if (retryAfterMsHeader) {
const ms = parseInt(retryAfterMsHeader, 10)
if (!isNaN(ms) && ms > 0) {
return ms
}
}
// Check error body for RetryInfo
if (!errorBody) {
return undefined
}
const error = errorBody.error as Record<string, unknown> | undefined
if (!error?.details || !Array.isArray(error.details)) {
return undefined
}
const retryInfo = (error.details as Array<Record<string, unknown>>).find(
(detail) => detail["@type"] === "type.googleapis.com/google.rpc.RetryInfo",
)
if (!retryInfo?.retryDelay || typeof retryInfo.retryDelay !== "string") {
return undefined
}
// Parse retryDelay format: "5.123s"
const match = retryInfo.retryDelay.match(/^([\d.]+)s$/)
if (match?.[1]) {
const seconds = parseFloat(match[1])
if (!isNaN(seconds) && seconds > 0) {
return Math.ceil(seconds * 1000)
}
}
return undefined
}
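The `"5.123s"` retryDelay parsing above can be exercised in isolation. A standalone sketch (hypothetical helper name, same regex-and-ceil logic):

```typescript
// Hypothetical standalone helper mirroring the RetryInfo "5.123s" parsing above.
function retryDelayToMs(delay: string): number | undefined {
  const match = delay.match(/^([\d.]+)s$/)
  if (!match?.[1]) return undefined
  const seconds = parseFloat(match[1])
  // Round up so callers never retry earlier than the server asked
  return !isNaN(seconds) && seconds > 0 ? Math.ceil(seconds * 1000) : undefined
}
```

Malformed or zero delays yield `undefined`, which lets callers fall through to a default backoff.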
/**
* Parse error response body and extract useful details
*
* @param text - Raw response text
* @returns Parsed error or undefined
*/
export function parseErrorBody(text: string): AntigravityError | undefined {
try {
const parsed = JSON.parse(text) as Record<string, unknown>
// Handle error wrapper
if (parsed.error && typeof parsed.error === "object") {
const errorObj = parsed.error as Record<string, unknown>
return {
message: String(errorObj.message || "Unknown error"),
type: errorObj.type ? String(errorObj.type) : undefined,
code: errorObj.code as string | number | undefined,
}
}
// Handle direct error message
if (parsed.message && typeof parsed.message === "string") {
return {
message: parsed.message,
type: parsed.type ? String(parsed.type) : undefined,
code: parsed.code as string | number | undefined,
}
}
return undefined
} catch {
// If not valid JSON, return generic error
return {
message: text || "Unknown error",
}
}
}
/**
* Transform a non-streaming Antigravity response to OpenAI-compatible format
*
* For non-streaming responses:
* - Parses the response body
* - Unwraps the `response` field if present (Antigravity wraps responses)
* - Extracts usage metadata from headers
* - Handles error responses
*
* Note: Does NOT handle thinking block extraction (Task 10)
* Note: Does NOT handle tool normalization (Task 9)
*
* @param response - Fetch Response object
* @returns TransformResult with transformed response and metadata
*/
export async function transformResponse(response: Response): Promise<TransformResult> {
const headers = new Headers(response.headers)
const usage = extractUsageFromHeaders(headers)
// Handle error responses
if (!response.ok) {
const text = await response.text()
const error = parseErrorBody(text)
// Parse the full body for retry-after extraction (parseErrorBody drops the
// `details` array that RetryInfo lives in)
let errorBody: Record<string, unknown> | undefined
try {
errorBody = JSON.parse(text) as Record<string, unknown>
} catch {
errorBody = { error: { message: text } }
}
const retryAfterMs = extractRetryAfterMs(response, errorBody)
// Set retry headers if found
if (retryAfterMs) {
headers.set("Retry-After", String(Math.ceil(retryAfterMs / 1000)))
headers.set("retry-after-ms", String(retryAfterMs))
}
return {
response: new Response(text, {
status: response.status,
statusText: response.statusText,
headers,
}),
usage,
retryAfterMs,
error,
}
}
// Handle successful response
const contentType = response.headers.get("content-type") ?? ""
const isJson = contentType.includes("application/json")
if (!isJson) {
// Return non-JSON responses as-is
return { response, usage }
}
try {
const text = await response.text()
const parsed = JSON.parse(text) as Record<string, unknown>
// Antigravity wraps response in { response: { ... } }
// Unwrap if present
let transformedBody: unknown = parsed
if (parsed.response !== undefined) {
transformedBody = parsed.response
}
return {
response: new Response(JSON.stringify(transformedBody), {
status: response.status,
statusText: response.statusText,
headers,
}),
usage,
}
} catch {
// If parsing fails, return original response
return { response, usage }
}
}
/**
* Transform a single SSE data line
*
* Antigravity SSE format:
* data: { "response": { ... actual data ... } }
*
* OpenAI SSE format:
* data: { ... actual data ... }
*
* @param line - SSE data line
* @returns Transformed line
*/
function transformSseLine(line: string): string {
if (!line.startsWith("data:")) {
return line
}
const json = line.slice(5).trim()
if (!json || json === "[DONE]") {
return line
}
try {
const parsed = JSON.parse(json) as Record<string, unknown>
// Unwrap { response: { ... } } wrapper
if (parsed.response !== undefined) {
return `data: ${JSON.stringify(parsed.response)}`
}
return line
} catch {
// If parsing fails, return original line
return line
}
}
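Standalone, the data-line unwrapping behaves like this (a self-contained copy for illustration, under a hypothetical name):

```typescript
// Self-contained copy of the SSE data-line unwrapping described above.
function unwrapSseLine(line: string): string {
  if (!line.startsWith("data:")) return line
  const json = line.slice(5).trim()
  if (!json || json === "[DONE]") return line
  try {
    const parsed = JSON.parse(json) as Record<string, unknown>
    // Unwrap the Antigravity { response: { ... } } envelope
    return parsed.response !== undefined ? `data: ${JSON.stringify(parsed.response)}` : line
  } catch {
    return line
  }
}
```

Control lines (`event:`, `id:`, empty lines) and the `[DONE]` sentinel pass through untouched.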
/**
* Transform SSE streaming payload
*
* Processes each line in the SSE stream:
* - Unwraps { response: { ... } } wrapper from data lines
* - Preserves other SSE control lines (event:, id:, retry:, empty lines)
*
* Note: Does NOT extract thinking blocks (Task 10)
*
* @param payload - Raw SSE payload text
* @returns Transformed SSE payload
*/
export function transformStreamingPayload(payload: string): string {
return payload
.split("\n")
.map(transformSseLine)
.join("\n")
}
function createSseTransformStream(): TransformStream<Uint8Array, Uint8Array> {
const decoder = new TextDecoder()
const encoder = new TextEncoder()
let buffer = ""
return new TransformStream({
transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true })
const lines = buffer.split("\n")
buffer = lines.pop() || ""
for (const line of lines) {
const transformed = transformSseLine(line)
controller.enqueue(encoder.encode(transformed + "\n"))
}
},
flush(controller) {
if (buffer) {
const transformed = transformSseLine(buffer)
controller.enqueue(encoder.encode(transformed))
}
},
})
}
/**
* Transforms a streaming SSE response from Antigravity to OpenAI format.
*
* Uses TransformStream to process SSE chunks incrementally as they arrive.
* Each line is transformed immediately and yielded to the client.
*
* @param response - The SSE response from Antigravity API
* @returns TransformResult with transformed streaming response
*/
export async function transformStreamingResponse(response: Response): Promise<TransformResult> {
const headers = new Headers(response.headers)
const usage = extractUsageFromHeaders(headers)
// Handle error responses
if (!response.ok) {
const text = await response.text()
const error = parseErrorBody(text)
let errorBody: Record<string, unknown> | undefined
try {
errorBody = JSON.parse(text) as Record<string, unknown>
} catch {
errorBody = { error: { message: text } }
}
const retryAfterMs = extractRetryAfterMs(response, errorBody)
if (retryAfterMs) {
headers.set("Retry-After", String(Math.ceil(retryAfterMs / 1000)))
headers.set("retry-after-ms", String(retryAfterMs))
}
return {
response: new Response(text, {
status: response.status,
statusText: response.statusText,
headers,
}),
usage,
retryAfterMs,
error,
}
}
// Check content type
const contentType = response.headers.get("content-type") ?? ""
const isEventStream =
contentType.includes("text/event-stream") || response.url.includes("alt=sse")
if (!isEventStream) {
// Not SSE: unwrap the JSON body inline, mirroring transformResponse
const text = await response.text()
try {
const parsed = JSON.parse(text) as Record<string, unknown>
let transformedBody: unknown = parsed
if (parsed.response !== undefined) {
transformedBody = parsed.response
}
return {
response: new Response(JSON.stringify(transformedBody), {
status: response.status,
statusText: response.statusText,
headers,
}),
usage,
}
} catch {
return {
response: new Response(text, {
status: response.status,
statusText: response.statusText,
headers,
}),
usage,
}
}
}
if (!response.body) {
return { response, usage }
}
headers.delete("content-length")
headers.delete("content-encoding")
headers.set("content-type", "text/event-stream; charset=utf-8")
const transformStream = createSseTransformStream()
const transformedBody = response.body.pipeThrough(transformStream)
return {
response: new Response(transformedBody, {
status: response.status,
statusText: response.statusText,
headers,
}),
usage,
}
}
/**
* Check if response is a streaming SSE response
*
* @param response - Fetch Response object
* @returns True if response is SSE stream
*/
export function isStreamingResponse(response: Response): boolean {
const contentType = response.headers.get("content-type") ?? ""
return contentType.includes("text/event-stream") || response.url.includes("alt=sse")
}
/**
* Extract thought signature from SSE payload text
*
* Looks for thoughtSignature in SSE events:
* data: { "response": { "candidates": [{ "content": { "parts": [{ "thoughtSignature": "..." }] } }] } }
*
* Returns the last found signature (most recent in the stream).
*
* @param payload - SSE payload text
* @returns Last thought signature if found
*/
export function extractSignatureFromSsePayload(payload: string): string | undefined {
const lines = payload.split("\n")
let lastSignature: string | undefined
for (const line of lines) {
if (!line.startsWith("data:")) {
continue
}
const json = line.slice(5).trim()
if (!json || json === "[DONE]") {
continue
}
try {
const parsed = JSON.parse(json) as Record<string, unknown>
// Check in response wrapper (Antigravity format)
const response = (parsed.response || parsed) as Record<string, unknown>
const candidates = response.candidates as Array<Record<string, unknown>> | undefined
if (candidates && Array.isArray(candidates)) {
for (const candidate of candidates) {
const content = candidate.content as Record<string, unknown> | undefined
const parts = content?.parts as Array<Record<string, unknown>> | undefined
if (parts && Array.isArray(parts)) {
for (const part of parts) {
const sig = (part.thoughtSignature || part.thought_signature) as string | undefined
if (sig && typeof sig === "string") {
lastSignature = sig
}
}
}
}
}
} catch {
// Continue to next line if parsing fails
}
}
return lastSignature
}
/**
* Extract usage from SSE payload text
*
* Looks for usageMetadata in SSE events:
* data: { "usageMetadata": { ... } }
*
* @param payload - SSE payload text
* @returns Usage if found
*/
export function extractUsageFromSsePayload(payload: string): AntigravityUsage | undefined {
const lines = payload.split("\n")
for (const line of lines) {
if (!line.startsWith("data:")) {
continue
}
const json = line.slice(5).trim()
if (!json || json === "[DONE]") {
continue
}
try {
const parsed = JSON.parse(json) as Record<string, unknown>
// Check for usageMetadata at top level
if (parsed.usageMetadata && typeof parsed.usageMetadata === "object") {
const meta = parsed.usageMetadata as Record<string, unknown>
return {
prompt_tokens: typeof meta.promptTokenCount === "number" ? meta.promptTokenCount : 0,
completion_tokens:
typeof meta.candidatesTokenCount === "number" ? meta.candidatesTokenCount : 0,
total_tokens: typeof meta.totalTokenCount === "number" ? meta.totalTokenCount : 0,
}
}
// Check for usage in response wrapper
if (parsed.response && typeof parsed.response === "object") {
const resp = parsed.response as Record<string, unknown>
if (resp.usageMetadata && typeof resp.usageMetadata === "object") {
const meta = resp.usageMetadata as Record<string, unknown>
return {
prompt_tokens: typeof meta.promptTokenCount === "number" ? meta.promptTokenCount : 0,
completion_tokens:
typeof meta.candidatesTokenCount === "number" ? meta.candidatesTokenCount : 0,
total_tokens: typeof meta.totalTokenCount === "number" ? meta.totalTokenCount : 0,
}
}
}
// Check for standard OpenAI-style usage
if (parsed.usage && typeof parsed.usage === "object") {
const u = parsed.usage as Record<string, unknown>
return {
prompt_tokens: typeof u.prompt_tokens === "number" ? u.prompt_tokens : 0,
completion_tokens: typeof u.completion_tokens === "number" ? u.completion_tokens : 0,
total_tokens: typeof u.total_tokens === "number" ? u.total_tokens : 0,
}
}
} catch {
// Continue to next line if parsing fails
}
}
return undefined
}
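The three usage shapes handled above (top-level `usageMetadata`, wrapped `response.usageMetadata`, OpenAI-style `usage`) all reduce to the same field mapping. A sketch of the Gemini-to-OpenAI conversion, extracted for illustration:

```typescript
// Sketch of the Gemini usageMetadata -> OpenAI usage mapping applied above.
interface OpenAiUsage {
  prompt_tokens: number
  completion_tokens: number
  total_tokens: number
}

function mapUsageMetadata(meta: Record<string, unknown>): OpenAiUsage {
  // Missing or non-numeric fields default to 0, matching the code above
  return {
    prompt_tokens: typeof meta.promptTokenCount === "number" ? meta.promptTokenCount : 0,
    completion_tokens: typeof meta.candidatesTokenCount === "number" ? meta.candidatesTokenCount : 0,
    total_tokens: typeof meta.totalTokenCount === "number" ? meta.totalTokenCount : 0,
  }
}
```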


@@ -1,388 +0,0 @@
import { describe, it, expect, beforeEach, afterEach } from "bun:test"
import { join } from "node:path"
import { homedir } from "node:os"
import { promises as fs } from "node:fs"
import { tmpdir } from "node:os"
import type { AccountStorage } from "./types"
import { getDataDir, getStoragePath, loadAccounts, saveAccounts } from "./storage"
describe("storage", () => {
const testDir = join(tmpdir(), `oh-my-opencode-storage-test-${Date.now()}`)
const testStoragePath = join(testDir, "oh-my-opencode-accounts.json")
const validStorage: AccountStorage = {
version: 1,
accounts: [
{
email: "test@example.com",
tier: "free",
refreshToken: "refresh-token-123",
projectId: "project-123",
accessToken: "access-token-123",
expiresAt: Date.now() + 3600000,
rateLimits: {},
},
],
activeIndex: 0,
}
beforeEach(async () => {
await fs.mkdir(testDir, { recursive: true })
})
afterEach(async () => {
try {
await fs.rm(testDir, { recursive: true, force: true })
} catch {
// ignore cleanup errors
}
})
describe("getDataDir", () => {
it("returns path containing opencode directory", () => {
// #given
// platform is current system
// #when
const result = getDataDir()
// #then
expect(result).toContain("opencode")
})
it("returns XDG_DATA_HOME/opencode when XDG_DATA_HOME is set on non-Windows", () => {
// #given
const originalXdg = process.env.XDG_DATA_HOME
const originalPlatform = process.platform
if (originalPlatform === "win32") {
return
}
try {
process.env.XDG_DATA_HOME = "/custom/data"
// #when
const result = getDataDir()
// #then
expect(result).toBe("/custom/data/opencode")
} finally {
if (originalXdg !== undefined) {
process.env.XDG_DATA_HOME = originalXdg
} else {
delete process.env.XDG_DATA_HOME
}
}
})
it("returns ~/.local/share/opencode when XDG_DATA_HOME is not set on non-Windows", () => {
// #given
const originalXdg = process.env.XDG_DATA_HOME
const originalPlatform = process.platform
if (originalPlatform === "win32") {
return
}
try {
delete process.env.XDG_DATA_HOME
// #when
const result = getDataDir()
// #then
expect(result).toBe(join(homedir(), ".local", "share", "opencode"))
} finally {
if (originalXdg !== undefined) {
process.env.XDG_DATA_HOME = originalXdg
} else {
delete process.env.XDG_DATA_HOME
}
}
})
})
describe("getStoragePath", () => {
it("returns path ending with oh-my-opencode-accounts.json", () => {
// #given
// no setup needed
// #when
const result = getStoragePath()
// #then
expect(result.endsWith("oh-my-opencode-accounts.json")).toBe(true)
expect(result).toContain("opencode")
})
})
describe("loadAccounts", () => {
it("returns parsed storage when file exists and is valid", async () => {
// #given
await fs.writeFile(testStoragePath, JSON.stringify(validStorage), "utf-8")
// #when
const result = await loadAccounts(testStoragePath)
// #then
expect(result).not.toBeNull()
expect(result?.version).toBe(1)
expect(result?.accounts).toHaveLength(1)
expect(result?.accounts[0].email).toBe("test@example.com")
})
it("returns null when file does not exist (ENOENT)", async () => {
// #given
const nonExistentPath = join(testDir, "non-existent.json")
// #when
const result = await loadAccounts(nonExistentPath)
// #then
expect(result).toBeNull()
})
it("returns null when file contains invalid JSON", async () => {
// #given
const invalidJsonPath = join(testDir, "invalid.json")
await fs.writeFile(invalidJsonPath, "{ invalid json }", "utf-8")
// #when
const result = await loadAccounts(invalidJsonPath)
// #then
expect(result).toBeNull()
})
it("returns null when file contains valid JSON but invalid schema", async () => {
// #given
const invalidSchemaPath = join(testDir, "invalid-schema.json")
await fs.writeFile(invalidSchemaPath, JSON.stringify({ foo: "bar" }), "utf-8")
// #when
const result = await loadAccounts(invalidSchemaPath)
// #then
expect(result).toBeNull()
})
it("returns null when accounts is not an array", async () => {
// #given
const invalidAccountsPath = join(testDir, "invalid-accounts.json")
await fs.writeFile(
invalidAccountsPath,
JSON.stringify({ version: 1, accounts: "not-array", activeIndex: 0 }),
"utf-8"
)
// #when
const result = await loadAccounts(invalidAccountsPath)
// #then
expect(result).toBeNull()
})
it("returns null when activeIndex is not a number", async () => {
// #given
const invalidIndexPath = join(testDir, "invalid-index.json")
await fs.writeFile(
invalidIndexPath,
JSON.stringify({ version: 1, accounts: [], activeIndex: "zero" }),
"utf-8"
)
// #when
const result = await loadAccounts(invalidIndexPath)
// #then
expect(result).toBeNull()
})
})
describe("saveAccounts", () => {
it("writes storage to file with proper JSON formatting", async () => {
// #given
// testStoragePath is ready
// #when
await saveAccounts(validStorage, testStoragePath)
// #then
const content = await fs.readFile(testStoragePath, "utf-8")
const parsed = JSON.parse(content)
expect(parsed.version).toBe(1)
expect(parsed.accounts).toHaveLength(1)
expect(parsed.activeIndex).toBe(0)
})
it("creates parent directories if they do not exist", async () => {
// #given
const nestedPath = join(testDir, "nested", "deep", "oh-my-opencode-accounts.json")
// #when
await saveAccounts(validStorage, nestedPath)
// #then
const content = await fs.readFile(nestedPath, "utf-8")
const parsed = JSON.parse(content)
expect(parsed.version).toBe(1)
})
it("overwrites existing file", async () => {
// #given
const existingStorage: AccountStorage = {
version: 1,
accounts: [],
activeIndex: 0,
}
await fs.writeFile(testStoragePath, JSON.stringify(existingStorage), "utf-8")
// #when
await saveAccounts(validStorage, testStoragePath)
// #then
const content = await fs.readFile(testStoragePath, "utf-8")
const parsed = JSON.parse(content)
expect(parsed.accounts).toHaveLength(1)
})
it("uses pretty-printed JSON with 2-space indentation", async () => {
// #given
// testStoragePath is ready
// #when
await saveAccounts(validStorage, testStoragePath)
// #then
const content = await fs.readFile(testStoragePath, "utf-8")
expect(content).toContain("\n")
expect(content).toContain(" ")
})
it("sets restrictive file permissions (0o600) for security", async () => {
// #given
// testStoragePath is ready
// #when
await saveAccounts(validStorage, testStoragePath)
// #then
const stats = await fs.stat(testStoragePath)
const mode = stats.mode & 0o777
expect(mode).toBe(0o600)
})
it("uses atomic write pattern with temp file and rename", async () => {
// #given
// This test verifies that the file is written atomically
// by checking that no partial writes occur
// #when
await saveAccounts(validStorage, testStoragePath)
// #then
// If we can read valid JSON, the atomic write succeeded
const content = await fs.readFile(testStoragePath, "utf-8")
const parsed = JSON.parse(content)
expect(parsed.version).toBe(1)
expect(parsed.accounts).toHaveLength(1)
})
it("cleans up temp file on rename failure", async () => {
// #given
const readOnlyDir = join(testDir, "readonly")
await fs.mkdir(readOnlyDir, { recursive: true })
const readOnlyPath = join(readOnlyDir, "accounts.json")
await fs.writeFile(readOnlyPath, "{}", "utf-8")
await fs.chmod(readOnlyPath, 0o444)
// #when
let didThrow = false
try {
await saveAccounts(validStorage, readOnlyPath)
} catch {
didThrow = true
}
// #then
const files = await fs.readdir(readOnlyDir)
const tempFiles = files.filter((f) => f.includes(".tmp."))
expect(tempFiles).toHaveLength(0)
if (!didThrow) {
console.log("[TEST SKIP] File permissions did not work as expected on this system")
}
// Cleanup
await fs.chmod(readOnlyPath, 0o644)
})
it("uses unique temp filename with pid and timestamp", async () => {
// #given
// We verify this by checking the implementation behavior
// The temp file should include process.pid and Date.now()
// #when
await saveAccounts(validStorage, testStoragePath)
// #then
// File should exist and be valid (temp file was successfully renamed)
const exists = await fs.access(testStoragePath).then(() => true).catch(() => false)
expect(exists).toBe(true)
})
it("handles sequential writes without corruption", async () => {
// #given
const storage1: AccountStorage = {
...validStorage,
accounts: [{ ...validStorage.accounts[0]!, email: "user1@example.com" }],
}
const storage2: AccountStorage = {
...validStorage,
accounts: [{ ...validStorage.accounts[0]!, email: "user2@example.com" }],
}
// #when - sequential writes (concurrent writes are inherently racy)
await saveAccounts(storage1, testStoragePath)
await saveAccounts(storage2, testStoragePath)
// #then - file should contain valid JSON from last write
const content = await fs.readFile(testStoragePath, "utf-8")
const parsed = JSON.parse(content) as AccountStorage
expect(parsed.version).toBe(1)
expect(parsed.accounts[0]?.email).toBe("user2@example.com")
})
})
describe("loadAccounts error handling", () => {
it("re-throws non-ENOENT filesystem errors", async () => {
// #given
const unreadableDir = join(testDir, "unreadable")
await fs.mkdir(unreadableDir, { recursive: true })
const unreadablePath = join(unreadableDir, "accounts.json")
await fs.writeFile(unreadablePath, JSON.stringify(validStorage), "utf-8")
await fs.chmod(unreadablePath, 0o000)
// #when
let thrownError: Error | null = null
let result: unknown = undefined
try {
result = await loadAccounts(unreadablePath)
} catch (error) {
thrownError = error as Error
}
// #then
if (thrownError) {
expect((thrownError as NodeJS.ErrnoException).code).not.toBe("ENOENT")
} else {
console.log("[TEST SKIP] File permissions did not work as expected on this system, got result:", result)
}
// Cleanup
await fs.chmod(unreadablePath, 0o644)
})
})
})


@@ -1,74 +0,0 @@
import { promises as fs } from "node:fs"
import { join, dirname } from "node:path"
import type { AccountStorage } from "./types"
import { getDataDir as getSharedDataDir } from "../../shared/data-path"
export function getDataDir(): string {
return join(getSharedDataDir(), "opencode")
}
export function getStoragePath(): string {
return join(getDataDir(), "oh-my-opencode-accounts.json")
}
export async function loadAccounts(path?: string): Promise<AccountStorage | null> {
const storagePath = path ?? getStoragePath()
try {
const content = await fs.readFile(storagePath, "utf-8")
const data = JSON.parse(content) as unknown
if (!isValidAccountStorage(data)) {
return null
}
return data
} catch (error) {
const errorCode = (error as NodeJS.ErrnoException).code
if (errorCode === "ENOENT") {
return null
}
if (error instanceof SyntaxError) {
return null
}
throw error
}
}
export async function saveAccounts(storage: AccountStorage, path?: string): Promise<void> {
const storagePath = path ?? getStoragePath()
await fs.mkdir(dirname(storagePath), { recursive: true })
const content = JSON.stringify(storage, null, 2)
const tempPath = `${storagePath}.tmp.${process.pid}.${Date.now()}`
await fs.writeFile(tempPath, content, { encoding: "utf-8", mode: 0o600 })
try {
await fs.rename(tempPath, storagePath)
} catch (error) {
await fs.unlink(tempPath).catch(() => {})
throw error
}
}
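The temp-file-plus-rename pattern in saveAccounts generalizes to any small config file. A minimal sketch (hypothetical helper, same failure handling):

```typescript
import { promises as fs } from "node:fs"
import { join } from "node:path"
import { tmpdir } from "node:os"

// Minimal sketch of the temp-file-plus-rename pattern saveAccounts uses above.
// rename() is atomic on POSIX when source and target share a filesystem, so
// concurrent readers never observe a partially written file.
async function atomicWrite(path: string, content: string): Promise<void> {
  const tempPath = `${path}.tmp.${process.pid}.${Date.now()}`
  await fs.writeFile(tempPath, content, { encoding: "utf-8", mode: 0o600 })
  try {
    await fs.rename(tempPath, path)
  } catch (error) {
    // Clean up the orphaned temp file before propagating the failure
    await fs.unlink(tempPath).catch(() => {})
    throw error
  }
}
```

The pid-and-timestamp suffix keeps concurrent writers from clobbering each other's temp files; last rename wins.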
function isValidAccountStorage(data: unknown): data is AccountStorage {
if (typeof data !== "object" || data === null) {
return false
}
const obj = data as Record<string, unknown>
if (typeof obj.version !== "number") {
return false
}
if (!Array.isArray(obj.accounts)) {
return false
}
if (typeof obj.activeIndex !== "number") {
return false
}
return true
}


@@ -1,288 +0,0 @@
/**
* Tests for reasoning_effort and Gemini 3 thinkingLevel support.
*
* Tests the following functions:
* - getModelThinkingConfig()
* - extractThinkingConfig() with reasoning_effort
* - applyThinkingConfigToRequest()
* - budgetToLevel()
*/
import { describe, it, expect } from "bun:test"
import type { AntigravityModelConfig } from "./constants"
import {
getModelThinkingConfig,
extractThinkingConfig,
applyThinkingConfigToRequest,
budgetToLevel,
type ThinkingConfig,
type DeleteThinkingConfig,
} from "./thinking"
// ============================================================================
// getModelThinkingConfig() tests
// ============================================================================
describe("getModelThinkingConfig", () => {
// #given: A model ID that maps to a levels-based thinking config (Gemini 3)
// #when: getModelThinkingConfig is called with google/antigravity-gemini-3-pro-high
// #then: It should return a config with thinkingType: "levels"
it("should return levels config for Gemini 3 model", () => {
const config = getModelThinkingConfig("google/antigravity-gemini-3-pro-high")
expect(config).toBeDefined()
expect(config?.thinkingType).toBe("levels")
expect(config?.levels).toEqual(["low", "high"])
})
// #given: A model ID that maps to a numeric-based thinking config (Gemini 2.5)
// #when: getModelThinkingConfig is called with gemini-2.5-flash
// #then: It should return a config with thinkingType: "numeric"
it("should return numeric config for Gemini 2.5 model", () => {
const config = getModelThinkingConfig("gemini-2.5-flash")
expect(config).toBeDefined()
expect(config?.thinkingType).toBe("numeric")
expect(config?.min).toBe(0)
expect(config?.max).toBe(24576)
expect(config?.zeroAllowed).toBe(true)
})
// #given: A model that doesn't have an exact match but includes "gemini-3"
// #when: getModelThinkingConfig is called
// #then: It should use pattern matching fallback to return levels config
it("should use pattern matching fallback for gemini-3", () => {
const config = getModelThinkingConfig("gemini-3-pro")
expect(config).toBeDefined()
expect(config?.thinkingType).toBe("levels")
expect(config?.levels).toEqual(["low", "high"])
})
// #given: A model that doesn't have an exact match but includes "claude"
// #when: getModelThinkingConfig is called
// #then: It should use pattern matching fallback to return numeric config
it("should use pattern matching fallback for claude models", () => {
const config = getModelThinkingConfig("claude-opus-4-5")
expect(config).toBeDefined()
expect(config?.thinkingType).toBe("numeric")
expect(config?.min).toBe(1024)
expect(config?.max).toBe(200000)
expect(config?.zeroAllowed).toBe(false)
})
// #given: An unknown model
// #when: getModelThinkingConfig is called
// #then: It should return undefined
it("should return undefined for unknown models", () => {
const config = getModelThinkingConfig("unknown-model")
expect(config).toBeUndefined()
})
})
// ============================================================================
// extractThinkingConfig() with reasoning_effort tests
// ============================================================================
describe("extractThinkingConfig with reasoning_effort", () => {
// #given: A request payload with reasoning_effort set to "high"
// #when: extractThinkingConfig is called
// #then: It should return config with thinkingBudget: 24576 and includeThoughts: true
it("should extract reasoning_effort high correctly", () => {
const requestPayload = { reasoning_effort: "high" }
const result = extractThinkingConfig(requestPayload)
expect(result).toEqual({ thinkingBudget: 24576, includeThoughts: true })
})
// #given: A request payload with reasoning_effort set to "low"
// #when: extractThinkingConfig is called
// #then: It should return config with thinkingBudget: 1024 and includeThoughts: true
it("should extract reasoning_effort low correctly", () => {
const requestPayload = { reasoning_effort: "low" }
const result = extractThinkingConfig(requestPayload)
expect(result).toEqual({ thinkingBudget: 1024, includeThoughts: true })
})
// #given: A request payload with reasoning_effort set to "none"
// #when: extractThinkingConfig is called
// #then: It should return { deleteThinkingConfig: true } (special marker)
it("should extract reasoning_effort none as delete marker", () => {
const requestPayload = { reasoning_effort: "none" }
const result = extractThinkingConfig(requestPayload)
expect(result as unknown).toEqual({ deleteThinkingConfig: true })
})
// #given: A request payload with reasoning_effort set to "medium"
// #when: extractThinkingConfig is called
// #then: It should return config with thinkingBudget: 8192
it("should extract reasoning_effort medium correctly", () => {
const requestPayload = { reasoning_effort: "medium" }
const result = extractThinkingConfig(requestPayload)
expect(result).toEqual({ thinkingBudget: 8192, includeThoughts: true })
})
// #given: A request payload with reasoning_effort in extraBody (not main payload)
// #when: extractThinkingConfig is called
// #then: It should still extract and return the correct config
it("should extract reasoning_effort from extraBody", () => {
const requestPayload = {}
const extraBody = { reasoning_effort: "high" }
const result = extractThinkingConfig(requestPayload, undefined, extraBody)
expect(result).toEqual({ thinkingBudget: 24576, includeThoughts: true })
})
// #given: A request payload without reasoning_effort
// #when: extractThinkingConfig is called
// #then: It should return undefined (existing behavior unchanged)
it("should return undefined when reasoning_effort not present", () => {
const requestPayload = { model: "gemini-2.5-flash" }
const result = extractThinkingConfig(requestPayload)
expect(result).toBeUndefined()
})
})
// ============================================================================
// budgetToLevel() tests
// ============================================================================
describe("budgetToLevel", () => {
// #given: A thinking budget of 24576 and a Gemini 3 model
// #when: budgetToLevel is called
// #then: It should return "high"
it("should convert budget 24576 to level high for Gemini 3", () => {
const level = budgetToLevel(24576, "gemini-3-pro")
expect(level).toBe("high")
})
// #given: A thinking budget of 1024 and a Gemini 3 model
// #when: budgetToLevel is called
// #then: It should return "low"
it("should convert budget 1024 to level low for Gemini 3", () => {
const level = budgetToLevel(1024, "gemini-3-pro")
expect(level).toBe("low")
})
// #given: A thinking budget that doesn't match any predefined level
// #when: budgetToLevel is called
// #then: It should return the highest available level
it("should return highest level for unknown budget", () => {
const level = budgetToLevel(99999, "gemini-3-pro")
expect(level).toBe("high")
})
})
// ============================================================================
// applyThinkingConfigToRequest() tests
// ============================================================================
describe("applyThinkingConfigToRequest", () => {
// #given: A request body with generationConfig and Gemini 3 model with high budget
// #when: applyThinkingConfigToRequest is called with ThinkingConfig
// #then: It should set thinkingLevel to "high" (lowercase) and NOT set thinkingBudget
it("should set thinkingLevel for Gemini 3 model", () => {
const requestBody: Record<string, unknown> = {
request: {
generationConfig: {},
},
}
const config: ThinkingConfig = { thinkingBudget: 24576, includeThoughts: true }
applyThinkingConfigToRequest(requestBody, "gemini-3-pro", config)
const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
expect(thinkingConfig.thinkingLevel).toBe("high")
expect(thinkingConfig.thinkingBudget).toBeUndefined()
expect(thinkingConfig.include_thoughts).toBe(true)
})
// #given: A request body with generationConfig and Gemini 2.5 model with high budget
// #when: applyThinkingConfigToRequest is called with ThinkingConfig
// #then: It should set thinkingBudget to 24576 and NOT set thinkingLevel
it("should set thinkingBudget for Gemini 2.5 model", () => {
const requestBody: Record<string, unknown> = {
request: {
generationConfig: {},
},
}
const config: ThinkingConfig = { thinkingBudget: 24576, includeThoughts: true }
applyThinkingConfigToRequest(requestBody, "gemini-2.5-flash", config)
const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
expect(thinkingConfig.thinkingBudget).toBe(24576)
expect(thinkingConfig.thinkingLevel).toBeUndefined()
expect(thinkingConfig.include_thoughts).toBe(true)
})
// #given: A request body with existing thinkingConfig
// #when: applyThinkingConfigToRequest is called with deleteThinkingConfig: true
// #then: It should remove the thinkingConfig entirely
it("should remove thinkingConfig when delete marker is set", () => {
const requestBody: Record<string, unknown> = {
request: {
generationConfig: {
thinkingConfig: {
thinkingBudget: 16000,
include_thoughts: true,
},
},
},
}
applyThinkingConfigToRequest(requestBody, "gemini-3-pro", { deleteThinkingConfig: true })
const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
expect(genConfig.thinkingConfig).toBeUndefined()
})
// #given: A request body without request.generationConfig
// #when: applyThinkingConfigToRequest is called
// #then: It should not modify the body (graceful handling)
it("should handle missing generationConfig gracefully", () => {
const requestBody: Record<string, unknown> = {}
applyThinkingConfigToRequest(requestBody, "gemini-2.5-flash", {
thinkingBudget: 24576,
includeThoughts: true,
})
expect(requestBody.request).toBeUndefined()
})
// #given: A request body and an unknown model
// #when: applyThinkingConfigToRequest is called
// #then: It should not set any thinking config (graceful handling)
it("should handle unknown model gracefully", () => {
const requestBody: Record<string, unknown> = {
request: {
generationConfig: {},
},
}
applyThinkingConfigToRequest(requestBody, "unknown-model", {
thinkingBudget: 24576,
includeThoughts: true,
})
const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
expect(genConfig.thinkingConfig).toBeUndefined()
})
// #given: A request body with Gemini 3 and budget that maps to "low" level
// #when: applyThinkingConfigToRequest is called with uppercase level mapping
// #then: It should convert to lowercase ("low")
it("should convert uppercase level to lowercase", () => {
const requestBody: Record<string, unknown> = {
request: {
generationConfig: {},
},
}
const config: ThinkingConfig = { thinkingBudget: 1024, includeThoughts: true }
applyThinkingConfigToRequest(requestBody, "gemini-3-pro", config)
const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
expect(thinkingConfig.thinkingLevel).toBe("low")
expect(thinkingConfig.thinkingLevel).not.toBe("LOW")
})
})


@@ -1,755 +0,0 @@
/**
* Antigravity Thinking Block Handler (Gemini only)
*
* Handles extraction and transformation of thinking/reasoning blocks
* from Gemini responses. Thinking blocks contain the model's internal
* reasoning process, available in `-high` model variants.
*
* Key responsibilities:
* - Extract thinking blocks from Gemini response format
* - Detect thinking-capable model variants (`-high` suffix)
* - Format thinking blocks for OpenAI-compatible output
*
* Note: This is Gemini-only. Claude models are NOT handled by Antigravity.
*/
import {
normalizeModelId,
ANTIGRAVITY_MODEL_CONFIGS,
REASONING_EFFORT_BUDGET_MAP,
type AntigravityModelConfig,
} from "./constants"
/**
* Represents a single thinking/reasoning block extracted from Gemini response
*/
export interface ThinkingBlock {
/** The thinking/reasoning text content */
text: string
/** Optional signature for signed thinking blocks (required for multi-turn) */
signature?: string
/** Index of the thinking block in sequence */
index?: number
}
/**
* Raw part structure from Gemini response candidates
*/
export interface GeminiPart {
/** Text content of the part */
text?: string
/** Whether this part is a thinking/reasoning block */
thought?: boolean
/** Signature for signed thinking blocks */
thoughtSignature?: string
/** Type field for Anthropic-style format */
type?: string
/** Signature field for Anthropic-style format */
signature?: string
}
/**
* Gemini response candidate structure
*/
export interface GeminiCandidate {
/** Content containing parts */
content?: {
/** Role of the content (e.g., "model", "assistant") */
role?: string
/** Array of content parts */
parts?: GeminiPart[]
}
/** Index of the candidate */
index?: number
}
/**
* Gemini response structure for thinking block extraction
*/
export interface GeminiResponse {
/** Response ID */
id?: string
/** Array of response candidates */
candidates?: GeminiCandidate[]
/** Direct content (some responses use this instead of candidates) */
content?: Array<{
type?: string
text?: string
signature?: string
}>
/** Model used for response */
model?: string
}
/**
* Result of thinking block extraction
*/
export interface ThinkingExtractionResult {
/** Extracted thinking blocks */
thinkingBlocks: ThinkingBlock[]
/** Combined thinking text for convenience */
combinedThinking: string
/** Whether any thinking blocks were found */
hasThinking: boolean
}
/**
* Default thinking budget in tokens for thinking-enabled models
*/
export const DEFAULT_THINKING_BUDGET = 16000
/**
* Check if a model variant should include thinking blocks
*
* Returns true for model variants with `-high` suffix, which have
* extended thinking capability enabled.
*
* Examples:
* - `gemini-3-pro-high` → true
* - `gemini-2.5-pro-high` → true
* - `gemini-3-pro-preview` → false
* - `gemini-2.5-pro` → false
*
* @param model - Model identifier string
* @returns True if model should include thinking blocks
*/
export function shouldIncludeThinking(model: string): boolean {
if (!model || typeof model !== "string") {
return false
}
const lowerModel = model.toLowerCase()
// Check for -high suffix (primary indicator of thinking capability)
if (lowerModel.endsWith("-high")) {
return true
}
// Also check for explicit thinking in model name
if (lowerModel.includes("thinking")) {
return true
}
return false
}
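As a minimal standalone sketch of the suffix check above (a mirror for illustration, not the exported helper itself):

```typescript
// Illustrative mirror of shouldIncludeThinking's detection logic.
function hasThinkingSuffix(model: string): boolean {
  const lower = model.toLowerCase()
  return lower.endsWith("-high") || lower.includes("thinking")
}

console.log(hasThinkingSuffix("gemini-3-pro-high")) // true
console.log(hasThinkingSuffix("gemini-2.5-pro"))    // false
```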
/**
* Check if a model is thinking-capable (broader check)
*
* This is a broader check than shouldIncludeThinking - it detects models
* that have thinking capability, even if not explicitly requesting thinking output.
*
* @param model - Model identifier string
* @returns True if model supports thinking/reasoning
*/
export function isThinkingCapableModel(model: string): boolean {
if (!model || typeof model !== "string") {
return false
}
const lowerModel = model.toLowerCase()
return (
lowerModel.includes("thinking") ||
lowerModel.includes("gemini-3") ||
lowerModel.endsWith("-high")
)
}
/**
* Check if a part is a thinking/reasoning block
*
* Detects both Gemini-style (thought: true) and Anthropic-style
* (type: "thinking" or type: "reasoning") formats.
*
* @param part - Content part to check
* @returns True if part is a thinking block
*/
function isThinkingPart(part: GeminiPart): boolean {
// Gemini-style: thought flag
if (part.thought === true) {
return true
}
// Anthropic-style: type field
if (part.type === "thinking" || part.type === "reasoning") {
return true
}
return false
}
/**
* Check if a thinking part has a valid signature
*
* Signatures are required for multi-turn conversations with Claude models.
* Gemini uses `thoughtSignature`, Anthropic uses `signature`.
*
* @param part - Thinking part to check
* @returns True if part has valid signature
*/
function hasValidSignature(part: GeminiPart): boolean {
// Gemini-style signature
if (part.thought === true && part.thoughtSignature) {
return true
}
// Anthropic-style signature
if ((part.type === "thinking" || part.type === "reasoning") && part.signature) {
return true
}
return false
}
/**
* Extract thinking blocks from a Gemini response
*
* Parses the response structure to identify and extract all thinking/reasoning
* content. Supports both Gemini-style (thought: true) and Anthropic-style
* (type: "thinking") formats.
*
* @param response - Gemini response object
* @returns Extraction result with thinking blocks and metadata
*/
export function extractThinkingBlocks(response: GeminiResponse): ThinkingExtractionResult {
const thinkingBlocks: ThinkingBlock[] = []
// Handle candidates array (standard Gemini format)
if (response.candidates && Array.isArray(response.candidates)) {
for (const candidate of response.candidates) {
const parts = candidate.content?.parts
if (!parts || !Array.isArray(parts)) {
continue
}
for (let i = 0; i < parts.length; i++) {
const part = parts[i]
if (!part || typeof part !== "object") {
continue
}
if (isThinkingPart(part)) {
const block: ThinkingBlock = {
text: part.text || "",
index: thinkingBlocks.length,
}
// Extract signature if present
if (part.thought === true && part.thoughtSignature) {
block.signature = part.thoughtSignature
} else if (part.signature) {
block.signature = part.signature
}
thinkingBlocks.push(block)
}
}
}
}
// Handle direct content array (Anthropic-style response)
if (response.content && Array.isArray(response.content)) {
for (let i = 0; i < response.content.length; i++) {
const item = response.content[i]
if (!item || typeof item !== "object") {
continue
}
if (item.type === "thinking" || item.type === "reasoning") {
thinkingBlocks.push({
text: item.text || "",
signature: item.signature,
index: thinkingBlocks.length,
})
}
}
}
// Combine all thinking text
const combinedThinking = thinkingBlocks.map((b) => b.text).join("\n\n")
return {
thinkingBlocks,
combinedThinking,
hasThinking: thinkingBlocks.length > 0,
}
}
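The core of the extraction can be sketched standalone with assumed sample data — filter parts flagged `thought: true`, then carry the text and `thoughtSignature` along:

```typescript
// Minimal sketch of the candidates-path extraction (sample data is assumed).
interface Part { text?: string; thought?: boolean; thoughtSignature?: string }

const parts: Part[] = [
  { text: "internal reasoning", thought: true, thoughtSignature: "sig-1" },
  { text: "final answer" },
]
// Keep only thinking parts; preserve signature and assign a running index.
const blocks = parts
  .filter((p) => p.thought === true)
  .map((p, i) => ({ text: p.text ?? "", signature: p.thoughtSignature, index: i }))

console.log(blocks.length)       // 1
console.log(blocks[0].signature) // "sig-1"
```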
/**
* Format thinking blocks for OpenAI-compatible output
*
* Converts Gemini thinking block format to OpenAI's expected structure.
* OpenAI expects thinking content as special message blocks or annotations.
*
* Output format:
* ```
* [
* { type: "reasoning", text: "thinking content...", signature?: "..." },
* ...
* ]
* ```
*
* @param thinking - Array of thinking blocks to format
* @returns OpenAI-compatible formatted array
*/
export function formatThinkingForOpenAI(
thinking: ThinkingBlock[],
): Array<{ type: "reasoning"; text: string; signature?: string }> {
if (!thinking || !Array.isArray(thinking) || thinking.length === 0) {
return []
}
return thinking.map((block) => {
const formatted: { type: "reasoning"; text: string; signature?: string } = {
type: "reasoning",
text: block.text || "",
}
if (block.signature) {
formatted.signature = block.signature
}
return formatted
})
}
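The mapping to the OpenAI-style shape amounts to renaming plus conditional signature carry-over, sketched here with assumed input:

```typescript
// Sketch: thinking blocks → OpenAI-style reasoning entries (input data is assumed).
const thinking: Array<{ text?: string; signature?: string }> = [
  { text: "step 1", signature: "sig" },
  { text: "step 2" },
]
const formatted = thinking.map((b) => ({
  type: "reasoning" as const,
  text: b.text ?? "",
  // Only attach signature when present, matching the conditional in the helper above.
  ...(b.signature ? { signature: b.signature } : {}),
}))
```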
/**
* Transform thinking parts in a candidate to OpenAI format
*
* Modifies candidate content parts to use OpenAI-style reasoning format
* while preserving the rest of the response structure.
*
* @param candidate - Gemini candidate to transform
* @returns Transformed candidate with reasoning-formatted thinking
*/
export function transformCandidateThinking(candidate: GeminiCandidate): GeminiCandidate {
if (!candidate || typeof candidate !== "object") {
return candidate
}
const content = candidate.content
if (!content || typeof content !== "object" || !Array.isArray(content.parts)) {
return candidate
}
const thinkingTexts: string[] = []
const transformedParts = content.parts.map((part) => {
if (part && typeof part === "object" && part.thought === true) {
thinkingTexts.push(part.text || "")
// Transform to reasoning format, dropping the Gemini-specific `thought` flag
// (spreading `thought: undefined` would keep the key with an undefined value)
const { thought: _thought, ...rest } = part
return {
...rest,
type: "reasoning" as const,
}
}
return part
})
const result: GeminiCandidate & { reasoning_content?: string } = {
...candidate,
content: { ...content, parts: transformedParts },
}
// Add combined reasoning content for convenience
if (thinkingTexts.length > 0) {
result.reasoning_content = thinkingTexts.join("\n\n")
}
return result
}
/**
* Transform Anthropic-style thinking blocks to reasoning format
*
* Converts `type: "thinking"` blocks to `type: "reasoning"` for consistency.
*
* @param content - Array of content blocks
* @returns Transformed content array
*/
export function transformAnthropicThinking(
content: Array<{ type?: string; text?: string; signature?: string }>,
): Array<{ type?: string; text?: string; signature?: string }> {
if (!content || !Array.isArray(content)) {
return content
}
return content.map((block) => {
if (block && typeof block === "object" && block.type === "thinking") {
return {
type: "reasoning",
text: block.text || "",
...(block.signature ? { signature: block.signature } : {}),
}
}
return block
})
}
/**
* Filter out unsigned thinking blocks
*
* Claude API requires signed thinking blocks for multi-turn conversations.
* This function removes thinking blocks without valid signatures.
*
* @param parts - Array of content parts
* @returns Filtered array without unsigned thinking blocks
*/
export function filterUnsignedThinkingBlocks(parts: GeminiPart[]): GeminiPart[] {
if (!parts || !Array.isArray(parts)) {
return parts
}
return parts.filter((part) => {
if (!part || typeof part !== "object") {
return true
}
// If it's a thinking part, only keep it if signed
if (isThinkingPart(part)) {
return hasValidSignature(part)
}
// Keep all non-thinking parts
return true
})
}
/**
* Transform entire response thinking parts
*
* Main transformation function that handles both Gemini-style and
* Anthropic-style thinking blocks in a response.
*
* @param response - Response object to transform
* @returns Transformed response with standardized reasoning format
*/
export function transformResponseThinking(response: GeminiResponse): GeminiResponse {
if (!response || typeof response !== "object") {
return response
}
const result: GeminiResponse = { ...response }
// Transform candidates (Gemini-style)
if (Array.isArray(result.candidates)) {
result.candidates = result.candidates.map(transformCandidateThinking)
}
// Transform direct content (Anthropic-style)
if (Array.isArray(result.content)) {
result.content = transformAnthropicThinking(result.content)
}
return result
}
/**
* Thinking configuration for requests
*/
export interface ThinkingConfig {
/** Token budget for thinking/reasoning */
thinkingBudget?: number
/** Whether to include thoughts in response */
includeThoughts?: boolean
}
/**
* Normalize thinking configuration
*
* Ensures thinkingConfig is valid: includeThoughts only allowed when budget > 0.
*
* @param config - Raw thinking configuration
* @returns Normalized configuration or undefined
*/
export function normalizeThinkingConfig(config: unknown): ThinkingConfig | undefined {
if (!config || typeof config !== "object") {
return undefined
}
const record = config as Record<string, unknown>
const budgetRaw = record.thinkingBudget ?? record.thinking_budget
const includeRaw = record.includeThoughts ?? record.include_thoughts
const thinkingBudget =
typeof budgetRaw === "number" && Number.isFinite(budgetRaw) ? budgetRaw : undefined
const includeThoughts = typeof includeRaw === "boolean" ? includeRaw : undefined
const enableThinking = thinkingBudget !== undefined && thinkingBudget > 0
const finalInclude = enableThinking ? (includeThoughts ?? false) : false
// Return undefined if no meaningful config
if (
!enableThinking &&
finalInclude === false &&
thinkingBudget === undefined &&
includeThoughts === undefined
) {
return undefined
}
const normalized: ThinkingConfig = {}
if (thinkingBudget !== undefined) {
normalized.thinkingBudget = thinkingBudget
}
// finalInclude is always a boolean at this point, so assign it unconditionally
normalized.includeThoughts = finalInclude
return normalized
}
/**
* Extract thinking configuration from request payload
*
* Supports both Gemini-style thinkingConfig and Anthropic-style thinking options.
* Also supports reasoning_effort parameter which maps to thinking budget/level.
*
* @param requestPayload - Request body
* @param generationConfig - Generation config from request
* @param extraBody - Extra body options
* @returns Extracted thinking configuration or undefined
*/
export function extractThinkingConfig(
requestPayload: Record<string, unknown>,
generationConfig?: Record<string, unknown>,
extraBody?: Record<string, unknown>,
): ThinkingConfig | DeleteThinkingConfig | undefined {
// Check for explicit thinkingConfig
const thinkingConfig =
generationConfig?.thinkingConfig ?? extraBody?.thinkingConfig ?? requestPayload.thinkingConfig
if (thinkingConfig && typeof thinkingConfig === "object") {
const config = thinkingConfig as Record<string, unknown>
return {
includeThoughts: Boolean(config.includeThoughts),
thinkingBudget:
typeof config.thinkingBudget === "number" ? config.thinkingBudget : DEFAULT_THINKING_BUDGET,
}
}
// Convert Anthropic-style "thinking" option: { type: "enabled", budgetTokens: N }
const anthropicThinking = extraBody?.thinking ?? requestPayload.thinking
if (anthropicThinking && typeof anthropicThinking === "object") {
const thinking = anthropicThinking as Record<string, unknown>
if (thinking.type === "enabled" || thinking.budgetTokens) {
return {
includeThoughts: true,
thinkingBudget:
typeof thinking.budgetTokens === "number"
? thinking.budgetTokens
: DEFAULT_THINKING_BUDGET,
}
}
}
// Extract reasoning_effort parameter (maps to thinking budget/level)
const reasoningEffort = requestPayload.reasoning_effort ?? extraBody?.reasoning_effort
if (reasoningEffort && typeof reasoningEffort === "string") {
const budget = REASONING_EFFORT_BUDGET_MAP[reasoningEffort]
if (budget !== undefined) {
if (reasoningEffort === "none") {
// Special marker: delete thinkingConfig entirely
return { deleteThinkingConfig: true }
}
return {
includeThoughts: true,
thinkingBudget: budget,
}
}
}
return undefined
}
/**
* Resolve final thinking configuration based on model and context
*
* Handles special cases like Claude models requiring signed thinking blocks
* for multi-turn conversations.
*
* @param userConfig - User-provided thinking configuration
* @param isThinkingModel - Whether model supports thinking
* @param isClaudeModel - Whether model is Claude (guards the signed-thinking requirement; Antigravity itself serves Gemini only)
* @param hasAssistantHistory - Whether conversation has assistant history
* @returns Final thinking configuration
*/
export function resolveThinkingConfig(
userConfig: ThinkingConfig | undefined,
isThinkingModel: boolean,
isClaudeModel: boolean,
hasAssistantHistory: boolean,
): ThinkingConfig | undefined {
// Claude models with history need signed thinking blocks
// Since we can't guarantee signatures, disable thinking
if (isClaudeModel && hasAssistantHistory) {
return { includeThoughts: false, thinkingBudget: 0 }
}
// Enable thinking by default for thinking-capable models
if (isThinkingModel && !userConfig) {
return { includeThoughts: true, thinkingBudget: DEFAULT_THINKING_BUDGET }
}
return userConfig
}
// ============================================================================
// Model Thinking Configuration (Task 2: reasoning_effort and Gemini 3 thinkingLevel)
// ============================================================================
/**
* Get thinking config for a model by normalized ID.
* Uses pattern matching fallback if exact match not found.
*
* @param model - Model identifier string (with or without provider prefix)
* @returns Thinking configuration or undefined if not found
*/
export function getModelThinkingConfig(
model: string,
): AntigravityModelConfig | undefined {
const normalized = normalizeModelId(model)
// Exact match
if (ANTIGRAVITY_MODEL_CONFIGS[normalized]) {
return ANTIGRAVITY_MODEL_CONFIGS[normalized]
}
// Pattern matching fallback for Gemini 3
if (normalized.includes("gemini-3")) {
return {
thinkingType: "levels",
min: 128,
max: 32768,
zeroAllowed: false,
levels: ["low", "high"],
}
}
// Pattern matching fallback for Gemini 2.5
if (normalized.includes("gemini-2.5")) {
return {
thinkingType: "numeric",
min: 0,
max: 24576,
zeroAllowed: true,
}
}
// Pattern matching fallback for Claude via Antigravity
if (normalized.includes("claude")) {
return {
thinkingType: "numeric",
min: 1024,
max: 200000,
zeroAllowed: false,
}
}
return undefined
}
/**
* Type for the delete thinking config marker.
* Used when reasoning_effort is "none" to signal complete removal.
*/
export interface DeleteThinkingConfig {
deleteThinkingConfig: true
}
/**
* Union type for thinking configuration input.
*/
export type ThinkingConfigInput = ThinkingConfig | DeleteThinkingConfig
/**
* Convert thinking budget to closest level string for Gemini 3 models.
*
* @param budget - Thinking budget in tokens
* @param model - Model identifier
* @returns Level string ("low", "high", etc.); falls back to "medium" when the model defines no levels
*/
export function budgetToLevel(budget: number, model: string): string {
const config = getModelThinkingConfig(model)
// Default fallback
if (!config?.levels) {
return "medium"
}
// Map budgets to levels
const budgetMap: Record<number, string> = {
512: "minimal",
1024: "low",
8192: "medium",
24576: "high",
}
// Return matching level or highest available
if (budgetMap[budget]) {
return budgetMap[budget]
}
return config.levels[config.levels.length - 1] || "high"
}
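The budget→level mapping can be sketched standalone (values copied from `budgetToLevel`, with the Gemini 3 fallback levels assumed):

```typescript
// Sketch of the budget→level lookup with highest-level fallback.
const budgetMap: Record<number, string> = {
  512: "minimal",
  1024: "low",
  8192: "medium",
  24576: "high",
}
const levels = ["low", "high"] // assumed Gemini 3 level table
const toLevel = (budget: number) => budgetMap[budget] ?? levels[levels.length - 1]

console.log(toLevel(24576)) // "high"
console.log(toLevel(1024))  // "low"
console.log(toLevel(99999)) // "high" (unknown budget → highest available level)
```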
/**
* Apply thinking config to request body.
*
* CRITICAL: Sets request.generationConfig.thinkingConfig (NOT outer body!)
*
* Handles:
* - Gemini 3: Sets thinkingLevel (string)
* - Gemini 2.5: Sets thinkingBudget (number)
* - Delete marker: Removes thinkingConfig entirely
*
* @param requestBody - Request body to modify (mutates in place)
* @param model - Model identifier
* @param config - Thinking configuration or delete marker
*/
export function applyThinkingConfigToRequest(
requestBody: Record<string, unknown>,
model: string,
config: ThinkingConfigInput,
): void {
// Handle delete marker
if ("deleteThinkingConfig" in config && config.deleteThinkingConfig) {
if (requestBody.request && typeof requestBody.request === "object") {
const req = requestBody.request as Record<string, unknown>
if (req.generationConfig && typeof req.generationConfig === "object") {
const genConfig = req.generationConfig as Record<string, unknown>
delete genConfig.thinkingConfig
}
}
return
}
const modelConfig = getModelThinkingConfig(model)
if (!modelConfig) {
return
}
// Ensure request.generationConfig.thinkingConfig exists
if (!requestBody.request || typeof requestBody.request !== "object") {
return
}
const req = requestBody.request as Record<string, unknown>
if (!req.generationConfig || typeof req.generationConfig !== "object") {
req.generationConfig = {}
}
const genConfig = req.generationConfig as Record<string, unknown>
genConfig.thinkingConfig = {}
const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
thinkingConfig.include_thoughts = true
if (modelConfig.thinkingType === "numeric") {
thinkingConfig.thinkingBudget = (config as ThinkingConfig).thinkingBudget
} else if (modelConfig.thinkingType === "levels") {
const budget = (config as ThinkingConfig).thinkingBudget ?? DEFAULT_THINKING_BUDGET
let level = budgetToLevel(budget, model)
// Convert uppercase to lowercase (think-mode hook sends "HIGH")
level = level.toLowerCase()
thinkingConfig.thinkingLevel = level
}
}
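The critical nesting the function produces can be sketched directly — the config lands under `request.generationConfig.thinkingConfig`, never on the outer body (shape assumed from the code above):

```typescript
// Sketch of the resulting request shape for the Gemini 3 (levels) path.
const body: { request: { generationConfig: Record<string, unknown> } } = {
  request: { generationConfig: {} },
}
body.request.generationConfig.thinkingConfig = {
  include_thoughts: true,
  thinkingLevel: "high", // Gemini 2.5 (numeric) path would set thinkingBudget instead
}
```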



@@ -1,97 +0,0 @@
/**
* Thought Signature Store
*
* Stores and retrieves thought signatures for multi-turn conversations.
* Gemini 3 Pro requires thought_signature on function call content blocks
* in subsequent requests to maintain reasoning continuity.
*
* Key responsibilities:
* - Store the latest thought signature per session
* - Provide signature for injection into function call requests
* - Clear signatures when sessions end
*/
/**
* In-memory store for thought signatures indexed by session ID
*/
const signatureStore = new Map<string, string>()
/**
* In-memory store for session IDs per fetch instance
* Used to maintain consistent sessionId across multi-turn conversations
*/
const sessionIdStore = new Map<string, string>()
/**
* Store a thought signature for a session
*
* @param sessionKey - Unique session identifier (typically fetch instance ID)
* @param signature - The thought signature from model response
*/
export function setThoughtSignature(sessionKey: string, signature: string): void {
if (sessionKey && signature) {
signatureStore.set(sessionKey, signature)
}
}
/**
* Retrieve the stored thought signature for a session
*
* @param sessionKey - Unique session identifier
* @returns The stored signature or undefined if not found
*/
export function getThoughtSignature(sessionKey: string): string | undefined {
return signatureStore.get(sessionKey)
}
/**
* Clear the thought signature for a session
*
* @param sessionKey - Unique session identifier
*/
export function clearThoughtSignature(sessionKey: string): void {
signatureStore.delete(sessionKey)
}
/**
* Store or retrieve a persistent session ID for a fetch instance
*
* @param fetchInstanceId - Unique identifier for the fetch instance
* @param sessionId - Optional session ID to store (if not provided, returns existing or generates new)
* @returns The session ID for this fetch instance
*/
export function getOrCreateSessionId(fetchInstanceId: string, sessionId?: string): string {
if (sessionId) {
sessionIdStore.set(fetchInstanceId, sessionId)
return sessionId
}
const existing = sessionIdStore.get(fetchInstanceId)
if (existing) {
return existing
}
const n = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER)
const newSessionId = `-${n}`
sessionIdStore.set(fetchInstanceId, newSessionId)
return newSessionId
}
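The generated-id branch can be sketched standalone — a `-` prefix followed by a random safe integer:

```typescript
// Sketch of the fallback session-id generation above.
const n = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER)
const sessionId = `-${n}`
console.log(/^-\d+$/.test(sessionId)) // true
```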
/**
* Clear the session ID for a fetch instance
*
* @param fetchInstanceId - Unique identifier for the fetch instance
*/
export function clearSessionId(fetchInstanceId: string): void {
sessionIdStore.delete(fetchInstanceId)
}
/**
* Clear all stored data for a fetch instance (signature + session ID)
*
* @param fetchInstanceId - Unique identifier for the fetch instance
*/
export function clearFetchInstanceData(fetchInstanceId: string): void {
signatureStore.delete(fetchInstanceId)
sessionIdStore.delete(fetchInstanceId)
}


@@ -1,78 +0,0 @@
import { describe, it, expect } from "bun:test"
import { isTokenExpired } from "./token"
import type { AntigravityTokens } from "./types"
describe("Token Expiry with 60-second Buffer", () => {
const createToken = (expiresInSeconds: number): AntigravityTokens => ({
type: "antigravity",
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: expiresInSeconds,
timestamp: Date.now(),
})
it("should NOT be expired if token expires in 2 minutes", () => {
// #given
const twoMinutes = 2 * 60
const token = createToken(twoMinutes)
// #when
const expired = isTokenExpired(token)
// #then
expect(expired).toBe(false)
})
it("should be expired if token expires in 30 seconds", () => {
// #given
const thirtySeconds = 30
const token = createToken(thirtySeconds)
// #when
const expired = isTokenExpired(token)
// #then
expect(expired).toBe(true)
})
it("should be expired at exactly 60 seconds (boundary)", () => {
// #given
const sixtySeconds = 60
const token = createToken(sixtySeconds)
// #when
const expired = isTokenExpired(token)
// #then - at boundary, should trigger refresh
expect(expired).toBe(true)
})
it("should be expired if token already expired", () => {
// #given
const alreadyExpired: AntigravityTokens = {
type: "antigravity",
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: 3600,
timestamp: Date.now() - 4000 * 1000,
}
// #when
const expired = isTokenExpired(alreadyExpired)
// #then
expect(expired).toBe(true)
})
it("should NOT be expired if token has plenty of time", () => {
// #given
const twoHours = 2 * 60 * 60
const token = createToken(twoHours)
// #when
const expired = isTokenExpired(token)
// #then
expect(expired).toBe(false)
})
})


@@ -1,213 +0,0 @@
import {
ANTIGRAVITY_CLIENT_ID,
ANTIGRAVITY_CLIENT_SECRET,
ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS,
GOOGLE_TOKEN_URL,
} from "./constants"
import type {
AntigravityRefreshParts,
AntigravityTokenExchangeResult,
AntigravityTokens,
OAuthErrorPayload,
ParsedOAuthError,
} from "./types"
export class AntigravityTokenRefreshError extends Error {
code?: string
description?: string
status: number
statusText: string
responseBody?: string
constructor(options: {
message: string
code?: string
description?: string
status: number
statusText: string
responseBody?: string
}) {
super(options.message)
this.name = "AntigravityTokenRefreshError"
this.code = options.code
this.description = options.description
this.status = options.status
this.statusText = options.statusText
this.responseBody = options.responseBody
}
get isInvalidGrant(): boolean {
return this.code === "invalid_grant"
}
get isNetworkError(): boolean {
return this.status === 0
}
}
function parseOAuthErrorPayload(text: string | undefined): ParsedOAuthError {
if (!text) {
return {}
}
try {
const payload = JSON.parse(text) as OAuthErrorPayload
let code: string | undefined
if (typeof payload.error === "string") {
code = payload.error
} else if (payload.error && typeof payload.error === "object") {
code = payload.error.status ?? payload.error.code
}
return {
code,
description: payload.error_description,
}
} catch {
return { description: text }
}
}
export function isTokenExpired(tokens: AntigravityTokens): boolean {
const expirationTime = tokens.timestamp + tokens.expires_in * 1000
return Date.now() >= expirationTime - ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
}
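The 60-second buffer arithmetic can be checked standalone with a fixed clock (mirroring the expression above, with the buffer constant assumed to be 60 000 ms):

```typescript
// Sketch of the expiry check with an injected "now" for determinism.
const BUFFER_MS = 60_000
const isExpired = (timestamp: number, expiresInSec: number, now: number) =>
  now >= timestamp + expiresInSec * 1000 - BUFFER_MS

console.log(isExpired(0, 120, 0)) // false: 2 min left, outside the buffer
console.log(isExpired(0, 30, 0))  // true: 30 s left, inside the buffer
console.log(isExpired(0, 60, 0))  // true: exactly at the buffer boundary
```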
const MAX_REFRESH_RETRIES = 3
const INITIAL_RETRY_DELAY_MS = 1000
function calculateRetryDelay(attempt: number): number {
return Math.min(INITIAL_RETRY_DELAY_MS * Math.pow(2, attempt), 10000)
}
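The retry delays this produces are a capped exponential series — 1 s, 2 s, 4 s, 8 s, then clamped at 10 s:

```typescript
// Sketch of the capped exponential backoff schedule above.
const delay = (attempt: number) => Math.min(1000 * 2 ** attempt, 10000)
console.log([0, 1, 2, 3, 4].map(delay)) // [1000, 2000, 4000, 8000, 10000]
```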
function isRetryableError(status: number): boolean {
if (status === 0) return true
if (status === 429) return true
if (status >= 500 && status < 600) return true
return false
}
export async function refreshAccessToken(
refreshToken: string,
clientId: string = ANTIGRAVITY_CLIENT_ID,
clientSecret: string = ANTIGRAVITY_CLIENT_SECRET
): Promise<AntigravityTokenExchangeResult> {
const params = new URLSearchParams({
grant_type: "refresh_token",
refresh_token: refreshToken,
client_id: clientId,
client_secret: clientSecret,
})
let lastError: AntigravityTokenRefreshError | undefined
for (let attempt = 0; attempt <= MAX_REFRESH_RETRIES; attempt++) {
try {
const response = await fetch(GOOGLE_TOKEN_URL, {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: params,
})
if (response.ok) {
const data = (await response.json()) as {
access_token: string
refresh_token?: string
expires_in: number
token_type: string
}
return {
access_token: data.access_token,
refresh_token: data.refresh_token || refreshToken,
expires_in: data.expires_in,
token_type: data.token_type,
}
}
const responseBody = await response.text().catch(() => undefined)
const parsed = parseOAuthErrorPayload(responseBody)
lastError = new AntigravityTokenRefreshError({
message: parsed.description || `Token refresh failed: ${response.status} ${response.statusText}`,
code: parsed.code,
description: parsed.description,
status: response.status,
statusText: response.statusText,
responseBody,
})
if (parsed.code === "invalid_grant") {
throw lastError
}
if (!isRetryableError(response.status)) {
throw lastError
}
if (attempt < MAX_REFRESH_RETRIES) {
const delay = calculateRetryDelay(attempt)
await new Promise((resolve) => setTimeout(resolve, delay))
}
} catch (error) {
if (error instanceof AntigravityTokenRefreshError) {
throw error
}
lastError = new AntigravityTokenRefreshError({
message: error instanceof Error ? error.message : "Network error during token refresh",
status: 0,
statusText: "Network Error",
})
if (attempt < MAX_REFRESH_RETRIES) {
const delay = calculateRetryDelay(attempt)
await new Promise((resolve) => setTimeout(resolve, delay))
}
}
}
throw lastError || new AntigravityTokenRefreshError({
message: "Token refresh failed after all retries",
status: 0,
statusText: "Max Retries Exceeded",
})
}
/**
* Parse a stored token string into its component parts.
* Storage format: `refreshToken|projectId|managedProjectId`
*
* @param stored - The pipe-separated stored token string
* @returns Parsed refresh parts with refreshToken, projectId, and optional managedProjectId
*/
export function parseStoredToken(stored: string): AntigravityRefreshParts {
const parts = stored.split("|")
const [refreshToken, projectId, managedProjectId] = parts
return {
refreshToken: refreshToken || "",
projectId: projectId || undefined,
managedProjectId: managedProjectId || undefined,
}
}
/**
* Format token components for storage.
* Creates a pipe-separated string: `refreshToken|projectId|managedProjectId`
*
* @param refreshToken - The refresh token
* @param projectId - The GCP project ID
* @param managedProjectId - Optional managed project ID for enterprise users
* @returns Formatted string for storage
*/
export function formatTokenForStorage(
refreshToken: string,
projectId: string,
managedProjectId?: string
): string {
return `${refreshToken}|${projectId}|${managedProjectId || ""}`
}
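The two storage helpers above form a round trip through the `refreshToken|projectId|managedProjectId` format; an empty third segment parses back to `undefined`. A standalone sketch with inlined copies of both helpers (the token values are hypothetical):

```typescript
// Inlined copy of formatTokenForStorage
function format(refreshToken: string, projectId: string, managedProjectId?: string): string {
  return `${refreshToken}|${projectId}|${managedProjectId || ""}`
}

// Inlined copy of parseStoredToken
function parse(stored: string) {
  const [refreshToken, projectId, managedProjectId] = stored.split("|")
  return {
    refreshToken: refreshToken || "",
    projectId: projectId || undefined,
    managedProjectId: managedProjectId || undefined,
  }
}

// Hypothetical values — real refresh tokens are opaque Google strings.
const stored = format("1//refresh-abc", "my-project")
console.log(stored)                        // "1//refresh-abc|my-project|"
console.log(parse(stored).managedProjectId) // undefined — empty segment normalizes away
```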


@@ -1,243 +0,0 @@
/**
* Antigravity Tool Normalization
* Converts tools between OpenAI and Gemini formats.
*
* OpenAI format:
* { "type": "function", "function": { "name": "x", "description": "...", "parameters": {...} } }
*
* Gemini format:
* { "functionDeclarations": [{ "name": "x", "description": "...", "parameters": {...} }] }
*
* Note: This is for Gemini models ONLY. Claude models are not supported via Antigravity.
*/
/**
* OpenAI function tool format
*/
export interface OpenAITool {
type: string
function?: {
name: string
description?: string
parameters?: Record<string, unknown>
}
}
/**
* Gemini function declaration format
*/
export interface GeminiFunctionDeclaration {
name: string
description?: string
parameters?: Record<string, unknown>
}
/**
* Gemini tools format (array of functionDeclarations)
*/
export interface GeminiTools {
functionDeclarations: GeminiFunctionDeclaration[]
}
/**
* OpenAI tool call in response
*/
export interface OpenAIToolCall {
id: string
type: "function"
function: {
name: string
arguments: string
}
}
/**
* Gemini function call in response
*/
export interface GeminiFunctionCall {
name: string
args: Record<string, unknown>
}
/**
* Gemini function response format
*/
export interface GeminiFunctionResponse {
name: string
response: Record<string, unknown>
}
/**
* Gemini tool result containing function calls
*/
export interface GeminiToolResult {
functionCall?: GeminiFunctionCall
functionResponse?: GeminiFunctionResponse
}
/**
* Normalize OpenAI-format tools to Gemini format.
* Converts an array of OpenAI tools to Gemini's functionDeclarations format.
*
* - Handles `function` type tools with name, description, parameters
* - Logs warning for unsupported tool types (does NOT silently drop them)
* - Creates a single object with functionDeclarations array
*
* @param tools - Array of OpenAI-format tools
* @returns Gemini-format tools object with functionDeclarations, or undefined if no valid tools
*/
export function normalizeToolsForGemini(
tools: OpenAITool[]
): GeminiTools | undefined {
if (!tools || tools.length === 0) {
return undefined
}
const functionDeclarations: GeminiFunctionDeclaration[] = []
for (const tool of tools) {
if (!tool || typeof tool !== "object") {
continue
}
const toolType = tool.type ?? "function"
if (toolType === "function" && tool.function) {
const declaration: GeminiFunctionDeclaration = {
name: tool.function.name,
}
if (tool.function.description) {
declaration.description = tool.function.description
}
if (tool.function.parameters) {
declaration.parameters = tool.function.parameters
} else {
declaration.parameters = { type: "object", properties: {} }
}
functionDeclarations.push(declaration)
} else if (toolType !== "function" && process.env.ANTIGRAVITY_DEBUG === "1") {
console.warn(
`[antigravity-tools] Unsupported tool type: "${toolType}". Tool will be skipped.`
)
}
}
// Return undefined if no valid function declarations
if (functionDeclarations.length === 0) {
return undefined
}
return { functionDeclarations }
}
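A standalone sketch of the conversion shape `normalizeToolsForGemini` produces, inlined so it runs on its own (the `read_file` tool is a hypothetical example):

```typescript
interface Tool {
  type: string
  function?: { name: string; description?: string; parameters?: Record<string, unknown> }
}

// Mirrors the normalizer above: keep function-type tools, default the
// parameters schema to an empty object schema, wrap in functionDeclarations.
function toGemini(tools: Tool[]) {
  const functionDeclarations = tools
    .filter((t) => t.type === "function" && t.function)
    .map((t) => ({
      name: t.function!.name,
      ...(t.function!.description ? { description: t.function!.description } : {}),
      parameters: t.function!.parameters ?? { type: "object", properties: {} },
    }))
  return functionDeclarations.length > 0 ? { functionDeclarations } : undefined
}

const gemini = toGemini([
  { type: "function", function: { name: "read_file" } }, // no parameters → default schema
])
console.log(JSON.stringify(gemini))
// {"functionDeclarations":[{"name":"read_file","parameters":{"type":"object","properties":{}}}]}
```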
/**
* Convert Gemini tool results (functionCall) back to OpenAI tool_call format.
* Handles both functionCall (request) and functionResponse (result) formats.
*
* Gemini functionCall format:
* { "name": "tool_name", "args": { ... } }
*
* OpenAI tool_call format:
* { "id": "call_xxx", "type": "function", "function": { "name": "tool_name", "arguments": "..." } }
*
* @param results - Array of Gemini tool results containing functionCall or functionResponse
* @returns Array of OpenAI-format tool calls
*/
export function normalizeToolResultsFromGemini(
results: GeminiToolResult[]
): OpenAIToolCall[] {
if (!results || results.length === 0) {
return []
}
const toolCalls: OpenAIToolCall[] = []
let callCounter = 0
for (const result of results) {
// Handle functionCall (tool invocation from model)
if (result.functionCall) {
callCounter++
const toolCall: OpenAIToolCall = {
id: `call_${Date.now()}_${callCounter}`,
type: "function",
function: {
name: result.functionCall.name,
arguments: JSON.stringify(result.functionCall.args ?? {}),
},
}
toolCalls.push(toolCall)
}
}
return toolCalls
}
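The reverse mapping is mechanical: a Gemini `functionCall` carries structured `args`, while OpenAI `tool_call`s carry a JSON *string* in `function.arguments`. A standalone sketch of that shape (the tool name and ID are hypothetical):

```typescript
interface FunctionCall {
  name: string
  args: Record<string, unknown>
}

// Mirrors the conversion above: structured args become a JSON string.
function toToolCall(call: FunctionCall, id: string) {
  return {
    id,
    type: "function" as const,
    function: { name: call.name, arguments: JSON.stringify(call.args ?? {}) },
  }
}

const call = toToolCall({ name: "read_file", args: { path: "src/index.ts" } }, "call_1")
console.log(call.function.arguments) // '{"path":"src/index.ts"}' — a string, not an object
```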
/**
* Convert a single Gemini functionCall to OpenAI tool_call format.
* Useful for streaming responses where each chunk may contain a function call.
*
* @param functionCall - Gemini function call
* @param id - Optional tool call ID (generates one if not provided)
* @returns OpenAI-format tool call
*/
export function convertFunctionCallToToolCall(
functionCall: GeminiFunctionCall,
id?: string
): OpenAIToolCall {
return {
id: id ?? `call_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`,
type: "function",
function: {
name: functionCall.name,
arguments: JSON.stringify(functionCall.args ?? {}),
},
}
}
/**
* Check if a tool array contains any function-type tools.
*
* @param tools - Array of OpenAI-format tools
* @returns true if there are function tools to normalize
*/
export function hasFunctionTools(tools: OpenAITool[]): boolean {
if (!tools || tools.length === 0) {
return false
}
return tools.some((tool) => tool.type === "function" && tool.function)
}
/**
* Extract function declarations from already-normalized Gemini tools.
* Useful when tools may already be in Gemini format.
*
* @param tools - Tools that may be in Gemini or OpenAI format
* @returns Array of function declarations
*/
export function extractFunctionDeclarations(
tools: unknown
): GeminiFunctionDeclaration[] {
if (!tools || typeof tools !== "object") {
return []
}
// Check if already in Gemini format
const geminiTools = tools as Record<string, unknown>
if (
Array.isArray(geminiTools.functionDeclarations) &&
geminiTools.functionDeclarations.length > 0
) {
return geminiTools.functionDeclarations as GeminiFunctionDeclaration[]
}
// Check if it's an array of OpenAI tools
if (Array.isArray(tools)) {
const normalized = normalizeToolsForGemini(tools as OpenAITool[])
return normalized?.functionDeclarations ?? []
}
return []
}


@@ -1,244 +0,0 @@
/**
* Antigravity Auth Type Definitions
* Matches cliproxyapi/sdk/auth/antigravity.go token format exactly
*/
/**
* Token storage format for Antigravity authentication
* Matches Go metadata structure: type, access_token, refresh_token, expires_in, timestamp, email, project_id
*/
export interface AntigravityTokens {
/** Always "antigravity" for this auth type */
type: "antigravity"
/** OAuth access token from Google */
access_token: string
/** OAuth refresh token from Google */
refresh_token: string
/** Token expiration time in seconds */
expires_in: number
/** Unix timestamp in milliseconds when tokens were obtained */
timestamp: number
/** ISO 8601 formatted expiration datetime (optional, for display) */
expired?: string
/** User's email address from Google userinfo */
email?: string
/** GCP project ID from loadCodeAssist API */
project_id?: string
}
/**
* Project context returned from loadCodeAssist API
* Used to get cloudaicompanionProject for API calls
*/
export interface AntigravityProjectContext {
/** GCP project ID for Cloud AI Companion */
cloudaicompanionProject?: string
/** Managed project ID for enterprise users (optional) */
managedProjectId?: string
}
/**
* Metadata for loadCodeAssist API request
*/
export interface AntigravityClientMetadata {
/** IDE type identifier */
ideType: "IDE_UNSPECIFIED" | string
/** Platform identifier */
platform: "PLATFORM_UNSPECIFIED" | string
/** Plugin type - typically "GEMINI" */
pluginType: "GEMINI" | string
}
/**
* Request body for loadCodeAssist API
*/
export interface AntigravityLoadCodeAssistRequest {
metadata: AntigravityClientMetadata
}
export interface AntigravityUserTier {
id?: string
isDefault?: boolean
userDefinedCloudaicompanionProject?: boolean
}
export interface AntigravityLoadCodeAssistResponse {
cloudaicompanionProject?: string | { id: string }
currentTier?: { id?: string }
allowedTiers?: AntigravityUserTier[]
}
export interface AntigravityOnboardUserPayload {
done?: boolean
response?: {
cloudaicompanionProject?: { id?: string }
}
}
/**
* Request body format for Antigravity API calls
* Wraps the actual request with project and model context
*/
export interface AntigravityRequestBody {
project: string
model: string
userAgent: string
requestType: string
requestId: string
request: Record<string, unknown>
}
/**
* Response format from Antigravity API
* Follows OpenAI-compatible structure with Gemini extensions
*/
export interface AntigravityResponse {
/** Response ID */
id?: string
/** Object type (e.g., "chat.completion") */
object?: string
/** Creation timestamp */
created?: number
/** Model used for response */
model?: string
/** Response choices */
choices?: AntigravityResponseChoice[]
/** Token usage statistics */
usage?: AntigravityUsage
/** Error information if request failed */
error?: AntigravityError
}
/**
* Single response choice in Antigravity response
*/
export interface AntigravityResponseChoice {
/** Choice index */
index: number
/** Message content */
message?: {
role: "assistant"
content?: string
tool_calls?: AntigravityToolCall[]
}
/** Delta for streaming responses */
delta?: {
role?: "assistant"
content?: string
tool_calls?: AntigravityToolCall[]
}
/** Finish reason */
finish_reason?: "stop" | "tool_calls" | "length" | "content_filter" | null
}
/**
* Tool call in Antigravity response
*/
export interface AntigravityToolCall {
id: string
type: "function"
function: {
name: string
arguments: string
}
}
/**
* Token usage statistics
*/
export interface AntigravityUsage {
prompt_tokens: number
completion_tokens: number
total_tokens: number
}
/**
* Error response from Antigravity API
*/
export interface AntigravityError {
message: string
type?: string
code?: string | number
}
/**
* Token exchange result from Google OAuth
* Matches antigravityTokenResponse in Go
*/
export interface AntigravityTokenExchangeResult {
access_token: string
refresh_token: string
expires_in: number
token_type: string
}
/**
* User info from Google userinfo API
*/
export interface AntigravityUserInfo {
email: string
name?: string
picture?: string
}
/**
* Parsed refresh token parts
* Format: refreshToken|projectId|managedProjectId
*/
export interface AntigravityRefreshParts {
refreshToken: string
projectId?: string
managedProjectId?: string
}
/**
* OAuth error payload from Google
* Google returns errors in multiple formats, this handles all of them
*/
export interface OAuthErrorPayload {
error?: string | { status?: string; code?: string; message?: string }
error_description?: string
}
/**
* Parsed OAuth error with normalized fields
*/
export interface ParsedOAuthError {
code?: string
description?: string
}
/**
* Multi-account support types
*/
/** All model families for rate limit tracking */
export const MODEL_FAMILIES = ["claude", "gemini-flash", "gemini-pro"] as const
/** Model family for rate limit tracking */
export type ModelFamily = (typeof MODEL_FAMILIES)[number]
/** Account tier for prioritization */
export type AccountTier = "free" | "paid"
/** Rate limit state per model family (Unix timestamps in ms) */
export type RateLimitState = Partial<Record<ModelFamily, number>>
/** Account metadata for storage */
export interface AccountMetadata {
email: string
tier: AccountTier
refreshToken: string
projectId: string
managedProjectId?: string
accessToken: string
expiresAt: number
rateLimits: RateLimitState
}
/** Storage schema for persisting multiple accounts */
export interface AccountStorage {
version: number
accounts: AccountMetadata[]
activeIndex: number
}


@@ -1,24 +1,22 @@
# CLI KNOWLEDGE BASE
## OVERVIEW
CLI for oh-my-opencode: interactive installer, health diagnostics (doctor), runtime launcher. Entry: `bunx oh-my-opencode`.
## STRUCTURE
```
cli/
├── index.ts # Commander.js entry, subcommand routing
├── index.ts # Commander.js entry, subcommand routing (184 lines)
├── install.ts # Interactive TUI installer (436 lines)
├── config-manager.ts # JSONC parsing, env detection (725 lines)
├── types.ts # CLI-specific types
├── commands/ # CLI subcommands
├── commands/ # CLI subcommands (auth.ts)
├── doctor/ # Health check system
│ ├── index.ts # Doctor command entry
│ ├── runner.ts # Health check orchestration
│ ├── constants.ts # Check categories
│ ├── types.ts # Check result interfaces
│ └── checks/ # 17+ individual checks (auth, config, dependencies, gh, lsp, mcp, opencode, plugin, version)
│ └── checks/ # 10+ check modules (17+ individual checks)
├── get-local-version/ # Version detection
└── run/ # OpenCode session launcher
├── completion.ts # Completion logic
@@ -26,47 +24,34 @@ cli/
```
## CLI COMMANDS
| Command | Purpose |
|---------|---------|
| `install` | Interactive setup wizard |
| `doctor` | Environment health checks |
| `run` | Launch OpenCode session |
| `install` | Interactive setup wizard with subscription detection |
| `doctor` | Environment health checks (LSP, Auth, Config, Deps) |
| `run` | Launch OpenCode session with event handling |
| `auth` | Manage authentication providers |
## DOCTOR CHECKS
17+ checks in `doctor/checks/`:
- version.ts (OpenCode >= 1.0.150)
- config.ts (plugin registered)
- bun.ts, node.ts, git.ts
- anthropic-auth.ts, openai-auth.ts, google-auth.ts
- lsp-*.ts, mcp-*.ts
- `version.ts`: OpenCode >= 1.0.150
- `config.ts`: Plugin registration & JSONC validity
- `dependencies.ts`: bun, node, git, gh-cli
- `auth.ts`: Anthropic, OpenAI, Google (Antigravity)
- `lsp.ts`, `mcp.ts`: Tool connectivity checks
## CONFIG-MANAGER (669 lines)
- JSONC support (comments, trailing commas)
- Multi-source: User (~/.config/opencode/) + Project (.opencode/)
- Zod validation
- Legacy format migration
- Error aggregation for doctor
## CONFIG-MANAGER
- **JSONC**: Supports comments and trailing commas via `parseJsonc`
- **Multi-source**: Merges User (`~/.config/opencode/`) + Project (`.opencode/`)
- **Validation**: Strict Zod schema with error aggregation for `doctor`
- **Env**: Detects `OPENCODE_CONFIG_DIR` for profile isolation
## HOW TO ADD CHECK
1. Create `src/cli/doctor/checks/my-check.ts`:
```typescript
export const myCheck: DoctorCheck = {
name: "my-check",
category: "environment",
check: async () => {
return { status: "pass" | "warn" | "fail", message: "..." }
}
}
```
2. Add to `src/cli/doctor/checks/index.ts`
1. Create `src/cli/doctor/checks/my-check.ts` returning `DoctorCheck`
2. Export from `checks/index.ts` and add to `getAllCheckDefinitions()`
3. Use `CheckContext` for shared utilities (LSP, Auth)
## ANTI-PATTERNS
- Blocking prompts in non-TTY (check `process.stdout.isTTY`)
- Hardcoded paths (use shared utilities)
- JSON.parse for user files (use parseJsonc)
- Silent failures in doctor checks
- Direct `JSON.parse` (breaks JSONC compatibility)
- Silent failures (always return `warn` or `fail` in `doctor`)
- Environment-specific hardcoding (use `ConfigManager`)


@@ -1,93 +0,0 @@
import { loadAccounts, saveAccounts } from "../../auth/antigravity/storage"
import type { AccountStorage } from "../../auth/antigravity/types"
export async function listAccounts(): Promise<number> {
const accounts = await loadAccounts()
if (!accounts || accounts.accounts.length === 0) {
console.log("No accounts found.")
console.log("Run 'opencode auth login' and select Google (Antigravity) to add accounts.")
return 0
}
console.log(`\nGoogle Antigravity Accounts (${accounts.accounts.length}/10):\n`)
for (let i = 0; i < accounts.accounts.length; i++) {
const acc = accounts.accounts[i]
const isActive = i === accounts.activeIndex
const activeMarker = isActive ? "* " : " "
console.log(`${activeMarker}[${i}] ${acc.email || "Unknown"}`)
console.log(` Tier: ${acc.tier || "free"}`)
const rateLimits = acc.rateLimits || {}
const now = Date.now()
const limited: string[] = []
if (rateLimits.claude && rateLimits.claude > now) {
const mins = Math.ceil((rateLimits.claude - now) / 60000)
limited.push(`claude (${mins}m)`)
}
if (rateLimits["gemini-flash"] && rateLimits["gemini-flash"] > now) {
const mins = Math.ceil((rateLimits["gemini-flash"] - now) / 60000)
limited.push(`gemini-flash (${mins}m)`)
}
if (rateLimits["gemini-pro"] && rateLimits["gemini-pro"] > now) {
const mins = Math.ceil((rateLimits["gemini-pro"] - now) / 60000)
limited.push(`gemini-pro (${mins}m)`)
}
if (limited.length > 0) {
console.log(` Rate limited: ${limited.join(", ")}`)
}
console.log()
}
return 0
}
export async function removeAccount(indexOrEmail: string): Promise<number> {
const accounts = await loadAccounts()
if (!accounts || accounts.accounts.length === 0) {
console.error("No accounts found.")
return 1
}
let index: number
const parsedIndex = Number(indexOrEmail)
if (Number.isInteger(parsedIndex) && String(parsedIndex) === indexOrEmail) {
index = parsedIndex
} else {
index = accounts.accounts.findIndex((acc) => acc.email === indexOrEmail)
if (index === -1) {
console.error(`Account not found: ${indexOrEmail}`)
return 1
}
}
if (index < 0 || index >= accounts.accounts.length) {
console.error(`Invalid index: ${index}. Valid range: 0-${accounts.accounts.length - 1}`)
return 1
}
const removed = accounts.accounts[index]
accounts.accounts.splice(index, 1)
if (accounts.accounts.length === 0) {
accounts.activeIndex = -1
} else if (accounts.activeIndex >= accounts.accounts.length) {
accounts.activeIndex = accounts.accounts.length - 1
} else if (accounts.activeIndex > index) {
accounts.activeIndex--
}
await saveAccounts(accounts)
console.log(`Removed account: ${removed.email || "Unknown"} (index ${index})`)
console.log(`Remaining accounts: ${accounts.accounts.length}`)
return 0
}
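The `activeIndex` bookkeeping in `removeAccount` has three cases: the list empties (`-1`), the pointer falls off the end (clamp to last), or the removal was before the active slot (shift down by one). A standalone sketch of that logic:

```typescript
// Mirrors the index adjustment above, operating on a plain array
// of labels instead of full account records.
function removeAt(accounts: string[], activeIndex: number, index: number) {
  accounts.splice(index, 1)
  if (accounts.length === 0) {
    activeIndex = -1 // nothing left to activate
  } else if (activeIndex >= accounts.length) {
    activeIndex = accounts.length - 1 // clamp pointer back into range
  } else if (activeIndex > index) {
    activeIndex-- // removal before the active slot shifts it down
  }
  return { accounts, activeIndex }
}

// Active account "c" is at index 2; removing index 0 shifts it to 1.
console.log(removeAt(["a", "b", "c"], 2, 0)) // { accounts: ["b","c"], activeIndex: 1 }
```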


@@ -267,10 +267,6 @@ export function generateOmoConfig(installConfig: InstallConfig): Record<string,
$schema: "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
}
if (installConfig.hasGemini) {
config.google_auth = false
}
const agents: Record<string, Record<string, unknown>> = {}
if (!installConfig.hasClaude) {
@@ -350,7 +346,6 @@ export function writeOmoConfig(installConfig: InstallConfig): ConfigMergeResult
return { success: true, configPath: omoConfigPath }
}
delete existing.agents
const merged = deepMerge(existing, newConfig)
writeFileSync(omoConfigPath, JSON.stringify(merged, null, 2) + "\n")
} catch (parseErr) {
@@ -643,7 +638,6 @@ export function addProviderConfig(config: InstallConfig): ConfigMergeResult {
}
interface OmoConfigData {
google_auth?: boolean
agents?: Record<string, { model?: string }>
}
@@ -714,9 +708,6 @@ export function detectCurrentConfig(): DetectedConfig {
result.hasChatGPT = false
}
if (omoConfig.google_auth === false) {
result.hasGemini = plugins.some((p) => p.startsWith("opencode-antigravity-auth"))
}
} catch {
/* intentionally empty - malformed omo config returns defaults from opencode config detection */
}


@@ -50,7 +50,9 @@ export async function getVersionInfo(): Promise<VersionCheckInfo> {
}
const currentVersion = getCachedVersion()
const latestVersion = await getLatestVersion()
const { extractChannel } = await import("../../../hooks/auto-update-checker/index")
const channel = extractChannel(pluginInfo?.pinnedVersion ?? currentVersion)
const latestVersion = await getLatestVersion(channel)
const isUpToDate =
!currentVersion ||


@@ -54,7 +54,9 @@ export async function getLocalVersion(options: GetLocalVersionOptions = {}): Pro
return 1
}
const latestVersion = await getLatestVersion()
const { extractChannel } = await import("../../hooks/auto-update-checker/index")
const channel = extractChannel(pluginInfo?.pinnedVersion ?? currentVersion)
const latestVersion = await getLatestVersion(channel)
if (!latestVersion) {
const info: VersionInfo = {


@@ -4,7 +4,6 @@ import { install } from "./install"
import { run } from "./run"
import { getLocalVersion } from "./get-local-version"
import { doctor } from "./doctor"
import { listAccounts, removeAccount } from "./commands/auth"
import type { InstallArgs } from "./types"
import type { RunOptions } from "./run"
import type { GetLocalVersionOptions } from "./get-local-version/types"
@@ -135,45 +134,6 @@ Categories:
process.exit(exitCode)
})
const authCommand = program
.command("auth")
.description("Manage Google Antigravity accounts")
authCommand
.command("list")
.description("List all Google Antigravity accounts")
.addHelpText("after", `
Examples:
$ bunx oh-my-opencode auth list
Shows:
- Account index and email
- Account tier (free/paid)
- Active account (marked with *)
- Rate limit status per model family
`)
.action(async () => {
const exitCode = await listAccounts()
process.exit(exitCode)
})
authCommand
.command("remove <index-or-email>")
.description("Remove an account by index or email")
.addHelpText("after", `
Examples:
$ bunx oh-my-opencode auth remove 0
$ bunx oh-my-opencode auth remove user@example.com
Note:
- Use 'auth list' to see account indices
- Removing the active account will switch to the next available account
`)
.action(async (indexOrEmail: string) => {
const exitCode = await removeAccount(indexOrEmail)
process.exit(exitCode)
})
program
.command("version")
.description("Show version information")


@@ -1,5 +1,5 @@
import { describe, it, expect } from "bun:test"
import { createEventState, type EventState } from "./events"
import { createEventState, serializeError, type EventState } from "./events"
import type { RunContext, EventPayload } from "./types"
const createMockContext = (sessionID: string = "test-session"): RunContext => ({
@@ -15,6 +15,63 @@ async function* toAsyncIterable<T>(items: T[]): AsyncIterable<T> {
}
}
describe("serializeError", () => {
it("returns 'Unknown error' for null/undefined", () => {
// #given / #when / #then
expect(serializeError(null)).toBe("Unknown error")
expect(serializeError(undefined)).toBe("Unknown error")
})
it("returns message from Error instance", () => {
// #given
const error = new Error("Something went wrong")
// #when / #then
expect(serializeError(error)).toBe("Something went wrong")
})
it("returns string as-is", () => {
// #given / #when / #then
expect(serializeError("Direct error message")).toBe("Direct error message")
})
it("extracts message from plain object", () => {
// #given
const errorObj = { message: "Object error message", code: "ERR_001" }
// #when / #then
expect(serializeError(errorObj)).toBe("Object error message")
})
it("extracts message from nested error object", () => {
// #given
const errorObj = { error: { message: "Nested error message" } }
// #when / #then
expect(serializeError(errorObj)).toBe("Nested error message")
})
it("extracts message from data.message path", () => {
// #given
const errorObj = { data: { message: "Data error message" } }
// #when / #then
expect(serializeError(errorObj)).toBe("Data error message")
})
it("JSON stringifies object without message property", () => {
// #given
const errorObj = { code: "ERR_001", status: 500 }
// #when
const result = serializeError(errorObj)
// #then
expect(result).toContain("ERR_001")
expect(result).toContain("500")
})
})
describe("createEventState", () => {
it("creates initial state with correct defaults", () => {
// #given / #when


@@ -11,6 +11,51 @@ import type {
ToolResultProps,
} from "./types"
export function serializeError(error: unknown): string {
if (!error) return "Unknown error"
if (error instanceof Error) {
const parts = [error.message]
if (error.cause) {
parts.push(`Cause: ${serializeError(error.cause)}`)
}
return parts.join(" | ")
}
if (typeof error === "string") {
return error
}
if (typeof error === "object") {
const obj = error as Record<string, unknown>
const messagePaths = [
obj.message,
obj.error,
(obj.data as Record<string, unknown>)?.message,
(obj.data as Record<string, unknown>)?.error,
(obj.error as Record<string, unknown>)?.message,
]
for (const msg of messagePaths) {
if (typeof msg === "string" && msg.length > 0) {
return msg
}
}
try {
const json = JSON.stringify(error, null, 2)
if (json !== "{}") {
return json
}
} catch (_) {
void _
}
}
return String(error)
}
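A standalone sketch of the extraction order `serializeError` applies — `Error.message` first, then raw strings, then the common nested message paths, then a JSON dump as a last resort:

```typescript
// Condensed mirror of serializeError above (drops the `cause` chain
// and the empty-JSON guard for brevity).
function summarize(error: unknown): string {
  if (!error) return "Unknown error"
  if (error instanceof Error) return error.message
  if (typeof error === "string") return error
  if (typeof error === "object") {
    const obj = error as Record<string, unknown>
    const candidates = [
      obj.message,
      obj.error,
      (obj.error as Record<string, unknown>)?.message,
      (obj.data as Record<string, unknown>)?.message,
    ]
    for (const c of candidates) {
      if (typeof c === "string" && c.length > 0) return c
    }
    return JSON.stringify(error)
  }
  return String(error)
}

console.log(summarize(new Error("boom")))                // "boom"
console.log(summarize({ error: { message: "nested" } })) // "nested"
console.log(summarize({ code: 500 }))                    // '{"code":500}' — fallback dump
```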
export interface EventState {
mainSessionIdle: boolean
mainSessionError: boolean
@@ -125,6 +170,13 @@ function logEventVerbose(ctx: RunContext, payload: EventPayload): void {
break
}
case "session.error": {
const errorProps = props as SessionErrorProps | undefined
const errorMsg = serializeError(errorProps?.error)
console.error(pc.red(`${sessionTag} ❌ SESSION.ERROR: ${errorMsg}`))
break
}
default:
console.error(pc.dim(`${sessionTag} ${payload.type}`))
}
@@ -166,9 +218,7 @@ function handleSessionError(
const props = payload.properties as SessionErrorProps | undefined
if (props?.sessionID === ctx.sessionID) {
state.mainSessionError = true
state.lastError = props?.error
? String(props.error instanceof Error ? props.error.message : props.error)
: "Unknown error"
state.lastError = serializeError(props?.error)
console.error(pc.red(`\n[session.error] ${state.lastError}`))
}
}


@@ -2,7 +2,7 @@ import { createOpencode } from "@opencode-ai/sdk"
import pc from "picocolors"
import type { RunOptions, RunContext } from "./types"
import { checkCompletionConditions } from "./completion"
import { createEventState, processEvents } from "./events"
import { createEventState, processEvents, serializeError } from "./events"
const POLL_INTERVAL_MS = 500
const DEFAULT_TIMEOUT_MS = 0
@@ -115,7 +115,7 @@ export async function run(options: RunOptions): Promise<number> {
if (err instanceof Error && err.name === "AbortError") {
return 130
}
console.error(pc.red(`Error: ${err}`))
console.error(pc.red(`Error: ${serializeError(err)}`))
return 1
}
}


@@ -1,5 +1,5 @@
import { describe, expect, test } from "bun:test"
import { AgentOverrideConfigSchema, BuiltinCategoryNameSchema, OhMyOpenCodeConfigSchema } from "./schema"
import { AgentOverrideConfigSchema, BuiltinCategoryNameSchema, CategoryConfigSchema, OhMyOpenCodeConfigSchema } from "./schema"
describe("disabled_mcps schema", () => {
test("should accept built-in MCP names", () => {
@@ -174,6 +174,33 @@ describe("AgentOverrideConfigSchema", () => {
})
})
describe("variant field", () => {
test("accepts variant as optional string", () => {
// #given
const config = { variant: "high" }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.variant).toBe("high")
}
})
test("rejects non-string variant", () => {
// #given
const config = { variant: 123 }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(false)
})
})
describe("skills field", () => {
test("accepts skills as optional string array", () => {
// #given
@@ -303,6 +330,33 @@ describe("AgentOverrideConfigSchema", () => {
})
})
describe("CategoryConfigSchema", () => {
test("accepts variant as optional string", () => {
// #given
const config = { model: "openai/gpt-5.2", variant: "xhigh" }
// #when
const result = CategoryConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.variant).toBe("xhigh")
}
})
test("rejects non-string variant", () => {
// #given
const config = { model: "openai/gpt-5.2", variant: 123 }
// #when
const result = CategoryConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(false)
})
})
describe("BuiltinCategoryNameSchema", () => {
test("accepts all builtin category names", () => {
// #given
@@ -315,3 +369,76 @@ describe("BuiltinCategoryNameSchema", () => {
}
})
})
describe("Sisyphus-Junior agent override", () => {
test("schema accepts agents['Sisyphus-Junior'] and retains the key after parsing", () => {
// #given
const config = {
agents: {
"Sisyphus-Junior": {
model: "openai/gpt-5.2",
temperature: 0.2,
},
},
}
// #when
const result = OhMyOpenCodeConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.agents?.["Sisyphus-Junior"]).toBeDefined()
expect(result.data.agents?.["Sisyphus-Junior"]?.model).toBe("openai/gpt-5.2")
expect(result.data.agents?.["Sisyphus-Junior"]?.temperature).toBe(0.2)
}
})
test("schema accepts Sisyphus-Junior with prompt_append", () => {
// #given
const config = {
agents: {
"Sisyphus-Junior": {
prompt_append: "Additional instructions for Sisyphus-Junior",
},
},
}
// #when
const result = OhMyOpenCodeConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.agents?.["Sisyphus-Junior"]?.prompt_append).toBe(
"Additional instructions for Sisyphus-Junior"
)
}
})
test("schema accepts Sisyphus-Junior with tools override", () => {
// #given
const config = {
agents: {
"Sisyphus-Junior": {
tools: {
read: true,
write: false,
},
},
},
}
// #when
const result = OhMyOpenCodeConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.agents?.["Sisyphus-Junior"]?.tools).toEqual({
read: true,
write: false,
})
}
})
})


@@ -39,6 +39,7 @@ export const OverridableAgentNameSchema = z.enum([
"build",
"plan",
"Sisyphus",
"Sisyphus-Junior",
"OpenCode-Builder",
"Prometheus (Planner)",
"Metis (Plan Consultant)",
@@ -96,6 +97,7 @@ export const BuiltinCommandNameSchema = z.enum([
export const AgentOverrideConfigSchema = z.object({
/** @deprecated Use `category` instead. Model is inherited from category defaults. */
model: z.string().optional(),
variant: z.string().optional(),
/** Category name to inherit model and other settings from CategoryConfig */
category: z.string().optional(),
/** Skill names to inject into agent prompt */
@@ -119,6 +121,7 @@ export const AgentOverridesSchema = z.object({
build: AgentOverrideConfigSchema.optional(),
plan: AgentOverrideConfigSchema.optional(),
Sisyphus: AgentOverrideConfigSchema.optional(),
"Sisyphus-Junior": AgentOverrideConfigSchema.optional(),
"OpenCode-Builder": AgentOverrideConfigSchema.optional(),
"Prometheus (Planner)": AgentOverrideConfigSchema.optional(),
"Metis (Plan Consultant)": AgentOverrideConfigSchema.optional(),
@@ -151,6 +154,7 @@ export const SisyphusAgentConfigSchema = z.object({
export const CategoryConfigSchema = z.object({
model: z.string(),
variant: z.string().optional(),
temperature: z.number().min(0).max(2).optional(),
top_p: z.number().min(0).max(1).optional(),
maxTokens: z.number().optional(),
@@ -296,6 +300,7 @@ export const GitMasterConfigSchema = z.object({
/** Add "Co-authored-by: Sisyphus" trailer to commit messages (default: true) */
include_co_authored_by: z.boolean().default(true),
})
export const OhMyOpenCodeConfigSchema = z.object({
$schema: z.string().optional(),
disabled_mcps: z.array(AnyMcpNameSchema).optional(),
@@ -306,7 +311,6 @@ export const OhMyOpenCodeConfigSchema = z.object({
agents: AgentOverridesSchema.optional(),
categories: CategoriesConfigSchema.optional(),
claude_code: ClaudeCodeConfigSchema.optional(),
google_auth: z.boolean().optional(),
sisyphus_agent: SisyphusAgentConfigSchema.optional(),
comment_checker: CommentCheckerConfigSchema.optional(),
experimental: ExperimentalConfigSchema.optional(),

View File

@@ -1,35 +1,34 @@
# FEATURES KNOWLEDGE BASE
## OVERVIEW
Claude Code compatibility layer + core feature modules. Commands, skills, agents, MCPs, and hooks from Claude Code work seamlessly.
## STRUCTURE
```
features/
├── background-agent/ # Task lifecycle, notifications (608 lines)
├── background-agent/ # Task lifecycle, notifications (825 lines manager.ts)
├── boulder-state/ # Boulder state persistence
├── builtin-commands/ # Built-in slash commands
│ └── templates/ # start-work, refactor, init-deep, ralph-loop
├── builtin-skills/ # Built-in skills
├── builtin-skills/ # Built-in skills (1230 lines skills.ts)
│ ├── git-master/ # Atomic commits, rebase, history search
│ ├── playwright/ # Browser automation skill
│ └── frontend-ui-ux/ # Designer-turned-developer skill
├── claude-code-agent-loader/ # ~/.claude/agents/*.md
├── claude-code-command-loader/ # ~/.claude/commands/*.md
├── claude-code-mcp-loader/ # .mcp.json files
│ └── env-expander.ts # ${VAR} expansion
├── claude-code-plugin-loader/ # installed_plugins.json (486 lines)
├── claude-code-plugin-loader/ # installed_plugins.json
├── claude-code-session-state/ # Session state persistence
├── context-injector/ # Context collection and injection
├── opencode-skill-loader/ # Skills from OpenCode + Claude paths
├── skill-mcp-manager/ # MCP servers in skill YAML
├── task-toast-manager/ # Task toast notifications
├── hook-message-injector/ # Inject messages into conversation
└── context-injector/ # Context collection and injection
```
## LOADER PRIORITY
| Loader | Priority (highest first) |
|--------|--------------------------|
| Commands | `.opencode/command/` > `~/.config/opencode/command/` > `.claude/commands/` > `~/.claude/commands/` |
@@ -38,7 +37,6 @@ features/
| MCPs | `.claude/.mcp.json` > `.mcp.json` > `~/.claude/.mcp.json` |
## CONFIG TOGGLES
```json
{
"claude_code": {
@@ -52,21 +50,19 @@ features/
```
## BACKGROUND AGENT
- Lifecycle: pending → running → completed/failed
- OS notification on complete
- `background_output` to retrieve results
- `background_cancel` with task_id or all=true
- Concurrency limits per provider/model (manager.ts)
- `background_output` to retrieve results, `background_cancel` for cleanup
- Automatic task expiration and cleanup logic
## SKILL MCP
- MCP servers embedded in skill YAML frontmatter
- Lazy client loading, session-scoped cleanup
- `skill_mcp` tool exposes capabilities
- Lazy client loading via `skill-mcp-manager`
- `skill_mcp` tool for cross-skill tool discovery
- Session-scoped MCP server lifecycle management
## ANTI-PATTERNS
- Blocking on load (loaders run at startup)
- No error handling (always try/catch)
- Ignoring priority order
- Writing to ~/.claude/ (read-only)
- Sequential execution for independent tasks (use `sisyphus_task`)
- Trusting agent self-reports without verification
- Blocking main thread during loader initialization
- Manual version bumping in `package.json`

View File

@@ -674,3 +674,95 @@ describe("LaunchInput.skillContent", () => {
expect(input.skillContent).toBe("You are a playwright expert")
})
})
describe("BackgroundManager.notifyParentSession - agent context preservation", () => {
test("should not pass agent field when parentAgent is undefined", async () => {
// #given
const task: BackgroundTask = {
id: "task-no-agent",
sessionID: "session-child",
parentSessionID: "session-parent",
parentMessageID: "msg-parent",
description: "task without agent context",
prompt: "test",
agent: "explore",
status: "completed",
startedAt: new Date(),
completedAt: new Date(),
parentAgent: undefined,
parentModel: { providerID: "anthropic", modelID: "claude-opus" },
}
// #when
const promptBody = buildNotificationPromptBody(task)
// #then
expect("agent" in promptBody).toBe(false)
expect(promptBody.model).toEqual({ providerID: "anthropic", modelID: "claude-opus" })
})
test("should include agent field when parentAgent is defined", async () => {
// #given
const task: BackgroundTask = {
id: "task-with-agent",
sessionID: "session-child",
parentSessionID: "session-parent",
parentMessageID: "msg-parent",
description: "task with agent context",
prompt: "test",
agent: "explore",
status: "completed",
startedAt: new Date(),
completedAt: new Date(),
parentAgent: "Sisyphus",
parentModel: { providerID: "anthropic", modelID: "claude-opus" },
}
// #when
const promptBody = buildNotificationPromptBody(task)
// #then
expect(promptBody.agent).toBe("Sisyphus")
})
test("should not pass model field when parentModel is undefined", async () => {
// #given
const task: BackgroundTask = {
id: "task-no-model",
sessionID: "session-child",
parentSessionID: "session-parent",
parentMessageID: "msg-parent",
description: "task without model context",
prompt: "test",
agent: "explore",
status: "completed",
startedAt: new Date(),
completedAt: new Date(),
parentAgent: "Sisyphus",
parentModel: undefined,
}
// #when
const promptBody = buildNotificationPromptBody(task)
// #then
expect("model" in promptBody).toBe(false)
expect(promptBody.agent).toBe("Sisyphus")
})
})
function buildNotificationPromptBody(task: BackgroundTask): Record<string, unknown> {
const body: Record<string, unknown> = {
parts: [{ type: "text", text: `[BACKGROUND TASK COMPLETED] Task "${task.description}" finished.` }],
}
if (task.parentAgent !== undefined) {
body.agent = task.parentAgent
}
if (task.parentModel?.providerID && task.parentModel?.modelID) {
body.model = { providerID: task.parentModel.providerID, modelID: task.parentModel.modelID }
}
return body
}

View File

@@ -13,6 +13,7 @@ import { subagentSessions } from "../claude-code-session-state"
import { getTaskToastManager } from "../task-toast-manager"
const TASK_TTL_MS = 30 * 60 * 1000
const MIN_STABILITY_TIME_MS = 10 * 1000 // Must run at least 10s before stability detection kicks in
type OpencodeClient = PluginInput["client"]
@@ -43,6 +44,7 @@ interface Todo {
export class BackgroundManager {
private tasks: Map<string, BackgroundTask>
private notifications: Map<string, BackgroundTask[]>
private pendingByParent: Map<string, Set<string>> // Track pending tasks per parent for batching
private client: OpencodeClient
private directory: string
private pollingInterval?: ReturnType<typeof setInterval>
@@ -51,12 +53,20 @@ export class BackgroundManager {
constructor(ctx: PluginInput, config?: BackgroundTaskConfig) {
this.tasks = new Map()
this.notifications = new Map()
this.pendingByParent = new Map()
this.client = ctx.client
this.directory = ctx.directory
this.concurrencyManager = new ConcurrencyManager(config)
}
async launch(input: LaunchInput): Promise<BackgroundTask> {
log("[background-agent] launch() called with:", {
agent: input.agent,
model: input.model,
description: input.description,
parentSessionID: input.parentSessionID,
})
if (!input.agent || input.agent.trim() === "") {
throw new Error("Agent parameter is required")
}
@@ -65,11 +75,23 @@ export class BackgroundManager {
await this.concurrencyManager.acquire(concurrencyKey)
const parentSession = await this.client.session.get({
path: { id: input.parentSessionID },
}).catch((err) => {
log(`[background-agent] Failed to get parent session: ${err}`)
return null
})
const parentDirectory = parentSession?.data?.directory ?? this.directory
log(`[background-agent] Parent dir: ${parentSession?.data?.directory}, using: ${parentDirectory}`)
const createResult = await this.client.session.create({
body: {
parentID: input.parentSessionID,
title: `Background: ${input.description}`,
},
query: {
directory: parentDirectory,
},
}).catch((error) => {
this.concurrencyManager.release(concurrencyKey)
throw error
@@ -106,6 +128,11 @@ export class BackgroundManager {
this.tasks.set(task.id, task)
this.startPolling()
// Track for batched notifications
const pending = this.pendingByParent.get(input.parentSessionID) ?? new Set()
pending.add(task.id)
this.pendingByParent.set(input.parentSessionID, pending)
log("[background-agent] Launching task:", { taskId: task.id, sessionID, agent: input.agent })
const toastManager = getTaskToastManager()
@@ -119,14 +146,26 @@ export class BackgroundManager {
})
}
this.client.session.promptAsync({
log("[background-agent] Calling prompt (fire-and-forget) for launch with:", {
sessionID,
agent: input.agent,
model: input.model,
hasSkillContent: !!input.skillContent,
promptLength: input.prompt.length,
})
// Use prompt() instead of promptAsync() to properly initialize agent loop (fire-and-forget)
// Include model if caller provided one (e.g., from Sisyphus category configs)
this.client.session.prompt({
path: { id: sessionID },
body: {
agent: input.agent,
...(input.model ? { model: input.model } : {}),
system: input.skillContent,
tools: {
task: false,
call_omo_agent: false,
sisyphus_task: false,
call_omo_agent: true,
},
parts: [{ type: "text", text: input.prompt }],
},
@@ -146,7 +185,9 @@ export class BackgroundManager {
this.concurrencyManager.release(existingTask.concurrencyKey)
}
this.markForNotification(existingTask)
this.notifyParentSession(existingTask)
this.notifyParentSession(existingTask).catch(err => {
log("[background-agent] Failed to notify on error:", err)
})
}
})
@@ -199,6 +240,7 @@ export class BackgroundManager {
parentSessionID: string
description: string
agent?: string
parentAgent?: string
}): BackgroundTask {
const task: BackgroundTask = {
id: input.taskId,
@@ -214,12 +256,18 @@ export class BackgroundManager {
toolCalls: 0,
lastUpdate: new Date(),
},
parentAgent: input.parentAgent,
}
this.tasks.set(task.id, task)
subagentSessions.add(input.sessionID)
this.startPolling()
// Track for batched notifications (external tasks need tracking too)
const pending = this.pendingByParent.get(input.parentSessionID) ?? new Set()
pending.add(task.id)
this.pendingByParent.set(input.parentSessionID, pending)
log("[background-agent] Registered external task:", { taskId: task.id, sessionID: input.sessionID })
return task
@@ -247,6 +295,11 @@ export class BackgroundManager {
this.startPolling()
subagentSessions.add(existingTask.sessionID)
// Track for batched notifications (P2 fix: resumed tasks need tracking too)
const pending = this.pendingByParent.get(input.parentSessionID) ?? new Set()
pending.add(existingTask.id)
this.pendingByParent.set(input.parentSessionID, pending)
const toastManager = getTaskToastManager()
if (toastManager) {
toastManager.addTask({
@@ -259,24 +312,35 @@ export class BackgroundManager {
log("[background-agent] Resuming task:", { taskId: existingTask.id, sessionID: existingTask.sessionID })
this.client.session.promptAsync({
log("[background-agent] Resuming task - calling prompt (fire-and-forget) with:", {
sessionID: existingTask.sessionID,
agent: existingTask.agent,
promptLength: input.prompt.length,
})
// Note: Don't pass model in body - use agent's configured model instead
// Use prompt() instead of promptAsync() to properly initialize agent loop
this.client.session.prompt({
path: { id: existingTask.sessionID },
body: {
agent: existingTask.agent,
tools: {
task: false,
call_omo_agent: false,
sisyphus_task: false,
call_omo_agent: true,
},
parts: [{ type: "text", text: input.prompt }],
},
}).catch((error) => {
log("[background-agent] resume promptAsync error:", error)
log("[background-agent] resume prompt error:", error)
existingTask.status = "error"
const errorMessage = error instanceof Error ? error.message : String(error)
existingTask.error = errorMessage
existingTask.completedAt = new Date()
this.markForNotification(existingTask)
this.notifyParentSession(existingTask)
this.notifyParentSession(existingTask).catch(err => {
log("[background-agent] Failed to notify on resume error:", err)
})
})
return existingTask
@@ -331,7 +395,22 @@ export class BackgroundManager {
const task = this.findBySession(sessionID)
if (!task || task.status !== "running") return
this.checkSessionTodos(sessionID).then((hasIncompleteTodos) => {
// Edge guard: Require minimum elapsed time (5 seconds) before accepting idle
const elapsedMs = Date.now() - task.startedAt.getTime()
const MIN_IDLE_TIME_MS = 5000
if (elapsedMs < MIN_IDLE_TIME_MS) {
log("[background-agent] Ignoring early session.idle, elapsed:", { elapsedMs, taskId: task.id })
return
}
// Edge guard: Verify session has actual assistant output before completing
this.validateSessionHasOutput(sessionID).then(async (hasValidOutput) => {
if (!hasValidOutput) {
log("[background-agent] Session.idle but no valid output yet, waiting:", task.id)
return
}
const hasIncompleteTodos = await this.checkSessionTodos(sessionID)
if (hasIncompleteTodos) {
log("[background-agent] Task has incomplete todos, waiting for todo-continuation:", task.id)
return
@@ -340,8 +419,10 @@ export class BackgroundManager {
task.status = "completed"
task.completedAt = new Date()
this.markForNotification(task)
this.notifyParentSession(task)
await this.notifyParentSession(task)
log("[background-agent] Task completed via session.idle event:", task.id)
}).catch(err => {
log("[background-agent] Error in session.idle handler:", err)
})
}
@@ -382,6 +463,66 @@ export class BackgroundManager {
this.notifications.delete(sessionID)
}
/**
* Validates that a session has actual assistant/tool output before marking complete.
* Prevents premature completion when session.idle fires before agent responds.
*/
private async validateSessionHasOutput(sessionID: string): Promise<boolean> {
try {
const response = await this.client.session.messages({
path: { id: sessionID },
})
const messages = response.data ?? []
// Check for at least one assistant or tool message
const hasAssistantOrToolMessage = messages.some(
(m: { info?: { role?: string } }) =>
m.info?.role === "assistant" || m.info?.role === "tool"
)
if (!hasAssistantOrToolMessage) {
log("[background-agent] No assistant/tool messages found in session:", sessionID)
return false
}
// Additionally check that at least one message has content (not just empty)
// OpenCode API uses different part types than Anthropic's API:
// - "reasoning" with .text property (thinking/reasoning content)
// - "tool" with .state.output property (tool call results)
// - "text" with .text property (final text output)
// - "step-start"/"step-finish" (metadata, no content)
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const hasContent = messages.some((m: any) => {
if (m.info?.role !== "assistant" && m.info?.role !== "tool") return false
const parts = m.parts ?? []
// eslint-disable-next-line @typescript-eslint/no-explicit-any
return parts.some((p: any) =>
// Text content (final output)
(p.type === "text" && p.text && p.text.trim().length > 0) ||
// Reasoning content (thinking blocks)
(p.type === "reasoning" && p.text && p.text.trim().length > 0) ||
// Tool calls (indicates work was done)
p.type === "tool" ||
// Tool results (output from executed tools) - important for tool-only tasks
(p.type === "tool_result" && p.content &&
(typeof p.content === "string" ? p.content.trim().length > 0 : p.content.length > 0))
)
})
if (!hasContent) {
log("[background-agent] Messages exist but no content found in session:", sessionID)
return false
}
return true
} catch (error) {
log("[background-agent] Error validating session output:", error)
// On error, allow completion to proceed (don't block indefinitely)
return true
}
}
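The part-level check above can be reduced to a standalone predicate. This is a sketch (the `Part` type name and shape are illustrative, not the plugin's actual types) of which part kinds count as "meaningful output":

```typescript
// Hypothetical minimal part shape for illustration.
interface Part {
  type: string
  text?: string
  content?: string | unknown[]
}

// A part counts as output if it is non-blank text/reasoning, any tool call,
// or a non-empty tool result.
function partHasContent(p: Part): boolean {
  if ((p.type === "text" || p.type === "reasoning") && p.text) {
    return p.text.trim().length > 0
  }
  if (p.type === "tool") return true
  if (p.type === "tool_result" && p.content !== undefined) {
    return typeof p.content === "string"
      ? p.content.trim().length > 0
      : p.content.length > 0
  }
  return false
}
```

The `tool_result` branch matters for tool-only tasks that never emit final text.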
private clearNotificationsForTask(taskId: string): void {
for (const [sessionID, tasks] of this.notifications.entries()) {
const filtered = tasks.filter((t) => t.id !== taskId)
@@ -409,17 +550,38 @@ export class BackgroundManager {
}
}
cleanup(): void {
this.stopPolling()
this.tasks.clear()
this.notifications.clear()
this.pendingByParent.clear()
}
private notifyParentSession(task: BackgroundTask): void {
/**
* Get all running tasks (for compaction hook)
*/
getRunningTasks(): BackgroundTask[] {
return Array.from(this.tasks.values()).filter(t => t.status === "running")
}
/**
* Get all completed tasks still in memory (for compaction hook)
*/
getCompletedTasks(): BackgroundTask[] {
return Array.from(this.tasks.values()).filter(t => t.status !== "running")
}
private async notifyParentSession(task: BackgroundTask): Promise<void> {
if (task.concurrencyKey) {
this.concurrencyManager.release(task.concurrencyKey)
task.concurrencyKey = undefined
}
const duration = this.formatDuration(task.startedAt, task.completedAt)
log("[background-agent] notifyParentSession called for task:", task.id)
// Show toast notification
const toastManager = getTaskToastManager()
if (toastManager) {
toastManager.showCompletionToast({
@@ -429,41 +591,78 @@ export class BackgroundManager {
})
}
const message = `[BACKGROUND TASK COMPLETED] Task "${task.description}" finished in ${duration}. Use background_output with task_id="${task.id}" to get results.`
// Update pending tracking and check if all tasks complete
const pendingSet = this.pendingByParent.get(task.parentSessionID)
if (pendingSet) {
pendingSet.delete(task.id)
if (pendingSet.size === 0) {
this.pendingByParent.delete(task.parentSessionID)
}
}
log("[background-agent] Sending notification to parent session:", { parentSessionID: task.parentSessionID })
const allComplete = !pendingSet || pendingSet.size === 0
const remainingCount = pendingSet?.size ?? 0
// Build notification message
const statusText = task.status === "error" ? "FAILED" : "COMPLETED"
const errorInfo = task.error ? `\n**Error:** ${task.error}` : ""
let notification: string
if (allComplete) {
// All tasks complete - build summary
const completedTasks = Array.from(this.tasks.values())
.filter(t => t.parentSessionID === task.parentSessionID && t.status !== "running")
.map(t => `- \`${t.id}\`: ${t.description}`)
.join("\n")
notification = `<system-reminder>
[ALL BACKGROUND TASKS COMPLETE]
**Completed:**
${completedTasks || `- \`${task.id}\`: ${task.description}`}
Use \`background_output(task_id="<id>")\` to retrieve each result.
</system-reminder>`
} else {
// Individual completion - silent notification
notification = `<system-reminder>
[BACKGROUND TASK ${statusText}]
**ID:** \`${task.id}\`
**Description:** ${task.description}
**Duration:** ${duration}${errorInfo}
**${remainingCount} task${remainingCount === 1 ? "" : "s"} still in progress.** You WILL be notified when ALL complete.
Do NOT poll - continue productive work.
Use \`background_output(task_id="${task.id}")\` to retrieve this result when ready.
</system-reminder>`
}
// Inject notification via session.prompt with noReply
try {
await this.client.session.prompt({
path: { id: task.parentSessionID },
body: {
noReply: !allComplete, // Silent unless all complete
agent: task.parentAgent,
parts: [{ type: "text", text: notification }],
},
})
log("[background-agent] Sent notification to parent session:", {
taskId: task.id,
allComplete,
noReply: !allComplete,
})
} catch (error) {
log("[background-agent] Failed to send notification:", error)
}
const taskId = task.id
setTimeout(async () => {
if (task.concurrencyKey) {
this.concurrencyManager.release(task.concurrencyKey)
}
try {
// Use only parentModel/parentAgent - don't fallback to prevMessage
// This prevents accidentally changing parent session's model/agent
const modelField = task.parentModel?.providerID && task.parentModel?.modelID
? { providerID: task.parentModel.providerID, modelID: task.parentModel.modelID }
: undefined
await this.client.session.prompt({
path: { id: task.parentSessionID },
body: {
agent: task.parentAgent,
model: modelField,
parts: [{ type: "text", text: message }],
},
query: { directory: this.directory },
})
log("[background-agent] Successfully sent prompt to parent session:", { parentSessionID: task.parentSessionID })
} catch (error) {
log("[background-agent] prompt failed:", String(error))
} finally {
this.clearNotificationsForTask(taskId)
this.tasks.delete(taskId)
log("[background-agent] Removed completed task from memory:", taskId)
}
}, 200)
setTimeout(() => {
this.clearNotificationsForTask(taskId)
this.tasks.delete(taskId)
log("[background-agent] Removed completed task from memory:", taskId)
}, 5 * 60 * 1000)
}
private formatDuration(start: Date, end?: Date): string {
@@ -532,15 +731,18 @@ export class BackgroundManager {
for (const task of this.tasks.values()) {
if (task.status !== "running") continue
try {
const sessionStatus = allStatuses[task.sessionID]
if (!sessionStatus) {
log("[background-agent] Session not found in status:", task.sessionID)
continue
}
// Don't skip if session not in status - fall through to message-based detection
if (sessionStatus?.type === "idle") {
// Edge guard: Validate session has actual output before completing
const hasValidOutput = await this.validateSessionHasOutput(task.sessionID)
if (!hasValidOutput) {
log("[background-agent] Polling idle but no valid output yet, waiting:", task.id)
continue
}
if (sessionStatus.type === "idle") {
const hasIncompleteTodos = await this.checkSessionTodos(task.sessionID)
if (hasIncompleteTodos) {
log("[background-agent] Task has incomplete todos via polling, waiting:", task.id)
@@ -550,7 +752,7 @@ export class BackgroundManager {
task.status = "completed"
task.completedAt = new Date()
this.markForNotification(task)
this.notifyParentSession(task)
await this.notifyParentSession(task)
log("[background-agent] Task completed via polling:", task.id)
continue
}
@@ -591,10 +793,41 @@ export class BackgroundManager {
task.progress.toolCalls = toolCalls
task.progress.lastTool = lastTool
task.progress.lastUpdate = new Date()
if (lastMessage) {
task.progress.lastMessage = lastMessage
task.progress.lastMessageAt = new Date()
}
// Stability detection: complete when message count unchanged for 3 polls
const currentMsgCount = messages.length
const elapsedMs = Date.now() - task.startedAt.getTime()
if (elapsedMs >= MIN_STABILITY_TIME_MS) {
if (task.lastMsgCount === currentMsgCount) {
task.stablePolls = (task.stablePolls ?? 0) + 1
if (task.stablePolls >= 3) {
// Edge guard: Validate session has actual output before completing
const hasValidOutput = await this.validateSessionHasOutput(task.sessionID)
if (!hasValidOutput) {
log("[background-agent] Stability reached but no valid output, waiting:", task.id)
continue
}
const hasIncompleteTodos = await this.checkSessionTodos(task.sessionID)
if (!hasIncompleteTodos) {
task.status = "completed"
task.completedAt = new Date()
this.markForNotification(task)
await this.notifyParentSession(task)
log("[background-agent] Task completed via stability detection:", task.id)
continue
}
}
} else {
task.stablePolls = 0
}
}
task.lastMsgCount = currentMsgCount
}
} catch (error) {
log("[background-agent] Poll error for task:", { taskId: task.id, error })
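The `pendingByParent` batching rule in this file reduces to a small decision: each completion is reported silently (`noReply`) while siblings remain, and the full summary fires only when the parent's pending set drains. A standalone sketch of that rule (not the plugin's actual code):

```typescript
// Pending task IDs per parent session.
const pendingByParent = new Map<string, Set<string>>()

// Register a launched task under its parent.
function track(parentID: string, taskID: string): void {
  const pending = pendingByParent.get(parentID) ?? new Set<string>()
  pending.add(taskID)
  pendingByParent.set(parentID, pending)
}

// On completion: drop the task, report whether the parent should now get
// the "ALL COMPLETE" summary and how many siblings are still running.
function complete(parentID: string, taskID: string): { allComplete: boolean; remaining: number } {
  const pending = pendingByParent.get(parentID)
  pending?.delete(taskID)
  if (pending && pending.size === 0) pendingByParent.delete(parentID)
  const remaining = pending?.size ?? 0
  return { allComplete: remaining === 0, remaining }
}
```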

View File

@@ -27,11 +27,15 @@ export interface BackgroundTask {
error?: string
progress?: TaskProgress
parentModel?: { providerID: string; modelID: string }
model?: { providerID: string; modelID: string }
model?: { providerID: string; modelID: string; variant?: string }
/** Agent name used for concurrency tracking */
concurrencyKey?: string
/** Parent session's agent name for notification */
parentAgent?: string
/** Last message count for stability detection */
lastMsgCount?: number
/** Number of consecutive polls with stable message count */
stablePolls?: number
}
export interface LaunchInput {
@@ -42,7 +46,7 @@ export interface LaunchInput {
parentMessageID: string
parentModel?: { providerID: string; modelID: string }
parentAgent?: string
model?: { providerID: string; modelID: string }
model?: { providerID: string; modelID: string; variant?: string }
skills?: string[]
skillContent?: string
}

View File

@@ -9,3 +9,23 @@ export function setMainSession(id: string | undefined) {
export function getMainSessionID(): string | undefined {
return mainSessionID
}
const sessionAgentMap = new Map<string, string>()
export function setSessionAgent(sessionID: string, agent: string): void {
if (!sessionAgentMap.has(sessionID)) {
sessionAgentMap.set(sessionID, agent)
}
}
export function updateSessionAgent(sessionID: string, agent: string): void {
sessionAgentMap.set(sessionID, agent)
}
export function getSessionAgent(sessionID: string): string | undefined {
return sessionAgentMap.get(sessionID)
}
export function clearSessionAgent(sessionID: string): void {
sessionAgentMap.delete(sessionID)
}
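The asymmetry above is the point of having two setters: `setSessionAgent` is first-write-wins (it preserves whichever agent started the session), while `updateSessionAgent` overwrites unconditionally. A self-contained sketch mirroring those semantics:

```typescript
const sessionAgentMap = new Map<string, string>()

// First-write-wins: keeps the agent that originally started the session.
function setSessionAgent(sessionID: string, agent: string): void {
  if (!sessionAgentMap.has(sessionID)) sessionAgentMap.set(sessionID, agent)
}

// Unconditional overwrite, for deliberate agent switches.
function updateSessionAgent(sessionID: string, agent: string): void {
  sessionAgentMap.set(sessionID, agent)
}

function getSessionAgent(sessionID: string): string | undefined {
  return sessionAgentMap.get(sessionID)
}
```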

View File

@@ -133,7 +133,7 @@ describe("createContextInjectorHook", () => {
})
describe("chat.message handler", () => {
it("is a no-op (context injection moved to messages transform)", async () => {
it("injects pending context into output parts", async () => {
// #given
const hook = createContextInjectorHook(collector)
const sessionID = "ses_hook1"
@@ -152,8 +152,9 @@ describe("createContextInjectorHook", () => {
await hook["chat.message"](input, output)
// #then
expect(output.parts[0].text).toBe("User message")
expect(collector.hasPending(sessionID)).toBe(true)
expect(output.parts[0].text).toContain("Hook context")
expect(output.parts[0].text).toContain("User message")
expect(collector.hasPending(sessionID)).toBe(false)
})
it("does nothing when no pending context", async () => {

View File

@@ -52,10 +52,16 @@ interface ChatMessageOutput {
export function createContextInjectorHook(collector: ContextCollector) {
return {
"chat.message": async (
_input: ChatMessageInput,
_output: ChatMessageOutput
input: ChatMessageInput,
output: ChatMessageOutput
): Promise<void> => {
void collector
const result = injectPendingContext(collector, input.sessionID, output.parts)
if (result.injected) {
log("[context-injector] Injected pending context via chat.message", {
sessionID: input.sessionID,
contextLength: result.contextLength,
})
}
},
}
}

View File

@@ -1,4 +1,4 @@
export { injectHookMessage, findNearestMessageWithFields } from "./injector"
export { injectHookMessage, findNearestMessageWithFields, findFirstMessageWithAgent } from "./injector"
export type { StoredMessage } from "./injector"
export type { MessageMeta, OriginalMessageContext, TextPart } from "./types"
export { MESSAGE_STORAGE } from "./constants"

View File

@@ -48,6 +48,35 @@ export function findNearestMessageWithFields(messageDir: string): StoredMessage
return null
}
/**
 * Finds the FIRST (oldest) message in the session with an agent field.
* This is used to get the original agent that started the session,
* avoiding issues where newer messages may have a different agent
* due to OpenCode's internal agent switching.
*/
export function findFirstMessageWithAgent(messageDir: string): string | null {
try {
const files = readdirSync(messageDir)
.filter((f) => f.endsWith(".json"))
.sort() // Oldest first (no reverse)
for (const file of files) {
try {
const content = readFileSync(join(messageDir, file), "utf-8")
const msg = JSON.parse(content) as StoredMessage
if (msg.agent) {
return msg.agent
}
} catch {
continue
}
}
} catch {
return null
}
return null
}
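Stripped of the filesystem plumbing, the selection rule above is just "scan oldest-first, return the first agent seen", so later internal agent switches cannot mask the session's original agent. An in-memory sketch of that rule:

```typescript
// Minimal message shape for illustration (mirrors the agent field used above).
interface StoredMessage {
  agent?: string
}

// Messages are assumed already sorted oldest-first, as the file-name sort
// above guarantees for on-disk messages.
function firstAgent(messages: StoredMessage[]): string | null {
  for (const msg of messages) {
    if (msg.agent) return msg.agent
  }
  return null
}
```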
function generateMessageId(): string {
const timestamp = Date.now().toString(16)
const random = Math.random().toString(36).substring(2, 14)

View File

@@ -63,7 +63,7 @@ async function loadSkillFromPath(
): Promise<LoadedSkill | null> {
try {
const content = await fs.readFile(skillPath, "utf-8")
const { data } = parseFrontmatter<SkillMetadata>(content)
const { data, body } = parseFrontmatter<SkillMetadata>(content)
const frontmatterMcp = parseSkillMcpConfigFromFrontmatter(content)
const mcpJsonMcp = await loadMcpJsonFromDir(resolvedPath)
const mcpConfig = mcpJsonMcp || frontmatterMcp
@@ -73,14 +73,7 @@ async function loadSkillFromPath(
const isOpencodeSource = scope === "opencode" || scope === "opencode-project"
const formattedDescription = `(${scope} - Skill) ${originalDescription}`
const lazyContent: LazyContentLoader = {
loaded: false,
content: undefined,
load: async () => {
if (!lazyContent.loaded) {
const fileContent = await fs.readFile(skillPath, "utf-8")
const { body } = parseFrontmatter<SkillMetadata>(fileContent)
lazyContent.content = `<skill-instruction>
const templateContent = `<skill-instruction>
Base directory for this skill: ${resolvedPath}/
File references (@path) in this skill are relative to this directory.
@@ -90,16 +83,20 @@ ${body.trim()}
<user-request>
$ARGUMENTS
</user-request>`
lazyContent.loaded = true
}
return lazyContent.content!
},
// RATIONALE: We read the file eagerly to ensure atomic consistency between
// metadata and body. We maintain the LazyContentLoader interface for
// compatibility, but the state is effectively eager.
const eagerLoader: LazyContentLoader = {
loaded: true,
content: templateContent,
load: async () => templateContent,
}
const definition: CommandDefinition = {
name: skillName,
description: formattedDescription,
template: "",
template: templateContent,
model: sanitizeModelField(data.model, isOpencodeSource ? "opencode" : "claude-code"),
agent: data.agent,
subtask: data.subtask,
@@ -117,7 +114,7 @@ $ARGUMENTS
metadata: data.metadata,
allowedTools: parseAllowedTools(data["allowed-tools"]),
mcpConfig,
lazyContent,
lazyContent: eagerLoader,
}
} catch {
return null

View File

@@ -1,12 +1,41 @@
import { createBuiltinSkills } from "../builtin-skills/skills"
import type { GitMasterConfig } from "../../config/schema"
export function resolveSkillContent(skillName: string): string | null {
const skills = createBuiltinSkills()
const skill = skills.find((s) => s.name === skillName)
return skill?.template ?? null
export interface SkillResolutionOptions {
gitMasterConfig?: GitMasterConfig
}
export function resolveMultipleSkills(skillNames: string[]): {
function injectGitMasterConfig(template: string, config?: GitMasterConfig): string {
if (!config) return template
const commitFooter = config.commit_footer ?? true
const includeCoAuthoredBy = config.include_co_authored_by ?? true
const configHeader = `## Git Master Configuration (from oh-my-opencode.json)
**IMPORTANT: These values override the defaults in section 5.5:**
- \`commit_footer\`: ${commitFooter} ${!commitFooter ? "(DISABLED - do NOT add footer)" : ""}
- \`include_co_authored_by\`: ${includeCoAuthoredBy} ${!includeCoAuthoredBy ? "(DISABLED - do NOT add Co-authored-by)" : ""}
---
`
return configHeader + template
}
export function resolveSkillContent(skillName: string, options?: SkillResolutionOptions): string | null {
const skills = createBuiltinSkills()
const skill = skills.find((s) => s.name === skillName)
if (!skill) return null
if (skillName === "git-master" && options?.gitMasterConfig) {
return injectGitMasterConfig(skill.template, options.gitMasterConfig)
}
return skill.template
}
export function resolveMultipleSkills(skillNames: string[], options?: SkillResolutionOptions): {
resolved: Map<string, string>
notFound: string[]
} {
@@ -19,7 +48,11 @@ export function resolveMultipleSkills(skillNames: string[]): {
for (const name of skillNames) {
const template = skillMap.get(name)
if (template) {
resolved.set(name, template)
if (name === "git-master" && options?.gitMasterConfig) {
resolved.set(name, injectGitMasterConfig(template, options.gitMasterConfig))
} else {
resolved.set(name, template)
}
} else {
notFound.push(name)
}

View File

@@ -1,8 +0,0 @@
import type { Plugin } from "@opencode-ai/plugin"
import { createGoogleAntigravityAuthPlugin } from "./auth/antigravity"
const GoogleAntigravityAuthPlugin: Plugin = async (ctx) => {
return createGoogleAntigravityAuthPlugin(ctx)
}
export default GoogleAntigravityAuthPlugin

View File

@@ -1,73 +1,54 @@
# HOOKS KNOWLEDGE BASE
## OVERVIEW
22+ lifecycle hooks intercepting/modifying agent behavior. Context injection, error recovery, output control, notifications.
22+ lifecycle hooks intercepting/modifying agent behavior via PreToolUse, PostToolUse, UserPromptSubmit, and more.
## STRUCTURE
```
hooks/
├── anthropic-context-window-limit-recovery/ # Auto-compact at token limit (556 lines)
├── auto-slash-command/ # Detect and execute /command patterns
├── auto-update-checker/ # Version notifications, startup toast
├── background-notification/ # OS notify on task complete
├── claude-code-hooks/ # settings.json PreToolUse/PostToolUse/etc (408 lines)
├── comment-checker/ # Prevent excessive AI comments
│ ├── filters/ # docstring, directive, bdd, shebang
│ └── output/ # XML builder, formatter
├── compaction-context-injector/ # Preserve context during compaction
├── directory-agents-injector/ # Auto-inject AGENTS.md
├── directory-readme-injector/ # Auto-inject README.md
├── edit-error-recovery/ # Recover from edit failures
├── empty-message-sanitizer/ # Sanitize empty messages
├── interactive-bash-session/ # Tmux session management
├── keyword-detector/ # ultrawork/search keyword activation
├── non-interactive-env/ # CI/headless handling
├── preemptive-compaction/ # Pre-emptive at 85% usage
├── prometheus-md-only/ # Restrict prometheus to read-only
├── ralph-loop/ # Self-referential dev loop
├── anthropic-context-window-limit-recovery/ # Auto-summarize at token limit (555 lines)
├── sisyphus-orchestrator/ # Main orchestration & agent delegation (677 lines)
├── ralph-loop/ # Self-referential dev loop (364 lines)
├── claude-code-hooks/ # settings.json hook compatibility layer
├── comment-checker/ # Prevents AI slop/excessive comments
├── auto-slash-command/ # Detects and executes /command patterns
├── rules-injector/ # Conditional rules from .claude/rules/
├── session-recovery/ # Recover from errors (432 lines)
├── sisyphus-orchestrator/ # Main orchestration hook (660 lines)
├── start-work/ # Initialize Sisyphus work session
├── task-resume-info/ # Track task resume state
├── think-mode/ # Auto-detect thinking triggers
├── thinking-block-validator/ # Validate thinking block format
├── agent-usage-reminder/ # Remind to use specialists
├── context-window-monitor.ts # Monitor usage (standalone)
├── session-notification.ts # OS notify on idle
├── todo-continuation-enforcer.ts # Force TODO completion (413 lines)
└── tool-output-truncator.ts # Truncate verbose outputs
├── directory-agents-injector/ # Auto-injects local AGENTS.md files
├── directory-readme-injector/ # Auto-injects local README.md files
├── preemptive-compaction/ # Triggers summary at 85% usage
├── edit-error-recovery/ # Recovers from tool execution failures
├── thinking-block-validator/ # Ensures valid <thinking> format
├── context-window-monitor.ts # Reminds agents of remaining headroom
├── session-recovery/ # Auto-recovers from session crashes
├── start-work/ # Initializes work sessions (ulw/ulw)
├── think-mode/ # Dynamic thinking budget adjustment
├── background-notification/ # OS notification on task completion
├── todo-continuation-enforcer.ts # Force completion of [ ] items
└── tool-output-truncator.ts # Prevents context bloat from verbose tools
```
## HOOK EVENTS
| Event | Timing | Can Block | Use Case |
|-------|--------|-----------|----------|
| PreToolUse | Before tool | Yes | Validate, modify input |
| PostToolUse | After tool | No | Add context, warnings |
| UserPromptSubmit | On prompt | Yes | Inject messages, block |
| Stop | Session idle | No | Inject follow-ups |
| onSummarize | Compaction | No | Preserve context |
| Event | Timing | Can Block | Description |
|-------|--------|-----------|-------------|
| PreToolUse | Before tool | Yes | Validate/modify inputs (e.g., directory-agents-injector) |
| PostToolUse | After tool | No | Append context/warnings (e.g., edit-error-recovery) |
| UserPromptSubmit | On prompt | Yes | Filter/modify user input (e.g., keyword-detector) |
| Stop | Session idle | No | Auto-continue tasks (e.g., todo-continuation-enforcer) |
| onSummarize | Compaction | No | State preservation (e.g., compaction-context-injector) |
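The "Can Block" column can be sketched as follows. This is a minimal shape sketch, assuming simplified handler signatures — the real types live in `@opencode-ai/plugin` and may differ:

```typescript
// Hypothetical simplified shapes for illustration only.
interface PreToolUseResult {
  blocked?: boolean
  message?: string
}

// PreToolUse CAN block: validate or modify input before the tool runs.
function preToolUse(tool: string, args: Record<string, unknown>): PreToolUseResult {
  if (tool === "Write" && typeof args.filePath !== "string") {
    // Blocking must come with an actionable message (see ANTI-PATTERNS).
    return { blocked: true, message: "Write requires a string filePath" }
  }
  return {}
}

// PostToolUse CANNOT block: it may only append context or warnings.
function postToolUse(tool: string, outputLength: number): string | undefined {
  if (outputLength > 10_000) return "Warning: large output was truncated"
  return undefined
}
```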
## HOW TO ADD
1. Create `src/hooks/my-hook/`
2. Files: `index.ts` (createMyHook), `constants.ts`, `types.ts` (optional)
3. Return: `{ PreToolUse?, PostToolUse?, UserPromptSubmit?, Stop?, onSummarize? }`
4. Export from `src/hooks/index.ts`
1. Create `src/hooks/name/` with `index.ts` factory (e.g., `createMyHook`).
2. Implement `PreToolUse`, `PostToolUse`, `UserPromptSubmit`, `Stop`, or `onSummarize`.
3. Register in `src/hooks/index.ts`.
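A minimal factory following these steps might look like the sketch below — handler names match the events table, but the `PluginInput` and return types are simplified stand-ins for the SDK's real types, and the injected-once tracking is illustrative:

```typescript
// Sketch only: real hooks receive ctx: PluginInput and richer event payloads.
function createMyHook(log: string[] = []) {
  const seen = new Set<string>() // once-per-session tracking (see PATTERNS)
  return {
    UserPromptSubmit: async ({ sessionID }: { sessionID: string }) => {
      if (seen.has(sessionID)) return // avoid duplicate injection
      seen.add(sessionID)
      log.push(`injected:${sessionID}`)
    },
  }
}
```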
## PATTERNS
- **Storage**: JSON file for persistent state across sessions
- **Once-per-session**: Track injected paths in Set
- **Message injection**: Return `{ messages: [...] }`
- **Blocking**: Return `{ blocked: true, message: "..." }` from PreToolUse
- **Context Injection**: Use `PreToolUse` to prepend instructions to tool inputs.
- **Resilience**: Implement `edit-error-recovery` style logic to retry failed tools.
- **Telegraphic UI**: Use `PostToolUse` to add brief warnings without bloating transcript.
- **Persistence**: Prefer local file storage for state that must persist across sessions.
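The file-backed state pattern can be sketched as below. Path and state shape are illustrative, not the plugin's actual files; the key point is that a corrupt or missing state file degrades to an empty object instead of throwing:

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs"
import { join } from "node:path"
import { tmpdir } from "node:os"

// Hypothetical state file location for illustration.
const STATE_PATH = join(tmpdir(), "my-hook-state.json")

function loadState(path: string): Record<string, unknown> {
  if (!existsSync(path)) return {}
  try {
    return JSON.parse(readFileSync(path, "utf8"))
  } catch {
    return {} // corrupt state must never crash the session
  }
}

function saveState(path: string, state: Record<string, unknown>): void {
  writeFileSync(path, JSON.stringify(state, null, 2))
}
```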
## ANTI-PATTERNS
- Heavy computation in PreToolUse (slows every tool call)
- Blocking without actionable message
- Duplicate injection (track what's injected)
- Missing try/catch (don't crash session)
- **Blocking**: Avoid blocking tools unless critical (use warnings in `PostToolUse` instead).
- **Latency**: No heavy computation in `PreToolUse`; it slows every interaction.
- **Redundancy**: Don't inject the same file multiple times; track state in session storage.
- **Prose**: Never use verbose prose in hook outputs; keep it technical and brief.
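The "never crash the session" rule implies wrapping every handler defensively. A hedged sketch, assuming a simplified handler shape and an illustrative error sink (not the plugin's real logging API):

```typescript
type Hook = (input: { sessionID: string }) => Promise<void>

// Wrap a handler so any thrown error is captured instead of propagating
// into the agent loop and killing the session.
function safe(name: string, fn: Hook, errors: string[] = []): Hook {
  return async (input) => {
    try {
      await fn(input)
    } catch (err) {
      errors.push(`[${name}] ${err instanceof Error ? err.message : String(err)}`)
    }
  }
}
```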

View File

@@ -0,0 +1,24 @@
import { describe, test, expect } from "bun:test"
import { getLatestVersion } from "./checker"
describe("auto-update-checker/checker", () => {
describe("getLatestVersion", () => {
test("accepts channel parameter", async () => {
const result = await getLatestVersion("beta")
expect(typeof result === "string" || result === null).toBe(true)
})
test("accepts latest channel", async () => {
const result = await getLatestVersion("latest")
expect(typeof result === "string" || result === null).toBe(true)
})
test("works without channel (defaults to latest)", async () => {
const result = await getLatestVersion()
expect(typeof result === "string" || result === null).toBe(true)
})
})
})

View File

@@ -231,7 +231,7 @@ export function updatePinnedVersion(configPath: string, oldEntry: string, newVer
}
}
export async function getLatestVersion(): Promise<string | null> {
export async function getLatestVersion(channel: string = "latest"): Promise<string | null> {
const controller = new AbortController()
const timeoutId = setTimeout(() => controller.abort(), NPM_FETCH_TIMEOUT)
@@ -244,7 +244,7 @@ export async function getLatestVersion(): Promise<string | null> {
if (!response.ok) return null
const data = (await response.json()) as NpmDistTags
return data.latest ?? null
return data[channel] ?? data.latest ?? null
} catch {
return null
} finally {
@@ -264,24 +264,21 @@ export async function checkForUpdate(directory: string): Promise<UpdateCheckResu
return { needsUpdate: false, currentVersion: null, latestVersion: null, isLocalDev: false, isPinned: false }
}
if (pluginInfo.isPinned) {
log(`[auto-update-checker] Version pinned to ${pluginInfo.pinnedVersion}, skipping update check`)
return { needsUpdate: false, currentVersion: pluginInfo.pinnedVersion, latestVersion: null, isLocalDev: false, isPinned: true }
}
const currentVersion = getCachedVersion()
const currentVersion = getCachedVersion() ?? pluginInfo.pinnedVersion
if (!currentVersion) {
log("[auto-update-checker] No cached version found")
return { needsUpdate: false, currentVersion: null, latestVersion: null, isLocalDev: false, isPinned: false }
}
const latestVersion = await getLatestVersion()
const { extractChannel } = await import("./index")
const channel = extractChannel(pluginInfo.pinnedVersion ?? currentVersion)
const latestVersion = await getLatestVersion(channel)
if (!latestVersion) {
log("[auto-update-checker] Failed to fetch latest version")
return { needsUpdate: false, currentVersion, latestVersion: null, isLocalDev: false, isPinned: false }
log("[auto-update-checker] Failed to fetch latest version for channel:", channel)
return { needsUpdate: false, currentVersion, latestVersion: null, isLocalDev: false, isPinned: pluginInfo.isPinned }
}
const needsUpdate = currentVersion !== latestVersion
log(`[auto-update-checker] Current: ${currentVersion}, Latest: ${latestVersion}, NeedsUpdate: ${needsUpdate}`)
return { needsUpdate, currentVersion, latestVersion, isLocalDev: false, isPinned: false }
log(`[auto-update-checker] Current: ${currentVersion}, Latest (${channel}): ${latestVersion}, NeedsUpdate: ${needsUpdate}`)
return { needsUpdate, currentVersion, latestVersion, isLocalDev: false, isPinned: pluginInfo.isPinned }
}

View File

@@ -1,5 +1,5 @@
import { describe, test, expect } from "bun:test"
import { isPrereleaseVersion, isDistTag, isPrereleaseOrDistTag } from "./index"
import { isPrereleaseVersion, isDistTag, isPrereleaseOrDistTag, extractChannel } from "./index"
describe("auto-update-checker", () => {
describe("isPrereleaseVersion", () => {
@@ -150,4 +150,105 @@ describe("auto-update-checker", () => {
expect(result).toBe(false)
})
})
describe("extractChannel", () => {
test("extracts beta from dist-tag", () => {
// #given beta dist-tag
const version = "beta"
// #when extracting channel
const result = extractChannel(version)
// #then returns beta
expect(result).toBe("beta")
})
test("extracts next from dist-tag", () => {
// #given next dist-tag
const version = "next"
// #when extracting channel
const result = extractChannel(version)
// #then returns next
expect(result).toBe("next")
})
test("extracts canary from dist-tag", () => {
// #given canary dist-tag
const version = "canary"
// #when extracting channel
const result = extractChannel(version)
// #then returns canary
expect(result).toBe("canary")
})
test("extracts beta from prerelease version", () => {
// #given beta prerelease version
const version = "3.0.0-beta.1"
// #when extracting channel
const result = extractChannel(version)
// #then returns beta
expect(result).toBe("beta")
})
test("extracts alpha from prerelease version", () => {
// #given alpha prerelease version
const version = "1.0.0-alpha"
// #when extracting channel
const result = extractChannel(version)
// #then returns alpha
expect(result).toBe("alpha")
})
test("extracts rc from prerelease version", () => {
// #given rc prerelease version
const version = "2.0.0-rc.1"
// #when extracting channel
const result = extractChannel(version)
// #then returns rc
expect(result).toBe("rc")
})
test("returns latest for stable version", () => {
// #given stable version
const version = "2.14.0"
// #when extracting channel
const result = extractChannel(version)
// #then returns latest
expect(result).toBe("latest")
})
test("returns latest for null", () => {
// #given null version
const version = null
// #when extracting channel
const result = extractChannel(version)
// #then returns latest
expect(result).toBe("latest")
})
test("handles complex prerelease identifiers", () => {
// #given complex prerelease
const version = "3.0.0-beta.1.experimental"
// #when extracting channel
const result = extractChannel(version)
// #then returns beta
expect(result).toBe("beta")
})
})
})

View File

@@ -23,6 +23,26 @@ export function isPrereleaseOrDistTag(pinnedVersion: string | null): boolean {
return isPrereleaseVersion(pinnedVersion) || isDistTag(pinnedVersion)
}
export function extractChannel(version: string | null): string {
if (!version) return "latest"
if (isDistTag(version)) {
return version
}
if (isPrereleaseVersion(version)) {
const prereleasePart = version.split("-")[1]
if (prereleasePart) {
const channelMatch = prereleasePart.match(/^(alpha|beta|rc|canary|next)/)
if (channelMatch) {
return channelMatch[1]
}
}
}
return "latest"
}
export function createAutoUpdateCheckerHook(ctx: PluginInput, options: AutoUpdateCheckerOptions = {}) {
const { showStartupToast = true, isSisyphusEnabled = false, autoUpdate = true } = options
@@ -94,18 +114,19 @@ async function runBackgroundUpdateCheck(
return
}
const latestVersion = await getLatestVersion()
const channel = extractChannel(pluginInfo.pinnedVersion ?? currentVersion)
const latestVersion = await getLatestVersion(channel)
if (!latestVersion) {
log("[auto-update-checker] Failed to fetch latest version")
log("[auto-update-checker] Failed to fetch latest version for channel:", channel)
return
}
if (currentVersion === latestVersion) {
log("[auto-update-checker] Already on latest version")
log("[auto-update-checker] Already on latest version for channel:", channel)
return
}
log(`[auto-update-checker] Update available: ${currentVersion} → ${latestVersion}`)
log(`[auto-update-checker] Update available (${channel}): ${currentVersion} → ${latestVersion}`)
if (!autoUpdate) {
await showUpdateAvailableToast(ctx, latestVersion, getToastMessage)
@@ -113,18 +134,7 @@ async function runBackgroundUpdateCheck(
return
}
// Check if current version is a prerelease - don't auto-downgrade prerelease to stable
if (isPrereleaseVersion(currentVersion)) {
log(`[auto-update-checker] Skipping auto-update for prerelease version: ${currentVersion}`)
return
}
if (pluginInfo.isPinned) {
if (isPrereleaseOrDistTag(pluginInfo.pinnedVersion)) {
log(`[auto-update-checker] Skipping auto-update for prerelease/dist-tag: ${pluginInfo.pinnedVersion}`)
return
}
const updated = updatePinnedVersion(pluginInfo.configPath, pluginInfo.entry, latestVersion)
if (!updated) {
await showUpdateAvailableToast(ctx, latestVersion, getToastMessage)

View File

@@ -0,0 +1,85 @@
import type { BackgroundManager } from "../../features/background-agent"
interface CompactingInput {
sessionID: string
}
interface CompactingOutput {
context: string[]
prompt?: string
}
/**
* Background agent compaction hook - preserves task state during context compaction.
*
* When OpenCode compacts session context to save tokens, this hook injects
* information about running and recently completed background tasks so the
* agent doesn't lose awareness of delegated work.
*/
export function createBackgroundCompactionHook(manager: BackgroundManager) {
return {
"experimental.session.compacting": async (
input: CompactingInput,
output: CompactingOutput
): Promise<void> => {
const { sessionID } = input
// Get running tasks for this session
const running = manager.getRunningTasks()
.filter(t => t.parentSessionID === sessionID)
.map(t => ({
id: t.id,
agent: t.agent,
description: t.description,
startedAt: t.startedAt,
}))
// Get recently completed tasks (still in memory within 5-min retention)
const completed = manager.getCompletedTasks()
.filter(t => t.parentSessionID === sessionID)
.slice(-10) // Last 10 completed
.map(t => ({
id: t.id,
agent: t.agent,
description: t.description,
status: t.status,
}))
// Early exit if nothing to preserve
if (running.length === 0 && completed.length === 0) return
const sections: string[] = ["<background-tasks>"]
// Running tasks section
if (running.length > 0) {
sections.push("## Running Background Tasks")
sections.push("")
for (const t of running) {
const elapsed = Math.floor((Date.now() - t.startedAt.getTime()) / 1000)
sections.push(`- **\`${t.id}\`** (${t.agent}): ${t.description} [${elapsed}s elapsed]`)
}
sections.push("")
sections.push("> **Note:** You WILL be notified when tasks complete.")
sections.push("> Do NOT poll - continue productive work.")
sections.push("")
}
// Completed tasks section
if (completed.length > 0) {
sections.push("## Recently Completed Tasks")
sections.push("")
for (const t of completed) {
const statusEmoji = t.status === "completed" ? "✅" : t.status === "error" ? "❌" : "⏱️"
sections.push(`- ${statusEmoji} **\`${t.id}\`**: ${t.description}`)
}
sections.push("")
}
sections.push("## Retrieval")
sections.push('Use `background_output(task_id="<id>")` to retrieve task results.')
sections.push("</background-tasks>")
output.context.push(sections.join("\n"))
}
}
}

View File

@@ -9,6 +9,12 @@ interface EventInput {
event: Event
}
/**
* Background notification hook - handles event routing to BackgroundManager.
*
* Notifications are now delivered directly via session.prompt({ noReply })
* from the manager, so this hook only needs to handle event routing.
*/
export function createBackgroundNotificationHook(manager: BackgroundManager) {
const eventHandler = async ({ event }: EventInput) => {
manager.handleEvent(event)

View File

@@ -27,7 +27,6 @@ import { cacheToolInput, getToolInput } from "./tool-input-cache"
import { recordToolUse, recordToolResult, getTranscriptPath, recordUserMessage } from "./transcript"
import type { PluginConfig } from "./types"
import { log, isHookDisabled } from "../../shared"
import { detectKeywordsWithType, removeCodeBlocks } from "../keyword-detector"
import type { ContextCollector } from "../../features/context-injector"
const sessionFirstMessageProcessed = new Set<string>()
@@ -142,25 +141,9 @@ export function createClaudeCodeHooksHook(
return
}
const keywordMessages: string[] = []
if (!config.keywordDetectorDisabled) {
const detectedKeywords = detectKeywordsWithType(removeCodeBlocks(prompt), input.agent)
keywordMessages.push(...detectedKeywords.map((k) => k.message))
if (keywordMessages.length > 0) {
log("[claude-code-hooks] Detected keywords", {
sessionID: input.sessionID,
keywordCount: keywordMessages.length,
types: detectedKeywords.map((k) => k.type),
})
}
}
const allMessages = [...keywordMessages, ...result.messages]
if (allMessages.length > 0) {
const hookContent = allMessages.join("\n\n")
log(`[claude-code-hooks] Injecting ${allMessages.length} messages (${keywordMessages.length} keyword + ${result.messages.length} hook)`, { sessionID: input.sessionID, contentLength: hookContent.length, isFirstMessage })
if (result.messages.length > 0) {
const hookContent = result.messages.join("\n\n")
log(`[claude-code-hooks] Injecting ${result.messages.length} hook messages`, { sessionID: input.sessionID, contentLength: hookContent.length, isFirstMessage })
if (isFirstMessage) {
const idx = output.parts.findIndex((p) => p.type === "text" && p.text)
@@ -202,6 +185,30 @@ export function createClaudeCodeHooksHook(
input: { tool: string; sessionID: string; callID: string },
output: { args: Record<string, unknown> }
): Promise<void> => {
if (input.tool === "todowrite" && typeof output.args.todos === "string") {
let parsed: unknown
try {
parsed = JSON.parse(output.args.todos)
} catch (e) {
throw new Error(
`[todowrite ERROR] Failed to parse todos string as JSON. ` +
`Received: ${output.args.todos.length > 100 ? output.args.todos.slice(0, 100) + '...' : output.args.todos} ` +
`Expected: Valid JSON array. Pass todos as an array, not a string.`
)
}
if (!Array.isArray(parsed)) {
throw new Error(
`[todowrite ERROR] Parsed JSON is not an array. ` +
`Received type: ${typeof parsed}. ` +
`Expected: Array of todo objects. Pass todos as [{id, content, status, priority}, ...].`
)
}
output.args.todos = parsed
log("todowrite: parsed todos string to array", { sessionID: input.sessionID })
}
const claudeConfig = await loadClaudeHooksConfig()
const extendedConfig = await loadPluginExtendedConfig()

View File

@@ -3,6 +3,7 @@ import { existsSync, mkdirSync, chmodSync, unlinkSync, appendFileSync } from "fs
import { join } from "path"
import { homedir, tmpdir } from "os"
import { createRequire } from "module"
import { extractZip } from "../../shared"
const DEBUG = process.env.COMMENT_CHECKER_DEBUG === "1"
const DEBUG_FILE = join(tmpdir(), "comment-checker-debug.log")
@@ -95,29 +96,7 @@ async function extractTarGz(archivePath: string, destDir: string): Promise<void>
}
}
/**
* Extract zip archive using system commands.
*/
async function extractZip(archivePath: string, destDir: string): Promise<void> {
debugLog("Extracting zip:", archivePath, "to", destDir)
const proc = process.platform === "win32"
? spawn(["powershell", "-command", `Expand-Archive -Path '${archivePath}' -DestinationPath '${destDir}' -Force`], {
stdout: "pipe",
stderr: "pipe",
})
: spawn(["unzip", "-o", archivePath, "-d", destDir], {
stdout: "pipe",
stderr: "pipe",
})
const exitCode = await proc.exited
if (exitCode !== 0) {
const stderr = await new Response(proc.stderr).text()
throw new Error(`zip extraction failed (exit ${exitCode}): ${stderr}`)
}
}
/**
* Download the comment-checker binary from GitHub Releases.

View File

@@ -14,6 +14,7 @@ export { createThinkModeHook } from "./think-mode";
export { createClaudeCodeHooksHook } from "./claude-code-hooks";
export { createRulesInjectorHook } from "./rules-injector";
export { createBackgroundNotificationHook } from "./background-notification"
export { createBackgroundCompactionHook } from "./background-compaction"
export { createAutoUpdateCheckerHook } from "./auto-update-checker";
export { createAgentUsageReminderHook } from "./agent-usage-reminder";

View File

@@ -1,7 +1,95 @@
import { describe, expect, test, beforeEach, afterEach, spyOn } from "bun:test"
import { createKeywordDetectorHook } from "./index"
import { setMainSession } from "../../features/claude-code-session-state"
import { ContextCollector } from "../../features/context-injector"
import * as sharedModule from "../../shared"
import * as sessionState from "../../features/claude-code-session-state"
describe("keyword-detector registers to ContextCollector", () => {
let logCalls: Array<{ msg: string; data?: unknown }>
let logSpy: ReturnType<typeof spyOn>
let getMainSessionSpy: ReturnType<typeof spyOn>
beforeEach(() => {
logCalls = []
logSpy = spyOn(sharedModule, "log").mockImplementation((msg: string, data?: unknown) => {
logCalls.push({ msg, data })
})
})
afterEach(() => {
logSpy?.mockRestore()
getMainSessionSpy?.mockRestore()
})
function createMockPluginInput() {
return {
client: {
tui: {
showToast: async () => {},
},
},
} as any
}
test("should register ultrawork keyword to ContextCollector", async () => {
// #given - a fresh ContextCollector and keyword-detector hook
const collector = new ContextCollector()
const hook = createKeywordDetectorHook(createMockPluginInput(), collector)
const sessionID = "test-session-123"
const output = {
message: {} as Record<string, unknown>,
parts: [{ type: "text", text: "ultrawork do something" }],
}
// #when - keyword detection runs
await hook["chat.message"]({ sessionID }, output)
// #then - ultrawork context should be registered in collector
expect(collector.hasPending(sessionID)).toBe(true)
const pending = collector.getPending(sessionID)
expect(pending.entries.length).toBeGreaterThan(0)
expect(pending.entries[0].source).toBe("keyword-detector")
expect(pending.entries[0].id).toBe("keyword-ultrawork")
})
test("should register search keyword to ContextCollector", async () => {
// #given - mock getMainSessionID to return our session (isolate from global state)
const collector = new ContextCollector()
const sessionID = "search-test-session"
getMainSessionSpy = spyOn(sessionState, "getMainSessionID").mockReturnValue(sessionID)
const hook = createKeywordDetectorHook(createMockPluginInput(), collector)
const output = {
message: {} as Record<string, unknown>,
parts: [{ type: "text", text: "search for the bug" }],
}
// #when - keyword detection runs
await hook["chat.message"]({ sessionID }, output)
// #then - search context should be registered in collector
expect(collector.hasPending(sessionID)).toBe(true)
const pending = collector.getPending(sessionID)
expect(pending.entries.some((e) => e.id === "keyword-search")).toBe(true)
})
test("should NOT register to collector when no keywords detected", async () => {
// #given - no keywords in message
const collector = new ContextCollector()
const hook = createKeywordDetectorHook(createMockPluginInput(), collector)
const sessionID = "test-session"
const output = {
message: {} as Record<string, unknown>,
parts: [{ type: "text", text: "just a normal message" }],
}
// #when - keyword detection runs
await hook["chat.message"]({ sessionID }, output)
// #then - nothing should be registered
expect(collector.hasPending(sessionID)).toBe(false)
})
})
describe("keyword-detector session filtering", () => {
let logCalls: Array<{ msg: string; data?: unknown }>
@@ -122,4 +210,26 @@ describe("keyword-detector session filtering", () => {
expect(output.message.variant).toBe("max")
expect(toastCalls).toContain("Ultrawork Mode Activated")
})
test("should not override existing variant", async () => {
// #given - main session set with pre-existing variant
setMainSession("main-123")
const toastCalls: string[] = []
const hook = createKeywordDetectorHook(createMockPluginInput({ toastCalls }))
const output = {
message: { variant: "low" } as Record<string, unknown>,
parts: [{ type: "text", text: "ultrawork mode" }],
}
// #when - ultrawork keyword triggers
await hook["chat.message"](
{ sessionID: "main-123" },
output
)
// #then - existing variant should remain
expect(output.message.variant).toBe("low")
expect(toastCalls).toContain("Ultrawork Mode Activated")
})
})

View File

@@ -2,12 +2,13 @@ import type { PluginInput } from "@opencode-ai/plugin"
import { detectKeywordsWithType, extractPromptText, removeCodeBlocks } from "./detector"
import { log } from "../../shared"
import { getMainSessionID } from "../../features/claude-code-session-state"
import type { ContextCollector } from "../../features/context-injector"
export * from "./detector"
export * from "./constants"
export * from "./types"
export function createKeywordDetectorHook(ctx: PluginInput) {
export function createKeywordDetectorHook(ctx: PluginInput, collector?: ContextCollector) {
return {
"chat.message": async (
input: {
@@ -28,8 +29,6 @@ export function createKeywordDetectorHook(ctx: PluginInput) {
return
}
// Only ultrawork keywords work in non-main sessions
// Other keywords (search, analyze, etc.) only work in main sessions
const mainSessionID = getMainSessionID()
const isNonMainSession = mainSessionID && input.sessionID !== mainSessionID
@@ -48,7 +47,9 @@ export function createKeywordDetectorHook(ctx: PluginInput) {
if (hasUltrawork) {
log(`[keyword-detector] Ultrawork mode activated`, { sessionID: input.sessionID })
output.message.variant = "max"
if (output.message.variant === undefined) {
output.message.variant = "max"
}
ctx.client.tui
.showToast({
@@ -64,6 +65,17 @@ export function createKeywordDetectorHook(ctx: PluginInput) {
)
}
if (collector) {
for (const keyword of detectedKeywords) {
collector.register(input.sessionID, {
id: `keyword-${keyword.type}`,
source: "keyword-detector",
content: keyword.message,
priority: keyword.type === "ultrawork" ? "critical" : "high",
})
}
}
log(`[keyword-detector] Detected ${detectedKeywords.length} keywords`, {
sessionID: input.sessionID,
types: detectedKeywords.map((k) => k.type),

View File

@@ -4,7 +4,7 @@ export const PROMETHEUS_AGENTS = ["Prometheus (Planner)"]
export const ALLOWED_EXTENSIONS = [".md"]
export const ALLOWED_PATH_PREFIX = ".sisyphus/"
export const ALLOWED_PATH_PREFIX = ".sisyphus"
export const BLOCKED_TOOLS = ["Write", "Edit", "write", "edit"]

View File

@@ -70,7 +70,7 @@ describe("prometheus-md-only", () => {
callID: "call-1",
}
const output = {
args: { filePath: "/project/.sisyphus/plans/work-plan.md" },
args: { filePath: "/tmp/test/.sisyphus/plans/work-plan.md" },
}
// #when / #then
@@ -295,4 +295,191 @@ describe("prometheus-md-only", () => {
).resolves.toBeUndefined()
})
})
describe("cross-platform path validation", () => {
beforeEach(() => {
setupMessageStorage(TEST_SESSION_ID, "Prometheus (Planner)")
})
test("should allow Windows-style backslash paths under .sisyphus/", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: ".sisyphus\\plans\\work-plan.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should allow mixed separator paths under .sisyphus/", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: ".sisyphus\\plans/work-plan.MD" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should allow uppercase .MD extension", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: ".sisyphus/plans/work-plan.MD" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should block paths outside workspace root even if containing .sisyphus", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "/other/project/.sisyphus/plans/x.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).rejects.toThrow("can only write/edit .md files inside .sisyphus/")
})
test("should allow nested .sisyphus directories (ctx.directory may be parent)", async () => {
// #given - when ctx.directory is parent of actual project, path includes project name
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "src/.sisyphus/plans/x.md" },
}
// #when / #then - should allow because .sisyphus is in path
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should block path traversal attempts", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: ".sisyphus/../secrets.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).rejects.toThrow("can only write/edit .md files inside .sisyphus/")
})
test("should allow case-insensitive .SISYPHUS directory", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: ".SISYPHUS/plans/work-plan.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should allow nested project path with .sisyphus (Windows real-world case)", async () => {
// #given - simulates when ctx.directory is parent of actual project
// User reported: xauusd-dxy-plan\.sisyphus\drafts\supabase-email-templates.md
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "xauusd-dxy-plan\\.sisyphus\\drafts\\supabase-email-templates.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should allow nested project path with mixed separators", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "my-project/.sisyphus\\plans/task.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should block nested project path without .sisyphus", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "my-project\\src\\code.ts" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).rejects.toThrow("can only write/edit .md files")
})
})
})

View File

@@ -1,16 +1,49 @@
import type { PluginInput } from "@opencode-ai/plugin"
import { existsSync, readdirSync } from "node:fs"
import { join } from "node:path"
import { join, resolve, relative, isAbsolute } from "node:path"
import { HOOK_NAME, PROMETHEUS_AGENTS, ALLOWED_EXTENSIONS, ALLOWED_PATH_PREFIX, BLOCKED_TOOLS, PLANNING_CONSULT_WARNING } from "./constants"
import { findNearestMessageWithFields, MESSAGE_STORAGE } from "../../features/hook-message-injector"
import { findNearestMessageWithFields, findFirstMessageWithAgent, MESSAGE_STORAGE } from "../../features/hook-message-injector"
import { getSessionAgent } from "../../features/claude-code-session-state"
import { log } from "../../shared/logger"
export * from "./constants"
function isAllowedFile(filePath: string): boolean {
const hasAllowedExtension = ALLOWED_EXTENSIONS.some(ext => filePath.endsWith(ext))
const isInAllowedPath = filePath.includes(ALLOWED_PATH_PREFIX)
return hasAllowedExtension && isInAllowedPath
/**
* Cross-platform path validator for Prometheus file writes.
* Uses path.resolve/relative instead of string matching to handle:
* - Windows backslashes (e.g., .sisyphus\\plans\\x.md)
* - Mixed separators (e.g., .sisyphus\\plans/x.md)
* - Case-insensitive directory/extension matching
* - Workspace confinement (blocks paths outside root or via traversal)
* - Nested project paths (e.g., parent/.sisyphus/... when ctx.directory is parent)
*/
function isAllowedFile(filePath: string, workspaceRoot: string): boolean {
// 1. Resolve to absolute path
const resolved = resolve(workspaceRoot, filePath)
// 2. Get relative path from workspace root
const rel = relative(workspaceRoot, resolved)
// 3. Reject if escapes root (starts with ".." or is absolute)
if (rel.startsWith("..") || isAbsolute(rel)) {
return false
}
// 4. Check if .sisyphus/ or .sisyphus\ exists anywhere in the path (case-insensitive)
// This handles both direct paths (.sisyphus/x.md) and nested paths (project/.sisyphus/x.md)
if (!/\.sisyphus[/\\]/i.test(rel)) {
return false
}
// 5. Check extension matches one of ALLOWED_EXTENSIONS (case-insensitive)
const hasAllowedExtension = ALLOWED_EXTENSIONS.some(
ext => resolved.toLowerCase().endsWith(ext.toLowerCase())
)
if (!hasAllowedExtension) {
return false
}
return true
}
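The five numbered steps in the validator above can be exercised standalone. A minimal sketch, assuming `ALLOWED_EXTENSIONS` is just `[".md"]` (the real list lives in `./constants`):

```typescript
import { resolve, relative, isAbsolute } from "node:path";

// Assumed for illustration; the real value comes from ./constants.
const ALLOWED_EXTENSIONS = [".md"];

// Mirrors the diff's steps: resolve, confine to the workspace root,
// require a .sisyphus/ (or .sisyphus\) segment, check the extension.
function isAllowedFile(filePath: string, workspaceRoot: string): boolean {
  const resolved = resolve(workspaceRoot, filePath);
  const rel = relative(workspaceRoot, resolved);
  if (rel.startsWith("..") || isAbsolute(rel)) return false; // escapes root
  if (!/\.sisyphus[\/\\]/i.test(rel)) return false;          // not under .sisyphus
  return ALLOWED_EXTENSIONS.some(ext =>
    resolved.toLowerCase().endsWith(ext.toLowerCase())
  );
}

console.log(isAllowedFile(".sisyphus/plans/x.md", "/repo"));    // true
console.log(isAllowedFile(".sisyphus/../secrets.md", "/repo")); // false (traversal)
console.log(isAllowedFile("/other/.sisyphus/x.md", "/repo"));   // false (outside root)
```

Note that resolving before taking the relative path is what defeats `..` traversal: `".sisyphus/../secrets.md"` collapses to `secrets.md`, which no longer contains a `.sisyphus` segment.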
function getMessageDir(sessionID: string): string | null {
@@ -29,13 +62,17 @@ function getMessageDir(sessionID: string): string | null {
const TASK_TOOLS = ["sisyphus_task", "task", "call_omo_agent"]
function getAgentFromSession(sessionID: string): string | undefined {
function getAgentFromMessageFiles(sessionID: string): string | undefined {
const messageDir = getMessageDir(sessionID)
if (!messageDir) return undefined
return findNearestMessageWithFields(messageDir)?.agent
return findFirstMessageWithAgent(messageDir) ?? findNearestMessageWithFields(messageDir)?.agent
}
export function createPrometheusMdOnlyHook(_ctx: PluginInput) {
function getAgentFromSession(sessionID: string): string | undefined {
return getSessionAgent(sessionID) ?? getAgentFromMessageFiles(sessionID)
}
export function createPrometheusMdOnlyHook(ctx: PluginInput) {
return {
"tool.execute.before": async (
input: { tool: string; sessionID: string; callID: string },
@@ -72,7 +109,7 @@ export function createPrometheusMdOnlyHook(_ctx: PluginInput) {
return
}
if (!isAllowedFile(filePath)) {
if (!isAllowedFile(filePath, ctx.directory)) {
log(`[${HOOK_NAME}] Blocked: Prometheus can only write to .sisyphus/*.md`, {
sessionID: input.sessionID,
tool: toolName,

View File

@@ -591,6 +591,73 @@ describe("ralph-loop", () => {
expect(hook.getState()).toBeNull()
})
test("should allow starting new loop while previous loop is active (different session)", async () => {
// #given - active loop in session A
const hook = createRalphLoopHook(createMockPluginInput())
hook.startLoop("session-A", "First task", { maxIterations: 10 })
expect(hook.getState()?.session_id).toBe("session-A")
expect(hook.getState()?.prompt).toBe("First task")
// #when - start new loop in session B (without completing A)
hook.startLoop("session-B", "Second task", { maxIterations: 20 })
// #then - state should be overwritten with session B's loop
expect(hook.getState()?.session_id).toBe("session-B")
expect(hook.getState()?.prompt).toBe("Second task")
expect(hook.getState()?.max_iterations).toBe(20)
expect(hook.getState()?.iteration).toBe(1)
// #when - session B goes idle
await hook.event({
event: { type: "session.idle", properties: { sessionID: "session-B" } },
})
// #then - continuation should be injected for session B
expect(promptCalls.length).toBe(1)
expect(promptCalls[0].sessionID).toBe("session-B")
expect(promptCalls[0].text).toContain("Second task")
expect(promptCalls[0].text).toContain("2/20")
// #then - iteration incremented
expect(hook.getState()?.iteration).toBe(2)
})
test("should allow starting new loop in same session (restart)", async () => {
// #given - active loop in session A at iteration 5
const hook = createRalphLoopHook(createMockPluginInput())
hook.startLoop("session-A", "First task", { maxIterations: 10 })
// Simulate some iterations
await hook.event({
event: { type: "session.idle", properties: { sessionID: "session-A" } },
})
await hook.event({
event: { type: "session.idle", properties: { sessionID: "session-A" } },
})
expect(hook.getState()?.iteration).toBe(3)
expect(promptCalls.length).toBe(2)
// #when - start NEW loop in same session (restart)
hook.startLoop("session-A", "Restarted task", { maxIterations: 50 })
// #then - state should be reset to iteration 1 with new prompt
expect(hook.getState()?.session_id).toBe("session-A")
expect(hook.getState()?.prompt).toBe("Restarted task")
expect(hook.getState()?.max_iterations).toBe(50)
expect(hook.getState()?.iteration).toBe(1)
// #when - session goes idle
promptCalls = [] // Reset to check new continuation
await hook.event({
event: { type: "session.idle", properties: { sessionID: "session-A" } },
})
// #then - continuation should use new task
expect(promptCalls.length).toBe(1)
expect(promptCalls[0].text).toContain("Restarted task")
expect(promptCalls[0].text).toContain("2/50")
})
test("should check transcript BEFORE API to optimize performance", async () => {
// #given - transcript has completion promise
const transcriptPath = join(TEST_DIR, "transcript.jsonl")

View File

@@ -175,8 +175,8 @@ describe("sisyphus-orchestrator hook", () => {
output
)
// #then - output should be transformed (original output replaced)
expect(output.output).not.toContain("Task completed successfully")
// #then - output should be transformed (original output preserved for debugging)
expect(output.output).toContain("Task completed successfully")
expect(output.output).toContain("SUBAGENT WORK COMPLETED")
expect(output.output).toContain("test-plan")
expect(output.output).toContain("SUBAGENTS LIE")
@@ -506,6 +506,90 @@ describe("sisyphus-orchestrator hook", () => {
// #then
expect(output.output).toBe(originalOutput)
})
describe("cross-platform path validation (Windows support)", () => {
test("should NOT append reminder when orchestrator writes inside .sisyphus\\ (Windows backslash)", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const originalOutput = "File written successfully"
const output = {
title: "Write",
output: originalOutput,
metadata: { filePath: ".sisyphus\\plans\\work-plan.md" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toBe(originalOutput)
expect(output.output).not.toContain("DELEGATION REQUIRED")
})
test("should NOT append reminder when orchestrator writes inside .sisyphus with mixed separators", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const originalOutput = "File written successfully"
const output = {
title: "Write",
output: originalOutput,
metadata: { filePath: ".sisyphus\\plans/work-plan.md" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toBe(originalOutput)
expect(output.output).not.toContain("DELEGATION REQUIRED")
})
test("should NOT append reminder for absolute Windows path inside .sisyphus\\", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const originalOutput = "File written successfully"
const output = {
title: "Write",
output: originalOutput,
metadata: { filePath: "C:\\Users\\test\\project\\.sisyphus\\plans\\x.md" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toBe(originalOutput)
expect(output.output).not.toContain("DELEGATION REQUIRED")
})
test("should append reminder for Windows path outside .sisyphus\\", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Write",
output: "File written successfully",
metadata: { filePath: "C:\\Users\\test\\project\\src\\code.ts" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toContain("DELEGATION REQUIRED")
})
})
})
})

View File

@@ -14,7 +14,14 @@ import type { BackgroundManager } from "../../features/background-agent"
export const HOOK_NAME = "sisyphus-orchestrator"
const ALLOWED_PATH_PREFIX = ".sisyphus/"
/**
* Cross-platform check if a path is inside .sisyphus/ directory.
* Handles both forward slashes (Unix) and backslashes (Windows).
*/
function isSisyphusPath(filePath: string): boolean {
return /\.sisyphus[/\\]/.test(filePath)
}
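The character class `[/\\]` is what makes this one check cover both separator styles; a quick standalone demonstration (sample paths are illustrative):

```typescript
// Same predicate as in the diff: a literal ".sisyphus" followed by
// either a forward slash or a backslash, anywhere in the path.
function isSisyphusPath(filePath: string): boolean {
  return /\.sisyphus[\/\\]/.test(filePath);
}

console.log(isSisyphusPath(".sisyphus/plans/x.md"));             // true (Unix)
console.log(isSisyphusPath(".sisyphus\\plans\\x.md"));           // true (Windows)
console.log(isSisyphusPath("C:\\p\\.sisyphus\\plans\\x.md"));    // true (absolute Windows)
console.log(isSisyphusPath("src/sisyphus/x.md"));                // false (no leading dot)
```

The trailing separator in the pattern also means a bare `.sisyphus` filename with nothing after it does not match.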
const WRITE_EDIT_TOOLS = ["Write", "Edit", "write", "edit"]
const DIRECT_WORK_REMINDER = `
@@ -549,7 +556,7 @@ export function createSisyphusOrchestratorHook(
// Check Write/Edit tools for orchestrator - inject strong warning
if (WRITE_EDIT_TOOLS.includes(input.tool)) {
const filePath = (output.args.filePath ?? output.args.path ?? output.args.file) as string | undefined
if (filePath && !filePath.includes(ALLOWED_PATH_PREFIX)) {
if (filePath && !isSisyphusPath(filePath)) {
// Store filePath for use in tool.execute.after
if (input.callID) {
pendingFilePaths.set(input.callID, filePath)
@@ -593,7 +600,7 @@ export function createSisyphusOrchestratorHook(
if (!filePath) {
filePath = output.metadata?.filePath as string | undefined
}
if (filePath && !filePath.includes(ALLOWED_PATH_PREFIX)) {
if (filePath && !isSisyphusPath(filePath)) {
output.output = (output.output || "") + DIRECT_WORK_REMINDER
log(`[${HOOK_NAME}] Direct work reminder appended`, {
sessionID: input.sessionID,
@@ -633,10 +640,20 @@ export function createSisyphusOrchestratorHook(
})
}
// Preserve original subagent response - critical for debugging failed tasks
const originalResponse = output.output
output.output = `
## SUBAGENT WORK COMPLETED
${fileChanges}
---
**Subagent Response:**
${originalResponse}
<system-reminder>
${buildOrchestratorReminder(boulderState.plan_name, progress, subagentSessionId)}
</system-reminder>`

View File

@@ -548,4 +548,263 @@ describe("todo-continuation-enforcer", () => {
// #then - no continuation (abort error detected)
expect(promptCalls).toHaveLength(0)
})
test("should skip injection when abort detected via session.error event (event-based, primary)", async () => {
// #given - session with incomplete todos
const sessionID = "main-event-abort"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant" } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - abort error event fires
await hook.handler({
event: {
type: "session.error",
properties: { sessionID, error: { name: "MessageAbortedError" } },
},
})
// #when - session goes idle immediately after
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - no continuation (abort detected via event)
expect(promptCalls).toHaveLength(0)
})
test("should skip injection when AbortError detected via session.error event", async () => {
// #given - session with incomplete todos
const sessionID = "main-event-abort-dom"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant" } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - AbortError event fires
await hook.handler({
event: {
type: "session.error",
properties: { sessionID, error: { name: "AbortError" } },
},
})
// #when - session goes idle
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - no continuation (abort detected via event)
expect(promptCalls).toHaveLength(0)
})
test("should inject when abort flag is stale (>3s old)", async () => {
// #given - session with incomplete todos and old abort timestamp
const sessionID = "main-stale-abort"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant" } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - abort error fires
await hook.handler({
event: {
type: "session.error",
properties: { sessionID, error: { name: "MessageAbortedError" } },
},
})
// #when - wait >3s then idle fires
await new Promise(r => setTimeout(r, 3100))
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - continuation injected (abort flag is stale)
expect(promptCalls.length).toBeGreaterThan(0)
}, 10000)
test("should clear abort flag on user message activity", async () => {
// #given - session with abort detected
const sessionID = "main-clear-on-user"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant" } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - abort error fires
await hook.handler({
event: {
type: "session.error",
properties: { sessionID, error: { name: "MessageAbortedError" } },
},
})
// #when - user sends new message (clears abort flag)
await new Promise(r => setTimeout(r, 600))
await hook.handler({
event: {
type: "message.updated",
properties: { info: { sessionID, role: "user" } },
},
})
// #when - session goes idle
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - continuation injected (abort flag was cleared by user activity)
expect(promptCalls.length).toBeGreaterThan(0)
})
test("should clear abort flag on assistant message activity", async () => {
// #given - session with abort detected
const sessionID = "main-clear-on-assistant"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant" } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - abort error fires
await hook.handler({
event: {
type: "session.error",
properties: { sessionID, error: { name: "MessageAbortedError" } },
},
})
// #when - assistant starts responding (clears abort flag)
await hook.handler({
event: {
type: "message.updated",
properties: { info: { sessionID, role: "assistant" } },
},
})
// #when - session goes idle
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - continuation injected (abort flag was cleared by assistant activity)
expect(promptCalls.length).toBeGreaterThan(0)
})
test("should clear abort flag on tool execution", async () => {
// #given - session with abort detected
const sessionID = "main-clear-on-tool"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant" } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - abort error fires
await hook.handler({
event: {
type: "session.error",
properties: { sessionID, error: { name: "MessageAbortedError" } },
},
})
// #when - tool executes (clears abort flag)
await hook.handler({
event: {
type: "tool.execute.before",
properties: { sessionID },
},
})
// #when - session goes idle
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - continuation injected (abort flag was cleared by tool execution)
expect(promptCalls.length).toBeGreaterThan(0)
})
test("should use event-based detection even when API indicates no abort (event wins)", async () => {
// #given - session with abort event but API shows no error
const sessionID = "main-event-wins"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant" } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - abort error event fires (but API doesn't have it yet)
await hook.handler({
event: {
type: "session.error",
properties: { sessionID, error: { name: "MessageAbortedError" } },
},
})
// #when - session goes idle
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - no continuation (event-based detection wins over API)
expect(promptCalls).toHaveLength(0)
})
test("should use API fallback when event is missed but API shows abort", async () => {
// #given - session where event was missed but API shows abort
const sessionID = "main-api-fallback"
setMainSession(sessionID)
mockMessages = [
{ info: { id: "msg-1", role: "user" } },
{ info: { id: "msg-2", role: "assistant", error: { name: "MessageAbortedError" } } },
]
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
// #when - session goes idle without prior session.error event
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 3000))
// #then - no continuation (API fallback detected the abort)
expect(promptCalls).toHaveLength(0)
})
})

View File

@@ -36,6 +36,7 @@ interface SessionState {
countdownInterval?: ReturnType<typeof setInterval>
isRecovering?: boolean
countdownStartedAt?: number
abortDetectedAt?: number
}
const CONTINUATION_PROMPT = `[SYSTEM REMINDER - TODO CONTINUATION]
@@ -254,6 +255,13 @@ export function createTodoContinuationEnforcer(
const sessionID = props?.sessionID as string | undefined
if (!sessionID) return
const error = props?.error as { name?: string } | undefined
if (error?.name === "MessageAbortedError" || error?.name === "AbortError") {
const state = getState(sessionID)
state.abortDetectedAt = Date.now()
log(`[${HOOK_NAME}] Abort detected via session.error`, { sessionID, errorName: error.name })
}
cancelCountdown(sessionID)
log(`[${HOOK_NAME}] session.error`, { sessionID })
return
@@ -281,6 +289,18 @@ export function createTodoContinuationEnforcer(
return
}
// Check 1: Event-based abort detection (primary, most reliable)
if (state.abortDetectedAt) {
const timeSinceAbort = Date.now() - state.abortDetectedAt
const ABORT_WINDOW_MS = 3000
if (timeSinceAbort < ABORT_WINDOW_MS) {
log(`[${HOOK_NAME}] Skipped: abort detected via event ${timeSinceAbort}ms ago`, { sessionID })
state.abortDetectedAt = undefined
return
}
state.abortDetectedAt = undefined
}
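The hybrid detection above reduces to a small state machine: record the abort timestamp on `session.error`, clear it on any activity, and on idle honor it only within a freshness window. A minimal sketch with hypothetical handler names — only `ABORT_WINDOW_MS` and the error names are taken from the diff:

```typescript
const ABORT_WINDOW_MS = 3000;

interface SessionState { abortDetectedAt?: number }
const sessions = new Map<string, SessionState>();

function onSessionError(sessionID: string, errorName?: string): void {
  if (errorName === "MessageAbortedError" || errorName === "AbortError") {
    sessions.set(sessionID, { abortDetectedAt: Date.now() });
  }
}

function onActivity(sessionID: string): void {
  // Any user/assistant message or tool execution clears the flag.
  const state = sessions.get(sessionID);
  if (state) state.abortDetectedAt = undefined;
}

// Returns true when the idle handler should proceed to inject.
function shouldInjectOnIdle(sessionID: string, now = Date.now()): boolean {
  const state = sessions.get(sessionID);
  if (state?.abortDetectedAt !== undefined) {
    const fresh = now - state.abortDetectedAt < ABORT_WINDOW_MS;
    state.abortDetectedAt = undefined; // one-shot flag, consumed either way
    if (fresh) return false; // user just aborted (ESC ESC): stay quiet
  }
  return true; // stale or absent flag: fall through to the API check
}
```

Consuming the flag even when stale is what keeps one missed idle event from suppressing continuations forever.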
const hasRunningBgTasks = backgroundManager
? backgroundManager.getTasksByParentSession(sessionID).some(t => t.status === "running")
: false
@@ -290,6 +310,7 @@ export function createTodoContinuationEnforcer(
return
}
// Check 2: API-based abort detection (fallback, for cases where event was missed)
try {
const messagesResp = await ctx.client.session.messages({
path: { id: sessionID },
@@ -298,7 +319,7 @@ export function createTodoContinuationEnforcer(
const messages = (messagesResp as { data?: Array<{ info?: MessageInfo }> }).data ?? []
if (isLastAssistantMessageAborted(messages)) {
log(`[${HOOK_NAME}] Skipped: last assistant message was aborted`, { sessionID })
log(`[${HOOK_NAME}] Skipped: last assistant message was aborted (API fallback)`, { sessionID })
return
}
} catch (err) {
@@ -367,10 +388,13 @@ export function createTodoContinuationEnforcer(
return
}
}
if (state) state.abortDetectedAt = undefined
cancelCountdown(sessionID)
}
if (role === "assistant") {
const state = sessions.get(sessionID)
if (state) state.abortDetectedAt = undefined
cancelCountdown(sessionID)
}
return
@@ -382,6 +406,8 @@ export function createTodoContinuationEnforcer(
const role = info?.role as string | undefined
if (sessionID && role === "assistant") {
const state = sessions.get(sessionID)
if (state) state.abortDetectedAt = undefined
cancelCountdown(sessionID)
}
return
@@ -390,6 +416,8 @@ export function createTodoContinuationEnforcer(
if (event.type === "tool.execute.before" || event.type === "tool.execute.after") {
const sessionID = props?.sessionID as string | undefined
if (sessionID) {
const state = sessions.get(sessionID)
if (state) state.abortDetectedAt = undefined
cancelCountdown(sessionID)
}
return

View File

@@ -36,7 +36,8 @@ import {
createContextInjectorHook,
createContextInjectorMessagesTransformHook,
} from "./features/context-injector";
import { createGoogleAntigravityAuthPlugin } from "./auth/antigravity";
import { applyAgentVariant, resolveAgentVariant } from "./shared/agent-variant";
import { createFirstMessageVariantGate } from "./shared/first-message-variant";
import {
discoverUserClaudeSkills,
discoverProjectClaudeSkills,
@@ -49,6 +50,8 @@ import { getSystemMcpServerNames } from "./features/claude-code-mcp-loader";
import {
setMainSession,
getMainSessionID,
setSessionAgent,
clearSessionAgent,
} from "./features/claude-code-session-state";
import {
builtinTools,
@@ -63,6 +66,7 @@ import {
createSisyphusTask,
interactive_bash,
startTmuxCheck,
lspManager,
} from "./tools";
import { BackgroundManager } from "./features/background-agent";
import { SkillMcpManager } from "./features/skill-mcp-manager";
@@ -79,6 +83,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
const pluginConfig = loadPluginConfig(ctx.directory, ctx);
const disabledHooks = new Set(pluginConfig.disabled_hooks ?? []);
const firstMessageVariantGate = createFirstMessageVariantGate();
const isHookEnabled = (hookName: HookName) => !disabledHooks.has(hookName);
const modelCacheState = createModelCacheState();
@@ -164,7 +169,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
})
: null;
const keywordDetector = isHookEnabled("keyword-detector")
? createKeywordDetectorHook(ctx)
? createKeywordDetectorHook(ctx, contextCollector)
: null;
const contextInjector = createContextInjectorHook(contextCollector);
const contextInjectorMessagesTransform =
@@ -235,7 +240,9 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
const sisyphusTask = createSisyphusTask({
manager: backgroundManager,
client: ctx.client,
directory: ctx.directory,
userCategories: pluginConfig.categories,
gitMasterConfig: pluginConfig.git_master,
});
const disabledSkills = new Set(pluginConfig.disabled_skills ?? []);
const systemMcpNames = getSystemMcpServerNames();
@@ -286,10 +293,6 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
? createAutoSlashCommandHook({ skills: mergedSkills })
: null;
const googleAuthHooks = pluginConfig.google_auth !== false
? await createGoogleAntigravityAuthPlugin(ctx)
: null;
const configHandler = createConfigHandler({
ctx,
pluginConfig,
@@ -297,8 +300,6 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
});
return {
...(googleAuthHooks ? { auth: googleAuthHooks.auth } : {}),
tool: {
...builtinTools,
...backgroundTools,
@@ -312,8 +313,19 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
},
"chat.message": async (input, output) => {
await claudeCodeHooks["chat.message"]?.(input, output);
const message = (output as { message: { variant?: string } }).message
if (firstMessageVariantGate.shouldOverride(input.sessionID)) {
const variant = resolveAgentVariant(pluginConfig, input.agent)
if (variant !== undefined) {
message.variant = variant
}
firstMessageVariantGate.markApplied(input.sessionID)
} else {
applyAgentVariant(pluginConfig, input.agent, message)
}
await keywordDetector?.["chat.message"]?.(input, output);
await claudeCodeHooks["chat.message"]?.(input, output);
await contextInjector["chat.message"]?.(input, output);
await autoSlashCommand?.["chat.message"]?.(input, output);
await startWork?.["chat.message"]?.(input, output);
@@ -418,6 +430,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
if (!sessionInfo?.parentID) {
setMainSession(sessionInfo?.id);
}
firstMessageVariantGate.markSessionCreated(sessionInfo);
}
if (event.type === "session.deleted") {
@@ -426,7 +439,20 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
setMainSession(undefined);
}
if (sessionInfo?.id) {
clearSessionAgent(sessionInfo.id);
firstMessageVariantGate.clear(sessionInfo.id);
await skillMcpManager.disconnectSession(sessionInfo.id);
await lspManager.cleanupTempDirectoryClients();
}
}
if (event.type === "message.updated") {
const info = props?.info as Record<string, unknown> | undefined;
const sessionID = info?.sessionID as string | undefined;
const agent = info?.agent as string | undefined;
const role = info?.role as string | undefined;
if (sessionID && agent && role === "user") {
setSessionAgent(sessionID, agent);
}
}

src/plugin-config.test.ts (new file, 119 lines)
View File

@@ -0,0 +1,119 @@
import { describe, expect, it } from "bun:test";
import { mergeConfigs } from "./plugin-config";
import type { OhMyOpenCodeConfig } from "./config";
describe("mergeConfigs", () => {
describe("categories merging", () => {
// #given base config has categories, override has different categories
// #when merging configs
// #then should deep merge categories, not override completely
it("should deep merge categories from base and override", () => {
const base = {
categories: {
general: {
model: "openai/gpt-5.2",
temperature: 0.5,
},
quick: {
model: "anthropic/claude-haiku-4-5",
},
},
} as OhMyOpenCodeConfig;
const override = {
categories: {
general: {
temperature: 0.3,
},
visual: {
model: "google/gemini-3-pro-preview",
},
},
} as unknown as OhMyOpenCodeConfig;
const result = mergeConfigs(base, override);
// #then general.model should be preserved from base
expect(result.categories?.general?.model).toBe("openai/gpt-5.2");
// #then general.temperature should be overridden
expect(result.categories?.general?.temperature).toBe(0.3);
// #then quick should be preserved from base
expect(result.categories?.quick?.model).toBe("anthropic/claude-haiku-4-5");
// #then visual should be added from override
expect(result.categories?.visual?.model).toBe("google/gemini-3-pro-preview");
});
it("should preserve base categories when override has no categories", () => {
const base: OhMyOpenCodeConfig = {
categories: {
general: {
model: "openai/gpt-5.2",
},
},
};
const override: OhMyOpenCodeConfig = {};
const result = mergeConfigs(base, override);
expect(result.categories?.general?.model).toBe("openai/gpt-5.2");
});
it("should use override categories when base has no categories", () => {
const base: OhMyOpenCodeConfig = {};
const override: OhMyOpenCodeConfig = {
categories: {
general: {
model: "openai/gpt-5.2",
},
},
};
const result = mergeConfigs(base, override);
expect(result.categories?.general?.model).toBe("openai/gpt-5.2");
});
});
describe("existing behavior preservation", () => {
it("should deep merge agents", () => {
const base: OhMyOpenCodeConfig = {
agents: {
oracle: { model: "openai/gpt-5.2" },
},
};
const override: OhMyOpenCodeConfig = {
agents: {
oracle: { temperature: 0.5 },
explore: { model: "anthropic/claude-haiku-4-5" },
},
};
const result = mergeConfigs(base, override);
expect(result.agents?.oracle?.model).toBe("openai/gpt-5.2");
expect(result.agents?.oracle?.temperature).toBe(0.5);
expect(result.agents?.explore?.model).toBe("anthropic/claude-haiku-4-5");
});
it("should merge disabled arrays without duplicates", () => {
const base: OhMyOpenCodeConfig = {
disabled_hooks: ["comment-checker", "think-mode"],
};
const override: OhMyOpenCodeConfig = {
disabled_hooks: ["think-mode", "session-recovery"],
};
const result = mergeConfigs(base, override);
expect(result.disabled_hooks).toContain("comment-checker");
expect(result.disabled_hooks).toContain("think-mode");
expect(result.disabled_hooks).toContain("session-recovery");
expect(result.disabled_hooks?.length).toBe(3);
});
});
});

View File

@@ -55,6 +55,7 @@ export function mergeConfigs(
...base,
...override,
agents: deepMerge(base.agents, override.agents),
categories: deepMerge(base.categories, override.categories),
disabled_agents: [
...new Set([
...(base.disabled_agents ?? []),

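The `mergeConfigs` tests earlier pin down the semantics this one-line change buys: per-key deep merge, with override fields winning and untouched base fields surviving. A one-level `deepMerge` matching those assertions might look like this (the real implementation may differ in edge cases):

```typescript
type NestedRecord = Record<string, Record<string, unknown>>;

// Assumed one-level-deep merge: top-level keys are unioned, and for
// keys present on both sides the inner objects are spread together.
function deepMerge(
  base?: NestedRecord,
  override?: NestedRecord,
): NestedRecord | undefined {
  if (!base) return override;
  if (!override) return base;
  const result: NestedRecord = { ...base };
  for (const [key, value] of Object.entries(override)) {
    result[key] = { ...result[key], ...value }; // override fields win
  }
  return result;
}

const merged = deepMerge(
  { general: { model: "openai/gpt-5.2", temperature: 0.5 } },
  { general: { temperature: 0.3 }, visual: { model: "google/gemini-3-pro-preview" } },
);
// merged.general keeps model from base, takes temperature from override
```

This is why `general.model` survives in the first categories test even though the override only supplies `general.temperature`.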
View File

@@ -0,0 +1,104 @@
import { describe, test, expect } from "bun:test"
import { resolveCategoryConfig } from "./config-handler"
import type { CategoryConfig } from "../config/schema"
describe("Prometheus category config resolution", () => {
test("resolves ultrabrain category config", () => {
// #given
const categoryName = "ultrabrain"
// #when
const config = resolveCategoryConfig(categoryName)
// #then
expect(config).toBeDefined()
expect(config?.model).toBe("openai/gpt-5.2")
expect(config?.temperature).toBe(0.1)
})
test("resolves visual-engineering category config", () => {
// #given
const categoryName = "visual-engineering"
// #when
const config = resolveCategoryConfig(categoryName)
// #then
expect(config).toBeDefined()
expect(config?.model).toBe("google/gemini-3-pro-preview")
expect(config?.temperature).toBe(0.7)
})
test("user categories override default categories", () => {
// #given
const categoryName = "ultrabrain"
const userCategories: Record<string, CategoryConfig> = {
ultrabrain: {
model: "google/antigravity-claude-opus-4-5-thinking",
temperature: 0.1,
},
}
// #when
const config = resolveCategoryConfig(categoryName, userCategories)
// #then
expect(config).toBeDefined()
expect(config?.model).toBe("google/antigravity-claude-opus-4-5-thinking")
expect(config?.temperature).toBe(0.1)
})
test("returns undefined for unknown category", () => {
// #given
const categoryName = "nonexistent-category"
// #when
const config = resolveCategoryConfig(categoryName)
// #then
expect(config).toBeUndefined()
})
test("falls back to default when user category has no entry", () => {
// #given
const categoryName = "ultrabrain"
const userCategories: Record<string, CategoryConfig> = {
"visual-engineering": {
model: "custom/visual-model",
},
}
// #when
const config = resolveCategoryConfig(categoryName, userCategories)
// #then
expect(config).toBeDefined()
expect(config?.model).toBe("openai/gpt-5.2")
expect(config?.temperature).toBe(0.1)
})
test("preserves all category properties (temperature, top_p, tools, etc.)", () => {
// #given
const categoryName = "custom-category"
const userCategories: Record<string, CategoryConfig> = {
"custom-category": {
model: "test/model",
temperature: 0.5,
top_p: 0.9,
maxTokens: 32000,
tools: { tool1: true, tool2: false },
},
}
// #when
const config = resolveCategoryConfig(categoryName, userCategories)
// #then
expect(config).toBeDefined()
expect(config?.model).toBe("test/model")
expect(config?.temperature).toBe(0.5)
expect(config?.top_p).toBe(0.9)
expect(config?.maxTokens).toBe(32000)
expect(config?.tools).toEqual({ tool1: true, tool2: false })
})
})

Some files were not shown because too many files have changed in this diff.