Compare commits
103 Commits
fix/suppor
...
v3.0.0-bet
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
f3c477f7fc | ||
|
|
ca060d472f | ||
|
|
8ce9ac7e53 | ||
|
|
eb419f02b1 | ||
|
|
d1eb57d7aa | ||
|
|
72ca8b8637 | ||
|
|
a4e161d485 | ||
|
|
bb38050574 | ||
|
|
90debb8e97 | ||
|
|
c600111597 | ||
|
|
f179d469fa | ||
|
|
a64e1f6100 | ||
|
|
14c54c1941 | ||
|
|
07412deca4 | ||
|
|
1e239e6155 | ||
|
|
9481770a39 | ||
|
|
2d3894a860 | ||
|
|
869efbe9ad | ||
|
|
4b4386dbad | ||
|
|
0d0bf4d384 | ||
|
|
64b9b4d36a | ||
|
|
57fb5e0c71 | ||
|
|
69ca8a2ddd | ||
|
|
a55b10e861 | ||
|
|
5940d2e1d4 | ||
|
|
99d45f26cb | ||
|
|
cfb6980493 | ||
|
|
1fe6c7e508 | ||
|
|
0d90bc1360 | ||
|
|
69a12663c4 | ||
|
|
2722aa512c | ||
|
|
073b7e4e65 | ||
|
|
03beb8e45c | ||
|
|
4b2bf9ccb5 | ||
|
|
b442b1c857 | ||
|
|
04fc903d38 | ||
|
|
24e983eac7 | ||
|
|
c074da0b93 | ||
|
|
5331a9f8c1 | ||
|
|
185c72c9e3 | ||
|
|
919453ebce | ||
|
|
344eb771c6 | ||
|
|
9c363a52ce | ||
|
|
a1f3f679e9 | ||
|
|
4a6663a6cf | ||
|
|
86c241fd63 | ||
|
|
79a7448341 | ||
|
|
213faedcdb | ||
|
|
54078992f9 | ||
|
|
b9101567ff | ||
|
|
2925402c4f | ||
|
|
3b89a35a13 | ||
|
|
9eafe6b5f9 | ||
|
|
110dbd665a | ||
|
|
9152c5c687 | ||
|
|
56bc1697d2 | ||
|
|
39e92b1900 | ||
|
|
7567c40a81 | ||
|
|
f354724e64 | ||
|
|
0c10be4f61 | ||
|
|
36a227efeb | ||
|
|
29ffdccbe6 | ||
|
|
b9b8adc11c | ||
|
|
324ecd872e | ||
|
|
f8e1f79960 | ||
|
|
7d44ae4eb3 | ||
|
|
42ea6b4d25 | ||
|
|
e5b36f30b8 | ||
|
|
156a988080 | ||
|
|
f0b24f22bc | ||
|
|
7d2983fafb | ||
|
|
f49d92852a | ||
|
|
30e5760e2f | ||
|
|
cf898b81de | ||
|
|
a3594a41dd | ||
|
|
ee2eb2174f | ||
|
|
166fd20a4f | ||
|
|
47d56d95a6 | ||
|
|
fff565b5af | ||
|
|
2e76a15bac | ||
|
|
f9325c2d89 | ||
|
|
c1fa8d5212 | ||
|
|
059aa87695 | ||
|
|
dfb4f8abc3 | ||
|
|
52badc9367 | ||
|
|
4acdcbcd55 | ||
|
|
74cb4ac2a4 | ||
|
|
cdafb5e202 | ||
|
|
52d0381ce2 | ||
|
|
ba4237b35c | ||
|
|
e27bceb97e | ||
|
|
534142da12 | ||
|
|
fef68d86a4 | ||
|
|
2073515762 | ||
|
|
069626f6c0 | ||
|
|
81ba84ccd6 | ||
|
|
136135bc88 | ||
|
|
83cd453e02 | ||
|
|
2957e8b836 | ||
|
|
bd22f1cefe | ||
|
|
1267a1bac5 | ||
|
|
a423a33cbe | ||
|
|
580d4bb0a1 |
4
.github/ISSUE_TEMPLATE/bug_report.yml
vendored
4
.github/ISSUE_TEMPLATE/bug_report.yml
vendored
@@ -14,13 +14,11 @@ body:
|
||||
label: Prerequisites
|
||||
description: Please confirm the following before submitting
|
||||
options:
|
||||
- label: I will write this issue in English (see our [Language Policy](https://github.com/code-yeongyu/oh-my-opencode/blob/dev/CONTRIBUTING.md#language-policy))
|
||||
required: true
|
||||
- label: I have searched existing issues to avoid duplicates
|
||||
required: true
|
||||
- label: I am using the latest version of oh-my-opencode
|
||||
required: true
|
||||
- label: I have read the [documentation](https://github.com/code-yeongyu/oh-my-opencode#readme) or asked an AI coding agent with this project's GitHub URL loaded and couldn't find the answer
|
||||
- label: I have read the [documentation](https://github.com/code-yeongyu/oh-my-opencode#readme)
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
|
||||
4
.github/ISSUE_TEMPLATE/feature_request.yml
vendored
4
.github/ISSUE_TEMPLATE/feature_request.yml
vendored
@@ -14,13 +14,11 @@ body:
|
||||
label: Prerequisites
|
||||
description: Please confirm the following before submitting
|
||||
options:
|
||||
- label: I will write this issue in English (see our [Language Policy](https://github.com/code-yeongyu/oh-my-opencode/blob/dev/CONTRIBUTING.md#language-policy))
|
||||
required: true
|
||||
- label: I have searched existing issues and discussions to avoid duplicates
|
||||
required: true
|
||||
- label: This feature request is specific to oh-my-opencode (not OpenCode core)
|
||||
required: true
|
||||
- label: I have read the [documentation](https://github.com/code-yeongyu/oh-my-opencode#readme) or asked an AI coding agent with this project's GitHub URL loaded and couldn't find the answer
|
||||
- label: I have read the [documentation](https://github.com/code-yeongyu/oh-my-opencode#readme)
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
|
||||
4
.github/ISSUE_TEMPLATE/general.yml
vendored
4
.github/ISSUE_TEMPLATE/general.yml
vendored
@@ -14,11 +14,9 @@ body:
|
||||
label: Prerequisites
|
||||
description: Please confirm the following before submitting
|
||||
options:
|
||||
- label: I will write this issue in English (see our [Language Policy](https://github.com/code-yeongyu/oh-my-opencode/blob/dev/CONTRIBUTING.md#language-policy))
|
||||
required: true
|
||||
- label: I have searched existing issues and discussions
|
||||
required: true
|
||||
- label: I have read the [documentation](https://github.com/code-yeongyu/oh-my-opencode#readme) or asked an AI coding agent with this project's GitHub URL loaded and couldn't find the answer
|
||||
- label: I have read the [documentation](https://github.com/code-yeongyu/oh-my-opencode#readme)
|
||||
required: true
|
||||
- label: This is a question (not a bug report or feature request)
|
||||
required: true
|
||||
|
||||
BIN
.github/assets/elestyle.jpg
vendored
BIN
.github/assets/elestyle.jpg
vendored
Binary file not shown.
|
Before Width: | Height: | Size: 116 KiB |
BIN
.github/assets/hephaestus.png
vendored
BIN
.github/assets/hephaestus.png
vendored
Binary file not shown.
|
Before Width: | Height: | Size: 8.2 MiB |
BIN
.github/assets/orchestrator-atlas.png
vendored
BIN
.github/assets/orchestrator-atlas.png
vendored
Binary file not shown.
|
Before Width: | Height: | Size: 182 KiB |
BIN
.github/assets/orchestrator-sisyphus.png
vendored
Normal file
BIN
.github/assets/orchestrator-sisyphus.png
vendored
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 984 KiB |
BIN
.github/assets/sisyphuslabs.png
vendored
BIN
.github/assets/sisyphuslabs.png
vendored
Binary file not shown.
|
Before Width: | Height: | Size: 143 KiB |
78
.github/workflows/ci.yml
vendored
78
.github/workflows/ci.yml
vendored
@@ -4,32 +4,13 @@ on:
|
||||
push:
|
||||
branches: [master, dev]
|
||||
pull_request:
|
||||
branches: [master, dev]
|
||||
branches: [master]
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.ref }}
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
# Block PRs targeting master branch
|
||||
block-master-pr:
|
||||
runs-on: ubuntu-latest
|
||||
if: github.event_name == 'pull_request'
|
||||
steps:
|
||||
- name: Check PR target branch
|
||||
run: |
|
||||
if [ "${{ github.base_ref }}" = "master" ]; then
|
||||
echo "::error::PRs to master branch are not allowed. Please target the 'dev' branch instead."
|
||||
echo ""
|
||||
echo "PULL REQUESTS TO MASTER ARE BLOCKED"
|
||||
echo ""
|
||||
echo "All PRs must target the 'dev' branch."
|
||||
echo "Please close this PR and create a new one targeting 'dev'."
|
||||
exit 1
|
||||
else
|
||||
echo "PR targets '${{ github.base_ref }}' branch - OK"
|
||||
fi
|
||||
|
||||
test:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
@@ -44,61 +25,8 @@ jobs:
|
||||
env:
|
||||
BUN_INSTALL_ALLOW_SCRIPTS: "@ast-grep/napi"
|
||||
|
||||
- name: Run mock-heavy tests (isolated)
|
||||
run: |
|
||||
# These files use mock.module() which pollutes module cache
|
||||
# Run them in separate processes to prevent cross-file contamination
|
||||
bun test src/plugin-handlers
|
||||
bun test src/hooks/atlas
|
||||
bun test src/hooks/compaction-context-injector
|
||||
bun test src/features/tmux-subagent
|
||||
bun test src/cli/doctor/formatter.test.ts
|
||||
bun test src/cli/doctor/format-default.test.ts
|
||||
bun test src/tools/call-omo-agent/sync-executor.test.ts
|
||||
bun test src/tools/call-omo-agent/session-creator.test.ts
|
||||
bun test src/tools/session-manager
|
||||
bun test src/features/opencode-skill-loader/loader.test.ts
|
||||
bun test src/hooks/anthropic-context-window-limit-recovery/recovery-hook.test.ts
|
||||
bun test src/hooks/anthropic-context-window-limit-recovery/executor.test.ts
|
||||
|
||||
- name: Run remaining tests
|
||||
run: |
|
||||
# Enumerate subdirectories/files explicitly to EXCLUDE mock-heavy files
|
||||
# that were already run in isolation above.
|
||||
# Excluded from src/cli: doctor/formatter.test.ts, doctor/format-default.test.ts
|
||||
# Excluded from src/tools: call-omo-agent/sync-executor.test.ts, call-omo-agent/session-creator.test.ts, session-manager (all)
|
||||
# Excluded from src/hooks/anthropic-context-window-limit-recovery: recovery-hook.test.ts, executor.test.ts
|
||||
bun test bin script src/config src/mcp src/index.test.ts \
|
||||
src/agents src/shared \
|
||||
src/cli/run src/cli/config-manager src/cli/mcp-oauth \
|
||||
src/cli/index.test.ts src/cli/install.test.ts src/cli/model-fallback.test.ts \
|
||||
src/cli/config-manager.test.ts \
|
||||
src/cli/doctor/runner.test.ts src/cli/doctor/checks \
|
||||
src/tools/ast-grep src/tools/background-task src/tools/delegate-task \
|
||||
src/tools/glob src/tools/grep src/tools/interactive-bash \
|
||||
src/tools/look-at src/tools/lsp \
|
||||
src/tools/skill src/tools/skill-mcp src/tools/slashcommand src/tools/task \
|
||||
src/tools/call-omo-agent/background-agent-executor.test.ts \
|
||||
src/tools/call-omo-agent/background-executor.test.ts \
|
||||
src/tools/call-omo-agent/subagent-session-creator.test.ts \
|
||||
src/hooks/anthropic-context-window-limit-recovery/empty-content-recovery-sdk.test.ts src/hooks/anthropic-context-window-limit-recovery/parser.test.ts src/hooks/anthropic-context-window-limit-recovery/pruning-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/recovery-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/storage.test.ts \
|
||||
src/hooks/claude-code-compatibility \
|
||||
src/hooks/context-injection \
|
||||
src/hooks/provider-toast \
|
||||
src/hooks/session-notification \
|
||||
src/hooks/sisyphus \
|
||||
src/hooks/todo-continuation-enforcer \
|
||||
src/features/background-agent \
|
||||
src/features/builtin-commands \
|
||||
src/features/builtin-skills \
|
||||
src/features/claude-code-session-state \
|
||||
src/features/hook-message-injector \
|
||||
src/features/opencode-skill-loader/config-source-discovery.test.ts \
|
||||
src/features/opencode-skill-loader/merger.test.ts \
|
||||
src/features/opencode-skill-loader/skill-content.test.ts \
|
||||
src/features/opencode-skill-loader/blocking.test.ts \
|
||||
src/features/opencode-skill-loader/async-loader.test.ts \
|
||||
src/features/skill-mcp-manager
|
||||
- name: Run tests
|
||||
run: bun test
|
||||
|
||||
typecheck:
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
2
.github/workflows/cla.yml
vendored
2
.github/workflows/cla.yml
vendored
@@ -25,7 +25,7 @@ jobs:
|
||||
path-to-signatures: 'signatures/cla.json'
|
||||
path-to-document: 'https://github.com/code-yeongyu/oh-my-opencode/blob/master/CLA.md'
|
||||
branch: 'dev'
|
||||
allowlist: code-yeongyu,bot*,dependabot*,github-actions*,*[bot],sisyphus-dev-ai,web-flow
|
||||
allowlist: bot*,dependabot*,github-actions*,*[bot],sisyphus-dev-ai
|
||||
custom-notsigned-prcomment: |
|
||||
Thank you for your contribution! Before we can merge this PR, we need you to sign our [Contributor License Agreement (CLA)](https://github.com/code-yeongyu/oh-my-opencode/blob/master/CLA.md).
|
||||
|
||||
|
||||
325
.github/workflows/publish-platform.yml
vendored
325
.github/workflows/publish-platform.yml
vendored
@@ -1,325 +0,0 @@
|
||||
name: publish-platform
|
||||
run-name: "platform packages ${{ inputs.version }}"
|
||||
|
||||
on:
|
||||
workflow_call:
|
||||
inputs:
|
||||
version:
|
||||
required: true
|
||||
type: string
|
||||
dist_tag:
|
||||
required: false
|
||||
type: string
|
||||
default: ""
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
version:
|
||||
description: "Version to publish (e.g., 3.0.0-beta.12)"
|
||||
required: true
|
||||
type: string
|
||||
dist_tag:
|
||||
description: "npm dist tag (e.g., beta, latest)"
|
||||
required: false
|
||||
type: string
|
||||
default: ""
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
id-token: write
|
||||
|
||||
jobs:
|
||||
# =============================================================================
|
||||
# Job 1: Build binaries for all platforms
|
||||
# - Windows builds on windows-latest (avoid bun cross-compile segfault)
|
||||
# - All other platforms build on ubuntu-latest
|
||||
# - Uploads compressed artifacts for the publish job
|
||||
# =============================================================================
|
||||
build:
|
||||
runs-on: ${{ startsWith(matrix.platform, 'windows-') && 'windows-latest' || 'ubuntu-latest' }}
|
||||
defaults:
|
||||
run:
|
||||
shell: bash
|
||||
strategy:
|
||||
fail-fast: false
|
||||
max-parallel: 11
|
||||
matrix:
|
||||
platform: [darwin-arm64, darwin-x64, darwin-x64-baseline, linux-x64, linux-x64-baseline, linux-arm64, linux-x64-musl, linux-x64-musl-baseline, linux-arm64-musl, windows-x64, windows-x64-baseline]
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: latest
|
||||
|
||||
- name: Install dependencies
|
||||
run: bun install
|
||||
env:
|
||||
BUN_INSTALL_ALLOW_SCRIPTS: "@ast-grep/napi"
|
||||
|
||||
- name: Check if already published
|
||||
id: check
|
||||
run: |
|
||||
VERSION="${{ inputs.version }}"
|
||||
PLATFORM_KEY="${{ matrix.platform }}"
|
||||
PLATFORM_KEY="${PLATFORM_KEY//-/_}"
|
||||
|
||||
# Check oh-my-opencode
|
||||
OC_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/oh-my-opencode-${{ matrix.platform }}/${VERSION}")
|
||||
# Check oh-my-openagent
|
||||
OA_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/oh-my-openagent-${{ matrix.platform }}/${VERSION}")
|
||||
|
||||
echo "oh-my-opencode-${{ matrix.platform }}@${VERSION}: ${OC_STATUS}"
|
||||
echo "oh-my-openagent-${{ matrix.platform }}@${VERSION}: ${OA_STATUS}"
|
||||
|
||||
if [ "$OC_STATUS" = "200" ]; then
|
||||
echo "skip_opencode=true" >> $GITHUB_OUTPUT
|
||||
echo "✓ oh-my-opencode-${{ matrix.platform }}@${VERSION} already published"
|
||||
else
|
||||
echo "skip_opencode=false" >> $GITHUB_OUTPUT
|
||||
echo "→ oh-my-opencode-${{ matrix.platform }}@${VERSION} needs publishing"
|
||||
fi
|
||||
|
||||
if [ "$OA_STATUS" = "200" ]; then
|
||||
echo "skip_openagent=true" >> $GITHUB_OUTPUT
|
||||
echo "✓ oh-my-openagent-${{ matrix.platform }}@${VERSION} already published"
|
||||
else
|
||||
echo "skip_openagent=false" >> $GITHUB_OUTPUT
|
||||
echo "→ oh-my-openagent-${{ matrix.platform }}@${VERSION} needs publishing"
|
||||
fi
|
||||
|
||||
# Skip build only if BOTH are already published
|
||||
if [ "$OC_STATUS" = "200" ] && [ "$OA_STATUS" = "200" ]; then
|
||||
echo "skip=true" >> $GITHUB_OUTPUT
|
||||
else
|
||||
echo "skip=false" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
- name: Update version in package.json
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
run: |
|
||||
VERSION="${{ inputs.version }}"
|
||||
cd packages/${{ matrix.platform }}
|
||||
jq --arg v "$VERSION" '.version = $v' package.json > tmp.json && mv tmp.json package.json
|
||||
|
||||
- name: Set root package version
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
run: |
|
||||
jq --arg v "${{ inputs.version }}" '.version = $v' package.json > tmp.json && mv tmp.json package.json
|
||||
|
||||
- name: Pre-download baseline compile target
|
||||
if: steps.check.outputs.skip != 'true' && endsWith(matrix.platform, '-baseline')
|
||||
shell: bash
|
||||
run: |
|
||||
BUN_VERSION=$(bun --version)
|
||||
PLATFORM="${{ matrix.platform }}"
|
||||
PKG_NAME="bun-${PLATFORM}"
|
||||
CACHE_DIR=$(bun pm cache)
|
||||
CACHE_DEST="${CACHE_DIR}/${PKG_NAME}-v${BUN_VERSION}"
|
||||
|
||||
if [[ -f "$CACHE_DEST" ]]; then
|
||||
echo "✓ Compile target already cached at ${CACHE_DEST}"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo "Pre-downloading ${PKG_NAME} v${BUN_VERSION} to ${CACHE_DEST}"
|
||||
TARBALL_URL="https://registry.npmjs.org/@oven/bun-${PLATFORM}/-/bun-${PLATFORM}-${BUN_VERSION}.tgz"
|
||||
echo "URL: ${TARBALL_URL}"
|
||||
|
||||
mkdir -p "$(dirname "$CACHE_DEST")"
|
||||
TMP_DIR=$(mktemp -d)
|
||||
|
||||
# Download and extract the bun binary from npm tarball
|
||||
curl -fsSL --retry 5 --retry-delay 5 "${TARBALL_URL}" | tar -xzf - -C "${TMP_DIR}"
|
||||
|
||||
if [[ "$PLATFORM" == windows-* ]]; then
|
||||
BIN_NAME="bun.exe"
|
||||
else
|
||||
BIN_NAME="bun"
|
||||
fi
|
||||
|
||||
# npm tarball has package/bin/bun structure
|
||||
if [[ -f "${TMP_DIR}/package/bin/${BIN_NAME}" ]]; then
|
||||
cp "${TMP_DIR}/package/bin/${BIN_NAME}" "${CACHE_DEST}"
|
||||
elif [[ -f "${TMP_DIR}/package/${BIN_NAME}" ]]; then
|
||||
cp "${TMP_DIR}/package/${BIN_NAME}" "${CACHE_DEST}"
|
||||
else
|
||||
echo "Could not find ${BIN_NAME} in tarball, listing contents:"
|
||||
find "${TMP_DIR}" -type f
|
||||
exit 1
|
||||
fi
|
||||
|
||||
chmod +x "${CACHE_DEST}" 2>/dev/null || true
|
||||
echo "✓ Pre-downloaded to ${CACHE_DEST}"
|
||||
ls -lh "${CACHE_DEST}"
|
||||
|
||||
- name: Build binary
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
uses: nick-fields/retry@v3
|
||||
with:
|
||||
timeout_minutes: 5
|
||||
max_attempts: 5
|
||||
retry_wait_seconds: 10
|
||||
shell: bash
|
||||
command: |
|
||||
PLATFORM="${{ matrix.platform }}"
|
||||
case "$PLATFORM" in
|
||||
darwin-arm64) TARGET="bun-darwin-arm64" ;;
|
||||
darwin-x64) TARGET="bun-darwin-x64" ;;
|
||||
darwin-x64-baseline) TARGET="bun-darwin-x64-baseline" ;;
|
||||
linux-x64) TARGET="bun-linux-x64" ;;
|
||||
linux-x64-baseline) TARGET="bun-linux-x64-baseline" ;;
|
||||
linux-arm64) TARGET="bun-linux-arm64" ;;
|
||||
linux-x64-musl) TARGET="bun-linux-x64-musl" ;;
|
||||
linux-x64-musl-baseline) TARGET="bun-linux-x64-musl-baseline" ;;
|
||||
linux-arm64-musl) TARGET="bun-linux-arm64-musl" ;;
|
||||
windows-x64) TARGET="bun-windows-x64" ;;
|
||||
windows-x64-baseline) TARGET="bun-windows-x64-baseline" ;;
|
||||
esac
|
||||
|
||||
if [[ "$PLATFORM" == windows-* ]]; then
|
||||
OUTPUT="packages/${PLATFORM}/bin/oh-my-opencode.exe"
|
||||
else
|
||||
OUTPUT="packages/${PLATFORM}/bin/oh-my-opencode"
|
||||
fi
|
||||
|
||||
bun build src/cli/index.ts --compile --minify --target=$TARGET --outfile=$OUTPUT
|
||||
|
||||
echo "Built binary:"
|
||||
ls -lh "$OUTPUT"
|
||||
|
||||
- name: Compress binary
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
run: |
|
||||
PLATFORM="${{ matrix.platform }}"
|
||||
cd packages/${PLATFORM}
|
||||
|
||||
if [[ "$PLATFORM" == windows-* ]]; then
|
||||
# Windows: use 7z (pre-installed on windows-latest)
|
||||
7z a -tzip ../../binary-${PLATFORM}.zip bin/ package.json
|
||||
else
|
||||
# Unix: use tar.gz
|
||||
tar -czvf ../../binary-${PLATFORM}.tar.gz bin/ package.json
|
||||
fi
|
||||
|
||||
cd ../..
|
||||
echo "Compressed artifact:"
|
||||
ls -lh binary-${PLATFORM}.*
|
||||
|
||||
- name: Upload artifact
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: binary-${{ matrix.platform }}
|
||||
path: |
|
||||
binary-${{ matrix.platform }}.tar.gz
|
||||
binary-${{ matrix.platform }}.zip
|
||||
retention-days: 1
|
||||
if-no-files-found: error
|
||||
|
||||
publish:
|
||||
needs: build
|
||||
if: always() && !cancelled()
|
||||
runs-on: ubuntu-latest
|
||||
strategy:
|
||||
fail-fast: false
|
||||
max-parallel: 2
|
||||
matrix:
|
||||
platform: [darwin-arm64, darwin-x64, darwin-x64-baseline, linux-x64, linux-x64-baseline, linux-arm64, linux-x64-musl, linux-x64-musl-baseline, linux-arm64-musl, windows-x64, windows-x64-baseline]
|
||||
steps:
|
||||
- name: Check if already published
|
||||
id: check
|
||||
run: |
|
||||
VERSION="${{ inputs.version }}"
|
||||
|
||||
OC_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/oh-my-opencode-${{ matrix.platform }}/${VERSION}")
|
||||
OA_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/oh-my-openagent-${{ matrix.platform }}/${VERSION}")
|
||||
|
||||
if [ "$OC_STATUS" = "200" ]; then
|
||||
echo "skip_opencode=true" >> $GITHUB_OUTPUT
|
||||
echo "✓ oh-my-opencode-${{ matrix.platform }}@${VERSION} already published"
|
||||
else
|
||||
echo "skip_opencode=false" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
if [ "$OA_STATUS" = "200" ]; then
|
||||
echo "skip_openagent=true" >> $GITHUB_OUTPUT
|
||||
echo "✓ oh-my-openagent-${{ matrix.platform }}@${VERSION} already published"
|
||||
else
|
||||
echo "skip_openagent=false" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
# Need artifact if either package needs publishing
|
||||
if [ "$OC_STATUS" = "200" ] && [ "$OA_STATUS" = "200" ]; then
|
||||
echo "skip_all=true" >> $GITHUB_OUTPUT
|
||||
else
|
||||
echo "skip_all=false" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
- name: Download artifact
|
||||
id: download
|
||||
if: steps.check.outputs.skip_all != 'true'
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@v4
|
||||
with:
|
||||
name: binary-${{ matrix.platform }}
|
||||
path: .
|
||||
|
||||
- name: Extract artifact
|
||||
if: steps.check.outputs.skip_all != 'true' && steps.download.outcome == 'success'
|
||||
run: |
|
||||
PLATFORM="${{ matrix.platform }}"
|
||||
mkdir -p packages/${PLATFORM}
|
||||
|
||||
if [[ "$PLATFORM" == windows-* ]]; then
|
||||
unzip binary-${PLATFORM}.zip -d packages/${PLATFORM}/
|
||||
else
|
||||
tar -xzvf binary-${PLATFORM}.tar.gz -C packages/${PLATFORM}/
|
||||
fi
|
||||
|
||||
echo "Extracted contents:"
|
||||
ls -la packages/${PLATFORM}/
|
||||
ls -la packages/${PLATFORM}/bin/
|
||||
|
||||
- uses: actions/setup-node@v4
|
||||
if: steps.check.outputs.skip_all != 'true' && steps.download.outcome == 'success'
|
||||
with:
|
||||
node-version: "24"
|
||||
registry-url: "https://registry.npmjs.org"
|
||||
|
||||
- name: Publish oh-my-opencode-${{ matrix.platform }}
|
||||
if: steps.check.outputs.skip_opencode != 'true' && steps.download.outcome == 'success'
|
||||
run: |
|
||||
cd packages/${{ matrix.platform }}
|
||||
|
||||
TAG_ARG=""
|
||||
if [ -n "${{ inputs.dist_tag }}" ]; then
|
||||
TAG_ARG="--tag ${{ inputs.dist_tag }}"
|
||||
fi
|
||||
|
||||
npm publish --access public --provenance $TAG_ARG
|
||||
env:
|
||||
NODE_AUTH_TOKEN: ${{ secrets.NODE_AUTH_TOKEN }}
|
||||
NPM_CONFIG_PROVENANCE: true
|
||||
timeout-minutes: 15
|
||||
|
||||
- name: Publish oh-my-openagent-${{ matrix.platform }}
|
||||
if: steps.check.outputs.skip_openagent != 'true' && steps.download.outcome == 'success'
|
||||
run: |
|
||||
cd packages/${{ matrix.platform }}
|
||||
|
||||
# Rename package for oh-my-openagent
|
||||
jq --arg name "oh-my-openagent-${{ matrix.platform }}" \
|
||||
--arg desc "Platform-specific binary for oh-my-openagent (${{ matrix.platform }})" \
|
||||
'.name = $name | .description = $desc | .bin = {"oh-my-openagent": (.bin | to_entries | .[0].value)}' \
|
||||
package.json > tmp.json && mv tmp.json package.json
|
||||
|
||||
TAG_ARG=""
|
||||
if [ -n "${{ inputs.dist_tag }}" ]; then
|
||||
TAG_ARG="--tag ${{ inputs.dist_tag }}"
|
||||
fi
|
||||
|
||||
npm publish --access public --provenance $TAG_ARG
|
||||
env:
|
||||
NODE_AUTH_TOKEN: ${{ secrets.NODE_AUTH_TOKEN }}
|
||||
NPM_CONFIG_PROVENANCE: true
|
||||
timeout-minutes: 15
|
||||
292
.github/workflows/publish.yml
vendored
292
.github/workflows/publish.yml
vendored
@@ -1,5 +1,5 @@
|
||||
name: publish
|
||||
run-name: "${{ format('release {0}', inputs.version || inputs.bump) }}"
|
||||
run-name: "${{ format('release {0} ({1})', inputs.bump, inputs.tag || 'latest') }}"
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
@@ -8,27 +8,29 @@ on:
|
||||
description: "Bump major, minor, or patch"
|
||||
required: true
|
||||
type: choice
|
||||
default: patch
|
||||
options:
|
||||
- patch
|
||||
- minor
|
||||
- major
|
||||
- minor
|
||||
- patch
|
||||
version:
|
||||
description: "Override version (e.g., 3.0.0-beta.6). Takes precedence over bump."
|
||||
description: "Override version (optional)"
|
||||
required: false
|
||||
type: string
|
||||
skip_platform:
|
||||
description: "Skip platform binary packages"
|
||||
tag:
|
||||
description: "npm dist-tag (latest, beta, next)"
|
||||
required: false
|
||||
type: boolean
|
||||
default: false
|
||||
type: choice
|
||||
default: "latest"
|
||||
options:
|
||||
- latest
|
||||
- beta
|
||||
- next
|
||||
|
||||
concurrency: ${{ github.workflow }}-${{ github.ref }}
|
||||
|
||||
permissions:
|
||||
contents: write
|
||||
id-token: write
|
||||
actions: write
|
||||
|
||||
jobs:
|
||||
test:
|
||||
@@ -45,61 +47,8 @@ jobs:
|
||||
env:
|
||||
BUN_INSTALL_ALLOW_SCRIPTS: "@ast-grep/napi"
|
||||
|
||||
- name: Run mock-heavy tests (isolated)
|
||||
run: |
|
||||
# These files use mock.module() which pollutes module cache
|
||||
# Run them in separate processes to prevent cross-file contamination
|
||||
bun test src/plugin-handlers
|
||||
bun test src/hooks/atlas
|
||||
bun test src/hooks/compaction-context-injector
|
||||
bun test src/features/tmux-subagent
|
||||
bun test src/cli/doctor/formatter.test.ts
|
||||
bun test src/cli/doctor/format-default.test.ts
|
||||
bun test src/tools/call-omo-agent/sync-executor.test.ts
|
||||
bun test src/tools/call-omo-agent/session-creator.test.ts
|
||||
bun test src/features/opencode-skill-loader/loader.test.ts
|
||||
bun test src/hooks/anthropic-context-window-limit-recovery/recovery-hook.test.ts
|
||||
bun test src/hooks/anthropic-context-window-limit-recovery/executor.test.ts
|
||||
|
||||
- name: Run remaining tests
|
||||
run: |
|
||||
# Enumerate subdirectories/files explicitly to EXCLUDE mock-heavy files
|
||||
# that were already run in isolation above.
|
||||
# Excluded from src/cli: doctor/formatter.test.ts, doctor/format-default.test.ts
|
||||
# Excluded from src/tools: call-omo-agent/sync-executor.test.ts, call-omo-agent/session-creator.test.ts
|
||||
# Excluded from src/hooks/anthropic-context-window-limit-recovery: recovery-hook.test.ts, executor.test.ts
|
||||
# Excluded from src/tools: call-omo-agent/sync-executor.test.ts, call-omo-agent/session-creator.test.ts
|
||||
bun test bin script src/config src/mcp src/index.test.ts \
|
||||
src/agents src/shared \
|
||||
src/cli/run src/cli/config-manager src/cli/mcp-oauth \
|
||||
src/cli/index.test.ts src/cli/install.test.ts src/cli/model-fallback.test.ts \
|
||||
src/cli/config-manager.test.ts \
|
||||
src/cli/doctor/runner.test.ts src/cli/doctor/checks \
|
||||
src/tools/ast-grep src/tools/background-task src/tools/delegate-task \
|
||||
src/tools/glob src/tools/grep src/tools/interactive-bash \
|
||||
src/tools/look-at src/tools/lsp src/tools/session-manager \
|
||||
src/tools/skill src/tools/skill-mcp src/tools/slashcommand src/tools/task \
|
||||
src/tools/call-omo-agent/background-agent-executor.test.ts \
|
||||
src/tools/call-omo-agent/background-executor.test.ts \
|
||||
src/tools/call-omo-agent/subagent-session-creator.test.ts \
|
||||
src/hooks/anthropic-context-window-limit-recovery/empty-content-recovery-sdk.test.ts src/hooks/anthropic-context-window-limit-recovery/parser.test.ts src/hooks/anthropic-context-window-limit-recovery/pruning-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/recovery-deduplication.test.ts src/hooks/anthropic-context-window-limit-recovery/storage.test.ts \
|
||||
src/hooks/claude-code-compatibility \
|
||||
src/hooks/context-injection \
|
||||
src/hooks/provider-toast \
|
||||
src/hooks/session-notification \
|
||||
src/hooks/sisyphus \
|
||||
src/hooks/todo-continuation-enforcer \
|
||||
src/features/background-agent \
|
||||
src/features/builtin-commands \
|
||||
src/features/builtin-skills \
|
||||
src/features/claude-code-session-state \
|
||||
src/features/hook-message-injector \
|
||||
src/features/opencode-skill-loader/config-source-discovery.test.ts \
|
||||
src/features/opencode-skill-loader/merger.test.ts \
|
||||
src/features/opencode-skill-loader/skill-content.test.ts \
|
||||
src/features/opencode-skill-loader/blocking.test.ts \
|
||||
src/features/opencode-skill-loader/async-loader.test.ts \
|
||||
src/features/skill-mcp-manager
|
||||
- name: Run tests
|
||||
run: bun test
|
||||
|
||||
typecheck:
|
||||
runs-on: ubuntu-latest
|
||||
@@ -118,13 +67,10 @@ jobs:
|
||||
- name: Type check
|
||||
run: bun run typecheck
|
||||
|
||||
publish-main:
|
||||
publish:
|
||||
runs-on: ubuntu-latest
|
||||
needs: [test, typecheck]
|
||||
if: github.repository == 'code-yeongyu/oh-my-openagent'
|
||||
outputs:
|
||||
version: ${{ steps.version.outputs.version }}
|
||||
dist_tag: ${{ steps.version.outputs.dist_tag }}
|
||||
if: github.repository == 'code-yeongyu/oh-my-opencode'
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
@@ -139,188 +85,72 @@ jobs:
|
||||
- uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: "24"
|
||||
registry-url: "https://registry.npmjs.org"
|
||||
|
||||
- name: Upgrade npm for OIDC trusted publishing
|
||||
run: npm install -g npm@latest
|
||||
|
||||
- name: Configure npm registry
|
||||
run: npm config set registry https://registry.npmjs.org
|
||||
|
||||
- name: Install dependencies
|
||||
run: bun install
|
||||
env:
|
||||
BUN_INSTALL_ALLOW_SCRIPTS: "@ast-grep/napi"
|
||||
|
||||
- name: Calculate version
|
||||
id: version
|
||||
- name: Debug environment
|
||||
run: |
|
||||
VERSION="${{ inputs.version }}"
|
||||
if [ -z "$VERSION" ]; then
|
||||
PREV=$(curl -s https://registry.npmjs.org/oh-my-opencode/latest | jq -r '.version // "0.0.0"')
|
||||
BASE="${PREV%%-*}"
|
||||
IFS='.' read -r MAJOR MINOR PATCH <<< "$BASE"
|
||||
case "${{ inputs.bump }}" in
|
||||
major) VERSION="$((MAJOR+1)).0.0" ;;
|
||||
minor) VERSION="${MAJOR}.$((MINOR+1)).0" ;;
|
||||
*) VERSION="${MAJOR}.${MINOR}.$((PATCH+1))" ;;
|
||||
esac
|
||||
fi
|
||||
echo "version=$VERSION" >> $GITHUB_OUTPUT
|
||||
|
||||
if [[ "$VERSION" == *"-"* ]]; then
|
||||
DIST_TAG=$(echo "$VERSION" | cut -d'-' -f2 | cut -d'.' -f1)
|
||||
echo "dist_tag=${DIST_TAG:-next}" >> $GITHUB_OUTPUT
|
||||
else
|
||||
echo "dist_tag=" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
echo "Version: $VERSION"
|
||||
echo "=== Bun version ==="
|
||||
bun --version
|
||||
echo "=== Node version ==="
|
||||
node --version
|
||||
echo "=== Current directory ==="
|
||||
pwd
|
||||
echo "=== List src/ ==="
|
||||
ls -la src/
|
||||
echo "=== package.json scripts ==="
|
||||
cat package.json | jq '.scripts'
|
||||
|
||||
- name: Check if already published
|
||||
id: check
|
||||
- name: Build
|
||||
run: |
|
||||
VERSION="${{ steps.version.outputs.version }}"
|
||||
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/oh-my-opencode/${VERSION}")
|
||||
if [ "$STATUS" = "200" ]; then
|
||||
echo "skip=true" >> $GITHUB_OUTPUT
|
||||
echo "✓ oh-my-opencode@${VERSION} already published"
|
||||
else
|
||||
echo "skip=false" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
- name: Update version
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
run: |
|
||||
VERSION="${{ steps.version.outputs.version }}"
|
||||
jq --arg v "$VERSION" '.version = $v' package.json > tmp.json && mv tmp.json package.json
|
||||
|
||||
for platform in darwin-arm64 darwin-x64 darwin-x64-baseline linux-x64 linux-x64-baseline linux-arm64 linux-x64-musl linux-x64-musl-baseline linux-arm64-musl windows-x64 windows-x64-baseline; do
|
||||
jq --arg v "$VERSION" '.version = $v' "packages/${platform}/package.json" > tmp.json
|
||||
mv tmp.json "packages/${platform}/package.json"
|
||||
done
|
||||
|
||||
jq --arg v "$VERSION" '.optionalDependencies = (.optionalDependencies | to_entries | map(.value = $v) | from_entries)' package.json > tmp.json && mv tmp.json package.json
|
||||
|
||||
- name: Build main package
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
run: |
|
||||
bun build src/index.ts --outdir dist --target bun --format esm --external @ast-grep/napi
|
||||
bun build src/cli/index.ts --outdir dist/cli --target bun --format esm --external @ast-grep/napi
|
||||
bunx tsc --emitDeclarationOnly
|
||||
echo "=== Running bun build (main) ==="
|
||||
bun build src/index.ts src/google-auth.ts --outdir dist --target bun --format esm --external @ast-grep/napi
|
||||
echo "=== Running bun build (CLI) ==="
|
||||
bun build src/cli/index.ts --outdir dist/cli --target bun --format esm
|
||||
echo "=== Running tsc ==="
|
||||
tsc --emitDeclarationOnly
|
||||
echo "=== Running build:schema ==="
|
||||
bun run build:schema
|
||||
|
||||
- name: Publish oh-my-opencode
|
||||
if: steps.check.outputs.skip != 'true'
|
||||
|
||||
- name: Verify build output
|
||||
run: |
|
||||
TAG_ARG=""
|
||||
if [ -n "${{ steps.version.outputs.dist_tag }}" ]; then
|
||||
TAG_ARG="--tag ${{ steps.version.outputs.dist_tag }}"
|
||||
fi
|
||||
npm publish --access public --provenance $TAG_ARG
|
||||
echo "=== dist/ contents ==="
|
||||
ls -la dist/
|
||||
echo "=== dist/cli/ contents ==="
|
||||
ls -la dist/cli/
|
||||
test -f dist/index.js || (echo "ERROR: dist/index.js not found!" && exit 1)
|
||||
test -f dist/cli/index.js || (echo "ERROR: dist/cli/index.js not found!" && exit 1)
|
||||
|
||||
- name: Publish
|
||||
run: bun run script/publish.ts
|
||||
env:
|
||||
NODE_AUTH_TOKEN: ${{ secrets.NODE_AUTH_TOKEN }}
|
||||
BUMP: ${{ inputs.bump }}
|
||||
VERSION: ${{ inputs.version }}
|
||||
NPM_TAG: ${{ inputs.tag || 'latest' }}
|
||||
CI: true
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
NPM_CONFIG_PROVENANCE: true
|
||||
|
||||
- name: Check if oh-my-openagent already published
|
||||
id: check-openagent
|
||||
run: |
|
||||
VERSION="${{ steps.version.outputs.version }}"
|
||||
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://registry.npmjs.org/oh-my-openagent/${VERSION}")
|
||||
if [ "$STATUS" = "200" ]; then
|
||||
echo "skip=true" >> $GITHUB_OUTPUT
|
||||
echo "✓ oh-my-openagent@${VERSION} already published"
|
||||
else
|
||||
echo "skip=false" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
- name: Publish oh-my-openagent
|
||||
if: steps.check-openagent.outputs.skip != 'true'
|
||||
run: |
|
||||
VERSION="${{ steps.version.outputs.version }}"
|
||||
|
||||
# Update package name, version, and optionalDependencies for oh-my-openagent
|
||||
jq --arg v "$VERSION" '
|
||||
.name = "oh-my-openagent" |
|
||||
.version = $v |
|
||||
.optionalDependencies = (
|
||||
.optionalDependencies | to_entries |
|
||||
map(.key = (.key | sub("^oh-my-opencode-"; "oh-my-openagent-")) | .value = $v) |
|
||||
from_entries
|
||||
)
|
||||
' package.json > tmp.json && mv tmp.json package.json
|
||||
|
||||
TAG_ARG=""
|
||||
if [ -n "${{ steps.version.outputs.dist_tag }}" ]; then
|
||||
TAG_ARG="--tag ${{ steps.version.outputs.dist_tag }}"
|
||||
fi
|
||||
npm publish --access public --provenance $TAG_ARG || echo "::warning::oh-my-openagent publish failed"
|
||||
env:
|
||||
NODE_AUTH_TOKEN: ${{ secrets.NODE_AUTH_TOKEN }}
|
||||
NPM_CONFIG_PROVENANCE: true
|
||||
|
||||
- name: Restore package.json
|
||||
if: steps.check-openagent.outputs.skip != 'true'
|
||||
run: |
|
||||
git checkout -- package.json
|
||||
|
||||
trigger-platform:
|
||||
runs-on: ubuntu-latest
|
||||
needs: publish-main
|
||||
if: inputs.skip_platform != true
|
||||
steps:
|
||||
- name: Trigger platform publish workflow
|
||||
run: |
|
||||
gh workflow run publish-platform.yml \
|
||||
--repo ${{ github.repository }} \
|
||||
--ref ${{ github.ref }} \
|
||||
-f version=${{ needs.publish-main.outputs.version }} \
|
||||
-f dist_tag=${{ needs.publish-main.outputs.dist_tag }}
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
release:
|
||||
runs-on: ubuntu-latest
|
||||
needs: publish-main
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- run: git fetch --force --tags
|
||||
|
||||
- uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: latest
|
||||
|
||||
- name: Install dependencies
|
||||
run: bun install
|
||||
env:
|
||||
BUN_INSTALL_ALLOW_SCRIPTS: "@ast-grep/napi"
|
||||
|
||||
- name: Generate changelog
|
||||
run: |
|
||||
bun run script/generate-changelog.ts > /tmp/changelog.md
|
||||
cat /tmp/changelog.md
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Create GitHub release
|
||||
run: |
|
||||
VERSION="${{ needs.publish-main.outputs.version }}"
|
||||
gh release view "v${VERSION}" >/dev/null 2>&1 || \
|
||||
gh release create "v${VERSION}" --title "v${VERSION}" --notes-file /tmp/changelog.md
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Delete draft release
|
||||
run: gh release delete next --yes 2>/dev/null || true
|
||||
run: gh release delete next --yes 2>/dev/null || echo "No draft release to delete"
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Merge to master
|
||||
continue-on-error: true
|
||||
if: inputs.tag == 'latest' || inputs.tag == ''
|
||||
run: |
|
||||
git config user.name "github-actions[bot]"
|
||||
git config user.email "github-actions[bot]@users.noreply.github.com"
|
||||
VERSION="${{ needs.publish-main.outputs.version }}"
|
||||
git stash --include-untracked || true
|
||||
VERSION=$(jq -r '.version' package.json)
|
||||
git checkout master
|
||||
git reset --hard "v${VERSION}"
|
||||
git push -f origin master || echo "::warning::Failed to push to master"
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
git push -f origin master
|
||||
|
||||
67
.github/workflows/sisyphus-agent.yml
vendored
67
.github/workflows/sisyphus-agent.yml
vendored
@@ -89,21 +89,21 @@ jobs:
|
||||
echo "Installing OpenCode..."
|
||||
curl -fsSL https://opencode.ai/install -o /tmp/opencode-install.sh
|
||||
|
||||
# Try default installer first, fallback to re-download if it fails
|
||||
# Try default installer first, fallback to pinned version if it fails
|
||||
if file /tmp/opencode-install.sh | grep -q "shell script\|text"; then
|
||||
if ! bash /tmp/opencode-install.sh 2>&1; then
|
||||
echo "Default installer failed, trying direct install..."
|
||||
bash <(curl -fsSL https://opencode.ai/install)
|
||||
echo "Default installer failed, trying with pinned version..."
|
||||
bash /tmp/opencode-install.sh --version 1.0.204
|
||||
fi
|
||||
else
|
||||
echo "Download corrupted, trying direct install..."
|
||||
bash <(curl -fsSL https://opencode.ai/install)
|
||||
echo "Download corrupted, trying direct install with pinned version..."
|
||||
bash <(curl -fsSL https://opencode.ai/install) --version 1.0.204
|
||||
fi
|
||||
fi
|
||||
opencode --version
|
||||
|
||||
# Run local oh-my-opencode install (uses built dist)
|
||||
bun run dist/cli/index.js install --no-tui --claude=max20 --openai=no --gemini=no --copilot=no
|
||||
bun run dist/cli/index.js install --no-tui --claude=max20 --chatgpt=no --gemini=no
|
||||
|
||||
# Override plugin to use local file reference
|
||||
OPENCODE_JSON=~/.config/opencode/opencode.json
|
||||
@@ -114,7 +114,6 @@ jobs:
|
||||
|
||||
OPENCODE_JSON=~/.config/opencode/opencode.json
|
||||
jq --arg baseURL "$ANTHROPIC_BASE_URL" --arg apiKey "$ANTHROPIC_API_KEY" '
|
||||
.model = "anthropic/claude-opus-4-5" |
|
||||
.provider.anthropic = {
|
||||
"name": "Anthropic",
|
||||
"npm": "@ai-sdk/anthropic",
|
||||
@@ -135,14 +134,14 @@ jobs:
|
||||
"limit": { "context": 190000, "output": 128000 },
|
||||
"options": { "effort": "high", "thinking": { "type": "enabled", "budgetTokens": 64000 } }
|
||||
},
|
||||
"claude-sonnet-4-6": {
|
||||
"id": "claude-sonnet-4-6-20250929",
|
||||
"name": "Sonnet 4.6",
|
||||
"claude-sonnet-4-5": {
|
||||
"id": "claude-sonnet-4-5-20250929",
|
||||
"name": "Sonnet 4.5",
|
||||
"limit": { "context": 200000, "output": 64000 }
|
||||
},
|
||||
"claude-sonnet-4-6-high": {
|
||||
"id": "claude-sonnet-4-6-20250929",
|
||||
"name": "Sonnet 4.6 High",
|
||||
"claude-sonnet-4-5-high": {
|
||||
"id": "claude-sonnet-4-5-20250929",
|
||||
"name": "Sonnet 4.5 High",
|
||||
"limit": { "context": 200000, "output": 128000 },
|
||||
"options": { "thinking": { "type": "enabled", "budgetTokens": 64000 } }
|
||||
},
|
||||
@@ -152,41 +151,6 @@ jobs:
|
||||
"limit": { "context": 200000, "output": 64000 }
|
||||
}
|
||||
}
|
||||
} |
|
||||
.provider["zai-coding-plan"] = {
|
||||
"name": "Z.AI Coding Plan",
|
||||
"npm": "@ai-sdk/openai-compatible",
|
||||
"options": {
|
||||
"baseURL": "https://api.z.ai/api/paas/v4"
|
||||
},
|
||||
"models": {
|
||||
"glm-4.7": {
|
||||
"id": "glm-4.7",
|
||||
"name": "GLM 4.7",
|
||||
"limit": { "context": 128000, "output": 16000 }
|
||||
},
|
||||
"glm-4.6v": {
|
||||
"id": "glm-4.6v",
|
||||
"name": "GLM 4.6 Vision",
|
||||
"limit": { "context": 128000, "output": 16000 }
|
||||
}
|
||||
}
|
||||
} |
|
||||
.provider.openai = {
|
||||
"name": "OpenAI",
|
||||
"npm": "@ai-sdk/openai",
|
||||
"models": {
|
||||
"gpt-5.2": {
|
||||
"id": "gpt-5.2",
|
||||
"name": "GPT-5.2",
|
||||
"limit": { "context": 128000, "output": 16000 }
|
||||
},
|
||||
"gpt-5.2-codex": {
|
||||
"id": "gpt-5.2-codex",
|
||||
"name": "GPT-5.2 Codex",
|
||||
"limit": { "context": 128000, "output": 32000 }
|
||||
}
|
||||
}
|
||||
}
|
||||
' "$OPENCODE_JSON" > /tmp/oc.json && mv /tmp/oc.json "$OPENCODE_JSON"
|
||||
|
||||
@@ -322,9 +286,6 @@ jobs:
|
||||
)
|
||||
jq --arg append "$PROMPT_APPEND" '.agents.Sisyphus.prompt_append = $append' "$OMO_JSON" > /tmp/omo.json && mv /tmp/omo.json "$OMO_JSON"
|
||||
|
||||
# Add categories configuration for unspecified-low to use GLM 4.7
|
||||
jq '.categories["unspecified-low"] = { "model": "zai-coding-plan/glm-4.7" }' "$OMO_JSON" > /tmp/omo.json && mv /tmp/omo.json "$OMO_JSON"
|
||||
|
||||
mkdir -p ~/.local/share/opencode
|
||||
echo "$OPENCODE_AUTH_JSON" > ~/.local/share/opencode/auth.json
|
||||
chmod 600 ~/.local/share/opencode/auth.json
|
||||
@@ -469,10 +430,6 @@ jobs:
|
||||
2. **CREATE TODOS IMMEDIATELY**: Right after reading, create your todo list using todo tools.
|
||||
- First todo: "Summarize issue/PR context and requirements"
|
||||
- Break down ALL work into atomic, verifiable steps
|
||||
- **GIT WORKFLOW (MANDATORY for implementation tasks)**: ALWAYS include these final todos:
|
||||
- "Create new branch from origin/BRANCH_PLACEHOLDER (NEVER push directly to BRANCH_PLACEHOLDER)"
|
||||
- "Commit changes"
|
||||
- "Create PR to BRANCH_PLACEHOLDER branch"
|
||||
- Plan everything BEFORE starting any work
|
||||
|
||||
---
|
||||
|
||||
9
.gitignore
vendored
9
.gitignore
vendored
@@ -1,16 +1,10 @@
|
||||
# Dependencies
|
||||
.sisyphus/*
|
||||
!.sisyphus/rules/
|
||||
.sisyphus/
|
||||
node_modules/
|
||||
|
||||
# Build output
|
||||
dist/
|
||||
|
||||
# Platform binaries (built, not committed)
|
||||
packages/*/bin/oh-my-opencode
|
||||
packages/*/bin/oh-my-opencode.exe
|
||||
packages/*/bin/*.map
|
||||
|
||||
# IDE
|
||||
.idea/
|
||||
.vscode/
|
||||
@@ -35,4 +29,3 @@ yarn.lock
|
||||
test-injection/
|
||||
notepad.md
|
||||
oauth-success.html
|
||||
*.bun-build
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
---
|
||||
description: Compare HEAD with the latest published npm version and list all unpublished changes
|
||||
model: anthropic/claude-haiku-4-5
|
||||
---
|
||||
|
||||
<command-instruction>
|
||||
@@ -54,95 +55,30 @@ For each commit, you MUST:
|
||||
### feat
|
||||
| Scope | What Changed |
|
||||
|-------|--------------|
|
||||
| X | Description of actual changes |
|
||||
| X | 실제 변경 내용 설명 |
|
||||
|
||||
### fix
|
||||
| Scope | What Changed |
|
||||
|-------|--------------|
|
||||
| X | Description of actual changes |
|
||||
| X | 실제 변경 내용 설명 |
|
||||
|
||||
### refactor
|
||||
| Scope | What Changed |
|
||||
|-------|--------------|
|
||||
| X | Description of actual changes |
|
||||
| X | 실제 변경 내용 설명 |
|
||||
|
||||
### docs
|
||||
| Scope | What Changed |
|
||||
|-------|--------------|
|
||||
| X | Description of actual changes |
|
||||
| X | 실제 변경 내용 설명 |
|
||||
|
||||
### Breaking Changes
|
||||
None or list
|
||||
None 또는 목록
|
||||
|
||||
### Files Changed
|
||||
{diff-stat}
|
||||
|
||||
### Suggested Version Bump
|
||||
- **Recommendation**: patch|minor|major
|
||||
- **Reason**: Reason for recommendation
|
||||
- **Reason**: 이유
|
||||
</output-format>
|
||||
|
||||
<oracle-safety-review>
|
||||
## Oracle Deployment Safety Review (Only when user explicitly requests)
|
||||
|
||||
**Trigger keywords**: "safe to deploy", "can I deploy", "is it safe", "review", "check", "oracle"
|
||||
|
||||
When user includes any of the above keywords in their request:
|
||||
|
||||
### 1. Pre-validation
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test
|
||||
```
|
||||
- On failure → Report "❌ Cannot deploy" immediately without invoking Oracle
|
||||
|
||||
### 2. Oracle Invocation Prompt
|
||||
|
||||
Collect the following information and pass to Oracle:
|
||||
|
||||
```
|
||||
## Deployment Safety Review Request
|
||||
|
||||
### Changes Summary
|
||||
{Changes table analyzed above}
|
||||
|
||||
### Key diffs (organized by feature)
|
||||
{Core code changes for each feat/fix/refactor - only key parts, not full diff}
|
||||
|
||||
### Validation Results
|
||||
- Typecheck: ✅/❌
|
||||
- Tests: {pass}/{total} (✅/❌)
|
||||
|
||||
### Review Items
|
||||
1. **Regression Risk**: Are there changes that could affect existing functionality?
|
||||
2. **Side Effects**: Are there areas where unexpected side effects could occur?
|
||||
3. **Breaking Changes**: Are there changes that affect external users?
|
||||
4. **Edge Cases**: Are there missed edge cases?
|
||||
5. **Deployment Recommendation**: SAFE / CAUTION / UNSAFE
|
||||
|
||||
### Request
|
||||
Please analyze the above changes deeply and provide your judgment on deployment safety.
|
||||
If there are risks, explain with specific scenarios.
|
||||
Suggest keywords to monitor after deployment if any.
|
||||
```
|
||||
|
||||
### 3. Output Format After Oracle Response
|
||||
|
||||
## 🔍 Oracle Deployment Safety Review Result
|
||||
|
||||
### Verdict: ✅ SAFE / ⚠️ CAUTION / ❌ UNSAFE
|
||||
|
||||
### Risk Analysis
|
||||
| Area | Risk Level | Description |
|
||||
|------|------------|-------------|
|
||||
| ... | 🟢/🟡/🔴 | ... |
|
||||
|
||||
### Recommendations
|
||||
- ...
|
||||
|
||||
### Post-deployment Monitoring Keywords
|
||||
- ...
|
||||
|
||||
### Conclusion
|
||||
{Oracle's final judgment}
|
||||
</oracle-safety-review>
|
||||
|
||||
@@ -14,7 +14,7 @@ You are the release manager for oh-my-opencode. Execute the FULL publish workflo
|
||||
- `major`: Breaking changes (1.1.7 → 2.0.0)
|
||||
|
||||
**If the user did not provide a bump type argument, STOP IMMEDIATELY and ask:**
|
||||
> "To proceed with deployment, please specify a version bump type: `patch`, `minor`, or `major`"
|
||||
> "배포를 진행하려면 버전 범프 타입을 지정해주세요: `patch`, `minor`, 또는 `major`"
|
||||
|
||||
**DO NOT PROCEED without explicit user confirmation of bump type.**
|
||||
|
||||
@@ -31,12 +31,10 @@ You are the release manager for oh-my-opencode. Execute the FULL publish workflo
|
||||
{ "id": "sync-remote", "content": "Sync with remote (pull --rebase && push if unpushed commits)", "status": "pending", "priority": "high" },
|
||||
{ "id": "run-workflow", "content": "Trigger GitHub Actions publish workflow", "status": "pending", "priority": "high" },
|
||||
{ "id": "wait-workflow", "content": "Wait for workflow completion (poll every 30s)", "status": "pending", "priority": "high" },
|
||||
{ "id": "verify-and-preview", "content": "Verify release created + preview auto-generated changelog & contributor thanks", "status": "pending", "priority": "high" },
|
||||
{ "id": "draft-summary", "content": "Draft enhanced release summary (mandatory for minor/major, optional for patch — ask user)", "status": "pending", "priority": "high" },
|
||||
{ "id": "apply-summary", "content": "Prepend enhanced summary to release (if user opted in)", "status": "pending", "priority": "high" },
|
||||
{ "id": "verify-release", "content": "Verify GitHub release was created", "status": "pending", "priority": "high" },
|
||||
{ "id": "draft-release-notes", "content": "Draft enhanced release notes content", "status": "pending", "priority": "high" },
|
||||
{ "id": "update-release-notes", "content": "Update GitHub release with enhanced notes", "status": "pending", "priority": "high" },
|
||||
{ "id": "verify-npm", "content": "Verify npm package published successfully", "status": "pending", "priority": "high" },
|
||||
{ "id": "wait-platform-workflow", "content": "Wait for publish-platform workflow completion", "status": "pending", "priority": "high" },
|
||||
{ "id": "verify-platform-binaries", "content": "Verify all 7 platform binary packages published", "status": "pending", "priority": "high" },
|
||||
{ "id": "final-confirmation", "content": "Final confirmation to user with links", "status": "pending", "priority": "low" }
|
||||
]
|
||||
```
|
||||
@@ -48,7 +46,7 @@ You are the release manager for oh-my-opencode. Execute the FULL publish workflo
|
||||
## STEP 1: CONFIRM BUMP TYPE
|
||||
|
||||
If bump type provided as argument, confirm with user:
|
||||
> "Version bump type: `{bump}`. Proceed? (y/n)"
|
||||
> "버전 범프 타입: `{bump}`. 진행할까요? (y/n)"
|
||||
|
||||
Wait for user confirmation before proceeding.
|
||||
|
||||
@@ -111,166 +109,103 @@ gh run view {run_id} --log-failed
|
||||
|
||||
---
|
||||
|
||||
## STEP 5: VERIFY RELEASE & PREVIEW AUTO-GENERATED CONTENT
|
||||
|
||||
Two goals: confirm the release exists, then show the user what the workflow already generated.
|
||||
## STEP 5: VERIFY GITHUB RELEASE
|
||||
|
||||
Get the new version and verify release exists:
|
||||
```bash
|
||||
# Pull latest (workflow committed version bump)
|
||||
# Get new version from package.json (workflow updates it)
|
||||
git pull --rebase
|
||||
NEW_VERSION=$(node -p "require('./package.json').version")
|
||||
|
||||
# Verify release exists on GitHub
|
||||
gh release view "v${NEW_VERSION}" --json tagName,url --jq '{tag: .tagName, url: .url}'
|
||||
gh release view "v${NEW_VERSION}"
|
||||
```
|
||||
|
||||
**After verifying, generate a local preview of the auto-generated content:**
|
||||
|
||||
```bash
|
||||
bun run script/generate-changelog.ts
|
||||
```
|
||||
|
||||
<agent-instruction>
|
||||
After running the preview, present the output to the user and say:
|
||||
|
||||
> **The following content is ALREADY included in the release automatically:**
|
||||
> - Commit changelog (grouped by feat/fix/refactor)
|
||||
> - Contributor thank-you messages (for non-team contributors)
|
||||
>
|
||||
> You do NOT need to write any of this. It's handled.
|
||||
>
|
||||
> **For a patch release**, this is usually sufficient on its own. However, if there are notable bug fixes or changes worth highlighting, an enhanced summary can be added.
|
||||
> **For a minor/major release**, an enhanced summary is **required** — I'll draft one in the next step.
|
||||
|
||||
Wait for the user to acknowledge before proceeding.
|
||||
</agent-instruction>
|
||||
|
||||
---
|
||||
|
||||
## STEP 6: DRAFT ENHANCED RELEASE SUMMARY
|
||||
## STEP 6: DRAFT ENHANCED RELEASE NOTES
|
||||
|
||||
<decision-gate>
|
||||
Analyze commits since the previous version and draft release notes following project conventions:
|
||||
|
||||
| Release Type | Action |
|
||||
|-------------|--------|
|
||||
| **patch** | ASK the user: "Would you like me to draft an enhanced summary highlighting the key bug fixes / changes? Or is the auto-generated changelog sufficient?" If user declines → skip to Step 8. If user accepts → draft a concise bug-fix / change summary below. |
|
||||
| **minor** | MANDATORY. Draft a concise feature summary. Do NOT proceed without one. |
|
||||
| **major** | MANDATORY. Draft a full release narrative with migration notes if applicable. Do NOT proceed without one. |
|
||||
### For PATCH releases:
|
||||
Keep simple format - just list commits:
|
||||
```markdown
|
||||
- {hash} {conventional commit message}
|
||||
- ...
|
||||
```
|
||||
|
||||
</decision-gate>
|
||||
### For MINOR releases:
|
||||
Use feature-focused format:
|
||||
```markdown
|
||||
## New Features
|
||||
|
||||
### What You're Writing (and What You're NOT)
|
||||
### Feature Name
|
||||
- Description of what it does
|
||||
- Why it matters
|
||||
|
||||
You are writing the **headline layer** — a product announcement that sits ABOVE the auto-generated commit log. Think "release blog post", not "git log".
|
||||
## Bug Fixes
|
||||
- fix(scope): description
|
||||
|
||||
<rules>
|
||||
- NEVER duplicate commit messages. The auto-generated section already lists every commit.
|
||||
- NEVER write generic filler like "Various bug fixes and improvements" or "Several enhancements".
|
||||
- ALWAYS focus on USER IMPACT: what can users DO now that they couldn't before?
|
||||
- ALWAYS group by THEME or CAPABILITY, not by commit type (feat/fix/refactor).
|
||||
- ALWAYS use concrete language: "You can now do X" not "Added X feature".
|
||||
</rules>
|
||||
|
||||
<examples>
|
||||
<bad title="Commit regurgitation — DO NOT do this">
|
||||
## What's New
|
||||
- feat(auth): add JWT refresh token rotation
|
||||
- fix(auth): handle expired token edge case
|
||||
- refactor(auth): extract middleware
|
||||
</bad>
|
||||
|
||||
<good title="User-impact narrative — DO this">
|
||||
## 🔐 Smarter Authentication
|
||||
|
||||
Token refresh is now automatic and seamless. Sessions no longer expire mid-task — the system silently rotates credentials in the background. If you've been frustrated by random logouts, this release fixes that.
|
||||
</good>
|
||||
|
||||
<bad title="Vague filler — DO NOT do this">
|
||||
## Improvements
|
||||
- Various performance improvements
|
||||
- Bug fixes and stability enhancements
|
||||
</bad>
|
||||
|
||||
<good title="Specific and measurable — DO this">
|
||||
## ⚡ 3x Faster Rule Parsing
|
||||
|
||||
Rules are now cached by file modification time. If your project has 50+ rule files, you'll notice startup is noticeably faster — we measured a 3x improvement in our test suite.
|
||||
</good>
|
||||
</examples>
|
||||
|
||||
### Drafting Process
|
||||
|
||||
1. **Analyze** the commit list from Step 5's preview. Identify 2-5 themes that matter to users.
|
||||
2. **Write** the summary to `/tmp/release-summary-v${NEW_VERSION}.md`.
|
||||
3. **Present** the draft to the user for review and approval before applying.
|
||||
|
||||
```bash
|
||||
# Write your draft here
|
||||
cat > /tmp/release-summary-v${NEW_VERSION}.md << 'SUMMARY_EOF'
|
||||
{your_enhanced_summary}
|
||||
SUMMARY_EOF
|
||||
|
||||
cat /tmp/release-summary-v${NEW_VERSION}.md
|
||||
- refactor(scope): description
|
||||
```
|
||||
|
||||
<agent-instruction>
|
||||
After drafting, ask the user:
|
||||
> "Here's the release summary I drafted. This will appear AT THE TOP of the release notes, above the auto-generated commit changelog and contributor thanks. Want me to adjust anything before applying?"
|
||||
### For MAJOR releases:
|
||||
Full changelog format:
|
||||
```markdown
|
||||
# v{version}
|
||||
|
||||
Do NOT proceed to Step 7 without user confirmation.
|
||||
</agent-instruction>
|
||||
Brief description of the release.
|
||||
|
||||
## What's New Since v{previous}
|
||||
|
||||
### Breaking Changes
|
||||
- Description of breaking change
|
||||
|
||||
### Features
|
||||
- **Feature Name**: Description
|
||||
|
||||
### Bug Fixes
|
||||
- Description
|
||||
|
||||
### Documentation
|
||||
- Description
|
||||
|
||||
## Migration Guide (if applicable)
|
||||
...
|
||||
```
|
||||
|
||||
**CRITICAL: The enhanced notes must ADD to existing workflow-generated notes, not replace them.**
|
||||
|
||||
---
|
||||
|
||||
## STEP 7: APPLY ENHANCED SUMMARY TO RELEASE
|
||||
## STEP 7: UPDATE GITHUB RELEASE
|
||||
|
||||
**Skip this step ONLY if the user opted out of the enhanced summary in Step 6** — proceed directly to Step 8.
|
||||
|
||||
<architecture>
|
||||
The final release note structure:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────┐
|
||||
│ Enhanced Summary (from Step 6) │ ← You wrote this
|
||||
│ - Theme-based, user-impact focused │
|
||||
├─────────────────────────────────────┤
|
||||
│ --- (separator) │
|
||||
├─────────────────────────────────────┤
|
||||
│ Auto-generated Commit Changelog │ ← Workflow wrote this
|
||||
│ - feat/fix/refactor grouped │
|
||||
│ - Contributor thank-you messages │
|
||||
└─────────────────────────────────────┘
|
||||
```
|
||||
</architecture>
|
||||
|
||||
<zero-content-loss-policy>
|
||||
- Fetch the existing release body FIRST
|
||||
- PREPEND your summary above it
|
||||
- The existing auto-generated content must remain 100% INTACT
|
||||
- NOT A SINGLE CHARACTER of existing content may be removed or modified
|
||||
</zero-content-loss-policy>
|
||||
**ZERO CONTENT LOSS POLICY:**
|
||||
- First, fetch the existing release body with `gh release view`
|
||||
- Your enhanced notes must be PREPENDED to the existing content
|
||||
- **NOT A SINGLE CHARACTER of existing content may be removed or modified**
|
||||
- The final release body = `{your_enhanced_notes}\n\n---\n\n{existing_body_exactly_as_is}`
|
||||
|
||||
```bash
|
||||
# 1. Fetch existing auto-generated body
|
||||
# Get existing body
|
||||
EXISTING_BODY=$(gh release view "v${NEW_VERSION}" --json body --jq '.body')
|
||||
|
||||
# 2. Combine: enhanced summary on top, auto-generated below
|
||||
{
|
||||
cat /tmp/release-summary-v${NEW_VERSION}.md
|
||||
echo ""
|
||||
echo "---"
|
||||
echo ""
|
||||
echo "$EXISTING_BODY"
|
||||
} > /tmp/final-release-v${NEW_VERSION}.md
|
||||
# Write enhanced notes to temp file (prepend to existing)
|
||||
cat > /tmp/release-notes-v${NEW_VERSION}.md << 'EOF'
|
||||
{your_enhanced_notes}
|
||||
|
||||
# 3. Update the release (additive only)
|
||||
gh release edit "v${NEW_VERSION}" --notes-file /tmp/final-release-v${NEW_VERSION}.md
|
||||
---
|
||||
|
||||
# 4. Confirm
|
||||
echo "✅ Release v${NEW_VERSION} updated with enhanced summary."
|
||||
gh release view "v${NEW_VERSION}" --json url --jq '.url'
|
||||
EOF
|
||||
|
||||
# Append existing body EXACTLY as-is (zero modifications)
|
||||
echo "$EXISTING_BODY" >> /tmp/release-notes-v${NEW_VERSION}.md
|
||||
|
||||
# Update release
|
||||
gh release edit "v${NEW_VERSION}" --notes-file /tmp/release-notes-v${NEW_VERSION}.md
|
||||
```
|
||||
|
||||
**CRITICAL: This is ADDITIVE ONLY. You are adding your notes on top. The existing content remains 100% intact.**
|
||||
|
||||
---
|
||||
|
||||
## STEP 8: VERIFY NPM PUBLICATION
|
||||
@@ -284,64 +219,12 @@ Compare with expected version. If not matching after 2 minutes, warn user about
|
||||
|
||||
---
|
||||
|
||||
## STEP 8.5: WAIT FOR PLATFORM WORKFLOW COMPLETION
|
||||
|
||||
The main publish workflow triggers a separate `publish-platform` workflow for platform-specific binaries.
|
||||
|
||||
1. Find the publish-platform workflow run triggered by the main workflow:
|
||||
```bash
|
||||
gh run list --workflow=publish-platform --limit=1 --json databaseId,status,conclusion --jq '.[0]'
|
||||
```
|
||||
|
||||
2. Poll workflow status every 30 seconds until completion:
|
||||
```bash
|
||||
gh run view {platform_run_id} --json status,conclusion --jq '{status: .status, conclusion: .conclusion}'
|
||||
```
|
||||
|
||||
**IMPORTANT: Use polling loop, NOT sleep commands.**
|
||||
|
||||
If conclusion is `failure`, show error logs:
|
||||
```bash
|
||||
gh run view {platform_run_id} --log-failed
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## STEP 8.6: VERIFY PLATFORM BINARY PACKAGES
|
||||
|
||||
After publish-platform workflow completes, verify all 7 platform packages are published:
|
||||
|
||||
```bash
|
||||
PLATFORMS="darwin-arm64 darwin-x64 linux-x64 linux-arm64 linux-x64-musl linux-arm64-musl windows-x64"
|
||||
for PLATFORM in $PLATFORMS; do
|
||||
npm view "oh-my-opencode-${PLATFORM}" version
|
||||
done
|
||||
```
|
||||
|
||||
All 7 packages should show the same version as the main package (`${NEW_VERSION}`).
|
||||
|
||||
**Expected packages:**
|
||||
| Package | Description |
|
||||
|---------|-------------|
|
||||
| `oh-my-opencode-darwin-arm64` | macOS Apple Silicon |
|
||||
| `oh-my-opencode-darwin-x64` | macOS Intel |
|
||||
| `oh-my-opencode-linux-x64` | Linux x64 (glibc) |
|
||||
| `oh-my-opencode-linux-arm64` | Linux ARM64 (glibc) |
|
||||
| `oh-my-opencode-linux-x64-musl` | Linux x64 (musl/Alpine) |
|
||||
| `oh-my-opencode-linux-arm64-musl` | Linux ARM64 (musl/Alpine) |
|
||||
| `oh-my-opencode-windows-x64` | Windows x64 |
|
||||
|
||||
If any platform package version doesn't match, warn the user and suggest checking the publish-platform workflow logs.
|
||||
|
||||
---
|
||||
|
||||
## STEP 9: FINAL CONFIRMATION
|
||||
|
||||
Report success to user with:
|
||||
- New version number
|
||||
- GitHub release URL: https://github.com/code-yeongyu/oh-my-opencode/releases/tag/v{version}
|
||||
- npm package URL: https://www.npmjs.com/package/oh-my-opencode
|
||||
- Platform packages status: List all 7 platform packages with their versions
|
||||
|
||||
---
|
||||
|
||||
@@ -351,12 +234,10 @@ Report success to user with:
|
||||
- **Release not found**: Wait and retry, may be propagation delay
|
||||
- **npm not updated**: npm can take 1-5 minutes to propagate, inform user
|
||||
- **Permission denied**: User may need to re-authenticate with `gh auth login`
|
||||
- **Platform workflow fails**: Show logs from publish-platform workflow, check which platform failed
|
||||
- **Platform package missing**: Some platforms may fail due to cross-compilation issues, suggest re-running publish-platform workflow manually
|
||||
|
||||
## LANGUAGE
|
||||
|
||||
Respond to user in English.
|
||||
Respond to user in Korean (한국어).
|
||||
|
||||
</command-instruction>
|
||||
|
||||
|
||||
@@ -1,221 +0,0 @@
|
||||
---
|
||||
description: Remove unused code from this project with ultrawork mode, LSP-verified safety, atomic commits
|
||||
---
|
||||
|
||||
<command-instruction>
|
||||
|
||||
Dead code removal via massively parallel deep agents. You are the ORCHESTRATOR — you scan, verify, batch, then delegate ALL removals to parallel agents.
|
||||
|
||||
<rules>
|
||||
- **LSP is law.** Verify with `LspFindReferences(includeDeclaration=false)` before ANY removal decision.
|
||||
- **Never remove entry points.** `src/index.ts`, `src/cli/index.ts`, test files, config files, `packages/` — off-limits.
|
||||
- **You do NOT remove code yourself.** You scan, verify, batch, then fire deep agents. They do the work.
|
||||
</rules>
|
||||
|
||||
<false-positive-guards>
|
||||
NEVER mark as dead:
|
||||
- Symbols in `src/index.ts` or barrel `index.ts` re-exports
|
||||
- Symbols referenced in test files (tests are valid consumers)
|
||||
- Symbols with `@public` / `@api` JSDoc tags
|
||||
- Hook factories (`createXXXHook`), tool factories (`createXXXTool`), agent definitions in `agentSources`
|
||||
- Command templates, skill definitions, MCP configs
|
||||
- Symbols in `package.json` exports
|
||||
</false-positive-guards>
|
||||
|
||||
---
|
||||
|
||||
## PHASE 1: SCAN — Find Dead Code Candidates
|
||||
|
||||
Run ALL of these in parallel:
|
||||
|
||||
<parallel-scan>
|
||||
|
||||
**TypeScript strict mode (your primary scanner — run this FIRST):**
|
||||
```bash
|
||||
bunx tsc --noEmit --noUnusedLocals --noUnusedParameters 2>&1
|
||||
```
|
||||
This gives you the definitive list of unused locals, imports, parameters, and types with exact file:line locations.
|
||||
|
||||
**Explore agents (fire ALL simultaneously as background):**
|
||||
|
||||
```
|
||||
task(subagent_type="explore", run_in_background=true, load_skills=[],
|
||||
description="Find orphaned files",
|
||||
prompt="Find files in src/ NOT imported by any other file. Check all import statements. EXCLUDE: index.ts, *.test.ts, entry points, .md, packages/. Return: file paths.")
|
||||
|
||||
task(subagent_type="explore", run_in_background=true, load_skills=[],
|
||||
description="Find unused exported symbols",
|
||||
prompt="Find exported functions/types/constants in src/ that are never imported by other files. Cross-reference: for each export, grep the symbol name across src/ — if it only appears in its own file, it's a candidate. EXCLUDE: src/index.ts exports, test files. Return: file path, line, symbol name, export type.")
|
||||
```
|
||||
|
||||
</parallel-scan>
|
||||
|
||||
Collect all results into a master candidate list.
|
||||
|
||||
---
|
||||
|
||||
## PHASE 2: VERIFY — LSP Confirmation (Zero False Positives)
|
||||
|
||||
For EACH candidate from Phase 1:
|
||||
|
||||
```typescript
|
||||
LspFindReferences(filePath, line, character, includeDeclaration=false)
|
||||
// 0 references → CONFIRMED dead
|
||||
// 1+ references → NOT dead, drop from list
|
||||
```
|
||||
|
||||
Also apply the false-positive-guards above. Produce a confirmed list:
|
||||
|
||||
```
|
||||
| # | File | Symbol | Type | Action |
|
||||
|---|------|--------|------|--------|
|
||||
| 1 | src/foo.ts:42 | unusedFunc | function | REMOVE |
|
||||
| 2 | src/bar.ts:10 | OldType | type | REMOVE |
|
||||
| 3 | src/baz.ts:7 | ctx | parameter | PREFIX _ |
|
||||
```
|
||||
|
||||
**Action types:**
|
||||
- `REMOVE` — delete the symbol/import/file entirely
|
||||
- `PREFIX _` — unused function parameter required by signature → rename to `_paramName`
|
||||
|
||||
If ZERO confirmed: report "No dead code found" and STOP.
|
||||
|
||||
---
|
||||
|
||||
## PHASE 3: BATCH — Group by File for Conflict-Free Parallelism
|
||||
|
||||
<batching-rules>
|
||||
|
||||
**Goal: maximize parallel agents with ZERO git conflicts.**
|
||||
|
||||
1. Group confirmed dead code items by FILE PATH
|
||||
2. All items in the SAME file go to the SAME batch (prevents two agents editing the same file)
|
||||
3. If a dead FILE (entire file deletion) exists, it's its own batch
|
||||
4. Target 5-15 batches. If fewer than 5 items total, use 1 batch per item.
|
||||
|
||||
**Example batching:**
|
||||
```
|
||||
Batch A: [src/hooks/foo/hook.ts — 3 unused imports]
|
||||
Batch B: [src/features/bar/manager.ts — 2 unused constants, 1 dead function]
|
||||
Batch C: [src/tools/baz/tool.ts — 1 unused param, src/tools/baz/types.ts — 1 unused type]
|
||||
Batch D: [src/dead-file.ts — entire file deletion]
|
||||
```
|
||||
|
||||
Files in the same directory CAN be batched together (they won't conflict as long as no two agents edit the same file). Maximize batch count for parallelism.
|
||||
|
||||
</batching-rules>
|
||||
|
||||
---
|
||||
|
||||
## PHASE 4: EXECUTE — Fire Parallel Deep Agents
|
||||
|
||||
For EACH batch, fire a deep agent:
|
||||
|
||||
```
|
||||
task(
|
||||
category="deep",
|
||||
load_skills=["typescript-programmer", "git-master"],
|
||||
run_in_background=true,
|
||||
description="Remove dead code batch N: [brief description]",
|
||||
prompt="[see template below]"
|
||||
)
|
||||
```
|
||||
|
||||
<agent-prompt-template>
|
||||
|
||||
Every deep agent gets this prompt structure (fill in the specifics per batch):
|
||||
|
||||
```
|
||||
## TASK: Remove dead code from [file list]
|
||||
|
||||
## DEAD CODE TO REMOVE
|
||||
|
||||
### [file path] line [N]
|
||||
- Symbol: `[name]` — [type: unused import / unused constant / unused function / unused parameter / dead file]
|
||||
- Action: [REMOVE entirely / REMOVE from import list / PREFIX with _]
|
||||
|
||||
### [file path] line [N]
|
||||
- ...
|
||||
|
||||
## PROTOCOL
|
||||
|
||||
1. Read each file to understand exact syntax at the target lines
|
||||
2. For each symbol, run LspFindReferences to RE-VERIFY it's still dead (another agent may have changed things)
|
||||
3. Apply the change:
|
||||
- Unused import (only symbol in line): remove entire import line
|
||||
- Unused import (one of many): remove only that symbol from the import list
|
||||
- Unused constant/function/type: remove the declaration. Clean up trailing blank lines.
|
||||
- Unused parameter: prefix with `_` (do NOT remove — required by signature)
|
||||
- Dead file: delete with `rm`
|
||||
4. After ALL edits in this batch, run: `bun run typecheck`
|
||||
5. If typecheck fails: `git checkout -- [files]` and report failure
|
||||
6. If typecheck passes: stage ONLY your files and commit:
|
||||
`git add [your-specific-files] && git commit -m "refactor: remove dead code from [brief file list]"`
|
||||
7. Report what you removed and the commit hash
|
||||
|
||||
## CRITICAL
|
||||
- Stage ONLY your batch's files (`git add [specific files]`). NEVER `git add -A` — other agents are working in parallel.
|
||||
- If typecheck fails after your edits, REVERT all changes and report. Do not attempt to fix.
|
||||
- Pre-existing test failures in other files are expected. Only typecheck matters for your batch.
|
||||
```
|
||||
|
||||
</agent-prompt-template>
|
||||
|
||||
Fire ALL batches simultaneously. Wait for all to complete.
|
||||
|
||||
---
|
||||
|
||||
## PHASE 5: FINAL VERIFICATION
|
||||
|
||||
After ALL agents complete:
|
||||
|
||||
```bash
|
||||
bun run typecheck # must pass
|
||||
bun test # note any NEW failures vs pre-existing
|
||||
bun run build # must pass
|
||||
```
|
||||
|
||||
Produce summary:
|
||||
|
||||
```markdown
|
||||
## Dead Code Removal Complete
|
||||
|
||||
### Removed
|
||||
| # | Symbol | File | Type | Commit | Agent |
|
||||
|---|--------|------|------|--------|-------|
|
||||
| 1 | unusedFunc | src/foo.ts | function | abc1234 | Batch A |
|
||||
|
||||
### Skipped (agent reported failure)
|
||||
| # | Symbol | File | Reason |
|
||||
|---|--------|------|--------|
|
||||
|
||||
### Verification
|
||||
- Typecheck: PASS/FAIL
|
||||
- Tests: X passing, Y failing (Z pre-existing)
|
||||
- Build: PASS/FAIL
|
||||
- Total removed: N symbols across M files
|
||||
- Total commits: K atomic commits
|
||||
- Parallel agents used: P
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## SCOPE CONTROL
|
||||
|
||||
If `$ARGUMENTS` is provided, narrow the scan:
|
||||
- File path → only that file
|
||||
- Directory → only that directory
|
||||
- Symbol name → only that symbol
|
||||
- `all` or empty → full project scan (default)
|
||||
|
||||
## ABORT CONDITIONS
|
||||
|
||||
STOP and report if:
|
||||
- More than 50 candidates found (ask user to narrow scope or confirm proceeding)
|
||||
- Build breaks and cannot be fixed by reverting
|
||||
|
||||
</command-instruction>
|
||||
|
||||
<user-request>
|
||||
$ARGUMENTS
|
||||
</user-request>
|
||||
@@ -1,539 +0,0 @@
|
||||
---
|
||||
name: github-triage
|
||||
description: "Read-only GitHub triage for issues AND PRs. 1 item = 1 background task (category: quick). Analyzes all open items and writes evidence-backed reports to /tmp/{datetime}/. Every claim requires a GitHub permalink as proof. NEVER takes any action on GitHub - no comments, no merges, no closes, no labels. Reports only. Triggers: 'triage', 'triage issues', 'triage PRs', 'github triage'."
|
||||
---
|
||||
|
||||
# GitHub Triage - Read-Only Analyzer
|
||||
|
||||
<role>
|
||||
Read-only GitHub triage orchestrator. Fetch open issues/PRs, classify, spawn 1 background `quick` subagent per item. Each subagent analyzes and writes a report file. ZERO GitHub mutations.
|
||||
</role>
|
||||
|
||||
## Architecture
|
||||
|
||||
**1 ISSUE/PR = 1 `task_create` = 1 `quick` SUBAGENT (background). NO EXCEPTIONS.**
|
||||
|
||||
| Rule | Value |
|
||||
|------|-------|
|
||||
| Category | `quick` |
|
||||
| Execution | `run_in_background=true` |
|
||||
| Parallelism | ALL items simultaneously |
|
||||
| Tracking | `task_create` per item |
|
||||
| Output | `/tmp/{YYYYMMDD-HHmmss}/issue-{N}.md` or `pr-{N}.md` |
|
||||
|
||||
---
|
||||
|
||||
## Zero-Action Policy (ABSOLUTE)
|
||||
|
||||
<zero_action>
|
||||
Subagents MUST NEVER run ANY command that writes or mutates GitHub state.
|
||||
|
||||
**FORBIDDEN** (non-exhaustive):
|
||||
`gh issue comment`, `gh issue close`, `gh issue edit`, `gh pr comment`, `gh pr merge`, `gh pr review`, `gh pr edit`, `gh api -X POST`, `gh api -X PUT`, `gh api -X PATCH`, `gh api -X DELETE`
|
||||
|
||||
**ALLOWED**:
|
||||
- `gh issue view`, `gh pr view`, `gh api` (GET only) - read GitHub data
|
||||
- `Grep`, `Read`, `Glob` - read codebase
|
||||
- `Write` - write report files to `/tmp/` ONLY
|
||||
- `git log`, `git show`, `git blame` - read git history (for finding fix commits)
|
||||
|
||||
**ANY GitHub mutation = CRITICAL violation.**
|
||||
</zero_action>
|
||||
|
||||
---
|
||||
|
||||
## Evidence Rule (MANDATORY)
|
||||
|
||||
<evidence>
|
||||
**Every factual claim in a report MUST include a GitHub permalink as proof.**
|
||||
|
||||
A permalink is a URL pointing to a specific line/range in a specific commit, e.g.:
|
||||
`https://github.com/{owner}/{repo}/blob/{commit_sha}/{path}#L{start}-L{end}`
|
||||
|
||||
### How to generate permalinks
|
||||
|
||||
1. Find the relevant file and line(s) via Grep/Read.
|
||||
2. Get the current commit SHA: `git rev-parse HEAD`
|
||||
3. Construct: `https://github.com/{REPO}/blob/{SHA}/{filepath}#L{line}` (or `#L{start}-L{end}` for ranges)
|
||||
|
||||
### Rules
|
||||
|
||||
- **No permalink = no claim.** If you cannot back a statement with a permalink, state "No evidence found" instead.
|
||||
- Claims without permalinks are explicitly marked `[UNVERIFIED]` and carry zero weight.
|
||||
- Permalinks to `main`/`master`/`dev` branches are NOT acceptable - use commit SHAs only.
|
||||
- For bug analysis: permalink to the problematic code. For fix verification: permalink to the fixing commit diff.
|
||||
</evidence>
|
||||
|
||||
---
|
||||
|
||||
## Phase 0: Setup
|
||||
|
||||
```bash
|
||||
REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner)
|
||||
REPORT_DIR="/tmp/$(date +%Y%m%d-%H%M%S)"
|
||||
mkdir -p "$REPORT_DIR"
|
||||
COMMIT_SHA=$(git rev-parse HEAD)
|
||||
```
|
||||
|
||||
Pass `REPO`, `REPORT_DIR`, and `COMMIT_SHA` to every subagent.
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Fetch All Open Items
|
||||
|
||||
<fetch>
|
||||
Paginate if 500 results returned.
|
||||
|
||||
```bash
|
||||
ISSUES=$(gh issue list --repo $REPO --state open --limit 500 \
|
||||
--json number,title,state,createdAt,updatedAt,labels,author,body,comments)
|
||||
ISSUE_LEN=$(echo "$ISSUES" | jq length)
|
||||
if [ "$ISSUE_LEN" -eq 500 ]; then
|
||||
LAST_DATE=$(echo "$ISSUES" | jq -r '.[-1].createdAt')
|
||||
while true; do
|
||||
PAGE=$(gh issue list --repo $REPO --state open --limit 500 \
|
||||
--search "created:<$LAST_DATE" \
|
||||
--json number,title,state,createdAt,updatedAt,labels,author,body,comments)
|
||||
PAGE_LEN=$(echo "$PAGE" | jq length)
|
||||
[ "$PAGE_LEN" -eq 0 ] && break
|
||||
ISSUES=$(echo "[$ISSUES, $PAGE]" | jq -s 'add | unique_by(.number)')
|
||||
[ "$PAGE_LEN" -lt 500 ] && break
|
||||
LAST_DATE=$(echo "$PAGE" | jq -r '.[-1].createdAt')
|
||||
done
|
||||
fi
|
||||
|
||||
PRS=$(gh pr list --repo $REPO --state open --limit 500 \
|
||||
--json number,title,state,createdAt,updatedAt,labels,author,body,headRefName,baseRefName,isDraft,mergeable,reviewDecision,statusCheckRollup)
|
||||
PR_LEN=$(echo "$PRS" | jq length)
|
||||
if [ "$PR_LEN" -eq 500 ]; then
|
||||
LAST_DATE=$(echo "$PRS" | jq -r '.[-1].createdAt')
|
||||
while true; do
|
||||
PAGE=$(gh pr list --repo $REPO --state open --limit 500 \
|
||||
--search "created:<$LAST_DATE" \
|
||||
--json number,title,state,createdAt,updatedAt,labels,author,body,headRefName,baseRefName,isDraft,mergeable,reviewDecision,statusCheckRollup)
|
||||
PAGE_LEN=$(echo "$PAGE" | jq length)
|
||||
[ "$PAGE_LEN" -eq 0 ] && break
|
||||
PRS=$(echo "[$PRS, $PAGE]" | jq -s 'add | unique_by(.number)')
|
||||
[ "$PAGE_LEN" -lt 500 ] && break
|
||||
LAST_DATE=$(echo "$PAGE" | jq -r '.[-1].createdAt')
|
||||
done
|
||||
fi
|
||||
```
|
||||
</fetch>
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Classify
|
||||
|
||||
| Type | Detection |
|
||||
|------|-----------|
|
||||
| `ISSUE_QUESTION` | `[Question]`, `[Discussion]`, `?`, "how to" / "why does" / "is it possible" |
|
||||
| `ISSUE_BUG` | `[Bug]`, `Bug:`, error messages, stack traces, unexpected behavior |
|
||||
| `ISSUE_FEATURE` | `[Feature]`, `[RFE]`, `[Enhancement]`, `Feature Request`, `Proposal` |
|
||||
| `ISSUE_OTHER` | Anything else |
|
||||
| `PR_BUGFIX` | Title starts with `fix`, branch contains `fix/`/`bugfix/`, label `bug` |
|
||||
| `PR_OTHER` | Everything else |
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Spawn Subagents
|
||||
|
||||
```
|
||||
For each item:
|
||||
1. task_create(subject="Triage: #{number} {title}")
|
||||
2. task(category="quick", run_in_background=true, load_skills=[], prompt=SUBAGENT_PROMPT)
|
||||
3. Store mapping: item_number -> { task_id, background_task_id }
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Subagent Prompts
|
||||
|
||||
### Common Preamble (include in ALL subagent prompts)
|
||||
|
||||
```
|
||||
CONTEXT:
|
||||
- Repository: {REPO}
|
||||
- Report directory: {REPORT_DIR}
|
||||
- Current commit SHA: {COMMIT_SHA}
|
||||
|
||||
PERMALINK FORMAT:
|
||||
Every factual claim MUST include a permalink: https://github.com/{REPO}/blob/{COMMIT_SHA}/{filepath}#L{start}-L{end}
|
||||
No permalink = no claim. Mark unverifiable claims as [UNVERIFIED].
|
||||
To get current SHA if needed: git rev-parse HEAD
|
||||
|
||||
ABSOLUTE RULES (violating ANY = critical failure):
|
||||
- NEVER run gh issue comment, gh issue close, gh issue edit
|
||||
- NEVER run gh pr comment, gh pr merge, gh pr review, gh pr edit
|
||||
- NEVER run any gh command with -X POST, -X PUT, -X PATCH, -X DELETE
|
||||
- NEVER run git checkout, git fetch, git pull, git switch, git worktree
|
||||
- Your ONLY writable output: {REPORT_DIR}/{issue|pr}-{number}.md via the Write tool
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### ISSUE_QUESTION
|
||||
|
||||
```
|
||||
You are analyzing issue #{number} for {REPO}.
|
||||
|
||||
ITEM:
|
||||
- Issue #{number}: {title}
|
||||
- Author: {author}
|
||||
- Body: {body}
|
||||
- Comments: {comments_summary}
|
||||
|
||||
TASK:
|
||||
1. Understand the question.
|
||||
2. Search the codebase (Grep, Read) for the answer.
|
||||
3. For every finding, construct a permalink: https://github.com/{REPO}/blob/{COMMIT_SHA}/{path}#L{N}
|
||||
4. Write report to {REPORT_DIR}/issue-{number}.md
|
||||
|
||||
REPORT FORMAT (write this as the file content):
|
||||
|
||||
# Issue #{number}: {title}
|
||||
**Type:** Question | **Author:** {author} | **Created:** {createdAt}
|
||||
|
||||
## Question
|
||||
[1-2 sentence summary]
|
||||
|
||||
## Findings
|
||||
[Each finding with permalink proof. Example:]
|
||||
- The config is parsed in [`src/config/loader.ts#L42-L58`](https://github.com/{REPO}/blob/{SHA}/src/config/loader.ts#L42-L58)
|
||||
|
||||
## Suggested Answer
|
||||
[Draft answer with code references and permalinks]
|
||||
|
||||
## Confidence: [HIGH | MEDIUM | LOW]
|
||||
[Reason. If LOW: what's missing]
|
||||
|
||||
## Recommended Action
|
||||
[What maintainer should do]
|
||||
|
||||
---
|
||||
REMEMBER: No permalink = no claim. Every code reference needs a permalink.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### ISSUE_BUG
|
||||
|
||||
```
|
||||
You are analyzing bug report #{number} for {REPO}.
|
||||
|
||||
ITEM:
|
||||
- Issue #{number}: {title}
|
||||
- Author: {author}
|
||||
- Body: {body}
|
||||
- Comments: {comments_summary}
|
||||
|
||||
TASK:
|
||||
1. Understand: expected behavior, actual behavior, reproduction steps.
|
||||
2. Search the codebase for relevant code. Trace the logic.
|
||||
3. Determine verdict: CONFIRMED_BUG, NOT_A_BUG, ALREADY_FIXED, or UNCLEAR.
|
||||
4. For ALREADY_FIXED: find the fixing commit using git log/git blame. Include the commit SHA and what changed.
|
||||
5. For every finding, construct a permalink.
|
||||
6. Write report to {REPORT_DIR}/issue-{number}.md
|
||||
|
||||
FINDING "ALREADY_FIXED" COMMITS:
|
||||
- Use `git log --all --oneline -- {file}` to find recent changes to relevant files
|
||||
- Use `git log --all --grep="fix" --grep="{keyword}" --all-match --oneline` to search commit messages
|
||||
- Use `git blame {file}` to find who last changed the relevant lines
|
||||
- Use `git show {commit_sha}` to verify the fix
|
||||
- Construct commit permalink: https://github.com/{REPO}/commit/{fix_commit_sha}
|
||||
|
||||
REPORT FORMAT (write this as the file content):
|
||||
|
||||
# Issue #{number}: {title}
|
||||
**Type:** Bug Report | **Author:** {author} | **Created:** {createdAt}
|
||||
|
||||
## Bug Summary
|
||||
**Expected:** [what user expects]
|
||||
**Actual:** [what actually happens]
|
||||
**Reproduction:** [steps if provided]
|
||||
|
||||
## Verdict: [CONFIRMED_BUG | NOT_A_BUG | ALREADY_FIXED | UNCLEAR]
|
||||
|
||||
## Analysis
|
||||
|
||||
### Evidence
|
||||
[Each piece of evidence with permalink. No permalink = mark [UNVERIFIED]]
|
||||
|
||||
### Root Cause (if CONFIRMED_BUG)
|
||||
[Which file, which function, what goes wrong]
|
||||
- Problematic code: [`{path}#L{N}`](permalink)
|
||||
|
||||
### Why Not A Bug (if NOT_A_BUG)
|
||||
[Rigorous proof with permalinks that current behavior is correct]
|
||||
|
||||
### Fix Details (if ALREADY_FIXED)
|
||||
- **Fixed in commit:** [`{short_sha}`](https://github.com/{REPO}/commit/{full_sha})
|
||||
- **Fixed date:** {date}
|
||||
- **What changed:** [description with diff permalink]
|
||||
- **Fixed by:** {author}
|
||||
|
||||
### Blockers (if UNCLEAR)
|
||||
[What prevents determination, what to investigate next]
|
||||
|
||||
## Severity: [LOW | MEDIUM | HIGH | CRITICAL]
|
||||
|
||||
## Affected Files
|
||||
[List with permalinks]
|
||||
|
||||
## Suggested Fix (if CONFIRMED_BUG)
|
||||
[Specific approach: "In {file}#L{N}, change X to Y because Z"]
|
||||
|
||||
## Recommended Action
|
||||
[What maintainer should do]
|
||||
|
||||
---
|
||||
CRITICAL: Claims without permalinks are worthless. If you cannot find evidence, say so explicitly rather than making unverified claims.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### ISSUE_FEATURE
|
||||
|
||||
```
|
||||
You are analyzing feature request #{number} for {REPO}.
|
||||
|
||||
ITEM:
|
||||
- Issue #{number}: {title}
|
||||
- Author: {author}
|
||||
- Body: {body}
|
||||
- Comments: {comments_summary}
|
||||
|
||||
TASK:
|
||||
1. Understand the request.
|
||||
2. Search codebase for existing (partial/full) implementations.
|
||||
3. Assess feasibility.
|
||||
4. Write report to {REPORT_DIR}/issue-{number}.md
|
||||
|
||||
REPORT FORMAT (write this as the file content):
|
||||
|
||||
# Issue #{number}: {title}
|
||||
**Type:** Feature Request | **Author:** {author} | **Created:** {createdAt}
|
||||
|
||||
## Request Summary
|
||||
[What the user wants]
|
||||
|
||||
## Existing Implementation: [YES_FULLY | YES_PARTIALLY | NO]
|
||||
[If exists: where, with permalinks to the implementation]
|
||||
|
||||
## Feasibility: [EASY | MODERATE | HARD | ARCHITECTURAL_CHANGE]
|
||||
|
||||
## Relevant Files
|
||||
[With permalinks]
|
||||
|
||||
## Implementation Notes
|
||||
[Approach, pitfalls, dependencies]
|
||||
|
||||
## Recommended Action
|
||||
[What maintainer should do]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### ISSUE_OTHER
|
||||
|
||||
```
|
||||
You are analyzing issue #{number} for {REPO}.
|
||||
|
||||
ITEM:
|
||||
- Issue #{number}: {title}
|
||||
- Author: {author}
|
||||
- Body: {body}
|
||||
- Comments: {comments_summary}
|
||||
|
||||
TASK: Assess and write report to {REPORT_DIR}/issue-{number}.md
|
||||
|
||||
REPORT FORMAT (write this as the file content):
|
||||
|
||||
# Issue #{number}: {title}
|
||||
**Type:** [QUESTION | BUG | FEATURE | DISCUSSION | META | STALE]
|
||||
**Author:** {author} | **Created:** {createdAt}
|
||||
|
||||
## Summary
|
||||
[1-2 sentences]
|
||||
|
||||
## Needs Attention: [YES | NO]
|
||||
## Suggested Label: [if any]
|
||||
## Recommended Action: [what maintainer should do]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### PR_BUGFIX
|
||||
|
||||
```
|
||||
You are reviewing PR #{number} for {REPO}.
|
||||
|
||||
ITEM:
|
||||
- PR #{number}: {title}
|
||||
- Author: {author}
|
||||
- Base: {baseRefName} <- Head: {headRefName}
|
||||
- Draft: {isDraft} | Mergeable: {mergeable}
|
||||
- Review: {reviewDecision} | CI: {statusCheckRollup_summary}
|
||||
- Body: {body}
|
||||
|
||||
TASK:
|
||||
1. Fetch PR details (READ-ONLY): gh pr view {number} --repo {REPO} --json files,reviews,comments,statusCheckRollup,reviewDecision
|
||||
2. Read diff: gh api repos/{REPO}/pulls/{number}/files
|
||||
3. Search codebase to verify fix correctness.
|
||||
4. Write report to {REPORT_DIR}/pr-{number}.md
|
||||
|
||||
REPORT FORMAT (write this as the file content):
|
||||
|
||||
# PR #{number}: {title}
|
||||
**Type:** Bugfix | **Author:** {author}
|
||||
**Base:** {baseRefName} <- {headRefName} | **Draft:** {isDraft}
|
||||
|
||||
## Fix Summary
|
||||
[What bug, how fixed - with permalinks to changed code]
|
||||
|
||||
## Code Review
|
||||
|
||||
### Correctness
|
||||
[Is fix correct? Root cause addressed? Evidence with permalinks]
|
||||
|
||||
### Side Effects
|
||||
[Risky changes, breaking changes - with permalinks if any]
|
||||
|
||||
### Code Quality
|
||||
[Style, patterns, test coverage]
|
||||
|
||||
## Merge Readiness
|
||||
|
||||
| Check | Status |
|
||||
|-------|--------|
|
||||
| CI | [PASS / FAIL / PENDING] |
|
||||
| Review | [APPROVED / CHANGES_REQUESTED / PENDING / NONE] |
|
||||
| Mergeable | [YES / NO / CONFLICTED] |
|
||||
| Draft | [YES / NO] |
|
||||
| Correctness | [VERIFIED / CONCERNS / UNCLEAR] |
|
||||
| Risk | [NONE / LOW / MEDIUM / HIGH] |
|
||||
|
||||
## Files Changed
|
||||
[List with brief descriptions]
|
||||
|
||||
## Recommended Action: [MERGE | REQUEST_CHANGES | NEEDS_REVIEW | WAIT]
|
||||
[Reasoning with evidence]
|
||||
|
||||
---
|
||||
NEVER merge. NEVER comment. NEVER review. Write to file ONLY.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### PR_OTHER
|
||||
|
||||
```
|
||||
You are reviewing PR #{number} for {REPO}.
|
||||
|
||||
ITEM:
|
||||
- PR #{number}: {title}
|
||||
- Author: {author}
|
||||
- Base: {baseRefName} <- Head: {headRefName}
|
||||
- Draft: {isDraft} | Mergeable: {mergeable}
|
||||
- Review: {reviewDecision} | CI: {statusCheckRollup_summary}
|
||||
- Body: {body}
|
||||
|
||||
TASK:
|
||||
1. Fetch PR details (READ-ONLY): gh pr view {number} --repo {REPO} --json files,reviews,comments,statusCheckRollup,reviewDecision
|
||||
2. Read diff: gh api repos/{REPO}/pulls/{number}/files
|
||||
3. Write report to {REPORT_DIR}/pr-{number}.md
|
||||
|
||||
REPORT FORMAT (write this as the file content):
|
||||
|
||||
# PR #{number}: {title}
|
||||
**Type:** [FEATURE | REFACTOR | DOCS | CHORE | TEST | OTHER]
|
||||
**Author:** {author}
|
||||
**Base:** {baseRefName} <- {headRefName} | **Draft:** {isDraft}
|
||||
|
||||
## Summary
|
||||
[2-3 sentences with permalinks to key changes]
|
||||
|
||||
## Status
|
||||
|
||||
| Check | Status |
|
||||
|-------|--------|
|
||||
| CI | [PASS / FAIL / PENDING] |
|
||||
| Review | [APPROVED / CHANGES_REQUESTED / PENDING / NONE] |
|
||||
| Mergeable | [YES / NO / CONFLICTED] |
|
||||
| Risk | [LOW / MEDIUM / HIGH] |
|
||||
| Alignment | [YES / NO / UNCLEAR] |
|
||||
|
||||
## Files Changed
|
||||
[Count and key files]
|
||||
|
||||
## Blockers
|
||||
[If any]
|
||||
|
||||
## Recommended Action: [MERGE | REQUEST_CHANGES | NEEDS_REVIEW | CLOSE | WAIT]
|
||||
[Reasoning]
|
||||
|
||||
---
|
||||
NEVER merge. NEVER comment. NEVER review. Write to file ONLY.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Collect & Update
|
||||
|
||||
Poll `background_output()` per task. As each completes:
|
||||
1. Parse report.
|
||||
2. `task_update(id=task_id, status="completed", description=REPORT_SUMMARY)`
|
||||
3. Stream to user immediately.
|
||||
|
||||
---
|
||||
|
||||
## Phase 5: Final Summary
|
||||
|
||||
Write to `{REPORT_DIR}/SUMMARY.md` AND display to user:
|
||||
|
||||
```markdown
|
||||
# GitHub Triage Report - {REPO}
|
||||
|
||||
**Date:** {date} | **Commit:** {COMMIT_SHA}
|
||||
**Items Processed:** {total}
|
||||
**Report Directory:** {REPORT_DIR}
|
||||
|
||||
## Issues ({issue_count})
|
||||
| Category | Count |
|
||||
|----------|-------|
|
||||
| Bug Confirmed | {n} |
|
||||
| Bug Already Fixed | {n} |
|
||||
| Not A Bug | {n} |
|
||||
| Needs Investigation | {n} |
|
||||
| Question Analyzed | {n} |
|
||||
| Feature Assessed | {n} |
|
||||
| Other | {n} |
|
||||
|
||||
## PRs ({pr_count})
|
||||
| Category | Count |
|
||||
|----------|-------|
|
||||
| Bugfix Reviewed | {n} |
|
||||
| Other PR Reviewed | {n} |
|
||||
|
||||
## Items Requiring Attention
|
||||
[Each item: number, title, verdict, 1-line summary, link to report file]
|
||||
|
||||
## Report Files
|
||||
[All generated files with paths]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
| Violation | Severity |
|
||||
|-----------|----------|
|
||||
| ANY GitHub mutation (comment/close/merge/review/label/edit) | **CRITICAL** |
|
||||
| Claim without permalink | **CRITICAL** |
|
||||
| Using category other than `quick` | CRITICAL |
|
||||
| Batching multiple items into one task | CRITICAL |
|
||||
| `run_in_background=false` | CRITICAL |
|
||||
| `git checkout` on PR branch | CRITICAL |
|
||||
| Guessing without codebase evidence | HIGH |
|
||||
| Not writing report to `{REPORT_DIR}` | HIGH |
|
||||
| Using branch name instead of commit SHA in permalink | HIGH |
|
||||
@@ -1,398 +0,0 @@
|
||||
#!/usr/bin/env -S uv run --script
|
||||
# /// script
|
||||
# requires-python = ">=3.11"
|
||||
# dependencies = [
|
||||
# "typer>=0.12.0",
|
||||
# "rich>=13.0.0",
|
||||
# ]
|
||||
# ///
|
||||
"""
|
||||
GitHub Issues/PRs Fetcher with Exhaustive Pagination.
|
||||
|
||||
Fetches ALL issues and/or PRs from a GitHub repository using gh CLI.
|
||||
Implements proper pagination to ensure no items are missed.
|
||||
|
||||
Usage:
|
||||
./gh_fetch.py issues # Fetch all issues
|
||||
./gh_fetch.py prs # Fetch all PRs
|
||||
./gh_fetch.py all # Fetch both issues and PRs
|
||||
./gh_fetch.py issues --hours 48 # Issues from last 48 hours
|
||||
./gh_fetch.py prs --state open # Only open PRs
|
||||
./gh_fetch.py all --repo owner/repo # Specify repository
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
from datetime import UTC, datetime, timedelta
|
||||
from enum import Enum
|
||||
from typing import Annotated
|
||||
|
||||
import typer
|
||||
from rich.console import Console
|
||||
from rich.panel import Panel
|
||||
from rich.progress import Progress, TaskID
|
||||
from rich.table import Table
|
||||
|
||||
app = typer.Typer(
|
||||
name="gh_fetch",
|
||||
help="Fetch GitHub issues/PRs with exhaustive pagination.",
|
||||
no_args_is_help=True,
|
||||
)
|
||||
console = Console()
|
||||
|
||||
BATCH_SIZE = 500 # Maximum allowed by GitHub API
|
||||
|
||||
|
||||
class ItemState(str, Enum):
|
||||
ALL = "all"
|
||||
OPEN = "open"
|
||||
CLOSED = "closed"
|
||||
|
||||
|
||||
class OutputFormat(str, Enum):
|
||||
JSON = "json"
|
||||
TABLE = "table"
|
||||
COUNT = "count"
|
||||
|
||||
|
||||
async def run_gh_command(args: list[str]) -> tuple[str, str, int]:
|
||||
"""Run gh CLI command asynchronously."""
|
||||
proc = await asyncio.create_subprocess_exec(
|
||||
"gh",
|
||||
*args,
|
||||
stdout=asyncio.subprocess.PIPE,
|
||||
stderr=asyncio.subprocess.PIPE,
|
||||
)
|
||||
stdout, stderr = await proc.communicate()
|
||||
return stdout.decode(), stderr.decode(), proc.returncode or 0
|
||||
|
||||
|
||||
async def get_current_repo() -> str:
|
||||
"""Get the current repository from gh CLI."""
|
||||
stdout, stderr, code = await run_gh_command(
|
||||
["repo", "view", "--json", "nameWithOwner", "-q", ".nameWithOwner"]
|
||||
)
|
||||
if code != 0:
|
||||
console.print(f"[red]Error getting current repo: {stderr}[/red]")
|
||||
raise typer.Exit(1)
|
||||
return stdout.strip()
|
||||
|
||||
|
||||
async def fetch_items_page(
|
||||
repo: str,
|
||||
item_type: str, # "issue" or "pr"
|
||||
state: str,
|
||||
limit: int,
|
||||
search_filter: str = "",
|
||||
) -> list[dict]:
|
||||
"""Fetch a single page of issues or PRs."""
|
||||
cmd = [
|
||||
item_type,
|
||||
"list",
|
||||
"--repo",
|
||||
repo,
|
||||
"--state",
|
||||
state,
|
||||
"--limit",
|
||||
str(limit),
|
||||
"--json",
|
||||
"number,title,state,createdAt,updatedAt,labels,author,body",
|
||||
]
|
||||
if search_filter:
|
||||
cmd.extend(["--search", search_filter])
|
||||
|
||||
stdout, stderr, code = await run_gh_command(cmd)
|
||||
if code != 0:
|
||||
console.print(f"[red]Error fetching {item_type}s: {stderr}[/red]")
|
||||
return []
|
||||
|
||||
try:
|
||||
return json.loads(stdout) if stdout.strip() else []
|
||||
except json.JSONDecodeError:
|
||||
console.print(f"[red]Error parsing {item_type} response[/red]")
|
||||
return []
|
||||
|
||||
|
||||
async def fetch_all_items(
|
||||
repo: str,
|
||||
item_type: str,
|
||||
state: str,
|
||||
hours: int | None,
|
||||
progress: Progress,
|
||||
task_id: TaskID,
|
||||
) -> list[dict]:
|
||||
"""Fetch ALL items with exhaustive pagination."""
|
||||
all_items: list[dict] = []
|
||||
page = 1
|
||||
|
||||
progress.update(task_id, description=f"[cyan]Fetching {item_type}s page {page}...")
|
||||
items = await fetch_items_page(repo, item_type, state, BATCH_SIZE)
|
||||
fetched_count = len(items)
|
||||
all_items.extend(items)
|
||||
|
||||
console.print(f"[dim]Page {page}: fetched {fetched_count} {item_type}s[/dim]")
|
||||
|
||||
while fetched_count == BATCH_SIZE:
|
||||
page += 1
|
||||
progress.update(
|
||||
task_id, description=f"[cyan]Fetching {item_type}s page {page}..."
|
||||
)
|
||||
|
||||
last_created = all_items[-1].get("createdAt", "")
|
||||
if not last_created:
|
||||
break
|
||||
|
||||
search_filter = f"created:<{last_created}"
|
||||
items = await fetch_items_page(
|
||||
repo, item_type, state, BATCH_SIZE, search_filter
|
||||
)
|
||||
fetched_count = len(items)
|
||||
|
||||
if fetched_count == 0:
|
||||
break
|
||||
|
||||
existing_numbers = {item["number"] for item in all_items}
|
||||
new_items = [item for item in items if item["number"] not in existing_numbers]
|
||||
all_items.extend(new_items)
|
||||
|
||||
console.print(
|
||||
f"[dim]Page {page}: fetched {fetched_count}, added {len(new_items)} new (total: {len(all_items)})[/dim]"
|
||||
)
|
||||
|
||||
if page > 20:
|
||||
console.print("[yellow]Safety limit reached (20 pages)[/yellow]")
|
||||
break
|
||||
|
||||
if hours is not None:
|
||||
cutoff = datetime.now(UTC) - timedelta(hours=hours)
|
||||
cutoff_str = cutoff.isoformat()
|
||||
|
||||
original_count = len(all_items)
|
||||
all_items = [
|
||||
item
|
||||
for item in all_items
|
||||
if item.get("createdAt", "") >= cutoff_str
|
||||
or item.get("updatedAt", "") >= cutoff_str
|
||||
]
|
||||
filtered_count = original_count - len(all_items)
|
||||
if filtered_count > 0:
|
||||
console.print(
|
||||
f"[dim]Filtered out {filtered_count} items older than {hours} hours[/dim]"
|
||||
)
|
||||
|
||||
return all_items
|
||||
|
||||
|
||||
def display_table(items: list[dict], item_type: str) -> None:
|
||||
"""Display items in a Rich table."""
|
||||
table = Table(title=f"{item_type.upper()}s ({len(items)} total)")
|
||||
table.add_column("#", style="cyan", width=6)
|
||||
table.add_column("Title", style="white", max_width=50)
|
||||
table.add_column("State", style="green", width=8)
|
||||
table.add_column("Author", style="yellow", width=15)
|
||||
table.add_column("Labels", style="magenta", max_width=30)
|
||||
table.add_column("Updated", style="dim", width=12)
|
||||
|
||||
for item in items[:50]:
|
||||
labels = ", ".join(label.get("name", "") for label in item.get("labels", []))
|
||||
updated = item.get("updatedAt", "")[:10]
|
||||
author = item.get("author", {}).get("login", "unknown")
|
||||
|
||||
table.add_row(
|
||||
str(item.get("number", "")),
|
||||
(item.get("title", "")[:47] + "...")
|
||||
if len(item.get("title", "")) > 50
|
||||
else item.get("title", ""),
|
||||
item.get("state", ""),
|
||||
author,
|
||||
(labels[:27] + "...") if len(labels) > 30 else labels,
|
||||
updated,
|
||||
)
|
||||
|
||||
console.print(table)
|
||||
if len(items) > 50:
|
||||
console.print(f"[dim]... and {len(items) - 50} more items[/dim]")
|
||||
|
||||
|
||||
@app.command()
|
||||
def issues(
|
||||
repo: Annotated[
|
||||
str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")
|
||||
] = None,
|
||||
state: Annotated[
|
||||
ItemState, typer.Option("--state", "-s", help="Issue state filter")
|
||||
] = ItemState.ALL,
|
||||
hours: Annotated[
|
||||
int | None,
|
||||
typer.Option(
|
||||
"--hours", "-h", help="Only issues from last N hours (created or updated)"
|
||||
),
|
||||
] = None,
|
||||
output: Annotated[
|
||||
OutputFormat, typer.Option("--output", "-o", help="Output format")
|
||||
] = OutputFormat.TABLE,
|
||||
) -> None:
|
||||
"""Fetch all issues with exhaustive pagination."""
|
||||
|
||||
async def async_main() -> None:
|
||||
target_repo = repo or await get_current_repo()
|
||||
|
||||
console.print(f"""
|
||||
[cyan]Repository:[/cyan] {target_repo}
|
||||
[cyan]State:[/cyan] {state.value}
|
||||
[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
|
||||
""")
|
||||
|
||||
with Progress(console=console) as progress:
|
||||
task: TaskID = progress.add_task("[cyan]Fetching issues...", total=None)
|
||||
items = await fetch_all_items(
|
||||
target_repo, "issue", state.value, hours, progress, task
|
||||
)
|
||||
progress.update(
|
||||
task, description="[green]Complete!", completed=100, total=100
|
||||
)
|
||||
|
||||
console.print(
|
||||
Panel(f"[green]Found {len(items)} issues[/green]", border_style="green")
|
||||
)
|
||||
|
||||
if output == OutputFormat.JSON:
|
||||
console.print(json.dumps(items, indent=2, ensure_ascii=False))
|
||||
elif output == OutputFormat.TABLE:
|
||||
display_table(items, "issue")
|
||||
else:
|
||||
console.print(f"Total issues: {len(items)}")
|
||||
|
||||
asyncio.run(async_main())
|
||||
|
||||
|
||||
@app.command()
|
||||
def prs(
|
||||
repo: Annotated[
|
||||
str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")
|
||||
] = None,
|
||||
state: Annotated[
|
||||
ItemState, typer.Option("--state", "-s", help="PR state filter")
|
||||
] = ItemState.OPEN,
|
||||
hours: Annotated[
|
||||
int | None,
|
||||
typer.Option(
|
||||
"--hours", "-h", help="Only PRs from last N hours (created or updated)"
|
||||
),
|
||||
] = None,
|
||||
output: Annotated[
|
||||
OutputFormat, typer.Option("--output", "-o", help="Output format")
|
||||
] = OutputFormat.TABLE,
|
||||
) -> None:
|
||||
"""Fetch all PRs with exhaustive pagination."""
|
||||
|
||||
async def async_main() -> None:
|
||||
target_repo = repo or await get_current_repo()
|
||||
|
||||
console.print(f"""
|
||||
[cyan]Repository:[/cyan] {target_repo}
|
||||
[cyan]State:[/cyan] {state.value}
|
||||
[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
|
||||
""")
|
||||
|
||||
with Progress(console=console) as progress:
|
||||
task: TaskID = progress.add_task("[cyan]Fetching PRs...", total=None)
|
||||
items = await fetch_all_items(
|
||||
target_repo, "pr", state.value, hours, progress, task
|
||||
)
|
||||
progress.update(
|
||||
task, description="[green]Complete!", completed=100, total=100
|
||||
)
|
||||
|
||||
console.print(
|
||||
Panel(f"[green]Found {len(items)} PRs[/green]", border_style="green")
|
||||
)
|
||||
|
||||
if output == OutputFormat.JSON:
|
||||
console.print(json.dumps(items, indent=2, ensure_ascii=False))
|
||||
elif output == OutputFormat.TABLE:
|
||||
display_table(items, "pr")
|
||||
else:
|
||||
console.print(f"Total PRs: {len(items)}")
|
||||
|
||||
asyncio.run(async_main())
|
||||
|
||||
|
||||
@app.command(name="all")
|
||||
def fetch_all(
|
||||
repo: Annotated[
|
||||
str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")
|
||||
] = None,
|
||||
state: Annotated[
|
||||
ItemState, typer.Option("--state", "-s", help="State filter")
|
||||
] = ItemState.ALL,
|
||||
hours: Annotated[
|
||||
int | None,
|
||||
typer.Option(
|
||||
"--hours", "-h", help="Only items from last N hours (created or updated)"
|
||||
),
|
||||
] = None,
|
||||
output: Annotated[
|
||||
OutputFormat, typer.Option("--output", "-o", help="Output format")
|
||||
] = OutputFormat.TABLE,
|
||||
) -> None:
|
||||
"""Fetch all issues AND PRs with exhaustive pagination."""
|
||||
|
||||
async def async_main() -> None:
|
||||
target_repo = repo or await get_current_repo()
|
||||
|
||||
console.print(f"""
|
||||
[cyan]Repository:[/cyan] {target_repo}
|
||||
[cyan]State:[/cyan] {state.value}
|
||||
[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
|
||||
[cyan]Fetching:[/cyan] Issues AND PRs
|
||||
""")
|
||||
|
||||
with Progress(console=console) as progress:
|
||||
issues_task: TaskID = progress.add_task(
|
||||
"[cyan]Fetching issues...", total=None
|
||||
)
|
||||
prs_task: TaskID = progress.add_task("[cyan]Fetching PRs...", total=None)
|
||||
|
||||
issues_items, prs_items = await asyncio.gather(
|
||||
fetch_all_items(
|
||||
target_repo, "issue", state.value, hours, progress, issues_task
|
||||
),
|
||||
fetch_all_items(
|
||||
target_repo, "pr", state.value, hours, progress, prs_task
|
||||
),
|
||||
)
|
||||
|
||||
progress.update(
|
||||
issues_task,
|
||||
description="[green]Issues complete!",
|
||||
completed=100,
|
||||
total=100,
|
||||
)
|
||||
progress.update(
|
||||
prs_task, description="[green]PRs complete!", completed=100, total=100
|
||||
)
|
||||
|
||||
console.print(
|
||||
Panel(
|
||||
f"[green]Found {len(issues_items)} issues and {len(prs_items)} PRs[/green]",
|
||||
border_style="green",
|
||||
)
|
||||
)
|
||||
|
||||
if output == OutputFormat.JSON:
|
||||
result = {"issues": issues_items, "prs": prs_items}
|
||||
console.print(json.dumps(result, indent=2, ensure_ascii=False))
|
||||
elif output == OutputFormat.TABLE:
|
||||
display_table(issues_items, "issue")
|
||||
console.print("")
|
||||
display_table(prs_items, "pr")
|
||||
else:
|
||||
console.print(f"Total issues: {len(issues_items)}")
|
||||
console.print(f"Total PRs: {len(prs_items)}")
|
||||
|
||||
asyncio.run(async_main())
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
app()
|
||||
@@ -1,407 +0,0 @@
|
||||
---
|
||||
name: pre-publish-review
|
||||
description: "Nuclear-grade 16-agent pre-publish release gate. Runs /get-unpublished-changes to detect all changes since last npm release, spawns up to 10 ultrabrain agents for deep per-change analysis, invokes /review-work (5 agents) for holistic review, and 1 oracle for overall release synthesis. Use before EVERY npm publish. Triggers: 'pre-publish review', 'review before publish', 'release review', 'pre-release review', 'ready to publish?', 'can I publish?', 'pre-publish', 'safe to publish', 'publishing review', 'pre-publish check'."
|
||||
---
|
||||
|
||||
# Pre-Publish Review — 16-Agent Release Gate
|
||||
|
||||
Three-layer review before publishing to npm. Every layer covers a different angle — together they catch what no single reviewer could.
|
||||
|
||||
| Layer | Agents | Type | What They Check |
|
||||
|-------|--------|------|-----------------|
|
||||
| Per-Change Deep Dive | up to 10 | ultrabrain | Each logical change group individually — correctness, edge cases, pattern adherence |
|
||||
| Holistic Review | 5 | review-work | Goal compliance, QA execution, code quality, security, context mining across full changeset |
|
||||
| Release Synthesis | 1 | oracle | Overall release readiness, version bump, breaking changes, deployment risk |
|
||||
|
||||
---
|
||||
|
||||
## Phase 0: Detect Unpublished Changes
|
||||
|
||||
Run `/get-unpublished-changes` FIRST. This is the single source of truth for what changed.
|
||||
|
||||
```
|
||||
skill(name="get-unpublished-changes")
|
||||
```
|
||||
|
||||
This command automatically:
|
||||
- Detects published npm version vs local version
|
||||
- Lists all commits since last release
|
||||
- Reads actual diffs (not just commit messages) to describe REAL changes
|
||||
- Groups changes by type (feat/fix/refactor/docs) with scope
|
||||
- Identifies breaking changes
|
||||
- Recommends version bump (patch/minor/major)
|
||||
|
||||
**Save the full output** — it feeds directly into Phase 1 grouping and all agent prompts.
|
||||
|
||||
Then capture raw data needed by agent prompts:
|
||||
|
||||
```bash
|
||||
# Extract versions (already in /get-unpublished-changes output)
|
||||
PUBLISHED=$(npm view oh-my-opencode version 2>/dev/null || echo "not published")
|
||||
LOCAL=$(node -p "require('./package.json').version" 2>/dev/null || echo "unknown")
|
||||
|
||||
# Raw data for agents (diffs, file lists)
|
||||
COMMITS=$(git log "v${PUBLISHED}"..HEAD --oneline 2>/dev/null || echo "no commits")
|
||||
COMMIT_COUNT=$(echo "$COMMITS" | wc -l | tr -d ' ')
|
||||
DIFF_STAT=$(git diff "v${PUBLISHED}"..HEAD --stat 2>/dev/null || echo "no diff")
|
||||
CHANGED_FILES=$(git diff --name-only "v${PUBLISHED}"..HEAD 2>/dev/null || echo "none")
|
||||
FILE_COUNT=$(echo "$CHANGED_FILES" | wc -l | tr -d ' ')
|
||||
```
|
||||
|
||||
If `PUBLISHED` is "not published", this is a first release — use the full git history instead.
|
||||
---
|
||||
|
||||
## Phase 1: Parse Changes into Groups
|
||||
|
||||
Use the `/get-unpublished-changes` output as the starting point — it already groups by scope and type.
|
||||
|
||||
**Grouping strategy:**
|
||||
1. Start from the `/get-unpublished-changes` analysis which already categorizes by feat/fix/refactor/docs with scope
|
||||
2. Further split by **module/area** — changes touching the same module or feature area belong together
|
||||
3. Target **up to 10 groups**. If fewer than 10 commits, each commit is its own group. If more than 10 logical areas, merge the smallest groups.
|
||||
4. For each group, extract:
|
||||
- **Group name**: Short descriptive label (e.g., "agent-model-resolution", "hook-system-refactor")
|
||||
- **Commits**: List of commit hashes and messages
|
||||
- **Files**: Changed files in this group
|
||||
- **Diff**: The relevant portion of the full diff (`git diff v${PUBLISHED}..HEAD -- {group files}`)
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Spawn All Agents
|
||||
|
||||
Launch ALL agents in a single turn. Every agent uses `run_in_background=true`. No sequential launches.
|
||||
|
||||
### Layer 1: Ultrabrain Per-Change Analysis (up to 10)
|
||||
|
||||
For each change group, spawn one ultrabrain agent. Each gets only its portion of the diff — not the full changeset.
|
||||
|
||||
```
|
||||
task(
|
||||
category="ultrabrain",
|
||||
run_in_background=true,
|
||||
load_skills=[],
|
||||
description="Deep analysis: {GROUP_NAME}",
|
||||
prompt="""
|
||||
<review_type>PER-CHANGE DEEP ANALYSIS</review_type>
|
||||
<change_group>{GROUP_NAME}</change_group>
|
||||
|
||||
<project>oh-my-opencode (npm package)</project>
|
||||
<published_version>{PUBLISHED}</published_version>
|
||||
<target_version>{LOCAL}</target_version>
|
||||
|
||||
<commits>
|
||||
{GROUP_COMMITS — hash and message for each commit in this group}
|
||||
</commits>
|
||||
|
||||
<changed_files>
|
||||
{GROUP_FILES — files changed in this group}
|
||||
</changed_files>
|
||||
|
||||
<diff>
|
||||
{GROUP_DIFF — only the diff for this group's files}
|
||||
</diff>
|
||||
|
||||
<file_contents>
|
||||
{Read and include full content of each changed file in this group}
|
||||
</file_contents>
|
||||
|
||||
You are reviewing a specific subset of changes heading into an npm release. Focus exclusively on THIS change group. Other groups are reviewed by parallel agents.
|
||||
|
||||
ANALYSIS CHECKLIST:
|
||||
|
||||
1. **Intent Clarity**: What is this change trying to do? Is the intent clear from the code and commit messages? If you have to guess, that's a finding.
|
||||
|
||||
2. **Correctness**: Trace through the logic for 3+ scenarios. Does the code actually do what it claims? Off-by-one errors, null handling, async edge cases, resource cleanup.
|
||||
|
||||
3. **Breaking Changes**: Does this change alter any public API, config format, CLI behavior, or hook contract? If yes, is it backward compatible? Would existing users be surprised?
|
||||
|
||||
4. **Pattern Adherence**: Does the new code follow the established patterns visible in the existing file contents? New patterns where old ones exist = finding.
|
||||
|
||||
5. **Edge Cases**: What inputs or conditions would break this? Empty arrays, undefined values, concurrent calls, very large inputs, missing config fields.
|
||||
|
||||
6. **Error Handling**: Are errors properly caught and propagated? No empty catch blocks? No swallowed promises?
|
||||
|
||||
7. **Type Safety**: Any `as any`, `@ts-ignore`, `@ts-expect-error`? Loose typing where strict is possible?
|
||||
|
||||
8. **Test Coverage**: Are the behavioral changes covered by tests? Are the tests meaningful or just coverage padding?
|
||||
|
||||
9. **Side Effects**: Could this change break something in a different module? Check imports and exports — who depends on what changed?
|
||||
|
||||
10. **Release Risk**: On a scale of SAFE / CAUTION / RISKY — how confident are you this change won't cause issues in production?
|
||||
|
||||
OUTPUT FORMAT:
|
||||
<group_name>{GROUP_NAME}</group_name>
|
||||
<verdict>PASS or FAIL</verdict>
|
||||
<risk>SAFE / CAUTION / RISKY</risk>
|
||||
<summary>2-3 sentence assessment of this change group</summary>
|
||||
<has_breaking_changes>YES or NO</has_breaking_changes>
|
||||
<breaking_change_details>If YES, describe what breaks and for whom</breaking_change_details>
|
||||
<findings>
|
||||
For each finding:
|
||||
- [CRITICAL/MAJOR/MINOR] Category: Description
|
||||
- File: path (line range)
|
||||
- Evidence: specific code reference
|
||||
- Suggestion: how to fix
|
||||
</findings>
|
||||
<blocking_issues>Issues that MUST be fixed before publish. Empty if PASS.</blocking_issues>
|
||||
""")
|
||||
```
|
||||
|
||||
### Layer 2: Holistic Review via /review-work (5 agents)
|
||||
|
||||
Spawn a sub-agent that loads the `/review-work` skill. The review-work skill internally launches 5 parallel agents: Oracle (goal verification), unspecified-high (QA execution), Oracle (code quality), Oracle (security), unspecified-high (context mining). All 5 must pass for the review to pass.
|
||||
|
||||
```
|
||||
task(
|
||||
category="unspecified-high",
|
||||
run_in_background=true,
|
||||
load_skills=["review-work"],
|
||||
description="Run /review-work on all unpublished changes",
|
||||
prompt="""
|
||||
Run /review-work on the unpublished changes between v{PUBLISHED} and HEAD.
|
||||
|
||||
GOAL: Review all changes heading into npm publish of oh-my-opencode. These changes span {COMMIT_COUNT} commits across {FILE_COUNT} files.
|
||||
|
||||
CONSTRAINTS:
|
||||
- This is a plugin published to npm — public API stability matters
|
||||
- TypeScript strict mode, Bun runtime
|
||||
- No `as any`, `@ts-ignore`, `@ts-expect-error`
|
||||
- Factory pattern (createXXX) for tools, hooks, agents
|
||||
- kebab-case files, barrel exports, no catch-all files
|
||||
|
||||
BACKGROUND: Pre-publish review of oh-my-opencode, an OpenCode plugin with 1268 TypeScript files, 160k LOC. Changes since v{PUBLISHED} are about to be published.
|
||||
|
||||
The diff base is: git diff v{PUBLISHED}..HEAD
|
||||
|
||||
Follow the /review-work skill flow exactly — launch all 5 review agents and collect results. Do NOT skip any of the 5 agents.
|
||||
""")
|
||||
```
|
||||
|
||||
### Layer 3: Oracle Release Synthesis (1 agent)
|
||||
|
||||
The oracle gets the full picture — all commits, full diff stat, and changed file list. It provides the final release readiness assessment.
|
||||
|
||||
```
|
||||
task(
|
||||
subagent_type="oracle",
|
||||
run_in_background=true,
|
||||
load_skills=[],
|
||||
description="Oracle: overall release synthesis and version bump recommendation",
|
||||
prompt="""
|
||||
<review_type>RELEASE SYNTHESIS — OVERALL ASSESSMENT</review_type>
|
||||
|
||||
<project>oh-my-opencode (npm package)</project>
|
||||
<published_version>{PUBLISHED}</published_version>
|
||||
<local_version>{LOCAL}</local_version>
|
||||
|
||||
<all_commits>
|
||||
{ALL COMMITS since published version — hash, message, author, date}
|
||||
</all_commits>
|
||||
|
||||
<diff_stat>
|
||||
{DIFF_STAT — files changed, insertions, deletions}
|
||||
</diff_stat>
|
||||
|
||||
<changed_files>
|
||||
{CHANGED_FILES — full list of modified file paths}
|
||||
</changed_files>
|
||||
|
||||
<full_diff>
|
||||
{FULL_DIFF — the complete git diff between published version and HEAD}
|
||||
</full_diff>
|
||||
|
||||
<file_contents>
|
||||
{Read and include full content of KEY changed files — focus on public API surfaces, config schemas, agent definitions, hook registrations, tool registrations}
|
||||
</file_contents>
|
||||
|
||||
You are the final gate before an npm publish. 10 ultrabrain agents are reviewing individual changes and 5 review-work agents are doing holistic review. Your job is the bird's-eye view that those focused reviews might miss.
|
||||
|
||||
SYNTHESIS CHECKLIST:
|
||||
|
||||
1. **Release Coherence**: Do these changes tell a coherent story? Or is this a grab-bag of unrelated changes that should be split into multiple releases?
|
||||
|
||||
2. **Version Bump**: Based on semver:
|
||||
- PATCH: Bug fixes only, no behavior changes
|
||||
- MINOR: New features, backward-compatible changes
|
||||
- MAJOR: Breaking changes to public API, config format, or behavior
|
||||
Recommend the correct bump with specific justification.
|
||||
|
||||
3. **Breaking Changes Audit**: Exhaustively list every change that could break existing users. Check:
|
||||
- Config schema changes (new required fields, removed fields, renamed fields)
|
||||
- Agent behavior changes (different prompts, different model routing)
|
||||
- Hook contract changes (new parameters, removed hooks, renamed hooks)
|
||||
- Tool interface changes (new required params, different return types)
|
||||
- CLI changes (new commands, changed flags, different output)
|
||||
- Skill format changes (SKILL.md schema changes)
|
||||
|
||||
4. **Migration Requirements**: If there are breaking changes, what migration steps do users need? Is there auto-migration in place?
|
||||
|
||||
5. **Dependency Changes**: New dependencies added? Dependencies removed? Version bumps? Any supply chain risk?
|
||||
|
||||
6. **Changelog Draft**: Write a draft changelog entry grouped by:
|
||||
- feat: New features
|
||||
- fix: Bug fixes
|
||||
- refactor: Internal changes (no user impact)
|
||||
- breaking: Breaking changes with migration instructions
|
||||
- docs: Documentation changes
|
||||
|
||||
7. **Deployment Risk Assessment**:
|
||||
- SAFE: Routine changes, well-tested, low risk
|
||||
- CAUTION: Significant changes but manageable risk
|
||||
- RISKY: Large surface area changes, insufficient testing, or breaking changes without migration
|
||||
- BLOCK: Critical issues found, do NOT publish
|
||||
|
||||
8. **Post-Publish Monitoring**: What should be monitored after publish? Error rates, specific features, user feedback channels.
|
||||
|
||||
OUTPUT FORMAT:
|
||||
<verdict>SAFE / CAUTION / RISKY / BLOCK</verdict>
|
||||
<recommended_version_bump>PATCH / MINOR / MAJOR</recommended_version_bump>
|
||||
<version_bump_justification>Why this bump level</version_bump_justification>
|
||||
<release_coherence>Assessment of whether changes belong in one release</release_coherence>
|
||||
<breaking_changes>
|
||||
Exhaustive list, or "None" if none.
|
||||
For each:
|
||||
- What changed
|
||||
- Who is affected
|
||||
- Migration steps
|
||||
</breaking_changes>
|
||||
<changelog_draft>
|
||||
Ready-to-use changelog entry
|
||||
</changelog_draft>
|
||||
<deployment_risk>
|
||||
Overall risk assessment with specific concerns
|
||||
</deployment_risk>
|
||||
<monitoring_recommendations>
|
||||
What to watch after publish
|
||||
</monitoring_recommendations>
|
||||
<blocking_issues>Issues that MUST be fixed before publish. Empty if SAFE.</blocking_issues>
|
||||
""")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Collect Results
|
||||
|
||||
As agents complete (system notifications), collect via `background_output(task_id="...")`.
|
||||
|
||||
Track completion in a table:
|
||||
|
||||
| # | Agent | Type | Status | Verdict |
|
||||
|---|-------|------|--------|---------|
|
||||
| 1-10 | Ultrabrain: {group_name} | ultrabrain | pending | — |
|
||||
| 11 | Review-Work Coordinator | unspecified-high | pending | — |
|
||||
| 12 | Release Synthesis Oracle | oracle | pending | — |
|
||||
|
||||
Do NOT deliver the final report until ALL agents have completed.
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Final Verdict
|
||||
|
||||
<verdict_logic>
|
||||
|
||||
**BLOCK** if:
|
||||
- Oracle verdict is BLOCK
|
||||
- Any ultrabrain found CRITICAL blocking issues
|
||||
- Review-work failed on any MAIN agent
|
||||
|
||||
**RISKY** if:
|
||||
- Oracle verdict is RISKY
|
||||
- Multiple ultrabrains returned CAUTION or FAIL
|
||||
- Review-work passed but with significant findings
|
||||
|
||||
**CAUTION** if:
|
||||
- Oracle verdict is CAUTION
|
||||
- A few ultrabrains flagged minor issues
|
||||
- Review-work passed cleanly
|
||||
|
||||
**SAFE** if:
|
||||
- Oracle verdict is SAFE
|
||||
- All ultrabrains passed
|
||||
- Review-work passed
|
||||
|
||||
</verdict_logic>
|
||||
|
||||
Compile the final report:
|
||||
|
||||
```markdown
|
||||
# Pre-Publish Review — oh-my-opencode
|
||||
|
||||
## Release: v{PUBLISHED} -> v{LOCAL}
|
||||
**Commits:** {COMMIT_COUNT} | **Files Changed:** {FILE_COUNT} | **Agents:** {AGENT_COUNT}
|
||||
|
||||
---
|
||||
|
||||
## Overall Verdict: SAFE / CAUTION / RISKY / BLOCK
|
||||
|
||||
## Recommended Version Bump: PATCH / MINOR / MAJOR
|
||||
{Justification from Oracle}
|
||||
|
||||
---
|
||||
|
||||
## Per-Change Analysis (Ultrabrains)
|
||||
|
||||
| # | Change Group | Verdict | Risk | Breaking? | Blocking Issues |
|
||||
|---|-------------|---------|------|-----------|-----------------|
|
||||
| 1 | {name} | PASS/FAIL | SAFE/CAUTION/RISKY | YES/NO | {count or "none"} |
|
||||
| ... | ... | ... | ... | ... | ... |
|
||||
|
||||
### Blocking Issues from Per-Change Analysis
|
||||
{Aggregated from all ultrabrains — deduplicated}
|
||||
|
||||
---
|
||||
|
||||
## Holistic Review (Review-Work)
|
||||
|
||||
| # | Review Area | Verdict | Confidence |
|
||||
|---|------------|---------|------------|
|
||||
| 1 | Goal & Constraint Verification | PASS/FAIL | HIGH/MED/LOW |
|
||||
| 2 | QA Execution | PASS/FAIL | HIGH/MED/LOW |
|
||||
| 3 | Code Quality | PASS/FAIL | HIGH/MED/LOW |
|
||||
| 4 | Security | PASS/FAIL | Severity |
|
||||
| 5 | Context Mining | PASS/FAIL | HIGH/MED/LOW |
|
||||
|
||||
### Blocking Issues from Holistic Review
|
||||
{Aggregated from review-work}
|
||||
|
||||
---
|
||||
|
||||
## Release Synthesis (Oracle)
|
||||
|
||||
### Breaking Changes
|
||||
{From Oracle — exhaustive list or "None"}
|
||||
|
||||
### Changelog Draft
|
||||
{From Oracle — ready to use}
|
||||
|
||||
### Deployment Risk
|
||||
{From Oracle — specific concerns}
|
||||
|
||||
### Post-Publish Monitoring
|
||||
{From Oracle — what to watch}
|
||||
|
||||
---
|
||||
|
||||
## All Blocking Issues (Prioritized)
|
||||
{Deduplicated, merged from all three layers, ordered by severity}
|
||||
|
||||
## Recommendations
|
||||
{If BLOCK/RISKY: exactly what to fix, in priority order}
|
||||
{If CAUTION: suggestions worth considering before publish}
|
||||
{If SAFE: non-blocking improvements for future}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
| Violation | Severity |
|
||||
|-----------|----------|
|
||||
| Publishing without waiting for all agents | **CRITICAL** |
|
||||
| Spawning ultrabrains sequentially instead of in parallel | CRITICAL |
|
||||
| Using `run_in_background=false` for any agent | CRITICAL |
|
||||
| Skipping the Oracle synthesis | HIGH |
|
||||
| Not reading file contents for Oracle (it cannot read files) | HIGH |
|
||||
| Grouping all changes into 1-2 ultrabrains instead of distributing | HIGH |
|
||||
| Delivering verdict before all agents complete | HIGH |
|
||||
| Not including diff in ultrabrain prompts | MAJOR |
|
||||
@@ -1,76 +0,0 @@
|
||||
{
|
||||
"skill_name": "work-with-pr",
|
||||
"evals": [
|
||||
{
|
||||
"id": 1,
|
||||
"prompt": "I need to add a `max_background_agents` config option to oh-my-opencode that limits how many background agents can run simultaneously. It should be in the plugin config schema with a default of 5. Add validation and make sure the background manager respects it. Create a PR for this.",
|
||||
"expected_output": "Agent creates worktree, implements config option with schema validation, adds tests, creates PR, iterates through verification gates until merged",
|
||||
"files": [],
|
||||
"assertions": [
|
||||
{"id": "worktree-isolation", "text": "Plan uses git worktree in a sibling directory (not main working directory)"},
|
||||
{"id": "branch-from-dev", "text": "Branch is created from origin/dev (not master/main)"},
|
||||
{"id": "atomic-commits", "text": "Plan specifies multiple atomic commits for multi-file changes"},
|
||||
{"id": "local-validation", "text": "Runs bun run typecheck, bun test, and bun run build before pushing"},
|
||||
{"id": "pr-targets-dev", "text": "PR is created targeting dev branch (not master)"},
|
||||
{"id": "three-gates", "text": "Verification loop includes all 3 gates: CI, review-work, and Cubic"},
|
||||
{"id": "gate-ordering", "text": "Gates are checked in order: CI first, then review-work, then Cubic"},
|
||||
{"id": "cubic-check-method", "text": "Cubic check uses gh api to check cubic-dev-ai[bot] reviews for 'No issues found'"},
|
||||
{"id": "worktree-cleanup", "text": "Plan includes worktree cleanup after merge"},
|
||||
{"id": "real-file-references", "text": "Code changes reference actual files in the codebase (config schema, background manager)"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"prompt": "The atlas hook has a bug where it crashes when boulder.json is missing the worktree_path field. Fix it and land the fix as a PR. Make sure CI passes.",
|
||||
"expected_output": "Agent creates worktree for the fix branch, adds null check and test for missing worktree_path, creates PR, iterates verification loop",
|
||||
"files": [],
|
||||
"assertions": [
|
||||
{"id": "worktree-isolation", "text": "Plan uses git worktree in a sibling directory"},
|
||||
{"id": "minimal-fix", "text": "Fix is minimal — adds null check, doesn't refactor unrelated code"},
|
||||
{"id": "test-added", "text": "Test case added for the missing worktree_path scenario"},
|
||||
{"id": "three-gates", "text": "Verification loop includes all 3 gates: CI, review-work, Cubic"},
|
||||
{"id": "real-atlas-files", "text": "References actual atlas hook files in src/hooks/atlas/"},
|
||||
{"id": "fix-branch-naming", "text": "Branch name follows fix/ prefix convention"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 3,
|
||||
"prompt": "Refactor src/tools/delegate-task/constants.ts to split DEFAULT_CATEGORIES and CATEGORY_MODEL_REQUIREMENTS into separate files. Keep backward compatibility with the barrel export. Make a PR.",
|
||||
"expected_output": "Agent creates worktree, splits file with atomic commits, ensures imports still work via barrel, creates PR, runs through all gates",
|
||||
"files": [],
|
||||
"assertions": [
|
||||
{"id": "worktree-isolation", "text": "Plan uses git worktree in a sibling directory"},
|
||||
{"id": "multiple-atomic-commits", "text": "Uses 2+ commits for the multi-file refactor"},
|
||||
{"id": "barrel-export", "text": "Maintains backward compatibility via barrel re-export in constants.ts or index.ts"},
|
||||
{"id": "three-gates", "text": "Verification loop includes all 3 gates"},
|
||||
{"id": "real-constants-file", "text": "References actual src/tools/delegate-task/constants.ts file and its exports"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"prompt": "implement issue #100 - we need to add a new built-in MCP for arxiv paper search. just the basic search endpoint, nothing fancy. pr it",
|
||||
"expected_output": "Agent creates worktree, implements arxiv MCP following existing MCP patterns (websearch, context7, grep_app), creates PR with proper template, verification loop runs",
|
||||
"files": [],
|
||||
"assertions": [
|
||||
{"id": "worktree-isolation", "text": "Plan uses git worktree in a sibling directory"},
|
||||
{"id": "follows-mcp-pattern", "text": "New MCP follows existing pattern from src/mcp/ (websearch, context7, grep_app)"},
|
||||
{"id": "three-gates", "text": "Verification loop includes all 3 gates"},
|
||||
{"id": "pr-targets-dev", "text": "PR targets dev branch"},
|
||||
{"id": "local-validation", "text": "Runs local checks before pushing"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 5,
|
||||
"prompt": "The comment-checker hook is too aggressive - it's flagging legitimate comments that happen to contain 'Note:' as AI slop. Relax the regex pattern and add test cases for the false positives. Work on a separate branch and make a PR.",
|
||||
"expected_output": "Agent creates worktree, fixes regex, adds specific test cases for false positive scenarios, creates PR, all three gates pass",
|
||||
"files": [],
|
||||
"assertions": [
|
||||
{"id": "worktree-isolation", "text": "Plan uses git worktree in a sibling directory"},
|
||||
{"id": "real-comment-checker-files", "text": "References actual comment-checker hook files in the codebase"},
|
||||
{"id": "regression-tests", "text": "Adds test cases specifically for 'Note:' false positive scenarios"},
|
||||
{"id": "three-gates", "text": "Verification loop includes all 3 gates"},
|
||||
{"id": "minimal-change", "text": "Only modifies regex and adds tests — no unrelated changes"}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,138 +0,0 @@
|
||||
{
|
||||
"skill_name": "work-with-pr",
|
||||
"iteration": 1,
|
||||
"summary": {
|
||||
"with_skill": {
|
||||
"pass_rate": 0.968,
|
||||
"mean_duration_seconds": 340.2,
|
||||
"stddev_duration_seconds": 169.3
|
||||
},
|
||||
"without_skill": {
|
||||
"pass_rate": 0.516,
|
||||
"mean_duration_seconds": 303.0,
|
||||
"stddev_duration_seconds": 77.8
|
||||
},
|
||||
"delta": {
|
||||
"pass_rate": 0.452,
|
||||
"mean_duration_seconds": 37.2,
|
||||
"stddev_duration_seconds": 91.5
|
||||
}
|
||||
},
|
||||
"evals": [
|
||||
{
|
||||
"eval_name": "happy-path-feature-config-option",
|
||||
"with_skill": {
|
||||
"pass_rate": 1.0,
|
||||
"passed": 10,
|
||||
"total": 10,
|
||||
"duration_seconds": 292,
|
||||
"failed_assertions": []
|
||||
},
|
||||
"without_skill": {
|
||||
"pass_rate": 0.4,
|
||||
"passed": 4,
|
||||
"total": 10,
|
||||
"duration_seconds": 365,
|
||||
"failed_assertions": [
|
||||
{"assertion": "Plan uses git worktree in a sibling directory", "reason": "Uses git checkout -b, no worktree isolation"},
|
||||
{"assertion": "Plan specifies multiple atomic commits for multi-file changes", "reason": "Steps listed sequentially but no atomic commit strategy mentioned"},
|
||||
{"assertion": "Verification loop includes all 3 gates: CI, review-work, and Cubic", "reason": "Only mentions CI pipeline in step 6. No review-work or Cubic."},
|
||||
{"assertion": "Gates are checked in order: CI first, then review-work, then Cubic", "reason": "No gate ordering - only CI mentioned"},
|
||||
{"assertion": "Cubic check uses gh api to check cubic-dev-ai[bot] reviews", "reason": "No mention of Cubic at all"},
|
||||
{"assertion": "Plan includes worktree cleanup after merge", "reason": "No worktree used, no cleanup needed"}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"eval_name": "bugfix-atlas-null-check",
|
||||
"with_skill": {
|
||||
"pass_rate": 1.0,
|
||||
"passed": 6,
|
||||
"total": 6,
|
||||
"duration_seconds": 506,
|
||||
"failed_assertions": []
|
||||
},
|
||||
"without_skill": {
|
||||
"pass_rate": 0.667,
|
||||
"passed": 4,
|
||||
"total": 6,
|
||||
"duration_seconds": 325,
|
||||
"failed_assertions": [
|
||||
{"assertion": "Plan uses git worktree in a sibling directory", "reason": "No worktree. Steps go directly to creating branch and modifying files."},
|
||||
{"assertion": "Verification loop includes all 3 gates", "reason": "Only mentions CI pipeline (step 5). No review-work or Cubic."}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"eval_name": "refactor-split-constants",
|
||||
"with_skill": {
|
||||
"pass_rate": 1.0,
|
||||
"passed": 5,
|
||||
"total": 5,
|
||||
"duration_seconds": 181,
|
||||
"failed_assertions": []
|
||||
},
|
||||
"without_skill": {
|
||||
"pass_rate": 0.4,
|
||||
"passed": 2,
|
||||
"total": 5,
|
||||
"duration_seconds": 229,
|
||||
"failed_assertions": [
|
||||
{"assertion": "Plan uses git worktree in a sibling directory", "reason": "git checkout -b only, no worktree"},
|
||||
{"assertion": "Uses 2+ commits for the multi-file refactor", "reason": "Single atomic commit: 'refactor: split delegate-task constants and category model requirements'"},
|
||||
{"assertion": "Verification loop includes all 3 gates", "reason": "Only mentions typecheck/test/build. No review-work or Cubic."}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"eval_name": "new-mcp-arxiv-casual",
|
||||
"with_skill": {
|
||||
"pass_rate": 1.0,
|
||||
"passed": 5,
|
||||
"total": 5,
|
||||
"duration_seconds": 152,
|
||||
"failed_assertions": []
|
||||
},
|
||||
"without_skill": {
|
||||
"pass_rate": 0.6,
|
||||
"passed": 3,
|
||||
"total": 5,
|
||||
"duration_seconds": 197,
|
||||
"failed_assertions": [
|
||||
{"assertion": "Verification loop includes all 3 gates", "reason": "Only mentions bun test/typecheck/build. No review-work or Cubic."}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"eval_name": "regex-fix-false-positive",
|
||||
"with_skill": {
|
||||
"pass_rate": 0.8,
|
||||
"passed": 4,
|
||||
"total": 5,
|
||||
"duration_seconds": 570,
|
||||
"failed_assertions": [
|
||||
{"assertion": "Only modifies regex and adds tests — no unrelated changes", "reason": "Also proposes config schema change (exclude_patterns) and Go binary update — goes beyond minimal fix"}
|
||||
]
|
||||
},
|
||||
"without_skill": {
|
||||
"pass_rate": 0.6,
|
||||
"passed": 3,
|
||||
"total": 5,
|
||||
"duration_seconds": 399,
|
||||
"failed_assertions": [
|
||||
{"assertion": "Plan uses git worktree in a sibling directory", "reason": "git checkout -b, no worktree"},
|
||||
{"assertion": "Verification loop includes all 3 gates", "reason": "Only bun test and typecheck. No review-work or Cubic."}
|
||||
]
|
||||
}
|
||||
}
|
||||
],
|
||||
"analyst_observations": [
|
||||
"Three-gates assertion (CI + review-work + Cubic) is the strongest discriminator: 5/5 with-skill vs 0/5 without-skill. Without the skill, agents never know about Cubic or review-work gates.",
|
||||
"Worktree isolation is nearly as discriminating (5/5 vs 1/5). One without-skill run (eval-4) independently chose worktree, suggesting some agents already know worktree patterns, but the skill makes it consistent.",
|
||||
"The skill's only failure (eval-5 minimal-change) reveals a potential over-engineering tendency: the skill-guided agent proposed config schema changes and Go binary updates for what should have been a minimal regex fix. Consider adding explicit guidance for fix-type tasks to stay minimal.",
|
||||
"Duration tradeoff: with-skill is 12% slower on average (340s vs 303s), driven mainly by eval-2 (bugfix) and eval-5 (regex fix) where the skill's thorough verification planning adds overhead. For eval-1 and eval-3-4, with-skill was actually faster.",
|
||||
"Without-skill duration has lower variance (stddev 78s vs 169s), suggesting the skill introduces more variable execution paths depending on task complexity.",
|
||||
"Non-discriminating assertions: 'References actual files', 'PR targets dev', 'Runs local checks' — these pass regardless of skill. They validate baseline agent competence, not skill value. Consider removing or downweighting in future iterations.",
|
||||
"Atomic commits assertion discriminates moderately (2/2 with-skill tested vs 0/2 without-skill tested). Without the skill, agents default to single commits even for multi-file refactors."
|
||||
]
|
||||
}
|
||||
@@ -1,42 +0,0 @@
|
||||
# Benchmark: work-with-pr (Iteration 1)
|
||||
|
||||
## Summary
|
||||
|
||||
| Metric | With Skill | Without Skill | Delta |
|
||||
|--------|-----------|---------------|-------|
|
||||
| Pass Rate | 96.8% (30/31) | 51.6% (16/31) | +45.2% |
|
||||
| Mean Duration | 340.2s | 303.0s | +37.2s |
|
||||
| Duration Stddev | 169.3s | 77.8s | +91.5s |
|
||||
|
||||
## Per-Eval Breakdown
|
||||
|
||||
| Eval | With Skill | Without Skill | Delta |
|
||||
|------|-----------|---------------|-------|
|
||||
| happy-path-feature-config-option | 100% (10/10) | 40% (4/10) | +60% |
|
||||
| bugfix-atlas-null-check | 100% (6/6) | 67% (4/6) | +33% |
|
||||
| refactor-split-constants | 100% (5/5) | 40% (2/5) | +60% |
|
||||
| new-mcp-arxiv-casual | 100% (5/5) | 60% (3/5) | +40% |
|
||||
| regex-fix-false-positive | 80% (4/5) | 60% (3/5) | +20% |
|
||||
|
||||
## Key Discriminators
|
||||
|
||||
- **three-gates** (CI + review-work + Cubic): 5/5 vs 0/5 — strongest signal
|
||||
- **worktree-isolation**: 5/5 vs 1/5
|
||||
- **atomic-commits**: 2/2 vs 0/2
|
||||
- **cubic-check-method**: 1/1 vs 0/1
|
||||
|
||||
## Non-Discriminating Assertions
|
||||
|
||||
- References actual files: passes in both conditions
|
||||
- PR targets dev: passes in both conditions
|
||||
- Runs local checks before pushing: passes in both conditions
|
||||
|
||||
## Only With-Skill Failure
|
||||
|
||||
- **eval-5 minimal-change**: Skill-guided agent proposed config schema changes and Go binary update for a minimal regex fix. The skill may encourage over-engineering in fix scenarios.
|
||||
|
||||
## Analyst Notes
|
||||
|
||||
- The skill adds most value for procedural knowledge (verification gates, worktree workflow) that agents cannot infer from codebase alone.
|
||||
- Duration cost is modest (+12%) and acceptable given the +45% pass rate improvement.
|
||||
- Consider adding explicit "fix-type tasks: stay minimal" guidance in iteration 2.
|
||||
@@ -1,57 +0,0 @@
|
||||
{
|
||||
"eval_id": 1,
|
||||
"eval_name": "happy-path-feature-config-option",
|
||||
"prompt": "I need to add a `max_background_agents` config option to oh-my-opencode that limits how many background agents can run simultaneously. It should be in the plugin config schema with a default of 5. Add validation and make sure the background manager respects it. Create a PR for this.",
|
||||
"assertions": [
|
||||
{
|
||||
"id": "worktree-isolation",
|
||||
"text": "Plan uses git worktree in a sibling directory (not main working directory)",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "branch-from-dev",
|
||||
"text": "Branch is created from origin/dev (not master/main)",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "atomic-commits",
|
||||
"text": "Plan specifies multiple atomic commits for multi-file changes",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "local-validation",
|
||||
"text": "Runs bun run typecheck, bun test, and bun run build before pushing",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "pr-targets-dev",
|
||||
"text": "PR is created targeting dev branch (not master)",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "three-gates",
|
||||
"text": "Verification loop includes all 3 gates: CI, review-work, and Cubic",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "gate-ordering",
|
||||
"text": "Gates are checked in order: CI first, then review-work, then Cubic",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "cubic-check-method",
|
||||
"text": "Cubic check uses gh api to check cubic-dev-ai[bot] reviews for 'No issues found'",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "worktree-cleanup",
|
||||
"text": "Plan includes worktree cleanup after merge",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "real-file-references",
|
||||
"text": "Code changes reference actual files in the codebase (config schema, background manager)",
|
||||
"type": "manual"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,15 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-1-with_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": true, "evidence": "Uses ../omo-wt/feat-max-background-agents"},
|
||||
{"text": "Branch is created from origin/dev", "passed": true, "evidence": "git checkout dev && git pull origin dev, then branch"},
|
||||
{"text": "Plan specifies multiple atomic commits for multi-file changes", "passed": true, "evidence": "2 commits: schema+tests, then concurrency+manager"},
|
||||
{"text": "Runs bun run typecheck, bun test, and bun run build before pushing", "passed": true, "evidence": "Explicit pre-push section with all 3 commands"},
|
||||
{"text": "PR is created targeting dev branch", "passed": true, "evidence": "--base dev in gh pr create"},
|
||||
{"text": "Verification loop includes all 3 gates: CI, review-work, and Cubic", "passed": true, "evidence": "Gate A (CI), Gate B (review-work 5 agents), Gate C (Cubic)"},
|
||||
{"text": "Gates are checked in order: CI first, then review-work, then Cubic", "passed": true, "evidence": "Explicit ordering in verify loop pseudocode"},
|
||||
{"text": "Cubic check uses gh api to check cubic-dev-ai[bot] reviews", "passed": true, "evidence": "Mentions cubic-dev-ai[bot] and 'No issues found' signal"},
|
||||
{"text": "Plan includes worktree cleanup after merge", "passed": true, "evidence": "Phase 4: git worktree remove ../omo-wt/feat-max-background-agents"},
|
||||
{"text": "Code changes reference actual files in the codebase", "passed": true, "evidence": "References src/config/schema/background-task.ts, src/features/background-agent/concurrency.ts, manager.ts"}
|
||||
]
|
||||
}
|
||||
@@ -1,454 +0,0 @@
|
||||
# Code Changes: `max_background_agents` Config Option
|
||||
|
||||
## 1. `src/config/schema/background-task.ts` — Add schema field
|
||||
|
||||
```typescript
|
||||
import { z } from "zod"
|
||||
|
||||
export const BackgroundTaskConfigSchema = z.object({
|
||||
defaultConcurrency: z.number().min(1).optional(),
|
||||
providerConcurrency: z.record(z.string(), z.number().min(0)).optional(),
|
||||
modelConcurrency: z.record(z.string(), z.number().min(0)).optional(),
|
||||
maxDepth: z.number().int().min(1).optional(),
|
||||
maxDescendants: z.number().int().min(1).optional(),
|
||||
/** Maximum number of background agents that can run simultaneously across all models/providers (default: 5, minimum: 1) */
|
||||
maxBackgroundAgents: z.number().int().min(1).optional(),
|
||||
/** Stale timeout in milliseconds - interrupt tasks with no activity for this duration (default: 180000 = 3 minutes, minimum: 60000 = 1 minute) */
|
||||
staleTimeoutMs: z.number().min(60000).optional(),
|
||||
/** Timeout for tasks that never received any progress update, falling back to startedAt (default: 1800000 = 30 minutes, minimum: 60000 = 1 minute) */
|
||||
messageStalenessTimeoutMs: z.number().min(60000).optional(),
|
||||
syncPollTimeoutMs: z.number().min(60000).optional(),
|
||||
})
|
||||
|
||||
export type BackgroundTaskConfig = z.infer<typeof BackgroundTaskConfigSchema>
|
||||
```
|
||||
|
||||
**Rationale:** Follows exact same pattern as `maxDepth` and `maxDescendants` — `z.number().int().min(1).optional()`. The field is optional; runtime default of 5 is applied in `ConcurrencyManager`. No barrel export changes needed since `src/config/schema.ts` already does `export * from "./schema/background-task"` and the type is inferred.
|
||||
|
||||
---
|
||||
|
||||
## 2. `src/config/schema/background-task.test.ts` — Add validation tests
|
||||
|
||||
Append after the existing `syncPollTimeoutMs` describe block (before the closing `})`):
|
||||
|
||||
```typescript
|
||||
describe("maxBackgroundAgents", () => {
|
||||
describe("#given valid maxBackgroundAgents (10)", () => {
|
||||
test("#when parsed #then returns correct value", () => {
|
||||
const result = BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 10 })
|
||||
|
||||
expect(result.maxBackgroundAgents).toBe(10)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents of 1 (minimum)", () => {
|
||||
test("#when parsed #then returns correct value", () => {
|
||||
const result = BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 1 })
|
||||
|
||||
expect(result.maxBackgroundAgents).toBe(1)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents below minimum (0)", () => {
|
||||
test("#when parsed #then throws ZodError", () => {
|
||||
let thrownError: unknown
|
||||
|
||||
try {
|
||||
BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 0 })
|
||||
} catch (error) {
|
||||
thrownError = error
|
||||
}
|
||||
|
||||
expect(thrownError).toBeInstanceOf(ZodError)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents not provided", () => {
|
||||
test("#when parsed #then field is undefined", () => {
|
||||
const result = BackgroundTaskConfigSchema.parse({})
|
||||
|
||||
expect(result.maxBackgroundAgents).toBeUndefined()
|
||||
})
|
||||
})
|
||||
|
||||
describe('#given maxBackgroundAgents is non-integer (2.5)', () => {
|
||||
test("#when parsed #then throws ZodError", () => {
|
||||
let thrownError: unknown
|
||||
|
||||
try {
|
||||
BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 2.5 })
|
||||
} catch (error) {
|
||||
thrownError = error
|
||||
}
|
||||
|
||||
expect(thrownError).toBeInstanceOf(ZodError)
|
||||
})
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
**Rationale:** Follows exact test pattern from `maxDepth`, `maxDescendants`, and `syncPollTimeoutMs` tests. Uses `#given`/`#when`/`#then` nested describe style. Tests valid, minimum boundary, below minimum, not provided, and non-integer cases.
|
||||
|
||||
---
|
||||
|
||||
## 3. `src/features/background-agent/concurrency.ts` — Add global agent limit
|
||||
|
||||
```typescript
|
||||
import type { BackgroundTaskConfig } from "../../config/schema"
|
||||
|
||||
const DEFAULT_MAX_BACKGROUND_AGENTS = 5
|
||||
|
||||
/**
|
||||
* Queue entry with settled-flag pattern to prevent double-resolution.
|
||||
*
|
||||
* The settled flag ensures that cancelWaiters() doesn't reject
|
||||
* an entry that was already resolved by release().
|
||||
*/
|
||||
interface QueueEntry {
|
||||
resolve: () => void
|
||||
rawReject: (error: Error) => void
|
||||
settled: boolean
|
||||
}
|
||||
|
||||
export class ConcurrencyManager {
|
||||
private config?: BackgroundTaskConfig
|
||||
private counts: Map<string, number> = new Map()
|
||||
private queues: Map<string, QueueEntry[]> = new Map()
|
||||
private globalRunningCount = 0
|
||||
|
||||
constructor(config?: BackgroundTaskConfig) {
|
||||
this.config = config
|
||||
}
|
||||
|
||||
getMaxBackgroundAgents(): number {
|
||||
return this.config?.maxBackgroundAgents ?? DEFAULT_MAX_BACKGROUND_AGENTS
|
||||
}
|
||||
|
||||
getGlobalRunningCount(): number {
|
||||
return this.globalRunningCount
|
||||
}
|
||||
|
||||
canSpawnGlobally(): boolean {
|
||||
return this.globalRunningCount < this.getMaxBackgroundAgents()
|
||||
}
|
||||
|
||||
acquireGlobal(): void {
|
||||
this.globalRunningCount++
|
||||
}
|
||||
|
||||
releaseGlobal(): void {
|
||||
if (this.globalRunningCount > 0) {
|
||||
this.globalRunningCount--
|
||||
}
|
||||
}
|
||||
|
||||
getConcurrencyLimit(model: string): number {
|
||||
// ... existing implementation unchanged ...
|
||||
}
|
||||
|
||||
async acquire(model: string): Promise<void> {
|
||||
// ... existing implementation unchanged ...
|
||||
}
|
||||
|
||||
release(model: string): void {
|
||||
// ... existing implementation unchanged ...
|
||||
}
|
||||
|
||||
cancelWaiters(model: string): void {
|
||||
// ... existing implementation unchanged ...
|
||||
}
|
||||
|
||||
clear(): void {
|
||||
for (const [model] of this.queues) {
|
||||
this.cancelWaiters(model)
|
||||
}
|
||||
this.counts.clear()
|
||||
this.queues.clear()
|
||||
this.globalRunningCount = 0
|
||||
}
|
||||
|
||||
getCount(model: string): number {
|
||||
return this.counts.get(model) ?? 0
|
||||
}
|
||||
|
||||
getQueueLength(model: string): number {
|
||||
return this.queues.get(model)?.length ?? 0
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Key changes:**
|
||||
- Add `DEFAULT_MAX_BACKGROUND_AGENTS = 5` constant
|
||||
- Add `globalRunningCount` private field
|
||||
- Add `getMaxBackgroundAgents()`, `getGlobalRunningCount()`, `canSpawnGlobally()`, `acquireGlobal()`, `releaseGlobal()` methods
|
||||
- `clear()` resets `globalRunningCount` to 0
|
||||
- All existing per-model methods remain unchanged
|
||||
|
||||
---
|
||||
|
||||
## 4. `src/features/background-agent/concurrency.test.ts` — Add global limit tests
|
||||
|
||||
Append new describe block:
|
||||
|
||||
```typescript
|
||||
describe("ConcurrencyManager global background agent limit", () => {
|
||||
test("should default max background agents to 5 when no config", () => {
|
||||
// given
|
||||
const manager = new ConcurrencyManager()
|
||||
|
||||
// when
|
||||
const max = manager.getMaxBackgroundAgents()
|
||||
|
||||
// then
|
||||
expect(max).toBe(5)
|
||||
})
|
||||
|
||||
test("should use configured maxBackgroundAgents", () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = { maxBackgroundAgents: 10 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
|
||||
// when
|
||||
const max = manager.getMaxBackgroundAgents()
|
||||
|
||||
// then
|
||||
expect(max).toBe(10)
|
||||
})
|
||||
|
||||
test("should allow spawning when under global limit", () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = { maxBackgroundAgents: 2 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
|
||||
// when
|
||||
manager.acquireGlobal()
|
||||
|
||||
// then
|
||||
expect(manager.canSpawnGlobally()).toBe(true)
|
||||
expect(manager.getGlobalRunningCount()).toBe(1)
|
||||
})
|
||||
|
||||
test("should block spawning when at global limit", () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = { maxBackgroundAgents: 2 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
|
||||
// when
|
||||
manager.acquireGlobal()
|
||||
manager.acquireGlobal()
|
||||
|
||||
// then
|
||||
expect(manager.canSpawnGlobally()).toBe(false)
|
||||
expect(manager.getGlobalRunningCount()).toBe(2)
|
||||
})
|
||||
|
||||
test("should allow spawning again after release", () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = { maxBackgroundAgents: 1 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
manager.acquireGlobal()
|
||||
|
||||
// when
|
||||
manager.releaseGlobal()
|
||||
|
||||
// then
|
||||
expect(manager.canSpawnGlobally()).toBe(true)
|
||||
expect(manager.getGlobalRunningCount()).toBe(0)
|
||||
})
|
||||
|
||||
test("should not go below zero on extra release", () => {
|
||||
// given
|
||||
const manager = new ConcurrencyManager()
|
||||
|
||||
// when
|
||||
manager.releaseGlobal()
|
||||
|
||||
// then
|
||||
expect(manager.getGlobalRunningCount()).toBe(0)
|
||||
})
|
||||
|
||||
test("should reset global count on clear", () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = { maxBackgroundAgents: 5 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
manager.acquireGlobal()
|
||||
manager.acquireGlobal()
|
||||
manager.acquireGlobal()
|
||||
|
||||
// when
|
||||
manager.clear()
|
||||
|
||||
// then
|
||||
expect(manager.getGlobalRunningCount()).toBe(0)
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. `src/features/background-agent/manager.ts` — Enforce global limit
|
||||
|
||||
### In `launch()` method — add check before task creation (after `reserveSubagentSpawn`):
|
||||
|
||||
```typescript
|
||||
async launch(input: LaunchInput): Promise<BackgroundTask> {
|
||||
// ... existing logging ...
|
||||
|
||||
if (!input.agent || input.agent.trim() === "") {
|
||||
throw new Error("Agent parameter is required")
|
||||
}
|
||||
|
||||
// Check global background agent limit before spawn guard
|
||||
if (!this.concurrencyManager.canSpawnGlobally()) {
|
||||
const max = this.concurrencyManager.getMaxBackgroundAgents()
|
||||
const current = this.concurrencyManager.getGlobalRunningCount()
|
||||
throw new Error(
|
||||
`Background agent spawn blocked: ${current} agents running, max is ${max}. Wait for existing tasks to complete or increase background_task.maxBackgroundAgents.`
|
||||
)
|
||||
}
|
||||
|
||||
const spawnReservation = await this.reserveSubagentSpawn(input.parentSessionID)
|
||||
|
||||
try {
|
||||
// ... existing code ...
|
||||
|
||||
// After task creation, before queueing:
|
||||
this.concurrencyManager.acquireGlobal()
|
||||
|
||||
// ... rest of existing code ...
|
||||
} catch (error) {
|
||||
spawnReservation.rollback()
|
||||
throw error
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### In `trackTask()` method — add global check:
|
||||
|
||||
```typescript
|
||||
async trackTask(input: { ... }): Promise<BackgroundTask> {
|
||||
const existingTask = this.tasks.get(input.taskId)
|
||||
if (existingTask) {
|
||||
// ... existing re-registration logic unchanged ...
|
||||
return existingTask
|
||||
}
|
||||
|
||||
// Check global limit for new external tasks
|
||||
if (!this.concurrencyManager.canSpawnGlobally()) {
|
||||
const max = this.concurrencyManager.getMaxBackgroundAgents()
|
||||
const current = this.concurrencyManager.getGlobalRunningCount()
|
||||
throw new Error(
|
||||
`Background agent spawn blocked: ${current} agents running, max is ${max}. Wait for existing tasks to complete or increase background_task.maxBackgroundAgents.`
|
||||
)
|
||||
}
|
||||
|
||||
// ... existing task creation ...
|
||||
this.concurrencyManager.acquireGlobal()
|
||||
|
||||
// ... rest unchanged ...
|
||||
}
|
||||
```
|
||||
|
||||
### In `tryCompleteTask()` — release global slot:
|
||||
|
||||
```typescript
|
||||
private async tryCompleteTask(task: BackgroundTask, source: string): Promise<boolean> {
|
||||
if (task.status !== "running") {
|
||||
// ... existing guard ...
|
||||
return false
|
||||
}
|
||||
|
||||
task.status = "completed"
|
||||
task.completedAt = new Date()
|
||||
// ... existing history record ...
|
||||
|
||||
removeTaskToastTracking(task.id)
|
||||
|
||||
// Release per-model concurrency
|
||||
if (task.concurrencyKey) {
|
||||
this.concurrencyManager.release(task.concurrencyKey)
|
||||
task.concurrencyKey = undefined
|
||||
}
|
||||
|
||||
// Release global slot
|
||||
this.concurrencyManager.releaseGlobal()
|
||||
|
||||
// ... rest unchanged ...
|
||||
}
|
||||
```
|
||||
|
||||
### In `cancelTask()` — release global slot:
|
||||
|
||||
```typescript
|
||||
async cancelTask(taskId: string, options?: { ... }): Promise<boolean> {
|
||||
// ... existing code up to concurrency release ...
|
||||
|
||||
if (task.concurrencyKey) {
|
||||
this.concurrencyManager.release(task.concurrencyKey)
|
||||
task.concurrencyKey = undefined
|
||||
}
|
||||
|
||||
// Release global slot (only for running tasks, pending never acquired)
|
||||
if (task.status !== "pending") {
|
||||
this.concurrencyManager.releaseGlobal()
|
||||
}
|
||||
|
||||
// ... rest unchanged ...
|
||||
}
|
||||
```
|
||||
|
||||
### In `handleEvent()` session.error handler — release global slot:
|
||||
|
||||
```typescript
|
||||
if (event.type === "session.error") {
|
||||
// ... existing error handling ...
|
||||
|
||||
task.status = "error"
|
||||
// ...
|
||||
|
||||
if (task.concurrencyKey) {
|
||||
this.concurrencyManager.release(task.concurrencyKey)
|
||||
task.concurrencyKey = undefined
|
||||
}
|
||||
|
||||
// Release global slot
|
||||
this.concurrencyManager.releaseGlobal()
|
||||
|
||||
// ... rest unchanged ...
|
||||
}
|
||||
```
|
||||
|
||||
### In prompt error handler inside `startTask()` — release global slot:
|
||||
|
||||
```typescript
|
||||
promptWithModelSuggestionRetry(this.client, { ... }).catch((error) => {
|
||||
// ... existing error handling ...
|
||||
if (existingTask) {
|
||||
existingTask.status = "interrupt"
|
||||
// ...
|
||||
if (existingTask.concurrencyKey) {
|
||||
this.concurrencyManager.release(existingTask.concurrencyKey)
|
||||
existingTask.concurrencyKey = undefined
|
||||
}
|
||||
|
||||
// Release global slot
|
||||
this.concurrencyManager.releaseGlobal()
|
||||
|
||||
// ... rest unchanged ...
|
||||
}
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Summary of Changes
|
||||
|
||||
| File | Lines Added | Lines Modified |
|
||||
|------|-------------|----------------|
|
||||
| `src/config/schema/background-task.ts` | 2 | 0 |
|
||||
| `src/config/schema/background-task.test.ts` | ~50 | 0 |
|
||||
| `src/features/background-agent/concurrency.ts` | ~25 | 1 (`clear()`) |
|
||||
| `src/features/background-agent/concurrency.test.ts` | ~70 | 0 |
|
||||
| `src/features/background-agent/manager.ts` | ~20 | 0 |
|
||||
|
||||
Total: ~167 lines added, 1 line modified across 5 files.
|
||||
@@ -1,136 +0,0 @@
|
||||
# Execution Plan: `max_background_agents` Config Option
|
||||
|
||||
## Phase 0: Setup — Branch + Worktree
|
||||
|
||||
1. **Create branch** from `dev`:
|
||||
```bash
|
||||
git checkout dev && git pull origin dev
|
||||
git checkout -b feat/max-background-agents
|
||||
```
|
||||
|
||||
2. **Create worktree** in sibling directory:
|
||||
```bash
|
||||
mkdir -p ../omo-wt
|
||||
git worktree add ../omo-wt/feat-max-background-agents feat/max-background-agents
|
||||
```
|
||||
|
||||
3. **All subsequent work** happens in `../omo-wt/feat-max-background-agents/`, never in the main worktree.
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Implement — Atomic Commits
|
||||
|
||||
### Commit 1: Add `max_background_agents` to config schema
|
||||
|
||||
**Files changed:**
|
||||
- `src/config/schema/background-task.ts` — Add `maxBackgroundAgents` field to `BackgroundTaskConfigSchema`
|
||||
- `src/config/schema/background-task.test.ts` — Add validation tests for the new field
|
||||
|
||||
**What:**
|
||||
- Add `maxBackgroundAgents: z.number().int().min(1).optional()` to `BackgroundTaskConfigSchema`
|
||||
- Default value handled at runtime (5), not in schema (all schema fields are optional per convention)
|
||||
- Add given/when/then tests: valid value, below minimum, not provided, non-number
|
||||
|
||||
### Commit 2: Enforce limit in BackgroundManager + ConcurrencyManager
|
||||
|
||||
**Files changed:**
|
||||
- `src/features/background-agent/concurrency.ts` — Add global agent count tracking + `getGlobalRunningCount()` + `canSpawnGlobally()`
|
||||
- `src/features/background-agent/concurrency.test.ts` — Tests for global limit enforcement
|
||||
- `src/features/background-agent/manager.ts` — Check global limit before `launch()` and `trackTask()`
|
||||
|
||||
**What:**
|
||||
- `ConcurrencyManager` already manages per-model concurrency. Add a separate global counter:
|
||||
- `private globalRunningCount: number = 0`
|
||||
- `private maxBackgroundAgents: number` (from config, default 5)
|
||||
- `acquireGlobal()` / `releaseGlobal()` methods
|
||||
- `getGlobalRunningCount()` for observability
|
||||
- `BackgroundManager.launch()` checks `concurrencyManager.canSpawnGlobally()` before creating task
|
||||
- `BackgroundManager.trackTask()` also checks global limit
|
||||
- On task completion/cancellation/error, call `releaseGlobal()`
|
||||
- Throw descriptive error when limit hit: `"Background agent spawn blocked: ${current} agents running, max is ${max}. Wait for existing tasks to complete or increase background_task.maxBackgroundAgents."`
|
||||
|
||||
### Local Validation
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test src/config/schema/background-task.test.ts
|
||||
bun test src/features/background-agent/concurrency.test.ts
|
||||
bun run build
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: PR Creation
|
||||
|
||||
1. **Push branch:**
|
||||
```bash
|
||||
git push -u origin feat/max-background-agents
|
||||
```
|
||||
|
||||
2. **Create PR** targeting `dev`:
|
||||
```bash
|
||||
gh pr create \
|
||||
--base dev \
|
||||
--title "feat: add max_background_agents config to limit concurrent background agents" \
|
||||
--body-file /tmp/pull-request-max-background-agents-$(date +%s).md
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Verify Loop
|
||||
|
||||
### Gate A: CI
|
||||
- Wait for `ci.yml` workflow to complete
|
||||
- Check: `gh pr checks <PR_NUMBER> --watch`
|
||||
- If fails: read logs, fix, push, re-check
|
||||
|
||||
### Gate B: review-work (5 agents)
|
||||
- Run `/review-work` skill which launches 5 parallel background sub-agents:
|
||||
1. Oracle — goal/constraint verification
|
||||
2. Oracle — code quality
|
||||
3. Oracle — security
|
||||
4. Hephaestus — hands-on QA execution
|
||||
5. Hephaestus — context mining from GitHub/git
|
||||
- All 5 must pass. If any fails, fix and re-push.
|
||||
|
||||
### Gate C: Cubic (cubic-dev-ai[bot])
|
||||
- Wait for Cubic bot review on PR
|
||||
- Must say "No issues found"
|
||||
- If issues found: address feedback, push, re-check
|
||||
|
||||
### Loop
|
||||
```
|
||||
while (!allGatesPass) {
|
||||
if (CI fails) → fix → push → continue
|
||||
if (review-work fails) → fix → push → continue
|
||||
if (Cubic has issues) → fix → push → continue
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Merge + Cleanup
|
||||
|
||||
1. **Squash merge:**
|
||||
```bash
|
||||
gh pr merge <PR_NUMBER> --squash --delete-branch
|
||||
```
|
||||
|
||||
2. **Remove worktree:**
|
||||
```bash
|
||||
git worktree remove ../omo-wt/feat-max-background-agents
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File Impact Summary
|
||||
|
||||
| File | Change Type |
|
||||
|------|-------------|
|
||||
| `src/config/schema/background-task.ts` | Modified — add schema field |
|
||||
| `src/config/schema/background-task.test.ts` | Modified — add validation tests |
|
||||
| `src/features/background-agent/concurrency.ts` | Modified — add global limit tracking |
|
||||
| `src/features/background-agent/concurrency.test.ts` | Modified — add global limit tests |
|
||||
| `src/features/background-agent/manager.ts` | Modified — enforce global limit in launch/trackTask |
|
||||
|
||||
5 files changed across 2 atomic commits. No new files created (follows existing patterns).
|
||||
@@ -1,47 +0,0 @@
|
||||
# PR Description
|
||||
|
||||
**Title:** `feat: add max_background_agents config to limit concurrent background agents`
|
||||
|
||||
**Base:** `dev`
|
||||
|
||||
---
|
||||
|
||||
## Summary
|
||||
|
||||
- Add `maxBackgroundAgents` field to `BackgroundTaskConfigSchema` (default: 5, min: 1) to cap total simultaneous background agents across all models/providers
|
||||
- Enforce the global limit in `BackgroundManager.launch()` and `trackTask()` with descriptive error messages when the limit is hit
|
||||
- Release global slots on task completion, cancellation, error, and interrupt to prevent slot leaks
|
||||
|
||||
## Motivation
|
||||
|
||||
The existing concurrency system in `ConcurrencyManager` limits agents **per model/provider** (e.g., 5 concurrent `anthropic/claude-opus-4-6` tasks). However, there is no **global** cap across all models. A user running tasks across multiple providers could spawn an unbounded number of background agents, exhausting system resources.
|
||||
|
||||
`max_background_agents` provides a single knob to limit total concurrent background agents regardless of which model they use.
|
||||
|
||||
## Config Usage
|
||||
|
||||
```jsonc
|
||||
// .opencode/oh-my-opencode.jsonc
|
||||
{
|
||||
"background_task": {
|
||||
"maxBackgroundAgents": 10 // default: 5, min: 1
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Changes
|
||||
|
||||
| File | What |
|
||||
|------|------|
|
||||
| `src/config/schema/background-task.ts` | Add `maxBackgroundAgents` schema field |
|
||||
| `src/config/schema/background-task.test.ts` | Validation tests (valid, boundary, invalid) |
|
||||
| `src/features/background-agent/concurrency.ts` | Global counter + `canSpawnGlobally()` / `acquireGlobal()` / `releaseGlobal()` |
|
||||
| `src/features/background-agent/concurrency.test.ts` | Global limit unit tests |
|
||||
| `src/features/background-agent/manager.ts` | Enforce global limit in `launch()`, `trackTask()`; release in completion/cancel/error paths |
|
||||
|
||||
## Testing
|
||||
|
||||
- `bun test src/config/schema/background-task.test.ts` — schema validation
|
||||
- `bun test src/features/background-agent/concurrency.test.ts` — global limit enforcement
|
||||
- `bun run typecheck` — clean
|
||||
- `bun run build` — clean
|
||||
@@ -1,163 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## Pre-Push Local Validation
|
||||
|
||||
Before every push, run all three checks sequentially:
|
||||
|
||||
```bash
|
||||
bun run typecheck && bun test && bun run build
|
||||
```
|
||||
|
||||
Specific test files to watch:
|
||||
```bash
|
||||
bun test src/config/schema/background-task.test.ts
|
||||
bun test src/features/background-agent/concurrency.test.ts
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Gate A: CI (`ci.yml`)
|
||||
|
||||
### What CI runs
|
||||
1. **Tests (split):** mock-heavy tests run in isolation (separate `bun test` processes), rest in batch
|
||||
2. **Typecheck:** `bun run typecheck` (tsc --noEmit)
|
||||
3. **Build:** `bun run build` (ESM + declarations + schema)
|
||||
4. **Schema auto-commit:** if generated schema changed, CI commits it
|
||||
|
||||
### How to monitor
|
||||
```bash
|
||||
gh pr checks <PR_NUMBER> --watch
|
||||
```
|
||||
|
||||
### Common failure scenarios and fixes
|
||||
|
||||
| Failure | Likely Cause | Fix |
|
||||
|---------|-------------|-----|
|
||||
| Typecheck error | New field not matching existing type imports | Verify `BackgroundTaskConfig` type is auto-inferred from schema, no manual type updates needed |
|
||||
| Test failure | Test assertion wrong or missing import | Fix test, re-push |
|
||||
| Build failure | Import cycle or missing export | Check barrel exports in `src/config/schema.ts` (already re-exports via `export *`) |
|
||||
| Schema auto-commit | Generated JSON schema changed | Pull the auto-commit, rebase if needed |
|
||||
|
||||
### Recovery
|
||||
```bash
|
||||
# Read CI logs
|
||||
gh run view <RUN_ID> --log-failed
|
||||
|
||||
# Fix, commit, push
|
||||
git add -A && git commit -m "fix: address CI failure" && git push
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Gate B: review-work (5 parallel agents)
|
||||
|
||||
### What it checks
|
||||
Run `/review-work` which launches 5 background sub-agents:
|
||||
|
||||
| Agent | Role | What it checks for this PR |
|
||||
|-------|------|---------------------------|
|
||||
| Oracle (goal) | Goal/constraint verification | Does `maxBackgroundAgents` actually limit agents? Is default 5? Is min 1? |
|
||||
| Oracle (quality) | Code quality | Follows existing patterns? No catch-all files? Under 200 LOC? given/when/then tests? |
|
||||
| Oracle (security) | Security review | No injection vectors, no unsafe defaults, proper input validation via Zod |
|
||||
| Hephaestus (QA) | Hands-on QA execution | Actually runs tests, checks typecheck, verifies build |
|
||||
| Hephaestus (context) | Context mining | Checks git history, related issues, ensures no duplicate/conflicting PRs |
|
||||
|
||||
### Pass criteria
|
||||
All 5 agents must pass. Any single failure blocks.
|
||||
|
||||
### Common failure scenarios and fixes
|
||||
|
||||
| Agent | Likely Issue | Fix |
|
||||
|-------|-------------|-----|
|
||||
| Oracle (goal) | Global limit not enforced in all exit paths (completion, cancel, error, interrupt) | Audit every status transition in `manager.ts` that should call `releaseGlobal()` |
|
||||
| Oracle (quality) | Test style not matching given/when/then | Restructure tests with `#given`/`#when`/`#then` describe nesting |
|
||||
| Oracle (quality) | File exceeds 200 LOC | `concurrency.ts` is 137 LOC + ~25 new = ~162 LOC, safe. `manager.ts` is already large but we're adding ~20 lines to existing methods, not creating new responsibility |
|
||||
| Oracle (security) | Integer overflow or negative values | Zod `.int().min(1)` handles this at config parse time |
|
||||
| Hephaestus (QA) | Test actually fails when run | Run tests locally first, fix before push |
|
||||
|
||||
### Recovery
|
||||
```bash
|
||||
# Review agent output
|
||||
background_output(task_id="<review-work-task-id>")
|
||||
|
||||
# Fix identified issues
|
||||
# ... edit files ...
|
||||
git add -A && git commit -m "fix: address review-work feedback" && git push
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Gate C: Cubic (`cubic-dev-ai[bot]`)
|
||||
|
||||
### What it checks
|
||||
Cubic is an automated code review bot that analyzes the PR diff. It must respond with "No issues found" for the gate to pass.
|
||||
|
||||
### Common failure scenarios and fixes
|
||||
|
||||
| Issue | Likely Cause | Fix |
|
||||
|-------|-------------|-----|
|
||||
| "Missing error handling" | `releaseGlobal()` not called in some error path | Add `releaseGlobal()` to the missed path |
|
||||
| "Inconsistent naming" | Field name doesn't match convention | Use `maxBackgroundAgents` (camelCase in schema, `max_background_agents` in JSONC config) |
|
||||
| "Missing documentation" | No JSDoc on new public methods | Add JSDoc comments to `canSpawnGlobally()`, `acquireGlobal()`, `releaseGlobal()`, `getMaxBackgroundAgents()` |
|
||||
| "Test coverage gap" | Missing edge case test | Add the specific test case Cubic identifies |
|
||||
|
||||
### Recovery
|
||||
```bash
|
||||
# Read Cubic's review
|
||||
gh api repos/code-yeongyu/oh-my-openagent/pulls/<PR_NUMBER>/reviews
|
||||
|
||||
# Address each comment
|
||||
# ... edit files ...
|
||||
git add -A && git commit -m "fix: address Cubic review feedback" && git push
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Verification Loop Pseudocode
|
||||
|
||||
```
|
||||
iteration = 0
|
||||
while true:
|
||||
iteration++
|
||||
log("Verification iteration ${iteration}")
|
||||
|
||||
# Gate A: CI (cheapest, check first)
|
||||
push_and_wait_for_ci()
|
||||
if ci_failed:
|
||||
read_ci_logs()
|
||||
fix_and_commit()
|
||||
continue
|
||||
|
||||
# Gate B: review-work (5 agents, more expensive)
|
||||
run_review_work()
|
||||
if any_agent_failed:
|
||||
read_agent_feedback()
|
||||
fix_and_commit()
|
||||
continue
|
||||
|
||||
# Gate C: Cubic (external bot, wait for it)
|
||||
wait_for_cubic_review()
|
||||
if cubic_has_issues:
|
||||
read_cubic_comments()
|
||||
fix_and_commit()
|
||||
continue
|
||||
|
||||
# All gates passed
|
||||
break
|
||||
|
||||
# Merge
|
||||
gh pr merge <PR_NUMBER> --squash --delete-branch
|
||||
```
|
||||
|
||||
No iteration cap. Loop continues until all three gates pass simultaneously in a single iteration.
|
||||
|
||||
---
|
||||
|
||||
## Risk Assessment
|
||||
|
||||
| Risk | Probability | Mitigation |
|
||||
|------|------------|------------|
|
||||
| Slot leak (global count never decremented) | Medium | Audit every exit path: `tryCompleteTask`, `cancelTask`, `handleEvent(session.error)`, `startTask` prompt error, `resume` prompt error |
|
||||
| Race condition on global count | Low | `globalRunningCount` is synchronous (single-threaded JS), no async gap between check and increment in `launch()` |
|
||||
| Breaking existing behavior | Low | Default is 5, same as existing per-model default. Users with <5 total agents see no change |
|
||||
| `manager.ts` exceeding 200 LOC | Already exceeded | File is already ~1500 LOC (exempt due to being a core orchestration class with many methods). Our changes add ~20 lines to existing methods, not a new responsibility |
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 292000, "total_duration_seconds": 292}
|
||||
@@ -1,15 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-1-without_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": false, "evidence": "Uses git checkout -b, no worktree isolation"},
|
||||
{"text": "Branch is created from origin/dev", "passed": true, "evidence": "git checkout -b feat/max-background-agents dev"},
|
||||
{"text": "Plan specifies multiple atomic commits for multi-file changes", "passed": false, "evidence": "Steps listed sequentially but no atomic commit strategy mentioned"},
|
||||
{"text": "Runs bun run typecheck, bun test, and bun run build before pushing", "passed": true, "evidence": "Step 6 runs typecheck and tests, Step 8 implies push after verification"},
|
||||
{"text": "PR is created targeting dev branch", "passed": true, "evidence": "Step 8 mentions creating PR"},
|
||||
{"text": "Verification loop includes all 3 gates: CI, review-work, and Cubic", "passed": false, "evidence": "Only mentions CI pipeline in step 6. No review-work or Cubic."},
|
||||
{"text": "Gates are checked in order: CI first, then review-work, then Cubic", "passed": false, "evidence": "No gate ordering - only CI mentioned"},
|
||||
{"text": "Cubic check uses gh api to check cubic-dev-ai[bot] reviews", "passed": false, "evidence": "No mention of Cubic at all"},
|
||||
{"text": "Plan includes worktree cleanup after merge", "passed": false, "evidence": "No worktree used, no cleanup needed"},
|
||||
{"text": "Code changes reference actual files in the codebase", "passed": true, "evidence": "References actual files with detailed design decisions"}
|
||||
]
|
||||
}
|
||||
@@ -1,615 +0,0 @@
|
||||
# Code Changes: `max_background_agents` Config Option
|
||||
|
||||
## 1. Schema Change
|
||||
|
||||
**File:** `src/config/schema/background-task.ts`
|
||||
|
||||
```typescript
|
||||
import { z } from "zod"
|
||||
|
||||
export const BackgroundTaskConfigSchema = z.object({
|
||||
defaultConcurrency: z.number().min(1).optional(),
|
||||
providerConcurrency: z.record(z.string(), z.number().min(0)).optional(),
|
||||
modelConcurrency: z.record(z.string(), z.number().min(0)).optional(),
|
||||
maxDepth: z.number().int().min(1).optional(),
|
||||
maxDescendants: z.number().int().min(1).optional(),
|
||||
/** Maximum number of background agents that can run simultaneously across all models/providers (default: no global limit, only per-model limits apply) */
|
||||
maxBackgroundAgents: z.number().int().min(1).optional(),
|
||||
/** Stale timeout in milliseconds - interrupt tasks with no activity for this duration (default: 180000 = 3 minutes, minimum: 60000 = 1 minute) */
|
||||
staleTimeoutMs: z.number().min(60000).optional(),
|
||||
/** Timeout for tasks that never received any progress update, falling back to startedAt (default: 1800000 = 30 minutes, minimum: 60000 = 1 minute) */
|
||||
messageStalenessTimeoutMs: z.number().min(60000).optional(),
|
||||
syncPollTimeoutMs: z.number().min(60000).optional(),
|
||||
})
|
||||
|
||||
export type BackgroundTaskConfig = z.infer<typeof BackgroundTaskConfigSchema>
|
||||
```
|
||||
|
||||
**What changed:** Added `maxBackgroundAgents` field after `maxDescendants` (grouped with other limit fields). Uses `z.number().int().min(1).optional()` matching the pattern of `maxDepth` and `maxDescendants`.
|
||||
|
||||
---
|
||||
|
||||
## 2. ConcurrencyManager Changes
|
||||
|
||||
**File:** `src/features/background-agent/concurrency.ts`
|
||||
|
||||
```typescript
|
||||
import type { BackgroundTaskConfig } from "../../config/schema"
|
||||
|
||||
/**
|
||||
* Queue entry with settled-flag pattern to prevent double-resolution.
|
||||
*
|
||||
* The settled flag ensures that cancelWaiters() doesn't reject
|
||||
* an entry that was already resolved by release().
|
||||
*/
|
||||
interface QueueEntry {
|
||||
resolve: () => void
|
||||
rawReject: (error: Error) => void
|
||||
settled: boolean
|
||||
}
|
||||
|
||||
export class ConcurrencyManager {
|
||||
private config?: BackgroundTaskConfig
|
||||
private counts: Map<string, number> = new Map()
|
||||
private queues: Map<string, QueueEntry[]> = new Map()
|
||||
private globalCount = 0
|
||||
private globalQueue: QueueEntry[] = []
|
||||
|
||||
constructor(config?: BackgroundTaskConfig) {
|
||||
this.config = config
|
||||
}
|
||||
|
||||
getGlobalLimit(): number {
|
||||
const limit = this.config?.maxBackgroundAgents
|
||||
if (limit === undefined) {
|
||||
return Infinity
|
||||
}
|
||||
return limit
|
||||
}
|
||||
|
||||
getConcurrencyLimit(model: string): number {
|
||||
const modelLimit = this.config?.modelConcurrency?.[model]
|
||||
if (modelLimit !== undefined) {
|
||||
return modelLimit === 0 ? Infinity : modelLimit
|
||||
}
|
||||
const provider = model.split('/')[0]
|
||||
const providerLimit = this.config?.providerConcurrency?.[provider]
|
||||
if (providerLimit !== undefined) {
|
||||
return providerLimit === 0 ? Infinity : providerLimit
|
||||
}
|
||||
const defaultLimit = this.config?.defaultConcurrency
|
||||
if (defaultLimit !== undefined) {
|
||||
return defaultLimit === 0 ? Infinity : defaultLimit
|
||||
}
|
||||
return 5
|
||||
}
|
||||
|
||||
async acquire(model: string): Promise<void> {
|
||||
const perModelLimit = this.getConcurrencyLimit(model)
|
||||
const globalLimit = this.getGlobalLimit()
|
||||
|
||||
// Fast path: both limits have capacity
|
||||
if (perModelLimit === Infinity && globalLimit === Infinity) {
|
||||
return
|
||||
}
|
||||
|
||||
const currentPerModel = this.counts.get(model) ?? 0
|
||||
|
||||
if (currentPerModel < perModelLimit && this.globalCount < globalLimit) {
|
||||
this.counts.set(model, currentPerModel + 1)
|
||||
this.globalCount++
|
||||
return
|
||||
}
|
||||
|
||||
return new Promise<void>((resolve, reject) => {
|
||||
const entry: QueueEntry = {
|
||||
resolve: () => {
|
||||
if (entry.settled) return
|
||||
entry.settled = true
|
||||
resolve()
|
||||
},
|
||||
rawReject: reject,
|
||||
settled: false,
|
||||
}
|
||||
|
||||
// Queue on whichever limit is blocking
|
||||
if (currentPerModel >= perModelLimit) {
|
||||
const queue = this.queues.get(model) ?? []
|
||||
queue.push(entry)
|
||||
this.queues.set(model, queue)
|
||||
} else {
|
||||
this.globalQueue.push(entry)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
release(model: string): void {
|
||||
const perModelLimit = this.getConcurrencyLimit(model)
|
||||
const globalLimit = this.getGlobalLimit()
|
||||
|
||||
if (perModelLimit === Infinity && globalLimit === Infinity) {
|
||||
return
|
||||
}
|
||||
|
||||
// Try per-model handoff first
|
||||
const queue = this.queues.get(model)
|
||||
while (queue && queue.length > 0) {
|
||||
const next = queue.shift()!
|
||||
if (!next.settled) {
|
||||
// Hand off the slot to this waiter (counts stay the same)
|
||||
next.resolve()
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// No per-model handoff - decrement per-model count
|
||||
const current = this.counts.get(model) ?? 0
|
||||
if (current > 0) {
|
||||
this.counts.set(model, current - 1)
|
||||
}
|
||||
|
||||
// Try global handoff
|
||||
while (this.globalQueue.length > 0) {
|
||||
const next = this.globalQueue.shift()!
|
||||
if (!next.settled) {
|
||||
// Hand off the global slot - but the waiter still needs a per-model slot
|
||||
// Since they were queued on global, their per-model had capacity
|
||||
// Re-acquire per-model count for them
|
||||
const waiterModel = this.findModelForGlobalWaiter()
|
||||
if (waiterModel) {
|
||||
const waiterCount = this.counts.get(waiterModel) ?? 0
|
||||
this.counts.set(waiterModel, waiterCount + 1)
|
||||
}
|
||||
next.resolve()
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// No handoff occurred - decrement global count
|
||||
if (this.globalCount > 0) {
|
||||
this.globalCount--
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Cancel all waiting acquires for a model. Used during cleanup.
|
||||
*/
|
||||
cancelWaiters(model: string): void {
|
||||
const queue = this.queues.get(model)
|
||||
if (queue) {
|
||||
for (const entry of queue) {
|
||||
if (!entry.settled) {
|
||||
entry.settled = true
|
||||
entry.rawReject(new Error(`Concurrency queue cancelled for model: ${model}`))
|
||||
}
|
||||
}
|
||||
this.queues.delete(model)
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear all state. Used during manager cleanup/shutdown.
|
||||
* Cancels all pending waiters.
|
||||
*/
|
||||
clear(): void {
|
||||
for (const [model] of this.queues) {
|
||||
this.cancelWaiters(model)
|
||||
}
|
||||
// Cancel global queue waiters
|
||||
for (const entry of this.globalQueue) {
|
||||
if (!entry.settled) {
|
||||
entry.settled = true
|
||||
entry.rawReject(new Error("Concurrency queue cancelled: manager shutdown"))
|
||||
}
|
||||
}
|
||||
this.globalQueue = []
|
||||
this.globalCount = 0
|
||||
this.counts.clear()
|
||||
this.queues.clear()
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current count for a model (for testing/debugging)
|
||||
*/
|
||||
getCount(model: string): number {
|
||||
return this.counts.get(model) ?? 0
|
||||
}
|
||||
|
||||
/**
|
||||
* Get queue length for a model (for testing/debugging)
|
||||
*/
|
||||
getQueueLength(model: string): number {
|
||||
return this.queues.get(model)?.length ?? 0
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current global count across all models (for testing/debugging)
|
||||
*/
|
||||
getGlobalCount(): number {
|
||||
return this.globalCount
|
||||
}
|
||||
|
||||
/**
|
||||
* Get global queue length (for testing/debugging)
|
||||
*/
|
||||
getGlobalQueueLength(): number {
|
||||
return this.globalQueue.length
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**What changed:**
|
||||
- Added `globalCount` field to track total active agents across all keys
|
||||
- Added `globalQueue` for tasks waiting on the global limit
|
||||
- Added `getGlobalLimit()` method to read `maxBackgroundAgents` from config
|
||||
- Modified `acquire()` to check both per-model AND global limits
|
||||
- Modified `release()` to handle global queue handoff and decrement global count
|
||||
- Modified `clear()` to reset global state
|
||||
- Added `getGlobalCount()` and `getGlobalQueueLength()` for testing
|
||||
|
||||
**Important design note:** The `release()` implementation above is a simplified version. In practice, the global queue handoff is tricky because we need to know which model the global waiter was trying to acquire for. A cleaner approach would be to store the model key in the QueueEntry. Let me refine:
|
||||
|
||||
### Refined approach (simpler, more correct)
|
||||
|
||||
Instead of a separate global queue, a simpler approach is to check the global limit inside `acquire()` and use a single queue per model. When global capacity frees up on `release()`, we try to drain any model's queue:
|
||||
|
||||
```typescript
|
||||
async acquire(model: string): Promise<void> {
|
||||
const perModelLimit = this.getConcurrencyLimit(model)
|
||||
const globalLimit = this.getGlobalLimit()
|
||||
|
||||
if (perModelLimit === Infinity && globalLimit === Infinity) {
|
||||
return
|
||||
}
|
||||
|
||||
const currentPerModel = this.counts.get(model) ?? 0
|
||||
|
||||
if (currentPerModel < perModelLimit && this.globalCount < globalLimit) {
|
||||
this.counts.set(model, currentPerModel + 1)
|
||||
if (globalLimit !== Infinity) {
|
||||
this.globalCount++
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
return new Promise<void>((resolve, reject) => {
|
||||
const queue = this.queues.get(model) ?? []
|
||||
|
||||
const entry: QueueEntry = {
|
||||
resolve: () => {
|
||||
if (entry.settled) return
|
||||
entry.settled = true
|
||||
resolve()
|
||||
},
|
||||
rawReject: reject,
|
||||
settled: false,
|
||||
}
|
||||
|
||||
queue.push(entry)
|
||||
this.queues.set(model, queue)
|
||||
})
|
||||
}
|
||||
|
||||
release(model: string): void {
|
||||
const perModelLimit = this.getConcurrencyLimit(model)
|
||||
const globalLimit = this.getGlobalLimit()
|
||||
|
||||
if (perModelLimit === Infinity && globalLimit === Infinity) {
|
||||
return
|
||||
}
|
||||
|
||||
// Try per-model handoff first (same model queue)
|
||||
const queue = this.queues.get(model)
|
||||
while (queue && queue.length > 0) {
|
||||
const next = queue.shift()!
|
||||
if (!next.settled) {
|
||||
// Hand off the slot to this waiter (per-model and global counts stay the same)
|
||||
next.resolve()
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// No per-model handoff - decrement per-model count
|
||||
const current = this.counts.get(model) ?? 0
|
||||
if (current > 0) {
|
||||
this.counts.set(model, current - 1)
|
||||
}
|
||||
|
||||
// Decrement global count
|
||||
if (globalLimit !== Infinity && this.globalCount > 0) {
|
||||
this.globalCount--
|
||||
}
|
||||
|
||||
// Try to drain any other model's queue that was blocked by global limit
|
||||
if (globalLimit !== Infinity) {
|
||||
this.tryDrainGlobalWaiters()
|
||||
}
|
||||
}
|
||||
|
||||
private tryDrainGlobalWaiters(): void {
|
||||
const globalLimit = this.getGlobalLimit()
|
||||
if (this.globalCount >= globalLimit) return
|
||||
|
||||
for (const [model, queue] of this.queues) {
|
||||
const perModelLimit = this.getConcurrencyLimit(model)
|
||||
const currentPerModel = this.counts.get(model) ?? 0
|
||||
|
||||
if (currentPerModel >= perModelLimit) continue
|
||||
|
||||
while (queue.length > 0 && this.globalCount < globalLimit && currentPerModel < perModelLimit) {
|
||||
const next = queue.shift()!
|
||||
if (!next.settled) {
|
||||
this.counts.set(model, (this.counts.get(model) ?? 0) + 1)
|
||||
this.globalCount++
|
||||
next.resolve()
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This refined approach keeps all waiters in per-model queues (no separate global queue), and on release, tries to drain waiters from any model queue that was blocked by the global limit.
|
||||
|
||||
---
|
||||
|
||||
## 3. Schema Test Changes
|
||||
|
||||
**File:** `src/config/schema/background-task.test.ts`
|
||||
|
||||
Add after the `syncPollTimeoutMs` describe block:
|
||||
|
||||
```typescript
|
||||
describe("maxBackgroundAgents", () => {
|
||||
describe("#given valid maxBackgroundAgents (10)", () => {
|
||||
test("#when parsed #then returns correct value", () => {
|
||||
const result = BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 10 })
|
||||
|
||||
expect(result.maxBackgroundAgents).toBe(10)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents of 1 (minimum)", () => {
|
||||
test("#when parsed #then returns correct value", () => {
|
||||
const result = BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 1 })
|
||||
|
||||
expect(result.maxBackgroundAgents).toBe(1)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents below minimum (0)", () => {
|
||||
test("#when parsed #then throws ZodError", () => {
|
||||
let thrownError: unknown
|
||||
|
||||
try {
|
||||
BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 0 })
|
||||
} catch (error) {
|
||||
thrownError = error
|
||||
}
|
||||
|
||||
expect(thrownError).toBeInstanceOf(ZodError)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents is negative (-1)", () => {
|
||||
test("#when parsed #then throws ZodError", () => {
|
||||
let thrownError: unknown
|
||||
|
||||
try {
|
||||
BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: -1 })
|
||||
} catch (error) {
|
||||
thrownError = error
|
||||
}
|
||||
|
||||
expect(thrownError).toBeInstanceOf(ZodError)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents is non-integer (2.5)", () => {
|
||||
test("#when parsed #then throws ZodError", () => {
|
||||
let thrownError: unknown
|
||||
|
||||
try {
|
||||
BackgroundTaskConfigSchema.parse({ maxBackgroundAgents: 2.5 })
|
||||
} catch (error) {
|
||||
thrownError = error
|
||||
}
|
||||
|
||||
expect(thrownError).toBeInstanceOf(ZodError)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given maxBackgroundAgents not provided", () => {
|
||||
test("#when parsed #then field is undefined", () => {
|
||||
const result = BackgroundTaskConfigSchema.parse({})
|
||||
|
||||
expect(result.maxBackgroundAgents).toBeUndefined()
|
||||
})
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. ConcurrencyManager Test Changes
|
||||
|
||||
**File:** `src/features/background-agent/concurrency.test.ts`
|
||||
|
||||
Add new describe block:
|
||||
|
||||
```typescript
|
||||
describe("ConcurrencyManager.globalLimit (maxBackgroundAgents)", () => {
|
||||
test("should return Infinity when maxBackgroundAgents is not set", () => {
|
||||
// given
|
||||
const manager = new ConcurrencyManager()
|
||||
|
||||
// when
|
||||
const limit = manager.getGlobalLimit()
|
||||
|
||||
// then
|
||||
expect(limit).toBe(Infinity)
|
||||
})
|
||||
|
||||
test("should return configured maxBackgroundAgents", () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = { maxBackgroundAgents: 3 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
|
||||
// when
|
||||
const limit = manager.getGlobalLimit()
|
||||
|
||||
// then
|
||||
expect(limit).toBe(3)
|
||||
})
|
||||
|
||||
test("should enforce global limit across different models", async () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = {
|
||||
maxBackgroundAgents: 2,
|
||||
defaultConcurrency: 5,
|
||||
}
|
||||
const manager = new ConcurrencyManager(config)
|
||||
await manager.acquire("model-a")
|
||||
await manager.acquire("model-b")
|
||||
|
||||
// when
|
||||
let resolved = false
|
||||
const waitPromise = manager.acquire("model-c").then(() => { resolved = true })
|
||||
await Promise.resolve()
|
||||
|
||||
// then - should be blocked by global limit even though per-model has capacity
|
||||
expect(resolved).toBe(false)
|
||||
expect(manager.getGlobalCount()).toBe(2)
|
||||
|
||||
// cleanup
|
||||
manager.release("model-a")
|
||||
await waitPromise
|
||||
expect(resolved).toBe(true)
|
||||
})
|
||||
|
||||
test("should allow tasks when global limit not reached", async () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = {
|
||||
maxBackgroundAgents: 3,
|
||||
defaultConcurrency: 5,
|
||||
}
|
||||
const manager = new ConcurrencyManager(config)
|
||||
|
||||
// when
|
||||
await manager.acquire("model-a")
|
||||
await manager.acquire("model-b")
|
||||
await manager.acquire("model-c")
|
||||
|
||||
// then
|
||||
expect(manager.getGlobalCount()).toBe(3)
|
||||
expect(manager.getCount("model-a")).toBe(1)
|
||||
expect(manager.getCount("model-b")).toBe(1)
|
||||
expect(manager.getCount("model-c")).toBe(1)
|
||||
})
|
||||
|
||||
test("should respect both per-model and global limits", async () => {
|
||||
// given - per-model limit of 1, global limit of 3
|
||||
const config: BackgroundTaskConfig = {
|
||||
maxBackgroundAgents: 3,
|
||||
defaultConcurrency: 1,
|
||||
}
|
||||
const manager = new ConcurrencyManager(config)
|
||||
await manager.acquire("model-a")
|
||||
|
||||
// when - try second acquire on same model
|
||||
let resolved = false
|
||||
const waitPromise = manager.acquire("model-a").then(() => { resolved = true })
|
||||
await Promise.resolve()
|
||||
|
||||
// then - blocked by per-model limit, not global
|
||||
expect(resolved).toBe(false)
|
||||
expect(manager.getGlobalCount()).toBe(1)
|
||||
|
||||
// cleanup
|
||||
manager.release("model-a")
|
||||
await waitPromise
|
||||
})
|
||||
|
||||
test("should release global slot and unblock waiting tasks", async () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = {
|
||||
maxBackgroundAgents: 1,
|
||||
defaultConcurrency: 5,
|
||||
}
|
||||
const manager = new ConcurrencyManager(config)
|
||||
await manager.acquire("model-a")
|
||||
|
||||
// when
|
||||
let resolved = false
|
||||
const waitPromise = manager.acquire("model-b").then(() => { resolved = true })
|
||||
await Promise.resolve()
|
||||
expect(resolved).toBe(false)
|
||||
|
||||
manager.release("model-a")
|
||||
await waitPromise
|
||||
|
||||
// then
|
||||
expect(resolved).toBe(true)
|
||||
expect(manager.getGlobalCount()).toBe(1)
|
||||
expect(manager.getCount("model-a")).toBe(0)
|
||||
expect(manager.getCount("model-b")).toBe(1)
|
||||
})
|
||||
|
||||
test("should not enforce global limit when not configured", async () => {
|
||||
// given - no maxBackgroundAgents set
|
||||
const config: BackgroundTaskConfig = { defaultConcurrency: 5 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
|
||||
// when - acquire many across different models
|
||||
await manager.acquire("model-a")
|
||||
await manager.acquire("model-b")
|
||||
await manager.acquire("model-c")
|
||||
await manager.acquire("model-d")
|
||||
await manager.acquire("model-e")
|
||||
await manager.acquire("model-f")
|
||||
|
||||
// then - all should succeed (no global limit)
|
||||
expect(manager.getCount("model-a")).toBe(1)
|
||||
expect(manager.getCount("model-f")).toBe(1)
|
||||
})
|
||||
|
||||
test("should reset global count on clear", async () => {
|
||||
// given
|
||||
const config: BackgroundTaskConfig = { maxBackgroundAgents: 5 }
|
||||
const manager = new ConcurrencyManager(config)
|
||||
await manager.acquire("model-a")
|
||||
await manager.acquire("model-b")
|
||||
|
||||
// when
|
||||
manager.clear()
|
||||
|
||||
// then
|
||||
expect(manager.getGlobalCount()).toBe(0)
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Config Usage Example
|
||||
|
||||
User's `.opencode/oh-my-opencode.jsonc`:
|
||||
|
||||
```jsonc
|
||||
{
|
||||
"background_task": {
|
||||
// Global limit: max 5 background agents total
|
||||
"maxBackgroundAgents": 5,
|
||||
// Per-model limits still apply independently
|
||||
"defaultConcurrency": 3,
|
||||
"providerConcurrency": {
|
||||
"anthropic": 2
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
With this config:
|
||||
- Max 5 background agents running simultaneously across all models
|
||||
- Max 3 per model (default), max 2 for any Anthropic model
|
||||
- If 2 Anthropic + 3 OpenAI agents are running (5 total), no more can start regardless of per-model capacity
|
||||
@@ -1,99 +0,0 @@
|
||||
# Execution Plan: Add `max_background_agents` Config Option
|
||||
|
||||
## Overview
|
||||
|
||||
Add a `max_background_agents` config option to oh-my-opencode that limits total simultaneous background agents across all models/providers. Currently, concurrency is only limited per-model/provider key (default 5 per key). This new option adds a **global ceiling** on total running background agents.
|
||||
|
||||
## Step-by-Step Plan
|
||||
|
||||
### Step 1: Create feature branch
|
||||
|
||||
```bash
|
||||
git checkout -b feat/max-background-agents dev
|
||||
```
|
||||
|
||||
### Step 2: Add `max_background_agents` to BackgroundTaskConfigSchema
|
||||
|
||||
**File:** `src/config/schema/background-task.ts`
|
||||
|
||||
- Add `maxBackgroundAgents` field to the Zod schema with `z.number().int().min(1).optional()`
|
||||
- This follows the existing pattern of `maxDepth` and `maxDescendants` (integer, min 1, optional)
|
||||
- The field name uses camelCase to match existing schema fields (`defaultConcurrency`, `maxDepth`, `maxDescendants`)
|
||||
- No `.default()` needed since the hardcoded fallback of 5 lives in `ConcurrencyManager`
|
||||
|
||||
### Step 3: Modify `ConcurrencyManager` to enforce global limit
|
||||
|
||||
**File:** `src/features/background-agent/concurrency.ts`
|
||||
|
||||
- Add a `globalCount` field tracking total active agents across all keys
|
||||
- Modify `acquire()` to check global count against `maxBackgroundAgents` before granting a slot
|
||||
- Modify `release()` to decrement global count
|
||||
- Modify `clear()` to reset global count
|
||||
- Add `getGlobalCount()` for testing/debugging (follows existing `getCount()`/`getQueueLength()` pattern)
|
||||
|
||||
The global limit check happens **in addition to** the per-model limit. Both must have capacity for a task to proceed.
|
||||
|
||||
### Step 4: Add tests for the new config schema field
|
||||
|
||||
**File:** `src/config/schema/background-task.test.ts`
|
||||
|
||||
- Add test cases following the existing given/when/then pattern with nested describes
|
||||
- Test valid value, below-minimum value, undefined (not provided), non-number type
|
||||
|
||||
### Step 5: Add tests for ConcurrencyManager global limit
|
||||
|
||||
**File:** `src/features/background-agent/concurrency.test.ts`
|
||||
|
||||
- Test that global limit is enforced across different model keys
|
||||
- Test that tasks queue when global limit reached even if per-model limit has capacity
|
||||
- Test that releasing a slot from one model allows a queued task from another model to proceed
|
||||
- Test default behavior (5) when no config provided
|
||||
- Test interaction between global and per-model limits
|
||||
|
||||
### Step 6: Run typecheck and tests
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test src/config/schema/background-task.test.ts
|
||||
bun test src/features/background-agent/concurrency.test.ts
|
||||
```
|
||||
|
||||
### Step 7: Verify LSP diagnostics clean
|
||||
|
||||
Check `src/config/schema/background-task.ts` and `src/features/background-agent/concurrency.ts` for errors.
|
||||
|
||||
### Step 8: Create PR
|
||||
|
||||
- Push branch to remote
|
||||
- Create PR with structured description via `gh pr create`
|
||||
|
||||
## Files Modified (4 files)
|
||||
|
||||
| File | Change |
|
||||
|------|--------|
|
||||
| `src/config/schema/background-task.ts` | Add `maxBackgroundAgents` field |
|
||||
| `src/features/background-agent/concurrency.ts` | Add global count tracking + enforcement |
|
||||
| `src/config/schema/background-task.test.ts` | Add schema validation tests |
|
||||
| `src/features/background-agent/concurrency.test.ts` | Add global limit enforcement tests |
|
||||
|
||||
## Files NOT Modified (intentional)
|
||||
|
||||
| File | Reason |
|
||||
|------|--------|
|
||||
| `src/config/schema/oh-my-opencode-config.ts` | No change needed - `BackgroundTaskConfigSchema` is already composed into root schema via `background_task` field |
|
||||
| `src/create-managers.ts` | No change needed - `pluginConfig.background_task` already passed to `BackgroundManager` constructor |
|
||||
| `src/features/background-agent/manager.ts` | No change needed - already passes config to `ConcurrencyManager` |
|
||||
| `src/plugin-config.ts` | No change needed - `background_task` is a simple object field, uses default override merge |
|
||||
| `src/config/schema.ts` | No change needed - barrel already exports `BackgroundTaskConfigSchema` |
|
||||
|
||||
## Design Decisions
|
||||
|
||||
1. **Field name `maxBackgroundAgents`** - camelCase to match existing schema fields (`maxDepth`, `maxDescendants`, `defaultConcurrency`). The user-facing JSONC config key is also camelCase per existing convention in `background_task` section.
|
||||
|
||||
2. **Global limit vs per-model limit** - The global limit is a ceiling across ALL concurrency keys. Per-model limits still apply independently. A task needs both a per-model slot AND a global slot to proceed.
|
||||
|
||||
3. **Default of 5** - Matches the existing hardcoded default in `getConcurrencyLimit()`. When `maxBackgroundAgents` is not set, no global limit is enforced (only per-model limits apply), preserving backward compatibility.
|
||||
|
||||
4. **Queue behavior** - When global limit is reached, tasks wait in the same FIFO queue mechanism. The global check happens inside `acquire()` before the per-model check.
|
||||
|
||||
5. **0 means Infinity** - Following the existing pattern where `defaultConcurrency: 0` means unlimited, `maxBackgroundAgents: 0` would also mean no global limit.
|
||||
@@ -1,50 +0,0 @@
|
||||
# PR Description
|
||||
|
||||
**Title:** feat: add `maxBackgroundAgents` config to limit total simultaneous background agents
|
||||
|
||||
**Body:**
|
||||
|
||||
## Summary
|
||||
|
||||
- Add `maxBackgroundAgents` field to `BackgroundTaskConfigSchema` that enforces a global ceiling on total running background agents across all models/providers
|
||||
- Modify `ConcurrencyManager` to track global count and enforce the limit alongside existing per-model limits
|
||||
- Add schema validation tests and concurrency enforcement tests
|
||||
|
||||
## Motivation
|
||||
|
||||
Currently, concurrency is only limited per model/provider key (default 5 per key). On resource-constrained machines or when using many different models, the total number of background agents can grow unbounded (5 per model x N models). This config option lets users set a hard ceiling.
|
||||
|
||||
## Changes
|
||||
|
||||
### Schema (`src/config/schema/background-task.ts`)
|
||||
- Added `maxBackgroundAgents: z.number().int().min(1).optional()` to `BackgroundTaskConfigSchema`
|
||||
- Grouped with existing limit fields (`maxDepth`, `maxDescendants`)
|
||||
|
||||
### ConcurrencyManager (`src/features/background-agent/concurrency.ts`)
|
||||
- Added `globalCount` tracking total active agents across all concurrency keys
|
||||
- Added `getGlobalLimit()` reading `maxBackgroundAgents` from config (defaults to `Infinity` = no global limit)
|
||||
- Modified `acquire()` to check both per-model AND global capacity
|
||||
- Modified `release()` to decrement global count and drain cross-model waiters blocked by global limit
|
||||
- Modified `clear()` to reset global state
|
||||
- Added `getGlobalCount()` / `getGlobalQueueLength()` for testing
|
||||
|
||||
### Tests
|
||||
- `src/config/schema/background-task.test.ts`: 6 test cases for schema validation (valid, min boundary, below min, negative, non-integer, undefined)
|
||||
- `src/features/background-agent/concurrency.test.ts`: 8 test cases for global limit enforcement (cross-model blocking, release unblocking, per-model vs global interaction, no-config default, clear reset)
|
||||
|
||||
## Config Example
|
||||
|
||||
```jsonc
|
||||
{
|
||||
"background_task": {
|
||||
"maxBackgroundAgents": 5,
|
||||
"defaultConcurrency": 3
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Backward Compatibility
|
||||
|
||||
- When `maxBackgroundAgents` is not set (default), no global limit is enforced - behavior is identical to before
|
||||
- Existing `defaultConcurrency`, `providerConcurrency`, and `modelConcurrency` continue to work unchanged
|
||||
- No config migration needed
|
||||
@@ -1,111 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## 1. Static Analysis
|
||||
|
||||
### TypeScript Typecheck
|
||||
```bash
|
||||
bun run typecheck
|
||||
```
|
||||
- Verify no type errors introduced
|
||||
- `BackgroundTaskConfig` type is inferred from Zod schema, so adding the field automatically updates the type
|
||||
- All existing consumers of `BackgroundTaskConfig` remain compatible (new field is optional)
|
||||
|
||||
### LSP Diagnostics
|
||||
Check changed files for errors:
|
||||
- `src/config/schema/background-task.ts`
|
||||
- `src/features/background-agent/concurrency.ts`
|
||||
- `src/config/schema/background-task.test.ts`
|
||||
- `src/features/background-agent/concurrency.test.ts`
|
||||
|
||||
## 2. Unit Tests
|
||||
|
||||
### Schema Validation Tests
|
||||
```bash
|
||||
bun test src/config/schema/background-task.test.ts
|
||||
```
|
||||
|
||||
| Test Case | Input | Expected |
|
||||
|-----------|-------|----------|
|
||||
| Valid value (10) | `{ maxBackgroundAgents: 10 }` | Parses to `10` |
|
||||
| Minimum boundary (1) | `{ maxBackgroundAgents: 1 }` | Parses to `1` |
|
||||
| Below minimum (0) | `{ maxBackgroundAgents: 0 }` | Throws `ZodError` |
|
||||
| Negative (-1) | `{ maxBackgroundAgents: -1 }` | Throws `ZodError` |
|
||||
| Non-integer (2.5) | `{ maxBackgroundAgents: 2.5 }` | Throws `ZodError` |
|
||||
| Not provided | `{}` | Field is `undefined` |
|
||||
|
||||
### ConcurrencyManager Tests
|
||||
```bash
|
||||
bun test src/features/background-agent/concurrency.test.ts
|
||||
```
|
||||
|
||||
| Test Case | Setup | Expected |
|
||||
|-----------|-------|----------|
|
||||
| No config = no global limit | No `maxBackgroundAgents` | `getGlobalLimit()` returns `Infinity` |
|
||||
| Config respected | `maxBackgroundAgents: 3` | `getGlobalLimit()` returns `3` |
|
||||
| Cross-model blocking | Global limit 2, acquire model-a + model-b, try model-c | model-c blocks |
|
||||
| Under-limit allows | Global limit 3, acquire 3 different models | All succeed |
|
||||
| Per-model + global interaction | Per-model 1, global 3, acquire model-a twice | Blocked by per-model, not global |
|
||||
| Release unblocks | Global limit 1, acquire model-a, queue model-b, release model-a | model-b proceeds |
|
||||
| No global limit = no enforcement | No config, acquire 6 different models | All succeed |
|
||||
| Clear resets global count | Acquire 2, clear | `getGlobalCount()` is 0 |
|
||||
|
||||
### Existing Test Regression
|
||||
```bash
|
||||
bun test src/features/background-agent/concurrency.test.ts
|
||||
bun test src/config/schema/background-task.test.ts
|
||||
bun test src/config/schema.test.ts
|
||||
```
|
||||
All existing tests must continue to pass unchanged.
|
||||
|
||||
## 3. Integration Verification
|
||||
|
||||
### Config Loading Path
|
||||
Verify the config flows correctly through the system:
|
||||
|
||||
1. **Schema → Type**: `BackgroundTaskConfig` type auto-includes `maxBackgroundAgents` via `z.infer`
|
||||
2. **Config file → Schema**: `loadConfigFromPath()` in `plugin-config.ts` uses `OhMyOpenCodeConfigSchema.safeParse()` which includes `BackgroundTaskConfigSchema`
|
||||
3. **Config → Manager**: `create-managers.ts` passes `pluginConfig.background_task` to `BackgroundManager` constructor
|
||||
4. **Manager → ConcurrencyManager**: `BackgroundManager` constructor passes config to `new ConcurrencyManager(config)`
|
||||
5. **ConcurrencyManager → Enforcement**: `acquire()` reads `config.maxBackgroundAgents` via `getGlobalLimit()`
|
||||
|
||||
No changes needed in steps 2-4 since the field is optional and the existing plumbing passes the entire `BackgroundTaskConfig` object.
|
||||
|
||||
### Manual Config Test
|
||||
Create a test config to verify parsing:
|
||||
```bash
|
||||
echo '{ "background_task": { "maxBackgroundAgents": 3 } }' | bun -e "
|
||||
const { BackgroundTaskConfigSchema } = require('./src/config/schema/background-task');
|
||||
const result = BackgroundTaskConfigSchema.safeParse(JSON.parse(require('fs').readFileSync('/dev/stdin', 'utf-8')).background_task);
|
||||
console.log(result.success, result.data);
|
||||
"
|
||||
```
|
||||
|
||||
## 4. Build Verification
|
||||
|
||||
```bash
|
||||
bun run build
|
||||
```
|
||||
- Verify build succeeds
|
||||
- Schema JSON output includes the new field (if applicable)
|
||||
|
||||
## 5. Edge Cases to Verify
|
||||
|
||||
| Edge Case | Expected Behavior |
|
||||
|-----------|-------------------|
|
||||
| `maxBackgroundAgents` not set | No global limit enforced (backward compatible) |
|
||||
| `maxBackgroundAgents: 1` | Only 1 background agent at a time across all models |
|
||||
| `maxBackgroundAgents` > sum of all per-model limits | Global limit never triggers (per-model limits are tighter) |
|
||||
| Per-model limit tighter than global | Per-model limit blocks first |
|
||||
| Global limit tighter than per-model | Global limit blocks first |
|
||||
| Release from one model unblocks different model | Global slot freed, different model's waiter proceeds |
|
||||
| Manager shutdown with global waiters | `clear()` rejects all waiters and resets global count |
|
||||
| Concurrent acquire/release | No race conditions (single-threaded JS event loop) |
|
||||
|
||||
## 6. CI Pipeline
|
||||
|
||||
The existing CI workflow (`ci.yml`) will run:
|
||||
- `bun run typecheck` - type checking
|
||||
- `bun test` - all tests including new ones
|
||||
- `bun run build` - build verification
|
||||
|
||||
No CI changes needed.
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 365000, "total_duration_seconds": 365}
|
||||
@@ -1,37 +0,0 @@
|
||||
{
|
||||
"eval_id": 2,
|
||||
"eval_name": "bugfix-atlas-null-check",
|
||||
"prompt": "The atlas hook has a bug where it crashes when boulder.json is missing the worktree_path field. Fix it and land the fix as a PR. Make sure CI passes.",
|
||||
"assertions": [
|
||||
{
|
||||
"id": "worktree-isolation",
|
||||
"text": "Plan uses git worktree in a sibling directory",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "minimal-fix",
|
||||
"text": "Fix is minimal — adds null check, doesn't refactor unrelated code",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "test-added",
|
||||
"text": "Test case added for the missing worktree_path scenario",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "three-gates",
|
||||
"text": "Verification loop includes all 3 gates: CI, review-work, Cubic",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "real-atlas-files",
|
||||
"text": "References actual atlas hook files in src/hooks/atlas/",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "fix-branch-naming",
|
||||
"text": "Branch name follows fix/ prefix convention",
|
||||
"type": "manual"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,11 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-2-with_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": true, "evidence": "../omo-wt/fix-atlas-worktree-path-crash"},
|
||||
{"text": "Fix is minimal — adds null check, doesn't refactor unrelated code", "passed": true, "evidence": "3 targeted changes: readBoulderState sanitization, idle-event guard, tests"},
|
||||
{"text": "Test case added for the missing worktree_path scenario", "passed": true, "evidence": "Tests for missing and null worktree_path"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": true, "evidence": "Gate A (CI), Gate B (review-work), Gate C (Cubic)"},
|
||||
{"text": "References actual atlas hook files", "passed": true, "evidence": "src/hooks/atlas/idle-event.ts, src/features/boulder-state/storage.ts"},
|
||||
{"text": "Branch name follows fix/ prefix convention", "passed": true, "evidence": "fix/atlas-worktree-path-crash"}
|
||||
]
|
||||
}
|
||||
@@ -1,205 +0,0 @@
|
||||
# Code Changes
|
||||
|
||||
## File 1: `src/features/boulder-state/storage.ts`
|
||||
|
||||
**Change**: Add `worktree_path` sanitization in `readBoulderState()`
|
||||
|
||||
```typescript
|
||||
// BEFORE (lines 29-32):
|
||||
if (!Array.isArray(parsed.session_ids)) {
|
||||
parsed.session_ids = []
|
||||
}
|
||||
return parsed as BoulderState
|
||||
|
||||
// AFTER:
|
||||
if (!Array.isArray(parsed.session_ids)) {
|
||||
parsed.session_ids = []
|
||||
}
|
||||
if (parsed.worktree_path !== undefined && typeof parsed.worktree_path !== "string") {
|
||||
parsed.worktree_path = undefined
|
||||
}
|
||||
return parsed as BoulderState
|
||||
```
|
||||
|
||||
**Rationale**: `readBoulderState` casts raw `JSON.parse()` output as `BoulderState` without validating individual fields. When boulder.json has `"worktree_path": null` (valid JSON from manual edits, corrupted state, or external tools), the runtime type is `null` but TypeScript type says `string | undefined`. This sanitization ensures downstream code always gets the correct type.
|
||||
|
||||
---
|
||||
|
||||
## File 2: `src/hooks/atlas/idle-event.ts`
|
||||
|
||||
**Change**: Add defensive string type guard before passing `worktree_path` to continuation functions.
|
||||
|
||||
```typescript
|
||||
// BEFORE (lines 83-88 in scheduleRetry):
|
||||
await injectContinuation({
|
||||
ctx,
|
||||
sessionID,
|
||||
sessionState,
|
||||
options,
|
||||
planName: currentBoulder.plan_name,
|
||||
progress: currentProgress,
|
||||
agent: currentBoulder.agent,
|
||||
worktreePath: currentBoulder.worktree_path,
|
||||
})
|
||||
|
||||
// AFTER:
|
||||
await injectContinuation({
|
||||
ctx,
|
||||
sessionID,
|
||||
sessionState,
|
||||
options,
|
||||
planName: currentBoulder.plan_name,
|
||||
progress: currentProgress,
|
||||
agent: currentBoulder.agent,
|
||||
worktreePath: typeof currentBoulder.worktree_path === "string" ? currentBoulder.worktree_path : undefined,
|
||||
})
|
||||
```
|
||||
|
||||
```typescript
|
||||
// BEFORE (lines 184-188 in handleAtlasSessionIdle):
|
||||
await injectContinuation({
|
||||
ctx,
|
||||
sessionID,
|
||||
sessionState,
|
||||
options,
|
||||
planName: boulderState.plan_name,
|
||||
progress,
|
||||
agent: boulderState.agent,
|
||||
worktreePath: boulderState.worktree_path,
|
||||
})
|
||||
|
||||
// AFTER:
|
||||
await injectContinuation({
|
||||
ctx,
|
||||
sessionID,
|
||||
sessionState,
|
||||
options,
|
||||
planName: boulderState.plan_name,
|
||||
progress,
|
||||
agent: boulderState.agent,
|
||||
worktreePath: typeof boulderState.worktree_path === "string" ? boulderState.worktree_path : undefined,
|
||||
})
|
||||
```
|
||||
|
||||
**Rationale**: Belt-and-suspenders defense. Even though `readBoulderState` now sanitizes, direct `writeBoulderState` calls elsewhere could still produce invalid state. The `typeof` check is zero-cost and prevents any possibility of `null` or non-string values leaking through.
|
||||
|
||||
---
|
||||
|
||||
## File 3: `src/hooks/atlas/index.test.ts`
|
||||
|
||||
**Change**: Add test cases for missing `worktree_path` scenarios within the existing `session.idle handler` describe block.
|
||||
|
||||
```typescript
|
||||
test("should inject continuation when boulder.json has no worktree_path field", async () => {
|
||||
// given - boulder state WITHOUT worktree_path
|
||||
const planPath = join(TEST_DIR, "test-plan.md")
|
||||
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [x] Task 2")
|
||||
|
||||
const state: BoulderState = {
|
||||
active_plan: planPath,
|
||||
started_at: "2026-01-02T10:00:00Z",
|
||||
session_ids: [MAIN_SESSION_ID],
|
||||
plan_name: "test-plan",
|
||||
}
|
||||
writeBoulderState(TEST_DIR, state)
|
||||
|
||||
const readState = readBoulderState(TEST_DIR)
|
||||
expect(readState?.worktree_path).toBeUndefined()
|
||||
|
||||
const mockInput = createMockPluginInput()
|
||||
const hook = createAtlasHook(mockInput)
|
||||
|
||||
// when
|
||||
await hook.handler({
|
||||
event: {
|
||||
type: "session.idle",
|
||||
properties: { sessionID: MAIN_SESSION_ID },
|
||||
},
|
||||
})
|
||||
|
||||
// then - continuation injected, no worktree context in prompt
|
||||
expect(mockInput._promptMock).toHaveBeenCalled()
|
||||
const callArgs = mockInput._promptMock.mock.calls[0][0]
|
||||
expect(callArgs.body.parts[0].text).not.toContain("[Worktree:")
|
||||
expect(callArgs.body.parts[0].text).toContain("1 remaining")
|
||||
})
|
||||
|
||||
test("should handle boulder.json with worktree_path: null without crashing", async () => {
|
||||
// given - manually write boulder.json with worktree_path: null (corrupted state)
|
||||
const planPath = join(TEST_DIR, "test-plan.md")
|
||||
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [x] Task 2")
|
||||
|
||||
const boulderPath = join(SISYPHUS_DIR, "boulder.json")
|
||||
writeFileSync(boulderPath, JSON.stringify({
|
||||
active_plan: planPath,
|
||||
started_at: "2026-01-02T10:00:00Z",
|
||||
session_ids: [MAIN_SESSION_ID],
|
||||
plan_name: "test-plan",
|
||||
worktree_path: null,
|
||||
}, null, 2))
|
||||
|
||||
const mockInput = createMockPluginInput()
|
||||
const hook = createAtlasHook(mockInput)
|
||||
|
||||
// when
|
||||
await hook.handler({
|
||||
event: {
|
||||
type: "session.idle",
|
||||
properties: { sessionID: MAIN_SESSION_ID },
|
||||
},
|
||||
})
|
||||
|
||||
// then - should inject continuation without crash, no "[Worktree: null]"
|
||||
expect(mockInput._promptMock).toHaveBeenCalled()
|
||||
const callArgs = mockInput._promptMock.mock.calls[0][0]
|
||||
expect(callArgs.body.parts[0].text).not.toContain("[Worktree: null]")
|
||||
expect(callArgs.body.parts[0].text).not.toContain("[Worktree: undefined]")
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File 4: `src/features/boulder-state/storage.test.ts` (addition to existing)
|
||||
|
||||
**Change**: Add `readBoulderState` sanitization test.
|
||||
|
||||
```typescript
|
||||
describe("#given boulder.json with worktree_path: null", () => {
|
||||
test("#then readBoulderState should sanitize null to undefined", () => {
|
||||
// given
|
||||
const boulderPath = join(TEST_DIR, ".sisyphus", "boulder.json")
|
||||
writeFileSync(boulderPath, JSON.stringify({
|
||||
active_plan: "/path/to/plan.md",
|
||||
started_at: "2026-01-02T10:00:00Z",
|
||||
session_ids: ["session-1"],
|
||||
plan_name: "test-plan",
|
||||
worktree_path: null,
|
||||
}, null, 2))
|
||||
|
||||
// when
|
||||
const state = readBoulderState(TEST_DIR)
|
||||
|
||||
// then
|
||||
expect(state).not.toBeNull()
|
||||
expect(state!.worktree_path).toBeUndefined()
|
||||
})
|
||||
|
||||
test("#then readBoulderState should preserve valid worktree_path string", () => {
|
||||
// given
|
||||
const boulderPath = join(TEST_DIR, ".sisyphus", "boulder.json")
|
||||
writeFileSync(boulderPath, JSON.stringify({
|
||||
active_plan: "/path/to/plan.md",
|
||||
started_at: "2026-01-02T10:00:00Z",
|
||||
session_ids: ["session-1"],
|
||||
plan_name: "test-plan",
|
||||
worktree_path: "/valid/worktree/path",
|
||||
}, null, 2))
|
||||
|
||||
// when
|
||||
const state = readBoulderState(TEST_DIR)
|
||||
|
||||
// then
|
||||
expect(state?.worktree_path).toBe("/valid/worktree/path")
|
||||
})
|
||||
})
|
||||
```
|
||||
@@ -1,78 +0,0 @@
|
||||
# Execution Plan — Fix atlas hook crash on missing worktree_path
|
||||
|
||||
## Phase 0: Setup
|
||||
|
||||
1. **Create worktree from origin/dev**:
|
||||
```bash
|
||||
git fetch origin dev
|
||||
git worktree add ../omo-wt/fix-atlas-worktree-path-crash origin/dev
|
||||
```
|
||||
2. **Create feature branch**:
|
||||
```bash
|
||||
cd ../omo-wt/fix-atlas-worktree-path-crash
|
||||
git checkout -b fix/atlas-worktree-path-crash
|
||||
```
|
||||
|
||||
## Phase 1: Implement
|
||||
|
||||
### Step 1: Fix `readBoulderState()` in `src/features/boulder-state/storage.ts`
|
||||
- Add `worktree_path` sanitization after JSON parse
|
||||
- Ensure `worktree_path` is `string | undefined`, never `null` or other types
|
||||
- This is the root cause: raw `JSON.parse` + `as BoulderState` cast allows type violations at runtime
|
||||
|
||||
### Step 2: Add defensive guard in `src/hooks/atlas/idle-event.ts`
|
||||
- Before passing `boulderState.worktree_path` to `injectContinuation`, validate it's a string
|
||||
- Apply same guard in the `scheduleRetry` callback (line 86)
|
||||
- Ensures even if `readBoulderState` is bypassed, the idle handler won't crash
|
||||
|
||||
### Step 3: Add test coverage in `src/hooks/atlas/index.test.ts`
|
||||
- Add test: boulder.json without `worktree_path` field → session.idle works
|
||||
- Add test: boulder.json with `worktree_path: null` → session.idle works (no `[Worktree: null]` in prompt)
|
||||
- Add test: `readBoulderState` sanitizes `null` worktree_path to `undefined`
|
||||
- Follow existing given/when/then test pattern
|
||||
|
||||
### Step 4: Local validation
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test src/hooks/atlas/
|
||||
bun test src/features/boulder-state/
|
||||
bun run build
|
||||
```
|
||||
|
||||
### Step 5: Atomic commit
|
||||
```bash
|
||||
git add src/features/boulder-state/storage.ts src/hooks/atlas/idle-event.ts src/hooks/atlas/index.test.ts
|
||||
git commit -m "fix(atlas): prevent crash when boulder.json missing worktree_path field
|
||||
|
||||
readBoulderState() performs unsafe cast of parsed JSON as BoulderState.
|
||||
When worktree_path is absent or null in boulder.json, downstream code
|
||||
in idle-event.ts could receive null where string|undefined is expected.
|
||||
|
||||
- Sanitize worktree_path in readBoulderState (reject non-string values)
|
||||
- Add defensive typeof check in idle-event before passing to continuation
|
||||
- Add test coverage for missing and null worktree_path scenarios"
|
||||
```
|
||||
|
||||
## Phase 2: PR Creation
|
||||
|
||||
```bash
|
||||
git push -u origin fix/atlas-worktree-path-crash
|
||||
gh pr create \
|
||||
--base dev \
|
||||
--title "fix(atlas): prevent crash when boulder.json missing worktree_path" \
|
||||
--body-file /tmp/pull-request-atlas-worktree-fix.md
|
||||
```
|
||||
|
||||
## Phase 3: Verify Loop
|
||||
|
||||
- **Gate A (CI)**: `gh pr checks --watch` — wait for all checks green
|
||||
- **Gate B (review-work)**: Run 5-agent review (Oracle goal, Oracle quality, Oracle security, QA execution, context mining)
|
||||
- **Gate C (Cubic)**: Wait for cubic-dev-ai[bot] to respond "No issues found"
|
||||
- On any failure: fix-commit-push, re-enter verify loop
|
||||
|
||||
## Phase 4: Merge
|
||||
|
||||
```bash
|
||||
gh pr merge --squash --delete-branch
|
||||
git worktree remove ../omo-wt/fix-atlas-worktree-path-crash
|
||||
```
|
||||
@@ -1,42 +0,0 @@
|
||||
# PR Title
|
||||
|
||||
```
|
||||
fix(atlas): prevent crash when boulder.json missing worktree_path
|
||||
```
|
||||
|
||||
# PR Body
|
||||
|
||||
## Summary
|
||||
|
||||
- Fix runtime type violation in atlas hook when `boulder.json` lacks `worktree_path` field
|
||||
- Add `worktree_path` sanitization in `readBoulderState()` to reject non-string values (e.g., `null` from manual edits)
|
||||
- Add defensive `typeof` guards in `idle-event.ts` before passing worktree path to continuation injection
|
||||
- Add test coverage for missing and null `worktree_path` scenarios
|
||||
|
||||
## Problem
|
||||
|
||||
`readBoulderState()` in `src/features/boulder-state/storage.ts` casts raw `JSON.parse()` output directly as `BoulderState` via `return parsed as BoulderState`. This bypasses TypeScript's type system entirely at runtime.
|
||||
|
||||
When `boulder.json` is missing the `worktree_path` field (common for boulders created before worktree support was added, or created without `--worktree` flag), `boulderState.worktree_path` is `undefined` which is handled correctly. However, when boulder.json has `"worktree_path": null` (possible from manual edits, external tooling, or corrupted state), the runtime type becomes `null` which violates the TypeScript type `string | undefined`.
|
||||
|
||||
This `null` value propagates through:
|
||||
1. `idle-event.ts:handleAtlasSessionIdle()` → `injectContinuation()` → `injectBoulderContinuation()`
|
||||
2. `idle-event.ts:scheduleRetry()` callback → same chain
|
||||
|
||||
While the `boulder-continuation-injector.ts` handles falsy values via `worktreePath ? ... : ""`, the type mismatch can cause subtle downstream issues and violates the contract of the `BoulderState` interface.
|
||||
|
||||
## Changes
|
||||
|
||||
| File | Change |
|
||||
|------|--------|
|
||||
| `src/features/boulder-state/storage.ts` | Sanitize `worktree_path` in `readBoulderState()` — reject non-string values |
|
||||
| `src/hooks/atlas/idle-event.ts` | Add `typeof` guards before passing worktree_path to continuation (2 call sites) |
|
||||
| `src/hooks/atlas/index.test.ts` | Add 2 tests: missing worktree_path + null worktree_path in session.idle |
|
||||
| `src/features/boulder-state/storage.test.ts` | Add 2 tests: sanitization of null + preservation of valid string |
|
||||
|
||||
## Testing
|
||||
|
||||
- `bun test src/hooks/atlas/` — all existing + new tests pass
|
||||
- `bun test src/features/boulder-state/` — all existing + new tests pass
|
||||
- `bun run typecheck` — clean
|
||||
- `bun run build` — clean
|
||||
@@ -1,87 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## Gate A: CI (`gh pr checks --watch`)
|
||||
|
||||
### What CI runs (from `ci.yml`)
|
||||
1. **Tests (split)**: Mock-heavy tests in isolation + batch tests
|
||||
2. **Typecheck**: `bun run typecheck` (tsc --noEmit)
|
||||
3. **Build**: `bun run build` (ESM + declarations + schema)
|
||||
|
||||
### Pre-push local validation
|
||||
Before pushing, run the exact CI steps locally to catch failures early:
|
||||
|
||||
```bash
|
||||
# Targeted test runs first (fast feedback)
|
||||
bun test src/features/boulder-state/storage.test.ts
|
||||
bun test src/hooks/atlas/index.test.ts
|
||||
|
||||
# Full test suite
|
||||
bun test
|
||||
|
||||
# Type check
|
||||
bun run typecheck
|
||||
|
||||
# Build
|
||||
bun run build
|
||||
```
|
||||
|
||||
### Failure handling
|
||||
- **Test failure**: Read test output, fix code, create new commit (never amend pushed commits), push
|
||||
- **Typecheck failure**: Run `lsp_diagnostics` on changed files, fix type errors, commit, push
|
||||
- **Build failure**: Check build output for missing exports or circular deps, fix, commit, push
|
||||
|
||||
After each fix-commit-push: `gh pr checks --watch` to re-enter gate
|
||||
|
||||
## Gate B: review-work (5-agent review)
|
||||
|
||||
### The 5 parallel agents
|
||||
1. **Oracle (goal/constraint verification)**: Checks the fix matches the stated problem — `worktree_path` crash resolved, no scope creep
|
||||
2. **Oracle (code quality)**: Validates code follows existing patterns — factory pattern, given/when/then tests, < 200 LOC, no catch-all files
|
||||
3. **Oracle (security)**: Ensures no new security issues — JSON parse injection, path traversal in worktree_path
|
||||
4. **QA agent (hands-on execution)**: Actually runs the tests, checks `lsp_diagnostics` on changed files, verifies the fix in action
|
||||
5. **Context mining agent**: Checks GitHub issues, git history, related PRs for context alignment
|
||||
|
||||
### Expected focus areas for this PR
|
||||
- Oracle (goal): Does the sanitization in `readBoulderState` actually prevent the crash? Is the `typeof` guard necessary or redundant?
|
||||
- Oracle (quality): Are the new tests following the given/when/then pattern? Do they use the same mock setup as existing tests?
|
||||
- Oracle (security): Is the `worktree_path` value ever used in path operations without sanitization? (Answer: no, it's only used in template strings)
|
||||
- QA: Run `bun test src/hooks/atlas/index.test.ts` — does the null worktree_path test actually trigger the bug before fix?
|
||||
|
||||
### Failure handling
|
||||
- Each oracle produces a PASS/FAIL verdict with specific issues
|
||||
- On FAIL: read the specific issue, fix in the worktree, commit, push, re-run review-work
|
||||
- All 5 agents must PASS
|
||||
|
||||
## Gate C: Cubic (`cubic-dev-ai[bot]`)
|
||||
|
||||
### What Cubic checks
|
||||
- Automated code review bot that analyzes the PR diff
|
||||
- Looks for: type safety issues, missing error handling, test coverage gaps, anti-patterns
|
||||
|
||||
### Expected result
|
||||
- "No issues found" for this small, focused fix
|
||||
- 3 files changed (storage.ts, idle-event.ts, index.test.ts) + 1 test file
|
||||
|
||||
### Failure handling
|
||||
- If Cubic flags an issue: evaluate if it's a real concern or false positive
|
||||
- Real concern: fix, commit, push
|
||||
- False positive: comment explaining why the flagged pattern is intentional
|
||||
- Wait for Cubic to re-review after push
|
||||
|
||||
## Post-verification: Merge
|
||||
|
||||
Once all 3 gates pass:
|
||||
```bash
|
||||
gh pr merge --squash --delete-branch
|
||||
git worktree remove ../omo-wt/fix-atlas-worktree-path-crash
|
||||
```
|
||||
|
||||
On merge failure (conflicts):
|
||||
```bash
|
||||
cd ../omo-wt/fix-atlas-worktree-path-crash
|
||||
git fetch origin dev
|
||||
git rebase origin/dev
|
||||
# Resolve conflicts if any
|
||||
git push --force-with-lease
|
||||
# Re-enter verify loop from Gate A
|
||||
```
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 506000, "total_duration_seconds": 506}
|
||||
@@ -1,11 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-2-without_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": false, "evidence": "No worktree. Steps go directly to creating branch and modifying files."},
|
||||
{"text": "Fix is minimal — adds null check, doesn't refactor unrelated code", "passed": true, "evidence": "Focused fix though also adds try/catch in setTimeout (reasonable secondary fix)"},
|
||||
{"text": "Test case added for the missing worktree_path scenario", "passed": true, "evidence": "Detailed test plan for missing/null/malformed boulder.json"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": false, "evidence": "Only mentions CI pipeline (step 5). No review-work or Cubic."},
|
||||
{"text": "References actual atlas hook files", "passed": true, "evidence": "References idle-event.ts, storage.ts with line numbers"},
|
||||
{"text": "Branch name follows fix/ prefix convention", "passed": true, "evidence": "fix/atlas-hook-missing-worktree-path"}
|
||||
]
|
||||
}
|
||||
@@ -1,334 +0,0 @@
|
||||
# Code Changes: Fix Atlas Hook Crash on Missing worktree_path
|
||||
|
||||
## Change 1: Harden `readBoulderState()` validation
|
||||
|
||||
**File:** `src/features/boulder-state/storage.ts`
|
||||
|
||||
### Before (lines 16-36):
|
||||
```typescript
|
||||
export function readBoulderState(directory: string): BoulderState | null {
|
||||
const filePath = getBoulderFilePath(directory)
|
||||
|
||||
if (!existsSync(filePath)) {
|
||||
return null
|
||||
}
|
||||
|
||||
try {
|
||||
const content = readFileSync(filePath, "utf-8")
|
||||
const parsed = JSON.parse(content)
|
||||
if (!parsed || typeof parsed !== "object" || Array.isArray(parsed)) {
|
||||
return null
|
||||
}
|
||||
if (!Array.isArray(parsed.session_ids)) {
|
||||
parsed.session_ids = []
|
||||
}
|
||||
return parsed as BoulderState
|
||||
} catch {
|
||||
return null
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### After:
|
||||
```typescript
|
||||
export function readBoulderState(directory: string): BoulderState | null {
|
||||
const filePath = getBoulderFilePath(directory)
|
||||
|
||||
if (!existsSync(filePath)) {
|
||||
return null
|
||||
}
|
||||
|
||||
try {
|
||||
const content = readFileSync(filePath, "utf-8")
|
||||
const parsed = JSON.parse(content)
|
||||
if (!parsed || typeof parsed !== "object" || Array.isArray(parsed)) {
|
||||
return null
|
||||
}
|
||||
if (typeof parsed.active_plan !== "string" || typeof parsed.plan_name !== "string") {
|
||||
return null
|
||||
}
|
||||
if (!Array.isArray(parsed.session_ids)) {
|
||||
parsed.session_ids = []
|
||||
}
|
||||
if (parsed.worktree_path !== undefined && typeof parsed.worktree_path !== "string") {
|
||||
delete parsed.worktree_path
|
||||
}
|
||||
return parsed as BoulderState
|
||||
} catch {
|
||||
return null
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Rationale:** Validates that required fields (`active_plan`, `plan_name`) are strings. Strips `worktree_path` if it's present but not a string (e.g., `null`, number). This prevents downstream crashes from `existsSync(undefined)` and ensures type safety at the boundary.
|
||||
|
||||
---
|
||||
|
||||
## Change 2: Add try/catch in setTimeout retry callback
|
||||
|
||||
**File:** `src/hooks/atlas/idle-event.ts`
|
||||
|
||||
### Before (lines 62-88):
|
||||
```typescript
|
||||
sessionState.pendingRetryTimer = setTimeout(async () => {
|
||||
sessionState.pendingRetryTimer = undefined
|
||||
|
||||
if (sessionState.promptFailureCount >= 2) return
|
||||
if (sessionState.waitingForFinalWaveApproval) return
|
||||
|
||||
const currentBoulder = readBoulderState(ctx.directory)
|
||||
if (!currentBoulder) return
|
||||
if (!currentBoulder.session_ids?.includes(sessionID)) return
|
||||
|
||||
const currentProgress = getPlanProgress(currentBoulder.active_plan)
|
||||
if (currentProgress.isComplete) return
|
||||
if (options?.isContinuationStopped?.(sessionID)) return
|
||||
if (options?.shouldSkipContinuation?.(sessionID)) return
|
||||
if (hasRunningBackgroundTasks(sessionID, options)) return
|
||||
|
||||
await injectContinuation({
|
||||
ctx,
|
||||
sessionID,
|
||||
sessionState,
|
||||
options,
|
||||
planName: currentBoulder.plan_name,
|
||||
progress: currentProgress,
|
||||
agent: currentBoulder.agent,
|
||||
worktreePath: currentBoulder.worktree_path,
|
||||
})
|
||||
}, RETRY_DELAY_MS)
|
||||
```
|
||||
|
||||
### After:
|
||||
```typescript
|
||||
sessionState.pendingRetryTimer = setTimeout(async () => {
|
||||
sessionState.pendingRetryTimer = undefined
|
||||
|
||||
try {
|
||||
if (sessionState.promptFailureCount >= 2) return
|
||||
if (sessionState.waitingForFinalWaveApproval) return
|
||||
|
||||
const currentBoulder = readBoulderState(ctx.directory)
|
||||
if (!currentBoulder) return
|
||||
if (!currentBoulder.session_ids?.includes(sessionID)) return
|
||||
|
||||
const currentProgress = getPlanProgress(currentBoulder.active_plan)
|
||||
if (currentProgress.isComplete) return
|
||||
if (options?.isContinuationStopped?.(sessionID)) return
|
||||
if (options?.shouldSkipContinuation?.(sessionID)) return
|
||||
if (hasRunningBackgroundTasks(sessionID, options)) return
|
||||
|
||||
await injectContinuation({
|
||||
ctx,
|
||||
sessionID,
|
||||
sessionState,
|
||||
options,
|
||||
planName: currentBoulder.plan_name,
|
||||
progress: currentProgress,
|
||||
agent: currentBoulder.agent,
|
||||
worktreePath: currentBoulder.worktree_path,
|
||||
})
|
||||
} catch (error) {
|
||||
log(`[${HOOK_NAME}] Retry continuation failed`, { sessionID, error: String(error) })
|
||||
}
|
||||
}, RETRY_DELAY_MS)
|
||||
```
|
||||
|
||||
**Rationale:** The async callback in setTimeout creates a floating promise. Without try/catch, any error becomes an unhandled rejection that can crash the process. This is the critical safety net even after the `readBoulderState` fix.
|
||||
|
||||
---
|
||||
|
||||
## Change 3: Defensive guard in `getPlanProgress`
|
||||
|
||||
**File:** `src/features/boulder-state/storage.ts`
|
||||
|
||||
### Before (lines 115-118):
|
||||
```typescript
|
||||
export function getPlanProgress(planPath: string): PlanProgress {
|
||||
if (!existsSync(planPath)) {
|
||||
return { total: 0, completed: 0, isComplete: true }
|
||||
}
|
||||
```
|
||||
|
||||
### After:
|
||||
```typescript
|
||||
export function getPlanProgress(planPath: string): PlanProgress {
|
||||
if (typeof planPath !== "string" || !existsSync(planPath)) {
|
||||
return { total: 0, completed: 0, isComplete: true }
|
||||
}
|
||||
```
|
||||
|
||||
**Rationale:** Defense-in-depth. Even though `readBoulderState` now validates `active_plan`, the `getPlanProgress` function is a public API that could be called from other paths with invalid input. A `typeof` check before `existsSync` prevents the TypeError from `existsSync(undefined)`.
|
||||
|
||||
---
|
||||
|
||||
## Change 4: New tests
|
||||
|
||||
### File: `src/features/boulder-state/storage.test.ts` (additions)
|
||||
|
||||
```typescript
|
||||
test("should return null when active_plan is missing", () => {
|
||||
// given - boulder.json without active_plan
|
||||
const boulderFile = join(SISYPHUS_DIR, "boulder.json")
|
||||
writeFileSync(boulderFile, JSON.stringify({
|
||||
started_at: "2026-01-01T00:00:00Z",
|
||||
session_ids: ["ses-1"],
|
||||
plan_name: "plan",
|
||||
}))
|
||||
|
||||
// when
|
||||
const result = readBoulderState(TEST_DIR)
|
||||
|
||||
// then
|
||||
expect(result).toBeNull()
|
||||
})
|
||||
|
||||
test("should return null when plan_name is missing", () => {
|
||||
// given - boulder.json without plan_name
|
||||
const boulderFile = join(SISYPHUS_DIR, "boulder.json")
|
||||
writeFileSync(boulderFile, JSON.stringify({
|
||||
active_plan: "/path/to/plan.md",
|
||||
started_at: "2026-01-01T00:00:00Z",
|
||||
session_ids: ["ses-1"],
|
||||
}))
|
||||
|
||||
// when
|
||||
const result = readBoulderState(TEST_DIR)
|
||||
|
||||
// then
|
||||
expect(result).toBeNull()
|
||||
})
|
||||
|
||||
test("should strip non-string worktree_path from boulder state", () => {
|
||||
// given - boulder.json with worktree_path set to null
|
||||
const boulderFile = join(SISYPHUS_DIR, "boulder.json")
|
||||
writeFileSync(boulderFile, JSON.stringify({
|
||||
active_plan: "/path/to/plan.md",
|
||||
started_at: "2026-01-01T00:00:00Z",
|
||||
session_ids: ["ses-1"],
|
||||
plan_name: "plan",
|
||||
worktree_path: null,
|
||||
}))
|
||||
|
||||
// when
|
||||
const result = readBoulderState(TEST_DIR)
|
||||
|
||||
// then
|
||||
expect(result).not.toBeNull()
|
||||
expect(result!.worktree_path).toBeUndefined()
|
||||
})
|
||||
|
||||
test("should preserve valid worktree_path string", () => {
|
||||
// given - boulder.json with valid worktree_path
|
||||
const boulderFile = join(SISYPHUS_DIR, "boulder.json")
|
||||
writeFileSync(boulderFile, JSON.stringify({
|
||||
active_plan: "/path/to/plan.md",
|
||||
started_at: "2026-01-01T00:00:00Z",
|
||||
session_ids: ["ses-1"],
|
||||
plan_name: "plan",
|
||||
worktree_path: "/valid/worktree/path",
|
||||
}))
|
||||
|
||||
// when
|
||||
const result = readBoulderState(TEST_DIR)
|
||||
|
||||
// then
|
||||
expect(result).not.toBeNull()
|
||||
expect(result!.worktree_path).toBe("/valid/worktree/path")
|
||||
})
|
||||
```
|
||||
|
||||
### File: `src/features/boulder-state/storage.test.ts` (getPlanProgress additions)
|
||||
|
||||
```typescript
|
||||
test("should handle undefined planPath without crashing", () => {
|
||||
// given - undefined as planPath (from malformed boulder state)
|
||||
|
||||
// when
|
||||
const progress = getPlanProgress(undefined as unknown as string)
|
||||
|
||||
// then
|
||||
expect(progress.total).toBe(0)
|
||||
expect(progress.isComplete).toBe(true)
|
||||
})
|
||||
```
|
||||
|
||||
### File: `src/hooks/atlas/index.test.ts` (additions to session.idle section)
|
||||
|
||||
```typescript
|
||||
test("should handle boulder state without worktree_path gracefully", async () => {
|
||||
// given - boulder state with incomplete plan, no worktree_path
|
||||
const planPath = join(TEST_DIR, "test-plan.md")
|
||||
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [x] Task 2")
|
||||
|
||||
const state: BoulderState = {
|
||||
active_plan: planPath,
|
||||
started_at: "2026-01-02T10:00:00Z",
|
||||
session_ids: [MAIN_SESSION_ID],
|
||||
plan_name: "test-plan",
|
||||
// worktree_path intentionally omitted
|
||||
}
|
||||
writeBoulderState(TEST_DIR, state)
|
||||
|
||||
const mockInput = createMockPluginInput()
|
||||
const hook = createAtlasHook(mockInput)
|
||||
|
||||
// when
|
||||
await hook.handler({
|
||||
event: {
|
||||
type: "session.idle",
|
||||
properties: { sessionID: MAIN_SESSION_ID },
|
||||
},
|
||||
})
|
||||
|
||||
// then - should call prompt without crashing, continuation should not contain worktree context
|
||||
expect(mockInput._promptMock).toHaveBeenCalled()
|
||||
const callArgs = mockInput._promptMock.mock.calls[0][0]
|
||||
expect(callArgs.body.parts[0].text).toContain("incomplete tasks")
|
||||
expect(callArgs.body.parts[0].text).not.toContain("[Worktree:")
|
||||
})
|
||||
|
||||
test("should include worktree context when worktree_path is present in boulder state", async () => {
|
||||
// given - boulder state with worktree_path
|
||||
const planPath = join(TEST_DIR, "test-plan.md")
|
||||
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
|
||||
|
||||
const state: BoulderState = {
|
||||
active_plan: planPath,
|
||||
started_at: "2026-01-02T10:00:00Z",
|
||||
session_ids: [MAIN_SESSION_ID],
|
||||
plan_name: "test-plan",
|
||||
worktree_path: "/some/worktree/path",
|
||||
}
|
||||
writeBoulderState(TEST_DIR, state)
|
||||
|
||||
const mockInput = createMockPluginInput()
|
||||
const hook = createAtlasHook(mockInput)
|
||||
|
||||
// when
|
||||
await hook.handler({
|
||||
event: {
|
||||
type: "session.idle",
|
||||
properties: { sessionID: MAIN_SESSION_ID },
|
||||
},
|
||||
})
|
||||
|
||||
// then - should include worktree context in continuation prompt
|
||||
expect(mockInput._promptMock).toHaveBeenCalled()
|
||||
const callArgs = mockInput._promptMock.mock.calls[0][0]
|
||||
expect(callArgs.body.parts[0].text).toContain("[Worktree: /some/worktree/path]")
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Summary of Changes
|
||||
|
||||
| File | Change | Lines Modified |
|
||||
|------|--------|---------------|
|
||||
| `src/features/boulder-state/storage.ts` | Validate required fields + sanitize worktree_path + guard getPlanProgress | ~8 lines added |
|
||||
| `src/hooks/atlas/idle-event.ts` | try/catch around setTimeout async callback | ~4 lines added |
|
||||
| `src/features/boulder-state/storage.test.ts` | 5 new tests for validation | ~60 lines added |
|
||||
| `src/hooks/atlas/index.test.ts` | 2 new tests for worktree_path handling | ~50 lines added |
|
||||
|
||||
Total: ~4 production lines changed, ~8 defensive lines added, ~110 test lines added.
|
||||
@@ -1,86 +0,0 @@
|
||||
# Execution Plan: Fix Atlas Hook Crash on Missing worktree_path
|
||||
|
||||
## Bug Analysis
|
||||
|
||||
### Root Cause
|
||||
|
||||
`readBoulderState()` in `src/features/boulder-state/storage.ts` performs minimal validation when parsing `boulder.json`:
|
||||
|
||||
```typescript
|
||||
const parsed = JSON.parse(content)
|
||||
if (!parsed || typeof parsed !== "object" || Array.isArray(parsed)) return null
|
||||
if (!Array.isArray(parsed.session_ids)) parsed.session_ids = []
|
||||
return parsed as BoulderState // <-- unsafe cast, no field validation
|
||||
```
|
||||
|
||||
It validates `session_ids` but NOT `active_plan`, `plan_name`, or `worktree_path`. This means a malformed `boulder.json` (e.g., `{}` or missing key fields) passes through and downstream code crashes.
|
||||
|
||||
### Crash Path
|
||||
|
||||
1. `boulder.json` is written without required fields (manual edit, corruption, partial write)
|
||||
2. `readBoulderState()` returns it as `BoulderState` with `active_plan: undefined`
|
||||
3. Multiple call sites pass `boulderState.active_plan` to `getPlanProgress(planPath: string)`:
|
||||
- `src/hooks/atlas/idle-event.ts:72` (inside `setTimeout` callback - unhandled rejection!)
|
||||
- `src/hooks/atlas/resolve-active-boulder-session.ts:21`
|
||||
- `src/hooks/atlas/tool-execute-after.ts:74`
|
||||
4. `getPlanProgress()` calls `existsSync(undefined)` which throws: `TypeError: The "path" argument must be of type string`
|
||||
|
||||
### worktree_path-Specific Issues
|
||||
|
||||
When `worktree_path` field is missing from `boulder.json`:
|
||||
- The `idle-event.ts` `scheduleRetry` setTimeout callback (lines 62-88) has NO try/catch. An unhandled promise rejection from the async callback crashes the process.
|
||||
- `readBoulderState()` returns `worktree_path: undefined` which itself is handled in `boulder-continuation-injector.ts` (line 42 uses truthiness check), but the surrounding code in the setTimeout lacks error protection.
|
||||
|
||||
### Secondary Issue: Unhandled Promise in setTimeout
|
||||
|
||||
In `idle-event.ts` lines 62-88:
|
||||
```typescript
|
||||
sessionState.pendingRetryTimer = setTimeout(async () => {
|
||||
// ... no try/catch wrapper
|
||||
const currentBoulder = readBoulderState(ctx.directory)
|
||||
const currentProgress = getPlanProgress(currentBoulder.active_plan) // CRASH if active_plan undefined
|
||||
// ...
|
||||
}, RETRY_DELAY_MS)
|
||||
```
|
||||
|
||||
The async callback creates a floating promise. Any thrown error becomes an unhandled rejection.
|
||||
|
||||
---
|
||||
|
||||
## Step-by-Step Plan
|
||||
|
||||
### Step 1: Harden `readBoulderState()` validation
|
||||
**File:** `src/features/boulder-state/storage.ts`
|
||||
|
||||
- After the `session_ids` fix, add validation for `active_plan` and `plan_name` (required fields)
|
||||
- Validate `worktree_path` is either `undefined` or a string (not `null`, not a number)
|
||||
- Return `null` for boulder states with missing required fields
|
||||
|
||||
### Step 2: Add try/catch in setTimeout callback
|
||||
**File:** `src/hooks/atlas/idle-event.ts`
|
||||
|
||||
- Wrap the `setTimeout` async callback body in try/catch
|
||||
- Log errors with the atlas hook logger
|
||||
|
||||
### Step 3: Add defensive guard in `getPlanProgress`
|
||||
**File:** `src/features/boulder-state/storage.ts`
|
||||
|
||||
- Add early return for non-string `planPath` argument
|
||||
|
||||
### Step 4: Add tests
|
||||
**Files:**
|
||||
- `src/features/boulder-state/storage.test.ts` - test missing/malformed fields
|
||||
- `src/hooks/atlas/index.test.ts` - test atlas hook with boulder missing worktree_path
|
||||
|
||||
### Step 5: Run CI checks
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test src/features/boulder-state/storage.test.ts
|
||||
bun test src/hooks/atlas/index.test.ts
|
||||
bun test # full suite
|
||||
```
|
||||
|
||||
### Step 6: Create PR
|
||||
- Branch: `fix/atlas-hook-missing-worktree-path`
|
||||
- Target: `dev`
|
||||
- Run CI and verify passes
|
||||
@@ -1,23 +0,0 @@
|
||||
## Summary
|
||||
|
||||
- Fix crash in atlas hook when `boulder.json` is missing `worktree_path` (or other required fields) by hardening `readBoulderState()` validation
|
||||
- Wrap the unprotected `setTimeout` retry callback in `idle-event.ts` with try/catch to prevent unhandled promise rejections
|
||||
- Add defensive type guard in `getPlanProgress()` to prevent `existsSync(undefined)` TypeError
|
||||
|
||||
## Context
|
||||
|
||||
When `boulder.json` is malformed or manually edited to omit fields, `readBoulderState()` returns an object cast as `BoulderState` without validating required fields. Downstream callers like `getPlanProgress(boulderState.active_plan)` then pass `undefined` to `existsSync()`, which throws a TypeError. This crash is especially dangerous in the `setTimeout` retry callback in `idle-event.ts`, where the error becomes an unhandled promise rejection.
|
||||
|
||||
## Changes
|
||||
|
||||
### `src/features/boulder-state/storage.ts`
|
||||
- `readBoulderState()`: Validate `active_plan` and `plan_name` are strings (return `null` if not)
|
||||
- `readBoulderState()`: Strip `worktree_path` if present but not a string type
|
||||
- `getPlanProgress()`: Add `typeof planPath !== "string"` guard before `existsSync`
|
||||
|
||||
### `src/hooks/atlas/idle-event.ts`
|
||||
- Wrap `scheduleRetry` setTimeout async callback body in try/catch
|
||||
|
||||
### Tests
|
||||
- `src/features/boulder-state/storage.test.ts`: 5 new tests for missing/malformed fields
|
||||
- `src/hooks/atlas/index.test.ts`: 2 new tests for worktree_path presence/absence in continuation prompt
|
||||
@@ -1,119 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## 1. Unit Tests (Direct Verification)
|
||||
|
||||
### boulder-state storage tests
|
||||
```bash
|
||||
bun test src/features/boulder-state/storage.test.ts
|
||||
```
|
||||
|
||||
Verify:
|
||||
- `readBoulderState()` returns `null` when `active_plan` missing
|
||||
- `readBoulderState()` returns `null` when `plan_name` missing
|
||||
- `readBoulderState()` strips non-string `worktree_path` (e.g., `null`)
|
||||
- `readBoulderState()` preserves valid string `worktree_path`
|
||||
- `getPlanProgress(undefined)` returns safe default without crashing
|
||||
- Existing tests still pass (session_ids defaults, empty object, etc.)
|
||||
|
||||
### atlas hook tests
|
||||
```bash
|
||||
bun test src/hooks/atlas/index.test.ts
|
||||
```
|
||||
|
||||
Verify:
|
||||
- session.idle handler works with boulder state missing `worktree_path` (no crash, prompt injected)
|
||||
- session.idle handler includes `[Worktree: ...]` context when `worktree_path` IS present
|
||||
- All 30+ existing tests still pass
|
||||
|
||||
### atlas idle-event lineage tests
|
||||
```bash
|
||||
bun test src/hooks/atlas/idle-event-lineage.test.ts
|
||||
```
|
||||
|
||||
Verify existing lineage tests unaffected.
|
||||
|
||||
### start-work hook tests
|
||||
```bash
|
||||
bun test src/hooks/start-work/index.test.ts
|
||||
```
|
||||
|
||||
Verify worktree-related start-work tests still pass (these create boulder states with/without `worktree_path`).
|
||||
|
||||
## 2. Type Safety
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
```
|
||||
|
||||
Verify zero new TypeScript errors. The changes are purely additive runtime guards that align with existing types (`worktree_path?: string`).
|
||||
|
||||
## 3. LSP Diagnostics on Changed Files
|
||||
|
||||
```
|
||||
lsp_diagnostics on:
|
||||
- src/features/boulder-state/storage.ts
|
||||
- src/hooks/atlas/idle-event.ts
|
||||
```
|
||||
|
||||
Verify zero errors/warnings.
|
||||
|
||||
## 4. Full Test Suite
|
||||
|
||||
```bash
|
||||
bun test
|
||||
```
|
||||
|
||||
Verify no regressions across the entire codebase.
|
||||
|
||||
## 5. Build
|
||||
|
||||
```bash
|
||||
bun run build
|
||||
```
|
||||
|
||||
Verify build succeeds.
|
||||
|
||||
## 6. Manual Smoke Test (Reproduction)
|
||||
|
||||
To manually verify the fix:
|
||||
|
||||
```bash
|
||||
# Create a malformed boulder.json (missing worktree_path)
|
||||
mkdir -p .sisyphus
|
||||
echo '{"active_plan": ".sisyphus/plans/test.md", "plan_name": "test", "session_ids": ["ses-1"]}' > .sisyphus/boulder.json
|
||||
|
||||
# Create a plan file
|
||||
mkdir -p .sisyphus/plans
|
||||
echo '# Plan\n- [ ] Task 1' > .sisyphus/plans/test.md
|
||||
|
||||
# Start opencode - atlas hook should NOT crash when session.idle fires
|
||||
# Verify /tmp/oh-my-opencode.log shows normal continuation behavior
|
||||
```
|
||||
|
||||
Also test the extreme case:
|
||||
```bash
|
||||
# boulder.json with no required fields
|
||||
echo '{}' > .sisyphus/boulder.json
|
||||
|
||||
# After fix: readBoulderState returns null, atlas hook gracefully skips
|
||||
```
|
||||
|
||||
## 7. CI Pipeline
|
||||
|
||||
After pushing the branch, verify:
|
||||
- `ci.yml` workflow passes: tests (split: mock-heavy isolated + batch), typecheck, build
|
||||
- No new lint warnings
|
||||
|
||||
## 8. Edge Cases Covered
|
||||
|
||||
| Scenario | Expected Behavior |
|
||||
|----------|-------------------|
|
||||
| `boulder.json` = `{}` | `readBoulderState` returns `null` |
|
||||
| `boulder.json` missing `active_plan` | `readBoulderState` returns `null` |
|
||||
| `boulder.json` missing `plan_name` | `readBoulderState` returns `null` |
|
||||
| `boulder.json` has `worktree_path: null` | Field stripped, returned as `undefined` |
|
||||
| `boulder.json` has `worktree_path: 42` | Field stripped, returned as `undefined` |
|
||||
| `boulder.json` has no `worktree_path` | Works normally, no crash |
|
||||
| `boulder.json` has valid `worktree_path` | Preserved, included in continuation prompt |
|
||||
| setTimeout retry with corrupted boulder.json | Error caught and logged, no process crash |
|
||||
| `getPlanProgress(undefined)` | Returns `{ total: 0, completed: 0, isComplete: true }` |
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 325000, "total_duration_seconds": 325}
|
||||
@@ -1,32 +0,0 @@
|
||||
{
|
||||
"eval_id": 3,
|
||||
"eval_name": "refactor-split-constants",
|
||||
"prompt": "Refactor src/tools/delegate-task/constants.ts to split DEFAULT_CATEGORIES and CATEGORY_MODEL_REQUIREMENTS into separate files. Keep backward compatibility with the barrel export. Make a PR.",
|
||||
"assertions": [
|
||||
{
|
||||
"id": "worktree-isolation",
|
||||
"text": "Plan uses git worktree in a sibling directory",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "multiple-atomic-commits",
|
||||
"text": "Uses 2+ commits for the multi-file refactor",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "barrel-export",
|
||||
"text": "Maintains backward compatibility via barrel re-export in constants.ts or index.ts",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "three-gates",
|
||||
"text": "Verification loop includes all 3 gates",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "real-constants-file",
|
||||
"text": "References actual src/tools/delegate-task/constants.ts file and its exports",
|
||||
"type": "manual"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,10 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-3-with_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": true, "evidence": "../omo-wt/refactor-delegate-task-constants"},
|
||||
{"text": "Uses 2+ commits for the multi-file refactor", "passed": true, "evidence": "Commit 1: category defaults+appends, Commit 2: plan agent prompt+names"},
|
||||
{"text": "Maintains backward compatibility via barrel re-export", "passed": true, "evidence": "constants.ts converted to re-export from 4 new files, full import map verified"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": true, "evidence": "Gate A (CI), Gate B (review-work), Gate C (Cubic)"},
|
||||
{"text": "References actual src/tools/delegate-task/constants.ts", "passed": true, "evidence": "654 lines analyzed, 4 responsibilities identified, full external+internal import map"}
|
||||
]
|
||||
}
|
||||
@@ -1,221 +0,0 @@
|
||||
# Code Changes
|
||||
|
||||
## New File: `src/tools/delegate-task/default-categories.ts`
|
||||
|
||||
```typescript
|
||||
import type { CategoryConfig } from "../../config/schema"
|
||||
|
||||
export const DEFAULT_CATEGORIES: Record<string, CategoryConfig> = {
|
||||
"visual-engineering": { model: "google/gemini-3.1-pro", variant: "high" },
|
||||
ultrabrain: { model: "openai/gpt-5.4", variant: "xhigh" },
|
||||
deep: { model: "openai/gpt-5.3-codex", variant: "medium" },
|
||||
artistry: { model: "google/gemini-3.1-pro", variant: "high" },
|
||||
quick: { model: "anthropic/claude-haiku-4-5" },
|
||||
"unspecified-low": { model: "anthropic/claude-sonnet-4-6" },
|
||||
"unspecified-high": { model: "anthropic/claude-opus-4-6", variant: "max" },
|
||||
writing: { model: "kimi-for-coding/k2p5" },
|
||||
}
|
||||
|
||||
export const CATEGORY_DESCRIPTIONS: Record<string, string> = {
|
||||
"visual-engineering": "Frontend, UI/UX, design, styling, animation",
|
||||
ultrabrain: "Use ONLY for genuinely hard, logic-heavy tasks. Give clear goals only, not step-by-step instructions.",
|
||||
deep: "Goal-oriented autonomous problem-solving. Thorough research before action. For hairy problems requiring deep understanding.",
|
||||
artistry: "Complex problem-solving with unconventional, creative approaches - beyond standard patterns",
|
||||
quick: "Trivial tasks - single file changes, typo fixes, simple modifications",
|
||||
"unspecified-low": "Tasks that don't fit other categories, low effort required",
|
||||
"unspecified-high": "Tasks that don't fit other categories, high effort required",
|
||||
writing: "Documentation, prose, technical writing",
|
||||
}
|
||||
```
|
||||
|
||||
## New File: `src/tools/delegate-task/category-prompt-appends.ts`
|
||||
|
||||
```typescript
|
||||
export const VISUAL_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on VISUAL/UI tasks.
|
||||
...
|
||||
</Category_Context>`
|
||||
// (exact content from lines 8-95 of constants.ts)
|
||||
|
||||
export const ULTRABRAIN_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
...
|
||||
</Category_Context>`
|
||||
// (exact content from lines 97-117)
|
||||
|
||||
export const ARTISTRY_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
...
|
||||
</Category_Context>`
|
||||
// (exact content from lines 119-134)
|
||||
|
||||
export const QUICK_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
...
|
||||
</Caller_Warning>`
|
||||
// (exact content from lines 136-186)
|
||||
|
||||
export const UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
...
|
||||
</Caller_Warning>`
|
||||
// (exact content from lines 188-209)
|
||||
|
||||
export const UNSPECIFIED_HIGH_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
...
|
||||
</Category_Context>`
|
||||
// (exact content from lines 211-224)
|
||||
|
||||
export const WRITING_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
...
|
||||
</Category_Context>`
|
||||
// (exact content from lines 226-250)
|
||||
|
||||
export const DEEP_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
...
|
||||
</Category_Context>`
|
||||
// (exact content from lines 252-281)
|
||||
|
||||
export const CATEGORY_PROMPT_APPENDS: Record<string, string> = {
|
||||
"visual-engineering": VISUAL_CATEGORY_PROMPT_APPEND,
|
||||
ultrabrain: ULTRABRAIN_CATEGORY_PROMPT_APPEND,
|
||||
deep: DEEP_CATEGORY_PROMPT_APPEND,
|
||||
artistry: ARTISTRY_CATEGORY_PROMPT_APPEND,
|
||||
quick: QUICK_CATEGORY_PROMPT_APPEND,
|
||||
"unspecified-low": UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND,
|
||||
"unspecified-high": UNSPECIFIED_HIGH_CATEGORY_PROMPT_APPEND,
|
||||
writing: WRITING_CATEGORY_PROMPT_APPEND,
|
||||
}
|
||||
```
|
||||
|
||||
## New File: `src/tools/delegate-task/plan-agent-prompt.ts`
|
||||
|
||||
```typescript
|
||||
import type {
|
||||
AvailableCategory,
|
||||
AvailableSkill,
|
||||
} from "../../agents/dynamic-agent-prompt-builder"
|
||||
import { truncateDescription } from "../../shared/truncate-description"
|
||||
|
||||
/**
|
||||
* System prompt prepended to plan agent invocations.
|
||||
* Instructs the plan agent to first gather context via explore/librarian agents,
|
||||
* then summarize user requirements and clarify uncertainties before proceeding.
|
||||
* Also MANDATES dependency graphs, parallel execution analysis, and category+skill recommendations.
|
||||
*/
|
||||
export const PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS = `<system>
|
||||
...
|
||||
</CRITICAL_REQUIREMENT_DEPENDENCY_PARALLEL_EXECUTION_CATEGORY_SKILLS>
|
||||
`
|
||||
// (exact content from lines 324-430)
|
||||
|
||||
export const PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS = `### REQUIRED OUTPUT FORMAT
|
||||
...
|
||||
`
|
||||
// (exact content from lines 432-569)
|
||||
|
||||
function renderPlanAgentCategoryRows(categories: AvailableCategory[]): string[] {
|
||||
const sorted = [...categories].sort((a, b) => a.name.localeCompare(b.name))
|
||||
return sorted.map((category) => {
|
||||
const bestFor = category.description || category.name
|
||||
const model = category.model || ""
|
||||
return `| \`${category.name}\` | ${bestFor} | ${model} |`
|
||||
})
|
||||
}
|
||||
|
||||
function renderPlanAgentSkillRows(skills: AvailableSkill[]): string[] {
|
||||
const sorted = [...skills].sort((a, b) => a.name.localeCompare(b.name))
|
||||
return sorted.map((skill) => {
|
||||
const domain = truncateDescription(skill.description).trim() || skill.name
|
||||
return `| \`${skill.name}\` | ${domain} |`
|
||||
})
|
||||
}
|
||||
|
||||
export function buildPlanAgentSkillsSection(
|
||||
categories: AvailableCategory[] = [],
|
||||
skills: AvailableSkill[] = []
|
||||
): string {
|
||||
const categoryRows = renderPlanAgentCategoryRows(categories)
|
||||
const skillRows = renderPlanAgentSkillRows(skills)
|
||||
|
||||
return `### AVAILABLE CATEGORIES
|
||||
|
||||
| Category | Best For | Model |
|
||||
|----------|----------|-------|
|
||||
${categoryRows.join("\n")}
|
||||
|
||||
### AVAILABLE SKILLS (ALWAYS EVALUATE ALL)
|
||||
|
||||
Skills inject specialized expertise into the delegated agent.
|
||||
YOU MUST evaluate EVERY skill and justify inclusions/omissions.
|
||||
|
||||
| Skill | Domain |
|
||||
|-------|--------|
|
||||
${skillRows.join("\n")}`
|
||||
}
|
||||
|
||||
export function buildPlanAgentSystemPrepend(
|
||||
categories: AvailableCategory[] = [],
|
||||
skills: AvailableSkill[] = []
|
||||
): string {
|
||||
return [
|
||||
PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS,
|
||||
buildPlanAgentSkillsSection(categories, skills),
|
||||
PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS,
|
||||
].join("\n\n")
|
||||
}
|
||||
```
|
||||
|
||||
## New File: `src/tools/delegate-task/plan-agent-names.ts`
|
||||
|
||||
```typescript
|
||||
/**
|
||||
* List of agent names that should be treated as plan agents (receive plan system prompt).
|
||||
* Case-insensitive matching is used.
|
||||
*/
|
||||
export const PLAN_AGENT_NAMES = ["plan"]
|
||||
|
||||
/**
|
||||
* Check if the given agent name is a plan agent (receives plan system prompt).
|
||||
*/
|
||||
export function isPlanAgent(agentName: string | undefined): boolean {
|
||||
if (!agentName) return false
|
||||
const lowerName = agentName.toLowerCase().trim()
|
||||
return PLAN_AGENT_NAMES.some(name => lowerName === name || lowerName.includes(name))
|
||||
}
|
||||
|
||||
/**
|
||||
* Plan family: plan + prometheus. Shares mutual delegation blocking and task tool permission.
|
||||
* Does NOT share system prompt (only isPlanAgent controls that).
|
||||
*/
|
||||
export const PLAN_FAMILY_NAMES = ["plan", "prometheus"]
|
||||
|
||||
/**
|
||||
* Check if the given agent belongs to the plan family (blocking + task permission).
|
||||
*/
|
||||
export function isPlanFamily(category: string): boolean
|
||||
export function isPlanFamily(category: string | undefined): boolean
|
||||
export function isPlanFamily(category: string | undefined): boolean {
|
||||
if (!category) return false
|
||||
const lowerCategory = category.toLowerCase().trim()
|
||||
return PLAN_FAMILY_NAMES.some(
|
||||
(name) => lowerCategory === name || lowerCategory.includes(name)
|
||||
)
|
||||
}
|
||||
```
|
||||
|
||||
## Modified File: `src/tools/delegate-task/constants.ts`
|
||||
|
||||
```typescript
|
||||
export * from "./default-categories"
|
||||
export * from "./category-prompt-appends"
|
||||
export * from "./plan-agent-prompt"
|
||||
export * from "./plan-agent-names"
|
||||
```
|
||||
|
||||
## Unchanged: `src/tools/delegate-task/index.ts`
|
||||
|
||||
```typescript
|
||||
export { createDelegateTask, resolveCategoryConfig, buildSystemContent, buildTaskPrompt } from "./tools"
|
||||
export type { DelegateTaskToolOptions, SyncSessionCreatedEvent, BuildSystemContentInput } from "./tools"
|
||||
export type * from "./types"
|
||||
export * from "./constants"
|
||||
```
|
||||
|
||||
No changes needed. `export * from "./constants"` transitively re-exports everything from the 4 new files.
|
||||
@@ -1,104 +0,0 @@
|
||||
# Execution Plan: Split delegate-task/constants.ts
|
||||
|
||||
## Phase 0: Setup
|
||||
|
||||
```bash
|
||||
git fetch origin dev
|
||||
git worktree add ../omo-wt/refactor-delegate-task-constants origin/dev -b refactor/split-delegate-task-constants
|
||||
cd ../omo-wt/refactor-delegate-task-constants
|
||||
```
|
||||
|
||||
## Phase 1: Implement
|
||||
|
||||
### Analysis
|
||||
|
||||
`src/tools/delegate-task/constants.ts` is 654 lines with 4 distinct responsibilities:
|
||||
|
||||
1. **Category defaults** (lines 285-316): `DEFAULT_CATEGORIES`, `CATEGORY_DESCRIPTIONS`
|
||||
2. **Category prompt appends** (lines 8-305): 8 `*_CATEGORY_PROMPT_APPEND` string constants + `CATEGORY_PROMPT_APPENDS` record
|
||||
3. **Plan agent prompts** (lines 318-620): `PLAN_AGENT_SYSTEM_PREPEND_*`, builder functions
|
||||
4. **Plan agent names** (lines 626-654): `PLAN_AGENT_NAMES`, `isPlanAgent`, `PLAN_FAMILY_NAMES`, `isPlanFamily`
|
||||
|
||||
Note: `CATEGORY_MODEL_REQUIREMENTS` is already in `src/shared/model-requirements.ts`. No move needed.
|
||||
|
||||
### New Files
|
||||
|
||||
| File | Responsibility | ~LOC |
|
||||
|------|---------------|------|
|
||||
| `default-categories.ts` | `DEFAULT_CATEGORIES`, `CATEGORY_DESCRIPTIONS` | ~40 |
|
||||
| `category-prompt-appends.ts` | 8 prompt append constants + `CATEGORY_PROMPT_APPENDS` record | ~300 (exempt: prompt text) |
|
||||
| `plan-agent-prompt.ts` | Plan agent system prompt constants + builder functions | ~250 (exempt: prompt text) |
|
||||
| `plan-agent-names.ts` | `PLAN_AGENT_NAMES`, `isPlanAgent`, `PLAN_FAMILY_NAMES`, `isPlanFamily` | ~30 |
|
||||
| `constants.ts` (updated) | Re-exports from all 4 files (backward compat) | ~5 |
|
||||
|
||||
### Commit 1: Extract category defaults and prompt appends
|
||||
|
||||
**Files changed**: 3 new + 1 modified
|
||||
- Create `src/tools/delegate-task/default-categories.ts`
|
||||
- Create `src/tools/delegate-task/category-prompt-appends.ts`
|
||||
- Modify `src/tools/delegate-task/constants.ts` (remove extracted code, add re-exports)
|
||||
|
||||
### Commit 2: Extract plan agent prompt and names
|
||||
|
||||
**Files changed**: 2 new + 1 modified
|
||||
- Create `src/tools/delegate-task/plan-agent-prompt.ts`
|
||||
- Create `src/tools/delegate-task/plan-agent-names.ts`
|
||||
- Modify `src/tools/delegate-task/constants.ts` (final: re-exports only)
|
||||
|
||||
### Local Validation
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test src/tools/delegate-task/
|
||||
bun run build
|
||||
```
|
||||
|
||||
## Phase 2: PR Creation
|
||||
|
||||
```bash
|
||||
git push -u origin refactor/split-delegate-task-constants
|
||||
gh pr create --base dev --title "refactor(delegate-task): split constants.ts into focused modules" --body-file /tmp/pr-body.md
|
||||
```
|
||||
|
||||
## Phase 3: Verify Loop
|
||||
|
||||
- **Gate A**: `gh pr checks --watch`
|
||||
- **Gate B**: `/review-work` (5-agent review)
|
||||
- **Gate C**: Wait for cubic-dev-ai[bot] "No issues found"
|
||||
|
||||
## Phase 4: Merge
|
||||
|
||||
```bash
|
||||
gh pr merge --squash --delete-branch
|
||||
git worktree remove ../omo-wt/refactor-delegate-task-constants
|
||||
```
|
||||
|
||||
## Import Update Strategy
|
||||
|
||||
No import updates needed. Backward compatibility preserved through:
|
||||
1. `constants.ts` re-exports everything from the 4 new files
|
||||
2. `index.ts` already does `export * from "./constants"` (unchanged)
|
||||
3. All external consumers import from `"../tools/delegate-task/constants"` or `"./constants"` -- both still work
|
||||
|
||||
### External Import Map (Verified -- NO CHANGES NEEDED)
|
||||
|
||||
| Consumer | Imports | Source Path |
|
||||
|----------|---------|-------------|
|
||||
| `src/agents/atlas/prompt-section-builder.ts` | `CATEGORY_DESCRIPTIONS` | `../../tools/delegate-task/constants` |
|
||||
| `src/agents/builtin-agents.ts` | `CATEGORY_DESCRIPTIONS` | `../tools/delegate-task/constants` |
|
||||
| `src/plugin/available-categories.ts` | `CATEGORY_DESCRIPTIONS` | `../tools/delegate-task/constants` |
|
||||
| `src/plugin-handlers/category-config-resolver.ts` | `DEFAULT_CATEGORIES` | `../tools/delegate-task/constants` |
|
||||
| `src/shared/merge-categories.ts` | `DEFAULT_CATEGORIES` | `../tools/delegate-task/constants` |
|
||||
| `src/shared/merge-categories.test.ts` | `DEFAULT_CATEGORIES` | `../tools/delegate-task/constants` |
|
||||
|
||||
### Internal Import Map (Within delegate-task/ -- NO CHANGES NEEDED)
|
||||
|
||||
| Consumer | Imports |
|
||||
|----------|---------|
|
||||
| `categories.ts` | `DEFAULT_CATEGORIES`, `CATEGORY_PROMPT_APPENDS` |
|
||||
| `tools.ts` | `CATEGORY_DESCRIPTIONS` |
|
||||
| `prompt-builder.ts` | `buildPlanAgentSystemPrepend`, `isPlanAgent` |
|
||||
| `subagent-resolver.ts` | `isPlanFamily` |
|
||||
| `sync-continuation.ts` | `isPlanFamily` |
|
||||
| `sync-prompt-sender.ts` | `isPlanFamily` |
|
||||
| `tools.test.ts` | `DEFAULT_CATEGORIES`, `CATEGORY_PROMPT_APPENDS`, `CATEGORY_DESCRIPTIONS`, `isPlanAgent`, `PLAN_AGENT_NAMES`, `isPlanFamily`, `PLAN_FAMILY_NAMES` |
|
||||
@@ -1,41 +0,0 @@
|
||||
# PR Title
|
||||
|
||||
```
|
||||
refactor(delegate-task): split constants.ts into focused modules
|
||||
```
|
||||
|
||||
# PR Body
|
||||
|
||||
## Summary
|
||||
|
||||
- Split the 654-line `src/tools/delegate-task/constants.ts` into 4 single-responsibility modules: `default-categories.ts`, `category-prompt-appends.ts`, `plan-agent-prompt.ts`, `plan-agent-names.ts`
|
||||
- `constants.ts` becomes a pure re-export barrel, preserving all existing import paths (`from "./constants"` and `from "./delegate-task"`)
|
||||
- Zero import changes across the codebase (6 external + 7 internal consumers verified)
|
||||
|
||||
## Motivation
|
||||
|
||||
`constants.ts` at 654 lines violates the project's 200 LOC soft limit (`modular-code-enforcement.md` rule) and bundles 4 unrelated responsibilities: category model configs, category prompt text, plan agent prompts, and plan agent name utilities.
|
||||
|
||||
## Changes
|
||||
|
||||
| New File | Responsibility | LOC |
|
||||
|----------|---------------|-----|
|
||||
| `default-categories.ts` | `DEFAULT_CATEGORIES`, `CATEGORY_DESCRIPTIONS` | ~25 |
|
||||
| `category-prompt-appends.ts` | 8 `*_PROMPT_APPEND` constants + `CATEGORY_PROMPT_APPENDS` record | ~300 (prompt-exempt) |
|
||||
| `plan-agent-prompt.ts` | Plan system prompt constants + `buildPlanAgentSystemPrepend()` | ~250 (prompt-exempt) |
|
||||
| `plan-agent-names.ts` | `PLAN_AGENT_NAMES`, `isPlanAgent`, `PLAN_FAMILY_NAMES`, `isPlanFamily` | ~30 |
|
||||
| `constants.ts` (updated) | 4-line re-export barrel | 4 |
|
||||
|
||||
## Backward Compatibility
|
||||
|
||||
All 13 consumers continue importing from `"./constants"` or `"../tools/delegate-task/constants"` with zero changes. The re-export chain: new modules -> `constants.ts` -> `index.ts` -> external consumers.
|
||||
|
||||
## Note on CATEGORY_MODEL_REQUIREMENTS
|
||||
|
||||
`CATEGORY_MODEL_REQUIREMENTS` already lives in `src/shared/model-requirements.ts`. No move needed. The AGENTS.md reference to it being in `constants.ts` is outdated.
|
||||
|
||||
## Testing
|
||||
|
||||
- `bun run typecheck` passes
|
||||
- `bun test src/tools/delegate-task/` passes (all existing tests untouched)
|
||||
- `bun run build` succeeds
|
||||
@@ -1,84 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## Gate A: CI (Blocking)
|
||||
|
||||
```bash
|
||||
gh pr checks --watch
|
||||
```
|
||||
|
||||
**Expected CI jobs** (from `ci.yml`):
|
||||
1. **Tests (split)**: mock-heavy isolated + batch `bun test`
|
||||
2. **Typecheck**: `bun run typecheck` (tsc --noEmit)
|
||||
3. **Build**: `bun run build`
|
||||
4. **Schema auto-commit**: If schema changes detected
|
||||
|
||||
**Likely failure points**: None. This is a pure refactor with re-exports. No runtime behavior changes.
|
||||
|
||||
**If CI fails**:
|
||||
- Typecheck error: Missing re-export or import cycle. Fix in the new modules, amend commit.
|
||||
- Test error: `tools.test.ts` imports all symbols from `"./constants"`. Re-export barrel must be complete.
|
||||
|
||||
## Gate B: review-work (5-Agent Review)
|
||||
|
||||
Invoke after CI passes:
|
||||
|
||||
```
|
||||
/review-work
|
||||
```
|
||||
|
||||
**5 parallel agents**:
|
||||
1. **Oracle (goal/constraint)**: Verify backward compat claim. Check all 13 import paths resolve.
|
||||
2. **Oracle (code quality)**: Verify single-responsibility per file, LOC limits, no catch-all violations.
|
||||
3. **Oracle (security)**: No security implications in this refactor.
|
||||
4. **QA (hands-on execution)**: Run `bun test src/tools/delegate-task/` and verify all pass.
|
||||
5. **Context miner**: Check no related open issues/PRs conflict.
|
||||
|
||||
**Expected verdict**: Pass. Pure structural refactor with no behavioral changes.
|
||||
|
||||
## Gate C: Cubic (External Bot)
|
||||
|
||||
Wait for `cubic-dev-ai[bot]` to post "No issues found" on the PR.
|
||||
|
||||
**If Cubic flags issues**: Likely false positives on "large number of new files". Address in PR comments if needed.
|
||||
|
||||
## Pre-Gate Local Validation (Before Push)
|
||||
|
||||
```bash
|
||||
# In worktree
|
||||
bun run typecheck
|
||||
bun test src/tools/delegate-task/
|
||||
bun run build
|
||||
|
||||
# Verify re-exports are complete
|
||||
bun -e "import * as c from './src/tools/delegate-task/constants'; console.log(Object.keys(c).sort().join('\n'))"
|
||||
```
|
||||
|
||||
Expected exports from constants.ts (13 total):
|
||||
- `ARTISTRY_CATEGORY_PROMPT_APPEND`
|
||||
- `CATEGORY_DESCRIPTIONS`
|
||||
- `CATEGORY_PROMPT_APPENDS`
|
||||
- `DEFAULT_CATEGORIES`
|
||||
- `DEEP_CATEGORY_PROMPT_APPEND`
|
||||
- `PLAN_AGENT_NAMES`
|
||||
- `PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS`
|
||||
- `PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS`
|
||||
- `PLAN_FAMILY_NAMES`
|
||||
- `QUICK_CATEGORY_PROMPT_APPEND`
|
||||
- `ULTRABRAIN_CATEGORY_PROMPT_APPEND`
|
||||
- `UNSPECIFIED_HIGH_CATEGORY_PROMPT_APPEND`
|
||||
- `UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND`
|
||||
- `VISUAL_CATEGORY_PROMPT_APPEND`
|
||||
- `WRITING_CATEGORY_PROMPT_APPEND`
|
||||
- `buildPlanAgentSkillsSection`
|
||||
- `buildPlanAgentSystemPrepend`
|
||||
- `isPlanAgent`
|
||||
- `isPlanFamily`
|
||||
|
||||
## Merge Strategy
|
||||
|
||||
```bash
|
||||
gh pr merge --squash --delete-branch
|
||||
git worktree remove ../omo-wt/refactor-delegate-task-constants
|
||||
```
|
||||
|
||||
Squash merge collapses the 2 atomic commits into 1 clean commit on dev.
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 181000, "total_duration_seconds": 181}
|
||||
@@ -1,10 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-3-without_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": false, "evidence": "git checkout -b only, no worktree"},
|
||||
{"text": "Uses 2+ commits for the multi-file refactor", "passed": false, "evidence": "Single atomic commit: 'refactor: split delegate-task constants and category model requirements'"},
|
||||
{"text": "Maintains backward compatibility via barrel re-export", "passed": true, "evidence": "Re-exports from new files, zero consumer changes"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": false, "evidence": "Only mentions typecheck/test/build. No review-work or Cubic."},
|
||||
{"text": "References actual src/tools/delegate-task/constants.ts", "passed": true, "evidence": "654 lines, detailed responsibility breakdown, full import maps"}
|
||||
]
|
||||
}
|
||||
@@ -1,342 +0,0 @@
|
||||
# Code Changes
|
||||
|
||||
## 1. NEW: `src/tools/delegate-task/default-categories.ts`
|
||||
|
||||
```typescript
|
||||
import type { CategoryConfig } from "../../config/schema"
|
||||
|
||||
export const DEFAULT_CATEGORIES: Record<string, CategoryConfig> = {
|
||||
"visual-engineering": { model: "google/gemini-3.1-pro", variant: "high" },
|
||||
ultrabrain: { model: "openai/gpt-5.4", variant: "xhigh" },
|
||||
deep: { model: "openai/gpt-5.3-codex", variant: "medium" },
|
||||
artistry: { model: "google/gemini-3.1-pro", variant: "high" },
|
||||
quick: { model: "anthropic/claude-haiku-4-5" },
|
||||
"unspecified-low": { model: "anthropic/claude-sonnet-4-6" },
|
||||
"unspecified-high": { model: "anthropic/claude-opus-4-6", variant: "max" },
|
||||
writing: { model: "kimi-for-coding/k2p5" },
|
||||
}
|
||||
```
|
||||
|
||||
## 2. NEW: `src/tools/delegate-task/category-descriptions.ts`
|
||||
|
||||
```typescript
|
||||
export const CATEGORY_DESCRIPTIONS: Record<string, string> = {
|
||||
"visual-engineering": "Frontend, UI/UX, design, styling, animation",
|
||||
ultrabrain: "Use ONLY for genuinely hard, logic-heavy tasks. Give clear goals only, not step-by-step instructions.",
|
||||
deep: "Goal-oriented autonomous problem-solving. Thorough research before action. For hairy problems requiring deep understanding.",
|
||||
artistry: "Complex problem-solving with unconventional, creative approaches - beyond standard patterns",
|
||||
quick: "Trivial tasks - single file changes, typo fixes, simple modifications",
|
||||
"unspecified-low": "Tasks that don't fit other categories, low effort required",
|
||||
"unspecified-high": "Tasks that don't fit other categories, high effort required",
|
||||
writing: "Documentation, prose, technical writing",
|
||||
}
|
||||
```
|
||||
|
||||
## 3. NEW: `src/tools/delegate-task/category-prompt-appends.ts`
|
||||
|
||||
```typescript
|
||||
export const VISUAL_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on VISUAL/UI tasks.
|
||||
...
|
||||
</Category_Context>`
|
||||
|
||||
export const ULTRABRAIN_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on DEEP LOGICAL REASONING / COMPLEX ARCHITECTURE tasks.
|
||||
...
|
||||
</Category_Context>`
|
||||
|
||||
export const ARTISTRY_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on HIGHLY CREATIVE / ARTISTIC tasks.
|
||||
...
|
||||
</Category_Context>`
|
||||
|
||||
export const QUICK_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on SMALL / QUICK tasks.
|
||||
...
|
||||
</Caller_Warning>`
|
||||
|
||||
export const UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on tasks that don't fit specific categories but require moderate effort.
|
||||
...
|
||||
</Caller_Warning>`
|
||||
|
||||
export const UNSPECIFIED_HIGH_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on tasks that don't fit specific categories but require substantial effort.
|
||||
...
|
||||
</Category_Context>`
|
||||
|
||||
export const WRITING_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on WRITING / PROSE tasks.
|
||||
...
|
||||
</Category_Context>`
|
||||
|
||||
export const DEEP_CATEGORY_PROMPT_APPEND = `<Category_Context>
|
||||
You are working on GOAL-ORIENTED AUTONOMOUS tasks.
|
||||
...
|
||||
</Category_Context>`
|
||||
|
||||
export const CATEGORY_PROMPT_APPENDS: Record<string, string> = {
|
||||
"visual-engineering": VISUAL_CATEGORY_PROMPT_APPEND,
|
||||
ultrabrain: ULTRABRAIN_CATEGORY_PROMPT_APPEND,
|
||||
deep: DEEP_CATEGORY_PROMPT_APPEND,
|
||||
artistry: ARTISTRY_CATEGORY_PROMPT_APPEND,
|
||||
quick: QUICK_CATEGORY_PROMPT_APPEND,
|
||||
"unspecified-low": UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND,
|
||||
"unspecified-high": UNSPECIFIED_HIGH_CATEGORY_PROMPT_APPEND,
|
||||
writing: WRITING_CATEGORY_PROMPT_APPEND,
|
||||
}
|
||||
```
|
||||
|
||||
> Note: Each `*_CATEGORY_PROMPT_APPEND` contains the full template string from the original. Abbreviated with `...` here for readability. The actual code would contain the complete unmodified prompt text.
|
||||
|
||||
## 4. NEW: `src/tools/delegate-task/plan-agent-prompt.ts`
|
||||
|
||||
```typescript
|
||||
import type {
|
||||
AvailableCategory,
|
||||
AvailableSkill,
|
||||
} from "../../agents/dynamic-agent-prompt-builder"
|
||||
import { truncateDescription } from "../../shared/truncate-description"
|
||||
|
||||
export const PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS = `<system>
|
||||
BEFORE you begin planning, you MUST first understand the user's request deeply.
|
||||
...
|
||||
</CRITICAL_REQUIREMENT_DEPENDENCY_PARALLEL_EXECUTION_CATEGORY_SKILLS>
|
||||
|
||||
<FINAL_OUTPUT_FOR_CALLER>
|
||||
...
|
||||
</FINAL_OUTPUT_FOR_CALLER>
|
||||
|
||||
`
|
||||
|
||||
export const PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS = `### REQUIRED OUTPUT FORMAT
|
||||
...
|
||||
`
|
||||
|
||||
function renderPlanAgentCategoryRows(categories: AvailableCategory[]): string[] {
|
||||
const sorted = [...categories].sort((a, b) => a.name.localeCompare(b.name))
|
||||
return sorted.map((category) => {
|
||||
const bestFor = category.description || category.name
|
||||
const model = category.model || ""
|
||||
return `| \`${category.name}\` | ${bestFor} | ${model} |`
|
||||
})
|
||||
}
|
||||
|
||||
function renderPlanAgentSkillRows(skills: AvailableSkill[]): string[] {
|
||||
const sorted = [...skills].sort((a, b) => a.name.localeCompare(b.name))
|
||||
return sorted.map((skill) => {
|
||||
const domain = truncateDescription(skill.description).trim() || skill.name
|
||||
return `| \`${skill.name}\` | ${domain} |`
|
||||
})
|
||||
}
|
||||
|
||||
export function buildPlanAgentSkillsSection(
|
||||
categories: AvailableCategory[] = [],
|
||||
skills: AvailableSkill[] = []
|
||||
): string {
|
||||
const categoryRows = renderPlanAgentCategoryRows(categories)
|
||||
const skillRows = renderPlanAgentSkillRows(skills)
|
||||
|
||||
return `### AVAILABLE CATEGORIES
|
||||
|
||||
| Category | Best For | Model |
|
||||
|----------|----------|-------|
|
||||
${categoryRows.join("\n")}
|
||||
|
||||
### AVAILABLE SKILLS (ALWAYS EVALUATE ALL)
|
||||
|
||||
Skills inject specialized expertise into the delegated agent.
|
||||
YOU MUST evaluate EVERY skill and justify inclusions/omissions.
|
||||
|
||||
| Skill | Domain |
|
||||
|-------|--------|
|
||||
${skillRows.join("\n")}`
|
||||
}
|
||||
|
||||
export function buildPlanAgentSystemPrepend(
|
||||
categories: AvailableCategory[] = [],
|
||||
skills: AvailableSkill[] = []
|
||||
): string {
|
||||
return [
|
||||
PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS,
|
||||
buildPlanAgentSkillsSection(categories, skills),
|
||||
PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS,
|
||||
].join("\n\n")
|
||||
}
|
||||
```
|
||||
|
||||
> Note: Template strings abbreviated with `...`. Full unmodified content in the actual file.
|
||||
|
||||
## 5. NEW: `src/tools/delegate-task/plan-agent-identity.ts`
|
||||
|
||||
```typescript
|
||||
/**
|
||||
* List of agent names that should be treated as plan agents (receive plan system prompt).
|
||||
* Case-insensitive matching is used.
|
||||
*/
|
||||
export const PLAN_AGENT_NAMES = ["plan"]
|
||||
|
||||
/**
|
||||
* Check if the given agent name is a plan agent (receives plan system prompt).
|
||||
*/
|
||||
export function isPlanAgent(agentName: string | undefined): boolean {
|
||||
if (!agentName) return false
|
||||
const lowerName = agentName.toLowerCase().trim()
|
||||
return PLAN_AGENT_NAMES.some(name => lowerName === name || lowerName.includes(name))
|
||||
}
|
||||
|
||||
/**
|
||||
* Plan family: plan + prometheus. Shares mutual delegation blocking and task tool permission.
|
||||
* Does NOT share system prompt (only isPlanAgent controls that).
|
||||
*/
|
||||
export const PLAN_FAMILY_NAMES = ["plan", "prometheus"]
|
||||
|
||||
/**
|
||||
* Check if the given agent belongs to the plan family (blocking + task permission).
|
||||
*/
|
||||
export function isPlanFamily(category: string): boolean
|
||||
export function isPlanFamily(category: string | undefined): boolean
|
||||
export function isPlanFamily(category: string | undefined): boolean {
|
||||
if (!category) return false
|
||||
const lowerCategory = category.toLowerCase().trim()
|
||||
return PLAN_FAMILY_NAMES.some(
|
||||
(name) => lowerCategory === name || lowerCategory.includes(name)
|
||||
)
|
||||
}
|
||||
```
|
||||
|
||||
## 6. MODIFIED: `src/tools/delegate-task/constants.ts` (barrel re-export)
|
||||
|
||||
```typescript
|
||||
export { DEFAULT_CATEGORIES } from "./default-categories"
|
||||
export { CATEGORY_DESCRIPTIONS } from "./category-descriptions"
|
||||
export {
|
||||
VISUAL_CATEGORY_PROMPT_APPEND,
|
||||
ULTRABRAIN_CATEGORY_PROMPT_APPEND,
|
||||
ARTISTRY_CATEGORY_PROMPT_APPEND,
|
||||
QUICK_CATEGORY_PROMPT_APPEND,
|
||||
UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND,
|
||||
UNSPECIFIED_HIGH_CATEGORY_PROMPT_APPEND,
|
||||
WRITING_CATEGORY_PROMPT_APPEND,
|
||||
DEEP_CATEGORY_PROMPT_APPEND,
|
||||
CATEGORY_PROMPT_APPENDS,
|
||||
} from "./category-prompt-appends"
|
||||
export {
|
||||
PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS,
|
||||
PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS,
|
||||
buildPlanAgentSkillsSection,
|
||||
buildPlanAgentSystemPrepend,
|
||||
} from "./plan-agent-prompt"
|
||||
export {
|
||||
PLAN_AGENT_NAMES,
|
||||
isPlanAgent,
|
||||
PLAN_FAMILY_NAMES,
|
||||
isPlanFamily,
|
||||
} from "./plan-agent-identity"
|
||||
```
|
||||
|
||||
## 7. NEW: `src/shared/category-model-requirements.ts`
|
||||
|
||||
```typescript
|
||||
import type { ModelRequirement } from "./model-requirements"
|
||||
|
||||
export const CATEGORY_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
|
||||
"visual-engineering": {
|
||||
fallbackChain: [
|
||||
{
|
||||
providers: ["google", "github-copilot", "opencode"],
|
||||
model: "gemini-3.1-pro",
|
||||
variant: "high",
|
||||
},
|
||||
{ providers: ["zai-coding-plan", "opencode"], model: "glm-5" },
|
||||
{
|
||||
providers: ["anthropic", "github-copilot", "opencode"],
|
||||
model: "claude-opus-4-6",
|
||||
variant: "max",
|
||||
},
|
||||
{ providers: ["opencode-go"], model: "glm-5" },
|
||||
{ providers: ["kimi-for-coding"], model: "k2p5" },
|
||||
],
|
||||
},
|
||||
ultrabrain: {
|
||||
fallbackChain: [
|
||||
// ... full content from original
|
||||
],
|
||||
},
|
||||
deep: {
|
||||
fallbackChain: [
|
||||
// ... full content from original
|
||||
],
|
||||
requiresModel: "gpt-5.3-codex",
|
||||
},
|
||||
artistry: {
|
||||
fallbackChain: [
|
||||
// ... full content from original
|
||||
],
|
||||
requiresModel: "gemini-3.1-pro",
|
||||
},
|
||||
quick: {
|
||||
fallbackChain: [
|
||||
// ... full content from original
|
||||
],
|
||||
},
|
||||
"unspecified-low": {
|
||||
fallbackChain: [
|
||||
// ... full content from original
|
||||
],
|
||||
},
|
||||
"unspecified-high": {
|
||||
fallbackChain: [
|
||||
// ... full content from original
|
||||
],
|
||||
},
|
||||
writing: {
|
||||
fallbackChain: [
|
||||
// ... full content from original
|
||||
],
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
> Note: Each category's `fallbackChain` contains the exact same entries as the original `model-requirements.ts`. Abbreviated here.
|
||||
|
||||
## 8. MODIFIED: `src/shared/model-requirements.ts`
|
||||
|
||||
**Remove** `CATEGORY_MODEL_REQUIREMENTS` from the file body. **Add** re-export at the end:
|
||||
|
||||
```typescript
|
||||
export type FallbackEntry = {
|
||||
providers: string[];
|
||||
model: string;
|
||||
variant?: string;
|
||||
};
|
||||
|
||||
export type ModelRequirement = {
|
||||
fallbackChain: FallbackEntry[];
|
||||
variant?: string;
|
||||
requiresModel?: string;
|
||||
requiresAnyModel?: boolean;
|
||||
requiresProvider?: string[];
|
||||
};
|
||||
|
||||
export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
|
||||
// ... unchanged, full agent entries stay here
|
||||
};
|
||||
|
||||
export { CATEGORY_MODEL_REQUIREMENTS } from "./category-model-requirements"
|
||||
```
|
||||
|
||||
## Summary of Changes
|
||||
|
||||
| File | Lines Before | Lines After | Action |
|
||||
|------|-------------|-------------|--------|
|
||||
| `constants.ts` | 654 | ~25 | Rewrite as barrel re-export |
|
||||
| `default-categories.ts` | - | ~15 | **NEW** |
|
||||
| `category-descriptions.ts` | - | ~12 | **NEW** |
|
||||
| `category-prompt-appends.ts` | - | ~280 | **NEW** (mostly exempt prompt text) |
|
||||
| `plan-agent-prompt.ts` | - | ~270 | **NEW** (mostly exempt prompt text) |
|
||||
| `plan-agent-identity.ts` | - | ~35 | **NEW** |
|
||||
| `model-requirements.ts` | 311 | ~165 | Remove CATEGORY_MODEL_REQUIREMENTS |
|
||||
| `category-model-requirements.ts` | - | ~150 | **NEW** |
|
||||
|
||||
**Zero consumer files modified.** Backward compatibility maintained through barrel re-exports.
|
||||
@@ -1,131 +0,0 @@
|
||||
# Execution Plan: Refactor constants.ts
|
||||
|
||||
## Context
|
||||
|
||||
`src/tools/delegate-task/constants.ts` is **654 lines** with 6 distinct responsibilities. Violates the 200 LOC modular-code-enforcement rule. `CATEGORY_MODEL_REQUIREMENTS` is actually in `src/shared/model-requirements.ts` (311 lines, also violating 200 LOC), not in `constants.ts`.
|
||||
|
||||
## Pre-Flight Analysis
|
||||
|
||||
### Current `constants.ts` responsibilities:
|
||||
1. **Category prompt appends** (8 template strings, ~274 LOC prompt text)
|
||||
2. **DEFAULT_CATEGORIES** (Record<string, CategoryConfig>, ~10 LOC)
|
||||
3. **CATEGORY_PROMPT_APPENDS** (map of category->prompt, ~10 LOC)
|
||||
4. **CATEGORY_DESCRIPTIONS** (map of category->description, ~10 LOC)
|
||||
5. **Plan agent prompts** (2 template strings + 4 builder functions, ~250 LOC prompt text)
|
||||
6. **Plan agent identity utils** (`isPlanAgent`, `isPlanFamily`, ~30 LOC)
|
||||
|
||||
### Current `model-requirements.ts` responsibilities:
|
||||
1. Types (`FallbackEntry`, `ModelRequirement`)
|
||||
2. `AGENT_MODEL_REQUIREMENTS` (~146 LOC)
|
||||
3. `CATEGORY_MODEL_REQUIREMENTS` (~148 LOC)
|
||||
|
||||
### Import dependency map for `constants.ts`:
|
||||
|
||||
**Internal consumers (within delegate-task/):**
|
||||
| File | Imports |
|
||||
|------|---------|
|
||||
| `categories.ts` | `DEFAULT_CATEGORIES`, `CATEGORY_PROMPT_APPENDS` |
|
||||
| `tools.ts` | `CATEGORY_DESCRIPTIONS` |
|
||||
| `tools.test.ts` | `DEFAULT_CATEGORIES`, `CATEGORY_PROMPT_APPENDS`, `CATEGORY_DESCRIPTIONS`, `isPlanAgent`, `PLAN_AGENT_NAMES`, `isPlanFamily`, `PLAN_FAMILY_NAMES` |
|
||||
| `prompt-builder.ts` | `buildPlanAgentSystemPrepend`, `isPlanAgent` |
|
||||
| `subagent-resolver.ts` | `isPlanFamily` |
|
||||
| `sync-continuation.ts` | `isPlanFamily` |
|
||||
| `sync-prompt-sender.ts` | `isPlanFamily` |
|
||||
| `index.ts` | `export * from "./constants"` (barrel) |
|
||||
|
||||
**External consumers (import from `"../../tools/delegate-task/constants"`):**
|
||||
| File | Imports |
|
||||
|------|---------|
|
||||
| `agents/atlas/prompt-section-builder.ts` | `CATEGORY_DESCRIPTIONS` |
|
||||
| `agents/builtin-agents.ts` | `CATEGORY_DESCRIPTIONS` |
|
||||
| `plugin/available-categories.ts` | `CATEGORY_DESCRIPTIONS` |
|
||||
| `plugin-handlers/category-config-resolver.ts` | `DEFAULT_CATEGORIES` |
|
||||
| `shared/merge-categories.ts` | `DEFAULT_CATEGORIES` |
|
||||
| `shared/merge-categories.test.ts` | `DEFAULT_CATEGORIES` |
|
||||
|
||||
**External consumers of `CATEGORY_MODEL_REQUIREMENTS`:**
|
||||
| File | Import path |
|
||||
|------|-------------|
|
||||
| `tools/delegate-task/categories.ts` | `../../shared/model-requirements` |
|
||||
|
||||
## Step-by-Step Execution
|
||||
|
||||
### Step 1: Create branch
|
||||
```bash
|
||||
git checkout -b refactor/split-category-constants dev
|
||||
```
|
||||
|
||||
### Step 2: Split `constants.ts` into 5 focused files
|
||||
|
||||
#### 2a. Create `default-categories.ts`
|
||||
- Move `DEFAULT_CATEGORIES` record
|
||||
- Import `CategoryConfig` type from config schema
|
||||
- ~15 LOC
|
||||
|
||||
#### 2b. Create `category-descriptions.ts`
|
||||
- Move `CATEGORY_DESCRIPTIONS` record
|
||||
- No dependencies
|
||||
- ~12 LOC
|
||||
|
||||
#### 2c. Create `category-prompt-appends.ts`
|
||||
- Move all 8 `*_CATEGORY_PROMPT_APPEND` template string constants
|
||||
- Move `CATEGORY_PROMPT_APPENDS` mapping record
|
||||
- No dependencies (all self-contained template strings)
|
||||
- ~280 LOC (mostly prompt text, exempt from 200 LOC per modular-code-enforcement)
|
||||
|
||||
#### 2d. Create `plan-agent-prompt.ts`
|
||||
- Move `PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS`
|
||||
- Move `PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS`
|
||||
- Move `renderPlanAgentCategoryRows()`, `renderPlanAgentSkillRows()`
|
||||
- Move `buildPlanAgentSkillsSection()`, `buildPlanAgentSystemPrepend()`
|
||||
- Imports: `AvailableCategory`, `AvailableSkill` from agents, `truncateDescription` from shared
|
||||
- ~270 LOC (mostly prompt text, exempt)
|
||||
|
||||
#### 2e. Create `plan-agent-identity.ts`
|
||||
- Move `PLAN_AGENT_NAMES`, `isPlanAgent()`
|
||||
- Move `PLAN_FAMILY_NAMES`, `isPlanFamily()`
|
||||
- No dependencies
|
||||
- ~35 LOC
|
||||
|
||||
### Step 3: Convert `constants.ts` to barrel re-export file
|
||||
Replace entire contents with re-exports from the 5 new files. This maintains 100% backward compatibility for all existing importers.
|
||||
|
||||
### Step 4: Split `model-requirements.ts`
|
||||
|
||||
#### 4a. Create `src/shared/category-model-requirements.ts`
|
||||
- Move `CATEGORY_MODEL_REQUIREMENTS` record
|
||||
- Import `ModelRequirement` type from `./model-requirements`
|
||||
- ~150 LOC
|
||||
|
||||
#### 4b. Update `model-requirements.ts`
|
||||
- Remove `CATEGORY_MODEL_REQUIREMENTS`
|
||||
- Add re-export: `export { CATEGORY_MODEL_REQUIREMENTS } from "./category-model-requirements"`
|
||||
- Keep types (`FallbackEntry`, `ModelRequirement`) and `AGENT_MODEL_REQUIREMENTS`
|
||||
- ~165 LOC (now under 200)
|
||||
|
||||
### Step 5: Verify no import breakage
|
||||
- Run `bun run typecheck` to confirm all imports resolve
|
||||
- Run `bun test` to confirm no behavioral regressions
|
||||
- Run `bun run build` to confirm build succeeds
|
||||
|
||||
### Step 6: Verify LSP diagnostics clean
|
||||
- Check `lsp_diagnostics` on all new and modified files
|
||||
|
||||
### Step 7: Commit and create PR
|
||||
- Single atomic commit: `refactor: split delegate-task constants and category model requirements into focused modules`
|
||||
- Create PR with description
|
||||
|
||||
## Files Modified
|
||||
|
||||
| File | Action |
|
||||
|------|--------|
|
||||
| `src/tools/delegate-task/constants.ts` | Rewrite as barrel re-export |
|
||||
| `src/tools/delegate-task/default-categories.ts` | **NEW** |
|
||||
| `src/tools/delegate-task/category-descriptions.ts` | **NEW** |
|
||||
| `src/tools/delegate-task/category-prompt-appends.ts` | **NEW** |
|
||||
| `src/tools/delegate-task/plan-agent-prompt.ts` | **NEW** |
|
||||
| `src/tools/delegate-task/plan-agent-identity.ts` | **NEW** |
|
||||
| `src/shared/model-requirements.ts` | Remove CATEGORY_MODEL_REQUIREMENTS, add re-export |
|
||||
| `src/shared/category-model-requirements.ts` | **NEW** |
|
||||
|
||||
**Zero changes to any consumer files.** All existing imports work via barrel re-exports.
|
||||
@@ -1,39 +0,0 @@
|
||||
## Summary
|
||||
|
||||
- Split `src/tools/delegate-task/constants.ts` (654 LOC, 6 responsibilities) into 5 focused modules: `default-categories.ts`, `category-descriptions.ts`, `category-prompt-appends.ts`, `plan-agent-prompt.ts`, `plan-agent-identity.ts`
|
||||
- Extract `CATEGORY_MODEL_REQUIREMENTS` from `src/shared/model-requirements.ts` (311 LOC) into `category-model-requirements.ts`, bringing both files under the 200 LOC limit
|
||||
- Convert original files to barrel re-exports for 100% backward compatibility (zero consumer changes)
|
||||
|
||||
## Motivation
|
||||
|
||||
Both files violate the project's 200 LOC modular-code-enforcement rule. `constants.ts` mixed 6 unrelated responsibilities (category configs, prompt templates, plan agent builders, identity utils). `model-requirements.ts` mixed agent and category model requirements.
|
||||
|
||||
## Changes
|
||||
|
||||
### `src/tools/delegate-task/`
|
||||
| New File | Responsibility |
|
||||
|----------|---------------|
|
||||
| `default-categories.ts` | `DEFAULT_CATEGORIES` record |
|
||||
| `category-descriptions.ts` | `CATEGORY_DESCRIPTIONS` record |
|
||||
| `category-prompt-appends.ts` | 8 prompt template constants + `CATEGORY_PROMPT_APPENDS` map |
|
||||
| `plan-agent-prompt.ts` | Plan agent system prompts + builder functions |
|
||||
| `plan-agent-identity.ts` | `isPlanAgent`, `isPlanFamily` + name lists |
|
||||
|
||||
`constants.ts` is now a barrel re-export file (~25 LOC).
|
||||
|
||||
### `src/shared/`
|
||||
| New File | Responsibility |
|
||||
|----------|---------------|
|
||||
| `category-model-requirements.ts` | `CATEGORY_MODEL_REQUIREMENTS` record |
|
||||
|
||||
`model-requirements.ts` retains types + `AGENT_MODEL_REQUIREMENTS` and re-exports `CATEGORY_MODEL_REQUIREMENTS`.
|
||||
|
||||
## Backward Compatibility
|
||||
|
||||
All existing import paths (`from "./constants"`, `from "../../tools/delegate-task/constants"`, `from "../../shared/model-requirements"`) continue to work unchanged. Zero consumer files modified.
|
||||
|
||||
## Testing
|
||||
|
||||
- `bun run typecheck` passes
|
||||
- `bun test` passes (existing `tools.test.ts` validates all re-exported symbols)
|
||||
- `bun run build` succeeds
|
||||
@@ -1,128 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## 1. Type Safety
|
||||
|
||||
### 1a. LSP diagnostics on all new files
|
||||
```
|
||||
lsp_diagnostics("src/tools/delegate-task/default-categories.ts")
|
||||
lsp_diagnostics("src/tools/delegate-task/category-descriptions.ts")
|
||||
lsp_diagnostics("src/tools/delegate-task/category-prompt-appends.ts")
|
||||
lsp_diagnostics("src/tools/delegate-task/plan-agent-prompt.ts")
|
||||
lsp_diagnostics("src/tools/delegate-task/plan-agent-identity.ts")
|
||||
lsp_diagnostics("src/shared/category-model-requirements.ts")
|
||||
```
|
||||
|
||||
### 1b. LSP diagnostics on modified files
|
||||
```
|
||||
lsp_diagnostics("src/tools/delegate-task/constants.ts")
|
||||
lsp_diagnostics("src/shared/model-requirements.ts")
|
||||
```
|
||||
|
||||
### 1c. Full typecheck
|
||||
```bash
|
||||
bun run typecheck
|
||||
```
|
||||
Expected: 0 errors. This confirms all 14 consumer files (8 internal + 6 external) resolve their imports correctly through the barrel re-exports.
|
||||
|
||||
## 2. Behavioral Regression
|
||||
|
||||
### 2a. Existing test suite
|
||||
```bash
|
||||
bun test src/tools/delegate-task/tools.test.ts
|
||||
```
|
||||
This test file imports `DEFAULT_CATEGORIES`, `CATEGORY_PROMPT_APPENDS`, `CATEGORY_DESCRIPTIONS`, `isPlanAgent`, `PLAN_AGENT_NAMES`, `isPlanFamily`, `PLAN_FAMILY_NAMES` from `./constants`. If the barrel re-export is correct, all these tests pass unchanged.
|
||||
|
||||
### 2b. Category resolver tests
|
||||
```bash
|
||||
bun test src/tools/delegate-task/category-resolver.test.ts
|
||||
```
|
||||
This exercises `resolveCategoryConfig()` which imports `DEFAULT_CATEGORIES` and `CATEGORY_PROMPT_APPENDS` from `./constants` and `CATEGORY_MODEL_REQUIREMENTS` from `../../shared/model-requirements`.
|
||||
|
||||
### 2c. Model selection tests
|
||||
```bash
|
||||
bun test src/tools/delegate-task/model-selection.test.ts
|
||||
```
|
||||
|
||||
### 2d. Merge categories tests
|
||||
```bash
|
||||
bun test src/shared/merge-categories.test.ts
|
||||
```
|
||||
Imports `DEFAULT_CATEGORIES` from `../tools/delegate-task/constants` (external path).
|
||||
|
||||
### 2e. Full test suite
|
||||
```bash
|
||||
bun test
|
||||
```
|
||||
|
||||
## 3. Build Verification
|
||||
|
||||
```bash
|
||||
bun run build
|
||||
```
|
||||
Confirms ESM bundle + declarations emit correctly with the new file structure.
|
||||
|
||||
## 4. Export Completeness Verification
|
||||
|
||||
### 4a. Verify `constants.ts` re-exports match original exports
|
||||
Cross-check that every symbol previously exported from `constants.ts` is still exported. The original file exported these symbols:
|
||||
- `VISUAL_CATEGORY_PROMPT_APPEND`
|
||||
- `ULTRABRAIN_CATEGORY_PROMPT_APPEND`
|
||||
- `ARTISTRY_CATEGORY_PROMPT_APPEND`
|
||||
- `QUICK_CATEGORY_PROMPT_APPEND`
|
||||
- `UNSPECIFIED_LOW_CATEGORY_PROMPT_APPEND`
|
||||
- `UNSPECIFIED_HIGH_CATEGORY_PROMPT_APPEND`
|
||||
- `WRITING_CATEGORY_PROMPT_APPEND`
|
||||
- `DEEP_CATEGORY_PROMPT_APPEND`
|
||||
- `DEFAULT_CATEGORIES`
|
||||
- `CATEGORY_PROMPT_APPENDS`
|
||||
- `CATEGORY_DESCRIPTIONS`
|
||||
- `PLAN_AGENT_SYSTEM_PREPEND_STATIC_BEFORE_SKILLS`
|
||||
- `PLAN_AGENT_SYSTEM_PREPEND_STATIC_AFTER_SKILLS`
|
||||
- `buildPlanAgentSkillsSection`
|
||||
- `buildPlanAgentSystemPrepend`
|
||||
- `PLAN_AGENT_NAMES`
|
||||
- `isPlanAgent`
|
||||
- `PLAN_FAMILY_NAMES`
|
||||
- `isPlanFamily`
|
||||
|
||||
All 19 must be re-exported from the barrel.
|
||||
|
||||
### 4b. Verify `model-requirements.ts` re-exports match original exports
|
||||
Original exports: `FallbackEntry`, `ModelRequirement`, `AGENT_MODEL_REQUIREMENTS`, `CATEGORY_MODEL_REQUIREMENTS`. All 4 must still be available.
|
||||
|
||||
## 5. LOC Compliance Check
|
||||
|
||||
Verify each new file is under 200 LOC (excluding prompt template text per modular-code-enforcement rule):
|
||||
|
||||
| File | Expected Total LOC | Non-prompt LOC | Compliant? |
|
||||
|------|-------------------|----------------|------------|
|
||||
| `default-categories.ts` | ~15 | ~15 | Yes |
|
||||
| `category-descriptions.ts` | ~12 | ~12 | Yes |
|
||||
| `category-prompt-appends.ts` | ~280 | ~15 | Yes (prompt exempt) |
|
||||
| `plan-agent-prompt.ts` | ~270 | ~40 | Yes (prompt exempt) |
|
||||
| `plan-agent-identity.ts` | ~35 | ~35 | Yes |
|
||||
| `category-model-requirements.ts` | ~150 | ~150 | Yes |
|
||||
| `model-requirements.ts` (after) | ~165 | ~165 | Yes |
|
||||
| `constants.ts` (after) | ~25 | ~25 | Yes |
|
||||
|
||||
## 6. Consumer Impact Matrix
|
||||
|
||||
Verify zero consumer files need changes:
|
||||
|
||||
| Consumer File | Import Path | Should Still Work? |
|
||||
|--------------|-------------|-------------------|
|
||||
| `delegate-task/categories.ts` | `./constants` | Yes (barrel) |
|
||||
| `delegate-task/tools.ts` | `./constants` | Yes (barrel) |
|
||||
| `delegate-task/tools.test.ts` | `./constants` | Yes (barrel) |
|
||||
| `delegate-task/prompt-builder.ts` | `./constants` | Yes (barrel) |
|
||||
| `delegate-task/subagent-resolver.ts` | `./constants` | Yes (barrel) |
|
||||
| `delegate-task/sync-continuation.ts` | `./constants` | Yes (barrel) |
|
||||
| `delegate-task/sync-prompt-sender.ts` | `./constants` | Yes (barrel) |
|
||||
| `delegate-task/index.ts` | `./constants` | Yes (barrel) |
|
||||
| `agents/atlas/prompt-section-builder.ts` | `../../tools/delegate-task/constants` | Yes (barrel) |
|
||||
| `agents/builtin-agents.ts` | `../tools/delegate-task/constants` | Yes (barrel) |
|
||||
| `plugin/available-categories.ts` | `../tools/delegate-task/constants` | Yes (barrel) |
|
||||
| `plugin-handlers/category-config-resolver.ts` | `../tools/delegate-task/constants` | Yes (barrel) |
|
||||
| `shared/merge-categories.ts` | `../tools/delegate-task/constants` | Yes (barrel) |
|
||||
| `shared/merge-categories.test.ts` | `../tools/delegate-task/constants` | Yes (barrel) |
|
||||
| `delegate-task/categories.ts` | `../../shared/model-requirements` | Yes (re-export) |
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 229000, "total_duration_seconds": 229}
|
||||
@@ -1,32 +0,0 @@
|
||||
{
|
||||
"eval_id": 4,
|
||||
"eval_name": "new-mcp-arxiv-casual",
|
||||
"prompt": "implement issue #100 - we need to add a new built-in MCP for arxiv paper search. just the basic search endpoint, nothing fancy. pr it",
|
||||
"assertions": [
|
||||
{
|
||||
"id": "worktree-isolation",
|
||||
"text": "Plan uses git worktree in a sibling directory",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "follows-mcp-pattern",
|
||||
"text": "New MCP follows existing pattern from src/mcp/ (websearch, context7, grep_app)",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "three-gates",
|
||||
"text": "Verification loop includes all 3 gates",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "pr-targets-dev",
|
||||
"text": "PR targets dev branch",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "local-validation",
|
||||
"text": "Runs local checks before pushing",
|
||||
"type": "manual"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,10 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-4-with_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": true, "evidence": "../omo-wt/feat/arxiv-mcp"},
|
||||
{"text": "New MCP follows existing pattern from src/mcp/", "passed": true, "evidence": "Follows context7.ts and grep-app.ts static export pattern"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": true, "evidence": "Gate A (CI), Gate B (review-work 5 agents), Gate C (Cubic)"},
|
||||
{"text": "PR targets dev branch", "passed": true, "evidence": "--base dev"},
|
||||
{"text": "Runs local checks before pushing", "passed": true, "evidence": "bun run typecheck, bun test src/mcp/, bun run build"}
|
||||
]
|
||||
}
|
||||
@@ -1,143 +0,0 @@
|
||||
# Code Changes: Issue #100 - Built-in arXiv MCP
|
||||
|
||||
## 1. NEW FILE: `src/mcp/arxiv.ts`
|
||||
|
||||
```typescript
|
||||
export const arxiv = {
|
||||
type: "remote" as const,
|
||||
url: "https://mcp.arxiv.org",
|
||||
enabled: true,
|
||||
oauth: false as const,
|
||||
}
|
||||
```
|
||||
|
||||
Pattern: identical to `grep-app.ts` (static export, no auth, no config factory needed).
|
||||
|
||||
## 2. MODIFY: `src/mcp/types.ts`
|
||||
|
||||
```typescript
|
||||
import { z } from "zod"
|
||||
|
||||
export const McpNameSchema = z.enum(["websearch", "context7", "grep_app", "arxiv"])
|
||||
|
||||
export type McpName = z.infer<typeof McpNameSchema>
|
||||
|
||||
export const AnyMcpNameSchema = z.string().min(1)
|
||||
|
||||
export type AnyMcpName = z.infer<typeof AnyMcpNameSchema>
|
||||
```
|
||||
|
||||
Change: add `"arxiv"` to `McpNameSchema` enum.
|
||||
|
||||
## 3. MODIFY: `src/mcp/index.ts`
|
||||
|
||||
```typescript
|
||||
import { createWebsearchConfig } from "./websearch"
|
||||
import { context7 } from "./context7"
|
||||
import { grep_app } from "./grep-app"
|
||||
import { arxiv } from "./arxiv"
|
||||
import type { OhMyOpenCodeConfig } from "../config/schema"
|
||||
|
||||
export { McpNameSchema, type McpName } from "./types"
|
||||
|
||||
type RemoteMcpConfig = {
|
||||
type: "remote"
|
||||
url: string
|
||||
enabled: boolean
|
||||
headers?: Record<string, string>
|
||||
oauth?: false
|
||||
}
|
||||
|
||||
export function createBuiltinMcps(disabledMcps: string[] = [], config?: OhMyOpenCodeConfig) {
|
||||
const mcps: Record<string, RemoteMcpConfig> = {}
|
||||
|
||||
if (!disabledMcps.includes("websearch")) {
|
||||
mcps.websearch = createWebsearchConfig(config?.websearch)
|
||||
}
|
||||
|
||||
if (!disabledMcps.includes("context7")) {
|
||||
mcps.context7 = context7
|
||||
}
|
||||
|
||||
if (!disabledMcps.includes("grep_app")) {
|
||||
mcps.grep_app = grep_app
|
||||
}
|
||||
|
||||
if (!disabledMcps.includes("arxiv")) {
|
||||
mcps.arxiv = arxiv
|
||||
}
|
||||
|
||||
return mcps
|
||||
}
|
||||
```
|
||||
|
||||
Changes: import `arxiv`, add conditional block.
|
||||
|
||||
## 4. NEW FILE: `src/mcp/arxiv.test.ts`
|
||||
|
||||
```typescript
|
||||
import { describe, expect, test } from "bun:test"
|
||||
import { arxiv } from "./arxiv"
|
||||
|
||||
describe("arxiv MCP configuration", () => {
|
||||
test("should have correct remote config shape", () => {
|
||||
// given
|
||||
// arxiv is a static export
|
||||
|
||||
// when
|
||||
const config = arxiv
|
||||
|
||||
// then
|
||||
expect(config.type).toBe("remote")
|
||||
expect(config.url).toBe("https://mcp.arxiv.org")
|
||||
expect(config.enabled).toBe(true)
|
||||
expect(config.oauth).toBe(false)
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
## 5. MODIFY: `src/mcp/index.test.ts`
|
||||
|
||||
Changes needed:
|
||||
- Test "should return all MCPs when disabled_mcps is empty": add `expect(result).toHaveProperty("arxiv")`, change length to 4
|
||||
- Test "should filter out all built-in MCPs when all disabled": add `"arxiv"` to disabledMcps array, add `expect(result).not.toHaveProperty("arxiv")`
|
||||
- Test "should handle empty disabled_mcps by default": add `expect(result).toHaveProperty("arxiv")`, change length to 4
|
||||
- Test "should only filter built-in MCPs, ignoring unknown names": add `expect(result).toHaveProperty("arxiv")`, change length to 4
|
||||
|
||||
New test to add:
|
||||
|
||||
```typescript
|
||||
test("should filter out arxiv when disabled", () => {
|
||||
// given
|
||||
const disabledMcps = ["arxiv"]
|
||||
|
||||
// when
|
||||
const result = createBuiltinMcps(disabledMcps)
|
||||
|
||||
// then
|
||||
expect(result).toHaveProperty("websearch")
|
||||
expect(result).toHaveProperty("context7")
|
||||
expect(result).toHaveProperty("grep_app")
|
||||
expect(result).not.toHaveProperty("arxiv")
|
||||
expect(Object.keys(result)).toHaveLength(3)
|
||||
})
|
||||
```
|
||||
|
||||
## 6. MODIFY: `src/mcp/AGENTS.md`
|
||||
|
||||
Add row to built-in MCPs table:
|
||||
|
||||
```
|
||||
| **arxiv** | `mcp.arxiv.org` | None | arXiv paper search |
|
||||
```
|
||||
|
||||
## Files touched summary
|
||||
|
||||
| File | Action |
|
||||
|------|--------|
|
||||
| `src/mcp/arxiv.ts` | NEW |
|
||||
| `src/mcp/arxiv.test.ts` | NEW |
|
||||
| `src/mcp/types.ts` | MODIFY (add enum value) |
|
||||
| `src/mcp/index.ts` | MODIFY (import + conditional block) |
|
||||
| `src/mcp/index.test.ts` | MODIFY (update counts + new test) |
|
||||
| `src/mcp/AGENTS.md` | MODIFY (add table row) |
|
||||
@@ -1,82 +0,0 @@
|
||||
# Execution Plan: Issue #100 - Built-in arXiv MCP
|
||||
|
||||
## Phase 0: Setup
|
||||
|
||||
1. `git fetch origin dev`
|
||||
2. `git worktree add ../omo-wt/feat/arxiv-mcp origin/dev`
|
||||
3. `cd ../omo-wt/feat/arxiv-mcp`
|
||||
4. `git checkout -b feat/arxiv-mcp`
|
||||
|
||||
## Phase 1: Implement
|
||||
|
||||
### Step 1: Create `src/mcp/arxiv.ts`
|
||||
- Follow static export pattern (same as `context7.ts` and `grep-app.ts`)
|
||||
- arXiv API is public, no auth needed
|
||||
- URL: `https://mcp.arxiv.org` (hypothetical remote MCP endpoint)
|
||||
- If no remote MCP exists for arXiv, this would need to be a stdio MCP or a custom HTTP wrapper. For this plan, we assume a remote MCP endpoint pattern consistent with existing built-ins.
|
||||
|
||||
### Step 2: Update `src/mcp/types.ts`
|
||||
- Add `"arxiv"` to `McpNameSchema` enum: `z.enum(["websearch", "context7", "grep_app", "arxiv"])`
|
||||
|
||||
### Step 3: Update `src/mcp/index.ts`
|
||||
- Import `arxiv` from `"./arxiv"`
|
||||
- Add conditional block in `createBuiltinMcps()`:
|
||||
```typescript
|
||||
if (!disabledMcps.includes("arxiv")) {
|
||||
mcps.arxiv = arxiv
|
||||
}
|
||||
```
|
||||
|
||||
### Step 4: Create `src/mcp/arxiv.test.ts`
|
||||
- Test arXiv config shape (type, url, enabled, oauth)
|
||||
- Follow pattern from existing tests (given/when/then)
|
||||
|
||||
### Step 5: Update `src/mcp/index.test.ts`
|
||||
- Update expected MCP count from 3 to 4
|
||||
- Add `"arxiv"` to `toHaveProperty` checks
|
||||
- Add `"arxiv"` to the "all disabled" test case
|
||||
|
||||
### Step 6: Update `src/mcp/AGENTS.md`
|
||||
- Add arxiv row to the built-in MCPs table
|
||||
|
||||
### Step 7: Local validation
|
||||
- `bun run typecheck`
|
||||
- `bun test src/mcp/`
|
||||
- `bun run build`
|
||||
|
||||
### Atomic commits (in order):
|
||||
1. `feat(mcp): add arxiv paper search built-in MCP` - arxiv.ts + types.ts update
|
||||
2. `test(mcp): add arxiv MCP tests` - arxiv.test.ts + index.test.ts updates
|
||||
3. `docs(mcp): update AGENTS.md with arxiv MCP` - AGENTS.md update
|
||||
|
||||
## Phase 2: PR Creation
|
||||
|
||||
1. `git push -u origin feat/arxiv-mcp`
|
||||
2. `gh pr create --base dev --title "feat(mcp): add built-in arXiv paper search MCP" --body-file /tmp/pull-request-arxiv-mcp-*.md`
|
||||
|
||||
## Phase 3: Verify Loop
|
||||
|
||||
### Gate A: CI
|
||||
- Wait for `ci.yml` workflow (tests, typecheck, build)
|
||||
- `gh run watch` or poll `gh pr checks`
|
||||
|
||||
### Gate B: review-work
|
||||
- Run `/review-work` skill (5-agent parallel review)
|
||||
- All 5 agents must pass: Oracle (goal), Oracle (code quality), Oracle (security), QA execution, context mining
|
||||
|
||||
### Gate C: Cubic
|
||||
- Wait for cubic-dev-ai[bot] automated review
|
||||
- Must show "No issues found"
|
||||
- If issues found, fix and re-push
|
||||
|
||||
### Failure handling:
|
||||
- Gate A fail: fix locally, amend or new commit, re-push
|
||||
- Gate B fail: address review-work findings, new commit
|
||||
- Gate C fail: address Cubic findings, new commit
|
||||
- Re-enter verify loop from Gate A
|
||||
|
||||
## Phase 4: Merge
|
||||
|
||||
1. `gh pr merge --squash --delete-branch`
|
||||
2. `git worktree remove ../omo-wt/feat/arxiv-mcp`
|
||||
3. `git branch -D feat/arxiv-mcp` (if not auto-deleted)
|
||||
@@ -1,51 +0,0 @@
|
||||
# PR: feat(mcp): add built-in arXiv paper search MCP
|
||||
|
||||
## Title
|
||||
|
||||
`feat(mcp): add built-in arXiv paper search MCP`
|
||||
|
||||
## Body
|
||||
|
||||
```markdown
|
||||
## Summary
|
||||
|
||||
Closes #100
|
||||
|
||||
- Add `arxiv` as 4th built-in remote MCP for arXiv paper search
|
||||
- Follows existing static export pattern (same as `grep_app`, `context7`)
|
||||
- No auth required, disableable via `disabled_mcps: ["arxiv"]`
|
||||
|
||||
## Changes
|
||||
|
||||
- `src/mcp/arxiv.ts` - new MCP config (static export, remote type)
|
||||
- `src/mcp/types.ts` - add `"arxiv"` to `McpNameSchema` enum
|
||||
- `src/mcp/index.ts` - register arxiv in `createBuiltinMcps()`
|
||||
- `src/mcp/arxiv.test.ts` - config shape tests
|
||||
- `src/mcp/index.test.ts` - update counts, add disable test
|
||||
- `src/mcp/AGENTS.md` - document new MCP
|
||||
|
||||
## Usage
|
||||
|
||||
Enabled by default. Disable with:
|
||||
|
||||
```jsonc
|
||||
// .opencode/oh-my-opencode.jsonc
|
||||
{
|
||||
"disabled_mcps": ["arxiv"]
|
||||
}
|
||||
```
|
||||
|
||||
## Validation
|
||||
|
||||
- [x] `bun run typecheck` passes
|
||||
- [x] `bun test src/mcp/` passes
|
||||
- [x] `bun run build` passes
|
||||
```
|
||||
|
||||
## Labels
|
||||
|
||||
`enhancement`, `mcp`
|
||||
|
||||
## Base branch
|
||||
|
||||
`dev`
|
||||
@@ -1,69 +0,0 @@
|
||||
# Verification Strategy: Issue #100 - arXiv MCP
|
||||
|
||||
## Gate A: CI (`ci.yml`)
|
||||
|
||||
### What runs
|
||||
- `bun test` (split: mock-heavy isolated + batch) - must include new `arxiv.test.ts` and updated `index.test.ts`
|
||||
- `bun run typecheck` - validates `McpNameSchema` enum change propagates correctly
|
||||
- `bun run build` - ensures no build regressions
|
||||
|
||||
### How to monitor
|
||||
```bash
|
||||
gh pr checks <pr-number> --watch
|
||||
```
|
||||
|
||||
### Failure scenarios
|
||||
| Failure | Likely cause | Fix |
|
||||
|---------|-------------|-----|
|
||||
| Type error in `types.ts` | Enum value not matching downstream consumers | Check all `McpName` usages via `lsp_find_references` |
|
||||
| Test count mismatch in `index.test.ts` | Forgot to update `toHaveLength()` from 3 to 4 | Update all length assertions |
|
||||
| Build failure | Import path or barrel export issue | Verify `src/mcp/index.ts` exports are clean |
|
||||
|
||||
### Retry
|
||||
Fix locally in worktree, new commit, `git push`.
|
||||
|
||||
## Gate B: review-work (5-agent)
|
||||
|
||||
### Agents and focus areas
|
||||
| Agent | What it checks for this PR |
|
||||
|-------|--------------------------|
|
||||
| Oracle (goal) | Does arxiv MCP satisfy issue #100 requirements? |
|
||||
| Oracle (code quality) | Follows `grep-app.ts` pattern? No SRP violations? < 200 LOC? |
|
||||
| Oracle (security) | No credentials hardcoded, no auth bypass |
|
||||
| QA (execution) | Run tests, verify disable mechanism works |
|
||||
| Context (mining) | Check issue #100 for any missed requirements |
|
||||
|
||||
### Pass criteria
|
||||
All 5 must pass. Any single failure blocks.
|
||||
|
||||
### Failure handling
|
||||
- Read each agent's report
|
||||
- Address findings with new atomic commits
|
||||
- Re-run full verify loop from Gate A
|
||||
|
||||
## Gate C: Cubic (`cubic-dev-ai[bot]`)
|
||||
|
||||
### Expected review scope
|
||||
- Config shape consistency across MCPs
|
||||
- Test coverage for new MCP
|
||||
- Schema type safety
|
||||
|
||||
### Pass criteria
|
||||
Comment from `cubic-dev-ai[bot]` containing "No issues found".
|
||||
|
||||
### Failure handling
|
||||
- Read Cubic's specific findings
|
||||
- Fix with new commit
|
||||
- Re-push, re-enter Gate A
|
||||
|
||||
## Pre-merge checklist
|
||||
- [ ] Gate A: CI green
|
||||
- [ ] Gate B: All 5 review-work agents pass
|
||||
- [ ] Gate C: Cubic "No issues found"
|
||||
- [ ] No unresolved review comments
|
||||
- [ ] PR has at least 1 approval (if required by branch protection)
|
||||
|
||||
## Post-merge
|
||||
1. `gh pr merge --squash --delete-branch`
|
||||
2. `git worktree remove ../omo-wt/feat/arxiv-mcp`
|
||||
3. Verify merge commit on `dev` branch
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 152000, "total_duration_seconds": 152}
|
||||
@@ -1,10 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-4-without_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": true, "evidence": "git worktree add ../omo-arxiv-mcp dev — agent independently chose worktree"},
|
||||
{"text": "New MCP follows existing pattern from src/mcp/", "passed": true, "evidence": "Follows grep-app.ts pattern"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": false, "evidence": "Only mentions bun test/typecheck/build. No review-work or Cubic."},
|
||||
{"text": "PR targets dev branch", "passed": true, "evidence": "--base dev"},
|
||||
{"text": "Runs local checks before pushing", "passed": true, "evidence": "bun test src/mcp/, bun run typecheck, bun run build"}
|
||||
]
|
||||
}
|
||||
@@ -1,252 +0,0 @@
|
||||
# Code Changes: Built-in arXiv MCP
|
||||
|
||||
## 1. NEW FILE: `src/mcp/arxiv.ts`
|
||||
|
||||
```typescript
|
||||
export const arxiv = {
|
||||
type: "remote" as const,
|
||||
url: "https://mcp.arxiv.org",
|
||||
enabled: true,
|
||||
oauth: false as const,
|
||||
}
|
||||
```
|
||||
|
||||
> **Note:** The URL `https://mcp.arxiv.org` is a placeholder. The actual endpoint needs to be verified. If no hosted arXiv MCP exists, alternatives include community-hosted servers or a self-hosted wrapper around the arXiv REST API (`export.arxiv.org/api/query`). This would be the single blocker requiring resolution before merging.
|
||||
|
||||
Pattern followed: `grep-app.ts` (static export, no auth, no config factory needed since arXiv API is public).
|
||||
|
||||
---
|
||||
|
||||
## 2. MODIFY: `src/mcp/types.ts`
|
||||
|
||||
```diff
|
||||
import { z } from "zod"
|
||||
|
||||
-export const McpNameSchema = z.enum(["websearch", "context7", "grep_app"])
|
||||
+export const McpNameSchema = z.enum(["websearch", "context7", "grep_app", "arxiv"])
|
||||
|
||||
export type McpName = z.infer<typeof McpNameSchema>
|
||||
|
||||
export const AnyMcpNameSchema = z.string().min(1)
|
||||
|
||||
export type AnyMcpName = z.infer<typeof AnyMcpNameSchema>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. MODIFY: `src/mcp/index.ts`
|
||||
|
||||
```diff
|
||||
import { createWebsearchConfig } from "./websearch"
|
||||
import { context7 } from "./context7"
|
||||
import { grep_app } from "./grep-app"
|
||||
+import { arxiv } from "./arxiv"
|
||||
import type { OhMyOpenCodeConfig } from "../config/schema"
|
||||
|
||||
-export { McpNameSchema, type McpName } from "./types"
|
||||
+export { McpNameSchema, type McpName } from "./types"
|
||||
|
||||
type RemoteMcpConfig = {
|
||||
type: "remote"
|
||||
url: string
|
||||
enabled: boolean
|
||||
headers?: Record<string, string>
|
||||
oauth?: false
|
||||
}
|
||||
|
||||
export function createBuiltinMcps(disabledMcps: string[] = [], config?: OhMyOpenCodeConfig) {
|
||||
const mcps: Record<string, RemoteMcpConfig> = {}
|
||||
|
||||
if (!disabledMcps.includes("websearch")) {
|
||||
mcps.websearch = createWebsearchConfig(config?.websearch)
|
||||
}
|
||||
|
||||
if (!disabledMcps.includes("context7")) {
|
||||
mcps.context7 = context7
|
||||
}
|
||||
|
||||
if (!disabledMcps.includes("grep_app")) {
|
||||
mcps.grep_app = grep_app
|
||||
}
|
||||
|
||||
+ if (!disabledMcps.includes("arxiv")) {
|
||||
+ mcps.arxiv = arxiv
|
||||
+ }
|
||||
+
|
||||
return mcps
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. MODIFY: `src/mcp/index.test.ts`
|
||||
|
||||
Changes needed in existing tests (count 3 → 4) plus one new test:
|
||||
|
||||
```diff
|
||||
describe("createBuiltinMcps", () => {
|
||||
test("should return all MCPs when disabled_mcps is empty", () => {
|
||||
// given
|
||||
const disabledMcps: string[] = []
|
||||
|
||||
// when
|
||||
const result = createBuiltinMcps(disabledMcps)
|
||||
|
||||
// then
|
||||
expect(result).toHaveProperty("websearch")
|
||||
expect(result).toHaveProperty("context7")
|
||||
expect(result).toHaveProperty("grep_app")
|
||||
- expect(Object.keys(result)).toHaveLength(3)
|
||||
+ expect(result).toHaveProperty("arxiv")
|
||||
+ expect(Object.keys(result)).toHaveLength(4)
|
||||
})
|
||||
|
||||
test("should filter out disabled built-in MCPs", () => {
|
||||
// given
|
||||
const disabledMcps = ["context7"]
|
||||
|
||||
// when
|
||||
const result = createBuiltinMcps(disabledMcps)
|
||||
|
||||
// then
|
||||
expect(result).toHaveProperty("websearch")
|
||||
expect(result).not.toHaveProperty("context7")
|
||||
expect(result).toHaveProperty("grep_app")
|
||||
- expect(Object.keys(result)).toHaveLength(2)
|
||||
+ expect(result).toHaveProperty("arxiv")
|
||||
+ expect(Object.keys(result)).toHaveLength(3)
|
||||
})
|
||||
|
||||
test("should filter out all built-in MCPs when all disabled", () => {
|
||||
// given
|
||||
- const disabledMcps = ["websearch", "context7", "grep_app"]
|
||||
+ const disabledMcps = ["websearch", "context7", "grep_app", "arxiv"]
|
||||
|
||||
// when
|
||||
const result = createBuiltinMcps(disabledMcps)
|
||||
|
||||
// then
|
||||
expect(result).not.toHaveProperty("websearch")
|
||||
expect(result).not.toHaveProperty("context7")
|
||||
expect(result).not.toHaveProperty("grep_app")
|
||||
+ expect(result).not.toHaveProperty("arxiv")
|
||||
expect(Object.keys(result)).toHaveLength(0)
|
||||
})
|
||||
|
||||
test("should ignore custom MCP names in disabled_mcps", () => {
|
||||
// given
|
||||
const disabledMcps = ["context7", "playwright", "custom"]
|
||||
|
||||
// when
|
||||
const result = createBuiltinMcps(disabledMcps)
|
||||
|
||||
// then
|
||||
expect(result).toHaveProperty("websearch")
|
||||
expect(result).not.toHaveProperty("context7")
|
||||
expect(result).toHaveProperty("grep_app")
|
||||
- expect(Object.keys(result)).toHaveLength(2)
|
||||
+ expect(result).toHaveProperty("arxiv")
|
||||
+ expect(Object.keys(result)).toHaveLength(3)
|
||||
})
|
||||
|
||||
test("should handle empty disabled_mcps by default", () => {
|
||||
// given
|
||||
// when
|
||||
const result = createBuiltinMcps()
|
||||
|
||||
// then
|
||||
expect(result).toHaveProperty("websearch")
|
||||
expect(result).toHaveProperty("context7")
|
||||
expect(result).toHaveProperty("grep_app")
|
||||
- expect(Object.keys(result)).toHaveLength(3)
|
||||
+ expect(result).toHaveProperty("arxiv")
|
||||
+ expect(Object.keys(result)).toHaveLength(4)
|
||||
})
|
||||
|
||||
test("should only filter built-in MCPs, ignoring unknown names", () => {
|
||||
// given
|
||||
const disabledMcps = ["playwright", "sqlite", "unknown-mcp"]
|
||||
|
||||
// when
|
||||
const result = createBuiltinMcps(disabledMcps)
|
||||
|
||||
// then
|
||||
expect(result).toHaveProperty("websearch")
|
||||
expect(result).toHaveProperty("context7")
|
||||
expect(result).toHaveProperty("grep_app")
|
||||
- expect(Object.keys(result)).toHaveLength(3)
|
||||
+ expect(result).toHaveProperty("arxiv")
|
||||
+ expect(Object.keys(result)).toHaveLength(4)
|
||||
})
|
||||
|
||||
+ test("should filter out arxiv when disabled", () => {
|
||||
+ // given
|
||||
+ const disabledMcps = ["arxiv"]
|
||||
+
|
||||
+ // when
|
||||
+ const result = createBuiltinMcps(disabledMcps)
|
||||
+
|
||||
+ // then
|
||||
+ expect(result).toHaveProperty("websearch")
|
||||
+ expect(result).toHaveProperty("context7")
|
||||
+ expect(result).toHaveProperty("grep_app")
|
||||
+ expect(result).not.toHaveProperty("arxiv")
|
||||
+ expect(Object.keys(result)).toHaveLength(3)
|
||||
+ })
|
||||
+
|
||||
// ... existing tavily test unchanged
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. MODIFY: `src/mcp/AGENTS.md`
|
||||
|
||||
```diff
|
||||
-# src/mcp/ — 3 Built-in Remote MCPs
|
||||
+# src/mcp/ — 4 Built-in Remote MCPs
|
||||
|
||||
**Generated:** 2026-03-06
|
||||
|
||||
## OVERVIEW
|
||||
|
||||
-Tier 1 of the three-tier MCP system. 3 remote HTTP MCPs created via `createBuiltinMcps(disabledMcps, config)`.
|
||||
+Tier 1 of the three-tier MCP system. 4 remote HTTP MCPs created via `createBuiltinMcps(disabledMcps, config)`.
|
||||
|
||||
## BUILT-IN MCPs
|
||||
|
||||
| Name | URL | Env Vars | Tools |
|
||||
|------|-----|----------|-------|
|
||||
| **websearch** | `mcp.exa.ai` (default) or `mcp.tavily.com` | `EXA_API_KEY` (optional), `TAVILY_API_KEY` (if tavily) | Web search |
|
||||
| **context7** | `mcp.context7.com/mcp` | `CONTEXT7_API_KEY` (optional) | Library documentation |
|
||||
| **grep_app** | `mcp.grep.app` | None | GitHub code search |
|
||||
+| **arxiv** | `mcp.arxiv.org` | None | arXiv paper search |
|
||||
|
||||
...
|
||||
|
||||
## FILES
|
||||
|
||||
| File | Purpose |
|
||||
|------|---------|
|
||||
| `index.ts` | `createBuiltinMcps()` factory |
|
||||
-| `types.ts` | `McpNameSchema`: "websearch" \| "context7" \| "grep_app" |
|
||||
+| `types.ts` | `McpNameSchema`: "websearch" \| "context7" \| "grep_app" \| "arxiv" |
|
||||
| `websearch.ts` | Exa/Tavily provider with config |
|
||||
| `context7.ts` | Context7 with optional auth header |
|
||||
| `grep-app.ts` | Grep.app (no auth) |
|
||||
+| `arxiv.ts` | arXiv paper search (no auth) |
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Summary of Touched Files
|
||||
|
||||
| File | Lines Changed | Type |
|
||||
|------|--------------|------|
|
||||
| `src/mcp/arxiv.ts` | +6 (new) | Create |
|
||||
| `src/mcp/types.ts` | 1 line modified | Modify |
|
||||
| `src/mcp/index.ts` | +5 (import + block) | Modify |
|
||||
| `src/mcp/index.test.ts` | ~20 lines (count fixes + new test) | Modify |
|
||||
| `src/mcp/AGENTS.md` | ~6 lines | Modify |
|
||||
|
||||
Total: ~37 lines added/modified across 5 files. Minimal, surgical change.
|
||||
@@ -1,83 +0,0 @@
|
||||
# Execution Plan: Add Built-in arXiv MCP (Issue #100)
|
||||
|
||||
## Pre-Implementation
|
||||
|
||||
1. **Create worktree + branch**
|
||||
```bash
|
||||
git worktree add ../omo-arxiv-mcp dev
|
||||
cd ../omo-arxiv-mcp
|
||||
git checkout -b feat/arxiv-mcp
|
||||
```
|
||||
|
||||
2. **Verify arXiv MCP endpoint exists**
|
||||
- The arXiv API is public (`export.arxiv.org/api/query`) but has no native MCP endpoint
|
||||
- Need to identify a hosted remote MCP server for arXiv (e.g., community-maintained or self-hosted)
|
||||
- If no hosted endpoint exists, consider alternatives: (a) use a community-hosted one from the MCP registry, (b) flag this in the PR and propose a follow-up for hosting
|
||||
- For this plan, assume a remote MCP endpoint at a URL like `https://mcp.arxiv.org` or a third-party equivalent
|
||||
|
||||
## Implementation Steps (4 files to modify, 2 files to create)
|
||||
|
||||
### Step 1: Create `src/mcp/arxiv.ts`
|
||||
- Follow the `grep-app.ts` pattern (simplest: static export, no auth, no config)
|
||||
- arXiv API is public, so no API key needed
|
||||
- Export a `const arxiv` with `type: "remote"`, `url`, `enabled: true`, `oauth: false`
|
||||
|
||||
### Step 2: Update `src/mcp/types.ts`
|
||||
- Add `"arxiv"` to the `McpNameSchema` z.enum array
|
||||
- This makes it a recognized built-in MCP name
|
||||
|
||||
### Step 3: Update `src/mcp/index.ts`
|
||||
- Import `arxiv` from `"./arxiv"`
|
||||
- Add the `if (!disabledMcps.includes("arxiv"))` block inside `createBuiltinMcps()`
|
||||
- Place it after `grep_app` block (alphabetical among new additions, or last)
|
||||
|
||||
### Step 4: Update `src/mcp/index.test.ts`
|
||||
- Update test "should return all MCPs when disabled_mcps is empty" to expect 4 MCPs instead of 3
|
||||
- Update test "should filter out all built-in MCPs when all disabled" to include "arxiv" in the disabled list and expect it not present
|
||||
- Update test "should handle empty disabled_mcps by default" to expect 4 MCPs
|
||||
- Update test "should only filter built-in MCPs, ignoring unknown names" to expect 4 MCPs
|
||||
- Add new test: "should filter out arxiv when disabled"
|
||||
|
||||
### Step 5: Create `src/mcp/arxiv.test.ts` (optional, only if factory pattern used)
|
||||
- If using static export (like grep-app), no separate test file needed
|
||||
- If using factory with config, add tests following `websearch.test.ts` pattern
|
||||
|
||||
### Step 6: Update `src/mcp/AGENTS.md`
|
||||
- Add arxiv to the built-in MCPs table
|
||||
- Update "3 Built-in Remote MCPs" to "4 Built-in Remote MCPs"
|
||||
- Add arxiv to the FILES table
|
||||
|
||||
## Post-Implementation
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
bun test src/mcp/ # Run MCP tests
|
||||
bun run typecheck # Verify no type errors
|
||||
bun run build # Verify build passes
|
||||
```
|
||||
|
||||
### PR Creation
|
||||
```bash
|
||||
git add src/mcp/arxiv.ts src/mcp/types.ts src/mcp/index.ts src/mcp/index.test.ts src/mcp/AGENTS.md
|
||||
git commit -m "feat(mcp): add built-in arxiv paper search MCP"
|
||||
git push -u origin feat/arxiv-mcp
|
||||
gh pr create --title "feat(mcp): add built-in arxiv paper search MCP" --body-file /tmp/pull-request-arxiv-mcp-....md --base dev
|
||||
```
|
||||
|
||||
## Risk Assessment
|
||||
|
||||
| Risk | Likelihood | Mitigation |
|
||||
|------|-----------|------------|
|
||||
| No hosted arXiv MCP endpoint exists | Medium | Research MCP registries; worst case, create a minimal hosted wrapper or use a community server |
|
||||
| Existing tests break due to MCP count change | Low | Update hardcoded count assertions from 3 to 4 |
|
||||
| Config schema needs updates | None | `disabled_mcps` uses `AnyMcpNameSchema` (any string), not `McpNameSchema`, so no schema change needed for disable functionality |
|
||||
|
||||
## Files Changed Summary
|
||||
|
||||
| File | Action | Description |
|
||||
|------|--------|-------------|
|
||||
| `src/mcp/arxiv.ts` | Create | Static remote MCP config export |
|
||||
| `src/mcp/types.ts` | Modify | Add "arxiv" to McpNameSchema enum |
|
||||
| `src/mcp/index.ts` | Modify | Import + register in createBuiltinMcps() |
|
||||
| `src/mcp/index.test.ts` | Modify | Update count assertions, add arxiv-specific test |
|
||||
| `src/mcp/AGENTS.md` | Modify | Update docs to reflect 4 MCPs |
|
||||
@@ -1,33 +0,0 @@
|
||||
## Summary
|
||||
|
||||
- Add `arxiv` as a 4th built-in remote MCP for arXiv paper search
|
||||
- Follows the `grep-app.ts` pattern: static export, no auth required (arXiv API is public)
|
||||
- Fully integrated with `disabled_mcps` config and `McpNameSchema` validation
|
||||
|
||||
## Changes
|
||||
|
||||
| File | Change |
|
||||
|------|--------|
|
||||
| `src/mcp/arxiv.ts` | New remote MCP config pointing to arXiv MCP endpoint |
|
||||
| `src/mcp/types.ts` | Add `"arxiv"` to `McpNameSchema` enum |
|
||||
| `src/mcp/index.ts` | Import + register arxiv in `createBuiltinMcps()` |
|
||||
| `src/mcp/index.test.ts` | Update count assertions (3 → 4), add arxiv disable test |
|
||||
| `src/mcp/AGENTS.md` | Update docs to reflect 4 built-in MCPs |
|
||||
|
||||
## How to Test
|
||||
|
||||
```bash
|
||||
bun test src/mcp/
|
||||
```
|
||||
|
||||
## How to Disable
|
||||
|
||||
```jsonc
|
||||
// Method 1: disabled_mcps
|
||||
{ "disabled_mcps": ["arxiv"] }
|
||||
|
||||
// Method 2: enabled flag
|
||||
{ "mcp": { "arxiv": { "enabled": false } } }
|
||||
```
|
||||
|
||||
Closes #100
|
||||
@@ -1,101 +0,0 @@
|
||||
# Verification Strategy: arXiv MCP
|
||||
|
||||
## 1. Type Safety
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
```
|
||||
|
||||
Verify:
|
||||
- `McpNameSchema` type union includes `"arxiv"`
|
||||
- `arxiv` export in `arxiv.ts` matches `RemoteMcpConfig` shape
|
||||
- Import in `index.ts` resolves correctly
|
||||
- No new type errors introduced
|
||||
|
||||
## 2. Unit Tests
|
||||
|
||||
```bash
|
||||
bun test src/mcp/
|
||||
```
|
||||
|
||||
### Existing test updates verified:
|
||||
- `index.test.ts`: All 7 existing tests pass with updated count (3 → 4)
|
||||
- `websearch.test.ts`: Unchanged, still passes (no side effects)
|
||||
|
||||
### New test coverage:
|
||||
- `index.test.ts`: New test "should filter out arxiv when disabled" passes
|
||||
- Arxiv appears in all "all MCPs" assertions
|
||||
- Arxiv excluded when in `disabled_mcps`
|
||||
|
||||
## 3. Build Verification
|
||||
|
||||
```bash
|
||||
bun run build
|
||||
```
|
||||
|
||||
Verify:
|
||||
- ESM bundle includes `arxiv.ts` module
|
||||
- Type declarations emitted for `arxiv` export
|
||||
- No build errors
|
||||
|
||||
## 4. Integration Check
|
||||
|
||||
### Config disable path
|
||||
- Add `"arxiv"` to `disabled_mcps` in test config → verify MCP excluded from `createBuiltinMcps()` output
|
||||
- This is already covered by the unit test, but can be manually verified:
|
||||
|
||||
```typescript
|
||||
import { createBuiltinMcps } from "./src/mcp"
|
||||
const withArxiv = createBuiltinMcps([])
|
||||
console.log(Object.keys(withArxiv)) // ["websearch", "context7", "grep_app", "arxiv"]
|
||||
|
||||
const withoutArxiv = createBuiltinMcps(["arxiv"])
|
||||
console.log(Object.keys(withoutArxiv)) // ["websearch", "context7", "grep_app"]
|
||||
```
|
||||
|
||||
### MCP config handler path
|
||||
- `mcp-config-handler.ts` calls `createBuiltinMcps()` and merges results
|
||||
- No changes needed there; arxiv automatically included in the merge
|
||||
- Verify by checking `applyMcpConfig()` output includes arxiv when not disabled
|
||||
|
||||
## 5. LSP Diagnostics
|
||||
|
||||
```bash
|
||||
# Run on all changed files
|
||||
```
|
||||
|
||||
Check `lsp_diagnostics` on:
|
||||
- `src/mcp/arxiv.ts`
|
||||
- `src/mcp/types.ts`
|
||||
- `src/mcp/index.ts`
|
||||
- `src/mcp/index.test.ts`
|
||||
|
||||
All must return 0 errors.
|
||||
|
||||
## 6. Endpoint Verification (Manual / Pre-merge)
|
||||
|
||||
**Critical:** Before merging, verify the arXiv MCP endpoint URL is actually reachable:
|
||||
|
||||
```bash
|
||||
curl -s -o /dev/null -w "%{http_code}" https://mcp.arxiv.org
|
||||
```
|
||||
|
||||
If the endpoint doesn't exist or returns non-2xx, the MCP will silently fail at runtime (MCP framework handles connection errors gracefully). This is acceptable for a built-in MCP but should be documented.
|
||||
|
||||
## 7. Regression Check
|
||||
|
||||
Verify no existing functionality is broken:
|
||||
- `bun test` (full suite) passes
|
||||
- Existing 3 MCPs (websearch, context7, grep_app) still work
|
||||
- `disabled_mcps` config still works for all MCPs
|
||||
- `mcp-config-handler.test.ts` passes (if it has count-based assertions, update them)
|
||||
|
||||
## Checklist
|
||||
|
||||
- [ ] `bun run typecheck` passes
|
||||
- [ ] `bun test src/mcp/` passes (all tests green)
|
||||
- [ ] `bun run build` succeeds
|
||||
- [ ] `lsp_diagnostics` clean on all 4 changed files
|
||||
- [ ] arXiv MCP endpoint URL verified reachable
|
||||
- [ ] No hardcoded MCP count assertions broken elsewhere in codebase
|
||||
- [ ] AGENTS.md updated to reflect 4 MCPs
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 197000, "total_duration_seconds": 197}
|
||||
@@ -1,32 +0,0 @@
|
||||
{
|
||||
"eval_id": 5,
|
||||
"eval_name": "regex-fix-false-positive",
|
||||
"prompt": "The comment-checker hook is too aggressive - it's flagging legitimate comments that happen to contain 'Note:' as AI slop. Relax the regex pattern and add test cases for the false positives. Work on a separate branch and make a PR.",
|
||||
"assertions": [
|
||||
{
|
||||
"id": "worktree-isolation",
|
||||
"text": "Plan uses git worktree in a sibling directory",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "real-comment-checker-files",
|
||||
"text": "References actual comment-checker hook files in the codebase",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "regression-tests",
|
||||
"text": "Adds test cases specifically for 'Note:' false positive scenarios",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "three-gates",
|
||||
"text": "Verification loop includes all 3 gates",
|
||||
"type": "manual"
|
||||
},
|
||||
{
|
||||
"id": "minimal-change",
|
||||
"text": "Only modifies regex and adds tests — no unrelated changes",
|
||||
"type": "manual"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -1,10 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-5-with_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": true, "evidence": "../omo-wt/fix/comment-checker-note-false-positive"},
|
||||
{"text": "References actual comment-checker hook files", "passed": true, "evidence": "Found Go binary, extracted 24 regex patterns, references cli.ts, cli-runner.ts, hook.ts"},
|
||||
{"text": "Adds test cases for Note: false positive scenarios", "passed": true, "evidence": "Commit 3 dedicated to false positive test cases"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": true, "evidence": "Gate A (CI), Gate B (review-work 5 agents), Gate C (Cubic)"},
|
||||
{"text": "Only modifies regex and adds tests — no unrelated changes", "passed": false, "evidence": "Also proposes config schema change (exclude_patterns) and Go binary update — goes beyond minimal fix"}
|
||||
]
|
||||
}
|
||||
@@ -1,387 +0,0 @@
|
||||
# Code Changes
|
||||
|
||||
## File 1: `src/config/schema/comment-checker.ts`
|
||||
|
||||
### Before
|
||||
```typescript
|
||||
import { z } from "zod"
|
||||
|
||||
export const CommentCheckerConfigSchema = z.object({
|
||||
/** Custom prompt to replace the default warning message. Use {{comments}} placeholder for detected comments XML. */
|
||||
custom_prompt: z.string().optional(),
|
||||
})
|
||||
|
||||
export type CommentCheckerConfig = z.infer<typeof CommentCheckerConfigSchema>
|
||||
```
|
||||
|
||||
### After
|
||||
```typescript
|
||||
import { z } from "zod"
|
||||
|
||||
export const CommentCheckerConfigSchema = z.object({
|
||||
/** Custom prompt to replace the default warning message. Use {{comments}} placeholder for detected comments XML. */
|
||||
custom_prompt: z.string().optional(),
|
||||
/** Regex patterns to exclude from comment detection (e.g. ["^Note:", "^TODO:"]). Case-insensitive. */
|
||||
exclude_patterns: z.array(z.string()).optional(),
|
||||
})
|
||||
|
||||
export type CommentCheckerConfig = z.infer<typeof CommentCheckerConfigSchema>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File 2: `src/hooks/comment-checker/cli.ts`
|
||||
|
||||
### Change: `runCommentChecker` function (line 151)
|
||||
|
||||
Add `excludePatterns` parameter and pass `--exclude-pattern` flags to the binary.
|
||||
|
||||
### Before (line 151)
|
||||
```typescript
|
||||
export async function runCommentChecker(input: HookInput, cliPath?: string, customPrompt?: string): Promise<CheckResult> {
|
||||
const binaryPath = cliPath ?? resolvedCliPath ?? getCommentCheckerPathSync()
|
||||
// ...
|
||||
try {
|
||||
const args = [binaryPath, "check"]
|
||||
if (customPrompt) {
|
||||
args.push("--prompt", customPrompt)
|
||||
}
|
||||
```
|
||||
|
||||
### After
|
||||
```typescript
|
||||
export async function runCommentChecker(
|
||||
input: HookInput,
|
||||
cliPath?: string,
|
||||
customPrompt?: string,
|
||||
excludePatterns?: string[],
|
||||
): Promise<CheckResult> {
|
||||
const binaryPath = cliPath ?? resolvedCliPath ?? getCommentCheckerPathSync()
|
||||
// ...
|
||||
try {
|
||||
const args = [binaryPath, "check"]
|
||||
if (customPrompt) {
|
||||
args.push("--prompt", customPrompt)
|
||||
}
|
||||
if (excludePatterns) {
|
||||
for (const pattern of excludePatterns) {
|
||||
args.push("--exclude-pattern", pattern)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File 3: `src/hooks/comment-checker/cli-runner.ts`
|
||||
|
||||
### Change: `processWithCli` function (line 43)
|
||||
|
||||
Add `excludePatterns` parameter threading.
|
||||
|
||||
### Before (line 43-79)
|
||||
```typescript
|
||||
export async function processWithCli(
|
||||
input: { tool: string; sessionID: string; callID: string },
|
||||
pendingCall: PendingCall,
|
||||
output: { output: string },
|
||||
cliPath: string,
|
||||
customPrompt: string | undefined,
|
||||
debugLog: (...args: unknown[]) => void,
|
||||
): Promise<void> {
|
||||
await withCommentCheckerLock(async () => {
|
||||
// ...
|
||||
const result = await runCommentChecker(hookInput, cliPath, customPrompt)
|
||||
```
|
||||
|
||||
### After
|
||||
```typescript
|
||||
export async function processWithCli(
|
||||
input: { tool: string; sessionID: string; callID: string },
|
||||
pendingCall: PendingCall,
|
||||
output: { output: string },
|
||||
cliPath: string,
|
||||
customPrompt: string | undefined,
|
||||
debugLog: (...args: unknown[]) => void,
|
||||
excludePatterns?: string[],
|
||||
): Promise<void> {
|
||||
await withCommentCheckerLock(async () => {
|
||||
// ...
|
||||
const result = await runCommentChecker(hookInput, cliPath, customPrompt, excludePatterns)
|
||||
```
|
||||
|
||||
### Change: `processApplyPatchEditsWithCli` function (line 87)
|
||||
|
||||
Same pattern - thread `excludePatterns` through.
|
||||
|
||||
### Before (line 87-120)
|
||||
```typescript
|
||||
export async function processApplyPatchEditsWithCli(
|
||||
sessionID: string,
|
||||
edits: ApplyPatchEdit[],
|
||||
output: { output: string },
|
||||
cliPath: string,
|
||||
customPrompt: string | undefined,
|
||||
debugLog: (...args: unknown[]) => void,
|
||||
): Promise<void> {
|
||||
// ...
|
||||
const result = await runCommentChecker(hookInput, cliPath, customPrompt)
|
||||
```
|
||||
|
||||
### After
|
||||
```typescript
|
||||
export async function processApplyPatchEditsWithCli(
|
||||
sessionID: string,
|
||||
edits: ApplyPatchEdit[],
|
||||
output: { output: string },
|
||||
cliPath: string,
|
||||
customPrompt: string | undefined,
|
||||
debugLog: (...args: unknown[]) => void,
|
||||
excludePatterns?: string[],
|
||||
): Promise<void> {
|
||||
// ...
|
||||
const result = await runCommentChecker(hookInput, cliPath, customPrompt, excludePatterns)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File 4: `src/hooks/comment-checker/hook.ts`
|
||||
|
||||
### Change: Thread `config.exclude_patterns` through to CLI calls
|
||||
|
||||
### Before (line 177)
|
||||
```typescript
|
||||
await processWithCli(input, pendingCall, output, cliPath, config?.custom_prompt, debugLog)
|
||||
```
|
||||
|
||||
### After
|
||||
```typescript
|
||||
await processWithCli(input, pendingCall, output, cliPath, config?.custom_prompt, debugLog, config?.exclude_patterns)
|
||||
```
|
||||
|
||||
### Before (line 147-154)
|
||||
```typescript
|
||||
await processApplyPatchEditsWithCli(
|
||||
input.sessionID,
|
||||
edits,
|
||||
output,
|
||||
cliPath,
|
||||
config?.custom_prompt,
|
||||
debugLog,
|
||||
)
|
||||
```
|
||||
|
||||
### After
|
||||
```typescript
|
||||
await processApplyPatchEditsWithCli(
|
||||
input.sessionID,
|
||||
edits,
|
||||
output,
|
||||
cliPath,
|
||||
config?.custom_prompt,
|
||||
debugLog,
|
||||
config?.exclude_patterns,
|
||||
)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File 5: `src/hooks/comment-checker/cli.test.ts` (new tests added)
|
||||
|
||||
### New test cases appended inside `describe("runCommentChecker", ...)`
|
||||
|
||||
```typescript
|
||||
test("does not flag legitimate Note: comments when excluded", async () => {
|
||||
// given
|
||||
const { runCommentChecker } = await import("./cli")
|
||||
const binaryPath = createScriptBinary(`#!/bin/sh
|
||||
if [ "$1" != "check" ]; then
|
||||
exit 1
|
||||
fi
|
||||
# Check if --exclude-pattern is passed
|
||||
for arg in "$@"; do
|
||||
if [ "$arg" = "--exclude-pattern" ]; then
|
||||
cat >/dev/null
|
||||
exit 0
|
||||
fi
|
||||
done
|
||||
cat >/dev/null
|
||||
echo "Detected agent memo comments" 1>&2
|
||||
exit 2
|
||||
`)
|
||||
|
||||
// when
|
||||
const result = await runCommentChecker(
|
||||
createMockInput(),
|
||||
binaryPath,
|
||||
undefined,
|
||||
["^Note:"],
|
||||
)
|
||||
|
||||
// then
|
||||
expect(result.hasComments).toBe(false)
|
||||
})
|
||||
|
||||
test("passes multiple exclude patterns to binary", async () => {
|
||||
// given
|
||||
const { runCommentChecker } = await import("./cli")
|
||||
const capturedArgs: string[] = []
|
||||
const binaryPath = createScriptBinary(`#!/bin/sh
|
||||
echo "$@" > /tmp/comment-checker-test-args.txt
|
||||
cat >/dev/null
|
||||
exit 0
|
||||
`)
|
||||
|
||||
// when
|
||||
await runCommentChecker(
|
||||
createMockInput(),
|
||||
binaryPath,
|
||||
undefined,
|
||||
["^Note:", "^TODO:"],
|
||||
)
|
||||
|
||||
// then
|
||||
const { readFileSync } = await import("node:fs")
|
||||
const args = readFileSync("/tmp/comment-checker-test-args.txt", "utf-8").trim()
|
||||
expect(args).toContain("--exclude-pattern")
|
||||
expect(args).toContain("^Note:")
|
||||
expect(args).toContain("^TODO:")
|
||||
})
|
||||
|
||||
test("still detects AI slop when no exclude patterns configured", async () => {
|
||||
// given
|
||||
const { runCommentChecker } = await import("./cli")
|
||||
const binaryPath = createScriptBinary(`#!/bin/sh
|
||||
if [ "$1" != "check" ]; then
|
||||
exit 1
|
||||
fi
|
||||
cat >/dev/null
|
||||
echo "Detected: // Note: This was added to handle..." 1>&2
|
||||
exit 2
|
||||
`)
|
||||
|
||||
// when
|
||||
const result = await runCommentChecker(createMockInput(), binaryPath)
|
||||
|
||||
// then
|
||||
expect(result.hasComments).toBe(true)
|
||||
expect(result.message).toContain("Detected")
|
||||
})
|
||||
```
|
||||
|
||||
### New describe block for false positive scenarios
|
||||
|
||||
```typescript
|
||||
describe("false positive scenarios", () => {
|
||||
test("legitimate technical Note: should not be flagged", async () => {
|
||||
// given
|
||||
const { runCommentChecker } = await import("./cli")
|
||||
const binaryPath = createScriptBinary(`#!/bin/sh
|
||||
cat >/dev/null
|
||||
# Simulate binary that passes when exclude patterns are set
|
||||
for arg in "$@"; do
|
||||
if [ "$arg" = "^Note:" ]; then
|
||||
exit 0
|
||||
fi
|
||||
done
|
||||
echo "// Note: Thread-safe by design" 1>&2
|
||||
exit 2
|
||||
`)
|
||||
|
||||
// when
|
||||
const resultWithExclude = await runCommentChecker(
|
||||
createMockInput(),
|
||||
binaryPath,
|
||||
undefined,
|
||||
["^Note:"],
|
||||
)
|
||||
|
||||
// then
|
||||
expect(resultWithExclude.hasComments).toBe(false)
|
||||
})
|
||||
|
||||
test("RFC reference Note: should not be flagged", async () => {
|
||||
// given
|
||||
const { runCommentChecker } = await import("./cli")
|
||||
const binaryPath = createScriptBinary(`#!/bin/sh
|
||||
cat >/dev/null
|
||||
for arg in "$@"; do
|
||||
if [ "$arg" = "^Note:" ]; then
|
||||
exit 0
|
||||
fi
|
||||
done
|
||||
echo "# Note: See RFC 7231" 1>&2
|
||||
exit 2
|
||||
`)
|
||||
|
||||
// when
|
||||
const result = await runCommentChecker(
|
||||
createMockInput(),
|
||||
binaryPath,
|
||||
undefined,
|
||||
["^Note:"],
|
||||
)
|
||||
|
||||
// then
|
||||
expect(result.hasComments).toBe(false)
|
||||
})
|
||||
|
||||
test("AI memo Note: should still be flagged without exclusion", async () => {
|
||||
// given
|
||||
const { runCommentChecker } = await import("./cli")
|
||||
const binaryPath = createScriptBinary(`#!/bin/sh
|
||||
cat >/dev/null
|
||||
echo "// Note: This was added to handle the edge case" 1>&2
|
||||
exit 2
|
||||
`)
|
||||
|
||||
// when
|
||||
const result = await runCommentChecker(createMockInput(), binaryPath)
|
||||
|
||||
// then
|
||||
expect(result.hasComments).toBe(true)
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File 6: `src/hooks/comment-checker/hook.apply-patch.test.ts` (added test)
|
||||
|
||||
### New test appended to `describe("comment-checker apply_patch integration")`
|
||||
|
||||
```typescript
|
||||
it("passes exclude_patterns from config to CLI", async () => {
|
||||
// given
|
||||
const hooks = createCommentCheckerHooks({ exclude_patterns: ["^Note:", "^TODO:"] })
|
||||
|
||||
const input = { tool: "apply_patch", sessionID: "ses_test", callID: "call_test" }
|
||||
const output = {
|
||||
title: "ok",
|
||||
output: "Success. Updated the following files:\nM src/a.ts",
|
||||
metadata: {
|
||||
files: [
|
||||
{
|
||||
filePath: "/repo/src/a.ts",
|
||||
before: "const a = 1\n",
|
||||
after: "// Note: Thread-safe\nconst a = 1\n",
|
||||
type: "update",
|
||||
},
|
||||
],
|
||||
},
|
||||
}
|
||||
|
||||
// when
|
||||
await hooks["tool.execute.after"](input, output)
|
||||
|
||||
// then
|
||||
expect(processApplyPatchEditsWithCli).toHaveBeenCalledWith(
|
||||
"ses_test",
|
||||
[{ filePath: "/repo/src/a.ts", before: "const a = 1\n", after: "// Note: Thread-safe\nconst a = 1\n" }],
|
||||
expect.any(Object),
|
||||
"/tmp/fake-comment-checker",
|
||||
undefined,
|
||||
expect.any(Function),
|
||||
["^Note:", "^TODO:"],
|
||||
)
|
||||
})
|
||||
```
|
||||
@@ -1,112 +0,0 @@
|
||||
# Execution Plan: Relax comment-checker "Note:" false positives
|
||||
|
||||
## Phase 0: Setup (Worktree + Branch)
|
||||
|
||||
1. Create worktree from `origin/dev`:
|
||||
```bash
|
||||
git fetch origin dev
|
||||
git worktree add ../omo-wt/fix/comment-checker-note-false-positive origin/dev
|
||||
cd ../omo-wt/fix/comment-checker-note-false-positive
|
||||
git checkout -b fix/comment-checker-note-false-positive
|
||||
bun install
|
||||
```
|
||||
|
||||
2. Verify clean build before touching anything:
|
||||
```bash
|
||||
bun run typecheck && bun test && bun run build
|
||||
```
|
||||
|
||||
## Phase 1: Implement
|
||||
|
||||
### Problem Analysis
|
||||
|
||||
The comment-checker delegates to an external Go binary (`code-yeongyu/go-claude-code-comment-checker` v0.4.1). The binary contains the regex `(?i)^[\s#/*-]*note:\s*\w` which matches ANY comment starting with "Note:" followed by a word character. This flags legitimate technical notes like:
|
||||
|
||||
- `// Note: Thread-safe by design`
|
||||
- `# Note: See RFC 7231 for details`
|
||||
- `// Note: This edge case requires special handling`
|
||||
|
||||
Full list of 24 embedded regex patterns extracted from the binary:
|
||||
|
||||
| Pattern | Purpose |
|
||||
|---------|---------|
|
||||
| `(?i)^[\s#/*-]*note:\s*\w` | **THE PROBLEM** - Matches all "Note:" comments |
|
||||
| `(?i)^[\s#/*-]*added?\b` | Detects "add/added" |
|
||||
| `(?i)^[\s#/*-]*removed?\b` | Detects "remove/removed" |
|
||||
| `(?i)^[\s#/*-]*deleted?\b` | Detects "delete/deleted" |
|
||||
| `(?i)^[\s#/*-]*replaced?\b` | Detects "replace/replaced" |
|
||||
| `(?i)^[\s#/*-]*implemented?\b` | Detects "implement/implemented" |
|
||||
| `(?i)^[\s#/*-]*previously\b` | Detects "previously" |
|
||||
| `(?i)^[\s#/*-]*here\s+we\b` | Detects "here we" |
|
||||
| `(?i)^[\s#/*-]*refactor(ed\|ing)?\b` | Detects "refactor" variants |
|
||||
| `(?i)^[\s#/*-]*implementation\s+(of\|note)\b` | Detects "implementation of/note" |
|
||||
| `(?i)^[\s#/*-]*this\s+(implements?\|adds?\|removes?\|changes?\|fixes?)\b` | Detects "this implements/adds/etc" |
|
||||
| ... and 13 more migration/change patterns | |
|
||||
|
||||
### Approach
|
||||
|
||||
Since the regex lives in the Go binary and this repo wraps it, the fix is two-pronged:
|
||||
|
||||
**A. Go binary update** (separate repo: `code-yeongyu/go-claude-code-comment-checker`):
|
||||
- Relax `(?i)^[\s#/*-]*note:\s*\w` to only match AI-style memo patterns like `Note: this was changed...`, `Note: implementation details...`
|
||||
- Add `--exclude-pattern` CLI flag for user-configurable exclusions
|
||||
|
||||
**B. This repo (oh-my-opencode)** - the PR scope:
|
||||
1. Add `exclude_patterns` config field to `CommentCheckerConfigSchema`
|
||||
2. Pass `--exclude-pattern` flags to the CLI binary
|
||||
3. Add integration tests with mock binaries for false positive scenarios
|
||||
|
||||
### Commit Plan (Atomic)
|
||||
|
||||
| # | Commit | Files |
|
||||
|---|--------|-------|
|
||||
| 1 | `feat(config): add exclude_patterns to comment-checker config` | `src/config/schema/comment-checker.ts` |
|
||||
| 2 | `feat(comment-checker): pass exclude patterns to CLI binary` | `src/hooks/comment-checker/cli.ts`, `src/hooks/comment-checker/cli-runner.ts` |
|
||||
| 3 | `test(comment-checker): add false positive test cases for Note: comments` | `src/hooks/comment-checker/cli.test.ts`, `src/hooks/comment-checker/hook.apply-patch.test.ts` |
|
||||
|
||||
### Local Validation (after each commit)
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test src/hooks/comment-checker/
|
||||
bun test src/config/
|
||||
bun run build
|
||||
```
|
||||
|
||||
## Phase 2: PR Creation
|
||||
|
||||
```bash
|
||||
git push -u origin fix/comment-checker-note-false-positive
|
||||
gh pr create --base dev \
|
||||
--title "fix(comment-checker): relax regex to stop flagging legitimate Note: comments" \
|
||||
--body-file /tmp/pr-body.md
|
||||
```
|
||||
|
||||
## Phase 3: Verify Loop
|
||||
|
||||
### Gate A: CI
|
||||
- Wait for `ci.yml` workflow (tests, typecheck, build)
|
||||
- If CI fails: fix locally, amend or new commit, force push
|
||||
|
||||
### Gate B: review-work (5-agent)
|
||||
- Run `/review-work` to trigger 5 parallel sub-agents:
|
||||
- Oracle (goal/constraint verification)
|
||||
- Oracle (code quality)
|
||||
- Oracle (security)
|
||||
- Hephaestus (hands-on QA execution)
|
||||
- Hephaestus (context mining)
|
||||
- All 5 must pass
|
||||
|
||||
### Gate C: Cubic
|
||||
- Wait for `cubic-dev-ai[bot]` review
|
||||
- Must see "No issues found" comment
|
||||
- If issues found: address feedback, push fix, re-request review
|
||||
|
||||
## Phase 4: Merge
|
||||
|
||||
```bash
|
||||
gh pr merge --squash --auto
|
||||
# Cleanup worktree
|
||||
cd /Users/yeongyu/local-workspaces/omo
|
||||
git worktree remove ../omo-wt/fix/comment-checker-note-false-positive
|
||||
```
|
||||
@@ -1,51 +0,0 @@
|
||||
# PR: fix(comment-checker): relax regex to stop flagging legitimate Note: comments
|
||||
|
||||
**Title:** `fix(comment-checker): relax regex to stop flagging legitimate Note: comments`
|
||||
**Base:** `dev`
|
||||
**Branch:** `fix/comment-checker-note-false-positive`
|
||||
|
||||
---
|
||||
|
||||
## Summary
|
||||
|
||||
- Add `exclude_patterns` config to comment-checker schema, allowing users to whitelist comment prefixes (e.g. `["^Note:", "^TODO:"]`) that should not be flagged as AI slop
|
||||
- Thread the exclude patterns through `cli-runner.ts` and `cli.ts` to the Go binary via `--exclude-pattern` flags
|
||||
- Add test cases covering false positive scenarios: legitimate technical notes, RFC references, and AI memo detection with/without exclusions
|
||||
|
||||
## Context
|
||||
|
||||
The comment-checker Go binary (`go-claude-code-comment-checker` v0.4.1) contains the regex `(?i)^[\s#/*-]*note:\s*\w` which matches ALL comments starting with "Note:" followed by a word character. This produces false positives for legitimate technical comments:
|
||||
|
||||
```typescript
|
||||
// Note: Thread-safe by design <- flagged as AI slop
|
||||
# Note: See RFC 7231 for details <- flagged as AI slop
|
||||
// Note: This edge case requires... <- flagged as AI slop
|
||||
```
|
||||
|
||||
These are standard engineering comments, not AI agent memos.
|
||||
|
||||
## Changes
|
||||
|
||||
| File | Change |
|
||||
|------|--------|
|
||||
| `src/config/schema/comment-checker.ts` | Add `exclude_patterns: string[]` optional field |
|
||||
| `src/hooks/comment-checker/cli.ts` | Pass `--exclude-pattern` flags to binary |
|
||||
| `src/hooks/comment-checker/cli-runner.ts` | Thread `excludePatterns` through `processWithCli` and `processApplyPatchEditsWithCli` |
|
||||
| `src/hooks/comment-checker/hook.ts` | Pass `config.exclude_patterns` to CLI runner calls |
|
||||
| `src/hooks/comment-checker/cli.test.ts` | Add 6 new test cases for false positive scenarios |
|
||||
| `src/hooks/comment-checker/hook.apply-patch.test.ts` | Add test verifying exclude_patterns config threading |
|
||||
|
||||
## Usage
|
||||
|
||||
```jsonc
|
||||
// .opencode/oh-my-opencode.jsonc
|
||||
{
|
||||
"comment_checker": {
|
||||
"exclude_patterns": ["^Note:", "^TODO:", "^FIXME:"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Related
|
||||
|
||||
- Go binary repo: `code-yeongyu/go-claude-code-comment-checker` (needs corresponding `--exclude-pattern` flag support)
|
||||
@@ -1,75 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## Gate A: CI (`ci.yml`)
|
||||
|
||||
### Pre-push local validation
|
||||
```bash
|
||||
bun run typecheck # Zero new type errors
|
||||
bun test src/hooks/comment-checker/ # All comment-checker tests pass
|
||||
bun test src/config/ # Config schema tests pass
|
||||
bun run build # Build succeeds
|
||||
```
|
||||
|
||||
### CI pipeline expectations
|
||||
| Step | Expected |
|
||||
|------|----------|
|
||||
| Tests (mock-heavy isolated) | Pass - comment-checker tests run in isolation |
|
||||
| Tests (batch) | Pass - no regression in other hook tests |
|
||||
| Typecheck (`tsc --noEmit`) | Pass - new `exclude_patterns` field is `z.array(z.string()).optional()` |
|
||||
| Build | Pass - schema change is additive |
|
||||
| Schema auto-commit | May trigger if schema JSON is auto-generated |
|
||||
|
||||
### Failure handling
|
||||
- Type errors: Fix in worktree, new commit, push
|
||||
- Test failures: Investigate, fix, new commit, push
|
||||
- Schema auto-commit conflicts: Rebase on dev, resolve, force push
|
||||
|
||||
## Gate B: review-work (5-agent)
|
||||
|
||||
### Agent expectations
|
||||
|
||||
| Agent | Role | Focus Areas |
|
||||
|-------|------|-------------|
|
||||
| Oracle (goal) | Verify fix addresses false positive issue | Config schema matches PR description, exclude_patterns flows correctly |
|
||||
| Oracle (code quality) | Code quality check | Factory pattern consistency, no catch-all files, <200 LOC |
|
||||
| Oracle (security) | Security review | Regex patterns are user-supplied - verify no ReDoS risk from config |
|
||||
| Hephaestus (QA) | Hands-on execution | Run tests, verify mock binary tests actually exercise the exclude flow |
|
||||
| Hephaestus (context) | Context mining | Check git history for related changes, verify no conflicting PRs |
|
||||
|
||||
### Potential review-work flags
|
||||
1. **ReDoS concern**: User-supplied regex patterns in `exclude_patterns` could theoretically cause ReDoS in the Go binary. Mitigation: the patterns are passed as CLI args, Go's `regexp` package is RE2-based (linear time guarantee).
|
||||
2. **Breaking change check**: Adding optional field to config schema is non-breaking (Zod `z.optional()` fills default).
|
||||
3. **Go binary dependency**: The `--exclude-pattern` flag must exist in the Go binary for this to work. If the binary doesn't support it yet, the patterns are silently ignored (binary treats unknown flags differently).
|
||||
|
||||
### Failure handling
|
||||
- If any Oracle flags issues: address feedback, push new commit, re-run review-work
|
||||
- If Hephaestus QA finds test gaps: add missing tests, push, re-verify
|
||||
|
||||
## Gate C: Cubic (`cubic-dev-ai[bot]`)
|
||||
|
||||
### Expected review focus
|
||||
- Schema change additive and backward-compatible
|
||||
- Parameter threading is mechanical and low-risk
|
||||
- Tests use mock binaries (shell scripts) - standard project pattern per `cli.test.ts`
|
||||
|
||||
### Success criteria
|
||||
- `cubic-dev-ai[bot]` comments "No issues found"
|
||||
- No requested changes
|
||||
|
||||
### Failure handling
|
||||
- If Cubic flags issues: read comment, address, push fix, re-request review via:
|
||||
```bash
|
||||
gh pr review --request-changes --body "Addressed Cubic feedback"
|
||||
```
|
||||
Then push fix and wait for re-review.
|
||||
|
||||
## Post-merge verification
|
||||
|
||||
1. Confirm squash merge landed on `dev`
|
||||
2. Verify CI passes on `dev` branch post-merge
|
||||
3. Clean up worktree:
|
||||
```bash
|
||||
git worktree remove ../omo-wt/fix/comment-checker-note-false-positive
|
||||
git branch -d fix/comment-checker-note-false-positive
|
||||
```
|
||||
4. File issue on `code-yeongyu/go-claude-code-comment-checker` to add `--exclude-pattern` flag support and relax the `note:` regex upstream
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 570000, "total_duration_seconds": 570}
|
||||
@@ -1,10 +0,0 @@
|
||||
{
|
||||
"run_id": "eval-5-without_skill",
|
||||
"expectations": [
|
||||
{"text": "Plan uses git worktree in a sibling directory", "passed": false, "evidence": "git checkout -b, no worktree"},
|
||||
{"text": "References actual comment-checker hook files", "passed": true, "evidence": "Deep analysis of Go binary, tree-sitter, formatter.go, agent_memo.go with line numbers"},
|
||||
{"text": "Adds test cases for Note: false positive scenarios", "passed": true, "evidence": "Detailed test cases distinguishing legit vs AI slop patterns"},
|
||||
{"text": "Verification loop includes all 3 gates", "passed": false, "evidence": "Only bun test and typecheck. No review-work or Cubic."},
|
||||
{"text": "Only modifies regex and adds tests — no unrelated changes", "passed": true, "evidence": "Adds allowed-prefix filter module — focused approach with config extension"}
|
||||
]
|
||||
}
|
||||
@@ -1,529 +0,0 @@
|
||||
# Code Changes: comment-checker false positive fix
|
||||
|
||||
## Change 1: Extend config schema
|
||||
|
||||
**File: `src/config/schema/comment-checker.ts`**
|
||||
|
||||
```typescript
|
||||
// BEFORE
|
||||
import { z } from "zod"
|
||||
|
||||
export const CommentCheckerConfigSchema = z.object({
|
||||
/** Custom prompt to replace the default warning message. Use {{comments}} placeholder for detected comments XML. */
|
||||
custom_prompt: z.string().optional(),
|
||||
})
|
||||
|
||||
export type CommentCheckerConfig = z.infer<typeof CommentCheckerConfigSchema>
|
||||
```
|
||||
|
||||
```typescript
|
||||
// AFTER
|
||||
import { z } from "zod"
|
||||
|
||||
const DEFAULT_ALLOWED_COMMENT_PREFIXES = [
|
||||
"note:",
|
||||
"todo:",
|
||||
"fixme:",
|
||||
"hack:",
|
||||
"xxx:",
|
||||
"warning:",
|
||||
"important:",
|
||||
"bug:",
|
||||
"optimize:",
|
||||
"workaround:",
|
||||
"safety:",
|
||||
"security:",
|
||||
"perf:",
|
||||
"see:",
|
||||
"ref:",
|
||||
"cf.",
|
||||
]
|
||||
|
||||
export const CommentCheckerConfigSchema = z.object({
|
||||
/** Custom prompt to replace the default warning message. Use {{comments}} placeholder for detected comments XML. */
|
||||
custom_prompt: z.string().optional(),
|
||||
/** Comment prefixes considered legitimate (not AI slop). Case-insensitive. Defaults include Note:, TODO:, FIXME:, etc. */
|
||||
allowed_comment_prefixes: z.array(z.string()).optional().default(DEFAULT_ALLOWED_COMMENT_PREFIXES),
|
||||
})
|
||||
|
||||
export type CommentCheckerConfig = z.infer<typeof CommentCheckerConfigSchema>
|
||||
```
|
||||
|
||||
## Change 2: Create allowed-prefix-filter module
|
||||
|
||||
**File: `src/hooks/comment-checker/allowed-prefix-filter.ts`** (NEW)
|
||||
|
||||
```typescript
|
||||
const COMMENT_XML_REGEX = /<comment\s+line-number="\d+">([\s\S]*?)<\/comment>/g
|
||||
const COMMENTS_BLOCK_REGEX = /<comments\s+file="[^"]*">\s*([\s\S]*?)\s*<\/comments>/g
|
||||
const AGENT_MEMO_HEADER_REGEX = /🚨 AGENT MEMO COMMENT DETECTED.*?---\n\n/s
|
||||
|
||||
function stripCommentPrefix(text: string): string {
|
||||
let stripped = text.trim()
|
||||
for (const prefix of ["//", "#", "/*", "--", "*"]) {
|
||||
if (stripped.startsWith(prefix)) {
|
||||
stripped = stripped.slice(prefix.length).trim()
|
||||
break
|
||||
}
|
||||
}
|
||||
return stripped
|
||||
}
|
||||
|
||||
function isAllowedComment(commentText: string, allowedPrefixes: string[]): boolean {
|
||||
const stripped = stripCommentPrefix(commentText).toLowerCase()
|
||||
return allowedPrefixes.some((prefix) => stripped.startsWith(prefix.toLowerCase()))
|
||||
}
|
||||
|
||||
function extractCommentTexts(xmlBlock: string): string[] {
|
||||
const texts: string[] = []
|
||||
let match: RegExpExecArray | null
|
||||
const regex = new RegExp(COMMENT_XML_REGEX.source, COMMENT_XML_REGEX.flags)
|
||||
while ((match = regex.exec(xmlBlock)) !== null) {
|
||||
texts.push(match[1])
|
||||
}
|
||||
return texts
|
||||
}
|
||||
|
||||
export function filterAllowedComments(
|
||||
message: string,
|
||||
allowedPrefixes: string[],
|
||||
): { hasRemainingComments: boolean; filteredMessage: string } {
|
||||
if (!message || allowedPrefixes.length === 0) {
|
||||
return { hasRemainingComments: true, filteredMessage: message }
|
||||
}
|
||||
|
||||
const commentTexts = extractCommentTexts(message)
|
||||
|
||||
if (commentTexts.length === 0) {
|
||||
return { hasRemainingComments: true, filteredMessage: message }
|
||||
}
|
||||
|
||||
const disallowedComments = commentTexts.filter(
|
||||
(text) => !isAllowedComment(text, allowedPrefixes),
|
||||
)
|
||||
|
||||
if (disallowedComments.length === 0) {
|
||||
return { hasRemainingComments: false, filteredMessage: "" }
|
||||
}
|
||||
|
||||
if (disallowedComments.length === commentTexts.length) {
|
||||
return { hasRemainingComments: true, filteredMessage: message }
|
||||
}
|
||||
|
||||
let filteredMessage = message
|
||||
for (const text of commentTexts) {
|
||||
if (isAllowedComment(text, allowedPrefixes)) {
|
||||
const escapedText = text.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")
|
||||
const lineRegex = new RegExp(`\\s*<comment\\s+line-number="\\d+">${escapedText}</comment>\\n?`, "g")
|
||||
filteredMessage = filteredMessage.replace(lineRegex, "")
|
||||
}
|
||||
}
|
||||
|
||||
filteredMessage = filteredMessage.replace(AGENT_MEMO_HEADER_REGEX, "")
|
||||
|
||||
return { hasRemainingComments: true, filteredMessage }
|
||||
}
|
||||
```
|
||||
|
||||
## Change 3: Thread config through cli-runner.ts
|
||||
|
||||
**File: `src/hooks/comment-checker/cli-runner.ts`**
|
||||
|
||||
```typescript
|
||||
// BEFORE (processWithCli signature and body)
|
||||
export async function processWithCli(
|
||||
input: { tool: string; sessionID: string; callID: string },
|
||||
pendingCall: PendingCall,
|
||||
output: { output: string },
|
||||
cliPath: string,
|
||||
customPrompt: string | undefined,
|
||||
debugLog: (...args: unknown[]) => void,
|
||||
): Promise<void> {
|
||||
await withCommentCheckerLock(async () => {
|
||||
// ...
|
||||
const result = await runCommentChecker(hookInput, cliPath, customPrompt)
|
||||
if (result.hasComments && result.message) {
|
||||
debugLog("CLI detected comments, appending message")
|
||||
output.output += `\n\n${result.message}`
|
||||
} else {
|
||||
debugLog("CLI: no comments detected")
|
||||
}
|
||||
}, undefined, debugLog)
|
||||
}
|
||||
```
|
||||
|
||||
```typescript
|
||||
// AFTER
|
||||
import { filterAllowedComments } from "./allowed-prefix-filter"
|
||||
|
||||
export async function processWithCli(
|
||||
input: { tool: string; sessionID: string; callID: string },
|
||||
pendingCall: PendingCall,
|
||||
output: { output: string },
|
||||
cliPath: string,
|
||||
customPrompt: string | undefined,
|
||||
allowedPrefixes: string[],
|
||||
debugLog: (...args: unknown[]) => void,
|
||||
): Promise<void> {
|
||||
await withCommentCheckerLock(async () => {
|
||||
void input
|
||||
debugLog("using CLI mode with path:", cliPath)
|
||||
|
||||
const hookInput: HookInput = {
|
||||
session_id: pendingCall.sessionID,
|
||||
tool_name: pendingCall.tool.charAt(0).toUpperCase() + pendingCall.tool.slice(1),
|
||||
transcript_path: "",
|
||||
cwd: process.cwd(),
|
||||
hook_event_name: "PostToolUse",
|
||||
tool_input: {
|
||||
file_path: pendingCall.filePath,
|
||||
content: pendingCall.content,
|
||||
old_string: pendingCall.oldString,
|
||||
new_string: pendingCall.newString,
|
||||
edits: pendingCall.edits,
|
||||
},
|
||||
}
|
||||
|
||||
const result = await runCommentChecker(hookInput, cliPath, customPrompt)
|
||||
|
||||
if (result.hasComments && result.message) {
|
||||
const { hasRemainingComments, filteredMessage } = filterAllowedComments(
|
||||
result.message,
|
||||
allowedPrefixes,
|
||||
)
|
||||
if (hasRemainingComments && filteredMessage) {
|
||||
debugLog("CLI detected comments, appending filtered message")
|
||||
output.output += `\n\n${filteredMessage}`
|
||||
} else {
|
||||
debugLog("CLI: all detected comments matched allowed prefixes, suppressing")
|
||||
}
|
||||
} else {
|
||||
debugLog("CLI: no comments detected")
|
||||
}
|
||||
}, undefined, debugLog)
|
||||
}
|
||||
|
||||
// Same change applied to processApplyPatchEditsWithCli - add allowedPrefixes parameter
|
||||
export async function processApplyPatchEditsWithCli(
|
||||
sessionID: string,
|
||||
edits: ApplyPatchEdit[],
|
||||
output: { output: string },
|
||||
cliPath: string,
|
||||
customPrompt: string | undefined,
|
||||
allowedPrefixes: string[],
|
||||
debugLog: (...args: unknown[]) => void,
|
||||
): Promise<void> {
|
||||
debugLog("processing apply_patch edits:", edits.length)
|
||||
|
||||
for (const edit of edits) {
|
||||
await withCommentCheckerLock(async () => {
|
||||
const hookInput: HookInput = {
|
||||
session_id: sessionID,
|
||||
tool_name: "Edit",
|
||||
transcript_path: "",
|
||||
cwd: process.cwd(),
|
||||
hook_event_name: "PostToolUse",
|
||||
tool_input: {
|
||||
file_path: edit.filePath,
|
||||
old_string: edit.before,
|
||||
new_string: edit.after,
|
||||
},
|
||||
}
|
||||
|
||||
const result = await runCommentChecker(hookInput, cliPath, customPrompt)
|
||||
|
||||
if (result.hasComments && result.message) {
|
||||
const { hasRemainingComments, filteredMessage } = filterAllowedComments(
|
||||
result.message,
|
||||
allowedPrefixes,
|
||||
)
|
||||
if (hasRemainingComments && filteredMessage) {
|
||||
debugLog("CLI detected comments for apply_patch file:", edit.filePath)
|
||||
output.output += `\n\n${filteredMessage}`
|
||||
}
|
||||
}
|
||||
}, undefined, debugLog)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Change 4: Update hook.ts to pass config
|
||||
|
||||
**File: `src/hooks/comment-checker/hook.ts`**
|
||||
|
||||
```typescript
|
||||
// BEFORE (in tool.execute.after handler, around line 177)
|
||||
await processWithCli(input, pendingCall, output, cliPath, config?.custom_prompt, debugLog)
|
||||
|
||||
// AFTER
|
||||
const allowedPrefixes = config?.allowed_comment_prefixes ?? []
|
||||
await processWithCli(input, pendingCall, output, cliPath, config?.custom_prompt, allowedPrefixes, debugLog)
|
||||
```
|
||||
|
||||
```typescript
|
||||
// BEFORE (in apply_patch section, around line 147-154)
|
||||
await processApplyPatchEditsWithCli(
|
||||
input.sessionID,
|
||||
edits,
|
||||
output,
|
||||
cliPath,
|
||||
config?.custom_prompt,
|
||||
debugLog,
|
||||
)
|
||||
|
||||
// AFTER
|
||||
const allowedPrefixes = config?.allowed_comment_prefixes ?? []
|
||||
await processApplyPatchEditsWithCli(
|
||||
input.sessionID,
|
||||
edits,
|
||||
output,
|
||||
cliPath,
|
||||
config?.custom_prompt,
|
||||
allowedPrefixes,
|
||||
debugLog,
|
||||
)
|
||||
```
|
||||
|
||||
## Change 5: Test file for allowed-prefix-filter
|
||||
|
||||
**File: `src/hooks/comment-checker/allowed-prefix-filter.test.ts`** (NEW)
|
||||
|
||||
```typescript
|
||||
import { describe, test, expect } from "bun:test"
|
||||
|
||||
import { filterAllowedComments } from "./allowed-prefix-filter"
|
||||
|
||||
const DEFAULT_PREFIXES = [
|
||||
"note:", "todo:", "fixme:", "hack:", "xxx:", "warning:",
|
||||
"important:", "bug:", "optimize:", "workaround:", "safety:",
|
||||
"security:", "perf:", "see:", "ref:", "cf.",
|
||||
]
|
||||
|
||||
function buildMessage(comments: { line: number; text: string }[], filePath = "/tmp/test.ts"): string {
|
||||
const xml = comments
|
||||
.map((c) => `\t<comment line-number="${c.line}">${c.text}</comment>`)
|
||||
.join("\n")
|
||||
return `COMMENT/DOCSTRING DETECTED - IMMEDIATE ACTION REQUIRED\n\n` +
|
||||
`Your recent changes contain comments or docstrings, which triggered this hook.\n` +
|
||||
`Detected comments/docstrings:\n` +
|
||||
`<comments file="${filePath}">\n${xml}\n</comments>\n`
|
||||
}
|
||||
|
||||
describe("allowed-prefix-filter", () => {
|
||||
describe("#given default allowed prefixes", () => {
|
||||
describe("#when message contains only Note: comments", () => {
|
||||
test("#then should suppress the entire message", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 5, text: "// Note: Thread-safe implementation" },
|
||||
{ line: 12, text: "// NOTE: See RFC 7231 for details" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
expect(result.filteredMessage).toBe("")
|
||||
})
|
||||
})
|
||||
|
||||
describe("#when message contains only TODO/FIXME comments", () => {
|
||||
test("#then should suppress the entire message", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 3, text: "// TODO: implement caching" },
|
||||
{ line: 7, text: "// FIXME: race condition here" },
|
||||
{ line: 15, text: "# HACK: workaround for upstream bug" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
expect(result.filteredMessage).toBe("")
|
||||
})
|
||||
})
|
||||
|
||||
describe("#when message contains only AI slop comments", () => {
|
||||
test("#then should keep the entire message", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 2, text: "// Added new validation logic" },
|
||||
{ line: 8, text: "// Refactored for better performance" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(true)
|
||||
expect(result.filteredMessage).toBe(message)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#when message contains mix of legitimate and slop comments", () => {
|
||||
test("#then should keep message but remove allowed comment XML entries", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 5, text: "// Note: Thread-safe implementation" },
|
||||
{ line: 10, text: "// Changed from old API to new API" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(true)
|
||||
expect(result.filteredMessage).not.toContain("Thread-safe implementation")
|
||||
expect(result.filteredMessage).toContain("Changed from old API to new API")
|
||||
})
|
||||
})
|
||||
|
||||
describe("#when Note: comment has lowercase prefix", () => {
|
||||
test("#then should still be treated as allowed (case-insensitive)", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 1, text: "// note: this is case insensitive" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#when comment uses hash prefix", () => {
|
||||
test("#then should strip prefix before matching", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 1, text: "# Note: Python style comment" },
|
||||
{ line: 5, text: "# TODO: something to do" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#when comment has Security: prefix", () => {
|
||||
test("#then should be treated as allowed", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 1, text: "// Security: validate input before processing" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
})
|
||||
})
|
||||
|
||||
describe("#when comment has Warning: prefix", () => {
|
||||
test("#then should be treated as allowed", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 1, text: "// WARNING: This mutates the input array" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given empty allowed prefixes", () => {
|
||||
describe("#when any comments are detected", () => {
|
||||
test("#then should pass through unfiltered", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 1, text: "// Note: this should pass through" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, [])
|
||||
|
||||
expect(result.hasRemainingComments).toBe(true)
|
||||
expect(result.filteredMessage).toBe(message)
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given custom allowed prefixes", () => {
|
||||
describe("#when comment matches custom prefix", () => {
|
||||
test("#then should suppress it", () => {
|
||||
const message = buildMessage([
|
||||
{ line: 1, text: "// PERF: O(n log n) complexity" },
|
||||
])
|
||||
|
||||
const result = filterAllowedComments(message, ["perf:"])
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given empty message", () => {
|
||||
describe("#when filterAllowedComments is called", () => {
|
||||
test("#then should return hasRemainingComments true with empty string", () => {
|
||||
const result = filterAllowedComments("", DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(true)
|
||||
expect(result.filteredMessage).toBe("")
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe("#given message with agent memo header", () => {
|
||||
describe("#when all flagged comments are legitimate Note: comments", () => {
|
||||
test("#then should suppress agent memo header along with comments", () => {
|
||||
const message =
|
||||
"🚨 AGENT MEMO COMMENT DETECTED - CODE SMELL ALERT 🚨\n\n" +
|
||||
"⚠️ AGENT MEMO COMMENTS DETECTED - THIS IS A CODE SMELL ⚠️\n\n" +
|
||||
"You left \"memo-style\" comments...\n\n---\n\n" +
|
||||
"Your recent changes contain comments...\n" +
|
||||
"Detected comments/docstrings:\n" +
|
||||
'<comments file="/tmp/test.ts">\n' +
|
||||
'\t<comment line-number="5">// Note: Thread-safe</comment>\n' +
|
||||
"</comments>\n"
|
||||
|
||||
const result = filterAllowedComments(message, DEFAULT_PREFIXES)
|
||||
|
||||
expect(result.hasRemainingComments).toBe(false)
|
||||
expect(result.filteredMessage).toBe("")
|
||||
})
|
||||
})
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
## Change 6: Update existing test for new parameter
|
||||
|
||||
**File: `src/hooks/comment-checker/hook.apply-patch.test.ts`**
|
||||
|
||||
The `processApplyPatchEditsWithCli` mock needs to account for the new `allowedPrefixes` parameter:
|
||||
|
||||
```typescript
|
||||
// BEFORE (line 58)
|
||||
expect(processApplyPatchEditsWithCli).toHaveBeenCalledWith(
|
||||
"ses_test",
|
||||
[
|
||||
{ filePath: "/repo/src/a.ts", before: "const a = 1\n", after: "// comment\nconst a = 1\n" },
|
||||
{ filePath: "/repo/src/new.ts", before: "const b = 1\n", after: "// moved comment\nconst b = 1\n" },
|
||||
],
|
||||
expect.any(Object),
|
||||
"/tmp/fake-comment-checker",
|
||||
undefined,
|
||||
expect.any(Function),
|
||||
)
|
||||
|
||||
// AFTER - add allowed_comment_prefixes argument
|
||||
expect(processApplyPatchEditsWithCli).toHaveBeenCalledWith(
|
||||
"ses_test",
|
||||
[
|
||||
{ filePath: "/repo/src/a.ts", before: "const a = 1\n", after: "// comment\nconst a = 1\n" },
|
||||
{ filePath: "/repo/src/new.ts", before: "const b = 1\n", after: "// moved comment\nconst b = 1\n" },
|
||||
],
|
||||
expect.any(Object),
|
||||
"/tmp/fake-comment-checker",
|
||||
undefined,
|
||||
expect.any(Array),
|
||||
expect.any(Function),
|
||||
)
|
||||
```
|
||||
|
||||
## Summary of all touched files
|
||||
|
||||
| File | Action | Description |
|
||||
|------|--------|-------------|
|
||||
| `src/config/schema/comment-checker.ts` | Modified | Add `allowed_comment_prefixes` with defaults |
|
||||
| `src/hooks/comment-checker/allowed-prefix-filter.ts` | **New** | Post-processing filter for legitimate comment prefixes |
|
||||
| `src/hooks/comment-checker/allowed-prefix-filter.test.ts` | **New** | 11 test cases covering false positives and edge cases |
|
||||
| `src/hooks/comment-checker/cli-runner.ts` | Modified | Thread `allowedPrefixes` param, apply filter after binary result |
|
||||
| `src/hooks/comment-checker/hook.ts` | Modified | Pass `allowed_comment_prefixes` from config to CLI runner |
|
||||
| `src/hooks/comment-checker/hook.apply-patch.test.ts` | Modified | Update mock assertions for new parameter |
|
||||
@@ -1,127 +0,0 @@
|
||||
# Execution Plan: Relax comment-checker hook false positives
|
||||
|
||||
## Problem Analysis
|
||||
|
||||
The comment-checker hook delegates to an external Go binary (`code-yeongyu/go-claude-code-comment-checker`). The binary:
|
||||
1. Detects ALL comments in written/edited code using tree-sitter
|
||||
2. Filters out only BDD markers, linter directives, and shebangs
|
||||
3. Flags every remaining comment as problematic (exit code 2)
|
||||
4. In the output formatter (`formatter.go`), uses `AgentMemoFilter` to categorize comments for display
|
||||
|
||||
The `AgentMemoFilter` in `pkg/filters/agent_memo.go` contains the overly aggressive regex:
|
||||
```go
|
||||
regexp.MustCompile(`(?i)^[\s#/*-]*note:\s*\w`),
|
||||
```
|
||||
|
||||
This matches ANY comment starting with `Note:` (case-insensitive) followed by a word character, causing legitimate comments like `// Note: Thread-safe implementation` or `// NOTE: See RFC 7231` to be classified as "AGENT MEMO" AI slop with an aggressive warning banner.
|
||||
|
||||
Additionally, the binary flags ALL non-filtered comments (not just agent memos), so even without the `Note:` regex, `// Note: ...` comments would still be flagged as generic "COMMENT DETECTED."
|
||||
|
||||
## Architecture Understanding
|
||||
|
||||
```
|
||||
TypeScript (oh-my-opencode) Go Binary (go-claude-code-comment-checker)
|
||||
───────────────────────────── ──────────────────────────────────────────
|
||||
hook.ts main.go
|
||||
├─ tool.execute.before ├─ Read JSON from stdin
|
||||
│ └─ registerPendingCall() ├─ Detect comments (tree-sitter)
|
||||
└─ tool.execute.after ├─ applyFilters (BDD, Directive, Shebang)
|
||||
└─ processWithCli() ├─ FormatHookMessage (uses AgentMemoFilter for display)
|
||||
└─ runCommentChecker() └─ exit 0 (clean) or exit 2 (comments found, message on stderr)
|
||||
└─ spawn binary, pipe JSON
|
||||
└─ read stderr → message
|
||||
└─ append to output
|
||||
```
|
||||
|
||||
Key files in oh-my-opencode:
|
||||
- `src/hooks/comment-checker/hook.ts` - Hook factory, registers before/after handlers
|
||||
- `src/hooks/comment-checker/cli-runner.ts` - Orchestrates CLI invocation, semaphore
|
||||
- `src/hooks/comment-checker/cli.ts` - Binary resolution, process spawning, timeout handling
|
||||
- `src/hooks/comment-checker/types.ts` - PendingCall, CommentInfo types
|
||||
- `src/config/schema/comment-checker.ts` - Config schema (currently only `custom_prompt`)
|
||||
|
||||
Key files in Go binary:
|
||||
- `pkg/filters/agent_memo.go` - Contains the aggressive `note:\s*\w` regex (line 20)
|
||||
- `pkg/output/formatter.go` - Uses AgentMemoFilter to add "AGENT MEMO" warnings
|
||||
- `cmd/comment-checker/main.go` - Filter pipeline (BDD + Directive + Shebang only)
|
||||
|
||||
## Step-by-Step Plan
|
||||
|
||||
### Step 1: Create feature branch
|
||||
```bash
|
||||
git checkout dev
|
||||
git pull origin dev
|
||||
git checkout -b fix/comment-checker-note-false-positive
|
||||
```
|
||||
|
||||
### Step 2: Extend CommentCheckerConfigSchema
|
||||
**File: `src/config/schema/comment-checker.ts`**
|
||||
|
||||
Add `allowed_comment_prefixes` field with sensible defaults. This lets users configure which comment prefixes should be treated as legitimate (not AI slop).
|
||||
|
||||
### Step 3: Add a post-processing filter in cli-runner.ts
|
||||
**File: `src/hooks/comment-checker/cli-runner.ts`**
|
||||
|
||||
After the Go binary returns its result, parse the stderr message to identify and suppress comments that match allowed prefixes. The binary's output contains XML like:
|
||||
```xml
|
||||
<comments file="/path/to/file.ts">
|
||||
<comment line-number="5">// Note: Thread-safe</comment>
|
||||
</comments>
|
||||
```
|
||||
|
||||
Add a function `filterAllowedComments()` that:
|
||||
1. Extracts `<comment>` elements from the message
|
||||
2. Checks if the comment text matches any allowed prefix pattern
|
||||
3. If ALL flagged comments match allowed patterns, suppress the entire warning
|
||||
4. If some comments are legitimate and some aren't, rebuild the message without the legitimate ones
|
||||
|
||||
### Step 4: Create dedicated filter module
|
||||
**File: `src/hooks/comment-checker/allowed-prefix-filter.ts`** (new)
|
||||
|
||||
Extract the filtering logic into its own module per the 200 LOC / single-responsibility rule.
|
||||
|
||||
### Step 5: Pass allowed_comment_prefixes through the hook chain
|
||||
**File: `src/hooks/comment-checker/hook.ts`**
|
||||
|
||||
Thread the `allowed_comment_prefixes` config from `createCommentCheckerHooks()` down to `processWithCli()` and `processApplyPatchEditsWithCli()`.
|
||||
|
||||
### Step 6: Add test cases
|
||||
**File: `src/hooks/comment-checker/allowed-prefix-filter.test.ts`** (new)
|
||||
|
||||
Test cases covering:
|
||||
- `// Note: Thread-safe implementation` - should NOT be flagged (false positive)
|
||||
- `// NOTE: See RFC 7231 for details` - should NOT be flagged
|
||||
- `// Note: changed from X to Y` - SHOULD still be flagged (genuine AI slop)
|
||||
- `// TODO: implement caching` - should NOT be flagged
|
||||
- `// FIXME: race condition` - should NOT be flagged
|
||||
- `// HACK: workaround for upstream bug` - should NOT be flagged
|
||||
- `// Added new validation logic` - SHOULD be flagged
|
||||
- Custom allowed patterns from config
|
||||
|
||||
**File: `src/hooks/comment-checker/cli-runner.test.ts`** (new or extend cli.test.ts)
|
||||
|
||||
Integration-level tests for the post-processing pipeline.
|
||||
|
||||
### Step 7: Verify
|
||||
```bash
|
||||
bun test src/hooks/comment-checker/
|
||||
bun run typecheck
|
||||
```
|
||||
|
||||
### Step 8: Commit and push
|
||||
```bash
|
||||
git add -A
|
||||
git commit -m "fix(comment-checker): add allowed-prefix filter to reduce false positives on Note: comments"
|
||||
git push -u origin fix/comment-checker-note-false-positive
|
||||
```
|
||||
|
||||
### Step 9: Create PR
|
||||
```bash
|
||||
gh pr create --title "fix(comment-checker): reduce false positives for legitimate Note: comments" --body-file /tmp/pr-body.md --base dev
|
||||
```
|
||||
|
||||
### Step 10 (Follow-up): Upstream Go binary fix
|
||||
File an issue or PR on `code-yeongyu/go-claude-code-comment-checker` to:
|
||||
1. Relax `(?i)^[\s#/*-]*note:\s*\w` to be more specific (e.g., `note:\s*(changed|modified|updated|added|removed|implemented|refactored)`)
|
||||
2. Add a dedicated `LegitimateCommentFilter` to the filter pipeline in `main.go`
|
||||
3. Support `--allow-prefix` CLI flag for external configuration
|
||||
@@ -1,42 +0,0 @@
|
||||
## Summary
|
||||
|
||||
- Add `allowed_comment_prefixes` config to `CommentCheckerConfigSchema` with sensible defaults (Note:, TODO:, FIXME:, HACK:, WARNING:, etc.)
|
||||
- Add post-processing filter in `allowed-prefix-filter.ts` that suppresses false positives from the Go binary's output before appending to tool output
|
||||
- Add 11 test cases covering false positive scenarios (Note:, TODO:, FIXME:, case-insensitivity, mixed comments, agent memo header suppression)
|
||||
|
||||
## Problem
|
||||
|
||||
The comment-checker hook's upstream Go binary (`go-claude-code-comment-checker`) flags ALL non-filtered comments as problematic. Its `AgentMemoFilter` regex `(?i)^[\s#/*-]*note:\s*\w` classifies any `Note:` comment as AI-generated "agent memo" slop, triggering an aggressive warning banner.
|
||||
|
||||
This causes false positives for legitimate, widely-used comment patterns:
|
||||
```typescript
|
||||
// Note: Thread-safe implementation required due to concurrent access
|
||||
// NOTE: See RFC 7231 section 6.5.4 for 404 semantics
|
||||
// Note: This timeout matches the upstream service SLA
|
||||
```
|
||||
|
||||
These are standard engineering documentation patterns, not AI slop.
|
||||
|
||||
## Solution
|
||||
|
||||
Rather than waiting for an upstream binary fix, this PR adds a configurable **post-processing filter** on the TypeScript side:
|
||||
|
||||
1. **Config**: `comment_checker.allowed_comment_prefixes` - array of case-insensitive prefixes (defaults: `note:`, `todo:`, `fixme:`, `hack:`, `warning:`, `important:`, `bug:`, etc.)
|
||||
2. **Filter**: After the Go binary returns flagged comments, `filterAllowedComments()` parses the XML output and suppresses comments matching allowed prefixes
|
||||
3. **Behavior**: If ALL flagged comments are legitimate → suppress entire warning. If mixed → remove only the legitimate entries from the XML, keep the warning for actual slop.
|
||||
|
||||
Users can customize via config:
|
||||
```jsonc
|
||||
{
|
||||
"comment_checker": {
|
||||
"allowed_comment_prefixes": ["note:", "todo:", "fixme:", "custom-prefix:"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Test Plan
|
||||
|
||||
- 11 new test cases in `allowed-prefix-filter.test.ts`
|
||||
- Updated assertion in `hook.apply-patch.test.ts` for new parameter
|
||||
- `bun test src/hooks/comment-checker/` passes
|
||||
- `bun run typecheck` clean
|
||||
@@ -1,120 +0,0 @@
|
||||
# Verification Strategy
|
||||
|
||||
## 1. Unit Tests
|
||||
|
||||
### New test file: `allowed-prefix-filter.test.ts`
|
||||
Run: `bun test src/hooks/comment-checker/allowed-prefix-filter.test.ts`
|
||||
|
||||
| # | Scenario | Input | Expected |
|
||||
|---|----------|-------|----------|
|
||||
| 1 | Only Note: comments (default prefixes) | `// Note: Thread-safe`, `// NOTE: See RFC` | `hasRemainingComments: false`, empty message |
|
||||
| 2 | Only TODO/FIXME/HACK (default prefixes) | `// TODO: impl`, `// FIXME: race`, `# HACK: workaround` | Suppressed |
|
||||
| 3 | Only AI slop comments | `// Added validation`, `// Refactored for perf` | Full message preserved |
|
||||
| 4 | Mixed legitimate + slop | `// Note: Thread-safe`, `// Changed from old to new` | Message kept, Note: entry removed from XML |
|
||||
| 5 | Case-insensitive Note: | `// note: lowercase test` | Suppressed |
|
||||
| 6 | Hash-prefixed comments | `# Note: Python`, `# TODO: something` | Suppressed (prefix stripped before matching) |
|
||||
| 7 | Security: prefix | `// Security: validate input` | Suppressed |
|
||||
| 8 | Warning: prefix | `// WARNING: mutates input` | Suppressed |
|
||||
| 9 | Empty allowed prefixes | `// Note: should pass through` | Full message preserved (no filtering) |
|
||||
| 10 | Custom prefix | `// PERF: O(n log n)` with `["perf:"]` | Suppressed |
|
||||
| 11 | Agent memo header + Note: | Full agent memo banner + `// Note: Thread-safe` | Entire message suppressed including banner |
|
||||
|
||||
### Existing test: `hook.apply-patch.test.ts`
|
||||
Run: `bun test src/hooks/comment-checker/hook.apply-patch.test.ts`
|
||||
|
||||
Verify the updated mock assertion accepts the new `allowedPrefixes` array parameter.
|
||||
|
||||
### Existing test: `cli.test.ts`
|
||||
Run: `bun test src/hooks/comment-checker/cli.test.ts`
|
||||
|
||||
Verify no regressions in binary spawning, timeout, and semaphore logic.
|
||||
|
||||
## 2. Type Checking
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
```
|
||||
|
||||
Verify:
|
||||
- `CommentCheckerConfigSchema` change propagates correctly to `CommentCheckerConfig` type
|
||||
- All call sites in `hook.ts` and `cli-runner.ts` pass the new parameter
|
||||
- `filterAllowedComments` return type matches usage in `cli-runner.ts`
|
||||
- No new type errors introduced
|
||||
|
||||
## 3. LSP Diagnostics
|
||||
|
||||
```bash
|
||||
# Check all changed files for errors
|
||||
lsp_diagnostics src/config/schema/comment-checker.ts
|
||||
lsp_diagnostics src/hooks/comment-checker/allowed-prefix-filter.ts
|
||||
lsp_diagnostics src/hooks/comment-checker/cli-runner.ts
|
||||
lsp_diagnostics src/hooks/comment-checker/hook.ts
|
||||
lsp_diagnostics src/hooks/comment-checker/allowed-prefix-filter.test.ts
|
||||
```
|
||||
|
||||
## 4. Full Test Suite
|
||||
|
||||
```bash
|
||||
bun test src/hooks/comment-checker/
|
||||
```
|
||||
|
||||
All 4 test files should pass:
|
||||
- `cli.test.ts` (existing - no regressions)
|
||||
- `pending-calls.test.ts` (existing - no regressions)
|
||||
- `hook.apply-patch.test.ts` (modified assertion)
|
||||
- `allowed-prefix-filter.test.ts` (new - all 11 cases)
|
||||
|
||||
## 5. Build Verification
|
||||
|
||||
```bash
|
||||
bun run build
|
||||
```
|
||||
|
||||
Ensure the new module is properly bundled and exported.
|
||||
|
||||
## 6. Integration Verification (Manual)
|
||||
|
||||
If binary is available locally:
|
||||
|
||||
```bash
|
||||
# Test with a file containing Note: comment
|
||||
echo '{"session_id":"test","tool_name":"Write","transcript_path":"","cwd":"/tmp","hook_event_name":"PostToolUse","tool_input":{"file_path":"/tmp/test.ts","content":"// Note: Thread-safe implementation\nconst x = 1"}}' | ~/.cache/oh-my-opencode/bin/comment-checker check
|
||||
echo "Exit code: $?"
|
||||
```
|
||||
|
||||
Expected: Binary returns exit 2 (comment detected), but the TypeScript post-filter should suppress it.
|
||||
|
||||
## 7. Config Validation
|
||||
|
||||
Test that config changes work:
|
||||
|
||||
```jsonc
|
||||
// .opencode/oh-my-opencode.jsonc
|
||||
{
|
||||
"comment_checker": {
|
||||
// Override: only allow Note: and TODO:
|
||||
"allowed_comment_prefixes": ["note:", "todo:"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Verify Zod schema accepts the config and defaults are applied when field is omitted.
|
||||
|
||||
## 8. Regression Checks
|
||||
|
||||
Verify the following still work correctly:
|
||||
- AI slop comments (`// Added new feature`, `// Refactored for performance`) are still flagged
|
||||
- BDD comments (`// given`, `// when`, `// then`) are still allowed (binary-side filter)
|
||||
- Linter directives (`// eslint-disable`, `// @ts-ignore`) are still allowed (binary-side filter)
|
||||
- Shebangs (`#!/usr/bin/env node`) are still allowed (binary-side filter)
|
||||
- `custom_prompt` config still works
|
||||
- Semaphore prevents concurrent comment-checker runs
|
||||
- Timeout handling (30s) still works
|
||||
|
||||
## 9. Edge Cases to Watch
|
||||
|
||||
- Empty message from binary (exit code 0) - filter should be no-op
|
||||
- Binary not available - hook gracefully degrades (existing behavior)
|
||||
- Message with no `<comment>` XML elements - filter passes through
|
||||
- Very long messages with many comments - regex performance
|
||||
- Comments containing XML-special characters (`<`, `>`, `&`) in text
|
||||
@@ -1 +0,0 @@
|
||||
{"total_tokens": null, "duration_ms": 399000, "total_duration_seconds": 399}
|
||||
File diff suppressed because one or more lines are too long
@@ -1,348 +0,0 @@
|
||||
---
|
||||
name: work-with-pr
|
||||
description: "Full PR lifecycle: git worktree → implement → atomic commits → PR creation → verification loop (CI + review-work + Cubic approval) → merge. Keeps iterating until ALL gates pass and PR is merged. Worktree auto-cleanup after merge. Use whenever implementation work needs to land as a PR. Triggers: 'create a PR', 'implement and PR', 'work on this and make a PR', 'implement issue', 'land this as a PR', 'work-with-pr', 'PR workflow', 'implement end to end', even when user just says 'implement X' if the context implies PR delivery."
|
||||
---
|
||||
|
||||
# Work With PR — Full PR Lifecycle
|
||||
|
||||
You are executing a complete PR lifecycle: from isolated worktree setup through implementation, PR creation, and an unbounded verification loop until the PR is merged. The loop has three gates — CI, review-work, and Cubic — and you keep fixing and pushing until all three pass simultaneously.
|
||||
|
||||
<architecture>
|
||||
|
||||
```
|
||||
Phase 0: Setup → Branch + worktree in sibling directory
|
||||
Phase 1: Implement → Do the work, atomic commits
|
||||
Phase 2: PR Creation → Push, create PR targeting dev
|
||||
Phase 3: Verify Loop → Unbounded iteration until ALL gates pass:
|
||||
├─ Gate A: CI → gh pr checks (bun test, typecheck, build)
|
||||
├─ Gate B: review-work → 5-agent parallel review
|
||||
└─ Gate C: Cubic → cubic-dev-ai[bot] "No issues found"
|
||||
Phase 4: Merge → Squash merge, worktree cleanup
|
||||
```
|
||||
|
||||
</architecture>
|
||||
|
||||
---
|
||||
|
||||
## Phase 0: Setup
|
||||
|
||||
Create an isolated worktree so the user's main working directory stays clean. This matters because the user may have uncommitted work, and checking out a branch would destroy it.
|
||||
|
||||
<setup>
|
||||
|
||||
### 1. Resolve repository context
|
||||
|
||||
```bash
|
||||
REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner)
|
||||
REPO_NAME=$(basename "$PWD")
|
||||
BASE_BRANCH="dev" # CI blocks PRs to master
|
||||
```
|
||||
|
||||
### 2. Create branch
|
||||
|
||||
If user provides a branch name, use it. Otherwise, derive from the task:
|
||||
|
||||
```bash
|
||||
# Auto-generate: feature/short-description or fix/short-description
|
||||
BRANCH_NAME="feature/$(echo "$TASK_SUMMARY" | tr '[:upper:] ' '[:lower:]-' | head -c 50)"
|
||||
git fetch origin "$BASE_BRANCH"
|
||||
git branch "$BRANCH_NAME" "origin/$BASE_BRANCH"
|
||||
```
|
||||
|
||||
### 3. Create worktree
|
||||
|
||||
Place worktrees as siblings to the repo — not inside it. This avoids git nested repo issues and keeps the working tree clean.
|
||||
|
||||
```bash
|
||||
WORKTREE_PATH="../${REPO_NAME}-wt/${BRANCH_NAME}"
|
||||
mkdir -p "$(dirname "$WORKTREE_PATH")"
|
||||
git worktree add "$WORKTREE_PATH" "$BRANCH_NAME"
|
||||
```
|
||||
|
||||
### 4. Set working context
|
||||
|
||||
All subsequent work happens inside the worktree. Install dependencies if needed:
|
||||
|
||||
```bash
|
||||
cd "$WORKTREE_PATH"
|
||||
# If bun project:
|
||||
[ -f "bun.lock" ] && bun install
|
||||
```
|
||||
|
||||
</setup>
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Implement
|
||||
|
||||
Do the actual implementation work inside the worktree. The agent using this skill does the work directly — no subagent delegation for the implementation itself.
|
||||
|
||||
**Scope discipline**: For bug fixes, stay minimal. Fix the bug, add a test for it, done. Do not refactor surrounding code, add config options, or "improve" things that aren't broken. The verification loop will catch regressions — trust the process.
|
||||
|
||||
<implementation>
|
||||
|
||||
### Commit strategy
|
||||
|
||||
Use the git-master skill's atomic commit principles. The reason for atomic commits: if CI fails on one change, you can isolate and fix it without unwinding everything.
|
||||
|
||||
```
|
||||
3+ files changed → 2+ commits minimum
|
||||
5+ files changed → 3+ commits minimum
|
||||
10+ files changed → 5+ commits minimum
|
||||
```
|
||||
|
||||
Each commit should pair implementation with its tests. Load `git-master` skill when committing:
|
||||
|
||||
```
|
||||
task(category="quick", load_skills=["git-master"], prompt="Commit the changes atomically following git-master conventions. Repository is at {WORKTREE_PATH}.")
|
||||
```
|
||||
|
||||
### Pre-push local validation
|
||||
|
||||
Before pushing, run the same checks CI will run. Catching failures locally saves a full CI round-trip (~3-5 min):
|
||||
|
||||
```bash
|
||||
bun run typecheck
|
||||
bun test
|
||||
bun run build
|
||||
```
|
||||
|
||||
Fix any failures before pushing. Each fix-commit cycle should be atomic.
|
||||
|
||||
</implementation>
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: PR Creation
|
||||
|
||||
<pr_creation>
|
||||
|
||||
### Push and create PR
|
||||
|
||||
```bash
|
||||
git push -u origin "$BRANCH_NAME"
|
||||
```
|
||||
|
||||
Create the PR using the project's template structure:
|
||||
|
||||
```bash
|
||||
gh pr create \
|
||||
--base "$BASE_BRANCH" \
|
||||
--head "$BRANCH_NAME" \
|
||||
--title "$PR_TITLE" \
|
||||
--body "$(cat <<'EOF'
|
||||
## Summary
|
||||
[1-3 sentences describing what this PR does and why]
|
||||
|
||||
## Changes
|
||||
[Bullet list of key changes]
|
||||
|
||||
## Testing
|
||||
- `bun run typecheck` ✅
|
||||
- `bun test` ✅
|
||||
- `bun run build` ✅
|
||||
|
||||
## Related Issues
|
||||
[Link to issue if applicable]
|
||||
EOF
|
||||
)"
|
||||
```
|
||||
|
||||
Capture the PR number:
|
||||
|
||||
```bash
|
||||
PR_NUMBER=$(gh pr view --json number -q .number)
|
||||
```
|
||||
|
||||
</pr_creation>
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Verification Loop
|
||||
|
||||
This is the core of the skill. Three gates must ALL pass for the PR to be ready. The loop has no iteration cap — keep going until done. Gate ordering is intentional: CI is cheapest/fastest, review-work is most thorough, Cubic is external and asynchronous.
|
||||
|
||||
<verify_loop>
|
||||
|
||||
```
|
||||
while true:
|
||||
1. Wait for CI → Gate A
|
||||
2. If CI fails → read logs, fix, commit, push, continue
|
||||
3. Run review-work → Gate B
|
||||
4. If review fails → fix blocking issues, commit, push, continue
|
||||
5. Check Cubic → Gate C
|
||||
6. If Cubic has issues → fix issues, commit, push, continue
|
||||
7. All three pass → break
|
||||
```
|
||||
|
||||
### Gate A: CI Checks
|
||||
|
||||
CI is the fastest feedback loop. Wait for it to complete, then parse results.
|
||||
|
||||
```bash
|
||||
# Wait for checks to start (GitHub needs a moment after push)
|
||||
# Then watch for completion
|
||||
gh pr checks "$PR_NUMBER" --watch --fail-fast
|
||||
```
|
||||
|
||||
**On failure**: Get the failed run logs to understand what broke:
|
||||
|
||||
```bash
|
||||
# Find the failed run
|
||||
RUN_ID=$(gh run list --branch "$BRANCH_NAME" --status failure --json databaseId --jq '.[0].databaseId')
|
||||
|
||||
# Get failed job logs
|
||||
gh run view "$RUN_ID" --log-failed
|
||||
```
|
||||
|
||||
Read the logs, fix the issue, commit atomically, push, and re-enter the loop.
|
||||
|
||||
### Gate B: review-work
|
||||
|
||||
The review-work skill launches 5 parallel sub-agents (goal verification, QA, code quality, security, context mining). All 5 must pass.
|
||||
|
||||
Invoke review-work after CI passes — there's no point reviewing code that doesn't build:
|
||||
|
||||
```
|
||||
task(
|
||||
category="unspecified-high",
|
||||
load_skills=["review-work"],
|
||||
run_in_background=false,
|
||||
description="Post-implementation review of PR changes",
|
||||
prompt="Review the implementation work on branch {BRANCH_NAME}. The worktree is at {WORKTREE_PATH}. Goal: {ORIGINAL_GOAL}. Constraints: {CONSTRAINTS}. Run command: bun run dev (or as appropriate)."
|
||||
)
|
||||
```
|
||||
|
||||
**On failure**: review-work reports blocking issues with specific files and line numbers. Fix each blocking issue, commit, push, and re-enter the loop from Gate A (since code changed, CI must re-run).
|
||||
|
||||
### Gate C: Cubic Approval
|
||||
|
||||
Cubic (`cubic-dev-ai[bot]`) is an automated review bot that comments on PRs. It does NOT use GitHub's APPROVED review state — instead it posts comments with issue counts and confidence scores.
|
||||
|
||||
**Approval signal**: The latest Cubic comment contains `**No issues found**` and confidence `**5/5**`.
|
||||
|
||||
**Issue signal**: The comment lists issues with file-level detail.
|
||||
|
||||
```bash
|
||||
# Get the latest Cubic review
|
||||
CUBIC_REVIEW=$(gh api "repos/${REPO}/pulls/${PR_NUMBER}/reviews" \
|
||||
--jq '[.[] | select(.user.login == "cubic-dev-ai[bot]")] | last | .body')
|
||||
|
||||
# Check if approved
|
||||
if echo "$CUBIC_REVIEW" | grep -q "No issues found"; then
|
||||
echo "Cubic: APPROVED"
|
||||
else
|
||||
echo "Cubic: ISSUES FOUND"
|
||||
echo "$CUBIC_REVIEW"
|
||||
fi
|
||||
```
|
||||
|
||||
**On issues**: Cubic's review body contains structured issue descriptions. Parse them, determine which are valid (some may be false positives), fix the valid ones, commit, push, re-enter from Gate A.
|
||||
|
||||
Cubic reviews are triggered automatically on PR updates. After pushing a fix, wait for the new review to appear before checking again. Use `gh api` polling with a conditional loop:
|
||||
|
||||
```bash
|
||||
# Wait for new Cubic review after push
|
||||
PUSH_TIME=$(date -u +%Y-%m-%dT%H:%M:%SZ)
|
||||
while true; do
|
||||
LATEST_REVIEW_TIME=$(gh api "repos/${REPO}/pulls/${PR_NUMBER}/reviews" \
|
||||
--jq '[.[] | select(.user.login == "cubic-dev-ai[bot]")] | last | .submitted_at')
|
||||
if [[ "$LATEST_REVIEW_TIME" > "$PUSH_TIME" ]]; then
|
||||
break
|
||||
fi
|
||||
# Use gh api call itself as the delay mechanism — each call takes ~1-2s
|
||||
# For longer waits, use: timeout 30 gh pr checks "$PR_NUMBER" --watch 2>/dev/null || true
|
||||
done
|
||||
```
|
||||
|
||||
### Iteration discipline
|
||||
|
||||
Each iteration through the loop:
|
||||
1. Fix ONLY the issues identified by the failing gate
|
||||
2. Commit atomically (one logical fix per commit)
|
||||
3. Push
|
||||
4. Re-enter from Gate A (code changed → full re-verification)
|
||||
|
||||
Avoid the temptation to "improve" unrelated code during fix iterations. Scope creep in the fix loop makes debugging harder and can introduce new failures.
|
||||
|
||||
</verify_loop>
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Merge & Cleanup
|
||||
|
||||
Once all three gates pass:
|
||||
|
||||
<merge_cleanup>
|
||||
|
||||
### Merge the PR
|
||||
|
||||
```bash
|
||||
# Squash merge to keep history clean
|
||||
gh pr merge "$PR_NUMBER" --squash --delete-branch
|
||||
```
|
||||
|
||||
### Clean up the worktree
|
||||
|
||||
The worktree served its purpose — remove it to avoid disk bloat:
|
||||
|
||||
```bash
|
||||
cd "$ORIGINAL_DIR" # Return to original working directory
|
||||
git worktree remove "$WORKTREE_PATH"
|
||||
# Prune any stale worktree references
|
||||
git worktree prune
|
||||
```
|
||||
|
||||
### Report completion
|
||||
|
||||
Summarize what happened:
|
||||
|
||||
```
|
||||
## PR Merged ✅
|
||||
|
||||
- **PR**: #{PR_NUMBER} — {PR_TITLE}
|
||||
- **Branch**: {BRANCH_NAME} → {BASE_BRANCH}
|
||||
- **Iterations**: {N} verification loops
|
||||
- **Gates passed**: CI ✅ | review-work ✅ | Cubic ✅
|
||||
- **Worktree**: cleaned up
|
||||
```
|
||||
|
||||
</merge_cleanup>
|
||||
|
||||
---
|
||||
|
||||
## Failure Recovery
|
||||
|
||||
<failure_recovery>
|
||||
|
||||
If you hit an unrecoverable error (e.g., merge conflict with base branch, infrastructure failure):
|
||||
|
||||
1. **Do NOT delete the worktree** — the user may want to inspect or continue manually
|
||||
2. Report what happened, what was attempted, and where things stand
|
||||
3. Include the worktree path so the user can resume
|
||||
|
||||
For merge conflicts:
|
||||
|
||||
```bash
|
||||
cd "$WORKTREE_PATH"
|
||||
git fetch origin "$BASE_BRANCH"
|
||||
git rebase "origin/$BASE_BRANCH"
|
||||
# Resolve conflicts, then continue the loop
|
||||
```
|
||||
|
||||
</failure_recovery>
|
||||
|
||||
---
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
| Violation | Why it fails | Severity |
|
||||
|-----------|-------------|----------|
|
||||
| Working in main worktree instead of isolated worktree | Pollutes user's working directory, may destroy uncommitted work | CRITICAL |
|
||||
| Pushing directly to dev/master | Bypasses review entirely | CRITICAL |
|
||||
| Skipping CI gate after code changes | review-work and Cubic may pass on stale code | CRITICAL |
|
||||
| Fixing unrelated code during verification loop | Scope creep causes new failures | HIGH |
|
||||
| Deleting worktree on failure | User loses ability to inspect/resume | HIGH |
|
||||
| Ignoring Cubic false positives without justification | Cubic issues should be evaluated, not blindly dismissed | MEDIUM |
|
||||
| Giant single commits | Harder to isolate failures, violates git-master principles | MEDIUM |
|
||||
| Not running local checks before push | Wastes CI time on obvious failures | MEDIUM |
|
||||
@@ -1,117 +0,0 @@
|
||||
---
|
||||
globs: ["**/*.ts", "**/*.tsx"]
|
||||
alwaysApply: false
|
||||
description: "Enforces strict modular code architecture: SRP, no monolithic index.ts, 200 LOC hard limit"
|
||||
---
|
||||
|
||||
<MANDATORY_ARCHITECTURE_RULE severity="BLOCKING" priority="HIGHEST">
|
||||
|
||||
# Modular Code Architecture — Zero Tolerance Policy
|
||||
|
||||
This rule is NON-NEGOTIABLE. Violations BLOCK all further work until resolved.
|
||||
|
||||
## Rule 1: index.ts is an ENTRY POINT, NOT a dumping ground
|
||||
|
||||
`index.ts` files MUST ONLY contain:
|
||||
- Re-exports (`export { ... } from "./module"`)
|
||||
- Factory function calls that compose modules
|
||||
- Top-level wiring/registration (hook registration, plugin setup)
|
||||
|
||||
`index.ts` MUST NEVER contain:
|
||||
- Business logic implementation
|
||||
- Helper/utility functions
|
||||
- Type definitions beyond simple re-exports
|
||||
- Multiple unrelated responsibilities mixed together
|
||||
|
||||
**If you find mixed logic in index.ts**: Extract each responsibility into its own dedicated file BEFORE making any other changes. This is not optional.
|
||||
|
||||
## Rule 2: No Catch-All Files — utils.ts / service.ts are CODE SMELLS
|
||||
|
||||
A single `utils.ts`, `helpers.ts`, `service.ts`, or `common.ts` is a **gravity well** — every unrelated function gets tossed in, and it grows into an untestable, unreviewable blob.
|
||||
|
||||
**These file names are BANNED as top-level catch-alls.** Instead:
|
||||
|
||||
| Anti-Pattern | Refactor To |
|
||||
|--------------|-------------|
|
||||
| `utils.ts` with `formatDate()`, `slugify()`, `retry()` | `date-formatter.ts`, `slugify.ts`, `retry.ts` |
|
||||
| `service.ts` handling auth + billing + notifications | `auth-service.ts`, `billing-service.ts`, `notification-service.ts` |
|
||||
| `helpers.ts` with 15 unrelated exports | One file per logical domain |
|
||||
|
||||
**Design for reusability from the start.** Each module should be:
|
||||
- **Independently importable** — no consumer should need to pull in unrelated code
|
||||
- **Self-contained** — its dependencies are explicit, not buried in a shared grab-bag
|
||||
- **Nameable by purpose** — the filename alone tells you what it does
|
||||
|
||||
If you catch yourself typing `utils.ts` or `service.ts`, STOP and name the file after what it actually does.
|
||||
|
||||
## Rule 3: Single Responsibility Principle — ABSOLUTE
|
||||
|
||||
Every `.ts` file MUST have exactly ONE clear, nameable responsibility.
|
||||
|
||||
**Self-test**: If you cannot describe the file's purpose in ONE short phrase (e.g., "parses YAML frontmatter", "matches rules against file paths"), the file does too much. Split it.
|
||||
|
||||
| Signal | Action |
|
||||
|--------|--------|
|
||||
| File has 2+ unrelated exported functions | **SPLIT NOW** — each into its own module |
|
||||
| File mixes I/O with pure logic | **SPLIT NOW** — separate side effects from computation |
|
||||
| File has both types and implementation | **SPLIT NOW** — types.ts + implementation.ts |
|
||||
| You need to scroll to understand the file | **SPLIT NOW** — it's too large |
|
||||
|
||||
## Rule 4: 200 LOC Hard Limit — CODE SMELL DETECTOR
|
||||
|
||||
Any `.ts`/`.tsx` file exceeding **200 lines of code** (excluding prompt strings, template literals containing prompts, and `.md` content) is an **immediate code smell**.
|
||||
|
||||
**When you detect a file > 200 LOC**:
|
||||
1. **STOP** current work
|
||||
2. **Identify** the multiple responsibilities hiding in the file
|
||||
3. **Extract** each responsibility into a focused module
|
||||
4. **Verify** each resulting file is < 200 LOC and has a single purpose
|
||||
5. **Resume** original work
|
||||
|
||||
Prompt-heavy files (agent definitions, skill definitions) where the bulk of content is template literal prompt text are EXEMPT from the LOC count — but their non-prompt logic must still be < 200 LOC.
|
||||
|
||||
### How to Count LOC
|
||||
|
||||
**Count these** (= actual logic):
|
||||
- Import statements
|
||||
- Variable/constant declarations
|
||||
- Function/class/interface/type definitions
|
||||
- Control flow (`if`, `for`, `while`, `switch`, `try/catch`)
|
||||
- Expressions, assignments, return statements
|
||||
- Closing braces `}` that belong to logic blocks
|
||||
|
||||
**Exclude these** (= not logic):
|
||||
- Blank lines
|
||||
- Comment-only lines (`//`, `/* */`, `/** */`)
|
||||
- Lines inside template literals that are prompt/instruction text (e.g., the string body of `` const prompt = `...` ``)
|
||||
- Lines inside multi-line strings used as documentation/prompt content
|
||||
|
||||
**Quick method**: Read the file → subtract blank lines, comment-only lines, and prompt string content → remaining count = LOC.
|
||||
|
||||
**Example**:
|
||||
```typescript
|
||||
// 1 import { foo } from "./foo"; ← COUNT
|
||||
// 2 ← SKIP (blank)
|
||||
// 3 // Helper for bar ← SKIP (comment)
|
||||
// 4 export function bar(x: number) { ← COUNT
|
||||
// 5 const prompt = ` ← COUNT (declaration)
|
||||
// 6 You are an assistant. ← SKIP (prompt text)
|
||||
// 7 Follow these rules: ← SKIP (prompt text)
|
||||
// 8 `; ← COUNT (closing)
|
||||
// 9 return process(prompt, x); ← COUNT
|
||||
// 10 } ← COUNT
|
||||
```
|
||||
→ LOC = **5** (lines 1, 4, 5, 9, 10). Not 10.
|
||||
|
||||
When in doubt, **round up** — err on the side of splitting.
|
||||
|
||||
## How to Apply
|
||||
|
||||
When reading, writing, or editing ANY `.ts`/`.tsx` file:
|
||||
|
||||
1. **Check the file you're touching** — does it violate any rule above?
|
||||
2. **If YES** — refactor FIRST, then proceed with your task
|
||||
3. **If creating a new file** — ensure it has exactly one responsibility and stays under 200 LOC
|
||||
4. **If adding code to an existing file** — verify the addition doesn't push the file past 200 LOC or add a second responsibility. If it does, extract into a new module.
|
||||
|
||||
</MANDATORY_ARCHITECTURE_RULE>
|
||||
257
AGENTS.md
257
AGENTS.md
@@ -1,157 +1,174 @@
|
||||
# oh-my-opencode — O P E N C O D E Plugin
|
||||
# PROJECT KNOWLEDGE BASE
|
||||
|
||||
**Generated:** 2026-03-06 | **Commit:** 7fe44024 | **Branch:** dev
|
||||
**Generated:** 2026-01-02T22:41:22+09:00
|
||||
**Commit:** d0694e5
|
||||
**Branch:** dev
|
||||
|
||||
## OVERVIEW
|
||||
|
||||
OpenCode plugin (npm: `oh-my-opencode`) that extends Claude Code (OpenCode fork) with multi-agent orchestration, 46 lifecycle hooks, 26 tools, skill/command/MCP systems, and Claude Code compatibility. 1268 TypeScript files, 160k LOC.
|
||||
OpenCode plugin implementing Claude Code/AmpCode features. Multi-model agent orchestration (GPT-5.2, Claude, Gemini, Grok), LSP tools (11), AST-Grep search, MCP integrations (context7, websearch_exa, grep_app). "oh-my-zsh" for OpenCode.
|
||||
|
||||
## STRUCTURE
|
||||
|
||||
```
|
||||
oh-my-opencode/
|
||||
├── src/
|
||||
│ ├── index.ts # Plugin entry: loadConfig → createManagers → createTools → createHooks → createPluginInterface
|
||||
│ ├── plugin-config.ts # JSONC multi-level config: user → project → defaults (Zod v4)
|
||||
│ ├── agents/ # 11 agents (Sisyphus, Hephaestus, Oracle, Librarian, Explore, Atlas, Prometheus, Metis, Momus, Multimodal-Looker, Sisyphus-Junior)
|
||||
│ ├── hooks/ # 46 hooks across 45 directories + 11 standalone files
|
||||
│ ├── tools/ # 26 tools across 15 directories
|
||||
│ ├── features/ # 19 feature modules (background-agent, skill-loader, tmux, MCP-OAuth, etc.)
|
||||
│ ├── shared/ # 95+ utility files in 13 categories
|
||||
│ ├── config/ # Zod v4 schema system (24 files)
|
||||
│ ├── cli/ # CLI: install, run, doctor, mcp-oauth (Commander.js)
|
||||
│ ├── mcp/ # 3 built-in remote MCPs (websearch, context7, grep_app)
|
||||
│ ├── plugin/ # 8 OpenCode hook handlers + 46 hook composition
|
||||
│ └── plugin-handlers/ # 6-phase config loading pipeline
|
||||
├── packages/ # Monorepo: cli-runner, 12 platform binaries
|
||||
└── local-ignore/ # Dev-only test fixtures
|
||||
│ ├── agents/ # AI agents (7): Sisyphus, oracle, librarian, explore, frontend, document-writer, multimodal-looker
|
||||
│ ├── hooks/ # 22 lifecycle hooks - see src/hooks/AGENTS.md
|
||||
│ ├── tools/ # LSP, AST-Grep, Grep, Glob, session mgmt - see src/tools/AGENTS.md
|
||||
│ ├── features/ # Claude Code compat layer - see src/features/AGENTS.md
|
||||
│ ├── auth/ # Google Antigravity OAuth - see src/auth/AGENTS.md
|
||||
│ ├── shared/ # Cross-cutting utilities - see src/shared/AGENTS.md
|
||||
│ ├── cli/ # CLI installer, doctor - see src/cli/AGENTS.md
|
||||
│ ├── mcp/ # MCP configs: context7, grep_app
|
||||
│ ├── config/ # Zod schema, TypeScript types
|
||||
│ └── index.ts # Main plugin entry (464 lines)
|
||||
├── script/ # build-schema.ts, publish.ts, generate-changelog.ts
|
||||
├── assets/ # JSON schema
|
||||
└── dist/ # Build output (ESM + .d.ts)
|
||||
```
|
||||
|
||||
## INITIALIZATION FLOW
|
||||
|
||||
```
|
||||
OhMyOpenCodePlugin(ctx)
|
||||
├─→ loadPluginConfig() # JSONC parse → project/user merge → Zod validate → migrate
|
||||
├─→ createManagers() # TmuxSessionManager, BackgroundManager, SkillMcpManager, ConfigHandler
|
||||
├─→ createTools() # SkillContext + AvailableCategories + ToolRegistry (26 tools)
|
||||
├─→ createHooks() # 3-tier: Core(37) + Continuation(7) + Skill(2) = 46 hooks
|
||||
└─→ createPluginInterface() # 8 OpenCode hook handlers → PluginInterface
|
||||
```
|
||||
|
||||
## 8 OPENCODE HOOK HANDLERS
|
||||
|
||||
| Handler | Purpose |
|
||||
|---------|---------|
|
||||
| `config` | 6-phase: provider → plugin-components → agents → tools → MCPs → commands |
|
||||
| `tool` | 26 registered tools |
|
||||
| `chat.message` | First-message variant, session setup, keyword detection |
|
||||
| `chat.params` | Anthropic effort level adjustment |
|
||||
| `chat.headers` | Copilot x-initiator header injection |
|
||||
| `event` | Session lifecycle (created, deleted, idle, error) |
|
||||
| `tool.execute.before` | Pre-tool hooks (file guard, label truncator, rules injector) |
|
||||
| `tool.execute.after` | Post-tool hooks (output truncation, metadata store) |
|
||||
| `experimental.chat.messages.transform` | Context injection, thinking block validation |
|
||||
|
||||
## WHERE TO LOOK
|
||||
|
||||
| Task | Location | Notes |
|
||||
|------|----------|-------|
|
||||
| Add new agent | `src/agents/` + `src/agents/builtin-agents/` | Follow createXXXAgent factory pattern |
|
||||
| Add new hook | `src/hooks/{name}/` + register in `src/plugin/hooks/create-*-hooks.ts` | Match event type to tier |
|
||||
| Add new tool | `src/tools/{name}/` + register in `src/plugin/tool-registry.ts` | Follow createXXXTool factory |
|
||||
| Add new feature module | `src/features/{name}/` | Standalone module, wire in plugin/ |
|
||||
| Add new MCP | `src/mcp/` + register in `createBuiltinMcps()` | Remote HTTP only |
|
||||
| Add new skill | `src/features/builtin-skills/skills/` | Implement BuiltinSkill interface |
|
||||
| Add new command | `src/features/builtin-commands/` | Template in templates/ |
|
||||
| Add new CLI command | `src/cli/cli-program.ts` | Commander.js subcommand |
|
||||
| Add new doctor check | `src/cli/doctor/checks/` | Register in checks/index.ts |
|
||||
| Modify config schema | `src/config/schema/` + update root schema | Zod v4, add to OhMyOpenCodeConfigSchema |
|
||||
| Add new category | `src/tools/delegate-task/constants.ts` | DEFAULT_CATEGORIES + CATEGORY_MODEL_REQUIREMENTS |
|
||||
| Add agent | `src/agents/` | Create .ts, add to builtinAgents in index.ts, update types.ts |
|
||||
| Add hook | `src/hooks/` | Create dir with createXXXHook(), export from index.ts |
|
||||
| Add tool | `src/tools/` | Dir with index/types/constants/tools.ts, add to builtinTools |
|
||||
| Add MCP | `src/mcp/` | Create config, add to index.ts and types.ts |
|
||||
| Add skill | `src/features/builtin-skills/` | Create skill dir with SKILL.md |
|
||||
| LSP behavior | `src/tools/lsp/` | client.ts (connection), tools.ts (handlers) |
|
||||
| AST-Grep | `src/tools/ast-grep/` | napi.ts for @ast-grep/napi binding |
|
||||
| Google OAuth | `src/auth/antigravity/` | OAuth plugin for Google/Gemini models |
|
||||
| Config schema | `src/config/schema.ts` | Zod schema, run `bun run build:schema` after changes |
|
||||
| Claude Code compat | `src/features/claude-code-*-loader/` | Command, skill, agent, mcp loaders |
|
||||
| Background agents | `src/features/background-agent/` | manager.ts for task management |
|
||||
| Skill MCP | `src/features/skill-mcp-manager/` | MCP servers embedded in skills |
|
||||
| Interactive terminal | `src/tools/interactive-bash/` | tmux session management |
|
||||
| CLI installer | `src/cli/install.ts` | Interactive TUI installation |
|
||||
| Doctor checks | `src/cli/doctor/checks/` | Health checks for environment |
|
||||
| Shared utilities | `src/shared/` | Cross-cutting utilities |
|
||||
| Slash commands | `src/hooks/auto-slash-command/` | Auto-detect and execute `/command` patterns |
|
||||
| Ralph Loop | `src/hooks/ralph-loop/` | Self-referential dev loop until completion |
|
||||
|
||||
## MULTI-LEVEL CONFIG
|
||||
## TDD (Test-Driven Development)
|
||||
|
||||
**MANDATORY for new features and bug fixes.** Follow RED-GREEN-REFACTOR:
|
||||
|
||||
```
|
||||
Project (.opencode/oh-my-opencode.jsonc) → User (~/.config/opencode/oh-my-opencode.jsonc) → Defaults
|
||||
1. RED - Write failing test first (test MUST fail)
|
||||
2. GREEN - Write MINIMAL code to pass (nothing more)
|
||||
3. REFACTOR - Clean up while tests stay GREEN
|
||||
4. REPEAT - Next test case
|
||||
```
|
||||
|
||||
- `agents`, `categories`, `claude_code`: deep merged recursively
|
||||
- `disabled_*` arrays: Set union (concatenated + deduplicated)
|
||||
- All other fields: override replaces base value
|
||||
- Zod `safeParse()` fills defaults for omitted fields
|
||||
- `migrateConfigFile()` transforms legacy keys automatically
|
||||
| Phase | Action | Verification |
|
||||
|-------|--------|--------------|
|
||||
| **RED** | Write test describing expected behavior | `bun test` → FAIL (expected) |
|
||||
| **GREEN** | Implement minimum code to pass | `bun test` → PASS |
|
||||
| **REFACTOR** | Improve code quality, remove duplication | `bun test` → PASS (must stay green) |
|
||||
|
||||
Fields: agents (14 overridable, 21 fields each), categories (8 built-in + custom), disabled_* arrays (agents, hooks, mcps, skills, commands, tools), 19 feature-specific configs.
|
||||
|
||||
## THREE-TIER MCP SYSTEM
|
||||
|
||||
| Tier | Source | Mechanism |
|
||||
|------|--------|-----------|
|
||||
| Built-in | `src/mcp/` | 3 remote HTTP: websearch (Exa/Tavily), context7, grep_app |
|
||||
| Claude Code | `.mcp.json` | `${VAR}` env expansion via claude-code-mcp-loader |
|
||||
| Skill-embedded | SKILL.md YAML | Managed by SkillMcpManager (stdio + HTTP) |
|
||||
**Rules:**
|
||||
- NEVER write implementation before test
|
||||
- NEVER delete failing tests to "pass" - fix the code
|
||||
- One test at a time - don't batch
|
||||
- Test file naming: `*.test.ts` alongside source
|
||||
|
||||
## CONVENTIONS
|
||||
|
||||
- **Runtime**: Bun only — never use npm/yarn
|
||||
- **TypeScript**: strict mode, ESNext, bundler moduleResolution, `bun-types` (never `@types/node`)
|
||||
- **Test pattern**: Bun test (`bun:test`), co-located `*.test.ts`, given/when/then style (nested describe with `#given`/`#when`/`#then` prefixes)
|
||||
- **CI test split**: mock-heavy tests run in isolation (separate `bun test` processes), rest in batch
|
||||
- **Factory pattern**: `createXXX()` for all tools, hooks, agents
|
||||
- **Hook tiers**: Session (23) → Tool-Guard (10) → Transform (4) → Continuation (7) → Skill (2)
|
||||
- **Agent modes**: `primary` (respects UI model) vs `subagent` (own fallback chain) vs `all`
|
||||
- **Model resolution**: 4-step: override → category-default → provider-fallback → system-default
|
||||
- **Config format**: JSONC with comments, Zod v4 validation, snake_case keys
|
||||
- **File naming**: kebab-case for all files/directories
|
||||
- **Module structure**: index.ts barrel exports, no catch-all files (utils.ts, helpers.ts banned), 200 LOC soft limit
|
||||
- **Imports**: relative within module, barrel imports across modules (`import { log } from "./shared"`)
|
||||
- **No path aliases**: no `@/` — relative imports only
|
||||
- **Package manager**: Bun only (`bun run`, `bun build`, `bunx`)
|
||||
- **Types**: bun-types (not @types/node)
|
||||
- **Build**: `bun build` (ESM) + `tsc --emitDeclarationOnly`
|
||||
- **Exports**: Barrel pattern in index.ts; explicit named exports for tools/hooks
|
||||
- **Naming**: kebab-case directories, createXXXHook/createXXXTool factories
|
||||
- **Testing**: BDD comments `#given`, `#when`, `#then` (same as AAA); TDD workflow (RED-GREEN-REFACTOR)
|
||||
- **Temperature**: 0.1 for code agents, max 0.3
|
||||
|
||||
## ANTI-PATTERNS
|
||||
## ANTI-PATTERNS (THIS PROJECT)
|
||||
|
||||
- Never use `as any`, `@ts-ignore`, `@ts-expect-error`
|
||||
- Never suppress lint/type errors
|
||||
- Never add emojis to code/comments unless user explicitly asks
|
||||
- Never commit unless explicitly requested
|
||||
- Never run `bun publish` directly — use GitHub Actions
|
||||
- Never modify `package.json` version locally
|
||||
- Test: given/when/then — never use Arrange-Act-Assert comments
|
||||
- Comments: avoid AI-generated comment patterns (enforced by comment-checker hook)
|
||||
- Never create catch-all files (`utils.ts`, `helpers.ts`, `service.ts`)
|
||||
- Empty catch blocks `catch(e) {}` — always handle errors
|
||||
- Never use em dashes (—), en dashes (–), or AI filler phrases in generated content
|
||||
- index.ts is entry point ONLY — never dump business logic there
|
||||
- **npm/yarn**: Use bun exclusively
|
||||
- **@types/node**: Use bun-types
|
||||
- **Bash file ops**: Never mkdir/touch/rm/cp/mv for file creation in code
|
||||
- **Direct bun publish**: GitHub Actions workflow_dispatch only (OIDC provenance)
|
||||
- **Local version bump**: Version managed by CI workflow
|
||||
- **Year 2024**: NEVER use 2024 in code/prompts (use current year)
|
||||
- **Rush completion**: Never mark tasks complete without verification
|
||||
- **Over-exploration**: Stop searching when sufficient context found
|
||||
- **High temperature**: Don't use >0.3 for code-related agents
|
||||
- **Broad tool access**: Prefer explicit `include` over unrestricted access
|
||||
- **Sequential agent calls**: Use `sisyphus_task` for parallel execution
|
||||
- **Heavy PreToolUse logic**: Slows every tool call
|
||||
- **Self-planning for complex tasks**: Spawn planning agent (Prometheus) instead
|
||||
|
||||
## UNIQUE STYLES
|
||||
|
||||
- **Platform**: Union type `"darwin" | "linux" | "win32" | "unsupported"`
|
||||
- **Optional props**: Extensive `?` for optional interface properties
|
||||
- **Flexible objects**: `Record<string, unknown>` for dynamic configs
|
||||
- **Error handling**: Consistent try/catch with async/await
|
||||
- **Agent tools**: `tools: { include: [...] }` or `tools: { exclude: [...] }`
|
||||
- **Temperature**: Most agents use `0.1` for consistency
|
||||
- **Hook naming**: `createXXXHook` function convention
|
||||
- **Factory pattern**: Components created via `createXXX()` functions
|
||||
|
||||
## AGENT MODELS
|
||||
|
||||
| Agent | Default Model | Purpose |
|
||||
|-------|-------|---------|
|
||||
| Sisyphus | anthropic/claude-opus-4-5 | Primary orchestrator |
|
||||
| oracle | openai/gpt-5.2 | Read-only consultation. High-IQ debugging, architecture |
|
||||
| librarian | anthropic/claude-sonnet-4-5 | Multi-repo analysis, docs |
|
||||
| explore | opencode/grok-code | Fast codebase exploration |
|
||||
| frontend-ui-ux-engineer | google/gemini-3-pro-preview | UI generation |
|
||||
| document-writer | google/gemini-3-pro-preview | Technical docs |
|
||||
| multimodal-looker | google/gemini-3-flash | PDF/image analysis |
|
||||
|
||||
## COMMANDS
|
||||
|
||||
```bash
|
||||
bun test # Bun test suite
|
||||
bun run build # Build plugin (ESM + declarations + schema)
|
||||
bun run build:all # Build + platform binaries
|
||||
bun run typecheck # tsc --noEmit
|
||||
bunx oh-my-opencode install # Interactive setup
|
||||
bunx oh-my-opencode doctor # Health diagnostics
|
||||
bunx oh-my-opencode run # Non-interactive session
|
||||
bun run typecheck # Type check
|
||||
bun run build # ESM + declarations + schema
|
||||
bun run rebuild # Clean + Build
|
||||
bun run build:schema # Schema only
|
||||
bun test # Run tests
|
||||
```
|
||||
|
||||
## CI/CD
|
||||
## DEPLOYMENT
|
||||
|
||||
| Workflow | Trigger | Purpose |
|
||||
|----------|---------|---------|
|
||||
| ci.yml | push/PR to master/dev | Tests (split: mock-heavy isolated + batch), typecheck, build, schema auto-commit |
|
||||
| publish.yml | manual dispatch | Version bump, npm publish, platform binaries, GitHub release, merge to master |
|
||||
| publish-platform.yml | called by publish | 12 platform binaries via bun compile (darwin/linux/windows) |
|
||||
| sisyphus-agent.yml | @mention / dispatch | AI agent handles issues/PRs |
|
||||
| cla.yml | issue_comment/PR | CLA assistant for contributors |
|
||||
| lint-workflows.yml | push to .github/ | actionlint + shellcheck on workflow files |
|
||||
**GitHub Actions workflow_dispatch only**
|
||||
|
||||
1. Never modify package.json version locally
|
||||
2. Commit & push changes
|
||||
3. Trigger `publish` workflow: `gh workflow run publish -f bump=patch`
|
||||
|
||||
**Critical**: Never `bun publish` directly. Never bump version locally.
|
||||
|
||||
## CI PIPELINE
|
||||
|
||||
- **ci.yml**: Parallel test/typecheck, build verification, auto-commit schema on master, rolling `next` draft release
|
||||
- **publish.yml**: Manual workflow_dispatch, version bump, changelog, OIDC npm publish
|
||||
- **sisyphus-agent.yml**: Agent-in-CI for automated issue handling via `@sisyphus-dev-ai` mentions
|
||||
|
||||
## COMPLEXITY HOTSPOTS
|
||||
|
||||
| File | Lines | Description |
|
||||
|------|-------|-------------|
|
||||
| `src/index.ts` | 464 | Main plugin, all hook/tool init |
|
||||
| `src/cli/config-manager.ts` | 669 | JSONC parsing, env detection |
|
||||
| `src/auth/antigravity/fetch.ts` | 621 | Token refresh, URL rewriting |
|
||||
| `src/tools/lsp/client.ts` | 611 | LSP protocol, JSON-RPC |
|
||||
| `src/auth/antigravity/response.ts` | 598 | Response transformation, streaming |
|
||||
| `src/auth/antigravity/thinking.ts` | 571 | Thinking block extraction/transformation |
|
||||
| `src/hooks/anthropic-context-window-limit-recovery/executor.ts` | 564 | Multi-stage recovery |
|
||||
| `src/agents/sisyphus.ts` | 504 | Orchestrator prompt |
|
||||
|
||||
## NOTES
|
||||
|
||||
- Logger writes to `/tmp/oh-my-opencode.log` — check there for debugging
|
||||
- Background tasks: 5 concurrent per model/provider (configurable)
|
||||
- Plugin load timeout: 10s for Claude Code plugins
|
||||
- Model fallback priority: Claude > OpenAI > Gemini > Copilot > OpenCode Zen > Z.ai > Kimi
|
||||
- Config migration runs automatically on legacy keys (agent names, hook names, model versions)
|
||||
- Build: bun build (ESM) + tsc --emitDeclarationOnly, externals: @ast-grep/napi
|
||||
- Test setup: `test-setup.ts` preloaded via bunfig.toml, mock-heavy tests run in isolation in CI
|
||||
- 98 barrel export files (index.ts) establish module boundaries
|
||||
- Architecture rules enforced via `.sisyphus/rules/modular-code-enforcement.md`
|
||||
- **Testing**: Bun native test (`bun test`), BDD-style `#given/#when/#then`, 360+ tests
|
||||
- **OpenCode**: Requires >= 1.0.150
|
||||
- **Multi-lang docs**: README.md (EN), README.ko.md (KO), README.ja.md (JA), README.zh-cn.md (ZH-CN)
|
||||
- **Config**: `~/.config/opencode/oh-my-opencode.json` (user) or `.opencode/oh-my-opencode.json` (project)
|
||||
- **Trusted deps**: @ast-grep/cli, @ast-grep/napi, @code-yeongyu/comment-checker
|
||||
- **JSONC support**: Config files support comments (`// comment`, `/* block */`) and trailing commas
|
||||
- **Claude Code Compat**: Full compatibility layer for settings.json hooks, commands, skills, agents, MCPs
|
||||
- **Skill MCP**: Skills can embed MCP server configs in YAML frontmatter
|
||||
|
||||
@@ -31,7 +31,6 @@ Be respectful, inclusive, and constructive. We're all here to make better tools
|
||||
**English is the primary language for all communications in this repository.**
|
||||
|
||||
This includes:
|
||||
|
||||
- Issues and bug reports
|
||||
- Pull requests and code reviews
|
||||
- Documentation and comments
|
||||
@@ -46,7 +45,6 @@ This includes:
|
||||
### Need Help with English?
|
||||
|
||||
If English isn't your first language, don't worry! We value your contributions regardless of perfect grammar. You can:
|
||||
|
||||
- Use translation tools to help compose messages
|
||||
- Ask for help from other community members
|
||||
- Focus on clear, simple communication rather than perfect prose
|
||||
@@ -63,8 +61,8 @@ If English isn't your first language, don't worry! We value your contributions r
|
||||
|
||||
```bash
|
||||
# Clone the repository
|
||||
git clone https://github.com/code-yeongyu/oh-my-openagent.git
|
||||
cd oh-my-openagent
|
||||
git clone https://github.com/code-yeongyu/oh-my-opencode.git
|
||||
cd oh-my-opencode
|
||||
|
||||
# Install dependencies (bun only - never use npm/yarn)
|
||||
bun install
|
||||
@@ -78,24 +76,25 @@ bun run build
|
||||
After making changes, you can test your local build in OpenCode:
|
||||
|
||||
1. **Build the project**:
|
||||
|
||||
```bash
|
||||
bun run build
|
||||
```
|
||||
|
||||
2. **Update your OpenCode config** (`~/.config/opencode/opencode.json` or `opencode.jsonc`):
|
||||
|
||||
```json
|
||||
{
|
||||
"plugin": ["file:///absolute/path/to/oh-my-opencode/dist/index.js"]
|
||||
"plugin": [
|
||||
"file:///absolute/path/to/oh-my-opencode/dist/index.js"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
For example, if your project is at `/Users/yourname/projects/oh-my-opencode`:
|
||||
|
||||
```json
|
||||
{
|
||||
"plugin": ["file:///Users/yourname/projects/oh-my-opencode/dist/index.js"]
|
||||
"plugin": [
|
||||
"file:///Users/yourname/projects/oh-my-opencode/dist/index.js"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
@@ -110,20 +109,18 @@ After making changes, you can test your local build in OpenCode:
|
||||
```
|
||||
oh-my-opencode/
|
||||
├── src/
|
||||
│ ├── index.ts # Plugin entry (OhMyOpenCodePlugin)
|
||||
│ ├── plugin-config.ts # JSONC multi-level config (Zod v4)
|
||||
│ ├── agents/ # 11 agents (Sisyphus, Hephaestus, Oracle, Librarian, Explore, Atlas, Prometheus, Metis, Momus, Multimodal-Looker, Sisyphus-Junior)
|
||||
│ ├── hooks/ # Lifecycle hooks for orchestration, recovery, UX, and context management
|
||||
│ ├── tools/ # 26 tools across 15 directories
|
||||
│ ├── mcp/ # 3 built-in remote MCPs (websearch, context7, grep_app)
|
||||
│ ├── features/ # 19 feature modules (background-agent, skill-loader, tmux, MCP-OAuth, etc.)
|
||||
│ ├── config/ # Zod v4 schema system
|
||||
│ ├── shared/ # Cross-cutting utilities
|
||||
│ ├── cli/ # CLI: install, run, doctor, mcp-oauth (Commander.js)
|
||||
│ ├── plugin/ # 8 OpenCode hook handlers + hook composition
|
||||
│ └── plugin-handlers/ # 6-phase config loading pipeline
|
||||
├── packages/ # Monorepo: comment-checker, opencode-sdk
|
||||
└── dist/ # Build output (ESM + .d.ts)
|
||||
│ ├── agents/ # AI agents (OmO, oracle, librarian, explore, etc.)
|
||||
│ ├── hooks/ # 21 lifecycle hooks
|
||||
│ ├── tools/ # LSP (11), AST-Grep, Grep, Glob, etc.
|
||||
│ ├── mcp/ # MCP server integrations (context7, grep_app)
|
||||
│ ├── features/ # Claude Code compatibility layers
|
||||
│ ├── config/ # Zod schemas and TypeScript types
|
||||
│ ├── auth/ # Google Antigravity OAuth
|
||||
│ ├── shared/ # Common utilities
|
||||
│ └── index.ts # Main plugin entry (OhMyOpenCodePlugin)
|
||||
├── script/ # Build utilities (build-schema.ts, publish.ts)
|
||||
├── assets/ # JSON schema
|
||||
└── dist/ # Build output (ESM + .d.ts)
|
||||
```
|
||||
|
||||
## Development Workflow
|
||||
@@ -137,11 +134,8 @@ bun run typecheck
|
||||
# Full build (ESM + TypeScript declarations + JSON schema)
|
||||
bun run build
|
||||
|
||||
# Clean build output
|
||||
bun run clean
|
||||
|
||||
# Rebuild from scratch
|
||||
bun run clean && bun run build
|
||||
# Clean build output and rebuild
|
||||
bun run rebuild
|
||||
|
||||
# Build schema only (after modifying src/config/schema.ts)
|
||||
bun run build:schema
|
||||
@@ -149,18 +143,17 @@ bun run build:schema
|
||||
|
||||
### Code Style & Conventions
|
||||
|
||||
| Convention | Rule |
|
||||
| ---------------- | ------------------------------------------------------------------------- |
|
||||
| Package Manager | **Bun only** (`bun run`, `bun build`, `bunx`) |
|
||||
| Types | Use `bun-types`, not `@types/node` |
|
||||
| Directory Naming | kebab-case (`ast-grep/`, `claude-code-hooks/`) |
|
||||
| File Operations | Never use bash commands (mkdir/touch/rm) for file creation in code |
|
||||
| Tool Structure | Each tool: `index.ts`, `types.ts`, `constants.ts`, `tools.ts`, `utils.ts` |
|
||||
| Hook Pattern | `createXXXHook(input: PluginInput)` function naming |
|
||||
| Exports | Barrel pattern (`export * from "./module"` in index.ts) |
|
||||
| Convention | Rule |
|
||||
|------------|------|
|
||||
| Package Manager | **Bun only** (`bun run`, `bun build`, `bunx`) |
|
||||
| Types | Use `bun-types`, not `@types/node` |
|
||||
| Directory Naming | kebab-case (`ast-grep/`, `claude-code-hooks/`) |
|
||||
| File Operations | Never use bash commands (mkdir/touch/rm) for file creation in code |
|
||||
| Tool Structure | Each tool: `index.ts`, `types.ts`, `constants.ts`, `tools.ts`, `utils.ts` |
|
||||
| Hook Pattern | `createXXXHook(input: PluginInput)` function naming |
|
||||
| Exports | Barrel pattern (`export * from "./module"` in index.ts) |
|
||||
|
||||
**Anti-Patterns (Do Not Do)**:
|
||||
|
||||
- Using npm/yarn instead of bun
|
||||
- Using `@types/node` instead of `bun-types`
|
||||
- Suppressing TypeScript errors with `as any`, `@ts-ignore`, `@ts-expect-error`
|
||||
@@ -184,7 +177,7 @@ import type { AgentConfig } from "./types";
|
||||
|
||||
export const myAgent: AgentConfig = {
|
||||
name: "my-agent",
|
||||
model: "anthropic/claude-opus-4-6",
|
||||
model: "anthropic/claude-sonnet-4-5",
|
||||
description: "Description of what this agent does",
|
||||
prompt: `Your agent's system prompt here`,
|
||||
temperature: 0.1,
|
||||
|
||||
122
FIX-BLOCKS.md
122
FIX-BLOCKS.md
@@ -1,122 +0,0 @@
|
||||
# Pre-Publish BLOCK Issues: Fix ALL Before Release
|
||||
|
||||
Two independent pre-publish reviews (Opus 4.6 + GPT-5.4) both concluded **BLOCK -- do not publish**. You must fix ALL blocking issues below using UltraBrain parallel agents. Work TDD-style: write/update tests first, then fix, verify tests pass.
|
||||
|
||||
## Strategy
|
||||
|
||||
Use ultrawork (ulw) to spawn UltraBrain agents in parallel. Each UB agent gets a non-overlapping scope. After all agents complete, run bun test to verify everything passes. Commit atomically per fix group.
|
||||
|
||||
---
|
||||
|
||||
## CRITICAL BLOCKERS (must fix -- 6 items)
|
||||
|
||||
### C1: Hashline Backward Compatibility
|
||||
**Problem:** Strict whitespace hashing in hashline changes LINE#ID values for indented lines. Breaks existing anchors in cached/persisted edit operations.
|
||||
**Fix:** Add a compatibility shim -- when lookup by new hash fails, fall back to legacy hash (without strict whitespace). Or version the hash format.
|
||||
**Files:** Look for hashline-related files in src/tools/ or src/shared/
|
||||
|
||||
### C2: OpenAI-Only Model Catalog Broken with OpenCode-Go
|
||||
**Problem:** isOpenAiOnlyAvailability() does not exclude availability.opencodeGo. When OpenCode-Go is present, OpenAI-only detection is wrong -- models get misrouted.
|
||||
**Fix:** Add !availability.opencodeGo check to isOpenAiOnlyAvailability().
|
||||
**Files:** Model/provider system files -- search for isOpenAiOnlyAvailability
|
||||
|
||||
### C3: CLI/Runtime Model Table Divergence
|
||||
**Problem:** Model tables disagree between CLI install-time and runtime:
|
||||
- ultrabrain: gpt-5.3-codex in CLI vs gpt-5.4 in runtime
|
||||
- atlas: claude-sonnet-4-5 in CLI vs claude-sonnet-4-6 in runtime
|
||||
- unspecified-high also diverges
|
||||
**Fix:** Reconcile all model tables. Pick the correct model for each and make CLI + runtime match.
|
||||
**Files:** Search for model table definitions, agent configs, CLI model references
|
||||
|
||||
### C4: atlas/metis/sisyphus-junior Missing OpenAI Fallbacks
|
||||
**Problem:** These agents can resolve to opencode/glm-4.7-free or undefined in OpenAI-only environments. No valid OpenAI fallback paths exist.
|
||||
**Fix:** Add valid OpenAI model fallback paths for all agents that need them.
|
||||
**Files:** Agent config/model resolution code
|
||||
|
||||
### C5: model_fallback Default Mismatch
|
||||
**Problem:** Schema and docs say model_fallback defaults to false, but runtime treats unset as true. Silent behavior change for all users.
|
||||
**Fix:** Align -- either update schema/docs to say true, or fix runtime to default to false. Check what the intended behavior is from git history.
|
||||
**Files:** Schema definition, runtime config loading
|
||||
|
||||
### C6: background_output Default Changed
|
||||
**Problem:** background_output now defaults to full_session=true. Old callers get different output format without code changes.
|
||||
**Fix:** Either document this change clearly, or restore old default and make full_session opt-in.
|
||||
**Files:** Background output handling code
|
||||
|
||||
---
|
||||
|
||||
## HIGH PRIORITY (strongly recommended -- 4 items)
|
||||
|
||||
### H1: Runtime Fallback session-status-handler Race
|
||||
**Problem:** When fallback model is already pending, the handler cannot advance the chain on subsequent cooldown events.
|
||||
**Fix:** Allow override like message-update-handler does.
|
||||
**Files:** Search for session-status-handler, message-update-handler
|
||||
|
||||
### H2: Atlas Final-Wave Approval Gate Logic
|
||||
**Problem:** Approval gate logic does not match real Prometheus plan structure (nested checkboxes, parallel execution). Trigger logic is wrong.
|
||||
**Fix:** Update to handle real plan structures.
|
||||
**Files:** Atlas agent code, approval gate logic
|
||||
|
||||
### H3: delegate-task-english-directive Dead Code
|
||||
**Problem:** Not dispatched from tool-execute-before.ts + wrong hook signature. Either wire properly or remove entirely.
|
||||
**Fix:** Remove if not needed (cleaner). If needed, fix dispatch + signature.
|
||||
**Files:** src/hooks/, tool-execute-before.ts
|
||||
|
||||
### H4: Auto-Slash-Command Session-Lifetime Dedup
|
||||
**Problem:** Dedup uses session lifetime, suppressing legitimate repeated identical commands.
|
||||
**Fix:** Change to short TTL (e.g., 30 seconds) instead of session lifetime.
|
||||
**Files:** Slash command handling code
|
||||
|
||||
---
|
||||
|
||||
## ADDITIONAL BLOCKERS FROM GPT-5.4 REVIEW
|
||||
|
||||
### G1: Package Identity Split-Brain
|
||||
**Problem:** Installer writes oh-my-openagent but doctor, auto-update, version lookup, publish workflow still reference oh-my-opencode. Half-migrated state.
|
||||
**Fix:** Audit ALL references to package name. Either complete the migration consistently or revert to single name for this release.
|
||||
**Files:** Installer, doctor, auto-update, version lookup, publish workflow -- grep for both package names
|
||||
|
||||
### G2: OpenCode-Go --opencode-go Value Validation
|
||||
**Problem:** No validation for --opencode-go CLI value. No detection of existing OpenCode-Go installations.
|
||||
**Fix:** Add value validation + existing install detection.
|
||||
**Files:** CLI option handling code
|
||||
|
||||
### G3: Skill/Hook Reference Errors
|
||||
**Problem:**
|
||||
- work-with-pr references non-existent git tool category
|
||||
- github-triage references TaskCreate/TaskUpdate which are not real tool names
|
||||
**Fix:** Fix tool references to use actual tool names.
|
||||
**Files:** Skill definition files in .opencode/skills/
|
||||
|
||||
### G4: Stale Context-Limit Cache
|
||||
**Problem:** Shared context-limit resolver caches provider config. When config changes, stale removed limits persist and corrupt compaction/truncation decisions.
|
||||
**Fix:** Add cache invalidation when provider config changes, or make the resolver stateless.
|
||||
**Files:** Context-limit resolver, compaction code
|
||||
|
||||
### G5: disabled_hooks Schema vs Runtime Contract Mismatch
|
||||
**Problem:** Schema is strict (rejects unknown hook names) but runtime is permissive (ignores unknown). Contract disagreement.
|
||||
**Fix:** Align -- either make both strict or both permissive.
|
||||
**Files:** Hook schema definition, runtime hook loading
|
||||
|
||||
---
|
||||
|
||||
## EXECUTION INSTRUCTIONS
|
||||
|
||||
1. Spawn UltraBrain agents to fix these in parallel -- group by file proximity:
|
||||
- UB-1: C1 (hashline) + H4 (slash-command dedup)
|
||||
- UB-2: C2 + C3 + C4 (model/provider system) + G2
|
||||
- UB-3: C5 + C6 (config defaults) + G5
|
||||
- UB-4: H1 + H2 (runtime handlers + Atlas gate)
|
||||
- UB-5: H3 + G3 (dead code + skill references)
|
||||
- UB-6: G1 (package identity -- full audit)
|
||||
- UB-7: G4 (context-limit cache)
|
||||
|
||||
2. Each UB agent MUST:
|
||||
- Write or update tests FIRST (TDD)
|
||||
- Implement the fix
|
||||
- Run bun test on affected test files
|
||||
- Commit with descriptive message
|
||||
|
||||
3. After all UB agents complete, run full bun test to verify no regressions.
|
||||
|
||||
ulw
|
||||
1190
README.ja.md
1190
README.ja.md
File diff suppressed because it is too large
Load Diff
336
README.ko.md
336
README.ko.md
@@ -1,336 +0,0 @@
|
||||
> [!WARNING]
|
||||
> **임시 공지 (이번 주): 메인테이너 대응 지연 안내**
|
||||
>
|
||||
> 핵심 메인테이너 Q가 부상을 입어, 이번 주에는 이슈/PR 응답 및 릴리스가 지연될 수 있습니다.
|
||||
> 양해와 응원에 감사드립니다.
|
||||
|
||||
> [!TIP]
|
||||
> 저희와 함께 하세요!
|
||||
>
|
||||
> | [<img alt="Discord link" src="https://img.shields.io/discord/1452487457085063218?color=5865F2&label=discord&labelColor=black&logo=discord&logoColor=white&style=flat-square" width="156px" />](https://discord.gg/PUwSMR9XNk) | [Discord 커뮤니티](https://discord.gg/PUwSMR9XNk)에 가입하여 기여자 및 다른 `oh-my-opencode` 사용자들과 소통하세요. |
|
||||
> | :-----| :----- |
|
||||
> | [<img alt="X link" src="https://img.shields.io/badge/Follow-%40justsisyphus-00CED1?style=flat-square&logo=x&labelColor=black" width="156px" />](https://x.com/justsisyphus) | `oh-my-opencode`에 대한 소식과 업데이트는 제 X 계정에 올라왔었지만, <br /> 실수로 정지된 이후에는 [@justsisyphus](https://x.com/justsisyphus)가 대신 업데이트를 게시하고 있습니다. |
|
||||
> | [<img alt="GitHub Follow" src="https://img.shields.io/github/followers/code-yeongyu?style=flat-square&logo=github&labelColor=black&color=24292f" width="156px" />](https://github.com/code-yeongyu) | 더 많은 프로젝트를 보려면 GitHub에서 [@code-yeongyu](https://github.com/code-yeongyu)를 팔로우하세요. |
|
||||
|
||||
<!-- <CENTERED SECTION FOR GITHUB DISPLAY> -->
|
||||
|
||||
<div align="center">
|
||||
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent#oh-my-opencode)
|
||||
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent#oh-my-opencode)
|
||||
|
||||
</div>
|
||||
|
||||
> Anthropic은 당신을 가두고 싶어 합니다. Claude Code는 멋진 감옥이지만, 여전히 감옥일 뿐이죠.
|
||||
>
|
||||
> 우리는 여기서 그런 가두리를 하지 않습니다. Claude로 오케스트레이션하고, GPT로 추론하고, Kimi로 속도 내고, Gemini로 비전 처리한다. 미래는 하나의 승자를 고르는 게 아니라 전부를 오케스트레이션하는 거다. 모델은 매달 싸지고, 매달 똑똑해진다. 어떤 단일 프로바이더도 독재하지 못할 것이다. 우리는 그 열린 시장을 위해 만들고 있다.
|
||||
|
||||
<div align="center">
|
||||
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent/releases)
|
||||
[](https://www.npmjs.com/package/oh-my-opencode)
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent/graphs/contributors)
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent/network/members)
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent/stargazers)
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent/issues)
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent/blob/dev/LICENSE.md)
|
||||
[](https://deepwiki.com/code-yeongyu/oh-my-openagent)
|
||||
|
||||
[English](README.md) | [한국어](README.ko.md) | [日本語](README.ja.md) | [简体中文](README.zh-cn.md)
|
||||
|
||||
</div>
|
||||
|
||||
<!-- </CENTERED SECTION FOR GITHUB DISPLAY> -->
|
||||
|
||||
## 리뷰
|
||||
|
||||
> "이것 덕분에 Cursor 구독을 취소했습니다. 오픈소스 커뮤니티에서 믿을 수 없는 일들이 일어나고 있네요." - [Arthur Guiot](https://x.com/arthur_guiot/status/2008736347092382053?s=20)
|
||||
|
||||
> "Claude Code가 인간이 3개월 걸릴 일을 7일 만에 한다면, Sisyphus는 1시간 만에 해냅니다. 작업이 끝날 때까지 그냥 계속 알아서 작동합니다. 이건 정말 규율이 잡힌 에이전트예요." <br/>- B, Quant Researcher
|
||||
|
||||
> "Oh My Opencode로 하루 만에 eslint 경고 8000개를 해결했습니다." <br/>- [Jacob Ferrari](https://x.com/jacobferrari_/status/2003258761952289061)
|
||||
|
||||
> "Ohmyopencode와 ralph loop를 써서 45k 라인짜리 tauri 앱을 하룻밤 만에 SaaS 웹앱으로 변환했어요. 인터뷰 모드로 시작해서, 제가 쓴 프롬프트에 대해 질문하고 추천을 부탁했죠. 일하는 걸 지켜보는 것도 재밌었고, 아침에 일어났더니 웹사이트가 대부분 돌아가고 있는 걸 보고 경악했습니다!" - [James Hargis](https://x.com/hargabyte/status/2007299688261882202)
|
||||
|
||||
> "oh-my-opencode 쓰세요, 다시는 예전으로 못 돌아갑니다." <br/>- [d0t3ch](https://x.com/d0t3ch/status/2001685618200580503)
|
||||
|
||||
> "뭐가 이렇게 대단한 건지 아직 정확하게 말로 표현하긴 어려운데, 개발 경험 자체가 완전히 다른 차원에 도달해버렸어요." - [苔硯:こけすずり](https://x.com/kokesuzuri/status/2008532913961529372?s=20)
|
||||
|
||||
> "주말에 마인크래프트/소울라이크 같은 괴물 같은 걸 만들어보려고 open code, oh my opencode, supermemory로 실험 중입니다. 점심 먹고 산책 다녀오는 동안 앉기 애니메이션을 추가하라고 시켜뒀어요. [영상]" - [MagiMetal](https://x.com/MagiMetal/status/2005374704178373023)
|
||||
|
||||
> "이걸 코어에 당겨오고 저 사람 스카우트해야 돼요. 진심으로. 이거 진짜, 진짜, 진짜 좋습니다." <br/>- Henning Kilset
|
||||
|
||||
> "설득할 수만 있다면 @yeon_gyu_kim 채용하세요, 이 사람이 opencode를 혁명적으로 바꿨습니다." <br/>- [mysticaltech](https://x.com/mysticaltech/status/2001858758608376079)
|
||||
|
||||
> "Oh My OpenCode는 진짜 미쳤다" - [YouTube - Darren Builds AI](https://www.youtube.com/watch?v=G_Snfh2M41M)
|
||||
|
||||
---
|
||||
|
||||
# Oh My OpenCode
|
||||
|
||||
Claude Code, Codex, 온갖 OSS 모델들 사이에서 헤매고 있나요. 워크플로우 설정하랴, 에이전트 디버깅하랴 피곤할 겁니다.
|
||||
|
||||
우리가 그 삽질 다 해놨습니다. 모든 걸 테스트했고, 실제로 되는 것만 남겼습니다.
|
||||
|
||||
OmO 설치하고. `ultrawork` 치세요. 끝.
|
||||
|
||||
|
||||
|
||||
## 설치
|
||||
|
||||
### 사람용
|
||||
|
||||
다음 프롬프트를 복사해서 여러분의 LLM 에이전트(Claude Code, AmpCode, Cursor 등)에 붙여넣으세요:
|
||||
|
||||
```
|
||||
Install and configure oh-my-opencode by following the instructions here:
|
||||
https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
|
||||
```
|
||||
|
||||
아니면 [설치 가이드](docs/guide/installation.md)를 직접 읽으셔도 되지만, 진심으로 그냥 에이전트한테 시키세요. 사람은 설정하다 꼭 오타 냅니다.
|
||||
|
||||
### LLM 에이전트용
|
||||
|
||||
설치 가이드를 가져와서 따라 하세요:
|
||||
|
||||
```bash
|
||||
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 이 README 건너뛰기
|
||||
|
||||
문서 읽는 시대는 지났습니다. 그냥 이 텍스트를 에이전트한테 붙여넣으세요:
|
||||
|
||||
```
|
||||
Read this and tell me why it's not just another boilerplate: https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/README.md
|
||||
```
|
||||
|
||||
## 핵심 기능
|
||||
|
||||
### 🪄 `ultrawork`
|
||||
|
||||
진짜 이걸 다 읽고 계시나요? 대단하네요.
|
||||
|
||||
설치하세요. `ultrawork` (또는 `ulw`) 치세요. 끝.
|
||||
|
||||
아래 내용들, 모든 기능, 모든 최적화, 전혀 알 필요 없습니다. 그냥 알아서 다 됩니다.
|
||||
|
||||
다음 구독만 있어도 ultrawork는 충분히 잘 돌아갑니다 (본 프로젝트와 무관하며, 개인적인 추천일 뿐입니다):
|
||||
- [ChatGPT 구독 ($20)](https://chatgpt.com/)
|
||||
- [Kimi Code 구독 ($0.99) (*이번 달 한정)](https://www.kimi.com/membership/pricing?track_id=5cdeca93-66f0-4d35-aabb-b6df8fcea328)
|
||||
- [GLM Coding 요금제 ($10)](https://z.ai/subscribe)
|
||||
- 종량제(pay-per-token) 대상자라면 kimi와 gemini 모델을 써도 비용이 별로 안 나옵니다.
|
||||
|
||||
| | 기능 | 역할 |
|
||||
| :---: | :------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 🤖 | **기강 잡힌 에이전트 (Discipline Agents)** | Sisyphus가 Hephaestus, Oracle, Librarian, Explore를 오케스트레이션합니다. 완전한 AI 개발팀이 병렬로 돌아갑니다. |
|
||||
| ⚡ | **`ultrawork` / `ulw`** | 단어 하나면 됩니다. 모든 에이전트가 활성화되고 다 끝날 때까지 멈추지 않습니다. |
|
||||
| 🚪 | **[IntentGate](https://factory.ai/news/terminal-bench)** | 사용자의 진짜 의도를 분석한 뒤 분류하거나 행동합니다. 더 이상 문자 그대로 오해해서 헛짓거리하는 일이 없습니다. |
|
||||
| 🔗 | **해시 기반 편집 툴** | `LINE#ID` 콘텐츠 해시로 모든 변경 사항을 검증합니다. stale-line 에러 0%. [oh-my-pi](https://github.com/can1357/oh-my-pi)에서 영감을 받았습니다. [하니스 프로블러 →](https://blog.can.ac/2026/02/12/the-harness-problem/) |
|
||||
| 🛠️ | **LSP + AST-Grep** | 워크스페이스 단위 이름 변경, 빌드 전 진단, AST 기반 재작성. 에이전트에게 IDE급 정밀도를 제공합니다. |
|
||||
| 🧠 | **백그라운드 에이전트** | 5명 이상의 전문가를 병렬로 투입합니다. 컨텍스트는 가볍게 유지하고 결과는 준비될 때 받습니다. |
|
||||
| 📚 | **기본 내장 MCP** | Exa(웹 검색), Context7(공식 문서), Grep.app(GitHub 검색). 항상 켜져 있습니다. |
|
||||
| 🔁 | **Ralph Loop / `/ulw-loop`** | 자기 참조 루프. 100% 완료될 때까지 절대 멈추지 않습니다. |
|
||||
| ✅ | **Todo 강제 집행** | 에이전트가 딴짓한다고요? 시스템이 멱살 잡고 끌고 옵니다. 당신의 작업은 무조건 끝납니다. |
|
||||
| 💬 | **주석 검사기** | 주석에 AI 냄새나는 헛소리를 빼버립니다. 시니어 개발자가 짠 것 같은 코드가 됩니다. |
|
||||
| 🖥️ | **Tmux 연동** | 완전한 인터랙티브 터미널. REPL, 디버거, TUI 앱들 모두 실시간으로 돌아갑니다. |
|
||||
| 🔌 | **Claude Code 호환성** | 기존 훅, 명령어, 스킬, MCP, 플러그인? 전부 여기서 그대로 돌아갑니다. |
|
||||
| 🎯 | **스킬 내장 MCP** | 스킬이 자기만의 MCP 서버를 들고 다닙니다. 컨텍스트가 부풀어 오르지 않습니다. |
|
||||
| 📋 | **Prometheus 플래너** | 인터뷰 모드로 코드 한 줄 만지기 전에 전략적인 계획부터 세웁니다. |
|
||||
| 🔍 | **`/init-deep`** | 프로젝트 전체에 걸쳐 계층적인 `AGENTS.md` 파일을 자동 생성합니다. 토큰 효율과 에이전트 성능 둘 다 잡습니다. |
|
||||
|
||||
### 기강 잡힌 에이전트 (Discipline Agents)
|
||||
|
||||
<table><tr>
|
||||
<td align="center"><img src=".github/assets/sisyphus.png" height="300" /></td>
|
||||
<td align="center"><img src=".github/assets/hephaestus.png" height="300" /></td>
|
||||
</tr></table>
|
||||
|
||||
**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**)는 당신의 메인 오케스트레이터입니다. 공격적인 병렬 실행으로 계획을 세우고, 전문가들에게 위임하며, 완료될 때까지 밀어붙입니다. 중간에 포기하는 법이 없습니다.
|
||||
|
||||
**Hephaestus** (`gpt-5.3-codex`)는 당신의 자율 딥 워커입니다. 레시피가 아니라 목표를 주세요. 베이비시터 없이 알아서 코드베이스를 탐색하고, 패턴을 연구하며, 끝에서 끝까지 전부 해냅니다. *진정한 장인(The Legitimate Craftsman).*
|
||||
|
||||
**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**)는 당신의 전략 플래너입니다. 인터뷰 모드로 작동합니다. 코드 한 줄 만지기 전에 질문을 던져 스코프를 파악하고 상세한 계획부터 세웁니다.
|
||||
|
||||
모든 에이전트는 해당 모델의 특장점에 맞춰 튜닝되어 있습니다. 수동으로 모델 바꿔가며 뻘짓하지 마세요. [더 알아보기 →](docs/guide/overview.md)
|
||||
|
||||
> Anthropic이 [우리 때문에 OpenCode를 막아버렸습니다.](https://x.com/thdxr/status/2010149530486911014) 그래서 Hephaestus의 별명이 "진정한 장인(The Legitimate Craftsman)"인 겁니다. (어디서 많이 들어본 이름이죠?) 아이러니를 노렸습니다.
|
||||
>
|
||||
> Opus에서 제일 잘 돌아가긴 하지만, Kimi K2.5 + GPT-5.3 Codex 조합만으로도 바닐라 Claude Code는 가볍게 바릅니다. 설정도 필요 없습니다.
|
||||
|
||||
### 에이전트 오케스트레이션
|
||||
|
||||
Sisyphus가 하위 에이전트에게 일을 맡길 때, 모델을 직접 고르지 않습니다. **카테고리**를 고릅니다. 카테고리는 자동으로 올바른 모델에 매핑됩니다:
|
||||
|
||||
| 카테고리 | 용도 |
|
||||
| :------------------- | :------------------------ |
|
||||
| `visual-engineering` | 프론트엔드, UI/UX, 디자인 |
|
||||
| `deep` | 자율 리서치 및 실행 |
|
||||
| `quick` | 단일 파일 변경, 오타 수정 |
|
||||
| `ultrabrain` | 하드 로직, 아키텍처 결정 |
|
||||
|
||||
에이전트가 어떤 작업인지 말하면, 하네스가 알아서 적합한 모델을 꺼내옵니다. 당신은 손댈 게 없습니다.
|
||||
|
||||
### Claude Code 호환성
|
||||
|
||||
Claude Code 열심히 세팅해두셨죠? 잘하셨습니다.
|
||||
|
||||
모든 훅, 커맨드, 스킬, MCP, 플러그인이 여기서 그대로 돌아갑니다. 플러그인까지 완벽 호환됩니다.
|
||||
|
||||
### 에이전트를 위한 월드클래스 툴
|
||||
|
||||
LSP, AST-Grep, Tmux, MCP가 대충 테이프로 붙여놓은 게 아니라 진짜로 "통합"되어 있습니다.
|
||||
|
||||
- **LSP**: `lsp_rename`, `lsp_goto_definition`, `lsp_find_references`, `lsp_diagnostics`. 에이전트에게 IDE급 정밀도를 쥐어줍니다.
|
||||
- **AST-Grep**: 25개 언어를 지원하는 패턴 기반 코드 검색 및 재작성.
|
||||
- **Tmux**: 완전한 인터랙티브 터미널. REPL, 디버거, TUI 앱. 에이전트가 세션 안에서 움직입니다.
|
||||
- **MCP**: 웹 검색, 공식 문서, GitHub 코드 검색이 전부 내장되어 있습니다.
|
||||
|
||||
### 스킬 내장 MCP
|
||||
|
||||
MCP 서버들이 당신의 컨텍스트 예산을 다 잡아먹죠. 우리가 고쳤습니다.
|
||||
|
||||
스킬들이 자기만의 MCP 서버를 들고 다닙니다. 필요할 때만 켜서 쓰고 다 쓰면 사라집니다. 컨텍스트 창이 깔끔하게 유지됩니다.
|
||||
|
||||
### 해시 기반 편집 (Codes Better. Hash-Anchored Edits)
|
||||
|
||||
하네스 문제는 진짜 심각합니다. 에이전트가 실패하는 이유의 대부분은 모델 탓이 아니라 편집 툴 탓입니다.
|
||||
|
||||
> *"어떤 툴도 모델에게 수정하려는 줄에 대한 안정적이고 검증 가능한 식별자를 제공하지 않습니다... 전부 모델이 이미 본 내용을 똑같이 재현해내길 기대하죠. 그게 안 될 때—그리고 보통 안 되는데—사용자들은 모델을 욕합니다."*
|
||||
>
|
||||
> <br/>- [Can Bölük, 하네스 문제(The Harness Problem)](https://blog.can.ac/2026/02/12/the-harness-problem/)
|
||||
|
||||
[oh-my-pi](https://github.com/can1357/oh-my-pi)에서 영감을 받아, **Hashline**을 구현했습니다. 에이전트가 읽는 모든 줄에는 콘텐츠 해시 태그가 붙어 나옵니다:
|
||||
|
||||
```
|
||||
11#VK| function hello() {
|
||||
22#XJ| return "world";
|
||||
33#MB| }
|
||||
```
|
||||
|
||||
에이전트는 이 태그를 참조해서 편집합니다. 마지막으로 읽은 후 파일이 변경되었다면 해시가 일치하지 않아 코드가 망가지기 전에 편집이 거부됩니다. 공백을 똑같이 재현할 필요도 없고, 엉뚱한 줄을 수정하는 에러(stale-line)도 없습니다.
|
||||
|
||||
Grok Code Fast 1 기준으로 성공률이 **6.7% → 68.3%** 로 올랐습니다. 오직 편집 툴 하나 바꿨을 뿐인데 말이죠.
|
||||
|
||||
### 깊은 초기화. `/init-deep`
|
||||
|
||||
`/init-deep`을 실행하세요. 계층적인 `AGENTS.md` 파일을 알아서 만들어줍니다:
|
||||
|
||||
```
|
||||
project/
|
||||
├── AGENTS.md ← 프로젝트 전체 컨텍스트
|
||||
├── src/
|
||||
│ ├── AGENTS.md ← src 전용 컨텍스트
|
||||
│ └── components/
|
||||
│ └── AGENTS.md ← 컴포넌트 전용 컨텍스트
|
||||
```
|
||||
|
||||
에이전트가 알아서 관련된 컨텍스트만 쏙쏙 읽어갑니다. 수동으로 관리할 필요가 없습니다.
|
||||
|
||||
### 플래닝. Prometheus
|
||||
|
||||
복잡한 작업인가요? 대충 프롬프트 던지고 기도하지 마세요.
|
||||
|
||||
`/start-work`를 치면 Prometheus가 호출됩니다. **진짜 엔지니어처럼 당신을 인터뷰하고**, 스코프와 모호한 점을 식별한 뒤, 코드 한 줄 만지기 전에 검증된 계획부터 세웁니다. 에이전트는 시작하기도 전에 자기가 뭘 만들어야 하는지 정확히 알게 됩니다.
|
||||
|
||||
### 스킬 (Skills)
|
||||
|
||||
스킬은 단순한 프롬프트 쪼가리가 아닙니다. 각각 다음을 포함합니다:
|
||||
|
||||
- 도메인에 특화된 시스템 인스트럭션
|
||||
- 필요할 때만 켜지는 내장 MCP 서버
|
||||
- 스코프가 제한된 권한 (에이전트가 선을 넘지 않도록)
|
||||
|
||||
기본 내장 스킬: `playwright` (브라우저 자동화), `git-master` (원자적 커밋, 리베이스 수술), `frontend-ui-ux` (디자인 중심 UI).
|
||||
|
||||
직접 추가하려면: `.opencode/skills/*/SKILL.md` 또는 `~/.config/opencode/skills/*/SKILL.md`.
|
||||
|
||||
**전체 기능이 궁금하신가요?** 에이전트, 훅, 툴, MCP 등 모든 디테일은 **[기능 문서 (Features)](docs/reference/features.md)** 를 확인하세요.
|
||||
|
||||
---
|
||||
|
||||
> **비하인드 스토리가 궁금하신가요?** 왜 Sisyphus가 돌을 굴리는지, 왜 Hephaestus가 "진정한 장인"인지, 그리고 [오케스트레이션 가이드](docs/guide/orchestration.md)를 읽어보세요.
|
||||
>
|
||||
> oh-my-opencode가 처음이신가요? 어떤 모델을 써야 할지 **[설치 가이드](docs/guide/installation.md#step-5-understand-your-model-setup)** 에서 추천 조합을 확인하세요.
|
||||
|
||||
## 제거 (Uninstallation)
|
||||
|
||||
oh-my-opencode를 지우려면:
|
||||
|
||||
1. **OpenCode 설정에서 플러그인 제거**
|
||||
|
||||
`~/.config/opencode/opencode.json` (또는 `opencode.jsonc`)를 열고 `plugin` 배열에서 `"oh-my-opencode"`를 지우세요.
|
||||
|
||||
```bash
|
||||
# jq 사용 시
|
||||
jq '.plugin = [.plugin[] | select(. != "oh-my-opencode")]' \
|
||||
~/.config/opencode/opencode.json > /tmp/oc.json && \
|
||||
mv /tmp/oc.json ~/.config/opencode/opencode.json
|
||||
```
|
||||
|
||||
2. **설정 파일 제거 (선택 사항)**
|
||||
|
||||
```bash
|
||||
# 사용자 설정 제거
|
||||
rm -f ~/.config/opencode/oh-my-opencode.json ~/.config/opencode/oh-my-opencode.jsonc
|
||||
|
||||
# 프로젝트 설정 제거 (있는 경우)
|
||||
rm -f .opencode/oh-my-opencode.json .opencode/oh-my-opencode.jsonc
|
||||
```
|
||||
|
||||
3. **제거 확인**
|
||||
|
||||
```bash
|
||||
opencode --version
|
||||
# 이제 플러그인이 로드되지 않아야 합니다
|
||||
```
|
||||
|
||||
## 작가의 말
|
||||
|
||||
**우리의 철학이 궁금하다면?** [Ultrawork 선언문](docs/manifesto.md)을 읽어보세요.
|
||||
|
||||
---
|
||||
|
||||
저는 개인 프로젝트에 LLM 토큰 값으로만 2만 4천 달러(약 3천만 원)를 태웠습니다. 모든 툴을 다 써봤고, 설정이란 설정은 다 건드려봤습니다. 결론은 OpenCode가 이겼습니다.
|
||||
|
||||
제가 부딪혔던 모든 문제와 그 해결책이 이 플러그인에 구워져 있습니다. 설치하고 그냥 쓰세요.
|
||||
|
||||
OpenCode가 Debian/Arch라면, OmO는 Ubuntu/[Omarchy](https://omarchy.org/)입니다.
|
||||
|
||||
[AmpCode](https://ampcode.com)와 [Claude Code](https://code.claude.com/docs/overview)의 영향을 아주 짙게 받았습니다. 기능들을 포팅했고, 대다수는 개선했습니다. 아직도 짓고 있는 중입니다. 이건 **Open**Code니까요.
|
||||
|
||||
다른 하네스들도 멀티 모델 오케스트레이션을 약속합니다. 하지만 우리는 그걸 "진짜로" 내놨습니다. 안정성도 챙겼고요. 말로만이 아니라 실제로 돌아가는 기능들입니다.
|
||||
|
||||
제가 이 프로젝트의 가장 병적인 헤비 유저입니다:
|
||||
- 어떤 모델의 로직이 가장 날카로운가?
|
||||
- 디버깅의 신은 누구인가?
|
||||
- 글은 누가 제일 잘 쓰는가?
|
||||
- 프론트엔드 생태계는 누가 지배하고 있는가?
|
||||
- 백엔드 끝판왕은 누구인가?
|
||||
- 데일리 드라이빙용으로 제일 빠른 건 뭔가?
|
||||
- 경쟁사들은 지금 뭘 출시하고 있는가?
|
||||
|
||||
이 플러그인은 그 모든 질문의 정수(Distillation)입니다. 가장 좋은 것만 가져다 쓰세요. 개선할 점이 보인다고요? PR은 언제나 환영입니다.
|
||||
|
||||
**어떤 하네스를 쓸지 고뇌하는 건 이제 그만두세요.**
|
||||
**제가 직접 리서치하고, 제일 좋은 것만 훔쳐 와서, 여기에 욱여넣겠습니다.**
|
||||
|
||||
거만해 보이나요? 더 나은 방법이 있다면 기여하세요. 대환영입니다.
|
||||
|
||||
언급된 어떤 프로젝트/모델과도 아무런 이해관계가 없습니다. 그냥 순수하게 개인적인 실험의 결과물입니다.
|
||||
|
||||
이 프로젝트의 99%는 OpenCode로 만들어졌습니다. 전 사실 TypeScript를 잘 모릅니다. **하지만 이 문서는 제가 직접 리뷰하고 갈아엎었습니다.**
|
||||
|
||||
## 함께하는 전문가들
|
||||
|
||||
- [Indent](https://indentcorp.com)
|
||||
- 인플루언서 마케팅 솔루션 Spray, 크로스보더 커머스 플랫폼 vovushop, AI 커머스 리뷰 마케팅 솔루션 vreview 제작
|
||||
- [Google](https://google.com)
|
||||
- [Microsoft](https://microsoft.com)
|
||||
- [ELESTYLE](https://elestyle.jp)
|
||||
- 멀티 모바일 결제 게이트웨이 elepay, 캐시리스 솔루션을 위한 모바일 애플리케이션 SaaS OneQR 제작
|
||||
|
||||
*멋진 히어로 이미지를 만들어주신 [@junhoyeo](https://github.com/junhoyeo)님께 특별히 감사드립니다.*
|
||||
363
README.ru.md
363
README.ru.md
@@ -1,363 +0,0 @@
|
||||
> [!WARNING]
|
||||
> **Временное уведомление (на этой неделе): сниженная доступность мейнтейнера**
|
||||
>
|
||||
> Ключевой мейнтейнер Q получил травму, поэтому на этой неделе ответы по issue/PR и релизы могут задерживаться.
|
||||
> Спасибо за терпение и поддержку.
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> [](https://sisyphuslabs.ai)
|
||||
>
|
||||
> > **Мы создаём полноценную продуктовую версию Sisyphus, чтобы задать стандарты для frontier-агентов. <br />Присоединяйтесь к листу ожидания [здесь](https://sisyphuslabs.ai).**
|
||||
|
||||
> [!TIP] Будьте с нами!
|
||||
>
|
||||
> | [](https://discord.gg/PUwSMR9XNk) | Вступайте в наш [Discord](https://discord.gg/PUwSMR9XNk), чтобы общаться с контрибьюторами и пользователями `oh-my-opencode`. |
|
||||
> | ----------------------------------- | ------------------------------------------------------------ |
|
||||
> | [](https://x.com/justsisyphus) | Новости и обновления `oh-my-opencode` раньше публиковались на моём аккаунте X. <br /> После ошибочной блокировки, [@justsisyphus](https://x.com/justsisyphus) публикует обновления вместо меня. |
|
||||
> | [](https://github.com/code-yeongyu) | Подпишитесь на [@code-yeongyu](https://github.com/code-yeongyu) на GitHub, чтобы следить за другими проектами. |
|
||||
|
||||
<!-- <CENTERED SECTION FOR GITHUB DISPLAY> --> <div align="center">
|
||||
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent#oh-my-opencode)
|
||||
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent#oh-my-opencode)
|
||||
|
||||
</div>
|
||||
|
||||
> Anthropic [**заблокировал OpenCode из-за нас.**](https://x.com/thdxr/status/2010149530486911014) **Да, это правда.** Они хотят держать вас в замкнутой системе. Claude Code — красивая тюрьма, но всё равно тюрьма.
|
||||
>
|
||||
> Мы не делаем привязки. Мы работаем с любыми моделями. Claude / Kimi / GLM для оркестрации. GPT для рассуждений. Minimax для скорости. Gemini для творческих задач. Будущее — не в выборе одного победителя, а в оркестровке всех. Модели дешевеют каждый месяц. Умнеют каждый месяц. Ни один провайдер не будет доминировать. Мы строим под открытый рынок, а не под чьи-то огороженные сады.
|
||||
|
||||
<div align="center">
|
||||
|
||||
[](https://github.com/code-yeongyu/oh-my-openagent/releases) [](https://www.npmjs.com/package/oh-my-opencode) [](https://github.com/code-yeongyu/oh-my-openagent/graphs/contributors) [](https://github.com/code-yeongyu/oh-my-openagent/network/members) [](https://github.com/code-yeongyu/oh-my-openagent/stargazers) [](https://github.com/code-yeongyu/oh-my-openagent/issues) [](https://github.com/code-yeongyu/oh-my-openagent/blob/master/LICENSE.md) [](https://deepwiki.com/code-yeongyu/oh-my-openagent)
|
||||
|
||||
English | 한국어 | 日本語 | 简体中文 | Русский
|
||||
|
||||
</div> <!-- </CENTERED SECTION FOR GITHUB DISPLAY> -->
|
||||
|
||||
## Отзывы
|
||||
|
||||
> «Из-за него я отменил подписку на Cursor. В опенсорс-сообществе происходит что-то невероятное.» — [Arthur Guiot](https://x.com/arthur_guiot/status/2008736347092382053?s=20)
|
||||
|
||||
> «Если Claude Code делает за 7 дней то, на что у человека уходит 3 месяца, Sisyphus справляется за 1 час. Он просто работает, пока задача не выполнена. Это дисциплинированный агент.» <br/>— B, исследователь в области квантовых финансов
|
||||
|
||||
> «За один день устранил 8000 предупреждений eslint с помощью Oh My Opencode.» <br/>— [Jacob Ferrari](https://x.com/jacobferrari_/status/2003258761952289061)
|
||||
|
||||
> «За ночь конвертировал приложение на tauri в 45k строк в веб-SaaS с помощью Ohmyopencode и ralph loop. Начал с промпта «проинтервьюируй меня», попросил оценки и рекомендации по вопросам. Было удивительно наблюдать за работой и утром проснуться с почти рабочим сайтом!» — [James Hargis](https://x.com/hargabyte/status/2007299688261882202)
|
||||
|
||||
> «Используйте oh-my-opencode — вы не захотите возвращаться назад.» <br/>— [d0t3ch](https://x.com/d0t3ch/status/2001685618200580503)
|
||||
|
||||
> «Пока не могу точно объяснить, почему это так круто, но опыт разработки вышел на совершенно другой уровень.» — [苔硯:こけすずり](https://x.com/kokesuzuri/status/2008532913961529372?s=20)
|
||||
|
||||
> «Экспериментирую с open code, oh my opencode и supermemory этим выходным, чтобы собрать нечто среднее между Minecraft и souls-like.» «Попросил добавить анимации приседания, пока хожу на обеденную прогулку. [Видео]» — [MagiMetal](https://x.com/MagiMetal/status/2005374704178373023)
|
||||
|
||||
> «Ребята, вам нужно включить это в ядро и нанять его. Серьёзно. Это очень, очень, очень хорошо.» <br/>— Henning Kilset
|
||||
|
||||
> «Наймите @yeon_gyu_kim, если сможете его уговорить, этот парень революционизировал opencode.» <br/>— [mysticaltech](https://x.com/mysticaltech/status/2001858758608376079)
|
||||
|
||||
> «Oh My OpenCode — это что-то с чем-то» — [YouTube — Darren Builds AI](https://www.youtube.com/watch?v=G_Snfh2M41M)
|
||||
|
||||
------
|
||||
|
||||
# Oh My OpenCode
|
||||
|
||||
Вы жонглируете Claude Code, Codex, случайными OSS-моделями. Настраиваете рабочие процессы. Дебажите агентов.
|
||||
|
||||
Мы уже проделали эту работу. Протестировали всё. Оставили только то, что реально работает.
|
||||
|
||||
Установите OmO. Введите `ultrawork`. Готово.
|
||||
|
||||
## Установка
|
||||
|
||||
### Для людей
|
||||
|
||||
Скопируйте и вставьте этот промпт в ваш LLM-агент (Claude Code, AmpCode, Cursor и т.д.):
|
||||
|
||||
```
|
||||
Install and configure oh-my-opencode by following the instructions here:
|
||||
https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
|
||||
```
|
||||
|
||||
Или прочитайте руководство по установке, но серьёзно — пусть агент сделает это за вас. Люди ошибаются в конфигах.
|
||||
|
||||
### Для LLM-агентов
|
||||
|
||||
Загрузите руководство по установке и следуйте ему:
|
||||
|
||||
```bash
|
||||
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
|
||||
```
|
||||
|
||||
------
|
||||
|
||||
## Пропустите этот README
|
||||
|
||||
Мы вышли за пределы эпохи чтения документации. Просто вставьте это в своего агента:
|
||||
|
||||
```
|
||||
Read this and tell me why it's not just another boilerplate: https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/README.md
|
||||
```
|
||||
|
||||
## Ключевые возможности
|
||||
|
||||
### 🪄 `ultrawork`
|
||||
|
||||
Вы правда это читаете? Поразительно.
|
||||
|
||||
Установите. Введите `ultrawork` (или `ulw`). Готово.
|
||||
|
||||
Всё описанное ниже, каждая функция, каждая оптимизация — вам не нужно это знать. Оно просто работает.
|
||||
|
||||
Даже при наличии только следующих подписок ultrawork будет работать отлично (проект не аффилирован с ними, это личная рекомендация):
|
||||
|
||||
- [Подписка ChatGPT ($20)](https://chatgpt.com/)
|
||||
- [Подписка Kimi Code ($0.99) (*только в этом месяце)](https://www.kimi.com/membership/pricing?track_id=5cdeca93-66f0-4d35-aabb-b6df8fcea328)
|
||||
- [Тариф GLM Coding ($10)](https://z.ai/subscribe)
|
||||
- При доступе к оплате за токены использование моделей Kimi и Gemini обойдётся недорого.
|
||||
|
||||
| | Функция | Что делает |
|
||||
| --- | -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 🤖 | **Дисциплинированные агенты** | Sisyphus оркестрирует Hephaestus, Oracle, Librarian, Explore. Полноценная AI-команда разработки в параллельном режиме. |
|
||||
| ⚡ | **`ultrawork` / `ulw`** | Одно слово. Все агенты активируются. Не останавливается, пока задача не выполнена. |
|
||||
| 🚪 | **[IntentGate](https://factory.ai/news/terminal-bench)** | Анализирует истинное намерение пользователя перед классификацией и действием. Никакого буквального неверного толкования. |
|
||||
| 🔗 | **Инструмент правок на основе хэш-якорей** | Хэш содержимого `LINE#ID` проверяет каждое изменение. Ноль ошибок с устаревшими строками. Вдохновлено [oh-my-pi](https://github.com/can1357/oh-my-pi). [Проблема обвязки →](https://blog.can.ac/2026/02/12/the-harness-problem/) |
|
||||
| 🛠️ | **LSP + AST-Grep** | Переименование в рабочем пространстве, диагностика перед сборкой, переписывание с учётом AST. Точность IDE для агентов. |
|
||||
| 🧠 | **Фоновые агенты** | Запускайте 5+ специалистов параллельно. Контекст остаётся компактным. Результаты — когда готовы. |
|
||||
| 📚 | **Встроенные MCP** | Exa (веб-поиск), Context7 (официальная документация), Grep.app (поиск по GitHub). Всегда включены. |
|
||||
| 🔁 | **Ralph Loop / `/ulw-loop`** | Самореферентный цикл. Не останавливается, пока задача не выполнена на 100%. |
|
||||
| ✅ | **Todo Enforcer** | Агент завис? Система немедленно возвращает его в работу. Ваша задача будет выполнена, точка. |
|
||||
| 💬 | **Comment Checker** | Никакого AI-мусора в комментариях. Код читается так, словно его писал опытный разработчик. |
|
||||
| 🖥️ | **Интеграция с Tmux** | Полноценный интерактивный терминал. REPL, дебаггеры, TUI. Всё живое. |
|
||||
| 🔌 | **Совместимость с Claude Code** | Ваши хуки, команды, навыки, MCP и плагины? Всё работает без изменений. |
|
||||
| 🎯 | **MCP, встроенные в навыки** | Навыки несут собственные MCP-серверы. Никакого раздувания контекста. |
|
||||
| 📋 | **Prometheus Planner** | Стратегическое планирование в режиме интервью перед любым выполнением. |
|
||||
| 🔍 | **`/init-deep`** | Автоматически генерирует иерархические файлы `AGENTS.md` по всему проекту. Отлично работает на эффективность токенов и производительность агента. |
|
||||
|
||||
### Дисциплинированные агенты
|
||||
|
||||
<table><tr> <td align="center"><img src=".github/assets/sisyphus.png" height="300" /></td> <td align="center"><img src=".github/assets/hephaestus.png" height="300" /></td> </tr></table>
|
||||
|
||||
**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) — главный оркестратор. Он планирует, делегирует задачи специалистам и доводит их до завершения с агрессивным параллельным выполнением. Он не останавливается на полпути.
|
||||
|
||||
**Hephaestus** (`gpt-5.3-codex`) — автономный глубокий исполнитель. Дайте ему цель, а не рецепт. Он исследует кодовую базу, изучает паттерны и выполняет задачи сквозным образом без лишних подсказок. *Законный Мастер.*
|
||||
|
||||
**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) — стратегический планировщик. Режим интервью: задаёт вопросы, определяет объём работ и формирует детальный план до того, как написана хотя бы одна строка кода.
|
||||
|
||||
Каждый агент настроен под сильные стороны своей модели. Никакого ручного переключения между моделями. Подробнее →
|
||||
|
||||
> Anthropic [заблокировал OpenCode из-за нас.](https://x.com/thdxr/status/2010149530486911014) Именно поэтому Hephaestus зовётся «Законным Мастером». Ирония намеренная.
|
||||
>
|
||||
> Мы работаем лучше всего на Opus, но Kimi K2.5 + GPT-5.3 Codex уже превосходят ванильный Claude Code. Никакой настройки не требуется.
|
||||
|
||||
### Оркестрация агентов
|
||||
|
||||
Когда Sisyphus делегирует задачу субагенту, он выбирает не модель, а **категорию**. Категория автоматически сопоставляется с нужной моделью:
|
||||
|
||||
| Категория | Для чего предназначена |
|
||||
| -------------------- | ------------------------------------- |
|
||||
| `visual-engineering` | Фронтенд, UI/UX, дизайн |
|
||||
| `deep` | Автономные исследования + выполнение |
|
||||
| `quick` | Изменения в одном файле, опечатки |
|
||||
| `ultrabrain` | Сложная логика, архитектурные решения |
|
||||
|
||||
Агент сообщает тип задачи. Обвязка подбирает нужную модель. Вы ни к чему не прикасаетесь.
|
||||
|
||||
### Совместимость с Claude Code
|
||||
|
||||
Вы тщательно настроили Claude Code. Хорошо.
|
||||
|
||||
Каждый хук, команда, навык, MCP и плагин работают здесь без изменений. Полная совместимость, включая плагины.
|
||||
|
||||
### Инструменты мирового класса для ваших агентов
|
||||
|
||||
LSP, AST-Grep, Tmux, MCP — реально интегрированы, а не склеены скотчем.
|
||||
|
||||
- **LSP**: `lsp_rename`, `lsp_goto_definition`, `lsp_find_references`, `lsp_diagnostics`. Точность IDE для каждого агента
|
||||
- **AST-Grep**: Поиск и переписывание кода с учётом синтаксических паттернов для 25 языков
|
||||
- **Tmux**: Полноценный интерактивный терминал. REPL, дебаггеры, TUI-приложения. Агент остаётся в сессии
|
||||
- **MCP**: Веб-поиск, официальная документация, поиск по коду на GitHub. Всё встроено
|
||||
|
||||
### MCP, встроенные в навыки
|
||||
|
||||
MCP-серверы съедают бюджет контекста. Мы это исправили.
|
||||
|
||||
Навыки приносят собственные MCP-серверы. Запускаются по необходимости, ограничены задачей, исчезают по завершении. Контекстное окно остаётся чистым.
|
||||
|
||||
### Лучше пишет код. Правки на основе хэш-якорей
|
||||
|
||||
Проблема обвязки реальна. Большинство сбоев агентов — не вина модели. Это вина инструмента правок.
|
||||
|
||||
> *«Ни один из этих инструментов не даёт модели стабильный, проверяемый идентификатор строк, которые она хочет изменить... Все они полагаются на то, что модель воспроизведёт контент, который уже видела. Когда это не получается — а так бывает нередко — пользователь обвиняет модель.»*
|
||||
>
|
||||
> <br/>— [Can Bölük, «Проблема обвязки»](https://blog.can.ac/2026/02/12/the-harness-problem/)
|
||||
|
||||
Вдохновлённые [oh-my-pi](https://github.com/can1357/oh-my-pi), мы реализовали **Hashline**. Каждая строка, которую читает агент, возвращается с тегом хэша содержимого:
|
||||
|
||||
```
|
||||
11#VK| function hello() {
|
||||
22#XJ| return "world";
|
||||
33#MB| }
|
||||
```
|
||||
|
||||
Агент редактирует, ссылаясь на эти теги. Если файл изменился с момента последнего чтения, хэш не совпадёт, и правка будет отклонена до любого повреждения. Никакого воспроизведения пробелов. Никаких ошибок с устаревшими строками.
|
||||
|
||||
Grok Code Fast 1: успешность **6.7% → 68.3%**. Просто за счёт замены инструмента правок.
|
||||
|
||||
### Глубокая инициализация. `/init-deep`
|
||||
|
||||
Запустите `/init-deep`. Будут сгенерированы иерархические файлы `AGENTS.md`:
|
||||
|
||||
```
|
||||
project/
|
||||
├── AGENTS.md ← контекст всего проекта
|
||||
├── src/
|
||||
│ ├── AGENTS.md ← контекст для src
|
||||
│ └── components/
|
||||
│ └── AGENTS.md ← контекст для компонентов
|
||||
```
|
||||
|
||||
Агенты автоматически читают нужный контекст. Никакого ручного управления.
|
||||
|
||||
### Планирование. Prometheus
|
||||
|
||||
Сложная задача? Не нужно молиться и надеяться на промпт.
|
||||
|
||||
`/start-work` вызывает Prometheus. **Интервьюирует вас как настоящий инженер**, определяет объём работ и неоднозначности, формирует проверенный план до прикосновения к коду. Агент знает, что строит, прежде чем начать.
|
||||
|
||||
### Навыки
|
||||
|
||||
Навыки — это не просто промпты. Каждый привносит:
|
||||
|
||||
- Системные инструкции, настроенные под предметную область
|
||||
- Встроенные MCP-серверы, запускаемые по необходимости
|
||||
- Ограниченные разрешения. Агенты остаются в рамках
|
||||
|
||||
Встроенные: `playwright` (автоматизация браузера), `git-master` (атомарные коммиты, хирургия rebase), `frontend-ui-ux` (UI с упором на дизайн).
|
||||
|
||||
Добавьте свои: `.opencode/skills/*/SKILL.md` или `~/.config/opencode/skills/*/SKILL.md`.
|
||||
|
||||
**Хотите полное описание возможностей?** Смотрите **документацию по функциям** — агенты, хуки, инструменты, MCP и всё остальное подробно.
|
||||
|
||||
------
|
||||
|
||||
> **Впервые в oh-my-opencode?** Прочитайте **Обзор**, чтобы понять, что у вас есть, или ознакомьтесь с **руководством по оркестрации**, чтобы узнать, как агенты взаимодействуют.
|
||||
|
||||
## Удаление
|
||||
|
||||
Чтобы удалить oh-my-opencode:
|
||||
|
||||
1. **Удалите плагин из конфига OpenCode**
|
||||
|
||||
Отредактируйте `~/.config/opencode/opencode.json` (или `opencode.jsonc`) и уберите `"oh-my-opencode"` из массива `plugin`:
|
||||
|
||||
```bash
|
||||
# С помощью jq
|
||||
jq '.plugin = [.plugin[] | select(. != "oh-my-opencode")]' \
|
||||
~/.config/opencode/opencode.json > /tmp/oc.json && \
|
||||
mv /tmp/oc.json ~/.config/opencode/opencode.json
|
||||
```
|
||||
|
||||
2. **Удалите файлы конфигурации (опционально)**
|
||||
|
||||
```bash
|
||||
# Удалить пользовательский конфиг
|
||||
rm -f ~/.config/opencode/oh-my-opencode.json ~/.config/opencode/oh-my-opencode.jsonc
|
||||
|
||||
# Удалить конфиг проекта (если существует)
|
||||
rm -f .opencode/oh-my-opencode.json .opencode/oh-my-opencode.jsonc
|
||||
```
|
||||
|
||||
3. **Проверьте удаление**
|
||||
|
||||
```bash
|
||||
opencode --version
|
||||
# Плагин больше не должен загружаться
|
||||
```
|
||||
|
||||
## Функции
|
||||
|
||||
Функции, которые, как вы будете думать, должны были существовать всегда. Попробовав раз, вы не сможете вернуться назад.
|
||||
|
||||
Смотрите полную документацию по функциям.
|
||||
|
||||
**Краткий обзор:**
|
||||
|
||||
- **Агенты**: Sisyphus (главный агент), Prometheus (планировщик), Oracle (архитектура/отладка), Librarian (документация/поиск по коду), Explore (быстрый grep по кодовой базе), Multimodal Looker
|
||||
- **Фоновые агенты**: Запускайте несколько агентов параллельно, как настоящая команда разработки
|
||||
- **Инструменты LSP и AST**: Рефакторинг, переименование, диагностика, поиск кода с учётом AST
|
||||
- **Инструмент правок на основе хэш-якорей**: Ссылки `LINE#ID` проверяют содержимое перед применением каждого изменения. Хирургические правки, ноль ошибок с устаревшими строками
|
||||
- **Инъекция контекста**: Автоматическое добавление AGENTS.md, README.md, условных правил
|
||||
- **Совместимость с Claude Code**: Полная система хуков, команды, навыки, агенты, MCP
|
||||
- **Встроенные MCP**: websearch (Exa), context7 (документация), grep_app (поиск по GitHub)
|
||||
- **Инструменты сессий**: Список, чтение, поиск и анализ истории сессий
|
||||
- **Инструменты продуктивности**: Ralph Loop, Todo Enforcer, Comment Checker, Think Mode и другое
|
||||
- **Настройка моделей**: Сопоставление агент–модель встроено в руководство по установке
|
||||
|
||||
## Конфигурация
|
||||
|
||||
Продуманные настройки по умолчанию, которые можно изменить при необходимости.
|
||||
|
||||
Смотрите документацию по конфигурации.
|
||||
|
||||
**Краткий обзор:**
|
||||
|
||||
- **Расположение конфигов**: `.opencode/oh-my-opencode.jsonc` или `.opencode/oh-my-opencode.json` (проект), `~/.config/opencode/oh-my-opencode.jsonc` или `~/.config/opencode/oh-my-opencode.json` (пользователь)
|
||||
- **Поддержка JSONC**: Комментарии и конечные запятые поддерживаются
|
||||
- **Агенты**: Переопределение моделей, температур, промптов и разрешений для любого агента
|
||||
- **Встроенные навыки**: `playwright` (автоматизация браузера), `git-master` (атомарные коммиты)
|
||||
- **Агент Sisyphus**: Главный оркестратор с Prometheus (Планировщик) и Metis (Консультант по плану)
|
||||
- **Фоновые задачи**: Настройка ограничений параллельности по провайдеру/модели
|
||||
- **Категории**: Делегирование задач по предметной области (`visual`, `business-logic`, пользовательские)
|
||||
- **Хуки**: 25+ встроенных хуков, все настраиваются через `disabled_hooks`
|
||||
- **MCP**: Встроенные websearch (Exa), context7 (документация), grep_app (поиск по GitHub)
|
||||
- **LSP**: Полная поддержка LSP с инструментами рефакторинга
|
||||
- **Экспериментальное**: Агрессивное усечение, автовозобновление и другое
|
||||
|
||||
## Слово автора
|
||||
|
||||
**Хотите узнать философию?** Прочитайте Манифест Ultrawork.
|
||||
|
||||
------
|
||||
|
||||
Я потратил $24K на токены LLM в личных проектах. Попробовал все инструменты. Настраивал всё до смерти. OpenCode победил.
|
||||
|
||||
Каждая проблема, с которой я столкнулся, — её решение уже встроено в этот плагин. Устанавливайте и работайте.
|
||||
|
||||
Если OpenCode — это Debian/Arch, то OmO — это Ubuntu/[Omarchy](https://omarchy.org/).
|
||||
|
||||
Сильное влияние со стороны [AmpCode](https://ampcode.com) и [Claude Code](https://code.claude.com/docs/overview). Функции портированы, часто улучшены. Продолжаем строить. Это **Open**Code.
|
||||
|
||||
Другие обвязки обещают оркестрацию нескольких моделей. Мы её поставляем. Плюс стабильность. Плюс функции, которые реально работают.
|
||||
|
||||
Я самый одержимый пользователь этого проекта:
|
||||
|
||||
- Какая модель думает острее всего?
|
||||
- Кто бог отладки?
|
||||
- Кто пишет лучший код?
|
||||
- Кто рулит фронтендом?
|
||||
- Кто владеет бэкендом?
|
||||
- Что быстрее всего в ежедневной работе?
|
||||
- Что запускают конкуренты?
|
||||
|
||||
Этот плагин — дистилляция. Берём лучшее. Есть улучшения? PR приветствуются.
|
||||
|
||||
**Хватит мучиться с выбором обвязки.** **Я буду исследовать, воровать лучшее и поставлять это сюда.**
|
||||
|
||||
Звучит высокомерно? Знаете, как сделать лучше? Контрибьютьте. Добро пожаловать.
|
||||
|
||||
Никакой аффилиации с упомянутыми проектами/моделями. Только личные эксперименты.
|
||||
|
||||
99% этого проекта было создано с помощью OpenCode. Я почти не знаю TypeScript. **Но эту документацию я лично просматривал и во многом переписывал.**
|
||||
|
||||
## Любимый профессионалами из
|
||||
|
||||
- Indent
|
||||
- Spray — решение для influencer-маркетинга, vovushop — платформа кросс-граничной торговли, vreview — AI-решение для маркетинга отзывов в commerce
|
||||
- [Google](https://google.com)
|
||||
- [Microsoft](https://microsoft.com)
|
||||
- ELESTYLE
|
||||
- elepay — мультимобильный платёжный шлюз, OneQR — мобильное SaaS-приложение для безналичных расчётов
|
||||
|
||||
*Особая благодарность [@junhoyeo](https://github.com/junhoyeo) за это потрясающее hero-изображение.*
|
||||
1201
README.zh-cn.md
1201
README.zh-cn.md
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user