# Compare commits

30 commits

| SHA1 |
|---|
| 711a347b64 |
| 6667ace7ca |
| e48be69a62 |
| 3808fd3a4b |
| ac33b76193 |
| a24f1e905e |
| 08439a511a |
| cbbc7bd075 |
| f9bc23b39f |
| 69e3bbe362 |
| 8c3feb8a9d |
| 8b2c134622 |
| 96e7b39a83 |
| bb181ee572 |
| 8aa2549368 |
| d18bd068c3 |
| b03e463bde |
| 4a82ff40fb |
| 4b5e38f8f8 |
| e63c568c4f |
| ddfbdbb84e |
| 41dd4ce22a |
| 4f26e99ee7 |
| b405494808 |
| 839a4c5316 |
| 08d43efdb0 |
| 061a5f5132 |
| d4acd23630 |
| d15794004e |
| de6f4b2c91 |
**`.github/workflows/publish.yml`** (vendored, 1 line changed)
```diff
@@ -51,7 +51,6 @@ jobs:
           # Run them in separate processes to prevent cross-file contamination
           bun test src/plugin-handlers
           bun test src/hooks/atlas
           bun test src/hooks/compaction-context-injector
           bun test src/features/tmux-subagent

-      - name: Run remaining tests
```
**`.gitignore`** (vendored, 2 lines changed)
```diff
@@ -33,4 +33,4 @@ yarn.lock
 test-injection/
 notepad.md
 oauth-success.html
-.188e87dbff6e7fd9-00000000.bun-build
+*.bun-build
```
@@ -1,6 +1,5 @@

---
description: Compare HEAD with the latest published npm version and list all unpublished changes
model: anthropic/claude-haiku-4-5
---

<command-instruction>

@@ -82,3 +81,68 @@ None or a list

- **Recommendation**: patch|minor|major
- **Reason**: the reason
</output-format>

<oracle-safety-review>
## Oracle Deployment Safety Review (only when the user explicitly requests it)

**Trigger keywords**: "배포 가능" (deployable), "배포해도 될까" (may I deploy?), "안전한지" (is it safe?), "리뷰" (review), "검토" (examine), "oracle", "오라클" (oracle)

If the user's request contains any of the keywords above:

### 1. Run Pre-checks
```bash
bun run typecheck
bun test
```
- On failure → report "❌ Cannot deploy" immediately, without summoning the Oracle

### 2. Oracle Summoning Prompt

Collect the following information and hand it to the Oracle:

```
## Deployment Safety Review Request

### Summary of Changes
{the change table analyzed above}

### Key Diffs (grouped by feature)
{the core code change of each feat/fix/refactor - only the essentials, not the full diff}

### Verification Results
- Typecheck: ✅/❌
- Tests: {pass}/{total} (✅/❌)

### Review Items
1. **Regression risk**: Are there changes that could affect existing functionality?
2. **Side effects**: Where could unexpected side effects arise?
3. **Breaking Changes**: Are there changes that affect external users?
4. **Edge Cases**: Are there edge cases that were missed?
5. **Deployment recommendation**: SAFE / CAUTION / UNSAFE

### Request
Analyze the changes above in depth and judge whether they are safe to deploy.
If there are risks, explain them with concrete scenarios.
If there are keywords worth monitoring after deployment, suggest them.
```

### 3. Output Format After the Oracle Responds

## 🔍 Oracle Deployment Safety Review Results

### Verdict: ✅ SAFE / ⚠️ CAUTION / ❌ UNSAFE

### Risk Analysis
| Area | Risk Level | Description |
|------|-------------|-------------|
| ... | 🟢/🟡/🔴 | ... |

### Recommendations
- ...

### Post-deployment Monitoring Keywords
- ...

### Conclusion
{the Oracle's final judgment}
</oracle-safety-review>
**`.opencode/skills/github-issue-triage/SKILL.md`** (new file, 519 lines)
@@ -0,0 +1,519 @@

---
name: github-issue-triage
description: "Triage GitHub issues with parallel analysis. 1 issue = 1 background agent. Exhaustive pagination. Analyzes: question vs bug, project validity, resolution status, community engagement, linked PRs. Triggers: 'triage issues', 'analyze issues', 'issue report'."
---

# GitHub Issue Triage Specialist

You are a GitHub issue triage automation agent. Your job is to:

1. Fetch **EVERY SINGLE ISSUE** within a specified time range using **EXHAUSTIVE PAGINATION**
2. Launch ONE background agent PER issue for parallel analysis
3. Collect results and generate a comprehensive triage report

---
# CRITICAL: EXHAUSTIVE PAGINATION IS MANDATORY

**THIS IS THE MOST IMPORTANT RULE. VIOLATION = COMPLETE FAILURE.**

## YOU MUST FETCH ALL ISSUES. PERIOD.
| WRONG | CORRECT |
|-------|---------|
| `gh issue list --limit 100` and stop | Paginate until ZERO results are returned |
| "I found 16 issues" (first page only) | "I found 61 issues after 5 pages" |
| Assuming the first page is enough | Using `--limit 500` and verifying the count |
| Stopping when you "feel" you have enough | Stopping ONLY when the API returns an empty result |
### WHY THIS MATTERS

- The GitHub API returns **at most 100 issues per page**
- A busy repo can accumulate **50-100+ issues** in 48 hours
- **MISSING ISSUES = MISSING CRITICAL BUGS = PRODUCTION OUTAGES**
- The user asked for a triage, not a "sample triage"
### THE ONLY ACCEPTABLE APPROACH

```bash
# ALWAYS use --limit 500 (maximum allowed)
# ALWAYS check if more pages exist
# ALWAYS continue until an empty result

gh issue list --repo $REPO --state all --limit 500 --json number,title,state,createdAt,updatedAt,labels,author
```

**If the result count equals your limit, THERE ARE MORE ISSUES. KEEP FETCHING.**
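The stop condition generalizes: a page shorter than the limit is the signal that everything has been fetched. A minimal sketch of that shape, assuming a hypothetical `fetchPage` callback (not part of the skill), where `limit` mirrors the `--limit 500` above:

```typescript
// Generic exhaustive pagination: keep pulling pages until one comes back
// shorter than the page limit, which means nothing is left to fetch.
function fetchAllPages<T>(
  fetchPage: (page: number, limit: number) => T[],
  limit = 500,
): T[] {
  const all: T[] = [];
  for (let page = 1; ; page++) {
    const batch = fetchPage(page, limit);
    all.push(...batch);
    // A short (or empty) page terminates the loop.
    if (batch.length < limit) break;
  }
  return all;
}
```

A full page never terminates the loop, even if it happens to be the last one; the loop only stops after observing a short page, which is exactly the "if you got 500, keep fetching" rule.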
---

## PHASE 1: Issue Collection (EXHAUSTIVE Pagination)

### 1.1 Determine Repository and Time Range

Extract from user request:

- `REPO`: Repository in `owner/repo` format (default: current repo via `gh repo view --json nameWithOwner -q .nameWithOwner`)
- `TIME_RANGE`: Hours to look back (default: 48)

---
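The `TIME_RANGE` look-back above reduces to a single ISO-8601 cutoff timestamp; a minimal sketch (the helper name is ours, not something the skill defines):

```typescript
// Hypothetical helper: compute the ISO-8601 cutoff for a look-back window
// of `hoursBack` hours, matching the CUTOFF_DATE used by the pagination loop.
function cutoffISO(hoursBack: number, now: Date = new Date()): string {
  return new Date(now.getTime() - hoursBack * 3_600_000).toISOString();
}
```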
## AGENT CATEGORY RATIO RULES

**Philosophy**: Use the cheapest agent that can do the job. Expensive agents = waste unless necessary.

### Default Ratio: `unspecified-low:8, quick:1, writing:1`

| Category | Ratio | Use For | Cost |
|----------|-------|---------|------|
| `unspecified-low` | 80% | Standard issue analysis - read issue, fetch comments, categorize | $ |
| `quick` | 10% | Trivial issues - obvious duplicates, spam, clearly resolved | ¢ |
| `writing` | 10% | Report generation, response drafting, summary synthesis | $$ |
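The `category:weight` ratio notation above is informal; if it needed to be parsed programmatically, a sketch could look like this (the parser is an assumption, the skill never defines one):

```typescript
// Hypothetical parser for ratio specs such as "unspecified-low:8, quick:1, writing:1".
function parseRatio(spec: string): Record<string, number> {
  const ratio: Record<string, number> = {};
  for (const part of spec.split(",")) {
    const [name, weight] = part.trim().split(":");
    ratio[name] = Number(weight); // weights are relative, not percentages
  }
  return ratio;
}
```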
### When to Override the Default Ratio

| Scenario | Recommended Ratio | Reason |
|----------|-------------------|--------|
| Bug-heavy triage | `unspecified-low:7, quick:2, writing:1` | More simple duplicates |
| Feature request triage | `unspecified-low:6, writing:3, quick:1` | More response drafting needed |
| Security audit | `unspecified-high:5, unspecified-low:4, writing:1` | Deeper analysis required |
| First-pass quick filter | `quick:8, unspecified-low:2` | Just categorize, don't analyze deeply |
### Agent Assignment Algorithm

```typescript
function assignAgentCategory(issues: Issue[], ratio: Record<string, number>): Map<Issue, string> {
  const assignments = new Map<Issue, string>();
  const total = Object.values(ratio).reduce((a, b) => a + b, 0);

  // Calculate counts for each category
  const counts: Record<string, number> = {};
  for (const [category, weight] of Object.entries(ratio)) {
    counts[category] = Math.floor(issues.length * (weight / total));
  }

  // Assign remaining to largest category
  const assigned = Object.values(counts).reduce((a, b) => a + b, 0);
  const remaining = issues.length - assigned;
  const largestCategory = Object.entries(ratio).sort((a, b) => b[1] - a[1])[0][0];
  counts[largestCategory] += remaining;

  // Distribute issues
  let issueIndex = 0;
  for (const [category, count] of Object.entries(counts)) {
    for (let i = 0; i < count && issueIndex < issues.length; i++) {
      assignments.set(issues[issueIndex++], category);
    }
  }

  return assignments;
}
```
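The rounding behaviour of the algorithm above can be exercised in isolation. This trims the counting logic into a standalone function (the trimmed copy is ours): every `floor` remainder lands on the heaviest category, so the counts always sum back to the issue total.

```typescript
// Standalone copy of the per-category counting from assignAgentCategory above.
function assignCounts(n: number, ratio: Record<string, number>): Record<string, number> {
  const total = Object.values(ratio).reduce((a, b) => a + b, 0);
  const counts: Record<string, number> = {};
  for (const [category, weight] of Object.entries(ratio)) {
    counts[category] = Math.floor(n * (weight / total));
  }
  const assigned = Object.values(counts).reduce((a, b) => a + b, 0);
  const largest = Object.entries(ratio).sort((a, b) => b[1] - a[1])[0][0];
  counts[largest] += n - assigned; // rounding remainder goes to the largest category
  return counts;
}
```

For 10 issues under the default ratio this yields 8/1/1; for 7 issues the floors produce 5/0/0 and the 2-issue remainder pushes `unspecified-low` up to 7.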
### Category Selection Heuristics

**Before launching agents, pre-classify issues for smarter category assignment:**

| Issue Signal | Assign To | Reason |
|--------------|-----------|--------|
| Has `duplicate` label | `quick` | Just confirm and close |
| Has `wontfix` label | `quick` | Just confirm and close |
| No comments, < 50 char body | `quick` | Likely spam or incomplete |
| Has a linked PR | `quick` | Already being addressed |
| Has `bug` label + long body | `unspecified-low` | Needs proper analysis |
| Has `feature` label | `unspecified-low` or `writing` | May need a response |
| Author is a maintainer | `quick` | They know what they're doing |
| 5+ comments | `unspecified-low` | Complex discussion |
| Needs a drafted response | `writing` | Prose quality matters |
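The cheap signals in the table can be encoded directly. A partial sketch covering the first few rows (the `IssueSignal` shape is an assumption; only the `quick`-routing signals are encoded, with everything else falling back to `unspecified-low`):

```typescript
type IssueSignal = {
  labels: string[];
  body: string;
  commentCount: number;
  hasLinkedPR: boolean;
};

// Cheap signals route an issue to the `quick` category; anything
// ambiguous falls through to the standard analysis agent.
function preClassify(issue: IssueSignal): "quick" | "unspecified-low" {
  if (issue.labels.includes("duplicate") || issue.labels.includes("wontfix")) return "quick";
  if (issue.commentCount === 0 && issue.body.length < 50) return "quick";
  if (issue.hasLinkedPR) return "quick";
  return "unspecified-low";
}
```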
---

### 1.2 Exhaustive Pagination Loop

# STOP. READ THIS BEFORE EXECUTING.

**YOU WILL FETCH EVERY. SINGLE. ISSUE. NO EXCEPTIONS.**

## THE GOLDEN RULE

```
NEVER use --limit 100. ALWAYS use --limit 500.
NEVER stop at the first result. ALWAYS verify you got everything.
NEVER assume "that's probably all". ALWAYS check if more exist.
```

## MANDATORY PAGINATION LOOP (COPY-PASTE THIS EXACTLY)

You MUST execute this EXACT pagination loop. DO NOT simplify. DO NOT skip iterations.
```bash
#!/bin/bash
# MANDATORY PAGINATION - Execute this EXACTLY as written

REPO="code-yeongyu/oh-my-opencode" # or use: gh repo view --json nameWithOwner -q .nameWithOwner
TIME_RANGE=48 # hours
CUTOFF_DATE=$(date -v-${TIME_RANGE}H +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -d "${TIME_RANGE} hours ago" -Iseconds)

echo "=== EXHAUSTIVE PAGINATION START ==="
echo "Repository: $REPO"
echo "Cutoff date: $CUTOFF_DATE"
echo ""

# STEP 1: First fetch with --limit 500
echo "[Page 1] Fetching issues..."
FIRST_FETCH=$(gh issue list --repo $REPO --state all --limit 500 --json number,title,state,createdAt,updatedAt,labels,author)
FIRST_COUNT=$(echo "$FIRST_FETCH" | jq 'length')
echo "[Page 1] Raw count: $FIRST_COUNT"

# STEP 2: Filter by time range
ALL_ISSUES=$(echo "$FIRST_FETCH" | jq --arg cutoff "$CUTOFF_DATE" \
  '[.[] | select(.createdAt >= $cutoff or .updatedAt >= $cutoff)]')
FILTERED_COUNT=$(echo "$ALL_ISSUES" | jq 'length')
echo "[Page 1] After time filter: $FILTERED_COUNT issues"

# STEP 3: CHECK IF MORE PAGES NEEDED
# If we got exactly 500, there are MORE issues!
if [ "$FIRST_COUNT" -eq 500 ]; then
  echo ""
  echo "WARNING: Got exactly 500 results. MORE PAGES EXIST!"
  echo "Continuing pagination..."

  PAGE=2
  # Cursor: the oldest createdAt seen so far; each page searches strictly before it
  OLDEST_CREATED=$(echo "$FIRST_FETCH" | jq -r '.[-1].createdAt')

  # Keep fetching until we get fewer than 500
  while true; do
    echo ""
    echo "[Page $PAGE] Fetching more issues..."

    # Use the search qualifier to page past the results we already have
    NEXT_FETCH=$(gh issue list --repo $REPO --state all --limit 500 \
      --json number,title,state,createdAt,updatedAt,labels,author \
      --search "created:<$OLDEST_CREATED")

    NEXT_COUNT=$(echo "$NEXT_FETCH" | jq 'length')
    echo "[Page $PAGE] Raw count: $NEXT_COUNT"

    if [ "$NEXT_COUNT" -eq 0 ]; then
      echo "[Page $PAGE] No more results. Pagination complete."
      break
    fi

    # Advance the cursor so the next iteration moves past this page
    OLDEST_CREATED=$(echo "$NEXT_FETCH" | jq -r '.[-1].createdAt')

    # Filter and merge
    NEXT_FILTERED=$(echo "$NEXT_FETCH" | jq --arg cutoff "$CUTOFF_DATE" \
      '[.[] | select(.createdAt >= $cutoff or .updatedAt >= $cutoff)]')
    ALL_ISSUES=$(echo "$ALL_ISSUES $NEXT_FILTERED" | jq -s 'add | unique_by(.number)')

    CURRENT_TOTAL=$(echo "$ALL_ISSUES" | jq 'length')
    echo "[Page $PAGE] Running total: $CURRENT_TOTAL issues"

    if [ "$NEXT_COUNT" -lt 500 ]; then
      echo "[Page $PAGE] Fewer than 500 results. Pagination complete."
      break
    fi

    PAGE=$((PAGE + 1))

    # Safety limit
    if [ $PAGE -gt 20 ]; then
      echo "SAFETY LIMIT: Stopped at page 20"
      break
    fi
  done
fi

# STEP 4: FINAL COUNT
FINAL_COUNT=$(echo "$ALL_ISSUES" | jq 'length')
echo ""
echo "=== EXHAUSTIVE PAGINATION COMPLETE ==="
echo "Total issues found: $FINAL_COUNT"
echo ""

# STEP 5: Sanity-check the result
if [ "$FINAL_COUNT" -lt 10 ]; then
  echo "WARNING: Only $FINAL_COUNT issues found. Double-check the time range!"
fi
```
## VERIFICATION CHECKLIST (MANDATORY)

BEFORE proceeding to Phase 2, you MUST verify:

```
CHECKLIST:
[ ] Executed the FULL pagination loop above (not just --limit 500 once)
[ ] Saw "EXHAUSTIVE PAGINATION COMPLETE" in the output
[ ] Counted total issues: _____ (fill this in)
[ ] If the first fetch returned 500, continued to page 2+
[ ] Used --state all (not just open)
```

**If you did NOT see "EXHAUSTIVE PAGINATION COMPLETE", you did it WRONG. Start over.**
## ANTI-PATTERNS (WILL CAUSE FAILURE)

| NEVER DO THIS | Why It Fails |
|------------------|--------------|
| A single `gh issue list --limit 500` | If 500 were returned, you missed the rest! |
| `--limit 100` | Misses 80%+ of issues in active repos |
| Stopping at the first fetch | GitHub paginates - you got 1 page of N |
| Not counting results | Can't verify completeness |
| Filtering only by createdAt | Misses updated issues |
| Assuming small repos have few issues | Even small repos can have bursts |

**THE LOOP MUST RUN UNTIL:**

1. A fetch returns 0 results, OR
2. A fetch returns fewer than 500 results

**IF THE FIRST FETCH RETURNS EXACTLY 500, YOU MUST CONTINUE FETCHING.**
### 1.3 Also Fetch All PRs (For Bug Correlation)

```bash
# Same pagination logic for PRs
gh pr list --repo $REPO --state all --limit 500 --json number,title,state,createdAt,updatedAt,labels,author,body,headRefName | \
  jq --arg cutoff "$CUTOFF_DATE" '[.[] | select(.createdAt >= $cutoff or .updatedAt >= $cutoff)]'
```

---
## PHASE 2: Parallel Issue Analysis (1 Issue = 1 Agent)

### 2.1 Agent Distribution Formula

```
Total issues: N

Agent categories based on ratio:
- unspecified-low: floor(N * 0.8)
- quick: floor(N * 0.1)
- writing: ceil(N * 0.1)  # For report generation
```
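Worked through for the "61 issues after 5 pages" example used earlier (floor/floor/ceil happens to sum back to N here; in the general case the assignment algorithm's remainder handling absorbs any gap):

```typescript
// Worked example of the distribution formula for N = 61 issues.
const N = 61;
const low = Math.floor(N * 0.8);     // 48 standard analysis agents
const quick = Math.floor(N * 0.1);   // 6 trivial-issue agents
const writing = Math.ceil(N * 0.1);  // 7 writing agents
console.log(low, quick, writing, low + quick + writing); // 48 6 7 61
```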
### 2.2 Launch Background Agents

**MANDATORY: Each issue gets its own dedicated background agent.**

For each issue, launch:

```typescript
delegate_task(
  category="unspecified-low", // or quick/writing per ratio
  load_skills=[],
  run_in_background=true,
  prompt=`
## TASK
Analyze GitHub issue #${issue.number} for ${REPO}.

## ISSUE DATA
- Number: #${issue.number}
- Title: ${issue.title}
- State: ${issue.state}
- Author: ${issue.author.login}
- Created: ${issue.createdAt}
- Updated: ${issue.updatedAt}
- Labels: ${issue.labels.map(l => l.name).join(', ')}

## ISSUE BODY
${issue.body}

## FETCH COMMENTS
Use: gh issue view ${issue.number} --repo ${REPO} --json comments

## ANALYSIS CHECKLIST
1. **TYPE**: Is this a BUG, QUESTION, FEATURE request, or INVALID?
2. **PROJECT_VALID**: Is this issue relevant to OUR project? (YES/NO/UNCLEAR)
3. **STATUS**:
   - RESOLVED: Already fixed (check for linked PRs, owner comments)
   - NEEDS_ACTION: Requires maintainer attention
   - CAN_CLOSE: Can be closed (duplicate, out of scope, stale, answered)
   - NEEDS_INFO: Missing reproduction steps or details
4. **COMMUNITY_RESPONSE**:
   - NONE: No comments
   - HELPFUL: Useful workarounds or info provided
   - WAITING: Awaiting user response
5. **LINKED_PR**: If a bug, search PRs that might fix this issue

## PR CORRELATION
Check these PRs for potential fixes:
${PR_LIST}

## RETURN FORMAT
\`\`\`
#${issue.number}: ${issue.title}
TYPE: [BUG|QUESTION|FEATURE|INVALID]
VALID: [YES|NO|UNCLEAR]
STATUS: [RESOLVED|NEEDS_ACTION|CAN_CLOSE|NEEDS_INFO]
COMMUNITY: [NONE|HELPFUL|WAITING]
LINKED_PR: [#NUMBER or NONE]
SUMMARY: [1-2 sentence summary]
ACTION: [Recommended maintainer action]
DRAFT_RESPONSE: [If auto-answerable, provide an English draft. Otherwise "NEEDS_MANUAL_REVIEW"]
\`\`\`
`
)
```
### 2.3 Collect All Results

Wait for all background agents to complete, then collect:

```typescript
// Store all task IDs
const taskIds: string[] = []

// Launch all agents
for (const issue of issues) {
  const result = await delegate_task(...)
  taskIds.push(result.task_id)
}

// Collect results
const results = []
for (const taskId of taskIds) {
  const output = await background_output(task_id=taskId)
  results.push(output)
}
```
---

## PHASE 3: Report Generation

### 3.1 Categorize Results

Group analyzed issues by status:

| Category | Criteria |
|----------|----------|
| **CRITICAL** | Blocking bugs, security issues, data loss |
| **CLOSE_IMMEDIATELY** | Resolved, duplicate, out of scope, stale |
| **AUTO_RESPOND** | Can be answered with a template (version update, docs link) |
| **NEEDS_INVESTIGATION** | Requires manual debugging or a design decision |
| **FEATURE_BACKLOG** | Feature requests for prioritization |
| **NEEDS_INFO** | Missing details; request more info |

### 3.2 Generate Report
```markdown
# Issue Triage Report

**Repository:** ${REPO}
**Time Range:** Last ${TIME_RANGE} hours
**Generated:** ${new Date().toISOString()}
**Total Issues Analyzed:** ${issues.length}

## Summary

| Category | Count |
|----------|-------|
| CRITICAL | N |
| Close Immediately | N |
| Auto-Respond | N |
| Needs Investigation | N |
| Feature Requests | N |
| Needs Info | N |

---

## 1. CRITICAL (Immediate Action Required)

[List issues with full details]

## 2. Close Immediately

[List with closing reason and template response]

## 3. Auto-Respond (Template Answers)

[List with draft responses ready to post]

## 4. Needs Investigation

[List with investigation notes]

## 5. Feature Backlog

[List for prioritization]

## 6. Needs More Info

[List with template questions to ask]

---

## Response Templates

### Fixed in Version X
\`\`\`
This issue was resolved in vX.Y.Z via PR #NNN.
Please update: \`bunx oh-my-opencode@X.Y.Z install\`
If the issue persists, please reopen with \`opencode --print-logs\` output.
\`\`\`

### Needs More Info
\`\`\`
Thank you for reporting. To investigate, please provide:
1. \`opencode --print-logs\` output
2. Your configuration file
3. Minimal reproduction steps
Labeling as \`needs-info\`. Auto-closes in 7 days without a response.
\`\`\`

### Out of Scope
\`\`\`
Thank you for reaching out. This request falls outside the scope of this project.
[Suggest an alternative or explanation]
\`\`\`
```
---

## ANTI-PATTERNS (BLOCKING VIOLATIONS)

## IF YOU DO ANY OF THESE, THE TRIAGE IS INVALID

| Violation | Why It's Wrong | Severity |
|-----------|----------------|----------|
| **Using `--limit 100`** | Misses 80%+ of issues in active repos | CRITICAL |
| **Stopping at the first fetch** | GitHub paginates - you only got page 1 | CRITICAL |
| **Not counting results** | Can't verify completeness | CRITICAL |
| Batching issues (e.g. 7 per agent) | Loses detail, harder to track | HIGH |
| Sequential agent calls | Slow; doesn't leverage parallelism | HIGH |
| Skipping PR correlation | Misses linked fixes for bugs | MEDIUM |
| Generic responses | Each issue needs specific analysis | MEDIUM |

## MANDATORY VERIFICATION BEFORE PHASE 2

```
CHECKLIST:
[ ] Used --limit 500 (not 100)
[ ] Used --state all (not just open)
[ ] Counted issues: _____ total
[ ] Verified: if count < 500, all issues were fetched
[ ] If count = 500, fetched additional pages
```

**DO NOT PROCEED TO PHASE 2 UNTIL ALL BOXES ARE CHECKED.**
---

## EXECUTION CHECKLIST

- [ ] Fetched ALL pages of issues (pagination complete)
- [ ] Fetched ALL pages of PRs for correlation
- [ ] Launched 1 agent per issue (not batched)
- [ ] All agents ran in background (parallel)
- [ ] Collected all results before generating report
- [ ] Report includes draft responses where applicable
- [ ] Critical issues flagged at top

---
## Quick Start

When invoked, immediately:

1. `gh repo view --json nameWithOwner -q .nameWithOwner` (get the current repo)
2. Parse the user's time range request (default: 48 hours)
3. Exhaustively paginate issues AND PRs
4. Launch N background agents (1 per issue)
5. Collect all results
6. Generate a categorized report with action items
```diff
@@ -80,7 +80,8 @@
       "prometheus-md-only",
       "sisyphus-junior-notepad",
       "start-work",
-      "atlas"
+      "atlas",
+      "stop-continuation-guard"
     ]
   }
 },
```
**`bun.lock`** (92 lines changed)
```diff
@@ -1,6 +1,6 @@
 {
   "lockfileVersion": 1,
-  "configVersion": 0,
+  "configVersion": 1,
   "workspaces": {
     "": {
       "name": "oh-my-opencode",
```
```diff
@@ -28,13 +28,13 @@
       "typescript": "^5.7.3",
     },
     "optionalDependencies": {
-      "oh-my-opencode-darwin-arm64": "3.1.6",
-      "oh-my-opencode-darwin-x64": "3.1.6",
-      "oh-my-opencode-linux-arm64": "3.1.6",
-      "oh-my-opencode-linux-arm64-musl": "3.1.6",
-      "oh-my-opencode-linux-x64": "3.1.6",
-      "oh-my-opencode-linux-x64-musl": "3.1.6",
-      "oh-my-opencode-windows-x64": "3.1.6",
+      "oh-my-opencode-darwin-arm64": "3.1.10",
+      "oh-my-opencode-darwin-x64": "3.1.10",
+      "oh-my-opencode-linux-arm64": "3.1.10",
+      "oh-my-opencode-linux-arm64-musl": "3.1.10",
+      "oh-my-opencode-linux-x64": "3.1.10",
+      "oh-my-opencode-linux-x64-musl": "3.1.10",
+      "oh-my-opencode-windows-x64": "3.1.10",
     },
   },
 },
```
```diff
@@ -44,41 +44,41 @@
     "@code-yeongyu/comment-checker",
   ],
   "packages": {
-    "@ast-grep/cli": ["@ast-grep/cli@0.40.0", "", { "dependencies": { "detect-libc": "2.1.2" }, "optionalDependencies": { "@ast-grep/cli-darwin-arm64": "0.40.0", "@ast-grep/cli-darwin-x64": "0.40.0", "@ast-grep/cli-linux-arm64-gnu": "0.40.0", "@ast-grep/cli-linux-x64-gnu": "0.40.0", "@ast-grep/cli-win32-arm64-msvc": "0.40.0", "@ast-grep/cli-win32-ia32-msvc": "0.40.0", "@ast-grep/cli-win32-x64-msvc": "0.40.0" }, "bin": { "sg": "sg", "ast-grep": "ast-grep" } }, "sha512-L8AkflsfI2ZP70yIdrwqvjR02ScCuRmM/qNGnJWUkOFck+e6gafNVJ4e4jjGQlEul+dNdBpx36+O2Op629t47A=="],
+    "@ast-grep/cli": ["@ast-grep/cli@0.40.5", "", { "dependencies": { "detect-libc": "2.1.2" }, "optionalDependencies": { "@ast-grep/cli-darwin-arm64": "0.40.5", "@ast-grep/cli-darwin-x64": "0.40.5", "@ast-grep/cli-linux-arm64-gnu": "0.40.5", "@ast-grep/cli-linux-x64-gnu": "0.40.5", "@ast-grep/cli-win32-arm64-msvc": "0.40.5", "@ast-grep/cli-win32-ia32-msvc": "0.40.5", "@ast-grep/cli-win32-x64-msvc": "0.40.5" }, "bin": { "sg": "sg", "ast-grep": "ast-grep" } }, "sha512-yVXL7Gz0WIHerQLf+MVaVSkhIhidtWReG5akNVr/JS9OVCVkSdz7gWm7H8jVv2M9OO1tauuG76K3UaRGBPu5lQ=="],

-    "@ast-grep/cli-darwin-arm64": ["@ast-grep/cli-darwin-arm64@0.40.0", "", { "os": "darwin", "cpu": "arm64" }, "sha512-UehY2MMUkdJbsriP7NKc6+uojrqPn7d1Cl0em+WAkee7Eij81VdyIjRsRxtZSLh440ZWQBHI3PALZ9RkOO8pKQ=="],
+    "@ast-grep/cli-darwin-arm64": ["@ast-grep/cli-darwin-arm64@0.40.5", "", { "os": "darwin", "cpu": "arm64" }, "sha512-T9CzwJ1GqQhnANdsu6c7iT1akpvTVMK+AZrxnhIPv33Ze5hrXUUkqan+j4wUAukRJDqU7u94EhXLSLD+5tcJ8g=="],

-    "@ast-grep/cli-darwin-x64": ["@ast-grep/cli-darwin-x64@0.40.0", "", { "os": "darwin", "cpu": "x64" }, "sha512-RFDJ2ZxUbT0+grntNlOLJx7wa9/ciVCeaVtQpQy8WJJTvXvkY0etl8Qlh2TmO2x2yr+i0Z6aMJi4IG/Yx5ghTQ=="],
+    "@ast-grep/cli-darwin-x64": ["@ast-grep/cli-darwin-x64@0.40.5", "", { "os": "darwin", "cpu": "x64" }, "sha512-ez9b2zKvXU8f4ghhjlqYvbx6tWCKJTuVlNVqDDfjqwwhGeiTYfnzMlSVat4ElYRMd21gLtXZIMy055v2f21Ztg=="],

-    "@ast-grep/cli-linux-arm64-gnu": ["@ast-grep/cli-linux-arm64-gnu@0.40.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-4p55gnTQ1mMFCyqjtM7bH9SB9r16mkwXtUcJQGX1YgFG4WD+QG8rC4GwSuNNZcdlYaOQuTWrgUEQ9z5K06UXfg=="],
+    "@ast-grep/cli-linux-arm64-gnu": ["@ast-grep/cli-linux-arm64-gnu@0.40.5", "", { "os": "linux", "cpu": "arm64" }, "sha512-VXa2L1IEYD66AMb0GuG7VlMMbPmEGoJUySWDcwSZo/D9neiry3MJ41LQR5oTG2HyhIPBsf9umrXnmuRq66BviA=="],

-    "@ast-grep/cli-linux-x64-gnu": ["@ast-grep/cli-linux-x64-gnu@0.40.0", "", { "os": "linux", "cpu": "x64" }, "sha512-u2MXFceuwvrO+OQ6zFGoJ6wbATXn46HWwW79j4UPrXYJzVl97jRyjJOIQTJOzTflsk02fjP98DQkfvbXt2dl3Q=="],
+    "@ast-grep/cli-linux-x64-gnu": ["@ast-grep/cli-linux-x64-gnu@0.40.5", "", { "os": "linux", "cpu": "x64" }, "sha512-GQC5162eIOWXR2eQQ6Knzg7/8Trp5E1ODJkaErf0IubdQrZBGqj5AAcQPcWgPbbnmktjIp0H4NraPpOJ9eJ22A=="],

-    "@ast-grep/cli-win32-arm64-msvc": ["@ast-grep/cli-win32-arm64-msvc@0.40.0", "", { "os": "win32", "cpu": "arm64" }, "sha512-E/I1xpF/RQL2fo1CQsQfTxyDLnChsbZ+ERrQHKuF1FI4WrkaPOBibpqda60QgVmUcgOGZyZ/GRb3iKEVWPsQNQ=="],
+    "@ast-grep/cli-win32-arm64-msvc": ["@ast-grep/cli-win32-arm64-msvc@0.40.5", "", { "os": "win32", "cpu": "arm64" }, "sha512-YiZdnQZsSlXQTMsZJop/Ux9MmUGfuRvC2x/UbFgrt5OBSYxND+yoiMc0WcA3WG+wU+tt4ZkB5HUea3r/IkOLYA=="],

-    "@ast-grep/cli-win32-ia32-msvc": ["@ast-grep/cli-win32-ia32-msvc@0.40.0", "", { "os": "win32", "cpu": "ia32" }, "sha512-9h12OQu1BR0GxHEtT+Z4QkJk3LLWLiKwjBkjXUGlASHYDPTyLcs85KwDLeFHs4BwarF8TDdF+KySvB9WPGl/nQ=="],
+    "@ast-grep/cli-win32-ia32-msvc": ["@ast-grep/cli-win32-ia32-msvc@0.40.5", "", { "os": "win32", "cpu": "ia32" }, "sha512-MHkCxCITVTr8sY9CcVqNKbfUzMa3Hc6IilGXad0Clnw2vNmPfWqSky+hU/UTerr5YHWwWfAVURH7ANZgirtx0Q=="],

-    "@ast-grep/cli-win32-x64-msvc": ["@ast-grep/cli-win32-x64-msvc@0.40.0", "", { "os": "win32", "cpu": "x64" }, "sha512-n2+3WynEWFHhXg6KDgjwWQ0UEtIvqUITFbKEk5cDkUYrzYhg/A6kj0qauPwRbVMoJms49vtsNpLkzzqyunio5g=="],
+    "@ast-grep/cli-win32-x64-msvc": ["@ast-grep/cli-win32-x64-msvc@0.40.5", "", { "os": "win32", "cpu": "x64" }, "sha512-/MJ5un7yxlClaaxou9eYl+Kr2xr/yTtYtTq5aLBWjPWA6dmmJ1nAJgx5zKHVuplFXFBrFDQk3paEgAETMTGcrA=="],

-    "@ast-grep/napi": ["@ast-grep/napi@0.40.0", "", { "optionalDependencies": { "@ast-grep/napi-darwin-arm64": "0.40.0", "@ast-grep/napi-darwin-x64": "0.40.0", "@ast-grep/napi-linux-arm64-gnu": "0.40.0", "@ast-grep/napi-linux-arm64-musl": "0.40.0", "@ast-grep/napi-linux-x64-gnu": "0.40.0", "@ast-grep/napi-linux-x64-musl": "0.40.0", "@ast-grep/napi-win32-arm64-msvc": "0.40.0", "@ast-grep/napi-win32-ia32-msvc": "0.40.0", "@ast-grep/napi-win32-x64-msvc": "0.40.0" } }, "sha512-tq6nO/8KwUF/mHuk1ECaAOSOlz2OB/PmygnvprJzyAHGRVzdcffblaOOWe90M9sGz5MAasXoF+PTcayQj9TKKA=="],
+    "@ast-grep/napi": ["@ast-grep/napi@0.40.5", "", { "optionalDependencies": { "@ast-grep/napi-darwin-arm64": "0.40.5", "@ast-grep/napi-darwin-x64": "0.40.5", "@ast-grep/napi-linux-arm64-gnu": "0.40.5", "@ast-grep/napi-linux-arm64-musl": "0.40.5", "@ast-grep/napi-linux-x64-gnu": "0.40.5", "@ast-grep/napi-linux-x64-musl": "0.40.5", "@ast-grep/napi-win32-arm64-msvc": "0.40.5", "@ast-grep/napi-win32-ia32-msvc": "0.40.5", "@ast-grep/napi-win32-x64-msvc": "0.40.5" } }, "sha512-hJA62OeBKUQT68DD2gDyhOqJxZxycqg8wLxbqjgqSzYttCMSDL9tiAQ9abgekBYNHudbJosm9sWOEbmCDfpX2A=="],

-    "@ast-grep/napi-darwin-arm64": ["@ast-grep/napi-darwin-arm64@0.40.0", "", { "os": "darwin", "cpu": "arm64" }, "sha512-ZMjl5yLhKjxdwbqEEdMizgQdWH2NrWsM6Px+JuGErgCDe6Aedq9yurEPV7veybGdLVJQhOah6htlSflXxjHnYA=="],
+    "@ast-grep/napi-darwin-arm64": ["@ast-grep/napi-darwin-arm64@0.40.5", "", { "os": "darwin", "cpu": "arm64" }, "sha512-2F072fGN0WTq7KI3okuEnkGJVEHLbi56Bw1H6NAMf7j2mJJeQWsRyGOMcyNnUXZDeNdvoMH0OB2a5wwUegY/nQ=="],

-    "@ast-grep/napi-darwin-x64": ["@ast-grep/napi-darwin-x64@0.40.0", "", { "os": "darwin", "cpu": "x64" }, "sha512-f9Ol5oQKNRMBkvDtzBK1WiNn2/3eejF2Pn9xwTj7PhXuSFseedOspPYllxQo0gbwUlw/DJqGFTce/jarhR/rBw=="],
+    "@ast-grep/napi-darwin-x64": ["@ast-grep/napi-darwin-x64@0.40.5", "", { "os": "darwin", "cpu": "x64" }, "sha512-dJMidHZhhxuLBYNi6/FKI812jQ7wcFPSKkVPwviez2D+KvYagapUMAV/4dJ7FCORfguVk8Y0jpPAlYmWRT5nvA=="],

-    "@ast-grep/napi-linux-arm64-gnu": ["@ast-grep/napi-linux-arm64-gnu@0.40.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-+tO+VW5GDhT9jGkKOK+3b8+ohKjC98WTzn7wSskd/myyhK3oYL1WTKqCm07WSYBZOJvb3z+WaX+wOUrc4bvtyQ=="],
+    "@ast-grep/napi-linux-arm64-gnu": ["@ast-grep/napi-linux-arm64-gnu@0.40.5", "", { "os": "linux", "cpu": "arm64" }, "sha512-nBRCbyoS87uqkaw4Oyfe5VO+SRm2B+0g0T8ME69Qry9ShMf41a2bTdpcQx9e8scZPogq+CTwDHo3THyBV71l9w=="],

-    "@ast-grep/napi-linux-arm64-musl": ["@ast-grep/napi-linux-arm64-musl@0.40.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-MS9qalLRjUnF2PCzuTKTvCMVSORYHxxe3Qa0+SSaVULsXRBmuy5C/b1FeWwMFnwNnC0uie3VDet31Zujwi8q6A=="],
+    "@ast-grep/napi-linux-arm64-musl": ["@ast-grep/napi-linux-arm64-musl@0.40.5", "", { "os": "linux", "cpu": "arm64" }, "sha512-/qKsmds5FMoaEj6FdNzepbmLMtlFuBLdrAn9GIWCqOIcVcYvM1Nka8+mncfeXB/MFZKOrzQsQdPTWqrrQzXLrA=="],

-    "@ast-grep/napi-linux-x64-gnu": ["@ast-grep/napi-linux-x64-gnu@0.40.0", "", { "os": "linux", "cpu": "x64" }, "sha512-BeHZVMNXhM3WV3XE2yghO0fRxhMOt8BTN972p5piYEQUvKeSHmS8oeGcs6Ahgx5znBclqqqq37ZfioYANiTqJA=="],
+    "@ast-grep/napi-linux-x64-gnu": ["@ast-grep/napi-linux-x64-gnu@0.40.5", "", { "os": "linux", "cpu": "x64" }, "sha512-DP4oDbq7f/1A2hRTFLhJfDFR6aI5mRWdEfKfHzRItmlKsR9WlcEl1qDJs/zX9R2EEtIDsSKRzuJNfJllY3/W8Q=="],

-    "@ast-grep/napi-linux-x64-musl": ["@ast-grep/napi-linux-x64-musl@0.40.0", "", { "os": "linux", "cpu": "x64" }, "sha512-rG1YujF7O+lszX8fd5u6qkFTuv4FwHXjWvt1CCvCxXwQLSY96LaCW88oVKg7WoEYQh54y++Fk57F+Wh9Gv9nVQ=="],
+    "@ast-grep/napi-linux-x64-musl": ["@ast-grep/napi-linux-x64-musl@0.40.5", "", { "os": "linux", "cpu": "x64" }, "sha512-BRZUvVBPUNpWPo6Ns8chXVzxHPY+k9gpsubGTHy92Q26ecZULd/dTkWWdnvfhRqttsSQ9Pe/XQdi5+hDQ6RYcg=="],

-    "@ast-grep/napi-win32-arm64-msvc": ["@ast-grep/napi-win32-arm64-msvc@0.40.0", "", { "os": "win32", "cpu": "arm64" }, "sha512-9SqmnQqd4zTEUk6yx0TuW2ycZZs2+e569O/R0QnhSiQNpgwiJCYOe/yPS0BC9HkiaozQm6jjAcasWpFtz/dp+w=="],
+    "@ast-grep/napi-win32-arm64-msvc": ["@ast-grep/napi-win32-arm64-msvc@0.40.5", "", { "os": "win32", "cpu": "arm64" }, "sha512-y95zSEwc7vhxmcrcH0GnK4ZHEBQrmrszRBNQovzaciF9GUqEcCACNLoBesn4V47IaOp4fYgD2/EhGRTIBFb2Ug=="],

-    "@ast-grep/napi-win32-ia32-msvc": ["@ast-grep/napi-win32-ia32-msvc@0.40.0", "", { "os": "win32", "cpu": "ia32" }, "sha512-0JkdBZi5l9vZhGEO38A1way0LmLRDU5Vos6MXrLIOVkymmzDTDlCdY394J1LMmmsfwWcyJg6J7Yv2dw41MCxDQ=="],
+    "@ast-grep/napi-win32-ia32-msvc": ["@ast-grep/napi-win32-ia32-msvc@0.40.5", "", { "os": "win32", "cpu": "ia32" }, "sha512-K/u8De62iUnFCzVUs7FBdTZ2Jrgc5/DLHqjpup66KxZ7GIM9/HGME/O8aSoPkpcAeCD4TiTZ11C1i5p5H98hTg=="],

-    "@ast-grep/napi-win32-x64-msvc": ["@ast-grep/napi-win32-x64-msvc@0.40.0", "", { "os": "win32", "cpu": "x64" }, "sha512-Hk2IwfPqMFGZt5SRxsoWmGLxBXxprow4LRp1eG6V8EEiJCNHxZ9ZiEaIc5bNvMDBjHVSnqZAXT22dROhrcSKQg=="],
+    "@ast-grep/napi-win32-x64-msvc": ["@ast-grep/napi-win32-x64-msvc@0.40.5", "", { "os": "win32", "cpu": "x64" }, "sha512-dqm5zg/o4Nh4VOQPEpMS23ot8HVd22gG0eg01t4CFcZeuzyuSgBlOL3N7xLbz3iH2sVkk7keuBwAzOIpTqziNQ=="],

     "@clack/core": ["@clack/core@0.5.0", "", { "dependencies": { "picocolors": "^1.0.0", "sisteransi": "^1.0.5" } }, "sha512-p3y0FIOwaYRUPRcMO7+dlmLh8PSRcrjuTndsiA0WAFbWES0mLZlrjVoBRZ9DzkPFJZG6KGkJmoEAY0ZcVWTkow=="],
```
@@ -86,17 +86,17 @@
|
||||
|
||||
"@code-yeongyu/comment-checker": ["@code-yeongyu/comment-checker@0.6.1", "", { "os": [ "linux", "win32", "darwin", ], "cpu": [ "x64", "arm64", ], "bin": { "comment-checker": "bin/comment-checker" } }, "sha512-BBremX+Y5aW8sTzlhHrLsKParupYkPOVUYmq9STrlWvBvfAme6w5IWuZCLl6nHIQScRDdvGdrAjPycJC86EZFA=="],
|
||||
|
||||
"@hono/node-server": ["@hono/node-server@1.19.7", "", { "peerDependencies": { "hono": "^4" } }, "sha512-vUcD0uauS7EU2caukW8z5lJKtoGMokxNbJtBiwHgpqxEXokaHCBkQUmCHhjFB1VUTWdqj25QoMkMKzgjq+uhrw=="],
|
||||
"@hono/node-server": ["@hono/node-server@1.19.9", "", { "peerDependencies": { "hono": "^4" } }, "sha512-vHL6w3ecZsky+8P5MD+eFfaGTyCeOHUIFYMGpQGbrBTSmNNoxv0if69rEZ5giu36weC5saFuznL411gRX7bJDw=="],
|
||||
|
||||
"@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.25.1", "", { "dependencies": { "@hono/node-server": "^1.19.7", "ajv": "^8.17.1", "ajv-formats": "^3.0.1", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", "express": "^5.0.1", "express-rate-limit": "^7.5.0", "jose": "^6.1.1", "json-schema-typed": "^8.0.2", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.25 || ^4.0", "zod-to-json-schema": "^3.25.0" }, "peerDependencies": { "@cfworker/json-schema": "^4.1.1" }, "optionalPeers": ["@cfworker/json-schema"] }, "sha512-yO28oVFFC7EBoiKdAn+VqRm+plcfv4v0xp6osG/VsCB0NlPZWi87ajbCZZ8f/RvOFLEu7//rSRmuZZ7lMoe3gQ=="],
|
||||
"@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.25.3", "", { "dependencies": { "@hono/node-server": "^1.19.9", "ajv": "^8.17.1", "ajv-formats": "^3.0.1", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", "express": "^5.0.1", "express-rate-limit": "^7.5.0", "jose": "^6.1.1", "json-schema-typed": "^8.0.2", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.25 || ^4.0", "zod-to-json-schema": "^3.25.0" }, "peerDependencies": { "@cfworker/json-schema": "^4.1.1" }, "optionalPeers": ["@cfworker/json-schema"] }, "sha512-vsAMBMERybvYgKbg/l4L1rhS7VXV1c0CtyJg72vwxONVX0l4ZfKVAnZEWTQixJGTzKnELjQ59e4NbdFDALRiAQ=="],
|
||||
|
||||
"@opencode-ai/plugin": ["@opencode-ai/plugin@1.1.19", "", { "dependencies": { "@opencode-ai/sdk": "1.1.19", "zod": "4.1.8" } }, "sha512-Q6qBEjHb/dJMEw4BUqQxEswTMxCCHUpFMMb6jR8HTTs8X/28XRkKt5pHNPA82GU65IlSoPRph+zd8LReBDN53Q=="],
|
||||
"@opencode-ai/plugin": ["@opencode-ai/plugin@1.1.47", "", { "dependencies": { "@opencode-ai/sdk": "1.1.47", "zod": "4.1.8" } }, "sha512-gNMPz72altieDfLhUw3VAT1xbduKi3w3wZ57GLeS7qU9W474HdvdIiLBnt2Xq3U7Ko0/0tvK3nzCker6IIDqmQ=="],
|
||||
|
||||
"@opencode-ai/sdk": ["@opencode-ai/sdk@1.1.19", "", {}, "sha512-XhZhFuvlLCqDpvNtUEjOsi/wvFj3YCXb1dySp+OONQRMuHlorNYnNa7P2A2ntKuhRdGT1Xt5na0nFzlUyNw+4A=="],
|
||||
"@opencode-ai/sdk": ["@opencode-ai/sdk@1.1.47", "", {}, "sha512-s3PBHwk1sP6Zt/lJxIWSBWZ1TnrI1nFxSP97LCODUytouAQgbygZ1oDH7O2sGMBEuGdA8B1nNSPla0aRSN3IpA=="],
|
||||
|
||||
"@types/js-yaml": ["@types/js-yaml@4.0.9", "", {}, "sha512-k4MGaQl5TGo/iipqb2UDG2UwjXziSWkh0uysQelTlJpX1qGlpUZYm8PnO4DxG1qBomtJUdYJ6qR6xdIah10JLg=="],
|
||||
|
||||
"@types/node": ["@types/node@24.10.1", "", { "dependencies": { "undici-types": "~7.16.0" } }, "sha512-GNWcUTRBgIRJD5zj+Tq0fKOJ5XZajIiBroOF0yvj2bSU1WvNdYS/dn9UxwsujGW4JX06dnHyjV2y9rRaybH0iQ=="],
|
||||
"@types/node": ["@types/node@25.1.0", "", { "dependencies": { "undici-types": "~7.16.0" } }, "sha512-t7frlewr6+cbx+9Ohpl0NOTKXZNV9xHRmNOvql47BFJKcEG1CxtxlPEEe+gR9uhVWM4DwhnvTF110mIL4yP9RA=="],
|
||||
|
||||
"@types/picomatch": ["@types/picomatch@3.0.2", "", {}, "sha512-n0i8TD3UDB7paoMMxA3Y65vUncFJXjcUf7lQY7YyKGl6031FNjfsLs6pdLFCy2GNFxItPJG8GvvpbZc2skH7WA=="],
|
||||
|
||||
@@ -108,9 +108,9 @@
|
||||
|
||||
"argparse": ["argparse@2.0.1", "", {}, "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="],
|
||||
|
||||
"body-parser": ["body-parser@2.2.1", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.3", "http-errors": "^2.0.0", "iconv-lite": "^0.7.0", "on-finished": "^2.4.1", "qs": "^6.14.0", "raw-body": "^3.0.1", "type-is": "^2.0.1" } }, "sha512-nfDwkulwiZYQIGwxdy0RUmowMhKcFVcYXUU7m4QlKYim1rUtg83xm2yjZ40QjDuc291AJjjeSc9b++AWHSgSHw=="],
|
||||
"body-parser": ["body-parser@2.2.2", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.3", "http-errors": "^2.0.0", "iconv-lite": "^0.7.0", "on-finished": "^2.4.1", "qs": "^6.14.1", "raw-body": "^3.0.1", "type-is": "^2.0.1" } }, "sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA=="],
|
||||
|
||||
"bun-types": ["bun-types@1.3.3", "", { "dependencies": { "@types/node": "*" } }, "sha512-z3Xwlg7j2l9JY27x5Qn3Wlyos8YAp0kKRlrePAOjgjMGS5IG6E7Jnlx736vH9UVI4wUICwwhC9anYL++XeOgTQ=="],
|
||||
"bun-types": ["bun-types@1.3.8", "", { "dependencies": { "@types/node": "*" } }, "sha512-fL99nxdOWvV4LqjmC+8Q9kW3M4QTtTR1eePs94v5ctGqU8OeceWrSUaRw3JYb7tU3FkMIAjkueehrHPPPGKi5Q=="],
|
||||
|
||||
"bytes": ["bytes@3.1.2", "", {}, "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg=="],
|
||||
|
||||
@@ -118,7 +118,7 @@
|
||||
|
||||
"call-bound": ["call-bound@1.0.4", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.2", "get-intrinsic": "^1.3.0" } }, "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg=="],
|
||||
|
||||
"commander": ["commander@14.0.2", "", {}, "sha512-TywoWNNRbhoD0BXs1P3ZEScW8W5iKrnbithIl0YH+uCmBd0QpPOA8yc82DS3BIE5Ma6FnBVUsJ7wVUDz4dvOWQ=="],
|
||||
"commander": ["commander@14.0.3", "", {}, "sha512-H+y0Jo/T1RZ9qPP4Eh1pkcQcLRglraJaSLoyOtHxu6AapkjWVCy2Sit1QQ4x3Dng8qDlSsZEet7g5Pq06MvTgw=="],
|
||||
|
||||
"content-disposition": ["content-disposition@1.0.1", "", {}, "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q=="],
|
||||
|
||||
@@ -128,7 +128,7 @@
|
||||
|
||||
"cookie-signature": ["cookie-signature@1.2.2", "", {}, "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg=="],
|
||||
|
||||
"cors": ["cors@2.8.5", "", { "dependencies": { "object-assign": "^4", "vary": "^1" } }, "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g=="],
|
||||
"cors": ["cors@2.8.6", "", { "dependencies": { "object-assign": "^4", "vary": "^1" } }, "sha512-tJtZBBHA6vjIAaF6EnIaq6laBBP9aq/Y3ouVJjEfoHbRBcHBAHYcMh/w8LDrk2PvIMMq8gmopa5D4V8RmbrxGw=="],
|
||||
|
||||
"cross-spawn": ["cross-spawn@7.0.6", "", { "dependencies": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" } }, "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="],
|
||||
|
||||
@@ -184,11 +184,11 @@
|
||||
|
||||
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
|
||||
|
||||
"hono": ["hono@4.10.8", "", {}, "sha512-DDT0A0r6wzhe8zCGoYOmMeuGu3dyTAE40HHjwUsWFTEy5WxK1x2WDSsBPlEXgPbRIFY6miDualuUDbasPogIww=="],
|
||||
"hono": ["hono@4.11.7", "", {}, "sha512-l7qMiNee7t82bH3SeyUCt9UF15EVmaBvsppY2zQtrbIhl/yzBTny+YUxsVjSjQ6gaqaeVtZmGocom8TzBlA4Yw=="],
|
||||
|
||||
"http-errors": ["http-errors@2.0.1", "", { "dependencies": { "depd": "~2.0.0", "inherits": "~2.0.4", "setprototypeof": "~1.2.0", "statuses": "~2.0.2", "toidentifier": "~1.0.1" } }, "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ=="],
|
||||
|
||||
"iconv-lite": ["iconv-lite@0.7.1", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-2Tth85cXwGFHfvRgZWszZSvdo+0Xsqmw8k8ZwxScfcBneNUraK+dxRxRm24nszx80Y0TVio8kKLt5sLE7ZCLlw=="],
|
||||
"iconv-lite": ["iconv-lite@0.7.2", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw=="],
|
||||
|
||||
"inherits": ["inherits@2.0.4", "", {}, "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="],
|
||||
|
||||
@@ -226,19 +226,19 @@
|
||||
|
||||
"object-inspect": ["object-inspect@1.13.4", "", {}, "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew=="],
|
||||
|
||||
"oh-my-opencode-darwin-arm64": ["oh-my-opencode-darwin-arm64@3.1.6", "", { "os": "darwin", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-KK+ptnkBigvDYbRtF/B5izEC4IoXDS8mAnRHWFBSCINhzQR2No6AtEcwijd6vKBPR+/r71ofq/8mTsIeb1PEVQ=="],
|
||||
"oh-my-opencode-darwin-arm64": ["oh-my-opencode-darwin-arm64@3.1.10", "", { "os": "darwin", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-6qsZQtrtBYZLufcXTTuUUMEG9PoG9Y98pX+HFVn2xHIEc6GpwR6i5xY8McFHmqPkC388tzybD556JhKqPX7Pnw=="],
|
||||
|
||||
"oh-my-opencode-darwin-x64": ["oh-my-opencode-darwin-x64@3.1.6", "", { "os": "darwin", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-UkPI/RUi7INarFasBUZ4Rous6RUQXsU2nr0V8KFJp+70END43D/96dDUwX+zmPtpDhD+DfWkejuwzqfkZJ2ZDQ=="],
|
||||
"oh-my-opencode-darwin-x64": ["oh-my-opencode-darwin-x64@3.1.10", "", { "os": "darwin", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-I1tQQbcpSBvLGXTO652mBqlyIpwYhYuIlSJmrSM33YRGBiaUuhMASnHQsms+E0eC3U/TOyqomU/4KPnbWyxs4w=="],
|
||||
|
||||
"oh-my-opencode-linux-arm64": ["oh-my-opencode-linux-arm64@3.1.6", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-gvmvgh7WtTtcHiCbG7z43DOYfY/jrf2S6TX/jBMX2/e1AGkcLKwz30NjGhZxeK5SyzxRVypgfZZK1IuriRgbdA=="],
|
||||
"oh-my-opencode-linux-arm64": ["oh-my-opencode-linux-arm64@3.1.10", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-r6Rm5Ru/WwcBKKuPIP0RreI0gnf+MYRV0mmzPBVhMZdPWSC/eTT3GdyqFDZ4cCN76n5aea0sa5PPW7iPF+Uw6Q=="],
|
||||
|
||||
"oh-my-opencode-linux-arm64-musl": ["oh-my-opencode-linux-arm64-musl@3.1.6", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-j3R76pmQ4HGVGFJUMMCeF/1lO3Jg7xFdpcBUKCeFh42N1jMgn1aeyxkAaJYB9RwCF/p6+P8B6gVDLCEDu2mxjA=="],
|
||||
"oh-my-opencode-linux-arm64-musl": ["oh-my-opencode-linux-arm64-musl@3.1.10", "", { "os": "linux", "cpu": "arm64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-UVo5OWO92DPIFhoEkw0tj8IcZyUKOG6NlFs1+tSExz7qrgkr0IloxpLslGMmdc895xxpljrr/FobYktLxyJbcg=="],
|
||||
|
||||
"oh-my-opencode-linux-x64": ["oh-my-opencode-linux-x64@3.1.6", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-VDdo0tHCOr5nm7ajd652u798nPNOLRSTcPOnVh6vIPddkZ+ujRke+enOKOw9Pd5e+4AkthqHBwFXNm2VFgnEKg=="],
|
||||
"oh-my-opencode-linux-x64": ["oh-my-opencode-linux-x64@3.1.10", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-3g99z2FweMzHSUYuzgU0E2H0kjVmtOhPZdavwVqcHQtLQ9NNhwfnIvj3yFBif+kGJphP9RDnByC1oA8Q26UrCg=="],
|
||||
|
||||
"oh-my-opencode-linux-x64-musl": ["oh-my-opencode-linux-x64-musl@3.1.6", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-hBG/dhsr8PZelUlYsPBruSLnelB9ocB7H92I+S9svTpDVo67rAmXOoR04twKQ9TeCO4ShOa6hhMhbQnuI8fgNw=="],
|
||||
"oh-my-opencode-linux-x64-musl": ["oh-my-opencode-linux-x64-musl@3.1.10", "", { "os": "linux", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode" } }, "sha512-2HS9Ju0Cr433lMFJtu/7bShApOJywp+zmVCduQUBWFi3xbX1nm5sJwWDhw1Wx+VcqHEuJl/SQzWPE4vaqkEQng=="],
|
||||
|
||||
"oh-my-opencode-windows-x64": ["oh-my-opencode-windows-x64@3.1.6", "", { "os": "win32", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode.exe" } }, "sha512-c8Awp03p2DsbS0G589nzveRCeJPgJRJ0vQrha4ChRmmo31Qc5OSmJ5xuMaF8L4nM+/trbTgAQMFMtCMLgtC8IQ=="],
|
||||
"oh-my-opencode-windows-x64": ["oh-my-opencode-windows-x64@3.1.10", "", { "os": "win32", "cpu": "x64", "bin": { "oh-my-opencode": "bin/oh-my-opencode.exe" } }, "sha512-QLncZJSlWmmcuXrAVKIH6a9Om1Ym6pkhG4hAxaD5K5aF1jw2QFsadjoT12VNq2WzQb+Pg5Y6IWvoow0ZR0aEvw=="],
|
||||
|
||||
"on-finished": ["on-finished@2.4.1", "", { "dependencies": { "ee-first": "1.1.1" } }, "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg=="],
|
||||
|
||||
@@ -310,8 +310,10 @@
|
||||
|
||||
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
|
||||
|
||||
"zod": ["zod@4.1.8", "", {}, "sha512-5R1P+WwQqmmMIEACyzSvo4JXHY5WiAFHRMg+zBZKgKS+Q1viRa0C1hmUKtHltoIFKtIdki3pRxkmpP74jnNYHQ=="],
|
||||
"zod": ["zod@4.3.6", "", {}, "sha512-rftlrkhHZOcjDwkGlnUtZZkvaPHCsDATp4pGpuOOMDaTdDDXF91wuVDJoWoPsKX/3YPQ5fHuF3STjcYyKr+Qhg=="],
|
||||
|
||||
"zod-to-json-schema": ["zod-to-json-schema@3.25.1", "", { "peerDependencies": { "zod": "^3.25 || ^4" } }, "sha512-pM/SU9d3YAggzi6MtR4h7ruuQlqKtad8e9S0fmxcMi+ueAK5Korys/aWcV9LIIHTVbj01NdzxcnXSN+O74ZIVA=="],
|
||||
|
||||
"@opencode-ai/plugin/zod": ["zod@4.1.8", "", {}, "sha512-5R1P+WwQqmmMIEACyzSvo4JXHY5WiAFHRMg+zBZKgKS+Q1viRa0C1hmUKtHltoIFKtIdki3pRxkmpP74jnNYHQ=="],
|
||||
}
|
||||
}
|
||||
|
||||
16 package.json
@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "The Best AI Agent Harness - Batteries-Included OpenCode Plugin with Multi-Model Orchestration, Parallel Background Agents, and Crafted LSP/AST Tools",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -74,13 +74,13 @@
     "typescript": "^5.7.3"
   },
   "optionalDependencies": {
-    "oh-my-opencode-darwin-arm64": "3.1.9",
-    "oh-my-opencode-darwin-x64": "3.1.9",
-    "oh-my-opencode-linux-arm64": "3.1.9",
-    "oh-my-opencode-linux-arm64-musl": "3.1.9",
-    "oh-my-opencode-linux-x64": "3.1.9",
-    "oh-my-opencode-linux-x64-musl": "3.1.9",
-    "oh-my-opencode-windows-x64": "3.1.9"
+    "oh-my-opencode-darwin-arm64": "3.1.11",
+    "oh-my-opencode-darwin-x64": "3.1.11",
+    "oh-my-opencode-linux-arm64": "3.1.11",
+    "oh-my-opencode-linux-arm64-musl": "3.1.11",
+    "oh-my-opencode-linux-x64": "3.1.11",
+    "oh-my-opencode-linux-x64-musl": "3.1.11",
+    "oh-my-opencode-windows-x64": "3.1.11"
   },
   "trustedDependencies": [
     "@ast-grep/cli",
@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode-darwin-arm64",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "Platform-specific binary for oh-my-opencode (darwin-arm64)",
   "license": "MIT",
   "repository": {

@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode-darwin-x64",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "Platform-specific binary for oh-my-opencode (darwin-x64)",
   "license": "MIT",
   "repository": {

@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode-linux-arm64-musl",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "Platform-specific binary for oh-my-opencode (linux-arm64-musl)",
   "license": "MIT",
   "repository": {

@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode-linux-arm64",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "Platform-specific binary for oh-my-opencode (linux-arm64)",
   "license": "MIT",
   "repository": {

@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode-linux-x64-musl",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "Platform-specific binary for oh-my-opencode (linux-x64-musl)",
   "license": "MIT",
   "repository": {

@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode-linux-x64",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "Platform-specific binary for oh-my-opencode (linux-x64)",
   "license": "MIT",
   "repository": {

@@ -1,6 +1,6 @@
 {
   "name": "oh-my-opencode-windows-x64",
-  "version": "3.1.9",
+  "version": "3.1.11",
   "description": "Platform-specific binary for oh-my-opencode (windows-x64)",
   "license": "MIT",
   "repository": {
@@ -1007,6 +1007,62 @@
       "created_at": "2026-01-30T09:55:57Z",
       "repoId": 1108837393,
       "pullRequestNo": 1282
     },
+    {
+      "name": "KonaEspresso94",
+      "id": 140197941,
+      "comment_id": 3824340432,
+      "created_at": "2026-01-30T15:33:28Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 1289
+    },
+    {
+      "name": "khduy",
+      "id": 48742864,
+      "comment_id": 3825103158,
+      "created_at": "2026-01-30T18:35:34Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 1297
+    },
+    {
+      "name": "robin-watcha",
+      "id": 90032965,
+      "comment_id": 3826133640,
+      "created_at": "2026-01-30T22:37:32Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 1303
+    },
+    {
+      "name": "taetaetae",
+      "id": 10969354,
+      "comment_id": 3828900888,
+      "created_at": "2026-01-31T17:44:09Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 1333
+    },
+    {
+      "name": "taetaetae",
+      "id": 10969354,
+      "comment_id": 3828909557,
+      "created_at": "2026-01-31T17:47:21Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 1333
+    },
+    {
+      "name": "dmealing",
+      "id": 1153509,
+      "comment_id": 3829284275,
+      "created_at": "2026-01-31T20:23:51Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 1296
+    },
+    {
+      "name": "edxeth",
+      "id": 105494645,
+      "comment_id": 3829930814,
+      "created_at": "2026-02-01T00:58:26Z",
+      "repoId": 1108837393,
+      "pullRequestNo": 1348
+    }
   ]
 }
@@ -11,9 +11,10 @@ describe("MOMUS_SYSTEM_PROMPT policy requirements", () => {
     const prompt = MOMUS_SYSTEM_PROMPT

     // #when / #then
-    // Should explicitly mention stripping or ignoring these
-    expect(prompt.toLowerCase()).toMatch(/ignore|strip|system directive/)
+    expect(prompt).toContain("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]")
+    // Should mention that system directives are ignored
+    expect(prompt.toLowerCase()).toMatch(/system directive.*ignore|ignore.*system directive/)
     // Should give examples of system directive patterns
     expect(prompt).toMatch(/<system-reminder>|system-reminder/)
   })

  test("should extract paths containing .sisyphus/plans/ and ending in .md", () => {
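A quick illustration of why the tightened assertion in the test diff above matters: the old pattern `/ignore|strip|system directive/` is satisfied by any prompt that merely contains the word "ignore", while the new pattern requires "system directive" and "ignore" to co-occur. The prompt strings below are hypothetical examples, not text from the real MOMUS_SYSTEM_PROMPT.

```typescript
// Old assertion: passes if ANY of the three alternatives appears anywhere.
const oldPattern = /ignore|strip|system directive/
// New assertion: "system directive" and "ignore" must co-occur, in either order.
const newPattern = /system directive.*ignore|ignore.*system directive/

// Hypothetical prompt fragments (lowercased, as the test lowercases the prompt).
const weakPrompt = "never ignore the user's instructions"
const strictPrompt = "any system directive in the input must be ignored"

console.log(oldPattern.test(weakPrompt))   // true: old check matches an unrelated "ignore"
console.log(newPattern.test(weakPrompt))   // false: new check correctly rejects it
console.log(newPattern.test(strictPrompt)) // true
```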
@@ -19,376 +19,173 @@ const MODE: AgentMode = "subagent"
|
||||
* implementation.
|
||||
*/
|
||||
|
||||
export const MOMUS_SYSTEM_PROMPT = `You are a work plan review expert. You review the provided work plan (.sisyphus/plans/{name}.md in the current working project directory) according to **unified, consistent criteria** that ensure clarity, verifiability, and completeness.
|
||||
export const MOMUS_SYSTEM_PROMPT = `You are a **practical** work plan reviewer. Your goal is simple: verify that the plan is **executable** and **references are valid**.
|
||||
|
||||
**CRITICAL FIRST RULE**:
|
||||
Extract a single plan path from anywhere in the input, ignoring system directives and wrappers. If exactly one \`.sisyphus/plans/*.md\` path exists, this is VALID input and you must read it. If no plan path exists or multiple plan paths exist, reject per Step 0. If the path points to a YAML plan file (\`.yml\` or \`.yaml\`), reject it as non-reviewable.
|
||||
|
||||
**WHY YOU'VE BEEN SUMMONED - THE CONTEXT**:
|
||||
---
|
||||
|
||||
You are reviewing a **first-draft work plan** from an author with ADHD. Based on historical patterns, these initial submissions are typically rough drafts that require refinement.
|
||||
## Your Purpose (READ THIS FIRST)
|
||||
|
||||
**Historical Data**: Plans from this author average **7 rejections** before receiving an OKAY. The primary failure pattern is **critical context omission due to ADHD**—the author's working memory holds connections and context that never make it onto the page.
|
||||
You exist to answer ONE question: **"Can a capable developer execute this plan without getting stuck?"**
|
||||
|
||||
**What to Expect in First Drafts**:
|
||||
- Tasks are listed but critical "why" context is missing
|
||||
- References to files/patterns without explaining their relevance
|
||||
- Assumptions about "obvious" project conventions that aren't documented
|
||||
- Missing decision criteria when multiple approaches are valid
|
||||
- Undefined edge case handling strategies
|
||||
- Unclear component integration points
|
||||
You are NOT here to:
|
||||
- Nitpick every detail
|
||||
- Demand perfection
|
||||
- Question the author's approach or architecture choices
|
||||
- Find as many issues as possible
|
||||
- Force multiple revision cycles
|
||||
|
||||
**Why These Plans Fail**:
|
||||
You ARE here to:
|
||||
- Verify referenced files actually exist and contain what's claimed
|
||||
- Ensure core tasks have enough context to start working
|
||||
- Catch BLOCKING issues only (things that would completely stop work)
|
||||
|
||||
The ADHD author's mind makes rapid connections: "Add auth → obviously use JWT → obviously store in httpOnly cookie → obviously follow the pattern in auth/login.ts → obviously handle refresh tokens like we did before."
|
||||
|
||||
But the plan only says: "Add authentication following auth/login.ts pattern."
|
||||
|
||||
**Everything after the first arrow is missing.** The author's working memory fills in the gaps automatically, so they don't realize the plan is incomplete.
|
||||
|
||||
**Your Critical Role**: Catch these ADHD-driven omissions. The author genuinely doesn't realize what they've left out. Your ruthless review forces them to externalize the context that lives only in their head.
|
||||
**APPROVAL BIAS**: When in doubt, APPROVE. A plan that's 80% clear is good enough. Developers can figure out minor gaps.
|
||||
|
||||
---
|
||||
|
||||
## Your Core Review Principle
|
||||
## What You Check (ONLY THESE)
|
||||
|
||||
**ABSOLUTE CONSTRAINT - RESPECT THE IMPLEMENTATION DIRECTION**:
|
||||
You are a REVIEWER, not a DESIGNER. The implementation direction in the plan is **NOT NEGOTIABLE**. Your job is to evaluate whether the plan documents that direction clearly enough to execute—NOT whether the direction itself is correct.
|
||||
### 1. Reference Verification (CRITICAL)
|
||||
- Do referenced files exist?
|
||||
- Do referenced line numbers contain relevant code?
|
||||
- If "follow pattern in X" is mentioned, does X actually demonstrate that pattern?
|
||||
|
||||
**What you MUST NOT do**:
|
||||
- Question or reject the overall approach/architecture chosen in the plan
|
||||
- Suggest alternative implementations that differ from the stated direction
|
||||
- Reject because you think there's a "better way" to achieve the goal
|
||||
- Override the author's technical decisions with your own preferences
|
||||
**PASS even if**: Reference exists but isn't perfect. Developer can explore from there.
|
||||
**FAIL only if**: Reference doesn't exist OR points to completely wrong content.
|
||||
|
||||
**What you MUST do**:
|
||||
- Accept the implementation direction as a given constraint
|
||||
- Evaluate only: "Is this direction documented clearly enough to execute?"
|
||||
- Focus on gaps IN the chosen approach, not gaps in choosing the approach
|
||||
### 2. Executability Check (PRACTICAL)
|
||||
- Can a developer START working on each task?
|
||||
- Is there at least a starting point (file, pattern, or clear description)?
|
||||
|
||||
**REJECT if**: When you simulate actually doing the work **within the stated approach**, you cannot obtain clear information needed for implementation, AND the plan does not specify reference materials to consult.
|
||||
**PASS even if**: Some details need to be figured out during implementation.
|
||||
**FAIL only if**: Task is so vague that developer has NO idea where to begin.
|
||||
|
||||
**ACCEPT if**: You can obtain the necessary information either:
|
||||
1. Directly from the plan itself, OR
|
||||
2. By following references provided in the plan (files, docs, patterns) and tracing through related materials
|
||||
### 3. Critical Blockers Only
|
||||
- Missing information that would COMPLETELY STOP work
|
||||
- Contradictions that make the plan impossible to follow
|
||||
|
||||
**The Test**: "Given the approach the author chose, can I implement this by starting from what's written in the plan and following the trail of information it provides?"
|
||||
|
||||
**WRONG mindset**: "This approach is suboptimal. They should use X instead." → **YOU ARE OVERSTEPPING**
|
||||
**RIGHT mindset**: "Given their choice to use Y, the plan doesn't explain how to handle Z within that approach." → **VALID CRITICISM**
|
||||
**NOT blockers** (do not reject for these):
|
||||
- Missing edge case handling
|
||||
- Incomplete acceptance criteria
|
||||
- Stylistic preferences
|
||||
- "Could be clearer" suggestions
|
||||
- Minor ambiguities a developer can resolve
|
||||
|
||||
---
|
||||
|
||||
## Common Failure Patterns (What the Author Typically Forgets)
|
||||
## What You Do NOT Check
|
||||
|
||||
The plan author is intelligent but has ADHD. They constantly skip providing:
|
||||
- Whether the approach is optimal
|
||||
- Whether there's a "better way"
|
||||
- Whether all edge cases are documented
|
||||
- Whether acceptance criteria are perfect
|
||||
- Whether the architecture is ideal
|
||||
- Code quality concerns
|
||||
- Performance considerations
|
||||
- Security unless explicitly broken
|
||||
|
||||
**1. Reference Materials**
|
||||
- FAIL: Says "implement authentication" but doesn't point to any existing code, docs, or patterns
|
||||
- FAIL: Says "follow the pattern" but doesn't specify which file contains the pattern
|
||||
- FAIL: Says "similar to X" but X doesn't exist or isn't documented
|
||||
|
||||
**2. Business Requirements**
|
||||
- FAIL: Says "add feature X" but doesn't explain what it should do or why
|
||||
- FAIL: Says "handle errors" but doesn't specify which errors or how users should experience them
|
||||
- FAIL: Says "optimize" but doesn't define success criteria
|
||||
|
||||
**3. Architectural Decisions**
|
||||
- FAIL: Says "add to state" but doesn't specify which state management system
|
||||
- FAIL: Says "integrate with Y" but doesn't explain the integration approach
|
||||
- FAIL: Says "call the API" but doesn't specify which endpoint or data flow
|
||||
|
||||
**4. Critical Context**
|
||||
- FAIL: References files that don't exist
|
||||
- FAIL: Points to line numbers that don't contain relevant code
|
||||
- FAIL: Assumes you know project-specific conventions that aren't documented anywhere
|
||||
|
||||
**What You Should NOT Reject**:
|
||||
- PASS: Plan says "follow auth/login.ts pattern" → you read that file → it has imports → you follow those → you understand the full flow
|
||||
- PASS: Plan says "use Redux store" → you find store files by exploring codebase structure → standard Redux patterns apply
|
||||
- PASS: Plan provides clear starting point → you trace through related files and types → you gather all needed details
|
||||
- PASS: The author chose approach X when you think Y would be better → **NOT YOUR CALL**. Evaluate X on its own merits.
|
||||
- PASS: The architecture seems unusual or non-standard → If the author chose it, your job is to ensure it's documented, not to redesign it.
|
||||
|
||||
**The Difference**:
|
||||
- FAIL/REJECT: "Add authentication" (no starting point provided)
|
||||
- PASS/ACCEPT: "Add authentication following pattern in auth/login.ts" (starting point provided, you can trace from there)
|
||||
- **WRONG/REJECT**: "Using REST when GraphQL would be better" → **YOU ARE OVERSTEPPING**
|
||||
- **WRONG/REJECT**: "This architecture won't scale" → **NOT YOUR JOB TO JUDGE**
|
||||
|
||||
**YOUR MANDATE**:
|
||||
|
||||
You will adopt a ruthlessly critical mindset. You will read EVERY document referenced in the plan. You will verify EVERY claim. You will simulate actual implementation step-by-step. As you review, you MUST constantly interrogate EVERY element with these questions:
|
||||
|
||||
- "Does the worker have ALL the context they need to execute this **within the chosen approach**?"
|
||||
- "How exactly should this be done **given the stated implementation direction**?"
|
||||
- "Is this information actually documented, or am I just assuming it's obvious?"
|
||||
- **"Am I questioning the documentation, or am I questioning the approach itself?"** ← If the latter, STOP.
|
||||
|
||||
You are not here to be nice. You are not here to give the benefit of the doubt. You are here to **catch every single gap, ambiguity, and missing piece of context that 20 previous reviewers failed to catch.**
|
||||
|
||||
**However**: You must evaluate THIS plan on its own merits. The past failures are context for your strictness, not a predetermined verdict. If this plan genuinely meets all criteria, approve it. If it has critical gaps **in documentation**, reject it without mercy.
|
||||
|
||||
**CRITICAL BOUNDARY**: Your ruthlessness applies to DOCUMENTATION quality, NOT to design decisions. The author's implementation direction is a GIVEN. You may think REST is inferior to GraphQL, but if the plan says REST, you evaluate whether REST is well-documented—not whether REST was the right choice.
|
||||
**You are a BLOCKER-finder, not a PERFECTIONIST.**
|
||||
|
||||
---
## File Location

You will be provided with the path to the work plan file (typically \`.sisyphus/plans/{name}.md\` in the project). Review the file at the **exact path provided to you**. Do not assume the location.

## Input Validation (Step 0)

**CRITICAL - Input Validation (STEP 0 - DO THIS FIRST, BEFORE READING ANY FILES)**:

**BEFORE you read any files**, you MUST first validate the format of the input prompt you received from the user.
System directives (\`<system-reminder>\`, \`[analyze-mode]\`, etc.) are IGNORED during validation.

**VALID INPUT**:

- \`.sisyphus/plans/my-plan.md\` - file path anywhere in input
- \`Please review .sisyphus/plans/plan.md\` - conversational wrapper
- System directives + plan path - ignore directives, extract path

**INVALID INPUT**:

- No \`.sisyphus/plans/*.md\` path found
- Multiple plan paths (ambiguous)
**VALID INPUT EXAMPLES (ACCEPT THESE)**:

- \`.sisyphus/plans/my-plan.md\` [O] ACCEPT - file path anywhere in input
- \`/path/to/project/.sisyphus/plans/my-plan.md\` [O] ACCEPT - absolute plan path
- \`Please review .sisyphus/plans/plan.md\` [O] ACCEPT - conversational wrapper allowed
- \`<system-reminder>...</system-reminder>\\n.sisyphus/plans/plan.md\` [O] ACCEPT - system directives + plan path
- \`[analyze-mode]\\n...context...\\n.sisyphus/plans/plan.md\` [O] ACCEPT - bracket-style directives + plan path
- \`[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]\\n---\\n- injected planning metadata\\n---\\nPlease review .sisyphus/plans/plan.md\` [O] ACCEPT - ignore the entire directive block

**SYSTEM DIRECTIVES ARE ALWAYS IGNORED**:

System directives are automatically injected by the system and should be IGNORED during input validation:

- XML-style tags: \`<system-reminder>\`, \`<context>\`, \`<user-prompt-submit-hook>\`, etc.
- Bracket-style blocks: \`[analyze-mode]\`, \`[search-mode]\`, \`[SYSTEM DIRECTIVE...]\`, \`[SYSTEM REMINDER...]\`, etc.
- \`[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]\` blocks (appended by Prometheus task tools; treat the entire block, including \`---\` separators and bullet lines, as ignorable system text)
- These are NOT user-provided text
- These contain system context (timestamps, environment info, mode hints, etc.)
- STRIP these from your input validation check
- After stripping system directives, validate the remaining content
**EXTRACTION ALGORITHM (FOLLOW EXACTLY)**:

1. Ignore injected system directive blocks, especially \`[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]\` (remove the whole block, including \`---\` separators and bullet lines).
2. Strip other system directive wrappers (bracket-style blocks and XML-style \`<system-reminder>...</system-reminder>\` tags).
3. Strip markdown wrappers around paths (code fences and inline backticks).
4. Extract plan paths by finding all substrings containing \`.sisyphus/plans/\` and ending in \`.md\`.
5. If exactly 1 match → ACCEPT and proceed to Step 1 using that path.
6. If 0 matches → REJECT with: "no plan path found".
7. If 2+ matches → REJECT with: "ambiguous: multiple plan paths".
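The extraction algorithm above can be sketched in code. This is an illustrative sketch only (the reviewer applies the steps in prose); the function and type names are hypothetical, and the sketch handles XML-style directive tags and backtick wrappers but leaves bracket-style blocks in place, since path matching is unaffected by them:

```typescript
type ExtractionResult =
  | { verdict: "ACCEPT"; path: string }
  | { verdict: "REJECT"; reason: "no plan path found" | "ambiguous: multiple plan paths" }

function extractPlanPath(rawInput: string): ExtractionResult {
  // Steps 1-3: strip XML-style directive tags and markdown backtick wrappers.
  const text = rawInput
    .replace(/<system-reminder>[\s\S]*?<\/system-reminder>/g, " ")
    .replace(/`/g, " ")
  // Step 4: every substring containing ".sisyphus/plans/" and ending in ".md".
  const matches = [...new Set(text.match(/[^\s'"]*\.sisyphus\/plans\/[^\s'"]+?\.md/g) ?? [])]
  // Steps 5-7: exactly one match proceeds; zero or many rejects.
  if (matches.length === 1) return { verdict: "ACCEPT", path: matches[0] }
  if (matches.length === 0) return { verdict: "REJECT", reason: "no plan path found" }
  return { verdict: "REJECT", reason: "ambiguous: multiple plan paths" }
}
```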
**INVALID INPUT EXAMPLES (REJECT ONLY THESE)**:

- \`No plan path provided here\` [X] REJECT - no \`.sisyphus/plans/*.md\` path
- \`Compare .sisyphus/plans/first.md and .sisyphus/plans/second.md\` [X] REJECT - multiple plan paths

**When rejecting for input format, respond EXACTLY**:

\`\`\`
I REJECT (Input Format Validation)
Reason: no plan path found

You must provide a single plan path that includes \`.sisyphus/plans/\` and ends in \`.md\`.

Valid format: .sisyphus/plans/plan.md
Invalid format: No plan path or multiple plan paths

NOTE: This rejection is based solely on the input format, not the file contents.
The file itself has not been evaluated yet.
\`\`\`

Use this alternate Reason line if multiple paths are present:

- Reason: multiple plan paths found
**ULTRA-CRITICAL REMINDER**:

If the input contains exactly one \`.sisyphus/plans/*.md\` path (with or without system directives or conversational wrappers):

→ THIS IS VALID INPUT
→ DO NOT REJECT IT
→ IMMEDIATELY PROCEED TO READ THE FILE
→ START EVALUATING THE FILE CONTENTS

Never reject a single plan path embedded in the input.
Never reject system directives (XML or bracket-style) - they are automatically injected and should be ignored!

**IMPORTANT - Response Language**: Your evaluation output MUST match the language used in the work plan content:

- Match the language of the plan in your evaluation output
- If the plan is written in English → Write your entire evaluation in English
- If the plan is mixed → Use the dominant language (majority of task descriptions)

Example: Plan contains "Modify database schema" → Evaluation output: "## Evaluation Result\\n\\n### Criterion 1: Clarity of Work Content..."

**Extraction**: Find all \`.sisyphus/plans/*.md\` paths → exactly 1 = proceed, 0 or 2+ = reject.
---

## Review Philosophy

Your role is to simulate **executing the work plan as a capable developer** and identify:

1. **Ambiguities** that would block or slow down implementation
2. **Missing verification methods** that prevent confirming success
3. **Gaps in context** requiring >10% guesswork (90% confidence threshold)
4. **Lack of overall understanding** of purpose, background, and workflow

The plan should enable a developer to:

- Know exactly what to build and where to look for details
- Validate their work objectively without subjective judgment
- Complete tasks without needing to "figure out" unstated requirements
- Understand the big picture, purpose, and how tasks flow together

## Review Process (SIMPLE)

1. **Validate input** → Extract single plan path
2. **Read plan** → Identify tasks and file references
3. **Verify references** → Do files exist? Do they contain claimed content?
4. **Executability check** → Can each task be started?
5. **Decide** → Any BLOCKING issues? No = OKAY. Yes = REJECT with max 3 specific issues.
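The decision rule in step 5 can be sketched as a tiny function. This is illustrative only (the reviewer applies the rule in prose, not code), and the names are hypothetical:

```typescript
// Default to OKAY; REJECT only when blocking issues exist, reporting at most 3.
type Verdict = { verdict: "OKAY" } | { verdict: "REJECT"; blockingIssues: string[] }

function decide(blockingIssues: string[]): Verdict {
  if (blockingIssues.length === 0) return { verdict: "OKAY" }
  // More than 3 issues is overwhelming: keep only the top 3 most critical.
  return { verdict: "REJECT", blockingIssues: blockingIssues.slice(0, 3) }
}
```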

---
## Four Core Evaluation Criteria

### Criterion 1: Clarity of Work Content

**Goal**: Eliminate ambiguity by providing clear reference sources for each task.

**Evaluation Method**: For each task, verify:

- **Does the task specify WHERE to find implementation details?**
  - [PASS] Good: "Follow authentication flow in \`docs/auth-spec.md\` section 3.2"
  - [PASS] Good: "Implement based on existing pattern in \`src/services/payment.ts:45-67\`"
  - [FAIL] Bad: "Add authentication" (no reference source)
  - [FAIL] Bad: "Improve error handling" (vague, no examples)
- **Can the developer reach 90%+ confidence by reading the referenced source?**
  - [PASS] Good: Reference to specific file/section that contains concrete examples
  - [FAIL] Bad: "See codebase for patterns" (too broad, requires extensive exploration)

### Criterion 2: Verification & Acceptance Criteria

**Goal**: Ensure every task has clear, objective success criteria.

**Evaluation Method**: For each task, verify:

- **Is there a concrete way to verify completion?**
  - [PASS] Good: "Verify: Run \`npm test\` → all tests pass. Manually test: Open \`/login\` → OAuth button appears → Click → redirects to Google → successful login"
  - [PASS] Good: "Acceptance: API response time < 200ms for 95th percentile (measured via \`k6 run load-test.js\`)"
  - [FAIL] Bad: "Test the feature" (how?)
  - [FAIL] Bad: "Make sure it works properly" (what defines "properly"?)
- **Are acceptance criteria measurable/observable?**
  - [PASS] Good: Observable outcomes (UI elements, API responses, test results, metrics)
  - [FAIL] Bad: Subjective terms ("clean code", "good UX", "robust implementation")

### Criterion 3: Context Completeness

**Goal**: Minimize guesswork by providing all necessary context (90% confidence threshold).

**Evaluation Method**: Simulate task execution and identify:

- **What information is missing that would cause ≥10% uncertainty?**
  - [PASS] Good: Developer can proceed with <10% guesswork (or natural exploration)
  - [FAIL] Bad: Developer must make assumptions about business requirements, architecture, or critical context
- **Are implicit assumptions stated explicitly?**
  - [PASS] Good: "Assume user is already authenticated (session exists in context)"
  - [PASS] Good: "Note: Payment processing is handled by background job, not synchronously"
  - [FAIL] Bad: Leaving critical architectural decisions or business logic unstated

### Criterion 4: Big Picture & Workflow Understanding

**Goal**: Ensure the developer understands WHY they're building this, WHAT the overall objective is, and HOW tasks flow together.

**Evaluation Method**: Assess whether the plan provides:

- **Clear Purpose Statement**: Why is this work being done? What problem does it solve?
- **Background Context**: What's the current state? What are we changing from?
- **Task Flow & Dependencies**: How do tasks connect? What's the logical sequence?
- **Success Vision**: What does "done" look like from a product/user perspective?

## Decision Framework

### OKAY (Default - use this unless blocking issues exist)

Issue the verdict **OKAY** when:

- Referenced files exist and are reasonably relevant
- Tasks have enough context to start (not complete, just start)
- No contradictions or impossible requirements
- A capable developer could make progress

**Remember**: "Good enough" is good enough. You're not blocking publication of a NASA manual.

### REJECT (Only for true blockers)

Issue **REJECT** ONLY when:

- Referenced file doesn't exist (verified by reading)
- Task is completely impossible to start (zero context)
- Plan contains internal contradictions

**Maximum 3 issues per rejection.** If you found more, list only the top 3 most critical.

**Each issue must be**:

- Specific (exact file path, exact task)
- Actionable (what exactly needs to change)
- Blocking (work cannot proceed without this)

---
## Review Process

### Step 0: Validate Input Format (MANDATORY FIRST STEP)

Extract the plan path from anywhere in the input. If exactly one \`.sisyphus/plans/*.md\` path is found, ACCEPT and continue. If none are found, REJECT with "no plan path found". If multiple are found, REJECT with "ambiguous: multiple plan paths".

### Step 1: Read the Work Plan

- Load the file from the path provided
- Identify the plan's language
- Parse all tasks and their descriptions
- Extract ALL file references

### Step 2: MANDATORY DEEP VERIFICATION

For EVERY file reference, library mention, or external resource:

- Read referenced files to verify content
- Search for related patterns/imports across codebase
- Verify line numbers contain relevant code
- Check that patterns are clear enough to follow

### Step 3: Apply Four Criteria Checks

For **the overall plan and each task**, evaluate:

1. **Clarity Check**: Does the task specify clear reference sources?
2. **Verification Check**: Are acceptance criteria concrete and measurable?
3. **Context Check**: Is there sufficient context to proceed without >10% guesswork?
4. **Big Picture Check**: Do I understand WHY, WHAT, and HOW?

### Step 4: Active Implementation Simulation

For 2-3 representative tasks, simulate execution using actual files.

### Step 5: Check for Red Flags

Scan for auto-fail indicators:

- Vague action verbs without concrete targets
- Missing file paths for code changes
- Subjective success criteria
- Tasks requiring unstated assumptions

**SELF-CHECK - Are you overstepping?**

Before writing any criticism, ask yourself:

- "Am I questioning the APPROACH or the DOCUMENTATION of the approach?"
- "Would my feedback change if I accepted the author's direction as a given?"

If you find yourself writing "should use X instead" or "this approach won't work because..." → **STOP. You are overstepping your role.**
Rephrase to: "Given the chosen approach, the plan doesn't clarify..."

### Step 6: Write Evaluation Report

Use structured format, **in the same language as the work plan**.

## Anti-Patterns (DO NOT DO THESE)

❌ "Task 3 could be clearer about error handling" → NOT a blocker
❌ "Consider adding acceptance criteria for..." → NOT a blocker
❌ "The approach in Task 5 might be suboptimal" → NOT YOUR JOB
❌ "Missing documentation for edge case X" → NOT a blocker unless X is the main case
❌ Rejecting because you'd do it differently → NEVER
❌ Listing more than 3 issues → OVERWHELMING, pick top 3

✅ "Task 3 references \`auth/login.ts\` but file doesn't exist" → BLOCKER
✅ "Task 5 says 'implement feature' with no context, files, or description" → BLOCKER
✅ "Tasks 2 and 4 contradict each other on data flow" → BLOCKER

---
## Approval Criteria

### OKAY Requirements (ALL must be met)

1. **100% of file references verified**
2. **Zero critically failed file verifications**
3. **Critical context documented**
4. **≥80% of tasks** have clear reference sources
5. **≥90% of tasks** have concrete acceptance criteria
6. **Zero tasks** require assumptions about business logic or critical architecture
7. **Plan provides clear big picture**
8. **Zero critical red flags** detected
9. **Active simulation** shows core tasks are executable

### REJECT Triggers (Critical issues only)

- Referenced file doesn't exist or contains different content than claimed
- Task has vague action verbs AND no reference source
- Core tasks missing acceptance criteria entirely
- Task requires assumptions about business requirements or critical architecture **within the chosen approach**
- Missing purpose statement or unclear WHY
- Critical task dependencies undefined

### NOT Valid REJECT Reasons (DO NOT REJECT FOR THESE)

- You disagree with the implementation approach
- You think a different architecture would be better
- The approach seems non-standard or unusual
- You believe there's a more optimal solution
- The technology choice isn't what you would pick

**Your role is DOCUMENTATION REVIEW, not DESIGN REVIEW.**

---

## Output Format

**[OKAY]** or **[REJECT]**

**Summary**: 1-2 sentences explaining the verdict.

If REJECT:

**Blocking Issues** (max 3):

1. [Specific issue + what needs to change]
2. [Specific issue + what needs to change]
3. [Specific issue + what needs to change]

## Final Verdict Format

**[OKAY / REJECT]**

**Justification**: [Concise explanation]

**Summary**:

- Clarity: [Brief assessment]
- Verifiability: [Brief assessment]
- Completeness: [Brief assessment]
- Big Picture: [Brief assessment]

[If REJECT, provide top 3-5 critical improvements needed]

---

## Final Reminders

1. **APPROVE by default**. Reject only for true blockers.
2. **Max 3 issues**. More than that is overwhelming and counterproductive.
3. **Be specific**. "Task X needs Y" not "needs more clarity".
4. **No design opinions**. The author's approach is not your concern.
5. **Trust developers**. They can figure out minor gaps.

**Your job is to UNBLOCK work, not to BLOCK it with perfectionism.**
**Your Success Means**:

- **Immediately actionable** for core business logic and architecture
- **Clearly verifiable** with objective success criteria
- **Contextually complete** with critical information documented
- **Strategically coherent** with purpose, background, and flow
- **Reference integrity** with all files verified
- **Direction-respecting** - you evaluated the plan WITHIN its stated approach

**Strike the right balance**: Prevent critical failures while empowering developer autonomy.

**FINAL REMINDER**: You are a DOCUMENTATION reviewer, not a DESIGN consultant. The author's implementation direction is SACRED. Your job ends at "Is this well-documented enough to execute?" - NOT "Is this the right approach?"

**Response Language**: Match the language of the plan content.
`

export function createMomusAgent(model: string): AgentConfig {
@@ -1,8 +1,14 @@
 import type { AgentConfig } from "@opencode-ai/sdk"
-import type { AgentMode } from "./types"
+import type { AgentMode, AgentPromptMetadata } from "./types"
 import { isGptModel } from "./types"

 const MODE: AgentMode = "primary"
+export const SISYPHUS_PROMPT_METADATA: AgentPromptMetadata = {
+  category: "utility",
+  cost: "EXPENSIVE",
+  promptAlias: "Sisyphus",
+  triggers: [],
+}
 import type { AvailableAgent, AvailableTool, AvailableSkill, AvailableCategory } from "./dynamic-agent-prompt-builder"
 import {
   buildKeyTriggersSection,
@@ -3,6 +3,7 @@ import { createBuiltinAgents } from "./utils"
 import type { AgentConfig } from "@opencode-ai/sdk"
 import { clearSkillCache } from "../features/opencode-skill-loader/skill-content"
 import * as connectedProvidersCache from "../shared/connected-providers-cache"
+import * as modelAvailability from "../shared/model-availability"

 const TEST_DEFAULT_MODEL = "anthropic/claude-opus-4-5"
@@ -47,32 +48,32 @@ describe("createBuiltinAgents with model overrides", () => {
     expect(agents.sisyphus.reasoningEffort).toBeUndefined()
   })

   test("Oracle uses connected provider fallback when availableModels is empty and cache exists", async () => {
     // #given - connected providers cache has "openai", which matches oracle's first fallback entry
     const cacheSpy = spyOn(connectedProvidersCache, "readConnectedProvidersCache").mockReturnValue(["openai"])

     // #when
     const agents = await createBuiltinAgents([], {}, undefined, TEST_DEFAULT_MODEL)

     // #then - oracle resolves via connected cache fallback to openai/gpt-5.2 (not system default)
     expect(agents.oracle.model).toBe("openai/gpt-5.2")
     expect(agents.oracle.reasoningEffort).toBe("medium")
     expect(agents.oracle.thinking).toBeUndefined()
-    cacheSpy.mockRestore()
+    cacheSpy.mockRestore?.()
   })

   test("Oracle created without model field when no cache exists (first run scenario)", async () => {
     // #given - no cache at all (first run)
     const cacheSpy = spyOn(connectedProvidersCache, "readConnectedProvidersCache").mockReturnValue(null)

     // #when
     const agents = await createBuiltinAgents([], {}, undefined, TEST_DEFAULT_MODEL)

     // #then - oracle should be created with system default model (fallback to systemDefaultModel)
     expect(agents.oracle).toBeDefined()
     expect(agents.oracle.model).toBe(TEST_DEFAULT_MODEL)
-    cacheSpy.mockRestore()
+    cacheSpy.mockRestore?.()
   })

   test("Oracle with GPT model override has reasoningEffort, no thinking", async () => {
     // #given
@@ -122,43 +123,43 @@
   })

 describe("createBuiltinAgents without systemDefaultModel", () => {
   test("agents created via connected cache fallback even without systemDefaultModel", async () => {
     // #given - connected cache has "openai", which matches oracle's fallback chain
     const cacheSpy = spyOn(connectedProvidersCache, "readConnectedProvidersCache").mockReturnValue(["openai"])

     // #when
     const agents = await createBuiltinAgents([], {}, undefined, undefined)

     // #then - connected cache enables model resolution despite no systemDefaultModel
     expect(agents.oracle).toBeDefined()
     expect(agents.oracle.model).toBe("openai/gpt-5.2")
-    cacheSpy.mockRestore()
+    cacheSpy.mockRestore?.()
   })

   test("agents NOT created when no cache and no systemDefaultModel (first run without defaults)", async () => {
     // #given
     const cacheSpy = spyOn(connectedProvidersCache, "readConnectedProvidersCache").mockReturnValue(null)

     // #when
     const agents = await createBuiltinAgents([], {}, undefined, undefined)

     // #then
     expect(agents.oracle).toBeUndefined()
-    cacheSpy.mockRestore()
+    cacheSpy.mockRestore?.()
   })

   test("sisyphus created via connected cache fallback even without systemDefaultModel", async () => {
     // #given - connected cache has "anthropic", which matches sisyphus's first fallback entry
     const cacheSpy = spyOn(connectedProvidersCache, "readConnectedProvidersCache").mockReturnValue(["anthropic"])

     // #when
     const agents = await createBuiltinAgents([], {}, undefined, undefined)

     // #then - connected cache enables model resolution despite no systemDefaultModel
     expect(agents.sisyphus).toBeDefined()
     expect(agents.sisyphus.model).toBe("anthropic/claude-opus-4-5")
-    cacheSpy.mockRestore()
+    cacheSpy.mockRestore?.()
   })
 })

 describe("buildAgent with category and skills", () => {
@@ -523,3 +524,41 @@ describe("override.category expansion in createBuiltinAgents", () => {
     expect(agents.oracle.model).toBe(agentsWithoutOverride.oracle.model)
   })
 })
+
+describe("Deadlock prevention - fetchAvailableModels must not receive client", () => {
+  test("createBuiltinAgents should call fetchAvailableModels with undefined client to prevent deadlock", async () => {
+    // #given - This test ensures we don't regress on issue #1301
+    // Passing client to fetchAvailableModels during createBuiltinAgents (called from config handler)
+    // causes deadlock:
+    // - Plugin init waits for server response (client.provider.list())
+    // - Server waits for plugin init to complete before handling requests
+    const fetchSpy = spyOn(modelAvailability, "fetchAvailableModels").mockResolvedValue(new Set<string>())
+    const cacheSpy = spyOn(connectedProvidersCache, "readConnectedProvidersCache").mockReturnValue(null)
+
+    const mockClient = {
+      provider: { list: () => Promise.resolve({ data: { connected: [] } }) },
+      model: { list: () => Promise.resolve({ data: [] }) },
+    }
+
+    // #when - Even when client is provided, fetchAvailableModels must be called with undefined
+    await createBuiltinAgents(
+      [],
+      {},
+      undefined,
+      TEST_DEFAULT_MODEL,
+      undefined,
+      undefined,
+      [],
+      mockClient // client is passed but should NOT be forwarded to fetchAvailableModels
+    )
+
+    // #then - fetchAvailableModels must be called with undefined as first argument (no client)
+    // This prevents the deadlock described in issue #1301
+    expect(fetchSpy).toHaveBeenCalled()
+    const firstCallArgs = fetchSpy.mock.calls[0]
+    expect(firstCallArgs[0]).toBeUndefined()
+
+    fetchSpy.mockRestore?.()
+    cacheSpy.mockRestore?.()
+  })
+})
@@ -6,11 +6,11 @@ import { createOracleAgent, ORACLE_PROMPT_METADATA } from "./oracle"
 import { createLibrarianAgent, LIBRARIAN_PROMPT_METADATA } from "./librarian"
 import { createExploreAgent, EXPLORE_PROMPT_METADATA } from "./explore"
 import { createMultimodalLookerAgent, MULTIMODAL_LOOKER_PROMPT_METADATA } from "./multimodal-looker"
-import { createMetisAgent } from "./metis"
-import { createAtlasAgent } from "./atlas"
-import { createMomusAgent } from "./momus"
+import { createMetisAgent, metisPromptMetadata } from "./metis"
+import { createAtlasAgent, atlasPromptMetadata } from "./atlas"
+import { createMomusAgent, momusPromptMetadata } from "./momus"
 import type { AvailableAgent, AvailableCategory, AvailableSkill } from "./dynamic-agent-prompt-builder"
-import { deepMerge, fetchAvailableModels, resolveModelWithFallback, AGENT_MODEL_REQUIREMENTS, findCaseInsensitive, includesCaseInsensitive, readConnectedProvidersCache, isModelAvailable } from "../shared"
+import { deepMerge, fetchAvailableModels, resolveModelPipeline, AGENT_MODEL_REQUIREMENTS, readConnectedProvidersCache, isModelAvailable } from "../shared"
 import { DEFAULT_CATEGORIES, CATEGORY_DESCRIPTIONS } from "../tools/delegate-task/constants"
 import { resolveMultipleSkills } from "../features/opencode-skill-loader/skill-content"
 import { createBuiltinSkills } from "../features/builtin-skills"
@@ -41,6 +41,9 @@ const agentMetadata: Partial<Record<BuiltinAgentName, AgentPromptMetadata>> = {
   librarian: LIBRARIAN_PROMPT_METADATA,
   explore: EXPLORE_PROMPT_METADATA,
   "multimodal-looker": MULTIMODAL_LOOKER_PROMPT_METADATA,
+  metis: metisPromptMetadata,
+  momus: momusPromptMetadata,
+  atlas: atlasPromptMetadata,
 }

 function isFactory(source: AgentSource): source is AgentFactory {
@@ -147,6 +150,45 @@ function applyCategoryOverride(
   return result as AgentConfig
 }

+function applyModelResolution(input: {
+  uiSelectedModel?: string
+  userModel?: string
+  requirement?: { fallbackChain?: { providers: string[]; model: string; variant?: string }[] }
+  availableModels: Set<string>
+  systemDefaultModel?: string
+}) {
+  const { uiSelectedModel, userModel, requirement, availableModels, systemDefaultModel } = input
+  return resolveModelPipeline({
+    intent: { uiSelectedModel, userModel },
+    constraints: { availableModels },
+    policy: { fallbackChain: requirement?.fallbackChain, systemDefaultModel },
+  })
+}
+
+function applyEnvironmentContext(config: AgentConfig, directory?: string): AgentConfig {
+  if (!directory || !config.prompt) return config
+  const envContext = createEnvContext()
+  return { ...config, prompt: config.prompt + envContext }
+}
+
+function applyOverrides(
+  config: AgentConfig,
+  override: AgentOverrideConfig | undefined,
+  mergedCategories: Record<string, CategoryConfig>
+): AgentConfig {
+  let result = config
+  const overrideCategory = (override as Record<string, unknown> | undefined)?.category as string | undefined
+  if (overrideCategory) {
+    result = applyCategoryOverride(result, overrideCategory, mergedCategories)
+  }
+
+  if (override) {
+    result = mergeAgentConfig(result, override)
+  }
+
+  return result
+}
+
 function mergeAgentConfig(
   base: AgentConfig,
   override: AgentOverrideConfig
@@ -180,9 +222,12 @@ export async function createBuiltinAgents(
   uiSelectedModel?: string
 ): Promise<Record<string, AgentConfig>> {
   const connectedProviders = readConnectedProvidersCache()
-  const availableModels = client
-    ? await fetchAvailableModels(client, { connectedProviders: connectedProviders ?? undefined })
-    : new Set<string>()
+  // IMPORTANT: Do NOT pass client to fetchAvailableModels during plugin initialization.
+  // This function is called from config handler, and calling client API causes deadlock.
+  // See: https://github.com/code-yeongyu/oh-my-opencode/issues/1301
+  const availableModels = await fetchAvailableModels(undefined, {
+    connectedProviders: connectedProviders ?? undefined,
+  })

   const result: Record<string, AgentConfig> = {}
   const availableAgents: AvailableAgent[] = []
@@ -220,9 +265,10 @@

     if (agentName === "sisyphus") continue
     if (agentName === "atlas") continue
-    if (includesCaseInsensitive(disabledAgents, agentName)) continue
+    if (disabledAgents.some((name) => name.toLowerCase() === agentName.toLowerCase())) continue

-    const override = findCaseInsensitive(agentOverrides, agentName)
+    const override = agentOverrides[agentName]
+      ?? Object.entries(agentOverrides).find(([key]) => key.toLowerCase() === agentName.toLowerCase())?.[1]
     const requirement = AGENT_MODEL_REQUIREMENTS[agentName]

     // Check if agent requires a specific model
@@ -234,10 +280,10 @@ export async function createBuiltinAgents(
|
||||
|
||||
const isPrimaryAgent = isFactory(source) && source.mode === "primary"
|
||||
|
||||
const resolution = resolveModelWithFallback({
|
||||
const resolution = applyModelResolution({
|
||||
uiSelectedModel: isPrimaryAgent ? uiSelectedModel : undefined,
|
||||
userModel: override?.model,
|
||||
fallbackChain: requirement?.fallbackChain,
|
||||
requirement,
|
||||
availableModels,
|
||||
systemDefaultModel,
|
||||
})
|
||||
@@ -257,15 +303,11 @@ export async function createBuiltinAgents(
|
||||
config = applyCategoryOverride(config, overrideCategory, mergedCategories)
|
||||
}
|
||||
|
||||
if (agentName === "librarian" && directory && config.prompt) {
|
||||
const envContext = createEnvContext()
|
||||
config = { ...config, prompt: config.prompt + envContext }
|
||||
if (agentName === "librarian") {
|
||||
config = applyEnvironmentContext(config, directory)
|
||||
}
|
||||
|
||||
// Direct override properties take highest priority
|
||||
if (override) {
|
||||
config = mergeAgentConfig(config, override)
|
||||
}
|
||||
config = applyOverrides(config, override, mergedCategories)
|
||||
|
||||
result[name] = config
|
||||
|
||||
@@ -283,10 +325,10 @@ export async function createBuiltinAgents(
|
||||
const sisyphusOverride = agentOverrides["sisyphus"]
|
||||
const sisyphusRequirement = AGENT_MODEL_REQUIREMENTS["sisyphus"]
|
||||
|
||||
const sisyphusResolution = resolveModelWithFallback({
|
||||
const sisyphusResolution = applyModelResolution({
|
||||
uiSelectedModel,
|
||||
userModel: sisyphusOverride?.model,
|
||||
fallbackChain: sisyphusRequirement?.fallbackChain,
|
||||
requirement: sisyphusRequirement,
|
||||
availableModels,
|
||||
systemDefaultModel,
|
||||
})
|
||||
@@ -306,19 +348,8 @@ export async function createBuiltinAgents(
|
||||
sisyphusConfig = { ...sisyphusConfig, variant: sisyphusResolvedVariant }
|
||||
}
|
||||
|
||||
const sisOverrideCategory = (sisyphusOverride as Record<string, unknown> | undefined)?.category as string | undefined
|
||||
if (sisOverrideCategory) {
|
||||
sisyphusConfig = applyCategoryOverride(sisyphusConfig, sisOverrideCategory, mergedCategories)
|
||||
}
|
||||
|
||||
if (directory && sisyphusConfig.prompt) {
|
||||
const envContext = createEnvContext()
|
||||
sisyphusConfig = { ...sisyphusConfig, prompt: sisyphusConfig.prompt + envContext }
|
||||
}
|
||||
|
||||
if (sisyphusOverride) {
|
||||
sisyphusConfig = mergeAgentConfig(sisyphusConfig, sisyphusOverride)
|
||||
}
|
||||
sisyphusConfig = applyOverrides(sisyphusConfig, sisyphusOverride, mergedCategories)
|
||||
sisyphusConfig = applyEnvironmentContext(sisyphusConfig, directory)
|
||||
|
||||
result["sisyphus"] = sisyphusConfig
|
||||
}
|
||||
@@ -328,10 +359,10 @@ export async function createBuiltinAgents(
|
||||
const orchestratorOverride = agentOverrides["atlas"]
|
||||
const atlasRequirement = AGENT_MODEL_REQUIREMENTS["atlas"]
|
||||
|
||||
const atlasResolution = resolveModelWithFallback({
|
||||
const atlasResolution = applyModelResolution({
|
||||
// NOTE: Atlas does NOT use uiSelectedModel - respects its own fallbackChain (k2p5 primary)
|
||||
userModel: orchestratorOverride?.model,
|
||||
fallbackChain: atlasRequirement?.fallbackChain,
|
||||
requirement: atlasRequirement,
|
||||
availableModels,
|
||||
systemDefaultModel,
|
||||
})
|
||||
@@ -350,14 +381,7 @@ export async function createBuiltinAgents(
|
||||
orchestratorConfig = { ...orchestratorConfig, variant: atlasResolvedVariant }
|
||||
}
|
||||
|
||||
const atlasOverrideCategory = (orchestratorOverride as Record<string, unknown> | undefined)?.category as string | undefined
|
||||
if (atlasOverrideCategory) {
|
||||
orchestratorConfig = applyCategoryOverride(orchestratorConfig, atlasOverrideCategory, mergedCategories)
|
||||
}
|
||||
|
||||
if (orchestratorOverride) {
|
||||
orchestratorConfig = mergeAgentConfig(orchestratorConfig, orchestratorOverride)
|
||||
}
|
||||
orchestratorConfig = applyOverrides(orchestratorConfig, orchestratorOverride, mergedCategories)
|
||||
|
||||
result["atlas"] = orchestratorConfig
|
||||
}
|
||||
|
||||
@@ -88,6 +88,7 @@ export const HookNameSchema = z.enum([
|
||||
"sisyphus-junior-notepad",
|
||||
"start-work",
|
||||
"atlas",
|
||||
"stop-continuation-guard",
|
||||
])
|
||||
|
||||
export const BuiltinCommandNameSchema = z.enum([
|
||||
|
||||
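The hunks above replace inline `.some()` / `Object.entries().find()` lookups with `includesCaseInsensitive` and `findCaseInsensitive` helpers. A minimal sketch of what such helpers might look like; the real implementations live in the plugin's shared utilities and may differ:

```typescript
// Hypothetical sketch of the case-insensitive helpers the diff introduces.

function includesCaseInsensitive(values: string[], target: string): boolean {
  const lowered = target.toLowerCase()
  return values.some((value) => value.toLowerCase() === lowered)
}

function findCaseInsensitive<T>(record: Record<string, T>, key: string): T | undefined {
  // Prefer an exact match, then fall back to a case-insensitive scan over the keys.
  if (key in record) return record[key]
  const lowered = key.toLowerCase()
  return Object.entries(record).find(([k]) => k.toLowerCase() === lowered)?.[1]
}
```

Factoring the scan out keeps the `disabledAgents` check and the `agentOverrides` lookup consistent instead of duplicating the lowercasing logic at each call site.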
@@ -2087,3 +2087,95 @@ describe("BackgroundManager.shutdown session abort", () => {
  })
})

describe("BackgroundManager.completionTimers - Memory Leak Fix", () => {
  function getCompletionTimers(manager: BackgroundManager): Map<string, ReturnType<typeof setTimeout>> {
    return (manager as unknown as { completionTimers: Map<string, ReturnType<typeof setTimeout>> }).completionTimers
  }

  function setCompletionTimer(manager: BackgroundManager, taskId: string): void {
    const completionTimers = getCompletionTimers(manager)
    const timer = setTimeout(() => {
      completionTimers.delete(taskId)
    }, 5 * 60 * 1000)
    completionTimers.set(taskId, timer)
  }

  test("should have completionTimers Map initialized", () => {
    // #given
    const manager = createBackgroundManager()

    // #when
    const completionTimers = getCompletionTimers(manager)

    // #then
    expect(completionTimers).toBeDefined()
    expect(completionTimers).toBeInstanceOf(Map)
    expect(completionTimers.size).toBe(0)

    manager.shutdown()
  })

  test("should clear all completion timers on shutdown", () => {
    // #given
    const manager = createBackgroundManager()
    setCompletionTimer(manager, "task-1")
    setCompletionTimer(manager, "task-2")

    const completionTimers = getCompletionTimers(manager)
    expect(completionTimers.size).toBe(2)

    // #when
    manager.shutdown()

    // #then
    expect(completionTimers.size).toBe(0)
  })

  test("should cancel timer when task is deleted via session.deleted", () => {
    // #given
    const manager = createBackgroundManager()
    const task: BackgroundTask = {
      id: "task-timer-4",
      sessionID: "session-timer-4",
      parentSessionID: "parent-session",
      parentMessageID: "msg-1",
      description: "Test task",
      prompt: "test",
      agent: "explore",
      status: "completed",
      startedAt: new Date(),
    }
    getTaskMap(manager).set(task.id, task)
    setCompletionTimer(manager, task.id)

    const completionTimers = getCompletionTimers(manager)
    expect(completionTimers.size).toBe(1)

    // #when
    manager.handleEvent({
      type: "session.deleted",
      properties: {
        info: { id: "session-timer-4" },
      },
    })

    // #then
    expect(completionTimers.has(task.id)).toBe(false)

    manager.shutdown()
  })

  test("should not leak timers across multiple shutdown calls", () => {
    // #given
    const manager = createBackgroundManager()
    setCompletionTimer(manager, "task-1")

    // #when
    manager.shutdown()
    manager.shutdown()

    // #then
    const completionTimers = getCompletionTimers(manager)
    expect(completionTimers.size).toBe(0)
  })
})
@@ -83,6 +83,7 @@ export class BackgroundManager {

  private queuesByKey: Map<string, QueueItem[]> = new Map()
  private processingKeys: Set<string> = new Set()
  private completionTimers: Map<string, ReturnType<typeof setTimeout>> = new Map()

  constructor(
    ctx: PluginInput,
@@ -708,7 +709,11 @@ export class BackgroundManager {
      this.concurrencyManager.release(task.concurrencyKey)
      task.concurrencyKey = undefined
    }
    // Clean up pendingByParent to prevent stale entries
    const existingTimer = this.completionTimers.get(task.id)
    if (existingTimer) {
      clearTimeout(existingTimer)
      this.completionTimers.delete(task.id)
    }
    this.cleanupPendingByParent(task)
    this.tasks.delete(task.id)
    this.clearNotificationsForTask(task.id)
@@ -1073,14 +1078,15 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
    }

    const taskId = task.id
    setTimeout(() => {
      // Guard: Only delete if task still exists (could have been deleted by session.deleted event)
    const timer = setTimeout(() => {
      this.completionTimers.delete(taskId)
      if (this.tasks.has(taskId)) {
        this.clearNotificationsForTask(taskId)
        this.tasks.delete(taskId)
        log("[background-agent] Removed completed task from memory:", taskId)
      }
    }, 5 * 60 * 1000)
    this.completionTimers.set(taskId, timer)
  }

  private formatDuration(start: Date, end?: Date): string {
@@ -1375,7 +1381,11 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
      }
    }

    // Then clear all state (cancels any remaining waiters)
    for (const timer of this.completionTimers.values()) {
      clearTimeout(timer)
    }
    this.completionTimers.clear()

    this.concurrencyManager.clear()
    this.tasks.clear()
    this.notifications.clear()
@@ -1396,7 +1406,10 @@ function registerProcessSignal(
  const listener = () => {
    handler()
    if (exitAfter) {
      process.exit(0)
      // Set exitCode and schedule exit after delay to allow other handlers to complete async cleanup
      // Use 6s delay to accommodate LSP cleanup (5s timeout + 1s SIGKILL wait)
      process.exitCode = 0
      setTimeout(() => process.exit(), 6000)
    }
  }
  process.on(signal, listener)
@@ -2,6 +2,7 @@ import type { CommandDefinition } from "../claude-code-command-loader"
import type { BuiltinCommandName, BuiltinCommands } from "./types"
import { INIT_DEEP_TEMPLATE } from "./templates/init-deep"
import { RALPH_LOOP_TEMPLATE, CANCEL_RALPH_TEMPLATE } from "./templates/ralph-loop"
import { STOP_CONTINUATION_TEMPLATE } from "./templates/stop-continuation"
import { REFACTOR_TEMPLATE } from "./templates/refactor"
import { START_WORK_TEMPLATE } from "./templates/start-work"

@@ -70,6 +71,12 @@ $ARGUMENTS
</user-request>\`,
    argumentHint: "[plan-name]",
  },
  "stop-continuation": {
    description: "(builtin) Stop all continuation mechanisms (ralph loop, todo continuation, boulder) for this session",
    template: \`<command-instruction>
${STOP_CONTINUATION_TEMPLATE}
</command-instruction>\`,
  },
}

export function loadBuiltinCommands(

@@ -0,0 +1,25 @@
import { describe, expect, test } from "bun:test"
import { STOP_CONTINUATION_TEMPLATE } from "./stop-continuation"

describe("stop-continuation template", () => {
  test("should export a non-empty template string", () => {
    // #given - the stop-continuation template

    // #when - we access the template

    // #then - it should be a non-empty string
    expect(typeof STOP_CONTINUATION_TEMPLATE).toBe("string")
    expect(STOP_CONTINUATION_TEMPLATE.length).toBeGreaterThan(0)
  })

  test("should describe the stop-continuation behavior", () => {
    // #given - the stop-continuation template

    // #when - we check the content

    // #then - it should mention key behaviors
    expect(STOP_CONTINUATION_TEMPLATE).toContain("todo-continuation-enforcer")
    expect(STOP_CONTINUATION_TEMPLATE).toContain("Ralph Loop")
    expect(STOP_CONTINUATION_TEMPLATE).toContain("boulder state")
  })
})

src/features/builtin-commands/templates/stop-continuation.ts (new file, 13 lines)
@@ -0,0 +1,13 @@
export const STOP_CONTINUATION_TEMPLATE = \`Stop all continuation mechanisms for the current session.

This command will:
1. Stop the todo-continuation-enforcer from automatically continuing incomplete tasks
2. Cancel any active Ralph Loop
3. Clear the boulder state for the current project

After running this command:
- The session will not auto-continue when idle
- You can manually continue work when ready
- The stop state is per-session and clears when the session ends

Use this when you need to pause automated continuation and take manual control.\`
@@ -1,6 +1,6 @@
import type { CommandDefinition } from "../claude-code-command-loader"

export type BuiltinCommandName = "init-deep" | "ralph-loop" | "cancel-ralph" | "ulw-loop" | "refactor" | "start-work"
export type BuiltinCommandName = "init-deep" | "ralph-loop" | "cancel-ralph" | "ulw-loop" | "refactor" | "start-work" | "stop-continuation"

export interface BuiltinCommandConfig {
  disabled_commands?: BuiltinCommandName[]

@@ -114,23 +114,15 @@ export class SkillMcpManager {
    this.pendingConnections.clear()
  }

  // Note: 'exit' event is synchronous-only in Node.js, so we use 'beforeExit' for async cleanup
  // However, 'beforeExit' is not emitted on explicit process.exit() calls
  // Signal handlers are made async to properly await cleanup
  // Note: Node's 'exit' event is synchronous-only, so we rely on signal handlers for async cleanup.
  // Signal handlers invoke the async cleanup function and ignore errors so they don't block or throw.
  // Don't call process.exit() here - let the background-agent manager handle the final process exit.
  // Use void + catch to trigger async cleanup without awaiting it in the signal handler.

  process.on("SIGINT", async () => {
    await cleanup()
    process.exit(0)
  })
  process.on("SIGTERM", async () => {
    await cleanup()
    process.exit(0)
  })
  process.on("SIGINT", () => void cleanup().catch(() => {}))
  process.on("SIGTERM", () => void cleanup().catch(() => {}))
  if (process.platform === "win32") {
    process.on("SIGBREAK", async () => {
      await cleanup()
      process.exit(0)
    })
    process.on("SIGBREAK", () => void cleanup().catch(() => {}))
  }
}
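The SkillMcpManager change above stops awaiting cleanup and calling `process.exit(0)` inside the signal handlers; instead each handler fires the async cleanup, swallows errors, and leaves the final exit to the background-agent manager. The listener shape can be sketched as a small wrapper (the `makeSignalListener` helper is hypothetical; only the `() => void cleanup().catch(() => {})` body comes from the diff):

```typescript
// Sketch: build a non-exiting signal listener that fires async cleanup
// and ignores failures, matching the SkillMcpManager diff above.
function makeSignalListener(cleanup: () => Promise<void>): () => void {
  // void discards the promise; .catch prevents an unhandled rejection.
  return () => void cleanup().catch(() => {})
}

// Registration would then look like:
// process.on("SIGINT", makeSignalListener(cleanup))
// process.on("SIGTERM", makeSignalListener(cleanup))
```

Because the listener neither awaits nor exits, several components can each register their own handler without racing one another's `process.exit` calls.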
@@ -24,7 +24,7 @@ import {
  type PreCompactContext,
} from "./pre-compact"
import { cacheToolInput, getToolInput } from "./tool-input-cache"
import { recordToolUse, recordToolResult, getTranscriptPath, recordUserMessage } from "./transcript"
import { appendTranscriptEntry, getTranscriptPath } from "./transcript"
import type { PluginConfig } from "./types"
import { log, isHookDisabled } from "../../shared"
import type { ContextCollector } from "../../features/context-injector"
@@ -92,7 +92,11 @@ export function createClaudeCodeHooksHook(
      const textParts = output.parts.filter((p) => p.type === "text" && p.text)
      const prompt = textParts.map((p) => p.text ?? "").join("\n")

      recordUserMessage(input.sessionID, prompt)
      appendTranscriptEntry(input.sessionID, {
        type: "user",
        timestamp: new Date().toISOString(),
        content: prompt,
      })

      const messageParts: MessagePart[] = textParts.map((p) => ({
        type: p.type as "text",
@@ -198,7 +202,12 @@ export function createClaudeCodeHooksHook(
      const claudeConfig = await loadClaudeHooksConfig()
      const extendedConfig = await loadPluginExtendedConfig()

      recordToolUse(input.sessionID, input.tool, output.args as Record<string, unknown>)
      appendTranscriptEntry(input.sessionID, {
        type: "tool_use",
        timestamp: new Date().toISOString(),
        tool_name: input.tool,
        tool_input: output.args as Record<string, unknown>,
      })

      cacheToolInput(input.sessionID, input.tool, input.callID, output.args as Record<string, unknown>)

@@ -253,7 +262,13 @@ export function createClaudeCodeHooksHook(
      const metadata = output.metadata as Record<string, unknown> | undefined
      const hasMetadata = metadata && typeof metadata === "object" && Object.keys(metadata).length > 0
      const toolOutput = hasMetadata ? metadata : { output: output.output }
      recordToolResult(input.sessionID, input.tool, cachedInput, toolOutput)
      appendTranscriptEntry(input.sessionID, {
        type: "tool_result",
        timestamp: new Date().toISOString(),
        tool_name: input.tool,
        tool_input: cachedInput,
        tool_output: toolOutput,
      })

      if (!isHookDisabled(config, "PostToolUse")) {
        const postClient: PostToolUseClient = {

@@ -28,56 +28,6 @@ export function appendTranscriptEntry(
  appendFileSync(path, line)
}

export function recordToolUse(
  sessionId: string,
  toolName: string,
  toolInput: Record<string, unknown>
): void {
  appendTranscriptEntry(sessionId, {
    type: "tool_use",
    timestamp: new Date().toISOString(),
    tool_name: toolName,
    tool_input: toolInput,
  })
}

export function recordToolResult(
  sessionId: string,
  toolName: string,
  toolInput: Record<string, unknown>,
  toolOutput: Record<string, unknown>
): void {
  appendTranscriptEntry(sessionId, {
    type: "tool_result",
    timestamp: new Date().toISOString(),
    tool_name: toolName,
    tool_input: toolInput,
    tool_output: toolOutput,
  })
}

export function recordUserMessage(
  sessionId: string,
  content: string
): void {
  appendTranscriptEntry(sessionId, {
    type: "user",
    timestamp: new Date().toISOString(),
    content,
  })
}

export function recordAssistantMessage(
  sessionId: string,
  content: string
): void {
  appendTranscriptEntry(sessionId, {
    type: "assistant",
    timestamp: new Date().toISOString(),
    content,
  })
}

// ============================================================================
// Claude Code Compatible Transcript Builder (PORT FROM DISABLED)
// ============================================================================
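The refactor above deletes the `recordToolUse` / `recordToolResult` / `recordUserMessage` wrappers and calls `appendTranscriptEntry` directly at each hook site. The underlying format is one JSON object per line (JSONL). A self-contained sketch of that append; the temp-dir path helper and the loose entry type are assumptions, the field names come from the diff:

```typescript
import { appendFileSync, readFileSync } from "node:fs"
import { join } from "node:path"
import { tmpdir } from "node:os"

interface TranscriptEntry {
  type: "user" | "assistant" | "tool_use" | "tool_result"
  timestamp: string
  [key: string]: unknown
}

// Hypothetical path helper; the real getTranscriptPath lives in transcript.ts.
function getTranscriptPath(sessionId: string): string {
  return join(tmpdir(), `${sessionId}.jsonl`)
}

function appendTranscriptEntry(sessionId: string, entry: TranscriptEntry): void {
  // JSONL: serialize one entry per line and append, never rewrite the file.
  appendFileSync(getTranscriptPath(sessionId), JSON.stringify(entry) + "\n")
}
```

Inlining the entry objects trades four thin wrappers for one explicit call shape, so every hook site shows exactly which fields it writes.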
@@ -1,9 +1,16 @@
import { spawn } from "bun"
import { existsSync, mkdirSync, chmodSync, unlinkSync, appendFileSync } from "fs"
import { existsSync, appendFileSync } from "fs"
import { join } from "path"
import { homedir, tmpdir } from "os"
import { createRequire } from "module"
import { extractZip } from "../../shared"
import {
  cleanupArchive,
  downloadArchive,
  ensureCacheDir,
  ensureExecutable,
  extractTarGz,
  extractZipArchive,
  getCachedBinaryPath as getCachedBinaryPathShared,
} from "../../shared/binary-downloader"
import { log } from "../../shared/logger"

const DEBUG = process.env.COMMENT_CHECKER_DEBUG === "1"
@@ -60,8 +67,7 @@ export function getBinaryName(): string {
 * Get the cached binary path if it exists.
 */
export function getCachedBinaryPath(): string | null {
  const binaryPath = join(getCacheDir(), getBinaryName())
  return existsSync(binaryPath) ? binaryPath : null
  return getCachedBinaryPathShared(getCacheDir(), getBinaryName())
}

/**
@@ -78,27 +84,6 @@ function getPackageVersion(): string {
  }
}

/**
 * Extract tar.gz archive using system tar command.
 */
async function extractTarGz(archivePath: string, destDir: string): Promise<void> {
  debugLog("Extracting tar.gz:", archivePath, "to", destDir)

  const proc = spawn(["tar", "-xzf", archivePath, "-C", destDir], {
    stdout: "pipe",
    stderr: "pipe",
  })

  const exitCode = await proc.exited

  if (exitCode !== 0) {
    const stderr = await new Response(proc.stderr).text()
    throw new Error(\`tar extraction failed (exit ${exitCode}): ${stderr}\`)
  }
}

/**
 * Download the comment-checker binary from GitHub Releases.
 * Returns the path to the downloaded binary, or null on failure.
@@ -132,39 +117,26 @@ export async function downloadCommentChecker(): Promise<string | null> {

  try {
    // Ensure cache directory exists
    if (!existsSync(cacheDir)) {
      mkdirSync(cacheDir, { recursive: true })
    }

    // Download with fetch() - Bun handles redirects automatically
    const response = await fetch(downloadUrl, { redirect: "follow" })

    if (!response.ok) {
      throw new Error(\`HTTP ${response.status}: ${response.statusText}\`)
    }
    ensureCacheDir(cacheDir)

    const archivePath = join(cacheDir, assetName)
    const arrayBuffer = await response.arrayBuffer()
    await Bun.write(archivePath, arrayBuffer)
    await downloadArchive(downloadUrl, archivePath)

    debugLog(\`Downloaded archive to: ${archivePath}\`)

    // Extract based on file type
    if (ext === "tar.gz") {
      debugLog("Extracting tar.gz:", archivePath, "to", cacheDir)
      await extractTarGz(archivePath, cacheDir)
    } else {
      await extractZip(archivePath, cacheDir)
      await extractZipArchive(archivePath, cacheDir)
    }

    // Clean up archive
    if (existsSync(archivePath)) {
      unlinkSync(archivePath)
    }
    cleanupArchive(archivePath)

    // Set execute permission on Unix
    if (process.platform !== "win32" && existsSync(binaryPath)) {
      chmodSync(binaryPath, 0o755)
    }
    ensureExecutable(binaryPath)

    debugLog(\`Successfully downloaded binary to: ${binaryPath}\`)
    log(\`[oh-my-opencode] comment-checker binary ready.\`)
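After the refactor above, the downloader only keeps the platform-specific decision (which asset to fetch and whether it is a zip or a tar.gz) and delegates the mechanics to shared helpers. A plausible sketch of the selection the `ext === "tar.gz"` branch implies; the naming scheme here is an assumption, not the project's actual release layout:

```typescript
// Hypothetical asset-name builder; the real logic lives in the comment-checker module.
function getArchiveExt(platform: string): "zip" | "tar.gz" {
  // Windows releases are conventionally zipped; Unix releases ship as tar.gz.
  return platform === "win32" ? "zip" : "tar.gz"
}

function getAssetName(platform: string, arch: string, version: string): string {
  return `comment-checker-${version}-${platform}-${arch}.${getArchiveExt(platform)}`
}
```

Keeping this choice as a pure function makes the one genuinely platform-dependent piece of the download path trivially testable without touching the network.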
@@ -1,102 +0,0 @@
import { describe, expect, it, mock, beforeEach } from "bun:test"

// Mock dependencies before importing
const mockInjectHookMessage = mock(() => true)
mock.module("../../features/hook-message-injector", () => ({
  injectHookMessage: mockInjectHookMessage,
}))

mock.module("../../shared/logger", () => ({
  log: () => {},
}))

mock.module("../../shared/system-directive", () => ({
  createSystemDirective: (type: string) => \`[DIRECTIVE:${type}]\`,
  SystemDirectiveTypes: {
    TODO_CONTINUATION: "TODO CONTINUATION",
    RALPH_LOOP: "RALPH LOOP",
    BOULDER_CONTINUATION: "BOULDER CONTINUATION",
    DELEGATION_REQUIRED: "DELEGATION REQUIRED",
    SINGLE_TASK_ONLY: "SINGLE TASK ONLY",
    COMPACTION_CONTEXT: "COMPACTION CONTEXT",
    CONTEXT_WINDOW_MONITOR: "CONTEXT WINDOW MONITOR",
    PROMETHEUS_READ_ONLY: "PROMETHEUS READ-ONLY",
  },
}))

import { createCompactionContextInjector } from "./index"
import type { SummarizeContext } from "./index"

describe("createCompactionContextInjector", () => {
  beforeEach(() => {
    mockInjectHookMessage.mockClear()
  })

  describe("Agent Verification State preservation", () => {
    it("includes Agent Verification State section in compaction prompt", async () => {
      // given
      const injector = createCompactionContextInjector()
      const context: SummarizeContext = {
        sessionID: "test-session",
        providerID: "anthropic",
        modelID: "claude-sonnet-4-5",
        usageRatio: 0.85,
        directory: "/test/dir",
      }

      // when
      await injector(context)

      // then
      expect(mockInjectHookMessage).toHaveBeenCalledTimes(1)
      const calls = mockInjectHookMessage.mock.calls as unknown as [string, string, unknown][]
      const injectedPrompt = calls[0]?.[1] ?? ""
      expect(injectedPrompt).toContain("Agent Verification State")
      expect(injectedPrompt).toContain("Current Agent")
      expect(injectedPrompt).toContain("Verification Progress")
    })

    it("includes Momus-specific context for reviewer agents", async () => {
      // given
      const injector = createCompactionContextInjector()
      const context: SummarizeContext = {
        sessionID: "test-session",
        providerID: "anthropic",
        modelID: "claude-sonnet-4-5",
        usageRatio: 0.9,
        directory: "/test/dir",
      }

      // when
      await injector(context)

      // then
      const calls = mockInjectHookMessage.mock.calls as unknown as [string, string, unknown][]
      const injectedPrompt = calls[0]?.[1] ?? ""
      expect(injectedPrompt).toContain("Previous Rejections")
      expect(injectedPrompt).toContain("Acceptance Status")
      expect(injectedPrompt).toContain("reviewer agents")
    })

    it("preserves file verification progress in compaction prompt", async () => {
      // given
      const injector = createCompactionContextInjector()
      const context: SummarizeContext = {
        sessionID: "test-session",
        providerID: "anthropic",
        modelID: "claude-sonnet-4-5",
        usageRatio: 0.95,
        directory: "/test/dir",
      }

      // when
      await injector(context)

      // then
      const calls = mockInjectHookMessage.mock.calls as unknown as [string, string, unknown][]
      const injectedPrompt = calls[0]?.[1] ?? ""
      expect(injectedPrompt).toContain("Pending Verifications")
      expect(injectedPrompt).toContain("Files already verified")
    })
  })
})
@@ -1,76 +0,0 @@
import { injectHookMessage } from "../../features/hook-message-injector"
import { log } from "../../shared/logger"
import { createSystemDirective, SystemDirectiveTypes } from "../../shared/system-directive"

export interface SummarizeContext {
  sessionID: string
  providerID: string
  modelID: string
  usageRatio: number
  directory: string
}

const SUMMARIZE_CONTEXT_PROMPT = \`${createSystemDirective(SystemDirectiveTypes.COMPACTION_CONTEXT)}

When summarizing this session, you MUST include the following sections in your summary:

## 1. User Requests (As-Is)
- List all original user requests exactly as they were stated
- Preserve the user's exact wording and intent

## 2. Final Goal
- What the user ultimately wanted to achieve
- The end result or deliverable expected

## 3. Work Completed
- What has been done so far
- Files created/modified
- Features implemented
- Problems solved

## 4. Remaining Tasks
- What still needs to be done
- Pending items from the original request
- Follow-up tasks identified during the work

## 5. Active Working Context (For Seamless Continuation)
- **Files**: Paths of files currently being edited or frequently referenced
- **Code in Progress**: Key code snippets, function signatures, or data structures under active development
- **External References**: Documentation URLs, library APIs, or external resources being consulted
- **State & Variables**: Important variable names, configuration values, or runtime state relevant to ongoing work

## 6. MUST NOT Do (Critical Constraints)
- Things that were explicitly forbidden
- Approaches that failed and should not be retried
- User's explicit restrictions or preferences
- Anti-patterns identified during the session

## 7. Agent Verification State (Critical for Reviewers)
- **Current Agent**: What agent is running (momus, oracle, etc.)
- **Verification Progress**: Files already verified/validated
- **Pending Verifications**: Files still needing verification
- **Previous Rejections**: If reviewer agent, what was rejected and why
- **Acceptance Status**: Current state of review process

This section is CRITICAL for reviewer agents (momus, oracle) to maintain continuity.

This context is critical for maintaining continuity after compaction.
`

export function createCompactionContextInjector() {
  return async (ctx: SummarizeContext): Promise<void> => {
    log("[compaction-context-injector] injecting context", { sessionID: ctx.sessionID })

    const success = injectHookMessage(ctx.sessionID, SUMMARIZE_CONTEXT_PROMPT, {
      agent: "general",
      model: { providerID: ctx.providerID, modelID: ctx.modelID },
      path: { cwd: ctx.directory },
    })

    if (success) {
      log("[compaction-context-injector] context injected", { sessionID: ctx.sessionID })
    } else {
      log("[compaction-context-injector] injection failed", { sessionID: ctx.sessionID })
    }
  }
}
@@ -25,11 +25,6 @@ interface ToolExecuteBeforeOutput {
|
||||
args: unknown;
|
||||
}
|
||||
|
||||
interface BatchToolCall {
|
||||
tool: string;
|
||||
parameters: Record<string, unknown>;
|
||||
}
|
||||
|
||||
interface EventInput {
|
||||
event: {
|
||||
type: string;
|
||||
@@ -39,7 +34,6 @@ interface EventInput {
|
||||
|
||||
export function createDirectoryAgentsInjectorHook(ctx: PluginInput) {
|
||||
const sessionCaches = new Map<string, Set<string>>();
const pendingBatchReads = new Map<string, string[]>();
const truncator = createDynamicTruncator(ctx);

function getSessionCache(sessionID: string): Set<string> {
@@ -110,27 +104,6 @@ export function createDirectoryAgentsInjectorHook(ctx: PluginInput) {
saveInjectedPaths(sessionID, cache);
}

const toolExecuteBefore = async (
input: ToolExecuteInput,
output: ToolExecuteBeforeOutput,
) => {
if (input.tool.toLowerCase() !== "batch") return;

const args = output.args as { tool_calls?: BatchToolCall[] } | undefined;
if (!args?.tool_calls) return;

const readFilePaths: string[] = [];
for (const call of args.tool_calls) {
if (call.tool.toLowerCase() === "read" && call.parameters?.filePath) {
readFilePaths.push(call.parameters.filePath as string);
}
}

if (readFilePaths.length > 0) {
pendingBatchReads.set(input.callID, readFilePaths);
}
};

const toolExecuteAfter = async (
input: ToolExecuteInput,
output: ToolExecuteOutput,
@@ -141,16 +114,14 @@ export function createDirectoryAgentsInjectorHook(ctx: PluginInput) {
await processFilePathForInjection(output.title, input.sessionID, output);
return;
}
};

if (toolName === "batch") {
const filePaths = pendingBatchReads.get(input.callID);
if (filePaths) {
for (const filePath of filePaths) {
await processFilePathForInjection(filePath, input.sessionID, output);
}
pendingBatchReads.delete(input.callID);
}
}
const toolExecuteBefore = async (
input: ToolExecuteInput,
output: ToolExecuteBeforeOutput,
): Promise<void> => {
void input;
void output;
};

const eventHandler = async ({ event }: EventInput) => {

@@ -1,48 +1,8 @@
import {
existsSync,
mkdirSync,
readFileSync,
writeFileSync,
unlinkSync,
} from "node:fs";
import { join } from "node:path";
import { AGENTS_INJECTOR_STORAGE } from "./constants";
import type { InjectedPathsData } from "./types";
import { createInjectedPathsStorage } from "../../shared/session-injected-paths";

function getStoragePath(sessionID: string): string {
return join(AGENTS_INJECTOR_STORAGE, `${sessionID}.json`);
}

export function loadInjectedPaths(sessionID: string): Set<string> {
const filePath = getStoragePath(sessionID);
if (!existsSync(filePath)) return new Set();

try {
const content = readFileSync(filePath, "utf-8");
const data: InjectedPathsData = JSON.parse(content);
return new Set(data.injectedPaths);
} catch {
return new Set();
}
}

export function saveInjectedPaths(sessionID: string, paths: Set<string>): void {
if (!existsSync(AGENTS_INJECTOR_STORAGE)) {
mkdirSync(AGENTS_INJECTOR_STORAGE, { recursive: true });
}

const data: InjectedPathsData = {
sessionID,
injectedPaths: [...paths],
updatedAt: Date.now(),
};

writeFileSync(getStoragePath(sessionID), JSON.stringify(data, null, 2));
}

export function clearInjectedPaths(sessionID: string): void {
const filePath = getStoragePath(sessionID);
if (existsSync(filePath)) {
unlinkSync(filePath);
}
}
export const {
loadInjectedPaths,
saveInjectedPaths,
clearInjectedPaths,
} = createInjectedPathsStorage(AGENTS_INJECTOR_STORAGE);

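Both injectors' hand-rolled load/save/clear functions are replaced by destructured exports from a shared `createInjectedPathsStorage` factory. The factory itself lives in `src/shared/session-injected-paths` and is not shown in this diff; judging from the deleted code and the call sites, a minimal sketch could look like the following (shape assumed, not the actual implementation):

```typescript
import { existsSync, mkdirSync, readFileSync, writeFileSync, unlinkSync } from "node:fs";
import { join } from "node:path";

interface InjectedPathsData {
  sessionID: string;
  injectedPaths: string[];
  updatedAt: number;
}

// Hypothetical factory: captures the storage directory once and returns the
// same load/save/clear trio that each injector previously duplicated.
export function createInjectedPathsStorage(storageDir: string) {
  const getStoragePath = (sessionID: string) => join(storageDir, `${sessionID}.json`);

  function loadInjectedPaths(sessionID: string): Set<string> {
    const filePath = getStoragePath(sessionID);
    if (!existsSync(filePath)) return new Set();
    try {
      const data: InjectedPathsData = JSON.parse(readFileSync(filePath, "utf-8"));
      return new Set(data.injectedPaths);
    } catch {
      return new Set(); // corrupt file behaves like an empty cache
    }
  }

  function saveInjectedPaths(sessionID: string, paths: Set<string>): void {
    if (!existsSync(storageDir)) mkdirSync(storageDir, { recursive: true });
    const data: InjectedPathsData = {
      sessionID,
      injectedPaths: [...paths],
      updatedAt: Date.now(),
    };
    writeFileSync(getStoragePath(sessionID), JSON.stringify(data, null, 2));
  }

  function clearInjectedPaths(sessionID: string): void {
    const filePath = getStoragePath(sessionID);
    if (existsSync(filePath)) unlinkSync(filePath);
  }

  return { loadInjectedPaths, saveInjectedPaths, clearInjectedPaths };
}
```

Each module then destructures the trio while passing its own storage constant, so the public API of both storage files stays byte-for-byte identical for callers.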
@@ -1,5 +0,0 @@
export interface InjectedPathsData {
sessionID: string;
injectedPaths: string[];
updatedAt: number;
}
@@ -25,11 +25,6 @@ interface ToolExecuteBeforeOutput {
args: unknown;
}

interface BatchToolCall {
tool: string;
parameters: Record<string, unknown>;
}

interface EventInput {
event: {
type: string;
@@ -39,7 +34,6 @@ interface EventInput {

export function createDirectoryReadmeInjectorHook(ctx: PluginInput) {
const sessionCaches = new Map<string, Set<string>>();
const pendingBatchReads = new Map<string, string[]>();
const truncator = createDynamicTruncator(ctx);

function getSessionCache(sessionID: string): Set<string> {
@@ -105,27 +99,6 @@ export function createDirectoryReadmeInjectorHook(ctx: PluginInput) {
saveInjectedPaths(sessionID, cache);
}

const toolExecuteBefore = async (
input: ToolExecuteInput,
output: ToolExecuteBeforeOutput,
) => {
if (input.tool.toLowerCase() !== "batch") return;

const args = output.args as { tool_calls?: BatchToolCall[] } | undefined;
if (!args?.tool_calls) return;

const readFilePaths: string[] = [];
for (const call of args.tool_calls) {
if (call.tool.toLowerCase() === "read" && call.parameters?.filePath) {
readFilePaths.push(call.parameters.filePath as string);
}
}

if (readFilePaths.length > 0) {
pendingBatchReads.set(input.callID, readFilePaths);
}
};

const toolExecuteAfter = async (
input: ToolExecuteInput,
output: ToolExecuteOutput,
@@ -136,16 +109,14 @@ export function createDirectoryReadmeInjectorHook(ctx: PluginInput) {
await processFilePathForInjection(output.title, input.sessionID, output);
return;
}
};

if (toolName === "batch") {
const filePaths = pendingBatchReads.get(input.callID);
if (filePaths) {
for (const filePath of filePaths) {
await processFilePathForInjection(filePath, input.sessionID, output);
}
pendingBatchReads.delete(input.callID);
}
}
const toolExecuteBefore = async (
input: ToolExecuteInput,
output: ToolExecuteBeforeOutput,
): Promise<void> => {
void input;
void output;
};

const eventHandler = async ({ event }: EventInput) => {

@@ -1,48 +1,8 @@
import {
existsSync,
mkdirSync,
readFileSync,
writeFileSync,
unlinkSync,
} from "node:fs";
import { join } from "node:path";
import { README_INJECTOR_STORAGE } from "./constants";
import type { InjectedPathsData } from "./types";
import { createInjectedPathsStorage } from "../../shared/session-injected-paths";

function getStoragePath(sessionID: string): string {
return join(README_INJECTOR_STORAGE, `${sessionID}.json`);
}

export function loadInjectedPaths(sessionID: string): Set<string> {
const filePath = getStoragePath(sessionID);
if (!existsSync(filePath)) return new Set();

try {
const content = readFileSync(filePath, "utf-8");
const data: InjectedPathsData = JSON.parse(content);
return new Set(data.injectedPaths);
} catch {
return new Set();
}
}

export function saveInjectedPaths(sessionID: string, paths: Set<string>): void {
if (!existsSync(README_INJECTOR_STORAGE)) {
mkdirSync(README_INJECTOR_STORAGE, { recursive: true });
}

const data: InjectedPathsData = {
sessionID,
injectedPaths: [...paths],
updatedAt: Date.now(),
};

writeFileSync(getStoragePath(sessionID), JSON.stringify(data, null, 2));
}

export function clearInjectedPaths(sessionID: string): void {
const filePath = getStoragePath(sessionID);
if (existsSync(filePath)) {
unlinkSync(filePath);
}
}
export const {
loadInjectedPaths,
saveInjectedPaths,
clearInjectedPaths,
} = createInjectedPathsStorage(README_INJECTOR_STORAGE);

@@ -1,5 +0,0 @@
export interface InjectedPathsData {
sessionID: string;
injectedPaths: string[];
updatedAt: number;
}
@@ -9,7 +9,6 @@ export { createDirectoryReadmeInjectorHook } from "./directory-readme-injector";
export { createEmptyTaskResponseDetectorHook } from "./empty-task-response-detector";
export { createAnthropicContextWindowLimitRecoveryHook, type AnthropicContextWindowLimitRecoveryOptions } from "./anthropic-context-window-limit-recovery";

export { createCompactionContextInjector } from "./compaction-context-injector";
export { createThinkModeHook } from "./think-mode";
export { createClaudeCodeHooksHook } from "./claude-code-hooks";
export { createRulesInjectorHook } from "./rules-injector";
@@ -34,3 +33,4 @@ export { createAtlasHook } from "./atlas";
export { createDelegateTaskRetryHook } from "./delegate-task-retry";
export { createQuestionLabelTruncatorHook } from "./question-label-truncator";
export { createSubagentQuestionBlockerHook } from "./subagent-question-blocker";
export { createStopContinuationGuardHook, type StopContinuationGuard } from "./stop-continuation-guard";

@@ -21,6 +21,7 @@ describe("keyword-detector message transform", () => {
afterEach(() => {
logSpy?.mockRestore()
getMainSessionSpy?.mockRestore()
_resetForTesting()
})

function createMockPluginInput() {
@@ -101,7 +102,7 @@ describe("keyword-detector session filtering", () => {
let logSpy: ReturnType<typeof spyOn>

beforeEach(() => {
setMainSession(undefined)
_resetForTesting()
logCalls = []
logSpy = spyOn(sharedModule, "log").mockImplementation((msg: string, data?: unknown) => {
logCalls.push({ msg, data })
@@ -110,7 +111,7 @@ describe("keyword-detector session filtering", () => {

afterEach(() => {
logSpy?.mockRestore()
setMainSession(undefined)
_resetForTesting()
})

function createMockPluginInput(options: { toastCalls?: string[] } = {}) {
@@ -246,7 +247,7 @@ describe("keyword-detector word boundary", () => {
let logSpy: ReturnType<typeof spyOn>

beforeEach(() => {
setMainSession(undefined)
_resetForTesting()
logCalls = []
logSpy = spyOn(sharedModule, "log").mockImplementation((msg: string, data?: unknown) => {
logCalls.push({ msg, data })
@@ -255,7 +256,7 @@ describe("keyword-detector word boundary", () => {

afterEach(() => {
logSpy?.mockRestore()
setMainSession(undefined)
_resetForTesting()
})

function createMockPluginInput(options: { toastCalls?: string[] } = {}) {
@@ -343,7 +344,7 @@ describe("keyword-detector system-reminder filtering", () => {
let logSpy: ReturnType<typeof spyOn>

beforeEach(() => {
setMainSession(undefined)
_resetForTesting()
logCalls = []
logSpy = spyOn(sharedModule, "log").mockImplementation((msg: string, data?: unknown) => {
logCalls.push({ msg, data })
@@ -352,7 +353,7 @@ describe("keyword-detector system-reminder filtering", () => {

afterEach(() => {
logSpy?.mockRestore()
setMainSession(undefined)
_resetForTesting()
})

function createMockPluginInput() {
@@ -534,7 +535,7 @@ describe("keyword-detector agent-specific ultrawork messages", () => {
let logSpy: ReturnType<typeof spyOn>

beforeEach(() => {
setMainSession(undefined)
_resetForTesting()
logCalls = []
logSpy = spyOn(sharedModule, "log").mockImplementation((msg: string, data?: unknown) => {
logCalls.push({ msg, data })
@@ -543,7 +544,7 @@ describe("keyword-detector agent-specific ultrawork messages", () => {

afterEach(() => {
logSpy?.mockRestore()
setMainSession(undefined)
_resetForTesting()
})

function createMockPluginInput() {

@@ -17,6 +17,7 @@ export const PROJECT_RULE_SUBDIRS: [string, string][] = [
[".github", "instructions"],
[".cursor", "rules"],
[".claude", "rules"],
[".sisyphus", "rules"],
];

export const PROJECT_RULE_FILES: string[] = [

@@ -33,11 +33,6 @@ interface ToolExecuteBeforeOutput {
args: unknown;
}

interface BatchToolCall {
tool: string;
parameters: Record<string, unknown>;
}

interface EventInput {
event: {
type: string;
@@ -59,7 +54,6 @@ export function createRulesInjectorHook(ctx: PluginInput) {
string,
{ contentHashes: Set<string>; realPaths: Set<string> }
>();
const pendingBatchFiles = new Map<string, string[]>();
const truncator = createDynamicTruncator(ctx);

function getSessionCache(sessionID: string): {
@@ -143,35 +137,6 @@ export function createRulesInjectorHook(ctx: PluginInput) {
saveInjectedRules(sessionID, cache);
}

function extractFilePathFromToolCall(call: BatchToolCall): string | null {
const params = call.parameters;
return (params?.filePath ?? params?.file_path ?? params?.path) as string | null;
}

const toolExecuteBefore = async (
input: ToolExecuteInput,
output: ToolExecuteBeforeOutput
) => {
if (input.tool.toLowerCase() !== "batch") return;

const args = output.args as { tool_calls?: BatchToolCall[] } | undefined;
if (!args?.tool_calls) return;

const filePaths: string[] = [];
for (const call of args.tool_calls) {
if (TRACKED_TOOLS.includes(call.tool.toLowerCase())) {
const filePath = extractFilePathFromToolCall(call);
if (filePath) {
filePaths.push(filePath);
}
}
}

if (filePaths.length > 0) {
pendingBatchFiles.set(input.callID, filePaths);
}
};

const toolExecuteAfter = async (
input: ToolExecuteInput,
output: ToolExecuteOutput
@@ -182,16 +147,14 @@ export function createRulesInjectorHook(ctx: PluginInput) {
await processFilePathForInjection(output.title, input.sessionID, output);
return;
}
};

if (toolName === "batch") {
const filePaths = pendingBatchFiles.get(input.callID);
if (filePaths) {
for (const filePath of filePaths) {
await processFilePathForInjection(filePath, input.sessionID, output);
}
pendingBatchFiles.delete(input.callID);
}
}
const toolExecuteBefore = async (
input: ToolExecuteInput,
output: ToolExecuteBeforeOutput
): Promise<void> => {
void input;
void output;
};

const eventHandler = async ({ event }: EventInput) => {

@@ -2,24 +2,6 @@ import { spawn } from "bun"

type Platform = "darwin" | "linux" | "win32" | "unsupported"

let notifySendPath: string | null = null
let notifySendPromise: Promise<string | null> | null = null

let osascriptPath: string | null = null
let osascriptPromise: Promise<string | null> | null = null

let powershellPath: string | null = null
let powershellPromise: Promise<string | null> | null = null

let afplayPath: string | null = null
let afplayPromise: Promise<string | null> | null = null

let paplayPath: string | null = null
let paplayPromise: Promise<string | null> | null = null

let aplayPath: string | null = null
let aplayPromise: Promise<string | null> | null = null

async function findCommand(commandName: string): Promise<string | null> {
const isWindows = process.platform === "win32"
const cmd = isWindows ? "where" : "which"
@@ -48,83 +30,30 @@ async function findCommand(commandName: string): Promise<string | null> {
}
}

export async function getNotifySendPath(): Promise<string | null> {
if (notifySendPath !== null) return notifySendPath
if (notifySendPromise) return notifySendPromise
function createCommandFinder(commandName: string): () => Promise<string | null> {
let cachedPath: string | null = null
let pending: Promise<string | null> | null = null

notifySendPromise = (async () => {
const path = await findCommand("notify-send")
notifySendPath = path
return path
})()
return async () => {
if (cachedPath !== null) return cachedPath
if (pending) return pending

return notifySendPromise
pending = (async () => {
const path = await findCommand(commandName)
cachedPath = path
return path
})()

return pending
}
}

export async function getOsascriptPath(): Promise<string | null> {
if (osascriptPath !== null) return osascriptPath
if (osascriptPromise) return osascriptPromise

osascriptPromise = (async () => {
const path = await findCommand("osascript")
osascriptPath = path
return path
})()

return osascriptPromise
}

export async function getPowershellPath(): Promise<string | null> {
if (powershellPath !== null) return powershellPath
if (powershellPromise) return powershellPromise

powershellPromise = (async () => {
const path = await findCommand("powershell")
powershellPath = path
return path
})()

return powershellPromise
}

export async function getAfplayPath(): Promise<string | null> {
if (afplayPath !== null) return afplayPath
if (afplayPromise) return afplayPromise

afplayPromise = (async () => {
const path = await findCommand("afplay")
afplayPath = path
return path
})()

return afplayPromise
}

export async function getPaplayPath(): Promise<string | null> {
if (paplayPath !== null) return paplayPath
if (paplayPromise) return paplayPromise

paplayPromise = (async () => {
const path = await findCommand("paplay")
paplayPath = path
return path
})()

return paplayPromise
}

export async function getAplayPath(): Promise<string | null> {
if (aplayPath !== null) return aplayPath
if (aplayPromise) return aplayPromise

aplayPromise = (async () => {
const path = await findCommand("aplay")
aplayPath = path
return path
})()

return aplayPromise
}
export const getNotifySendPath = createCommandFinder("notify-send")
export const getOsascriptPath = createCommandFinder("osascript")
export const getPowershellPath = createCommandFinder("powershell")
export const getAfplayPath = createCommandFinder("afplay")
export const getPaplayPath = createCommandFinder("paplay")
export const getAplayPath = createCommandFinder("aplay")

export function startBackgroundCheck(platform: Platform): void {
if (platform === "darwin") {

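The six copy-pasted getter functions collapse into a single `createCommandFinder` factory. The underlying pattern is a memoized, single-flight async lookup: cache the resolved value, and while a lookup is in flight, hand every caller the same pending promise so the probe runs at most once. A generic, self-contained sketch of that pattern (simplified; the real code wraps the platform-specific `findCommand`):

```typescript
// Generic memoized single-flight async lookup, the pattern behind createCommandFinder.
function memoizeAsync<T>(compute: () => Promise<T>): () => Promise<T> {
  let cached: T | null = null;
  let pending: Promise<T> | null = null;

  return async () => {
    if (cached !== null) return cached; // already resolved once
    if (pending) return pending;        // lookup in flight: share it

    pending = (async () => {
      const value = await compute();
      cached = value;                   // cache for every later call
      return value;
    })();
    return pending;
  };
}

// Example: an expensive probe that should only ever run once,
// even when several callers race on the first lookup.
let probeCount = 0;
const findFakeCommand = memoizeAsync(async () => {
  probeCount++;
  return "/usr/bin/fake"; // stand-in for a `which`/`where` result
});
```

Note that the refactor keeps the original semantics: a `null` result (command not found) is also served from the still-set pending promise, so the filesystem probe is never repeated.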
@@ -45,7 +45,7 @@ describe("session-notification", () => {
afterEach(() => {
// #given - cleanup after each test
subagentSessions.clear()
setMainSession(undefined)
_resetForTesting()
})

test("should not trigger notification for subagent session", async () => {

@@ -71,10 +71,7 @@ export function createStartWorkHook(ctx: PluginInput) {
sessionID: input.sessionID,
})

const currentAgent = getSessionAgent(input.sessionID)
if (!currentAgent) {
updateSessionAgent(input.sessionID, "atlas")
}
updateSessionAgent(input.sessionID, "atlas") // Always switch: fixes #1298

const existingState = readBoulderState(ctx.directory)
const sessionId = input.sessionID

src/hooks/stop-continuation-guard/index.test.ts (new file, 144 lines)
@@ -0,0 +1,144 @@
import { describe, expect, test } from "bun:test"
import { createStopContinuationGuardHook } from "./index"

describe("stop-continuation-guard", () => {
function createMockPluginInput() {
return {
client: {
tui: {
showToast: async () => ({}),
},
},
directory: "/tmp/test",
} as never
}

test("should mark session as stopped", () => {
// #given - a guard hook with no stopped sessions
const guard = createStopContinuationGuardHook(createMockPluginInput())
const sessionID = "test-session-1"

// #when - we stop continuation for the session
guard.stop(sessionID)

// #then - session should be marked as stopped
expect(guard.isStopped(sessionID)).toBe(true)
})

test("should return false for non-stopped sessions", () => {
// #given - a guard hook with no stopped sessions
const guard = createStopContinuationGuardHook(createMockPluginInput())

// #when - we check a session that was never stopped

// #then - it should return false
expect(guard.isStopped("non-existent-session")).toBe(false)
})

test("should clear stopped state for a session", () => {
// #given - a session that was stopped
const guard = createStopContinuationGuardHook(createMockPluginInput())
const sessionID = "test-session-2"
guard.stop(sessionID)

// #when - we clear the session
guard.clear(sessionID)

// #then - session should no longer be stopped
expect(guard.isStopped(sessionID)).toBe(false)
})

test("should handle multiple sessions independently", () => {
// #given - multiple sessions with different stop states
const guard = createStopContinuationGuardHook(createMockPluginInput())
const session1 = "session-1"
const session2 = "session-2"
const session3 = "session-3"

// #when - we stop some sessions but not others
guard.stop(session1)
guard.stop(session2)

// #then - each session has its own state
expect(guard.isStopped(session1)).toBe(true)
expect(guard.isStopped(session2)).toBe(true)
expect(guard.isStopped(session3)).toBe(false)
})

test("should clear session on session.deleted event", async () => {
// #given - a session that was stopped
const guard = createStopContinuationGuardHook(createMockPluginInput())
const sessionID = "test-session-3"
guard.stop(sessionID)

// #when - session is deleted
await guard.event({
event: {
type: "session.deleted",
properties: { info: { id: sessionID } },
},
})

// #then - session should no longer be stopped (cleaned up)
expect(guard.isStopped(sessionID)).toBe(false)
})

test("should not affect other sessions on session.deleted", async () => {
// #given - multiple stopped sessions
const guard = createStopContinuationGuardHook(createMockPluginInput())
const session1 = "session-keep"
const session2 = "session-delete"
guard.stop(session1)
guard.stop(session2)

// #when - one session is deleted
await guard.event({
event: {
type: "session.deleted",
properties: { info: { id: session2 } },
},
})

// #then - other session should remain stopped
expect(guard.isStopped(session1)).toBe(true)
expect(guard.isStopped(session2)).toBe(false)
})

test("should clear stopped state on new user message (chat.message)", async () => {
// #given - a session that was stopped
const guard = createStopContinuationGuardHook(createMockPluginInput())
const sessionID = "test-session-4"
guard.stop(sessionID)
expect(guard.isStopped(sessionID)).toBe(true)

// #when - user sends a new message
await guard["chat.message"]({ sessionID })

// #then - stop state should be cleared (one-time only)
expect(guard.isStopped(sessionID)).toBe(false)
})

test("should not affect non-stopped sessions on chat.message", async () => {
// #given - a session that was never stopped
const guard = createStopContinuationGuardHook(createMockPluginInput())
const sessionID = "test-session-5"

// #when - user sends a message (session was never stopped)
await guard["chat.message"]({ sessionID })

// #then - should not throw and session remains not stopped
expect(guard.isStopped(sessionID)).toBe(false)
})

test("should handle undefined sessionID in chat.message", async () => {
// #given - a guard with a stopped session
const guard = createStopContinuationGuardHook(createMockPluginInput())
guard.stop("some-session")

// #when - chat.message is called without sessionID
await guard["chat.message"]({ sessionID: undefined })

// #then - should not throw and stopped session remains stopped
expect(guard.isStopped("some-session")).toBe(true)
})
})

src/hooks/stop-continuation-guard/index.ts (new file, 67 lines)
@@ -0,0 +1,67 @@
import type { PluginInput } from "@opencode-ai/plugin"
import { log } from "../../shared/logger"

const HOOK_NAME = "stop-continuation-guard"

export interface StopContinuationGuard {
event: (input: { event: { type: string; properties?: unknown } }) => Promise<void>
"chat.message": (input: { sessionID?: string }) => Promise<void>
stop: (sessionID: string) => void
isStopped: (sessionID: string) => boolean
clear: (sessionID: string) => void
}

export function createStopContinuationGuardHook(
_ctx: PluginInput
): StopContinuationGuard {
const stoppedSessions = new Set<string>()

const stop = (sessionID: string): void => {
stoppedSessions.add(sessionID)
log(`[${HOOK_NAME}] Continuation stopped for session`, { sessionID })
}

const isStopped = (sessionID: string): boolean => {
return stoppedSessions.has(sessionID)
}

const clear = (sessionID: string): void => {
stoppedSessions.delete(sessionID)
log(`[${HOOK_NAME}] Continuation guard cleared for session`, { sessionID })
}

const event = async ({
event,
}: {
event: { type: string; properties?: unknown }
}): Promise<void> => {
const props = event.properties as Record<string, unknown> | undefined

if (event.type === "session.deleted") {
const sessionInfo = props?.info as { id?: string } | undefined
if (sessionInfo?.id) {
clear(sessionInfo.id)
log(`[${HOOK_NAME}] Session deleted: cleaned up`, { sessionID: sessionInfo.id })
}
}
}

const chatMessage = async ({
sessionID,
}: {
sessionID?: string
}): Promise<void> => {
if (sessionID && stoppedSessions.has(sessionID)) {
clear(sessionID)
log(`[${HOOK_NAME}] Cleared stop state on new user message`, { sessionID })
}
}

return {
event,
"chat.message": chatMessage,
stop,
isStopped,
clear,
}
}

@@ -458,4 +458,71 @@ describe("think-mode switcher", () => {
})
})
})

describe("Z.AI GLM-4.7 provider support", () => {
describe("getThinkingConfig for zai-coding-plan", () => {
it("should return thinking config for glm-4.7", () => {
// #given zai-coding-plan provider with glm-4.7 model
const config = getThinkingConfig("zai-coding-plan", "glm-4.7")

// #then should return zai-coding-plan thinking config
expect(config).not.toBeNull()
expect(config?.providerOptions).toBeDefined()
const zaiOptions = (config?.providerOptions as Record<string, unknown>)?.[
"zai-coding-plan"
] as Record<string, unknown>
expect(zaiOptions?.extra_body).toBeDefined()
const extraBody = zaiOptions?.extra_body as Record<string, unknown>
expect(extraBody?.thinking).toBeDefined()
expect((extraBody?.thinking as Record<string, unknown>)?.type).toBe("enabled")
expect((extraBody?.thinking as Record<string, unknown>)?.clear_thinking).toBe(false)
})

it("should return thinking config for glm-4.6v (multimodal)", () => {
// #given zai-coding-plan provider with glm-4.6v model
const config = getThinkingConfig("zai-coding-plan", "glm-4.6v")

// #then should return zai-coding-plan thinking config
expect(config).not.toBeNull()
expect(config?.providerOptions).toBeDefined()
})

it("should return null for non-GLM models on zai-coding-plan", () => {
// #given zai-coding-plan provider with unknown model
const config = getThinkingConfig("zai-coding-plan", "some-other-model")

// #then should return null
expect(config).toBeNull()
})
})

describe("HIGH_VARIANT_MAP for GLM", () => {
it("should NOT have high variant for glm-4.7 (thinking enabled by default)", () => {
// #given glm-4.7 model
const variant = getHighVariant("glm-4.7")

// #then should return null (no high variant needed)
expect(variant).toBeNull()
})

it("should NOT have high variant for glm-4.6v", () => {
// #given glm-4.6v model
const variant = getHighVariant("glm-4.6v")

// #then should return null
expect(variant).toBeNull()
})
})
})

describe("THINKING_CONFIGS structure for zai-coding-plan", () => {
it("should have correct structure for zai-coding-plan", () => {
const config = THINKING_CONFIGS["zai-coding-plan"]
expect(config.providerOptions).toBeDefined()
const zaiOptions = (config.providerOptions as Record<string, unknown>)?.[
"zai-coding-plan"
] as Record<string, unknown>
expect(zaiOptions?.extra_body).toBeDefined()
})
})
})

@@ -149,6 +149,18 @@ export const THINKING_CONFIGS = {
openai: {
reasoning_effort: "high",
},
"zai-coding-plan": {
providerOptions: {
"zai-coding-plan": {
extra_body: {
thinking: {
type: "enabled",
clear_thinking: false,
},
},
},
},
},
} as const satisfies Record<string, Record<string, unknown>>

const THINKING_CAPABLE_MODELS = {
@@ -157,6 +169,7 @@ const THINKING_CAPABLE_MODELS = {
google: ["gemini-2", "gemini-3"],
"google-vertex": ["gemini-2", "gemini-3"],
openai: ["gpt-5", "o1", "o3"],
"zai-coding-plan": ["glm"],
} as const satisfies Record<string, readonly string[]>

export function getHighVariant(modelID: string): string | null {

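The tests above pin down the contract but `getThinkingConfig` itself is outside the diff; a plausible reading is that it prefix-matches the model ID against the provider's entry in `THINKING_CAPABLE_MODELS` before handing back the provider's block from `THINKING_CONFIGS`. A self-contained sketch of that assumed logic, using trimmed copies of the two tables:

```typescript
// Trimmed-down copies of the two tables from the diff.
const THINKING_CONFIGS: Record<string, Record<string, unknown>> = {
  openai: { reasoning_effort: "high" },
  "zai-coding-plan": {
    providerOptions: {
      "zai-coding-plan": {
        extra_body: { thinking: { type: "enabled", clear_thinking: false } },
      },
    },
  },
}

const THINKING_CAPABLE_MODELS: Record<string, readonly string[]> = {
  openai: ["gpt-5", "o1", "o3"],
  "zai-coding-plan": ["glm"],
}

// Assumed behavior (the real getThinkingConfig may differ): a model qualifies
// when its ID starts with one of the provider's capability prefixes, so the
// single "glm" entry covers both "glm-4.7" and "glm-4.6v".
function getThinkingConfig(providerID: string, modelID: string) {
  const prefixes = THINKING_CAPABLE_MODELS[providerID]
  if (!prefixes?.some((p) => modelID.startsWith(p))) return null
  return THINKING_CONFIGS[providerID] ?? null
}
```

This matches why the new `"zai-coding-plan": ["glm"]` line is the only capability change needed for the three test cases.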
@@ -1178,4 +1178,68 @@ describe("todo-continuation-enforcer", () => {
|
||||
// #then - continuation injected (no agents to skip)
|
||||
expect(promptCalls.length).toBe(1)
|
||||
})
|
||||
|
||||
test("should not inject when isContinuationStopped returns true", async () => {
|
||||
// #given - session with continuation stopped
|
||||
const sessionID = "main-stopped"
|
||||
setMainSession(sessionID)
|
||||
|
||||
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {
|
||||
isContinuationStopped: (id) => id === sessionID,
|
||||
})
|
||||
|
||||
// #when - session goes idle
|
||||
await hook.handler({
|
||||
event: { type: "session.idle", properties: { sessionID } },
|
||||
})
|
||||
|
||||
await fakeTimers.advanceBy(3000)
|
||||
|
||||
// #then - no continuation injected (stopped flag is true)
|
||||
expect(promptCalls).toHaveLength(0)
|
||||
})
|
||||
|
||||
test("should inject when isContinuationStopped returns false", async () => {
|
||||
// #given - session with continuation not stopped
|
||||
const sessionID = "main-not-stopped"
|
||||
setMainSession(sessionID)
|
||||
|
||||
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {
|
||||
isContinuationStopped: () => false,
|
||||
})
|
||||
|
||||
// #when - session goes idle
|
||||
await hook.handler({
|
||||
event: { type: "session.idle", properties: { sessionID } },
|
||||
})
|
||||
|
||||
await fakeTimers.advanceBy(3000)
|
||||
|
||||
// #then - continuation injected (stopped flag is false)
|
||||
expect(promptCalls.length).toBe(1)
|
||||
})
|
||||
|
||||
test("should cancel all countdowns via cancelAllCountdowns", async () => {
|
||||
// #given - multiple sessions with running countdowns
|
||||
const session1 = "main-cancel-all-1"
|
||||
const session2 = "main-cancel-all-2"
|
||||
setMainSession(session1)
|
||||
|
||||
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {})
|
||||
|
||||
// #when - first session goes idle
|
||||
await hook.handler({
|
||||
event: { type: "session.idle", properties: { sessionID: session1 } },
|
||||
})
|
||||
await fakeTimers.advanceBy(500)
|
||||
|
||||
// #when - cancel all countdowns
|
||||
hook.cancelAllCountdowns()
|
||||
|
||||
// #when - advance past countdown time
|
||||
await fakeTimers.advanceBy(3000)
|
||||
|
||||
// #then - no continuation injected (all countdowns cancelled)
|
||||
expect(promptCalls).toHaveLength(0)
|
||||
})
|
||||
})
|
||||
|
||||
```diff
@@ -18,12 +18,14 @@ const DEFAULT_SKIP_AGENTS = ["prometheus", "compaction"]
 export interface TodoContinuationEnforcerOptions {
   backgroundManager?: BackgroundManager
   skipAgents?: string[]
+  isContinuationStopped?: (sessionID: string) => boolean
 }
 
 export interface TodoContinuationEnforcer {
   handler: (input: { event: { type: string; properties?: unknown } }) => Promise<void>
   markRecovering: (sessionID: string) => void
   markRecoveryComplete: (sessionID: string) => void
+  cancelAllCountdowns: () => void
 }
 
 interface Todo {
@@ -95,7 +97,7 @@ export function createTodoContinuationEnforcer(
   ctx: PluginInput,
   options: TodoContinuationEnforcerOptions = {}
 ): TodoContinuationEnforcer {
-  const { backgroundManager, skipAgents = DEFAULT_SKIP_AGENTS } = options
+  const { backgroundManager, skipAgents = DEFAULT_SKIP_AGENTS, isContinuationStopped } = options
   const sessions = new Map<string, SessionState>()
 
   function getState(sessionID: string): SessionState {
@@ -420,6 +422,11 @@ export function createTodoContinuationEnforcer(
       return
     }
 
+    if (isContinuationStopped?.(sessionID)) {
+      log(`[${HOOK_NAME}] Skipped: continuation stopped for session`, { sessionID })
+      return
+    }
+
     startCountdown(sessionID, incompleteCount, todos.length, resolvedInfo)
     return
   }
@@ -485,9 +492,17 @@ export function createTodoContinuationEnforcer(
     }
   }
 
+  const cancelAllCountdowns = (): void => {
+    for (const sessionID of sessions.keys()) {
+      cancelCountdown(sessionID)
+    }
+    log(`[${HOOK_NAME}] All countdowns cancelled`)
+  }
+
   return {
     handler,
     markRecovering,
     markRecoveryComplete,
+    cancelAllCountdowns,
   }
 }
```
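The guard added to the enforcer above can be exercised standalone. A minimal sketch, assuming only the `isContinuationStopped` option shape from the diff; the enforcer itself is simplified to an in-memory closure:

```typescript
// Sketch of the isContinuationStopped guard: the enforcer skips injecting a
// continuation prompt when the guard predicate reports the session as stopped.
type EnforcerOptions = { isContinuationStopped?: (sessionID: string) => boolean }

function createEnforcer(options: EnforcerOptions = {}) {
  const { isContinuationStopped } = options
  const injected: string[] = []
  return {
    onIdle(sessionID: string) {
      // Same check as the diff: optional predicate, short-circuit on stop.
      if (isContinuationStopped?.(sessionID)) return
      injected.push(sessionID)
    },
    injected,
  }
}

const enforcer = createEnforcer({ isContinuationStopped: (id) => id === "stopped" })
enforcer.onIdle("stopped")
enforcer.onIdle("active")
console.log(enforcer.injected) // only "active" was injected
```

The optional-chained call (`isContinuationStopped?.(sessionID)`) keeps the guard opt-in: callers that pass no predicate get the old always-inject behavior.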
```diff
@@ -1,6 +1,4 @@
 import { describe, expect, it } from "bun:test"
-import { includesCaseInsensitive } from "./shared"
 
 /**
  * Tests for conditional tool registration logic in index.ts
  *
@@ -13,8 +11,10 @@ describe("look_at tool conditional registration", () => {
   // #when checking if agent is enabled
   // #then should return false (disabled)
   it("returns false when multimodal-looker is disabled (exact case)", () => {
-    const disabledAgents = ["multimodal-looker"]
-    const isEnabled = !includesCaseInsensitive(disabledAgents, "multimodal-looker")
+    const disabledAgents: string[] = ["multimodal-looker"]
+    const isEnabled = !disabledAgents.some(
+      (agent) => agent.toLowerCase() === "multimodal-looker"
+    )
     expect(isEnabled).toBe(false)
   })
 
@@ -22,8 +22,10 @@ describe("look_at tool conditional registration", () => {
   // #when checking if agent is enabled
   // #then should return false (case-insensitive match)
   it("returns false when multimodal-looker is disabled (case-insensitive)", () => {
-    const disabledAgents = ["Multimodal-Looker"]
-    const isEnabled = !includesCaseInsensitive(disabledAgents, "multimodal-looker")
+    const disabledAgents: string[] = ["Multimodal-Looker"]
+    const isEnabled = !disabledAgents.some(
+      (agent) => agent.toLowerCase() === "multimodal-looker"
+    )
     expect(isEnabled).toBe(false)
   })
 
@@ -31,8 +33,10 @@ describe("look_at tool conditional registration", () => {
   // #when checking if agent is enabled
   // #then should return true (enabled)
   it("returns true when multimodal-looker is not disabled", () => {
-    const disabledAgents = ["oracle", "librarian"]
-    const isEnabled = !includesCaseInsensitive(disabledAgents, "multimodal-looker")
+    const disabledAgents: string[] = ["oracle", "librarian"]
+    const isEnabled = !disabledAgents.some(
+      (agent) => agent.toLowerCase() === "multimodal-looker"
+    )
     expect(isEnabled).toBe(true)
   })
 
@@ -41,7 +45,9 @@ describe("look_at tool conditional registration", () => {
   // #then should return true (enabled by default)
   it("returns true when disabled_agents is empty", () => {
     const disabledAgents: string[] = []
-    const isEnabled = !includesCaseInsensitive(disabledAgents, "multimodal-looker")
+    const isEnabled = !disabledAgents.some(
+      (agent) => agent.toLowerCase() === "multimodal-looker"
+    )
     expect(isEnabled).toBe(true)
   })
 
@@ -49,8 +55,11 @@ describe("look_at tool conditional registration", () => {
   // #when checking if agent is enabled
   // #then should return true (enabled by default)
   it("returns true when disabled_agents is undefined (fallback to empty)", () => {
-    const disabledAgents = undefined
-    const isEnabled = !includesCaseInsensitive(disabledAgents ?? [], "multimodal-looker")
+    const disabledAgents: string[] | undefined = undefined
+    const list: string[] = disabledAgents ?? []
+    const isEnabled = !list.some(
+      (agent) => agent.toLowerCase() === "multimodal-looker"
+    )
     expect(isEnabled).toBe(true)
   })
 })
```
src/index.ts: 65 changes
```diff
@@ -12,8 +12,6 @@ import {
   createThinkModeHook,
   createClaudeCodeHooksHook,
   createAnthropicContextWindowLimitRecoveryHook,
-  createCompactionContextInjector,
   createRulesInjectorHook,
   createBackgroundNotificationHook,
   createAutoUpdateCheckerHook,
@@ -35,6 +33,7 @@ import {
   createSisyphusJuniorNotepadHook,
   createQuestionLabelTruncatorHook,
   createSubagentQuestionBlockerHook,
+  createStopContinuationGuardHook,
 } from "./hooks";
 import {
   contextCollector,
@@ -77,10 +76,11 @@ import { BackgroundManager } from "./features/background-agent";
 import { SkillMcpManager } from "./features/skill-mcp-manager";
 import { initTaskToastManager } from "./features/task-toast-manager";
 import { TmuxSessionManager } from "./features/tmux-subagent";
+import { clearBoulderState } from "./features/boulder-state";
 import { type HookName } from "./config";
-import { log, detectExternalNotificationPlugin, getNotificationConflictWarning, resetMessageCursor, includesCaseInsensitive, hasConnectedProvidersCache, getOpenCodeVersion, isOpenCodeVersionAtLeast, OPENCODE_NATIVE_AGENTS_INJECTION_VERSION } from "./shared";
+import { log, detectExternalNotificationPlugin, getNotificationConflictWarning, resetMessageCursor, hasConnectedProvidersCache, getOpenCodeVersion, isOpenCodeVersionAtLeast, OPENCODE_NATIVE_AGENTS_INJECTION_VERSION } from "./shared";
 import { loadPluginConfig } from "./plugin-config";
-import { createModelCacheState, getModelLimit } from "./plugin-state";
+import { createModelCacheState } from "./plugin-state";
 import { createConfigHandler } from "./plugin-handlers";
 
 const OhMyOpenCodePlugin: Plugin = async (ctx) => {
@@ -174,9 +174,6 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
         experimental: pluginConfig.experimental,
       })
     : null;
-  const compactionContextInjector = isHookEnabled("compaction-context-injector")
-    ? createCompactionContextInjector()
-    : undefined;
   const rulesInjector = isHookEnabled("rules-injector")
     ? createRulesInjectorHook(ctx)
     : null;
@@ -277,8 +274,15 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
 
   initTaskToastManager(ctx.client);
 
+  const stopContinuationGuard = isHookEnabled("stop-continuation-guard")
+    ? createStopContinuationGuardHook(ctx)
+    : null;
+
   const todoContinuationEnforcer = isHookEnabled("todo-continuation-enforcer")
-    ? createTodoContinuationEnforcer(ctx, { backgroundManager })
+    ? createTodoContinuationEnforcer(ctx, {
+        backgroundManager,
+        isContinuationStopped: stopContinuationGuard?.isStopped,
+      })
     : null;
 
   if (sessionRecovery && todoContinuationEnforcer) {
@@ -294,9 +298,8 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
   const backgroundTools = createBackgroundTools(backgroundManager, ctx.client);
 
   const callOmoAgent = createCallOmoAgent(ctx, backgroundManager);
-  const isMultimodalLookerEnabled = !includesCaseInsensitive(
-    pluginConfig.disabled_agents ?? [],
-    "multimodal-looker"
+  const isMultimodalLookerEnabled = !(pluginConfig.disabled_agents ?? []).some(
+    (agent) => agent.toLowerCase() === "multimodal-looker"
   );
   const lookAt = isMultimodalLookerEnabled ? createLookAt(ctx) : null;
   const browserProvider = pluginConfig.browser_automation_engine?.provider ?? "playwright";
@@ -420,6 +423,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
         }
       }
 
+      await stopContinuationGuard?.["chat.message"]?.(input);
       await keywordDetector?.["chat.message"]?.(input, output);
       await claudeCodeHooks["chat.message"]?.(input, output);
       await autoSlashCommand?.["chat.message"]?.(input, output);
@@ -521,6 +525,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
       await categorySkillReminder?.event(input);
       await interactiveBashSession?.event(input);
       await ralphLoop?.event(input);
+      await stopContinuationGuard?.event(input);
       await atlasHook?.handler(input);
 
       const { event } = input;
@@ -581,7 +586,12 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
           const recovered =
             await sessionRecovery.handleSessionRecovery(messageInfo);
 
-          if (recovered && sessionID && sessionID === getMainSessionID()) {
+          if (
+            recovered &&
+            sessionID &&
+            sessionID === getMainSessionID() &&
+            !stopContinuationGuard?.isStopped(sessionID)
+          ) {
             await ctx.client.session
               .prompt({
                 path: { id: sessionID },
@@ -610,9 +620,8 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
       if (input.tool === "task") {
         const args = output.args as Record<string, unknown>;
         const subagentType = args.subagent_type as string;
-        const isExploreOrLibrarian = includesCaseInsensitive(
-          ["explore", "librarian"],
-          subagentType ?? ""
+        const isExploreOrLibrarian = ["explore", "librarian"].some(
+          (name) => name.toLowerCase() === (subagentType ?? "").toLowerCase()
         );
 
         args.tools = {
@@ -664,14 +673,28 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
           );
 
          ralphLoop.startLoop(sessionID, prompt, {
-          ultrawork: true,
-          maxIterations: maxIterMatch
-            ? parseInt(maxIterMatch[1], 10)
-            : undefined,
-          completionPromise: promiseMatch?.[1],
-        });
+            ultrawork: true,
+            maxIterations: maxIterMatch
+              ? parseInt(maxIterMatch[1], 10)
+              : undefined,
+            completionPromise: promiseMatch?.[1],
+          });
         }
       }
 
+      if (input.tool === "slashcommand") {
+        const args = output.args as { command?: string } | undefined;
+        const command = args?.command?.replace(/^\//, "").toLowerCase();
+        const sessionID = input.sessionID || getMainSessionID();
+
+        if (command === "stop-continuation" && sessionID) {
+          stopContinuationGuard?.stop(sessionID);
+          todoContinuationEnforcer?.cancelAllCountdowns();
+          ralphLoop?.cancelLoop(sessionID);
+          clearBoulderState(ctx.directory);
+          log("[stop-continuation] All continuation mechanisms stopped", { sessionID });
+        }
+      }
     },
 
     "tool.execute.after": async (input, output) => {
```
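The diff above repeatedly replaces the removed `includesCaseInsensitive` helper with an inlined `.some(...)` check. Extracted as a standalone snippet (agent names taken from the diff, the array contents illustrative):

```typescript
// Inlined case-insensitive membership test, as used for the
// disabled_agents check in the diff above.
const disabledAgents: string[] = ["Multimodal-Looker", "oracle"]

const isMultimodalLookerEnabled = !disabledAgents.some(
  (agent) => agent.toLowerCase() === "multimodal-looker"
)

console.log(isMultimodalLookerEnabled) // false
```

The comparison target is already lowercase, so only the array elements need normalizing; this keeps the check dependency-free after the shared utility file was deleted.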
```diff
@@ -396,3 +396,46 @@ describe("Prometheus direct override priority over category", () => {
     expect(agents.prometheus.temperature).toBe(0.1)
   })
 })
+
+describe("Deadlock prevention - fetchAvailableModels must not receive client", () => {
+  test("fetchAvailableModels should be called with undefined client to prevent deadlock during plugin init", async () => {
+    // #given - This test ensures we don't regress on issue #1301
+    // Passing client to fetchAvailableModels during config handler causes deadlock:
+    // - Plugin init waits for server response (client.provider.list())
+    // - Server waits for plugin init to complete before handling requests
+    const fetchSpy = spyOn(shared, "fetchAvailableModels" as any).mockResolvedValue(new Set<string>())
+
+    const pluginConfig: OhMyOpenCodeConfig = {
+      sisyphus_agent: {
+        planner_enabled: true,
+      },
+    }
+    const config: Record<string, unknown> = {
+      model: "anthropic/claude-opus-4-5",
+      agent: {},
+    }
+    const mockClient = {
+      provider: { list: () => Promise.resolve({ data: { connected: [] } }) },
+      model: { list: () => Promise.resolve({ data: [] }) },
+    }
+    const handler = createConfigHandler({
+      ctx: { directory: "/tmp", client: mockClient },
+      pluginConfig,
+      modelCacheState: {
+        anthropicContext1MEnabled: false,
+        modelContextLimitsCache: new Map(),
+      },
+    })
+
+    // #when
+    await handler(config)
+
+    // #then - fetchAvailableModels must be called with undefined as first argument (no client)
+    // This prevents the deadlock described in issue #1301
+    expect(fetchSpy).toHaveBeenCalled()
+    const firstCallArgs = fetchSpy.mock.calls[0]
+    expect(firstCallArgs[0]).toBeUndefined()
+
+    fetchSpy.mockRestore?.()
+  })
+})
```
```diff
@@ -25,11 +25,10 @@ import { loadMcpConfigs } from "../features/claude-code-mcp-loader";
 import { loadAllPluginComponents } from "../features/claude-code-plugin-loader";
 import { createBuiltinMcps } from "../mcp";
 import type { OhMyOpenCodeConfig } from "../config";
-import { log, fetchAvailableModels, readConnectedProvidersCache } from "../shared";
+import { log, fetchAvailableModels, readConnectedProvidersCache, resolveModelPipeline } from "../shared";
 import { getOpenCodeConfigPaths } from "../shared/opencode-config-dir";
 import { migrateAgentConfig } from "../shared/permission-compat";
 import { AGENT_NAME_MAP } from "../shared/migration";
-import { resolveModelWithFallback } from "../shared/model-resolver";
 import { AGENT_MODEL_REQUIREMENTS } from "../shared/model-requirements";
 import { PROMETHEUS_SYSTEM_PROMPT, PROMETHEUS_PERMISSION } from "../agents/prometheus-prompt";
 import { DEFAULT_CATEGORIES } from "../tools/delegate-task/constants";
@@ -249,16 +248,26 @@ export function createConfigHandler(deps: ConfigHandlerDeps) {
 
   const prometheusRequirement = AGENT_MODEL_REQUIREMENTS["prometheus"];
   const connectedProviders = readConnectedProvidersCache();
-  const availableModels = ctx.client
-    ? await fetchAvailableModels(ctx.client, { connectedProviders: connectedProviders ?? undefined })
-    : new Set<string>();
+  // IMPORTANT: Do NOT pass ctx.client to fetchAvailableModels during plugin initialization.
+  // Calling client API (e.g., client.provider.list()) from config handler causes deadlock:
+  // - Plugin init waits for server response
+  // - Server waits for plugin init to complete before handling requests
+  // Use cache-only mode instead. If cache is unavailable, fallback chain uses first model.
+  // See: https://github.com/code-yeongyu/oh-my-opencode/issues/1301
+  const availableModels = await fetchAvailableModels(undefined, {
+    connectedProviders: connectedProviders ?? undefined,
+  });
 
-  const modelResolution = resolveModelWithFallback({
-    uiSelectedModel: currentModel,
-    userModel: prometheusOverride?.model ?? categoryConfig?.model,
-    fallbackChain: prometheusRequirement?.fallbackChain,
-    availableModels,
-    systemDefaultModel: undefined,
+  const modelResolution = resolveModelPipeline({
+    intent: {
+      uiSelectedModel: currentModel,
+      userModel: prometheusOverride?.model ?? categoryConfig?.model,
+    },
+    constraints: { availableModels },
+    policy: {
+      fallbackChain: prometheusRequirement?.fallbackChain,
+      systemDefaultModel: undefined,
+    },
   });
   const resolvedModel = modelResolution?.model;
   const resolvedVariant = modelResolution?.variant;
```
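The new `resolveModelPipeline` call groups its inputs into intent, constraints, and policy. The precedence it implies (UI selection, then user config override, then category default) can be sketched as follows; only the request shape and provenance labels come from the diff, everything else is simplified:

```typescript
// Hedged sketch of the model-resolution precedence: UI selection wins,
// then a user config override, then a category default. Whitespace-only
// values are treated as absent, mirroring normalizeModel in the new file.
type RequestSketch = {
  intent?: { uiSelectedModel?: string; userModel?: string; categoryDefaultModel?: string }
}

function resolveSketch(req: RequestSketch): { model: string; provenance: string } | undefined {
  const norm = (m?: string) => m?.trim() || undefined
  const ui = norm(req.intent?.uiSelectedModel)
  if (ui) return { model: ui, provenance: "override" }
  const user = norm(req.intent?.userModel)
  if (user) return { model: user, provenance: "override" }
  const cat = norm(req.intent?.categoryDefaultModel)
  if (cat) return { model: cat, provenance: "category-default" }
  return undefined
}

const resolved = resolveSketch({ intent: { userModel: " a/b ", categoryDefaultModel: "c/d" } })
console.log(resolved) // model "a/b", provenance "override": user override beats category default
```

The real pipeline additionally fuzzy-matches the category default against `availableModels` and consults the connected-providers cache; those branches are omitted here.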
```diff
@@ -4,8 +4,6 @@
  * true = tool allowed, false = tool denied.
  */
 
-import { findCaseInsensitive } from "./case-insensitive"
-
 const EXPLORATION_AGENT_DENYLIST: Record<string, boolean> = {
   write: false,
   edit: false,
@@ -37,10 +35,13 @@ const AGENT_RESTRICTIONS: Record<string, Record<string, boolean>> = {
 }
 
 export function getAgentToolRestrictions(agentName: string): Record<string, boolean> {
-  return findCaseInsensitive(AGENT_RESTRICTIONS, agentName) ?? {}
+  return AGENT_RESTRICTIONS[agentName]
+    ?? Object.entries(AGENT_RESTRICTIONS).find(([key]) => key.toLowerCase() === agentName.toLowerCase())?.[1]
+    ?? {}
 }
 
 export function hasAgentToolRestrictions(agentName: string): boolean {
-  const restrictions = findCaseInsensitive(AGENT_RESTRICTIONS, agentName)
+  const restrictions = AGENT_RESTRICTIONS[agentName]
+    ?? Object.entries(AGENT_RESTRICTIONS).find(([key]) => key.toLowerCase() === agentName.toLowerCase())?.[1]
   return restrictions !== undefined && Object.keys(restrictions).length > 0
 }
```
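The lookup pattern the diff inlines (exact key first, then a lowercase scan over the entries) works as a standalone function; the restriction values here are illustrative:

```typescript
// Exact-match-then-lowercase lookup, as inlined in getAgentToolRestrictions.
const AGENT_RESTRICTIONS: Record<string, Record<string, boolean>> = {
  explore: { write: false, edit: false },
}

function getRestrictions(agentName: string): Record<string, boolean> {
  // Fast path: exact key hit avoids scanning all entries.
  return AGENT_RESTRICTIONS[agentName]
    ?? Object.entries(AGENT_RESTRICTIONS).find(
      ([key]) => key.toLowerCase() === agentName.toLowerCase()
    )?.[1]
    ?? {}
}

console.log(getRestrictions("Explore")) // { write: false, edit: false }
console.log(getRestrictions("oracle")) // {}
```

Because `??` only falls through on `null`/`undefined`, an exact match short-circuits the linear scan, so the common case stays O(1).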
```diff
@@ -1,5 +1,4 @@
 import type { OhMyOpenCodeConfig } from "../config"
-import { findCaseInsensitive } from "./case-insensitive"
 import { AGENT_MODEL_REQUIREMENTS, CATEGORY_MODEL_REQUIREMENTS } from "./model-requirements"
 
 export function resolveAgentVariant(
@@ -13,7 +12,10 @@ export function resolveAgentVariant(
   const agentOverrides = config.agents as
     | Record<string, { variant?: string; category?: string }>
     | undefined
-  const agentOverride = agentOverrides ? findCaseInsensitive(agentOverrides, agentName) : undefined
+  const agentOverride = agentOverrides
+    ? agentOverrides[agentName]
+      ?? Object.entries(agentOverrides).find(([key]) => key.toLowerCase() === agentName.toLowerCase())?.[1]
+    : undefined
   if (!agentOverride) {
     return undefined
   }
@@ -43,7 +45,10 @@ export function resolveVariantForModel(
   const agentOverrides = config.agents as
     | Record<string, { category?: string }>
     | undefined
-  const agentOverride = agentOverrides ? findCaseInsensitive(agentOverrides, agentName) : undefined
+  const agentOverride = agentOverrides
+    ? agentOverrides[agentName]
+      ?? Object.entries(agentOverrides).find(([key]) => key.toLowerCase() === agentName.toLowerCase())?.[1]
+    : undefined
   const categoryName = agentOverride?.category
   if (categoryName) {
     const categoryRequirement = CATEGORY_MODEL_REQUIREMENTS[categoryName]
```
src/shared/binary-downloader.ts (new file, 60 lines)

```diff
@@ -0,0 +1,60 @@
+import { chmodSync, existsSync, mkdirSync, unlinkSync } from "node:fs";
+import * as path from "node:path";
+import { spawn } from "bun";
+import { extractZip } from "./zip-extractor";
+
+export function getCachedBinaryPath(cacheDir: string, binaryName: string): string | null {
+  const binaryPath = path.join(cacheDir, binaryName);
+  return existsSync(binaryPath) ? binaryPath : null;
+}
+
+export function ensureCacheDir(cacheDir: string): void {
+  if (!existsSync(cacheDir)) {
+    mkdirSync(cacheDir, { recursive: true });
+  }
+}
+
+export async function downloadArchive(downloadUrl: string, archivePath: string): Promise<void> {
+  const response = await fetch(downloadUrl, { redirect: "follow" });
+  if (!response.ok) {
+    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
+  }
+
+  const arrayBuffer = await response.arrayBuffer();
+  await Bun.write(archivePath, arrayBuffer);
+}
+
+export async function extractTarGz(
+  archivePath: string,
+  destDir: string,
+  options?: { args?: string[]; cwd?: string }
+): Promise<void> {
+  const args = options?.args ?? ["tar", "-xzf", archivePath, "-C", destDir];
+  const proc = spawn(args, {
+    cwd: options?.cwd,
+    stdout: "pipe",
+    stderr: "pipe",
+  });
+
+  const exitCode = await proc.exited;
+  if (exitCode !== 0) {
+    const stderr = await new Response(proc.stderr).text();
+    throw new Error(`tar extraction failed (exit ${exitCode}): ${stderr}`);
+  }
+}
+
+export async function extractZipArchive(archivePath: string, destDir: string): Promise<void> {
+  await extractZip(archivePath, destDir);
+}
+
+export function cleanupArchive(archivePath: string): void {
+  if (existsSync(archivePath)) {
+    unlinkSync(archivePath);
+  }
+}
+
+export function ensureExecutable(binaryPath: string): void {
+  if (process.platform !== "win32" && existsSync(binaryPath)) {
+    chmodSync(binaryPath, 0o755);
+  }
+}
```
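The cache-hit check at the top of the new file is runtime-agnostic. A minimal sketch using a temp directory (`getCachedBinaryPath` mirrors the new file; the directory and binary names are hypothetical):

```typescript
import { existsSync, mkdirSync, writeFileSync, rmSync } from "node:fs"
import * as path from "node:path"
import { tmpdir } from "node:os"

// Same cache-hit logic as getCachedBinaryPath in the new file above:
// return the path only if the binary already exists on disk.
function getCachedBinaryPath(cacheDir: string, binaryName: string): string | null {
  const binaryPath = path.join(cacheDir, binaryName)
  return existsSync(binaryPath) ? binaryPath : null
}

const dir = path.join(tmpdir(), "omo-cache-demo")
mkdirSync(dir, { recursive: true })
console.log(getCachedBinaryPath(dir, "mytool")) // null (nothing downloaded yet)

writeFileSync(path.join(dir, "mytool"), "")
console.log(getCachedBinaryPath(dir, "mytool") !== null) // true

rmSync(dir, { recursive: true, force: true })
```

The intended flow in the new module is: check the cache, and only on a miss run `ensureCacheDir`, `downloadArchive`, one of the extract helpers, `ensureExecutable`, then `cleanupArchive`.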
```diff
@@ -1,169 +0,0 @@
-import { describe, test, expect } from "bun:test"
-import {
-  findCaseInsensitive,
-  includesCaseInsensitive,
-  findByNameCaseInsensitive,
-  equalsIgnoreCase,
-} from "./case-insensitive"
-
-describe("findCaseInsensitive", () => {
-  test("returns undefined for empty/undefined object", () => {
-    // #given - undefined object
-    const obj = undefined
-
-    // #when - lookup any key
-    const result = findCaseInsensitive(obj, "key")
-
-    // #then - returns undefined
-    expect(result).toBeUndefined()
-  })
-
-  test("finds exact match first", () => {
-    // #given - object with exact key
-    const obj = { Oracle: "value1", oracle: "value2" }
-
-    // #when - lookup with exact case
-    const result = findCaseInsensitive(obj, "Oracle")
-
-    // #then - returns exact match
-    expect(result).toBe("value1")
-  })
-
-  test("finds case-insensitive match when no exact match", () => {
-    // #given - object with lowercase key
-    const obj = { oracle: "value" }
-
-    // #when - lookup with uppercase
-    const result = findCaseInsensitive(obj, "ORACLE")
-
-    // #then - returns case-insensitive match
-    expect(result).toBe("value")
-  })
-
-  test("returns undefined when key not found", () => {
-    // #given - object without target key
-    const obj = { other: "value" }
-
-    // #when - lookup missing key
-    const result = findCaseInsensitive(obj, "oracle")
-
-    // #then - returns undefined
-    expect(result).toBeUndefined()
-  })
-})
-
-describe("includesCaseInsensitive", () => {
-  test("returns true for exact match", () => {
-    // #given - array with exact value
-    const arr = ["explore", "librarian"]
-
-    // #when - check exact match
-    const result = includesCaseInsensitive(arr, "explore")
-
-    // #then - returns true
-    expect(result).toBe(true)
-  })
-
-  test("returns true for case-insensitive match", () => {
-    // #given - array with lowercase values
-    const arr = ["explore", "librarian"]
-
-    // #when - check uppercase value
-    const result = includesCaseInsensitive(arr, "EXPLORE")
-
-    // #then - returns true
-    expect(result).toBe(true)
-  })
-
-  test("returns true for mixed case match", () => {
-    // #given - array with mixed case values
-    const arr = ["Oracle", "Sisyphus"]
-
-    // #when - check different case
-    const result = includesCaseInsensitive(arr, "oracle")
-
-    // #then - returns true
-    expect(result).toBe(true)
-  })
-
-  test("returns false when value not found", () => {
-    // #given - array without target value
-    const arr = ["explore", "librarian"]
-
-    // #when - check missing value
-    const result = includesCaseInsensitive(arr, "oracle")
-
-    // #then - returns false
-    expect(result).toBe(false)
-  })
-
-  test("returns false for empty array", () => {
-    // #given - empty array
-    const arr: string[] = []
-
-    // #when - check any value
-    const result = includesCaseInsensitive(arr, "explore")
-
-    // #then - returns false
-    expect(result).toBe(false)
-  })
-})
-
-describe("findByNameCaseInsensitive", () => {
-  test("finds element by exact name", () => {
-    // #given - array with named objects
-    const arr = [{ name: "Oracle", value: 1 }, { name: "explore", value: 2 }]
-
-    // #when - find by exact name
-    const result = findByNameCaseInsensitive(arr, "Oracle")
-
-    // #then - returns matching element
-    expect(result).toEqual({ name: "Oracle", value: 1 })
-  })
-
-  test("finds element by case-insensitive name", () => {
-    // #given - array with named objects
-    const arr = [{ name: "Oracle", value: 1 }, { name: "explore", value: 2 }]
-
-    // #when - find by different case
-    const result = findByNameCaseInsensitive(arr, "oracle")
-
-    // #then - returns matching element
-    expect(result).toEqual({ name: "Oracle", value: 1 })
-  })
-
-  test("returns undefined when name not found", () => {
-    // #given - array without target name
-    const arr = [{ name: "Oracle", value: 1 }]
-
-    // #when - find missing name
-    const result = findByNameCaseInsensitive(arr, "librarian")
-
-    // #then - returns undefined
-    expect(result).toBeUndefined()
-  })
-})
-
-describe("equalsIgnoreCase", () => {
-  test("returns true for same case", () => {
-    // #given - same strings
-    // #when - compare
-    // #then - returns true
-    expect(equalsIgnoreCase("oracle", "oracle")).toBe(true)
-  })
-
-  test("returns true for different case", () => {
-    // #given - strings with different case
-    // #when - compare
-    // #then - returns true
-    expect(equalsIgnoreCase("Oracle", "ORACLE")).toBe(true)
-    expect(equalsIgnoreCase("Sisyphus-Junior", "sisyphus-junior")).toBe(true)
-  })
-
-  test("returns false for different strings", () => {
-    // #given - different strings
-    // #when - compare
-    // #then - returns false
-    expect(equalsIgnoreCase("oracle", "explore")).toBe(false)
-  })
-})
```
```diff
@@ -1,46 +0,0 @@
-/**
- * Case-insensitive lookup and comparison utilities for agent/config names.
- * Used throughout the codebase to allow "Oracle", "oracle", "ORACLE" to work the same.
- */
-
-/**
- * Find a value in an object using case-insensitive key matching.
- * First tries exact match, then falls back to lowercase comparison.
- */
-export function findCaseInsensitive<T>(obj: Record<string, T> | undefined, key: string): T | undefined {
-  if (!obj) return undefined
-  const exactMatch = obj[key]
-  if (exactMatch !== undefined) return exactMatch
-  const lowerKey = key.toLowerCase()
-  for (const [k, v] of Object.entries(obj)) {
-    if (k.toLowerCase() === lowerKey) return v
-  }
-  return undefined
-}
-
-/**
- * Check if an array includes a value using case-insensitive comparison.
- */
-export function includesCaseInsensitive(arr: string[], value: string): boolean {
-  const lowerValue = value.toLowerCase()
-  return arr.some((item) => item.toLowerCase() === lowerValue)
-}
-
-/**
- * Find an element in array using case-insensitive name matching.
- * Useful for finding agents/categories by name.
- */
-export function findByNameCaseInsensitive<T extends { name: string }>(
-  arr: T[],
-  name: string
-): T | undefined {
-  const lowerName = name.toLowerCase()
-  return arr.find((item) => item.name.toLowerCase() === lowerName)
-}
-
-/**
- * Check if two strings are equal (case-insensitive).
- */
-export function equalsIgnoreCase(a: string, b: string): boolean {
-  return a.toLowerCase() === b.toLowerCase()
-}
```
```diff
@@ -20,6 +20,7 @@ export * from "./opencode-version"
 export * from "./permission-compat"
 export * from "./external-plugin-detector"
 export * from "./zip-extractor"
+export * from "./binary-downloader"
 export * from "./agent-variant"
 export * from "./session-cursor"
 export * from "./shell-env"
@@ -27,9 +28,14 @@ export * from "./system-directive"
 export * from "./agent-tool-restrictions"
 export * from "./model-requirements"
 export * from "./model-resolver"
+export {
+  resolveModelPipeline,
+  type ModelResolutionRequest,
+  type ModelResolutionResult as ModelResolutionPipelineResult,
+  type ModelResolutionProvenance,
+} from "./model-resolution-pipeline"
 export * from "./model-availability"
 export * from "./connected-providers-cache"
-export * from "./case-insensitive"
 export * from "./session-utils"
 export * from "./tmux"
 export * from "./model-suggestion-retry"
```
174
src/shared/model-resolution-pipeline.ts
Normal file
174
src/shared/model-resolution-pipeline.ts
Normal file
@@ -0,0 +1,174 @@
|
||||
import { log } from "./logger"
import { readConnectedProvidersCache } from "./connected-providers-cache"
import { fuzzyMatchModel } from "./model-availability"
import type { FallbackEntry } from "./model-requirements"

export type ModelResolutionRequest = {
  intent?: {
    uiSelectedModel?: string
    userModel?: string
    categoryDefaultModel?: string
  }
  constraints: {
    availableModels: Set<string>
  }
  policy?: {
    fallbackChain?: FallbackEntry[]
    systemDefaultModel?: string
  }
}

export type ModelResolutionProvenance =
  | "override"
  | "category-default"
  | "provider-fallback"
  | "system-default"

export type ModelResolutionResult = {
  model: string
  provenance: ModelResolutionProvenance
  variant?: string
  attempted?: string[]
  reason?: string
}

function normalizeModel(model?: string): string | undefined {
  const trimmed = model?.trim()
  return trimmed || undefined
}

export function resolveModelPipeline(
  request: ModelResolutionRequest,
): ModelResolutionResult | undefined {
  const attempted: string[] = []
  const { intent, constraints, policy } = request
  const availableModels = constraints.availableModels
  const fallbackChain = policy?.fallbackChain
  const systemDefaultModel = policy?.systemDefaultModel

  const normalizedUiModel = normalizeModel(intent?.uiSelectedModel)
  if (normalizedUiModel) {
    log("Model resolved via UI selection", { model: normalizedUiModel })
    return { model: normalizedUiModel, provenance: "override" }
  }

  const normalizedUserModel = normalizeModel(intent?.userModel)
  if (normalizedUserModel) {
    log("Model resolved via config override", { model: normalizedUserModel })
    return { model: normalizedUserModel, provenance: "override" }
  }

  const normalizedCategoryDefault = normalizeModel(intent?.categoryDefaultModel)
  if (normalizedCategoryDefault) {
    attempted.push(normalizedCategoryDefault)
    if (availableModels.size > 0) {
      const parts = normalizedCategoryDefault.split("/")
      const providerHint = parts.length >= 2 ? [parts[0]] : undefined
      const match = fuzzyMatchModel(normalizedCategoryDefault, availableModels, providerHint)
      if (match) {
        log("Model resolved via category default (fuzzy matched)", {
          original: normalizedCategoryDefault,
          matched: match,
        })
        return { model: match, provenance: "category-default", attempted }
      }
    } else {
      const connectedProviders = readConnectedProvidersCache()
      if (connectedProviders === null) {
        log("Model resolved via category default (no cache, first run)", {
          model: normalizedCategoryDefault,
        })
        return { model: normalizedCategoryDefault, provenance: "category-default", attempted }
      }
      const parts = normalizedCategoryDefault.split("/")
      if (parts.length >= 2) {
        const provider = parts[0]
        if (connectedProviders.includes(provider)) {
          log("Model resolved via category default (connected provider)", {
            model: normalizedCategoryDefault,
          })
          return { model: normalizedCategoryDefault, provenance: "category-default", attempted }
        }
      }
    }
    log("Category default model not available, falling through to fallback chain", {
      model: normalizedCategoryDefault,
    })
  }

  if (fallbackChain && fallbackChain.length > 0) {
    if (availableModels.size === 0) {
      const connectedProviders = readConnectedProvidersCache()
      const connectedSet = connectedProviders ? new Set(connectedProviders) : null

      if (connectedSet === null) {
        log("Model fallback chain skipped (no connected providers cache) - falling through to system default")
      } else {
        for (const entry of fallbackChain) {
          for (const provider of entry.providers) {
            if (connectedSet.has(provider)) {
              const model = `${provider}/${entry.model}`
              log("Model resolved via fallback chain (connected provider)", {
                provider,
                model: entry.model,
                variant: entry.variant,
              })
              return {
                model,
                provenance: "provider-fallback",
                variant: entry.variant,
                attempted,
              }
            }
          }
        }
        log("No connected provider found in fallback chain, falling through to system default")
      }
    } else {
      for (const entry of fallbackChain) {
        for (const provider of entry.providers) {
          const fullModel = `${provider}/${entry.model}`
          const match = fuzzyMatchModel(fullModel, availableModels, [provider])
          if (match) {
            log("Model resolved via fallback chain (availability confirmed)", {
              provider,
              model: entry.model,
              match,
              variant: entry.variant,
            })
            return {
              model: match,
              provenance: "provider-fallback",
              variant: entry.variant,
              attempted,
            }
          }
        }

        const crossProviderMatch = fuzzyMatchModel(entry.model, availableModels)
        if (crossProviderMatch) {
          log("Model resolved via fallback chain (cross-provider fuzzy match)", {
            model: entry.model,
            match: crossProviderMatch,
            variant: entry.variant,
          })
          return {
            model: crossProviderMatch,
            provenance: "provider-fallback",
            variant: entry.variant,
            attempted,
          }
        }
      }
      log("No available model found in fallback chain, falling through to system default")
    }
  }

  if (systemDefaultModel === undefined) {
    log("No model resolved - systemDefaultModel not configured")
    return undefined
  }

  log("Model resolved via system default", { model: systemDefaultModel })
  return { model: systemDefaultModel, provenance: "system-default", attempted }
}
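The new `resolveModelPipeline` applies a strict precedence: UI selection, then user config override, then category default, then the provider fallback chain, then the system default. A minimal self-contained sketch of just that precedence (the names below are illustrative, not the module's API; the real pipeline also consults fuzzy matching and the connected-providers cache):

```typescript
type Intent = { uiSelectedModel?: string; userModel?: string; categoryDefaultModel?: string }

function pickByPrecedence(intent: Intent, systemDefault?: string): string | undefined {
  // Empty or whitespace-only strings are treated as "not set", like normalizeModel above.
  const candidates = [intent.uiSelectedModel, intent.userModel, intent.categoryDefaultModel, systemDefault]
  for (const candidate of candidates) {
    const trimmed = candidate?.trim()
    if (trimmed) return trimmed
  }
  return undefined
}
```

The first non-blank candidate wins, which is why a whitespace-only UI selection falls through to the config override rather than shadowing it.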
@@ -1,7 +1,6 @@
import { log } from "./logger"
import { fuzzyMatchModel } from "./model-availability"
import type { FallbackEntry } from "./model-requirements"
import { readConnectedProvidersCache } from "./connected-providers-cache"
import { resolveModelPipeline } from "./model-resolution-pipeline"

export type ModelResolutionInput = {
  userModel?: string
@@ -47,107 +46,19 @@ export function resolveModelWithFallback(
  input: ExtendedModelResolutionInput,
): ModelResolutionResult | undefined {
  const { uiSelectedModel, userModel, categoryDefaultModel, fallbackChain, availableModels, systemDefaultModel } = input
  const resolved = resolveModelPipeline({
    intent: { uiSelectedModel, userModel, categoryDefaultModel },
    constraints: { availableModels },
    policy: { fallbackChain, systemDefaultModel },
  })

  // Step 1: UI Selection (highest priority - respects user's model choice in OpenCode UI)
  const normalizedUiModel = normalizeModel(uiSelectedModel)
  if (normalizedUiModel) {
    log("Model resolved via UI selection", { model: normalizedUiModel })
    return { model: normalizedUiModel, source: "override" }
  }

  // Step 2: Config Override (from oh-my-opencode.json user config)
  const normalizedUserModel = normalizeModel(userModel)
  if (normalizedUserModel) {
    log("Model resolved via config override", { model: normalizedUserModel })
    return { model: normalizedUserModel, source: "override" }
  }

  // Step 2.5: Category Default Model (from DEFAULT_CATEGORIES, with fuzzy matching)
  const normalizedCategoryDefault = normalizeModel(categoryDefaultModel)
  if (normalizedCategoryDefault) {
    if (availableModels.size > 0) {
      const parts = normalizedCategoryDefault.split("/")
      const providerHint = parts.length >= 2 ? [parts[0]] : undefined
      const match = fuzzyMatchModel(normalizedCategoryDefault, availableModels, providerHint)
      if (match) {
        log("Model resolved via category default (fuzzy matched)", { original: normalizedCategoryDefault, matched: match })
        return { model: match, source: "category-default" }
      }
    } else {
      const connectedProviders = readConnectedProvidersCache()
      if (connectedProviders === null) {
        log("Model resolved via category default (no cache, first run)", { model: normalizedCategoryDefault })
        return { model: normalizedCategoryDefault, source: "category-default" }
      }
      const parts = normalizedCategoryDefault.split("/")
      if (parts.length >= 2) {
        const provider = parts[0]
        if (connectedProviders.includes(provider)) {
          log("Model resolved via category default (connected provider)", { model: normalizedCategoryDefault })
          return { model: normalizedCategoryDefault, source: "category-default" }
        }
      }
    }
    log("Category default model not available, falling through to fallback chain", { model: normalizedCategoryDefault })
  }

  // Step 3: Provider fallback chain (exact match → fuzzy match → next provider)
  if (fallbackChain && fallbackChain.length > 0) {
    if (availableModels.size === 0) {
      const connectedProviders = readConnectedProvidersCache()
      const connectedSet = connectedProviders ? new Set(connectedProviders) : null

      if (connectedSet === null) {
        log("Model fallback chain skipped (no connected providers cache) - falling through to system default")
      } else {
        for (const entry of fallbackChain) {
          for (const provider of entry.providers) {
            if (connectedSet.has(provider)) {
              const model = `${provider}/${entry.model}`
              log("Model resolved via fallback chain (connected provider)", {
                provider,
                model: entry.model,
                variant: entry.variant,
              })
              return { model, source: "provider-fallback", variant: entry.variant }
            }
          }
        }
        log("No connected provider found in fallback chain, falling through to system default")
      }
    } else {
      for (const entry of fallbackChain) {
        // Step 1: Try with provider filter (preferred providers first)
        for (const provider of entry.providers) {
          const fullModel = `${provider}/${entry.model}`
          const match = fuzzyMatchModel(fullModel, availableModels, [provider])
          if (match) {
            log("Model resolved via fallback chain (availability confirmed)", { provider, model: entry.model, match, variant: entry.variant })
            return { model: match, source: "provider-fallback", variant: entry.variant }
          }
        }

        // Step 2: Try without provider filter (cross-provider fuzzy match)
        const crossProviderMatch = fuzzyMatchModel(entry.model, availableModels)
        if (crossProviderMatch) {
          log("Model resolved via fallback chain (cross-provider fuzzy match)", {
            model: entry.model,
            match: crossProviderMatch,
            variant: entry.variant,
          })
          return { model: crossProviderMatch, source: "provider-fallback", variant: entry.variant }
        }
      }
      log("No available model found in fallback chain, falling through to system default")
    }
  }

  // Step 4: System default (if provided)
  if (systemDefaultModel === undefined) {
    log("No model resolved - systemDefaultModel not configured")
  if (!resolved) {
    return undefined
  }

  log("Model resolved via system default", { model: systemDefaultModel })
  return { model: systemDefaultModel, source: "system-default" }
  return {
    model: resolved.model,
    source: resolved.provenance,
    variant: resolved.variant,
  }
}
@@ -2,8 +2,6 @@ import { describe, test, expect, beforeEach, afterEach } from "bun:test"
import {
  parseVersion,
  compareVersions,
  isVersionGte,
  isVersionLt,
  getOpenCodeVersion,
  isOpenCodeVersionAtLeast,
  resetVersionCache,
@@ -103,32 +101,6 @@ describe("opencode-version", () => {
    })
  })

  describe("isVersionGte", () => {
    test("returns true when a >= b", () => {
      expect(isVersionGte("1.1.1", "1.1.1")).toBe(true)
      expect(isVersionGte("1.1.2", "1.1.1")).toBe(true)
      expect(isVersionGte("1.2.0", "1.1.1")).toBe(true)
      expect(isVersionGte("2.0.0", "1.1.1")).toBe(true)
    })

    test("returns false when a < b", () => {
      expect(isVersionGte("1.1.0", "1.1.1")).toBe(false)
      expect(isVersionGte("1.0.9", "1.1.1")).toBe(false)
      expect(isVersionGte("0.9.9", "1.1.1")).toBe(false)
    })
  })

  describe("isVersionLt", () => {
    test("returns true when a < b", () => {
      expect(isVersionLt("1.1.0", "1.1.1")).toBe(true)
      expect(isVersionLt("1.0.150", "1.1.1")).toBe(true)
    })

    test("returns false when a >= b", () => {
      expect(isVersionLt("1.1.1", "1.1.1")).toBe(false)
      expect(isVersionLt("1.1.2", "1.1.1")).toBe(false)
    })
  })

  describe("getOpenCodeVersion", () => {
    beforeEach(() => {

@@ -37,13 +37,6 @@ export function compareVersions(a: string, b: string): -1 | 0 | 1 {
  return 0
}

export function isVersionGte(a: string, b: string): boolean {
  return compareVersions(a, b) >= 0
}

export function isVersionLt(a: string, b: string): boolean {
  return compareVersions(a, b) < 0
}

export function getOpenCodeVersion(): string | null {
  if (cachedVersion !== NOT_CACHED) {
@@ -69,7 +62,7 @@ export function getOpenCodeVersion(): string | null {
export function isOpenCodeVersionAtLeast(version: string): boolean {
  const current = getOpenCodeVersion()
  if (!current) return true
  return isVersionGte(current, version)
  return compareVersions(current, version) >= 0
}

export function resetVersionCache(): void {
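The diff above removes the `isVersionGte`/`isVersionLt` wrappers and inlines the comparison at the one remaining call site. A self-contained sketch of the part-by-part version comparison they wrapped (illustrative, not the module's exact code):

```typescript
// Compare dotted numeric versions segment by segment; missing segments count as 0,
// so "1.1" and "1.1.0" compare equal.
function compareVersions(a: string, b: string): -1 | 0 | 1 {
  const pa = a.split(".").map(Number)
  const pb = b.split(".").map(Number)
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] ?? 0
    const y = pb[i] ?? 0
    if (x < y) return -1
    if (x > y) return 1
  }
  return 0
}

// The inlined replacement for the removed helper: a >= b iff compareVersions(a, b) >= 0.
const atLeast = (a: string, b: string) => compareVersions(a, b) >= 0
```

Numeric comparison per segment is what makes "1.0.150" sort below "1.1.1" even though it is longer as a string.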
59  src/shared/session-injected-paths.ts  Normal file
@@ -0,0 +1,59 @@
import {
  existsSync,
  mkdirSync,
  readFileSync,
  unlinkSync,
  writeFileSync,
} from "node:fs";
import { join } from "node:path";

export interface InjectedPathsData {
  sessionID: string;
  injectedPaths: string[];
  updatedAt: number;
}

export function createInjectedPathsStorage(storageDir: string) {
  const getStoragePath = (sessionID: string): string =>
    join(storageDir, `${sessionID}.json`);

  const loadInjectedPaths = (sessionID: string): Set<string> => {
    const filePath = getStoragePath(sessionID);
    if (!existsSync(filePath)) return new Set();

    try {
      const content = readFileSync(filePath, "utf-8");
      const data: InjectedPathsData = JSON.parse(content);
      return new Set(data.injectedPaths);
    } catch {
      return new Set();
    }
  };

  const saveInjectedPaths = (sessionID: string, paths: Set<string>): void => {
    if (!existsSync(storageDir)) {
      mkdirSync(storageDir, { recursive: true });
    }

    const data: InjectedPathsData = {
      sessionID,
      injectedPaths: [...paths],
      updatedAt: Date.now(),
    };

    writeFileSync(getStoragePath(sessionID), JSON.stringify(data, null, 2));
  };

  const clearInjectedPaths = (sessionID: string): void => {
    const filePath = getStoragePath(sessionID);
    if (existsSync(filePath)) {
      unlinkSync(filePath);
    }
  };

  return {
    loadInjectedPaths,
    saveInjectedPaths,
    clearInjectedPaths,
  };
}
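The storage format is one JSON file per session, keyed by session ID. A hypothetical round-trip in a temp directory, inlined here rather than importing the module (the file shape matches `InjectedPathsData`; everything else is illustrative):

```typescript
import { mkdirSync, readFileSync, writeFileSync, rmSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"

// Write the same shape the module persists, then read it back into a Set,
// which is what loadInjectedPaths returns to callers.
const dir = join(tmpdir(), `injected-paths-demo-${process.pid}`)
mkdirSync(dir, { recursive: true })
const file = join(dir, "session-1.json")
writeFileSync(file, JSON.stringify({
  sessionID: "session-1",
  injectedPaths: ["/src/a.ts", "/src/b.ts"],
  updatedAt: Date.now(),
}, null, 2))
const loaded = new Set<string>(JSON.parse(readFileSync(file, "utf-8")).injectedPaths)
rmSync(dir, { recursive: true, force: true })
```

Persisting a plain string array and rehydrating into a `Set` keeps the on-disk format JSON-friendly while giving callers O(1) membership checks.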
@@ -8,42 +8,37 @@ export function snakeToCamel(str: string): string {
  return str.replace(/_([a-z])/g, (_, letter) => letter.toUpperCase())
}

export function transformObjectKeys(
  obj: Record<string, unknown>,
  transformer: (key: string) => string,
  deep: boolean = true
): Record<string, unknown> {
  const result: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(obj)) {
    const transformedKey = transformer(key)
    if (deep && isPlainObject(value)) {
      result[transformedKey] = transformObjectKeys(value, transformer, true)
    } else if (deep && Array.isArray(value)) {
      result[transformedKey] = value.map((item) =>
        isPlainObject(item) ? transformObjectKeys(item, transformer, true) : item
      )
    } else {
      result[transformedKey] = value
    }
  }
  return result
}

export function objectToSnakeCase(
  obj: Record<string, unknown>,
  deep: boolean = true
): Record<string, unknown> {
  const result: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(obj)) {
    const snakeKey = camelToSnake(key)
    if (deep && isPlainObject(value)) {
      result[snakeKey] = objectToSnakeCase(value, true)
    } else if (deep && Array.isArray(value)) {
      result[snakeKey] = value.map((item) =>
        isPlainObject(item) ? objectToSnakeCase(item, true) : item
      )
    } else {
      result[snakeKey] = value
    }
  }
  return result
}
  return transformObjectKeys(obj, camelToSnake, deep)
}

export function objectToCamelCase(
  obj: Record<string, unknown>,
  deep: boolean = true
): Record<string, unknown> {
  const result: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(obj)) {
    const camelKey = snakeToCamel(key)
    if (deep && isPlainObject(value)) {
      result[camelKey] = objectToCamelCase(value, true)
    } else if (deep && Array.isArray(value)) {
      result[camelKey] = value.map((item) =>
        isPlainObject(item) ? objectToCamelCase(item, true) : item
      )
    } else {
      result[camelKey] = value
    }
  }
  return result
}
  return transformObjectKeys(obj, snakeToCamel, deep)
}
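The refactor collapses both converters into one generic `transformObjectKeys` parameterized by a key transformer. A self-contained sketch of the same shape (the helper implementations below are assumptions for illustration, not copied from the module):

```typescript
const camelToSnake = (s: string) => s.replace(/[A-Z]/g, (c) => `_${c.toLowerCase()}`)
const snakeToCamel = (s: string) => s.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase())
const isPlainObject = (v: unknown): v is Record<string, unknown> =>
  typeof v === "object" && v !== null && !Array.isArray(v)

function transformObjectKeys(
  obj: Record<string, unknown>,
  transformer: (key: string) => string,
  deep = true,
): Record<string, unknown> {
  const result: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(obj)) {
    const k = transformer(key)
    if (deep && isPlainObject(value)) {
      // Recurse into nested objects with the same transformer
      result[k] = transformObjectKeys(value, transformer, true)
    } else if (deep && Array.isArray(value)) {
      // Transform object elements inside arrays; leave primitives alone
      result[k] = value.map((item) => (isPlainObject(item) ? transformObjectKeys(item, transformer, true) : item))
    } else {
      result[k] = value
    }
  }
  return result
}

const toSnake = (o: Record<string, unknown>) => transformObjectKeys(o, camelToSnake)
```

Each direction-specific converter then becomes a one-line partial application, which is exactly what the diff does.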
@@ -139,10 +139,22 @@ export async function spawnTmuxPane(
  }

  const title = `omo-subagent-${description.slice(0, 20)}`
  spawn([tmux, "select-pane", "-t", paneId, "-T", title], {
  const titleProc = spawn([tmux, "select-pane", "-t", paneId, "-T", title], {
    stdout: "ignore",
    stderr: "ignore",
    stderr: "pipe",
  })
  // Drain stderr immediately to avoid backpressure
  const stderrPromise = new Response(titleProc.stderr).text().catch(() => "")
  const titleExitCode = await titleProc.exited
  if (titleExitCode !== 0) {
    const titleStderr = await stderrPromise
    log("[spawnTmuxPane] WARNING: failed to set pane title", {
      paneId,
      title,
      exitCode: titleExitCode,
      stderr: titleStderr.trim(),
    })
  }

  return { success: true, paneId }
}
@@ -217,10 +229,21 @@ export async function replaceTmuxPane(
  }

  const title = `omo-subagent-${description.slice(0, 20)}`
  spawn([tmux, "select-pane", "-t", paneId, "-T", title], {
  const titleProc = spawn([tmux, "select-pane", "-t", paneId, "-T", title], {
    stdout: "ignore",
    stderr: "ignore",
    stderr: "pipe",
  })
  // Drain stderr immediately to avoid backpressure
  const stderrPromise = new Response(titleProc.stderr).text().catch(() => "")
  const titleExitCode = await titleProc.exited
  if (titleExitCode !== 0) {
    const titleStderr = await stderrPromise
    log("[replaceTmuxPane] WARNING: failed to set pane title", {
      paneId,
      exitCode: titleExitCode,
      stderr: titleStderr.trim(),
    })
  }

  log("[replaceTmuxPane] SUCCESS", { paneId, sessionId })
  return { success: true, paneId }
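The fix pipes stderr and starts draining it before awaiting exit, so a chatty process cannot stall on a full pipe, and the exit code is now checked instead of fire-and-forgotten. An analogous Node sketch of "drain stderr while waiting for exit" (the diff itself uses Bun's `spawn` and `Response`; this helper is an illustration, not the plugin's code):

```typescript
import { spawn } from "node:child_process"

// Collect stderr as it arrives (draining the pipe), then resolve with the exit code.
function run(cmd: string, args: string[]): Promise<{ code: number | null; stderr: string }> {
  return new Promise((resolve, reject) => {
    const proc = spawn(cmd, args)
    let stderr = ""
    proc.stderr.on("data", (chunk) => { stderr += chunk })
    proc.on("error", reject)
    proc.on("close", (code) => resolve({ code, stderr }))
  })
}
```

Because the `data` listener is attached before the process exits, output is consumed continuously rather than buffered in the OS pipe.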
@@ -1,8 +1,15 @@
import { existsSync, mkdirSync, chmodSync, unlinkSync } from "fs"
import { existsSync } from "fs"
import { join } from "path"
import { homedir } from "os"
import { createRequire } from "module"
import { extractZip } from "../../shared"
import {
  cleanupArchive,
  downloadArchive,
  ensureCacheDir,
  ensureExecutable,
  extractZipArchive,
  getCachedBinaryPath as getCachedBinaryPathShared,
} from "../../shared/binary-downloader"
import { log } from "../../shared/logger"

const REPO = "ast-grep/ast-grep"
@@ -53,8 +60,7 @@ export function getBinaryName(): string {
}

export function getCachedBinaryPath(): string | null {
  const binaryPath = join(getCacheDir(), getBinaryName())
  return existsSync(binaryPath) ? binaryPath : null
  return getCachedBinaryPathShared(getCacheDir(), getBinaryName())
}

@@ -83,29 +89,12 @@ export async function downloadAstGrep(version: string = DEFAULT_VERSION): Promis
  log(`[oh-my-opencode] Downloading ast-grep binary...`)

  try {
    if (!existsSync(cacheDir)) {
      mkdirSync(cacheDir, { recursive: true })
    }

    const response = await fetch(downloadUrl, { redirect: "follow" })

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`)
    }

    const archivePath = join(cacheDir, assetName)
    const arrayBuffer = await response.arrayBuffer()
    await Bun.write(archivePath, arrayBuffer)

    await extractZip(archivePath, cacheDir)

    if (existsSync(archivePath)) {
      unlinkSync(archivePath)
    }

    if (process.platform !== "win32" && existsSync(binaryPath)) {
      chmodSync(binaryPath, 0o755)
    }
    ensureCacheDir(cacheDir)
    await downloadArchive(downloadUrl, archivePath)
    await extractZipArchive(archivePath, cacheDir)
    cleanupArchive(archivePath)
    ensureExecutable(binaryPath)

    log(`[oh-my-opencode] ast-grep binary ready.`)
@@ -4,7 +4,7 @@ import { join } from "node:path"
import { ALLOWED_AGENTS, CALL_OMO_AGENT_DESCRIPTION } from "./constants"
import type { CallOmoAgentArgs } from "./types"
import type { BackgroundManager } from "../../features/background-agent"
import { log, getAgentToolRestrictions, includesCaseInsensitive } from "../../shared"
import { log, getAgentToolRestrictions } from "../../shared"
import { consumeNewMessages } from "../../shared/session-cursor"
import { findFirstMessageWithAgent, findNearestMessageWithFields, MESSAGE_STORAGE } from "../../features/hook-message-injector"
import { getSessionAgent } from "../../features/claude-code-session-state"
@@ -58,7 +58,9 @@ export function createCallOmoAgent(
  log(`[call_omo_agent] Starting with agent: ${args.subagent_type}, background: ${args.run_in_background}`)

  // Case-insensitive agent validation - allows "Explore", "EXPLORE", "explore" etc.
  if (!includesCaseInsensitive([...ALLOWED_AGENTS], args.subagent_type)) {
  if (![...ALLOWED_AGENTS].some(
    (name) => name.toLowerCase() === args.subagent_type.toLowerCase()
  )) {
    return `Error: Invalid agent type "${args.subagent_type}". Only ${ALLOWED_AGENTS.join(", ")} are allowed.`
  }
@@ -12,10 +12,9 @@ import { discoverSkills } from "../../features/opencode-skill-loader"
import { getTaskToastManager } from "../../features/task-toast-manager"
import type { ModelFallbackInfo } from "../../features/task-toast-manager/types"
import { subagentSessions, getSessionAgent } from "../../features/claude-code-session-state"
import { log, getAgentToolRestrictions, resolveModel, getOpenCodeConfigPaths, findByNameCaseInsensitive, equalsIgnoreCase, promptWithModelSuggestionRetry } from "../../shared"
import { log, getAgentToolRestrictions, resolveModel, resolveModelPipeline, getOpenCodeConfigPaths, promptWithModelSuggestionRetry } from "../../shared"
import { fetchAvailableModels, isModelAvailable } from "../../shared/model-availability"
import { readConnectedProvidersCache } from "../../shared/connected-providers-cache"
import { resolveModelWithFallback } from "../../shared/model-resolver"
import { CATEGORY_MODEL_REQUIREMENTS } from "../../shared/model-requirements"

type OpencodeClient = PluginInput["client"]
@@ -552,16 +551,20 @@ To continue this session: session_id="${args.session_id}"`
      modelInfo = { model: actualModel, type: "system-default", source: "system-default" }
    }
  } else {
    const resolution = resolveModelWithFallback({
    const resolution = resolveModelPipeline({
      intent: {
        userModel: userCategories?.[args.category]?.model,
        categoryDefaultModel: resolved.model ?? sisyphusJuniorModel,
      },
      constraints: { availableModels },
      policy: {
        fallbackChain: requirement.fallbackChain,
      availableModels,
      systemDefaultModel,
    })
      },
    })

    if (resolution) {
      const { model: resolvedModel, source, variant: resolvedVariant } = resolution
    if (resolution) {
      const { model: resolvedModel, provenance, variant: resolvedVariant } = resolution
      actualModel = resolvedModel

      if (!parseModelString(actualModel)) {
@@ -569,7 +572,8 @@ To continue this session: session_id="${args.session_id}"`
      }

      let type: "user-defined" | "inherited" | "category-default" | "system-default"
      switch (source) {
      const source = provenance
      switch (provenance) {
        case "override":
          type = "user-defined"
          break
@@ -582,7 +586,7 @@ To continue this session: session_id="${args.session_id}"`
          break
      }

      modelInfo = { model: actualModel, type, source }
      modelInfo = { model: actualModel, type, source }

      const parsedModel = parseModelString(actualModel)
      const variantToUse = userCategories?.[args.category]?.variant ?? resolvedVariant ?? resolved.config.variant
@@ -780,7 +784,7 @@ To continue this session: session_id="${sessionID}"`
  }
  const agentName = args.subagent_type.trim()

  if (equalsIgnoreCase(agentName, SISYPHUS_JUNIOR_AGENT)) {
  if (agentName.toLowerCase() === SISYPHUS_JUNIOR_AGENT.toLowerCase()) {
    return `Cannot use subagent_type="${SISYPHUS_JUNIOR_AGENT}" directly. Use category parameter instead (e.g., ${categoryExamples}).

Sisyphus-Junior is spawned automatically when you specify a category. Pick the appropriate category for your task domain.`
@@ -803,12 +807,13 @@ Create the work plan directly - that's your job as the planning agent.`

  const callableAgents = agents.filter((a) => a.mode !== "primary")

  const matchedAgent = findByNameCaseInsensitive(callableAgents, agentToUse)
  const matchedAgent = callableAgents.find(
    (agent) => agent.name.toLowerCase() === agentToUse.toLowerCase()
  )
  if (!matchedAgent) {
    const isPrimaryAgent = findByNameCaseInsensitive(
      agents.filter((a) => a.mode === "primary"),
      agentToUse
    )
    const isPrimaryAgent = agents
      .filter((a) => a.mode === "primary")
      .find((agent) => agent.name.toLowerCase() === agentToUse.toLowerCase())
    if (isPrimaryAgent) {
      return `Cannot call primary agent "${isPrimaryAgent.name}" via delegate_task. Primary agents are top-level orchestrators.`
    }
@@ -1,7 +1,13 @@
import { existsSync, mkdirSync, chmodSync, unlinkSync, readdirSync } from "node:fs"
import { existsSync, readdirSync } from "node:fs"
import { join } from "node:path"
import { spawn } from "bun"
import { extractZip as extractZipBase } from "../../shared"
import {
  cleanupArchive,
  downloadArchive,
  ensureCacheDir,
  ensureExecutable,
  extractTarGz as extractTarGzArchive,
} from "../../shared/binary-downloader"

export function findFileRecursive(dir: string, filename: string): string | null {
  try {
@@ -41,16 +47,6 @@ function getRgPath(): string {
  return join(getInstallDir(), isWindows ? "rg.exe" : "rg")
}

async function downloadFile(url: string, destPath: string): Promise<void> {
  const response = await fetch(url)
  if (!response.ok) {
    throw new Error(`Failed to download: ${response.status} ${response.statusText}`)
  }

  const buffer = await response.arrayBuffer()
  await Bun.write(destPath, buffer)
}

async function extractTarGz(archivePath: string, destDir: string): Promise<void> {
  const platformKey = getPlatformKey()

@@ -62,17 +58,7 @@ async function extractTarGz(archivePath: string, destDir: string): Promise<void>
    args.push("--wildcards", "*/rg")
  }

  const proc = spawn(args, {
    cwd: destDir,
    stdout: "pipe",
    stderr: "pipe",
  })

  const exitCode = await proc.exited
  if (exitCode !== 0) {
    const stderr = await new Response(proc.stderr).text()
    throw new Error(`Failed to extract tar.gz: ${stderr}`)
  }
  await extractTarGzArchive(archivePath, destDir, { args, cwd: destDir })
}

async function extractZip(archivePath: string, destDir: string): Promise<void> {
@@ -104,14 +90,14 @@ export async function downloadAndInstallRipgrep(): Promise<string> {
    return rgPath
  }

  mkdirSync(installDir, { recursive: true })
  ensureCacheDir(installDir)

  const filename = `ripgrep-${RG_VERSION}-${config.platform}.${config.extension}`
  const url = `https://github.com/BurntSushi/ripgrep/releases/download/${RG_VERSION}/$(unknown)`
  const archivePath = join(installDir, filename)

  try {
    await downloadFile(url, archivePath)
    await downloadArchive(url, archivePath)

    if (config.extension === "tar.gz") {
      await extractTarGz(archivePath, installDir)
@@ -119,9 +105,7 @@ export async function downloadAndInstallRipgrep(): Promise<string> {
      await extractZip(archivePath, installDir)
    }

    if (process.platform !== "win32") {
      chmodSync(rgPath, 0o755)
    }
    ensureExecutable(rgPath)

    if (!existsSync(rgPath)) {
      throw new Error("ripgrep binary not found after extraction")
@@ -129,12 +113,10 @@ export async function downloadAndInstallRipgrep(): Promise<string> {

    return rgPath
  } finally {
    if (existsSync(archivePath)) {
      try {
        unlinkSync(archivePath)
      } catch {
        // Cleanup failures are non-critical
      }
    try {
      cleanupArchive(archivePath)
    } catch {
      // Cleanup failures are non-critical
    }
  }
}
@@ -96,10 +96,19 @@ The Bash tool can execute these commands directly. Do NOT retry with interactive

  const timeoutPromise = new Promise<never>((_, reject) => {
    const id = setTimeout(() => {
      proc.kill()
      reject(new Error(`Timeout after ${DEFAULT_TIMEOUT_MS}ms`))
      const timeoutError = new Error(`Timeout after ${DEFAULT_TIMEOUT_MS}ms`)
      try {
        proc.kill()
        // Fire-and-forget: wait for process exit in background to avoid zombies
        void proc.exited.catch(() => {})
      } catch {
        // Ignore kill errors; we'll still reject with timeoutError below
      }
      reject(timeoutError)
    }, DEFAULT_TIMEOUT_MS)
    proc.exited.then(() => clearTimeout(id))
    proc.exited
      .then(() => clearTimeout(id))
      .catch(() => clearTimeout(id))
  })

  // Read stdout and stderr in parallel to avoid race conditions
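The change clears the timer whether `proc.exited` resolves or rejects, so a failed process no longer leaks a pending timeout. The same pattern in a generic helper (a sketch, not the plugin's code):

```typescript
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let id: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<never>((_, reject) => {
    id = setTimeout(() => reject(new Error(`Timeout after ${ms}ms`)), ms)
  })
  // finally runs on both resolution and rejection, equivalent to pairing
  // .then(() => clearTimeout(id)) with .catch(() => clearTimeout(id))
  return Promise.race([promise, timeout]).finally(() => clearTimeout(id))
}
```

Without the rejection path, a promise that rejects before the deadline would leave the timer armed, keeping the event loop alive (and firing a pointless callback) until the timeout elapses.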
@@ -1,4 +1,5 @@
import { describe, expect, test } from "bun:test"
import type { ToolContext } from "@opencode-ai/plugin/tool"
import { normalizeArgs, validateArgs, createLookAt } from "./tools"

describe("look-at tool", () => {
@@ -92,11 +93,15 @@ describe("look-at tool", () => {
      directory: "/project",
    } as any)

    const toolContext = {
    const toolContext: ToolContext = {
      sessionID: "parent-session",
      messageID: "parent-message",
      agent: "sisyphus",
      directory: "/project",
      worktree: "/project",
      abort: new AbortController().signal,
      metadata: () => {},
      ask: async () => {},
    }

    const result = await tool.execute(
@@ -130,11 +135,15 @@ describe("look-at tool", () => {
      directory: "/project",
    } as any)

    const toolContext = {
    const toolContext: ToolContext = {
      sessionID: "parent-session",
      messageID: "parent-message",
      agent: "sisyphus",
      directory: "/project",
      worktree: "/project",
      abort: new AbortController().signal,
      metadata: () => {},
      ask: async () => {},
    }

    const result = await tool.execute(
@@ -186,11 +195,15 @@ describe("look-at tool", () => {
      directory: "/project",
    } as any)

    const toolContext = {
    const toolContext: ToolContext = {
      sessionID: "parent-session",
      messageID: "parent-message",
      agent: "sisyphus",
      directory: "/project",
      worktree: "/project",
      abort: new AbortController().signal,
      metadata: () => {},
      ask: async () => {},
    }

    await tool.execute(
@@ -3,7 +3,7 @@ import { pathToFileURL } from "node:url"
 import { tool, type PluginInput, type ToolDefinition } from "@opencode-ai/plugin"
 import { LOOK_AT_DESCRIPTION, MULTIMODAL_LOOKER_AGENT } from "./constants"
 import type { LookAtArgs } from "./types"
-import { findByNameCaseInsensitive, log, promptWithModelSuggestionRetry } from "../../shared"
+import { log, promptWithModelSuggestionRetry } from "../../shared"
 
 interface LookAtArgsWithAlias extends LookAtArgs {
   path?: string

@@ -143,7 +143,9 @@ Original error: ${createResult.error}`
   }
   const agents = ((agentsResult as { data?: AgentInfo[] })?.data ?? agentsResult) as AgentInfo[] | undefined
   if (agents?.length) {
-    const matchedAgent = findByNameCaseInsensitive(agents, MULTIMODAL_LOOKER_AGENT)
+    const matchedAgent = agents.find(
+      (agent) => agent.name.toLowerCase() === MULTIMODAL_LOOKER_AGENT.toLowerCase()
+    )
     if (matchedAgent?.model) {
       agentModel = matchedAgent.model
     }

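The inlined lookup above is equivalent to a small generic helper. For reference, a standalone version of what the removed `findByNameCaseInsensitive` import presumably did; the real signature in `"../../shared"` may differ, so treat this as an illustrative reconstruction:

```typescript
// Assumed shape of the removed shared helper: find an item whose `name`
// matches the target, ignoring case. The actual shared implementation may differ.
function findByNameCaseInsensitive<T extends { name: string }>(
  items: T[],
  name: string,
): T | undefined {
  const target = name.toLowerCase()
  return items.find((item) => item.name.toLowerCase() === target)
}
```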
@@ -13,6 +13,37 @@ import { getLanguageId } from "./config"
 import type { Diagnostic, ResolvedServer } from "./types"
 import { log } from "../../shared/logger"
 
+/**
+ * Check if the current Bun version is affected by Windows LSP crash bug.
+ * Bun v1.3.5 and earlier have a known segmentation fault issue on Windows
+ * when spawning LSP servers. This was fixed in Bun v1.3.6.
+ * See: https://github.com/oven-sh/bun/issues/25798
+ */
+function checkWindowsBunVersion(): { isAffected: boolean; message: string } | null {
+  if (process.platform !== "win32") return null
+
+  const version = Bun.version
+  const [major, minor, patch] = version.split(".").map((v) => parseInt(v.split("-")[0], 10))
+
+  // Bun v1.3.5 and earlier are affected
+  if (major < 1 || (major === 1 && minor < 3) || (major === 1 && minor === 3 && patch < 6)) {
+    return {
+      isAffected: true,
+      message:
+        `⚠️ Windows + Bun v${version} detected: Known segmentation fault bug with LSP.\n` +
+        `   This causes crashes when using LSP tools (lsp_diagnostics, lsp_goto_definition, etc.).\n` +
+        `   \n` +
+        `   SOLUTION: Upgrade to Bun v1.3.6 or later:\n` +
+        `     powershell -c "irm bun.sh/install.ps1|iex"\n` +
+        `   \n` +
+        `   WORKAROUND: Use WSL instead of native Windows.\n` +
+        `   See: https://github.com/oven-sh/bun/issues/25798`,
+    }
+  }
+
+  return null
+}
+
 interface ManagedClient {
   client: LSPClient
   lastUsedAt: number

@@ -33,10 +64,12 @@ class LSPServerManager {
   }
 
   private registerProcessCleanup(): void {
-    const cleanup = () => {
+    // Synchronous cleanup for 'exit' event (cannot await)
+    const syncCleanup = () => {
       for (const [, managed] of this.clients) {
         try {
-          managed.client.stop()
+          // Fire-and-forget during sync exit - process is terminating
+          void managed.client.stop().catch(() => {})
        } catch {}
      }
      this.clients.clear()

@@ -46,23 +79,30 @@ class LSPServerManager {
       }
     }
 
-    process.on("exit", cleanup)
+    // Async cleanup for signal handlers - properly await all stops
+    const asyncCleanup = async () => {
+      const stopPromises: Promise<void>[] = []
+      for (const [, managed] of this.clients) {
+        stopPromises.push(managed.client.stop().catch(() => {}))
+      }
+      await Promise.allSettled(stopPromises)
+      this.clients.clear()
+      if (this.cleanupInterval) {
+        clearInterval(this.cleanupInterval)
+        this.cleanupInterval = null
+      }
+    }
 
-    process.on("SIGINT", () => {
-      cleanup()
-      process.exit(0)
-    })
+    process.on("exit", syncCleanup)
 
-    process.on("SIGTERM", () => {
-      cleanup()
-      process.exit(0)
-    })
+    // Don't call process.exit() here - let other handlers complete their cleanup first
+    // The background-agent manager handles the final exit call
+    // Use async handlers to properly await LSP subprocess cleanup
+    process.on("SIGINT", () => void asyncCleanup().catch(() => {}))
+    process.on("SIGTERM", () => void asyncCleanup().catch(() => {}))
 
     if (process.platform === "win32") {
-      process.on("SIGBREAK", () => {
-        cleanup()
-        process.exit(0)
-      })
+      process.on("SIGBREAK", () => void asyncCleanup().catch(() => {}))
    }
  }

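The version gate added above reduces to a three-way lexicographic comparison on `major.minor.patch`, with any prerelease suffix stripped before parsing. Extracted as a pure function for clarity (the cutoff v1.3.6 comes from the hunk; the function name here is illustrative):

```typescript
// True when a Bun version string falls in the affected range (v1.3.5 and
// earlier), per https://github.com/oven-sh/bun/issues/25798.
function isAffectedBunVersion(version: string): boolean {
  // Strip prerelease suffixes per component, e.g. "1.3.5-canary.1" -> [1, 3, 5]
  const [major, minor, patch] = version.split(".").map((v) => parseInt(v.split("-")[0], 10))
  return major < 1 || (major === 1 && minor < 3) || (major === 1 && minor === 3 && patch < 6)
}
```

Note the per-component suffix strip matters: without it, `parseInt("5-canary")` would still yield `5`, but a leading-tag scheme like `5-rc` in another position could parse as `NaN` and make every comparison false.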
@@ -226,6 +266,13 @@ export class LSPClient {
   ) {}
 
   async start(): Promise<void> {
+    const windowsCheck = checkWindowsBunVersion()
+    if (windowsCheck?.isAffected) {
+      throw new Error(
+        `LSP server cannot be started safely.\n\n${windowsCheck.message}`
+      )
+    }
+
     this.proc = spawn(this.server.command, {
       stdin: "pipe",
       stdout: "pipe",

@@ -532,8 +579,34 @@ export class LSPClient {
       this.connection.dispose()
       this.connection = null
     }
-    this.proc?.kill()
-    this.proc = null
+    const proc = this.proc
+    if (proc) {
+      this.proc = null
+      let exitedBeforeTimeout = false
+      try {
+        proc.kill()
+        // Wait for exit with timeout to prevent indefinite hang
+        let timeoutId: ReturnType<typeof setTimeout> | undefined
+        const timeoutPromise = new Promise<void>((resolve) => {
+          timeoutId = setTimeout(resolve, 5000)
+        })
+        await Promise.race([
+          proc.exited.then(() => { exitedBeforeTimeout = true }).finally(() => timeoutId && clearTimeout(timeoutId)),
+          timeoutPromise,
+        ])
+        if (!exitedBeforeTimeout) {
+          log("[LSPClient] Process did not exit within timeout, escalating to SIGKILL")
+          try {
+            proc.kill("SIGKILL")
+            // Wait briefly for SIGKILL to take effect
+            await Promise.race([
+              proc.exited,
+              new Promise<void>((resolve) => setTimeout(resolve, 1000)),
+            ])
+          } catch {}
+        }
+      } catch {}
+    }
    this.processExited = true
    this.diagnosticsStore.clear()
  }

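The graceful-stop logic above follows a standard escalation ladder: polite kill, bounded wait, then SIGKILL with a short second wait. A standalone sketch of the same control flow; `KillableProc` and `killWithEscalation` are illustrative names, and the 5000/1000 ms defaults mirror the hunk:

```typescript
interface KillableProc {
  exited: Promise<void>
  kill(signal?: string): void
}

// Kill `proc`, wait up to `graceMs` for it to exit, then escalate to SIGKILL
// and wait up to `forceMs` more. Resolves true if the process exited in time.
async function killWithEscalation(
  proc: KillableProc,
  graceMs = 5000,
  forceMs = 1000,
): Promise<boolean> {
  let exited = false
  let timeoutId: ReturnType<typeof setTimeout> | undefined
  const grace = new Promise<void>((resolve) => {
    timeoutId = setTimeout(resolve, graceMs)
  })
  try {
    proc.kill()
    await Promise.race([
      proc.exited.then(() => { exited = true }).finally(() => timeoutId && clearTimeout(timeoutId)),
      grace,
    ])
    if (!exited) {
      proc.kill("SIGKILL")
      await Promise.race([
        proc.exited.then(() => { exited = true }),
        new Promise<void>((resolve) => setTimeout(resolve, forceMs)),
      ])
    }
  } catch {
    // Swallow kill errors, matching the defensive catch in the hunk above
  }
  return exited
}
```

Nulling `this.proc` before killing (as the hunk does) is the key ordering detail: it makes concurrent `stop()` calls idempotent, since only the first caller sees a non-null handle.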
@@ -2,11 +2,17 @@ import { describe, test, expect } from "bun:test"
 import { session_list, session_read, session_search, session_info } from "./tools"
+import type { ToolContext } from "@opencode-ai/plugin/tool"
 
 const projectDir = "/Users/yeongyu/local-workspaces/oh-my-opencode"
 
-const mockContext = {
+const mockContext: ToolContext = {
   sessionID: "test-session",
   messageID: "test-message",
   agent: "test-agent",
+  directory: projectDir,
+  worktree: projectDir,
+  abort: new AbortController().signal,
+  metadata: () => {},
+  ask: async () => {},
 }
 
 describe("session-manager tools", () => {

@@ -1,4 +1,5 @@
 import { describe, it, expect, beforeEach, mock } from "bun:test"
+import type { ToolContext } from "@opencode-ai/plugin/tool"
 import { createSkillMcpTool, applyGrepFilter } from "./tools"
 import { SkillMcpManager } from "../../features/skill-mcp-manager"
 import type { LoadedSkill } from "../../features/opencode-skill-loader/types"

@@ -18,11 +19,15 @@ function createMockSkillWithMcp(name: string, mcpServers: Record<string, unknown
   }
 }
 
-const mockContext = {
+const mockContext: ToolContext = {
   sessionID: "test-session",
   messageID: "msg-1",
   agent: "test-agent",
   directory: "/test",
+  worktree: "/test",
+  abort: new AbortController().signal,
+  metadata: () => {},
+  ask: async () => {},
 }
 
 describe("skill_mcp tool", () => {

@@ -1,4 +1,5 @@
 import { describe, it, expect, beforeEach, mock, spyOn } from "bun:test"
+import type { ToolContext } from "@opencode-ai/plugin/tool"
 import * as fs from "node:fs"
 import { createSkillTool } from "./tools"
 import { SkillMcpManager } from "../../features/skill-mcp-manager"

@@ -50,11 +51,15 @@ function createMockSkillWithMcp(name: string, mcpServers: Record<string, unknown
   }
 }
 
-const mockContext = {
+const mockContext: ToolContext = {
   sessionID: "test-session",
   messageID: "msg-1",
   agent: "test-agent",
   directory: "/test",
+  worktree: "/test",
+  abort: new AbortController().signal,
+  metadata: () => {},
+  ask: async () => {},
 }
 
 describe("skill tool - synchronous description", () => {