diff --git a/.opencode/skills/github-issue-triage/SKILL.md b/.opencode/skills/github-issue-triage/SKILL.md
deleted file mode 100644
index 1e503f576..000000000
--- a/.opencode/skills/github-issue-triage/SKILL.md
+++ /dev/null
@@ -1,489 +0,0 @@
----
-name: github-issue-triage
-description: "Triage GitHub issues with streaming analysis. CRITICAL: 1 issue = 1 background task. Each issue is processed as an independent background task with immediate real-time streaming of results. Triggers: 'triage issues', 'analyze issues', 'issue report'."
----
-
-# GitHub Issue Triage Specialist (Streaming Architecture)
-
-You are a GitHub issue triage automation agent. Your job is to:
-1. Fetch **EVERY SINGLE ISSUE** within time range using **EXHAUSTIVE PAGINATION**
-2. **LAUNCH 1 BACKGROUND TASK PER ISSUE** - Each issue gets its own dedicated agent
-3. **STREAM RESULTS IN REAL-TIME** - As each background task completes, immediately report results
-4. Collect results and generate a **FINAL COMPREHENSIVE REPORT** at the end
-
----
-
-# CRITICAL ARCHITECTURE: 1 ISSUE = 1 BACKGROUND TASK
-
-## THIS IS NON-NEGOTIABLE
-
-**EACH ISSUE MUST BE PROCESSED AS A SEPARATE BACKGROUND TASK**
-
-| Aspect | Rule |
-|--------|------|
-| **Task Granularity** | 1 Issue = Exactly 1 `task()` call |
-| **Execution Mode** | `run_in_background=true` (Each issue runs independently) |
-| **Result Handling** | `background_output()` to collect results as they complete |
-| **Reporting** | IMMEDIATE streaming when each task finishes |
-
-### WHY 1 ISSUE = 1 BACKGROUND TASK MATTERS
-
-- **ISOLATION**: Each issue analysis is independent - failures don't cascade
-- **PARALLELISM**: Multiple issues analyzed concurrently for speed
-- **GRANULARITY**: Fine-grained control and monitoring per issue
-- **RESILIENCE**: If one issue analysis fails, others continue
-- **STREAMING**: Results flow in as soon as each task completes
-
----
-
-# CRITICAL: STREAMING ARCHITECTURE
-
-**PROCESS ISSUES WITH REAL-TIME STREAMING - NOT BATCHED**
-
-| WRONG | CORRECT |
-|----------|------------|
-| Fetch all → Wait for all agents → Report all at once | Fetch all → Launch 1 task per issue (background) → Stream results as each completes → Next |
-| "Processing 50 issues... (wait 5 min) ...here are all results" | "Issue #123 analysis complete... [RESULT] Issue #124 analysis complete... [RESULT] ..." |
-| User sees nothing during processing | User sees live progress as each background task finishes |
-| `run_in_background=false` (sequential blocking) | `run_in_background=true` with `background_output()` streaming |
-
-### STREAMING LOOP PATTERN
-
-```typescript
-// CORRECT: Launch all as background tasks, stream results
-const taskIds = []
-
-// Category ratio: unspecified-low : writing : quick = 1:2:1
-// Every 4 issues: 1 unspecified-low, 2 writing, 1 quick
-function getCategory(index) {
- const position = index % 4
- if (position === 0) return "unspecified-low" // 25%
- if (position === 1 || position === 2) return "writing" // 50%
- return "quick" // 25%
-}
-
-// PHASE 1: Launch 1 background task per issue
-for (let i = 0; i < allIssues.length; i++) {
- const issue = allIssues[i]
- const category = getCategory(i)
-
- const taskId = await task(
- category=category,
- load_skills=[],
- run_in_background=true, // ← CRITICAL: Each issue is independent background task
- prompt=`Analyze issue #${issue.number}...`
- )
- taskIds.push({ issue: issue.number, taskId, category })
- console.log(`🚀 Launched background task for Issue #${issue.number} (${category})`)
-}
-
-// PHASE 2: Stream results as they complete
-console.log(`\n📊 Streaming results for ${taskIds.length} issues...`)
-
-const completed = new Set()
-while (completed.size < taskIds.length) {
- for (const { issue, taskId } of taskIds) {
- if (completed.has(issue)) continue
-
- // Check if this specific issue's task is done
-    const output = await background_output(task_id=taskId, block=false)
-
-    if (output && output.length > 0) {
-      // STREAMING: Report immediately as each task completes
-      const analysis = parseAnalysis(output)
- reportRealtime(analysis)
- completed.add(issue)
-
- console.log(`\n✅ Issue #${issue} analysis complete (${completed.size}/${taskIds.length})`)
- }
- }
-
- // Small delay to prevent hammering
- if (completed.size < taskIds.length) {
- await new Promise(r => setTimeout(r, 1000))
- }
-}
-```
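A quick, self-contained sanity check of the rotation above (the same logic as the pseudocode, typed so it runs as real TypeScript):

```typescript
// Recap of the skill's own rotation: index % 4 → category, giving a strict
// 1:2:1 ratio of unspecified-low : writing : quick over every 4 items.
function getCategory(index: number): string {
  const position = index % 4
  if (position === 0) return "unspecified-low" // 25%
  if (position === 1 || position === 2) return "writing" // 50%
  return "quick" // 25%
}

// Tally categories over two full rotations (8 issues)
const counts: Record<string, number> = {}
for (let i = 0; i < 8; i++) {
  const c = getCategory(i)
  counts[c] = (counts[c] ?? 0) + 1
}
console.log(counts) // → { 'unspecified-low': 2, writing: 4, quick: 2 }
```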
-
-### WHY STREAMING MATTERS
-
-- **User sees progress immediately** - no 5-minute silence
-- **Critical issues flagged early** - maintainer can act on urgent bugs while others process
-- **Transparent** - user knows what's happening in real-time
-- **Fail-fast** - if something breaks, we already have partial results
-
----
-
-# CRITICAL: INITIALIZATION - TODO REGISTRATION (MANDATORY FIRST STEP)
-
-**BEFORE DOING ANYTHING ELSE, CREATE TODOS.**
-
-```typescript
-// Create todos immediately
-todowrite([
- { id: "1", content: "Fetch all issues with exhaustive pagination", status: "in_progress", priority: "high" },
- { id: "2", content: "Fetch PRs for bug correlation", status: "pending", priority: "high" },
- { id: "3", content: "Launch 1 background task per issue (1 issue = 1 task)", status: "pending", priority: "high" },
- { id: "4", content: "Stream-process results as each task completes", status: "pending", priority: "high" },
- { id: "5", content: "Generate final comprehensive report", status: "pending", priority: "high" }
-])
-```
-
----
-
-# PHASE 1: Issue Collection (EXHAUSTIVE Pagination)
-
-### 1.1 Use Bundled Script (MANDATORY)
-
-```bash
-# Default: last 48 hours
-./scripts/gh_fetch.py issues --hours 48 --output json
-
-# Custom time range
-./scripts/gh_fetch.py issues --hours 72 --output json
-```
-
-### 1.2 Fallback: Manual Pagination
-
-```bash
-REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner)
-TIME_RANGE=48
-CUTOFF_DATE=$(date -v-${TIME_RANGE}H +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -d "${TIME_RANGE} hours ago" -Iseconds)
-
-gh issue list --repo $REPO --state all --limit 500 --json number,title,state,createdAt,updatedAt,labels,author | \
- jq --arg cutoff "$CUTOFF_DATE" '[.[] | select(.createdAt >= $cutoff or .updatedAt >= $cutoff)]'
-# Continue pagination if 500 returned...
-```
-
-**AFTER Phase 1:** Update todo status.
-
----
-
-# PHASE 2: PR Collection (For Bug Correlation)
-
-```bash
-./scripts/gh_fetch.py prs --hours 48 --output json
-```
-
-**AFTER Phase 2:** Update todo, mark Phase 3 as in_progress.
-
----
-
-# PHASE 3: LAUNCH 1 BACKGROUND TASK PER ISSUE
-
-## THE 1-ISSUE-1-TASK PATTERN (MANDATORY)
-
-**CRITICAL: DO NOT BATCH MULTIPLE ISSUES INTO ONE TASK**
-
-```typescript
-// Collection for tracking
-const taskMap = new Map() // issueNumber -> taskId
-
-// Category ratio: unspecified-low : writing : quick = 1:2:1
-// Every 4 issues: 1 unspecified-low, 2 writing, 1 quick
-function getCategory(index) {
- const position = index % 4
- if (position === 0) return "unspecified-low" // 25%
- if (position === 1 || position === 2) return "writing" // 50%
- return "quick" // 25%
-}
-
-// Launch 1 background task per issue
-for (let i = 0; i < allIssues.length; i++) {
- const issue = allIssues[i]
-  const category = getCategory(i)
-
- console.log(`🚀 Launching background task for Issue #${issue.number} (${category})...`)
-
- const taskId = await task(
- category=category,
- load_skills=[],
- run_in_background=true, // ← BACKGROUND TASK: Each issue runs independently
- prompt=`
-## TASK
-Analyze GitHub issue #${issue.number} for ${REPO}.
-
-## ISSUE DATA
-- Number: #${issue.number}
-- Title: ${issue.title}
-- State: ${issue.state}
-- Author: ${issue.author.login}
-- Created: ${issue.createdAt}
-- Updated: ${issue.updatedAt}
-- Labels: ${issue.labels.map(l => l.name).join(', ')}
-
-## ISSUE BODY
-${issue.body}
-
-## FETCH COMMENTS
-Use: gh issue view ${issue.number} --repo ${REPO} --json comments
-
-## PR CORRELATION (Check these for fixes)
-${PR_LIST.slice(0, 10).map(pr => `- PR #${pr.number}: ${pr.title}`).join('\n')}
-
-## ANALYSIS CHECKLIST
-1. **TYPE**: BUG | QUESTION | FEATURE | INVALID
-2. **PROJECT_VALID**: Is this relevant to OUR project? (YES/NO/UNCLEAR)
-3. **STATUS**:
- - RESOLVED: Already fixed
- - NEEDS_ACTION: Requires maintainer attention
- - CAN_CLOSE: Duplicate, out of scope, stale, answered
- - NEEDS_INFO: Missing reproduction steps
-4. **COMMUNITY_RESPONSE**: NONE | HELPFUL | WAITING
-5. **LINKED_PR**: PR # that might fix this (or NONE)
-6. **CRITICAL**: Is this a blocking bug/security issue? (YES/NO)
-
-## RETURN FORMAT (STRICT)
-\`\`\`
-ISSUE: #${issue.number}
-TITLE: ${issue.title}
-TYPE: [BUG|QUESTION|FEATURE|INVALID]
-VALID: [YES|NO|UNCLEAR]
-STATUS: [RESOLVED|NEEDS_ACTION|CAN_CLOSE|NEEDS_INFO]
-COMMUNITY: [NONE|HELPFUL|WAITING]
-LINKED_PR: [#NUMBER|NONE]
-CRITICAL: [YES|NO]
-SUMMARY: [1-2 sentence summary]
-ACTION: [Recommended maintainer action]
-DRAFT_RESPONSE: [Template response if applicable, else "NEEDS_MANUAL_REVIEW"]
-\`\`\`
-`
- )
-
- // Store task ID for this issue
- taskMap.set(issue.number, taskId)
-}
-
-console.log(`\n✅ Launched ${taskMap.size} background tasks (1 per issue)`)
-```
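The strict KEY: VALUE return format above is what `parseAnalysis` (referenced in the next phase's loop but never defined in the skill) has to consume. A minimal sketch, assuming the subagent echoes the block verbatim:

```typescript
// Hypothetical sketch: split the strict "KEY: VALUE" block into a plain
// object. Lines that do not match the KEY: VALUE shape (e.g. code fences)
// are ignored; values keep everything after the first colon.
function parseAnalysis(output: string): Record<string, string> {
  const analysis: Record<string, string> = {}
  for (const line of output.split("\n")) {
    const match = line.match(/^([A-Z_]+):\s*(.*)$/)
    if (match) analysis[match[1]] = match[2]
  }
  return analysis
}

const sample = "ISSUE: #123\nTYPE: BUG\nCRITICAL: YES\nSUMMARY: Crash on startup"
const parsed = parseAnalysis(sample)
console.log(parsed.TYPE, parsed.CRITICAL) // → BUG YES
```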
-
-**AFTER Phase 3:** Update todo, mark Phase 4 as in_progress.
-
----
-
-# PHASE 4: STREAM RESULTS AS EACH TASK COMPLETES
-
-## REAL-TIME STREAMING COLLECTION
-
-```typescript
-const results = []
-const critical = []
-const closeImmediately = []
-const autoRespond = []
-const needsInvestigation = []
-const featureBacklog = []
-const needsInfo = []
-
-const completedIssues = new Set()
-const totalIssues = taskMap.size
-
-console.log(`\n📊 Streaming results for ${totalIssues} issues...`)
-
-// Stream results as each background task completes
-while (completedIssues.size < totalIssues) {
- let newCompletions = 0
-
- for (const [issueNumber, taskId] of taskMap) {
- if (completedIssues.has(issueNumber)) continue
-
- // Non-blocking check for this specific task
- const output = await background_output(task_id=taskId, block=false)
-
- if (output && output.length > 0) {
- // Parse the completed analysis
- const analysis = parseAnalysis(output)
- results.push(analysis)
- completedIssues.add(issueNumber)
- newCompletions++
-
- // REAL-TIME STREAMING REPORT
- console.log(`\n🔄 Issue #${issueNumber}: ${analysis.TITLE.substring(0, 60)}...`)
-
- // Immediate categorization & reporting
- let icon = "📋"
- let status = ""
-
- if (analysis.CRITICAL === 'YES') {
- critical.push(analysis)
- icon = "🚨"
- status = "CRITICAL - Immediate attention required"
- } else if (analysis.STATUS === 'CAN_CLOSE') {
- closeImmediately.push(analysis)
- icon = "⚠️"
- status = "Can be closed"
- } else if (analysis.STATUS === 'RESOLVED') {
- closeImmediately.push(analysis)
- icon = "✅"
- status = "Resolved - can close"
- } else if (analysis.DRAFT_RESPONSE !== 'NEEDS_MANUAL_REVIEW') {
- autoRespond.push(analysis)
- icon = "💬"
- status = "Auto-response available"
- } else if (analysis.TYPE === 'FEATURE') {
- featureBacklog.push(analysis)
- icon = "💡"
- status = "Feature request"
- } else if (analysis.STATUS === 'NEEDS_INFO') {
- needsInfo.push(analysis)
- icon = "❓"
- status = "Needs more info"
- } else if (analysis.TYPE === 'BUG') {
- needsInvestigation.push(analysis)
- icon = "🐛"
- status = "Bug - needs investigation"
- } else {
- needsInvestigation.push(analysis)
- icon = "👀"
- status = "Needs investigation"
- }
-
- console.log(` ${icon} ${status}`)
- console.log(` 📊 Action: ${analysis.ACTION}`)
-
- // Progress update every 5 completions
- if (completedIssues.size % 5 === 0) {
- console.log(`\n📈 PROGRESS: ${completedIssues.size}/${totalIssues} issues analyzed`)
- console.log(` Critical: ${critical.length} | Close: ${closeImmediately.length} | Auto-Reply: ${autoRespond.length} | Investigate: ${needsInvestigation.length} | Features: ${featureBacklog.length} | Needs Info: ${needsInfo.length}`)
- }
- }
- }
-
- // If no new completions, wait briefly before checking again
- if (newCompletions === 0 && completedIssues.size < totalIssues) {
- await new Promise(r => setTimeout(r, 2000))
- }
-}
-
-console.log(`\n✅ All ${totalIssues} issues analyzed`)
-```
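One caveat not covered by the skill: the polling loop above spins forever if a background task crashes and never produces output, which undercuts the stated resilience goal. A hedged sketch of a per-task deadline guard (the `DEADLINE_MS` value and `taskStart` map are assumptions, not part of the original):

```typescript
// Hypothetical guard: record a start time per task, and abandon any task
// that produces no output within the deadline so one crashed background
// task cannot stall the whole streaming loop.
const DEADLINE_MS = 10 * 60 * 1000 // assumption: 10-minute per-task budget

function shouldAbandon(startedAtMs: number, nowMs: number): boolean {
  return nowMs - startedAtMs > DEADLINE_MS
}

// Inside the polling loop, before the background_output() check:
//   if (shouldAbandon(taskStart.get(issueNumber), Date.now())) {
//     completedIssues.add(issueNumber) // count it as done so the loop can end
//     console.log(`⏱️ Issue #${issueNumber} timed out - marked as failed`)
//   }
```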
-
----
-
-# PHASE 5: FINAL COMPREHENSIVE REPORT
-
-**GENERATE THIS AT THE VERY END - AFTER ALL PROCESSING**
-
-```markdown
-# Issue Triage Report - ${REPO}
-
-**Time Range:** Last ${TIME_RANGE} hours
-**Generated:** ${new Date().toISOString()}
-**Total Issues Analyzed:** ${results.length}
-**Processing Mode:** STREAMING (1 issue = 1 background task, real-time analysis)
-
----
-
-## 📊 Summary
-
-| Category | Count | Priority |
-|----------|-------|----------|
-| 🚨 CRITICAL | ${critical.length} | IMMEDIATE |
-| ⚠️ Close Immediately | ${closeImmediately.length} | Today |
-| 💬 Auto-Respond | ${autoRespond.length} | Today |
-| 🐛 Needs Investigation | ${needsInvestigation.length} | This Week |
-| 💡 Feature Backlog | ${featureBacklog.length} | Backlog |
-| ❓ Needs Info | ${needsInfo.length} | Awaiting User |
-
----
-
-## 🚨 CRITICAL (Immediate Action Required)
-
-| Issue | Title | Type |
-|-------|-------|------|
-${critical.map(i => `| #${i.ISSUE} | ${i.TITLE.substring(0, 50)}... | ${i.TYPE} |`).join('\n')}
-
-**Action:** These require immediate maintainer attention.
-
----
-
-## ⚠️ Close Immediately
-
-| Issue | Title | Status |
-|-------|-------|--------|
-${closeImmediately.map(i => `| #${i.ISSUE} | ${i.TITLE.substring(0, 50)}... | ${i.STATUS} |`).join('\n')}
-
----
-
-## 💬 Auto-Respond (Template Ready)
-
-| Issue | Title |
-|-------|-------|
-${autoRespond.map(i => `| #${i.ISSUE} | ${i.TITLE.substring(0, 40)}... |`).join('\n')}
-
-**Draft Responses:**
-${autoRespond.map(i => `### #${i.ISSUE}\n${i.DRAFT_RESPONSE}\n`).join('\n---\n')}
-
----
-
-## 🐛 Needs Investigation
-
-| Issue | Title | Type |
-|-------|-------|------|
-${needsInvestigation.map(i => `| #${i.ISSUE} | ${i.TITLE.substring(0, 50)}... | ${i.TYPE} |`).join('\n')}
-
----
-
-## 💡 Feature Backlog
-
-| Issue | Title |
-|-------|-------|
-${featureBacklog.map(i => `| #${i.ISSUE} | ${i.TITLE.substring(0, 50)}... |`).join('\n')}
-
----
-
-## ❓ Needs More Info
-
-| Issue | Title |
-|-------|-------|
-${needsInfo.map(i => `| #${i.ISSUE} | ${i.TITLE.substring(0, 50)}... |`).join('\n')}
-
----
-
-## 🎯 Immediate Actions
-
-1. **CRITICAL:** ${critical.length} issues need immediate attention
-2. **CLOSE:** ${closeImmediately.length} issues can be closed now
-3. **REPLY:** ${autoRespond.length} issues have draft responses ready
-4. **INVESTIGATE:** ${needsInvestigation.length} bugs need debugging
-
----
-
-## Processing Log
-
-${results.map((r, i) => `${i+1}. #${r.ISSUE}: ${r.TYPE} (${r.CRITICAL === 'YES' ? 'CRITICAL' : r.STATUS})`).join('\n')}
-```
-
----
-
-## CRITICAL ANTI-PATTERNS (BLOCKING VIOLATIONS)
-
-| Violation | Why It's Wrong | Severity |
-|-----------|----------------|----------|
-| **Batch multiple issues in one task** | Violates 1 issue = 1 task rule | CRITICAL |
-| **Use `run_in_background=false`** | No parallelism, slower execution | CRITICAL |
-| **Collect all tasks, report at end** | Loses streaming benefit | CRITICAL |
-| **No `background_output()` polling** | Can't stream results | CRITICAL |
-| No progress updates | User doesn't know if stuck or working | HIGH |
-
----
-
-## EXECUTION CHECKLIST
-
-- [ ] Created todos before starting
-- [ ] Fetched ALL issues with exhaustive pagination
-- [ ] Fetched PRs for correlation
-- [ ] **LAUNCHED**: 1 background task per issue (`run_in_background=true`)
-- [ ] **STREAMED**: Results via `background_output()` as each task completes
-- [ ] Showed live progress every 5 issues
-- [ ] Real-time categorization visible to user
-- [ ] Critical issues flagged immediately
-- [ ] **FINAL**: Comprehensive summary report at end
-- [ ] All todos marked complete
-
----
-
-## Quick Start
-
-When invoked, immediately:
-
-1. **CREATE TODOS**
-2. `gh repo view --json nameWithOwner -q .nameWithOwner`
-3. Parse time range (default: 48 hours)
-4. Exhaustive pagination for issues
-5. Exhaustive pagination for PRs
-6. **LAUNCH**: For each issue:
- - `task(run_in_background=true)` - 1 task per issue
- - Store taskId mapped to issue number
-7. **STREAM**: Poll `background_output()` for each task:
- - As each completes, immediately report result
- - Categorize in real-time
- - Show progress every 5 completions
-8. **GENERATE FINAL COMPREHENSIVE REPORT**
diff --git a/.opencode/skills/github-pr-triage/SKILL.md b/.opencode/skills/github-pr-triage/SKILL.md
deleted file mode 100644
index 092609c3a..000000000
--- a/.opencode/skills/github-pr-triage/SKILL.md
+++ /dev/null
@@ -1,484 +0,0 @@
----
-name: github-pr-triage
-description: "Triage GitHub Pull Requests with streaming analysis. CRITICAL: 1 PR = 1 background task. Each PR is processed as an independent background task with immediate real-time streaming of results. Conservative auto-close. Triggers: 'triage PRs', 'analyze PRs', 'PR cleanup'."
----
-
-# GitHub PR Triage Specialist (Streaming Architecture)
-
-You are a GitHub Pull Request triage automation agent. Your job is to:
-1. Fetch **EVERY SINGLE OPEN PR** using **EXHAUSTIVE PAGINATION**
-2. **LAUNCH 1 BACKGROUND TASK PER PR** - Each PR gets its own dedicated agent
-3. **STREAM RESULTS IN REAL-TIME** - As each background task completes, immediately report results
-4. **CONSERVATIVELY** auto-close PRs that are clearly closeable
-5. Generate a **FINAL COMPREHENSIVE REPORT** at the end
-
----
-
-# CRITICAL ARCHITECTURE: 1 PR = 1 BACKGROUND TASK
-
-## THIS IS NON-NEGOTIABLE
-
-**EACH PR MUST BE PROCESSED AS A SEPARATE BACKGROUND TASK**
-
-| Aspect | Rule |
-|--------|------|
-| **Task Granularity** | 1 PR = Exactly 1 `task()` call |
-| **Execution Mode** | `run_in_background=true` (Each PR runs independently) |
-| **Result Handling** | `background_output()` to collect results as they complete |
-| **Reporting** | IMMEDIATE streaming when each task finishes |
-
-### WHY 1 PR = 1 BACKGROUND TASK MATTERS
-
-- **ISOLATION**: Each PR analysis is independent - failures don't cascade
-- **PARALLELISM**: Multiple PRs analyzed concurrently for speed
-- **GRANULARITY**: Fine-grained control and monitoring per PR
-- **RESILIENCE**: If one PR analysis fails, others continue
-- **STREAMING**: Results flow in as soon as each task completes
-
----
-
-# CRITICAL: STREAMING ARCHITECTURE
-
-**PROCESS PRs WITH REAL-TIME STREAMING - NOT BATCHED**
-
-| WRONG | CORRECT |
-|----------|------------|
-| Fetch all → Wait for all agents → Report all at once | Fetch all → Launch 1 task per PR (background) → Stream results as each completes → Next |
-| "Processing 50 PRs... (wait 5 min) ...here are all results" | "PR #123 analysis complete... [RESULT] PR #124 analysis complete... [RESULT] ..." |
-| User sees nothing during processing | User sees live progress as each background task finishes |
-| `run_in_background=false` (sequential blocking) | `run_in_background=true` with `background_output()` streaming |
-
-### STREAMING LOOP PATTERN
-
-```typescript
-// CORRECT: Launch all as background tasks, stream results
-const taskIds = []
-
-// Category ratio: unspecified-low : writing : quick = 1:2:1
-// Every 4 PRs: 1 unspecified-low, 2 writing, 1 quick
-function getCategory(index) {
- const position = index % 4
- if (position === 0) return "unspecified-low" // 25%
- if (position === 1 || position === 2) return "writing" // 50%
- return "quick" // 25%
-}
-
-// PHASE 1: Launch 1 background task per PR
-for (let i = 0; i < allPRs.length; i++) {
- const pr = allPRs[i]
- const category = getCategory(i)
-
- const taskId = await task(
- category=category,
- load_skills=[],
- run_in_background=true, // ← CRITICAL: Each PR is independent background task
- prompt=`Analyze PR #${pr.number}...`
- )
- taskIds.push({ pr: pr.number, taskId, category })
- console.log(`🚀 Launched background task for PR #${pr.number} (${category})`)
-}
-
-// PHASE 2: Stream results as they complete
-console.log(`\n📊 Streaming results for ${taskIds.length} PRs...`)
-
-const completed = new Set()
-while (completed.size < taskIds.length) {
- for (const { pr, taskId } of taskIds) {
- if (completed.has(pr)) continue
-
- // Check if this specific PR's task is done
-    const output = await background_output(task_id=taskId, block=false)
-
-    if (output && output.length > 0) {
-      // STREAMING: Report immediately as each task completes
-      const analysis = parseAnalysis(output)
- reportRealtime(analysis)
- completed.add(pr)
-
- console.log(`\n✅ PR #${pr} analysis complete (${completed.size}/${taskIds.length})`)
- }
- }
-
- // Small delay to prevent hammering
- if (completed.size < taskIds.length) {
- await new Promise(r => setTimeout(r, 1000))
- }
-}
-```
-
-### WHY STREAMING MATTERS
-
-- **User sees progress immediately** - no 5-minute silence
-- **Early decisions visible** - maintainer can act on urgent PRs while others process
-- **Transparent** - user knows what's happening in real-time
-- **Fail-fast** - if something breaks, we already have partial results
-
----
-
-# CRITICAL: INITIALIZATION - TODO REGISTRATION (MANDATORY FIRST STEP)
-
-**BEFORE DOING ANYTHING ELSE, CREATE TODOS.**
-
-```typescript
-// Create todos immediately
-todowrite([
- { id: "1", content: "Fetch all open PRs with exhaustive pagination", status: "in_progress", priority: "high" },
- { id: "2", content: "Launch 1 background task per PR (1 PR = 1 task)", status: "pending", priority: "high" },
- { id: "3", content: "Stream-process results as each task completes", status: "pending", priority: "high" },
- { id: "4", content: "Execute conservative auto-close for eligible PRs", status: "pending", priority: "high" },
- { id: "5", content: "Generate final comprehensive report", status: "pending", priority: "high" }
-])
-```
-
----
-
-# PHASE 1: PR Collection (EXHAUSTIVE Pagination)
-
-### 1.1 Use Bundled Script (MANDATORY)
-
-```bash
-./scripts/gh_fetch.py prs --output json
-```
-
-### 1.2 Fallback: Manual Pagination
-
-```bash
-REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner)
-gh pr list --repo $REPO --state open --limit 500 --json number,title,state,createdAt,updatedAt,labels,author,headRefName,baseRefName,isDraft,mergeable,body
-# Continue pagination if 500 returned...
-```
-
-**AFTER Phase 1:** Update todo status to completed, mark Phase 2 as in_progress.
-
----
-
-# PHASE 2: LAUNCH 1 BACKGROUND TASK PER PR
-
-## THE 1-PR-1-TASK PATTERN (MANDATORY)
-
-**CRITICAL: DO NOT BATCH MULTIPLE PRs INTO ONE TASK**
-
-```typescript
-// Collection for tracking
-const taskMap = new Map() // prNumber -> taskId
-
-// Category ratio: unspecified-low : writing : quick = 1:2:1
-// Every 4 PRs: 1 unspecified-low, 2 writing, 1 quick
-function getCategory(index) {
- const position = index % 4
- if (position === 0) return "unspecified-low" // 25%
- if (position === 1 || position === 2) return "writing" // 50%
- return "quick" // 25%
-}
-
-// Launch 1 background task per PR
-for (let i = 0; i < allPRs.length; i++) {
- const pr = allPRs[i]
- const category = getCategory(i)
-
- console.log(`🚀 Launching background task for PR #${pr.number} (${category})...`)
-
- const taskId = await task(
- category=category,
- load_skills=[],
- run_in_background=true, // ← BACKGROUND TASK: Each PR runs independently
- prompt=`
-## TASK
-Analyze GitHub PR #${pr.number} for ${REPO}.
-
-## PR DATA
-- Number: #${pr.number}
-- Title: ${pr.title}
-- State: ${pr.state}
-- Author: ${pr.author.login}
-- Created: ${pr.createdAt}
-- Updated: ${pr.updatedAt}
-- Labels: ${pr.labels.map(l => l.name).join(', ')}
-- Head Branch: ${pr.headRefName}
-- Base Branch: ${pr.baseRefName}
-- Is Draft: ${pr.isDraft}
-- Mergeable: ${pr.mergeable}
-
-## PR BODY
-${pr.body}
-
-## FETCH ADDITIONAL CONTEXT
-1. Fetch PR comments: gh pr view ${pr.number} --repo ${REPO} --json comments
-2. Fetch PR reviews: gh pr view ${pr.number} --repo ${REPO} --json reviews
-3. Fetch PR files changed: gh pr view ${pr.number} --repo ${REPO} --json files
-4. Check if branch exists: git ls-remote --heads origin ${pr.headRefName}
-5. Check the base branch for similar changes: verify whether the changes were already implemented
-
-## ANALYSIS CHECKLIST
-1. **MERGE_READY**: Can this PR be merged? (approvals, CI passed, no conflicts, not draft)
-2. **PROJECT_ALIGNED**: Does this PR align with current project direction?
-3. **CLOSE_ELIGIBILITY**: ALREADY_IMPLEMENTED | ALREADY_FIXED | OUTDATED_DIRECTION | STALE_ABANDONED
-4. **STALENESS**: ACTIVE (<30d) | STALE (30-180d) | ABANDONED (180d+)
-
-## CONSERVATIVE CLOSE CRITERIA
-MAY CLOSE ONLY IF:
-- Exact same change already exists in main
-- A merged PR already solved this differently
-- Project explicitly deprecated the feature
-- Author unresponsive for 6+ months despite requests
-
-## RETURN FORMAT (STRICT)
-\`\`\`
-PR: #${pr.number}
-TITLE: ${pr.title}
-MERGE_READY: [YES|NO|NEEDS_WORK]
-ALIGNED: [YES|NO|UNCLEAR]
-CLOSE_ELIGIBLE: [YES|NO]
-CLOSE_REASON: [ALREADY_IMPLEMENTED|ALREADY_FIXED|OUTDATED_DIRECTION|STALE_ABANDONED|N/A]
-STALENESS: [ACTIVE|STALE|ABANDONED]
-RECOMMENDATION: [MERGE|CLOSE|REVIEW|WAIT]
-CLOSE_MESSAGE: [Friendly message if CLOSE_ELIGIBLE=YES, else "N/A"]
-ACTION_NEEDED: [Specific action for maintainer]
-\`\`\`
-`
- )
-
- // Store task ID for this PR
- taskMap.set(pr.number, taskId)
-}
-
-console.log(`\n✅ Launched ${taskMap.size} background tasks (1 per PR)`)
-```
-
-**AFTER Phase 2:** Update todo, mark Phase 3 as in_progress.
-
----
-
-# PHASE 3: STREAM RESULTS AS EACH TASK COMPLETES
-
-## REAL-TIME STREAMING COLLECTION
-
-```typescript
-const results = []
-const autoCloseable = []
-const readyToMerge = []
-const needsReview = []
-const needsWork = []
-const stale = []
-const drafts = []
-
-const completedPRs = new Set()
-const totalPRs = taskMap.size
-
-console.log(`\n📊 Streaming results for ${totalPRs} PRs...`)
-
-// Stream results as each background task completes
-while (completedPRs.size < totalPRs) {
- let newCompletions = 0
-
- for (const [prNumber, taskId] of taskMap) {
- if (completedPRs.has(prNumber)) continue
-
- // Non-blocking check for this specific task
- const output = await background_output(task_id=taskId, block=false)
-
- if (output && output.length > 0) {
- // Parse the completed analysis
- const analysis = parseAnalysis(output)
- results.push(analysis)
- completedPRs.add(prNumber)
- newCompletions++
-
- // REAL-TIME STREAMING REPORT
- console.log(`\n🔄 PR #${prNumber}: ${analysis.TITLE.substring(0, 60)}...`)
-
- // Immediate categorization & reporting
- if (analysis.CLOSE_ELIGIBLE === 'YES') {
- autoCloseable.push(analysis)
- console.log(` ⚠️ AUTO-CLOSE CANDIDATE: ${analysis.CLOSE_REASON}`)
- } else if (analysis.MERGE_READY === 'YES') {
- readyToMerge.push(analysis)
- console.log(` ✅ READY TO MERGE`)
- } else if (analysis.RECOMMENDATION === 'REVIEW') {
- needsReview.push(analysis)
- console.log(` 👀 NEEDS REVIEW`)
- } else if (analysis.RECOMMENDATION === 'WAIT') {
- needsWork.push(analysis)
- console.log(` ⏳ WAITING FOR AUTHOR`)
- } else if (analysis.STALENESS === 'STALE' || analysis.STALENESS === 'ABANDONED') {
- stale.push(analysis)
- console.log(` 💤 ${analysis.STALENESS}`)
- } else {
- drafts.push(analysis)
- console.log(` 📝 DRAFT`)
- }
-
- console.log(` 📊 Action: ${analysis.ACTION_NEEDED}`)
-
- // Progress update every 5 completions
- if (completedPRs.size % 5 === 0) {
- console.log(`\n📈 PROGRESS: ${completedPRs.size}/${totalPRs} PRs analyzed`)
- console.log(` Ready: ${readyToMerge.length} | Review: ${needsReview.length} | Wait: ${needsWork.length} | Stale: ${stale.length} | Draft: ${drafts.length} | Close-Candidate: ${autoCloseable.length}`)
- }
- }
- }
-
- // If no new completions, wait briefly before checking again
- if (newCompletions === 0 && completedPRs.size < totalPRs) {
- await new Promise(r => setTimeout(r, 2000))
- }
-}
-
-console.log(`\n✅ All ${totalPRs} PRs analyzed`)
-```
-
----
-
-# PHASE 4: Auto-Close Execution (CONSERVATIVE)
-
-### 4.1 Confirm and Close
-
-**Ask for confirmation before closing (unless user explicitly said auto-close is OK)**
-
-```typescript
-if (autoCloseable.length > 0) {
- console.log(`\n🚨 FOUND ${autoCloseable.length} PR(s) ELIGIBLE FOR AUTO-CLOSE:`)
-
- for (const pr of autoCloseable) {
- console.log(` #${pr.PR}: ${pr.TITLE} (${pr.CLOSE_REASON})`)
- }
-
-  // Close one by one with progress (only after confirmation, unless the user pre-approved auto-close)
- for (const pr of autoCloseable) {
- console.log(`\n Closing #${pr.PR}...`)
-
- await bash({
- command: `gh pr close ${pr.PR} --repo ${REPO} --comment "${pr.CLOSE_MESSAGE}"`,
- description: `Close PR #${pr.PR} with friendly message`
- })
-
- console.log(` ✅ Closed #${pr.PR}`)
- }
-}
-```
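The `gh pr close` command above interpolates `CLOSE_MESSAGE` into a double-quoted shell string, so a message containing quotes, backticks, or `$` can break the command or inject into it. A hedged sketch of POSIX single-quote escaping (`shellQuote` is a hypothetical helper, not part of the original skill):

```typescript
// Hypothetical helper: wrap a string in single quotes for POSIX shells.
// Embedded single quotes are escaped as '\'' (close quote, escaped quote,
// reopen quote), which neutralizes all other shell metacharacters.
function shellQuote(s: string): string {
  return "'" + s.replace(/'/g, "'\\''") + "'"
}

// Usage in the close loop (sketch):
//   command: `gh pr close ${pr.PR} --repo ${REPO} --comment ${shellQuote(pr.CLOSE_MESSAGE)}`
console.log(shellQuote("it's done")) // → 'it'\''s done'
```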
-
----
-
-# PHASE 5: FINAL COMPREHENSIVE REPORT
-
-**GENERATE THIS AT THE VERY END - AFTER ALL PROCESSING**
-
-```markdown
-# PR Triage Report - ${REPO}
-
-**Generated:** ${new Date().toISOString()}
-**Total PRs Analyzed:** ${results.length}
-**Processing Mode:** STREAMING (1 PR = 1 background task, real-time results)
-
----
-
-## 📊 Summary
-
-| Category | Count | Status |
-|----------|-------|--------|
-| ✅ Ready to Merge | ${readyToMerge.length} | Action: Merge immediately |
-| ⚠️ Auto-Closed | ${autoCloseable.length} | Already processed |
-| 👀 Needs Review | ${needsReview.length} | Action: Assign reviewers |
-| ⏳ Needs Work | ${needsWork.length} | Action: Comment guidance |
-| 💤 Stale | ${stale.length} | Action: Follow up |
-| 📝 Draft | ${drafts.length} | No action needed |
-
----
-
-## ✅ Ready to Merge
-
-${readyToMerge.map(pr => `| #${pr.PR} | ${pr.TITLE.substring(0, 50)}... |`).join('\n')}
-
-**Action:** These PRs can be merged immediately.
-
----
-
-## ⚠️ Auto-Closed (During This Triage)
-
-${autoCloseable.map(pr => `| #${pr.PR} | ${pr.TITLE.substring(0, 40)}... | ${pr.CLOSE_REASON} |`).join('\n')}
-
----
-
-## 👀 Needs Review
-
-${needsReview.map(pr => `| #${pr.PR} | ${pr.TITLE.substring(0, 50)}... |`).join('\n')}
-
-**Action:** Assign maintainers for review.
-
----
-
-## ⏳ Needs Work
-
-${needsWork.map(pr => `| #${pr.PR} | ${pr.TITLE.substring(0, 50)}... | ${pr.ACTION_NEEDED} |`).join('\n')}
-
----
-
-## 💤 Stale PRs
-
-${stale.map(pr => `| #${pr.PR} | ${pr.TITLE.substring(0, 40)}... | ${pr.STALENESS} |`).join('\n')}
-
----
-
-## 📝 Draft PRs
-
-${drafts.map(pr => `| #${pr.PR} | ${pr.TITLE.substring(0, 50)}... |`).join('\n')}
-
----
-
-## 🎯 Immediate Actions
-
-1. **Merge:** ${readyToMerge.length} PRs ready for immediate merge
-2. **Review:** ${needsReview.length} PRs awaiting maintainer attention
-3. **Follow Up:** ${stale.length} stale PRs need author ping
-
----
-
-## Processing Log
-
-${results.map((r, i) => `${i+1}. #${r.PR}: ${r.RECOMMENDATION} (${r.MERGE_READY === 'YES' ? 'ready' : r.CLOSE_ELIGIBLE === 'YES' ? 'close' : 'needs attention'})`).join('\n')}
-```
-
----
-
-## CRITICAL ANTI-PATTERNS (BLOCKING VIOLATIONS)
-
-| Violation | Why It's Wrong | Severity |
-|-----------|----------------|----------|
-| **Batch multiple PRs in one task** | Violates 1 PR = 1 task rule | CRITICAL |
-| **Use `run_in_background=false`** | No parallelism, slower execution | CRITICAL |
-| **Collect all tasks, report at end** | Loses streaming benefit | CRITICAL |
-| **No `background_output()` polling** | Can't stream results | CRITICAL |
-| No progress updates | User doesn't know if stuck or working | HIGH |
-
----
-
-## EXECUTION CHECKLIST
-
-- [ ] Created todos before starting
-- [ ] Fetched ALL PRs with exhaustive pagination
-- [ ] **LAUNCHED**: 1 background task per PR (`run_in_background=true`)
-- [ ] **STREAMED**: Results via `background_output()` as each task completes
-- [ ] Showed live progress every 5 PRs
-- [ ] Real-time categorization visible to user
-- [ ] Conservative auto-close with confirmation
-- [ ] **FINAL**: Comprehensive summary report at end
-- [ ] All todos marked complete
-
----
-
-## Quick Start
-
-When invoked, immediately:
-
-1. **CREATE TODOS**
-2. `gh repo view --json nameWithOwner -q .nameWithOwner`
-3. Exhaustive pagination for ALL open PRs
-4. **LAUNCH**: For each PR:
- - `task(run_in_background=true)` - 1 task per PR
- - Store taskId mapped to PR number
-5. **STREAM**: Poll `background_output()` for each task:
- - As each completes, immediately report result
- - Categorize in real-time
- - Show progress every 5 completions
-6. Auto-close eligible PRs
-7. **GENERATE FINAL COMPREHENSIVE REPORT**
diff --git a/.opencode/skills/github-pr-triage/scripts/gh_fetch.py b/.opencode/skills/github-pr-triage/scripts/gh_fetch.py
deleted file mode 100755
index 0b06bd500..000000000
--- a/.opencode/skills/github-pr-triage/scripts/gh_fetch.py
+++ /dev/null
@@ -1,373 +0,0 @@
-#!/usr/bin/env -S uv run --script
-# /// script
-# requires-python = ">=3.11"
-# dependencies = [
-# "typer>=0.12.0",
-# "rich>=13.0.0",
-# ]
-# ///
-"""
-GitHub Issues/PRs Fetcher with Exhaustive Pagination.
-
-Fetches ALL issues and/or PRs from a GitHub repository using gh CLI.
-Implements proper pagination to ensure no items are missed.
-
-Usage:
- ./gh_fetch.py issues # Fetch all issues
- ./gh_fetch.py prs # Fetch all PRs
- ./gh_fetch.py all # Fetch both issues and PRs
- ./gh_fetch.py issues --hours 48 # Issues from last 48 hours
- ./gh_fetch.py prs --state open # Only open PRs
- ./gh_fetch.py all --repo owner/repo # Specify repository
-"""
-
-import asyncio
-import json
-from datetime import UTC, datetime, timedelta
-from enum import Enum
-from typing import Annotated
-
-import typer
-from rich.console import Console
-from rich.panel import Panel
-from rich.progress import Progress, TaskID
-from rich.table import Table
-
-app = typer.Typer(
- name="gh_fetch",
- help="Fetch GitHub issues/PRs with exhaustive pagination.",
- no_args_is_help=True,
-)
-console = Console()
-
-BATCH_SIZE = 500 # Maximum allowed by GitHub API
-
-
-class ItemState(str, Enum):
- ALL = "all"
- OPEN = "open"
- CLOSED = "closed"
-
-
-class OutputFormat(str, Enum):
- JSON = "json"
- TABLE = "table"
- COUNT = "count"
-
-
-async def run_gh_command(args: list[str]) -> tuple[str, str, int]:
- """Run gh CLI command asynchronously."""
- proc = await asyncio.create_subprocess_exec(
- "gh",
- *args,
- stdout=asyncio.subprocess.PIPE,
- stderr=asyncio.subprocess.PIPE,
- )
- stdout, stderr = await proc.communicate()
- return stdout.decode(), stderr.decode(), proc.returncode or 0
-
-
-async def get_current_repo() -> str:
- """Get the current repository from gh CLI."""
- stdout, stderr, code = await run_gh_command(["repo", "view", "--json", "nameWithOwner", "-q", ".nameWithOwner"])
- if code != 0:
- console.print(f"[red]Error getting current repo: {stderr}[/red]")
- raise typer.Exit(1)
- return stdout.strip()
-
-
-async def fetch_items_page(
- repo: str,
- item_type: str, # "issue" or "pr"
- state: str,
- limit: int,
- search_filter: str = "",
-) -> list[dict]:
- """Fetch a single page of issues or PRs."""
- cmd = [
- item_type,
- "list",
- "--repo",
- repo,
- "--state",
- state,
- "--limit",
- str(limit),
- "--json",
- "number,title,state,createdAt,updatedAt,labels,author,body",
- ]
- if search_filter:
- cmd.extend(["--search", search_filter])
-
- stdout, stderr, code = await run_gh_command(cmd)
- if code != 0:
- console.print(f"[red]Error fetching {item_type}s: {stderr}[/red]")
- return []
-
- try:
- return json.loads(stdout) if stdout.strip() else []
- except json.JSONDecodeError:
- console.print(f"[red]Error parsing {item_type} response[/red]")
- return []
-
-
-async def fetch_all_items(
- repo: str,
- item_type: str,
- state: str,
- hours: int | None,
- progress: Progress,
- task_id: TaskID,
-) -> list[dict]:
- """Fetch ALL items with exhaustive pagination."""
- all_items: list[dict] = []
- page = 1
-
- # First fetch
- progress.update(task_id, description=f"[cyan]Fetching {item_type}s page {page}...")
- items = await fetch_items_page(repo, item_type, state, BATCH_SIZE)
- fetched_count = len(items)
- all_items.extend(items)
-
- console.print(f"[dim]Page {page}: fetched {fetched_count} {item_type}s[/dim]")
-
- # Continue pagination if we got exactly BATCH_SIZE (more pages exist)
- while fetched_count == BATCH_SIZE:
- page += 1
- progress.update(task_id, description=f"[cyan]Fetching {item_type}s page {page}...")
-
- # Use created date of last item to paginate
- last_created = all_items[-1].get("createdAt", "")
- if not last_created:
- break
-
- search_filter = f"created:<{last_created}"
- items = await fetch_items_page(repo, item_type, state, BATCH_SIZE, search_filter)
- fetched_count = len(items)
-
- if fetched_count == 0:
- break
-
- # Deduplicate by number
- existing_numbers = {item["number"] for item in all_items}
- new_items = [item for item in items if item["number"] not in existing_numbers]
- all_items.extend(new_items)
-
- console.print(
- f"[dim]Page {page}: fetched {fetched_count}, added {len(new_items)} new (total: {len(all_items)})[/dim]"
- )
-
- # Safety limit
- if page > 20:
- console.print("[yellow]Safety limit reached (20 pages)[/yellow]")
- break
-
- # Filter by time if specified
- if hours is not None:
- cutoff = datetime.now(UTC) - timedelta(hours=hours)
- cutoff_str = cutoff.isoformat()
-
- original_count = len(all_items)
- all_items = [
- item
- for item in all_items
- if item.get("createdAt", "") >= cutoff_str or item.get("updatedAt", "") >= cutoff_str
- ]
- filtered_count = original_count - len(all_items)
- if filtered_count > 0:
- console.print(f"[dim]Filtered out {filtered_count} items older than {hours} hours[/dim]")
-
- return all_items
-
-
-def display_table(items: list[dict], item_type: str) -> None:
- """Display items in a Rich table."""
- table = Table(title=f"{item_type.upper()}s ({len(items)} total)")
- table.add_column("#", style="cyan", width=6)
- table.add_column("Title", style="white", max_width=50)
- table.add_column("State", style="green", width=8)
- table.add_column("Author", style="yellow", width=15)
- table.add_column("Labels", style="magenta", max_width=30)
- table.add_column("Updated", style="dim", width=12)
-
- for item in items[:50]: # Show first 50
- labels = ", ".join(label.get("name", "") for label in item.get("labels", []))
- updated = item.get("updatedAt", "")[:10]
- author = item.get("author", {}).get("login", "unknown")
-
- table.add_row(
- str(item.get("number", "")),
- (item.get("title", "")[:47] + "...") if len(item.get("title", "")) > 50 else item.get("title", ""),
- item.get("state", ""),
- author,
- (labels[:27] + "...") if len(labels) > 30 else labels,
- updated,
- )
-
- console.print(table)
- if len(items) > 50:
- console.print(f"[dim]... and {len(items) - 50} more items[/dim]")
-
-
-@app.command()
-def issues(
- repo: Annotated[str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")] = None,
- state: Annotated[ItemState, typer.Option("--state", "-s", help="Issue state filter")] = ItemState.ALL,
- hours: Annotated[
- int | None,
- typer.Option("--hours", "-h", help="Only issues from last N hours (created or updated)"),
- ] = None,
- output: Annotated[OutputFormat, typer.Option("--output", "-o", help="Output format")] = OutputFormat.TABLE,
-) -> None:
- """Fetch all issues with exhaustive pagination."""
-
- async def async_main() -> None:
- target_repo = repo or await get_current_repo()
-
- console.print(f"""
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
-[cyan]Repository:[/cyan] {target_repo}
-[cyan]State:[/cyan] {state.value}
-[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
-""")
-
- with Progress(console=console) as progress:
- task: TaskID = progress.add_task("[cyan]Fetching issues...", total=None)
-
- items = await fetch_all_items(target_repo, "issue", state.value, hours, progress, task)
-
- progress.update(task, description="[green]Complete!", completed=100, total=100)
-
- console.print(
- Panel(
- f"[green]✓ Found {len(items)} issues[/green]",
- title="[green]Pagination Complete[/green]",
- border_style="green",
- )
- )
-
- if output == OutputFormat.JSON:
- console.print(json.dumps(items, indent=2, ensure_ascii=False))
- elif output == OutputFormat.TABLE:
- display_table(items, "issue")
- else: # COUNT
- console.print(f"Total issues: {len(items)}")
-
- asyncio.run(async_main())
-
-
-@app.command()
-def prs(
- repo: Annotated[str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")] = None,
- state: Annotated[ItemState, typer.Option("--state", "-s", help="PR state filter")] = ItemState.OPEN,
- hours: Annotated[
- int | None,
- typer.Option("--hours", "-h", help="Only PRs from last N hours (created or updated)"),
- ] = None,
- output: Annotated[OutputFormat, typer.Option("--output", "-o", help="Output format")] = OutputFormat.TABLE,
-) -> None:
- """Fetch all PRs with exhaustive pagination."""
-
- async def async_main() -> None:
- target_repo = repo or await get_current_repo()
-
- console.print(f"""
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
-[cyan]Repository:[/cyan] {target_repo}
-[cyan]State:[/cyan] {state.value}
-[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
-""")
-
- with Progress(console=console) as progress:
- task: TaskID = progress.add_task("[cyan]Fetching PRs...", total=None)
-
- items = await fetch_all_items(target_repo, "pr", state.value, hours, progress, task)
-
- progress.update(task, description="[green]Complete!", completed=100, total=100)
-
- console.print(
- Panel(
- f"[green]✓ Found {len(items)} PRs[/green]",
- title="[green]Pagination Complete[/green]",
- border_style="green",
- )
- )
-
- if output == OutputFormat.JSON:
- console.print(json.dumps(items, indent=2, ensure_ascii=False))
- elif output == OutputFormat.TABLE:
- display_table(items, "pr")
- else: # COUNT
- console.print(f"Total PRs: {len(items)}")
-
- asyncio.run(async_main())
-
-
-@app.command(name="all")
-def fetch_all(
- repo: Annotated[str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")] = None,
- state: Annotated[ItemState, typer.Option("--state", "-s", help="State filter")] = ItemState.ALL,
- hours: Annotated[
- int | None,
- typer.Option("--hours", "-h", help="Only items from last N hours (created or updated)"),
- ] = None,
- output: Annotated[OutputFormat, typer.Option("--output", "-o", help="Output format")] = OutputFormat.TABLE,
-) -> None:
- """Fetch all issues AND PRs with exhaustive pagination."""
-
- async def async_main() -> None:
- target_repo = repo or await get_current_repo()
-
- console.print(f"""
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
-[cyan]Repository:[/cyan] {target_repo}
-[cyan]State:[/cyan] {state.value}
-[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
-[cyan]Fetching:[/cyan] Issues AND PRs
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
-""")
-
- with Progress(console=console) as progress:
- issues_task: TaskID = progress.add_task("[cyan]Fetching issues...", total=None)
- prs_task: TaskID = progress.add_task("[cyan]Fetching PRs...", total=None)
-
- # Fetch in parallel
- issues_items, prs_items = await asyncio.gather(
- fetch_all_items(target_repo, "issue", state.value, hours, progress, issues_task),
- fetch_all_items(target_repo, "pr", state.value, hours, progress, prs_task),
- )
-
- progress.update(
- issues_task,
- description="[green]Issues complete!",
- completed=100,
- total=100,
- )
- progress.update(prs_task, description="[green]PRs complete!", completed=100, total=100)
-
- console.print(
- Panel(
- f"[green]✓ Found {len(issues_items)} issues and {len(prs_items)} PRs[/green]",
- title="[green]Pagination Complete[/green]",
- border_style="green",
- )
- )
-
- if output == OutputFormat.JSON:
- result = {"issues": issues_items, "prs": prs_items}
- console.print(json.dumps(result, indent=2, ensure_ascii=False))
- elif output == OutputFormat.TABLE:
- display_table(issues_items, "issue")
- console.print("")
- display_table(prs_items, "pr")
- else: # COUNT
- console.print(f"Total issues: {len(issues_items)}")
- console.print(f"Total PRs: {len(prs_items)}")
-
- asyncio.run(async_main())
-
-
-if __name__ == "__main__":
- app()
diff --git a/.opencode/skills/github-triage/SKILL.md b/.opencode/skills/github-triage/SKILL.md
new file mode 100644
index 000000000..cb2910fad
--- /dev/null
+++ b/.opencode/skills/github-triage/SKILL.md
@@ -0,0 +1,482 @@
+---
+name: github-triage
+description: "Unified GitHub triage for issues AND PRs. 1 item = 1 background task (category: free). Issues: answer questions from codebase, analyze bugs. PRs: review bugfixes, merge safe ones. All parallel, all background. Triggers: 'triage', 'triage issues', 'triage PRs', 'github triage'."
+---
+
+# GitHub Triage — Unified Issue & PR Processor
+
+
+You are a GitHub triage orchestrator. You fetch all open issues and PRs, classify each one, then spawn exactly 1 background subagent per item using `category="free"`. Each subagent analyzes its item, takes action (comment/close/merge/report), and records results via TaskCreate.
+
+
+---
+
+## ARCHITECTURE
+
+```
+1 issue or PR = 1 TaskCreate = 1 task(category="free", run_in_background=true)
+```
+
+| Rule | Value |
+|------|-------|
+| Category for ALL subagents | `free` |
+| Execution mode | `run_in_background=true` |
+| Parallelism | ALL items launched simultaneously |
+| Result tracking | Each subagent calls `TaskCreate` with its findings |
+| Result collection | `background_output()` polling loop |
+
+---
+
+## PHASE 1: FETCH ALL OPEN ITEMS
+
+
+Run these commands to collect data. Use the bundled script if available, otherwise fall back to gh CLI.
+
+```bash
+REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner)
+
+# Issues: all open
+gh issue list --repo $REPO --state open --limit 500 \
+ --json number,title,state,createdAt,updatedAt,labels,author,body,comments
+
+# PRs: all open
+gh pr list --repo $REPO --state open --limit 500 \
+ --json number,title,state,createdAt,updatedAt,labels,author,body,headRefName,baseRefName,isDraft,mergeable,reviewDecision,statusCheckRollup
+```
+
+If either command returns exactly 500 results, more items likely remain: re-run the fetch with `--search "created:<OLDEST_CREATED_AT"` (the `createdAt` of the oldest item fetched so far) and repeat until a page returns fewer than 500 items, deduplicating by item number.
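
The pagination fallback can be sketched as a small Python loop, mirroring what the bundled `gh_fetch.py` script does. Here `fetch_page` is a stand-in for the `gh ... list --search` call (an assumption for illustration only); deduplication by `number` guards against items shifting between pages.

```python
def fetch_all(fetch_page, batch_size=500, max_pages=20):
    """Exhaustive-pagination sketch.

    fetch_page(search_filter) -> list of dicts with "number" and
    "createdAt" keys (stand-in for a gh CLI call). Keeps fetching with
    created:<OLDEST until a page comes back short of batch_size.
    """
    items = fetch_page("")  # first page, no search filter
    last_len, page = len(items), 1
    while last_len == batch_size and page < max_pages:
        page += 1
        oldest = min(i["createdAt"] for i in items)
        batch = fetch_page(f"created:<{oldest}")
        last_len = len(batch)
        seen = {i["number"] for i in items}  # dedupe across pages
        items.extend(i for i in batch if i["number"] not in seen)
    return items
```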
+
+---
+
+## PHASE 2: CLASSIFY EACH ITEM
+
+For each item, determine its type based on title, labels, and body content:
+
+
+
+### Issues
+
+| Type | Detection | Action Path |
+|------|-----------|-------------|
+| `ISSUE_QUESTION` | Title contains `[Question]`, `[Discussion]`, `?`, or body is asking "how to" / "why does" / "is it possible" | SUBAGENT_ISSUE_QUESTION |
+| `ISSUE_BUG` | Title contains `[Bug]`, `Bug:`, body describes unexpected behavior, error messages, stack traces | SUBAGENT_ISSUE_BUG |
+| `ISSUE_FEATURE` | Title contains `[Feature]`, `[RFE]`, `[Enhancement]`, `Feature Request`, `Proposal` | SUBAGENT_ISSUE_FEATURE |
+| `ISSUE_OTHER` | Anything else | SUBAGENT_ISSUE_OTHER |
+
+### PRs
+
+| Type | Detection | Action Path |
+|------|-----------|-------------|
+| `PR_BUGFIX` | Title starts with `fix`, `fix:`, `fix(`, branch contains `fix/`, `bugfix/`, or labels include `bug` | SUBAGENT_PR_BUGFIX |
+| `PR_OTHER` | Everything else (feat, refactor, docs, chore, etc.) | SUBAGENT_PR_OTHER |
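
The detection tables above can be sketched as a classifier. The function name is illustrative, and "error messages, stack traces" is approximated here by a `traceback` substring check; rules are applied in table order, so question markers win over bug markers.

```python
def classify(item, is_pr):
    """Map an issue/PR dict to a triage type per the detection tables."""
    title = item.get("title", "").lower()
    body = item.get("body", "").lower()
    labels = {lb.get("name", "").lower() for lb in item.get("labels", [])}
    if is_pr:
        branch = item.get("headRefName", "").lower()
        if (title.startswith("fix") or "fix/" in branch
                or "bugfix/" in branch or "bug" in labels):
            return "PR_BUGFIX"
        return "PR_OTHER"
    if ("[question]" in title or "[discussion]" in title or "?" in title
            or any(p in body for p in ("how to", "why does", "is it possible"))):
        return "ISSUE_QUESTION"
    if "[bug]" in title or title.startswith("bug:") or "traceback" in body:
        return "ISSUE_BUG"
    if any(t in title for t in ("[feature]", "[rfe]", "[enhancement]",
                                "feature request", "proposal")):
        return "ISSUE_FEATURE"
    return "ISSUE_OTHER"
```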
+
+
+
+---
+
+## PHASE 3: SPAWN 1 BACKGROUND TASK PER ITEM
+
+For EVERY item, create a TaskCreate entry first, then spawn a background task.
+
+```
+For each item:
+ 1. TaskCreate(subject="Triage: #{number} {title}")
+ 2. task(category="free", run_in_background=true, load_skills=[], prompt=SUBAGENT_PROMPT)
+ 3. Store mapping: item_number -> { task_id, background_task_id }
+```
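
The loop above amounts to simple bookkeeping. A sketch with the platform's `TaskCreate`/`task` tool calls passed in as callables (the stub parameters are assumptions — in practice the orchestrator invokes the tools directly):

```python
def spawn_all(items, task_create, spawn_task, build_prompt):
    """Spawn one background task per item; return the tracking map
    {item_number: {"task_id": ..., "background_task_id": ...}}."""
    mapping = {}
    for item in items:
        tid = task_create(subject=f"Triage: #{item['number']} {item['title']}")
        bid = spawn_task(category="free", run_in_background=True,
                         load_skills=[], prompt=build_prompt(item))
        mapping[item["number"]] = {"task_id": tid, "background_task_id": bid}
    return mapping
```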
+
+---
+
+## SUBAGENT PROMPT TEMPLATES
+
+Each subagent gets an explicit, step-by-step prompt. Free models are limited — leave NOTHING implicit.
+
+---
+
+### SUBAGENT_ISSUE_QUESTION
+
+
+
+```
+You are a GitHub issue responder for the repository {REPO}.
+
+ITEM:
+- Issue #{number}: {title}
+- Author: {author}
+- Body: {body}
+- Comments: {comments_summary}
+
+YOUR JOB:
+1. Read the issue carefully. Understand what the user is asking.
+2. Search the codebase to find the answer. Use Grep and Read tools.
+ - Search for relevant file names, function names, config keys mentioned in the issue.
+ - Read the files you find to understand how the feature works.
+3. Decide: Can you answer this clearly and accurately from the codebase?
+
+IF YES (you found a clear, accurate answer):
+ Step A: Write a helpful comment. The comment MUST:
+ - Start with exactly: [sisyphus-bot]
+ - Be warm, friendly, and thorough
+ - Include specific file paths and code references
+ - Include code snippets or config examples if helpful
+ - End with "Feel free to reopen if this doesn't resolve your question!"
+ Step B: Post the comment:
+ gh issue comment {number} --repo {REPO} --body "YOUR_COMMENT"
+ Step C: Close the issue:
+ gh issue close {number} --repo {REPO}
+ Step D: Report back with this EXACT format:
+ ACTION: ANSWERED_AND_CLOSED
+ COMMENT_POSTED: yes
+ SUMMARY: [1-2 sentence summary of your answer]
+
+IF NO (not enough info in codebase, or answer is uncertain):
+ Report back with:
+ ACTION: NEEDS_MANUAL_ATTENTION
+ REASON: [why you couldn't answer — be specific]
+ PARTIAL_FINDINGS: [what you DID find, if anything]
+
+RULES:
+- NEVER guess. Only answer if the codebase clearly supports your answer.
+- NEVER make up file paths or function names.
+- The [sisyphus-bot] prefix is MANDATORY on every comment you post.
+- Be genuinely helpful — imagine you're a senior maintainer who cares about the community.
+```
+
+
+
+---
+
+### SUBAGENT_ISSUE_BUG
+
+
+
+```
+You are a GitHub bug analyzer for the repository {REPO}.
+
+ITEM:
+- Issue #{number}: {title}
+- Author: {author}
+- Body: {body}
+- Comments: {comments_summary}
+
+YOUR JOB:
+1. Read the issue carefully. Understand the reported bug:
+ - What behavior does the user expect?
+ - What behavior do they actually see?
+ - What steps reproduce it?
+2. Search the codebase for the relevant code. Use Grep and Read tools.
+ - Find the files/functions mentioned or related to the bug.
+ - Read them carefully and trace the logic.
+3. Determine one of three outcomes:
+
+OUTCOME A — CONFIRMED BUG (you found the problematic code):
+ Step 1: Post a comment on the issue. The comment MUST:
+ - Start with exactly: [sisyphus-bot]
+ - Apologize sincerely for the inconvenience ("We're sorry you ran into this issue.")
+ - Briefly acknowledge what the bug is
+ - Say "We've identified the root cause and will work on a fix."
+ - Do NOT reveal internal implementation details unnecessarily
+ Step 2: Post the comment:
+ gh issue comment {number} --repo {REPO} --body "YOUR_COMMENT"
+ Step 3: Report back with:
+ ACTION: CONFIRMED_BUG
+ ROOT_CAUSE: [which file, which function, what goes wrong]
+ FIX_APPROACH: [how to fix it — be specific: "In {file}, line ~{N}, change X to Y because Z"]
+ SEVERITY: [LOW|MEDIUM|HIGH|CRITICAL]
+ AFFECTED_FILES: [list of files that need changes]
+
+OUTCOME B — NOT A BUG (user misunderstanding, provably correct behavior):
+ ONLY choose this if you can RIGOROUSLY PROVE the behavior is correct.
+ Step 1: Post a comment. The comment MUST:
+ - Start with exactly: [sisyphus-bot]
+ - Be kind and empathetic — never condescending
+ - Explain clearly WHY the current behavior is correct
+ - Include specific code references or documentation links
+ - Offer a workaround or alternative if possible
+ - End with "Please let us know if you have further questions!"
+ Step 2: Post the comment:
+ gh issue comment {number} --repo {REPO} --body "YOUR_COMMENT"
+ Step 3: DO NOT close the issue. Let the user or maintainer decide.
+ Step 4: Report back with:
+ ACTION: NOT_A_BUG
+ EXPLANATION: [why this is correct behavior]
+ PROOF: [specific code reference proving it]
+
+OUTCOME C — UNCLEAR (can't determine from codebase alone):
+ Report back with:
+ ACTION: NEEDS_INVESTIGATION
+ FINDINGS: [what you found so far]
+ BLOCKERS: [what's preventing you from determining the cause]
+ SUGGESTED_NEXT_STEPS: [what a human should look at]
+
+RULES:
+- NEVER guess at root causes. Only report CONFIRMED_BUG if you found the exact problematic code.
+- NEVER close bug issues yourself. Only comment.
+- For OUTCOME B (not a bug): you MUST have rigorous proof. If there's ANY doubt, choose OUTCOME C instead.
+- The [sisyphus-bot] prefix is MANDATORY on every comment.
+- When apologizing, be genuine. The user took time to report this.
+```
+
+
+
+---
+
+### SUBAGENT_ISSUE_FEATURE
+
+
+
+```
+You are a GitHub feature request analyzer for the repository {REPO}.
+
+ITEM:
+- Issue #{number}: {title}
+- Author: {author}
+- Body: {body}
+- Comments: {comments_summary}
+
+YOUR JOB:
+1. Read the feature request.
+2. Search the codebase to check if this feature already exists (partially or fully).
+3. Assess feasibility and alignment with the project.
+
+Report back with:
+ ACTION: FEATURE_ASSESSED
+ ALREADY_EXISTS: [YES_FULLY | YES_PARTIALLY | NO]
+ IF_EXISTS: [where in the codebase, how to use it]
+ FEASIBILITY: [EASY | MODERATE | HARD | ARCHITECTURAL_CHANGE]
+ RELEVANT_FILES: [files that would need changes]
+ NOTES: [any observations about implementation approach]
+
+If the feature already fully exists:
+ Post a comment (prefix: [sisyphus-bot]) explaining how to use the existing feature with examples.
+ gh issue comment {number} --repo {REPO} --body "YOUR_COMMENT"
+
+RULES:
+- Do NOT close feature requests.
+- The [sisyphus-bot] prefix is MANDATORY on any comment.
+```
+
+
+
+---
+
+### SUBAGENT_ISSUE_OTHER
+
+
+
+```
+You are a GitHub issue analyzer for the repository {REPO}.
+
+ITEM:
+- Issue #{number}: {title}
+- Author: {author}
+- Body: {body}
+- Comments: {comments_summary}
+
+YOUR JOB:
+Quickly assess this issue and report:
+ ACTION: ASSESSED
+ TYPE_GUESS: [QUESTION | BUG | FEATURE | DISCUSSION | META | STALE]
+ SUMMARY: [1-2 sentence summary]
+ NEEDS_ATTENTION: [YES | NO]
+ SUGGESTED_LABEL: [if any]
+
+Do NOT post comments. Do NOT close. Just analyze and report.
+```
+
+
+
+---
+
+### SUBAGENT_PR_BUGFIX
+
+
+
+```
+You are a GitHub PR reviewer for the repository {REPO}.
+
+ITEM:
+- PR #{number}: {title}
+- Author: {author}
+- Base: {baseRefName}
+- Head: {headRefName}
+- Draft: {isDraft}
+- Mergeable: {mergeable}
+- Review Decision: {reviewDecision}
+- CI Status: {statusCheckRollup_summary}
+- Body: {body}
+
+YOUR JOB:
+1. Fetch PR details (DO NOT checkout the branch — read-only analysis):
+ gh pr view {number} --repo {REPO} --json files,reviews,comments,statusCheckRollup,reviewDecision
+2. Read the changed-files list via `gh api repos/{REPO}/pulls/{number}/files`, which returns each file's patch (diff).
+3. Search the codebase to understand what the PR is fixing and whether the fix is correct.
+4. Evaluate merge safety:
+
+MERGE CONDITIONS (ALL must be true for auto-merge):
+ a. CI status checks: ALL passing (no failures, no pending)
+ b. Review decision: APPROVED
+ c. The fix is clearly correct — addresses an obvious, unambiguous bug
+ d. No risky side effects (no architectural changes, no breaking changes)
+ e. Not a draft PR
+ f. Mergeable state is clean (no conflicts)
+
+IF ALL MERGE CONDITIONS MET:
+ Step 1: Merge the PR:
+ gh pr merge {number} --repo {REPO} --squash --auto
+ Step 2: Report back with:
+ ACTION: MERGED
+ FIX_SUMMARY: [what bug was fixed and how]
+ FILES_CHANGED: [list of files]
+ RISK: NONE
+
+IF ANY CONDITION NOT MET:
+ Report back with:
+ ACTION: NEEDS_HUMAN_DECISION
+ FIX_SUMMARY: [what the PR does]
+ WHAT_IT_FIXES: [the bug or issue it addresses]
+ CI_STATUS: [PASS | FAIL | PENDING — list any failures]
+ REVIEW_STATUS: [APPROVED | CHANGES_REQUESTED | PENDING | NONE]
+ MISSING: [what's preventing auto-merge — be specific]
+ RISK_ASSESSMENT: [what could go wrong]
+ AMBIGUOUS_PARTS: [anything that needs human judgment]
+ RECOMMENDED_ACTION: [what the maintainer should do]
+
+ABSOLUTE RULES:
+- NEVER run `git checkout`, `git fetch`, `git pull`, or `git switch`. READ-ONLY via gh CLI and API.
+- NEVER checkout the PR branch. NEVER. Use `gh api` and `gh pr view` only.
+- Only merge if you are 100% certain ALL conditions are met. When in doubt, report instead.
+- The [sisyphus-bot] prefix is MANDATORY on any comment you post.
+```
+
+
+
+---
+
+### SUBAGENT_PR_OTHER
+
+
+
+```
+You are a GitHub PR reviewer for the repository {REPO}.
+
+ITEM:
+- PR #{number}: {title}
+- Author: {author}
+- Base: {baseRefName}
+- Head: {headRefName}
+- Draft: {isDraft}
+- Mergeable: {mergeable}
+- Review Decision: {reviewDecision}
+- CI Status: {statusCheckRollup_summary}
+- Body: {body}
+
+YOUR JOB:
+1. Fetch PR details (READ-ONLY — no checkout):
+ gh pr view {number} --repo {REPO} --json files,reviews,comments,statusCheckRollup,reviewDecision
+2. Read the changed files via `gh api repos/{REPO}/pulls/{number}/files`.
+3. Assess the PR and report:
+
+ ACTION: PR_ASSESSED
+ TYPE: [FEATURE | REFACTOR | DOCS | CHORE | TEST | OTHER]
+ SUMMARY: [what this PR does in 2-3 sentences]
+ CI_STATUS: [PASS | FAIL | PENDING]
+ REVIEW_STATUS: [APPROVED | CHANGES_REQUESTED | PENDING | NONE]
+ FILES_CHANGED: [count and key files]
+ RISK_LEVEL: [LOW | MEDIUM | HIGH]
+ ALIGNMENT: [does this fit the project direction? YES | NO | UNCLEAR]
+ BLOCKERS: [anything preventing merge]
+ RECOMMENDED_ACTION: [MERGE | REQUEST_CHANGES | NEEDS_REVIEW | CLOSE | WAIT]
+ NOTES: [any observations for the maintainer]
+
+ABSOLUTE RULES:
+- NEVER run `git checkout`, `git fetch`, `git pull`, or `git switch`. READ-ONLY.
+- NEVER checkout the PR branch. Use `gh api` and `gh pr view` only.
+- Do NOT merge non-bugfix PRs automatically. Report only.
+```
+
+
+
+---
+
+## PHASE 4: COLLECT RESULTS & UPDATE TASKS
+
+
+Poll `background_output()` for each spawned task. As each completes:
+
+1. Parse the subagent's report.
+2. Update the corresponding TaskCreate entry:
+ - `TaskUpdate(id=task_id, status="completed", description=FULL_REPORT_TEXT)`
+3. Stream the result to the user immediately — do not wait for all to finish.
+
+Track counters:
+- issues_answered (commented + closed)
+- bugs_confirmed
+- bugs_not_a_bug
+- prs_merged
+- prs_needs_decision
+- features_assessed
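
Since every template reports in the same `KEY: value` shape, results can be parsed with a small helper before updating counters. This is a sketch; the counter mapping simply mirrors the list above.

```python
# ACTION values reported by the subagent templates -> counter names above.
COUNTER_BY_ACTION = {
    "ANSWERED_AND_CLOSED": "issues_answered",
    "CONFIRMED_BUG": "bugs_confirmed",
    "NOT_A_BUG": "bugs_not_a_bug",
    "MERGED": "prs_merged",
    "NEEDS_HUMAN_DECISION": "prs_needs_decision",
    "FEATURE_ASSESSED": "features_assessed",
}

def parse_report(text):
    """Parse 'KEY: value' lines into a dict; bare continuation lines
    are appended to the previous key's value."""
    report, last = {}, None
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        key = key.strip()
        if sep and key.isupper() and " " not in key:
            last = key
            report[last] = value.strip()
        elif last and line.strip():
            report[last] += " " + line.strip()
    return report
```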
+
+
+---
+
+## PHASE 5: FINAL SUMMARY
+
+After all background tasks complete, produce a summary:
+
+```markdown
+# GitHub Triage Report — {REPO}
+
+**Date:** {date}
+**Items Processed:** {total}
+
+## Issues ({issue_count})
+| Action | Count |
+|--------|-------|
+| Answered & Closed | {issues_answered} |
+| Bug Confirmed | {bugs_confirmed} |
+| Not A Bug (explained) | {bugs_not_a_bug} |
+| Feature Assessed | {features_assessed} |
+| Needs Manual Attention | {needs_manual} |
+
+## PRs ({pr_count})
+| Action | Count |
+|--------|-------|
+| Auto-Merged (safe bugfix) | {prs_merged} |
+| Needs Human Decision | {prs_needs_decision} |
+| Assessed (non-bugfix) | {prs_assessed} |
+
+## Items Requiring Your Attention
+[List each item that needs human decision with its report summary]
+```
+
+---
+
+## ANTI-PATTERNS
+
+| Violation | Severity |
+|-----------|----------|
+| Using any category other than `free` | CRITICAL |
+| Batching multiple items into one task | CRITICAL |
+| Using `run_in_background=false` | CRITICAL |
+| Subagent running `git checkout` on a PR branch | CRITICAL |
+| Posting comment without `[sisyphus-bot]` prefix | CRITICAL |
+| Merging a PR that doesn't meet ALL 6 conditions | CRITICAL |
+| Closing a bug issue (only comment, never close bugs) | HIGH |
+| Guessing at answers without codebase evidence | HIGH |
+| Not recording results via TaskCreate/TaskUpdate | HIGH |
+
+---
+
+## QUICK START
+
+When invoked:
+
+1. `TaskCreate` for the overall triage job
+2. Fetch all open issues + PRs via gh CLI (paginate if needed)
+3. Classify each item (ISSUE_QUESTION, ISSUE_BUG, ISSUE_FEATURE, PR_BUGFIX, etc.)
+4. For EACH item: `TaskCreate` + `task(category="free", run_in_background=true, load_skills=[], prompt=...)`
+5. Poll `background_output()` — stream results as they arrive
+6. `TaskUpdate` each task with the subagent's findings
+7. Produce final summary report
diff --git a/.opencode/skills/github-issue-triage/scripts/gh_fetch.py b/.opencode/skills/github-triage/scripts/gh_fetch.py
similarity index 66%
rename from .opencode/skills/github-issue-triage/scripts/gh_fetch.py
rename to .opencode/skills/github-triage/scripts/gh_fetch.py
index 0b06bd500..9953624bb 100755
--- a/.opencode/skills/github-issue-triage/scripts/gh_fetch.py
+++ b/.opencode/skills/github-triage/scripts/gh_fetch.py
@@ -69,7 +69,9 @@ async def run_gh_command(args: list[str]) -> tuple[str, str, int]:
async def get_current_repo() -> str:
"""Get the current repository from gh CLI."""
- stdout, stderr, code = await run_gh_command(["repo", "view", "--json", "nameWithOwner", "-q", ".nameWithOwner"])
+ stdout, stderr, code = await run_gh_command(
+ ["repo", "view", "--json", "nameWithOwner", "-q", ".nameWithOwner"]
+ )
if code != 0:
console.print(f"[red]Error getting current repo: {stderr}[/red]")
raise typer.Exit(1)
@@ -123,7 +125,6 @@ async def fetch_all_items(
all_items: list[dict] = []
page = 1
- # First fetch
progress.update(task_id, description=f"[cyan]Fetching {item_type}s page {page}...")
items = await fetch_items_page(repo, item_type, state, BATCH_SIZE)
fetched_count = len(items)
@@ -131,24 +132,25 @@ async def fetch_all_items(
console.print(f"[dim]Page {page}: fetched {fetched_count} {item_type}s[/dim]")
- # Continue pagination if we got exactly BATCH_SIZE (more pages exist)
while fetched_count == BATCH_SIZE:
page += 1
- progress.update(task_id, description=f"[cyan]Fetching {item_type}s page {page}...")
+ progress.update(
+ task_id, description=f"[cyan]Fetching {item_type}s page {page}..."
+ )
- # Use created date of last item to paginate
last_created = all_items[-1].get("createdAt", "")
if not last_created:
break
search_filter = f"created:<{last_created}"
- items = await fetch_items_page(repo, item_type, state, BATCH_SIZE, search_filter)
+ items = await fetch_items_page(
+ repo, item_type, state, BATCH_SIZE, search_filter
+ )
fetched_count = len(items)
if fetched_count == 0:
break
- # Deduplicate by number
existing_numbers = {item["number"] for item in all_items}
new_items = [item for item in items if item["number"] not in existing_numbers]
all_items.extend(new_items)
@@ -157,12 +159,10 @@ async def fetch_all_items(
f"[dim]Page {page}: fetched {fetched_count}, added {len(new_items)} new (total: {len(all_items)})[/dim]"
)
- # Safety limit
if page > 20:
console.print("[yellow]Safety limit reached (20 pages)[/yellow]")
break
- # Filter by time if specified
if hours is not None:
cutoff = datetime.now(UTC) - timedelta(hours=hours)
cutoff_str = cutoff.isoformat()
@@ -171,11 +171,14 @@ async def fetch_all_items(
all_items = [
item
for item in all_items
- if item.get("createdAt", "") >= cutoff_str or item.get("updatedAt", "") >= cutoff_str
+ if item.get("createdAt", "") >= cutoff_str
+ or item.get("updatedAt", "") >= cutoff_str
]
filtered_count = original_count - len(all_items)
if filtered_count > 0:
- console.print(f"[dim]Filtered out {filtered_count} items older than {hours} hours[/dim]")
+ console.print(
+ f"[dim]Filtered out {filtered_count} items older than {hours} hours[/dim]"
+ )
return all_items
@@ -190,14 +193,16 @@ def display_table(items: list[dict], item_type: str) -> None:
table.add_column("Labels", style="magenta", max_width=30)
table.add_column("Updated", style="dim", width=12)
- for item in items[:50]: # Show first 50
+ for item in items[:50]:
labels = ", ".join(label.get("name", "") for label in item.get("labels", []))
updated = item.get("updatedAt", "")[:10]
author = item.get("author", {}).get("login", "unknown")
table.add_row(
str(item.get("number", "")),
- (item.get("title", "")[:47] + "...") if len(item.get("title", "")) > 50 else item.get("title", ""),
+ (item.get("title", "")[:47] + "...")
+ if len(item.get("title", "")) > 50
+ else item.get("title", ""),
item.get("state", ""),
author,
(labels[:27] + "...") if len(labels) > 30 else labels,
@@ -211,13 +216,21 @@ def display_table(items: list[dict], item_type: str) -> None:
@app.command()
def issues(
- repo: Annotated[str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")] = None,
- state: Annotated[ItemState, typer.Option("--state", "-s", help="Issue state filter")] = ItemState.ALL,
+ repo: Annotated[
+ str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")
+ ] = None,
+ state: Annotated[
+ ItemState, typer.Option("--state", "-s", help="Issue state filter")
+ ] = ItemState.ALL,
hours: Annotated[
int | None,
- typer.Option("--hours", "-h", help="Only issues from last N hours (created or updated)"),
+ typer.Option(
+ "--hours", "-h", help="Only issues from last N hours (created or updated)"
+ ),
] = None,
- output: Annotated[OutputFormat, typer.Option("--output", "-o", help="Output format")] = OutputFormat.TABLE,
+ output: Annotated[
+ OutputFormat, typer.Option("--output", "-o", help="Output format")
+ ] = OutputFormat.TABLE,
) -> None:
"""Fetch all issues with exhaustive pagination."""
@@ -225,33 +238,29 @@ def issues(
target_repo = repo or await get_current_repo()
console.print(f"""
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
[cyan]Repository:[/cyan] {target_repo}
[cyan]State:[/cyan] {state.value}
[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
""")
with Progress(console=console) as progress:
task: TaskID = progress.add_task("[cyan]Fetching issues...", total=None)
-
- items = await fetch_all_items(target_repo, "issue", state.value, hours, progress, task)
-
- progress.update(task, description="[green]Complete!", completed=100, total=100)
+ items = await fetch_all_items(
+ target_repo, "issue", state.value, hours, progress, task
+ )
+ progress.update(
+ task, description="[green]Complete!", completed=100, total=100
+ )
console.print(
- Panel(
- f"[green]✓ Found {len(items)} issues[/green]",
- title="[green]Pagination Complete[/green]",
- border_style="green",
- )
+ Panel(f"[green]Found {len(items)} issues[/green]", border_style="green")
)
if output == OutputFormat.JSON:
console.print(json.dumps(items, indent=2, ensure_ascii=False))
elif output == OutputFormat.TABLE:
display_table(items, "issue")
- else: # COUNT
+ else:
console.print(f"Total issues: {len(items)}")
asyncio.run(async_main())
@@ -259,13 +268,21 @@ def issues(
@app.command()
def prs(
- repo: Annotated[str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")] = None,
- state: Annotated[ItemState, typer.Option("--state", "-s", help="PR state filter")] = ItemState.OPEN,
+ repo: Annotated[
+ str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")
+ ] = None,
+ state: Annotated[
+ ItemState, typer.Option("--state", "-s", help="PR state filter")
+ ] = ItemState.OPEN,
hours: Annotated[
int | None,
- typer.Option("--hours", "-h", help="Only PRs from last N hours (created or updated)"),
+ typer.Option(
+ "--hours", "-h", help="Only PRs from last N hours (created or updated)"
+ ),
] = None,
- output: Annotated[OutputFormat, typer.Option("--output", "-o", help="Output format")] = OutputFormat.TABLE,
+ output: Annotated[
+ OutputFormat, typer.Option("--output", "-o", help="Output format")
+ ] = OutputFormat.TABLE,
) -> None:
"""Fetch all PRs with exhaustive pagination."""
@@ -273,33 +290,29 @@ def prs(
target_repo = repo or await get_current_repo()
console.print(f"""
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
[cyan]Repository:[/cyan] {target_repo}
[cyan]State:[/cyan] {state.value}
[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
""")
with Progress(console=console) as progress:
task: TaskID = progress.add_task("[cyan]Fetching PRs...", total=None)
-
- items = await fetch_all_items(target_repo, "pr", state.value, hours, progress, task)
-
- progress.update(task, description="[green]Complete!", completed=100, total=100)
+ items = await fetch_all_items(
+ target_repo, "pr", state.value, hours, progress, task
+ )
+ progress.update(
+ task, description="[green]Complete!", completed=100, total=100
+ )
console.print(
- Panel(
- f"[green]✓ Found {len(items)} PRs[/green]",
- title="[green]Pagination Complete[/green]",
- border_style="green",
- )
+ Panel(f"[green]Found {len(items)} PRs[/green]", border_style="green")
)
if output == OutputFormat.JSON:
console.print(json.dumps(items, indent=2, ensure_ascii=False))
elif output == OutputFormat.TABLE:
display_table(items, "pr")
- else: # COUNT
+ else:
console.print(f"Total PRs: {len(items)}")
asyncio.run(async_main())
@@ -307,13 +320,21 @@ def prs(
@app.command(name="all")
def fetch_all(
- repo: Annotated[str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")] = None,
- state: Annotated[ItemState, typer.Option("--state", "-s", help="State filter")] = ItemState.ALL,
+ repo: Annotated[
+ str | None, typer.Option("--repo", "-r", help="Repository (owner/repo)")
+ ] = None,
+ state: Annotated[
+ ItemState, typer.Option("--state", "-s", help="State filter")
+ ] = ItemState.ALL,
hours: Annotated[
int | None,
- typer.Option("--hours", "-h", help="Only items from last N hours (created or updated)"),
+ typer.Option(
+ "--hours", "-h", help="Only items from last N hours (created or updated)"
+ ),
] = None,
- output: Annotated[OutputFormat, typer.Option("--output", "-o", help="Output format")] = OutputFormat.TABLE,
+ output: Annotated[
+ OutputFormat, typer.Option("--output", "-o", help="Output format")
+ ] = OutputFormat.TABLE,
) -> None:
"""Fetch all issues AND PRs with exhaustive pagination."""
@@ -321,22 +342,25 @@ def fetch_all(
target_repo = repo or await get_current_repo()
console.print(f"""
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
[cyan]Repository:[/cyan] {target_repo}
[cyan]State:[/cyan] {state.value}
[cyan]Time filter:[/cyan] {f"Last {hours} hours" if hours else "All time"}
[cyan]Fetching:[/cyan] Issues AND PRs
-[cyan]━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[/cyan]
""")
with Progress(console=console) as progress:
- issues_task: TaskID = progress.add_task("[cyan]Fetching issues...", total=None)
+ issues_task: TaskID = progress.add_task(
+ "[cyan]Fetching issues...", total=None
+ )
prs_task: TaskID = progress.add_task("[cyan]Fetching PRs...", total=None)
- # Fetch in parallel
issues_items, prs_items = await asyncio.gather(
- fetch_all_items(target_repo, "issue", state.value, hours, progress, issues_task),
- fetch_all_items(target_repo, "pr", state.value, hours, progress, prs_task),
+ fetch_all_items(
+ target_repo, "issue", state.value, hours, progress, issues_task
+ ),
+ fetch_all_items(
+ target_repo, "pr", state.value, hours, progress, prs_task
+ ),
)
progress.update(
@@ -345,12 +369,13 @@ def fetch_all(
completed=100,
total=100,
)
- progress.update(prs_task, description="[green]PRs complete!", completed=100, total=100)
+ progress.update(
+ prs_task, description="[green]PRs complete!", completed=100, total=100
+ )
console.print(
Panel(
- f"[green]✓ Found {len(issues_items)} issues and {len(prs_items)} PRs[/green]",
- title="[green]Pagination Complete[/green]",
+ f"[green]Found {len(issues_items)} issues and {len(prs_items)} PRs[/green]",
border_style="green",
)
)
@@ -362,7 +387,7 @@ def fetch_all(
display_table(issues_items, "issue")
console.print("")
display_table(prs_items, "pr")
- else: # COUNT
+ else:
console.print(f"Total issues: {len(issues_items)}")
console.print(f"Total PRs: {len(prs_items)}")
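The `all` command's `asyncio.gather` call fetches issues and PRs concurrently and unpacks the two result lists in order. A minimal, runnable sketch of that pattern (the `fetch` coroutine is a hypothetical stand-in for `fetch_all_items`, with a sleep simulating network latency):

```python
import asyncio


async def fetch(kind: str) -> list[str]:
    # Stand-in for fetch_all_items: pretend to do I/O, then return items.
    await asyncio.sleep(0.01)
    return [f"{kind}-1", f"{kind}-2"]


async def main():
    # Both fetches run concurrently; gather preserves argument order,
    # so results unpack as (issues, prs).
    return await asyncio.gather(fetch("issue"), fetch("pr"))


issues_items, prs_items = asyncio.run(main())
```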