Compare commits

...

10 Commits

Author SHA1 Message Date
YeonGyu-Kim
09cfd0b408 diag(todo-continuation): add comprehensive debug logging for session idle handling
Add [TODO-DIAG] console.error statements throughout the todo continuation
enforcer to help diagnose why continuation prompts aren't being injected.

Changes:
- Add session.idle event handler diagnostic in handler.ts
- Add detailed blocking reason logging in idle-event.ts for all gate checks
- Update JSON schema to reflect circuit breaker config changes

🤖 Generated with assistance of [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
2026-03-18 14:45:14 +09:00
YeonGyu-Kim
d48ea025f0 refactor(circuit-breaker): replace sliding window with consecutive call detection
Switch background task loop detection from percentage-based sliding window
(80% of 20-call window) to consecutive same-tool counting. Triggers when
same tool signature is called 20+ times in a row; a different tool resets
the counter.
2026-03-18 14:32:27 +09:00
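The consecutive-counting scheme this commit describes can be sketched in a few lines. Names like `record` and `triggered` are illustrative only; the plugin's actual detector tracks full tool signatures (name plus input), not bare tool names:

```typescript
// Illustrative sketch of consecutive same-tool detection: a counter that
// increments while the signature repeats and resets on any different tool.
interface ToolWindow {
  lastSignature: string
  consecutiveCount: number
}

function record(window: ToolWindow | undefined, signature: string): ToolWindow {
  if (window && window.lastSignature === signature) {
    return { lastSignature: signature, consecutiveCount: window.consecutiveCount + 1 }
  }
  // A different tool resets the counter back to 1.
  return { lastSignature: signature, consecutiveCount: 1 }
}

function triggered(window: ToolWindow, threshold = 20): boolean {
  return window.consecutiveCount >= threshold
}
```

Unlike the sliding-window percentage check, this detector needs only O(1) state and cannot be kept just under threshold by an occasional unrelated call mixed into the window.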
YeonGyu-Kim
c5c7ba4eed perf: pre-compile regex patterns and optimize hot-path string operations
- error-classifier: pre-compile default retry pattern regex
- think-mode/detector: combine multilingual patterns into single regex
- parser: skip redundant toLowerCase on pre-lowered keywords
- edit-operations: use fast arraysEqual instead of JSON comparison
- hash-computation: optimize streaming line extraction with index tracking
2026-03-18 14:19:23 +09:00
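Pre-compiling a regex means hoisting it to module scope so the pattern is parsed once at import instead of on every call. A minimal sketch — the pattern and names here are hypothetical, not the actual retry patterns in error-classifier:

```typescript
// Compiled once at module load and reused across calls. Avoid the `g` flag
// here, since a global regex carries lastIndex state between test() calls.
const RETRY_PATTERN = /rate limit|timeout|overloaded/i

function isRetryableError(message: string): boolean {
  return RETRY_PATTERN.test(message)
}
```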
YeonGyu-Kim
90aa3a306c perf(hooks,tools): optimize string operations and reduce redundant iterations
- output-renderer, hashline-edit-diff: replace str += with array join (H2)
- auto-slash-command: single-pass Map grouping instead of 6x filter (M1)
- comment-checker: hoist Zod schema to module scope (M2)
- session-last-agent: reverse iterate sorted array instead of sort+reverse (L2)
2026-03-18 14:19:12 +09:00
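The single-pass Map grouping mentioned for auto-slash-command (M1) can be sketched like this; the `Item` shape is illustrative, not the plugin's actual type:

```typescript
// Build every group in one pass instead of running items.filter(...) once
// per category (six filters become one loop).
type Item = { category: string; name: string }

function groupByCategory(items: Item[]): Map<string, Item[]> {
  const groups = new Map<string, Item[]>()
  for (const item of items) {
    const bucket = groups.get(item.category)
    if (bucket) bucket.push(item)
    else groups.set(item.category, [item])
  }
  return groups
}
```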
YeonGyu-Kim
c2f7d059d2 perf(shared): optimize hot-path utilities across plugin
- task-list: replace O(n³) blocker resolution with Map lookup (C4)
- logger: buffer log entries and flush periodically to reduce sync I/O (C5)
- plugin-interface: create chatParamsHandler once at init (H3)
- pattern-matcher: cache compiled RegExp for wildcard matchers (H6)
- file-reference-resolver: use replaceAll instead of split/join (M9)
- connected-providers-cache: add in-memory cache for read operations (L4)
2026-03-18 14:19:00 +09:00
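The Map-based blocker resolution (C4) replaces nested scans over the task list with a single id index. A sketch under an assumed task shape:

```typescript
// Index tasks by id once, then each blocker reference becomes an O(1) lookup
// instead of a scan over the whole task list per reference.
type Task = { id: string; blockedBy: string[] }

function resolveBlockers(tasks: Task[]): Map<string, Task[]> {
  const byId = new Map(tasks.map((t) => [t.id, t] as const))
  const resolved = new Map<string, Task[]>()
  for (const task of tasks) {
    const blockers: Task[] = []
    for (const id of task.blockedBy) {
      const blocker = byId.get(id)
      if (blocker) blockers.push(blocker)
    }
    resolved.set(task.id, blockers)
  }
  return resolved
}
```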
YeonGyu-Kim
7a96a167e6 perf(claude-code-hooks): defer config loading until after disabled check
Move loadClaudeHooksConfig and loadPluginExtendedConfig after isHookDisabled check
in both tool-execute-before and tool-execute-after handlers to skip 5 file reads
per tool call when hooks are disabled (C1)
2026-03-18 14:18:49 +09:00
YeonGyu-Kim
2da19fe608 perf(background-agent): use Set for countedToolPartIDs, cache circuit breaker settings, optimize loop detector
- Replace countedToolPartIDs string[] with Set<string> for O(1) has/add vs O(n) includes/spread (C2)
- Cache resolveCircuitBreakerSettings at manager level to avoid repeated object creation (C3)
- Optimize recordToolCall to avoid full array copy with slice (L1)
2026-03-18 14:18:38 +09:00
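The C2 change swaps an array for a Set; the difference is easiest to see in isolation (illustrative helper, not the manager's actual method):

```typescript
// Set.has/add are O(1); the old Array.includes was O(n), and each insert
// copied the whole array via spread.
const countedToolPartIDs = new Set<string>()

function countToolPart(id: string): boolean {
  if (countedToolPartIDs.has(id)) return false // already counted, skip
  countedToolPartIDs.add(id) // mutates in place, no copy
  return true
}
```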
YeonGyu-Kim
952bd5338d fix(background-agent): treat non-active session statuses as terminal to prevent parent session hang
Previously, pollRunningTasks() and checkAndInterruptStaleTasks() treated
any non-"idle" session status as "still running", which caused tasks with
terminal statuses like "interrupted" to be skipped indefinitely — both
for completion detection AND stale timeout. This made the parent session
hang forever waiting for an ALL COMPLETE notification that never came.

Extract isActiveSessionStatus() and isTerminalSessionStatus() that
classify session statuses explicitly. Only known active statuses
("busy", "retry", "running") protect tasks from completion/stale checks.
Known terminal statuses ("interrupted") trigger immediate completion.
Unknown statuses fall through to the standard idle/gone path with output
validation as a conservative default.

Introduced by: a0c93816 (2026-02-14), dc370f7f (2026-03-08)
2026-03-18 14:06:23 +09:00
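The classification described above can be sketched as two explicit allowlists. The status strings come from the commit message; the real helpers live in session-status-classifier.ts:

```typescript
// Only known-active statuses protect a task from completion/stale checks;
// known-terminal ones complete it immediately; anything else falls through
// to the standard idle/gone handling.
const ACTIVE_STATUSES = new Set(["busy", "retry", "running"])
const TERMINAL_STATUSES = new Set(["interrupted"])

function isActiveSessionStatus(status: string): boolean {
  return ACTIVE_STATUSES.has(status)
}

function isTerminalSessionStatus(status: string): boolean {
  return TERMINAL_STATUSES.has(status)
}
```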
YeonGyu-Kim
57757a345d refactor: improve test isolation and DI for cache/port-utils/resolve-file-uri
- connected-providers-cache: extract factory pattern (createConnectedProvidersCacheStore) for testable cache dir injection
- port-utils.test: environment-independent tests with real socket probing and contiguous port detection
- resolve-file-uri.test: mock homedir instead of touching real home directory
- github-triage: update SKILL.md
2026-03-18 13:17:01 +09:00
YeonGyu-Kim
3caae14192 fix(ralph-loop): abort stale Oracle sessions before ulw verification restart
When Oracle verification fails in ulw-loop mode, the previous Oracle
session was never aborted before restarting. Each retry created a new
descendant session, causing unbounded session accumulation and 500
errors from server overload.

Now abort the old verification session before:
- restarting the loop after failed verification
- re-entering verification phase on subsequent DONE detection
2026-03-18 12:49:27 +09:00
41 changed files with 1080 additions and 554 deletions

View File

@@ -136,7 +136,36 @@ fi
 ---
-## Phase 3: Spawn Subagents
+## Phase 3: Spawn Subagents (Individual Tool Calls)
+**CRITICAL: Create tasks ONE BY ONE using individual `task_create` tool calls. NEVER batch or script.**
+For each item, execute these steps sequentially:
+### Step 3.1: Create Task Record
+```typescript
+task_create(
+  subject="Triage: #{number} {title}",
+  description="GitHub {issue|PR} triage analysis - {type}",
+  metadata={"type": "{ISSUE_QUESTION|ISSUE_BUG|ISSUE_FEATURE|ISSUE_OTHER|PR_BUGFIX|PR_OTHER}", "number": {number}}
+)
+```
+### Step 3.2: Spawn Analysis Subagent (Background)
+```typescript
+task(
+  category="quick",
+  run_in_background=true,
+  load_skills=[],
+  prompt=SUBAGENT_PROMPT
+)
+```
+**ABSOLUTE RULES for Subagents:**
+- **ONLY ANALYZE** - Never take action on GitHub (no comments, merges, closes)
+- **READ-ONLY** - Use tools only for reading code/GitHub data
+- **WRITE REPORT ONLY** - Output goes to `{REPORT_DIR}/{issue|pr}-{number}.md` via Write tool
+- **EVIDENCE REQUIRED** - Every claim must have GitHub permalink as proof
 ```
 For each item:
@@ -170,6 +199,7 @@ ABSOLUTE RULES (violating ANY = critical failure):
 - Your ONLY writable output: {REPORT_DIR}/{issue|pr}-{number}.md via the Write tool
 ```
 ---
 ### ISSUE_QUESTION

View File

@@ -3716,15 +3716,10 @@
       "minimum": 10,
       "maximum": 9007199254740991
     },
-    "windowSize": {
+    "consecutiveThreshold": {
       "type": "integer",
       "minimum": 5,
       "maximum": 9007199254740991
-    },
-    "repetitionThresholdPercent": {
-      "type": "number",
-      "exclusiveMinimum": 0,
-      "maximum": 100
     }
   },
   "additionalProperties": false

View File

@@ -1,20 +1,32 @@
-import { afterAll, beforeAll, describe, expect, test } from "bun:test"
+import { afterAll, beforeAll, describe, expect, mock, test } from "bun:test"
 import { mkdirSync, rmSync, writeFileSync } from "node:fs"
-import { homedir, tmpdir } from "node:os"
+import * as os from "node:os"
+import { tmpdir } from "node:os"
 import { join } from "node:path"
-import { resolvePromptAppend } from "./resolve-file-uri"
+const originalHomedir = os.homedir.bind(os)
+let mockedHomeDir = ""
+let moduleImportCounter = 0
+let resolvePromptAppend: typeof import("./resolve-file-uri").resolvePromptAppend
+mock.module("node:os", () => ({
+  ...os,
+  homedir: () => mockedHomeDir || originalHomedir(),
+}))
 describe("resolvePromptAppend", () => {
   const fixtureRoot = join(tmpdir(), `resolve-file-uri-${Date.now()}`)
   const configDir = join(fixtureRoot, "config")
-  const homeFixtureDir = join(homedir(), `.resolve-file-uri-home-${Date.now()}`)
+  const homeFixtureRoot = join(fixtureRoot, "home")
+  const homeFixtureDir = join(homeFixtureRoot, "fixture-home")
   const absoluteFilePath = join(fixtureRoot, "absolute.txt")
   const relativeFilePath = join(configDir, "relative.txt")
   const spacedFilePath = join(fixtureRoot, "with space.txt")
   const homeFilePath = join(homeFixtureDir, "home.txt")
-  beforeAll(() => {
+  beforeAll(async () => {
+    mockedHomeDir = homeFixtureRoot
     mkdirSync(fixtureRoot, { recursive: true })
     mkdirSync(configDir, { recursive: true })
     mkdirSync(homeFixtureDir, { recursive: true })
@@ -23,11 +35,14 @@ describe("resolvePromptAppend", () => {
     writeFileSync(relativeFilePath, "relative-content", "utf8")
     writeFileSync(spacedFilePath, "encoded-content", "utf8")
     writeFileSync(homeFilePath, "home-content", "utf8")
+    moduleImportCounter += 1
+    ;({ resolvePromptAppend } = await import(`./resolve-file-uri?test=${moduleImportCounter}`))
   })
   afterAll(() => {
     rmSync(fixtureRoot, { recursive: true, force: true })
-    rmSync(homeFixtureDir, { recursive: true, force: true })
+    mock.restore()
   })
   test("returns non-file URI strings unchanged", () => {
@@ -65,7 +80,7 @@ describe("resolvePromptAppend", () => {
   test("resolves home directory URI path", () => {
     //#given
-    const input = `file://~/${homeFixtureDir.split("/").pop()}/home.txt`
+    const input = "file://~/fixture-home/home.txt"
     //#when
     const resolved = resolvePromptAppend(input)

View File

@@ -45,26 +45,26 @@ export function writePaddedText(
     return { output: text, atLineStart: text.endsWith("\n") }
   }
-  let output = ""
+  const parts: string[] = []
   let lineStart = atLineStart
   for (let i = 0; i < text.length; i++) {
     const ch = text[i]
     if (lineStart) {
-      output += " "
+      parts.push(" ")
       lineStart = false
     }
     if (ch === "\n") {
-      output += " \n"
+      parts.push(" \n")
       lineStart = true
       continue
     }
-    output += ch
+    parts.push(ch)
   }
-  return { output, atLineStart: lineStart }
+  return { output: parts.join(""), atLineStart: lineStart }
 }
 function colorizeWithProfileColor(text: string, hexColor?: string): string {

View File

@@ -8,27 +8,24 @@ describe("BackgroundTaskConfigSchema.circuitBreaker", () => {
     const result = BackgroundTaskConfigSchema.parse({
       circuitBreaker: {
         maxToolCalls: 150,
-        windowSize: 10,
-        repetitionThresholdPercent: 70,
+        consecutiveThreshold: 10,
       },
     })
     expect(result.circuitBreaker).toEqual({
       maxToolCalls: 150,
-      windowSize: 10,
-      repetitionThresholdPercent: 70,
+      consecutiveThreshold: 10,
     })
   })
 })
-describe("#given windowSize below minimum", () => {
+describe("#given consecutiveThreshold below minimum", () => {
   test("#when parsed #then throws ZodError", () => {
     let thrownError: unknown
     try {
       BackgroundTaskConfigSchema.parse({
         circuitBreaker: {
-          windowSize: 4,
+          consecutiveThreshold: 4,
         },
       })
     } catch (error) {
@@ -39,14 +36,14 @@ describe("BackgroundTaskConfigSchema.circuitBreaker", () => {
   })
 })
-describe("#given repetitionThresholdPercent is zero", () => {
+describe("#given consecutiveThreshold is zero", () => {
   test("#when parsed #then throws ZodError", () => {
     let thrownError: unknown
     try {
       BackgroundTaskConfigSchema.parse({
         circuitBreaker: {
-          repetitionThresholdPercent: 0,
+          consecutiveThreshold: 0,
         },
       })
     } catch (error) {

View File

@@ -3,8 +3,7 @@ import { z } from "zod"
 const CircuitBreakerConfigSchema = z.object({
   enabled: z.boolean().optional(),
   maxToolCalls: z.number().int().min(10).optional(),
-  windowSize: z.number().int().min(5).optional(),
-  repetitionThresholdPercent: z.number().gt(0).max(100).optional(),
+  consecutiveThreshold: z.number().int().min(5).optional(),
 })
 export const BackgroundTaskConfigSchema = z.object({

View File

@@ -7,8 +7,7 @@ export const MIN_STABILITY_TIME_MS = 10 * 1000
 export const DEFAULT_STALE_TIMEOUT_MS = 1_200_000
 export const DEFAULT_MESSAGE_STALENESS_TIMEOUT_MS = 1_800_000
 export const DEFAULT_MAX_TOOL_CALLS = 200
-export const DEFAULT_CIRCUIT_BREAKER_WINDOW_SIZE = 20
-export const DEFAULT_CIRCUIT_BREAKER_REPETITION_THRESHOLD_PERCENT = 80
+export const DEFAULT_CIRCUIT_BREAKER_CONSECUTIVE_THRESHOLD = 20
 export const DEFAULT_CIRCUIT_BREAKER_ENABLED = true
 export const MIN_RUNTIME_BEFORE_STALE_MS = 30_000
 export const MIN_IDLE_TIME_MS = 5000

View File

@@ -37,16 +37,14 @@ describe("loop-detector", () => {
       maxToolCalls: 200,
       circuitBreaker: {
         maxToolCalls: 120,
-        windowSize: 10,
-        repetitionThresholdPercent: 70,
+        consecutiveThreshold: 7,
       },
     })
     expect(result).toEqual({
       enabled: true,
       maxToolCalls: 120,
-      windowSize: 10,
-      repetitionThresholdPercent: 70,
+      consecutiveThreshold: 7,
     })
   })
 })
@@ -56,8 +54,7 @@ describe("loop-detector", () => {
     const result = resolveCircuitBreakerSettings({
       circuitBreaker: {
         maxToolCalls: 100,
-        windowSize: 5,
-        repetitionThresholdPercent: 60,
+        consecutiveThreshold: 5,
       },
     })
@@ -71,8 +68,7 @@ describe("loop-detector", () => {
       circuitBreaker: {
         enabled: false,
         maxToolCalls: 100,
-        windowSize: 5,
-        repetitionThresholdPercent: 60,
+        consecutiveThreshold: 5,
       },
     })
@@ -86,8 +82,7 @@ describe("loop-detector", () => {
       circuitBreaker: {
         enabled: true,
         maxToolCalls: 100,
-        windowSize: 5,
-        repetitionThresholdPercent: 60,
+        consecutiveThreshold: 5,
       },
     })
@@ -151,55 +146,52 @@ describe("loop-detector", () => {
   })
 })
-describe("#given the same tool dominates the recent window", () => {
+describe("#given the same tool is called consecutively", () => {
   test("#when evaluated #then it triggers", () => {
-    const window = buildWindow([
-      "read",
-      "read",
-      "read",
-      "edit",
-      "read",
-      "read",
-      "read",
-      "read",
-      "grep",
-      "read",
-    ], {
-      circuitBreaker: {
-        windowSize: 10,
-        repetitionThresholdPercent: 80,
-      },
-    })
+    const window = buildWindow(Array.from({ length: 20 }, () => "read"))
     const result = detectRepetitiveToolUse(window)
     expect(result).toEqual({
       triggered: true,
       toolName: "read",
-      repeatedCount: 8,
-      sampleSize: 10,
-      thresholdPercent: 80,
+      repeatedCount: 20,
     })
   })
 })
-describe("#given the window is not full yet", () => {
-  test("#when the current sample crosses the threshold #then it still triggers", () => {
-    const window = buildWindow(["read", "read", "edit", "read", "read", "read", "read", "read"], {
-      circuitBreaker: {
-        windowSize: 10,
-        repetitionThresholdPercent: 80,
-      },
-    })
+describe("#given consecutive calls are interrupted by different tool", () => {
+  test("#when evaluated #then it does not trigger", () => {
+    const window = buildWindow([
+      ...Array.from({ length: 19 }, () => "read"),
+      "edit",
+      "read",
+    ])
     const result = detectRepetitiveToolUse(window)
-    expect(result).toEqual({
-      triggered: true,
-      toolName: "read",
-      repeatedCount: 7,
-      sampleSize: 8,
-      thresholdPercent: 80,
-    })
+    expect(result).toEqual({ triggered: false })
+  })
+})
+describe("#given threshold boundary", () => {
+  test("#when below threshold #then it does not trigger", () => {
+    const belowThresholdWindow = buildWindow(Array.from({ length: 19 }, () => "read"))
+    const result = detectRepetitiveToolUse(belowThresholdWindow)
+    expect(result).toEqual({ triggered: false })
+  })
+  test("#when equal to threshold #then it triggers", () => {
+    const atThresholdWindow = buildWindow(Array.from({ length: 20 }, () => "read"))
+    const result = detectRepetitiveToolUse(atThresholdWindow)
+    expect(result).toEqual({
+      triggered: true,
+      toolName: "read",
+      repeatedCount: 20,
+    })
   })
 })
@@ -210,9 +202,7 @@ describe("loop-detector", () => {
       tool: "read",
       input: { filePath: `/src/file-${i}.ts` },
     }))
-    const window = buildWindowWithInputs(calls, {
-      circuitBreaker: { windowSize: 20, repetitionThresholdPercent: 80 },
-    })
+    const window = buildWindowWithInputs(calls)
     const result = detectRepetitiveToolUse(window)
     expect(result.triggered).toBe(false)
   })
@@ -220,38 +210,30 @@ describe("loop-detector", () => {
 describe("#given same tool with identical file inputs", () => {
   test("#when evaluated #then it triggers with bare tool name", () => {
-    const calls = [
-      ...Array.from({ length: 16 }, () => ({ tool: "read", input: { filePath: "/src/same.ts" } })),
-      { tool: "grep", input: { pattern: "foo" } },
-      { tool: "edit", input: { filePath: "/src/other.ts" } },
-      { tool: "bash", input: { command: "ls" } },
-      { tool: "glob", input: { pattern: "**/*.ts" } },
-    ]
-    const window = buildWindowWithInputs(calls, {
-      circuitBreaker: { windowSize: 20, repetitionThresholdPercent: 80 },
-    })
+    const calls = Array.from({ length: 20 }, () => ({
+      tool: "read",
+      input: { filePath: "/src/same.ts" },
+    }))
+    const window = buildWindowWithInputs(calls)
     const result = detectRepetitiveToolUse(window)
-    expect(result.triggered).toBe(true)
-    expect(result.toolName).toBe("read")
-    expect(result.repeatedCount).toBe(16)
+    expect(result).toEqual({
+      triggered: true,
+      toolName: "read",
+      repeatedCount: 20,
+    })
   })
 })
 describe("#given tool calls with no input", () => {
-  test("#when the same tool dominates #then falls back to name-only detection", () => {
-    const calls = [
-      ...Array.from({ length: 16 }, () => ({ tool: "read" })),
-      { tool: "grep" },
-      { tool: "edit" },
-      { tool: "bash" },
-      { tool: "glob" },
-    ]
-    const window = buildWindowWithInputs(calls, {
-      circuitBreaker: { windowSize: 20, repetitionThresholdPercent: 80 },
-    })
+  test("#when evaluated #then it triggers", () => {
+    const calls = Array.from({ length: 20 }, () => ({ tool: "read" }))
+    const window = buildWindowWithInputs(calls)
     const result = detectRepetitiveToolUse(window)
-    expect(result.triggered).toBe(true)
-    expect(result.toolName).toBe("read")
+    expect(result).toEqual({
+      triggered: true,
+      toolName: "read",
+      repeatedCount: 20,
    })
   })
 })
 })

View File

@@ -1,8 +1,7 @@
 import type { BackgroundTaskConfig } from "../../config/schema"
 import {
   DEFAULT_CIRCUIT_BREAKER_ENABLED,
-  DEFAULT_CIRCUIT_BREAKER_REPETITION_THRESHOLD_PERCENT,
-  DEFAULT_CIRCUIT_BREAKER_WINDOW_SIZE,
+  DEFAULT_CIRCUIT_BREAKER_CONSECUTIVE_THRESHOLD,
   DEFAULT_MAX_TOOL_CALLS,
 } from "./constants"
 import type { ToolCallWindow } from "./types"
@@ -10,16 +9,13 @@ import type { ToolCallWindow } from "./types"
 export interface CircuitBreakerSettings {
   enabled: boolean
   maxToolCalls: number
-  windowSize: number
-  repetitionThresholdPercent: number
+  consecutiveThreshold: number
 }
 export interface ToolLoopDetectionResult {
   triggered: boolean
   toolName?: string
   repeatedCount?: number
-  sampleSize?: number
-  thresholdPercent?: number
 }
 export function resolveCircuitBreakerSettings(
@@ -29,10 +25,8 @@ export function resolveCircuitBreakerSettings(
     enabled: config?.circuitBreaker?.enabled ?? DEFAULT_CIRCUIT_BREAKER_ENABLED,
     maxToolCalls:
       config?.circuitBreaker?.maxToolCalls ?? config?.maxToolCalls ?? DEFAULT_MAX_TOOL_CALLS,
-    windowSize: config?.circuitBreaker?.windowSize ?? DEFAULT_CIRCUIT_BREAKER_WINDOW_SIZE,
-    repetitionThresholdPercent:
-      config?.circuitBreaker?.repetitionThresholdPercent ??
-      DEFAULT_CIRCUIT_BREAKER_REPETITION_THRESHOLD_PERCENT,
+    consecutiveThreshold:
+      config?.circuitBreaker?.consecutiveThreshold ?? DEFAULT_CIRCUIT_BREAKER_CONSECUTIVE_THRESHOLD,
   }
 }
@@ -42,14 +36,20 @@ export function recordToolCall(
   settings: CircuitBreakerSettings,
   toolInput?: Record<string, unknown> | null
 ): ToolCallWindow {
-  const previous = window?.toolSignatures ?? []
   const signature = createToolCallSignature(toolName, toolInput)
-  const toolSignatures = [...previous, signature].slice(-settings.windowSize)
+  if (window && window.lastSignature === signature) {
+    return {
+      lastSignature: signature,
+      consecutiveCount: window.consecutiveCount + 1,
+      threshold: settings.consecutiveThreshold,
+    }
+  }
   return {
-    toolSignatures,
-    windowSize: settings.windowSize,
-    thresholdPercent: settings.repetitionThresholdPercent,
+    lastSignature: signature,
+    consecutiveCount: 1,
+    threshold: settings.consecutiveThreshold,
   }
 }
@@ -82,46 +82,13 @@ export function createToolCallSignature(
 export function detectRepetitiveToolUse(
   window: ToolCallWindow | undefined
 ): ToolLoopDetectionResult {
-  if (!window || window.toolSignatures.length === 0) {
-    return { triggered: false }
-  }
-  const counts = new Map<string, number>()
-  for (const signature of window.toolSignatures) {
-    counts.set(signature, (counts.get(signature) ?? 0) + 1)
-  }
-  let repeatedTool: string | undefined
-  let repeatedCount = 0
-  for (const [toolName, count] of counts.entries()) {
-    if (count > repeatedCount) {
-      repeatedTool = toolName
-      repeatedCount = count
-    }
-  }
-  const sampleSize = window.toolSignatures.length
-  const minimumSampleSize = Math.min(
-    window.windowSize,
-    Math.ceil((window.windowSize * window.thresholdPercent) / 100)
-  )
-  if (sampleSize < minimumSampleSize) {
-    return { triggered: false }
-  }
-  const thresholdCount = Math.ceil((sampleSize * window.thresholdPercent) / 100)
-  if (!repeatedTool || repeatedCount < thresholdCount) {
+  if (!window || window.consecutiveCount < window.threshold) {
     return { triggered: false }
   }
   return {
     triggered: true,
-    toolName: repeatedTool.split("::")[0],
-    repeatedCount,
-    sampleSize,
-    thresholdPercent: window.thresholdPercent,
+    toolName: window.lastSignature.split("::")[0],
+    repeatedCount: window.consecutiveCount,
   }
 }

View File

@@ -38,12 +38,11 @@ async function flushAsyncWork() {
 }
 describe("BackgroundManager circuit breaker", () => {
-  describe("#given the same tool dominates the recent window", () => {
-    test("#when tool events arrive #then the task is cancelled early", async () => {
+  describe("#given the same tool is called consecutively", () => {
+    test("#when consecutive tool events arrive #then the task is cancelled", async () => {
       const manager = createManager({
         circuitBreaker: {
-          windowSize: 20,
-          repetitionThresholdPercent: 80,
+          consecutiveThreshold: 20,
         },
       })
       const task: BackgroundTask = {
@@ -63,38 +62,17 @@ describe("BackgroundManager circuit breaker", () => {
       }
       getTaskMap(manager).set(task.id, task)
-      for (const toolName of [
-        "read",
-        "read",
-        "grep",
-        "read",
-        "edit",
-        "read",
-        "read",
-        "bash",
-        "read",
-        "read",
-        "read",
-        "glob",
-        "read",
-        "read",
-        "read",
-        "read",
-        "read",
-        "read",
-        "read",
-        "read",
-      ]) {
+      for (let i = 0; i < 20; i++) {
         manager.handleEvent({
           type: "message.part.updated",
-          properties: { sessionID: task.sessionID, type: "tool", tool: toolName },
+          properties: { sessionID: task.sessionID, type: "tool", tool: "read" },
         })
       }
       await flushAsyncWork()
       expect(task.status).toBe("cancelled")
-      expect(task.error).toContain("repeatedly called read 16/20 times")
+      expect(task.error).toContain("read 20 consecutive times")
     })
   })
@@ -102,8 +80,7 @@ describe("BackgroundManager circuit breaker", () => {
   test("#when the window fills #then the task keeps running", async () => {
     const manager = createManager({
       circuitBreaker: {
-        windowSize: 10,
-        repetitionThresholdPercent: 80,
+        consecutiveThreshold: 10,
       },
     })
     const task: BackgroundTask = {
@@ -153,8 +130,7 @@ describe("BackgroundManager circuit breaker", () => {
     const manager = createManager({
       maxToolCalls: 3,
       circuitBreaker: {
-        windowSize: 10,
-        repetitionThresholdPercent: 95,
+        consecutiveThreshold: 95,
      },
     })
     const task: BackgroundTask = {
@@ -193,8 +169,7 @@ describe("BackgroundManager circuit breaker", () => {
     const manager = createManager({
       maxToolCalls: 2,
      circuitBreaker: {
-        windowSize: 5,
-        repetitionThresholdPercent: 80,
+        consecutiveThreshold: 5,
       },
     })
     const task: BackgroundTask = {
@@ -233,7 +208,7 @@ describe("BackgroundManager circuit breaker", () => {
     expect(task.status).toBe("running")
     expect(task.progress?.toolCalls).toBe(1)
-    expect(task.progress?.countedToolPartIDs).toEqual(["tool-1"])
+    expect(task.progress?.countedToolPartIDs).toEqual(new Set(["tool-1"]))
   })
 })
@@ -241,8 +216,7 @@ describe("BackgroundManager circuit breaker", () => {
   test("#when tool events arrive with state.input #then task keeps running", async () => {
     const manager = createManager({
       circuitBreaker: {
-        windowSize: 20,
-        repetitionThresholdPercent: 80,
+        consecutiveThreshold: 20,
       },
     })
     const task: BackgroundTask = {
@@ -287,8 +261,7 @@ describe("BackgroundManager circuit breaker", () => {
   test("#when tool events arrive with state.input #then task is cancelled with bare tool name in error", async () => {
     const manager = createManager({
       circuitBreaker: {
-        windowSize: 20,
-        repetitionThresholdPercent: 80,
+        consecutiveThreshold: 20,
       },
     })
     const task: BackgroundTask = {
@@ -325,7 +298,7 @@ describe("BackgroundManager circuit breaker", () => {
     await flushAsyncWork()
     expect(task.status).toBe("cancelled")
-    expect(task.error).toContain("repeatedly called read")
+    expect(task.error).toContain("read 20 consecutive times")
     expect(task.error).not.toContain("::")
   })
 })
@@ -335,8 +308,7 @@ describe("BackgroundManager circuit breaker", () => {
     const manager = createManager({
       circuitBreaker: {
         enabled: false,
-        windowSize: 20,
-        repetitionThresholdPercent: 80,
+        consecutiveThreshold: 20,
       },
     })
     const task: BackgroundTask = {
@@ -379,8 +351,7 @@ describe("BackgroundManager circuit breaker", () => {
     maxToolCalls: 3,
     circuitBreaker: {
       enabled: false,
-      windowSize: 10,
-      repetitionThresholdPercent: 95,
+      consecutiveThreshold: 95,
     },
   })
   const task: BackgroundTask = {

View File

@@ -153,4 +153,42 @@ describe("BackgroundManager pollRunningTasks", () => {
     expect(task.status).toBe("running")
   })
 })
+describe("#given a running task whose session has terminal non-idle status", () => {
+  test('#when session status is "interrupted" #then completes the task', async () => {
+    //#given
+    const manager = createManagerWithClient({
+      status: async () => ({ data: { "ses-interrupted": { type: "interrupted" } } }),
+    })
+    const task = createRunningTask("ses-interrupted")
+    injectTask(manager, task)
+    //#when
+    const poll = (manager as unknown as { pollRunningTasks: () => Promise<void> }).pollRunningTasks
+    await poll.call(manager)
+    manager.shutdown()
+    //#then
+    expect(task.status).toBe("completed")
+    expect(task.completedAt).toBeDefined()
+  })
+  test('#when session status is an unknown type #then completes the task', async () => {
+    //#given
+    const manager = createManagerWithClient({
+      status: async () => ({ data: { "ses-unknown": { type: "some-weird-status" } } }),
+    })
+    const task = createRunningTask("ses-unknown")
+    injectTask(manager, task)
+    //#when
+    const poll = (manager as unknown as { pollRunningTasks: () => Promise<void> }).pollRunningTasks
+    await poll.call(manager)
+    manager.shutdown()
+    //#then
+    expect(task.status).toBe("completed")
+    expect(task.completedAt).toBeDefined()
+  })
+})
 })

View File

@@ -52,10 +52,12 @@ import { join } from "node:path"
import { pruneStaleTasksAndNotifications } from "./task-poller" import { pruneStaleTasksAndNotifications } from "./task-poller"
import { checkAndInterruptStaleTasks } from "./task-poller" import { checkAndInterruptStaleTasks } from "./task-poller"
import { removeTaskToastTracking } from "./remove-task-toast-tracking" import { removeTaskToastTracking } from "./remove-task-toast-tracking"
import { isActiveSessionStatus, isTerminalSessionStatus } from "./session-status-classifier"
import {
detectRepetitiveToolUse,
recordToolCall,
resolveCircuitBreakerSettings,
type CircuitBreakerSettings,
} from "./loop-detector"
import {
createSubagentDepthLimitError,
@@ -151,6 +153,7 @@ export class BackgroundManager {
private preStartDescendantReservations: Set<string>
private enableParentSessionNotifications: boolean
readonly taskHistory = new TaskHistory()
private cachedCircuitBreakerSettings?: CircuitBreakerSettings
constructor(
ctx: PluginInput,
@@ -900,23 +903,24 @@ export class BackgroundManager {
task.progress.lastUpdate = new Date()
if (partInfo?.type === "tool" || partInfo?.tool) {
-const countedToolPartIDs = task.progress.countedToolPartIDs ?? []
const countedToolPartIDs = task.progress.countedToolPartIDs ?? new Set<string>()
const shouldCountToolCall =
!partInfo.id ||
partInfo.state?.status !== "running" ||
-!countedToolPartIDs.includes(partInfo.id)
!countedToolPartIDs.has(partInfo.id)
if (!shouldCountToolCall) {
return
}
if (partInfo.id && partInfo.state?.status === "running") {
-task.progress.countedToolPartIDs = [...countedToolPartIDs, partInfo.id]
countedToolPartIDs.add(partInfo.id)
task.progress.countedToolPartIDs = countedToolPartIDs
}
task.progress.toolCalls += 1
task.progress.lastTool = partInfo.tool
-const circuitBreaker = resolveCircuitBreakerSettings(this.config)
const circuitBreaker = this.cachedCircuitBreakerSettings ?? (this.cachedCircuitBreakerSettings = resolveCircuitBreakerSettings(this.config))
if (partInfo.tool) {
task.progress.toolCallWindow = recordToolCall(
task.progress.toolCallWindow,
@@ -928,18 +932,16 @@ export class BackgroundManager {
if (circuitBreaker.enabled) {
const loopDetection = detectRepetitiveToolUse(task.progress.toolCallWindow)
if (loopDetection.triggered) {
-log("[background-agent] Circuit breaker: repetitive tool usage detected", {
log("[background-agent] Circuit breaker: consecutive tool usage detected", {
taskId: task.id,
agent: task.agent,
sessionID,
toolName: loopDetection.toolName,
repeatedCount: loopDetection.repeatedCount,
-sampleSize: loopDetection.sampleSize,
-thresholdPercent: loopDetection.thresholdPercent,
})
void this.cancelTask(task.id, {
source: "circuit-breaker",
-reason: `Subagent repeatedly called ${loopDetection.toolName} ${loopDetection.repeatedCount}/${loopDetection.sampleSize} times in the recent tool-call window (${loopDetection.thresholdPercent}% threshold). This usually indicates an infinite loop. The task was automatically cancelled to prevent excessive token usage.`,
reason: `Subagent called ${loopDetection.toolName} ${loopDetection.repeatedCount} consecutive times (threshold: ${circuitBreaker.consecutiveThreshold}). This usually indicates an infinite loop. The task was automatically cancelled to prevent excessive token usage.`,
})
return
}
@@ -1782,11 +1784,9 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
}
}
-// Match sync-session-poller pattern: only skip completion check when
-// status EXISTS and is not idle (i.e., session is actively running).
-// When sessionStatus is undefined, the session has completed and dropped
-// from the status response — fall through to completion detection.
-if (sessionStatus && sessionStatus.type !== "idle") {
// Only skip completion when session status is actively running.
// Unknown or terminal statuses (like "interrupted") fall through to completion.
if (sessionStatus && isActiveSessionStatus(sessionStatus.type)) {
log("[background-agent] Session still running, relying on event-based progress:", {
taskId: task.id,
sessionID,
@@ -1796,6 +1796,24 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
continue
}
// Explicit terminal non-idle status (e.g., "interrupted") — complete immediately,
// skipping output validation (session will never produce more output).
// Unknown statuses fall through to the idle/gone path with output validation.
if (sessionStatus && isTerminalSessionStatus(sessionStatus.type)) {
await this.tryCompleteTask(task, `polling (terminal session status: ${sessionStatus.type})`)
continue
}
// Unknown non-idle status — not active, not terminal, not idle.
// Fall through to idle/gone completion path with output validation.
if (sessionStatus && sessionStatus.type !== "idle") {
log("[background-agent] Unknown session status, treating as potentially idle:", {
taskId: task.id,
sessionID,
sessionStatus: sessionStatus.type,
})
}
// Session is idle or no longer in status response (completed/disappeared)
const completionSource = sessionStatus?.type === "idle"
? "polling (idle status)"


@@ -0,0 +1,66 @@
import { describe, test, expect, mock } from "bun:test"
import { isActiveSessionStatus, isTerminalSessionStatus } from "./session-status-classifier"
const mockLog = mock()
mock.module("../../shared", () => ({ log: mockLog }))
describe("isActiveSessionStatus", () => {
describe("#given a known active session status", () => {
test('#when type is "busy" #then returns true', () => {
expect(isActiveSessionStatus("busy")).toBe(true)
})
test('#when type is "retry" #then returns true', () => {
expect(isActiveSessionStatus("retry")).toBe(true)
})
test('#when type is "running" #then returns true', () => {
expect(isActiveSessionStatus("running")).toBe(true)
})
})
describe("#given a known terminal session status", () => {
test('#when type is "idle" #then returns false', () => {
expect(isActiveSessionStatus("idle")).toBe(false)
})
test('#when type is "interrupted" #then returns false and does not log', () => {
mockLog.mockClear()
expect(isActiveSessionStatus("interrupted")).toBe(false)
expect(mockLog).not.toHaveBeenCalled()
})
})
describe("#given an unknown session status", () => {
test('#when type is an arbitrary unknown string #then returns false and logs warning', () => {
mockLog.mockClear()
expect(isActiveSessionStatus("some-unknown-status")).toBe(false)
expect(mockLog).toHaveBeenCalledWith(
"[background-agent] Unknown session status type encountered:",
"some-unknown-status",
)
})
test('#when type is empty string #then returns false', () => {
expect(isActiveSessionStatus("")).toBe(false)
})
})
})
describe("isTerminalSessionStatus", () => {
test('#when type is "interrupted" #then returns true', () => {
expect(isTerminalSessionStatus("interrupted")).toBe(true)
})
test('#when type is "idle" #then returns false (idle is handled separately)', () => {
expect(isTerminalSessionStatus("idle")).toBe(false)
})
test('#when type is "busy" #then returns false', () => {
expect(isTerminalSessionStatus("busy")).toBe(false)
})
test('#when type is an unknown string #then returns false', () => {
expect(isTerminalSessionStatus("some-unknown")).toBe(false)
})
})


@@ -0,0 +1,20 @@
import { log } from "../../shared"
const ACTIVE_SESSION_STATUSES = new Set(["busy", "retry", "running"])
const KNOWN_TERMINAL_STATUSES = new Set(["idle", "interrupted"])
export function isActiveSessionStatus(type: string): boolean {
if (ACTIVE_SESSION_STATUSES.has(type)) {
return true
}
if (!KNOWN_TERMINAL_STATUSES.has(type)) {
log("[background-agent] Unknown session status type encountered:", type)
}
return false
}
export function isTerminalSessionStatus(type: string): boolean {
return KNOWN_TERMINAL_STATUSES.has(type) && type !== "idle"
}
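The two predicates above partition every status string into three buckets, which is exactly the three-way branch the poller takes. A minimal standalone sketch (logging omitted; `classify` is a hypothetical helper added here only to make the decision table visible):

```typescript
// Sketch of the classifier's decision table. The Sets mirror the module
// above; classify() is illustrative, not part of the plugin.
const ACTIVE_SESSION_STATUSES = new Set(["busy", "retry", "running"])
const KNOWN_TERMINAL_STATUSES = new Set(["idle", "interrupted"])

function isActiveSessionStatus(type: string): boolean {
  return ACTIVE_SESSION_STATUSES.has(type)
}

function isTerminalSessionStatus(type: string): boolean {
  // "idle" is known but handled by a separate completion path.
  return KNOWN_TERMINAL_STATUSES.has(type) && type !== "idle"
}

// Mirrors pollRunningTasks: active → keep waiting, terminal non-idle →
// complete immediately, anything else → idle/gone completion path.
function classify(type: string): "active" | "terminal" | "idle-or-unknown" {
  if (isActiveSessionStatus(type)) return "active"
  if (isTerminalSessionStatus(type)) return "terminal"
  return "idle-or-unknown"
}
```

Note that unknown statuses deliberately land in the same bucket as "idle", so a new server-side status can never wedge a task in "running" forever.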


@@ -417,6 +417,56 @@ describe("checkAndInterruptStaleTasks", () => {
expect(task.status).toBe("cancelled")
expect(onTaskInterrupted).toHaveBeenCalledWith(task)
})
it('should NOT protect task when session has terminal non-idle status like "interrupted"', async () => {
//#given — lastUpdate is 5min old, session is "interrupted" (terminal, not active)
const task = createRunningTask({
startedAt: new Date(Date.now() - 300_000),
progress: {
toolCalls: 2,
lastUpdate: new Date(Date.now() - 300_000),
},
})
//#when — session status is "interrupted" (terminal)
await checkAndInterruptStaleTasks({
tasks: [task],
client: mockClient as never,
config: { staleTimeoutMs: 180_000 },
concurrencyManager: mockConcurrencyManager as never,
notifyParentSession: mockNotify,
sessionStatuses: { "ses-1": { type: "interrupted" } },
})
//#then — terminal statuses should not protect from stale timeout
expect(task.status).toBe("cancelled")
expect(task.error).toContain("Stale timeout")
})
it('should NOT protect task when session has unknown status type', async () => {
//#given — lastUpdate is 5min old, session has an unknown status
const task = createRunningTask({
startedAt: new Date(Date.now() - 300_000),
progress: {
toolCalls: 2,
lastUpdate: new Date(Date.now() - 300_000),
},
})
//#when — session has unknown status type
await checkAndInterruptStaleTasks({
tasks: [task],
client: mockClient as never,
config: { staleTimeoutMs: 180_000 },
concurrencyManager: mockConcurrencyManager as never,
notifyParentSession: mockNotify,
sessionStatuses: { "ses-1": { type: "some-weird-status" } },
})
//#then — unknown statuses should not protect from stale timeout
expect(task.status).toBe("cancelled")
expect(task.error).toContain("Stale timeout")
})
})
describe("pruneStaleTasksAndNotifications", () => {


@@ -14,6 +14,7 @@ import {
} from "./constants"
import { removeTaskToastTracking } from "./remove-task-toast-tracking"
import { isActiveSessionStatus } from "./session-status-classifier"
const TERMINAL_TASK_STATUSES = new Set<BackgroundTask["status"]>([
"completed",
"error",
@@ -120,7 +121,7 @@ export async function checkAndInterruptStaleTasks(args: {
if (!startedAt || !sessionID) continue
const sessionStatus = sessionStatuses?.[sessionID]?.type
-const sessionIsRunning = sessionStatus !== undefined && sessionStatus !== "idle"
const sessionIsRunning = sessionStatus !== undefined && isActiveSessionStatus(sessionStatus)
const runtime = now - startedAt.getTime()
if (!task.progress?.lastUpdate) {


@@ -10,16 +10,16 @@ export type BackgroundTaskStatus =
| "interrupt"
export interface ToolCallWindow {
-toolSignatures: string[]
-windowSize: number
-thresholdPercent: number
lastSignature: string
consecutiveCount: number
threshold: number
}
export interface TaskProgress {
toolCalls: number
lastTool?: string
toolCallWindow?: ToolCallWindow
-countedToolPartIDs?: string[]
countedToolPartIDs?: Set<string>
lastUpdate: Date
lastMessage?: string
lastMessageAt?: Date


@@ -70,7 +70,7 @@ function isTokenLimitError(text: string): boolean {
return false
}
const lower = text.toLowerCase()
-return TOKEN_LIMIT_KEYWORDS.some((kw) => lower.includes(kw.toLowerCase()))
return TOKEN_LIMIT_KEYWORDS.some((kw) => lower.includes(kw))
}
export function parseAnthropicTokenLimitError(err: unknown): ParsedTokenLimitError | null {
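Dropping the per-call `kw.toLowerCase()` is only safe if the keyword list is already lowercase at module init. The actual `TOKEN_LIMIT_KEYWORDS` values are not shown in this diff, so the keywords below are hypothetical; the point is the normalize-once pattern:

```typescript
// Hypothetical keyword list: normalize to lowercase once at module load so
// the hot path can skip per-keyword toLowerCase() calls.
const TOKEN_LIMIT_KEYWORDS = ["Prompt is too long", "maximum context length"]
  .map((kw) => kw.toLowerCase())

function isTokenLimitError(text: string): boolean {
  if (!text) return false
  const lower = text.toLowerCase()
  // Safe because every keyword is guaranteed lowercase already.
  return TOKEN_LIMIT_KEYWORDS.some((kw) => lower.includes(kw))
}
```

If a mixed-case keyword were ever added to the raw list without the `.map`, matching would silently break, which is why pinning the invariant at the definition site matters.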


@@ -18,9 +18,9 @@ function getLastAgentFromMessageDir(messageDir: string): string | null {
const files = readdirSync(messageDir)
.filter((fileName) => fileName.endsWith(".json"))
.sort()
-.reverse()
-for (const fileName of files) {
for (let i = files.length - 1; i >= 0; i--) {
const fileName = files[i]
try {
const content = readFileSync(join(messageDir, fileName), "utf-8")
const parsed = JSON.parse(content) as { agent?: unknown }


@@ -44,12 +44,6 @@ export interface ExecutorOptions {
agent?: string
}
-function filterDiscoveredCommandsByScope(
-commands: DiscoveredCommandInfo[],
-scope: DiscoveredCommandInfo["scope"],
-): DiscoveredCommandInfo[] {
-return commands.filter(command => command.scope === scope)
-}
async function discoverAllCommands(options?: ExecutorOptions): Promise<CommandInfo[]> {
const discoveredCommands = discoverCommandsSync(process.cwd(), {
@@ -60,14 +54,18 @@ async function discoverAllCommands(options?: ExecutorOptions): Promise<CommandIn
const skills = options?.skills ?? await discoverAllSkills()
const skillCommands = skills.map(skillToCommandInfo)
const scopeOrder: DiscoveredCommandInfo["scope"][] = ["project", "user", "opencode-project", "opencode", "builtin", "plugin"]
const grouped = new Map<string, DiscoveredCommandInfo[]>()
for (const cmd of discoveredCommands) {
const list = grouped.get(cmd.scope) ?? []
list.push(cmd)
grouped.set(cmd.scope, list)
}
const orderedCommands = scopeOrder.flatMap((scope) => grouped.get(scope) ?? [])
return [
...skillCommands,
-...filterDiscoveredCommandsByScope(discoveredCommands, "project"),
-...filterDiscoveredCommandsByScope(discoveredCommands, "user"),
-...filterDiscoveredCommandsByScope(discoveredCommands, "opencode-project"),
-...filterDiscoveredCommandsByScope(discoveredCommands, "opencode"),
-...filterDiscoveredCommandsByScope(discoveredCommands, "builtin"),
-...filterDiscoveredCommandsByScope(discoveredCommands, "plugin"),
...orderedCommands,
]
}
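The grouping refactor replaces six passes over the command list with one. A standalone demo with the command shape reduced to `{ name, scope }` for illustration (the real `DiscoveredCommandInfo` has more fields):

```typescript
// Single-pass Map grouping: one iteration over the input instead of one
// .filter() per scope, while preserving the fixed scope precedence order.
type Scope = "project" | "user" | "builtin"
interface Cmd { name: string; scope: Scope }

function orderByScope(commands: Cmd[], scopeOrder: Scope[]): Cmd[] {
  const grouped = new Map<Scope, Cmd[]>()
  for (const cmd of commands) {
    const list = grouped.get(cmd.scope) ?? []
    list.push(cmd)
    grouped.set(cmd.scope, list)
  }
  // flatMap emits groups in precedence order; relative order within a
  // group matches input order, same as the old per-scope filters.
  return scopeOrder.flatMap((scope) => grouped.get(scope) ?? [])
}
```

The output is identical to the filter-per-scope version because `Array.prototype.filter` and the Map grouping are both stable with respect to input order.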


@@ -79,8 +79,6 @@ export function createToolExecuteAfterHandler(ctx: PluginInput, config: PluginCo
return
}
-const claudeConfig = await loadClaudeHooksConfig()
-const extendedConfig = await loadPluginExtendedConfig()
const cachedInput = getToolInput(input.sessionID, input.tool, input.callID) || {}
@@ -96,6 +94,9 @@ export function createToolExecuteAfterHandler(ctx: PluginInput, config: PluginCo
return
}
const claudeConfig = await loadClaudeHooksConfig()
const extendedConfig = await loadPluginExtendedConfig()
const postClient: PostToolUseClient = {
session: {
messages: (opts) => ctx.client.session.messages(opts),


@@ -43,8 +43,6 @@ export function createToolExecuteBeforeHandler(ctx: PluginInput, config: PluginC
log("todowrite: parsed todos string to array", { sessionID: input.sessionID })
}
-const claudeConfig = await loadClaudeHooksConfig()
-const extendedConfig = await loadPluginExtendedConfig()
appendTranscriptEntry(input.sessionID, {
type: "tool_use",
@@ -59,6 +57,9 @@ export function createToolExecuteBeforeHandler(ctx: PluginInput, config: PluginC
return
}
const claudeConfig = await loadClaudeHooksConfig()
const extendedConfig = await loadPluginExtendedConfig()
const preCtx: PreToolUseContext = {
sessionId: input.sessionID,
toolName: input.tool,


@@ -3,6 +3,18 @@ import type { CommentCheckerConfig } from "../../config/schema"
import z from "zod"
const ApplyPatchMetadataSchema = z.object({
files: z.array(
z.object({
filePath: z.string(),
movePath: z.string().optional(),
before: z.string(),
after: z.string(),
type: z.string().optional(),
}),
),
})
import {
initializeCommentCheckerCli,
getCommentCheckerCliPathPromise,
@@ -104,17 +116,6 @@ export function createCommentCheckerHooks(config?: CommentCheckerConfig) {
return
}
-const ApplyPatchMetadataSchema = z.object({
-files: z.array(
-z.object({
-filePath: z.string(),
-movePath: z.string().optional(),
-before: z.string(),
-after: z.string(),
-type: z.string().optional(),
-}),
-),
-})
if (toolLower === "apply_patch") {
const parsed = ApplyPatchMetadataSchema.safeParse(output.metadata)


@@ -23,6 +23,10 @@ export async function handleDetectedCompletion(
const { sessionID, state, loopState, directory, apiTimeoutMs } = input
if (state.ultrawork && !state.verification_pending) {
if (state.verification_session_id) {
ctx.client.session.abort({ path: { id: state.verification_session_id } }).catch(() => {})
}
const verificationState = loopState.markVerificationPending(sessionID)
if (!verificationState) {
log(`[${HOOK_NAME}] Failed to transition ultrawork loop to verification`, {


@@ -10,6 +10,7 @@ describe("ulw-loop verification", () => {
const testDir = join(tmpdir(), `ulw-loop-verification-${Date.now()}`)
let promptCalls: Array<{ sessionID: string; text: string }>
let toastCalls: Array<{ title: string; message: string; variant: string }>
let abortCalls: Array<{ id: string }>
let parentTranscriptPath: string
let oracleTranscriptPath: string
@@ -25,6 +26,10 @@ describe("ulw-loop verification", () => {
return {}
},
messages: async () => ({ data: [] }),
abort: async (opts: { path: { id: string } }) => {
abortCalls.push({ id: opts.path.id })
return {}
},
},
tui: {
showToast: async (opts: { body: { title: string; message: string; variant: string } }) => {
@@ -40,6 +45,7 @@ describe("ulw-loop verification", () => {
beforeEach(() => {
promptCalls = []
toastCalls = []
abortCalls = []
parentTranscriptPath = join(testDir, "transcript-parent.jsonl")
oracleTranscriptPath = join(testDir, "transcript-oracle.jsonl")
@@ -385,4 +391,96 @@ describe("ulw-loop verification", () => {
expect(promptCalls).toHaveLength(2)
expect(promptCalls[1]?.text).toContain("Verification failed")
})
test("#given oracle verification fails #when loop restarts #then old oracle session is aborted", async () => {
const sessionMessages: Record<string, unknown[]> = {
"session-123": [{}, {}, {}],
}
const hook = createRalphLoopHook({
...createMockPluginInput(),
client: {
...createMockPluginInput().client,
session: {
...createMockPluginInput().client.session,
messages: async (opts: { path: { id: string } }) => ({
data: sessionMessages[opts.path.id] ?? [],
}),
},
},
} as Parameters<typeof createRalphLoopHook>[0], {
getTranscriptPath: (sessionID) => sessionID === "ses-oracle" ? oracleTranscriptPath : parentTranscriptPath,
})
hook.startLoop("session-123", "Build API", { ultrawork: true })
writeFileSync(
parentTranscriptPath,
`${JSON.stringify({ type: "tool_result", timestamp: new Date().toISOString(), tool_output: { output: "done <promise>DONE</promise>" } })}\n`,
)
await hook.event({ event: { type: "session.idle", properties: { sessionID: "session-123" } } })
writeState(testDir, {
...hook.getState()!,
verification_session_id: "ses-oracle",
})
writeFileSync(
oracleTranscriptPath,
`${JSON.stringify({ type: "tool_result", timestamp: new Date().toISOString(), tool_output: { output: "verification failed: missing tests" } })}\n`,
)
await hook.event({ event: { type: "session.idle", properties: { sessionID: "ses-oracle" } } })
expect(abortCalls).toHaveLength(1)
expect(abortCalls[0].id).toBe("ses-oracle")
})
test("#given ulw loop re-enters verification #when DONE detected again after failed verification #then previous verification session is aborted", async () => {
const sessionMessages: Record<string, unknown[]> = {
"session-123": [{}, {}, {}],
}
const hook = createRalphLoopHook({
...createMockPluginInput(),
client: {
...createMockPluginInput().client,
session: {
...createMockPluginInput().client.session,
messages: async (opts: { path: { id: string } }) => ({
data: sessionMessages[opts.path.id] ?? [],
}),
},
},
} as Parameters<typeof createRalphLoopHook>[0], {
getTranscriptPath: (sessionID) => sessionID === "ses-oracle" ? oracleTranscriptPath : parentTranscriptPath,
})
hook.startLoop("session-123", "Build API", { ultrawork: true })
writeFileSync(
parentTranscriptPath,
`${JSON.stringify({ type: "tool_result", timestamp: new Date().toISOString(), tool_output: { output: "done <promise>DONE</promise>" } })}\n`,
)
await hook.event({ event: { type: "session.idle", properties: { sessionID: "session-123" } } })
writeState(testDir, {
...hook.getState()!,
verification_session_id: "ses-oracle",
})
writeFileSync(
oracleTranscriptPath,
`${JSON.stringify({ type: "tool_result", timestamp: new Date().toISOString(), tool_output: { output: "failed" } })}\n`,
)
await hook.event({ event: { type: "session.idle", properties: { sessionID: "ses-oracle" } } })
abortCalls.length = 0
writeFileSync(
parentTranscriptPath,
`${JSON.stringify({ type: "tool_result", timestamp: new Date().toISOString(), tool_output: { output: "fixed it <promise>DONE</promise>" } })}\n`,
)
writeState(testDir, {
...hook.getState()!,
verification_session_id: "ses-oracle-old",
})
await hook.event({ event: { type: "session.idle", properties: { sessionID: "session-123" } } })
expect(abortCalls).toHaveLength(1)
expect(abortCalls[0].id).toBe("ses-oracle-old")
})
})


@@ -68,6 +68,10 @@ export async function handleFailedVerification(
return false
}
if (state.verification_session_id) {
ctx.client.session.abort({ path: { id: state.verification_session_id } }).catch(() => {})
}
const resumedState = loopState.restartAfterFailedVerification(
parentSessionID,
messageCountAtStart,


@@ -28,6 +28,8 @@ export function getErrorMessage(error: unknown): string {
}
}
const DEFAULT_RETRY_PATTERN = new RegExp(`\\b(${DEFAULT_CONFIG.retry_on_errors.join("|")})\\b`)
export function extractStatusCode(error: unknown, retryOnErrors?: number[]): number | undefined {
if (!error) return undefined
@@ -45,8 +47,9 @@ export function extractStatusCode(error: unknown, retryOnErrors?: number[]): num
return statusCode
}
-const codes = retryOnErrors ?? DEFAULT_CONFIG.retry_on_errors
-const pattern = new RegExp(`\\b(${codes.join("|")})\\b`)
const pattern = retryOnErrors
? new RegExp(`\\b(${retryOnErrors.join("|")})\\b`)
: DEFAULT_RETRY_PATTERN
const message = getErrorMessage(error)
const statusMatch = message.match(pattern)
if (statusMatch) {


@@ -32,8 +32,10 @@ const MULTILINGUAL_KEYWORDS = [
"fikir", "berfikir",
]
-const MULTILINGUAL_PATTERNS = MULTILINGUAL_KEYWORDS.map((kw) => new RegExp(kw, "i"))
-const THINK_PATTERNS = [...ENGLISH_PATTERNS, ...MULTILINGUAL_PATTERNS]
const COMBINED_THINK_PATTERN = new RegExp(
`\\b(?:ultrathink|think)\\b|${MULTILINGUAL_KEYWORDS.join("|")}`,
"i"
)
const CODE_BLOCK_PATTERN = /```[\s\S]*?```/g
const INLINE_CODE_PATTERN = /`[^`]+`/g
@@ -44,7 +46,7 @@ function removeCodeBlocks(text: string): string {
export function detectThinkKeyword(text: string): boolean {
const textWithoutCode = removeCodeBlocks(text)
-return THINK_PATTERNS.some((pattern) => pattern.test(textWithoutCode))
return COMBINED_THINK_PATTERN.test(textWithoutCode)
}
export function extractPromptText(


@@ -31,6 +31,10 @@ export function createTodoContinuationHandler(args: {
return async ({ event }: { event: { type: string; properties?: unknown } }): Promise<void> => {
const props = event.properties as Record<string, unknown> | undefined
if (event.type === "session.idle") {
console.error(`[TODO-DIAG] handler received session.idle event`, { sessionID: (props?.sessionID as string) })
}
if (event.type === "session.error") {
const sessionID = props?.sessionID as string | undefined
if (!sessionID) return


@@ -43,10 +43,12 @@ export async function handleSessionIdle(args: {
} = args
log(`[${HOOK_NAME}] session.idle`, { sessionID })
console.error(`[TODO-DIAG] session.idle fired for ${sessionID}`)
const state = sessionStateStore.getState(sessionID)
if (state.isRecovering) {
log(`[${HOOK_NAME}] Skipped: in recovery`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: isRecovering=true`)
return
}
@@ -54,6 +56,7 @@ export async function handleSessionIdle(args: {
const timeSinceAbort = Date.now() - state.abortDetectedAt
if (timeSinceAbort < ABORT_WINDOW_MS) {
log(`[${HOOK_NAME}] Skipped: abort detected via event ${timeSinceAbort}ms ago`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: abort detected ${timeSinceAbort}ms ago`)
state.abortDetectedAt = undefined
return
}
@@ -66,6 +69,7 @@ export async function handleSessionIdle(args: {
if (hasRunningBgTasks) {
log(`[${HOOK_NAME}] Skipped: background tasks running`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: background tasks running`, backgroundManager?.getTasksByParentSession(sessionID).filter((t: {status:string}) => t.status === 'running').map((t: {id:string, status:string}) => t.id))
return
}
@@ -77,10 +81,12 @@ export async function handleSessionIdle(args: {
const messages = normalizeSDKResponse(messagesResp, [] as Array<{ info?: MessageInfo }>)
if (isLastAssistantMessageAborted(messages)) {
log(`[${HOOK_NAME}] Skipped: last assistant message was aborted (API fallback)`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: last assistant message aborted`)
return
}
if (hasUnansweredQuestion(messages)) {
log(`[${HOOK_NAME}] Skipped: pending question awaiting user response`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: hasUnansweredQuestion=true`)
return
}
} catch (error) {
@@ -93,24 +99,30 @@ export async function handleSessionIdle(args: {
todos = normalizeSDKResponse(response, [] as Todo[], { preferResponseOnMissingData: true })
} catch (error) {
log(`[${HOOK_NAME}] Todo fetch failed`, { sessionID, error: String(error) })
console.error(`[TODO-DIAG] BLOCKED: todo fetch failed`, String(error))
return
}
if (!todos || todos.length === 0) {
sessionStateStore.resetContinuationProgress(sessionID)
log(`[${HOOK_NAME}] No todos`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: no todos`)
return
}
const incompleteCount = getIncompleteCount(todos)
if (incompleteCount === 0) {
sessionStateStore.resetContinuationProgress(sessionID)
log(`[${HOOK_NAME}] All todos complete`, { sessionID, total: todos.length })
console.error(`[TODO-DIAG] BLOCKED: all todos complete (${todos.length})`)
return
}
if (state.inFlight) {
log(`[${HOOK_NAME}] Skipped: injection in flight`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: inFlight=true`)
return
}
@@ -124,22 +136,16 @@ export async function handleSessionIdle(args: {
}
if (state.consecutiveFailures >= MAX_CONSECUTIVE_FAILURES) {
-log(`[${HOOK_NAME}] Skipped: max consecutive failures reached`, {
-sessionID,
-consecutiveFailures: state.consecutiveFailures,
-maxConsecutiveFailures: MAX_CONSECUTIVE_FAILURES,
-})
log(`[${HOOK_NAME}] Skipped: max consecutive failures reached`, { sessionID, consecutiveFailures: state.consecutiveFailures })
console.error(`[TODO-DIAG] BLOCKED: consecutiveFailures=${state.consecutiveFailures} >= ${MAX_CONSECUTIVE_FAILURES}`)
return
}
const effectiveCooldown = const effectiveCooldown =
CONTINUATION_COOLDOWN_MS * Math.pow(2, Math.min(state.consecutiveFailures, 5)) CONTINUATION_COOLDOWN_MS * Math.pow(2, Math.min(state.consecutiveFailures, 5))
if (state.lastInjectedAt && Date.now() - state.lastInjectedAt < effectiveCooldown) { if (state.lastInjectedAt && Date.now() - state.lastInjectedAt < effectiveCooldown) {
log(`[${HOOK_NAME}] Skipped: cooldown active`, { log(`[${HOOK_NAME}] Skipped: cooldown active`, { sessionID, effectiveCooldown, consecutiveFailures: state.consecutiveFailures })
sessionID, console.error(`[TODO-DIAG] BLOCKED: cooldown active (${effectiveCooldown}ms, failures=${state.consecutiveFailures})`)
effectiveCooldown,
consecutiveFailures: state.consecutiveFailures,
})
return return
} }
@@ -165,10 +171,12 @@ export async function handleSessionIdle(args: {
const resolvedAgentName = resolvedInfo?.agent const resolvedAgentName = resolvedInfo?.agent
if (resolvedAgentName && skipAgents.some(s => getAgentConfigKey(s) === getAgentConfigKey(resolvedAgentName))) { if (resolvedAgentName && skipAgents.some(s => getAgentConfigKey(s) === getAgentConfigKey(resolvedAgentName))) {
log(`[${HOOK_NAME}] Skipped: agent in skipAgents list`, { sessionID, agent: resolvedAgentName }) log(`[${HOOK_NAME}] Skipped: agent in skipAgents list`, { sessionID, agent: resolvedAgentName })
console.error(`[TODO-DIAG] BLOCKED: agent '${resolvedAgentName}' in skipAgents`)
return return
} }
if ((compactionGuardActive || encounteredCompaction) && !resolvedInfo?.agent) { if ((compactionGuardActive || encounteredCompaction) && !resolvedInfo?.agent) {
log(`[${HOOK_NAME}] Skipped: compaction occurred but no agent info resolved`, { sessionID }) log(`[${HOOK_NAME}] Skipped: compaction occurred but no agent info resolved`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: compaction guard + no agent`)
return return
} }
if (state.recentCompactionAt && resolvedInfo?.agent) { if (state.recentCompactionAt && resolvedInfo?.agent) {
@@ -177,18 +185,22 @@ export async function handleSessionIdle(args: {
if (isContinuationStopped?.(sessionID)) { if (isContinuationStopped?.(sessionID)) {
log(`[${HOOK_NAME}] Skipped: continuation stopped for session`, { sessionID }) log(`[${HOOK_NAME}] Skipped: continuation stopped for session`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: isContinuationStopped=true`)
return return
} }
if (shouldSkipContinuation?.(sessionID)) { if (shouldSkipContinuation?.(sessionID)) {
log(`[${HOOK_NAME}] Skipped: another continuation hook already injected`, { sessionID }) log(`[${HOOK_NAME}] Skipped: another continuation hook already injected`, { sessionID })
console.error(`[TODO-DIAG] BLOCKED: shouldSkipContinuation=true (gptPermissionContinuation recently injected)`)
return return
} }
const progressUpdate = sessionStateStore.trackContinuationProgress(sessionID, incompleteCount, todos) const progressUpdate = sessionStateStore.trackContinuationProgress(sessionID, incompleteCount, todos)
if (shouldStopForStagnation({ sessionID, incompleteCount, progressUpdate })) { if (shouldStopForStagnation({ sessionID, incompleteCount, progressUpdate })) {
console.error(`[TODO-DIAG] BLOCKED: stagnation detected (count=${progressUpdate.stagnationCount})`)
return return
} }
console.error(`[TODO-DIAG] PASSED all gates! Starting countdown (${incompleteCount}/${todos.length} incomplete)`)
startCountdown({ startCountdown({
ctx, ctx,
sessionID, sessionID,

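The cooldown gate in the hunk above backs off exponentially: the base cooldown doubles per consecutive failure, capped at a factor of 2^5 = 32. A minimal sketch of that computation in isolation (the base value here is a placeholder; the plugin's actual `CONTINUATION_COOLDOWN_MS` is defined elsewhere):

```typescript
// Hypothetical base cooldown; the plugin's real constant may differ.
const CONTINUATION_COOLDOWN_MS = 60_000

// Mirrors the gate above: exponential backoff, capped at 32x the base.
function effectiveCooldown(consecutiveFailures: number): number {
  return CONTINUATION_COOLDOWN_MS * Math.pow(2, Math.min(consecutiveFailures, 5))
}
```

With a 60s base, three consecutive failures push the cooldown to 8 minutes; anything beyond five failures stays pinned at 32 minutes.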
View File

@@ -32,10 +32,7 @@ export function createPluginInterface(args: {
   return {
     tool: tools,
-    "chat.params": async (input: unknown, output: unknown) => {
-      const handler = createChatParamsHandler({ anthropicEffort: hooks.anthropicEffort })
-      await handler(input, output)
-    },
+    "chat.params": createChatParamsHandler({ anthropicEffort: hooks.anthropicEffort }),
     "chat.headers": createChatHeadersHandler({ ctx }),

View File

@@ -1,45 +1,30 @@
 /// <reference types="bun-types" />
-import { beforeAll, beforeEach, afterEach, describe, expect, mock, test } from "bun:test"
+import { beforeEach, afterEach, describe, expect, test } from "bun:test"
 import { existsSync, mkdirSync, mkdtempSync, readFileSync, rmSync, writeFileSync } from "node:fs"
 import { tmpdir } from "node:os"
 import { join } from "node:path"
-import * as dataPath from "./data-path"
+import {
+  createConnectedProvidersCacheStore,
+} from "./connected-providers-cache"
 
+let fakeUserCacheRoot = ""
 let testCacheDir = ""
-let moduleImportCounter = 0
-const getOmoOpenCodeCacheDirMock = mock(() => testCacheDir)
-let updateConnectedProvidersCache: typeof import("./connected-providers-cache").updateConnectedProvidersCache
-let readProviderModelsCache: typeof import("./connected-providers-cache").readProviderModelsCache
-
-async function prepareConnectedProvidersCacheTestModule(): Promise<void> {
-  testCacheDir = mkdtempSync(join(tmpdir(), "connected-providers-cache-test-"))
-  getOmoOpenCodeCacheDirMock.mockClear()
-  mock.module("./data-path", () => ({
-    getOmoOpenCodeCacheDir: getOmoOpenCodeCacheDirMock,
-  }))
-  moduleImportCounter += 1
-  ;({ updateConnectedProvidersCache, readProviderModelsCache } = await import(`./connected-providers-cache?test=${moduleImportCounter}`))
-}
+let testCacheStore: ReturnType<typeof createConnectedProvidersCacheStore>
 
 describe("updateConnectedProvidersCache", () => {
-  beforeAll(() => {
-    mock.restore()
-  })
-
-  beforeEach(async () => {
-    mock.restore()
-    await prepareConnectedProvidersCacheTestModule()
+  beforeEach(() => {
+    fakeUserCacheRoot = mkdtempSync(join(tmpdir(), "connected-providers-user-cache-"))
+    testCacheDir = join(fakeUserCacheRoot, "oh-my-opencode")
+    testCacheStore = createConnectedProvidersCacheStore(() => testCacheDir)
   })
 
   afterEach(() => {
-    mock.restore()
-    if (existsSync(testCacheDir)) {
-      rmSync(testCacheDir, { recursive: true, force: true })
+    if (existsSync(fakeUserCacheRoot)) {
+      rmSync(fakeUserCacheRoot, { recursive: true, force: true })
     }
+    fakeUserCacheRoot = ""
     testCacheDir = ""
   })
@@ -76,10 +61,10 @@ describe("updateConnectedProvidersCache", () => {
     }
     //#when
-    await updateConnectedProvidersCache(mockClient)
+    await testCacheStore.updateConnectedProvidersCache(mockClient)
     //#then
-    const cache = readProviderModelsCache()
+    const cache = testCacheStore.readProviderModelsCache()
     expect(cache).not.toBeNull()
     expect(cache!.connected).toEqual(["openai", "anthropic"])
     expect(cache!.models).toEqual({
@@ -109,10 +94,10 @@ describe("updateConnectedProvidersCache", () => {
     }
     //#when
-    await updateConnectedProvidersCache(mockClient)
+    await testCacheStore.updateConnectedProvidersCache(mockClient)
     //#then
-    const cache = readProviderModelsCache()
+    const cache = testCacheStore.readProviderModelsCache()
     expect(cache).not.toBeNull()
     expect(cache!.models).toEqual({})
   })
@@ -130,10 +115,10 @@ describe("updateConnectedProvidersCache", () => {
     }
     //#when
-    await updateConnectedProvidersCache(mockClient)
+    await testCacheStore.updateConnectedProvidersCache(mockClient)
     //#then
-    const cache = readProviderModelsCache()
+    const cache = testCacheStore.readProviderModelsCache()
     expect(cache).not.toBeNull()
     expect(cache!.models).toEqual({})
   })
@@ -143,25 +128,44 @@ describe("updateConnectedProvidersCache", () => {
     const mockClient = {}
     //#when
-    await updateConnectedProvidersCache(mockClient)
+    await testCacheStore.updateConnectedProvidersCache(mockClient)
     //#then
-    const cache = readProviderModelsCache()
+    const cache = testCacheStore.readProviderModelsCache()
     expect(cache).toBeNull()
   })
 
-  test("does not remove the user's real cache directory during test setup", async () => {
+  test("does not remove unrelated files in the cache directory", async () => {
     //#given
-    const realCacheDir = join(dataPath.getCacheDir(), "oh-my-opencode")
+    const realCacheDir = join(fakeUserCacheRoot, "oh-my-opencode")
     const sentinelPath = join(realCacheDir, "connected-providers-cache.test-sentinel.json")
     mkdirSync(realCacheDir, { recursive: true })
     writeFileSync(sentinelPath, JSON.stringify({ keep: true }))
+    const mockClient = {
+      provider: {
+        list: async () => ({
+          data: {
+            connected: ["openai"],
+            all: [
+              {
+                id: "openai",
+                models: {
+                  "gpt-5.4": { id: "gpt-5.4" },
+                },
+              },
+            ],
+          },
+        }),
+      },
+    }
     try {
       //#when
-      await prepareConnectedProvidersCacheTestModule()
+      await testCacheStore.updateConnectedProvidersCache(mockClient)
       //#then
+      expect(testCacheStore.readConnectedProvidersCache()).toEqual(["openai"])
       expect(existsSync(sentinelPath)).toBe(true)
       expect(readFileSync(sentinelPath, "utf-8")).toBe(JSON.stringify({ keep: true }))
     } finally {

View File

@@ -25,172 +25,190 @@ interface ProviderModelsCache {
   updatedAt: string
 }
 
-function getCacheFilePath(filename: string): string {
-  return join(dataPath.getOmoOpenCodeCacheDir(), filename)
-}
-
-function ensureCacheDir(): void {
-  const cacheDir = dataPath.getOmoOpenCodeCacheDir()
-  if (!existsSync(cacheDir)) {
-    mkdirSync(cacheDir, { recursive: true })
-  }
-}
-
-/**
- * Read the connected providers cache.
- * Returns the list of connected provider IDs, or null if cache doesn't exist.
- */
-export function readConnectedProvidersCache(): string[] | null {
-  const cacheFile = getCacheFilePath(CONNECTED_PROVIDERS_CACHE_FILE)
-  if (!existsSync(cacheFile)) {
-    log("[connected-providers-cache] Cache file not found", { cacheFile })
-    return null
-  }
-  try {
-    const content = readFileSync(cacheFile, "utf-8")
-    const data = JSON.parse(content) as ConnectedProvidersCache
-    log("[connected-providers-cache] Read cache", { count: data.connected.length, updatedAt: data.updatedAt })
-    return data.connected
-  } catch (err) {
-    log("[connected-providers-cache] Error reading cache", { error: String(err) })
-    return null
-  }
-}
-
-/**
- * Check if connected providers cache exists.
- */
-export function hasConnectedProvidersCache(): boolean {
-  const cacheFile = getCacheFilePath(CONNECTED_PROVIDERS_CACHE_FILE)
-  return existsSync(cacheFile)
-}
-
-/**
- * Write the connected providers cache.
- */
-function writeConnectedProvidersCache(connected: string[]): void {
-  ensureCacheDir()
-  const cacheFile = getCacheFilePath(CONNECTED_PROVIDERS_CACHE_FILE)
-  const data: ConnectedProvidersCache = {
-    connected,
-    updatedAt: new Date().toISOString(),
-  }
-  try {
-    writeFileSync(cacheFile, JSON.stringify(data, null, 2))
-    log("[connected-providers-cache] Cache written", { count: connected.length })
-  } catch (err) {
-    log("[connected-providers-cache] Error writing cache", { error: String(err) })
-  }
-}
-
-/**
- * Read the provider-models cache.
- * Returns the cache data, or null if cache doesn't exist.
- */
-export function readProviderModelsCache(): ProviderModelsCache | null {
-  const cacheFile = getCacheFilePath(PROVIDER_MODELS_CACHE_FILE)
-  if (!existsSync(cacheFile)) {
-    log("[connected-providers-cache] Provider-models cache file not found", { cacheFile })
-    return null
-  }
-  try {
-    const content = readFileSync(cacheFile, "utf-8")
-    const data = JSON.parse(content) as ProviderModelsCache
-    log("[connected-providers-cache] Read provider-models cache", {
-      providerCount: Object.keys(data.models).length,
-      updatedAt: data.updatedAt
-    })
-    return data
-  } catch (err) {
-    log("[connected-providers-cache] Error reading provider-models cache", { error: String(err) })
-    return null
-  }
-}
-
-/**
- * Check if provider-models cache exists.
- */
-export function hasProviderModelsCache(): boolean {
-  const cacheFile = getCacheFilePath(PROVIDER_MODELS_CACHE_FILE)
-  return existsSync(cacheFile)
-}
-
-/**
- * Write the provider-models cache.
- */
-export function writeProviderModelsCache(data: { models: Record<string, string[]>; connected: string[] }): void {
-  ensureCacheDir()
-  const cacheFile = getCacheFilePath(PROVIDER_MODELS_CACHE_FILE)
-  const cacheData: ProviderModelsCache = {
-    ...data,
-    updatedAt: new Date().toISOString(),
-  }
-  try {
-    writeFileSync(cacheFile, JSON.stringify(cacheData, null, 2))
-    log("[connected-providers-cache] Provider-models cache written", {
-      providerCount: Object.keys(data.models).length
-    })
-  } catch (err) {
-    log("[connected-providers-cache] Error writing provider-models cache", { error: String(err) })
-  }
-}
-
-/**
- * Update the connected providers cache by fetching from the client.
- * Also updates the provider-models cache with model lists per provider.
- */
-export async function updateConnectedProvidersCache(client: {
-  provider?: {
-    list?: () => Promise<{
-      data?: {
-        connected?: string[]
-        all?: Array<{ id: string; models?: Record<string, unknown> }>
-      }
-    }>
-  }
-}): Promise<void> {
-  if (!client?.provider?.list) {
-    log("[connected-providers-cache] client.provider.list not available")
-    return
-  }
-  try {
-    const result = await client.provider.list()
-    const connected = result.data?.connected ?? []
-    log("[connected-providers-cache] Fetched connected providers", { count: connected.length, providers: connected })
-    writeConnectedProvidersCache(connected)
-    const modelsByProvider: Record<string, string[]> = {}
-    const allProviders = result.data?.all ?? []
-    for (const provider of allProviders) {
-      if (provider.models) {
-        const modelIds = Object.keys(provider.models)
-        if (modelIds.length > 0) {
-          modelsByProvider[provider.id] = modelIds
-        }
-      }
-    }
-    log("[connected-providers-cache] Extracted models from provider list", {
-      providerCount: Object.keys(modelsByProvider).length,
-      totalModels: Object.values(modelsByProvider).reduce((sum, ids) => sum + ids.length, 0),
-    })
-    writeProviderModelsCache({
-      models: modelsByProvider,
-      connected,
-    })
-  } catch (err) {
-    log("[connected-providers-cache] Error updating cache", { error: String(err) })
-  }
-}
+export function createConnectedProvidersCacheStore(
+  getCacheDir: () => string = dataPath.getOmoOpenCodeCacheDir
+) {
+  function getCacheFilePath(filename: string): string {
+    return join(getCacheDir(), filename)
+  }
+
+  let memConnected: string[] | null | undefined
+  let memProviderModels: ProviderModelsCache | null | undefined
+
+  function ensureCacheDir(): void {
+    const cacheDir = getCacheDir()
+    if (!existsSync(cacheDir)) {
+      mkdirSync(cacheDir, { recursive: true })
+    }
+  }
+
+  function readConnectedProvidersCache(): string[] | null {
+    if (memConnected !== undefined) return memConnected
+    const cacheFile = getCacheFilePath(CONNECTED_PROVIDERS_CACHE_FILE)
+    if (!existsSync(cacheFile)) {
+      log("[connected-providers-cache] Cache file not found", { cacheFile })
+      memConnected = null
+      return null
+    }
+    try {
+      const content = readFileSync(cacheFile, "utf-8")
+      const data = JSON.parse(content) as ConnectedProvidersCache
+      log("[connected-providers-cache] Read cache", { count: data.connected.length, updatedAt: data.updatedAt })
+      memConnected = data.connected
+      return data.connected
+    } catch (err) {
+      log("[connected-providers-cache] Error reading cache", { error: String(err) })
+      memConnected = null
+      return null
+    }
+  }
+
+  function hasConnectedProvidersCache(): boolean {
+    const cacheFile = getCacheFilePath(CONNECTED_PROVIDERS_CACHE_FILE)
+    return existsSync(cacheFile)
+  }
+
+  function writeConnectedProvidersCache(connected: string[]): void {
+    ensureCacheDir()
+    const cacheFile = getCacheFilePath(CONNECTED_PROVIDERS_CACHE_FILE)
+    const data: ConnectedProvidersCache = {
+      connected,
+      updatedAt: new Date().toISOString(),
+    }
+    try {
+      writeFileSync(cacheFile, JSON.stringify(data, null, 2))
+      memConnected = connected
+      log("[connected-providers-cache] Cache written", { count: connected.length })
+    } catch (err) {
+      log("[connected-providers-cache] Error writing cache", { error: String(err) })
+    }
+  }
+
+  function readProviderModelsCache(): ProviderModelsCache | null {
+    if (memProviderModels !== undefined) return memProviderModels
+    const cacheFile = getCacheFilePath(PROVIDER_MODELS_CACHE_FILE)
+    if (!existsSync(cacheFile)) {
+      log("[connected-providers-cache] Provider-models cache file not found", { cacheFile })
+      memProviderModels = null
+      return null
+    }
+    try {
+      const content = readFileSync(cacheFile, "utf-8")
+      const data = JSON.parse(content) as ProviderModelsCache
+      log("[connected-providers-cache] Read provider-models cache", {
+        providerCount: Object.keys(data.models).length,
+        updatedAt: data.updatedAt,
+      })
+      memProviderModels = data
+      return data
+    } catch (err) {
+      log("[connected-providers-cache] Error reading provider-models cache", { error: String(err) })
+      memProviderModels = null
+      return null
+    }
+  }
+
+  function hasProviderModelsCache(): boolean {
+    const cacheFile = getCacheFilePath(PROVIDER_MODELS_CACHE_FILE)
+    return existsSync(cacheFile)
+  }
+
+  function writeProviderModelsCache(data: { models: Record<string, string[]>; connected: string[] }): void {
+    ensureCacheDir()
+    const cacheFile = getCacheFilePath(PROVIDER_MODELS_CACHE_FILE)
+    const cacheData: ProviderModelsCache = {
+      ...data,
+      updatedAt: new Date().toISOString(),
+    }
+    try {
+      writeFileSync(cacheFile, JSON.stringify(cacheData, null, 2))
+      memProviderModels = cacheData
+      log("[connected-providers-cache] Provider-models cache written", {
+        providerCount: Object.keys(data.models).length,
+      })
+    } catch (err) {
+      log("[connected-providers-cache] Error writing provider-models cache", { error: String(err) })
+    }
+  }
+
+  async function updateConnectedProvidersCache(client: {
+    provider?: {
+      list?: () => Promise<{
+        data?: {
+          connected?: string[]
+          all?: Array<{ id: string; models?: Record<string, unknown> }>
+        }
+      }>
+    }
+  }): Promise<void> {
+    if (!client?.provider?.list) {
+      log("[connected-providers-cache] client.provider.list not available")
+      return
+    }
+    try {
+      const result = await client.provider.list()
+      const connected = result.data?.connected ?? []
+      log("[connected-providers-cache] Fetched connected providers", {
+        count: connected.length,
+        providers: connected,
+      })
+      writeConnectedProvidersCache(connected)
+      const modelsByProvider: Record<string, string[]> = {}
+      const allProviders = result.data?.all ?? []
+      for (const provider of allProviders) {
+        if (provider.models) {
+          const modelIds = Object.keys(provider.models)
+          if (modelIds.length > 0) {
+            modelsByProvider[provider.id] = modelIds
+          }
+        }
+      }
+      log("[connected-providers-cache] Extracted models from provider list", {
+        providerCount: Object.keys(modelsByProvider).length,
+        totalModels: Object.values(modelsByProvider).reduce((sum, ids) => sum + ids.length, 0),
+      })
+      writeProviderModelsCache({
+        models: modelsByProvider,
+        connected,
+      })
+    } catch (err) {
+      log("[connected-providers-cache] Error updating cache", { error: String(err) })
+    }
+  }
+
+  return {
+    readConnectedProvidersCache,
+    hasConnectedProvidersCache,
+    readProviderModelsCache,
+    hasProviderModelsCache,
+    writeProviderModelsCache,
+    updateConnectedProvidersCache,
+  }
+}
+
+const defaultConnectedProvidersCacheStore = createConnectedProvidersCacheStore(
+  () => dataPath.getOmoOpenCodeCacheDir()
+)
+
+export const {
+  readConnectedProvidersCache,
+  hasConnectedProvidersCache,
+  readProviderModelsCache,
+  hasProviderModelsCache,
+  writeProviderModelsCache,
+  updateConnectedProvidersCache,
+} = defaultConnectedProvidersCacheStore

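The store above memoizes disk reads with a two-value sentinel: `undefined` means "not read yet", while `null` means "read, and the file was missing or unparsable". A condensed sketch of that pattern (names here are illustrative, not from the module):

```typescript
// `undefined` = not read yet; `null` = read once and found missing.
// After the first call, the underlying reader never runs again.
function makeCachedReader<T>(read: () => T | null): () => T | null {
  let memo: T | null | undefined
  return () => {
    if (memo !== undefined) return memo
    memo = read()
    return memo
  }
}
```

This is why writes in the store also update `memConnected` / `memProviderModels`: without that, a stale negative result would be served forever within the process.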
View File

@@ -74,7 +74,7 @@ export async function resolveFileReferencesInText(
   let resolved = text
   for (const [pattern, replacement] of replacements.entries()) {
-    resolved = resolved.split(pattern).join(replacement)
+    resolved = resolved.replaceAll(pattern, replacement)
   }
   if (findFileReferences(resolved).length > 0 && depth + 1 < maxDepth) {

View File

@@ -1,16 +1,42 @@
// Shared logging utility for the plugin
import * as fs from "fs" import * as fs from "fs"
import * as os from "os" import * as os from "os"
import * as path from "path" import * as path from "path"
const logFile = path.join(os.tmpdir(), "oh-my-opencode.log") const logFile = path.join(os.tmpdir(), "oh-my-opencode.log")
let buffer: string[] = []
let flushTimer: ReturnType<typeof setTimeout> | null = null
const FLUSH_INTERVAL_MS = 500
const BUFFER_SIZE_LIMIT = 50
function flush(): void {
if (buffer.length === 0) return
const data = buffer.join("")
buffer = []
try {
fs.appendFileSync(logFile, data)
} catch {
}
}
function scheduleFlush(): void {
if (flushTimer) return
flushTimer = setTimeout(() => {
flushTimer = null
flush()
}, FLUSH_INTERVAL_MS)
}
export function log(message: string, data?: unknown): void { export function log(message: string, data?: unknown): void {
try { try {
const timestamp = new Date().toISOString() const timestamp = new Date().toISOString()
const logEntry = `[${timestamp}] ${message} ${data ? JSON.stringify(data) : ""}\n` const logEntry = `[${timestamp}] ${message} ${data ? JSON.stringify(data) : ""}\n`
fs.appendFileSync(logFile, logEntry) buffer.push(logEntry)
if (buffer.length >= BUFFER_SIZE_LIMIT) {
flush()
} else {
scheduleFlush()
}
} catch { } catch {
} }
} }

View File

@@ -9,6 +9,8 @@ function escapeRegexExceptAsterisk(str: string): string {
   return str.replace(/[.+?^${}()|[\]\\]/g, "\\$&")
 }
 
+const regexCache = new Map<string, RegExp>()
+
 export function matchesToolMatcher(toolName: string, matcher: string): boolean {
   if (!matcher) {
     return true
@@ -17,8 +19,12 @@ export function matchesToolMatcher(toolName: string, matcher: string): boolean {
   return patterns.some((p) => {
     if (p.includes("*")) {
       // First escape regex special chars (except *), then convert * to .*
-      const escaped = escapeRegexExceptAsterisk(p)
-      const regex = new RegExp(`^${escaped.replace(/\*/g, ".*")}$`, "i")
+      let regex = regexCache.get(p)
+      if (!regex) {
+        const escaped = escapeRegexExceptAsterisk(p)
+        regex = new RegExp(`^${escaped.replace(/\*/g, ".*")}$`, "i")
+        regexCache.set(p, regex)
+      }
       return regex.test(toolName)
     }
     return p.toLowerCase() === toolName.toLowerCase()

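The wildcard branch above compiles each matcher pattern to a regex at most once per process. A condensed, self-contained sketch of the same escape-then-widen-then-cache logic (the example pattern names are hypothetical):

```typescript
const cache = new Map<string, RegExp>()

// Escape regex metacharacters except `*`, widen `*` to `.*`, anchor, and memoize.
function wildcardToRegex(pattern: string): RegExp {
  let regex = cache.get(pattern)
  if (!regex) {
    const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&")
    regex = new RegExp(`^${escaped.replace(/\*/g, ".*")}$`, "i")
    cache.set(pattern, regex)
  }
  return regex
}
```

Escaping before widening matters: a literal `.` in a pattern must match only `.`, while only `*` becomes the `.*` wildcard.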
View File

@@ -1,4 +1,4 @@
import { describe, it, expect, beforeAll, afterAll } from "bun:test" import { afterEach, beforeEach, describe, expect, it, spyOn } from "bun:test"
import { import {
isPortAvailable, isPortAvailable,
findAvailablePort, findAvailablePort,
@@ -6,96 +6,283 @@ import {
DEFAULT_SERVER_PORT, DEFAULT_SERVER_PORT,
} from "./port-utils" } from "./port-utils"
const HOSTNAME = "127.0.0.1"
const REAL_PORT_SEARCH_WINDOW = 200
function supportsRealSocketBinding(): boolean {
try {
const server = Bun.serve({
port: 0,
hostname: HOSTNAME,
fetch: () => new Response("probe"),
})
server.stop(true)
return true
} catch {
return false
}
}
const canBindRealSockets = supportsRealSocketBinding()
describe("port-utils", () => { describe("port-utils", () => {
describe("isPortAvailable", () => { if (canBindRealSockets) {
it("#given unused port #when checking availability #then returns true", async () => { function startRealBlocker(port: number = 0) {
const port = 59999 return Bun.serve({
const result = await isPortAvailable(port)
expect(result).toBe(true)
})
it("#given port in use #when checking availability #then returns false", async () => {
const port = 59998
const blocker = Bun.serve({
port, port,
hostname: "127.0.0.1", hostname: HOSTNAME,
fetch: () => new Response("blocked"), fetch: () => new Response("blocked"),
}) })
}
try { async function findContiguousAvailableStart(length: number): Promise<number> {
const result = await isPortAvailable(port) const probe = startRealBlocker()
expect(result).toBe(false) const seedPort = probe.port
} finally { probe.stop(true)
blocker.stop(true)
for (let candidate = seedPort; candidate < seedPort + REAL_PORT_SEARCH_WINDOW; candidate++) {
const checks = await Promise.all(
Array.from({ length }, async (_, offset) => isPortAvailable(candidate + offset, HOSTNAME))
)
if (checks.every(Boolean)) {
return candidate
}
} }
})
})
describe("findAvailablePort", () => { throw new Error(`Could not find ${length} contiguous available ports`)
it("#given start port available #when finding port #then returns start port", async () => { }
const startPort = 59997
const result = await findAvailablePort(startPort)
expect(result).toBe(startPort)
})
it("#given start port blocked #when finding port #then returns next available", async () => { describe("with real sockets", () => {
const startPort = 59996 describe("isPortAvailable", () => {
const blocker = Bun.serve({ it("#given unused port #when checking availability #then returns true", async () => {
port: startPort, const blocker = startRealBlocker()
hostname: "127.0.0.1", const port = blocker.port
fetch: () => new Response("blocked"), blocker.stop(true)
const result = await isPortAvailable(port)
expect(result).toBe(true)
})
it("#given port in use #when checking availability #then returns false", async () => {
const blocker = startRealBlocker()
const port = blocker.port
try {
const result = await isPortAvailable(port)
expect(result).toBe(false)
} finally {
blocker.stop(true)
}
})
}) })
try { describe("findAvailablePort", () => {
const result = await findAvailablePort(startPort) it("#given start port available #when finding port #then returns start port", async () => {
expect(result).toBe(startPort + 1) const startPort = await findContiguousAvailableStart(1)
} finally { const result = await findAvailablePort(startPort)
blocker.stop(true) expect(result).toBe(startPort)
} })
})
it("#given multiple ports blocked #when finding port #then skips all blocked", async () => { it("#given start port blocked #when finding port #then returns next available", async () => {
const startPort = 59993 const startPort = await findContiguousAvailableStart(2)
const blockers = [ const blocker = startRealBlocker(startPort)
Bun.serve({ port: startPort, hostname: "127.0.0.1", fetch: () => new Response() }),
Bun.serve({ port: startPort + 1, hostname: "127.0.0.1", fetch: () => new Response() }),
Bun.serve({ port: startPort + 2, hostname: "127.0.0.1", fetch: () => new Response() }),
]
try { try {
const result = await findAvailablePort(startPort) const result = await findAvailablePort(startPort)
expect(result).toBe(startPort + 3) expect(result).toBe(startPort + 1)
} finally { } finally {
blockers.forEach((b) => b.stop(true)) blocker.stop(true)
} }
}) })
})
describe("getAvailableServerPort", () => { it("#given multiple ports blocked #when finding port #then skips all blocked", async () => {
it("#given preferred port available #when getting port #then returns preferred with wasAutoSelected=false", async () => { const startPort = await findContiguousAvailableStart(4)
const preferredPort = 59990 const blockers = [
const result = await getAvailableServerPort(preferredPort) startRealBlocker(startPort),
expect(result.port).toBe(preferredPort) startRealBlocker(startPort + 1),
expect(result.wasAutoSelected).toBe(false) startRealBlocker(startPort + 2),
}) ]
it("#given preferred port blocked #when getting port #then returns alternative with wasAutoSelected=true", async () => { try {
const preferredPort = 59989 const result = await findAvailablePort(startPort)
const blocker = Bun.serve({ expect(result).toBe(startPort + 3)
port: preferredPort, } finally {
hostname: "127.0.0.1", blockers.forEach((blocker) => blocker.stop(true))
fetch: () => new Response("blocked"), }
})
}) })
try { describe("getAvailableServerPort", () => {
const result = await getAvailableServerPort(preferredPort) it("#given preferred port available #when getting port #then returns preferred with wasAutoSelected=false", async () => {
expect(result.port).toBeGreaterThan(preferredPort) const preferredPort = await findContiguousAvailableStart(1)
expect(result.wasAutoSelected).toBe(true) const result = await getAvailableServerPort(preferredPort)
} finally { expect(result.port).toBe(preferredPort)
blocker.stop(true) expect(result.wasAutoSelected).toBe(false)
} })
it("#given preferred port blocked #when getting port #then returns alternative with wasAutoSelected=true", async () => {
const preferredPort = await findContiguousAvailableStart(2)
const blocker = startRealBlocker(preferredPort)
try {
const result = await getAvailableServerPort(preferredPort)
expect(result.port).toBe(preferredPort + 1)
expect(result.wasAutoSelected).toBe(true)
} finally {
blocker.stop(true)
}
})
})
}) })
}) } else {
const blockedSockets = new Set<string>()
let serveSpy: ReturnType<typeof spyOn>
function getSocketKey(port: number, hostname: string): string {
return `${hostname}:${port}`
}
beforeEach(() => {
blockedSockets.clear()
serveSpy = spyOn(Bun, "serve").mockImplementation(({ port, hostname }) => {
if (typeof port !== "number") {
throw new Error("Test expected numeric port")
}
const resolvedHostname = typeof hostname === "string" ? hostname : HOSTNAME
const socketKey = getSocketKey(port, resolvedHostname)
if (blockedSockets.has(socketKey)) {
const error = new Error(`Failed to start server. Is port ${port} in use?`) as Error & {
code?: string
syscall?: string
errno?: number
address?: string
port?: number
}
error.code = "EADDRINUSE"
error.syscall = "listen"
error.errno = 0
error.address = resolvedHostname
error.port = port
throw error
}
blockedSockets.add(socketKey)
return {
stop: (_force?: boolean) => {
blockedSockets.delete(socketKey)
},
} as { stop: (force?: boolean) => void }
})
})
afterEach(() => {
expect(blockedSockets.size).toBe(0)
serveSpy.mockRestore()
blockedSockets.clear()
})
describe("with mocked sockets fallback", () => {
describe("isPortAvailable", () => {
it("#given unused port #when checking availability #then returns true", async () => {
const port = 59999
const result = await isPortAvailable(port)
expect(result).toBe(true)
expect(blockedSockets.size).toBe(0)
})
it("#given port in use #when checking availability #then returns false", async () => {
const port = 59998
const blocker = Bun.serve({
port,
hostname: HOSTNAME,
fetch: () => new Response("blocked"),
})
try {
const result = await isPortAvailable(port)
expect(result).toBe(false)
} finally {
blocker.stop(true)
}
})
it("#given custom hostname #when checking availability #then passes hostname through to Bun.serve", async () => {
const hostname = "192.0.2.10"
await isPortAvailable(59995, hostname)
expect(serveSpy.mock.calls[0]?.[0]?.hostname).toBe(hostname)
})
})
describe("findAvailablePort", () => {
it("#given start port available #when finding port #then returns start port", async () => {
const startPort = 59997
const result = await findAvailablePort(startPort)
expect(result).toBe(startPort)
})
it("#given start port blocked #when finding port #then returns next available", async () => {
const startPort = 59996
const blocker = Bun.serve({
port: startPort,
hostname: HOSTNAME,
fetch: () => new Response("blocked"),
})
try {
const result = await findAvailablePort(startPort)
expect(result).toBe(startPort + 1)
} finally {
blocker.stop(true)
}
})
it("#given multiple ports blocked #when finding port #then skips all blocked", async () => {
const startPort = 59993
const blockers = [
Bun.serve({ port: startPort, hostname: HOSTNAME, fetch: () => new Response() }),
Bun.serve({ port: startPort + 1, hostname: HOSTNAME, fetch: () => new Response() }),
Bun.serve({ port: startPort + 2, hostname: HOSTNAME, fetch: () => new Response() }),
]
try {
const result = await findAvailablePort(startPort)
expect(result).toBe(startPort + 3)
} finally {
blockers.forEach((blocker) => blocker.stop(true))
}
})
})
describe("getAvailableServerPort", () => {
it("#given preferred port available #when getting port #then returns preferred with wasAutoSelected=false", async () => {
const preferredPort = 59990
const result = await getAvailableServerPort(preferredPort)
expect(result.port).toBe(preferredPort)
expect(result.wasAutoSelected).toBe(false)
})
it("#given preferred port blocked #when getting port #then returns alternative with wasAutoSelected=true", async () => {
const preferredPort = 59989
const blocker = Bun.serve({
port: preferredPort,
hostname: HOSTNAME,
fetch: () => new Response("blocked"),
})
try {
const result = await getAvailableServerPort(preferredPort)
expect(result.port).toBe(preferredPort + 1)
expect(result.wasAutoSelected).toBe(true)
} finally {
blocker.stop(true)
}
})
})
})
}
describe("DEFAULT_SERVER_PORT", () => {
it("#given constant #when accessed #then returns 4096", () => {

View File

@@ -11,6 +11,14 @@ import {
} from "./edit-operation-primitives"
import { validateLineRefs } from "./validation"
+ function arraysEqual(a: string[], b: string[]): boolean {
+   if (a.length !== b.length) return false
+   for (let i = 0; i < a.length; i++) {
+     if (a[i] !== b[i]) return false
+   }
+   return true
+ }
export interface HashlineApplyReport {
content: string
noopEdits: number
@@ -51,7 +59,7 @@ export function applyHashlineEditsWithReport(content: string, edits: HashlineEdi
const next = edit.end
? applyReplaceLines(lines, edit.pos, edit.end, edit.lines, { skipValidation: true })
: applySetLine(lines, edit.pos, edit.lines, { skipValidation: true })
- if (next.join("\n") === lines.join("\n")) {
+ if (arraysEqual(next, lines)) {
noopEdits += 1
break
}
@@ -62,7 +70,7 @@ export function applyHashlineEditsWithReport(content: string, edits: HashlineEdi
const next = edit.pos
? applyInsertAfter(lines, edit.pos, edit.lines, { skipValidation: true })
: applyAppend(lines, edit.lines)
- if (next.join("\n") === lines.join("\n")) {
+ if (arraysEqual(next, lines)) {
noopEdits += 1
break
}
@@ -73,7 +81,7 @@ export function applyHashlineEditsWithReport(content: string, edits: HashlineEdi
const next = edit.pos
? applyInsertBefore(lines, edit.pos, edit.lines, { skipValidation: true })
: applyPrepend(lines, edit.lines)
- if (next.join("\n") === lines.join("\n")) {
+ if (arraysEqual(next, lines)) {
noopEdits += 1
break
}
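The `arraysEqual` swap in the hunk above replaces two full `join("\n")` allocations per edit with an early-exit element compare. A minimal standalone sketch mirroring the diff:

```typescript
// Element-wise string-array equality: exits on the first mismatch and
// never materializes the two joined strings the old comparison built.
function arraysEqual(a: string[], b: string[]): boolean {
  if (a.length !== b.length) return false
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false
  }
  return true
}

console.log(arraysEqual(["a", "b"], ["a", "b"])) // → true
console.log(arraysEqual(["a", "b"], ["a", "x"])) // → false
```

Besides the allocation savings, the element-wise check is also stricter: `["a\nb"]` and `["a", "b"]` join to the same string but are correctly reported as different arrays.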

View File

@@ -86,15 +86,17 @@ export async function* streamHashLinesFromUtf8(
pending += text
const chunksToYield: string[] = []
+ let lastIdx = 0
while (true) {
- const idx = pending.indexOf("\n")
+ const idx = pending.indexOf("\n", lastIdx)
if (idx === -1) break
- const line = pending.slice(0, idx)
+ const line = pending.slice(lastIdx, idx)
- pending = pending.slice(idx + 1)
+ lastIdx = idx + 1
endedWithNewline = true
chunksToYield.push(...pushLine(line))
}
+ pending = pending.slice(lastIdx)
if (pending.length > 0) endedWithNewline = false
return chunksToYield
}
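The cursor-based extraction above scans the buffer once and slices only the unterminated remainder, instead of re-slicing `pending` on every newline. A standalone sketch (the `extractLines` helper name is illustrative, not from the source):

```typescript
// Track a moving cursor through the buffered text; a single trailing
// slice keeps whatever follows the last newline.
function extractLines(pending: string): { lines: string[]; rest: string } {
  const lines: string[] = []
  let lastIdx = 0
  while (true) {
    const idx = pending.indexOf("\n", lastIdx)
    if (idx === -1) break
    lines.push(pending.slice(lastIdx, idx))
    lastIdx = idx + 1
  }
  return { lines, rest: pending.slice(lastIdx) }
}

console.log(extractLines("a\nb\npartial"))
// lines: ["a", "b"], rest: "partial"
```

The old form produced one intermediate string per newline via `pending = pending.slice(idx + 1)`; the cursor version allocates only the extracted lines plus one remainder.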

View File

@@ -4,7 +4,7 @@ export function generateHashlineDiff(oldContent: string, newContent: string, fil
const oldLines = oldContent.split("\n")
const newLines = newContent.split("\n")
- let diff = `--- ${filePath}\n+++ ${filePath}\n`
+ const parts: string[] = [`--- ${filePath}\n+++ ${filePath}\n`]
const maxLines = Math.max(oldLines.length, newLines.length)
for (let i = 0; i < maxLines; i += 1) {
@@ -14,18 +14,18 @@ export function generateHashlineDiff(oldContent: string, newContent: string, fil
const hash = computeLineHash(lineNum, newLine)
if (i >= oldLines.length) {
- diff += `+ ${lineNum}#${hash}|${newLine}\n`
+ parts.push(`+ ${lineNum}#${hash}|${newLine}\n`)
continue
}
if (i >= newLines.length) {
- diff += `- ${lineNum}# |${oldLine}\n`
+ parts.push(`- ${lineNum}# |${oldLine}\n`)
continue
}
if (oldLine !== newLine) {
- diff += `- ${lineNum}# |${oldLine}\n`
+ parts.push(`- ${lineNum}# |${oldLine}\n`)
- diff += `+ ${lineNum}#${hash}|${newLine}\n`
+ parts.push(`+ ${lineNum}#${hash}|${newLine}\n`)
}
}
- return diff
+ return parts.join("")
}
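The change above switches from growing a string with `+=` inside the loop to collecting fragments in an array and joining once; how much this saves depends on the engine's string representation, but it avoids worst-case repeated reallocation. A simplified sketch of the same shape (hypothetical `buildDiffBody` helper, hash column omitted):

```typescript
// Accumulate diff fragments in `parts` and join once at the end,
// rather than appending to a growing string per changed line.
function buildDiffBody(oldLines: string[], newLines: string[]): string {
  const parts: string[] = []
  const maxLines = Math.max(oldLines.length, newLines.length)
  for (let i = 0; i < maxLines; i += 1) {
    const oldLine = oldLines[i]
    const newLine = newLines[i]
    if (oldLine === newLine) continue
    if (oldLine !== undefined) parts.push(`- ${i + 1}|${oldLine}\n`)
    if (newLine !== undefined) parts.push(`+ ${i + 1}|${newLine}\n`)
  }
  return parts.join("")
}

console.log(buildDiffBody(["a", "b"], ["a", "c"]))
// → "- 2|b\n+ 2|c\n"
```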

View File

@@ -45,6 +45,8 @@ Returns summary format: id, subject, status, owner, blockedBy (not full descript
}
}
+ const taskMap = new Map(allTasks.map((t) => [t.id, t]))
// Filter out completed and deleted tasks
const activeTasks = allTasks.filter(
(task) => task.status !== "completed" && task.status !== "deleted"
@@ -54,7 +56,7 @@ Returns summary format: id, subject, status, owner, blockedBy (not full descript
const summaries: TaskSummary[] = activeTasks.map((task) => {
// Filter blockedBy to only include unresolved (non-completed) blockers
const unresolvedBlockers = task.blockedBy.filter((blockerId) => {
- const blockerTask = allTasks.find((t) => t.id === blockerId)
+ const blockerTask = taskMap.get(blockerId)
// Include if blocker doesn't exist (missing) or if it's not completed
return !blockerTask || blockerTask.status !== "completed"
})
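The `taskMap` change above turns each blocker lookup from a linear `allTasks.find` into an O(1) `Map.get`, so resolving blockers across all tasks drops from quadratic to linear in the task count. A standalone sketch (the `Task` shape is assumed from the diff, not the full source type):

```typescript
// Build the id → task index once, then resolve every blocker via Map.get.
interface Task { id: string; status: string; blockedBy: string[] }

function unresolvedBlockers(task: Task, allTasks: Task[]): string[] {
  const taskMap = new Map<string, Task>(allTasks.map((t): [string, Task] => [t.id, t]))
  return task.blockedBy.filter((blockerId) => {
    const blocker = taskMap.get(blockerId)
    // Missing and not-yet-completed blockers both count as unresolved.
    return !blocker || blocker.status !== "completed"
  })
}

const tasks: Task[] = [
  { id: "t1", status: "completed", blockedBy: [] },
  { id: "t2", status: "in_progress", blockedBy: [] },
  { id: "t3", status: "pending", blockedBy: ["t1", "t2", "missing"] },
]
console.log(unresolvedBlockers(tasks[2], tasks))
// unresolved: ["t2", "missing"]
```

In the real code the Map is hoisted outside the per-task `map` callback, so it is built once per listing rather than once per task.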