Compare commits
14 Commits
- e84fce3121
- a8f0300ba6
- d1e5bd63c1
- ed43cd4c85
- 8d66d5641a
- d53bcfbced
- c1ee4c8650
- ead4a1bcf5
- 07ec7be792
- 7e68690c70
- 22b4f465ab
- a39f183c31
- f7c5c0be35
- 022a351c32
23
bun-test.d.ts
vendored
Normal file
@@ -0,0 +1,23 @@
declare module "bun:test" {
  export function describe(name: string, fn: () => void): void
  export function it(name: string, fn: () => void | Promise<void>): void
  export function beforeEach(fn: () => void | Promise<void>): void
  export function afterEach(fn: () => void | Promise<void>): void
  export function beforeAll(fn: () => void | Promise<void>): void
  export function afterAll(fn: () => void | Promise<void>): void
  export function mock<T extends (...args: never[]) => unknown>(fn: T): T

  interface Matchers {
    toBe(expected: unknown): void
    toEqual(expected: unknown): void
    toContain(expected: unknown): void
    toMatch(expected: RegExp | string): void
    toHaveLength(expected: number): void
    toBeGreaterThan(expected: number): void
    toThrow(expected?: RegExp | string): void
    toStartWith(expected: string): void
    not: Matchers
  }

  export function expect(received: unknown): Matchers
}
@@ -1,10 +1,164 @@
# Agent-Model Matching Guide

> **For agents and users**: How to pick the right model for each agent. Read this before customizing model settings.
> **For agents and users**: Why each agent needs a specific model — and how to customize without breaking things.

## Example Configuration
## The Core Insight: Models Are Developers

Here's a practical example configuration showing agent-model assignments:
Think of AI models as developers on a team. Each has a different brain, different personality, different strengths. **A model isn't just "smarter" or "dumber." It thinks differently.** Give the same instruction to Claude and GPT, and they'll interpret it in fundamentally different ways.

This isn't a bug. It's the foundation of the entire system.

Oh My OpenCode assigns each agent a model that matches its *working style* — like building a team where each person is in the role that fits their personality.

### Sisyphus: The Sociable Lead

Sisyphus is the developer who knows everyone, goes everywhere, and gets things done through communication and coordination. Talks to other agents, understands context across the whole codebase, delegates work intelligently, and codes well too. But deep, purely technical problems? He'll struggle a bit.

**This is why Sisyphus uses Claude / Kimi / GLM.** These models excel at:

- Following complex, multi-step instructions (Sisyphus's prompt is ~1,100 lines)
- Maintaining conversation flow across many tool calls
- Understanding nuanced delegation and orchestration patterns
- Producing well-structured, communicative output

Using Sisyphus with GPT would be like taking your best project manager — the one who coordinates everyone, runs standups, and keeps the whole team aligned — and sticking them in a room alone to debug a race condition. Wrong fit. No GPT prompt exists for Sisyphus, and for good reason.

### Hephaestus: The Deep Specialist

Hephaestus is the developer who stays in their room coding all day. Doesn't talk much. Might seem socially awkward. But give them a hard technical problem and they'll emerge three hours later with a solution nobody else could have found.

**This is why Hephaestus uses GPT-5.3 Codex.** Codex is built for exactly this:

- Deep, autonomous exploration without hand-holding
- Multi-file reasoning across complex codebases
- Principle-driven execution (give a goal, not a recipe)
- Working independently for extended periods

Using Hephaestus with GLM or Kimi would be like assigning your most communicative, sociable developer to sit alone and do nothing but deep technical work. They'd get it done eventually, but they wouldn't shine — you'd be wasting exactly the skills that make them valuable.

### The Takeaway

Every agent's prompt is tuned to match its model's personality. **When you change the model, you change the brain — and the same instructions get understood completely differently.** Model matching isn't about "better" or "worse." It's about fit.

---

## How Claude and GPT Think Differently

This matters for understanding why some agents support both model families while others don't.

**Claude** responds to **mechanics-driven** prompts — detailed checklists, templates, step-by-step procedures. More rules = more compliance. You can write a 1,100-line prompt with nested workflows and Claude will follow every step.

**GPT** (especially 5.2+) responds to **principle-driven** prompts — concise principles, XML structure, explicit decision criteria. More rules = more contradiction surface = more drift. GPT works best when you state the goal and let it figure out the mechanics.

Real example: Prometheus's Claude prompt is ~1,100 lines across 7 files. The GPT prompt achieves the same behavior with 3 principles in ~121 lines. Same outcome, completely different approach.

Agents that support both families (Prometheus, Atlas) auto-detect your model at runtime and switch prompts via `isGptModel()`. You don't have to think about it.

---

## Agent Profiles

### Communicators → Claude / Kimi / GLM

These agents have Claude-optimized prompts — long, detailed, mechanics-driven. They need models that reliably follow complex, multi-layered instructions.

| Agent | Role | Fallback Chain | Notes |
|-------|------|----------------|-------|
| **Sisyphus** | Main orchestrator | Claude Opus → Kimi K2.5 → GLM 5 | **No GPT prompt.** Claude-family only. |
| **Metis** | Plan gap analyzer | Claude Opus → Kimi K2.5 → GPT-5.2 → Gemini 3 Pro | Claude preferred, GPT acceptable fallback. |

### Dual-Prompt Agents → Claude preferred, GPT supported

These agents ship separate prompts for Claude and GPT families. They auto-detect your model and switch at runtime.

| Agent | Role | Fallback Chain | Notes |
|-------|------|----------------|-------|
| **Prometheus** | Strategic planner | Claude Opus → GPT-5.2 → Kimi K2.5 → Gemini 3 Pro | Interview-mode planning. GPT prompt is compact and principle-driven. |
| **Atlas** | Todo orchestrator | Kimi K2.5 → Claude Sonnet → GPT-5.2 | Kimi is the sweet spot — Claude-like but cheaper. |

### Deep Specialists → GPT

These agents are built for GPT's principle-driven style. Their prompts assume autonomous, goal-oriented execution. Don't override to Claude.

| Agent | Role | Fallback Chain | Notes |
|-------|------|----------------|-------|
| **Hephaestus** | Autonomous deep worker | GPT-5.3 Codex only | No fallback. Requires GPT access. The craftsman. |
| **Oracle** | Architecture consultant | GPT-5.2 → Gemini 3 Pro → Claude Opus | Read-only high-IQ consultation. |
| **Momus** | Ruthless reviewer | GPT-5.2 → Claude Opus → Gemini 3 Pro | Verification and plan review. |

### Utility Runners → Speed over Intelligence

These agents do grep, search, and retrieval. They intentionally use the fastest, cheapest models available. **Don't "upgrade" them to Opus** — that's hiring a senior engineer to file paperwork.

| Agent | Role | Fallback Chain | Notes |
|-------|------|----------------|-------|
| **Explore** | Fast codebase grep | Grok Code Fast → MiniMax → Haiku → GPT-5-Nano | Speed is everything. Fire 10 in parallel. |
| **Librarian** | Docs/code search | Gemini Flash → MiniMax → GLM | Doc retrieval doesn't need deep reasoning. |
| **Multimodal Looker** | Vision/screenshots | Kimi K2.5 → Gemini Flash → GPT-5.2 → GLM-4.6v | Kimi excels at multimodal understanding. |

---

## Model Families

### Claude Family

Communicative, instruction-following, structured output. Best for agents that need to follow complex multi-step prompts.

| Model | Strengths |
|-------|-----------|
| **Claude Opus 4.6** | Best overall. Highest compliance with complex prompts. Default for Sisyphus. |
| **Claude Sonnet 4.6** | Faster, cheaper. Good balance for everyday tasks. |
| **Claude Haiku 4.5** | Fast and cheap. Good for quick tasks and utility work. |
| **Kimi K2.5** | Behaves very similarly to Claude. Great all-rounder at lower cost. Default for Atlas. |
| **GLM 5** | Claude-like behavior. Solid for orchestration tasks. |

### GPT Family

Principle-driven, explicit reasoning, deep technical capability. Best for agents that work autonomously on complex problems.

| Model | Strengths |
|-------|-----------|
| **GPT-5.3 Codex** | Deep coding powerhouse. Autonomous exploration. Required for Hephaestus. |
| **GPT-5.2** | High intelligence, strategic reasoning. Default for Oracle and Momus. |
| **GPT-5-Nano** | Ultra-cheap, fast. Good for simple utility tasks. |

### Other Models

| Model | Strengths |
|-------|-----------|
| **Gemini 3 Pro** | Excels at visual/frontend tasks. Different reasoning style. Default for `visual-engineering` and `artistry`. |
| **Gemini 3 Flash** | Fast. Good for doc search and light tasks. |
| **Grok Code Fast 1** | Blazing fast code grep. Default for Explore agent. |
| **MiniMax M2.5** | Fast and smart. Good for utility tasks and search/retrieval. |

### About Free-Tier Fallbacks

You may see model names like `kimi-k2.5-free`, `minimax-m2.5-free`, or `big-pickle` (GLM 4.6) in the source code or logs. These are free-tier versions of the same model families, served through the OpenCode Zen provider. They exist as lower-priority entries in fallback chains.

You don't need to configure them. The system includes them so it degrades gracefully when you don't have every paid subscription. If you have the paid version, the paid version is always preferred.

---

## Task Categories

When agents delegate work, they don't pick a model name — they pick a **category**. The category maps to the right model automatically.

| Category | When Used | Fallback Chain |
|----------|-----------|----------------|
| `visual-engineering` | Frontend, UI, CSS, design | Gemini 3 Pro → GLM 5 → Claude Opus |
| `ultrabrain` | Maximum reasoning needed | GPT-5.3 Codex → Gemini 3 Pro → Claude Opus |
| `deep` | Deep coding, complex logic | GPT-5.3 Codex → Claude Opus → Gemini 3 Pro |
| `artistry` | Creative, novel approaches | Gemini 3 Pro → Claude Opus → GPT-5.2 |
| `quick` | Simple, fast tasks | Claude Haiku → Gemini Flash → GPT-5-Nano |
| `unspecified-high` | General complex work | Claude Opus → GPT-5.2 → Gemini 3 Pro |
| `unspecified-low` | General standard work | Claude Sonnet → GPT-5.3 Codex → Gemini Flash |
| `writing` | Text, docs, prose | Gemini Flash → Claude Sonnet |

See the [Orchestration System Guide](./orchestration.md) for how agents dispatch tasks to categories.
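The category-to-chain mapping can be sketched as a lookup that walks a fallback chain until it finds a reachable model. The function shape and the short model identifiers below are assumptions for illustration (chains abbreviated from the table above), not the actual dispatch code.

```typescript
// Hypothetical sketch of category dispatch: each category owns a fallback chain,
// and the first model reachable through the user's providers wins.
const CATEGORY_CHAINS: Record<string, string[]> = {
  // Chains abbreviated from the table above; identifiers are assumed.
  quick: ["claude-haiku", "gemini-flash", "gpt-5-nano"],
  deep: ["gpt-5.3-codex", "claude-opus", "gemini-3-pro"],
}

function resolveCategory(category: string, available: Set<string>): string | undefined {
  // Walk the chain in priority order; undefined means nothing was reachable.
  return CATEGORY_CHAINS[category]?.find((model) => available.has(model))
}
```

This is why a delegating agent only names a category: the same request degrades gracefully depending on which providers the user has authenticated.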

---

## Customization

### Example Configuration

```jsonc
{
@@ -29,19 +183,10 @@ Here's a practical example configuration showing agent-model assignments:
  },

  "categories": {
    // quick — trivial tasks
    "quick": { "model": "opencode/gpt-5-nano" },

    // unspecified-low — moderate tasks
    "unspecified-low": { "model": "kimi-for-coding/k2p5" },

    // unspecified-high — complex work
    "unspecified-high": { "model": "anthropic/claude-sonnet-4-6", "variant": "max" },

    // visual-engineering — Gemini dominates visual tasks
    "visual-engineering": { "model": "google/gemini-3-pro", "variant": "high" },

    // writing — docs/prose
    "writing": { "model": "kimi-for-coding/k2p5" }
  },
@@ -53,183 +198,27 @@ Here's a practical example configuration showing agent-model assignments:
}
```

Run `opencode models` to see all available models on your system, and `opencode auth login` to authenticate with providers.
## Model Families: Know Your Options

Not all models behave the same way. Understanding which models are "similar" helps you make safe substitutions.

### Claude-like Models (instruction-following, structured output)

These models respond similarly to Claude and work well with oh-my-opencode's Claude-optimized prompts:

| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **Claude Opus 4.6** | anthropic, github-copilot, opencode | Best overall. Default for Sisyphus. |
| **Claude Sonnet 4.6** | anthropic, github-copilot, opencode | Faster, cheaper. Good balance. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast and cheap. Good for quick tasks. |
| **Kimi K2.5** | kimi-for-coding | Behaves very similarly to Claude. Great all-rounder. Default for Atlas. |
| **Kimi K2.5 Free** | opencode | Free-tier Kimi. Rate-limited but functional. |
| **GLM 5** | zai-coding-plan, opencode | Claude-like behavior. Good for broad tasks. |
| **Big Pickle (GLM 4.6)** | opencode | Free-tier GLM. Decent fallback. |

### GPT Models (explicit reasoning, principle-driven)

GPT models need differently structured prompts. Some agents auto-detect GPT and switch prompts:

| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **GPT-5.3-codex** | openai, github-copilot, opencode | Deep coding powerhouse. Required for Hephaestus. |
| **GPT-5.2** | openai, github-copilot, opencode | High intelligence. Default for Oracle. |
| **GPT-5-Nano** | opencode | Ultra-cheap, fast. Good for simple utility tasks. |

### Different-Behavior Models

These models have unique characteristics — don't assume they'll behave like Claude or GPT:

| Model | Provider(s) | Notes |
|-------|-------------|-------|
| **Gemini 3 Pro** | google, github-copilot, opencode | Excels at visual/frontend tasks. Different reasoning style. |
| **Gemini 3 Flash** | google, github-copilot, opencode | Fast, good for doc search and light tasks. |
| **MiniMax M2.5** | venice | Fast and smart. Good for utility tasks. |
| **MiniMax M2.5 Free** | opencode | Free-tier MiniMax. Fast for search/retrieval. |

### Speed-Focused Models

| Model | Provider(s) | Speed | Notes |
|-------|-------------|-------|-------|
| **Grok Code Fast 1** | github-copilot, venice | Very fast | Optimized for code grep/search. Default for Explore. |
| **Claude Haiku 4.5** | anthropic, opencode | Fast | Good balance of speed and intelligence. |
| **MiniMax M2.5 (Free)** | opencode, venice | Fast | Smart for its speed class. |
| **GPT-5.3-codex-spark** | openai | Extremely fast | Blazing fast but compacts so aggressively that oh-my-opencode's context management doesn't work well with it. Not recommended for omo agents. |

---
## Agent Roles and Recommended Models

### Claude-Optimized Agents

These agents have prompts tuned for Claude-family models. Use Claude > Kimi K2.5 > GLM 5 in that priority order.

| Agent | Role | Default Chain | What It Does |
|-------|------|---------------|--------------|
| **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. **Never use GPT — no GPT prompt exists.** |
| **Metis** | Plan review | Opus (max) → Kimi K2.5 → GPT-5.2 → Gemini 3 Pro | Reviews Prometheus plans for gaps. |

### Dual-Prompt Agents (Claude + GPT auto-switch)

These agents detect your model family at runtime and switch to the appropriate prompt. If you have GPT access, these agents can use it effectively.

Priority: **Claude > GPT > Claude-like models**

| Agent | Role | Default Chain | GPT Prompt? |
|-------|------|---------------|-------------|
| **Prometheus** | Strategic planner | Opus (max) → **GPT-5.2 (high)** → Kimi K2.5 → Gemini 3 Pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
| **Atlas** | Todo orchestrator | **Kimi K2.5** → Sonnet → GPT-5.2 | Yes — GPT-optimized todo management |

### GPT-Native Agents

These agents are built for GPT. Don't override to Claude.

| Agent | Role | Default Chain | Notes |
|-------|------|---------------|-------|
| **Hephaestus** | Deep autonomous worker | GPT-5.3-codex (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
| **Oracle** | Architecture/debugging | GPT-5.2 (high) → Gemini 3 Pro → Opus | High-IQ strategic backup. GPT preferred. |
| **Momus** | High-accuracy reviewer | GPT-5.2 (medium) → Opus → Gemini 3 Pro | Verification agent. GPT preferred. |

### Utility Agents (Speed > Intelligence)

These agents do search, grep, and retrieval. They intentionally use fast, cheap models. **Don't "upgrade" them to Opus — it wastes tokens on simple tasks.**

| Agent | Role | Default Chain | Design Rationale |
|-------|------|---------------|------------------|
| **Explore** | Fast codebase grep | MiniMax M2.5 Free → Grok Code Fast → MiniMax M2.5 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
| **Librarian** | Docs/code search | MiniMax M2.5 Free → Gemini Flash → Big Pickle | Entirely free-tier. Doc retrieval doesn't need deep reasoning. |
| **Multimodal Looker** | Vision/screenshots | Kimi K2.5 → Kimi Free → Gemini Flash → GPT-5.2 → GLM-4.6v | Kimi excels at multimodal understanding. |

---

## Task Categories

Categories control which model is used for `background_task` and `delegate_task`. See the [Orchestration System Guide](./orchestration.md) for how agents dispatch tasks to categories.

| Category | When Used | Recommended Models | Notes |
|----------|-----------|-------------------|-------|
| `visual-engineering` | Frontend, UI, CSS, design | Gemini 3 Pro (high) → GLM 5 → Opus → Kimi K2.5 | Gemini dominates visual tasks |
| `ultrabrain` | Maximum reasoning needed | GPT-5.3-codex (xhigh) → Gemini 3 Pro → Opus | Highest intelligence available |
| `deep` | Deep coding, complex logic | GPT-5.3-codex (medium) → Opus → Gemini 3 Pro | Requires GPT availability |
| `artistry` | Creative, novel approaches | Gemini 3 Pro (high) → Opus → GPT-5.2 | Requires Gemini availability |
| `quick` | Simple, fast tasks | Haiku → Gemini Flash → GPT-5-Nano | Cheapest and fastest |
| `unspecified-high` | General complex work | Opus (max) → GPT-5.2 (high) → Gemini 3 Pro | Default when no category fits |
| `unspecified-low` | General standard work | Sonnet → GPT-5.3-codex (medium) → Gemini Flash | Everyday tasks |
| `writing` | Text, docs, prose | Kimi K2.5 → Gemini Flash → Sonnet | Kimi produces best prose |

---
## Why Different Models Need Different Prompts

Claude and GPT models have fundamentally different instruction-following behaviors:

- **Claude models** respond well to **mechanics-driven** prompts — detailed checklists, templates, step-by-step procedures. More rules = more compliance.
- **GPT models** (especially 5.2+) respond better to **principle-driven** prompts — concise principles, XML-tagged structure, explicit decision criteria. More rules = more contradiction surface = more drift.

Key insight from Codex Plan Mode analysis:

- Codex Plan Mode achieves in ~121 lines, with 3 principles, the same results that Prometheus's Claude prompt needs ~1,100 lines across 7 files to produce
- The core concept is **"Decision Complete"** — a plan must leave ZERO decisions to the implementer
- GPT follows this literally when stated as a principle; Claude needs enforcement mechanisms

This is why Prometheus and Atlas ship separate prompts per model family — they auto-detect and switch at runtime via `isGptModel()`.

---

## Customization Guide

### How to Customize

Override in `oh-my-opencode.jsonc`:

```jsonc
{
  "agents": {
    "sisyphus": { "model": "kimi-for-coding/k2p5" },
    "prometheus": { "model": "openai/gpt-5.2" } // Auto-switches to GPT prompt
  }
}
```
### Selection Priority

When choosing models for Claude-optimized agents:

```
Claude (Opus/Sonnet) > GPT (if agent has dual prompt) > Claude-like (Kimi K2.5, GLM 5)
```

When choosing models for GPT-native agents:

```
GPT (5.3-codex, 5.2) > Claude Opus (decent fallback) > Gemini (acceptable)
```

Run `opencode models` to see available models, `opencode auth login` to authenticate providers.

### Safe vs Dangerous Overrides

**Safe** (same family):
- Sisyphus: Opus → Sonnet, Kimi K2.5, GLM 5
- Prometheus: Opus → GPT-5.2 (auto-switches prompt)
- Atlas: Kimi K2.5 → Sonnet, GPT-5.2 (auto-switches)

**Safe** — same personality type:
- Sisyphus: Opus → Sonnet, Kimi K2.5, GLM 5 (all communicative models)
- Prometheus: Opus → GPT-5.2 (auto-switches to GPT prompt)
- Atlas: Kimi K2.5 → Sonnet, GPT-5.2 (auto-switches to GPT prompt)

**Dangerous** (no prompt support):
- Sisyphus → GPT: **No GPT prompt. Will degrade significantly.**
- Hephaestus → Claude: **Built for Codex. Claude can't replicate this.**

**Dangerous** — personality mismatch:
- Sisyphus → GPT: **No GPT prompt exists. Will degrade significantly.**
- Hephaestus → Claude: **Built for Codex's autonomous style. Claude can't replicate this.**
- Explore → Opus: **Massive cost waste. Explore needs speed, not intelligence.**
- Librarian → Opus: **Same. Doc search doesn't need Opus-level reasoning.**

---

### How Model Resolution Works

## Provider Priority

Each agent has a fallback chain. The system tries models in priority order until it finds one available through your connected providers. You don't need to configure providers per model — just authenticate (`opencode auth login`) and the system figures out which models are available and where.

```
Native (anthropic/, openai/, google/) > Kimi for Coding > GitHub Copilot > Venice > OpenCode Zen > Z.ai Coding Plan
Agent Request → User Override (if configured) → Fallback Chain → System Default
```
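The resolution order above can be sketched as follows. The function shape and the default value are assumptions based on the behavior described here, not the actual source.

```typescript
// Hypothetical sketch of model resolution: user override first, then the
// agent's fallback chain, then a system default (name assumed).
interface ResolveInput {
  override?: string // user-configured model, if any
  chain: string[] // agent's fallback chain, highest priority first
  available: Set<string> // models reachable via authenticated providers
}

function resolveModel({ override, chain, available }: ResolveInput): string {
  if (override && available.has(override)) return override
  return chain.find((model) => available.has(model)) ?? "system-default"
}
```

The override only wins when it is actually reachable; otherwise resolution falls back to the chain, so a stale config entry degrades gracefully instead of failing.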

---
16
package.json
@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "The Best AI Agent Harness - Batteries-Included OpenCode Plugin with Multi-Model Orchestration, Parallel Background Agents, and Crafted LSP/AST Tools",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
@@ -74,13 +74,13 @@
    "typescript": "^5.7.3"
  },
  "optionalDependencies": {
    "oh-my-opencode-darwin-arm64": "3.8.0",
    "oh-my-opencode-darwin-x64": "3.8.0",
    "oh-my-opencode-linux-arm64": "3.8.0",
    "oh-my-opencode-linux-arm64-musl": "3.8.0",
    "oh-my-opencode-linux-x64": "3.8.0",
    "oh-my-opencode-linux-x64-musl": "3.8.0",
    "oh-my-opencode-windows-x64": "3.8.0"
    "oh-my-opencode-darwin-arm64": "3.8.1",
    "oh-my-opencode-darwin-x64": "3.8.1",
    "oh-my-opencode-linux-arm64": "3.8.1",
    "oh-my-opencode-linux-arm64-musl": "3.8.1",
    "oh-my-opencode-linux-x64": "3.8.1",
    "oh-my-opencode-linux-x64-musl": "3.8.1",
    "oh-my-opencode-windows-x64": "3.8.1"
  },
  "trustedDependencies": [
    "@ast-grep/cli",
@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode-darwin-arm64",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "Platform-specific binary for oh-my-opencode (darwin-arm64)",
  "license": "MIT",
  "repository": {

@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode-darwin-x64",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "Platform-specific binary for oh-my-opencode (darwin-x64)",
  "license": "MIT",
  "repository": {

@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode-linux-arm64-musl",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "Platform-specific binary for oh-my-opencode (linux-arm64-musl)",
  "license": "MIT",
  "repository": {

@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode-linux-arm64",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "Platform-specific binary for oh-my-opencode (linux-arm64)",
  "license": "MIT",
  "repository": {

@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode-linux-x64-musl",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "Platform-specific binary for oh-my-opencode (linux-x64-musl)",
  "license": "MIT",
  "repository": {

@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode-linux-x64",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "Platform-specific binary for oh-my-opencode (linux-x64)",
  "license": "MIT",
  "repository": {

@@ -1,6 +1,6 @@
{
  "name": "oh-my-opencode-windows-x64",
  "version": "3.8.0",
  "version": "3.8.1",
  "description": "Platform-specific binary for oh-my-opencode (windows-x64)",
  "license": "MIT",
  "repository": {
@@ -1671,6 +1671,14 @@
      "created_at": "2026-02-21T15:09:19Z",
      "repoId": 1108837393,
      "pullRequestNo": 2021
    },
    {
      "name": "coleleavitt",
      "id": 75138914,
      "comment_id": 3939630796,
      "created_at": "2026-02-21T22:44:45Z",
      "repoId": 1108837393,
      "pullRequestNo": 2029
    }
  ]
}
@@ -317,6 +317,22 @@ export function buildAntiPatternsSection(): string {
${patterns.join("\n")}`
}

export function buildDeepParallelSection(model: string, categories: AvailableCategory[]): string {
  const isNonClaude = !model.toLowerCase().includes('claude')
  const hasDeepCategory = categories.some(c => c.name === 'deep')

  if (!isNonClaude || !hasDeepCategory) return ""

  return `### Deep Parallel Delegation

For implementation tasks, actively decompose and delegate to \`deep\` category agents in parallel.

1. Break the implementation into independent work units
2. Maximize parallel deep agents — spawn one per independent unit (\`run_in_background=true\`)
3. Give each agent a GOAL, not step-by-step instructions — deep agents explore and solve autonomously
4. Collect results, integrate, verify coherence`
}

export function buildUltraworkSection(
  agents: AvailableAgent[],
  categories: AvailableCategory[],
@@ -25,6 +25,7 @@ import {
  buildOracleSection,
  buildHardBlocksSection,
  buildAntiPatternsSection,
  buildDeepParallelSection,
  categorizeTools,
} from "./dynamic-agent-prompt-builder";
@@ -139,6 +140,7 @@ Should I proceed with [recommendation], or would you prefer differently?
}

function buildDynamicSisyphusPrompt(
  model: string,
  availableAgents: AvailableAgent[],
  availableTools: AvailableTool[] = [],
  availableSkills: AvailableSkill[] = [],
@@ -161,6 +163,7 @@ function buildDynamicSisyphusPrompt(
  const oracleSection = buildOracleSection(availableAgents);
  const hardBlocks = buildHardBlocksSection();
  const antiPatterns = buildAntiPatternsSection();
  const deepParallelSection = buildDeepParallelSection(model, availableCategories);
  const taskManagementSection = buildTaskManagementSection(useTaskSystem);
  const todoHookNote = useTaskSystem
    ? "YOUR TASK CREATION WOULD BE TRACKED BY HOOK([SYSTEM REMINDER - TASK CONTINUATION])"
@@ -356,6 +359,8 @@ STOP searching when:
${categorySkillsGuide}

${deepParallelSection}

${delegationTable}

### Delegation Prompt Structure (MANDATORY - ALL 6 sections):
@@ -545,13 +550,14 @@ export function createSisyphusAgent(
  const categories = availableCategories ?? [];
  const prompt = availableAgents
    ? buildDynamicSisyphusPrompt(
        model,
        availableAgents,
        tools,
        skills,
        categories,
        useTaskSystem,
      )
    : buildDynamicSisyphusPrompt([], tools, skills, categories, useTaskSystem);
    : buildDynamicSisyphusPrompt(model, [], tools, skills, categories, useTaskSystem);

  const permission = {
    question: "allow",
@@ -0,0 +1,190 @@
import { describe, test, expect, beforeEach, afterEach } from "bun:test"
import { mkdtempSync, writeFileSync, rmSync } from "node:fs"
import { join } from "node:path"
import { tmpdir } from "node:os"
import { isCompactionAgent, findNearestMessageExcludingCompaction } from "./compaction-aware-message-resolver"

describe("isCompactionAgent", () => {
  describe("#given agent name variations", () => {
    test("returns true for 'compaction'", () => {
      // when
      const result = isCompactionAgent("compaction")

      // then
      expect(result).toBe(true)
    })

    test("returns true for 'Compaction' (case insensitive)", () => {
      // when
      const result = isCompactionAgent("Compaction")

      // then
      expect(result).toBe(true)
    })

    test("returns true for ' compaction ' (with whitespace)", () => {
      // when
      const result = isCompactionAgent(" compaction ")

      // then
      expect(result).toBe(true)
    })

    test("returns false for undefined", () => {
      // when
      const result = isCompactionAgent(undefined)

      // then
      expect(result).toBe(false)
    })

    test("returns false for null", () => {
      // when
      const result = isCompactionAgent(null as unknown as string)

      // then
      expect(result).toBe(false)
    })

    test("returns false for non-compaction agent like 'sisyphus'", () => {
      // when
      const result = isCompactionAgent("sisyphus")

      // then
      expect(result).toBe(false)
    })
  })
})

describe("findNearestMessageExcludingCompaction", () => {
  let tempDir: string

  beforeEach(() => {
    tempDir = mkdtempSync(join(tmpdir(), "compaction-test-"))
  })

  afterEach(() => {
    rmSync(tempDir, { force: true, recursive: true })
  })

  describe("#given directory with messages", () => {
    test("finds message with full agent and model", () => {
      // given
      const message = {
        agent: "sisyphus",
        model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
      }
      writeFileSync(join(tempDir, "001.json"), JSON.stringify(message))

      // when
      const result = findNearestMessageExcludingCompaction(tempDir)

      // then
      expect(result).not.toBeNull()
      expect(result?.agent).toBe("sisyphus")
      expect(result?.model?.providerID).toBe("anthropic")
      expect(result?.model?.modelID).toBe("claude-opus-4-6")
    })

    test("skips compaction agent messages", () => {
      // given
      const compactionMessage = {
        agent: "compaction",
        model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
      }
      const validMessage = {
        agent: "sisyphus",
        model: { providerID: "anthropic", modelID: "claude-opus-4-6" },
      }
      writeFileSync(join(tempDir, "002.json"), JSON.stringify(compactionMessage))
      writeFileSync(join(tempDir, "001.json"), JSON.stringify(validMessage))

      // when
      const result = findNearestMessageExcludingCompaction(tempDir)

      // then
      expect(result).not.toBeNull()
      expect(result?.agent).toBe("sisyphus")
    })

    test("falls back to partial agent/model match", () => {
      // given
      const messageWithAgentOnly = {
        agent: "hephaestus",
      }
      const messageWithModelOnly = {
        model: { providerID: "openai", modelID: "gpt-5.3" },
      }
      writeFileSync(join(tempDir, "001.json"), JSON.stringify(messageWithModelOnly))
      writeFileSync(join(tempDir, "002.json"), JSON.stringify(messageWithAgentOnly))

      // when
      const result = findNearestMessageExcludingCompaction(tempDir)

      // then
      expect(result).not.toBeNull()
      // Should find the one with agent first (sorted reverse, so 002 is checked first)
      expect(result?.agent).toBe("hephaestus")
    })

    test("returns null for empty directory", () => {
      // given - empty directory (tempDir is already empty)

      // when
      const result = findNearestMessageExcludingCompaction(tempDir)

      // then
      expect(result).toBeNull()
    })

    test("returns null for non-existent directory", () => {
      // given
      const nonExistentDir = join(tmpdir(), "non-existent-dir-12345")

      // when
      const result = findNearestMessageExcludingCompaction(nonExistentDir)

      // then
      expect(result).toBeNull()
    })

    test("skips invalid JSON files and finds valid message", () => {
      // given
      const invalidJson = "{ invalid json"
      const validMessage = {
        agent: "oracle",
        model: { providerID: "google", modelID: "gemini-2-flash" },
      }
      writeFileSync(join(tempDir, "002.json"), invalidJson)
      writeFileSync(join(tempDir, "001.json"), JSON.stringify(validMessage))

      // when
      const result = findNearestMessageExcludingCompaction(tempDir)

      // then
      expect(result).not.toBeNull()
      expect(result?.agent).toBe("oracle")
    })

    test("finds newest valid message (sorted by filename reverse)", () => {
      // given
      const olderMessage = {
        agent: "older",
        model: { providerID: "a", modelID: "b" },
      }
      const newerMessage = {
        agent: "newer",
        model: { providerID: "c", modelID: "d" },
      }
      writeFileSync(join(tempDir, "001.json"), JSON.stringify(olderMessage))
      writeFileSync(join(tempDir, "010.json"), JSON.stringify(newerMessage))

      // when
      const result = findNearestMessageExcludingCompaction(tempDir)

      // then
      expect(result).not.toBeNull()
      expect(result?.agent).toBe("newer")
    })
  })
})
@@ -0,0 +1,57 @@
import { readdirSync, readFileSync } from "node:fs"
import { join } from "node:path"
import type { StoredMessage } from "../hook-message-injector"

export function isCompactionAgent(agent: string | undefined): boolean {
  return agent?.trim().toLowerCase() === "compaction"
}

function hasFullAgentAndModel(message: StoredMessage): boolean {
  return !!message.agent &&
    !isCompactionAgent(message.agent) &&
    !!message.model?.providerID &&
    !!message.model?.modelID
}

function hasPartialAgentOrModel(message: StoredMessage): boolean {
  const hasAgent = !!message.agent && !isCompactionAgent(message.agent)
  const hasModel = !!message.model?.providerID && !!message.model?.modelID
  return hasAgent || hasModel
}

export function findNearestMessageExcludingCompaction(messageDir: string): StoredMessage | null {
  try {
    const files = readdirSync(messageDir)
      .filter((name) => name.endsWith(".json"))
      .sort()
      .reverse()

    for (const file of files) {
      try {
        const content = readFileSync(join(messageDir, file), "utf-8")
        const parsed = JSON.parse(content) as StoredMessage
        if (hasFullAgentAndModel(parsed)) {
          return parsed
        }
      } catch {
        continue
      }
    }

    for (const file of files) {
      try {
        const content = readFileSync(join(messageDir, file), "utf-8")
        const parsed = JSON.parse(content) as StoredMessage
        if (hasPartialAgentOrModel(parsed)) {
          return parsed
        }
      } catch {
        continue
      }
    }
  } catch {
    return null
  }

  return null
}
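The resolver above makes two passes over the message files, newest first: the first pass demands a non-compaction agent plus a complete model, the second accepts either half. A self-contained sketch of that two-pass strategy over in-memory messages (the `StoredMessage` shape here is a reduced, illustrative assumption, not the real import):

```typescript
// Reduced, illustrative message shape (assumption; real type lives in hook-message-injector).
interface StoredMessage {
  agent?: string
  model?: { providerID?: string; modelID?: string }
}

const isCompaction = (agent?: string) => agent?.trim().toLowerCase() === "compaction"

// Two-pass scan: prefer full agent+model, then fall back to partial matches.
function resolve(messages: StoredMessage[]): StoredMessage | null {
  // Pass 1: non-compaction agent AND a complete model.
  for (const m of messages) {
    if (m.agent && !isCompaction(m.agent) && m.model?.providerID && m.model?.modelID) return m
  }
  // Pass 2: accept either a usable agent OR a complete model.
  for (const m of messages) {
    const hasAgent = !!m.agent && !isCompaction(m.agent)
    const hasModel = !!m.model?.providerID && !!m.model?.modelID
    if (hasAgent || hasModel) return m
  }
  return null
}

const picked = resolve([
  { agent: "compaction" },  // skipped: compaction agent, no model
  { agent: "sisyphus" },    // accepted in pass 2 (agent only)
])
```

Note that, as in the real code, a compaction-agent message carrying a full model would still be accepted in pass 2 via the model half of the check.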
351	src/features/background-agent/error-classifier.test.ts	Normal file
@@ -0,0 +1,351 @@
import { describe, test, expect } from "bun:test"
import {
  isRecord,
  isAbortedSessionError,
  getErrorText,
  extractErrorName,
  extractErrorMessage,
  getSessionErrorMessage,
} from "./error-classifier"

describe("isRecord", () => {
  describe("#given null or primitive values", () => {
    test("returns false for null", () => {
      expect(isRecord(null)).toBe(false)
    })

    test("returns false for undefined", () => {
      expect(isRecord(undefined)).toBe(false)
    })

    test("returns false for string", () => {
      expect(isRecord("hello")).toBe(false)
    })

    test("returns false for number", () => {
      expect(isRecord(42)).toBe(false)
    })

    test("returns false for boolean", () => {
      expect(isRecord(true)).toBe(false)
    })

    test("returns true for array (arrays are objects)", () => {
      expect(isRecord([1, 2, 3])).toBe(true)
    })
  })

  describe("#given plain objects", () => {
    test("returns true for empty object", () => {
      expect(isRecord({})).toBe(true)
    })

    test("returns true for object with properties", () => {
      expect(isRecord({ key: "value" })).toBe(true)
    })

    test("returns true for object with nested objects", () => {
      expect(isRecord({ nested: { deep: true } })).toBe(true)
    })
  })

  describe("#given Error instances", () => {
    test("returns true for Error instance", () => {
      expect(isRecord(new Error("test"))).toBe(true)
    })

    test("returns true for TypeError instance", () => {
      expect(isRecord(new TypeError("test"))).toBe(true)
    })
  })
})

describe("isAbortedSessionError", () => {
  describe("#given error with aborted message", () => {
    test("returns true for string containing aborted", () => {
      expect(isAbortedSessionError("Session aborted")).toBe(true)
    })

    test("returns true for string with ABORTED uppercase", () => {
      expect(isAbortedSessionError("Session ABORTED")).toBe(true)
    })

    test("returns true for Error with aborted in message", () => {
      expect(isAbortedSessionError(new Error("Session aborted"))).toBe(true)
    })

    test("returns true for object with message containing aborted", () => {
      expect(isAbortedSessionError({ message: "The session was aborted" })).toBe(true)
    })
  })

  describe("#given error without aborted message", () => {
    test("returns false for string without aborted", () => {
      expect(isAbortedSessionError("Session completed")).toBe(false)
    })

    test("returns false for Error without aborted", () => {
      expect(isAbortedSessionError(new Error("Something went wrong"))).toBe(false)
    })

    test("returns false for empty string", () => {
      expect(isAbortedSessionError("")).toBe(false)
    })
  })

  describe("#given invalid inputs", () => {
    test("returns false for null", () => {
      expect(isAbortedSessionError(null)).toBe(false)
    })

    test("returns false for undefined", () => {
      expect(isAbortedSessionError(undefined)).toBe(false)
    })

    test("returns false for object without message", () => {
      expect(isAbortedSessionError({ code: "ABORTED" })).toBe(false)
    })
  })
})

describe("getErrorText", () => {
  describe("#given string input", () => {
    test("returns the string as-is", () => {
      expect(getErrorText("Something went wrong")).toBe("Something went wrong")
    })

    test("returns empty string for empty string", () => {
      expect(getErrorText("")).toBe("")
    })
  })

  describe("#given Error instance", () => {
    test("returns name and message format", () => {
      expect(getErrorText(new Error("test message"))).toBe("Error: test message")
    })

    test("returns TypeError format", () => {
      expect(getErrorText(new TypeError("type error"))).toBe("TypeError: type error")
    })
  })

  describe("#given object with message property", () => {
    test("returns message property as string", () => {
      expect(getErrorText({ message: "custom error" })).toBe("custom error")
    })

    test("returns name property when message not available", () => {
      expect(getErrorText({ name: "CustomError" })).toBe("CustomError")
    })

    test("prefers message over name", () => {
      expect(getErrorText({ name: "CustomError", message: "error message" })).toBe("error message")
    })
  })

  describe("#given invalid inputs", () => {
    test("returns empty string for null", () => {
      expect(getErrorText(null)).toBe("")
    })

    test("returns empty string for undefined", () => {
      expect(getErrorText(undefined)).toBe("")
    })

    test("returns empty string for object without message or name", () => {
      expect(getErrorText({ code: 500 })).toBe("")
    })
  })
})

describe("extractErrorName", () => {
  describe("#given Error instance", () => {
    test("returns Error for generic Error", () => {
      expect(extractErrorName(new Error("test"))).toBe("Error")
    })

    test("returns TypeError name", () => {
      expect(extractErrorName(new TypeError("test"))).toBe("TypeError")
    })

    test("returns RangeError name", () => {
      expect(extractErrorName(new RangeError("test"))).toBe("RangeError")
    })
  })

  describe("#given plain object with name property", () => {
    test("returns name property when string", () => {
      expect(extractErrorName({ name: "CustomError" })).toBe("CustomError")
    })

    test("returns undefined when name is not string", () => {
      expect(extractErrorName({ name: 123 })).toBe(undefined)
    })
  })

  describe("#given invalid inputs", () => {
    test("returns undefined for null", () => {
      expect(extractErrorName(null)).toBe(undefined)
    })

    test("returns undefined for undefined", () => {
      expect(extractErrorName(undefined)).toBe(undefined)
    })

    test("returns undefined for string", () => {
      expect(extractErrorName("Error message")).toBe(undefined)
    })

    test("returns undefined for object without name property", () => {
      expect(extractErrorName({ message: "test" })).toBe(undefined)
    })
  })
})

describe("extractErrorMessage", () => {
  describe("#given string input", () => {
    test("returns the string as-is", () => {
      expect(extractErrorMessage("error message")).toBe("error message")
    })

    test("returns undefined for empty string", () => {
      expect(extractErrorMessage("")).toBe(undefined)
    })
  })

  describe("#given Error instance", () => {
    test("returns error message", () => {
      expect(extractErrorMessage(new Error("test error"))).toBe("test error")
    })

    test("returns empty string for Error with no message", () => {
      expect(extractErrorMessage(new Error())).toBe("")
    })
  })

  describe("#given object with message property", () => {
    test("returns message property", () => {
      expect(extractErrorMessage({ message: "custom message" })).toBe("custom message")
    })

    test("falls through to JSON.stringify for empty message value", () => {
      expect(extractErrorMessage({ message: "" })).toBe('{"message":""}')
    })
  })

  describe("#given nested error structure", () => {
    test("extracts message from nested error object", () => {
      expect(extractErrorMessage({ error: { message: "nested error" } })).toBe("nested error")
    })

    test("extracts message from data.error structure", () => {
      expect(extractErrorMessage({ data: { error: "data error" } })).toBe("data error")
    })

    test("extracts message from cause property", () => {
      expect(extractErrorMessage({ cause: "cause error" })).toBe("cause error")
    })

    test("extracts message from cause object with message", () => {
      expect(extractErrorMessage({ cause: { message: "cause message" } })).toBe("cause message")
    })
  })

  describe("#given complex error with data wrapper", () => {
    test("extracts from error.data.message", () => {
      const error = {
        data: {
          message: "data message",
        },
      }
      expect(extractErrorMessage(error)).toBe("data message")
    })

    test("prefers top over nested-level message", () => {
      const error = {
        message: "top level",
        data: { message: "nested" },
      }
      expect(extractErrorMessage(error)).toBe("top level")
    })
  })

  describe("#given invalid inputs", () => {
    test("returns undefined for null", () => {
      expect(extractErrorMessage(null)).toBe(undefined)
    })

    test("returns undefined for undefined", () => {
      expect(extractErrorMessage(undefined)).toBe(undefined)
    })
  })

  describe("#given object without extractable message", () => {
    test("falls back to JSON.stringify for object", () => {
      const obj = { code: 500, details: "error" }
      const result = extractErrorMessage(obj)
      expect(result).toContain('"code":500')
    })

    test("falls back to String() for non-serializable object", () => {
      const circular: Record<string, unknown> = { a: 1 }
      circular.self = circular
      const result = extractErrorMessage(circular)
      expect(result).toBe("[object Object]")
    })
  })
})

describe("getSessionErrorMessage", () => {
  describe("#given valid error properties", () => {
    test("extracts message from error.message", () => {
      const properties = { error: { message: "session error" } }
      expect(getSessionErrorMessage(properties)).toBe("session error")
    })

    test("extracts message from error.data.message", () => {
      const properties = {
        error: {
          data: { message: "data error message" },
        },
      }
      expect(getSessionErrorMessage(properties)).toBe("data error message")
    })

    test("prefers error.data.message over error.message", () => {
      const properties = {
        error: {
          message: "top level",
          data: { message: "nested" },
        },
      }
      expect(getSessionErrorMessage(properties)).toBe("nested")
    })
  })

  describe("#given missing or invalid properties", () => {
    test("returns undefined when error is missing", () => {
      expect(getSessionErrorMessage({})).toBe(undefined)
    })

    test("returns undefined when error is null", () => {
      expect(getSessionErrorMessage({ error: null })).toBe(undefined)
    })

    test("returns undefined when error is string", () => {
      expect(getSessionErrorMessage({ error: "error string" })).toBe(undefined)
    })

    test("returns undefined when data is not an object", () => {
      expect(getSessionErrorMessage({ error: { data: "not an object" } })).toBe(undefined)
    })

    test("returns undefined when message is not string", () => {
      expect(getSessionErrorMessage({ error: { message: 123 } })).toBe(undefined)
    })

    test("returns undefined when data.message is not string", () => {
      expect(getSessionErrorMessage({ error: { data: { message: null } } })).toBe(undefined)
    })
  })
})
@@ -1,3 +1,7 @@
export function isRecord(value: unknown): value is Record<string, unknown> {
  return typeof value === "object" && value !== null
}

export function isAbortedSessionError(error: unknown): boolean {
  const message = getErrorText(error)
  return message.toLowerCase().includes("aborted")
@@ -19,3 +23,61 @@ export function getErrorText(error: unknown): string {
  }
  return ""
}

export function extractErrorName(error: unknown): string | undefined {
  if (isRecord(error) && typeof error["name"] === "string") return error["name"]
  if (error instanceof Error) return error.name
  return undefined
}

export function extractErrorMessage(error: unknown): string | undefined {
  if (!error) return undefined
  if (typeof error === "string") return error
  if (error instanceof Error) return error.message

  if (isRecord(error)) {
    const dataRaw = error["data"]
    const candidates: unknown[] = [
      error,
      dataRaw,
      error["error"],
      isRecord(dataRaw) ? (dataRaw as Record<string, unknown>)["error"] : undefined,
      error["cause"],
    ]

    for (const candidate of candidates) {
      if (typeof candidate === "string" && candidate.length > 0) return candidate
      if (
        isRecord(candidate) &&
        typeof candidate["message"] === "string" &&
        candidate["message"].length > 0
      ) {
        return candidate["message"]
      }
    }
  }

  try {
    return JSON.stringify(error)
  } catch {
    return String(error)
  }
}

interface EventPropertiesLike {
  [key: string]: unknown
}

export function getSessionErrorMessage(properties: EventPropertiesLike): string | undefined {
  const errorRaw = properties["error"]
  if (!isRecord(errorRaw)) return undefined

  const dataRaw = errorRaw["data"]
  if (isRecord(dataRaw)) {
    const message = dataRaw["message"]
    if (typeof message === "string") return message
  }

  const message = errorRaw["message"]
  return typeof message === "string" ? message : undefined
}
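The heart of `extractErrorMessage` above is a candidate cascade: probe the object itself, then `data`, `error`, `data.error`, and `cause`, taking the first non-empty string or `message`. A self-contained condensation of that cascade (`firstMessage` is an illustrative name, not part of the module):

```typescript
type Rec = Record<string, unknown>
const isRec = (v: unknown): v is Rec => typeof v === "object" && v !== null

// Condensed sketch of the candidate cascade: first non-empty string
// or string `message` among the object and its common wrapper keys wins.
function firstMessage(error: unknown): string | undefined {
  if (typeof error === "string") return error || undefined
  if (!isRec(error)) return undefined
  const data = error["data"]
  const candidates: unknown[] = [
    error,
    data,
    error["error"],
    isRec(data) ? data["error"] : undefined,
    error["cause"],
  ]
  for (const c of candidates) {
    if (typeof c === "string" && c.length > 0) return c
    if (isRec(c) && typeof c["message"] === "string" && c["message"].length > 0) {
      return c["message"]
    }
  }
  return undefined
}

const nested = firstMessage({ data: { error: "data error" } })  // "data error"
```

Ordering matters: because the object itself is the first candidate, a top-level `message` always beats one buried under `data`, matching the "prefers top over nested" test above.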
270	src/features/background-agent/fallback-retry-handler.test.ts	Normal file
@@ -0,0 +1,270 @@
import { describe, test, expect, mock, beforeEach } from "bun:test"

mock.module("../../shared", () => ({
  log: mock(() => {}),
  readConnectedProvidersCache: mock(() => null),
  readProviderModelsCache: mock(() => null),
}))

mock.module("../../shared/model-error-classifier", () => ({
  shouldRetryError: mock(() => true),
  getNextFallback: mock((chain: Array<{ model: string }>, attempt: number) => chain[attempt]),
  hasMoreFallbacks: mock((chain: Array<{ model: string }>, attempt: number) => attempt < chain.length),
  selectFallbackProvider: mock((providers: string[]) => providers[0]),
}))

mock.module("../../shared/provider-model-id-transform", () => ({
  transformModelForProvider: mock((_provider: string, model: string) => model),
}))

import { tryFallbackRetry } from "./fallback-retry-handler"
import { shouldRetryError } from "../../shared/model-error-classifier"
import type { BackgroundTask } from "./types"
import type { ConcurrencyManager } from "./concurrency"

function createMockTask(overrides: Partial<BackgroundTask> = {}): BackgroundTask {
  return {
    id: "test-task-1",
    description: "test task",
    prompt: "test prompt",
    agent: "sisyphus-junior",
    status: "error",
    parentSessionID: "parent-session-1",
    parentMessageID: "parent-message-1",
    fallbackChain: [
      { model: "fallback-model-1", providers: ["provider-a"], variant: undefined },
      { model: "fallback-model-2", providers: ["provider-b"], variant: undefined },
    ],
    attemptCount: 0,
    concurrencyKey: "provider-a/original-model",
    model: { providerID: "provider-a", modelID: "original-model" },
    ...overrides,
  }
}

function createMockConcurrencyManager(): ConcurrencyManager {
  return {
    release: mock(() => {}),
    acquire: mock(async () => {}),
    getQueueLength: mock(() => 0),
    getActiveCount: mock(() => 0),
  } as unknown as ConcurrencyManager
}

function createMockClient() {
  return {
    session: {
      abort: mock(async () => ({})),
    },
  } as any
}

function createDefaultArgs(taskOverrides: Partial<BackgroundTask> = {}) {
  const processKeyFn = mock(() => {})
  const queuesByKey = new Map<string, Array<{ task: BackgroundTask; input: any }>>()
  const idleDeferralTimers = new Map<string, ReturnType<typeof setTimeout>>()
  const concurrencyManager = createMockConcurrencyManager()
  const client = createMockClient()
  const task = createMockTask(taskOverrides)

  return {
    task,
    errorInfo: { name: "OverloadedError", message: "model overloaded" },
    source: "polling",
    concurrencyManager,
    client,
    idleDeferralTimers,
    queuesByKey,
    processKey: processKeyFn,
  }
}

describe("tryFallbackRetry", () => {
  beforeEach(() => {
    ;(shouldRetryError as any).mockImplementation(() => true)
  })

  describe("#given retryable error with fallback chain", () => {
    test("returns true and enqueues retry", () => {
      const args = createDefaultArgs()

      const result = tryFallbackRetry(args)

      expect(result).toBe(true)
    })

    test("resets task status to pending", () => {
      const args = createDefaultArgs()

      tryFallbackRetry(args)

      expect(args.task.status).toBe("pending")
    })

    test("increments attemptCount", () => {
      const args = createDefaultArgs()

      tryFallbackRetry(args)

      expect(args.task.attemptCount).toBe(1)
    })

    test("updates task model to fallback", () => {
      const args = createDefaultArgs()

      tryFallbackRetry(args)

      expect(args.task.model?.modelID).toBe("fallback-model-1")
      expect(args.task.model?.providerID).toBe("provider-a")
    })

    test("clears sessionID and startedAt", () => {
      const args = createDefaultArgs({
        sessionID: "old-session",
        startedAt: new Date(),
      })

      tryFallbackRetry(args)

      expect(args.task.sessionID).toBeUndefined()
      expect(args.task.startedAt).toBeUndefined()
    })

    test("clears error field", () => {
      const args = createDefaultArgs({ error: "previous error" })

      tryFallbackRetry(args)

      expect(args.task.error).toBeUndefined()
    })

    test("sets new queuedAt", () => {
      const args = createDefaultArgs()

      tryFallbackRetry(args)

      expect(args.task.queuedAt).toBeInstanceOf(Date)
    })

    test("releases concurrency slot", () => {
      const args = createDefaultArgs()

      tryFallbackRetry(args)

      expect(args.concurrencyManager.release).toHaveBeenCalledWith("provider-a/original-model")
    })

    test("clears concurrencyKey after release", () => {
      const args = createDefaultArgs()

      tryFallbackRetry(args)

      expect(args.task.concurrencyKey).toBeUndefined()
    })

    test("aborts existing session", () => {
      const args = createDefaultArgs({ sessionID: "session-to-abort" })

      tryFallbackRetry(args)

      expect(args.client.session.abort).toHaveBeenCalledWith({
        path: { id: "session-to-abort" },
      })
    })

    test("adds retry input to queue and calls processKey", () => {
      const args = createDefaultArgs()

      tryFallbackRetry(args)

      const key = `${args.task.model!.providerID}/${args.task.model!.modelID}`
      const queue = args.queuesByKey.get(key)
      expect(queue).toBeDefined()
      expect(queue!.length).toBe(1)
      expect(queue![0].task).toBe(args.task)
      expect(args.processKey).toHaveBeenCalledWith(key)
    })
  })

  describe("#given non-retryable error", () => {
    test("returns false when shouldRetryError returns false", () => {
      ;(shouldRetryError as any).mockImplementation(() => false)
      const args = createDefaultArgs()

      const result = tryFallbackRetry(args)

      expect(result).toBe(false)
    })
  })

  describe("#given no fallback chain", () => {
    test("returns false when fallbackChain is undefined", () => {
      const args = createDefaultArgs({ fallbackChain: undefined })

      const result = tryFallbackRetry(args)

      expect(result).toBe(false)
    })

    test("returns false when fallbackChain is empty", () => {
      const args = createDefaultArgs({ fallbackChain: [] })

      const result = tryFallbackRetry(args)

      expect(result).toBe(false)
    })
  })

  describe("#given exhausted fallbacks", () => {
    test("returns false when attemptCount exceeds chain length", () => {
      const args = createDefaultArgs({ attemptCount: 5 })

      const result = tryFallbackRetry(args)

      expect(result).toBe(false)
    })
  })

  describe("#given task without concurrency key", () => {
    test("skips concurrency release", () => {
      const args = createDefaultArgs({ concurrencyKey: undefined })

      tryFallbackRetry(args)

      expect(args.concurrencyManager.release).not.toHaveBeenCalled()
    })
  })

  describe("#given task without session", () => {
    test("skips session abort", () => {
      const args = createDefaultArgs({ sessionID: undefined })

      tryFallbackRetry(args)

      expect(args.client.session.abort).not.toHaveBeenCalled()
    })
  })

  describe("#given active idle deferral timer", () => {
    test("clears the timer and removes from map", () => {
      const args = createDefaultArgs()
      const timerId = setTimeout(() => {}, 10000)
      args.idleDeferralTimers.set("test-task-1", timerId)

      tryFallbackRetry(args)

      expect(args.idleDeferralTimers.has("test-task-1")).toBe(false)
    })
  })

  describe("#given second attempt", () => {
    test("uses second fallback in chain", () => {
      const args = createDefaultArgs({ attemptCount: 1 })

      tryFallbackRetry(args)

      expect(args.task.model?.modelID).toBe("fallback-model-2")
      expect(args.task.attemptCount).toBe(2)
    })
  })
})
126	src/features/background-agent/fallback-retry-handler.ts	Normal file
@@ -0,0 +1,126 @@
import type { BackgroundTask, LaunchInput } from "./types"
import type { FallbackEntry } from "../../shared/model-requirements"
import type { ConcurrencyManager } from "./concurrency"
import type { OpencodeClient, QueueItem } from "./constants"
import { log, readConnectedProvidersCache, readProviderModelsCache } from "../../shared"
import {
  shouldRetryError,
  getNextFallback,
  hasMoreFallbacks,
  selectFallbackProvider,
} from "../../shared/model-error-classifier"
import { transformModelForProvider } from "../../shared/provider-model-id-transform"

export function tryFallbackRetry(args: {
  task: BackgroundTask
  errorInfo: { name?: string; message?: string }
  source: string
  concurrencyManager: ConcurrencyManager
  client: OpencodeClient
  idleDeferralTimers: Map<string, ReturnType<typeof setTimeout>>
  queuesByKey: Map<string, QueueItem[]>
  processKey: (key: string) => void
}): boolean {
  const { task, errorInfo, source, concurrencyManager, client, idleDeferralTimers, queuesByKey, processKey } = args
  const fallbackChain = task.fallbackChain
  const canRetry =
    shouldRetryError(errorInfo) &&
    fallbackChain &&
    fallbackChain.length > 0 &&
    hasMoreFallbacks(fallbackChain, task.attemptCount ?? 0)

  if (!canRetry) return false

  const attemptCount = task.attemptCount ?? 0
  const providerModelsCache = readProviderModelsCache()
  const connectedProviders = providerModelsCache?.connected ?? readConnectedProvidersCache()
  const connectedSet = connectedProviders ? new Set(connectedProviders.map(p => p.toLowerCase())) : null

  const isReachable = (entry: FallbackEntry): boolean => {
    if (!connectedSet) return true
    return entry.providers.some((p) => connectedSet.has(p.toLowerCase()))
  }

  let selectedAttemptCount = attemptCount
  let nextFallback: FallbackEntry | undefined
  while (fallbackChain && selectedAttemptCount < fallbackChain.length) {
    const candidate = getNextFallback(fallbackChain, selectedAttemptCount)
    if (!candidate) break
    selectedAttemptCount++
    if (!isReachable(candidate)) {
      log("[background-agent] Skipping unreachable fallback:", {
        taskId: task.id,
        source,
        model: candidate.model,
        providers: candidate.providers,
      })
      continue
    }
    nextFallback = candidate
    break
  }
  if (!nextFallback) return false

  const providerID = selectFallbackProvider(
    nextFallback.providers,
    task.model?.providerID,
  )

  log("[background-agent] Retryable error, attempting fallback:", {
    taskId: task.id,
    source,
    errorName: errorInfo.name,
    errorMessage: errorInfo.message?.slice(0, 100),
    attemptCount: selectedAttemptCount,
    nextModel: `${providerID}/${nextFallback.model}`,
  })

  if (task.concurrencyKey) {
    concurrencyManager.release(task.concurrencyKey)
    task.concurrencyKey = undefined
  }

  if (task.sessionID) {
    client.session.abort({ path: { id: task.sessionID } }).catch(() => {})
  }

  const idleTimer = idleDeferralTimers.get(task.id)
  if (idleTimer) {
    clearTimeout(idleTimer)
    idleDeferralTimers.delete(task.id)
  }

  task.attemptCount = selectedAttemptCount
  const transformedModelId = transformModelForProvider(providerID, nextFallback.model)
  task.model = {
    providerID,
    modelID: transformedModelId,
    variant: nextFallback.variant,
  }
  task.status = "pending"
  task.sessionID = undefined
  task.startedAt = undefined
  task.queuedAt = new Date()
  task.error = undefined

  const key = task.model ? `${task.model.providerID}/${task.model.modelID}` : task.agent
  const queue = queuesByKey.get(key) ?? []
  const retryInput: LaunchInput = {
    description: task.description,
|
||||
prompt: task.prompt,
|
||||
agent: task.agent,
|
||||
parentSessionID: task.parentSessionID,
|
||||
parentMessageID: task.parentMessageID,
|
||||
parentModel: task.parentModel,
|
||||
parentAgent: task.parentAgent,
|
||||
parentTools: task.parentTools,
|
||||
model: task.model,
|
||||
fallbackChain: task.fallbackChain,
|
||||
category: task.category,
|
||||
isUnstableAgent: task.isUnstableAgent,
|
||||
}
|
||||
queue.push({ task, input: retryInput })
|
||||
queuesByKey.set(key, queue)
|
||||
processKey(key)
|
||||
return true
|
||||
}
|
||||
@@ -5,7 +5,6 @@ import type {
  LaunchInput,
  ResumeInput,
} from "./types"
import type { FallbackEntry } from "../../shared/model-requirements"
import { TaskHistory } from "./task-history"
import {
  log,
@@ -13,8 +12,6 @@ import {
  normalizePromptTools,
  normalizeSDKResponse,
  promptWithModelSuggestionRetry,
  readConnectedProvidersCache,
  readProviderModelsCache,
  resolveInheritedPromptTools,
  createInternalAgentTextPart,
} from "../../shared"
@@ -25,28 +22,31 @@ import type { BackgroundTaskConfig, TmuxConfig } from "../../config/schema"
import { isInsideTmux } from "../../shared/tmux"
import {
  shouldRetryError,
  getNextFallback,
  hasMoreFallbacks,
  selectFallbackProvider,
} from "../../shared/model-error-classifier"
import { transformModelForProvider } from "../../shared/provider-model-id-transform"
import {
  DEFAULT_MESSAGE_STALENESS_TIMEOUT_MS,
  DEFAULT_STALE_TIMEOUT_MS,
  MIN_IDLE_TIME_MS,
  MIN_RUNTIME_BEFORE_STALE_MS,
  POLLING_INTERVAL_MS,
  TASK_CLEANUP_DELAY_MS,
  TASK_TTL_MS,
} from "./constants"

import { subagentSessions } from "../claude-code-session-state"
import { getTaskToastManager } from "../task-toast-manager"
import { MESSAGE_STORAGE, type StoredMessage } from "../hook-message-injector"
import { existsSync, readFileSync, readdirSync } from "node:fs"
import { formatDuration } from "./duration-formatter"
import {
  isAbortedSessionError,
  extractErrorName,
  extractErrorMessage,
  getSessionErrorMessage,
  isRecord,
} from "./error-classifier"
import { tryFallbackRetry } from "./fallback-retry-handler"
import { registerManagerForCleanup, unregisterManagerForCleanup } from "./process-cleanup"
import { isCompactionAgent, findNearestMessageExcludingCompaction } from "./compaction-aware-message-resolver"
import { MESSAGE_STORAGE } from "../hook-message-injector"
import { join } from "node:path"

type ProcessCleanupEvent = NodeJS.Signals | "beforeExit" | "exit"
import { pruneStaleTasksAndNotifications } from "./task-poller"
import { checkAndInterruptStaleTasks } from "./task-poller"

type OpencodeClient = PluginInput["client"]
@@ -89,9 +89,7 @@ export interface SubagentSessionCreatedEvent {
export type OnSubagentSessionCreated = (event: SubagentSessionCreatedEvent) => Promise<void>

export class BackgroundManager {
  private static cleanupManagers = new Set<BackgroundManager>()
  private static cleanupRegistered = false
  private static cleanupHandlers = new Map<ProcessCleanupEvent, () => void>()

  private tasks: Map<string, BackgroundTask>
  private notifications: Map<string, BackgroundTask[]>
@@ -705,8 +703,8 @@ export class BackgroundManager {
    if (!assistantError) return

    const errorInfo = {
      name: this.extractErrorName(assistantError),
      message: this.extractErrorMessage(assistantError),
      name: extractErrorName(assistantError),
      message: extractErrorMessage(assistantError),
    }
    this.tryFallbackRetry(task, errorInfo, "message.updated")
  }
@@ -809,7 +807,7 @@ export class BackgroundManager {

    const errorObj = props?.error as { name?: string; message?: string } | undefined
    const errorName = errorObj?.name
    const errorMessage = props ? this.getSessionErrorMessage(props) : undefined
    const errorMessage = props ? getSessionErrorMessage(props) : undefined

    const errorInfo = { name: errorName, message: errorMessage }
    if (this.tryFallbackRetry(task, errorInfo, "session.error")) return
@@ -934,110 +932,21 @@ export class BackgroundManager {
    errorInfo: { name?: string; message?: string },
    source: string,
  ): boolean {
    const fallbackChain = task.fallbackChain
    const canRetry =
      shouldRetryError(errorInfo) &&
      fallbackChain &&
      fallbackChain.length > 0 &&
      hasMoreFallbacks(fallbackChain, task.attemptCount ?? 0)

    if (!canRetry) return false

    const attemptCount = task.attemptCount ?? 0
    const providerModelsCache = readProviderModelsCache()
    const connectedProviders = providerModelsCache?.connected ?? readConnectedProvidersCache()
    const connectedSet = connectedProviders ? new Set(connectedProviders.map(p => p.toLowerCase())) : null

    const isReachable = (entry: FallbackEntry): boolean => {
      if (!connectedSet) return true

      // Gate only on provider connectivity. Provider model lists can be stale/incomplete,
      // especially after users manually add models to opencode.json.
      return entry.providers.some((p) => connectedSet.has(p.toLowerCase()))
    }

    let selectedAttemptCount = attemptCount
    let nextFallback: FallbackEntry | undefined
    while (fallbackChain && selectedAttemptCount < fallbackChain.length) {
      const candidate = getNextFallback(fallbackChain, selectedAttemptCount)
      if (!candidate) break
      selectedAttemptCount++
      if (!isReachable(candidate)) {
        log("[background-agent] Skipping unreachable fallback:", {
          taskId: task.id,
          source,
          model: candidate.model,
          providers: candidate.providers,
        })
        continue
      }
      nextFallback = candidate
      break
    }
    if (!nextFallback) return false

    const providerID = selectFallbackProvider(
      nextFallback.providers,
      task.model?.providerID,
    )

    log("[background-agent] Retryable error, attempting fallback:", {
      taskId: task.id,
    const previousSessionID = task.sessionID
    const result = tryFallbackRetry({
      task,
      errorInfo,
      source,
      errorName: errorInfo.name,
      errorMessage: errorInfo.message?.slice(0, 100),
      attemptCount: selectedAttemptCount,
      nextModel: `${providerID}/${nextFallback.model}`,
      concurrencyManager: this.concurrencyManager,
      client: this.client,
      idleDeferralTimers: this.idleDeferralTimers,
      queuesByKey: this.queuesByKey,
      processKey: (key: string) => this.processKey(key),
    })

    if (task.concurrencyKey) {
      this.concurrencyManager.release(task.concurrencyKey)
      task.concurrencyKey = undefined
    if (result && previousSessionID) {
      subagentSessions.delete(previousSessionID)
    }

    if (task.sessionID) {
      this.client.session.abort({ path: { id: task.sessionID } }).catch(() => {})
      subagentSessions.delete(task.sessionID)
    }

    const idleTimer = this.idleDeferralTimers.get(task.id)
    if (idleTimer) {
      clearTimeout(idleTimer)
      this.idleDeferralTimers.delete(task.id)
    }

    task.attemptCount = selectedAttemptCount
    const transformedModelId = transformModelForProvider(providerID, nextFallback.model)
    task.model = {
      providerID,
      modelID: transformedModelId,
      variant: nextFallback.variant,
    }
    task.status = "pending"
    task.sessionID = undefined
    task.startedAt = undefined
    task.queuedAt = new Date()
    task.error = undefined

    const key = task.model ? `${task.model.providerID}/${task.model.modelID}` : task.agent
    const queue = this.queuesByKey.get(key) ?? []
    const retryInput: LaunchInput = {
      description: task.description,
      prompt: task.prompt,
      agent: task.agent,
      parentSessionID: task.parentSessionID,
      parentMessageID: task.parentMessageID,
      parentModel: task.parentModel,
      parentAgent: task.parentAgent,
      parentTools: task.parentTools,
      model: task.model,
      fallbackChain: task.fallbackChain,
      category: task.category,
    }
    queue.push({ task, input: retryInput })
    this.queuesByKey.set(key, queue)
    this.processKey(key)
    return true
    return result
  }

  markForNotification(task: BackgroundTask): void {
@@ -1256,45 +1165,11 @@ export class BackgroundManager {
  }

  private registerProcessCleanup(): void {
    BackgroundManager.cleanupManagers.add(this)

    if (BackgroundManager.cleanupRegistered) return
    BackgroundManager.cleanupRegistered = true

    const cleanupAll = () => {
      for (const manager of BackgroundManager.cleanupManagers) {
        try {
          manager.shutdown()
        } catch (error) {
          log("[background-agent] Error during shutdown cleanup:", error)
        }
      }
    }

    const registerSignal = (signal: ProcessCleanupEvent, exitAfter: boolean): void => {
      const listener = registerProcessSignal(signal, cleanupAll, exitAfter)
      BackgroundManager.cleanupHandlers.set(signal, listener)
    }

    registerSignal("SIGINT", true)
    registerSignal("SIGTERM", true)
    if (process.platform === "win32") {
      registerSignal("SIGBREAK", true)
    }
    registerSignal("beforeExit", false)
    registerSignal("exit", false)
    registerManagerForCleanup(this)
  }

  private unregisterProcessCleanup(): void {
    BackgroundManager.cleanupManagers.delete(this)

    if (BackgroundManager.cleanupManagers.size > 0) return

    for (const [signal, listener] of BackgroundManager.cleanupHandlers.entries()) {
      process.off(signal, listener)
    }
    BackgroundManager.cleanupHandlers.clear()
    BackgroundManager.cleanupRegistered = false
    unregisterManagerForCleanup(this)
  }


@@ -1368,7 +1243,7 @@ export class BackgroundManager {
    // Note: Callers must release concurrency before calling this method
    // to ensure slots are freed even if notification fails

    const duration = this.formatDuration(task.startedAt ?? new Date(), task.completedAt)
    const duration = formatDuration(task.startedAt ?? new Date(), task.completedAt)

    log("[background-agent] notifyParentSession called for task:", task.id)

@@ -1455,7 +1330,7 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
        if (isCompactionAgent(info?.agent)) {
          continue
        }
        const normalizedTools = this.isRecord(info?.tools)
        const normalizedTools = isRecord(info?.tools)
          ? normalizePromptTools(info.tools as Record<string, boolean | "allow" | "deny" | "ask">)
          : undefined
        if (info?.agent || info?.model || (info?.modelID && info?.providerID) || normalizedTools) {
@@ -1466,13 +1341,13 @@
          }
        }
      } catch (error) {
        if (this.isAbortedSessionError(error)) {
        if (isAbortedSessionError(error)) {
          log("[background-agent] Parent session aborted while loading messages; using messageDir fallback:", {
            taskId: task.id,
            parentSessionID: task.parentSessionID,
          })
        }
        const messageDir = getMessageDir(task.parentSessionID)
        const messageDir = join(MESSAGE_STORAGE, task.parentSessionID)
        const currentMessage = messageDir ? findNearestMessageExcludingCompaction(messageDir) : null
        agent = currentMessage?.agent ?? task.parentAgent
        model = currentMessage?.model?.providerID && currentMessage?.model?.modelID
@@ -1506,7 +1381,7 @@
        noReply: !allComplete,
      })
    } catch (error) {
      if (this.isAbortedSessionError(error)) {
      if (isAbortedSessionError(error)) {
        log("[background-agent] Parent session aborted while sending notification; continuing cleanup:", {
          taskId: task.id,
          parentSessionID: task.parentSessionID,
@@ -1544,97 +1419,11 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
  }

  private formatDuration(start: Date, end?: Date): string {
    const duration = (end ?? new Date()).getTime() - start.getTime()
    const seconds = Math.floor(duration / 1000)
    const minutes = Math.floor(seconds / 60)
    const hours = Math.floor(minutes / 60)

    if (hours > 0) {
      return `${hours}h ${minutes % 60}m ${seconds % 60}s`
    } else if (minutes > 0) {
      return `${minutes}m ${seconds % 60}s`
    }
    return `${seconds}s`
    return formatDuration(start, end)
  }

  private isAbortedSessionError(error: unknown): boolean {
    const message = this.getErrorText(error)
    return message.toLowerCase().includes("aborted")
  }

  private getErrorText(error: unknown): string {
    if (!error) return ""
    if (typeof error === "string") return error
    if (error instanceof Error) {
      return `${error.name}: ${error.message}`
    }
    if (typeof error === "object" && error !== null) {
      if ("message" in error && typeof error.message === "string") {
        return error.message
      }
      if ("name" in error && typeof error.name === "string") {
        return error.name
      }
    }
    return ""
  }

  private extractErrorName(error: unknown): string | undefined {
    if (this.isRecord(error) && typeof error["name"] === "string") return error["name"]
    if (error instanceof Error) return error.name
    return undefined
  }

  private extractErrorMessage(error: unknown): string | undefined {
    if (!error) return undefined
    if (typeof error === "string") return error
    if (error instanceof Error) return error.message

    if (this.isRecord(error)) {
      const dataRaw = error["data"]
      const candidates: unknown[] = [
        error,
        dataRaw,
        error["error"],
        this.isRecord(dataRaw) ? (dataRaw as Record<string, unknown>)["error"] : undefined,
        error["cause"],
      ]

      for (const candidate of candidates) {
        if (typeof candidate === "string" && candidate.length > 0) return candidate
        if (
          this.isRecord(candidate) &&
          typeof candidate["message"] === "string" &&
          candidate["message"].length > 0
        ) {
          return candidate["message"]
        }
      }
    }

    try {
      return JSON.stringify(error)
    } catch {
      return String(error)
    }
  }

  private isRecord(value: unknown): value is Record<string, unknown> {
    return typeof value === "object" && value !== null
  }

  private getSessionErrorMessage(properties: EventProperties): string | undefined {
    const errorRaw = properties["error"]
    if (!this.isRecord(errorRaw)) return undefined

    const dataRaw = errorRaw["data"]
    if (this.isRecord(dataRaw)) {
      const message = dataRaw["message"]
      if (typeof message === "string") return message
    }

    const message = errorRaw["message"]
    return typeof message === "string" ? message : undefined
    return isAbortedSessionError(error)
  }

  private hasRunningTasks(): boolean {
@@ -1645,25 +1434,12 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
  }

  private pruneStaleTasksAndNotifications(): void {
    const now = Date.now()

    for (const [taskId, task] of this.tasks.entries()) {
      const wasPending = task.status === "pending"
      const timestamp = task.status === "pending"
        ? task.queuedAt?.getTime()
        : task.startedAt?.getTime()

      if (!timestamp) {
        continue
      }

      const age = now - timestamp
      if (age > TASK_TTL_MS) {
        const errorMessage = task.status === "pending"
          ? "Task timed out while queued (30 minutes)"
          : "Task timed out after 30 minutes"

        log("[background-agent] Pruning stale task:", { taskId, status: task.status, age: Math.round(age / 1000) + "s" })
    pruneStaleTasksAndNotifications({
      tasks: this.tasks,
      notifications: this.notifications,
      onTaskPruned: (taskId, task, errorMessage) => {
        const wasPending = task.status === "pending"
        log("[background-agent] Pruning stale task:", {
          taskId,
          status: task.status,
          age: Math.round(((wasPending ? task.queuedAt?.getTime() : task.startedAt?.getTime()) ? (Date.now() - (wasPending ? task.queuedAt!.getTime() : task.startedAt!.getTime())) : 0) / 1000) + "s",
        })
        task.status = "error"
        task.error = errorMessage
        task.completedAt = new Date()
@@ -1671,7 +1447,6 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
          this.concurrencyManager.release(task.concurrencyKey)
          task.concurrencyKey = undefined
        }
        // Clean up pendingByParent to prevent stale entries
        this.cleanupPendingByParent(task)
        if (wasPending) {
          const key = task.model
@@ -1698,97 +1473,21 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
          subagentSessions.delete(task.sessionID)
          SessionCategoryRegistry.remove(task.sessionID)
        }
      }
    }

    for (const [sessionID, notifications] of this.notifications.entries()) {
      if (notifications.length === 0) {
        this.notifications.delete(sessionID)
        continue
      }
      const validNotifications = notifications.filter((task) => {
        if (!task.startedAt) return false
        const age = now - task.startedAt.getTime()
        return age <= TASK_TTL_MS
      })
      if (validNotifications.length === 0) {
        this.notifications.delete(sessionID)
      } else if (validNotifications.length !== notifications.length) {
        this.notifications.set(sessionID, validNotifications)
      }
    }
      },
    })
  }

  private async checkAndInterruptStaleTasks(
    allStatuses: Record<string, { type: string }> = {},
  ): Promise<void> {
    const staleTimeoutMs = this.config?.staleTimeoutMs ?? DEFAULT_STALE_TIMEOUT_MS
    const messageStalenessMs = this.config?.messageStalenessTimeoutMs ?? DEFAULT_MESSAGE_STALENESS_TIMEOUT_MS
    const now = Date.now()

    for (const task of this.tasks.values()) {
      if (task.status !== "running") continue

      const startedAt = task.startedAt
      const sessionID = task.sessionID
      if (!startedAt || !sessionID) continue

      const sessionStatus = allStatuses[sessionID]?.type
      const sessionIsRunning = sessionStatus !== undefined && sessionStatus !== "idle"
      const runtime = now - startedAt.getTime()

      if (!task.progress?.lastUpdate) {
        if (sessionIsRunning) continue
        if (runtime <= messageStalenessMs) continue

        const staleMinutes = Math.round(runtime / 60000)
        task.status = "cancelled"
        task.error = `Stale timeout (no activity for ${staleMinutes}min since start)`
        task.completedAt = new Date()

        if (task.concurrencyKey) {
          this.concurrencyManager.release(task.concurrencyKey)
          task.concurrencyKey = undefined
        }

        this.client.session.abort({ path: { id: sessionID } }).catch(() => {})
        log(`[background-agent] Task ${task.id} interrupted: no progress since start`)

        try {
          await this.enqueueNotificationForParent(task.parentSessionID, () => this.notifyParentSession(task))
        } catch (err) {
          log("[background-agent] Error in notifyParentSession for stale task:", { taskId: task.id, error: err })
        }
        continue
      }

      if (sessionIsRunning) continue

      if (runtime < MIN_RUNTIME_BEFORE_STALE_MS) continue

      const timeSinceLastUpdate = now - task.progress.lastUpdate.getTime()
      if (timeSinceLastUpdate <= staleTimeoutMs) continue
      if (task.status !== "running") continue

      const staleMinutes = Math.round(timeSinceLastUpdate / 60000)
      task.status = "cancelled"
      task.error = `Stale timeout (no activity for ${staleMinutes}min)`
      task.completedAt = new Date()

      if (task.concurrencyKey) {
        this.concurrencyManager.release(task.concurrencyKey)
        task.concurrencyKey = undefined
      }

      this.client.session.abort({ path: { id: sessionID } }).catch(() => {})
      log(`[background-agent] Task ${task.id} interrupted: stale timeout`)

      try {
        await this.enqueueNotificationForParent(task.parentSessionID, () => this.notifyParentSession(task))
      } catch (err) {
        log("[background-agent] Error in notifyParentSession for stale task:", { taskId: task.id, error: err })
      }
    }
    await checkAndInterruptStaleTasks({
      tasks: this.tasks.values(),
      client: this.client,
      config: this.config,
      concurrencyManager: this.concurrencyManager,
      notifyParentSession: (task) => this.enqueueNotificationForParent(task.parentSessionID, () => this.notifyParentSession(task)),
      sessionStatuses: allStatuses,
    })
  }

  private async pollRunningTasks(): Promise<void> {
@@ -1948,89 +1647,3 @@ Use \`background_output(task_id="${task.id}")\` to retrieve this result when rea
    return current
  }
}

function registerProcessSignal(
  signal: ProcessCleanupEvent,
  handler: () => void,
  exitAfter: boolean
): () => void {
  const listener = () => {
    handler()
    if (exitAfter) {
      // Set exitCode and schedule exit after delay to allow other handlers to complete async cleanup
      // Use 6s delay to accommodate LSP cleanup (5s timeout + 1s SIGKILL wait)
      process.exitCode = 0
      setTimeout(() => process.exit(), 6000)
    }
  }
  process.on(signal, listener)
  return listener
}


function getMessageDir(sessionID: string): string | null {
  if (!existsSync(MESSAGE_STORAGE)) return null

  const directPath = join(MESSAGE_STORAGE, sessionID)
  if (existsSync(directPath)) return directPath

  for (const dir of readdirSync(MESSAGE_STORAGE)) {
    const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
    if (existsSync(sessionPath)) return sessionPath
  }
  return null
}

function isCompactionAgent(agent: string | undefined): boolean {
  return agent?.trim().toLowerCase() === "compaction"
}

function hasFullAgentAndModel(message: StoredMessage): boolean {
  return !!message.agent &&
    !isCompactionAgent(message.agent) &&
    !!message.model?.providerID &&
    !!message.model?.modelID
}

function hasPartialAgentOrModel(message: StoredMessage): boolean {
  const hasAgent = !!message.agent && !isCompactionAgent(message.agent)
  const hasModel = !!message.model?.providerID && !!message.model?.modelID
  return hasAgent || hasModel
}

function findNearestMessageExcludingCompaction(messageDir: string): StoredMessage | null {
  try {
    const files = readdirSync(messageDir)
      .filter((name) => name.endsWith(".json"))
      .sort()
      .reverse()

    for (const file of files) {
      try {
        const content = readFileSync(join(messageDir, file), "utf-8")
        const parsed = JSON.parse(content) as StoredMessage
        if (hasFullAgentAndModel(parsed)) {
          return parsed
        }
      } catch {
        continue
      }
    }

    for (const file of files) {
      try {
        const content = readFileSync(join(messageDir, file), "utf-8")
        const parsed = JSON.parse(content) as StoredMessage
        if (hasPartialAgentOrModel(parsed)) {
          return parsed
        }
      } catch {
        continue
      }
    }
  } catch {
    return null
  }

  return null
}
@@ -1,41 +0,0 @@
import type { BackgroundTask } from "./types"

export function buildBackgroundTaskNotificationText(args: {
  task: BackgroundTask
  duration: string
  allComplete: boolean
  remainingCount: number
  completedTasks: BackgroundTask[]
}): string {
  const { task, duration, allComplete, remainingCount, completedTasks } = args
  const statusText =
    task.status === "completed" ? "COMPLETED" : task.status === "interrupt" ? "INTERRUPTED" : task.status === "error" ? "ERROR" : "CANCELLED"
  const errorInfo = task.error ? `\n**Error:** ${task.error}` : ""

  if (allComplete) {
    const completedTasksText = completedTasks
      .map((t) => `- \`${t.id}\`: ${t.description}`)
      .join("\n")

    return `<system-reminder>
[ALL BACKGROUND TASKS COMPLETE]

**Completed:**
${completedTasksText || `- \`${task.id}\`: ${task.description}`}

Use \`background_output(task_id="<id>")\` to retrieve each result.
</system-reminder>`
  }

  return `<system-reminder>
[BACKGROUND TASK ${statusText}]
**ID:** \`${task.id}\`
**Description:** ${task.description}
**Duration:** ${duration}${errorInfo}

**${remainingCount} task${remainingCount === 1 ? "" : "s"} still in progress.** You WILL be notified when ALL complete.
Do NOT poll - continue productive work.

Use \`background_output(task_id="${task.id}")\` to retrieve this result when ready.
</system-reminder>`
}
162
src/features/background-agent/process-cleanup.test.ts
Normal file
@@ -0,0 +1,162 @@
import { describe, test, expect, beforeEach, afterEach, mock } from "bun:test"
import {
  registerManagerForCleanup,
  unregisterManagerForCleanup,
  _resetForTesting,
} from "./process-cleanup"

describe("process-cleanup", () => {
  const registeredManagers: Array<{ shutdown: () => void }> = []
  const mockShutdown = mock(() => {})

  const processOnCalls: Array<[string, Function]> = []
  const processOffCalls: Array<[string, Function]> = []
  const originalProcessOn = process.on.bind(process)
  const originalProcessOff = process.off.bind(process)

  beforeEach(() => {
    mockShutdown.mockClear()
    processOnCalls.length = 0
    processOffCalls.length = 0
    registeredManagers.length = 0

    process.on = originalProcessOn as any
    process.off = originalProcessOff as any
    _resetForTesting()

    process.on = ((event: string, listener: Function) => {
      processOnCalls.push([event, listener])
      return process
    }) as any

    process.off = ((event: string, listener: Function) => {
      processOffCalls.push([event, listener])
      return process
    }) as any
  })

  afterEach(() => {
    process.on = originalProcessOn as any
    process.off = originalProcessOff as any

    for (const manager of [...registeredManagers]) {
      unregisterManagerForCleanup(manager)
    }
  })

  describe("registerManagerForCleanup", () => {
    test("registers signal handlers on first manager", () => {
      const manager = { shutdown: mockShutdown }
      registeredManagers.push(manager)

      registerManagerForCleanup(manager)

      const signals = processOnCalls.map(([signal]) => signal)
      expect(signals).toContain("SIGINT")
      expect(signals).toContain("SIGTERM")
      expect(signals).toContain("beforeExit")
      expect(signals).toContain("exit")
    })

    test("signal listener calls shutdown on registered manager", () => {
      const manager = { shutdown: mockShutdown }
      registeredManagers.push(manager)

      registerManagerForCleanup(manager)

      const exitEntry = processOnCalls.find(([signal]) => signal === "exit")
      expect(exitEntry).toBeDefined()
      const [, listener] = exitEntry!
      listener()

      expect(mockShutdown).toHaveBeenCalled()
    })

    test("multiple managers all get shutdown when signal fires", () => {
      const shutdown1 = mock(() => {})
      const shutdown2 = mock(() => {})
      const shutdown3 = mock(() => {})
      const manager1 = { shutdown: shutdown1 }
      const manager2 = { shutdown: shutdown2 }
      const manager3 = { shutdown: shutdown3 }
      registeredManagers.push(manager1, manager2, manager3)

      registerManagerForCleanup(manager1)
      registerManagerForCleanup(manager2)
      registerManagerForCleanup(manager3)

      const exitEntry = processOnCalls.find(([signal]) => signal === "exit")
      expect(exitEntry).toBeDefined()
      const [, listener] = exitEntry!
      listener()

      expect(shutdown1).toHaveBeenCalledTimes(1)
      expect(shutdown2).toHaveBeenCalledTimes(1)
      expect(shutdown3).toHaveBeenCalledTimes(1)
    })

    test("does not re-register signal handlers for subsequent managers", () => {
      const manager1 = { shutdown: mockShutdown }
      const manager2 = { shutdown: mockShutdown }
      registeredManagers.push(manager1, manager2)

      registerManagerForCleanup(manager1)
      const callsAfterFirst = processOnCalls.length

      registerManagerForCleanup(manager2)

      expect(processOnCalls.length).toBe(callsAfterFirst)
    })
  })

  describe("unregisterManagerForCleanup", () => {
    test("removes signal handlers when last manager unregisters", () => {
      const manager = { shutdown: mockShutdown }
      registeredManagers.push(manager)

      registerManagerForCleanup(manager)
      unregisterManagerForCleanup(manager)
      registeredManagers.length = 0

      const offSignals = processOffCalls.map(([signal]) => signal)
      expect(offSignals).toContain("SIGINT")
      expect(offSignals).toContain("SIGTERM")
      expect(offSignals).toContain("beforeExit")
      expect(offSignals).toContain("exit")
    })

    test("keeps signal handlers when other managers remain", () => {
      const manager1 = { shutdown: mockShutdown }
      const manager2 = { shutdown: mockShutdown }
      registeredManagers.push(manager1, manager2)

      registerManagerForCleanup(manager1)
      registerManagerForCleanup(manager2)

      unregisterManagerForCleanup(manager2)

      expect(processOffCalls.length).toBe(0)
    })

    test("remaining managers still get shutdown after partial unregister", () => {
      const shutdown1 = mock(() => {})
      const shutdown2 = mock(() => {})
      const manager1 = { shutdown: shutdown1 }
      const manager2 = { shutdown: shutdown2 }
|
||||
registeredManagers.push(manager1, manager2)
|
||||
|
||||
registerManagerForCleanup(manager1)
|
||||
registerManagerForCleanup(manager2)
|
||||
|
||||
const exitEntry = processOnCalls.find(([signal]) => signal === "exit")
|
||||
expect(exitEntry).toBeDefined()
|
||||
const [, listener] = exitEntry!
|
||||
unregisterManagerForCleanup(manager2)
|
||||
|
||||
listener()
|
||||
|
||||
expect(shutdown1).toHaveBeenCalledTimes(1)
|
||||
expect(shutdown2).not.toHaveBeenCalled()
|
||||
})
|
||||
})
|
||||
})
|
||||
81
src/features/background-agent/process-cleanup.ts
Normal file
@@ -0,0 +1,81 @@
import { log } from "../../shared"

type ProcessCleanupEvent = NodeJS.Signals | "beforeExit" | "exit"

function registerProcessSignal(
  signal: ProcessCleanupEvent,
  handler: () => void,
  exitAfter: boolean
): () => void {
  const listener = () => {
    handler()
    if (exitAfter) {
      process.exitCode = 0
      setTimeout(() => process.exit(), 6000).unref()
    }
  }
  process.on(signal, listener)
  return listener
}

interface CleanupTarget {
  shutdown(): void
}

const cleanupManagers = new Set<CleanupTarget>()
let cleanupRegistered = false
const cleanupHandlers = new Map<ProcessCleanupEvent, () => void>()

export function registerManagerForCleanup(manager: CleanupTarget): void {
  cleanupManagers.add(manager)

  if (cleanupRegistered) return
  cleanupRegistered = true

  const cleanupAll = () => {
    for (const m of cleanupManagers) {
      try {
        m.shutdown()
      } catch (error) {
        log("[background-agent] Error during shutdown cleanup:", error)
      }
    }
  }

  const registerSignal = (signal: ProcessCleanupEvent, exitAfter: boolean): void => {
    const listener = registerProcessSignal(signal, cleanupAll, exitAfter)
    cleanupHandlers.set(signal, listener)
  }

  registerSignal("SIGINT", true)
  registerSignal("SIGTERM", true)
  if (process.platform === "win32") {
    registerSignal("SIGBREAK", true)
  }
  registerSignal("beforeExit", false)
  registerSignal("exit", false)
}

export function unregisterManagerForCleanup(manager: CleanupTarget): void {
  cleanupManagers.delete(manager)

  if (cleanupManagers.size > 0) return

  for (const [signal, listener] of cleanupHandlers.entries()) {
    process.off(signal, listener)
  }
  cleanupHandlers.clear()
  cleanupRegistered = false
}

/** @internal — test-only reset for module-level singleton state */
export function _resetForTesting(): void {
  for (const manager of [...cleanupManagers]) {
    cleanupManagers.delete(manager)
  }
  for (const [signal, listener] of cleanupHandlers.entries()) {
    process.off(signal, listener)
  }
  cleanupHandlers.clear()
  cleanupRegistered = false
}
@@ -41,6 +41,75 @@ describe("hashline edit operations", () => {
    expect(result).toEqual(["line 1", "line 2", "inserted", "line 3"])
  })

  it("applies insert_before with LINE#ID anchor", () => {
    //#given
    const lines = ["line 1", "line 2", "line 3"]

    //#when
    const result = applyHashlineEdits(
      lines.join("\n"),
      [{ type: "insert_before", line: anchorFor(lines, 2), text: "before 2" }]
    )

    //#then
    expect(result).toEqual("line 1\nbefore 2\nline 2\nline 3")
  })

  it("applies insert_between with dual anchors", () => {
    //#given
    const lines = ["line 1", "line 2", "line 3"]

    //#when
    const result = applyHashlineEdits(
      lines.join("\n"),
      [{
        type: "insert_between",
        after_line: anchorFor(lines, 1),
        before_line: anchorFor(lines, 2),
        text: ["between"],
      }]
    )

    //#then
    expect(result).toEqual("line 1\nbetween\nline 2\nline 3")
  })

  it("throws when insert_after receives empty text array", () => {
    //#given
    const lines = ["line 1", "line 2"]

    //#when / #then
    expect(() => applyInsertAfter(lines, anchorFor(lines, 1), [])).toThrow(/non-empty/i)
  })

  it("throws when insert_before receives empty text array", () => {
    //#given
    const lines = ["line 1", "line 2"]

    //#when / #then
    expect(() =>
      applyHashlineEdits(lines.join("\n"), [{ type: "insert_before", line: anchorFor(lines, 1), text: [] }])
    ).toThrow(/non-empty/i)
  })

  it("throws when insert_between receives empty text array", () => {
    //#given
    const lines = ["line 1", "line 2"]

    //#when / #then
    expect(() =>
      applyHashlineEdits(
        lines.join("\n"),
        [{
          type: "insert_between",
          after_line: anchorFor(lines, 1),
          before_line: anchorFor(lines, 2),
          text: [],
        }]
      )
    ).toThrow(/non-empty/i)
  })

  it("applies replace operation", () => {
    //#given
    const content = "hello world foo"
@@ -68,6 +137,22 @@ describe("hashline edit operations", () => {
    expect(result).toEqual("line 1\ninserted\nline 2\nmodified")
  })

  it("deduplicates identical insert edits in one pass", () => {
    //#given
    const content = "line 1\nline 2"
    const lines = content.split("\n")
    const edits: HashlineEdit[] = [
      { type: "insert_after", line: anchorFor(lines, 1), text: "inserted" },
      { type: "insert_after", line: anchorFor(lines, 1), text: "inserted" },
    ]

    //#when
    const result = applyHashlineEdits(content, edits)

    //#then
    expect(result).toEqual("line 1\ninserted\nline 2")
  })

  it("keeps literal backslash-n in plain string text", () => {
    //#given
    const lines = ["line 1", "line 2", "line 3"]
@@ -101,6 +186,14 @@ describe("hashline edit operations", () => {
    expect(result).toEqual(["line 1", "inserted", "line 2"])
  })

  it("throws when insert_after payload only repeats anchor line", () => {
    //#given
    const lines = ["line 1", "line 2"]

    //#when / #then
    expect(() => applyInsertAfter(lines, anchorFor(lines, 1), ["line 1"])).toThrow(/non-empty/i)
  })

  it("restores indentation for paired single-line replacement", () => {
    //#given
    const lines = ["if (x) {", " return 1", "}"]
@@ -128,6 +221,23 @@ describe("hashline edit operations", () => {
    expect(result).toEqual(["before", "new 1", "new 2", "after"])
  })

  it("throws when insert_between payload contains only boundary echoes", () => {
    //#given
    const lines = ["line 1", "line 2", "line 3"]

    //#when / #then
    expect(() =>
      applyHashlineEdits(lines.join("\n"), [
        {
          type: "insert_between",
          after_line: anchorFor(lines, 1),
          before_line: anchorFor(lines, 2),
          text: ["line 1", "line 2"],
        },
      ])
    ).toThrow(/non-empty/i)
  })

  it("restores indentation for first replace_lines entry", () => {
    //#given
    const lines = ["if (x) {", " return 1", " return 2", "}"]
@@ -1,95 +1,18 @@
import { parseLineRef, validateLineRef, validateLineRefs } from "./validation"
import type { HashlineEdit } from "./types"
import {
  restoreLeadingIndent,
  stripInsertAnchorEcho,
  stripInsertBeforeEcho,
  stripInsertBoundaryEcho,
  stripRangeBoundaryEcho,
  toNewLines,
} from "./edit-text-normalization"

const HASHLINE_PREFIX_RE = /^\s*(?:>>>|>>)?\s*\d+#[A-Z]{2}:/
const DIFF_PLUS_RE = /^[+-](?![+-])/

function stripLinePrefixes(lines: string[]): string[] {
  let hashPrefixCount = 0
  let diffPlusCount = 0
  let nonEmpty = 0

  for (const line of lines) {
    if (line.length === 0) continue
    nonEmpty += 1
    if (HASHLINE_PREFIX_RE.test(line)) hashPrefixCount += 1
    if (DIFF_PLUS_RE.test(line)) diffPlusCount += 1
  }

  if (nonEmpty === 0) {
    return lines
  }

  const stripHash = hashPrefixCount > 0 && hashPrefixCount >= nonEmpty * 0.5
  const stripPlus = !stripHash && diffPlusCount > 0 && diffPlusCount >= nonEmpty * 0.5

  if (!stripHash && !stripPlus) {
    return lines
  }

  return lines.map((line) => {
    if (stripHash) return line.replace(HASHLINE_PREFIX_RE, "")
    if (stripPlus) return line.replace(DIFF_PLUS_RE, "")
    return line
  })
}

function equalsIgnoringWhitespace(a: string, b: string): boolean {
  if (a === b) return true
  return a.replace(/\s+/g, "") === b.replace(/\s+/g, "")
}

function leadingWhitespace(text: string): string {
  const match = text.match(/^\s*/)
  return match ? match[0] : ""
}

function restoreLeadingIndent(templateLine: string, line: string): string {
  if (line.length === 0) return line
  const templateIndent = leadingWhitespace(templateLine)
  if (templateIndent.length === 0) return line
  if (leadingWhitespace(line).length > 0) return line
  return `${templateIndent}${line}`
}

function stripInsertAnchorEcho(anchorLine: string, newLines: string[]): string[] {
  if (newLines.length <= 1) return newLines
  if (equalsIgnoringWhitespace(newLines[0], anchorLine)) {
    return newLines.slice(1)
  }
  return newLines
}

function stripRangeBoundaryEcho(
  lines: string[],
  startLine: number,
  endLine: number,
  newLines: string[]
): string[] {
  const replacedCount = endLine - startLine + 1
  if (newLines.length <= 1 || newLines.length <= replacedCount) {
    return newLines
  }

  let out = newLines
  const beforeIdx = startLine - 2
  if (beforeIdx >= 0 && equalsIgnoringWhitespace(out[0], lines[beforeIdx])) {
    out = out.slice(1)
  }

  const afterIdx = endLine
  if (afterIdx < lines.length && out.length > 0 && equalsIgnoringWhitespace(out[out.length - 1], lines[afterIdx])) {
    out = out.slice(0, -1)
  }

  return out
}

function toNewLines(input: string | string[]): string[] {
  if (Array.isArray(input)) {
    return stripLinePrefixes(input)
  }
  return stripLinePrefixes(input.split("\n"))
export interface HashlineApplyReport {
  content: string
  noopEdits: number
  deduplicatedEdits: number
}

export function applySetLine(lines: string[], anchor: string, newText: string | string[]): string[] {
@@ -137,10 +60,48 @@ export function applyInsertAfter(lines: string[], anchor: string, text: string |
  const { line } = parseLineRef(anchor)
  const result = [...lines]
  const newLines = stripInsertAnchorEcho(lines[line - 1], toNewLines(text))
  if (newLines.length === 0) {
    throw new Error(`insert_after requires non-empty text for ${anchor}`)
  }
  result.splice(line, 0, ...newLines)
  return result
}

export function applyInsertBefore(lines: string[], anchor: string, text: string | string[]): string[] {
  validateLineRef(lines, anchor)
  const { line } = parseLineRef(anchor)
  const result = [...lines]
  const newLines = stripInsertBeforeEcho(lines[line - 1], toNewLines(text))
  if (newLines.length === 0) {
    throw new Error(`insert_before requires non-empty text for ${anchor}`)
  }
  result.splice(line - 1, 0, ...newLines)
  return result
}

export function applyInsertBetween(
  lines: string[],
  afterAnchor: string,
  beforeAnchor: string,
  text: string | string[]
): string[] {
  validateLineRef(lines, afterAnchor)
  validateLineRef(lines, beforeAnchor)
  const { line: afterLine } = parseLineRef(afterAnchor)
  const { line: beforeLine } = parseLineRef(beforeAnchor)
  if (beforeLine <= afterLine) {
    throw new Error(`insert_between requires after_line (${afterLine}) < before_line (${beforeLine})`)
  }

  const result = [...lines]
  const newLines = stripInsertBoundaryEcho(lines[afterLine - 1], lines[beforeLine - 1], toNewLines(text))
  if (newLines.length === 0) {
    throw new Error(`insert_between requires non-empty text for ${afterAnchor}..${beforeAnchor}`)
  }
  result.splice(beforeLine - 1, 0, ...newLines)
  return result
}

export function applyReplace(content: string, oldText: string, newText: string | string[]): string {
  if (!content.includes(oldText)) {
    throw new Error(`Text not found: "${oldText}"`)
@@ -157,6 +118,10 @@ function getEditLineNumber(edit: HashlineEdit): number {
      return parseLineRef(edit.end_line).line
    case "insert_after":
      return parseLineRef(edit.line).line
    case "insert_before":
      return parseLineRef(edit.line).line
    case "insert_between":
      return parseLineRef(edit.before_line).line
    case "replace":
      return Number.NEGATIVE_INFINITY
    default:
@@ -164,12 +129,57 @@ function getEditLineNumber(edit: HashlineEdit): number {
  }
}

export function applyHashlineEdits(content: string, edits: HashlineEdit[]): string {
  if (edits.length === 0) {
    return content
function normalizeEditPayload(payload: string | string[]): string {
  return toNewLines(payload).join("\n")
}

function dedupeEdits(edits: HashlineEdit[]): { edits: HashlineEdit[]; deduplicatedEdits: number } {
  const seen = new Set<string>()
  const deduped: HashlineEdit[] = []
  let deduplicatedEdits = 0

  for (const edit of edits) {
    const key = (() => {
      switch (edit.type) {
        case "set_line":
          return `set_line|${edit.line}|${normalizeEditPayload(edit.text)}`
        case "replace_lines":
          return `replace_lines|${edit.start_line}|${edit.end_line}|${normalizeEditPayload(edit.text)}`
        case "insert_after":
          return `insert_after|${edit.line}|${normalizeEditPayload(edit.text)}`
        case "insert_before":
          return `insert_before|${edit.line}|${normalizeEditPayload(edit.text)}`
        case "insert_between":
          return `insert_between|${edit.after_line}|${edit.before_line}|${normalizeEditPayload(edit.text)}`
        case "replace":
          return `replace|${edit.old_text}|${normalizeEditPayload(edit.new_text)}`
      }
    })()

    if (seen.has(key)) {
      deduplicatedEdits += 1
      continue
    }
    seen.add(key)
    deduped.push(edit)
  }

  const sortedEdits = [...edits].sort((a, b) => getEditLineNumber(b) - getEditLineNumber(a))
  return { edits: deduped, deduplicatedEdits }
}

export function applyHashlineEditsWithReport(content: string, edits: HashlineEdit[]): HashlineApplyReport {
  if (edits.length === 0) {
    return {
      content,
      noopEdits: 0,
      deduplicatedEdits: 0,
    }
  }

  const dedupeResult = dedupeEdits(edits)
  const sortedEdits = [...dedupeResult.edits].sort((a, b) => getEditLineNumber(b) - getEditLineNumber(a))

  let noopEdits = 0

  let result = content
  let lines = result.split("\n")
@@ -182,6 +192,10 @@ export function applyHashlineEdits(content: string, edits: HashlineEdit[]): stri
        return [edit.start_line, edit.end_line]
      case "insert_after":
        return [edit.line]
      case "insert_before":
        return [edit.line]
      case "insert_between":
        return [edit.after_line, edit.before_line]
      case "replace":
        return []
      default:
@@ -201,7 +215,30 @@ export function applyHashlineEdits(content: string, edits: HashlineEdit[]): stri
        break
      }
      case "insert_after": {
        lines = applyInsertAfter(lines, edit.line, edit.text)
        const next = applyInsertAfter(lines, edit.line, edit.text)
        if (next.join("\n") === lines.join("\n")) {
          noopEdits += 1
          break
        }
        lines = next
        break
      }
      case "insert_before": {
        const next = applyInsertBefore(lines, edit.line, edit.text)
        if (next.join("\n") === lines.join("\n")) {
          noopEdits += 1
          break
        }
        lines = next
        break
      }
      case "insert_between": {
        const next = applyInsertBetween(lines, edit.after_line, edit.before_line, edit.text)
        if (next.join("\n") === lines.join("\n")) {
          noopEdits += 1
          break
        }
        lines = next
        break
      }
      case "replace": {
@@ -210,12 +247,25 @@ export function applyHashlineEdits(content: string, edits: HashlineEdit[]): stri
          throw new Error(`Text not found: "${edit.old_text}"`)
        }
        const replacement = Array.isArray(edit.new_text) ? edit.new_text.join("\n") : edit.new_text
        result = result.replaceAll(edit.old_text, replacement)
        const replaced = result.replaceAll(edit.old_text, replacement)
        if (replaced === result) {
          noopEdits += 1
          break
        }
        result = replaced
        lines = result.split("\n")
        break
      }
    }
  }

  return lines.join("\n")
  return {
    content: lines.join("\n"),
    noopEdits,
    deduplicatedEdits: dedupeResult.deduplicatedEdits,
  }
}

export function applyHashlineEdits(content: string, edits: HashlineEdit[]): string {
  return applyHashlineEditsWithReport(content, edits).content
}
109
src/tools/hashline-edit/edit-text-normalization.ts
Normal file
@@ -0,0 +1,109 @@
const HASHLINE_PREFIX_RE = /^\s*(?:>>>|>>)?\s*\d+#[A-Z]{2}:/
const DIFF_PLUS_RE = /^[+-](?![+-])/

function equalsIgnoringWhitespace(a: string, b: string): boolean {
  if (a === b) return true
  return a.replace(/\s+/g, "") === b.replace(/\s+/g, "")
}

function leadingWhitespace(text: string): string {
  const match = text.match(/^\s*/)
  return match ? match[0] : ""
}

export function stripLinePrefixes(lines: string[]): string[] {
  let hashPrefixCount = 0
  let diffPlusCount = 0
  let nonEmpty = 0

  for (const line of lines) {
    if (line.length === 0) continue
    nonEmpty += 1
    if (HASHLINE_PREFIX_RE.test(line)) hashPrefixCount += 1
    if (DIFF_PLUS_RE.test(line)) diffPlusCount += 1
  }

  if (nonEmpty === 0) {
    return lines
  }

  const stripHash = hashPrefixCount > 0 && hashPrefixCount >= nonEmpty * 0.5
  const stripPlus = !stripHash && diffPlusCount > 0 && diffPlusCount >= nonEmpty * 0.5

  if (!stripHash && !stripPlus) {
    return lines
  }

  return lines.map((line) => {
    if (stripHash) return line.replace(HASHLINE_PREFIX_RE, "")
    if (stripPlus) return line.replace(DIFF_PLUS_RE, "")
    return line
  })
}

export function toNewLines(input: string | string[]): string[] {
  if (Array.isArray(input)) {
    return stripLinePrefixes(input)
  }
  return stripLinePrefixes(input.split("\n"))
}

export function restoreLeadingIndent(templateLine: string, line: string): string {
  if (line.length === 0) return line
  const templateIndent = leadingWhitespace(templateLine)
  if (templateIndent.length === 0) return line
  if (leadingWhitespace(line).length > 0) return line
  return `${templateIndent}${line}`
}

export function stripInsertAnchorEcho(anchorLine: string, newLines: string[]): string[] {
  if (newLines.length === 0) return newLines
  if (equalsIgnoringWhitespace(newLines[0], anchorLine)) {
    return newLines.slice(1)
  }
  return newLines
}

export function stripInsertBeforeEcho(anchorLine: string, newLines: string[]): string[] {
  if (newLines.length <= 1) return newLines
  if (equalsIgnoringWhitespace(newLines[newLines.length - 1], anchorLine)) {
    return newLines.slice(0, -1)
  }
  return newLines
}

export function stripInsertBoundaryEcho(afterLine: string, beforeLine: string, newLines: string[]): string[] {
  let out = newLines
  if (out.length > 0 && equalsIgnoringWhitespace(out[0], afterLine)) {
    out = out.slice(1)
  }
  if (out.length > 0 && equalsIgnoringWhitespace(out[out.length - 1], beforeLine)) {
    out = out.slice(0, -1)
  }
  return out
}

export function stripRangeBoundaryEcho(
  lines: string[],
  startLine: number,
  endLine: number,
  newLines: string[]
): string[] {
  const replacedCount = endLine - startLine + 1
  if (newLines.length <= 1 || newLines.length <= replacedCount) {
    return newLines
  }

  let out = newLines
  const beforeIdx = startLine - 2
  if (beforeIdx >= 0 && equalsIgnoringWhitespace(out[0], lines[beforeIdx])) {
    out = out.slice(1)
  }

  const afterIdx = endLine
  if (afterIdx < lines.length && out.length > 0 && equalsIgnoringWhitespace(out[out.length - 1], lines[afterIdx])) {
    out = out.slice(0, -1)
  }

  return out
}
@@ -1,5 +1,11 @@
import { describe, it, expect } from "bun:test"
import { computeLineHash, formatHashLine, formatHashLines } from "./hash-computation"
import {
  computeLineHash,
  formatHashLine,
  formatHashLines,
  streamHashLinesFromLines,
  streamHashLinesFromUtf8,
} from "./hash-computation"

describe("computeLineHash", () => {
  it("returns deterministic 2-char CID hash per line", () => {
@@ -71,3 +77,65 @@ describe("formatHashLines", () => {
    expect(lines[2]).toMatch(/^3#[ZPMQVRWSNKTXJBYH]{2}:c$/)
  })
})

describe("streamHashLinesFrom*", () => {
  async function collectStream(stream: AsyncIterable<string>): Promise<string> {
    const chunks: string[] = []
    for await (const chunk of stream) {
      chunks.push(chunk)
    }
    return chunks.join("\n")
  }

  async function* utf8Chunks(text: string, chunkSize: number): AsyncGenerator<Uint8Array> {
    const encoded = new TextEncoder().encode(text)
    for (let i = 0; i < encoded.length; i += chunkSize) {
      yield encoded.slice(i, i + chunkSize)
    }
  }

  it("matches formatHashLines for utf8 stream input", async () => {
    //#given
    const content = "a\nb\nc"

    //#when
    const result = await collectStream(streamHashLinesFromUtf8(utf8Chunks(content, 1), { maxChunkLines: 1 }))

    //#then
    expect(result).toBe(formatHashLines(content))
  })

  it("matches formatHashLines for line iterable input", async () => {
    //#given
    const content = "x\ny\n"
    const lines = ["x", "y", ""]

    //#when
    const result = await collectStream(streamHashLinesFromLines(lines, { maxChunkLines: 2 }))

    //#then
    expect(result).toBe(formatHashLines(content))
  })

  it("matches formatHashLines for empty utf8 stream input", async () => {
    //#given
    const content = ""

    //#when
    const result = await collectStream(streamHashLinesFromUtf8(utf8Chunks(content, 1), { maxChunkLines: 1 }))

    //#then
    expect(result).toBe(formatHashLines(content))
  })

  it("matches formatHashLines for empty line iterable input", async () => {
    //#given
    const content = ""

    //#when
    const result = await collectStream(streamHashLinesFromLines([], { maxChunkLines: 1 }))

    //#then
    expect(result).toBe(formatHashLines(content))
  })
})
@@ -1,4 +1,5 @@
import { HASHLINE_DICT } from "./constants"
import { createHashlineChunkFormatter } from "./hashline-chunk-formatter"

export function computeLineHash(lineNumber: number, content: string): string {
  const stripped = content.replace(/\s+/g, "")
@@ -18,3 +19,124 @@ export function formatHashLines(content: string): string {
  const lines = content.split("\n")
  return lines.map((line, index) => formatHashLine(index + 1, line)).join("\n")
}

export interface HashlineStreamOptions {
  startLine?: number
  maxChunkLines?: number
  maxChunkBytes?: number
}

function isReadableStream(value: unknown): value is ReadableStream<Uint8Array> {
  return (
    typeof value === "object" &&
    value !== null &&
    "getReader" in value &&
    typeof (value as { getReader?: unknown }).getReader === "function"
  )
}

async function* bytesFromReadableStream(stream: ReadableStream<Uint8Array>): AsyncGenerator<Uint8Array> {
  const reader = stream.getReader()
  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) return
      if (value) yield value
    }
  } finally {
    reader.releaseLock()
  }
}

export async function* streamHashLinesFromUtf8(
  source: ReadableStream<Uint8Array> | AsyncIterable<Uint8Array>,
  options: HashlineStreamOptions = {}
): AsyncGenerator<string> {
  const startLine = options.startLine ?? 1
  const maxChunkLines = options.maxChunkLines ?? 200
  const maxChunkBytes = options.maxChunkBytes ?? 64 * 1024
  const decoder = new TextDecoder("utf-8")
  const chunks = isReadableStream(source) ? bytesFromReadableStream(source) : source

  let lineNumber = startLine
  let pending = ""
  let sawAnyText = false
  let endedWithNewline = false
  const chunkFormatter = createHashlineChunkFormatter({ maxChunkLines, maxChunkBytes })

  const pushLine = (line: string): string[] => {
    const formatted = formatHashLine(lineNumber, line)
    lineNumber += 1
    return chunkFormatter.push(formatted)
  }

  const consumeText = (text: string): string[] => {
    if (text.length === 0) return []
    sawAnyText = true
    pending += text
    const chunksToYield: string[] = []

    while (true) {
      const idx = pending.indexOf("\n")
      if (idx === -1) break
      const line = pending.slice(0, idx)
      pending = pending.slice(idx + 1)
      endedWithNewline = true
      chunksToYield.push(...pushLine(line))
    }

    if (pending.length > 0) endedWithNewline = false
    return chunksToYield
  }

  for await (const chunk of chunks) {
    for (const out of consumeText(decoder.decode(chunk, { stream: true }))) {
      yield out
    }
  }

  for (const out of consumeText(decoder.decode())) {
    yield out
  }

  if (sawAnyText && (pending.length > 0 || endedWithNewline)) {
    for (const out of pushLine(pending)) {
      yield out
    }
  }

  const finalChunk = chunkFormatter.flush()
  if (finalChunk) yield finalChunk
}

export async function* streamHashLinesFromLines(
  lines: Iterable<string> | AsyncIterable<string>,
  options: HashlineStreamOptions = {}
): AsyncGenerator<string> {
  const startLine = options.startLine ?? 1
  const maxChunkLines = options.maxChunkLines ?? 200
  const maxChunkBytes = options.maxChunkBytes ?? 64 * 1024

  let lineNumber = startLine
  const chunkFormatter = createHashlineChunkFormatter({ maxChunkLines, maxChunkBytes })

  const pushLine = (line: string): string[] => {
    const formatted = formatHashLine(lineNumber, line)
    lineNumber += 1
    return chunkFormatter.push(formatted)
  }

  const asyncIterator = (lines as AsyncIterable<string>)[Symbol.asyncIterator]
  if (typeof asyncIterator === "function") {
    for await (const line of lines as AsyncIterable<string>) {
      for (const out of pushLine(line)) yield out
    }
  } else {
    for (const line of lines as Iterable<string>) {
      for (const out of pushLine(line)) yield out
    }
  }

  const finalChunk = chunkFormatter.flush()
  if (finalChunk) yield finalChunk
}
52
src/tools/hashline-edit/hashline-chunk-formatter.ts
Normal file
@@ -0,0 +1,52 @@
|
||||
export interface HashlineChunkFormatter {
  push(formattedLine: string): string[]
  flush(): string | undefined
}

interface HashlineChunkFormatterOptions {
  maxChunkLines: number
  maxChunkBytes: number
}

export function createHashlineChunkFormatter(options: HashlineChunkFormatterOptions): HashlineChunkFormatter {
  const { maxChunkLines, maxChunkBytes } = options
  let outputLines: string[] = []
  let outputBytes = 0

  const flush = (): string | undefined => {
    if (outputLines.length === 0) return undefined
    const chunk = outputLines.join("\n")
    outputLines = []
    outputBytes = 0
    return chunk
  }

  const push = (formattedLine: string): string[] => {
    const chunksToYield: string[] = []
    const separatorBytes = outputLines.length === 0 ? 0 : 1
    const lineBytes = Buffer.byteLength(formattedLine, "utf-8")

    if (
      outputLines.length > 0 &&
      (outputLines.length >= maxChunkLines || outputBytes + separatorBytes + lineBytes > maxChunkBytes)
    ) {
      const flushed = flush()
      if (flushed) chunksToYield.push(flushed)
    }

    outputLines.push(formattedLine)
    outputBytes += (outputLines.length === 1 ? 0 : 1) + lineBytes

    if (outputLines.length >= maxChunkLines || outputBytes >= maxChunkBytes) {
      const flushed = flush()
      if (flushed) chunksToYield.push(flushed)
    }

    return chunksToYield
  }

  return {
    push,
    flush,
  }
}
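The chunking rule above can be exercised standalone. This sketch re-implements the two budgets inline so it runs without importing the module; the logic mirrors `createHashlineChunkFormatter`, and the `LINE#ID` anchors in the sample input are made up for illustration:

```typescript
// Standalone sketch mirroring createHashlineChunkFormatter: buffer pushed
// lines and emit a joined chunk once maxChunkLines or maxChunkBytes is hit.
function makeFormatter(maxChunkLines: number, maxChunkBytes: number) {
  let lines: string[] = []
  let bytes = 0
  const flush = (): string | undefined => {
    if (lines.length === 0) return undefined
    const chunk = lines.join("\n")
    lines = []
    bytes = 0
    return chunk
  }
  const push = (line: string): string[] => {
    const out: string[] = []
    const sep = lines.length === 0 ? 0 : 1
    const size = Buffer.byteLength(line, "utf-8")
    // Pre-flush if appending this line would overflow either budget.
    if (lines.length > 0 && (lines.length >= maxChunkLines || bytes + sep + size > maxChunkBytes)) {
      const f = flush()
      if (f) out.push(f)
    }
    lines.push(line)
    bytes += (lines.length === 1 ? 0 : 1) + size
    // Post-flush once a budget is reached exactly.
    if (lines.length >= maxChunkLines || bytes >= maxChunkBytes) {
      const f = flush()
      if (f) out.push(f)
    }
    return out
  }
  return { push, flush }
}

const fmt = makeFormatter(2, 1024)
const chunks = [...fmt.push("1#ZP:alpha"), ...fmt.push("2#MQ:beta"), ...fmt.push("3#VR:gamma")]
// chunks is ["1#ZP:alpha\n2#MQ:beta"]; the third line waits in the buffer.
const tail = fmt.flush() // "3#VR:gamma"
```

With `maxChunkLines: 2`, the second push fills the chunk and emits it immediately; the final partial chunk only surfaces via `flush()`, which is why both streaming generators call `chunkFormatter.flush()` after their loops.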
@@ -1,11 +1,19 @@
export { computeLineHash, formatHashLine, formatHashLines } from "./hash-computation"
export {
  computeLineHash,
  formatHashLine,
  formatHashLines,
  streamHashLinesFromLines,
  streamHashLinesFromUtf8,
} from "./hash-computation"
export { parseLineRef, validateLineRef } from "./validation"
export type { LineRef } from "./validation"
export type { SetLine, ReplaceLines, InsertAfter, Replace, HashlineEdit } from "./types"
export type { SetLine, ReplaceLines, InsertAfter, InsertBefore, InsertBetween, Replace, HashlineEdit } from "./types"
export { NIBBLE_STR, HASHLINE_DICT, HASHLINE_REF_PATTERN, HASHLINE_OUTPUT_PATTERN } from "./constants"
export {
  applyHashlineEdits,
  applyInsertAfter,
  applyInsertBefore,
  applyInsertBetween,
  applyReplace,
  applyReplaceLines,
  applySetLine,
34  src/tools/hashline-edit/tool-description.ts  Normal file
@@ -0,0 +1,34 @@
export const HASHLINE_EDIT_DESCRIPTION = `Edit files using LINE#ID format for precise, safe modifications.

WORKFLOW:
1. Read the file and copy exact LINE#ID anchors.
2. Submit one edit call with all related operations for that file.
3. If more edits are needed after success, use the latest anchors from read/edit output.
4. Use anchors as "LINE#ID" only (never include trailing ":content").

VALIDATION:
- Payload shape: { "filePath": string, "edits": [...], "delete"?: boolean, "rename"?: string }
- Each edit must be one of: set_line, replace_lines, insert_after, insert_before, insert_between, replace
- text/new_text must contain plain replacement text only (no LINE#ID prefixes, no diff + markers)

LINE#ID FORMAT (CRITICAL - READ CAREFULLY):
Each line reference must be in "LINE#ID" format where:
- LINE: 1-based line number
- ID: Two CID letters from the set ZPMQVRWSNKTXJBYH

OPERATION TYPES:
1. set_line
2. replace_lines
3. insert_after
4. insert_before
5. insert_between
6. replace

FILE MODES:
- delete=true deletes file and requires edits=[] with no rename
- rename moves final content to a new path and removes old path

CONTENT FORMAT:
- text/new_text can be a string (single line) or string[] (multi-line, preferred).
- If you pass a multi-line string, it is split by real newline characters.
- Literal "\\n" is preserved as text.`
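The documented payload shape can be illustrated with a small self-check. The file path and the `LINE#ID` anchors below (`2#VK`, `4#TX`, `5#NK`) are hypothetical examples, not hashes computed from a real file; in practice they come from read-tool output:

```typescript
// Hypothetical edit payload following the documented shape; anchors are
// illustrative only and would normally be copied from read-tool output.
const payload = {
  filePath: "/tmp/example.ts",
  edits: [
    { type: "set_line", line: "2#VK", text: "const y = 2" },
    { type: "insert_between", after_line: "4#TX", before_line: "5#NK", text: ["inserted", "lines"] },
    { type: "replace", old_text: "foo", new_text: "bar" },
  ],
}

const allowedTypes = ["set_line", "replace_lines", "insert_after", "insert_before", "insert_between", "replace"]
// LINE#ID: 1-based line number, "#", then two CID letters from ZPMQVRWSNKTXJBYH.
const refPattern = /^[0-9]+#[ZPMQVRWSNKTXJBYH]{2}$/

console.log(payload.edits.every((e) => allowedTypes.includes(e.type))) // every op is documented
console.log(refPattern.test("2#VK")) // well-formed anchor
```

Note that `"2#aa"` fails the pattern, matching the tool's own warning about invalid anchor characters.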
@@ -11,12 +11,10 @@ function createMockContext(): ToolContext {
      sessionID: "test",
      messageID: "test",
      agent: "test",
      directory: "/tmp",
      worktree: "/tmp",
      abort: new AbortController().signal,
      metadata: mock(() => {}),
      ask: async () => {},
    }
  } as unknown as ToolContext
}

describe("createHashlineEditTool", () => {
@@ -103,7 +101,7 @@ describe("createHashlineEditTool", () => {

    //#then
    expect(result).toContain("Error")
    expect(result).toContain("hash")
    expect(result).toContain(">>>")
  })

  it("preserves literal backslash-n and supports string[] payload", async () => {
@@ -132,4 +130,90 @@ describe("createHashlineEditTool", () => {
    //#then
    expect(fs.readFileSync(filePath, "utf-8")).toBe("join(\\n)\na\nb\nline2")
  })

  it("supports insert_before and insert_between", async () => {
    //#given
    const filePath = path.join(tempDir, "test.txt")
    fs.writeFileSync(filePath, "line1\nline2\nline3")
    const line1 = computeLineHash(1, "line1")
    const line2 = computeLineHash(2, "line2")
    const line3 = computeLineHash(3, "line3")

    //#when
    await tool.execute(
      {
        filePath,
        edits: [
          { type: "insert_before", line: `3#${line3}`, text: ["before3"] },
          { type: "insert_between", after_line: `1#${line1}`, before_line: `2#${line2}`, text: ["between"] },
        ],
      },
      createMockContext(),
    )

    //#then
    expect(fs.readFileSync(filePath, "utf-8")).toBe("line1\nbetween\nline2\nbefore3\nline3")
  })

  it("returns error when insert text is empty array", async () => {
    //#given
    const filePath = path.join(tempDir, "test.txt")
    fs.writeFileSync(filePath, "line1\nline2")
    const line1 = computeLineHash(1, "line1")

    //#when
    const result = await tool.execute(
      {
        filePath,
        edits: [{ type: "insert_after", line: `1#${line1}`, text: [] }],
      },
      createMockContext(),
    )

    //#then
    expect(result).toContain("Error")
    expect(result).toContain("non-empty")
  })

  it("supports file rename with edits", async () => {
    //#given
    const filePath = path.join(tempDir, "source.txt")
    const renamedPath = path.join(tempDir, "renamed.txt")
    fs.writeFileSync(filePath, "line1\nline2")
    const line2 = computeLineHash(2, "line2")

    //#when
    await tool.execute(
      {
        filePath,
        rename: renamedPath,
        edits: [{ type: "set_line", line: `2#${line2}`, text: "line2-updated" }],
      },
      createMockContext(),
    )

    //#then
    expect(fs.existsSync(filePath)).toBe(false)
    expect(fs.readFileSync(renamedPath, "utf-8")).toBe("line1\nline2-updated")
  })

  it("supports file delete mode", async () => {
    //#given
    const filePath = path.join(tempDir, "delete-me.txt")
    fs.writeFileSync(filePath, "line1")

    //#when
    const result = await tool.execute(
      {
        filePath,
        delete: true,
        edits: [],
      },
      createMockContext(),
    )

    //#then
    expect(fs.existsSync(filePath)).toBe(false)
    expect(result).toContain("Successfully deleted")
  })
})
@@ -1,13 +1,16 @@
import { tool, type ToolContext, type ToolDefinition } from "@opencode-ai/plugin/tool"
import { storeToolMetadata } from "../../features/tool-metadata-store"
import type { HashlineEdit } from "./types"
import { applyHashlineEdits } from "./edit-operations"
import { applyHashlineEditsWithReport } from "./edit-operations"
import { computeLineHash } from "./hash-computation"
import { toHashlineContent, generateUnifiedDiff, countLineDiffs } from "./diff-utils"
import { HASHLINE_EDIT_DESCRIPTION } from "./tool-description"

interface HashlineEditArgs {
  filePath: string
  edits: HashlineEdit[]
  delete?: boolean
  rename?: string
}

type ToolContextWithCallID = ToolContext & {
@@ -55,62 +58,11 @@ function generateDiff(oldContent: string, newContent: string, filePath: string):

export function createHashlineEditTool(): ToolDefinition {
  return tool({
    description: `Edit files using LINE#ID format for precise, safe modifications.

WORKFLOW:
1. Read the file and copy exact LINE#ID anchors.
2. Submit one edit call with all related operations for that file.
3. If more edits are needed after success, use the latest anchors from read/edit output.
4. Use anchors as "LINE#ID" only (never include trailing ":content").

VALIDATION:
- Payload shape: { "filePath": string, "edits": [...] }
- Each edit must be one of: set_line, replace_lines, insert_after, replace
- text/new_text must contain plain replacement text only (no LINE#ID prefixes, no diff + markers)

LINE#ID FORMAT (CRITICAL - READ CAREFULLY):
Each line reference must be in "LINE#ID" format where:
- LINE: 1-based line number
- ID: Two CID letters from the set ZPMQVRWSNKTXJBYH
- Example: "5#VK" means line 5 with hash id "VK"
- WRONG: "2#aa" (invalid characters) - will fail!
- CORRECT: "2#VK"

GETTING HASHES:
Use the read tool - it returns lines in "LINE#ID:content" format.
Successful edit output also includes updated file content in "LINE#ID:content" format.

FOUR OPERATION TYPES:

1. set_line: Replace a single line
   { "type": "set_line", "line": "5#VK", "text": "const y = 2" }

2. replace_lines: Replace a range of lines
   { "type": "replace_lines", "start_line": "5#VK", "end_line": "7#NP", "text": ["new", "content"] }

3. insert_after: Insert lines after a specific line
   { "type": "insert_after", "line": "5#VK", "text": "console.log('hi')" }

4. replace: Simple text replacement (no hash validation)
   { "type": "replace", "old_text": "foo", "new_text": "bar" }

HASH MISMATCH HANDLING:
If the hash doesn't match the current content, the edit fails with a hash mismatch error. This prevents editing stale content.

SEQUENTIAL EDITS (ANTI-FLAKE):
- Always copy anchors exactly from the latest read/edit output.
- Never infer or guess hashes.
- For related edits, prefer batching them in one call.

BOTTOM-UP APPLICATION:
Edits are applied from bottom to top (highest line numbers first) to preserve line number references.

CONTENT FORMAT:
- text/new_text can be a string (single line) or string[] (multi-line, preferred).
- If you pass a multi-line string, it is split by real newline characters.
- Literal "\\n" is preserved as text.`,
    description: HASHLINE_EDIT_DESCRIPTION,
    args: {
      filePath: tool.schema.string().describe("Absolute path to the file to edit"),
      delete: tool.schema.boolean().optional().describe("Delete file instead of editing"),
      rename: tool.schema.string().optional().describe("Rename output file path after edits"),
      edits: tool.schema
        .array(
          tool.schema.union([
@@ -136,6 +88,21 @@ CONTENT FORMAT:
              .union([tool.schema.string(), tool.schema.array(tool.schema.string())])
              .describe("Content to insert after the line (string or string[] for multiline)"),
            }),
            tool.schema.object({
              type: tool.schema.literal("insert_before"),
              line: tool.schema.string().describe("Line reference in LINE#ID format"),
              text: tool.schema
                .union([tool.schema.string(), tool.schema.array(tool.schema.string())])
                .describe("Content to insert before the line (string or string[] for multiline)"),
            }),
            tool.schema.object({
              type: tool.schema.literal("insert_between"),
              after_line: tool.schema.string().describe("After line in LINE#ID format"),
              before_line: tool.schema.string().describe("Before line in LINE#ID format"),
              text: tool.schema
                .union([tool.schema.string(), tool.schema.array(tool.schema.string())])
                .describe("Content to insert between anchor lines (string or string[] for multiline)"),
            }),
            tool.schema.object({
              type: tool.schema.literal("replace"),
              old_text: tool.schema.string().describe("Text to find"),
@@ -145,17 +112,24 @@ CONTENT FORMAT:
            }),
          ])
        )
        .describe("Array of edit operations to apply"),
        .describe("Array of edit operations to apply (empty when delete=true)"),
    },
    execute: async (args: HashlineEditArgs, context: ToolContext) => {
      try {
        const metadataContext = context as ToolContextWithMetadata
        const filePath = args.filePath
        const { edits } = args
        const { edits, delete: deleteMode, rename } = args

        if (!edits || !Array.isArray(edits) || edits.length === 0) {
        if (deleteMode && rename) {
          return "Error: delete and rename cannot be used together"
        }

        if (!deleteMode && (!edits || !Array.isArray(edits) || edits.length === 0)) {
          return "Error: edits parameter must be a non-empty array"
        }
        if (deleteMode && edits.length > 0) {
          return "Error: delete mode requires edits to be an empty array"
        }

        const file = Bun.file(filePath)
        const exists = await file.exists()
@@ -163,28 +137,43 @@ CONTENT FORMAT:
          return `Error: File not found: ${filePath}`
        }

        if (deleteMode) {
          await Bun.file(filePath).delete()
          return `Successfully deleted ${filePath}`
        }

        const oldContent = await file.text()
        const newContent = applyHashlineEdits(oldContent, edits)
        const applyResult = applyHashlineEditsWithReport(oldContent, edits)
        const newContent = applyResult.content

        await Bun.write(filePath, newContent)

        const diff = generateDiff(oldContent, newContent, filePath)
        if (rename && rename !== filePath) {
          await Bun.write(rename, newContent)
          await Bun.file(filePath).delete()
        }

        const effectivePath = rename && rename !== filePath ? rename : filePath

        const diff = generateDiff(oldContent, newContent, effectivePath)
        const newHashlined = toHashlineContent(newContent)

        const unifiedDiff = generateUnifiedDiff(oldContent, newContent, filePath)
        const unifiedDiff = generateUnifiedDiff(oldContent, newContent, effectivePath)
        const { additions, deletions } = countLineDiffs(oldContent, newContent)

        const meta = {
          title: filePath,
          title: effectivePath,
          metadata: {
            filePath,
            path: filePath,
            file: filePath,
            filePath: effectivePath,
            path: effectivePath,
            file: effectivePath,
            diff: unifiedDiff,
            noopEdits: applyResult.noopEdits,
            deduplicatedEdits: applyResult.deduplicatedEdits,
            filediff: {
              file: filePath,
              path: filePath,
              filePath,
              file: effectivePath,
              path: effectivePath,
              filePath: effectivePath,
              before: oldContent,
              after: newContent,
              additions,
@@ -202,7 +191,8 @@ CONTENT FORMAT:
          storeToolMetadata(context.sessionID, callID, meta)
        }

        return `Successfully applied ${edits.length} edit(s) to ${filePath}
        return `Successfully applied ${edits.length} edit(s) to ${effectivePath}
No-op edits: ${applyResult.noopEdits}, deduplicated edits: ${applyResult.deduplicatedEdits}

${diff}

@@ -17,10 +17,23 @@ export interface InsertAfter {
  text: string | string[]
}

export interface InsertBefore {
  type: "insert_before"
  line: string
  text: string | string[]
}

export interface InsertBetween {
  type: "insert_between"
  after_line: string
  before_line: string
  text: string | string[]
}

export interface Replace {
  type: "replace"
  old_text: string
  new_text: string | string[]
}

export type HashlineEdit = SetLine | ReplaceLines | InsertAfter | Replace
export type HashlineEdit = SetLine | ReplaceLines | InsertAfter | InsertBefore | InsertBetween | Replace
@@ -1,6 +1,6 @@
import { describe, it, expect } from "bun:test"
import { computeLineHash } from "./hash-computation"
import { parseLineRef, validateLineRef } from "./validation"
import { parseLineRef, validateLineRef, validateLineRefs } from "./validation"

describe("parseLineRef", () => {
  it("parses valid LINE#ID reference", () => {
@@ -49,7 +49,16 @@ describe("validateLineRef", () => {
    const lines = ["function hello() {"]

    //#when / #then
    expect(() => validateLineRef(lines, "1#ZZ")).toThrow(/current hash/)
    expect(() => validateLineRef(lines, "1#ZZ")).toThrow(/>>>\s+1#[ZPMQVRWSNKTXJBYH]{2}:/)
  })

  it("shows >>> mismatch context in batched validation", () => {
    //#given
    const lines = ["one", "two", "three", "four"]

    //#when / #then
    expect(() => validateLineRefs(lines, ["2#ZZ"]))
      .toThrow(/>>>\s+2#[ZPMQVRWSNKTXJBYH]{2}:two/)
  })
})

@@ -81,7 +90,7 @@ describe("legacy LINE:HEX backward compatibility", () => {
    const lines = ["function hello() {"]

    //#when / #then
    expect(() => validateLineRef(lines, "1:ab")).toThrow(/Hash mismatch|current hash/)
    expect(() => validateLineRef(lines, "1:ab")).toThrow(/>>>\s+1#[ZPMQVRWSNKTXJBYH]{2}:/)
  })

  it("extracts legacy ref from content with markers", () => {
@@ -6,6 +6,13 @@ export interface LineRef {
  hash: string
}

interface HashMismatch {
  line: number
  expected: string
}

const MISMATCH_CONTEXT = 2

const LINE_REF_EXTRACT_PATTERN = /([0-9]+#[ZPMQVRWSNKTXJBYH]{2}|[0-9]+:[0-9a-fA-F]{2,})/

function normalizeLineRef(ref: string): string {
@@ -59,34 +66,85 @@ export function validateLineRef(lines: string[], ref: string): void {
  const currentHash = computeLineHash(line, content)

  if (currentHash !== hash) {
    throw new Error(
      `Hash mismatch at line ${line}. Expected hash: ${hash}, current hash: ${currentHash}. ` +
        `Line content may have changed. Current content: "${content}"`
    throw new HashlineMismatchError([{ line, expected: hash }], lines)
  }
}

export class HashlineMismatchError extends Error {
  readonly remaps: ReadonlyMap<string, string>

  constructor(
    private readonly mismatches: HashMismatch[],
    private readonly fileLines: string[]
  ) {
    super(HashlineMismatchError.formatMessage(mismatches, fileLines))
    this.name = "HashlineMismatchError"
    const remaps = new Map<string, string>()
    for (const mismatch of mismatches) {
      const actual = computeLineHash(mismatch.line, fileLines[mismatch.line - 1] ?? "")
      remaps.set(`${mismatch.line}#${mismatch.expected}`, `${mismatch.line}#${actual}`)
    }
    this.remaps = remaps
  }

  static formatMessage(mismatches: HashMismatch[], fileLines: string[]): string {
    const mismatchByLine = new Map<number, HashMismatch>()
    for (const mismatch of mismatches) mismatchByLine.set(mismatch.line, mismatch)

    const displayLines = new Set<number>()
    for (const mismatch of mismatches) {
      const low = Math.max(1, mismatch.line - MISMATCH_CONTEXT)
      const high = Math.min(fileLines.length, mismatch.line + MISMATCH_CONTEXT)
      for (let line = low; line <= high; line++) displayLines.add(line)
    }

    const sortedLines = [...displayLines].sort((a, b) => a - b)
    const output: string[] = []
    output.push(
      `${mismatches.length} line${mismatches.length > 1 ? "s have" : " has"} changed since last read. ` +
        "Use updated LINE#ID references below (>>> marks changed lines)."
    )
    output.push("")

    let previousLine = -1
    for (const line of sortedLines) {
      if (previousLine !== -1 && line > previousLine + 1) {
        output.push("  ...")
      }
      previousLine = line

      const content = fileLines[line - 1] ?? ""
      const hash = computeLineHash(line, content)
      const prefix = `${line}#${hash}:${content}`
      if (mismatchByLine.has(line)) {
        output.push(`>>> ${prefix}`)
      } else {
        output.push(`    ${prefix}`)
      }
    }

    return output.join("\n")
  }
}

export function validateLineRefs(lines: string[], refs: string[]): void {
  const mismatches: string[] = []
  const mismatches: HashMismatch[] = []

  for (const ref of refs) {
    const { line, hash } = parseLineRef(ref)

    if (line < 1 || line > lines.length) {
      mismatches.push(`Line number ${line} out of bounds (file has ${lines.length} lines)`)
      continue
      throw new Error(`Line number ${line} out of bounds (file has ${lines.length} lines)`)
    }

    const content = lines[line - 1]
    const currentHash = computeLineHash(line, content)
    if (currentHash !== hash) {
      mismatches.push(
        `line ${line}: expected ${hash}, current ${currentHash} (${line}#${currentHash}) content: "${content}"`
      )
      mismatches.push({ line, expected: hash })
    }
  }

  if (mismatches.length > 0) {
    throw new Error(`Hash mismatches:\n- ${mismatches.join("\n- ")}`)
    throw new HashlineMismatchError(mismatches, lines)
  }
}
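The `formatMessage` context window can be sketched standalone. The `??` hash below is a placeholder for the real two-letter CID hash (which depends on `computeLineHash` and the CID dictionary), so only the line selection, gap elision, and `>>>` markers are faithful to the code above:

```typescript
// Standalone sketch of the ">>>" mismatch report: show each changed line with
// MISMATCH_CONTEXT (2) neighbors either side, eliding gaps with "...".
function renderMismatch(fileLines: string[], badLines: number[], context = 2): string {
  const bad = new Set(badLines)
  const show = new Set<number>()
  for (const l of badLines) {
    const low = Math.max(1, l - context)
    const high = Math.min(fileLines.length, l + context)
    for (let i = low; i <= high; i++) show.add(i)
  }
  const sorted = [...show].sort((a, b) => a - b)
  const out: string[] = []
  let prev = -1
  for (const l of sorted) {
    if (prev !== -1 && l > prev + 1) out.push("  ...")
    prev = l
    const prefix = `${l}#??:${fileLines[l - 1] ?? ""}`
    out.push(bad.has(l) ? `>>> ${prefix}` : `    ${prefix}`)
  }
  return out.join("\n")
}

const report = renderMismatch(
  ["one", "two", "three", "four", "five", "six", "seven", "eight"],
  [2, 8],
)
// Lines 1-4 (around 2) and 6-8 (around 8) are shown; line 5 is elided behind "...".
```

This is why batched validation can report several stale anchors in one error, with each changed line flagged and enough surrounding context to re-anchor the edit.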