Merge main into feat/omp-support-model-aliases

Brings the PR branch up to the current main so the OMP provider and the
model-alias command can land cleanly. Resolves six merge conflicts and
applies a handful of small fixups alongside the resolution so the
feature matches the conventions set by the cursor-agent merge earlier
today.

Conflict resolutions:

  README.md               Combine cursor-agent and OMP rows in provider
                          list, Requirements, and data-location table;
                          take main's Node 22+ and node:sqlite text.
  src/cli.ts              Keep both new commands: model-alias and plan.
  src/config.ts           Add modelAliases alongside plan on the config
                          type.
  src/providers/index.ts  Keep the cursor-agent lazy-loader from main
                          and add omp to coreProviders. Fold the two
                          pi-module imports into one statement.
  src/providers/pi.ts     Keep the discovery-cache snapshot path from
                          main and the providerName parameterization
                          from the PR. Propagate providerName through
                          saveDiscoveryCache, loadDiscoveryCache, the
                          parserVersion tag, and the dedup key prefix
                          so OMP sources no longer stamp 'pi:' inside
                          their cache entries or dedup keys.
  tests/models.test.ts    Keep main's pricing-and-short-name tests and
                          add the PR's alias tests alongside, sharing a
                          single loadPricing setup and an afterEach
                          alias reset.
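The providerName parameterization above can be sketched as follows; a minimal illustration assuming a `source.provider` field, with helper names that are ours, not the actual pi.ts API:

```typescript
// Illustrative sketch: dedup keys and cache tags derive their prefix
// from the owning provider instead of a hard-coded 'pi:'.
type SourceLike = { provider: string; sessionId: string; messageId: string }

// 'omp' sources now produce 'omp:...' keys, 'pi' sources keep 'pi:...'.
function makeDedupKey(source: SourceLike): string {
  return `${source.provider}:${source.sessionId}:${source.messageId}`
}

// The parserVersion tag is parameterized the same way.
function parserVersionTag(providerName: string, version: number): string {
  return `${providerName}-v${version}`
}
```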

Fixups in the same commit:

  src/models.ts           Replace ?? chain in resolveAlias with
                          Object.hasOwn checks. The previous form
                          returned Object.prototype for a model named
                          '__proto__' and broke downstream
                          canonical.startsWith calls. Caught by the
                          existing prototype-pollution test suite.
  src/providers/pi.ts     Use source.provider in the dedup key prefix
                          and add a trailing newline to the file.
  tests/providers/omp.test.ts  Expect 'omp:' in the dedup key for OMP
                          sources, matching the fix above.

Feature work by @cgrossde.

iamtoruk 2026-04-21 03:16:28 -07:00
commit c2ab80d6e2
59 changed files with 8070 additions and 561 deletions

4
.gitignore vendored

@@ -16,6 +16,7 @@ Thumbs.db
# Planning artifacts (internal, not shipped)
docs/superpowers/
.claude/
CLAUDE.md
# Config / secrets
.env
@@ -36,3 +37,6 @@ npm-debug.log*
# Local Discord brand / promo assets not yet ready to publish
assets/discord-*.png
# Desktop app experiments
desktop/


@@ -1,13 +1,70 @@
# Changelog
## Unreleased
## 0.8.4 - 2026-04-20
### Fixed
- **Menubar hang on large session histories.** Menubar-json now uses the source cache instead of re-parsing all files on every poll.
## 0.8.3 - 2026-04-20
### Fixed
- **Source cache empty-session poisoning.** Cache entries with zero sessions are now treated as cache misses, forcing a fresh re-parse instead of silently dropping the session data.
- **Date range skip on changed files.** The date range exclusion now runs only after the fingerprint matches, so files that have grown with new data are never incorrectly skipped.
- **TUI auto-refresh not updating.** The 30-second refresh timer now bypasses the in-memory CachedWindow, which was permanently stale because the date range end is always end-of-day.
- **Menubar showing stale or decreasing costs.** Fixed cache invalidation so the menubar receives correct, up-to-date cost data.
- **Swift menubar observation race.** Explicit UI refresh calls after each data fetch prevent missed updates from the one-shot observation callback.
## 0.8.2 - 2026-04-20
### Added
- **Persistent parse cache for all providers.** Repeated CLI runs now reuse parsed source summaries across fresh processes instead of reparsing raw logs every time. Cache lives at `~/.cache/codeburn/source-cache-v1/` with atomic writes and 0600 file permissions. Credit: @spMohanty (PR #116).
- **`--no-cache` on parse-backed commands.** `report`, `today`, `month`, `status`, `export`, `optimize`, and `compare` can bypass cached entries for that run and rebuild them from raw logs. Credit: @spMohanty (PR #116).
- **`Updating cache` stderr progress.** Non-JSON cold or partial cache rebuilds now show progress while CodeBurn refreshes changed sources. Credit: @spMohanty (PR #116).
- **`codeburn plan` subscription tracking.** Set your plan (`claude-pro`, `claude-max`, `cursor-pro`, or custom) to see a usage progress bar in the dashboard. Includes 7-day trailing median projection and billing-cycle-aware period math. Credit: @tmchow (PR #74).
### Changed
- **Cursor now uses the shared parse cache.** The provider-specific Cursor cache path is gone; SQLite-backed provider data now flows through the same persistent cache layer as the other providers. Credit: @spMohanty (PR #116).
### Fixed
- **Model pricing: removed bidirectional fuzzy match.** `canonical.startsWith(key) || key.startsWith(canonical)` could match unrelated models. Now uses one-directional prefix only. Credit: @hobostay (PR #77).
- **Zero-cost models incorrectly filtered.** `!entry.input_cost_per_token` treated `0` as missing. Now checks `=== undefined` so free-tier models retain their pricing entry. Credit: @hobostay (PR #77).
- **File descriptor leak in `readSessionLines`.** Generator now calls `stream.destroy()` in a `finally` block so early abandonment does not leak open handles. Credit: @hobostay (PR #77).
- **CSV injection guard extended.** Tab and carriage return characters at cell start are now escaped alongside `=`, `+`, `-`, `@`. Credit: @hobostay (PR #77).
- **Crash on empty export periods.** Optional chaining prevents `undefined` access when a period has no projects. Credit: @hobostay (PR #77).
- **Config read crash on malformed JSON.** Restored catch-all error handling in `readConfig` so a corrupt `config.json` returns defaults instead of crashing.
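The zero-cost pricing fix above reduces to checking for `undefined` instead of truthiness; a minimal sketch with names that are ours, not the models.ts API:

```typescript
// Buggy: !cost treats a legitimate price of 0 as missing.
function hasPricingBuggy(inputCostPerToken?: number): boolean {
  return !!inputCostPerToken
}

// Fixed: only undefined means the entry has no pricing.
function hasPricing(inputCostPerToken?: number): boolean {
  return inputCostPerToken !== undefined
}
```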
## 0.8.0 - 2026-04-19
### Added
- **`codeburn compare` command.** Side-by-side model comparison across any two models in your session data. Interactive model picker, period switching, and provider filtering.
- **Compare view in dashboard.** Press `c` in the TUI to enter compare mode. Arrow keys switch periods, `b` to return.
- **Performance metrics.** One-shot rate, retry rate, and self-correction detection per model. Self-corrections are detected by scanning JSONL transcripts for tool error followed by retry patterns.
- **Efficiency metrics.** Cost per call, cost per edit turn, output tokens per call, and cache hit rate.
- **Per-category one-shot rates.** Breaks down one-shot success by task category (Coding, Debugging, Feature Dev, etc.) for each model.
- **Working style comparison.** Delegation rate, planning rate (TaskCreate, TaskUpdate, TodoWrite), average tools per turn, and fast mode usage.
- **TUI auto-refresh enabled by default.** Dashboard now refreshes every 30 seconds out of the box. Pass `--refresh 0` to disable. Closes #107.
- **36 comparison tests.** Full coverage for metric computation, category breakdown, working style, self-correction scanning, and planning tool detection. Total suite: 274 tests.
### Fixed
- **Planning rate showed ~0% in model comparison.** Only counted `EnterPlanMode` (rarely used) instead of all planning tools (TaskCreate, TaskUpdate, TodoWrite, EnterPlanMode, ExitPlanMode). Now detects planning at the turn level across all five tool types.
- **Menubar "All" tab showed stale data.** Three-layer caching (300s in-memory TTL, daily disk cache, 60s parser cache) prevented tab switches from showing fresh numbers. Cache TTL reduced from 300s to 30s, tab switches always fetch fresh data, background refresh interval reduced from 60s to 15s.
## 0.7.4 - 2026-04-19
### Added
- **`codeburn report --from/--to`.** Filter sessions to an exact `YYYY-MM-DD` date range (local time). Either flag alone is valid: `--from` alone runs from the given date through end-of-today, `--to` alone runs from the earliest data through the given date. Inverted ranges or malformed dates exit with a clear error. In the TUI, pressing `1`-`5` still switches to the predefined periods. Credit: @lfl1337 (PR #80).
- **`avgCostPerSession` in reports.** JSON `projects[]` entries gain an `avgCostPerSession` field and `export -f csv` adds an `Avg/Session (USD)` column to `projects.csv`. Column order in `projects.csv` is now `Project, Cost, Avg/Session, Share, API Calls, Sessions` -- scripts parsing by column position should read by header instead. Credit: @lfl1337 (PR #80).
- **Menubar auto-update checker.** Background check every 2 days against GitHub Releases. When a newer menubar build is available, an "Update" pill appears in the popover header. One click downloads, replaces, and relaunches the app automatically.
- **Smart agent tab visibility.** The provider tab strip hides when fewer than two providers have spend, reducing clutter for single-tool users.
### Security
- **Semgrep CI guard against prototype pollution regressions.** New `.github/workflows/ci.yml` runs a bracket-assign guard on `src/providers/` and `src/parser.ts` on every push to main and every PR. Blocks re-introducing `$MAP[$KEY] = $MAP[$KEY] ?? $INIT` patterns on `{}`-initialized maps. `categoryBreakdown` in `parser.ts` switched to `Object.create(null)` for consistency with its sibling breakdown maps. Credit: @lfl1337 (PR #78).
### Fixed
- **Stale daily cache caused wrong menubar costs.** The daily cache never recomputed yesterday once written, so a mid-day CLI run would freeze partial cost data permanently. The "All" provider view relied on this cache, showing wildly incorrect numbers while per-provider tabs (which parse fresh) were correct. Yesterday is now evicted and recomputed on every run.
- **UTC date bucketing instead of local timezone.** Timestamps in session files are UTC ISO strings. Several code paths extracted the date via `.slice(0, 10)` (UTC date) while date range filtering used local-time boundaries. Turns between UTC midnight and local midnight were attributed to the wrong day -- the menubar showed lower today cost than the TUI. All date bucketing now uses local time consistently.
- **OpenCode SQLite ESM loader.** `node:sqlite` is now loaded correctly in ESM runtime. Credit: @aaronflorey (PR #104).
- **Menubar trend tooltip per-provider views.** Tooltip now shows the correct cost when a specific provider tab is selected.
- **Menubar (today, all) cache freshness.** The cache entry powering the menubar title and tab labels is now kept fresh independently of the selected period/provider.
- **Agent tab strip restored.** All detected providers are shown again after a regression hid them.
- **Plan pane button cleanup.** Removed the broken "Connect Claude" button that opened a useless terminal session. The Plan pane now shows only a "Retry" button.
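The 0.7.4 timezone fix comes down to bucketing by local calendar date instead of slicing the UTC ISO string; a minimal illustration with helper names that are ours:

```typescript
// UTC bucketing: first 10 chars of the ISO timestamp.
function utcDateKey(iso: string): string {
  return iso.slice(0, 10)
}

// Local bucketing: the runtime's local calendar date.
function localDateKey(iso: string): string {
  const d = new Date(iso)
  const pad = (n: number) => String(n).padStart(2, '0')
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`
}

// In any timezone west of UTC, a turn logged at 01:00 UTC belongs to
// the previous local day, so the two keys disagree for that turn.
```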
## 0.7.3 - 2026-04-18


@@ -1,84 +0,0 @@
# CodeBurn Development Rules
## Verification
- NEVER commit without running locally first and confirming it works
- Run `npx tsx src/cli.ts report` and `npx tsx src/cli.ts today` to verify changes before any commit
- For dashboard changes: run the interactive TUI and visually confirm rendering
- For new features: test the happy path AND edge cases (empty data, missing config, pipe mode)
## Code Quality
- Clean, minimal code. No dead code, no commented-out blocks, no TODO placeholders
- No emoji anywhere in the codebase
- No em dashes. Use hyphens or rewrite the sentence
- No AI slop: no "streamline", "leverage", "robust", "seamless" in user-facing text
- No unnecessary abstractions. Three similar lines > premature helper function
- No magic numbers. Extract layout offsets, column widths, thresholds, timeouts, and any value used in a calculation into a named `const` at module scope. Inline literals are only OK for universally understood constants (0, 1, 100 for percent). If a number appears in a formula like `pw - bw - 31`, the `31` must be a named constant.
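The magic-number rule in the last bullet, as a before/after sketch (constant name hypothetical):

```typescript
// Before: const width = pw - bw - 31
// After: the 31 gets a name at module scope.
const RIGHT_GUTTER_WIDTH = 31 // hypothetical name for the layout offset

function contentWidth(pw: number, bw: number): number {
  return pw - bw - RIGHT_GUTTER_WIDTH
}
```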
## Accuracy
- Every user-facing number (cost, tokens, calls) must be verified against real data
- LiteLLM pricing model names must match exactly. No guessing model IDs
- Date range calculations must be tested with edge cases (month boundaries, billing day > days in month)
## Style
- TypeScript strict mode. No `any` types
- No comments unless the WHY is non-obvious
- Imports: node builtins first, then deps, then local (separated by blank line)
- Single quotes; keep semicolon usage consistent with the file you are editing (existing style: no trailing semicolons in most files)
## Git
### Branching (strict)
- NEVER commit directly to main. All work happens on branches
- Branch naming: `feat/<name>`, `fix/<name>`, `chore/<name>`, `docs/<name>`
- Merge to main ONLY after: tests pass, CLI verified, manual testing done
- npm publish ONLY from main after merge
- Tag releases: `git tag v0.X.0` after publish
### Creating a branch
```bash
git checkout main && git pull origin main
git checkout -b feat/my-feature
# work, test, iterate
npx vitest run
npx tsx src/cli.ts report
# when ready:
git checkout main && git merge feat/my-feature
git push origin main
```
### Handling external PRs
- NEVER rewrite a contributor's changes on your own branch. Always merge THEIR branch
- Add your improvements as separate commits on top of their branch, not as replacements
- This preserves their authorship in git history so GitHub shows them as a contributor
```bash
gh pr checkout <number> # checkout PR locally
npx vitest run # test their code
npx tsx src/cli.ts report # manual verification
# apply patches if needed, commit on their branch
git checkout main
git merge <branch> # preserves their authorship
git push origin main
gh pr comment <number> --body "Merged, thanks!"
```
### What gets committed
- Source code: `src/`, `tests/`
- Config: `package.json`, `tsconfig.json`, `tsup.config.ts`, `.gitignore`
- Docs: `README.md`, `CHANGELOG.md`, `LICENSE`, `CLAUDE.md`
- Assets: `assets/`
- NEVER commit: `.env`, secrets, keys, planning docs (`docs/superpowers/`), IDE config, logs, `.DS_Store`
- Check `git status` before every commit. Stage specific files, never `git add -A` or `git add .`
### Commit rules
- Commits from: AgentSeal <hello@agentseal.org>
- NEVER add Co-Authored-By lines
- NEVER include personal names or usernames in commits
- Small, focused commits. One feature per commit
- Test locally before every commit
### Public-facing language (commits, PRs, release notes, README)
- Commits and release notes are public. Write like you'd publish them.
- NEVER use words like "steal", "stealing", "copy", "rip off", "inspired by" in commit messages
- Describe what the code does, not where ideas came from
- If you must credit prior art, do it in code comments or docs, not commit messages
- No snark, no filler, no self-deprecation. Treat each commit as a product statement

148
README.md

@@ -9,17 +9,15 @@
<p align="center">
<a href="https://www.npmjs.com/package/codeburn"><img src="https://img.shields.io/npm/v/codeburn.svg" alt="npm version" /></a>
<a href="https://www.npmjs.com/package/codeburn"><img src="https://img.shields.io/npm/dt/codeburn.svg" alt="total downloads" /></a>
<a href="https://www.npmjs.com/package/codeburn"><img src="https://img.shields.io/npm/dm/codeburn.svg" alt="monthly downloads" /></a>
<a href="https://bundlephobia.com/package/codeburn"><img src="https://img.shields.io/bundlephobia/min/codeburn" alt="install size" /></a>
<a href="https://github.com/getagentseal/codeburn/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/codeburn.svg" alt="license" /></a>
<a href="https://github.com/getagentseal/codeburn"><img src="https://img.shields.io/badge/node-%3E%3D20-brightgreen.svg" alt="node version" /></a>
<a href="https://github.com/getagentseal/codeburn"><img src="https://img.shields.io/badge/node-%3E%3D22-brightgreen.svg" alt="node version" /></a>
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/getagentseal/codeburn/main/assets/dashboard.jpg" alt="CodeBurn TUI dashboard" width="620" />
</p>
By task type, tool, model, MCP server, and project. Supports **Claude Code**, **Codex** (OpenAI), **Cursor**, **OpenCode**, **Pi**, **[OMP](https://github.com/can1357/oh-my-pi)** (Oh My Pi), and **GitHub Copilot** with a provider plugin system. Tracks one-shot success rate per activity type so you can see where the AI nails it first try vs. burns tokens on edit/test/fix retries. Interactive TUI dashboard with gradient charts, responsive panels, and keyboard navigation. Native macOS menubar app in `mac/`. CSV/JSON export.
By task type, tool, model, MCP server, and project. Supports **Claude Code**, **Codex** (OpenAI), **Cursor**, **cursor-agent**, **OpenCode**, **Pi**, **[OMP](https://github.com/can1357/oh-my-pi)** (Oh My Pi), and **GitHub Copilot** with a provider plugin system. Tracks one-shot success rate per activity type so you can see where the AI nails it first try vs. burns tokens on edit/test/fix retries. Interactive TUI dashboard with gradient charts, responsive panels, and keyboard navigation. Native macOS menubar app in `mac/`. CSV/JSON export.
Works by reading session data directly from disk. No wrapper, no proxy, no API keys. Pricing from LiteLLM (auto-cached, all models supported).
@@ -37,9 +35,9 @@ npx codeburn
### Requirements
- Node.js 20+
- Node.js 22+
- Claude Code (`~/.claude/projects/`), Codex (`~/.codex/sessions/`), Cursor, OpenCode, Pi (`~/.pi/agent/sessions/`), OMP (`~/.omp/agent/sessions/`), and/or GitHub Copilot (`~/.copilot/session-state/`)
- For Cursor/OpenCode support: `better-sqlite3` is installed automatically as an optional dependency
- For Cursor/OpenCode support: uses Node's built-in `node:sqlite` (Node 22+)
## Usage
@@ -51,7 +49,7 @@ codeburn report -p 30days # rolling 30-day window
codeburn report -p all # every recorded session
codeburn report --from 2026-04-01 --to 2026-04-10 # exact date range
codeburn report --format json # full dashboard data as JSON
codeburn report --refresh 60 # auto-refresh every 60 seconds
codeburn report --refresh 60 # auto-refresh every 60s (default: 30s)
codeburn status # compact one-liner (today + month)
codeburn status --format json
codeburn export # CSV with today, 7 days, 30 days
@@ -60,7 +58,7 @@ codeburn optimize # find waste, get copy-paste fixes
codeburn optimize -p week # scope the scan to last 7 days
```
Arrow keys switch between Today / 7 Days / 30 Days / Month / All Time. Press `q` to quit, `1` `2` `3` `4` `5` as shortcuts. The dashboard also shows average cost per session and the five most expensive sessions across all projects.
Arrow keys switch between Today / 7 Days / 30 Days / Month / All Time. Press `q` to quit, `1` `2` `3` `4` `5` as shortcuts, `c` to open model comparison. The dashboard auto-refreshes every 30 seconds by default (`--refresh 0` to disable). The dashboard also shows average cost per session and the five most expensive sessions across all projects.
### JSON output
@@ -82,6 +80,25 @@ codeburn today --format json | jq '.overview.cost'
For the lighter `status --format json` (today + month totals only) or file-based exports (`export -f json`), see above.
## Cache behavior
CodeBurn now keeps a persistent parse cache under `~/.cache/codeburn/source-cache-v1/`.
It applies to every provider. Unchanged sources load from cache across fresh CLI runs,
while changed sources are refreshed on demand so rolling windows like `today` stay current
as new log entries land.
Use `--no-cache` on any command that reads session data to ignore cached entries for that
run and rebuild them from raw logs:
```bash
codeburn today --no-cache
codeburn report --period all --no-cache
codeburn export --no-cache
```
When a non-JSON command needs to rebuild part of the cache, CodeBurn shows an
`Updating cache` progress bar on stderr. JSON output stays clean on stdout.
## Providers
CodeBurn auto-detects which AI coding tools you use. If multiple providers have session data on disk, press `p` in the dashboard to toggle between them.
@@ -91,6 +108,7 @@ codeburn report # all providers combined (default)
codeburn report --provider claude # Claude Code only
codeburn report --provider codex # Codex only
codeburn report --provider cursor # Cursor only
codeburn report --provider cursor-agent # cursor-agent CLI only
codeburn report --provider opencode # OpenCode only
codeburn report --provider pi # Pi only
codeburn report --provider copilot # GitHub Copilot only
@@ -144,7 +162,7 @@ Either flag alone is valid. Inverted or malformed dates exit with a clear error.
Codex tool names are normalized to match Claude's conventions (`exec_command` shows as `Bash`, `read_file` as `Read`, etc.) so the activity classifier and tool breakdown work across providers.
Cursor reads token usage from its local SQLite database. Since Cursor's "Auto" mode hides the actual model used, costs are estimated using Sonnet pricing (labeled "Auto (Sonnet est.)" in the dashboard). The Cursor view shows a **Languages** panel (extracted from code blocks) instead of Core Tools/Shell/MCP panels, since Cursor does not log individual tool calls. First run on a large Cursor database may take up to a minute; results are cached and subsequent runs are instant.
Cursor reads token usage from its local SQLite database via Node's built-in `node:sqlite`. Since Cursor's "Auto" mode hides the actual model used, costs are estimated using Sonnet pricing (labeled "Auto (Sonnet est.)" in the dashboard). The Cursor view shows a **Languages** panel (extracted from code blocks) instead of Core Tools/Shell/MCP panels, since Cursor does not log individual tool calls. Parsed results are cached through the shared persistent cache layer.
GitHub Copilot only logs output tokens in its session state, so Copilot cost rows sit below actual API cost. The model is tracked via `session.model_change` events; messages before the first model change are skipped to avoid silent misattribution.
@@ -186,15 +204,31 @@ The currency setting applies everywhere: dashboard, status bar, menu bar widget,
The menu bar widget includes a currency picker with 17 common currencies. For any currency not listed, use the CLI command above.
## Plans (subscription tracking)
If you're on Claude Pro, Claude Max, or Cursor Pro, set your plan so the dashboard shows subscription-relative usage:
```bash
codeburn plan set claude-max # $200/month
codeburn plan set claude-pro # $20/month
codeburn plan set cursor-pro # $20/month
codeburn plan set custom --monthly-usd 150 --provider claude # custom
codeburn plan set none # disable plan view
codeburn plan # show current
codeburn plan reset # remove plan config
```
The progress bar shows API-equivalent cost vs subscription price. Presets use publicly stated plan prices (as of April 2026); they do not model exact token allowances, because vendors do not publish precise consumer-plan limits.
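The progress-bar math described above is a simple ratio; a hedged sketch, with parameter names assumed rather than taken from CodeBurn's actual config shape:

```typescript
// Percent of the subscription price consumed, in API-equivalent
// dollars, clamped to 100 for periods that exceed the plan price.
function planUsagePercent(apiEquivalentCostUsd: number, monthlyUsd: number): number {
  if (monthlyUsd <= 0) return 0
  return Math.min(100, (apiEquivalentCostUsd / monthlyUsd) * 100)
}
```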
## Menu Bar
<img src="https://cdn.jsdelivr.net/gh/getagentseal/codeburn@main/assets/menubar-0.7.2.png" alt="CodeBurn macOS menubar app" width="420" />
<img src="https://cdn.jsdelivr.net/gh/getagentseal/codeburn@main/assets/menubar-0.8.0.png" alt="CodeBurn macOS menubar app" width="420" />
```bash
npx codeburn menubar
```
One command: downloads the latest `.app`, installs into `~/Applications`, and launches it. Re-run with `--force` to reinstall. Native Swift + SwiftUI app lives in `mac/` (see `mac/README.md` for build details). Shows today's cost with a flame icon, opens a popover with agent tabs, period switcher (Today / 7 Days / 30 Days / Month / All), Trend / Forecast / Pulse / Stats / Plan insights, activity and model breakdowns, optimize findings, and CSV/JSON export. Refreshes live via FSEvents plus a 60-second poll.
One command: downloads the latest `.app`, installs into `~/Applications`, and launches it. Re-run with `--force` to reinstall. Native Swift + SwiftUI app lives in `mac/` (see `mac/README.md` for build details). Shows today's cost with a flame icon, opens a popover with agent tabs, period switcher (Today / 7 Days / 30 Days / Month / All), Trend / Forecast / Pulse / Stats / Plan insights, activity and model breakdowns, optimize findings, and CSV/JSON export. Refreshes live via FSEvents plus a 15-second poll.
## What it tracks
@@ -268,6 +302,37 @@ Each finding shows the estimated token and dollar savings plus a ready-to-paste
You can also open it inline from the dashboard: press `o` when a finding count appears in the status bar, `b` to return.
## Compare
Side-by-side model comparison across any two models in your session data. Pick any pair and see how they stack up on real usage from your own sessions.
```bash
codeburn compare # interactive model picker (default: all time)
codeburn compare -p week # last 7 days
codeburn compare -p today # today only
codeburn compare --provider claude # Claude Code sessions only
```
Or press `c` in the dashboard to enter compare mode. Arrow keys switch periods, `b` to return.
**Metrics compared**
| Section | Metric | What it measures |
|---------|--------|-----------------|
| Performance | One-shot rate | Edits that succeed without retries |
| Performance | Retry rate | Average retries per edit turn |
| Performance | Self-correction | Turns where the model corrected its own mistake |
| Efficiency | Cost / call | Average cost per API call |
| Efficiency | Cost / edit | Average cost per edit turn |
| Efficiency | Output tok / call | Average output tokens per call |
| Efficiency | Cache hit rate | Proportion of input from cache |
**Per-category one-shot rates.** Breaks down one-shot success by task category (Coding, Debugging, Feature Dev, etc.) so you can see where each model excels or struggles.
**Working style.** Compares delegation rate (agent spawns), planning rate (TaskCreate, TaskUpdate, TodoWrite usage), average tools per turn, and fast mode usage.
All metrics are computed from your local session data. No LLM calls, fully deterministic.
## How it reads data
**Claude Code** stores session transcripts as JSONL at `~/.claude/projects/<sanitized-path>/<session-id>.jsonl`. Each assistant entry contains model name, token usage (input, output, cache read, cache write), tool_use blocks, and timestamps.
@@ -293,35 +358,54 @@ CodeBurn reads these files, deduplicates messages (by API message ID for Claude,
```
src/
cli.ts Commander.js entry point
dashboard.tsx Ink TUI (React for terminals)
parser.ts JSONL reader, dedup, date filter, provider orchestration
models.ts LiteLLM pricing, cost calculation
classifier.ts 13-category task classifier
types.ts Type definitions
format.ts Text rendering (status bar)
menubar-json.ts Payload builder consumed by the native macOS menubar app in mac/
export.ts CSV/JSON multi-period export
config.ts Config file management (~/.config/codeburn/)
currency.ts Currency conversion, exchange rates, Intl formatting
sqlite.ts SQLite adapter (lazy-loads better-sqlite3)
cursor-cache.ts Cursor result cache (file-based, auto-invalidating)
cli.ts Commander.js entry point
dashboard.tsx Ink TUI (React for terminals)
parser.ts JSONL reader, dedup, date filter, provider orchestration
models.ts LiteLLM pricing, cost calculation
classifier.ts 13-category task classifier
compare-stats.ts Model comparison engine (metrics, category breakdown, working style)
types.ts Type definitions
format.ts Text rendering (status bar)
menubar-json.ts Payload builder consumed by the native macOS menubar app in mac/
export.ts CSV/JSON multi-period export
config.ts Config file management (~/.config/codeburn/)
currency.ts Currency conversion, exchange rates, Intl formatting
sqlite.ts SQLite adapter (node:sqlite)
source-cache.ts Persistent parse cache (manifest + per-source entries)
discovery-cache.ts Provider directory scan caching
parse-progress.ts Stderr progress bar for cache rebuilds
provider-colors.ts Provider color and label constants
plans.ts Subscription plan presets and validators
plan-usage.ts Billing period math and usage projection
fs-utils.ts Bounded file readers with stream support
providers/
types.ts Provider interface definitions
index.ts Provider registry (lazy-loads Cursor, OpenCode)
claude.ts Claude Code session discovery
codex.ts Codex session discovery and JSONL parsing
cursor.ts Cursor SQLite parsing, language extraction
opencode.ts OpenCode SQLite session discovery and parsing
pi.ts Pi/OMP agent JSONL session discovery and parsing
types.ts Provider interface definitions
index.ts Provider registry (lazy-loads Cursor, cursor-agent, OpenCode)
claude.ts Claude Code session discovery
codex.ts Codex session discovery and JSONL parsing
copilot.ts GitHub Copilot session state parsing
cursor.ts Cursor SQLite parsing, language extraction
cursor-agent.ts cursor-agent CLI transcript parsing
opencode.ts OpenCode SQLite session discovery and parsing
pi.ts Pi/OMP agent JSONL session discovery and parsing
```
## Star History
<a href="https://www.star-history.com/?repos=getagentseal%2Fcodeburn&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/chart?repos=getagentseal/codeburn&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/chart?repos=getagentseal/codeburn&type=date&legend=top-left" />
<img alt="Star History Chart" src="https://api.star-history.com/chart?repos=getagentseal/codeburn&type=date&legend=top-left" />
</picture>
</a>
## License
MIT
## Credits
Inspired by [ccusage](https://github.com/ryoppippi/ccusage). Pricing data from [LiteLLM](https://github.com/BerriAI/litellm). Exchange rates from [Frankfurter](https://www.frankfurter.app/).
Inspired by [ccusage](https://github.com/ryoppippi/ccusage) and [CodexBar](https://github.com/nicklama/codexbar). Pricing data from [LiteLLM](https://github.com/BerriAI/litellm). Exchange rates from [Frankfurter](https://www.frankfurter.app/).
Built by [AgentSeal](https://agentseal.org).

BIN
assets/menubar-0.8.0.png Normal file

Binary file not shown (760 KiB).

20
bin/codeburn Executable file

@@ -0,0 +1,20 @@
#!/usr/bin/env node
import { homedir } from "node:os";
try {
  process.cwd();
} catch (error) {
  if (
    error &&
    typeof error === "object" &&
    "code" in error &&
    error.code === "ENOENT"
  ) {
    process.chdir(homedir());
  } else {
    throw error;
  }
}
await import("../dist/cli.js");

File diff suppressed because it is too large.


@@ -0,0 +1,266 @@
# Model Comparison Design
Compare two AI models side-by-side using normalized metrics derived from real
usage data. Answers "is Opus 4.7 actually better than 4.6 for my workflow?"
with hard numbers instead of vibes.
## Goals
1. Let users pick any two models and see a fair, normalized comparison
2. Surface efficiency metrics that raw cost/token dashboards don't show
(one-shot rate, retry rate, self-correction rate)
3. Accessible from both the dashboard (press `c`) and standalone (`codeburn compare`)
4. Screenshot-friendly terminal output
## Non-Goals
- Multi-model comparison (3+) -- v2
- Time-frame filtering (`--period`) -- v2
- Charts/graphs in the comparison view -- v2
- Exporting comparison results to JSON/CSV -- v2
- Statistical significance testing (show sample sizes, let the user judge)
---
## 1. Entry Points
### Standalone command
```
codeburn compare [--provider <provider>] [--period <period>]
```
Period defaults to `all` (6 months). Provider defaults to `all`. Both flags
are accepted but optional. Launches the full-screen Ink TUI directly into the
model selection screen.
### Dashboard shortcut
Press `c` in the dashboard to switch to the compare view. Same component, same
flow. `Escape` returns to the dashboard (mirrors how `o` toggles optimize).
The status bar gains a `[c]ompare` hint next to the existing `[o]ptimize`.
---
## 2. Data Pipeline
### Aggregation
Reuse `parseAllSessions` to get `ProjectSummary[]` for the selected
period/provider. Then build per-model stats by iterating turns and calls:
```ts
type ModelStats = {
  model: string
  calls: number
  cost: number
  outputTokens: number
  inputTokens: number
  cacheReadTokens: number
  cacheWriteTokens: number
  totalTurns: number
  editTurns: number
  oneShotTurns: number    // edit turns with 0 retries
  retries: number         // total retry count
  selfCorrections: number // turns matching apology/mistake patterns
  firstSeen: string       // earliest timestamp (ISO)
  lastSeen: string        // latest timestamp (ISO)
}
```
Turn-level metrics are attributed to the primary model (first call in the
turn). This matches how the dashboard already attributes turns.
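The attribution rule can be sketched as follows. The `Call`/`Turn` shapes here are illustrative stand-ins for the real types in `src/types.ts`, and `aggregate` is a hypothetical name, not the final `aggregateModelStats` signature:

```typescript
// Illustrative stand-ins for the real parser types in src/types.ts.
type Call = { model: string; costUSD: number };
type Turn = { assistantCalls: Call[]; isEdit: boolean; retries: number };
type PerModel = { calls: number; cost: number; editTurns: number; oneShotTurns: number; retries: number };

const empty = (): PerModel => ({ calls: 0, cost: 0, editTurns: 0, oneShotTurns: 0, retries: 0 });

function aggregate(turns: Turn[]): Map<string, PerModel> {
  const byModel = new Map<string, PerModel>();
  const get = (model: string): PerModel => {
    let s = byModel.get(model);
    if (!s) { s = empty(); byModel.set(model, s); }
    return s;
  };
  for (const turn of turns) {
    if (turn.assistantCalls.length === 0) continue;
    // Call-level metrics go to each call's own model...
    for (const call of turn.assistantCalls) {
      const s = get(call.model);
      s.calls += 1;
      s.cost += call.costUSD;
    }
    // ...while turn-level metrics are attributed to the primary model,
    // i.e. the first call in the turn.
    const primary = get(turn.assistantCalls[0].model);
    if (turn.isEdit) {
      primary.editTurns += 1;
      if (turn.retries === 0) primary.oneShotTurns += 1;
      primary.retries += turn.retries;
    }
  }
  return byModel;
}
```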
### Self-correction detection
Scan assistant message text for patterns that indicate the model acknowledged
an error. These patterns were validated against real session data:
```ts
const SELF_CORRECTION_PATTERNS = [
  /\bI('m| am) sorry\b/i,
  /\bmy mistake\b/i,
  /\bmy apolog/i,
  /\bI made (a |an )?(error|mistake)\b/i,
  /\bI was wrong\b/i,
  /\bmy bad\b/i,
  /\bI apologize\b/i,
  /\bsorry about that\b/i,
  /\bsorry for (the|that|this)\b/i,
  /\bI should have\b/i,
  /\bI shouldn't have\b/i,
  /\bI incorrectly\b/i,
  /\bI mistakenly\b/i,
]
```
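As a sketch, the scanner can count a turn as at most one self-correction regardless of how many patterns or messages match inside it. Only a few of the patterns above are repeated here, and `countSelfCorrections` is an illustrative name rather than a committed API:

```typescript
// A subset of SELF_CORRECTION_PATTERNS, enough to illustrate the scan.
const PATTERNS = [
  /\bmy mistake\b/i,
  /\bI apologize\b/i,
  /\bsorry about that\b/i,
];

// Each inner array holds the assistant message texts of one turn.
// A turn counts once at most, however many patterns or messages match.
function countSelfCorrections(turnMessages: string[][]): number {
  let count = 0;
  for (const messages of turnMessages) {
    const matched = messages.some(text => PATTERNS.some(p => p.test(text)));
    if (matched) count += 1;
  }
  return count;
}
```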
This requires reading assistant message content from session JSONL files,
which the current parser does not expose. The aggregation function will need
to read raw JSONL entries for sessions that contain the selected models.
### Normalization
All comparison metrics are rates or per-call averages. Raw totals (cost, calls,
days) are shown as context, never as comparison metrics.
---
## 3. Comparison Metrics
| Metric | Formula | Better |
|---|---|---|
| Cost / call | `cost / calls` | Lower |
| Output tokens / call | `outputTokens / calls` | Lower |
| Cache hit rate | `cacheRead / (input + cacheRead + cacheWrite) * 100` | Higher |
| One-shot rate | `oneShotTurns / editTurns * 100` | Higher |
| Retry rate | `retries / editTurns` | Lower |
| Self-correction rate | `selfCorrections / totalTurns * 100` | Lower |
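Each formula above reduces to a guarded division over `ModelStats` fields. A minimal sketch, with helper names that are illustrative rather than taken from `compare-stats.ts`:

```typescript
// Guarded division: a zero denominator yields null, rendered as '-' in the UI.
const ratio = (num: number, den: number): number | null => (den === 0 ? null : num / den);

// The formulas from the table, applied to one model's stats.
function metricRows(s: {
  cost: number; calls: number; outputTokens: number; inputTokens: number;
  cacheReadTokens: number; cacheWriteTokens: number;
  oneShotTurns: number; editTurns: number; retries: number;
  selfCorrections: number; totalTurns: number;
}) {
  const cacheDen = s.inputTokens + s.cacheReadTokens + s.cacheWriteTokens;
  return {
    costPerCall: ratio(s.cost, s.calls),
    outputPerCall: ratio(s.outputTokens, s.calls),
    cacheHitRate: ratio(s.cacheReadTokens * 100, cacheDen),
    oneShotRate: ratio(s.oneShotTurns * 100, s.editTurns),
    retryRate: ratio(s.retries, s.editTurns),
    selfCorrectionRate: ratio(s.selfCorrections * 100, s.totalTurns),
  };
}
```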
### Context row (not compared)
Displayed below the table to give sample-size context:
- Total calls
- Total cost
- Days of data (lastSeen - firstSeen)
- Edit turns (denominator for one-shot/retry metrics)
---
## 4. UI Screens
### Model Selection Screen
```
Model Comparison

Select two models to compare:

    claude-opus-4-6      56,031 calls   $5,272
  > claude-opus-4-7       3,592 calls     $664   [selected]
    claude-sonnet-4-6     1,142 calls      $25
    claude-haiku-4-5        323 calls       $4
    gpt-5                   113 calls       $3   low data

  [space] select   [enter] compare   [esc] back   [q] quit
```
- Arrow keys navigate, spacebar toggles selection (max 2)
- Models sorted by cost descending (most-used first)
- Models with < 20 calls show "low data" dim label
- Enter is disabled until exactly 2 models are selected
- Filter out `<synthetic>` model entries
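The max-2 selection rule can live in a tiny pure helper; the name `toggleSelection` is illustrative, not a committed export of `compare.tsx`:

```typescript
// Toggle a model in the selection, enforcing the max-2 rule:
// selecting a third model is a no-op rather than an eviction.
function toggleSelection(selected: string[], model: string): string[] {
  if (selected.includes(model)) return selected.filter(m => m !== model);
  if (selected.length >= 2) return selected;
  return [...selected, model];
}
```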
### Loading Screen
```
Comparing claude-opus-4-6 vs claude-opus-4-7...
```
Simple spinner while aggregation runs. Should be fast (< 2 seconds) since
session data is already parsed.
### Comparison Results Screen
```
claude-opus-4-6 vs claude-opus-4-7

                       4.6         4.7
Cost / call         $0.094      $0.185     4.6 wins
Output tok / call      227         800     4.6 wins
Cache hit rate       98.4%       98.8%     4.7 wins
One-shot rate        88.8%       74.5%     4.6 wins
Retry rate            0.18        0.46     4.6 wins
Self-correction      0.18%       0.25%     4.6 wins

── Context ──────────────────────────────────
Calls               56,031       3,592
Cost             $5,272.13     $664.32
Days of data            60           3
Edit turns           1,577         102

[esc] back   [q] quit
```
- Winner column uses green text for the better model on each metric
- Model names in the header are shortened for display (drop `claude-` prefix)
- Context section is dimmed to visually separate it from the comparison
- If a metric can't be computed (e.g., 0 edit turns), show `-` instead
---
## 5. File Structure
```
src/compare.tsx       -- Ink components: ModelSelector, ComparisonResults,
                         CompareView (top-level), loading state
src/compare-stats.ts  -- aggregateModelStats(), computeComparison(),
                         self-correction pattern matching, ModelStats type
src/cli.ts            -- new `compare` command registration
src/dashboard.tsx     -- add 'c' keybinding, CompareView integration
```
### compare-stats.ts
Pure data module, no UI. Exports:
```ts
function aggregateModelStats(projects: ProjectSummary[]): ModelStats[]
function computeComparison(a: ModelStats, b: ModelStats): ComparisonRow[]
```
The self-correction scanner needs raw assistant message text from JSONL files.
Two options: (a) extend the parser to expose message text on turns during
initial parse, or (b) have `compare-stats.ts` re-read JSONL files via provider
session discovery (same mechanism the parser uses). Option (a) is cleaner but
increases memory for all commands; option (b) is isolated to compare. The
implementation plan should decide.
### compare.tsx
Three Ink components:
- `ModelSelector` -- arrow navigation, spacebar toggle, enter to confirm
- `ComparisonResults` -- the formatted table with color-coded winners
- `CompareView` -- orchestrates the flow (selection -> loading -> results)
Exported `renderCompare()` function for the standalone command, and
`CompareView` component for embedding in the dashboard.
---
## 6. Dashboard Integration
### Status bar
Add `[c]ompare` to the status bar, after `[o]ptimize`:
```
1-5 period   arrows switch   p provider   o optimize   c compare   q quit
```
### View state
Extend the existing `View` type:
```ts
type View = 'dashboard' | 'optimize' | 'compare'
```
Press `c` sets view to `'compare'`. Escape from compare returns to
`'dashboard'`. The compare view receives the already-parsed `projects`
from the dashboard state -- no re-parsing needed.
---
## 7. Edge Cases
- **Only one model in data**: Show message "Need at least 2 models to compare.
Only found: claude-opus-4-6"
- **Model with 0 edit turns**: Show `-` for one-shot rate and retry rate
- **Model with < 20 calls**: Show "low data" warning on selection screen;
allow selection but display a note on the results screen
- **Self-correction scanner fails to read JSONL**: Gracefully degrade --
show `-` for self-correction rate, don't block the rest of the comparison
- **Both models have identical metrics**: Show "tie" instead of a winner
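The winner/tie/`-` rules above collapse into one comparison helper; a sketch under the same assumed (not committed) names:

```typescript
type Direction = 'lower' | 'higher';

// Decide the winner for one metric row. A null value (an uncomputable
// metric, e.g. 0 edit turns) renders as '-' and produces no winner.
function winner(a: number | null, b: number | null, better: Direction): 'a' | 'b' | 'tie' | null {
  if (a === null || b === null) return null;
  if (a === b) return 'tie';
  const aWins = better === 'lower' ? a < b : a > b;
  return aWins ? 'a' : 'b';
}
```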


@@ -1,7 +1,7 @@
import Foundation
import Observation
private let cacheTTLSeconds: TimeInterval = 300
private let cacheTTLSeconds: TimeInterval = 30
struct CachedPayload {
let payload: MenubarPayload
@@ -52,17 +52,15 @@ final class AppStore {
payload.optimize.findingCount
}
/// Switch to a period. Uses cached payload if fresh; otherwise fetches.
/// Switch to a period. Always fetches fresh data so the user never sees stale numbers.
func switchTo(period: Period) async {
selectedPeriod = period
if let cached = cache[currentKey], cached.isFresh { return }
await refresh(includeOptimize: true)
}
/// Switch to a provider filter. Uses cached payload if fresh; otherwise fetches.
/// Switch to a provider filter. Always fetches fresh data so the user never sees stale numbers.
func switchTo(provider: ProviderFilter) async {
selectedProvider = provider
if let cached = cache[currentKey], cached.isFresh { return }
await refresh(includeOptimize: true)
}
@@ -75,10 +73,11 @@ final class AppStore {
let key = currentKey
guard !inFlightKeys.contains(key) else { return }
inFlightKeys.insert(key)
isLoading = true
let showLoading = cache[key] == nil
if showLoading { isLoading = true }
defer {
inFlightKeys.remove(key)
isLoading = false
if showLoading { isLoading = false }
}
do {
let fresh = try await DataClient.fetch(period: key.period, provider: key.provider, includeOptimize: includeOptimize)
@@ -90,6 +89,15 @@
}
}
/// Prefetch all periods so tab switching is instant. Skips any period already cached.
func prefetchAll() async {
for period in Period.allCases {
let key = PayloadCacheKey(period: period, provider: .all)
if cache[key] != nil { continue }
await refreshQuietly(period: period)
}
}
/// Background refresh for a period other than the visible one (e.g. keeping today fresh for the menubar badge).
/// Does not toggle isLoading, so the popover's loading overlay is unaffected.
/// Always uses the .all provider since the menubar badge shows total spend.


@@ -2,7 +2,7 @@ import SwiftUI
import AppKit
import Observation
private let refreshIntervalSeconds: UInt64 = 60
private let refreshIntervalSeconds: UInt64 = 15
private let nanosPerSecond: UInt64 = 1_000_000_000
private let refreshIntervalNanos: UInt64 = refreshIntervalSeconds * nanosPerSecond
/// Fixed so the popover's anchor point doesn't shift each time today's cost changes.
@@ -28,6 +28,7 @@ final class AppDelegate: NSObject, NSApplicationDelegate, NSPopoverDelegate {
private var statusItem: NSStatusItem!
private var popover: NSPopover!
private let store = AppStore()
let updateChecker = UpdateChecker()
private var refreshTask: Task<Void, Never>?
func applicationDidFinishLaunching(_ notification: Notification) {
@@ -39,8 +40,7 @@
setupPopover()
observeStore()
startRefreshLoop()
// Subscription is fetched lazily when the user opens the Plan pill, so the macOS
// Keychain prompt never fires until the user explicitly asks for it.
Task { await updateChecker.checkIfNeeded() }
}
/// Loads the currency code persisted by `codeburn currency` so a relaunch picks up where
@@ -71,17 +71,21 @@ final class AppDelegate: NSObject, NSApplicationDelegate, NSPopoverDelegate {
private func startRefreshLoop() {
refreshTask = Task { [weak self] in
guard let s = self else { return }
// First cycle: fetch current view, then prefetch all periods in background
await s.store.refreshQuietly(period: .today)
s.refreshStatusButton()
await s.store.refresh(includeOptimize: true)
s.refreshStatusButton()
await s.store.prefetchAll()
while !Task.isCancelled {
guard let self else { return }
// Always keep the (today, all) payload warm. The menubar title and the
// agent tab strip both read from it, so it has to refresh every cycle
// regardless of whether the user is currently viewing Today or a
// different period / provider.
await self.store.refreshQuietly(period: .today)
// Refresh the currently-viewed payload. Optimize is fast (~1s warm-cache)
// so include findings on every refresh.
await self.store.refresh(includeOptimize: true)
try? await Task.sleep(nanoseconds: refreshIntervalNanos)
guard let s = self else { return }
await s.store.refreshQuietly(period: .today)
s.refreshStatusButton()
await s.store.refresh(includeOptimize: true)
s.refreshStatusButton()
}
}
}
@@ -161,6 +165,7 @@ final class AppDelegate: NSObject, NSApplicationDelegate, NSPopoverDelegate {
let content = MenuBarContent()
.environment(store)
.environment(updateChecker)
.frame(width: popoverWidth)
popover.contentViewController = NSHostingController(rootView: content)


@@ -0,0 +1,104 @@
import Foundation
import Observation
private let releasesAPI = "https://api.github.com/repos/getagentseal/codeburn/releases/latest"
private let checkIntervalSeconds: TimeInterval = 2 * 24 * 60 * 60
private let lastCheckKey = "UpdateChecker.lastCheckDate"
private let cachedVersionKey = "UpdateChecker.latestVersion"
@MainActor
@Observable
final class UpdateChecker {
var latestVersion: String?
var isUpdating = false
var updateError: String?
var updateAvailable: Bool {
guard let latest = latestVersion else { return false }
let current = currentVersion
let normalizedLatest = latest.hasPrefix("v") ? String(latest.dropFirst()) : latest
let normalizedCurrent = current.hasPrefix("v") ? String(current.dropFirst()) : current
guard !normalizedCurrent.isEmpty && normalizedCurrent != "dev" else { return false }
return normalizedLatest.compare(normalizedCurrent, options: .numeric) == .orderedDescending
}
var currentVersion: String {
Bundle.main.infoDictionary?["CFBundleShortVersionString"] as? String ?? ""
}
func checkIfNeeded() async {
let lastCheck = UserDefaults.standard.double(forKey: lastCheckKey)
let now = Date().timeIntervalSince1970
if now - lastCheck < checkIntervalSeconds {
latestVersion = UserDefaults.standard.string(forKey: cachedVersionKey)
return
}
await check()
}
func check() async {
guard let url = URL(string: releasesAPI) else { return }
var request = URLRequest(url: url)
request.setValue("codeburn-menubar-updater", forHTTPHeaderField: "User-Agent")
request.setValue("application/vnd.github+json", forHTTPHeaderField: "Accept")
do {
let (data, _) = try await URLSession.shared.data(for: request)
let release = try JSONDecoder().decode(GitHubRelease.self, from: data)
guard let asset = release.assets.first(where: {
$0.name.hasPrefix("CodeBurnMenubar-") && $0.name.hasSuffix(".zip")
}) else { return }
let version = asset.name
.replacingOccurrences(of: "CodeBurnMenubar-", with: "")
.replacingOccurrences(of: ".zip", with: "")
latestVersion = version
UserDefaults.standard.set(Date().timeIntervalSince1970, forKey: lastCheckKey)
UserDefaults.standard.set(version, forKey: cachedVersionKey)
} catch {
NSLog("CodeBurn: update check failed: \(error)")
}
}
func performUpdate() {
isUpdating = true
updateError = nil
let process = CodeburnCLI.makeProcess(subcommand: ["menubar", "--force"])
let errPipe = Pipe()
process.standardOutput = FileHandle.nullDevice
process.standardError = errPipe
process.terminationHandler = { [weak self] proc in
let errData = errPipe.fileHandleForReading.readDataToEndOfFile()
let stderr = String(data: errData, encoding: .utf8) ?? ""
Task { @MainActor in
guard let self else { return }
if proc.terminationStatus != 0 {
self.isUpdating = false
self.updateError = stderr.isEmpty ? "Update failed (exit \(proc.terminationStatus))" : stderr
NSLog("CodeBurn: update failed (exit \(proc.terminationStatus)): \(stderr)")
}
}
}
do {
try process.run()
} catch {
isUpdating = false
updateError = error.localizedDescription
NSLog("CodeBurn: update spawn failed: \(error)")
}
}
}
private struct GitHubRelease: Decodable {
let tag_name: String
let assets: [GitHubAsset]
}
private struct GitHubAsset: Decodable {
let name: String
let browser_download_url: String
}


@@ -344,7 +344,7 @@ private struct BarTooltipCard: View {
private func prettyDate(_ ymd: String) -> String {
let parser = DateFormatter()
parser.dateFormat = "yyyy-MM-dd"
parser.timeZone = TimeZone(identifier: "UTC")
parser.timeZone = .current
guard let date = parser.date(from: ymd) else { return ymd }
let display = DateFormatter()
display.dateFormat = "EEE MMM d"
@@ -392,11 +392,11 @@ private struct TrendStats {
private func buildTrendBars(from days: [DailyHistoryEntry]) -> [TrendBar] {
var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "UTC")!
calendar.timeZone = .current
let formatter: DateFormatter = {
let f = DateFormatter()
f.dateFormat = "yyyy-MM-dd"
f.timeZone = TimeZone(identifier: "UTC")
f.timeZone = .current
return f
}()
let entryByDate = Dictionary(uniqueKeysWithValues: days.map { ($0.date, $0) })
@@ -427,11 +427,11 @@ private func computeTrendStats(bars: [TrendBar], allDays: [DailyHistoryEntry]) -
let peak = bars.filter { $0.cost > 0 }.max(by: { $0.cost < $1.cost })
var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "UTC")!
calendar.timeZone = .current
let formatter: DateFormatter = {
let f = DateFormatter()
f.dateFormat = "yyyy-MM-dd"
f.timeZone = TimeZone(identifier: "UTC")
f.timeZone = .current
return f
}()
let today = calendar.startOfDay(for: Date())
@@ -547,11 +547,11 @@ private struct ForecastStats {
private func computeForecast(days: [DailyHistoryEntry]) -> ForecastStats {
var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "UTC")!
calendar.timeZone = .current
let formatter: DateFormatter = {
let f = DateFormatter()
f.dateFormat = "yyyy-MM-dd"
f.timeZone = TimeZone(identifier: "UTC")
f.timeZone = .current
return f
}()
let now = Date()
@@ -798,17 +798,17 @@ private func computeAllStats(payload: MenubarPayload) -> AllStats {
let favoriteModel = payload.current.topModels.first?.name ?? ""
var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "UTC")!
calendar.timeZone = .current
let formatter: DateFormatter = {
let f = DateFormatter()
f.dateFormat = "yyyy-MM-dd"
f.timeZone = TimeZone(identifier: "UTC")
f.timeZone = .current
return f
}()
let displayFormatter: DateFormatter = {
let f = DateFormatter()
f.dateFormat = "MMM d"
f.timeZone = TimeZone(identifier: "UTC")
f.timeZone = .current
return f
}()
@@ -1041,7 +1041,6 @@ private struct PlanLoadingView: View {
private struct PlanNoCredentialsView: View {
@Environment(AppStore.self) private var store
@State private var showManualFallback = false
var body: some View {
VStack(spacing: 8) {
@@ -1051,32 +1050,17 @@
Text("No Claude subscription connected")
.font(.system(size: 12, weight: .semibold))
.foregroundStyle(.primary)
if showManualFallback {
Text("Terminal.app isn't available. Open your terminal and run `claude login`, then click Retry.")
.font(.system(size: 10.5))
.foregroundStyle(.secondary)
.multilineTextAlignment(.center)
.frame(maxWidth: 280)
} else {
Text("Click Connect to sign in with Claude, then return here.")
.font(.system(size: 10.5))
.foregroundStyle(.secondary)
.multilineTextAlignment(.center)
.frame(maxWidth: 260)
}
HStack(spacing: 8) {
Button("Connect Claude") {
if !TerminalLauncher.openClaudeLogin() { showManualFallback = true }
}
.controlSize(.small)
.buttonStyle(.borderedProminent)
.tint(Theme.brandAccent)
Button("Retry") {
Task { await store.refreshSubscription() }
}
.controlSize(.small)
.buttonStyle(.bordered)
Text("Sign in with Claude Code, then click Retry.")
.font(.system(size: 10.5))
.foregroundStyle(.secondary)
.multilineTextAlignment(.center)
.frame(maxWidth: 260)
Button("Retry") {
Task { await store.refreshSubscription() }
}
.controlSize(.small)
.buttonStyle(.borderedProminent)
.tint(Theme.brandAccent)
}
.frame(maxWidth: .infinity)
.padding(.vertical, 14)
@@ -1086,7 +1070,6 @@
private struct PlanFailedView: View {
@Environment(AppStore.self) private var store
let error: String?
@State private var showManualFallback = false
var body: some View {
VStack(spacing: 8) {
@@ -1096,13 +1079,7 @@
Text("Couldn't load plan data")
.font(.system(size: 12, weight: .semibold))
.foregroundStyle(.primary)
if showManualFallback {
Text("Terminal.app isn't available. Open your terminal and run `claude login`, then click Retry.")
.font(.system(size: 10.5))
.foregroundStyle(.secondary)
.multilineTextAlignment(.center)
.frame(maxWidth: 280)
} else if let error {
if let error {
Text(error)
.font(.system(size: 10))
.foregroundStyle(.tertiary)
@@ -1110,19 +1087,12 @@
.frame(maxWidth: 280)
.lineLimit(3)
}
HStack(spacing: 8) {
Button("Reconnect Claude") {
if !TerminalLauncher.openClaudeLogin() { showManualFallback = true }
}
.controlSize(.small)
.buttonStyle(.borderedProminent)
.tint(Theme.brandAccent)
Button("Retry") {
Task { await store.refreshSubscription() }
}
.controlSize(.small)
.buttonStyle(.bordered)
Button("Retry") {
Task { await store.refreshSubscription() }
}
.controlSize(.small)
.buttonStyle(.borderedProminent)
.tint(Theme.brandAccent)
}
.frame(maxWidth: .infinity)
.padding(.vertical, 14)


@@ -183,25 +183,61 @@ private struct BurnFlame: View {
}
private struct Header: View {
@Environment(UpdateChecker.self) private var updateChecker
var body: some View {
VStack(alignment: .leading, spacing: 1) {
(
Text("Code").foregroundStyle(.primary)
+ Text("Burn").foregroundStyle(Theme.brandAccent)
)
.font(.system(size: 13, weight: .semibold))
.tracking(-0.15)
Text("AI Coding Cost Tracker")
.font(.system(size: 10.5))
.foregroundStyle(.secondary)
HStack {
VStack(alignment: .leading, spacing: 1) {
(
Text("Code").foregroundStyle(.primary)
+ Text("Burn").foregroundStyle(Theme.brandAccent)
)
.font(.system(size: 13, weight: .semibold))
.tracking(-0.15)
Text("AI Coding Cost Tracker")
.font(.system(size: 10.5))
.foregroundStyle(.secondary)
}
Spacer()
if updateChecker.updateAvailable {
UpdateBadge()
}
}
.frame(maxWidth: .infinity, alignment: .leading)
.padding(.horizontal, 14)
.padding(.top, 10)
.padding(.bottom, 8)
}
}
private struct UpdateBadge: View {
@Environment(UpdateChecker.self) private var updateChecker
var body: some View {
Button {
updateChecker.performUpdate()
} label: {
HStack(spacing: 4) {
if updateChecker.isUpdating {
ProgressView()
.controlSize(.mini)
.scaleEffect(0.7)
} else {
Image(systemName: "arrow.down.circle.fill")
.font(.system(size: 10))
}
Text(updateChecker.isUpdating ? "Updating..." : "Update")
.font(.system(size: 10, weight: .medium))
}
.padding(.horizontal, 8)
.padding(.vertical, 4)
}
.buttonStyle(.borderedProminent)
.tint(Theme.brandAccent)
.controlSize(.mini)
.disabled(updateChecker.isUpdating)
}
}
struct FlameMark: View {
var body: some View {
ZStack {


@@ -1,14 +1,15 @@
{
"name": "codeburn",
"version": "0.7.3",
"version": "0.8.4",
"description": "See where your AI coding tokens go - by task, tool, model, and project",
"type": "module",
"main": "./dist/cli.js",
"bin": {
"codeburn": "dist/cli.js"
"codeburn": "bin/codeburn"
},
"files": [
"dist"
"dist",
"bin"
],
"scripts": {
"build": "tsup",


@@ -8,13 +8,17 @@ import { renderStatusBar } from './format.js'
import { type PeriodData, type ProviderCost } from './menubar-json.js'
import { buildMenubarPayload } from './menubar-json.js'
import { addNewDays, getDaysInRange, loadDailyCache, saveDailyCache, withDailyCacheLock } from './daily-cache.js'
import { aggregateProjectsIntoDays, buildPeriodDataFromDays } from './day-aggregator.js'
import { aggregateProjectsIntoDays, buildPeriodDataFromDays, dateKey } from './day-aggregator.js'
import { CATEGORY_LABELS, type DateRange, type ProjectSummary, type TaskCategory } from './types.js'
import { renderDashboard } from './dashboard.js'
import { parseDateRangeFlags } from './cli-date.js'
import { createTerminalProgressReporter } from './parse-progress.js'
import { runOptimize, scanAndDetect } from './optimize.js'
import { renderCompare } from './compare.js'
import { getAllProviders } from './providers/index.js'
import { readConfig, saveConfig, getConfigFilePath } from './config.js'
import { clearPlan, readConfig, readPlan, saveConfig, savePlan, getConfigFilePath, type PlanId } from './config.js'
import { clampResetDay, getPlanUsageOrNull, type PlanUsage } from './plan-usage.js'
import { getPresetPlan, isPlanId, isPlanProvider, planDisplayName } from './plans.js'
import { createRequire } from 'node:module'
const require = createRequire(import.meta.url)
@@ -25,7 +29,7 @@ const MS_PER_DAY = 24 * 60 * 60 * 1000
const BACKFILL_DAYS = 365
function toDateString(date: Date): string {
return date.toISOString().slice(0, 10)
return `${date.getFullYear()}-${String(date.getMonth() + 1).padStart(2, '0')}-${String(date.getDate()).padStart(2, '0')}`
}
function getDateRange(period: string): { range: DateRange; label: string } {
@@ -35,12 +39,12 @@ function getDateRange(period: string): { range: DateRange; label: string } {
switch (period) {
case 'today': {
const start = new Date(now.getFullYear(), now.getMonth(), now.getDate())
return { range: { start, end }, label: `Today (${start.toISOString().slice(0, 10)})` }
return { range: { start, end }, label: `Today (${toDateString(start)})` }
}
case 'yesterday': {
const start = new Date(now.getFullYear(), now.getMonth(), now.getDate() - 1)
const yesterdayEnd = new Date(now.getFullYear(), now.getMonth(), now.getDate() - 1, 23, 59, 59, 999)
return { range: { start, end: yesterdayEnd }, label: `Yesterday (${start.toISOString().slice(0, 10)})` }
return { range: { start, end: yesterdayEnd }, label: `Yesterday (${toDateString(start)})` }
}
case 'week': {
const start = new Date(now.getFullYear(), now.getMonth(), now.getDate() - 7)
@@ -83,11 +87,65 @@ function collect(val: string, acc: string[]): string[] {
return acc
}
async function runJsonReport(period: Period, provider: string, project: string[], exclude: string[]): Promise<void> {
function parseNumber(value: string): number {
return Number(value)
}
function parseInteger(value: string): number {
return parseInt(value, 10)
}
type JsonPlanSummary = {
id: PlanId
budget: number
spent: number
percentUsed: number
status: 'under' | 'near' | 'over'
projectedMonthEnd: number
daysUntilReset: number
periodStart: string
periodEnd: string
}
function toJsonPlanSummary(planUsage: PlanUsage): JsonPlanSummary {
return {
id: planUsage.plan.id,
budget: convertCost(planUsage.budgetUsd),
spent: convertCost(planUsage.spentApiEquivalentUsd),
percentUsed: Math.round(planUsage.percentUsed * 10) / 10,
status: planUsage.status,
projectedMonthEnd: convertCost(planUsage.projectedMonthUsd),
daysUntilReset: planUsage.daysUntilReset,
periodStart: planUsage.periodStart.toISOString(),
periodEnd: planUsage.periodEnd.toISOString(),
}
}
async function runJsonReport(period: Period, provider: string, project: string[], exclude: string[], noCache = false): Promise<void> {
await loadPricing()
const { range, label } = getDateRange(period)
const projects = filterProjectsByName(await parseAllSessions(range, provider), project, exclude)
console.log(JSON.stringify(buildJsonReport(projects, label, period), null, 2))
const projects = filterProjectsByName(
await parseAllSessions(range, provider, { noCache, progress: null }),
project,
exclude,
)
const report: ReturnType<typeof buildJsonReport> & { plan?: JsonPlanSummary } = buildJsonReport(projects, label, period)
const planUsage = await getPlanUsageOrNull()
if (planUsage) {
report.plan = toJsonPlanSummary(planUsage)
}
console.log(JSON.stringify(report, null, 2))
}
function noCacheRequested(opts: { cache?: boolean }): boolean {
return opts.cache === false
}
function buildParseOptions(noCache: boolean, enableProgress: boolean) {
return {
noCache,
progress: createTerminalProgressReporter(enableProgress),
}
}
const program = new Command()
@@ -125,7 +183,7 @@ function buildJsonReport(projects: ProjectSummary[], period: string, periodKey:
for (const sess of sessions) {
for (const turn of sess.turns) {
if (!turn.timestamp) { continue }
const day = turn.timestamp.slice(0, 10)
const day = dateKey(turn.timestamp)
if (!dailyMap[day]) { dailyMap[day] = { cost: 0, calls: 0 } }
for (const call of turn.assistantCalls) {
dailyMap[day].cost += call.costUSD
@@ -206,7 +264,7 @@ function buildJsonReport(projects: ProjectSummary[], period: string, periodKey:
Object.entries(m).sort(([, a], [, b]) => b - a).map(([name, calls]) => ({ name, calls }))
const topSessions = projects
.flatMap(p => p.sessions.map(s => ({ project: p.project, sessionId: s.sessionId, date: s.firstTimestamp?.slice(0, 10) ?? null, cost: convertCost(s.totalCostUSD), calls: s.apiCalls })))
.flatMap(p => p.sessions.map(s => ({ project: p.project, sessionId: s.sessionId, date: s.firstTimestamp ? dateKey(s.firstTimestamp) : null, cost: convertCost(s.totalCostUSD), calls: s.apiCalls })))
.sort((a, b) => b.cost - a.cost)
.slice(0, 5)
@@ -248,8 +306,10 @@ program
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--refresh <seconds>', 'Auto-refresh interval in seconds', parseInt)
.option('--no-cache', 'Rebuild the parsed source cache for this run')
.option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInt, 30)
.action(async (opts) => {
const noCache = noCacheRequested(opts)
let customRange: DateRange | null = null
try {
customRange = parseDateRangeFlags(opts.from, opts.to)
@@ -265,17 +325,17 @@ program
if (customRange) {
const label = `${opts.from ?? 'all'} to ${opts.to ?? 'today'}`
const projects = filterProjectsByName(
await parseAllSessions(customRange, opts.provider),
await parseAllSessions(customRange, opts.provider, { noCache, progress: null }),
opts.project,
opts.exclude,
)
console.log(JSON.stringify(buildJsonReport(projects, label, 'custom'), null, 2))
} else {
await runJsonReport(period, opts.provider, opts.project, opts.exclude)
await runJsonReport(period, opts.provider, opts.project, opts.exclude, noCache)
}
return
}
await renderDashboard(period, opts.provider, opts.refresh, opts.project, opts.exclude, customRange)
await renderDashboard(period, opts.provider, opts.refresh, opts.project, opts.exclude, customRange, noCache)
})
function buildPeriodData(label: string, projects: ProjectSummary[]): PeriodData {
@@ -327,8 +387,11 @@ program
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--period <period>', 'Primary period for menubar-json: today, week, 30days, month, all', 'today')
.option('--no-optimize', 'Skip optimize findings (menubar-json only, faster)')
.option('--no-cache', 'Rebuild the parsed source cache for this run')
.action(async (opts) => {
await loadPricing()
const noCache = noCacheRequested(opts)
const parseOptions = buildParseOptions(noCache, opts.format === 'terminal')
const pf = opts.provider
const fp = (p: ProjectSummary[]) => filterProjectsByName(p, opts.project, opts.exclude)
if (opts.format === 'menubar-json') {
@@ -341,15 +404,29 @@ program
// The daily cache is provider-agnostic: always backfill it from .all so subsequent
// provider-filtered reads can derive per-provider cost+calls from DailyEntry.providers.
// Yesterday is always recomputed: it may have been cached mid-day with partial data.
const cache = await withDailyCacheLock(async () => {
let c = await loadDailyCache()
// Evict yesterday (and any stale future entries) so the gap fill recomputes them.
const hadYesterday = c.days.some(d => d.date >= yesterdayStr)
if (hadYesterday) {
const freshDays = c.days.filter(d => d.date < yesterdayStr)
const latestFresh = freshDays.length > 0 ? freshDays[freshDays.length - 1].date : null
c = { ...c, days: freshDays, lastComputedDate: latestFresh }
}
const gapStart = c.lastComputedDate
? new Date(new Date(`${c.lastComputedDate}T00:00:00.000Z`).getTime() + MS_PER_DAY)
? new Date(
parseInt(c.lastComputedDate.slice(0, 4)),
parseInt(c.lastComputedDate.slice(5, 7)) - 1,
parseInt(c.lastComputedDate.slice(8, 10)) + 1
)
: new Date(todayStart.getTime() - BACKFILL_DAYS * MS_PER_DAY)
if (gapStart.getTime() <= yesterdayEnd.getTime()) {
const gapRange: DateRange = { start: gapStart, end: yesterdayEnd }
const gapProjects = filterProjectsByName(await parseAllSessions(gapRange, 'all'), opts.project, opts.exclude)
const gapProjects = filterProjectsByName(await parseAllSessions(gapRange, 'all', { noCache, progress: null }), opts.project, opts.exclude)
const gapDays = aggregateProjectsIntoDays(gapProjects)
c = addNewDays(c, gapDays, yesterdayStr)
await saveDailyCache(c)
@@ -366,7 +443,7 @@ program
if (isAllProviders) {
const todayRange: DateRange = { start: todayStart, end: now }
const todayProjects = fp(await parseAllSessions(todayRange, 'all'))
const todayProjects = fp(await parseAllSessions(todayRange, 'all', { noCache, progress: null }))
const todayDays = aggregateProjectsIntoDays(todayProjects)
const rangeStartStr = toDateString(periodInfo.range.start)
const rangeEndStr = toDateString(periodInfo.range.end)
@@ -377,7 +454,7 @@ program
scanProjects = todayProjects
scanRange = todayRange
} else {
const projects = fp(await parseAllSessions(periodInfo.range, pf))
const projects = fp(await parseAllSessions(periodInfo.range, pf, { noCache, progress: null }))
currentData = buildPeriodData(periodInfo.label, projects)
scanProjects = projects
scanRange = periodInfo.range
@@ -391,7 +468,7 @@ program
const providers: ProviderCost[] = []
if (isAllProviders) {
const todayRangeForProviders: DateRange = { start: todayStart, end: now }
const todayDaysForProviders = aggregateProjectsIntoDays(fp(await parseAllSessions(todayRangeForProviders, 'all')))
const todayDaysForProviders = aggregateProjectsIntoDays(fp(await parseAllSessions(todayRangeForProviders, 'all', { noCache, progress: null })))
const rangeStartStr = toDateString(periodInfo.range.start)
const allDaysForProviders = [
...getDaysInRange(cache, rangeStartStr, yesterdayStr),
@@ -422,7 +499,7 @@ program
// in the cache, so the filtered view shows zero tokens (heatmap/trend still works on cost).
const historyStartStr = toDateString(new Date(todayStart.getTime() - BACKFILL_DAYS * MS_PER_DAY))
const allCacheDays = getDaysInRange(cache, historyStartStr, yesterdayStr)
- const allTodayDaysForHistory = aggregateProjectsIntoDays(fp(await parseAllSessions({ start: todayStart, end: now }, 'all')))
+ const allTodayDaysForHistory = aggregateProjectsIntoDays(fp(await parseAllSessions({ start: todayStart, end: now }, 'all', { noCache, progress: null })))
const fullHistory = [...allCacheDays, ...allTodayDaysForHistory]
const dailyHistory = fullHistory.map(d => {
if (isAllProviders) {
@@ -467,18 +544,28 @@ program
}
if (opts.format === 'json') {
- const todayData = buildPeriodData('today', fp(await parseAllSessions(getDateRange('today').range, pf)))
- const monthData = buildPeriodData('month', fp(await parseAllSessions(getDateRange('month').range, pf)))
+ const todayData = buildPeriodData('today', fp(await parseAllSessions(getDateRange('today').range, pf, { noCache, progress: null })))
+ const monthData = buildPeriodData('month', fp(await parseAllSessions(getDateRange('month').range, pf, { noCache, progress: null })))
const { code, rate } = getCurrency()
- console.log(JSON.stringify({
+ const payload: {
+ currency: string
+ today: { cost: number; calls: number }
+ month: { cost: number; calls: number }
+ plan?: JsonPlanSummary
+ } = {
currency: code,
today: { cost: Math.round(todayData.cost * rate * 100) / 100, calls: todayData.calls },
month: { cost: Math.round(monthData.cost * rate * 100) / 100, calls: monthData.calls },
- }))
+ }
+ const planUsage = await getPlanUsageOrNull()
+ if (planUsage) {
+ payload.plan = toJsonPlanSummary(planUsage)
+ }
+ console.log(JSON.stringify(payload))
return
}
- const monthProjects = fp(await parseAllSessions(getDateRange('month').range, pf))
+ const monthProjects = fp(await parseAllSessions(getDateRange('month').range, pf, parseOptions))
console.log(renderStatusBar(monthProjects))
})
@@ -489,13 +576,15 @@ program
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
- .option('--refresh <seconds>', 'Auto-refresh interval in seconds', parseInt)
+ .option('--no-cache', 'Rebuild the parsed source cache for this run')
+ .option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInt, 30)
.action(async (opts) => {
+ const noCache = noCacheRequested(opts)
if (opts.format === 'json') {
- await runJsonReport('today', opts.provider, opts.project, opts.exclude)
+ await runJsonReport('today', opts.provider, opts.project, opts.exclude, noCache)
return
}
- await renderDashboard('today', opts.provider, opts.refresh, opts.project, opts.exclude)
+ await renderDashboard('today', opts.provider, opts.refresh, opts.project, opts.exclude, null, noCache)
})
program
@@ -505,13 +594,15 @@ program
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
- .option('--refresh <seconds>', 'Auto-refresh interval in seconds', parseInt)
+ .option('--no-cache', 'Rebuild the parsed source cache for this run')
+ .option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInt, 30)
.action(async (opts) => {
+ const noCache = noCacheRequested(opts)
if (opts.format === 'json') {
- await runJsonReport('month', opts.provider, opts.project, opts.exclude)
+ await runJsonReport('month', opts.provider, opts.project, opts.exclude, noCache)
return
}
- await renderDashboard('month', opts.provider, opts.refresh, opts.project, opts.exclude)
+ await renderDashboard('month', opts.provider, opts.refresh, opts.project, opts.exclude, null, noCache)
})
program
@@ -522,14 +613,16 @@ program
.option('--provider <provider>', 'Filter by provider: all, claude, codex, cursor', 'all')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
+ .option('--no-cache', 'Rebuild the parsed source cache for this run')
.action(async (opts) => {
await loadPricing()
+ const parseOptions = buildParseOptions(noCacheRequested(opts), true)
const pf = opts.provider
const fp = (p: ProjectSummary[]) => filterProjectsByName(p, opts.project, opts.exclude)
const periods: PeriodExport[] = [
- { label: 'Today', projects: fp(await parseAllSessions(getDateRange('today').range, pf)) },
- { label: '7 Days', projects: fp(await parseAllSessions(getDateRange('week').range, pf)) },
- { label: '30 Days', projects: fp(await parseAllSessions(getDateRange('30days').range, pf)) },
+ { label: 'Today', projects: fp(await parseAllSessions(getDateRange('today').range, pf, parseOptions)) },
+ { label: '7 Days', projects: fp(await parseAllSessions(getDateRange('week').range, pf, parseOptions)) },
+ { label: '30 Days', projects: fp(await parseAllSessions(getDateRange('30days').range, pf, parseOptions)) },
]
if (periods.every(p => p.projects.length === 0)) {
@@ -537,7 +630,7 @@ program
return
}
- const defaultName = `codeburn-${new Date().toISOString().slice(0, 10)}`
+ const defaultName = `codeburn-${toDateString(new Date())}`
const outputPath = opts.output ?? `${defaultName}.${opts.format}`
let savedPath: string
@@ -675,16 +768,148 @@ program
console.log(` Config: ${getConfigFilePath()}\n`)
})
program
.command('plan [action] [id]')
.description('Show or configure a subscription plan for overage tracking')
.option('--format <format>', 'Output format: text or json', 'text')
.option('--monthly-usd <n>', 'Monthly plan price in USD (for custom)', parseNumber)
.option('--provider <name>', 'Provider scope: all, claude, codex, cursor', 'all')
.option('--reset-day <n>', 'Day of month plan resets (1-28)', parseInteger, 1)
.action(async (action?: string, id?: string, opts?: { format?: string; monthlyUsd?: number; provider?: string; resetDay?: number }) => {
const mode = action ?? 'show'
if (mode === 'show') {
const plan = await readPlan()
const displayPlan = !plan || plan.id === 'none'
? { id: 'none', monthlyUsd: 0, provider: 'all', resetDay: 1, setAt: null }
: {
id: plan.id,
monthlyUsd: plan.monthlyUsd,
provider: plan.provider,
resetDay: clampResetDay(plan.resetDay),
setAt: plan.setAt,
}
if (opts?.format === 'json') {
console.log(JSON.stringify(displayPlan))
return
}
if (!plan || plan.id === 'none') {
console.log('\n Plan: none')
console.log(' API-pricing view is active.')
console.log(` Config: ${getConfigFilePath()}\n`)
return
}
console.log(`\n Plan: ${planDisplayName(plan.id)} (${plan.id})`)
console.log(` Budget: $${plan.monthlyUsd}/month`)
console.log(` Provider: ${plan.provider}`)
console.log(` Reset day: ${clampResetDay(plan.resetDay)}`)
console.log(` Set at: ${plan.setAt}`)
console.log(` Config: ${getConfigFilePath()}\n`)
return
}
if (mode === 'reset') {
await clearPlan()
console.log('\n Plan reset. API-pricing view is active.\n')
return
}
if (mode !== 'set') {
console.error('\n Usage: codeburn plan [set <id> | reset]\n')
process.exitCode = 1
return
}
if (!id || !isPlanId(id)) {
console.error(`\n Plan id must be one of: claude-pro, claude-max, cursor-pro, custom, none; got "${id ?? ''}".\n`)
process.exitCode = 1
return
}
const resetDay = opts?.resetDay ?? 1
if (!Number.isInteger(resetDay) || resetDay < 1 || resetDay > 28) {
console.error(`\n --reset-day must be an integer from 1 to 28; got ${resetDay}.\n`)
process.exitCode = 1
return
}
if (id === 'none') {
await clearPlan()
console.log('\n Plan reset. API-pricing view is active.\n')
return
}
if (id === 'custom') {
if (opts?.monthlyUsd === undefined) {
console.error('\n Custom plans require --monthly-usd <positive number>.\n')
process.exitCode = 1
return
}
const monthlyUsd = opts.monthlyUsd
if (!Number.isFinite(monthlyUsd) || monthlyUsd <= 0) {
console.error(`\n --monthly-usd must be a positive number; got ${opts.monthlyUsd}.\n`)
process.exitCode = 1
return
}
const provider = opts?.provider ?? 'all'
if (!isPlanProvider(provider)) {
console.error(`\n --provider must be one of: all, claude, codex, cursor; got "${provider}".\n`)
process.exitCode = 1
return
}
await savePlan({
id: 'custom',
monthlyUsd,
provider,
resetDay,
setAt: new Date().toISOString(),
})
console.log(`\n Plan set to custom ($${monthlyUsd}/month, ${provider}, reset day ${resetDay}).`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
return
}
const preset = getPresetPlan(id)
if (!preset) {
console.error(`\n Unknown preset "${id}".\n`)
process.exitCode = 1
return
}
await savePlan({
...preset,
resetDay,
setAt: new Date().toISOString(),
})
console.log(`\n Plan set to ${planDisplayName(preset.id)} ($${preset.monthlyUsd}/month).`)
console.log(` Provider: ${preset.provider}`)
console.log(` Reset day: ${resetDay}`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
})
program
.command('optimize')
.description('Find token waste and get exact fixes')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', '30days')
.option('--provider <provider>', 'Filter by provider: all, claude, codex, cursor', 'all')
+ .option('--no-cache', 'Rebuild the parsed source cache for this run')
.action(async (opts) => {
await loadPricing()
const { range, label } = getDateRange(opts.period)
- const projects = await parseAllSessions(range, opts.provider)
+ const projects = await parseAllSessions(range, opts.provider, buildParseOptions(noCacheRequested(opts), true))
await runOptimize(projects, label, range)
})
program
.command('compare')
.description('Compare two AI models side-by-side')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', 'all')
.option('--provider <provider>', 'Filter by provider: all, claude, codex, cursor', 'all')
.option('--no-cache', 'Rebuild the parsed source cache for this run')
.action(async (opts) => {
await loadPricing()
const { range } = getDateRange(opts.period)
await renderCompare(range, opts.provider, noCacheRequested(opts))
})
program.parse()
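The command hunks above thread a `noCache` flag into `parseAllSessions`, but the `noCacheRequested` helper itself is outside this diff. A minimal sketch, assuming it simply normalizes Commander's negated `--no-cache` flag (Commander exposes a `--no-x` option as `opts.x === false`, defaulting to `true`):

```typescript
// Hypothetical sketch of noCacheRequested, which the commands above call but
// this diff does not define. Commander maps `--no-cache` to `opts.cache`,
// so the helper likely just folds that negated flag into a positive boolean.
type CliOpts = { cache?: boolean }

function noCacheRequested(opts: CliOpts): boolean {
  // `cache` is false only when --no-cache was passed on the command line.
  return opts.cache === false
}

console.log(noCacheRequested({ cache: false })) // true: --no-cache was passed
console.log(noCacheRequested({ cache: true }))  // false: default run
```

This matches how every call site uses the result: a plain boolean forwarded as the `noCache` parse option.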

src/compare-stats.ts (new file, 398 lines)

@@ -0,0 +1,398 @@
import { readdir, readFile } from 'fs/promises'
import { join } from 'path'
import type { ProjectSummary } from './types.js'
const PLANNING_TOOLS = new Set(['TaskCreate', 'TaskUpdate', 'TodoWrite', 'EnterPlanMode', 'ExitPlanMode'])
export type ModelStats = {
model: string
calls: number
cost: number
outputTokens: number
inputTokens: number
cacheReadTokens: number
cacheWriteTokens: number
totalTurns: number
editTurns: number
oneShotTurns: number
retries: number
selfCorrections: number
editCost: number
firstSeen: string
lastSeen: string
}
export function aggregateModelStats(projects: ProjectSummary[]): ModelStats[] {
const byModel = new Map<string, ModelStats>()
const ensure = (model: string): ModelStats => {
let s = byModel.get(model)
if (!s) {
s = { model, calls: 0, cost: 0, outputTokens: 0, inputTokens: 0, cacheReadTokens: 0, cacheWriteTokens: 0, totalTurns: 0, editTurns: 0, oneShotTurns: 0, retries: 0, selfCorrections: 0, editCost: 0, firstSeen: '', lastSeen: '' }
byModel.set(model, s)
}
return s
}
for (const project of projects) {
for (const session of project.sessions) {
for (const turn of session.turns) {
if (turn.assistantCalls.length === 0) continue
const primaryModel = turn.assistantCalls[0]!.model
if (primaryModel === '<synthetic>') continue
const ms = ensure(primaryModel)
ms.totalTurns++
if (turn.hasEdits) {
ms.editTurns++
if (turn.retries === 0) ms.oneShotTurns++
for (const c of turn.assistantCalls) {
if (c.model !== '<synthetic>') ms.editCost += c.costUSD
}
}
ms.retries += turn.retries
for (const call of turn.assistantCalls) {
if (call.model === '<synthetic>') continue
const cs = call.model === primaryModel ? ms : ensure(call.model)
cs.calls++
cs.cost += call.costUSD
cs.outputTokens += call.usage.outputTokens
cs.inputTokens += call.usage.inputTokens
cs.cacheReadTokens += call.usage.cacheReadInputTokens
cs.cacheWriteTokens += call.usage.cacheCreationInputTokens
if (!cs.firstSeen || call.timestamp < cs.firstSeen) cs.firstSeen = call.timestamp
if (!cs.lastSeen || call.timestamp > cs.lastSeen) cs.lastSeen = call.timestamp
}
}
}
}
return [...byModel.values()].sort((a, b) => b.cost - a.cost)
}
export type ComparisonRow = {
section: string
label: string
valueA: number | null
valueB: number | null
formatFn: 'cost' | 'number' | 'percent' | 'decimal'
winner: 'a' | 'b' | 'tie' | 'none'
}
export type CategoryComparison = {
category: string
turnsA: number
editTurnsA: number
oneShotRateA: number | null
turnsB: number
editTurnsB: number
oneShotRateB: number | null
winner: 'a' | 'b' | 'tie' | 'none'
}
export type WorkingStyleRow = {
label: string
valueA: number | null
valueB: number | null
formatFn: ComparisonRow['formatFn']
}
type MetricDef = {
section: string
label: string
formatFn: ComparisonRow['formatFn']
higherIsBetter: boolean
compute: (s: ModelStats) => number | null
}
const METRICS: MetricDef[] = [
{
section: 'Performance',
label: 'One-shot rate',
formatFn: 'percent',
higherIsBetter: true,
compute: s => s.editTurns > 0 ? (s.oneShotTurns / s.editTurns) * 100 : null,
},
{
section: 'Performance',
label: 'Retry rate',
formatFn: 'decimal',
higherIsBetter: false,
compute: s => s.editTurns > 0 ? s.retries / s.editTurns : null,
},
{
section: 'Performance',
label: 'Self-correction',
formatFn: 'percent',
higherIsBetter: false,
compute: s => s.totalTurns > 0 ? (s.selfCorrections / s.totalTurns) * 100 : null,
},
{
section: 'Efficiency',
label: 'Cost / call',
formatFn: 'cost',
higherIsBetter: false,
compute: s => s.calls > 0 ? s.cost / s.calls : null,
},
{
section: 'Efficiency',
label: 'Cost / edit',
formatFn: 'cost',
higherIsBetter: false,
compute: s => s.editTurns > 0 ? s.editCost / s.editTurns : null,
},
{
section: 'Efficiency',
label: 'Output tok / call',
formatFn: 'number',
higherIsBetter: false,
compute: s => s.calls > 0 ? Math.round(s.outputTokens / s.calls) : null,
},
{
section: 'Efficiency',
label: 'Cache hit rate',
formatFn: 'percent',
higherIsBetter: true,
compute: s => {
const total = s.inputTokens + s.cacheReadTokens + s.cacheWriteTokens
return total > 0 ? (s.cacheReadTokens / total) * 100 : null
},
},
]
function pickWinner(valueA: number | null, valueB: number | null, higherIsBetter: boolean): ComparisonRow['winner'] {
if (valueA === null || valueB === null) return 'none'
if (valueA === valueB) return 'tie'
if (higherIsBetter) return valueA > valueB ? 'a' : 'b'
return valueA < valueB ? 'a' : 'b'
}
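`pickWinner` is small enough to exercise directly; this standalone copy of the function above shows the tie and missing-data handling:

```typescript
// Standalone copy of pickWinner from above, for a quick check of its edge cases.
function pickWinner(valueA: number | null, valueB: number | null, higherIsBetter: boolean): 'a' | 'b' | 'tie' | 'none' {
  if (valueA === null || valueB === null) return 'none' // missing data: no winner
  if (valueA === valueB) return 'tie'
  if (higherIsBetter) return valueA > valueB ? 'a' : 'b'
  return valueA < valueB ? 'a' : 'b' // lower-is-better metrics (cost, retries)
}

console.log(pickWinner(82.0, 61.5, true))  // 'a' — higher one-shot rate wins
console.log(pickWinner(0.42, 0.13, false)) // 'b' — lower cost per call wins
console.log(pickWinner(null, 50, true))    // 'none' — one side has no data
```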
export function computeComparison(a: ModelStats, b: ModelStats): ComparisonRow[] {
return METRICS.map(m => {
const valueA = m.compute(a)
const valueB = m.compute(b)
return {
section: m.section,
label: m.label,
valueA,
valueB,
formatFn: m.formatFn,
winner: pickWinner(valueA, valueB, m.higherIsBetter),
}
})
}
export function computeCategoryComparison(projects: ProjectSummary[], modelA: string, modelB: string): CategoryComparison[] {
type Accum = { turns: number; editTurns: number; oneShotTurns: number }
const mapA = new Map<string, Accum>()
const mapB = new Map<string, Accum>()
const ensure = (map: Map<string, Accum>, cat: string): Accum => {
let a = map.get(cat)
if (!a) { a = { turns: 0, editTurns: 0, oneShotTurns: 0 }; map.set(cat, a) }
return a
}
for (const project of projects) {
for (const session of project.sessions) {
for (const turn of session.turns) {
if (turn.assistantCalls.length === 0) continue
const primary = turn.assistantCalls[0]!.model
if (primary !== modelA && primary !== modelB) continue
const acc = ensure(primary === modelA ? mapA : mapB, turn.category)
acc.turns++
if (turn.hasEdits) {
acc.editTurns++
if (turn.retries === 0) acc.oneShotTurns++
}
}
}
}
const allCats = new Set([...mapA.keys(), ...mapB.keys()])
const result: CategoryComparison[] = []
for (const category of allCats) {
const a = mapA.get(category)
const b = mapB.get(category)
if ((!a || a.editTurns === 0) && (!b || b.editTurns === 0)) continue
const rateA = a && a.editTurns > 0 ? (a.oneShotTurns / a.editTurns) * 100 : null
const rateB = b && b.editTurns > 0 ? (b.oneShotTurns / b.editTurns) * 100 : null
result.push({
category,
turnsA: a?.turns ?? 0,
editTurnsA: a?.editTurns ?? 0,
oneShotRateA: rateA,
turnsB: b?.turns ?? 0,
editTurnsB: b?.editTurns ?? 0,
oneShotRateB: rateB,
winner: pickWinner(rateA, rateB, true),
})
}
return result.sort((a, b) => (b.turnsA + b.turnsB) - (a.turnsA + a.turnsB))
}
export function computeWorkingStyle(projects: ProjectSummary[], modelA: string, modelB: string): WorkingStyleRow[] {
type StyleAccum = { totalTurns: number; agentSpawns: number; planModeUses: number; totalToolCalls: number; fastModeCalls: number }
const sA: StyleAccum = { totalTurns: 0, agentSpawns: 0, planModeUses: 0, totalToolCalls: 0, fastModeCalls: 0 }
const sB: StyleAccum = { totalTurns: 0, agentSpawns: 0, planModeUses: 0, totalToolCalls: 0, fastModeCalls: 0 }
for (const project of projects) {
for (const session of project.sessions) {
for (const turn of session.turns) {
if (turn.assistantCalls.length === 0) continue
const primary = turn.assistantCalls[0]!.model
if (primary !== modelA && primary !== modelB) continue
const s = primary === modelA ? sA : sB
s.totalTurns++
const turnTools = turn.assistantCalls.flatMap(c => c.tools)
if (turnTools.some(t => PLANNING_TOOLS.has(t)) || turn.assistantCalls.some(c => c.hasPlanMode)) {
s.planModeUses++
}
for (const call of turn.assistantCalls) {
s.totalToolCalls += call.tools.length
if (call.hasAgentSpawn) s.agentSpawns++
if (call.speed === 'fast') s.fastModeCalls++
}
}
}
}
const pct = (num: number, den: number) => den > 0 ? (num / den) * 100 : null
const avg = (num: number, den: number) => den > 0 ? num / den : null
return [
{ label: 'Delegation rate', valueA: pct(sA.agentSpawns, sA.totalTurns), valueB: pct(sB.agentSpawns, sB.totalTurns), formatFn: 'percent' as const },
{ label: 'Planning rate', valueA: pct(sA.planModeUses, sA.totalTurns), valueB: pct(sB.planModeUses, sB.totalTurns), formatFn: 'percent' as const },
{ label: 'Avg tools / turn', valueA: avg(sA.totalToolCalls, sA.totalTurns), valueB: avg(sB.totalToolCalls, sB.totalTurns), formatFn: 'decimal' as const },
{ label: 'Fast mode usage', valueA: pct(sA.fastModeCalls, sA.totalTurns), valueB: pct(sB.fastModeCalls, sB.totalTurns), formatFn: 'percent' as const },
]
}
const SELF_CORRECTION_PATTERNS = [
/\bmy mistake\b/i,
/\bmy bad\b/i,
/\bmy apolog/i,
/\bI apologize\b/i,
/\bI was wrong\b/i,
/\bI was incorrect\b/i,
/\bI made (a |an )?(error|mistake)\b/i,
/\bI incorrectly\b/i,
/\bI mistakenly\b/i,
/\bthat was (incorrect|wrong|an error)\b/i,
/\blet me correct that\b/i,
/\bI need to correct\b/i,
/\byou're right[.,]? I/i,
/\bsorry about that\b/i,
]
function extractText(content: unknown): string {
if (typeof content === 'string') return content
if (!Array.isArray(content)) return ''
return content
.filter((b): b is { type: string; text: string } => b !== null && typeof b === 'object' && b.type === 'text' && typeof b.text === 'string')
.map(b => b.text)
.join(' ')
}
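`extractText` accepts both plain-string content and block-array content; this standalone copy of the function above shows the input shapes it tolerates:

```typescript
// Standalone copy of extractText from above: flattens message content into
// plain text, keeping only well-formed { type: 'text', text } blocks.
function extractText(content: unknown): string {
  if (typeof content === 'string') return content
  if (!Array.isArray(content)) return ''
  return content
    .filter((b): b is { type: string; text: string } => b !== null && typeof b === 'object' && b.type === 'text' && typeof b.text === 'string')
    .map(b => b.text)
    .join(' ')
}

console.log(extractText('plain string')) // 'plain string'
console.log(extractText([{ type: 'text', text: 'a' }, { type: 'tool_use' }, { type: 'text', text: 'b' }])) // 'a b'
console.log(extractText({ not: 'an array' })) // ''
```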
function isCompactFile(name: string): boolean {
return name.includes('compact')
}
async function collectJsonlFiles(sessionDir: string): Promise<string[]> {
const entries = await readdir(sessionDir, { withFileTypes: true })
const files: string[] = []
for (const entry of entries) {
if (entry.isFile() && entry.name.endsWith('.jsonl') && !isCompactFile(entry.name)) {
files.push(join(sessionDir, entry.name))
} else if (entry.isDirectory() && entry.name === 'subagents') {
const subEntries = await readdir(join(sessionDir, entry.name), { withFileTypes: true })
for (const sub of subEntries) {
if (sub.isFile() && sub.name.endsWith('.jsonl') && !isCompactFile(sub.name)) {
files.push(join(sessionDir, entry.name, sub.name))
}
}
}
}
return files
}
export async function scanSelfCorrections(projectDirs: string[]): Promise<Map<string, number>> {
const counts = new Map<string, number>()
const seen = new Set<string>()
for (const dir of projectDirs) {
let entries
try {
entries = await readdir(dir, { withFileTypes: true })
} catch {
continue
}
const allFiles: string[] = []
for (const entry of entries) {
if (entry.isFile() && entry.name.endsWith('.jsonl') && !isCompactFile(entry.name)) {
allFiles.push(join(dir, entry.name))
} else if (entry.isDirectory()) {
try {
const sessionFiles = await collectJsonlFiles(join(dir, entry.name))
allFiles.push(...sessionFiles)
} catch {
continue
}
}
}
for (const file of allFiles) {
let raw: string
try {
raw = await readFile(file, 'utf8')
} catch {
continue
}
for (const line of raw.split('\n')) {
const trimmed = line.trim()
if (!trimmed) continue
let parsed: unknown
try {
parsed = JSON.parse(trimmed)
} catch {
continue
}
const rec = parsed as Record<string, unknown>
if (!rec || typeof rec !== 'object' || rec['type'] !== 'assistant') continue
const ts = rec['timestamp']
const msg = rec['message']
if (msg === null || typeof msg !== 'object') continue
const msgRec = msg as Record<string, unknown>
const model = msgRec['model']
if (typeof model !== 'string' || model === '<synthetic>') continue
const dedupeKey = `${model}:${ts}`
if (seen.has(dedupeKey)) continue
seen.add(dedupeKey)
const text = extractText(msgRec['content'])
if (SELF_CORRECTION_PATTERNS.some(p => p.test(text))) {
counts.set(model, (counts.get(model) ?? 0) + 1)
}
}
}
}
return counts
}
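The detector is pure regex matching; a reduced sketch using a few of the `SELF_CORRECTION_PATTERNS` above, applied the same way `scanSelfCorrections` applies them (any single match flags the message):

```typescript
// A subset of SELF_CORRECTION_PATTERNS from above; one match is enough
// for scanSelfCorrections to count the assistant message.
const patterns = [
  /\bmy mistake\b/i,
  /\bI apologize\b/i,
  /\byou're right[.,]? I/i,
]
const isSelfCorrection = (text: string) => patterns.some(p => p.test(text))

console.log(isSelfCorrection("You're right, I missed the null check.")) // true
console.log(isSelfCorrection('All tests pass.'))                        // false
```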

src/compare.tsx (new file, 460 lines)

@@ -0,0 +1,460 @@
import React, { useState, useEffect, useRef } from 'react'
import { render, Box, Text, useInput, useApp, useStdout } from 'ink'
import type { ModelStats, ComparisonRow, CategoryComparison, WorkingStyleRow } from './compare-stats.js'
import { aggregateModelStats, computeComparison, computeCategoryComparison, computeWorkingStyle, scanSelfCorrections } from './compare-stats.js'
import { formatCost } from './format.js'
import { parseAllSessions } from './parser.js'
import { createTerminalProgressReporter } from './parse-progress.js'
import { getAllProviders } from './providers/index.js'
import type { ProjectSummary, DateRange } from './types.js'
const ORANGE = '#FF8C42'
const GREEN = '#5BF5A0'
const DIM = '#888888'
const GOLD = '#FFD700'
const BAR_A = '#6495ED'
const BAR_B = '#5BF5A0'
const LOW_DATA_THRESHOLD = 20
const LABEL_WIDTH = 20
const VALUE_WIDTH = 14
const MODEL_NAME_COL = 24
const BAR_MAX_WIDTH = 30
const MIN_WIDE = 90
const PANEL_CHROME = 4
const MS_PER_DAY = 24 * 60 * 60 * 1000
const FULL_BLOCK = '\u2588'
function formatValue(value: number | null, fmt: ComparisonRow['formatFn']): string {
if (value === null) return '-'
switch (fmt) {
case 'cost': return formatCost(value)
case 'number': return Math.round(value).toLocaleString()
case 'percent': return `${value.toFixed(1)}%`
case 'decimal': return value.toFixed(2)
}
}
function shortName(model: string): string {
return model.replace(/^claude-/, '').replace(/-\d{8}$/, '')
}
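`shortName` trims the `claude-` vendor prefix and the trailing 8-digit date stamp; a standalone copy of the function above with sample ids:

```typescript
// Standalone copy of shortName from above: trims the vendor prefix and the
// trailing 8-digit date suffix that Claude model ids carry.
function shortName(model: string): string {
  return model.replace(/^claude-/, '').replace(/-\d{8}$/, '')
}

console.log(shortName('claude-sonnet-4-20250514')) // 'sonnet-4'
console.log(shortName('gpt-5'))                    // 'gpt-5' — non-Claude ids pass through
```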
function daysOfData(first: string, last: string): number {
if (!first || !last) return 0
const ms = new Date(last).getTime() - new Date(first).getTime()
return Math.max(1, Math.ceil(ms / MS_PER_DAY))
}
function barWidth(rate: number): number {
return Math.round((rate / 100) * BAR_MAX_WIDTH)
}
type ModelSelectorProps = {
models: ModelStats[]
onSelect: (a: ModelStats, b: ModelStats) => void
onBack: () => void
}
function ModelSelector({ models, onSelect, onBack }: ModelSelectorProps) {
const { exit } = useApp()
const [cursor, setCursor] = useState(0)
const [selected, setSelected] = useState<Set<number>>(new Set())
useInput((input, key) => {
if (input === 'q') { exit(); return }
if (key.escape) { onBack(); return }
if (key.upArrow) {
setCursor(c => (c - 1 + models.length) % models.length)
return
}
if (key.downArrow) {
setCursor(c => (c + 1) % models.length)
return
}
if (input === ' ') {
setSelected(prev => {
const next = new Set(prev)
if (next.has(cursor)) {
next.delete(cursor)
} else if (next.size < 2) {
next.add(cursor)
}
return next
})
return
}
if (key.return && selected.size === 2) {
const indices = [...selected].sort((a, b) => a - b)
onSelect(models[indices[0]!]!, models[indices[1]!]!)
}
})
return (
<Box flexDirection="column" paddingX={2} paddingY={1}>
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1}>
<Text bold color={ORANGE}>Model Comparison</Text>
<Text> </Text>
<Text color={DIM}>Select two models to compare:</Text>
<Text> </Text>
{models.map((m, i) => {
const isCursor = i === cursor
const isSelected = selected.has(i)
const lowData = m.calls < LOW_DATA_THRESHOLD
const prefix = isCursor ? '> ' : ' '
return (
<Text key={m.model}>
<Text color={isCursor ? ORANGE : undefined}>{prefix}</Text>
<Text bold={isSelected} color={isSelected ? GREEN : undefined}>
{shortName(m.model).padEnd(MODEL_NAME_COL)}
</Text>
<Text>{m.calls.toLocaleString().padStart(8)} calls</Text>
<Text color={GOLD}>{formatCost(m.cost).padStart(10)}</Text>
{isSelected && <Text color={GREEN}> [selected]</Text>}
{lowData && <Text color={DIM}> low data</Text>}
</Text>
)
})}
</Box>
<Text> </Text>
<Text>
<Text color={ORANGE} bold>[space]</Text><Text dimColor> select </Text>
<Text color={ORANGE} bold>[enter]</Text><Text dimColor> compare </Text>
<Text color={ORANGE} bold>{'<>'}</Text><Text dimColor> switch period </Text>
<Text color={ORANGE} bold>[esc]</Text><Text dimColor> back </Text>
<Text color={ORANGE} bold>[q]</Text><Text dimColor> quit</Text>
</Text>
</Box>
)
}
type ComparisonResultsProps = {
modelA: ModelStats
modelB: ModelStats
rows: ComparisonRow[]
categories: CategoryComparison[]
workingStyle: WorkingStyleRow[]
onBack: () => void
}
function MetricPanel({ title, rows, nameA, nameB, pw }: { title: string; rows: ComparisonRow[]; nameA: string; nameB: string; pw: number }) {
return (
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1} width={pw}>
<Text bold color={ORANGE}>{title}</Text>
<Text>
<Text>{''.padEnd(LABEL_WIDTH)}</Text>
<Text bold>{nameA.padStart(VALUE_WIDTH)}</Text>
<Text bold>{nameB.padStart(VALUE_WIDTH)}</Text>
</Text>
{rows.map(row => {
const fmtA = formatValue(row.valueA, row.formatFn)
const fmtB = formatValue(row.valueB, row.formatFn)
return (
<Text key={row.label}>
<Text color={DIM}>{row.label.padEnd(LABEL_WIDTH)}</Text>
<Text color={row.winner === 'a' ? GREEN : undefined}>{fmtA.padStart(VALUE_WIDTH)}</Text>
<Text color={row.winner === 'b' ? GREEN : undefined}>{fmtB.padStart(VALUE_WIDTH)}</Text>
</Text>
)
})}
</Box>
)
}
function ContextPanel({ title, rows, nameA, nameB, pw, lowDataWarning }: { title: string; rows: { label: string; valueA: string; valueB: string }[]; nameA: string; nameB: string; pw: number; lowDataWarning?: string }) {
return (
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1} width={pw}>
<Text bold color={ORANGE}>{title}</Text>
<Text>
<Text>{''.padEnd(LABEL_WIDTH)}</Text>
<Text bold>{nameA.padStart(VALUE_WIDTH)}</Text>
<Text bold>{nameB.padStart(VALUE_WIDTH)}</Text>
</Text>
{rows.map(row => (
<Text key={row.label}>
<Text color={DIM}>{row.label.padEnd(LABEL_WIDTH)}</Text>
<Text color={DIM}>{row.valueA.padStart(VALUE_WIDTH)}</Text>
<Text color={DIM}>{row.valueB.padStart(VALUE_WIDTH)}</Text>
</Text>
))}
{lowDataWarning && <Text color={GOLD}>{lowDataWarning}</Text>}
</Box>
)
}
function ComparisonResults({ modelA, modelB, rows, categories, workingStyle, onBack }: ComparisonResultsProps) {
const { exit } = useApp()
const { stdout } = useStdout()
const termWidth = stdout?.columns || 80
const dashWidth = Math.min(160, termWidth)
const wide = dashWidth >= MIN_WIDE
const halfWidth = wide ? Math.floor(dashWidth / 2) : dashWidth
const nameA = shortName(modelA.model)
const nameB = shortName(modelB.model)
const lowDataA = modelA.calls < LOW_DATA_THRESHOLD
const lowDataB = modelB.calls < LOW_DATA_THRESHOLD
useInput((input, key) => {
if (input === 'q') { exit(); return }
if (key.escape) { onBack(); return }
})
const sectionOrder: string[] = []
const sectionRows = new Map<string, ComparisonRow[]>()
for (const row of rows) {
if (!sectionRows.has(row.section)) {
sectionOrder.push(row.section)
sectionRows.set(row.section, [])
}
sectionRows.get(row.section)!.push(row)
}
const fmtTokens = (n: number) => {
if (n >= 1_000_000) return `${(n / 1_000_000).toFixed(1)}M`
if (n >= 1_000) return `${(n / 1_000).toFixed(1)}K`
return String(n)
}
const contextRows: { label: string; valueA: string; valueB: string }[] = [
{ label: 'Calls', valueA: modelA.calls.toLocaleString(), valueB: modelB.calls.toLocaleString() },
{ label: 'Total cost', valueA: formatCost(modelA.cost), valueB: formatCost(modelB.cost) },
{ label: 'Input tokens', valueA: fmtTokens(modelA.inputTokens), valueB: fmtTokens(modelB.inputTokens) },
{ label: 'Output tokens', valueA: fmtTokens(modelA.outputTokens), valueB: fmtTokens(modelB.outputTokens) },
{ label: 'Days of data', valueA: String(daysOfData(modelA.firstSeen, modelA.lastSeen)), valueB: String(daysOfData(modelB.firstSeen, modelB.lastSeen)) },
{ label: 'Edit turns', valueA: modelA.editTurns.toLocaleString(), valueB: modelB.editTurns.toLocaleString() },
{ label: 'Self-corrections', valueA: modelA.selfCorrections.toLocaleString(), valueB: modelB.selfCorrections.toLocaleString() },
]
const lowDataWarning = (lowDataA || lowDataB)
? `Note: ${[lowDataA && shortName(modelA.model), lowDataB && shortName(modelB.model)].filter(Boolean).join(' and ')} ha${lowDataA && lowDataB ? 've' : 's'} fewer than ${LOW_DATA_THRESHOLD} calls`
: undefined
const pw = wide ? halfWidth : dashWidth
return (
<Box flexDirection="column" paddingX={2} paddingY={1}>
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1} width={dashWidth}>
<Text>
<Text bold color={ORANGE}>{nameA}</Text>
<Text dimColor> vs </Text>
<Text bold color={ORANGE}>{nameB}</Text>
</Text>
</Box>
<Box width={dashWidth}>
<MetricPanel title={sectionOrder[0] ?? 'Performance'} rows={sectionRows.get(sectionOrder[0] ?? '') ?? []} nameA={nameA} nameB={nameB} pw={pw} />
<MetricPanel title={sectionOrder[1] ?? 'Efficiency'} rows={sectionRows.get(sectionOrder[1] ?? '') ?? []} nameA={nameA} nameB={nameB} pw={pw} />
</Box>
{categories.length > 0 && (
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1} width={dashWidth}>
<Text bold color={ORANGE}>Category Head-to-Head</Text>
<Text color={DIM}>one-shot rate per category</Text>
<Text>
<Text>{' '}</Text>
<Text color={BAR_A}>{FULL_BLOCK + FULL_BLOCK}</Text>
<Text> {nameA} </Text>
<Text color={BAR_B}>{FULL_BLOCK + FULL_BLOCK}</Text>
<Text> {nameB}</Text>
</Text>
{categories.map(cat => {
const bwA = cat.oneShotRateA !== null ? barWidth(cat.oneShotRateA) : 0
const bwB = cat.oneShotRateB !== null ? barWidth(cat.oneShotRateB) : 0
const rateA = cat.oneShotRateA !== null ? `${cat.oneShotRateA.toFixed(1)}%` : '-'
const rateB = cat.oneShotRateB !== null ? `${cat.oneShotRateB.toFixed(1)}%` : '-'
const turnsA = cat.editTurnsA > 0 ? `(${cat.editTurnsA})` : ''
const turnsB = cat.editTurnsB > 0 ? `(${cat.editTurnsB})` : ''
return (
<React.Fragment key={cat.category}>
<Text> </Text>
<Text color={DIM}>{' '}{cat.category}</Text>
<Text>
<Text>{' '}</Text>
<Text color={cat.winner === 'a' ? BAR_A : DIM}>{FULL_BLOCK.repeat(Math.max(bwA, 1))}</Text>
<Text>{' '.repeat(Math.max(0, BAR_MAX_WIDTH - bwA))} </Text>
<Text color={cat.winner === 'a' ? GREEN : undefined}>{rateA.padStart(6)}</Text>
<Text color={DIM}> {turnsA}</Text>
</Text>
<Text>
<Text>{' '}</Text>
<Text color={cat.winner === 'b' ? BAR_B : DIM}>{FULL_BLOCK.repeat(Math.max(bwB, 1))}</Text>
<Text>{' '.repeat(Math.max(0, BAR_MAX_WIDTH - bwB))} </Text>
<Text color={cat.winner === 'b' ? GREEN : undefined}>{rateB.padStart(6)}</Text>
<Text color={DIM}> {turnsB}</Text>
</Text>
</React.Fragment>
)
})}
</Box>
)}
<Box width={dashWidth}>
{workingStyle.length > 0 && (
<ContextPanel title="Working Style" rows={workingStyle.map(r => ({ label: r.label, valueA: formatValue(r.valueA, r.formatFn), valueB: formatValue(r.valueB, r.formatFn) }))} nameA={nameA} nameB={nameB} pw={pw} />
)}
<ContextPanel title="Context" rows={contextRows} nameA={nameA} nameB={nameB} pw={pw} lowDataWarning={lowDataWarning} />
</Box>
<Text>
<Text color={ORANGE} bold>{'<>'}</Text><Text dimColor> switch period </Text>
<Text color={ORANGE} bold>[esc]</Text><Text dimColor> back </Text>
<Text color={ORANGE} bold>[q]</Text><Text dimColor> quit</Text>
</Text>
</Box>
)
}
type CompareViewProps = {
projects: ProjectSummary[]
onBack: () => void
}
export function CompareView({ projects, onBack }: CompareViewProps) {
const { exit } = useApp()
const [phase, setPhase] = useState<'select' | 'loading' | 'results'>('select')
const [models, setModels] = useState<ModelStats[]>(() => aggregateModelStats(projects))
const [pickedNames, setPickedNames] = useState<[string, string] | null>(null)
const [selectedA, setSelectedA] = useState<ModelStats | null>(null)
const [selectedB, setSelectedB] = useState<ModelStats | null>(null)
const [rows, setRows] = useState<ComparisonRow[]>([])
const [categories, setCategories] = useState<CategoryComparison[]>([])
const [style, setStyle] = useState<WorkingStyleRow[]>([])
const [loadTrigger, setLoadTrigger] = useState(0)
const projectsRef = useRef(projects)
projectsRef.current = projects
useEffect(() => {
const newModels = aggregateModelStats(projects)
setModels(newModels)
if (pickedNames) {
const hasA = newModels.some(m => m.model === pickedNames[0])
const hasB = newModels.some(m => m.model === pickedNames[1])
if (hasA && hasB) {
setLoadTrigger(t => t + 1)
} else {
setPickedNames(null)
setPhase('select')
}
}
}, [projects])
useEffect(() => {
if (loadTrigger === 0 || !pickedNames) return
let cancelled = false
setPhase('loading')
const currentModels = aggregateModelStats(projectsRef.current)
const a = currentModels.find(m => m.model === pickedNames[0])
const b = currentModels.find(m => m.model === pickedNames[1])
if (!a || !b) { setPhase('select'); return }
async function run() {
const providers = await getAllProviders()
const dirs: string[] = []
for (const p of providers) {
const sessions = await p.discoverSessions()
for (const s of sessions) dirs.push(s.path)
}
const corrections = await scanSelfCorrections(dirs)
if (cancelled) return
const currentProjects = projectsRef.current
const aCopy = { ...a!, selfCorrections: corrections.get(a!.model) ?? 0 }
const bCopy = { ...b!, selfCorrections: corrections.get(b!.model) ?? 0 }
setSelectedA(aCopy)
setSelectedB(bCopy)
setRows(computeComparison(aCopy, bCopy))
setCategories(computeCategoryComparison(currentProjects, a!.model, b!.model))
setStyle(computeWorkingStyle(currentProjects, a!.model, b!.model))
setPhase('results')
}
run()
return () => { cancelled = true }
}, [loadTrigger])
useInput((input, key) => {
if (phase !== 'select') return
if (models.length < 2) {
if (input === 'q') { exit(); return }
if (key.escape) { onBack(); return }
}
})
if (models.length < 2) {
return (
<Box flexDirection="column" paddingX={2} paddingY={1}>
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1}>
<Text bold color={ORANGE}>Model Comparison</Text>
<Text> </Text>
<Text color={DIM}>Need at least 2 models to compare. Found {models.length}.</Text>
</Box>
<Text> </Text>
<Text>
<Text color={ORANGE} bold>[esc]</Text><Text dimColor> back </Text>
<Text color={ORANGE} bold>[q]</Text><Text dimColor> quit</Text>
</Text>
</Box>
)
}
const handleSelect = (a: ModelStats, b: ModelStats) => {
setPickedNames([a.model, b.model])
setLoadTrigger(t => t + 1)
}
if (phase === 'loading') {
return (
<Box flexDirection="column" paddingX={2} paddingY={1}>
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1}>
<Text bold color={ORANGE}>Model Comparison</Text>
<Text> </Text>
<Text color={DIM}>Scanning self-corrections...</Text>
</Box>
</Box>
)
}
if (phase === 'results' && selectedA && selectedB) {
return (
<ComparisonResults
modelA={selectedA}
modelB={selectedB}
rows={rows}
categories={categories}
workingStyle={style}
onBack={() => setPhase('select')}
/>
)
}
return (
<ModelSelector
models={models}
onSelect={handleSelect}
onBack={onBack}
/>
)
}
export async function renderCompare(range: DateRange, provider: string, noCache = false): Promise<void> {
const isTTY = process.stdin.isTTY && process.stdout.isTTY
if (!isTTY) {
process.stdout.write('Model comparison requires an interactive terminal.\n')
return
}
const projects = await parseAllSessions(range, provider, {
noCache,
progress: createTerminalProgressReporter(true),
})
const { waitUntilExit } = render(
<CompareView projects={projects} onBack={() => process.exit(0)} />
)
await waitUntilExit()
}


@@ -1,12 +1,24 @@
-import { readFile, writeFile, mkdir } from 'fs/promises'
+import { readFile, writeFile, mkdir, rename } from 'fs/promises'
import { join } from 'path'
import { homedir } from 'os'
export type PlanId = 'claude-pro' | 'claude-max' | 'cursor-pro' | 'custom' | 'none'
export type PlanProvider = 'claude' | 'codex' | 'cursor' | 'all'
export type Plan = {
id: PlanId
monthlyUsd: number
provider: PlanProvider
resetDay?: number
setAt: string
}
export type CodeburnConfig = {
currency?: {
code: string
symbol?: string
}
plan?: Plan
modelAliases?: Record<string, string>
}
@@ -29,7 +41,27 @@ export async function readConfig(): Promise<CodeburnConfig> {
export async function saveConfig(config: CodeburnConfig): Promise<void> {
await mkdir(getConfigDir(), { recursive: true })
-await writeFile(getConfigPath(), JSON.stringify(config, null, 2) + '\n', 'utf-8')
+const configPath = getConfigPath()
+const tmpPath = `${configPath}.tmp`
+await writeFile(tmpPath, JSON.stringify(config, null, 2) + '\n', 'utf-8')
+await rename(tmpPath, configPath)
}
export async function readPlan(): Promise<Plan | undefined> {
const config = await readConfig()
return config.plan
}
export async function savePlan(plan: Plan): Promise<void> {
const config = await readConfig()
config.plan = plan
await saveConfig(config)
}
export async function clearPlan(): Promise<void> {
const config = await readConfig()
delete config.plan
await saveConfig(config)
}
export function getConfigFilePath(): string {


@@ -1,63 +0,0 @@
import { readFile, writeFile, mkdir, stat } from 'fs/promises'
import { join } from 'path'
import { homedir } from 'os'
import type { ParsedProviderCall } from './providers/types.js'
type ResultCache = {
dbMtimeMs: number
dbSizeBytes: number
calls: ParsedProviderCall[]
}
const CACHE_FILE = 'cursor-results.json'
function getCacheDir(): string {
return join(homedir(), '.cache', 'codeburn')
}
function getCachePath(): string {
return join(getCacheDir(), CACHE_FILE)
}
async function getDbFingerprint(dbPath: string): Promise<{ mtimeMs: number; size: number } | null> {
try {
const s = await stat(dbPath)
return { mtimeMs: s.mtimeMs, size: s.size }
} catch {
return null
}
}
export async function readCachedResults(dbPath: string): Promise<ParsedProviderCall[] | null> {
try {
const fp = await getDbFingerprint(dbPath)
if (!fp) return null
const raw = await readFile(getCachePath(), 'utf-8')
const cache = JSON.parse(raw) as ResultCache
if (cache.dbMtimeMs === fp.mtimeMs && cache.dbSizeBytes === fp.size) {
return cache.calls
}
return null
} catch {
return null
}
}
export async function writeCachedResults(dbPath: string, calls: ParsedProviderCall[]): Promise<void> {
try {
const fp = await getDbFingerprint(dbPath)
if (!fp) return
const dir = getCacheDir()
await mkdir(dir, { recursive: true })
const cache: ResultCache = {
dbMtimeMs: fp.mtimeMs,
dbSizeBytes: fp.size,
calls,
}
await writeFile(getCachePath(), JSON.stringify(cache), 'utf-8')
} catch {}
}


@@ -4,7 +4,7 @@ import { mkdir, open, readFile, rename, unlink } from 'fs/promises'
import { homedir } from 'os'
import { join } from 'path'
-export const DAILY_CACHE_VERSION = 2
+export const DAILY_CACHE_VERSION = 3
const DAILY_CACHE_FILENAME = 'daily-cache.json'
export type DailyEntry = {
@@ -91,14 +91,11 @@ export async function saveDailyCache(cache: DailyCache): Promise<void> {
}
export function addNewDays(cache: DailyCache, incoming: DailyEntry[], newestDate: string): DailyCache {
-const seen = new Set(cache.days.map(d => d.date))
-const merged = [...cache.days]
+const byDate = new Map(cache.days.map(d => [d.date, d]))
for (const day of incoming) {
-if (seen.has(day.date)) continue
-seen.add(day.date)
-merged.push(day)
+byDate.set(day.date, day)
}
-merged.sort((a, b) => a.date.localeCompare(b.date))
+const merged = Array.from(byDate.values()).sort((a, b) => a.date.localeCompare(b.date))
const nextLast = cache.lastComputedDate && cache.lastComputedDate > newestDate
? cache.lastComputedDate
: newestDate


@@ -4,15 +4,30 @@ import React, { useState, useCallback, useEffect, useRef } from 'react'
import { render, Box, Text, useInput, useApp, useWindowSize } from 'ink'
import { CATEGORY_LABELS, type DateRange, type ProjectSummary, type TaskCategory } from './types.js'
import { formatCost, formatTokens } from './format.js'
-import { parseAllSessions, filterProjectsByName } from './parser.js'
+import { parseAllSessions, filterProjectsByDateRange, filterProjectsByName } from './parser.js'
import { loadPricing } from './models.js'
import { getAllProviders } from './providers/index.js'
import { scanAndDetect, type WasteFinding, type WasteAction, type OptimizeResult } from './optimize.js'
import { estimateContextBudget, discoverProjectCwd, type ContextBudget } from './context-budget.js'
import { dateKey } from './day-aggregator.js'
import { createTerminalProgressReporter } from './parse-progress.js'
import { CompareView } from './compare.js'
import { getPlanUsageOrNull, type PlanUsage } from './plan-usage.js'
import { planDisplayName } from './plans.js'
import { providerColor, providerLabel } from './provider-colors.js'
import { join } from 'path'
type Period = 'today' | 'week' | '30days' | 'month' | 'all'
-type View = 'dashboard' | 'optimize'
+type View = 'dashboard' | 'optimize' | 'compare'
type CachedWindow = {
period: Period
range: {
start: Date
end: Date
}
projects: ProjectSummary[]
}
const PERIODS: Period[] = ['today', 'week', '30days', 'month', 'all']
const PERIOD_LABELS: Record<Period, string> = {
@@ -27,6 +42,7 @@ const MIN_WIDE = 90
const ORANGE = '#FF8C42'
const DIM = '#555555'
const GOLD = '#FFD700'
const PLAN_BAR_WIDTH = 10
const LANG_DISPLAY_NAMES: Record<string, string> = {
javascript: 'JavaScript', typescript: 'TypeScript', python: 'Python',
@@ -50,15 +66,6 @@ const PANEL_COLORS = {
bash: '#F5A05B',
}
-const PROVIDER_COLORS: Record<string, string> = {
-claude: '#FF8C42',
-codex: '#5BF5A0',
-cursor: '#00B4D8',
-opencode: '#A78BFA',
-pi: '#F472B6',
-all: '#FF8C42',
-}
const CATEGORY_COLORS: Record<TaskCategory, string> = {
coding: '#5B9EF5',
debugging: '#F55B5B',
@@ -110,6 +117,10 @@ function getDateRange(period: Period): { start: Date; end: Date } {
}
}
function rangeCovers(outer: { start: Date; end: Date }, inner: { start: Date; end: Date }): boolean {
return outer.start <= inner.start && outer.end >= inner.end
}
type Layout = { dashWidth: number; wide: boolean; halfWidth: number; barWidth: number }
function getLayout(columns?: number): Layout {
@@ -152,7 +163,18 @@ function fit(s: string, n: number): string {
return s.length > n ? s.slice(0, n) : s.padEnd(n)
}
-function Overview({ projects, label, width }: { projects: ProjectSummary[]; label: string; width: number }) {
function renderPlanBar(percentUsed: number, width: number): string {
if (percentUsed <= 100) {
const capped = Math.max(0, percentUsed)
const filled = Math.round((capped / 100) * width)
return `${'▓'.repeat(filled)}${'░'.repeat(Math.max(0, width - filled))}`
}
const factor = percentUsed / 100
const chevrons = Math.min(4, Math.max(1, Math.floor(Math.log10(factor)) + 1))
return `${'▓'.repeat(width)}${'▶'.repeat(chevrons)}`
}
+function Overview({ projects, label, width, planUsage }: { projects: ProjectSummary[]; label: string; width: number; planUsage?: PlanUsage }) {
const totalCost = projects.reduce((s, p) => s + p.totalCostUSD, 0)
const totalCalls = projects.reduce((s, p) => s + p.totalApiCalls, 0)
const totalSessions = projects.reduce((s, p) => s + p.sessions.length, 0)
@@ -164,6 +186,15 @@ function Overview({ projects, label, width }: { projects: ProjectSummary[]; labe
const allInputTokens = totalInput + totalCacheRead + totalCacheWrite
const cacheHit = allInputTokens > 0
? (totalCacheRead / allInputTokens) * 100 : 0
const planLabel = planUsage ? `${planDisplayName(planUsage.plan.id)}: ${formatCost(planUsage.spentApiEquivalentUsd)} API-equivalent vs ${formatCost(planUsage.budgetUsd)} plan` : ''
const planPct = planUsage ? `${planUsage.percentUsed.toFixed(1)}%` : ''
const planColor = planUsage
? planUsage.status === 'over'
? '#F55B5B'
: planUsage.status === 'near'
? ORANGE
: '#5BF58C'
: DIM
return (
<Box flexDirection="column" borderStyle="round" borderColor={PANEL_COLORS.overview} paddingX={1} width={width}>
@@ -184,6 +215,24 @@ function Overview({ projects, label, width }: { projects: ProjectSummary[]; labe
<Text dimColor wrap="truncate-end">
{formatTokens(totalInput)} in {formatTokens(totalOutput)} out {formatTokens(totalCacheRead)} cached {formatTokens(totalCacheWrite)} written
</Text>
{planUsage && (
<>
<Text wrap="truncate-end">
<Text color={planColor}>{planLabel}</Text>
<Text> </Text>
<Text color={planColor}>{renderPlanBar(planUsage.percentUsed, PLAN_BAR_WIDTH)}</Text>
<Text> </Text>
<Text bold color={planColor}>{planPct}</Text>
</Text>
<Text dimColor wrap="truncate-end">
{planUsage.status === 'under'
? `Well within plan. Projected month: ${formatCost(planUsage.projectedMonthUsd)} (reset in ${planUsage.daysUntilReset} days).`
: planUsage.status === 'near'
? `Approaching plan limit. Projected month: ${formatCost(planUsage.projectedMonthUsd)} (reset in ${planUsage.daysUntilReset} days).`
: `${(planUsage.spentApiEquivalentUsd / Math.max(planUsage.budgetUsd, 1)).toFixed(1)}x your subscription value. Projected month: ${formatCost(planUsage.projectedMonthUsd)} (reset in ${planUsage.daysUntilReset} days).`}
</Text>
</>
)}
</Box>
)
}
@@ -195,7 +244,7 @@ function DailyActivity({ projects, days = 14, pw, bw }: { projects: ProjectSumma
for (const session of project.sessions) {
for (const turn of session.turns) {
if (!turn.timestamp) continue
-const day = turn.timestamp.slice(0, 10)
+const day = dateKey(turn.timestamp)
dailyCosts[day] = (dailyCosts[day] ?? 0) + turn.assistantCalls.reduce((s, c) => s + c.costUSD, 0)
dailyCalls[day] = (dailyCalls[day] ?? 0) + turn.assistantCalls.length
}
@@ -446,16 +495,6 @@ function BashBreakdown({ projects, pw, bw }: { projects: ProjectSummary[]; pw: n
)
}
-const PROVIDER_DISPLAY_NAMES: Record<string, string> = {
-all: 'All',
-claude: 'Claude',
-codex: 'Codex',
-cursor: 'Cursor',
-opencode: 'OpenCode',
-pi: 'Pi',
-}
-function getProviderDisplayName(name: string): string { return PROVIDER_DISPLAY_NAMES[name] ?? name }
function PeriodTabs({ active, providerName, showProvider }: { active: Period; providerName?: string; showProvider?: boolean }) {
return (
<Box justifyContent="space-between" paddingX={1}>
@@ -466,9 +505,7 @@
</Text>
))}
</Box>
-{showProvider && providerName && (
-<Box><Text color={DIM}>| </Text><Text color={ORANGE} bold>[p]</Text><Text bold color={PROVIDER_COLORS[providerName] ?? ORANGE}> {getProviderDisplayName(providerName)}</Text></Box>
-)}
+{showProvider && providerName && <Box><Text color={DIM}>| </Text><Text color={ORANGE} bold>[p]</Text><Text bold color={providerColor(providerName)}> {providerLabel(providerName)}</Text></Box>}
</Box>
)
}
@@ -525,7 +562,7 @@ function OptimizeView({ findings, costRate, projects, label, width, healthScore,
)
}
-function StatusBar({ width, showProvider, view, findingCount, optimizeAvailable }: { width: number; showProvider?: boolean; view?: View; findingCount?: number; optimizeAvailable?: boolean }) {
+function StatusBar({ width, showProvider, view, findingCount, optimizeAvailable, compareAvailable }: { width: number; showProvider?: boolean; view?: View; findingCount?: number; optimizeAvailable?: boolean; compareAvailable?: boolean }) {
const isOptimize = view === 'optimize'
return (
<Box borderStyle="round" borderColor={DIM} width={width} justifyContent="center" paddingX={1}>
@@ -542,6 +579,9 @@
{!isOptimize && optimizeAvailable && findingCount != null && findingCount > 0 && (
<><Text dimColor> </Text><Text color={ORANGE} bold>o</Text><Text dimColor> optimize</Text><Text color="#F55B5B"> ({findingCount})</Text></>
)}
{!isOptimize && compareAvailable && (
<><Text dimColor> </Text><Text color={ORANGE} bold>c</Text><Text dimColor> compare</Text></>
)}
{showProvider && (<><Text dimColor> </Text><Text color={ORANGE} bold>p</Text><Text dimColor> provider</Text></>)}
</Text>
</Box>
@@ -553,7 +593,7 @@ function Row({ wide, width, children }: { wide: boolean; width: number; children
return <>{children}</>
}
-function DashboardContent({ projects, period, columns, activeProvider, budgets }: { projects: ProjectSummary[]; period: Period; columns?: number; activeProvider?: string; budgets?: Map<string, ContextBudget> }) {
+function DashboardContent({ projects, period, columns, activeProvider, budgets, planUsage }: { projects: ProjectSummary[]; period: Period; columns?: number; activeProvider?: string; budgets?: Map<string, ContextBudget>; planUsage?: PlanUsage }) {
const { dashWidth, wide, halfWidth, barWidth } = getLayout(columns)
const isCursor = activeProvider === 'cursor'
if (projects.length === 0) return <Panel title="CodeBurn" color={ORANGE} width={dashWidth}><Text dimColor>No usage data found for {PERIOD_LABELS[period]}.</Text></Panel>
@@ -561,7 +601,7 @@
const days = period === 'all' ? undefined : (period === 'month' || period === '30days' ? 31 : 14)
return (
<Box flexDirection="column" width={dashWidth}>
-<Overview projects={projects} label={PERIOD_LABELS[period]} width={dashWidth} />
+<Overview projects={projects} label={PERIOD_LABELS[period]} width={dashWidth} planUsage={planUsage} />
<Row wide={wide} width={dashWidth}><DailyActivity projects={projects} days={days} pw={pw} bw={barWidth} /><ProjectBreakdown projects={projects} pw={pw} bw={barWidth} budgets={budgets} /></Row>
<TopSessions projects={projects} pw={dashWidth} bw={barWidth} />
<Row wide={wide} width={dashWidth}><ActivityBreakdown projects={projects} pw={pw} bw={barWidth} /><ModelBreakdown projects={projects} pw={pw} bw={barWidth} /></Row>
@@ -574,13 +614,15 @@
)
}
-function InteractiveDashboard({ initialProjects, initialPeriod, initialProvider, refreshSeconds, projectFilter, excludeFilter }: {
+function InteractiveDashboard({ initialProjects, initialPeriod, initialProvider, initialPlanUsage, refreshSeconds, projectFilter, excludeFilter, noCache }: {
initialProjects: ProjectSummary[]
initialPeriod: Period
initialProvider: string
initialPlanUsage?: PlanUsage
refreshSeconds?: number
projectFilter?: string[]
excludeFilter?: string[]
noCache?: boolean
}) {
const { exit } = useApp()
const [period, setPeriod] = useState<Period>(initialPeriod)
@@ -591,13 +633,166 @@
const [view, setView] = useState<View>('dashboard')
const [optimizeResult, setOptimizeResult] = useState<OptimizeResult | null>(null)
const [projectBudgets, setProjectBudgets] = useState<Map<string, ContextBudget>>(new Map())
const [planUsage, setPlanUsage] = useState<PlanUsage | undefined>(initialPlanUsage)
const { columns } = useWindowSize()
const { dashWidth } = getLayout(columns)
const multipleProviders = detectedProviders.length > 1
const optimizeAvailable = activeProvider === 'all' || activeProvider === 'claude'
const modelCount = new Set(
projects.flatMap(p => p.sessions.flatMap(s => Object.keys(s.modelBreakdown)))
).size
const compareAvailable = modelCount >= 2
const debounceRef = useRef<ReturnType<typeof setTimeout> | null>(null)
const cacheByProviderRef = useRef(new Map<string, CachedWindow[]>())
const reloadSeqRef = useRef(0)
const preloadingRef = useRef(new Map<string, Promise<ProjectSummary[]>>())
const findingCount = optimizeResult?.findings.length ?? 0
const providerCacheKey = useCallback((provider: string) => `${provider}:${noCache ? 'nocache' : 'cache'}`, [noCache])
const getRangeWidth = useCallback((range: { start: Date; end: Date }) => range.end.getTime() - range.start.getTime(), [])
const makeCacheToken = useCallback((provider: string, period: Period) => `${providerCacheKey(provider)}:${period}`, [providerCacheKey])
const storeCachedWindow = useCallback((provider: string, period: Period, range: { start: Date; end: Date }, projects: ProjectSummary[]) => {
if (noCache) return
const key = providerCacheKey(provider)
const windows = cacheByProviderRef.current.get(key) ?? []
const normalizedRange = { start: new Date(range.start), end: new Date(range.end) }
const existing = windows.findIndex(
existing => existing.period === period && existing.range.start.getTime() === normalizedRange.start.getTime() && existing.range.end.getTime() === normalizedRange.end.getTime(),
)
if (existing >= 0) windows.splice(existing, 1)
windows.push({ period, range: normalizedRange, projects })
windows.sort((a, b) => a.range.start.getTime() - b.range.start.getTime())
cacheByProviderRef.current.set(key, windows)
}, [noCache, providerCacheKey])
const findCachedWindow = useCallback((provider: string, range: { start: Date; end: Date }) => {
const candidates = cacheByProviderRef.current.get(providerCacheKey(provider)) ?? []
let best: CachedWindow | undefined
for (const candidate of candidates) {
if (!rangeCovers(candidate.range, range)) continue
if (!best) { best = candidate; continue }
if (getRangeWidth(candidate.range) < getRangeWidth(best.range)) {
best = candidate
} else if (candidate.period !== best.period && getRangeWidth(candidate.range) === getRangeWidth(best.range) && candidate.range.start > best.range.start) {
best = candidate
}
}
return best
}, [getRangeWidth, providerCacheKey])
const preloadWindow = useCallback(async (periodToLoad: Period, provider: string) => {
if (noCache) return
const preloadKey = makeCacheToken(provider, periodToLoad)
const range = getDateRange(periodToLoad)
const cached = findCachedWindow(provider, range)
if (cached) return
const inFlight = preloadingRef.current.get(preloadKey)
if (inFlight) return
const promise = (async () => {
const projects = await parseAllSessions(range, provider, { noCache, progress: null })
if (!noCache) {
storeCachedWindow(provider, periodToLoad, range, projects)
}
return projects
})()
preloadingRef.current.set(preloadKey, promise)
try {
await promise
} finally {
preloadingRef.current.delete(preloadKey)
}
}, [findCachedWindow, makeCacheToken, noCache, storeCachedWindow])
const reloadData = useCallback(async (p: Period, prov: string, options?: { silent?: boolean; skipCache?: boolean }) => {
const range = getDateRange(p)
const request = ++reloadSeqRef.current
const token = makeCacheToken(prov, p)
const cachedWindow = options?.skipCache ? undefined : findCachedWindow(prov, range)
if (!options?.silent) {
setOptimizeResult(null)
}
if (cachedWindow) {
const projectsFromCache = filterProjectsByName(
filterProjectsByDateRange(cachedWindow.projects, range),
projectFilter,
excludeFilter,
)
if (!options?.silent && request === reloadSeqRef.current) {
setProjects(projectsFromCache)
}
if (!options?.silent) {
const usage = await getPlanUsageOrNull()
if (request !== reloadSeqRef.current) return
setPlanUsage(usage ?? undefined)
}
return
}
const inFlight = preloadingRef.current.get(token)
if (inFlight) {
if (!options?.silent) setLoading(true)
try {
const projects = await inFlight
if (!noCache) {
storeCachedWindow(prov, p, range, projects)
}
if (request !== reloadSeqRef.current) return
const filtered = filterProjectsByName(projects, projectFilter, excludeFilter)
if (!options?.silent) {
setProjects(filtered)
}
} finally {
if (!options?.silent && request === reloadSeqRef.current) setLoading(false)
}
if (!options?.silent) {
const usage = await getPlanUsageOrNull()
if (request !== reloadSeqRef.current) return
setPlanUsage(usage ?? undefined)
}
return
}
if (!options?.silent) setLoading(true)
try {
const projects = await parseAllSessions(range, prov, { noCache, progress: null })
if (!noCache) {
storeCachedWindow(prov, p, range, projects)
}
if (request !== reloadSeqRef.current) return
const filtered = filterProjectsByName(projects, projectFilter, excludeFilter)
if (!options?.silent) {
setProjects(filtered)
}
} finally {
if (!options?.silent && request === reloadSeqRef.current) setLoading(false)
}
if (!options?.silent) {
const usage = await getPlanUsageOrNull()
if (request !== reloadSeqRef.current) return
setPlanUsage(usage ?? undefined)
}
}, [excludeFilter, findCachedWindow, getPlanUsageOrNull, noCache, projectFilter, storeCachedWindow])
useEffect(() => {
if (noCache) return
const initialRange = getDateRange(initialPeriod)
const initialKey = providerCacheKey(initialProvider)
const existing = cacheByProviderRef.current.get(initialKey) ?? []
const alreadyCached = existing.some(entry => rangeCovers(entry.range, initialRange))
if (!alreadyCached) {
storeCachedWindow(initialProvider, initialPeriod, initialRange, initialProjects)
}
}, [initialPeriod, initialProvider, initialProjects, noCache, providerCacheKey, storeCachedWindow])
useEffect(() => {
if (noCache || period === '30days') return
void preloadWindow('30days', activeProvider)
}, [noCache, period, activeProvider, preloadWindow])
useEffect(() => {
let cancelled = false
async function detect() {
@@ -638,31 +833,22 @@
return () => { cancelled = true }
}, [projects, period, optimizeAvailable])
-const reloadData = useCallback(async (p: Period, prov: string) => {
-setLoading(true)
-setOptimizeResult(null)
-const range = getDateRange(p)
-const data = filterProjectsByName(await parseAllSessions(range, prov), projectFilter, excludeFilter)
-setProjects(data)
-setLoading(false)
-}, [projectFilter, excludeFilter])
useEffect(() => {
if (!refreshSeconds || refreshSeconds <= 0) return
-const id = setInterval(() => { reloadData(period, activeProvider) }, refreshSeconds * 1000)
+const id = setInterval(() => { reloadData(period, activeProvider, { skipCache: true }) }, refreshSeconds * 1000)
return () => clearInterval(id)
}, [refreshSeconds, period, activeProvider, reloadData])
const switchPeriod = useCallback((np: Period) => {
if (np === period) return
-setPeriod(np); setView('dashboard')
+setPeriod(np)
if (debounceRef.current) clearTimeout(debounceRef.current)
debounceRef.current = setTimeout(() => { reloadData(np, activeProvider) }, 600)
}, [period, activeProvider, reloadData])
const switchPeriodImmediate = useCallback(async (np: Period) => {
if (np === period) return
-setPeriod(np); setView('dashboard')
+setPeriod(np)
if (debounceRef.current) clearTimeout(debounceRef.current)
await reloadData(np, activeProvider)
}, [period, activeProvider, reloadData])
@@ -671,15 +857,16 @@
if (input === 'q') { exit(); return }
if (input === 'o' && findingCount > 0 && view === 'dashboard' && optimizeAvailable) { setView('optimize'); return }
if ((input === 'b' || key.escape) && view === 'optimize') { setView('dashboard'); return }
-if (input === 'p' && multipleProviders) {
+if (input === 'c' && compareAvailable && view === 'dashboard') { setView('compare'); return }
+if (input === 'p' && multipleProviders && view !== 'compare') {
const opts = ['all', ...detectedProviders]; const next = opts[(opts.indexOf(activeProvider) + 1) % opts.length]
setActiveProvider(next); setView('dashboard')
if (debounceRef.current) clearTimeout(debounceRef.current)
reloadData(period, next); return
}
const idx = PERIODS.indexOf(period)
-if (key.leftArrow && view === 'dashboard') switchPeriod(PERIODS[(idx - 1 + PERIODS.length) % PERIODS.length])
-else if ((key.rightArrow || key.tab) && view === 'dashboard') switchPeriod(PERIODS[(idx + 1) % PERIODS.length])
+if (key.leftArrow) switchPeriod(PERIODS[(idx - 1 + PERIODS.length) % PERIODS.length]!)
+else if (key.rightArrow || key.tab) switchPeriod(PERIODS[(idx + 1) % PERIODS.length]!)
else if (input === '1') switchPeriodImmediate('today')
else if (input === '2') switchPeriodImmediate('week')
else if (input === '3') switchPeriodImmediate('30days')
@@ -690,47 +877,79 @@
if (loading) {
return (
<Box flexDirection="column" width={dashWidth}>
-<PeriodTabs active={period} providerName={activeProvider} showProvider={multipleProviders} />
-<Panel title="CodeBurn" color={ORANGE} width={dashWidth}><Text dimColor>Loading {PERIOD_LABELS[period]}...</Text></Panel>
-<StatusBar width={dashWidth} showProvider={multipleProviders} view="dashboard" findingCount={0} optimizeAvailable={false} />
<PeriodTabs active={period} providerName={activeProvider} showProvider={view !== 'compare' && multipleProviders} />
{view === 'compare'
? <Box flexDirection="column" paddingX={2} paddingY={1}>
<Box flexDirection="column" borderStyle="round" borderColor={ORANGE} paddingX={1}>
<Text bold color={ORANGE}>Model Comparison</Text>
<Text> </Text>
<Text dimColor>Loading {PERIOD_LABELS[period]} model data...</Text>
</Box>
</Box>
: <Panel title="CodeBurn" color={ORANGE} width={dashWidth}><Text dimColor>Loading {PERIOD_LABELS[period]}...</Text></Panel>}
{view !== 'compare' && <StatusBar width={dashWidth} showProvider={multipleProviders} view={view} findingCount={0} optimizeAvailable={false} compareAvailable={false} />}
</Box>
)
}
return (
<Box flexDirection="column" width={dashWidth}>
-<PeriodTabs active={period} providerName={activeProvider} showProvider={multipleProviders} />
-{view === 'optimize' && optimizeResult
-? <OptimizeView findings={optimizeResult.findings} costRate={optimizeResult.costRate} projects={projects} label={PERIOD_LABELS[period]} width={dashWidth} healthScore={optimizeResult.healthScore} healthGrade={optimizeResult.healthGrade} />
-: <DashboardContent projects={projects} period={period} columns={columns} activeProvider={activeProvider} budgets={projectBudgets} />}
-<StatusBar width={dashWidth} showProvider={multipleProviders} view={view} findingCount={findingCount} optimizeAvailable={optimizeAvailable} />
<PeriodTabs active={period} providerName={activeProvider} showProvider={multipleProviders && view !== 'compare'} />
{view === 'compare'
? <CompareView projects={projects} onBack={() => setView('dashboard')} />
: view === 'optimize' && optimizeResult
? <OptimizeView findings={optimizeResult.findings} costRate={optimizeResult.costRate} projects={projects} label={PERIOD_LABELS[period]} width={dashWidth} healthScore={optimizeResult.healthScore} healthGrade={optimizeResult.healthGrade} />
: <DashboardContent projects={projects} period={period} columns={columns} activeProvider={activeProvider} budgets={projectBudgets} planUsage={planUsage} />}
{view !== 'compare' && <StatusBar width={dashWidth} showProvider={multipleProviders} view={view} findingCount={findingCount} optimizeAvailable={optimizeAvailable} compareAvailable={compareAvailable} />}
</Box>
)
}
-function StaticDashboard({ projects, period, activeProvider }: { projects: ProjectSummary[]; period: Period; activeProvider?: string }) {
+function StaticDashboard({ projects, period, activeProvider, planUsage }: { projects: ProjectSummary[]; period: Period; activeProvider?: string; planUsage?: PlanUsage }) {
const { columns } = useWindowSize()
const { dashWidth } = getLayout(columns)
return (
<Box flexDirection="column" width={dashWidth}>
<PeriodTabs active={period} />
-<DashboardContent projects={projects} period={period} columns={columns} activeProvider={activeProvider} />
+<DashboardContent projects={projects} period={period} columns={columns} activeProvider={activeProvider} planUsage={planUsage} />
</Box>
)
}
export async function renderDashboard(period: Period = 'week', provider: string = 'all', refreshSeconds?: number, projectFilter?: string[], excludeFilter?: string[], customRange?: DateRange | null): Promise<void> {
export async function renderDashboard(
period: Period = 'week',
provider: string = 'all',
refreshSeconds?: number,
projectFilter?: string[],
excludeFilter?: string[],
customRange?: DateRange | null,
noCache = false,
): Promise<void> {
await loadPricing()
const range = customRange ?? getDateRange(period)
const projects = filterProjectsByName(await parseAllSessions(range, provider), projectFilter, excludeFilter)
const isTTY = process.stdin.isTTY && process.stdout.isTTY
const range = customRange ?? getDateRange(period)
const filteredProjects = filterProjectsByName(
await parseAllSessions(range, provider, { noCache, progress: createTerminalProgressReporter(isTTY) }),
projectFilter,
excludeFilter,
)
const planUsage = await getPlanUsageOrNull()
if (isTTY) {
const { waitUntilExit } = render(
<InteractiveDashboard initialProjects={projects} initialPeriod={period} initialProvider={provider} refreshSeconds={refreshSeconds} projectFilter={projectFilter} excludeFilter={excludeFilter} />
<InteractiveDashboard
initialProjects={filteredProjects}
initialPeriod={period}
initialProvider={provider}
initialPlanUsage={planUsage ?? undefined}
refreshSeconds={refreshSeconds}
projectFilter={projectFilter}
excludeFilter={excludeFilter}
noCache={noCache}
/>
)
await waitUntilExit()
} else {
const { unmount } = render(<StaticDashboard projects={projects} period={period} activeProvider={provider} />, { patchConsole: false })
const { unmount } = render(<StaticDashboard projects={filteredProjects} period={period} activeProvider={provider} planUsage={planUsage ?? undefined} />, { patchConsole: false })
unmount()
}
}


@@ -20,8 +20,9 @@ function emptyEntry(date: string): DailyEntry {
}
}
function dateKey(iso: string): string {
return iso.slice(0, 10)
export function dateKey(iso: string): string {
const d = new Date(iso)
return `${d.getFullYear()}-${String(d.getMonth() + 1).padStart(2, '0')}-${String(d.getDate()).padStart(2, '0')}`
}
export function aggregateProjectsIntoDays(projects: ProjectSummary[]): DailyEntry[] {

src/discovery-cache.ts (new file, +200)

@@ -0,0 +1,200 @@
import { createHash, randomBytes } from 'crypto'
import { existsSync } from 'fs'
import { mkdir, open, readFile, rename, unlink } from 'fs/promises'
import { homedir } from 'os'
import { dirname, join } from 'path'
import type { SessionSource } from './providers/types.js'
const DISCOVERY_CACHE_VERSION = 1
const DISCOVERY_DIRECTORY_MARKER_PREFIX = '__dir__:'
function traceDiscoveryCacheRead(op: string, filePath: string, note?: string): void {
if (process.env['CODEBURN_FILE_TRACE'] !== '1') return
const suffix = note ? ` ${note}` : ''
process.stderr.write(`codeburn-trace discovery ${op} ${filePath}${suffix}\n`)
}
export type DiscoverySnapshotEntry = {
path: string
mtimeMs: number
dirSignature?: string
}
export type DiscoveryCacheEntry = {
version: number
provider: string
scope: string
snapshot: DiscoverySnapshotEntry[]
sources: SessionSource[]
}
function cacheRoot(): string {
const base = process.env['CODEBURN_CACHE_DIR'] ?? join(homedir(), '.cache', 'codeburn')
return join(base, 'discovery-cache-v1')
}
function cacheFilename(provider: string, scope: string): string {
return `${createHash('sha1').update(`${provider}:${scope}`).digest('hex')}.json`
}
function cachePath(provider: string, scope: string): string {
return join(cacheRoot(), cacheFilename(provider, scope))
}
function isPlainObject(value: unknown): value is Record<string, unknown> {
return !!value && typeof value === 'object' && !Array.isArray(value)
}
function isFiniteNumber(value: unknown): value is number {
return typeof value === 'number' && Number.isFinite(value)
}
function isDiscoverySnapshotEntry(value: unknown): value is DiscoverySnapshotEntry {
return isPlainObject(value)
&& typeof value.path === 'string'
&& isFiniteNumber(value.mtimeMs)
}
function isSessionSource(value: unknown): value is SessionSource {
return isPlainObject(value)
&& typeof value.path === 'string'
&& typeof value.project === 'string'
&& typeof value.provider === 'string'
&& (value.fingerprintPath === undefined || typeof value.fingerprintPath === 'string')
&& (value.cacheStrategy === undefined || value.cacheStrategy === 'full-reparse' || value.cacheStrategy === 'append-jsonl')
&& (value.progressLabel === undefined || typeof value.progressLabel === 'string')
&& (value.parserVersion === undefined || typeof value.parserVersion === 'string')
}
function isDiscoveryCacheEntry(value: unknown): value is DiscoveryCacheEntry {
return isPlainObject(value)
&& value.version === DISCOVERY_CACHE_VERSION
&& typeof value.provider === 'string'
&& typeof value.scope === 'string'
&& Array.isArray(value.snapshot)
&& value.snapshot.every(isDiscoverySnapshotEntry)
&& Array.isArray(value.sources)
&& value.sources.every(isSessionSource)
}
function normalizeSnapshot(snapshot: DiscoverySnapshotEntry[]): DiscoverySnapshotEntry[] {
return [...snapshot].sort((left, right) => left.path.localeCompare(right.path))
}
function snapshotsMatch(left: DiscoverySnapshotEntry[], right: DiscoverySnapshotEntry[]): boolean {
if (left.length !== right.length) return false
return left.every((entry, index) => {
const other = right[index]
return !!other
&& entry.path === other.path
&& entry.mtimeMs === other.mtimeMs
&& entry.dirSignature === other.dirSignature
})
}
function makeDirectoryMarker(path: string, dirSignature?: string): DiscoverySnapshotEntry {
return {
path: `${DISCOVERY_DIRECTORY_MARKER_PREFIX}${path}`,
mtimeMs: 0,
dirSignature,
}
}
export function isDiscoveryDirectoryMarker(path: string): boolean {
return path.startsWith(DISCOVERY_DIRECTORY_MARKER_PREFIX)
}
export function directoryPathFromMarker(markerPath: string): string | null {
return markerPath.startsWith(DISCOVERY_DIRECTORY_MARKER_PREFIX)
? markerPath.slice(DISCOVERY_DIRECTORY_MARKER_PREFIX.length)
: null
}
async function loadDiscoveryCacheEntry(provider: string, scope: string): Promise<DiscoveryCacheEntry | null> {
const path = cachePath(provider, scope)
if (!existsSync(path)) return null
traceDiscoveryCacheRead('entry:read', path, `provider=${provider} scope=${scope}`)
try {
const raw = await readFile(path, 'utf-8')
const parsed: unknown = JSON.parse(raw)
if (!isDiscoveryCacheEntry(parsed) || parsed.provider !== provider || parsed.scope !== scope) return null
return parsed
} catch {
return null
}
}
async function atomicWriteJson(path: string, value: unknown): Promise<void> {
await mkdir(dirname(path), { recursive: true })
const temp = `${path}.${randomBytes(8).toString('hex')}.tmp`
const handle = await open(temp, 'w', 0o600)
try {
await handle.writeFile(JSON.stringify(value), { encoding: 'utf-8' })
await handle.sync()
} finally {
await handle.close()
}
try {
await rename(temp, path)
} catch (err) {
try {
await unlink(temp)
} catch {}
throw err
}
}
export async function loadDiscoveryCache(
provider: string,
scope: string,
snapshot: DiscoverySnapshotEntry[],
): Promise<SessionSource[] | null> {
const path = cachePath(provider, scope)
if (!existsSync(path)) return null
try {
const raw = await readFile(path, 'utf-8')
const parsed: unknown = JSON.parse(raw)
if (!isDiscoveryCacheEntry(parsed)) return null
if (parsed.provider !== provider || parsed.scope !== scope) return null
const normalizedSnapshot = normalizeSnapshot(snapshot)
const cachedSnapshot = normalizeSnapshot(parsed.snapshot)
if (!snapshotsMatch(normalizedSnapshot, cachedSnapshot)) return null
return parsed.sources
} catch {
return null
}
}
export async function loadDiscoveryCacheEntryUnchecked(
provider: string,
scope: string,
): Promise<DiscoveryCacheEntry | null> {
return loadDiscoveryCacheEntry(provider, scope)
}
export async function saveDiscoveryCache(
provider: string,
scope: string,
snapshot: DiscoverySnapshotEntry[],
sources: SessionSource[],
): Promise<void> {
await mkdir(cacheRoot(), { recursive: true })
await atomicWriteJson(cachePath(provider, scope), {
version: DISCOVERY_CACHE_VERSION,
provider,
scope,
snapshot: normalizeSnapshot(snapshot),
sources,
} satisfies DiscoveryCacheEntry)
}
export function discoveryDirectoryMarker(prefixPath: string, dirSignature?: string): DiscoverySnapshotEntry {
return makeDirectoryMarker(prefixPath, dirSignature)
}


@@ -3,9 +3,10 @@ import { dirname, join, resolve } from 'path'
import { CATEGORY_LABELS, type ProjectSummary, type TaskCategory } from './types.js'
import { getCurrency, convertCost } from './currency.js'
import { dateKey } from './day-aggregator.js'
function escCsv(s: string): string {
const sanitized = /^[=+\-@]/.test(s) ? `'${s}` : s
const sanitized = /^[\t\r=+\-@]/.test(s) ? `'${s}` : s
if (sanitized.includes(',') || sanitized.includes('"') || sanitized.includes('\n')) {
return `"${sanitized.replace(/"/g, '""')}"`
}
@@ -48,7 +49,7 @@ function buildDailyRows(projects: ProjectSummary[], period: string): Row[] {
for (const session of project.sessions) {
for (const turn of session.turns) {
if (!turn.timestamp) continue
const day = turn.timestamp.slice(0, 10)
const day = dateKey(turn.timestamp)
if (!daily[day]) {
daily[day] = { cost: 0, calls: 0, input: 0, output: 0, cacheRead: 0, cacheWrite: 0, sessions: new Set() }
}
@@ -282,7 +283,7 @@ async function clearCodeburnExportFolder(path: string): Promise<void> {
/// wipe a sensitive file (prior versions did `rm(path, { force: true })` unconditionally).
export async function exportCsv(periods: PeriodExport[], outputPath: string): Promise<string> {
const thirtyDays = periods.find(p => p.label === '30 Days')
const thirtyDayProjects = thirtyDays?.projects ?? periods[periods.length - 1].projects
const thirtyDayProjects = thirtyDays?.projects ?? periods[periods.length - 1]?.projects ?? []
let folder = resolve(outputPath)
if (folder.toLowerCase().endsWith('.csv')) {
@@ -324,7 +325,7 @@ export async function exportCsv(periods: PeriodExport[], outputPath: string): Pr
export async function exportJson(periods: PeriodExport[], outputPath: string): Promise<string> {
const thirtyDays = periods.find(p => p.label === '30 Days')
const thirtyDayProjects = thirtyDays?.projects ?? periods[periods.length - 1].projects
const thirtyDayProjects = thirtyDays?.projects ?? periods[periods.length - 1]?.projects ?? []
const { code, rate, symbol } = getCurrency()
const data = {


@@ -89,5 +89,35 @@ export async function* readSessionLines(filePath: string): AsyncGenerator<string
for await (const line of rl) yield line
} catch (err) {
warn(`stream read failed for ${filePath}: ${(err as NodeJS.ErrnoException).code ?? 'unknown'}`)
} finally {
stream.destroy()
}
}
export async function* readSessionLinesFromOffset(filePath: string, startOffset: number): AsyncGenerator<string> {
let size: number
try {
size = (await stat(filePath)).size
} catch (err) {
warn(`stat failed for ${filePath}: ${(err as NodeJS.ErrnoException).code ?? 'unknown'}`)
return
}
if (size > MAX_SESSION_FILE_BYTES) {
warn(`skipped oversize file ${filePath} (${size} bytes > cap ${MAX_SESSION_FILE_BYTES})`)
return
}
const stream = createReadStream(filePath, {
encoding: 'utf-8',
start: Math.max(0, startOffset),
})
const rl = createInterface({ input: stream, crlfDelay: Infinity })
try {
for await (const line of rl) yield line
} catch (err) {
warn(`stream read failed for ${filePath}: ${(err as NodeJS.ErrnoException).code ?? 'unknown'}`)
} finally {
stream.destroy()
}
}


@@ -62,7 +62,7 @@ function getCachePath(): string {
}
function parseLiteLLMEntry(entry: LiteLLMEntry): ModelCosts | null {
if (!entry.input_cost_per_token || !entry.output_cost_per_token) return null
if (entry.input_cost_per_token === undefined || entry.output_cost_per_token === undefined) return null
return {
inputCostPerToken: entry.input_cost_per_token,
outputCostPerToken: entry.output_cost_per_token,
@@ -145,7 +145,9 @@ export function setModelAliases(aliases: Record<string, string>): void {
}
function resolveAlias(model: string): string {
return userAliases[model] ?? BUILTIN_ALIASES[model] ?? model
if (Object.hasOwn(userAliases, model)) return userAliases[model]!
if (Object.hasOwn(BUILTIN_ALIASES, model)) return BUILTIN_ALIASES[model]!
return model
}
function getCanonicalName(model: string): string {
return model
@@ -164,7 +166,7 @@ export function getModelCosts(model: string): ModelCosts | null {
}
for (const [key, costs] of pricingCache ?? new Map()) {
if (canonical.startsWith(key) || key.startsWith(canonical)) return costs
if (canonical.startsWith(key)) return costs
}
for (const [key, costs] of Object.entries(FALLBACK_PRICING)) {

src/parse-progress.ts (new file, +86)

@@ -0,0 +1,86 @@
import { stripVTControlCharacters } from 'node:util'
import { Chalk } from 'chalk'
import type { SourceProgressReporter } from './parser.js'
import { providerColor, providerLabel } from './provider-colors.js'
function getBarWidth(columns: number | undefined): number {
if (!columns || columns >= 80) return 16
if (columns >= 56) return 12
return 8
}
function renderBar(current: number, total: number, width: number): { filled: number; empty: number } {
if (total <= 0) return { filled: 0, empty: width }
const filled = Math.max(0, Math.min(width, Math.round((current / total) * width)))
return { filled, empty: Math.max(0, width - filled) }
}
function mapChalkLevel(colorDepth: number): 0 | 1 | 2 | 3 {
if (colorDepth >= 24) return 3
if (colorDepth >= 8) return 2
if (colorDepth >= 1) return 1
return 0
}
export function createTerminalProgressReporter(
enabled: boolean,
stream: NodeJS.WriteStream = process.stderr,
): SourceProgressReporter | null {
if (!enabled || !stream.isTTY) return null
let total = 0
let current = 0
let lastProvider = 'all'
let lastLineLength = 0
let active = false
const colorDepth = typeof stream.getColorDepth === 'function' ? stream.getColorDepth() : 0
const chalk = new Chalk({ level: mapChalkLevel(colorDepth) })
function buildFrame(provider: string, done = false): string {
const columns = 'columns' in stream ? (stream as NodeJS.WriteStream & { columns?: number }).columns : process.stderr.columns
const width = getBarWidth(columns)
const label = providerLabel(provider)
const { filled, empty } = renderBar(current, total, width)
const accent = providerColor(provider)
const line = [
chalk.dim('Updating'),
chalk.bold.hex(accent)(label),
chalk.dim('cache'),
`[${chalk.hex(accent)('█'.repeat(filled))}${chalk.hex('#666666')('░'.repeat(empty))}]`,
`${current}/${total}`,
].join(' ')
const visible = stripVTControlCharacters(line)
const pad = lastLineLength > visible.length ? ' '.repeat(lastLineLength - visible.length) : ''
lastLineLength = Math.max(lastLineLength, visible.length)
return `${line}${pad}${done ? '\n' : '\r'}`
}
return {
start(nextTotal: number) {
total = nextTotal
current = 0
lastProvider = 'all'
lastLineLength = 0
active = nextTotal > 0
},
advance(provider: string) {
if (!active) return
lastProvider = provider
current += 1
stream.write(buildFrame(provider))
},
finish(provider?: string) {
if (!active) return
if (current === 0) return
stream.write(buildFrame(provider ?? lastProvider, true))
active = false
total = 0
current = 0
lastProvider = 'all'
lastLineLength = 0
},
}
}

File diff suppressed because it is too large.

src/plan-usage.ts (new file, +148)

@@ -0,0 +1,148 @@
import { readPlan, type Plan } from './config.js'
import { parseAllSessions } from './parser.js'
import type { DateRange, ProjectSummary } from './types.js'
const MS_PER_DAY = 24 * 60 * 60 * 1000
const PLAN_NEAR_THRESHOLD_PCT = 80
export type PlanStatus = 'under' | 'near' | 'over'
export type PlanUsage = {
plan: Plan
periodStart: Date
periodEnd: Date
spentApiEquivalentUsd: number
budgetUsd: number
percentUsed: number
status: PlanStatus
projectedMonthUsd: number
daysUntilReset: number
}
export function clampResetDay(resetDay: number | undefined): number {
if (!Number.isInteger(resetDay)) return 1
return Math.min(28, Math.max(1, resetDay ?? 1))
}
export function computePeriodFromResetDay(resetDay: number | undefined, today: Date): { periodStart: Date; periodEnd: Date } {
const day = clampResetDay(resetDay)
const year = today.getFullYear()
const month = today.getMonth()
if (today.getDate() >= day) {
return {
periodStart: new Date(year, month, day, 0, 0, 0, 0),
periodEnd: new Date(year, month + 1, day, 0, 0, 0, 0),
}
}
return {
periodStart: new Date(year, month - 1, day, 0, 0, 0, 0),
periodEnd: new Date(year, month, day, 0, 0, 0, 0),
}
}
function median(values: number[]): number {
if (values.length === 0) return 0
const sorted = [...values].sort((a, b) => a - b)
const mid = Math.floor(sorted.length / 2)
if (sorted.length % 2 === 0) {
return (sorted[mid - 1] + sorted[mid]) / 2
}
return sorted[mid]!
}
function toLocalDateKey(d: Date): string {
const year = d.getFullYear()
const month = String(d.getMonth() + 1).padStart(2, '0')
const day = String(d.getDate()).padStart(2, '0')
return `${year}-${month}-${day}`
}
function toDayIndex(d: Date): number {
return Math.floor(Date.UTC(d.getFullYear(), d.getMonth(), d.getDate()) / MS_PER_DAY)
}
function diffCalendarDays(from: Date, to: Date): number {
return toDayIndex(to) - toDayIndex(from)
}
export function projectMonthEnd(
projects: ProjectSummary[],
periodStart: Date,
periodEnd: Date,
today: Date,
spent: number,
): number {
const dayCosts = new Map<string, number>()
for (const project of projects) {
for (const session of project.sessions) {
for (const turn of session.turns) {
if (!turn.timestamp) continue
const ts = new Date(turn.timestamp)
if (Number.isNaN(ts.getTime())) continue
if (ts < periodStart || ts > today) continue
const dayKey = toLocalDateKey(ts)
const turnCost = turn.assistantCalls.reduce((sum, call) => sum + call.costUSD, 0)
dayCosts.set(dayKey, (dayCosts.get(dayKey) ?? 0) + turnCost)
}
}
}
const elapsedDays = Math.max(1, diffCalendarDays(periodStart, today) + 1)
const elapsedDailyCosts: number[] = []
for (let i = 0; i < elapsedDays; i++) {
const date = new Date(periodStart.getFullYear(), periodStart.getMonth(), periodStart.getDate() + i)
elapsedDailyCosts.push(dayCosts.get(toLocalDateKey(date)) ?? 0)
}
const trailingWindow = elapsedDailyCosts.slice(-7)
const medianDailyCost = median(trailingWindow)
const daysRemaining = Math.max(0, diffCalendarDays(today, periodEnd) - 1)
return spent + medianDailyCost * daysRemaining
}
export function getPlanUsageFromProjects(plan: Plan, projects: ProjectSummary[], today = new Date()): PlanUsage {
const { periodStart, periodEnd } = computePeriodFromResetDay(plan.resetDay, today)
const spent = projects.reduce((sum, p) => sum + p.totalCostUSD, 0)
const budgetUsd = plan.monthlyUsd
const percentUsed = budgetUsd > 0 ? (spent / budgetUsd) * 100 : 0
const status: PlanStatus = percentUsed > 100 ? 'over' : percentUsed >= PLAN_NEAR_THRESHOLD_PCT ? 'near' : 'under'
const projectedMonthUsd = projectMonthEnd(projects, periodStart, periodEnd, today, spent)
const daysUntilReset = Math.max(0, diffCalendarDays(today, periodEnd))
return {
plan,
periodStart,
periodEnd,
spentApiEquivalentUsd: spent,
budgetUsd,
percentUsed,
status,
projectedMonthUsd,
daysUntilReset,
}
}
export async function getPlanUsage(plan: Plan, today = new Date()): Promise<PlanUsage> {
const { periodStart } = computePeriodFromResetDay(plan.resetDay, today)
const range: DateRange = {
start: periodStart,
end: today,
}
const provider = plan.provider === 'all' ? 'all' : plan.provider
const projects = await parseAllSessions(range, provider)
return getPlanUsageFromProjects(plan, projects, today)
}
export async function getPlanUsageOrNull(today = new Date()): Promise<PlanUsage | null> {
const plan = await readPlan()
if (!isActivePlan(plan)) return null
return getPlanUsage(plan, today)
}
export function isActivePlan(plan: Plan | undefined): plan is Plan {
return plan !== undefined && plan.id !== 'none' && Number.isFinite(plan.monthlyUsd) && plan.monthlyUsd > 0
}

src/plans.ts (new file, +55)

@@ -0,0 +1,55 @@
import type { Plan, PlanId, PlanProvider } from './config.js'
export const PLAN_PROVIDERS: PlanProvider[] = ['all', 'claude', 'codex', 'cursor']
export const PLAN_IDS: PlanId[] = ['claude-pro', 'claude-max', 'cursor-pro', 'custom', 'none']
export const PRESET_PLANS: Record<'claude-pro' | 'claude-max' | 'cursor-pro', Omit<Plan, 'setAt'>> = {
'claude-pro': {
id: 'claude-pro',
monthlyUsd: 20,
provider: 'claude',
resetDay: 1,
},
'claude-max': {
id: 'claude-max',
monthlyUsd: 200,
provider: 'claude',
resetDay: 1,
},
'cursor-pro': {
id: 'cursor-pro',
monthlyUsd: 20,
provider: 'cursor',
resetDay: 1,
},
}
export function isPlanProvider(value: string): value is PlanProvider {
return PLAN_PROVIDERS.includes(value as PlanProvider)
}
export function isPlanId(value: string): value is PlanId {
return PLAN_IDS.includes(value as PlanId)
}
export function getPresetPlan(id: string): Omit<Plan, 'setAt'> | null {
if (id in PRESET_PLANS) {
return PRESET_PLANS[id as keyof typeof PRESET_PLANS]
}
return null
}
export function planDisplayName(id: PlanId): string {
switch (id) {
case 'claude-pro':
return 'Claude Pro'
case 'claude-max':
return 'Claude Max'
case 'cursor-pro':
return 'Cursor Pro'
case 'custom':
return 'Custom'
case 'none':
return 'None'
}
}

src/provider-colors.ts (new file, +27)

@@ -0,0 +1,27 @@
export const PROVIDER_COLORS: Record<string, string> = {
all: '#FF8C42',
claude: '#FF8C42',
codex: '#5BF5A0',
cursor: '#00B4D8',
opencode: '#A78BFA',
pi: '#F472B6',
copilot: '#6495ED',
}
const PROVIDER_LABELS: Record<string, string> = {
all: 'All',
claude: 'Claude',
codex: 'Codex',
cursor: 'Cursor',
opencode: 'OpenCode',
pi: 'Pi',
copilot: 'Copilot',
}
export function providerLabel(name: string): string {
return PROVIDER_LABELS[name] ?? name
}
export function providerColor(name: string): string {
return PROVIDER_COLORS[name] ?? '#CCCCCC'
}


@@ -2,6 +2,7 @@ import { readdir, stat } from 'fs/promises'
import { basename, join } from 'path'
import { homedir } from 'os'
import { type DiscoverySnapshotEntry, loadDiscoveryCache, saveDiscoveryCache } from '../discovery-cache.js'
import { readSessionFile } from '../fs-utils.js'
import { calculateCost } from '../models.js'
import type { Provider, SessionSource, SessionParser, ParsedProviderCall } from './types.js'
@@ -86,8 +87,49 @@ async function isValidCodexSession(filePath: string): Promise<{ valid: boolean;
return { valid, meta: valid ? entry : undefined }
}
async function collectCodexDiscoverySnapshot(sessionsDir: string): Promise<DiscoverySnapshotEntry[]> {
const snapshot: DiscoverySnapshotEntry[] = []
let years: string[]
try {
years = await readdir(sessionsDir)
} catch {
return snapshot
}
for (const year of years) {
if (!/^\d{4}$/.test(year)) continue
const yearDir = join(sessionsDir, year)
const yearStat = await stat(yearDir).catch(() => null)
if (!yearStat?.isDirectory()) continue
const months = await readdir(yearDir).catch(() => [] as string[])
for (const month of months) {
if (!/^\d{2}$/.test(month)) continue
const monthDir = join(yearDir, month)
const monthStat = await stat(monthDir).catch(() => null)
if (!monthStat?.isDirectory()) continue
const days = await readdir(monthDir).catch(() => [] as string[])
for (const day of days) {
if (!/^\d{2}$/.test(day)) continue
const dayDir = join(monthDir, day)
const dayStat = await stat(dayDir).catch(() => null)
if (!dayStat?.isDirectory()) continue
snapshot.push({ path: dayDir, mtimeMs: dayStat.mtimeMs })
}
}
}
return snapshot
}
async function discoverSessionsInDir(codexDir: string): Promise<SessionSource[]> {
const sessionsDir = join(codexDir, 'sessions')
const snapshot = await collectCodexDiscoverySnapshot(sessionsDir)
const cached = await loadDiscoveryCache('codex', sessionsDir, snapshot)
if (cached) return cached
const sources: SessionSource[] = []
let years: string[]
@@ -122,12 +164,21 @@ async function discoverSessionsInDir(codexDir: string): Promise<SessionSource[]>
if (!valid || !meta) continue
const cwd = meta.payload?.cwd ?? 'unknown'
sources.push({ path: filePath, project: sanitizeProject(cwd), provider: 'codex' })
sources.push({
path: filePath,
project: sanitizeProject(cwd),
provider: 'codex',
fingerprintPath: filePath,
cacheStrategy: 'append-jsonl',
progressLabel: basename(filePath),
parserVersion: 'codex:v1',
})
}
}
}
}
await saveDiscoveryCache('codex', sessionsDir, snapshot, sources)
return sources
}


@@ -2,6 +2,7 @@ import { readdir, stat } from 'fs/promises'
import { basename, dirname, join } from 'path'
import { homedir } from 'os'
import { type DiscoverySnapshotEntry, loadDiscoveryCache, saveDiscoveryCache } from '../discovery-cache.js'
import { readSessionFile } from '../fs-utils.js'
import { calculateCost } from '../models.js'
import type { Provider, SessionSource, SessionParser, ParsedProviderCall } from './types.js'
@@ -157,7 +158,42 @@ function createParser(source: SessionSource, seenKeys: Set<string>): SessionPars
}
}
async function collectCopilotDiscoverySnapshot(sessionStateDir: string): Promise<DiscoverySnapshotEntry[]> {
const snapshot: DiscoverySnapshotEntry[] = []
let sessionDirs: string[]
try {
sessionDirs = await readdir(sessionStateDir)
} catch {
return snapshot
}
for (const sessionId of sessionDirs) {
const sessionDir = join(sessionStateDir, sessionId)
const dirStat = await stat(sessionDir).catch(() => null)
if (!dirStat?.isDirectory()) continue
const eventsPath = join(sessionDir, 'events.jsonl')
const eventsStat = await stat(eventsPath).catch(() => null)
if (!eventsStat?.isFile()) continue
snapshot.push({ path: eventsPath, mtimeMs: eventsStat.mtimeMs })
const workspacePath = join(sessionDir, 'workspace.yaml')
const workspaceStat = await stat(workspacePath).catch(() => null)
if (workspaceStat?.isFile()) {
snapshot.push({ path: workspacePath, mtimeMs: workspaceStat.mtimeMs })
}
}
return snapshot
}
async function discoverSessionsInDir(sessionStateDir: string): Promise<SessionSource[]> {
const snapshot = await collectCopilotDiscoverySnapshot(sessionStateDir)
const cached = await loadDiscoveryCache('copilot', sessionStateDir, snapshot)
if (cached) return cached
const sources: SessionSource[] = []
let sessionDirs: string[]
@@ -179,9 +215,18 @@ async function discoverSessionsInDir(sessionStateDir: string): Promise<SessionSo
if (cwd) project = basename(cwd)
}
sources.push({ path: eventsPath, project, provider: 'copilot' })
sources.push({
path: eventsPath,
project,
provider: 'copilot',
fingerprintPath: eventsPath,
cacheStrategy: 'append-jsonl',
progressLabel: basename(eventsPath),
parserVersion: 'copilot:v1',
})
}
await saveDiscoveryCache('copilot', sessionStateDir, snapshot, sources)
return sources
}


@@ -0,0 +1,423 @@
import { createHash } from 'crypto'
import { existsSync } from 'fs'
import { readdir, readFile, stat } from 'fs/promises'
import { join, basename } from 'path'
import { homedir } from 'os'
import { calculateCost } from '../models.js'
import { openDatabase, type SqliteDatabase } from '../sqlite.js'
import type {
Provider,
SessionSource,
SessionParser,
ParsedProviderCall,
} from './types.js'
type ConversationSummary = {
conversationId: string
model: string | null
title: string | null
updatedAt: string | null
}
type AssistantTurn = {
body: string
reasoning: string
tools: string[]
}
type ParsedTurn = {
userMessage: string
assistant: AssistantTurn
}
const CURSOR_AGENT_DEFAULT_MODEL = 'claude-sonnet-4-5'
const CHARS_PER_TOKEN = 4
const MAX_USER_TEXT_LENGTH = 500
const DIGITS_ONLY = /^\d+$/
const UUID_LIKE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i
const USER_MARKER = /^\s*user:\s*/i
const ASSISTANT_MARKER = /^\s*A:\s*/
const THINKING_MARKER = /^\s*\[Thinking\]\s*/
const TOOL_CALL_MARKER = /^\s*\[Tool call\]\s*(.+?)\s*$/i
const TOOL_RESULT_MARKER = /^\s*\[Tool result\]\b/i
const USER_QUERY_OPEN = '<user_query>'
const USER_QUERY_CLOSE = '</user_query>'
const CONVERSATION_SUMMARY_QUERY = `
SELECT conversationId, model, title, updatedAt
FROM conversation_summaries
WHERE conversationId = ?
`
const modelDisplayNames: Record<string, string> = {
'claude-4.5-opus-high-thinking': 'Opus 4.5 (Thinking)',
'claude-4-opus': 'Opus 4',
'claude-4-sonnet-thinking': 'Sonnet 4 (Thinking)',
'claude-4.5-sonnet-thinking': 'Sonnet 4.5 (Thinking)',
'claude-4.6-sonnet': 'Sonnet 4.6',
'composer-1': 'Composer 1',
'grok-code-fast-1': 'Grok Code Fast',
'gemini-3-pro': 'Gemini 3 Pro',
'gpt-5.1-codex-high': 'GPT-5.1 Codex',
'gpt-5': 'GPT-5',
'gpt-4.1': 'GPT-4.1',
default: 'Auto (Sonnet est.)',
}
function getCursorAgentBaseDir(baseDirOverride?: string): string {
if (baseDirOverride) return baseDirOverride
// Windows paths unverified; tracked as Open Question 3 in issue #55.
return join(homedir(), '.cursor')
}
function getProjectsDir(baseDir: string): string {
return join(baseDir, 'projects')
}
function getAttributionDbPath(baseDir: string): string {
return join(baseDir, 'ai-tracking', 'ai-code-tracking.db')
}
function estimateTokens(charCount: number): number {
if (charCount <= 0) return 0
return Math.ceil(charCount / CHARS_PER_TOKEN)
}
function parseToolName(raw: string): string {
const clean = raw.trim()
if (clean.length === 0) return 'unknown'
return clean.toLowerCase().replace(/\s+/g, '-')
}
function normalizeTimestamp(raw: string | number | null | undefined): string | null {
if (raw === null || raw === undefined) return null
if (typeof raw === 'string') {
const trimmed = raw.trim()
if (trimmed.length === 0) return null
if (DIGITS_ONLY.test(trimmed)) {
const num = Number(trimmed)
if (!Number.isNaN(num)) {
const ms = num < 1e12 ? num * 1000 : num
return new Date(ms).toISOString()
}
}
const parsed = new Date(trimmed)
if (!Number.isNaN(parsed.getTime())) return parsed.toISOString()
return null
}
const ms = raw < 1e12 ? raw * 1000 : raw
return new Date(ms).toISOString()
}
function prettifyProjectId(raw: string): string {
if (!raw) return raw
if (DIGITS_ONLY.test(raw)) {
const num = Number(raw)
if (!Number.isNaN(num) && raw.length >= 13) {
const iso = new Date(num).toISOString()
return `cursor-agent:${iso}`
}
}
const withoutPrefix = raw.replace(/^-Users-/, '')
const parts = withoutPrefix.split('-').filter(Boolean)
if (parts.length > 0) return parts[parts.length - 1]!
return raw
}
function resolveModel(raw: string | null | undefined): string {
if (!raw || raw === 'default') return CURSOR_AGENT_DEFAULT_MODEL
return raw
}
function toConversationId(transcriptPath: string): string {
const filename = basename(transcriptPath, '.txt')
if (filename.length === 36 && UUID_LIKE.test(filename)) return filename
return createHash('sha1').update(transcriptPath).digest('hex').slice(0, 16)
}
function extractUserQuery(userBlock: string): string {
const chunks: string[] = []
let cursor = 0
while (cursor < userBlock.length) {
const openIndex = userBlock.indexOf(USER_QUERY_OPEN, cursor)
if (openIndex === -1) break
const start = openIndex + USER_QUERY_OPEN.length
const closeIndex = userBlock.indexOf(USER_QUERY_CLOSE, start)
if (closeIndex === -1) {
chunks.push(userBlock.slice(start).trim())
break
}
chunks.push(userBlock.slice(start, closeIndex).trim())
cursor = closeIndex + USER_QUERY_CLOSE.length
}
const combined = chunks.filter(Boolean).join(' ').replace(/\s+/g, ' ').trim()
return combined.slice(0, MAX_USER_TEXT_LENGTH)
}
function parseTranscript(raw: string): { turns: ParsedTurn[]; recognized: boolean } {
const lines = raw.split(/\r?\n/)
let recognized = false
const pendingUsers: string[] = []
const turns: ParsedTurn[] = []
let active: 'none' | 'user' | 'assistant' = 'none'
let userLines: string[] = []
let assistantLines: string[] = []
const flushUser = () => {
if (userLines.length === 0) return
const userQuery = extractUserQuery(userLines.join('\n'))
if (userQuery.length > 0) pendingUsers.push(userQuery)
userLines = []
}
const flushAssistant = () => {
if (assistantLines.length === 0) return
let output = ''
let reasoning = ''
const toolsByTurn: Record<string, boolean> = Object.create(null)
for (const line of assistantLines) {
if (TOOL_RESULT_MARKER.test(line)) continue
const thinkingMatch = line.match(THINKING_MARKER)
if (thinkingMatch) {
const body = line.replace(THINKING_MARKER, '').trim()
if (body.length > 0) reasoning += `${body}\n`
continue
}
const toolMatch = line.match(TOOL_CALL_MARKER)
if (toolMatch) {
const parsedTool = parseToolName(toolMatch[1] ?? '')
const toolKey = `cursor:${parsedTool}`
toolsByTurn[toolKey] = true
continue
}
output += `${line}\n`
}
if (pendingUsers.length > 0) {
const userMessage = pendingUsers.shift()!
const tools = Object.keys(toolsByTurn)
turns.push({
userMessage,
assistant: {
body: output.trim(),
reasoning: reasoning.trim(),
tools,
},
})
}
assistantLines = []
}
for (const line of lines) {
if (USER_MARKER.test(line)) {
recognized = true
if (active === 'user') flushUser()
if (active === 'assistant') flushAssistant()
active = 'user'
userLines = [line.replace(USER_MARKER, '')]
continue
}
if (ASSISTANT_MARKER.test(line)) {
recognized = true
if (active === 'user') flushUser()
if (active === 'assistant') flushAssistant()
active = 'assistant'
assistantLines = [line.replace(ASSISTANT_MARKER, '')]
continue
}
if (active === 'user') {
userLines.push(line)
continue
}
if (active === 'assistant') {
assistantLines.push(line)
}
}
if (active === 'user') flushUser()
if (active === 'assistant') flushAssistant()
return { turns, recognized }
}
function createParser(
source: SessionSource,
seenKeys: Set<string>,
dbPath: string,
summariesByConversationId: Record<string, ConversationSummary | undefined>,
): SessionParser {
return {
async *parse(): AsyncGenerator<ParsedProviderCall> {
const conversationId = toConversationId(source.path)
let summary = summariesByConversationId[conversationId]
let db: SqliteDatabase | null = null
try {
if (!summary) {
if (existsSync(dbPath)) {
try {
db = openDatabase(dbPath)
const rows = db.query<{
conversationId: string
model: string | null
title: string | null
updatedAt: string | number | null
}>(CONVERSATION_SUMMARY_QUERY, [conversationId])
if (rows.length > 0) {
const row = rows[0]!
summary = {
conversationId: row.conversationId,
model: row.model,
title: row.title,
updatedAt: normalizeTimestamp(row.updatedAt),
}
summariesByConversationId[conversationId] = summary
}
} catch {
summary = undefined
}
}
}
const transcript = await readFile(source.path, 'utf-8')
const parsed = parseTranscript(transcript)
if (!parsed.recognized) {
process.stderr.write(`codeburn: skipped ${basename(source.path)}: unrecognized cursor-agent transcript format\n`)
return
}
let timestamp = summary?.updatedAt ?? null
if (!timestamp) {
const fileStat = await stat(source.path)
timestamp = fileStat.mtime.toISOString()
}
const model = resolveModel(summary?.model ?? null)
for (let turnIndex = 0; turnIndex < parsed.turns.length; turnIndex++) {
const turn = parsed.turns[turnIndex]!
const inputTokens = estimateTokens(turn.userMessage.length)
const outputTokens = estimateTokens(turn.assistant.body.length)
const reasoningTokens = estimateTokens(turn.assistant.reasoning.length)
const deduplicationKey = `cursor-agent:${conversationId}:${turnIndex}`
if (seenKeys.has(deduplicationKey)) continue
seenKeys.add(deduplicationKey)
const costUSD = calculateCost(
model,
inputTokens,
outputTokens + reasoningTokens,
0,
0,
0,
)
yield {
provider: 'cursor-agent',
model,
inputTokens,
outputTokens,
cacheCreationInputTokens: 0,
cacheReadInputTokens: 0,
cachedInputTokens: 0,
reasoningTokens,
webSearchRequests: 0,
costUSD,
tools: turn.assistant.tools,
bashCommands: [],
timestamp,
speed: 'standard',
deduplicationKey,
userMessage: turn.userMessage,
sessionId: conversationId,
}
}
} finally {
db?.close()
}
},
}
}
export function createCursorAgentProvider(baseDirOverride?: string): Provider {
const baseDir = getCursorAgentBaseDir(baseDirOverride)
const projectsDir = getProjectsDir(baseDir)
const dbPath = getAttributionDbPath(baseDir)
const summariesByConversationId: Record<string, ConversationSummary | undefined> = Object.create(null)
return {
name: 'cursor-agent',
displayName: 'Cursor Agent',
modelDisplayName(model: string): string {
if (model === 'default') return modelDisplayNames.default
const label = modelDisplayNames[model] ?? model
return `${label} (est.)`
},
toolDisplayName(rawTool: string): string {
return rawTool
},
async discoverSessions(): Promise<SessionSource[]> {
if (!existsSync(projectsDir)) return []
const projectEntries = await readdir(projectsDir, { withFileTypes: true })
const sources: SessionSource[] = []
for (const entry of projectEntries) {
if (!entry.isDirectory()) continue
const projectId = prettifyProjectId(entry.name)
const transcriptDir = join(projectsDir, entry.name, 'agent-transcripts')
if (!existsSync(transcriptDir)) continue
const transcriptEntries = await readdir(transcriptDir, { withFileTypes: true })
for (const transcript of transcriptEntries) {
if (!transcript.isFile()) continue
if (!transcript.name.endsWith('.txt')) continue
const transcriptPath = join(transcriptDir, transcript.name)
sources.push({
path: transcriptPath,
project: projectId,
provider: 'cursor-agent',
fingerprintPath: transcriptPath,
cacheStrategy: 'full-reparse',
progressLabel: `cursor-agent:${basename(transcript.name)}`,
parserVersion: 'cursor-agent:v1',
})
}
}
return sources
},
createSessionParser(source: SessionSource, seenKeys: Set<string>): SessionParser {
return createParser(source, seenKeys, dbPath, summariesByConversationId)
},
}
}
export const cursor_agent = createCursorAgentProvider()
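As a standalone illustration of the transcript-to-conversation-id mapping above: a 36-character UUID filename passes through unchanged, and anything else collapses to a 16-hex-char SHA-1 prefix of the full path. The `UUID_LIKE` pattern here is an assumption (the real constant is defined earlier in the file, outside this excerpt), using the usual 8-4-4-4-12 shape:

```typescript
import { createHash } from 'node:crypto'
import { basename } from 'node:path'

// Assumed shape of the UUID_LIKE constant: the usual 8-4-4-4-12 form.
const UUID_LIKE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i

function toConversationId(transcriptPath: string): string {
  const filename = basename(transcriptPath, '.txt')
  if (filename.length === 36 && UUID_LIKE.test(filename)) return filename
  // Non-UUID names get a stable, short digest of the whole path instead.
  return createHash('sha1').update(transcriptPath).digest('hex').slice(0, 16)
}
```

Either way the id is deterministic per path, which keeps the `cursor-agent:${conversationId}:${turnIndex}` dedup keys stable across runs.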


@@ -3,7 +3,6 @@ import { join } from 'path'
import { homedir } from 'os'
import { calculateCost } from '../models.js'
import { readCachedResults, writeCachedResults } from '../cursor-cache.js'
import { isSqliteAvailable, getSqliteLoadError, openDatabase, type SqliteDatabase } from '../sqlite.js'
import type { Provider, SessionSource, SessionParser, ParsedProviderCall } from './types.js'
@@ -215,16 +214,6 @@ function createParser(source: SessionSource, seenKeys: Set<string>): SessionPars
return
}
const cached = await readCachedResults(source.path)
if (cached) {
for (const call of cached) {
if (seenKeys.has(call.deduplicationKey)) continue
seenKeys.add(call.deduplicationKey)
yield call
}
return
}
let db: SqliteDatabase
try {
db = openDatabase(source.path)
@@ -241,8 +230,6 @@ function createParser(source: SessionSource, seenKeys: Set<string>): SessionPars
const { calls } = parseBubbles(db, seenKeys)
await writeCachedResults(source.path, calls)
for (const call of calls) {
yield call
}
@@ -272,7 +259,15 @@ export function createCursorProvider(dbPathOverride?: string): Provider {
const dbPath = dbPathOverride ?? getCursorDbPath()
if (!existsSync(dbPath)) return []
return [{ path: dbPath, project: 'cursor', provider: 'cursor' }]
return [{
path: dbPath,
project: 'cursor',
provider: 'cursor',
fingerprintPath: dbPath,
cacheStrategy: 'full-reparse',
progressLabel: 'Cursor state.vscdb',
parserVersion: 'cursor:v1',
}]
},
createSessionParser(source: SessionSource, seenKeys: Set<string>): SessionParser {


@@ -1,8 +1,7 @@
import { claude } from './claude.js'
import { codex } from './codex.js'
import { copilot } from './copilot.js'
import { pi } from './pi.js'
import { omp } from './pi.js'
import { pi, omp } from './pi.js'
import type { Provider, SessionSource } from './types.js'
let cursorProvider: Provider | null = null
@@ -23,6 +22,9 @@ async function loadCursor(): Promise<Provider | null> {
let opencodeProvider: Provider | null = null
let opencodeLoadAttempted = false
let cursorAgentProvider: Provider | null = null
let cursorAgentLoadAttempted = false
async function loadOpenCode(): Promise<Provider | null> {
if (opencodeLoadAttempted) return opencodeProvider
opencodeLoadAttempted = true
@@ -35,13 +37,26 @@
}
}
async function loadCursorAgent(): Promise<Provider | null> {
if (cursorAgentLoadAttempted) return cursorAgentProvider
cursorAgentLoadAttempted = true
try {
const { cursor_agent } = await import('./cursor-agent.js')
cursorAgentProvider = cursor_agent
return cursor_agent
} catch {
return null
}
}
const coreProviders: Provider[] = [claude, codex, copilot, pi, omp]
export async function getAllProviders(): Promise<Provider[]> {
const [cursor, opencode] = await Promise.all([loadCursor(), loadOpenCode()])
const [cursor, opencode, cursorAgent] = await Promise.all([loadCursor(), loadOpenCode(), loadCursorAgent()])
const all = [...coreProviders]
if (cursor) all.push(cursor)
if (opencode) all.push(opencode)
if (cursorAgent) all.push(cursorAgent)
return all
}
@@ -69,5 +84,9 @@ export async function getProvider(name: string): Promise<Provider | undefined> {
const oc = await loadOpenCode()
return oc ?? undefined
}
if (name === 'cursor-agent') {
const ca = await loadCursorAgent()
return ca ?? undefined
}
return coreProviders.find(p => p.name === name)
}


@@ -271,6 +271,10 @@ async function discoverFromDb(dbPath: string): Promise<SessionSource[]> {
path: `${dbPath}:${row.id}`,
project: row.directory ? sanitize(row.directory) : sanitize(row.title),
provider: 'opencode',
fingerprintPath: dbPath,
cacheStrategy: 'full-reparse',
progressLabel: `opencode:${row.id}`,
parserVersion: 'opencode:v1',
}))
} catch {
return []


@@ -2,6 +2,7 @@ import { readdir, stat } from 'fs/promises'
import { basename, join } from 'path'
import { homedir } from 'os'
import { type DiscoverySnapshotEntry, loadDiscoveryCache, saveDiscoveryCache } from '../discovery-cache.js'
import { readSessionFile } from '../fs-utils.js'
import { calculateCost } from '../models.js'
import { extractBashCommands } from '../bash-utils.js'
@@ -72,7 +73,31 @@ async function readFirstEntry(filePath: string): Promise<PiEntry | null> {
}
}
async function collectPiDiscoverySnapshot(sessionsDir: string): Promise<DiscoverySnapshotEntry[]> {
const snapshot: DiscoverySnapshotEntry[] = []
let projectDirs: string[]
try {
projectDirs = await readdir(sessionsDir)
} catch {
return snapshot
}
for (const dirName of projectDirs) {
const dirPath = join(sessionsDir, dirName)
const dirStat = await stat(dirPath).catch(() => null)
if (!dirStat?.isDirectory()) continue
snapshot.push({ path: dirPath, mtimeMs: dirStat.mtimeMs })
}
return snapshot
}
async function discoverSessionsInDir(sessionsDir: string, providerName: string): Promise<SessionSource[]> {
const snapshot = await collectPiDiscoverySnapshot(sessionsDir)
const cached = await loadDiscoveryCache(providerName, sessionsDir, snapshot)
if (cached) return cached
const sources: SessionSource[] = []
let projectDirs: string[]
@@ -104,10 +129,19 @@ async function discoverSessionsInDir(sessionsDir: string, providerName: string):
if (!first || first.type !== 'session') continue
const cwd = first.cwd ?? dirName
sources.push({ path: filePath, project: basename(cwd), provider: providerName })
sources.push({
path: filePath,
project: basename(cwd),
provider: providerName,
fingerprintPath: filePath,
cacheStrategy: 'append-jsonl',
progressLabel: basename(filePath),
parserVersion: `${providerName}:v1`,
})
}
}
await saveDiscoveryCache(providerName, sessionsDir, snapshot, sources)
return sources
}
@@ -154,7 +188,7 @@ function createParser(source: SessionSource, seenKeys: Set<string>): SessionPars
const model = msg.model ?? 'gpt-5'
const responseId = msg.responseId ?? ''
const dedupKey = `pi:${source.path}:${responseId || entry.id || entry.timestamp || String(lineIdx)}`
const dedupKey = `${source.provider}:${source.path}:${responseId || entry.id || entry.timestamp || String(lineIdx)}`
if (seenKeys.has(dedupKey)) continue
seenKeys.add(dedupKey)
@@ -255,4 +289,4 @@ export function createOmpProvider(sessionsDir?: string): Provider {
}
}
export const omp = createOmpProvider()
export const omp = createOmpProvider()
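The dedup-key hunk above is the core of the providerName propagation: the hard-coded `pi:` prefix becomes `source.provider`, so OMP entries key under their own name. A minimal sketch of that key scheme (the surrounding types are illustrative stand-ins, not the real SessionSource):

```typescript
// Illustrative stand-in for the SessionSource fields the key uses.
type Source = { provider: string; path: string }

// Mirrors the fallback chain from the diff: responseId, then entry id,
// then timestamp, then the line index as a last resort.
function dedupKey(
  source: Source,
  responseId: string,
  entryId: string | undefined,
  timestamp: string | undefined,
  lineIdx: number,
): string {
  return `${source.provider}:${source.path}:${responseId || entryId || timestamp || String(lineIdx)}`
}
```

With this form, identical files discovered under pi and omp produce distinct keys, and OMP cache entries no longer stamp `pi:` into their dedup keys.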


@@ -1,7 +1,13 @@
export type SourceCacheStrategy = 'full-reparse' | 'append-jsonl'
export type SessionSource = {
path: string
project: string
provider: string
fingerprintPath?: string
cacheStrategy?: SourceCacheStrategy
progressLabel?: string
parserVersion?: string
}
export type SessionParser = {

src/source-cache.ts Normal file

@@ -0,0 +1,426 @@
import { createHash, randomBytes } from 'crypto'
import { existsSync } from 'fs'
import { mkdir, open, readFile, rename, stat, unlink } from 'fs/promises'
import { homedir } from 'os'
import { dirname, join } from 'path'
import type { SessionSummary } from './types.js'
export const SOURCE_CACHE_VERSION = 1
function traceCacheRead(op: string, filePath: string, note?: string): void {
if (process.env['CODEBURN_FILE_TRACE'] !== '1') return
const suffix = note ? ` ${note}` : ''
process.stderr.write(`codeburn-trace source-cache ${op} ${filePath}${suffix}\n`)
}
const APPEND_TAIL_WINDOW_BYTES = 16 * 1024
type SourceCacheStrategy = 'full-reparse' | 'append-jsonl'
type SourceFingerprint = {
mtimeMs: number
sizeBytes: number
}
type AppendState = {
endOffset: number
tailHash: string
lastEntryType?: string
}
export type SourceCacheEntry = {
version: number
provider: string
logicalPath: string
fingerprintPath: string
cacheStrategy: SourceCacheStrategy
parserVersion: string
fingerprint: SourceFingerprint
sessions: SessionSummary[]
appendState?: AppendState
}
export type SourceCacheManifest = {
version: number
entries: Record<string, SourceCacheManifestEntry>
}
export type SourceCacheManifestEntry = {
file: string
provider: string
logicalPath: string
lastSeenParserVersion?: string
cacheStrategy?: SourceCacheStrategy
fingerprintPath?: string
fingerprint?: SourceFingerprint
firstTimestamp?: string
lastTimestamp?: string
appendState?: AppendState
}
type ReadSourceCacheEntryOptions = {
allowStaleFingerprint?: boolean
}
type SourceRange = {
firstTimestamp?: string
lastTimestamp?: string
}
function sourceCacheKey(provider: string, logicalPath: string): string {
return `${provider}:${logicalPath}`
}
export function getManifestEntry(manifest: SourceCacheManifest, provider: string, logicalPath: string): SourceCacheManifestEntry | null {
return manifest.entries[sourceCacheKey(provider, logicalPath)] ?? null
}
function isPlainObject(value: unknown): value is Record<string, unknown> {
return !!value && typeof value === 'object' && !Array.isArray(value)
}
function isFiniteNumber(value: unknown): value is number {
return typeof value === 'number' && Number.isFinite(value)
}
function isManifestEntry(value: unknown): value is SourceCacheManifest['entries'][string] {
const isAppendStateValue = (entry: unknown): entry is AppendState =>
isPlainObject(entry)
&& typeof entry.endOffset === 'number'
&& Number.isFinite(entry.endOffset)
&& typeof entry.tailHash === 'string'
&& (entry.lastEntryType === undefined || typeof entry.lastEntryType === 'string')
const isFingerprint = (entry: unknown): entry is SourceFingerprint => isPlainObject(entry)
&& typeof entry.mtimeMs === 'number'
&& Number.isFinite(entry.mtimeMs)
&& typeof entry.sizeBytes === 'number'
&& Number.isFinite(entry.sizeBytes)
return isPlainObject(value)
&& typeof value.file === 'string'
&& /^[a-f0-9]{40}\.json$/.test(value.file)
&& typeof value.provider === 'string'
&& typeof value.logicalPath === 'string'
&& (value.lastSeenParserVersion === undefined || typeof value.lastSeenParserVersion === 'string')
&& (value.cacheStrategy === undefined || value.cacheStrategy === 'full-reparse' || value.cacheStrategy === 'append-jsonl')
&& (value.fingerprintPath === undefined || typeof value.fingerprintPath === 'string')
&& (value.fingerprint === undefined || isFingerprint(value.fingerprint))
&& (value.firstTimestamp === undefined || typeof value.firstTimestamp === 'string')
&& (value.lastTimestamp === undefined || typeof value.lastTimestamp === 'string')
&& (value.appendState === undefined || isAppendStateValue(value.appendState))
}
function isSessionSummary(value: unknown): value is SessionSummary {
return isPlainObject(value)
&& typeof value.sessionId === 'string'
&& typeof value.project === 'string'
&& typeof value.firstTimestamp === 'string'
&& typeof value.lastTimestamp === 'string'
&& isFiniteNumber(value.totalCostUSD)
&& isFiniteNumber(value.totalInputTokens)
&& isFiniteNumber(value.totalOutputTokens)
&& isFiniteNumber(value.totalCacheReadTokens)
&& isFiniteNumber(value.totalCacheWriteTokens)
&& isFiniteNumber(value.apiCalls)
&& Array.isArray(value.turns)
&& value.turns.every(isParsedTurn)
&& isBreakdownMap(value.modelBreakdown, isModelBreakdownEntry)
&& isBreakdownMap(value.toolBreakdown, isCallsBreakdownEntry)
&& isBreakdownMap(value.mcpBreakdown, isCallsBreakdownEntry)
&& isBreakdownMap(value.bashBreakdown, isCallsBreakdownEntry)
&& isBreakdownMap(value.categoryBreakdown, isCategoryBreakdownEntry)
}
function isTokenUsage(value: unknown): value is { inputTokens: number; outputTokens: number; cacheCreationInputTokens: number; cacheReadInputTokens: number; cachedInputTokens: number; reasoningTokens: number; webSearchRequests: number } {
return isPlainObject(value)
&& isFiniteNumber(value.inputTokens)
&& isFiniteNumber(value.outputTokens)
&& isFiniteNumber(value.cacheCreationInputTokens)
&& isFiniteNumber(value.cacheReadInputTokens)
&& isFiniteNumber(value.cachedInputTokens)
&& isFiniteNumber(value.reasoningTokens)
&& isFiniteNumber(value.webSearchRequests)
}
function isParsedApiCall(value: unknown): boolean {
return isPlainObject(value)
&& typeof value.provider === 'string'
&& typeof value.model === 'string'
&& isTokenUsage(value.usage)
&& isFiniteNumber(value.costUSD)
&& Array.isArray(value.tools)
&& value.tools.every(tool => typeof tool === 'string')
&& Array.isArray(value.mcpTools)
&& value.mcpTools.every(tool => typeof tool === 'string')
&& typeof value.hasAgentSpawn === 'boolean'
&& typeof value.hasPlanMode === 'boolean'
&& (value.speed === 'standard' || value.speed === 'fast')
&& typeof value.timestamp === 'string'
&& Array.isArray(value.bashCommands)
&& value.bashCommands.every(command => typeof command === 'string')
&& typeof value.deduplicationKey === 'string'
}
function isParsedTurn(value: unknown): boolean {
return isPlainObject(value)
&& typeof value.userMessage === 'string'
&& Array.isArray(value.assistantCalls)
&& value.assistantCalls.every(isParsedApiCall)
&& typeof value.timestamp === 'string'
&& typeof value.sessionId === 'string'
}
function isModelBreakdownEntry(value: unknown): value is { calls: number; costUSD: number; tokens: TokenUsage } {
return isPlainObject(value)
&& isFiniteNumber(value.calls)
&& isFiniteNumber(value.costUSD)
&& isTokenUsage(value.tokens)
}
type TokenUsage = { inputTokens: number; outputTokens: number; cacheCreationInputTokens: number; cacheReadInputTokens: number; cachedInputTokens: number; reasoningTokens: number; webSearchRequests: number }
function isCallsBreakdownEntry(value: unknown): value is { calls: number } {
return isPlainObject(value) && isFiniteNumber(value.calls)
}
function isCategoryBreakdownEntry(value: unknown): value is { turns: number; costUSD: number; retries: number; editTurns: number; oneShotTurns: number } {
return isPlainObject(value)
&& isFiniteNumber(value.turns)
&& isFiniteNumber(value.costUSD)
&& isFiniteNumber(value.retries)
&& isFiniteNumber(value.editTurns)
&& isFiniteNumber(value.oneShotTurns)
}
function isBreakdownMap<T>(value: unknown, predicate: (entry: unknown) => entry is T): value is Record<string, T> {
return isPlainObject(value) && Object.values(value).every(predicate)
}
function isAppendState(value: unknown): value is AppendState {
return isPlainObject(value)
&& typeof value.endOffset === 'number'
&& Number.isFinite(value.endOffset)
&& typeof value.tailHash === 'string'
&& (value.lastEntryType === undefined || typeof value.lastEntryType === 'string')
}
function rangeFromSessions(sessions: SessionSummary[]): SourceRange {
if (sessions.length === 0) return {}
let firstTs = sessions[0]?.firstTimestamp
let lastTs = sessions[sessions.length - 1]?.lastTimestamp
for (const session of sessions) {
if (!firstTs || session.firstTimestamp < firstTs) firstTs = session.firstTimestamp
if (!lastTs || session.lastTimestamp > lastTs) lastTs = session.lastTimestamp
}
return {
firstTimestamp: firstTs,
lastTimestamp: lastTs,
}
}
async function readTailStateHash(filePath: string, endOffset: number): Promise<string | null> {
if (endOffset <= 0) return null
const start = Math.max(0, endOffset - APPEND_TAIL_WINDOW_BYTES)
const length = Math.max(0, endOffset - start)
if (length <= 0) return null
const handle = await open(filePath, 'r')
const buffer = Buffer.alloc(length)
try {
await handle.read(buffer, 0, length, start)
} finally {
await handle.close()
}
const chunk = buffer.toString('utf-8').replace(/[\r\n]+$/, '')
if (chunk.length === 0) return null
const lastNewline = chunk.lastIndexOf('\n')
const lastLine = lastNewline >= 0 ? chunk.slice(lastNewline + 1) : chunk
return lastLine.trim() ? createHash('sha1').update(lastLine).digest('hex') : null
}
function isDateRangeOverlap(
firstTimestamp: string | undefined,
lastTimestamp: string | undefined,
rangeStart: number,
rangeEnd: number,
): boolean | null {
if (!firstTimestamp || !lastTimestamp) return null
const firstMs = new Date(firstTimestamp).getTime()
const lastMs = new Date(lastTimestamp).getTime()
if (Number.isNaN(firstMs) || Number.isNaN(lastMs)) return null
return lastMs >= rangeStart && firstMs <= rangeEnd
}
function isSourceCacheEntry(value: unknown): value is SourceCacheEntry {
return isPlainObject(value)
&& typeof value.version === 'number'
&& typeof value.provider === 'string'
&& typeof value.logicalPath === 'string'
&& typeof value.fingerprintPath === 'string'
&& (value.cacheStrategy === 'full-reparse' || value.cacheStrategy === 'append-jsonl')
&& typeof value.parserVersion === 'string'
&& isPlainObject(value.fingerprint)
&& typeof value.fingerprint.mtimeMs === 'number'
&& Number.isFinite(value.fingerprint.mtimeMs)
&& typeof value.fingerprint.sizeBytes === 'number'
&& Number.isFinite(value.fingerprint.sizeBytes)
&& Array.isArray(value.sessions)
&& value.sessions.every(isSessionSummary)
&& (value.appendState === undefined || isAppendState(value.appendState))
}
function cacheRoot(): string {
const base = process.env['CODEBURN_CACHE_DIR'] ?? join(homedir(), '.cache', 'codeburn')
return join(base, 'source-cache-v1')
}
function manifestPath(): string {
return join(cacheRoot(), 'manifest.json')
}
function entryDir(): string {
return join(cacheRoot(), 'entries')
}
function entryFilename(provider: string, logicalPath: string): string {
return `${createHash('sha1').update(sourceCacheKey(provider, logicalPath)).digest('hex')}.json`
}
export function emptySourceCacheManifest(): SourceCacheManifest {
return { version: SOURCE_CACHE_VERSION, entries: {} }
}
export async function computeFileFingerprint(filePath: string): Promise<SourceFingerprint> {
const meta = await stat(filePath)
return { mtimeMs: meta.mtimeMs, sizeBytes: meta.size }
}
export async function loadSourceCacheManifest(): Promise<SourceCacheManifest> {
traceCacheRead('manifest:read', manifestPath())
if (!existsSync(manifestPath())) return emptySourceCacheManifest()
try {
const raw = await readFile(manifestPath(), 'utf-8')
const parsed: unknown = JSON.parse(raw)
if (!isPlainObject(parsed) || parsed.version !== SOURCE_CACHE_VERSION || !isPlainObject(parsed.entries)) {
return emptySourceCacheManifest()
}
const entries: SourceCacheManifest['entries'] = {}
for (const [key, value] of Object.entries(parsed.entries)) {
if (!isManifestEntry(value)) return emptySourceCacheManifest()
entries[key] = value
}
return { version: SOURCE_CACHE_VERSION, entries }
} catch {
return emptySourceCacheManifest()
}
}
async function atomicWriteJson(path: string, value: unknown): Promise<void> {
await mkdir(dirname(path), { recursive: true })
const temp = `${path}.${randomBytes(8).toString('hex')}.tmp`
const handle = await open(temp, 'w', 0o600)
try {
await handle.writeFile(JSON.stringify(value), { encoding: 'utf-8' })
await handle.sync()
} finally {
await handle.close()
}
try {
await rename(temp, path)
} catch (err) {
try {
await unlink(temp)
} catch {}
throw err
}
}
export async function saveSourceCacheManifest(manifest: SourceCacheManifest): Promise<void> {
await mkdir(cacheRoot(), { recursive: true })
await atomicWriteJson(manifestPath(), manifest)
}
export async function readSourceCacheEntry(
manifest: SourceCacheManifest,
provider: string,
logicalPath: string,
options: ReadSourceCacheEntryOptions = {},
): Promise<SourceCacheEntry | null> {
const meta = manifest.entries[sourceCacheKey(provider, logicalPath)]
if (!meta) return null
if (meta.provider !== provider || meta.logicalPath !== logicalPath) return null
const expectedFile = entryFilename(provider, logicalPath)
if (meta.file !== expectedFile) return null
try {
const raw = await readFile(join(entryDir(), meta.file), 'utf-8')
traceCacheRead('entry:read', join(entryDir(), meta.file), `provider=${provider} logicalPath=${logicalPath}`)
const entry: unknown = JSON.parse(raw)
if (!isSourceCacheEntry(entry) || entry.version !== SOURCE_CACHE_VERSION) return null
if (entry.provider !== provider || entry.logicalPath !== logicalPath) return null
if (!options.allowStaleFingerprint) {
const currentFingerprint = await computeFileFingerprint(entry.fingerprintPath)
if (
currentFingerprint.mtimeMs !== entry.fingerprint.mtimeMs
|| currentFingerprint.sizeBytes !== entry.fingerprint.sizeBytes
) {
const sizeMatches = currentFingerprint.sizeBytes === entry.fingerprint.sizeBytes
if (!(
entry.cacheStrategy === 'append-jsonl'
&& entry.appendState
&& sizeMatches
)) {
return null
}
const liveTailHash = await readTailStateHash(entry.fingerprintPath, entry.appendState.endOffset)
if (liveTailHash !== entry.appendState.tailHash) return null
}
}
return entry
} catch {
return null
}
}
export async function writeSourceCacheEntry(manifest: SourceCacheManifest, entry: SourceCacheEntry): Promise<void> {
await mkdir(entryDir(), { recursive: true })
const file = entryFilename(entry.provider, entry.logicalPath)
await atomicWriteJson(join(entryDir(), file), entry)
const range = rangeFromSessions(entry.sessions)
manifest.entries[sourceCacheKey(entry.provider, entry.logicalPath)] = {
file,
provider: entry.provider,
logicalPath: entry.logicalPath,
lastSeenParserVersion: entry.parserVersion,
cacheStrategy: entry.cacheStrategy,
fingerprintPath: entry.fingerprintPath,
fingerprint: entry.fingerprint,
...range,
appendState: entry.appendState,
}
}
export function isManifestDateRangeOverlap(
manifestEntry: SourceCacheManifestEntry | null,
dateRange?: { start: Date; end: Date },
): boolean | null {
if (!manifestEntry || !dateRange) return null
return isDateRangeOverlap(manifestEntry.firstTimestamp, manifestEntry.lastTimestamp, dateRange.start.getTime(), dateRange.end.getTime())
}
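The append-jsonl revalidation above (readTailStateHash plus the size-match check in readSourceCacheEntry) can be illustrated with an in-memory sketch. Note this is a simplification: the real function reads byte ranges through a FileHandle, whereas this version assumes string offsets:

```typescript
import { createHash } from 'node:crypto'

// Window size mirrors APPEND_TAIL_WINDOW_BYTES in the file above.
const TAIL_WINDOW = 16 * 1024

// Hash the last non-empty line ending at endOffset, so a cached
// append-only source can be revalidated without a full re-read.
function tailHash(content: string, endOffset: number): string | null {
  if (endOffset <= 0) return null
  const start = Math.max(0, endOffset - TAIL_WINDOW)
  const chunk = content.slice(start, endOffset).replace(/[\r\n]+$/, '')
  if (chunk.length === 0) return null
  const lastNewline = chunk.lastIndexOf('\n')
  const lastLine = lastNewline >= 0 ? chunk.slice(lastNewline + 1) : chunk
  return lastLine.trim() ? createHash('sha1').update(lastLine).digest('hex') : null
}
```

Appending new lines leaves the hash at the old offset unchanged, so the cache entry stays valid and only the tail needs reparsing; rewriting earlier content changes the hash and forces a full reparse.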

tests/cli-plan.test.ts Normal file

@@ -0,0 +1,55 @@
import { mkdtemp, readFile, rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { spawnSync } from 'node:child_process'
import { describe, it, expect } from 'vitest'
function runCli(args: string[], home: string) {
return spawnSync(process.execPath, ['--import', 'tsx', 'src/cli.ts', ...args], {
cwd: process.cwd(),
env: {
...process.env,
HOME: home,
},
encoding: 'utf-8',
})
}
describe('codeburn plan command', () => {
it('persists plan set and clears on reset', async () => {
const home = await mkdtemp(join(tmpdir(), 'codeburn-cli-plan-'))
try {
const setResult = runCli(['plan', 'set', 'claude-max'], home)
expect(setResult.status).toBe(0)
const configPath = join(home, '.config', 'codeburn', 'config.json')
const configRaw = await readFile(configPath, 'utf-8')
const config = JSON.parse(configRaw) as { plan?: { id?: string; monthlyUsd?: number } }
expect(config.plan?.id).toBe('claude-max')
expect(config.plan?.monthlyUsd).toBe(200)
const resetResult = runCli(['plan', 'reset'], home)
expect(resetResult.status).toBe(0)
const afterResetRaw = await readFile(configPath, 'utf-8')
const afterReset = JSON.parse(afterResetRaw) as { plan?: unknown }
expect(afterReset.plan).toBeUndefined()
} finally {
await rm(home, { recursive: true, force: true })
}
})
it('shows invalid reset-day value in error output', async () => {
const home = await mkdtemp(join(tmpdir(), 'codeburn-cli-plan-'))
try {
const result = runCli(['plan', 'set', 'claude-max', '--reset-day', '99'], home)
expect(result.status).toBe(1)
expect(result.stderr).toContain('--reset-day must be an integer from 1 to 28; got 99.')
} finally {
await rm(home, { recursive: true, force: true })
}
})
})

tests/compare-stats.test.ts Normal file

@@ -0,0 +1,568 @@
import { mkdtemp, mkdir, rm, writeFile } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { describe, it, expect, beforeEach, afterEach } from 'vitest'
import { aggregateModelStats, computeComparison, computeCategoryComparison, computeWorkingStyle, scanSelfCorrections, type ModelStats } from '../src/compare-stats.js'
import type { ProjectSummary, SessionSummary, ClassifiedTurn } from '../src/types.js'
function makeTurn(model: string, cost: number, opts: { hasEdits?: boolean; retries?: number; outputTokens?: number; inputTokens?: number; cacheRead?: number; cacheWrite?: number; timestamp?: string; category?: string; hasAgentSpawn?: boolean; hasPlanMode?: boolean; speed?: 'standard' | 'fast'; tools?: string[] } = {}): ClassifiedTurn {
const defaultTools = opts.tools ?? (opts.hasEdits ? ['Edit'] : ['Read'])
return {
timestamp: opts.timestamp ?? '2026-04-15T10:00:00Z',
category: (opts.category ?? 'coding') as ClassifiedTurn['category'],
retries: opts.retries ?? 0,
hasEdits: opts.hasEdits ?? false,
userMessage: '',
assistantCalls: [{
provider: 'claude',
model,
usage: {
inputTokens: opts.inputTokens ?? 100,
outputTokens: opts.outputTokens ?? 200,
cacheCreationInputTokens: opts.cacheWrite ?? 500,
cacheReadInputTokens: opts.cacheRead ?? 5000,
cachedInputTokens: 0,
reasoningTokens: 0,
webSearchRequests: 0,
},
costUSD: cost,
tools: defaultTools,
mcpTools: [],
hasAgentSpawn: opts.hasAgentSpawn ?? false,
hasPlanMode: opts.hasPlanMode ?? false,
speed: opts.speed ?? 'standard' as const,
timestamp: opts.timestamp ?? '2026-04-15T10:00:00Z',
bashCommands: [],
deduplicationKey: `key-${Math.random()}`,
}],
}
}
function makeProject(turns: ClassifiedTurn[]): ProjectSummary {
const session: SessionSummary = {
sessionId: 'test-session',
project: 'test-project',
firstTimestamp: turns[0]?.timestamp ?? '',
lastTimestamp: turns[turns.length - 1]?.timestamp ?? '',
totalCostUSD: turns.reduce((s, t) => s + t.assistantCalls.reduce((s2, c) => s2 + c.costUSD, 0), 0),
totalInputTokens: 0,
totalOutputTokens: 0,
totalCacheReadTokens: 0,
totalCacheWriteTokens: 0,
apiCalls: turns.reduce((s, t) => s + t.assistantCalls.length, 0),
turns,
modelBreakdown: {},
toolBreakdown: {},
mcpBreakdown: {},
bashBreakdown: {},
categoryBreakdown: {} as SessionSummary['categoryBreakdown'],
}
return {
project: 'test-project',
projectPath: '/test',
sessions: [session],
totalCostUSD: session.totalCostUSD,
totalApiCalls: session.apiCalls,
}
}
describe('aggregateModelStats', () => {
it('aggregates calls, cost, and tokens per model', () => {
const project = makeProject([
makeTurn('opus-4-6', 0.10, { outputTokens: 200, inputTokens: 50, cacheRead: 5000, cacheWrite: 500 }),
makeTurn('opus-4-6', 0.15, { outputTokens: 300, inputTokens: 80, cacheRead: 6000, cacheWrite: 600 }),
makeTurn('opus-4-7', 0.25, { outputTokens: 800, inputTokens: 100, cacheRead: 7000, cacheWrite: 700 }),
])
const stats = aggregateModelStats([project])
const m6 = stats.find(s => s.model === 'opus-4-6')!
const m7 = stats.find(s => s.model === 'opus-4-7')!
expect(m6.calls).toBe(2)
expect(m6.cost).toBeCloseTo(0.25)
expect(m6.outputTokens).toBe(500)
expect(m7.calls).toBe(1)
expect(m7.cost).toBeCloseTo(0.25)
expect(m7.outputTokens).toBe(800)
})
it('attributes turn-level metrics to the primary model', () => {
const project = makeProject([
makeTurn('opus-4-6', 0.10, { hasEdits: true, retries: 0 }),
makeTurn('opus-4-6', 0.10, { hasEdits: true, retries: 2 }),
makeTurn('opus-4-7', 0.20, { hasEdits: true, retries: 0 }),
makeTurn('opus-4-7', 0.20, { hasEdits: false }),
])
const stats = aggregateModelStats([project])
const m6 = stats.find(s => s.model === 'opus-4-6')!
const m7 = stats.find(s => s.model === 'opus-4-7')!
expect(m6.editTurns).toBe(2)
expect(m6.oneShotTurns).toBe(1)
expect(m6.retries).toBe(2)
expect(m7.editTurns).toBe(1)
expect(m7.oneShotTurns).toBe(1)
expect(m7.totalTurns).toBe(2)
})
it('tracks firstSeen and lastSeen timestamps', () => {
const project = makeProject([
makeTurn('opus-4-6', 0.10, { timestamp: '2026-04-10T08:00:00Z' }),
makeTurn('opus-4-6', 0.10, { timestamp: '2026-04-15T20:00:00Z' }),
])
const stats = aggregateModelStats([project])
const m = stats.find(s => s.model === 'opus-4-6')!
expect(m.firstSeen).toBe('2026-04-10T08:00:00Z')
expect(m.lastSeen).toBe('2026-04-15T20:00:00Z')
})
it('filters out <synthetic> model entries', () => {
const project = makeProject([
makeTurn('<synthetic>', 0, {}),
makeTurn('opus-4-6', 0.10, {}),
])
const stats = aggregateModelStats([project])
expect(stats.find(s => s.model === '<synthetic>')).toBeUndefined()
expect(stats).toHaveLength(1)
})
it('returns empty array for no projects', () => {
expect(aggregateModelStats([])).toEqual([])
})
it('tracks editCost for edit turns', () => {
const project = makeProject([
makeTurn('opus-4-6', 0.10, { hasEdits: true }),
makeTurn('opus-4-6', 0.20, { hasEdits: true }),
makeTurn('opus-4-6', 0.50, { hasEdits: false }),
])
const stats = aggregateModelStats([project])
const m = stats.find(s => s.model === 'opus-4-6')!
expect(m.editCost).toBeCloseTo(0.30)
})
it('sorts by cost descending', () => {
const project = makeProject([
makeTurn('cheap-model', 0.01),
makeTurn('expensive-model', 5.00),
])
const stats = aggregateModelStats([project])
expect(stats[0].model).toBe('expensive-model')
expect(stats[1].model).toBe('cheap-model')
})
})
function makeStats(overrides: Partial<ModelStats> = {}): ModelStats {
return {
model: 'test-model',
calls: 100,
cost: 10,
outputTokens: 50000,
inputTokens: 10000,
cacheReadTokens: 20000,
cacheWriteTokens: 5000,
totalTurns: 200,
editTurns: 80,
oneShotTurns: 60,
retries: 20,
selfCorrections: 10,
editCost: 8,
firstSeen: '2026-04-01T00:00:00Z',
lastSeen: '2026-04-15T00:00:00Z',
...overrides,
}
}
describe('computeComparison', () => {
it('computes normalized metrics and picks winners correctly', () => {
const a = makeStats({ calls: 100, cost: 10, outputTokens: 50000, inputTokens: 10000, cacheReadTokens: 20000, cacheWriteTokens: 5000, editTurns: 80, oneShotTurns: 60, retries: 20, selfCorrections: 10, totalTurns: 200 })
const b = makeStats({ calls: 100, cost: 8, outputTokens: 40000, inputTokens: 10000, cacheReadTokens: 20000, cacheWriteTokens: 5000, editTurns: 80, oneShotTurns: 60, retries: 20, selfCorrections: 10, totalTurns: 200 })
const rows = computeComparison(a, b)
const costRow = rows.find(r => r.label === 'Cost / call')!
expect(costRow.valueA).toBeCloseTo(0.1)
expect(costRow.valueB).toBeCloseTo(0.08)
expect(costRow.winner).toBe('b')
const outputRow = rows.find(r => r.label === 'Output tok / call')!
expect(outputRow.valueA).toBe(500)
expect(outputRow.valueB).toBe(400)
expect(outputRow.winner).toBe('b')
})
it('returns null values for one-shot rate and retry rate when editTurns is zero', () => {
const a = makeStats({ editTurns: 0, oneShotTurns: 0, retries: 0 })
const b = makeStats({ editTurns: 80, oneShotTurns: 60, retries: 20 })
const rows = computeComparison(a, b)
const oneShotRow = rows.find(r => r.label === 'One-shot rate')!
expect(oneShotRow.valueA).toBeNull()
expect(oneShotRow.winner).toBe('none')
const retryRow = rows.find(r => r.label === 'Retry rate')!
expect(retryRow.valueA).toBeNull()
expect(retryRow.winner).toBe('none')
})
it('returns tie when values are equal', () => {
const a = makeStats({ calls: 100, cost: 10 })
const b = makeStats({ calls: 100, cost: 10 })
const rows = computeComparison(a, b)
const costRow = rows.find(r => r.label === 'Cost / call')!
expect(costRow.winner).toBe('tie')
})
it('computes cost per edit correctly', () => {
const a = makeStats({ editTurns: 40, editCost: 4 })
const b = makeStats({ editTurns: 80, editCost: 4 })
const rows = computeComparison(a, b)
const editRow = rows.find(r => r.label === 'Cost / edit')!
expect(editRow.valueA).toBeCloseTo(0.10)
expect(editRow.valueB).toBeCloseTo(0.05)
expect(editRow.winner).toBe('b')
})
it('picks higher value as winner for cache hit rate', () => {
const a = makeStats({ inputTokens: 5000, cacheReadTokens: 30000, cacheWriteTokens: 5000 })
const b = makeStats({ inputTokens: 10000, cacheReadTokens: 10000, cacheWriteTokens: 5000 })
const rows = computeComparison(a, b)
const cacheRow = rows.find(r => r.label === 'Cache hit rate')!
const totalA = 5000 + 30000 + 5000
const totalB = 10000 + 10000 + 5000
expect(cacheRow.valueA).toBeCloseTo(30000 / totalA * 100)
expect(cacheRow.valueB).toBeCloseTo(10000 / totalB * 100)
expect(cacheRow.winner).toBe('a')
})
})
function jsonlLine(type: string, model: string, text: string, timestamp = '2026-04-15T10:00:00Z'): string {
if (type === 'assistant') {
return JSON.stringify({
type: 'assistant', timestamp,
message: { model, content: [{ type: 'text', text }], id: `msg-${Math.random()}`, usage: { input_tokens: 0, output_tokens: 0 } },
})
}
return JSON.stringify({ type: 'user', timestamp, message: { role: 'user', content: text } })
}
describe('scanSelfCorrections', () => {
let tmpDir: string
beforeEach(async () => {
tmpDir = await mkdtemp(join(tmpdir(), 'codeburn-test-'))
})
afterEach(async () => {
await rm(tmpDir, { recursive: true, force: true })
})
it('counts apology patterns per model', async () => {
const sessionDir = join(tmpDir, 'session-abc')
await mkdir(sessionDir)
const lines = [
jsonlLine('assistant', 'opus-4-6', 'I apologize for the confusion.'),
jsonlLine('assistant', 'opus-4-6', 'Here is the result.'),
jsonlLine('assistant', 'sonnet-4-6', 'I was wrong about that.'),
jsonlLine('user', '', 'Do this'),
]
await writeFile(join(sessionDir, 'session.jsonl'), lines.join('\n') + '\n')
const result = await scanSelfCorrections([tmpDir])
expect(result.get('opus-4-6')).toBe(1)
expect(result.get('sonnet-4-6')).toBe(1)
})
it('does not count non-apology text', async () => {
const sessionDir = join(tmpDir, 'session-xyz')
await mkdir(sessionDir)
const lines = [
jsonlLine('assistant', 'opus-4-6', 'Here is the updated code.'),
jsonlLine('assistant', 'opus-4-6', 'Let me fix that for you.'),
]
await writeFile(join(sessionDir, 'session.jsonl'), lines.join('\n') + '\n')
const result = await scanSelfCorrections([tmpDir])
expect(result.get('opus-4-6')).toBeUndefined()
expect(result.size).toBe(0)
})
it('returns empty map for missing directory', async () => {
const result = await scanSelfCorrections([join(tmpDir, 'nonexistent')])
expect(result.size).toBe(0)
})
it('returns empty map for empty directory', async () => {
const result = await scanSelfCorrections([tmpDir])
expect(result.size).toBe(0)
})
it('scans subagent directories', async () => {
const sessionDir = join(tmpDir, 'session-sub')
const subagentsDir = join(sessionDir, 'subagents')
await mkdir(subagentsDir, { recursive: true })
const lines = [
jsonlLine('assistant', 'haiku-4-6', 'My mistake, let me redo that.'),
]
await writeFile(join(subagentsDir, 'sub.jsonl'), lines.join('\n') + '\n')
const result = await scanSelfCorrections([tmpDir])
expect(result.get('haiku-4-6')).toBe(1)
})
it('skips <synthetic> models', async () => {
const sessionDir = join(tmpDir, 'session-synth')
await mkdir(sessionDir)
const lines = [
jsonlLine('assistant', '<synthetic>', 'I apologize for the error.'),
]
await writeFile(join(sessionDir, 'session.jsonl'), lines.join('\n') + '\n')
const result = await scanSelfCorrections([tmpDir])
expect(result.get('<synthetic>')).toBeUndefined()
expect(result.size).toBe(0)
})
it('accumulates counts across multiple sessions and directories', async () => {
const sessionA = join(tmpDir, 'session-a')
const sessionB = join(tmpDir, 'session-b')
await mkdir(sessionA)
await mkdir(sessionB)
await writeFile(join(sessionA, 'a.jsonl'), [
jsonlLine('assistant', 'opus-4-6', 'I was wrong.', '2026-04-15T10:00:00Z'),
jsonlLine('assistant', 'opus-4-6', 'My bad!', '2026-04-15T10:01:00Z'),
].join('\n') + '\n')
await writeFile(join(sessionB, 'b.jsonl'), [
jsonlLine('assistant', 'opus-4-6', 'I apologize.', '2026-04-15T10:02:00Z'),
].join('\n') + '\n')
const result = await scanSelfCorrections([tmpDir])
expect(result.get('opus-4-6')).toBe(3)
})
it('handles malformed JSON lines gracefully', async () => {
const sessionDir = join(tmpDir, 'session-bad')
await mkdir(sessionDir)
await writeFile(join(sessionDir, 'bad.jsonl'), [
'not valid json',
jsonlLine('assistant', 'opus-4-6', 'I apologize.'),
].join('\n') + '\n')
const result = await scanSelfCorrections([tmpDir])
expect(result.get('opus-4-6')).toBe(1)
})
it('accepts multiple sessionDirs and merges counts', async () => {
const dir2 = await mkdtemp(join(tmpdir(), 'codeburn-test2-'))
try {
const sessionA = join(tmpDir, 'session-a')
const sessionB = join(dir2, 'session-b')
await mkdir(sessionA)
await mkdir(sessionB)
await writeFile(join(sessionA, 'a.jsonl'), [
jsonlLine('assistant', 'sonnet-4-6', 'My mistake.', '2026-04-15T10:00:00Z'),
].join('\n') + '\n')
await writeFile(join(sessionB, 'b.jsonl'), [
jsonlLine('assistant', 'sonnet-4-6', 'I was wrong.', '2026-04-15T10:01:00Z'),
].join('\n') + '\n')
const result = await scanSelfCorrections([tmpDir, dir2])
expect(result.get('sonnet-4-6')).toBe(2)
} finally {
await rm(dir2, { recursive: true, force: true })
}
})
})
describe('computeCategoryComparison', () => {
it('returns per-category one-shot rates for both models', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { hasEdits: true, retries: 0, category: 'coding' }),
makeTurn('model-a', 0.10, { hasEdits: true, retries: 1, category: 'coding' }),
makeTurn('model-b', 0.10, { hasEdits: true, retries: 0, category: 'coding' }),
makeTurn('model-b', 0.10, { hasEdits: true, retries: 0, category: 'coding' }),
makeTurn('model-a', 0.10, { hasEdits: true, retries: 0, category: 'debugging' }),
makeTurn('model-b', 0.10, { hasEdits: true, retries: 1, category: 'debugging' }),
])
const result = computeCategoryComparison([project], 'model-a', 'model-b')
const coding = result.find(r => r.category === 'coding')!
expect(coding.editTurnsA).toBe(2)
expect(coding.oneShotRateA).toBeCloseTo(50)
expect(coding.editTurnsB).toBe(2)
expect(coding.oneShotRateB).toBeCloseTo(100)
expect(coding.winner).toBe('b')
const debugging = result.find(r => r.category === 'debugging')!
expect(debugging.oneShotRateA).toBeCloseTo(100)
expect(debugging.oneShotRateB).toBeCloseTo(0)
expect(debugging.winner).toBe('a')
})
it('skips categories with no edit turns', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { hasEdits: false, category: 'conversation' }),
makeTurn('model-b', 0.10, { hasEdits: false, category: 'conversation' }),
makeTurn('model-a', 0.10, { hasEdits: true, category: 'coding' }),
])
const result = computeCategoryComparison([project], 'model-a', 'model-b')
expect(result.find(r => r.category === 'conversation')).toBeUndefined()
expect(result).toHaveLength(1)
})
it('sorts by total turns descending', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { hasEdits: true, category: 'coding' }),
makeTurn('model-a', 0.10, { hasEdits: true, category: 'coding' }),
makeTurn('model-a', 0.10, { hasEdits: true, category: 'coding' }),
makeTurn('model-b', 0.10, { hasEdits: true, category: 'coding' }),
makeTurn('model-a', 0.10, { hasEdits: true, category: 'debugging' }),
])
const result = computeCategoryComparison([project], 'model-a', 'model-b')
expect(result[0].category).toBe('coding')
})
it('returns null one-shot rate when model has no edits in category', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { hasEdits: true, category: 'coding' }),
makeTurn('model-b', 0.10, { hasEdits: false, category: 'coding' }),
])
const result = computeCategoryComparison([project], 'model-a', 'model-b')
const coding = result.find(r => r.category === 'coding')!
expect(coding.oneShotRateA).toBeCloseTo(100)
expect(coding.oneShotRateB).toBeNull()
expect(coding.winner).toBe('none')
})
})
describe('computeWorkingStyle', () => {
it('computes delegation and planning rates', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { hasAgentSpawn: true }),
makeTurn('model-a', 0.10, {}),
makeTurn('model-a', 0.10, { hasPlanMode: true }),
makeTurn('model-b', 0.10, {}),
makeTurn('model-b', 0.10, {}),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const delegation = result.find(r => r.label === 'Delegation rate')!
expect(delegation.valueA).toBeCloseTo(100 / 3)
expect(delegation.valueB).toBeCloseTo(0)
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(100 / 3)
expect(planning.valueB).toBeCloseTo(0)
})
it('computes avg tools per turn', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { hasEdits: true }),
makeTurn('model-a', 0.10, {}),
makeTurn('model-b', 0.10, { hasEdits: true }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const tools = result.find(r => r.label === 'Avg tools / turn')!
expect(tools.valueA).toBeCloseTo(1)
expect(tools.valueB).toBeCloseTo(1)
})
it('computes fast mode usage', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { speed: 'fast' }),
makeTurn('model-a', 0.10, {}),
makeTurn('model-b', 0.10, { speed: 'fast' }),
makeTurn('model-b', 0.10, { speed: 'fast' }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const fast = result.find(r => r.label === 'Fast mode usage')!
expect(fast.valueA).toBeCloseTo(50)
expect(fast.valueB).toBeCloseTo(100)
})
it('returns null for models with no turns', () => {
const project = makeProject([
makeTurn('model-a', 0.10, {}),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const delegation = result.find(r => r.label === 'Delegation rate')!
expect(delegation.valueA).toBeCloseTo(0)
expect(delegation.valueB).toBeNull()
})
it('counts TaskCreate as planning', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { tools: ['TaskCreate'] }),
makeTurn('model-a', 0.10, { tools: ['Read'] }),
makeTurn('model-a', 0.10, { tools: ['Edit'] }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(100 / 3)
})
it('counts TaskUpdate as planning', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { tools: ['TaskUpdate'] }),
makeTurn('model-a', 0.10, { tools: ['Read'] }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(50)
})
it('counts TodoWrite as planning', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { tools: ['TodoWrite', 'Read'] }),
makeTurn('model-a', 0.10, { tools: ['Bash'] }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(50)
})
it('counts turn with planning tool + edits as planning', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { tools: ['TaskCreate', 'Edit', 'Read'] }),
makeTurn('model-a', 0.10, { tools: ['Edit'] }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(50)
})
it('does not count regular tools as planning', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { tools: ['Read', 'Grep', 'Glob'] }),
makeTurn('model-a', 0.10, { tools: ['Edit', 'Bash'] }),
makeTurn('model-a', 0.10, { tools: ['Agent'] }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(0)
})
it('counts planning once per turn even with multiple planning tools', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { tools: ['TaskCreate', 'TaskUpdate', 'TaskCreate'] }),
makeTurn('model-a', 0.10, { tools: ['Read'] }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(50)
})
it('hasPlanMode still triggers planning rate', () => {
const project = makeProject([
makeTurn('model-a', 0.10, { hasPlanMode: true, tools: ['Read'] }),
makeTurn('model-a', 0.10, { tools: ['Read'] }),
])
const result = computeWorkingStyle([project], 'model-a', 'model-b')
const planning = result.find(r => r.label === 'Planning rate')!
expect(planning.valueA).toBeCloseTo(50)
})
})


@@ -117,7 +117,7 @@ describe('addNewDays', () => {
expect(updated.lastComputedDate).toBe('2026-04-10')
})
-it('skips days already present in the cache (first write wins)', () => {
+it('replaces existing days with incoming data (last write wins)', () => {
const base: DailyCache = {
version: DAILY_CACHE_VERSION,
lastComputedDate: '2026-04-08',
@@ -125,7 +125,7 @@
}
const updated = addNewDays(base, [emptyDay('2026-04-08', 99)], '2026-04-08')
const aprilEight = updated.days.find(d => d.date === '2026-04-08')!
-expect(aprilEight.cost).toBe(5)
+expect(aprilEight.cost).toBe(99)
})
it('does not regress lastComputedDate if incoming newestDate is older', () => {


@@ -38,6 +38,11 @@ function makeCall(timestamp: string, costUSD: number, model = 'Opus 4.7', provid
}
}
function localDateKey(iso: string): string {
const d = new Date(iso)
return `${d.getFullYear()}-${String(d.getMonth() + 1).padStart(2, '0')}-${String(d.getDate()).padStart(2, '0')}`
}
describe('aggregateProjectsIntoDays', () => {
it('buckets api calls by calendar date derived from timestamp', () => {
const projects: ProjectSummary[] = [
@@ -130,12 +135,13 @@ describe('aggregateProjectsIntoDays', () => {
})
it('counts a session under its firstTimestamp date', () => {
const firstTimestamp = '2026-04-09T23:59:00Z'
const projects: ProjectSummary[] = [
makeProject({
sessions: [{
sessionId: 's1',
project: 'p',
-firstTimestamp: '2026-04-09T23:59:00Z',
+firstTimestamp,
lastTimestamp: '2026-04-10T00:10:00Z',
totalCostUSD: 1,
totalInputTokens: 0, totalOutputTokens: 0, totalCacheReadTokens: 0, totalCacheWriteTokens: 0,
@@ -147,7 +153,7 @@ describe('aggregateProjectsIntoDays', () => {
}),
]
const days = aggregateProjectsIntoDays(projects)
-expect(days[0]!.date).toBe('2026-04-09')
+expect(days[0]!.date).toBe(localDateKey(firstTimestamp))
expect(days[0]!.sessions).toBe(1)
})


@@ -1,5 +1,5 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import { mkdtemp, readFile, rm } from 'fs/promises'
+import { mkdtemp, readFile, readdir, rm } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
@@ -134,4 +134,26 @@ describe('exportCsv', () => {
expect(content).toContain("'+danger-model")
expect(content).toContain("'@malicious")
})
it('escapes tab and carriage-return prefixes in CSV cells', async () => {
const periods: PeriodExport[] = [
{
label: '30 Days',
projects: [makeProject('\tcmd'), makeProject('\rcmd')],
},
]
const outputPath = join(tmpDir, 'tab-cr.csv')
const folder = await exportCsv(periods, outputPath)
const projects = await readFile(join(folder, 'projects.csv'), 'utf-8')
expect(projects).toContain("'\tcmd")
expect(projects).toContain("'\rcmd")
})
it('does not crash when periods array is empty', async () => {
const outputPath = join(tmpDir, 'empty.csv')
const folder = await exportCsv([], outputPath)
const entries = await readdir(folder)
expect(entries.length).toBeGreaterThanOrEqual(0)
})
})


@@ -7,6 +7,8 @@ import {
MAX_SESSION_FILE_BYTES,
STREAM_THRESHOLD_BYTES,
readSessionFile,
readSessionLines,
readSessionLinesFromOffset,
} from '../src/fs-utils.js'
describe('readSessionFile', () => {
@@ -61,3 +63,67 @@ describe('readSessionFile', () => {
expect(await readSessionFile('/nonexistent/path/x.jsonl')).toBeNull()
})
})
describe('readSessionLines', () => {
const tmpDirs: string[] = []
afterEach(async () => {
while (tmpDirs.length > 0) {
const d = tmpDirs.pop()
if (d) await rm(d, { recursive: true, force: true })
}
})
async function tmpPath(content: string): Promise<string> {
const base = await mkdtemp(join(tmpdir(), 'codeburn-lines-'))
tmpDirs.push(base)
const p = join(base, 'session.jsonl')
await writeFile(p, content)
return p
}
it('yields all lines from a file', async () => {
const p = await tmpPath('line1\nline2\nline3\n')
const lines: string[] = []
for await (const line of readSessionLines(p)) lines.push(line)
expect(lines).toEqual(['line1', 'line2', 'line3'])
})
it('does not leak file descriptors when generator is abandoned early', async () => {
const content = Array.from({ length: 1000 }, (_, i) => `line-${i}`).join('\n')
const p = await tmpPath(content)
const gen = readSessionLines(p)
await gen.next()
await gen.return(undefined)
})
})
describe('readSessionLinesFromOffset', () => {
const tmpDirs: string[] = []
afterEach(async () => {
while (tmpDirs.length > 0) {
const d = tmpDirs.pop()
if (d) await rm(d, { recursive: true, force: true })
}
})
async function tmpPath(content: string): Promise<string> {
const base = await mkdtemp(join(tmpdir(), 'codeburn-fs-offset-'))
tmpDirs.push(base)
const p = join(base, 'offset.txt')
await writeFile(p, content, 'utf-8')
return p
}
it('starts at the requested byte offset', async () => {
const p = await tmpPath('alpha\nbeta\ngamma\n')
const lines: string[] = []
for await (const line of readSessionLinesFromOffset(p, Buffer.byteLength('alpha\n', 'utf-8'))) {
lines.push(line)
}
expect(lines).toEqual(['beta', 'gamma'])
})
})


@@ -1,10 +1,58 @@
-import { describe, it, expect, afterEach } from 'vitest'
-import { getModelCosts, getShortModelName, calculateCost, setModelAliases } from '../src/models.js'
+import { describe, it, expect, beforeAll, afterEach } from 'vitest'
+import { getModelCosts, getShortModelName, calculateCost, loadPricing, setModelAliases } from '../src/models.js'
+beforeAll(async () => {
+await loadPricing()
+})
-// Tests run without loadPricing — fallback pricing only.
// setModelAliases resets between tests to avoid cross-contamination.
afterEach(() => setModelAliases({}))
describe('getModelCosts', () => {
it('does not match short canonical against longer pricing key', () => {
const costs = getModelCosts('gpt-4')
if (costs) {
expect(costs.inputCostPerToken).not.toBe(2.5e-6)
}
})
it('returns correct pricing for gpt-4o vs gpt-4o-mini', () => {
const mini = getModelCosts('gpt-4o-mini')
const full = getModelCosts('gpt-4o')
expect(mini).not.toBeNull()
expect(full).not.toBeNull()
expect(mini!.inputCostPerToken).toBeLessThan(full!.inputCostPerToken)
})
it('returns fallback pricing for known Claude models', () => {
const costs = getModelCosts('claude-opus-4-6-20260205')
expect(costs).not.toBeNull()
expect(costs!.inputCostPerToken).toBe(5e-6)
})
})
describe('getShortModelName', () => {
it('maps gpt-4o-mini correctly (not gpt-4o)', () => {
expect(getShortModelName('gpt-4o-mini-2024-07-18')).toBe('GPT-4o Mini')
})
it('maps gpt-4o correctly', () => {
expect(getShortModelName('gpt-4o-2024-08-06')).toBe('GPT-4o')
})
it('maps gpt-4.1-mini correctly (not gpt-4.1)', () => {
expect(getShortModelName('gpt-4.1-mini-2025-04-14')).toBe('GPT-4.1 Mini')
})
it('maps gpt-5.4-mini correctly (not gpt-5.4)', () => {
expect(getShortModelName('gpt-5.4-mini')).toBe('GPT-5.4 Mini')
})
it('maps claude-opus-4-6 with date suffix', () => {
expect(getShortModelName('claude-opus-4-6-20260205')).toBe('Opus 4.6')
})
})
describe('builtin aliases - getModelCosts', () => {
it('resolves anthropic--claude-4.6-opus', () => {
expect(getModelCosts('anthropic--claude-4.6-opus')).not.toBeNull()
@@ -89,7 +137,6 @@ describe('user aliases via setModelAliases', () => {
})
it('user alias overrides builtin', () => {
-// Remap an OMP key to a different canonical target
setModelAliases({ 'anthropic--claude-4.6-opus': 'claude-sonnet-4-5' })
expect(getModelCosts('anthropic--claude-4.6-opus')).toEqual(getModelCosts('claude-sonnet-4-5'))
})
@@ -97,7 +144,6 @@
it('resetting aliases restores builtins', () => {
setModelAliases({ 'anthropic--claude-4.6-opus': 'claude-sonnet-4-5' })
setModelAliases({})
-// Back to builtin: should resolve as opus pricing, not sonnet
expect(getModelCosts('anthropic--claude-4.6-opus')).toEqual(getModelCosts('claude-opus-4-6'))
})
})


@@ -0,0 +1,77 @@
import { stripVTControlCharacters } from 'node:util'
import { describe, expect, it, vi } from 'vitest'
import { createTerminalProgressReporter } from '../src/parse-progress.js'
describe('createTerminalProgressReporter', () => {
it('renders a provider-aware cache bar with global counts', () => {
const writes: string[] = []
const stream = {
isTTY: true,
columns: 60,
write: vi.fn((chunk: string) => {
writes.push(chunk)
return true
}),
} as unknown as NodeJS.WriteStream
const reporter = createTerminalProgressReporter(true, stream)
reporter?.start(1899)
reporter?.advance('claude')
reporter?.advance('claude')
reporter?.finish('claude')
const text = stripVTControlCharacters(writes.join(''))
expect(text).toContain('Updating Claude cache')
expect(text).toContain('2/1899')
expect(text).toContain('[')
expect(text).not.toContain('.jsonl')
})
it('shrinks the bar on narrow terminals', () => {
const writes: string[] = []
const stream = {
isTTY: true,
columns: 34,
write: vi.fn((chunk: string) => {
writes.push(chunk)
return true
}),
} as unknown as NodeJS.WriteStream
const reporter = createTerminalProgressReporter(true, stream)
reporter?.start(100)
reporter?.advance('codex')
const text = stripVTControlCharacters(writes.join(''))
expect(text).toContain('Updating Codex cache')
expect(text).toContain('1/100')
expect(text).toMatch(/\[[█░]{8}\]/)
})
it('returns null for non-tty streams', () => {
const stream = { isTTY: false, write: vi.fn() } as unknown as NodeJS.WriteStream
expect(createTerminalProgressReporter(true, stream)).toBeNull()
})
it('uses stream color depth to configure output styling', () => {
const writes: string[] = []
const getColorDepth = vi.fn(() => 8)
const stream = {
isTTY: true,
columns: 80,
getColorDepth,
write: vi.fn((chunk: string) => {
writes.push(chunk)
return true
}),
} as unknown as NodeJS.WriteStream
const reporter = createTerminalProgressReporter(true, stream)
reporter?.start(2)
reporter?.advance('claude')
expect(getColorDepth).toHaveBeenCalledTimes(1)
expect(writes.join('')).toContain('Updating')
})
})

tests/parser-cache.test.ts (new file)

@@ -0,0 +1,475 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { appendFile, mkdir, mkdtemp, readFile, rm, writeFile } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import type { ParsedProviderCall, Provider, SessionSource } from '../src/providers/types.js'
let root = ''
let sourcePath = ''
let parseCalls = 0
let claudeRoot = ''
let claudeSessionPath = ''
function makeCall(index: number): ParsedProviderCall {
const second = String(index).padStart(2, '0')
return {
provider: 'fake',
model: 'gpt-5',
inputTokens: 10,
outputTokens: 20,
cacheCreationInputTokens: 0,
cacheReadInputTokens: 0,
cachedInputTokens: 0,
reasoningTokens: 0,
webSearchRequests: 0,
costUSD: 0.01,
tools: ['Edit'],
bashCommands: [],
timestamp: `2026-04-20T09:00:${second}.000Z`,
speed: 'standard',
deduplicationKey: `fake:${index}`,
userMessage: `prompt ${index}`,
sessionId: 'fake-session',
}
}
beforeEach(async () => {
root = await mkdtemp(join(tmpdir(), 'codeburn-parser-cache-'))
sourcePath = join(root, 'fake.jsonl')
claudeRoot = join(root, '.claude')
claudeSessionPath = join(claudeRoot, 'projects', 'demo-project', 'session.jsonl')
parseCalls = 0
process.env['CODEBURN_CACHE_DIR'] = join(root, 'cache')
process.env['CLAUDE_CONFIG_DIR'] = claudeRoot
await writeFile(sourcePath, 'one\n', 'utf-8')
await mkdir(join(claudeRoot, 'projects', 'demo-project'), { recursive: true })
await writeFile(claudeSessionPath, [
JSON.stringify({
type: 'user',
timestamp: '2026-04-20T09:00:00.000Z',
sessionId: 'sess-1',
message: { role: 'user', content: 'first' },
}),
JSON.stringify({
type: 'assistant',
timestamp: '2026-04-20T09:00:01.000Z',
message: {
id: 'msg-1',
model: 'claude-sonnet-4-6',
role: 'assistant',
type: 'message',
content: [],
usage: { input_tokens: 10, output_tokens: 20 },
},
}),
].join('\n') + '\n', 'utf-8')
})
afterEach(async () => {
delete process.env['CODEBURN_CACHE_DIR']
delete process.env['CLAUDE_CONFIG_DIR']
await rm(root, { recursive: true, force: true })
vi.resetModules()
vi.clearAllMocks()
})
describe('parseAllSessions source cache', () => {
it('uses one global progress lifecycle across provider refreshes', async () => {
const fakeSource = {
path: sourcePath,
fingerprintPath: sourcePath,
project: 'fake-project',
provider: 'fake',
cacheStrategy: 'full-reparse',
} as SessionSource
const claudeSource = {
path: join(claudeRoot, 'projects', 'demo-project'),
project: 'demo-project',
provider: 'claude',
} as SessionSource
const fakeProvider: Provider = {
name: 'fake',
displayName: 'Fake',
modelDisplayName: model => model,
toolDisplayName: tool => tool,
discoverSessions: async () => [fakeSource],
createSessionParser() {
return {
async *parse() {
parseCalls += 1
const lineCount = (await readFile(sourcePath, 'utf-8')).trim().split('\n').filter(Boolean).length
for (let i = 0; i < lineCount; i += 1) yield makeCall(i)
},
}
},
}
vi.doMock('../src/providers/index.js', () => ({
discoverAllSessions: async (providerFilter?: string) => {
if (providerFilter === 'fake') return [fakeSource]
return [claudeSource, fakeSource]
},
getProvider: async () => fakeProvider,
}))
const { parseAllSessions } = await import('../src/parser.js')
const progress = {
start: vi.fn(),
advance: vi.fn(),
finish: vi.fn(),
}
const first = await parseAllSessions(undefined, undefined, { progress })
expect(first).toEqual(expect.any(Array))
expect(parseCalls).toBe(1)
expect(progress.start).toHaveBeenCalledTimes(1)
expect(progress.start).toHaveBeenCalledWith(2)
expect(progress.advance).toHaveBeenCalledWith('claude')
expect(progress.advance).toHaveBeenCalledWith('fake')
expect(progress.finish).toHaveBeenCalledTimes(1)
expect(progress.finish).toHaveBeenCalledWith('fake')
const second = await parseAllSessions(undefined, 'fake')
expect(second[0]?.totalApiCalls).toBe(1)
expect(parseCalls).toBe(1)
await writeFile(sourcePath, 'one\ntwo\n', 'utf-8')
const third = await parseAllSessions(undefined, 'fake')
expect(third[0]?.totalApiCalls).toBe(2)
expect(parseCalls).toBe(2)
const rebuilt = await parseAllSessions(undefined, 'fake', { noCache: true })
expect(rebuilt[0]?.totalApiCalls).toBe(2)
expect(parseCalls).toBe(3)
})
it('reuses a broader cached window for a narrower date range', async () => {
const providerName = 'fake-range-reuse'
const fakeSource = {
path: sourcePath,
fingerprintPath: sourcePath,
project: 'fake-project',
provider: providerName,
cacheStrategy: 'full-reparse',
} as SessionSource
const fakeProvider: Provider = {
name: providerName,
displayName: 'Fake',
modelDisplayName: model => model,
toolDisplayName: tool => tool,
discoverSessions: async () => [fakeSource],
createSessionParser() {
return {
async *parse() {
parseCalls += 1
yield { ...makeCall(0), timestamp: '2026-04-20T10:00:00.000Z', deduplicationKey: `${providerName}:day1` }
yield { ...makeCall(1), timestamp: '2026-04-21T10:00:00.000Z', deduplicationKey: `${providerName}:day2` }
},
}
},
}
vi.doMock('../src/providers/index.js', () => ({
discoverAllSessions: async () => [fakeSource],
getProvider: async () => fakeProvider,
}))
vi.resetModules()
const { parseAllSessions: parseWithCache } = await import('../src/parser.js')
const wide = {
start: new Date('2026-04-19T00:00:00.000Z'),
end: new Date('2026-04-21T23:59:59.999Z'),
}
const narrow = {
start: new Date('2026-04-20T00:00:00.000Z'),
end: new Date('2026-04-20T23:59:59.999Z'),
}
const wideProjects = await parseWithCache(wide, providerName)
expect(wideProjects[0]?.totalApiCalls).toBe(2)
expect(parseCalls).toBe(1)
const narrowProjects = await parseWithCache(narrow, providerName)
expect(narrowProjects[0]?.totalApiCalls).toBe(1)
expect(parseCalls).toBe(1)
})
it('does not deduplicate claude turns across different session files', async () => {
const sharedMsgId = 'msg_shared_duplicate_123'
const secondSessionPath = join(claudeRoot, 'projects', 'demo-project', 'session-2.jsonl')
await Promise.all([
writeFile(claudeSessionPath, [
JSON.stringify({
type: 'user',
timestamp: '2026-04-20T09:00:00.000Z',
sessionId: 'sess-1',
message: { role: 'user', content: 'first' },
}),
JSON.stringify({
type: 'assistant',
timestamp: '2026-04-20T09:00:01.000Z',
message: {
id: sharedMsgId,
model: 'claude-sonnet-4-6',
role: 'assistant',
type: 'message',
content: [],
usage: { input_tokens: 10, output_tokens: 20 },
},
}),
].join('\n') + '\n'),
writeFile(secondSessionPath, [
JSON.stringify({
type: 'user',
timestamp: '2026-04-20T09:05:00.000Z',
sessionId: 'sess-2',
message: { role: 'user', content: 'second' },
}),
JSON.stringify({
type: 'assistant',
timestamp: '2026-04-20T09:05:01.000Z',
message: {
id: sharedMsgId,
model: 'claude-sonnet-4-6',
role: 'assistant',
type: 'message',
content: [],
usage: { input_tokens: 11, output_tokens: 21 },
},
}),
].join('\n') + '\n'),
])
vi.doUnmock('../src/providers/index.js')
vi.resetModules()
const { parseAllSessions } = await import('../src/parser.js')
const first = await parseAllSessions(undefined, 'claude')
const project = first.find(project => project.project === 'demo-project')
expect(project?.totalApiCalls).toBe(2)
const second = await parseAllSessions(undefined, 'claude')
const cachedProject = second.find(project => project.project === 'demo-project')
expect(cachedProject?.totalApiCalls).toBe(2)
})
it('filters cached full sessions down to the requested date range', async () => {
const fakeSource = {
path: sourcePath,
fingerprintPath: sourcePath,
project: 'fake-project',
provider: 'fake',
cacheStrategy: 'full-reparse',
progressLabel: 'fake.jsonl',
} as SessionSource
const fakeProvider: Provider = {
name: 'fake',
displayName: 'Fake',
modelDisplayName: model => model,
toolDisplayName: tool => tool,
discoverSessions: async () => [fakeSource],
createSessionParser() {
return {
async *parse() {
yield makeCall(0)
yield { ...makeCall(1), timestamp: '2026-04-21T10:00:00.000Z', deduplicationKey: 'fake:next-day' }
},
}
},
}
vi.doMock('../src/providers/index.js', () => ({
discoverAllSessions: async () => [fakeSource],
getProvider: async () => fakeProvider,
}))
const { parseAllSessions } = await import('../src/parser.js')
await parseAllSessions(undefined, 'fake')
const onlyFirstDay = await parseAllSessions({
start: new Date('2026-04-20T00:00:00.000Z'),
end: new Date('2026-04-20T23:59:59.999Z'),
}, 'fake')
expect(onlyFirstDay[0]?.totalApiCalls).toBe(1)
})
it('refreshes appended Claude log entries on the next run', async () => {
vi.doUnmock('../src/providers/index.js')
vi.resetModules()
const { parseAllSessions } = await import('../src/parser.js')
const first = await parseAllSessions(undefined, 'claude')
expect(first.find(project => project.project === 'demo-project')?.totalApiCalls).toBe(1)
await appendFile(claudeSessionPath, [
JSON.stringify({
type: 'user',
timestamp: '2026-04-20T09:05:00.000Z',
sessionId: 'sess-1',
message: { role: 'user', content: 'second' },
}),
JSON.stringify({
type: 'assistant',
timestamp: '2026-04-20T09:05:01.000Z',
message: {
id: 'msg-2',
model: 'claude-sonnet-4-6',
role: 'assistant',
type: 'message',
content: [],
usage: { input_tokens: 11, output_tokens: 21 },
},
}),
].join('\n') + '\n', 'utf-8')
const second = await parseAllSessions(undefined, 'claude')
expect(second.find(project => project.project === 'demo-project')?.totalApiCalls).toBe(2)
})
it('falls back to a full Claude reparse when cached tail verification fails', async () => {
vi.doUnmock('../src/providers/index.js')
vi.resetModules()
const { parseAllSessions } = await import('../src/parser.js')
await parseAllSessions(undefined, 'claude')
const cacheRoot = join(root, 'cache', 'source-cache-v1')
const manifest = JSON.parse(await readFile(join(cacheRoot, 'manifest.json'), 'utf-8')) as {
entries: Record<string, { file: string }>
}
const entryPath = join(cacheRoot, 'entries', manifest.entries[`claude:${claudeSessionPath}`]!.file)
const entry = JSON.parse(await readFile(entryPath, 'utf-8')) as {
appendState?: { tailHash?: string }
}
entry.appendState = { ...entry.appendState, tailHash: 'broken-tail-hash' }
await writeFile(entryPath, JSON.stringify(entry), 'utf-8')
await appendFile(claudeSessionPath, [
JSON.stringify({
type: 'user',
timestamp: '2026-04-20T09:05:00.000Z',
sessionId: 'sess-1',
message: { role: 'user', content: 'second' },
}),
JSON.stringify({
type: 'assistant',
timestamp: '2026-04-20T09:05:01.000Z',
message: {
id: 'msg-2',
model: 'claude-sonnet-4-6',
role: 'assistant',
type: 'message',
content: [],
usage: { input_tokens: 11, output_tokens: 21 },
},
}),
].join('\n') + '\n', 'utf-8')
vi.resetModules()
const readSessionFileCalls: string[] = []
const readSessionLinesFromOffsetCalls: Array<[string, number]> = []
vi.doMock('../src/fs-utils.js', async () => {
const actual = await vi.importActual<typeof import('../src/fs-utils.js')>('../src/fs-utils.js')
return {
...actual,
readSessionFile: vi.fn(async (filePath: string) => {
readSessionFileCalls.push(filePath)
return actual.readSessionFile(filePath)
}),
readSessionLinesFromOffset: vi.fn(async function* (filePath: string, startOffset: number) {
readSessionLinesFromOffsetCalls.push([filePath, startOffset])
for await (const line of actual.readSessionLinesFromOffset(filePath, startOffset)) {
yield line
}
}),
}
})
const { parseAllSessions: reparsedParseAllSessions } = await import('../src/parser.js')
const reparsed = await reparsedParseAllSessions(undefined, 'claude')
expect(reparsed.find(project => project.project === 'demo-project')?.totalApiCalls).toBe(2)
expect(readSessionFileCalls).toContain(claudeSessionPath)
expect(readSessionLinesFromOffsetCalls).toHaveLength(0)
})
it('keeps appended assistant-only Claude entries inside the existing turn', async () => {
vi.doUnmock('../src/providers/index.js')
vi.resetModules()
const { parseAllSessions } = await import('../src/parser.js')
const first = await parseAllSessions(undefined, 'claude')
const initialSession = first.find(project => project.project === 'demo-project')?.sessions[0]
expect(initialSession?.turns).toHaveLength(1)
await appendFile(claudeSessionPath, JSON.stringify({
type: 'assistant',
timestamp: '2026-04-20T09:05:01.000Z',
message: {
id: 'msg-2',
model: 'claude-sonnet-4-6',
role: 'assistant',
type: 'message',
content: [],
usage: { input_tokens: 11, output_tokens: 21 },
},
}) + '\n', 'utf-8')
const second = await parseAllSessions(undefined, 'claude')
const session = second.find(project => project.project === 'demo-project')?.sessions[0]
expect(session?.apiCalls).toBe(2)
expect(session?.turns).toHaveLength(1)
expect(session?.turns[0]?.userMessage).toBe('first')
expect(session?.turns[0]?.assistantCalls).toHaveLength(2)
})
it('caches Claude session files that contain no turns', async () => {
await writeFile(claudeSessionPath, [
JSON.stringify({
type: 'user',
timestamp: '2026-04-20T09:00:00.000Z',
sessionId: 'sess-empty',
message: { role: 'user', content: 'no assistant response' },
}),
].join('\n') + '\n', 'utf-8')
const readSessionFileCalls: string[] = []
vi.doMock('../src/fs-utils.js', async () => {
const actual = await vi.importActual<typeof import('../src/fs-utils.js')>('../src/fs-utils.js')
return {
...actual,
readSessionFile: vi.fn(async (filePath: string) => {
readSessionFileCalls.push(filePath)
return actual.readSessionFile(filePath)
}),
}
})
vi.resetModules()
const { parseAllSessions } = await import('../src/parser.js')
const first = await parseAllSessions(undefined, 'claude')
const cacheRoot = join(root, 'cache', 'source-cache-v1')
const manifest = JSON.parse(await readFile(join(cacheRoot, 'manifest.json'), 'utf-8')) as {
entries: Record<string, { file: string }>
}
const entryKey = `claude:${claudeSessionPath}`
expect(manifest.entries[entryKey]).toBeDefined()
const cacheEntry = JSON.parse(await readFile(join(cacheRoot, 'entries', manifest.entries[entryKey]!.file), 'utf-8')) as { sessions: unknown[] }
expect(first.find(project => project.project === 'demo-project')?.totalApiCalls).toBeUndefined()
expect(readSessionFileCalls.filter(path => path === claudeSessionPath)).toHaveLength(1)
expect(cacheEntry.sessions).toHaveLength(0)
const second = await parseAllSessions(undefined, 'claude')
expect(second.find(project => project.project === 'demo-project')?.totalApiCalls).toBeUndefined()
expect(readSessionFileCalls.filter(path => path === claudeSessionPath)).toHaveLength(1)
})
})

tests/plan-usage.test.ts (new file)

@@ -0,0 +1,122 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
import { computePeriodFromResetDay, getPlanUsage, getPlanUsageFromProjects } from '../src/plan-usage.js'
const { parseAllSessionsMock } = vi.hoisted(() => ({
parseAllSessionsMock: vi.fn(),
}))
vi.mock('../src/parser.js', () => ({
parseAllSessions: parseAllSessionsMock,
}))
describe('computePeriodFromResetDay', () => {
it('uses current month when today is on/after reset day', () => {
const { periodStart, periodEnd } = computePeriodFromResetDay(1, new Date('2026-04-17T10:00:00.000Z'))
expect(periodStart.getFullYear()).toBe(2026)
expect(periodStart.getMonth()).toBe(3)
expect(periodStart.getDate()).toBe(1)
expect(periodEnd.getMonth()).toBe(4)
expect(periodEnd.getDate()).toBe(1)
})
it('uses previous month when today is before reset day', () => {
const { periodStart, periodEnd } = computePeriodFromResetDay(15, new Date('2026-04-03T10:00:00.000Z'))
expect(periodStart.getMonth()).toBe(2)
expect(periodStart.getDate()).toBe(15)
expect(periodEnd.getMonth()).toBe(3)
expect(periodEnd.getDate()).toBe(15)
})
it('clamps reset day into 1..28', () => {
const { periodStart } = computePeriodFromResetDay(99, new Date('2026-04-27T10:00:00.000Z'))
expect(periodStart.getDate()).toBe(28)
})
})
describe('getPlanUsage', () => {
beforeEach(() => {
parseAllSessionsMock.mockReset()
})
it('passes provider filter from plan and computes status', async () => {
parseAllSessionsMock.mockResolvedValue([
{
totalCostUSD: 160,
sessions: [],
},
])
const usage = await getPlanUsage({
id: 'claude-max',
monthlyUsd: 200,
provider: 'claude',
resetDay: 1,
setAt: '2026-04-01T00:00:00.000Z',
}, new Date('2026-04-10T10:00:00.000Z'))
expect(parseAllSessionsMock).toHaveBeenCalledWith(
expect.objectContaining({ start: expect.any(Date), end: expect.any(Date) }),
'claude',
)
expect(usage.spentApiEquivalentUsd).toBe(160)
expect(usage.percentUsed).toBe(80)
expect(usage.status).toBe('near')
})
it('projects using median daily spend (not mean)', async () => {
const dailyCosts = [1, 100, 1, 100, 1, 100, 1]
const turns = dailyCosts.map((cost, idx) => ({
timestamp: `2026-04-${String(idx + 1).padStart(2, '0')}T12:00:00.000Z`,
assistantCalls: [{ costUSD: cost }],
}))
parseAllSessionsMock.mockResolvedValue([
{
totalCostUSD: dailyCosts.reduce((sum, value) => sum + value, 0),
sessions: [{ turns }],
},
])
const usage = await getPlanUsage({
id: 'custom',
monthlyUsd: 500,
provider: 'all',
resetDay: 1,
setAt: '2026-04-01T00:00:00.000Z',
}, new Date('2026-04-07T12:00:00.000Z'))
// Median(1,100,1,100,1,100,1) = 1, so remaining 23 days adds 23.
expect(Math.round(usage.projectedMonthUsd)).toBe(327)
expect(parseAllSessionsMock).toHaveBeenCalledWith(
expect.objectContaining({ start: expect.any(Date), end: expect.any(Date) }),
'all',
)
})
it('computes plan usage from pre-fetched projects', () => {
const usage = getPlanUsageFromProjects({
id: 'custom',
monthlyUsd: 100,
provider: 'all',
resetDay: 1,
setAt: '2026-04-01T00:00:00.000Z',
}, [
{
totalCostUSD: 40,
sessions: [
{
turns: [
{ timestamp: '2026-04-02T12:00:00.000Z', assistantCalls: [{ costUSD: 20 }] },
{ timestamp: '2026-04-03T12:00:00.000Z', assistantCalls: [{ costUSD: 20 }] },
],
},
],
},
], new Date('2026-04-10T10:00:00.000Z'))
expect(usage.spentApiEquivalentUsd).toBe(40)
expect(usage.budgetUsd).toBe(100)
expect(usage.status).toBe('under')
})
})

tests/plans.test.ts (new file)

@@ -0,0 +1,63 @@
import { mkdtemp, rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { describe, it, expect } from 'vitest'
import { clearPlan, readPlan, savePlan } from '../src/config.js'
import { getPresetPlan, isPlanId, isPlanProvider } from '../src/plans.js'
describe('plan presets', () => {
it('resolves builtin presets', () => {
expect(getPresetPlan('claude-pro')).toMatchObject({ id: 'claude-pro', monthlyUsd: 20, provider: 'claude' })
expect(getPresetPlan('claude-max')).toMatchObject({ id: 'claude-max', monthlyUsd: 200, provider: 'claude' })
expect(getPresetPlan('cursor-pro')).toMatchObject({ id: 'cursor-pro', monthlyUsd: 20, provider: 'cursor' })
expect(getPresetPlan('custom')).toBeNull()
})
it('validates ids and providers', () => {
expect(isPlanId('claude-pro')).toBe(true)
expect(isPlanId('none')).toBe(true)
expect(isPlanId('bad-plan')).toBe(false)
expect(isPlanProvider('all')).toBe(true)
expect(isPlanProvider('claude')).toBe(true)
expect(isPlanProvider('invalid')).toBe(false)
})
})
describe('plan config persistence', () => {
it('round-trips savePlan/readPlan and clearPlan', async () => {
const dir = await mkdtemp(join(tmpdir(), 'codeburn-plan-test-'))
const previousHome = process.env['HOME']
process.env['HOME'] = dir
try {
await savePlan({
id: 'claude-max',
monthlyUsd: 200,
provider: 'claude',
resetDay: 12,
setAt: '2026-04-17T12:00:00.000Z',
})
const plan = await readPlan()
expect(plan).toMatchObject({
id: 'claude-max',
monthlyUsd: 200,
provider: 'claude',
resetDay: 12,
})
await clearPlan()
expect(await readPlan()).toBeUndefined()
} finally {
if (previousHome === undefined) {
delete process.env['HOME']
} else {
process.env['HOME'] = previousHome
}
await rm(dir, { recursive: true, force: true })
}
})
})

@@ -0,0 +1,29 @@
import { describe, it, expect } from 'vitest'
import { PROVIDER_COLORS, providerColor, providerLabel } from '../src/provider-colors.js'
describe('provider presentation metadata', () => {
it('exports the shared provider palette', () => {
expect(PROVIDER_COLORS).toEqual({
all: '#FF8C42',
claude: '#FF8C42',
codex: '#5BF5A0',
cursor: '#00B4D8',
opencode: '#A78BFA',
pi: '#F472B6',
copilot: '#6495ED',
})
})
it('maps provider names to labels', () => {
expect(providerLabel('all')).toBe('All')
expect(providerLabel('opencode')).toBe('OpenCode')
expect(providerLabel('unknown')).toBe('unknown')
})
it('maps provider names to colors with a neutral fallback', () => {
expect(providerColor('all')).toBe('#FF8C42')
expect(providerColor('opencode')).toBe('#A78BFA')
expect(providerColor('unknown')).toBe('#CCCCCC')
})
})

@@ -1,18 +1,21 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
+import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
import { mkdtemp, mkdir, writeFile, rm } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { createCodexProvider } from '../../src/providers/codex.js'
import * as fsUtils from '../../src/fs-utils.js'
import type { ParsedProviderCall } from '../../src/providers/types.js'
let tmpDir: string
beforeEach(async () => {
tmpDir = await mkdtemp(join(tmpdir(), 'codex-test-'))
process.env['CODEBURN_CACHE_DIR'] = join(tmpDir, 'cache')
})
afterEach(async () => {
delete process.env['CODEBURN_CACHE_DIR']
await rm(tmpDir, { recursive: true, force: true })
})
@@ -136,6 +139,28 @@ describe('codex provider - session discovery', () => {
const sessions = await provider.discoverSessions()
expect(sessions).toEqual([])
})
it('reuses cached discovery results when the directory tree is unchanged', async () => {
await writeSession(tmpDir, '2026-04-14', 'rollout-cached.jsonl', [
sessionMeta({ cwd: '/Users/test/myproject' }),
tokenCount({ last: { input: 100, output: 50 }, total: { total: 150 } }),
])
const provider = createCodexProvider(tmpDir)
const readSpy = vi.spyOn(fsUtils, 'readSessionFile')
const first = await provider.discoverSessions()
const firstReadCount = readSpy.mock.calls.length
const second = await provider.discoverSessions()
const secondReadCount = readSpy.mock.calls.length
expect(first).toHaveLength(1)
expect(second).toEqual(first)
expect(firstReadCount).toBeGreaterThan(0)
expect(secondReadCount).toBe(firstReadCount)
readSpy.mockRestore()
})
})
describe('codex provider - JSONL parsing', () => {

@@ -1,9 +1,10 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
+import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
import { mkdtemp, mkdir, writeFile, rm } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { copilot, createCopilotProvider } from '../../src/providers/copilot.js'
import * as fsUtils from '../../src/fs-utils.js'
import type { ParsedProviderCall } from '../../src/providers/types.js'
let tmpDir: string
@@ -40,9 +41,11 @@ function assistantMessage(opts: { messageId: string; outputTokens: number; tools
describe('copilot provider - JSONL parsing', () => {
beforeEach(async () => {
tmpDir = await mkdtemp(join(tmpdir(), 'copilot-test-'))
process.env['CODEBURN_CACHE_DIR'] = join(tmpDir, 'cache')
})
afterEach(async () => {
delete process.env['CODEBURN_CACHE_DIR']
await rm(tmpDir, { recursive: true, force: true })
})
@@ -219,6 +222,25 @@ describe('copilot provider - discoverSessions', () => {
const sessions = await provider.discoverSessions()
expect(sessions).toHaveLength(0)
})
it('reuses cached discovery results when session directories are unchanged', async () => {
await createSessionDir('sess-disc-cached', [modelChange('gpt-4.1')], '/home/user/myapp')
const provider = createCopilotProvider(tmpDir)
const readSpy = vi.spyOn(fsUtils, 'readSessionFile')
const first = await provider.discoverSessions()
const firstReadCount = readSpy.mock.calls.length
const second = await provider.discoverSessions()
const secondReadCount = readSpy.mock.calls.length
expect(first).toHaveLength(1)
expect(second).toEqual(first)
expect(firstReadCount).toBeGreaterThan(0)
expect(secondReadCount).toBe(firstReadCount)
readSpy.mockRestore()
})
})
describe('copilot provider - metadata', () => {

@@ -0,0 +1,246 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { mkdtemp, mkdir, rm, writeFile } from 'fs/promises'
import { existsSync } from 'fs'
import { join } from 'path'
import { tmpdir } from 'os'
import { getAllProviders } from '../../src/providers/index.js'
import { createCursorAgentProvider } from '../../src/providers/cursor-agent.js'
import type { ParsedProviderCall, Provider, SessionSource } from '../../src/providers/types.js'
import { isSqliteAvailable } from '../../src/sqlite.js'
const CHARS_PER_TOKEN = 4
const CURSOR_AGENT_DEFAULT_MODEL = 'claude-sonnet-4-5'
const FIXED_UUID = '123e4567-e89b-12d3-a456-426614174000'
const skipUnlessSqlite = isSqliteAvailable() ? describe : describe.skip
type TestDb = {
exec(sql: string): void
prepare(sql: string): { run(...params: unknown[]): void }
close(): void
}
let tempRoots: string[] = []
beforeEach(() => {
tempRoots = []
})
afterEach(async () => {
await Promise.all(tempRoots.filter(existsSync).map((dir) => rm(dir, { recursive: true, force: true })))
})
async function makeBaseDir(): Promise<string> {
const dir = await mkdtemp(join(tmpdir(), 'cursor-agent-test-'))
tempRoots.push(dir)
return dir
}
async function collectCalls(provider: Provider, source: SessionSource): Promise<ParsedProviderCall[]> {
const calls: ParsedProviderCall[] = []
for await (const call of provider.createSessionParser(source, new Set()).parse()) {
calls.push(call)
}
return calls
}
function withTestDb(dbPath: string, fn: (db: TestDb) => void): void {
const { DatabaseSync: Database } = require('node:sqlite')
const db = new Database(dbPath)
fn(db)
db.close()
}
describe('cursor-agent provider', () => {
it('is registered', async () => {
const all = await getAllProviders()
const provider = all.find((p) => p.name === 'cursor-agent')
expect(provider).toBeDefined()
expect(provider?.displayName).toBe('Cursor Agent')
})
it('maps default model to auto with estimation label', () => {
const provider = createCursorAgentProvider('/tmp/nonexistent-cursor-agent-fixture')
expect(provider.modelDisplayName('default')).toBe('Auto (Sonnet est.)')
})
it('maps known models and appends estimation label', () => {
const provider = createCursorAgentProvider('/tmp/nonexistent-cursor-agent-fixture')
expect(provider.modelDisplayName('claude-4.5-opus-high-thinking')).toBe('Opus 4.5 (Thinking) (est.)')
expect(provider.modelDisplayName('claude-4.6-sonnet')).toBe('Sonnet 4.6 (est.)')
expect(provider.modelDisplayName('composer-1')).toBe('Composer 1 (est.)')
})
it('falls through to raw model name for unknown models with single est. suffix', () => {
const provider = createCursorAgentProvider('/tmp/nonexistent-cursor-agent-fixture')
expect(provider.modelDisplayName('claude-5-future-model')).toBe('claude-5-future-model (est.)')
expect(provider.modelDisplayName('gpt-9')).toBe('gpt-9 (est.)')
})
it('returns identity for tool display name', () => {
const provider = createCursorAgentProvider('/tmp/nonexistent-cursor-agent-fixture')
expect(provider.toolDisplayName('cursor:edit')).toBe('cursor:edit')
})
it('returns empty discovery when projects dir is missing', async () => {
const baseDir = await makeBaseDir()
const provider = createCursorAgentProvider(baseDir)
const sources = await provider.discoverSessions()
expect(sources).toEqual([])
})
it('discovers a single transcript', async () => {
const baseDir = await makeBaseDir()
const transcriptDir = join(baseDir, 'projects', 'test-proj', 'agent-transcripts')
await mkdir(transcriptDir, { recursive: true })
const transcriptPath = join(transcriptDir, `${FIXED_UUID}.txt`)
await writeFile(transcriptPath, 'user:\n<user_query>hello</user_query>\nA:\nworld\n')
const provider = createCursorAgentProvider(baseDir)
const sources = await provider.discoverSessions()
expect(sources).toHaveLength(1)
expect(sources[0]!.provider).toBe('cursor-agent')
expect(sources[0]!.path).toBe(transcriptPath)
expect(sources[0]!.fingerprintPath).toBe(transcriptPath)
expect(sources[0]!.cacheStrategy).toBe('full-reparse')
expect(sources[0]!.parserVersion).toBe('cursor-agent:v1')
})
it('discovers transcripts across multiple projects', async () => {
const baseDir = await makeBaseDir()
const transcriptA = join(baseDir, 'projects', 'proj-one', 'agent-transcripts')
const transcriptB = join(baseDir, 'projects', 'proj-two', 'agent-transcripts')
await mkdir(transcriptA, { recursive: true })
await mkdir(transcriptB, { recursive: true })
await writeFile(join(transcriptA, `${FIXED_UUID}.txt`), 'user:\n<user_query>a</user_query>\nA:\na\n')
await writeFile(join(transcriptB, `${FIXED_UUID}.txt`), 'user:\n<user_query>b</user_query>\nA:\nb\n')
const provider = createCursorAgentProvider(baseDir)
const sources = await provider.discoverSessions()
expect(sources).toHaveLength(2)
expect(sources.every((s) => s.provider === 'cursor-agent')).toBe(true)
})
it('parses one user/assistant pair with estimated token counts', async () => {
const baseDir = await makeBaseDir()
const transcriptDir = join(baseDir, 'projects', 'my-proj', 'agent-transcripts')
await mkdir(transcriptDir, { recursive: true })
const userText = 'explain parser output'
const assistantText = 'first line\nsecond line'
const transcriptPath = join(transcriptDir, `${FIXED_UUID}.txt`)
await writeFile(
transcriptPath,
`user:\n<user_query>${userText}</user_query>\nA:\n${assistantText}\n`
)
const provider = createCursorAgentProvider(baseDir)
const source = (await provider.discoverSessions())[0]!
const calls = await collectCalls(provider, source)
expect(calls).toHaveLength(1)
expect(calls[0]!.provider).toBe('cursor-agent')
expect(calls[0]!.model).toBe(CURSOR_AGENT_DEFAULT_MODEL)
expect(calls[0]!.inputTokens).toBe(Math.ceil(userText.length / CHARS_PER_TOKEN))
expect(calls[0]!.outputTokens).toBe(Math.ceil(assistantText.length / CHARS_PER_TOKEN))
expect(calls[0]!.reasoningTokens).toBe(0)
expect(calls[0]!.deduplicationKey).toBe(`cursor-agent:${FIXED_UUID}:0`)
})
it('parses without sqlite db and defaults model', async () => {
const baseDir = await makeBaseDir()
const transcriptDir = join(baseDir, 'projects', 'fallback-proj', 'agent-transcripts')
await mkdir(transcriptDir, { recursive: true })
const transcriptPath = join(transcriptDir, `${FIXED_UUID}.txt`)
await writeFile(transcriptPath, 'user:\n<user_query>hello world</user_query>\nA:\n[Thinking]private\nvisible\n')
const provider = createCursorAgentProvider(baseDir)
const source = (await provider.discoverSessions())[0]!
const calls = await collectCalls(provider, source)
expect(calls).toHaveLength(1)
expect(calls[0]!.model).toBe(CURSOR_AGENT_DEFAULT_MODEL)
expect(calls[0]!.reasoningTokens).toBe(2)
expect(calls[0]!.outputTokens).toBe(2)
})
it('skips unrecognized transcript format and writes stderr message', async () => {
const baseDir = await makeBaseDir()
const transcriptDir = join(baseDir, 'projects', 'bad-proj', 'agent-transcripts')
await mkdir(transcriptDir, { recursive: true })
const transcriptPath = join(transcriptDir, `${FIXED_UUID}.txt`)
await writeFile(transcriptPath, 'no markers in this transcript')
const provider = createCursorAgentProvider(baseDir)
const source = (await provider.discoverSessions())[0]!
const stderrSpy = vi.spyOn(process.stderr, 'write').mockImplementation(() => true)
const calls = await collectCalls(provider, source)
expect(calls).toHaveLength(0)
expect(stderrSpy).toHaveBeenCalled()
expect(String(stderrSpy.mock.calls[0]?.[0] ?? '')).toContain('unrecognized cursor-agent transcript format')
stderrSpy.mockRestore()
})
it('falls back to stable sha1 conversation id for non-uuid filenames', async () => {
const baseDir = await makeBaseDir()
const transcriptDir = join(baseDir, 'projects', 'sha-proj', 'agent-transcripts')
await mkdir(transcriptDir, { recursive: true })
const transcriptPath = join(transcriptDir, 'not-a-uuid.txt')
await writeFile(transcriptPath, 'user:\n<user_query>test</user_query>\nA:\nresult\n')
const provider = createCursorAgentProvider(baseDir)
const source = (await provider.discoverSessions())[0]!
const callsFirst = await collectCalls(provider, source)
const callsSecond = await collectCalls(provider, source)
expect(callsFirst).toHaveLength(1)
expect(callsSecond).toHaveLength(1)
expect(callsFirst[0]!.sessionId).toHaveLength(16)
expect(callsFirst[0]!.deduplicationKey.startsWith('cursor-agent:')).toBe(true)
expect(callsFirst[0]!.sessionId).toBe(callsSecond[0]!.sessionId)
expect(callsFirst[0]!.deduplicationKey).toBe(callsSecond[0]!.deduplicationKey)
})
})
skipUnlessSqlite('cursor-agent sqlite metadata', () => {
it('uses model metadata from ai-code-tracking db when present', async () => {
const baseDir = await makeBaseDir()
const transcriptDir = join(baseDir, 'projects', 'proj-with-db', 'agent-transcripts')
const aiTrackingDir = join(baseDir, 'ai-tracking')
await mkdir(transcriptDir, { recursive: true })
await mkdir(aiTrackingDir, { recursive: true })
await writeFile(
join(transcriptDir, `${FIXED_UUID}.txt`),
'user:\n<user_query>estimate cost</user_query>\nA:\nanswer\n'
)
const dbPath = join(aiTrackingDir, 'ai-code-tracking.db')
withTestDb(dbPath, (db) => {
db.exec('CREATE TABLE conversation_summaries (conversationId TEXT, title TEXT, tldr TEXT, model TEXT, mode TEXT, updatedAt INTEGER)')
db.prepare('INSERT INTO conversation_summaries (conversationId, title, tldr, model, mode, updatedAt) VALUES (?, ?, ?, ?, ?, ?)')
.run(FIXED_UUID, 'Demo title', '', 'claude-4.6-sonnet', 'agent', 1735689600000)
})
const provider = createCursorAgentProvider(baseDir)
const source = (await provider.discoverSessions())[0]!
const calls = await collectCalls(provider, source)
expect(calls).toHaveLength(1)
expect(calls[0]!.model).toBe('claude-4.6-sonnet')
expect(calls[0]!.timestamp).toBe('2025-01-01T00:00:00.000Z')
})
})

@@ -1,6 +1,15 @@
-import { describe, it, expect, beforeEach } from 'vitest'
+import { beforeEach, afterEach, describe, expect, it } from 'vitest'
import { mkdtemp, mkdir, rm, writeFile } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { getAllProviders } from '../../src/providers/index.js'
import { createCursorProvider } from '../../src/providers/cursor.js'
import { createOpenCodeProvider } from '../../src/providers/opencode.js'
import type { Provider } from '../../src/providers/types.js'
import { isSqliteAvailable } from '../../src/sqlite.js'
const skipUnlessSqlite = isSqliteAvailable() ? describe : describe.skip
describe('cursor provider', () => {
let cursorProvider: Provider
@@ -68,10 +77,83 @@ describe('cursor sqlite adapter', () => {
})
})
-describe('cursor cache', () => {
-it('returns null when no cache exists', async () => {
-const { readCachedResults } = await import('../../src/cursor-cache.js')
-const result = await readCachedResults('/nonexistent/path.db')
-expect(result).toBeNull()
skipUnlessSqlite('shared cache metadata', () => {
let tmpDir: string
beforeEach(async () => {
tmpDir = await mkdtemp(join(tmpdir(), 'provider-cache-meta-'))
})
afterEach(async () => {
await rm(tmpDir, { recursive: true, force: true })
})
async function createOpenCodeTestDb(dir: string): Promise<string> {
const ocDir = join(dir, 'opencode')
const dbPath = join(ocDir, 'opencode.db')
const { DatabaseSync: Database } = require('node:sqlite')
await mkdir(ocDir, { recursive: true })
const db = new Database(dbPath)
db.exec(`
CREATE TABLE session (
id TEXT PRIMARY KEY, project_id TEXT NOT NULL, parent_id TEXT,
slug TEXT NOT NULL, directory TEXT NOT NULL, title TEXT NOT NULL,
version TEXT NOT NULL, time_created INTEGER, time_updated INTEGER,
time_archived INTEGER
)
`)
db.exec(`
CREATE TABLE message (
id TEXT PRIMARY KEY, session_id TEXT NOT NULL,
time_created INTEGER, time_updated INTEGER, data TEXT NOT NULL
)
`)
db.exec(`
CREATE TABLE part (
id TEXT PRIMARY KEY, message_id TEXT NOT NULL,
session_id TEXT NOT NULL, time_created INTEGER,
time_updated INTEGER, data TEXT NOT NULL
)
`)
db.prepare(`
INSERT INTO session (id, project_id, slug, directory, title, version, time_created)
VALUES (?, ?, ?, ?, ?, ?, ?)
`).run('sess-1', 'proj-1', 'slug-1', '/home/user/myproject', 'My Project', '1.0', 1700000000000)
db.close()
return dbPath
}
it('cursor exposes the sqlite database as its fingerprint path', async () => {
const dbPath = join(tmpDir, 'state.vscdb')
await writeFile(dbPath, '')
const cursor = createCursorProvider(dbPath)
const sources = await cursor.discoverSessions()
expect(sources).toHaveLength(1)
for (const source of sources) {
expect(source.cacheStrategy).toBe('full-reparse')
expect(source.fingerprintPath).toBe(source.path)
expect(source.progressLabel).toBe('Cursor state.vscdb')
expect(source.parserVersion).toBe('cursor:v1')
}
})
it('opencode sources fingerprint the backing database, not the logical dbPath:sessionId key', async () => {
const dbPath = await createOpenCodeTestDb(tmpDir)
const opencode = createOpenCodeProvider(tmpDir)
const sources = await opencode.discoverSessions()
expect(sources).toHaveLength(1)
for (const source of sources) {
expect(source.cacheStrategy).toBe('full-reparse')
expect(source.fingerprintPath).toBeTruthy()
expect(source.fingerprintPath).toBe(dbPath)
expect(source.fingerprintPath).not.toBe(source.path)
expect(source.progressLabel).toBe('opencode:sess-1')
expect(source.parserVersion).toBe('opencode:v1')
}
})
})

@@ -164,7 +164,7 @@ describe('omp provider - JSONL parsing', () => {
expect(call.sessionId).toBe('sess-omp-1')
expect(call.userMessage).toBe('write a test')
expect(call.timestamp).toBe('2026-04-14T10:00:30.000Z')
-expect(call.deduplicationKey).toContain('pi:')
+expect(call.deduplicationKey).toContain('omp:')
expect(call.deduplicationKey).toContain('resp-omp-1')
})
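The changed assertion reflects the providerName parameterization from the merge: OMP-sourced calls now stamp `omp:` instead of `pi:` in their dedup keys. A sketch of a provider-scoped key, assuming a key shape of provider, session, and response id (the helper name is hypothetical):

```typescript
// Hypothetical sketch: build a deduplication key prefixed with the provider
// name, so calls discovered by different providers never collide.
function buildDeduplicationKey(
  providerName: string, // e.g. 'pi' or 'omp'
  sessionId: string,
  responseId: string,
): string {
  return `${providerName}:${sessionId}:${responseId}`
}

const key = buildDeduplicationKey('omp', 'sess-omp-1', 'resp-omp-1')
```

With the prefix parameterized, the same parser serves both providers without stamping `pi:` into OMP entries.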

@@ -1,18 +1,24 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
+import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
import { mkdtemp, mkdir, writeFile, rm } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { createPiProvider } from '../../src/providers/pi.js'
import * as fsUtils from '../../src/fs-utils.js'
import type { ParsedProviderCall } from '../../src/providers/types.js'
let tmpDir: string
let cacheDir: string
beforeEach(async () => {
tmpDir = await mkdtemp(join(tmpdir(), 'pi-test-'))
cacheDir = await mkdtemp(join(tmpdir(), 'pi-cache-'))
process.env['CODEBURN_CACHE_DIR'] = cacheDir
})
afterEach(async () => {
delete process.env['CODEBURN_CACHE_DIR']
await rm(cacheDir, { recursive: true, force: true })
await rm(tmpDir, { recursive: true, force: true })
})
@@ -146,6 +152,29 @@ describe('pi provider - session discovery', () => {
const sessions = await provider.discoverSessions()
expect(sessions).toEqual([])
})
it('reuses cached discovery results when project directories are unchanged', async () => {
const projectDir = join(tmpDir, '--Users-test-myproject--')
await writeSession(projectDir, 'cached.jsonl', [
sessionMeta({ cwd: '/Users/test/myproject' }),
assistantMessage({}),
])
const provider = createPiProvider(tmpDir)
const readSpy = vi.spyOn(fsUtils, 'readSessionFile')
const first = await provider.discoverSessions()
const firstReadCount = readSpy.mock.calls.length
const second = await provider.discoverSessions()
const secondReadCount = readSpy.mock.calls.length
expect(first).toHaveLength(1)
expect(second).toEqual(first)
expect(firstReadCount).toBeGreaterThan(0)
expect(secondReadCount).toBe(firstReadCount)
readSpy.mockRestore()
})
})
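The new test verifies the discovery-cache snapshot path kept from main: a second `discoverSessions()` call must not re-read session files when nothing changed. One way such a cache can work is to snapshot directory mtimes and reuse the prior result when the snapshot matches; the sketch below is illustrative, not the actual `pi.ts` implementation:

```typescript
// Illustrative discovery cache: remember a snapshot of directory mtimes and
// skip the expensive rescan when the snapshot is unchanged.
interface DirSnapshot { [dir: string]: number } // dir -> mtimeMs

interface DiscoveryCache<T> {
  snapshot: DirSnapshot
  sessions: T[]
}

function snapshotsMatch(a: DirSnapshot, b: DirSnapshot): boolean {
  const aKeys = Object.keys(a)
  if (aKeys.length !== Object.keys(b).length) return false
  return aKeys.every(dir => b[dir] === a[dir])
}

function discoverWithCache<T>(
  current: DirSnapshot,
  cache: DiscoveryCache<T> | null,
  scan: () => T[], // expensive: reads every session file
): DiscoveryCache<T> {
  if (cache && snapshotsMatch(cache.snapshot, current)) return cache
  return { snapshot: current, sessions: scan() }
}

let reads = 0
const scan = () => { reads++; return ['sess-1'] }
const snap: DirSnapshot = { '/projects/a': 1700000000000 }
const first = discoverWithCache(snap, null, scan)
const second = discoverWithCache(snap, first, scan)
```

This mirrors what the spy on `readSessionFile` asserts: the read count does not grow on the second discovery.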
describe('pi provider - JSONL parsing', () => {

tests/source-cache.test.ts (new file, 363 additions)

@@ -0,0 +1,363 @@
import { afterEach, beforeEach, describe, expect, it } from 'vitest'
import { createHash } from 'crypto'
import { existsSync } from 'fs'
import { mkdir, mkdtemp, readFile, readdir, rm, writeFile } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import {
SOURCE_CACHE_VERSION,
emptySourceCacheManifest,
loadSourceCacheManifest,
saveSourceCacheManifest,
readSourceCacheEntry,
writeSourceCacheEntry,
computeFileFingerprint,
type SourceCacheEntry,
} from '../src/source-cache.js'
import type { SessionSummary } from '../src/types.js'
let root = ''
function emptySession(sessionId: string, overrides: Partial<SessionSummary> = {}): SessionSummary {
return {
sessionId,
project: 'project',
firstTimestamp: '2026-04-10T00:00:00Z',
lastTimestamp: '2026-04-10T00:00:00Z',
totalCostUSD: 0,
totalInputTokens: 0,
totalOutputTokens: 0,
totalCacheReadTokens: 0,
totalCacheWriteTokens: 0,
apiCalls: 0,
turns: [],
modelBreakdown: {},
toolBreakdown: {},
mcpBreakdown: {},
bashBreakdown: {},
categoryBreakdown: {},
...overrides,
}
}
beforeEach(async () => {
root = await mkdtemp(join(tmpdir(), 'codeburn-source-cache-'))
process.env['CODEBURN_CACHE_DIR'] = root
})
afterEach(async () => {
delete process.env['CODEBURN_CACHE_DIR']
if (root) await rm(root, { recursive: true, force: true })
})
describe('source cache manifest', () => {
it('returns an empty manifest when no file exists', async () => {
await expect(loadSourceCacheManifest()).resolves.toEqual(emptySourceCacheManifest())
})
it('returns an empty manifest when the manifest shape is invalid', async () => {
await mkdir(join(root, 'source-cache-v1'), { recursive: true })
await writeFile(join(root, 'source-cache-v1', 'manifest.json'), JSON.stringify({
version: SOURCE_CACHE_VERSION,
entries: { bad: { file: 123, provider: 'fake' } },
}), 'utf-8')
await expect(loadSourceCacheManifest()).resolves.toEqual(emptySourceCacheManifest())
})
it('returns an empty manifest when an entry filename is unsafe', async () => {
await mkdir(join(root, 'source-cache-v1'), { recursive: true })
await writeFile(join(root, 'source-cache-v1', 'manifest.json'), JSON.stringify({
version: SOURCE_CACHE_VERSION,
entries: {
bad: {
file: '../escape.json',
provider: 'fake',
logicalPath: join(root, 'source.jsonl'),
},
},
}), 'utf-8')
await expect(loadSourceCacheManifest()).resolves.toEqual(emptySourceCacheManifest())
})
it('round-trips a manifest and entry', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, '{"ok":true}\n', 'utf-8')
const fingerprint = await computeFileFingerprint(sourcePath)
const entry: SourceCacheEntry = {
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint,
sessions: [],
}
const manifest = await loadSourceCacheManifest()
await writeSourceCacheEntry(manifest, entry)
await saveSourceCacheManifest(manifest)
const loadedManifest = await loadSourceCacheManifest()
const loadedEntry = await readSourceCacheEntry(loadedManifest, 'fake', sourcePath)
expect(loadedEntry).toEqual(entry)
})
it('returns null when the fingerprint no longer matches', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'one\n', 'utf-8')
const fingerprint = await computeFileFingerprint(sourcePath)
const entry: SourceCacheEntry = {
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint,
sessions: [],
}
const manifest = await loadSourceCacheManifest()
await writeSourceCacheEntry(manifest, entry)
await saveSourceCacheManifest(manifest)
await writeFile(sourcePath, 'one\ntwo\n', 'utf-8')
const loaded = await readSourceCacheEntry(await loadSourceCacheManifest(), 'fake', sourcePath)
expect(loaded).toBeNull()
})
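The stale-fingerprint test relies on `computeFileFingerprint` detecting that the backing file changed. A common fingerprint for this purpose is the pair of mtime and size from `fs.stat`; the sketch below assumes that shape (it matches the `{ mtimeMs, sizeBytes }` object seen in the invalid-entry test, but is not the actual implementation):

```typescript
// Illustrative fingerprint check: an entry is stale when the backing file's
// mtime or size no longer matches what was recorded.
import { statSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

interface FileFingerprint { mtimeMs: number; sizeBytes: number }

function fingerprintOf(path: string): FileFingerprint {
  const s = statSync(path)
  return { mtimeMs: s.mtimeMs, sizeBytes: s.size }
}

function fingerprintsMatch(a: FileFingerprint, b: FileFingerprint): boolean {
  return a.mtimeMs === b.mtimeMs && a.sizeBytes === b.sizeBytes
}

const p = join(tmpdir(), 'fingerprint-demo.jsonl')
writeFileSync(p, 'one\n')               // 4 bytes
const recorded = fingerprintOf(p)
writeFileSync(p, 'one\ntwo\n')          // grows to 8 bytes
const stale = !fingerprintsMatch(recorded, fingerprintOf(p))
```

Growing the file changes its size, so the check stays reliable even on filesystems with coarse mtime resolution.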
it('returns null when the cached entry shape is invalid', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'one\n', 'utf-8')
const manifest = await loadSourceCacheManifest()
const file = `${createHash('sha1').update(`fake:${sourcePath}`).digest('hex')}.json`
manifest.entries[`fake:${sourcePath}`] = { file, provider: 'fake', logicalPath: sourcePath }
await saveSourceCacheManifest(manifest)
await mkdir(join(root, 'source-cache-v1', 'entries'), { recursive: true })
await writeFile(join(root, 'source-cache-v1', 'entries', file), JSON.stringify({
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint: { mtimeMs: 'nope', sizeBytes: 4 },
sessions: [],
}), 'utf-8')
const loaded = await readSourceCacheEntry(await loadSourceCacheManifest(), 'fake', sourcePath)
expect(loaded).toBeNull()
})
it('returns null when the manifest metadata does not match the lookup request', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'one\n', 'utf-8')
const fingerprint = await computeFileFingerprint(sourcePath)
const file = `${createHash('sha1').update(`fake:${sourcePath}`).digest('hex')}.json`
const manifest = await loadSourceCacheManifest()
manifest.entries[`fake:${sourcePath}`] = {
file,
provider: 'other',
logicalPath: sourcePath,
}
await saveSourceCacheManifest(manifest)
await mkdir(join(root, 'source-cache-v1', 'entries'), { recursive: true })
await writeFile(join(root, 'source-cache-v1', 'entries', file), JSON.stringify({
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint,
sessions: [],
}), 'utf-8')
const loaded = await readSourceCacheEntry(await loadSourceCacheManifest(), 'fake', sourcePath)
expect(loaded).toBeNull()
})
it('returns null when a nested assistant call is malformed', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'one\n', 'utf-8')
const fingerprint = await computeFileFingerprint(sourcePath)
const entry: SourceCacheEntry = {
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint,
sessions: [
emptySession('session-1', {
turns: [{
userMessage: 'hello',
assistantCalls: [{
provider: 'fake',
model: 'model',
usage: {
inputTokens: 1,
outputTokens: 1,
cacheCreationInputTokens: 0,
cacheReadInputTokens: 0,
cachedInputTokens: 0,
reasoningTokens: 0,
webSearchRequests: 0,
},
costUSD: 1,
tools: [],
mcpTools: [],
hasAgentSpawn: false,
hasPlanMode: false,
speed: 'standard',
timestamp: '2026-04-10T00:00:00Z',
bashCommands: [],
deduplicationKey: 'k',
}],
timestamp: '2026-04-10T00:00:00Z',
sessionId: 'session-1',
}],
}),
],
}
const manifest = await loadSourceCacheManifest()
await writeSourceCacheEntry(manifest, entry)
await saveSourceCacheManifest(manifest)
await writeFile(join(root, 'source-cache-v1', 'entries', `${createHash('sha1').update(`fake:${sourcePath}`).digest('hex')}.json`), JSON.stringify({
...entry,
sessions: [{
...entry.sessions[0],
turns: [{
...entry.sessions[0].turns[0],
assistantCalls: [{
...entry.sessions[0].turns[0].assistantCalls[0],
usage: { ...entry.sessions[0].turns[0].assistantCalls[0].usage, inputTokens: 'bad' },
}],
}],
}],
}), 'utf-8')
const loaded = await readSourceCacheEntry(await loadSourceCacheManifest(), 'fake', sourcePath)
expect(loaded).toBeNull()
})
it('returns null when append state is malformed', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'one\n', 'utf-8')
const fingerprint = await computeFileFingerprint(sourcePath)
const entry = {
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'append-jsonl' as const,
parserVersion: 'fake-v1',
fingerprint,
sessions: [],
appendState: { endOffset: 'bad', tailHash: 'abc' },
}
const manifest = await loadSourceCacheManifest()
await writeSourceCacheEntry(manifest, entry as SourceCacheEntry)
await saveSourceCacheManifest(manifest)
const loaded = await readSourceCacheEntry(await loadSourceCacheManifest(), 'fake', sourcePath)
expect(loaded).toBeNull()
})
it('returns null when a breakdown map contains malformed values', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'one\n', 'utf-8')
const fingerprint = await computeFileFingerprint(sourcePath)
const entry: SourceCacheEntry = {
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint,
sessions: [
emptySession('session-2', {
modelBreakdown: {
modelA: {
calls: 'bad',
costUSD: 0,
tokens: {
inputTokens: 0,
outputTokens: 0,
cacheCreationInputTokens: 0,
cacheReadInputTokens: 0,
cachedInputTokens: 0,
reasoningTokens: 0,
webSearchRequests: 0,
},
},
},
}),
],
}
const manifest = await loadSourceCacheManifest()
await writeSourceCacheEntry(manifest, entry)
await saveSourceCacheManifest(manifest)
const loaded = await readSourceCacheEntry(await loadSourceCacheManifest(), 'fake', sourcePath)
expect(loaded).toBeNull()
})
it('writes atomically without leaving temp files behind', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'x\n', 'utf-8')
const manifest = await loadSourceCacheManifest()
await writeSourceCacheEntry(manifest, {
version: SOURCE_CACHE_VERSION,
provider: 'fake',
logicalPath: sourcePath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint: await computeFileFingerprint(sourcePath),
sessions: [],
})
await saveSourceCacheManifest(manifest)
const files = JSON.parse(await readFile(join(root, 'source-cache-v1', 'manifest.json'), 'utf-8'))
expect(files.version).toBe(SOURCE_CACHE_VERSION)
expect(existsSync(join(root, 'source-cache-v1', 'entries'))).toBe(true)
const cacheFiles = await readdir(join(root, 'source-cache-v1'))
const entryFiles = await readdir(join(root, 'source-cache-v1', 'entries'))
expect(cacheFiles.some(f => f.endsWith('.tmp'))).toBe(false)
expect(entryFiles.some(f => f.endsWith('.tmp'))).toBe(false)
})
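The no-temp-files test above characterizes the usual atomic-write pattern: write to a `.tmp` sibling, then rename over the destination. A minimal sketch of that pattern (helper name is illustrative):

```typescript
// Illustrative atomic JSON write: write to a .tmp sibling, then rename.
// rename is atomic on the same filesystem, so readers never observe a
// partially written file, and no .tmp files survive a successful write.
import {
  mkdtempSync, readFileSync, readdirSync, renameSync, writeFileSync,
} from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

function writeJsonAtomic(path: string, value: unknown): void {
  const tmp = `${path}.tmp`
  writeFileSync(tmp, JSON.stringify(value), 'utf-8')
  renameSync(tmp, path) // atomically replaces the destination
}

const dir = mkdtempSync(join(tmpdir(), 'atomic-demo-'))
const target = join(dir, 'manifest.json')
writeJsonAtomic(target, { version: 1 })
const leftovers = readdirSync(dir).filter(f => f.endsWith('.tmp'))
const parsed = JSON.parse(readFileSync(target, 'utf-8'))
```

If the process dies between the two steps, only the `.tmp` file is orphaned; the destination is either absent or fully written, never truncated.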
it('does not mutate the manifest when the entry write fails', async () => {
const sourcePath = join(root, 'source.jsonl')
await writeFile(sourcePath, 'x\n', 'utf-8')
const manifest = await loadSourceCacheManifest()
const provider = 'fake'
const logicalPath = sourcePath
const file = `${createHash('sha1').update(`${provider}:${logicalPath}`).digest('hex')}.json`
await mkdir(join(root, 'source-cache-v1', 'entries', file), { recursive: true })
await expect(writeSourceCacheEntry(manifest, {
version: SOURCE_CACHE_VERSION,
provider,
logicalPath,
fingerprintPath: sourcePath,
cacheStrategy: 'full-reparse',
parserVersion: 'fake-v1',
fingerprint: await computeFileFingerprint(sourcePath),
sessions: [],
})).rejects.toBeTruthy()
expect(manifest.entries[`fake:${sourcePath}`]).toBeUndefined()
})
})