Merge remote-tracking branch 'origin/main' into fix/menubar-version-prefix

# Conflicts:
#	package.json
#	src/parser.ts
Author: iamtoruk, 2026-05-13 20:32:22 -07:00
Commit: 403efd4727
36 changed files with 1808 additions and 1081 deletions

.github/PULL_REQUEST_TEMPLATE.md (new file, 18 lines)

@@ -0,0 +1,18 @@
## Summary
<!-- What does this PR do? 1-3 bullet points. -->
## Testing
- [ ] I have tested this locally against real data (not just unit tests)
- [ ] `npm test` passes
- [ ] `npm run build` succeeds
### For new providers only:
- [ ] I installed the tool and generated real sessions by using it
- [ ] `npm run dev -- today` shows correct costs and session counts for this provider
- [ ] `npm run dev -- models --provider <name>` shows correct model names and pricing
- [ ] Screenshot or terminal output attached below proving it works with real data
<!-- Paste screenshot / terminal output here -->


@@ -1,5 +1,28 @@
# Changelog
## Unreleased
### Added (CLI)
- **IBM Bob provider.** Discovers IBM Bob IDE task history, reuses the
Cline-family parser for token/cost records, extracts model tags and
workspace-based project names from session data. Closes #248.
### Fixed (CLI)
- **Claude 1-hour cache write pricing.** 1-hour cache writes are now priced
at 2x base input (previously used the 5-minute 1.25x rate for all writes).
Daily cache bumped to v6 so stale totals are recomputed. Closes #276.
- **OpenCode MCP usage now counted.** OpenCode stores MCP tool calls as
`<server>_<tool>` names, which the shared MCP pipeline did not recognize.
The provider now normalizes these to the canonical `mcp__<server>__<tool>`
form so MCP breakdowns and `optimize` work correctly. Closes #308.
- **Mangled project names in dashboard.** The By Project and Top Sessions
panels decoded slugs by splitting on `-`, which broke directory names
containing dashes or dots (e.g. `my-project` rendered as `my/project`).
Now uses the real project path instead. Closes #196.
- **Cursor undated bubble rows misattributed to Today.** Bubble rows without
a `createdAt` timestamp were defaulting to the current date, inflating
Today's spend. Now skipped at both the SQL and application level.
## 0.9.8 - 2026-05-10
### Added (CLI)


@@ -84,6 +84,23 @@ The `.github/workflows/block-claude-coauthor.yml` workflow rejects any PR whose
If a PR is rejected by this check, the workflow prints the exact rebase command to fix it.
## Before You Start
**Comment on the issue first.** Before writing code for a feature or new provider, leave a comment on the relevant issue saying what you plan to do. Wait for a maintainer to confirm the approach. Unsolicited PRs that duplicate work already in progress or take an incompatible approach will be closed.
**One PR at a time.** We will not review a second PR from you until the first is merged or closed. This keeps the review queue manageable and ensures each contribution gets proper attention.
## Adding a New Provider
New providers have the highest bar because broken parsing silently produces wrong data for users. Before opening a PR:
1. **Install the tool and use it.** Generate real sessions by actually coding with the provider. We do this ourselves for every provider we ship.
2. **Test against real data.** Run `npm run dev -- today` and `npm run dev -- models` with your real sessions and confirm the output looks correct — costs are non-zero, model names resolve, session counts match what you see in the tool.
3. **Include proof in the PR.** Attach a screenshot or terminal output showing codeburn correctly parsing your real sessions. PRs for new providers without evidence of local testing will not be reviewed.
4. **Do not rely on AI-generated guesses about storage paths or schemas.** Tools change their data formats between versions. The only way to know the current schema is to install the tool and inspect the actual files on disk.
PRs that add a provider based solely on online documentation or AI-generated code, without evidence of testing against real data, will be closed.
## Pull Requests
1. Fork or branch from `main`.


@@ -13,7 +13,7 @@
<a href="https://github.com/sponsors/iamtoruk"><img src="https://img.shields.io/badge/sponsor-♥-ea4aaa?logo=github" alt="Sponsor" /></a>
</p>
CodeBurn tracks token usage, cost, and performance across **18 AI coding tools**. It breaks down spending by task type, model, tool, project, and provider so you can see exactly where your budget goes.
CodeBurn tracks token usage, cost, and performance across **19 AI coding tools**. It breaks down spending by task type, model, tool, project, and provider so you can see exactly where your budget goes.
Everything runs locally. No wrapper, no proxy, no API keys. CodeBurn reads session data directly from disk and prices every call using [LiteLLM](https://github.com/BerriAI/litellm).
@@ -104,6 +104,7 @@ Arrow keys switch between Today, 7 Days, 30 Days, Month, and 6 Months (use `--fr
| <img src="assets/providers/cursor-agent.jpg" width="28" /> | cursor-agent | Yes | [cursor-agent.md](docs/providers/cursor-agent.md) |
| <img src="assets/providers/gemini.png" width="28" /> | Gemini CLI | Yes | [gemini.md](docs/providers/gemini.md) |
| <img src="assets/providers/copilot.jpg" width="28" /> | GitHub Copilot | Yes | [copilot.md](docs/providers/copilot.md) |
| <img src="assets/providers/ibm-bob.svg" width="28" /> | IBM Bob | Yes | [ibm-bob.md](docs/providers/ibm-bob.md) |
| <img src="assets/providers/kiro.png" width="28" /> | Kiro | Yes | [kiro.md](docs/providers/kiro.md) |
| <img src="assets/providers/opencode.png" width="28" /> | OpenCode | Yes | [opencode.md](docs/providers/opencode.md) |
| <img src="assets/providers/openclaw.jpg" width="28" /> | OpenClaw | Yes | [openclaw.md](docs/providers/openclaw.md) |
@@ -119,7 +120,7 @@ Arrow keys switch between Today, 7 Days, 30 Days, Month, and 6 Months (use `--fr
Each provider doc lists the exact data location, storage format, and known quirks. Linux and Windows paths are detected automatically. If a path has changed or is wrong, please [open an issue](https://github.com/getagentseal/codeburn/issues).
Provider logos are trademarks of their respective owners. The icon set was sourced from [tokscale](https://github.com/junhoyeo/tokscale) (MIT) plus official vendor assets, used under nominative fair use for the purpose of identifying supported tools.
Provider logos are trademarks of their respective owners. The icon set was sourced from [tokscale](https://github.com/junhoyeo/tokscale) (MIT), official vendor assets, and simple provider identifiers, used under nominative fair use for the purpose of identifying supported tools.
CodeBurn auto-detects which AI coding tools you use. If multiple providers have session data on disk, press `p` in the dashboard to toggle between them.
@@ -378,6 +379,8 @@ These are starting points, not verdicts. A 60% cache hit on a single experimenta
**OpenClaw** stores agent sessions as JSONL at `~/.openclaw/agents/*.jsonl`. Also checks legacy paths `.clawdbot`, `.moltbot`, `.moldbot`. Token usage comes from assistant message `usage` blocks; model from `modelId` or `message.model` fields.
**IBM Bob** stores IDE task history in `User/globalStorage/ibm.bob-code/tasks/<task-id>/` under the IBM Bob application data directory. CodeBurn reads `ui_messages.json` for API request token/cost records and `api_conversation_history.json` for the selected model, with support for both GA (`IBM Bob`) and preview (`Bob-IDE`) app data folders.
**Roo Code / KiloCode** are Cline-family VS Code extensions. CodeBurn reads `ui_messages.json` from each task directory in VS Code's `globalStorage`, filtering `type: "say"` entries with `say: "api_req_started"` to extract token counts.
CodeBurn deduplicates messages (by API message ID for Claude, by cumulative token cross-check for Codex, by conversation/timestamp for Cursor, by session ID for Gemini, by session+message ID for OpenCode, by responseId for Pi/OMP), filters by date range per entry, and classifies each turn.


@@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64" role="img" aria-label="IBM Bob">
<rect width="64" height="64" rx="12" fill="#0F62FE"/>
<path d="M14 19h36v5H14zm0 10h36v5H14zm0 10h36v5H14z" fill="#fff" opacity=".9"/>
<circle cx="24" cy="32" r="4" fill="#0F62FE"/>
<circle cx="40" cy="32" r="4" fill="#0F62FE"/>
</svg>



@@ -128,14 +128,14 @@ type Provider = {
}
```
`src/providers/index.ts` registers eighteen providers across two tiers:
`src/providers/index.ts` registers nineteen providers across two tiers:
- **Eager**: `claude`, `codex`, `copilot`, `droid`, `gemini`, `kilo-code`, `kiro`, `openclaw`, `pi`, `omp`, `qwen`, `roo-code`. Imported at module load.
- **Eager**: `claude`, `codex`, `copilot`, `droid`, `gemini`, `ibm-bob`, `kilo-code`, `kiro`, `openclaw`, `pi`, `omp`, `qwen`, `roo-code`. Imported at module load.
- **Lazy**: `antigravity`, `goose`, `cursor`, `opencode`, `cursor-agent`, `crush`. Imported via dynamic `import()` so the heavy dependencies (SQLite, protobuf) do not touch users who do not have those tools installed.
Both lists hit the same `getAllProviders()` aggregator. A failed lazy import is silent and excludes that provider from the run.
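The silent-failure behavior for lazy providers can be sketched as follows (an illustrative sketch under assumed types, not the actual `getAllProviders()` implementation):

```typescript
// Illustrative sketch: a failed dynamic import() is swallowed, which
// excludes that provider from the run instead of crashing the CLI.
type Provider = { name: string }

async function loadLazy(loaders: Array<() => Promise<Provider>>): Promise<Provider[]> {
  const settled = await Promise.allSettled(loaders.map((load) => load()))
  return settled
    .filter((r): r is PromiseFulfilledResult<Provider> => r.status === 'fulfilled')
    .map((r) => r.value)
}
```

A loader that throws (say, because `better-sqlite3` is not installed) simply yields no provider; the eager list is unaffected.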
`src/providers/vscode-cline-parser.ts` is a shared helper consumed by `kilo-code` and `roo-code`. It is not registered as a provider on its own.
`src/providers/vscode-cline-parser.ts` is a shared helper consumed by `ibm-bob`, `kilo-code`, and `roo-code`. It is not registered as a provider on its own.
For the per-provider data location, storage format, parser quirks, and test coverage, see `docs/providers/`.


@@ -15,6 +15,7 @@ For the architectural picture, see `../architecture.md`.
| [Copilot](copilot.md) | JSONL | `src/providers/copilot.ts` | `tests/providers/copilot.test.ts` |
| [Droid](droid.md) | JSONL | `src/providers/droid.ts` | `tests/providers/droid.test.ts` |
| [Gemini](gemini.md) | JSON / JSONL | `src/providers/gemini.ts` | none |
| [IBM Bob](ibm-bob.md) | JSON | `src/providers/ibm-bob.ts` | `tests/providers/ibm-bob.test.ts` |
| [KiloCode](kilo-code.md) | JSON | `src/providers/kilo-code.ts` | `tests/providers/kilo-code.test.ts` |
| [Kiro](kiro.md) | JSON | `src/providers/kiro.ts` | `tests/providers/kiro.test.ts` |
| [OpenClaw](openclaw.md) | JSONL | `src/providers/openclaw.ts` | `tests/providers/openclaw.test.ts` |
@@ -38,7 +39,7 @@ For the architectural picture, see `../architecture.md`.
| Helper | Used by | Source |
|---|---|---|
| [vscode-cline-parser](vscode-cline-parser.md) | `kilo-code`, `roo-code` | `src/providers/vscode-cline-parser.ts` |
| [vscode-cline-parser](vscode-cline-parser.md) | `ibm-bob`, `kilo-code`, `roo-code` | `src/providers/vscode-cline-parser.ts` |
## File Format


@@ -25,6 +25,17 @@ JSONL, one event per line, per session file. Sessions live under `<project>/<ses
`createSessionParser` returns an empty async generator (`claude.ts:101-105`). Claude is a special case: `src/parser.ts` reads Claude JSONL files directly with full turn grouping, dedup of streaming message IDs, and MCP tool inventory extraction. The provider object exists only so `discoverSessions` can return Claude session sources alongside the others.
## Pricing
Claude Code reports total cache-write tokens in `usage.cache_creation_input_tokens`.
When available, it also splits those writes by duration in
`usage.cache_creation.ephemeral_5m_input_tokens` and
`usage.cache_creation.ephemeral_1h_input_tokens`. CodeBurn keeps the existing
aggregate cache-write token total for reports, but prices the 1-hour portion at
2x base input cost (1.6x the 5-minute cache-write rate exposed by LiteLLM).
If the split fields are missing, the parser falls back to the legacy behavior
and prices every cache write at the 5-minute rate.
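As a worked sketch of that pricing rule (the base rate and function shape here are illustrative assumptions, not LiteLLM's actual values or CodeBurn's real pricing code):

```typescript
// Price cache writes with the 1h/5m split, falling back to the legacy
// 5-minute rate when the split fields are absent.
type Usage = {
  cache_creation_input_tokens: number // aggregate cache-write tokens
  cache_creation?: {
    ephemeral_5m_input_tokens: number
    ephemeral_1h_input_tokens: number
  }
}

function cacheWriteCostUsd(usage: Usage, baseInputPerTok: number): number {
  const rate5m = baseInputPerTok * 1.25 // 5-minute cache-write rate
  const rate1h = baseInputPerTok * 2    // 1-hour writes: 2x base input
  const split = usage.cache_creation
  if (!split) {
    // Legacy behavior: price every write at the 5-minute rate.
    return usage.cache_creation_input_tokens * rate5m
  }
  return split.ephemeral_5m_input_tokens * rate5m +
         split.ephemeral_1h_input_tokens * rate1h
}
```

Note 2x base input is 1.6x the 5-minute rate (2 / 1.25), matching the changelog entry for this fix.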
## Caching
None at the provider level. The daily aggregation cache (`src/daily-cache.ts`) reuses prior computed days.

docs/providers/ibm-bob.md (new file, 55 lines)

@@ -0,0 +1,55 @@
# IBM Bob
IBM Bob IDE task history.
- **Source:** `src/providers/ibm-bob.ts`
- **Loading:** eager (`src/providers/index.ts`)
- **Test:** `tests/providers/ibm-bob.test.ts`
## Where It Reads From
IBM Bob stores IDE task history below `User/globalStorage/ibm.bob-code/tasks/` in the application data directory.
Default paths checked:
| Platform | Paths |
|---|---|
| macOS | `~/Library/Application Support/IBM Bob/User/globalStorage/ibm.bob-code/`, `~/Library/Application Support/Bob-IDE/User/globalStorage/ibm.bob-code/` |
| Windows | `%APPDATA%/IBM Bob/User/globalStorage/ibm.bob-code/`, `%APPDATA%/Bob-IDE/User/globalStorage/ibm.bob-code/` |
| Linux | `$XDG_CONFIG_HOME/IBM Bob/User/globalStorage/ibm.bob-code/`, `$XDG_CONFIG_HOME/Bob-IDE/User/globalStorage/ibm.bob-code/` with `~/.config` fallback |
The `Bob-IDE` paths cover the preview-era app name that some installs used before the GA `IBM Bob` directory.
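The Linux lookup described above can be sketched like this (illustrative only; `ibmBobBaseDirs` is a hypothetical helper, not the provider's actual code):

```typescript
import * as os from 'node:os'
import * as path from 'node:path'

// Candidate IBM Bob storage roots on Linux: $XDG_CONFIG_HOME when set,
// otherwise ~/.config, for both the GA and preview app-folder names.
function ibmBobBaseDirs(): string[] {
  const configHome = process.env['XDG_CONFIG_HOME'] ?? path.join(os.homedir(), '.config')
  return ['IBM Bob', 'Bob-IDE'].map((app) =>
    path.join(configHome, app, 'User', 'globalStorage', 'ibm.bob-code'),
  )
}
```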
## Storage Format
Each task is a directory under `tasks/<task-id>/` and must contain `ui_messages.json`.
CodeBurn parses the same Cline-family UI event format used by Roo Code and KiloCode:
- `ui_messages.json` entries with `type: "say"` and `say: "api_req_started"` contain serialized token/cost metrics.
- `ui_messages.json` user text entries seed the turn's first user message.
- `api_conversation_history.json` is optional and is used to extract the selected model from `<model>...</model>` environment details when present.
- `task_metadata.json` may exist upstream, but CodeBurn does not need it for usage math today.
If no model tag is present, the parser uses `ibm-bob-auto`, which is priced through the same conservative Sonnet fallback used for Cline-family auto modes.
## Caching
None at the provider level.
## Deduplication
Per `<providerName>:<taskId>:<apiRequestIndex>` via `vscode-cline-parser.ts`.
## Quirks
- IBM Bob has shipped under both `IBM Bob` and `Bob-IDE` application data folder names.
- This provider intentionally covers the IDE task-history format. Bob Shell's `~/.bob` checkpoint data is a separate storage surface and is not parsed until we have a stable usage schema fixture.
- The shared Cline parser does not currently extract individual tool names from UI messages, so tool breakdowns are empty for IBM Bob just like Roo Code and KiloCode.
## When Fixing A Bug Here
1. Check whether the install uses `IBM Bob` or `Bob-IDE` as the application data directory.
2. Confirm the task folder still contains `ui_messages.json` and `api_conversation_history.json`.
3. If the UI message schema changed, add a focused fixture to `tests/providers/ibm-bob.test.ts`.
4. If the change also affects Roo Code or KiloCode, update `src/providers/vscode-cline-parser.ts` and run all three provider test files.


@@ -4,7 +4,7 @@ OpenCode (sst/opencode).
- **Source:** `src/providers/opencode.ts`
- **Loading:** lazy (`src/providers/index.ts:59-75`)
- **Test:** `tests/providers/opencode.test.ts` (558 lines, the largest provider test)
- **Test:** `tests/providers/opencode.test.ts` (676 lines, the largest provider test)
## Where it reads from
@@ -20,14 +20,18 @@ None.
## Deduplication
Per `<sessionId>:<messageId>` (`opencode.ts:242`).
Per `<sessionId>:<messageId>`.
## Quirks
- **Schema validation is loud.** When a required table is missing, the parser logs an actionable warning telling the user which table is gone and what version of OpenCode it expects (`opencode.ts:104-131`). This is the right behavior; do not silently swallow these.
- Source paths are encoded as `<dbPath>:<sessionId>` (`opencode.ts:147-150`).
- Each message's `parts` are indexed (`opencode.ts:177-191`); preserving the order matters for reasoning-token correctness.
- **Schema validation is loud.** When a required table is missing, the parser logs an actionable warning telling the user which table is gone and what version of OpenCode it expects. This is the right behavior; do not silently swallow these.
- Source paths are encoded as `<dbPath>:<sessionId>`.
- Each message's `parts` are indexed; preserving the order matters for reasoning-token correctness.
- Tokens are reported across `input`, `output`, `reasoning`, `cache.read`, and `cache.write`. Anthropic semantics.
- External MCP tools are stored as `<server>_<tool>` names (for example
`clickup_clickup_get_task`). The provider normalizes those to CodeBurn's
canonical `mcp__<server>__<tool>` names before aggregation so shared MCP
panels and `optimize` findings count OpenCode usage.
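A sketch of that normalization (an assumption-laden illustration: it presumes the configured MCP server names are known, since a bare first-underscore split would be ambiguous for tool names that themselves contain underscores):

```typescript
// Normalize OpenCode's "<server>_<tool>" tool names to the canonical
// "mcp__<server>__<tool>" form, given the set of configured MCP servers.
function normalizeMcpToolName(raw: string, servers: string[]): string {
  for (const server of servers) {
    if (raw.startsWith(server + '_')) {
      const tool = raw.slice(server.length + 1)
      return `mcp__${server}__${tool}`
    }
  }
  return raw // not an MCP call; leave untouched
}
```

For example, `clickup_clickup_get_task` with a configured `clickup` server normalizes to `mcp__clickup__clickup_get_task`.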
## When fixing a bug here


@@ -1,17 +1,18 @@
# vscode-cline-parser (Shared Helper)
Shared discovery and parsing for VS Code extensions descended from Cline.
Shared discovery and parsing for Cline-family task folders.
- **Source:** `src/providers/vscode-cline-parser.ts`
- **Loading:** not a provider; imported by `kilo-code.ts` and `roo-code.ts`.
- **Test:** none directly. Coverage comes from `tests/providers/kilo-code.test.ts` and `tests/providers/roo-code.test.ts`.
- **Loading:** not a provider; imported by `ibm-bob.ts`, `kilo-code.ts`, and `roo-code.ts`.
- **Test:** none directly. Coverage comes from `tests/providers/ibm-bob.test.ts`, `tests/providers/kilo-code.test.ts`, and `tests/providers/roo-code.test.ts`.
## What it does
Two responsibilities:
1. `discoverClineTasks(extensionId)` walks VS Code's `globalStorage/<extensionId>/tasks/` directories and returns one source per task that has a `ui_messages.json` file (`vscode-cline-parser.ts:25-50`).
2. `createClineParser` reads each task's `ui_messages.json` and `api_conversation_history.json`, extracts model, tools, and token counts, and yields `ParsedProviderCall` objects.
1. `discoverClineTasks(extensionId)` walks VS Code's `globalStorage/<extensionId>/tasks/` directories and returns one source per task that has a `ui_messages.json` file.
2. `discoverClineTasksInBaseDirs(baseDirs)` does the same for non-VS Code apps with compatible task storage, such as IBM Bob.
3. `createClineParser` reads each task's `ui_messages.json` and `api_conversation_history.json`, extracts model and token counts, and yields `ParsedProviderCall` objects.
## Storage layout
@@ -25,25 +26,25 @@ Per task directory:
## Model resolution
The model is extracted from `api_conversation_history.json` by searching user message content blocks for a `<model>...</model>` tag (`vscode-cline-parser.ts:54-72`). Falls back to `cline-auto` if no tag is found.
The model is extracted from `api_conversation_history.json` by searching user message content blocks for a `<model>...</model>` tag. Falls back to the provider-supplied auto model (`cline-auto` by default) if no tag is found.
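The tag lookup can be sketched as follows (illustrative regex and message shapes; the real parser's types may differ):

```typescript
// Search user-message content blocks for a <model>...</model> tag;
// fall back to the provider's auto model when none is found.
type ContentBlock = { type: string; text?: string }
type HistoryMessage = { role: string; content: ContentBlock[] }

function resolveModel(history: HistoryMessage[], autoModel = 'cline-auto'): string {
  for (const msg of history) {
    if (msg.role !== 'user') continue
    for (const block of msg.content) {
      const m = block.text?.match(/<model>([^<]+)<\/model>/)
      if (m) return m[1].trim()
    }
  }
  return autoModel
}
```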
## Token extraction
From `api_req_started` entries inside `ui_messages.json`. Each such entry's `text` field is JSON-parsed; the parsed object holds `tokensIn`, `tokensOut`, `cacheReads`, `cacheWrites`, and (optionally) `cost` (`vscode-cline-parser.ts:119-134`).
From `api_req_started` entries inside `ui_messages.json`. Each such entry's `text` field is JSON-parsed; the parsed object holds `tokensIn`, `tokensOut`, `cacheReads`, `cacheWrites`, and (optionally) `cost`.
If `cost` is present, it is used directly. If not, `calculateCost` from `src/models.ts` computes it from tokens (`vscode-cline-parser.ts:139`).
If `cost` is present, it is used directly. If not, `calculateCost` from `src/models.ts` computes it from tokens.
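Putting the two paragraphs above together, extraction looks roughly like this (a sketch; the `calcCost` parameter is a hypothetical stand-in for `calculateCost` from `src/models.ts`):

```typescript
// Parse an api_req_started entry's JSON-encoded `text` field and derive
// cost: use the recorded cost when present, else compute it from tokens.
type ApiReqMetrics = {
  tokensIn: number
  tokensOut: number
  cacheReads?: number
  cacheWrites?: number
  cost?: number
}

function extractCall(
  entryText: string,
  calcCost: (m: ApiReqMetrics) => number,
): { metrics: ApiReqMetrics; costUsd: number } {
  const metrics = JSON.parse(entryText) as ApiReqMetrics
  const costUsd = metrics.cost ?? calcCost(metrics)
  return { metrics, costUsd }
}
```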
## Deduplication
Per `<providerName>:<taskId>:<index>` where `index` is the position of the `api_req_started` entry within `ui_messages.json` (`vscode-cline-parser.ts:109`).
Per `<providerName>:<taskId>:<index>` where `index` is the position of the `api_req_started` entry within `ui_messages.json`.
## Quirks
- Only the **first** user message is emitted as `userMessage` in the `ParsedProviderCall` (`vscode-cline-parser.ts:157`). Subsequent user turns are accounted but not surfaced.
- Only the **first** user message is emitted as `userMessage` in the `ParsedProviderCall`. Subsequent user turns are accounted but not surfaced.
- The model regex looks inside content blocks, not at top-level fields. Some Cline-derivative extensions emit the model elsewhere; if you add support for one, branch on extension ID rather than rewriting the regex.
## When fixing a bug here
1. A change here ripples to **both** KiloCode and Roo Code. Run both test files (`tests/providers/kilo-code.test.ts` and `tests/providers/roo-code.test.ts`) before opening a PR.
1. A change here ripples to IBM Bob, KiloCode, and Roo Code. Run all three provider test files before opening a PR.
2. If you find that one of the consuming providers emits a different shape, branch on the extension ID or base-directory parameter the discovery functions already take; do not duplicate the parser.
3. If you add support for a third Cline-derivative extension, register it as a thin wrapper file in the same shape as `kilo-code.ts` and `roo-code.ts`.
3. If you add support for another Cline-family task store, register it as a thin wrapper file in the same shape as `ibm-bob.ts`, `kilo-code.ts`, and `roo-code.ts`.


@@ -802,6 +802,7 @@ enum ProviderFilter: String, CaseIterable, Identifiable {
case copilot = "Copilot"
case droid = "Droid"
case gemini = "Gemini"
case ibmBob = "IBM Bob"
case kiro = "Kiro"
case kiloCode = "KiloCode"
case openclaw = "OpenClaw"
@@ -819,6 +820,7 @@ enum ProviderFilter: String, CaseIterable, Identifiable {
case .cursor: ["cursor", "cursor agent"]
case .rooCode: ["roo-code", "roo code"]
case .kiloCode: ["kilo-code", "kilocode"]
case .ibmBob: ["ibm-bob", "ibm bob"]
case .openclaw: ["openclaw"]
default: [rawValue.lowercased()]
}
@@ -833,6 +835,7 @@ enum ProviderFilter: String, CaseIterable, Identifiable {
case .copilot: "copilot"
case .droid: "droid"
case .gemini: "gemini"
case .ibmBob: "ibm-bob"
case .kiloCode: "kilo-code"
case .kiro: "kiro"
case .openclaw: "openclaw"


@@ -345,6 +345,7 @@ extension ProviderFilter {
case .copilot: return Color(red: 0x6D/255.0, green: 0x8F/255.0, blue: 0xA6/255.0)
case .droid: return Color(red: 0x7C/255.0, green: 0x3A/255.0, blue: 0xED/255.0)
case .gemini: return Color(red: 0x44/255.0, green: 0x85/255.0, blue: 0xF4/255.0)
case .ibmBob: return Color(red: 0x0F/255.0, green: 0x62/255.0, blue: 0xFE/255.0)
case .kiloCode: return Color(red: 0x00/255.0, green: 0x96/255.0, blue: 0x88/255.0)
case .kiro: return Color(red: 0x4A/255.0, green: 0x9E/255.0, blue: 0xC4/255.0)
case .openclaw: return Color(red: 0xDA/255.0, green: 0x70/255.0, blue: 0x56/255.0)


@@ -12,7 +12,7 @@
],
"scripts": {
"bundle-litellm": "node scripts/bundle-litellm.mjs",
"build": "node scripts/bundle-litellm.mjs && tsup && node -e \"require('fs').chmodSync('dist/cli.js',0o755)\"",
"build": "node scripts/bundle-litellm.mjs && tsup && node -e \"const fs=require('fs'); fs.copyFileSync('src/cli.ts','dist/cli.js'); fs.chmodSync('dist/cli.js',0o755)\"",
"dev": "tsx src/cli.ts",
"test": "vitest",
"prepublishOnly": "npm run build"
@@ -21,6 +21,7 @@
"claude-code",
"cursor",
"codex",
"ibm-bob",
"opencode",
"pi",
"ai-coding",
@@ -30,7 +31,7 @@
"developer-tools"
],
"engines": {
"node": ">=22"
"node": ">=22.13.0"
},
"author": "AgentSeal <hello@agentseal.org>",
"license": "MIT",


@@ -1,978 +1,15 @@
import { Command } from 'commander'
import { installMenubarApp } from './menubar-installer.js'
import { exportCsv, exportJson, type PeriodExport } from './export.js'
import { loadPricing, setModelAliases } from './models.js'
import { parseAllSessions, filterProjectsByName } from './parser.js'
import { convertCost } from './currency.js'
import { renderStatusBar } from './format.js'
import { type PeriodData, type ProviderCost } from './menubar-json.js'
import { buildMenubarPayload } from './menubar-json.js'
import { getDaysInRange, ensureCacheHydrated, emptyCache, BACKFILL_DAYS, toDateString } from './daily-cache.js'
import { aggregateProjectsIntoDays, buildPeriodDataFromDays, dateKey } from './day-aggregator.js'
import { CATEGORY_LABELS, type DateRange, type ProjectSummary, type TaskCategory } from './types.js'
import { aggregateModelEfficiency } from './model-efficiency.js'
import { renderDashboard } from './dashboard.js'
import { formatDateRangeLabel, parseDateRangeFlags, getDateRange, toPeriod, type Period } from './cli-date.js'
import { runOptimize, scanAndDetect } from './optimize.js'
import { renderCompare } from './compare.js'
import { getAllProviders } from './providers/index.js'
import { clearPlan, readConfig, readPlan, saveConfig, savePlan, getConfigFilePath, type PlanId } from './config.js'
import { clampResetDay, getPlanUsageOrNull, type PlanUsage } from './plan-usage.js'
import { getPresetPlan, isPlanId, isPlanProvider, planDisplayName } from './plans.js'
import { createRequire } from 'node:module'
const require = createRequire(import.meta.url)
const { version } = require('../package.json')
import { loadCurrency, getCurrency, isValidCurrencyCode } from './currency.js'
async function hydrateCache() {
try {
return await ensureCacheHydrated(
(range) => parseAllSessions(range, 'all'),
aggregateProjectsIntoDays,
)
} catch {
return emptyCache()
}
#!/usr/bin/env node
// This launcher must stay parseable by Node 18. Do NOT add static imports.
const [major, minor] = process.versions.node.split('.').map(Number)
if (major < 22 || (major === 22 && minor < 13)) {
process.stderr.write(
`codeburn requires Node.js >= 22.13.0 (current: ${process.version})\n` +
'Upgrade at https://nodejs.org/\n',
)
process.exit(1)
}
function collect(val: string, acc: string[]): string[] {
acc.push(val)
return acc
}
function parseNumber(value: string): number {
return Number(value)
}
function parseInteger(value: string): number {
return parseInt(value, 10)
}
type JsonPlanSummary = {
id: PlanId
budget: number
spent: number
percentUsed: number
status: 'under' | 'near' | 'over'
projectedMonthEnd: number
daysUntilReset: number
periodStart: string
periodEnd: string
}
function toJsonPlanSummary(planUsage: PlanUsage): JsonPlanSummary {
return {
id: planUsage.plan.id,
budget: convertCost(planUsage.budgetUsd),
spent: convertCost(planUsage.spentApiEquivalentUsd),
percentUsed: Math.round(planUsage.percentUsed * 10) / 10,
status: planUsage.status,
projectedMonthEnd: convertCost(planUsage.projectedMonthUsd),
daysUntilReset: planUsage.daysUntilReset,
periodStart: planUsage.periodStart.toISOString(),
periodEnd: planUsage.periodEnd.toISOString(),
}
}
function assertFormat(value: string, allowed: readonly string[], command: string): void {
if (!allowed.includes(value)) {
process.stderr.write(
`codeburn ${command}: unknown format "${value}". Valid values: ${allowed.join(', ')}.\n`
)
process.exit(1)
}
}
async function runJsonReport(period: Period, provider: string, project: string[], exclude: string[]): Promise<void> {
await loadPricing()
const { range, label } = getDateRange(period)
const projects = filterProjectsByName(await parseAllSessions(range, provider), project, exclude)
const report: ReturnType<typeof buildJsonReport> & { plan?: JsonPlanSummary } = buildJsonReport(projects, label, period)
const planUsage = await getPlanUsageOrNull()
if (planUsage) {
report.plan = toJsonPlanSummary(planUsage)
}
console.log(JSON.stringify(report, null, 2))
}
const program = new Command()
.name('codeburn')
.description('See where your AI coding tokens go - by task, tool, model, and project')
.version(version)
.option('--verbose', 'print warnings to stderr on read failures and skipped files')
.option('--timezone <zone>', 'IANA timezone for date grouping (e.g. Asia/Tokyo, America/New_York)')
program.hook('preAction', async (thisCommand) => {
const tz = thisCommand.opts<{ timezone?: string }>().timezone ?? process.env['CODEBURN_TZ']
if (tz) {
try {
Intl.DateTimeFormat(undefined, { timeZone: tz })
} catch {
console.error(`\n Invalid timezone: "${tz}". Use an IANA timezone like "America/New_York" or "Asia/Tokyo".\n`)
process.exit(1)
}
process.env.TZ = tz
}
const config = await readConfig()
setModelAliases(config.modelAliases ?? {})
if (thisCommand.opts<{ verbose?: boolean }>().verbose) {
process.env['CODEBURN_VERBOSE'] = '1'
}
await loadCurrency()
import('./main.js').catch((err) => {
process.stderr.write(String(err?.message ?? err) + '\n')
process.exit(1)
})
function buildJsonReport(projects: ProjectSummary[], period: string, periodKey: string) {
const sessions = projects.flatMap(p => p.sessions)
const { code } = getCurrency()
const totalCostUSD = projects.reduce((s, p) => s + p.totalCostUSD, 0)
const totalCalls = projects.reduce((s, p) => s + p.totalApiCalls, 0)
const totalSessions = projects.reduce((s, p) => s + p.sessions.length, 0)
const totalInput = sessions.reduce((s, sess) => s + sess.totalInputTokens, 0)
const totalOutput = sessions.reduce((s, sess) => s + sess.totalOutputTokens, 0)
const totalCacheRead = sessions.reduce((s, sess) => s + sess.totalCacheReadTokens, 0)
const totalCacheWrite = sessions.reduce((s, sess) => s + sess.totalCacheWriteTokens, 0)
// Match src/menubar-json.ts:cacheHitPercent: reads over reads+fresh-input. cache_write
// counts tokens being stored, not served, so it doesn't belong in the denominator.
const cacheHitDenom = totalInput + totalCacheRead
const cacheHitPercent = cacheHitDenom > 0 ? Math.round((totalCacheRead / cacheHitDenom) * 1000) / 10 : 0
// Per-day rollup. Mirrors parser.ts categoryBreakdown semantics so a
// consumer summing daily[].editTurns over a period gets the same total as
// sum(activities[].editTurns) for that period: every turn counts once for
// `turns`, edit turns count for `editTurns`, edit turns with zero retries
// count for `oneShotTurns`. Issue #279 — daily-resolution efficiency
// dashboards need this without re-deriving from activity-level rollups.
const dailyMap: Record<string, { cost: number; calls: number; turns: number; editTurns: number; oneShotTurns: number }> = {}
for (const sess of sessions) {
for (const turn of sess.turns) {
// Prefer the user-message timestamp on the turn; fall back to the first
// assistant-call timestamp when the user line is missing (continuation
// sessions where the JSONL begins mid-conversation). Previously these
// turns dropped from daily but stayed in activities, breaking the
// sum(daily[].editTurns) === sum(activities[].editTurns) invariant.
const ts = turn.timestamp || turn.assistantCalls[0]?.timestamp
if (!ts) { continue }
const day = dateKey(ts)
if (!dailyMap[day]) { dailyMap[day] = { cost: 0, calls: 0, turns: 0, editTurns: 0, oneShotTurns: 0 } }
dailyMap[day].turns += 1
if (turn.hasEdits) {
dailyMap[day].editTurns += 1
if (turn.retries === 0) dailyMap[day].oneShotTurns += 1
}
for (const call of turn.assistantCalls) {
dailyMap[day].cost += call.costUSD
dailyMap[day].calls += 1
}
}
}
const daily = Object.entries(dailyMap).sort().map(([date, d]) => ({
date,
cost: convertCost(d.cost),
calls: d.calls,
turns: d.turns,
editTurns: d.editTurns,
oneShotTurns: d.oneShotTurns,
// Pre-computed convenience for dashboards that don't want to do the math.
// null when there are no edit turns (the rate is undefined, not zero —
// a day where the user only had Q&A turns shouldn't read as 0% one-shot).
oneShotRate: d.editTurns > 0
? Math.round((d.oneShotTurns / d.editTurns) * 1000) / 10
: null,
}))
const projectList = projects.map(p => ({
name: p.project,
path: p.projectPath,
cost: convertCost(p.totalCostUSD),
avgCostPerSession: p.sessions.length > 0
? convertCost(p.totalCostUSD / p.sessions.length)
: null,
calls: p.totalApiCalls,
sessions: p.sessions.length,
}))
const modelMap: Record<string, { calls: number; cost: number; inputTokens: number; outputTokens: number; cacheReadTokens: number; cacheWriteTokens: number }> = {}
const modelEfficiency = aggregateModelEfficiency(projects)
for (const sess of sessions) {
for (const [model, d] of Object.entries(sess.modelBreakdown)) {
if (!modelMap[model]) { modelMap[model] = { calls: 0, cost: 0, inputTokens: 0, outputTokens: 0, cacheReadTokens: 0, cacheWriteTokens: 0 } }
modelMap[model].calls += d.calls
modelMap[model].cost += d.costUSD
modelMap[model].inputTokens += d.tokens.inputTokens
modelMap[model].outputTokens += d.tokens.outputTokens
modelMap[model].cacheReadTokens += d.tokens.cacheReadInputTokens
modelMap[model].cacheWriteTokens += d.tokens.cacheCreationInputTokens
}
}
const models = Object.entries(modelMap)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([name, { cost, ...rest }]) => {
const efficiency = modelEfficiency.get(name)
return {
name,
...rest,
cost: convertCost(cost),
editTurns: efficiency?.editTurns ?? 0,
oneShotTurns: efficiency?.oneShotTurns ?? 0,
oneShotRate: efficiency?.oneShotRate ?? null,
retriesPerEdit: efficiency?.retriesPerEdit ?? null,
costPerEdit: efficiency?.costPerEditUSD !== null && efficiency?.costPerEditUSD !== undefined
? convertCost(efficiency.costPerEditUSD)
: null,
}
})
const catMap: Record<string, { turns: number; cost: number; editTurns: number; oneShotTurns: number }> = {}
for (const sess of sessions) {
for (const [cat, d] of Object.entries(sess.categoryBreakdown)) {
if (!catMap[cat]) { catMap[cat] = { turns: 0, cost: 0, editTurns: 0, oneShotTurns: 0 } }
catMap[cat].turns += d.turns
catMap[cat].cost += d.costUSD
catMap[cat].editTurns += d.editTurns
catMap[cat].oneShotTurns += d.oneShotTurns
}
}
const activities = Object.entries(catMap)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([cat, d]) => ({
category: CATEGORY_LABELS[cat as TaskCategory] ?? cat,
cost: convertCost(d.cost),
turns: d.turns,
editTurns: d.editTurns,
oneShotTurns: d.oneShotTurns,
oneShotRate: d.editTurns > 0 ? Math.round((d.oneShotTurns / d.editTurns) * 1000) / 10 : null,
}))
const toolMap: Record<string, number> = {}
const mcpMap: Record<string, number> = {}
const bashMap: Record<string, number> = {}
for (const sess of sessions) {
for (const [tool, d] of Object.entries(sess.toolBreakdown)) {
toolMap[tool] = (toolMap[tool] ?? 0) + d.calls
}
for (const [server, d] of Object.entries(sess.mcpBreakdown)) {
mcpMap[server] = (mcpMap[server] ?? 0) + d.calls
}
for (const [cmd, d] of Object.entries(sess.bashBreakdown)) {
bashMap[cmd] = (bashMap[cmd] ?? 0) + d.calls
}
}
const sortedMap = (m: Record<string, number>) =>
Object.entries(m).sort(([, a], [, b]) => b - a).map(([name, calls]) => ({ name, calls }))
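// Standalone sketch of the sortedMap helper above, duplicated under a
// different name purely for illustration (same shape, made-up counts):
// descending by call count, ties keep insertion order (Array.sort is stable).
const exampleSortedMap = (m: Record<string, number>) =>
  Object.entries(m).sort(([, a], [, b]) => b - a).map(([name, calls]) => ({ name, calls }))
// exampleSortedMap({ read: 2, edit: 7 }) -> [{ name: 'edit', calls: 7 }, { name: 'read', calls: 2 }]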
const topSessions = projects
.flatMap(p => p.sessions.map(s => ({ project: p.project, sessionId: s.sessionId, date: s.firstTimestamp ? dateKey(s.firstTimestamp) : null, cost: convertCost(s.totalCostUSD), calls: s.apiCalls })))
.sort((a, b) => b.cost - a.cost)
.slice(0, 5)
return {
generated: new Date().toISOString(),
currency: code,
period,
periodKey,
overview: {
cost: convertCost(totalCostUSD),
calls: totalCalls,
sessions: totalSessions,
cacheHitPercent,
tokens: {
input: totalInput,
output: totalOutput,
cacheRead: totalCacheRead,
cacheWrite: totalCacheWrite,
},
},
daily,
projects: projectList,
models,
activities,
tools: sortedMap(toolMap),
mcpServers: sortedMap(mcpMap),
shellCommands: sortedMap(bashMap),
topSessions,
}
}
program
.command('report', { isDefault: true })
.description('Interactive usage dashboard')
.option('-p, --period <period>', 'Starting period: today, week, 30days, month, all', 'week')
.option('--from <date>', 'Start date (YYYY-MM-DD). Overrides --period when set')
.option('--to <date>', 'End date (YYYY-MM-DD). Overrides --period when set')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInteger, 30)
.action(async (opts) => {
assertFormat(opts.format, ['tui', 'json'], 'report')
let customRange: DateRange | null = null
try {
customRange = parseDateRangeFlags(opts.from, opts.to)
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Error: ${message}\n`)
process.exit(1)
}
const period = toPeriod(opts.period)
if (opts.format === 'json') {
await loadPricing()
await hydrateCache()
if (customRange) {
const label = formatDateRangeLabel(opts.from, opts.to)
const projects = filterProjectsByName(
await parseAllSessions(customRange, opts.provider),
opts.project,
opts.exclude,
)
console.log(JSON.stringify(buildJsonReport(projects, label, 'custom'), null, 2))
} else {
await runJsonReport(period, opts.provider, opts.project, opts.exclude)
}
return
}
await hydrateCache()
const customRangeLabel = customRange ? formatDateRangeLabel(opts.from, opts.to) : undefined
await renderDashboard(period, opts.provider, opts.refresh, opts.project, opts.exclude, customRange, customRangeLabel)
})
function buildPeriodData(label: string, projects: ProjectSummary[]): PeriodData {
const sessions = projects.flatMap(p => p.sessions)
const catTotals: Record<string, { turns: number; cost: number; editTurns: number; oneShotTurns: number }> = {}
const modelTotals: Record<string, { calls: number; cost: number }> = {}
let inputTokens = 0, outputTokens = 0, cacheReadTokens = 0, cacheWriteTokens = 0
for (const sess of sessions) {
inputTokens += sess.totalInputTokens
outputTokens += sess.totalOutputTokens
cacheReadTokens += sess.totalCacheReadTokens
cacheWriteTokens += sess.totalCacheWriteTokens
for (const [cat, d] of Object.entries(sess.categoryBreakdown)) {
if (!catTotals[cat]) catTotals[cat] = { turns: 0, cost: 0, editTurns: 0, oneShotTurns: 0 }
catTotals[cat].turns += d.turns
catTotals[cat].cost += d.costUSD
catTotals[cat].editTurns += d.editTurns
catTotals[cat].oneShotTurns += d.oneShotTurns
}
for (const [model, d] of Object.entries(sess.modelBreakdown)) {
if (!modelTotals[model]) modelTotals[model] = { calls: 0, cost: 0 }
modelTotals[model].calls += d.calls
modelTotals[model].cost += d.costUSD
}
}
return {
label,
cost: projects.reduce((s, p) => s + p.totalCostUSD, 0),
calls: projects.reduce((s, p) => s + p.totalApiCalls, 0),
sessions: projects.reduce((s, p) => s + p.sessions.length, 0),
inputTokens, outputTokens, cacheReadTokens, cacheWriteTokens,
categories: Object.entries(catTotals)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([cat, d]) => ({ name: CATEGORY_LABELS[cat as TaskCategory] ?? cat, ...d })),
models: Object.entries(modelTotals)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([name, d]) => ({ name, ...d })),
}
}
program
.command('status')
.description('Compact status output (today + month)')
.option('--format <format>', 'Output format: terminal, menubar-json, json', 'terminal')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--period <period>', 'Primary period for menubar-json: today, week, 30days, month, all', 'today')
.option('--no-optimize', 'Skip optimize findings (menubar-json only, faster)')
.action(async (opts) => {
assertFormat(opts.format, ['terminal', 'menubar-json', 'json'], 'status')
await loadPricing()
const pf = opts.provider
const fp = (p: ProjectSummary[]) => filterProjectsByName(p, opts.project, opts.exclude)
if (opts.format === 'menubar-json') {
const periodInfo = getDateRange(opts.period)
const now = new Date()
const todayStart = new Date(now.getFullYear(), now.getMonth(), now.getDate())
const yesterdayStr = toDateString(new Date(now.getFullYear(), now.getMonth(), now.getDate() - 1))
const isAllProviders = pf === 'all'
const cache = await hydrateCache()
// CURRENT PERIOD DATA
// - 'all' provider: assemble from cache + today (fast)
// - specific provider: parse the period range with the provider filter (correct, but slower)
let currentData: PeriodData
let scanProjects: ProjectSummary[]
let scanRange: DateRange
if (isAllProviders) {
// Parse only today's sessions; historical data comes from cache to avoid double-counting
const todayRange: DateRange = { start: todayStart, end: new Date() }
const todayProjects = fp(await parseAllSessions(todayRange, 'all'))
const todayDays = aggregateProjectsIntoDays(todayProjects)
const rangeStartStr = toDateString(periodInfo.range.start)
const rangeEndStr = toDateString(periodInfo.range.end)
const historicalDays = getDaysInRange(cache, rangeStartStr, yesterdayStr)
const todayInRange = todayDays.filter(d => d.date >= rangeStartStr && d.date <= rangeEndStr)
const allDays = [...historicalDays, ...todayInRange].sort((a, b) => a.date.localeCompare(b.date))
currentData = buildPeriodDataFromDays(allDays, periodInfo.label)
scanProjects = todayProjects
scanRange = periodInfo.range
} else {
const projects = fp(await parseAllSessions(periodInfo.range, pf))
currentData = buildPeriodData(periodInfo.label, projects)
scanProjects = projects
scanRange = periodInfo.range
}
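// The cache-plus-today split above can be sketched standalone: cached days
// cover [rangeStart, yesterday] and only today is re-parsed, so no date is
// counted twice. Hypothetical minimal shape, not this file's real DailyEntry:
const exampleMergeDays = (
  cached: { date: string; cost: number }[],
  today: { date: string; cost: number }[],
) => [...cached, ...today].sort((a, b) => a.date.localeCompare(b.date))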
// PROVIDERS
// For 'all': enumerate every provider with cost across the period (from cache) + installed-but-zero.
// For a specific provider: just that single provider with its scoped cost.
const allProviders = await getAllProviders()
const displayNameByName = new Map(allProviders.map(p => [p.name, p.displayName]))
const providers: ProviderCost[] = []
if (isAllProviders) {
// Parse only today; historical provider costs come from cache
const todayRangeForProviders: DateRange = { start: todayStart, end: new Date() }
const todayDaysForProviders = aggregateProjectsIntoDays(fp(await parseAllSessions(todayRangeForProviders, 'all')))
const rangeStartStr = toDateString(periodInfo.range.start)
const todayStr = toDateString(todayStart)
const allDaysForProviders = [
...getDaysInRange(cache, rangeStartStr, yesterdayStr),
...todayDaysForProviders.filter(d => d.date === todayStr),
]
const providerTotals: Record<string, number> = {}
for (const d of allDaysForProviders) {
for (const [name, p] of Object.entries(d.providers)) {
providerTotals[name] = (providerTotals[name] ?? 0) + p.cost
}
}
for (const [name, cost] of Object.entries(providerTotals)) {
providers.push({ name: displayNameByName.get(name) ?? name, cost })
}
for (const p of allProviders) {
if (providers.some(pc => pc.name === p.displayName)) continue
const sources = await p.discoverSessions()
if (sources.length > 0) providers.push({ name: p.displayName, cost: 0 })
}
} else {
const display = displayNameByName.get(pf) ?? pf
providers.push({ name: display, cost: currentData.cost })
}
// DAILY HISTORY (last 365 days)
// Cache stores per-provider cost+calls per day in DailyEntry.providers, so we can derive
// a provider-filtered history without re-parsing. Tokens aren't broken down per provider
// in the cache, so the filtered view shows zero tokens (heatmap/trend still works on cost).
const historyStartStr = toDateString(new Date(now.getFullYear(), now.getMonth(), now.getDate() - BACKFILL_DAYS))
const allCacheDays = getDaysInRange(cache, historyStartStr, yesterdayStr)
// Parse only today for history; historical days come from cache
const todayRangeForHistory: DateRange = { start: todayStart, end: new Date() }
const allTodayDaysForHistory = aggregateProjectsIntoDays(fp(await parseAllSessions(todayRangeForHistory, 'all')))
const todayStrForHistory = toDateString(todayStart)
const fullHistory = [...allCacheDays, ...allTodayDaysForHistory.filter(d => d.date === todayStrForHistory)]
const dailyHistory = fullHistory.map(d => {
if (isAllProviders) {
const topModels = Object.entries(d.models)
.filter(([name]) => name !== '<synthetic>')
.sort(([, a], [, b]) => b.cost - a.cost)
.slice(0, 5)
.map(([name, m]) => ({
name,
cost: m.cost,
calls: m.calls,
inputTokens: m.inputTokens,
outputTokens: m.outputTokens,
}))
return {
date: d.date,
cost: d.cost,
calls: d.calls,
inputTokens: d.inputTokens,
outputTokens: d.outputTokens,
cacheReadTokens: d.cacheReadTokens,
cacheWriteTokens: d.cacheWriteTokens,
topModels,
}
}
const prov = d.providers[pf] ?? { calls: 0, cost: 0 }
return {
date: d.date,
cost: prov.cost,
calls: prov.calls,
inputTokens: 0,
outputTokens: 0,
cacheReadTokens: 0,
cacheWriteTokens: 0,
topModels: [],
}
})
const optimize = opts.optimize === false ? null : await scanAndDetect(scanProjects, scanRange)
console.log(JSON.stringify(buildMenubarPayload(currentData, providers, optimize, dailyHistory)))
return
}
if (opts.format === 'json') {
await hydrateCache()
const todayData = buildPeriodData('today', fp(await parseAllSessions(getDateRange('today').range, pf)))
const monthData = buildPeriodData('month', fp(await parseAllSessions(getDateRange('month').range, pf)))
const { code, rate } = getCurrency()
const payload: {
currency: string
today: { cost: number; calls: number }
month: { cost: number; calls: number }
plan?: JsonPlanSummary
} = {
currency: code,
today: { cost: Math.round(todayData.cost * rate * 100) / 100, calls: todayData.calls },
month: { cost: Math.round(monthData.cost * rate * 100) / 100, calls: monthData.calls },
}
const planUsage = await getPlanUsageOrNull()
if (planUsage) {
payload.plan = toJsonPlanSummary(planUsage)
}
console.log(JSON.stringify(payload))
return
}
await hydrateCache()
const monthProjects = fp(await parseAllSessions(getDateRange('month').range, pf))
console.log(renderStatusBar(monthProjects))
})
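// The display-currency rounding used in the JSON payload above, as a
// standalone sketch (hypothetical helper name): convert USD by the configured
// rate, then round to two decimals via the scale-round-unscale idiom.
function exampleRoundDisplay(costUsd: number, rate: number): number {
  return Math.round(costUsd * rate * 100) / 100
}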
program
.command('today')
.description('Today\'s usage dashboard')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInteger, 30)
.action(async (opts) => {
assertFormat(opts.format, ['tui', 'json'], 'today')
if (opts.format === 'json') {
await runJsonReport('today', opts.provider, opts.project, opts.exclude)
return
}
await hydrateCache()
await renderDashboard('today', opts.provider, opts.refresh, opts.project, opts.exclude)
})
program
.command('month')
.description('This month\'s usage dashboard')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInteger, 30)
.action(async (opts) => {
assertFormat(opts.format, ['tui', 'json'], 'month')
if (opts.format === 'json') {
await runJsonReport('month', opts.provider, opts.project, opts.exclude)
return
}
await hydrateCache()
await renderDashboard('month', opts.provider, opts.refresh, opts.project, opts.exclude)
})
program
.command('export')
.description('Export usage data to CSV or JSON')
.option('-f, --format <format>', 'Export format: csv, json', 'csv')
.option('-o, --output <path>', 'Output file path')
.option('--from <date>', 'Start date (YYYY-MM-DD). Exports a single custom period when set')
.option('--to <date>', 'End date (YYYY-MM-DD). Exports a single custom period when set')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.action(async (opts) => {
assertFormat(opts.format, ['csv', 'json'], 'export')
await loadPricing()
await hydrateCache()
const pf = opts.provider
const fp = (p: ProjectSummary[]) => filterProjectsByName(p, opts.project, opts.exclude)
let customRange: DateRange | null = null
try {
customRange = parseDateRangeFlags(opts.from, opts.to)
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Error: ${message}\n`)
process.exit(1)
}
const periods: PeriodExport[] = customRange
? [{ label: formatDateRangeLabel(opts.from, opts.to), projects: fp(await parseAllSessions(customRange, pf)) }]
: [
{ label: 'Today', projects: fp(await parseAllSessions(getDateRange('today').range, pf)) },
{ label: '7 Days', projects: fp(await parseAllSessions(getDateRange('week').range, pf)) },
{ label: '30 Days', projects: fp(await parseAllSessions(getDateRange('30days').range, pf)) },
]
if (periods.every(p => p.projects.length === 0)) {
console.log('\n No usage data found.\n')
return
}
const defaultName = `codeburn-${toDateString(new Date())}`
const outputPath = opts.output ?? `${defaultName}.${opts.format}`
let savedPath: string
try {
if (opts.format === 'json') {
savedPath = await exportJson(periods, outputPath)
} else {
savedPath = await exportCsv(periods, outputPath)
}
} catch (err) {
// Protection guards in export.ts (symlink refusal, non-codeburn folder refusal, etc.)
// throw with a user-readable message. Print just the message, not the stack, so the CLI
// doesn't spray its internals at the user.
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Export failed: ${message}\n`)
process.exit(1)
}
const exportedLabel = customRange ? formatDateRangeLabel(opts.from, opts.to) : 'Today + 7 Days + 30 Days'
console.log(`\n Exported (${exportedLabel}) to: ${savedPath}\n`)
})
program
.command('menubar')
.description('Install and launch the macOS menubar app (one command, no clone)')
.option('--force', 'Reinstall even if an older copy is already in ~/Applications')
.action(async (opts: { force?: boolean }) => {
try {
const result = await installMenubarApp({ force: opts.force })
console.log(`\n Ready. ${result.installedPath}\n`)
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Menubar install failed: ${message}\n`)
process.exit(1)
}
})
program
.command('currency [code]')
.description('Set display currency (e.g. codeburn currency GBP)')
.option('--symbol <symbol>', 'Override the currency symbol')
.option('--reset', 'Reset to USD (removes currency config)')
.action(async (code?: string, opts?: { symbol?: string; reset?: boolean }) => {
if (opts?.reset) {
const config = await readConfig()
delete config.currency
await saveConfig(config)
console.log('\n Currency reset to USD.\n')
return
}
if (!code) {
const { code: activeCode, rate, symbol } = getCurrency()
if (activeCode === 'USD' && rate === 1) {
console.log('\n Currency: USD (default)')
console.log(` Config: ${getConfigFilePath()}\n`)
} else {
console.log(`\n Currency: ${activeCode}`)
console.log(` Symbol: ${symbol}`)
console.log(` Rate: 1 USD = ${rate} ${activeCode}`)
console.log(` Config: ${getConfigFilePath()}\n`)
}
return
}
const upperCode = code.toUpperCase()
if (!isValidCurrencyCode(upperCode)) {
console.error(`\n "${code}" is not a valid ISO 4217 currency code.\n`)
process.exitCode = 1
return
}
const config = await readConfig()
config.currency = {
code: upperCode,
...(opts?.symbol ? { symbol: opts.symbol } : {}),
}
await saveConfig(config)
await loadCurrency()
const { rate, symbol } = getCurrency()
console.log(`\n Currency set to ${upperCode}.`)
console.log(` Symbol: ${symbol}`)
console.log(` Rate: 1 USD = ${rate} ${upperCode}`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
})
program
.command('model-alias [from] [to]')
.description('Map a provider model name to a canonical one for pricing (e.g. codeburn model-alias my-model claude-opus-4-6)')
.option('--remove <from>', 'Remove an alias')
.option('--list', 'List configured aliases')
.action(async (from?: string, to?: string, opts?: { remove?: string; list?: boolean }) => {
const config = await readConfig()
const aliases = config.modelAliases ?? {}
if (opts?.list || (!from && !opts?.remove)) {
const entries = Object.entries(aliases)
if (entries.length === 0) {
console.log('\n No model aliases configured.')
console.log(` Config: ${getConfigFilePath()}\n`)
} else {
console.log('\n Model aliases:')
for (const [src, dst] of entries) {
console.log(` ${src} -> ${dst}`)
}
console.log(` Config: ${getConfigFilePath()}\n`)
}
return
}
if (opts?.remove) {
if (!(opts.remove in aliases)) {
console.error(`\n Alias not found: ${opts.remove}\n`)
process.exitCode = 1
return
}
delete aliases[opts.remove]
config.modelAliases = Object.keys(aliases).length > 0 ? aliases : undefined
await saveConfig(config)
console.log(`\n Removed alias: ${opts.remove}\n`)
return
}
if (!from || !to) {
console.error('\n Usage: codeburn model-alias <from> <to>\n')
process.exitCode = 1
return
}
aliases[from] = to
config.modelAliases = aliases
await saveConfig(config)
console.log(`\n Alias saved: ${from} -> ${to}`)
console.log(` Config: ${getConfigFilePath()}\n`)
})
program
.command('plan [action] [id]')
.description('Show or configure a subscription plan for overage tracking')
.option('--format <format>', 'Output format: text or json', 'text')
.option('--monthly-usd <n>', 'Monthly plan price in USD (for custom)', parseNumber)
.option('--provider <name>', 'Provider scope: all, claude, codex, cursor', 'all')
.option('--reset-day <n>', 'Day of month plan resets (1-28)', parseInteger, 1)
.action(async (action?: string, id?: string, opts?: { format?: string; monthlyUsd?: number; provider?: string; resetDay?: number }) => {
assertFormat(opts?.format ?? 'text', ['text', 'json'], 'plan')
const mode = action ?? 'show'
if (mode === 'show') {
const plan = await readPlan()
const displayPlan = !plan || plan.id === 'none'
? { id: 'none', monthlyUsd: 0, provider: 'all', resetDay: 1, setAt: null }
: {
id: plan.id,
monthlyUsd: plan.monthlyUsd,
provider: plan.provider,
resetDay: clampResetDay(plan.resetDay),
setAt: plan.setAt,
}
if (opts?.format === 'json') {
console.log(JSON.stringify(displayPlan))
return
}
if (!plan || plan.id === 'none') {
console.log('\n Plan: none')
console.log(' API-pricing view is active.')
console.log(` Config: ${getConfigFilePath()}\n`)
return
}
console.log(`\n Plan: ${planDisplayName(plan.id)} (${plan.id})`)
console.log(` Budget: $${plan.monthlyUsd}/month`)
console.log(` Provider: ${plan.provider}`)
console.log(` Reset day: ${clampResetDay(plan.resetDay)}`)
console.log(` Set at: ${plan.setAt}`)
console.log(` Config: ${getConfigFilePath()}\n`)
return
}
if (mode === 'reset') {
await clearPlan()
console.log('\n Plan reset. API-pricing view is active.\n')
return
}
if (mode !== 'set') {
console.error('\n Usage: codeburn plan [set <id> | reset]\n')
process.exitCode = 1
return
}
if (!id || !isPlanId(id)) {
console.error(`\n Plan id must be one of: claude-pro, claude-max, cursor-pro, custom, none; got "${id ?? ''}".\n`)
process.exitCode = 1
return
}
const resetDay = opts?.resetDay ?? 1
if (!Number.isInteger(resetDay) || resetDay < 1 || resetDay > 28) {
console.error(`\n --reset-day must be an integer from 1 to 28; got ${resetDay}.\n`)
process.exitCode = 1
return
}
if (id === 'none') {
await clearPlan()
console.log('\n Plan reset. API-pricing view is active.\n')
return
}
if (id === 'custom') {
if (opts?.monthlyUsd === undefined) {
console.error('\n Custom plans require --monthly-usd <positive number>.\n')
process.exitCode = 1
return
}
const monthlyUsd = opts.monthlyUsd
if (!Number.isFinite(monthlyUsd) || monthlyUsd <= 0) {
console.error(`\n --monthly-usd must be a positive number; got ${opts.monthlyUsd}.\n`)
process.exitCode = 1
return
}
const provider = opts?.provider ?? 'all'
if (!isPlanProvider(provider)) {
console.error(`\n --provider must be one of: all, claude, codex, cursor; got "${provider}".\n`)
process.exitCode = 1
return
}
await savePlan({
id: 'custom',
monthlyUsd,
provider,
resetDay,
setAt: new Date().toISOString(),
})
console.log(`\n Plan set to custom ($${monthlyUsd}/month, ${provider}, reset day ${resetDay}).`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
return
}
const preset = getPresetPlan(id)
if (!preset) {
console.error(`\n Unknown preset "${id}".\n`)
process.exitCode = 1
return
}
await savePlan({
...preset,
resetDay,
setAt: new Date().toISOString(),
})
console.log(`\n Plan set to ${planDisplayName(preset.id)} ($${preset.monthlyUsd}/month).`)
console.log(` Provider: ${preset.provider}`)
console.log(` Reset day: ${resetDay}`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
})
program
.command('optimize')
.description('Find token waste and get exact fixes')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', '30days')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.action(async (opts) => {
await loadPricing()
await hydrateCache()
const { range, label } = getDateRange(opts.period)
const projects = await parseAllSessions(range, opts.provider)
await runOptimize(projects, label, range)
})
program
.command('compare')
.description('Compare two AI models side-by-side')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', 'all')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.action(async (opts) => {
await loadPricing()
await hydrateCache()
const { range } = getDateRange(opts.period)
await renderCompare(range, opts.provider)
})
program
.command('models')
.description('Per-model token + cost table, optionally exploded by task type')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', '30days')
.option('--from <date>', 'Custom range start (YYYY-MM-DD)')
.option('--to <date>', 'Custom range end (YYYY-MM-DD)')
.option('--provider <provider>', 'Filter by provider (e.g. claude, codex, cursor)', 'all')
.option('--task <category>', 'Filter to one task type (e.g. feature, debugging, refactoring)')
.option('--by-task', 'One row per (provider, model, task) instead of one row per (provider, model)')
.option('--top <n>', 'Show only the top N rows', (v: string) => parseInt(v, 10))
.option('--min-cost <usd>', 'Hide rows below this cost threshold', (v: string) => parseFloat(v))
.option('--no-totals', 'Suppress the footer totals row')
.option('--format <format>', 'Output format: table, markdown, json, csv', 'table')
.action(async (opts) => {
const { aggregateModels, renderTable, renderMarkdown, renderJson, renderCsv } = await import('./models-report.js')
await loadPricing()
await hydrateCache()
let range
if (opts.from || opts.to) {
// Match the other commands: parseDateRangeFlags throws on malformed dates,
// so catch and print the message instead of spraying a stack trace.
let customRange: DateRange | null = null
try {
customRange = parseDateRangeFlags(opts.from, opts.to)
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
process.stderr.write(`codeburn: ${message}\n`)
process.exit(1)
}
if (!customRange) {
process.stderr.write('codeburn: --from and --to must be valid YYYY-MM-DD dates\n')
process.exit(1)
}
range = customRange
} else {
range = getDateRange(opts.period).range
}
const projects = await parseAllSessions(range, opts.provider)
const rows = await aggregateModels(projects, {
byTask: !!opts.byTask,
taskFilter: opts.task,
topN: typeof opts.top === 'number' && Number.isFinite(opts.top) ? opts.top : undefined,
minCost: typeof opts.minCost === 'number' && Number.isFinite(opts.minCost) ? opts.minCost : 0.01,
})
const fmt = (opts.format ?? 'table').toLowerCase()
if (rows.length === 0 && (fmt === 'table' || fmt === 'markdown')) {
process.stdout.write('No model usage found for the selected period.\n')
return
}
if (fmt === 'json') {
process.stdout.write(renderJson(rows) + '\n')
} else if (fmt === 'csv') {
process.stdout.write(renderCsv(rows, { byTask: !!opts.byTask }) + '\n')
} else if (fmt === 'markdown' || fmt === 'md') {
process.stdout.write(renderMarkdown(rows, { byTask: !!opts.byTask, showTotals: opts.totals !== false }) + '\n')
} else if (fmt === 'table') {
process.stdout.write(renderTable(rows, { byTask: !!opts.byTask, showTotals: opts.totals !== false }) + '\n')
} else {
process.stderr.write(`codeburn: unknown --format "${opts.format}". Choose table, markdown, json, or csv.\n`)
process.exit(1)
}
})
program
.command('yield')
.description('Track which AI spend shipped to main vs reverted/abandoned (experimental)')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', 'week')
.action(async (opts) => {
const { computeYield, formatYieldSummary } = await import('./yield.js')
await loadPricing()
await hydrateCache()
const { range, label } = getDateRange(opts.period)
console.log(`\n Analyzing yield for ${label}...\n`)
const summary = await computeYield(range, process.cwd())
console.log(formatYieldSummary(summary))
})
program.parse()
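// Usage sketches for the commands registered above (flags as defined in this
// file; actual output depends on local session data):
//   codeburn report --period week --format json
//   codeburn models --by-task --top 10 --format markdown
//   codeburn status --format menubar-json --period month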

View file

@ -5,24 +5,19 @@ import { homedir } from 'os'
import { join } from 'path'
import type { DateRange, ProjectSummary } from './types.js'
// Bumped to 5 alongside the Cursor per-project breakdown: prior daily
// entries recorded every Cursor session under a single 'cursor' project
// label. After the upgrade, the breakdown produces per-workspace project
// labels for new days; without invalidation the dashboard would show
// 'cursor' for historical days and `-Users-you-myproject` for new ones
// in the same window, producing a confusing mixed projection.
export const DAILY_CACHE_VERSION = 5
// MIN_SUPPORTED_VERSION bumped to 5 too. The migration path
// Bumped to 6 alongside the Claude 1-hour cache-write pricing fix: prior
// daily entries priced all Claude cache writes at the 5-minute rate, so
// cached historical cost/model/provider/category totals would remain
// under-reported unless discarded and recomputed from raw sessions.
export const DAILY_CACHE_VERSION = 6
// MIN_SUPPORTED_VERSION bumped to 6 too. The migration path
// (isMigratableCache + migrateDays) only fills in missing default fields;
// it does NOT recompute the providers / categories / models rollups from
// session data, because those raw sessions are not stored in the cache.
// So a migrated v2/v3/v4 cache would carry forward stale provider totals
// (single 'cursor' bucket instead of per-workspace) for the full cache
// retention window. Setting the floor to 5 forces those older caches to
// be discarded and recomputed cleanly. Confirmed by live test:
// menubar-json --period all reported cursor=$3.78 against a migrated
// v4 cache but $4.08 (correct) after the cache was discarded.
const MIN_SUPPORTED_VERSION = 5
// So a migrated v5 cache would carry forward stale pricing totals for
// the full cache retention window. Setting the floor to 6 forces older
// caches to be discarded and recomputed cleanly.
const MIN_SUPPORTED_VERSION = 6
const DAILY_CACHE_FILENAME = 'daily-cache.json'
export type DailyEntry = {

View file

@ -52,6 +52,7 @@ const PROVIDER_COLORS: Record<string, string> = {
claude: '#FF8C42',
codex: '#5BF5A0',
cursor: '#00B4D8',
'ibm-bob': '#0F62FE',
opencode: '#A78BFA',
pi: '#F472B6',
all: '#FF8C42',
@ -247,16 +248,19 @@ function DailyActivity({ projects, days = 14, pw, bw }: { projects: ProjectSumma
)
}
const _homeEncoded = homedir().replace(/\//g, '-')
const _home = homedir()
const _homePrefix = _home.endsWith('/') ? _home : _home + '/'
function shortProject(encoded: string): string {
let path = encoded.replace(/^-/, '')
if (path.startsWith(_homeEncoded.replace(/^-/, ''))) {
path = path.slice(_homeEncoded.replace(/^-/, '').length).replace(/^-/, '')
}
path = path.replace(/^private-tmp-[^-]+-[^-]+-/, '').replace(/^private-tmp-/, '').replace(/^tmp-/, '')
export function shortProject(absPath: string): string {
const normalized = absPath.replace(/\\/g, '/')
let path: string
if (normalized === _home) path = ''
else if (normalized.startsWith(_homePrefix)) path = normalized.slice(_homePrefix.length)
else path = normalized
path = path.replace(/^\/+/, '')
path = path.replace(/^private\/tmp\/[^/]+\/[^/]+\//, '').replace(/^private\/tmp\//, '').replace(/^tmp\//, '')
if (!path) return 'home'
const parts = path.split('-').filter(Boolean)
const parts = path.split('/').filter(Boolean)
if (parts.length <= 3) return parts.join('/')
return parts.slice(-3).join('/')
}
@ -282,7 +286,7 @@ function ProjectBreakdown({ projects, pw, bw, budgets }: { projects: ProjectSumm
return (
<Text key={`${project.project}-${i}`} wrap="truncate-end">
<HBar value={project.totalCostUSD} max={maxCost} width={bw} />
<Text dimColor> {fit(shortProject(project.project), nw)}</Text>
<Text dimColor> {fit(shortProject(project.projectPath), nw)}</Text>
<Text color={GOLD}>{formatCost(project.totalCostUSD).padStart(8)}</Text>
<Text color={GOLD}>{avgCost.padStart(PROJECT_COL_AVG)}</Text>
<Text>{String(project.sessions.length).padStart(6)}</Text>
@ -442,7 +446,7 @@ const TOP_SESSIONS_CALLS_COL = 6
function TopSessions({ projects, pw, bw }: { projects: ProjectSummary[]; pw: number; bw: number }) {
const allSessions = projects.flatMap(p =>
p.sessions.map(s => ({ ...s, projectName: p.project }))
p.sessions.map(s => ({ ...s, projectPath: p.projectPath }))
)
const top = [...allSessions].sort((a, b) => b.totalCostUSD - a.totalCostUSD).slice(0, 5)
@ -460,7 +464,7 @@ function TopSessions({ projects, pw, bw }: { projects: ProjectSummary[]; pw: num
const date = session.firstTimestamp
? session.firstTimestamp.slice(0, TOP_SESSIONS_DATE_LEN)
: '----------'
const label = `${date} ${shortProject(session.projectName)}`
const label = `${date} ${shortProject(session.projectPath)}`
return (
<Text key={`${session.sessionId}-${i}`} wrap="truncate-end">
<HBar value={session.totalCostUSD} max={maxCost} width={bw} />
@ -513,6 +517,7 @@ const PROVIDER_DISPLAY_NAMES: Record<string, string> = {
claude: 'Claude',
codex: 'Codex',
cursor: 'Cursor',
'ibm-bob': 'IBM Bob',
opencode: 'OpenCode',
pi: 'Pi',
}

978
src/main.ts Normal file

@ -0,0 +1,978 @@
import { Command } from 'commander'
import { installMenubarApp } from './menubar-installer.js'
import { exportCsv, exportJson, type PeriodExport } from './export.js'
import { loadPricing, setModelAliases } from './models.js'
import { parseAllSessions, filterProjectsByName } from './parser.js'
import { convertCost } from './currency.js'
import { renderStatusBar } from './format.js'
import { type PeriodData, type ProviderCost } from './menubar-json.js'
import { buildMenubarPayload } from './menubar-json.js'
import { getDaysInRange, ensureCacheHydrated, emptyCache, BACKFILL_DAYS, toDateString } from './daily-cache.js'
import { aggregateProjectsIntoDays, buildPeriodDataFromDays, dateKey } from './day-aggregator.js'
import { CATEGORY_LABELS, type DateRange, type ProjectSummary, type TaskCategory } from './types.js'
import { aggregateModelEfficiency } from './model-efficiency.js'
import { renderDashboard } from './dashboard.js'
import { formatDateRangeLabel, parseDateRangeFlags, getDateRange, toPeriod, type Period } from './cli-date.js'
import { runOptimize, scanAndDetect } from './optimize.js'
import { renderCompare } from './compare.js'
import { getAllProviders } from './providers/index.js'
import { clearPlan, readConfig, readPlan, saveConfig, savePlan, getConfigFilePath, type PlanId } from './config.js'
import { clampResetDay, getPlanUsageOrNull, type PlanUsage } from './plan-usage.js'
import { getPresetPlan, isPlanId, isPlanProvider, planDisplayName } from './plans.js'
import { createRequire } from 'node:module'
const require = createRequire(import.meta.url)
const { version } = require('../package.json')
import { loadCurrency, getCurrency, isValidCurrencyCode } from './currency.js'
async function hydrateCache() {
try {
return await ensureCacheHydrated(
(range) => parseAllSessions(range, 'all'),
aggregateProjectsIntoDays,
)
} catch {
return emptyCache()
}
}
function collect(val: string, acc: string[]): string[] {
acc.push(val)
return acc
}
function parseNumber(value: string): number {
return Number(value)
}
function parseInteger(value: string): number {
return parseInt(value, 10)
}
type JsonPlanSummary = {
id: PlanId
budget: number
spent: number
percentUsed: number
status: 'under' | 'near' | 'over'
projectedMonthEnd: number
daysUntilReset: number
periodStart: string
periodEnd: string
}
function toJsonPlanSummary(planUsage: PlanUsage): JsonPlanSummary {
return {
id: planUsage.plan.id,
budget: convertCost(planUsage.budgetUsd),
spent: convertCost(planUsage.spentApiEquivalentUsd),
percentUsed: Math.round(planUsage.percentUsed * 10) / 10,
status: planUsage.status,
projectedMonthEnd: convertCost(planUsage.projectedMonthUsd),
daysUntilReset: planUsage.daysUntilReset,
periodStart: planUsage.periodStart.toISOString(),
periodEnd: planUsage.periodEnd.toISOString(),
}
}
function assertFormat(value: string, allowed: readonly string[], command: string): void {
if (!allowed.includes(value)) {
process.stderr.write(
`codeburn ${command}: unknown format "${value}". Valid values: ${allowed.join(', ')}.\n`
)
process.exit(1)
}
}
async function runJsonReport(period: Period, provider: string, project: string[], exclude: string[]): Promise<void> {
await loadPricing()
const { range, label } = getDateRange(period)
const projects = filterProjectsByName(await parseAllSessions(range, provider), project, exclude)
const report: ReturnType<typeof buildJsonReport> & { plan?: JsonPlanSummary } = buildJsonReport(projects, label, period)
const planUsage = await getPlanUsageOrNull()
if (planUsage) {
report.plan = toJsonPlanSummary(planUsage)
}
console.log(JSON.stringify(report, null, 2))
}
const program = new Command()
.name('codeburn')
.description('See where your AI coding tokens go - by task, tool, model, and project')
.version(version)
.option('--verbose', 'print warnings to stderr on read failures and skipped files')
.option('--timezone <zone>', 'IANA timezone for date grouping (e.g. Asia/Tokyo, America/New_York)')
program.hook('preAction', async (thisCommand) => {
const tz = thisCommand.opts<{ timezone?: string }>().timezone ?? process.env['CODEBURN_TZ']
if (tz) {
try {
Intl.DateTimeFormat(undefined, { timeZone: tz })
} catch {
console.error(`\n Invalid timezone: "${tz}". Use an IANA timezone like "America/New_York" or "Asia/Tokyo".\n`)
process.exit(1)
}
process.env.TZ = tz
}
const config = await readConfig()
setModelAliases(config.modelAliases ?? {})
if (thisCommand.opts<{ verbose?: boolean }>().verbose) {
process.env['CODEBURN_VERBOSE'] = '1'
}
await loadCurrency()
})
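The timezone check in the hook above relies on `Intl.DateTimeFormat` throwing a `RangeError` for an unknown `timeZone`, which makes it a cheap validity probe with no extra dependency. A standalone sketch of that pattern:

```typescript
// Probe IANA timezone validity: constructing a DateTimeFormat with an
// invalid timeZone throws, so try/catch doubles as validation.
function isValidTimeZone(tz: string): boolean {
  try {
    Intl.DateTimeFormat(undefined, { timeZone: tz })
    return true
  } catch {
    return false
  }
}

console.log(isValidTimeZone('Asia/Tokyo'))   // true
console.log(isValidTimeZone('Mars/Olympus')) // false
```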
function buildJsonReport(projects: ProjectSummary[], period: string, periodKey: string) {
const sessions = projects.flatMap(p => p.sessions)
const { code } = getCurrency()
const totalCostUSD = projects.reduce((s, p) => s + p.totalCostUSD, 0)
const totalCalls = projects.reduce((s, p) => s + p.totalApiCalls, 0)
const totalSessions = projects.reduce((s, p) => s + p.sessions.length, 0)
const totalInput = sessions.reduce((s, sess) => s + sess.totalInputTokens, 0)
const totalOutput = sessions.reduce((s, sess) => s + sess.totalOutputTokens, 0)
const totalCacheRead = sessions.reduce((s, sess) => s + sess.totalCacheReadTokens, 0)
const totalCacheWrite = sessions.reduce((s, sess) => s + sess.totalCacheWriteTokens, 0)
// Match src/menubar-json.ts:cacheHitPercent: reads over reads+fresh-input. cache_write
// counts tokens being stored, not served, so it doesn't belong in the denominator.
const cacheHitDenom = totalInput + totalCacheRead
const cacheHitPercent = cacheHitDenom > 0 ? Math.round((totalCacheRead / cacheHitDenom) * 1000) / 10 : 0
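The same formula in isolation, for readers wiring up their own dashboards: cache reads over reads plus fresh input, rounded to one decimal place, with writes deliberately excluded from the denominator:

```typescript
// Cache-hit rate as used above: reads / (reads + fresh input). Cache
// writes are tokens being stored, not served, so they are excluded.
function cacheHitPercent(inputTokens: number, cacheReadTokens: number): number {
  const denom = inputTokens + cacheReadTokens
  return denom > 0 ? Math.round((cacheReadTokens / denom) * 1000) / 10 : 0
}

console.log(cacheHitPercent(2_000, 8_000)) // 80
console.log(cacheHitPercent(0, 0))         // 0 (no traffic, not a division error)
```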
// Per-day rollup. Mirrors parser.ts categoryBreakdown semantics so a
// consumer summing daily[].editTurns over a period gets the same total as
// sum(activities[].editTurns) for that period: every turn counts once for
// `turns`, edit turns count for `editTurns`, edit turns with zero retries
// count for `oneShotTurns`. Issue #279 — daily-resolution efficiency
// dashboards need this without re-deriving from activity-level rollups.
const dailyMap: Record<string, { cost: number; calls: number; turns: number; editTurns: number; oneShotTurns: number }> = {}
for (const sess of sessions) {
for (const turn of sess.turns) {
// Prefer the user-message timestamp on the turn; fall back to the first
// assistant-call timestamp when the user line is missing (continuation
// sessions where the JSONL begins mid-conversation). Previously these
// turns dropped from daily but stayed in activities, breaking the
// sum(daily[].editTurns) === sum(activities[].editTurns) invariant.
const ts = turn.timestamp || turn.assistantCalls[0]?.timestamp
if (!ts) { continue }
const day = dateKey(ts)
if (!dailyMap[day]) { dailyMap[day] = { cost: 0, calls: 0, turns: 0, editTurns: 0, oneShotTurns: 0 } }
dailyMap[day].turns += 1
if (turn.hasEdits) {
dailyMap[day].editTurns += 1
if (turn.retries === 0) dailyMap[day].oneShotTurns += 1
}
for (const call of turn.assistantCalls) {
dailyMap[day].cost += call.costUSD
dailyMap[day].calls += 1
}
}
}
const daily = Object.entries(dailyMap).sort().map(([date, d]) => ({
date,
cost: convertCost(d.cost),
calls: d.calls,
turns: d.turns,
editTurns: d.editTurns,
oneShotTurns: d.oneShotTurns,
// Pre-computed convenience for dashboards that don't want to do the math.
// null when there are no edit turns (the rate is undefined, not zero —
// a day where the user only had Q&A turns shouldn't read as 0% one-shot).
oneShotRate: d.editTurns > 0
? Math.round((d.oneShotTurns / d.editTurns) * 1000) / 10
: null,
}))
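The `oneShotRate` convention above is worth showing on its own: the rate is `null` when there are no edit turns, so a Q&A-only day is reported as "undefined" rather than a misleading 0%:

```typescript
// One-shot rate per the daily rollup: defined only when edit turns exist.
function oneShotRate(editTurns: number, oneShotTurns: number): number | null {
  return editTurns > 0
    ? Math.round((oneShotTurns / editTurns) * 1000) / 10
    : null // undefined rate, not zero
}

console.log(oneShotRate(8, 5)) // 62.5
console.log(oneShotRate(0, 0)) // null
```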
const projectList = projects.map(p => ({
name: p.project,
path: p.projectPath,
cost: convertCost(p.totalCostUSD),
avgCostPerSession: p.sessions.length > 0
? convertCost(p.totalCostUSD / p.sessions.length)
: null,
calls: p.totalApiCalls,
sessions: p.sessions.length,
}))
const modelMap: Record<string, { calls: number; cost: number; inputTokens: number; outputTokens: number; cacheReadTokens: number; cacheWriteTokens: number }> = {}
const modelEfficiency = aggregateModelEfficiency(projects)
for (const sess of sessions) {
for (const [model, d] of Object.entries(sess.modelBreakdown)) {
if (!modelMap[model]) { modelMap[model] = { calls: 0, cost: 0, inputTokens: 0, outputTokens: 0, cacheReadTokens: 0, cacheWriteTokens: 0 } }
modelMap[model].calls += d.calls
modelMap[model].cost += d.costUSD
modelMap[model].inputTokens += d.tokens.inputTokens
modelMap[model].outputTokens += d.tokens.outputTokens
modelMap[model].cacheReadTokens += d.tokens.cacheReadInputTokens
modelMap[model].cacheWriteTokens += d.tokens.cacheCreationInputTokens
}
}
const models = Object.entries(modelMap)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([name, { cost, ...rest }]) => {
const efficiency = modelEfficiency.get(name)
return {
name,
...rest,
cost: convertCost(cost),
editTurns: efficiency?.editTurns ?? 0,
oneShotTurns: efficiency?.oneShotTurns ?? 0,
oneShotRate: efficiency?.oneShotRate ?? null,
retriesPerEdit: efficiency?.retriesPerEdit ?? null,
costPerEdit: efficiency?.costPerEditUSD !== null && efficiency?.costPerEditUSD !== undefined
? convertCost(efficiency.costPerEditUSD)
: null,
}
})
const catMap: Record<string, { turns: number; cost: number; editTurns: number; oneShotTurns: number }> = {}
for (const sess of sessions) {
for (const [cat, d] of Object.entries(sess.categoryBreakdown)) {
if (!catMap[cat]) { catMap[cat] = { turns: 0, cost: 0, editTurns: 0, oneShotTurns: 0 } }
catMap[cat].turns += d.turns
catMap[cat].cost += d.costUSD
catMap[cat].editTurns += d.editTurns
catMap[cat].oneShotTurns += d.oneShotTurns
}
}
const activities = Object.entries(catMap)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([cat, d]) => ({
category: CATEGORY_LABELS[cat as TaskCategory] ?? cat,
cost: convertCost(d.cost),
turns: d.turns,
editTurns: d.editTurns,
oneShotTurns: d.oneShotTurns,
oneShotRate: d.editTurns > 0 ? Math.round((d.oneShotTurns / d.editTurns) * 1000) / 10 : null,
}))
const toolMap: Record<string, number> = {}
const mcpMap: Record<string, number> = {}
const bashMap: Record<string, number> = {}
for (const sess of sessions) {
for (const [tool, d] of Object.entries(sess.toolBreakdown)) {
toolMap[tool] = (toolMap[tool] ?? 0) + d.calls
}
for (const [server, d] of Object.entries(sess.mcpBreakdown)) {
mcpMap[server] = (mcpMap[server] ?? 0) + d.calls
}
for (const [cmd, d] of Object.entries(sess.bashBreakdown)) {
bashMap[cmd] = (bashMap[cmd] ?? 0) + d.calls
}
}
const sortedMap = (m: Record<string, number>) =>
Object.entries(m).sort(([, a], [, b]) => b - a).map(([name, calls]) => ({ name, calls }))
const topSessions = projects
.flatMap(p => p.sessions.map(s => ({ project: p.project, sessionId: s.sessionId, date: s.firstTimestamp ? dateKey(s.firstTimestamp) : null, cost: convertCost(s.totalCostUSD), calls: s.apiCalls })))
.sort((a, b) => b.cost - a.cost)
.slice(0, 5)
return {
generated: new Date().toISOString(),
currency: code,
period,
periodKey,
overview: {
cost: convertCost(totalCostUSD),
calls: totalCalls,
sessions: totalSessions,
cacheHitPercent,
tokens: {
input: totalInput,
output: totalOutput,
cacheRead: totalCacheRead,
cacheWrite: totalCacheWrite,
},
},
daily,
projects: projectList,
models,
activities,
tools: sortedMap(toolMap),
mcpServers: sortedMap(mcpMap),
shellCommands: sortedMap(bashMap),
topSessions,
}
}
program
.command('report', { isDefault: true })
.description('Interactive usage dashboard')
.option('-p, --period <period>', 'Starting period: today, week, 30days, month, all', 'week')
.option('--from <date>', 'Start date (YYYY-MM-DD). Overrides --period when set')
.option('--to <date>', 'End date (YYYY-MM-DD). Overrides --period when set')
.option('--provider <provider>', 'Filter by provider (e.g. claude, codex, cursor, opencode)', 'all')
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInteger, 30)
.action(async (opts) => {
assertFormat(opts.format, ['tui', 'json'], 'report')
let customRange: DateRange | null = null
try {
customRange = parseDateRangeFlags(opts.from, opts.to)
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Error: ${message}\n`)
process.exit(1)
}
const period = toPeriod(opts.period)
if (opts.format === 'json') {
await loadPricing()
await hydrateCache()
if (customRange) {
const label = formatDateRangeLabel(opts.from, opts.to)
const projects = filterProjectsByName(
await parseAllSessions(customRange, opts.provider),
opts.project,
opts.exclude,
)
console.log(JSON.stringify(buildJsonReport(projects, label, 'custom'), null, 2))
} else {
await runJsonReport(period, opts.provider, opts.project, opts.exclude)
}
return
}
await hydrateCache()
const customRangeLabel = customRange ? formatDateRangeLabel(opts.from, opts.to) : undefined
await renderDashboard(period, opts.provider, opts.refresh, opts.project, opts.exclude, customRange, customRangeLabel)
})
function buildPeriodData(label: string, projects: ProjectSummary[]): PeriodData {
const sessions = projects.flatMap(p => p.sessions)
const catTotals: Record<string, { turns: number; cost: number; editTurns: number; oneShotTurns: number }> = {}
const modelTotals: Record<string, { calls: number; cost: number }> = {}
let inputTokens = 0, outputTokens = 0, cacheReadTokens = 0, cacheWriteTokens = 0
for (const sess of sessions) {
inputTokens += sess.totalInputTokens
outputTokens += sess.totalOutputTokens
cacheReadTokens += sess.totalCacheReadTokens
cacheWriteTokens += sess.totalCacheWriteTokens
for (const [cat, d] of Object.entries(sess.categoryBreakdown)) {
if (!catTotals[cat]) catTotals[cat] = { turns: 0, cost: 0, editTurns: 0, oneShotTurns: 0 }
catTotals[cat].turns += d.turns
catTotals[cat].cost += d.costUSD
catTotals[cat].editTurns += d.editTurns
catTotals[cat].oneShotTurns += d.oneShotTurns
}
for (const [model, d] of Object.entries(sess.modelBreakdown)) {
if (!modelTotals[model]) modelTotals[model] = { calls: 0, cost: 0 }
modelTotals[model].calls += d.calls
modelTotals[model].cost += d.costUSD
}
}
return {
label,
cost: projects.reduce((s, p) => s + p.totalCostUSD, 0),
calls: projects.reduce((s, p) => s + p.totalApiCalls, 0),
sessions: projects.reduce((s, p) => s + p.sessions.length, 0),
inputTokens, outputTokens, cacheReadTokens, cacheWriteTokens,
categories: Object.entries(catTotals)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([cat, d]) => ({ name: CATEGORY_LABELS[cat as TaskCategory] ?? cat, ...d })),
models: Object.entries(modelTotals)
.sort(([, a], [, b]) => b.cost - a.cost)
.map(([name, d]) => ({ name, ...d })),
}
}
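The roll-up pattern used in `buildPeriodData` (and in `buildJsonReport` above) is a key-by-key merge of per-session breakdowns into one period-wide record. A minimal sketch with a simplified shape:

```typescript
// Merge per-session category breakdowns into period totals, key by key.
type CatTotals = { turns: number; cost: number }

function mergeCategories(perSession: Record<string, CatTotals>[]): Record<string, CatTotals> {
  const out: Record<string, CatTotals> = {}
  for (const breakdown of perSession) {
    for (const [cat, d] of Object.entries(breakdown)) {
      if (!out[cat]) out[cat] = { turns: 0, cost: 0 }
      out[cat].turns += d.turns
      out[cat].cost += d.cost
    }
  }
  return out
}

const merged = mergeCategories([
  { coding: { turns: 3, cost: 1.5 } },
  { coding: { turns: 2, cost: 0.5 }, debugging: { turns: 1, cost: 0.25 } },
])
console.log(merged.coding) // { turns: 5, cost: 2 }
```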
program
.command('status')
.description('Compact status output (today + month)')
.option('--format <format>', 'Output format: terminal, menubar-json, json', 'terminal')
.option('--provider <provider>', 'Filter by provider (e.g. claude, codex, cursor, opencode)', 'all')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--period <period>', 'Primary period for menubar-json: today, week, 30days, month, all', 'today')
.option('--no-optimize', 'Skip optimize findings (menubar-json only, faster)')
.action(async (opts) => {
assertFormat(opts.format, ['terminal', 'menubar-json', 'json'], 'status')
await loadPricing()
const pf = opts.provider
const fp = (p: ProjectSummary[]) => filterProjectsByName(p, opts.project, opts.exclude)
if (opts.format === 'menubar-json') {
const periodInfo = getDateRange(opts.period)
const now = new Date()
const todayStart = new Date(now.getFullYear(), now.getMonth(), now.getDate())
const yesterdayStr = toDateString(new Date(now.getFullYear(), now.getMonth(), now.getDate() - 1))
const isAllProviders = pf === 'all'
const cache = await hydrateCache()
// CURRENT PERIOD DATA
// - 'all' provider: assemble from cache + today (fast)
// - specific provider: parse the period range with provider filter (correct, but slower)
let currentData: PeriodData
let scanProjects: ProjectSummary[]
let scanRange: DateRange
if (isAllProviders) {
// Parse only today's sessions; historical data comes from cache to avoid double-counting
const todayRange: DateRange = { start: todayStart, end: new Date() }
const todayProjects = fp(await parseAllSessions(todayRange, 'all'))
const todayDays = aggregateProjectsIntoDays(todayProjects)
const rangeStartStr = toDateString(periodInfo.range.start)
const rangeEndStr = toDateString(periodInfo.range.end)
const historicalDays = getDaysInRange(cache, rangeStartStr, yesterdayStr)
const todayInRange = todayDays.filter(d => d.date >= rangeStartStr && d.date <= rangeEndStr)
const allDays = [...historicalDays, ...todayInRange].sort((a, b) => a.date.localeCompare(b.date))
currentData = buildPeriodDataFromDays(allDays, periodInfo.label)
scanProjects = todayProjects
scanRange = periodInfo.range
} else {
const projects = fp(await parseAllSessions(periodInfo.range, pf))
currentData = buildPeriodData(periodInfo.label, projects)
scanProjects = projects
scanRange = periodInfo.range
}
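The cache/today split in the branch above deserves a standalone sketch: historical days come from the on-disk cache, only today is re-parsed, and the two are merged by date so no day is counted twice. Names and the `Day` shape here are simplified for illustration:

```typescript
// Merge cached historical days with a freshly parsed today, without
// double-counting: cache contributes strictly-before-today, parse
// contributes only today.
type Day = { date: string; cost: number }

function mergeDays(historical: Day[], today: Day[], todayStr: string): Day[] {
  const fresh = today.filter(d => d.date === todayStr)
  return [...historical.filter(d => d.date < todayStr), ...fresh]
    .sort((a, b) => a.date.localeCompare(b.date))
}

const days = mergeDays(
  [{ date: '2026-05-11', cost: 3 }, { date: '2026-05-12', cost: 4 }],
  [{ date: '2026-05-13', cost: 2 }],
  '2026-05-13',
)
console.log(days.map(d => d.date)) // three days in date order, today appearing once
```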
// PROVIDERS
// For 'all': enumerate every provider with cost across the period (from cache) + installed-but-zero.
// For specific: just this single provider with its scoped cost.
const allProviders = await getAllProviders()
const displayNameByName = new Map(allProviders.map(p => [p.name, p.displayName]))
const providers: ProviderCost[] = []
if (isAllProviders) {
// Parse only today; historical provider costs come from cache
const todayRangeForProviders: DateRange = { start: todayStart, end: new Date() }
const todayDaysForProviders = aggregateProjectsIntoDays(fp(await parseAllSessions(todayRangeForProviders, 'all')))
const rangeStartStr = toDateString(periodInfo.range.start)
const todayStr = toDateString(todayStart)
const allDaysForProviders = [
...getDaysInRange(cache, rangeStartStr, yesterdayStr),
...todayDaysForProviders.filter(d => d.date === todayStr),
]
const providerTotals: Record<string, number> = {}
for (const d of allDaysForProviders) {
for (const [name, p] of Object.entries(d.providers)) {
providerTotals[name] = (providerTotals[name] ?? 0) + p.cost
}
}
for (const [name, cost] of Object.entries(providerTotals)) {
providers.push({ name: displayNameByName.get(name) ?? name, cost })
}
for (const p of allProviders) {
if (providers.some(pc => pc.name === p.displayName)) continue
const sources = await p.discoverSessions()
if (sources.length > 0) providers.push({ name: p.displayName, cost: 0 })
}
} else {
const display = displayNameByName.get(pf) ?? pf
providers.push({ name: display, cost: currentData.cost })
}
// DAILY HISTORY (last 365 days)
// Cache stores per-provider cost+calls per day in DailyEntry.providers, so we can derive
// a provider-filtered history without re-parsing. Tokens aren't broken down per provider
// in the cache, so the filtered view shows zero tokens (heatmap/trend still works on cost).
const historyStartStr = toDateString(new Date(now.getFullYear(), now.getMonth(), now.getDate() - BACKFILL_DAYS))
const allCacheDays = getDaysInRange(cache, historyStartStr, yesterdayStr)
// Parse only today for history; historical days come from cache
const todayRangeForHistory: DateRange = { start: todayStart, end: new Date() }
const allTodayDaysForHistory = aggregateProjectsIntoDays(fp(await parseAllSessions(todayRangeForHistory, 'all')))
const todayStrForHistory = toDateString(todayStart)
const fullHistory = [...allCacheDays, ...allTodayDaysForHistory.filter(d => d.date === todayStrForHistory)]
const dailyHistory = fullHistory.map(d => {
if (isAllProviders) {
const topModels = Object.entries(d.models)
.filter(([name]) => name !== '<synthetic>')
.sort(([, a], [, b]) => b.cost - a.cost)
.slice(0, 5)
.map(([name, m]) => ({
name,
cost: m.cost,
calls: m.calls,
inputTokens: m.inputTokens,
outputTokens: m.outputTokens,
}))
return {
date: d.date,
cost: d.cost,
calls: d.calls,
inputTokens: d.inputTokens,
outputTokens: d.outputTokens,
cacheReadTokens: d.cacheReadTokens,
cacheWriteTokens: d.cacheWriteTokens,
topModels,
}
}
const prov = d.providers[pf] ?? { calls: 0, cost: 0 }
return {
date: d.date,
cost: prov.cost,
calls: prov.calls,
inputTokens: 0,
outputTokens: 0,
cacheReadTokens: 0,
cacheWriteTokens: 0,
topModels: [],
}
})
const optimize = opts.optimize === false ? null : await scanAndDetect(scanProjects, scanRange)
console.log(JSON.stringify(buildMenubarPayload(currentData, providers, optimize, dailyHistory)))
return
}
if (opts.format === 'json') {
await hydrateCache()
const todayData = buildPeriodData('today', fp(await parseAllSessions(getDateRange('today').range, pf)))
const monthData = buildPeriodData('month', fp(await parseAllSessions(getDateRange('month').range, pf)))
const { code, rate } = getCurrency()
const payload: {
currency: string
today: { cost: number; calls: number }
month: { cost: number; calls: number }
plan?: JsonPlanSummary
} = {
currency: code,
today: { cost: Math.round(todayData.cost * rate * 100) / 100, calls: todayData.calls },
month: { cost: Math.round(monthData.cost * rate * 100) / 100, calls: monthData.calls },
}
const planUsage = await getPlanUsageOrNull()
if (planUsage) {
payload.plan = toJsonPlanSummary(planUsage)
}
console.log(JSON.stringify(payload))
return
}
await hydrateCache()
const monthProjects = fp(await parseAllSessions(getDateRange('month').range, pf))
console.log(renderStatusBar(monthProjects))
})
program
.command('today')
.description('Today\'s usage dashboard')
.option('--provider <provider>', 'Filter by provider (e.g. claude, codex, cursor, opencode)', 'all')
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInteger, 30)
.action(async (opts) => {
assertFormat(opts.format, ['tui', 'json'], 'today')
if (opts.format === 'json') {
await runJsonReport('today', opts.provider, opts.project, opts.exclude)
return
}
await hydrateCache()
await renderDashboard('today', opts.provider, opts.refresh, opts.project, opts.exclude)
})
program
.command('month')
.description('This month\'s usage dashboard')
.option('--provider <provider>', 'Filter by provider (e.g. claude, codex, cursor, opencode)', 'all')
.option('--format <format>', 'Output format: tui, json', 'tui')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.option('--refresh <seconds>', 'Auto-refresh interval in seconds (0 to disable)', parseInteger, 30)
.action(async (opts) => {
assertFormat(opts.format, ['tui', 'json'], 'month')
if (opts.format === 'json') {
await runJsonReport('month', opts.provider, opts.project, opts.exclude)
return
}
await hydrateCache()
await renderDashboard('month', opts.provider, opts.refresh, opts.project, opts.exclude)
})
program
.command('export')
.description('Export usage data to CSV or JSON')
.option('-f, --format <format>', 'Export format: csv, json', 'csv')
.option('-o, --output <path>', 'Output file path')
.option('--from <date>', 'Start date (YYYY-MM-DD). Exports a single custom period when set')
.option('--to <date>', 'End date (YYYY-MM-DD). Exports a single custom period when set')
.option('--provider <provider>', 'Filter by provider (e.g. claude, codex, cursor, opencode)', 'all')
.option('--project <name>', 'Show only projects matching name (repeatable)', collect, [])
.option('--exclude <name>', 'Exclude projects matching name (repeatable)', collect, [])
.action(async (opts) => {
assertFormat(opts.format, ['csv', 'json'], 'export')
await loadPricing()
await hydrateCache()
const pf = opts.provider
const fp = (p: ProjectSummary[]) => filterProjectsByName(p, opts.project, opts.exclude)
let customRange: DateRange | null = null
try {
customRange = parseDateRangeFlags(opts.from, opts.to)
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Error: ${message}\n`)
process.exit(1)
}
const periods: PeriodExport[] = customRange
? [{ label: formatDateRangeLabel(opts.from, opts.to), projects: fp(await parseAllSessions(customRange, pf)) }]
: [
{ label: 'Today', projects: fp(await parseAllSessions(getDateRange('today').range, pf)) },
{ label: '7 Days', projects: fp(await parseAllSessions(getDateRange('week').range, pf)) },
{ label: '30 Days', projects: fp(await parseAllSessions(getDateRange('30days').range, pf)) },
]
if (periods.every(p => p.projects.length === 0)) {
console.log('\n No usage data found.\n')
return
}
const defaultName = `codeburn-${toDateString(new Date())}`
const outputPath = opts.output ?? `${defaultName}.${opts.format}`
let savedPath: string
try {
if (opts.format === 'json') {
savedPath = await exportJson(periods, outputPath)
} else {
savedPath = await exportCsv(periods, outputPath)
}
} catch (err) {
// Protection guards in export.ts (symlink refusal, non-codeburn folder refusal, etc.)
// throw with a user-readable message. Print just the message, not the stack, so the CLI
// doesn't spray its internals at the user.
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Export failed: ${message}\n`)
process.exit(1)
}
const exportedLabel = customRange ? formatDateRangeLabel(opts.from, opts.to) : 'Today + 7 Days + 30 Days'
console.log(`\n Exported (${exportedLabel}) to: ${savedPath}\n`)
})
program
.command('menubar')
.description('Install and launch the macOS menubar app (one command, no clone)')
.option('--force', 'Reinstall even if an older copy is already in ~/Applications')
.action(async (opts: { force?: boolean }) => {
try {
const result = await installMenubarApp({ force: opts.force })
console.log(`\n Ready. ${result.installedPath}\n`)
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
console.error(`\n Menubar install failed: ${message}\n`)
process.exit(1)
}
})
program
.command('currency [code]')
.description('Set display currency (e.g. codeburn currency GBP)')
.option('--symbol <symbol>', 'Override the currency symbol')
.option('--reset', 'Reset to USD (removes currency config)')
.action(async (code?: string, opts?: { symbol?: string; reset?: boolean }) => {
if (opts?.reset) {
const config = await readConfig()
delete config.currency
await saveConfig(config)
console.log('\n Currency reset to USD.\n')
return
}
if (!code) {
const { code: activeCode, rate, symbol } = getCurrency()
if (activeCode === 'USD' && rate === 1) {
console.log('\n Currency: USD (default)')
console.log(` Config: ${getConfigFilePath()}\n`)
} else {
console.log(`\n Currency: ${activeCode}`)
console.log(` Symbol: ${symbol}`)
console.log(` Rate: 1 USD = ${rate} ${activeCode}`)
console.log(` Config: ${getConfigFilePath()}\n`)
}
return
}
const upperCode = code.toUpperCase()
if (!isValidCurrencyCode(upperCode)) {
console.error(`\n "${code}" is not a valid ISO 4217 currency code.\n`)
process.exitCode = 1
return
}
const config = await readConfig()
config.currency = {
code: upperCode,
...(opts?.symbol ? { symbol: opts.symbol } : {}),
}
await saveConfig(config)
await loadCurrency()
const { rate, symbol } = getCurrency()
console.log(`\n Currency set to ${upperCode}.`)
console.log(` Symbol: ${symbol}`)
console.log(` Rate: 1 USD = ${rate} ${upperCode}`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
})
program
.command('model-alias [from] [to]')
.description('Map a provider model name to a canonical one for pricing (e.g. codeburn model-alias my-model claude-opus-4-6)')
.option('--remove <from>', 'Remove an alias')
.option('--list', 'List configured aliases')
.action(async (from?: string, to?: string, opts?: { remove?: string; list?: boolean }) => {
const config = await readConfig()
const aliases = config.modelAliases ?? {}
if (opts?.list || (!from && !opts?.remove)) {
const entries = Object.entries(aliases)
if (entries.length === 0) {
console.log('\n No model aliases configured.')
console.log(` Config: ${getConfigFilePath()}\n`)
} else {
console.log('\n Model aliases:')
for (const [src, dst] of entries) {
console.log(` ${src} -> ${dst}`)
}
console.log(` Config: ${getConfigFilePath()}\n`)
}
return
}
if (opts?.remove) {
if (!(opts.remove in aliases)) {
console.error(`\n Alias not found: ${opts.remove}\n`)
process.exitCode = 1
return
}
delete aliases[opts.remove]
config.modelAliases = Object.keys(aliases).length > 0 ? aliases : undefined
await saveConfig(config)
console.log(`\n Removed alias: ${opts.remove}\n`)
return
}
if (!from || !to) {
console.error('\n Usage: codeburn model-alias <from> <to>\n')
process.exitCode = 1
return
}
aliases[from] = to
config.modelAliases = aliases
await saveConfig(config)
console.log(`\n Alias saved: ${from} -> ${to}`)
console.log(` Config: ${getConfigFilePath()}\n`)
})
program
.command('plan [action] [id]')
.description('Show or configure a subscription plan for overage tracking')
.option('--format <format>', 'Output format: text or json', 'text')
.option('--monthly-usd <n>', 'Monthly plan price in USD (for custom)', parseNumber)
.option('--provider <name>', 'Provider scope: all, claude, codex, cursor', 'all')
.option('--reset-day <n>', 'Day of month plan resets (1-28)', parseInteger, 1)
.action(async (action?: string, id?: string, opts?: { format?: string; monthlyUsd?: number; provider?: string; resetDay?: number }) => {
assertFormat(opts?.format ?? 'text', ['text', 'json'], 'plan')
const mode = action ?? 'show'
if (mode === 'show') {
const plan = await readPlan()
const displayPlan = !plan || plan.id === 'none'
? { id: 'none', monthlyUsd: 0, provider: 'all', resetDay: 1, setAt: null }
: {
id: plan.id,
monthlyUsd: plan.monthlyUsd,
provider: plan.provider,
resetDay: clampResetDay(plan.resetDay),
setAt: plan.setAt,
}
if (opts?.format === 'json') {
console.log(JSON.stringify(displayPlan))
return
}
if (!plan || plan.id === 'none') {
console.log('\n Plan: none')
console.log(' API-pricing view is active.')
console.log(` Config: ${getConfigFilePath()}\n`)
return
}
console.log(`\n Plan: ${planDisplayName(plan.id)} (${plan.id})`)
console.log(` Budget: $${plan.monthlyUsd}/month`)
console.log(` Provider: ${plan.provider}`)
console.log(` Reset day: ${clampResetDay(plan.resetDay)}`)
console.log(` Set at: ${plan.setAt}`)
console.log(` Config: ${getConfigFilePath()}\n`)
return
}
if (mode === 'reset') {
await clearPlan()
console.log('\n Plan reset. API-pricing view is active.\n')
return
}
if (mode !== 'set') {
console.error('\n Usage: codeburn plan [set <id> | reset]\n')
process.exitCode = 1
return
}
if (!id || !isPlanId(id)) {
console.error(`\n Plan id must be one of: claude-pro, claude-max, cursor-pro, custom, none; got "${id ?? ''}".\n`)
process.exitCode = 1
return
}
const resetDay = opts?.resetDay ?? 1
if (!Number.isInteger(resetDay) || resetDay < 1 || resetDay > 28) {
console.error(`\n --reset-day must be an integer from 1 to 28; got ${resetDay}.\n`)
process.exitCode = 1
return
}
if (id === 'none') {
await clearPlan()
console.log('\n Plan reset. API-pricing view is active.\n')
return
}
if (id === 'custom') {
if (opts?.monthlyUsd === undefined) {
console.error('\n Custom plans require --monthly-usd <positive number>.\n')
process.exitCode = 1
return
}
const monthlyUsd = opts.monthlyUsd
if (!Number.isFinite(monthlyUsd) || monthlyUsd <= 0) {
console.error(`\n --monthly-usd must be a positive number; got ${opts.monthlyUsd}.\n`)
process.exitCode = 1
return
}
const provider = opts?.provider ?? 'all'
if (!isPlanProvider(provider)) {
console.error(`\n --provider must be one of: all, claude, codex, cursor; got "${provider}".\n`)
process.exitCode = 1
return
}
await savePlan({
id: 'custom',
monthlyUsd,
provider,
resetDay,
setAt: new Date().toISOString(),
})
console.log(`\n Plan set to custom ($${monthlyUsd}/month, ${provider}, reset day ${resetDay}).`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
return
}
const preset = getPresetPlan(id)
if (!preset) {
console.error(`\n Unknown preset "${id}".\n`)
process.exitCode = 1
return
}
await savePlan({
...preset,
resetDay,
setAt: new Date().toISOString(),
})
console.log(`\n Plan set to ${planDisplayName(preset.id)} ($${preset.monthlyUsd}/month).`)
console.log(` Provider: ${preset.provider}`)
console.log(` Reset day: ${resetDay}`)
console.log(` Config saved to ${getConfigFilePath()}\n`)
})
program
.command('optimize')
.description('Find token waste and get exact fixes')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', '30days')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.action(async (opts) => {
await loadPricing()
await hydrateCache()
const { range, label } = getDateRange(opts.period)
const projects = await parseAllSessions(range, opts.provider)
await runOptimize(projects, label, range)
})
program
.command('compare')
.description('Compare two AI models side-by-side')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', 'all')
.option('--provider <provider>', 'Filter by provider (e.g. claude, gemini, cursor, copilot)', 'all')
.action(async (opts) => {
await loadPricing()
await hydrateCache()
const { range } = getDateRange(opts.period)
await renderCompare(range, opts.provider)
})
program
.command('models')
.description('Per-model token + cost table, optionally exploded by task type')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', '30days')
.option('--from <date>', 'Custom range start (YYYY-MM-DD)')
.option('--to <date>', 'Custom range end (YYYY-MM-DD)')
.option('--provider <provider>', 'Filter by provider (e.g. claude, codex, cursor)', 'all')
.option('--task <category>', 'Filter to one task type (e.g. feature, debugging, refactoring)')
.option('--by-task', 'One row per (provider, model, task) instead of one row per (provider, model)')
.option('--top <n>', 'Show only the top N rows', (v: string) => parseInt(v, 10))
.option('--min-cost <usd>', 'Hide rows below this cost threshold', (v: string) => parseFloat(v))
.option('--no-totals', 'Suppress the footer totals row')
.option('--format <format>', 'Output format: table, markdown, json, csv', 'table')
.action(async (opts) => {
const { aggregateModels, renderTable, renderMarkdown, renderJson, renderCsv } = await import('./models-report.js')
await loadPricing()
await hydrateCache()
let range
if (opts.from || opts.to) {
const customRange = parseDateRangeFlags(opts.from, opts.to)
if (!customRange) {
process.stderr.write('codeburn: --from and --to must be valid YYYY-MM-DD dates\n')
process.exit(1)
}
range = customRange
} else {
range = getDateRange(opts.period).range
}
const projects = await parseAllSessions(range, opts.provider)
const rows = await aggregateModels(projects, {
byTask: !!opts.byTask,
taskFilter: opts.task,
topN: typeof opts.top === 'number' && Number.isFinite(opts.top) ? opts.top : undefined,
minCost: typeof opts.minCost === 'number' && Number.isFinite(opts.minCost) ? opts.minCost : 0.01,
})
const fmt = (opts.format ?? 'table').toLowerCase()
if (rows.length === 0 && (fmt === 'table' || fmt === 'markdown')) {
process.stdout.write('No model usage found for the selected period.\n')
return
}
if (fmt === 'json') {
process.stdout.write(renderJson(rows) + '\n')
} else if (fmt === 'csv') {
process.stdout.write(renderCsv(rows, { byTask: !!opts.byTask }) + '\n')
} else if (fmt === 'markdown' || fmt === 'md') {
process.stdout.write(renderMarkdown(rows, { byTask: !!opts.byTask, showTotals: opts.totals !== false }) + '\n')
} else if (fmt === 'table') {
process.stdout.write(renderTable(rows, { byTask: !!opts.byTask, showTotals: opts.totals !== false }) + '\n')
} else {
process.stderr.write(`codeburn: unknown --format "${opts.format}". Choose table, markdown, json, or csv.\n`)
process.exit(1)
}
})
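The `models` command hands `--from`/`--to` to a `parseDateRangeFlags` helper and treats `null` as "invalid input". The helper's real implementation lives elsewhere in the codebase; the sketch below is only an assumed minimal shape matching how the command uses it (strict `YYYY-MM-DD` validation, one `null` for either bad flag):

```typescript
type DateRange = { start: Date; end: Date }

// Assumed minimal shape of parseDateRangeFlags: strict YYYY-MM-DD
// validation, returning null so the caller can print a single error
// for either flag. The shipped helper may differ.
function parseDateRangeFlags(from?: string, to?: string): DateRange | null {
  const DATE_RE = /^\d{4}-\d{2}-\d{2}$/
  const parse = (s: string): Date | null => {
    if (!DATE_RE.test(s)) return null
    const d = new Date(`${s}T00:00:00`)
    return Number.isNaN(d.getTime()) ? null : d
  }
  const start = from ? parse(from) : new Date(0) // open-ended start
  const end = to ? parse(to) : new Date()        // open-ended end
  if (!start || !end) return null
  return { start, end }
}
```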
program
.command('yield')
.description('Track which AI spend shipped to main vs reverted/abandoned (experimental)')
.option('-p, --period <period>', 'Analysis period: today, week, 30days, month, all', 'week')
.action(async (opts) => {
const { computeYield, formatYieldSummary } = await import('./yield.js')
await loadPricing()
await hydrateCache()
const { range, label } = getDateRange(opts.period)
console.log(`\n Analyzing yield for ${label}...\n`)
const summary = await computeYield(range, process.cwd())
console.log(formatYieldSummary(summary))
})
program.parse()

View file

@@ -25,6 +25,7 @@ type SnapshotEntry = [number, number, number | null, number | null]
const LITELLM_URL = 'https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json'
const CACHE_TTL_MS = 24 * 60 * 60 * 1000
const WEB_SEARCH_COST = 0.01
const ONE_HOUR_CACHE_WRITE_MULTIPLIER_FROM_FIVE_MINUTE_RATE = 1.6
const FAST_MULTIPLIERS: Record<string, number> = {
'claude-opus-4-7': 6,
@@ -166,6 +167,7 @@ const BUILTIN_ALIASES: Record<string, string> = {
'copilot-auto': 'claude-sonnet-4-5',
'copilot-openai-auto': 'gpt-5.3-codex',
'copilot-anthropic-auto': 'claude-sonnet-4-5',
'ibm-bob-auto': 'claude-sonnet-4-5',
'kiro-auto': 'claude-sonnet-4-5',
'cline-auto': 'claude-sonnet-4-5',
'openclaw-auto': 'claude-sonnet-4-5',
@@ -310,6 +312,7 @@ export function calculateCost(
cacheReadTokens: number,
webSearchRequests: number,
speed: 'standard' | 'fast' = 'standard',
oneHourCacheCreationTokens = 0,
): number {
const costs = getModelCosts(model)
if (!costs) {
@@ -335,11 +338,15 @@
// from real spend in aggregate totals. NaN is also handled here; the
// arithmetic below short-circuits to 0 when any operand is non-finite.
const safe = (n: number) => (Number.isFinite(n) && n > 0 ? n : 0)
const safeOneHourCacheCreation = safe(oneHourCacheCreationTokens)
const safeCacheCreation = Math.max(safe(cacheCreationTokens), safeOneHourCacheCreation)
const safeFiveMinuteCacheCreation = Math.max(0, safeCacheCreation - safeOneHourCacheCreation)
return multiplier * (
safe(inputTokens) * costs.inputCostPerToken +
safe(outputTokens) * costs.outputCostPerToken +
safe(cacheCreationTokens) * costs.cacheWriteCostPerToken +
safeFiveMinuteCacheCreation * costs.cacheWriteCostPerToken +
safeOneHourCacheCreation * costs.cacheWriteCostPerToken * ONE_HOUR_CACHE_WRITE_MULTIPLIER_FROM_FIVE_MINUTE_RATE +
safe(cacheReadTokens) * costs.cacheReadCostPerToken +
safe(webSearchRequests) * costs.webSearchCostPerRequest
)
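The split-rate arithmetic above can be checked in isolation. In the sketch below, the $6.25-per-million five-minute write rate is an assumed example figure (1.25x a hypothetical $5/M base input rate, used only for illustration); the 1.6x relationship between the two write durations is the one the diff encodes in `ONE_HOUR_CACHE_WRITE_MULTIPLIER_FROM_FIVE_MINUTE_RATE` (2x base input divided by the 1.25x five-minute rate):

```typescript
// Assumed example rate: 1.25x a hypothetical $5/M input price.
const FIVE_MINUTE_WRITE_RATE = 6.25e-6
// From the diff: 1-hour writes cost 1.6x the 5-minute write rate.
const ONE_HOUR_MULTIPLIER = 1.6

function cacheWriteCost(totalWriteTokens: number, oneHourTokens: number): number {
  const safe = (n: number) => (Number.isFinite(n) && n > 0 ? n : 0)
  // Clamp the 1-hour share to the total, then bill the remainder at 5m.
  const oneHour = Math.min(safe(oneHourTokens), safe(totalWriteTokens))
  const fiveMinute = safe(totalWriteTokens) - oneHour
  return (
    fiveMinute * FIVE_MINUTE_WRITE_RATE +
    oneHour * FIVE_MINUTE_WRITE_RATE * ONE_HOUR_MULTIPLIER
  )
}
```

Under these example rates, 1M tokens of five-minute writes come to $6.25, the same 1M tokens as one-hour writes come to $10, and a 100k-token write with a 60k one-hour share comes to $0.85, matching the ratios exercised by the tests further down.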
@@ -351,6 +358,7 @@ const autoModelNames: Record<string, string> = {
'copilot-auto': 'Copilot (auto)',
'copilot-openai-auto': 'Copilot (OpenAI)',
'copilot-anthropic-auto': 'Copilot (Anthropic)',
'ibm-bob-auto': 'IBM Bob (auto)',
'kiro-auto': 'Kiro (auto)',
'cline-auto': 'Cline (auto)',
'openclaw-auto': 'OpenClaw (auto)',

View file

@@ -93,16 +93,39 @@ function getMessageId(entry: JournalEntry): string | null {
return msg?.id ?? null
}
function positiveNumber(n: number | undefined): number {
return n !== undefined && Number.isFinite(n) && n > 0 ? n : 0
}
function extractClaudeCacheCreation(usage: AssistantMessageContent['usage']): { totalTokens: number; oneHourTokens: number } {
const legacyTotal = positiveNumber(usage.cache_creation_input_tokens)
const cacheCreation = usage.cache_creation
const fiveMinuteTokens = positiveNumber(cacheCreation?.ephemeral_5m_input_tokens)
const oneHourTokens = positiveNumber(cacheCreation?.ephemeral_1h_input_tokens)
const splitTotal = fiveMinuteTokens + oneHourTokens
if (splitTotal === 0) return { totalTokens: legacyTotal, oneHourTokens: 0 }
// Valid Claude usage reports the legacy total and split total as equal.
// Keep the larger value so malformed partial splits do not drop tokens.
const totalTokens = Math.max(legacyTotal, splitTotal)
return {
totalTokens,
oneHourTokens: Math.min(oneHourTokens, totalTokens),
}
}
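To make the reconciliation rules above easy to check, here is the same logic restated as a standalone function. The usage field names come straight from the diff; the token counts in the checks are illustrative:

```typescript
type CacheUsage = {
  cache_creation_input_tokens?: number
  cache_creation?: {
    ephemeral_5m_input_tokens?: number
    ephemeral_1h_input_tokens?: number
  }
}

const pos = (n: number | undefined) =>
  n !== undefined && Number.isFinite(n) && n > 0 ? n : 0

// Same reconciliation as extractClaudeCacheCreation: prefer the 5m/1h
// split when present, but never report fewer tokens than the legacy
// total, so a malformed record that drops half the split loses nothing.
function reconcile(usage: CacheUsage): { totalTokens: number; oneHourTokens: number } {
  const legacy = pos(usage.cache_creation_input_tokens)
  const fiveMin = pos(usage.cache_creation?.ephemeral_5m_input_tokens)
  const oneHour = pos(usage.cache_creation?.ephemeral_1h_input_tokens)
  const split = fiveMin + oneHour
  if (split === 0) return { totalTokens: legacy, oneHourTokens: 0 }
  const totalTokens = Math.max(legacy, split)
  return { totalTokens, oneHourTokens: Math.min(oneHour, totalTokens) }
}
```

A legacy-only record keeps its whole total at the five-minute rate; a split record keeps its one-hour share; a partial split that exceeds the legacy total wins, so no tokens are dropped.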
function parseApiCall(entry: JournalEntry): ParsedApiCall | null {
if (entry.type !== 'assistant') return null
const msg = entry.message as AssistantMessageContent | undefined
if (!msg?.usage || !msg?.model) return null
const usage = msg.usage
const cacheCreation = extractClaudeCacheCreation(usage)
const tokens: TokenUsage = {
inputTokens: usage.input_tokens ?? 0,
outputTokens: usage.output_tokens ?? 0,
cacheCreationInputTokens: usage.cache_creation_input_tokens ?? 0,
cacheCreationInputTokens: cacheCreation.totalTokens,
cacheReadInputTokens: usage.cache_read_input_tokens ?? 0,
cachedInputTokens: 0,
reasoningTokens: 0,
@@ -119,6 +142,7 @@ function parseApiCall(entry: JournalEntry): ParsedApiCall | null {
tokens.cacheReadInputTokens,
tokens.webSearchRequests,
usage.speed ?? 'standard',
cacheCreation.oneHourTokens,
)
const bashCmds = extractBashCommandsFromContent(msg.content ?? [])
@@ -564,7 +588,7 @@ async function parseProviderSources(
const provider = await getProvider(providerName)
if (!provider) return []
const sessionMap = new Map<string, { project: string; turns: ClassifiedTurn[] }>()
const sessionMap = new Map<string, { project: string; projectPath?: string; turns: ClassifiedTurn[] }>()
try {
for (const source of sources) {
@@ -589,13 +613,15 @@
const turn = providerCallToTurn(call)
const classified = classifyTurn(turn)
const key = `${providerName}:${call.sessionId}:${source.project}`
const project = call.project ?? source.project
const key = `${providerName}:${call.sessionId}:${project}`
const existing = sessionMap.get(key)
if (existing) {
existing.turns.push(classified)
if (!existing.projectPath && call.projectPath) existing.projectPath = call.projectPath
} else {
sessionMap.set(key, { project: source.project, turns: [classified] })
sessionMap.set(key, { project, projectPath: call.projectPath, turns: [classified] })
}
}
} catch (err) {
@@ -614,22 +640,26 @@
}
}
const projectMap = new Map<string, SessionSummary[]>()
for (const [key, { project, turns }] of sessionMap) {
const projectMap = new Map<string, { projectPath?: string; sessions: SessionSummary[] }>()
for (const [key, { project, projectPath, turns }] of sessionMap) {
const sessionId = key.split(':')[1] ?? key
const session = buildSessionSummary(sessionId, project, turns)
if (session.apiCalls > 0) {
const existing = projectMap.get(project) ?? []
existing.push(session)
projectMap.set(project, existing)
const existing = projectMap.get(project)
if (existing) {
existing.sessions.push(session)
if (!existing.projectPath && projectPath) existing.projectPath = projectPath
} else {
projectMap.set(project, { projectPath, sessions: [session] })
}
}
}
const projects: ProjectSummary[] = []
for (const [dirName, sessions] of projectMap) {
for (const [dirName, { projectPath, sessions }] of projectMap) {
projects.push({
project: dirName,
projectPath: unsanitizePath(dirName),
projectPath: projectPath ?? unsanitizePath(dirName),
sessions,
totalCostUSD: sessions.reduce((s, sess) => s + sess.totalCostUSD, 0),
totalApiCalls: sessions.reduce((s, sess) => s + sess.apiCalls, 0),

View file

@@ -329,7 +329,8 @@ const USER_MESSAGES_QUERY = `
// the whole template. The original combined string is preserved as
// BUBBLE_QUERY_SINCE for any caller that doesn't want the cap.
const BUBBLE_QUERY_SINCE_HEAD = BUBBLE_QUERY_BASE + `
AND (json_extract(value, '$.createdAt') > ? OR json_extract(value, '$.createdAt') IS NULL)`
AND json_extract(value, '$.createdAt') IS NOT NULL
AND json_extract(value, '$.createdAt') > ?`
const BUBBLE_QUERY_SINCE_TAIL = `
ORDER BY ROWID ASC
`
@@ -458,6 +459,7 @@ function parseBubbles(db: SqliteDatabase, seenKeys: Set<string>): { calls: Parse
}
const createdAt = row.created_at ?? ''
if (!createdAt) continue
// The JSON `conversationId` field on bubbles is empty in current
// Cursor builds. The real composerId lives in the row key
// `bubbleId:<composerId>:<bubbleUuid>`. Extract from the key so the
@@ -487,7 +489,7 @@
const costUSD = calculateCost(pricingModel, inputTokens, outputTokens, 0, 0, 0)
const timestamp = createdAt || new Date().toISOString()
const timestamp = createdAt
const userQuestion = takeUserMessage(userMessages, conversationId)
const assistantText = blobToText(row.user_text)
const userText = (userQuestion + ' ' + assistantText).trim()
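The key-based composerId recovery described in the comment above reduces to a simple split on the row key. A standalone sketch (the example keys below are made up, in the `bubbleId:<composerId>:<bubbleUuid>` shape the comment documents):

```typescript
// Bubble rows are keyed `bubbleId:<composerId>:<bubbleUuid>`; the JSON
// body's conversationId is empty in current Cursor builds, so the middle
// key segment is the only reliable conversation id.
function composerIdFromKey(rowKey: string): string | null {
  const parts = rowKey.split(':')
  return parts.length === 3 && parts[0] === 'bubbleId' && parts[1] !== ''
    ? parts[1]
    : null
}
```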

59
src/providers/ibm-bob.ts Normal file
View file

@@ -0,0 +1,59 @@
import { join } from 'path'
import { homedir } from 'os'
import { getShortModelName } from '../models.js'
import { discoverClineTasksInBaseDirs, createClineParser } from './vscode-cline-parser.js'
import type { Provider, SessionSource, SessionParser } from './types.js'
const PROVIDER_NAME = 'ibm-bob'
const DISPLAY_NAME = 'IBM Bob'
const EXTENSION_ID = 'ibm.bob-code'
const FALLBACK_MODEL = 'ibm-bob-auto'
export function getIBMBobGlobalStorageDirs(): string[] {
const home = homedir()
if (process.platform === 'darwin') {
return [
join(home, 'Library', 'Application Support', 'IBM Bob', 'User', 'globalStorage', EXTENSION_ID),
join(home, 'Library', 'Application Support', 'Bob-IDE', 'User', 'globalStorage', EXTENSION_ID),
]
}
if (process.platform === 'win32') {
const appData = process.env['APPDATA'] ?? join(home, 'AppData', 'Roaming')
return [
join(appData, 'IBM Bob', 'User', 'globalStorage', EXTENSION_ID),
join(appData, 'Bob-IDE', 'User', 'globalStorage', EXTENSION_ID),
]
}
const configHome = process.env['XDG_CONFIG_HOME'] ?? join(home, '.config')
return [
join(configHome, 'IBM Bob', 'User', 'globalStorage', EXTENSION_ID),
join(configHome, 'Bob-IDE', 'User', 'globalStorage', EXTENSION_ID),
]
}
export function createIBMBobProvider(overrideDir?: string): Provider {
return {
name: PROVIDER_NAME,
displayName: DISPLAY_NAME,
modelDisplayName(model: string): string {
return getShortModelName(model)
},
toolDisplayName(rawTool: string): string {
return rawTool
},
async discoverSessions(): Promise<SessionSource[]> {
const dirs = overrideDir ? [overrideDir] : getIBMBobGlobalStorageDirs()
return discoverClineTasksInBaseDirs(dirs, PROVIDER_NAME, DISPLAY_NAME)
},
createSessionParser(source: SessionSource, seenKeys: Set<string>): SessionParser {
return createClineParser(source, seenKeys, PROVIDER_NAME, FALLBACK_MODEL)
},
}
}
export const ibmBob = createIBMBobProvider()

View file

@@ -3,6 +3,7 @@ import { codex } from './codex.js'
import { copilot } from './copilot.js'
import { droid } from './droid.js'
import { gemini } from './gemini.js'
import { ibmBob } from './ibm-bob.js'
import { kiloCode } from './kilo-code.js'
import { kiro } from './kiro.js'
import { openclaw } from './openclaw.js'
@@ -101,7 +102,7 @@ async function loadCrush(): Promise<Provider | null> {
}
}
const coreProviders: Provider[] = [claude, codex, copilot, droid, gemini, kiloCode, kiro, openclaw, pi, omp, qwen, rooCode]
const coreProviders: Provider[] = [claude, codex, copilot, droid, gemini, ibmBob, kiloCode, kiro, openclaw, pi, omp, qwen, rooCode]
export async function getAllProviders(): Promise<Provider[]> {
const [ag, gs, cursor, opencode, cursorAgent, crush] = await Promise.all([loadAntigravity(), loadGoose(), loadCursor(), loadOpenCode(), loadCursorAgent(), loadCrush()])

View file

@@ -64,6 +64,25 @@ const toolNameMap: Record<string, string> = {
patch: 'Patch',
}
function normalizeToolName(rawTool?: string): string {
if (!rawTool) return ''
if (rawTool.startsWith('mcp__')) return rawTool
const builtIn = toolNameMap[rawTool]
if (builtIn) return builtIn
// OpenCode stores MCP calls as `<server>_<tool>` with no separate server field.
// Built-ins are handled above, and server ids are assumed not to contain `_`.
const serverSeparator = rawTool.indexOf('_')
if (serverSeparator > 0 && serverSeparator < rawTool.length - 1) {
const server = rawTool.slice(0, serverSeparator)
const tool = rawTool.slice(serverSeparator + 1)
return `mcp__${server}__${tool}`
}
return rawTool
}
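A self-contained copy of the normalization above makes the three branches easy to exercise. The built-in map is trimmed to the one entry visible in this hunk (`patch`), and the `context7` server name is just an example:

```typescript
const builtIns: Record<string, string> = { patch: 'Patch' } // subset for illustration

function normalizeToolName(rawTool?: string): string {
  if (!rawTool) return ''
  if (rawTool.startsWith('mcp__')) return rawTool // already canonical
  const builtIn = builtIns[rawTool]
  if (builtIn) return builtIn
  // OpenCode stores MCP calls as `<server>_<tool>`; server ids are
  // assumed not to contain `_`, so split on the first underscore.
  const sep = rawTool.indexOf('_')
  if (sep > 0 && sep < rawTool.length - 1) {
    return `mcp__${rawTool.slice(0, sep)}__${rawTool.slice(sep + 1)}`
  }
  return rawTool
}
```

Note the ordering matters: built-ins are looked up before the underscore heuristic, so a hypothetical built-in containing `_` would not be misread as an MCP call.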
function sanitize(dir: string): string {
return dir.replace(/^\//, '').replace(/\//g, '-')
}
@@ -233,7 +252,7 @@ function createParser(
const msgParts = partsByMsg.get(msg.id) ?? []
const toolParts = msgParts.filter((p) => p.type === 'tool')
const tools = toolParts
.map((p) => toolNameMap[p.tool ?? ''] ?? p.tool ?? '')
.map((p) => normalizeToolName(p.tool))
.filter(Boolean)
const bashCommands = toolParts

View file

@@ -27,6 +27,8 @@ export type ParsedProviderCall = {
deduplicationKey: string
userMessage: string
sessionId: string
project?: string
projectPath?: string
}
export type Provider = {

View file

@@ -24,6 +24,23 @@ export function getVSCodeGlobalStoragePath(extensionId: string): string {
export async function discoverClineTasks(extensionId: string, providerName: string, displayName: string, overrideDir?: string): Promise<SessionSource[]> {
const baseDir = overrideDir ?? getVSCodeGlobalStoragePath(extensionId)
return discoverClineTasksInBaseDirs([baseDir], providerName, displayName)
}
export async function discoverClineTasksInBaseDirs(baseDirs: string[], providerName: string, displayName: string): Promise<SessionSource[]> {
const sources: SessionSource[] = []
const seen = new Set<string>()
for (const baseDir of baseDirs) {
for (const source of await discoverClineTasksInBaseDir(baseDir, providerName, displayName)) {
if (seen.has(source.path)) continue
seen.add(source.path)
sources.push(source)
}
}
return sources
}
async function discoverClineTasksInBaseDir(baseDir: string, providerName: string, displayName: string): Promise<SessionSource[]> {
const tasksDir = join(baseDir, 'tasks')
const sources: SessionSource[] = []
@@ -50,28 +67,43 @@ export async function discoverClineTasks(extensionId: string, providerName: stri
}
const MODEL_TAG_RE = /<model>([^<]+)<\/model>/
const WORKSPACE_DIR_RE = /Current Workspace Directory \(([^)]+)\)/
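The two regexes above can be exercised directly. The environment-details snippet below is a made-up example in the shape the parser expects; note how a provider-prefixed model id like `anthropic/...` keeps only its last segment:

```typescript
const MODEL_TAG_RE = /<model>([^<]+)<\/model>/
const WORKSPACE_DIR_RE = /Current Workspace Directory \(([^)]+)\)/

// Hypothetical fragment of api_conversation_history.json user text.
const text = [
  'Current Workspace Directory (/home/dev/my-project) Files',
  '<model>anthropic/claude-sonnet-4-5</model>',
].join('\n')

const modelMatch = MODEL_TAG_RE.exec(text)
// Strip the provider prefix, as extractHistoryMeta does.
const model = modelMatch
  ? (modelMatch[1].includes('/') ? modelMatch[1].split('/').pop()! : modelMatch[1])
  : null
const workspace = WORKSPACE_DIR_RE.exec(text)?.[1] ?? null
```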
function extractModelFromHistory(taskDir: string): Promise<string> {
type HistoryMeta = { model: string; workspace: string | null }
function extractHistoryMeta(taskDir: string, fallbackModel: string): Promise<HistoryMeta> {
return readFile(join(taskDir, 'api_conversation_history.json'), 'utf-8')
.then(raw => {
const msgs = JSON.parse(raw) as Array<{ role?: string; content?: Array<{ text?: string }> }>
if (!Array.isArray(msgs)) return 'cline-auto'
if (!Array.isArray(msgs)) return { model: fallbackModel, workspace: null }
let model: string | null = null
let workspace: string | null = null
for (const msg of msgs) {
if (msg.role !== 'user' || !Array.isArray(msg.content)) continue
for (const block of msg.content) {
const match = typeof block.text === 'string' && MODEL_TAG_RE.exec(block.text)
if (match) {
const raw = match[1]
return raw.includes('/') ? raw.split('/').pop()! : raw
if (typeof block.text !== 'string') continue
if (!model) {
const mm = MODEL_TAG_RE.exec(block.text)
if (mm) model = mm[1].includes('/') ? mm[1].split('/').pop()! : mm[1]
}
if (!workspace) {
const wm = WORKSPACE_DIR_RE.exec(block.text)
if (wm) workspace = wm[1]
}
if (model && workspace) break
}
if (model && workspace) break
}
return 'cline-auto'
return { model: model ?? fallbackModel, workspace }
})
.catch(() => 'cline-auto')
.catch(() => ({ model: fallbackModel, workspace: null }))
}
export function createClineParser(source: SessionSource, seenKeys: Set<string>, providerName: string): SessionParser {
function workspaceToProject(workspace: string): string {
return basename(workspace) || workspace
}
export function createClineParser(source: SessionSource, seenKeys: Set<string>, providerName: string, fallbackModel = 'cline-auto'): SessionParser {
return {
async *parse(): AsyncGenerator<ParsedProviderCall> {
const taskDir = source.path
@@ -93,7 +125,10 @@ export function createClineParser(source: SessionSource, seenKeys: Set<string>,
if (!Array.isArray(uiMessages)) return
const model = await extractModelFromHistory(taskDir)
const meta = await extractHistoryMeta(taskDir, fallbackModel)
const model = meta.model
const project = meta.workspace ? workspaceToProject(meta.workspace) : undefined
const projectPath = meta.workspace ?? undefined
let userMessage = ''
for (const msg of uiMessages) {
@@ -156,6 +191,8 @@ export function createClineParser(source: SessionSource, seenKeys: Set<string>,
deduplicationKey: dedupKey,
userMessage: index === 0 ? userMessage : '',
sessionId: taskId,
project,
projectPath,
}
}
},

View file

@@ -25,6 +25,10 @@ export type ApiUsage = {
input_tokens: number
output_tokens: number
cache_creation_input_tokens?: number
cache_creation?: {
ephemeral_5m_input_tokens?: number
ephemeral_1h_input_tokens?: number
}
cache_read_input_tokens?: number
server_tool_use?: {
web_search_requests?: number

View file

@@ -104,6 +104,36 @@ describe('loadDailyCache', () => {
expect(existsSync(join(TMP_CACHE_ROOT, 'daily-cache.json.v2.bak'))).toBe(true)
})
it('discards a v5 cache because cached Claude costs predate 1-hour cache pricing', async () => {
const saved = {
version: 5,
lastComputedDate: '2026-05-01',
days: [{
date: '2026-05-01',
cost: 0.37575,
calls: 1,
sessions: 1,
inputTokens: 0,
outputTokens: 0,
cacheReadTokens: 0,
cacheWriteTokens: 60_120,
editTurns: 0,
oneShotTurns: 0,
models: { 'Opus 4.7': { calls: 1, cost: 0.37575, inputTokens: 0, outputTokens: 0, cacheReadTokens: 0, cacheWriteTokens: 60_120 } },
categories: {},
providers: { claude: { calls: 1, cost: 0.37575 } },
}],
}
const { writeFile, mkdir } = await import('fs/promises')
await mkdir(TMP_CACHE_ROOT, { recursive: true })
await writeFile(join(TMP_CACHE_ROOT, 'daily-cache.json'), JSON.stringify(saved), 'utf-8')
const cache = await loadDailyCache()
expect(cache.version).toBe(DAILY_CACHE_VERSION)
expect(cache.days).toEqual([])
expect(cache.lastComputedDate).toBeNull()
expect(existsSync(join(TMP_CACHE_ROOT, 'daily-cache.json.v5.bak'))).toBe(true)
})
it('round-trips a valid cache through save and load', async () => {
const saved: DailyCache = {
version: DAILY_CACHE_VERSION,

View file

@@ -1,5 +1,8 @@
import { homedir } from 'os'
import { describe, it, expect } from 'vitest'
import { shortProject } from '../src/dashboard.js'
import { formatCost } from '../src/format.js'
import type { ProjectSummary, SessionSummary } from '../src/types.js'
@@ -53,7 +56,7 @@ function makeProject(name: string, sessions: SessionSummary[]): ProjectSummary {
// Logic replicated from TopSessions component
function getTopSessions(projects: ProjectSummary[], n = 5) {
const all = projects.flatMap(p => p.sessions.map(s => ({ ...s, projectName: p.project })))
const all = projects.flatMap(p => p.sessions.map(s => ({ ...s, projectPath: p.projectPath })))
return [...all].sort((a, b) => b.totalCostUSD - a.totalCostUSD).slice(0, n)
}
@@ -99,6 +102,36 @@ describe('TopSessions - top-5 selection', () => {
})
})
describe('shortProject - path shortening', () => {
const home = homedir()
it('preserves directory names containing dashes', () => {
expect(shortProject(`${home}/work/my-project`)).toBe('work/my-project')
})
it('preserves directory names containing dots', () => {
expect(shortProject(`${home}/work/my.app.io`)).toBe('work/my.app.io')
})
it('returns "home" for the home dir itself', () => {
expect(shortProject(home)).toBe('home')
})
it('does not strip a sibling whose name shares the home prefix', () => {
const sibling = `${home}-backup/proj`
expect(shortProject(sibling).endsWith('proj')).toBe(true)
expect(shortProject(sibling)).not.toMatch(/^-/)
})
it('keeps only the last 3 segments for deeply nested paths', () => {
expect(shortProject(`${home}/a/b/c/d/e/f`)).toBe('d/e/f')
})
it('handles paths outside the home dir', () => {
expect(shortProject('/opt/myproject')).toBe('opt/myproject')
})
})
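The behaviors these tests pin down can be sketched as one small function. This is a guess at the shape of `shortProject`, not the shipped implementation; the home directory is passed as a parameter here so the sketch needs no `os.homedir()`:

```typescript
// Assumed shape of shortProject: the home dir itself collapses to
// "home", a strict `home + '/'` prefix (so `${home}-backup` siblings
// are untouched) is stripped, and at most the last 3 segments survive.
function shortProject(projectPath: string, home: string): string {
  if (projectPath === home) return 'home'
  const rel = projectPath.startsWith(home + '/')
    ? projectPath.slice(home.length + 1)
    : projectPath
  const segments = rel.split('/').filter(Boolean)
  return segments.slice(-3).join('/')
}
```

Because the path is split on `/` only, directory names containing dashes or dots pass through intact, which is exactly the dashboard bug the changelog entry describes fixing.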
describe('avg/s in ProjectBreakdown', () => {
it('returns dash for a project with no sessions', () => {
const project = makeProject('proj', [])

View file

@@ -46,8 +46,8 @@ describe('aggregateProjectsIntoDays', () => {
sessions: [{
sessionId: 's1',
project: 'p',
firstTimestamp: '2026-04-09T10:00:00Z',
lastTimestamp: '2026-04-10T08:00:00Z',
firstTimestamp: '2026-04-09T10:00:00',
lastTimestamp: '2026-04-10T08:00:00',
totalCostUSD: 10,
totalInputTokens: 0,
totalOutputTokens: 0,
@@ -57,14 +57,14 @@
turns: [
{
userMessage: 'hi',
timestamp: '2026-04-09T10:00:00Z',
timestamp: '2026-04-09T10:00:00',
sessionId: 's1',
category: 'coding',
retries: 0,
hasEdits: true,
assistantCalls: [
makeCall('2026-04-09T10:00:00Z', 4),
makeCall('2026-04-10T08:00:00Z', 6),
makeCall('2026-04-09T10:00:00', 4),
makeCall('2026-04-10T08:00:00', 6),
],
},
],
@@ -92,8 +92,8 @@
sessions: [{
sessionId: 's1',
project: 'p',
firstTimestamp: '2026-04-09T10:00:00Z',
lastTimestamp: '2026-04-09T10:05:00Z',
firstTimestamp: '2026-04-09T10:00:00',
lastTimestamp: '2026-04-09T10:05:00',
totalCostUSD: 3,
totalInputTokens: 0,
totalOutputTokens: 0,
@@ -103,12 +103,12 @@
turns: [
{
userMessage: 'hi',
timestamp: '2026-04-09T10:00:00Z',
timestamp: '2026-04-09T10:00:00',
sessionId: 's1',
category: 'coding',
retries: 0,
hasEdits: true,
assistantCalls: [makeCall('2026-04-09T10:00:00Z', 3)],
assistantCalls: [makeCall('2026-04-09T10:00:00', 3)],
},
],
modelBreakdown: {},
@@ -138,8 +138,8 @@
sessions: [{
sessionId: 's1',
project: 'p',
firstTimestamp: '2026-04-09T23:59:00Z',
lastTimestamp: '2026-04-10T00:10:00Z',
firstTimestamp: '2026-04-09T23:59:00',
lastTimestamp: '2026-04-10T00:10:00',
totalCostUSD: 1,
totalInputTokens: 0, totalOutputTokens: 0, totalCacheReadTokens: 0, totalCacheWriteTokens: 0,
apiCalls: 0,
@@ -151,7 +151,7 @@
}),
]
const days = aggregateProjectsIntoDays(projects)
const expectedDate = dateKey('2026-04-09T23:59:00Z')
const expectedDate = dateKey('2026-04-09T23:59:00')
expect(days[0]!.date).toBe(expectedDate)
expect(days[0]!.sessions).toBe(1)
})
@@ -162,18 +162,18 @@
sessions: [{
sessionId: 's1',
project: 'p',
firstTimestamp: '2026-04-10T10:00:00Z',
lastTimestamp: '2026-04-10T10:00:00Z',
firstTimestamp: '2026-04-10T10:00:00',
lastTimestamp: '2026-04-10T10:00:00',
totalCostUSD: 10,
totalInputTokens: 0, totalOutputTokens: 0, totalCacheReadTokens: 0, totalCacheWriteTokens: 0,
apiCalls: 2,
turns: [
{
userMessage: 'x', timestamp: '2026-04-10T10:00:00Z', sessionId: 's1',
userMessage: 'x', timestamp: '2026-04-10T10:00:00', sessionId: 's1',
category: 'coding', retries: 0, hasEdits: false,
assistantCalls: [
makeCall('2026-04-10T10:00:00Z', 7, 'Opus 4.7', 'claude'),
makeCall('2026-04-10T10:00:00Z', 3, 'gpt-5', 'codex'),
makeCall('2026-04-10T10:00:00', 7, 'Opus 4.7', 'claude'),
makeCall('2026-04-10T10:00:00', 3, 'gpt-5', 'codex'),
],
},
],

View file

@@ -158,6 +158,18 @@ describe('calculateCost - OMP names produce non-zero cost', () => {
})
})
describe('calculateCost - Claude cache write durations', () => {
it('prices 1-hour cache writes at 1.6x the 5-minute cache write rate', () => {
const fiveMinute = calculateCost('claude-opus-4-7', 0, 0, 1_000_000, 0, 0)
const oneHour = calculateCost('claude-opus-4-7', 0, 0, 1_000_000, 0, 0, 'standard', 1_000_000)
const mixed = calculateCost('claude-opus-4-7', 0, 0, 100_000, 0, 0, 'standard', 60_000)
expect(fiveMinute).toBeCloseTo(6.25, 6)
expect(oneHour).toBeCloseTo(10, 6)
expect(mixed).toBeCloseTo(0.85, 6)
})
})
describe('existing model names still resolve', () => {
it('canonical claude-opus-4-6', () => {
expect(getModelCosts('claude-opus-4-6')).not.toBeNull()

View file

@@ -31,7 +31,14 @@ function dayRange(day: string): DateRange {
}
}
async function writeClaudeSession(projectSlug: string, sessionId: string, cwd: string, timestamp: string): Promise<void> {
async function writeClaudeSession(
projectSlug: string,
sessionId: string,
cwd: string,
timestamp: string,
usage: Record<string, unknown> = { input_tokens: 100, output_tokens: 50 },
model = 'claude-sonnet-4-5',
): Promise<void> {
const projectDir = join(tmpDir, 'projects', projectSlug)
await mkdir(projectDir, { recursive: true })
const filePath = join(projectDir, `${sessionId}.jsonl`)
@@ -44,12 +51,9 @@
id: `msg-${sessionId}`,
type: 'message',
role: 'assistant',
model: 'claude-sonnet-4-5',
model,
content: [],
usage: {
input_tokens: 100,
output_tokens: 50,
},
usage,
},
}) + '\n')
@@ -158,3 +162,51 @@
expect(projects[0]!.projectPath).toBe('fallback/slug')
})
})
describe('Claude cache creation pricing', () => {
it('prices 1-hour cache writes from usage.cache_creation at the 2x input rate', async () => {
await writeClaudeSession(
'cache-pricing',
'one-hour-cache',
'/tmp/cache-pricing',
'2099-05-05T10:00:00.000Z',
{
input_tokens: 0,
output_tokens: 0,
cache_creation_input_tokens: 60_120,
cache_creation: {
ephemeral_5m_input_tokens: 0,
ephemeral_1h_input_tokens: 60_120,
},
},
'claude-opus-4-7',
)
const projects = await parseAllSessions(dayRange('2099-05-05'), 'claude')
expect(projects).toHaveLength(1)
expect(projects[0]!.sessions[0]!.totalCacheWriteTokens).toBe(60_120)
expect(projects[0]!.totalCostUSD).toBeCloseTo(0.6012, 6)
})
it('falls back to the legacy 5-minute cache write rate when split fields are absent', async () => {
await writeClaudeSession(
'legacy-cache-pricing',
'legacy-cache',
'/tmp/legacy-cache-pricing',
'2099-05-06T10:00:00.000Z',
{
input_tokens: 0,
output_tokens: 0,
cache_creation_input_tokens: 60_120,
},
'claude-opus-4-7',
)
const projects = await parseAllSessions(dayRange('2099-05-06'), 'claude')
expect(projects).toHaveLength(1)
expect(projects[0]!.sessions[0]!.totalCacheWriteTokens).toBe(60_120)
expect(projects[0]!.totalCostUSD).toBeCloseTo(0.37575, 6)
})
})

View file

@@ -3,7 +3,7 @@ import { providers, getAllProviders } from '../src/providers/index.js'
describe('provider registry', () => {
it('has core providers registered synchronously', () => {
expect(providers.map(p => p.name)).toEqual(['claude', 'codex', 'copilot', 'droid', 'gemini', 'kilo-code', 'kiro', 'openclaw', 'pi', 'omp', 'qwen', 'roo-code'])
expect(providers.map(p => p.name)).toEqual(['claude', 'codex', 'copilot', 'droid', 'gemini', 'ibm-bob', 'kilo-code', 'kiro', 'openclaw', 'pi', 'omp', 'qwen', 'roo-code'])
})
it('includes sqlite providers after async load', async () => {

View file

@@ -0,0 +1,164 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest'
import { mkdtemp, mkdir, writeFile, rm } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { ibmBob, createIBMBobProvider } from '../../src/providers/ibm-bob.js'
import type { ParsedProviderCall } from '../../src/providers/types.js'
let tmpDir: string
function makeUiMessages(opts: {
tokensIn?: number
tokensOut?: number
cacheReads?: number
cacheWrites?: number
cost?: number
userMessage?: string
ts?: number
}): string {
const messages: unknown[] = []
if (opts.userMessage) {
messages.push({ type: 'say', say: 'user_feedback', text: opts.userMessage, ts: 1_700_000_000_000 })
}
const apiData: Record<string, unknown> = {
tokensIn: opts.tokensIn ?? 100,
tokensOut: opts.tokensOut ?? 50,
cacheReads: opts.cacheReads ?? 0,
cacheWrites: opts.cacheWrites ?? 0,
}
if (opts.cost !== undefined) apiData.cost = opts.cost
messages.push({
type: 'say',
say: 'api_req_started',
text: JSON.stringify(apiData),
ts: opts.ts ?? 1_700_000_001_000,
})
return JSON.stringify(messages)
}
function makeApiHistory(model?: string): string {
const modelTag = model ? `<model>${model}</model>` : ''
return JSON.stringify([
{ role: 'user', content: [{ type: 'text', text: `hello\n<environment_details>\n${modelTag}\n</environment_details>` }] },
{ role: 'assistant', content: [{ type: 'text', text: 'response' }] },
])
}
describe('ibm-bob provider - discovery and parsing', () => {
beforeEach(async () => {
tmpDir = await mkdtemp(join(tmpdir(), 'ibm-bob-test-'))
})
afterEach(async () => {
await rm(tmpDir, { recursive: true, force: true })
})
it('discovers IBM Bob task directories with ui_messages.json', async () => {
const task1 = join(tmpDir, 'tasks', 'task-a')
const task2 = join(tmpDir, 'tasks', 'task-b')
await mkdir(task1, { recursive: true })
await mkdir(task2, { recursive: true })
await writeFile(join(task1, 'ui_messages.json'), '[]')
await writeFile(join(task2, 'ui_messages.json'), '[]')
const provider = createIBMBobProvider(tmpDir)
const sessions = await provider.discoverSessions()
expect(sessions).toHaveLength(2)
expect(sessions.every(s => s.provider === 'ibm-bob')).toBe(true)
expect(sessions.every(s => s.project === 'IBM Bob')).toBe(true)
})
it('skips tasks without ui_messages.json', async () => {
const task = join(tmpDir, 'tasks', 'task-no-ui')
await mkdir(task, { recursive: true })
await writeFile(join(task, 'api_conversation_history.json'), '[]')
const provider = createIBMBobProvider(tmpDir)
const sessions = await provider.discoverSessions()
expect(sessions).toHaveLength(0)
})
it('parses token usage and provider cost from Bob ui messages', async () => {
const taskDir = join(tmpDir, 'tasks', 'task-001')
await mkdir(taskDir, { recursive: true })
await writeFile(join(taskDir, 'ui_messages.json'), makeUiMessages({
tokensIn: 250,
tokensOut: 125,
cacheReads: 60,
cacheWrites: 30,
cost: 0.08,
userMessage: 'modernize this class',
}))
await writeFile(join(taskDir, 'api_conversation_history.json'), makeApiHistory('anthropic/claude-sonnet-4-6'))
const source = { path: taskDir, project: 'IBM Bob', provider: 'ibm-bob' }
const calls: ParsedProviderCall[] = []
for await (const call of ibmBob.createSessionParser(source, new Set()).parse()) calls.push(call)
expect(calls).toHaveLength(1)
expect(calls[0]!).toMatchObject({
provider: 'ibm-bob',
model: 'claude-sonnet-4-6',
inputTokens: 250,
outputTokens: 125,
cacheReadInputTokens: 60,
cacheCreationInputTokens: 30,
costUSD: 0.08,
userMessage: 'modernize this class',
sessionId: 'task-001',
})
expect(calls[0]!.deduplicationKey).toBe('ibm-bob:task-001:0')
})
it('falls back to IBM Bob auto model when history has no model tag', async () => {
const taskDir = join(tmpDir, 'tasks', 'task-002')
await mkdir(taskDir, { recursive: true })
await writeFile(join(taskDir, 'ui_messages.json'), makeUiMessages({ tokensIn: 100, tokensOut: 50 }))
await writeFile(join(taskDir, 'api_conversation_history.json'), makeApiHistory())
const source = { path: taskDir, project: 'IBM Bob', provider: 'ibm-bob' }
const calls: ParsedProviderCall[] = []
for await (const call of ibmBob.createSessionParser(source, new Set()).parse()) calls.push(call)
expect(calls).toHaveLength(1)
expect(calls[0]!.model).toBe('ibm-bob-auto')
expect(calls[0]!.costUSD).toBeGreaterThan(0)
})
it('deduplicates across parser runs', async () => {
const taskDir = join(tmpDir, 'tasks', 'task-003')
await mkdir(taskDir, { recursive: true })
await writeFile(join(taskDir, 'ui_messages.json'), makeUiMessages({ tokensIn: 100, tokensOut: 50 }))
const source = { path: taskDir, project: 'IBM Bob', provider: 'ibm-bob' }
const seenKeys = new Set<string>()
const calls1: ParsedProviderCall[] = []
for await (const call of ibmBob.createSessionParser(source, seenKeys).parse()) calls1.push(call)
const calls2: ParsedProviderCall[] = []
for await (const call of ibmBob.createSessionParser(source, seenKeys).parse()) calls2.push(call)
expect(calls1).toHaveLength(1)
expect(calls2).toHaveLength(0)
})
})
describe('ibm-bob provider - metadata', () => {
it('has correct name and displayName', () => {
expect(ibmBob.name).toBe('ibm-bob')
expect(ibmBob.displayName).toBe('IBM Bob')
})
it('uses shared short model display names', () => {
expect(ibmBob.modelDisplayName('ibm-bob-auto')).toBe('IBM Bob (auto)')
expect(ibmBob.modelDisplayName('claude-sonnet-4-6')).toBe('Sonnet 4.6')
})
})
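The model-tag behavior these tests assume (read a `<model>` tag out of the environment details in the conversation history, strip any `provider/` prefix, and fall back to `ibm-bob-auto` when no tag exists) can be sketched as below. This is a hypothetical helper written to match the fixtures, not the provider's real implementation, which may scan messages differently.

```typescript
// Sketch: extract the model name from IBM Bob's api_conversation_history.json
// entries, matching the test fixtures above. Assumption: the first <model> tag
// found anywhere in message text wins.
interface HistoryPart { type: string; text?: string }
interface HistoryMessage { role: string; content: HistoryPart[] }

function extractModelFromHistory(history: HistoryMessage[]): string {
  for (const msg of history) {
    for (const part of msg.content) {
      const match = part.text ? /<model>([^<]+)<\/model>/.exec(part.text) : null
      if (match) {
        const tag = match[1]!.trim()
        // 'anthropic/claude-sonnet-4-6' -> 'claude-sonnet-4-6'
        return tag.includes('/') ? tag.slice(tag.lastIndexOf('/') + 1) : tag
      }
    }
  }
  // No tag anywhere: report the synthetic auto model.
  return 'ibm-bob-auto'
}
```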


@ -337,6 +337,124 @@ skipUnlessSqlite('opencode provider - session parsing', () => {
expect(call.deduplicationKey).toBe('opencode:sess-1:msg-2')
})
it('normalizes opencode MCP tool names for shared MCP reporting', async () => {
const dbPath = createTestDb(tmpDir)
withTestDb(dbPath, (db) => {
insertSession(db, 'sess-1')
insertMessage(db, 'msg-1', 'sess-1', 1700000000000, { role: 'user' })
insertPart(db, 'part-1', 'msg-1', 'sess-1', { type: 'text', text: 'look up the ClickUp task' })
insertMessage(db, 'msg-2', 'sess-1', 1700000001000, {
role: 'assistant',
modelID: 'claude-opus-4-6',
cost: 0.05,
tokens: { input: 100, output: 200, reasoning: 0, cache: { read: 0, write: 0 } },
})
insertPart(db, 'part-2', 'msg-2', 'sess-1', {
type: 'tool',
tool: 'clickup_clickup_get_task',
state: { status: 'completed', input: {} },
})
insertPart(db, 'part-3', 'msg-2', 'sess-1', {
type: 'tool',
tool: 'figma_get_file',
state: { status: 'completed', input: {} },
})
})
const calls = await collectCalls(createOpenCodeProvider(tmpDir), dbPath, 'sess-1')
expect(calls).toHaveLength(1)
expect(calls[0]!.tools).toEqual([
'mcp__clickup__clickup_get_task',
'mcp__figma__get_file',
])
})
it('preserves already-normalized MCP tool names', async () => {
const dbPath = createTestDb(tmpDir)
withTestDb(dbPath, (db) => {
insertSession(db, 'sess-1')
insertMessage(db, 'msg-1', 'sess-1', 1700000001000, {
role: 'assistant',
modelID: 'claude-opus-4-6',
cost: 0.05,
tokens: { input: 100, output: 200, reasoning: 0, cache: { read: 0, write: 0 } },
})
insertPart(db, 'part-1', 'msg-1', 'sess-1', {
type: 'tool',
tool: 'mcp__github__search_code',
state: { status: 'completed', input: {} },
})
})
const calls = await collectCalls(createOpenCodeProvider(tmpDir), dbPath, 'sess-1')
expect(calls).toHaveLength(1)
expect(calls[0]!.tools).toEqual(['mcp__github__search_code'])
})
it('keeps extension tool names without a server prefix as regular tools', async () => {
const dbPath = createTestDb(tmpDir)
withTestDb(dbPath, (db) => {
insertSession(db, 'sess-1')
insertMessage(db, 'msg-1', 'sess-1', 1700000001000, {
role: 'assistant',
modelID: 'claude-opus-4-6',
cost: 0.05,
tokens: { input: 100, output: 200, reasoning: 0, cache: { read: 0, write: 0 } },
})
insertPart(db, 'part-1', 'msg-1', 'sess-1', {
type: 'tool',
tool: 'customtool',
state: { status: 'completed', input: {} },
})
})
const calls = await collectCalls(createOpenCodeProvider(tmpDir), dbPath, 'sess-1')
expect(calls).toHaveLength(1)
expect(calls[0]!.tools).toEqual(['customtool'])
})
it('keeps malformed server-prefixed tool names as regular tools', async () => {
const dbPath = createTestDb(tmpDir)
withTestDb(dbPath, (db) => {
insertSession(db, 'sess-1')
insertMessage(db, 'msg-1', 'sess-1', 1700000001000, {
role: 'assistant',
modelID: 'claude-opus-4-6',
cost: 0.05,
tokens: { input: 100, output: 200, reasoning: 0, cache: { read: 0, write: 0 } },
})
insertPart(db, 'part-1', 'msg-1', 'sess-1', {
type: 'tool',
tool: '_missing_server',
state: { status: 'completed', input: {} },
})
insertPart(db, 'part-2', 'msg-1', 'sess-1', {
type: 'tool',
tool: 'missing_',
state: { status: 'completed', input: {} },
})
insertPart(db, 'part-3', 'msg-1', 'sess-1', {
type: 'tool',
tool: '_',
state: { status: 'completed', input: {} },
})
})
const calls = await collectCalls(createOpenCodeProvider(tmpDir), dbPath, 'sess-1')
expect(calls).toHaveLength(1)
expect(calls[0]!.tools).toEqual([
'_missing_server',
'missing_',
'_',
])
})
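The normalization behavior the four tests above pin down can be sketched as a single split-on-first-underscore rule: `<server>_<tool>` becomes the canonical `mcp__<server>__<tool>`, while names that are already canonical, have no underscore, or would yield an empty server or tool part pass through untouched. This is a sketch consistent with the test expectations, not the provider's actual code, which may additionally exclude known built-in tool names.

```typescript
// Sketch: normalize OpenCode's '<server>_<tool>' names to 'mcp__<server>__<tool>'
// so the shared MCP pipeline recognizes them.
function normalizeMcpToolName(tool: string): string {
  if (tool.startsWith('mcp__')) return tool // already canonical, keep as-is
  const idx = tool.indexOf('_')
  // No underscore, empty server ('_missing_server'), or empty tool ('missing_', '_'):
  // treat as a regular (non-MCP) tool name.
  if (idx <= 0 || idx === tool.length - 1) return tool
  return `mcp__${tool.slice(0, idx)}__${tool.slice(idx + 1)}`
}
```

Note that the server is the segment before the *first* underscore, so `clickup_clickup_get_task` maps to `mcp__clickup__clickup_get_task` with the repeated server name preserved in the tool part.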
it('skips zero-token messages with zero cost', async () => {
const dbPath = createTestDb(tmpDir)
withTestDb(dbPath, (db) => {


@ -1,7 +1,7 @@
import { defineConfig } from 'tsup'
export default defineConfig({
entry: ['src/cli.ts'],
entry: ['src/main.ts'],
format: ['esm'],
target: 'node20',
outDir: 'dist',
@ -9,7 +9,4 @@ export default defineConfig({
splitting: false,
sourcemap: true,
dts: false,
banner: {
js: '#!/usr/bin/env node',
},
})