- Add OMP provider reading from ~/.omp/agent/sessions (same JSONL
format as Pi, shared parser)
- Parameterize discoverSessionsInDir with provider name so sessions
carry correct provider field
- Add BUILTIN_ALIASES for proxy model name variants (anthropic--claude-*
double-dash format) that don't match LiteLLM keys
- Add model-alias CLI command for user-defined name mappings
- Wire setModelAliases into preAction after config load
- Add modelAliases field to CodeburnConfig
- Update README: OMP in provider table, model-alias section
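The alias layering described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the alias key, the map shapes, and the `resolveModel` helper are assumptions; only `BUILTIN_ALIASES` and `setModelAliases` are names taken from the commit.

```typescript
// Sketch of alias resolution (shapes are assumptions): built-in aliases map
// proxy-style "anthropic--claude-*" double-dash names onto LiteLLM keys;
// user-defined aliases from config.modelAliases take precedence.
const BUILTIN_ALIASES: Record<string, string> = {
  "anthropic--claude-sonnet-4": "claude-sonnet-4", // illustrative entry
};

let userAliases: Record<string, string> = {};

// Called from preAction after the config is loaded.
function setModelAliases(aliases: Record<string, string>): void {
  userAliases = aliases;
}

function resolveModel(name: string): string {
  // User aliases win over built-ins; unknown names pass through unchanged.
  return userAliases[name] ?? BUILTIN_ALIASES[name] ?? name;
}
```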
Release notes cover the two merged PRs:
- #44: GitHub Copilot provider parsing ~/.copilot/session-state/ with
model tracking via session.model_change events. Adds fallback pricing
for six gpt/o3/o4 models. Copilot logs only output tokens, so cost
rows sit below actual API cost; documented in README and CHANGELOG.
- #51: All Time period (key 5, -p all), avg/s column in By Project,
and a new Top Sessions panel showing the five most expensive sessions
across all projects.
README: provider list updated (Copilot added, output-tokens-only caveat
alongside Cursor's Auto-mode estimation note), usage examples include
`-p all` and key `5`, provider filter list includes `copilot`.
CHANGELOG: 0.6.0 entry lists both features with contributor credits,
plus two fixes (longest-key-first model display sort and empty
firstTimestamp placeholder).
114 tests pass. Build succeeds at 132KB.
- init currentModel to '' and skip assistant messages before first
session.model_change to avoid silent misattribution
- add comment documenting why inputTokens is always 0
- fix delete_file tool mapping ('Edit' -> 'Delete')
- add schema doc comment to ToolRequest optional fields
- remove catch-all from CopilotEvent union for proper TS narrowing
- add tests: pre-model-change skip, workspace.yaml quote/comment strip,
longest-prefix model display name match
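The pre-model-change skip above can be sketched like this; the event and field names are assumptions modeled on the `session.model_change` events the commit mentions, not the real Copilot schema.

```typescript
// Sketch (names hypothetical): Copilot sessions only learn the model from
// session.model_change events, so assistant messages seen before the first
// such event carry no reliable model and are skipped rather than silently
// misattributed to a default.
type CopilotEvent =
  | { type: "session.model_change"; model: string }
  | { type: "assistant.message"; outputTokens: number };

function attributeOutputTokens(events: CopilotEvent[]): Map<string, number> {
  const byModel = new Map<string, number>();
  let currentModel = ""; // '' until the first model_change
  for (const ev of events) {
    if (ev.type === "session.model_change") {
      currentModel = ev.model;
    } else if (currentModel !== "") {
      byModel.set(currentModel, (byModel.get(currentModel) ?? 0) + ev.outputTokens);
    }
    // assistant messages before any model_change are dropped
  }
  return byModel;
}
```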
- TopSessions: show '----------' placeholder when session.firstTimestamp
is empty (Copilot provider yields '' when timestamp is missing)
- DashboardContent: add comment explaining undefined days tri-state
- tests/dashboard.test.ts: cover top-5 selection (fewer than 5 sessions,
tied costs, descending sort) and avg/s returning '-' for zero sessions
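The selection logic those tests cover can be sketched as below; the row shape and helper names are illustrative, not the actual TopSessions code.

```typescript
// Sketch of Top Sessions selection: sort by cost descending, take the
// first five. Sessions with an empty firstTimestamp render a fixed-width
// placeholder so the column stays aligned.
interface SessionRow { cost: number; firstTimestamp: string }

function topSessions(sessions: SessionRow[], n = 5): SessionRow[] {
  return [...sessions].sort((a, b) => b.cost - a.cost).slice(0, n);
}

function timestampCell(s: SessionRow): string {
  return s.firstTimestamp === "" ? "----------" : s.firstTimestamp;
}
```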
Authored-by: AgentSeal <hello@agentseal.org>
- Add 'all' period (key 5) to dashboard showing data from all time
- Daily Activity panel shows all available days when All Time is active
- Add avg cost per session column to Project breakdown
- Add Top Sessions panel highlighting the 5 most expensive sessions
Lists common signals in the dashboard (cache hit, retry rate,
model mix, tool patterns) and what each typically means.
Points readers toward interpreting their own data without
overpromising automatic diagnosis.
- Add Pi entry to PROVIDER_COLORS (pink) and PROVIDER_DISPLAY_NAMES.
- Update README with Pi in supported providers, requirements,
command examples, and the data-format section.
dispatch_agent is Pi's delegation tool. Mapping it to Agent makes
the classifier route those turns into the Delegation category
instead of Conversation, matching the existing semantics for the
task tool.
- Move modelDisplayEntries sort to module scope so it runs once
instead of on every modelDisplayName call.
- Add bash-utils tests for the env var prefix and true/false
skip behaviors that came in with the Pi commit.
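The hoisted sort can be sketched as follows; the display table contents are illustrative (model IDs borrowed from elsewhere in this log), and only `modelDisplayEntries` and `modelDisplayName` are names from the commit.

```typescript
// Sketch: the display-name table is sorted once at module load, longest
// key first so more specific prefixes ("gpt-5.4") match before shorter
// ones ("gpt-5"), instead of re-sorting inside every lookup.
const MODEL_DISPLAY: Record<string, string> = {
  "gpt-5": "GPT-5",
  "gpt-5.4": "GPT-5.4", // longer key must be tried first
};

// Sorted once at module scope, not on every modelDisplayName call.
const modelDisplayEntries = Object.entries(MODEL_DISPLAY)
  .sort(([a], [b]) => b.length - a.length);

function modelDisplayName(id: string): string {
  for (const [prefix, display] of modelDisplayEntries) {
    if (id.startsWith(prefix)) return display;
  }
  return id; // unknown models fall through unchanged
}
```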
Verified Pi token semantics against real session data:
input + cacheRead + cacheWrite + output equals totalTokens
exactly, confirming Anthropic-style accounting (cached tokens
disjoint from input). No double-counting.
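The identity verified above can be expressed directly; field names here are assumptions mirroring the commit's wording.

```typescript
// Anthropic-style accounting: cached tokens are disjoint from input, so
// the four buckets sum exactly to totalTokens with no double-counting.
interface TokenUsage {
  input: number;
  cacheRead: number;
  cacheWrite: number;
  output: number;
  totalTokens: number;
}

function tokensConsistent(u: TokenUsage): boolean {
  return u.input + u.cacheRead + u.cacheWrite + u.output === u.totalTokens;
}
```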
Maps Pi's lowercase tool names (bash, read, edit, write...) to
the capitalized form used by every other provider, so the
dashboard shows consistent tool names across providers and the
activity classifier works without extra lowercase entries.
Reverts the lowercase additions to classifier.ts since the
provider now normalizes tool names at the source.
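A minimal sketch of that normalization, with an illustrative mapping table (only the tool names quoted in this log are shown):

```typescript
// Sketch: map Pi's lowercase tool names to the capitalized form every
// other provider uses, at the provider boundary, so the dashboard and
// classifier never see the lowercase variants.
const PI_TOOL_NAMES: Record<string, string> = {
  bash: "Bash",
  read: "Read",
  edit: "Edit",
  write: "Write",
  dispatch_agent: "Agent", // Pi's delegation tool -> Delegation category
};

function normalizeToolName(name: string): string {
  return PI_TOOL_NAMES[name] ?? name;
}
```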
- Adds support for Pi (pi.ai) as a new session provider.
- Pi sessions are stored as JSONL files under `~/.pi/agent/sessions/<project-dir>/` and use OpenAI-compatible model IDs (gpt-5, gpt-5.4, gpt-4o, etc.).
- `src/providers/pi.ts` (new): Pi provider - discovers JSONL session files, parses assistant turns, extracts token counts, tool calls, and bash commands, deduplicates via response ID with line-index fallback
- `src/providers/types.ts`: added bashCommands field to `ParsedProviderCall` so all providers carry extracted bash command lists
- `src/providers/index.ts`: registered Pi as a core provider alongside Claude and Codex
- `src/providers/codex.ts`, `cursor.ts`: added `bashCommands: []` to satisfy the new required field on `ParsedProviderCall`
- `src/parser.ts`: fixed bug where `providerCallToTurn` always emitted an empty bashCommands array instead of passing through the parsed commands
- `src/classifier.ts`: added lowercase tool name variants (bash, edit, read, write) to match Pi's tool naming convention in JSONL output
- `src/bash-utils.ts`: exclude `true`, `false`, and shell variable assignments from extracted commands; scan past leading `NAME=val` tokens so `FOO=bar ls` correctly records `ls` rather than being dropped
- `package.json`: added pi to keywords
- `tests/providers/pi.test.ts` (new): 16 unit tests covering session discovery, multi-turn parsing, tool/bash extraction, deduplication, zero-token filtering, and display name mapping
- `tests/provider-registry.test.ts`: updated core provider list to include pi
- [X] Unit tests pass (`npx vitest run`, 56 tests across 6 files).
- [X] Manually verified via `npx tsx src/cli.ts report`, showing Pi sessions alongside Claude and Codex in the dashboard.
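The bash-utils change above can be sketched like this; the helper name and exact tokenization are assumptions, but the behavior (skip `true`/`false` and bare assignments, scan past leading `NAME=val` tokens) follows the commit.

```typescript
// Sketch: extract the command word from a shell line, skipping leading
// environment-variable assignments and shell no-ops.
const SKIP = new Set(["true", "false"]);
const ASSIGNMENT = /^[A-Za-z_][A-Za-z0-9_]*=/;

function extractCommand(line: string): string | null {
  const tokens = line.trim().split(/\s+/);
  let i = 0;
  while (i < tokens.length && ASSIGNMENT.test(tokens[i])) i++; // past FOO=bar
  if (i === tokens.length) return null; // pure assignment, no command
  if (SKIP.has(tokens[i])) return null; // shell no-ops
  return tokens[i];
}
```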
Adds a cache hit percentage column to the By Model panel and
fixes the cache hit math in the Overview panel. Both were
treating cache_creation tokens as cache hits, when they are
actually cache misses being written to cache for future use.
The corrected formula counts both fresh input and cache writes
in the denominator, so a heavy Claude Code session now shows
~97% cache hit instead of the misleading 100%.
Also extracts column widths into named constants per CLAUDE.md
no-magic-numbers rule, and tightens the model name column to a
fixed 14 chars so columns hug the name like the Activity panel.
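The corrected formula reduces to a one-liner; parameter names are mine, but the math is exactly what the commit describes: cache writes are misses being stored, so they belong in the denominator, not the numerator.

```typescript
// Cache hit rate = reads / (reads + writes + fresh input).
// cache_creation tokens count as misses, never as hits.
function cacheHitRate(cacheRead: number, cacheWrite: number, input: number): number {
  const denom = cacheRead + cacheWrite + input;
  return denom === 0 ? 0 : cacheRead / denom;
}
```

With the old formula a session that wrote to cache on every turn could show 100%; counting writes as misses yields the ~97% figure mentioned above.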
The pricing fetcher was filtering out every model name containing
'/' or '.', which discarded provider-prefixed entries like
'anthropic/claude-opus-4-6' and version-suffixed ones like
'gemini-2.0-flash-001'. That dropped 93% of LiteLLM's catalog
(2,467 of 2,659 models), leaving most non-Anthropic models
priced at $0.
Now we load every entry and additionally index each one under its
stripped name so 'claude-opus-4-6' resolves whether or not the
caller includes the provider prefix.
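The dual indexing can be sketched as below; the function name and catalog shape are assumptions (LiteLLM entries carry full pricing objects, simplified here to a single number).

```typescript
// Sketch: index every catalog entry under its full key, and additionally
// under the key with any provider prefix stripped, so lookups resolve
// whether or not the caller includes the prefix.
function buildPriceIndex(catalog: Record<string, number>): Map<string, number> {
  const index = new Map<string, number>();
  for (const [key, price] of Object.entries(catalog)) {
    index.set(key, price); // full key always wins
    const slash = key.lastIndexOf("/");
    const stripped = slash >= 0 ? key.slice(slash + 1) : key;
    if (!index.has(stripped)) index.set(stripped, price);
  }
  return index;
}
```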
Add a '30 Days' section to the menubar format output, positioned between
'7 Days' and 'Month' to match the tab order in the interactive report.
The rolling 30-day date range logic already existed (used by report and
export commands) - this wires it into the status menubar renderer.
Both 30 Days (rolling window) and Month (calendar month) are shown,
giving useful context early in the month when the calendar month total
is nearly empty.
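The two range starts can be sketched like this (helper names are mine; the commit only says the rolling logic already existed):

```typescript
// Sketch: '30 Days' is a rolling window ending today; 'Month' starts at
// the 1st of the current calendar month. Early in a month they diverge.
function rollingWindowStart(now: Date, days: number): Date {
  const start = new Date(now);
  start.setDate(start.getDate() - (days - 1)); // inclusive of today
  return start;
}

function calendarMonthStart(now: Date): Date {
  return new Date(now.getFullYear(), now.getMonth(), 1);
}
```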
Codex Desktop on Windows uses "Codex Desktop" as the originator
string instead of "codex_cli" or "codex_vscode". The startsWith
check was case-sensitive, rejecting these sessions silently.
Fixes #1 (comment by @JiglioNero).
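The fix amounts to normalizing case before the prefix check; the function name here is illustrative.

```typescript
// Case-insensitive originator check: accepts "codex_cli", "codex_vscode",
// and Windows' "Codex Desktop", which the old case-sensitive startsWith
// silently rejected.
function isCodexOriginator(originator: string): boolean {
  return originator.toLowerCase().startsWith("codex");
}
```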
Reads session data from OpenCode's SQLite databases at
~/.local/share/opencode/. Reuses the existing better-sqlite3
adapter (same as Cursor), lazy-loaded so users without OpenCode
see no difference. Adds bashCommands to the provider interface
so shell command breakdowns work across all providers.
31 tests, schema validation, diagnostic stderr on failures.
Also fixes a pre-existing tsc error in currency.ts.
Major release adding Cursor as the third supported provider.
Lazy-loaded SQLite, file-based result cache, provider-specific
dashboard layouts, debounced period switching, broader classifier.
- Cursor listed as supported provider with data location and caveats
- Auto mode pricing estimation explained
- Languages panel and cache behavior documented
- Project structure updated with new files
- Repo description and topics updated on GitHub
120 days was for testing. Max dashboard period is 30 days, so
Cursor SQL lookback is now 35 days (30 + margin for This Month).
Smaller scan window = faster cold starts, smaller cache file.
First run parses the 21GB DB (slow, ~40-80s). Writes parsed
results to ~/.cache/codeburn/cursor-results.json. Subsequent
runs check DB mtime+size -- if unchanged, load from cache
(instant). Cache auto-invalidates when Cursor modifies the DB.
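The validity check can be sketched as a pure comparison; the metadata shape is an assumption, and in practice the current values would come from `fs.statSync` on the DB file.

```typescript
// Sketch: the cached results record the DB's mtime and size at parse
// time. If either differs on the next run, the cache is stale and the
// DB is re-parsed; otherwise the cached results load instantly.
interface CacheMeta {
  dbMtimeMs: number;
  dbSizeBytes: number;
}

function cacheIsFresh(cached: CacheMeta, current: CacheMeta): boolean {
  return cached.dbMtimeMs === current.dbMtimeMs
      && cached.dbSizeBytes === current.dbSizeBytes;
}
```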
- Fetch actual user messages (type 1 bubbles) for keyword classification
instead of using assistant response text
- Synthetic Edit tool when codeBlocks present so classifier detects coding
- Languages use lang: prefix to separate from tool classification
- Dashboard filters lang: entries for Languages panel, hides from Core Tools
- Extract user text from bubbles for activity classifier
- Extract codeBlocks languageId for programming language breakdown
- Show Languages panel instead of Core Tools/Shell/MCP for Cursor
- Adaptive dashboard layout based on active provider
- 120-day daily activity range for longer periods
The incremental cache saved a timestamp watermark but not the parsed
data, so subsequent runs found no new entries and returned empty.
Removed the cache layer entirely -- the 120-day SQL time filter
already limits the scan sufficiently.