
Eval: Cost Reporting and Efficiency

🧭 Quick Return to Map

You are in a sub-page of Eval.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Evaluation disclaimer (cost reporting)
Any cost and efficiency numbers on this page come from specific runs with specific models and hardware.
They are for comparison inside that context only and are not economic guarantees or universal prices.


This page defines how to measure and report cost per correct answer in retrieval-augmented and reasoning pipelines. Latency and accuracy alone are insufficient. Without cost analysis, systems regress into wasteful configurations.

Open these first


Acceptance targets

  • Cost per correct answer ≤ 1.3× baseline
  • Cost stability variance ≤ 15% across 3 seeds and 3 paraphrases
  • Token efficiency ≥ 0.7 (fraction of tokens contributing to correct citation)
  • Budget alerting: auto-flag when projected monthly spend > 110% of budget cap
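
A minimal sketch of these gates, assuming the per-run aggregates (cost per correct, run-to-run variance, token efficiency, projected spend) are computed upstream; the field names, the daily-spend projection, and the helper itself are illustrative rather than a fixed schema.

```python
# Acceptance-gate check over a cost report (field names are assumptions, not a spec).
def passes_acceptance(report: dict, baseline_cost_per_correct: float,
                      monthly_budget_cap_usd: float) -> dict:
    """Return each gate separately so a failing target can be flagged on its own."""
    projected_monthly_spend = report["spend_usd_per_day"] * 30  # rough projection, assumed field
    return {
        "cost_per_correct": report["cost_per_correct"] <= 1.3 * baseline_cost_per_correct,
        "cost_stability": report["variance_across_runs"] <= 0.15,  # across 3 seeds x 3 paraphrases
        "token_efficiency": report["token_efficiency"] >= 0.7,     # tokens tied to correct citations
        "within_budget": projected_monthly_spend <= 1.10 * monthly_budget_cap_usd,
    }
```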

Reporting dimensions

Each evaluation run must record cost on three levels (a rollup sketch follows the list):

  1. Raw tokens

    • input, output, total per query
    • broken down by retrieval, rerank, reasoning
  2. Cost per unit

    • $/1k tokens per provider and model
    • normalized into usd_equiv
  3. Cost per correct

    • (total spend ÷ number of correct answers)
    • stratified by question bucket (short, medium, long)
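
A sketch of how the three levels roll up from per-stage counts; the price table, stage names, and helper below are assumptions for illustration, not this project's API.

```python
# Roll per-stage token counts up into the three reporting levels.
PRICE_PER_1K_TOKENS_USD = {("anthropic", "claude-3.7-sonnet"): 0.006}  # illustrative price table

def cost_report(stage_tokens: dict, provider: str, model: str, correct_answers: int) -> dict:
    # stage_tokens example: {"retrieval": {"in": 2000, "out": 0}, "rerank": {...}, "reasoning": {...}}
    total_in = sum(s["in"] for s in stage_tokens.values())
    total_out = sum(s["out"] for s in stage_tokens.values())
    total = total_in + total_out                                            # level 1: raw tokens
    spend_usd = total / 1000 * PRICE_PER_1K_TOKENS_USD[(provider, model)]   # level 2: cost per unit
    cost_per_correct = spend_usd / correct_answers if correct_answers else float("inf")  # level 3
    return {
        "tokens": {"in": total_in, "out": total_out, "total": total},
        "spend_usd": round(spend_usd, 4),
        "cost_per_correct": round(cost_per_correct, 5),
    }
```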

JSON schema

```json
{
  "suite": "v1_cost",
  "arm": "with_hybrid",
  "provider": "anthropic",
  "model": "claude-3.7-sonnet",
  "bucket": "long",
  "precision": 0.79,
  "recall": 0.68,
  "ΔS_avg": 0.41,
  "correct_answers": 40,
  "total_questions": 50,
  "tokens": { "in": 2850, "out": 920, "total": 3770 },
  "cost_per_1k_tokens_usd": 0.006,
  "spend_usd": 0.0226,
  "cost_per_correct": 0.00056,
  "variance_across_runs": 0.11,
  "notes": "within budget and stable"
}
```
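
The derived fields follow directly from the raw ones: spend_usd is tokens.total ÷ 1000 × cost_per_1k_tokens_usd (3770 ÷ 1000 × 0.006 ≈ 0.0226) and cost_per_correct is spend_usd ÷ correct_answers (0.0226 ÷ 40 ≈ 0.00056), so a reviewer can recompute both before trusting the notes field.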

Diagnostic questions

  • Are rerankers worth the extra spend? → check ΔS reduction vs token increase (see the sketch after this list).
  • Is hybrid retrieval doubling retrieval tokens with little gain?
  • Does the large model add accuracy, or is a small model + WFGY equal at lower cost?
  • Is citation length inflated (long snippets)? → enforce snippet contract.
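
For the first question, a hedged sketch assuming each arm's report carries ΔS_avg and spend_usd as in the record above; the helper name and the ratio it returns are illustrative.

```python
# Is the reranker worth the extra spend? ΔS reduction bought per extra dollar.
def rerank_value(base: dict, with_rerank: dict) -> float:
    """ΔS reduction per extra USD; with positive extra spend, values <= 0 mean rerank bought nothing."""
    ds_gain = base["ΔS_avg"] - with_rerank["ΔS_avg"]            # smaller ΔS is better
    extra_spend = with_rerank["spend_usd"] - base["spend_usd"]
    return ds_gain / extra_spend if extra_spend > 0 else float("inf")  # no extra cost to justify
```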

Escalation and fixes

  • High cost per correct → enable caching or move to a smaller model with the WFGY overlay.
  • Variance > 15% → clamp paraphrases and normalize prompt headers.
  • Budget overrun → auto-throttle evals and alert via alerting_and_probes.md.

Minimal run

  1. Select 20 mixed-length questions.
  2. Run baseline and candidate arms.
  3. Compute cost per correct.
  4. Ship only if candidate ≤ 1.3× baseline and stable across seeds (sketched below).
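
A minimal sketch of steps 3 and 4, assuming both arms are scored and each seed's report carries spend_usd and correct_answers as above; reading the 15% stability band as a coefficient of variation is an assumption, not a rule stated on this page.

```python
import statistics

# Ship gate for the minimal run: candidate cost per correct must stay within
# 1.3x of baseline and inside the 15% stability band across seeds.
def ship_decision(baseline_runs: list[dict], candidate_runs: list[dict]) -> bool:
    def cost_per_correct(runs):
        return sum(r["spend_usd"] for r in runs) / sum(r["correct_answers"] for r in runs)

    base = cost_per_correct(baseline_runs)
    cand = cost_per_correct(candidate_runs)
    per_seed = [r["spend_usd"] / r["correct_answers"] for r in candidate_runs]
    spread = statistics.pstdev(per_seed) / statistics.mean(per_seed)  # coefficient of variation
    return cand <= 1.3 * base and spread <= 0.15
```

Run it per bucket if short, medium, and long questions are budgeted separately.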

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” and the OS boots instantly |

Explore More

| Layer | Page | What it's for |
|-------|------|---------------|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.