
Eval Observability — Coverage Tracking

🧭 Quick Return to Map

You are in a sub-page of Eval_Observability.
To reorient, go back one level to the Eval_Observability page.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Evaluation disclaimer (coverage tracking)
Coverage numbers here measure how much of a designed space you have touched under the chosen tests.
High coverage does not guarantee the absence of bugs, or of failures outside those tests.


A focused module to monitor retrieval coverage during eval and live runs.
Coverage answers the key question: “Did we retrieve enough of the right section to support the answer?”


Why coverage tracking matters

  • False negatives: The right fact exists, but snippets cover too little of the section.
  • Over-fragmentation: Aggressively chunked documents can drop coverage below 0.50 even when the retrieved snippets are correct.
  • Hallucinations: When coverage is low, LLMs often fill gaps with fabrications.
  • Eval blind spots: Benchmarks without coverage probes miss systematic recall failures.

Core definition

Coverage is defined as:

coverage = retrieved_tokens_in_target_section / total_tokens_in_target_section
  • Target section = gold label or expected answer span.
  • Threshold = minimum 0.70 in most RAG tasks.
  • Tolerance = allow 5–10% of batch queries below the threshold before raising an alert.
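
As a worked example with hypothetical numbers: if the gold section spans 200 tokens and the retrieved snippets include 150 of them, coverage = 150 / 200 = 0.75, which clears the 0.70 threshold.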

Probe design

  1. Annotate gold sets: for each eval question, mark the expected source section IDs and token spans.

  2. Measure per-query coverage: count how many tokens from the expected span were retrieved, normalized by the total tokens in the span.

  3. Batch aggregation: track the percentage of queries below threshold and report average coverage ± variance (see the batch sketch after the probe code below).

  4. Drift detection: compare against a historical baseline (previous model or retriever version). If the drop exceeds 0.05, escalate to the retriever/infrastructure team.


Alert thresholds

| Metric | Warning | Critical |
|--------|---------|----------|
| Per-query coverage | <0.70 | <0.60 |
| Batch pass rate | <0.90 | <0.80 |
| Drift vs baseline | drop >0.05 | drop >0.10 |
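
One way to encode these thresholds is as plain configuration. A minimal sketch follows; COVERAGE_THRESHOLDS and severity are illustrative names, not part of any WFGY API:

COVERAGE_THRESHOLDS = {
    "per_query_coverage": {"warning": 0.70, "critical": 0.60},  # lower is worse
    "batch_pass_rate": {"warning": 0.90, "critical": 0.80},     # lower is worse
    "drift_vs_baseline": {"warning": 0.05, "critical": 0.10},   # larger drop is worse
}

def severity(metric, value):
    # Map an observed value to "ok" / "warning" / "critical" per the table above.
    t = COVERAGE_THRESHOLDS[metric]
    if metric == "drift_vs_baseline":
        # Drift is reported as a drop, so larger values are worse.
        return "critical" if value > t["critical"] else "warning" if value > t["warning"] else "ok"
    return "critical" if value < t["critical"] else "warning" if value < t["warning"] else "ok"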

Example probe code (pseudo)

def track_coverage(retrieved, target_span):
    # Count gold-span tokens that appear in the retrieved snippets,
    # normalized by span length (the coverage definition above).
    # Set overlap counts each distinct token once; position-aware
    # matching is stricter but follows the same formula.
    overlap = len(set(retrieved) & set(target_span))
    return overlap / len(target_span)

alerts = []
for q in eval_batch:
    cov = track_coverage(q.retrieved_tokens, q.gold_span)
    if cov < 0.70:
        alerts.append({"qid": q.id, "coverage": cov})
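
Extending the per-query probe to probe-design steps 3 and 4, here is a minimal sketch of batch aggregation and drift detection, assuming a baseline_avg saved from a previous retriever version (a hypothetical input, not a WFGY artifact):

import statistics

def batch_report(coverages, threshold=0.70, baseline_avg=None, drift_limit=0.05):
    # Step 3: share of queries clearing the per-query threshold,
    # plus average coverage and variance for the batch report.
    pass_rate = sum(c >= threshold for c in coverages) / len(coverages)
    avg = statistics.mean(coverages)
    var = statistics.variance(coverages) if len(coverages) > 1 else 0.0
    report = {"avg_coverage": avg, "variance": var, "pass_rate": pass_rate}
    # Step 4: drift against the historical baseline; a drop beyond
    # drift_limit is flagged for the retriever/infrastructure team.
    if baseline_avg is not None and baseline_avg - avg > drift_limit:
        report["escalate"] = True
    return report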

Common pitfalls

  • Ignoring multi-section answers → coverage must aggregate across all required sections (see the sketch below).
  • Only measuring the top-1 snippet → measure across the full top-k, otherwise coverage is systematically underestimated.
  • Static thresholds → thresholds should adapt to document size and retrieval depth.
  • No historical baseline → without drift tracking, regressions pass unnoticed.
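
For the multi-section pitfall, a minimal sketch, assuming gold_spans maps each required section ID to its token list (hypothetical names for illustration):

def multi_section_coverage(retrieved_tokens, gold_spans):
    # gold_spans: {section_id: [tokens]} for every required section.
    # Length-weighted aggregate, so a long uncovered section cannot
    # hide behind a short fully covered one.
    retrieved = set(retrieved_tokens)
    total = sum(len(span) for span in gold_spans.values())
    covered = sum(len(retrieved & set(span)) for span in gold_spans.values())
    return covered / total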

Reporting dashboards

  • Histograms of per-query coverage distribution.
  • Trend lines for batch averages across eval sets.
  • Drift deltas vs baseline runs.
  • Heatmaps showing coverage by document or domain.
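
As one illustrative starting point for the histogram view, a minimal matplotlib sketch (the function name is hypothetical; the dashed line matches the 0.70 warning level above):

import matplotlib.pyplot as plt

def plot_coverage_histogram(coverages, threshold=0.70):
    # Per-query coverage distribution with the warning threshold marked.
    plt.hist(coverages, bins=20, range=(0.0, 1.0))
    plt.axvline(threshold, linestyle="--", label=f"warning < {threshold}")
    plt.xlabel("per-query coverage")
    plt.ylabel("number of queries")
    plt.legend()
    plt.show()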

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.