
Eval Observability — Variance and Drift

🧭 Quick Return to Map

You are in a sub-page of Eval_Observability.
To reorient, go back to the Eval_Observability index.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Variance and drift checks detect when evaluation scores are unstable across runs or when semantic meaning slowly shifts without clear boundary failures.
These probes prevent "false confidence" in benchmarks by catching hidden instability.


Why variance and drift matter

  • Variance: Scores fluctuate heavily depending on seed, paraphrase, or retriever order. Averages hide the volatility.
  • Drift: Performance declines slowly across sessions, data refreshes, or version bumps. Looks fine short-term but collapses long-term.
  • Silent regressions: Systems pass local tests but fail in production due to unmonitored entropy rise.

Acceptance targets

  • Variance (σ/μ) ≤ 0.15 across 3 seeds and 3 paraphrases.
  • Drift slope: Δscore per batch ≤ 0.02 absolute over 5+ eval windows.
  • No monotonic downward slope longer than 3 consecutive windows.
  • Drift alerts fire if average ΔS increases by ≥ 0.10 compared to gold anchors.
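
These targets can be enforced as a single boolean gate before a benchmark result is trusted. Below is a minimal sketch in plain Python, assuming the four statistics are computed elsewhere; every name here is illustrative, not part of WFGY:

VARIANCE_MAX = 0.15     # sigma/mu across 3 seeds and 3 paraphrases
DRIFT_SLOPE_MAX = 0.02  # absolute score change per eval window
DOWN_RUN_MAX = 3        # longest tolerated run of falling windows
DELTA_S_ALERT = 0.10    # average ΔS rise vs gold anchors that fires an alert

def passes_gates(variance_ratio, drift_slope, down_run, delta_s_shift):
    # True only when every acceptance target above holds.
    return (variance_ratio <= VARIANCE_MAX
            and abs(drift_slope) <= DRIFT_SLOPE_MAX
            and down_run <= DOWN_RUN_MAX
            and delta_s_shift < DELTA_S_ALERT)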

Detection workflow

  1. Collect runs across seeds

    • At least 3 seeds, 3 paraphrases.
    • Log ΔS, λ, coverage, citations.
  2. Compute variance

    • Calculate σ/μ for each metric (a computation sketch follows this list).
    • High variance = unstable eval → rerun with schema locks.
  3. Track drift over time

    • Compare eval batch N vs N-1.
    • Plot moving average.
    • Alert if slope exceeds tolerance.
  4. Root-cause analysis

    • If variance high → check retriever metrics, random seeding, rerankers.
    • If drift detected → audit embeddings, re-chunk, verify data refresh.
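
Steps 2 and 3 reduce to three small computations. A sketch using only the Python standard library, under two assumptions about input shape: scores holds one metric across seed and paraphrase runs, and window_means holds the per-batch mean over 5 or more eval windows:

import statistics

def variance_ratio(scores):
    # Step 2: sigma/mu over seed and paraphrase runs of one metric.
    mu = statistics.mean(scores)
    return statistics.pstdev(scores) / mu if mu else float("inf")

def drift_slope(window_means):
    # Step 3: least-squares slope of mean score against window index.
    n = len(window_means)
    x_bar = (n - 1) / 2
    y_bar = statistics.mean(window_means)
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(window_means))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den if den else 0.0

def longest_downward_run(window_means):
    # Guard for the monotonic-decline target: longest streak of
    # strictly falling consecutive windows.
    run = worst = 0
    for prev, cur in zip(window_means, window_means[1:]):
        run = run + 1 if cur < prev else 0
        worst = max(worst, run)
    return worst

These three values, plus the ΔS shift against gold anchors, feed directly into the passes_gates check above. A failing variance ratio points at the retriever and seeding checks in step 4; a failing slope points at the embedding and chunking audit.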

Common pitfalls

  • Single-run evals: Hide high variance. Always run multi-seed.
  • Averages without spread: The mean looks fine while the spread reveals collapse.
  • Ignoring slow drift: Short tests look fine, but 12 weeks later accuracy has collapsed.
  • Cross-store drift: One vector DB stays stable while another drifts. Track both.

Example reporting schema

{
  "metric": "ΔS",
  "seed_runs": [0.38, 0.42, 0.44],
  "variance_ratio": 0.14,
  "drift_slope": 0.03,
  "alert": true
}
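
One way to assemble such a record, reusing the variance_ratio and drift_slope helpers from the detection workflow sketch (the field names follow the schema above; the function itself is an illustrative assumption, not a WFGY API):

import json

def report(metric, seed_runs, window_means):
    # Build one reporting record in the schema above.
    vr = variance_ratio(seed_runs)       # helper from the workflow sketch
    slope = drift_slope(window_means)    # helper from the workflow sketch
    return json.dumps({
        "metric": metric,
        "seed_runs": seed_runs,
        "variance_ratio": round(vr, 2),
        "drift_slope": round(slope, 2),
        "alert": vr > 0.15 or abs(slope) > 0.02,
    }, ensure_ascii=False)

For example, report("ΔS", [0.38, 0.42, 0.44], [0.40, 0.43, 0.46, 0.49, 0.52]) sets alert to true because the fitted slope, 0.03, exceeds the 0.02 tolerance.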

Fix modules to open


🔗 Quick-Start Downloads (60 sec)

Tool | Link | 3-Step Setup
WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>”
TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly

Explore More

Layer | Page | What it's for
Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof
Engine | WFGY 1.0 | Original PDF-based tension engine
Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents
Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set
Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map
Map | Problem Map 2.0 | RAG-focused recovery pipeline
Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer
Map | Semantic Clinic | Symptom to family to exact fix
Map | Grandmas Clinic | Plain-language stories mapped to Problem Map 1.0
Onboarding | Starter Village | Guided tour for newcomers
App | TXT OS | TXT semantic OS, fast boot
App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS
App | Blur Blur Blur | Text-to-image with semantic control
App | Blow Blow Blow | Reasoning game engine and memory demo

If this repository helped, starring it improves discovery so more builders can find the docs and tools.