
Eval Drift — Guardrails and Fix Pattern

Evaluation metrics (precision, recall, ΔS, coverage) swing unpredictably across runs even though the data and index appear unchanged.
This signals eval drift: the evaluation harness itself is not structurally stable.


Open these first


Core acceptance

  • ΔS(question, retrieved) ≤ 0.45 across 3 paraphrases
  • Coverage ≥ 0.70 per target section
  • λ convergent on 2 seeds, stable across runs
  • Variance of metrics ≤ 0.05 across replays
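
A minimal sketch of turning these targets into one pass/fail gate, assuming you have already collected per-paraphrase ΔS values, per-section coverage, λ states from two seeds, and replayed precision/recall. Every name below is illustrative, the convergent λ state is assumed to be "→", and stddev is used as the spread measure (matching the probe further down):

from statistics import pstdev

def acceptance_gate(ds_paraphrases, coverages, lambda_by_seed, replay_metrics):
    # ds_paraphrases: ΔS(question, retrieved) for 3 paraphrases of one question
    # coverages: coverage per target section
    # lambda_by_seed: λ state per seed, e.g. {1: "→", 2: "→"}
    # replay_metrics: {"precision": [...], "recall": [...]} across replays
    checks = {
        "ΔS ≤ 0.45 on every paraphrase": all(d <= 0.45 for d in ds_paraphrases),
        "coverage ≥ 0.70 per section": all(c >= 0.70 for c in coverages),
        "λ convergent on both seeds": all(s == "→" for s in lambda_by_seed.values()),
        "metric spread ≤ 0.05 across replays": all(pstdev(v) <= 0.05 for v in replay_metrics.values()),
    }
    return all(checks.values()), checks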

Typical symptoms → exact fix

| Symptom | Likely cause | Open this |
|---|---|---|
| Precision/recall varies ±0.20 each run | eval harness non-deterministic | Eval Precision/Recall |
| Identical queries give different metrics | bootstrap not fenced | Bootstrap Ordering |
| Eval metrics collapse on fresh deploy | index not fully warmed | Predeploy Collapse |
| Coverage < 0.50 despite gold answers | embedding or chunk drift | Embedding ≠ Semantic, Chunking Checklist |

Fix in 60 seconds

  1. Lock seeds
    Fix random seeds at the retrieval, reranker, and eval harness layers (see the sketch after this list).

  2. Fence bootstrap
    Require VECTOR_READY==true and index hash match before eval begins.

  3. Replay 3 paraphrases
    Eval the same question with 3 paraphrases. Require ΔS variance < 0.05.

  4. Cross-seed check
    Run two seeds. λ must remain convergent across both.

  5. Regression gate
    Ship only if coverage ≥ 0.70 and precision/recall stable within 0.05.
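
A minimal sketch of steps 1 and 2, assuming the vector store exposes a readiness flag and an index hash through environment variables. VECTOR_READY follows the flag name used above; INDEX_HASH and the seed plumbing are illustrative:

import os
import random

import numpy as np

def fence_and_lock(expected_index_hash, seed=42):
    # Step 2: refuse to start eval until the store is warmed and the index
    # matches the one the gold set was built against.
    assert os.environ.get("VECTOR_READY") == "true", "vector store not ready"
    assert os.environ.get("INDEX_HASH") == expected_index_hash, "index hash mismatch"

    # Step 1: lock every layer that introduces randomness. Pass the same seed
    # to the retriever, reranker, and eval harness if they accept one.
    random.seed(seed)
    np.random.seed(seed)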


Copy-paste eval harness snippet

def eval_guardrails(question, retrieved, gold):
    # deltaS, coverage, and lambda_state are assumed to be provided by your
    # harness: semantic distance, gold coverage, and the λ_observe state.
    ds_qr = deltaS(question, retrieved)
    ds_rg = deltaS(retrieved, gold)
    cov = coverage(retrieved, gold)
    lam = lambda_state(retrieved)

    # Acceptance gates from the targets above.
    assert ds_qr <= 0.45, "ΔS drift detected"
    assert cov >= 0.70, "Coverage too low"
    assert lam in {"→", "←", "<>"}, "λ divergent"

    return {
        "ΔS_qr": ds_qr,
        "ΔS_rg": ds_rg,
        "coverage": cov,
        "λ": lam,
    }
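
A sketch of steps 3 and 4 wired through this harness, assuming a retrieve(question, seed=...) call and a gold reference from your own pipeline. The paraphrases are illustrative and the convergent λ state is assumed to be "→":

from statistics import pstdev

paraphrases = [
    "How do I rotate an API key?",
    "What is the procedure for regenerating an API key?",
    "My API key leaked, how do I replace it?",
]

# Step 3: replay 3 paraphrases and require ΔS variance < 0.05.
runs = [eval_guardrails(q, retrieve(q, seed=1), gold) for q in paraphrases]
assert pstdev([r["ΔS_qr"] for r in runs]) < 0.05, "ΔS unstable across paraphrases"

# Step 4: cross-seed check, λ must stay convergent on both seeds.
for seed in (1, 2):
    out = eval_guardrails(paraphrases[0], retrieve(paraphrases[0], seed=seed), gold)
    assert out["λ"] == "→", f"λ not convergent on seed {seed}"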

Diagnostic probes

  • Re-run variance test: run the eval 5 times and log precision/recall. Stddev > 0.05 → unstable harness (see the sketch below).
  • Anchor comparison: compare ΔS to gold anchor vs decoy. If both similar, re-embed.
  • Deploy warm-up: log VECTOR_READY and index hash before eval.
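
A minimal sketch of the re-run variance test, assuming a run_eval() callable that executes one full eval pass and returns precision and recall (the function name and return shape are illustrative):

from statistics import pstdev

def rerun_variance_probe(run_eval, n=5):
    # Execute the full eval n times and measure the spread of each metric.
    runs = [run_eval() for _ in range(n)]
    spread = {
        "precision": pstdev([r["precision"] for r in runs]),
        "recall": pstdev([r["recall"] for r in runs]),
    }
    # Any spread above 0.05 marks the harness as unstable.
    unstable = {k: v for k, v in spread.items() if v > 0.05}
    return spread, unstable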
