
Eval Observability — Global Fix Map

🏥 Quick Return to Emergency Room

You are at a specialist desk. Think of this page as a sub-room: for full triage, the doctors on duty, and prescriptions, return to the Emergency Room lobby.

This folder provides guardrails for evaluation and observability in RAG and agent pipelines.
It shows how to catch silent drift, regressions, and unstable metrics before they break your system.


What this folder is

  • A starter kit to make evals predictable and repeatable.
  • Guardrails for metrics, variance, and drift detection.
  • Copy-paste probes and configs you can add to your pipeline.
  • Acceptance targets you can actually measure and enforce.

When to use

  • Metrics look unstable between runs.
  • Coverage seems high but answers still drift.
  • ΔS changes across paraphrases or seeds.
  • λ flips divergent after harmless edits.
  • Benchmarks regress without any code change.
  • Long-run evals show a slow decline.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to target section
  • λ remains convergent across 3 paraphrases and 2 seeds
  • Variance ratio ≤ 0.15 across seeds
  • No downward drift beyond 3 eval windows
  • E_resonance stays flat on long evals
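The targets above can be checked mechanically at the end of each eval run. Below is a minimal sketch, assuming your harness already produces a ΔS score, a coverage fraction, a λ state per paraphrase/seed run, and one headline score per seed; the function name `check_targets` and its signature are hypothetical, not part of WFGY itself.

```python
from statistics import mean, pstdev

def check_targets(delta_s, coverage, lambda_states, seed_scores):
    """Check the acceptance targets listed above.

    delta_s: ΔS(question, retrieved), lower is better
    coverage: fraction of the target section covered
    lambda_states: λ state for each paraphrase/seed run, e.g. ["convergent", ...]
    seed_scores: headline metric per seed, used for the variance ratio
    Returns (passed, reasons) where reasons lists every violated target.
    """
    reasons = []
    if delta_s > 0.45:
        reasons.append(f"deltaS {delta_s:.2f} > 0.45")
    if coverage < 0.70:
        reasons.append(f"coverage {coverage:.2f} < 0.70")
    if any(state != "convergent" for state in lambda_states):
        reasons.append("lambda flipped divergent")
    # Variance ratio as coefficient of variation across seeds (one
    # plausible reading of "variance ratio"; adjust to your definition).
    ratio = pstdev(seed_scores) / mean(seed_scores)
    if ratio > 0.15:
        reasons.append(f"variance ratio {ratio:.2f} > 0.15")
    return (not reasons, reasons)
```

Run this once per eval window and fail the run whenever `reasons` is non-empty, so a drifting metric surfaces immediately instead of three windows later.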

Quick routes — open these first

| Symptom | Open this page |
| --- | --- |
| Benchmarks regress with no code change | regression_gate.md |
| Metrics fluctuate or alerts are missing | alerting_and_probes.md |
| Coverage looks high but is not real | coverage_tracking.md |
| ΔS thresholds unclear | deltaS_thresholds.md |
| λ flips or diverges | lambda_observe.md |
| Variance high between seeds | variance_and_drift.md |
| Need a full setup | eval_playbook.md |
| Logging and monitoring integration | metrics_and_logging.md |

Copy-paste eval contract

eval_contract:
  seeds: 3
  paraphrases: 3
  targets:
    deltaS: <=0.45
    coverage: >=0.70
    lambda: convergent
    variance: <=0.15
    drift: <=0.02
alerts:
  - deltaS >=0.60
  - lambda divergent
  - drift slope >0.02
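The alert rules in the contract above can be evaluated directly from your run logs. Here is a minimal sketch, assuming `window_scores` is the headline metric per eval window (higher is better) and reading "drift slope > 0.02" as a per-window decline steeper than 0.02; the function names are hypothetical.

```python
def drift_slope(window_scores):
    """Least-squares slope of the headline metric across eval windows."""
    n = len(window_scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(window_scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, window_scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def fire_alerts(delta_s, lambda_state, window_scores):
    """Mirror the alert rules from the eval contract above."""
    alerts = []
    if delta_s >= 0.60:
        alerts.append("deltaS >= 0.60")
    if lambda_state == "divergent":
        alerts.append("lambda divergent")
    # A negative slope means the metric is declining; alert when the
    # decline per window exceeds the contract's 0.02 tolerance.
    if -drift_slope(window_scores) > 0.02:
        alerts.append("drift slope > 0.02")
    return alerts
```

Wiring `fire_alerts` into the same job that writes your metrics log keeps the contract and the alerting from silently disagreeing.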

FAQ

Q: What if my metrics vary a lot each run? A: Check variance_and_drift.md. Add more seeds and enforce variance ≤0.15.

Q: My eval passes locally but fails in CI — why? A: See metrics_and_logging.md. Local runs often miss logging detail. CI must enforce the same eval contract.

Q: What if coverage is high but the answer is still wrong? A: Open coverage_tracking.md. You might be measuring snippet recall, not semantic coverage. Switch to ΔS-based coverage.

Q: ΔS is always drifting, even on simple questions. A: Look at deltaS_thresholds.md. Adjust thresholds and clamp variance with λ probes.

Q: How do I stop regressions before release? A: Use regression_gate.md. It defines pass/fail rules so bad models never ship.
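A regression gate of the kind regression_gate.md describes can be as simple as comparing a candidate run against a frozen baseline. A hedged sketch, assuming metrics are stored as name-to-score dicts with higher being better (invert lower-is-better metrics such as ΔS before gating); `regression_gate` and `max_drop` are illustrative names, not the page's actual API.

```python
def regression_gate(baseline, candidate, max_drop=0.02):
    """Fail the release if any metric drops more than max_drop vs. baseline.

    baseline/candidate: dicts of metric name -> score (higher is better).
    Returns (passed, failures) where failures maps each regressed metric
    to its (baseline, candidate) pair.
    """
    failures = {
        name: (baseline[name], candidate.get(name, 0.0))
        for name in baseline
        if baseline[name] - candidate.get(name, 0.0) > max_drop
    }
    return (not failures, failures)
```

Run this in CI against a pinned baseline file so a regressed model fails the pipeline instead of shipping.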


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
| --- | --- | --- |
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Module | Description | Link |
| --- | --- | --- |
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.