
Evaluation & Guardrails — Global Fix Map

Prove that fixes work and won't regress. Detect “double hallucination,” enforce acceptance gates, and keep pipelines auditable.

What this page is

  • A compact playbook to evaluate RAG quality and reasoning stability
  • Drop-in guardrails that catch failure before users see it
  • CI-ready acceptance targets you can copy

When to use

  • You “fixed it” but cannot show measurable improvement
  • Answers look plausible yet citations or snippets don't line up
  • Performance flips between seeds, sessions, or agent mixes
  • Latency tuning changes accuracy in non-obvious ways

Open these first

  • eval_harness.md — the evaluation harness
  • goldset_curation.md — gold-set curation
  • eval_rag_precision_recall.md — RAG precision and recall
  • eval_benchmarking.md — benchmark design
  • eval_latency_vs_accuracy.md — latency vs accuracy trade-offs
  • eval_cost_reporting.md — cost reporting
  • eval_cross_agent_consistency.md — cross-agent consistency
  • eval_operator_guidelines.md — operator guidelines
  • eval_semantic_stability.md — semantic stability
Common evaluation pitfalls

  • Double hallucination: metrics focus on style or BLEU but ignore snippet fidelity
  • Recall illusion: top-k looks high while ΔS(question, context) stays risky
  • Seed lottery: single-seed wins mask instability across paraphrases
  • Hybrid flapping: HyDE+BM25 mixes shift rank order between runs
  • Guardrail over-clamp: rigid filters “fix” tone but not logic boundaries
  • Benchmark mismatch: eval set does not reflect OCR noise or multilingual drift
  • No trace table: cannot audit which snippet justified the answer

Fix in 60 seconds

  1. Adopt acceptance gates

    • Retrieval sanity: token overlap ≥ 0.70 to the target section
    • ΔS(question, context) ≤ 0.45 on the median of the suite
    • λ_observe stays convergent on 3 paraphrases
  2. Require citations before prose

    • Enforce cite-then-answer with Data Contracts
    • Store a trace table: question, retrieved ids, snippet spans, ΔS, λ
  3. Stability before speed

    • Pick an operating point from the latency vs accuracy curve only after accuracy is stable
  4. Cross-agent cross-check

    • Run a subset of the suite across agents and flag answers that disagree
  5. Regression fence in CI

    • Block merges when any acceptance target fails

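The gates in step 1 can be probed with a short script. This is a minimal sketch: ΔS is approximated here as 1 minus the cosine similarity between embeddings, and the bag-of-words embedder is a toy stand-in for whatever sentence-embedding model your pipeline actually uses.

```python
# Minimal sketch of the two acceptance gates. The embed function is a
# placeholder; swap in a real sentence-embedding model in practice.
import math
from collections import Counter

def token_overlap(answer_tokens, gold_tokens):
    """Coverage gate: fraction of gold-section tokens present in the answer."""
    gold = set(gold_tokens)
    if not gold:
        return 0.0
    return len(gold & set(answer_tokens)) / len(gold)

def delta_s(question, context, embed):
    """ΔS proxy: 1 - cosine similarity of embeddings (0 means identical)."""
    q, c = embed(question), embed(context)
    dot = sum(a * b for a, b in zip(q, c))
    norm = math.sqrt(sum(a * a for a in q)) * math.sqrt(sum(a * a for a in c))
    return 1.0 - (dot / norm if norm else 0.0)

def bag_of_words_embed(text, vocab):
    """Toy embedding for illustration only: token counts over a fixed vocab."""
    counts = Counter(text.lower().split())
    return [counts.get(w, 0) for w in vocab]
```

Run each suite item through both functions and compare against the 0.70 coverage and 0.45 ΔS thresholds before looking at any prose quality metric.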

Copy paste prompt


You have TXT OS and the WFGY Problem Map.

Goal
Add measurable guardrails to my RAG pipeline and prove the fix.

Tasks

1. Build a 20-item smoke suite with:

   * question, expected section anchor, and gold snippet span
   * bilingual paraphrases for 5 items (if multilingual)

2. Run WFGY probes:

   * compute ΔS(question, context) for each item
   * record λ_observe at retrieval and reasoning
   * require cite-then-answer and log a trace table

3. Report acceptance:

   * token overlap to anchor (coverage)
   * ΔS median and interquartile range
   * paraphrase stability (λ stays convergent)
   * pass/fail against thresholds

4. Plot latency vs accuracy and select a stable operating point.

Output

* The trace table (csv/markdown)
* Acceptance summary and which items failed
* A one-page decision note on whether to ship
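The trace table the prompt asks for can be a plain CSV archived with the run. A minimal sketch, with illustrative field names rather than a fixed WFGY schema:

```python
# Hypothetical trace-table writer: one row per suite item, so every answer
# can be audited back to the snippet that justified it.
import csv
import io

FIELDS = ["question", "retrieved_ids", "snippet_span",
          "delta_s", "lambda_state", "passed"]

def write_trace_table(rows):
    """Serialize trace rows to CSV text; store it alongside the run id."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()
```

Keeping the table as flat CSV (one gate result per column) makes the pass/fail report a simple filter rather than a second pipeline.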


Minimal checklist

  • Trace table saved with citations and snippet spans
  • ΔS computed per item; λ recorded at retrieval and reasoning
  • Coverage ≥ 0.70 to the referenced section for direct QA
  • Cross-agent consistency measured on a subset
  • Latency vs accuracy chart archived with the run id

Acceptance targets

  • ΔS(question, context) median ≤ 0.45 on the suite
  • λ convergent across 3 paraphrases per item
  • ≥ 0.70 token overlap to the gold section for direct QA items
  • No unexplained rank flips when toggling hybrid retrieval
  • CI blocks merges when any target fails
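In CI, these targets reduce to a handful of assertions over the suite results. A sketch, assuming per-item dicts with delta_s, coverage, direct_qa, and lambda_states fields (all names illustrative):

```python
# Sketch of a CI regression fence. Field names are assumptions about how
# the suite results are stored, not a fixed schema.
from statistics import median

def ci_gate(results, ds_max=0.45, coverage_min=0.70):
    """Return (passed, reasons); a CI job fails the build when passed is False."""
    reasons = []
    if median(r["delta_s"] for r in results) > ds_max:
        reasons.append("ΔS median above threshold")
    if any(r["coverage"] < coverage_min
           for r in results if r.get("direct_qa")):
        reasons.append("coverage below threshold on a direct QA item")
    if any(not all(s == "convergent" for s in r["lambda_states"])
           for r in results):
        reasons.append("λ diverged on a paraphrase")
    return (not reasons, reasons)
```

Wire the returned reasons into the merge-blocking check so a failing run explains itself instead of just going red.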

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
