# Evaluation & Guardrails — Global Fix Map
Prove fixes work and won’t regress. Detect “double hallucination,” enforce acceptance gates, and keep pipelines auditable.
## What this page is
- A compact playbook to evaluate RAG quality and reasoning stability
- Drop-in guardrails that catch failure before users see it
- CI-ready acceptance targets you can copy
## When to use
- You “fixed it” but cannot show measurable improvement
- Answers look plausible yet citations or snippets don’t line up
- Performance flips between seeds, sessions, or agent mixes
- Latency tuning changes accuracy in non-obvious ways
## Open these first
- RAG precision/recall spec: RAG Precision & Recall
- Latency versus accuracy method: Latency vs Accuracy
- Cross-agent agreement tests: Cross-Agent Consistency
- Semantic stability checks: Semantic Stability
- Why-this-snippet schema: Retrieval Traceability
- Snippet & citation schema: Data Contracts
## Common evaluation pitfalls

- **Double hallucination**: metrics focus on style or BLEU but ignore snippet fidelity
- **Recall illusion**: top-k looks high while ΔS(question, context) stays risky (see the sketch after this list)
- **Seed lottery**: single-seed wins mask instability across paraphrases
- **Hybrid flapping**: HyDE + BM25 mixes shift rank order between runs
- **Guardrail over-clamp**: rigid filters "fix" tone but not logic boundaries
- **Benchmark mismatch**: eval set does not reflect OCR noise or multilingual drift
- **No trace table**: cannot audit which snippet justified the answer
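To make the recall illusion concrete: a batch can score high on top-k hits while even the best retrieved snippet stays semantically far from the question. A minimal sketch, assuming ΔS ≈ 1 − cosine similarity of embeddings (a common proxy, not necessarily the canonical WFGY formula) and a hypothetical `embed` stand-in:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in; replace with your real sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def delta_s(question: str, snippet: str) -> float:
    # Assumed proxy: ΔS ≈ 1 − cosine similarity of unit-norm embeddings.
    return 1.0 - float(np.dot(embed(question), embed(snippet)))

def recall_illusion_report(items) -> None:
    """items: list of (question, gold_doc_id, retrieved), where retrieved
    is the top-k list of (doc_id, snippet) pairs."""
    hits = risky = 0
    for question, gold_id, retrieved in items:
        if gold_id in (doc_id for doc_id, _ in retrieved):
            hits += 1                    # top-k "recall" counts this as a win
        best = min(delta_s(question, s) for _, s in retrieved)
        if best > 0.45:
            risky += 1                   # yet even the best snippet is far
    n = len(items)
    print(f"top-k hit rate {hits/n:.2f} | items with min ΔS > 0.45: {risky/n:.2f}")
```

If the second ratio is high while the first looks healthy, you are measuring retrieval plumbing, not answer grounding.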
## Fix in 60 seconds

1. **Adopt acceptance gates** (see the sketch after this list)
   - Retrieval sanity: token overlap ≥ 0.70 to the target section
   - ΔS(question, context) ≤ 0.45 on the median of the suite
   - λ_observe stays convergent on 3 paraphrases
2. **Require citations before prose**
   - Enforce cite-then-answer with Data Contracts
   - Store a trace table: question, retrieved ids, snippet spans, ΔS, λ
3. **Stability before speed**
   - Plot latency vs accuracy and pin the knee point. See Latency vs Accuracy.
4. **Cross-agent cross-check**
   - Compare two capable models on the same context. See Cross-Agent Consistency.
5. **Regression fence in CI**
   - Fail the build if ΔS median rises above 0.45 or trace coverage drops below 0.70. See RAG Precision & Recall.
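As code, the three gates from step 1 reduce to two per-suite statistics plus one stability check. A minimal sketch, assuming a `delta_s` proxy as above and an `answer(question, context)` callable; it collapses λ_observe to "the answer does not flip across 3 paraphrases", which is a simplification:

```python
from statistics import median

def token_overlap(snippet: str, section: str) -> float:
    """Fraction of snippet tokens that also occur in the target section."""
    tokens = snippet.lower().split()
    section_vocab = set(section.lower().split())
    return sum(t in section_vocab for t in tokens) / max(len(tokens), 1)

def gates_pass(suite, delta_s, answer) -> bool:
    """suite items: dicts with question, paraphrases, snippet, gold_section, context."""
    overlap_ok = all(
        token_overlap(it["snippet"], it["gold_section"]) >= 0.70 for it in suite
    )
    ds_ok = median(delta_s(it["question"], it["context"]) for it in suite) <= 0.45
    # λ_observe simplified: answers must agree across 3 paraphrases.
    stable = all(
        len({answer(p, it["context"]) for p in it["paraphrases"][:3]}) == 1
        for it in suite
    )
    return overlap_ok and ds_ok and stable
```

Exact-string agreement is deliberately strict; relax it to a semantic-similarity check once the suite is stable.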
## Copy paste prompt

You have TXT OS and the WFGY Problem Map.

Goal: add measurable guardrails to my RAG pipeline and prove the fix.

Tasks:
1. Build a 20-item smoke suite with:
* question, expected section anchor, and gold snippet span
* bilingual paraphrases for 5 items (if multilingual)
2. Run WFGY probes:
* compute ΔS(question, context) for each item
* record λ_observe at retrieval and reasoning
* require cite-then-answer and log a trace table
3. Report acceptance:
* token overlap to anchor (coverage)
* ΔS median and interquartile range
* paraphrase stability (λ stays convergent)
* pass/fail against thresholds
4. Plot latency vs accuracy and select a stable operating point.
Output
* The trace table (csv/markdown)
* Acceptance summary and which items failed
* A one-page decision note on whether to ship
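One row per item is enough to make the trace auditable. A minimal sketch of a possible schema and CSV dump; the field names are illustrative, not a fixed WFGY contract:

```python
import csv
from dataclasses import asdict, dataclass, fields

@dataclass
class TraceRow:
    question: str
    retrieved_ids: str   # e.g. "doc12|doc07"
    snippet_span: str    # e.g. "doc12:120-180"
    citation: str        # emitted before the prose answer (cite-then-answer)
    answer: str
    delta_s: float
    lambda_state: str    # "convergent" or "divergent"

def write_trace(rows: list[TraceRow], path: str = "trace.csv") -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(TraceRow)])
        writer.writeheader()
        writer.writerows(asdict(row) for row in rows)
```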
## Minimal checklist

- Trace table saved with citations and snippet spans
- ΔS computed per item; λ recorded at retrieval and reasoning
- Coverage ≥ 0.70 to the referenced section for direct QA
- Cross-agent consistency measured on a subset
- Latency vs accuracy chart archived with the run id (knee-point sketch below)
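For the latency vs accuracy chart, the knee point can be picked mechanically: sweep one operating parameter (top-k, reranker depth), record (latency, accuracy) pairs, and stop where extra latency buys almost no accuracy. A minimal sketch; the 1% per 100 ms cutoff is an assumption to tune:

```python
def knee_point(points: list[tuple[float, float]]) -> tuple[float, float]:
    """points: (latency_ms, accuracy) pairs sorted by latency.
    Returns the last point whose step still gains ≥ 1% accuracy per 100 ms."""
    best = points[0]
    for (l0, a0), (l1, a1) in zip(points, points[1:]):
        gain_per_100ms = (a1 - a0) / max((l1 - l0) / 100.0, 1e-9)
        if gain_per_100ms < 0.01:
            break
        best = (l1, a1)
    return best

# Example sweep over top-k: the 260 ms setting is the knee; the 420 ms
# setting adds 160 ms for only half a point of accuracy.
print(knee_point([(120, 0.71), (180, 0.78), (260, 0.80), (420, 0.805)]))
# -> (260, 0.80)
```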
## Acceptance targets

- ΔS(question, context) median ≤ 0.45 on the suite
- λ convergent across 3 paraphrases per item
- ≥ 0.70 token overlap to the gold section for direct QA items
- No unexplained rank flips when toggling hybrid retrieval
- CI blocks merges when any target fails (see the sketch below)
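In CI this can be one script over the trace table that exits non-zero when any target fails, which blocks the merge. A minimal sketch, assuming the `trace.csv` layout from the schema above:

```python
import csv
import sys
from statistics import median

def check(path: str = "trace.csv") -> int:
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    ds_median = median(float(r["delta_s"]) for r in rows)
    coverage = sum(bool(r["citation"].strip()) for r in rows) / max(len(rows), 1)
    failures = []
    if ds_median > 0.45:
        failures.append(f"ΔS median {ds_median:.2f} exceeds 0.45")
    if coverage < 0.70:
        failures.append(f"trace coverage {coverage:.2f} below 0.70")
    if any(r["lambda_state"] != "convergent" for r in rows):
        failures.append("λ diverged on at least one item")
    for msg in failures:
        print("FAIL:", msg)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(check())   # non-zero exit fails the build
```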
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame —
Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.