mirror of
https://github.com/onestardao/WFGY.git
synced 2026-04-28 11:40:07 +00:00
# Eval Observability — ΔS Thresholds

A dedicated module for ΔS monitoring in evaluation pipelines. ΔS is the semantic distance between the query, the retrieved content, and the gold anchor. Tracking thresholds keeps retrieval and reasoning quality auditable, measurable, and comparable.
## Why ΔS thresholds matter

- Detect semantic drift: high ΔS despite “correct” tokens indicates a meaning mismatch.
- Localize retrieval errors: meaning can diverge even when vector similarity scores look fine.
- Evaluate reasoning robustness: stable models keep ΔS below the risk boundary across paraphrases.
- Flag latent hallucinations: ΔS > 0.60 strongly correlates with unsupported answers.
## Core bands
| Band | Range | Meaning |
|---|---|---|
| Stable | ΔS < 0.40 | Retrieval and reasoning aligned. Answers should be correct and verifiable. |
| Transitional | 0.40 ≤ ΔS < 0.60 | Risk zone. Minor schema changes or index drift may flip outcomes. |
| Critical | ΔS ≥ 0.60 | High failure probability. Almost always linked to missing context or schema break. |
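The bands in the table map directly onto a small classifier. A minimal sketch follows; the cutoffs come from the table above, while the function name is illustrative:

```python
def deltaS_band(score: float) -> str:
    """Map a ΔS score onto the stability bands: stable / transitional / critical."""
    if score < 0.40:
        return "stable"         # retrieval and reasoning aligned
    if score < 0.60:
        return "transitional"   # risk zone: minor drift may flip outcomes
    return "critical"           # high failure probability
```

Note that the boundaries are half-open, matching the table: 0.40 already falls in the transitional band and 0.60 in the critical band.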
## Acceptance targets
- Per-query: ΔS ≤ 0.45
- Batch average: ≤ 0.40
- Allowance: ≤ 10% of queries can fall in the transitional band (0.40–0.60).
- Critical: 0% tolerance for ΔS ≥ 0.60 in gold-set eval.
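The four targets above can be checked mechanically per batch. A minimal sketch, with the thresholds taken from the list above and the function name being an assumption:

```python
def check_acceptance(scores):
    """Evaluate a batch of per-query ΔS scores against the acceptance targets.

    Returns one pass/fail flag per target.
    """
    n = len(scores)
    transitional = sum(1 for s in scores if 0.40 <= s < 0.60)
    critical = sum(1 for s in scores if s >= 0.60)
    return {
        "per_query": max(scores) <= 0.45,                # every query at or below 0.45
        "batch_mean": sum(scores) / n <= 0.40,           # batch average at or below 0.40
        "transitional_share": transitional / n <= 0.10,  # at most 10% in the risk zone
        "no_critical": critical == 0,                    # zero tolerance for ΔS >= 0.60
    }
```

Reporting flags separately, rather than one combined boolean, makes it clear which target a failing batch violated.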
## ΔS in eval workflow

1. Probe per query: log ΔS(question, retrieved) and ΔS(retrieved, anchor).
2. Batch roll-up: compute mean, variance, and percentile distribution.
3. Compare across seeds: run three paraphrases and two random seeds; check convergence.
4. Drift alerting: if ΔS rises more than 0.05 vs the baseline, trigger retraining or a schema audit.
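Steps 2 and 4 of the workflow can be sketched together. This is a minimal roll-up under stated assumptions: percentiles are computed naively from the sorted list, and the 0.05 drift tolerance comes from the step above; the function name is illustrative:

```python
import statistics

def batch_rollup(scores, baseline_mean=None, drift_tol=0.05):
    """Aggregate per-query ΔS scores into mean, variance, and percentiles,
    and flag drift against a previous baseline mean."""
    scores = sorted(scores)
    n = len(scores)
    report = {
        "mean": statistics.mean(scores),
        "variance": statistics.pvariance(scores),
        "p50": scores[n // 2],                      # naive median (upper on even n)
        "p95": scores[min(n - 1, int(0.95 * n))],   # naive 95th percentile
    }
    if baseline_mean is not None:
        # Drift alert: mean rose more than drift_tol above the previous baseline.
        report["drift_alert"] = report["mean"] - baseline_mean > drift_tol
    return report
```

Tracking variance alongside the mean matters: two batches with the same mean ΔS can hide very different tails.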
## Example probe (pseudo)

```python
# deltaS() is assumed to be the project's semantic-distance function.
def deltaS_probe(query, retrieved, anchor):
    d1 = deltaS(query, retrieved)    # query vs retrieved content
    d2 = deltaS(retrieved, anchor)   # retrieved content vs gold anchor
    return max(d1, d2)               # report the worse of the two legs

alerts = []
for q in eval_set:
    s = deltaS_probe(q.query, q.retrieved, q.anchor)
    if s >= 0.60:                    # critical band
        alerts.append({"qid": q.id, "ΔS": s, "status": "critical"})
```
## Common pitfalls
- Using cosine similarity as ΔS → ΔS is semantic distance, not raw vector score.
- Ignoring anchor comparison → must compute against both query and gold span.
- No variance tracking → averages hide volatility; variance is key.
- One-shot eval → without paraphrase/seed checks, thresholds lack reliability.
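The one-shot pitfall above suggests a concrete convergence check: run the same query as several paraphrases and seeds, then verify that the ΔS scores agree. A minimal sketch, where the 0.05 tolerance and the function name are assumptions, not part of the spec:

```python
def convergence_check(runs, tol=0.05):
    """Check ΔS convergence across paraphrase/seed runs of the same query.

    `runs` maps a run label (e.g. paraphrase x seed) to the ΔS it produced.
    The check passes when the spread between the best and worst run is
    within `tol`; a larger spread means the threshold reading is unreliable.
    """
    values = list(runs.values())
    spread = max(values) - min(values)
    return {"spread": spread, "converged": spread <= tol}
```

With the three-paraphrase, two-seed protocol from the workflow above, each query yields six runs to feed into this check.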
## Reporting recommendations

- ΔS histogram: visualize stability bands.
- Trend line: track ΔS mean per batch over time.
- Baseline delta: highlight drift from the previous eval version.
- Failure clustering: group queries where ΔS ≥ 0.60 for root-cause analysis.
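For the histogram recommendation, bucketing scores by band is often enough for a first report. A minimal sketch, reusing the band cutoffs from the Core bands table (the function names are illustrative):

```python
from collections import Counter

def band_histogram(scores):
    """Count ΔS scores per stability band for a quick histogram report."""
    def band(s):
        if s < 0.40:
            return "stable"
        if s < 0.60:
            return "transitional"
        return "critical"
    return Counter(band(s) for s in scores)
```

The resulting counts plot directly as a three-bar chart, and the `critical` bucket doubles as the input set for failure clustering.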
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.