

Eval Observability — ΔS Thresholds

🧭 Quick Return to Map

You are in a sub-page of Eval_Observability.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A dedicated module for ΔS monitoring in evaluation pipelines.
ΔS = semantic distance between query, retrieved content, and gold anchor.
Tracking thresholds ensures that retrieval and reasoning quality remain auditable, measurable, and comparable.


Why ΔS thresholds matter

  • Detect semantic drift: High ΔS despite “correct” tokens indicates meaning mismatch.
  • Localize retrieval errors: Low similarity in meaning even if vector scores look fine.
  • Evaluate reasoning robustness: Stable models keep ΔS below the risk boundary across paraphrases.
  • Flag latent hallucinations: ΔS ≥ 0.60 strongly correlates with unsupported answers.

Core bands

| Band | Range | Meaning |
|------|-------|---------|
| Stable | ΔS < 0.40 | Retrieval and reasoning aligned. Answers should be correct and verifiable. |
| Transitional | 0.40 ≤ ΔS < 0.60 | Risk zone. Minor schema changes or index drift may flip outcomes. |
| Critical | ΔS ≥ 0.60 | High failure probability. Almost always linked to missing context or schema break. |
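The bands above map directly to a small classifier. A minimal sketch, assuming ΔS is already computed as a float in [0, 1]:

```python
def band(delta_s: float) -> str:
    """Map a ΔS value to its stability band."""
    if delta_s < 0.40:
        return "stable"
    if delta_s < 0.60:
        return "transitional"
    return "critical"
```

Note that the boundary values 0.40 and 0.60 fall into the higher-risk band, matching the `≤`/`≥` directions in the table.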

Acceptance targets

  • Per-query: ΔS ≤ 0.45
  • Batch average: ΔS ≤ 0.40
  • Allowance: ≤ 10% of queries may fall in the transitional band (0.40 ≤ ΔS < 0.60).
  • Critical: zero tolerance for ΔS ≥ 0.60 in gold-set eval.
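One reading of these targets as a batch gate, sketched below; `check_acceptance` is an illustrative name and the per-query cap is enforced through the transitional allowance plus the zero-tolerance critical rule:

```python
def check_acceptance(scores):
    """Check a batch of per-query ΔS scores against the acceptance targets.

    Passes when: batch mean <= 0.40, at most 10% of queries sit in the
    transitional band (0.40 <= ΔS < 0.60), and no query reaches 0.60.
    """
    n = len(scores)
    mean = sum(scores) / n
    transitional = sum(1 for s in scores if 0.40 <= s < 0.60)
    critical = sum(1 for s in scores if s >= 0.60)
    passed = mean <= 0.40 and transitional <= 0.10 * n and critical == 0
    return passed, {"mean": mean, "transitional": transitional, "critical": critical}
```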

ΔS in eval workflow

  1. Probe per query
    Log ΔS(question, retrieved) and ΔS(retrieved, anchor).
  2. Batch roll-up
    Compute mean, variance, and percentile distribution.
  3. Compare across seeds
    Run three paraphrases and two random seeds; check convergence.
  4. Drift alerting
    If ΔS rises more than 0.05 versus baseline, trigger retraining or a schema audit.
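Steps 2 and 4 above can be sketched with the standard library; a minimal version, assuming `scores` is a list of per-query ΔS floats:

```python
import statistics

def rollup(scores):
    """Batch roll-up: mean, variance, and selected percentiles of ΔS."""
    qs = statistics.quantiles(scores, n=100)  # 99 percentile cut points
    return {
        "mean": statistics.mean(scores),
        "variance": statistics.pvariance(scores),
        "p50": qs[49],
        "p95": qs[94],
    }

def drift_alert(baseline_mean, current_mean, tolerance=0.05):
    """True when the batch ΔS mean rose more than `tolerance` over baseline."""
    return (current_mean - baseline_mean) > tolerance
```

In practice you would persist each batch's roll-up so `drift_alert` can compare against the previous eval version's mean.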

Example probe

A runnable sketch, assuming `deltaS(a, b)` is supplied by your embedding stack and returns a semantic distance in [0, 1]; `eval_set`, `q.retrieved`, and `q.anchor` are illustrative names for your eval records.

```python
def deltaS_probe(query, retrieved, anchor):
    """Return the worse (larger) of the two ΔS legs for a query."""
    d1 = deltaS(query, retrieved)   # question vs. retrieved context
    d2 = deltaS(retrieved, anchor)  # retrieved context vs. gold anchor
    return max(d1, d2)

alerts = []
for q in eval_set:
    s = deltaS_probe(q.query, q.retrieved, q.anchor)
    if s >= 0.60:  # critical band
        alerts.append({"qid": q.id, "ΔS": s, "status": "critical"})
```

Common pitfalls

  • Using cosine similarity as ΔS → ΔS is semantic distance, not raw vector score.
  • Ignoring anchor comparison → must compute against both query and gold span.
  • No variance tracking → averages hide volatility; variance is key.
  • One-shot eval → without paraphrase/seed checks, thresholds lack reliability.

Reporting recommendations

  • ΔS histogram: visualize stability bands.
  • Trend line: track ΔS mean per batch over time.
  • Baseline delta: highlight drift from previous eval version.
  • Failure clustering: group queries where ΔS ≥ 0.60 for root-cause analysis.
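The histogram recommendation reduces to fixed-width binning; a minimal sketch, with `deltaS_histogram` and the 0.05 bin width as illustrative choices:

```python
from collections import Counter

def deltaS_histogram(scores, width=0.05):
    """Bucket ΔS scores into fixed-width bins, keyed by each bin's lower edge."""
    bins = Counter(int(s / width) for s in scores)
    return {round(k * width, 2): bins[k] for k in sorted(bins)}
```

Plotting the resulting counts with the 0.40 and 0.60 band boundaries overlaid makes the stability bands visible at a glance.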

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.