WFGY/ProblemMap/GlobalFixMap/Retrieval/store_agnostic_guardrails.md
2025-08-27 18:55:16 +08:00


Store-Agnostic Guardrails for Retrieval

Use this page to harden retrieval quality without changing your vector store. The checks localize failure causes and route you to the exact structural fix so you can verify with measurable targets.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage of target section ≥ 0.70
  • λ remains convergent across 3 paraphrases and 2 seeds
  • E_resonance stays flat on long windows
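The first three targets can be wired into a single pass/fail gate. A minimal sketch, assuming `delta_s`, `coverage`, and `lambda_states` come from your own probe tooling (hypothetical names; E_resonance is left to a long-window monitor):

```python
def meets_acceptance(delta_s: float, coverage: float, lambda_states: list) -> bool:
    """Return True only when the measurable acceptance targets hold."""
    ds_ok = delta_s <= 0.45                    # ΔS(question, retrieved) ≤ 0.45
    cov_ok = coverage >= 0.70                  # coverage of target section ≥ 0.70
    # λ is convergent when every paraphrase × seed probe lands in the same state
    lam_ok = len(set(lambda_states)) == 1 and lambda_states[0] == "convergent"
    return ds_ok and cov_ok and lam_ok
```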

15-minute triage checklist

  1. Lock metrics and analyzers
    One analyzer for write and read. Verify distance metric and normalization.
    Open: Retrieval Playbook

  2. Enforce the snippet contract
    Required fields: snippet_id, section_id, source_url, offsets, tokens.
    Open: Data Contracts

  3. Trace why this snippet
    Add cite-then-explain and store the trace.
    Open: Retrieval Traceability

  4. Probe ΔS and λ
    Three paraphrases and two seeds. If ΔS ≥ 0.60 or λ flips, clamp variance.
    Open: deltaS_probes.md

  5. k sweep and rerankers
    k in {5, 10, 20}. Try a deterministic reranker when order matters.
    Open: Rerankers · hybrid_reranker_recipe.md

  6. Check chunk boundaries and anchors
    If facts exist but never surface, realign chunking and anchors.
    Open: chunking-checklist.md · chunk_alignment.md

  7. Detect fragmentation
    If coverage is low while index looks healthy, suspect store fragmentation.
    Open: pattern_vectorstore_fragmentation.md

  8. Hybrid failure
    If hybrid underperforms a single retriever, split parsing and rebalance.
    Open: pattern_query_parsing_split.md

  9. Embedding vs meaning
    High similarity yet wrong answer means metric or family mismatch.
    Open: embedding-vs-semantic.md
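Step 4 can be sketched as one probe loop over 3 paraphrases and 2 seeds. `retrieve` and `delta_s` are hypothetical hooks into your own stack, and the λ-flip test below is a crude proxy that treats ΔS ≥ 0.60 (the clamp threshold from step 4) as the boundary:

```python
from itertools import product

def probe(question, paraphrases, seeds, retrieve, delta_s):
    """Run the question plus paraphrases across seeds; report worst ΔS and a flip proxy."""
    results = []
    for p, seed in product([question] + paraphrases, seeds):
        snippets = retrieve(p, seed=seed)
        results.append(delta_s(question, snippets))
    worst = max(results)
    # proxy: λ "flips" when runs straddle the 0.60 clamp threshold
    flipped = min(results) < 0.60 <= worst
    return {"worst_dS": worst, "lambda_flip": flipped}
```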


Minimal instrumentation you can paste

# Pseudocode: keep these checkpoints store agnostic.
# `retriever`, `probes`, `prompt`, and `llm` are your own stack's objects.

def retrieve(q, k=10):
    # one analyzer for write and read, with an explicit distance metric
    return retriever.invoke(q, k=k)

def trace_schema(snippet):
    # enforce the snippet contract before anything reaches the prompt
    required = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}
    assert required <= set(snippet.keys()), f"missing fields: {required - set(snippet)}"

def observe(q, snippets, answer):
    # compute ΔS and λ, record the probes alongside the answer
    log = probes.compute(q, snippets, answer)
    if log["ΔS"] >= 0.60 or log["λ_flip"]:
        raise RuntimeError("High ΔS or λ flip. Apply variance clamp and rerankers.")
    return log

def pipeline(q):
    snippets = retrieve(q, k=10)
    for s in snippets:
        trace_schema(s)
    msg = prompt.cite_then_explain(q, snippets)
    answer = llm.invoke(msg)
    return observe(q, snippets, answer)
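A quick usage sketch of the same contract check as a boolean, showing how a formatter-renamed field is caught early (the snippet values below are made up for illustration):

```python
def has_contract(snippet) -> bool:
    """True only when all required snippet-contract fields are present."""
    required = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}
    return required <= set(snippet.keys())

ok = {"snippet_id": "s1", "section_id": "2.3", "source_url": "https://example.org",
      "offsets": [120, 480], "tokens": 91}
bad = {"id": "s1", "section": "2.3"}  # formatter renamed the keys
```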

Copy-paste LLM prompt

You have TXT OS and the WFGY pages loaded.

Task:
1) Enforce cite-then-explain with fields {snippet_id, section_id, source_url, offsets, tokens}.
2) Log ΔS(question, retrieved) and λ across 3 paraphrases and 2 seeds.
3) If ΔS ≥ 0.60 or λ flips, propose the smallest structural change referencing:
   retrieval-playbook, retrieval-traceability, data-contracts, rerankers, query-parsing-split.
4) Return JSON:
{ "citations": [...], "answer": "...", "ΔS": 0.xx, "λ_state": "<>", "coverage": 0.xx, "next_fix": "..." }
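A minimal sketch of a validator for that JSON contract, assuming the model returns the object verbatim; field names are copied from the prompt above:

```python
import json

REQUIRED = {"citations", "answer", "ΔS", "λ_state", "coverage", "next_fix"}

def parse_wfgy_reply(raw: str) -> dict:
    """Parse the model reply and reject it when contract fields are missing."""
    reply = json.loads(raw)
    missing = REQUIRED - set(reply)
    if missing:
        raise ValueError(f"reply missing fields: {sorted(missing)}")
    return reply
```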

Symptoms → exact structural fix

| Symptom | Likely cause | Open this |
|---------|--------------|-----------|
| High similarity yet wrong meaning | metric or embedding family mismatch | embedding-vs-semantic.md |
| Facts exist but never retrieved | chunk drift or store fragmentation | chunking-checklist.md · pattern_vectorstore_fragmentation.md |
| Hybrid worse than single retriever | query parsing split, mis-weighted rerank | pattern_query_parsing_split.md · rerankers.md |
| Citations missing or unstable | schema not enforced, formatter renamed fields | retrieval-traceability.md · data-contracts.md |
| Answers flip between runs | prompt header reordering or variance | context-drift.md · rerankers.md |
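For convenience, the same routing as a literal Python map (file names copied from the table; a sketch, not an API):

```python
SYMPTOM_ROUTES = {
    "high similarity, wrong meaning": ["embedding-vs-semantic.md"],
    "facts exist but never retrieved": ["chunking-checklist.md",
                                        "pattern_vectorstore_fragmentation.md"],
    "hybrid worse than single retriever": ["pattern_query_parsing_split.md",
                                           "rerankers.md"],
    "citations missing or unstable": ["retrieval-traceability.md",
                                      "data-contracts.md"],
    "answers flip between runs": ["context-drift.md", "rerankers.md"],
}
```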

Rebuild order when numbers stay bad

Follow the store-agnostic sequence and re-measure after each step. Open: Retrieval Playbook

  1. Lock analyzer and distance metric
  2. Re-chunk with anchor checklist
  3. Re-embed with a single family and normalization
  4. Add deterministic reranker and stabilize order
  5. Tighten data contracts and traceability
  6. Evaluate with the gold set and ΔS probes. Open: retrieval_eval_recipes.md
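The six steps above can be driven by a small loop that re-measures after every step and stops as soon as the acceptance targets hold. A sketch, assuming `steps` is your ordered list of rebuild actions and `measure` returns the current (ΔS, coverage):

```python
def rebuild(steps, measure, max_ds=0.45, min_cov=0.70):
    """Apply rebuild steps in order; stop early once acceptance targets hold."""
    for apply_step in steps:
        apply_step()
        ds, cov = measure()          # re-measure after every single step
        if ds <= max_ds and cov >= min_cov:
            return True              # targets met, no need for later steps
    return False                     # numbers still bad after the full sequence
```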

Ops monitors to keep on


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning and semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
