

Store-Agnostic Guardrails for Retrieval

🧭 Quick Return to Map

You are in a sub-page of Retrieval.
To reorient, go back to the Retrieval map.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page to harden retrieval quality without changing your vector store. The checks below localize the cause of a failure and route you to the exact structural fix, so you can verify the repair against measurable targets.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage of target section ≥ 0.70
  • λ remains convergent across 3 paraphrases and 2 seeds
  • E_resonance stays flat on long windows
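
The first two targets can be checked with a few lines. A minimal sketch, assuming ΔS is computed as one minus the cosine similarity of question and snippet embeddings, and coverage as token overlap with the target section; if your setup defines either differently, swap the formula:

```python
import math

def cosine(a, b):
    # cosine similarity of two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def delta_s(q_vec, r_vec):
    # ΔS as 1 - cosine similarity (assumption, see lead-in above)
    return 1.0 - cosine(q_vec, r_vec)

def coverage(target_token_ids, retrieved_token_ids):
    # fraction of the target section's tokens that surface in retrieved snippets
    target = set(target_token_ids)
    return len(target & set(retrieved_token_ids)) / len(target)
```

Gate on `delta_s(...) <= 0.45` and `coverage(...) >= 0.70` before accepting a retrieval pass.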

15-minute triage checklist

  1. Lock metrics and analyzers
    One analyzer for write and read. Verify distance metric and normalization.
    Open: Retrieval Playbook

  2. Enforce the snippet contract
    Required fields: snippet_id, section_id, source_url, offsets, tokens.
    Open: Data Contracts

  3. Trace why this snippet
    Add cite-then-explain and store the trace.
    Open: Retrieval Traceability

  4. Probe ΔS and λ
    Three paraphrases and two seeds. If ΔS ≥ 0.60 or λ flips, clamp variance.
    Open: deltaS_probes.md

  5. k sweep and rerankers
    k in {5, 10, 20}. Try a deterministic reranker when order matters.
    Open: Rerankers · hybrid_reranker_recipe.md

  6. Check chunk boundaries and anchors
    If facts exist but never surface, realign chunking and anchors.
    Open: chunking-checklist.md · chunk_alignment.md

  7. Detect fragmentation
    If coverage is low while index looks healthy, suspect store fragmentation.
    Open: pattern_vectorstore_fragmentation.md

  8. Hybrid failure
    If hybrid underperforms a single retriever, split parsing and rebalance.
    Open: pattern_query_parsing_split.md

  9. Embedding vs meaning
    High similarity yet wrong answer means metric or family mismatch.
    Open: embedding-vs-semantic.md
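
Steps 4 and 5 of the checklist can be run as a small harness. A minimal sketch; `retrieve`, `score`, and `relevance` are hypothetical callables standing in for your own retriever and probe, not a real API:

```python
def k_sweep(retrieve, score, q, ks=(5, 10, 20)):
    # retrieve(q, k) -> snippets; score(q, snippets) -> ΔS-style number, lower is better
    results = {k: score(q, retrieve(q, k)) for k in ks}
    best_k = min(results, key=results.get)
    return best_k, results

def deterministic_rerank(snippets, relevance):
    # stable order: sort by relevance, break ties on snippet_id so runs never differ
    return sorted(snippets, key=lambda s: (-relevance(s), s["snippet_id"]))
```

The tie-break on `snippet_id` is what makes the reranker deterministic: equal-scoring snippets can otherwise swap positions between runs and flip answers.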


Minimal instrumentation you can paste

# Pseudocode: keep these checkpoints store agnostic
REQUIRED_FIELDS = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def retrieve(q, k=10):
    # one analyzer and one explicit distance metric for both write and read
    return retriever.invoke(q, k=k)

def trace_schema(snippet):
    # enforce the snippet contract before the snippet reaches the prompt
    missing = REQUIRED_FIELDS - set(snippet)
    assert not missing, f"snippet missing fields: {sorted(missing)}"

def observe(q, snippets, answer):
    # compute ΔS and λ, record the probes alongside the answer
    log = probes.compute(q, snippets, answer)
    if log["ΔS"] >= 0.60 or log["λ_flip"]:
        raise RuntimeError("High ΔS or λ flip. Apply variance clamp and rerankers.")
    return log

def pipeline(q):
    snippets = retrieve(q, k=10)
    for s in snippets:
        trace_schema(s)
    msg = prompt.cite_then_explain(q, snippets)
    answer = llm.invoke(msg)
    return observe(q, snippets, answer)
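
The `probes.compute` call above is store specific. A minimal stand-in for its λ-flip half, assuming answers are plain strings and `run` is a hypothetical runner that executes the pipeline for one query and seed:

```python
def lambda_flip(answers):
    # treat λ as convergent when every paraphrase/seed run agrees;
    # any disagreement counts as a flip (simplified stand-in for the real probe)
    return len({a.strip().lower() for a in answers}) > 1

def probe_stability(run, question, paraphrases, seeds=(0, 1)):
    # run(q, seed) -> answer string; three paraphrases and two seeds per the targets
    answers = [run(q, seed) for q in [question, *paraphrases] for seed in seeds]
    return {"λ_flip": lambda_flip(answers), "runs": len(answers)}
```

Anything beyond string equality (semantic matching of paraphrased answers) belongs in your real probe; this sketch only shows the loop shape.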

Copy-paste LLM prompt

You have TXT OS and the WFGY pages loaded.

Task:
1) Enforce cite-then-explain with fields {snippet_id, section_id, source_url, offsets, tokens}.
2) Log ΔS(question, retrieved) and λ across 3 paraphrases and 2 seeds.
3) If ΔS ≥ 0.60 or λ flips, propose the smallest structural change referencing:
   retrieval-playbook, retrieval-traceability, data-contracts, rerankers, query-parsing-split.
4) Return JSON:
{ "citations": [...], "answer": "...", "ΔS": 0.xx, "λ_state": "<>", "coverage": 0.xx, "next_fix": "..." }
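
Before trusting the model's JSON reply, validate it against the contract the prompt demands. A minimal sketch using only the standard library:

```python
import json

REQUIRED = {"citations", "answer", "ΔS", "λ_state", "coverage", "next_fix"}

def validate_report(raw):
    # parse the model's JSON reply and reject it unless every contract field is present
    report = json.loads(raw)
    missing = REQUIRED - set(report)
    if missing:
        raise ValueError(f"report missing fields: {sorted(missing)}")
    return report
```

Rejecting malformed reports here is what keeps the "citations missing or unstable" symptom from silently re-entering the pipeline.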

Symptoms → exact structural fix

| Symptom | Likely cause | Open this |
|---|---|---|
| High similarity yet wrong meaning | Metric or embedding family mismatch | embedding-vs-semantic.md |
| Facts exist but never retrieved | Chunk drift or store fragmentation | chunking-checklist.md · pattern_vectorstore_fragmentation.md |
| Hybrid worse than single retriever | Query parsing split, mis-weighted rerank | pattern_query_parsing_split.md · rerankers.md |
| Citations missing or unstable | Schema not enforced, formatter renamed fields | retrieval-traceability.md · data-contracts.md |
| Answers flip between runs | Prompt header reordering or variance | context-drift.md · rerankers.md |

Rebuild order when numbers stay bad

Follow the store-agnostic sequence and re-measure after each step. Open: Retrieval Playbook

  1. Lock analyzer and distance metric
  2. Re-chunk with anchor checklist
  3. Re-embed with a single family and normalization
  4. Add deterministic reranker and stabilize order
  5. Tighten data contracts and traceability
  6. Evaluate with the gold set and ΔS probes. Open: retrieval_eval_recipes.md
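
Steps 1 and 3 of the rebuild can be guarded in code. A minimal sketch; the config keys are assumptions for illustration, adapt them to whatever your store actually records:

```python
import math

def l2_normalize(vec):
    # normalize once at write time and again at read time so cosine and
    # dot-product metrics agree across the whole store
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def assert_same_family(write_cfg, read_cfg):
    # one embedding family, analyzer, metric, and normalization on both sides
    for key in ("model", "analyzer", "metric", "normalized"):
        if write_cfg.get(key) != read_cfg.get(key):
            raise ValueError(f"write/read mismatch on {key!r}")
```

Running `assert_same_family` at index build and at query time catches the most common silent drift: re-embedding with a new model while the old vectors stay in the store.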

Ops monitors to keep on


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandmas Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.