# Store-Agnostic Guardrails for Retrieval
## 🧭 Quick Return to Map
You are in a sub-page of Retrieval.
To reorient, go back here:
- Retrieval — information access and knowledge lookup
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page to harden retrieval quality without changing your vector store. The checks localize failure causes and route you to the exact structural fix so you can verify with measurable targets.
## Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage of target section ≥ 0.70
- λ remains convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat on long windows
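The ΔS target above can be checked with a small helper. This is a minimal sketch assuming ΔS(question, retrieved) is computed as 1 minus the cosine similarity between the question embedding and the retrieved-context embedding; if your probe defines ΔS differently, substitute it here.

```python
import math

def delta_s(q_vec, s_vec):
    # ΔS as 1 - cosine similarity between question and retrieved-context
    # embeddings (assumed definition; swap in your own probe if it differs).
    dot = sum(a * b for a, b in zip(q_vec, s_vec))
    nq = math.sqrt(sum(a * a for a in q_vec))
    ns = math.sqrt(sum(b * b for b in s_vec))
    return 1.0 - dot / (nq * ns)

# Acceptance: ΔS ≤ 0.45 passes; ΔS ≥ 0.60 triggers the variance clamp.
assert delta_s([1.0, 0.0], [1.0, 0.0]) == 0.0  # identical direction
assert delta_s([1.0, 0.0], [0.0, 1.0]) == 1.0  # orthogonal
```

Run the same probe across the 3 paraphrases and 2 seeds from the target list; a single passing value is not enough to call λ convergent.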
## 15-minute triage checklist

1. **Lock metrics and analyzers**
   Use one analyzer for both write and read. Verify the distance metric and normalization.
   Open: Retrieval Playbook
2. **Enforce the snippet contract**
   Required fields: `snippet_id`, `section_id`, `source_url`, `offsets`, `tokens`.
   Open: Data Contracts
3. **Trace "why this snippet"**
   Add cite-then-explain and store the trace.
   Open: Retrieval Traceability
4. **Probe ΔS and λ**
   Run three paraphrases and two seeds. If ΔS ≥ 0.60 or λ flips, clamp variance.
   Open: deltaS_probes.md
5. **Sweep k and try rerankers**
   Sweep k in {5, 10, 20}. Try a deterministic reranker when order matters.
   Open: Rerankers · hybrid_reranker_recipe.md
6. **Check chunk boundaries and anchors**
   If facts exist but never surface, realign chunking and anchors.
   Open: chunking-checklist.md · chunk_alignment.md
7. **Detect fragmentation**
   If coverage is low while the index looks healthy, suspect store fragmentation.
   Open: pattern_vectorstore_fragmentation.md
8. **Diagnose hybrid failure**
   If hybrid retrieval underperforms a single retriever, split query parsing and rebalance.
   Open: pattern_query_parsing_split.md
9. **Separate embedding from meaning**
   High similarity with a wrong answer means a metric or embedding-family mismatch.
   Open: embedding-vs-semantic.md
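The k sweep from the checklist above can be sketched as a small loop. `retriever_fn` here is a hypothetical callable that returns the snippets plus the measured ΔS and coverage for a given k; wire in whatever your pipeline exposes.

```python
def k_sweep(retriever_fn, question, ks=(5, 10, 20)):
    # Sweep k and record ΔS and coverage per setting.
    # retriever_fn(question, k) -> (snippets, delta_s, coverage) is assumed.
    results = {}
    for k in ks:
        snippets, ds, cov = retriever_fn(question, k)
        results[k] = {"ΔS": ds, "coverage": cov, "n": len(snippets)}
    # Choose the smallest k that meets the acceptance targets.
    passing = [k for k, r in results.items()
               if r["ΔS"] <= 0.45 and r["coverage"] >= 0.70]
    return (min(passing) if passing else None), results
```

If no k passes, the problem is structural rather than a tuning issue: move on to the chunking, fragmentation, or reranker items above instead of sweeping further.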
## Minimal instrumentation you can paste

```python
# Pseudocode: keep these checkpoints store agnostic
def retrieve(q, k=10):
    # unified analyzer and explicit metric
    return retriever.invoke(q, k=k)

def trace_schema(snippet):
    # enforce the snippet contract before the snippet reaches the prompt
    required = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}
    assert required <= set(snippet.keys()), f"missing fields: {required - set(snippet.keys())}"

def observe(q, snippets, answer):
    # compute ΔS and λ, record probes
    log = probes.compute(q, snippets, answer)
    if log["ΔS"] >= 0.60 or log["λ_flip"]:
        raise RuntimeError("High ΔS or λ flip. Apply variance clamp and rerankers.")
    return log

def pipeline(q):
    s = retrieve(q, k=10)
    for x in s:
        trace_schema(x)
    msg = prompt.cite_then_explain(q, s)
    ans = llm.invoke(msg)
    return observe(q, s, ans)
```
## Copy-paste LLM prompt

```txt
You have TXT OS and the WFGY pages loaded.

Task:
1) Enforce cite-then-explain with fields {snippet_id, section_id, source_url, offsets, tokens}.
2) Log ΔS(question, retrieved) and λ across 3 paraphrases and 2 seeds.
3) If ΔS ≥ 0.60 or λ flips, propose the smallest structural change referencing:
   retrieval-playbook, retrieval-traceability, data-contracts, rerankers, query-parsing-split.
4) Return JSON:
   { "citations": [...], "answer": "...", "ΔS": 0.xx, "λ_state": "<>", "coverage": 0.xx, "next_fix": "..." }
```
## Symptoms → exact structural fix
| Symptom | Likely cause | Open this |
|---|---|---|
| High similarity yet wrong meaning | metric or embedding family mismatch | embedding-vs-semantic.md |
| Facts exist but never retrieved | chunk drift or store fragmentation | chunking-checklist.md · pattern_vectorstore_fragmentation.md |
| Hybrid worse than single retriever | query parsing split, mis-weighted rerank | pattern_query_parsing_split.md · rerankers.md |
| Citations missing or unstable | schema not enforced, formatter renamed fields | retrieval-traceability.md · data-contracts.md |
| Answers flip between runs | prompt header reordering or variance | context-drift.md · rerankers.md |
## Rebuild order when the numbers stay bad

Follow the store-agnostic sequence and re-measure after each step. Open: Retrieval Playbook

1. Lock the analyzer and distance metric.
2. Re-chunk with the anchor checklist.
3. Re-embed with a single embedding family and consistent normalization.
4. Add a deterministic reranker and stabilize order.
5. Tighten data contracts and traceability.
6. Evaluate with the gold set and ΔS probes. Open: retrieval_eval_recipes.md
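The re-embed step above depends on one embedding family and one normalization applied identically at write and read time. A minimal sketch of the normalization half, assuming cosine as the locked metric:

```python
import math

def l2_normalize(vec):
    # L2-normalize so cosine similarity and dot product agree.
    # Apply the SAME normalization when indexing and when querying;
    # a mismatch here shows up as "high similarity, wrong meaning".
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

v = l2_normalize([3.0, 4.0])
assert abs(sum(x * x for x in v) - 1.0) < 1e-9  # unit length
```

Locking this as a shared helper used by both the write path and the read path is the cheapest way to keep the two from drifting apart.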
## Ops monitors to keep on

- Index readiness fence and version hash. Open: bootstrap-ordering.md
- Live ΔS and λ alerts on long windows. Open: ops live monitoring
- Regression gate for coverage and ΔS. Open: eval precision and recall
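The regression gate above can be sketched as a single check over the gold-set probe logs. The thresholds are the acceptance targets from this page; adjust them if your targets differ.

```python
def regression_gate(runs, ds_max=0.45, cov_min=0.70):
    # Fail the gate if any gold-set run exceeds the ΔS ceiling
    # or falls below the coverage floor. `runs` is a list of
    # probe logs shaped like {"ΔS": float, "coverage": float}.
    failures = [r for r in runs
                if r["ΔS"] > ds_max or r["coverage"] < cov_min]
    return len(failures) == 0, failures
```

Wire this into CI so a re-chunk or re-embed cannot ship while any gold-set query regresses past the targets.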
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.