LlamaIndex — Guardrails and Fix Patterns
Use this guide when your stack is built on LlamaIndex (indices, query engines, retrievers, routers, agents) and you see wrong snippets, unstable reasoning, mixed sources, or silent failures that look fine in the logs.
Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- coverage ≥ 0.70 to the intended section or record
- λ stays convergent across 3 paraphrases
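A minimal sketch of how these targets can be gated in code. It assumes ΔS is approximated as 1 minus the cosine similarity between question and retrieved-context embeddings, and that λ convergence is crudely checked as answer stability across paraphrases; the `accept` function and its thresholds are illustrative placeholders, not a fixed WFGY or LlamaIndex API:

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def delta_s(q_vec, ctx_vec):
    # Proxy for ΔS(question, retrieved): 1 - cosine similarity.
    return 1.0 - cosine(q_vec, ctx_vec)

def accept(q_vec, ctx_vec, coverage, paraphrase_answers):
    # Acceptance targets from above:
    #   ΔS ≤ 0.45, coverage ≥ 0.70, stable answers across 3 paraphrases.
    stable = len(set(paraphrase_answers)) == 1  # crude λ-convergence check
    return delta_s(q_vec, ctx_vec) <= 0.45 and coverage >= 0.70 and stable
```

In practice you would compute `coverage` against the intended section or record, but the gate structure stays the same.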
Typical breakpoints → exact fixes
- Retrieval returns plausible but wrong chunks → Fix No.1: Hallucination and Chunk Drift. Also review the Retrieval Playbook.
- High vector similarity, wrong meaning → Fix No.5: Embedding ≠ Semantic.
- Hybrid retrieval or RouterQueryEngine degrades compared to a single retriever → Pattern: Query Parsing Split. Add ordering control with Rerankers.
- Facts exist in the store but never show up → Pattern: Vectorstore Fragmentation.
- Citations missing or inconsistent between retriever and response synthesizer → Fix No.8: Retrieval Traceability and Data Contracts.
- Long pipelines flatten tone and drift logically → Fix No.3 and No.9: Context Drift and Entropy Collapse.
- Agents loop, roles blur, or memory overwrites facts → Fix No.13: Multi-Agent Chaos (role drift, memory overwrite).
- Confident tone but wrong answer → Fix No.4: Bluffing and Overconfidence.
- Model merges two sources into one response → Pattern: Symbolic Constraint Unlock.
Minimal LlamaIndex pattern with WFGY checks
```python
# Pseudocode. Control points you must keep.
from llama_index.core import VectorStoreIndex
from llama_index.core.query_engine import RetrieverQueryEngine

# Build the index with an explicit metric and normalization.
index = VectorStoreIndex.from_documents(
    docs,
    embed_model=embedder,  # keep the same embedding fn for write and read
)
retriever = index.as_retriever(similarity_top_k=10)
qe = RetrieverQueryEngine.from_args(retriever)

def assemble_prompt(context, q):
    # Schema-locked: system -> task -> constraints -> citations -> answer.
    return prompt.render(context=context, question=q)

def reason(msg):
    # The template requires "cite, then explain".
    return llm(msg)

def wfgy_checks(q, context, answer):
    # Compute ΔS(question, context) and record the snippet↔citation mapping.
    # Fail fast when ΔS ≥ 0.60 or λ is divergent.
    return metrics_and_trace(q, context, answer)

def run(q):
    nodes = retriever.retrieve(q)
    context = join_nodes(nodes)
    msg = assemble_prompt(context, q)
    answer = reason(msg)
    return wfgy_checks(q, context, answer)
```
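One way to flesh out the check stage as a fail-fast gate. The names `SemanticGateError`, `delta_s_fn`, and the 0.60 hard limit are assumptions for illustration; `delta_s_fn` stands in for whatever ΔS scorer your pipeline uses:

```python
class SemanticGateError(RuntimeError):
    """Raised when the question↔context distance exceeds the hard limit."""

def wfgy_checks(q, context, answer, delta_s_fn, hard_limit=0.60):
    # Score the semantic distance between question and assembled context.
    score = delta_s_fn(q, context)
    trace = {
        "question": q,
        "delta_s": score,
        "answer_preview": answer[:80],  # keep a short excerpt for audits
    }
    if score >= hard_limit:
        # Stop the run instead of returning a confident wrong answer.
        raise SemanticGateError(f"ΔS={score:.2f} >= {hard_limit}")
    return trace
```

Raising instead of logging is the point: a high-ΔS answer should never reach the caller looking like a success.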
What this enforces
- Retrieval is observable and parameterized.
- The prompt is schema-locked, with citations placed before the answer.
- The WFGY check runs after generation and can stop the run when ΔS is high or λ flips.
- Traces record the snippet-to-citation mapping for audits.
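The snippet-to-citation trace can be as simple as a record per retrieved node. This sketch assumes nodes are plain dicts with `id`, `doc`, and offset fields; the field names are illustrative, so adapt them to your own data contract:

```python
from dataclasses import dataclass, asdict

@dataclass
class CitationTrace:
    snippet_id: str        # id of the retrieved node or chunk
    source_doc: str        # document the snippet came from
    cited_as: str          # citation label the synthesizer emitted
    char_span: tuple       # (start, end) offsets inside the source

def record_trace(nodes, citations):
    # Pair each retrieved node with the citation the answer used,
    # so audits can detect dropped or rewritten citations.
    traces = []
    for node, cite in zip(nodes, citations):
        traces.append(asdict(CitationTrace(
            snippet_id=node["id"],
            source_doc=node["doc"],
            cited_as=cite,
            char_span=(node["start"], node["end"]),
        )))
    return traces
```

A mismatch between `len(nodes)` and `len(citations)` is itself a signal worth flagging before zipping them.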
Reference specs: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts
LlamaIndex-specific gotchas
- Mixed embedding functions or metrics between ingestion and query. Rebuild with an explicit metric and unit normalization. See Embedding ≠ Semantic.
- ResponseSynthesizer rewrites citations. Enforce cite-first output and lock section ids. See Retrieval Traceability.
- RouterQueryEngine sends different tokenizations to the dense and sparse backends. Unify analyzers first. See Query Parsing Split.
- Persistence reloads with a different embedder after swapping models. Pin versions and validate store headers. See Pre-Deploy Collapse.
- Agent tool loops with vague stopping rules. Add BBCR bridge steps and clamp variance with BBAM in the prompt. See Logic Collapse.
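The first and fourth gotchas can both be caught with a small header written next to the persisted store and validated on reload. The header file name and fields here are an assumption, not a LlamaIndex feature; adapt them to your store layout:

```python
import json
from pathlib import Path

def write_store_header(store_dir, model_name, metric, dim):
    # Record the embedder identity used at ingestion time.
    header = {"embed_model": model_name, "metric": metric, "dim": dim}
    Path(store_dir, "embedding_header.json").write_text(json.dumps(header))

def validate_store_header(store_dir, model_name, metric, dim):
    # Refuse to query a store built with a different embedder or metric.
    header = json.loads(Path(store_dir, "embedding_header.json").read_text())
    expected = {"embed_model": model_name, "metric": metric, "dim": dim}
    if header != expected:
        raise ValueError(f"store/reader mismatch: {header} vs {expected}")
```

Call `validate_store_header` before building the retriever, so a swapped model fails loudly at startup rather than silently degrading recall.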
When to escalate
- ΔS remains ≥ 0.60 after chunking and retrieval fixes. Work through the Retrieval Playbook and rebuild the index parameters.
- Answers flip between runs or sessions. Verify version skew and session state. See Pre-Deploy Collapse.
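Run-to-run flips can be detected before escalating by replaying the same question a few times. This is a sketch under the assumption that `ask` is any callable returning an answer string; the whitespace-and-case normalization is a deliberately crude placeholder:

```python
def normalize(answer):
    # Crude normalization so cosmetic differences don't count as flips.
    return " ".join(answer.lower().split())

def flips_between_runs(ask, question, runs=3):
    answers = {normalize(ask(question)) for _ in range(runs)}
    # More than one distinct normalized answer means the run is unstable;
    # check version skew and session state before touching retrieval.
    return len(answers) > 1
```

If this returns True with temperature pinned to zero, suspect a version or state mismatch rather than the retriever.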
🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: see the Hall of Fame.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.