# RAG + VectorDB — Global Fix Map

This hub covers typical retrieval bugs caused by vector databases and embeddings.
Use this page when your RAG pipeline looks fine but answers keep drifting, citations don't match, or hybrid retrievers underperform.
Every page here is a guardrail with copy-paste recipes and acceptance targets.
## Orientation: what each page means
| Fix Page | What it solves | Typical symptom |
|---|---|---|
| metric_mismatch.md | Distance metric mismatch (cosine vs L2 vs dot) | High similarity numbers but wrong meaning |
| normalization_and_scaling.md | Missing normalization or scaling issues | Embeddings with larger norms dominate |
| tokenization_and_casing.md | Tokenizer or casing drift | Same text embeds differently across runs |
| chunking_to_embedding_contract.md | Chunking not aligned with embedding model | Citations cut mid-sentence or incoherent snippets |
| vectorstore_fragmentation.md | Over-fragmented stores | Retrieval pulls incomplete, scattered sections |
| dimension_mismatch_and_projection.md | Embedding and index dimension mismatch | Runtime errors or silent drop of vectors |
| update_and_index_skew.md | Index not refreshed after updates | Old sections keep showing up |
| hybrid_retriever_weights.md | Hybrid weighting not tuned | BM25+ANN underperforms single retriever |
| duplication_and_near_duplicate_collapse.md | Redundant entries collapse signal | Top-k filled with near-identical chunks |
| poisoning_and_contamination.md | Malicious or noisy vectors | Hallucinations, unsafe content retrieval |
## When to use this folder
- Your answers look semantically wrong even though top-k similarity looks high.
- Citations point to the wrong section or cannot be verified.
- Hybrid retrieval underperforms vs single retriever.
- Index seems “healthy” but recall/coverage stays low.
## Core acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage of target section ≥ 0.70
- λ_observe convergent across 3 paraphrases
- E_resonance flat on long windows
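The first two targets can be checked mechanically at query time. A minimal sketch, assuming the common WFGY convention that ΔS(question, retrieved) is 1 minus the cosine similarity of the two embeddings, and that coverage is the fraction of the target section's chunk IDs present in the retrieved set (both exact definitions are assumptions here):

```python
import numpy as np

def delta_s(q_vec, ctx_vec):
    """ΔS as 1 - cosine similarity: 0 = identical direction, 1 = orthogonal."""
    q, c = np.asarray(q_vec, float), np.asarray(ctx_vec, float)
    return 1.0 - float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c)))

def coverage(retrieved_ids, target_ids):
    """Fraction of the target section's chunks that made it into top-k."""
    target = set(target_ids)
    return len(target & set(retrieved_ids)) / len(target)
```

Gate answers on `delta_s(...) <= 0.45` and `coverage(...) >= 0.70` before letting the model cite a snippet.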
## FAQ for newcomers
**Why do we need these fixes if vector DBs are mature?**
Because RAG pipelines usually break not at the infrastructure level but at the semantic boundary. Even when FAISS, Milvus, or Pinecone run fine, the contracts between embedding, chunking, and retrieval remain fragile.
**What is metric mismatch and why is it deadly?**
If your index uses L2 but your embeddings were trained for cosine similarity, the “closest” neighbors are meaningless. This is the single most common RAG failure.
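The failure is easy to reproduce with toy vectors: on unnormalized embeddings, L2 and cosine can disagree about which neighbor is “closest”. A contrived two-document sketch:

```python
import numpy as np

query = np.array([1.0, 1.0])
docs = {
    "a": np.array([10.0, 10.0]),  # same direction as the query, large norm
    "b": np.array([0.9, 1.2]),    # close in space, different direction
}

l2   = lambda q, d: float(np.linalg.norm(q - d))
cosd = lambda q, d: 1.0 - float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

nearest_l2  = min(docs, key=lambda k: l2(query, docs[k]))    # -> "b"
nearest_cos = min(docs, key=lambda k: cosd(query, docs[k]))  # -> "a"
```

Unit-normalizing vectors at write time makes the two metrics agree on ranking; pinning one metric in the index config prevents the silent swap.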
**Why do duplicates matter so much?**
If your corpus has many repeated sentences, the retriever fills top-k with clones. The LLM sees no diversity and hallucinates.
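One common mitigation is to collapse near-duplicates before they reach the prompt. A greedy sketch over unit-normalized embeddings, best candidate first (the threshold is an assumption to tune per corpus):

```python
import numpy as np

def collapse_near_duplicates(candidates, threshold=0.97):
    """Keep a candidate only if it is not near-identical
    (cosine >= threshold) to anything already kept.
    candidates: list of (chunk_id, unit-normalized vector), best-first."""
    kept = []
    for cid, vec in candidates:
        if all(float(vec @ kvec) < threshold for _, kvec in kept):
            kept.append((cid, vec))
    return [cid for cid, _ in kept]
```

Running this on the raw ANN candidate list (before truncating to top-k) preserves diversity without changing the index itself.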
**Is poisoning really a real-world issue?**
Yes. Even a single malicious document can bias retrieval. The linked page shows how to detect and quarantine such vectors without retraining the whole pipeline.
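Detection can start with cheap geometry before any retraining: flag vectors that sit unusually far from the corpus centroid and route them to manual review. A hypothetical sketch (a centroid z-score is only one of several possible signals, and the threshold is an assumption):

```python
import numpy as np

def quarantine_outliers(ids, vecs, z_thresh=3.0):
    """Flag vectors whose distance to the corpus centroid is a
    z-score outlier; review them instead of trusting retrieval."""
    X = np.asarray(vecs, dtype=float)
    centroid = X.mean(axis=0)
    dist = np.linalg.norm(X - centroid, axis=1)
    z = (dist - dist.mean()) / (dist.std() + 1e-12)
    return [i for i, zi in zip(ids, z) if zi > z_thresh]
```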
## 60-Second Fix Checklist

1. **Lock metrics and analyzers.**
   One embedding model per field. One distance metric. Same analyzer for read and write.
2. **Enforce snippet contracts.**
   Require `{snippet_id, section_id, source_url, offsets, tokens}`.
   → See data-contracts
3. **Tune hybrid retrievers.**
   Keep separate candidate lists from BM25 and ANN. Detect query splits.
   → See rerankers
4. **Cold-start fences.**
   Block traffic until the index hash and embedding version match.
   → See bootstrap-ordering
5. **Observability.**
   Log ΔS and λ. Alert if ΔS ≥ 0.60.
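The snippet-contract step can be enforced with a small validator at ingestion time. A minimal sketch, assuming the field set listed in the checklist (the field names come from the checklist; the helper itself is hypothetical):

```python
REQUIRED_FIELDS = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def validate_snippet(snippet: dict) -> list:
    """Return a list of contract violations; an empty list means it passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - snippet.keys())]
    offsets = snippet.get("offsets")
    if isinstance(offsets, (list, tuple)) and len(offsets) == 2 \
            and offsets[0] >= offsets[1]:
        problems.append("offsets must satisfy start < end")
    return problems
```

Rejecting snippets at write time is cheaper than debugging unverifiable citations at read time.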
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 **Early Stargazers**: see the Hall of Fame — engineers, hackers, and open-source builders who supported WFGY from day one.

⭐ WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.