# RAG + VectorDB — Global Fix Map
## 🏥 Quick Return to Emergency Room

You are in a specialist desk. For full triage and the doctors on duty, return here:

- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a sub-room. For a full consultation and prescriptions, go back to the Emergency Room lobby.
This hub covers typical retrieval bugs caused by vector databases and embeddings.
Use this page if your RAG pipeline looks fine but answers keep drifting, citations don’t match, or hybrid retrievers underperform.
Every page here is a guardrail with copy-paste recipes and acceptance targets.
## Orientation: what each page means
| Fix Page | What it solves | Typical symptom |
|---|---|---|
| metric_mismatch.md | Distance metric mismatch (cosine vs L2 vs dot) | High similarity numbers but wrong meaning |
| normalization_and_scaling.md | Missing normalization or scaling issues | Embeddings with larger norms dominate |
| tokenization_and_casing.md | Tokenizer or casing drift | Same text embeds differently across runs |
| chunking_to_embedding_contract.md | Chunking not aligned with embedding model | Citations cut mid-sentence or incoherent snippets |
| vectorstore_fragmentation.md | Over-fragmented stores | Retrieval pulls incomplete, scattered sections |
| dimension_mismatch_and_projection.md | Embedding and index dimension mismatch | Runtime errors or silent drop of vectors |
| update_and_index_skew.md | Index not refreshed after updates | Old sections keep showing up |
| hybrid_retriever_weights.md | Hybrid weighting not tuned | BM25+ANN underperforms single retriever |
| duplication_and_near_duplicate_collapse.md | Redundant entries collapse signal | Top-k filled with near-identical chunks |
| poisoning_and_contamination.md | Malicious or noisy vectors | Hallucinations, unsafe content retrieval |
## When to use this folder
- Your answers look semantically wrong even though top-k similarity looks high.
- Citations point to the wrong section or cannot be verified.
- Hybrid retrieval underperforms vs single retriever.
- Index seems “healthy” but recall/coverage stays low.
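A frequent cause of the last two symptoms is naive score fusion: BM25 and ANN scores live on different scales, so one retriever silently dominates. Below is a minimal score-fusion sketch; the function names and the min-max blending scheme are illustrative assumptions, not APIs from this repo.

```python
def fuse(bm25_hits, ann_hits, alpha=0.5):
    """Blend two retrievers' candidate lists.

    bm25_hits / ann_hits: dicts mapping doc id -> raw score.
    Each score set is min-max normalized before blending, so one
    retriever's scale cannot silently dominate the other.
    """
    def norm(hits):
        if not hits:
            return {}
        lo, hi = min(hits.values()), max(hits.values())
        span = (hi - lo) or 1.0  # avoid division by zero on flat scores
        return {doc: (s - lo) / span for doc, s in hits.items()}

    b, a = norm(bm25_hits), norm(ann_hits)
    fused = {doc: alpha * b.get(doc, 0.0) + (1 - alpha) * a.get(doc, 0.0)
             for doc in set(b) | set(a)}
    return sorted(fused, key=fused.get, reverse=True)
```

Sweeping `alpha` on a held-out query set is usually enough to see whether the hybrid actually beats either retriever alone.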
## Core acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage of target section ≥ 0.70
- λ_observe convergent across 3 paraphrases
- E_resonance flat on long windows
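The first two targets can be wired into a smoke test. A minimal sketch, assuming ΔS is read as 1 − cosine similarity between the question embedding and the retrieved-context embedding; the exact WFGY definition may differ, so treat this as an approximation.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def delta_s(q_vec, r_vec):
    # ΔS read as semantic distance: 1 - cosine similarity (assumption).
    return 1.0 - cosine(q_vec, r_vec)

def passes_targets(q_vec, r_vec, coverage):
    # Gate on the first two acceptance targets listed above.
    return delta_s(q_vec, r_vec) <= 0.45 and coverage >= 0.70
```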
## FAQ for newcomers
**Why do we need these fixes if VectorDBs are mature?**
Because RAG pipelines often break not at the infra level but at the semantic boundary. Even if FAISS, Milvus, or Pinecone run fine, the contracts between embedding, chunking, and retrieval are fragile.
**What is metric mismatch and why is it deadly?**
If your index uses L2 but embeddings were trained for cosine, the “closest” neighbors are meaningless. This is the single most common RAG failure.
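The failure is easy to reproduce with toy vectors. In this sketch (hypothetical vectors, pure Python) L2 and cosine disagree about which document is "closest" once norms vary:

```python
import math

def l2(a, b):
    # Euclidean distance: smaller means closer.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cos_sim(a, b):
    # Cosine similarity: larger means closer.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [1.0, 0.0]
docs = {
    "aligned_small": [0.1, 0.0],   # same direction as the query, tiny norm
    "off_axis_close": [0.9, 0.5],  # spatially nearby, different direction
}

nearest_l2 = min(docs, key=lambda k: l2(query, docs[k]))
nearest_cos = max(docs, key=lambda k: cos_sim(query, docs[k]))
# The two metrics pick different "nearest" documents for the same query.
```

Normalizing every vector to unit length before indexing makes L2 and cosine rankings agree, which is why normalization is its own fix page.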
**Why do duplicates matter so much?**
If your corpus has many repeated sentences, the retriever fills top-k with clones. The LLM sees no diversity and hallucinates.
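A cheap guard is to collapse near-duplicates while filling top-k. A pure-Python sketch; the 0.97 cosine threshold is an illustrative assumption you should tune on your corpus.

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def dedup_topk(candidates, k, threshold=0.97):
    """Greedy near-duplicate collapse.

    candidates: (id, vector) pairs sorted best-first.
    Walks the ranked list and drops any vector whose cosine similarity
    to an already-kept vector exceeds the threshold, so the final top-k
    keeps diversity instead of clones.
    """
    kept = []
    for cid, vec in candidates:
        if all(cos_sim(vec, kept_vec) < threshold for _, kept_vec in kept):
            kept.append((cid, vec))
        if len(kept) == k:
            break
    return [cid for cid, _ in kept]
```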
**Is poisoning really a real-world issue?**
Yes. Even a single malicious document can bias retrieval. This page shows how to detect and quarantine such documents without retraining the whole pipeline.
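One simple detection heuristic is distance-from-centroid outlier scoring: it will not catch subtle, well-crafted poisoning, but it flags gross injections whose embeddings sit far from the rest of the corpus. A sketch; the z-score cutoff of 3.0 is an assumption, not a value from this repo.

```python
import math

def centroid(vecs):
    # Component-wise mean of a list of equal-length vectors.
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def quarantine_outliers(vecs, z_cut=3.0):
    """Return indices of vectors unusually far from the corpus centroid.

    Flagged vectors are candidates for manual review, not automatic
    deletion: legitimate but rare topics can also land here.
    """
    c = centroid(vecs)
    dists = [l2(v, c) for v in vecs]
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    std = math.sqrt(var) or 1e-12  # guard against a zero-variance corpus
    return [i for i, d in enumerate(dists) if (d - mean) / std > z_cut]
```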
## 60-Second Fix Checklist

- **Lock metrics and analyzers.** One embedding model per field. One distance metric. Same analyzer for read and write.
- **Enforce snippet contracts.** Require `{snippet_id, section_id, source_url, offsets, tokens}`. → See data-contracts.
- **Tune hybrid retrievers.** Keep separate candidate lists from BM25 and ANN. Detect query splits. → See rerankers.
- **Cold-start fences.** Block traffic until the index hash and embedding version match. → See bootstrap-ordering.
- **Observability.** Log ΔS and λ. Alert if ΔS ≥ 0.60.
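The snippet contract can be enforced mechanically at ingestion time. A minimal validator sketch: the required field set comes from the checklist above, while the `[start, end)` offsets convention is an assumption for illustration.

```python
REQUIRED_FIELDS = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def validate_snippet(snippet: dict) -> list:
    """Return a list of contract violations; an empty list means the
    snippet satisfies the minimal contract."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - snippet.keys())]
    offsets = snippet.get("offsets")
    if isinstance(offsets, (list, tuple)) and len(offsets) == 2:
        start, end = offsets
        if not (isinstance(start, int) and isinstance(end, int) and 0 <= start < end):
            problems.append("offsets must be a valid [start, end) pair")
    elif "offsets" in snippet:
        problems.append("offsets must be a two-element [start, end) pair")
    return problems
```

Rejecting snippets at write time keeps unverifiable citations out of the index instead of debugging them at query time.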
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Layer | Page | What it’s for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16 problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma’s Clinic | Plain language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text to image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.