# VectorStore Metrics & FAISS Pitfalls

*Diagnosing silent retrieval drift and restoring semantic precision with WFGY*

## 1 Problem Statement
Modern RAG stacks rely on fast ANN engines (FAISS, Qdrant, Chroma, Elastic knn) plus cosine or L2 distance.
Those defaults maximise geometric proximity, not semantic correctness.
The result is a class of failures that pass conventional benchmarks yet inject logically irrelevant context into the LLM prompt.
| Symptom | Observable Signal |
|---|---|
| Answers quote “nearby” text that does not answer the query | Cosine ≥ 0.90 but human relevance ≤ 0.3 |
| Re-embedding improves results for hours, then regresses | Index drift, feature-space skew |
| Raising k from 5 → 20 changes answers dramatically | Retrieval instability vs. ground truth |
| Offline eval MRR@k > 0.75 but live QA accuracy < 0.4 | Phantom-precision effect |
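The k-sensitivity symptom in the table above can be quantified directly. A minimal sketch (the function name `retrieval_jaccard` is ours, not part of any engine): compute the Jaccard overlap between top-k id lists for two paraphrases of the same query; a sharp drop when k grows from 5 to 20 flags retrieval instability.

```python
def retrieval_jaccard(ids_a, ids_b, k):
    """Jaccard overlap of two top-k retrieval id lists.

    1.0 means identical result sets; values near 0 indicate the
    'raising k changes answers dramatically' symptom.
    """
    a, b = set(ids_a[:k]), set(ids_b[:k])
    return len(a & b) / len(a | b)
```

Running the same check at k = 5 and k = 20 and comparing the two overlaps separates stable rankings from ones where the extra neighbours reshuffle the answer.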
## 2 Failure Mechanisms
| # | Root Cause | Technical Detail | Impact |
|---|---|---|---|
| 2.1 | Metric Blindness | Cosine measures angle on the unit hypersphere, not sentence logic: a 10-token “Every year x%” ≈ a 100-token “Each fiscal cycle …” | Irrelevant but lexically similar chunks dominate top-k |
| 2.2 | Domain Mixing | One embedding model for code, policy docs, memes → vector clusters overlap | Cross-domain leakage |
| 2.3 | Chunk Boundary Drift | Mid-sentence splits, tables stored as separate rows | High similarity, zero answerability |
| 2.4 | Precision-Recall Mirage | Retrieval metrics computed on synthetic positives; real queries vary | Offline > Online gap |
Mathematically, cosine similarity S_cos is necessary but not sufficient for semantic integrity S_sem:

S_sem = S_cos · κ(text-logic) · κ(domain) · κ(boundary)

When any κ ≈ 0, S_sem collapses even if S_cos ≈ 1.
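Assuming each κ is a scalar in [0, 1], the formula can be sketched as a plain product, which makes the collapse behaviour easy to verify:

```python
def semantic_integrity(s_cos, k_logic, k_domain, k_boundary):
    """S_sem = S_cos * κ(text-logic) * κ(domain) * κ(boundary).

    Each κ is assumed to lie in [0, 1]; a single κ near zero
    collapses S_sem even when S_cos is close to 1.
    """
    return s_cos * k_logic * k_domain * k_boundary
```

For example, a chunk split mid-sentence (κ(boundary) ≈ 0.02) drags a 0.95 cosine score below 0.02 of semantic integrity.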
## 3 False Remedies

- **“Increase model size (text-embedding-ada-002 → ada-002-v2)”**
  Latent space quality ↑, but κ(boundary) and κ(domain) remain near 0.
- **“Set k = 25 and rerank with the LLM”**
  More vectors at higher cost; garbage-in still dominates the rerank.
- **“Fine-tune the retriever on 1 000 Q → A pairs”**
  Works until a new domain arrives; does not address metric blindness.
## 4 WFGY Correction Pipeline
| Stage | Module | Function |
|---|---|---|
| 4.1 Pre-index | BBMC (Semantic Residue Minimisation) | Detects chunk boundaries by ΔS spike; merges or re-splits to minimise residue. |
| 4.2 Index time | BBPF (Multi-Path Progression) | Stores dual embeddings: lexical + logic-topology; attaches λ_observe signature. |
| 4.3 Query time | BBAM (Attention Modulation) | Penalises vectors with divergent λ relative to query, rescales similarity. |
| 4.4 Post-filter | BBCR (Collapse–Rebirth Correction) | If top-k still yields ΔS > 0.6, calls bridge-node routine or asks user for anchor. |
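A minimal sketch of the BBMC pre-index stage (4.1), using 1 − cosine as a stand-in for ΔS and a greedy merge over adjacent chunks. The threshold, helper names, and the choice to keep the last chunk's embedding as a merge proxy are illustrative assumptions, not the actual WFGY implementation:

```python
import numpy as np

def delta_s(u, v):
    """Stand-in semantic stress: 1 - cosine similarity of two embeddings."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def merge_low_stress_chunks(chunks, embeds, threshold=0.25):
    """Greedy BBMC-style pass: adjacent chunks whose boundary ΔS
    stays below `threshold` are merged into one chunk, so a sentence
    split across two chunks is rejoined before indexing."""
    merged, current = [], chunks[0]
    for i in range(1, len(chunks)):
        if delta_s(embeds[i - 1], embeds[i]) < threshold:
            current = current + " " + chunks[i]   # low stress → same unit
        else:
            merged.append(current)                # ΔS spike → real boundary
            current = chunks[i]
    merged.append(current)
    return merged
```

A production version would re-embed after each merge; the sketch keeps the last chunk's embedding as a cheap proxy.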
### 4.5 Algorithmic Guardrail

```python
if ΔS(query, ctx_top1) > 0.60:
    # semantic stress too high → potential metric failure
    ctx_bridge = search_bridge_nodes(query, max_depth=2)
    if ctx_bridge:
        re_rank([ctx_bridge] + ctx_topk)
    else:
        raise LogicBoundaryAlert
```
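The guardrail above can be made executable with a toy stand-in. Here `guardrail` takes a precomputed ΔS value and a list of candidate bridge nodes; it is only a sketch of the control flow, not the WFGY routine:

```python
class LogicBoundaryAlert(Exception):
    """Raised when ΔS is too high and no bridge node can help."""

def guardrail(ds_top1, bridge_candidates, threshold=0.60):
    """Returns the contexts to hand to the reranker:
    empty list  -> stress acceptable, no intervention needed;
    [bridge]    -> best bridge node prepended for reranking;
    raises      -> no bridge found, escalate to the user."""
    if ds_top1 <= threshold:
        return []
    if bridge_candidates:
        return bridge_candidates[:1]
    raise LogicBoundaryAlert("ΔS > 0.60 and no bridge node found")
```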
## 5 Validation Protocol (o3 model recommended)
| Test | Expected |
|---|---|
| ΔS(q, ctx1) ≤ 0.45 | Stable retrieval |
| λ_observe remains convergent across paraphrase × 3 | No metric-induced drift |
| Answer Embedding Variance over 5 seeds < 0.12 | Deterministic chain stability |
| Human Relevance (n=50) ≥ 0.8 | Real-world semantic pass |
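The variance row of the protocol can be checked with a few lines of NumPy; `answer_embedding_variance` is an assumed name for averaging per-dimension variance across the five seeded answers:

```python
import numpy as np

def answer_embedding_variance(embeddings):
    """Mean per-dimension variance across answer embeddings from
    different seeds; the protocol expects < 0.12 over 5 seeds
    for a deterministically stable chain."""
    E = np.asarray(embeddings, float)
    return float(E.var(axis=0).mean())
```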
## 6 FAQ

**Q1: Can I keep cosine but fix chunking?**
A: Yes; κ(boundary) is often the biggest lever. WFGY's BBMC handles this automatically.

**Q2: Does hybrid (BM25 + vectors) solve it?**
A: It helps recall, not precision; semantic filters are still needed.

**Q3: Which vector DB works best with WFGY?**
A: Any ANN engine that supports custom pre/post hooks. FAISS, Qdrant, and Milvus are tested at ≥ 10 M vectors.
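The pre/post hook pattern from Q3 can be sketched with an exact-search stand-in for the ANN engine; the `HookedIndex` class and its hook signature are illustrative, not a FAISS or Qdrant API:

```python
import numpy as np

class HookedIndex:
    """Toy exact inner-product index with a post-search hook slot;
    `_search` is where a real FAISS/Qdrant/Milvus call would go."""

    def __init__(self, vectors, post_hook=None):
        self.vectors = np.asarray(vectors, float)
        self.post_hook = post_hook        # e.g. a ΔS-based filter

    def _search(self, query, k):
        scores = self.vectors @ np.asarray(query, float)
        top = np.argsort(-scores)[:k]
        return [int(i) for i in top], [float(scores[i]) for i in top]

    def search(self, query, k):
        ids, scores = self._search(query, k)
        if self.post_hook is not None:    # semantic post-filter hook
            ids, scores = self.post_hook(ids, scores)
        return ids, scores
```

Plugging a semantic filter into `post_hook` is the extension point the FAQ assumes any compatible engine exposes.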
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped you, starring it improves discovery so more builders can find the docs and tools.