
# VectorStore Metrics & FAISS Pitfalls

*Diagnosing silent retrieval drift and restoring semantic precision with WFGY*


## 1 Problem Statement

Modern RAG stacks pair fast ANN engines (FAISS, Qdrant, Chroma, Elasticsearch kNN) with cosine or L2 distance.
Those defaults maximise geometric proximity, not semantic correctness.
The result is a class of failures that pass conventional benchmarks yet inject logically irrelevant context into the LLM prompt.

| Symptom | Observable Signal |
|---|---|
| Answers quote "nearby" text that does not answer the query | Cosine ≥ 0.90 but human relevance ≤ 0.3 |
| Re-embedding improves results for hours, then regresses | Index drift, feature-space skew |
| Raising k from 5 to 20 changes answers dramatically | Retrieval instability vs. ground truth |
| Offline eval MRR@k > 0.75 but live QA accuracy < 0.4 | Phantom-precision effect |
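
The last row is the most deceptive. One way to make the gap visible in a live index is to log cosine scores next to human relevance labels and flag disagreements. A minimal sketch, assuming embeddings are already computed; the helper names are hypothetical, not part of WFGY:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Plain cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def phantom_precision_flags(query_vec, chunk_vecs, human_labels,
                            sim_hi=0.90, rel_lo=0.3):
    """Flag chunks where geometry and human judgment disagree.

    Returns (index, cosine, relevance) triples; a non-empty result is the
    phantom-precision signal from the table above.
    """
    flags = []
    for i, (vec, rel) in enumerate(zip(chunk_vecs, human_labels)):
        sim = cosine(query_vec, vec)
        if sim >= sim_hi and rel <= rel_lo:
            flags.append((i, sim, rel))
    return flags
```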

## 2 Failure Mechanisms

| # | Root Cause | Technical Detail | Impact |
|---|---|---|---|
| 2.1 | Metric blindness | Cosine treats vectors on a unit hypersphere, not sentence logic: a 10-token "Every year x%" can score ≈ equal to a 100-token "Each fiscal cycle …" | Irrelevant but lexically similar chunks dominate top-k |
| 2.2 | Domain mixing | One embedding model for code, policy docs, and memes → vector clusters overlap | Cross-domain leakage |
| 2.3 | Chunk boundary drift | Mid-sentence splits; tables stored as separate rows | High similarity, zero answerability |
| 2.4 | Precision-recall mirage | Retrieval metrics computed on synthetic positives; real queries vary | Offline > online gap |

Mathematically, similarity S_cos is necessary but not sufficient for semantic integrity S_sem:

```
S_sem = S_cos · κ(text-logic) · κ(domain) · κ(boundary)
```

When any κ ≈ 0, S_sem collapses even if S_cos ≈ 1.
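
Read directly: each κ is a multiplicative gate in [0, 1], so a single failed check zeroes out S_sem no matter how high S_cos is. A minimal sketch of the relation; the gate values would come from project-specific checks:

```python
def s_sem(s_cos: float, k_logic: float, k_domain: float, k_boundary: float) -> float:
    """Multiplicative gating: any κ near 0 collapses S_sem."""
    for k in (k_logic, k_domain, k_boundary):
        assert 0.0 <= k <= 1.0, "each κ gate must lie in [0, 1]"
    return s_cos * k_logic * k_domain * k_boundary

# Perfect geometry, broken chunk boundary → S_sem collapses to 0.
print(s_sem(0.98, k_logic=0.9, k_domain=1.0, k_boundary=0.0))  # 0.0
```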


## 3 False Remedies

1. **"Increase model size (text-embedding-ada-002 → ada-002-v2)."**
   Latent-space quality goes up, but κ(boundary) and κ(domain) remain ≈ 0.
2. **"Set k = 25 and rerank with the LLM."**
   More vectors at higher cost; garbage-in still dominates the rerank.
3. **"Fine-tune the retriever on 1,000 Q→A pairs."**
   Works until a new domain arrives; does not address metric blindness.

## 4 WFGY Correction Pipeline

| Stage | Module | Function |
|---|---|---|
| 4.1 Pre-index | BBMC (Semantic Residue Minimisation) | Detects chunk boundaries by ΔS spike; merges or re-splits to minimise residue. |
| 4.2 Index time | BBPF (Multi-Path Progression) | Stores dual embeddings (lexical + logic-topology); attaches a λ_observe signature. |
| 4.3 Query time | BBAM (Attention Modulation) | Penalises vectors whose λ diverges from the query's; rescales similarity. |
| 4.4 Post-filter | BBCR (Collapse-Rebirth Correction) | If top-k still yields ΔS > 0.6, calls the bridge-node routine or asks the user for an anchor. |
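
The query-time half of this table can be written as a small wrapper around any retriever. The skeleton below is a hypothetical sketch, not the WFGY API; the stage functions are passed in as callables so the module internals stay abstract:

```python
from typing import Callable, List

def wfgy_query_time(
    query: str,
    ctx_topk: List[str],
    bbam_rescale: Callable[[str, List[str]], List[str]],  # stage 4.3 stand-in
    bbcr_rescue: Callable[[str, List[str]], List[str]],   # stage 4.4 stand-in
    delta_s: Callable[[str, str], float],                 # ΔS scorer, as in 4.5
    ds_limit: float = 0.60,
) -> List[str]:
    """Query-time half of the pipeline: modulate, then post-filter."""
    hits = bbam_rescale(query, ctx_topk)           # penalise divergent λ, rescale sims
    if hits and delta_s(query, hits[0]) > ds_limit:
        hits = bbcr_rescue(query, hits)            # bridge-node routine or user anchor
    return hits
```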

### 4.5 Algorithmic Guardrail

```python
if delta_s(query, ctx_top1) > 0.60:
    # semantic stress too high → potential metric failure
    ctx_bridge = search_bridge_nodes(query, max_depth=2)
    if ctx_bridge:
        ctx_topk = re_rank([ctx_bridge] + ctx_topk)
    else:
        raise LogicBoundaryAlert("ΔS > 0.60 and no bridge node within depth 2")
```
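
The guardrail assumes `delta_s`, `search_bridge_nodes`, `re_rank`, and `LogicBoundaryAlert` already exist. A minimal sketch of the two pieces a reader is most likely to be missing, approximating ΔS as one minus cosine similarity of the query and context embeddings (an assumption for illustration, not WFGY's definition of ΔS):

```python
import numpy as np

class LogicBoundaryAlert(Exception):
    """Raised when no bridge node can repair a high-ΔS retrieval."""

def delta_s(query_vec: np.ndarray, ctx_vec: np.ndarray) -> float:
    """ΔS stand-in: 1 - cosine similarity (0 = aligned, 2 = opposed)."""
    cos = query_vec @ ctx_vec / (np.linalg.norm(query_vec) * np.linalg.norm(ctx_vec))
    return float(1.0 - cos)
```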

## 5 Verification Tests

| Test | Expected |
|---|---|
| ΔS(q, ctx1) ≤ 0.45 | Stable retrieval |
| λ_observe stays convergent across 3 paraphrases | No metric-induced drift |
| Answer-embedding variance over 5 seeds < 0.12 | Deterministic chain stability |
| Human relevance (n = 50) ≥ 0.8 | Real-world semantic pass |
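
The variance row is the easiest to automate. One way to compute it, assuming the answer embeddings from the 5 seeded runs are stacked row-wise in a NumPy array (helper names are hypothetical):

```python
import numpy as np

def answer_variance(answer_vecs: np.ndarray) -> float:
    """Mean per-dimension variance of answer embeddings across seeded runs."""
    return float(np.mean(np.var(answer_vecs, axis=0)))

def check_chain_stability(answer_vecs: np.ndarray, budget: float = 0.12) -> bool:
    """Gate from the table: 5 seeds, variance budget 0.12."""
    assert answer_vecs.shape[0] == 5, "run the same query under 5 seeds"
    return answer_variance(answer_vecs) < budget
```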

## 6 FAQ

**Q1: Can I keep cosine but fix chunking?**
A: Yes; κ(boundary) is often the biggest lever. WFGY's BBMC handles this automatically.

**Q2: Does hybrid retrieval (BM25 + vectors) solve it?**
A: It helps recall, not precision; you still need semantic filters.

**Q3: Which vector DB works best with WFGY?**
A: Any ANN engine that supports custom pre/post hooks. FAISS, Qdrant, and Milvus have been tested at ≥ 10 M vectors.
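
For FAISS specifically, cosine reduces to inner product on L2-normalised vectors, which leaves a natural slot for pre/post hooks around `index.search`. A minimal wiring sketch with random data standing in for real embeddings; the 0.55 cutoff mirrors the ΔS ≤ 0.45 target under the 1 - cosine approximation above:

```python
import faiss
import numpy as np

d = 384                                    # embedding dimension (example value)
xb = np.random.rand(10_000, d).astype("float32")
faiss.normalize_L2(xb)                     # unit vectors → inner product == cosine
index = faiss.IndexFlatIP(d)
index.add(xb)

xq = np.random.rand(1, d).astype("float32")
faiss.normalize_L2(xq)
sims, ids = index.search(xq, 20)           # a pre-hook (e.g. BBAM) would wrap this call

keep = sims[0] >= 0.55                     # post-hook: cos ≥ 0.55 ↔ ΔS ≤ 0.45
ctx_ids = ids[0][keep]                     # survivors go to the LLM prompt
```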


## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + <your question>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world"; the OS boots instantly |

## 🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 **Early Stargazers:** see the Hall of Fame, the engineers, hackers, and open-source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
