mirror of
https://github.com/onestardao/WFGY.git
synced 2026-04-28 19:50:17 +00:00
# VectorStore Metrics & FAISS Pitfalls

_Diagnosing silent retrieval drift and restoring semantic precision with WFGY_
---
## 1 Problem Statement

Modern RAG stacks rely on fast ANN engines (FAISS, Qdrant, Chroma, Elasticsearch kNN) plus cosine or L2 distance. Those defaults maximise **geometric proximity**, not **semantic correctness**. The result is a class of failures that pass conventional benchmarks yet inject logically irrelevant context into the LLM prompt.

| Symptom | Observable Signal |
| ------- | ----------------- |
| Answers quote “nearby” text that does not answer the query | Cosine ≥ 0.90 but human relevance ≤ 0.3 |
| Re-embedding improves results for hours, then regresses | Index drift, feature-space skew |
| Raising *k* from 5 → 20 changes answers dramatically | Retrieval instability vs. ground truth |
| Offline eval `MRR@k` > 0.75 but live QA accuracy < 0.4 | Phantom-precision effect |
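Note that switching between the two default metrics rarely changes these symptoms: on normalized embeddings, L2 distance and cosine similarity produce identical rankings, since ‖q − d‖² = 2 − 2·cos(q, d) for unit vectors. A minimal plain-Python sketch of that identity (no retrieval stack assumed):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def l2(a, b):
    # Euclidean (L2) distance
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

q, d = normalize([3.0, 4.0]), normalize([4.0, 3.0])
# For unit vectors: ||q - d||^2 == 2 - 2 * cos(q, d),
# so ranking by either metric selects the same neighbours
assert abs(l2(q, d) ** 2 - (2 - 2 * cosine(q, d))) < 1e-9
```

This is why the fixes below target what is embedded and how it is gated, not which distance function the index uses.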
---
## 2 Failure Mechanisms

| # | Root Cause | Technical Detail | Impact |
|---|------------|------------------|--------|
| 2.1 | **Metric Blindness** | Cosine treats vectors on a unit hypersphere, not sentence logic. 10-token “Every year x%” ≈ 100-token “Each fiscal cycle …” | Irrelevant but lexically similar chunks dominate top-k |
| 2.2 | **Domain Mixing** | One embedding model for code, policy docs, memes → vector clusters overlap | Cross-domain leakage |
| 2.3 | **Chunk Boundary Drift** | Mid-sentence splits, tables stored as separate rows | High similarity, zero answerability |
| 2.4 | **Precision-Recall Mirage** | Retrieval metrics computed on synthetic positives; real queries vary | Offline > online gap |
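Metric blindness (2.1) can be reproduced with a toy bag-of-words similarity. Real embedding models are far denser, but the lexical bias is the same; the sentences and scores below are illustrative only:

```python
import math
from collections import Counter

def bow_cosine(s1, s2):
    # Toy bag-of-words "embedding": counts words and takes cosine.
    # Purely illustrative of lexical bias, not a real embedding model.
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2)

query         = "what fee applies every year"
lexical_match = "a fee applies every year to dormant accounts"   # near the query, no answer
true_answer   = "dormant accounts are charged 25 USD annually"   # the actual answer

# The lexically similar non-answer outranks the true answer
assert bow_cosine(query, lexical_match) > bow_cosine(query, true_answer)
```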
Mathematically, similarity `S_cos` is necessary but not sufficient for **semantic integrity** `S_sem`:

```
S_sem = S_cos · κ(text-logic) · κ(domain) · κ(boundary)
```

When any κ ≈ 0, `S_sem` collapses even if `S_cos` ≈ 1.
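The collapse can be made concrete by modelling each κ as a hypothetical gate in [0, 1] (the gate values below are placeholders, not WFGY internals):

```python
def s_sem(s_cos, k_text_logic, k_domain, k_boundary):
    # Each κ gate is in [0, 1]; any gate near zero collapses the composite score
    return s_cos * k_text_logic * k_domain * k_boundary

# High cosine, but the chunk was split mid-sentence: κ(boundary) = 0
assert s_sem(0.97, 0.9, 0.9, 0.0) == 0.0

# Same cosine with all gates healthy survives
assert s_sem(0.97, 0.9, 0.9, 0.95) > 0.7
```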
---
## 3 False Remedies

1. **“Increase model size (text-embedding-ada-002 → ada-002-v2)”**
   - Latent-space quality ↑, but κ(boundary) and κ(domain) remain 0.

2. **“Set k = 25 and rerank with the LLM”**
   - More vectors, higher cost; garbage-in still dominates the rerank.

3. **“Fine-tune the retriever on 1 000 Q → A pairs”**
   - Works until a new domain arrives; does not address metric blindness.

---
## 4 WFGY Correction Pipeline

| Stage | Module | Function |
|-------|--------|----------|
| 4.1 Pre-index | **BBMC** (Semantic Residue Minimisation) | Detects chunk boundaries by ΔS spike; merges or re-splits to minimise residue. |
| 4.2 Index time | **BBPF** (Multi-Path Progression) | Stores dual embeddings: lexical + logic-topology; attaches λ_observe signature. |
| 4.3 Query time | **BBAM** (Attention Modulation) | Penalises vectors with divergent λ relative to the query; rescales similarity. |
| 4.4 Post-filter | **BBCR** (Collapse–Rebirth Correction) | If top-k still yields ΔS > 0.6, calls the bridge-node routine or asks the user for an anchor. |
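Stage 4.1 can be sketched as a merge pass driven by ΔS spikes between adjacent chunks. In this sketch `delta_s` is an injected toy word-overlap distance, not BBMC's actual residue computation, and the threshold is illustrative:

```python
def merge_low_residue_chunks(chunks, delta_s, threshold=0.45):
    """Sketch of stage 4.1: merge adjacent chunks whose ΔS stays below the spike threshold.

    `delta_s(a, b)` is assumed to return a semantic-stress score in [0, 1];
    here it is an injected callable, not the engine's real implementation.
    """
    merged = [chunks[0]]
    for chunk in chunks[1:]:
        if delta_s(merged[-1], chunk) < threshold:
            merged[-1] = merged[-1] + " " + chunk   # low residue: same logical unit
        else:
            merged.append(chunk)                    # ΔS spike: treat as a real boundary
    return merged

def toy_delta_s(a, b):
    # Toy ΔS: Jaccard word-overlap distance, purely for demonstration
    wa, wb = set(a.split()), set(b.split())
    return 1.0 - len(wa & wb) / len(wa | wb)

chunks = ["the annual fee is", "the annual fee is 25 USD", "shipping takes two days"]
merged = merge_low_residue_chunks(chunks, toy_delta_s)
# The two fee fragments merge; the unrelated shipping chunk stays separate
```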
### 4.5 Algorithmic Guardrail

```python
if ΔS(query, ctx_top1) > 0.60:
    # semantic stress too high → potential metric failure
    ctx_bridge = search_bridge_nodes(query, max_depth=2)
    if ctx_bridge:
        re_rank([ctx_bridge] + ctx_topk)
    else:
        raise LogicBoundaryAlert
```
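The guardrail above is pseudocode. A minimal runnable sketch, assuming ΔS ≈ 1 − cosine similarity (a stand-in, not WFGY's official definition) and with hypothetical helpers `guardrail` and `bridge_fn`:

```python
import math

class LogicBoundaryAlert(Exception):
    """Raised when no context passes the semantic-stress threshold."""

def delta_s(a, b):
    # Stand-in for ΔS: 1 - cosine similarity (an assumption, not the official definition)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def guardrail(query_vec, ctx_vecs, bridge_fn, threshold=0.60):
    """Rank contexts by ΔS; escalate to bridge search when top-1 stress is too high."""
    ranked = sorted(ctx_vecs, key=lambda c: delta_s(query_vec, c))
    if delta_s(query_vec, ranked[0]) > threshold:
        bridge = bridge_fn(query_vec)            # hypothetical bridge-node search
        if bridge is None:
            raise LogicBoundaryAlert("semantic stress too high; ask the user for an anchor")
        ranked = sorted([bridge] + ranked, key=lambda c: delta_s(query_vec, c))
    return ranked

# Toy vectors: one context aligned with the query, one orthogonal
query = [1.0, 0.0]
contexts = [[0.0, 1.0], [0.9, 0.1]]
top = guardrail(query, contexts, bridge_fn=lambda q: None)[0]   # the aligned context wins
```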
---
## 5 Validation Protocol (o3 model recommended)

| Test | Expected |
| ------------------------------------------------------- | ----------------------------- |
| **ΔS(q, ctx1)** ≤ 0.45 | Stable retrieval |
| **λ_observe** remains convergent across paraphrase × 3 | No metric-induced drift |
| **Answer Embedding Variance** over 5 seeds < 0.12 | Deterministic chain stability |
| **Human Relevance** (n = 50) ≥ 0.8 | Real-world semantic pass |
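The variance row can be read as the mean per-dimension variance across the five answer embeddings. One plausible estimator, since the protocol does not pin one down:

```python
def answer_embedding_variance(embeddings):
    """Mean per-dimension variance across runs; low values indicate a deterministic chain."""
    n, dim = len(embeddings), len(embeddings[0])
    means = [sum(e[d] for e in embeddings) / n for d in range(dim)]
    var_per_dim = [sum((e[d] - means[d]) ** 2 for e in embeddings) / n for d in range(dim)]
    return sum(var_per_dim) / dim

# Five seeds producing nearly identical answer embeddings pass the < 0.12 gate
runs = [[0.50, 0.10], [0.51, 0.11], [0.49, 0.10], [0.50, 0.09], [0.50, 0.10]]
assert answer_embedding_variance(runs) < 0.12
```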
---
## 6 FAQ

**Q 1:** *Can I keep cosine but fix chunking?*

A: Yes; κ(boundary) is often the biggest lever. WFGY’s BBMC handles this automatically.

**Q 2:** *Does hybrid (BM25 + vectors) solve it?*

A: It helps recall, not precision; semantic filters are still needed.

**Q 3:** *Which vector DB works best with WFGY?*

A: Any ANN engine that supports custom pre/post hooks. FAISS, Qdrant, and Milvus have been tested at ≥ 10 M vectors.

---
### 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| **WFGY 1.0 PDF** | [Engine Paper](https://github.com/onestardao/WFGY/blob/main/I_am_not_lizardman/WFGY_All_Principles_Return_to_One_v1.0_PSBigBig_Public.pdf) | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + \<your question>” |
| **TXT OS (plain-text OS)** | [TXTOS.txt](https://github.com/onestardao/WFGY/blob/main/OS/TXTOS.txt) | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |

---
### 🧭 Explore More

| Module | Description | Link |
|-----------------------|----------------------------------------------------------|----------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | [View →](https://github.com/onestardao/WFGY/tree/main/core/README.md) |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | [View →](https://github.com/onestardao/WFGY/tree/main/ProblemMap/README.md) |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | [View →](https://github.com/onestardao/WFGY/blob/main/ProblemMap/rag-architecture-and-recovery.md) |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | [View →](https://github.com/onestardao/WFGY/blob/main/ProblemMap/SemanticClinicIndex.md) |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | [View →](https://github.com/onestardao/WFGY/tree/main/SemanticBlueprint/README.md) |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | [View →](https://github.com/onestardao/WFGY/tree/main/benchmarks/benchmark-vs-gpt5/README.md) |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | [Start →](https://github.com/onestardao/WFGY/blob/main/StarterVillage/README.md) |

---
> 👑 **Early Stargazers: [See the Hall of Fame](https://github.com/onestardao/WFGY/tree/main/stargazers)** —
> Engineers, hackers, and open source builders who supported WFGY from day one.

> <img src="https://img.shields.io/github/stars/onestardao/WFGY?style=social" alt="GitHub stars"> ⭐ [WFGY Engine 2.0](https://github.com/onestardao/WFGY/blob/main/core/README.md) is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the [Unlock Board](https://github.com/onestardao/WFGY/blob/main/STAR_UNLOCKS.md).
<div align="center">

[WFGY](https://github.com/onestardao/WFGY)

[OS](https://github.com/onestardao/WFGY/tree/main/OS)

[BlahBlahBlah](https://github.com/onestardao/WFGY/tree/main/OS/BlahBlahBlah)

[BlotBlotBlot](https://github.com/onestardao/WFGY/tree/main/OS/BlotBlotBlot)

[BlocBlocBloc](https://github.com/onestardao/WFGY/tree/main/OS/BlocBlocBloc)

[BlurBlurBlur](https://github.com/onestardao/WFGY/tree/main/OS/BlurBlurBlur)

[BlowBlowBlow](https://github.com/onestardao/WFGY/tree/main/OS/BlowBlowBlow)

</div>