Normalization and Scaling — Guardrails and Fix Pattern
🧭 Quick Return to Map
You are in a sub-page of RAG_VectorDB.
To reorient, go back here:
- RAG_VectorDB — vector databases for retrieval and grounding
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when vector similarity is unstable because embeddings are not normalized or scaling differs between indexing and retrieval.
This failure often appears when cosine distance is requested but vectors are stored raw, or when inner-product (IP/dot) metrics let vector magnitude dominate the similarity score.
Open these first
- Visual map and recovery: RAG Architecture & Recovery
- Embedding vs meaning: embedding-vs-semantic.md
- Metric mismatch: metric_mismatch.md
- Chunking checklist: chunking-checklist.md
Core acceptance
- Vectors are L2-normalized when using cosine similarity.
- ΔS(question, retrieved) ≤ 0.45, stable across three paraphrases.
- Coverage ≥ 0.70 on the target section.
- λ remains convergent across seeds.
Typical breakpoints and the right fix

- Cosine similarity reported but vectors not normalized
  → metric_mismatch.md
- Dot product used without rescaling, so large-norm vectors dominate retrieval
  → Normalize or rescale embeddings before indexing.
- Cross-model mixing: embeddings from different checkpoints carry different norm distributions
  → Re-normalize the corpus and queries to unit length.
- Hybrid dense + sparse weighting unstable because BM25 scores and vector similarities live on different scales
  → Apply explicit min-max or z-score scaling before the weighted sum.
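The first three fixes reduce to the same operation: project every embedding onto the unit sphere before indexing, so cosine similarity and dot product agree. A minimal NumPy sketch (the function name `l2_normalize` is illustrative, not from any library):

```python
import numpy as np

def l2_normalize(vectors: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Scale each row to unit L2 norm; zero vectors are left at zero."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.maximum(norms, eps)  # eps guards against divide-by-zero
```

After this step, an inner-product index behaves like a cosine index, which is why most stores recommend normalizing at write time rather than at query time only.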
Fix in 60 seconds

- Check norms
  Sample 100 embeddings and compute the mean L2 norm. If it is not ~1.0 under cosine, normalization is missing.
- Normalize queries
  Ensure `query_vector = vector / ||vector||` before retrieval when using cosine.
- Corpus re-index
  Drop and rebuild the index with normalized vectors if the store does not enforce normalization.
- Hybrid scaling
  Normalize dense similarity scores into the same 0–1 range as BM25 before combining.
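The hybrid-scaling step can be sketched as follows. This is a minimal example, assuming you already have one dense score and one BM25 score per candidate; the function names and the `alpha` weight are illustrative:

```python
import numpy as np

def min_max(scores: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Rescale a score array into [0, 1]; a constant array maps to all zeros."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / max(hi - lo, eps)

def hybrid_scores(dense: np.ndarray, bm25: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Weighted sum taken only after both score sets share the same 0-1 range."""
    return alpha * min_max(dense) + (1.0 - alpha) * min_max(bm25)
```

Without the shared range, raw BM25 scores (often 5–30) would swamp cosine scores (bounded by 1), so the dense signal silently drops out of the ranking.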
Copy-paste probe
```python
import numpy as np

def check_norms(vectors):
    """Return mean and std of row-wise L2 norms."""
    norms = np.linalg.norm(vectors, axis=1)
    return norms.mean(), norms.std()

# `sample_vectors` is a 2-D array of embeddings drawn from your own index
mean_norm, std_norm = check_norms(sample_vectors)
print("Mean norm:", mean_norm, "Std:", std_norm)
```
Target: mean ≈ 1.0, std ≤ 0.05 for cosine retrieval.
🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.