
Normalization and Scaling — Guardrails and Fix Pattern

Use this page when vector similarity is unstable because embeddings are not normalized, or because scaling differs between indexing and retrieval.
This failure typically appears when cosine distance is requested but vectors are stored raw, or when an inner-product (IP/dot) metric lets vector magnitude dominate similarity.
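A two-vector demonstration of the failure described above, assuming raw (unnormalized) embeddings indexed under dot product: a large-norm, half-off-topic vector outranks a smaller, better-aligned one, while cosine ranks them correctly.

```python
import numpy as np

query = np.array([1.0, 0.0])
aligned = np.array([0.9, 0.1])   # points the same way as the query, small norm
long_off = np.array([5.0, 5.0])  # half off-topic, but large norm

# Raw dot product: magnitude wins, the off-topic vector ranks first.
print(query @ long_off > query @ aligned)  # → True

# Cosine similarity: direction wins, the aligned vector ranks first.
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(query, aligned) > cos(query, long_off))  # → True
```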


Core acceptance

  • Vectors are L2-normalized when using cosine similarity.
  • ΔS(question, retrieved) ≤ 0.45, stable across three paraphrases.
  • Coverage ≥ 0.70 on the target section.
  • λ remains convergent across seeds.
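A minimal sketch of the ΔS acceptance gate, assuming ΔS is computed as 1 minus cosine similarity between the question embedding and the retrieved-chunk embedding; the canonical WFGY ΔS definition may differ, so treat this as a proxy probe only.

```python
import numpy as np

def delta_s(q, d):
    """ΔS proxy: 1 - cosine similarity (an assumption, not the canonical WFGY formula)."""
    q = q / np.linalg.norm(q)
    d = d / np.linalg.norm(d)
    return 1.0 - float(q @ d)

q = np.array([1.0, 0.0])
good = np.array([0.9, 0.1])  # near-duplicate direction: should pass the 0.45 gate
bad = np.array([0.0, 1.0])   # orthogonal chunk: should fail the gate

print(delta_s(q, good) <= 0.45)  # → True
print(delta_s(q, bad) <= 0.45)   # → False
```

Run the same probe across three paraphrases of the question; the gate passes only if all three stay at or below 0.45.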

Typical breakpoints and the right fix

  • Cosine similarity reported but vectors not normalized
    → Normalize to unit length or switch the index metric. See metric_mismatch.md.

  • Dot product used without rescaling (large norm vectors dominate retrieval)
    → Normalize or rescale embeddings before indexing.

  • Cross-model mixing (embeddings from different checkpoints with different norms)
    → Re-normalize the corpus and queries to unit length.

  • Hybrid dense + sparse weighting unstable (scale mismatch between BM25 scores and vector norms)
    → Apply explicit min-max or z-score scaling before weighted sum.
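The last breakpoint above can be sketched in a few lines: scale BM25 and dense scores into the same range before the weighted sum. This is a minimal min-max version; the mixing weight `alpha` is a hypothetical parameter to be tuned on a validation set.

```python
import numpy as np

def minmax(scores):
    """Scale scores into [0, 1]; a constant score array maps to zeros."""
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return np.zeros_like(s) if rng == 0 else (s - s.min()) / rng

def hybrid(bm25_scores, dense_scores, alpha=0.5):
    # alpha is a hypothetical mixing weight between sparse and dense signals
    return alpha * minmax(bm25_scores) + (1 - alpha) * minmax(dense_scores)

bm25 = [12.3, 4.1, 0.0]    # raw BM25 scores, unbounded scale
dense = [0.82, 0.79, 0.10] # cosine similarities, already in [-1, 1]
print(hybrid(bm25, dense))
```

Z-score scaling is the drop-in alternative when outlier scores make min-max unstable.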


Fix in 60 seconds

  1. Check norms
    Sample 100 embeddings and compute the mean L2 norm. If the mean is not ≈ 1.0 under cosine, normalization is missing.

  2. Normalize queries
    Ensure query_vector = vector / ||vector|| before retrieval when using cosine.

  3. Corpus re-index
    Drop and rebuild index with normalized vectors if store does not enforce it.

  4. Hybrid scaling
    Normalize dense similarity scores into the same [0, 1] range as BM25 before combining.
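Steps 2 and 3 reduce to the same operation: divide each vector by its L2 norm before querying or indexing. A minimal numpy sketch, with an epsilon guard against zero vectors:

```python
import numpy as np

def l2_normalize(vectors, eps=1e-12):
    """Row-wise L2 normalization; eps avoids division by zero on null vectors."""
    v = np.asarray(vectors, dtype=float)
    norms = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.maximum(norms, eps)

corpus = np.array([[3.0, 4.0], [0.0, 2.0]])
unit = l2_normalize(corpus)
print(np.linalg.norm(unit, axis=1))  # → [1. 1.]
```

Apply the same function to query vectors at retrieval time so both sides of the similarity live on the unit sphere.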


Copy-paste probe

import numpy as np

def check_norms(vectors):
    """Return mean and std of row-wise L2 norms for an (n, d) embedding array."""
    norms = np.linalg.norm(vectors, axis=1)
    return norms.mean(), norms.std()

# sample_vectors: any (n, d) array of embeddings drawn from your index
mean_norm, std_norm = check_norms(sample_vectors)
print("Mean norm:", mean_norm, "Std:", std_norm)

Target: mean ≈ 1.0, std ≤ 0.05 for cosine retrieval.
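Why re-indexing with normalized vectors also repairs an inner-product index: dot product on unit vectors is exactly cosine similarity, so IP ranking matches cosine ranking after normalization. A quick numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100, 8))  # synthetic stand-in for stored embeddings
query = rng.normal(size=8)

unit_corpus = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
unit_query = query / np.linalg.norm(query)

ip = unit_corpus @ unit_query  # inner product on unit vectors
cosine = (corpus @ query) / (
    np.linalg.norm(corpus, axis=1) * np.linalg.norm(query)
)
print(np.allclose(ip, cosine))  # → True
```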


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + \<your question\>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" and the OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
