
Metric Mismatch — Guardrails and Fix Pattern

🧭 Quick Return to Map

You are in a sub-page of Embeddings.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Embedding metric mismatch is a structural error where the similarity metric used at retrieval time does not match the assumptions of the embedding model.

Symptoms: top-k neighbors look plausible, but answers cite wrong sections or fail silently after switching distance metrics.

A focused fix for when nearest neighbors look right under cosine in a notebook but your store runs L2 or dot product, or the reverse. Use this page to localize the failure and rebuild the index with the correct metric and normalization rules. No infra change needed.

Open these first

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage of the correct section ≥ 0.70
  • λ remains convergent across 3 paraphrases and 2 seeds
  • E_resonance flat on long windows

Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60. A runnable sketch of steps 1 through 3 follows this list.

  2. Check metric alignment

    • What your embedder assumes: usually cosine similarity on unit-length vectors.
    • What your store actually uses: L2, inner product, or cosine.
    • If they disagree, rebuild with the correct metric or normalize on write and search.
  3. Normalize and lock

    • If using cosine, enforce L2 normalization at both index and query time.
    • Fix the text analyzer and casing before re-embedding to avoid silent drift.
    • Record INDEX_HASH, EMBED_MODEL, METRIC, NORMALIZED=true|false in metadata.
  4. Verify
    Run three paraphrases and two random seeds.
    Require coverage ≥ 0.70 with λ convergent and ΔS ≤ 0.45.
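
A minimal sketch of steps 1 through 3. It assumes ΔS is computed as 1 minus cosine similarity, which matches the bands above; the random vectors are placeholders for your own embedder's output.

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    # Project onto the unit sphere so the inner product equals cosine.
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    # Assumption: ΔS = 1 - cosine similarity, matching the thresholds above.
    return 1.0 - float(np.dot(l2_normalize(a), l2_normalize(b)))

# Placeholders for embed(question) and embed(retrieved_chunk).
rng = np.random.default_rng(0)
q_vec, hit_vec = rng.normal(size=384), rng.normal(size=384)

score = delta_s(q_vec, hit_vec)
band = "stable" if score < 0.40 else ("risk" if score >= 0.60 else "transitional")
print(f"ΔS(question, retrieved) = {score:.3f} -> {band}")
```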


Failure signatures → likely cause → open this

  • Top hits look semantically off while a cosine test in a notebook seems fine
    → Store runs L2 or dot with non-normalized vectors.
    Open: Retrieval Playbook

  • Order of neighbors flips when you switch between dot and cosine on the same vectors
    → Missing unit normalization or mixed write paths. The sketch after this list reproduces the flip.
    Open: Embedding ≠ Semantic

  • Hybrid retrieval underperforms a single retriever
    → Metric split plus fragmented index or mixed analyzers.
    Open: Vectorstore Fragmentation, Rerankers

  • Good recall but citations drift across runs
    → Header reordering flips λ; metric inconsistency amplifies the drift.
    Open: Retrieval Traceability
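
The dot-versus-cosine flip above is easy to reproduce in isolation: dot product rewards vector magnitude while cosine rewards direction, so the two metrics can rank the same candidates differently. A self-contained illustration:

```python
import numpy as np

query = np.array([1.0, 0.0])   # already unit length
a = np.array([3.0, 3.0])       # large norm, 45 degrees off the query
b = np.array([0.9, 0.1])       # small norm, nearly parallel to the query

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Dot product ranks a first (3.0 vs 0.9); cosine ranks b first (0.99 vs 0.71).
print("dot   :", np.dot(query, a), np.dot(query, b))
print("cosine:", cosine(query, a), cosine(query, b))

# After unit normalization on write, dot product reproduces the cosine order.
for name, v in (("a", a), ("b", b)):
    print(f"normalized dot {name}:", float(np.dot(query, v / np.linalg.norm(v))))
```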


Quick reference — store defaults

Versions and hosted plans can change the defaults below. Treat the actual index schema as the source of truth. A FAISS example follows the table.

| Store | Typical default metric | Where to verify |
|---|---|---|
| FAISS | No fixed default; you must choose L2 or IP. Cosine via normalized IP. | Index type name, for example `IndexFlatL2`, `IndexFlatIP`. |
| Chroma | L2 | Collection metadata `hnsw:space` (`l2`, `ip`, or `cosine`). |
| Qdrant | cosine | Collection config field `distance`. |
| Weaviate | cosine | Class schema `vectorIndexConfig.distance`. |
| Milvus | L2 for float vectors | Index params `metric_type` on the collection or index. |
| pgvector | No fixed default; the query operator decides | `<->` L2, `<#>` inner product, `<=>` cosine. |
| Redis Vector Similarity | L2 | Index or HNSW param `DISTANCE_METRIC`. |
| Elasticsearch or OpenSearch | Often L2 on `dense_vector` unless set otherwise | Field mapping `similarity` set to `l2_norm`, `cosine`, or `dot_product`. |
| Pinecone | cosine | Index spec `metric`. |
| Typesense | cosine | Collection vector field `similarity`. |
| Vespa | euclidean unless changed | Schema `distance-metric` such as `euclidean`, `angular`, `dotproduct`. |
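
For example, FAISS (first row) has no cosine index type; the standard route is inner product over unit vectors. A sketch, assuming the faiss Python package is installed:

```python
import faiss
import numpy as np

d = 384
vectors = np.random.rand(1000, d).astype("float32")

# Cosine in FAISS = inner product on unit vectors.
faiss.normalize_L2(vectors)        # in-place L2 normalization on write
index = faiss.IndexFlatIP(d)       # IP index, now effectively cosine
index.add(vectors)

query = np.random.rand(1, d).astype("float32")
faiss.normalize_L2(query)          # normalize at query time too
scores, ids = index.search(query, 5)
print(ids[0], scores[0])
```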

Minimal rebuild checklist

  • Lock tokenizer, casing, and Unicode NFC.
  • Recompute embeddings with a single model id and version string.
  • Apply L2 normalization if cosine is the target metric.
  • Rebuild the index with one metric only and write the metric into store metadata.
  • Add a preflight that fails if query_metric != index_metric, sketched after this list.
  • Keep a 200-item gold set to validate ΔS and coverage before go-live.
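
A hedged sketch of the preflight item above. The metadata keys mirror step 3 of the fix list; the hard-coded values stand in for whatever your store actually records at build time.

```python
# Stand-in for metadata written at index build time (step 3 above).
index_meta = {
    "INDEX_HASH": "sha256:...",            # left elided on purpose
    "EMBED_MODEL": "example-embedder-v1",  # placeholder model id and version
    "METRIC": "cosine",
    "NORMALIZED": True,
}

def preflight(query_metric: str, query_normalized: bool) -> None:
    # Fail fast if the search path disagrees with how the index was built.
    if query_metric != index_meta["METRIC"]:
        raise RuntimeError(
            f"metric mismatch: query={query_metric} index={index_meta['METRIC']}"
        )
    if index_meta["METRIC"] == "cosine" and not (query_normalized and index_meta["NORMALIZED"]):
        raise RuntimeError("cosine requires L2 normalization on write and search")

preflight("cosine", query_normalized=True)   # passes silently
# preflight("l2", query_normalized=True)     # would raise: metric mismatch
```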

Deep diagnostics

  • Three-paraphrase probe
    Ask the same question three ways. If λ flips while the metric is stable, fix headers. If ΔS stays high, suspect metric or normalization.

  • Anchor triangulation
    Compare ΔS to the correct section and to a decoy section. If both are close, re-embed after fixing the analyzer and normalization. A sketch follows this list.

  • Reranker cross check
    Run a cross-encoder on the top k. If the reranker easily corrects the order, the base metric is likely the bottleneck.
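
A minimal sketch of the anchor triangulation probe. The random vectors are placeholders for embed(question), embed(correct_section), and embed(decoy_section), and the 0.05 separation threshold is an illustrative assumption, not a WFGY constant.

```python
import numpy as np

def delta_s(a, b):
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

# Placeholders for embed(question), embed(correct_section), embed(decoy_section).
rng = np.random.default_rng(1)
q, anchor, decoy = (rng.normal(size=384) for _ in range(3))

ds_anchor, ds_decoy = delta_s(q, anchor), delta_s(q, decoy)
# If both distances land in the same band, the metric is not separating the
# true section from the decoy: re-embed after fixing analyzer and normalization.
if abs(ds_anchor - ds_decoy) < 0.05:   # illustrative threshold
    print("no separation: suspect metric or normalization")
else:
    print(f"separation ok: anchor={ds_anchor:.3f}, decoy={ds_decoy:.3f}")
```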


Copy-paste prompt for your LLM step


I uploaded TXT OS and the WFGY Problem Map.

My issue: nearest neighbors look wrong. Suspect metric mismatch.
Traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states across 3 paraphrases.

Tell me:

1. which layer is failing and why,
2. the exact WFGY page to open from this repo,
3. the minimal steps to align metric and normalization,
4. a reproducible test with coverage ≥ 0.70 and ΔS ≤ 0.45.
   Use BBMC, BBCR, BBPF, BBAM when relevant.


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” and the OS boots instantly |

Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandmas Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.