WFGY/ProblemMap/GlobalFixMap/VectorDBs_and_Stores/faiss.md
2025-08-26 11:35:00 +08:00


FAISS: Guardrails and Fix Patterns

A compact repair guide for FAISS retrieval stacks. Use this when recall looks fine but meaning drifts, or when IVF/HNSW tuning flips answers across seeds. The checks below route you to the exact WFGY fix pages and give a minimal recipe you can paste into a runbook.

Open these first

Fix in 60 seconds

  1. Measure ΔS

    • Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    • Thresholds: stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.
  2. Probe with λ_observe

    • Sweep k ∈ {5, 10, 20} and for IVF sweep nprobe ∈ {1, 4, 8, 16}.
    • For HNSW, sweep efSearch ∈ {32, 64, 128}.
    • If ΔS flattens high across k, suspect metric/index mismatch.
  3. Apply the module

    • Match your symptom in the table below and open the linked fix page.

  4. Verify

    • Coverage to target section ≥ 0.70, ΔS ≤ 0.45 on three paraphrases, λ stays convergent across seeds.
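Steps 1 and 4 can be scripted. A minimal numpy sketch, assuming ΔS is approximated as 1 - cosine similarity (an assumption for illustration; substitute your own ΔS implementation):

```python
import numpy as np

def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    # Assumed proxy: ΔS = 1 - cosine similarity. Swap in the real ΔS metric.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(1.0 - a @ b)

def zone(ds: float) -> str:
    # Thresholds from step 1: stable < 0.40, transitional 0.40 to 0.60, risk >= 0.60.
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"

q = np.array([1.0, 0.0, 0.0])          # question embedding (toy values)
retrieved = np.array([0.9, 0.1, 0.0])  # retrieved chunk, nearly parallel to q
anchor = np.array([0.0, 1.0, 0.0])     # expected anchor, orthogonal to q

assert zone(delta_s(q, retrieved)) == "stable"
assert zone(delta_s(q, anchor)) == "risk"
```

Run the same probe over three paraphrases of the question; if the ΔS zone changes across paraphrases, treat the config as unstable.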

Typical breakpoints and the right fix

| Symptom | Likely cause | Open this | Minimal fix |
|---|---|---|---|
| High cosine similarity but wrong meaning | IP vs L2 mixup, un-normalized embeddings | Embedding vs Semantic | Normalize vectors; match metric to embedder; re-index |
| Good recall, messy top-k order | Rerank missing or weak | Rerankers | Add cross-encoder rerank, k=50 → top-10 |
| Some facts never show up | Shards or label fragmentation | Vectorstore Fragmentation | Merge shards; rebuild IVF lists; verify dim |
| Answers flip between runs | IVF nlist/nprobe underfit, PQ over-aggressive | FAISS Pitfalls | Raise nprobe, enlarge training set, reduce PQ |
| Hybrid gets worse than single retriever | Query split and prompt coupling | Query Parsing Split | Split semantic vs lexical prompts; fuse post-retrieval |
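The first row (high similarity, wrong meaning) is easy to reproduce: on un-normalized vectors, inner product rewards norm rather than direction. A toy numpy sketch with made-up vectors:

```python
import numpy as np

q = np.array([1.0, 0.0])       # query
close = np.array([0.9, 0.1])   # nearly parallel to q: semantically closest
loud = np.array([5.0, 5.0])    # large norm, 45 degrees away from q

# Raw inner product: the large-norm vector wins despite being less similar.
assert (loud @ q) > (close @ q)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# After L2-normalizing, inner product equals cosine similarity
# and the semantically closer vector ranks first.
assert (normalize(close) @ normalize(q)) > (normalize(loud) @ normalize(q))
```

This is why the minimal fix is "normalize vectors; match metric to embedder; re-index": normalization must happen before indexing, so changing it requires a rebuild.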

FAISS quick checklist

  • Confirm dimension matches the embedding model output exactly.
  • Confirm metric: IP with normalized vectors, or L2 with raw vectors. Do not mix.
  • For IVF, set nlist based on corpus size, train with at least 100× nlist examples.
  • Start with nprobe ≈ sqrt(nlist) and tune upward until ΔS stabilizes.
  • For HNSW, raise efConstruction and efSearch until ΔS stops improving.
  • Rebuild the index after changing normalization or metric.
  • Lock the snippet schema and citations using Data Contracts.

Copy-paste repair prompt


```
audit FAISS retrieval with ΔS and λ_observe.
report: metric choice (IP/L2), normalization, dim, index type, nlist/nprobe or HNSW ef.
run three paraphrases, k in {5,10,20}. if ΔS stays > 0.45, switch to normalized IP and rebuild.
apply BBMC + Data Contracts; add reranker for top-50 → top-10. show before/after ΔS table.
```

Acceptance targets

  • Coverage ≥ 0.70 to the target section.
  • ΔS ≤ 0.45 across three paraphrases.
  • λ remains convergent across seeds.
  • E_resonance flat under long windows.
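These targets can be enforced as a single gate in CI. A hypothetical helper; the function name, signature, and boolean inputs are illustrative, not a WFGY API:

```python
def passes_acceptance(
    coverage: float,
    delta_s_scores: list,   # ΔS for each of the three paraphrases
    lambda_convergent: bool,
    e_resonance_flat: bool,
) -> bool:
    # Encodes the acceptance targets above as one pass/fail check.
    return (
        coverage >= 0.70
        and len(delta_s_scores) == 3
        and max(delta_s_scores) <= 0.45
        and lambda_convergent
        and e_resonance_flat
    )

assert passes_acceptance(0.82, [0.31, 0.40, 0.44], True, True)
assert not passes_acceptance(0.82, [0.31, 0.40, 0.52], True, True)
```

Fail the run if any single target misses; a config that passes two of three paraphrases is still a fail.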

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” and the OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
