WFGY/ProblemMap/GlobalFixMap/VectorDBs_and_Stores/vespa.md
2025-08-27 12:39:34 +08:00


Vespa: Guardrails and Fix Patterns

Use this page when your retrieval stack runs on Vespa with vector or hybrid ranking. It routes common store-level failures to the right structural fixes in the Problem Map and gives a minimal checklist you can apply fast.

When to open this page

  • High vector similarity yet wrong meaning
  • Hybrid keyword + vector flips order between runs
  • Citations land on the wrong section or offsets do not match
  • Tensor dimensions or distance functions differ across write and read
  • Coverage stays low on the intended section even though recall looks fine

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage of target section ≥ 0.70
  • λ_observe remains convergent across 3 paraphrases
  • E_resonance flat on long windows
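
The two scalar targets above can be probed directly. A minimal sketch, assuming ΔS is computed as 1 − cosine similarity between question and retrieved-snippet embeddings (an assumption for illustration; the WFGY engine paper defines the exact form):

```python
import math

def delta_s(q_vec, r_vec):
    # ΔS modeled as 1 - cosine similarity (assumed definition).
    dot = sum(a * b for a, b in zip(q_vec, r_vec))
    nq = math.sqrt(sum(a * a for a in q_vec))
    nr = math.sqrt(sum(b * b for b in r_vec))
    return 1.0 - dot / (nq * nr)

def passes_targets(q_vec, r_vec, coverage):
    # Gate on the two scalar acceptance targets: ΔS ≤ 0.45, coverage ≥ 0.70.
    return delta_s(q_vec, r_vec) <= 0.45 and coverage >= 0.70
```

λ_observe and E_resonance need multi-run traces, so they are left out of this sketch.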

Map symptoms → structural fixes

  • High vector similarity yet wrong meaning → embedding-vs-semantic.md
  • Hybrid keyword + vector order flips between runs → rerankers.md
  • Citations on the wrong section or mismatched offsets → retrieval-traceability.md and data-contracts.md
  • Tensor dimension or distance-function drift across write and read → pin one schema and metric (checklist item 1)
  • Old rank profile still served after deploy → bootstrap-ordering.md

Minimal Vespa setup checklist

  1. Pin tensor schema and metric
    Use one embedding model per vector field. Keep a single tensor dimension and a single distance function for the rank profile. Reject documents that do not match the expected dimension.

  2. Contract the snippet
    Every hit must carry {snippet_id, section_id, source_url, offsets, tokens}. Enforce cite-then-explain in the answer layer.
    Spec: data-contracts.md

  3. Hybrid ordering and reranking
    Combine keyword recall with nearestNeighbor recall, then run a deterministic rerank pass. Log both candidate lists to detect query split.
    Guide: rerankers.md

  4. Analyzer and casing parity
    Freeze tokenization, stemming, and casing rules for all text fields used in BM25 or filters. Keep the same rules at write and read.

  5. Observability probes
    Log ΔS(question, retrieved) and λ per step: retrieve → rerank → reason. Alert when ΔS ≥ 0.60 or λ diverges.

  6. Cold start and deploy fences
    Block traffic until the new application package, schema hash, and rank profile version are active.
    See: bootstrap-ordering.md
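
Checklist items 1 and 2 can be fenced at write time before feeding documents. A minimal sketch, assuming one Python dict per document; `EXPECTED_DIM` is a hypothetical value and the field names come from the contract in item 2:

```python
EXPECTED_DIM = 768  # pin one dimension per vector field (value is an assumption)
CONTRACT_FIELDS = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def validate_doc(doc):
    # Return contract violations; an empty list means the document may be fed.
    errors = []
    emb = doc.get("embedding", [])
    if len(emb) != EXPECTED_DIM:
        errors.append(f"embedding dim {len(emb)} != {EXPECTED_DIM}")
    missing = CONTRACT_FIELDS - doc.keys()
    if missing:
        errors.append("missing contract fields: " + ", ".join(sorted(missing)))
    return errors
```

Reject any document with a non-empty error list instead of letting the store coerce or drop fields silently.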


60-second diagnosis

  1. Run a 10-question smoke set on one target section.
  2. Compute ΔS(question, retrieved) for each question.
  3. If median ΔS ≥ 0.60, apply one structural fix in this order:
    a) normalize embeddings and pin one distance function
    b) enforce data contracts and traceability
    c) add deterministic reranking and align analyzers
  4. Require coverage ≥ 0.70 before you publish.
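
The loop above can be sketched as a small driver. `delta_s_values` would come from the probe in step 2, and the fix order mirrors a) through c):

```python
import statistics

FIX_ORDER = [
    "embedding-vs-semantic",  # a) normalize embeddings, pin one distance function
    "data-contracts",         # b) enforce data contracts and traceability
    "rerankers",              # c) deterministic rerank, analyzer alignment
]

def next_fix(delta_s_values, applied=()):
    # Median ΔS gate: if still ≥ 0.60, name the next unapplied structural fix.
    if statistics.median(delta_s_values) < 0.60:
        return None  # within target, proceed to the coverage ≥ 0.70 check
    for fix in FIX_ORDER:
        if fix not in applied:
            return fix
    return "escalate"  # all three applied and ΔS still high
```

Apply exactly one fix per iteration, then re-run the smoke set before trying the next.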

Copy-paste audit prompt


I uploaded TXT OS and the WFGY Problem Map pages.
Store: Vespa. Retrieval: BM25 + nearestNeighbor with rerank.

Audit this query and return:

* ΔS(question, retrieved) and λ across retrieve → rerank → reason.
* If ΔS ≥ 0.60, choose exactly one minimal structural fix and name the page:
  embedding-vs-semantic, retrieval-traceability, data-contracts, rerankers.
* JSON only:
  { "citations":[...], "ΔS":0.xx, "λ":"→|←|<>|×", "next_fix":"..." }


Common Vespa gotchas

  • Mixed embedding dimensions or distance functions across rank profiles
    Standardize on one and validate on write.

  • Summary fields do not include offsets or token spans
    Add fields to the summary and verify the contract.

  • Match-phase or targetHits tuned too low for the collection size
    Recall collapses and the rerank pass cannot recover it. Raise targetHits or the match-phase limits per shard.

  • Filter mismatches due to analyzer differences
    Keep analyzer and casing identical across environments.

  • Application package deployed but old profile still served
    Fence the cutover and verify the active profile hash before enabling traffic.
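
Several of these gotchas first surface as query split: the keyword and vector candidate lists barely overlap, so the rerank pass has nothing good to promote. A minimal detection sketch, assuming both lists are logged per query (the 0.2 threshold is an assumption to tune):

```python
def query_split_ratio(bm25_ids, vector_ids):
    # Overlap between the keyword and vector candidate lists; near 0 signals split.
    a, b = set(bm25_ids), set(vector_ids)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def check_hybrid(bm25_ids, vector_ids, threshold=0.2):
    ratio = query_split_ratio(bm25_ids, vector_ids)
    return {"overlap": ratio, "query_split": ratio < threshold}
```

Alert on query_split and diff the two lists before touching rank-profile weights.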


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 | PDF Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow