
Elasticsearch: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of VectorDBs_and_Stores.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A compact field guide to stabilize Elasticsearch vector search when your RAG or agent stack loses accuracy. Use this to localize the failing layer and jump to the exact WFGY fix page.

Open these first

Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60. A minimal probe sketch follows this list.

  2. Probe with λ_observe
    Sweep k in {5, 10, 20}. If ΔS stays flat and high, suspect a metric, mapping, or index mismatch.
    Reorder prompt headers. If ΔS spikes, lock the schema with Data Contracts.

  3. Apply the module
    Retrieval drift → BBMC + Data Contracts.
    Reasoning collapse → BBCR bridge + BBAM variance clamp.
    Dead ends in long runs → BBPF alternate path.

  4. Verify acceptance
    Coverage ≥ 0.70 to target section.
    ΔS ≤ 0.45 on three paraphrases.
    λ remains convergent. Traces reproducible.
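
ΔS is WFGY's semantic stress measure. As a minimal sketch, assuming ΔS(a, b) can be proxied by 1 minus the cosine similarity of the two texts' embeddings, the probe looks like this; `embed` is a hypothetical stand-in for whatever encoder your stack already uses:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical encoder hook. Replace with your real embedding model."""
    raise NotImplementedError

def delta_s(a: str, b: str) -> float:
    # Proxy assumption: ΔS = 1 - cosine similarity. Lower is tighter.
    va, vb = embed(a), embed(b)
    cos = float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
    return 1.0 - cos

def verdict(ds: float) -> str:
    # Thresholds from step 1 above.
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"
```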

Elasticsearch breakpoints and the right repair

1) dense_vector mapping mismatch

Symptoms: insert errors, silent truncation, or chaotic top-k for new docs only.
Why: vector dims do not match encoder, or field not indexed for KNN.
Fix: set type: dense_vector, correct dims, and index: true for KNN. Re-index changed spans. See Data Contracts.
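
A minimal mapping sketch using the elasticsearch-py 8.x client; the `docs` index name, `emb` field, and 384 dims are placeholder assumptions, and `dims` must equal your encoder's output size exactly:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="docs",
    mappings={
        "properties": {
            "text": {"type": "text"},
            "emb": {
                "type": "dense_vector",
                "dims": 384,        # must match the encoder output size
                "index": True,      # required for approximate knn search
                "similarity": "cosine",
            },
        }
    },
)
```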

2) Distance metric and normalization

Symptoms: high similarity yet wrong meaning; ordering flips across runs.
Why: using similarity: l2_norm with cosine-trained embeddings, or dot_product without unit-norm vectors.
Fix: align the metric to the encoder and normalize vectors to unit length when using cosine or dot_product. similarity is fixed in the mapping, so switching metrics means a full rebuild, not just new writes. See Embedding ≠ Semantic and Vectorstore Metrics & FAISS Pitfalls.
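
A small normalization helper as a sketch; under dot_product, Elasticsearch expects unit-length vectors, and normalizing is a safe policy for cosine-trained encoders too:

```python
import numpy as np

def to_unit(v: np.ndarray) -> np.ndarray:
    # dot_product similarity expects unit-length vectors; cosine-trained
    # encoders compare angles, so normalizing does not change their ranking.
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```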

3) HNSW underfit and candidate window

Symptoms: gold chunk appears only at very large k.
Why: small m or ef_construction, and num_candidates too low at query time.
Fix: tune m, raise ef_construction, then sweep num_candidates at query. Validate with a reranker. See Retrieval Playbook and Rerankers.
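
A sketch of both knobs under the same placeholder names as above. m and ef_construction live in index_options and are fixed at build time; num_candidates is the query-time lever:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Graph shape is fixed at index build; changing it means a rebuild.
es.indices.create(
    index="docs_hnsw",
    mappings={
        "properties": {
            "emb": {
                "type": "dense_vector",
                "dims": 384,
                "index": True,
                "similarity": "cosine",
                "index_options": {"type": "hnsw", "m": 32, "ef_construction": 200},
            }
        }
    },
)

def knn_at(qv, k=10, num_candidates=100):
    # Sweep num_candidates upward until recall against an exact baseline
    # plateaus, then stop paying the extra latency.
    return es.search(
        index="docs_hnsw",
        knn={"field": "emb", "query_vector": qv, "k": k,
             "num_candidates": num_candidates},
    )
```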

4) knn vs script_score confusion

Symptoms: inconsistent scores between approximate knn and exact script scoring; hybrids regress.
Fix: use exact script_score on a canary set to bound max recall, then tune knn to approach it. Keep one scoring policy in production. Map to Retrieval Traceability.
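
A sketch of the canary comparison, again assuming the hypothetical `docs` index and `emb` field; exact script_score sets the recall ceiling that approximate knn should approach:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def exact_ids(qv, k=10):
    # Exact cosine via script_score: the recall ceiling for this query.
    # The +1.0 keeps scores non-negative, as Elasticsearch requires.
    r = es.search(
        index="docs",
        size=k,
        query={
            "script_score": {
                "query": {"match_all": {}},
                "script": {
                    "source": "cosineSimilarity(params.qv, 'emb') + 1.0",
                    "params": {"qv": qv},
                },
            }
        },
    )
    return [h["_id"] for h in r["hits"]["hits"]]

def approx_ids(qv, k=10, num_candidates=100):
    r = es.search(
        index="docs",
        knn={"field": "emb", "query_vector": qv, "k": k,
             "num_candidates": num_candidates},
    )
    return [h["_id"] for h in r["hits"]["hits"]]

def knn_recall(qv, k=10):
    # Fraction of the exact top-k that approximate knn recovers.
    gold = set(exact_ids(qv, k))
    return len(gold & set(approx_ids(qv, k))) / k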

5) Filters with KNN

Symptoms: empty results when adding filters; massive latency spikes.
Why: pre-filter not supported in your version or filter path not indexed.
Fix: ensure filtered fields are indexed and typed, test post-filter rerank, and document the path in the contract. See Data Contracts.
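
A sketch of a pre-filtered knn query (the filter inside the knn clause, available in recent 8.x versions); the field names are assumptions:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def filtered_knn(qv, lang: str, k: int = 10):
    # The filter inside the knn clause is applied during graph traversal,
    # so filtered fields must be indexed keyword/numeric types.
    return es.search(
        index="docs",
        knn={
            "field": "emb",
            "query_vector": qv,
            "k": k,
            "num_candidates": 10 * k,  # widen when filters are selective
            "filter": {"term": {"lang": lang}},
        },
    )
```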

6) Analyzer drift for hybrid BM25 + vector

Symptoms: hybrid performs worse than either branch alone.
Why: default analyzers, stopwords, or stemming distort lexical scores.
Fix: do not just reuse defaults. Lock analyzers per field and choose explicitly — e.g., icu_tokenizer for Unicode, edge_ngram for prefix search, asciifolding for normalization. Normalize hybrid weights and fuse post-retrieval with a reranker. See Query Parsing Split and Rerankers.
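
A sketch that pins one analyzer explicitly and makes fusion weights visible via boosts; the `folded` analyzer, field names, and the 0.3/0.7 split are assumptions to adapt, not recommendations:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Pin the analyzer per field instead of inheriting index defaults.
es.indices.create(
    index="docs_hybrid",
    settings={
        "analysis": {
            "analyzer": {
                "folded": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "asciifolding"],
                }
            }
        }
    },
    mappings={
        "properties": {
            "text": {"type": "text", "analyzer": "folded"},
            "emb": {"type": "dense_vector", "dims": 384,
                    "index": True, "similarity": "cosine"},
        }
    },
)

def hybrid(q: str, qv, k: int = 20):
    # Lexical and knn scores are summed; explicit boosts make the
    # fusion weights a documented choice rather than an accident.
    return es.search(
        index="docs_hybrid",
        query={"match": {"text": {"query": q, "boost": 0.3}}},
        knn={"field": "emb", "query_vector": qv, "k": k,
             "num_candidates": 5 * k, "boost": 0.7},
    )
```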

7) Shards, replicas, and refresh

Symptoms: fresh writes never appear; nodes return different sets.
Fix: confirm refresh policy, replica sync, and routing. Add a semantic boot fence before first prod call. See Bootstrap Ordering and Pre-Deploy Collapse.
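
A minimal boot-fence sketch, assuming the same placeholder index; refresh="wait_for" blocks until the canary write is searchable:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Boot fence: write a canary, force visibility, and read it back
# before the first production call touches the index.
es.index(index="docs", id="boot-canary",
         document={"text": "boot fence"}, refresh="wait_for")
es.indices.refresh(index="docs")
assert es.exists(index="docs", id="boot-canary"), "store not ready"
```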

8) Alias routing and multi-index fragmentation

Symptoms: global recall ok but weak per-alias top-k.
Why: many tiny indices split the neighborhood; wrong read/write alias.
Fix: consolidate to an authoritative index with a facet, fix aliases, rebuild, then rerank. See Vectorstore Fragmentation.
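
A sketch of an atomic alias swap; `docs_v1`, `docs_v2`, and `docs_read` are hypothetical names:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One atomic action list: readers never see zero or two targets.
es.indices.update_aliases(actions=[
    {"remove": {"index": "docs_v1", "alias": "docs_read"}},
    {"add": {"index": "docs_v2", "alias": "docs_read"}},
])
```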

9) Upsert hygiene

Symptoms: duplicates, stale docs, toggling answers.
Fix: deterministic IDs, doc_sha in metadata, idempotent loaders, periodic dedupe. Validate with golden queries. See Retrieval Traceability.
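
A sketch of an idempotent loader; source URI plus chunk number as the document ID and a doc_sha field are one workable convention, not the only one:

```python
import hashlib
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def upsert_chunk(source_uri: str, chunk_no: int, text: str, emb: list):
    # Deterministic ID: re-running the loader overwrites instead of
    # duplicating. doc_sha exposes stale content to audits and dedupe.
    doc_id = f"{source_uri}#{chunk_no}"
    doc_sha = hashlib.sha256(text.encode("utf-8")).hexdigest()
    es.index(index="docs", id=doc_id, document={
        "text": text, "emb": emb,
        "source_uri": source_uri, "doc_sha": doc_sha,
    })
```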

Observability probes

  • k-sweep curve: run k ∈ {5, 10, 20}. Flat, high ΔS → metric, mapping, or index fault. A sketch follows this list.
  • Exact vs approx: compare script_score exact against knn. Large gap → retune HNSW and num_candidates.
  • Hybrid toggle: vector only vs hybrid. If hybrid regresses, repair analyzers and fusion weights.
  • Reranker audit: strong reranker should reduce ΔS while recall rises. If not, rebuild.
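
A minimal sketch of the first probe, reusing the hypothetical delta_s proxy and es client from the sketches above:

```python
# Assumes the `es` client and the `delta_s` proxy defined earlier.
def k_sweep(question: str, qv, ks=(5, 10, 20)):
    # A best-ΔS that stays flat and high as k grows points at metric,
    # mapping, or index faults; a falling curve points at retrieval depth.
    out = {}
    for k in ks:
        r = es.search(index="docs",
                      knn={"field": "emb", "query_vector": qv,
                           "k": k, "num_candidates": 10 * k})
        texts = [h["_source"]["text"] for h in r["hits"]["hits"]]
        out[k] = min((delta_s(question, t) for t in texts), default=1.0)
    return out
```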

Escalate when

  • ΔS stays ≥ 0.60 on golden questions after metric, mapping, and HNSW fixes.
  • Coverage cannot reach 0.70 even with reranker and anchors.
  • Writes appear in logs but remain invisible across shards or replicas.

Open:

Copy-paste prompt for your AI


I uploaded TXT OS and the WFGY Problem Map files.

Target system: Elasticsearch.

* symptom: [brief]
* traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states
* mapping: [field, dims, index=true, similarity=cosine|dot_product|l2_norm]
* knn: [k, num_candidates, hnsw m, ef_construction]
* exact: [script_score policy if used]
* hybrid: [match/bm25 fields, analyzers, weights]
* ingest: [ids, doc_sha, upsert policy]
* routing: [index/alias, shards, replicas, refresh]

Tell me:

1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify with a reproducible test.

Use BBMC/BBPF/BBCR/BBAM when relevant.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 | PDF Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning and semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test with full WFGY reasoning suite | View → |
| Starter Village | New here? Start with a guided tour | Start → |
