WFGY/ProblemMap/GlobalFixMap/VectorDBs_and_Stores/redis.md
2025-09-05 11:55:12 +08:00


Redis Vector (Redis Stack): Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of VectorDBs_and_Stores.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A compact field guide to stabilize Redis Stack vector search when your RAG or agent stack loses accuracy. Use this to localize the failing layer and jump to the exact WFGY fix page.

Open these first

Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Thresholds: stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe with λ_observe
    Sweep top-k in {5, 10, 20}. If ΔS is flat high, suspect metric or index mismatch.
    Reorder prompt headers. If ΔS spikes, lock the schema with Data Contracts.

  3. Apply the module
    Retrieval drift → BBMC + Data Contracts.
    Reasoning collapse → BBCR bridge + BBAM variance clamp.
    Dead ends in long runs → BBPF alternate path.

  4. Verify acceptance
    Coverage ≥ 0.70 to target section.
    ΔS ≤ 0.45 across three paraphrases.
    λ remains convergent across seeds.
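The ΔS probe above can be sketched in plain Python. This is a minimal sketch under an illustrative assumption: it treats ΔS as 1 minus the cosine similarity of two embedding vectors (the function name and formula are this sketch's own, not a definition from the WFGY engine):

```python
import math

def delta_s(vec_a, vec_b):
    """Illustrative ΔS probe: 1 - cosine similarity of two embeddings.
    This formula is an assumption for demonstration, not the WFGY definition."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

# Identical directions give ΔS = 0 (stable); orthogonal gives ΔS = 1 (risk).
print(delta_s([1.0, 0.0], [2.0, 0.0]))  # → 0.0
print(delta_s([1.0, 0.0], [0.0, 1.0]))  # → 1.0
```

Run it on (question, retrieved) and (retrieved, expected anchor) pairs and compare against the 0.40 / 0.60 bands.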

Redis specific breakpoints and the right repair

1) JSON vs HASH schema mismatch

Symptoms: FT.SEARCH returns objects but vector KNN is empty or errors.
Why: index created ON JSON while data written to HASH, or field path is wrong.
Fix: keep a single object model in a data contract. For JSON, store vectors at a fixed JSONPath (for example $.vec). Recreate the index with the correct ON JSON|HASH and SCHEMA path. See Data Contracts and verify with Retrieval Traceability.
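A minimal index recreation for the JSON case, assuming an index named `idx:docs`, 4-dim toy vectors, and writers using `JSON.SET` (the names and dimension are illustrative):

```shell
# Drop the mismatched index; documents are kept.
FT.DROPINDEX idx:docs

# Recreate ON JSON with the vector at a fixed JSONPath.
FT.CREATE idx:docs ON JSON PREFIX 1 doc: SCHEMA \
  $.vec AS vec VECTOR HNSW 6 TYPE FLOAT32 DIM 4 DISTANCE_METRIC COSINE

# Writers must target the same path the schema indexes.
JSON.SET doc:1 $ '{"vec": [0.12, 0.05, 0.33, 0.90], "text": "example"}'
```

If the same loader sometimes calls `HSET`, those keys will never appear in this `ON JSON` index, which reproduces the empty-KNN symptom exactly.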

2) Distance metric and normalization

Symptoms: high scores yet wrong meaning, order flips between runs.
Why: schema uses VECTOR ... DISTANCE_METRIC that does not match the encoder. Cosine requires normalized vectors.
Fix: align metric to the encoder family and normalize for COSINE or IP. If you switch metric, rebuild the index. Read Embedding ≠ Semantic and Vectorstore Metrics & FAISS Pitfalls.
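COSINE and IP scores only track the encoder's notion of similarity when vectors are unit length. A minimal L2 normalizer to apply before every write (pure Python, no external dependencies):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length before storing it for COSINE/IP search."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm == 0.0:
        raise ValueError("zero vector cannot be normalized")
    return [x / norm for x in vec]

print(l2_normalize([3.0, 4.0]))  # → [0.6, 0.8]
```

Normalize at ingest and at query time with the same function, so both sides of the KNN comparison live on the unit sphere.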

3) HNSW vs FLAT audit

Symptoms: gold chunk appears only at very large k or under hybrid reranker.
Why: HNSW underfit (M, EF_CONSTRUCTION, EF_RUNTIME) or FLAT used without proper dim/metric.
Fix: for quality checks, build a FLAT index or run FLAT on a canary set to bound upper recall. Then tune HNSW and validate with a reranker. See Retrieval Playbook and Rerankers.
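One way to bound upper recall: build an exact FLAT index over the same keyspace as a canary, then compare its top-k against the tuned HNSW index (index names, dimension, and parameter values below are illustrative):

```shell
# Exact-search canary: any gold chunk that FLAT finds but HNSW misses
# is an ANN tuning problem, not a data problem.
FT.CREATE idx:canary ON HASH PREFIX 1 doc: SCHEMA \
  vec VECTOR FLAT 6 TYPE FLOAT32 DIM 768 DISTANCE_METRIC COSINE

# Tuned HNSW for production; raise M / EF_* until canary recall is matched.
FT.CREATE idx:prod ON HASH PREFIX 1 doc: SCHEMA \
  vec VECTOR HNSW 12 TYPE FLOAT32 DIM 768 DISTANCE_METRIC COSINE \
  M 32 EF_CONSTRUCTION 400 EF_RUNTIME 100
```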

4) Prefix routing and index keyspace

Symptoms: FT.SEARCH returns nothing for known IDs.
Why: PREFIX in FT.CREATE does not match the stored key pattern.
Fix: lock keyspace rules in the data contract and align loader prefixes. Rebuild and re-test ΔS.
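The `PREFIX` clause and the loader's key pattern must agree character for character (names illustrative):

```shell
# The index only sees keys starting with "doc:".
FT.CREATE idx:docs ON HASH PREFIX 1 doc: SCHEMA text TEXT

HSET doc:1 text "indexed"          # visible to FT.SEARCH
HSET document:2 text "invisible"   # wrong prefix: never indexed
```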

5) Stopwords and analyzers in hybrid queries

Symptoms: hybrid BM25 + vector performs worse than vector alone.
Why: default STOPWORDS or stemming breaks lexical branch, so fusion is biased.
Fix: use a controlled analyzer or disable stopwords for the tested fields. Normalize scores before fusion and rerank. See Query Parsing Split and Rerankers.
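Stopwords can be disabled per index at creation time; `STOPWORDS 0` declares an empty stopword list (index name and schema are illustrative):

```shell
FT.CREATE idx:hybrid ON HASH PREFIX 1 doc: STOPWORDS 0 SCHEMA \
  text TEXT \
  vec VECTOR HNSW 6 TYPE FLOAT32 DIM 768 DISTANCE_METRIC COSINE
```

With stopwords off, re-run the hybrid vs vector-only comparison before touching fusion weights.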

6) TAG and NUMERIC filter drift

Symptoms: filters work in isolation but empty under KNN.
Why: type mismatch or missing index on those properties.
Fix: declare TAG or NUMERIC fields in the index schema, normalize case and separators, and validate final WHERE at query build time. Map to Data Contracts.
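Once declared in the schema, TAG and NUMERIC fields can pre-filter the KNN candidate set in a single query. Field names are illustrative; `DIALECT 2` is required for the KNN arrow syntax:

```shell
FT.CREATE idx:docs ON HASH PREFIX 1 doc: SCHEMA \
  lang TAG \
  year NUMERIC \
  vec VECTOR HNSW 6 TYPE FLOAT32 DIM 768 DISTANCE_METRIC COSINE

# Filter first, then run KNN over the survivors.
FT.SEARCH idx:docs "(@lang:{en} @year:[2020 2025])=>[KNN 10 @vec $blob AS score]" \
  PARAMS 2 blob "<768 float32 bytes>" SORTBY score DIALECT 2
```

If this returns empty while the filter alone matches, check that `lang` values were written with consistent case and separators.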

7) Persistence and eviction

Symptoms: vectors disappear after restart, or recall collapses randomly.
Why: AOF/RDB settings or maxmemory eviction removed keys holding vectors.
Fix: set non-evicting policy for vector namespaces, confirm persistence schedule, and warm the index on boot. Add a semantic boot fence. See Bootstrap Ordering and Pre-Deploy Collapse.
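Minimal server-side settings for the eviction and persistence case, applied to the instance holding vectors (values are illustrative starting points):

```shell
CONFIG SET maxmemory-policy noeviction   # never evict keys that hold vectors
CONFIG SET appendonly yes                # AOF so vectors survive a restart
CONFIG SET appendfsync everysec         # bound data loss to roughly one second
INFO persistence                         # verify AOF status and RDB schedule
```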

8) Upsert hygiene

Symptoms: duplicate or stale entries, toggling answers across runs.
Why: non-deterministic IDs or mixed loaders using HSET vs JSON.SET with different paths.
Fix: deterministic IDs, doc_sha in metadata, idempotent loader, and periodic dedupe. Verify with Retrieval Traceability.
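Deterministic IDs make upserts idempotent: the same source chunk always maps to the same key, so a re-run overwrites instead of duplicating. A minimal sketch; the key scheme and normalization are assumptions of this example:

```python
import hashlib

def doc_key(source: str, chunk_text: str) -> str:
    """Deterministic Redis key: same (source, normalized text) -> same key."""
    normalized = " ".join(chunk_text.split()).lower()
    sha = hashlib.sha256(f"{source}\x00{normalized}".encode()).hexdigest()
    return f"doc:{sha[:16]}"

# Re-ingesting the same chunk, even with whitespace drift, hits the same key.
k1 = doc_key("faq.md", "How do I reset  my password?")
k2 = doc_key("faq.md", "How do I reset my password?")
print(k1 == k2)  # → True
```

Store the full digest as `doc_sha` in metadata so dedupe sweeps can verify content identity, not just key identity.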

9) Fragmentation across many indexes

Symptoms: global recall ok but per-index top-k is weak.
Why: tiny indices with poor neighborhood structure.
Fix: consolidate into one authoritative index with a routed facet. See Vectorstore Fragmentation.

Observability probes

  • k-sweep curve: run k in 5, 10, 20 and plot ΔS. Flat high means metric or index fault.
  • Score audit: read __vector_score distribution and check monotonicity under reranker.
  • Anchor control: compare against a golden set. If only one prefix or index fails, rebuild that scope.
  • Hybrid toggle: vector only vs hybrid. If hybrid is worse, fix query split and analyzers.
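The k-sweep probe reduces to a small check over measured ΔS values. Thresholds follow the 0.40 / 0.60 bands above; the input numbers and the flatness cutoff are illustrative:

```python
def classify_k_sweep(delta_s_by_k):
    """Flat-high curve (all ΔS >= 0.60, little spread across k) suggests a
    metric or index fault; a falling curve suggests retrieval is merely shallow."""
    values = list(delta_s_by_k.values())
    spread = max(values) - min(values)
    if min(values) >= 0.60 and spread < 0.05:
        return "flat-high: suspect metric or index mismatch"
    if values[-1] < values[0]:
        return "improving with k: tune depth and rerank"
    return "inspect anchors and prompt schema"

print(classify_k_sweep({5: 0.71, 10: 0.70, 20: 0.69}))
# → flat-high: suspect metric or index mismatch
```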

Escalate when

  • ΔS stays above 0.60 for golden questions after metric, schema, and HNSW corrections.
  • Coverage cannot reach 0.70 even with reranker and clean anchors.
  • Writes appear in logs but results stay empty for the target prefix or index.

Open:

Copy-paste prompt for your AI


I uploaded TXT OS and the WFGY Problem Map files.

Target system: Redis Stack vector.

* symptom: [brief]
* traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states
* index: [ON JSON|HASH, PREFIX, VECTOR field path, metric, type=HNSW|FLAT, M, EF_CONSTRUCTION, EF_RUNTIME]
* analyzers: [stopwords, stemming, language]
* filters: [TAG keys, NUMERIC keys]
* ingest: [ids, doc_sha, loader, upsert policy]

Tell me:

1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify with a reproducible test.

Use BBMC/BBPF/BBCR/BBAM when relevant.


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning and semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test with full WFGY reasoning suite | View → |
| Starter Village | New here? Start with a guided tour | Start → |

👑 Early Stargazers: See the Hall of Fame
WFGY Engine 2.0 is live. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow
