
# RAG — Global Fix Map

Production RAG triage and structural fixes using the WFGY engine. Use this page when retrieval looks fine but answers drift.

## Purpose

- Turn OCR → chunk → embed → store → retrieve → prompt → reason into a measured, repairable pipeline.
- Give a 60-second path to locate the failing layer and apply the smallest effective fix.
- Work with any model or stack; no infra changes required.

## High-frequency symptoms

- Citations point to the wrong snippet or section.
- Chunks look correct but the reasoning is wrong.
- Cosine similarity is high yet the meaning is wrong.
- Hybrid retrievers perform worse than a single retriever.
- Some facts are indexed but never retrieved.
- Answers flip between sessions or tabs.
- Long threads smear topics and capitalization together.

## Open these first

## Fix in 60 seconds

1. Measure ΔS
   - Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   - Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
2. Probe with λ_observe
   - Vary k ∈ {5, 10, 20} and plot ΔS versus k. A curve that stays flat and high suggests an index or metric mismatch.
   - Reorder the prompt headers. If ΔS spikes, lock the schema.
3. Apply the minimal patch
   - Metric or normalization mismatch: rebuild with a consistent metric, unit-normalize the vectors, then re-probe ΔS and λ.
   - Chunks correct but logic diverges: lock the system → task → constraints → citations → answer order, then apply BBCR + BBAM. See the pages above.
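The ΔS measurement and the k-sweep probe above can be sketched in a few lines. This is a minimal sketch, assuming ΔS = 1 − cosine similarity (the repo defines ΔS abstractly, so treat that as one plausible instantiation), and `embed` and `retrieve` are hypothetical hooks into your own stack:

```python
import numpy as np

def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic distance between two embeddings, taken here as
    1 - cosine similarity. One reasonable instantiation of ΔS,
    not the canonical WFGY definition."""
    a = a / np.linalg.norm(a)  # unit-normalize so cosine is metric-consistent
    b = b / np.linalg.norm(b)
    return float(1.0 - np.dot(a, b))

def probe_k(question: str, embed, retrieve, ks=(5, 10, 20)) -> dict:
    """Sweep k and record the best ΔS(question, chunk) at each k.

    `embed(text) -> vector` and `retrieve(query, k) -> list[str]` are
    stand-ins for your stack's functions. A curve that stays flat and
    high across k points at an index or metric mismatch rather than a
    chunking problem."""
    q = embed(question)
    return {
        k: min(delta_s(q, embed(chunk)) for chunk in retrieve(question, k=k))
        for k in ks
    }
```

If the curve drops as k grows, the right chunk exists but ranks too low; if it never drops, re-check the index metric and vector normalization before touching the prompt.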

## Copy-paste prompt


```txt
I uploaded TXT OS and the WFGY ProblemMap files.
My RAG bug:

* symptom: [brief]
* traces: [ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states]

Tell me:

1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify the fix with a reproducible test.

Use BBMC/BBPF/BBCR/BBAM when relevant.
```

## Patterns to check next

## Acceptance targets

- Coverage of the target section ≥ 0.70.
- ΔS(question, retrieved) ≤ 0.45 on three paraphrases.
- λ remains convergent across steps and seeds.
- E_resonance stays flat under long context windows.
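The first two targets can be turned into a reproducible test. A minimal sketch, assuming ΔS = 1 − cosine similarity and hypothetical `embed`/`retrieve` hooks into your stack:

```python
import numpy as np

def meets_targets(paraphrases, embed, retrieve, threshold=0.45, k=10) -> bool:
    """Return True when every paraphrase of the question retrieves at
    least one chunk with ΔS <= threshold.

    `embed(text) -> vector` and `retrieve(query, k) -> list[str]` are
    hypothetical hooks; ΔS is taken as 1 - cosine similarity."""
    def ds(a, b):
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return float(1.0 - np.dot(a, b))

    for q in paraphrases:
        qv = embed(q)
        best = min(ds(qv, embed(chunk)) for chunk in retrieve(q, k=k))
        if best > threshold:
            return False  # this paraphrase never reached the target section
    return True
```

Run it with three paraphrases of the same question; a single failing paraphrase means the fix has not landed yet.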

## 🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |

👑 Early Stargazers: See the Hall of Fame

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow