WFGY/ProblemMap/GlobalFixMap/RAG/context_drift.md

Context Drift in RAG — Guardrails and Fix Pattern

🧭 Quick Return to Map

You are in a sub-page of RAG.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when answers alternate or degrade as dialogs grow longer, even though the retriever continues to surface the right snippets.
This page stabilizes λ (semantic convergence) and prevents entropy creep in retrieval-augmented pipelines.


Open these first


Core acceptance

  • ΔS(question, retrieved) ≤ 0.45 across full chain
  • λ stays convergent across 3 paraphrases and 2 seeds
  • Coverage ≥ 0.70 for target section, even after N steps
  • E_resonance stable on long dialog windows
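The first two acceptance gates can be checked mechanically. Below is a minimal sketch, assuming ΔS is computed as 1 minus cosine similarity between question and retrieved-context embeddings; the `embed()` step and the vectors it returns are your pipeline's responsibility and are not shown.

```python
import numpy as np

def delta_s(question_vec: np.ndarray, retrieved_vec: np.ndarray) -> float:
    """Semantic tension: 1 - cosine similarity of question vs. retrieved context."""
    cos = float(np.dot(question_vec, retrieved_vec) /
                (np.linalg.norm(question_vec) * np.linalg.norm(retrieved_vec)))
    return 1.0 - cos

def passes_acceptance(pairs, coverage: float,
                      ds_max: float = 0.45, cov_min: float = 0.70) -> bool:
    """pairs: (question_vec, retrieved_vec) tuples across the full chain.

    Enforces ΔS ≤ 0.45 on every hop and coverage ≥ 0.70 for the target section.
    """
    return all(delta_s(q, r) <= ds_max for q, r in pairs) and coverage >= cov_min
```

Run this over every hop of the chain, not just the final answer; a single hop above 0.45 is enough to fail the gate.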

Typical symptoms → exact fix

| Symptom | Likely cause | Open this |
| --- | --- | --- |
| Same question asked twice, different answers | λ drift with long chain | Entropy Collapse, Logic Collapse |
| Correct snippets retrieved, answer drops citation | payload contract erosion | Data Contracts, Retrieval Traceability |
| Paraphrase of query yields different grounding | unstable λ_observe | Retrieval Playbook |
| Long dialog overwrites memory | buffer collapse | Memory Long Context |
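The table above is a lookup, so it can be wired directly into a triage helper. A minimal sketch; the matching keys are shortened paraphrases of the symptom column, and the returned page names mirror the "Open this" column.

```python
# Triage lookup mirroring the symptom table; keys are hypothetical
# shortened forms of the symptom text, values are Problem Map pages.
SYMPTOM_ROUTES = {
    "different answers": ["Entropy Collapse", "Logic Collapse"],
    "drops citation": ["Data Contracts", "Retrieval Traceability"],
    "different grounding": ["Retrieval Playbook"],
    "overwrites memory": ["Memory Long Context"],
}

def route(symptom: str) -> list[str]:
    """Return the Problem Map pages to open for the first matching symptom."""
    text = symptom.lower()
    for key, pages in SYMPTOM_ROUTES.items():
        if key in text:
            return pages
    return []
```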

Fix in 60 seconds

  1. Three-paraphrase probe

    • Ask the same question three ways.
    • If λ flips between paraphrases, lock snippet schema and apply BBAM variance clamp.
  2. ΔS check over chain

    • Log ΔS(question, retrieved) across 5–10 dialog turns.
    • If ΔS rises over time, re-segment and enforce citation-first prompting.
  3. Apply module
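The first two steps above can be scripted. A minimal sketch, assuming you already collect the three paraphrase answers and a per-turn ΔS trace; the flip and trend checks here are deliberately crude stand-ins for full λ observation.

```python
import numpy as np

def lambda_flips(answers: list[str]) -> bool:
    """Crude divergence signal: the three paraphrases do not agree."""
    return len(set(answers)) > 1

def ds_rising(ds_per_turn: list[float]) -> bool:
    """True if ΔS(question, retrieved) trends upward across the dialog.

    Simple heuristic: mean of the last third of turns exceeds the first third.
    """
    n = max(1, len(ds_per_turn) // 3)
    return float(np.mean(ds_per_turn[-n:])) > float(np.mean(ds_per_turn[:n]))
```

If `lambda_flips` fires, lock the snippet schema and clamp variance; if `ds_rising` fires over 5–10 turns, re-segment and enforce citation-first prompting, per the steps above.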


Copy-paste probe prompt

I uploaded TXT OS and the WFGY Problem Map.

My issue:
- same query gives different answers in long dialog
- traces: ΔS(question,retrieved)=..., λ states across 3 paraphrases

Tell me:
1) where context drift occurs,
2) the exact WFGY page to open,
3) the minimal fix to enforce convergence,
4) a reproducible test over 5 turns.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
| --- | --- | --- |
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + " |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" and the OS boots instantly |

🧭 Explore More

| Module | Description | Link |
| --- | --- | --- |
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one. WFGY Engine 2.0 is already unlocked.
