Ghost Context — Guardrails and Fix Pattern

🧭 Quick Return to Map

You are in a sub-page of MemoryLongContext.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

When personas, roles, or long sessions change, old buffers may linger.
These stale fragments contaminate new answers, creating phantom references or role bleed.


Symptoms

  • Answer contains facts or tone from a previous persona even after reset.
  • Citations look valid but reference sections from a past session.
  • A new task starts, yet responses drift back to the old task context.
  • Model insists on constraints that were valid in an earlier role but not now.
  • Answers feel “haunted” by old memory traces.

Root causes

  • Hidden buffers not cleared after a system or role switch.
  • State variables (persona, role, policy) not reset between sessions.
  • Token reuse from stale cache layers.
  • ΔS stays abnormally low even when the task has switched domains.
  • λ_observe shows convergence to an irrelevant anchor.

Fix in 60 seconds

  1. Stamp and reset state
    Require {mem_rev, mem_hash, persona_id} on each turn.
    If persona_id differs from the previous turn, force a buffer clear.

  2. Fence the prompt schema
    Assemble prompts as {system | persona | constraints | snippets | answer}.
    Do not let persona or role tokens bleed into snippet blocks.

  3. Drop stale buffers
    When the persona changes, zero all prior hidden state.
    Require fresh snippets for the new context.
    (Steps 1–3 are sketched in the first code block after this list.)

  4. Probe for contamination
    • Compute ΔS(new question, retrieved snippet).
    • If ΔS ≤ 0.30 but the snippet belongs to a past persona, a ghost is detected.
    • Trigger a hard reset and request new anchors.

  5. Audit the joins
    Compare ΔS across old and new persona snippets.
    Require ΔS(new persona, old snippet) ≥ 0.65 before allowing reuse.
    (Steps 4–5 are sketched in the second code block after this list.)
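
A minimal sketch of steps 1–3 in Python, assuming a single in-process session object. The field names {mem_rev, mem_hash, persona_id} come from this page; `TurnState`, `stamp_and_reset`, `assemble_prompt`, and the bracketed fence labels are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TurnState:
    """Per-turn stamp. Field names follow this page; the structure is a sketch."""
    mem_rev: int
    mem_hash: str
    persona_id: str
    buffers: dict = field(default_factory=dict)  # hidden state to purge on switch

def stamp_and_reset(state: TurnState, incoming_persona: str) -> TurnState:
    # Step 1 + 3: if persona_id differs from the previous turn,
    # zero all prior hidden state and bump the revision.
    if incoming_persona != state.persona_id:
        state.buffers.clear()
        state.persona_id = incoming_persona
        state.mem_rev += 1  # the new revision marks the reset
    return state

def assemble_prompt(system: str, persona: str, constraints: str,
                    snippets: list[str], question: str) -> str:
    # Step 2: fenced schema {system | persona | constraints | snippets | answer}.
    # Each block is labeled so persona tokens cannot bleed into snippet blocks.
    blocks = [
        f"[SYSTEM]\n{system}",
        f"[PERSONA]\n{persona}",
        f"[CONSTRAINTS]\n{constraints}",
        "[SNIPPETS]\n" + "\n---\n".join(snippets),
        f"[QUESTION]\n{question}",
        "[ANSWER]",
    ]
    return "\n\n".join(blocks)
```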
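
A minimal sketch of steps 4–5, assuming ΔS is computed as 1 minus cosine similarity over embedding vectors (this page does not fix the formula) and that each snippet carries the persona_id it was retrieved under. `find_ghosts` and its parameters are hypothetical.

```python
import numpy as np

def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    # Assumption: ΔS as 1 minus cosine similarity between embeddings.
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cos

def find_ghosts(question_vec: np.ndarray,
                new_persona_vec: np.ndarray,
                current_persona: str,
                snippets: list[tuple[str, np.ndarray]]) -> list[int]:
    """Return indices of snippets flagged as ghost context."""
    ghosts = []
    for i, (snippet_persona, vec) in enumerate(snippets):
        if snippet_persona == current_persona:
            continue  # same-persona snippets are not ghosts
        # Step 4: semantically close to the new question (ΔS ≤ 0.30)
        # yet stamped with a past persona means a ghost is detected.
        if delta_s(question_vec, vec) <= 0.30:
            ghosts.append(i)
            continue
        # Step 5: an old-persona snippet may be reused only when
        # ΔS(new persona, old snippet) ≥ 0.65.
        if delta_s(new_persona_vec, vec) < 0.65:
            ghosts.append(i)
    return ghosts
```

Any index returned here should trigger a hard reset and re-retrieval with fresh anchors, per step 4.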


Copy-paste diagnostic prompt

You have TXTOS and the WFGY Problem Map.

Task: Detect and purge ghost context.

Steps:
1. Print {mem_rev, mem_hash, persona_id}.
2. Verify that current persona_id matches task scope.
3. If snippets cite a different persona_id, mark as ghost.
4. Require re-retrieval for ghosted snippets.
5. Report:
   - ΔS(new question, retrieved)
   - ΔS(new persona vs old snippets)
   - λ states
   - Reset actions taken

Acceptance targets

  • ΔS(new question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the new target section
  • λ remains convergent across three paraphrases
  • No snippet contamination from old persona or task scope
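
If you want these targets as an executable gate, here is a minimal check, assuming λ is reported as a per-paraphrase label such as "convergent"; `meets_targets` and its parameter names are hypothetical.

```python
def meets_targets(ds_retrieved: float,
                  coverage: float,
                  lambda_states: list[str],
                  contaminated: bool) -> bool:
    # λ is modeled as one label per paraphrase; requiring three
    # convergent labels mirrors "three paraphrases" above.
    return (ds_retrieved <= 0.45
            and coverage >= 0.70
            and len(lambda_states) >= 3
            and all(s == "convergent" for s in lambda_states)
            and not contaminated)
```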

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow  
