
Memory & Long-Context — Global Fix Map

Keep threads coherent across long windows and session restarts.
Detect and repair entropy melt, boundary drift, and state desync.

What this page is

  • A compact checklist for long contexts and multi-session memory
  • Copyable guards to stop drift and collapse before they spread
  • How to measure stability with ΔS and λ_observe

When to use

  • Dialogs grow past 50k to 100k tokens and answers degrade
  • Facts flip after tab refresh or model switch
  • Citations look right yet reasoning goes flat or chaotic
  • OCR transcripts look fine but capitalization and spacing drift
  • Multi day support threads lose task state or rewrite history

Open these first


Common failure patterns

  • Entropy melt: attention variance climbs with length and the model smooths meaning
  • Boundary leak: chunks merge across section joins and citations shift by a few lines
  • State fork: two tabs or agents hold different memory revisions and answers flip
  • Ghost context: stale buffers linger after a role or persona change and contaminate later steps
  • OCR jitter: mixed spacing or width variants create false token differences

Fix in 60 seconds

  1. Stamp and fence state

    • At turn start set mem_rev, mem_hash, task_id
    • Forbid writes if client stamps do not match the server record
  2. Shard the window

    • Assemble prompts as {system | task | constraints | snippets | answer}
    • Split snippets by section and forbid cross-section reuse
  3. Normalize inputs

    • Normalize to Unicode NFC, strip zero-width characters, unify full- and half-width forms
    • Drop OCR lines below the confidence threshold
  4. Stabilize attention

    • Apply BBAM to clamp attention variance
    • If collapse is detected, use BBCR to bridge and re-anchor
  5. Probe the joins

    • Measure ΔS across adjacent chunks and keep each join ≤ 0.50
    • Plot ΔS(question, retrieved) vs k and expect a downward curve after the fix
  6. Trace or stop

    • Require cite then answer
    • If a claim has no snippet id stop and ask for the exact citation
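Steps 1 and 3 above can be sketched in a few lines. This is a minimal sketch, not the Problem Map's implementation: `mem_rev`, `mem_hash`, and `task_id` come from this page, while the dict-backed store, function names, and hashing scheme are illustrative assumptions.

```python
import hashlib
import unicodedata

# Common zero-width code points that create false token differences.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def normalize(text: str) -> str:
    """Step 3: strip zero-width characters, then NFKC to unify
    full- and half-width variants (NFKC also implies NFC)."""
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)
    return unicodedata.normalize("NFKC", text)

def stamp(state: dict) -> dict:
    """Step 1: stamp {mem_rev, mem_hash, task_id} at turn start."""
    payload = repr(sorted(state.get("memory", {}).items())).encode()
    return {
        "task_id": state.get("task_id", "t-0"),
        "mem_rev": state.get("mem_rev", 0),
        "mem_hash": hashlib.sha256(payload).hexdigest()[:12],
    }

def guarded_write(server: dict, client_stamp: dict, update: dict) -> bool:
    """Forbid writes when the client stamp does not match the server record."""
    if stamp(server) != client_stamp:
        return False  # stale revision: fence the write, ask client to re-sync
    server["memory"].update(update)
    server["mem_rev"] = server.get("mem_rev", 0) + 1
    return True
```

Because every successful write bumps `mem_rev`, a second tab holding the old stamp is rejected instead of silently forking state.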

Copy-paste prompt


```
You have TXT OS and the WFGY Problem Map.

Goal
Stabilize memory across long windows and across sessions without losing traceability.

Protocol

1. Print {mem_rev, mem_hash, task_id}. If missing, set defaults and echo them.
2. Build a Snippet Table with columns {section_id | start_line | end_line | citation}.
3. Guardrails

   * cite then answer
   * forbid cross-section reuse
   * if a claim lacks a snippet id, stop and request it
4. Collapse control

   * if attention variance rises, apply BBAM
   * if logic stalls, apply BBCR and show the bridge node
5. Metrics

   * report ΔS(question, retrieved)
   * report ΔS across each join
   * report λ_observe at retrieval, assembly, reasoning

Input

* question
* snippets with ids and line ranges
* previous {mem_rev, mem_hash, task_id} if any

Output

* header {mem_rev, mem_hash, task_id}
* Snippet Table
* Bridge Check
* Final Answer with inline citations
* ΔS and λ states
```


Minimal checklist

  • State stamped with mem_rev and mem_hash at every turn
  • Prompt schema locked and section fences enforced
  • Unicode normalized and OCR noise gated
  • BBAM enabled and BBCR available on collapse
  • ΔS at each join ≤ 0.50 and overall ΔS(question, retrieved) ≤ 0.45
  • Cite then answer and no orphan claims
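The "cite then answer, no orphan claims" rule can be enforced mechanically before an answer ships. A minimal sketch under assumptions: the `[chunk-3]` citation format and the function name are illustrative, not specified by the Problem Map.

```python
import re

def orphan_claims(answer: str, snippet_ids: set) -> list:
    """Return sentences in the answer that cite no known snippet id.

    Assumes inline citations look like [chunk-3]. Any sentence
    without at least one id from the Snippet Table is an orphan
    and should trigger stop-and-ask instead of a final answer.
    """
    orphans = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        cited = re.findall(r"\[([^\]]+)\]", sentence)
        if sentence and not any(i in snippet_ids for i in cited):
            orphans.append(sentence)
    return orphans
```

If `orphan_claims` returns anything, the turn stops and requests the exact citation rather than emitting the claim.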

Acceptance targets

  • Retrieval coverage ≥ 0.70 to the intended section
  • ΔS(question, retrieved) ≤ 0.45 and joins ≤ 0.50
  • λ remains convergent across three paraphrases
  • No state fork across tabs or agents for the same task_id
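ΔS is WFGY's semantic-stress metric; a common proxy, assumed here, is 1 − cosine similarity between embedding vectors. Only the thresholds (0.45 overall, 0.50 per join) come from this page; the vectors and function names are placeholders for your embedder's output.

```python
import math

def delta_s(u, v):
    """ΔS proxy: 1 - cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def accept(q_vec, retrieved_vec, join_pairs):
    """Acceptance gate from this page: ΔS(question, retrieved) <= 0.45
    and ΔS across every adjacent-chunk join <= 0.50."""
    if delta_s(q_vec, retrieved_vec) > 0.45:
        return False
    return all(delta_s(a, b) <= 0.50 for a, b in join_pairs)
```

Run the gate on three paraphrases of the question; if any paraphrase flips the result, λ is divergent and the retrieval step needs repair before reasoning continues.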

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” and the OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow