> Mirror of https://github.com/onestardao/WFGY.git (synced 2026-04-28 11:40:07 +00:00)
# Memory & Long-Context — Global Fix Map
Keep threads coherent across long windows and session restarts.
Detect and repair entropy melt, boundary drift, and state desync.
## What this page is
- A compact checklist for long contexts and multi-session memory
- Copyable guards to stop drift and collapse before they spread
- How to measure stability with ΔS and λ_observe
## When to use

- Dialogs grow past 50k to 100k tokens and answers degrade
- Facts flip after a tab refresh or model switch
- Citations look right yet reasoning goes flat or chaotic
- OCR transcripts look fine but capitalization and spacing drift
- Multi-day support threads lose task state or rewrite history
## Open these first

- Session continuity and state fences: Memory Coherence
- Long-window drift and attention melt: Entropy Collapse
- Long reasoning-chain drift: Context Drift
- Cross-tab and cache hazards: Memory Desync Pattern
- Trace schema and audit trail: Retrieval Traceability
- Snippet and citation schema: Data Contracts
- Chunk stability at joins: Chunking Checklist
- OCR quality and normalization: OCR Parsing Checklist
## Common failure patterns

- Entropy melt: attention variance climbs with window length and the model smooths meaning away
- Boundary leak: chunks merge across section joins and citations shift by a few lines
- State fork: two tabs or agents hold different memory revisions and answers flip between them
- Ghost context: stale buffers linger after a role or persona change and contaminate later steps
- OCR jitter: mixed spacing or width variants create false token differences
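The OCR-jitter failures above come down to strings that look identical on screen but compare unequal as tokens. A minimal normalization pass in plain Python (the function name and whitespace policy are illustrative; note that NFKC covers both NFC and full/half-width folding in one call):

```python
import unicodedata

def normalize_for_memory(line: str) -> str:
    """Fold width variants, strip zero-width characters, and collapse
    whitespace so visually identical strings compare equal."""
    # NFKC folds full-width forms (e.g. 'ＷＦＧＹ') to their ASCII
    # equivalents; NFC alone would leave them as distinct tokens.
    text = unicodedata.normalize("NFKC", line)
    # Strip zero-width characters that OCR and copy-paste introduce.
    zero_width = {"\u200b", "\u200c", "\u200d", "\ufeff"}
    text = "".join(ch for ch in text if ch not in zero_width)
    # Collapse runs of whitespace to a single space.
    return " ".join(text.split())
```

Two OCR variants of the same phrase, such as `"ＷＦＧＹ\u200b engine"` and `"WFGY engine"`, now normalize to the same string instead of producing a false token difference.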
## Fix in 60 seconds

1. Stamp and fence state
   - At turn start set `mem_rev`, `mem_hash`, `task_id`
   - Forbid writes if client stamps do not match the server record
2. Shard the window
   - Assemble prompts as `{system | task | constraints | snippets | answer}`
   - Split snippets by section and forbid cross-section reuse
3. Normalize inputs
   - Unicode NFC, strip zero-width characters, unify full-width and half-width forms
   - Drop OCR lines below the confidence threshold
4. Stabilize attention
   - Apply BBAM to clamp variance
   - If collapse is detected, use BBCR to bridge and re-anchor
5. Probe the joins
   - Measure ΔS across adjacent chunks and keep each join ≤ 0.50
   - Plot ΔS(question, retrieved) vs k and expect a downward curve after the fix
6. Trace or stop
   - Require cite-then-answer
   - If a claim has no snippet id, stop and ask for the exact citation
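The stamp-and-fence step can be enforced server-side as a compare-and-set write. A minimal sketch under assumed names (`MemoryStore`, the sha256 stamp, and the record layout are illustrative choices, not part of WFGY):

```python
import hashlib

class StaleStampError(Exception):
    """Raised when a client writes against an outdated memory revision."""

class MemoryStore:
    """Minimal sketch of a fenced memory record keyed by task_id."""

    def __init__(self):
        # task_id -> {"rev": int, "hash": str, "state": dict}
        self._records = {}

    @staticmethod
    def _hash(state: dict) -> str:
        # Short content hash over a canonical ordering of the state.
        return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()[:12]

    def stamp(self, task_id: str) -> dict:
        """Return {mem_rev, mem_hash, task_id} to print at turn start."""
        rec = self._records.setdefault(
            task_id, {"rev": 0, "hash": self._hash({}), "state": {}}
        )
        return {"mem_rev": rec["rev"], "mem_hash": rec["hash"], "task_id": task_id}

    def write(self, task_id: str, client_rev: int, client_hash: str, update: dict):
        """Forbid the write unless the client's stamps match the server record."""
        rec = self._records[task_id]
        if (client_rev, client_hash) != (rec["rev"], rec["hash"]):
            raise StaleStampError(
                f"stale stamp for {task_id}: rev {client_rev} != {rec['rev']}"
            )
        rec["state"].update(update)
        rec["rev"] += 1
        rec["hash"] = self._hash(rec["state"])
```

With this guard, a second tab holding an old stamp fails loudly instead of silently forking state.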
## Copy paste prompt

```txt
You have TXT OS and the WFGY Problem Map.

Goal:
Stabilize memory across long windows and across sessions without losing traceability.

Protocol:
1. Print {mem_rev, mem_hash, task_id}. If missing, set defaults and echo them.
2. Build a Snippet Table with columns {section_id | start_line | end_line | citation}.
3. Guardrails:
   - cite then answer
   - forbid cross-section reuse
   - if a claim lacks a snippet id, stop and request it
4. Collapse control:
   - if attention variance rises, apply BBAM
   - if logic stalls, apply BBCR and show the bridge node
5. Metrics:
   - report ΔS(question, retrieved)
   - report ΔS across each join
   - report λ_observe at retrieval, assembly, and reasoning

Input:
- question
- snippets with ids and line ranges
- previous {mem_rev, mem_hash, task_id} if any

Output:
- header {mem_rev, mem_hash, task_id}
- Snippet Table
- Bridge Check
- Final Answer with inline citations
- ΔS and λ states
```
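For the ΔS metrics in the protocol, here is a toy join probe that treats ΔS as 1 minus cosine similarity over token counts. This is a stand-in assumption, not the WFGY definition; real pipelines would use sentence embeddings, so the numbers are only a shape check:

```python
import math
from collections import Counter

def delta_s(a: str, b: str) -> float:
    """ΔS proxy: 1 - cosine similarity of token-count vectors.
    0.0 means identical, 1.0 means no shared tokens."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    if na == 0 or nb == 0:
        return 1.0
    # Clamp so floating-point noise never reports a negative distance.
    return max(0.0, 1.0 - dot / (na * nb))

def probe_joins(chunks: list[str], threshold: float = 0.50) -> list[int]:
    """Return indices of adjacent-chunk joins whose ΔS exceeds the target."""
    return [i for i in range(len(chunks) - 1)
            if delta_s(chunks[i], chunks[i + 1]) > threshold]
```

A non-empty result from `probe_joins` flags a boundary where a chunk join cuts across unrelated sections, which is exactly where citations start shifting by a few lines.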
## Minimal checklist

- State stamped with `mem_rev` and `mem_hash` at every turn
- Prompt schema locked and section fences enforced
- Unicode normalized and OCR noise gated
- BBAM enabled and BBCR available on collapse
- ΔS at each join ≤ 0.50 and overall ΔS(question, retrieved) ≤ 0.45
- Cite then answer and no orphan claims
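The "no orphan claims" item can be checked mechanically. This sketch assumes claims are sentences tagged with bracketed snippet ids like `[S1]`, a convention chosen here purely for illustration:

```python
import re

# Assumed convention: each claim sentence carries a snippet id like [S3].
CITATION = re.compile(r"\[S\d+\]")

def orphan_claims(answer: str, snippet_ids: set[str]) -> list[str]:
    """Return sentences that cite nothing or cite an unknown snippet id."""
    orphans = []
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        cites = CITATION.findall(sentence)
        if not cites or any(c not in snippet_ids for c in cites):
            orphans.append(sentence)
    return orphans
```

If `orphan_claims` returns anything, the turn should stop and request the exact citation instead of answering.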
## Acceptance targets
- Retrieval coverage ≥ 0.70 to the intended section
- ΔS(question, retrieved) ≤ 0.45 and joins ≤ 0.50
- λ remains convergent across three paraphrases
- No state fork across tabs or agents for the same `task_id`
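The targets above can be wired into a single gate at the end of a run. The report schema below is an assumption for illustration, not a WFGY interface:

```python
def accept(report: dict) -> list[str]:
    """Check a run report against the acceptance targets; empty list = pass."""
    failures = []
    if report["coverage"] < 0.70:
        failures.append(f"coverage {report['coverage']:.2f} < 0.70")
    if report["delta_s_retrieved"] > 0.45:
        failures.append("ΔS(question, retrieved) exceeds 0.45")
    if any(j > 0.50 for j in report["join_delta_s"]):
        failures.append("a chunk join exceeds ΔS 0.50")    # joins ≤ 0.50 target
    if not all(s == "convergent" for s in report["lambda_states"]):
        failures.append("λ diverged on a paraphrase")      # three-paraphrase check
    return failures
```

Running the gate on every turn makes regressions visible as a named failure rather than a slow quality drift.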
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: see the Hall of Fame — the engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.