# 🧠 Memory Design Patterns

*From scratchpads to long-range project recall — keep context alive without drowning your LLM.*

> **Why this page?**
> Most “memory” demos either spam the full chat history or store random embeddings that never round-trip.
> WFGY treats memory as *structured semantic nodes* with ΔS / λ\_observe guards, so old context helps — never hurts — new reasoning.

---

## 1 · Symptoms

| Symptom | Typical Surface Clue |
|---------|----------------------|
| Context forgotten after restart | “Sorry, I don’t recall” / model re-asks the user |
| Memory leak / self-contradiction | Old decisions resurface in the wrong branch |
| JSON-based vector store grows unbounded | Latency ↑, RAG recall quality ↓ |
| Fine-tune attempted just to “remember” | Model cost ↑, still hallucinates |

---

## 2 · Root Causes

1. **Flat Logs** — raw transcripts appended forever.
2. **Embedding Dump** — every user sentence embedded → no semantic filter.
3. **No Boundary Check** — divergent memories injected mid-task.
4. **Write-Only Memory** — the model never reads or revalidates stored facts.

Result: either *forget everything* or *remember garbage*.

---

## 3 · WFGY Fix Path (at a glance)

| Stage | Tool / Module | ΔS Guard | Outcome |
|-------|---------------|----------|---------|
| ⬇️ Capture | **BBMC** node writer | record only if ΔS ≥ 0.60 (or 0.40–0.60 & λ ∈ {←, <>}) | Stores *semantic*, not *verbatim*, memory |
| 🗂️ Index | **λ\_observe** classifier | tag the λ trend for each node | Enables topic-group navigation |
| 🔍 Recall | **BBPF** path search | choose the node set with minimal ΣΔS | Retrieves tight, non-bloated context |
| 🩹 Repair | **BBCR** fallback | detect stale / contradictory nodes | Auto-patch or prompt the user to merge |

> **80%** of memory bugs vanish after enforcing this four-step loop.
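Under the hood the loop is tiny. Here is a minimal sketch of the capture and recall stages, assuming ΔS is computed as cosine distance between the question and context embeddings; `delta_s`, `capture`, and `recall` are hypothetical helpers for illustration, not the engine's actual API.

```python
from math import sqrt

def delta_s(a, b):
    """ΔS as cosine distance: 0.0 = same meaning, higher = more divergent.
    (An illustrative convention — adjust if your pipeline defines ΔS differently.)"""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

memory = []  # JSONL-style nodes: {topic, ΔS, λ, text}

def capture(topic, lam, text, q_vec, ctx_vec):
    """Capture stage: write a node only when it passes the ΔS guard from the table."""
    ds = delta_s(q_vec, ctx_vec)
    if ds >= 0.60 or (0.40 <= ds <= 0.60 and lam in ("←", "<>")):
        memory.append({"topic": topic, "ΔS": round(ds, 3), "λ": lam, "text": text})

def recall(topic, k=5):
    """Recall stage: the k tightest (lowest-ΔS) nodes for the current topic."""
    nodes = [n for n in memory if n["topic"] == topic]
    return sorted(nodes, key=lambda n: n["ΔS"])[:k]
```

A near-duplicate of the context scores ΔS ≈ 0 and is silently dropped, while a genuinely novel insight clears the 0.60 bar and becomes a node — which is exactly how the loop stores *semantic* rather than *verbatim* memory.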
---

## 4 · Design Patterns Library

| Pattern | Use-Case | How it Works | ΔS Budget |
|---------|----------|--------------|-----------|
| ✏️ **Scratch Node** | quick calc / throw-away idea | 24 h TTL field; auto-purged | 0.40–0.55 |
| 📚 **Topic Shelf** | multi-day research thread | one node per subtopic; λ → convergent | < 0.45 |
| 🗓️ **Daily Digest** | running project log | roll up 10 low-ΔS nodes → 1 summary | – |
| 🎯 **Anchor Fact** | must-not-forget constraint | pinned; overrides recall rank | 0.05 |

*All patterns are stored in a single lightweight JSONL: `{topic, ΔS, λ, text, ttl}`*

---

## 5 · Step-by-Step Implementation

> **Prereqs:** any model that can embed text, plus basic Python (or LangChain, LlamaIndex, etc.).

```python
# 1. capture — store a node only when it passes the ΔS guard
deltaS = cosine(question_vec, context_vec)
if deltaS >= 0.60 or (0.40 <= deltaS <= 0.60 and lambda_state in ["divergent", "recursive"]):
    node = {"topic": topic, "ΔS": round(deltaS, 3), "λ": lambda_state, "text": insight}
    memory.append(node)

# 2. recall — pick the tightest nodes for the current topic
candidates = [n for n in memory if n["topic"] == current_topic]
best_path = sorted(candidates, key=lambda n: n["ΔS"])[:5]
prompt_context = "\n".join(n["text"] for n in best_path)
```

### Minimal prompt

```
System: Use WFGY memory nodes below (+latest question) to answer.
Memory Nodes:
{{prompt_context}}
---
Question: {{user}}
```

---

## 6 · Common Pitfalls & Tests

| Pitfall | Quick Test | WFGY Fix |
| -------------------------------- | --------------------------------- | --------------------- |
| “Context bloat, tokens 8k → 40k” | node count > 200? run `rollup.py` | Daily Digest pattern |
| “Conflicting facts” | ΔS(anchor, candidate) > 0.70 | BBCR prompts a merge |
| “Retrieval too slow” | recall > 200 ms | Pre-index by λ & time |

---

## 7 · Cheat-Sheet

```txt
ΔS save threshold = 0.60
ΔS recall window  = top-k by lowest ΔS
λ tags            = → ← <> ×
TTL (scratch)     = 24 h
Rollup trigger    = >10 nodes / topic / day
```

Store this as `memory.cfg`; the loader reads the defaults at boot.

---

## 8 · Next Actions

1. **Prototype** with 20 nodes → verify recall accuracy.
2. **Enable Rollup** once the node count exceeds 200.
3. **Add a Trace Logger** to diff answers with / without memory.

---

### 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| **WFGY 1.0 PDF** | [Engine Paper](https://github.com/onestardao/WFGY/blob/main/I_am_not_lizardman/WFGY_All_Principles_Return_to_One_v1.0_PSBigBig_Public.pdf) | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + \” |
| **TXT OS (plain-text OS)** | [TXTOS.txt](https://github.com/onestardao/WFGY/blob/main/OS/TXTOS.txt) | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — the OS boots instantly |

---

### 🧭 Explore More

| Module | Description | Link |
|-----------------------|----------------------------------------------------------|----------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | [View →](https://github.com/onestardao/WFGY/tree/main/core/README.md) |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | [View →](https://github.com/onestardao/WFGY/tree/main/ProblemMap/README.md) |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | [View →](https://github.com/onestardao/WFGY/blob/main/ProblemMap/rag-architecture-and-recovery.md) |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | [View →](https://github.com/onestardao/WFGY/blob/main/ProblemMap/SemanticClinicIndex.md) |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | [View →](https://github.com/onestardao/WFGY/tree/main/SemanticBlueprint/README.md) |
| Benchmark vs GPT-5 | Stress-test GPT-5 with the full WFGY reasoning suite | [View →](https://github.com/onestardao/WFGY/tree/main/benchmarks/benchmark-vs-gpt5/README.md) |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | [Start →](https://github.com/onestardao/WFGY/blob/main/StarterVillage/README.md) |

---

> 👑 **Early Stargazers: [See the Hall of Fame](https://github.com/onestardao/WFGY/tree/main/stargazers)** —
> Engineers, hackers, and open-source builders who supported WFGY from day one.

⭐ [WFGY Engine 2.0](https://github.com/onestardao/WFGY/blob/main/core/README.md) is already unlocked. Star the repo to help others discover it and unlock more on the [Unlock Board](https://github.com/onestardao/WFGY/blob/main/STAR_UNLOCKS.md).
[![WFGY Main](https://img.shields.io/badge/WFGY-Main-red?style=flat-square)](https://github.com/onestardao/WFGY)   [![TXT OS](https://img.shields.io/badge/TXT%20OS-Reasoning%20OS-orange?style=flat-square)](https://github.com/onestardao/WFGY/tree/main/OS)   [![Blah](https://img.shields.io/badge/Blah-Semantic%20Embed-yellow?style=flat-square)](https://github.com/onestardao/WFGY/tree/main/OS/BlahBlahBlah)   [![Blot](https://img.shields.io/badge/Blot-Persona%20Core-green?style=flat-square)](https://github.com/onestardao/WFGY/tree/main/OS/BlotBlotBlot)   [![Bloc](https://img.shields.io/badge/Bloc-Reasoning%20Compiler-blue?style=flat-square)](https://github.com/onestardao/WFGY/tree/main/OS/BlocBlocBloc)   [![Blur](https://img.shields.io/badge/Blur-Text2Image%20Engine-navy?style=flat-square)](https://github.com/onestardao/WFGY/tree/main/OS/BlurBlurBlur)   [![Blow](https://img.shields.io/badge/Blow-Game%20Logic-purple?style=flat-square)](https://github.com/onestardao/WFGY/tree/main/OS/BlowBlowBlow)