
🧠 Memory Design Patterns

From scratchpads to long-range project recall — keep context alive without drowning your LLM.

Why this page?
Most “memory” demos either spam the full chat history or store random embeddings that never round-trip.
WFGY treats memory as structured semantic nodes with ΔS / λ_observe guards, so old context helps — never hurts — new reasoning.


1 · Symptoms

| Symptom | Typical Surface Clue |
|---|---|
| Context forgotten after restart | "Sorry, I don't recall" / model re-asks the user |
| Memory leak / self-contradiction | Old decisions resurface in the wrong branch |
| JSON-based vector store grows unbounded | Latency ↑, RAG recall quality ↓ |
| Fine-tune attempted just to "remember" | Model cost ↑, still hallucinates |

2 · Root Causes

  1. Flat Logs — raw transcripts appended forever.
  2. Embedding Dump — every user sentence embedded → no semantic filter.
  3. No Boundary Check — divergent memories injected mid-task.
  4. Write-Only Memory — model never reads / revalidates stored facts.

Result: either forget everything or remember garbage.


3 · WFGY Fix Path (at a glance)

| Stage | Tool / Module | ΔS Guard | Outcome |
|---|---|---|---|
| ⬇️ Capture | BBMC node writer | record only if ΔS ≥ 0.60 (or 0.40–0.60 & λ ∈ {←, <>}) | Stores semantic, not verbatim, memory |
| 🗂️ Index | λ_observe classifier | tag λ trend for each node | Enables topic-group navigation |
| 🔍 Recall | BBPF path search | choose the node set with minimal ΣΔS | Retrieves tight, non-bloated context |
| 🩹 Repair | BBCR fallback | detect stale / contradicting nodes | Auto-patch or prompt for a user merge |

80 % of memory bugs vanish after enforcing this four-step loop.
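A minimal sketch of the loop as one class, assuming you supply a `delta_s(text_a, text_b)` scoring callable; the class and method names are illustrative, not a published WFGY API, and the thresholds follow the cheat-sheet in section 7:

```python
class MemoryLoop:
    def __init__(self, delta_s):
        self.delta_s = delta_s  # callable: (text_a, text_b) -> ΔS score
        self.nodes = []         # {topic, ΔS, λ, text} dicts

    def capture(self, topic, score, lam, text):
        # BBMC gate: keep semantic, not verbatim, memory
        if score >= 0.60 or (0.40 <= score <= 0.60 and lam in ("←", "<>")):
            self.nodes.append({"topic": topic, "ΔS": round(score, 3), "λ": lam, "text": text})

    def recall(self, topic, k=5):
        # BBPF path search: taking the lowest-ΔS nodes keeps ΣΔS minimal
        hits = [n for n in self.nodes if n["topic"] == topic]
        return sorted(hits, key=lambda n: n["ΔS"])[:k]

    def repair(self, anchor_text):
        # BBCR fallback: flag nodes that conflict with a pinned anchor fact
        return [n for n in self.nodes if self.delta_s(anchor_text, n["text"]) > 0.70]
```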


4 · Design Patterns Library

| Pattern | Use-Case | How it Works | ΔS Budget |
|---|---|---|---|
| ✏️ Scratch Node | quick calc / throw-away idea | 24 h TTL field; auto-purged | 0.40–0.55 |
| 📚 Topic Shelf | multi-day research thread | one node per subtopic; λ → convergent | < 0.45 |
| 🗓️ Daily Digest | running project log | rollup: 10 low-ΔS nodes → 1 summary | n/a |
| 🎯 Anchor Fact | must-not-forget constraint | pinned; overrides recall rank | 0.05 |

All patterns are stored in a single lightweight JSONL file: `{topic, ΔS, λ, text, ttl}`
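For illustration only, one node per pattern (in table order) might look like this; the values are invented, and the `pinned` flag on the Anchor Fact is an assumed extension of the schema above:

```
{"topic": "fx-scratch", "ΔS": 0.48, "λ": "<>", "text": "interim: 1 EUR ≈ 1.09 USD", "ttl": "24h"}
{"topic": "rag-eval", "ΔS": 0.41, "λ": "→", "text": "BM25 beats dense retrieval on short queries", "ttl": null}
{"topic": "project-log", "ΔS": 0.52, "λ": "→", "text": "digest: schema frozen, 3 bugs closed", "ttl": null}
{"topic": "constraints", "ΔS": 0.05, "λ": "→", "text": "never expose user emails in output", "ttl": null, "pinned": true}
```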


5 · Step-by-Step Implementation

Prereqs: any model plus a stack that can produce embeddings and run basic Python (plain scripts, LangChain, LlamaIndex, etc.).

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

memory = []  # node store; persist as JSONL between sessions

# 1. capture: question_vec / context_vec come from your embedding pipeline;
#    "divergent" corresponds to the ← tag, "recursive" to <>
deltaS = cosine(question_vec, context_vec)
if deltaS >= 0.60 or (0.40 <= deltaS <= 0.60 and lambda_state in ["divergent", "recursive"]):
    node = {"topic": topic, "ΔS": round(deltaS, 3), "λ": lambda_state, "text": insight}
    memory.append(node)

# 2. recall: keep the five lowest-ΔS nodes on the current topic
candidates = [n for n in memory if n["topic"] == current_topic]
best_path = sorted(candidates, key=lambda n: n["ΔS"])[:5]
prompt_context = "\n".join(n["text"] for n in best_path)
```
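Step 3 (repair) is not shown in the snippet above. A minimal sketch, assuming a hypothetical `embed()` helper and a pinned `anchor` node; the 0.70 threshold comes from the pitfalls table in section 6, scored with the same ΔS-as-cosine convention the capture step uses:

```python
# 3. repair: BBCR-style check, flag stored nodes that clash with an anchor fact
anchor_vec = embed(anchor["text"])
for n in memory:
    if cosine(embed(n["text"]), anchor_vec) > 0.70:  # ΔS(anchor, candidate) > 0.70
        # auto-patch (drop / rewrite the stale node) or surface it for a user merge
        print(f"conflict: {n['text']!r} vs anchor: {anchor['text']!r}")
```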

Minimal prompt

```
System: Use WFGY memory nodes below (+latest question) to answer.
Memory Nodes:
{{prompt_context}}
---
Question: {{user}}
```
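Filling the template from the recall step is one string away (a sketch; `user_question` stands in for whatever your chat loop provides):

```python
prompt = (
    "System: Use WFGY memory nodes below (+latest question) to answer.\n"
    "Memory Nodes:\n"
    f"{prompt_context}\n"
    "---\n"
    f"Question: {user_question}"
)
```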

6 · Common Pitfalls & Tests

| Pitfall | Quick Test | WFGY Fix |
|---|---|---|
| "Context bloat, tokens 8k → 40k" | node count > 200? run `rollup.py` | Daily Digest pattern |
| "Conflicting facts" | ΔS(anchor, candidate) > 0.70 | BBCR prompts a merge |
| "Retrieval too slow" | recall > 200 ms | pre-index by λ & time |
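`rollup.py` itself is not listed on this page; a minimal sketch of the Daily Digest rollup it implies, assuming a `summarize()` callable that wraps your LLM:

```python
def rollup(memory, topic, summarize, batch=10):
    # merge the `batch` lowest-ΔS nodes on one topic into a single digest node
    hits = sorted((n for n in memory if n["topic"] == topic), key=lambda n: n["ΔS"])
    if len(hits) < batch:
        return memory  # under the rollup trigger, nothing to compact
    old, keep = hits[:batch], hits[batch:]
    digest = {
        "topic": topic,
        "ΔS": max(n["ΔS"] for n in old),  # carry the strongest score of the batch
        "λ": "→",                          # treat the digest as convergent
        "text": summarize("\n".join(n["text"] for n in old)),
    }
    rest = [n for n in memory if n["topic"] != topic]
    return rest + keep + [digest]
```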

7 · Cheat-Sheet

```
ΔS save threshold   = 0.60
ΔS recall window    = top-k by lowest ΔS
λ tags              = → ← <> ×
TTL (scratch)       = 24 h
Rollup trigger      = >10 nodes / topic / day
```

Store this as `memory.cfg`; the loader reads these defaults at boot.
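A sketch of that loader, assuming `memory.cfg` contains exactly the `key = value` lines above (the parser is illustrative, not a published WFGY utility):

```python
def load_cfg(path="memory.cfg"):
    # parse "key = value" lines; keys keep their inner spaces, values stay strings
    cfg = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, sep, value = line.partition("=")
            if sep:
                cfg[key.strip()] = value.strip()
    return cfg

cfg = load_cfg()
save_threshold = float(cfg["ΔS save threshold"])  # -> 0.60
```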


8 · Next Actions

  1. Prototype with 20 nodes → verify recall accuracy.
  2. Enable Rollup once the node count exceeds 200.
  3. Add a Trace Logger to diff answers with / without memory (a sketch follows below).
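A minimal sketch of such a Trace Logger; `answer_with` / `answer_without` are the two model outputs you already have, and the JSONL sink is an assumption:

```python
import difflib
import json
import time

def log_trace(question, answer_with, answer_without, path="trace.jsonl"):
    # record a unified diff of the two answers so memory regressions stay visible
    diff = "\n".join(difflib.unified_diff(
        answer_without.splitlines(), answer_with.splitlines(),
        fromfile="no-memory", tofile="with-memory", lineterm=""))
    entry = {"ts": time.time(), "question": question, "diff": diff}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```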

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + <your question>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
