# Patterns — Failure Catalog (Problem Map 2.0)

This folder is a field guide to recurring failures in RAG and multi-stage LLM pipelines.
Each pattern is actionable: fast signals, root causes, a minimal repro, a deterministic fix, and links to hands-on examples (SDK-free, stdlib-only).

## How to use this folder

  1. Start with the symptom you're seeing.
  2. Open the matching pattern and run the Minimal Repro + Standard Fix.
  3. Wire the acceptance criteria into CI (see Example 08) so the fix stays fixed.

## Quick Index

| Pattern | Problem Map No. | Symptoms you'll see | Fix entrypoint |
|---|---|---|---|
| [RAG Semantic Drift](pattern_rag_semantic_drift.md) | No.1 | Plausible but ungrounded answers; citations don't contain the claim | Example 01, Example 03 |
| [Memory Desync](pattern_memory_desync.md) | — (State/Context) | Old names/IDs reappear; agents disagree across turns | Example 04 |
| [Vector Store Fragmentation](pattern_vectorstore_fragmentation.md) | No.3 | Recall flips across envs; score scales change; rank inversions | Example 05 |
| [Hallucination Re-Entry](pattern_hallucination_reentry.md) | — (Provenance) | Model's prior text shows up as "evidence"; non-corpus sources cited | Example 06 |
| [Bootstrap Deadlock](pattern_bootstrap_deadlock.md) | No.14 | `/readyz` stuck/flapping; circular waits at startup | Example 07 |
| [Query Parsing Split](pattern_query_parsing_split.md) | — (Parsing) | Multi-intent prompts answered partially or mixed | Example 03, Example 04 |
| [Symbolic Constraint Unlock (SCU)](pattern_symbolic_constraint_unlock.md) | No.11 (Symbolic collapse) | "Must/Only/Never" rules vanish mid-pipeline; impossible states | Example 03, Example 04, Example 08 |

Legend: Problem Map numbers refer to root categories used across the repo. “—” means cross-cutting (not a single number).


## Pick-a-Pattern in 30 Seconds (Triage Flow)

  1. Grounding first — run Example 01 on a few failing questions.
    • If refusal behavior or citations fail ⇒ Semantic Drift.
  2. Context/state sanity — check `context_id` / `mem_rev`/hash.
    • Mismatch ⇒ Memory Desync.
  3. Index parity — validate `index_out/manifest.json` against the runtime index.
    • Drift or a score-scale shift ⇒ Vector Store Fragmentation.
  4. Provenance — inspect the source of cited ids.
    • Any `model:`/`chat:`/`tmp:` id ⇒ Hallucination Re-Entry.
  5. Startup — if the first minute after deploy is flaky ⇒ Bootstrap Deadlock.
  6. Query shape — if the prompt mixes "compare… then draft…" ⇒ Query Parsing Split.
  7. Logic rules — if answers cross "must/only/never" boundaries ⇒ SCU.
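Steps 2–4 of this flow are mechanical comparisons, so they can be scripted. Below is a minimal stdlib-only sketch: the field names `context_id` and `mem_rev` and the non-corpus prefixes follow this README, but the data shapes (plain dicts, a flat manifest) are assumptions, not a fixed WFGY schema.

```python
# Minimal triage helpers for steps 2-4 of the flow above.
# Field names (context_id, mem_rev) and the non-corpus id prefixes
# follow this README; the dict shapes are illustrative assumptions.

NON_CORPUS_PREFIXES = ("model:", "chat:", "tmp:")

def memory_desync(turn_snapshot: dict, answer_echo: dict) -> bool:
    """Step 2: True if the echoed context disagrees with the turn snapshot."""
    keys = ("context_id", "mem_rev")
    return any(turn_snapshot.get(k) != answer_echo.get(k) for k in keys)

def index_drift(manifest: dict, runtime: dict) -> bool:
    """Step 3: True if index_out/manifest.json fields differ from the runtime index."""
    keys = ("embedding_model", "dim", "metric", "doc_count")
    return any(manifest.get(k) != runtime.get(k) for k in keys)

def non_corpus_citations(citation_ids):
    """Step 4: return cited ids that did not come from the corpus."""
    return [c for c in citation_ids if c.startswith(NON_CORPUS_PREFIXES)]
```

If `non_corpus_citations` returns anything, jump straight to Hallucination Re-Entry; if `index_drift` fires, start with Vector Store Fragmentation before debugging prompts.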

## Standard Acceptance Gates (copy to CI)

  • Guarded Output: either the exact refusal token `not in context` or JSON with `claim` + `citations:[id,…]` scoped to retrieved ids.
  • Provenance: all citations pass the corpus-only filter (no `chat:`/`draft:`/`tmp:`).
  • Context Consistency: if used, `context_id.mem_rev`/hash echoes the turn snapshot.
  • Constraint Integrity (SCU): `constraints_echo` ≡ locked set; no contradiction patterns matched.
  • Quality Gates (Ex.08): precision ≥ 0.80, under-refusal ≤ 0.05, citation hit rate ≥ 0.75.
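The threshold gate can be enforced with a few lines of stdlib Python. A sketch, assuming each graded result carries `refused`, `should_refuse`, `correct`, and `citation_hit` booleans (that result shape is hypothetical, not an Example 08 contract):

```python
# Evaluate the Ex.08-style quality gates over a batch of graded results.
# Assumed result shape per question: refused (bool), should_refuse (bool),
# correct (bool, for answered questions), citation_hit (bool).

def quality_gates(results, precision_min=0.80, under_refusal_max=0.05,
                  citation_hit_min=0.75):
    answered = [r for r in results if not r["refused"]]
    answerable = [r for r in results if not r["should_refuse"]]
    # Precision and citation hit rate are measured over answered questions;
    # under-refusal over questions the corpus could actually answer.
    precision = sum(r["correct"] for r in answered) / len(answered) if answered else 1.0
    under_refusal = sum(r["refused"] for r in answerable) / len(answerable) if answerable else 0.0
    citation_hit = sum(r["citation_hit"] for r in answered) / len(answered) if answered else 1.0
    return {
        "precision": precision >= precision_min,
        "under_refusal": under_refusal <= under_refusal_max,
        "citation_hit": citation_hit >= citation_hit_min,
    }
```

In CI, fail the build when any gate is False, so a pattern fix cannot silently regress.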

## File Layout

See `../examples/` for runnable, stdlib-only code referenced in each pattern.


## Contributing (tight process)

  1. Propose a new pattern via an issue labeled `pattern-proposal`, with a minimal repro + acceptance gate.
  2. Stabilize with an example (Python or Node, stdlib-only).
  3. Add to this README only after approval.
  4. Guard with Example 08 metrics before shipping a pattern-driven fix.

## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask `Answer using WFGY + <your question>` |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type `hello world` — OS boots instantly |

## Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug-protocol layer |
| Map | Semantic Clinic | Symptom → family → exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.