WFGY Problem Map 1.0 — bookmark it. you’ll need it
🛡️ permanent fixes for recurring ai bugs. fix once, never again.
WFGY Problem Map = a reasoning layer for your AI.
no infra change. just load TXT OS or WFGY Core, then ask your model: “which problem map number am i hitting?”
you get the diagnosis and the exact fix steps.
16 reproducible failure modes in AI systems, each with a clear fix (MIT).
A plug-and-play semantic firewall, with no infra changes required.
Built at the semantic layer. solve it once, it stays solved.
if this page saves you time, a ⭐ helps others discover it.
thanks everyone — WFGY reached 800 stars in 70 days.
next milestone: at 1000 stars we’ll unlock Blur Blur Blur.
WFGY Core is live: a 30-line reasoning engine for recovery and resilience.
fixing rag hallucinations? it makes models reason before answering.
coming next: Semantic Surgery Room and Global Fix Map (n8n, GHL, Make, more). planned by Sep 1.
quick access
don’t worry if this looks long. with TXT OS loaded, simply ask your LLM:
“which Problem Map number fits my issue?” it will point you to the right page.
- Semantic Clinic (triage when unsure): Fix symptoms fast →
- Getting Started (practical): Guard a RAG pipeline with WFGY →
- Beginner Guide: Find and fix your first failure →
- Diagnose by symptom: Diagnose.md table →
- Visual RAG Guide: RAG Architecture & Recovery →
  high-altitude map linking symptom × stage × failure class with exact recovery paths.
- Multi-Agent chaos: Role drift & memory overwrite →
- Field reports: Real bugs and fixes from users →
- TXT OS directory: browse the OS repo →
- MVP demos: Minimal WFGY examples →
tip: if you’re new, skip scrolling — use the minimal quick-start below.
quick-start downloads (60 sec)
new here? skip the map. grab TXT OS or the WFGY PDF, boot, then ask your model:
“answer using WFGY: ” or “which Problem Map number am i hitting?”
| tool | link | 3-step setup |
|---|---|---|
| WFGY 1.0 PDF | engine paper | 1) download 2) upload to your LLM 3) ask: “answer using WFGY + ” |
| TXT OS | TXTOS.txt | 1) download 2) paste into any LLM chat 3) type “hello world” to boot |
why this matters long-term
these 16 errors are not random. they are structural weak points every ai pipeline hits eventually.
with WFGY as a semantic firewall you don’t just fix today’s issue — you shield tomorrow’s.
this isn’t just a bug list. it’s an x-ray for your pipeline, so you stop guessing and start repairing.
see the end-to-end view: RAG Architecture & Recovery
🧪 one-click sandboxes — run WFGY instantly
run lightweight diagnostics with zero install and zero api key. powered by colab.
these tools map directly to the problem classes. others are handled inside WFGY and will surface in later CLIs.
ΔS diagnostic (mvp) — measure semantic drift
detects: No.2 — Interpretation Collapse
steps: run all, paste prompt+answer, read ΔS and fix tip
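the ΔS drift idea can be sketched in plain python. this is a toy stand-in, not WFGY's actual metric: it scores drift as one minus cosine similarity over bag-of-words vectors, where a real setup would use sentence embeddings. the function name `delta_s` is hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # toy bag-of-words vector; a real pipeline would use an embedding model
    return Counter(text.lower().split())

def delta_s(prompt, answer):
    """Semantic drift as 1 - cosine similarity (toy stand-in for WFGY's ΔS).

    0.0 means the answer stays on the prompt's terms; 1.0 means no overlap.
    """
    a, b = embed(prompt), embed(answer)
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)
```

reading it the way the sandbox suggests: paste a prompt and its answer, and a score creeping toward 1.0 is your drift signal.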
λ_observe checkpoint — mid-step re-grounding
fixes: No.6 — Logic Collapse & Recovery
steps: run all, compare ΔS before/after, fallback to BBCR if needed
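the checkpoint logic above can be illustrated with a minimal sketch, assuming λ_observe means "measure drift at each reasoning step against the anchor question and flag the first step that crosses a threshold." the names `drift`, `lambda_observe`, and the 0.7 threshold are all assumptions for illustration; the BBCR fallback itself lives inside WFGY.

```python
def drift(a, b):
    # toy lexical drift: 1 - Jaccard word overlap; a real setup would use embeddings
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return (1.0 - len(sa & sb) / len(sa | sb)) if (sa | sb) else 0.0

def lambda_observe(anchor, steps, threshold=0.7):
    """Hypothetical λ_observe-style checkpoint.

    Returns the index of the first step whose drift from the anchor exceeds
    the threshold (the point to re-ground, e.g. re-inject the anchor or hand
    off to a recovery routine), or None if the chain stays grounded.
    """
    for i, step in enumerate(steps):
        if drift(anchor, step) > threshold:
            return i
    return None
```

comparing ΔS before and after re-grounding, as the steps suggest, amounts to re-running the check on the repaired chain and confirming no index is returned.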
ε_resonance — domain-level harmony
explains: No.12 — Philosophical Recursion
steps: run, tune anchors, read ε
λ_diverse — answer-set diversity
detects: No.3 — Long Reasoning Chains
steps: run, supply ≥3 answers, read score
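a minimal sketch of the diversity score, under the assumption that λ_diverse means "mean pairwise distance across the answer set." the distance function is a toy lexical stand-in (1 - Jaccard overlap); the function names are hypothetical, not WFGY's API.

```python
from itertools import combinations

def lexical_distance(a, b):
    # toy stand-in for semantic distance: 1 - Jaccard word overlap
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return (1.0 - len(sa & sb) / len(sa | sb)) if (sa | sb) else 0.0

def lambda_diverse(answers):
    """Mean pairwise distance across an answer set (hypothetical λ_diverse score).

    0.0 means all answers are identical; values near 1.0 mean the answers
    share almost nothing, which can flag a drifting multi-step chain.
    """
    if len(answers) < 3:
        raise ValueError("supply at least 3 answers, as the sandbox suggests")
    pairs = list(combinations(answers, 2))
    return sum(lexical_distance(a, b) for a, b in pairs) / len(pairs)
```

a collapsed chain tends to produce near-duplicate answers (score near 0), while healthy sampling lands somewhere in between.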
failure catalog (with fixes)
if you are unsure which one applies, ask your LLM with TXT OS loaded:
“which Problem Map number matches my trace?” it will route you.
| # | problem domain | what breaks | doc |
|---|---|---|---|
| 1 | hallucination & chunk drift | retrieval returns wrong/irrelevant content | hallucination.md |
| 2 | interpretation collapse | chunk is right, logic is wrong | retrieval-collapse.md |
| 3 | long reasoning chains | drifts across multi-step tasks | context-drift.md |
| 4 | bluffing / overconfidence | confident but unfounded answers | bluffing.md |
| 5 | semantic ≠ embedding | cosine match ≠ true meaning | embedding-vs-semantic.md |
| 6 | logic collapse & recovery | dead-ends, needs controlled reset | logic-collapse.md |
| 7 | memory breaks across sessions | lost threads, no continuity | memory-coherence.md |
| 8 | debugging is a black box | no visibility into failure path | retrieval-traceability.md |
| 9 | entropy collapse | attention melts, incoherent output | entropy-collapse.md |
| 10 | creative freeze | flat, literal outputs | creative-freeze.md |
| 11 | symbolic collapse | abstract/logical prompts break | symbolic-collapse.md |
| 12 | philosophical recursion | self-reference loops, paradox traps | philosophical-recursion.md |
| 13 | multi-agent chaos | agents overwrite or misalign logic | Multi-Agent_Problems.md |
| 14 | bootstrap ordering | services fire before deps ready | bootstrap-ordering.md |
| 15 | deployment deadlock | circular waits in infra | deployment-deadlock.md |
| 16 | pre-deploy collapse | version skew / missing secret on first call | predeploy-collapse.md |
for No.13 deep dives:
• role drift → multi-agent-chaos/role-drift.md
• cross-agent memory overwrite → multi-agent-chaos/memory-overwrite.md
minimal quick-start
- open Beginner Guide and follow the symptom checklist.
- use the Visual RAG Guide to locate the failing stage.
- open the matching page and apply the patch.
ask any LLM to apply WFGY (TXT OS makes it smoother):
i’ve uploaded TXT OS / WFGY notes.
my issue: [e.g., OCR tables look fine but answers point to wrong sections]
which WFGY modules should i apply and in what order?
status & difficulty
| # | problem | difficulty* | implementation |
|---|---|---|---|
| 1 | hallucination & chunk drift | medium | ✅ stable |
| 2 | interpretation collapse | high | ✅ stable |
| 3 | long reasoning chains | high | ✅ stable |
| 4 | bluffing / overconfidence | high | ✅ stable |
| 5 | semantic ≠ embedding | medium | ✅ stable |
| 6 | logic collapse & recovery | very high | ✅ stable |
| 7 | memory breaks across sessions | high | ✅ stable |
| 8 | debugging black box | medium | ✅ stable |
| 9 | entropy collapse | high | ✅ stable |
| 10 | creative freeze | medium | ✅ stable |
| 11 | symbolic collapse | very high | ✅ stable |
| 12 | philosophical recursion | very high | ✅ stable |
| 13 | multi-agent chaos | very high | ✅ stable |
| 14 | bootstrap ordering | medium | ✅ stable |
| 15 | deployment deadlock | high | ⚠️ beta |
| 16 | pre-deploy collapse | medium-high | ✅ stable |
*distance from default LLM behavior to a production-ready fix.
🔮 coming soon: global fix map
a universal layer above providers, agents, and infra.
Problem Map is step one. Global Fix Map expands the same reasoning-first firewall to RAG, infra boot, agents, evals, and more. same zero-install experience. launching around Sep.
contributing / support
- open an issue with a minimal repro (inputs → calls → wrong output).
- PRs for clearer docs, repros, or patches are welcome.
- project home: github.com/onestardao/WFGY
- TXT OS: browse the OS
- if this map helped you, a ⭐ helps more devs find it.
🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
👑 Early Stargazers: see the Hall of Fame, honoring the engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.