# Anthropic (Claude): Guardrails and Fix Patterns
A compact field guide to stabilize Anthropic workflows that touch RAG, tools, multi-agent plans, and long dialogs. Use these checks to localize the failure, then jump to the exact WFGY fix page.
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End to end retrieval knobs: Retrieval Playbook
- Why this snippet (traceability schema): Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long chains and entropy: Context Drift, Entropy Collapse
- Symbolic collapse and recovery: Logic Collapse
- Prompt injection and schema locks: Prompt Injection
- Multi-agent conflicts: Multi-Agent Problems
- Bootstrap and deploy issues: Bootstrap Ordering, Deployment Deadlock, Pre-deploy Collapse
- Snippet and citation schema: Data Contracts
## Core acceptance
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the target section
- λ remains convergent across three paraphrases and two seeds
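The three acceptance targets above can be checked as one gate. This is a minimal sketch, assuming ΔS is computed as 1 − cosine similarity between embedding vectors; that definition and the `"convergent"` state labels are assumptions here, so substitute your own ΔS metric and λ encoding.

```python
import math

def delta_s(vec_a, vec_b):
    """ΔS sketch: 1 - cosine similarity (assumed definition)."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / norm

def accepts(ds, coverage, lambda_states):
    """Apply the three acceptance targets from this section."""
    return (
        ds <= 0.45                       # ΔS(question, retrieved) within bound
        and coverage >= 0.70             # coverage to the target section
        and all(s == "convergent" for s in lambda_states)  # λ stable across runs
    )
```

Run the gate over three paraphrases and two seeds, so `lambda_states` holds six entries; any single flip fails the gate.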
## Fix in 60 seconds

1. **Measure ΔS.** Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
2. **Probe with λ_observe.** Vary k in retrieval (5, 10, 20). If ΔS stays flat and high, suspect a metric or index mismatch. Reorder prompt headers; if ΔS spikes, lock the schema.
3. **Apply the module.**
   - Retrieval drift → BBMC plus Data Contracts.
   - Reasoning collapse → BBCR bridge plus BBAM variance clamp, then verify with Logic Collapse.
   - Hallucination re-entry after correction → see Pattern: Hallucination Re-entry.
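The λ_observe k-sweep in step 2 can be sketched as below. `retrieve` and `delta_s` are hypothetical hooks standing in for your retriever and ΔS metric; the flatness tolerance is an illustrative choice, not a WFGY constant.

```python
def k_sweep(question, retrieve, delta_s, ks=(5, 10, 20), high=0.60, flat_tol=0.05):
    """Vary retrieval k; flat-and-high ΔS suggests metric or index mismatch."""
    scores = [delta_s(question, retrieve(question, k)) for k in ks]
    flat = max(scores) - min(scores) <= flat_tol   # ΔS barely moves with k
    return {"scores": scores, "flat_and_high": flat and min(scores) >= high}
```

If `flat_and_high` is true, changing k is not helping, so look at the embedding metric or the index rather than the prompt.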
## Typical Anthropic breakpoints and the right fix

- **System vs user role mixing.** Claude is sensitive to policy text misplaced inside user turns. Move all non-task policy to the system prompt, then re-test ΔS. Open: Retrieval Traceability, Data Contracts.
- **JSON tool protocol variance.** Tool schemas that allow free-text responses raise ΔS and create flip states. Enforce strict argument schemas and echo the schema back in every tool step. Open: Prompt Injection.
- **HyDE plus BM25 query split in reruns.** If recall is high but top-k order is unstable, lock the two-stage query and rerank deterministically. Open: Pattern: Query Parsing Split, Rerankers.
- **Tool-loop or agent-handoff stalls with partial memory writes.** Split memory namespaces and lock writes by `mem_rev` and `mem_hash`. Open: Multi-Agent Problems.
- **Safety refusal that hides the cited snippet.** Use citation-first prompting and SCU (symbolic constraint unlock). Open: Retrieval Traceability, Pattern: SCU.
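The "strict argument schemas" guard from the JSON tool bullet can be sketched as a pre-execution validator. The schema shape mirrors the common JSON-Schema-style `properties`/`required` layout but is illustrative, not Anthropic's official tool format; run it before executing any tool call and reject anything that does not conform instead of accepting free text.

```python
def validate_tool_args(schema, args):
    """Return a list of violations; an empty list means the call conforms."""
    type_map = {"string": str, "integer": int, "number": (int, float), "boolean": bool}
    errors = []
    for name in schema["properties"]:
        if name in schema.get("required", []) and name not in args:
            errors.append(f"missing required argument: {name}")
    for name, value in args.items():
        if name not in schema["properties"]:
            errors.append(f"unexpected argument: {name}")
        elif not isinstance(value, type_map[schema["properties"][name]["type"]]):
            errors.append(f"wrong type for {name}")
    return errors
```

Echoing the schema back in every tool step means including it in the tool result, so the model never has to reconstruct the contract from memory.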
## Deep diagnostics
- Three-paraphrase probe. Ask the same question three ways. Log ΔS and λ for each. If λ flips on harmless paraphrase, clamp with BBAM and tighten snippet schema.
- Anchor triangulation. Compare ΔS to the expected anchor section and to a decoy section. If ΔS is close for both, re-chunk and re-embed.
- Chain length audit. If entropy rises after 25–40 steps, split the plan, then re-join with a BBCR bridge. Open: Context Drift, Entropy Collapse.
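The three-paraphrase probe above can be sketched as a small harness. `ask` is a hypothetical hook that runs one phrasing through your pipeline and returns `(delta_s, lambda_state)`; the state labels are assumptions.

```python
def paraphrase_probe(paraphrases, ask):
    """Log ΔS and λ per phrasing; any λ disagreement counts as a flip."""
    log = [ask(p) for p in paraphrases]
    lambdas = [state for _, state in log]
    return {"log": log, "lambda_flip": len(set(lambdas)) > 1}
```

A flip on a harmless paraphrase is the signal to clamp with BBAM and tighten the snippet schema, per the bullet above.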
## Escalate and structural fixes

- **Index or metric mismatch.** If ΔS stays high across seeds, rebuild with the semantic chunking checklist and verify against a small gold set. Open: Embedding ≠ Semantic, Chunking Checklist.
- **Cold boot or first-call crash in fresh deploys.** Check ordering, secrets, and version skew. Open: Bootstrap Ordering, Pre-deploy Collapse.
- **Live instability.** Add live probes and backoff guards. Open: Live Monitoring for RAG, Debug Playbook.
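The "backoff guards" mentioned for live instability can be as small as a retry wrapper with exponential delay. A minimal sketch; the delays and retry count are illustrative, so tune them against your actual rate limits.

```python
import time

def with_backoff(call, max_tries=4, base_delay=0.5):
    """Retry `call` with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_tries):
        try:
            return call()
        except Exception:
            if attempt == max_tries - 1:
                raise                                # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Pair it with a live probe that records ΔS per call, so a burst of retries shows up next to the quality signal rather than being silently absorbed.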
## Copy-paste prompt

```txt
You have TXTOS and the WFGY Problem Map loaded.

My Anthropic issue:
- symptom: [one line]
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states across 3 paraphrases

Tell me:
1) failing layer and why,
2) the exact WFGY page to open,
3) the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) a reproducible test to verify the fix.

Use BBMC, BBPF, BBCR, BBAM when relevant.
```
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.