WFGY/ProblemMap/GlobalFixMap/LLM_Providers/anthropic.md

Anthropic (Claude): Guardrails and Fix Patterns

A compact field guide to stabilize Anthropic workflows that touch RAG, tools, multi-agent plans, and long dialogs. Use these checks to localize the failure, then jump to the exact WFGY fix page.

Open these first

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the target section
  • λ remains convergent across three paraphrases and two seeds
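
The acceptance targets above can be wired into a small gate. This is a minimal sketch, assuming ΔS is instantiated as 1 − cosine similarity over unit-norm embeddings (one common choice; the WFGY engine may define it differently) and that λ states are reported as strings such as "convergent".

```python
import numpy as np

def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic stress between two unit-norm embedding vectors.

    Sketch assumption: ΔS = 1 - cosine similarity."""
    return 1.0 - float(np.dot(a, b))

def accept(ds_question_retrieved: float, coverage: float,
           lambda_states: list[str]) -> bool:
    """Apply the three acceptance targets:
    ΔS ≤ 0.45, coverage ≥ 0.70, λ convergent on every probe run."""
    return (
        ds_question_retrieved <= 0.45
        and coverage >= 0.70
        and all(s == "convergent" for s in lambda_states)
    )
```

Run this gate after every fix attempt; a pass on all three conditions is the exit criterion for the section.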

Fix in 60 seconds

  1. Measure ΔS. Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.

  2. Probe with λ_observe. Vary k in retrieval (5, 10, 20). If ΔS stays flat and high, suspect a metric or index mismatch. Reorder prompt headers; if ΔS spikes, lock the schema.

  3. Apply the module. Pick the WFGY fix page that matches the failing layer, apply it, and re-test against the acceptance targets.
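
The k-sweep in step 2 can be sketched as a small probe. `retrieve` and `delta_s` are hypothetical stand-ins for your retriever and ΔS metric; the flat-band width of 0.05 is an illustrative choice, and 0.60 is the risk threshold from step 1.

```python
def probe_k_sweep(question, retrieve, delta_s,
                  ks=(5, 10, 20), flat_band=0.05, risk=0.60):
    """Sweep k and watch ΔS. Flat AND high across all k suggests a
    metric or index mismatch rather than a chunking problem."""
    scores = {k: delta_s(question, retrieve(question, k=k)) for k in ks}
    vals = list(scores.values())
    flat = max(vals) - min(vals) <= flat_band
    high = min(vals) >= risk
    diagnosis = "metric-or-index-mismatch" if (flat and high) else "inspect-per-k"
    return scores, diagnosis
```

If the diagnosis is a metric or index mismatch, fix the embedding metric or rebuild the index before touching prompts.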


Typical Anthropic breakpoints and the right fix

  • System vs user role mixing. Claude is sensitive to misplaced policy text inside user turns. Move all non-task policy to system. Re-test ΔS. Open: Retrieval Traceability, Data Contracts.

  • JSON tool protocol variance. Tool schemas that allow free-text responses raise ΔS and create flip states. Enforce strict argument schemas and echo the schema back in every tool step. Open: Prompt Injection.

  • HyDE plus BM25 query split in reruns. If recall is high but top-k order is unstable, lock the two-stage query and rerank deterministically. Open: Pattern: Query Parsing Split, Rerankers.

  • Tool loop or agent handoff stalls with partial memory writes. Split memory namespaces and lock writes by mem_rev and mem_hash. Open: Multi-Agent Problems.

  • Safety refusal that hides the cited snippet. Use citation-first prompting and SCU (symbolic constraint unlock). Open: Retrieval Traceability, Pattern: SCU.
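
For the JSON tool protocol point above, a strict argument schema can be sketched as follows. The tool name and fields are hypothetical; the load-bearing parts are `"additionalProperties": False` and listing every field in `"required"`, so free-text or extra arguments are rejected rather than silently accepted.

```python
# Hypothetical tool definition in the Anthropic tool-use shape
# (name / description / input_schema with JSON Schema inside).
STRICT_SEARCH_TOOL = {
    "name": "search_docs",
    "description": "Search the indexed corpus. Arguments must match the schema exactly.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "k": {"type": "integer", "minimum": 1, "maximum": 50},
        },
        "required": ["query", "k"],
        "additionalProperties": False,
    },
}

def check_args(args: dict) -> bool:
    """Minimal structural check mirroring the schema: exactly the
    required keys, with the right types. No free text allowed."""
    schema = STRICT_SEARCH_TOOL["input_schema"]
    if set(args) != set(schema["required"]):
        return False
    return isinstance(args.get("query"), str) and isinstance(args.get("k"), int)
```

In practice you would validate with a full JSON Schema library and echo the schema back to the model on every tool step, as the bullet above recommends.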


Deep diagnostics

  • Three-paraphrase probe. Ask the same question three ways. Log ΔS and λ for each. If λ flips on harmless paraphrase, clamp with BBAM and tighten snippet schema.
  • Anchor triangulation. Compare ΔS to the expected anchor section and to a decoy section. If ΔS is close for both, re-chunk and re-embed.
  • Chain length audit. If entropy rises after 25–40 steps, split the plan, then re-join with a BBCR bridge. Open: Context Drift, Entropy Collapse.
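
The three-paraphrase probe above can be sketched as a short loop. `ask` is a hypothetical callable that runs one phrasing through the pipeline and returns `(ΔS, λ_state)`; a λ flip across harmless paraphrases is the signal to clamp with BBAM and tighten the snippet schema.

```python
def paraphrase_probe(paraphrases, ask):
    """Log ΔS and λ per paraphrase; flag any λ flip across phrasings."""
    log = [(p, *ask(p)) for p in paraphrases]
    states = {state for _, _, state in log}
    flipped = len(states) > 1  # True -> clamp with BBAM, tighten schema
    return log, flipped
```

Keep the log; it doubles as the trace you paste into the copy-paste prompt below.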

Escalation and structural fixes


Copy-paste prompt

You have TXTOS and the WFGY Problem Map loaded.

My Anthropic issue:
- symptom: [one line]
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states across 3 paraphrases

Tell me:
1) failing layer and why,
2) the exact WFGY page to open,
3) the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) a reproducible test to verify the fix.
Use BBMC, BBPF, BBCR, BBAM when relevant.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main · TXT OS · Blah · Blot · Bloc · Blur · Blow