
Anthropic Claude: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of LLM_Providers.
To reorient, go back to the LLM_Providers hub.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage to target section ≥ 0.70
  • λ stays convergent across 3 paraphrases
  • E_resonance flat on long runs
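These targets are mechanical checks, not vibes. Below is a minimal probe sketch, assuming ΔS(question, retrieved) is read as 1 minus the cosine similarity between question and retrieved-context embeddings; that reading and the helper names are illustrative assumptions, not the canonical WFGY definition.

```python
import numpy as np

def delta_s(question_vec: np.ndarray, retrieved_vec: np.ndarray) -> float:
    """ΔS probe: 1 - cosine similarity. Lower is better; 0.45 is the ceiling."""
    cos = float(np.dot(question_vec, retrieved_vec) /
                (np.linalg.norm(question_vec) * np.linalg.norm(retrieved_vec)))
    return 1.0 - cos

def meets_targets(ds: float, coverage: float) -> bool:
    """Acceptance gate from the targets above: ΔS ≤ 0.45 and coverage ≥ 0.70."""
    return ds <= 0.45 and coverage >= 0.70
```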

What usually breaks: Claude-specific gotchas and the repairs

  1. Tool choice and router loops. Small changes in tool schemas can push Claude into choose-nothing states or tool-call loops. Keep the tool set minimal and stable. If it loops anyway, bridge with BBCR and clamp variance with BBAM; see the loop-guard sketch after this list. Read: Logic Collapse and Multi-Agent Problems

  2. System-prompt anchoring beats your retrieval. If the answer ignores citations, lock the schema: system → task → constraints → citations → answer; see the assembly sketch after this list. Read: Retrieval Traceability and Data Contracts

  3. Harmlessness refusals that look like empty answers. Reframe with cite-then-explain and explicit boundaries. If blank output repeats, treat it as collapse and apply BBCR. Read: Bluffing / Overconfidence and Logic Collapse

  4. Stability over very long windows. Gate OCR and chunk joins by ΔS, then stabilize attention with BBAM. Read: Entropy Collapse, Hallucination, and Chunking Checklist

  5. JSON or schema output slips mid-answer. Enforce citation-first, then structured fields. If structure still degrades near the end, split the answer into two turns with a BBCR bridge. Read: Data Contracts and Retrieval Collapse
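For item 1, one concrete guard is to fingerprint each tool call and break out when the same call repeats, then force a plain answer. This is a minimal sketch against the Anthropic Messages API; the model id, repeat threshold, and fallback wording are assumptions, and tool execution is elided.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # substitute your model id

def answer_with_loop_guard(messages: list, tools: list, max_repeats: int = 2):
    seen: dict[tuple, int] = {}
    while True:
        resp = client.messages.create(
            model=MODEL, max_tokens=1024, tools=tools, messages=messages
        )
        calls = [b for b in resp.content if b.type == "tool_use"]
        if not calls:
            return resp  # Claude answered directly, no loop
        for call in calls:
            key = (call.name, repr(call.input))
            seen[key] = seen.get(key, 0) + 1
            if seen[key] > max_repeats:
                # BBCR-style bridge: drop the tools and clamp to a direct answer.
                messages.append({
                    "role": "user",
                    "content": "Stop calling tools. Answer directly from the "
                               "evidence gathered so far, citing it first.",
                })
                return client.messages.create(
                    model=MODEL, max_tokens=1024, messages=messages
                )
        # ... run the requested tools and append tool_result blocks here (elided) ...
```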
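For item 2, the schema lock is easiest to enforce at assembly time, so citations always sit directly above the answer slot. A sketch with illustrative field names; the wording of the system line is an assumption.

```python
def assemble(task: str, constraints: list[str], citations: list[str]) -> dict:
    """Fixed order: system → task → constraints → citations → answer slot."""
    system = "Answer only from the cited snippets. Cite first, then explain."
    user = "\n".join([
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Citations:",
        *[f"[{i + 1}] {snip}" for i, snip in enumerate(citations)],
        "Answer (cite-then-explain):",
    ])
    return {"system": system, "messages": [{"role": "user", "content": user}]}
```

Passing the system line through Claude's separate system parameter keeps the anchor stable while the user turn carries the per-query contract.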


Minimal triage for Claude

  1. Probe. Measure ΔS(question, retrieved) and plot ΔS vs k for k in {5, 10, 20}; see the sweep sketch after this list.
  2. Localize. Tag λ across retrieval, assembly, and reasoning.
  3. Repair.
  • Perception faults: semantic chunking, rebuild the index with an explicit metric, rerank if needed. Read: Retrieval Playbook and Rerankers
  • Logic faults: BBCR bridge, then BBAM clamp
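A probe loop for steps 1 and 2, reusing delta_s from the acceptance-targets sketch; retrieve(question, k) and embed(text) are assumed stand-ins for your own stack. One common reading of the curve: flat and high across k points at index or metric faults, falling with k points at chunking.

```python
def delta_s_curve(question: str, retrieve, embed, ks=(5, 10, 20)) -> dict:
    """ΔS vs k sweep. Flat-high curve → index/metric fault; falling → chunking."""
    q_vec = embed(question)
    return {
        k: delta_s(q_vec, embed("\n".join(retrieve(question, k))))
        for k in ks
    }
```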

Stop tuning and escalate when any of these holds: ΔS ≥ 0.60 after retrieval fixes, λ flips when sources are mixed, or E_resonance climbs in long chains. Open: RAG Architecture and Recovery
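The stop rule can be mechanical too. A sketch encoding the three conditions; the encodings of λ (per-paraphrase state strings) and E_resonance (a series checked for upward drift) are assumptions for illustration.

```python
def should_escalate(ds_after_fixes: float,
                    lambda_states: list[str],
                    e_resonance: list[float]) -> bool:
    """True when any stop condition on this page holds."""
    ds_stuck = ds_after_fixes >= 0.60
    lambda_flips = len(set(lambda_states)) > 1   # flips across mixed sources
    e_climbs = len(e_resonance) >= 2 and e_resonance[-1] > e_resonance[0]
    return ds_stuck or lambda_flips or e_climbs
```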


Pasteable prompt

```txt
I have TXT OS loaded. Use WFGY to fix this Claude run:

symptom: [describe]
traces: [ΔS probes, λ per layer]

Tell me:
1) failing layer and why,
2) which ProblemMap page to open,
3) steps to push ΔS ≤ 0.45 with convergent λ,
4) how to verify with a reproducible test.
```

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + <your question>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" and the OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning and semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow