WFGY/ProblemMap/GlobalFixMap/LLM_Providers/anthropic_claude.md


Anthropic Claude: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of LLM_Providers.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage to target section ≥ 0.70
  • λ stays convergent across 3 paraphrases
  • E_resonance flat on long runs
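The targets above can be encoded as a single gate. A minimal sketch, assuming ΔS has already been computed (e.g. as 1 − cosine similarity between question and retrieved-context embeddings) and that λ states are labeled per paraphrase; all names here are illustrative, not WFGY API:

```python
# Acceptance-gate sketch. Assumes delta_s, coverage, and per-paraphrase
# lambda states are measured upstream; thresholds mirror the targets above.

def passes_acceptance(delta_s: float, coverage: float,
                      lambda_states: list[str]) -> bool:
    """Return True only when every acceptance target holds."""
    return (
        delta_s <= 0.45                                     # ΔS(question, retrieved)
        and coverage >= 0.70                                # coverage to target section
        and all(s == "convergent" for s in lambda_states)   # λ across 3 paraphrases
    )

print(passes_acceptance(0.38, 0.82, ["convergent"] * 3))  # → True
print(passes_acceptance(0.52, 0.82, ["convergent"] * 3))  # → False (ΔS too high)
```

Run this check before shipping a prompt change; if any clause fails, the triage steps below tell you which layer to inspect.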

What usually breaks: Claude-specific gotchas and the repair

  1. Tool choice and router loops. Small changes in tool schemas can push Claude into choose-nothing or loop states. Keep the tool set minimal and stable. If it loops, bridge with BBCR and clamp variance with BBAM. Read: Logic Collapse and Multi-Agent Problems

  2. System prompt anchoring beats your retrieval. If the answer ignores citations, lock the schema: system → task → constraints → citations → answer. Read: Retrieval Traceability and Data Contracts

  3. Harmlessness refusals that look like empty answers. Reframe with cite-then-explain and explicit boundaries. If blank output repeats, treat it as collapse and apply BBCR. Read: Bluffing / Overconfidence and Logic Collapse

  4. Very long window stability. Gate OCR and chunk joins by ΔS, then stabilize attention with BBAM. Read: Entropy Collapse, Hallucination, and Chunking Checklist

  5. JSON or schema output slips mid-answer. Enforce citation-first, then structured fields. If structure still degrades near the end, split the answer into two turns with a BBCR bridge. Read: Data Contracts and Retrieval Collapse
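The schema lock in item 2 and the cite-then-explain framing in items 3 and 5 can be sketched as one prompt-assembly helper. This is a hypothetical illustration, not an official Anthropic or WFGY interface; the function and field names are assumptions:

```python
# Hypothetical prompt assembly for the schema lock:
# system → task → constraints → citations → answer, with cite-then-explain.

def build_messages(task: str, constraints: str, citations: list[str]):
    system = (
        "Answer only from the citations below. "
        "Cite first, then explain. If the citations are insufficient, say so."
    )
    cite_block = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(citations))
    user = (
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraints}\n\n"
        f"Citations:\n{cite_block}\n\n"
        "Answer (cite-then-explain):"
    )
    return system, user

sys_p, user_p = build_messages(
    "Summarize the refund policy.",
    "Quote section numbers; no outside knowledge.",
    ["Section 4.2: Refunds are issued within 14 days."],
)
print(user_p.startswith("Task:"))  # → True
```

Keeping every field in this fixed order is the point: Claude's system-prompt anchoring then works for you instead of against your retrieval.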


Minimal triage for Claude

  1. Probe. Measure ΔS(question, retrieved). Plot ΔS vs k for k in {5, 10, 20}.
  2. Localize. Tag λ across retrieval, assembly, and reasoning.
  3. Repair.
  • Perception faults: semantic chunking, rebuild the index with an explicit metric, rerank if needed. Read: Retrieval Playbook and Rerankers
  • Logic faults: BBCR bridge, then BBAM clamp
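The probe step above can be sketched as a small loop. `retrieve` and `delta_s` are placeholders for your own retriever and ΔS metric (assumed here as 1 − cosine similarity); the toy stand-ins only make the sketch runnable:

```python
# Triage probe sketch: measure ΔS for k in {5, 10, 20}. A curve that
# stays flat and high across k usually points at an index or metric
# mismatch rather than a chunking problem.

def probe_delta_s(question, retrieve, delta_s, ks=(5, 10, 20)):
    curve = {}
    for k in ks:
        chunks = retrieve(question, k=k)
        curve[k] = delta_s(question, chunks)
    return curve

# Toy stand-ins so the probe runs without a real retriever:
fake_scores = {5: 0.62, 10: 0.61, 20: 0.60}
curve = probe_delta_s("q", lambda q, k: k, lambda q, c: fake_scores[c])
print(curve)  # → {5: 0.62, 10: 0.61, 20: 0.60}
```

Plot the returned dict to get the ΔS-vs-k view the triage step asks for.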

Stop tuning and escalate when any of these holds: ΔS ≥ 0.60 after retrieval fixes, λ flips when mixing sources, or E_resonance climbs in long chains. Open: RAG Architecture and Recovery
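The escalation rule reduces to one predicate. A sketch under the assumption that the three signals are gathered from the probes above; the argument names are illustrative:

```python
# Escalation rule from the paragraph above as a single predicate.

def should_escalate(delta_s_after_fixes: float,
                    lambda_flips_on_mixed_sources: bool,
                    e_resonance_climbing: bool) -> bool:
    """True when any stop-tuning condition holds."""
    return (
        delta_s_after_fixes >= 0.60
        or lambda_flips_on_mixed_sources
        or e_resonance_climbing
    )

print(should_escalate(0.61, False, False))  # → True
print(should_escalate(0.40, False, False))  # → False
```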


Pasteable prompt

I have TXT OS loaded. Use WFGY to fix this Claude run:

symptom: [describe]
traces: [ΔS probes, λ per layer]

Tell me:
1) failing layer and why,
2) which ProblemMap page to open,
3) steps to push ΔS ≤ 0.45 with convergent λ,
4) how to verify with a reproducible test.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|-------|------|---------------|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.