# Smolagents: Guardrails and Fix Patterns
## 🧭 Quick Return to Map

You are in a sub-page of **Agents & Orchestration**. To reorient, go back here:

- Agents & Orchestration — orchestration frameworks and guardrails
- WFGY Global Fix Map — the main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when your orchestration uses smolagents (`ToolCallingAgent`, `CodeAgent`, multi-agent flows) and you see tool loops, wrong snippets, role mixing, or answers that flip between runs. The table maps symptoms to exact WFGY fix pages and gives a minimal recipe you can paste.
## Acceptance targets

- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the intended section or record
- λ stays convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat on long windows
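The targets above can be checked mechanically at the end of each run. A minimal sketch, assuming ΔS is 1 minus cosine similarity over your embedder's vectors (the exact WFGY metric may differ) and coverage is the fraction of a gold snippet set the answer actually cites; `accept` is a hypothetical helper name:

```python
import math

def delta_s(vec_a, vec_b):
    """ΔS sketch: 1 - cosine similarity between two embedding vectors.
    Stand-in for the WFGY metric; plug in your own embedder."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / norm

def coverage(cited_ids, gold_ids):
    """Fraction of gold snippet ids actually cited by the answer."""
    return len(set(cited_ids) & set(gold_ids)) / max(len(gold_ids), 1)

def accept(ds, cov):
    """Gate against the acceptance targets: ΔS ≤ 0.45 and coverage ≥ 0.70."""
    return ds <= 0.45 and cov >= 0.70
```

Identical vectors give ΔS = 0 (stable), orthogonal vectors give ΔS = 1 (risk), so the 0.40/0.60 bands used later on this page fall between those extremes.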
## Open these first

- Visual map and recovery — RAG Architecture & Recovery
- End-to-end retrieval knobs — Retrieval Playbook
- Why this snippet — Retrieval Traceability
- Ordering control — Rerankers
- Embedding vs meaning — Embedding ≠ Semantic
- Hallucination and chunk edges — Hallucination
- Long chains and entropy — Context Drift · Entropy Collapse
- Structural collapse and recovery — Logic Collapse
- Prompt injection and schema locks — Prompt Injection
- Multi-agent conflicts — Multi-Agent Problems
- Bootstrap and deployment ordering — Bootstrap Ordering · Deployment Deadlock · Pre-Deploy Collapse
- Snippet and citation schema — Data Contracts
## Typical smolagents breakpoints and the right fix

- **ToolCallingAgent returns free text instead of strict JSON.** Enforce the schema via a contract gate and echo the schema each step. Open: Data Contracts · Prompt Injection
- **CodeAgent executes but results drift or timeouts cascade.** Add BBCR bridge steps, strict timeouts, and idempotency before side effects. Open: Logic Collapse
- **High similarity yet wrong meaning.** Mixed write and read embeddings, metric mismatch, or fragmented stores. Open: Embedding ≠ Semantic · Vectorstore Fragmentation
- **Hybrid retrieval worse than a single retriever.** Two-stage query drift or mis-weighted rerank. Open: Query Parsing Split · Rerankers
- **Citations missing or inconsistent across tools.** Require cite-then-explain and lock snippet fields at the agent boundary. Open: Retrieval Traceability · Data Contracts
- **Agent handoff loops or shared memory overwrites.** Split memory namespaces and stamp `mem_rev` and `mem_hash`. Open: Multi-Agent Problems · role drift · memory desync
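The first breakpoint, free text where strict JSON is required, is the easiest to gate mechanically. A minimal contract-gate sketch using only the standard library; `validate_tool_output` and the raising-instead-of-degrading policy are this page's suggestion, not a smolagents API:

```python
import json

# the locked snippet schema from the Data Contracts page
SNIPPET_FIELDS = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def validate_tool_output(raw: str) -> list:
    """Reject free text: the tool must return a JSON array, and every
    snippet must carry each locked field. Raise loudly instead of
    letting a malformed payload flow downstream."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"tool returned non-JSON output: {exc}") from exc
    if not isinstance(data, list):
        raise ValueError("expected a JSON array of snippets")
    for i, snippet in enumerate(data):
        missing = SNIPPET_FIELDS - set(snippet)
        if missing:
            raise ValueError(f"snippet {i} missing fields: {sorted(missing)}")
    return data
```

Run this on every tool return before the agent sees it; a raised `ValueError` is your signal to re-prompt with the schema echoed back.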
## Fix in 60 seconds

1. **Measure ΔS.** Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
2. **Probe λ_observe.** Do a k sweep in retrieval and reorder prompt headers. If λ flips, lock the schema and clamp with BBAM.
3. **Apply the module.**
   - Retrieval drift → BBMC plus Data Contracts
   - Reasoning collapse → BBCR bridge plus BBAM, verify with Logic Collapse
   - Hallucination re-entry after correction → Pattern: Hallucination Re-entry
4. **Verify.** Coverage ≥ 0.70, ΔS ≤ 0.45, and λ convergent across three paraphrases and two seeds.
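The λ-convergence check in step 4 can be automated. A minimal sketch, where `run_fn` is a hypothetical handle to your agent entry point (question in, final answer string out) and convergence is read as "every paraphrase under every seed lands on one answer":

```python
def lambda_convergent(run_fn, paraphrases, seeds):
    """Probe λ_observe: run each paraphrase under each seed and report
    whether the answers agree. run_fn(question, seed=...) is an assumed
    signature; adapt it to however your agent takes a seed."""
    answers = {run_fn(q, seed=s) for q in paraphrases for s in seeds}
    return len(answers) == 1  # convergent iff every run yields the same answer
```

Exact string equality is deliberately strict; if your answers legitimately vary in phrasing, compare extracted citations or a normalized form instead.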
## Minimal smolagents pattern with WFGY checks

```python
# Pseudocode: shows only the control points that matter.
from smolagents import Tool, ToolCallingAgent  # placeholder imports for illustration

# Contracted snippet schema
SNIPPET_FIELDS = ["snippet_id", "section_id", "source_url", "offsets", "tokens"]

def retriever_search(q, k=10):
    # unified analyzer and metric across dense and sparse
    # returns a list[dict] of snippets with SNIPPET_FIELDS populated
    return retriever.search(q, k=k)

@Tool
def retrieve(q: str) -> list:
    """Return auditable snippets with the locked schema."""
    return retriever_search(q, k=10)

def assemble_prompt(context, q):
    # schema-locked prompt: cite first, then answer
    return prompt.format(context=context, question=q)

def wfgy_gate(q, context, answer):
    # compute ΔS(question, context), log λ, and enforce thresholds
    metrics = metrics_and_trace(q, context, answer)
    if metrics["risk"]:
        raise RuntimeError("WFGY gate: high ΔS or divergent λ")
    return metrics

agent = ToolCallingAgent(
    tools=[retrieve],
    # keep tool arguments strict and echo the schema on each tool call
)

def run(question: str):
    context = retrieve(question)
    msg = assemble_prompt(context, question)
    # the agent should obey cite-then-explain and strict JSON where required
    result = agent.run(msg)
    metrics = wfgy_gate(question, context, result)
    return {"answer": result, "metrics": metrics}
```
## What this enforces

- Retrieval is observable and parameterized; analyzer and metric stay unified.
- The prompt is schema-locked with cite-first and strict JSON for tool outputs.
- A post-generation WFGY gate can halt the run when ΔS is high or λ flips.
- Traces record snippet-to-citation mapping for audits.

Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts
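The "strict timeouts and idempotency before side effects" recipe for CodeAgent steps can be sketched as a decorator. A minimal, Unix-only sketch: `signal.alarm` assumes a single-threaded step, and caching by arguments is the simplest idempotency key; production code would persist the key across retries:

```python
import functools
import signal

def idempotent_with_timeout(seconds: int):
    """Wrap a side-effecting step with a hard timeout and result caching
    by arguments, so a retried step does not re-fire the effect."""
    def decorator(fn):
        cache = {}
        @functools.wraps(fn)
        def wrapper(*args):
            if args in cache:
                return cache[args]  # idempotent replay: effect already ran
            def on_timeout(signum, frame):
                raise TimeoutError(f"{fn.__name__} exceeded {seconds}s")
            old_handler = signal.signal(signal.SIGALRM, on_timeout)
            signal.alarm(seconds)
            try:
                result = fn(*args)
            finally:
                signal.alarm(0)  # always cancel the pending alarm
                signal.signal(signal.SIGALRM, old_handler)
            cache[args] = result
            return result
        return wrapper
    return decorator
```

A timed-out step raises before its result is cached, so the next attempt runs the step again rather than replaying a partial result.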
## Smolagents-specific gotchas

- `@Tool` signatures inferred too loosely allow free-form text. Tighten types and validate arguments before execution. See Data Contracts.
- CodeAgent side effects escape the intended sandbox. Make the steps idempotent and restrict file-system or network access. See Logic Collapse.
- Hybrid retrievers degrade compared to a single retriever. Unify analyzer and metric, then add deterministic reranking. See Query Parsing Split · Rerankers.
- Memory overwrite or hidden role drift in multi-agent flows. Split namespaces and stamp `mem_rev` and `mem_hash`. See Multi-Agent Problems · role drift · memory desync.
- Long chains flatten style and drift logically. Split the plan, then re-join with a BBCR bridge and clamp with BBAM. See Context Drift · Entropy Collapse.
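The `mem_rev` / `mem_hash` stamping mentioned above can be done with the standard library. A minimal sketch; `stamp_memory` and `detect_desync` are hypothetical helper names, and the scheme assumes one namespace per agent role and a monotonically increasing revision supplied by the caller:

```python
import hashlib
import json

def stamp_memory(namespace: str, payload: dict, rev: int) -> dict:
    """Stamp a memory record with mem_rev and mem_hash so overwrites and
    cross-agent desync become detectable at read time."""
    body = json.dumps(payload, sort_keys=True)  # canonical form before hashing
    return {
        "namespace": namespace,  # one namespace per agent role
        "mem_rev": rev,          # monotonically increasing revision counter
        "mem_hash": hashlib.sha256(body.encode()).hexdigest(),
        "payload": payload,
    }

def detect_desync(record: dict) -> bool:
    """True if the stored payload no longer matches its stamped hash,
    i.e. something mutated memory without bumping the revision."""
    body = json.dumps(record["payload"], sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest() != record["mem_hash"]
```

On read, check `detect_desync` first, then compare `mem_rev` against the last revision this agent wrote; a lower revision means another agent clobbered the record.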
## When to escalate

- ΔS remains ≥ 0.60 after the fixes above: rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook.
- Identical input yields different answers across runs: check version skew and session state. See Pre-Deploy Collapse.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.