WFGY/ProblemMap/GlobalFixMap/Agents_Orchestration/smolagents.md


Smolagents: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of Agents & Orchestration.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when your orchestration uses smolagents (ToolCallingAgent, CodeAgent, multi-agent flows) and you see tool loops, wrong snippets, role mixing, or answers that flip between runs. The table maps symptoms to exact WFGY fix pages and gives a minimal recipe you can paste.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the intended section or record
  • λ stays convergent across 3 paraphrases and 2 seeds
  • E_resonance stays flat on long windows

Open these first


Typical smolagents breakpoints and the right fix


Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe λ_observe
    Do a k sweep in retrieval and reorder prompt headers. If λ flips, lock the schema and clamp with BBAM.

  3. Apply the module

  4. Verify
    Coverage ≥ 0.70. ΔS ≤ 0.45. Three paraphrases and two seeds with λ convergent.
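The stability probe in the steps above can be scripted. A minimal sketch, assuming `run_fn(q, seed)` is a hypothetical wrapper around your agent that returns a comparable answer label:

```python
def lambda_convergent(run_fn, paraphrases, seeds):
    # Treat λ as convergent when every paraphrase × seed run lands on
    # the same answer label. `run_fn` is your own runner, not a
    # smolagents API.
    answers = {run_fn(q, seed) for q in paraphrases for seed in seeds}
    return len(answers) == 1
```

With three paraphrases and two seeds this executes six runs; any disagreement flags λ as divergent and the fix pages above apply.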

Minimal smolagents pattern with WFGY checks

```python
# Pseudocode: only the control points that matter are shown.
from smolagents import tool, ToolCallingAgent  # real API; retriever, prompt, model below are illustrative

# Contracted snippet schema
SNIPPET_FIELDS = ["snippet_id", "section_id", "source_url", "offsets", "tokens"]

def retriever_search(q, k=10):
    # Unified analyzer and metric across dense and sparse.
    # Returns a list[dict] of snippets with SNIPPET_FIELDS populated.
    return retriever.search(q, k=k)

@tool
def retrieve(q: str) -> list:
    """Return auditable snippets with the locked schema.

    Args:
        q: the user question to retrieve snippets for.
    """
    return retriever_search(q, k=10)

def assemble_prompt(context, q):
    # Schema-locked prompt: cite first, then answer.
    return prompt.format(context=context, question=q)

def wfgy_gate(q, context, answer):
    # Compute ΔS(question, context), log λ, and enforce thresholds.
    metrics = metrics_and_trace(q, context, answer)
    if metrics["risk"]:
        raise RuntimeError("WFGY gate: high ΔS or divergent λ")
    return metrics

agent = ToolCallingAgent(
    tools=[retrieve],
    model=model,  # your LLM backend
    # keep tool arguments strict and echo the schema on each tool call
)

def run(question: str):
    context = retrieve(question)
    msg = assemble_prompt(context, question)
    # The agent should obey cite-then-explain and emit strict JSON where required.
    result = agent.run(msg)
    metrics = wfgy_gate(question, context, result)
    return {"answer": result, "metrics": metrics}
```
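The pattern above leaves `metrics_and_trace` undefined. A minimal sketch of that helper, assuming ΔS is approximated as 1 − cosine similarity over an `embed` function you supply (all names here are hypothetical, not part of smolagents or WFGY):

```python
import math

def metrics_and_trace(question, context, answer, embed):
    # `embed` is any text -> vector function. ΔS is approximated as
    # 1 - cosine similarity, a stand-in for the WFGY metric.
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    # Join retrieved snippet text so question and context are comparable.
    joined = " ".join(str(s.get("tokens", "")) for s in context if isinstance(s, dict))
    ds = 1.0 - cos(embed(question), embed(joined))
    # Record the snippet -> citation mapping for audits.
    cited = [s["snippet_id"] for s in context if isinstance(s, dict) and s.get("snippet_id")]
    return {
        "delta_s": ds,
        "citations": cited,
        "risk": ds >= 0.60,  # risk band from the ΔS thresholds above
    }
```

The gate then only needs the `risk` flag, while `delta_s` and `citations` land in the trace log.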

What this enforces

  • Retrieval is observable and parameterized, with a unified analyzer and metric.
  • The prompt is schema-locked: cite first, with strict JSON for tool outputs.
  • A post-generation WFGY gate can halt the run when ΔS is high or λ flips.
  • Traces record the snippet-to-citation mapping for audits.

Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts


Smolagents-specific gotchas

  • @tool signatures inferred too loosely, allowing free-form text. Tighten types and validate arguments before execution. See Data Contracts

  • CodeAgent side effects escape the intended sandbox. Make steps idempotent and restrict file-system and network access. See Logic Collapse

  • Hybrid retrievers degrade compared to a single retriever. Unify the analyzer and metric, then add deterministic reranking. See Query Parsing Split · Rerankers

  • Memory overwrite or hidden role drift in multi-agent flows. Split namespaces and stamp mem_rev and mem_hash. See Multi-Agent Problems · role drift · memory desync

  • Long chains flatten style and drift logically. Split the plan, then re-join with a BBCR bridge and clamp with BBAM. See Context Drift · Entropy Collapse
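The argument-contract check from the first gotcha can be sketched as a plain pre-execution guard; `validate_tool_args` and the schema dict are hypothetical names, not a smolagents API:

```python
def validate_tool_args(args: dict, schema: dict) -> dict:
    # Reject extra, missing, or mistyped arguments before the tool runs,
    # instead of letting free-form text slip through. `schema` maps
    # argument name -> expected Python type.
    extra = set(args) - set(schema)
    if extra:
        raise TypeError(f"unexpected tool arguments: {sorted(extra)}")
    for name, typ in schema.items():
        if name not in args:
            raise TypeError(f"missing tool argument: {name}")
        if not isinstance(args[name], typ):
            raise TypeError(f"argument {name} must be {typ.__name__}")
    return args
```

Calling this at the top of each tool body keeps the data contract enforced even when the agent's tool-call JSON is sloppy.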


When to escalate

  • ΔS remains ≥ 0.60. Rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook

  • Identical input yields different answers across runs. Check version skew and session state. See Pre-Deploy Collapse
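The gold-set verification mentioned above can be scripted. A minimal sketch, where `search_fn` is your retriever and the gold pairs are hypothetical examples:

```python
def gold_set_coverage(search_fn, gold, k=10):
    # `gold` is a list of (question, expected_snippet_id) pairs.
    # Counts how often the expected snippet appears in the top-k results
    # of the rebuilt index.
    hits = 0
    for question, expected_id in gold:
        ids = [s["snippet_id"] for s in search_fn(question, k)]
        if expected_id in ids:
            hits += 1
    return hits / len(gold)  # compare against the ≥ 0.70 coverage target
```

A handful of gold pairs is usually enough to tell a healthy rebuild from a broken one before re-running the full ΔS sweep.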


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text-to-image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.