# CrewAI: Guardrails and Fix Patterns
## 🧭 Quick Return to Map

You are in a sub-page of **Agents & Orchestration**. To reorient, go back here:

- Agents & Orchestration — orchestration frameworks and guardrails
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when your orchestration uses CrewAI (agents, tasks, tools, crews, planning) and you see tool loops, wrong snippets, role mixing, or answers that flip between runs. The table maps symptoms to exact WFGY fix pages and gives a minimal recipe you can paste.
## Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the intended section or record
- λ stays convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat on long windows
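To make these targets checkable in code, here is a minimal sketch that approximates ΔS as 1 minus cosine similarity between two embedding vectors. This is an assumption for illustration, not the canonical WFGY definition, and the embeddings themselves are expected to come from whatever encoder your stack already uses.

```python
import math

def delta_s(vec_a, vec_b):
    """Approximate ΔS as 1 - cosine similarity of two embedding vectors.

    Assumption: ΔS is modeled as cosine distance; the real WFGY metric
    may differ. Returns 1.0 (maximum stress) for a degenerate vector.
    """
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / norm if norm else 1.0

def within_target(score, threshold=0.45):
    # Acceptance target from above: ΔS(question, retrieved) <= 0.45
    return score <= threshold
```

Identical vectors score ΔS = 0.0 and pass the gate; orthogonal vectors score 1.0 and fail it.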
## Open these first

- Visual map and recovery: RAG Architecture & Recovery
- End to end retrieval knobs: Retrieval Playbook
- Why this snippet: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk edges: Hallucination
- Long chains and entropy: Context Drift · Entropy Collapse
- Structural collapse and recovery: Logic Collapse
- Prompt injection and schema locks: Prompt Injection
- Multi agent conflicts: Multi-Agent Problems
- Bootstrap and deployment ordering: Bootstrap Ordering · Deployment Deadlock · Pre-Deploy Collapse
- Snippet and citation schema: Data Contracts
## Typical breakpoints and the right fix

- **Agent to agent handoff loops or stalls.** Add BBCR bridge steps, set explicit timeouts, log λ per hop, and clamp variance with BBAM. Open: Logic Collapse · Multi-Agent Problems
- **High similarity yet wrong meaning.** Usually mixed write and read embeddings, a metric mismatch, or fragmented stores. Open: Embedding ≠ Semantic · Vectorstore Fragmentation
- **Hybrid retrieval performs worse than a single retriever.** Two-stage query drift, a mis-weighted rerank, or an inconsistent analyzer. Open: Query Parsing Split · Rerankers
- **Citations missing or inconsistent across agents.** Require cite-then-explain and lock snippet fields at the task boundary. Open: Retrieval Traceability · Data Contracts
- **Planner injects unsafe tool prompts.** Freeze tool schemas and validate arguments before execution. Open: Prompt Injection
- **Long runs flatten style and drift logically.** Split tasks, rejoin with BBCR, measure entropy, and stop when it rises. Open: Context Drift · Entropy Collapse
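The "planner injects unsafe tool prompts" row above asks you to freeze tool schemas and validate arguments before execution. A minimal sketch of such a gate, with a hypothetical `TOOL_SCHEMAS` registry and made-up tool names, could look like this:

```python
# Hypothetical frozen schema: tool name -> {field name: required Python type}.
# The registry is defined once at startup and never edited by the planner.
TOOL_SCHEMAS = {
    "search_docs": {"query": str, "k": int},
}

def validate_tool_call(tool_name, args):
    """Reject a planner-emitted tool call unless it matches the frozen schema.

    Unknown tools, missing fields, extra fields, and wrong types all fail
    closed, which is the behavior the Prompt Injection fix page asks for.
    Returns (ok, reason).
    """
    schema = TOOL_SCHEMAS.get(tool_name)
    if schema is None:
        return False, f"unknown tool: {tool_name}"
    if set(args) != set(schema):
        return False, f"field mismatch: {sorted(set(args) ^ set(schema))}"
    for field, expected in schema.items():
        if not isinstance(args[field], expected):
            return False, f"bad type for {field}"
    return True, "ok"
```

Run the check between planning and execution, so an injected instruction can at most produce a rejected call, never an executed one.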
## Fix in 60 seconds

1. **Measure ΔS.** Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.
2. **Probe λ_observe.** Do a k sweep in retrieval and reorder the prompt headers. If λ flips, lock the schema and clamp with BBAM.
3. **Apply the module.**
   - Retrieval drift → BBMC plus Data Contracts
   - Reasoning collapse → BBCR bridge plus BBAM, verify with Logic Collapse
   - Hallucination re-entry after correction → Pattern: Hallucination Re-entry
4. **Verify.** Coverage ≥ 0.70, ΔS ≤ 0.45, and λ convergent across three paraphrases and two seeds.
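The verify step above can be sketched as a single gate function. Here λ is treated as convergent when every paraphrase × seed run agrees on one normalized answer, which is a deliberate simplification of the WFGY definition; a stricter variant could compare citation sets instead.

```python
from collections import Counter

def lambda_convergent(answers):
    """Simplified λ check: all runs (e.g. 3 paraphrases x 2 seeds)
    must agree on one answer after trivial normalization."""
    normalized = [a.strip().lower() for a in answers]
    _, count = Counter(normalized).most_common(1)[0]
    return count == len(normalized)

def verify_run(coverage, delta_s_score, answers):
    # Gate from the checklist: Coverage >= 0.70, ΔS <= 0.45, λ convergent.
    return coverage >= 0.70 and delta_s_score <= 0.45 and lambda_convergent(answers)
```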
## Minimal CrewAI pattern with WFGY checks

```python
# Pseudocode: highlights the control points only.
# `retriever`, `prompt`, and `metrics_and_trace` stand in for your own stack.
from crewai import Agent, Task, Crew

def retrieve_snippets(q):
    # Unified analyzer and metric across dense and sparse retrieval
    return retriever.search(q, k=10)

def assemble_prompt(context, q):
    # Schema-locked prompt with cite-first ordering
    return prompt.format(context=context, question=q)

def wfgy_checks(q, context, answer):
    # Compute ΔS(question, context) and enforce the acceptance thresholds.
    # Record snippet_id, section_id, source_url, offsets, tokens per snippet.
    metrics = metrics_and_trace(q, context, answer)
    if metrics["risk"]:
        raise RuntimeError("WFGY gate: high ΔS or divergent λ")
    return metrics

researcher = Agent(
    role="retrieval",
    goal="fetch auditable snippets with fields locked",
    backstory="RAG specialist who always cites first",
)

writer = Agent(
    role="reasoning",
    goal="answer with cite then explain using the snippet schema",
    backstory="keeps λ convergent and avoids cross section reuse",
)

task_retrieve = Task(
    description="Retrieve k=10 with unified analyzer, return snippet schema",
    agent=researcher,
    expected_output="list of snippets with {snippet_id, section_id, source_url, offsets, tokens}",
)

task_answer = Task(
    description="Assemble cite-first prompt and answer with strict JSON",
    agent=writer,
    expected_output="{citations:[...], answer:'...'}",
)

crew = Crew(agents=[researcher, writer], tasks=[task_retrieve, task_answer])

def run(question):
    context = retrieve_snippets(question)
    msg = assemble_prompt(context, question)
    answer = crew.kickoff(inputs={"msg": msg})
    metrics = wfgy_checks(question, context, answer)
    return {"answer": answer, "metrics": metrics}
```
## What this enforces

- Retrieval is observable and parameterized. Analyzer and metric are unified.
- The prompt is schema locked, with cite-first ordering and strict JSON for tool outputs.
- A post-generation WFGY gate can halt the run when ΔS is high or λ flips.
- Traces record the snippet-to-citation mapping for audits.
Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts
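The snippet schema that the tasks above lock down can be captured as a frozen dataclass. The field names follow the `{snippet_id, section_id, source_url, offsets, tokens}` contract from the example; the concrete types here are assumptions for illustration.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Snippet:
    """Locked snippet fields per the Data Contracts page.

    Freezing the dataclass keeps downstream agents from mutating
    citations in flight; types are illustrative assumptions.
    """
    snippet_id: str
    section_id: str
    source_url: str
    offsets: tuple  # assumed (start, end) character offsets into the source
    tokens: int

def to_citation(snippet):
    # Cite-first: emit the citation record before any generated prose.
    return {
        "snippet_id": snippet.snippet_id,
        "section_id": snippet.section_id,
        "source_url": snippet.source_url,
    }
```

Passing these objects across the task boundary, instead of raw strings, is what makes "citations missing or inconsistent across agents" detectable at the schema level.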
## CrewAI specific gotchas

- Mixed embedding functions across write and read. Rebuild with an explicit metric and normalization. See Embedding ≠ Semantic.
- Planner emits tool prompts that bypass the schema. Always validate tool arguments and echo the schema at every step. See Prompt Injection.
- Memory overwrite between agents. Stamp `mem_rev` and `mem_hash`, and split namespaces by agent role. See role drift · memory desync.
- Event storms when multiple tasks write to the same index or KV. Add idempotency keys on `{source_id, mem_rev, index_hash}`. See Retrieval Traceability.
- Long runs degrade style and flip answers. Split the plan, then rejoin with a BBCR bridge and clamp with BBAM. See Context Drift.
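The idempotency-key gotcha above can be sketched as a stable hash over the three fields. SHA-256 over canonically serialized JSON is one reasonable choice, not a WFGY requirement; any deterministic digest works as long as every writer uses the same one.

```python
import hashlib
import json

def idempotency_key(source_id, mem_rev, index_hash):
    """Derive a stable key over {source_id, mem_rev, index_hash}.

    Writers that share an index or KV can drop any event whose key
    they have already applied, which stops the event storms above.
    """
    payload = json.dumps(
        {"source_id": source_id, "mem_rev": mem_rev, "index_hash": index_hash},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```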
## When to escalate

- ΔS remains ≥ 0.60: rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook.
- Identical input yields different answers across runs: check for version skew and session state. See Pre-Deploy Collapse.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 **Early Stargazers**: see the Hall of Fame, the engineers, hackers, and open source builders who supported WFGY from day one.

⭐ WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.