WFGY/ProblemMap/GlobalFixMap/Agents_Orchestration/crewai.md
2025-09-05 09:09:56 +08:00


CrewAI: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of Agents & Orchestration.
To reorient, return to the Agents & Orchestration map.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when your orchestration uses CrewAI (agents, tasks, tools, crews, planning) and you see tool loops, wrong snippets, role mixing, or answers that flip between runs. The table maps symptoms to exact WFGY fix pages and gives a minimal recipe you can paste.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the intended section or record
  • λ stays convergent across 3 paraphrases and 2 seeds
  • E_resonance stays flat on long windows
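These targets can be gated in code. A minimal sketch, assuming ΔS is approximated as one minus cosine similarity between question and snippet embeddings; the WFGY engine defines ΔS more precisely, so treat `delta_s` and `meets_targets` as hypothetical helpers:

```python
import math

def delta_s(vec_a, vec_b):
    """Semantic distance sketch: 1 - cosine similarity of two embeddings."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

def meets_targets(ds_retrieved, coverage):
    # ΔS(question, retrieved) ≤ 0.45 and coverage ≥ 0.70
    return ds_retrieved <= 0.45 and coverage >= 0.70
```

Identical embeddings give ΔS near 0, orthogonal ones give 1.0, so the 0.45 gate sits well inside the "stable" band described below.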

Open these first


Typical breakpoints and the right fix


Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe λ_observe
    Do a k sweep in retrieval and reorder prompt headers. If λ flips, lock the schema and clamp with BBAM.

  3. Apply the module
    Open the fix page that the table maps to your symptom and apply its recipe.

  4. Verify
    Coverage ≥ 0.70. ΔS ≤ 0.45. Three paraphrases and two seeds with λ convergent.
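The "three paraphrases, two seeds" convergence check can be scripted. The sketch below treats λ as convergent when every paraphrase and seed yields the same normalized answer, which is a simplification of WFGY's λ_observe; `ask(q, seed)` is any callable wrapping your crew run:

```python
def lambda_convergent(ask, question, paraphrases, seeds=(0, 1)):
    """Probe λ_observe: the answer should not flip across paraphrases
    and seeds. Returns True when every variant agrees."""
    answers = {ask(q, seed) for q in [question, *paraphrases] for seed in seeds}
    return len(answers) == 1
```

If this returns False, lock the prompt schema and clamp with BBAM before retesting.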

Minimal CrewAI pattern with WFGY checks

```python
# Pseudocode: highlight the control points only.
# `retriever`, `prompt`, and `metrics_and_trace` are placeholders for your stack.
from crewai import Agent, Task, Crew

def retrieve_snippets(q):
    # unified analyzer and metric across dense and sparse
    return retriever.search(q, k=10)

def assemble_prompt(context, q):
    # schema-locked prompt with cite first
    return prompt.format(context=context, question=q)

def wfgy_checks(q, context, answer):
    # compute ΔS(question, context) and enforce thresholds
    # record snippet_id, section_id, source_url, offsets, tokens
    metrics = metrics_and_trace(q, context, answer)
    if metrics["risk"]:
        raise RuntimeError("WFGY gate: high ΔS or divergent λ")
    return metrics

researcher = Agent(
    role="retrieval",
    goal="fetch auditable snippets with fields locked",
    backstory="RAG specialist who always cites first",
)

writer = Agent(
    role="reasoning",
    goal="answer with cite then explain using the snippet schema",
    backstory="keeps λ convergent and avoids cross-section reuse",
)

task_retrieve = Task(
    description="Retrieve k=10 with unified analyzer, return snippet schema",
    agent=researcher,
    expected_output="list of snippets with {snippet_id, section_id, source_url, offsets, tokens}",
)

task_answer = Task(
    description="Assemble cite-first prompt and answer with strict JSON",
    agent=writer,
    expected_output="{citations:[...], answer:'...'}",
)

crew = Crew(agents=[researcher, writer], tasks=[task_retrieve, task_answer])

def run(question):
    context = retrieve_snippets(question)
    msg = assemble_prompt(context, question)
    answer = crew.kickoff(inputs={"msg": msg})
    metrics = wfgy_checks(question, context, answer)
    return {"answer": answer, "metrics": metrics}
```
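The helper `metrics_and_trace` is called above but never defined. A minimal sketch, assuming snippets are dicts carrying the locked fields and that a `delta_s` distance callable is passed in explicitly for illustration (the real WFGY gate computes ΔS inside the engine):

```python
def metrics_and_trace(q, context, answer, delta_s):
    """Hypothetical gate: score question/context distance and keep the
    snippet-to-citation audit trail. `delta_s(a, b)` is any callable
    returning a semantic distance between two texts."""
    ds = min(delta_s(q, c["text"]) for c in context)  # best-matching snippet
    trace = [
        {"snippet_id": c["snippet_id"], "section_id": c["section_id"],
         "source_url": c["source_url"], "offsets": c["offsets"]}
        for c in context
    ]
    # flag risk when ΔS crosses the 0.60 threshold from the recipe above
    return {"delta_s": ds, "risk": ds >= 0.60, "trace": trace}
```

Wire the distance function however your stack provides embeddings; the point is that the gate runs after generation and can halt the crew.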

What this enforces

  • Retrieval is observable and parameterized. Analyzer and metric are unified.
  • Prompt is schema locked with cite first and strict JSON for tool outputs.
  • A post generation WFGY gate can halt the run when ΔS is high or λ flips.
  • Traces record snippet to citation mapping for audits.
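The strict-JSON, cite-first contract can be enforced with a small validator before the answer leaves the crew; `parse_strict_answer` is a hypothetical helper, not part of CrewAI:

```python
import json

def parse_strict_answer(raw):
    """Enforce the writer's contract: strict JSON with citations first."""
    obj = json.loads(raw)  # raises on malformed tool output
    if not obj.get("citations"):
        raise ValueError("cite-first violated: no citations")
    if "answer" not in obj:
        raise ValueError("missing answer field")
    return obj
```

Rejecting the output here, rather than downstream, keeps the snippet-to-citation mapping auditable.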

Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts


CrewAI specific gotchas

  • Mixed embedding functions across write and read. Rebuild with explicit metric and normalization. See Embedding ≠ Semantic

  • Planner emits tool prompts that bypass the schema. Always validate tool arguments and echo the schema every step. See Prompt Injection

  • Memory overwrite between agents. Stamp mem_rev and mem_hash, split namespaces by agent role. See role drift · memory desync

  • Event storms when multiple tasks write to the same index or KV. Add idempotency keys on {source_id, mem_rev, index_hash}. See Retrieval Traceability

  • Long runs degrade style and flip answers. Split the plan, then rejoin with a BBCR bridge and clamp with BBAM. See Context Drift
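The idempotency key from the event-storm bullet can be derived with a plain hash. A sketch, assuming the three fields stringify cleanly:

```python
import hashlib

def idempotency_key(source_id, mem_rev, index_hash):
    """Derive a stable write key so replayed events hit the same slot
    instead of duplicating index or KV entries."""
    payload = f"{source_id}:{mem_rev}:{index_hash}".encode()
    return hashlib.sha256(payload).hexdigest()
```

Any write that carries the same {source_id, mem_rev, index_hash} triple maps to the same key, so retries become no-ops.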


When to escalate

  • ΔS remains ≥ 0.60. Rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook

  • Identical input yields different answers across runs. Check version skew and session state. See Pre-Deploy Collapse
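The gold-set verification mentioned above can be a ten-line script. A sketch, assuming `run(question)` returns retrieved snippets with a `section_id` field and each gold item names the section it must hit:

```python
def gold_set_pass_rate(run, gold):
    """Score a rebuilt index against a small gold set: each item pairs
    a question with the section_id it must retrieve."""
    hits = sum(
        1 for item in gold
        if item["section_id"] in {s["section_id"] for s in run(item["question"])}
    )
    return hits / len(gold)
```

Escalate only after the rebuilt index passes this check; a failing gold set localizes the problem to indexing rather than orchestration.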


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: see the Hall of Fame. Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
