# AutoGen: Guardrails and Fix Patterns
Use this page when your orchestration uses AutoGen (ConversableAgent, GroupChat, function tools) and you see tool loops, wrong snippets, role mixing, or answers that flip between runs. The table maps symptoms to exact WFGY fix pages and gives a minimal recipe you can paste.
## Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the intended section or record
- λ stays convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat on long windows
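The targets above can be wired into a small acceptance gate. This is a minimal sketch, assuming ΔS is computed as 1 − cosine similarity between embedding vectors (the WFGY pages define the actual metric); `delta_s` and `meets_targets` are hypothetical helper names:

```python
import math

def delta_s(vec_a, vec_b):
    """ΔS sketch: 1 - cosine similarity between two embedding vectors (assumed form)."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / norm

def meets_targets(ds, coverage, lambda_states):
    """Acceptance gate: ΔS ≤ 0.45, coverage ≥ 0.70, λ convergent on every probe.

    lambda_states holds one label per probe, e.g. 3 paraphrases x 2 seeds = up to 6 entries.
    """
    return ds <= 0.45 and coverage >= 0.70 and all(s == "convergent" for s in lambda_states)
```

A run passes only when all three conditions hold, e.g. `meets_targets(0.32, 0.81, ["convergent"] * 5)`.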
## Open these first

- Visual map and recovery: RAG Architecture & Recovery
- End to end retrieval knobs: Retrieval Playbook
- Why this snippet: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk edges: Hallucination
- Long chains and entropy: Context Drift · Entropy Collapse
- Structural collapse and recovery: Logic Collapse
- Prompt injection and schema locks: Prompt Injection
- Multi agent conflicts: Multi-Agent Problems
- Bootstrap and deployment ordering: Bootstrap Ordering · Deployment Deadlock · Pre-Deploy Collapse
- Snippet and citation schema: Data Contracts
## Typical breakpoints and the right fix

- Function tool calls wait on each other or retry forever.
  Fix: lock roles, add timeouts, and echo a strict JSON schema in every tool response.
  Open: Multi-Agent Problems · Logic Collapse
- Two agents overwrite the same memory namespace.
  Fix: stamp `mem_rev` and `mem_hash`, split read and write, forbid cross section reuse.
  Open: role-drift · memory-overwrite
- High similarity yet wrong meaning.
  Fix: check for a metric or index mismatch, or mixed write and read embeddings.
  Open: Embedding ≠ Semantic
- Hybrid stack worse than a single retriever.
  Fix: lock the two stage query and add a deterministic reranker.
  Open: Query Parsing Split · Rerankers
- Facts exist in the store yet never show up.
  Fix: repair fragmentation or sharding misalignment.
  Open: Vectorstore Fragmentation
- Citations inconsistent across agents or steps.
  Fix: require cite then explain and lock snippet fields.
  Open: Retrieval Traceability · Data Contracts
- Long GroupChat runs change style and logic.
  Fix: split the plan and rejoin with a BBCR bridge.
  Open: Context Drift · Entropy Collapse
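The first breakpoint (tool calls that wait on each other or retry forever) comes down to two mechanical guards: a hard timeout and a bounded retry budget, with the schema echoed in every response. A minimal sketch, where `TOOL_SCHEMA` and `guarded_tool_call` are hypothetical names and the wrapper would sit between AutoGen and your tool function:

```python
import json
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

# Echoed back on every call so free text cannot silently replace the contract.
TOOL_SCHEMA = {"query": "string", "top_k": "int"}

def guarded_tool_call(fn, args, timeout_s=10.0, max_retries=2):
    """Run a tool with a hard timeout and a bounded retry budget; echo the schema back."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        for _attempt in range(max_retries + 1):
            try:
                result = pool.submit(fn, **args).result(timeout=timeout_s)
            except FuturesTimeout:
                continue  # bounded retry, never an infinite loop
            return json.dumps({"schema": TOOL_SCHEMA, "result": result})
        # Retries exhausted: report a structured error instead of hanging the group chat.
        return json.dumps({"schema": TOOL_SCHEMA, "error": "timeout", "attempts": max_retries + 1})
    finally:
        pool.shutdown(wait=False)
```

The auditor agent can then reject any tool message whose `schema` field does not match the locked contract.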
## Fix in 60 seconds

1. Measure ΔS.
   Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.
2. Probe λ_observe.
   Do a k sweep in retrieval (5, 10, 20). If ΔS stays high and flat, suspect a metric or index mismatch.
   Reorder prompt headers. If λ flips, lock the schema and clamp variance with BBAM.
3. Apply the module.
   - Retrieval drift → BBMC plus Data Contracts
   - Reasoning collapse → BBCR bridge plus BBAM, verify with Logic Collapse
   - Hallucination re-entry after a fix → Pattern: Hallucination Re-entry
4. Verify.
   Coverage ≥ 0.70, ΔS ≤ 0.45, and λ convergent across three paraphrases and two seeds.
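The k sweep in step 2 can be sketched as a small probe. Assumptions: `retrieve(question, k)` returns documents, `embed` maps a document to a vector, ΔS is approximated as 1 − cosine similarity, and the "high and flat" thresholds (≥ 0.60, spread < 0.05) are illustrative, not WFGY-official:

```python
import math

def _delta_s(a, b):
    """ΔS sketch: 1 - cosine similarity (assumed form of the WFGY metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def k_sweep_probe(retrieve, question, anchor_vec, embed, ks=(5, 10, 20)):
    """Run retrieval at several k and track the best ΔS per k.

    A curve that stays high AND flat across k points at a metric or index
    mismatch, not a recall problem: raising k cannot fix the wrong distance.
    """
    curve = [min(_delta_s(embed(doc), anchor_vec) for doc in retrieve(question, k)) for k in ks]
    flat = max(curve) - min(curve) < 0.05
    return {"curve": curve, "high_and_flat": flat and min(curve) >= 0.60}
```

If `high_and_flat` comes back true, skip prompt-side tuning and go straight to the Embedding ≠ Semantic and Retrieval Playbook checks.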
## Minimal AutoGen topology with WFGY checks

```python
# Pseudocode: focus on the control points you must keep.
# search_tool and rag_tool are placeholders; wire real functions
# through the AutoGen tool registration APIs for your version.
from autogen import ConversableAgent, GroupChat, GroupChatManager

user = ConversableAgent("user", system_message="task only")
retriever = ConversableAgent("retriever", tools=[search_tool])
reasoner = ConversableAgent("reasoner", tools=[rag_tool])
auditor = ConversableAgent("auditor", system_message="cite-then-explain, schema-locked")

group = GroupChat(
    agents=[user, retriever, reasoner, auditor],
    messages=[],
    max_round=8,
    speaker_selection_method="auto",
)
manager = GroupChatManager(groupchat=group)

# Guardrails to add around the loop:
# 1) every tool result must echo the JSON schema
# 2) each step writes {snippet_id, section_id, source_url, offsets, tokens}
# 3) after generation, run WFGY checks and stop if ΔS ≥ 0.60 or λ is divergent
```
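Guardrail 3 reduces to a short gate run after each generation step. A minimal sketch, assuming ΔS has already been measured and `lambda_states` holds one `"convergent"`/`"divergent"` label per probe (`wfgy_gate` is a hypothetical name):

```python
def wfgy_gate(ds, lambda_states):
    """Post-generation check: halt the run when ΔS ≥ 0.60 or λ flips on any probe."""
    if ds >= 0.60 or any(s != "convergent" for s in lambda_states):
        return "halt"
    return "continue"
```

In the GroupChat loop, a `"halt"` result should end the round and route the transcript to the relevant fix page instead of letting the chain keep drifting.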
## What this enforces
- Tools return strict JSON and echo the schema. Free text cannot pollute arguments.
- Snippet fields are fixed and citations come first.
- Post generation WFGY checks can halt the run when ΔS is high or λ flips.
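The fixed snippet fields and cite-first ordering can be enforced with a small data-contract check. A sketch under stated assumptions: the field list comes from the page above, while `validate_snippet` and `cite_then_explain` are hypothetical helper names:

```python
REQUIRED_FIELDS = ("snippet_id", "section_id", "source_url", "offsets", "tokens")

def validate_snippet(record):
    """Data-contract check: reject any snippet record missing a locked field."""
    missing = [f for f in REQUIRED_FIELDS if f not in record]
    if missing:
        raise ValueError(f"snippet record missing fields: {missing}")
    return record

def cite_then_explain(snippets, explanation):
    """Citations come first; free text may only follow validated snippet records."""
    return {"citations": [validate_snippet(s) for s in snippets], "explanation": explanation}
```

Running every agent's output through this gate means a missing `source_url` or `offsets` fails loudly at the step that produced it, instead of surfacing later as an untraceable citation.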
Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts
## AutoGen specific gotchas

- Function schema is too loose and allows mixed JSON and free text. Lock parameters and echo the schema on every call. See Prompt Injection.
- GroupChat runs are too long and entropy rises. Split the plan and rejoin with a BBCR bridge. See Context Drift.
- Shared memory overwrites between agents. Add `mem_rev` and `mem_hash`, forbid cross section reuse. See memory-overwrite.
- Retrieval and rerank are inconsistent across agents. Unify the analyzer and metric, or add a reranker. See Rerankers.
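The memory-overwrite guard above can be sketched as a stamped write path: each write carries `mem_rev` and `mem_hash`, and a write is rejected unless its revision strictly advances the stored one. `stamp` and `safe_write` are hypothetical names and `store` is a plain dict standing in for your memory backend:

```python
import hashlib

def stamp(namespace, section_id, payload, rev):
    """Build a write entry carrying mem_rev and mem_hash for the shared slot."""
    return {
        "key": f"{namespace}:{section_id}",  # one section per key, no cross section reuse
        "mem_rev": rev,
        "mem_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "payload": payload,
    }

def safe_write(store, entry):
    """Reject a write unless its revision strictly advances the stored one."""
    old = store.get(entry["key"])
    if old is not None and entry["mem_rev"] <= old["mem_rev"]:
        raise ValueError("stale write rejected: bump mem_rev before overwriting")
    store[entry["key"]] = entry
```

Two agents racing on the same slot now fail fast on the stale write, and `mem_hash` lets an auditor verify that the payload actually matches what was stamped.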
## When to escalate

- ΔS remains ≥ 0.60: rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook.
- Identical input yields different answers across runs: check version skew and session state. See Pre-Deploy Collapse.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: see the Hall of Fame.
Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.