# Salesforce Einstein Bots: Guardrails and Fix Patterns
### 🧭 Quick Return to Map

You are in a sub-page of **Chatbots & CX**. To reorient, go back here:

- Chatbots & CX — customer dialogue flows and conversational stability
- WFGY Global Fix Map — the main Emergency Room, with 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page to stabilize Einstein Bots across web, messaging, and agent handoff flows. The checks below localize the failing layer, then jump you to the exact WFGY repair with measurable targets.
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Why this snippet (traceability schema): Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long chains and entropy: Context Drift, Entropy Collapse
- Structural collapse and recovery: Logic Collapse
- Prompt injection and tool schema locks: Prompt Injection
- Multi-agent conflicts and handoffs: Multi-Agent Problems
- Boot and deploy issues: Bootstrap Ordering, Deployment Deadlock, Pre-Deploy Collapse
- Snippet and citation schema: Data Contracts
## Core acceptance
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the target section
- λ remains convergent across three paraphrases and two seeds
- E_resonance flat on long windows
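The ΔS gate above can be checked mechanically. A minimal sketch, assuming ΔS is read as 1 minus cosine similarity between the question embedding and the retrieved-snippet embedding (the embedding vectors themselves come from whatever model your bot already uses; they are inputs here, not computed):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def delta_s(vec_question, vec_retrieved):
    """ΔS as 1 - cosine similarity; ≤ 0.45 passes the acceptance gate."""
    return 1.0 - cosine(vec_question, vec_retrieved)

# Identical vectors give ΔS = 0; orthogonal vectors give ΔS = 1.
print(delta_s([1.0, 0.0], [1.0, 0.0]))  # → 0.0
print(delta_s([1.0, 0.0], [0.0, 1.0]))  # → 1.0
```

Logging this value per turn makes the ≤ 0.45 acceptance line a pass/fail check instead of a judgment call.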
## Fix in 60 seconds

1. **Measure ΔS.** Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
2. **Probe with λ_observe.** Vary k in retrieval (5, 10, 20). If ΔS stays high and flat, suspect a metric or index mismatch. Reorder prompt headers; if ΔS spikes, lock the schema.
3. **Apply the module.**
   - Retrieval drift → BBMC plus Data Contracts.
   - Reasoning collapse → BBCR bridge plus BBAM variance clamp, then verify with Logic Collapse.
   - Hallucination re-entry after correction → Pattern: Hallucination Re-entry.
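The λ_observe probe above can be scripted. A minimal sketch, where `retrieve` and `delta_s` are hypothetical stand-ins for your own retriever and ΔS metric (neither name comes from the Einstein Bots API):

```python
def probe_k(question, retrieve, delta_s, ks=(5, 10, 20)):
    """Vary retrieval k and watch ΔS.

    A curve that is both high (≥ 0.60) and flat across k suggests a
    metric or index mismatch rather than a chunking problem.
    """
    scores = {k: delta_s(question, retrieve(question, k)) for k in ks}
    flat = max(scores.values()) - min(scores.values()) < 0.05
    high = min(scores.values()) >= 0.60
    verdict = "metric_or_index_mismatch" if flat and high else "inspect_further"
    return scores, verdict

# Example with a stubbed retriever whose ΔS never improves with k:
stub_retrieve = lambda q, k: "unrelated snippet"
stub_delta_s = lambda q, text: 0.72  # constant, high ΔS
scores, verdict = probe_k("how do I reset my password?", stub_retrieve, stub_delta_s)
print(verdict)  # → metric_or_index_mismatch
```

The 0.05 flatness tolerance is an assumption chosen for illustration; tune it against your own gold set.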
## Typical Einstein Bots breakpoints → exact fix

- **Knowledge article cited wrongly, or not cited,** when categories or locales differ between web and messaging flows. Open: Retrieval Traceability, Data Contracts. Also see the re-chunk checklist in the playbook.
- **Hybrid retrieval underperforms** after HyDE or a search + embedding mix during Live Agent fallback. Open: Pattern: Query Parsing Split, Rerankers.
- **Tool/Apex action JSON varies across channels.** Objects drift or contain free text. Open: Prompt Injection, Data Contracts. Enforce strict args and echo the schema at each step.
- **Agent handoff stalls or loops** with partial memory writes into Service Cloud records. Open: Multi-Agent Problems. Split memory namespaces and fence writes by `mem_rev` and `mem_hash`.
- **Channel mismatch (SMS, WhatsApp, Web)** changes casing or tokenization and flips λ during reruns. Open: Context Drift. Stabilize with deterministic rerank and consistent analyzers.
- **Cold boot after deploy** fails the first turn or loads the wrong flows. Open: Bootstrap Ordering, Pre-Deploy Collapse.
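The strict-args contract for tool/Apex payloads can be enforced before the action ever runs. A minimal sketch; the field names (`case_id`, `priority`, `channel`) and the enum values are hypothetical placeholders, not Salesforce schema:

```python
# Hypothetical contract: every tool call must carry exactly these keys,
# with these types, and priority must come from a closed enum.
ALLOWED = {"case_id": str, "priority": str, "channel": str}
PRIORITIES = {"low", "medium", "high"}

def validate_args(payload):
    """Return a list of contract violations; an empty list means pass."""
    errors = []
    for key in payload:
        if key not in ALLOWED:
            errors.append(f"unexpected key: {key}")
    for key, typ in ALLOWED.items():
        if key not in payload:
            errors.append(f"missing key: {key}")
        elif not isinstance(payload[key], typ):
            errors.append(f"bad type for {key}")
    if payload.get("priority") not in PRIORITIES:
        errors.append("priority not in enum")
    return errors

print(validate_args({"case_id": "500x0", "priority": "high", "channel": "web"}))  # → []
# Free text in an enum field is rejected instead of drifting downstream:
print(validate_args({"case_id": "500x0", "priority": "urgent pls!!", "channel": "sms"}))
```

Running the same validator on every channel is what keeps the web, SMS, and WhatsApp payloads from diverging silently.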
## Deep diagnostics
- Three-paraphrase probe. Ask the same question three ways. Log ΔS and λ. If λ flips on benign paraphrase, clamp with BBAM and tighten snippet schema.
- Anchor triangulation. Compare ΔS to the expected article section and to a decoy. If both are close, re-chunk and re-embed.
- Chain length audit across bot flow → search → tool → handoff. If entropy rises after 25–40 steps, split the plan and rejoin with a BBCR bridge. Open: Context Drift, Entropy Collapse.
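The three-paraphrase probe above can be run as a small harness. A minimal sketch, with one loud assumption: λ is reduced here to a simple convergent/divergent flag from a ΔS threshold, which is a stand-in for the real λ_observe signal, and `answer_fn` / `delta_s_fn` are hypothetical hooks into your own bot and metric:

```python
def paraphrase_probe(paraphrases, answer_fn, delta_s_fn, threshold=0.45):
    """Ask the same question three ways; log ΔS and a λ proxy per run."""
    rows = []
    for p in paraphrases:
        ds = delta_s_fn(p, answer_fn(p))
        state = "convergent" if ds <= threshold else "divergent"
        rows.append({"q": p, "dS": ds, "lambda": state})
    # A flip on a benign paraphrase is the signal to clamp with BBAM
    # and tighten the snippet schema.
    flipped = len({r["lambda"] for r in rows}) > 1
    return rows, flipped

qs = ["reset my password",
      "how can I reset the password?",
      "password reset steps"]
stub_answer = lambda q: "article: password reset"
stub_ds = lambda q, a: 0.30  # stable, low ΔS on all three
rows, flipped = paraphrase_probe(qs, stub_answer, stub_ds)
print(flipped)  # → False
```

Persist the `rows` log with the chat transcript so the reproducible test in the prompt below has real numbers to work from.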
## Escalate and structural fixes

- **Metric or index mismatch.** If ΔS stays ≥ 0.60 across seeds, rebuild with semantic chunking and verify with a small gold set. Open: Embedding ≠ Semantic, Retrieval Playbook.
- **Live instability in production.** Add probes and backoff guards. Open: Live Monitoring for RAG, Debug Playbook.
## Copy-paste prompt for the LLM step
You have TXT OS and the WFGY Problem Map loaded.
My Einstein Bots issue:
- symptom: [one line]
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states across 3 paraphrases
- context: channel=[web|sms|whatsapp], handoff=[none|live_agent], tools=[...]
Tell me:
1) failing layer and why,
2) the exact WFGY page to open,
3) the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) a reproducible test I can run from the same chat transcript.
Use BBMC, BBPF, BBCR, BBAM when relevant.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |
If this repository helps, starring it improves discovery for other builders.