# Intercom: Guardrails and Fix Patterns
## 🧭 Quick Return to Map
You are in a sub-page of Chatbots & CX.
To reorient, go back here:
- Chatbots & CX — customer dialogue flows and conversational stability
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when your Intercom bot blends Fin (AI Agent), Custom Bots, Help Center articles, and webhooks that hit your RAG stack. The checks below localize failures to the exact layer and jump you to the right WFGY fix page. All links are plain-text hyperlinks with absolute GitHub paths.
## Open these first
- Visual map and recovery: rag-architecture-and-recovery.md
- End-to-end retrieval knobs: retrieval-playbook.md
- Traceability: retrieval-traceability.md
- Data schema locks: data-contracts.md
- Embedding vs meaning: embedding-vs-semantic.md
- Hallucination and chunk boundaries: hallucination.md
- Long chains and entropy: context-drift.md, entropy-collapse.md
- Prompt injection and tool schema: prompt-injection.md
- Multi-agent handoffs: Multi-Agent_Problems.md
- Boot order traps: bootstrap-ordering.md, deployment-deadlock.md, predeploy-collapse.md
## Core acceptance (CX)
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the target section
- λ remains convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat over long sessions
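
A minimal gate for these targets, as a sketch in Python; the `ProbeResult` fields are assumptions standing in for your own instrumentation:

```python
# A minimal CX acceptance gate. Thresholds come from the targets above; the
# ProbeResult fields are hypothetical stand-ins for your own instrumentation.
# E_resonance flatness needs a time series and is checked separately.
from dataclasses import dataclass

@dataclass
class ProbeResult:
    delta_s: float            # ΔS(question, retrieved)
    coverage: float           # coverage of the target section
    lambda_states: list[str]  # λ per probe, e.g. ["→", "→"]

def passes_cx_acceptance(runs: list[ProbeResult]) -> bool:
    """True only if every paraphrase/seed run meets all three targets."""
    return all(
        r.delta_s <= 0.45
        and r.coverage >= 0.70
        and all(s == "→" for s in r.lambda_states)  # convergent throughout
        for r in runs
    )
```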
## Fix in 60 seconds

1) **Measure ΔS**
   Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60. A probe sketch follows this list.

2) **Probe λ_observe**
   Vary k and reorder prompt headers. If λ flips on harmless paraphrases, lock the schema and clamp with BBAM.

3) **Apply the module**
   - Retrieval drift → BBMC plus retrieval-traceability.md and data-contracts.md
   - Reasoning collapse in long chats → BBCR bridge plus BBAM; verify with context-drift.md
   - Dead ends in toolchains → BBPF alternate paths

4) **Verify**
   Three paraphrases reach coverage ≥ 0.70 and ΔS ≤ 0.45, and λ stays convergent on two seeds.
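
Step 1 is mechanical once you pick a proxy. A minimal sketch, assuming ΔS is approximated as 1 minus the cosine similarity of your embedding model's vectors; this is a stand-in, not the canonical WFGY metric:

```python
# Hypothetical ΔS probe: 1 - cosine similarity over embedding vectors.
# Treat the proxy itself as an assumption; the bands match this page.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def delta_s(question_vec: list[float], retrieved_vec: list[float]) -> float:
    return 1.0 - cosine(question_vec, retrieved_vec)

def zone(ds: float) -> str:
    """Map a ΔS value to the bands used on this page."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"
```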
## Typical Intercom symptoms → exact fix

- **Fin answers without citing the right Help Center article.** Analyzer or metric mismatch, or a fragmented store feeding Fin. → embedding-vs-semantic.md, patterns/pattern_vectorstore_fragmentation.md
- **Resolution Bot hands off to a human too early, or loops.** Boot order or version skew between content sync and the bot. → bootstrap-ordering.md, deployment-deadlock.md
- **Webhook returns 200 but the bot state drifts.** Tool JSON schema too loose; free text in arguments. → data-contracts.md, prompt-injection.md
- **High similarity, wrong snippet.** Metric mismatch, or a hybrid query split between Help Center and an external KB. → retrieval-playbook.md, patterns/pattern_query_parsing_split.md
- **Long threads become inconsistent after 20–40 turns.** Entropy rises with chain length; memory writes collide. → context-drift.md, entropy-collapse.md, Multi-Agent_Problems.md
- **Jailbreak or confident bluffing.** Missing fences and cite-then-explain rules. → bluffing.md, retrieval-traceability.md
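
The list above is effectively a lookup from symptom signature to fix pages. A minimal router as a sketch; the detector labels on the left are hypothetical and should map to whatever signals your pipeline emits:

```python
# Symptom → fix-page routing, mirroring the list above. Keys are hypothetical
# detector labels; values are the fix pages named on this page.
FIX_ROUTES: dict[str, list[str]] = {
    "fin_wrong_citation":      ["embedding-vs-semantic.md",
                                "patterns/pattern_vectorstore_fragmentation.md"],
    "early_handoff_or_loop":   ["bootstrap-ordering.md", "deployment-deadlock.md"],
    "webhook_200_state_drift": ["data-contracts.md", "prompt-injection.md"],
    "high_sim_wrong_snippet":  ["retrieval-playbook.md",
                                "patterns/pattern_query_parsing_split.md"],
    "long_thread_drift":       ["context-drift.md", "entropy-collapse.md",
                                "Multi-Agent_Problems.md"],
    "jailbreak_or_bluffing":   ["bluffing.md", "retrieval-traceability.md"],
}

def route(symptom: str) -> list[str]:
    """Return fix pages for a detected symptom; default to the recovery map."""
    return FIX_ROUTES.get(symptom, ["rag-architecture-and-recovery.md"])
```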
## Minimal webhook recipe

1) **Warm-up fence.** Check `VECTOR_READY`, `INDEX_HASH`, and secrets; short-circuit if not ready. See bootstrap-ordering.md.
2) **Retrieval step.** Call your retriever with an explicit metric and a consistent analyzer. Return `snippet_id`, `section_id`, `source_url`, `offsets`, `tokens`.
3) **ΔS probe.** Compute ΔS(question, retrieved). If ≥ 0.60, mark `needs_fix=true`.
4) **LLM answer step.** The LLM reads TXT OS and the WFGY schema. Enforce cite-then-explain across the retrieved set.
5) **Trace sink.** Store `question`, `ΔS`, `λ_state`, `INDEX_HASH`, `snippet_id`, `dedupe_key` (all five steps are sketched in code after this list).
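
A minimal sketch of the recipe as one handler, using Flask. Everything below the stub line is an assumption: `retrieve()`, `embed()`, `llm_answer()`, and `trace_sink()` stand in for your retriever, embedding model, LLM call, and log store.

```python
import hashlib
import math
import os

from flask import Flask, jsonify, request

app = Flask(__name__)

# --- hypothetical stubs; replace with your real components -----------------
def retrieve(q, k, metric, analyzer):
    return [{"snippet_id": "s1", "section_id": "sec1",
             "source_url": "https://example.com", "offsets": [0, 0],
             "tokens": 0, "text": q}]

def embed(text):
    return [float(ord(c)) for c in text[:16]] or [0.0]

def llm_answer(question, snippets, needs_fix):
    return {"answer": "...", "citations": [s["snippet_id"] for s in snippets],
            "λ_state": "→", "ΔS": 0.0,
            "next_fix": "retrieval-playbook" if needs_fix else ""}

def trace_sink(record):
    print(record)
# ----------------------------------------------------------------------------

def delta_s(a, b):
    """Proxy ΔS: 1 - cosine similarity (an assumption, as noted earlier)."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return 1.0 - (dot / (na * nb) if na and nb else 0.0)

@app.post("/intercom/webhook")
def handle():
    # 1) Warm-up fence: short-circuit if the index is not ready.
    index_hash = os.environ.get("INDEX_HASH")
    if os.environ.get("VECTOR_READY") != "1" or not index_hash:
        return jsonify({"error": "index not ready"}), 503

    question = request.json["user_question"]

    # 2) Retrieval with an explicit metric and a consistent analyzer.
    snippets = retrieve(question, k=5, metric="cosine", analyzer="default")

    # 3) ΔS probe: flag risk before answering.
    ds = delta_s(embed(question), embed(snippets[0]["text"]))
    needs_fix = ds >= 0.60

    # 4) Cite-then-explain answer over the retrieved set.
    answer = llm_answer(question, snippets, needs_fix=needs_fix)

    # 5) Trace sink: one auditable record per turn.
    trace_sink({
        "question": question,
        "ΔS": round(ds, 2),
        "λ_state": answer["λ_state"],
        "INDEX_HASH": index_hash,
        "snippet_id": snippets[0]["snippet_id"],
        "dedupe_key": hashlib.sha256(
            f"{question}|{index_hash}".encode()).hexdigest(),
    })
    return jsonify(answer)
```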
## Copy-paste prompt for your Intercom webhook

```txt
You have TXT OS and the WFGY Problem Map loaded.

My Intercom context:
- channel: messenger | email | mobile
- bot: Fin | Custom Bot | Resolution Bot
- retrieved: {k} snippets {snippet_id, section_id, source_url, offsets, tokens}

User question: "{user_question}"

Do:
1) Enforce cite-then-explain. If citations are missing or cross-section, fail fast and return the minimal fix tip.
2) If ΔS(question, retrieved) ≥ 0.60, propose the smallest structural repair,
   referencing: retrieval-playbook, retrieval-traceability, data-contracts, rerankers.
3) Return JSON:
   { "answer": "...", "citations": [...], "λ_state": "→|←|<>|×", "ΔS": 0.xx, "next_fix": "..." }

Keep it short and auditable.
```
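
To keep replies auditable, a small validator for the JSON contract in step 3 can fail fast on malformed output. Field names come from the prompt above; the strictness choices are assumptions:

```python
# Validate one bot reply against the JSON contract in the prompt above.
# Field names are from the prompt; range and emptiness checks are assumptions.
import json

ALLOWED_LAMBDA = {"→", "←", "<>", "×"}

def validate_reply(raw: str) -> dict:
    """Parse and sanity-check one reply; raises on contract violations."""
    reply = json.loads(raw)  # raises ValueError on non-JSON
    assert isinstance(reply.get("answer"), str) and reply["answer"], "empty answer"
    assert isinstance(reply.get("citations"), list) and reply["citations"], "no citations"
    assert reply.get("λ_state") in ALLOWED_LAMBDA, "bad λ_state"
    ds = reply.get("ΔS")
    assert isinstance(ds, (int, float)) and ds >= 0.0, "bad ΔS"
    return reply
```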
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.