# Intercom: Guardrails and Fix Patterns
## 🧭 Quick Return to Map
You are in a sub-page of Chatbots & CX.
To reorient, go back here:
- Chatbots & CX — customer dialogue flows and conversational stability
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when your Intercom bot blends Fin (AI Agent), Custom Bots, Help Center articles, and webhooks hitting your RAG stack. The checks localize failures to the exact layer and jump you to the right WFGY fix page. All links are text-hyperlinks, absolute to GitHub.
## Open these first
- Visual map and recovery: rag-architecture-and-recovery.md
- End-to-end retrieval knobs: retrieval-playbook.md
- Traceability: retrieval-traceability.md
- Data schema locks: data-contracts.md
- Embedding vs meaning: embedding-vs-semantic.md
- Hallucination and chunk boundaries: hallucination.md
- Long chains and entropy: context-drift.md, entropy-collapse.md
- Prompt injection and tool schema: prompt-injection.md
- Multi-agent handoffs: Multi-Agent_Problems.md
- Boot order traps: bootstrap-ordering.md, deployment-deadlock.md, predeploy-collapse.md
## Core acceptance (CX)
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the target section
- λ remains convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat over long sessions
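To make these gates checkable in CI, here is a minimal sketch. It assumes ΔS is computed as 1 minus the cosine similarity between the question and retrieved-context embeddings, and that `coverage` against the target section comes from your own evaluator; both are assumptions, so swap in your definitions.

```python
import numpy as np

def delta_s(q_vec: np.ndarray, r_vec: np.ndarray) -> float:
    """ΔS as 1 - cosine similarity (assumed definition); lower is more stable."""
    cos = float(np.dot(q_vec, r_vec) / (np.linalg.norm(q_vec) * np.linalg.norm(r_vec)))
    return 1.0 - cos

def passes_cx_gate(delta: float, coverage: float) -> bool:
    """Core CX acceptance: ΔS ≤ 0.45 and coverage ≥ 0.70 to the target section."""
    return delta <= 0.45 and coverage >= 0.70
```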
## Fix in 60 seconds

1) Measure ΔS. Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
2) Probe λ_observe. Vary k and reorder prompt headers. If λ flips on harmless paraphrases, lock the schema and clamp with BBAM.
3) Apply the module:
   - Retrieval drift → BBMC + retrieval-traceability.md + data-contracts.md
   - Reasoning collapse in long chats → BBCR bridge + BBAM; verify with context-drift.md
   - Dead ends in toolchains → BBPF alternate paths
4) Verify. Three paraphrases reach coverage ≥ 0.70 and ΔS ≤ 0.45, with λ convergent on two seeds (a stability-probe sketch follows this list).
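A hedged sketch of the verification step, assuming `answer_fn` is a hypothetical wrapper around your retriever and LLM that returns (ΔS, coverage) for one question string and one seed:

```python
from typing import Callable

def verify_stability(
    paraphrases: list[str],
    answer_fn: Callable[[str, int], tuple[float, float]],
    seeds: tuple[int, ...] = (0, 1),
) -> bool:
    """Pass only if every paraphrase × seed run meets ΔS ≤ 0.45 and coverage ≥ 0.70."""
    for question in paraphrases:
        for seed in seeds:
            delta, coverage = answer_fn(question, seed)
            if delta > 0.45 or coverage < 0.70:
                return False
    return True
```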
## Typical Intercom symptoms → exact fix

- Fin answers without citing the right Help Center article. Cause: analyzer/metric mismatch, or a fragmented store feeding Fin. → embedding-vs-semantic.md, patterns/pattern_vectorstore_fragmentation.md
- Resolution Bot hands off to a human too early, or loops. Cause: boot order or version skew between content sync and the bot. → bootstrap-ordering.md, deployment-deadlock.md
- Webhook returns 200 but the bot state drifts. Cause: tool JSON schema too loose; free text in arguments (see the schema-lock sketch after this list). → data-contracts.md, prompt-injection.md
- High similarity, wrong snippet. Cause: metric mismatch, or a hybrid query split between Help Center and an external KB. → retrieval-playbook.md, patterns/pattern_query_parsing_split.md
- Long threads become inconsistent after 20–40 turns. Cause: entropy rises with chain length; memory writes collide. → context-drift.md, entropy-collapse.md, Multi-Agent_Problems.md
- Jailbreak or confident bluffing. Cause: missing fences and cite-then-explain rules. → bluffing.md, retrieval-traceability.md
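For the "webhook returns 200 but state drifts" row, a minimal schema-lock sketch. Field names mirror the data contract used elsewhere on this page but are otherwise hypothetical; it uses pydantic v2, though any strict validator works. The point is to reject free-text arguments before they mutate bot state.

```python
from pydantic import BaseModel, Field, ValidationError

class ToolArgs(BaseModel):
    snippet_id: str = Field(pattern=r"^[a-z0-9_-]+$")  # no free text allowed here
    section_id: str
    source_url: str
    offsets: tuple[int, int]
    tokens: int = Field(ge=0)

def parse_args(raw: dict) -> ToolArgs | None:
    """Return validated args, or None so the caller can fail fast."""
    try:
        return ToolArgs(**raw)
    except ValidationError:
        return None  # loose JSON never reaches bot state
```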
## Minimal webhook recipe

- Warm-up fence. Check `VECTOR_READY`, `INDEX_HASH`, and secrets; short-circuit if not ready. See bootstrap-ordering.md.
- Retrieval step. Call your retriever with an explicit metric and a consistent analyzer. Return `snippet_id`, `section_id`, `source_url`, `offsets`, `tokens`.
- ΔS probe. Compute ΔS(question, retrieved). If ≥ 0.60, mark `needs_fix=true`.
- LLM answer step. The LLM reads TXT OS and the WFGY schema. Enforce cite-then-explain across the retrieved set.
- Trace sink. Store `question`, `ΔS`, `λ_state`, `INDEX_HASH`, `snippet_id`, `dedupe_key`.
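A minimal sketch of the recipe as one handler. Every dependency (readiness check, retriever, ΔS scorer, LLM call, trace sink) is injected, since those belong to your stack; the names here are hypothetical and only the control flow mirrors the recipe.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WebhookDeps:
    check_ready: Callable[[], bool]             # verifies VECTOR_READY, INDEX_HASH, secrets
    retrieve: Callable[[str], list[dict]]       # snippets with snippet_id, section_id, source_url, offsets, tokens
    delta_s: Callable[[str, list[dict]], float] # ΔS(question, retrieved)
    answer: Callable[[str, list[dict]], dict]   # cite-then-explain under TXT OS / WFGY schema
    trace: Callable[[dict], None]               # stores question, ΔS, λ_state, INDEX_HASH, snippet_id, dedupe_key

def handle_webhook(payload: dict, deps: WebhookDeps) -> dict:
    # 1) Warm-up fence: short-circuit if the index is not ready.
    if not deps.check_ready():
        return {"needs_fix": True, "reason": "warm-up fence failed"}

    question = payload["user_question"]

    # 2) Retrieval with explicit metric and consistent analyzer.
    snippets = deps.retrieve(question)

    # 3) ΔS probe: flag for structural repair instead of answering blind.
    delta = deps.delta_s(question, snippets)
    needs_fix = delta >= 0.60

    # 4) LLM answer step, cite-then-explain across the retrieved set.
    reply = deps.answer(question, snippets)

    # 5) Trace sink for audits.
    deps.trace({"question": question, "ΔS": delta, "snippets": snippets})

    return {**reply, "ΔS": round(delta, 2), "needs_fix": needs_fix}
```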
## Copy-paste prompt for your Intercom webhook
You have TXT OS and the WFGY Problem Map loaded.
My Intercom context:
- channel: messenger | email | mobile
- bot: Fin | Custom Bot | Resolution Bot
- retrieved: {k} snippets {snippet_id, section_id, source_url, offsets, tokens}
User question: "{user_question}"
Do:
1) Enforce cite-then-explain. If citations are missing or cross-section, fail fast and return the minimal fix tip.
2) If ΔS(question, retrieved) ≥ 0.60, propose the smallest structural repair
referencing: retrieval-playbook, retrieval-traceability, data-contracts, rerankers.
3) Return JSON:
{ "answer": "...", "citations": [...], "λ_state": "→|←|<>|×", "ΔS": 0.xx, "next_fix": "..." }
Keep it short and auditable.
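If you wire this prompt into a webhook, you may want to validate the model's JSON reply before acting on it. A minimal sketch, with key names taken from the contract in the prompt above:

```python
import json

VALID_LAMBDA = {"→", "←", "<>", "×"}

def parse_reply(text: str) -> dict | None:
    """Return the parsed reply only if it honors the prompt's JSON contract."""
    try:
        reply = json.loads(text)
    except json.JSONDecodeError:
        return None
    if not isinstance(reply, dict):
        return None
    required = {"answer", "citations", "λ_state", "ΔS", "next_fix"}
    if not required <= reply.keys():
        return None
    if reply["λ_state"] not in VALID_LAMBDA:
        return None
    return reply
```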
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.