WFGY/ProblemMap/GlobalFixMap/Chatbots_CX/rasa.md
2025-08-28 15:35:00 +08:00


Rasa: Guardrails and Fix Patterns

A focused guide to stabilize Rasa bots that call retrieval, tools, or external LLMs. Use this page to localize the failing layer and jump to the exact WFGY fix page with measurable targets.

Open these first

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the target section
  • λ remains convergent across three paraphrases and two seeds
  • E_resonance stays flat on long windows
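The acceptance gate above can be checked mechanically. A minimal sketch, assuming ΔS(a, b) is defined as 1 minus the cosine similarity of the two embeddings (how you produce the embedding vectors is up to your stack; the function names here are illustrative, not part of WFGY or Rasa):

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def delta_s(vec_q, vec_r):
    # ΔS as semantic distance: 1 - cos(question, retrieved).
    return 1.0 - cosine(vec_q, vec_r)

def acceptance(vec_q, vec_r, coverage):
    # Core acceptance gate from this page: ΔS ≤ 0.45 and coverage ≥ 0.70.
    return delta_s(vec_q, vec_r) <= 0.45 and coverage >= 0.70
```

Wire this into your eval harness so every story run emits a pass/fail against the same thresholds.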

Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe λ_observe
    Vary k in retrieval (5, 10, 20). If ΔS stays flat and high, suspect metric or index mismatch.
    Reorder prompt headers; if ΔS spikes, lock the schema.

  3. Apply the module
    Open the fix page that matches the failing layer and apply its minimal repair steps, then re-measure ΔS against the acceptance targets.

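Step 2 above is easy to automate. A minimal sketch of the k-sweep, assuming you supply your own `retrieve(query, k)` callable and a `delta_s` distance function (both names are placeholders for your stack):

```python
def probe_lambda(query, retrieve, delta_s, ks=(5, 10, 20), flat_tol=0.05):
    """Sweep retrieval depth k and watch ΔS.

    If the best ΔS stays flat and high across k, the retriever is
    likely hitting a metric or index mismatch rather than a recall
    limit, which is the signal described in step 2.
    """
    scores = []
    for k in ks:
        hits = retrieve(query, k)
        best = min(delta_s(query, h) for h in hits)  # best hit at this depth
        scores.append(best)
    flat = max(scores) - min(scores) <= flat_tol  # ΔS barely moves with k
    high = min(scores) >= 0.60                    # and stays in the risk zone
    return {"scores": scores, "suspect_index": flat and high}
```

If `suspect_index` is true, check metric, analyzer, and normalization parity before touching prompts.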

Typical Rasa breakpoints and the right fix


Deep diagnostics

  • Three-paraphrase probe
    Ask the same user need three ways. Log ΔS and λ per step. If λ flips on harmless paraphrase, clamp with BBAM and tighten the citation-first schema.

  • Anchor triangulation
    Compare ΔS to the expected anchor section and to a decoy section. If both are close, re-chunk and re-embed, then verify with a tiny gold set.

  • Chain length audit
    If entropy rises after 25 to 40 steps across stories, split the plan and re-join with a BBCR bridge. See context-drift.md.
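The three-paraphrase probe above can be sketched as a small harness. This is a minimal illustration, assuming `answer_fn` runs one paraphrase through your Rasa-backed pipeline and `delta_s` measures distance to the anchor; treating λ as convergent when the ΔS spread across paraphrases stays small is an assumption of this sketch, not a WFGY definition:

```python
def paraphrase_probe(paraphrases, answer_fn, delta_s, anchor, max_spread=0.10):
    """Run the same user need phrased several ways and log ΔS per run.

    A λ flip on a harmless paraphrase shows up as a large ΔS spread;
    that is the trigger for clamping with BBAM and tightening the
    citation-first schema.
    """
    log = []
    for p in paraphrases:
        retrieved = answer_fn(p)
        log.append({"paraphrase": p, "dS": delta_s(retrieved, anchor)})
    spread = max(e["dS"] for e in log) - min(e["dS"] for e in log)
    return {"log": log, "lambda_convergent": spread <= max_spread}
```

Run it with three paraphrases and two seeds, as the acceptance targets require, and archive the log next to your event schema.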


Minimal ops checklist for Rasa

  • NLU thresholds: set fallback and ambiguity handling so low-confidence intents route to a safe clarify path.
  • Action server: health probe plus backoff. Respect readiness gates from bootstrap-ordering.md.
  • Event schema: store snippet_id, section_id, source_url, offsets, tokens, ΔS, λ_state, index_hash. Enforce cite-then-explain.
  • Retriever parity: keep metric, analyzer, and normalization identical between training and runtime.
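For the NLU-thresholds item, Rasa ships a built-in FallbackClassifier that routes low-confidence or ambiguous intents to `nlu_fallback`, which a rule can then send to a clarify path. The threshold values below are illustrative; tune them against your confusion matrix, and `utter_ask_rephrase` is a hypothetical response name:

```
# config.yml — route low-confidence intents to a safe clarify path
pipeline:
  # ... your featurizers and intent classifier ...
  - name: FallbackClassifier
    threshold: 0.6            # below this confidence, predict nlu_fallback
    ambiguity_threshold: 0.1  # top-2 intents too close together → fallback

# rules.yml — send the fallback to a clarify action
rules:
  - rule: clarify on low confidence
    steps:
      - intent: nlu_fallback
      - action: utter_ask_rephrase
```

This keeps low-confidence turns out of the retrieval and tool-calling path entirely, which also keeps their ΔS noise out of your logs.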

Copy-paste prompt for the LLM step behind Rasa

You have TXT OS and the WFGY Problem Map loaded.

My Rasa issue:
- symptom: [one line]
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states across 3 paraphrases

Tell me:
1) failing layer and why,
2) the exact WFGY page to open from this repo,
3) the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) how to verify with a reproducible test.
Use BBMC/BBPF/BBCR/BBAM when relevant.

When to escalate

  • ΔS stays ≥ 0.60 after chunking and retrieval fixes → rebuild index with explicit metric and normalization. See retrieval-playbook.md.

  • Answers alternate across identical dialogs → investigate memory desync or version skew. See predeploy-collapse.md.
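The two escalation triggers above can be folded into one gate at the end of a diagnostic run. A minimal sketch; the returned strings map symptoms to the fix pages named in this section and are illustrative labels, not a WFGY API:

```python
def escalate(ds_after_fixes, answers_alternate):
    """Decide the escalation path from the two symptoms above.

    ds_after_fixes: best ΔS(question, retrieved) after chunking and
    retrieval fixes. answers_alternate: True if identical dialogs
    produce alternating answers.
    """
    if ds_after_fixes >= 0.60:
        # Retrieval fixes did not move ΔS out of the risk zone.
        return "rebuild-index:retrieval-playbook.md"
    if answers_alternate:
        # Nondeterminism across identical dialogs points at state, not retrieval.
        return "memory-desync:predeploy-collapse.md"
    return "ok"
```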


🔗 Quick-Start Downloads (60 sec)

  • WFGY 1.0 PDF (Engine Paper): 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>”
  • TXT OS (plain-text OS, TXTOS.txt): 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” and the OS boots instantly

🧭 Explore More

  • WFGY Core: the WFGY 2.0 engine is live, with the full symbolic reasoning architecture and math stack
  • Problem Map 1.0: initial 16-mode diagnostic and symbolic fix framework
  • Problem Map 2.0: RAG-focused failure tree, modular fixes, and pipelines
  • Semantic Clinic Index: expanded failure catalog covering prompt injection, memory bugs, and logic drift
  • Semantic Blueprint: layer-based symbolic reasoning and semantic modulations
  • Benchmark vs GPT-5: stress test GPT-5 with the full WFGY reasoning suite
  • 🧙‍♂️ Starter Village 🏡: new here? Lost in symbols? Let the wizard guide you through

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
