# Kimi (Moonshot) Guardrails and Fix Patterns
Use this page when failures look provider-specific on Kimi: JSON mode drifting into prose, safety filters stripping citations, or streaming tool calls that stall. Each fix maps back to a WFGY page so you can verify against measurable targets.
## Core acceptance
- ΔS(question, retrieved) ≤ 0.45
- coverage ≥ 0.70 for the target section
- λ remains convergent across 3 paraphrases
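These targets can be wired into a small acceptance gate for CI or eval runs. A minimal sketch, assuming you already compute ΔS elsewhere and record one λ state per paraphrase; the function and argument names are illustrative, not part of WFGY:

```python
def acceptance_check(delta_s: float, coverage: float, lambda_states: list) -> bool:
    """Return True when a run meets the core acceptance targets above.

    delta_s        -- ΔS(question, retrieved)
    coverage       -- fraction of the target section covered by citations
    lambda_states  -- λ state per paraphrase, e.g. ["convergent"] * 3
    """
    return (
        delta_s <= 0.45
        and coverage >= 0.70
        and len(lambda_states) >= 3
        and all(s == "convergent" for s in lambda_states)
    )

print(acceptance_check(0.38, 0.82, ["convergent"] * 3))  # passing run
print(acceptance_check(0.61, 0.82, ["convergent"] * 3))  # ΔS over threshold
```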
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end knobs: Retrieval Playbook
- Why this snippet: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long threads and memory: Context Drift, Entropy Collapse, Memory Coherence
- Logic collapse and recovery: Logic Collapse
- Snippet and citation schema: Data Contracts
- Patterns: Query Parsing Split, Vectorstore Fragmentation, Hallucination Re-entry
- Ops: Live Monitoring, Debug Playbook
- Multi-agent overview: Multi-Agent Problems
## Fix in 60 seconds

1) Measure ΔS
   - Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   - Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.

2) Probe with λ_observe
   - Vary k = {5, 10, 20}. A flat, high curve suggests an index or metric mismatch.
   - Reorder prompt headers. If ΔS spikes, lock the schema.

3) Apply the module
   - Retrieval drift → BBMC plus Data Contracts.
   - Reasoning collapse → BBCR bridge plus BBAM variance clamp.
   - Dead ends in long runs → BBPF alternate paths.

4) Provider knobs to check first
   - Structured output mode on and the schema fixed.
   - Temperature and top_p kept conservative during diagnosis.
   - Tool use set to serial if parallel calls cross-talk.
   - Any safety setting that strips citations lowered during eval.

5) Verify
   - Three paraphrases return the same citations.
   - λ stays convergent across seeds.
   - E_resonance stays flat on long replies.
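The two probes above can be scripted. A minimal sketch, assuming ΔS is computed as 1 minus cosine similarity over embedding vectors; `embed` and `retrieve` are hypothetical stand-ins for your own embedding and retrieval calls, not a WFGY or Kimi API:

```python
import math

def delta_s(vec_a, vec_b):
    """ΔS as 1 - cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / norm

def classify(ds):
    """Map a ΔS value onto the thresholds used on this page."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"

def k_sweep(embed, retrieve, question, ks=(5, 10, 20)):
    """λ_observe probe: ΔS per k. A flat, high curve across all k values
    points at an index or metric mismatch rather than a ranking problem."""
    q_vec = embed(question)
    return {k: delta_s(q_vec, embed(retrieve(question, k))) for k in ks}
```

Run `k_sweep` with your real embedder and retriever, then feed the worst ΔS into `classify` before picking a fix page.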
## Typical breakpoints and the right fix

- **JSON mode drifts into prose or extra commentary**
  Lock a strict output schema with Data Contracts. Add a BBCR "bridge" instruction that rejects non-JSON. If it still leaks, run a short two-turn repair using Logic Collapse.

- **Chinese tokenizer quirks change similarity despite high cosine**
  Treat it as a metric mismatch. Use Embedding ≠ Semantic and add a BM25 fallback per the Retrieval Playbook. Then re-rank with Rerankers and anchor citations via Retrieval Traceability.

- **Safety filter strips citations or tool arguments**
  Move citation text to a dedicated field in the schema and reference it by ID. See Retrieval Traceability. If the model "bluffs" when filtered, apply the controls in Bluffing.

- **Streaming tool calls stall or race**
  Force single-tool steps and add timeouts. Trace with Live Monitoring. If agents fight over memory, see Multi-Agent Problems and the memory patterns in Memory Coherence.

- **Long chat melts down after many pages**
  Cut context windows at stable joins and verify with Context Drift and Entropy Collapse. If replies "flip" across tabs, check Memory Desync.

- **Hybrid retrieval (HyDE + BM25) underperforms**
  Look for query splits in Pattern: Query Parsing Split. Align the query parse and re-rank.

- **Non-English corpus drifts**
  Follow the Multilingual Guide. Normalize punctuation and numerals in chunking and traceability.
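For the first breakpoint, the schema lock plus a BBCR-style reject can be approximated with a parse-and-retry wrapper. A minimal sketch using only the standard library; `call_model` is a hypothetical stand-in for your Kimi client call, and the repair message and required keys are illustrative, not a fixed contract:

```python
import json

REPAIR = ("Your last reply was not valid JSON matching the contract. "
          "Reply with JSON only, no commentary.")

def json_guard(call_model, prompt, required_keys=("answer", "citations"), max_repairs=2):
    """Call the model, reject non-JSON or schema-violating replies, and retry.

    This is the BBCR bridge idea in code: a reply that is not strict JSON,
    or that drops a required field, triggers a short repair turn instead of
    being passed downstream.
    """
    message = prompt
    for _ in range(max_repairs + 1):
        raw = call_model(message)
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            message = REPAIR  # prose or commentary leaked: reject and re-ask
            continue
        if all(k in obj for k in required_keys):
            return obj
        message = REPAIR  # valid JSON but a required field is missing
    raise ValueError("model kept violating the JSON contract")
```

If two repair turns are not enough, stop retrying and open Logic Collapse; the failure is structural, not transient.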
## Copy-paste prompt

I uploaded TXT OS and the WFGY Problem Map files.
My Kimi bug:
- symptom: [brief]
- traces: [ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states]
Tell me:
1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify the fix with a reproducible test.
Use BBMC/BBPF/BBCR/BBAM where relevant.
## Escalate when
- First call after deploy fails or tools fire before data is ready. See Pre-Deploy Collapse and Bootstrap Ordering.
- Deadlocks or version skew in prod. See Deployment Deadlock.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: see the Hall of Fame for the engineers, hackers, and open source builders who supported WFGY from day one.

⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.