# Kimi (Moonshot) Guardrails and Fix Patterns
## 🧭 Quick Return to Map

You are in a sub-page of LLM_Providers. To reorient, go back here:

- LLM_Providers — model vendors and deployment options
- WFGY Global Fix Map — the main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when failures look provider-specific on Kimi. Examples include JSON mode drifting into prose, safety filters stripping citations, or streaming tool calls that stall. Each fix maps back to a WFGY page so you can verify against measurable targets.
## Core acceptance

- ΔS(question, retrieved) ≤ 0.45
- coverage ≥ 0.70 for the target section
- λ remains convergent across 3 paraphrases
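The three targets above can be checked mechanically. A minimal sketch of an acceptance gate, assuming you already log ΔS, coverage, and a λ state per paraphrase (the field names here are illustrative placeholders, not a WFGY API):

```python
from dataclasses import dataclass

@dataclass
class ProbeResult:
    delta_s: float        # ΔS(question, retrieved) for this paraphrase
    coverage: float       # fraction of the target section retrieved
    lambda_state: str     # "convergent" or "divergent"

def passes_acceptance(runs: list[ProbeResult]) -> bool:
    """Gate on the core acceptance targets across at least 3 paraphrases."""
    return (
        len(runs) >= 3
        and all(r.delta_s <= 0.45 for r in runs)
        and all(r.coverage >= 0.70 for r in runs)
        and all(r.lambda_state == "convergent" for r in runs)
    )
```

Run the same question through three paraphrases and gate your eval on `passes_acceptance` before calling a fix verified.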
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end knobs: Retrieval Playbook
- Why this snippet: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long threads and memory: Context Drift, Entropy Collapse, Memory Coherence
- Logic collapse and recovery: Logic Collapse
- Snippet and citation schema: Data Contracts
- Patterns: Query Parsing Split, Vectorstore Fragmentation, Hallucination Re-entry
- Ops: Live Monitoring, Debug Playbook
- Multi-agent overview: Multi-Agent Problems
## Fix in 60 seconds

1. **Measure ΔS**
   - Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   - Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.

2. **Probe with λ_observe**
   - Vary k = {5, 10, 20}. A flat, high curve suggests an index or metric mismatch.
   - Reorder prompt headers. If ΔS spikes, lock the schema.

3. **Apply the module**
   - Retrieval drift → BBMC + Data Contracts.
   - Reasoning collapse → BBCR bridge + BBAM variance clamp.
   - Dead ends in long runs → BBPF alternate path.

4. **Check provider knobs first**
   - Structured output mode on and the schema fixed.
   - Temperature and top_p conservative during diagnosis.
   - Tool use set to serial if parallel calls cross-talk.
   - Any safety setting that strips citations lowered during eval.

5. **Verify**
   - Three paraphrases return the same citations.
   - λ convergent across seeds.
   - E_resonance flat on long replies.
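The measure-and-classify steps above can be sketched as follows. This assumes ΔS is approximated as 1 − cosine similarity between embedding vectors, a common proxy rather than the WFGY definition verbatim; how you produce the vectors is up to your own stack:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def delta_s(vec_a: list[float], vec_b: list[float]) -> float:
    """ΔS proxy: 1 − cosine similarity (0 = identical, higher = more drift)."""
    return 1.0 - cosine(vec_a, vec_b)

def classify(ds: float) -> str:
    """Map a ΔS value onto the triage bands used in this repo."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"
```

For the λ_observe probe, compute `delta_s` between the question and the top-k retrieval at k ∈ {5, 10, 20}; if the value stays flat and high across k, suspect the index or distance metric rather than the prompt.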
## Typical breakpoints and the right fix

**JSON mode drifts into prose or extra commentary.**
Lock a strict output schema with Data Contracts. Add a BBCR "bridge" instruction that rejects non-JSON output. If it still leaks, run a short two-turn repair using Logic Collapse.

**Chinese tokenizer quirks change similarity despite high cosine scores.**
Treat it as a metric mismatch. Use Embedding ≠ Semantic and add a BM25 fallback from the Retrieval Playbook. Then re-rank with Rerankers and anchor citations via Retrieval Traceability.

**Safety filter strips citations or tool arguments.**
Move citation text into a dedicated schema field and reference it by ID. See Retrieval Traceability. If the model "bluffs" when filtered, apply the controls in Bluffing.

**Streaming tool calls stall or race.**
Force single-tool steps and add timeouts. Trace with Live Monitoring. If agents fight over memory, see Multi-Agent Problems and the memory patterns in Memory Coherence.

**Long chats melt down after many pages.**
Cut context windows at stable joins and verify with Context Drift and Entropy Collapse. If replies "flip" across tabs, check Memory Desync.

**Hybrid retrieval (HyDE + BM25) underperforms.**
Look for query splits in Pattern: Query Parsing Split. Align the query parse and re-rank.

**Non-English corpus drifts.**
Follow the Multilingual Guide. Normalize punctuation and numerals in chunking and traceability.
## Copy-paste prompt

```txt
I uploaded TXT OS and the WFGY Problem Map files.

My Kimi bug:
• symptom: [brief]
• traces: [ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states]

Tell me:
1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify the fix with a reproducible test.

Use BBMC/BBPF/BBCR/BBAM where relevant.
```
## Escalate when
- First call after deploy fails or tools fire before data is ready. See Pre-Deploy Collapse and Bootstrap Ordering.
- Deadlocks or version skew in prod. See Deployment Deadlock.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Layer | Page | What it’s for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma’s Clinic | Plain language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text to image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.