MiniMax — Guardrails and Fix Patterns
🧭 Quick Return to Map
You are in a sub-page of LLM_Providers. To reorient, go back here:
- LLM_Providers — model vendors and deployment options
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when failures look provider-specific on MiniMax models (MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed). MiniMax-M2.7 is the latest flagship model, with enhanced reasoning and coding capabilities. Examples include temperature rejection at zero, tool-call JSON drift deep into the 204K context window, Chinese tokenizer similarity mismatches, and streaming stalls under high concurrency. Each fix maps back to WFGY pages so you can verify against measurable targets.
Core acceptance
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 for the target section
- λ remains convergent across 3 paraphrases
Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end knobs: Retrieval Playbook
- Why this snippet: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long threads and memory: Context Drift, Entropy Collapse, Memory Coherence
- Logic collapse and recovery: Logic Collapse
- Snippet and citation schema: Data Contracts
- Patterns: Query Parsing Split, Vectorstore Fragmentation, Hallucination Re-entry
- Ops: Live Monitoring, Debug Playbook
- Multi-agent overview: Multi-Agent Problems
Fix in 60 seconds
1. **Measure ΔS**
   - Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   - Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
2. **Probe with λ_observe**
   - Vary k = {5, 10, 20}. A flat, high curve suggests an index or metric mismatch.
   - Reorder prompt headers. If ΔS spikes, lock the schema.
3. **Apply the module**
   - Retrieval drift → BBMC + Data Contracts.
   - Reasoning collapse → BBCR bridge + BBAM variance clamp.
   - Dead ends in long runs → BBPF alternate path.
4. **Provider knobs to check first**
   - Temperature must be strictly greater than 0; use 0.01 as the near-deterministic default.
   - OpenAI-compatible endpoint (`https://api.minimax.io/v1`): confirm the base URL is set correctly.
   - Structured output mode on and schema fixed.
   - Tool use set to serial if parallel calls cross-talk.
5. **Verify**
   - Three paraphrases hold the same citations.
   - λ convergent across seeds.
   - E_resonance flat on long replies.
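The first two steps above can be sketched in code. This is a minimal sketch, assuming ΔS is computed as 1 − cosine similarity between embedding vectors; `retrieve` is a hypothetical hook into your own pipeline, not a MiniMax or WFGY API.

```python
import math

def delta_s(u, v):
    """ΔS as 1 - cosine similarity (an assumption; substitute your semantic metric)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def zone(ds):
    """Map a ΔS value onto the acceptance zones used on this page."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"

def probe_k(question_vec, retrieve, ks=(5, 10, 20)):
    """λ_observe probe: sweep k and record the best ΔS at each depth.
    If the curve stays flat in the risk zone at every k, suspect an
    index or metric mismatch rather than the model."""
    curve = {k: min(delta_s(question_vec, d) for d in retrieve(k)) for k in ks}
    flat_high = all(ds >= 0.60 for ds in curve.values())
    return curve, flat_high
```

A flat-high result sends you to the index and metric checks, not to prompt surgery.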
Typical breakpoints and the right fix
**Temperature = 0 rejected, runs fail before any output.** MiniMax requires temperature in (0.0, 1.0]; setting exactly 0 raises an API error. Use 0.01 for near-deterministic behavior. If your framework hardcodes `temperature=0` for evals, patch it before blaming retrieval. This is a config-level issue, not a semantic one.
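A hypothetical one-line guard for frameworks that hardcode zero:

```python
def safe_temperature(t, floor=0.01):
    """Clamp a requested temperature into MiniMax's accepted range (0.0, 1.0].
    Hypothetical helper: patch it in where your eval framework sets 0."""
    if t <= 0.0:
        return floor  # near-deterministic default from this page
    return min(t, 1.0)
```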
**Tool-call JSON drifts or fields go missing on long contexts.** The 204K context window allows very large prompts, but deep context can cause tool-call schema drift. Lock a strict IO header and cite the schema: Data Contracts. Add trace tags in the prompt, then verify: Retrieval Traceability. If agents are orchestrating, isolate boundaries: Agent Boundary Design, Agent Consensus.
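A minimal sketch of enforcing a data contract on a model-emitted tool call before executing it. The `REQUIRED` field names are hypothetical; adapt them to your own Data Contracts schema.

```python
import json

# Hypothetical contract for one tool call: field name -> required type.
REQUIRED = {"name": str, "arguments": dict}

def check_tool_call(raw):
    """Parse a tool-call payload and reject schema drift early.
    Returns (ok, reason) so the caller can log the failure mode."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError as exc:
        return False, f"invalid JSON: {exc}"
    for field, typ in REQUIRED.items():
        if field not in call:
            return False, f"missing field: {field}"
        if not isinstance(call[field], typ):
            return False, f"wrong type for field: {field}"
    return True, "ok"
```

Rejecting at this boundary keeps a drifted call from silently corrupting downstream state.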
**Chinese tokenizer quirks change similarity despite high cosine.** Treat this as a metric mismatch. Use Embedding ≠ Semantic and add a BM25 fallback per the Retrieval Playbook. Then re-rank with Rerankers and anchor citations via Retrieval Traceability.
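One common way to merge the BM25 fallback with dense results is reciprocal rank fusion. A minimal sketch, assuming you already have two ranked lists of document IDs from your own retrievers:

```python
def rrf(dense_ids, bm25_ids, k=60):
    """Reciprocal rank fusion over two ranked lists of document IDs.
    k=60 is the conventional constant; tune it for your corpus."""
    scores = {}
    for ranking in (dense_ids, bm25_ids):
        for rank, doc_id in enumerate(ranking):
            # each list contributes 1 / (k + rank + 1) to the doc's score
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that rank well under either tokenizer rise to the top, which blunts single-tokenizer quirks.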
**Safety filter strips citations or tool arguments.** Move citation text to a dedicated field in the schema and reference it with IDs. See Retrieval Traceability. If the model "bluffs" when filtered, apply the controls in Bluffing.
**Long chat melts down after filling the 204K window.** MiniMax models (M2.7, M2.5) support up to 204K tokens, but entropy collapse can still occur at the tail of long sessions. Cut context windows at stable joins and verify with Context Drift and Entropy Collapse. If replies flip across turns, check Memory Desync.
**OpenAI SDK client misconfiguration.** MiniMax exposes an OpenAI-compatible API. When using the OpenAI SDK, set `base_url="https://api.minimax.io/v1"` and pass your MiniMax API key. Common pitfall: forgetting to change the base URL, or passing the wrong key, produces auth errors that look like model failures.
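A minimal setup sketch with the OpenAI Python SDK. The base URL and model name are from this page; the `MINIMAX_API_KEY` env var name is an assumption about how you store secrets, and this fragment is not exercised against the live API here.

```python
import os
from openai import OpenAI

def make_minimax_client():
    """OpenAI-compatible client pointed at MiniMax.
    The common pitfall is leaving base_url at the SDK default."""
    return OpenAI(
        base_url="https://api.minimax.io/v1",
        api_key=os.environ["MINIMAX_API_KEY"],  # assumed env var name
    )

def ask(client, prompt):
    # temperature must stay strictly above 0 on MiniMax
    resp = client.chat.completions.create(
        model="MiniMax-M2.7",
        temperature=0.01,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

If this raises an authentication error, check the base URL before suspecting the key: a default base URL sends your MiniMax key to OpenAI, which rejects it.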
**Hybrid retrieval (HyDE + BM25) underperforms.** Look for query splits in Pattern: Query Parsing Split. Align the query parse, then re-rank.
**Non-English corpus drifts.** Follow the Multilingual Guide. Normalize punctuation and numerals in chunking and traceability.
Copy-paste prompt
I uploaded TXT OS and the WFGY Problem Map files.
My MiniMax bug:
• symptom: [brief]
• traces: [ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states]
Tell me:
1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify the fix with a reproducible test.
Use BBMC/BBPF/BBCR/BBAM where relevant.
Escalate when
- First call after deploy fails or tools fire before data is ready. See Pre-Deploy Collapse and Bootstrap Ordering.
- Deadlocks or version skew in prod. See Deployment Deadlock.
🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask "Answer using WFGY + " |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type "hello world" — OS boots instantly |
Explore More
| Layer | Page | What it's for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.