WFGY/ProblemMap/GlobalFixMap/LLM_Providers/minimax.md

MiniMax — Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of LLM_Providers. To reorient, go back here:

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when failures look provider specific on MiniMax models (MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed). MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities. Examples include temperature rejection at zero, tool-call JSON drift in long 204K-context windows, Chinese tokenizer similarity mismatches, or streaming stalls under high concurrency. Each fix maps back to WFGY pages so you can verify with measurable targets.

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 for the target section
  • λ remains convergent across 3 paraphrases
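These targets can be wired into a simple regression gate. A minimal sketch, assuming ΔS, coverage, and the per-paraphrase λ states have already been measured upstream; the function name and signature are illustrative, not part of WFGY:

```python
def passes_acceptance(delta_s: float, coverage: float, lambda_states: list[str]) -> bool:
    """Check the core acceptance targets listed above.

    delta_s        -- ΔS(question, retrieved); lower is better.
    coverage       -- fraction of the target section covered by retrieval.
    lambda_states  -- λ state per paraphrase, e.g. ["convergent"] * 3.
    """
    return (
        delta_s <= 0.45
        and coverage >= 0.70
        and len(lambda_states) >= 3
        and all(state == "convergent" for state in lambda_states)
    )
```

Run this over every eval question; a single failing gate points you to the triage steps below rather than to ad-hoc prompt tweaking.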

Open these first


Fix in 60 seconds

  1. Measure ΔS

    • Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    • Thresholds: stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.
  2. Probe with λ_observe

    • Vary k = {5, 10, 20}. Flat high curve suggests index or metric mismatch.
    • Reorder prompt headers. If ΔS spikes, lock the schema.
  3. Apply the module

    • Retrieval drift → BBMC + Data Contracts.
    • Reasoning collapse → BBCR bridge + BBAM variance clamp.
    • Dead ends in long runs → BBPF alternate path.
  4. Provider knobs to check first

    • Temperature must be strictly greater than 0. Use 0.01 as the near-deterministic default.
    • OpenAI-compatible endpoint (https://api.minimax.io/v1) — confirm base URL is set correctly.
    • Structured output mode on and schema fixed.
    • Tool use set to serial if parallel calls cross-talk.
  5. Verify

    • Three paraphrases hold the same citations.
    • λ convergent across seeds.
    • E_resonance flat on long replies.
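Steps 1 and 2 above can be sketched in code. This assumes ΔS is computed as 1 minus the cosine similarity of embedding vectors, which is one common reading; substitute your own metric if your pipeline defines ΔS differently. The `probe_k` helper and its `retrieve` callable are illustrative:

```python
import numpy as np

def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    """ΔS as 1 - cosine similarity (an assumption; swap in your own metric)."""
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cos

def band(ds: float) -> str:
    """Map a ΔS value to the thresholds from step 1."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"

def probe_k(question_vec: np.ndarray, retrieve, ks=(5, 10, 20)) -> dict:
    """λ_observe probe from step 2: retrieve(k) returns k candidate vectors.

    If the best ΔS stays flat and high across all k, suspect an index or
    metric mismatch rather than a prompting problem.
    """
    return {k: min(delta_s(question_vec, d) for d in retrieve(k)) for k in ks}
```

Log the `band` label next to each retrieval so transitional cases are visible before they become risk cases.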

Typical breakpoints and the right fix

  • Temperature = 0 rejected, runs fail before any output. MiniMax requires temperature in (0.0, 1.0]. Setting exactly 0 raises an API error. Use 0.01 for near-deterministic behavior. If your framework hardcodes temperature=0 for evals, patch it before blaming retrieval. This is a config-level issue, not a semantic one.

  • Tool-call JSON drifts or fields go missing on long contexts. The 204K context window allows very large prompts, but deep context can cause tool-call schema drift. Lock a strict IO header and cite the schema: Data Contracts. Add trace tags in the prompt, then verify: Retrieval Traceability. If agents are orchestrating, isolate boundaries: Agent Boundary Design, Agent Consensus.

  • Chinese tokenizer quirks change similarity despite high cosine. Treat this as a metric mismatch. Use Embedding ≠ Semantic and add a BM25 fallback per the Retrieval Playbook. Then re-rank with Rerankers and anchor citations via Retrieval Traceability.

  • Safety filter strips citations or tool arguments. Move citation text to a dedicated field in the schema and reference it by ID. See Retrieval Traceability. If the model "bluffs" when filtered, apply the controls in Bluffing.

  • Long chat melts down after filling the 204K window. MiniMax models (M2.7, M2.5) support up to 204K tokens, but entropy collapse can still occur at the tail of long sessions. Cut context windows at stable joins and verify with Context Drift and Entropy Collapse. If replies flip across turns, check Memory Desync.

  • OpenAI SDK client misconfiguration. MiniMax exposes an OpenAI-compatible API. When using the OpenAI SDK, set base_url="https://api.minimax.io/v1" and pass your MiniMax API key. A common pitfall: leaving the default base URL or passing the wrong key produces auth errors that look like model failures.

  • Hybrid retrieval (HyDE + BM25) underperforms. Look for query splits in Pattern: Query Parsing Split. Align the query parse and re-rank.

  • Non-English corpus drifts. Follow the Multilingual Guide. Normalize punctuation and numerals during chunking and traceability.
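For the temperature and base-URL breakpoints above, a minimal request-builder sketch following the OpenAI Python SDK conventions; `safe_temperature` and `minimax_request` are illustrative helper names, and the actual client call is shown only as a comment:

```python
# Base URL for MiniMax's OpenAI-compatible endpoint, as given in the text above.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"

def safe_temperature(t: float) -> float:
    """MiniMax rejects temperature == 0; clamp into (0.0, 1.0]."""
    if t <= 0.0:
        return 0.01  # near-deterministic default recommended above
    return min(t, 1.0)

def minimax_request(model: str = "MiniMax-M2.7", temperature: float = 0.0) -> dict:
    """Assemble kwargs for a chat-completions call, with the temperature fixed up."""
    return {"model": model, "temperature": safe_temperature(temperature)}

# Client construction with the OpenAI SDK (network call not shown):
#   from openai import OpenAI
#   client = OpenAI(base_url=MINIMAX_BASE_URL, api_key=os.environ["MINIMAX_API_KEY"])
#   client.chat.completions.create(messages=[...], **minimax_request())
```

Routing every eval through a helper like this means a framework that hardcodes temperature=0 fails loudly in one place instead of producing confusing API errors mid-run.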
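For the hybrid-retrieval breakpoint, one common way to combine a lexical fallback with dense results is reciprocal rank fusion. This is a generic technique, not something prescribed by the Retrieval Playbook; `rrf_fuse` is an illustrative name and k = 60 is the usual RRF default, not a MiniMax-specific value:

```python
def rrf_fuse(dense_ranked: list[str], bm25_ranked: list[str], k: int = 60) -> list[str]:
    """Reciprocal-rank fusion of two ranked lists of document IDs.

    Each list contributes 1 / (k + rank + 1) per document, so a document
    ranked well by either the dense retriever or BM25 surfaces even when
    embedding similarity misleads (e.g. the tokenizer quirks above).
    """
    scores: dict[str, float] = {}
    for ranked in (dense_ranked, bm25_ranked):
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

After fusing, still re-rank and verify citations as the bullet above describes; fusion only fixes recall, not grounding.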


Copy-paste prompt


I uploaded TXT OS and the WFGY Problem Map files.

My MiniMax bug:
• symptom: [brief]
• traces: [ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states]

Tell me:

1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify the fix with a reproducible test.

Use BBMC/BBPF/BBCR/BBAM where relevant.


Escalate when


🔗 Quick-Start Downloads (60 sec)

Tool | Link | 3-Step Setup
--- | --- | ---
WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + "
TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" — OS boots instantly

Explore More

Layer | Page | What it's for
--- | --- | ---
Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof
⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference)
⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems
⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set)
🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map
🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis
🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map
🧰 App | TXT OS | .txt semantic OS with fast bootstrap
🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS
🧰 App | Blur Blur Blur | Text to image generation with semantic control
🏡 Onboarding | Starter Village | Guided entry point for new users

If this repository helped, starring it improves discovery so more builders can find the docs and tools.