mirror of https://github.com/anomalyco/opencode.git
synced 2026-05-03 15:00:24 +00:00
docs(go): add Kimi K2.6 to Go and Zen content (#23558)
parent ae7a3518f7
commit d68ebee555
55 changed files with 227 additions and 136 deletions
@@ -66,6 +66,7 @@ The current list of models includes:
 - **GLM-5**
 - **GLM-5.1**
 - **Kimi K2.5**
+- **Kimi K2.6**
 - **MiMo-V2-Pro**
 - **MiMo-V2-Omni**
 - **MiniMax M2.5**

@@ -94,6 +95,7 @@ The table below provides an estimated request count based on typical Go usage pa
 | GLM-5.1 | 880 | 2,150 | 4,300 |
 | GLM-5 | 1,150 | 2,880 | 5,750 |
 | Kimi K2.5 | 1,850 | 4,630 | 9,250 |
+| Kimi K2.6 | 1,150 | 2,880 | 5,750 |
 | MiMo-V2-Pro | 1,290 | 3,225 | 6,450 |
 | MiMo-V2-Omni | 2,150 | 5,450 | 10,900 |
 | Qwen3.6 Plus | 3,300 | 8,200 | 16,300 |

@@ -104,7 +106,7 @@ The table below provides an estimated request count based on typical Go usage pa
 Estimates are based on observed average request patterns:
 
 - GLM-5/5.1 — 700 input, 52,000 cached, 150 output tokens per request
-- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens per request
+- Kimi K2.5/K2.6 — 870 input, 55,000 cached, 200 output tokens per request
 - MiniMax M2.7/M2.5 — 300 input, 55,000 cached, 125 output tokens per request
 - MiMo-V2-Pro — 350 input, 41,000 cached, 250 output tokens per request
 - MiMo-V2-Omni — 1000 input, 60,000 cached, 140 output tokens per request

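The per-request patterns above can be tallied to compare how token-heavy each model's typical request is. A minimal sketch using only the numbers listed in the hunk (the per-plan request budgets themselves are not derivable from this diff, so only per-request totals are computed):

```python
# Typical (input, cached, output) tokens per request, taken from the
# estimates listed in the docs diff above.
patterns = {
    "GLM-5/5.1": (700, 52_000, 150),
    "Kimi K2.5/K2.6": (870, 55_000, 200),
    "MiniMax M2.7/M2.5": (300, 55_000, 125),
    "MiMo-V2-Pro": (350, 41_000, 250),
    "MiMo-V2-Omni": (1000, 60_000, 140),
}

for model, (inp, cached, out) in patterns.items():
    # Total tokens touched per typical request; cached tokens dominate.
    print(f"{model}: {inp + cached + out} tokens per request")
```

Note how Kimi K2.5 and K2.6 share one pattern line after this change, which is why the diff replaces the K2.5-only bullet rather than adding a new one.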
@@ -138,6 +140,7 @@ You can also access Go models through the following API endpoints.
 | GLM-5.1 | glm-5.1 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
 | GLM-5 | glm-5 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
 | Kimi K2.5 | kimi-k2.5 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
+| Kimi K2.6 | kimi-k2.6 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
 | MiMo-V2-Pro | mimo-v2-pro | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
 | MiMo-V2-Omni | mimo-v2-omni | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
 | MiniMax M2.7 | minimax-m2.7 | `https://opencode.ai/zen/go/v1/messages` | `@ai-sdk/anthropic` |

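The endpoints in this table follow the standard OpenAI-compatible chat completions shape (MiniMax M2.7 is the exception, using an Anthropic-style `/v1/messages` endpoint). A minimal sketch of a request body for the new Kimi K2.6 row; the body fields are the standard chat completions shape, and the auth scheme is not shown here because this diff does not specify it:

```python
import json

# OpenAI-compatible chat completions endpoint from the table above.
url = "https://opencode.ai/zen/go/v1/chat/completions"

# Request body: model id from the table plus a messages list, per the
# standard chat completions request shape.
body = {
    "model": "kimi-k2.6",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
}

payload = json.dumps(body)
```

Sending `payload` would be an ordinary HTTPS POST; consult the linked docs for authentication details.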
@@ -146,8 +149,8 @@
 | Qwen3.5 Plus | qwen3.5-plus | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/alibaba` |
 
 The [model id](/docs/config/#models) in your OpenCode config
-uses the format `opencode-go/<model-id>`. For example, for Kimi K2.5, you would
-use `opencode-go/kimi-k2.5` in your config.
+uses the format `opencode-go/<model-id>`. For example, for Kimi K2.6, you would
+use `opencode-go/kimi-k2.6` in your config.
 
 ---
 
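Applied to a config file, the updated example model id looks like this; a minimal fragment, assuming the surrounding config structure from the linked config docs (only the `model` value comes from this diff):

```json
{
  "model": "opencode-go/kimi-k2.6"
}
```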