mirror of https://github.com/anomalyco/opencode.git
synced 2026-04-29 13:09:46 +00:00
---
title: Go
description: Low-cost subscription for open coding models.
---

import config from "../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go is a low-cost subscription — **$5 for your first month**, then **$10/month** — that gives you reliable access to popular open coding models.

:::note
OpenCode Go is currently in beta.
:::

Go works like any other provider in OpenCode. You subscribe to OpenCode Go and get your API key. It's **completely optional**; you don't need it to use OpenCode.

It is designed primarily for international users, with models hosted in the US, EU, and Singapore for stable global access.

---

## Background

Open models have gotten really good. They now perform close to proprietary models on coding tasks. And because many providers can serve them competitively, they are usually far cheaper.

However, getting reliable, low-latency access to them can be difficult. Providers vary in quality and availability.

:::tip
We tested a select group of models and providers that work well with OpenCode.
:::

To fix this, we did a few things:

1. We tested a select group of open models and talked to their teams about how to best run them.
2. We then worked with a few providers to make sure these were being served correctly.
3. Finally, we benchmarked each model/provider combination and came up with a list that we feel good recommending.

OpenCode Go gives you access to these models for **$5 for your first month**, then **$10/month**.

---

## How it works

OpenCode Go works like any other provider in OpenCode.

1. Sign in to **<a href={console}>OpenCode Zen</a>**, subscribe to Go, and copy your API key.
2. Run the `/connect` command in the TUI, select `OpenCode Go`, and paste your API key.
3. Run `/models` in the TUI to see the list of models available through Go.

:::note
Only one member per workspace can subscribe to OpenCode Go.
:::

The current list of models includes:

- **GLM-5**
- **GLM-5.1**
- **Kimi K2.5**
- **Kimi K2.6**
- **MiMo-V2-Pro**
- **MiMo-V2-Omni**
- **MiniMax M2.5**
- **MiniMax M2.7**
- **Qwen3.5 Plus**
- **Qwen3.6 Plus**

The list of models may change as we test and add new ones.

---

## Usage limits

OpenCode Go includes the following limits:

- **5-hour limit** — $12 of usage
- **Weekly limit** — $30 of usage
- **Monthly limit** — $60 of usage

Limits are defined in dollar value, so your actual request count depends on the model you use. Cheaper models like Qwen3.5 Plus allow for more requests, while higher-cost models like GLM-5.1 allow for fewer.

The table below provides an estimated request count based on typical Go usage patterns:

| Model        | Requests per 5 hours | Requests per week | Requests per month |
| ------------ | -------------------- | ----------------- | ------------------ |
| GLM-5.1      | 880                  | 2,150             | 4,300              |
| GLM-5        | 1,150                | 2,880             | 5,750              |
| Kimi K2.5    | 1,850                | 4,630             | 9,250              |
| Kimi K2.6    | 1,150                | 2,880             | 5,750              |
| MiMo-V2-Pro  | 1,290                | 3,225             | 6,450              |
| MiMo-V2-Omni | 2,150                | 5,450             | 10,900             |
| Qwen3.6 Plus | 3,300                | 8,200             | 16,300             |
| MiniMax M2.7 | 3,400                | 8,500             | 17,000             |
| MiniMax M2.5 | 6,300                | 15,900            | 31,800             |
| Qwen3.5 Plus | 10,200               | 25,200            | 50,500             |

Estimates are based on observed average request patterns:

- GLM-5/5.1 — 700 input, 52,000 cached, 150 output tokens per request
- Kimi K2.5/K2.6 — 870 input, 55,000 cached, 200 output tokens per request
- MiniMax M2.7/M2.5 — 300 input, 55,000 cached, 125 output tokens per request
- MiMo-V2-Pro — 350 input, 41,000 cached, 250 output tokens per request
- MiMo-V2-Omni — 1,000 input, 60,000 cached, 140 output tokens per request
- Qwen3.5 Plus — 410 input, 47,000 cached, 140 output tokens per request
- Qwen3.6 Plus — 500 input, 57,000 cached, 190 output tokens per request

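To see how dollar limits translate into request counts, you can reproduce the arithmetic behind the table. The sketch below uses the GLM-5 token pattern from the list above, but the per-million-token prices are hypothetical placeholders, not Go's actual rates:

```typescript
// Estimate how many requests fit inside a dollar-denominated usage limit.
// Token counts per request come from the observed patterns above; the
// per-million-token prices are hypothetical placeholders, NOT Go's real rates.
interface Rates {
  inputPerM: number;  // $ per 1M uncached input tokens
  cachedPerM: number; // $ per 1M cached input tokens
  outputPerM: number; // $ per 1M output tokens
}

interface Pattern {
  input: number;
  cached: number;
  output: number;
}

function estimateRequests(limitDollars: number, t: Pattern, r: Rates): number {
  const costPerRequest =
    (t.input * r.inputPerM + t.cached * r.cachedPerM + t.output * r.outputPerM) /
    1_000_000;
  return Math.floor(limitDollars / costPerRequest);
}

// GLM-5 pattern from the list above, priced with made-up example rates.
const glm5: Pattern = { input: 700, cached: 52_000, output: 150 };
const rates: Rates = { inputPerM: 0.6, cachedPerM: 0.1, outputPerM: 2.2 };

console.log(estimateRequests(12, glm5, rates)); // requests within the 5-hour limit
console.log(estimateRequests(60, glm5, rates)); // requests within the monthly limit
```

In this sketch, cached input dominates the per-request cost, which is why models with cheap cache reads stretch the limits much further.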

You can track your current usage in the **<a href={console}>console</a>**.

:::tip
If you reach the usage limit, you can continue using the free models.
:::

Usage limits may change as we learn from early usage and feedback.

---

### Usage beyond limits

If you also have credits on your Zen balance, you can enable the **Use balance** option in the console. When enabled, Go will fall back to your Zen balance after you've reached your usage limits instead of blocking requests.

---

## Endpoints

You can also access Go models through the following API endpoints.

| Model        | Model ID     | Endpoint                                         | AI SDK Package              |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5.1      | glm-5.1      | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.6    | kimi-k2.6    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiMo-V2-Pro  | mimo-v2-pro  | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiMo-V2-Omni | mimo-v2-omni | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.7 | minimax-m2.7 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |
| Qwen3.6 Plus | qwen3.6-plus | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/alibaba`           |
| Qwen3.5 Plus | qwen3.5-plus | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/alibaba`           |
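
For models behind the chat completions endpoint, a plain HTTP request works as well. A minimal sketch, assuming the standard OpenAI-compatible request schema and your key exported as `OPENCODE_GO_API_KEY` (the variable name is our own convention, not something OpenCode sets for you):

```typescript
// Minimal sketch of calling a Go model over the OpenAI-compatible endpoint.
// Assumes your Go API key is exported as OPENCODE_GO_API_KEY.
const endpoint = "https://opencode.ai/zen/go/v1/chat/completions";

const body = {
  model: "glm-5.1", // a Model ID from the table above
  messages: [{ role: "user", content: "Write a binary search in Go." }],
};

async function run(): Promise<void> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENCODE_GO_API_KEY}`,
    },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  console.log(data.choices?.[0]?.message?.content);
}

// Only fire the request when a key is actually present.
if (process.env.OPENCODE_GO_API_KEY) run();
```

The two MiniMax models use the `/v1/messages` endpoint instead, which follows the Anthropic-style schema (hence `@ai-sdk/anthropic` in the table).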

The [model id](/docs/config/#models) in your OpenCode config uses the format `opencode-go/<model-id>`. For example, for Kimi K2.6, you would use `opencode-go/kimi-k2.6` in your config.

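For example, a minimal `opencode.json` that sets Kimi K2.6 as the default model might look like this (a sketch, not a complete config; the `$schema` line is optional, and the config docs linked above cover the full set of options):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "opencode-go/kimi-k2.6"
}
```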
---

## Privacy

The plan is designed primarily for international users, with models hosted in the US, EU, and Singapore for stable global access. Our providers follow a zero-retention policy and do not use your data for model training.

---

## Goals

We created OpenCode Go to:

1. Make AI coding **accessible** to more people with a low-cost subscription.
2. Provide **reliable** access to the best open coding models.
3. Curate models that are **tested and benchmarked** for coding agent use.
4. Ensure **no lock-in**: you can still use any other provider with OpenCode.