---
title: Go
description: Low cost subscription for open coding models.
---

import config from "../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go is a low cost subscription — **$5 for your first month**, then **$10/month** — that gives you reliable access to popular open coding models.

:::note
OpenCode Go is currently in beta.
:::

Go works like any other provider in OpenCode. You subscribe to OpenCode Go and get your API key. It's **completely optional**; you don't need it to use OpenCode.

It is designed primarily for international users, with models hosted in the US, EU, and Singapore for stable global access.

---

## Background

Open models have gotten really good. They now perform close to proprietary models on coding tasks. And because many providers can serve them competitively, they are usually far cheaper.

However, getting reliable, low latency access to them can be difficult. Providers vary in quality and availability.

:::tip
We tested a select group of models and providers that work well with OpenCode.
:::

To fix this, we did a couple of things:

1. We tested a select group of open models and talked to their teams about how best to run them.
2. We then worked with a few providers to make sure these models were being served correctly.
3. Finally, we benchmarked each model/provider combination and came up with a list that we feel good recommending.

OpenCode Go gives you access to these models for **$5 for your first month**, then **$10/month**.

---

## How it works

OpenCode Go works like any other provider in OpenCode.

1. Sign in to **<a href={console}>OpenCode Zen</a>**, subscribe to Go, and copy your API key.
2. Run the `/connect` command in the TUI, select `OpenCode Go`, and paste your API key.
3. Run `/models` in the TUI to see the list of models available through Go.

:::note
Only one member per workspace can subscribe to OpenCode Go.
:::

The current list of models includes:

- **GLM-5**
- **Kimi K2.5**
- **MiMo-V2-Pro**
- **MiMo-V2-Omni**
- **MiniMax M2.5**
- **MiniMax M2.7**

The list of models may change as we test and add new ones.

---

## Usage limits

OpenCode Go includes the following limits:

- **5 hour limit** — $12 of usage
- **Weekly limit** — $30 of usage
- **Monthly limit** — $60 of usage

Limits are defined in dollar value. This means your actual request count depends on the model you use. Cheaper models like MiniMax M2.5 allow for more requests, while higher-cost models like GLM-5 allow for fewer.

The table below provides an estimated request count based on typical Go usage patterns:

|                     | GLM-5 | Kimi K2.5 | MiMo-V2-Pro | MiMo-V2-Omni | MiniMax M2.7 | MiniMax M2.5 |
| ------------------- | ----- | --------- | ----------- | ------------ | ------------ | ------------ |
| requests per 5 hour | 1,150 | 1,850     | 1,290       | 2,150        | 14,000       | 20,000       |
| requests per week   | 2,880 | 4,630     | 3,225       | 5,450        | 35,000       | 50,000       |
| requests per month  | 5,750 | 9,250     | 6,450       | 10,900       | 70,000       | 100,000      |

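
Since the limits are dollar-denominated, the implied average cost per request follows directly from the table above. The sketch below shows that arithmetic for two of the models (the numbers are copied from the table; they are estimates, not published per-request prices):

```python
# Derive the implied average cost per request from the published dollar
# limits and the estimated request counts in the table above.
LIMITS = {"5h": 12.0, "week": 30.0, "month": 60.0}  # dollars of usage

# Estimated requests per window, copied from the table (two examples).
REQUESTS = {
    "GLM-5": {"5h": 1_150, "week": 2_880, "month": 5_750},
    "MiniMax M2.5": {"5h": 20_000, "week": 50_000, "month": 100_000},
}

def cost_per_request(model: str, window: str) -> float:
    """Average dollars spent per request within the given limit window."""
    return LIMITS[window] / REQUESTS[model][window]

# A higher-cost model burns the same dollar limit in far fewer requests.
print(f"GLM-5:        ~${cost_per_request('GLM-5', 'month'):.4f}/request")
print(f"MiniMax M2.5: ~${cost_per_request('MiniMax M2.5', 'month'):.4f}/request")
```

These are averages over typical request sizes, so your actual per-request cost will vary with input and output length.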
Estimates are based on observed average request patterns:

- GLM-5 — 700 input, 52,000 cached, 150 output tokens per request
- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens per request
- MiniMax M2.7/M2.5 — 300 input, 55,000 cached, 125 output tokens per request
- MiMo-V2-Pro — 350 input, 41,000 cached, 250 output tokens per request
- MiMo-V2-Omni — 1,000 input, 60,000 cached, 140 output tokens per request

You can track your current usage in the **<a href={console}>console</a>**.

:::tip
If you reach the usage limit, you can continue using the free models.
:::

Usage limits may change as we learn from early usage and feedback.

---

### Usage beyond limits

If you also have credits in your Zen balance, you can enable the **Use balance** option in the console. When enabled, Go falls back to your Zen balance after you've reached your usage limits instead of blocking requests.

---

## Endpoints

You can also access Go models through the following API endpoints.

| Model        | Model ID     | Endpoint                                         | AI SDK Package              |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiMo-V2-Pro  | mimo-v2-pro  | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiMo-V2-Omni | mimo-v2-omni | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.7 | minimax-m2.7 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

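
As a sketch of what a direct call looks like, the OpenAI-compatible endpoint can be hit with `curl`. This assumes the endpoint accepts standard `Authorization: Bearer` auth with an OpenAI-style request body, and that your key is exported as `OPENCODE_GO_API_KEY` (a variable name chosen here for illustration):

```shell
# Sketch: call the OpenAI-compatible Go endpoint directly.
# Assumes standard Bearer auth and an OpenAI-style chat body.
curl https://opencode.ai/zen/go/v1/chat/completions \
  -H "Authorization: Bearer $OPENCODE_GO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-k2.5",
    "messages": [{"role": "user", "content": "Write a binary search in Go."}]
  }'
```

The MiniMax models use the `/messages` endpoint instead, which follows the Anthropic-style request shape.
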

The [model id](/docs/config/#models) in your OpenCode config uses the format `opencode-go/<model-id>`. For example, for Kimi K2.5, you would use `opencode-go/kimi-k2.5` in your config.

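
For instance, setting a Go model as your default might look like this in `opencode.json` (a minimal sketch; the `$schema` line is optional):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "opencode-go/kimi-k2.5"
}
```
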
---

## Privacy

The plan is designed primarily for international users, with models hosted in the US, EU, and Singapore for stable global access. Our providers follow a zero-retention policy and do not use your data for model training.

---

## Goals

We created OpenCode Go to:

1. Make AI coding **accessible** to more people with a low cost subscription.
2. Provide **reliable** access to the best open coding models.
3. Curate models that are **tested and benchmarked** for coding agent use.
4. Ensure there's **no lock-in** by letting you use any other provider with OpenCode as well.