feat: deepseek api support (#118)

## Summary

* add native DeepSeek provider support via the shared OpenAI-compatible
provider base
* allow `deepseek/...` model prefixes in config validation
* add `DEEPSEEK_API_KEY` and `DEEPSEEK_BASE_URL` settings
* add DeepSeek entries to `.env.example` and `config/env.example`
* implement `DeepSeekProvider` and register it in provider dependencies
* add a DeepSeek request builder with DeepSeek-specific thinking payload
handling
* preserve Anthropic thinking blocks as `reasoning_content` for
DeepSeek-compatible continuation flows (see the sketch after this list)
* update `claude-pick` to discover DeepSeek models from the DeepSeek API
* document DeepSeek usage in `README.md`
* add tests for config validation, provider dependency wiring, request
building, and streaming behavior
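The `reasoning_content` mapping above converts Anthropic-style `thinking`
content blocks into the `reasoning_content` field that DeepSeek's
OpenAI-compatible chat API uses for reasoner output. A minimal sketch of
that conversion, using a hypothetical `convert_anthropic_message` helper
(the actual logic lives in the DeepSeek request builder added here):

```python
from typing import Any


def convert_anthropic_message(message: dict[str, Any]) -> dict[str, Any]:
    """Map an Anthropic-style assistant message onto an OpenAI-compatible
    DeepSeek message, preserving thinking blocks as `reasoning_content`.

    Hypothetical sketch: block shapes follow the Anthropic Messages API
    (`thinking` / `text` content blocks); the output field name follows
    DeepSeek's `reasoning_content`.
    """
    text_parts: list[str] = []
    thinking_parts: list[str] = []
    for block in message.get("content", []):
        if block.get("type") == "thinking":
            thinking_parts.append(block.get("thinking", ""))
        elif block.get("type") == "text":
            text_parts.append(block.get("text", ""))

    converted: dict[str, Any] = {
        "role": "assistant",
        "content": "\n".join(text_parts),
    }
    if thinking_parts:
        # Per this PR, prior Anthropic thinking is carried under this key
        # so DeepSeek-compatible continuation flows can round-trip it.
        converted["reasoning_content"] = "\n".join(thinking_parts)
    return converted


msg = {
    "role": "assistant",
    "content": [
        {"type": "thinking", "thinking": "The user wants X, so..."},
        {"type": "text", "text": "Here is the answer."},
    ],
}
print(convert_anthropic_message(msg))
# {'role': 'assistant', 'content': 'Here is the answer.',
#  'reasoning_content': 'The user wants X, so...'}
```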

## Motivation

DeepSeek exposes an OpenAI-compatible API and can be used directly
without routing through OpenRouter. This lets users spend their existing
DeepSeek balance through the proxy while keeping the same Claude Code
workflow and per-model provider mapping.

## Example

```dotenv
DEEPSEEK_API_KEY="sk-..."
DEEPSEEK_BASE_URL="https://api.deepseek.com"

MODEL_OPUS="deepseek/deepseek-reasoner"
MODEL_SONNET="deepseek/deepseek-chat"
MODEL_HAIKU="deepseek/deepseek-chat"
MODEL="deepseek/deepseek-chat"

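The `deepseek/...` prefixes above follow the proxy's existing per-model
mapping: the segment before the first `/` selects the provider, and the
remainder is passed through as the model name (the test excerpt below
exercises exactly this). A rough sketch of that parsing, assuming a
hypothetical `KNOWN_PROVIDERS` set; the real implementation lives in
`config/settings.py` and may differ in detail:

```python
# Sketch of the prefix parsing the config-validation tests exercise.
KNOWN_PROVIDERS = {"open_router", "nvidia_nim", "deepseek", "lmstudio", "llamacpp"}


def parse_provider_type(model: str) -> str | None:
    # Provider is everything before the first slash, if it is recognized.
    prefix, _, _ = model.partition("/")
    return prefix if prefix in KNOWN_PROVIDERS else None


def parse_model_name(model: str) -> str:
    # The rest (which may itself contain slashes) is the model name.
    prefix, _, rest = model.partition("/")
    return rest if prefix in KNOWN_PROVIDERS else model


assert parse_provider_type("deepseek/deepseek-chat") == "deepseek"
assert parse_model_name("deepseek/deepseek-chat") == "deepseek-chat"
assert parse_model_name("nvidia_nim/meta/llama") == "meta/llama"
```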
---------

Co-authored-by: Alishahryar1 <alishahryar2@gmail.com>

Excerpt from the config validation tests:

```diff
@@ -359,6 +359,7 @@ class TestPerModelMapping:
             "open_router/anthropic/claude-3-opus",
             "open_router/anthropic/claude-3-haiku",
         ),
+        ({"MODEL": "deepseek/deepseek-chat"}, "deepseek/deepseek-chat", None),
         ({"MODEL": "lmstudio/qwen2.5-7b"}, "lmstudio/qwen2.5-7b", None),
         ({"MODEL": "llamacpp/local-model"}, "llamacpp/local-model", None),
     ],
@@ -494,6 +495,7 @@ class TestPerModelMapping:
         assert Settings.parse_provider_type("nvidia_nim/meta/llama") == "nvidia_nim"
         assert Settings.parse_provider_type("open_router/deepseek/r1") == "open_router"
+        assert Settings.parse_provider_type("deepseek/deepseek-chat") == "deepseek"
         assert Settings.parse_provider_type("lmstudio/qwen") == "lmstudio"
         assert Settings.parse_provider_type("llamacpp/model") == "llamacpp"
@@ -502,5 +504,6 @@ class TestPerModelMapping:
         from config.settings import Settings
         assert Settings.parse_model_name("nvidia_nim/meta/llama") == "meta/llama"
+        assert Settings.parse_model_name("deepseek/deepseek-chat") == "deepseek-chat"
         assert Settings.parse_model_name("lmstudio/qwen") == "qwen"
         assert Settings.parse_model_name("llamacpp/model") == "model"
```