free-claude-code/providers
Alishahryar1 b75f47b62d Gate NIM thinking params behind NIM_ENABLE_THINKING env var
Mistral models reject chat_template_kwargs, causing 400 errors. Make
thinking params (chat_template_kwargs, reasoning_budget) opt-in via the
NIM_ENABLE_THINKING env var (default false) so that only models that
support them (kimi, nemotron) receive them.
2026-03-27 21:44:36 -07:00
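The gating described in the commit message can be sketched roughly as follows. This is a hedged illustration, not the repo's actual code: the `build_payload` function, its signature, and the example budget value are assumptions; only the `NIM_ENABLE_THINKING`, `chat_template_kwargs`, and `reasoning_budget` names come from the commit message.

```python
import os

# Opt-in flag: thinking params are attached only when the env var is truthy
# (default false, matching the commit message).
NIM_ENABLE_THINKING = os.getenv("NIM_ENABLE_THINKING", "false").lower() in ("1", "true", "yes")


def build_payload(model: str, messages: list) -> dict:
    """Build a chat-completions payload (illustrative name, not the repo's API)."""
    payload = {"model": model, "messages": messages}
    if NIM_ENABLE_THINKING:
        # Only models such as kimi/nemotron accept these fields; Mistral
        # models return HTTP 400 when chat_template_kwargs is present.
        payload["chat_template_kwargs"] = {"thinking": True}
        payload["reasoning_budget"] = 1024  # assumed example value
    return payload
```

With the variable unset, Mistral-style models get a plain payload and avoid the 400s; exporting `NIM_ENABLE_THINKING=true` restores the thinking params for models that need them.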
common fix(providers): map httpx exceptions natively and remove type ignores 2026-03-08 08:33:34 -07:00
llamacpp feat: add llama.cpp provider for local anthropic messages API 2026-03-08 10:38:25 -07:00
lmstudio fix(providers): map httpx exceptions natively and remove type ignores 2026-03-08 08:33:34 -07:00
nvidia_nim Gate NIM thinking params behind NIM_ENABLE_THINKING env var 2026-03-27 21:44:36 -07:00
open_router Improve deterministic error surfacing across stream and API 2026-03-01 01:32:52 -08:00
__init__.py feat: add llama.cpp provider for local anthropic messages API 2026-03-08 10:38:25 -07:00
base.py minor refactor using minimax m2.5 2026-02-27 20:44:39 -08:00
exceptions.py fixed linter errors 2026-01-30 23:32:02 -08:00
openai_compat.py Move nim_settings from shared base class to NvidiaNimProvider (#78) 2026-03-07 22:34:45 -08:00
rate_limit.py Add code review fix plan covering 11 issues across modularity, encapsulation, performance, and dead code (#62) 2026-03-01 00:45:33 -08:00