Commit graph

170 commits

Author SHA1 Message Date
Alishahryar1
abae61d85b Fix null usage in SSE for OpenAI-compatible streams (#209, #123)
- Only use provider completion_tokens when it is an int; otherwise estimate
- Coerce message_start/message_delta usage fields to safe integers in SSEBuilder
- Add regression tests for null upstream completion_tokens and builder edge cases

Claude Code could crash (e.g. undefined access on usage) when NIM/GLM or
similar sent usage with null token fields in streamed message_delta.
2026-04-27 18:20:33 -07:00
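The coercion described in this commit can be sketched as follows. This is a minimal illustration, not the repository's actual code: the names `safe_int` and `completion_tokens`, and the rough ~4-chars-per-token fallback estimator, are assumptions.

```python
def safe_int(value, default=0):
    """Return value only if it is a real int (bools excluded), else default."""
    if isinstance(value, int) and not isinstance(value, bool):
        return value
    return default

def completion_tokens(usage: dict, fallback_text: str) -> int:
    """Use the provider's completion_tokens only when it is an int;
    otherwise estimate, so a null field never propagates to the client."""
    tokens = usage.get("completion_tokens")
    if isinstance(tokens, int) and not isinstance(tokens, bool):
        return tokens
    # Rough ~4 chars/token estimate when the provider sent null.
    return max(1, len(fallback_text) // 4)
```

Applying the same guard to every usage field in `message_start`/`message_delta` events is what keeps the client from hitting an undefined access.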
Alishahryar1
0cca5699cb fix(messaging): reuse parent CLI session for Telegram reply continuation (#233)
Pass parent_session_id into get_or_create_session so reply nodes align with
the fork/resume path instead of always allocating a fresh pending session.
Add unit coverage and update integration expectations.
2026-04-27 16:24:31 -07:00
Alishahryar1
f96f541c0a fix(smoke): accept reasoning-only streams and add text placeholder
- OpenAI-compat: emit minimal text block when only reasoning_content streams
  (e.g. NIM) so clients get a text segment.
- Provider prereq: pass if text or thinking content is non-empty after strip.
- Add unit test for reasoning-only stream placeholder text.
2026-04-26 12:55:18 -07:00
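The placeholder idea above can be sketched like this. The function name `ensure_text_block` and the placeholder string are hypothetical; the block shapes follow the Anthropic content-block format.

```python
def ensure_text_block(blocks: list[dict]) -> list[dict]:
    """Append a minimal text block when a stream produced no visible text,
    e.g. when only reasoning_content arrived (as with NIM)."""
    has_text = any(
        b.get("type") == "text" and b.get("text", "").strip() for b in blocks
    )
    if has_text:
        return blocks
    return blocks + [{"type": "text", "text": "(no text content)"}]
```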
Alishahryar1
36d236b563 fix(206): defer post-tool assistant content for OpenAI chat conversion
- Make AnthropicToOpenAIConverter stateful: assistant text after tool_use is
  deferred until matching tool_result, then replayed as a follow-up assistant
  turn.
- After native streamed tool_use, emit top-level SSE error on transport
  failure instead of assistant text_delta (avoids bad transcript shape).
- Add NIM preflight, streaming, converter, and product smoke regressions.
2026-04-26 12:43:25 -07:00
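The deferral mechanism can be sketched with a small stateful buffer. This is an assumption-laden illustration of the idea only; `DeferredTextBuffer` is not the converter's real name, and the real converter carries much more state.

```python
class DeferredTextBuffer:
    """Buffer assistant text that follows a tool_use block until the
    matching tool_result arrives, then replay it as a follow-up turn."""

    def __init__(self):
        self._pending: dict[str, str] = {}  # tool_use_id -> deferred text

    def defer(self, tool_use_id: str, text: str) -> None:
        self._pending[tool_use_id] = text

    def replay(self, tool_use_id: str) -> list[dict]:
        """Called once the tool_result for tool_use_id is converted."""
        text = self._pending.pop(tool_use_id, None)
        if text is None:
            return []
        return [{"role": "assistant", "content": text}]
```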
Alishahryar1
6297b48f81 feat(deepseek): use native Anthropic Messages transport
- Point DeepSeek at api.deepseek.com/anthropic with x-api-key headers
- Native request builder, DeepSeek-specific thinking/block sanitization
- Drop deepseek from OpenAI-chat server-tool preflight; update tests and docs
- Default smoke model deepseek-v4-pro; re-export dump_raw_messages_request
2026-04-26 12:03:21 -07:00
Alishahryar1
2d2bf3de70 fix: replay reasoning_content for DeepSeek/NIM and expand provider smoke
- Add ReasoningReplayMode and top-level reasoning replay in OpenAI conversion
- DeepSeek/NIM request bodies use reasoning_content when thinking is enabled
- NIM retries without reasoning_content on 400 from upstream
- Per-provider smoke models (FCC_SMOKE_MODEL_*) independent of MODEL mapping
- Fix smoke model override parsing for owner/model names with slashes
- Live smoke: reasoning tool continuation uses synthetic thinking+tool history
- Tests and docs updated
2026-04-26 11:02:18 -07:00
Alishahryar1
f3a7528d49 Major refactor: API, providers, messaging, and Anthropic protocol
Consolidates the incremental refactor work into a single change set:
- modular web tools (api/web_tools)
- native Anthropic request building and SSE block policy
- OpenAI conversion and error handling
- provider transports and rate limiting
- messaging handler and tree queue
- safe logging, smoke tests, and broad test coverage
2026-04-26 03:01:14 -07:00
Wang Ji
b525217633
[feat] ollama method support (#129)
Support using the ollama method, like LM Studio

---------

Co-authored-by: Alishahryar1 <alishahryar2@gmail.com>
Co-authored-by: u011436427 <u011436427@noreply.gitcode.com>
2026-04-25 22:06:36 -07:00
Alishahryar1
7f1e860c7f Use root env example for fcc init 2026-04-25 20:59:44 -07:00
Alishahryar1
f29e693dc5 Add per-model thinking toggles 2026-04-25 20:51:07 -07:00
Alishahryar1
40951c145a refactor: drop legacy title-generation detection copy
Remove new-conversation-topic heuristic; keep sentence-case and JSON session
title patterns. Update unit and smoke E2E payloads accordingly.
2026-04-25 00:45:22 -07:00
Alishahryar1
080ebefc7b fix: detect Claude Code 2.1+ session title requests for optimization skip
Expand is_title_generation_request to match sentence-case/JSON title prompts
in addition to legacy new-conversation-topic copy. Add unit test for the
current session-title system text shape.
2026-04-25 00:44:25 -07:00
Alishahryar1
b926f60f64 feat: Anthropic web server tools, provider metadata, messaging hardening
- Add local web_search/web_fetch SSE handling and optional tool schemas
- Extend HeuristicToolParser for JSON-style WebFetch/WebSearch text
- Consolidate provider defaults, ids, and exception typing; stream contracts
- Messaging: typed options, voice config injection, platform contract cleanup
- Tests for web server tools, converters, parsers, contracts; ignore debug-*.log
2026-04-24 23:01:14 -07:00
Alishahryar1
0e3b2c24b4 refactor: remove OpenRouter rollback, shims, and redundant layers
- OpenRouter: native Anthropic only; remove chat_request and OPENROUTER_TRANSPORT
- Drop OpenAICompatibleProvider alias, api.request_utils, voice_pipeline facade
- Simplify OpenRouter SSE, generic reasoning in conversion, messaging dispatch
- Shared markdown table helpers; API optimization response helper; contract guards
- Restore PLAN.md; update docs and tests
2026-04-24 21:08:38 -07:00
Alishahryar1
26b8a29537 Architecture refactor: core anthropic, runtime, smoke tiers, remove providers.common 2026-04-24 20:03:14 -07:00
Alishahryar1
66ef23072c Refactor provider routing and smoke coverage 2026-04-24 19:34:34 -07:00
Alishahryar1
efa9f36c3a Revert "Refactor native Anthropic messages providers (#147)"
This reverts commit ffa8237220.
2026-04-24 17:27:26 -07:00
Ali Khokhar
ffa8237220
Refactor native Anthropic messages providers (#147)
## Summary
- add a shared `AnthropicMessagesProvider` for native Anthropic
`/messages` providers
- migrate OpenRouter, LM Studio, and llama.cpp onto the shared
transport/streaming base
- preserve provider-specific stream chunking, request headers, thinking
filtering, and error shapes

## Verification
- `uv run ruff format`
- `uv run ruff check`
- `uv run ty check`
- `uv run pytest` (902 passed)
2026-04-24 17:25:49 -07:00
Alishahryar1
751694a5da Refactor smoke testing framework and enhance provider configurations
- Updated DEFAULT_TARGETS in config.py to include new targets: clients, llamacpp, and lmstudio, while removing contract and optimizations.
- Introduced TARGET_ALIASES for better target management.
- Added TARGET_REQUIRED_ENV to specify environment variables needed for each target.
- Enhanced SmokeOutcome in report.py to include classification of outcomes for better reporting.
- Implemented classify_outcome function to categorize smoke test results.
- Added new test for stop endpoint in test_api_live.py to ensure proper error handling.
- Updated test_auth_live.py to enforce auth token requirements and utilize environment files.
- Changed target from vscode to clients in test_client_shapes_live.py.
- Removed obsolete test_feature_manifest.py and test_stream_contracts.py files.
- Added new skip helpers in skips.py to manage upstream unavailability scenarios.
- Created new tests for local provider endpoints in test_local_provider_endpoints_live.py.
- Added comprehensive feature inventory tests in tests/contracts/test_feature_manifest.py.
- Implemented stream contract tests in tests/contracts/test_stream_contracts.py.
2026-04-24 17:16:06 -07:00
Alishahryar1
d2db1bd689 Treat empty model overrides as fallback 2026-04-24 13:58:25 -07:00
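The fallback behavior in this commit amounts to treating an empty or whitespace-only override the same as an unset one. A minimal sketch (the helper name `resolve_model` is an assumption):

```python
import os

def resolve_model(env_var: str, fallback: str) -> str:
    """Treat an unset OR empty/whitespace override as 'use the fallback'."""
    value = os.environ.get(env_var, "")
    return value.strip() or fallback
```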
Alishahryar1
48b085950a Warn on inherited auth token
2026-04-24 00:42:33 -07:00
Alishahryar1
6f3d762a4f Revert "Add per-model thinking toggles"
This reverts commit 1f12a33dd7.
2026-04-24 00:26:15 -07:00
Alishahryar1
9c28af7cf1 Fix auth token dotenv precedence 2026-04-24 00:25:31 -07:00
Alishahryar1
1f12a33dd7 Add per-model thinking toggles 2026-04-24 00:14:49 -07:00
Ali Khokhar
462a9430bb
Add local live smoke test suite (#148)
## Summary
- add an opt-in local `smoke/` pytest suite for API, auth, providers,
CLI, IDE-shaped requests, messaging, voice, tools, and thinking stream
contracts
- keep smoke tests out of normal CI collection with `testpaths =
["tests"]`
- write sanitized smoke artifacts under `.smoke-results/`

## Verification
- `uv run ruff format`
- `uv run ruff check`
- `uv run ty check`
- `uv run ty check smoke`
- `FCC_LIVE_SMOKE=1 FCC_SMOKE_TARGETS=all FCC_SMOKE_RUN_VOICE=1 uv run
pytest smoke -n 0 -m live -s --tb=short` -> 17 passed, 9 skipped
- `uv run pytest` -> 904 passed

## Notes
- Skipped live checks require local credentials/tools/services, such as
provider models, Telegram/Discord targets, voice backend, or Claude CLI.
- `claude-pick` smoke was intentionally removed.
2026-04-23 19:06:09 -07:00
Alishahryar1
55131019e1 Sync config defaults and proxy docs
2026-04-22 17:34:00 -07:00
Anuj Nitin Bharambe
4fdf7e8b7e
Fix: Exclude chat_template for Mistral tokenizers in NVIDIA NIM (#130) (#131)
Fixes #130. This PR updates the NVIDIA NIM provider to omit
`chat_template_kwargs` and `chat_template` when using a Mistral
tokenizer model. This resolves the 400 Bad Request error returned by the
API.

Co-authored-by: Alishahryar1 <alishahryar2@gmail.com>
2026-04-22 17:16:45 -07:00
Wang Ji
4afca05318
bug: nvidia did not support reasoning_budget parameter (#126)

Fixes #127.

---------

Co-authored-by: u011436427 <u011436427@noreply.gitcode.com>
Co-authored-by: Alishahryar1 <alishahryar2@gmail.com>
2026-04-22 17:06:46 -07:00
arssing
2fe15bd2cd
feat: add proxy support for httpx clients (#125)
Add proxy support for providers based on
[doc](https://www.python-httpx.org/advanced/proxies/):

- Add per-provider proxy support (HTTP and SOCKS5) for all 4 providers:
nvidia_nim, open_router, lmstudio, llamacpp
- Each provider gets its own env var (NVIDIA_NIM_PROXY,
OPENROUTER_PROXY, LMSTUDIO_PROXY, LLAMACPP_PROXY) for independent proxy
configuration

---------

Co-authored-by: Alishahryar1 <alishahryar2@gmail.com>
2026-04-22 17:06:16 -07:00
Pavel Yurchenko
e719e4aed2
feat: deepseek api support (#118)
## Summary

* add native DeepSeek provider support via the shared OpenAI-compatible
provider base
* allow `deepseek/...` model prefixes in config validation
* add `DEEPSEEK_API_KEY` and `DEEPSEEK_BASE_URL` settings
* add DeepSeek entries to `.env.example` and `config/env.example`
* implement `DeepSeekProvider` and register it in provider dependencies
* add a DeepSeek request builder with DeepSeek-specific thinking payload
handling
* preserve Anthropic thinking blocks as `reasoning_content` for
DeepSeek-compatible continuation flows
* update `claude-pick` to discover DeepSeek models from the DeepSeek API
* document DeepSeek usage in `README.md`
* add tests for config validation, provider dependency wiring, request
building, and streaming behavior

## Motivation

DeepSeek exposes an OpenAI-compatible API and can be used directly
without routing through OpenRouter. This lets users spend their existing
DeepSeek balance through the proxy while keeping the same Claude Code
workflow and per-model provider mapping.

## Example

```dotenv
DEEPSEEK_API_KEY="sk-..."
DEEPSEEK_BASE_URL="https://api.deepseek.com"

MODEL_OPUS="deepseek/deepseek-reasoner"
MODEL_SONNET="deepseek/deepseek-chat"
MODEL_HAIKU="deepseek/deepseek-chat"
MODEL="deepseek/deepseek-chat"
```

---------

Co-authored-by: Alishahryar1 <alishahryar2@gmail.com>
2026-04-22 17:06:01 -07:00
Alishahryar1
835d0454e8 Fixes for issues 113 and 116 2026-04-18 16:32:31 -07:00
Alishahryar1
ec904c6e0c lint
2026-03-27 21:49:04 -07:00
Alishahryar1
6dd07d9b6b fix: update test_build_request_body to use enable_thinking=True 2026-03-27 21:48:21 -07:00
Alishahryar1
b75f47b62d Gate NIM thinking params behind NIM_ENABLE_THINKING env var
Mistral models reject chat_template_kwargs, causing 400 errors. Make
thinking params (chat_template_kwargs, reasoning_budget) opt-in via
NIM_ENABLE_THINKING env var (default false) so only models that need it
(kimi, nemotron) receive them.
2026-03-27 21:44:36 -07:00
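The opt-in gate described above can be sketched as follows. The helper name `thinking_params` and the exact truthy-string parsing are assumptions; the env var and payload keys come from the commit message.

```python
import os

def thinking_params(max_tokens: int) -> dict:
    """Return NIM thinking params only when NIM_ENABLE_THINKING is set,
    so Mistral models (which reject chat_template_kwargs) never get them."""
    if os.environ.get("NIM_ENABLE_THINKING", "").lower() not in ("1", "true", "yes"):
        return {}
    return {
        "chat_template_kwargs": {"enable_thinking": True},
        "reasoning_budget": max_tokens,
    }
```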
th-ch
f703a0e403
Implement optional authentication (Anthropic style) (#80)
2026-03-27 11:11:47 -07:00
Alishahryar1
2fad4dd4c9 Support both kimi (thinking) and nemotron (enable_thinking) in chat_template_kwargs
2026-03-26 12:34:12 -07:00
Alishahryar1
f9e7f65f4c Fix NVIDIA NIM reasoning params for updated API
Replace dropped params (thinking, reasoning_split, include_reasoning,
return_tokens_as_token_ids, reasoning_effort) with the new API format:
chat_template_kwargs.enable_thinking=True and reasoning_budget=max_tokens.
2026-03-26 12:25:04 -07:00
Yuval Dinodia
00038209b2
fix: remove unsupported include_stop_str_in_output NIM param (#95)
2026-03-23 11:38:13 -07:00
Alishahryar1
55945df1d2 removed logging utils 2026-03-11 07:24:50 -07:00
Alishahryar1
5a36a32836 feat: add llama.cpp provider for local anthropic messages API 2026-03-08 10:38:25 -07:00
Alishahryar1
1aedf4763c fix(providers): map httpx exceptions natively and remove type ignores 2026-03-08 08:33:34 -07:00
Alishahryar1
87d8ce1196 feat(lmstudio): route natively to Anthropic /v1/messages endpoint
- Rewrites LMStudioProvider to inherit from BaseProvider
- Passes requests natively to /v1/messages using httpx instead of AsyncOpenAI
- Auto-translates internal ThinkingConfig to Anthropic schema
- Updates .env.example with model routing instructions
- Adjusts test suite for new native integration
2026-03-08 08:17:05 -07:00
Ali Khokhar
884ddd77af
Add tests for fcc-init entrypoint (cli/entrypoints.py) (#77)
2026-03-07 08:27:11 -08:00
Alishahryar1
2e8b22fa9d Removed root insert hack from conftest 2026-03-01 21:57:25 -08:00
Alishahryar1
a7d88d5cbd Updated README with per-model mapping, fixed test .env isolation 2026-03-01 21:52:35 -08:00
Ali Khokhar
0b324e0421
Per claude model mapping (#66) 2026-03-01 21:32:23 -08:00
Ali Khokhar
fae8a2a044
Remove over-engineering: drop tree_queue setter, _set_connected(), fix cancel_all() TOCTOU (#63)

- Remove tree_queue property setter (backward-compat hack; all callers
already migrated to replace_tree_queue()); keep property getter only
- Update 2 remaining tests that still used direct assignment to use
replace_tree_queue()
- Remove _set_connected() 1-line wrapper on DiscordPlatform; assign
_connected directly
- Fix cancel_all() TOCTOU: hold self._lock for the full loop so newly
created trees cannot slip through between the snapshot and cancellation

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-01 12:34:00 -08:00
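The TOCTOU fix in the last bullet can be sketched with asyncio. `TreeRegistry` is a hypothetical stand-in for the real queue class; the point is only that the lock is held across the entire loop rather than just while taking a snapshot.

```python
import asyncio

class TreeRegistry:
    """Sketch: hold the lock for the whole cancel loop so trees created
    concurrently cannot slip in between the snapshot and cancellation."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._trees: dict[str, asyncio.Task] = {}

    async def cancel_all(self) -> int:
        async with self._lock:  # held for the full loop, not just a snapshot
            cancelled = 0
            for task in self._trees.values():
                task.cancel()
                cancelled += 1
            self._trees.clear()
            return cancelled
```

Any coroutine that registers a new tree would acquire the same lock, so registration and bulk cancellation serialize cleanly.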
Alishahryar1
35a2760f6e Fixed encapsulation violations 2026-03-01 04:28:22 -08:00
Alishahryar1
302ee28585 Removed dead code 2026-03-01 04:21:06 -08:00
Alishahryar1
34757511a0 Improve deterministic error surfacing across stream and API 2026-03-01 01:32:52 -08:00