* feat(qoder): native COSY integration
* feat(qoder): implement native COSY encryption algorithm and remove CLI child instances, plus workflow bumps
* feat(resilience): context overflow fallback, OAuth token detection, empty content guard & context-optimized combo strategy
- Add isContextOverflowError + isContextOverflow detectors (400 + token-limit signals)
- Auto-fallback to next family model on context overflow in chatCore
- Add isEmptyContentResponse to catch fake-success empty responses, trigger fallback + recursive retry
- Add OAUTH_INVALID_TOKEN error type (T11) with isOAuthInvalidToken signal matching; warn instead of deactivating node
- Add getModelContextLimit helper in modelsDevSync (reads limit_context from synced capabilities)
- Upgrade getTokenLimit in contextManager to check models.dev DB before registry (fixes gemini-2.5-pro: 1000000→1048576)
- Add findLargerContextModel in modelFamilyFallback for context-aware model selection
- Add sortModelsByContextSize + context-optimized combo strategy in combo.ts
- Update context-manager unit test for corrected gemini-2.5-pro limit
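A minimal sketch of the overflow detector described above, assuming the shape used here: the names `isContextOverflowError` and the "400 + token-limit signals" idea come from the commit, but the `ProviderError` interface and the exact signal strings are illustrative, not the real list in errorClassifier.ts.

```typescript
// Illustrative signal list; the actual strings in errorClassifier.ts may differ.
const CONTEXT_OVERFLOW_SIGNALS = [
  "context length",
  "maximum context",
  "token limit",
  "too many tokens",
];

// Hypothetical error shape for the sketch.
interface ProviderError {
  status?: number;
  message?: string;
}

// True when the error looks like a context-window overflow:
// an HTTP 400 whose message mentions a token/context limit.
function isContextOverflowError(err: ProviderError): boolean {
  if (err.status !== 400) return false;
  const msg = (err.message ?? "").toLowerCase();
  return CONTEXT_OVERFLOW_SIGNALS.some((s) => msg.includes(s));
}
```

On a match, chatCore would then swap in the next family model and retry rather than surfacing the 400 to the client.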
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(review): address Gemini code review — tool_calls path, infinite recursion, dedup signals, findLargerContextModel
- Fix isEmptyContentResponse: check message.tool_calls/delta.tool_calls instead
of firstChoice.tool_calls (wrong OpenAI API path, caused tool-call responses
to be falsely flagged as empty)
- Fix empty content fallback: replace recursive handleChatCore call (infinite
recursion risk + wrong model due to original body.model) with non-recursive
pattern — call executeProviderRequest, parse fallback response body, reassign
responseBody and fall through to existing processing
- Fix context overflow: use findLargerContextModel over family candidates first,
fall back to getNextFamilyFallback — ensures we pick a model with actually
larger context window on overflow
- Fix signal dedup: export CONTEXT_OVERFLOW_SIGNALS + CONTEXT_OVERFLOW_REGEX
from errorClassifier.ts; import shared regex in modelFamilyFallback.ts,
removing duplicate signal list and per-call RegExp construction
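A sketch of the dedup pattern from the last bullet, assuming this layout: `CONTEXT_OVERFLOW_SIGNALS` and `CONTEXT_OVERFLOW_REGEX` are the exported names named in the commit, but the signal strings here are placeholders, and the real list in errorClassifier.ts may be longer.

```typescript
// Single source of truth for overflow signals (placeholder strings).
export const CONTEXT_OVERFLOW_SIGNALS = [
  "context length",
  "maximum context",
  "token limit",
] as const;

// Compiled once at module load; modelFamilyFallback.ts imports this
// instead of rebuilding a RegExp on every call.
export const CONTEXT_OVERFLOW_REGEX = new RegExp(
  CONTEXT_OVERFLOW_SIGNALS.join("|"),
  "i",
);
```

Sharing one precompiled regex avoids the two lists drifting apart and the per-call `new RegExp` cost the bullet mentions.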
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(ui): add context-optimized strategy to frontend schema and options
* fix(sse): preserve Responses API events in stream translation
When translating Claude-format responses (e.g. GLM) to Responses API
format for Codex CLI, the sanitizer stripped {event, data} structured
items to {"object":"chat.completion.chunk"}, losing all content and
the critical response.completed event.
Only run sanitizeStreamingChunk on OpenAI Chat Completions chunks,
skipping items that have the Responses API {event, data} structure.
* test(sse): add regression test for Claude→Responses stream sanitization
Verifies that {event,data} structured items from the Responses API
translator bypass sanitizeStreamingChunk when translating Claude-format
providers (e.g. GLM) to Responses API format for Codex CLI.
* fix(sse): strengthen Responses API event detection with response. prefix check
Use explicit `response.` prefix check instead of generic `event && data`
presence check, as recommended in PR review.
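A sketch of the strengthened detection, under these assumptions: `sanitizeStreamingChunk` and the `response.` prefix check come from the commits above, while `StreamItem`, `isResponsesApiEvent`, and `maybeSanitize` are hypothetical names for illustration.

```typescript
// Hypothetical shape of a structured stream item.
interface StreamItem {
  event?: string;
  data?: unknown;
}

// A Responses API item carries an event name with a "response." prefix,
// e.g. "response.output_text.delta" or "response.completed". Checking the
// prefix is stricter than a bare `event && data` presence check.
function isResponsesApiEvent(item: StreamItem): boolean {
  return typeof item.event === "string" && item.event.startsWith("response.");
}

// Only non-Responses items (OpenAI Chat Completions chunks) get sanitized;
// Responses API items pass through so response.completed survives.
function maybeSanitize(
  item: StreamItem,
  sanitize: (chunk: unknown) => unknown,
): unknown {
  return isResponsesApiEvent(item) ? item : sanitize(item);
}
```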
* fix: pin Next.js to 16.0.10 to prevent Turbopack hashed module bug
Remove ^ prefix from next and eslint-config-next to prevent
automatic upgrades to 16.1.x+ which introduced content-based
hashing for external module references in Turbopack.
Also remove duplicate Material Symbols @import from globals.css
(font already loaded via <link> in layout.tsx).
Fixes #509
* align cc-compatible cache handling with client passthrough
* chore: integrate resilience and turbopack fixes (PRs #992, #990, #987)
* chore(release): bump to v3.5.2 — changelog, docs, version sync
* docs(i18n): sync documentation updates to 33 languages
* fix(qoder): replace any with unknown to comply with strict any-budget
---------
Co-authored-by: diegosouzapw <diegosouzapw@users.noreply.github.com>
Co-authored-by: oyi77 <oyi77@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Chris Staley <christopher-s@users.noreply.github.com>
Co-authored-by: Ivan <shanin-i2011@yandex.ru>
Co-authored-by: R.D. <rogerproself@gmail.com>