When a file is skipped because the model doesn't support a modality,
it should not be treated as an error. The error field was incorrectly
being set alongside the informational message.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Adds the required getContentGeneratorConfig mock to read-file.test.ts
and pathReader.test.ts to fix failing tests that depend on content
generator configuration.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
This fixes session corruption issues where the modality check was based on
the model name rather than the actual resolved config, causing inconsistent
behavior when the config's modalities differed from the defaults.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
- Increase description warning threshold from 500 to 1,000 characters
- Change system prompt 10,000 char limit from error to warning
- Remove intermediate 5,000 char warning threshold for system prompts
- Update documentation to reflect soft warning behavior
This provides more flexibility for users while still guiding them
toward better practices.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
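The relaxed validation described above could look roughly like this. A minimal TypeScript sketch; the function and field names are hypothetical, only the thresholds (1,000 and 10,000 characters) and the error-to-warning downgrade come from the commit text:

```typescript
// Descriptions warn above 1,000 chars; system prompts warn (rather than
// error) above 10,000 chars. The old 5,000-char intermediate warning is gone.
const DESCRIPTION_WARN_CHARS = 1_000;
const SYSTEM_PROMPT_WARN_CHARS = 10_000;

interface ValidationResult {
  warnings: string[];
  errors: string[];
}

function validatePromptLengths(
  description: string,
  systemPrompt: string,
): ValidationResult {
  const result: ValidationResult = { warnings: [], errors: [] };
  if (description.length > DESCRIPTION_WARN_CHARS) {
    result.warnings.push(
      `description is ${description.length} chars (recommended max ${DESCRIPTION_WARN_CHARS})`,
    );
  }
  // Previously a hard error at 10,000 chars; now a soft warning.
  if (systemPrompt.length > SYSTEM_PROMPT_WARN_CHARS) {
    result.warnings.push(
      `system prompt is ${systemPrompt.length} chars (recommended max ${SYSTEM_PROMPT_WARN_CHARS})`,
    );
  }
  return result;
}
```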
Restructures the convertOpenAIResponseToGemini method to set
response.candidates within the choice conditional block, making
the empty choices handling more explicit and the control flow clearer.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
- Add 'array' type support to SettingItemDefinition
- Change hooks field from object to array type
- Add additionalProperties constraint for env fields
- Fix additionalProperties generation to only apply for object types
This ensures the hooks configuration schema correctly represents hooks as an array
and properly validates environment variable objects.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
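A sketch of what the schema change might look like. The `SettingItemDefinition` shape and the `toJsonSchema` helper are assumptions for illustration; the commit only specifies that 'array' is now a legal type, hooks are an array, and `additionalProperties` is emitted for object types only:

```typescript
type SettingType = 'string' | 'number' | 'boolean' | 'object' | 'array';

interface SettingItemDefinition {
  type: SettingType;
  items?: SettingItemDefinition; // element schema for 'array'
  properties?: Record<string, SettingItemDefinition>;
  additionalProperties?: SettingItemDefinition | boolean;
}

function toJsonSchema(def: SettingItemDefinition): Record<string, unknown> {
  const schema: Record<string, unknown> = { type: def.type };
  if (def.type === 'array' && def.items) {
    schema.items = toJsonSchema(def.items);
  }
  if (def.type === 'object') {
    if (def.properties) {
      schema.properties = Object.fromEntries(
        Object.entries(def.properties).map(([k, v]) => [k, toJsonSchema(v)]),
      );
    }
    // The bug being fixed: additionalProperties must only be generated
    // for object types, never leak onto strings, arrays, etc.
    if (def.additionalProperties !== undefined) {
      schema.additionalProperties =
        typeof def.additionalProperties === 'boolean'
          ? def.additionalProperties
          : toJsonSchema(def.additionalProperties);
    }
  }
  return schema;
}
```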
Previously, `generateQualitativeInsights` used `Promise.all` with a
`generate` helper that re-threw errors. A single LLM call failure
(timeout, rate limit, JSON parse error) caused the entire `qualitative`
object to become `undefined`, hiding all detailed report sections.
Now individual `generate` calls catch errors and return `undefined`
instead of throwing. The `QualitativeInsights` interface fields are
made optional so partial results render correctly — each React section
component already guards against missing data with `if (!field) return
null`.
Made-with: Cursor
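The before/after of this fix can be sketched as follows. The `QualitativeInsights` field names here are placeholders; the pattern (per-call catch inside `generate`, optional fields so partial results survive) is what the commit describes:

```typescript
interface QualitativeInsights {
  summary?: string;
  risks?: string;
  highlights?: string;
}

// Each call catches its own failure and resolves to undefined, so one
// timeout / rate limit / parse error no longer rejects the whole Promise.all
// and wipes out every report section.
async function generate(call: () => Promise<string>): Promise<string | undefined> {
  try {
    return await call();
  } catch {
    return undefined;
  }
}

async function generateQualitativeInsights(
  calls: Record<keyof QualitativeInsights, () => Promise<string>>,
): Promise<QualitativeInsights> {
  const [summary, risks, highlights] = await Promise.all([
    generate(calls.summary),
    generate(calls.risks),
    generate(calls.highlights),
  ]);
  return { summary, risks, highlights };
}
```

Downstream, each React section component's existing `if (!field) return null` guard renders the sections that did succeed.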
The streaming path (convertOpenAIChunkToGemini) already uses optional
chaining on choices and guards with `if (choice)`, but the
non-streaming path accesses choices[0] directly. Providers that return
an empty choices array cause a TypeError crash.
Made-with: Cursor
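A minimal sketch of the guarded non-streaming conversion, with simplified response shapes (the real types in the converter are richer):

```typescript
interface OpenAIResponse {
  choices: Array<{ message: { content: string } }>;
}
interface GeminiResponse {
  candidates: Array<{ content: { parts: Array<{ text: string }> } }>;
}

function convertResponse(response: OpenAIResponse): GeminiResponse {
  // Mirror the streaming path: optional chaining plus an explicit guard,
  // so an empty choices array yields an empty result, not a TypeError.
  const choice = response.choices?.[0];
  const result: GeminiResponse = { candidates: [] };
  if (choice) {
    result.candidates = [
      { content: { parts: [{ text: choice.message.content }] } },
    ];
  }
  return result;
}
```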
The model name DeepSeek-R1 normalizes to `deepseek-r1`, which does not match the
existing `^deepseek-reasoner` output pattern, causing it to fall
through to the 8K default. DeepSeek R1 supports 64K output tokens,
same as deepseek-reasoner. Without this fix, responses from
deepseek-r1 are silently truncated at 8K tokens.
Made-with: Cursor
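The pattern-based lookup might look like this sketch. The function name and table layout are hypothetical; the `^deepseek-reasoner` pattern, the 64K limit, and the 8K default come from the commit text:

```typescript
// Output-token limits keyed by normalized model-name prefix patterns.
const OUTPUT_LIMITS: Array<[RegExp, number]> = [
  [/^deepseek-reasoner/, 65_536],
  // DeepSeek R1 supports 64K output tokens, same as deepseek-reasoner,
  // but 'deepseek-r1' never matched the pattern above.
  [/^deepseek-r1/, 65_536],
];
const DEFAULT_OUTPUT_LIMIT = 8_192;

function getOutputTokenLimit(model: string): number {
  const normalized = model.toLowerCase();
  for (const [pattern, limit] of OUTPUT_LIMITS) {
    if (pattern.test(normalized)) return limit;
  }
  // Without the deepseek-r1 entry, R1 fell through to this 8K default
  // and responses were silently truncated.
  return DEFAULT_OUTPUT_LIMIT;
}
```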
The temporary debug log session setup at the start of loadSettings() was
removed along with unused imports (setDebugLogSession, sanitizeCwd). The
resolvedWorkspaceDir variable is now defined where it's actually used.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Document that the setting defaults to true on most platforms but false
on Windows builds <= 19041 due to ConPTY reliability issues, matching
VS Code's approach.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
When toggling shell focus mode with Ctrl+F, the raw control character
was being forwarded to the PTY, causing a ^F artifact to appear in the
shell. This fix intercepts the Ctrl+F keypress before it reaches the
PTY when it's used for focus mode toggling.
Fixes #2236
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Remove tests that rely on arrow key and keyboard input timing which are
unreliable on Windows CI due to terminal emulation differences.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
ConPTY on Windows builds <= 19041 has known reliability issues including
missing output and hangs. VS Code uses the same cutoff.
This fixes run_shell_command returning empty output on affected Windows
systems by falling back to spawn-based execution instead of node-pty.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
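A sketch of the build-number gate, assuming the usual `os.release()` format on Windows (`"10.0.19041"`); the function name is hypothetical:

```typescript
import * as os from 'node:os';

// VS Code uses the same cutoff: ConPTY on builds <= 19041 has known
// reliability issues (missing output, hangs).
const MIN_RELIABLE_CONPTY_BUILD = 19041;

function shouldUsePty(
  platform: string = process.platform,
  release: string = os.release(),
): boolean {
  if (platform !== 'win32') return true;
  const build = Number(release.split('.')[2] ?? 0);
  // On affected builds, fall back to spawn-based execution instead of
  // node-pty so run_shell_command stops returning empty output.
  return build > MIN_RELIABLE_CONPTY_BUILD;
}
```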
- Enable windowsVerbatimArguments only when shell is cmd.exe
- PowerShell requires default escaping for correct arg round-trip
The windowsVerbatimArguments option skips Node's MSVC CRT escaping,
which cmd.exe doesn't understand. PowerShell (.NET) needs the default
escaping so args pass correctly through CommandLineToArgvW.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
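The conditional can be sketched as a small options builder (the function name and the shell-detection regex are assumptions; the cmd.exe-only gating is from the commit):

```typescript
interface WinSpawnOptions {
  windowsVerbatimArguments: boolean;
}

function buildSpawnOptions(shellPath: string): WinSpawnOptions {
  const isCmd = /cmd\.exe$/i.test(shellPath);
  return {
    // cmd.exe does not understand Node's default MSVC CRT quoting, so pass
    // args verbatim there. PowerShell (.NET) needs the default escaping to
    // round-trip args correctly through CommandLineToArgvW.
    windowsVerbatimArguments: isCmd,
  };
}
```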
PowerShell handles array args correctly via CommandLineToArgvW, and
the string form breaks quoted paths ending in backslash (e.g.,
"C:\Temp\") because \" is treated as an escaped quote.
This refines the previous fix to only apply the workaround to cmd.exe.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
On Windows, node-pty's argsToCommandLine re-quotes array elements that
contain spaces, which mangles user-provided quoted arguments. For example,
'type "hello world"' becomes '"type \"hello world\""'.
By passing args as a single string instead of an array on Windows,
node-pty concatenates it verbatim, preserving the original quoting.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
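The shape of the workaround, sketched under the assumption that node-pty's `spawn` accepts the args either as a `string` or a `string[]` (the POSIX `-c` form below is illustrative):

```typescript
function ptyArgs(command: string, isWindows: boolean): string | string[] {
  if (isWindows) {
    // As a single string, node-pty concatenates verbatim: e.g.
    // 'type "hello world"' stays as typed instead of being re-quoted by
    // argsToCommandLine into '"type \"hello world\""'.
    return command;
  }
  // On POSIX, the array form is safe; shells take the command via -c.
  return ['-c', command];
}
```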
Updates test expectations for GLM-5 and GLM-4.7 output limits
from 16K to 128K to align with the implementation changes.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
GPT-5.x models have 400K total context but 128K is reserved for output,
so the actual input limit is 272K (400K - 128K). Also updates GLM-5 and
GLM-4.7 output limits from 16K to 128K.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
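The arithmetic behind the 272K figure, as a worked sketch (using K = 1,024; the subtraction holds either way, since 400K and 128K use the same unit):

```typescript
const K = 1_024;

// Effective input limit = total context window minus tokens reserved for output.
function effectiveInputLimit(
  totalContextTokens: number,
  reservedOutputTokens: number,
): number {
  return totalContextTokens - reservedOutputTokens;
}

const GPT5_TOTAL_CONTEXT = 400 * K; // 400K total context
const GPT5_RESERVED_OUTPUT = 128 * K; // 128K reserved for output
const GPT5_INPUT_LIMIT = effectiveInputLimit(GPT5_TOTAL_CONTEXT, GPT5_RESERVED_OUTPUT); // 272K
```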