qwen-code/packages/core
zhangxy-zju d40fe7cdba
fix(core): scope StreamingToolCallParser per stream, not per Converter (#3516) (#3525)
* fix(core): scope StreamingToolCallParser per stream, not per Converter

Issue #3516 reports subagent failures with `Model stream ended with
empty response text` whose real root cause is concurrent streams
racing on a single shared tool-call parser.

Architecture before this change:

    Config (singleton)
      └── contentGenerator (OpenAIContentGenerator)
            └── ContentGenerationPipeline
                  └── OpenAIContentConverter
                        └── streamingToolCallParser  ← shared!

Any caller of `Config.getContentGenerator()` — foreground turns,
fork subagents, `run_in_background: true` subagents, ACP concurrent
Agent calls (PR #3463) — ends up using the same parser instance.
When two streams run concurrently, whichever stream enters
`processStreamWithLogging` second calls `resetStreamingToolCalls()`
at stream start and wipes the sibling stream's in-flight buffers;
the two streams' chunks then interleave at `index: 0`, producing
corrupt JSON like
`{"file_path": "/A{"file_path": "/B...` that even jsonrepair cannot
salvage. The corrupted tool calls are dropped entirely, and the
stream surfaces upstream as `NO_RESPONSE_TEXT`.
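A minimal sketch of the race, reduced to its essence (class and method names here are illustrative, not the actual qwen-code parser API): two streams both deliver their tool-call fragments at `index: 0` of one shared buffer map, so the fragments concatenate across streams.

```typescript
// Hypothetical stand-in for a parser keyed by tool-call index and
// shared by every caller of the singleton content generator.
class SharedParser {
  private buffers = new Map<number, string>();

  // Append an argument fragment to the buffer for this tool-call index.
  addChunk(index: number, fragment: string): void {
    this.buffers.set(index, (this.buffers.get(index) ?? '') + fragment);
  }

  get(index: number): string {
    return this.buffers.get(index) ?? '';
  }
}

const parser = new SharedParser(); // one instance, shared by both streams

// Stream A and stream B each emit their tool call at index 0, and their
// chunks arrive interleaved on the event loop:
parser.addChunk(0, '{"file_path": "/A'); // stream A, first fragment
parser.addChunk(0, '{"file_path": "/B'); // stream B interleaves
parser.addChunk(0, '/a.txt"}');          // stream A, rest of its args

console.log(parser.get(0));
// → '{"file_path": "/A{"file_path": "/B/a.txt"}' — unparseable JSON
```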

Fix: move parser state from Converter instance field into
per-stream local state.

- Add `ConverterStreamContext` and `createStreamContext()` factory
  on `OpenAIContentConverter`. Each call returns a fresh context
  holding its own `StreamingToolCallParser`.
- `convertOpenAIChunkToGemini(chunk, ctx)` now takes the context
  as an explicit arg; all internal parser calls route through it.
- `ContentGenerationPipeline.processStreamWithLogging` creates one
  context at stream entry and passes it to every chunk conversion.
- Drop `OpenAIContentConverter.streamingToolCallParser` field.
- Drop `resetStreamingToolCalls()` — the context has stream-local
  lifetime, no manual reset needed. The two call sites in the
  pipeline (stream entry and error path) are removed.
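The shape of the fix can be sketched as follows (a simplified model, assuming invented names like `Converter.convertChunk`; the real `ConverterStreamContext` carries more state): the converter stays shared, but each stream obtains a private context whose parser lives and dies with that stream.

```typescript
// Per-stream parser state: one buffer per context, never shared.
class StreamingToolCallParser {
  private buf = '';
  addChunk(fragment: string): void { this.buf += fragment; }
  finish(): unknown { return JSON.parse(this.buf); }
}

interface ConverterStreamContext {
  parser: StreamingToolCallParser;
}

class Converter {
  // No instance-level parser field: each stream gets a fresh context.
  createStreamContext(): ConverterStreamContext {
    return { parser: new StreamingToolCallParser() };
  }

  // The context is an explicit argument; all parser calls route through it.
  convertChunk(fragment: string, ctx: ConverterStreamContext): void {
    ctx.parser.addChunk(fragment);
  }
}

const converter = new Converter();            // still a shared instance
const a = converter.createStreamContext();    // stream A's private state
const b = converter.createStreamContext();    // stream B's private state

// The same interleaved schedule as before is now harmless:
converter.convertChunk('{"file_path": "/A', a);
converter.convertChunk('{"file_path": "/B', b);
converter.convertChunk('/a.txt"}', a);
converter.convertChunk('/b.txt"}', b);

console.log(a.parser.finish()); // { file_path: '/A/a.txt' }
console.log(b.parser.finish()); // { file_path: '/B/b.txt' }
```

Because the context is local to one stream, abandoning it on the error path is enough; there is nothing shared left to reset.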

Tests:

- Replace the `resetStreamingToolCalls` suite with a
  `createStreamContext` suite asserting that distinct contexts are
  independent and writes to one never leak into the other.
- Add a regression test simulating two concurrent streams with
  interleaved chunks through the same Converter instance; both
  tool calls close cleanly with correct arguments and ids.
- All existing single-stream tests updated to obtain a context via
  `createStreamContext()` and pass it through to chunk conversion.
- `pipeline.test.ts` mocks updated accordingly.

packages/core test suite: 841 passed. No stale references to
`resetStreamingToolCalls` or the private parser field remain.

Refs #3516

* docs(core): clarify GC wording in per-stream context comment (copilot review)

* test(core): add pipeline-level integration test for concurrent streams

Complements the unit tests in converter.test.ts by driving the real
ContentGenerationPipeline + real OpenAIContentConverter (no mocks on
converter) through two streams that interleave on the event loop via
`setImmediate`-paced async generators.

Two scenarios:

1. Happy path — two concurrent executeStream invocations with their
   own tool-call chunks. Assert each stream emits its own function
   call with the correct id and args (not cross-contaminated from
   the sibling stream).

2. Error isolation — one stream hits `error_finish` mid-flight while
   a sibling stream is still accumulating tool-call chunks. Assert
   the sibling's function call still emits cleanly, covering the
   removed `resetStreamingToolCalls()` call in the error path of
   processStreamWithLogging.
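The `setImmediate` pacing trick can be sketched like this (function names are illustrative, not the actual test file's helpers): each generator yields the event loop before every chunk, so two concurrently consumed streams are guaranteed to alternate turn by turn, reproducing the schedule that corrupted the old shared parser.

```typescript
// Emit chunks one per event-loop turn, so a sibling stream's generator
// always gets scheduled between consecutive chunks of this one.
async function* paced(chunks: string[]): AsyncGenerator<string> {
  for (const c of chunks) {
    await new Promise((resolve) => setImmediate(resolve));
    yield c;
  }
}

// Drain a stream into a single string, as a stand-in for chunk conversion.
async function collect(gen: AsyncGenerator<string>): Promise<string> {
  let out = '';
  for await (const chunk of gen) out += chunk;
  return out;
}

async function main(): Promise<void> {
  // Consume both streams concurrently; their chunks interleave on the
  // event loop, yet each collector sees only its own stream's data.
  const [a, b] = await Promise.all([
    collect(paced(['{"p": "/A', '/a"}'])),
    collect(paced(['{"p": "/B', '/b"}'])),
  ]);
  console.log(a); // '{"p": "/A/a"}'
  console.log(b); // '{"p": "/B/b"}'
}

main();
```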

Verified as a positive control: with the per-stream context fix
reverted (origin/main state), both tests fail with exactly the
bug shape users reported — one stream's function call is either
overwritten by the other's id/args, or is swallowed entirely when
the sibling stream's error path wipes the shared parser buffer.

Refs #3516
2026-04-22 20:32:30 +08:00
| Path | Last commit | Date |
| --- | --- | --- |
| scripts | Fix: Improve ripgrep binary detection and cross-platform compatibility (#1060) | 2025-11-18 19:38:30 +08:00 |
| src | fix(core): scope StreamingToolCallParser per stream, not per Converter (#3516) (#3525) | 2026-04-22 20:32:30 +08:00 |
| vendor | feat test tool permissions | 2026-03-10 16:30:22 +08:00 |
| index.ts | fix: Remove remaining ClearcutLogger export from packages/core/index.ts | 2026-02-01 14:52:14 +08:00 |
| package.json | chore(release): bump version to 0.15.0 (#3526) | 2026-04-22 19:26:13 +08:00 |
| test-setup.ts | feat(memory): managed auto-memory and auto-dream system (#3087) | 2026-04-16 20:05:45 +08:00 |
| tsconfig.json | fix: upgrade @lydell/node-pty to 1.2.0-beta.10 to fix PTY FD leak | 2026-04-01 07:55:56 +08:00 |
| vitest.config.ts | Sync upstream Gemini-CLI v0.8.2 (#838) | 2025-10-23 09:27:04 +08:00 |