Mirror of https://github.com/diegosouzapw/OmniRoute.git, synced 2026-05-05 09:46:30 +00:00
* fix(streaming): #1211 greedy strip omniModel tags to prevent literal \n\n artifacts
  - Changed regex quantifier from ? to * in combo.ts, comboAgentMiddleware.ts, and contextHandoff.ts to greedily strip all JSON-escaped newline sequences surrounding <omniModel> tags in SSE streaming chunks (see the sketch below)
  - Added \r to the character class for cross-platform robustness
  - Fixed Playwright strict-mode violation in combo-unification.spec.ts
  - Bumped OpenAPI version and CHANGELOG to 3.6.6
* fix: 3 bugs found during issue triage (#1175, #1187/#1218, #1202)
  - fix(gemini): strip VS Code JSON Schema extensions from tool schemas (#1175)
    Add enumDescriptions, markdownDescription, markdownEnumDescriptions, enumItemLabels and tags to UNSUPPORTED_SCHEMA_CONSTRAINTS so the Gemini sanitizer removes them before forwarding. GitHub Copilot injects these non-standard fields into tool definitions, causing Gemini to reject with 'Unknown name enumDescriptions at functionDeclarations[n].parameters'.
  - fix(health-check): unwrap proxy config object before passing to getAccessToken (#1187 #1218)
    resolveProxyForConnection() returns { proxy, level, levelId } but the health check loop was passing the full wrapper to getAccessToken(), which expects the inner config object (.host, .port, etc.). The proxy dispatcher validated .host on the wrapper (undefined) and threw 'Context proxy host is required', silently marking every connection as unhealthy every sweep. Fix mirrors the pattern already used in chatHelpers.ts: proxyResult?.proxy || null.
  - fix(ui): debounce models.dev sync interval slider to save only on release (#1202)
    The slider's onChange fired updateInterval() on every drag tick, sending a PATCH per pixel of movement. Rapid API responses overwrote UI state mid-drag. Introduce draftIntervalHours for smooth visual feedback; the PATCH fires on onMouseUp / onBlur once the user releases the control.
* fix(providers): update Xiaomi MiMo token-plan endpoints (#1238)
  Integrated into release/v3.6.6
* fix(cc-compatible): trim beta flags and preserve cache passthrough (#1230)
  Integrated into release/v3.6.6
* feat(memory+skills): full-featured memory & skills systems with tests (#1228)
  Integrated into release/v3.6.6
* fix: forward client x-initiator header to GitHub Copilot upstream (#1227)
  Integrated into release/v3.6.6
* feat(bailian-quota): add Alibaba Coding Plan quota monitoring (#1235)
* fix: resolve v3.6.6 backlog bugs (#1206, #1211, #1220, #1231)
  - fix(core): #1206 inject startup guard against app/ and src/app/ conflict
  - fix(health): #1220 add HEALTHCHECK_STAGGER_MS to prevent token refresh bursting
  - fix(proxy): #1231 prioritize HTTP 429 over quota body heuristics
  - fix(sse): #1211 strip leading double-newlines in responses API stream
* fix(tests): resolve memory migration and skills route pagination bugs from PR overlaps
* docs: Update CHANGELOG.md with v3.6.6 features (#1182, #1165, #1177)
* chore(release): bump version to 3.6.6
  Update package versions for the electron app and open-sse package. Sync llm.txt metadata and feature headings with the 3.6.6 release.
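A hedged sketch of the greedy <omniModel> strip described in the #1211 entry above, assuming the fix targets JSON-escaped \r/\n runs hugging the tag markers; the constant and function names below are illustrative, not the shipped code in combo.ts:

```ts
// Illustrative only: the real patterns in combo.ts, comboAgentMiddleware.ts and
// contextHandoff.ts may differ in exact shape.
const ESCAPED_NEWLINES_AROUND_TAG = /(?:\\[rn])*<\/?omniModel>(?:\\[rn])*/g;

function stripOmniModelArtifacts(sseChunk: string): string {
  // The greedy `*` removes every JSON-escaped \r / \n run around the tag in one
  // pass; the earlier `?` quantifier matched at most one escape, which is what
  // left literal "\n\n" text behind in the stream.
  return sseChunk.replace(ESCAPED_NEWLINES_AROUND_TAG, "");
}
```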
* feat(core): harden outbound provider calls and add cooldown retries
  Add guarded outbound fetch helpers with private/local URL blocking, controlled retries, timeout normalization, and route-level status propagation for provider validation and model discovery.
  Introduce cooldown-aware chat retries with configurable requestRetry and maxRetryIntervalSec settings, model-scoped cooldown responses, and improved rate-limit learning from headers and error bodies so short upstream lockouts can recover automatically (see the retry sketch below). Also align Antigravity and Codex header handling, require API keys for Pollinations, validate web runtime env at startup, restore sanitized Gemini tool names in translated responses, and inject a synthetic Claude text block when upstream SSE completes empty.
* feat(models): add glmt preset and hybrid token counting
  Introduce GLM Thinking as a first-class provider preset with shared GLM model metadata, pricing, usage sync, dashboard support, and provider request defaults for higher token budgets and longer timeouts. Use provider-side /messages/count_tokens when a Claude-compatible upstream supports it, while preserving estimated fallback behavior for missing models, missing credentials, and upstream failures. Also add startup seeding for default model aliases and normalize common cross-proxy model dialects so canonical slashful model ids do not get misrouted during resolution.
* feat(api): add sync tokens and v1 websocket bridge
  Add dedicated sync token storage, issuance, revocation, and bundle download routes backed by stable config bundle versioning and ETag support. Expose the v1 websocket handshake route and custom Next server bridge so OpenAI-compatible websocket traffic can be upgraded and proxied through the dashboard and API bridge. Expand compliance auditing with structured metadata, pagination, request context, auth and provider credential events, and SSRF-blocked validation logging.
* docs: Update all documentation for v3.6.6
  - CHANGELOG: Add WebSocket bridge, GLM Thinking preset, safe outbound fetch/SSRF guard, cooldown-aware retries, compliance audit v2, model alias seeding, and all Internal Improvements for the 3 new commits
  - README: Expand v3.6.x highlights table with 10 new features; add SafeOutboundFetch, CooldownAwareRetry, SSRF guard, TPS metric, sync tokens, WebSocket bridge to Resilience/Observability/Deployment tables
  - ARCHITECTURE: Bump date; add new modules to executive summary, API routes, SSE core services, Auth/Security section; add SSRF/Outbound guard failure mode (section 6); expand module mapping
  - ENVIRONMENT: Add OMNIROUTE_CRYPT_KEY/OMNIROUTE_API_KEY_BASE64 legacy aliases, OUTBOUND_SSRF_GUARD_ENABLED, CODEX_CLIENT_VERSION, and REQUEST_RETRY/MAX_RETRY_INTERVAL_SEC cooldown retry settings
  - FEATURES: Add 6 new feature sections — V1 WebSocket Bridge, Sync Tokens & Config Bundle, GLM Thinking Preset, Safe Outbound Fetch & SSRF Guard, Cooldown-Aware Retries, Compliance Audit v2
* fix: use api64 for proxy test (#1255)
  Integrated into release/v3.6.6 — IPv6 proxy test fix
* fix(page): update custom models section to include all providers #1200 (#1256)
  Integrated into release/v3.6.6 — Gemini custom model picker fix
* fix: provide default client_id fallbacks to prevent broken OAuth requests (#1246)
  Integrated into release/v3.6.6 — OAuth client_id default fallbacks
* fix: translate max_tokens/max_completion_tokens → max_output_tokens in Chat→Responses translator (#1245)
  Integrated into release/v3.6.6 — max_tokens → max_output_tokens Responses API translation + unit tests
* feat(oauth): support cursor-agent CLI as Cursor credential source (#1258)
  Integrated into release/v3.6.6 — cursor-agent CLI credential source support
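A minimal sketch of the cooldown-aware retry idea from the feat(core) entry above, assuming a plain Retry-After/backoff loop; the setting names follow the changelog wording (requestRetry, maxRetryIntervalSec), while the shipped loop additionally learns cooldowns from error bodies and scopes them per model:

```ts
// Sketch under the assumptions stated above; not the actual implementation.
async function fetchWithCooldown(
  doRequest: () => Promise<Response>,
  requestRetry: number,
  maxRetryIntervalSec: number
): Promise<Response> {
  let response = await doRequest();
  for (let attempt = 1; attempt <= requestRetry && response.status === 429; attempt++) {
    // Prefer the upstream Retry-After header, otherwise back off exponentially,
    // and never wait longer than maxRetryIntervalSec.
    const retryAfterSec = Number(response.headers.get("retry-after")) || 2 ** attempt;
    const waitSec = Math.min(retryAfterSec, maxRetryIntervalSec);
    await new Promise((resolve) => setTimeout(resolve, waitSec * 1000));
    response = await doRequest();
  }
  return response;
}
```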
* fix(cc-compatible): restore upstream SSE and correct stream/combo timeout behavior (#1257)
  Integrated into release/v3.6.6 — CC-compatible upstream SSE restore + stream timeout fix + README table repair
* fix(cli-tools): resolve API key resolution and model mapping bugs in CLI tools (#1263)
  Integrated into release/v3.6.6
* feat(cli-tools): add Qwen Code CLI integration (#1266)
  Integrated into release/v3.6.6
* fix(i18n): add missing zh-CN translations and fix logger imports (#1269)
  Integrated into release/v3.6.6
* fix(i18n): add Chinese i18n support to dashboard components (#1274)
  Integrated into release/v3.6.6
* feat: update Pollinations to require API key, remove free tier flag (#1177)
* feat: friendly error messages for crypto/encryption failures (#1165)
* feat: add TPS (tokens per second) metric column to request logs (#1182)
* feat: merge custom/imported models into filter list for all providers (#1191)
* feat(fallback): Fix provider-profile-driven lockouts (#1267)
  This integrates rdself's unify-provider-profile-locks PR manually to handle structural conflicts.
* fix(claude): proper Anthropic SDK integration (#1271)
* fix(healthcheck): use correct proxy wrapper format for getAccessToken (#1272)
* chore(release): v3.6.6 — skills registry stability fix + final integration
* fix(auth): harden bootstrap auth and memory dashboard behavior
  Restrict unauthenticated writes to /api/settings/require-login to the initial bootstrap window while keeping read-only checks public. This prevents post-setup config changes without blocking first-run login setup, and the onboarding flow now logs in immediately after setting the password.
  Restore memory API filtering and pagination behavior by supporting q searches, honoring offset-based requests, and avoiding unrelated fallback results when FTS misses (see the pagination sketch below). Update dashboard stats fallback to use the response totals consistently.
  Package the MCP server with explicit file entries and add regression tests for bootstrap auth and memory route behavior.
* fix(codex): remove max_output_tokens from body for compatibility
* chore(release): v3.6.6 — include PR 1274 fixes in changelog
* chore: exclude additional build artifacts and internal directories from npm package distribution
* fix: update Gemini OAuth test to match registry defaults + codex UI improvements
* fix: restore .mjs refs for scripts/ in test imports after ts migration
* fix: restore next.config.mjs ref in dev-origins test
* fix: implement db migration safety checks and codex config format
* fix: disable mass-migration abort during unit tests based on auto-backup flag
* fix: update script regex in auto-update tests to use .mjs
* feat: Add Perplexity Web (Session) provider (#1289)
  Integrated into release/v3.6.6
* fix(cli): resolve codex routing config parsing, standardize select model button positioning, and clarify oauth documentation
* docs(changelog): record recent cli, provider, and test updates
  Document the latest fixes for Codex routing configuration parsing and Lobehub provider icon fallback behavior. Add the note that the remaining JavaScript test files were migrated to TypeScript ES modules to reflect the completed test stack transition.
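A hypothetical sketch of the memory-route behavior described in the fix(auth) entry above: q filters the rows, offset/limit paginate them, and a missed search returns an empty page instead of unrelated fallback rows. The types and helper below are invented for illustration and do not mirror the real FTS-backed route:

```ts
// Illustrative only; the real route queries FTS-backed storage.
interface MemoryQuery {
  q?: string;
  offset?: number;
  limit?: number;
}

function pageMemories<T extends { text: string }>(rows: T[], query: MemoryQuery): T[] {
  const { q, offset = 0, limit = 50 } = query;
  // When q misses, return an empty page rather than unrelated fallback rows.
  const matched = q
    ? rows.filter((row) => row.text.toLowerCase().includes(q.toLowerCase()))
    : rows;
  return matched.slice(offset, offset + limit);
}
```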
* chore(release): merge #1286 minor improvements manually to avoid testing conflict
* chore(test): rename perplexity-web.test.mjs to .ts to maintain 100% TS codebase
* chore(docs): update CHANGELOG.md for perplexity-web provider
* fix(security): resolve CodeQL incomplete URL substring sanitization via URL parsing in test mocks
* fix: integrate compressContext() into chatCore.ts request pipeline
  Proactively compress oversized contexts before sending to upstream providers, preventing context_length_exceeded errors. Compression triggers at 85% of the model's context limit using the existing 3-layer compressContext() function.
  - Import compressContext, estimateTokens, getTokenLimit from contextManager
  - Add compression check after translation, before executor dispatch
  - Estimate tokens and compare against 85% threshold of the model's context limit
  - Apply 3-layer compression (trim tools, compress thinking, purify history)
  - Log compression events with before/after token counts and layers applied
  - Audit compression events for observability
  - Add unit tests verifying integration behavior
  Closes #1290
* fix(tests): align reasoning expectations with GLM thinking structure
* fix: prevent orphaned tool_result messages in purifyHistory()
  When purifyHistory() drops the oldest messages to fit the context window, it can split tool_use/tool_result pairs — keeping the tool_result but dropping the tool_use that initiated it. This causes upstream providers to reject the request with format errors. Add fixToolPairs() that runs after each purification pass to remove (see the sketch below):
  - OpenAI format: orphaned role='tool' messages without matching tool_calls ID
  - Claude format: orphaned tool_result content blocks without matching tool_use ID
  Closes #1291
* fix(tests): supply tool_use in mock so it is not dropped
* chore: convert remaining test to TypeScript
* fix(tests): restore compatibility with compressContext threshold test after tsx migration
* docs: finalize v3.6.6 release documentation
* fix(core): finalize provider removal, type issues, and codex API key config
* fix(dashboard): render Web/Cookie, Search, Audio provider sections and fix TypeScript errors
* fix: increase MCP web_search timeout to 60s (#1278)
* fix: route combo testing properly for embedding models (#1260)
* fix: accumulate excluded accounts in combo fallback loop (#1233)
* fix: strip leading whitespace and newlines from first streaming chunk (#1211)
* docs: clarify VPS and Docker settings for OAuth credentials (#1204)
* fix: return real retry-after for pipeline gates (#1301)
  Integrated into release/v3.6.6 — returns real Retry-After values from pipeline gates
* feat: streaming semantic cache, Cursor auto-version detection, and call-log enhancements (#1296)
  Integrated into release/v3.6.6 — streaming semantic cache, Cursor auto-version detection, call-log cache_source tracking
* feat(api): support more OpenAI types (image, embeddings, audio-transcriptions, audio-speech) (#1297)
  Integrated into release/v3.6.6 — adds embeddings, audio-transcriptions, audio-speech, and images-generations support for custom OpenAI-compatible providers, plus Pollinations image registry
* deps: bump hono from 4.12.12 to 4.12.14 (#1302)
  Integrated into release/v3.6.6
* deps: bump hono from 4.12.12 to 4.12.14 (#1306)
  Integrated into release/v3.6.6
* chore: stabilization fixes for v3.6.6 (#1298, #1254, #59, CI)
* fix(providers): match correct endpoint for Xiaomi MiMo, strip routing prefix for custom openai endpoints (#1303, #1261)
* feat(storage): add database backup cleanup controls
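A hedged sketch of the orphan-removal idea behind fixToolPairs() from the purifyHistory() entry above, covering only the OpenAI message shape; the shipped helper also handles Claude-style tool_result content blocks and runs after every purification pass. The message type and function name below are illustrative:

```ts
// Illustrative shape only; not the actual fixToolPairs() implementation.
interface ChatMessage {
  role: string;
  tool_call_id?: string;
  tool_calls?: { id: string }[];
}

function dropOrphanedToolResults(messages: ChatMessage[]): ChatMessage[] {
  // Collect every tool_call id that survived history trimming.
  const keptIds = new Set(messages.flatMap((m) => (m.tool_calls ?? []).map((call) => call.id)));
  // Drop role:"tool" results whose initiating tool_call was trimmed away, since
  // those orphans are what upstream providers reject.
  return messages.filter(
    (m) => m.role !== "tool" || (m.tool_call_id !== undefined && keptIds.has(m.tool_call_id))
  );
}
```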
* chore(release): v3.6.6 — Final Stabilization Push
* Backport call log storage refactor to release/v3.6.6 (#1307)
  Integrated into release/v3.6.6
* deps: update dompurify to 3.4.0 to resolve CVE-XYZ (#60)
* test: disable sqlite auto backup in CI to resolve E2E timeout (#24481475058)
* chore(docs): sync CHANGELOG for v3.6.6 with missing features and fixes
* chore(release): prep v3.6.6 infrastructure and type safety fixes
  - Migrated legacy .mjs scripts to .ts (bin, prepublish, policies)
  - Resolved pre-commit strict lint (t11 budget) errors in combo.ts
  - Explicitly typed all TS bindings in pack-artifact policies
  - Updated package.json commands to run Node via tsx/esm internally
  - Hardened CI/CD with explicit node version 22.22.2 checks
  - Completed stage validations for v3.6.6 final release
* chore: fix TS build errors and e2e timeouts in CI
  - Migrate nodeRuntimeSupport to TS interfaces avoiding implicit any
  - Increase visibility timeouts in skills-marketplace E2E test to 15s to bypass CI flakiness
  - Complete migration of .mjs scripts to .ts ensuring type safety
* chore(release): sync package version 3.6.6 across workspaces
* test(e2e): universally increase UI component visibility timeouts from 5s to 15s to bypass CI starvation
* chore(build): inject baseUrl, paths, and types:node into MITM tsconfig within prepublish hook to fix missing types in CI check

---------

Co-authored-by: diegosouzapw <diegosouzapw@users.noreply.github.com>
Co-authored-by: Jack <5443152+hijak@users.noreply.github.com>
Co-authored-by: Randi <55005611+rdself@users.noreply.github.com>
Co-authored-by: Paijo <14921983+oyi77@users.noreply.github.com>
Co-authored-by: Samuel Cedric <ceds.sam@gmail.com>
Co-authored-by: Max Garmash <max@37bytes.com>
Co-authored-by: Markus Hartung <mail@hartmark.se>
Co-authored-by: Gi99lin <74502520+Gi99lin@users.noreply.github.com>
Co-authored-by: Payne <baboialex95@gmail.com>
Co-authored-by: Benson K B <bensonkbmca@gmail.com>
Co-authored-by: clousky2020 <33016567+clousky2020@users.noreply.github.com>
Co-authored-by: Ravi Tharuma <25951435+RaviTharuma@users.noreply.github.com>
Co-authored-by: oyi77 <oyi77@users.noreply.github.com>
Co-authored-by: Hdsje <vovan877@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: xiaoge1688 <moyekongling@gmail.com>
549 lines
18 KiB
TypeScript
import test from "node:test";
import assert from "node:assert/strict";

import { CursorExecutor } from "../../open-sse/executors/cursor.ts";
import {
  decodeMessage,
  encodeField,
  parseConnectRPCFrame,
  wrapConnectRPCFrame,
} from "../../open-sse/utils/cursorProtobuf.ts";
import {
  buildCursorHeaders,
  generateCursorChecksum,
  generateHashed64Hex,
  generateSessionId,
} from "../../open-sse/utils/cursorChecksum.ts";
import {
  getCursorVersion,
  resetCursorVersionCache,
} from "../../open-sse/utils/cursorVersionDetector.ts";

const LEN = 2;
const VARINT = 0;
const TOP_LEVEL_TOOL_CALL = 1;
const TOP_LEVEL_RESPONSE = 2;
const RESPONSE_TEXT = 1;
const TOOL_ID = 3;
const TOOL_NAME = 9;
const TOOL_RAW_ARGS = 10;
const TOOL_IS_LAST = 11;

function concatArrays(...arrays) {
  const total = arrays.reduce((sum, array) => sum + array.length, 0);
  const result = new Uint8Array(total);
  let offset = 0;

  for (const array of arrays) {
    result.set(array, offset);
    offset += array.length;
  }

  return result;
}

function buildTextFrame(text) {
  return Buffer.from(
    wrapConnectRPCFrame(
      encodeField(TOP_LEVEL_RESPONSE, LEN, encodeField(RESPONSE_TEXT, LEN, text)),
      false
    )
  );
}

function buildCompressedTextFrame(text) {
  return Buffer.from(
    wrapConnectRPCFrame(
      encodeField(TOP_LEVEL_RESPONSE, LEN, encodeField(RESPONSE_TEXT, LEN, text)),
      true
    )
  );
}

function buildToolCallFrame({ id, name, args, isLast }) {
  return Buffer.from(
    wrapConnectRPCFrame(
      encodeField(
        TOP_LEVEL_TOOL_CALL,
        LEN,
        concatArrays(
          encodeField(TOOL_ID, LEN, id),
          encodeField(TOOL_NAME, LEN, name),
          encodeField(TOOL_RAW_ARGS, LEN, args),
          encodeField(TOOL_IS_LAST, VARINT, isLast ? 1 : 0)
        )
      ),
      false
    )
  );
}

function buildJsonErrorFrame(error) {
  return Buffer.from(wrapConnectRPCFrame(new TextEncoder().encode(JSON.stringify(error)), false));
}

test("CursorExecutor.buildUrl uses the configured Cursor endpoint", () => {
  const executor = new CursorExecutor();
  assert.equal(
    executor.buildUrl(),
    "https://api2.cursor.sh/aiserver.v1.ChatService/StreamUnifiedChatWithTools"
  );
});

test("CursorExecutor.buildHeaders strips token prefixes and derives checksum/session headers", () => {
  const executor = new CursorExecutor();
  const originalDateNow = Date.now;
  const originalDbPath = process.env.CURSOR_STATE_DB_PATH;
  // Force fallback by pointing to a non-existent DB path
  process.env.CURSOR_STATE_DB_PATH = "/nonexistent/cursor/state.vscdb";
  resetCursorVersionCache();
  Date.now = () => 1_700_000_000_000;

  try {
    const headers = executor.buildHeaders({
      accessToken: "prefix::real-token",
      providerSpecificData: { machineId: "machine-1", ghostMode: false },
    });

    const expectedVersion = getCursorVersion();

    assert.equal(headers.authorization, "Bearer real-token");
    assert.equal(headers["x-client-key"], generateHashed64Hex("real-token"));
    assert.equal(headers["x-session-id"], generateSessionId("real-token"));
    assert.equal(headers["x-cursor-checksum"], generateCursorChecksum("machine-1"));
    assert.equal(headers["x-cursor-client-version"], expectedVersion);
    assert.equal(headers["x-cursor-user-agent"], `Cursor/${expectedVersion}`);
    assert.equal(headers["user-agent"], `Cursor/${expectedVersion}`);
    assert.equal(headers["x-ghost-mode"], "false");
    assert.equal(headers["connect-protocol-version"], "1");
    assert.match(headers["x-amzn-trace-id"], /^Root=/);
    assert.ok(headers["x-request-id"]);
  } finally {
    Date.now = originalDateNow;
    if (originalDbPath === undefined) {
      delete process.env.CURSOR_STATE_DB_PATH;
    } else {
      process.env.CURSOR_STATE_DB_PATH = originalDbPath;
    }
    resetCursorVersionCache();
  }
});

test("buildCursorHeaders utility stays aligned with Cursor Composer 2 versioned headers", () => {
  const originalDbPath = process.env.CURSOR_STATE_DB_PATH;
  process.env.CURSOR_STATE_DB_PATH = "/nonexistent/cursor/state.vscdb";
  resetCursorVersionCache();

  try {
    const headers = buildCursorHeaders("prefix::real-token", "machine-1", false);
    const expectedVersion = getCursorVersion();

    assert.equal(headers.Authorization, "Bearer real-token");
    assert.equal(headers["x-cursor-client-version"], expectedVersion);
    assert.equal(headers["x-cursor-user-agent"], `Cursor/${expectedVersion}`);
    assert.equal(headers["User-Agent"], `Cursor/${expectedVersion}`);
    assert.equal(headers["x-ghost-mode"], "false");
  } finally {
    if (originalDbPath === undefined) {
      delete process.env.CURSOR_STATE_DB_PATH;
    } else {
      process.env.CURSOR_STATE_DB_PATH = originalDbPath;
    }
    resetCursorVersionCache();
  }
});

test("CursorExecutor.buildHeaders derives machineId when not provided", () => {
  const executor = new CursorExecutor();
  const headers = executor.buildHeaders({ accessToken: "real-token", providerSpecificData: {} });
  assert.ok(headers["x-cursor-checksum"], "should have a checksum header");
  assert.ok(headers["x-client-key"], "should have a client key header");
});

test("CursorExecutor.transformRequest produces a framed protobuf payload", () => {
  const executor = new CursorExecutor();
  const transformed = executor.transformRequest(
    "claude-3.5-sonnet",
    { messages: [{ role: "user", content: "Hello" }], tools: [] },
    true,
    {}
  );
  const frame = parseConnectRPCFrame(transformed);
  const fields = decodeMessage(frame.payload);

  assert.ok(transformed instanceof Uint8Array);
  assert.equal(frame.flags, 0);
  assert.equal(frame.consumed, transformed.length);
  assert.equal(fields.has(1), true);
});

test("CursorExecutor.transformProtobufToJSON aggregates text and split tool call arguments", async () => {
  const executor = new CursorExecutor();
  const body = { messages: [{ role: "user", content: "hi" }] };
  const buffer = Buffer.concat([
    buildTextFrame("Hello "),
    buildToolCallFrame({
      id: "call_1",
      name: "read_file",
      args: '{"path":',
      isLast: false,
    }),
    buildToolCallFrame({
      id: "call_1",
      name: "read_file",
      args: '"/tmp/a"}',
      isLast: true,
    }),
  ]);

  const response = executor.transformProtobufToJSON(buffer, "cursor-small", body);
  const payload = await response.json();

  assert.equal(response.status, 200);
  assert.equal(payload.object, "chat.completion");
  assert.equal(payload.model, "cursor-small");
  assert.equal(payload.choices[0].message.content, "Hello ");
  assert.equal(payload.choices[0].finish_reason, "tool_calls");
  assert.equal(payload.choices[0].message.tool_calls[0].function.name, "read_file");
  assert.equal(payload.choices[0].message.tool_calls[0].function.arguments, '{"path":"/tmp/a"}');
  assert.equal(payload.usage.estimated, true);
});

test("CursorExecutor.transformProtobufToJSON finalizes incomplete tool calls when the stream ends early", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToJSON(
    Buffer.concat([
      buildToolCallFrame({
        id: "call_2",
        name: "list_files",
        args: '{"path":"/tmp"}',
        isLast: false,
      }),
    ]),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const payload = await response.json();

  assert.equal(payload.choices[0].finish_reason, "tool_calls");
  assert.equal(payload.choices[0].message.tool_calls[0].id, "call_2");
  assert.equal(payload.choices[0].message.tool_calls[0].function.name, "list_files");
});

test("CursorExecutor.transformProtobufToJSON keeps prior content when an error frame arrives after output", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToJSON(
    Buffer.concat([
      buildTextFrame("Partial answer"),
      buildJsonErrorFrame({
        error: {
          code: "resource_exhausted",
          message: "late error",
        },
      }),
    ]),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const payload = await response.json();

  assert.equal(response.status, 200);
  assert.equal(payload.choices[0].message.content, "Partial answer");
  assert.equal(payload.choices[0].finish_reason, "stop");
});

test("CursorExecutor.transformProtobufToJSON decompresses gzip frames", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToJSON(
    Buffer.concat([buildCompressedTextFrame("Compressed answer")]),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const payload = await response.json();

  assert.equal(payload.choices[0].message.content, "Compressed answer");
});

test("CursorExecutor.transformProtobufToSSE emits assistant chunks, tool deltas and DONE marker", async () => {
  const executor = new CursorExecutor();
  const body = { messages: [{ role: "user", content: "hi" }] };
  const buffer = Buffer.concat([
    buildTextFrame("Hello "),
    buildToolCallFrame({
      id: "call_1",
      name: "read_file",
      args: '{"path":',
      isLast: false,
    }),
    buildToolCallFrame({
      id: "call_1",
      name: "read_file",
      args: '"/tmp/a"}',
      isLast: true,
    }),
  ]);

  const response = executor.transformProtobufToSSE(buffer, "cursor-small", body);
  const text = await response.text();

  assert.equal(response.status, 200);
  assert.equal(response.headers.get("Content-Type"), "text/event-stream");
  assert.match(text, /"role":"assistant","content":"Hello "/);
  assert.match(text, /"tool_calls":\[/);
  assert.match(text, /"name":"read_file"/);
  assert.match(text, /"finish_reason":"tool_calls"/);
  assert.match(text, /\[DONE\]/);
});

test("CursorExecutor.transformProtobufToSSE finalizes unterminated tool calls at stream end", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToSSE(
    Buffer.concat([
      buildToolCallFrame({
        id: "call_2",
        name: "read_file",
        args: '{"path":"/tmp/b"}',
        isLast: false,
      }),
    ]),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const text = await response.text();

  assert.match(text, /"name":"read_file"/);
  assert.match(text, /"finish_reason":"tool_calls"/);
  assert.match(text, /\[DONE\]/);
});

test("CursorExecutor.transformProtobufToSSE returns a JSON error before any content is streamed", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToSSE(
    buildJsonErrorFrame({
      error: {
        code: "resource_exhausted",
        message: "too many requests",
        details: [{ debug: { error: "LIMIT", details: { title: "Limit hit" } } }],
      },
    }),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const payload = await response.json();

  assert.equal(response.status, 429);
  assert.equal(payload.error.type, "rate_limit_error");
  assert.equal(payload.error.message, "Limit hit");
  assert.equal(payload.error.code, "LIMIT");
});

test("CursorExecutor.transformProtobufToSSE stops gracefully when a JSON error arrives after content", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToSSE(
    Buffer.concat([
      buildTextFrame("Partial Cursor answer"),
      buildJsonErrorFrame({
        error: {
          code: "resource_exhausted",
          message: "late limit",
        },
      }),
    ]),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const text = await response.text();

  assert.equal(response.status, 200);
  assert.match(text, /Partial Cursor answer/);
  assert.match(text, /"finish_reason":"stop"/);
  assert.match(text, /\[DONE\]/);
});

test("CursorExecutor.transformProtobufToSSE emits plain content deltas after tool call chunks", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToSSE(
    Buffer.concat([
      buildToolCallFrame({
        id: "call_3",
        name: "read_file",
        args: '{"path":"/tmp/c"}',
        isLast: false,
      }),
      buildTextFrame("Follow-up text"),
    ]),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const text = await response.text();

  assert.match(text, /"name":"read_file"/);
  assert.match(text, /"delta":\{"content":"Follow-up text"\}/);
  assert.match(text, /"finish_reason":"tool_calls"/);
});

test("CursorExecutor.transformProtobufToSSE emits an empty assistant envelope for empty responses", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToSSE(Buffer.alloc(0), "cursor-small", {
    messages: [{ role: "user", content: "hi" }],
  });
  const text = await response.text();

  assert.match(text, /"role":"assistant","content":""/);
  assert.match(text, /"finish_reason":"stop"/);
  assert.match(text, /\[DONE\]/);
});

test("CursorExecutor.transformProtobufToSSE converts JSON error frames into rate-limit responses", async () => {
  const executor = new CursorExecutor();
  const response = executor.transformProtobufToSSE(
    buildJsonErrorFrame({
      error: {
        code: "resource_exhausted",
        message: "rate limited",
        details: [{ debug: { error: "LIMIT", details: { title: "Limit", detail: "Slow down" } } }],
      },
    }),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );
  const payload = await response.json();

  assert.equal(response.status, 429);
  assert.equal(payload.error.type, "rate_limit_error");
  assert.equal(payload.error.message, "Limit");
  assert.equal(payload.error.code, "LIMIT");
});

test("CursorExecutor.execute returns transformed JSON for non-stream responses", async () => {
  const executor = new CursorExecutor();
  const body = { messages: [{ role: "user", content: "hi" }] };
  const responseBuffer = Buffer.concat([buildTextFrame("Hello from Cursor")]);
  executor.makeHttp2Request = async () => ({
    status: 200,
    headers: {},
    body: responseBuffer,
  });
  executor.makeFetchRequest = executor.makeHttp2Request;

  const result = await executor.execute({
    model: "cursor-small",
    body,
    stream: false,
    credentials: {
      accessToken: "token",
      providerSpecificData: { machineId: "machine-1" },
    },
  });
  const payload = await result.response.json();

  assert.equal(
    result.url,
    "https://api2.cursor.sh/aiserver.v1.ChatService/StreamUnifiedChatWithTools"
  );
  assert.equal(result.transformedBody, body);
  assert.equal(result.headers.authorization, "Bearer token");
  assert.equal(payload.object, "chat.completion");
  assert.equal(payload.choices[0].message.content, "Hello from Cursor");
  assert.equal(payload.choices[0].finish_reason, "stop");
});

test("CursorExecutor.execute returns transformed SSE for stream responses", async () => {
  const executor = new CursorExecutor();
  const body = { messages: [{ role: "user", content: "hi" }] };
  const responseBuffer = Buffer.concat([buildTextFrame("Hello stream")]);
  executor.makeHttp2Request = async () => ({
    status: 200,
    headers: {},
    body: responseBuffer,
  });
  executor.makeFetchRequest = executor.makeHttp2Request;

  const result = await executor.execute({
    model: "cursor-small",
    body,
    stream: true,
    credentials: {
      accessToken: "token",
      providerSpecificData: { machineId: "machine-1" },
    },
  });
  const text = await result.response.text();

  assert.equal(result.response.status, 200);
  assert.match(text, /"content":"Hello stream"/);
  assert.match(text, /\[DONE\]/);
});

test("CursorExecutor.execute maps non-200 upstream responses to OpenAI-style errors", async () => {
  const executor = new CursorExecutor();
  const body = { messages: [{ role: "user", content: "hi" }] };
  executor.makeHttp2Request = async () => ({
    status: 403,
    headers: {},
    body: Buffer.from("denied"),
  });
  executor.makeFetchRequest = executor.makeHttp2Request;

  const result = await executor.execute({
    model: "cursor-small",
    body,
    stream: false,
    credentials: {
      accessToken: "token",
      providerSpecificData: { machineId: "machine-1" },
    },
  });
  const payload = await result.response.json();

  assert.equal(result.response.status, 403);
  assert.equal(payload.error.type, "invalid_request_error");
  assert.match(payload.error.message, /\[403\]: denied/);
});

test("CursorExecutor.execute maps transport failures to connection_error and refreshCredentials returns null", async () => {
  const executor = new CursorExecutor();
  executor.makeHttp2Request = async () => {
    throw new Error("socket hang up");
  };
  executor.makeFetchRequest = executor.makeHttp2Request;

  const result = await executor.execute({
    model: "cursor-small",
    body: { messages: [{ role: "user", content: "hi" }] },
    stream: false,
    credentials: {
      accessToken: "token",
      providerSpecificData: { machineId: "machine-1" },
    },
  });
  const payload = await result.response.json();

  assert.equal(result.response.status, 500);
  assert.equal(payload.error.type, "connection_error");
  assert.equal(payload.error.message, "socket hang up");
  assert.equal(await executor.refreshCredentials(), null);
});

test("CursorExecutor.transformProtobufToSSE finalizes un-terminated tools when stream abruptly cuts before isLast", async () => {
  const executor = new CursorExecutor();

  // Send a tool call but never close it
  const response = executor.transformProtobufToSSE(
    Buffer.concat([
      buildToolCallFrame({
        id: "call_abrupt",
        name: "write_file",
        args: '{"content":"partial"',
        isLast: false,
      }),
    ]),
    "cursor-small",
    { messages: [{ role: "user", content: "hi" }] }
  );

  const text = await response.text();
  assert.match(text, /"name":"write_file"/);
  assert.match(text, /"finish_reason":"tool_calls"/);
  assert.match(text, /\[DONE\]/);
});