mirror of
https://github.com/diegosouzapw/OmniRoute.git
synced 2026-05-05 17:56:56 +00:00
* fix(streaming): #1211 greedy strip omniModel tags to prevent literal \n\n artifacts
  - Changed regex quantifier from ? to * in combo.ts, comboAgentMiddleware.ts, and contextHandoff.ts to greedily strip all JSON-escaped newline sequences surrounding <omniModel> tags in SSE streaming chunks
  - Added \r to the character class for cross-platform robustness
  - Fixed Playwright strict-mode violation in combo-unification.spec.ts
  - Bumped OpenAPI version and CHANGELOG to 3.6.6
* fix: 3 bugs found during issue triage (#1175, #1187/#1218, #1202)
  - fix(gemini): strip VS Code JSON Schema extensions from tool schemas (#1175)
    Add enumDescriptions, markdownDescription, markdownEnumDescriptions, enumItemLabels, and tags to UNSUPPORTED_SCHEMA_CONSTRAINTS so the Gemini sanitizer removes them before forwarding. GitHub Copilot injects these non-standard fields into tool definitions, causing Gemini to reject with 'Unknown name enumDescriptions at functionDeclarations[n].parameters'.
  - fix(health-check): unwrap proxy config object before passing to getAccessToken (#1187, #1218)
    resolveProxyForConnection() returns { proxy, level, levelId }, but the health-check loop was passing the full wrapper to getAccessToken(), which expects the inner config object (.host, .port, etc.). The proxy dispatcher validated .host on the wrapper (undefined) and threw 'Context proxy host is required', silently marking every connection as unhealthy on every sweep. The fix mirrors the pattern already used in chatHelpers.ts: proxyResult?.proxy || null.
  - fix(ui): debounce models.dev sync interval slider to save only on release (#1202)
    The slider's onChange fired updateInterval() on every drag tick, sending a PATCH per pixel of movement; rapid API responses overwrote UI state mid-drag. Introduce draftIntervalHours for smooth visual feedback; the PATCH fires on onMouseUp / onBlur once the user releases the control.
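The quantifier change in the streaming fix above can be sketched as follows. The patterns and function name here are illustrative, not copied from combo.ts: with `?` the pattern strips at most one JSON-escaped newline on each side of the tag, while `*` strips every repeat, and adding `r` to the character class also covers escaped `\r\n` sequences.

```typescript
// Illustrative sketch of the quantifier fix (patterns and name are assumed,
// not the project's actual code). The chunk holds JSON-escaped newlines,
// i.e. literal backslash-n character pairs, not real newline characters.
const lazyStrip = /(?:\\n)?<omniModel>[\s\S]*?<\/omniModel>(?:\\n)?/g; // old: "?" leaves extras
const greedyStrip = /(?:\\[rn])*<omniModel>[\s\S]*?<\/omniModel>(?:\\[rn])*/g; // new: "*" plus \r

function stripOmniModelTags(chunk: string): string {
  return chunk.replace(greedyStrip, "");
}

// Doubled escaped newlines survive the lazy pattern but not the greedy one.
const chunk = "Hello\\n\\n<omniModel>glm-4.6</omniModel>\\n\\nWorld";
```

With the lazy pattern, one escaped newline pair per side survives and renders as a literal `\n\n` artifact in the client; the greedy pattern removes them all.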
* fix(providers): update Xiaomi MiMo token-plan endpoints (#1238) Integrated into release/v3.6.6
* fix(cc-compatible): trim beta flags and preserve cache passthrough (#1230) Integrated into release/v3.6.6
* feat(memory+skills): full-featured memory & skills systems with tests (#1228) Integrated into release/v3.6.6
* fix: forward client x-initiator header to GitHub Copilot upstream (#1227) Integrated into release/v3.6.6
* feat(bailian-quota): add Alibaba Coding Plan quota monitoring (#1235)
* fix: resolve v3.6.6 backlog bugs (#1206, #1211, #1220, #1231)
  - fix(core): #1206 inject startup guard against app/ and src/app/ conflict
  - fix(health): #1220 add HEALTHCHECK_STAGGER_MS to prevent token refresh bursting
  - fix(proxy): #1231 prioritize HTTP 429 over quota body heuristics
  - fix(sse): #1211 strip leading double-newlines in responses API stream
* fix(tests): resolve memory migration and skills route pagination bugs from PR overlaps
* docs: Update CHANGELOG.md with v3.6.6 features (#1182, #1165, #1177)
* chore(release): bump version to 3.6.6
  Update package versions for the electron app and open-sse package. Sync llm.txt metadata and feature headings with the 3.6.6 release.
* feat(core): harden outbound provider calls and add cooldown retries
  Add guarded outbound fetch helpers with private/local URL blocking, controlled retries, timeout normalization, and route-level status propagation for provider validation and model discovery. Introduce cooldown-aware chat retries with configurable requestRetry and maxRetryIntervalSec settings, model-scoped cooldown responses, and improved rate-limit learning from headers and error bodies so short upstream lockouts can recover automatically. Also align Antigravity and Codex header handling, require API keys for Pollinations, validate web runtime env at startup, restore sanitized Gemini tool names in translated responses, and inject a synthetic Claude text block when upstream SSE completes empty.
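The cooldown-aware retry behavior described in the feat(core) entry above could look roughly like this. Only the setting names requestRetry and maxRetryIntervalSec come from the commit; the loop shape, helper name, and backoff policy are assumptions for illustration:

```typescript
// Minimal sketch of a cooldown-aware retry loop, under the assumption that
// requestRetry caps the number of retries and maxRetryIntervalSec caps the
// backoff interval. Not the project's actual implementation.
async function withCooldownRetries<T>(
  attempt: () => Promise<T>,
  requestRetry: number,
  maxRetryIntervalSec: number,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms))
): Promise<T> {
  let delaySec = 1;
  for (let tries = 0; ; tries++) {
    try {
      return await attempt();
    } catch (err) {
      if (tries >= requestRetry) throw err; // retries exhausted, surface the error
      await sleep(delaySec * 1000);
      // Exponential backoff, clamped so short upstream lockouts can recover
      // without the proxy ever sleeping longer than the configured ceiling.
      delaySec = Math.min(delaySec * 2, maxRetryIntervalSec);
    }
  }
}
```

Injecting `sleep` keeps the loop testable; a real implementation would presumably also honor learned Retry-After hints rather than a fixed doubling.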
* feat(models): add glmt preset and hybrid token counting
  Introduce GLM Thinking as a first-class provider preset with shared GLM model metadata, pricing, usage sync, dashboard support, and provider request defaults for higher token budgets and longer timeouts. Use provider-side /messages/count_tokens when a Claude-compatible upstream supports it, while preserving estimated fallback behavior for missing models, missing credentials, and upstream failures. Also add startup seeding for default model aliases and normalize common cross-proxy model dialects so canonical slashful model ids do not get misrouted during resolution.
* feat(api): add sync tokens and v1 websocket bridge
  Add dedicated sync token storage, issuance, revocation, and bundle download routes backed by stable config bundle versioning and ETag support. Expose the v1 websocket handshake route and custom Next server bridge so OpenAI-compatible websocket traffic can be upgraded and proxied through the dashboard and API bridge. Expand compliance auditing with structured metadata, pagination, request context, auth and provider credential events, and SSRF-blocked validation logging.
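The hybrid token counting described above boils down to "prefer the upstream counter, fall back to an estimate on any failure". A minimal sketch, with the function signature and the 4-characters-per-token heuristic as assumptions (the commit only names the /messages/count_tokens endpoint and the fallback behavior):

```typescript
// Hypothetical sketch of hybrid counting: try the provider-side counter
// first, and fall back to a local estimate when the model is unknown,
// credentials are missing, or the upstream call fails.
async function countTokens(
  text: string,
  countViaProvider: (text: string) => Promise<number>
): Promise<number> {
  try {
    return await countViaProvider(text); // e.g. backed by /messages/count_tokens
  } catch {
    // Rough fallback: ~4 characters per token, a common estimator heuristic.
    return Math.ceil(text.length / 4);
  }
}
```

The try/catch covers all three failure cases uniformly, which matches the commit's "preserving estimated fallback behavior" framing.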
* docs: Update all documentation for v3.6.6
  - CHANGELOG: Add WebSocket bridge, GLM Thinking preset, safe outbound fetch/SSRF guard, cooldown-aware retries, compliance audit v2, model alias seeding, and all Internal Improvements for the 3 new commits
  - README: Expand v3.6.x highlights table with 10 new features; add SafeOutboundFetch, CooldownAwareRetry, SSRF guard, TPS metric, sync tokens, WebSocket bridge to Resilience/Observability/Deployment tables
  - ARCHITECTURE: Bump date; add new modules to executive summary, API routes, SSE core services, Auth/Security section; add SSRF/Outbound guard failure mode (section 6); expand module mapping
  - ENVIRONMENT: Add OMNIROUTE_CRYPT_KEY/OMNIROUTE_API_KEY_BASE64 legacy aliases, OUTBOUND_SSRF_GUARD_ENABLED, CODEX_CLIENT_VERSION, and REQUEST_RETRY/MAX_RETRY_INTERVAL_SEC cooldown retry settings
  - FEATURES: Add 6 new feature sections: V1 WebSocket Bridge, Sync Tokens & Config Bundle, GLM Thinking Preset, Safe Outbound Fetch & SSRF Guard, Cooldown-Aware Retries, Compliance Audit v2
* fix: use api64 for proxy test (#1255) Integrated into release/v3.6.6 (IPv6 proxy test fix)
* fix(page): update custom models section to include all providers #1200 (#1256) Integrated into release/v3.6.6 (Gemini custom model picker fix)
* fix: provide default client_id fallbacks to prevent broken OAuth requests (#1246) Integrated into release/v3.6.6 (OAuth client_id default fallbacks)
* fix: translate max_tokens/max_completion_tokens to max_output_tokens in Chat-to-Responses translator (#1245) Integrated into release/v3.6.6 (max_tokens to max_output_tokens Responses API translation plus unit tests)
* feat(oauth): support cursor-agent CLI as Cursor credential source (#1258) Integrated into release/v3.6.6 (cursor-agent CLI credential source support)
* fix(cc-compatible): restore upstream SSE and correct stream/combo timeout behavior (#1257) Integrated into release/v3.6.6 (CC-compatible upstream SSE restore, stream timeout fix, README table repair)
* fix(cli-tools): resolve API key resolution and model mapping bugs in CLI tools (#1263) Integrated into release/v3.6.6
* feat(cli-tools): add Qwen Code CLI integration (#1266) Integrated into release/v3.6.6
* fix(i18n): add missing zh-CN translations and fix logger imports (#1269) Integrated into release/v3.6.6
* fix(i18n): add Chinese i18n support to dashboard components (#1274) Integrated into release/v3.6.6
* feat: update Pollinations to require API key, remove free tier flag (#1177)
* feat: friendly error messages for crypto/encryption failures (#1165)
* feat: add TPS (tokens per second) metric column to request logs (#1182)
* feat: merge custom/imported models into filter list for all providers (#1191)
* feat(fallback): fix provider-profile-driven lockouts (#1267)
  This integrates rdself's unify-provider-profile-locks PR manually to handle structural conflicts.
* fix(claude): proper Anthropic SDK integration (#1271)
* fix(healthcheck): use correct proxy wrapper format for getAccessToken (#1272)
* chore(release): v3.6.6 skills registry stability fix and final integration
* fix(auth): harden bootstrap auth and memory dashboard behavior
  Restrict unauthenticated writes to /api/settings/require-login to the initial bootstrap window while keeping read-only checks public. This prevents post-setup config changes without blocking first-run login setup, and the onboarding flow now logs in immediately after setting the password. Restore memory API filtering and pagination behavior by supporting q searches, honoring offset-based requests, and avoiding unrelated fallback results when FTS misses. Update the dashboard stats fallback to use the response totals consistently. Package the MCP server with explicit file entries and add regression tests for bootstrap auth and memory route behavior.
* fix(codex): remove max_output_tokens from body for compatibility
* chore(release): v3.6.6 include PR 1274 fixes in changelog
* chore: exclude additional build artifacts and internal directories from npm package distribution
* fix: update Gemini OAuth test to match registry defaults, plus codex UI improvements
* fix: restore .mjs refs for scripts/ in test imports after ts migration
* fix: restore next.config.mjs ref in dev-origins test
* fix: implement db migration safety checks and codex config format
* fix: disable mass-migration abort during unit tests based on auto-backup flag
* fix: update script regex in auto-update tests to use .mjs
* feat: add Perplexity Web (Session) provider (#1289) Integrated into release/v3.6.6
* fix(cli): resolve codex routing config parsing, standardize select model button positioning, and clarify oauth documentation
* docs(changelog): record recent cli, provider, and test updates
  Document the latest fixes for Codex routing configuration parsing and Lobehub provider icon fallback behavior. Add a note that the remaining JavaScript test files were migrated to TypeScript ES modules, reflecting the completed test stack transition.
* chore(release): merge #1286 minor improvements manually to avoid testing conflict
* chore(test): rename perplexity-web.test.mjs to .ts to maintain 100% TS codebase
* chore(docs): update CHANGELOG.md for perplexity-web provider
* fix(security): resolve CodeQL incomplete URL substring sanitization via URL parsing in test mocks
* fix: integrate compressContext() into chatCore.ts request pipeline
  Proactively compress oversized contexts before sending to upstream providers, preventing context_length_exceeded errors. Compression triggers at 85% of the model's context limit using the existing 3-layer compressContext() function.
  - Import compressContext, estimateTokens, getTokenLimit from contextManager
  - Add compression check after translation, before executor dispatch
  - Estimate tokens and compare against the 85% threshold of the model's context limit
  - Apply 3-layer compression (trim tools, compress thinking, purify history)
  - Log compression events with before/after token counts and layers applied
  - Audit compression events for observability
  - Add unit tests verifying integration behavior
  Closes #1290
* fix(tests): align reasoning expectations with GLM thinking structure
* fix: prevent orphaned tool_result messages in purifyHistory()
  When purifyHistory() drops the oldest messages to fit the context window, it can split tool_use/tool_result pairs, keeping the tool_result but dropping the tool_use that initiated it. This causes upstream providers to reject the request with format errors. Add fixToolPairs(), which runs after each purification pass to remove:
  - OpenAI format: orphaned role='tool' messages without a matching tool_calls ID
  - Claude format: orphaned tool_result content blocks without a matching tool_use ID
  Closes #1291
* fix(tests): supply tool_use in mock so it is not dropped
* chore: convert remaining test to TypeScript
* fix(tests): restore compatibility with compressContext threshold test after tsx migration
* docs: finalize v3.6.6 release documentation
* fix(core): finalize provider removal, type issues, and codex API key config
* fix(dashboard): render Web/Cookie, Search, Audio provider sections and fix TypeScript errors
* fix: increase MCP web_search timeout to 60s (#1278)
* fix: route combo testing properly for embedding models (#1260)
* fix: accumulate excluded accounts in combo fallback loop (#1233)
* fix: strip leading whitespace and newlines from first streaming chunk (#1211)
* docs: clarify VPS and Docker settings for OAuth credentials (#1204)
* fix: return real retry-after for pipeline gates (#1301) Integrated into release/v3.6.6 (returns real Retry-After values from pipeline gates)
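The orphan-removal pass described in the purifyHistory() fix above can be sketched for the OpenAI message format. The interface and function body here are illustrative, not the project's code, and the Claude content-block handling the commit also mentions is omitted:

```typescript
// Hypothetical sketch of fixToolPairs() for OpenAI-format histories:
// drop role:"tool" messages whose tool_call_id has no surviving
// assistant tool_calls entry after history purification.
interface ChatMsg {
  role: string;
  tool_call_id?: string;
  tool_calls?: { id: string }[];
}

function fixToolPairs(messages: ChatMsg[]): ChatMsg[] {
  // Collect the ids of every tool_use call that survived purification.
  const survivingCallIds = new Set<string>();
  for (const msg of messages) {
    for (const call of msg.tool_calls ?? []) survivingCallIds.add(call.id);
  }
  // Keep a tool message only if its initiating call is still present.
  return messages.filter(
    (msg) =>
      msg.role !== "tool" ||
      (msg.tool_call_id !== undefined && survivingCallIds.has(msg.tool_call_id))
  );
}
```

Running this after each purification pass guarantees the upstream provider never sees a tool result whose initiating call was trimmed away.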
* feat: streaming semantic cache, Cursor auto-version detection, and call-log enhancements (#1296) Integrated into release/v3.6.6 (streaming semantic cache, Cursor auto-version detection, call-log cache_source tracking)
* feat(api): support more OpenAI types (image, embeddings, audio-transcriptions, audio-speech) (#1297) Integrated into release/v3.6.6 (adds embeddings, audio-transcriptions, audio-speech, and images-generations support for custom OpenAI-compatible providers, plus Pollinations image registry)
* deps: bump hono from 4.12.12 to 4.12.14 (#1302) Integrated into release/v3.6.6
* deps: bump hono from 4.12.12 to 4.12.14 (#1306) Integrated into release/v3.6.6
* chore: stabilization fixes for v3.6.6 (#1298, #1254, #59, CI)
* fix(providers): match correct endpoint for Xiaomi MiMo, strip routing prefix for custom openai endpoints (#1303, #1261)
* feat(storage): add database backup cleanup controls
* chore(release): v3.6.6 Final Stabilization Push
* Backport call log storage refactor to release/v3.6.6 (#1307) Integrated into release/v3.6.6
* deps: update dompurify to 3.4.0 to resolve CVE-XYZ (#60)
* test: disable sqlite auto backup in CI to resolve E2E timeout (#24481475058)
* chore(docs): sync CHANGELOG for v3.6.6 with missing features and fixes
* chore(release): prep v3.6.6 infrastructure and type safety fixes
  - Migrated legacy .mjs scripts to .ts (bin, prepublish, policies)
  - Resolved pre-commit strict lint (t11 budget) errors in combo.ts
  - Explicitly typed all TS bindings in pack-artifact policies
  - Updated package.json commands to run Node via tsx/esm internally
  - Hardened CI/CD with explicit node version 22.22.2 checks
  - Completed stage validations for v3.6.6 final release
* chore: fix TS build errors and e2e timeouts in CI
  - Migrate nodeRuntimeSupport to TS interfaces, avoiding implicit any
  - Increase visibility timeouts in skills-marketplace E2E test to 15s to bypass CI flakiness
  - Complete migration of .mjs scripts to .ts, ensuring type safety
* chore(release): sync package version 3.6.6 across workspaces
* test(e2e): universally increase UI component visibility timeouts from 5s to 15s to bypass CI starvation
* chore(build): inject baseUrl, paths, and types:node into MITM tsconfig within prepublish hook to fix missing types in CI check

---------

Co-authored-by: diegosouzapw <diegosouzapw@users.noreply.github.com>
Co-authored-by: Jack <5443152+hijak@users.noreply.github.com>
Co-authored-by: Randi <55005611+rdself@users.noreply.github.com>
Co-authored-by: Paijo <14921983+oyi77@users.noreply.github.com>
Co-authored-by: Samuel Cedric <ceds.sam@gmail.com>
Co-authored-by: Max Garmash <max@37bytes.com>
Co-authored-by: Markus Hartung <mail@hartmark.se>
Co-authored-by: Gi99lin <74502520+Gi99lin@users.noreply.github.com>
Co-authored-by: Payne <baboialex95@gmail.com>
Co-authored-by: Benson K B <bensonkbmca@gmail.com>
Co-authored-by: clousky2020 <33016567+clousky2020@users.noreply.github.com>
Co-authored-by: Ravi Tharuma <25951435+RaviTharuma@users.noreply.github.com>
Co-authored-by: oyi77 <oyi77@users.noreply.github.com>
Co-authored-by: Hdsje <vovan877@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: xiaoge1688 <moyekongling@gmail.com>
662 lines
22 KiB
TypeScript
import test from "node:test";
|
|
import assert from "node:assert/strict";
|
|
|
|
const originalEnv = { ...process.env };
|
|
Object.assign(process.env, {
|
|
CLAUDE_OAUTH_CLIENT_ID: "9d1c250a-e61b-44d9-88ed-5944d1962f5e",
|
|
CODEX_OAUTH_CLIENT_ID: "app_EMoamEEZ73f0CkXaXp7hrann",
|
|
GEMINI_OAUTH_CLIENT_ID:
|
|
"681255809395-oo8ft2oprdrnp9e3aqf6av3hmdib135j.apps.googleusercontent.com",
|
|
GEMINI_OAUTH_CLIENT_SECRET: "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",
|
|
GEMINI_CLI_OAUTH_CLIENT_ID:
|
|
"681255809395-oo8ft2oprdrnp9e3aqf6av3hmdib135j.apps.googleusercontent.com",
|
|
GEMINI_CLI_OAUTH_CLIENT_SECRET: "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",
|
|
QWEN_OAUTH_CLIENT_ID: "f0304373b74a44d2b584a3fb70ca9e56",
|
|
KIMI_CODING_OAUTH_CLIENT_ID: "17e5f671-d194-4dfb-9706-5516cb48c098",
|
|
ANTIGRAVITY_OAUTH_CLIENT_ID:
|
|
"1071006060591-tmhssin2h21lcre235vtolojh4g403ep.apps.googleusercontent.com",
|
|
ANTIGRAVITY_OAUTH_CLIENT_SECRET: "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",
|
|
GITHUB_OAUTH_CLIENT_ID: "Iv1.b507a08c87ecfe98",
|
|
});
|
|
|
|
const providersModule = await import("../../src/lib/oauth/providers/index.ts");
|
|
const oauthModule = await import("../../src/lib/oauth/constants/oauth.ts");
|
|
const registryModule = await import("../../open-sse/config/providerRegistry.ts");
|
|
|
|
const PROVIDERS = providersModule.default;
|
|
const {
|
|
ANTIGRAVITY_CONFIG,
|
|
CLAUDE_CONFIG,
|
|
CLINE_CONFIG,
|
|
CODEX_CONFIG,
|
|
CURSOR_CONFIG,
|
|
GEMINI_CONFIG,
|
|
GITHUB_CONFIG,
|
|
KILOCODE_CONFIG,
|
|
KIMI_CODING_CONFIG,
|
|
KIRO_CONFIG,
|
|
OAUTH_TIMEOUT,
|
|
PROVIDERS: OAUTH_PROVIDER_IDS,
|
|
QODER_CONFIG,
|
|
QWEN_CONFIG,
|
|
} = oauthModule;
|
|
const { REGISTRY } = registryModule;
|
|
|
|
const originalFetch = globalThis.fetch;
|
|
|
|
const EXPECTED_PROVIDER_KEYS = [
|
|
"claude",
|
|
"codex",
|
|
"gemini-cli",
|
|
"antigravity",
|
|
"qoder",
|
|
"qwen",
|
|
"kimi-coding",
|
|
"github",
|
|
"kiro",
|
|
"cursor",
|
|
"kilocode",
|
|
"cline",
|
|
];
|
|
|
|
const EXPECTED_CONFIG_BY_PROVIDER = {
|
|
claude: CLAUDE_CONFIG,
|
|
codex: CODEX_CONFIG,
|
|
"gemini-cli": GEMINI_CONFIG,
|
|
antigravity: ANTIGRAVITY_CONFIG,
|
|
qoder: QODER_CONFIG,
|
|
qwen: QWEN_CONFIG,
|
|
"kimi-coding": KIMI_CODING_CONFIG,
|
|
github: GITHUB_CONFIG,
|
|
kiro: KIRO_CONFIG,
|
|
cursor: CURSOR_CONFIG,
|
|
kilocode: KILOCODE_CONFIG,
|
|
cline: CLINE_CONFIG,
|
|
};
|
|
|
|
const REQUIRED_FIELDS_BY_PROVIDER = {
|
|
claude: ["authorizeUrl", "tokenUrl", "redirectUri", "scopes", "clientId"],
|
|
codex: ["authorizeUrl", "tokenUrl", "scope", "clientId"],
|
|
"gemini-cli": ["authorizeUrl", "tokenUrl", "userInfoUrl", "scopes", "clientId"],
|
|
antigravity: ["authorizeUrl", "tokenUrl", "userInfoUrl", "scopes", "clientId"],
|
|
qoder: ["extraParams"],
|
|
qwen: ["deviceCodeUrl", "tokenUrl", "scope", "clientId"],
|
|
"kimi-coding": ["deviceCodeUrl", "tokenUrl", "clientId"],
|
|
github: ["deviceCodeUrl", "tokenUrl", "userInfoUrl", "copilotTokenUrl", "clientId"],
|
|
kiro: [
|
|
"registerClientUrl",
|
|
"deviceAuthUrl",
|
|
"tokenUrl",
|
|
"socialAuthEndpoint",
|
|
"socialLoginUrl",
|
|
"socialTokenUrl",
|
|
"socialRefreshUrl",
|
|
"authMethods",
|
|
],
|
|
cursor: ["apiEndpoint", "api3Endpoint", "agentEndpoint", "agentNonPrivacyEndpoint", "dbKeys"],
|
|
kilocode: ["apiBaseUrl", "initiateUrl", "pollUrlBase"],
|
|
cline: ["appBaseUrl", "apiBaseUrl", "authorizeUrl", "tokenExchangeUrl", "refreshUrl"],
|
|
};
|
|
|
|
function getByPath(object, path) {
|
|
return path.split(".").reduce((value, segment) => value?.[segment], object);
|
|
}
|
|
|
|
function collectHttpsUrls(value, path = "config") {
|
|
const results = [];
|
|
|
|
if (typeof value === "string") {
|
|
if (/^https?:\/\//.test(value)) {
|
|
results.push({ path, value });
|
|
}
|
|
return results;
|
|
}
|
|
|
|
if (!value || typeof value !== "object" || Array.isArray(value)) {
|
|
return results;
|
|
}
|
|
|
|
for (const [key, nestedValue] of Object.entries(value)) {
|
|
results.push(...collectHttpsUrls(nestedValue, `${path}.${key}`));
|
|
}
|
|
|
|
return results;
|
|
}
|
|
|
|
function jsonResponse(body, status = 200) {
|
|
return new Response(JSON.stringify(body), {
|
|
status,
|
|
headers: { "Content-Type": "application/json" },
|
|
});
|
|
}
|
|
|
|
function textResponse(body, status = 200) {
|
|
return new Response(body, {
|
|
status,
|
|
headers: { "Content-Type": "text/plain" },
|
|
});
|
|
}
|
|
|
|
function createJwt(payload) {
|
|
const encode = (value) =>
|
|
Buffer.from(JSON.stringify(value)).toString("base64url").replace(/=/g, "");
|
|
|
|
return `${encode({ alg: "none", typ: "JWT" })}.${encode(payload)}.signature`;
|
|
}
|
|
|
|
function useFetchSequence(sequence) {
|
|
let index = 0;
|
|
globalThis.fetch = async (...args) => {
|
|
const next = sequence[index++];
|
|
if (!next) {
|
|
throw new Error(`Unexpected fetch call #${index}`);
|
|
}
|
|
return typeof next === "function" ? next(...args) : next;
|
|
};
|
|
}
|
|
|
|
test.afterEach(() => {
|
|
globalThis.fetch = originalFetch;
|
|
});
|
|
|
|
test.after(() => {
|
|
globalThis.fetch = originalFetch;
|
|
for (const key of Object.keys(process.env)) {
|
|
if (!(key in originalEnv)) {
|
|
delete process.env[key];
|
|
}
|
|
}
|
|
Object.assign(process.env, originalEnv);
|
|
});
|
|
|
|
test("OAuth provider registry exposes every expected provider exactly once", () => {
|
|
assert.deepEqual(Object.keys(PROVIDERS), EXPECTED_PROVIDER_KEYS);
|
|
assert.equal(new Set(Object.keys(PROVIDERS)).size, EXPECTED_PROVIDER_KEYS.length);
|
|
});
|
|
|
|
test("OAuth constants include all provider ids and use a sane timeout", () => {
|
|
const constantIds = Object.values(OAUTH_PROVIDER_IDS);
|
|
const registryIds = Object.keys(PROVIDERS);
|
|
|
|
assert.ok(Number.isInteger(OAUTH_TIMEOUT));
|
|
assert.ok(OAUTH_TIMEOUT > 0);
|
|
assert.equal(new Set(constantIds).size, constantIds.length);
|
|
|
|
for (const providerId of registryIds) {
|
|
assert.ok(
|
|
constantIds.includes(providerId),
|
|
`Expected oauth constants to include provider id ${providerId}`
|
|
);
|
|
}
|
|
});
|
|
|
|
test("every registered OAuth provider has a valid config object, flow type and token mapper", () => {
|
|
const allowedFlowTypes = new Set([
|
|
"authorization_code",
|
|
"authorization_code_pkce",
|
|
"device_code",
|
|
"import_token",
|
|
]);
|
|
|
|
for (const [providerId, provider] of Object.entries(PROVIDERS)) {
|
|
assert.equal(provider.config, EXPECTED_CONFIG_BY_PROVIDER[providerId]);
|
|
assert.ok(allowedFlowTypes.has(provider.flowType), `${providerId} has unsupported flowType`);
|
|
assert.equal(typeof provider.mapTokens, "function", `${providerId} must expose mapTokens`);
|
|
|
|
const mapped = provider.mapTokens({});
|
|
assert.ok(
|
|
mapped && typeof mapped === "object",
|
|
`${providerId} mapTokens must return an object`
|
|
);
|
|
}
|
|
});
|
|
|
|
test("every required provider config field is present when the provider is enabled for that flow", () => {
|
|
for (const [providerId, fields] of Object.entries(REQUIRED_FIELDS_BY_PROVIDER)) {
|
|
const provider = PROVIDERS[providerId];
|
|
const config = provider.config;
|
|
|
|
for (const field of fields) {
|
|
const value = getByPath(config, field);
|
|
|
|
if (
|
|
providerId === "qoder" &&
|
|
!config.enabled &&
|
|
["authorizeUrl", "tokenUrl", "userInfoUrl", "clientId"].includes(field)
|
|
) {
|
|
continue;
|
|
}
|
|
|
|
assert.notEqual(value, undefined, `${providerId} missing config field ${field}`);
|
|
|
|
if (Array.isArray(value)) {
|
|
assert.ok(value.length > 0, `${providerId}.${field} must not be empty`);
|
|
} else if (typeof value === "string") {
|
|
assert.ok(value.length > 0, `${providerId}.${field} must not be empty`);
|
|
} else if (typeof value === "object") {
|
|
assert.ok(
|
|
value && Object.keys(value).length > 0,
|
|
`${providerId}.${field} must not be empty`
|
|
);
|
|
}
|
|
}
|
|
}
|
|
});
|
|
|
|
test("all provider endpoint URLs use HTTPS when a URL is configured", () => {
|
|
for (const [providerId, provider] of Object.entries(PROVIDERS)) {
|
|
const httpsUrls = collectHttpsUrls(provider.config);
|
|
|
|
for (const entry of httpsUrls) {
|
|
const parsed = new URL(entry.value);
|
|
assert.equal(parsed.protocol, "https:", `${providerId} ${entry.path} must use HTTPS`);
|
|
}
|
|
}
|
|
});
|
|
|
|
test("browser-based providers expose buildAuthUrl and return provider-specific auth URLs", () => {
|
|
const redirectUri = "http://localhost:43121/callback";
|
|
const state = "state-123";
|
|
const codeChallenge = "challenge-456";
|
|
|
|
const claudeUrl = new URL(
|
|
PROVIDERS.claude.buildAuthUrl(CLAUDE_CONFIG, redirectUri, state, codeChallenge)
|
|
);
|
|
const codexUrl = new URL(
|
|
PROVIDERS.codex.buildAuthUrl(CODEX_CONFIG, redirectUri, state, codeChallenge)
|
|
);
|
|
const geminiUrl = new URL(
|
|
PROVIDERS["gemini-cli"].buildAuthUrl(GEMINI_CONFIG, redirectUri, state)
|
|
);
|
|
const antigravityUrl = new URL(
|
|
PROVIDERS.antigravity.buildAuthUrl(ANTIGRAVITY_CONFIG, redirectUri, state)
|
|
);
|
|
const clineUrl = new URL(PROVIDERS.cline.buildAuthUrl(CLINE_CONFIG, redirectUri));
|
|
|
|
assert.equal(claudeUrl.origin, "https://claude.ai");
|
|
assert.equal(claudeUrl.searchParams.get("client_id"), CLAUDE_CONFIG.clientId);
|
|
assert.equal(codexUrl.origin, "https://auth.openai.com");
|
|
assert.equal(codexUrl.searchParams.get("code_challenge"), codeChallenge);
|
|
assert.equal(geminiUrl.origin, "https://accounts.google.com");
|
|
assert.equal(geminiUrl.searchParams.get("redirect_uri"), redirectUri);
|
|
assert.equal(antigravityUrl.origin, "https://accounts.google.com");
|
|
assert.equal(clineUrl.origin, "https://api.cline.bot");
|
|
});
|
|
|
|
test("device and import-token providers expose the flow-specific fields expected by their configs", () => {
|
|
const deviceProviders = ["qwen", "kimi-coding", "github", "kiro", "kilocode"];
|
|
|
|
for (const providerId of deviceProviders) {
|
|
const provider = PROVIDERS[providerId];
|
|
assert.equal(provider.flowType, "device_code");
|
|
assert.equal(typeof provider.requestDeviceCode, "function");
|
|
assert.equal(typeof provider.pollToken, "function");
|
|
}
|
|
|
|
assert.equal(PROVIDERS.cursor.flowType, "import_token");
|
|
assert.equal(CURSOR_CONFIG.dbKeys.accessToken, "cursorAuth/accessToken");
|
|
assert.equal(CURSOR_CONFIG.dbKeys.machineId, "storage.serviceMachineId");
|
|
assert.ok(Array.isArray(KIRO_CONFIG.authMethods));
|
|
assert.ok(KIRO_CONFIG.authMethods.includes("builder-id"));
|
|
});
|
|
|
|
test("provider-specific config shapes remain valid for special cases", () => {
|
|
assert.ok(Array.isArray(CLAUDE_CONFIG.scopes) && CLAUDE_CONFIG.scopes.length > 0);
|
|
assert.ok(Array.isArray(GEMINI_CONFIG.scopes) && GEMINI_CONFIG.scopes.length > 0);
|
|
assert.ok(Array.isArray(ANTIGRAVITY_CONFIG.scopes) && ANTIGRAVITY_CONFIG.scopes.length > 0);
|
|
assert.equal(typeof CODEX_CONFIG.extraParams.originator, "string");
|
|
assert.equal(typeof QODER_CONFIG.extraParams.loginMethod, "string");
|
|
assert.ok(Array.isArray(KIRO_CONFIG.grantTypes) && KIRO_CONFIG.grantTypes.length > 0);
|
|
assert.equal(typeof KILOCODE_CONFIG.pollUrlBase, "string");
|
|
});
|
|
|
|
test("Gemini OAuth defaults use common Gemini CLI client secret as fallback", () => {
|
|
assert.equal(
|
|
GEMINI_CONFIG.clientSecret,
|
|
process.env.GEMINI_CLI_OAUTH_CLIENT_SECRET || process.env.GEMINI_OAUTH_CLIENT_SECRET || ""
|
|
);
|
|
assert.equal(REGISTRY.gemini.oauth.clientSecretDefault, "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");
|
|
assert.equal(
|
|
REGISTRY["gemini-cli"].oauth.clientSecretDefault,
|
|
"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"
|
|
);
|
|
});
|
|
|
|
test("Qoder remains a safe special case when browser OAuth is disabled", () => {
|
|
if (!QODER_CONFIG.enabled) {
|
|
assert.equal(
|
|
PROVIDERS.qoder.buildAuthUrl(QODER_CONFIG, "http://localhost/callback", "state"),
|
|
null
|
|
);
|
|
return;
|
|
}
|
|
|
|
const authUrl = PROVIDERS.qoder.buildAuthUrl(
|
|
QODER_CONFIG,
|
|
"http://localhost/callback",
|
|
"state-123"
|
|
);
|
|
assert.equal(typeof authUrl, "string");
|
|
assert.ok(authUrl.startsWith("https://"));
|
|
});
|
|
|
|
test("Codex parses id_token metadata and prefers a team workspace when the JWT only marks the personal plan", async () => {
|
|
const idToken = createJwt({
|
|
email: "dev@example.com",
|
|
"https://api.openai.com/auth": {
|
|
chatgpt_account_id: "personal-workspace",
|
|
chatgpt_plan_type: "free",
|
|
chatgpt_user_id: "user-123",
|
|
organizations: [
|
|
{
|
|
id: "team-workspace",
|
|
is_default: false,
|
|
role: "member",
|
|
title: "Platform Team",
|
|
},
|
|
],
|
|
},
|
|
});
|
|
|
|
const extra = await PROVIDERS.codex.postExchange({ id_token: idToken });
|
|
const mapped = PROVIDERS.codex.mapTokens(
|
|
{
|
|
access_token: "access-token",
|
|
refresh_token: "refresh-token",
|
|
id_token: idToken,
|
|
expires_in: 3600,
|
|
},
|
|
extra
|
|
);
|
|
|
|
assert.equal(extra.authInfo.chatgpt_account_id, "personal-workspace");
|
|
assert.equal(mapped.email, "dev@example.com");
|
|
assert.equal(mapped.providerSpecificData.workspaceId, "team-workspace");
|
|
assert.equal(mapped.providerSpecificData.workspacePlanType, "team");
|
|
});
|
|
|
|
test("Cline decodes embedded callback payloads without using the network", async () => {
|
|
const encodedCode = Buffer.from(
|
|
JSON.stringify({
|
|
accessToken: "cline-access",
|
|
refreshToken: "cline-refresh",
|
|
email: "cline@example.com",
|
|
firstName: "Cline",
|
|
lastName: "Bot",
|
|
expiresAt: "2030-01-01T00:00:00.000Z",
|
|
})
|
|
).toString("base64");
|
|
|
|
const tokens = await PROVIDERS.cline.exchangeToken(CLINE_CONFIG, encodedCode, "http://localhost");
|
|
const mapped = PROVIDERS.cline.mapTokens(tokens);
|
|
|
|
assert.equal(tokens.access_token, "cline-access");
|
|
assert.equal(mapped.accessToken, "cline-access");
|
|
assert.equal(mapped.email, "cline@example.com");
|
|
assert.equal(mapped.name, "Cline Bot");
|
|
});
|
|
|
|
test("Gemini and Antigravity run mocked browser OAuth exchanges and post-exchange enrichment", async () => {
  const geminiConfig = { ...GEMINI_CONFIG, clientSecret: "gemini-secret" };
  // The last two queue entries assert Antigravity's enrichment request headers before replying.
  useFetchSequence([
    jsonResponse({
      access_token: "gemini-access",
      refresh_token: "gemini-refresh",
      expires_in: 3600,
    }),
    jsonResponse({ email: "gemini@example.com" }),
    jsonResponse({ cloudaicompanionProject: { id: "gemini-project" } }),
    jsonResponse({ access_token: "anti-access", refresh_token: "anti-refresh", expires_in: 7200 }),
    jsonResponse({ email: "anti@example.com" }),
    (_url, init = {}) => {
      assert.equal(init.method, "POST");
      assert.equal(init.headers.Authorization, "Bearer anti-access");
      assert.equal(init.headers["User-Agent"], "google-api-nodejs-client/9.15.1");
      assert.equal(
        init.headers["X-Goog-Api-Client"],
        "google-cloud-sdk vscode_cloudshelleditor/0.1"
      );
      assert.equal(
        init.headers["Client-Metadata"],
        JSON.stringify({
          ideType: "IDE_UNSPECIFIED",
          platform: "PLATFORM_UNSPECIFIED",
          pluginType: "GEMINI",
        })
      );
      return jsonResponse({
        cloudaicompanionProject: { id: "anti-project" },
        allowedTiers: [{ id: "tier-default", isDefault: true }],
      });
    },
    (_url, init = {}) => {
      assert.equal(init.method, "POST");
      assert.equal(init.headers.Authorization, "Bearer anti-access");
      assert.equal(init.headers["User-Agent"], "google-api-nodejs-client/9.15.1");
      assert.equal(
        init.headers["X-Goog-Api-Client"],
        "google-cloud-sdk vscode_cloudshelleditor/0.1"
      );
      return jsonResponse({
        done: true,
        response: { cloudaicompanionProject: { id: "anti-project-final" } },
      });
    },
  ]);

  const geminiTokens = await PROVIDERS["gemini-cli"].exchangeToken(
    geminiConfig,
    "code-1",
    "http://localhost/callback"
  );
  const geminiExtra = await PROVIDERS["gemini-cli"].postExchange(geminiTokens);
  const geminiMapped = PROVIDERS["gemini-cli"].mapTokens(geminiTokens, geminiExtra);

  const antigravityTokens = await PROVIDERS.antigravity.exchangeToken(
    ANTIGRAVITY_CONFIG,
    "code-2",
    "http://localhost/callback"
  );
  const antigravityExtra = await PROVIDERS.antigravity.postExchange(antigravityTokens);
  const antigravityMapped = PROVIDERS.antigravity.mapTokens(antigravityTokens, antigravityExtra);

  assert.equal(geminiMapped.email, "gemini@example.com");
  assert.equal(geminiMapped.projectId, "gemini-project");
  assert.equal(antigravityMapped.email, "anti@example.com");
  assert.equal(antigravityMapped.projectId, "anti-project-final");
});

test("Qoder enabled mode exchanges tokens and loads profile metadata through mocked endpoints", async () => {
  // Temporarily enable Qoder with a full endpoint config; restored in the finally block.
  const originalQoderConfig = structuredClone(QODER_CONFIG);
  const qoderConfig = Object.assign(QODER_CONFIG, {
    enabled: true,
    clientId: "qoder-client",
    clientSecret: "qoder-secret",
    authorizeUrl: "https://auth.qoder.dev/authorize",
    tokenUrl: "https://auth.qoder.dev/token",
    userInfoUrl: "https://auth.qoder.dev/user",
    extraParams: {
      loginMethod: "phone",
      type: "phone",
    },
  });

  try {
    useFetchSequence([
      jsonResponse({
        access_token: "qoder-access",
        refresh_token: "qoder-refresh",
        expires_in: 1800,
      }),
      jsonResponse({
        success: true,
        data: {
          apiKey: "qoder-api-key",
          email: "qoder@example.com",
          nickname: "Qoder User",
        },
      }),
    ]);

    const authUrl = PROVIDERS.qoder.buildAuthUrl(
      qoderConfig,
      "http://localhost/callback",
      "state-123"
    );
    const tokens = await PROVIDERS.qoder.exchangeToken(
      qoderConfig,
      "browser-code",
      "http://localhost/callback"
    );
    const extra = await PROVIDERS.qoder.postExchange(tokens);
    const mapped = PROVIDERS.qoder.mapTokens(tokens, extra);

    assert.ok(authUrl.startsWith("https://auth.qoder.dev/authorize?"));
    assert.equal(mapped.apiKey, "qoder-api-key");
    assert.equal(mapped.email, "qoder@example.com");
    assert.equal(mapped.displayName, "Qoder User");
  } finally {
    Object.assign(QODER_CONFIG, originalQoderConfig);
  }
});

test("Qwen and Kimi Coding execute mocked device-code flows and token mapping", async () => {
  const qwenIdToken = createJwt({
    email: "qwen@example.com",
    name: "Qwen User",
  });

  useFetchSequence([
    jsonResponse({
      device_code: "qwen-device",
      user_code: "QWEN123",
      verification_uri: "https://chat.qwen.ai/activate",
      expires_in: 300,
      interval: 5,
    }),
    jsonResponse({
      access_token: createJwt({ sub: "qwen-subject" }),
      refresh_token: "qwen-refresh",
      expires_in: 3600,
      id_token: qwenIdToken,
      resource_url: "https://chat.qwen.ai/resource",
    }),
    jsonResponse({
      device_code: "kimi-device",
      user_code: "KIMI123",
      verification_uri: "https://auth.kimi.com/activate",
      expires_in: 600,
      interval: 4,
    }),
    jsonResponse({
      access_token: "kimi-access",
      refresh_token: "kimi-refresh",
      expires_in: 7200,
      token_type: "Bearer",
      scope: "profile",
    }),
  ]);

  const qwenDevice = await PROVIDERS.qwen.requestDeviceCode(QWEN_CONFIG, "challenge-123");
  const qwenPoll = await PROVIDERS.qwen.pollToken(QWEN_CONFIG, qwenDevice.device_code, "verifier");
  const qwenMapped = PROVIDERS.qwen.mapTokens(qwenPoll.data);

  const kimiDevice = await PROVIDERS["kimi-coding"].requestDeviceCode(KIMI_CODING_CONFIG);
  const kimiPoll = await PROVIDERS["kimi-coding"].pollToken(
    KIMI_CODING_CONFIG,
    kimiDevice.device_code
  );
  const kimiMapped = PROVIDERS["kimi-coding"].mapTokens(kimiPoll.data);

  assert.equal(qwenMapped.email, "qwen@example.com");
  assert.equal(qwenMapped.displayName, "Qwen User");
  assert.equal(qwenMapped.providerSpecificData.resourceUrl, "https://chat.qwen.ai/resource");
  assert.equal(kimiMapped.accessToken, "kimi-access");
  assert.equal(kimiMapped.tokenType, "Bearer");
});

test("GitHub executes mocked device-code and profile enrichment flows", async () => {
  useFetchSequence([
    jsonResponse({
      device_code: "github-device",
      user_code: "GH123",
      verification_uri: "https://github.com/login/device",
      expires_in: 900,
      interval: 5,
    }),
    jsonResponse({
      access_token: "github-access",
      refresh_token: "github-refresh",
      expires_in: 3600,
    }),
    jsonResponse({ token: "copilot-token", expires_at: "2030-01-01T00:00:00.000Z" }),
    jsonResponse({
      id: 42,
      login: "octocat",
      name: "Octo Cat",
      email: "octo@example.com",
    }),
  ]);

  const device = await PROVIDERS.github.requestDeviceCode(GITHUB_CONFIG);
  const poll = await PROVIDERS.github.pollToken(GITHUB_CONFIG, device.device_code);
  const extra = await PROVIDERS.github.postExchange(poll.data);
  const mapped = PROVIDERS.github.mapTokens(poll.data, extra);

  assert.equal(poll.ok, true);
  assert.equal(mapped.providerSpecificData.copilotToken, "copilot-token");
  assert.equal(mapped.providerSpecificData.githubLogin, "octocat");
  assert.equal(mapped.providerSpecificData.githubEmail, "octo@example.com");
});

test("Kiro and KiloCode execute mocked device-code flows across their custom endpoints", async () => {
  useFetchSequence([
    jsonResponse({ clientId: "kiro-client", clientSecret: "kiro-secret" }),
    jsonResponse({
      deviceCode: "kiro-device",
      userCode: "KIRO123",
      verificationUri: "https://device.kiro.dev/verify",
      verificationUriComplete: "https://device.kiro.dev/verify?code=KIRO123",
      expiresIn: 600,
      interval: 5,
    }),
    jsonResponse({
      accessToken: "kiro-access",
      refreshToken: "kiro-refresh",
      expiresIn: 3600,
    }),
    jsonResponse({
      code: "kilo-code",
      verificationUrl: "https://api.kilo.ai/device-auth/kilo-code",
      expiresIn: 300,
    }),
    jsonResponse({ status: "approved", token: "kilo-access", userEmail: "kilo@example.com" }),
    // KiloCode polling maps HTTP 202/403/410 to pending/denied/expired.
    textResponse("", 202),
    textResponse("", 403),
    textResponse("", 410),
  ]);

  const kiroDevice = await PROVIDERS.kiro.requestDeviceCode(KIRO_CONFIG);
  const kiroPoll = await PROVIDERS.kiro.pollToken(
    KIRO_CONFIG,
    kiroDevice.device_code,
    undefined,
    kiroDevice
  );
  const kiroMapped = PROVIDERS.kiro.mapTokens(kiroPoll.data);

  const kiloDevice = await PROVIDERS.kilocode.requestDeviceCode(KILOCODE_CONFIG);
  const kiloApproved = await PROVIDERS.kilocode.pollToken(KILOCODE_CONFIG, kiloDevice.device_code);
  const kiloPending = await PROVIDERS.kilocode.pollToken(KILOCODE_CONFIG, kiloDevice.device_code);
  const kiloDenied = await PROVIDERS.kilocode.pollToken(KILOCODE_CONFIG, kiloDevice.device_code);
  const kiloExpired = await PROVIDERS.kilocode.pollToken(KILOCODE_CONFIG, kiloDevice.device_code);
  const kiloMapped = PROVIDERS.kilocode.mapTokens(kiloApproved.data);

  assert.equal(kiroMapped.accessToken, "kiro-access");
  assert.equal(kiroMapped.providerSpecificData.clientId, "kiro-client");
  assert.equal(kiloApproved.ok, true);
  assert.equal(kiloPending.data.error, "authorization_pending");
  assert.equal(kiloDenied.data.error, "access_denied");
  assert.equal(kiloExpired.data.error, "expired_token");
  assert.equal(kiloMapped.email, "kilo@example.com");
});