Why:
- Add a real bidirectional streaming TTS path: raw LLM tokens are
forwarded to the upstream model (Volcengine v3 via the unspeech ws
bridge) without client-side segmentation, so the model owns sentence
splitting and audio chunks play as they arrive.
- Move audio endpoints out of /api/v1/openai/. `/audio/voices`,
`/audio/models`, `/audio/voices/streaming` are not real OpenAI public
APIs, and the streaming TTS surface has nothing to do with OpenAI —
keeping them under /openai/ mislabelled the contract.
- Introduce `capabilities.speech.transport` on ProviderDefinition so
future streaming providers (ElevenLabs / Cartesia / OpenAI Realtime)
opt in without touching Stage.vue or the session factory.
- Unify Stage.vue's TTS path through a single StageTtsSession so the
chat-orchestrator hooks no longer branch on provider id.
What:
- apps/server: new ws proxy /api/v1/audio/speech/ws bridges client ↔
unspeech with auth, pre-flight flux check, billing from upstream
session.finished.usage, OTel spans.
- apps/server: audio routes moved from /api/v1/openai/audio/* to
/api/v1/audio/* (hard cutover; 404 sentinel tests added).
- apps/server: new /api/v1/audio/voices/streaming proxy reads voices
from unspeech /api/voices?provider=volcengine.
- apps/server: new STREAMING_TTS_UPSTREAM configKV entry +
scripts/seed-streaming-tts.ts.
- stage-ui: new libs/speech/streaming-pipeline.ts opens one ws per LLM
intent (appendText / finish / cancel + onSentence / onError / onDone).
- stage-ui: new libs/speech/tts-session.ts — StageTtsSession interface
with segmenter and streaming adapters; factory dispatches by
capabilities.speech.transport instead of hard-coded provider id.
- stage-ui: providerOfficialSpeechStreaming with capabilities.speech =
{ transport: 'bidirectional-ws' }; settings page with model/voice
picker + ws-based preview.
- stage-ui: Stage.vue chat hooks collapsed to a single currentSession;
hot-swap watcher cancels mid-session on provider/voice/model change;
unmount cancels and drains playback.
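The transport dispatch described above can be sketched as follows. This is a minimal illustration, not the actual stage-ui code: the real StageTtsSession carries more than these three methods, the adapter constructors are stand-ins, and the `'segmented'` transport name is an assumption (only `'bidirectional-ws'` and the StageTtsSession / capabilities.speech.transport names come from the change itself).

```typescript
type SpeechTransport = 'bidirectional-ws' | 'segmented'

interface SpeechCapabilities {
  transport?: SpeechTransport
}

// Simplified session surface: text in, lifecycle control out.
interface StageTtsSession {
  appendText: (text: string) => void
  finish: () => void
  cancel: () => void
}

function createTtsSession(
  caps: SpeechCapabilities | undefined,
  adapters: Record<SpeechTransport, () => StageTtsSession>,
): StageTtsSession {
  // Providers that declare no speech transport fall back to the
  // client-side segmenter path; no provider-id branching anywhere.
  return adapters[caps?.transport ?? 'segmented']()
}
```

The point of the capability flag is visible here: a new streaming provider only has to declare `transport: 'bidirectional-ws'` and the factory routes it, with no edits to the call sites.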
Tests:
- 9 streaming-pipeline tests (happy path / buffered / error / cancel /
truncation)
- 11 tts-session tests (factory branch coverage + adapter contracts)
- 4 audio-speech-ws route tests (forwarding / billing / pre-flight /
config-missing)
- 3 legacy-path 404 sentinels in v1 route tests
- Verification doc updated to reflect automated coverage.
End-state of the multi-step KTD-5 / KTD-6 / U8 work. The knoway sidecar
is no longer reachable from server code; the router is required at boot
and now owns chat completions, TTS synthesis, and voice catalog listing.
Highlights:
- LLM_ROUTER_MASTER_KEY becomes required; app.ts drops the graceful-
skip branch and the chat fallback fetch path is gone.
- /audio/speech and /audio/voices route through new routeTts /
listTtsVoices entries that reuse the chat key-rotator + per-attempt
timeout + abort propagation.
- DEFAULT_CHAT_MODEL / DEFAULT_TTS_MODEL move from env to configKV so
default-model swaps are hot-reloadable via Pub/Sub.
- GATEWAY_BASE_URL removed from env schema, .env, .env.local, smoke,
verification harness. Redis upstream-voices cache deleted — catalogs
come from in-process adapter JSON.
- routeTts splits the adapter error contract by ApiError statusCode:
  4xx propagates without fallback; 5xx folds into the network-failure
  fallback path. handleTTS wraps the billing and span-attribute writes
  in try/finally to plug a span leak when ttsMeter.accumulate() throws.
- seed-router-config.ts rewritten with --merge (default) / --reset /
--dry-run modes and env-var key handoff (OPENROUTER_KEY / AZURE_KEY /
DASHSCOPE_KEY) so prod seed flows never put plaintext on the CLI.
Adds DashScope CosyVoice seeding.
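The two error-handling fixes above can be sketched like this. It is an illustration only: ApiError's shape and the fallback wiring are assumed, the real route path is async, and `handleTtsBilling` is a hypothetical name for the pattern inside handleTTS.

```typescript
class ApiError extends Error {
  constructor(public statusCode: number, message: string) {
    super(message)
  }
}

function routeTts<T>(primary: () => T, fallback: () => T): T {
  try {
    return primary()
  }
  catch (err) {
    // A 4xx means the request itself is bad (unknown voice, bad model):
    // a different upstream would fail the same way, so propagate as-is.
    if (err instanceof ApiError && err.statusCode < 500)
      throw err
    // 5xx and network errors fold into the fallback path.
    return fallback()
  }
}

// Billing accumulation may throw; the span must still be closed.
function handleTtsBilling(accumulate: () => void, endSpan: () => void): void {
  try {
    accumulate()
  }
  finally {
    endSpan() // runs even when accumulate() throws, plugging the leak
  }
}
```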
Docs (CLAUDE.md, architecture-overview.md, transport-and-routes.md)
reflect the new boundary. verifications/llm-router.md replaces the
overstated "U1-U9 shipped" line with an evidence-vs-pending table.
Tests: full 40-file / 343-case server suite green. New regressions pin
ApiError 4xx → no-fallback, ApiError 5xx → fallback, TTS billing
failure → span closed and error propagated.
- Added a new PostHog client for capturing server-side business events such as Stripe webhooks and subscription state changes.
- Implemented various tracking functions for pricing funnel steps, character creation, and chat session starts.
- Enhanced the flux meter tests to handle partial charges and report unbilled flux correctly.
- Updated the CharacterDialog and Flux settings pages to track user interactions with analytics events.
- Introduced a mechanism to identify users on PostHog based on authentication state to ensure accurate funnel tracking.
- Added necessary dependencies for PostHog integration in the project.
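One of the funnel trackers might look like the sketch below. The names are illustrative (`trackPricingFunnelStep` and `CaptureClient` are not taken from the change); the `capture()` message shape loosely follows posthog-node's, and the client is injected so the helper stays testable.

```typescript
interface CaptureClient {
  capture: (msg: {
    distinctId: string
    event: string
    properties?: Record<string, unknown>
  }) => void
}

function trackPricingFunnelStep(client: CaptureClient, userId: string, step: string): void {
  // distinctId ties server-side events to the identified browser user,
  // keeping the funnel continuous across client and server events.
  client.capture({
    distinctId: userId,
    event: 'pricing_funnel_step',
    properties: { step },
  })
}
```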
- Moved the RateLimitMetrics import to a more centralized location.
- Introduced a new file for active sessions gauge to track user sessions in the database.
- Updated index.ts to include new metrics and ensure proper initialization of observability metrics.
- Modified various routes and services to utilize the new observability structure.
- Added smoke tests for HTTP and WebSocket metrics to ensure proper metric registration and functionality.
- Enhanced error handling for metrics reading failures to improve observability.
- Updated the EngagementMetrics interface to use ObservableGauge for tracking active WebSocket connections.
- Added detailed comments explaining the rationale for this change, highlighting the benefits of using a pull-based gauge over a delta-based counter.
- Implemented the ObservableGauge in the createChatWsHandlers function, ensuring it accurately reflects the live count of active connections.
- Removed the previous UpDownCounter logic to prevent issues with connection drift during process crashes or network interruptions.
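The pull-vs-push distinction above can be shown with a self-contained stub. The observe-callback shape loosely mirrors OpenTelemetry's ObservableGauge, but this is not the real SDK, and the connection-set wiring is an assumption about how the handler tracks sockets.

```typescript
type ObserveFn = (value: number) => void

function createObservableGauge(callback: (observe: ObserveFn) => void) {
  return {
    // The collector pulls the current value at scrape time. A process
    // crash or dropped socket can never strand a stale +1 the way an
    // unmatched UpDownCounter.add(-1) can.
    collect(): number {
      let value = 0
      callback((v) => { value = v })
      return value
    },
  }
}

// Live connection set owned by the ws handler; the gauge only reads it.
const activeConnections = new Set<string>()
const wsConnectionsGauge = createObservableGauge(
  observe => observe(activeConnections.size),
)
```

Because the gauge reads the live set rather than accumulating deltas, the metric self-heals after any interruption: the next scrape reports the true count.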
The Redis Stream `billing-events` + `worker` Railway role +
advisory-lock poller layered together didn't actually buy us reliability
— `debitFlux` swallowed XADD failures, leaving the door open to "balance
updated, ledger row never written". Collapse the whole thing back to:
`creditFlux` and `debitFlux` write `flux_transaction` ledger rows inline
within the same DB transaction that mutates `user_flux`, and `(user_id,
request_id)` remains the partial unique index that keeps retries safe.
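The invariant can be sketched with an in-memory store (names assumed; the real service writes `user_flux` and `flux_transaction` inside one DB transaction, and the dedupe is a partial unique index rather than a scan):

```typescript
interface LedgerRow {
  userId: string
  requestId: string
  delta: number
}

class BillingStore {
  balances = new Map<string, number>()
  ledger: LedgerRow[] = []

  private hasRow(userId: string, requestId: string): boolean {
    return this.ledger.some(r => r.userId === userId && r.requestId === requestId)
  }

  creditFlux(userId: string, requestId: string, amount: number): void {
    if (this.hasRow(userId, requestId))
      return // retried request: already applied, safe no-op
    this.balances.set(userId, (this.balances.get(userId) ?? 0) + amount)
    this.ledger.push({ userId, requestId, delta: amount })
  }

  debitFlux(userId: string, requestId: string, amount: number): void {
    if (this.hasRow(userId, requestId))
      return
    const balance = this.balances.get(userId) ?? 0
    if (balance < amount)
      throw new Error('insufficient flux')
    // In the real service these two writes share one DB transaction, so
    // "balance updated, ledger row never written" cannot happen.
    this.balances.set(userId, balance - amount)
    this.ledger.push({ userId, requestId, delta: -amount })
  }
}
```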
Concrete changes:
- Inline ledger inserts in `BillingService.{debitFlux, creditFlux,
creditFluxFromStripeCheckout, creditFluxFromInvoice}`; drop `billingMq`
and `publishEvent` plumbing entirely.
- `routes/openai/v1` writes `llm_request_log` synchronously via the
existing `requestLogService`; the duplicate `llm-request-log.ts` service
module is removed.
- `bin/run-worker.ts`, `libs/mq/*`,
`services/billing/billing-events.ts`,
`services/billing/billing-consumer-handler.ts`, and matching tests are
deleted. CLI now exposes only `api`.
- `BILLING_EVENTS_*` env vars and the `DEFAULT_BILLING_EVENTS_STREAM`
helper are dropped; `docker-compose.yml` no longer ships a worker
service.
- `docs/ai-context/{workers-and-runtime, billing-architecture,
redis-boundaries-and-pubsub, data-model-and-state,
architecture-overview, README}.md`, `CLAUDE.md`, and the existing
verification docs are updated to describe the single-process synchronous
pipeline.
Tests: 29 files / 247 cases pass. Production deployments need to drop
the worker Railway service after this lands.
- Added package.json for @proj-airi/server-protocol with the necessary
package configuration.
- Implemented chat event types including WireMessage, SendMessagesRequest, and PullMessagesRequest.
- Defined WebSocket event types and structures for better integration with AIRI components.
- Updated server-runtime and server-sdk to utilize the new server-protocol package.
- Refactored imports across various packages to replace server-shared types with server-protocol types.
- Enhanced type definitions and added TypeScript configuration for a better development experience.
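The event types above might take roughly this shape. Only the type names (WireMessage, SendMessagesRequest, PullMessagesRequest) come from the change itself; every field here is an assumption for illustration.

```typescript
interface WireMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

interface SendMessagesRequest {
  type: 'chat:send-messages'
  sessionId: string
  messages: WireMessage[]
}

interface PullMessagesRequest {
  type: 'chat:pull-messages'
  sessionId: string
  afterMessageId?: string
}

type ChatEvent = SendMessagesRequest | PullMessagesRequest

// A discriminated union keeps handlers on both the runtime and SDK
// sides exhaustive without casts.
function isSendMessages(event: ChatEvent): event is SendMessagesRequest {
  return event.type === 'chat:send-messages'
}
```

Centralizing these shapes in one package is what lets server-runtime and server-sdk drop their duplicated server-shared types.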