# API Contracts

## Contract Metadata

```json
{
  "subsystem_id": "api-contracts",
  "lane": "L6",
  "contract_file": "docs/release-control/v6/internal/subsystems/api-contracts.md",
  "status_file": "docs/release-control/v6/internal/status.json",
  "registry_file": "docs/release-control/v6/internal/subsystems/registry.json",
  "dependency_subsystem_ids": [
    "agent-lifecycle",
    "ai-runtime",
    "cloud-paid",
    "patrol-intelligence"
  ]
}
```
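As an illustrative sketch only (this loader is not part of the contract; the type and function names are hypothetical, though the field names mirror the JSON keys above), the metadata block could be validated before tooling consumes it:

```typescript
// Hypothetical shape for the Contract Metadata block; field names mirror
// the JSON keys, everything else is illustrative.
interface SubsystemContractMetadata {
  subsystem_id: string;
  lane: string;
  contract_file: string;
  status_file: string;
  registry_file: string;
  dependency_subsystem_ids: string[];
}

// Minimal runtime check so a malformed registry entry fails loudly
// instead of propagating undefined fields into downstream tooling.
function parseContractMetadata(raw: string): SubsystemContractMetadata {
  const value = JSON.parse(raw) as Partial<SubsystemContractMetadata>;
  const required: (keyof SubsystemContractMetadata)[] = [
    "subsystem_id",
    "lane",
    "contract_file",
    "status_file",
    "registry_file",
  ];
  for (const key of required) {
    if (typeof value[key] !== "string") {
      throw new Error(`contract metadata missing string field: ${key}`);
    }
  }
  if (!Array.isArray(value.dependency_subsystem_ids)) {
    throw new Error("contract metadata missing dependency_subsystem_ids");
  }
  return value as SubsystemContractMetadata;
}
```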
## Purpose
Own canonical runtime payload shapes between backend and frontend.
## Canonical Files

- `internal/api/contract_test.go`
- `internal/api/resources.go`
- `internal/api/alerts.go`
- `internal/api/activity_audit_handlers.go`
- `frontend-modern/src/types/api.ts`
- `frontend-modern/src/api/responseUtils.ts`
- `frontend-modern/src/components/Settings/APITokenManager.tsx`
- `frontend-modern/src/components/Settings/apiTokenManagerModel.ts`
- `frontend-modern/src/components/Settings/infrastructureOperationsModel.tsx`
- `frontend-modern/src/components/Settings/useAPITokenManagerState.ts`
- `frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx`
- `frontend-modern/src/components/Settings/NodeModalAuthenticationSection.tsx`
- `frontend-modern/src/components/Settings/NodeModalBasicInfoSection.tsx`
- `frontend-modern/src/components/Settings/nodeModalModel.ts`
- `frontend-modern/src/components/Settings/NodeModalMonitoringSection.tsx`
- `frontend-modern/src/components/Settings/NodeModalSetupGuideSection.tsx`
- `frontend-modern/src/components/Settings/NodeModalStatusFooter.tsx`
- `frontend-modern/src/components/Settings/useNodeModalState.ts`
- `frontend-modern/src/utils/agentInstallCommand.ts`
- `frontend-modern/src/api/nodes.ts`
- `frontend-modern/src/api/license.ts`
- `frontend-modern/src/api/monitoredSystemLedger.ts`
- `frontend-modern/src/api/resources.ts`
- `frontend-modern/src/api/monitoring.ts`
- `internal/api/monitored_system_ledger.go`
- `frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx`
- `frontend-modern/src/components/Settings/useInfrastructureConfiguredNodesState.ts`
- `frontend-modern/src/components/Settings/useInfrastructureDiscoveryRuntimeState.ts`
- `frontend-modern/src/utils/apiTokenPresentation.ts`
- `frontend-modern/src/utils/infrastructureSettingsPresentation.ts`
- `internal/api/router_routes_auth_security.go`
- `internal/api/relay_hosted_runtime.go`
- `internal/api/ai_hosted_runtime.go`
- `internal/api/router_routes_licensing.go`
- `internal/api/reporting_inventory_handlers.go`
- `internal/cloudcp/portal/bootstrap.go`
- `internal/cloudcp/portal/handlers.go`
- `internal/cloudcp/portal/page.go`
- `internal/cloudcp/portal/page_templates.go`
- `internal/cloudcp/portal/frontend/src/index.ts`
- `internal/cloudcp/portal/frontend/src/shell.ts`
- `internal/cloudcp/portal/frontend/src/billing.ts`
- `internal/cloudcp/portal/frontend/src/runtime.ts`
- `internal/cloudcp/portal/frontend/src/types.ts`
- `internal/cloudcp/portal/frontend/src/styles.css`
- `internal/cloudcp/portal/frontend/tsconfig.json`
- `internal/cloudcp/portal/frontend_sync_test.go`
- `internal/api/recovery_handlers.go`
- `internal/api/config_setup_handlers.go`
- `internal/api/demo_mode_commercial.go`
- `internal/api/security_status_capabilities.go`
- `internal/api/demo_middleware.go`
- `frontend-modern/src/stores/aiRuntimeState.ts`
- `internal/api/connections_types.go`
- `internal/api/connections_aggregator.go`
- `internal/api/connections_handlers.go`
- `internal/api/connections_probe.go`
- `frontend-modern/src/api/connections.ts`
- `frontend-modern/src/api/hostedSignup.ts`
## Shared Boundaries

- `frontend-modern/src/api/agentProfiles.ts` shared with `agent-lifecycle`: the agent profiles frontend client is both an agent lifecycle control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/ai.ts` shared with `ai-runtime`: the AI frontend client is both an AI runtime control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/nodes.ts` shared with `agent-lifecycle`: the shared Proxmox node client is both an agent lifecycle setup/install control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/notifications.ts` shared with `notifications`: the notifications frontend client is both a notification delivery control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/orgs.ts` shared with `organization-settings`: the organization frontend client is both an organization settings control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/patrol.ts` shared with `ai-runtime`: the Patrol frontend client is both an AI runtime control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/rbac.ts` shared with `organization-settings`: the RBAC frontend client is both an organization settings control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/security.ts` shared with `security-privacy`: the security frontend client is both a security/privacy control surface and a canonical API payload contract boundary.
- `frontend-modern/src/api/updates.ts` shared with `deployment-installability`: the updates frontend client is both a deployment-installability control surface and a canonical API payload contract boundary.
- `frontend-modern/src/components/Settings/APITokenManager.tsx` shared with `security-privacy`: the API token settings surface is both a security/privacy control surface and a canonical API payload contract boundary.
- `frontend-modern/src/components/Settings/apiTokenManagerModel.ts` shared with `security-privacy`: the pure API token settings model is both a security/privacy control surface and a canonical API payload contract boundary.
- `frontend-modern/src/components/Settings/ConnectionEditor/CredentialSlots/NodeCredentialSlot.tsx` shared with `agent-lifecycle`: the inline node credential slot is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/components/Settings/infrastructureOperationsModel.tsx` shared with `agent-lifecycle`: the pure infrastructure operations inventory/install model is both an agent fleet lifecycle control surface and an API token, lookup, assignment, and reporting/install contract boundary.
- `frontend-modern/src/components/Settings/NodeModalAuthenticationSection.tsx` shared with `agent-lifecycle`: the node setup authentication section is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/components/Settings/NodeModalBasicInfoSection.tsx` shared with `agent-lifecycle`: the node setup basic-info section is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/components/Settings/nodeModalModel.ts` shared with `agent-lifecycle`: the pure node setup modal model is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/components/Settings/NodeModalMonitoringSection.tsx` shared with `agent-lifecycle`: the node setup monitoring section is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/components/Settings/NodeModalSetupGuideSection.tsx` shared with `agent-lifecycle`: the node setup guide section is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/components/Settings/NodeModalStatusFooter.tsx` shared with `agent-lifecycle`: the node setup status/footer section is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/components/Settings/useAPITokenManagerState.ts` shared with `security-privacy`: the API token settings state hook is both a security/privacy control surface and a canonical API payload contract boundary.
- `frontend-modern/src/components/Settings/useInfrastructureConfiguredNodesState.ts` shared with `agent-lifecycle`: the direct-node infrastructure settings state hook is both an agent lifecycle control surface and a shared Proxmox node API contract boundary.
- `frontend-modern/src/components/Settings/useInfrastructureDiscoveryRuntimeState.ts` shared with `agent-lifecycle`: the infrastructure discovery runtime state hook is both an agent lifecycle control surface and a shared discovery/settings API contract boundary. That same shared boundary also owns settings-route polling scope for discovery payloads: the `/api/discover` refresh loop and websocket-backed discovery status hydration may run only while the operator is on the infrastructure connections workspace under `/settings/infrastructure/platforms*`, not on the systems ledger or install workspace. It also owns the explicit manual-scan contract for `/api/discover`: when the operator runs discovery from the infrastructure manager, the hook must consume the immediate POST response body as the next source of truth for discovered candidates and scan errors rather than waiting for a later poll or websocket update. Cached GET payloads and manual POST payloads must both normalize their `updated`/`timestamp` values into one millisecond-backed `lastResultAt` state so discovery review rows do not depend on transport-specific timestamp shapes.
- `frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx` shared with `agent-lifecycle`: the infrastructure install state hook is both an agent fleet lifecycle control surface and an API token, lookup, and install transport contract boundary.
- `frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx` shared with `agent-lifecycle`: the shared infrastructure operations state hook is both an agent fleet lifecycle control surface and an API token, lookup, assignment, and reporting/install contract boundary.
- `frontend-modern/src/components/Settings/useNodeModalState.ts` shared with `agent-lifecycle`: the node setup modal state hook is both an agent lifecycle control surface and a shared API-backed install/setup contract boundary.
- `frontend-modern/src/utils/agentInstallCommand.ts` shared with `agent-lifecycle`: the shared frontend install-command helper is both an agent lifecycle control surface and a canonical API/install transport contract boundary.
- `frontend-modern/src/utils/apiTokenPresentation.ts` shared with `security-privacy`: the API token presentation helper is both a security/privacy control surface and a canonical API token management boundary.
- `frontend-modern/src/utils/infrastructureSettingsPresentation.ts` shared with `agent-lifecycle`: the infrastructure settings presentation helper is both an agent lifecycle control surface and an API-backed direct-node/discovery settings boundary.
- `internal/api/access_control_handlers.go` shared with `organization-settings`: RBAC role and user-assignment handlers are both an organization settings control surface and a canonical API payload contract boundary.

The shared node setup boundary above owns the guided/manual setup split for PVE/PBS consumers: Agent Install and Direct Connection setup-script modes are auto-registration paths, while Token ID/Value fields, Test Connection, and Add Node are manual-token or existing-node edit controls only. That same client contract must expose the setup strategy before a token path is chosen: Agent Install is API + Agent, Direct Connection is API inventory, and Manual Token Setup is a manual API-token escape hatch. The inline node credential slot must keep the visible submit sequence as Endpoint, Authentication, and Coverage before the API-backed setup controls. That sequence is presentation guidance for the existing setup payload phases; it does not create a second node setup API model or allow page-local payload ownership.

- `internal/api/agent_install_command_shared.go` shared with `agent-lifecycle`: agent install command assembly is both an agent lifecycle control surface and a canonical API payload contract boundary.
- `internal/api/ai_handler.go` shared with `ai-runtime`: Pulse Assistant handlers are both an AI runtime control surface and a canonical API payload contract boundary.
- `internal/api/ai_handlers.go` shared with `ai-runtime`: AI settings and remediation handlers are both an AI runtime control surface and a canonical API payload contract boundary.
- `internal/api/ai_intelligence_handlers.go` shared with `ai-runtime`: AI intelligence handlers are both an AI runtime control surface and a canonical API payload contract boundary.
- `internal/api/config_setup_handlers.go` shared with `agent-lifecycle`: auto-register and setup handlers are both an agent lifecycle control surface and a canonical API payload contract boundary. That same shared boundary also owns reachable-host selection truth for canonical Proxmox registration: runtime callers may propose ordered `candidateHosts`, but the API contract must persist and echo the first candidate Pulse can actually reach instead of freezing the caller's rejected first preference into the stored node endpoint. That same canonical payload contract also owns strict-TLS truth for that selected host: `/api/auto-register` may only persist `VerifySSL=true` when Pulse actually captured a certificate fingerprint for the selected candidate, and it must not pretend public-CA verification is safe after every candidate fingerprint probe failed. That same contract now owns stale-marker verification as well: setup-token-authenticated `checkRegistration` requests may omit token completion fields and must answer `{registered:boolean}` from canonical candidate-host matching so runtime repair can distinguish real registrations from stale local marker files without rotating tokens first. That same shared setup contract also owns teardown symmetry for script-managed Proxmox nodes: `/api/auto-unregister` must accept the canonical `type`, normalized `host`, explicit `serverName`, optional canonical `tokenId`, request-body `authToken`, and `source:"script"` payload, and it must answer the same canonical success envelope on both real removals and idempotent no-op reruns so browser/runtime callers do not invent a second uninstall vocabulary.
- `internal/api/enterprise_extension_rbac_admin.go` shared with `organization-settings`: RBAC admin extension endpoints are both an organization settings control surface and a canonical API payload contract boundary.
- `internal/api/licensing_bridge.go` shared with `cloud-paid`: commercial licensing bridge handlers carry both API payload contract and cloud-paid entitlement boundary ownership.
- `internal/api/licensing_handlers.go` shared with `cloud-paid`: commercial licensing handlers carry both API payload contract and cloud-paid entitlement boundary ownership. That same shared licensing boundary also owns authenticated install-version attribution: `internal/api/router.go` must hand the canonical process `serverVersion` into `internal/api/licensing_handlers.go`, and the shared licensing runtime must carry that exact value through `/v1/activate`, `/v1/licenses/exchange`, and `/v1/grants/refresh` so migrated installs can be attributed to exact builds without inventing a second version source or trusting browser-supplied version hints. That same shared licensing boundary also owns self-hosted purchase-return framing: `/auth/license-purchase-start` and `/auth/license-purchase-activate` may return operators only to the canonical self-hosted settings route at `/settings/system/billing/plan`, and the bridge pages must describe that surface as a plan-owned destination (Plans, Plan activated, Finalizing plan upgrade) rather than as a tier-owned Pulse Pro billing page. Frontend callers may still render the unlocked tier name inside that destination, but the browser/API contract must not reintroduce Pulse-Pro-as-page-name copy in callback titles, actions, or retry guidance.
- `internal/api/notifications.go` shared with `notifications`: notification handlers are both a notification delivery control surface and a canonical API payload contract boundary.
- `internal/api/org_handlers.go` shared with `organization-settings`: organization management handlers are both an organization settings control surface and a canonical API payload contract boundary.
- `internal/api/org_lifecycle_handlers.go` shared with `organization-settings`: organization lifecycle handlers are both an organization settings control surface and a canonical API payload contract boundary.
- `internal/api/payments_webhook_handlers.go` shared with `cloud-paid`: commercial payment webhook handlers carry both API payload contract and cloud-paid billing boundary ownership.
- `internal/api/public_signup_handlers.go` shared with `cloud-paid`: hosted signup handlers carry both API payload contract and cloud-paid hosted provisioning boundary ownership. That same shared boundary also owns public hosted-signup response privacy: syntactically valid `/api/public/signup` requests must return one generic `202 Accepted` Pulse Account message whether provisioning/email side effects ran or were suppressed by the owner-email limiter, while invalid bodies and true server failures remain explicit.
- `internal/api/relay_mobile_capability.go` shared with `relay-runtime`: the backend-owned Pulse Mobile relay capability inventory is both a relay runtime boundary and a canonical API payload contract surface.
- `internal/api/resources.go` shared with `unified-resources`: the unified resource endpoint is both a backend payload contract surface and a unified-resource runtime boundary.
- `internal/api/security.go` shared with `security-privacy`: the security handlers are both a security/privacy control surface and a canonical API payload contract boundary.
- `internal/api/security_tokens.go` shared with `security-privacy`: the security token handlers are both a security/privacy control surface and a canonical API payload contract boundary.
- `internal/api/slo.go` shared with `performance-and-scalability`: the SLO endpoint is both an API contract surface and a protected performance hot-path boundary.
- `internal/api/system_settings.go` shared with `security-privacy`: the system settings telemetry and auth controls are both a security/privacy control surface and a canonical API payload contract boundary.
- `internal/api/unified_agent.go` shared with `agent-lifecycle`: unified agent download and installer handlers are both an agent lifecycle control surface and a canonical API payload contract boundary.
- `internal/api/updates.go` shared with `deployment-installability`: update handlers are both a deployment-installability control surface and a canonical API payload contract boundary.
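The discovery timestamp rule above (one millisecond-backed `lastResultAt` fed by both transports) can be sketched as follows. This is a hedged illustration, not the real hook: the payload field names (`updated` on cached GET responses, `timestamp` on manual POST responses) come from the contract text, but the seconds-versus-milliseconds heuristic and helper names are assumptions.

```typescript
// Discovery payloads may carry epoch numbers or ISO strings depending on
// transport; both must collapse into one millisecond-backed value.
type DiscoveryPayload = {
  updated?: number | string;   // cached GET shape (per the contract text)
  timestamp?: number | string; // manual POST shape (per the contract text)
};

// Assumption: numeric values below 1e12 are seconds-since-epoch,
// larger ones are already milliseconds.
function toEpochMillis(value: number | string): number | null {
  if (typeof value === "number") {
    return value < 1e12 ? Math.round(value * 1000) : Math.round(value);
  }
  const parsed = Date.parse(value);
  return Number.isNaN(parsed) ? null : parsed;
}

// Review rows read only this normalized state, never the raw transport
// field, so they cannot depend on transport-specific timestamp shapes.
function lastResultAt(payload: DiscoveryPayload): number | null {
  const raw = payload.updated ?? payload.timestamp;
  return raw === undefined ? null : toEpochMillis(raw);
}
```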
The platform-connections API contract also owns inactive monitored-system candidate semantics end to end: `enabled=false` on TrueNAS or VMware preview, test, add, and update payloads must serialize through the shared ledger client as `active:false`, and preview responses may legitimately return `no_change`, `removes_existing`, or `removes_multiple` with empty projected-system lists when the disabled candidate no longer counts toward monitored-system capacity.
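A minimal sketch of that inactive-candidate rule, assuming hypothetical type and function names (the real ledger client API is not shown here; only the `enabled`/`active` field split and the preview impact values come from the contract text):

```typescript
// Preview impact values named by the contract; "adds_new" is an assumed
// extra variant for contrast.
type PreviewImpact = "no_change" | "removes_existing" | "removes_multiple" | "adds_new";

interface CandidateForm { host: string; enabled: boolean }
interface LedgerCandidate { host: string; active: boolean }

// The UI-facing enabled flag serializes through the shared ledger client
// as the canonical active flag; no transport keeps a second vocabulary.
function toLedgerCandidate(form: CandidateForm): LedgerCandidate {
  return { host: form.host, active: form.enabled };
}

// A disabled candidate may legitimately preview with an empty projected
// list when it no longer counts toward monitored-system capacity.
function previewIsLegitimatelyEmpty(impact: PreviewImpact, projected: unknown[]): boolean {
  return (
    projected.length === 0 &&
    (impact === "no_change" || impact === "removes_existing" || impact === "removes_multiple")
  );
}
```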
That same monitored-system admission contract now also owns restart-safe host report continuity at the API boundary. `internal/api/monitored_system_limit_enforcement.go` must treat a returning standalone host report as existing capacity when monitoring can match it to recent persisted host continuity, so a server restart or v6 upgrade does not emit a false over-limit 402 before the live inventory rebuild catches up. Genuinely new host identities must still return the canonical monitored-system blocked payload.
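The restart-safe admission rule above can be sketched as a decision function. This is an assumption-laden illustration, not the real Go enforcement code: only the "persisted continuity counts as existing capacity" rule and the 402 blocked outcome come from the contract text; every name here is hypothetical.

```typescript
interface HostReport { identity: string }
interface AdmissionDecision { admitted: boolean; status: number; reason: string }

// liveInventory is the rebuilt in-memory view (possibly empty right after
// a restart); recentPersistedIdentities is the persisted continuity set.
function admitHostReport(
  report: HostReport,
  liveInventory: Set<string>,
  recentPersistedIdentities: Set<string>,
  limit: number,
): AdmissionDecision {
  const known =
    liveInventory.has(report.identity) || recentPersistedIdentities.has(report.identity);
  if (known) {
    // Returning host: existing capacity, never a false over-limit 402
    // while the live inventory rebuild catches up.
    return { admitted: true, status: 200, reason: "existing-capacity" };
  }
  if (liveInventory.size >= limit) {
    // Genuinely new identity over the limit: canonical blocked payload.
    return { admitted: false, status: 402, reason: "monitored-system-blocked" };
  }
  return { admitted: true, status: 200, reason: "new-capacity" };
}
```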
## Extension Points

- Add or change payload fields through handler + contract tests together.
- Update frontend API types in lockstep with backend contract changes. Websocket-backed API consumers such as `frontend-modern/src/components/Settings/useAPITokenManagerState.ts` and `frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx` may read runtime context only through `frontend-modern/src/contexts/appRuntime.ts`; they must not import `frontend-modern/src/App.tsx`, because payload ownership remains in the API contract rather than the root shell.
- Add dedicated contract tests for new stable payloads.
- Route unified resource sensitivity, routing, and
aiSafeSummarypayload changes throughinternal/api/resources.go,internal/api/contract_test.go, and the canonical frontend resource consumer proofs together; resource governance metadata must not ship as an API-only or frontend-only heuristic That same resource payload contract ownsaggregations.policyPostureon/api/resourcesand/api/resources/stats. The aggregation must be derived from canonical unified-resource policy metadata, normalized as camelCase resource API JSON, and exercised with backend contract tests plus the canonicaluseUnifiedResourcesfrontend hook proof whenever it changes. - Route unified-resource action, lifecycle, and export audit reads through
internal/api/activity_audit_handlers.go,internal/api/router_routes_licensing.go, andinternal/api/contract_test.gotogether so the control-plane execution trail stays on a governed API contract instead of a store-only shape - Route dedicated unified-resource timeline and facet-bundle reads through
frontend-modern/src/api/resources.ts,internal/api/resources.go, andinternal/api/contract_test.gotogether so the backend facet contract and the frontend client stay aligned on one timeline-first surface, while capability and relationship detail stays backend-owned for AI correlation and change detection./api/resources/{id}/timelineand/api/resources/{id}/facetsmust keep resource timelines relationship-aware by opting into the canonicalResourceChangeFilters.IncludeRelatedstore path, so a resource timeline includes direct changes and changes that name the resource inrelatedResourcesinstead of hiding child or dependency activity from the owning resource. - Route unified-resource list ordering through
internal/api/resources.go,internal/api/contract_test.go, and the owned unified-resource registry helpers together; list payloads must stay deterministic for equal-name resources by carrying one canonicalname -> type -> idtie-break across cold seed, REST pagination, and websocket-backed refreshes instead of inheriting map order or page-local re-sorts That same shared API contract also owns the external resourcetype, canonical display name, and cluster identity published through/api/resourcesand/api/state; the websocket/state hydrate path must not emit legacy aliases or raw store labels once the unified resource contract has normalized them. - Route unified-agent installer and binary download headers through
internal/api/unified_agent.goandinternal/api/contract_test.gotogether; published release downloads must keep the canonicalX-Checksum-Sha256plusX-Signature-Ed25519contract for updater clients and the base64-encodedX-Signature-SSHSIGcontract for installer clients whether the asset is served locally or proxied from the matching GitHub release, instead of leaving callers to infer trust from source location alone. - Route canonical AI intelligence summary and resource-intelligence reads through
frontend-modern/src/api/ai.ts,frontend-modern/src/stores/aiIntelligence.ts,frontend-modern/src/stores/aiIntelligenceSummaryModel.ts,frontend-modern/src/features/patrol/usePatrolIntelligenceState.ts,frontend-modern/src/features/patrol/PatrolIntelligenceSurface.tsx, the Patrol-owned section files underfrontend-modern/src/features/patrol/,frontend-modern/src/pages/AIIntelligence.tsx,internal/api/ai_handlers.go, andinternal/api/contract_test.gotogether so the summary card, store normalization owner, runtime hook, feature shell, section owners, route shell, and backend payload stay aligned on one governed surface, including the canonical recent-changes slice while keeping the learning counters backend-only coverage, so the summary page keeps Patrol health and findings primary and renders timeline, correlation, and policy-posture data as secondary investigation context rather than as a separate headline product metric and the Patrol findings empty-state behavior, so0 active findingsonly renders as a healthy frontend conclusion when the same governed AI summary contract still reports healthy overall health; degraded or not-fully-verified health predictions must flow through to the Patrol findings surface instead of being replaced by page-local "looks healthy" copy and the Patrol assessment headline plus compact summary-strip behavior, so the same governed AI summary contract decides whether the page leads with verified health, issues detected, coverage incomplete, or another attention state instead of letting count-only page fragments emit a staleNo issues foundconclusion and the Patrol summary shell treatment itself, so the same governed summary contract still lands inside the shared neutral page-card base while severity travels through compact header accents and icon badges instead of a page-local full-width semantic background and the Patrol verification summary derived from run history, so the page also states whether recent Patrol evidence came from a successful full 
patrol or only from scoped/erroring runs instead of leaving verification scope implicit and the same-day activity-mix explanation derived from that governed run history, so when a recent full patrol is followed by alert-triggered or anomaly-triggered scoped work the verification surface can explain the mix directly instead of reconstructing it from page-local timing heuristics and the Patrol status recency split, solast_patrol_atremains reserved for completed full Patrol sweeps while scoped runs and verification checks advancelast_activity_atwithout claiming a fresh full-estate verification pass and the canonical alert-triggered Patrol enqueue path ininternal/api/router.go, so alert-fired Patrol work flows through the unified alert bridge and trigger manager instead of being duplicated by monitor callback wiring and the sharedfrontend-modern/src/components/Infrastructure/ResourceChangeSummary.tsxcard, so canonical recent-change timelines stay rendered through one governed frontend card instead of separate page-local list loops and the sharedfrontend-modern/src/utils/resourceChangePresentation.tsformatter used by the summary page and resource drawer, so canonical change wording does not drift across surfaces and the/api/ai/intelligence/changesroute plusinternal/api/contract_test.go, so the canonical recent-changes endpoint stays on the same intelligence facade and contract snapshot instead of bypassing the shared timeline source and the canonical policy-posture snapshot derived from unified resources, so sensitivity, routing, and redaction counts stay owned by the same AI summary contract instead of being reconstructed as a page-local governance rollup and the resource-intelligence payload carried by the drawer AI card, so the resource-detail surface stays on one canonical intelligence contract instead of introducing a separate detail endpoint and the learned-correlation payload loaded into the shared AI intelligence store, so the Patrol intelligence page and the AI 
summary page consume the same governed correlation slice instead of each page fetching its own copy and the shared dashboard-load bundle insidefrontend-modern/src/stores/aiIntelligence.ts, so the page orchestration stays on the store-owned bundle instead of enumerating the AI fetches inline and the sharedfrontend-modern/src/components/Infrastructure/ResourcePolicySummary.tsxcard, so the AI summary page renders the governed policy-posture counts while the resource drawer stays on per-resource policy lines instead of carrying duplicate posture UI loops and the dedicatedfrontend-modern/src/features/patrol/patrolInvestigationContextModel.tsowner, so recent-change, learned-correlation, and policy-coverage summary text stays derived from the canonical AI payload in one place instead of as hook-local count and pluralization logic and the dedicatedfrontend-modern/src/stores/aiIntelligenceSummaryModel.tsowner, so recent-change counts and governed policy-posture fallbacks normalize once at the shared store boundary instead of as Patrol-hook-local payload repair and the sharedfrontend-modern/src/components/Infrastructure/ResourceCorrelationSummary.tsxcard, so learned correlations and correlation context stay rendered through one governed frontend card instead of separate page-local list loops and the same shared correlation card's ordering and truncation rule, so callers pass raw correlations instead of encoding their own top-N sort behavior and the sharedfrontend-modern/src/components/Infrastructure/ResourceChangeSummary.tsxandfrontend-modern/src/components/Infrastructure/ResourceCorrelationSummary.tsxcards' infrastructure resource-link default, so the Patrol page, resource drawer, and problem-resource dashboard panels inherit the canonical resource-filter path construction instead of rebuilding infrastructure URLs inline and the Patrol runtime-remediation destination shared with/api/settings/ai, so summary actions and runtime-finding actions may reuse the governed 
provider-settings route while still presenting that destination to Patrol operators as Patrol provider configuration instead of genericAI Settingscopy and the Patrol route-shell destination itself, so the thin page shell atfrontend-modern/src/pages/AIIntelligence.tsxmay continue to bridge the shared AI-runtime payload boundary while exposing/patrolas the canonical product route and preserving/aionly as a compatibility redirect - Route frontend API-client parsed error propagation, API-error-status fallback handling, allowed-status handling, custom status-specific error handling, command-trigger success envelope handling, shared response parsing pipelines, missing-resource lookup handling, metadata CRUD routing, stream event consumption, response status, collection normalization, scalar payload coercion, and structured error normalization through canonical shared helpers under
frontend-modern/src/api/That same shared org-management client boundary now owns target-consent sharing semantics acrossfrontend-modern/src/api/orgs.ts,internal/api/org_handlers.go, and the shared org route wiring. Cross-org share creation must remain a pending request until the target organization accepts it, the payload must preservestatus,acceptedAt, andacceptedBy, and widening an accepted share's requested role must reset the share back topending. Downstream settings surfaces must not infer live access from share creation alone or recreate manager-only pending-share visibility rules locally. - Add or change API token scope, assignment, and revocation presentation through
`frontend-modern/src/components/Settings/APITokenManager.tsx`, `frontend-modern/src/components/Settings/apiTokenManagerModel.ts`, and `frontend-modern/src/components/Settings/useAPITokenManagerState.ts`. That same shared token contract also owns audit scope separation: audit event, verification, summary, export, and unified action/export audit reads must require the dedicated `audit:read` scope instead of reusing broader monitoring or settings-read token grants.
- Add or change infrastructure operations token generation, lookup, assignment, the pure unified-agent inventory/install model, the split infrastructure install state owner, the split direct-node/discovery infrastructure settings owners, the shared infrastructure-operations state provider/context shell, and install presentation through `frontend-modern/src/components/Settings/infrastructureOperationsModel.tsx`, `frontend-modern/src/components/Settings/useInfrastructureConfiguredNodesState.ts`, `frontend-modern/src/components/Settings/useInfrastructureDiscoveryRuntimeState.ts`, `frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx`, and `frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx`. Phase 9 retired the InfrastructureOperationsController shell and the useInfrastructureReportingState reporting path; they must not be reintroduced, and aggregator-backed reporting reads are owned by `frontend-modern/src/components/Settings/useConnectionsLedger.ts` under the frontend-primitives contract. That same governed infrastructure-operations API boundary also owns discovery polling activation: the shared discovery runtime may only poll `/api/discover` while the settings shell has the `infrastructure-connections` route active, so route-level IA changes cannot silently keep discovery traffic alive on unrelated systems or install screens. That same governed setup/install boundary also owns uninstall convergence: when a script-managed Proxmox node removes its local Pulse credentials, the canonical `/api/auto-unregister` API must remove the matching stored node immediately and emit the same discovery/node-deleted refresh semantics as manual deletion, so the infrastructure sources table does not keep a stale active row until the next failed poll.
- Keep `internal/api/session_store.go` on a fail-closed auth-persistence boundary: persisted OIDC refresh tokens may only round-trip through encrypted-at-rest session payloads, and any missing-crypto or invalid-ciphertext path must drop the token instead of preserving plaintext-at-rest session state.
- Keep tenant AI handler wiring on canonical provider ownership: `internal/api/ai_handlers.go` may wire tenant `ReadState` and tenant-scoped unified-resource providers into AI services, but it must not revive tenant snapshot-provider bridges once Patrol can initialize and verify from those canonical providers directly.
- Keep Patrol status transport semantics explicit in that same AI handler layer: the Patrol status endpoint must carry machine-readable runtime availability such as blocked, running, disabled, active, or unavailable rather than asking frontend consumers to infer operator state from stale summaries or run history.
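The machine-readable runtime-availability rule above can be sketched from the consumer side. This is a minimal illustration, assuming a payload shape built from the `runtime_state` vocabulary the contract names; the exact wire schema is not confirmed here.

```typescript
// Hypothetical Patrol status payload; field names follow the contract's
// `runtime_state` / `last_patrol_at` / `last_activity_at` vocabulary, and
// the full schema is an assumption for illustration.
type PatrolRuntimeState = "blocked" | "running" | "disabled" | "active" | "unavailable";

interface PatrolStatusPayload {
  runtime_state: PatrolRuntimeState;
  last_patrol_at?: string;   // completed full patrols only
  last_activity_at?: string; // any Patrol activity
}

// Consumers branch on the machine-readable state instead of inferring
// operator state from summary freshness or run history.
function describeRuntime(status: PatrolStatusPayload): string {
  switch (status.runtime_state) {
    case "blocked":
      return "Patrol is blocked and cannot run";
    case "running":
      return "Patrol is running now";
    case "disabled":
      return "Patrol is turned off";
    case "active":
      return "Patrol is active and idle";
    case "unavailable":
      return "Patrol runtime is unavailable";
  }
}
```

Because the switch is exhaustive over the transport union, adding a new runtime state to the payload type forces every consumer branch to be revisited at compile time.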
- Keep Patrol quickstart transport semantics explicit as well: zero remaining quickstart credits are inventory data, not a standalone runtime-state override, so frontend consumers may only present the exhausted quickstart warning when the payload still reports `using_quickstart` or a runtime state that is blocked by quickstart exhaustion.
- Keep Patrol intelligence summary transport semantics single-voiced: the canonical overall-health payload and Patrol run-history payload together must support one primary assessment plus one explicit verification explanation, and frontend consumers must not need to derive a second compact assessment or verification verdict row from the same payloads beneath the primary summary card.
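The quickstart-exhaustion gate above reduces to a small predicate. The sketch below assumes hypothetical field names beyond the contract's `using_quickstart` and `quickstart_credits_remaining` (in particular the block-reason string), so treat it as an illustration of the rule, not the real schema.

```typescript
// Hypothetical status slice; `runtime_block_reason` and its
// "quickstart_exhausted" value are assumptions for illustration.
interface QuickstartStatus {
  using_quickstart: boolean;
  quickstart_credits_remaining: number;
  runtime_state: string;
  runtime_block_reason?: string;
}

// Zero credits alone never triggers the warning; the payload must still
// name quickstart as the active source, or name exhaustion as the reason
// the runtime is blocked.
function showExhaustedQuickstartWarning(s: QuickstartStatus): boolean {
  if (s.quickstart_credits_remaining > 0) return false;
  return (
    s.using_quickstart ||
    (s.runtime_state === "blocked" &&
      s.runtime_block_reason === "quickstart_exhausted")
  );
}
```

This keeps the exhausted-credits banner off screens where a provider-backed path is active and the credit counter is merely stale inventory.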
- Keep Pulse Mobile relay credential minting and permission ownership on the backend: `internal/api/router_routes_auth_security.go`, `internal/api/security_tokens.go`, `internal/api/auth.go`, `internal/api/relay_mobile_capability.go`, `internal/api/router_routes_ai_relay.go`, and `frontend-modern/src/api/security.ts` may expose the canonical mobile runtime token creator and governed route gates, but browser callers must only consume that route and must not define the mobile runtime scope, compatibility gate list, route inventory, or token-purpose metadata locally.
- Keep hosted tenant browser-session precedence on the shared auth boundary: `internal/api/auth.go`, `internal/api/contract_test.go`, and hosted tenant callers must treat a valid `pulse_session` as authoritative before any API-only token fallback or no-local-auth anonymous fallback, so cloud handoff can continue into protected hosted routes without flattening the operator back to `anonymous` or forcing a browser session through bearer-token-only mode after the tenant has minted API tokens. That same shared auth boundary also owns hosted handoff authorization. `internal/api/cloud_handoff.go`, `internal/api/cloud_handoff_handlers.go`, and hosted tenant callers must derive the effective tenant role from pre-existing server-side org membership only, rather than trusting the handoff JWT to append missing members, repair org metadata, or upgrade roles on arrival. Handoff may mint a browser session only when the tenant org already contains the account as the owner or a member with a valid stored role, and tenant orgs with a blank `OwnerUserID` must fail closed instead of being claimed by the first owner-shaped handoff token.
- Keep tenant settings-scope authorization aligned with org management: `internal/api/security_setup_fix.go`, `internal/api/contract_test.go`, and settings-bound hosted callers must allow the current non-default org owner/admin membership to exercise privileged tenant routes, rather than requiring a separate configured local admin identity after hosted handoff. Hosted handoff must not be treated as an org-management side effect for that same privilege boundary. Only canonical invitation, membership-management, or explicit owner-transfer flows may create tenant membership or change the stored owner/admin role. Shared auth routes and downstream settings consumers must treat handoff role claims as bounded by the server-owned membership record, never as authority to elevate tenant privileges. That same org-management transport now owns explicit acceptance for new self-hosted membership as well. `internal/api/org_handlers.go`, `frontend-modern/src/api/orgs.ts`, and `internal/api/contract_test.go` must keep new-user adds on the canonical pending-invitation payload (kind: `"invitation"`) plus current-user accept/decline routes, rather than binding an arbitrary username directly into `org.Members`. Immediate role mutation remains valid only for already-accepted members, and owner transfer must fail closed unless the target user is already a stored member. The same permanent-control boundary also requires a fresh browser session for owner transfer: `internal/api/auth.go`, `internal/api/session_store.go`, and `internal/api/org_handlers.go` must reject transfer attempts unless the request carries the bound `pulse_session` cookie for the acting owner and that session was minted recently enough to represent an explicit re-auth, rather than letting any long-lived hijacked session permanently reassign org ownership. That same shared auth boundary also owns pre-auth local setup and recovery containment. When no authentication is configured, anonymous fallback and bootstrap quick setup may run only on direct loopback, recovery tokens must bind to the generating client IP, and recovery may mint only a browser-bound localhost session rather than a shared filesystem toggle that disables auth for every loopback client. The same shared auth boundary also owns release-build admin bypass gating. `internal/api/auth.go` may keep `ALLOW_ADMIN_BYPASS` for non-release development workflows, but release builds must compile that env override out entirely instead of reading it and deciding at runtime whether to honor or ignore it.
- Keep mobile onboarding payload reads aligned with the server-owned relay-mobile credential:
`internal/api/router_routes_ai_relay.go`, `internal/api/onboarding_handlers.go`, and `internal/api/contract_test.go` must allow the dedicated `relay:mobile:access` scope to reach the governed QR, deep-link, and connection-validation payloads without reintroducing a broader `settings:read` requirement for token-authenticated pairing clients. That same shared relay/runtime boundary also owns hostname target equivalence for agent command routing. `internal/api/router_routes_ai_relay.go` and `internal/api/contract_test.go` may match a short host against the canonical connected-agent FQDN, but they must do so through `internal/unifiedresources/hostname_equivalence.go` and must not collapse distinct FQDNs that merely share the same short hostname into one API target. When an explicit `targetHost` misses that canonical match, the shared relay adapter must keep the result empty instead of silently falling back to the lone connected agent. That same shared runtime-token boundary also owns agent-exec binding. `internal/api/deploy_handlers.go`, `internal/api/router.go`, and `internal/api/contract_test.go` must mint agent-exec-capable runtime tokens with a server-owned `bound_agent_id` and reject websocket registration when the token is missing binding metadata or names a different agent. That same websocket admission path must also cap concurrent connections per client IP before upgrade so one source cannot hold unbounded agent-exec sockets open. Legacy `bound_hostname` metadata may be normalized only as compatibility input into that same canonical `agent-<hostname>` binding, and unbound agent-exec tokens must fail closed instead of being treated as global command authority.
- Keep hosted billing-state quickstart payload fields on the shared API contract: `internal/api/hosted_entitlement_refresh.go`, `internal/api/subscription_state_handlers.go`, and `internal/api/contract_test.go` must preserve `quickstart_credits_granted`, `quickstart_credits_used`, and `quickstart_credits_granted_at` through hosted signup, hosted lease refresh, and billing-state reads instead of letting lease rewrites silently erase seeded quickstart inventory.
- Keep hosted AI settings bootstrap on the shared API contract: `internal/api/ai_hosted_runtime.go`, `internal/api/ai_handlers.go`, `internal/api/ai_handler.go`, and `internal/api/contract_test.go` must treat a missing `ai.enc` in hosted mode as a canonical bootstrap condition, persist one machine-owned quickstart-backed AI config with the Pulse-owned alias `quickstart:pulse-hosted` when hosted entitlements grant AI capability, and preserve that configured settings payload as the same public contract that Chat, Patrol, and AI Settings consume instead of embedding a third-party model ID in the transport contract. That same hosted bootstrap surface must also preserve the secure quickstart-identity contract: hosted or trial-backed AI settings reads and enablement may bootstrap Patrol quickstart from the effective signed entitlement lease when no self-hosted installation token exists, but they must not fabricate installation-scoped activation state or anonymous client identity to satisfy `/v1/quickstart/bootstrap`.
- Keep post-boot AI enablement contract-backed on the shared AI/mobile approval surface: `internal/api/ai_handler.go`, `internal/api/ai_handlers.go`, `internal/api/router_routes_ai_relay.go`, and `internal/api/contract_test.go` must turn the governed approvals-list API into the canonical empty-list payload as soon as settings-driven AI enablement succeeds, rather than leaving that surface on `503 Approval store not initialized` until some separate startup-only side effect happens.
- Keep infrastructure summary chart transport contract-backed on the shared API surface:
`internal/api/router.go`, `internal/api/contract_test.go`, and frontend infrastructure summary consumers must normalize long-range mixed-cadence history into equal-time summary buckets before shipping the infrastructure charts API payload, so 7-day and 30-day summary cards do not expose compressed right-edge tails just because recent samples arrive at a finer storage resolution.
- Keep long-range workload chart transport time-proportional on the shared API surface: `internal/api/router.go`, `internal/api/contract_test.go`, and workload chart consumers must cap mixed-cadence workload history by equal-time buckets rather than raw point index for the per-workload and aggregate workload chart APIs, so 7-day and 30-day workload cards do not bunch recent samples at the right edge just because recent telemetry is stored more densely.
- Keep chart timestamp precision canonical on that same shared API surface: when `internal/api/router.go` serializes monitoring history into infrastructure or workload chart payloads, it must preserve canonical millisecond timestamps from the shared monitoring timeline instead of rounding through whole-second conversion, so seeded mock history and live appends collapse onto one operator-visible timeline instead of appearing as duplicated tail samples.
- Keep storage chart identity canonical on that same shared API surface: the shared storage charts endpoint must key pool and physical-disk series by the resolved unified-resource `MetricsTarget.ResourceID`, not by canonical resource IDs or page-local aliases, so storage rows, focused summary cards, sticky summary shells, and detail charts all address the same history series in live and mock mode.
- Keep synthetic summary-chart fallback identity canonical on that same shared API surface: when `internal/api/router.go` has to synthesize mock summary history for infrastructure, workloads, or storage cards, it must derive the fallback from canonical `resourceType`, `resourceID`, and `metricType` ownership instead of raw min/max seed-prefix helpers, so range changes and runtime mock updates stay on one governed timeline. The same compact chart boundary also owns aggregate-only storage summary transport. `/api/charts/storage-summary` may batch only the canonical `used` and `avail` storage series required for the aggregate capacity sparkline, and it must not regress into the full per-pool storage payload or a fetch-all-metrics backend path just because the storage page carries a broader chart surface. When mock mode is active, that same endpoint must come from the monitor-owned aggregate summary cache rather than rehydrating each pool chart on request.
- Keep workload-chart response identity canonical on that same shared API surface:
`internal/api/router.go`, `internal/api/contract_test.go`, and workload summary consumers must emit provider-backed VM and system-container series under the same canonical workload IDs that workloads page rows use, while resolving history through the unified `MetricsTarget.ResourceID`, so hover and focus selection do not fall off for provider-backed rows. Kubernetes pod workload rows follow that same contract through their metrics target. `/api/resources` may expose pod history only through the unified `MetricsTarget.ResourceID`, but that target must be the canonical prefixed runtime key `k8s:<cluster>:pod:<uid>` and not the bare source pod ID, so pod workload rows and pod chart payloads stay on one history series.
- Keep the hosted account portal bootstrap intelligible without duplicate chrome. `internal/cloudcp/portal/page.go`, the maintained portal frontend bundle, and the shared portal styles may refine layout density, but the account/billing shell must remain understandable from the primary header, section title, and factual body content alone instead of depending on a second context-chip strip to restate the same scope.
- Keep storage wire metadata lossless across shared API payload types. `frontend-modern/src/types/api.ts` must continue to expose provider-backed storage metadata such as Proxmox `pool` and `zfsPool` fields when the backend emits them, instead of silently dropping that detail from the shared runtime contract.
- Keep hosted entitlement refresh ownership on the same governed API contract as hosted status and entitlements reads. `internal/api/licensing_handlers.go`, `internal/api/hosted_entitlement_refresh.go`, and `internal/api/contract_test.go` must resolve the effective hosted billing target before refresh, persistence, and evaluator rewiring, so tenant-scoped hosted routes cannot refresh against an empty non-default org while the machine's real hosted lease still lives on `default`.
- Keep public demo bootstrap posture on the shared security-status contract.
`internal/api/router_routes_auth_security.go`, `internal/api/security_status_capabilities.go`, frontend security-status consumers, and shared demo-mode stores must treat `/api/security/status` `.sessionCapabilities.demoMode` as the canonical browser bootstrap signal for public demo posture instead of asking frontend callers to infer demo state from response headers, `/api/health` probes, or hostname heuristics. Shared browser stores that consume Patrol approvals must also fail closed from that resolved demo policy at the store boundary, so public demo shells do not probe `/api/ai/approvals` or `/api/ai/remediation/plans` after the read-only demo posture is already known.
- Keep public demo commercial posture middleware-owned on that same shared API contract. `internal/api/demo_middleware.go`, `internal/api/demo_mode_commercial.go`, `internal/api/subscription_entitlements.go`, and `internal/api/contract_test.go` must classify commercial routes centrally as either hidden (404) or runtime-safe. Public demo browsers may read the non-commercial `/api/license/runtime-capabilities` contract for feature truth, while `/api/license/commercial-posture`, `/api/license/entitlements`, and `/auth/license-purchase-start` stay hidden. Upgrade prompts, trial nudges, monitored-system migration guidance, usage counts, billing identity, and plan metadata must therefore not depend on hidden commercial routes surviving the public demo boundary.
- Keep the storage summary route in
`internal/api/router.go` as the canonical storage summary contract across dashboard and storage consumers. `internal/api/router.go`, `internal/api/contract_test.go`, and shared frontend consumers must expose pooled storage history through one response keyed by canonical metrics-target IDs, preserve millisecond chart timestamps, and avoid reconstructing storage summary behavior from per-pool `/api/metrics-store/history` fan-out.
- Keep infrastructure summary metric filtering canonical on that same shared API surface. `frontend-modern/src/api/charts.ts`, `internal/api/router_routes_monitoring.go`, `internal/api/router.go`, `internal/api/types.go`, and `internal/api/contract_test.go` must route optional infrastructure-summary `metrics` filters through one governed transport contract, so dashboard-specific consumers can request only CPU and memory without inventing a second summary endpoint or silently widening back to disk/network payloads. The same contract must carry those requested metric filters through the shared guest-chart batch loader in `internal/monitoring/monitor_metrics.go` instead of fetching the full guest metric set and trimming after the API payload is already assembled.
- Keep the compact dashboard overview route canonical on that same shared API surface.
`internal/api/resources.go`, `internal/api/router_routes_monitoring.go`, `frontend-modern/src/api/resources.ts`, `frontend-modern/src/hooks/useDashboardOverview.ts`, and frontend dashboard consumers must route KPI cards, problem-resource rows, governed resource labels, top-infrastructure identity, and canonical metrics-target join keys through `/api/resources/dashboard-summary` instead of reconstructing that shell from the paginated `/api/resources` list payload or guessing how dashboard trend identities map onto infrastructure chart series.
- Keep mock and demo chart reads on the same canonical unified snapshot as the rest of the API surface. `internal/api/router.go`, `internal/api/contract_test.go`, and chart consumers must route `/api/charts`, `/api/charts/infrastructure`, and `/api/storage-charts` through `GetUnifiedReadStateOrSnapshot()` whenever mock or demo presentation is active, so VMware, storage, and infrastructure series stay aligned with `/api/resources` and `/api/state` instead of drifting onto the live store-backed graph.
- Route the unified connections ledger and address probe through
`internal/api/connections_types.go`, `internal/api/connections_aggregator.go`, `internal/api/connections_handlers.go`, `internal/api/connections_probe.go`, and `frontend-modern/src/api/connections.ts` together so `GET /api/connections` and `POST /api/connections/probe` stay on one canonical payload shape instead of re-deriving state from per-type config stores in the frontend. State must remain a derived field sourced from in-memory scheduler health (`monitoring.Monitor.SchedulerHealth()`) plus `agentHost.LastSeen`; the endpoint must not introduce new persisted per-connection state. The probe endpoint must remain admin-gated (`RequireAdmin` + `ScopeSettingsWrite`) to block unauthenticated SSRF against internal hosts. That same probe path must also validate user-supplied addresses before probing, reject metadata, link-local, multicast, and unspecified destinations, and pin each outbound dial to the first permitted resolved IP so DNS rebinding cannot swap the target between validation and connect time. That same `/api/connections` payload now also owns the additive `systems[]` grouping contract for the infrastructure settings source manager. Those grouped rows must stay source-oriented and backend-authored: one primary source row may carry attached collection methods such as a linked Pulse Agent, but attached methods must not be emitted as duplicate peer rows when backend ownership can prove they augment the same source. When the owning source is a Proxmox cluster, that same backend-authored system payload must also carry the canonical cluster identity so the frontend can label the row by cluster moniker instead of by one endpoint node's hostname. That grouped payload must also carry the backend-authored cluster member collection with node identity, endpoint, node-local status, and any linked agent connection id so the frontend can render child node composition without reverse-engineering it from standalone agent rows. Those member records, plus any primary or attached connection row that represents the same host, must also carry canonical host aliases when the backend knows them, so discovery and settings surfaces can reconcile hostname-only and IP-only views of the same enrolled machine instead of showing a second "discovered" candidate row for an already represented source member or API-plus-agent source row. Agent-backed connections also own canonical version/update facts on that same payload: when a source or attachment is backed by Pulse Agent, `/api/connections` carries the installed agent version, the current server-side target agent version when it is meaningful, and whether an update is available, so settings surfaces do not invent frontend-local version comparison rules. That same shared contract also carries compact `agentIdentity` facts on agent-backed connections, including the reported hostname, report IP, platform/OS, kernel, architecture, and command capability, so settings surfaces can render recognizable standalone-host identity without a second inventory fetch or frontend-local host reconciliation rules.
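The host-alias reconciliation described above can be sketched as a small frontend check. The types below are illustrative only: apart from `systems[]`, `agentIdentity`, and the version/update facts named in the contract, every field name here is an assumption, not the confirmed wire schema.

```typescript
// Illustrative-only shapes for the grouped connections payload.
interface ClusterMember {
  nodeName: string;
  endpoint?: string;
  status: string; // node-local status from the backend
  linkedAgentConnectionId?: string;
  hostAliases?: string[]; // canonical aliases (hostname and IP forms)
}

// Reconcile a discovered candidate (hostname or IP) against the canonical
// aliases the backend already published, so an enrolled member does not
// reappear as a second "discovered" row.
function isAlreadyRepresented(candidate: string, members: ClusterMember[]): boolean {
  const needle = candidate.trim().toLowerCase();
  return members.some((m) =>
    (m.hostAliases ?? []).some((alias) => alias.toLowerCase() === needle)
  );
}
```

The key design point from the contract: the alias list is backend-authored, so the frontend only performs an exact (case-insensitive) match and never invents its own hostname/IP equivalence rules.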
Forbidden Paths
- Handler-local payload shape drift without a contract test
- Untracked compatibility aliases becoming permanent runtime contracts
- Frontend-only payload assumptions that are not owned in backend contracts
- Frontend API clients inferring canonical HTTP status from `Error.message` text
- Frontend API clients branching on raw `response.status` checks for governed status handling instead of the shared response-status helpers
- Frontend API clients parsing governed success or stream payloads with raw `response.json()`, ad hoc `response.text()` + `JSON.parse(...)`, or per-module `JSON.parse(...)` stream decoding instead of the shared response parsing helpers
- Frontend API clients normalizing nullable or legacy collection payloads with module-local `|| []`, `?? []`, or ad hoc `Array.isArray(...)` fallbacks instead of shared collection helpers
- Frontend API clients swallowing non-not-found API failures behind broad `catch { return null; }` fallbacks instead of routing only canonical `404` cases through explicit status checks
- Frontend API clients coercing governed backend payload fields through module-local scalar helper stacks instead of shared scalar coercion helpers
- Frontend API clients normalizing governed structured error payloads through module-local helper functions instead of shared error normalization helpers
- Frontend API clients open-coding parsed non-OK response throwing with `throw new Error(await readAPIErrorMessage(...))` instead of the shared response assertion helper
- Frontend API clients open-coding governed `assertAPIResponseOK(...); parseRequiredJSON(...)` or `parseOptionalJSON(...)` tandems instead of shared response pipeline helpers
- Frontend API clients open-coding governed `404 => null` response branches for resource lookups instead of shared missing-resource response helpers
- Agent and guest metadata clients duplicating the same CRUD transport logic instead of using one shared metadata client
- AI stream clients duplicating SSE reader, timeout, chunk-splitting, and JSON event parsing loops instead of using one shared stream consumer
- Monitoring delete and idempotent mutate clients open-coding `404`/`204` allowed-status branches instead of using canonical shared allowed-status helpers
- Governed frontend API clients open-coding `if (!response.ok) { if (isAPIResponseStatus(...)) throw new Error(...) }` status-to-user-message branches instead of using canonical shared custom-status error helpers
- Monitoring command-trigger clients open-coding `parseOptionalAPIResponse(response, { success: true }, ...)` success-envelope fallbacks instead of using a canonical shared success-envelope helper
- Governed frontend API clients open-coding `try/catch` wrappers around `apiFetchJSON(...)` just to map `402` or `404` into `[]`, `{ plans: [] }`, or `null` instead of using canonical shared API-error-status fallback helpers
- Backend config/settings handlers pointing operator guidance at GitHub `main` docs when the running build already ships that guidance locally under `/docs/`
- Telemetry preview or reset endpoints drifting from the exact server-owned telemetry runtime contract instead of reusing the same source-of-truth snapshot and install-ID state the background sender uses
- Shared SSO test or metadata-preview handlers open-coding outbound metadata/discovery URLs, allowing userinfo-bearing HTTP(S) inputs, or rebuilding `/.well-known/openid-configuration` with origin-root string concatenation instead of the shared validated URL helpers before any outbound request
- AI settings handlers echoing raw provider secrets or testing the wrong provider model: `/api/settings/ai` may expose masked provider-auth presence such as `ollama_password_set`, but backend payloads must never echo stored secrets back to clients, and provider-specific test routes must stay bound to the selected provider's own configured model instead of whichever other provider currently owns the default `model` field
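The shared-helper discipline the list above mandates can be sketched as one small pipeline. The helper names echo the ones mentioned in the list (`readAPIErrorMessage`, `assertAPIResponseOK`, `parseRequiredJSON`, `parseOptionalJSON`), but these signatures and bodies are illustrative assumptions, not the project's actual implementations.

```typescript
// Prefer a structured error payload; fall back to raw text, then status.
async function readAPIErrorMessage(response: Response): Promise<string> {
  try {
    const body = await response.clone().json();
    if (body && typeof body.error === "string") return body.error;
  } catch {
    // body was not JSON; fall through to the text fallback
  }
  return (await response.text()) || `HTTP ${response.status}`;
}

// The single place a non-OK response becomes a thrown error.
async function assertAPIResponseOK(response: Response): Promise<void> {
  if (!response.ok) throw new Error(await readAPIErrorMessage(response));
}

// One pipeline helper replaces open-coded assert+parse tandems in callers.
async function parseRequiredJSON<T>(response: Response): Promise<T> {
  await assertAPIResponseOK(response);
  return (await response.json()) as T;
}

// Missing-resource lookups route only canonical 404 through the null branch;
// every other failure still surfaces as an error instead of being swallowed.
async function parseOptionalJSON<T>(response: Response): Promise<T | null> {
  if (response.status === 404) return null;
  return parseRequiredJSON<T>(response);
}
```

With helpers of this shape centralized, a module-local `catch { return null; }` or raw `response.json()` call becomes a reviewable contract violation rather than an invisible behavioral fork.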
Completion Obligations
- Update contract tests when payloads change
- Update frontend API types in the same slice
- Route runtime changes through the explicit API-contract proof policies in `registry.json`; default fallback proof routing is not allowed
- Update this contract when canonical payload ownership changes
- Keep `/api/resources` policy metadata aligned across backend payload tests and canonical frontend resource consumers whenever sensitivity or routing fields change
- Keep Patrol status payloads explicit enough that the frontend can present blocked runtime state without treating a previously healthy summary snapshot as current runtime truth, and keep the following rules explicit on that same Patrol status surface:
  - the Patrol recency semantics, reserving `last_patrol_at` for completed full patrols while exposing any Patrol activity separately through `last_activity_at` and the scoped-trigger status payload, so queued scoped work, busy-mode state, and per-source enablement (`alert` versus `anomaly`) stay transport-backed instead of being inferred by page-local heuristics
  - the split Patrol trigger settings contract, so `patrol_alert_triggers_enabled` and `patrol_anomaly_triggers_enabled` are the canonical AI settings fields while legacy `patrol_event_triggers_enabled` remains a compatibility aggregate rather than the primary control surface
  - the server-authoritative quickstart contract, so `/api/settings/ai` and `/api/patrol/status` keep `quickstart_credits_remaining`, `quickstart_credits_total`, and `using_quickstart` as canonical transport fields sourced from the latest quickstart bootstrap or proxy response rather than from local grant counters, and shared handlers must not invent client-authored commercial identity or synthetic credits when the quickstart server is unavailable
  - the activation-gated availability rule, so missing installation activation/trial identity must surface as the canonical activation-required quickstart block reason for Patrol and AI settings enablement rather than silently attempting anonymous bootstrap
  - the Pulse-owned hosted model alias rule, so persisted legacy hosted quickstart model IDs such as `quickstart:minimax-2.5m` are rewritten to `quickstart:pulse-hosted` before `/api/settings/ai` responds, instead of leaking stale vendor identifiers back into the governed payload contract for model, chat, patrol, discovery, or auto-fix fields
  - the AI settings blocked-reason contract, so `/api/settings/ai` must expose `quickstart_blocked_reason` when quickstart cannot currently enable Patrol and must clear that field when a provider-backed path is active or quickstart is genuinely usable
  - the public interpretation rule, so those fields describe Patrol-only quickstart inventory and active runtime source on activated or trial-backed installs rather than a generic hosted AI quota, anonymous Community entitlement, or full-chat entitlement
  - the Patrol execution billing rule, so shared runtime bridges such as `internal/api/chat_service_adapter.go` must preserve the stable Patrol execution identifier that the hosted quickstart contract uses to charge once per higher-level Patrol run rather than once per internal provider turn
- Keep Patrol summary payload consumers aligned on one assessment hierarchy: transport-driven Patrol summary surfaces may show supporting counts and outcomes, but the canonical assessment and verification states must remain singular and not be repeated as a second compact verdict strip
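The one-assessment hierarchy above can be sketched from the consumer side: count cards stay numeric readouts, and the only labeled verdict comes verbatim from transport. The field names below are assumptions for illustration, not the confirmed summary schema.

```typescript
// Hypothetical summary payload slice; names are illustrative.
interface PatrolSummaryPayload {
  assessment: string; // the single canonical assessment from transport
  activeFindings: number;
  criticals: number;
  warnings: number;
  fixes: number;
}

interface SummaryView {
  primaryAssessment: string;
  countCards: { label: string; value: number }[];
}

function buildSummaryView(p: PatrolSummaryPayload): SummaryView {
  return {
    // One primary assessment, taken verbatim from the payload; never a
    // second synthesized label such as "Issues detected" derived locally
    // from the counts.
    primaryAssessment: p.assessment,
    countCards: [
      { label: "Active findings", value: p.activeFindings },
      { label: "Criticals", value: p.criticals },
      { label: "Warnings", value: p.warnings },
      { label: "Fixes", value: p.fixes },
    ],
  };
}
```

Keeping the view a pure projection of the payload makes the "no second verdict strip" rule mechanically checkable: any string in the view that is not a fixed card label must have come from transport.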
- Keep Patrol verification and activity facts unified on one transport-backed secondary status area: when frontend consumers combine Patrol status payloads (`runtime_state`, `last_patrol_at`, `last_activity_at`, `trigger_status`) with run-history transport, the latest run result, activity mix, scoped-trigger state, and circuit-breaker context must read as one supporting explanation beneath the primary assessment instead of being re-expanded into a separate full-width status strip plus duplicate summary layers. The same boundary also covers:
  - the main Patrol page composition boundary, so once that governed secondary area exists inside the summary shell the same payloads must not also drive a second page-level status strip elsewhere on the route
  - the Patrol supporting-context disclosure rule, so recent changes, learned correlations, and policy coverage stay secondary explanatory context that opens only when degraded verification, active findings, or selected-run investigation makes that evidence relevant instead of advertising a parallel Patrol workflow on otherwise healthy fully verified states; the disclosure copy must explicitly tell operators that findings and run history are the Patrol verification evidence while those supporting cards only add explanation from the same governed payload family, and the Patrol-owned helper `frontend-modern/src/features/patrol/patrolSupportingContextPresentation.ts` must keep that transport-derived trust copy aligned across the workspace disclosure rather than letting page shells invent local wording
- Keep AI settings setup transport vendor-neutral: `/api/settings/ai/update` must accept provider credentials or base URLs without a baked vendor model ID, resolve the effective BYOK `model` through the canonical runtime provider-catalog policy, and return that resolved model on the same shared `/api/settings/ai` payload instead of depending on frontend-supplied model defaults.
- Treat Patrol summary supporting metrics as readouts, not reinterpretations: when frontend consumers derive cards such as active findings, criticals, warnings, or fixes from the canonical payloads, those cards must stay numeric and must not synthesize new assessment labels like "Issues detected" or verification labels like "Partial verification" beneath the primary summary contract
- Treat active Patrol runtime transport as compatible with factual activity surfaces: when the runtime is currently running, frontend consumers may surface in-progress activity context, but they must not replace the activity strip with a second assessment verdict derived from runtime state alone
- Treat Patrol recency as a singular transport-driven fact: once header metadata, verification copy, or the findings footer already present the governed Patrol timing context, frontend summary consumers must not derive an extra timing pill from the same payloads inside the primary summary card
- Treat Patrol findings counts as a singular supporting surface as well: when the summary shell already exposes count cards for active findings, warnings, criticals, and fixes, the primary assessment card must not repeat those same payload-derived counts as secondary badges
- Treat Patrol schedule and recency as header-owned metadata on the main Patrol page: findings empty-state consumers should not receive or restate `next_patrol_at`, `last_patrol_at`, `last_activity_at`, or interval timing once those transport fields are already presented by the primary header and verification shell
- Keep recovery payload filters canonical across `/api/recovery/rollups`, `/api/recovery/points`, `/api/recovery/series`, and `/api/recovery/facets`: when `internal/api/recovery_handlers.go` adds a governed recovery filter or display field such as provider-neutral `itemType`, the same normalized transport must land across all four endpoints and the contract tests must pin both outbound payload shape and accepted query aliases in the same slice
- Keep recovery platform-query vocabulary canonical across that same `/api/recovery/*` surface: operator-facing transport must emit `platform` as the canonical query field, accepted legacy `provider` aliases must remain compatibility-only input, and `internal/api/contract_test.go` must pin that fallback behavior in the same slice as any handler change
- Keep recovery payload platform vocabulary canonical across that same `/api/recovery/*` surface: point payloads must expose `platform`, rollup payloads must expose `platforms`, and any compatibility `provider`/`providers` aliases must remain secondary fallback fields rather than replacing the shared response model
- Keep recovery linked-resource vocabulary canonical across that same `/api/recovery/*` surface: points and rollups must expose `itemResourceId` as the canonical linked-resource field, accepted legacy `subjectResourceId` aliases must remain compatibility-only input or secondary payload fields, and the shared proof surface must pin that normalization in the same slice as any handler change
- Keep recovery external item-reference vocabulary canonical across that same `/api/recovery/*` surface: point and rollup payloads must expose `itemRef` as the canonical external item-reference field, accepted legacy `subjectRef` aliases must remain compatibility-only secondary payload fields, and the shared proof surface must pin that normalization in the same slice as any handler change
- Keep first-host lookup completion explicit on the shared install-state API boundary: when `frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx` receives a successful connected-agent lookup result, the canonical install flow must expose direct navigation into `/dashboard` and `/settings/infrastructure/operations` rather than leaving the operator on a transport-only status readout.
- Keep the shared first-host detection contract explicit on `/api/state` as used by `frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx`: the canonical `connectedInfrastructure` projection must stay suitable for detecting the first active reporting system during install so brand-new operators can receive the first success handoff without typing a hostname or agent ID.
- Keep the shared first-run install-token transport explicit on `/api/security/tokens` as used by `frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx`: once quick setup has produced the setup handoff credentials, the canonical token-creation contract must remain usable immediately from the install workspace so the first-host flow can auto-create the scoped install token without forcing the operator through a second manual token-generation step. Any downloaded first-run handoff instructions emitted by that same shared install-state surface must describe that prepared token path consistently with the live runtime behavior rather than directing the operator to create another install token manually.
- Keep connected-infrastructure surface vocabulary canonical across the shared `/api/state` and reporting/install consumers: `frontend-modern/src/types/api.ts` must treat `truenas` as a first-class connected-infrastructure surface kind, and connected-infrastructure consumers such as `frontend-modern/src/components/Settings/infrastructureOperationsModel.tsx` together with `frontend-modern/src/components/Settings/useConnectionsLedger.ts` and `frontend-modern/src/components/Settings/ConnectionsTable.tsx` must preserve the transport distinction between machine-managed surfaces (`agent`, `docker`, `kubernetes`) and platform-connections-managed surfaces (`proxmox`, `pbs`, `pmg`, `truenas`) instead of collapsing them into one uninstall/stop-monitoring model. That same shared payload contract must also preserve guest-linked host identity on connected infrastructure and removed-host records through `linkedVmId` and `linkedContainerId`, so settings consumers can keep the top connections ledger scoped to top-level infrastructure without re-deriving guest status from names or local heuristics.
- Keep AI settings payload continuity explicit on the shared `/api/settings/ai` surface: `internal/api/ai_handlers.go` and `internal/api/contract_test.go` must expose masked provider-auth state such as `ollama_username` and `ollama_password_set` without echoing raw stored secrets, and the same backend contract must keep provider test routes bound to the selected provider's configured model instead of whichever other provider currently owns the default `model` field.
- Keep shared AI runtime reads centralized on that same governed contract: `frontend-modern/src/stores/aiRuntimeState.ts` is the canonical frontend read owner for `/api/settings/ai` and `/api/ai/models`. AI-owned consumers such as `frontend-modern/src/features/patrol/usePatrolIntelligenceState.ts`, `frontend-modern/src/components/AI/Chat/index.tsx`, and `frontend-modern/src/components/AI/AICostDashboard.tsx` must reuse that shared store for read-side runtime truth, while `frontend-modern/src/components/Settings/useAISettingsState.ts` remains the write-side settings owner. Non-AI settings surfaces such as `frontend-modern/src/components/Settings/useAgentProfilesPanelState.ts` must not probe `/api/settings/ai` just to gate assistant affordances. AI-owned refresh actions may still force a shared reload or sync that store after an owned settings mutation, but they must not reintroduce page-local mount loops that fetch `/api/settings/ai` or `/api/ai/models` separately for chat, Patrol, and cost/budget views.
- Keep API-backed first-target onboarding canonical on that same shared infrastructure-settings boundary: `frontend-modern/src/components/Settings/infrastructureOperationsModel.tsx`, `frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx`, `frontend-modern/src/components/Settings/InfrastructureInstallerSection.tsx`, `frontend-modern/src/components/Settings/InfrastructureWorkspace.tsx`, and `frontend-modern/src/components/SetupWizard/SetupCompletionPanel.tsx` must present TrueNAS and other API-backed platforms as Platform connections-first onboarding rather than as dedicated unified-agent install profiles. The shared host-install contract may guide operators through the first agent-managed host, but alternate CTAs and setup-completion guidance must route API-backed first systems through the canonical infrastructure onboarding contract at `/settings/infrastructure?add=pick`, while agent-managed first hosts use `/settings/infrastructure?add=agent`. The infrastructure workspace may consume those onboarding query params and normalize the browser back to `/settings/infrastructure`, but first-run callers must not fall back to the retired `/settings/infrastructure/install` or `/settings/infrastructure/platforms` deep links.
- Keep shared install-script fallback transport pinned to published release lineage. `internal/api/unified_agent.go` and `internal/api/contract_test.go` must only map stable tags or explicit RC prerelease tags without build metadata to GitHub install-script release assets; dev prereleases such as `v6.0.0-dev`, git-described `+git...` builds, and other unpublished prerelease identifiers must fail closed on that API boundary instead of generating fake release URLs from a local runtime version string.
- Keep local trial-start transport explicit on the shared commercial API boundary: `/api/license/trial/start` must preserve the hosted-signup redirect contract as `409 trial_signup_required` during the allowed retry burst, then return `429 trial_rate_limited` with the actual remaining backoff in both `Retry-After` and `details.retry_after_seconds` once the burst is exceeded. Hosted self-serve verification failures may render owned HTML, but they must preserve originating Pulse context instead of collapsing into generic control-plane failures.
- Keep `/api/security/dev/reset-first-run` transport-backed and genuinely unauthenticated: when the dev reset route clears first-run auth it must also clear any env-backed auth state that feeds `/api/security/status`, so the status payload flips `hasAuthentication` to `false`, preserves `bootstrapTokenPath`, and allows browser-owned first-session proof to re-enter the real setup wizard instead of silently falling back to an authenticated dashboard state. That recovery transport may expose the bootstrap token file path, but it must not emit the token value into automatic runtime logs.
- Keep shared SSO test and metadata-preview transport fail-closed: SAML metadata URLs and OIDC issuer URLs must reject non-HTTP or userinfo-bearing inputs before any outbound request is attempted, and OIDC discovery must append `/.well-known/openid-configuration` beneath the configured issuer base path instead of resetting to the origin root.
- Keep config-archive import reloads fail-closed on the shared API/runtime boundary. `internal/api/config_export_import_handlers.go`, `internal/api/contract_test.go`, and adjacent config/runtime helpers must tolerate absent notification managers and other optional runtime managers after a successful import-triggered reload request, returning a controlled API outcome instead of panicking or leaving browser-visible state half rewired.
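The recovery platform-query fallback described above can be sketched as a small normalizer. This is a hypothetical illustration only; the real behavior is owned by `internal/api/recovery_handlers.go` and pinned in `internal/api/contract_test.go`:

```typescript
// Hypothetical sketch: `platform` is the canonical query field; the legacy
// `provider` alias is accepted as compatibility-only input and never wins
// when the canonical field is present.
function normalizePlatformQuery(params: URLSearchParams): string | undefined {
  const platform = params.get("platform");
  if (platform) return platform; // canonical field always wins
  const legacy = params.get("provider"); // compatibility-only fallback
  return legacy || undefined;
}
```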
Current State
useInfrastructureDiscoveryRuntimeState.ts no longer gates /api/discover
polling on a settings tab name; polling is mount-scoped. The tab guard was
removed when the infrastructure nav collapsed to one infrastructure-systems
entry.
The API layer already uses contract tests in many places, but every major live
contract should continue moving toward canonical-only runtime shapes.
That same shared internal/api/ boundary now also keeps ephemeral auth flow
state and request correlation fail-closed. OIDC authorization state storage
must cap abandoned entries and evict the earliest-expiring state before
unbounded growth, bootstrap token validation must enforce a per-client retry
limit with an explicit Retry-After contract, and incoming X-Request-ID
headers may only round-trip when they fit the bounded safe character set used
for logs and response headers.
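The bounded request-ID round-trip rule can be sketched as follows; the length cap and character set here are illustrative assumptions, not the actual limits enforced on the `internal/api/` boundary:

```typescript
// Hypothetical sketch: round-trip an incoming X-Request-ID only when it fits
// a bounded, log-safe character set; anything else is dropped.
const MAX_REQUEST_ID_LEN = 64; // assumed bound, not the real limit
const SAFE_REQUEST_ID = /^[A-Za-z0-9._-]+$/;

function safeRequestId(header: string | undefined): string | undefined {
  if (!header || header.length > MAX_REQUEST_ID_LEN) return undefined;
  return SAFE_REQUEST_ID.test(header) ? header : undefined;
}
```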
That same shared settings/licensing contract now also owns the split usage-data
payload model. frontend-modern/src/api/settings.ts,
internal/api/router_routes_licensing.go, and adjacent settings callers must
keep anonymous outbound telemetry and local-only commercial handoff events as separate
browser-visible scopes, and the telemetry preview payload must ship normalized
version identity fields (version, version_raw, version_channel,
version_build, version_is_development, and
version_is_published_release) instead of leaving browser callers to infer
published-release truth from raw build strings.
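The normalized version-identity fields above can be sketched as a payload type; the field names come from this contract, while the helper and its derivation are illustrative assumptions:

```typescript
// Field names mirror the telemetry preview contract; the interface layout
// itself is a sketch, not the generated schema.
interface TelemetryVersionIdentity {
  version: string;
  version_raw: string;
  version_channel: string;
  version_build: string;
  version_is_development: boolean;
  version_is_published_release: boolean;
}

// Illustrative consumer-side read: browser callers trust the normalized
// flags instead of parsing raw build strings.
function isPublishedRelease(v: TelemetryVersionIdentity): boolean {
  return !v.version_is_development && v.version_is_published_release;
}
```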
That same browser-transport contract now tolerates sparse admission-preview
payloads without changing the runtime truth. Patrol transport may omit
finding_ids, and infrastructure removal previews may stage optimistic rows
only after canonical IDs have been resolved or a safe row-name fallback has
been chosen. API-adjacent browser callers must not reinterpret missing IDs or
preview arrays as authoritative empty success.
Monitored-system commercial admission is now also part of that owned live
contract. Add and update routes must project prospective candidates or
previewed source records through the canonical monitored-system resolver
before persistence, and /api/license/entitlements must expose
current_available when an active monitored-system cap cannot resolve current
usage so callers can fail closed without misreading unavailable usage as a
real zero.
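The fail-closed admission read might look like the sketch below. The field shapes here (how the cap, live usage, and availability flag are encoded) are assumptions for illustration, not the real `/api/license/entitlements` schema:

```typescript
// Hypothetical payload slice: current_available signals whether live
// monitored-system usage could actually be resolved.
interface EntitlementsUsage {
  max_monitored_systems?: number; // absent => no active cap
  current?: number;               // live usage count, when resolvable
  current_available?: boolean;    // false => usage could not be resolved
}

function canAdmitSystem(u: EntitlementsUsage): boolean {
  if (u.max_monitored_systems === undefined) return true; // uncapped
  if (u.current_available === false || u.current === undefined) {
    return false; // fail closed: unavailable usage is not a real zero
  }
  return u.current < u.max_monitored_systems;
}
```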
That same current_available truth now includes supplemental-provider startup
readiness. API contracts must not serialize a live monitored-system count from
the first store-backed read-state when provider-owned inventories such as
TrueNAS or VMware have not yet completed an initial baseline and been rebuilt
into the canonical monitor store.
That same workload-chart boundary now also owns the rendered-metric budget on
the shared monitoring routes. /api/charts/workloads and
/api/charts/workloads-summary may batch provider-backed reads in parallel,
but they must request only the canonical workload metrics they actually
serialize (cpu, memory, disk, netin, netout), with Kubernetes pods
staying on that same five-metric set, instead of widening back to disk
read/write or fetch-all backend batches that the browser never renders.
The shared metrics-history contract now also owns physical-disk live I/O
windows. /api/metrics-store/history must accept resourceType=disk, keep
30m as a valid compact live range, and resolve disk, diskread,
diskwrite, and smart_temp against the canonical disk
MetricsTarget.ResourceID that unified resources already expose, instead of
leaving storage drawers or other callers to fork a disk-local history route or
invent an alternate disk identity.
That same metrics-history contract also owns Kubernetes pod identity
normalization. /api/metrics-store/history must accept legacy bare pod IDs
such as cluster-1:pod:pod-1, canonicalize them onto the unified pod metrics
target k8s:cluster-1:pod:pod-1, and keep the response resourceId on that
canonical key. When store-backed history is absent, the handler must fall back
to the same in-memory guest metrics cache that workload charts use for pods,
so demo and mock Kubernetes charts do not go blank while aggregate workload
charts still render.
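The pod-identity normalization above can be sketched directly; the function is a hypothetical illustration of the canonicalization rule, not the handler code:

```typescript
// Hypothetical sketch: legacy bare pod IDs such as "cluster-1:pod:pod-1"
// canonicalize onto the unified k8s-prefixed metrics target, and already
// canonical or non-pod IDs pass through untouched.
function canonicalPodTarget(resourceId: string): string {
  if (resourceId.startsWith("k8s:")) return resourceId; // already canonical
  const parts = resourceId.split(":");
  if (parts.length >= 3 && parts[1] === "pod") {
    return `k8s:${resourceId}`; // legacy bare form: <cluster>:pod:<id>
  }
  return resourceId; // not a pod target; leave identity alone
}
```

The response `resourceId` would then carry the canonical key regardless of which form the caller sent.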
That same metrics-history contract also owns canonical Kubernetes type
coverage across the shared chart clients. /api/resources,
frontend-modern/src/api/charts.ts, and /api/metrics-store/history must
preserve the shared metrics-target family for clusters, nodes, pods, and
deployments rather than treating prefixed pod IDs as a special case and
dropping k8s-deployment onto an untyped fallback. Cluster history stays on
the canonical cluster source key, node history on
<cluster>:node:<uid-or-name>, pod history on
k8s:<cluster>:pod:<uid-or-namespace/name>, and deployment history on
<cluster>:deployment:<uid-or-namespace/name>, so demo and live workload
detail charts all resolve through one governed identity contract.
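The governed identity keys listed above can be sketched as tiny key builders; these helpers are hypothetical, but the key shapes follow the contract text:

```typescript
// Key shapes per the metrics-history contract; helper names are illustrative.
function nodeHistoryKey(cluster: string, uidOrName: string): string {
  return `${cluster}:node:${uidOrName}`;
}
function podHistoryKey(cluster: string, uidOrNamespaceName: string): string {
  return `k8s:${cluster}:pod:${uidOrNamespaceName}`;
}
function deploymentHistoryKey(cluster: string, uidOrNamespaceName: string): string {
  return `${cluster}:deployment:${uidOrNamespaceName}`;
}
```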
The Pulse Account commercial shell now also owns a dedicated bootstrap
contract in internal/cloudcp/portal/page.go, internal/cloudcp/portal/handlers.go,
and internal/cloudcp/portal/handlers_test.go. /api/portal/bootstrap and
the in-page pulse-account-bootstrap payload must stay shape-identical for
account identity context, signed-out versus signed-in shell state, workspace
summaries, and renderer-owned public, commercial, and control-plane route
configuration, including the canonical bootstrap route path, magic-link request
path, signup path, and stable workspace summary fields such as created_at.
That workspace summary contract must expose explicit health semantics: healthy
for passing health checks, checking only when no completed health check
exists yet, and unhealthy for a failed latest health check.
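The workspace health semantics above reduce to one derivation; the sketch below is illustrative, with the real contract owned by the portal bootstrap payload:

```typescript
// checking only while no completed health check exists yet; otherwise the
// latest completed check decides healthy versus unhealthy.
type WorkspaceHealth = "healthy" | "checking" | "unhealthy";

function workspaceHealth(latestCheckPassed: boolean | undefined): WorkspaceHealth {
  if (latestCheckPassed === undefined) return "checking"; // no completed check yet
  return latestCheckPassed ? "healthy" : "unhealthy";
}
```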
That same shared internal/api/ plus internal/websocket/hub.go boundary also
owns browser websocket origin continuity for reverse-proxied runtimes. Same-host
browser origins must continue to connect when a reverse proxy preserves the
external host but terminates TLS upstream, so live updates do not fail merely
because the backend hop is plain HTTP. Forwarded host/proto headers may extend
that same-origin boundary only after explicit trusted proxy CIDRs are injected,
so hosted tenants and proxies that rewrite hostnames still fail closed onto the
trusted forwarded-origin contract instead of weakening cross-site websocket
checks. Browser-facing websocket upgrades must also require an explicit
Origin header even when allowedOrigins is wildcarded, so missing-origin
requests cannot silently bypass the cross-site websocket boundary.
PULSE_TRUSTED_PROXY_CIDRS must also reject wildcard trust ranges such as
0.0.0.0/0 or ::/0 at startup, while runtime forwarded-header parsing
fails closed if an invalid wildcard proxy trust range somehow reaches the
process.
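The startup-time wildcard rejection for `PULSE_TRUSTED_PROXY_CIDRS` can be sketched with simple string matching; the real parser presumably validates full CIDR syntax as well, so treat this as a shape illustration only:

```typescript
// Hypothetical sketch: reject wildcard trust ranges at startup so a
// misconfigured deployment cannot trust every forwarded-header source.
function validateTrustedProxyCidrs(raw: string): string[] {
  const cidrs = raw.split(",").map((c) => c.trim()).filter((c) => c !== "");
  for (const cidr of cidrs) {
    if (cidr === "0.0.0.0/0" || cidr === "::/0") {
      throw new Error(`wildcard proxy trust range rejected: ${cidr}`);
    }
  }
  return cidrs;
}
```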
That same shared boundary now also owns outbound SSO metadata and discovery
URL handling. SAML test/preview metadata fetches and OIDC issuer discovery
must normalize absolute HTTP(S) inputs through shared helpers, reject
userinfo-bearing URLs before any outbound request, and append the OIDC
well-known path relative to the issuer base instead of resetting discovery to
the origin root. Runtime SAML metadata refresh, runtime OIDC discovery, and
admin-side SSO test/preview fetches must all use that same restricted outbound
transport policy, including same-origin redirect validation and checked
regular-file loads for any configured SSO credential or CA-bundle path.
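The discovery-path rule can be sketched as a join that preserves the issuer base path. This is illustrative only; the real shared helpers additionally reject non-HTTP(S) and userinfo-bearing inputs before any outbound request:

```typescript
// Hypothetical sketch: the well-known suffix goes beneath the configured
// issuer base path, never back at the origin root.
function discoveryUrl(issuer: string): string {
  const base = issuer.endsWith("/") ? issuer.slice(0, -1) : issuer;
  return `${base}/.well-known/openid-configuration`;
}
```

For an issuer like `https://idp.example.com/realms/pulse`, the base path survives into the discovery URL instead of being reset to `https://idp.example.com/.well-known/...`.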
That same restricted outbound transport is also the canonical cross-product
egress boundary. It is exported via pkg/securityutil, with
internal/securityutil retaining Pulse-local wrappers, so adjacent products
such as pulse-enterprise can reuse the same DNS-rebinding-safe dial and
redirect policy for operator-configured audit webhooks instead of reintroducing
raw http.Client egress paths.
That same SSO boundary also owns manual SAML endpoint validation payloads.
internal/api/identity_sso_handlers.go, internal/api/saml_service.go, and
internal/api/contract_test.go must preserve both idpSsoUrl and optional
idpSloUrl on the shared SAML test request, and both fields must fail closed
through the same validated absolute HTTP(S) helpers instead of letting the
manual logout URL drift out of the request model or bypass the governed URL
normalization path.
That same runtime SSO contract also owns the Pulse-side public URL that feeds
SAML service-provider metadata and auth requests. internal/api/saml_handlers.go,
internal/api/saml_service.go, and the SAML regression tests must rebind
previously initialized SAML providers to the current configured PublicURL
before metadata or browser login flows emit SP entity, ACS, or metadata URLs,
so a stale startup-time blank/relative base URL cannot leak back into runtime
metadata or auth request generation once the canonical external URL is known.
That same SSO API boundary also owns final browser redirect construction after
local auth handoff. OIDC and SAML success/error handlers must build their
local returnTo targets through one canonical local-path helper that rejects
absolute or host-bearing targets before query params are appended, so shared
identity flows cannot drift back to per-handler open-redirect shaping.
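A canonical local-path helper of the kind described above might look like this sketch; the exact checks in the owned helper are assumptions here:

```typescript
// Hypothetical sketch: accept only genuinely local paths as returnTo
// targets, before any query params are appended.
function safeLocalReturnTo(target: string, fallback: string = "/"): string {
  if (!target.startsWith("/") || target.startsWith("//")) return fallback; // absolute or host-bearing
  if (target.includes("\\")) return fallback; // backslash host tricks
  return target;
}
```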
Commercial self-service actions in that shell must stay same-origin as well:
the frontend may only call the portal-owned /api/portal/commercial/* routes,
and internal/cloudcp/portal/commercial_proxy.go plus internal/cloudcp/routes.go
own the server-side proxy boundary to the shared license/commercial APIs so
the browser runtime does not widen control-plane CSP with direct cross-origin
commercial fetches.
That same shared commercial boundary now also applies to Patrol feature
handoffs. API-backed Patrol surfaces may consume canonical commercial hrefs
from the shared license/commercial contract, but they must not re-decide
internal-versus-external navigation behavior inside API-adjacent page or hook
owners once the contract can resolve to both in-app and public destinations.
That same shared internal/api/ boundary now also owns browser presentation
policy for public-demo and commercial suppression. /api/security/status must
continue to expose the raw session capability fact
sessionCapabilities.demoMode, but browser shells and shared frontend stores
now consume the explicit presentationPolicy payload from that same response
as the canonical runtime contract for demoMode, readOnly,
hideCommercial, and hideUpgrade. Commercial posture and billing stores
must therefore defer their first read until that policy has resolved, so
public demos fail closed without probing hidden commercial routes during
bootstrap.
For ordinary self-hosted v6 installs, that same security-status contract owns
the free-first commercial posture: hideUpgrade defaults to true outside
hosted mode, and API consumers must treat it as a prompt-suppression contract
for upgrade links, trial CTAs, plan upsells, and paid-only navigation rather
than as a billing entitlement change.
That same contract split also makes the licensing boundary explicit:
/api/license/runtime-capabilities is the public runtime feature contract,
/api/license/commercial-posture is the non-billing upgrade/trial posture
contract for real customer workspaces, and /api/license/entitlements
remains billing-only. New callers must extend one of those owned shapes
instead of reviving a combined entitlement payload for mixed runtime,
commercial, and billing concerns.
That same shared licensing contract also owns internal runtime-only
capabilities. Release demo runtimes may use the internal demo_fixtures
entitlement to authorize mock fixture data and /api/system/mock-mode
transitions, but browser-facing entitlement and runtime payloads must filter
that capability back out so public callers never learn or depend on internal
demo-fixture grants.
That same shared licensing boundary now also owns release-build enforcement of
that internal demo-fixture capability. Dev and test builds may keep local
fixture proof tolerant so mock-backed demos can be exercised without a paid
grant, but release builds must gate runtime mock rewiring through the
build-tagged shouldEnforceReleaseDemoFixtureRuntime() contract before
syncReleaseDemoFixtureRuntime() can enable fixtures on a live server.
Browser payloads and public-demo callers must still never see or depend on
that internal grant.
That same shared API contract now also owns browser-proofed read separation.
Non-billing browser journeys such as
tests/integration/tests/11-first-session.spec.ts,
tests/integration/tests/journeys/01-smoke-bootstrap-login-dashboard.spec.ts,
and tests/integration/tests/journeys/03-relay-pairing.spec.ts may call
/api/license/runtime-capabilities for feature truth, but they must assert
zero browser requests to /api/license/entitlements. Billing activation,
upgrade, and owned billing panels remain the only browser surfaces allowed to
read the billing-only entitlements contract.
/portal is now one bootstrap-driven shell for both anonymous and
authenticated users, so new account frontend work must extend that shared
contract rather than inventing a second local payload shape, reviving separate
login/portal templates, or hardcoding production URLs, route prefixes, or
DOM-scraped account facts in static assets. That canonical renderer now lives
under internal/cloudcp/portal/frontend/, is embedded from
internal/cloudcp/portal/dist/, and is guarded by
internal/cloudcp/portal/frontend_sync_test.go, so the maintained frontend
sources and the committed embedded bundle cannot drift silently. The maintained
portal source tree now also owns explicit runtime/bootstrap type definitions
and one task-first shell model across desktop and phone widths: narrow-screen
navigation must collapse the same bootstrap-driven task shell into a compact
task strip, not a second mobile-only route or DOM contract, and the runtime
must keep the active task visibly in-frame when that strip scrolls. That same
shared bootstrap shell must also compress account identity into a compact
mobile summary strip rather than introducing a second narrow-screen
account-context payload or task-specific DOM contract. When that shared shell
opens a lower workspace job surface such as lifecycle review or the
create-workspace form, the runtime must reveal the opened surface instead of
leaving the user at the top of the list. The same shared runtime contract must
also keep the workspace detail rail absent until a lifecycle or
create-workspace job is active, rather than rendering a default idle
lifecycle explainer before the user has picked a task. The same task-first
runtime rule now also applies to Access: the hosted roster is the default
surface, and invite, role-change, or remove controls only appear when the
matching access job is active. When can_manage is false, that same roster
must stay a review surface rather than rendering a third action column full of
fake disabled row state. That same typed bootstrap/runtime contract must also
ship the current hosted roster snapshot in the portal bootstrap payload so the
first Access render is a real review surface rather than a fetch-first or
error-first placeholder; later member API reads remain refresh and mutation
follow-through. That same shared access contract must keep stable access
subject identity across bootstrap and mutation responses: hosted roster rows
carry subject_id, state, and optional user_id, unknown-email invites
return 202 Accepted with state=pending instead of auto-binding a guessed
future user record, and portal magic-link verification may materialize that
pending subject into a membership only after the invited email authenticates
through the portal-owned session path. Billing follows the same shared runtime
contract: hosted billing remains the default primary path, self-hosted billing
jobs open one panel at a time, and the runtime must reveal the active billing
panel on phone-width layouts instead of leaving it offscreen. The same
bootstrap/runtime contract must also carry explicit truth for whether
self-hosted commercial history is relevant to the signed-in account, so
hosted-only accounts do not render self-hosted license, refund, privacy, or
self-hosted escalation paths by default, and self-hosted-only accounts do not
front-load an empty hosted-billing block before the real self-hosted jobs.
That same runtime handoff contract now also covers product-originated
self-hosted upgrade arrivals: /portal?portal_handoff_id=...
may open a portal-owned upgrade job inside Billing, but it must not
fabricate broader self-hosted commercial history or reveal
retrieve/refund/privacy panels for a hosted-only account that only arrived
through an upgrade CTA.
That same commercial contract now also includes the self-hosted purchase
return path. Product-originated upgrade handoffs must include a canonical
commercial-owned portal_handoff_id that resolves server-side to the bound
checkout intent. Pulse still binds checkout completion to a signed
purchase_return_token, but that token must stay inside the Pulse-owned
activation callback path rather than leaking into the portal arrival URL. The
portal runtime must resolve the verified portal handoff through the shared
commercial API and use only that owned handoff-derived checkout state when it
starts checkout instead of trusting browser referrer state, raw
checkout_intent_id, or loose feature / return_url parameters. The
browser-facing GET /v1/checkout/portal-handoff response must not expose the
bound checkout_intent_id, and POST /v1/checkout/session must accept only
portal_handoff_id for product-originated upgrade arrivals so the license
server resolves the private checkout intent internally before Stripe session
creation. That handoff response is now intentionally narrowly stateful: first
resolution must stamp resolved_at, the portal-facing lifecycle must stay
derived from the owned handoff plus the private checkout intent
(created, resolved, checkout_started, completed), and completed
handoffs must refuse browser checkout replay instead of silently reopening
commercial state. The owned handoff row is also the canonical binding record
for product-originated self-hosted checkout: it must persist the signed
purchase_return_jti, the bound Stripe session_id, and the timestamps that
prove resolve, checkout-start, and completion. Stripe success must return
that same portal_handoff_id into Pulse's activation callback, and Pulse
must compare both portal_handoff_id and purchase_return_jti against the
commercial checkout-session result before redeeming the activation key, so
browser form/query state and Stripe metadata alone never become the source of
truth for a completed self-hosted upgrade. That same owned callback path must
resolve only to HTTPS instance origins or a direct loopback HTTP origin, and
the hosted trial/entitlement follow-up fetches behind that path must stay on
the restricted outbound client instead of raw commercial HTTP calls. Once that
commercial binding
verifies, Pulse's owned callback must persist a dedicated local
purchase-return redemption record keyed by portal_handoff_id plus
purchase_return_jti, use explicit local redemption state
(started, activated, failed) instead of a generic replay tombstone, and
allow retry only from owned failed state rather than by deleting the local
binding outright. That same owned contract also
retires the old compatibility
bootstrap surfaces: Pulse must not expose a separate public
GET /auth/license-purchase-handoff resolver, and the commercial server must
not expose a direct browser bootstrap through GET /v1/checkout/intent once
portal_handoff_id is canonical. Pulse's public
GET /auth/license-purchase-activate
callback then serves an auto-submitting bridge into the owned POST activation
path, which redeems the completed checkout through the shared
license/commercial API before returning the browser to the owned billing plan
route. Stripe cancel must return directly to owned billing with
purchase=cancelled; activation success, expiry, and failure must return to
owned billing with explicit arrival states so the billing runtime can surface
those results in-product. If Pulse cannot create the initial Pulse Account
portal handoff, GET /auth/license-purchase-start must still return the
browser to owned billing with purchase=unavailable so the runtime can
surface the failure in-product instead of leaving the operator on a raw
service error. When the upgrade flow was opened in a secondary tab, the
callback may refresh the originating billing tab and close itself; when no
owned billing tab or opener is available, the callback must still return the
current tab to the owned billing route automatically instead of leaving the
operator on a dead success page. The same contract still owns intent
normalization: product-originated self-hosted purchase handoff must emit
feature=self_hosted_plan and intent=self_hosted_plan as the canonical
browser/runtime value. The older max_monitored_systems label may be accepted
only as a backward-compatible alias during request or callback normalization,
but Pulse and the license server must not emit it as the primary self-hosted
purchase intent once the uncapped self-hosted model is canonical.
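The intent-alias rule above can be sketched as a one-way normalizer; the helper is hypothetical, but the accepted values follow the contract text:

```typescript
// Accept the legacy label only as an input alias; self_hosted_plan is the
// only value ever emitted as the canonical purchase intent.
function normalizePurchaseIntent(raw: string): string | undefined {
  switch (raw) {
    case "self_hosted_plan":
      return "self_hosted_plan";
    case "max_monitored_systems": // backward-compatible alias, never re-emitted
      return "self_hosted_plan";
    default:
      return undefined; // unknown intents fail closed
  }
}
```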
That same typed bootstrap/runtime contract must also derive the default signed-
in shell section from account shape: hosted accounts open on Workspaces,
self-hosted-only accounts open on Billing, and the signed-in shell keeps
precise workspace counts inline on Workspaces instead of exposing a separate
Summary tab as a primary or default destination.
That same typed shell-section contract now excludes overview entirely:
internal/cloudcp/portal/frontend/src/types.ts,
internal/cloudcp/portal/frontend/src/shell_section.ts, and
internal/cloudcp/portal/frontend/src/shell.ts may route only the governed
workspaces, access, billing, and support destinations, with hosted
account arrivals defaulting to workspaces and self-hosted-only arrivals
defaulting to billing.
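The section-routing rule above can be sketched as a typed default; the types are illustrative, while the real owners are the portal frontend sources listed above:

```typescript
// Only the governed destinations are routable; there is no overview section.
type ShellSection = "workspaces" | "access" | "billing" | "support";

// Hosted accounts land on Workspaces; self-hosted-only accounts land on Billing.
function defaultShellSection(hasHostedAccount: boolean): ShellSection {
  return hasHostedAccount ? "workspaces" : "billing";
}
```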
The same account-shape runtime contract must also keep the shell navigation
honest: the task row is Workspaces, Access, Billing, and Support.
Self-hosted-only accounts must drop hosted-only Workspaces and Access
surfaces rather than implying live hosted work, and any shared fallback
surface that still resolves there must render an explicit unavailable state. Support
follows the same
account-shape runtime contract: self-hosted-only accounts expose only the
billing escalation path and billing-specific handoff packet, and hosted
workspace/access escalation controls must not render when no hosted account
exists.
The same typed bootstrap/runtime contract must also keep permission copy
honest for hosted view-only roles: when can_manage is false, Workspaces,
Access, and hosted Billing must stop advertising create, roster-mutation,
or hosted-billing actions and must instead state that an owner or admin is
required.
That same typed shell contract must also keep account context quiet and
literal: the signed-in shell should render one account-context header with the
current account title, kind, role, and short orienting copy, not a second
summary deck competing with the active task surface.
The same permission contract must also drive hosted Support: when
can_manage is false, the support shell may route the user back to
Workspaces, Access, or Billing only as review and owner/admin handoff
paths, not as live hosted mutation paths the current role can execute.
The same typed bootstrap/runtime contract must also keep inline workspace
counts and shell copy honest to account shape: hosted-only accounts may not
mention self-hosted billing utilities by default, and hosted view-only roles
must say when hosted billing still needs owner/admin authority.
The same permission contract must also drive the compact account-context
summary: the strip may not describe full hosted access-control or billing
ownership when the current role can only review workspaces or roster state.
That same typed runtime contract must also normalize account-role labels before
render: customer-facing copy may say Owner, Admin, Tech, or Read-only,
but it must not surface raw runtime identifiers such as read_only or legacy
aliases such as member.
That same runtime contract must also keep the first available action
permission-honest for hosted view-only accounts: when no ready workspace
exists, the primary route must stay on reviewable Workspaces or Access
surfaces before any blocked hosted billing or owner/admin-only mutation path.
That same shared request/runtime boundary must also preserve task-specific
failure copy on transport errors: portal job surfaces may not leak raw strings
such as Network error., and must instead surface the owned fallback for the
exact action that failed.
That same typed summary contract must also keep Ready honest when no hosted
workspace exists yet: hosted accounts with zero workspaces may not route the
user into current workspace review, and must instead render that nothing is
ready until the first hosted workspace exists.
That same typed summary contract must also keep Needs attention honest when
only suspended workspaces remain: hosted workspace history alone may not make
the shell imply that active work is ready.
That same typed summary contract must also stay fact-first: summary copy may
not synthesize urgency or health verdicts such as Nothing urgent or
Healthy now, and must instead render concrete counts, explicit workspace
state, and next-action routing from the owned runtime payload.
That same typed portal runtime contract must also keep task and status copy
literal across the account surface: customer-facing wording may not use
commentary such as obvious, actual work, trustworthy, or settled when
the runtime already knows the concrete state, action, or failure being
rendered. The same typed contract applies to shell badges, section labels,
context chips, route labels, and error headings: they must render the exact
action or state (Manage access, Hosted billing attached, Email support,
Failed to load roster) instead of shorthand such as Manage, Hosted, or
generic alert labels. Support copy is part of the same typed contract:
escalation surfaces must render short literal path/account/action wording
instead of longer procedural prose.
That same typed Access contract must also keep the idle managed roster
structurally honest: when no remove job is active, the roster remains a
two-column review surface for operator and role. The third action column
appears only for the live remove-access job instead of repeating fake idle row
state.
That same typed portal page contract also owns favicon cache-busting: the
rendered <link rel="icon"> must point at the shared /favicon.svg asset
through a versioned href so new portal icon revisions bypass browser cache on
deploy instead of waiting for asset expiry.
That same typed portal page contract must also preserve a calm, flat
account-tool visual posture across all portal scenarios: no gradients, heavy
shadows, or decorative dashboard chrome. The shell uses a compact identity bar
(account name, role, kind) and a horizontal tab bar for Workspaces, Access,
Billing, and Support. Content panels render directly below the tab bar without
reintroducing a second shell-level hero, overview panel, summary deck, or
metric grid ahead of the active task. The Workspaces panel may own one
section header, one quiet inline facts line, and one inline next-action row
above the workspace list when those elements are part of the same task
surface; they must not drift into a separate overview destination or duplicate
context strip. Action buttons (Create workspace, Invite people, Change roles,
Remove access) are integrated into toolbar rows within their respective
bordered data cards rather than existing as free-floating elements above
content. Hierarchy is driven by spacing, typography, and 1px borders rather
than cards, pills, stacked metrics, or ornamental side rails competing with
the active task.
That same typed page contract also applies before auth: the signed-out portal
surface must keep one obvious sign-in action plus precise account-scope
presentation, instead of falling back to a separate marketing-like hero and
generic login card that drifts away from the owned account shell model.
That typed portal frontend source is also guarded by a package-local
tsc --noEmit gate, so future account-shell work should extend the typed
source boundary instead of reviving opaque global runtime objects,
document-wide render events, or untyped embedded asset edits.
Hosted Pulse Cloud tenant-org AI reads now also follow that same canonical
rule: internal/api/ai_hosted_runtime.go, internal/api/ai_handlers.go,
internal/api/ai_handler.go, and internal/api/hosted_billing_state.go
must derive bootstrap and runtime readiness from the effective hosted billing
lease, falling back to the machine-owned default lease when a tenant org
has no org-local billing state, so /api/settings/ai, /api/ai/status, and
/api/ai/sessions cannot drift across separate entitlement interpretations.
The shared API-token management surface now also preserves canonical local
operator identity when explaining where a token is currently in use. Runtime
and infrastructure usage labels in the revoke flow keep the local instance
name for Docker hosts, agents, PBS, PMG, and similar monitored systems
instead of replacing those identities with governed summary text, so
revocation decisions remain instance-specific and auditable.
The unified resource API payload now carries the richer domain facets directly
through the owned backend response: resource objects can expose canonical
capabilities, relationships, recentChanges, and derived facetCounts
in addition to policy and identity metadata, so the backend payload contract
stays aligned with the timeline and control-plane model instead of flattening
those fields away. The frontend consumer, however, only preserves the
timeline-first recentChanges slice and its counts on the bundle contract.
The same resource contract now also exposes a dedicated
/api/resources/{id}/timeline history endpoint and bundled facet reads under
/api/resources/{id}/facets, so operators can inspect change history without
depending on a monolithic resource payload.
The recovery API boundary now also keeps canonical platform vocabulary
consistent on both sides of the transport. /api/recovery/* queries use
platform as the operator-facing filter key, and the point/rollup payloads
now expose platform / platforms as the primary response fields while
legacy provider aliases remain compatibility-only for older decoders.
The reporting API contract now also treats current-state fleet inventory as a
first-class surface separate from historical metrics reports.
internal/api/reporting_inventory_handlers.go,
internal/api/router_routes_licensing.go, and the settings reporting shell now
own /api/admin/reports/catalog as the canonical operator-facing reporting
catalog plus /api/admin/reports/inventory/vms/export as the stable VM
inventory sub-contract. The catalog endpoint owns the reporting panel title,
description, locked-shell teaser copy, enabled-shell guidance copy,
historical performance report options, and nested VM inventory definition,
while the export endpoint remains the
spreadsheet-shaped CSV transport. That
export is intentionally not comment-prefixed like the legacy metrics CSV, and
it now carries Proxmox pool membership from the canonical unified VM runtime
model instead of inferring or reconstructing that field locally inside the
frontend or handler.
That same catalog payload also owns the optional performance-report capability
surface: supportsMetricFilter and supportsCustomTitle are contract flags,
not UI hints, so frontend consumers and request builders must not render or
emit unsupported metric-filter or custom-title fields from local assumptions.
The same reporting catalog and inventory export definitions also own backend
transport validation and download semantics. internal/api/metrics_reporting_handlers.go
and internal/api/reporting_inventory_handlers.go must derive allowed formats,
default format selection, multi-resource limits, optional metric/title field
emission, canonical default-title fallback, default fallback range window,
attachment filename stems, single-report filename subject, filename date-stamp
style, and invalid-format validation copy from the canonical reporting
definitions instead of hardcoding a second local contract.
Frontend consumers may still keep a local fallback filename for defensive
download behavior, but when the server returns Content-Disposition they must
prefer that attachment filename as the canonical transport output.
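The server-filename preference above can be sketched as a small helper. This is an illustrative sketch, not the real frontend utility (which lives in TypeScript); the function name `preferServerFilename` is hypothetical, and it uses the standard `mime.ParseMediaType` to read the attachment filename:

```go
package main

import "mime"

// preferServerFilename sketches the download rule: keep a defensive local
// fallback filename, but when the server returned a parseable
// Content-Disposition with an attachment filename, that name wins.
func preferServerFilename(contentDisposition, localFallback string) string {
	if contentDisposition == "" {
		return localFallback
	}
	_, params, err := mime.ParseMediaType(contentDisposition)
	if err != nil {
		return localFallback
	}
	if name := params["filename"]; name != "" {
		return name
	}
	return localFallback
}

func main() {
	got := preferServerFilename(`attachment; filename="fleet.csv"`, "fallback.csv")
	if got != "fleet.csv" {
		panic("server attachment filename must be preferred")
	}
}
```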
That same catalog contract is also authoritative for frontend request builders:
consumers may validate or reject malformed payloads, but they must not invent
replacement report endpoints, filename prefixes, export routes, or default
range windows from frontend-local fallback constants once the catalog has been
accepted.
Reporting time windows follow the same rule: start and end stay optional,
but when present they must parse as RFC3339 and end must not be earlier than
start; invalid values are a 400 invalid_time_range transport failure, not a
silent fallback to the default reporting window.
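The time-window rule above can be sketched as a validation helper. This is a minimal illustration, not the real handler code; the name `validateTimeRange` and the sentinel error are assumptions, but the behavior follows the contract: optional fields, strict RFC3339 parsing, end not earlier than start, and a stable `invalid_time_range` failure instead of a silent default window.

```go
package main

import (
	"errors"
	"time"
)

// errInvalidTimeRange stands in for the 400 invalid_time_range transport
// failure described by the contract.
var errInvalidTimeRange = errors.New("invalid_time_range")

// validateTimeRange accepts optional RFC3339 start/end values and fails
// closed on malformed input or an end that precedes start.
func validateTimeRange(start, end string) (time.Time, time.Time, error) {
	var s, e time.Time
	var err error
	if start != "" {
		if s, err = time.Parse(time.RFC3339, start); err != nil {
			return s, e, errInvalidTimeRange
		}
	}
	if end != "" {
		if e, err = time.Parse(time.RFC3339, end); err != nil {
			return s, e, errInvalidTimeRange
		}
	}
	if !s.IsZero() && !e.IsZero() && e.Before(s) {
		return s, e, errInvalidTimeRange
	}
	return s, e, nil
}

func main() {
	if _, _, err := validateTimeRange("2024-01-02T00:00:00Z", "2024-01-01T00:00:00Z"); err == nil {
		panic("end before start must be invalid_time_range")
	}
}
```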
The same transport contract also owns reporting-field and body validation:
metricType must stay within the governed character set/length, title must
stay within the governed length cap, and the multi-report JSON body must remain
strictly parsed with the canonical size ceiling, unknown-field rejection, and
no trailing payload tolerance instead of accepting malformed operator input and
drifting onward.
Those validation failures also keep stable API error codes owned by the backend
contract itself; handlers must not infer invalid_metric_type,
invalid_title, or similar response codes by parsing their own human-readable
error text.
The catalog route itself is intentionally metadata-readable without the
advanced_reporting feature gate so locked admin surfaces can present the same
canonical reporting definition before upsell, while report generation and
inventory export remain feature-gated execution routes.
That metadata route is still a version boundary as well. Current Pulse servers
must expose /api/admin/reports/catalog, but frontend consumers may treat a
404 from that route as an old-backend compatibility signal and fall back to
the legacy report-generation transport only; they must not synthesize or guess
the newer catalog-owned inventory export contract when the backend does not
provide it.
The licensing API must also stay internally coherent in local dev mode. When
backend feature gates are widened by PULSE_DEV=true or demo/mock mode,
/api/license/runtime-capabilities must advertise the same capability set in
capabilities; it must not leave frontend shells on stale free-tier gating
while backend HasFeature() already treats those features as available.
That widening still has to respect runtime feature flags. A capability like
multi_tenant must stay absent from dev/demo entitlement payloads until the
process also has PULSE_MULTI_TENANT_ENABLED=true; otherwise admin shells
drift into impossible routes that the same backend still rejects as disabled.
The same rule applies to placeholder or plan-marker capabilities as well:
dev/demo entitlement payloads must not advertise non-operable entries like
white_label, multi_user, or unlimited just because they exist in tier
metadata, when the current runtime does not expose a corresponding usable
feature surface.
The /api/resources serializer now also refreshes canonical identity and
policy metadata through the shared unified-resource helper before it writes
the payload, so backend and frontend contract tests stay aligned on one
canonical metadata pass instead of consumer-local attach wrappers.
Those history reads now also accept governed kind, sourceType, and
sourceAdapter query filters, and the backend store owns the corresponding
filtered counts, so the timeline contract can narrow by change class and
adapter provenance without inventing a frontend-only relationship slice.
The same facet bundle contract now also returns grouped recentChangeKinds
counts by canonical ChangeKind, so the shared drawer and summary chips can
show the distribution of restarts, anomalies, state transitions, and other
timeline classes without guessing from the loaded slice.
The same facet bundle contract now also returns grouped
recentChangeSourceTypes counts by canonical source type, so the shared
drawer and summary chips can distinguish platform events, pulse diffs,
heuristics, user actions, and agent actions without inventing frontend-local
provenance heuristics.
The same facet bundle contract now also returns grouped
recentChangeSourceAdapters counts by canonical source adapter, so the
shared drawer and summary chips can distinguish Docker, Proxmox, TrueNAS, and
ops-helper provenance without inventing frontend-local integration heuristics.
Client consumers of the node setup transport now also share the canonical
trial-start action helper in frontend-modern/src/utils/trialStartAction.ts
for the NodeModal Pro upgrade path. The NodesAPI client remains the source of
truth for setup/install requests, while hosted trial redirects and denial copy
must flow through the shared trial-start owner rather than a second client-side
status-code map inside node setup state.
That same frontend/API split now also requires node setup state to consume
shared commercial selectors for non-transport trial gating. useNodeModalState.ts
may decide whether to show a trial CTA through
frontend-modern/src/stores/licenseCommercial.ts, but it must not repurpose
raw commercial-posture fields as if they were part of the NodesAPI transport
contract.
Canonical timeline entries now also preserve correlation context in
relatedResources, so the history surface can explain which neighboring
resources moved with restart, anomaly, config, state transition, and
relationship changes instead of only exposing correlation endpoints when the edge
itself changed.
Restart timeline entries are also a first-class contract now: restart change
kinds can serialize Docker and Kubernetes restart metadata instead of being
folded into generic state transitions.
Incident-driven anomaly entries are also a first-class contract now:
metric_anomaly change kinds can serialize canonical incident rollup changes
instead of being flattened into generic status churn.
For relationship changes, the from and to fields now summarize the actual
edge(s) rather than only the parent pointer, so the API contract keeps the
relationship transition legible even before the frontend expands the
related-resource chips.
The same relationship and change presenters now also own the state, restart,
incident, and config summary fragments that feed those timeline values, so the
API surface preserves the canonical wording before the frontend renders it.
Invalid sourceAdapter values are rejected at the API boundary, so the filter
contract stays aligned with the canonical adapter set rather than silently
falling back to an empty slice.
The same resource-timeline contract now also owns canonical parsing for
kind, sourceType, and sourceAdapter query values, so the HTTP handler
stays thin and the change model remains the source of truth for timeline
filter validation.
The same API contract now also exposes the unified-resource control-plane
history through dedicated enterprise audit reads. The action, lifecycle, and
export history endpoints live in internal/api/activity_audit_handlers.go and
internal/api/router_routes_licensing.go, and the contract tests now pin their
response shapes so the execution trail remains queryable through the governed
API surface rather than only through the underlying store.
The infrastructure platform-connections contract now also owns TrueNAS
connection CRUD under internal/api/truenas_handlers.go and
internal/api/router_routes_registration.go. /api/truenas/connections
must stay the canonical API-backed platform boundary for listing, creating,
updating, deleting, and testing TrueNAS integrations, and PUT updates must
preserve masked secrets (********) instead of clearing stored API keys or
passwords when operators edit non-secret fields from the settings surface.
Draft validation must stay on POST /api/truenas/connections/test, while
re-testing one saved connection must route through
POST /api/truenas/connections/{id}/test so the server reuses stored secret
material instead of forcing the frontend to round-trip redaction placeholders
back through the draft-test API. That saved-connection test route must also
accept the edit-form payload for an existing connection and merge unchanged
masked secrets server-side, so editing operators can test changed host / port /
TLS fields before saving without re-entering retained credentials. For
row-level saved-connection tests with no edit overlay payload, that same route
must update the canonical TrueNAS poll summary owner so subsequent
/api/truenas/connections reads reflect refreshed last-success or last-error
state instead of leaving settings health disconnected from manual operator
tests.
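The masked-secret PUT rule above can be sketched as a server-side merge. The struct and helper name here are illustrative (the real shapes live in the config and handler packages); the rule itself is from the contract: an incoming `********` placeholder means "keep the stored secret", so editing non-secret fields never clears credentials.

```go
package main

// secretMask is the placeholder the settings surface round-trips for
// stored secrets.
const secretMask = "********"

// truenasConnection is a simplified stand-in for the stored connection
// shape, not the real internal/config type.
type truenasConnection struct {
	Host   string
	APIKey string
}

// mergeSecrets applies the PUT rule: a masked incoming secret preserves the
// stored value, while changed non-secret fields are accepted as-is.
func mergeSecrets(stored, incoming truenasConnection) truenasConnection {
	if incoming.APIKey == secretMask {
		incoming.APIKey = stored.APIKey
	}
	return incoming
}

func main() {
	stored := truenasConnection{Host: "nas.local", APIKey: "real-key"}
	edited := truenasConnection{Host: "nas.example", APIKey: secretMask}
	if merged := mergeSecrets(stored, edited); merged.APIKey != "real-key" {
		panic("masked secret was not preserved")
	}
}
```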
That same route family now also owns pre-save monitored-system admission
preview. POST /api/truenas/connections/preview and
POST /api/truenas/connections/{id}/preview must return the shared
monitored-system ledger preview contract sourced from canonical
unified-resource projection, including current/projected grouped systems and
enforced limit verdicts, rather than page-local settings estimates or
provider-local counters.
That same /api/truenas/connections list boundary now also owns the
operator-facing runtime summary for those configured connections. The list
response must carry the canonical redacted config together with poll health
(intervalSeconds, last success/failure, consecutive failures) and discovered
platform contribution summary (host/resource identity plus systems, pools,
datasets, apps, disks, and recovery artifacts) so the platform-connections
workspace can render real API-backed status and handoff context without
inventing a settings-local shadow fetch path. Zero-value legacy
pollIntervalSeconds config must normalize back to the canonical 60-second
default at this same boundary instead of leaking ambiguous 0 values to the
frontend.
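The zero-value normalization above reduces to a one-line boundary rule. The helper name is hypothetical; the 60-second default is the canonical value stated by the contract.

```go
package main

// normalizePollInterval maps legacy zero-value (or otherwise non-positive)
// pollIntervalSeconds config back to the canonical 60-second default before
// the list response is written, so the frontend never sees an ambiguous 0.
func normalizePollInterval(seconds int) int {
	if seconds <= 0 {
		return 60
	}
	return seconds
}

func main() {
	if normalizePollInterval(0) != 60 {
		panic("legacy zero interval must normalize to 60")
	}
}
```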
That same /api/truenas/connections boundary also owns explicit disabled-path
semantics: the truenas_disabled response exists only when the server has
explicitly opted out of the default-on TrueNAS integration, not as the normal
bootstrap state for a supported platform.
That same platform-connections boundary therefore defines the current TrueNAS
onboarding floor for Pulse. Supported now means operators can bootstrap
TrueNAS through the shared Infrastructure onboarding flow
(Platform connections may remain the operator-facing setup-wizard label,
but it lands on /settings/infrastructure?add=pick and normalizes into the
shared workspace) and /api/truenas/connections without the unified agent,
preserve masked secrets on ordinary edits, retest saved connections through
the stored-secret path, and see last-sync plus discovered contribution
summaries on the same settings surface. Pulse does not promise a separate
TrueNAS-only onboarding wizard, agent-required bootstrap, or public
provider-local app/log/config APIs at this floor.
That same infrastructure platform-connections contract is also the only
acceptable public backend boundary for the admitted VMware vSphere phase-1
direction. /api/vmware/connections must be the canonical
admin-only route family for listing, creating, updating, deleting, and testing
stored vCenter integrations under one saved-connection model. A green draft
or saved-connection test must mean the declared phase-1 floor is reachable
through the backend runtime, not merely that one of VMware's API families
answered. Pulse may keep separate vSphere Automation API and VI JSON clients
under that one saved connection, but the public API contract must hide that
multi-client runtime detail behind one canonical health and contribution
summary surface. Phase 1 must also keep the negative space explicit: no public
/api/vmware/hosts, /api/vmware/vms, /api/vmware/datastores,
/api/vmware/events, /api/vmware/tasks, or VMware control routes should be
introduced while inventory, alerts, history, and Assistant reads still route
through the shared canonical Pulse surfaces.
That same /api/vmware/connections family now also owns the current phase-1
implementation contract under internal/api/vmware_handlers.go,
internal/api/router.go, internal/api/router_routes_registration.go, and
frontend-modern/src/api/vmware.ts. The list response must carry one redacted
stored connection shape plus canonical poll health and observed
contribution summary (hosts, vms, datastores, viRelease) so the shared
settings workspace can render VMware status without another provider-local
inventory route. When base inventory succeeds but optional signal or topology
reads degrade, that same observed payload must carry the canonical
partial-success shape (degraded, issueCount, summarized issues) instead
of collapsing the whole connection to poll.lastError or pretending the
refresh was fully healthy. That poll payload is the canonical runtime contract:
backend handlers must source it from the poller-owned per-connection summary,
saved row-level retests with no payload must refresh that same summary owner,
and edit-form overlay tests must preserve the stored summary until a real save
succeeds. Compatibility acceptance of a historical test field may exist only
inside shared frontend normalization; the backend route family itself must stay
on poll for the operator-facing response model. POST /api/vmware/connections/test
must stay the draft test surface, while POST /api/vmware/connections/{id}/test
remains the saved connection retest surface. The explicit disabled path also
stays on this boundary: 404 vmware_disabled means the operator or runtime has
opted out of the default-on VMware candidate, not that the platform requires a
different onboarding contract.
That same route family now also owns source-native monitored-system admission
preview. POST /api/vmware/connections/preview and
POST /api/vmware/connections/{id}/preview must project the discovered
provider-backed record set through the shared monitored-system ledger preview
contract before persistence, including current/projected grouped systems and
enforced limit verdicts, rather than collapsing a vCenter add or edit to one
handler-local candidate estimate.
That same TrueNAS and VMware platform-connections contract now also owns
runtime mock continuity. When /api/system/mock-mode flips on a running
server, /api/truenas/connections and /api/vmware/connections must
immediately return the canonical mock connection payloads without restart, and
the shared /api/resources surface must expose the corresponding platform
inventory through source=truenas and source=vmware-vsphere. Shared query
parsing may accept vmware-vsphere as the operator-facing VMware alias, but
the emitted canonical resource source remains the shared vmware source
family rather than a second backend source key.
That same VMware test contract now also owns structured setup-failure
classification. When POST /api/vmware/connections/test or
POST /api/vmware/connections/{id}/test fails, the backend payload must
preserve the canonical top-level code plus string-valued details.error and
details.category, and shared browser normalization in
frontend-modern/src/utils/apiClient.ts plus
frontend-modern/src/api/responseUtils.ts must carry that metadata through the
shared error object without inventing a VMware-only fetch or parsing path.
That same TrueNAS and VMware platform-connections contract now also owns
per-surface scope as a first-class field on the connection shape. The
TrueNAS connection payload must carry positive monitorDatasets,
monitorPools, and monitorReplication booleans; the VMware connection
payload must carry positive monitorVms, monitorHosts, and
monitorDatastores booleans. internal/config/truenas.go and
internal/config/vmware.go must default those fields to true on new
instances and migrate legacy all-false records to all-true inside
ApplyDefaults, so existing truenas.json and vmware.json on disk
continue monitoring every surface after upgrade. The unified
/api/connections aggregator in internal/api/connections_aggregator.go
must project those booleans into the connection row's scope map and
declare capabilities.supportsScope: true for TrueNAS and VMware rather
than hard-coding an all-true scope; frontend-modern/src/api/truenas.ts
and frontend-modern/src/api/vmware.ts must round-trip the booleans
through their normalize*Connection and serialize*ConnectionInput
helpers without dropping them on edit-save.
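The ApplyDefaults migration above can be sketched for the TrueNAS side (the VMware booleans follow the same shape). The struct is a simplified stand-in for the real internal/config type; the rule is from the contract: a legacy all-false record predates the fields and must migrate to all-true so upgrades keep monitoring every surface.

```go
package main

// truenasScope mirrors the positive per-surface scope booleans on the
// TrueNAS connection shape (illustrative struct, not the real config type).
type truenasScope struct {
	MonitorDatasets    bool
	MonitorPools       bool
	MonitorReplication bool
}

// applyScopeDefaults migrates legacy all-false records to all-true inside
// the defaults pass, while preserving any explicitly chosen partial scope.
func applyScopeDefaults(s truenasScope) truenasScope {
	if !s.MonitorDatasets && !s.MonitorPools && !s.MonitorReplication {
		s.MonitorDatasets = true
		s.MonitorPools = true
		s.MonitorReplication = true
	}
	return s
}

func main() {
	migrated := applyScopeDefaults(truenasScope{})
	if !migrated.MonitorDatasets || !migrated.MonitorPools || !migrated.MonitorReplication {
		panic("legacy all-false record must migrate to all-true")
	}
}
```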
That same VMware API boundary now also owns the phase-1 runtime negative
space around inventory projection. internal/api/router.go may wire VMware's
supplemental ingest into the shared /api/resources surface so canonical
agent, vm, and storage records can appear elsewhere in Pulse, but the
public backend contract must still stop at /api/vmware/connections* for
provider-local routes. Phase 1 must not add public /api/vmware/resources,
/api/vmware/history, /api/vmware/alerts, or VMware-specific recovery
transport just because the internal poller now projects VMware-backed
resources into the shared canonical inventory.
That same shared API contract now also owns Assistant mention transport for
those canonical resources. frontend-modern/src/api/aiChat.ts,
internal/api/ai_handler.go, and internal/api/ai_handlers.go must preserve
structured mention payloads for canonical agent, vm, storage, and
app-container resources as shared unified-resource IDs plus shared mention
types, so VMware-backed reads stay on /api/ai/* and /api/resources*
instead of introducing VMware-only mention payloads or provider-local
inventory reads under /api/vmware/*.
That same /api/ai/chat payload boundary owns per-request execution-mode
overrides. Dashboard Pulse Brief and other scoped handoffs may include
autonomous_mode:false on the chat request to force approval-required command
execution for that exchange, but the transport must treat the field as a
request override only and must not mutate the user's persistent AI control
setting.
That same backend API boundary now also owns the negative space around
assistant control. Wiring native TrueNAS app actions into
internal/api/router.go, internal/api/ai_handler.go, or adjacent backend
helpers must not introduce a parallel public /api/truenas/apps/... control
surface; provider-backed app control for Pulse Assistant stays behind the
shared AI runtime tool contract unless this API contract changes in the same
slice.
That same negative-space rule also applies to assistant diagnostics. Wiring
native TrueNAS app log reads into internal/api/router.go,
internal/api/ai_handler.go, or adjacent backend helpers must not introduce a
parallel public /api/truenas/apps/.../logs surface; provider-backed app log
reads for Pulse Assistant stay behind the shared pulse_read runtime tool
contract unless this API contract changes in the same slice.
That same negative-space rule also applies to assistant configuration reads.
Wiring native TrueNAS app config into internal/api/router.go,
internal/api/ai_handler.go, or adjacent backend helpers must not introduce a
parallel public /api/truenas/apps/.../config surface; provider-backed app
config for Pulse Assistant stays behind the shared pulse_query action="config" runtime tool contract unless this API contract changes in the
same slice.
The monitored-system ledger contract now also carries a canonical grouping
explanation payload. /api/license/monitored-system-ledger must expose the
shared monitored-system explanation summary, sanitized grouping reasons, and
included top-level surfaces exactly as the unified-resource resolver computed
them, while the frontend client stays in lockstep with that nested payload
shape.
That same ledger contract must also preserve the canonical monitored-system
status enum end to end. Backend normalization may fail closed for unsupported
values, but it must not flatten governed warning state to unknown, because
the billing and inventory surfaces need the real top-level runtime status the
unified-resource resolver computed.
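The fail-closed-but-not-flattened enum rule above can be sketched as follows. The helper name is hypothetical and the enum values are the four states the contract names; the point is that `warning` survives normalization as itself while unsupported values are rejected rather than coerced to `unknown`.

```go
package main

import "fmt"

// normalizeSystemStatus accepts only the governed monitored-system status
// enum. Unsupported values fail closed with an error; governed states such
// as "warning" pass through instead of being flattened to "unknown".
func normalizeSystemStatus(raw string) (string, error) {
	switch raw {
	case "online", "warning", "offline", "unknown":
		return raw, nil
	default:
		return "", fmt.Errorf("unsupported monitored-system status %q", raw)
	}
}

func main() {
	got, err := normalizeSystemStatus("warning")
	if err != nil || got != "warning" {
		panic("warning must survive normalization unchanged")
	}
}
```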
That same contract now also owns the backend-authored status explanation paired
with that enum, and the monitored-system ledger details surface must render it
alongside the counting explanation instead of inventing page-local wording for
what online, warning, offline, or unknown means.
That nested status explanation is now a structured contract, not summary-only
copy: /api/license/monitored-system-ledger must preserve the canonical
summary plus the ordered reason list from unified resources, including the
degraded source or surface, its status, and its canonical reported_at
timestamp, so mixed fresh/stale grouped systems remain explainable through one
governed API shape.
That canonical summary must also carry the mixed-source freshness explanation
when the freshest grouped observation came from a different source than the
degraded one, so API consumers can show a fresh Last Seen value without
making warning or offline state look contradictory.
That freshest grouped observation is now canonically exposed as the structured
latest_included_signal object. Its at, source, name, and type fields
identify exactly which included top-level surface reported most recently.
The backend payload contract now emits only that structured object, and the
frontend monitored-system client should parse that canonical wire contract
directly rather than keeping flat alias fallback for
latest_included_signal_at, latest_included_signal_source, or last_seen.
The canonical nested status-reason timestamp is reported_at, and the
normalized client contract must expose only that field.
That same monitored-system ledger contract now also owns prospective
explanation. POST /api/license/monitored-system-ledger/preview must accept
one canonical candidate plus an optional structured replacement selector, fail
closed when monitored-system usage is unavailable, and return the canonical
current/projected count delta, enforced limit verdict, effect label, and
current/projected ledger entries produced by the shared monitored-system
projection layer instead of by handler-local heuristics.
Configured Proxmox, PBS, and PMG update handlers in
internal/api/config_node_handlers.go must use that same structured
replacement-selector contract when they enforce monitored-system admission:
source-owned names, host URLs, hostnames, and resource identifiers may cross
the API boundary, but handler-local matcher closures must not become the
source of truth for replacement identity.
Provider-backed preview routes such as /api/truenas/connections/preview,
/api/truenas/connections/{id}/preview, /api/vmware/connections/preview,
and /api/vmware/connections/{id}/preview must serialize that same canonical
preview shape directly; they may not down-scope the response to local counts or
hide current/projected grouped systems from the governed contract.
That same platform-connections preview contract now also owns candidate-state
defaulting. New connection preview and test payloads must inherit the
canonical provider default enabled=true when the field is omitted, while
saved-connection preview, test, and update payloads must preserve the stored
enabled state unless the request explicitly changes it. Shared handlers may
not let a zero-value JSON decode silently turn an unchanged connection into an
inactive monitored-system candidate.
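That defaulting rule can be sketched as one resolver (the function and parameter names are assumptions, not the real handler signature; the three-way precedence is what the contract specifies):

```typescript
// Resolve the effective enabled state for a monitored-system candidate.
// - New connections: an omitted `enabled` inherits the provider default (true).
// - Saved connections: an omitted `enabled` preserves the stored state, so a
//   zero-value JSON decode cannot silently deactivate an unchanged connection.
function resolveEnabled(
  requestEnabled: boolean | undefined,
  storedEnabled: boolean | null, // null when no saved connection exists yet
): boolean {
  if (requestEnabled !== undefined) return requestEnabled; // explicit request wins
  if (storedEnabled !== null) return storedEnabled;        // preserve saved state
  return true;                                             // canonical provider default
}
```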
Inactive TrueNAS and VMware candidates must stay on that same canonical API
contract as zero-delta or removal-only previews. Those routes may not fail
validation just because no projected monitored-system rows remain once the
candidate is treated as non-counting.
That client contract must also fail closed when older or partial payloads omit
the nested explanation object: the frontend may normalize missing explanation
fields to empty reasons/surfaces plus a safe default summary, but it must not
crash or invent non-canonical grouping details.
That same frontend monitored-system client must not keep its own parallel
fallback copy for those summaries. When the payload omits frontend-authored
status or explanation text during mixed-version rollouts, the client should
source its safe default wording from the governed monitored-system
presentation helper instead of duplicating local strings inside the API
normalizer.
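A sketch of that fail-closed repair, assuming a simple nested-object shape (the reasons/surfaces/summary field names and the inlined default string are illustrative; per the contract, the real default wording would come from the governed presentation helper, not a local constant):

```typescript
// Hypothetical shape of the nested explanation object.
interface StatusExplanation {
  reasons: string[];
  surfaces: string[];
  summary: string;
}

// Stand-in for the governed monitored-system presentation helper's wording;
// the real string must live in that shared module, not in the normalizer.
const DEFAULT_EXPLANATION_SUMMARY = "Status details are unavailable for this system.";

// Fail closed on older/partial payloads: repair missing fields to safe
// defaults instead of crashing or inventing grouping details.
function normalizeExplanation(raw: unknown): StatusExplanation {
  const obj = (typeof raw === "object" && raw !== null ? raw : {}) as Record<string, unknown>;
  const strings = (v: unknown): string[] =>
    Array.isArray(v) ? v.filter((x): x is string => typeof x === "string") : [];
  return {
    reasons: strings(obj.reasons),
    surfaces: strings(obj.surfaces),
    summary: typeof obj.summary === "string" && obj.summary !== ""
      ? obj.summary
      : DEFAULT_EXPLANATION_SUMMARY,
  };
}
```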
Action-plan stale-plan protection on those audit records now uses the canonical
resourceVersion, policyVersion, and planHash fields only, so the
response contract stays deterministic without extra version baggage.
The same API contract now also owns the dedicated frontend resource facet
client in frontend-modern/src/api/resources.ts, which fetches the governed
capability, relationship, and timeline surfaces from internal/api/resources.go
instead of teaching the drawer or list views to reconstruct them inline.
The same AI resource-intelligence payload now also carries dependency and
dependent correlation arrays plus correlation evidence, so the drawer can render
canonical correlation context from the shared AI contract instead of inferring it
from the relationship facet payload alone.
The same AI frontend client now also loads /api/ai/intelligence/correlations
through the shared frontend-modern/src/stores/aiIntelligence.ts store for
the Patrol intelligence page and the AI summary page, so the
learned-correlation list is governed by the same API contract that backs the
resource drawer's correlation evidence instead of being fetched as page-local state.
That correlations route now reads through the canonical AI intelligence
facade first, so the handler and its payload keep the detector behind one
shared access layer instead of routing directly to Patrol-local correlation
state.
That store now also owns the dashboard load bundle used by the Patrol page,
so the page refresh path stays aligned on one store-owned orchestration layer
instead of re-encoding the AI bundle inline.
The AI summary page now also renders the canonical
frontend-modern/src/components/Infrastructure/ResourcePolicySummary.tsx
card for policy posture, so sensitivity, routing, and redaction counts are
presented through one governed frontend component while the resource drawer
keeps only the per-resource policy lines.
The unified action, lifecycle, and export audit reads now also clamp oversized
limit requests to the governed maximum of 1000, so the control-plane audit
surface stays bounded even when callers ask for arbitrarily large history
pages.
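The clamp itself is small; a sketch under the assumption that invalid or missing limits fall back to a server default (the default of 100 is an assumption for illustration; only the maximum of 1000 comes from the contract):

```typescript
// Clamp requested audit page sizes to the governed maximum so oversized
// `limit` query values cannot produce unbounded history pages.
const MAX_AUDIT_LIMIT = 1000;
const DEFAULT_AUDIT_LIMIT = 100; // assumed default; the real value may differ

function clampAuditLimit(requested: number | undefined): number {
  if (requested === undefined || !Number.isFinite(requested) || requested <= 0) {
    return DEFAULT_AUDIT_LIMIT;
  }
  return Math.min(Math.floor(requested), MAX_AUDIT_LIMIT);
}
```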
Unified action audit payloads must also expose the normalized action plan
preflight through plan.preflight: API consumers should see whether a dry-run
was available, what safety checks were recorded, and what verification steps
remain, instead of inferring action safety from free-form result text.
Those relationship and timeline payloads now also carry lastSeenAt freshness
and optional metadata through the same owned contract, so the drawer can
preserve provenance without inventing a separate relationship-detail schema.
Relationship-change timeline entries now also use the canonical relationship
summary helper for their compact from and to wording, so the API keeps the
human-readable edge label aligned with the unified-resource relationship
presenter instead of reconstructing a local type-token summary.
The same /api/resources/{id}/timeline filter contract now also routes its
kinds, source types, and source adapters through the shared unified-resource
change-filter parser, so API validation stays owned by the change model rather
than being re-parsed separately in the HTTP handler.
The tenant-scoped unified resource API now also stays on canonical
unified-resource seeds end to end: internal/api/resources.go,
internal/api/router_helpers.go, and internal/api/state_provider.go no
longer treat raw tenant StateSnapshot data as a live registry-seeding owner
once UnifiedResourceSnapshotForTenant is available.
The router now wires the tenant resource state provider during initial setup
when a multi-tenant monitor is present, so non-default org resource list and
facet reads do not fall back to a missing-provider 500 during normal tenant
requests.
The unified infrastructure settings surface now also follows an explicit
shared boundary with agent-lifecycle. Changes to
frontend-modern/src/components/Settings/InfrastructureWorkspace.tsx,
frontend-modern/src/components/Settings/InfrastructureInstallerSection.tsx,
frontend-modern/src/components/Settings/ConnectionEditor/CredentialSlots/NodeCredentialSlot.tsx,
frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx,
and frontend-modern/src/components/Settings/useInfrastructureInstallState.tsx
must carry this contract together with the shared agent-lifecycle contract and
the dedicated API proof files for token generation, agent lookup, profile
assignment, install/uninstall copy transport, and Proxmox setup/install flows,
rather than remaining unowned consumers of those contract surfaces.
That shared infrastructure-settings boundary must also stay under explicit
proof routing on both sides instead of relying only on generic owned-file
coverage on the API-contract side: token generation, agent lookup, profile
assignment, install/uninstall copy transport, and inline Proxmox credential
flows must continue to carry the direct proof paths together with the
lifecycle-side surface proof.
The same shared-boundary rule now applies to frontend-modern/src/api/agentProfiles.ts,
frontend-modern/src/api/nodes.ts,
frontend-modern/src/utils/agentInstallCommand.ts,
internal/api/agent_install_command_shared.go,
internal/api/config_setup_handlers.go, and internal/api/unified_agent.go:
agent install/register/profile control changes must preserve canonical API
payload behavior instead of drifting into subsystem-local transport rules.
That same shared boundary now assumes InfrastructureWorkspace.tsx owns the
top-level ledger shell, frontend-modern/src/components/Settings/useConnectionsLedger.ts
consumes the canonical /api/connections projection for configured rows, and
InfrastructureInstallerSection.tsx plus
ConnectionEditor/CredentialSlots/NodeCredentialSlot.tsx consume the shared
API-backed lifecycle state. The retired
InfrastructureOperationsController.tsx shell and
useInfrastructureReportingState.tsx reporting path must not be reintroduced
as parallel transport owners.
That same /api/connections projection also owns collection-method truth for
the infrastructure ledger: useConnectionsLedger.ts must derive one canonical
subtitle (via platform API, via Pulse Agent, or via platform API and Pulse
Agent) from the shared system/component payload instead of letting page-local
tables invent their own API-versus-agent badge heuristics.
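A minimal sketch of that derivation (the boolean flag names are assumptions about the shared payload; the three subtitle strings are the governed wording):

```typescript
// Derive the single canonical collection-method subtitle from the shared
// system/component payload instead of per-table badge heuristics.
function collectionSubtitle(viaApi: boolean, viaAgent: boolean): string | null {
  if (viaApi && viaAgent) return "via platform API and Pulse Agent";
  if (viaApi) return "via platform API";
  if (viaAgent) return "via Pulse Agent";
  return null; // no collection method reported yet
}
```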
That same /api/connections row contract now also owns the fleet-governance
projection consumed by the infrastructure workspace. Connection.fleet is the
canonical machine-readable source for enrollment state, liveness, version
drift, adapter health, config rollout, credential status, update posture, and
remote-control posture; frontend settings surfaces may format those facts, but
must not infer a second fleet state from row labels, error-message text, or
provider-local table heuristics.
That same shared infrastructure-settings boundary also owns install-profile
semantics surfaced by
frontend-modern/src/components/Settings/infrastructureOperationsModel.tsx:
the recommended auto profile may describe Proxmox auto-detect only as the
canonical unpinned runtime mode that lets the agent register every detected
local PVE or PBS service. Frontend copy on that shared model must not imply a
hidden single-type selection or invent a profile flag that the installer and
auto-register contract do not actually persist.
That shared frontend-modern/src/api/agentProfiles.ts boundary must also stay
under explicit proof routing on both sides instead of remaining a generic
frontend-client match on the API-contract side: assignment, delete, unassign,
and suggestion transport changes must carry the direct profile-client proof
together with the lifecycle-side profile proof.
That shared frontend-modern/src/api/nodes.ts boundary must also stay under
explicit proof routing on both sides instead of remaining a generic
frontend-client match on the API-contract side: Proxmox setup-script and
agent-install command transport changes must carry the direct lifecycle/client
proof together with a direct API-contract client proof.
That same rule also applies to the shared update transport surface:
frontend-modern/src/api/updates.ts and internal/api/updates.go must carry a
direct API-contract proof path instead of relying only on the generic frontend
client or backend payload fallback coverage.
That same rule also applies to the shared security transport surface:
frontend-modern/src/api/security.ts, internal/api/security.go,
internal/api/security_tokens.go, and internal/api/system_settings.go must
carry a direct API-contract proof path instead of relying only on the generic
frontend client or backend payload fallback coverage.
That same rule now applies to the shared backend lifecycle install/register
surface as well: internal/api/agent_install_command_shared.go,
internal/api/config_setup_handlers.go, and internal/api/unified_agent.go
must carry a direct API-contract proof path instead of relying only on the
generic internal/api/ backend payload prefix.
That same backend-owned internal/api/ boundary also includes the generated
embedded-frontend warning surface used during local development.
internal/api/DO_NOT_EDIT_FRONTEND_HERE.md must direct developers to edit
frontend-modern/src, identify http://127.0.0.1:5173 as the hot-reload
frontend dev shell, and describe http://127.0.0.1:7655 as the proxied
backend dependency instead of teaching 7655 as the browser-facing dev
entrypoint.
That shared frontend install-command helper must also stay under explicit proof
routing instead of remaining an orphan utility: changes in
frontend-modern/src/utils/agentInstallCommand.ts must carry the direct
helper proof path, not rely only on downstream consumer tests to catch
transport drift.
That same backend install-command contract must also normalize trailing slashes
on canonical base URLs before composing installer asset paths or response
payloads, so /api/agent-install-command and the governed container-runtime
token response cannot emit //install.sh or slash-suffixed pulseURL
transport when PublicURL or AgentConnectURL already ends with /.
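A sketch of that normalization (shown in TypeScript for brevity even though the real owner is the Go helper; the function names are illustrative):

```typescript
// Normalize a configured base URL before composing installer asset paths,
// so a PublicURL or AgentConnectURL ending in "/" cannot produce
// "//install.sh" or a slash-suffixed pulseURL in the response payload.
function normalizeBaseUrl(raw: string): string {
  let url = raw.trim();
  while (url.endsWith("/")) url = url.slice(0, -1);
  return url;
}

function installScriptUrl(baseUrl: string): string {
  return `${normalizeBaseUrl(baseUrl)}/install.sh`;
}
```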
That same governed container-runtime migration response must also preserve the
canonical lifecycle shell payload shape: installCommand in the diagnostics
docker prepare-token response may not emit the stale --disable-host alias or
an ad hoc curl | sudo bash pipeline, and must instead match the canonical
root-or-sudo wrapped install transport with --enable-host=false.
That diagnostics install-command payload must also be assembled through the
shared backend install-command helper in internal/api/agent_install_command_shared.go
instead of a handler-local shell formatter, so token omission, plain-HTTP
--insecure, and trailing-slash normalization stay under one canonical API
contract surface.
That same diagnostics boundary must also consume the canonical monitoring
memory-source catalog instead of maintaining a second local trust/fallback
classifier. Node, VM, and LXC memory-source aliases must normalize to the same
governed labels and fallback-reason contract before diagnostics memory-source
breakdowns are serialized.
That same diagnostics boundary must also backfill canonical fallback reasons
when a raw snapshot reaches the API layer without one, so
buildMemorySourceDiagnostics stays self-consistent even if a caller bypasses
GetDiagnosticSnapshots() and hands diagnostics a legacy alias directly.
That same diagnostics boundary now also owns org-scoped local commercial funnel
serialization when the self-hosted privacy contract allows it: if
internal/api/diagnostics.go exposes local upgrade-metric summaries, daily
buckets, or surface/capability breakdowns, it must read them from the local
conversion store through the licensing bridge, keep diagnostics caching scoped
to the authenticated org context, and preserve the canonical camelCase
diagnostics payload shape instead of leaking pkg/licensing types or inferring
hosted checkout stages from the local API layer.
That same public-demo API boundary must also hide runtime-admin operations
surfaces instead of treating them as harmless reads. Demo sessions must receive
404 for /api/diagnostics, /api/diagnostics/docker/prepare-token, and the
shared /api/logs/* endpoints, so the preview shell cannot expose runtime
diagnostics, log streams, or downloadable log bundles behind a supposedly
read-only demo account.
That shared infrastructure install boundary now also preserves copied shell
command payload continuity: any privilege-escalation wrapper applied at
frontend-modern/src/components/Settings/InfrastructureInstallerSection.tsx
through useInfrastructureOperationsState.tsx must keep the full canonical
installer argument list intact instead of dropping token, profile, or
command-execution flags between display and clipboard transport.
That same shared infrastructure-settings boundary now also consumes the canonical
connectedInfrastructure projection from the backend state contract instead of
reconstructing reporting rows by merging raw unified-resource facets and
removed-_ arrays in the browser. v6 clients no longer receive those removed-_
arrays at all for this surface; Connected infrastructure row
identity, reporting-surface labels, and ignore/reconnect scope must be owned
by the backend payload contract, with frontend rendering limited to
presentation and operator actions.
That same install-command payload continuity now also applies when auth is
optional: copied install and upgrade commands must omit token arguments
entirely on token-optional Pulse instances rather than serializing a fake
sentinel token into the governed shell or PowerShell payload.
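That omission rule can be sketched as one argument builder (the function shape is illustrative; only the --url/--token flag names appear in the surrounding contract):

```typescript
// Build the copied install arguments from one list. On token-optional Pulse
// instances the token argument is omitted entirely rather than serialized as
// a fake sentinel value.
function installArgs(pulseUrl: string, token: string | null): string[] {
  const args = ["--url", pulseUrl];
  if (token !== null && token !== "") {
    args.push("--token", token); // only a real token secret travels here
  }
  return args;
}
```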
That same shared installer boundary must also stay on one runtime-argument
contract after the command is copied: scripts/install.sh may not rebuild
separate service-flag strings for token-bearing and token-file install paths,
and must instead derive persisted --url, optional --token, feature
toggles, identity flags, and disk-exclude transport from one canonical
installer-owned argument item list.
That same optional-auth contract now extends through the first governed
runtime transport boundary: post-install Unified Agent report requests and
Proxmox auto-register requests must use the canonical authToken request
field for one-time setup-token auth instead of any API-token auth header path,
so the canonical API surface does not preserve parallel auth transports or a
second auth meaning for the same field.
The self-hosted commercial entitlement payload now also uses one canonical
counted-unit contract: max_monitored_systems is the live runtime and
frontend term, and older max_agents or max_nodes aliases may be decoded
only at explicit legacy import boundaries. Limit current values, add-node
enforcement, auto-register enforcement, deploy-slot enforcement, the
monitored-system ledger endpoint, and TrueNAS/API-backed registration must all
reflect deduped top-level monitored systems rather than agent-only
installation count, and legacy_connections / has_migration_gap may not
imply that API-backed monitoring sits outside the commercial cap.
That same contract now also owns prospective admission and replacement
projection. Config-backed PVE/PBS/PMG, TrueNAS, VMware, and other API-backed
registration or update routes must project candidates or preview records
through the canonical monitored-system resolver before persistence, including
replacement of one existing source-owned surface, instead of rebuilding
handler-local priority tables or platform-specific counters.
That same admission contract now also owns replacement identity. Shared API
handlers may keep source-local request decoding, but the replacement they pass
into monitored-system projection must travel as one canonical structured
selector contract rather than as per-handler opaque match logic, so support
preview, limit enforcement, and final runtime grouping stay aligned.
When an active monitored-system cap is present and current usage cannot be
resolved, those API contracts must fail closed for net-new admissions rather
than serializing a fake zero. /api/license/entitlements therefore carries
limit-level current_available truth so clients can distinguish unavailable
monitored-system usage from a real current: 0.
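A sketch of that fail-closed decision, assuming a limit-level record (the interface and function names are assumptions; current_available is the contract field):

```typescript
// Fail closed for net-new admissions when usage cannot be resolved. The
// limit-level payload carries current_available so clients can distinguish
// unavailable usage from a real current of 0.
interface MonitoredSystemLimit {
  limit: number | null;          // null means no active cap
  current: number;               // meaningful only when current_available
  current_available: boolean;
}

function admitNewSystem(l: MonitoredSystemLimit): "allow" | "deny" {
  if (l.limit === null) return "allow";     // no cap to enforce
  if (!l.current_available) return "deny";  // fail closed, never a fake zero
  return l.current < l.limit ? "allow" : "deny";
}
```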
That same entitlement family now also owns the canonical monitored-system
capacity posture. /api/license/runtime-capabilities,
/api/license/commercial-posture, and /api/license/entitlements must expose
monitored_system_capacity with one shared admission model:
usage_unavailable, unlimited, within_limit, at_limit_blocking_new, or
over_limit_frozen. That contract must state whether new monitored systems are
blocked and whether existing monitoring continues, so browser surfaces stop
guessing from raw current / limit math or inventing a hard-cap model that
the backend does not enforce.
That same contract must also make over-limit legitimacy explicit. When
monitored_system_capacity is at or above a capped plan boundary, the payload
must expose reason as limit_reached, preexisting_usage, or
legacy_migration_capture_pending so browser surfaces can distinguish a full
plan boundary from a frozen above-plan carry-forward and from migrated legacy
continuity that is still being verified.
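The two enumerations above, with a sketch of how a browser surface might consume them (the union type and helper names are illustrative; the string values are the contract's):

```typescript
// Shared admission model for monitored_system_capacity.
type CapacityState =
  | "usage_unavailable"
  | "unlimited"
  | "within_limit"
  | "at_limit_blocking_new"
  | "over_limit_frozen";

// Over-limit legitimacy reasons exposed alongside a capped state.
type OverLimitReason =
  | "limit_reached"
  | "preexisting_usage"
  | "legacy_migration_capture_pending";

function newSystemsBlocked(state: CapacityState): boolean {
  // Existing monitoring continues in every state; only new admissions vary,
  // and unavailable usage fails closed per the admission contract.
  return state === "usage_unavailable"
    || state === "at_limit_blocking_new"
    || state === "over_limit_frozen";
}
```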
That same admission family also owns the private monitored-system policy hook
boundary. internal/api/enterprise_extension_monitored_system_admission.go
may register one private ResolveMonitoredSystemAdmissionPolicy hook through
pkg/extensions/monitored_system_admission.go, but that hook must consume the
canonical counted-system input already resolved by shared API admission
helpers. Private builds may not use that extension point to invent
provider-local counters, replacement semantics, or usage-availability fallbacks
that diverge from the shared monitored-system resolver. pkg/server/server.go
may wire the hook during startup, but public runtime still owns counted-system
projection unless and until a later governed enforcement slice actually routes
live admission through that private decision boundary.
That same contract now also owns migrated legacy continuity. When a supported
v5 license auto-exchanges or is activated manually in v6, /api/license/status
and /api/license/entitlements must surface max_monitored_systems from the
greater of the exchanged plan limit and the one-time deduped monitored-system
floor captured from canonical runtime usage, and restored grant activations
must backfill that floor once canonical usage becomes available instead of
falling back to the raw exchanged grant limit after restart.
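The continuity arithmetic reduces to one max, with the uncaptured-floor case left explicit (function and parameter names are assumptions):

```typescript
// Effective max_monitored_systems for a migrated v5 license: the greater of
// the exchanged plan limit and the one-time deduped monitored-system floor.
// A null floor means canonical usage was never available for capture, so the
// exchanged plan limit stands alone until the floor is backfilled.
function effectiveLimit(planLimit: number, grandfatheredFloor: number | null): number {
  if (grandfatheredFloor === null) return planLimit;
  return Math.max(planLimit, grandfatheredFloor);
}
```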
That migration capture must wait for settled canonical usage, not merely the
first non-nil read-state. If provider-owned supplemental inventories are still
between initial wiring and the first canonical store rebuild, the API must
keep the grandfather floor uncaptured and expose usage as unavailable rather
than sealing continuity against a partial startup graph.
That continuity capture is owned by the shared licensing reconciler rather
than ordinary read handlers. /api/license/status and
/api/license/entitlements may expose monitored_system_continuity
(plan_limit, effective_limit, optional grandfathered_floor,
capture_pending, captured_at) and limit-level
current_unavailable_reason, but those request paths must not seal the
grandfather floor synchronously just because a billing read happened to arrive
after the canonical usage view became available. Those same read handlers must
also stay side-effect free with respect to the reconciler lifecycle itself:
they may observe pending continuity state, but only activation-state
transitions such as activate, restore, grant refresh, and clear/revocation may
bootstrap or tear down the pending-floor reconcile loop.
Continuity capture is itself an activation-state mutation: after the reconciler
persists the one-time floor, the service callback must publish the updated
activation state so ownership can cancel the pending loop without making
ordinary billing reads restart or stop it.
When save-time monitored-system admission fails with a commercial denial, the
structured API error must preserve the canonical monitored_system_preview
object through frontend-modern/src/utils/apiClient.ts and
frontend-modern/src/api/responseUtils.ts so platform settings can render the
same current/projected verdict instead of falling back to generic license copy.
That same configured-path contract now also has an explicit shared owner for
manual auth env files: internal/api/auth_env_path.go must remain the only
place that derives .env from configured runtime paths, and neighboring
handlers like router.go, router_routes_auth_security.go, and
security_setup_fix.go may not reconstruct their own /etc/pulse/.env
fallbacks once runtime path authority has been centralized.
That same monitored-system ledger boundary now also governs frontend client
normalization. frontend-modern/src/api/monitoredSystemLedger.ts must decode
mixed-version payloads into one normalized response shape before render
surfaces consume it: status_explanation, explanation, and
latest_included_signal are the client contract exposed to the UI, while
missing mixed-version fields may be repaired only inside that API client layer
rather than in panel-local fallback helpers.
That same shared API boundary rule now also applies to notification test
handlers: internal/api/notifications.go may decode webhook-test requests and
return the governed response envelope, but notifications-owned service-template
selection, safe header copying, and generic webhook-test payload fallback must
stay in internal/notifications/ rather than becoming a second API-layer owner
for the same transport contract.
The notifications API boundary also carries the canonical webhook template
shape used by the frontend service chooser: frontend-modern/src/api/notifications.ts
must expose the registry's service label, description, and mention-copy
metadata, and it may not invent a second frontend-only service taxonomy for
the chooser.
That same notifications boundary must also canonicalize legacy service-specific
input aliases at ingress instead of leaving them as a live runtime contract:
Pushover app_token / user_token may be accepted only at config/API/UI input
boundaries, and API responses plus live notification runtime state must carry
only canonical token / user fields.
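A sketch of that ingress canonicalization, assuming a flat string-record input (the input shape and precedence of canonical fields over aliases are illustrative; the alias and canonical field names come from the contract):

```typescript
// Accept legacy Pushover aliases only at the input boundary; stored config,
// API responses, and live runtime state carry canonical token/user only.
function canonicalizePushoverInput(
  raw: Record<string, string>,
): { token: string; user: string } {
  return {
    token: raw.token ?? raw.app_token ?? "",
    user: raw.user ?? raw.user_token ?? "",
  };
}
```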
That same shared owner now also governs writable auth env target order:
setup, password-change, and auth-status flows must route .env writes through
the shared helper instead of open-coding config-path writes plus ad hoc
data-path fallback branches in each handler.
Those shared profile-assignment settings surfaces must also preserve canonical
assignment visibility when an assignment references a profile ID that no longer
resolves in the fetched profile collection: the current payload state must stay
visible to the operator instead of collapsing into an empty/default select
value that misstates the backend assignment.
That same shared install-command boundary must preserve selected Proxmox target
profiles across PowerShell transport:
frontend-modern/src/components/Settings/InfrastructureInstallerSection.tsx
and frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx
must emit PULSE_ENABLE_PROXMOX and PULSE_PROXMOX_TYPE when the operator
copies a Windows install command for a Proxmox-targeted flow, and
scripts/install.ps1 must convert those env vars back into canonical
pulse-agent service args so the copied payload does not drift from the
governed shell command contract.
That same shared PowerShell install transport must also preserve
operator-selected insecure TLS and command-execution settings: copied Windows
install and upgrade payloads must emit PULSE_INSECURE_SKIP_VERIFY and
PULSE_ENABLE_COMMANDS when enabled, and copied Windows uninstall payloads
must still emit PULSE_INSECURE_SKIP_VERIFY when enabled, so
scripts/install.ps1 does not silently drop self-signed transport intent on
the Windows path.
That same shared lifecycle transport must also preserve explicit custom CA
selection end to end: copied shell install, upgrade, and uninstall payloads
must pass --cacert to both the outer installer download and the governed
installer runtime, while copied Windows install, upgrade, and uninstall
payloads must emit PULSE_CACERT and use a PowerShell bootstrap that applies
custom-CA or insecure-TLS certificate handling before install.ps1 is fetched,
not only after the installer starts executing. That bootstrap must accept the
same PEM/CRT/CER trust input that scripts/install.ps1 itself accepts, so the
shared command contract does not narrow custom-CA behavior on the first fetch.
That same shell transport contract also applies to the governed setup-completion
install handoff in SetupCompletionPanel: when the operator supplies a custom CA path
or opts into insecure/self-signed transport, the shared Unix install builder
must carry those choices through both the outer curl fetch and the installer
runtime instead of leaving the first-session onboarding path behind the shared
lifecycle/API contract. For explicit insecure/self-signed mode, that first-hop
fetch must widen to curl -kfsSL; preserving --insecure only on the later
installer runtime is not sufficient.
That same shared lifecycle/API boundary must also keep setup-script bootstrap
transport under one owned backend shape: /api/setup-script-url response
payloads and /api/setup-script rerun guidance must derive URL, download URL,
file name, token hint, and env/non-env command variants from one canonical
bootstrap artifact builder instead of duplicating those fields in separate
handler-local payload assembly paths.
That same owned setup-script contract now also covers the rendered shell body:
PVE and PBS script text must come from shared backend render helpers instead of
remaining duplicated inside the setup handler, so the API boundary owns one
artifact contract plus one render path rather than a route-local script engine.
That owned backend shape must itself stay singular: the shared setup artifact
model is the API contract, and handler-local response structs may not mirror
or remap the same url, downloadURL, scriptFileName, command, expiry, and
token metadata in parallel.
That same setup-completion contract must also preserve the canonical agent-connect
URL boundary: first-session install commands must prefer the backend-governed
security status agentUrl and only fall back to browser origin when no
canonical agent endpoint exists, while still allowing a local override for
bootstrap cases where the operator needs a different agent-to-Pulse address.
That same shared first-session install contract also applies to Windows
transport: SetupCompletionPanel must expose a governed PowerShell install command and
route it through the shared lifecycle helper, so PULSE_URL, optional
PULSE_TOKEN, insecure/self-signed TLS handling, and PULSE_CACERT stay
identical to the Windows install payload contract already enforced in
InfrastructureInstallerSection.tsx and useInfrastructureOperationsState.tsx.
That same first-session install boundary must also preserve the shared
optional-auth command contract: the Unix install builder must support omitted
--token transport, and SetupCompletionPanel may only omit that argument after an
explicit "without token" confirmation when auth is optional, while preserving
the generated token path by default so onboarding does not drift from the
governed settings behavior. After that explicit tokenless confirmation,
repeated wizard copy actions must keep emitting tokenless payloads instead of
silently rotating back to PULSE_TOKEN or --token transport on the next
rendered command. The same rule applies to wizard-owned background token
rotation: agent-connection polling may not regenerate a token or restore
token-auth payloads while explicit tokenless onboarding is still the active
contract.
That same first-session token contract must also stay coherent across the
setup-completion credential surfaces: once SetupCompletionPanel rotates the active install
token, the displayed credential token and downloaded credentials payload must
emit that same current token instead of exporting the stale bootstrap token
while the copied install command already uses a different one. At the same
time, the stable bootstrap admin API token must remain separately visible and
copyable; the setup wizard may not replace the admin credential with the
rotating install token and call that payload contract complete. That same
exported credentials payload must also carry the current agent-install URL and
matching install command contract for both Unix and Windows transport,
including any operator override, instead of serializing only browser-local
login context or Unix-only onboarding while the live setup-completion install
surface has already switched to a different governed endpoint. When explicit
tokenless optional-auth mode is active, the same payload and drawer contract
must report tokenless install mode instead of serializing a misleading current
install token that is no longer part of the active command transport, and the
operator guidance text on the install surface must stop claiming automatic
token rotation after each copy while tokenless transport is active.
That same insecure-TLS contract also applies to installer-owned HTTP traffic:
when PULSE_INSECURE_SKIP_VERIFY is set, scripts/install.ps1 must use the
same relaxed certificate policy for the governed binary download and uninstall
API callback requests instead of preserving --insecure only for the later
agent runtime.
That same shared infrastructure install boundary must also preserve
platform-canonical uninstall command payloads: copied utility actions for
Windows agents must emit the PowerShell uninstall transport, and uninstall
payloads must only carry real API token secrets rather than token record IDs
when server-side deregistration is requested.
That same uninstall payload rule now also applies to copied Unix shell flows:
frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx
must never serialize a token record ID into the governed --token argument
when building uninstall transport, because the backend runtime only accepts
the raw token secret or no token at all.
The same shared uninstall transport must preserve PULSE_URL for token-optional
Windows flows, because install.ps1 reads its canonical server endpoint from
that environment variable when composing the governed uninstall request.
That same copied uninstall boundary must also preserve the selected agent's
canonical identity when inventory already has it: shell uninstall payloads must
carry --agent-id, and PowerShell uninstall payloads must carry
PULSE_AGENT_ID, so deregistration targets the intended governed agent record
instead of depending on local fallback files or hostname lookup.
The same identity-preservation contract applies to copied upgrade transport:
shell upgrade payloads must carry --agent-id and --hostname, and
PowerShell upgrade payloads must carry PULSE_AGENT_ID and PULSE_HOSTNAME,
so upgrade reruns stay bound to the selected governed inventory record.
That same Unix transport boundary must also preserve shell-safe argument
encoding: copied shell uninstall and upgrade payloads must quote canonical URL,
token, agent ID, and hostname arguments so governed lifecycle commands do not
break or reinterpret inventory values with shell-significant characters.
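The quoting rule above can be sketched with a small helper. This is a minimal illustration of POSIX single-quoting, not the actual builder in useInfrastructureOperationsState.tsx; the argument names mirror the flags named in the contract.

```typescript
// Sketch: POSIX-safe single-quoting for copied shell lifecycle commands.
// An embedded single quote is closed, backslash-escaped, and reopened,
// so inventory values with shell-significant characters survive intact.
export function shellQuote(value: string): string {
  return `'${value.replace(/'/g, `'\\''`)}'`;
}

// Illustrative composition of governed uninstall arguments from inventory.
export function shellUninstallArgs(
  url: string,
  agentId: string,
  hostname: string,
): string {
  return [
    `--url ${shellQuote(url)}`,
    `--agent-id ${shellQuote(agentId)}`,
    `--hostname ${shellQuote(hostname)}`,
  ].join(" ");
}
```

With this shape, a hostname like `it's-lab` becomes `'it'\''s-lab'` and cannot break the copied command.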
The same Windows transport boundary must also preserve PowerShell-safe argument
encoding: copied PowerShell uninstall and upgrade payloads must escape
canonical URL, token, agent ID, and hostname values before they enter env
assignments or irm command text, and the copied Windows upgrade payload must
quote the resolved script URL so canonical URLs containing spaces remain a
valid PowerShell transport. The same Windows uninstall payload must quote its
resolved script URL too; escaping PULSE_URL into env assignments is not
sufficient if the later install.ps1 invocation can still be split by
PowerShell parsing.
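A minimal sketch of the Windows side, assuming PowerShell's single-quoted-string rule (embedded quotes are doubled); the exact command text the real builder emits is richer than this:

```typescript
// Sketch: escape a value for a PowerShell single-quoted string by doubling
// embedded quotes, then compose an env assignment plus a quoted irm URL.
export function psQuote(value: string): string {
  return `'${value.replace(/'/g, "''")}'`;
}

// Quoting the resolved script URL matters too: escaping only the env
// assignment is not enough if the later invocation can still be split
// by PowerShell parsing.
export function psUninstallCommand(pulseUrl: string, scriptUrl: string): string {
  return `$env:PULSE_URL = ${psQuote(pulseUrl)}; irm ${psQuote(scriptUrl)} | iex`;
}
```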
That same install-command boundary must use the identical escaping rules:
copied shell install payloads must quote canonical URL/token arguments, and
copied PowerShell install payloads must escape canonical URL/token values
before they enter env assignments or irm transport. The same interactive
Windows install snippet must also export PULSE_URL explicitly when copying a
selected canonical agent address, not just the fully qualified install.ps1
download URL.
That same shared install payload contract must also normalize trailing slashes
on canonical Pulse URLs before composing installer asset paths, so copied shell
and PowerShell install transport cannot drift onto //install.sh or
//install.ps1 when operators paste a base URL that already ends with /.
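The normalization can be sketched as a shared helper; the function names here are illustrative, not the real helper's API:

```typescript
// Sketch: strip trailing slashes from the canonical Pulse base URL before
// composing installer asset paths, so a pasted ".../" base cannot produce
// "//install.sh" or "//install.ps1" transport.
export function normalizeBaseUrl(raw: string): string {
  return raw.trim().replace(/\/+$/, "");
}

export function installScriptUrl(baseUrl: string, windows: boolean): string {
  return `${normalizeBaseUrl(baseUrl)}/${windows ? "install.ps1" : "install.sh"}`;
}
```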
When a governed token is already selected, that same interactive Windows
install payload must carry PULSE_TOKEN too; the copied command may not discard
the chosen credential and regress to a second manual prompt while other
install/uninstall/upgrade payloads stay token-bound.
When no real token has been selected yet, that same interactive Windows payload
must not serialize a placeholder token into PULSE_TOKEN; the contract remains
prompt-driven until a governed credential actually exists.
That optional-auth install contract must also remain bidirectional: when Pulse
allows tokenless transport, the settings surface may omit PULSE_TOKEN after a
real "without token" confirmation, but it must still preserve a real generated
token if the operator chooses one instead of collapsing optional auth into a
tokenless-only command builder.
That same optional-auth payload rule now also covers backend-generated Proxmox
install responses: when auth is not configured, the canonical
agent-install-command API must omit token and --token from its payload
instead of implicitly persisting a new API token record and mutating the
server's auth-configured state just to render a backend-driven install
command.
The same uninstall contract applies to hostname fallback identity: shell
payloads must carry --hostname, PowerShell payloads must carry
PULSE_HOSTNAME, and the uninstall scripts must prefer that explicit hostname
when performing governed /api/agents/agent/lookup fallback. That lookup must
fail closed on ambiguous hostname matches: installer-driven recovery may only
resolve a hostname when the match is unique, and display-name or short-hostname
fallbacks must return not found rather than picking an arbitrary agent.
That lookup fallback transport must be canonicalized on both installer paths:
shell and PowerShell uninstall flows must percent-encode the selected hostname
before issuing /api/agents/agent/lookup, so API-owned identity recovery does
not depend on raw query interpolation.
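Both halves of that lookup contract can be sketched together. The query parameter name `hostname` and the record shape are assumptions for illustration; the installer-side encoding and the fail-closed uniqueness rule are the points being shown:

```typescript
// Sketch: build the governed lookup request with a percent-encoded hostname
// instead of raw query interpolation.
export function agentLookupUrl(pulseUrl: string, hostname: string): string {
  const base = pulseUrl.replace(/\/+$/, "");
  return `${base}/api/agents/agent/lookup?hostname=${encodeURIComponent(hostname)}`;
}

// Sketch: fail-closed identity recovery. Only a unique full-hostname match
// may resolve; ambiguous matches return null ("not found") rather than
// picking an arbitrary agent.
export function resolveByHostname(
  agents: { id: string; hostname: string }[],
  hostname: string,
): string | null {
  const matches = agents.filter((a) => a.hostname === hostname);
  return matches.length === 1 ? matches[0].id : null;
}
```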
The same shell uninstall contract also applies to persisted connection state:
when scripts/install.sh receives explicit --agent-id or --hostname, it
must store those values alongside URL/token in connection.env and recover
them before invoking governed uninstall fallback.
The same persisted-identity contract applies to scripts/install.ps1: Windows
install and upgrade must store URL, token, agent ID, and hostname continuity in
installer-owned state and reload those values during governed uninstall before
using local fallback files or hostname discovery.
That ProgramData continuity state is scoped to the live installation only:
after governed uninstall succeeds, scripts/install.ps1 must remove the saved
state so stale agent identity or transport metadata cannot leak into later
removal or reinstall flows.
The same persisted-state contract applies to self-signed transport continuity:
canonical installer-owned uninstall state must retain insecure TLS intent and
reload it during governed offline uninstall, so self-signed Pulse instances do
not lose deregistration reachability after the original clipboard command.
That same persisted shell uninstall state must retain --cacert continuity:
scripts/install.sh must store and recover the custom CA bundle path from
connection.env so governed lookup and uninstall calls continue to trust the
intended Pulse certificate chain offline.
That shell connection.env recovery contract is keyed to partial uninstall
context, not only an entirely missing URL/token pair: if any governed uninstall
identity or transport field is absent on the command line, the script must
reload the missing values from persisted continuity state before using
API-owned lookup fallback.
Those register/install control surfaces now also carry a canonical host
identity continuity contract: /api/auto-register and token reuse must treat
hostname-form and IP-form URLs for the same node as one API-owned identity so
reruns do not fork duplicate runtime records or shadow token payloads.
That canonical /api/auto-register payload must also preserve token-action
truth: canonical completion now requires caller-supplied tokenId and
tokenValue, and the response must stay on the direct-use
action="use_token" contract as the only supported completion path.
That same contract must be enforced by first-hop callers too: install and
runtime-side Unified Agent registration clients may not treat a bare 2xx response or a loose
status field as success; they must validate the canonical status,
action, and token/identity response shape.
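A first-hop validation of that response could look like the sketch below. The `status: "success"` value and the exact field set are assumptions; the contract only guarantees that status, `action="use_token"`, and token/identity fields must be checked rather than a bare 2xx:

```typescript
// Sketch: registration clients must validate the canonical response shape,
// not trust a bare 2xx or a loose status field.
interface AutoRegisterResponse {
  status: string;
  action: string;
  tokenId?: string;
}

export function isConfirmedRegistration(body: unknown): body is AutoRegisterResponse {
  if (typeof body !== "object" || body === null) return false;
  const r = body as Record<string, unknown>;
  // action="use_token" is the only supported completion path per the contract;
  // the "success" status literal is an assumption for illustration.
  return r.status === "success" && r.action === "use_token" && typeof r.tokenId === "string";
}
```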
That same canonical /api/auto-register contract must also accept
caller-supplied Proxmox token completion: when a runtime-side Unified Agent or
generated flow has already created the canonical token locally, the request may
carry tokenId and tokenValue, and the response must stay on the direct-use
action="use_token" contract as the only supported completion path.
That same runtime transport contract also governs the agent-ingest boundary in
internal/api/agent_ingest.go and internal/api/router*.go: the primary
request/response surface is the Pulse Unified Agent route family, while
/api/agents/host/* stays a compatibility alias and must not leak back into
handler naming, router-owned state, or proof labels as if it were a second
product-facing API surface.
That confirmation marker must survive the legacy setup-script transport too:
script-generated /api/auto-register payloads must send source="script",
and canonical callers must send that source explicitly, so later canonical
reruns can distinguish real confirmed credentials from agent-created tokens.
That same /api/auto-register request contract must also reject non-canonical
source values outright: only source="agent" and source="script" are valid,
so the backend does not preserve arbitrary caller labels as accidental API
surface.
That same /api/auto-register request contract must also reject non-canonical
node types outright: only type="pve" and type="pbs" are valid, so the
backend does not complete unsupported runtime labels as fake successful
registrations.
That same /api/auto-register request contract must also reject non-canonical
token identities outright: tokenId must be a Pulse-managed canonical
identifier in the form pulse-monitor@{pve|pbs}!pulse-<canonical-scope-slug>
matching the requested node type, so the backend does not preserve arbitrary,
cross-type, or non-Pulse-managed token IDs as accidental API surface.
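That identifier rule can be sketched as a single check; the slug character set here is an assumption modeled on the `pulse-<canonical-scope-slug>` wording:

```typescript
// Sketch: accept only Pulse-managed canonical token identities of the form
// pulse-monitor@{pve|pbs}!pulse-<slug>, and require the realm to match the
// requested node type so cross-type IDs fail closed.
export function isCanonicalTokenId(tokenId: string, nodeType: "pve" | "pbs"): boolean {
  const m = /^pulse-monitor@(pve|pbs)!pulse-[a-z0-9][a-z0-9-]*$/.exec(tokenId);
  return m !== null && m[1] === nodeType;
}
```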
That same caller-supplied token contract must also stay deterministic across
the live registration clients: installer, setup-script, and runtime-side Unified Agent Proxmox
flows must converge on the same Pulse-managed pulse-<canonical-scope-slug>
token name for the same Pulse endpoint instead of serializing caller-local
timestamp variants into the canonical /api/auto-register payload.
That same deterministic token-name contract also governs backend turnkey
credential setup: the password-based PBS add-node flow and generated
setup-script payloads must derive Pulse-managed token names from the canonical
Pulse endpoint itself rather than request-local Host fallbacks, so loopback
or proxy-facing admin requests cannot fork the token scope for the same Pulse
instance.
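A deterministic derivation in that spirit might look like the following. The slugging rules and the port handling are assumptions for illustration; the point is that the name is a pure function of the canonical Pulse endpoint, never of a request-local Host header:

```typescript
// Sketch: derive one Pulse-managed token name per Pulse instance from the
// canonical endpoint, so installer, setup-script, and agent flows converge
// on the same pulse-<canonical-scope-slug> name with no timestamp variants.
export function pulseMonitorTokenName(canonicalPulseUrl: string): string {
  const u = new URL(canonicalPulseUrl);
  const scope = u.port ? `${u.hostname}-${u.port}` : u.hostname;
  const slug = scope
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
  return `pulse-${slug}`;
}
```

Because the input is the canonical endpoint itself, a loopback or proxy-facing admin request cannot fork the token scope for the same instance.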
That same generated setup-script payload must now also opt into the canonical
registration contract explicitly: locally created Proxmox token completions
must send tokenId and tokenValue as the canonical request shape.
That same request contract must also accept one-time setup-token auth through
authToken only, so /api/auto-register does not keep a duplicate
setupCode payload alias alongside the canonical field.
That same shared discovery transport surface must also keep structured error
ownership in the runtime model: pkg/discovery and internal/discovery own
structured_errors, while internal/api/config_discovery_handlers.go,
internal/api/config_setup_handlers.go, and internal/api/config_node_handlers.go
may derive the deprecated errors string list only as a compatibility field
at the API and WebSocket boundary.
That same WebSocket state boundary must also stay tenant-aware by construction:
internal/websocket may not keep a separate default-org state getter beside the
tenant-aware state path, and default-org snapshots must flow through the same
org_id="default" contract used for non-default organizations.
That same canonical auth contract must also keep its runtime and user-facing
terminology on setup tokens: active /api/auto-register auth failures and the
owning handler/proof names may not drift back to setup-code wording after the
payload contract has been canonicalized.
That same first-session security boundary also governs bootstrap-token
persistence and retrieval: the one-time setup secret may remain recoverable
through the supported pulse bootstrap-token command, but .bootstrap_token
may not remain a raw plaintext secret file on disk. Canonical runtime
persistence must encrypt that token at rest and rewrite any legacy plaintext
bootstrap-token file immediately into the encrypted canonical format on load.
That same first-session contract also owns the dev/test reset response used by
managed-backend proof: /api/security/dev/reset-first-run may exist only for
development verification, must require authenticated settings:write, must
clear persisted auth state through the shared auth-env and token-persistence
helpers, and must return the regenerated bootstrapToken together with the
canonical bootstrapTokenPath needed to re-enter first-run deterministically.
That same setup-token-only contract must also keep missing-token failures
specific: /api/auto-register may not answer a missing authToken request
with a generic authentication error after the route has been narrowed to the
setup-token flow.
That same canonical request contract must also keep field-validation failures
specific: mismatched tokenId/tokenValue input may not collapse into
generic missing-field output, and other missing canonical fields must return
explicit Missing required canonical auto-register fields: ... guidance.
That same request/validation contract must stay coherent across both entry
points on the canonical runtime surface: the public /api/auto-register
handler and the direct canonical handler path may not drift onto different
messages for the same missing-field or token-pair failures.
That same canonical request contract must also require an explicit
serverName field from live callers rather than synthesizing node identity
from host inside the backend.
That same canonical backend contract must also keep overlap-continuity runtime
messages on canonical /api/auto-register wording: the helper/log surface for
resolved-host matches, DHCP continuity matches, and in-place token updates may
not preserve the deleted "secure auto-register" split.
That same canonical runtime path must keep token-completion validation wording
on the canonical contract too: incomplete tokenId/tokenValue payloads may
not preserve deleted "secure token completion" wording in live handler
messages.
That same canonical request contract also governs runtime-side Unified Agent-initiated
Proxmox completion: callers must fetch and use a one-time setup token in
authToken instead of carrying long-lived admin authentication directly on
/api/auto-register.
That same canonical caller-supplied completion request shape also governs scripts/install.sh:
installer-owned Proxmox auto-registration must submit local token creation
results with tokenId and tokenValue on the canonical /api/auto-register contract instead
of emitting any alternate payload shape.
The unified-agent uninstall command contract must also fail closed on
token-required Pulse instances: copied shell and PowerShell uninstall payloads
must use the same resolved token source as install and upgrade, so required
auth cannot silently collapse into tokenless deregistration transport.
Agent profile assignment payloads now also fail closed on missing profiles:
POST /api/admin/profiles/assignments must reject unknown profile_id
references with the canonical not-found response instead of writing orphan
assignment rows that no governed UI can represent.
That same not-found assignment contract must propagate through the shared
frontend client path: frontend-modern/src/api/agentProfiles.ts must surface
the canonical missing-profile message for 404 assignment responses, and the
settings profile surfaces in AgentProfilesPanel.tsx and InfrastructureInstallerSection.tsx
must treat that message as a resync trigger so stale profile options do not
survive after the backend has already rejected them.
That same shared response contract must also fail closed on malformed list
payloads: the profile-management client may not treat non-array profile or
assignment responses as empty collections, and AgentProfilesPanel.tsx /
InfrastructureInstallerSection.tsx must surface the resulting load failure instead of
flattening it into a fake zero-profile state.
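A fail-closed list parse in that client might be sketched as below; the function name and record shape are illustrative, not the real agentProfiles.ts API:

```typescript
// Sketch: reject non-array list payloads instead of flattening them into an
// empty collection, and reject partial profile objects inside the list.
export function parseProfileList(body: unknown): { id: string; name: string }[] {
  if (!Array.isArray(body)) {
    throw new Error("malformed profile list response: expected an array");
  }
  return body.map((p) => {
    if (typeof p !== "object" || p === null || typeof (p as any).id !== "string") {
      throw new Error("malformed profile object in list response");
    }
    return p as { id: string; name: string };
  });
}
```

The caller surfaces the thrown load failure instead of rendering a fake zero-profile state.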
That same shared response contract must also fail closed on malformed
profile-object, suggestion, schema, and validation payloads: the
profile-management client may not accept partial profile objects, malformed
schema definitions, or malformed validation/suggestion bodies as successful
contract responses, and the profile editor plus suggestion modal must surface
those canonical response failures instead of collapsing them into generic
save/delete/schema/validation fallback messaging.
The canonical Proxmox auto-register contract must also preserve legacy DHCP
continuity semantics: when /api/auto-register receives the same
canonical node name together with the deterministic Pulse-managed token ID for
that node, it must update the existing PVE or PBS entry in place even if the
host IP has changed, rather than duplicating the node under a second endpoint.
That same /api/auto-register payload contract must now also accept ordered
candidateHosts from runtime-side Proxmox callers and treat host as the
preferred candidate, not an untouchable answer. The backend must normalize the
candidate list, ignore invalid alternates, and persist the first candidate it
can actually reach for TLS fingerprint capture from Pulse's own network view so
registration payloads do not lock in an endpoint the server cannot later poll.
That same response contract must echo the stored reachable candidate back in
the canonical host field, not the caller's rejected first preference, so
runtime-side Unified Agent confirmation and later setup/install surfaces stay
aligned on the actual persisted polling endpoint.
The unified-agent install endpoints now also carry an exact-release fallback
contract: when /install.sh or /install.ps1 cannot be served locally, the
backend must proxy the install script asset from the exact GitHub release that
matches serverVersion and must fail closed for dev or unreleased builds
rather than serving branch-tip installer logic.
That same response contract now also owns signed release-asset headers:
published agent-binary and installer downloads served through
internal/api/unified_agent.go must surface X-Checksum-Sha256,
X-Signature-Ed25519, and the base64-encoded detached X-Signature-SSHSIG,
and release-tagged local assets must not bypass that header contract just
because the binary or script is present on disk.
That same transport rule is now explicit about prerelease classes too: only
stable tags and explicit RC prerelease tags without build metadata qualify as
published install-script release assets. Working-line dev prereleases such as
v6.0.0-dev, git-described builds with +git... metadata, and other
non-published prerelease identifiers must fail closed on that shared
internal/api/unified_agent.go boundary instead of generating fake GitHub
release URLs from a local runtime version string.
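The tag classification could be sketched as a single predicate. The exact RC spelling (`-rc.N`) is an assumption; the contract only fixes the classes (stable and RC qualify, dev prereleases and `+git...` builds fail closed):

```typescript
// Sketch: decide whether serverVersion names a published release asset.
// Build metadata ("+...") and non-RC prereleases such as -dev never qualify.
export function isPublishedReleaseTag(version: string): boolean {
  if (version.includes("+")) return false; // git-described / build metadata
  return /^v\d+\.\d+\.\d+(-rc(\.\d+)?)?$/.test(version);
}
```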
The /api/updates/plan contract must also fail closed without becoming a
transport error on supported non-auto-update deployments: manual,
development, and source runtimes must return an explicit manual update
plan payload instead of 404 No updater for deployment type, so first-session
and settings surfaces do not treat valid deployment modes as broken update
transport.
Those same install-command payloads now also carry a non-TLS continuity
contract: when Pulse returns a plain http:// base URL for a generated agent
install command, the command must include --insecure so the installed agent
keeps its update path alive on lab or self-hosted targets instead of silently
skipping updater checks after the first install.
The same plain-HTTP continuity rule applies to governed frontend-generated
host install transport too: shared Unix install command builders must append
--insecure for http:// Pulse URLs so setup-completion copies cannot drift from
the lifecycle contract already enforced in the unified settings surface.
That same frontend install-command contract must also fail closed on blank
local overrides: whitespace-only custom Pulse endpoint input in
InfrastructureInstallerSection.tsx or SetupCompletionPanel.tsx may not override the canonical
backend-governed endpoint, and the shared install-command helper must reject
blank base URLs instead of composing installer script paths from an empty
transport root.
That same install-command payload contract also covers backend-generated
Proxmox install responses in internal/api/agent_install_command_shared.go:
the /api/agent-install-command payload and hosted tenant Proxmox install
payload must emit the same root-or-sudo Unix wrapper contract as the governed
frontend builder, rather than exposing a stale raw | bash -s -- transport
shape through the API surface.
That same rule applies to the unified settings shell lifecycle copies:
frontend-generated Unix install and upgrade commands must append --insecure
for http:// Pulse URLs automatically, while only the explicit insecure-TLS
toggle may widen curl transport itself to -k.
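Those two rules (blank-override rejection and plain-HTTP continuity) can be sketched in one builder. The flag names follow the contract text, but the wrapper shape shown here is a simplification of the governed root-or-sudo contract:

```typescript
// Sketch of the shared Unix install-command contract: refuse blank base URLs,
// and append --insecure automatically for plain-http Pulse URLs so the
// installed agent's update path stays alive.
export function buildUnixInstallCommand(baseUrl: string, token?: string): string {
  const url = baseUrl.trim().replace(/\/+$/, "");
  if (url === "") {
    throw new Error("blank Pulse base URL: refusing to compose installer path");
  }
  const args: string[] = [`--url '${url}'`];
  if (token) args.push(`--token '${token}'`);
  if (url.startsWith("http://")) args.push("--insecure");
  // Note: only the explicit insecure-TLS toggle may widen curl itself to -k.
  return `curl -fsSL '${url}/install.sh' | sudo bash -s -- ${args.join(" ")}`;
}
```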
That same unified settings install boundary must also preserve preview/copy
parity: the rendered Linux/macOS/BSD and Windows install snippets in
InfrastructureInstallerSection.tsx must already reflect the active token contract, custom-CA
transport, insecure/plain-HTTP behavior, install-profile env/flags, and
command-execution mode, rather than showing a stale base command that is only
rewritten at copy time.
The loopback-originated install and setup payloads now also preserve the full
configured PublicURL when that URL is the canonical external route, instead
of rewriting only the host and inheriting an http:// request-local scheme
that would drift the generated command away from the governed public endpoint.
The canonical frontend client contract for Proxmox setup transport now also
applies to /api/setup-script-url and /api/setup-script: governed settings
surfaces must request quick-setup commands and manual setup-script downloads
through shared frontend-modern/src/api/nodes.ts helpers for both type:"pve"
and type:"pbs", preserving the runtime-owned bootstrap artifact metadata
instead of open-coding one node type onto raw fetch branches.
That same /api/setup-script-url response contract must now also preserve the
canonical bootstrap identity explicitly through returned type and normalized
host, and the handler must reject missing or unsupported type/host
input instead of minting open-ended setup tokens with caller-local host
formatting.
That same setup-script-url boundary must keep a strict request shape too: the
handler accepts one canonical JSON object only, and unknown fields or trailing
JSON must fail closed as invalid request shape instead of being ignored as
forward-compatible extras.
That same bootstrap request boundary must also keep backupPerms truthful:
the flag is part of the canonical PVE setup contract only, so /api/setup-script
and /api/setup-script-url must reject it for type:"pbs" instead of
silently accepting a transport-level no-op.
That same setup bootstrap contract also keeps host identity explicit across
both routes: /api/setup-script and /api/setup-script-url must reject
missing host input instead of issuing placeholder-host artifacts that only
fail later during execution.
That same request boundary must also keep canonical type and host handling
aligned across both setup routes: /api/setup-script may not treat unknown
type values as implicit PBS requests, and it must normalize the supplied
host before rendering script text so returned artifacts and rerun URLs preserve
the same canonical node identity as /api/setup-script-url.
That same setup bootstrap contract also keeps Pulse identity explicit across
both routes: /api/setup-script may not derive pulse_url from the request
origin once /api/setup-script-url is already returning canonical Pulse URL
metadata, and missing pulse_url input must fail closed instead of silently
forking the bootstrap surface onto request-local origin state.
That same canonical bootstrap response shape must also stay enforced by the
shared frontend setup client in frontend-modern/src/api/nodes.ts, so
settings-owned quick-setup flows fail closed on malformed type, host,
url, downloadURL, command, setupToken, tokenHint, or expires
fields instead of passing raw backend JSON deeper into lane-local UI state. That shared client
must validate the returned setupToken but may not expose or retain it once
the operator-facing surface only needs the runtime-owned bootstrap artifact
plus masked tokenHint.
That frontend bootstrap consumer must also treat expires as a live-expiry
field, not merely a positive number, so expired setup-script-url responses are
rejected before quick-setup UI state or copy actions trust the returned setup
token.
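A client-side validator for that envelope could be sketched as follows. The field names come from the contract text; the epoch-seconds encoding of expires and the error wording are assumptions:

```typescript
// Sketch: fail closed on malformed setup-script-url responses, and treat
// expires as a live-expiry field rather than merely a positive number.
interface SetupScriptUrlResponse {
  type: "pve" | "pbs";
  host: string;
  url: string;
  downloadURL: string;
  command: string;
  tokenHint: string;
  expires: number; // epoch seconds (assumed encoding)
}

export function validateBootstrapArtifact(body: any, now: number): SetupScriptUrlResponse {
  for (const field of ["type", "host", "url", "downloadURL", "command", "tokenHint", "expires"]) {
    if (body?.[field] === undefined) {
      throw new Error(`malformed bootstrap response: missing ${field}`);
    }
  }
  if (body.type !== "pve" && body.type !== "pbs") {
    throw new Error("malformed bootstrap response: bad type");
  }
  if (typeof body.expires !== "number" || body.expires <= now) {
    throw new Error("bootstrap artifact already expired");
  }
  return body as SetupScriptUrlResponse;
}
```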
That same settings quick-setup surface must consume the canonicalized response
directly:
frontend-modern/src/components/Settings/NodeModalSetupGuideSection.tsx
inside
frontend-modern/src/components/Settings/ConnectionEditor/CredentialSlots/NodeCredentialSlot.tsx
must copy the governed token-bearing commandWithEnv field but render
commandWithoutEnv as the visible preview, using the guaranteed expires
value without reintroducing module-local nullable fallbacks. The same shared
surface must
also treat setupToken as bootstrap transport data and tokenHint as the
operator-facing display field, so the UI does not re-expose the full one-time
token once the copied/downloaded artifact already carries it. That preview
secrecy rule must stay symmetric across both supported Proxmox types, so the
PBS quick-setup branch may not preserve the token-bearing preview after the
PVE branch has moved to the governed commandWithoutEnv display contract.
That same quick-setup guidance must also stay truthful after the preview is
masked: copy-success messaging may not tell the operator to paste a token
"shown below" once only tokenHint remains visible, and stale raw-token
cleanup paths may not survive in one Proxmox branch after the shared UI state
has moved to hint-only handling.
That same shared settings consumer must keep command-driven setup and manual
credential submission distinct. When a new PVE/PBS setup is in Agent Install or
Direct Connection setup-script mode, the settings UI must not render Token
ID/Value fields, Test Connection, or Add Node controls; those controls are only
valid for Manual Token Setup or existing-node edit flows.
That same shared frontend setup surface must also trim and validate the
canonical host input before invoking /api/setup-script downloads, and the
shared frontend-modern/src/api/nodes.ts helper must reject empty host or
pulseUrl inputs instead of serializing whitespace-corrupted query params.
That same /api/setup-script payload contract must also stay explicit at the
artifact boundary: successful responses are shell-script downloads with
canonical text/x-shellscript content type plus an attachment filename, and
the shared frontend-modern/src/api/nodes.ts client must reject malformed
download headers instead of flattening script delivery into an untyped text
blob.
That same setup bootstrap contract must also keep manual download
non-interactive without depending on a separately rendered secret: the
setup-script-url payload must return a token-bearing downloadURL, and the
shared frontend client must fetch setup scripts through that field instead of
reusing the plain script url that omits the setup token.
That same shared frontend setup surface must also treat
/api/setup-script-url as the canonical bootstrap artifact source for the
current host/type/mode: quick-setup copy and manual script download must reuse
the returned url, downloadURL, scriptFileName, commandWithEnv,
tokenHint, and expires until that artifact expires or the operator changes
the endpoint, instead of rebuilding a second download request from lane-local
form state or retaining the raw setup token inside frontend cache state.
That same bootstrap artifact contract must also stay coherent in public-facing
guidance: docs/API.md and operator setup guides may not describe
/api/setup-script-url as if it only returned a token plus bare URL, and they
may not publish stale curl -sSL ... | bash setup examples after the runtime
and settings surfaces have standardized on the returned canonical command*
fields.
That same setup-script-url payload contract must also return the canonical
setup-script filename as scriptFileName, and the shared settings/bootstrap
consumer may not hardcode separate script names for PVE or PBS once the
runtime-owned filename is available.
That same setup-script-url payload must remain a coherent bootstrap artifact
envelope for all live consumers, not only the frontend: url,
downloadURL, scriptFileName, command, commandWithEnv,
commandWithoutEnv, and masked tokenHint are part of the canonical response
shape, and runtime-side Unified Agent/installer consumers must fail closed when those fields
are missing or mismatched instead of silently treating the response as
setup-token-only.
That same consumer contract must also treat expires as a live-expiry field,
not merely a populated one: installer and runtime-side Unified Agent callers must reject
bootstrap responses whose returned expiry timestamp is already in the past.
That same setup-script-url auth boundary must stay explicit too: returned
setupToken values bootstrap /api/setup-script and /api/auto-register,
but they do not authenticate the /api/setup-script-url request itself once
Pulse auth is configured.
That same setup-script-url payload contract now also fixes the shell transport
it returns: the command, commandWithEnv, and commandWithoutEnv fields
must use shell-quoted curl -fsSL fetches assembled through a shared backend
helper rather than a handler-local curl -sSL pipeline.
Those returned setup-script command fields must also preserve the governed
root-or-sudo execution contract, including carrying PULSE_SETUP_TOKEN
through the sudo path when present instead of assuming direct-root execution.
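The shape of that returned command can be sketched as below (the real helper is backend-owned Go; this TypeScript sketch only illustrates the composition, and the exact wrapper text is an assumption):

```typescript
// Sketch: shell-quoted curl -fsSL fetch plus a root-or-sudo wrapper. sudo
// drops caller environment by default, so PULSE_SETUP_TOKEN must be carried
// inside the sudo invocation instead of assuming direct-root execution.
export function quickSetupCommand(downloadURL: string, setupToken?: string): string {
  const fetch = `curl -fsSL '${downloadURL}'`;
  const env = setupToken ? `PULSE_SETUP_TOKEN='${setupToken}' ` : "";
  return `${fetch} | sudo ${env}bash`;
}
```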
That same setup-script contract now also covers the generated script text:
operator guidance embedded in /api/setup-script responses must keep the same
fail-fast curl -fsSL fetch wording for retry and missing-host examples
instead of returning stale curl -sSL transport in the script payload.
That embedded guidance must also advertise the same root-or-sudo execution
shape as the API-returned quick-setup command instead of drifting onto a
direct-root-only | bash retry path inside the script payload.
That same script-payload guidance must preserve PULSE_SETUP_TOKEN across
those retry examples too, so the generated script text does not drop the
non-interactive setup-token contract even when it preserves the shell wrapper.
That same generated-script payload must also hydrate PULSE_SETUP_TOKEN from
an embedded setup token before those rerun examples are shown, so canonical
setup_token-issued scripts keep the same non-interactive contract on the
next hop instead of silently reverting to a prompt.
That same /api/setup-script boundary must keep one token name too: embedded
bootstrap uses only the setup_token query, and the rendered setup script body
uses only PULSE_SETUP_TOKEN rather than keeping AUTH_TOKEN or
SETUP_AUTH_TOKEN compatibility aliases alive.
That same generated-script payload must also remove discovered legacy tokens
from the concrete pve and pam token lists it already enumerated, rather
than iterating an undefined shell variable and silently turning operator-chosen
cleanup into a no-op.
That same generated-script payload must also preserve the canonical encoded
rerun URL contract: embedded SETUP_SCRIPT_URL values must carry the exact
selected host, pulse_url, and backup_perms query state instead of
reconstructing a raw query string inside the shell.
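Building that embedded rerun URL up front might look like this sketch; the query parameter names follow the contract text, while the route path composition is an assumption:

```typescript
// Sketch: encode the exact selected host/pulse_url/backup_perms query state
// into SETUP_SCRIPT_URL before embedding it, instead of reconstructing a raw
// query string inside the generated shell script.
export function setupScriptRerunUrl(
  pulseUrl: string,
  host: string,
  backupPerms: boolean,
): string {
  const base = pulseUrl.replace(/\/+$/, "");
  const q = new URLSearchParams({ host, pulse_url: pulseUrl });
  if (backupPerms) q.set("backup_perms", "true");
  return `${base}/api/setup-script?${q.toString()}`;
}
```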
That same off-host branch may not advertise a second manual pveum token
creation contract either; when the runtime lacks Proxmox host tooling, the
payload must direct operators back to rerun on the host through the canonical
generated command instead of inventing a separate Pulse Settings token-entry
workflow.
That same script payload must also preserve canonical privilege-error wording
for direct execution: the generated runtime may not regress to the stale
"Please run this script as root" string and must instead use the same root
requirement language as the governed retry examples.
That same manual-add payload must also preserve one canonical token placeholder
string when the script cannot echo the secret again from process state, rather
than drifting across neighboring branches with lane-local variants like
"[See above]" or "Check the output above...".
That same payload must also preserve one canonical success-message contract
across generated PVE and PBS scripts, rather than returning node-type-specific
phrasing for the same successful auto-register result.
That same setup-script payload must also discover legacy cleanup candidates
through the canonical Pulse-managed token prefix for the active Pulse URL,
while still matching legacy timestamp-suffixed variants, instead of rebuilding
an IP-derived regex that can drift from buildPulseMonitorTokenName.
That same cleanup-discovery contract applies to both generated PVE and PBS
setup-script payloads; node type may not fork onto different legacy token-name
matching rules for the same Pulse-managed token surface.
That same payload must also use exact token-name matching for rerun rotation
detection, rather than broad substring checks over token-list output, so the
canonical managed token contract does not collide with unrelated partial-name
matches.
That same payload must also keep PBS token-copy guidance truthful: the
one-time token banner may only be emitted from the successful token-create
branch, not before the creation result is known.
That same payload must also keep PBS auto-register attempt guidance truthful:
the generated script may only print its attempt banner on the branch that is
actually about to send the registration request, not before token-unavailable
or missing-auth skip handling.
That same payload must also fail closed when token creation output does not
yield a usable token value: the generated script may not continue into prompt
or request assembly with an empty token secret, and must instead stop on the
canonical token-value-unavailable branch before any registration POST is built.
That same setup-script payload must also fail closed on auto-register success
parsing: the generated script may not treat any bare success substring as a
successful response, and must instead require an explicit success:true
signal before claiming registration succeeded.
That same payload contract must also fail closed on auto-register transport:
the generated script must use fail-fast curl -fsS request transport and only
evaluate the response payload after a successful curl exit status, rather than
parsing ambiguous stderr or HTTP-failure output as a valid registration body.
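The generated script implements this in shell, but the fail-closed parsing invariant can be sketched in TypeScript for illustration (function name and body shape are hypothetical; only the explicit `success:true` requirement comes from the contract above):

```typescript
// Hypothetical sketch of the fail-closed success check: only an explicit
// success:true signal in a parseable response body counts as success.
function isAutoRegisterSuccess(rawBody: string): boolean {
  // A transport failure or empty body can never be treated as success.
  if (rawBody.trim() === "") return false;
  try {
    const parsed: unknown = JSON.parse(rawBody);
    // Require the literal boolean true, not any truthy success substring.
    return (
      typeof parsed === "object" &&
      parsed !== null &&
      (parsed as { success?: unknown }).success === true
    );
  } catch {
    // Unparseable output (for example curl error text) fails closed.
    return false;
  }
}
```

Under this shape, a bare `"registration success"` string or a `"success":"true"` string field both fail closed, matching the no-substring-match rule.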
That same setup-script payload must also preserve the canonical auth guidance:
authentication failures in the generated script text must reference the active
Pulse setup-token flow, not stale API-token setup instructions, because the
payload now authenticates auto-register through one-time setup tokens.
That same auth-failure payload must also stay truthful after a request attempt:
once the generated script has already entered the registration-request path,
it may not fall back to a missing-token explanation and must instead report
that the provided setup token was invalid or expired, directing the operator
to fetch a fresh setup token from Pulse Settings → Nodes and rerun. The final
completion/footer path must honor that same auth-failure state instead of
reopening manual completion with the emitted token details.
That same payload must also preserve truthful completion messaging: generated
setup-script text may only announce successful Pulse registration when the
payload's auto-register branch succeeded, and must otherwise describe the
result as manual follow-up using the emitted token details.
That same manual-follow-up payload may not advertise a stale PULSE_REG_TOKEN
rerun contract: when auto-register falls back to manual completion, the script
text must direct the operator to Pulse Settings → Nodes with the emitted token
details rather than inventing a second registration-token flow.
That same manual-follow-up payload must also keep its failure-summary text on
that same canonical completion path: the generated script may not fall back to
vague "manual configuration may be needed" wording when it already knows the
operator should finish registration through Pulse Settings → Nodes with the
emitted token details.
That same immediate failure path may not fork into a separate numbered manual
setup list either; it must point directly at the same token-details-below
Settings → Nodes completion contract used by the final manual footer, including
the branch where the registration POST itself fails before a response payload
can be parsed.
That same manual-follow-up payload must also preserve the canonical host value
already carried by the script payload, instead of reverting to a placeholder
host string in the rendered manual-add instructions.
That same host-continuity contract also applies to generated PBS scripts: the
manual-add footer must preserve the canonical host payload value instead of
replacing it with a runtime-discovered local IP that may not match the API
contract the caller requested.
That same PBS payload contract must also bind the canonical host before any
setup-token gating that can skip auto-registration, so manual fallback output
cannot lose the host URL when the operator does not provide a setup token.
That same host binding must also precede token-creation failure fallback, so
the rendered manual footer still carries the canonical host payload even
when the script fails before any auto-register request can be assembled.
If the caller never supplied a canonical host at all, the rendered script
must fail closed instead of surfacing placeholder host values as manual
registration targets; it must direct the caller to regenerate the setup script
with a valid host URL.
That same payload must also preserve token-creation failure truth: when
Proxmox token minting fails, the rendered script may not emit placeholder token
details or report token setup completed. It must keep the host binding, skip
auto-register assembly, and tell the caller to rerun after the token-creation
error is fixed.
That same payload must also preserve token-extraction failure truth: if the
returned token output does not yield a usable token secret, the script may not
advertise manual registration as a fallback path from that broken payload and
must instead direct the caller to rerun after the token output issue is fixed.
Rendered completion and manual-detail payload branches must treat only an
extractable token secret as ready; token-create success alone is not enough.
That same rendered PBS payload must also distinguish skipped auto-register
states from attempted request failures, so missing setup-token input or missing
usable token secret cannot surface the generic request-failed-before-success
banner.
That same payload must also preserve canonical manual-completion phrasing
across generated PVE and PBS scripts: both must use the Settings → Nodes
manual-add language instead of diverging onto node-type-specific fallback
headings that imply different completion paths.
That same generated payload may not shorten the earlier auto-register failure
branch back to plain "Pulse Settings" wording either; both the immediate
failure guidance and the final manual footer must preserve the same Settings →
Nodes completion destination.
/api/charts/workloads-summary now also has a canonical hot-path invariant:
aggregate workload charts must preserve stable guest counts while batching
store-backed metric reads across workload types, with no payload shape change.
That endpoint now also carries an explicit API p95 budget under the same
store-backed mixed-workload fixture used to verify the batched hot path.
That same summary-chart contract now also owns synthetic mock fallback
identity. When internal/api/router.go needs to synthesize summary history
for workloads, infrastructure, or storage cards, it must key those series by
canonical resourceType, resourceID, and metricType instead of ad hoc
seed-prefix bounds, so all time ranges and runtime mock samples stay on one
governed timeline.
Frontend AI API clients now also normalize 402 Payment Required responses for
optional paywalled collections into explicit empty states, so Pulse Pro gating
does not become a transport error path during page bootstrap.
That frontend status handling must now route through the shared
frontend-modern/src/api/responseUtils.ts status helpers rather than through
message-text heuristics in individual API modules.
Optional not-found response handling in frontend API clients must now also use
those shared response-status helpers rather than open-coded response.status
branches in each module.
The same rule now applies to no-content and service-unavailable handling in
governed frontend API clients.
Governed frontend API clients must now also route required and safe success
payload parsing through the shared response parsing helpers rather than through
open-coded response.json() calls in each module.
The same rule now applies to optional success payload parsing, including lookup
responses that may legitimately return an empty body but must not use ad hoc
response.text() plus JSON.parse(...) branches in individual modules.
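A minimal TypeScript sketch of these shared helpers (names are illustrative and may not match frontend-modern/src/api/responseUtils.ts; the real helpers operate on fetch Response objects rather than the structural stand-in used here):

```typescript
// Structural stand-in for a fetch Response, to keep the sketch
// self-contained.
interface APIResponseLike {
  ok: boolean;
  status: number;
}

function isAPIResponseStatus(response: APIResponseLike, status: number): boolean {
  return response.status === status;
}

// Optional paywalled collection reads: 402 Payment Required collapses to
// an explicit empty state instead of becoming a transport error.
function resolveOptionalPaywalledBody<T>(
  response: APIResponseLike,
  bodyText: string,
  emptyState: T,
): T {
  if (isAPIResponseStatus(response, 402)) return emptyState;
  if (!response.ok) throw new Error(`API error ${response.status}`);
  // Lookup responses may legitimately return an empty body.
  return bodyText.trim() === "" ? emptyState : (JSON.parse(bodyText) as T);
}
```

The point of centralizing this is that 402, 404, and empty-body handling live in one place, so individual modules cannot drift onto message-text heuristics.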
Investigation and AI chat SSE event payload parsing must now also route through
the shared text-to-JSON helper in frontend-modern/src/api/responseUtils.ts
rather than through per-module JSON.parse(...) stream decoding.
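A hedged sketch of that shared text-to-JSON decode step (function name and the drop-malformed policy are illustrative, not confirmed helper behavior):

```typescript
// Minimal sketch of a shared SSE text-to-JSON decode step: each "data:"
// line becomes one parsed event, and malformed payloads are skipped
// instead of crashing per-module JSON.parse calls.
function decodeSSEEvents(chunk: string): unknown[] {
  const events: unknown[] = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data:")) continue;
    const payload = line.slice("data:".length).trim();
    if (payload === "") continue;
    try {
      events.push(JSON.parse(payload));
    } catch {
      // A malformed event payload is dropped, not fatal to the stream.
    }
  }
  return events;
}
```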
Nullable or legacy collection payloads in governed frontend API clients must
now also route through shared collection-normalization helpers in
frontend-modern/src/api/responseUtils.ts rather than through module-local
|| [], ?? [], or Array.isArray(...) fallback branches.
That rule now also covers patrol run history responses so malformed or legacy
run collections collapse through the shared helper instead of per-module
fallback lists.
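The shared collection-normalization helper can be sketched in a few lines (the helper name is illustrative):

```typescript
// Hypothetical shape of the shared collection-normalization helper: any
// nullable, legacy, or otherwise malformed payload collapses to a plain
// array instead of per-module || [] / ?? [] / Array.isArray branches.
function normalizeCollection<T>(value: unknown): T[] {
  return Array.isArray(value) ? (value as T[]) : [];
}
```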
The /api/ai/patrol/runs frontend history clients must now also route their
shared fetch plus run-normalization pipeline through one canonical local helper
in frontend-modern/src/api/patrol.ts rather than duplicating the same
endpoint-specific stack across each history variant.
That patrol run-history contract now also treats non-positive or malformed
limit query values as defaulted input and caps oversized requests to the
backend maximum, rather than letting invalid caller input widen the history
payload unexpectedly.
The frontend Patrol history clients in frontend-modern/src/api/patrol.ts
must mirror that normalization before sending the request: invalid and
non-positive caller input collapses back to the client default of 30, and
oversized requests clamp to the backend maximum of 100.
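A minimal TypeScript sketch of that clamp (the helper name is illustrative; only the default of 30 and the cap of 100 come from the contract above):

```typescript
const PATROL_HISTORY_DEFAULT_LIMIT = 30;
const PATROL_HISTORY_MAX_LIMIT = 100;

// Invalid or non-positive caller input collapses to the default;
// oversized requests clamp to the backend maximum.
function normalizePatrolHistoryLimit(limit: unknown): number {
  const parsed = typeof limit === "number" ? limit : Number(limit);
  if (!Number.isFinite(parsed) || parsed <= 0) {
    return PATROL_HISTORY_DEFAULT_LIMIT;
  }
  return Math.min(Math.floor(parsed), PATROL_HISTORY_MAX_LIMIT);
}
```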
Patrol run detail access for selected-history UX must now resolve a canonical
single-run contract at /api/ai/patrol/runs/{id} instead of probing bounded
history pages and hoping the target run is still inside a recent window; the
tool-call trace UI must fetch the selected run by ID, with
?include=tool_calls carrying the full trace only when explicitly requested.
Frontend investigation rendering for unified Patrol findings must also key off
finding-level investigation metadata, not only investigation_session_id:
the investigation detail endpoint is addressed by finding ID, so findings with
canonical investigation_status, investigation_outcome, or non-zero
investigation_attempts must still surface investigation UI even when the
session ID field is absent or blank.
That same Patrol findings UI contract must keep fix_queued approval recovery
actions visible even when no live pending approval remains and
/api/ai/findings/{id}/investigation resolves to null or omits
proposed_fix: queued remediation state cannot collapse into a dead badge with
no user action path.
Patrol run-history serialization and persistence must also preserve full field
parity across API responses and restart boundaries, including
pmg_checked, rejected_findings, triage_flags, triage_skipped_llm, and
explicit empty finding_ids or effective_scope_resource_ids arrays when a
run represents an empty snapshot or an intentionally empty effective scope.
The same patrol run-history contract now also treats
effective_scope_resource_ids as the canonical analyzed-resource scope when
present, including when it is an explicit empty array, and frontend snapshot
selection must treat an explicit empty finding_ids array as an empty snapshot
rather than falling back to unrelated current findings; a missing
finding_ids field must retain its "no snapshot filter available" meaning
rather than being collapsed into an empty snapshot.
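The tri-state snapshot distinction can be sketched as an illustrative decode (type and function names are hypothetical; only the missing-versus-explicit-empty semantics come from the contract above):

```typescript
// A missing finding_ids field means "no snapshot filter available";
// an explicit empty array means a real, empty snapshot.
type SnapshotSelection =
  | { kind: "no-filter" }
  | { kind: "snapshot"; findingIds: string[] };

function selectRunSnapshot(run: {
  finding_ids?: string[] | null;
}): SnapshotSelection {
  if (!Array.isArray(run.finding_ids)) {
    return { kind: "no-filter" };
  }
  // An explicit empty array stays an empty snapshot; it must not fall
  // back to unrelated current findings.
  return { kind: "snapshot", findingIds: run.finding_ids };
}
```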
That same frontend run-history path must also preserve and expose
triage_flags and triage_skipped_llm from canonical patrol run records so
deterministic triage-only runs do not collapse into generic "no analysis"
history entries.
Patrol status payloads now also treat quickstart credit state as canonical API
contract data: the Patrol status endpoint must surface
quickstart_credits_remaining, quickstart_credits_total, and
using_quickstart directly from backend runtime state so the frontend can
render Patrol quickstart availability without local heuristics or shadow
derived state.
That quickstart transport contract must also preserve the distinction between
credit inventory and live runtime path: zero remaining credits alone must not
force a blocked or exhausted operator presentation while Patrol is active on a
configured non-quickstart provider path.
Those same transport fields now also define the only public quickstart promise:
when pricing, README, or Patrol header copy references them, it must describe
Patrol-only quickstart runs and no-key Patrol activation on activated or
trial-backed installs rather than generic AI credits, anonymous bootstrap, or
hosted-chat access.
Hosted billing-state payloads now also carry the canonical quickstart grant
metadata used by hosted bootstrap and refresh flows. Billing reads and contract
proofs must preserve quickstart_credits_granted,
quickstart_credits_used, and quickstart_credits_granted_at as
backend-owned fields, so hosted entitlement refresh cannot silently drop a workspace
back to "no quickstart inventory" just because the lease or trial state was
rewritten.
That same Patrol status contract now also carries a canonical runtime_state
field, so the frontend can distinguish blocked, running, disabled, active,
and unavailable Patrol runtime states without deriving operator status from
stale health summaries, last-run history, or local blocked-reason heuristics.
The backend status payload must derive that blocked runtime state directly
from current quickstart-credit availability, and it must clear stale
quickstart-exhausted block metadata once credits or BYOK return, so
the Patrol status endpoint cannot leave Patrol looking healthy or paused based on
an out-of-date last-run artifact.
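One possible shape of that derivation, sketched in TypeScript (the input fields and precedence order are assumptions for illustration; only the five state names and the rule that zero credits block only the quickstart path come from the contract above):

```typescript
type PatrolRuntimeState =
  | "disabled"
  | "blocked"
  | "running"
  | "active"
  | "unavailable";

// Derive runtime_state from current runtime inputs, never from stale
// last-run artifacts or health summaries.
function derivePatrolRuntimeState(s: {
  enabled: boolean;
  serviceAvailable: boolean;
  runInProgress: boolean;
  usingQuickstart: boolean;
  quickstartCreditsRemaining: number;
}): PatrolRuntimeState {
  if (!s.serviceAvailable) return "unavailable";
  if (!s.enabled) return "disabled";
  // Blocked only when the live path is quickstart and credits are gone;
  // zero credits alone must not block a configured BYOK path.
  if (s.usingQuickstart && s.quickstartCreditsRemaining <= 0) return "blocked";
  return s.runInProgress ? "running" : "active";
}
```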
Patrol mutate endpoints that depend on the background service must also fail
closed with 503 Service Unavailable when AI service initialization is absent
rather than dereferencing a nil service and crashing before a contract response
is written.
The /api/recovery/rollups transport now also carries the same normalized
filter contract as /api/recovery/points, /api/recovery/series, and
/api/recovery/facets: cluster, node, namespace, workload scope,
verification, and free-text query filters must remain coherent across all four
recovery endpoints so the recovery UI cannot render mismatched protected-item
and history views for the same active filter set.
That same recovery API contract now also includes canonical provider-neutral
itemType transport. internal/api/recovery_handlers.go must normalize
provider-native aliases such as proxmox-vm onto the shared recovery item
type vocabulary before filters reach rollups, points, series, or facets, and
those same handlers must preserve that normalized shape back out through
display.itemType and facet option payloads instead of forcing frontend
surfaces to re-derive cross-platform recovery categories from raw
subjectType.
That same recovery API boundary now treats platform as the canonical
operator-facing query field across /api/recovery/rollups, /api/recovery/points,
/api/recovery/series, and /api/recovery/facets. The handlers may continue
mapping that boundary onto internal provider fields, but accepted legacy
provider aliases must be compatibility-only input and must not replace the
canonical transport query shape.
That same recovery API boundary must also treat itemResourceId as the
canonical linked-resource filter and payload field across those same
/api/recovery/* endpoints. Accepted legacy subjectResourceId aliases may
remain as compatibility-only input or secondary payload fields during the v6
transition, but the shared transport contract and frontend decode path must
normalize them back onto canonical itemResourceId.
That same recovery API boundary must also treat itemRef as the canonical
external item-reference field across point and rollup payloads. Accepted
legacy subjectRef aliases may remain as compatibility-only secondary fields
during the v6 transition, but the shared transport contract and frontend
decode path must normalize them back onto canonical itemRef.
That same outbound recovery transport now also treats platform and
platforms as the canonical response fields for point and rollup payloads.
Compatibility provider and providers fields may remain during the v6
transition, but the shared API contract and frontend decode path must treat
them as fallback aliases rather than the primary response vocabulary.
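A hedged sketch of the frontend decode path for those aliases (the interface and function names are illustrative; the canonical-wins, legacy-fallback ordering is the contract):

```typescript
// Canonical fields win; legacy aliases are compatibility-only fallback
// input during the v6 transition.
interface RawRecoveryPoint {
  platform?: string;
  provider?: string; // legacy alias
  itemResourceId?: string;
  subjectResourceId?: string; // legacy alias
  itemRef?: string;
  subjectRef?: string; // legacy alias
}

function decodeRecoveryPoint(raw: RawRecoveryPoint) {
  return {
    platform: raw.platform ?? raw.provider ?? "",
    itemResourceId: raw.itemResourceId ?? raw.subjectResourceId ?? "",
    itemRef: raw.itemRef ?? raw.subjectRef ?? "",
  };
}
```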
internal/api/contract_test.go must pin that alias behavior directly, so the
canonical platform query and the legacy provider fallback cannot drift
between recovery endpoints without tripping the shared API proof surface.
internal/api/contract_test.go is the canonical proof owner for that
boundary, so response payload shape plus route and query compatibility like
itemType, type, and legacy provider aliases must be pinned there
whenever the shared recovery transport shape changes.
The same rule now also covers optional nested node cluster endpoint collections
so frontend-modern/src/api/nodes.ts does not own its own
Array.isArray(node.clusterEndpoints) response-shape branch.
Canonical alert incident and bulk-acknowledge result payloads must now also
flow through frontend API clients without no-op per-module wrapper
normalization when the backend shape is already canonical.
Legacy alert_identifier compatibility promotion in unified finding and patrol
run payloads must now also route through one shared helper in
frontend-modern/src/api/responseUtils.ts rather than duplicated per-module
record wrappers.
AI frontend clients must now also call canonical status helpers and direct
URL-segment encoding behavior without module-local alias wrappers when those
wrappers add no contract value.
The discovery frontend client must now also centralize typed and agent route
construction through dedicated path builders rather than repeating route
templates or trivial collection-path aliases across each endpoint.
Notifications email config parsing and node cluster endpoint normalization must
now also route through shared scalar coercion helpers in
frontend-modern/src/api/responseUtils.ts rather than through per-module
string/boolean/number helper stacks.
The same shared scalar coercion rule now also applies to monitoring agent
lookup timestamps so lastSeen normalization does not live as a module-local
typeof/Date.parse(...) branch in frontend-modern/src/api/monitoring.ts.
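A minimal sketch of what such shared scalar coercion helpers might look like (names and fallback conventions are illustrative):

```typescript
function coerceString(value: unknown, fallback = ""): string {
  return typeof value === "string" ? value : fallback;
}

function coerceBoolean(value: unknown, fallback = false): boolean {
  return typeof value === "boolean" ? value : fallback;
}

// Timestamp coercion covering lastSeen-style fields: finite numbers pass
// through, parseable strings become epoch milliseconds, and anything
// else collapses to undefined.
function coerceTimestamp(value: unknown): number | undefined {
  if (typeof value === "number" && Number.isFinite(value)) return value;
  if (typeof value === "string") {
    const parsed = Date.parse(value);
    return Number.isNaN(parsed) ? undefined : parsed;
  }
  return undefined;
}
```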
The same scalar-coercion contract now also covers optional Proxmox
clusterEndpoints collections in frontend-modern/src/api/nodes.ts:
frontend consumers may normalize endpoint fields, but they must not fork the
canonical collection-shape guard or reintroduce legacy alert_identifier
field access once camelCase alertIdentifier has been promoted by the shared
response helpers.
The same frontend API contract now also governs Proxmox agent-install command
transport in frontend-modern/src/api/nodes.ts: the canonical client request
shape for /api/agent-install-command must support both type:"pve" and
type:"pbs" with the same explicit enableProxmox flag, so install-command
surfaces do not fork into ad hoc raw POST payloads for different Proxmox node
types. That same shared client boundary must also validate a non-empty
command response and keep the raw backend token field inside
frontend-modern/src/api/nodes.ts rather than leaking it into downstream UI
state. Downstream Proxmox install-command consumers like the extracted node
setup surface
(ConnectionEditor/CredentialSlots/NodeCredentialSlot.tsx,
NodeModalAuthenticationSection.tsx, NodeModalSetupGuideSection.tsx,
nodeModalModel.ts, and useNodeModalState.ts) must then surface those
canonical validation errors
directly rather than collapsing one node-type pane back to a generic
copy-generation failure.
Hosted organization-route gating now also falls under this API payload
boundary: when hosted tenants hit organization membership or billing surfaces
through internal/api/org_handlers.go and internal/api/router.go, inactive
subscriptions must fail with the canonical hosted 402 subscription_required
payload instead of reusing the self-hosted multi_tenant_disabled contract or
falling through to an untyped transport error.
Hosted signup and magic-link error payload normalization must now also route
through shared structured error normalization helpers in
frontend-modern/src/api/responseUtils.ts rather than through module-local
error-shape parsing functions.
Governed frontend API clients must now also route canonical non-OK response
throwing through the shared response assertion helper in
frontend-modern/src/api/responseUtils.ts rather than open-coding
throw new Error(await readAPIErrorMessage(...)) in each module.
The same governed modules must now also route assert-then-parse response
pipelines through shared required/optional response helpers in
frontend-modern/src/api/responseUtils.ts rather than repeating
assertAPIResponseOK(...); parseRequiredJSON(...) or parseOptionalJSON(...)
sequences in each client.
Hosted cloud-handoff and billing-admin payloads are canonical API contracts as
well. The handoff exchange must normalize the verified operator email before
it is written into the browser session and before it is returned in the JSON
success payload so session identity, org membership, and handoff payloads
cannot drift on email casing. Hosted billing-admin reads for non-default orgs
must also project the effective default-org hosted lease when the tenant-local
billing file has not been materialized yet, so admin billing-state payloads
stay coherent with the tenant's active entitlement payload instead of briefly
regressing to local trial/default state.
Canonical missing-resource lookups in governed frontend API clients must now
also route 404 => null response handling through shared response helpers in
frontend-modern/src/api/responseUtils.ts rather than open-coding local
status branches in discovery and monitoring clients.
Agent and guest metadata CRUD clients must now also route through one shared
metadata client in frontend-modern/src/api/metadataClient.ts rather than
duplicating the same get/update/delete/list transport logic in two files.
AI investigation and chat stream clients must now also route through one shared
SSE JSON event consumer in frontend-modern/src/api/streaming.ts rather than
duplicating reader lifecycle, timeout, chunk parsing, and event decoding logic
in each module.
Monitoring delete and idempotent mutate clients must now also route 404/204
success cases through shared allowed-status helpers in
frontend-modern/src/api/responseUtils.ts instead of open-coding local
status-branch stacks in each method.
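A hedged sketch of such an allowed-status helper (the name and default list are illustrative; the real helpers operate on fetch Response objects rather than the structural stand-in used here):

```typescript
// Structural stand-in for a fetch Response, to keep the sketch
// self-contained.
interface MutationResponseLike {
  ok: boolean;
  status: number;
}

// Delete-style and idempotent mutations treat 204 (done) and 404
// (already gone) as success instead of open-coding both branches in
// every client method.
function isAllowedMutationStatus(
  response: MutationResponseLike,
  allowed: number[] = [204, 404],
): boolean {
  return response.ok || allowed.includes(response.status);
}
```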
The docker-runtime and kubernetes-cluster resource clients in
frontend-modern/src/api/monitoring.ts must now also route shared delete,
allowed-missing mutation, and display-name transport mechanics through
canonical resource-oriented helpers in that file rather than duplicating the
same fetch-and-assert stacks across runtime and cluster variants.
The same monitoring resource clients must now also route shared no-body
POST actions and success-envelope command triggers through canonical
resource-oriented helpers in frontend-modern/src/api/monitoring.ts rather
than duplicating identical POST transport logic across reenroll and runtime
command endpoints.
Those helpers must stay named and structured in resource terms rather than
reintroducing managed-resource terminology, so the monitoring transport layer
matches the canonical resource model exposed elsewhere in v6.
Those monitoring command helpers must also preserve the canonical frontend
fetch-options contract: governed callers pass string-keyed headers only, and
empty-body success responses normalize through the shared success-envelope
parsing path rather than local response.ok branches.
Legacy persisted Unified Agent scope aliases from v5 and early v6 installs
must also canonicalize to the current agent:* scope identifiers at the
backend contract boundary, so existing installed agents continue to satisfy
agent:report, agent:config:read, agent:manage, and agent:enroll
requirements without manual token replacement after upgrade. That
canonicalization may live only at request-ingress and persistence/migration
boundaries; live token records, runtime scope checks, and API payloads may not
preserve or re-emit host-agent:* aliases.
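The ingress-time canonicalization can be sketched as a simple prefix rewrite (the function name is illustrative; the host-agent:* to agent:* mapping comes from the contract above):

```typescript
// Legacy host-agent:* scope aliases canonicalize to agent:* at the
// request-ingress and migration boundaries only; live token records and
// payloads carry only the agent:* form.
function canonicalizeAgentScope(scope: string): string {
  return scope.startsWith("host-agent:")
    ? `agent:${scope.slice("host-agent:".length)}`
    : scope;
}
```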
Agent profile delete and unassign clients must now also route canonical 204
success handling through shared allowed-status helpers in
frontend-modern/src/api/responseUtils.ts instead of open-coding local
if (!isAPIResponseStatus(response, 204)) branches.
Agent profile suggestion and monitoring display-name mutations must now also
route custom 503 and 404 user-facing error promotion through shared
custom-status error helpers in frontend-modern/src/api/responseUtils.ts
instead of open-coding local if (!response.ok) { if (isAPIResponseStatus(...)) throw new Error(...) } stacks.
Monitoring command-trigger clients must now also route empty-body
{ success: true } fallback behavior through a shared success-envelope helper
in frontend-modern/src/api/responseUtils.ts instead of open-coding
parseOptionalAPIResponse(response, { success: true }, ...) in each method.
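The success-envelope fallback itself reduces to a small normalization step, sketched here over an already-read body text (the function name is illustrative):

```typescript
// An empty-body success response normalizes to { success: true };
// non-empty bodies parse as the envelope they claim to be.
function normalizeSuccessEnvelope(bodyText: string): { success: boolean } {
  return bodyText.trim() === ""
    ? { success: true }
    : (JSON.parse(bodyText) as { success: boolean });
}
```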
AI chat SSE now also treats interactive question events as a canonical API
contract surface: backend and frontend must preserve session_id,
question_id, and the structured questions array without handler-local
rewrites or alternate payload aliases.
That same chat SSE contract must remain request-bound. If the HTTP request
context is canceled or the client disconnects, backend assistant execution
must cancel with the request rather than continuing on a detached background
context until an unrelated timeout expires.
Config-registration API contracts at /api/auto-register and
/api/config/nodes now also require deterministic automated proof: backend
verification must stub TLS fingerprint capture and Proxmox cluster-detection
probes rather than depending on live network reachability, so canonical
request/response verification reflects contract behavior instead of ambient
lab state.
That same canonical /api/auto-register response contract must preserve
node identity on success: nodeId must carry the resolved stored node name,
not the raw host URL or requested serverName, so registration payloads stay
aligned with fleet-control payload consumers.
That same response contract must also return the rest of the backend-owned
completion identity coherently: type, source, normalized host, and
matching nodeName must align with the saved node record so installer and
runtime-side Unified Agent callers do not keep separate local success identities after Pulse has
already canonicalized the node.
That same /api/auto-register contract also governs the
node_auto_registered WebSocket payload: it must emit the normalized stored
host plus the resolved stored node identity in name, nodeId, and
nodeName, rather than leaking raw request fields that can diverge from the
saved node record, together with the effective token id that was reused or
issued.
AI and agent-profile collection/detail clients must now also route apiFetchJSON
402/404 fallback behavior through shared API-error-status fallback helpers in
frontend-modern/src/api/responseUtils.ts instead of open-coding local
try/catch wrappers that map those statuses to [], { plans: [] }, or
null.
Paywalled Patrol remediation-intelligence responses must also scrub derived
metadata together with the collection itself: when remediation history is
license-locked, remediations, count, and stats must all collapse to an
explicit empty state rather than leaking paid history totals through a partial
payload.
Hosted billing-state payloads now also treat Stripe webhook-backed commercial
state as canonical API contract data: when checkout and subscription webhooks
persist paid state, plan_version, stripe_price_id, and limits.max_monitored_systems
must stay aligned instead of emitting paid-state payloads with an empty limits
map or stale canceled-state carryover.
That same hosted billing API boundary also owns runtime base-path resolution:
internal/api/payments_webhook_handlers.go must derive webhook dedupe and
customer-index storage from the shared runtime data-dir helper in
internal/config/config.go instead of carrying its own /etc/pulse fallback,
so hosted billing API side effects stay aligned with the same configured data
directory used by the rest of the product.
Not-found detail lookups in governed frontend API clients must now also route
through explicit status-based 404 handling rather than through broad
catch-all null fallbacks that hide real backend failures.
Session and CSRF persistence compatibility under internal/api/session_store.go
and internal/api/csrf_store.go now also has an explicit governed migration
proof route: legacy raw-token sessions.json and csrf_tokens.json files must
load through explicit migration helpers, rewrite immediately into hashed
canonical persistence, and stay covered by
internal/api/session_store_test.go, internal/api/csrf_store_test.go, plus
tests/migration/v5_session_db_test.go, rather than borrowing the generic
backend payload contract proof path.
That same governed auth persistence boundary must also stay owned by the
configured runtime data path instead of hidden package-singleton fallbacks:
session, CSRF, and recovery-token stores may not silently self-initialize on
/etc/pulse from first access or lock onto the first caller forever through
sync.Once; the configured router data path must remain the canonical owner of
those persistence stores, and reinitializing that data path must replace the
old runtime store rather than leaking prior-path state forward.
That same configured-path rule also applies to runtime auth/config reloads:
internal/config/watcher.go may use PULSE_AUTH_CONFIG_DIR only as an
explicit override, but otherwise it must watch the resolved runtime
ConfigPath / DataPath owner. The watcher may not probe /etc/pulse or
/data and silently override the configured path authority for .env and
api_tokens.json reloads.
That same configured-path rule also applies to manual auth env writes and
status reads under internal/api/router.go,
internal/api/router_routes_auth_security.go, and
internal/api/security_setup_fix.go: those handlers must resolve .env
through the shared auth-path helper instead of rebuilding /etc/pulse/.env
fallback logic inline.
That same governed auth persistence rule now also covers recovery-token state
under internal/api/recovery_tokens.go: raw recovery secrets may be minted for
one-time operator use, but recovery_tokens.json must persist only token
hashes and treat any legacy plaintext-token file as an explicit migration input
that is rewritten immediately into hashed canonical persistence on load instead
of leaving raw recovery secrets on the primary runtime disk path.
That same governed persistence rule also covers internal/config/persistence.go
API token metadata handling: api_tokens.json may hold only hashed token
records, but a legacy plaintext metadata file may only be migration input.
Canonical runtime persistence must rewrite plaintext API token metadata
immediately into encrypted-at-rest storage on load instead of treating the
unencrypted file as a normal primary path.
That same fail-closed persistence rule also applies to persisted OIDC refresh
tokens in internal/api/session_store.go: refresh tokens may only be loaded
from or saved to encrypted-at-rest session payloads, and the runtime must drop
them whenever session-store crypto is unavailable or the stored ciphertext is
not canonically decryptable instead of preserving plaintext-at-rest session
state.
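The fail-closed drop rule for persisted refresh tokens can be sketched like this. All names here are hypothetical stand-ins; the real session-store crypto lives in internal/api/session_store.go.

```go
package main

import (
	"errors"
	"fmt"
)

// session is a minimal stand-in for the persisted session payload.
type session struct {
	ID           string
	RefreshToken string
}

// decrypt is a hypothetical stand-in for session-store crypto.
func decrypt(ciphertext string, cryptoReady bool) (string, error) {
	if !cryptoReady || ciphertext == "" {
		return "", errors.New("crypto unavailable or ciphertext not decryptable")
	}
	return "decrypted:" + ciphertext, nil
}

// loadSession drops the refresh token whenever it cannot be canonically
// decrypted, instead of preserving plaintext-at-rest session state.
func loadSession(id, storedToken string, cryptoReady bool) session {
	s := session{ID: id}
	if plain, err := decrypt(storedToken, cryptoReady); err == nil {
		s.RefreshToken = plain
	} // fail closed: on any error the refresh token is simply absent
	return s
}

func main() {
	fmt.Println(loadSession("s1", "blob", false).RefreshToken == "")
}
```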
Hosted signup handler payload flow now also follows an explicit shared
boundary: internal/api/public_signup_handlers.go owns request/response and
magic-link payload semantics, while internal/hosted/provisioner.go owns the
shared org bootstrap and rollback mechanics that the hosted signup handler
invokes.
That shared public-signup response contract is now intentionally uniform for
syntactically valid requests: the route returns 202 Accepted with one generic
Pulse Account message whether provisioning/email side effects ran or were
suppressed by the owner-email rate limiter, while invalid request bodies and
true server failures remain explicit.
The API token settings surface now also follows the same explicit ownership
rule. Changes to frontend-modern/src/components/Settings/APITokenManager.tsx,
frontend-modern/src/components/Settings/apiTokenManagerModel.ts, and
frontend-modern/src/components/Settings/useAPITokenManagerState.ts must
carry this contract and the dedicated API-token management proof file instead
of remaining an unowned consumer of token scope labels, token assignment
visibility, and revoke-state presentation.
That shared API-token boundary must also stay under explicit proof routing on
both sides instead of relying only on broad settings-surface coverage on the
security side: token settings changes must continue to carry the direct
api-token-management-surface API-contract proof together with the
security-side surface proof.
That same shared commercial API boundary now also owns the local trial-start
transport contract. /api/license/trial/start may allow a short human-scale
burst of retries while the hosted redirect handoff remains canonical, but once
that burst is exceeded it must transition from 409 trial_signup_required to
429 trial_rate_limited and return the actual remaining backoff in both the
Retry-After header and the JSON details.retry_after_seconds payload instead
of a fixed window guess or a text-only error. internal/api/contract_test.go
must pin both the hosted-signup redirect response and the rate-limited response
in the same slice as any handler change.
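The burst-then-backoff transition can be sketched as below. The function name, burst size, and backoff plumbing are illustrative assumptions, not the shipped values; the point is the 409-to-429 transition and the real remaining backoff surfacing in both the Retry-After header and details.retry_after_seconds.

```go
package main

import (
	"fmt"
	"net/http"
)

// trialStartStatus sketches /api/license/trial/start semantics: within the
// human-scale burst the route returns 409 trial_signup_required (hosted
// redirect remains canonical); past the burst it returns 429
// trial_rate_limited with the actual remaining backoff, never a fixed guess.
func trialStartStatus(attempts, burst, remainingSeconds int) (code int, errCode string, retryAfter int) {
	if attempts <= burst {
		return http.StatusConflict, "trial_signup_required", 0
	}
	// retryAfter must be serialized identically into the Retry-After header
	// and the JSON details.retry_after_seconds field.
	return http.StatusTooManyRequests, "trial_rate_limited", remainingSeconds
}

func main() {
	fmt.Println(trialStartStatus(2, 3, 17))
	fmt.Println(trialStartStatus(4, 3, 17))
}
```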
That same shared commercial API boundary also owns hosted self-serve failure
transport semantics. Hosted trial request and verification failures may render
owned HTML pages, but they must preserve the originating Pulse instance and
customer form context instead of collapsing into generic control-plane failures
or dead-end text with no route back to the originating runtime.
That same boundary must also keep token scope presets lazily derived from the
canonical scope constants: apiTokenManagerModel.ts may expose
getAPITokenScopePresets(), but it must not publish an eagerly evaluated
top-level preset array that can reintroduce settings-chunk initialization-order
failures in production bundles.
That same boundary now also includes
frontend-modern/src/utils/apiTokenPresentation.ts, so token load/create/
revoke errors keep one governed customer-facing message source instead of
reappearing as hook-local strings.
That same token surface, together with frontend-modern/src/api/security.ts,
internal/api/security.go, internal/api/security_tokens.go, and
internal/api/system_settings.go, now also follows an explicit shared
boundary with security-privacy so auth posture, token authority, and
telemetry/privacy control semantics stop borrowing their governance only from
the broader API lane.
The /api/security/tokens payload contract now also carries explicit owner
binding: token create/list responses must preserve the originating
ownerUserId together with org scope so long-lived automation credentials
cannot appear detached from their intended human identity.
The shared direct-node/discovery settings boundary now also includes
frontend-modern/src/utils/infrastructureSettingsPresentation.ts, so the
customer-facing mutation and validation copy used by the governed runtime
hooks stays explicit under the same API-backed settings proof instead of
living as an unowned utility.
That same backend-owned config/settings boundary also owns shipped security-doc
references in operator guidance. internal/api/config_system_handlers.go and
shared setup helpers must not point API responses or runtime guidance at
GitHub main for security instructions that the running build already serves
locally; those references belong on the shipped /docs/SECURITY.md path.
That same governed token contract must fail closed on mutation. Limited-scope
API tokens may only create, rotate, or delete tokens whose effective scopes
are a subset of the caller's own scopes; token-management routes must not let a
settings-capable but narrower token revoke or replace a broader credential.
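The subset check behind that fail-closed mutation rule can be sketched as follows; this is a hypothetical matcher, and the shipped one may also need to handle wildcard or hierarchical scopes.

```go
package main

import "fmt"

// scopesSubset reports whether every requested scope is covered by the
// caller's own scopes. A settings-capable but narrower token that fails
// this check must not create, rotate, or delete the broader credential.
func scopesSubset(requested, caller []string) bool {
	have := make(map[string]bool, len(caller))
	for _, s := range caller {
		have[s] = true
	}
	for _, s := range requested {
		if !have[s] {
			return false // fail closed: narrower tokens cannot mint broader ones
		}
	}
	return true
}

func main() {
	caller := []string{"settings:write", "relay:mobile:access"}
	fmt.Println(scopesSubset([]string{"relay:mobile:access"}, caller))
	fmt.Println(scopesSubset([]string{"admin"}, caller))
}
```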
Those owner-bound credentials now also define the effective authenticated
principal on governed API routes: when token metadata carries ownerUserId,
RBAC and audit-facing auth resolution must use that bound user identity rather
than a detached synthetic token:<id> subject, while still preserving token
scope and org enforcement.
The onboarding QR payload flow now also carries explicit token-bound auth
semantics: when the frontend requests /api/onboarding/qr with a pairing
token, the API client must send that token explicitly so the returned payload
and deep link represent the exact minted pairing credential rather than the
ambient browser session, and the mobile-facing relay.url/relay_url fields
must normalize the stored relay instance endpoint to the app endpoint
(/ws/app) so mobile pairing never receives the instance-only /ws/instance
route.
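That endpoint normalization can be sketched as below. The handling of URLs with neither suffix is an assumption for illustration; the contract only requires that the instance-only /ws/instance route never reaches mobile pairing.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeRelayAppURL sketches the mobile-facing rewrite: a stored relay
// instance endpoint (/ws/instance) becomes the app endpoint (/ws/app)
// before it is emitted as relay.url / relay_url in the QR payload.
func normalizeRelayAppURL(stored string) string {
	if strings.HasSuffix(stored, "/ws/instance") {
		return strings.TrimSuffix(stored, "/ws/instance") + "/ws/app"
	}
	if !strings.HasSuffix(stored, "/ws/app") {
		// Assumption: bare endpoints also resolve to the app route.
		return strings.TrimRight(stored, "/") + "/ws/app"
	}
	return stored
}

func main() {
	fmt.Println(normalizeRelayAppURL("wss://relay.example.com/ws/instance"))
}
```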
Incoming organization-share payloads now also preserve requested access-role
semantics at the API boundary: /api/orgs/{id}/shares/incoming must hide
shares whose accessRole exceeds the caller's effective role in the target
organization instead of leaking share metadata that the caller cannot
legitimately accept or use.
That same inbound-sharing contract now also carries explicit target-org
consent semantics. POST /api/orgs/{id}/shares must create pending share
requests rather than granting live access immediately, target-org owners or
admins must accept or decline those requests through
POST /api/orgs/{id}/shares/incoming/{shareId}/accept and
DELETE /api/orgs/{id}/shares/incoming/{shareId}, and
/api/orgs/{id}/shares/incoming must expose pending requests only to those
target-org managers. Once accepted, the payload must preserve status,
acceptedAt, and acceptedBy, and accepted shares may remain visible only to
members whose effective role satisfies the share's accessRole.
Updating an already accepted share must also preserve that consent boundary:
changing the requested accessRole resets the share to pending and clears
the acceptance metadata so a source org cannot silently widen an approved
grant without a new target-side approval.
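The consent-reset transition can be sketched as a small state machine. Field and function names here are illustrative; only the payload fields status, acceptedAt, and acceptedBy come from the contract above.

```go
package main

import "fmt"

// share is a minimal stand-in for the org-share payload.
type share struct {
	Status     string // "pending" or "accepted"
	AccessRole string
	AcceptedAt string
	AcceptedBy string
}

// updateAccessRole sketches the consent-preserving update: changing the
// requested accessRole on an accepted share resets it to pending and clears
// the acceptance metadata, forcing a fresh target-side approval.
func updateAccessRole(s share, newRole string) share {
	if s.AccessRole == newRole {
		return s // no role change requested; existing consent stands
	}
	s.AccessRole = newRole
	s.Status = "pending"
	s.AcceptedAt = ""
	s.AcceptedBy = ""
	return s
}

func main() {
	s := share{Status: "accepted", AccessRole: "viewer", AcceptedAt: "2025-01-01", AcceptedBy: "u1"}
	out := updateAccessRole(s, "admin")
	fmt.Println(out.Status, out.AcceptedAt == "", out.AcceptedBy == "")
}
```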
Organization membership and authorization payloads now also follow an explicit
live-role contract: /api/orgs must list only organizations the caller
currently belongs to, and org-management endpoints must reflect member
promotion or demotion immediately rather than continuing to authorize from
stale owner/admin assumptions after the role change has already been
persisted.
System settings API payloads now also carry an explicit v6 channel contract:
updateChannel resolves to stable or rc with stable as the default, and
autoUpdateEnabled must serialize as false whenever the effective channel is
rc, even if stale persisted state or omitted request fields would otherwise
leave unattended updates enabled.
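The serialization rule can be sketched as a normalization pass, assuming a simplified two-field settings shape:

```go
package main

import "fmt"

// normalizeUpdateSettings sketches the v6 channel contract: unknown or
// omitted channels resolve to stable, and autoUpdateEnabled serializes as
// false whenever the effective channel is rc, regardless of stale persisted
// state or omitted request fields.
func normalizeUpdateSettings(channel string, autoUpdate bool) (string, bool) {
	if channel != "stable" && channel != "rc" {
		channel = "stable"
	}
	if channel == "rc" {
		autoUpdate = false
	}
	return channel, autoUpdate
}

func main() {
	fmt.Println(normalizeUpdateSettings("", true))
	fmt.Println(normalizeUpdateSettings("rc", true))
}
```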
Update API channel selection now also follows that same contract: /api/updates
surfaces accept only stable or rc, reject unsupported channel values at the
HTTP boundary, and must not allow a stable installation path to apply a
prerelease tarball even when a caller posts a direct GitHub release URL.
The /api/resources and /api/resources/stats handlers now also carry a
single-snapshot aggregation invariant: canonical aggregations.byType must be
derived from the same registry list snapshot used for that request's response
path, so the contract stays deterministic without paying for duplicate
registry-clone work on the hot path. That same governed resource contract now
also includes backend-derived policy and aiSafeSummary fields, and list,
detail, and child payloads must source those values from canonical unified
resource metadata rather than from frontend- or AI-local heuristics.
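The single-snapshot invariant can be sketched like this, with hypothetical types standing in for the unified resource registry:

```go
package main

import "fmt"

// resource is a minimal stand-in for a unified resource record.
type resource struct {
	ID   string
	Type string
}

// buildResponse sketches the invariant: the list payload and
// aggregations.byType are both derived from the one registry snapshot taken
// for this request, so they can never disagree and no duplicate
// registry-clone work is paid on the hot path.
func buildResponse(snapshot []resource) (list []resource, byType map[string]int) {
	byType = make(map[string]int)
	for _, r := range snapshot {
		byType[r.Type]++
	}
	return snapshot, byType
}

func main() {
	snap := []resource{{"a", "vm"}, {"b", "vm"}, {"c", "node"}}
	list, agg := buildResponse(snap)
	fmt.Println(len(list), agg["vm"], agg["node"])
}
```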
That same resource-handler seed contract must also stay on canonical unified
resource ownership for tenant-scoped requests: once a tenant state provider
implements UnifiedResourceSnapshotForTenant, /api/resources may not fall
back to raw tenant StateSnapshot seeding when that unified seed is empty.
That same mock/runtime contract now also governs chart payloads under
internal/api/router.go: when demo or mock presentation is enabled,
/api/charts, /api/charts/infrastructure, and /api/storage-charts must
read through GetUnifiedReadStateOrSnapshot() so chart payloads use the same
canonical mock unified-resource snapshot as /api/resources and /api/state
instead of drifting onto the live store-backed graph.
Tenant AI service wiring now follows that same canonical ownership rule:
internal/api/ai_handlers.go may provide tenant ReadState and
tenant-scoped unified-resource providers, but it must not mint tenant snapshot
provider bridges purely to satisfy Patrol once the Patrol runtime can operate
from those canonical tenant providers directly.
Hosted licensing handlers now also carry a tenant-scoped fallback contract:
when hosted auth handoff preserves a non-default tenant org like t-...,
/api/license/status, /api/license/commercial-posture,
/api/license/entitlements, and /api/license/runtime-capabilities must
still evaluate the instance-level hosted billing lease from default if that
tenant org has no org-local billing state of its own, rather than failing
closed into subscription_required on first entry.
That same hosted entitlement contract also owns lease refresh targeting:
when a hosted tenant request arrives on a non-default org with no org-local
lease, internal/api/hosted_entitlement_refresh.go must resolve the effective
billing target through the same default-org fallback before it refreshes,
persists, or rewires the evaluator. Runtime routes such as
/api/ai/approvals must not refresh against the empty tenant org and silently
fall back to license_required while the real hosted entitlement lease still
exists on default.
That same hosted browser-session contract must also remain authoritative once
the handoff lands on the tenant runtime: when a valid pulse_session cookie
is present, shared internal/api/auth.go helpers must authenticate that
session before any API-only token fallback or no-local-auth anonymous fallback
is considered, so hosted protected routes such as relay-mobile token minting,
onboarding reads, and billing-admin/API surfaces stay reachable after cloud
handoff instead of flattening the operator back to anonymous or demanding a
bearer token from the browser as soon as the tenant has minted one.
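The precedence rule above reduces to an ordering decision, sketched here with boolean inputs standing in for the real request inspection in internal/api/auth.go:

```go
package main

import "fmt"

// resolveAuth sketches the shared-helper ordering: a valid pulse_session
// cookie authenticates first, before any API-only token fallback, and
// no-local-auth anonymous fallback is considered only when neither exists.
func resolveAuth(hasValidSessionCookie, hasBearerToken, localAuthDisabled bool) string {
	switch {
	case hasValidSessionCookie:
		return "session" // browser session stays authoritative after cloud handoff
	case hasBearerToken:
		return "token"
	case localAuthDisabled:
		return "anonymous"
	default:
		return "unauthenticated"
	}
}

func main() {
	// A hosted tenant with both a session cookie and a minted API token must
	// still authenticate as the session, not be demanded a bearer token.
	fmt.Println(resolveAuth(true, true, false))
}
```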
That same shared auth contract also governs unauthenticated local recovery and
bootstrap ingress: before auth exists, anonymous fallback and /api/security/quick-setup
must remain direct-loopback only, and recovery tokens may authorize only the
same loopback client IP that minted them when establishing a browser recovery
session.
That same shared settings-scope contract must then preserve canonical
org-management privilege on the tenant side: when a hosted or multi-tenant
request is scoped to a non-default org, internal/api/security_setup_fix.go
must honor the org's owner/admin membership model for settings-bound routes
such as relay-mobile token minting, instead of requiring a separate configured
local admin username that hosted tenants do not carry.
The same onboarding boundary in internal/api/router_routes_ai_relay.go and
internal/api/relay_mobile_capability.go must
also accept the dedicated relay:mobile:access scope for
/api/onboarding/qr, /api/onboarding/validate, and
/api/onboarding/deep-link, because those payloads are the canonical
bootstrap surface for the server-minted mobile credential.
The shared security token contract now also includes single-record metadata
reads. internal/api/security_tokens.go,
internal/api/router_routes_auth_security.go,
frontend-modern/src/api/security.ts, and
frontend-modern/src/types/api.ts own the canonical record.lastUsedAt and
record.expiresAt lookup shape for one token, and relay pairing surfaces must
consume that same contract when deciding whether a displayed QR token can be
revoked or must be preserved as an already-used device credential. That same
contract now also owns backend-minted Pulse Mobile relay access tokens: the
server route, not the browser, defines the canonical dedicated
relay:mobile:access runtime scope, the explicit route inventory in
internal/api/relay_mobile_capability.go, the backward-compatible
server-side route gates that run alongside legacy ai:chat and ai:execute
mobile tokens, and the token-purpose metadata. Route expansion for Pulse
Mobile must
land by editing that backend-owned inventory plus its proofs, rather than by
sprinkling ad hoc compatibility checks across handlers. The pairing UI only
consumes that server-owned credential when requesting the onboarding payload.
That same shared backend API contract now also owns hosted relay bootstrap
reads. internal/api/router.go, internal/api/onboarding_handlers.go, and
internal/api/relay_hosted_runtime.go must derive /api/settings/relay and
the mobile onboarding payload from the same runtime helper. In hosted mode,
when no explicit relay config exists but the default hosted billing lease
grants relay and carries an entitlement JWT plus canonical instance_host,
those read surfaces must auto-bootstrap the persisted relay runtime with the
default relay server URL, a machine-owned hosted instance secret, and
generated relay identity metadata instead of requiring a prior manual
PUT /api/settings/relay. The API response contract must continue to expose
only public relay fields while omitting the hosted instance secret and
private key.
That same shared backend API contract now also owns hosted AI bootstrap
reads. internal/api/ai_hosted_runtime.go, internal/api/ai_handler.go,
internal/api/ai_handlers.go, and internal/api/contract_test.go must derive
/api/settings/ai and the initial hosted AI runtime from the same runtime
helper. In hosted mode, when no explicit ai.enc exists but the default
hosted billing lease grants AI capability and carries hosted entitlement
proof, those read surfaces must persist a canonical quickstart-backed AI
config with the governed Pulse-owned alias quickstart:pulse-hosted instead
of returning a synthetic enabled=false payload that leaves Chat and Patrol
unavailable until the
operator manually saves settings. Hosted tenant-org reads must also inherit
the default hosted billing lease whenever no org-local billing state exists,
so AI bootstrap and quickstart-credit surfaces stay aligned with the same
machine-owned entitlement source. Once a real AI config exists, that explicit
operator-owned state must remain authoritative over hosted bootstrap.
The same hosted contract now also requires tenant Pulse Assistant runtime
startup to consume that hosted-aware config path and to refuse caching a
failed tenant chat service, so tenant-org /api/ai/status and
/api/ai/sessions cannot stay wedged behind a stale pre-bootstrap service
after the lease-backed AI config has been persisted.
That same shared AI/mobile API contract now also owns approval-list readiness
for settings-driven enablement. internal/api/ai_handler.go,
internal/api/ai_handlers.go, internal/api/router_routes_ai_relay.go, and
internal/api/contract_test.go must keep the governed approvals-list surface
on its empty-list payload once AI is enabled, even when the first enablement
happens after process startup. A post-boot settings save may not leave that
surface on 503 Approval store not initialized just because the direct AI
runtime had not previously started.
That same shared AI settings contract also owns provider-auth continuity and
provider-scoped test selection. internal/api/ai_handlers.go and
internal/api/contract_test.go must expose masked Ollama auth state through
ollama_username and ollama_password_set, accept provider-auth updates
without echoing raw secrets back into the payload, and keep provider test
routes bound to the provider's own configured model instead of whichever
other provider currently owns the default model selection.
That same shared /api/settings/ai contract now also owns vendor-neutral BYOK
setup. Frontend callers may submit provider credentials or base URLs without a
concrete vendor model ID, and internal/api/ai_handlers.go must resolve and
persist the effective model through the canonical runtime provider-catalog
selection path before returning the updated payload. /api/settings/ai reads
must then echo that resolved model back as the canonical default selection, so
UI setup flows and provider test routes do not drift into frontend-baked model
defaults or handler-local vendor fallbacks.
That same shared config/runtime contract also owns import-triggered reload
safety. When internal/api/config_export_import_handlers.go imports a config
archive and rebinds shared runtime state, the reload path must tolerate absent
notification or monitoring managers and degrade gracefully instead of
panicking on optional side effects. /api/config/import may be exercised from
proof or setup contexts that do not yet have every long-lived runtime manager
wired, but the contract must still leave the imported configuration readable
through the canonical API surface.
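The graceful-degradation rule can be sketched as nil-tolerant side-effect dispatch; the interface and names are hypothetical stand-ins for the real manager wiring:

```go
package main

import "fmt"

// notifier stands in for any optional long-lived runtime manager.
type notifier interface{ Reload() }

// reloadAfterImport sketches the tolerance rule: optional managers may be
// absent in proof or setup contexts, and the reload path skips them instead
// of panicking, leaving the imported configuration readable either way.
func reloadAfterImport(notifications, monitoring notifier) (applied []string) {
	for name, mgr := range map[string]notifier{
		"notifications": notifications,
		"monitoring":    monitoring,
	} {
		if mgr == nil {
			continue // tolerate absent managers; no panic on optional side effects
		}
		mgr.Reload()
		applied = append(applied, name)
	}
	return applied
}

func main() {
	// Neither manager wired yet: no panic, nothing applied.
	fmt.Println(len(reloadAfterImport(nil, nil)))
}
```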
That same shared infrastructure-settings API contract now also owns the
connected-infrastructure distinction between machine-managed and
platform-connections-managed reporting. frontend-modern/src/types/api.ts,
frontend-modern/src/components/Settings/infrastructureOperationsModel.tsx,
frontend-modern/src/components/Settings/useConnectionsLedger.ts, and
frontend-modern/src/components/Settings/ConnectionsTable.tsx
must treat truenas as a canonical connected-infrastructure surface kind
alongside proxmox, pbs, and pmg, and the settings reporting/install
surfaces must keep those platform-managed rows navigable back to platform
connections instead of presenting host uninstall or stop-monitoring actions
that only apply to agent, docker, and kubernetes.
That same shared metrics-history contract now also owns physical-disk live I/O
windows. internal/api/router.go must accept resourceType=disk on
/api/metrics-store/history, keep 30m as a valid compact live range, and
resolve disk, diskread, diskwrite, and smart_temp against the
canonical disk MetricsTarget.ResourceID the unified resource already
exposes. Storage drawers and other consumers must not fork a disk-local live
history route, alternate query identity, or feature-specific fallback payload
when the governed chart API already owns that transport.
The shared browser contract now also includes a neutral app-runtime context
boundary for websocket-backed API consumers. API-contract-owned hooks such as
frontend-modern/src/components/Settings/useAPITokenManagerState.ts and
frontend-modern/src/components/Settings/useInfrastructureOperationsState.tsx
may read websocket state through frontend-modern/src/contexts/appRuntime.ts,
but payload truth, bootstrap rules, and commercial identity still belong to
the governed API handlers and contract tests. Those hooks must not import
@/App or treat root-shell ownership as transport authority.
That same shared commercial API contract now also owns the public demo
read-side boundary. internal/api/demo_mode_commercial.go,
internal/api/licensing_handlers.go,
internal/api/monitored_system_ledger.go, and
internal/api/subscription_state_handlers.go must fail closed with a generic
404 for public-demo billing, license-status, and monitored-system-ledger
reads or preview probes whenever DEMO_MODE is enabled. Demo runtimes may
still use real server-side entitlement evaluation internally, but the
governed browser/API contract must not expose commercial identity, usage, or
upgrade-state payloads back to public viewers through those read surfaces.
That same monitored-system admission contract now also owns direct write-path
failure semantics for platform connections. internal/api/truenas_handlers.go,
internal/api/vmware_handlers.go,
internal/api/monitored_system_limit_enforcement.go, and
internal/api/contract_test.go must keep TrueNAS and VMware connection
creates/updates fail-closed with monitored_system_usage_unavailable whenever
the canonical monitored-system usage view is unsettled or rebuilding. VMware
write admission must check that canonical usage state before collecting
external vCenter inventory, so direct API callers cannot receive provider
connection errors or persist connections while capacity accounting is unsafe.
That same browser-transport contract now tolerates sparse admission-preview
payloads without changing the runtime truth. Patrol transport may omit
finding_ids, and infrastructure removal previews may stage optimistic rows
only after canonical IDs have been resolved or a safe row-name fallback has
been chosen. API-adjacent browser callers must not reinterpret missing IDs or
preview arrays as authoritative empty success.
That same shared browser transport contract now also owns the discovery polling
mount scope.
frontend-modern/src/components/Settings/useInfrastructureDiscoveryRuntimeState.ts
no longer gates /api/discover polling on the settings tab name; polling
starts whenever the hook is mounted and stops on cleanup. Callers must not
re-introduce a per-tab gate on this boundary. The discovery subnet settings
write path through SettingsAPI.updateSystemSettings remains governed by the
shared internal/api/ settings boundary and is unaffected by the polling
scope change.