# Pulse v6 Source Of Truth

Last updated: 2026-04-04. Status: ACTIVE.
This file is the stable human governance layer for the active v6 release profile. It is not a live progress dashboard.
Evergreen control-plane governance now lives in
docs/release-control/CONTROL_PLANE.md, and the active profile plus active
target are machine-selected by docs/release-control/control_plane.json.
Current lane scores, evidence references, coverage-gap discovery records,
candidate-lane planning records, and typed operational decision records live
only in docs/release-control/v6/internal/status.json.
Lane completion state, residual-gap summaries, normalized follow-up tracking,
and derived coverage scores also live only in
docs/release-control/v6/internal/status.json.
Non-blocking same-lane residuals that are concrete enough to name directly
live in status.json.lane_followups.
Product-scope lane-discovery records that reveal where the current taxonomy is
still under-modeled live in status.json.coverage_gaps.
Planned future lane promotions for those gaps live in
status.json.candidate_lanes.
Active multi-agent lease records that reserve governed work in flight live in
status.json.work_claims.
Current repo/release readiness is derived by
python3 scripts/release_control/status_audit.py --pretty from
status.json, not hand-maintained in this file. The human runbook for
trust-critical manual release gates lives in
docs/release-control/v6/internal/HIGH_RISK_RELEASE_VERIFICATION_MATRIX.md.
Use python3 scripts/release_control/status_lookup.py for candidate-lane,
coverage-gap, lane, readiness-assertion, release-gate, followup, and
work-claim lookups when only one governed surface is relevant.
Readiness assertion design rules live in this file.
The active assertion catalog, proof references, and executable proof commands
live in status.json.readiness_assertions.
## Purpose
- Define the stable execution model for the active v6 release profile.
- Lock profile-specific scope, release definition, and no-go rules.
- Record only long-lived architectural and release decisions for this profile.
- Point agents to the files that own live state and subsystem truth for this profile.
- Keep startup context light by preferring derived status commands and targeted lookup helpers before opening large machine files in full.
This file must not contain:
- hand-maintained lane score tables
- hand-maintained evidence lists
- ephemeral session recaps or task logs
## Canonical Control Files
- `docs/release-control/AGENT_VALUES.md`: Evergreen values-only guidance for agent behavior.
- `docs/release-control/CONTROL_PLANE.md`: Evergreen governance for the permanent Pulse release control plane.
- `docs/release-control/control_plane.json`: Machine-readable control-plane state, including the active profile and active target.
- `docs/release-control/control_plane.schema.json`: Machine-readable contract for `control_plane.json`.
- `docs/release-control/v6/internal/status.json`: Live lane state, lane completion records, coverage-gap discovery records, candidate-lane planning records, active work claims, structured evidence references, and typed operational decision records.
- `docs/release-control/v6/status.schema.json`: Machine-readable contract for the `status.json` shape.
- `docs/release-control/v6/internal/CANONICAL_DEVELOPMENT_PROTOCOL.md`: Repo-wide change rules for canonical work.
- `docs/release-control/v6/internal/subsystems/registry.json`: Machine-readable subsystem ownership and proof requirements.
- `docs/release-control/v6/internal/subsystems/registry.schema.json`: Machine-readable contract for the subsystem registry shape.
- `docs/release-control/v6/internal/subsystems/*.md`: Per-subsystem contracts: truth, extension points, forbidden paths, and completion obligations.
- `docs/release-control/v6/internal/RELEASE_PROMOTION_POLICY.md`: Canonical stable-versus-prerelease promotion rules, rollout criteria, and rollback expectations for v6 and later release lines.
- `docs/release-control/v6/internal/V5_MAINTENANCE_SUPPORT_POLICY.md`: Canonical v5 maintenance-only support policy, release-line rules, and GA notice requirements for the v6 cutover.
- `docs/release-control/v6/internal/RC_TO_GA_REHEARSAL_TEMPLATE.md`: Canonical human record shape for the non-publish RC-to-GA rehearsal run.
- `docs/release-control/v6/internal/PLATFORM_SUPPORT_MODEL.md`: Canonical first-class platform, ingestion-mode, resource-projection, and support-floor model for Pulse v6.
These machine files remain canonical, but agents should not ingest them in full
by default when a smaller derived command answers the question. Prefer
status_audit.py --pretty, status_lookup.py, and subsystem_lookup.py
first, then escalate into the raw files only when the current slice needs that
detail.
## Scope

`status.json.scope.control_plane_repo` is `pulse`.
`status.json.scope.repo_catalog` is the canonical machine-readable repo map for
the active Pulse workspace.
Active repositories for v6:
- `pulse`: Core desktop/runtime repo and the canonical v6 release-control authority.
- `pulse-pro`: Financial, operational, checkout, license-server, and relay-server surfaces.
- `pulse-enterprise`: Closed-source enterprise and paid runtime features.
- `pulse-mobile`: Mobile client, relay pairing, approvals, and device-local auth/state.
Ignored for v6 control:
- `pulse-5.1.x`
- `pulse-refactor-streams`
## Product Direction
Pulse is moving toward the unified operations layer for mixed private
infrastructure.
Pulse should win by giving operators one coherent place to monitor,
investigate, and safely act across mixed estates rather than by accumulating
opportunistic platform-specific surfaces.
New platform work must therefore strengthen that mixed-estate operator surface
and follow docs/release-control/v6/internal/PLATFORM_SUPPORT_MODEL.md
instead of being admitted ad hoc.
Under that governed platform model, docker and kubernetes are first-class
top-level platforms, while podman stays a runtime variant inside docker
rather than becoming its own top-level platform.
The same model also sets the current posture for platform breadth: truenas
is at the declared support floor summarized below, and vmware-vsphere is the
current admitted strategic next-platform direction while its support claim
remains proof-gated.
Pulse Mobile v1 is not desktop parity on a phone.
It is the away-from-desk decision surface for the same Pulse runtime: the next
item, findings, approvals, reconnect state, instance switching, and contextual
AI handoff.
Public/store copy and primary mobile shell labels must reinforce that narrower
job instead of promising a pocket dashboard with full desktop parity.
Mobile earns broader authority only after short operator validation confirms
that users can open Now, tell what needs attention, act, and recover trust
without desktop fallback.
Until that validation and the remaining hardware proof gates pass, widen mobile
through clearer ops cues and trust repair only, not through extra tabs, graphs,
or desktop-parity surface expansion.
## Release Definition
Pulse v6 is ready when these outcomes land together:
- Unified resource model is stable and expansion-ready.
- Product quality feels polished and trustable out of the box.
- Commercial packaging materially improves conversion and revenue.
- Stable or GA promotion happens only after prerelease validation, not as the first customer exposure.
Pulse v6 is still a bridge release toward that unified operations layer, while the current active target remains governed prerelease-cut readiness with an explicit RC-publication hold rather than broad new platform execution. For this profile, "bridge release" means the product should stop reading as a monitoring tool growing sideways and instead make the mixed-estate operator surface legible through canonical resources, investigation context, governed action boundaries, and fleet governance wherever the surfaced case is already proven.

Pulse owns infrastructure context, policy, and governed action boundaries so any sandboxed agent can use them; Pulse does not own being the sandbox where arbitrary agent execution lives. The retained foundation is therefore the hidden backend layer: canonical resources and relationships, standardized agent-emitted resource/signal/change envelopes, policy metadata and routing hooks, governed action and approval boundaries with auditability, and first-class fleet governance.
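As a rough illustration of what a standardized agent-emitted envelope could look like, here is a minimal sketch. Every field name and the `AgentEnvelope` type are hypothetical assumptions for illustration only; the canonical wire shape is owned by the subsystem contracts, not this file.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AgentEnvelope:
    """Hypothetical sketch of an agent-emitted resource/signal/change envelope."""
    kind: str          # one of "resource", "signal", "change" (assumed vocabulary)
    resource_id: str   # canonical unified-resource identity (assumed field name)
    emitted_at: str    # agent-side timestamp, e.g. RFC 3339
    payload: dict[str, Any] = field(default_factory=dict)

    def is_valid(self) -> bool:
        # A valid envelope names a known kind and a canonical resource.
        return self.kind in {"resource", "signal", "change"} and bool(self.resource_id)

env = AgentEnvelope(kind="signal", resource_id="agent/host-01",
                    emitted_at="2026-04-04T00:00:00Z")
```

The point of the sketch is only the shape of the contract: one envelope vocabulary shared by every agent, keyed to canonical resource identity, so policy routing and auditability attach to the envelope rather than to provider-specific payloads.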
## Evergreen Readiness Assertions
Pulse v6 readiness is governed by a small evergreen assertion set.
These assertions are durable release truths, not one-off launch checklist
items.
status.json.readiness_assertions owns the active assertion catalog, proof
references, and executable proof commands for the active release line.
docs/release-control/control_plane.json owns the active engineering target
that this profile is currently pursuing.
python3 scripts/release_control/control_plane_audit.py --check enforces that
the active target cannot stay stale once its machine-derived completion rule is
already satisfied.
Assertion design rules:
- Keep the set small and durable.
- Write each assertion as a binary release truth, not as a task list.
- Map each assertion to concrete lanes, governed subsystems, or release gates.
- Prefer machine-derived proof and generated summaries over hand-maintained docs.
- If repeated manual proof becomes expensive, automate it or demote it to a one-time migration item instead of keeping it in the evergreen set.
- After GA, keep using the same control plane and promote a new active target instead of cloning a new governance stack.
- Not all evidence classes are equal. High-risk release gates must declare the minimum evidence tier required for closure, and rehearsal evidence below that tier must remain blocking even when a dated record already exists.
- When a user states a durable product truth, normalize it into a readiness assertion, release gate, or open decision rather than leaving it as chat.
- Treat casual user language about consistency, seamlessness, drift, bypass resistance, or things that "should always be true" as candidate governance input, not only explicit requests to add a formal assertion.
- Active v6-facing guidance must stay current; legacy or historical docs may exist only as clearly marked reference material, not as current instructions.
- Comparable settings surfaces should be normalized into canonical page-shell rules when the user raises consistency drift, rather than treated as vague polish notes.
- Lane taxonomy is allowed to evolve when it no longer reflects the current product truth.
- If a durable product surface is materially underrepresented by the current lane map, record an active `status.json.coverage_gaps` entry instead of hiding the gap inside a broad lane, a lane followup, or a generic assertion.
- Treat `status.json.coverage_gaps` as active fog-of-war records rather than a loose backlog: each one should cite evidence, carry an explicit coverage deduction, and name the intended promotion path such as a lane split, new lane, lane expansion, or target update. `coverage_gaps.status` is not cosmetic: once a lane-shaping gap (`new-lane`, `lane-split`, or `lane-expansion`) has a typed `candidate_lanes` record, move that gap to `planned`; do not mark such a gap `planned` until a matching `candidate_lanes` record exists.
- When the intended lane split/addition/expansion shape is already clear, record it in `status.json.candidate_lanes` so lane-taxonomy evolution is machine-readable rather than buried in prose. Candidate lanes should also identify the owning control-plane target so future lane-expansion work is routed to an explicit governed initiative. `candidate_lanes` entries are only for lane-shaping promotion paths, so `target-update` coverage gaps must stay out of that surface.
- Once a coverage gap is resolved by a lane split/addition, lane expansion, or target update, remove it from `coverage_gaps` instead of leaving it behind as historical noise.
- Before coding begins on a governed slice, record a `status.json.work_claims` entry for the chosen slice, then verify it with `python3 scripts/release_control/agent_preflight.py --require-active-claim --agent-id <AGENT_ID>`. The claim is an audit trail: it records what is being worked on and prevents accidental overlap if the same surface is resumed or handed off. Expired claims no longer mark active work; remove them when encountered so the live claim set stays trustworthy.
- Lane work must be evaluated against the canonical v6 model, not only the local state it inherits. For any lane or candidate lane, first identify the canonical v6 shape for that surface: unified-resource-backed where applicable, canonical shared types and APIs, explicit subsystem ownership, and legacy compatibility code reduced to boundary-only exceptions instead of primary flow. When a lane surface still routes through a pre-v6 or host-era shape, do not merely improve that path locally and call the lane healthier. Either modernize it toward the canonical v6 mechanism in the same slice or record the remaining modernization gap explicitly in the owning lane's completion state, follow-up tracking, or lane-shaping governance.
- These modernization rules are retrospective. Existing lanes do not keep their floor forever just because earlier work landed before the control system made this explicit. When an existing lane is touched, reviewed, or reopened, judge it against the same canonical v6 modernization rules as new lane work and lower its claimed completeness if legacy-primary paths are still carrying the surface. Backward compatibility is not a blanket excuse to keep old internal mechanisms alive. Preserve legacy behavior only at explicit boundaries such as migrations, upgrades, imports, external contracts, or wire interoperability that still genuinely exist; otherwise retire the legacy-primary path instead of teaching the lane to depend on it. When a canonical replacement lands, remove the superseded internal path in the same slice unless a boundary-only exception is explicitly governed, and prove that the remaining supported boundary still works. Do not wait only for the slice that authored the replacement: if a later governed slice lands on clearly obsolete old-way internals in that same surface, cleanup should retire them rather than normalizing their continued presence.
- Guardrail-only work is support work, not lane advancement. Contract ratchets, proof-routing cleanup, registry tightening, and guardrail-only tests may strengthen confidence, but they must not be treated as substantive lane movement by themselves. If `status.json` advances a lane's score, status, or completion state, the same slice should normally include an owned runtime or product-surface delta for that lane unless the remaining gap is explicitly governance-only.
- Large machine control files are canonical databases, not startup prompt payload. Agents should prefer executable summaries and targeted lookup commands over rereading `status.json` or `registry.json` in full on ordinary turns. Escalate into the raw files only when a lane, assertion, release gate, work claim, or subsystem boundary actually requires full-detail inspection.
- Legacy compatibility must be named as a boundary exception, not assumed as a primary v6 design goal. If a touched surface still needs old-path support, make the boundary obligation explicit in the owning contract or lane residual and keep the rest of the superseded internal path retired. Otherwise, treat continuing investment in that old internal path as drift, not prudence. Clearly obsolete old-way code discovered during adjacent governed work is part of that same drift signal; clean it up or record the blocking reason explicitly instead of leaving it behind as ambient repo clutter.
- Within an active claimed lane, prefer the largest coherent same-surface slice. When one behavior arc on one governed surface is already clearly in scope, group the remaining same-surface work into the largest slice that still has one coherent proof story. Do not fragment a single modernization or canonicalization arc into many tiny commits just because each residue item can be named separately. Split only when there is a real risk boundary, concept boundary, or proof boundary.
- Lane scores, missing evidence items, and proof counts are signals, not the work target. Use them to find the remaining runtime, product, ownership, or governance gap that still keeps the lane below floor. Do not frame a slice as "get two more evidence items" or "raise the lane score" unless the remaining gap is explicitly governance-only. For ordinary lane work, identify the real same-lane gap first, then add the proof that demonstrates the gap is actually closed.
- Worktrees are available for isolated mutating slices when explicit isolation is needed. A shared mutable checkout is fine for normal single-session work. Use `worktree_base.py`, `worktree_claim.py`, and `worktree_finish.py` when a slice requires its own isolated hooks, staged scope, and dirty state, for example when a subagent needs to mutate independently and land back cleanly.
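The coverage-gap and candidate-lane pairing rules above are mechanical enough to check in code. The sketch below is a hedged illustration of that consistency rule, not the authoritative enforcement (which lives in the release-control scripts and `status.schema.json`); the exact field names (`id`, `kind`, `status`) and the idea that candidate-lane records reference their originating gap id are assumptions.

```python
# Lane-shaping gap kinds named in the governance rules above.
LANE_SHAPING = {"new-lane", "lane-split", "lane-expansion"}

def check_gap(gap: dict, candidate_lane_gap_ids: set) -> list:
    """Hypothetical consistency check for one coverage_gaps entry.

    candidate_lane_gap_ids is assumed to hold the gap ids referenced by
    typed candidate_lanes records.
    """
    errors = []
    has_candidate = gap["id"] in candidate_lane_gap_ids
    if gap["kind"] in LANE_SHAPING:
        # Once a typed candidate_lanes record exists, the gap must be planned.
        if has_candidate and gap["status"] != "planned":
            errors.append("lane-shaping gap with a candidate_lanes record must be planned")
        # And planned must not be claimed without a matching record.
        if gap["status"] == "planned" and not has_candidate:
            errors.append("planned lane-shaping gap needs a matching candidate_lanes record")
    elif gap["kind"] == "target-update" and has_candidate:
        # candidate_lanes is only for lane-shaping promotion paths.
        errors.append("target-update gaps must stay out of candidate_lanes")
    return errors
```

A gap that violates any rule would surface as a non-empty error list, which is the kind of machine-derived signal the assertion rules prefer over hand-maintained review.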
## Non-Negotiable Release Gates
- Do not ship with open trust-critical P0s.
- Do not ship when paywalls and runtime gates disagree.
- Do not ship hosted Pulse flows that break signup, auth, provision, hosted runtime access, billing/admin visibility, or revocation.
- Do not ship multi-tenant support if tenant isolation, organization scope, tenant-scoped runtime state, or cross-org sharing can leak outside the intended tenant boundary.
- Do not ship MSP support if one provider account cannot safely onboard, manage, and separate multiple client tenants from one control surface.
- Do not ship if API tokens can exceed assigned user, org, or scope boundaries, survive revocation, or silently widen authority through legacy alias handling.
- Do not ship if a low-privilege user can view, mutate, or destroy beyond the permissions granted by their effective org membership and role.
- Do not ship if grandfathered recurring continuity, cancellation revocation, and later re-entry pricing can drift across Stripe, Pulse runtime, and customer-visible billing state.
- Do not ship if Pulse Mobile can keep stale access, lose approval state, or fail pairing, reconnect, or auth-transition recovery against a real instance.
- Do not ship if relay registration, reconnect, stale-session recovery, or disconnect drain can strand clients in resume loops, dead sessions, or lost inflight work.
- Do not ship upgrades that reset paid state, licensing continuity, or first-session flow.
- Do not ship if comparable settings surfaces drift away from the canonical page-shell contract and still present inconsistent top-level framing, header chrome, or section treatment without explicit justification.
- Do not keep polishing strong lanes while weak lanes remain behind.
- Do not treat `status.json` lane scores reaching target as sufficient release approval by themselves; open operational decisions, machine-derived unresolved readiness assertions, and unresolved release gates still apply.
- Do not promote v6 to stable or GA without an exercised prerelease, a real release-pipeline proof run, a recorded rollback target plus exact reinstall command, and a written v5 maintenance-only support policy.
- Do not declare a lane or blocker complete just because the first passing check landed. If obvious same-lane gaps still keep the outcome from being coherent, trustworthy, or realistically shippable, that lane is not done yet.
- Do not ship a customer-facing surface that is still prototype-grade. If a portal, hosted workflow, first-session flow, or other customer-facing surface still feels confusing, second-rate, or untrustworthy in real browser use, it is not acceptable to keep treating that surface as a good baseline for narrow refinement.
- Lanes that are at target but intentionally not closed must record a bounded residual in `status.json` with a short rationale and explicit tracking references to the governing follow-up surface. Pair lane `status` and `completion.state` coherently: `complete` goes with `target-met`, `bounded-residual` goes with `partial`, and `open` must not be paired with `target-met`. `partial` means measurable progress, so it must not sit at zero score. `not-started` means zero score and `open`. `blocked` remains an `open` lane below target rather than a residual or complete state. Blocked lanes must declare typed same-lane blocker references to unresolved readiness assertions, release gates, or open decisions. `completion.tracking` is only for bounded residuals; open or complete lanes should keep that list empty until the lane reaches its current floor and the remaining work becomes a governed residual. Those references must belong to that same lane and must still be unresolved rather than pointing at unrelated governance objects or already-passed assertions, gates, or completed targets. Bounded residual tracking must use a lane followup, readiness assertion, release gate, or open decision rather than a broad target reference. `lane_followups` are active residual records, not a loose backlog: each one should stay referenced by the owning lane's `bounded-residual` `completion.tracking`. Once a lane followup is no longer active residual work, remove it from `lane_followups` instead of leaving it behind with a completed status.
- Do not treat a lane as healthy just because it has local fixes on a legacy path. If the canonical v6 route for that surface now runs through unified resources, canonical shared types, explicit subsystem ownership, or a modernized proof path, the lane is still below floor until that migration is made or the remaining gap is governed explicitly.
- Do not treat missing evidence count or a score delta as the task itself. If the real remaining gap is runtime, product, ownership, or behavior coherence, fix that gap and let the score/evidence catch up as proof. Only treat evidence-count closure itself as the task when the remaining gap is explicitly governance-only.
- Do not run parallel mutating agents out of one dirty checkout and call that acceptable coordination. Claims reduce overlap, but they do not isolate hooks, formatters, staged reads, or unrelated dirt. Parallel mutation should use separate worktrees so each agent sees one slice's git state at a time.
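The lane `status`/`completion.state` pairing rules in the gates above can be expressed as a small validator. This is a hedged sketch of those rules only, under the assumption that `status` and `completion.state` are flat string fields alongside a numeric score; the authoritative checks live in `status_audit.py` and `status.schema.json`, and the `blocked` rules (typed blocker references) are deliberately omitted here.

```python
def check_lane(status: str, completion_state: str, score: float) -> list:
    """Hypothetical pairing check for one status.json lane record."""
    errors = []
    if status == "complete" and completion_state != "target-met":
        errors.append("complete must pair with target-met")
    if status == "bounded-residual" and completion_state != "partial":
        errors.append("bounded-residual must pair with partial")
    if status in ("open", "blocked") and completion_state == "target-met":
        errors.append("open must not pair with target-met")
    if completion_state == "partial" and score <= 0:
        # partial means measurable progress, so a zero score is incoherent.
        errors.append("partial must not sit at zero score")
    if completion_state == "not-started" and (score != 0 or status != "open"):
        errors.append("not-started means zero score and an open lane")
    return errors
```

An empty error list means the record is at least internally coherent; it says nothing about whether the lane's evidence or residual tracking is itself valid.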
## Locked Decisions
- Release-control execution is direct and repo-aware. The old orchestrator and loop tooling are retired.
- v6 GA is gated by `L1`, `L2`, `L3`, `L5`, `L6`, `L7`, `L8`, `L9`, `L10`, `L11`, and `L12`. `L4` remains post-GA track work and is not a GA floor gate.
- Trial authority for v6 is SaaS-controlled. `POST /api/license/trial/start` must initiate hosted signup only by returning `409 trial_signup_required` while the hosted-signup retry burst remains open, then `429 trial_rate_limited` plus `Retry-After` backoff once that limiter engages; the local runtime may redeem signed trial activations but must not mint local trial state directly.
- v5 to v6 commercial migration must preserve unresolved paid-license state and downgrade safety.
- Cloud and MSP Stripe `price_*` IDs are operational fill-in items, not architectural blockers.
- Stable or GA promotion for v6 must come from an exercised RC and stay blocked until the RC-to-GA promotion gate is cleared and the published v5 maintenance-policy notice is ready.
- v6 and later releases use a promotion model, not a direct broad-rollout model. `stable` must receive only promoted, already-validated builds, `rc` is the opt-in preview channel, and unattended auto-update exposure remains `stable`-only unless a new channel policy is explicitly adopted.
- Once v6 reaches stable or GA, v5 enters a 90-day maintenance-only window: critical security issues, critical correctness/data-loss issues, and safe migration blockers only. The exact end-of-support date must be published in the GA release notice. After that window, v5 is unsupported.
- Paid Pulse Pro v5 customers keep their existing recurring price through the v6 pricing change until they cancel. Renewal and entitlement continuity must preserve that grandfathered price state; any return after cancellation must use current v6 pricing.
- Pulse Mobile does not need desktop parity to stop blocking the v6 prerelease line. The mobile usefulness floor for prerelease is narrower: preserve at least one trusted paired instance across relaunches, expose relay/runtime state clearly in the main shell, fail closed into a recoverable disconnected state on stale or revoked access, and keep live approvals useful and recoverable. Broader parity and expansion remain post-prerelease scope.
- The minimum required update set for canonical work is a floor, not a lane closure rule. Agents should push the current lane to a coherent, defensible stop point and complete the next obvious same-lane work when it is necessary for trustworthy results, then normalize any remaining valid gap instead of calling the lane done by default.
- Unified-resource change history is the canonical durable backend timeline. Alert incident memory may retain investigation-local notes, analysis, commands, runbooks, and alert lifecycle breadcrumbs, but it must remain a derived incident projection rather than a competing source of truth for durable resource history.
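The SaaS trial-authority decision above fixes a two-stage response contract for `POST /api/license/trial/start`. The sketch below illustrates that contract; only the status codes, error codes, and the `Retry-After` header come from the locked decision, while the burst counter, its limit, and the backoff value are illustrative assumptions about how the SaaS limiter might be parameterized.

```python
def trial_start_response(requests_in_burst: int,
                         burst_limit: int = 5,
                         retry_after_seconds: int = 60):
    """Hedged sketch of the SaaS-side status contract for trial start.

    Returns (status_code, body, headers). The endpoint never mints trial
    state itself: it only routes callers into hosted signup or backs them off.
    """
    if requests_in_burst < burst_limit:
        # While the hosted-signup retry burst remains open: point the
        # caller at hosted signup rather than granting anything locally.
        return 409, {"error": "trial_signup_required"}, {}
    # Once the limiter engages: rate-limit with an explicit backoff hint.
    return 429, {"error": "trial_rate_limited"}, {"Retry-After": str(retry_after_seconds)}
```

Note the negative space: there is no 200 path here, because the local runtime redeems signed trial activations out of band and must never receive locally minted trial state from this endpoint.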
## TrueNAS Support Floor
Pulse v6 uses one governed definition of "TrueNAS supported" at the current
floor. Detailed proof routes live in status.json.readiness_assertions and the
owning subsystem contracts; user-facing claims must not exceed this floor.
Unless a concrete defect appears or governance explicitly widens the support
claim, broad same-shape TrueNAS iteration is not default work above this
declared floor.
- Architecture boundary: TrueNAS is API-first. The unified agent may augment a TrueNAS system later, but it is not required for bootstrap or baseline support. TrueNAS must project into the canonical `agent`, `app-container`, `storage`, `physical-disk`, and recovery contracts instead of reopening a parallel TrueNAS product model.
- Onboarding path: Supported now through the shared platform-connections flow and `/api/truenas/connections`. Operators can add, test, edit, retest, and delete stored TrueNAS connections without re-entering masked secrets on ordinary edits. `truenas_disabled` is only an explicit server opt-out, not the baseline state. Out of scope for this floor: a separate TrueNAS-first wizard, agent-required bootstrap, or disabled-by-default launch posture.
- Infrastructure visibility: Supported now as one canonical top-level system in connected infrastructure plus shared host telemetry/history when the API can supply it. The expected user outcome is that a connected TrueNAS appliance shows up as infrastructure, keeps platform-connection poll health, and remains a TrueNAS platform even when host telemetry is ignored. Out of scope: a separate `truenas-system` surface or provider-local infrastructure model.
- Workloads: Supported now for TrueNAS apps projected as canonical `app-container` workloads with shared workload navigation, metrics, and related links. Out of scope: a separate TrueNAS app runtime model or provider-local workload page/control surface.
- Storage and disk health: Supported now for pools, datasets, and physical disks projected into the shared storage and disk contracts, including SMART/disk-state risk, live temperature, and recent temperature aggregates when the provider supplies them. Out of scope: promising deeper TrueNAS-only topology or admin actions beyond the current shared storage-health floor.
- Recovery: Supported now as read-side visibility for TrueNAS snapshots and replication artifacts in the shared recovery model, filters, rollups, and cross-surface handoffs. Out of scope: treating recovery as its own TrueNAS onboarding flow or promising provider-native restore/control actions that do not yet exist on the governed shared path.
- Alerts: Supported now when TrueNAS systems, disks, and app parents participate in the shared alert thresholds, incidents, and related-resource handoffs into infrastructure, workloads, storage, and recovery. Out of scope: a separate TrueNAS-only alert product surface.
- Assistant read/control: Supported now for canonical read-side resource access plus bounded app control. Assistant may read TrueNAS app logs via `pulse_read`, read app config via `pulse_query action="config"`, and issue native app start/stop/restart through `pulse_control` on canonical `app-container` resources. Out of scope: a blanket TrueNAS admin plane, provider-local AI tools, host command execution without the unified agent, or broader action promises ahead of the existing action-governance coverage gap.
## VMware vSphere Admission Model
Pulse v6 now has one locked, active strategic direction for VMware vSphere under the governed platform-admission model. VMware is the current admitted next-platform direction for Pulse, but it is not yet a supported platform claim. This section defines the only acceptable phase-1 floor while support remains proof-gated.
Pulse also now distinguishes an admitted platform that is merely
architecture-locked from one that is first-lab-ready: first-lab-ready
means the bounded non-live phase floor is implemented and regression-proofed
well enough that the next proper move is a real lab run, while the support
claim stays blocked until live proof passes.
- Architecture boundary: VMware vSphere should enter Pulse only as the first-class `vmware-vsphere` platform. Phase 1 is `vCenter` only; direct `ESXi` is deferred work and must not inherit support by implication.
- Ingestion model: VMware should be API-first. The baseline path is the official vCenter Automation API plus the Virtual Infrastructure JSON API. A Pulse-managed agent may augment a VMware environment later, but it is not part of the bootstrap or baseline support contract.
- Canonical resource projections: ESXi hosts must project as canonical `agent`, guest workloads as canonical `vm`, and datastores as canonical `storage`. vCenter, datacenter, cluster, folder, and resource-pool objects remain topology or relationship metadata, not top-level Pulse resource types. Out of scope for phase 1: provider-local `esxi-host` or `vsphere-vm` types plus `physical-disk`, `system-container`, or `app-container` projections.
- Visibility, workloads, and storage: The phase-1 floor is read-first infrastructure support through shared Pulse surfaces: host inventory, VM inventory/runtime/guest identity when the API exposes it, snapshot-tree visibility, datastore capacity/accessibility, and metrics/alarm context routed onto the shared resource model.
- Recovery: vSphere snapshots and changed-disk visibility are useful read-side signals, but they do not make VMware a recovery-supported platform in Pulse by themselves. Until shared recovery artifacts and restore flows exist on the governed path, VMware recovery stays out of the support claim.
- Alerts: The phase-1 floor may include vSphere alarm state, overall health state, and related event/task history when those signals are projected through the shared alerts, incidents, and resource-timeline contracts instead of a provider-local incident surface.
- Assistant read/control: Assistant read may be supported on those canonical resources once the shared read paths land. Assistant control stays read-only in phase 1 even though VMware exposes power, snapshot, and guest-operation APIs, because Pulse is not yet claiming a broad VMware action plane ahead of the existing action-governance coverage gap.
- Current implementation checkpoint: VMware may be described internally as `first-lab-ready` once the shared phase-1 onboarding, projection, alert/history, and Assistant-read floor is implemented with automated non-live proof. `first-lab-ready` is an implementation milestone only; it does not change the support matrix or any user-facing support wording.
- Support gate: Do not call VMware supported until one real `vCenter` capability is recorded in `LOCAL_CAPABILITIES.md` and proves connection onboarding, the minimum privilege bundle, the supported version floor, canonical `agent`/`vm`/`storage` projection, alert and metrics-history truth, and assistant read behavior. If those proofs do not hold, implementation should stop at governance rather than shipping an inflated support claim.
- Execution sequence: VMware phase-1 execution should follow `docs/release-control/v6/internal/VMWARE_VSPHERE_PHASE1_EXECUTION_PLAN.md` as the concrete slice order and stop/go contract for the first admitted VMware support floor.
- Projection contract: VMware phase-1 projection should also follow `docs/release-control/v6/internal/VMWARE_VCENTER_PHASE1_RESOURCE_PROJECTION_SPEC.md` as the canonical source, identity, topology, alert-mapping, and non-projection boundary for phase-1 VMware resources.
- Alerts and Assistant contract: VMware phase-1 alerts and Assistant work should also follow `docs/release-control/v6/internal/VMWARE_VCENTER_PHASE1_ALERTS_AND_ASSISTANT_SPEC.md` as the canonical shared-alert, shared-timeline, Assistant-read, and Assistant-control-exclusion boundary for the VMware phase-1 floor.
- Backend API/runtime contract: VMware phase-1 backend API/runtime work should also follow `docs/release-control/v6/internal/VMWARE_VCENTER_PHASE1_API_RUNTIME_SPEC.md` as the canonical public API-boundary, session-ownership, provider-health, and negative-space contract for the VMware phase-1 floor.
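The projection boundary described above can be sketched as a small mapping. The vSphere object kinds and the exact treatment of unknown kinds here are illustrative assumptions; the real contract is owned by the phase-1 resource projection spec, not this sketch.

```python
# Hypothetical sketch of the phase-1 projection boundary. The mapping keys
# and non-projection behavior are illustrative assumptions; the canonical
# rules live in VMWARE_VCENTER_PHASE1_RESOURCE_PROJECTION_SPEC.md.
from typing import Optional

# vSphere object kinds that project onto top-level canonical Pulse resources.
PROJECTED = {
    "HostSystem": "agent",    # ESXi hosts
    "VirtualMachine": "vm",   # guest workloads
    "Datastore": "storage",   # datastores
}

# Object kinds kept as topology/relationship metadata, never top-level resources.
TOPOLOGY_ONLY = {
    "VCenter", "Datacenter", "ClusterComputeResource", "Folder", "ResourcePool",
}

def project(kind: str) -> Optional[str]:
    """Return the canonical resource type for a vSphere object kind, or None."""
    if kind in TOPOLOGY_ONLY:
        return None  # recorded only as relationships in phase 1
    return PROJECTED.get(kind)  # unknown kinds are also non-projections

# project("VirtualMachine") → "vm"; project("Folder") → None
```

The point of keeping `TOPOLOGY_ONLY` explicit is that the non-projection set is part of the contract too: an unmapped kind silently becoming a top-level resource would be drift.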
Cross-Repo Contracts
These contracts must not drift:
- Licensing contract: `pulse/pkg/licensing` semantics vs license-server plans and gates.
- Relay grant contract: license-server issued grants vs relay-server acceptance.
- Relay protocol contract: `pulse/internal/relay`, `pulse-pro/relay-server`, and `pulse-mobile/src/relay` wire compatibility.
- Pricing contract: architecture pricing docs vs Stripe and checkout wiring in `pulse-pro`.
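One way to keep a contract like the relay wire pairing from drifting is a guard that fails fast when the two sides disagree. This is a toy sketch only: the version constants and function are hypothetical, and the real compatibility rules belong to the relay code in each repo.

```python
# Illustrative drift guard for the relay protocol contract. All constants
# are hypothetical stand-ins for values that would be pinned in
# pulse/internal/relay, pulse-pro/relay-server, and pulse-mobile/src/relay.
PULSE_RELAY_WIRE_VERSION = 3     # assumed client-side pin
MOBILE_RELAY_WIRE_VERSION = 3    # assumed mobile-side pin
RELAY_SERVER_ACCEPTED = {2, 3}   # assumed server-side acceptance set

def relay_contract_holds() -> bool:
    """True when every client wire version is accepted by the relay server."""
    clients = {PULSE_RELAY_WIRE_VERSION, MOBILE_RELAY_WIRE_VERSION}
    return clients <= RELAY_SERVER_ACCEPTED

assert relay_contract_holds(), "relay protocol contract drift"
```

Run as part of CI in whichever repo aggregates all three pins, a check like this turns "must not drift" from a reminder into a guardrail.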
Development Governance
For canonical subsystem work:
- Read `docs/release-control/CONTROL_PLANE.md`, `docs/release-control/AGENT_VALUES.md`, `docs/release-control/control_plane.json`, this file, and `docs/release-control/v6/internal/status.json` first. Use `python3 scripts/release_control/control_plane.py --agent-entrypoint --pretty` when you need the canonical ordered entry bundle instead of reconstructing it manually. If the agent starts outside the `pulse` repo, resolve those files from `pulse/docs/release-control/v6/` under the shared workspace root rather than inventing a parallel control layer in the current repo.
- Then read `docs/release-control/v6/internal/CANONICAL_DEVELOPMENT_PROTOCOL.md`.
- Then read `docs/release-control/v6/internal/subsystems/registry.json`.
- Then read the relevant subsystem contract under `docs/release-control/v6/internal/subsystems/`.
- Update `status.json` when live lane state, readiness derivation rules, lane completion records, coverage-gap records, candidate-lane planning records, assertion proof routes, evidence references, or typed operational decision records change.
- Update this file only when stable governance, scope, locked decisions, or the readiness-assertion design rules change.
- When a canonical path replaces an old path, add or tighten a guardrail so the old path cannot silently return.
- When the active target's machine-derived completion rule is satisfied, update `docs/release-control/control_plane.json` in the same task or stop and promote the next target before continuing under stale scope.
- When the user changes Pulse's priority or says what the product should focus on next, classify that as an active-target update or another control-plane change instead of leaving it as informal discussion.
- Do not wait for a special governance prompt. If the user casually says something should be consistent, seamless, difficult to bypass, free of drift, or always true, decide whether it belongs in a readiness assertion, release gate, open decision, or active target before ending the task.
- Do not stop at the first narrow success inside a lane. When the next obvious same-lane work is still required for a coherent and trustworthy result, complete it in the same slice when feasible.
- Keep that standard bounded. If the remaining work clearly belongs to another lane, another active target, a larger redesign, or a separately governed open decision, record that state explicitly instead of expanding the current slice without end.
- If the current lane map no longer models a durable product surface cleanly, record that discovery in `status.json.coverage_gaps` with evidence and an intended promotion path instead of burying it in generic residual text.
- Treat lane completion, lane coverage, and coverage planning as separate derived signals. `status_audit.py --pretty` may derive a lane-completion score from `current_score`/`target_score`, a lane-coverage score from unresolved `coverage_gaps`, a coverage-planning score from unresolved `coverage_gaps` that still lack explicit `candidate_lanes`, and a conservative governed-surface score as the lower of completion and coverage. Those scores are decision aids, not proof that unknown work does not exist.
- Treat active work claiming as its own live control surface. `status.json.work_claims` should hold only live lease-style claims, not a historical log, and `status_audit.py --pretty` should make active claims, expired claims, claim conflicts, and the `available_candidate_lane_queue` visible enough that agents can avoid overlapping picks. Use `scripts/release_control/work_claim.py` as the default reservation path rather than hand-editing claim objects when the system can reserve the slice directly.
- Keep prompt-like guidance values-led. If a rule needs a detailed, repeated reminder, prefer strengthening the control plane, active profile, subsystem contracts, audits, or guardrails over expanding prompt prose into a second operating manual.
- Keep governance routing quiet by default. Resolving the canonical entry bundle, choosing the owning lane, and deciding whether a claim or governance update is required should usually happen in the background. Surface those mechanics only when they materially change blockers, scope, cross-repo impact, or the user's next decision.
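The derived-signal rules above can be sketched roughly in code. The field shapes (`lanes`, `current_score`, `target_score`, `coverage_gaps`, `resolved`, `candidate_lane`) are assumptions based on this file's description of `status.json`, and the exact formulas belong to `status_audit.py`; treat this as a reading aid, not the implementation.

```python
# Rough sketch of the derived scores described above, under assumed
# status.json field shapes. The real derivation is owned by
# scripts/release_control/status_audit.py.

def derived_scores(status: dict) -> dict:
    lanes = status.get("lanes", [])
    gaps = status.get("coverage_gaps", [])

    # Lane completion: aggregate progress toward per-lane target scores.
    totals = [(lane["current_score"], lane["target_score"]) for lane in lanes]
    completion = sum(c for c, _ in totals) / max(1, sum(t for _, t in totals))

    # Lane coverage: penalized by unresolved coverage gaps.
    unresolved = [g for g in gaps if not g.get("resolved")]
    coverage = (1.0 if not lanes and not unresolved
                else len(lanes) / max(1, len(lanes) + len(unresolved)))

    # Coverage planning: unresolved gaps that still lack a candidate lane.
    unplanned = [g for g in unresolved if not g.get("candidate_lane")]
    planning = 1.0 - len(unplanned) / max(1, len(unresolved)) if unresolved else 1.0

    return {
        "completion": completion,
        "coverage": coverage,
        "planning": planning,
        # Conservative governed-surface score: lower of completion and coverage.
        "governed_surface": min(completion, coverage),
    }
```

As the text stresses, every value here is bounded by what is recorded: a perfect score proves only that the recorded lanes and gaps look healthy, not that unrecorded work does not exist.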
For readiness assertion work:
- Update this file only when the readiness-assertion design rules change.
- Update `status.json.readiness_assertions` when the active assertion catalog or proof references/proof commands change.
- Route manual assertion proof through `HIGH_RISK_RELEASE_VERIFICATION_MATRIX.md` whenever the assertion needs a trust-critical release gate instead of a one-off checklist.
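The routing above suggests a simple shape for automated proof: walk the assertion catalog, run each executable proof command, and skip anything flagged for the manual matrix. The record fields (`id`, `proof_command`, `manual_gate`) are assumptions for illustration; the real shape is owned by `status.schema.json`.

```python
# Hedged sketch: running executable readiness-assertion proofs from
# status.json. Field names are illustrative assumptions, not the schema.
import json
import subprocess

def run_proofs(status_path: str) -> list:
    """Return (assertion id, passed) pairs for executable proofs only."""
    with open(status_path) as f:
        assertions = json.load(f).get("readiness_assertions", [])
    results = []
    for a in assertions:
        if a.get("manual_gate"):
            # Trust-critical proofs route through the verification matrix,
            # not an ad-hoc command run here.
            continue
        proc = subprocess.run(a["proof_command"], shell=True)
        results.append((a["id"], proc.returncode == 0))
    return results
```

The deliberate design point is the `continue`: an automated runner should refuse to "prove" a manual gate, so a green run can never be mistaken for full release readiness.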
Source Domains
If conflicts appear, resolve by domain:
- `docs/release-control/v6/internal/CANONICAL_DEVELOPMENT_PROTOCOL.md`, `docs/release-control/v6/internal/subsystems/registry.json`, and the relevant subsystem contract own implementation rules.
- `docs/release-control/v6/internal/status.json` owns live lane state, the active readiness assertion catalog, readiness derivation rules, executable proof commands, coverage-gap discovery records, candidate-lane planning records, structured evidence references, and typed operational decision records.
- `docs/release-control/v6/status.schema.json` owns the machine-readable shape contract for `status.json`.
- `docs/release-control/v6/internal/subsystems/registry.schema.json` owns the machine-readable shape contract for the subsystem registry.
- `docs/release-control/AGENT_VALUES.md` owns evergreen agent values, while `docs/release-control/CONTROL_PLANE.md` and `docs/release-control/control_plane.json` own evergreen governance, active profile selection, and active target selection.
- This file owns profile-specific governance, repo scope, release gates, readiness assertion design rules, and locked decisions for v6.
- Supporting architecture and release docs are evidence only. They do not override the files above.
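The ownership rules above amount to a precedence table: each domain has exactly one owning surface, and anything else is evidence. A toy sketch, with illustrative domain keys that are not canonical identifiers:

```python
# Toy sketch of the conflict-resolution table described above. Domain keys
# are illustrative labels chosen for this example, not governed names.
OWNERS = {
    "implementation_rules": "CANONICAL_DEVELOPMENT_PROTOCOL.md + subsystem contracts",
    "live_lane_state": "status.json",
    "status_shape": "status.schema.json",
    "registry_shape": "registry.schema.json",
    "evergreen_governance": "CONTROL_PLANE.md + control_plane.json",
    "v6_profile_governance": "this file",
}

def resolve(domain: str) -> str:
    """Return the owning surface for a conflicted domain.

    Anything outside the table (architecture and release docs included)
    is evidence only and never overrides an owning surface.
    """
    return OWNERS.get(domain, "evidence only; does not override owning surfaces")
```

The useful property is that lookup never merges sources: a conflict resolves to one owner, or to "evidence only", never to a blend.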