This commit contains multiple fixes for temperature proxy registration,
but the core issue remains unresolved.
## What's Fixed:
1. Added config pointer and reloadFunc to TemperatureProxyHandlers
2. Added SetConfig method to keep handler in sync with router config changes
3. Added config reload after registration to prevent monitor from overwriting
4. Fixed installer port conflict detection and duplicate YAML key issues
5. Added comprehensive debug logging throughout registration flow
## What's Still Broken:
The TemperatureProxyURL, TemperatureProxyToken, and TemperatureProxyControlToken
fields are NOT persisting to nodes.enc after SaveNodesConfig is called.
Debug logs confirm:
- HandleRegister correctly updates nodesConfig.PVEInstances[matchedIndex]
- The correct data is passed to SaveNodesConfig (verified in logs)
- SaveNodesConfig completes without errors
- Config reload executes successfully
- BUT after Pulse restart, the fields are empty when loaded from disk
The bug is in SaveNodesConfig serialization or file writing logic itself.
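To narrow this down, the serialization round-trip can be tested in isolation from the file-writing path. The sketch below is hypothetical: the struct tag and field shape are assumptions, not the real PVEInstance definition from config.go, but it illustrates how a missing or mismatched tag (or an unexported field) would reproduce exactly this symptom.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-in for PVEInstance; the real struct lives in
// internal/config/config.go. If the real field lacks a matching
// serialization tag (or is unexported), it silently vanishes on save,
// which is the symptom described above.
type pveInstance struct {
	Host                string `json:"host"`
	TemperatureProxyURL string `json:"temperatureProxyUrl"`
}

// roundTrip marshals and unmarshals an instance, returning what would
// actually survive a save/load cycle through this encoding.
func roundTrip(in pveInstance) (pveInstance, error) {
	data, err := json.Marshal(in)
	if err != nil {
		return pveInstance{}, err
	}
	var out pveInstance
	err = json.Unmarshal(data, &out)
	return out, err
}

func main() {
	in := pveInstance{Host: "pve01", TemperatureProxyURL: "http://10.0.0.5:9000"}
	out, err := roundTrip(in)
	fmt.Println(out.TemperatureProxyURL == in.TemperatureProxyURL, err)
}
```

If a round-trip test like this passes against the real struct, the fault is in the encryption/file-write layer rather than the serialization.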
Related files:
- internal/api/temperature_proxy.go: Registration handler
- internal/config/persistence.go: SaveNodesConfig implementation
- internal/config/config.go: PVEInstance struct definition
Added TestPVESetupScriptArgumentAlignment to prevent future fmt.Sprintf
argument mismatch bugs in the PVE quick setup script template.
The test uses sentinel values (SENTINEL_URL, SENTINEL_HOST, deadbeef...)
to verify that critical placeholders receive the correct argument types:
✓ Repair block INSTALLER_URL uses pulseURL (not authToken)
✓ Repair --pulse-server flags use pulseURL (not authToken)
✓ Authorization headers use runtime $AUTH_TOKEN variable (not hardcoded)
✓ Token ID uses tokenName (pulse-*) (not pulseURL or authToken)
This test would have caught the bugs fixed in commits 2bb73d3c7 and
2053bc5e2, where:
- authToken appeared in --pulse-server URLs (argument shift)
- Authorization headers were hardcoded instead of using runtime variable
Recommended by Codex as a safeguard against this class of regression.
Changed Authorization headers in ssh-config and verify-temperature-ssh API
calls to use the runtime $AUTH_TOKEN variable instead of compile-time
hardcoded authToken.
This fixes a bug where users who override the auth token via:
- PULSE_SETUP_TOKEN environment variable
- Interactive prompt (when auth_token URL param omitted)
...would still send an empty Bearer token in the Authorization headers,
causing API calls to fail with 401 Unauthorized.
Changes:
- Line 4748: -H "Authorization: Bearer %s" → -H "Authorization: Bearer $AUTH_TOKEN"
- Line 4937: -H "Authorization: Bearer %s" → -H "Authorization: Bearer $AUTH_TOKEN"
- Removed 2 authToken arguments from fmt.Sprintf (line 5059)
Now the script respects runtime token overrides in all code paths.
Identified by Codex during fmt.Sprintf argument alignment review.
Fixed critical argument mismatch bug where fmt.Sprintf arguments didn't align
with template placeholders. This caused:
- authToken being passed where pulseURL expected (curl errors)
- pulseURL being passed where authToken expected (empty Authorization headers)
- tokenName misalignment (Token ID placeholder broken)
Root cause: Template has 51 %s placeholders (54 total - 3 escaped %%s), but
argument list had wrong count and ordering.
Solution: Rebuilt argument list (lines 5049-5059) with correct mapping:
- 27 pulseURL (all installer URLs, --pulse-server flags, API endpoints)
- 11 tokenName (token creation, checks, final Token ID)
- 3 authToken (AUTH_TOKEN variable + 2 Authorization headers)
- 3 serverHost (error message rerun hints)
- 1 each: serverName, time, pulseIP, storagePerms, SSH keys, minProxyReadyVersion
Verified with go vet (passes). Mapping confirmed by walking each placeholder
in template and matching to correct argument type.
Related to #TBD (user will test)
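The placeholder arithmetic above (54 total minus 3 escaped %%s) can be checked mechanically. This is a sketch of such a counter, not code from the repository; it counts unescaped %s verbs only.

```go
package main

import "fmt"

// countSprintfSlots counts unescaped %s verbs in a template, skipping
// the %% escape, mirroring the arithmetic described in the commit.
func countSprintfSlots(tmpl string) int {
	n := 0
	for i := 0; i < len(tmpl)-1; i++ {
		if tmpl[i] != '%' {
			continue
		}
		if tmpl[i+1] == '%' {
			i++ // %% is a literal percent, not a verb
			continue
		}
		if tmpl[i+1] == 's' {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(countSprintfSlots("a %s b %%s c %s")) // 2
}
```

Comparing this count against len(args) in a test would catch a mismatch before go vet or a user does.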
Issue: When deployment type cannot be determined, error message referenced
$PROXY_INSTALLER but deleted it immediately, making instructions unusable.
Fix: Provide complete curl commands that users can copy-paste directly:
curl -fsSL $PULSE_URL/api/install/install-sensor-proxy.sh | bash -s -- ...
This ensures users have a working repair path even when auto-detection fails.
Identified by Codex final review.
Addresses all remaining issues from Codex final review:
Issue 1: SUMMARY_PROXY_INSTALLED unreliable (only set by install.sh)
Fix: Use PROXY_SOCKET_EXISTED_AT_START flag set at script start - works
for all installation methods (manual, older installers, etc.)
Issue 2: CTID detection fails when container offline/renamed
Fix: Read SUMMARY_CTID from install_summary.json as fallback. Priority:
1) Live PULSE_CTID detection
2) SUMMARY_CTID from json file
3) Standalone node detection
Issue 3: Failed repair disables working proxy (TEMPERATURE_ENABLED=false)
Fix: Keep TEMPERATURE_ENABLED=true in all failure paths. Comments explain:
proxy was working before, keep it enabled even if repair fails.
This ensures turnkey repair works reliably across all deployment scenarios
without breaking existing working proxies.
Addresses all issues found in Codex review:
1. Prevent double-install: Check SUMMARY_PROXY_INSTALLED to distinguish
between fresh installs (skip repair) vs existing installs (run repair)
2. Fix clustered node failures: Explicitly detect deployment type and bail
out with clear error message if neither --ctid nor --standalone can be
determined
3. Add health validation: Mirror main install path - verify service active,
socket exists, and fetch SSH public key after repair
4. Capture installer output: Show full diagnostics on failure (tail -20)
5. Better error messages: Provide specific manual repair commands when
deployment type cannot be auto-detected
This ensures the turnkey repair experience works reliably without regressing
fresh install UX.
The previous attempt (ed04926) was ineffective - it only set TEMPERATURE_ENABLED=true
which was redundant (already set at line 4051) and didn't trigger the auto-install block
because that block is gated by SKIP_TEMPERATURE_PROMPT != true.
This fix actually downloads and runs install-sensor-proxy.sh when an existing
socket is detected, which:
- Refreshes control plane tokens (fixes 401 errors)
- Updates control plane URL to correct Pulse instance
- Rewrites config atomically (Phase 2 installer is idempotent)
- Maintains turnkey UX - rerunning setup script now actually works
Detected by Codex final review.
When sensor-proxy socket is detected, the setup script was skipping
temperature monitoring setup with 'already configured' message. This
left stale control plane URLs/tokens, breaking temperature monitoring.
Now follows Codex recommendation: treat existing installations as
upgrade/repair opportunities. The installer is idempotent (Phase 2),
so rerunning it safely refreshes tokens, updates URLs, and ensures
turnkey operation even on hosts with existing installations.
Changes:
- Remove early return when sensor-proxy socket detected
- Set TEMPERATURE_ENABLED=true to proceed with reinstall
- Update message to clarify repair/upgrade behavior
- Maintains turnkey promise: rerun setup and it just works
Modern Proxmox LXC containers (cgroup v2 + systemd) don't expose the CTID
inside the guest namespace. The auto-detection in DetectLXCCTID() works
for older LXC setups and when hostname is numeric, but fails for most
production containers where users set custom hostnames.
Changes:
- Added PULSE_LXC_CTID environment variable override in router.go:490-495
- Graceful fallback: auto-detect first, then check env var, then show placeholder
- UI already handles missing CTID by showing "pct exec <ctid>" placeholder
This provides a robust solution for thousands of users:
- Stock Proxmox LXC: Shows `pct exec <ctid>` placeholder (user substitutes manually)
- Custom hostname containers: Can set PULSE_LXC_CTID=171 in compose/systemd
- Numeric hostname containers: Auto-detected (backwards compatible)
Related: FirstRunSetup.tsx already has graceful fallback (line 336-339)
- Added DetectDockerContainerName() to detect container name from hostname
- Extended /api/security/status to expose dockerContainerName field
- Updated FirstRunSetup to show actual container name when detected:
* Before: 'docker exec <container-name> cat /data/.bootstrap_token'
* After: 'docker exec pulse cat /data/.bootstrap_token'
This reduces friction for users - no need to look up the container name.
Works when Docker container is named (--name flag), falls back to
placeholder for auto-generated container IDs.
- Added DetectLXCCTID() to internal/system/container.go to detect Proxmox container ID
- Extended /api/security/status to expose inContainer and lxcCtid fields
- Updated FirstRunSetup to show most relevant command based on detected environment:
* LXC with CTID: Shows 'pct exec 171 -- cat /etc/pulse/.bootstrap_token'
* Docker: Shows 'docker exec <container-name> cat /data/.bootstrap_token'
* Bare metal: Shows 'cat /etc/pulse/.bootstrap_token'
- Collapsed alternative methods behind 'Show other retrieval methods' button
This addresses user feedback that showing all options was overwhelming.
Now users see the command most likely to work for their setup first,
with alternatives hidden but still accessible.
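One plausible sketch of the numeric-hostname path that DetectLXCCTID() relies on for backwards compatibility; the function name here is invented, and the real detector likely consults more sources than the hostname.

```go
package main

import "fmt"

// numericHostnameCTID treats a purely numeric container hostname as the
// CTID, the backwards-compatible case mentioned above. Non-numeric or
// empty hostnames yield "" so callers fall back to other detection.
func numericHostnameCTID(hostname string) string {
	if hostname == "" {
		return ""
	}
	for _, r := range hostname {
		if r < '0' || r > '9' {
			return ""
		}
	}
	return hostname
}

func main() {
	fmt.Println(numericHostnameCTID("171"), numericHostnameCTID("pulse-prod"))
}
```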
**Host Detection**:
- Now detects localhost by hostname and FQDN, not just IP
- Fixes issue where nodes configured as https://hostname:8006 would skip
localhost cleanup (API tokens, bind mounts, service removal)
**Systemd Sandbox**:
- Added /etc/pve and /etc/systemd/system to ReadWritePaths
- Allows cleanup script to modify Proxmox configs and systemd units
**Uninstaller Improvements**:
- Use UUID for transient unit names (prevents same-second collisions)
- Added --purge flag for complete removal
- Added --wait and --collect flags to capture exit code
- Now fails cleanup if uninstaller exits non-zero
**Path Migration**:
- Fixed all /usr/local references to use /opt/pulse/sensor-proxy
- Updated forced command in SSH authorized_keys
- Updated self-heal script installer path
- Updated Go backend removal helpers (supports both new and legacy paths)
These fixes address Codex findings: hostname detection, sandbox permissions,
transient unit collisions, incomplete purging, and incomplete path migration.
Related to cleanup implementation testing.
## HTTP Server Fixes
- Add source IP middleware to enforce allowed_source_subnets
- Fix missing source subnet validation for external HTTP requests
- HTTP health endpoint now respects subnet restrictions
## Installer Improvements
- Auto-configure allowed_source_subnets with Pulse server IP
- Add cluster node hostnames to allowed_nodes (not just IPs)
- Fix node validation to accept both hostnames and IPs
- Add Pulse server reachability check before installation
- Add port availability check for HTTP mode
- Add automatic rollback on service startup failure
- Add HTTP endpoint health check after installation
- Fix config backup and deduplication (prevent duplicate keys)
- Fix IPv4 validation with loopback rejection
- Improve registration retry logic with detailed errors
- Add automatic LXC bind mount cleanup on uninstall
## Temperature Collection Fixes
- Add local temperature collection for self-monitoring nodes
- Fix node identifier matching (use hostname not SSH host)
- Fix JSON double-encoding in HTTP client response
Related to #XXX (temperature monitoring fixes)
Implements REST API endpoints to enable automatic registration of
temperature proxies during sensor-proxy installation.
API endpoints:
- POST /api/temperature-proxy/register
  - Accepts: hostname, proxy_url
  - Returns: authentication token
  - Finds matching PVE instance and configures proxy URL/token
  - No authentication required (called during installation)
- DELETE /api/temperature-proxy/unregister?hostname=X
  - Removes proxy configuration from PVE instance
  - Requires admin authentication
Implementation:
- Uses config.ConfigPersistence for loading/saving nodes.enc
- Matches PVE instances by hostname in Host field or ClusterEndpoints
- Generates cryptographically secure random tokens (32 bytes, base64)
- Atomic config updates (load → modify → save)
Next step: Update install-sensor-proxy.sh to call registration API
Related to #571
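The token generation step described above (32 random bytes, base64-encoded) might look like this; the choice of URL-safe encoding is an assumption, as the real handler may use standard encoding.

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// newProxyToken generates a cryptographically secure credential:
// 32 bytes from crypto/rand, base64-encoded (44 characters with padding).
func newProxyToken() (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(buf), nil
}

func main() {
	tok, err := newProxyToken()
	fmt.Println(len(tok), err)
}
```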
Implements a "Remember Me" option that allows users to stay logged in
for 30 days instead of the default 24 hours. This addresses the pain
point of frequent re-authentication in LAN-only environments while
maintaining authentication security.
Backend changes:
- Add rememberMe field to login request handling
- Support variable session durations (24h default, 30d with Remember Me)
- Implement sliding session expiration that extends sessions on each
authenticated request using the original duration
- Store OriginalDuration in session data for proper sliding window
- Update session cookie MaxAge to match session duration
Frontend changes:
- Add "Remember Me for 30 days" checkbox to login form
- Pass rememberMe flag in login request
- Improve UI with clear duration indication
Key features:
- Sessions extend automatically on each request (sliding window)
- Original duration preserved across session extension
- Backward compatible with existing sessions (legacy sessions work)
- Sessions persist across server restarts
This provides a better user experience for LAN deployments without
compromising security by completely disabling authentication.
CRITICAL SECURITY FIX: The /download/pulse-host-agent endpoint was directly
concatenating user-supplied platform and arch query parameters into file paths
without validation, allowing path traversal attacks.
An attacker could request:
/download/pulse-host-agent?platform=../../etc/passwd
to read arbitrary files from the container filesystem.
Fix: Add input validation to only allow alphanumeric characters and hyphens
in platform/arch parameters before using them in file paths.
Related: Codex security audit identified this during pre-release review
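The validation described in the fix can be sketched as an allowlist check performed before any path construction; the helper name and binary naming scheme here are illustrative, not the actual handler code.

```go
package main

import (
	"fmt"
	"regexp"
)

// validParam is the allowlist from the fix: alphanumerics and hyphens
// only, so input like "../../etc/passwd" can never reach a file path.
var validParam = regexp.MustCompile(`^[a-zA-Z0-9-]+$`)

// safeBinaryName validates both parameters before building a filename.
func safeBinaryName(platform, arch string) (string, bool) {
	if !validParam.MatchString(platform) || !validParam.MatchString(arch) {
		return "", false
	}
	return "pulse-host-agent-" + platform + "-" + arch, true
}

func main() {
	_, bad := safeBinaryName("../../etc/passwd", "amd64")
	name, ok := safeBinaryName("linux", "arm64")
	fmt.Println(bad, ok, name)
}
```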
When a request for /login (or any other frontend route) comes in without
proper Accept headers (like from curl or some browsers), the server was
returning 'Authentication required' text instead of serving the frontend HTML.
This is because the router was checking authentication before serving ANY
non-API route, including frontend pages like /login, /dashboard, etc.
The fix: Frontend routes should always be served without backend auth checks.
The authentication logic runs in the frontend JavaScript after the page loads.
Backend auth should only block:
- API endpoints (/api/*)
- WebSocket connections (/ws*, /socket.io/*)
- Download endpoints (/download/*)
- Special scripts (/install-*.sh, etc.)
All other routes are frontend pages that need to be served to everyone so
the login page can load and handle auth in the browser.
This fixes the integration tests where Playwright couldn't see the login
form because the server was rejecting the /login request before serving HTML.
Related to #695 (release workflow integration tests)
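The routing rule above could be encoded roughly as follows; the prefix list mirrors the bullet list, while the exact matching rules in the real router may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// requiresBackendAuth encodes the rule: only API, WebSocket, download,
// and install-script paths are auth-gated. Everything else is a frontend
// route served unauthenticated so the login page can load and handle
// auth in the browser.
func requiresBackendAuth(path string) bool {
	protectedPrefixes := []string{"/api/", "/ws", "/socket.io/", "/download/", "/install-"}
	for _, p := range protectedPrefixes {
		if strings.HasPrefix(path, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(requiresBackendAuth("/login"), requiresBackendAuth("/api/state"))
}
```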
- Add job queue system to ensure only one update runs at a time
- Add Server-Sent Events (SSE) for real-time push updates
- Increase rate limit from 20/min to 60/min for update endpoints
- Add unit tests for queue and SSE functionality
- Frontend: Update modal now uses SSE with polling fallback
Eliminates: 429 rate limit errors, duplicate modals, race conditions
Related to #671
This commit implements a comprehensive refactoring of the update system
to address race conditions, redundant polling, and rate limiting issues.
Backend changes:
- Add job queue system to ensure only ONE update runs at a time
- Implement Server-Sent Events (SSE) for real-time update progress
- Add rate limiting to /api/updates/status (5-second minimum per client)
- Create SSE broadcaster for push-based status updates
- Integrate job queue with update manager for atomic operations
- Add comprehensive unit tests for queue and SSE components
Frontend changes:
- Update UpdateProgressModal to use SSE as primary mechanism
- Implement automatic fallback to polling when SSE unavailable
- Maintain backward compatibility with existing update flow
- Clean up SSE connections on component unmount
API changes:
- Add new endpoint: GET /api/updates/stream (SSE)
- Enhance /api/updates/status with client-based rate limiting
- Return cached status with appropriate headers when rate limited
Benefits:
- Eliminates 429 rate limit errors during updates
- Only one update job can run at a time (prevents race conditions)
- Real-time updates via SSE reduce unnecessary polling
- Graceful degradation to polling when SSE unavailable
- Better resource utilization and reduced server load
Testing:
- All existing tests pass
- New unit tests for queue and SSE functionality
- Integration tests verify complete update flow
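A minimal sketch of the single-job guarantee using a one-slot buffered channel; the real job queue presumably tracks job state and progress as well.

```go
package main

import "fmt"

// updateQueue admits at most one running job at a time. A buffered
// channel of size 1 acts as a non-blocking slot: a second TryStart while
// a job is active is refused instead of racing.
type updateQueue struct {
	slot chan struct{}
}

func newUpdateQueue() *updateQueue {
	return &updateQueue{slot: make(chan struct{}, 1)}
}

// TryStart claims the slot; it returns false if a job is already running.
func (q *updateQueue) TryStart() bool {
	select {
	case q.slot <- struct{}{}:
		return true
	default:
		return false
	}
}

// Finish releases the slot so the next job can start.
func (q *updateQueue) Finish() { <-q.slot }

func main() {
	q := newUpdateQueue()
	fmt.Println(q.TryStart(), q.TryStart())
	q.Finish()
	fmt.Println(q.TryStart())
}
```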
This commit addresses three recurring issues with the update system:
1. **Checksum mismatches (v4.27.0, v4.28.0):**
- Root cause: Release process uploads checksums.txt first, but if artifacts
are rebuilt after that upload, checksums become stale
- Fix: Update RELEASE_CHECKLIST.md to REQUIRE running validate-release.sh
before publishing (step 9, non-negotiable)
- The validation script exists and catches these errors, but wasn't being
enforced in the release process
2. **Duplicate error modals:**
- Root cause: UpdateProgressModal rendered in both App.tsx
(GlobalUpdateProgressWatcher) and UpdateBanner.tsx
- Fix: Remove UpdateProgressModal from UpdateBanner.tsx
- GlobalUpdateProgressWatcher automatically shows the modal when updates
start, so the banner's modal is redundant
3. **Rate limiting too strict:**
- Root cause: UpdateProgressModal polls /api/updates/status every 2 seconds
(30 req/min), but rate limit was 20/min
- Fix: Increase UpdateEndpoints rate limit from 20/min to 60/min
- Allows modal to poll without hitting rate limits during updates
These were all manual process errors and configuration issues, not code bugs.
The validation script enforcement prevents future checksum mismatches.
The first-run setup UI was displaying incorrect bootstrap token paths for
Docker deployments. It showed `/etc/pulse/.bootstrap_token` regardless of
deployment type, but Docker containers use `/data/.bootstrap_token` by
default (via PULSE_DATA_DIR env var).
Changes:
- Extended `/api/security/status` endpoint to include `bootstrapTokenPath`
and `isDocker` fields when a bootstrap token is active
- Updated FirstRunSetup component to fetch and display the correct path
dynamically based on actual deployment configuration
- For Docker deployments, UI now shows both `docker exec` command and
in-container command
- Falls back to showing both standard and Docker paths if API data
unavailable (backward compatibility)
This fix ensures users always see the correct command for their specific
deployment, including custom PULSE_DATA_DIR configurations.
Users upgrading from v4.25 (where DISABLE_AUTH actually disabled auth) to
v4.27.1 (where DISABLE_AUTH is ignored but triggers a deprecation warning)
were stuck in a catch-22:
- They had no credentials (old version had auth disabled)
- DISABLE_AUTH detection incorrectly required authentication
- Setup wizard returned 401, preventing first credential creation
- Could not complete setup to create credentials and remove flag
Root cause: When DISABLE_AUTH was detected, the code set forceRequested=true
which triggered the authentication requirement even when authConfigured=false.
Fix: Only require authentication when credentials actually exist. When no
auth is configured, allow the bootstrap token flow regardless of whether
DISABLE_AUTH is detected.
This lets users upgrade from legacy DISABLE_AUTH deployments by using the
bootstrap token to create their first credentials, then removing the flag.
The diagnostic code was warning ALL deployments using /run/pulse-sensor-proxy
socket path to "remove and re-add" their configuration to use /mnt/pulse-proxy
instead. This was incorrect for Docker deployments where /run is the correct
and documented mount path (see docker-compose.yml line 15).
The warning was only meant for LXC containers where the managed mount at
/mnt/pulse-proxy is preferred over a legacy hand-crafted /run mount.
Fix: Only show the warning in non-Docker environments (check PULSE_DOCKER env).
Docker deployments correctly use /run/pulse-sensor-proxy per compose file.
Impact: Docker users were seeing confusing diagnostic warnings telling them
to reconfigure a correct setup.
Implements comprehensive mdadm RAID array monitoring for Linux hosts
via pulse-host-agent. Arrays are automatically detected and monitored
with real-time status updates, rebuild progress tracking, and automatic
alerting for degraded or failed arrays.
Key changes:
**Backend:**
- Add mdadm package for parsing mdadm --detail output
- Extend host agent report structure with RAID array data
- Integrate mdadm collection into host agent (Linux-only, best-effort)
- Add RAID array processing in monitoring system
- Implement automatic alerting:
  - Critical alerts for degraded arrays or arrays with failed devices
  - Warning alerts for rebuilding/resyncing arrays with progress tracking
  - Auto-clear alerts when arrays return to healthy state
**Frontend:**
- Add TypeScript types for RAID arrays and devices
- Display RAID arrays in host details drawer with:
  - Array status (clean/degraded/recovering) with color-coded indicators
  - Device counts (active/total/failed/spare)
  - Rebuild progress percentage and speed when applicable
  - Green for healthy, amber for rebuilding, red for degraded
**Documentation:**
- Document mdadm monitoring feature in HOST_AGENT.md
- Explain requirements (Linux, mdadm installed, root access)
- Clarify scope (software RAID only, hardware RAID not supported)
**Testing:**
- Add comprehensive tests for mdadm output parsing
- Test parsing of healthy, degraded, and rebuilding arrays
- Verify proper extraction of device states and rebuild progress
All builds pass successfully. RAID monitoring is automatic and best-effort:
if mdadm is not installed or no arrays exist, the host agent continues
reporting other metrics normally.
Related to #676
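As an illustration of the parsing involved, here is a stripped-down extractor for the array state line of mdadm --detail output; the real mdadm package parses many more fields (device counts, rebuild progress, speed).

```go
package main

import (
	"fmt"
	"strings"
)

// parseMdadmState pulls the array state out of `mdadm --detail` output.
// The sample follows mdadm's usual "State : clean" layout.
func parseMdadmState(detail string) string {
	for _, line := range strings.Split(detail, "\n") {
		key, val, ok := strings.Cut(line, ":")
		if ok && strings.TrimSpace(key) == "State" {
			return strings.TrimSpace(val)
		}
	}
	return ""
}

func main() {
	sample := "/dev/md0:\n        State : clean, degraded, recovering\n"
	fmt.Println(parseMdadmState(sample))
}
```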
Adds build support for 32-bit x86 (i386/i686) and ARMv6 (older Raspberry Pi models) architectures across all agents and install scripts.
Changes:
- Add linux-386 and linux-armv6 to build-release.sh builds array
- Update Dockerfile to build docker-agent, host-agent, and sensor-proxy for new architectures
- Update all install scripts to detect and handle i386/i686 and armv6l architectures
- Add architecture normalization in router download endpoints
- Update update manager architecture mapping
- Update validate-release.sh to expect 24 binaries (was 18)
This enables Pulse agents to run on older/legacy hardware including 32-bit x86 systems and Raspberry Pi Zero/Zero W devices.
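The architecture normalization added to the download endpoints might look like this; only the i386/i686 and armv6l cases come from this change, and the remaining mappings are common conventions that may differ from the actual update-manager table.

```go
package main

import "fmt"

// normalizeArch maps uname-style architecture strings onto the build
// names used in release artifacts. The i386/i686 -> 386 and
// armv6l -> armv6 cases are the ones added for legacy hardware.
func normalizeArch(raw string) string {
	switch raw {
	case "i386", "i686":
		return "386"
	case "armv6l":
		return "armv6"
	case "x86_64":
		return "amd64"
	case "aarch64":
		return "arm64"
	default:
		return raw
	}
}

func main() {
	fmt.Println(normalizeArch("i686"), normalizeArch("armv6l"))
}
```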
Allow homelab users to send webhooks to internal services while maintaining security defaults.
Changes:
- Add webhookAllowedPrivateCIDRs field to SystemSettings (persistent config)
- Implement CIDR parsing and validation in NotificationManager
- Convert ValidateWebhookURL to instance method to access allowlist
- Add UI controls in System Settings for configuring trusted CIDR ranges
- Maintain strict security by default (block all private IPs)
- Keep localhost, link-local, and cloud metadata services blocked regardless of allowlist
- Re-validate on both config save and webhook delivery (DNS rebinding protection)
- Add comprehensive tests for CIDR parsing and IP matching
Backend:
- UpdateAllowedPrivateCIDRs() parses comma-separated CIDRs with validation
- Support for bare IPs (auto-converts to /32 or /128)
- Thread-safe allowlist updates with RWMutex
- Logging when allowlist is updated or used
- Validation errors prevent invalid CIDRs from being saved
Frontend:
- New "Webhook Security" section in System Settings
- Input field with examples and helpful placeholder text
- Real-time unsaved changes tracking
- Loads and saves allowlist via system settings API
Security:
- Default behavior unchanged (all private IPs blocked)
- Explicit opt-in required via configuration
- Localhost (127/8) always blocked
- Link-local (169.254/16) always blocked
- Cloud metadata services always blocked
- DNS resolution checked at both save and send time
Testing:
- Tests for CIDR parsing (valid/invalid inputs)
- Tests for IP allowlist matching
- Tests for bare IP address handling
- Tests for security boundaries (localhost, link-local remain blocked)
Related to #673
Related to #670, #657
The fix in v4.26.5 (commit 59a97f2e3) attempted to resolve storage disappearing
by preferring hostnames over IPs when TLS hostname verification is required
(VerifySSL=true and no fingerprint). However, that fix was ineffective because
the cluster discovery code was populating BOTH the Host and IP fields with the
IP address.
**Root Cause:**
In internal/api/config_handlers.go, the detectPVECluster function was setting:
- endpoint.Host = schemePrefix + clusterNode.IP (when IP was available)
- endpoint.IP = clusterNode.IP
This meant both fields contained the same IP address. When the monitoring code
tried to prefer endpoint.Host for TLS validation (internal/monitoring/monitor.go:
361-368), it was still getting an IP, causing certificate validation to fail
with "certificate is valid for pve01.example.com, not 10.0.0.44".
**Solution:**
Separate the Host and IP fields properly during cluster discovery:
- endpoint.Host = hostname (e.g., "https://pve01:8006") for TLS validation
- endpoint.IP = IP address (e.g., "10.0.0.44") for DNS-free connections
The existing logic in clusterEndpointEffectiveURL() can now correctly choose
between them based on TLS requirements.
**Impact:**
Users with VerifySSL=true who upgraded to v4.26.1-v4.26.5 and lost storage
visibility should now see storage, VM disks, and backups again after this fix.
The setup script template had 44 %s placeholders, but the fmt.Sprintf call
arguments were out of order starting at position 15. This caused the Pulse
URL to be inserted where the token name should be, resulting in errors like:
Token ID: pulse-monitor@pam!http://192.168.0.44:7655
Instead of the correct format:
Token ID: pulse-monitor@pam!pulse-192-168-0-44-1762545916
Changes:
- Escaped %s in printf helper (line 3949) so it doesn't consume arguments
- Reordered fmt.Sprintf arguments (lines 4727-4732) to match template order
- Removed 2 extra pulseURL arguments that were causing the shift
This fix ensures all 44 placeholders receive the correct values in order.
The download endpoint had a dangerous fallback that silently served the
wrong binary when the requested platform/arch combination was missing.
If a Docker image shipped without Windows binaries, the installer would
receive a Linux ELF instead of a Windows PE, causing ERROR_BAD_EXE_FORMAT.
Changes:
- Download handler now operates in strict mode when platform+arch are
specified, returning 404 instead of serving mismatched binaries
- PowerShell installer validates PE header (MZ signature)
- PowerShell installer verifies PE machine type matches requested arch
- PowerShell installer fetches and verifies SHA256 checksums
- PowerShell installer shows diagnostic info: OS arch, download URL,
file size for better troubleshooting
This prevents silent failures and provides clear error messages when
binaries are missing or corrupted.