Mirror of https://github.com/ruvnet/RuView.git (synced 2026-05-17 04:19:13 +00:00)
feat: adaptive CSI classifier with signal smoothing pipeline (ADR-048) (#144)
Add environment-tuned activity classification that learns from labeled ESP32 CSI recordings, replacing brittle static thresholds.

- Adaptive classifier: 15-feature logistic regression trained from JSONL recordings (variance, motion band, subcarrier stats: skew, kurtosis, entropy, IQR). Trains in <1s, persists as JSON, auto-loads on restart.
- Three-stage signal smoothing: adaptive baseline subtraction (α=0.003), EMA + trimmed-mean median filter (21-frame window), hysteresis debounce (4 frames). Motion classification now stable across seconds, not frames.
- Vital signs stabilization: outlier rejection (±8 BPM HR, ±2 BPM BR), trimmed mean, dead-band (±2 BPM HR), EMA α=0.02. HR holds steady for 10+ seconds instead of jumping 50 BPM every frame.
- Observatory auto-detect: always probes /health on startup, connects WebSocket to live ESP32 data automatically.
- New API endpoints: POST /api/v1/adaptive/train, GET /adaptive/status, POST /adaptive/unload for runtime model management.
- Updated user guide with Observatory, adaptive classifier tutorial, signal smoothing docs, and new troubleshooting entries.
Parent: f771cf8461 · Commit: 5fa61ba7ea
6 changed files with 2435 additions and 49 deletions
docs/adr/ADR-048-adaptive-csi-classifier.md (new file, 140 lines)

@@ -0,0 +1,140 @@
# ADR-048: Adaptive CSI Activity Classifier

| Field | Value |
|-------|-------|
| Status | Accepted |
| Date | 2026-03-05 |
| Deciders | ruv |
| Depends on | ADR-024 (AETHER Embeddings), ADR-039 (Edge Processing), ADR-045 (AMOLED Display) |
## Context

WiFi-based activity classification using ESP32 Channel State Information (CSI) relies on hand-tuned thresholds to distinguish between activity states (absent, present_still, present_moving, active). These static thresholds are brittle — they don't account for:

- **Environment-specific signal patterns**: Room geometry, furniture, wall materials, and ESP32 placement all affect how CSI signals respond to human activity.
- **Temporal noise characteristics**: Real ESP32 CSI data at ~10 FPS has significant frame-to-frame jitter that causes classification to jump between states.
- **Vital signs estimation noise**: Heart rate and breathing rate estimates from Goertzel filter banks produce large swings (50+ BPM frame-to-frame) at low confidence levels.

The existing threshold-based approach produces noisy, unstable classifications that degrade the user experience in the Observatory visualization and the main dashboard.
## Decision

### 1. Three-Stage Signal Smoothing Pipeline

All CSI-derived metrics pass through a three-stage pipeline before reaching the UI:

#### Stage 1: Adaptive Baseline Subtraction

- EMA with α=0.003 (~30s time constant) tracks the "quiet room" noise floor
- Only updates during low-motion periods to avoid inflating baseline during activity
- 50-frame warm-up period for initial baseline learning
- Subtracts 70% of baseline from raw motion score to remove environmental drift
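In recurrence form (a sketch; $m_t$ is the raw motion score, $b_t$ the learned baseline, updated only during low-motion frames):

$$b_t = (1-\alpha)\,b_{t-1} + \alpha\,m_t, \qquad \alpha = 0.003$$

$$\tilde{m}_t = \max(0,\; m_t - 0.7\,b_t)$$

At ~10 FPS the EMA time constant is roughly $1/(\alpha f) = 1/(0.003 \times 10) \approx 33$ s, consistent with the ~30s figure above.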
#### Stage 2: EMA + Median Filtering

- **Motion score**: Blended from 4 signals (temporal diff 40%, variance 20%, motion band power 25%, change points 15%), then EMA-smoothed with α=0.15
- **Vital signs**: 21-frame sliding window → trimmed mean (drop top/bottom 25%) → EMA with α=0.02 (~5s time constant)
- **Dead-band**: HR won't update unless the trimmed mean differs by >2 BPM; BR needs >0.5 BPM
- **Outlier rejection**: HR jumps >8 BPM/frame and BR jumps >2 BPM/frame are discarded
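For reference, the trimmed mean over a window of $n$ sorted values $x_{(1)} \le \dots \le x_{(n)}$ drops the top and bottom quarter and averages the rest:

$$\bar{x}_{\text{trim}} = \frac{1}{n - 2k} \sum_{i=k+1}^{n-k} x_{(i)}, \qquad k = \lfloor n/4 \rfloor$$

With the 21-frame window this averages the middle 11 values (exact counts depend on the rounding the implementation uses).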
#### Stage 3: Hysteresis Debounce

- Activity state transitions require 4 consecutive frames (~0.4s) of agreement before committing
- Prevents rapid flickering between states
- Independent candidate tracking; the counter restarts whenever the candidate state changes
### 2. Adaptive Classifier Module (`adaptive_classifier.rs`)

A Rust-native environment-tuned classifier that learns from labeled JSONL recordings:
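For orientation, one recording frame has roughly the following shape. This is an illustrative sketch inferred from the fields the feature extractor reads (`features.*` and `nodes[0].amplitude`); values are made up:

```rust
// Abridged shape of one JSONL recording frame (values illustrative).
let frame = serde_json::json!({
    "features": {
        "variance": 0.42, "motion_band_power": 3.1, "breathing_band_power": 0.8,
        "spectral_power": 12.5, "dominant_freq_hz": 0.3, "change_points": 2.0,
        "mean_rssi": -48.0
    },
    "nodes": [ { "amplitude": [12.1, 11.8, 12.4 /* ... one entry per subcarrier */] } ]
});
```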
#### Feature Extraction (15 features)

| # | Feature | Source | Discriminative Power |
|---|---------|--------|---------------------|
| 0 | variance | Server | Medium — temporal CSI spread |
| 1 | motion_band_power | Server | Medium — high-frequency subcarrier energy |
| 2 | breathing_band_power | Server | Low — respiratory band energy |
| 3 | spectral_power | Server | Low — mean squared amplitude |
| 4 | dominant_freq_hz | Server | Low — peak subcarrier index |
| 5 | change_points | Server | Medium — threshold crossing count |
| 6 | mean_rssi | Server | Low — received signal strength |
| 7 | amp_mean | Subcarrier | Medium — mean amplitude across 56 subcarriers |
| 8 | amp_std | Subcarrier | **High** — amplitude spread (motion increases spread) |
| 9 | amp_skew | Subcarrier | Medium — asymmetry of amplitude distribution |
| 10 | amp_kurt | Subcarrier | **High** — peakedness (presence creates peaks) |
| 11 | amp_iqr | Subcarrier | Medium — inter-quartile range |
| 12 | amp_entropy | Subcarrier | **High** — spectral entropy (motion increases disorder) |
| 13 | amp_max | Subcarrier | Medium — peak amplitude value |
| 14 | amp_range | Subcarrier | Medium — amplitude dynamic range |
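Of the high-power features, the spectral entropy is worth spelling out. With per-subcarrier amplitudes $a_i$, the power distribution is normalised and its entropy scaled into $[0,1]$ (this matches `subcarrier_stats` in the module below):

$$p_i = \frac{a_i^2}{\sum_j a_j^2}, \qquad H = -\frac{1}{\ln n}\sum_{i=1}^{n} p_i \ln p_i$$

A still room concentrates power in a stable pattern (lower $H$); motion scatters it across subcarriers, pushing $H$ toward 1.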
#### Training Algorithm

- **Multiclass logistic regression** with softmax output
- **Mini-batch SGD** (batch size 32, 200 epochs, linear learning rate decay)
- **Z-score normalisation** using global mean/stddev computed from all training data
- Per-class statistics (mean, stddev) stored for Mahalanobis distance fallback
- Deterministic shuffling (LCG PRNG, seed 42) for reproducible results
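Concretely, with logits $z_c = w_c \cdot x + b_c$ the model minimises softmax cross-entropy, whose per-sample gradient is the probability-minus-one-hot form used in the SGD update below:

$$p_c = \frac{e^{z_c}}{\sum_{k} e^{z_k}}, \qquad \nabla_{w_c} L = (p_c - \mathbf{1}[c = y])\,x, \qquad \nabla_{b_c} L = p_c - \mathbf{1}[c = y]$$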
#### Training Data Pipeline

1. Record labeled CSI sessions via `POST /api/v1/recording/start {"id":"train_<label>"}`
2. Filename-based label assignment: `*empty*`→absent, `*still*`→present_still, `*walking*`→present_moving, `*active*`→active
3. Train via `POST /api/v1/adaptive/train`
4. Model saved to `data/adaptive_model.json`, auto-loaded on server restart
#### Inference Pipeline

1. Extract the 15-feature vector from the current CSI frame
2. Z-score normalise using the stored global mean/stddev
3. Compute softmax probabilities across the 4 classes
4. Blend adaptive model confidence (70%) with smoothed threshold confidence (30%)
5. Override classification only when an adaptive model is loaded
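Step 4's blend is a simple convex combination (matching the override code in `main.rs`):

$$c_{\text{final}} = 0.7\,p_{\text{model}} + 0.3\,c_{\text{threshold}}$$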
### 3. API Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v1/adaptive/train` | Train classifier from `train_*` recordings |
| GET | `/api/v1/adaptive/status` | Check model status, accuracy, class stats |
| POST | `/api/v1/adaptive/unload` | Revert to threshold-based classification |
| POST | `/api/v1/recording/start` | Start recording CSI frames (JSONL) |
| POST | `/api/v1/recording/stop` | Stop recording |
| GET | `/api/v1/recording/list` | List available recordings |
### 4. Vital Signs Smoothing

| Parameter | Value | Rationale |
|-----------|-------|-----------|
| Median window | 21 frames | ~2s of history, robust to transients |
| Aggregation | Trimmed mean (middle 50%) | More stable than pure median, less noisy than raw mean |
| EMA alpha | 0.02 | ~5s time constant — readings change very slowly |
| HR dead-band | ±2 BPM | Prevents display creep from micro-fluctuations |
| BR dead-band | ±0.5 BPM | Same for breathing rate |
| HR max jump | 8 BPM/frame | Outlier rejection threshold |
| BR max jump | 2 BPM/frame | Outlier rejection threshold |
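A condensed sketch of the update these parameters drive, simplified from `smooth_vitals` in `main.rs` (heart-rate path only; confidence smoothing and the breathing-rate path, which uses the ±2 BPM jump and ±0.5 BPM dead-band values, are omitted):

```rust
use std::collections::VecDeque;

const WINDOW: usize = 21;     // sliding window (frames)
const EMA_ALPHA: f64 = 0.02;  // ~5 s time constant at 10 FPS
const MAX_JUMP: f64 = 8.0;    // HR outlier threshold (BPM per frame)
const DEAD_BAND: f64 = 2.0;   // HR dead-band (BPM)

/// One smoothing step for a heart-rate stream; `smoothed` starts at 0.0.
fn smooth_hr(buffer: &mut VecDeque<f64>, smoothed: &mut f64, raw: f64) {
    // 1. Outlier rejection: once locked on, drop frames that jump too far.
    if raw <= 0.0 || (*smoothed >= 1.0 && (raw - *smoothed).abs() >= MAX_JUMP) {
        return;
    }
    // 2. Sliding window of accepted raw values.
    buffer.push_back(raw);
    if buffer.len() > WINDOW {
        buffer.pop_front();
    }
    // 3. Trimmed mean: sort, drop top/bottom 25%, average the middle 50%.
    let mut sorted: Vec<f64> = buffer.iter().copied().collect();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let k = sorted.len() / 4;
    let mid = &sorted[k..sorted.len() - k];
    let trimmed = mid.iter().sum::<f64>() / mid.len() as f64;
    // 4. Dead-band: ignore sub-±2 BPM wiggles once a reading is established.
    if *smoothed >= 1.0 && (trimmed - *smoothed).abs() < DEAD_BAND {
        return;
    }
    // 5. EMA toward the trimmed mean.
    *smoothed = *smoothed * (1.0 - EMA_ALPHA) + trimmed * EMA_ALPHA;
}
```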
## Consequences

### Benefits

- **Stable UI**: Vital signs readings hold steady for 5-10+ seconds instead of jumping every frame
- **Environment adaptation**: Classifier learns the specific room's signal characteristics
- **Graceful fallback**: If no adaptive model is loaded, threshold-based classification with smoothing still works
- **No external dependencies**: Pure Rust implementation, no Python/ML frameworks needed
- **Fast training**: 3,000+ frames train in <1 second on commodity hardware
- **Portable model**: JSON serialisation, loadable on any platform
### Limitations

- **Single-link**: With one ESP32, the feature space is limited. Multi-AP setups (ADR-029) would dramatically improve separability.
- **No temporal features**: Current frame-level classification doesn't use sequence models (LSTM/Transformer). Could be added later.
- **Label quality**: Training accuracy depends heavily on recording quality (distinct activities, actual room vacancy for "empty").
- **Linear classifier**: Logistic regression may underfit non-linear decision boundaries. Could upgrade to a 2-layer MLP if needed.
### Future Work

- **Online learning**: Continuously update model weights from user corrections
- **Sequence models**: Use a sliding window of N frames as input for temporal pattern recognition
- **Contrastive pretraining**: Leverage ADR-024 AETHER embeddings for self-supervised feature learning
- **Multi-AP fusion**: Use ADR-029 multistatic sensing for a richer feature space
- **Edge deployment**: Export learned thresholds to ESP32 firmware (ADR-039 Tier 2) for on-device classification
## Files

| File | Purpose |
|------|---------|
| `crates/wifi-densepose-sensing-server/src/adaptive_classifier.rs` | Adaptive classifier module (feature extraction, training, inference) |
| `crates/wifi-densepose-sensing-server/src/main.rs` | Smoothing pipeline, API endpoints, integration |
| `ui/observatory/js/hud-controller.js` | UI-side lerp smoothing (4% per frame) |
| `data/adaptive_model.json` | Trained model (auto-created by training endpoint) |
| `data/recordings/train_*.jsonl` | Labeled training recordings |
@@ -26,15 +26,20 @@ WiFi DensePose turns commodity WiFi signals into real-time human pose estimation

 7. [Web UI](#web-ui)
 8. [Vital Sign Detection](#vital-sign-detection)
 9. [CLI Reference](#cli-reference)
-10. [Training a Model](#training-a-model)
+10. [Observatory Visualization](#observatory-visualization)
+11. [Adaptive Classifier](#adaptive-classifier)
+    - [Recording Training Data](#recording-training-data)
+    - [Training the Model](#training-the-model)
+    - [Using the Trained Model](#using-the-trained-model)
+12. [Training a Model](#training-a-model)
     - [CRV Signal-Line Protocol](#crv-signal-line-protocol)
-11. [RVF Model Containers](#rvf-model-containers)
-12. [Hardware Setup](#hardware-setup)
+13. [RVF Model Containers](#rvf-model-containers)
+14. [Hardware Setup](#hardware-setup)
     - [ESP32-S3 Mesh](#esp32-s3-mesh)
     - [Intel 5300 / Atheros NIC](#intel-5300--atheros-nic)
-13. [Docker Compose (Multi-Service)](#docker-compose-multi-service)
-14. [Troubleshooting](#troubleshooting)
-15. [FAQ](#faq)
+15. [Docker Compose (Multi-Service)](#docker-compose-multi-service)
+16. [Troubleshooting](#troubleshooting)
+17. [FAQ](#faq)

 ---
@@ -42,12 +47,12 @@ WiFi DensePose turns commodity WiFi signals into real-time human pose estimation

 | Requirement | Minimum | Recommended |
 |-------------|---------|-------------|
-| **OS** | Windows 10, macOS 10.15, Ubuntu 18.04 | Latest stable |
+| **OS** | Windows 10/11, macOS 10.15, Ubuntu 18.04 | Latest stable |
 | **RAM** | 4 GB | 8 GB+ |
 | **Disk** | 2 GB free | 5 GB free |
 | **Docker** (for Docker path) | Docker 20+ | Docker 24+ |
 | **Rust** (for source build) | 1.70+ | 1.85+ |
-| **Python** (for legacy v1) | 3.8+ | 3.11+ |
+| **Python** (for legacy v1) | 3.10+ | 3.13+ |

 **Hardware for live sensing (optional):**
@@ -82,15 +87,15 @@ cd RuView/rust-port/wifi-densepose-rs

 # Build
 cargo build --release

-# Verify (runs 1,100+ tests)
-cargo test --workspace
+# Verify (runs 1,400+ tests)
+cargo test --workspace --no-default-features
 ```

 The compiled binary is at `target/release/sensing-server`.

 ### From crates.io (Individual Crates)

-All 15 crates are published to crates.io at v0.3.0. Add individual crates to your own Rust project:
+All 16 crates are published to crates.io at v0.3.0. Add individual crates to your own Rust project:

 ```bash
 # Core types and traits
@@ -113,6 +118,9 @@ cargo add wifi-densepose-ruvector --features crv

 # WebAssembly bindings
 cargo add wifi-densepose-wasm

+# WASM edge runtime (lightweight, for embedded/IoT)
+cargo add wifi-densepose-wasm-edge
 ```

 See the full crate list and dependency order in [CLAUDE.md](../CLAUDE.md#crate-publishing-order).
@@ -206,25 +214,27 @@ Default in Docker. Generates synthetic CSI data exercising the full pipeline.

 ```bash
 # Docker
 docker run -p 3000:3000 ruvnet/wifi-densepose:latest
-# (--source simulated is the default)
+# (--source auto is the default; falls back to simulate when no hardware detected)

 # From source
-./target/release/sensing-server --source simulated --http-port 3000 --ws-port 3001
+./target/release/sensing-server --source simulate --http-port 3000 --ws-port 3001
 ```

 ### Windows WiFi (RSSI Only)

-Uses `netsh wlan` to capture RSSI from nearby access points. No special hardware needed, but capabilities are limited to coarse presence and motion detection (no pose estimation or vital signs).
+Uses `netsh wlan` to capture RSSI from nearby access points. No special hardware needed. Supports presence detection, motion classification, and coarse breathing rate estimation. No pose estimation (requires CSI).

 ```bash
 # From source (Windows only)
-./target/release/sensing-server --source windows --http-port 3000 --ws-port 3001 --tick-ms 500
+./target/release/sensing-server --source wifi --http-port 3000 --ws-port 3001 --tick-ms 500

 # Docker (requires --network host on Windows)
-docker run --network host ruvnet/wifi-densepose:latest --source windows --tick-ms 500
+docker run --network host ruvnet/wifi-densepose:latest --source wifi --tick-ms 500
 ```

-See [Tutorial #36](https://github.com/ruvnet/RuView/issues/36) for a walkthrough.
+> **Community verified:** Tested on Windows 10 (10.0.26200) with Intel Wi-Fi 6 AX201 160MHz, Python 3.14, StormFiber 5 GHz network. All 7 tutorial steps passed with stable RSSI readings at -48 dBm. See [Tutorial #36](https://github.com/ruvnet/RuView/issues/36) for the full walkthrough and test results.
+
+**Vital signs from RSSI:** The sensing server now supports breathing rate estimation from RSSI variance patterns (requires stationary subject near AP) and motion classification with confidence scoring. RSSI-based vital sign detection has lower fidelity than ESP32 CSI — it is best for presence detection and coarse motion classification.

 ### macOS WiFi (RSSI Only)
@@ -315,6 +325,9 @@ Base URL: `http://localhost:3000` (Docker) or `http://localhost:8080` (binary default)

 | `GET` | `/api/v1/train/status` | Training run status | `{"phase":"idle"}` |
 | `POST` | `/api/v1/train/start` | Start a training run | `{"status":"started"}` |
 | `POST` | `/api/v1/train/stop` | Stop the active training run | `{"status":"stopped"}` |
+| `POST` | `/api/v1/adaptive/train` | Train adaptive classifier from recordings | `{"success":true,"accuracy":0.85}` |
+| `GET` | `/api/v1/adaptive/status` | Adaptive model status and accuracy | `{"loaded":true,"accuracy":0.85}` |
+| `POST` | `/api/v1/adaptive/unload` | Unload adaptive model | `{"success":true}` |

 ### Example: Get Vital Signs
@@ -410,9 +423,16 @@ wscat -c ws://localhost:3001/ws/sensing

 ## Web UI

-The built-in Three.js UI is served at `http://localhost:3000/` (Docker) or the configured HTTP port.
+The built-in Three.js UI is served at `http://localhost:3000/ui/` (Docker) or the configured HTTP port.

-**What you see:**
+**Two visualization modes:**
+
+| Page | URL | Purpose |
+|------|-----|---------|
+| **Dashboard** | `/ui/index.html` | Tabbed monitoring dashboard with body model, signal heatmap, phase plot, vital signs |
+| **Observatory** | `/ui/observatory.html` | Immersive 3D room visualization with cinematic lighting and wireframe figures |
+
+**Dashboard panels:**

 | Panel | Description |
 |-------|-------------|
@@ -423,7 +443,7 @@ The built-in Three.js UI is served at `http://localhost:3000/` (Docker) or the configured HTTP port.

 | Vital Signs | Live breathing rate (BPM) and heart rate (BPM) |
 | Dashboard | System stats, throughput, connected WebSocket clients |

-The UI updates in real-time via the WebSocket connection.
+Both UIs update in real-time via WebSocket and auto-detect the sensing server on the same origin.

 ---
@@ -441,6 +461,8 @@ The system extracts breathing rate and heart rate from CSI signal fluctuations using Goertzel filter banks

 - Subject within ~3-5 meters of an access point (up to ~8 m with multistatic mesh)
 - Relatively stationary subject (large movements mask vital sign oscillations)

+**Signal smoothing:** Vital sign estimates pass through a three-stage smoothing pipeline (ADR-048): outlier rejection (±8 BPM HR, ±2 BPM BR per frame), 21-frame trimmed mean, and EMA with α=0.02. This produces stable readings that hold steady for 5-10+ seconds instead of jumping every frame. See [Adaptive Classifier](#adaptive-classifier) for details.
+
 **Simulated mode** produces synthetic vital sign data for testing.

 ---
@@ -451,7 +473,7 @@ The Rust sensing server binary accepts the following flags:

 | Flag | Default | Description |
 |------|---------|-------------|
-| `--source` | `auto` | Data source: `auto`, `simulated`, `windows`, `esp32` |
+| `--source` | `auto` | Data source: `auto`, `simulate`, `wifi`, `esp32` |
 | `--http-port` | `8080` | HTTP port for REST API and UI |
 | `--ws-port` | `8765` | WebSocket port |
 | `--udp-port` | `5005` | UDP port for ESP32 CSI frames |
@@ -472,13 +494,13 @@ The Rust sensing server binary accepts the following flags:

 ```bash
 # Simulated mode with UI (development)
-./target/release/sensing-server --source simulated --http-port 3000 --ws-port 3001 --ui-path ../../ui
+./target/release/sensing-server --source simulate --http-port 3000 --ws-port 3001 --ui-path ../../ui

 # ESP32 hardware mode
 ./target/release/sensing-server --source esp32 --udp-port 5005

 # Windows WiFi RSSI
-./target/release/sensing-server --source windows --tick-ms 500
+./target/release/sensing-server --source wifi --tick-ms 500

 # Run benchmark
 ./target/release/sensing-server --benchmark
@@ -492,6 +514,149 @@ The Rust sensing server binary accepts the following flags (all lines below are additions):

---

## Observatory Visualization

The Observatory is an immersive Three.js visualization that renders WiFi sensing data as a cinematic 3D experience. It features room-scale props, wireframe human figures, WiFi signal animations, and a live data HUD.

**URL:** `http://localhost:3000/ui/observatory.html`

**Features:**

| Feature | Description |
|---------|-------------|
| Room scene | Furniture, walls, floor with emissive materials and 6-point lighting |
| Wireframe figures | Up to 4 human skeletons with joint pulsation synced to breathing |
| Signal field | Volumetric WiFi wave visualization |
| Live HUD | Heart rate, breathing rate, confidence, RSSI, motion level |
| Auto-detect | Automatically connects to live ESP32 data when sensing server is running |
| Scenario cycling | 6 preset scenarios with smooth transitions (demo mode) |

**Keyboard shortcuts:**

| Key | Action |
|-----|--------|
| `1-6` | Switch scenario |
| `A` | Toggle auto-cycle |
| `P` | Pause/resume |
| `S` | Open settings |
| `R` | Reset camera |

**Live data auto-detect:** When served by the sensing server, the Observatory probes `/health` on the same origin and automatically connects via WebSocket. The HUD badge switches from `DEMO` to `LIVE`. No configuration needed.

---

## Adaptive Classifier

The adaptive classifier (ADR-048) learns your environment's specific WiFi signal patterns from labeled recordings. It replaces static threshold-based classification with a trained logistic regression model that uses 15 features (7 server-computed + 8 subcarrier-derived statistics).

### Signal Smoothing Pipeline

All CSI-derived metrics pass through a three-stage pipeline before reaching the UI:

| Stage | What It Does | Key Parameters |
|-------|-------------|----------------|
| **Adaptive baseline** | Learns quiet-room noise floor, subtracts drift | α=0.003, 50-frame warm-up |
| **EMA + median filter** | Smooths motion score and vital signs | Motion α=0.15; Vitals: 21-frame trimmed mean, α=0.02 |
| **Hysteresis debounce** | Prevents rapid state flickering | 4 frames (~0.4s) required for state transition |

Vital signs use additional stabilization:

| Parameter | Value | Effect |
|-----------|-------|--------|
| HR dead-band | ±2 BPM | Prevents micro-drift |
| BR dead-band | ±0.5 BPM | Prevents micro-drift |
| HR max jump | 8 BPM/frame | Rejects noise spikes |
| BR max jump | 2 BPM/frame | Rejects noise spikes |

### Recording Training Data

Record labeled CSI sessions while performing distinct activities. Each recording captures full sensing frames (features + raw subcarrier amplitudes) at ~10-25 FPS.

```bash
# 1. Record empty room (leave the room for 30 seconds)
curl -X POST http://localhost:3000/api/v1/recording/start \
  -H "Content-Type: application/json" -d '{"id":"train_empty_room"}'
# ... wait 30 seconds ...
curl -X POST http://localhost:3000/api/v1/recording/stop

# 2. Record sitting still (sit near ESP32 for 30 seconds)
curl -X POST http://localhost:3000/api/v1/recording/start \
  -H "Content-Type: application/json" -d '{"id":"train_sitting_still"}'
# ... wait 30 seconds ...
curl -X POST http://localhost:3000/api/v1/recording/stop

# 3. Record walking (walk around the room for 30 seconds)
curl -X POST http://localhost:3000/api/v1/recording/start \
  -H "Content-Type: application/json" -d '{"id":"train_walking"}'
# ... wait 30 seconds ...
curl -X POST http://localhost:3000/api/v1/recording/stop

# 4. Record active movement (jumping jacks, arm waving for 30 seconds)
curl -X POST http://localhost:3000/api/v1/recording/start \
  -H "Content-Type: application/json" -d '{"id":"train_active"}'
# ... wait 30 seconds ...
curl -X POST http://localhost:3000/api/v1/recording/stop
```

Recordings are saved as JSONL files in `data/recordings/`. Filenames must start with `train_` and contain a class keyword:

| Filename pattern | Class |
|-----------------|-------|
| `*empty*` or `*absent*` | absent |
| `*still*` or `*sitting*` | present_still |
| `*walking*` or `*moving*` | present_moving |
| `*active*` or `*exercise*` | active |

### Training the Model

Train the adaptive classifier from your labeled recordings:

```bash
curl -X POST http://localhost:3000/api/v1/adaptive/train
```

The server trains a multiclass logistic regression on 15 features using mini-batch SGD (200 epochs). Training completes in under 1 second for typical recording sets. The trained model is saved to `data/adaptive_model.json` and automatically loaded on server restart.

**Check model status:**

```bash
curl http://localhost:3000/api/v1/adaptive/status
```

**Unload the model (revert to threshold-based classification):**

```bash
curl -X POST http://localhost:3000/api/v1/adaptive/unload
```

### Using the Trained Model

Once trained, the adaptive model runs automatically:

1. Each CSI frame is classified using the learned weights instead of static thresholds
2. Model confidence is blended with smoothed threshold confidence (70/30 split)
3. The model persists across server restarts (loaded from `data/adaptive_model.json`)

**Tips for better accuracy:**

- Record with clearly distinct activities (actually leave the room for "empty")
- Record 30-60 seconds per activity (more data = better model)
- Re-record and retrain if you move the ESP32 or rearrange the room
- The model is environment-specific — retrain when the physical setup changes

### Adaptive Classifier API

| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/api/v1/adaptive/train` | Train from `train_*` recordings |
| `GET` | `/api/v1/adaptive/status` | Model status, accuracy, class stats |
| `POST` | `/api/v1/adaptive/unload` | Unload model, revert to thresholds |
| `POST` | `/api/v1/recording/start` | Start recording CSI frames |
| `POST` | `/api/v1/recording/stop` | Stop recording |
| `GET` | `/api/v1/recording/list` | List recordings |

---

## Training a Model

The training pipeline is implemented in pure Rust (7,832 lines, zero external ML dependencies).
@@ -805,13 +970,28 @@ rustc --version

 ### Windows: RSSI mode shows no data

-Run the terminal as Administrator (required for `netsh wlan` access).
+Run the terminal as Administrator (required for `netsh wlan` access). Verified working on Windows 10 and 11 with Intel AX201 and Intel BE201 adapters.

 ### Vital signs show 0 BPM

 - Vital sign detection requires CSI-capable hardware (ESP32 or research NIC)
 - RSSI-only mode (Windows WiFi) does not have sufficient resolution for vital signs
+- In simulated mode, synthetic vital signs are generated after a few seconds of warm-up
+- With real ESP32 data, vital signs take ~5 seconds to stabilize (smoothing pipeline warm-up)
+
+### Vital signs jumping around
+
+The server applies a 3-stage smoothing pipeline (ADR-048). If readings are still unstable:
+- Ensure the subject is relatively still (large movements mask vital sign oscillations)
+- Train the adaptive classifier for your specific environment: `curl -X POST http://localhost:3000/api/v1/adaptive/train`
+- Check signal quality: `curl http://localhost:3000/api/v1/sensing/latest` — look for `signal_quality > 0.4`
+
+### Observatory shows DEMO instead of LIVE
+
+- Verify the sensing server is running: `curl http://localhost:3000/health`
+- Access Observatory via the server URL: `http://localhost:3000/ui/observatory.html` (not a file:// URL)
+- Hard refresh with Ctrl+Shift+R to clear cached settings
+- The auto-detect probes `/health` on the same origin — cross-origin won't work

 ---
@@ -838,11 +1018,20 @@ The system uses WiFi radio signals, not cameras. No images or video are captured

 **Q: What's the Python vs Rust difference?**
 The Rust implementation (v2) is 810x faster than Python (v1) for the full CSI pipeline. The Docker image is 132 MB vs 569 MB. Rust is the primary and recommended runtime. Python v1 remains available for legacy workflows.

+**Q: Can I use an ESP8266 instead of ESP32-S3?**
+No. The ESP8266 does not expose WiFi Channel State Information (CSI) through its SDK, has insufficient RAM (~80 KB vs 512 KB), and runs a single-core 80 MHz CPU that cannot handle the signal processing pipeline. The ESP32-S3 is the minimum supported CSI capture device. See [Issue #138](https://github.com/ruvnet/RuView/issues/138) for alternatives including using cheap Android TV boxes as aggregation hubs.
+
+**Q: Does the Windows WiFi tutorial work on Windows 10?**
+Yes. Community-tested on Windows 10 (build 26200) with an Intel Wi-Fi 6 AX201 160MHz adapter on a 5 GHz network. All 7 tutorial steps passed with Python 3.14. See [Issue #36](https://github.com/ruvnet/RuView/issues/36) for full test results.
+
+**Q: Can I run the sensing server on an ARM device (Raspberry Pi, TV box)?**
+ARM64 deployment is planned ([ADR-046](adr/ADR-046-android-tv-box-armbian-deployment.md)) but not yet available as a pre-built binary. You can cross-compile from source using `cross build --release --target aarch64-unknown-linux-gnu -p wifi-densepose-sensing-server` if you have the Rust cross-compilation toolchain set up.
+
 ---

 ## Further Reading

-- [Architecture Decision Records](../docs/adr/) - 43 ADRs covering all design decisions
+- [Architecture Decision Records](../docs/adr/) - 48 ADRs covering all design decisions
 - [WiFi-Mat Disaster Response Guide](wifi-mat-user-guide.md) - Search & rescue module
 - [Build Guide](build-guide.md) - Detailed build instructions
 - [RuVector](https://github.com/ruvnet/ruvector) - Signal intelligence crate ecosystem
crates/wifi-densepose-sensing-server/src/adaptive_classifier.rs (new file, 461 lines)

@@ -0,0 +1,461 @@
//! Adaptive CSI Activity Classifier
//!
//! Learns environment-specific classification thresholds from labeled JSONL
//! recordings. Uses a lightweight approach:
//!
//! 1. **Feature statistics**: per-class mean/stddev for each of the 15 features
//!    (7 server-computed + 8 subcarrier-derived)
//! 2. **Mahalanobis-like distance**: weighted distance to each class centroid
//! 3. **Logistic regression weights**: learned via gradient descent on the
//!    labeled data for fine-grained boundary tuning
//!
//! The trained model is serialised as JSON and hot-loaded at runtime so that
//! the classification thresholds adapt to the specific room and ESP32 placement.

use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::{Path, PathBuf};

// ── Feature vector ───────────────────────────────────────────────────────────

/// Extended feature vector: 7 server features + 8 subcarrier-derived features = 15.
const N_FEATURES: usize = 15;

/// Activity classes we recognise.
pub const CLASSES: &[&str] = &["absent", "present_still", "present_moving", "active"];
const N_CLASSES: usize = 4;
/// Extract extended feature vector from a JSONL frame (features + raw amplitudes).
pub fn features_from_frame(frame: &serde_json::Value) -> [f64; N_FEATURES] {
    let feat = frame.get("features").cloned().unwrap_or(serde_json::Value::Null);
    let nodes = frame.get("nodes").and_then(|n| n.as_array());
    let amps: Vec<f64> = nodes
        .and_then(|ns| ns.first())
        .and_then(|n| n.get("amplitude"))
        .and_then(|a| a.as_array())
        .map(|arr| arr.iter().filter_map(|v| v.as_f64()).collect())
        .unwrap_or_default();

    // Server-computed features (0-6).
    let variance = feat.get("variance").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let mbp = feat.get("motion_band_power").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let bbp = feat.get("breathing_band_power").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let sp = feat.get("spectral_power").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let df = feat.get("dominant_freq_hz").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let cp = feat.get("change_points").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let rssi = feat.get("mean_rssi").and_then(|v| v.as_f64()).unwrap_or(0.0);

    // Subcarrier-derived features (7-14).
    let (amp_mean, amp_std, amp_skew, amp_kurt, amp_iqr, amp_entropy, amp_max, amp_range) =
        subcarrier_stats(&amps);

    [
        variance, mbp, bbp, sp, df, cp, rssi,
        amp_mean, amp_std, amp_skew, amp_kurt, amp_iqr, amp_entropy, amp_max, amp_range,
    ]
}
/// Also keep a simpler version for runtime (no JSONL, just FeatureInfo + amps).
pub fn features_from_runtime(feat: &serde_json::Value, amps: &[f64]) -> [f64; N_FEATURES] {
    let variance = feat.get("variance").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let mbp = feat.get("motion_band_power").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let bbp = feat.get("breathing_band_power").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let sp = feat.get("spectral_power").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let df = feat.get("dominant_freq_hz").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let cp = feat.get("change_points").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let rssi = feat.get("mean_rssi").and_then(|v| v.as_f64()).unwrap_or(0.0);
    let (amp_mean, amp_std, amp_skew, amp_kurt, amp_iqr, amp_entropy, amp_max, amp_range) =
        subcarrier_stats(amps);
    [
        variance, mbp, bbp, sp, df, cp, rssi,
        amp_mean, amp_std, amp_skew, amp_kurt, amp_iqr, amp_entropy, amp_max, amp_range,
    ]
}
/// Compute statistical features from raw subcarrier amplitudes.
fn subcarrier_stats(amps: &[f64]) -> (f64, f64, f64, f64, f64, f64, f64, f64) {
    if amps.is_empty() {
        return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0);
    }
    let n = amps.len() as f64;
    let mean = amps.iter().sum::<f64>() / n;
    let var = amps.iter().map(|a| (a - mean).powi(2)).sum::<f64>() / n;
    let std = var.sqrt().max(1e-9);

    // Skewness (asymmetry).
    let skew = amps.iter().map(|a| ((a - mean) / std).powi(3)).sum::<f64>() / n;
    // Kurtosis (peakedness).
    let kurt = amps.iter().map(|a| ((a - mean) / std).powi(4)).sum::<f64>() / n - 3.0;

    // IQR (inter-quartile range).
    let mut sorted = amps.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let q1 = sorted[sorted.len() / 4];
    let q3 = sorted[3 * sorted.len() / 4];
    let iqr = q3 - q1;

    // Spectral entropy (normalised).
    let total_power: f64 = amps.iter().map(|a| a * a).sum::<f64>().max(1e-9);
    let entropy: f64 = amps.iter()
        .map(|a| {
            let p = (a * a) / total_power;
            if p > 1e-12 { -p * p.ln() } else { 0.0 }
        })
        .sum::<f64>() / n.ln().max(1e-9); // normalise to [0,1]

    let max_val = sorted.last().copied().unwrap_or(0.0);
    let range = max_val - sorted.first().copied().unwrap_or(0.0);

    (mean, std, skew, kurt, iqr, entropy, max_val, range)
}
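// Editorial aside (illustrative test, not part of this commit): a sanity check
// pinning down the helper's behaviour. With all-equal amplitudes the spread
// statistics collapse to zero while the normalised spectral entropy reaches
// its maximum of ~1.0.
#[cfg(test)]
mod subcarrier_stats_sanity {
    use super::*;

    #[test]
    fn uniform_amplitudes() {
        let amps = vec![1.0f64; 56];
        let (mean, std, skew, _kurt, iqr, entropy, max_val, range) = subcarrier_stats(&amps);
        assert!((mean - 1.0).abs() < 1e-12);
        assert!(std <= 1e-9); // clamped floor, not true zero
        assert_eq!(skew, 0.0);
        assert_eq!(iqr, 0.0);
        assert!((entropy - 1.0).abs() < 1e-9);
        assert!((max_val - 1.0).abs() < 1e-12);
        assert_eq!(range, 0.0);
    }
}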
// ── Per-class statistics ─────────────────────────────────────────────────────

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ClassStats {
    pub label: String,
    pub count: usize,
    pub mean: [f64; N_FEATURES],
    pub stddev: [f64; N_FEATURES],
}

// ── Trained model ────────────────────────────────────────────────────────────

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AdaptiveModel {
    /// Per-class feature statistics (centroid + spread).
    pub class_stats: Vec<ClassStats>,
    /// Logistic regression weights: [N_CLASSES x (N_FEATURES + 1)] (last = bias).
    pub weights: Vec<[f64; N_FEATURES + 1]>,
    /// Global feature normalisation: mean and stddev across all training data.
    pub global_mean: [f64; N_FEATURES],
    pub global_std: [f64; N_FEATURES],
    /// Training metadata.
    pub trained_frames: usize,
    pub training_accuracy: f64,
    pub version: u32,
}

impl Default for AdaptiveModel {
    fn default() -> Self {
        Self {
            class_stats: Vec::new(),
            weights: vec![[0.0; N_FEATURES + 1]; N_CLASSES],
            global_mean: [0.0; N_FEATURES],
            global_std: [1.0; N_FEATURES],
            trained_frames: 0,
            training_accuracy: 0.0,
            version: 1,
        }
    }
}

impl AdaptiveModel {
    /// Classify a raw feature vector. Returns (class_label, confidence).
    pub fn classify(&self, raw_features: &[f64; N_FEATURES]) -> (&'static str, f64) {
        if self.weights.is_empty() || self.class_stats.is_empty() {
            return ("present_still", 0.5);
        }

        // Normalise features.
        let mut x = [0.0f64; N_FEATURES];
        for i in 0..N_FEATURES {
            x[i] = (raw_features[i] - self.global_mean[i]) / (self.global_std[i] + 1e-9);
        }

        // Compute logits: w·x + b for each class.
        let mut logits = [0.0f64; N_CLASSES];
        for c in 0..N_CLASSES.min(self.weights.len()) {
            let w = &self.weights[c];
            let mut z = w[N_FEATURES]; // bias
            for i in 0..N_FEATURES {
                z += w[i] * x[i];
            }
            logits[c] = z;
        }

        // Softmax.
        let max_logit = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
        let exp_sum: f64 = logits.iter().map(|z| (z - max_logit).exp()).sum();
        let mut probs = [0.0f64; N_CLASSES];
        for c in 0..N_CLASSES {
            probs[c] = ((logits[c] - max_logit).exp()) / exp_sum;
        }

        // Pick argmax.
        let (best_c, best_p) = probs.iter().enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .unwrap();
        let label = if best_c < CLASSES.len() { CLASSES[best_c] } else { "present_still" };
        (label, *best_p)
    }

    /// Save model to a JSON file.
    pub fn save(&self, path: &Path) -> std::io::Result<()> {
        let json = serde_json::to_string_pretty(self)
            .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
        std::fs::write(path, json)
    }

    /// Load model from a JSON file.
    pub fn load(path: &Path) -> std::io::Result<Self> {
        let json = std::fs::read_to_string(path)?;
        serde_json::from_str(&json)
            .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))
    }
}
// ── Training ─────────────────────────────────────────────────────────────────

/// A labeled training sample.
struct Sample {
    features: [f64; N_FEATURES],
    class_idx: usize,
}

/// Load JSONL recording frames and assign a class label based on filename.
fn load_recording(path: &Path, class_idx: usize) -> Vec<Sample> {
    let content = match std::fs::read_to_string(path) {
        Ok(c) => c,
        Err(_) => return Vec::new(),
    };
    content.lines().filter_map(|line| {
        let v: serde_json::Value = serde_json::from_str(line).ok()?;
        // Use extended features (server features + subcarrier stats).
        Some(Sample {
            features: features_from_frame(&v),
            class_idx,
        })
    }).collect()
}

/// Map a recording filename to a class index.
fn classify_recording_name(name: &str) -> Option<usize> {
    let lower = name.to_lowercase();
    if lower.contains("empty") || lower.contains("absent") { Some(0) }
    else if lower.contains("still") || lower.contains("sitting") || lower.contains("standing") { Some(1) }
    else if lower.contains("walking") || lower.contains("moving") { Some(2) }
    else if lower.contains("active") || lower.contains("exercise") || lower.contains("running") { Some(3) }
    else { None }
}
/// Train a model from labeled JSONL recordings in a directory.
///
/// Recordings are matched to classes by filename pattern:
/// - `*empty*` / `*absent*`    → absent (0)
/// - `*still*` / `*sitting*`   → present_still (1)
/// - `*walking*` / `*moving*`  → present_moving (2)
/// - `*active*` / `*exercise*` → active (3)
pub fn train_from_recordings(recordings_dir: &Path) -> Result<AdaptiveModel, String> {
    // Scan for train_* files.
    let mut samples: Vec<Sample> = Vec::new();
    let entries = std::fs::read_dir(recordings_dir)
        .map_err(|e| format!("Cannot read {}: {}", recordings_dir.display(), e))?;

    for entry in entries.flatten() {
        let fname = entry.file_name().to_string_lossy().to_string();
        if !fname.starts_with("train_") || !fname.ends_with(".jsonl") {
            continue;
        }
        if let Some(class_idx) = classify_recording_name(&fname) {
            let loaded = load_recording(&entry.path(), class_idx);
            eprintln!("  Loaded {}: {} frames → class '{}'",
                fname, loaded.len(), CLASSES[class_idx]);
            samples.extend(loaded);
        }
    }

    if samples.is_empty() {
        return Err("No training samples found. Record data with train_* prefix.".into());
    }

    let n = samples.len();
    eprintln!("Total training samples: {n}");

    // ── Compute global normalisation stats ──
    let mut global_mean = [0.0f64; N_FEATURES];
    let mut global_var = [0.0f64; N_FEATURES];
    for s in &samples {
        for i in 0..N_FEATURES { global_mean[i] += s.features[i]; }
    }
    for i in 0..N_FEATURES { global_mean[i] /= n as f64; }
    for s in &samples {
        for i in 0..N_FEATURES {
            global_var[i] += (s.features[i] - global_mean[i]).powi(2);
        }
    }
    let mut global_std = [0.0f64; N_FEATURES];
    for i in 0..N_FEATURES {
        global_std[i] = (global_var[i] / n as f64).sqrt().max(1e-9);
    }

    // ── Compute per-class statistics ──
    let mut class_sums = vec![[0.0f64; N_FEATURES]; N_CLASSES];
    let mut class_sq = vec![[0.0f64; N_FEATURES]; N_CLASSES];
    let mut class_counts = vec![0usize; N_CLASSES];
    for s in &samples {
        let c = s.class_idx;
        class_counts[c] += 1;
        for i in 0..N_FEATURES {
            class_sums[c][i] += s.features[i];
            class_sq[c][i] += s.features[i] * s.features[i];
        }
    }

    let mut class_stats = Vec::new();
    for c in 0..N_CLASSES {
        let cnt = class_counts[c].max(1) as f64;
        let mut mean = [0.0; N_FEATURES];
        let mut stddev = [0.0; N_FEATURES];
        for i in 0..N_FEATURES {
            mean[i] = class_sums[c][i] / cnt;
            stddev[i] = ((class_sq[c][i] / cnt) - mean[i] * mean[i]).max(0.0).sqrt();
        }
        class_stats.push(ClassStats {
            label: CLASSES[c].to_string(),
            count: class_counts[c],
            mean,
            stddev,
        });
    }

    // ── Normalise all samples ──
    let mut norm_samples: Vec<([f64; N_FEATURES], usize)> = samples.iter().map(|s| {
        let mut x = [0.0; N_FEATURES];
        for i in 0..N_FEATURES {
            x[i] = (s.features[i] - global_mean[i]) / (global_std[i] + 1e-9);
        }
        (x, s.class_idx)
    }).collect();

    // ── Train logistic regression via mini-batch SGD ──
    let mut weights = vec![[0.0f64; N_FEATURES + 1]; N_CLASSES];
    let lr = 0.1;
    let epochs = 200;
    let batch_size = 32;

    // Shuffle helper (simple LCG for determinism).
    let mut rng_state: u64 = 42;
    let mut rng_next = move || -> u64 {
        rng_state = rng_state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        rng_state >> 33
    };

    for epoch in 0..epochs {
        // Shuffle samples.
        for i in (1..norm_samples.len()).rev() {
            let j = (rng_next() as usize) % (i + 1);
            norm_samples.swap(i, j);
        }

        let mut epoch_loss = 0.0f64;

        for batch_start in (0..norm_samples.len()).step_by(batch_size) {
            let batch_end = (batch_start + batch_size).min(norm_samples.len());
            let batch = &norm_samples[batch_start..batch_end];

            // Accumulate gradients.
            let mut grad = vec![[0.0f64; N_FEATURES + 1]; N_CLASSES];

            for (x, target) in batch {
                // Forward: softmax.
                let mut logits = [0.0f64; N_CLASSES];
                for c in 0..N_CLASSES {
                    logits[c] = weights[c][N_FEATURES]; // bias
                    for i in 0..N_FEATURES {
                        logits[c] += weights[c][i] * x[i];
                    }
                }
                let max_l = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
                let exp_sum: f64 = logits.iter().map(|z| (z - max_l).exp()).sum();
                let mut probs = [0.0f64; N_CLASSES];
                for c in 0..N_CLASSES {
                    probs[c] = ((logits[c] - max_l).exp()) / exp_sum;
                }

                // Cross-entropy loss.
                epoch_loss += -(probs[*target].max(1e-15)).ln();

                // Gradient: prob - one_hot(target).
                for c in 0..N_CLASSES {
                    let delta = probs[c] - if c == *target { 1.0 } else { 0.0 };
                    for i in 0..N_FEATURES {
                        grad[c][i] += delta * x[i];
                    }
                    grad[c][N_FEATURES] += delta; // bias grad
                }
            }

            // Update weights.
            let bs = batch.len() as f64;
            let current_lr = lr * (1.0 - epoch as f64 / epochs as f64); // linear decay
            for c in 0..N_CLASSES {
                for i in 0..=N_FEATURES {
                    weights[c][i] -= current_lr * grad[c][i] / bs;
                }
            }
        }

        if epoch % 50 == 0 || epoch == epochs - 1 {
            let avg_loss = epoch_loss / n as f64;
            eprintln!("  Epoch {epoch:3}: loss = {avg_loss:.4}");
        }
    }

    // ── Evaluate accuracy ──
    let mut correct = 0;
    for (x, target) in &norm_samples {
        let mut logits = [0.0f64; N_CLASSES];
        for c in 0..N_CLASSES {
            logits[c] = weights[c][N_FEATURES];
            for i in 0..N_FEATURES {
                logits[c] += weights[c][i] * x[i];
            }
        }
        let pred = logits.iter().enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .unwrap().0;
        if pred == *target { correct += 1; }
    }
    let accuracy = correct as f64 / n as f64;
    eprintln!("Training accuracy: {correct}/{n} = {:.1}%", accuracy * 100.0);
    // ── Per-class accuracy ──
    let mut class_correct = vec![0usize; N_CLASSES];
    let mut class_total = vec![0usize; N_CLASSES];
    for (x, target) in &norm_samples {
        class_total[*target] += 1;
        let mut logits = [0.0f64; N_CLASSES];
        for c in 0..N_CLASSES {
            logits[c] = weights[c][N_FEATURES];
            for i in 0..N_FEATURES {
                logits[c] += weights[c][i] * x[i];
            }
        }
        let pred = logits.iter().enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .unwrap().0;
        if pred == *target { class_correct[*target] += 1; }
    }
    for c in 0..N_CLASSES {
        let tot = class_total[c].max(1);
        eprintln!("  {}: {}/{} ({:.0}%)", CLASSES[c], class_correct[c], tot,
            class_correct[c] as f64 / tot as f64 * 100.0);
    }

    Ok(AdaptiveModel {
        class_stats,
        weights,
        global_mean,
        global_std,
        trained_frames: n,
        training_accuracy: accuracy,
        version: 1,
    })
}

/// Default path for the saved adaptive model.
pub fn model_path() -> PathBuf {
    PathBuf::from("data/adaptive_model.json")
}
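Taken together, the module's entry points compose like this (illustrative sketch of the flow the `/api/v1/adaptive/*` handlers drive; `retrain_and_reload` is a hypothetical helper, not a function in the codebase, and error handling is abbreviated):

```rust
use std::path::Path;

// Hypothetical helper showing the train → save → load → classify round trip.
fn retrain_and_reload() -> Result<(), String> {
    // Train from labeled train_*.jsonl recordings on disk.
    let model = train_from_recordings(Path::new("data/recordings"))?;
    eprintln!("trained on {} frames, accuracy {:.1}%",
        model.trained_frames, model.training_accuracy * 100.0);

    // Persist, then hot-reload (as the server does on restart).
    model.save(&model_path()).map_err(|e| e.to_string())?;
    let restored = AdaptiveModel::load(&model_path()).map_err(|e| e.to_string())?;

    // Classify one frame's 15-feature vector (zeros here, just for shape).
    let (label, confidence) = restored.classify(&[0.0; 15]);
    eprintln!("predicted {label} (confidence {confidence:.2})");
    Ok(())
}
```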
crates/wifi-densepose-sensing-server/src/main.rs

@@ -8,6 +8,7 @@
 //!
 //! Replaces both ws_server.py and the Python HTTP server.

+mod adaptive_classifier;
 mod rvf_container;
 mod rvf_pipeline;
 mod vital_signs;
@@ -299,6 +300,34 @@ struct AppStateInner {
     model_loaded: bool,
     /// Smoothed person count (EMA) for hysteresis — prevents frame-to-frame jumping.
     smoothed_person_score: f64,
+    // ── Motion smoothing & adaptive baseline (ADR-047 tuning) ────────────
+    /// EMA-smoothed motion score (alpha ~0.15 for ~10 FPS → ~1s time constant).
+    smoothed_motion: f64,
+    /// Current classification state for hysteresis debounce.
+    current_motion_level: String,
+    /// How many consecutive frames the *raw* classification has agreed with a
+    /// *candidate* new level. State only changes after DEBOUNCE_FRAMES.
+    debounce_counter: u32,
+    /// The candidate motion level that the debounce counter is tracking.
+    debounce_candidate: String,
+    /// Adaptive baseline: EMA of motion score when room is "quiet" (low motion).
+    /// Subtracted from raw score so slow environmental drift doesn't inflate readings.
+    baseline_motion: f64,
+    /// Number of frames processed so far (for baseline warm-up).
+    baseline_frames: u64,
+    // ── Vital signs smoothing ────────────────────────────────────────────
+    /// EMA-smoothed heart rate (BPM).
+    smoothed_hr: f64,
+    /// EMA-smoothed breathing rate (BPM).
+    smoothed_br: f64,
+    /// EMA-smoothed HR confidence.
+    smoothed_hr_conf: f64,
+    /// EMA-smoothed BR confidence.
+    smoothed_br_conf: f64,
+    /// Median filter buffer for HR (last N raw values for outlier rejection).
+    hr_buffer: VecDeque<f64>,
+    /// Median filter buffer for BR.
+    br_buffer: VecDeque<f64>,
     /// ADR-039: Latest edge vitals packet from ESP32.
     edge_vitals: Option<Esp32VitalsPacket>,
     /// ADR-040: Latest WASM output packet from ESP32.
@@ -324,6 +353,9 @@ struct AppStateInner {
     training_status: String,
     /// Training configuration, if any.
     training_config: Option<serde_json::Value>,
+    // ── Adaptive classifier (environment-tuned) ──────────────────────────
+    /// Trained adaptive model (loaded from data/adaptive_model.json or trained at runtime).
+    adaptive_model: Option<adaptive_classifier::AdaptiveModel>,
 }

 /// Number of frames retained in `frame_history` for temporal analysis.
@@ -716,11 +748,12 @@ fn compute_subcarrier_variances(frame_history: &VecDeque<Vec<f64>>, n_sub: usize)
 /// the amplitude time series.
 /// - **Signal quality**: based on SNR estimate (RSSI – noise floor) and subcarrier
 ///   variance stability.
+/// Returns (features, raw_classification, breathing_rate_hz, sub_variances, raw_motion_score).
 fn extract_features_from_frame(
     frame: &Esp32Frame,
     frame_history: &VecDeque<Vec<f64>>,
     sample_rate_hz: f64,
-) -> (FeatureInfo, ClassificationInfo, f64, Vec<f64>) {
+) -> (FeatureInfo, ClassificationInfo, f64, Vec<f64>, f64) {
     let n_sub = frame.amplitudes.len().max(1);
     let n = n_sub as f64;
     let mean_amp: f64 = frame.amplitudes.iter().sum::<f64>() / n;
@@ -799,8 +832,11 @@ fn extract_features_from_frame(
     };

     // Blend temporal motion with variance-based motion for robustness.
+    // Also factor in motion_band_power and change_points for ESP32 real-world sensitivity.
     let variance_motion = (temporal_variance / 10.0).clamp(0.0, 1.0);
-    let motion_score = (temporal_motion_score * 0.7 + variance_motion * 0.3).clamp(0.0, 1.0);
+    let mbp_motion = (motion_band_power / 25.0).clamp(0.0, 1.0);
+    let cp_motion = (change_points as f64 / 15.0).clamp(0.0, 1.0);
+    let motion_score = (temporal_motion_score * 0.4 + variance_motion * 0.2 + mbp_motion * 0.25 + cp_motion * 0.15).clamp(0.0, 1.0);

     // ── Signal quality metric ──
     // Based on estimated SNR (RSSI relative to noise floor) and subcarrier consistency.
@ -823,24 +859,198 @@ fn extract_features_from_frame(
|
|||
spectral_power,
|
||||
};
|
||||
|
||||
// ── Classification ──
|
||||
let (motion_level, presence) = if motion_score > 0.4 {
|
||||
("active".to_string(), true)
|
||||
} else if motion_score > 0.08 {
|
||||
("present_still".to_string(), true)
|
||||
// Return raw motion_score and signal_quality — classification is done by
|
||||
// `smooth_and_classify()` which has access to EMA state and hysteresis.
|
||||
let raw_classification = ClassificationInfo {
|
||||
motion_level: raw_classify(motion_score),
|
||||
presence: motion_score > 0.04,
|
||||
confidence: (0.4 + signal_quality * 0.3 + motion_score * 0.3).clamp(0.0, 1.0),
|
||||
};
|
||||
|
||||
(features, raw_classification, breathing_rate_hz, sub_variances, motion_score)
|
||||
}
|
||||
|
||||
/// Simple threshold classification (no smoothing) — used as the "raw" input.
|
||||
fn raw_classify(score: f64) -> String {
|
||||
if score > 0.25 { "active".into() }
|
||||
else if score > 0.12 { "present_moving".into() }
|
||||
else if score > 0.04 { "present_still".into() }
|
||||
else { "absent".into() }
|
||||
}
|
||||
|
||||
/// Debounce frames required before state transition (at ~10 FPS = ~0.4s).
|
||||
const DEBOUNCE_FRAMES: u32 = 4;
|
||||
/// EMA alpha for motion smoothing (~1s time constant at 10 FPS).
|
||||
const MOTION_EMA_ALPHA: f64 = 0.15;
|
||||
/// EMA alpha for slow-adapting baseline (~30s time constant at 10 FPS).
|
||||
const BASELINE_EMA_ALPHA: f64 = 0.003;
|
||||
/// Number of warm-up frames before baseline subtraction kicks in.
|
||||
const BASELINE_WARMUP: u64 = 50;
|
||||
|
||||
/// Apply EMA smoothing, adaptive baseline subtraction, and hysteresis debounce
/// to the raw classification. Mutates the smoothing state in `AppStateInner`.
fn smooth_and_classify(state: &mut AppStateInner, raw: &mut ClassificationInfo, raw_motion: f64) {
    // 1. Adaptive baseline: slowly track the "quiet room" floor.
    //    Only update the baseline when the raw score is below the current smoothed
    //    level (i.e. during calm periods) so walking doesn't inflate the baseline.
    state.baseline_frames += 1;
    if state.baseline_frames < BASELINE_WARMUP {
        // During warm-up, aggressively learn the baseline.
        state.baseline_motion = state.baseline_motion * 0.9 + raw_motion * 0.1;
    } else if raw_motion < state.smoothed_motion + 0.05 {
        state.baseline_motion = state.baseline_motion * (1.0 - BASELINE_EMA_ALPHA)
            + raw_motion * BASELINE_EMA_ALPHA;
    }

    // 2. Subtract the baseline and clamp.
    let adjusted = (raw_motion - state.baseline_motion * 0.7).max(0.0);

    // 3. EMA-smooth the adjusted score.
    state.smoothed_motion = state.smoothed_motion * (1.0 - MOTION_EMA_ALPHA)
        + adjusted * MOTION_EMA_ALPHA;
    let sm = state.smoothed_motion;

    // 4. Classify from the smoothed score.
    let candidate = raw_classify(sm);

    // 5. Hysteresis debounce: require N consecutive frames agreeing on a new state.
    if candidate == state.current_motion_level {
        // Already in this state — reset debounce.
        state.debounce_counter = 0;
        state.debounce_candidate = candidate;
    } else if candidate == state.debounce_candidate {
        state.debounce_counter += 1;
        if state.debounce_counter >= DEBOUNCE_FRAMES {
            // Transition accepted.
            state.current_motion_level = candidate;
            state.debounce_counter = 0;
        }
    } else {
        // New candidate — restart the counter.
        state.debounce_candidate = candidate;
        state.debounce_counter = 1;
    }

    // 6. Write the smoothed result back into the classification.
    raw.motion_level = state.current_motion_level.clone();
    raw.presence = sm > 0.03;
    raw.confidence = (0.4 + sm * 0.6).clamp(0.0, 1.0);
}
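The hysteresis step is the part that keeps the label from flickering. A self-contained model of just that stage (the struct and field names here are illustrative, not the real `AppStateInner`):

```rust
/// Illustrative, self-contained model of the hysteresis debounce above.
struct Debounce {
    current: String,
    candidate: String,
    counter: u32,
}

impl Debounce {
    /// Accept a new label only after 4 consecutive frames agree on it.
    fn push(&mut self, label: &str) -> &str {
        if label == self.current {
            self.counter = 0;
            self.candidate = label.to_string();
        } else if label == self.candidate {
            self.counter += 1;
            if self.counter >= 4 {
                self.current = label.to_string();
                self.counter = 0;
            }
        } else {
            // A different new candidate restarts the count.
            self.candidate = label.to_string();
            self.counter = 1;
        }
        &self.current
    }
}

fn main() {
    let mut d = Debounce { current: "absent".into(), candidate: "absent".into(), counter: 0 };
    // Three frames of "active" are not enough; the fourth flips the state.
    for _ in 0..3 { assert_eq!(d.push("active"), "absent"); }
    assert_eq!(d.push("active"), "active");
}
```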
/// If an adaptive model is loaded, override the classification with the
/// model's prediction. Uses the full 15-feature vector for higher accuracy.
fn adaptive_override(state: &AppStateInner, features: &FeatureInfo, classification: &mut ClassificationInfo) {
    if let Some(ref model) = state.adaptive_model {
        // Get the current frame amplitudes from the latest history entry.
        let amps = state.frame_history.back()
            .map(|v| v.as_slice())
            .unwrap_or(&[]);
        let feat_arr = adaptive_classifier::features_from_runtime(
            &serde_json::json!({
                "variance": features.variance,
                "motion_band_power": features.motion_band_power,
                "breathing_band_power": features.breathing_band_power,
                "spectral_power": features.spectral_power,
                "dominant_freq_hz": features.dominant_freq_hz,
                "change_points": features.change_points,
                "mean_rssi": features.mean_rssi,
            }),
            amps,
        );
        let (label, conf) = model.classify(&feat_arr);
        classification.motion_level = label.to_string();
        classification.presence = label != "absent";
        // Blend the model confidence with the existing smoothed confidence.
        classification.confidence = (conf * 0.7 + classification.confidence * 0.3).clamp(0.0, 1.0);
    }
}
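The confidence blend in the override is a fixed-weight mix:

```latex
\mathrm{conf}_{\text{final}} = \operatorname{clamp}\bigl(0.7\,\mathrm{conf}_{\text{model}} + 0.3\,\mathrm{conf}_{\text{smoothed}},\ 0,\ 1\bigr)
```

so a confident model (say 0.9) over a mediocre smoothed estimate (0.5) reports 0.78, keeping some memory of the signal-quality term rather than trusting the model alone.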
/// Size of the median-filter window for vital-signs outlier rejection.
const VITAL_MEDIAN_WINDOW: usize = 21;
/// EMA alpha for vital signs (~5s time constant at 10 FPS).
const VITAL_EMA_ALPHA: f64 = 0.02;
/// Maximum BPM jump per frame before a value is rejected as an outlier.
const HR_MAX_JUMP: f64 = 8.0;
const BR_MAX_JUMP: f64 = 2.0;
/// Minimum change from the current smoothed value before the EMA updates (dead-band).
/// Prevents micro-drift from creeping in.
const HR_DEAD_BAND: f64 = 2.0;
const BR_DEAD_BAND: f64 = 0.5;
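As a worked example of how slowly the display moves under these constants (a sketch of one dead-banded EMA step, not server code):

```rust
/// Sketch: one dead-banded EMA update for heart rate under the constants above.
fn update_hr(smoothed: f64, trimmed: f64) -> f64 {
    const ALPHA: f64 = 0.02;     // VITAL_EMA_ALPHA
    const DEAD_BAND: f64 = 2.0;  // HR_DEAD_BAND, in BPM
    if (trimmed - smoothed).abs() > DEAD_BAND {
        smoothed * (1.0 - ALPHA) + trimmed * ALPHA
    } else {
        smoothed // within the dead-band: hold the displayed value
    }
}

fn main() {
    // A 10 BPM disagreement moves the display by only 0.2 BPM per frame…
    assert!((update_hr(70.0, 80.0) - 70.2).abs() < 1e-9);
    // …and a 1 BPM disagreement does not move it at all.
    assert_eq!(update_hr(70.0, 71.0), 70.0);
}
```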
/// Smooth vital signs using median-filter outlier rejection + EMA.
/// Mutates `state.smoothed_hr`, `state.smoothed_br`, etc.
/// Returns the smoothed VitalSigns to broadcast.
fn smooth_vitals(state: &mut AppStateInner, raw: &VitalSigns) -> VitalSigns {
    let raw_hr = raw.heart_rate_bpm.unwrap_or(0.0);
    let raw_br = raw.breathing_rate_bpm.unwrap_or(0.0);

    // -- Outlier rejection: skip values that jump too far from the current EMA --
    let hr_ok = state.smoothed_hr < 1.0 || (raw_hr - state.smoothed_hr).abs() < HR_MAX_JUMP;
    let br_ok = state.smoothed_br < 1.0 || (raw_br - state.smoothed_br).abs() < BR_MAX_JUMP;

    // Push into the buffer (non-outlier values only).
    if hr_ok && raw_hr > 0.0 {
        state.hr_buffer.push_back(raw_hr);
        if state.hr_buffer.len() > VITAL_MEDIAN_WINDOW { state.hr_buffer.pop_front(); }
    }
    if br_ok && raw_br > 0.0 {
        state.br_buffer.push_back(raw_br);
        if state.br_buffer.len() > VITAL_MEDIAN_WINDOW { state.br_buffer.pop_front(); }
    }

    // Compute the trimmed mean: drop the top/bottom 25%, then average the middle 50%.
    // This is more stable than a pure median and less noisy than a raw mean.
    let trimmed_hr = trimmed_mean(&state.hr_buffer);
    let trimmed_br = trimmed_mean(&state.br_buffer);

    // EMA-smooth with a dead-band: only update if the trimmed mean differs
    // from the current smoothed value by more than the dead-band.
    // This prevents the display from constantly creeping by tiny amounts.
    if trimmed_hr > 0.0 {
        if state.smoothed_hr < 1.0 {
            state.smoothed_hr = trimmed_hr;
        } else if (trimmed_hr - state.smoothed_hr).abs() > HR_DEAD_BAND {
            state.smoothed_hr = state.smoothed_hr * (1.0 - VITAL_EMA_ALPHA)
                + trimmed_hr * VITAL_EMA_ALPHA;
        }
        // else: within the dead-band — hold the current value.
    }
    if trimmed_br > 0.0 {
        if state.smoothed_br < 1.0 {
            state.smoothed_br = trimmed_br;
        } else if (trimmed_br - state.smoothed_br).abs() > BR_DEAD_BAND {
            state.smoothed_br = state.smoothed_br * (1.0 - VITAL_EMA_ALPHA)
                + trimmed_br * VITAL_EMA_ALPHA;
        }
    }

    // Smooth the confidences.
    state.smoothed_hr_conf = state.smoothed_hr_conf * 0.92 + raw.heartbeat_confidence * 0.08;
    state.smoothed_br_conf = state.smoothed_br_conf * 0.92 + raw.breathing_confidence * 0.08;

    VitalSigns {
        breathing_rate_bpm: if state.smoothed_br > 1.0 { Some(state.smoothed_br) } else { None },
        heart_rate_bpm: if state.smoothed_hr > 1.0 { Some(state.smoothed_hr) } else { None },
        breathing_confidence: state.smoothed_br_conf,
        heartbeat_confidence: state.smoothed_hr_conf,
        signal_quality: raw.signal_quality,
    }
}
/// Trimmed mean: sort, drop the top/bottom 25%, average the middle 50%.
/// More robust than a median (uses more data) and less noisy than a raw mean.
fn trimmed_mean(buf: &VecDeque<f64>) -> f64 {
    if buf.is_empty() { return 0.0; }
    let mut sorted: Vec<f64> = buf.iter().copied().collect();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal));
    let n = sorted.len();
    let trim = n / 4; // drop 25% from each end
    let middle = &sorted[trim..n - trim];
    if middle.is_empty() {
        sorted[n / 2] // fall back to the median if there are too few samples
    } else {
        middle.iter().sum::<f64>() / middle.len() as f64
    }
}
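A quick check of the trimming behaviour (test-style sketch against the `trimmed_mean` above):

```rust
#[cfg(test)]
mod trimmed_mean_tests {
    use super::trimmed_mean;
    use std::collections::VecDeque;

    #[test]
    fn outliers_are_dropped() {
        // With n = 8, trim = 2: the two extremes on each end are discarded,
        // so the single 200.0 spike cannot drag the result upward.
        let buf: VecDeque<f64> =
            vec![60.0, 61.0, 62.0, 63.0, 64.0, 65.0, 66.0, 200.0].into();
        let m = trimmed_mean(&buf);
        assert!((m - 63.5).abs() < 1e-9); // mean of 62, 63, 64, 65
    }
}
```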

// ── Windows WiFi RSSI collector ──────────────────────────────────────────────
@@ -982,8 +1192,10 @@ async fn windows_wifi_task(state: SharedState, tick_ms: u64) {
             s_write_pre.frame_history.pop_front();
         }
         let sample_rate_hz = 1000.0 / tick_ms as f64;
-        let (features, classification, breathing_rate_hz, sub_variances) =
+        let (features, mut classification, breathing_rate_hz, sub_variances, raw_motion) =
             extract_features_from_frame(&frame, &s_write_pre.frame_history, sample_rate_hz);
+        smooth_and_classify(&mut s_write_pre, &mut classification, raw_motion);
+        adaptive_override(&s_write_pre, &features, &mut classification);
         drop(s_write_pre);

         // ── Step 5: Build enhanced fields from pipeline result ───────
@@ -1025,7 +1237,8 @@ async fn windows_wifi_task(state: SharedState, tick_ms: u64) {
             0.05
         };

-        let vitals = s.vital_detector.process_frame(&frame.amplitudes, &frame.phases);
+        let raw_vitals = s.vital_detector.process_frame(&frame.amplitudes, &frame.phases);
+        let vitals = smooth_vitals(&mut s, &raw_vitals);
         s.latest_vitals = vitals.clone();

         let feat_variance = features.variance;
@@ -1132,8 +1345,10 @@ async fn windows_wifi_fallback_tick(state: &SharedState, seq: u32) {
             s.frame_history.pop_front();
         }
         let sample_rate_hz = 2.0_f64; // fallback tick ~ 500 ms => 2 Hz
-        let (features, classification, breathing_rate_hz, sub_variances) =
+        let (features, mut classification, breathing_rate_hz, sub_variances, raw_motion) =
             extract_features_from_frame(&frame, &s.frame_history, sample_rate_hz);
+        smooth_and_classify(&mut s, &mut classification, raw_motion);
+        adaptive_override(&s, &features, &mut classification);

         s.source = format!("wifi:{ssid}");
         s.rssi_history.push_back(rssi_dbm);
@@ -1152,7 +1367,8 @@ async fn windows_wifi_fallback_tick(state: &SharedState, seq: u32) {
             0.05
         };

-        let vitals = s.vital_detector.process_frame(&frame.amplitudes, &frame.phases);
+        let raw_vitals = s.vital_detector.process_frame(&frame.amplitudes, &frame.phases);
+        let vitals = smooth_vitals(&mut s, &raw_vitals);
         s.latest_vitals = vitals.clone();

         let feat_variance = features.variance;
@@ -2251,6 +2467,77 @@ async fn train_stop(State(state): State<SharedState>) -> Json<serde_json::Value>
    }))
}

// ── Adaptive classifier endpoints ────────────────────────────────────────────

/// POST /api/v1/adaptive/train — train the adaptive classifier from recordings.
async fn adaptive_train(State(state): State<SharedState>) -> Json<serde_json::Value> {
    let rec_dir = PathBuf::from("data/recordings");
    eprintln!("=== Adaptive Classifier Training ===");
    match adaptive_classifier::train_from_recordings(&rec_dir) {
        Ok(model) => {
            let accuracy = model.training_accuracy;
            let frames = model.trained_frames;
            let stats: Vec<_> = model.class_stats.iter().map(|cs| {
                serde_json::json!({
                    "class": cs.label,
                    "samples": cs.count,
                    "feature_means": cs.mean,
                })
            }).collect();

            // Save to disk.
            if let Err(e) = model.save(&adaptive_classifier::model_path()) {
                warn!("Failed to save adaptive model: {e}");
            } else {
                info!("Adaptive model saved to {}", adaptive_classifier::model_path().display());
            }

            // Load into the runtime state.
            let mut s = state.write().await;
            s.adaptive_model = Some(model);

            Json(serde_json::json!({
                "success": true,
                "trained_frames": frames,
                "accuracy": accuracy,
                "class_stats": stats,
            }))
        }
        Err(e) => {
            Json(serde_json::json!({
                "success": false,
                "error": e,
            }))
        }
    }
}

/// GET /api/v1/adaptive/status — check the adaptive model status.
async fn adaptive_status(State(state): State<SharedState>) -> Json<serde_json::Value> {
    let s = state.read().await;
    match &s.adaptive_model {
        Some(model) => Json(serde_json::json!({
            "loaded": true,
            "trained_frames": model.trained_frames,
            "accuracy": model.training_accuracy,
            "version": model.version,
            "classes": adaptive_classifier::CLASSES,
            "class_stats": model.class_stats,
        })),
        None => Json(serde_json::json!({
            "loaded": false,
            "message": "No adaptive model. POST /api/v1/adaptive/train to train one.",
        })),
    }
}

/// POST /api/v1/adaptive/unload — unload the adaptive model (revert to thresholds).
async fn adaptive_unload(State(state): State<SharedState>) -> Json<serde_json::Value> {
    let mut s = state.write().await;
    s.adaptive_model = None;
    Json(serde_json::json!({ "success": true, "message": "Adaptive model unloaded." }))
}
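For reference, these endpoints can be exercised from any HTTP client. A sketch using the `reqwest` crate (the crate, `tokio`/`serde_json` dependencies, and the base URL/port are assumptions for illustration, not part of this change):

```rust
// Sketch only: drives the adaptive endpoints from an external client.
// Assumes `reqwest`, `tokio`, and `serde_json` as dependencies; the URL is illustrative.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let base = "http://localhost:3000/api/v1/adaptive";
    let client = reqwest::Client::new();

    // Train from data/recordings and load the resulting model.
    let train: serde_json::Value =
        client.post(format!("{base}/train")).send().await?.json().await?;
    println!("trained: {train}");

    // Inspect what is loaded.
    let status: serde_json::Value =
        client.get(format!("{base}/status")).send().await?.json().await?;
    println!("status: {status}");

    // Revert to threshold classification.
    client.post(format!("{base}/unload")).send().await?;
    Ok(())
}
```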
/// Generate a simple epoch-seconds timestamp for recording IDs.
fn chrono_timestamp() -> u64 {
    std::time::SystemTime::now()
@@ -2492,8 +2779,10 @@ async fn udp_receiver_task(state: SharedState, udp_port: u16) {
        }

        let sample_rate_hz = 1000.0 / 500.0_f64; // default tick; ESP32 frames arrive as fast as they come
-        let (features, classification, breathing_rate_hz, sub_variances) =
+        let (features, mut classification, breathing_rate_hz, sub_variances, raw_motion) =
            extract_features_from_frame(&frame, &s.frame_history, sample_rate_hz);
+        smooth_and_classify(&mut s, &mut classification, raw_motion);
+        adaptive_override(&s, &features, &mut classification);

        // Update RSSI history
        s.rssi_history.push_back(features.mean_rssi);
@@ -2508,10 +2797,11 @@ async fn udp_receiver_task(state: SharedState, udp_port: u16) {
        else if classification.motion_level == "present_still" { 0.3 }
        else { 0.05 };

-        let vitals = s.vital_detector.process_frame(
+        let raw_vitals = s.vital_detector.process_frame(
            &frame.amplitudes,
            &frame.phases,
        );
+        let vitals = smooth_vitals(&mut s, &raw_vitals);
        s.latest_vitals = vitals.clone();

        // Multi-person estimation with temporal smoothing.
@@ -2595,8 +2885,10 @@ async fn simulated_data_task(state: SharedState, tick_ms: u64) {
        }

        let sample_rate_hz = 1000.0 / tick_ms as f64;
-        let (features, classification, breathing_rate_hz, sub_variances) =
+        let (features, mut classification, breathing_rate_hz, sub_variances, raw_motion) =
            extract_features_from_frame(&frame, &s.frame_history, sample_rate_hz);
+        smooth_and_classify(&mut s, &mut classification, raw_motion);
+        adaptive_override(&s, &features, &mut classification);

        s.rssi_history.push_back(features.mean_rssi);
        if s.rssi_history.len() > 60 {
@@ -2607,10 +2899,11 @@ async fn simulated_data_task(state: SharedState, tick_ms: u64) {
        else if classification.motion_level == "present_still" { 0.3 }
        else { 0.05 };

-        let vitals = s.vital_detector.process_frame(
+        let raw_vitals = s.vital_detector.process_frame(
            &frame.amplitudes,
            &frame.phases,
        );
+        let vitals = smooth_vitals(&mut s, &raw_vitals);
        s.latest_vitals = vitals.clone();

        let frame_amplitudes = frame.amplitudes.clone();
@@ -3264,6 +3557,18 @@ async fn main() {
        active_sona_profile: None,
        model_loaded,
        smoothed_person_score: 0.0,
        smoothed_motion: 0.0,
        current_motion_level: "absent".to_string(),
        debounce_counter: 0,
        debounce_candidate: "absent".to_string(),
        baseline_motion: 0.0,
        baseline_frames: 0,
        smoothed_hr: 0.0,
        smoothed_br: 0.0,
        smoothed_hr_conf: 0.0,
        smoothed_br_conf: 0.0,
        hr_buffer: VecDeque::with_capacity(8),
        br_buffer: VecDeque::with_capacity(8),
        edge_vitals: None,
        latest_wasm_events: None,
        // Model management
@@ -3278,6 +3583,11 @@ async fn main() {
        // Training
        training_status: "idle".to_string(),
        training_config: None,
        adaptive_model: adaptive_classifier::AdaptiveModel::load(&adaptive_classifier::model_path()).ok().map(|m| {
            info!("Loaded adaptive classifier: {} frames, {:.1}% accuracy",
                m.trained_frames, m.training_accuracy * 100.0);
            m
        }),
    }));

    // Start background tasks based on source
@@ -3364,6 +3674,10 @@ async fn main() {
        .route("/api/v1/train/status", get(train_status))
        .route("/api/v1/train/start", post(train_start))
        .route("/api/v1/train/stop", post(train_stop))
        // Adaptive classifier endpoints
        .route("/api/v1/adaptive/train", post(adaptive_train))
        .route("/api/v1/adaptive/status", get(adaptive_status))
        .route("/api/v1/adaptive/unload", post(adaptive_unload))
        // Static UI files
        .nest_service("/ui", ServeDir::new(&ui_path))
        .layer(SetResponseHeaderLayer::overriding(
567 ui/observatory/js/hud-controller.js Normal file

@@ -0,0 +1,567 @@
/**
 * HudController — Extracted HUD update, settings dialog, and scenario UI
 *
 * Manages all DOM-based HUD elements:
 * - Vital sign display with smooth lerp transitions and color coding
 * - Signal metrics, sparkline, and presence indicator
 * - Scenario description and edge module badges
 * - Mini person-count dot visualization
 * - Settings dialog (tabs, ranges, presets, data source)
 * - Quick-select scenario dropdown
 */

// ---- Constants ----

export const SCENARIO_NAMES = [
  'EMPTY ROOM','VITAL SIGNS','MULTI-PERSON','FALL DETECT',
  'SLEEP MONITOR','INTRUSION','GESTURE CTRL','CROWD OCCUPANCY',
  'SEARCH RESCUE','ELDERLY CARE','FITNESS','SECURITY PATROL',
];

export const DEFAULTS = {
  bloom: 0.2, bloomRadius: 0.25, bloomThresh: 0.5,
  exposure: 1.3, vignette: 0.25, grain: 0.01, chromatic: 0.0005,
  boneThick: 0.018, jointSize: 0.035, glow: 0.3, trail: 0.35,
  wireColor: '#00d878', jointColor: '#ff4060', aura: 0.02,
  field: 0.45, waves: 0.4, ambient: 0.7, reflect: 0.2,
  fov: 50, orbitSpeed: 0.15, grid: true, room: true,
  scenario: 'auto', cycle: 30, dataSource: 'demo', wsUrl: '',
};

export const SETTINGS_VERSION = '4';

export const PRESETS = {
  foundation: {},
  cinematic: {
    bloom: 1.2, bloomRadius: 0.5, bloomThresh: 0.2,
    exposure: 0.8, vignette: 0.7, grain: 0.04, chromatic: 0.002,
    glow: 0.6, trail: 0.8, aura: 0.06, field: 0.4,
    waves: 0.7, ambient: 0.25, reflect: 0.5, fov: 40, orbitSpeed: 0.08,
  },
  minimal: {
    bloom: 0.3, bloomRadius: 0.2, bloomThresh: 0.5,
    exposure: 1.1, vignette: 0.2, grain: 0, chromatic: 0,
    glow: 0.3, trail: 0.2, aura: 0.02, field: 0.7,
    waves: 0.3, ambient: 0.6, reflect: 0.1, wireColor: '#40ff90', jointColor: '#4080ff',
  },
  neon: {
    bloom: 2.5, bloomRadius: 0.8, bloomThresh: 0.1,
    exposure: 0.6, vignette: 0.6, grain: 0.02, chromatic: 0.004,
    glow: 2.0, trail: 1.0, aura: 0.15, field: 0.6,
    waves: 1.0, ambient: 0.15, reflect: 0.7, wireColor: '#00ffaa', jointColor: '#ff00ff',
  },
  tactical: {
    bloom: 0.5, bloomRadius: 0.3, bloomThresh: 0.4,
    exposure: 0.85, vignette: 0.4, grain: 0.04, chromatic: 0.001,
    glow: 0.5, trail: 0.4, aura: 0.03, field: 0.8,
    waves: 0.4, ambient: 0.3, reflect: 0.15, wireColor: '#30ff60', jointColor: '#ff8800',
  },
  medical: {
    bloom: 0.6, bloomRadius: 0.4, bloomThresh: 0.35,
    exposure: 1.0, vignette: 0.3, grain: 0.01, chromatic: 0.0005,
    glow: 0.6, trail: 0.3, aura: 0.04, field: 0.5,
    waves: 0.3, ambient: 0.5, reflect: 0.2, wireColor: '#00ccff', jointColor: '#ff3355',
  },
};

// Scenario descriptions shown below the dropdown
const SCENARIO_DESCRIPTIONS = {
  auto: 'Auto-cycling through all sensing scenarios.',
  empty_room: 'Baseline calibration with no human presence in the monitored zone.',
  single_breathing: 'Detecting vital signs through WiFi signal micro-variations.',
  two_walking: 'Tracking multiple people simultaneously via CSI multiplex separation.',
  fall_event: 'Sudden posture-change detection using acceleration feature analysis.',
  sleep_monitoring: 'Monitoring breathing patterns and apnea events during sleep.',
  intrusion_detect: 'Passive perimeter monitoring -- no cameras, pure RF sensing.',
  gesture_control: 'DTW-based gesture recognition from hand/arm motion signatures.',
  crowd_occupancy: 'Estimating room occupancy count from aggregate CSI variance.',
  search_rescue: 'Through-wall survivor detection using WiFi-MAT multistatic mode.',
  elderly_care: 'Continuous gait analysis for early mobility-decline detection.',
  fitness_tracking: 'Rep counting and exercise classification from body kinematics.',
  security_patrol: 'Multi-zone presence patrol with camera-free motion heatmaps.',
};

// Edge modules active per scenario
const SCENARIO_EDGE_MODULES = {
  auto: [],
  empty_room: [],
  single_breathing: ['VITALS'],
  two_walking: ['GAIT', 'TRACKING'],
  fall_event: ['FALL', 'VITALS'],
  sleep_monitoring: ['VITALS', 'APNEA'],
  intrusion_detect: ['PRESENCE', 'ALERT'],
  gesture_control: ['GESTURE', 'DTW'],
  crowd_occupancy: ['OCCUPANCY'],
  search_rescue: ['MAT', 'VITALS', 'PRESENCE'],
  elderly_care: ['GAIT', 'VITALS', 'FALL'],
  fitness_tracking: ['GESTURE', 'GAIT'],
  security_patrol: ['PRESENCE', 'ALERT', 'TRACKING'],
};

// Edge-module badge colors
const MODULE_COLORS = {
  VITALS: 'var(--red-heart)',
  GAIT: 'var(--green-glow)',
  FALL: 'var(--red-alert)',
  GESTURE: 'var(--amber)',
  PRESENCE: 'var(--blue-signal)',
  TRACKING: 'var(--green-bright)',
  OCCUPANCY: 'var(--amber)',
  ALERT: 'var(--red-alert)',
  DTW: 'var(--amber)',
  APNEA: 'var(--red-heart)',
  MAT: 'var(--blue-signal)',
};

// Vital-sign color-coding thresholds
function vitalColor(type, value) {
  if (value <= 0) return 'var(--text-secondary)';
  if (type === 'hr') {
    if (value < 50 || value > 130) return 'var(--red-alert)';
    if (value < 60 || value > 100) return 'var(--amber)';
    return 'var(--green-glow)';
  }
  if (type === 'br') {
    if (value < 8 || value > 28) return 'var(--red-alert)';
    if (value < 12 || value > 20) return 'var(--amber)';
    return 'var(--green-glow)';
  }
  if (type === 'conf') {
    if (value < 40) return 'var(--red-alert)';
    if (value < 70) return 'var(--amber)';
    return 'var(--green-glow)';
  }
  return 'var(--text-primary)';
}
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// ---- HudController class ----

export class HudController {
  constructor(observatory) {
    this._obs = observatory;
    this._settingsOpen = false;
    this._rssiHistory = [];
    this._sparklineCtx = document.getElementById('rssi-sparkline')?.getContext('2d');

    // Lerp state for smooth vital-sign transitions
    this._lerpHr = 0;
    this._lerpBr = 0;
    this._lerpConf = 0;

    // Track the current scenario for description/edge updates
    this._currentScenarioKey = null;
  }

  // ============================================================
  // Settings dialog
  // ============================================================

  initSettings() {
    const overlay = document.getElementById('settings-overlay');
    const btn = document.getElementById('settings-btn');
    const closeBtn = document.getElementById('settings-close');
    btn.addEventListener('click', () => this.toggleSettings());
    closeBtn.addEventListener('click', () => this.toggleSettings());
    overlay.addEventListener('click', (e) => { if (e.target === overlay) this.toggleSettings(); });

    // Tab switching
    document.querySelectorAll('.stab').forEach(tab => {
      tab.addEventListener('click', () => {
        document.querySelectorAll('.stab').forEach(t => t.classList.remove('active'));
        document.querySelectorAll('.stab-content').forEach(c => c.classList.remove('active'));
        tab.classList.add('active');
        document.getElementById(`stab-${tab.dataset.stab}`).classList.add('active');
      });
    });

    const obs = this._obs;
    const s = obs.settings;

    // Bind ranges
    this._bindRange('opt-bloom', 'bloom', v => { obs._postProcessing._bloomPass.strength = v; });
    this._bindRange('opt-bloom-radius', 'bloomRadius', v => { obs._postProcessing._bloomPass.radius = v; });
    this._bindRange('opt-bloom-thresh', 'bloomThresh', v => { obs._postProcessing._bloomPass.threshold = v; });
    this._bindRange('opt-exposure', 'exposure', v => { obs._renderer.toneMappingExposure = v; });
    this._bindRange('opt-vignette', 'vignette', v => { obs._postProcessing._vignettePass.uniforms.uVignetteStrength.value = v; });
    this._bindRange('opt-grain', 'grain', v => { obs._postProcessing._vignettePass.uniforms.uGrainStrength.value = v; });
    this._bindRange('opt-chromatic', 'chromatic', v => { obs._postProcessing._vignettePass.uniforms.uChromaticStrength.value = v; });
    this._bindRange('opt-bone-thick', 'boneThick');
    this._bindRange('opt-joint-size', 'jointSize');
    this._bindRange('opt-glow', 'glow');
    this._bindRange('opt-trail', 'trail');
    this._bindRange('opt-aura', 'aura');
    this._bindRange('opt-field', 'field', v => { obs._fieldMat.opacity = v; });
    this._bindRange('opt-waves', 'waves');
    this._bindRange('opt-ambient', 'ambient', v => { obs._ambient.intensity = v; });
    this._bindRange('opt-reflect', 'reflect', v => {
      obs._floorMat.roughness = 1.0 - v * 0.7;
      obs._floorMat.metalness = v * 0.5;
    });
    this._bindRange('opt-fov', 'fov', v => {
      obs._camera.fov = v;
      obs._camera.updateProjectionMatrix();
    });
    this._bindRange('opt-orbit-speed', 'orbitSpeed');
    this._bindRange('opt-cycle', 'cycle', v => { obs._demoData.setCycleDuration(v); });

    // Color pickers
    document.getElementById('opt-wire-color').value = s.wireColor;
    document.getElementById('opt-wire-color').addEventListener('input', (e) => {
      s.wireColor = e.target.value; obs._applyColors(); this.saveSettings();
    });
    document.getElementById('opt-joint-color').value = s.jointColor;
    document.getElementById('opt-joint-color').addEventListener('input', (e) => {
      s.jointColor = e.target.value; obs._applyColors(); this.saveSettings();
    });

    // Checkboxes
    document.getElementById('opt-grid').checked = s.grid;
    document.getElementById('opt-grid').addEventListener('change', (e) => {
      s.grid = e.target.checked; obs._grid.visible = e.target.checked; this.saveSettings();
    });
    document.getElementById('opt-room').checked = s.room;
    document.getElementById('opt-room').addEventListener('change', (e) => {
      s.room = e.target.checked; obs._roomWire.visible = e.target.checked; this.saveSettings();
    });

    // Scenario select
    const scenarioSel = document.getElementById('opt-scenario');
    scenarioSel.value = s.scenario;
    scenarioSel.addEventListener('change', (e) => {
      s.scenario = e.target.value;
      obs._demoData.setScenario(e.target.value);
      this.saveSettings();
    });

    // Data source
    const dsSel = document.getElementById('opt-data-source');
    dsSel.value = s.dataSource;
    dsSel.addEventListener('change', (e) => {
      s.dataSource = e.target.value;
      document.getElementById('ws-url-row').style.display = e.target.value === 'ws' ? 'flex' : 'none';
      if (e.target.value === 'ws' && s.wsUrl) obs._connectWS(s.wsUrl);
      else obs._disconnectWS();
      this.updateSourceBadge(s.dataSource, obs._ws);
      this.saveSettings();
    });
    document.getElementById('ws-url-row').style.display = s.dataSource === 'ws' ? 'flex' : 'none';

    const wsInput = document.getElementById('opt-ws-url');
    wsInput.value = s.wsUrl;
    wsInput.addEventListener('change', (e) => {
      s.wsUrl = e.target.value;
      if (s.dataSource === 'ws') obs._connectWS(e.target.value);
      this.saveSettings();
    });

    // Buttons
    document.getElementById('btn-reset-camera').addEventListener('click', () => {
      obs._camera.position.set(6, 5, 8);
      obs._controls.target.set(0, 1.2, 0);
      obs._controls.update();
    });
    document.getElementById('btn-export-settings').addEventListener('click', () => {
      const blob = new Blob([JSON.stringify(s, null, 2)], { type: 'application/json' });
      const a = document.createElement('a');
      a.href = URL.createObjectURL(blob);
      a.download = 'ruview-observatory-settings.json';
      a.click();
    });
    document.getElementById('btn-reset-settings').addEventListener('click', () => {
      this.applyPreset(DEFAULTS);
    });

    const presetSel = document.getElementById('opt-preset');
    presetSel.addEventListener('change', (e) => {
      const p = PRESETS[e.target.value];
      if (p) this.applyPreset({ ...DEFAULTS, ...p });
    });

    obs._grid.visible = s.grid;
    obs._roomWire.visible = s.room;
  }

  // ============================================================
  // Quick-select (top bar scenario dropdown)
  // ============================================================

  initQuickSelect() {
    const sel = document.getElementById('scenario-quick-select');
    if (!sel) return;
    sel.addEventListener('change', (e) => {
      this._obs._demoData.setScenario(e.target.value);
      const settingsSel = document.getElementById('opt-scenario');
      if (settingsSel) settingsSel.value = e.target.value;
      this._obs.settings.scenario = e.target.value;
      this.saveSettings();
    });
  }
  // ============================================================
  // Toggle / save / preset
  // ============================================================

  toggleSettings() {
    this._settingsOpen = !this._settingsOpen;
    document.getElementById('settings-overlay').style.display = this._settingsOpen ? 'flex' : 'none';
  }

  get settingsOpen() {
    return this._settingsOpen;
  }

  saveSettings() {
    try {
      localStorage.setItem('ruview-observatory-settings', JSON.stringify(this._obs.settings));
    } catch {}
  }

  applyPreset(preset) {
    const obs = this._obs;
    Object.assign(obs.settings, preset);
    this.saveSettings();
    const rangeMap = {
      'opt-bloom': 'bloom', 'opt-bloom-radius': 'bloomRadius', 'opt-bloom-thresh': 'bloomThresh',
      'opt-exposure': 'exposure', 'opt-vignette': 'vignette', 'opt-grain': 'grain', 'opt-chromatic': 'chromatic',
      'opt-bone-thick': 'boneThick', 'opt-joint-size': 'jointSize', 'opt-glow': 'glow', 'opt-trail': 'trail', 'opt-aura': 'aura',
      'opt-field': 'field', 'opt-waves': 'waves', 'opt-ambient': 'ambient', 'opt-reflect': 'reflect',
      'opt-fov': 'fov', 'opt-orbit-speed': 'orbitSpeed', 'opt-cycle': 'cycle',
    };
    for (const [id, key] of Object.entries(rangeMap)) {
      const el = document.getElementById(id);
      const valEl = document.getElementById(`${id}-val`);
      if (el) el.value = obs.settings[key];
      if (valEl) valEl.textContent = obs.settings[key];
    }
    const gridEl = document.getElementById('opt-grid');
    if (gridEl) { gridEl.checked = obs.settings.grid; obs._grid.visible = obs.settings.grid; }
    const roomEl = document.getElementById('opt-room');
    if (roomEl) { roomEl.checked = obs.settings.room; obs._roomWire.visible = obs.settings.room; }
    document.getElementById('opt-wire-color').value = obs.settings.wireColor;
    document.getElementById('opt-joint-color').value = obs.settings.jointColor;
    obs._applyPostSettings();
    obs._renderer.toneMappingExposure = obs.settings.exposure;
    obs._fieldMat.opacity = obs.settings.field;
    obs._ambient.intensity = obs.settings.ambient;
    obs._floorMat.roughness = 1.0 - obs.settings.reflect * 0.7;
    obs._floorMat.metalness = obs.settings.reflect * 0.5;
    obs._camera.fov = obs.settings.fov;
    obs._camera.updateProjectionMatrix();
    obs._demoData.setCycleDuration(obs.settings.cycle);
    obs._applyColors();
  }

  // ============================================================
  // Source badge
  // ============================================================

  updateSourceBadge(dataSource, ws) {
    const dot = document.querySelector('#data-source-badge .dot');
    const label = document.getElementById('data-source-label');
    if (dataSource === 'ws' && ws?.readyState === WebSocket.OPEN) {
      dot.className = 'dot dot--live'; label.textContent = 'LIVE';
    } else {
      dot.className = 'dot dot--demo'; label.textContent = 'DEMO';
    }
  }

  // ============================================================
  // HUD update (called every frame)
  // ============================================================

  updateHUD(data, demoData) {
    if (!data) return;
    const vs = data.vital_signs || {};
    const feat = data.features || {};
    const cls = data.classification || {};

    // Sync the scenario dropdown
    const quickSel = document.getElementById('scenario-quick-select');
    const cur = demoData._autoMode ? 'auto' : demoData.currentScenario;
    if (quickSel && quickSel.value !== cur) quickSel.value = cur;
    const autoIcon = document.getElementById('autoplay-icon');
    if (autoIcon) autoIcon.className = demoData._autoMode ? '' : 'hidden';

    const targetHr = vs.heart_rate_bpm || 0;
    const targetBr = vs.breathing_rate_bpm || 0;
    const targetConf = Math.round((cls.confidence || 0) * 100);

    // Smooth lerp transitions (blend 4% per frame toward the target — very stable)
    const lerpFactor = 0.04;
    this._lerpHr = targetHr > 0 ? lerp(this._lerpHr, targetHr, lerpFactor) : 0;
    this._lerpBr = targetBr > 0 ? lerp(this._lerpBr, targetBr, lerpFactor) : 0;
    this._lerpConf = targetConf > 0 ? lerp(this._lerpConf, targetConf, lerpFactor) : 0;

    const dispHr = this._lerpHr > 1 ? Math.round(this._lerpHr) : '--';
    const dispBr = this._lerpBr > 1 ? Math.round(this._lerpBr) : '--';
    const dispConf = this._lerpConf > 1 ? Math.round(this._lerpConf) : '--';

    this._setText('hr-value', dispHr);
    this._setText('br-value', dispBr);
    this._setText('conf-value', dispConf);
    this._setWidth('hr-bar', Math.min(100, this._lerpHr / 120 * 100));
    this._setWidth('br-bar', Math.min(100, this._lerpBr / 30 * 100));
    this._setWidth('conf-bar', this._lerpConf);

    // Color-code vital values
    this._setColor('hr-value', vitalColor('hr', this._lerpHr));
    this._setColor('br-value', vitalColor('br', this._lerpBr));
    this._setColor('conf-value', vitalColor('conf', this._lerpConf));

    // Color-code bar fills to match
    this._setBarColor('hr-bar', vitalColor('hr', this._lerpHr));
    this._setBarColor('br-bar', vitalColor('br', this._lerpBr));
    this._setBarColor('conf-bar', vitalColor('conf', this._lerpConf));

    this._setText('rssi-value', `${Math.round(feat.mean_rssi || 0)} dBm`);
    this._setText('var-value', (feat.variance || 0).toFixed(2));
    this._setText('motion-value', (feat.motion_band_power || 0).toFixed(3));

    // Mini person-count dots
    const personCount = data.estimated_persons || 0;
    this._updatePersonDots(personCount);

    const presEl = document.getElementById('presence-indicator');
    const presLabel = document.getElementById('presence-label');
    if (presEl) {
      const ml = cls.motion_level || 'absent';
      presEl.className = 'presence-state';
      if (ml === 'active') { presEl.classList.add('presence--active'); presLabel.textContent = 'ACTIVE'; }
      else if (cls.presence) { presEl.classList.add('presence--present'); presLabel.textContent = 'PRESENT'; }
      else { presEl.classList.add('presence--absent'); presLabel.textContent = 'ABSENT'; }
    }

    const fallEl = document.getElementById('fall-alert');
    if (fallEl) fallEl.style.display = cls.fall_detected ? 'block' : 'none';

    // Scenario description and edge modules (same key in auto and manual modes)
    const scenarioKey = demoData.currentScenario || 'auto';
    if (scenarioKey !== this._currentScenarioKey) {
      this._currentScenarioKey = scenarioKey;
      this._updateScenarioDescription(scenarioKey);
      this._updateEdgeModules(scenarioKey);
    }
  }

  // ============================================================
  // Sparkline
  // ============================================================

  updateSparkline(data) {
    const rssi = data?.features?.mean_rssi;
    if (rssi == null || !this._sparklineCtx) return;
    this._rssiHistory.push(rssi);
    if (this._rssiHistory.length > 60) this._rssiHistory.shift();

    const ctx = this._sparklineCtx;
    const w = ctx.canvas.width, h = ctx.canvas.height;
    ctx.clearRect(0, 0, w, h);
    if (this._rssiHistory.length < 2) return;

    ctx.beginPath();
    ctx.strokeStyle = '#2090ff';
    ctx.lineWidth = 1.5;
    ctx.shadowColor = '#2090ff';
    ctx.shadowBlur = 4;
    for (let i = 0; i < this._rssiHistory.length; i++) {
      const x = (i / (this._rssiHistory.length - 1)) * w;
      const norm = Math.max(0, Math.min(1, (this._rssiHistory[i] + 80) / 60));
      const y = h - norm * h;
      i === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y);
    }
    ctx.stroke();
    ctx.shadowBlur = 0;
    ctx.lineTo(w, h);
    ctx.lineTo(0, h);
    ctx.closePath();
    const grad = ctx.createLinearGradient(0, 0, 0, h);
    grad.addColorStop(0, 'rgba(32,144,255,0.15)');
    grad.addColorStop(1, 'rgba(32,144,255,0)');
    ctx.fillStyle = grad;
    ctx.fill();
  }
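For reference, the sparkline's vertical mapping above is a fixed-window normalization:

```latex
\mathrm{norm} = \operatorname{clamp}\!\left(\tfrac{\mathrm{rssi} + 80}{60},\ 0,\ 1\right), \qquad y = h - \mathrm{norm}\cdot h
```

so −80 dBm draws at the bottom edge, −20 dBm at the top, and anything outside that window pins to an edge.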
  // ============================================================
  // Private helpers
  // ============================================================

  _setText(id, val) {
    const e = document.getElementById(id);
    if (e) e.textContent = val;
  }

  _setWidth(id, pct) {
    const e = document.getElementById(id);
    if (e) e.style.width = `${pct}%`;
  }

  _setColor(id, color) {
    const e = document.getElementById(id);
    if (e) e.style.color = color;
  }

  _setBarColor(id, color) {
    const e = document.getElementById(id);
    if (e) e.style.background = color;
  }

  _bindRange(id, key, applyFn) {
    const el = document.getElementById(id);
    const valEl = document.getElementById(`${id}-val`);
    if (!el) return;
    el.value = this._obs.settings[key];
    if (valEl) valEl.textContent = this._obs.settings[key];
    el.addEventListener('input', (e) => {
      const v = parseFloat(e.target.value);
      this._obs.settings[key] = v;
      if (valEl) valEl.textContent = v;
      if (applyFn) applyFn(v);
      this.saveSettings();
    });
  }

  _updatePersonDots(count) {
    const container = document.getElementById('persons-dots');
    if (!container) {
      // Fall back to a text-only display
      this._setText('persons-value', count);
      return;
    }
    // Build dot icons: filled for detected persons, dim for empty slots (max 8)
    const maxDots = 8;
    const clamped = Math.min(count, maxDots);
    let html = '';
    for (let i = 0; i < maxDots; i++) {
      const active = i < clamped;
      html += `<span class="person-dot${active ? ' person-dot--active' : ''}"></span>`;
    }
    container.innerHTML = html;
    this._setText('persons-value', count);
  }

  _updateScenarioDescription(scenarioKey) {
    const el = document.getElementById('scenario-description');
    if (!el) return;
    el.textContent = SCENARIO_DESCRIPTIONS[scenarioKey] || '';
  }

  _updateEdgeModules(scenarioKey) {
    const bar = document.getElementById('edge-modules-bar');
    if (!bar) return;
    const modules = SCENARIO_EDGE_MODULES[scenarioKey] || [];
    if (modules.length === 0) {
      bar.innerHTML = '';
      bar.style.display = 'none';
      return;
    }
    bar.style.display = 'flex';
    bar.innerHTML = modules.map(m => {
      const color = MODULE_COLORS[m] || 'var(--text-secondary)';
      return `<span class="edge-badge" style="--badge-color:${color}">${m}</span>`;
    }).join('');
  }
}
715 ui/observatory/js/main.js Normal file

@@ -0,0 +1,715 @@
/**
 * RuView Observatory — Main Scene Orchestrator
 *
 * Room-based WiFi sensing visualization with:
 * - Pool of 4 human wireframe figures (multi-person scenarios)
 * - 8 pose types (standing, walking, lying, sitting, fallen, exercising, gesturing, crouching)
 * - Scenario-specific room props (chair, exercise mat, door, rubble wall, screen, desk)
 * - Dot-matrix mist body mass, particle trails, WiFi waves, signal field
 * - Reflective floor, settings dialog, and practical data HUD
 */
import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

import { DemoDataGenerator } from './demo-data.js';
import { NebulaBackground } from './nebula-background.js';
import { PostProcessing } from './post-processing.js';
import { FigurePool, SKELETON_PAIRS } from './figure-pool.js';
import { PoseSystem } from './pose-system.js';
import { ScenarioProps } from './scenario-props.js';
import { HudController, DEFAULTS, SETTINGS_VERSION, PRESETS, SCENARIO_NAMES } from './hud-controller.js';

// ---- Palette ----
const C = {
  greenGlow:  0x00d878,
  greenBright:0x3eff8a,
  greenDim:   0x0a6b3a,
  amber:      0xffb020,
  blueSignal: 0x2090ff,
  redAlert:   0xff3040,
  redHeart:   0xff4060,
  bgDeep:     0x080c14,
};

// SCENARIO_NAMES, DEFAULTS, SETTINGS_VERSION, PRESETS are imported from hud-controller.js

// ---- Main Class ----

class Observatory {
  constructor() {
    this._canvas = document.getElementById('observatory-canvas');
    this.settings = { ...DEFAULTS };

    // Load saved settings
    try {
      const ver = localStorage.getItem('ruview-settings-version');
      if (ver === SETTINGS_VERSION) {
        const saved = localStorage.getItem('ruview-observatory-settings');
        if (saved) Object.assign(this.settings, JSON.parse(saved));
      } else {
        localStorage.removeItem('ruview-observatory-settings');
        localStorage.setItem('ruview-settings-version', SETTINGS_VERSION);
      }
    } catch {}

    // Renderer
    this._renderer = new THREE.WebGLRenderer({
      canvas: this._canvas,
      antialias: true,
      powerPreference: 'high-performance',
    });
    this._renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));
    this._renderer.setSize(window.innerWidth, window.innerHeight);
    this._renderer.toneMapping = THREE.ACESFilmicToneMapping;
    this._renderer.toneMappingExposure = this.settings.exposure;
    this._renderer.shadowMap.enabled = true;
    this._renderer.shadowMap.type = THREE.PCFSoftShadowMap;

    // Scene
    this._scene = new THREE.Scene();
    this._scene.background = new THREE.Color(C.bgDeep);
    this._scene.fog = new THREE.FogExp2(C.bgDeep, 0.005);

    // Camera
    this._camera = new THREE.PerspectiveCamera(
      this.settings.fov, window.innerWidth / window.innerHeight, 0.1, 300
    );
    this._camera.position.set(6, 5, 8);
    this._camera.lookAt(0, 1.2, 0);

    // Controls
    this._controls = new OrbitControls(this._camera, this._canvas);
    this._controls.enableDamping = true;
    this._controls.dampingFactor = 0.08;
    this._controls.minDistance = 2;
    this._controls.maxDistance = 25;
    this._controls.maxPolarAngle = Math.PI * 0.88;
    this._controls.target.set(0, 1.2, 0);
    this._controls.update();

    this._clock = new THREE.Clock();

    // Data
    this._demoData = new DemoDataGenerator();
    this._demoData.setCycleDuration(this.settings.cycle || 30);
    if (this.settings.scenario && this.settings.scenario !== 'auto') {
      this._demoData.setScenario(this.settings.scenario);
    }
    this._currentData = null;
    this._currentScenario = null;

    // Build scene
    this._setupLighting();
    this._nebula = new NebulaBackground(this._scene);
    this._buildRoom();
    this._buildRouter();
    this._poseSystem = new PoseSystem();
    this._figurePool = new FigurePool(this._scene, this.settings, this._poseSystem);
    this._scenarioProps = new ScenarioProps(this._scene);
    this._buildDotMatrixMist();
    this._buildParticleTrail();
    this._buildWifiWaves();
    this._buildSignalField();

    // Post-processing
    this._postProcessing = new PostProcessing(this._renderer, this._scene, this._camera);
    this._applyPostSettings();

    // HUD controller (settings dialog, sparkline, vital displays)
    this._hud = new HudController(this);

    // State
    this._autopilot = false;
    this._autoAngle = 0;
    this._fpsFrames = 0;
    this._fpsTime = 0;
    this._fpsValue = 60;
    this._showFps = false;
    this._qualityLevel = 2;

    // WebSocket for live data — always try auto-detect on startup
    this._ws = null;
    this._liveData = null;
    this._autoDetectLive();

    // Input
    this._initKeyboard();
    this._hud.initSettings();
    this._hud.initQuickSelect();
    window.addEventListener('resize', () => this._onResize());

    // Start
    this._animate();
  }

  // ---- Lighting ----

  _setupLighting() {
    this._ambient = new THREE.AmbientLight(0x446688, this.settings.ambient * 3.0);
    this._scene.add(this._ambient);

    const hemi = new THREE.HemisphereLight(0x6688bb, 0x203040, 1.2);
    this._scene.add(hemi);

    const key = new THREE.DirectionalLight(0xffeedd, 1.2);
    key.position.set(4, 8, 3);
    key.castShadow = true;
    key.shadow.mapSize.set(1024, 1024);
    key.shadow.camera.near = 0.5;
    key.shadow.camera.far = 20;
    key.shadow.camera.left = -8;
    key.shadow.camera.right = 8;
    key.shadow.camera.top = 8;
    key.shadow.camera.bottom = -8;
    this._scene.add(key);

    // Fill light from the opposite side
    const fill = new THREE.DirectionalLight(0x8899bb, 0.7);
    fill.position.set(-4, 5, -2);
    this._scene.add(fill);

    // Rim light from above/behind for edge definition
    const rim = new THREE.DirectionalLight(0x6699cc, 0.5);
    rim.position.set(0, 6, -5);
    this._scene.add(rim);

    // Overhead room light — general illumination
    const overhead = new THREE.PointLight(0x8899aa, 1.0, 20, 1.0);
    overhead.position.set(0, 3.8, 0);
    this._scene.add(overhead);
  }

  // ---- Room ----

  _buildRoom() {
    this._grid = new THREE.GridHelper(12, 24, 0x1a4830, 0x0c2818);
    this._grid.material.opacity = 0.5;
    this._grid.material.transparent = true;
    this._scene.add(this._grid);

    const boxGeo = new THREE.BoxGeometry(12, 4, 10);
    const edges = new THREE.EdgesGeometry(boxGeo);
    this._roomWire = new THREE.LineSegments(edges, new THREE.LineBasicMaterial({
      color: C.greenDim, opacity: 0.3, transparent: true,
    }));
    this._roomWire.position.y = 2;
    this._scene.add(this._roomWire);

    // Reflective floor
    const floorGeo = new THREE.PlaneGeometry(12, 10);
    this._floorMat = new THREE.MeshStandardMaterial({
      color: 0x101810,
      roughness: 1.0 - this.settings.reflect * 0.7,
      metalness: this.settings.reflect * 0.5,
      emissive: 0x020404,
      emissiveIntensity: 0.08,
    });
    const floor = new THREE.Mesh(floorGeo, this._floorMat);
    floor.rotation.x = -Math.PI / 2;
    floor.receiveShadow = true;
    this._scene.add(floor);

    // Table under the router
    const tableGeo = new THREE.BoxGeometry(0.8, 0.6, 0.5);
    const tableMat = new THREE.MeshStandardMaterial({ color: 0x6b5840, roughness: 0.55, emissive: 0x1a1408, emissiveIntensity: 0.25 });
    const table = new THREE.Mesh(tableGeo, tableMat);
    table.position.set(-4, 0.3, -3);
    table.castShadow = true;
    this._scene.add(table);
  }

  // ---- Router ----

  _buildRouter() {
    this._routerGroup = new THREE.Group();
    this._routerGroup.position.set(-4, 0.92, -3);

    const bodyGeo = new THREE.BoxGeometry(0.6, 0.12, 0.35);
    const bodyMat = new THREE.MeshStandardMaterial({ color: 0x505060, roughness: 0.2, metalness: 0.7, emissive: 0x101018, emissiveIntensity: 0.2 });
    this._routerGroup.add(new THREE.Mesh(bodyGeo, bodyMat));

    for (let i = -1; i <= 1; i++) {
      const antGeo = new THREE.CylinderGeometry(0.015, 0.015, 0.35);
      const antMat = new THREE.MeshStandardMaterial({ color: 0x606068, roughness: 0.3, metalness: 0.6, emissive: 0x101018, emissiveIntensity: 0.15 });
      const ant = new THREE.Mesh(antGeo, antMat);
      ant.position.set(i * 0.2, 0.24, 0);
      ant.rotation.z = i * 0.15;
      this._routerGroup.add(ant);
    }

    const ledGeo = new THREE.SphereGeometry(0.025);
    // transparent: true so the opacity pulse in _animate() actually renders
    this._routerLed = new THREE.Mesh(ledGeo, new THREE.MeshBasicMaterial({ color: C.greenGlow, transparent: true }));
    this._routerLed.position.set(0.22, 0.07, 0.18);
    this._routerGroup.add(this._routerLed);

    this._routerLight = new THREE.PointLight(C.blueSignal, 1.2, 8);
    this._routerLight.position.set(0, 0.3, 0);
    this._routerGroup.add(this._routerLight);

    this._scene.add(this._routerGroup);
  }

  // ---- WiFi Waves ----

  _buildWifiWaves() {
    this._wifiWaves = [];
    for (let i = 0; i < 5; i++) {
      const radius = 0.8 + i * 1.0;
      const geo = new THREE.SphereGeometry(radius, 24, 16, 0, Math.PI * 2, 0, Math.PI * 0.6);
      const mat = new THREE.MeshBasicMaterial({
        color: C.blueSignal,
        transparent: true, opacity: 0,
        side: THREE.DoubleSide,
        blending: THREE.AdditiveBlending,
        depthWrite: false, wireframe: true,
      });
      const shell = new THREE.Mesh(geo, mat);
      shell.position.copy(this._routerGroup.position);
      shell.position.y += 0.5;
      this._scene.add(shell);
      this._wifiWaves.push({ mesh: shell, mat, phase: i * 0.7 });
    }
  }

  // ========================================
  // DOT MATRIX MIST
  // ========================================

  _buildDotMatrixMist() {
    const COUNT = 800;
    const positions = new Float32Array(COUNT * 3);
    const alphas = new Float32Array(COUNT);
    for (let i = 0; i < COUNT; i++) {
      const angle = Math.random() * Math.PI * 2;
      const r = Math.random() * 0.5;
      positions[i * 3] = Math.cos(angle) * r;
      positions[i * 3 + 1] = Math.random() * 1.8;
      positions[i * 3 + 2] = Math.sin(angle) * r;
      alphas[i] = 0;
    }
    const geo = new THREE.BufferGeometry();
    geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    geo.setAttribute('alpha', new THREE.BufferAttribute(alphas, 1));
    const mat = new THREE.ShaderMaterial({
      vertexShader: `
        attribute float alpha;
        varying float vAlpha;
        void main() {
          vAlpha = alpha;
          vec4 mv = modelViewMatrix * vec4(position, 1.0);
          gl_PointSize = 3.0 * (200.0 / -mv.z);
          gl_Position = projectionMatrix * mv;
        }
      `,
      fragmentShader: `
        uniform vec3 uColor;
        varying float vAlpha;
        void main() {
          float d = length(gl_PointCoord - 0.5);
          if (d > 0.5) discard;
          float edge = smoothstep(0.5, 0.2, d);
          gl_FragColor = vec4(uColor, edge * vAlpha);
        }
      `,
      uniforms: { uColor: { value: new THREE.Color(this.settings.wireColor) } },
      transparent: true, blending: THREE.AdditiveBlending, depthWrite: false,
    });
    this._mistPoints = new THREE.Points(geo, mat);
    this._scene.add(this._mistPoints);
    this._mistCount = COUNT;
  }

  // ---- Particle Trail ----

  _buildParticleTrail() {
    const COUNT = 200;
    const positions = new Float32Array(COUNT * 3);
    const ages = new Float32Array(COUNT);
    for (let i = 0; i < COUNT; i++) ages[i] = 1;
    const geo = new THREE.BufferGeometry();
    geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    geo.setAttribute('age', new THREE.BufferAttribute(ages, 1));
    const mat = new THREE.ShaderMaterial({
      vertexShader: `
        attribute float age;
        varying float vAge;
        void main() {
          vAge = age;
          vec4 mv = modelViewMatrix * vec4(position, 1.0);
          gl_PointSize = max(1.0, (1.0 - age) * 5.0 * (150.0 / -mv.z));
          gl_Position = projectionMatrix * mv;
        }
      `,
      fragmentShader: `
        uniform vec3 uColor;
        varying float vAge;
        void main() {
          float d = length(gl_PointCoord - 0.5);
          if (d > 0.5) discard;
          float alpha = (1.0 - vAge) * 0.6 * smoothstep(0.5, 0.1, d);
          gl_FragColor = vec4(uColor, alpha);
        }
      `,
      uniforms: { uColor: { value: new THREE.Color(C.greenGlow) } },
      transparent: true, blending: THREE.AdditiveBlending, depthWrite: false,
    });
    this._trail = new THREE.Points(geo, mat);
    this._scene.add(this._trail);
    this._trailHead = 0;
    this._trailCount = COUNT;
    this._trailTimer = 0;
  }

  // ---- Signal Field ----

  _buildSignalField() {
    const gridSize = 20;
    const count = gridSize * gridSize;
    const positions = new Float32Array(count * 3);
    this._fieldColors = new Float32Array(count * 3);
    this._fieldSizes = new Float32Array(count);
    for (let iz = 0; iz < gridSize; iz++) {
      for (let ix = 0; ix < gridSize; ix++) {
        const idx = iz * gridSize + ix;
        positions[idx * 3] = (ix - gridSize / 2) * 0.6;
        positions[idx * 3 + 1] = 0.02;
        positions[idx * 3 + 2] = (iz - gridSize / 2) * 0.5;
        this._fieldSizes[idx] = 8;
      }
    }
    const geo = new THREE.BufferGeometry();
    geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    geo.setAttribute('color', new THREE.BufferAttribute(this._fieldColors, 3));
    geo.setAttribute('size', new THREE.BufferAttribute(this._fieldSizes, 1));
    this._fieldMat = new THREE.PointsMaterial({
      size: 0.35, vertexColors: true, transparent: true,
      opacity: this.settings.field, blending: THREE.AdditiveBlending,
      depthWrite: false, sizeAttenuation: true,
    });
    this._fieldPoints = new THREE.Points(geo, this._fieldMat);
    this._scene.add(this._fieldPoints);
  }
|
||||
// ---- Keyboard ----
|
||||
|
||||
_initKeyboard() {
|
||||
window.addEventListener('keydown', (e) => {
|
||||
if (this._hud.settingsOpen) return;
|
||||
switch (e.key.toLowerCase()) {
|
||||
case 'a':
|
||||
this._autopilot = !this._autopilot;
|
||||
this._controls.enabled = !this._autopilot;
|
||||
break;
|
||||
case 'd': this._demoData.cycleScenario(); break;
|
||||
case 'f':
|
||||
this._showFps = !this._showFps;
|
||||
document.getElementById('fps-counter').style.display = this._showFps ? 'block' : 'none';
|
||||
break;
|
||||
case 's': this._hud.toggleSettings(); break;
|
||||
case ' ':
|
||||
e.preventDefault();
|
||||
this._demoData.paused = !this._demoData.paused;
|
||||
break;
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// ---- Settings / HUD methods delegated to HudController ----
|
||||
|
||||
_applyPostSettings() {
|
||||
const pp = this._postProcessing;
|
||||
pp._bloomPass.strength = this.settings.bloom;
|
||||
pp._bloomPass.radius = this.settings.bloomRadius;
|
||||
pp._bloomPass.threshold = this.settings.bloomThresh;
|
||||
pp._vignettePass.uniforms.uVignetteStrength.value = this.settings.vignette;
|
||||
pp._vignettePass.uniforms.uGrainStrength.value = this.settings.grain;
|
||||
pp._vignettePass.uniforms.uChromaticStrength.value = this.settings.chromatic;
|
||||
}
|
||||
|
||||
_applyColors() {
|
||||
const wc = new THREE.Color(this.settings.wireColor);
|
||||
const jc = new THREE.Color(this.settings.jointColor);
|
||||
this._figurePool.applyColors(wc, jc);
|
||||
this._mistPoints.material.uniforms.uColor.value.copy(wc);
|
||||
}

  // ---- WebSocket live data ----

  _autoDetectLive() {
    // Probe sensing server health on same origin, then common ports
    const host = window.location.hostname || 'localhost';
    const candidates = [
      window.location.origin,   // same origin (e.g. :3000)
      `http://${host}:8765`,    // default WS port
      `http://${host}:3000`,    // default HTTP port
    ];
    // Deduplicate
    const unique = [...new Set(candidates)];

    const tryNext = (i) => {
      if (i >= unique.length) {
        console.log('[Observatory] No sensing server detected, using demo mode');
        return;
      }
      const base = unique[i];
      fetch(`${base}/health`, { signal: AbortSignal.timeout(1500) })
        .then(r => r.ok ? r.json() : Promise.reject())
        .then(data => {
          if (data && data.status === 'ok') {
            const wsProto = base.startsWith('https') ? 'wss:' : 'ws:';
            const urlObj = new URL(base);
            const wsUrl = `${wsProto}//${urlObj.host}/ws/sensing`;
            console.log('[Observatory] Sensing server detected at', base, '→', wsUrl);
            this.settings.dataSource = 'ws';
            this.settings.wsUrl = wsUrl;
            this._connectWS(wsUrl);
          } else {
            tryNext(i + 1);
          }
        })
        .catch(() => tryNext(i + 1));
    };
    tryNext(0);
  }
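
  // The probe treats any JSON /health response carrying { status: 'ok' } as
  // a live sensing server. A hypothetical manual check against the default
  // port used above:
  //
  //   curl -s http://localhost:8765/health
  //   → {"status":"ok", ...}
  //
  // Note that when the page itself is served over HTTPS, the plain-http
  // fallback candidates are blocked as mixed content, so only the
  // same-origin probe can succeed there.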

  _connectWS(url) {
    this._disconnectWS();
    try {
      this._ws = new WebSocket(url);
      this._ws.onopen = () => {
        console.log('[Observatory] WebSocket connected');
        this._hud.updateSourceBadge('ws', this._ws);
      };
      this._ws.onmessage = (evt) => { try { this._liveData = JSON.parse(evt.data); } catch {} };
      this._ws.onclose = () => {
        console.log('[Observatory] WebSocket closed, falling back to demo');
        this._ws = null;
        this.settings.dataSource = 'demo';
        this._hud.updateSourceBadge('demo', null);
      };
      this._ws.onerror = () => {};
    } catch {}
  }

  _disconnectWS() {
    if (this._ws) { this._ws.close(); this._ws = null; }
    this._liveData = null;
  }
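
  // As written there is no automatic reconnection: once the socket closes,
  // the view falls back to demo mode and stays there until _autoDetectLive()
  // runs again on a fresh page load. Errors are silenced in onerror because
  // the browser fires onclose afterwards, where the fallback already lives.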

  // ========================================
  // ANIMATION LOOP
  // ========================================

  _animate() {
    requestAnimationFrame(() => this._animate());
    const dt = Math.min(this._clock.getDelta(), 0.1);
    const elapsed = this._clock.getElapsedTime();

    // Data source
    if (this.settings.dataSource === 'ws' && this._liveData) {
      this._currentData = this._liveData;
    } else {
      this._currentData = this._demoData.update(dt);
    }
    const data = this._currentData;

    // Updates
    this._nebula.update(dt, elapsed);
    this._figurePool.update(data, elapsed);
    this._scenarioProps.update(data, this._demoData.currentScenario);
    this._updateDotMatrixMist(data, elapsed);
    this._updateParticleTrail(data, dt, elapsed);
    this._updateWifiWaves(elapsed);
    this._updateSignalField(data);
    this._hud.updateHUD(data, this._demoData);
    this._hud.updateSparkline(data);

    // Router LED
    this._routerLed.material.opacity = 0.5 + 0.5 * Math.sin(elapsed * 8);
    this._routerLight.intensity = 0.3 + 0.2 * Math.sin(elapsed * 3);

    // Autopilot orbit (controls.update() runs once below for both modes)
    if (this._autopilot) {
      this._autoAngle += dt * this.settings.orbitSpeed;
      const r = 10;
      this._camera.position.set(
        Math.sin(this._autoAngle) * r,
        4.5 + Math.sin(this._autoAngle * 0.5),
        Math.cos(this._autoAngle) * r
      );
      this._controls.target.set(0, 1.2, 0);
    }
    this._controls.update();
    this._postProcessing.update(elapsed);
    this._postProcessing.render();
    this._updateFPS(dt);
  }
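
  // dt is clamped to 0.1 s so a backgrounded tab doesn't produce one huge
  // simulation step on resume; live WebSocket frames take precedence over
  // demo data only while _liveData is populated.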

  // ========================================
  // MIST & TRAIL
  // ========================================

  _updateDotMatrixMist(data, elapsed) {
    const persons = data?.persons || [];
    const isPresent = data?.classification?.presence || false;
    const pos = this._mistPoints.geometry.attributes.position;
    const alpha = this._mistPoints.geometry.attributes.alpha;

    if (!isPresent || persons.length === 0) {
      for (let i = 0; i < this._mistCount; i++) {
        alpha.array[i] = Math.max(0, alpha.array[i] - 0.02);
      }
      alpha.needsUpdate = true;
      return;
    }

    // Follow primary person
    const pp = persons[0].position || [0, 0, 0];
    const px = pp[0] || 0, pz = pp[2] || 0;
    const ms = persons[0].motion_score || 0;
    const pose = persons[0].pose || 'standing';
    const isLying = pose === 'lying' || pose === 'fallen';
    const bodyH = isLying ? 0.4 : 1.7;
    const bodyBaseY = isLying ? (pp[1] || 0) + 0.05 : 0.05;
    const spread = ms > 50 ? 0.6 : 0.4;

    for (let i = 0; i < this._mistCount; i++) {
      const drift = Math.sin(elapsed * 0.5 + i * 0.1) * 0.003;
      const angle = (i / this._mistCount) * Math.PI * 2 + elapsed * 0.1;
      const layerT = (i % 20) / 20;
      const layerY = bodyBaseY + layerT * bodyH;

      let bodyWidth;
      if (isLying) {
        bodyWidth = 0.25;
      } else {
        bodyWidth = layerT > 0.75 ? 0.15 : (layerT > 0.45 ? 0.25 : 0.18);
      }
      const r = bodyWidth * (0.5 + 0.5 * Math.sin(i * 1.7 + elapsed * 0.3)) * spread;

      const tx = px + Math.cos(angle + i * 0.3) * r + drift;
      const tz = pz + Math.sin(angle + i * 0.5) * r * 0.6;

      pos.array[i * 3] += (tx - pos.array[i * 3]) * 0.05;
      pos.array[i * 3 + 1] += (layerY - pos.array[i * 3 + 1]) * 0.05;
      pos.array[i * 3 + 2] += (tz - pos.array[i * 3 + 2]) * 0.05;

      const targetAlpha = 0.15 + Math.sin(elapsed * 2 + i * 0.5) * 0.08;
      alpha.array[i] += (targetAlpha - alpha.array[i]) * 0.08;
    }
    pos.needsUpdate = true;
    alpha.needsUpdate = true;
  }
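
  // The 0.05 / 0.08 factors above are per-frame exponential easing: each
  // frame closes that fraction of the remaining gap to the target, so at
  // ~60 FPS a point covers 1 - 0.95^60 ≈ 95% of the distance per second.
  // Note the step is not scaled by dt, so easing speed varies with frame
  // rate.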

  _updateParticleTrail(data, dt, elapsed) {
    if (this.settings.trail <= 0) return;
    const persons = data?.persons || [];
    const isPresent = data?.classification?.presence || false;
    const pos = this._trail.geometry.attributes.position;
    const ages = this._trail.geometry.attributes.age;

    for (let i = 0; i < this._trailCount; i++) {
      ages.array[i] = Math.min(1, ages.array[i] + dt * 0.8);
    }

    // Emit from all active persons
    if (isPresent && persons.length > 0) {
      this._trailTimer += dt;
      const ms = persons[0].motion_score || 0;
      const emitInterval = ms > 50 ? 0.02 : 0.08;   // seconds between bursts

      if (this._trailTimer >= emitInterval) {
        this._trailTimer = 0;
        for (const p of persons) {
          const pp = p.position || [0, 0, 0];
          const idx = this._trailHead;
          pos.array[idx * 3] = (pp[0] || 0) + (Math.random() - 0.5) * 0.15;
          pos.array[idx * 3 + 1] = Math.random() * 1.5 + 0.1;
          pos.array[idx * 3 + 2] = (pp[2] || 0) + (Math.random() - 0.5) * 0.15;
          ages.array[idx] = 0;
          this._trailHead = (this._trailHead + 1) % this._trailCount;
        }
      }
    }
    pos.needsUpdate = true;
    ages.needsUpdate = true;
  }
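
  // Trail particles live in a fixed-size ring buffer: _trailHead overwrites
  // the oldest slot on each emit, while age advances at 0.8/s (clamped to 1),
  // so a particle is fully faded roughly 1.25 s after emission. Faster
  // motion (motion_score > 50) shortens the emit interval from 80 ms to
  // 20 ms, densifying the trail.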

  // ---- WiFi Waves ----

  _updateWifiWaves(elapsed) {
    for (const w of this._wifiWaves) {
      const t = (elapsed * 0.8 + w.phase) % 4.5;
      const life = t / 4.5;
      w.mat.opacity = Math.max(0, this.settings.waves * 0.25 * (1 - life));
      const scale = 1 + life * 0.6;
      w.mesh.scale.set(scale, scale, scale);
      w.mesh.rotation.y = elapsed * 0.05;
    }
  }
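
  // Each wave repeats every 4.5 / 0.8 ≈ 5.6 s: over one cycle it grows to
  // 1.6× scale while opacity fades linearly from settings.waves * 0.25 down
  // to zero, and the per-wave phase offset keeps the rings staggered.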

  // ---- Signal Field ----

  _updateSignalField(data) {
    const field = data?.signal_field?.values;
    if (!field) return;
    const count = Math.min(field.length, 400);
    for (let i = 0; i < count; i++) {
      const v = field[i] || 0;
      let r, g, b;
      if (v < 0.3) { r = 0; g = v * 1.5; b = v * 0.3; }
      else if (v < 0.6) {
        const t = (v - 0.3) / 0.3;
        r = t * 0.3; g = 0.45 + t * 0.4; b = 0.09 - t * 0.05;
      } else {
        const t = (v - 0.6) / 0.4;
        r = 0.3 + t * 0.7; g = 0.85 - t * 0.2; b = 0.04;
      }
      this._fieldColors[i * 3] = r;
      this._fieldColors[i * 3 + 1] = g;
      this._fieldColors[i * 3 + 2] = b;
      this._fieldSizes[i] = 5 + v * 15;
    }
    this._fieldPoints.geometry.attributes.color.needsUpdate = true;
    this._fieldPoints.geometry.attributes.size.needsUpdate = true;
  }
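
  // Piecewise-linear color ramp for normalised field values v in [0, 1]:
  //   v < 0.3 : dim green            (0, 1.5v, 0.3v)
  //   v < 0.6 : green → yellow-green, ending at (0.3, 0.85, 0.04)
  //   v ≥ 0.6 : yellow-green → amber, ending at (1.0, 0.65, 0.04)
  // The segment endpoints coincide, so the ramp is continuous, and the dot
  // size grows linearly from 5 to 20 with intensity.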

  // ---- FPS ----

  _updateFPS(dt) {
    this._fpsFrames++;
    this._fpsTime += dt;
    if (this._fpsTime >= 1) {
      this._fpsValue = Math.round(this._fpsFrames / this._fpsTime);
      this._fpsFrames = 0;
      this._fpsTime = 0;
      if (this._showFps) {
        document.getElementById('fps-counter').textContent = `${this._fpsValue} FPS`;
      }
      this._adaptQuality();
    }
  }

  _adaptQuality() {
    let nl = this._qualityLevel;
    if (this._fpsValue < 25 && nl > 0) nl--;
    else if (this._fpsValue > 55 && nl < 2) nl++;
    if (nl !== this._qualityLevel) {
      this._qualityLevel = nl;
      this._nebula.setQuality(nl);
      this._postProcessing.setQuality(nl);
    }
  }
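
  // Quality moves at most one level per 1 s FPS sample, and the wide
  // 25–55 FPS dead band acts as hysteresis so the level doesn't oscillate
  // when the frame rate hovers near a threshold.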

  _onResize() {
    const w = window.innerWidth, h = window.innerHeight;
    this._camera.aspect = w / h;
    this._camera.updateProjectionMatrix();
    this._renderer.setSize(w, h);
    this._postProcessing.resize(w, h);
  }
}

new Observatory();