diff --git a/README.md b/README.md
index aceeb73b..884da158 100644
--- a/README.md
+++ b/README.md
@@ -112,20 +112,25 @@ RuView now generates **real-time 3D point clouds** by fusing camera depth + WiFi
 ```bash
 cd rust-port/wifi-densepose-rs
 cargo build --release -p wifi-densepose-pointcloud
-./target/release/ruview-pointcloud serve --port 9880
+./target/release/ruview-pointcloud serve --bind 127.0.0.1:9880
 # Open http://localhost:9880 for live 3D viewer
 ```
 
 **CLI commands:**
 ```bash
-ruview-pointcloud demo                          # synthetic demo
-ruview-pointcloud serve --port 9880             # live server + Three.js viewer
-ruview-pointcloud capture --output room.ply     # capture to PLY
-ruview-pointcloud train                         # depth calibration + DPO pairs
-ruview-pointcloud cameras                       # list available cameras
-ruview-pointcloud csi-test --count 100          # send test CSI frames
+ruview-pointcloud demo                               # synthetic demo
+ruview-pointcloud serve --bind 127.0.0.1:9880        # live server + Three.js viewer
+ruview-pointcloud capture --output room.ply          # capture to PLY
+ruview-pointcloud train                              # depth calibration + DPO pairs
+ruview-pointcloud cameras                            # list available cameras
+ruview-pointcloud csi-test --count 100               # send test CSI frames
+ruview-pointcloud fingerprint office --seconds 5     # record named CSI room fingerprint
 ```
 
+The HTTP/viewer server defaults to **loopback (`127.0.0.1`)** — exposing live camera/CSI/vitals on `0.0.0.0` is an explicit opt-in. The brain URL defaults to `http://127.0.0.1:9876` and is overridable via the `RUVIEW_BRAIN_URL` env var or the `--brain` flag on `serve`/`train`.
+
+The pose overlay currently uses an **amplitude-energy heuristic** (`heuristic_pose_from_amplitude`) rather than trained WiFlow inference — real ONNX/Candle inference is tracked as a follow-up.
+
 **Performance:** 22ms pipeline, 905 req/s API, 40K voxel room model from 20 frames.
 
 **Brain integration:** Spatial observations (motion, vitals, skeleton, occupancy) sync to the ruOS brain every 60 seconds for agent reasoning.
@@ -940,6 +945,8 @@ cargo add wifi-densepose-ruvector # RuVector v2.0.4 integration layer (ADR-017
 | [`wifi-densepose-api`](https://crates.io/crates/wifi-densepose-api) | REST + WebSocket API layer | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-api.svg)](https://crates.io/crates/wifi-densepose-api) |
 | [`wifi-densepose-config`](https://crates.io/crates/wifi-densepose-config) | Configuration management | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-config.svg)](https://crates.io/crates/wifi-densepose-config) |
 | [`wifi-densepose-db`](https://crates.io/crates/wifi-densepose-db) | Database persistence (PostgreSQL, SQLite, Redis) | -- | [![crates.io](https://img.shields.io/crates/v/wifi-densepose-db.svg)](https://crates.io/crates/wifi-densepose-db) |
+| `wifi-densepose-pointcloud` | Real-time dense point cloud from camera + WiFi CSI fusion (Three.js viewer, brain bridge). Workspace-only for now. | -- | — |
+| `wifi-densepose-geo` | Geospatial context (Sentinel-2 tiles, SRTM elevation, OSM, weather, night-mode). Workspace-only for now. | -- | — |
 
 All crates integrate with [RuVector v2.0.4](https://github.com/ruvnet/ruvector) — see [AI Backbone](#ai-backbone-ruvector) below.
 
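Reviewer note: since this hunk changes the out-of-the-box invocation (`--port` to `--bind`) and adds two override paths for the brain URL, a usage sketch may help. It assumes only the commands, flags, and defaults documented in the README hunk above; the fingerprint name `office` and the brain URL value are illustrative, not required values.

```bash
# Build the pointcloud binary (as in the README quick start).
cd rust-port/wifi-densepose-rs
cargo build --release -p wifi-densepose-pointcloud

# Per the docs, the brain URL can be set via env var or per-invocation flag;
# the two forms below should be equivalent. The URL is an example value.
export RUVIEW_BRAIN_URL=http://127.0.0.1:9876
./target/release/ruview-pointcloud serve --bind 127.0.0.1:9880
# ...or:
./target/release/ruview-pointcloud serve --bind 127.0.0.1:9880 --brain http://127.0.0.1:9876

# Exercise the pipeline without an ESP32, then record a named room fingerprint.
./target/release/ruview-pointcloud csi-test --count 100
./target/release/ruview-pointcloud fingerprint office --seconds 5
```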
diff --git a/docs/user-guide.md b/docs/user-guide.md
index b29f314e..5b8ef45b 100644
--- a/docs/user-guide.md
+++ b/docs/user-guide.md
@@ -547,12 +547,16 @@ RuView can generate real-time 3D point clouds by fusing camera depth estimation
 cd rust-port/wifi-densepose-rs
 cargo build --release -p wifi-densepose-pointcloud
 
-# Start the server (auto-detects camera + CSI)
-./target/release/ruview-pointcloud serve --port 9880
+# Start the server (auto-detects camera + CSI). Loopback-only by default.
+./target/release/ruview-pointcloud serve --bind 127.0.0.1:9880
 ```
 
 Open `http://localhost:9880` for the interactive Three.js 3D viewer.
 
+> **Security note.** The server exposes live camera, skeleton, vitals, and occupancy over HTTP. The `--bind` flag defaults to `127.0.0.1:9880` (loopback-only). Exposing on `0.0.0.0` or a LAN IP is opt-in — the server logs a warning when it does, but there is no auth/TLS layer. Put a reverse proxy in front if you need remote access.
+
+> **Brain URL.** Observations are POSTed to `http://127.0.0.1:9876` by default. Override via the `RUVIEW_BRAIN_URL` environment variable or the `--brain <url>` flag on `serve` / `train`.
+
 ### Sensors
 
 | Sensor | Auto-detected | Data |
@@ -565,17 +569,18 @@ Open `http://localhost:9880` for the interactive Three.js 3D viewer.
 
 | Command | Description |
 |---------|-------------|
-| `ruview-pointcloud serve --port 9880` | Start HTTP server + Three.js viewer |
+| `ruview-pointcloud serve --bind 127.0.0.1:9880` | Start HTTP server + Three.js viewer (loopback-only by default) |
 | `ruview-pointcloud demo` | Generate synthetic point cloud (no hardware needed) |
 | `ruview-pointcloud capture --output room.ply` | Capture single frame to PLY file |
 | `ruview-pointcloud cameras` | List available cameras |
-| `ruview-pointcloud train --data-dir ./data` | Depth calibration + occupancy training |
+| `ruview-pointcloud train --data-dir ./data [--brain URL]` | Depth calibration + occupancy training (writes under canonicalized `data-dir`; refuses `..` traversal) |
 | `ruview-pointcloud csi-test --count 100` | Send test CSI frames (no ESP32 needed) |
+| `ruview-pointcloud fingerprint <name> [--seconds 5]` | Record a named CSI room fingerprint for later matching |
 
 ### Pipeline Components
 
-1. **ADR-018 Parser** — Decodes ESP32 CSI binary frames from UDP, extracts I/Q subcarrier amplitudes and phases
-2. **WiFlow Pose** — 17 COCO keypoint estimation from CSI (loads `wiflow-v1.json`, 186K params)
+1. **ADR-018 Parser** — Decodes ESP32 CSI binary frames from UDP (magic `0xC5110001` raw CSI and `0xC5110006` feature state), extracts I/Q subcarrier amplitudes and phases. Lives in `parser.rs`; unit-tested against hand-rolled test vectors.
+2. **Pose (stub)** — 17 COCO keypoint *layout* generated by `heuristic_pose_from_amplitude` from CSI amplitude energy. This is **not** the trained WiFlow model — it is a placeholder so the viewer has a skeleton to render. Wiring to real Candle/ONNX inference from the `wifi-densepose-nn` crate is a planned follow-up.
 3. **Vital Signs** — Breathing rate from CSI phase analysis (peak counting on stable subcarrier)
 4. **Motion Detection** — CSI amplitude variance over 20 frames, triggers adaptive capture
 5. **RF Tomography** — Backprojection from per-node RSSI to 8×8×4 occupancy grid
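Reviewer note: the security note above says to put a reverse proxy in front for remote access but does not show what that looks like. One conventional alternative that keeps the server loopback-only is an SSH tunnel; the sketch below uses only standard OpenSSH and is not part of the ruview CLI. The host `user@sensor-host` is a placeholder.

```bash
# On the machine running the server: keep the default loopback-only bind.
./target/release/ruview-pointcloud serve --bind 127.0.0.1:9880

# On your workstation: forward a local port to the server's loopback
# interface over SSH, so the viewer never listens on 0.0.0.0.
ssh -N -L 9880:127.0.0.1:9880 user@sensor-host
# Then open http://localhost:9880 locally; traffic rides the encrypted tunnel.
```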