Mirror of https://github.com/ruvnet/RuView.git (synced 2026-04-28 05:59:32 +00:00)
* feat: dual-modal WASM browser pose estimation demo (ADR-058)

  Live webcam video + WiFi CSI fusion for real-time pose estimation. Two parallel CNN pipelines (ruvector-cnn-wasm) with attention-weighted fusion and dynamic confidence gating. Three modes: Dual, Video-only, CSI-only. Includes a pre-built WASM package (~52KB) for browser deployment.

  - ADR-058: Dual-modal architecture design
  - ui/pose-fusion.html: Main demo page with dark theme UI
  - 7 JS modules: video-capture, csi-simulator, cnn-embedder, fusion-engine, pose-decoder, canvas-renderer, main orchestrator
  - Pre-built ruvector-cnn-wasm WASM package for browser
  - CSI heatmap, embedding space visualization, latency metrics
  - WebSocket support for live ESP32 CSI data
  - Navigation link added to main dashboard

  Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: motion-responsive skeleton + through-wall CSI tracking

  - Pose decoder now uses a per-cell motion grid to track actual arm/head positions: raising arms moves the skeleton's arms, and the head follows lateral movement
  - Motion grid (10x8 cells) tracks intensity per body zone: head, left/right arm upper/mid, legs
  - Through-wall mode: when the person exits the frame, CSI maintains presence with slow decay (~10s) and the skeleton drifts in the exit direction
  - CSI simulator persists sensing after video loss; ghost pose renders with decreasing confidence
  - Reduced temporal smoothing (0.45) for faster response to movement

  Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: video fills available space + correct WASM path resolution

  - Remove fixed aspect-ratio and max-height from the video panel so it fills the available viewport space without scrolling
  - Grid uses a 1fr row for the content area, overflow:hidden on the main grid
  - Fix WASM path: resolve relative to the JS module file using import.meta.url instead of a hardcoded ./pkg/, which resolved incorrectly on gh-pages
  - Responsive: mobile still gets the aspect-ratio constraint

  Co-Authored-By: claude-flow <ruv@ruv.net>

* feat: live ESP32 CSI pipeline + auto-connect WebSocket

  - Add auto-connect to the local sensing server WebSocket (ws://localhost:8765)
  - Demo shows "Live ESP32" when connected to real CSI data
  - Add build_firmware.ps1 for native Windows ESP-IDF builds (no Docker)
  - Add read_serial.ps1 for the ESP32 serial monitor

  Pipeline: ESP32 → UDP:5005 → sensing-server → WS:8765 → browser demo

  Co-Authored-By: claude-flow <ruv@ruv.net>

* docs: add ADR-059 live ESP32 CSI pipeline + update README with demo links

  - ADR-059: Documents the end-to-end ESP32 → sensing server → browser pipeline
  - README: Add dual-modal pose fusion demo link, update ADR count to 49
  - References issue #245

  Co-Authored-By: claude-flow <ruv@ruv.net>

* feat: RSSI visualization, RuVector attention WASM, cache-bust fixes

  - Add animated RSSI Signal Strength panel with sparkline history
  - Fix RuVector WasmMultiHeadAttention retptr calling convention
  - Wire up RuVector Multi-Head + Flash Attention in the CNN embedder
  - Add ambient temporal drift to the CSI simulator for visible heatmap animation
  - Fix embedding space projection (sparse projection replaces a cancelling sum)
  - Add auto-scaling to the embedding space renderer
  - Add cache busters (?v=4) to all ES module imports to prevent stale caches
  - Add diagnostic logging for module version verification
  - Add RSSI tracking with quality labels and color-coded dBm display
  - Includes ruvector-attention-wasm v2.0.5 browser ESM wrapper

  Co-Authored-By: claude-flow <ruv@ruv.net>

* feat: 26-keypoint dexterous pose + full RuVector attention pipeline

  Pose Decoder (17 → 26 keypoints):
  - Add finger approximations: thumb, index, pinky per hand (6 new)
  - Add toe tips: left/right foot index (2 new)
  - Add neck keypoint (1 new)
  - Hand openness driven by arm motion intensity
  - Finger positions computed from wrist-elbow axis angles

  CNN Embedder (full RuVector WASM pipeline):
  - Stage 1: Multi-Head Attention (global spatial reasoning)
  - Stage 2: Hyperbolic Attention (hierarchical body-part tree)
  - Stage 3: MoE Attention (3 experts: upper/lower/extremities, top-2)
  - Blended 40/30/30 weighting → final embedding projection

  Canvas Renderer:
  - Magenta finger joints with distinct glow
  - Cyan toe tips
  - White neck keypoint
  - Thinner limb lines for hand/foot connections
  - Joint count shown in overlay label

  CSI Simulator:
  - Skip synthetic person state when live ESP32 connected
  - Only simulate CSI data in demo mode (was already correct)

  Embedding Space:
  - Fixed projection: sparse 8-dim projection replaces a cancelling sum
  - Auto-scaling normalizes point spread to fill canvas

  Cache busters bumped to v=5 on all imports.

  Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: centroid-based pose tracking for responsive limb movement

  Rewrites the pose decoder from intensity-based to position-based tracking:
  - Arms now track toward the motion centroid in each body zone
  - Elbow/wrist positions computed along the shoulder→centroid vector
  - Legs track toward lower-body zone centroids
  - Smoothing reduced from 0.45 to 0.25 for responsiveness
  - Zone centroids blend 30% old / 70% new each frame

  6 body zones with overlapping coverage:
  - Head (top 20%, center cols)
  - Left/Right Arm (rows 10-60%, outer cols)
  - Torso (rows 15-55%, center cols)
  - Left/Right Leg (rows 50-100%, half cols each)

  Hand openness now driven by arm spread distance + raise amount.
  Cache busters v=6.

  Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: remove duplicate lAnkleX/rAnkleX declarations in pose-decoder

  A stale code block from the old intensity-based tracking was left behind, re-declaring variables already defined by the centroid-based tracking.

  Co-Authored-By: claude-flow <ruv@ruv.net>

* feat(demo): wire all 6 RuVector WASM attention mechanisms into pose fusion

  - Add WasmLinearAttention and WasmLocalGlobalAttention to the browser ESM wrapper
  - Add 6 WASM utility functions (batch_normalize, pairwise_distances, etc.)
  - Extend CnnEmbedder to a 6-stage pipeline: Flash → MHA → Hyperbolic → Linear → MoE → L+G
  - Use log-energy softmax blending across all 6 stages
  - Wire WASM cosine_similarity and normalize into FusionEngine
  - Add RuVector pipeline stats panel to the UI (energy, refinement, pose impact)
  - Compute embedding-to-joint mapping stats without modifying joint positions
  - Center camera prompt with flexbox layout
  - Add cache busters v=12

  Co-Authored-By: claude-flow <ruv@ruv.net>
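A minimal sketch of how this module is driven from the demo's frame loop (the loop itself, the import path, and the logging are illustrative; only the CsiSimulator API is taken from the file below):

    import { CsiSimulator } from './csi-simulator.js';

    const sim = new CsiSimulator({ subcarriers: 52, timeWindow: 56 });
    // Try the local sensing server first; fall back to synthetic demo CSI.
    const live = await sim.connectLive('ws://localhost:8765');
    console.log(live ? 'Live ESP32 CSI' : 'Demo mode');

    function tick(tMs) {
      const { amplitude, phase, snr } = sim.nextFrame(tMs / 1000);
      const rgb = sim.buildPseudoImage(56); // 56x56x3 pseudo-image for the CNN embedder
      // ...render sim.getHeatmapData() and sim.rssiDbm, feed rgb downstream...
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);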
357 lines
12 KiB
JavaScript
/**
 * CSI Simulator — Generates realistic WiFi Channel State Information data.
 *
 * In live mode, connects to the sensing server via WebSocket.
 * In demo mode, generates synthetic CSI that correlates with detected motion.
 *
 * Outputs: 3-channel pseudo-image (amplitude, phase, temporal diff)
 * matching the ADR-018 frame format expectations.
 */
export class CsiSimulator {
  static VERSION = 'v4-drift'; // Cache-bust verification

  constructor(opts = {}) {
    this.subcarriers = opts.subcarriers || 52; // 802.11n HT20
    this.timeWindow = opts.timeWindow || 56;   // frames in sliding window
    this.mode = 'demo'; // 'demo' | 'live'
    this.ws = null;

    // Circular buffer for CSI frames
    this.amplitudeBuffer = [];
    this.phaseBuffer = [];
    this.frameCount = 0;

    // Noise parameters
    this._rng = this._mulberry32(opts.seed || 7);
    this._noiseState = new Float32Array(this.subcarriers);
    this._baseAmplitude = new Float32Array(this.subcarriers);
    this._basePhase = new Float32Array(this.subcarriers);

    // Initialize base CSI profile (empty room)
    for (let i = 0; i < this.subcarriers; i++) {
      this._baseAmplitude[i] = 0.5 + 0.3 * Math.sin(i * 0.12);
      this._basePhase[i] = (i / this.subcarriers) * Math.PI * 2;
    }

    // RSSI tracking
    this.rssiDbm = -70; // default mid-range
    this._rssiTarget = -70;

    // Person influence (updated from video motion)
    this.personPresence = 0;
    this.personX = 0.5;
    this.personY = 0.5;
    this.personMotion = 0;
  }

  /**
   * Connect to the live sensing server WebSocket.
   * @param {string} url - WebSocket URL (e.g. ws://localhost:3030/ws/csi)
   * @returns {Promise<boolean>} true once connected; false on error or after the 3 s timeout
   */
  async connectLive(url) {
    return new Promise((resolve) => {
      try {
        this.ws = new WebSocket(url);
        this.ws.binaryType = 'arraybuffer';
        this.ws.onmessage = (evt) => this._handleLiveFrame(evt.data);
        this.ws.onopen = () => { this.mode = 'live'; resolve(true); };
        this.ws.onerror = () => resolve(false);
        this.ws.onclose = () => { this.mode = 'demo'; };
        // Timeout after 3s
        setTimeout(() => { if (this.mode !== 'live') resolve(false); }, 3000);
      } catch {
        resolve(false);
      }
    });
  }

  disconnect() {
    if (this.ws) { this.ws.close(); this.ws = null; }
    this.mode = 'demo';
  }

  get isLive() { return this.mode === 'live'; }
  /**
   * Update person state from video detection (for correlated demo data).
   * When the person exits the frame, CSI maintains presence with slow decay
   * (simulating through-wall sensing capability).
   */
  updatePersonState(presence, x, y, motion) {
    // Don't override real CSI sensing with synthetic video-derived state
    if (this.mode === 'live') return;

    if (presence > 0.1) {
      // Person detected in video — update CSI state directly
      this.personPresence = presence;
      this.personX = x;
      this.personY = y;
      this.personMotion = motion;
      this._lastSeenTime = performance.now();
      this._lastSeenX = x;
      this._lastSeenY = y;
    } else if (this._lastSeenTime) {
      // Person NOT in video — CSI "through-wall" persistence
      const elapsed = (performance.now() - this._lastSeenTime) / 1000;
      // Presence decays ~15% per second, fading out over ~6.7 s
      const decayRate = 0.15;
      this.personPresence = Math.max(0, 1.0 - elapsed * decayRate);
      // Position holds at the last seen location (person behind wall)
      this.personX = this._lastSeenX;
      this.personY = this._lastSeenY;
      this.personMotion = Math.max(0, motion * 0.5 + this.personPresence * 0.2);

      if (this.personPresence < 0.05) {
        this._lastSeenTime = null;
      }
    } else {
      this.personPresence = 0;
      this.personMotion = 0;
    }
  }
  /**
   * Generate next CSI frame (demo mode) or return latest live frame
   * @param {number} elapsed - Time in seconds
   * @returns {{ amplitude: Float32Array, phase: Float32Array, snr: number }}
   */
  nextFrame(elapsed) {
    const amp = new Float32Array(this.subcarriers);
    const phase = new Float32Array(this.subcarriers);

    if (this.mode === 'live' && this._liveAmplitude) {
      amp.set(this._liveAmplitude);
      phase.set(this._livePhase);
    } else {
      this._generateDemoFrame(amp, phase, elapsed);
    }

    // Push to circular buffer
    this.amplitudeBuffer.push(new Float32Array(amp));
    this.phaseBuffer.push(new Float32Array(phase));
    if (this.amplitudeBuffer.length > this.timeWindow) {
      this.amplitudeBuffer.shift();
      this.phaseBuffer.shift();
    }

    // RSSI: smooth toward target (demo mode generates synthetic RSSI)
    if (this.mode === 'demo') {
      // Simulate RSSI based on person presence and slow drift
      this._rssiTarget = -55 - 25 * (1 - this.personPresence) + Math.sin(elapsed * 0.3) * 3;
    }
    this.rssiDbm += (this._rssiTarget - this.rssiDbm) * 0.1;

    // SNR estimate
    let signalPower = 0, noisePower = 0;
    for (let i = 0; i < this.subcarriers; i++) {
      signalPower += amp[i] * amp[i];
      noisePower += this._noiseState[i] * this._noiseState[i];
    }
    const snr = noisePower > 0 ? 10 * Math.log10(signalPower / noisePower) : 30;

    this.frameCount++;
    return { amplitude: amp, phase, snr: Math.max(0, Math.min(40, snr)) };
  }
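  // Note on the RSSI smoother above: it is a first-order IIR filter with
  // alpha = 0.1, so each frame closes 10% of the gap to _rssiTarget and a
  // step change is ~65% absorbed after ten frames (1 - 0.9^10 ≈ 0.65).
  // If called at ~30 fps (an assumption), that is roughly a third of a
  // second of display lag.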
  /**
   * Build 3-channel pseudo-image for CNN input
   * @param {number} targetSize - Output image dimension (square)
   * @returns {Uint8Array} RGB data (targetSize * targetSize * 3)
   */
  buildPseudoImage(targetSize = 56) {
    const buf = this.amplitudeBuffer;
    const pBuf = this.phaseBuffer;
    const frames = buf.length;
    if (frames < 2) {
      return new Uint8Array(targetSize * targetSize * 3);
    }

    const rgb = new Uint8Array(targetSize * targetSize * 3);

    for (let y = 0; y < targetSize; y++) {
      const fi = Math.min(Math.floor(y / targetSize * frames), frames - 1);
      for (let x = 0; x < targetSize; x++) {
        const si = Math.min(Math.floor(x / targetSize * this.subcarriers), this.subcarriers - 1);
        const idx = (y * targetSize + x) * 3;

        // R: Amplitude (normalized to 0-255)
        const ampVal = buf[fi][si];
        rgb[idx] = Math.min(255, Math.max(0, Math.floor(ampVal * 255)));

        // G: Phase (wrapped to 0-255)
        const phaseVal = (pBuf[fi][si] % (2 * Math.PI) + 2 * Math.PI) % (2 * Math.PI);
        rgb[idx + 1] = Math.floor(phaseVal / (2 * Math.PI) * 255);

        // B: Temporal difference
        if (fi > 0) {
          const diff = Math.abs(buf[fi][si] - buf[fi - 1][si]);
          rgb[idx + 2] = Math.min(255, Math.floor(diff * 500));
        }
      }
    }

    return rgb;
  }
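  // Example (illustrative; `sim` is a CsiSimulator instance and `ctx` an
  // assumed CanvasRenderingContext2D, neither part of this module):
  // previewing the packed RGB pseudo-image by expanding it to RGBA ImageData:
  //
  //   const rgb = sim.buildPseudoImage(56);
  //   const img = new ImageData(56, 56);
  //   for (let i = 0; i < 56 * 56; i++) {
  //     img.data.set(rgb.subarray(i * 3, i * 3 + 3), i * 4);
  //     img.data[i * 4 + 3] = 255; // opaque alpha
  //   }
  //   ctx.putImageData(img, 0, 0);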
  /**
   * Get heatmap data for visualization
   * @returns {{ data: Float32Array, width: number, height: number }}
   */
  getHeatmapData() {
    const frames = this.amplitudeBuffer.length;
    const w = this.subcarriers;
    const h = Math.min(frames, this.timeWindow);
    const data = new Float32Array(w * h);
    for (let y = 0; y < h; y++) {
      const fi = frames - h + y;
      if (fi >= 0 && fi < frames) {
        for (let x = 0; x < w; x++) {
          data[y * w + x] = this.amplitudeBuffer[fi][x];
        }
      }
    }
    return { data, width: w, height: h };
  }

  // === Private ===
  _generateDemoFrame(amp, phase, elapsed) {
    const rng = this._rng;
    const presence = this.personPresence;
    const motion = this.personMotion;
    const px = this.personX;

    for (let i = 0; i < this.subcarriers; i++) {
      // Base CSI profile (frequency-selective channel)
      let a = this._baseAmplitude[i];
      let p = this._basePhase[i] + elapsed * 0.05;

      // Environmental noise (correlated across subcarriers)
      this._noiseState[i] = 0.95 * this._noiseState[i] + 0.05 * (rng() * 2 - 1) * 0.03;
      a += this._noiseState[i];

      // Ambient temporal drift (multipath fading even in empty room)
      a += 0.06 * Math.sin(elapsed * 0.7 + i * 0.25)
         + 0.04 * Math.sin(elapsed * 1.3 - i * 0.18)
         + 0.03 * Math.cos(elapsed * 2.1 + i * 0.4);

      // Person-induced CSI perturbation
      if (presence > 0.1) {
        // Subcarrier-dependent body reflection (Fresnel zone model)
        const freqOffset = (i - this.subcarriers * px) / (this.subcarriers * 0.3);
        const bodyReflection = presence * 0.25 * Math.exp(-freqOffset * freqOffset);

        // Motion causes amplitude fluctuation
        const motionEffect = motion * 0.15 * Math.sin(elapsed * 3.5 + i * 0.3);

        // Breathing modulation (0.2-0.3 Hz)
        const breathing = presence * 0.02 * Math.sin(elapsed * 1.5 + i * 0.05);

        a += bodyReflection + motionEffect + breathing;
        p += presence * 0.4 * Math.sin(elapsed * 2.1 + i * 0.15);
      }

      amp[i] = Math.max(0, Math.min(1, a));
      phase[i] = p;
    }
  }
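  // Scale of the body-reflection term above: a Gaussian bump of height
  // presence * 0.25 centered at subcarrier ≈ subcarriers * personX, falling
  // to ~1/e at ±15.6 subcarriers (±30% of a 52-subcarrier band), so moving
  // personX sweeps the perturbation peak across the spectrum.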
  _handleLiveFrame(data) {
    // Handle JSON text frames from the sensing server
    if (typeof data === 'string') {
      try {
        const msg = JSON.parse(data);
        this._handleJsonFrame(msg);
      } catch (_) { /* ignore malformed JSON */ }
      return;
    }

    // Handle binary ArrayBuffer frames (ADR-018 format)
    if (!(data instanceof ArrayBuffer)) return;
    const view = new DataView(data);
    // Check ADR-018 magic: 0xC5110001
    if (data.byteLength < 20) return;
    const magic = view.getUint32(0, true);
    if (magic !== 0xC5110001) return;

    const numSub = Math.min(view.getUint16(8, true), this.subcarriers);
    this._liveAmplitude = new Float32Array(this.subcarriers);
    this._livePhase = new Float32Array(this.subcarriers);

    const headerSize = 20;
    for (let i = 0; i < numSub && (headerSize + i * 4 + 3) < data.byteLength; i++) {
      const real = view.getInt16(headerSize + i * 4, true);
      const imag = view.getInt16(headerSize + i * 4 + 2, true);
      this._liveAmplitude[i] = Math.sqrt(real * real + imag * imag) / 2048;
      this._livePhase[i] = Math.atan2(imag, real);
    }
  }
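  // Binary layout consumed by _handleLiveFrame above (little-endian; only
  // the fields this parser reads are listed, the rest of the 20-byte header
  // is skipped):
  //   offset  0  u32  magic = 0xC5110001
  //   offset  8  u16  subcarrier count
  //   offset 20  i16 pairs: (real, imag) per subcarrier, 4 bytes each
  // Amplitude = sqrt(real^2 + imag^2) / 2048, a rough 0-1 scaling for the
  // ESP32's int16 CSI values; phase = atan2(imag, real).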
  _handleJsonFrame(msg) {
    // Sensing server sends: { type: "sensing_update", nodes: [{ amplitude: [...], subcarrier_count }], classification, features }
    this._liveAmplitude = new Float32Array(this.subcarriers);
    this._livePhase = new Float32Array(this.subcarriers);

    // Extract amplitude from sensing_update node data
    const node = (msg.nodes && msg.nodes[0]) || msg;
    const ampArr = node.amplitude || msg.amplitude;
    if (ampArr && Array.isArray(ampArr)) {
      const n = Math.min(ampArr.length, this.subcarriers);
      // Server sends raw amplitude (already magnitude), normalize to 0-1
      let maxAmp = 0;
      for (let i = 0; i < n; i++) maxAmp = Math.max(maxAmp, Math.abs(ampArr[i]));
      const scale = maxAmp > 0 ? 1.0 / maxAmp : 1.0;
      for (let i = 0; i < n; i++) {
        this._liveAmplitude[i] = Math.abs(ampArr[i]) * scale;
      }
    }

    // Phase from node (if available)
    const phaseArr = node.phase || msg.phase;
    if (phaseArr && Array.isArray(phaseArr)) {
      const n = Math.min(phaseArr.length, this.subcarriers);
      for (let i = 0; i < n; i++) this._livePhase[i] = phaseArr[i];
    } else if (ampArr) {
      // Synthesize phase from amplitude variation (Hilbert-like estimate)
      for (let i = 1; i < this.subcarriers; i++) {
        this._livePhase[i] = this._livePhase[i - 1] + (this._liveAmplitude[i] - this._liveAmplitude[i - 1]) * Math.PI;
      }
    }

    // Handle raw I/Q pairs
    const iq = node.iq || msg.iq;
    if (iq && Array.isArray(iq)) {
      const n = Math.min(iq.length / 2, this.subcarriers);
      for (let i = 0; i < n; i++) {
        const real = iq[i * 2], imag = iq[i * 2 + 1];
        this._liveAmplitude[i] = Math.sqrt(real * real + imag * imag) / 2048;
        this._livePhase[i] = Math.atan2(imag, real);
      }
    }

    // Extract RSSI from node data
    if (typeof node.rssi_dbm === 'number') {
      this._rssiTarget = node.rssi_dbm;
    } else if (msg.features && typeof msg.features.mean_rssi === 'number') {
      this._rssiTarget = msg.features.mean_rssi;
    }

    // Update presence from server classification
    const cls = msg.classification;
    if (cls) {
      if (typeof cls.confidence === 'number') {
        this.personPresence = cls.presence ? cls.confidence : 0;
      }
    }
  }
  // Mulberry32: a tiny seeded 32-bit PRNG, used so demo noise is deterministic
  _mulberry32(seed) {
    return function() {
      let t = (seed += 0x6D2B79F5);
      t = Math.imul(t ^ (t >>> 15), t | 1);
      t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
      return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    };
  }
}
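// Test sketch (illustrative, not part of the module): hand-building a minimal
// binary frame that _handleLiveFrame() accepts, e.g. for exercising the live
// path without an ESP32. The I/Q values are arbitrary test data.
//
//   function makeTestFrame(numSub = 52) {
//     const buf = new ArrayBuffer(20 + numSub * 4);
//     const view = new DataView(buf);
//     view.setUint32(0, 0xC5110001, true); // ADR-018 magic
//     view.setUint16(8, numSub, true);     // subcarrier count
//     for (let i = 0; i < numSub; i++) {
//       view.setInt16(20 + i * 4, 1024, true);     // I (real)
//       view.setInt16(20 + i * 4 + 2, -512, true); // Q (imag)
//     }
//     return buf;
//   }
//   sim._handleLiveFrame(makeTestFrame()); // populates sim._liveAmplitude / _livePhase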