Ruview/ui/pose-fusion/pkg/ruvector-attention/ruvector_attention_wasm.d.ts
rUv 7c1351fd5d
feat(demo): wire all 6 RuVector WASM attention mechanisms into pose fusion
* feat: dual-modal WASM browser pose estimation demo (ADR-058)

Live webcam video + WiFi CSI fusion for real-time pose estimation.
Two parallel CNN pipelines (ruvector-cnn-wasm) with attention-weighted
fusion and dynamic confidence gating. Three modes: Dual, Video-only,
CSI-only. Includes pre-built WASM package (~52KB) for browser deployment.
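A minimal sketch of the fusion step, assuming the fused embedding is a confidence-weighted average with a fixed gate threshold (both are assumptions; the actual fusion-engine module may differ):

    // Hypothetical sketch: attention-weighted fusion with confidence gating.
    function fuse(videoEmb: Float32Array, csiEmb: Float32Array,
                  videoConf: number, csiConf: number): Float32Array {
      const wv = videoConf < 0.2 ? 0 : videoConf;   // gate threshold 0.2 is illustrative
      const wc = csiConf < 0.2 ? 0 : csiConf;
      const sum = (wv + wc) || 1;                   // avoid divide-by-zero when both gate out
      const out = new Float32Array(videoEmb.length);
      for (let i = 0; i < out.length; i++) {
        out[i] = (wv * videoEmb[i] + wc * csiEmb[i]) / sum;
      }
      return out;
    }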

- ADR-058: Dual-modal architecture design
- ui/pose-fusion.html: Main demo page with dark theme UI
- 7 JS modules: video-capture, csi-simulator, cnn-embedder, fusion-engine,
  pose-decoder, canvas-renderer, main orchestrator
- Pre-built ruvector-cnn-wasm WASM package for browser
- CSI heatmap, embedding space visualization, latency metrics
- WebSocket support for live ESP32 CSI data
- Navigation link added to main dashboard

Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: motion-responsive skeleton + through-wall CSI tracking

- Pose decoder now uses per-cell motion grid to track actual arm/head
  positions — raising arms moves the skeleton's arms, head follows
  lateral movement
- Motion grid (10x8 cells) tracks intensity per body zone: head,
  left/right arm upper/mid, legs
- Through-wall mode: when the person exits the frame, CSI maintains presence
  with a slow decay (~10s) and the skeleton drifts in the exit direction
  (see the sketch after this list)
- CSI simulator persists sensing after video loss, ghost pose renders
  with decreasing confidence
- Reduced temporal smoothing (0.45) for faster response to movement
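
Illustrative sketch of the through-wall decay (the ~10 s time constant is from the note above; the function name and state shape are assumptions):

    // Hypothetical sketch: presence decays exponentially once the person leaves the frame.
    const DECAY_SECONDS = 10;
    function decayPresence(presence: number, dtSeconds: number): number {
      return presence * Math.exp(-dtSeconds / DECAY_SECONDS);   // ~63% gone after 10 s
    }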

Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: video fills available space + correct WASM path resolution

- Remove fixed aspect-ratio and max-height from video panel so it
  fills the available viewport space without scrolling
- Grid uses 1fr row for content area, overflow:hidden on main grid
- Fix WASM path: resolve it relative to the JS module file via import.meta.url
  instead of a hardcoded ./pkg/ prefix, which resolved incorrectly on gh-pages
  (see the sketch after this list)
- Responsive: mobile still gets aspect-ratio constraint
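
For reference, this kind of module-relative resolution looks like the following (the file name and loader call are illustrative):

    // Resolve the WASM binary relative to this JS module instead of the page URL,
    // so the path stays correct when the demo is served from a sub-path (gh-pages).
    const wasmUrl = new URL('./pkg/ruvector_cnn_wasm_bg.wasm', import.meta.url);
    await initWasm(wasmUrl);   // initWasm: hypothetical wasm-bindgen loader entry point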

Co-Authored-By: claude-flow <ruv@ruv.net>

* feat: live ESP32 CSI pipeline + auto-connect WebSocket

- Add auto-connect to the local sensing server WebSocket (ws://localhost:8765);
  see the sketch below
- Demo shows "Live ESP32" when connected to real CSI data
- Add build_firmware.ps1 for native Windows ESP-IDF builds (no Docker)
- Add read_serial.ps1 for ESP32 serial monitor

Pipeline: ESP32 → UDP:5005 → sensing-server → WS:8765 → browser demo
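
A minimal sketch of the auto-connect (helper names and the CSI frame format are assumptions):

    // Hypothetical sketch: try the local sensing server first, fall back to simulated CSI.
    const ws = new WebSocket('ws://localhost:8765');
    ws.onopen = () => setSourceLabel('Live ESP32');              // setSourceLabel is illustrative
    ws.onmessage = (ev) => handleCsiFrame(JSON.parse(ev.data));  // frame shape is an assumption
    ws.onerror = () => ws.close();                               // stay on the CSI simulator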

Co-Authored-By: claude-flow <ruv@ruv.net>

* docs: add ADR-059 live ESP32 CSI pipeline + update README with demo links

- ADR-059: Documents end-to-end ESP32 → sensing server → browser pipeline
- README: Add dual-modal pose fusion demo link, update ADR count to 49
- References issue #245

Co-Authored-By: claude-flow <ruv@ruv.net>

* feat: RSSI visualization, RuVector attention WASM, cache-bust fixes

- Add animated RSSI Signal Strength panel with sparkline history
- Fix RuVector WasmMultiHeadAttention retptr calling convention
- Wire up RuVector Multi-Head + Flash Attention in CNN embedder
- Add ambient temporal drift to CSI simulator for visible heatmap animation
- Fix embedding space projection (sparse projection replaces cancelling sum)
- Add auto-scaling to embedding space renderer
- Add cache busters (?v=4) to all ES module imports to prevent stale caches
- Add diagnostic logging for module version verification
- Add RSSI tracking with quality labels and color-coded dBm display
- Includes ruvector-attention-wasm v2.0.5 browser ESM wrapper

Co-Authored-By: claude-flow <ruv@ruv.net>

* feat: 26-keypoint dexterous pose + full RuVector attention pipeline

Pose Decoder (17 → 26 keypoints):
- Add finger approximations: thumb, index, pinky per hand (6 new)
- Add toe tips: left/right foot index (2 new)
- Add neck keypoint (1 new)
- Hand openness driven by arm motion intensity
- Finger positions computed from wrist-elbow axis angles

CNN Embedder (full RuVector WASM pipeline):
- Stage 1: Multi-Head Attention (global spatial reasoning)
- Stage 2: Hyperbolic Attention (hierarchical body-part tree)
- Stage 3: MoE Attention (3 experts: upper/lower/extremities, top-2)
- Blended 40/30/30 weighting → final embedding projection

Canvas Renderer:
- Magenta finger joints with distinct glow
- Cyan toe tips
- White neck keypoint
- Thinner limb lines for hand/foot connections
- Joint count shown in overlay label

CSI Simulator:
- Skip synthetic person state when live ESP32 connected
- Only simulate CSI data in demo mode (was already correct)

Embedding Space:
- Fixed projection: sparse 8-dim projection replaces cancelling sum
- Auto-scaling normalizes point spread to fill canvas

Cache busters bumped to v=5 on all imports.

Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: centroid-based pose tracking for responsive limb movement

Rewrites pose decoder from intensity-based to position-based tracking:
- Arms now track toward motion centroid in each body zone
- Elbow/wrist positions computed along shoulder→centroid vector
- Legs track toward lower-body zone centroids
- Smoothing reduced from 0.45 to 0.25 for responsiveness
- Zone centroids blend 30% old / 70% new each frame (sketched below)

6 body zones with overlapping coverage:
- Head (top 20%, center cols)
- Left/Right Arm (rows 10-60%, outer cols)
- Torso (rows 15-55%, center cols)
- Left/Right Leg (rows 50-100%, half cols each)

Hand openness now driven by arm spread distance + raise amount.
Cache busters v=6.
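
A minimal sketch of the centroid tracking described above (names and the 0.5 elbow fraction are assumptions):

    // Hypothetical sketch: per-zone centroid smoothing (30% old / 70% new) and
    // elbow/wrist placement along the shoulder→centroid vector.
    function updateZoneCentroid(zone: { cx: number; cy: number }, newCx: number, newCy: number) {
      zone.cx = 0.3 * zone.cx + 0.7 * newCx;
      zone.cy = 0.3 * zone.cy + 0.7 * newCy;
    }
    function placeArm(shoulder: [number, number], centroid: [number, number]) {
      const dx = centroid[0] - shoulder[0], dy = centroid[1] - shoulder[1];
      const elbow: [number, number] = [shoulder[0] + 0.5 * dx, shoulder[1] + 0.5 * dy];
      const wrist: [number, number] = [centroid[0], centroid[1]];
      return { elbow, wrist };
    }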

Co-Authored-By: claude-flow <ruv@ruv.net>

* fix: remove duplicate lAnkleX/rAnkleX declarations in pose-decoder

A stale code block from the old intensity-based tracking was left behind,
re-declaring variables already defined by the centroid-based tracking.

Co-Authored-By: claude-flow <ruv@ruv.net>

* feat(demo): wire all 6 RuVector WASM attention mechanisms into pose fusion

- Add WasmLinearAttention and WasmLocalGlobalAttention to browser ESM wrapper
- Add 6 WASM utility functions (batch_normalize, pairwise_distances, etc.)
- Extend CnnEmbedder to 6-stage pipeline: Flash → MHA → Hyperbolic → Linear → MoE → L+G
- Use log-energy softmax blending across all 6 stages (sketched after this list)
- Wire WASM cosine_similarity and normalize into FusionEngine
- Add RuVector pipeline stats panel to UI (energy, refinement, pose impact)
- Compute embedding-to-joint mapping stats without modifying joint positions
- Center camera prompt with flexbox layout
- Add cache busters v=12
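
A minimal sketch of the log-energy softmax blend (using the L2 norm as the energy measure is an assumption):

    // Hypothetical sketch: blend the six stage outputs with softmax weights over log-energies.
    function blendStages(stages: Float32Array[]): Float32Array {
      const l2 = (v: Float32Array) => Math.sqrt(v.reduce((a, x) => a + x * x, 0));
      const logE = stages.map(s => Math.log(1e-6 + l2(s)));
      const maxLogE = Math.max(...logE);
      const exps = logE.map(x => Math.exp(x - maxLogE));        // numerically stable softmax
      const total = exps.reduce((a, b) => a + b, 0);
      const weights = exps.map(e => e / total);
      const out = new Float32Array(stages[0].length);
      stages.forEach((s, k) => {
        for (let i = 0; i < out.length; i++) out[i] += weights[k] * s[i];
      });
      return out;
    }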

Co-Authored-By: claude-flow <ruv@ruv.net>
2026-03-12 20:59:57 -04:00

/* tslint:disable */
/* eslint-disable */
/**
* Adam optimizer
*/
export class WasmAdam {
free(): void;
[Symbol.dispose](): void;
/**
* Create a new Adam optimizer
*
* # Arguments
* * `param_count` - Number of parameters
* * `learning_rate` - Learning rate
*/
constructor(param_count: number, learning_rate: number);
/**
* Reset optimizer state
*/
reset(): void;
/**
* Perform optimization step
*
* # Arguments
* * `params` - Current parameter values (will be updated in-place)
* * `gradients` - Gradient values
*/
step(params: Float32Array, gradients: Float32Array): void;
/**
* Get current learning rate
*/
learning_rate: number;
}
/**
* AdamW optimizer (Adam with decoupled weight decay)
*/
export class WasmAdamW {
free(): void;
[Symbol.dispose](): void;
/**
* Create a new AdamW optimizer
*
* # Arguments
* * `param_count` - Number of parameters
* * `learning_rate` - Learning rate
* * `weight_decay` - Weight decay coefficient
*/
constructor(param_count: number, learning_rate: number, weight_decay: number);
/**
* Reset optimizer state
*/
reset(): void;
/**
* Perform optimization step with weight decay
*/
step(params: Float32Array, gradients: Float32Array): void;
/**
* Get current learning rate
*/
learning_rate: number;
/**
* Get weight decay
*/
readonly weight_decay: number;
}
/**
* Flash attention mechanism
*/
export class WasmFlashAttention {
free(): void;
[Symbol.dispose](): void;
/**
* Compute flash attention
*/
compute(query: Float32Array, keys: any, values: any): Float32Array;
/**
* Create a new flash attention instance
*
* # Arguments
* * `dim` - Embedding dimension
* * `block_size` - Block size for tiling
*/
constructor(dim: number, block_size: number);
}
/**
* Hyperbolic attention mechanism
*/
export class WasmHyperbolicAttention {
free(): void;
[Symbol.dispose](): void;
/**
* Compute hyperbolic attention
*/
compute(query: Float32Array, keys: any, values: any): Float32Array;
/**
* Create a new hyperbolic attention instance
*
* # Arguments
* * `dim` - Embedding dimension
* * `curvature` - Hyperbolic curvature parameter
*/
constructor(dim: number, curvature: number);
/**
* Get the curvature
*/
readonly curvature: number;
}
/**
* InfoNCE contrastive loss for training
*/
export class WasmInfoNCELoss {
free(): void;
[Symbol.dispose](): void;
/**
* Compute InfoNCE loss
*
* # Arguments
* * `anchor` - Anchor embedding
* * `positive` - Positive example embedding
* * `negatives` - Array of negative example embeddings
*/
compute(anchor: Float32Array, positive: Float32Array, negatives: any): number;
/**
* Create a new InfoNCE loss instance
*
* # Arguments
* * `temperature` - Temperature parameter for softmax
*/
constructor(temperature: number);
}
/**
* Learning rate scheduler
*/
export class WasmLRScheduler {
free(): void;
[Symbol.dispose](): void;
/**
* Get learning rate for current step
*/
get_lr(): number;
/**
* Create a new learning rate scheduler with warmup and cosine decay
*
* # Arguments
* * `initial_lr` - Initial learning rate
* * `warmup_steps` - Number of warmup steps
* * `total_steps` - Total training steps
*/
constructor(initial_lr: number, warmup_steps: number, total_steps: number);
/**
* Reset scheduler
*/
reset(): void;
/**
* Advance to next step
*/
step(): void;
}
/**
* Linear attention (Performer-style)
*/
export class WasmLinearAttention {
free(): void;
[Symbol.dispose](): void;
/**
* Compute linear attention
*/
compute(query: Float32Array, keys: any, values: any): Float32Array;
/**
* Create a new linear attention instance
*
* # Arguments
* * `dim` - Embedding dimension
* * `num_features` - Number of random features
*/
constructor(dim: number, num_features: number);
}
/**
* Local-global attention mechanism
*/
export class WasmLocalGlobalAttention {
free(): void;
[Symbol.dispose](): void;
/**
* Compute local-global attention
*/
compute(query: Float32Array, keys: any, values: any): Float32Array;
/**
* Create a new local-global attention instance
*
* # Arguments
* * `dim` - Embedding dimension
* * `local_window` - Size of local attention window
* * `global_tokens` - Number of global attention tokens
*/
constructor(dim: number, local_window: number, global_tokens: number);
}
/**
* Mixture of Experts (MoE) attention
*/
export class WasmMoEAttention {
free(): void;
[Symbol.dispose](): void;
/**
* Compute MoE attention
*/
compute(query: Float32Array, keys: any, values: any): Float32Array;
/**
* Create a new MoE attention instance
*
* # Arguments
* * `dim` - Embedding dimension
* * `num_experts` - Number of expert attention mechanisms
* * `top_k` - Number of experts to use per query
*/
constructor(dim: number, num_experts: number, top_k: number);
}
/**
* Multi-head attention mechanism
*/
export class WasmMultiHeadAttention {
free(): void;
[Symbol.dispose](): void;
/**
* Compute multi-head attention
*/
compute(query: Float32Array, keys: any, values: any): Float32Array;
/**
* Create a new multi-head attention instance
*
* # Arguments
* * `dim` - Embedding dimension
* * `num_heads` - Number of attention heads
*/
constructor(dim: number, num_heads: number);
/**
* Get the dimension
*/
readonly dim: number;
/**
* Get the number of heads
*/
readonly num_heads: number;
}
/**
* SGD optimizer with momentum
*/
export class WasmSGD {
free(): void;
[Symbol.dispose](): void;
/**
* Create a new SGD optimizer
*
* # Arguments
* * `param_count` - Number of parameters
* * `learning_rate` - Learning rate
* * `momentum` - Momentum coefficient (default: 0)
*/
constructor(param_count: number, learning_rate: number, momentum?: number | null);
/**
* Reset optimizer state
*/
reset(): void;
/**
* Perform optimization step
*/
step(params: Float32Array, gradients: Float32Array): void;
/**
* Get current learning rate
*/
learning_rate: number;
}
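/*
 * Illustrative optimizer/scheduler usage sketch (not part of the generated
 * bindings); the parameter count, learning rate, and step counts are arbitrary
 * example values, and filling the gradient buffer is elided.
 *
 *   const params    = new Float32Array(128);
 *   const grads     = new Float32Array(128);
 *   const adam      = new WasmAdam(128, 1e-3);
 *   const scheduler = new WasmLRScheduler(1e-3, 100, 1000);
 *   for (let step = 0; step < 1000; step++) {
 *     // ...fill `grads`, e.g. from WasmInfoNCELoss.compute()...
 *     adam.learning_rate = scheduler.get_lr();   // writable property (see above)
 *     adam.step(params, grads);                  // updates `params` in place
 *     scheduler.step();
 *   }
 *   adam.free();
 *   scheduler.free();
 */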
/**
* Compute attention weights from scores
*/
export function attention_weights(scores: Float32Array, temperature?: number | null): void;
/**
* Get information about available attention mechanisms
*/
export function available_mechanisms(): any;
/**
* Batch normalize vectors
*/
export function batch_normalize(vectors: any, epsilon?: number | null): Float32Array;
/**
* Compute cosine similarity between two vectors
*/
export function cosine_similarity(a: Float32Array, b: Float32Array): number;
/**
* Initialize the WASM module with panic hook
*/
export function init(): void;
/**
* Compute L2 norm of a vector
*/
export function l2_norm(vec: Float32Array): number;
/**
* Log a message to the browser console
*/
export function log(message: string): void;
/**
* Log an error to the browser console
*/
export function log_error(message: string): void;
/**
* Normalize a vector to unit length
*/
export function normalize(vec: Float32Array): void;
/**
* Compute pairwise distances between vectors
*/
export function pairwise_distances(vectors: any): Float32Array;
/**
* Generate random orthogonal matrix (for initialization)
*/
export function random_orthogonal_matrix(dim: number): Float32Array;
/**
* Compute scaled dot-product attention
*
* # Arguments
* * `query` - Query vector as Float32Array
* * `keys` - Array of key vectors
* * `values` - Array of value vectors
* * `scale` - Optional scaling factor (defaults to 1/sqrt(dim))
*/
export function scaled_dot_attention(query: Float32Array, keys: any, values: any, scale?: number | null): Float32Array;
/**
* Compute softmax of a vector
*/
export function softmax(vec: Float32Array): void;
/**
* Get the version of the ruvector-attention-wasm crate
*/
export function version(): string;
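/*
 * Illustrative usage sketch for the attention API (not part of the generated
 * bindings). Module loading goes through the browser ESM wrapper mentioned in
 * the commit message and is not shown here; the dimension (64), head count (4),
 * and fill values are arbitrary, and passing the key/value sets as arrays of
 * Float32Array through the `any` parameters is an assumption.
 *
 *   init();                                          // install the panic hook
 *   const mha = new WasmMultiHeadAttention(64, 4);   // dim = 64, num_heads = 4
 *   const query  = new Float32Array(64).fill(0.1);
 *   const keys   = [new Float32Array(64).fill(0.2), new Float32Array(64).fill(0.3)];
 *   const values = [new Float32Array(64).fill(1.0), new Float32Array(64).fill(2.0)];
 *   const out = mha.compute(query, keys, values);    // Float32Array of length 64
 *   normalize(out);                                  // in-place unit-length normalization
 *   const sim = cosine_similarity(out, query);
 *   log(`version ${version()}, similarity ${sim.toFixed(3)}`);
 *   mha.free();                                      // release WASM-side memory when done
 */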