mirror of
https://github.com/ruvnet/RuView.git
synced 2026-04-28 05:59:32 +00:00
5-phase training pipeline using ruvllm (Rust-native, no PyTorch): 1. Contrastive pretraining (triplet + InfoNCE, 5 triplet strategies) 2. Task head training (presence, activity, vitals via SONA) 3. Per-node LoRA refinement (rank-4, room-specific adaptation) 4. TurboQuant quantization (2/4/8-bit, 6-8x compression) 5. EWC consolidation (prevent catastrophic forgetting) Exports: SafeTensors, HuggingFace config, RVF, per-node LoRA, quantized Validated: 249 triplets, 37,775 emb/s, 100% presence accuracy on test data Target: <5 min training on M4 Pro, <10ms inference on Pi Zero Co-Authored-By: claude-flow <ruv@ruv.net> |
||
|---|---|---|
| .. | ||
| adr | ||
| ddd | ||
| edge-modules | ||
| huggingface | ||
| research | ||
| tutorials | ||
| build-guide.md | ||
| security-audit-wasm-edge-vendor.md | ||
| user-guide.md | ||
| wifi-mat-user-guide.md | ||
| WITNESS-LOG-028.md | ||