# ExLLaMA: Guardrails and Fix Patterns
## 🧭 Quick Return to Map

You are on a sub-page of LocalDeploy_Inference. To reorient, go back here:

- LocalDeploy_Inference — on-prem deployment and model inference
- WFGY Global Fix Map — the main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as one desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
ExLLaMA, together with its successor ExLLaMA2 and the ExLLaMA-HF loader, is a highly optimized CUDA inference backend used under TextGen WebUI and in custom pipelines. It can run very large models (65B+) on limited VRAM, but it often becomes unstable when sharded, quantized, or paired with retrieval layers. This guide stabilizes ExLLaMA with structural guardrails.
## Open these first

- Visual recovery map: RAG Architecture & Recovery
- Retrieval and eval knobs: Retrieval Playbook
- Boot and ordering: bootstrap-ordering.md, deployment-deadlock.md, predeploy-collapse.md
- Snippet and trace schema: retrieval-traceability.md, data-contracts.md
## Core acceptance

- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 against the anchor snippet
- λ convergent across 3 paraphrases × 2 seeds
- E_resonance flat across quantization modes (int4, int8)
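A minimal sketch of the first two gates, assuming the WFGY convention ΔS = 1 − cos(question, snippet) over embeddings. The `embed` stand-in and the token-overlap coverage metric are illustrative placeholders, not part of ExLLaMA or the WFGY engine:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedder (hashed bag-of-words) so the sketch runs;
    swap in your real embedding model in production."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    return vec

def delta_s(question: str, snippet: str) -> float:
    """ΔS = 1 - cosine similarity, per the WFGY convention."""
    q, s = embed(question), embed(snippet)
    denom = float(np.linalg.norm(q) * np.linalg.norm(s)) or 1e-9
    return 1.0 - float(np.dot(q, s)) / denom

def coverage(answer: str, anchor_snippet: str) -> float:
    """Fraction of anchor-snippet tokens that reappear in the answer."""
    anchor = set(anchor_snippet.lower().split())
    hits = anchor & set(answer.lower().split())
    return len(hits) / max(len(anchor), 1)

def accept(question: str, snippet: str, answer: str) -> bool:
    """Core acceptance gate: ΔS ≤ 0.45 and coverage ≥ 0.70."""
    return delta_s(question, snippet) <= 0.45 and coverage(answer, snippet) >= 0.70
```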
## Common ExLLaMA breakpoints

| Symptom | Cause | Fix |
|---|---|---|
| First run slower or less stable than warm-cache runs | Lazy CUDA graph compile, missing warm-up fence | bootstrap-ordering.md (warm-up sketch below) |
| ΔS spikes when using quantized weights | Tokenizer drift vs chunked embeddings | embedding-vs-semantic.md, chunking-checklist.md |
| Memory corruption after long runs | Fragmented KV cache, no eviction strategy | context-drift.md, entropy-collapse.md |
| API or WebUI tool schema breaks | JSON schema not enforced at the inference layer | prompt-injection.md, logic-collapse.md |
| Multi-shard mismatch on large models | Rank-order desync across GPUs | deployment-deadlock.md |
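For the warm-up fence, a minimal sketch against the ExLlamaV2 Python API. Class and method names follow recent exllamav2 releases, so treat them as assumptions to verify against your installed version; the model path is a placeholder:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config("/path/to/quantized-model")  # placeholder model dir
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                  # shard across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Warm-up fence: compile kernels, then push a ~10-token dummy batch
# before any production query reaches the backend.
generator.warmup()
_ = generator.generate_simple("warm-up probe", ExLlamaV2Sampler.Settings(), 10)
```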
## Fix in 60 seconds

- Always warm up: run a ~10-token dummy batch before production queries.
- Schema lock: enforce snippet_id, section_id, and tokens in every trace (see the wrapper sketch after this list).
- λ probe: measure stability across two quant modes (int4 vs int8).
- Cache rotation: reset the KV cache every N tokens (e.g., 8192) to prevent drift.
- Verify: coverage ≥ 0.70 and ΔS ≤ 0.45 across three paraphrase probes.
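The schema lock and cache rotation can live in one thin wrapper around whatever generate call you use. A minimal sketch, assuming a `generate_fn(prompt, max_new_tokens)` callable and a cache-reset hook; the required field names come from data-contracts.md, everything else is illustrative:

```python
REQUIRED_TRACE_FIELDS = {"snippet_id", "section_id", "tokens"}

def check_trace(trace: dict) -> None:
    """Schema lock: reject any retrieval trace missing required fields."""
    missing = REQUIRED_TRACE_FIELDS - trace.keys()
    if missing:
        raise ValueError(f"trace violates the data contract, missing: {sorted(missing)}")

class RotatingBackend:
    """Wraps a generate call and resets the KV cache every `rotate_every`
    generated tokens to bound long-run fragmentation drift."""

    def __init__(self, generate_fn, reset_cache_fn, rotate_every: int = 8192):
        self.generate_fn = generate_fn        # e.g., a closure over generate_simple
        self.reset_cache_fn = reset_cache_fn  # e.g., zero the cache's sequence length
        self.rotate_every = rotate_every
        self.tokens_since_reset = 0

    def __call__(self, prompt: str, max_new_tokens: int, trace: dict) -> str:
        check_trace(trace)                    # schema lock before any generation
        if self.tokens_since_reset + max_new_tokens > self.rotate_every:
            self.reset_cache_fn()             # cache rotation fence
            self.tokens_since_reset = 0
        self.tokens_since_reset += max_new_tokens
        return self.generate_fn(prompt, max_new_tokens)
```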
## Diagnostic prompt (copy-paste)

```txt
I am running the ExLLaMA backend with quant={mode}, shards={n}, extensions={list}.

Question: "{user_question}"

Please output:
- ΔS vs the retrieved snippet
- λ over 3 paraphrases × 2 seeds
- Quantization impact (int4 vs int8)
- Cache stability (tokens until drift)
- The minimal WFGY fix page if ΔS ≥ 0.60
```
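To make the λ probe reproducible outside a chat session, you can script the 3 paraphrases × 2 seeds grid. A sketch, assuming a seedable `answer_fn(question, seed=...)` callable and the `delta_s` helper from the acceptance sketch above; the convergence cutoffs are illustrative, not the formal WFGY definition of λ:

```python
from itertools import product
from statistics import pstdev

def lambda_probe(paraphrases, retrieved, answer_fn, delta_s_fn, seeds=(0, 1)):
    """Run the 3-paraphrase x 2-seed grid and report the ΔS spread."""
    scores = []
    for question, seed in product(paraphrases, seeds):
        answer = answer_fn(question, seed=seed)
        scores.append(delta_s_fn(answer, retrieved))
    spread = pstdev(scores)
    return {
        "scores": scores,
        "spread": spread,
        # Illustrative cutoffs: a tight spread with every run under the
        # ΔS ceiling is treated as "λ convergent" for this sketch.
        "convergent": spread < 0.05 and max(scores) <= 0.45,
    }
```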
## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning and semantic modulations | View → |
| Benchmark vs GPT-5 | Stress-test GPT-5 with the full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: see the Hall of Fame.

⭐ WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.