# ExLlamaV2: Guardrails and Fix Patterns
ExLlamaV2 is a specialized inference backend for LLaMA-family models built around optimized 4-bit quantization.
It delivers higher throughput and lower VRAM usage than generic backends, but introduces new risks in accuracy, schema drift, and numerical stability.
This page maps those failure modes to WFGY structural fixes with measurable acceptance targets.
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Embedding vs meaning: Embedding ≠ Semantic
- Chunk schema: Chunking Checklist
- Collapse and entropy: Logic Collapse, Entropy Collapse
- Ordering and boot issues: Bootstrap Ordering, Pre-deploy Collapse
## Core acceptance
- ΔS drift vs FP16 baseline ≤ 0.10
- Coverage ≥ 0.70 for target section
- λ convergent across 3 paraphrases and 2 seeds
- Latency improvement ≥ 25% with accuracy loss ≤ 5%
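
A minimal sketch of how this gate can be scripted, assuming per-question drift values (ΔS of each ExLlamaV2 answer against its FP16 counterpart) plus latency and accuracy numbers are already logged. Every name here is illustrative, not a WFGY API, and the λ-convergence check across 3 paraphrases and 2 seeds still needs its own probe.

```python
# Acceptance-gate sketch. Inputs are assumed to be logged already:
# drifts   - per-question ΔS of the ExLlamaV2 answer vs the FP16 answer
# coverage - fraction of the target section covered by citations
# lat_*    - mean latency per backend, acc_* - task accuracy per backend
def passes_acceptance(drifts, coverage, lat_fp16, lat_exl2,
                      acc_fp16, acc_exl2) -> bool:
    speedup = 1.0 - lat_exl2 / lat_fp16   # fraction of latency saved
    acc_loss = acc_fp16 - acc_exl2        # absolute accuracy drop
    return (max(drifts) <= 0.10 and coverage >= 0.70
            and speedup >= 0.25 and acc_loss <= 0.05)
```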
## Typical ExLlamaV2 breakpoints → exact fix
| Symptom | Likely cause | Open this |
|---|---|---|
| Text fluency high, citations missing | Schema loosened in quantized path | Data Contracts, Retrieval Traceability |
| Wrong snippet despite high similarity | Index mismatch after quantization | Embedding ≠ Semantic, Vectorstore Fragmentation |
| JSON breaks frequently | Quantization noise amplifies schema drift | Logic Collapse, Data Contracts |
| Long chain divergence after 20–40 steps | Numerical error accumulation | Entropy Collapse, Context Drift |
| Deployment mismatch | Torch vs ExLlama kernels version skew | Bootstrap Ordering, Pre-deploy Collapse |
## Fix in 60 seconds

1. **Measure ΔS**
   Run 20 QA pairs on the FP16 baseline vs ExLlamaV2. Acceptable drift ≤ 0.10 (a measurement sketch follows this list).
2. **Probe λ_observe**
   Increase retrieval k. If λ flips divergent, apply the BBAM schema lock.
3. **Apply the module**
   - Retrieval drift → BBMC + Retrieval Traceability
   - Reasoning collapse → BBCR + BBAM clamp
   - Long-chain instability → BBPF alternate paths
4. **Verify**
   Coverage ≥ 0.70, λ convergent, ΔS ≤ 0.10.
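
A minimal sketch of the step-1 measurement, assuming ΔS can be proxied as 1 − cosine similarity between paired answers. The sentence-transformers embedder, the model name, and the answer lists are stand-ins; the WFGY papers define ΔS more precisely, so treat this as a proxy.

```python
# ΔS-drift proxy: 1 - cosine similarity between each FP16 answer and the
# ExLlamaV2 answer for the same question. Assumed stand-ins: the
# sentence-transformers embedder and the two 20-item answer lists.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def delta_s(a: str, b: str) -> float:
    ea, eb = embedder.encode([a, b], convert_to_tensor=True)
    return 1.0 - util.cos_sim(ea, eb).item()

def max_drift(fp16_answers: list[str], exl2_answers: list[str]) -> float:
    return max(delta_s(a, b) for a, b in zip(fp16_answers, exl2_answers))

# Gate the run on the acceptance target from above:
# assert max_drift(fp16_answers, exl2_answers) <= 0.10
```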
## Minimal setup
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Path to an EXL2-quantized model directory; the bit rate (e.g. 4-bit)
# is baked in at conversion time, not passed at load time.
model_dir = "your-llama-model"

# Initialize ExLlamaV2: config, model, lazy cache, then autosplit load
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # splits weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(prompt="Hello, world!", max_new_tokens=128)
print(output)
```
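
The lazy cache plus `load_autosplit` pattern mirrors the upstream exllamav2 examples: weights are streamed in and split across the available GPUs as the cache is allocated, and `ExLlamaV2DynamicGenerator.generate` returns the completed text directly, so no separate decode step is needed.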
## Ops checklist
- Always compare ΔS/λ vs FP16 baseline before shipping
- Pin the exllamav2 kernel version to match your torch/cuBLAS build
- Log coverage and citation schema at runtime
- Guard JSON outputs with schema validators
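
For the last item, a runtime guard can be a few lines. This sketch uses the jsonschema package; the citation schema shown is an assumed example of a data contract, so swap in your own.

```python
# JSON output guard sketch. The schema below is an assumed example of a
# citation contract; replace it with your actual Data Contract.
import json
from jsonschema import validate, ValidationError

CITATION_SCHEMA = {
    "type": "object",
    "required": ["answer", "citations"],
    "properties": {
        "answer": {"type": "string"},
        "citations": {
            "type": "array",
            "minItems": 1,
            "items": {"type": "object", "required": ["source", "snippet"]},
        },
    },
}

def guard(raw_output: str) -> dict:
    """Parse and validate model output; raise instead of shipping bad JSON."""
    try:
        payload = json.loads(raw_output)
        validate(payload, CITATION_SCHEMA)
        return payload
    except (json.JSONDecodeError, ValidationError) as err:
        # On failure, re-ask the model or fall back to a BBCR re-grounding step.
        raise ValueError(f"schema guard tripped: {err}") from err
```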
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.