
ExLLaMA: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of LocalDeploy_Inference.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

ExLLaMA (along with its successor ExLLaMAv2 and the ExLLaMA-HF wrapper) is a highly optimized CUDA inference backend used under TextGen WebUI and in custom pipelines. It can run very large models (65B+) on limited VRAM, but it often becomes unstable when sharded, quantized, or paired with retrieval layers. This guide stabilizes ExLLaMA with structural guardrails.


Open these first


Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 against anchor snippet
  • λ convergent across 3 paraphrases × 2 seeds
  • E_resonance flat across quantization modes (int4, int8)
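
If you wire these gates into a test harness, they reduce to a few comparisons. Below is a minimal sketch of the first three gates; it assumes you already compute ΔS, coverage, and the per-run λ state with your own WFGY instrumentation, and every name in it is a hypothetical placeholder.

```python
from typing import Iterable

def accept(
    d_s: float,                    # ΔS(question, retrieved), computed upstream
    coverage: float,               # coverage against the anchor snippet
    lambda_states: Iterable[str],  # λ state for each paraphrase × seed run (3 × 2 = 6)
) -> bool:
    """True only when the acceptance gates listed above all pass."""
    return (
        d_s <= 0.45
        and coverage >= 0.70
        and all(state == "convergent" for state in lambda_states)
    )
```

The E_resonance gate is omitted here because it compares whole runs across quantization modes rather than scoring a single run.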

Common ExLLaMA breakpoints

| Symptom | Cause | Fix |
|---|---|---|
| First run slower or unstable than warm cache | Lazy CUDA graph compile, missing warm-up fence | bootstrap-ordering.md |
| ΔS spikes when using quantized weights | Tokenizer drift vs chunked embeddings | embedding-vs-semantic.md, chunking-checklist.md |
| Memory corruption after long runs | Fragmented KV cache, no eviction strategy | context-drift.md, entropy-collapse.md |
| API or WebUI tool schema breaks | JSON schema not enforced at inference layer | prompt-injection.md, logic-collapse.md |
| Multi-shard mismatch on large models | Rank-order desync across GPUs | deployment-deadlock.md |

Fix in 60 seconds

  1. Always warm up: run a 10-token dummy batch before production queries (see the sketch after this list).
  2. Schema lock: enforce snippet_id, section_id, tokens in every trace.
  3. λ probe: measure stability under 2 quant modes (int4 vs int8).
  4. Cache rotation: reset the KV cache every N tokens (e.g., 8192) to prevent drift.
  5. Verify: coverage ≥ 0.70 and ΔS ≤ 0.45 across three paraphrase probes.
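
Here is a minimal sketch of steps 1, 2, and 4. The `generate(prompt, max_new_tokens=...)` callable and the `reset_cache()` hook are placeholders, since the real entry points depend on which ExLLaMA wrapper you run (TextGen WebUI, ExLLaMA-HF, or a custom loader); treat this as the shape of the guardrails, not a drop-in implementation.

```python
REQUIRED_TRACE_KEYS = {"snippet_id", "section_id", "tokens"}
CACHE_ROTATE_EVERY = 8192  # reset the KV cache after this many generated tokens

def warm_up(generate):
    """Step 1: force lazy CUDA graph compilation before production traffic."""
    generate("warm-up", max_new_tokens=10)

def check_trace(trace: dict) -> None:
    """Step 2: reject any trace that is missing the locked schema fields."""
    missing = REQUIRED_TRACE_KEYS - trace.keys()
    if missing:
        raise ValueError(f"trace missing required keys: {missing}")

class RotatingRunner:
    """Step 4: reset the KV cache before crossing the rotation boundary."""

    def __init__(self, generate, reset_cache):
        self.generate = generate
        self.reset_cache = reset_cache
        self.tokens_since_reset = 0

    def run(self, prompt: str, max_new_tokens: int) -> str:
        # Rotate proactively so a single generation is never cut in half.
        if self.tokens_since_reset + max_new_tokens > CACHE_ROTATE_EVERY:
            self.reset_cache()
            self.tokens_since_reset = 0
        out = self.generate(prompt, max_new_tokens=max_new_tokens)
        self.tokens_since_reset += max_new_tokens
        return out
```

Rotating before the boundary trades a little cache reuse for a hard cap on drift; if your wrapper can snapshot and re-ingest context, you can rotate mid-stream instead.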

Diagnostic prompt (copy-paste)

I am running ExLLaMA backend with quant={mode}, shards={n}, extensions={list}.  
Question: "{user_question}"  

Please output:
- ΔS vs retrieved snippet
- λ over 3 paraphrases × 2 seeds
- Quantization impact (int4 vs int8)
- Cache stability (tokens until drift)
- Minimal WFGY fix page if ΔS ≥ 0.60
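
If you run this probe from a script rather than pasting it by hand, a small formatter keeps the placeholders consistent. The template below simply mirrors the prompt above; the function name is illustrative.

```python
DIAGNOSTIC_TEMPLATE = """\
I am running ExLLaMA backend with quant={mode}, shards={n}, extensions={extensions}.
Question: "{question}"

Please output:
- ΔS vs retrieved snippet
- λ over 3 paraphrases × 2 seeds
- Quantization impact (int4 vs int8)
- Cache stability (tokens until drift)
- Minimal WFGY fix page if ΔS ≥ 0.60
"""

def diagnostic_prompt(mode: str, n: int, extensions: list[str], question: str) -> str:
    # Fill the same placeholders used in the copy-paste version above.
    return DIAGNOSTIC_TEMPLATE.format(
        mode=mode, n=n, extensions=", ".join(extensions), question=question
    )
```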

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.