
# ExLLaMA: Guardrails and Fix Patterns

## 🧭 Quick Return to Map

You are in a sub-page of **LocalDeploy_Inference**. To reorient, go back to the LocalDeploy_Inference index.

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.

ExLLaMA (and its successor ExLlamaV2, plus the ExLlama_HF loader) is a highly optimized CUDA inference backend used under text-generation-webui and in custom pipelines. It can run very large models (65B+) on limited VRAM, but it often shows instability when sharded, quantized, or paired with retrieval layers. This guide stabilizes ExLLaMA with structural guardrails.


## Open these first


## Core acceptance

- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 against anchor snippet
- λ convergent across 3 paraphrases × 2 seeds
- E_resonance flat across quantization modes (int4, int8)
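
A minimal sketch of checking the first two gates follows. The embedder, the ΔS proxy (1 minus cosine similarity), and the token-overlap coverage measure are illustrative assumptions, not the canonical WFGY probes; the λ and E_resonance gates require rerunning the same check across paraphrases, seeds, and quant modes.

```python
# Acceptance-gate sketch. Assumptions: ΔS is approximated as
# 1 - cosine similarity of embeddings, and coverage as token overlap
# with the anchor snippet. Swap in your own embedder and probes.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def delta_s(question: str, retrieved: str) -> float:
    """ΔS proxy: 1 - cosine similarity between the two embeddings."""
    q, r = model.encode([question, retrieved])
    return 1.0 - float(util.cos_sim(q, r))

def coverage(answer: str, anchor: str) -> float:
    """Fraction of anchor-snippet tokens that survive into the answer."""
    anchor_tokens = set(anchor.lower().split())
    return len(anchor_tokens & set(answer.lower().split())) / max(len(anchor_tokens), 1)

def accepts(question: str, retrieved: str, answer: str, anchor: str) -> bool:
    """True only if both numeric gates above pass."""
    return delta_s(question, retrieved) <= 0.45 and coverage(answer, anchor) >= 0.70
```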

## Common ExLLaMA breakpoints

| Symptom | Cause | Fix |
|---|---|---|
| First run slower or unstable than warm cache | Lazy CUDA graph compile, missing warm-up fence | bootstrap-ordering.md |
| ΔS spikes when using quantized weights | Tokenizer drift vs chunked embeddings | embedding-vs-semantic.md, chunking-checklist.md |
| Memory corruption after long runs | Fragmented KV cache, no eviction strategy | context-drift.md, entropy-collapse.md |
| API or WebUI tool schema breaks | JSON schema not enforced at inference layer | prompt-injection.md, logic-collapse.md |
| Multi-shard mismatch on large models | Rank-order desync across GPUs | deployment-deadlock.md |
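
The first row is the most common cold-start failure: the first production query pays for lazy CUDA graph and kernel compilation. A minimal warm-up fence looks like the sketch below; `generate` is a hypothetical stand-in for whatever entry point your ExLLaMA wrapper exposes, not a real exllama symbol.

```python
# Warm-up fence sketch. `generate` is a hypothetical stand-in for your
# backend's generation entry point; wire in the real call yourself.
import threading

_warmed = threading.Event()

def warm_up(generate) -> None:
    """Push a ~10-token dummy batch through once so CUDA graphs and
    kernels compile before the first real query arrives."""
    if not _warmed.is_set():
        generate("warm-up", max_new_tokens=10)
        _warmed.set()

def serve(generate, prompt: str, **kwargs) -> str:
    """Fence: every production query goes through the one-time warm-up."""
    warm_up(generate)
    return generate(prompt, **kwargs)
```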

## Fix in 60 seconds

1. Always warm up: run a ~10-token dummy batch before production queries (sketched above).
2. Schema lock: enforce `snippet_id`, `section_id`, and `tokens` in every trace (see the sketch after this list).
3. λ probe: measure stability under two quant modes (int4 vs int8).
4. Cache rotation: reset the KV cache every N tokens (e.g., 8192) to prevent drift (also in the sketch below).
5. Verify: coverage ≥ 0.70 and ΔS ≤ 0.45 across three paraphrase probes.
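
Steps 2 and 4 as one sketch. The required field names come straight from the checklist; `reset_cache` is a hypothetical hook for whatever cache-clear call your backend provides.

```python
# Sketch of steps 2 and 4. REQUIRED_TRACE_FIELDS mirrors the checklist;
# `reset_cache` is a hypothetical hook for your backend's cache clear.
REQUIRED_TRACE_FIELDS = {"snippet_id", "section_id", "tokens"}

def lock_schema(trace: dict) -> dict:
    """Step 2: reject any trace missing the required fields."""
    missing = REQUIRED_TRACE_FIELDS - trace.keys()
    if missing:
        raise ValueError(f"trace missing required fields: {sorted(missing)}")
    return trace

class CacheRotator:
    """Step 4: reset the KV cache every `limit` generated tokens."""

    def __init__(self, reset_cache, limit: int = 8192):
        self.reset_cache = reset_cache
        self.limit = limit
        self.generated = 0

    def note(self, new_tokens: int) -> None:
        """Call after each generation; rotates the cache past the limit."""
        self.generated += new_tokens
        if self.generated >= self.limit:
            self.reset_cache()
            self.generated = 0
```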

## Diagnostic prompt (copy-paste)

```txt
I am running ExLLaMA backend with quant={mode}, shards={n}, extensions={list}.
Question: "{user_question}"

Please output:
- ΔS vs retrieved snippet
- λ over 3 paraphrases × 2 seeds
- Quantization impact (int4 vs int8)
- Cache stability (tokens until drift)
- Minimal WFGY fix page if ΔS ≥ 0.60
```
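
If you drive this probe from a script rather than by hand, the template fills with plain `str.format`; the values below are illustrative placeholders only.

```python
# Fill the diagnostic template programmatically; all values here are
# illustrative placeholders, not recommendations.
DIAGNOSTIC = '''I am running ExLLaMA backend with quant={mode}, shards={n}, extensions={list}.
Question: "{user_question}"

Please output:
- ΔS vs retrieved snippet
- λ over 3 paraphrases × 2 seeds
- Quantization impact (int4 vs int8)
- Cache stability (tokens until drift)
- Minimal WFGY fix page if ΔS ≥ 0.60
'''

prompt = DIAGNOSTIC.format(
    mode="int4",
    n=2,
    list="none",
    user_question="Why does coverage drop after 8k generated tokens?",
)
print(prompt)
```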

## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

## Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.