
AutoGPTQ: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of LocalDeploy_Inference.
To reorient, return to the LocalDeploy_Inference index.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

AutoGPTQ is a widely used library for quantizing large language models into lower-bit formats (INT4/INT8) for efficient local inference.
This page maps the common failure modes when deploying AutoGPTQ and provides structural fixes with measurable targets.


Open these first


Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the target section
  • λ remains convergent across three paraphrases and two seeds
  • E_resonance stable across quantized vs full-precision runs
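
These targets assume a concrete ΔS probe. Below is a minimal sketch, assuming ΔS is computed as 1 minus the cosine similarity of sentence embeddings; the embedding model named here is a placeholder, not a WFGY requirement.

from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

def delta_s(question: str, retrieved: str) -> float:
    # ΔS = 1 - cosine similarity between the two embeddings
    q, r = embedder.encode([question, retrieved])
    return 1.0 - float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)))

score = delta_s("your question", "retrieved snippet")
print(f"ΔS = {score:.3f}  ({'pass' if score <= 0.45 else 'fail'})")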

Typical AutoGPTQ breakpoints and the right fix

| Symptom | Likely cause | Fix |
|---|---|---|
| Model loads but outputs garbage tokens | Misaligned quantization config (bits, group size) | Rebuild with the correct group size; validate with ΔS probes |
| GPU memory still OOM despite quantization | Offloading not configured or weights pinned to VRAM | Enable device_map="auto" and verify shard placement |
| Drastic accuracy drop vs FP16 baseline | Quantization schema mismatch or bad calibration | Run a small calibration dataset; enforce a consistent tokenizer |
| Inference stalls or crashes | CUDA/driver mismatch, kernels not compiled | Rebuild kernels for your GPU arch; fall back to CPU for testing |
| Wrong snippet chosen during RAG | Retrieval mismatch amplified by quantized logits | Apply Retrieval Traceability plus rerankers |
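
For the OOM row, here is a sketch of the shard-placement check, assuming an already-quantized checkpoint loaded through from_quantized. The model path is a placeholder, and hf_device_map is the module-to-device mapping accelerate attaches when device_map is used.

import torch
from auto_gptq import AutoGPTQForCausalLM

# Load an already-quantized checkpoint with automatic CPU/GPU sharding.
model = AutoGPTQForCausalLM.from_quantized(
    "your-model-gptq",   # placeholder path
    device_map="auto",
)

# Verify shard placement and VRAM headroom before serving.
print(getattr(model, "hf_device_map", None))  # {module: device} if sharded
print(f"VRAM after load: {torch.cuda.memory_allocated() / 1e9:.2f} GB")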

Fix in 60 seconds

  1. Quantization check
    Verify config: bits, group_size, sym/asym. Run ΔS on 10 QA pairs.

  2. GPU memory probe
    Monitor memory before/after load. If OOM persists, enforce CPU/GPU split.

  3. Calibration
    Use a gold dataset (100–500 samples). Ensure the ΔS gap between FP16 and INT4 is ≤ 0.10.

  4. Inference stability
    Run 3 paraphrases × 2 seeds. λ must stay convergent (see the sketch after this list).
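
A sketch of steps 3 and 4, reusing the delta_s probe from above. The answer callables are hypothetical hooks into your FP16 and INT4 generation stacks; wire them to whatever your pipeline exposes.

def delta_s_gap_ok(gold_pairs, fp16_answer, int4_answer, budget=0.10):
    # gold_pairs: list of (question, target_section) tuples.
    # fp16_answer / int4_answer: hypothetical generation hooks.
    worst = max(
        abs(delta_s(fp16_answer(q), t) - delta_s(int4_answer(q), t))
        for q, t in gold_pairs
    )
    return worst <= budget, worst

def lambda_convergent(paraphrases, target, answer_fn, seeds=(0, 1), ds_max=0.45):
    # λ stays convergent if every paraphrase × seed run lands within
    # the ΔS budget against the same target section.
    return all(
        delta_s(answer_fn(p, seed=s), target) <= ds_max
        for s in seeds
        for p in paraphrases
    )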


Deep diagnostics

  • Entropy vs precision: If entropy collapses earlier in quantized runs, enable double-check rerankers.
  • Traceability: Log both FP16 and INT4 snippet selections. Divergence above 20% means the quantization schema needs fixing (see the sketch below).
  • Anchor triangulation: Compare ΔS on FP16 vs INT4 against the same target section. If drift exceeds 0.15, requantize with a better calibration set.
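
To make the traceability check concrete, a sketch that logs both selections and reports the divergence rate. The retrieve_* callables are hypothetical hooks into your retriever at each precision.

def selection_divergence(questions, retrieve_fp16, retrieve_int4):
    diverged = 0
    for q in questions:
        a, b = retrieve_fp16(q), retrieve_int4(q)
        print(f"{q!r}: fp16={a!r}  int4={b!r}")  # log both selections
        diverged += (a != b)
    return diverged / len(questions)  # above 0.20 → schema fix needed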

Copy-paste config snippet

from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights
    group_size=128,  # quantize weights in groups of 128
    desc_act=False   # set True for better accuracy at some speed cost
)

# from_pretrained loads the full-precision model for quantization;
# device_map="auto" belongs to from_quantized when serving a saved checkpoint.
model = AutoGPTQForCausalLM.from_pretrained(
    "your-model",
    quantize_config=quantize_config
)
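
Continuing from the snippet above, a hedged calibration-and-save sketch. The sample text and output path are placeholders; in practice the calibration set should hold the 100–500 gold samples from step 3.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-model")

# Calibration examples: 100-500 real samples in practice.
examples = [tokenizer("calibration sample text")]

model.quantize(examples)                  # run GPTQ against the calibration set
model.save_quantized("your-model-gptq")   # reload later with from_quantized(...)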

Checklist: After loading, run the ΔS probe and the λ convergence check (sketches above).


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.