
# vLLM: Guardrails and Fix Patterns

## 🧭 Quick Return to Map

You are in a sub-page of LocalDeploy_Inference.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Field guide for stabilizing vLLM-based local inference pipelines. Use these checks when models serve correctly on API providers but fail under high-throughput GPU serving with vLLM.


## Open these first


## Core acceptance

- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 for target section
- λ remains convergent across 3 paraphrases and 2 seeds
- Throughput scaling does not shift retrieved citations
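
These gates can be checked mechanically in a test harness. The sketch below is one way to wire them together; the inputs `delta_s`, `coverage`, `lambda_states`, and `citations_by_batch` are assumed to come from your own probes, and the names are illustrative rather than part of vLLM or WFGY.

```python
# Sketch: acceptance gate for a vLLM serving run. Metric values are assumed
# to be produced by your own probes; names here are illustrative only.

def acceptance_ok(delta_s: float,
                  coverage: float,
                  lambda_states: list[str],
                  citations_by_batch: dict[int, list[str]]) -> bool:
    """Return True only if all four core acceptance gates pass."""
    if delta_s > 0.45:          # ΔS(question, retrieved) ≤ 0.45
        return False
    if coverage < 0.70:         # coverage ≥ 0.70 for the target section
        return False
    if any(state != "convergent" for state in lambda_states):
        return False            # λ must stay convergent across probes
    # throughput scaling must not shift retrieved citations
    baseline = citations_by_batch.get(1, [])
    return all(cites == baseline for cites in citations_by_batch.values())
```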

## Typical vLLM breakpoints and fixes

| Symptom | Likely cause | Fix |
|---|---|---|
| Works at batch=1 but fails at scale | Context window fragmentation / GPU memory swap | context-drift.md, entropy-collapse.md |
| Citations disappear at high load | Async batch merge drops offsets | retrieval-traceability.md, data-contracts.md |
| Different answers run-to-run | λ flips with batch ordering | logic-collapse.md, rerankers.md |
| Index correct but retrieval unstable | Embedding vs metric mismatch in store | embedding-vs-semantic.md, vectorstore-fragmentation.md |
| GPU OOM / crash at warm-up | Preload sequence too large, missing fence | bootstrap-ordering.md |
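
For the second row, the data-contract fix amounts to carrying the citation payload on every request so an async batch merge cannot silently drop it. A minimal sketch follows; the field names (`snippet_id`, `start`, `end`, `citations`) are assumptions about your own answer schema, not a vLLM interface.

```python
# Sketch: explicit citation contract so async batch merges cannot drop offsets.
# Field names are illustrative; adapt them to your own answer schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    snippet_id: str
    start: int   # character offset into the source document
    end: int

def validate_citations(answer: dict) -> None:
    """Raise if a merged answer came back without a usable citation payload."""
    cites = answer.get("citations", [])
    if not cites:
        raise ValueError("contract violation: answer returned with no citations")
    for c in cites:
        if not isinstance(c, Citation) or not c.snippet_id or c.end <= c.start:
            raise ValueError(f"contract violation: malformed citation {c!r}")
```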

## Fix in 60 seconds

1. Measure ΔS at batch=1 and batch=32. If ΔS rises above 0.60 only at scale → async batching issue.
2. Probe λ across 3 paraphrases. If it flips, apply BBAM.
3. Enforce contracts: citations must include snippet_id and offsets.
4. GPU warm-up: preload with a dummy batch before the first live call (see the warm-up sketch after this list).
5. Verify throughput stability with a replay test (2 seeds, same dataset); a replay sketch also follows this list.
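
Step 4 can be done with vLLM's offline `LLM` entry point before the server starts taking traffic. The sketch below uses the real `LLM`, `SamplingParams`, and `generate` calls; the model name, batch size, and memory settings are placeholders to adjust for your deployment.

```python
# Sketch of step 4: warm the engine with a dummy batch before the first live
# call, so memory allocation and first-batch latency spikes happen off the
# hot path. Model name and sizes are placeholders.

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",   # placeholder model
    max_model_len=8192,                          # match your live context needs
    gpu_memory_utilization=0.90,
)

warmup_prompts = ["warm-up"] * 32                # roughly your expected batch size
llm.generate(warmup_prompts, SamplingParams(temperature=0.0, max_tokens=8))
# Only start accepting live requests after this call returns cleanly.
```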
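
Steps 1 and 5 combine into one replay test: rerun the same questions at batch=1 and batch=32 with two seeds, then flag any question whose citations shift or whose ΔS crosses 0.60 only in some configuration. In the sketch below, `run_pipeline` and `embed` are hypothetical hooks into your own RAG stack, and ΔS is approximated as 1 minus cosine similarity of the embeddings, which is an assumption about how you compute the metric.

```python
# Sketch of steps 1 and 5: replay test across batch sizes and seeds.
# `run_pipeline` and `embed` are hypothetical hooks into your own stack.

import numpy as np

def delta_s(q_vec: np.ndarray, ctx_vec: np.ndarray) -> float:
    """Approximate ΔS as 1 - cosine similarity of question and context embeddings."""
    cos = float(np.dot(q_vec, ctx_vec) /
                (np.linalg.norm(q_vec) * np.linalg.norm(ctx_vec)))
    return 1.0 - cos

def replay_test(questions, run_pipeline, embed, seeds=(0, 1), batches=(1, 32)):
    """Flag questions whose citations or ΔS depend on batch size or seed."""
    unstable = []
    for q in questions:
        results = {(b, s): run_pipeline(q, batch_size=b, seed=s)
                   for b in batches for s in seeds}
        cite_sets = {tuple(r["snippet_ids"]) for r in results.values()}
        ds = {key: delta_s(embed(q), embed(r["context"]))
              for key, r in results.items()}
        crosses_only_at_scale = max(ds.values()) >= 0.60 and min(ds.values()) < 0.60
        if len(cite_sets) > 1 or crosses_only_at_scale:
            unstable.append((q, cite_sets, ds))
    return unstable
```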

## Copy-paste test prompt

I am running vLLM locally.  
Models served with async batching.  
Question: "{user_question}"  

Please return:
1. ΔS at batch=1 and batch=32  
2. λ across 3 paraphrases  
3. Whether citations preserved (snippet_id, offsets)  
4. Minimal structural fix if ΔS ≥ 0.60  

## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

## Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.