
Llama.cpp: Guardrails and Fix Patterns

Llama.cpp is the most widely used local inference runtime for GGML/GGUF models. It enables CPU/GPU inference across diverse hardware but often introduces fragile states: mismatched quantization, KV-cache drift, and long-context instability. This page defines reproducible WFGY-based guardrails and direct fixes.


Open these first


Core acceptance

  • ΔS(question, retrieved) ≤ 0.45 (a measurement sketch follows this list)
  • Coverage ≥ 0.70
  • λ convergent across three paraphrases × two seeds
  • KV cache stability for context >8k tokens
  • JSON schema compliance enforced when using tools
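Of these, the ΔS gate is the easiest to script. Below is a minimal sketch, assuming the WFGY convention ΔS = 1 - cos(question, retrieved) over embedding vectors and the llama-cpp-python bindings; the model path and both texts are placeholders.

```python
# Minimal ΔS probe: ΔS = 1 - cosine similarity, per the WFGY convention.
# Assumes llama-cpp-python (pip install llama-cpp-python); path is a placeholder.
import math
from llama_cpp import Llama

llm = Llama(model_path="models/your-model.Q4_K_M.gguf",  # placeholder
            embedding=True, verbose=False)

def delta_s(a: str, b: str) -> float:
    va, vb = llm.embed(a), llm.embed(b)
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return 1.0 - dot / (na * nb)

ds = delta_s("How do I rotate the KV cache?",
             "Reset the cache every 8-16k tokens to avoid drift.")
print(f"ΔS = {ds:.3f} -> {'PASS' if ds <= 0.45 else 'investigate'}")
```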

Common Llama.cpp breakpoints

| Symptom | Likely cause | Fix |
|---------|--------------|-----|
| Wrong answers despite high similarity | Embedding metric mismatch with the GGUF/quant variant | embedding-vs-semantic.md |
| Model slows or collapses after 8–16k tokens | KV cache drift | context-drift.md, entropy-collapse.md |
| Output alternates between runs | Prompt header drift | retrieval-traceability.md |
| Invalid JSON or schema drift | Missing tool schema lock | prompt-injection.md, logic-collapse.md |
| Crash at first inference call | Boot order not fenced | bootstrap-ordering.md |
| Segfault when mixing quantized weights | Pre-deploy quantization mismatch | predeploy-collapse.md |
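The last two rows fail before any useful logging exists, so a pre-load check pays for itself. A sketch, assuming the `gguf` reader package published from the llama.cpp repo; the file path is a placeholder.

```python
# Sketch: list the quantization types inside a GGUF file before loading it,
# so a build that lacks a given quant kernel can abort early instead of
# segfaulting inside the first inference call. Assumes `pip install gguf`.
from gguf import GGUFReader

reader = GGUFReader("models/your-model.Q4_K_M.gguf")  # placeholder path
quant_types = {t.tensor_type.name for t in reader.tensors}
print("tensor quant types:", sorted(quant_types))
```

If a type such as Q4_K shows up but your llama.cpp build predates K-quants, stop here rather than debugging the segfault after it.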

Fix in 60 seconds

  1. Pre-flight warmup: run a dummy prompt (e.g. "hello") so memory and the KV cache are allocated before real traffic (see the sketch after this list).
  2. Schema-lock all JSON tool outputs; reject free text where structured arguments are expected.
  3. Measure ΔS across three paraphrases; require ≤ 0.45.
  4. Rotate or reset the KV cache every 8–16k tokens.
  5. Ensure the quantization of the model weights matches what your build supports (check the GGUF flags).
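Steps 1, 2, and 4 in one sketch, again assuming llama-cpp-python; the grammar, the model path, and the 8k rotation threshold are illustrative choices, not canonical values.

```python
from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="models/your-model.Q4_K_M.gguf",  # placeholder
            n_ctx=16384, verbose=False)

# Step 1: pre-flight warmup. A throwaway call forces memory and KV cache
# allocation before real traffic hits the model.
llm("hello", max_tokens=1)

# Step 2: schema lock. A GBNF grammar constrains sampling itself, so free
# text is rejected at generation time rather than validated afterwards.
json_grammar = LlamaGrammar.from_string(r'''
root   ::= "{" ws "\"answer\"" ws ":" ws string ws "}"
string ::= "\"" [^"]* "\""
ws     ::= [ \t\n]*
''')
out = llm('Return {"answer": ...} for: what is 2+2?',
          max_tokens=64, grammar=json_grammar)
print(out["choices"][0]["text"])

# Step 4: rotate the cache. n_tokens tracks how much context is filled;
# reset before drift sets in, then re-send the pinned system header.
if llm.n_tokens > 8000:
    llm.reset()
```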

Diagnostic prompt (copy-paste)

```txt
I am running Llama.cpp with model={gguf/quant}, context={n}.
Question: "{user_question}"

Return:
- ΔS(question, retrieved)
- λ states across 3 paraphrases × 2 seeds
- KV cache drift beyond 8k tokens
- JSON schema compliance
- Minimal WFGY page to open if ΔS ≥ 0.60
```
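To run the probe without leaving Python, the template can be filled in and sent through chat completion. A sketch reusing the `llm` instance from the sketches above; the model name, context size, and question are placeholders.

```python
# Sketch: fill the diagnostic template and send it through the same model.
diagnostic = (
    'I am running Llama.cpp with model={model}, context={n}.\n'
    'Question: "{q}"\n\n'
    'Return:\n'
    '- ΔS(question, retrieved)\n'
    '- λ states across 3 paraphrases × 2 seeds\n'
    '- KV cache drift beyond 8k tokens\n'
    '- JSON schema compliance\n'
    '- Minimal WFGY page to open if ΔS ≥ 0.60'
).format(model="your-model.Q4_K_M.gguf", n=16384,
         q="Why does output drift after 10k tokens?")

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": diagnostic}])
print(resp["choices"][0]["message"]["content"])
```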

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1. Download · 2. Upload to your LLM · 3. Ask "Answer using WFGY + \<your question\>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1. Download · 2. Paste into any LLM chat · 3. Type "hello world" and the OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
