# Ollama: Guardrails and Fix Patterns
### 🧭 Quick Return to Map

You are in a sub-page of **LocalDeploy_Inference**. To reorient, go back here:

- LocalDeploy_Inference — on-prem deployment and model inference
- WFGY Global Fix Map — the main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Field guide for stabilizing Ollama-based local inference pipelines. Use these checks when models run fine on API providers but collapse, stall, or drift when containerized with Ollama.
## Open these first
- Architecture recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Embedding vs semantic: embedding-vs-semantic.md
- Ordering and deploy race conditions: bootstrap-ordering.md, deployment-deadlock.md, predeploy-collapse.md
- Container observability: eval_observability.md
## Core acceptance
- ΔS(question, retrieved) ≤ 0.45 (a minimal gate check is sketched after this list)
- Coverage ≥ 0.70 on the target section
- λ remains convergent across 3 paraphrases
- Local runs reproducible across 2+ seeds
## Typical Ollama breakpoints and fixes
| Symptom | Likely cause | Fix |
|---|---|---|
| Model boots but stalls on first request | Container not warmed / secrets missing | bootstrap-ordering.md |
| Fast API returns, but snippets wrong | Index/hash drift across containers | retrieval-traceability.md, data-contracts.md |
| Answers diverge run-to-run | λ flips due to context serialization | context-drift.md, entropy-collapse.md |
| Works on GPU API, fails locally | Metric / embedding mismatch in Ollama runtime | embedding-vs-semantic.md, vectorstore-fragmentation.md |
| Container OOM or deadlock | Parallel inference with no fence | deployment-deadlock.md, predeploy-collapse.md (fence sketch below) |
## Fix in 60 seconds
- Measure ΔS between the retrieved snippet and the anchor section.
- Probe λ across 3 paraphrases. If it flips, apply BBAM.
- Warm-boot with a delay and a healthcheck before the first request (see the sketch after this list).
- Lock the index schema via data-contracts.md.
- Verify reproducibility with two seeds before going live (also covered in the sketch below).
## Copy-paste local test prompt

    I have WFGY + TXTOS loaded.
    Running Ollama locally with container {hash}.
    Question: "{user_question}"
    Return:
    1. ΔS(question, retrieved) and λ across 3 paraphrases
    2. Whether index schema matches contract
    3. Minimal structural fix if ΔS ≥ 0.60
### 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
### 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.