# Ollama: Guardrails and Fix Patterns

🌙 3AM: a dev collapsed mid-debug… 🚑 Welcome to the WFGY Emergency Room

🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥

🚑 WFGY Emergency Room

👨‍⚕️ Now online:
Dr. WFGY in ChatGPT Room

This is a shared chat window already trained as an ER.
Just open it, drop your bug or screenshot, and talk directly with the doctor.
He will map it to the right Problem Map / Global Fix section, write a minimal prescription, and paste the exact reference link.
If something is unclear, you can even paste a screenshot of Problem Map content and ask — the doctor will guide you.

⚠️ Note: for the full reasoning and guardrail behavior you need to be logged in — the share view alone may fall back to a lighter model.

💡 Always free. If it helps, a star keeps the ER running.
🌐 Multilingual — start in any language.

🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥🟥


## 🧭 Quick Return to Map

You are in a sub-page of LocalDeploy_Inference.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Field guide for stabilizing Ollama-based local inference pipelines. Use these checks when models run fine on API providers but collapse, stall, or drift when containerized with Ollama.

## Open these first


## Core acceptance

- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 on the target section
- λ remains convergent across 3 paraphrases
- Local runs reproducible across 2+ seeds
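
As a quick self-check, these four targets can be wired into a single gate before a local build is promoted. A minimal sketch, assuming you already compute ΔS, coverage, and λ with your own WFGY instrumentation; the function and parameter names are illustrative placeholders, not a published API.

```python
# Acceptance-gate sketch. All inputs are assumed to come from your own
# WFGY tooling; the names here are placeholders.

def passes_acceptance(delta_s: float,
                      coverage: float,
                      lambda_states: list[str],
                      seed_runs: dict[int, list[str]]) -> bool:
    """Gate a local build on the four acceptance targets above."""
    # Every seed must reproduce its own output exactly across repeated runs.
    stable_seeds = all(len(set(outs)) == 1 for outs in seed_runs.values())
    return (
        delta_s <= 0.45                                    # ΔS(question, retrieved)
        and coverage >= 0.70                               # coverage of target section
        and all(s == "convergent" for s in lambda_states)  # λ over 3 paraphrases
        and len(seed_runs) >= 2 and stable_seeds           # reproducible, 2+ seeds
    )
```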

## Typical Ollama breakpoints and fixes

| Symptom | Likely cause | Fix |
|---------|--------------|-----|
| Model boots but stalls on first request | Container not warmed / secrets missing | bootstrap-ordering.md |
| Fast API returns, but snippets wrong | Index/hash drift across containers | retrieval-traceability.md, data-contracts.md |
| Answers diverge run-to-run | λ flips due to context serialization | context-drift.md, entropy-collapse.md |
| Works on GPU API, fails locally | Metric / embedding mismatch in Ollama runtime | embedding-vs-semantic.md, vectorstore-fragmentation.md |
| Container OOM or deadlock | Parallel inference with no fence | deployment-deadlock.md, predeploy-collapse.md |
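
For the first row (a container that boots but stalls on the first request), put a small warm-boot probe in front of the service so the model is loaded before real traffic arrives. A minimal sketch in Python, assuming the default Ollama port 11434; the model tag is a placeholder, and `/api/tags` and `/api/generate` are Ollama's standard REST endpoints.

```python
# Warm-boot probe: wait until the Ollama server answers, then issue one
# throwaway generate call so the model is resident before the first request.

import json
import time
import urllib.request

BASE = "http://localhost:11434"   # default Ollama port
MODEL = "llama3"                  # placeholder model tag, replace with yours

def wait_for_server(timeout_s: float = 60.0) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{BASE}/api/tags", timeout=2):
                return                      # server is up
        except OSError:
            time.sleep(1.0)                 # not ready yet, retry
    raise TimeoutError("Ollama did not become healthy in time")

def warm_model() -> None:
    payload = json.dumps({"model": MODEL, "prompt": "ping", "stream": False}).encode()
    req = urllib.request.Request(
        f"{BASE}/api/generate", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        json.load(resp)                     # discard reply; only completion matters

if __name__ == "__main__":
    wait_for_server()
    warm_model()
```

Run this as the container healthcheck or as an init step so the first user request never lands on a cold model.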

## Fix in 60 seconds

1. Measure ΔS between the retrieved context and the anchor section.
2. Probe λ across 3 paraphrases. If it flips, apply BBAM.
3. Warm boot with a delay + healthcheck before the first request (the probe sketch above is one way to do this).
4. Lock the index schema via data-contracts.md.
5. Verify reproducibility with two seeds before going live (see the harness after this list).
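
Step 5 can be automated with a short harness. A minimal sketch, assuming a local Ollama on the default port; it pins the `seed` and `temperature` options of `/api/generate` and fails if any seed does not reproduce its own output. The model tag and prompt are placeholders.

```python
# Two-seed reproducibility check: run the same prompt twice per seed at
# temperature 0 and report any divergence before going live.

import json
import urllib.request

BASE = "http://localhost:11434"   # default Ollama port
MODEL = "llama3"                  # placeholder model tag, replace with yours
PROMPT = "State the capital of France in one word."

def generate(seed: int) -> str:
    payload = json.dumps({
        "model": MODEL,
        "prompt": PROMPT,
        "stream": False,
        "options": {"seed": seed, "temperature": 0},   # pin sampling
    }).encode()
    req = urllib.request.Request(
        f"{BASE}/api/generate", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    for seed in (7, 1234):                             # any two fixed seeds
        first, second = generate(seed), generate(seed)
        print(f"seed={seed}: {'OK' if first == second else 'DIVERGED'}")
```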

## Copy-paste local test prompt

```
I have WFGY + TXTOS loaded.
Running Ollama locally with container {hash}.
Question: "{user_question}"

Return:
1. ΔS(question,retrieved) and λ across 3 paraphrases
2. Whether index schema matches contract
3. Minimal structural fix if ΔS ≥ 0.60
```

## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

## Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.