WFGY/ProblemMap/GlobalFixMap/LLM_Providers/openai.md

OpenAI: Guardrails and Fix Patterns

Use this page when your pipeline hits OpenAI models and you see unstable tools, JSON drift, or long-chat decay. The checklist below helps you localize the failure, then jump to the exact WFGY fix page.

Open these first

Fix in 60 seconds

  1. Measure ΔS

    • Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    • Thresholds. stable < 0.40. transitional 0.40 to 0.60. risk ≥ 0.60.
  2. Probe with λ_observe

    • Vary k = {5, 10, 20}. If ΔS stays high, you likely have an index or metric mismatch.
    • Reorder prompt headers. If ΔS spikes, lock the schema.
  3. Apply the module

    • JSON or tool-call drift. lock schema with Data Contracts. add BBMC to isolate retrieval memory. bridge tools with BBCR. clamp variance with BBAM.
    • Safety refusal or redaction on in-domain facts. switch to citation-first format in Retrieval Traceability. scope sources and apply SCU pattern from symbolic constraints. if refusal repeats, route with BBPF alternate path.
    • Long chats decay. follow Context Drift and Entropy Collapse repairs. shorten windows. rotate evidence. re-pin anchors.
  4. Verify

    • Coverage to the target section ≥ 0.70.
    • ΔS(question, retrieved) ≤ 0.45 across three paraphrases.
    • λ remains convergent across seeds and sessions. E_resonance flat at window joins.
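The ΔS probe in step 1 can be sketched in a few lines. This is a minimal sketch, assuming ΔS is read as 1 minus the cosine similarity between two embedding vectors; the embeddings themselves come from whatever model your pipeline already uses, and the thresholds mirror the checklist above.

```python
import math

def delta_s(vec_a, vec_b):
    """Semantic stress between two embedding vectors, read here as
    1 - cosine similarity. Plug in your provider's embeddings."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

def triage(ds):
    """Map a ΔS value onto the checklist thresholds."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"
```

Run `triage(delta_s(embed(question), embed(retrieved)))` for both the retrieved snippet and the expected anchor; two "risk" readings point at retrieval, a split reading points at the anchor.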

Typical OpenAI breakpoints and the right fix

A) JSON mode and function calling

  • Symptom. model mixes prose with JSON. partial tool_calls. extra keys. wrong function name casing.
  • Fix. lock a strict snippet and citation schema in Data Contracts. keep one place that defines fields. add BBCR bridge for tool routing timeouts. add BBAM to clamp wandering keys. verify with three paraphrases.
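"One place that defines fields" can be as small as a dict keyed by exact tool name. A minimal sketch, with hypothetical tool names and fields standing in for your Data Contract:

```python
import json

# Hypothetical contract: the single place that defines tool names and fields.
TOOL_CONTRACT = {
    "search_docs": {"query", "top_k"},
    "cite_snippet": {"doc_id", "span"},
}

def validate_tool_call(raw: str) -> dict:
    """Enforce the contract on a model tool call: strict JSON only,
    exact function-name casing, no extra keys."""
    call = json.loads(raw)  # raises if prose is mixed into the JSON
    name = call["name"]
    if name not in TOOL_CONTRACT:
        raise ValueError(f"unknown or mis-cased tool name: {name}")
    extra = set(call.get("arguments", {})) - TOOL_CONTRACT[name]
    if extra:
        raise ValueError(f"extra keys outside the contract: {sorted(extra)}")
    return call
```

Reject-and-retry on `ValueError` is cheaper than letting a mis-cased tool name propagate into the bridge.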

B) Safety filter interferes with factual answers

  • Symptom. content looks harmless but answer gets softened or truncated.
  • Fix. use citation-first template from Retrieval Traceability. restrict scope with SCU in constraints. treat refusal as a state. route with BBPF to a safer paraphrase that preserves citations.
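"Treat refusal as a state" means detecting it and routing, not retrying the same prompt. A minimal sketch; the refusal markers and `call_model` stub are illustrative placeholders, and the paraphrases are assumed to carry their citations with them:

```python
# Hypothetical refusal markers; tune against your own provider's phrasing.
REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i'm sorry, but")

def is_refusal(answer: str) -> bool:
    """Treat refusal as a detectable state, not an error."""
    text = answer.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def answer_with_reroute(question, paraphrases, call_model):
    """Try citation-first paraphrases in order until one is not refused.
    call_model is your provider call; each paraphrase preserves citations."""
    for prompt in (question, *paraphrases):
        answer = call_model(prompt)
        if not is_refusal(answer):
            return answer
    return None  # every route refused; escalate instead of looping
```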

C) Tokenization and truncation

  • Symptom. system header or tools block gets cut. tool names lose arguments. early cutoff in streaming.
  • Fix. reduce header size. move tool specs to a linked snippet and reference them by short name. re-measure ΔS after each cut. if chains still drift, apply Context Drift.
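A pre-flight size check catches most cutoffs before the API does. A rough sketch, assuming a coarse 4-characters-per-token heuristic (use your real tokenizer for exact counts) and a hypothetical layout where `specs` maps short names to full spec strings:

```python
def fits_budget(text: str, max_tokens: int, chars_per_token: float = 4.0) -> bool:
    """Rough pre-flight check before sending a header or tools block.
    4 chars/token is a coarse heuristic, not a tokenizer."""
    return len(text) / chars_per_token <= max_tokens

def shorten_tool_specs(specs: dict, max_tokens: int) -> dict:
    """Swap the largest full specs for short-name references until the
    block fits, so nothing gets truncated mid-argument."""
    block = dict(specs)
    for name in sorted(block, key=lambda n: len(block[n]), reverse=True):
        if fits_budget("\n".join(block.values()), max_tokens):
            break
        block[name] = f"see: {name}"  # reference the linked snippet by short name
    return block
```

Re-measure ΔS after each cut, as above; a shorter header that raises ΔS is not a win.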

D) Rate limits, retries, and timeouts

  • Symptom. random tool gaps. missing citations. repeated starts.
  • Fix. idempotent retries with jitter. record every call in a trace row. follow Live Monitoring and Debug Playbook. verify no duplicate tool effects.
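The retry pattern above is standard exponential backoff with jitter, plus a trace row per attempt so tool gaps are auditable. A minimal sketch; the trace-row fields are illustrative, and the idempotency key is whatever deduplicates effects on your side:

```python
import random
import time

def call_with_retries(fn, idempotency_key, trace, max_attempts=4, base_delay=0.5):
    """Retry a provider call with exponential backoff plus jitter.
    Every attempt is appended to trace so missing citations and repeated
    starts can be reconstructed; the idempotency key prevents duplicate
    tool effects on retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = fn(idempotency_key)
            trace.append({"key": idempotency_key, "attempt": attempt, "ok": True})
            return result
        except Exception as exc:
            trace.append({"key": idempotency_key, "attempt": attempt,
                          "ok": False, "error": str(exc)})
            if attempt == max_attempts:
                raise
            # Backoff doubles each attempt; jitter spreads concurrent retries.
            time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))
```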

E) Determinism myths

  • Symptom. seed appears to change output anyway. small wording flips output class.
  • Fix. treat outputs as distributions. evaluate stability with ΔS and λ across three paraphrases. if unstable, clamp with BBAM and shorten evidence lists.
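"Treat outputs as distributions" has a simple operational form: run the three paraphrases across seeds, classify each output, and measure agreement with the modal class. A minimal sketch, assuming you already have a way to map raw outputs to answer classes:

```python
from collections import Counter

def stability(outputs):
    """Fraction of runs that agree with the modal answer class.
    outputs is a list of classified answers across paraphrases and seeds."""
    counts = Counter(outputs)
    modal, n = counts.most_common(1)[0]
    return modal, n / len(outputs)
```

A stability fraction well below 1.0 across paraphrases is the signal to clamp with BBAM rather than chase a seed.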

F) Multi-agent tool chaos

  • Symptom. agents overwrite each other's memory. tool A answers B's question. deadlocks on shared state.
  • Fix. split memory namespaces. lock writes by mem_rev and mem_hash. read Multi-Agent Problems and Role Drift. add a BBCR bridge node with explicit timeouts.
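Split namespaces plus locked writes amount to optimistic concurrency: a write must present the revision and content hash it last read. A minimal in-memory sketch of the mem_rev / mem_hash idea (class and field names here are illustrative, not the repo's API):

```python
import hashlib

class NamespacedMemory:
    """Per-agent namespaces with optimistic locking: writes that present a
    stale (rev, hash) pair are rejected instead of silently clobbering."""
    def __init__(self):
        self._store = {}  # (namespace, key) -> (rev, hash, value)

    def read(self, namespace, key):
        rev, digest, value = self._store.get((namespace, key), (0, "", None))
        return rev, digest, value

    def write(self, namespace, key, value, expect_rev, expect_hash):
        rev, digest, _ = self.read(namespace, key)
        if (rev, digest) != (expect_rev, expect_hash):
            raise RuntimeError("stale write: re-read before writing")
        new_digest = hashlib.sha256(repr(value).encode()).hexdigest()
        self._store[(namespace, key)] = (rev + 1, new_digest, value)
        return rev + 1, new_digest
```

An agent that loses the race gets a stale-write error and must re-read, which is exactly the behavior that stops tool A from answering B's question out of a clobbered plan.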

Copy-paste triage prompt

I uploaded TXT OS and the WFGY Problem Map files.

My OpenAI provider bug:
- symptom: [brief]
- traces: [ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states, tool logs if any]

Tell me:
1) which layer is failing and why,
2) which exact fix page to open from this repo,
3) the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) how to verify with a reproducible test.

Use BBMC/BBPF/BBCR/BBAM when relevant.

Acceptance targets

  • Coverage to target section ≥ 0.70.
  • ΔS(question, retrieved) ≤ 0.45 on three paraphrases.
  • λ convergent across seeds and sessions. E_resonance flat.
  • All tool calls and citations traceable to a stable schema.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
