WFGY/ProblemMap/GlobalFixMap/PromptAssembly/README.md

# Prompt Assembly — Global Fix Map

## 🏥 Quick Return to Emergency Room

You are at a specialist desk.
For full triage and the doctors on duty, return here:

Think of this page as a sub-room.
If you want full consultation and prescriptions, go back to the Emergency Room lobby.

Build prompts that models cannot misread.
Use this folder when citations vanish, JSON mode breaks, tools loop, or answers flip after a small template change.
Every page gives a concrete repair with measurable targets. No infra change required.


## Orientation: what each page does

| Page | What it solves | Typical symptom |
|---|---|---|
| System vs User Role Order | Locks role hierarchy and section order | Role text bleeds into user content, answers flip after reorder |
| JSON Mode and Tool Calls | Validates schemas and fences tool outputs | Free text in tool responses, invalid JSON, missing fields |
| Citation First | Enforces cite-then-explain with required fields | Citations missing or point to the wrong snippet |
| Anti Prompt Injection Recipes | Ready-to-paste defenses for common exploits | Hidden instructions override system prompt |
| Memory Fences and State Keys | Prevents cross-turn or cross-agent overwrite | Agents rewrite each other's memory, history leaks |
| Tool Selection and Timeouts | Picks tools deterministically, adds timeouts | Loops, stalls, or wrong tool chosen |
| Template Library (minimal) | Small set of reusable prompt blocks | Inconsistent phrasing across agents or runs |
| Eval Prompts and Checks | Deterministic acceptance gates | "Looks better" but no stable way to prove it |
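As a concrete illustration of the "fences tool outputs" idea above, here is a minimal sketch of a validator that rejects free text in tool responses. The field names (`tool`, `args`) are hypothetical, not the contract defined on the JSON Mode and Tool Calls page; a real deployment would use that page's schema.

```python
import json

REQUIRED_FIELDS = {"tool", "args"}  # hypothetical contract fields

def fence_tool_output(raw: str):
    """Reject free text: a tool response must parse as a JSON object
    containing every required field. Returns (object, None) on success
    or (None, reason) on failure."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None, "free text or invalid JSON"
    if not isinstance(obj, dict):
        return None, "not a JSON object"
    missing = REQUIRED_FIELDS - obj.keys()
    if missing:
        return None, f"missing fields: {sorted(missing)}"
    return obj, None

ok, err = fence_tool_output('{"tool": "search", "args": {"q": "delta S"}}')
bad, reason = fence_tool_output("Sure! Here is the result...")
```

Rejected outputs can be retried with the schema re-stated, rather than silently passed downstream.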

## When to use

- Citations point to the wrong snippet or disappear after retries.
- JSON mode produces invalid objects or tool calls stall in loops.
- Role text bleeds into user content after a small template change.
- Long chains smear topics when you reorder headers.
- Agents overwrite each other's memory without fences.

## Acceptance targets

- ΔS(question, retrieved) ≤ 0.45
- Coverage of target section ≥ 0.70
- λ_observe convergent across three paraphrases and two seeds
- E_resonance flat on long windows
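The exact ΔS definition lives in the WFGY engine paper; as a rough stand-in for checking the first target, a common proxy is one minus the cosine similarity of the question and retrieved-snippet embeddings. The sketch below assumes that proxy and the stable/transitional/risk bands used later on this page.

```python
import math

def delta_s(u, v):
    """Proxy for ΔS(question, retrieved): 1 - cosine similarity of
    two embedding vectors. Assumption: the real ΔS is defined in the
    WFGY engine paper; this is only an illustrative stand-in."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def zone(ds: float) -> str:
    """Bands from the 'Fix in 60 seconds' section below."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"
```

With this proxy, identical vectors score 0.0 (stable) and orthogonal vectors score 1.0 (risk), so the 0.45 acceptance target sits just inside the transitional band.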

## Map symptoms to structural fixes

| Symptom | Likely cause | Open this |
|---|---|---|
| Wrong meaning despite high similarity | Metric or analyzer mismatch | Embedding ≠ Semantic |
| Citations inconsistent or missing after retries | No traceability schema enforced | Retrieval Traceability · Data Contracts |
| JSON breaks or tool responses contain free text | Tool schema not fenced, logic collapsed | JSON Mode and Tool Calls · Prompt Injection |
| Answers flip when you reorder headers | Header order changes λ state | System vs User Role Order · Context Drift |
| Long chains drift or stall | Entropy overload in long windows | Entropy Collapse |
| Hybrid retrieval worse than single retriever | Reranker or query split issue | Rerankers |
| Hallucination re-entry after correction | Snippet contract missing, weak anchors | Hallucination |
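To make the "Memory Fences and State Keys" row concrete: the core move is to namespace every write by the agent that owns it, so one agent cannot silently overwrite another's state. This is a minimal sketch of that scheme, not the fence design from the page itself; the class and key names are hypothetical.

```python
class FencedMemory:
    """Scope every state key to its writing agent so agents cannot
    overwrite each other's memory. Hypothetical minimal scheme."""

    def __init__(self):
        self._store = {}

    def write(self, agent: str, key: str, value):
        # Fence: the key is namespaced by the writer, so "goal" written
        # by one agent never collides with "goal" written by another.
        self._store[(agent, key)] = value

    def read(self, agent: str, key: str):
        # Agents read back only their own namespace by default.
        return self._store.get((agent, key))

mem = FencedMemory()
mem.write("planner", "goal", "collect citations")
mem.write("coder", "goal", "emit JSON only")
```

Both agents hold a `goal` key, yet neither write clobbers the other, which is exactly the overwrite symptom the fence page targets.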

## Fix in 60 seconds

1. Measure ΔS.
   Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   Stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.

2. Probe λ_observe.
   Vary k and the order of prompt headers. If λ flips, lock the schema and clamp variance with BBAM.

3. Apply the right module.
   - Missing or wrong citations → Citation First + Retrieval Traceability
   - JSON tool drift or invalid outputs → JSON Mode and Tool Calls
   - Role bleed or policy mixing → System vs User Role Order
   - Multi-agent loops or overwrites → Memory Fences and State Keys
   - Tool indecision or hangs → Tool Selection and Timeouts

4. Verify.
   Run Eval Prompts and Checks on three paraphrases and two seeds.
   Ship only if ΔS ≤ 0.45 and coverage ≥ 0.70.
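The verify step above can be sketched as a small acceptance gate: run the pipeline on three paraphrases and two seeds, enforce the ΔS and coverage targets on every run, and treat λ as convergent when all runs agree. The `run` callable and its return shape are assumptions for illustration, not a WFGY API.

```python
def gate(run, paraphrases, seeds, max_ds=0.45, min_cov=0.70):
    """Acceptance gate for the verify step.

    run(paraphrase, seed) -> (delta_s, coverage, answer) is a
    hypothetical hook into your pipeline. Ship only if every run
    meets the ΔS and coverage targets and all answers agree
    (a crude stand-in for λ_observe convergence)."""
    answers = set()
    for p in paraphrases:
        for s in seeds:
            ds, cov, ans = run(p, s)
            if ds > max_ds or cov < min_cov:
                return False  # a single failing run blocks the ship
            answers.add(ans)
    return len(answers) == 1  # convergent: identical answer on every variant

# Hypothetical pipeline stub that always meets the targets.
stub = lambda p, s: (0.30, 0.85, "A")
```

Three paraphrases times two seeds is six runs; a flaky pipeline that answers differently per seed fails the gate even when each run individually clears the thresholds.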

## Copy-paste diagnostic prompt

```txt
You have TXT OS and the WFGY Problem Map loaded.

My prompt assembly issue:
- symptom: [one line]
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ on 3 paraphrases

Report:
1) failing layer and why,
2) which exact page to open from Prompt Assembly,
3) minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) a reproducible check to verify the fix.
Use BBMC, BBPF, BBCR, BBAM when relevant.
```

## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + " |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" — OS boots instantly |

## 🧭 Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

## 👑 Early Stargazers: See the Hall of Fame
Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow