# 🗂️ Reasoning Schemas — Designing Prompt Layouts That Survive Long Chains

*A practical guide to structuring system + retrieval + task prompts so LLMs keep thinking instead of drifting.*


## 1. What is a “Reasoning Schema”?

A reasoning schema is the formal layout that dictates where each piece of context goes and how an LLM must traverse it:


```
System → Task → Constraints → Context → Question → Answer
```

If any segment is missing, reordered, or overwritten, the logic graph collapses and hallucinations slip in.


## 2. Why Most Ad-hoc Layouts Fail

| Failure Mode | Trigger | Effect |
|---|---|---|
| Context Flood | Dumping 20 k tokens of raw text | λ_observe flips to chaotic; model stops planning |
| Constraint Drift | Constraints placed after context | Model “forgets” to cite or guard sensitive data |
| Role Blending | User text inserted before the task | System tone and policy overridden |
| Evidence → Answer Inversion | Asking for the answer before citations | Model fabricates, then cites random lines |

## 3. WFGY Canonical Schema (Stable Version v1.2)

| Segment | Purpose | Size (tokens) | WFGY Guard |
|---|---|---|---|
| System | Identity, ethics, safety | ≤ 50 | Role tag `<sys>` + BBAM weight lock |
| Task | Specific action required | 1 sentence | ΔS anchor to System ≤ 0.25 |
| Constraints | Format, style, rules | bullets ≤ 80 | BBMC residue check |
| Context | Retrieved or uploaded text | sliding window ≤ 2 k | λ_observe must stay convergent |
| Question | User’s raw query | n/a | stored separately for ΔS probes |
| Answer Slot | “Write here” placeholder | n/a | BBCR collapse-rebirth if answer starts early |

Placeholders are literal; the LLM fills only the Answer Slot.
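The canonical schema above can be assembled mechanically. Below is a minimal Python sketch; `build_prompt` is a hypothetical helper, and the 4-characters-per-token estimate stands in for a real tokenizer. Only the tag names, ordering, and the 2 k context cap come from the table — everything else is illustrative.

```python
# Sketch: assemble the canonical segments in order and leave the Answer Slot open.
# Tag names and order follow the schema table; the token estimate is a crude
# stand-in (~4 chars/token) -- swap in a real tokenizer if you have one.

SCHEMA_ORDER = ["sys", "task", "constraints", "context", "question"]

def build_prompt(segments: dict, max_context_tokens: int = 2000) -> str:
    """Emit tagged segments in canonical order, then an open Answer Slot."""
    context = segments.get("context", "")
    if len(context) / 4 > max_context_tokens:
        raise ValueError("Context Flood: trim retrieval before prompting")
    parts = [f"<{tag}>\n{segments[tag]}\n</{tag}>" for tag in SCHEMA_ORDER]
    parts.append("<answer>")  # the LLM fills only this slot
    return "\n\n".join(parts)

prompt = build_prompt({
    "sys": "You are DataGuardian-L. Cite section numbers.",
    "task": "Answer strictly in bullet points; cite every claim.",
    "constraints": "- Tone: formal\n- No speculation",
    "context": "{retrieved_sections}",
    "question": "{user_question}",
})
```

Keeping the segment order in one constant means the same list can drive both assembly and later validation.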


## 4. Templates You Can Copy

### Single-Shot QA

```
<sys>
You are DataGuardian-L, a licensed legal research assistant. Cite section numbers.
</sys>

<task>
Answer strictly in bullet points; cite every claim.
</task>

<constraints>
- Tone: formal
- No speculation
- Use original terminology
</constraints>

<context>
{retrieved_sections}
</context>

<question>
{user_question}
</question>

<answer>
```
### Multi-Step Chain (analysis → plan → answer)

```
<sys> … </sys>
<task> … </task>
<constraints> … </constraints>
<context> … </context>
<question> … </question>

<scratchpad>
Think step-by-step. Output JSON:
{
  "analysis": "...",
  "plan": "...",
  "answer": "..."
}
</scratchpad>
```

## 5. Common Pitfalls & Fixes

| Pitfall | Symptom | Fix |
|---|---|---|
| Forgetting closing tags | Model merges roles; λ diverges instantly | Validate tag balance |
| Placing context after the question | Retrieval ignored | Keep schema order; run a ΔS(question, context) test |
| Over-long constraints | Answer truncated | Compress with BBMC until ΔS(system, constraints) ≤ 0.25 |
| Mixing code + docs in one context block | Embedding collisions | Split into typed sub-blocks; use separate vector stores |

## 6. Automated Validation Pipeline

1. **Schema Linter**: regex check for tag order.
2. **ΔS Probes**:
   - ΔS(system, task) ≤ 0.30
   - ΔS(task, answer) ≤ 0.45
3. **λ_observe**: must stay convergent from task → answer.
4. **Round-trip Check**: paraphrase the user question twice; answer variance must stay < 0.15.

If any test fails, trigger BBCR to rebuild the prompt with compacted segments.
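Step 1 of the pipeline can be sketched with plain regex checks. The ΔS and λ_observe probes are WFGY-internal metrics and are not reproduced here; `lint_schema` is a hypothetical name, and the specific checks shown (tag balance, presence, canonical order) are assumptions about what the linter covers.

```python
import re

# Canonical segment order from section 3; "answer" may legitimately stay open.
TAG_ORDER = ["sys", "task", "constraints", "context", "question", "answer"]

def lint_schema(prompt: str) -> list:
    """Return a list of schema violations (empty list means the prompt passes)."""
    errors = []
    positions = []
    for tag in TAG_ORDER:
        opens = len(re.findall(f"<{tag}>", prompt))
        closes = len(re.findall(f"</{tag}>", prompt))
        if tag != "answer" and opens != closes:
            errors.append(f"unbalanced <{tag}> tags")
        match = re.search(f"<{tag}>", prompt)
        if match is None:
            errors.append(f"missing <{tag}> segment")
        else:
            positions.append((match.start(), tag))
    if positions != sorted(positions):
        errors.append("segments out of canonical order")
    return errors
```

Running this before every call is cheap, and catching a swapped context/question pair here is far less costly than debugging the hallucination it would cause downstream.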


## 7. FAQ

**Q: Do I need tags if I use OpenAI's `messages` array?**
A: Yes, for long chains. Tags persist after retrieval merges; arrays don't survive copy-paste workflows.

**Q: Can I merge Task + Constraints?**
A: Possible if the total stays ≤ 120 tokens and ΔS stays low, but separation improves editability.

**Q: What about JSON-only prompts?**
A: Make the keys mirror the schema order, and add a dummy key `"__guard": "DO NOT MODIFY"` to catch injections.
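The `__guard` trick from the last answer can be checked mechanically. Below is a minimal sketch, assuming the model is instructed to echo the key back verbatim in its JSON output; `guard_intact` is a hypothetical helper name.

```python
import json

GUARD_KEY = "__guard"
GUARD_VALUE = "DO NOT MODIFY"

def guard_intact(response_text: str) -> bool:
    """Return True only if the response is valid JSON and still carries the
    untouched guard key. An injected instruction that rewrites or drops the
    key (or breaks the JSON) fails the check."""
    try:
        payload = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    return payload.get(GUARD_KEY) == GUARD_VALUE
```

A failed check is a signal to re-run the request or trigger the BBCR rebuild described in section 6, not to trust the partial output.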


## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1. Download · 2. Upload to your LLM · 3. Ask “Answer using WFGY + `<your question>`” |
| TXT OS (plain-text OS) | TXTOS.txt | 1. Download · 2. Paste into any LLM chat · 3. Type “hello world” — OS boots instantly |

## Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom → family → exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.