
Reasoning — Global Fix Map

Detect and repair logic collapse, dead ends, abstraction failure, and contradiction.
Use this when citations look fine but the thinking goes off the rails.

What this page is

  • A compact protocol to stabilize multi-step reasoning
  • Copyable guardrails that force plan → verify → answer
  • How to verify stability with ΔS, λ_observe, and contradiction checks

When to use

  • Correct snippets, wrong conclusions
  • Chains drift or loop as steps get longer
  • Answers flip across paraphrases
  • Self-contradictory answers, or explanations that cite nothing
  • Abstract or symbolic prompts keep breaking

Fix in 60 seconds

  1. Plan before prose
    Require a numbered Reasoning Plan that references citations, not memory.
    Each step must list the exact citation IDs it depends on.

  2. Bridge step (BBCR)
    Insert a checkpoint between plan and answer:

    • restate the claim in one sentence,
    • list supporting citations,
    • flag any missing evidence or conflicts.
    If conflicts exist, stop and ask for the missing snippet.

  3. Variance clamp (BBAM)
    Reduce attention variance during the answer: keep steps short, table the facts first, then compose prose.

  4. Fact table first
    Normalize units, dates, and names into a 2-column table: fact ↔ citation.
    Only after the table is stable, generate prose.

  5. Depth guard
    Cap chain depth (e.g., 6 steps). If λ flips divergent at step k, branch with BBPF and pick the convergent path.

  6. Contradiction detector
    Print a “claims vs citations” matrix. Any row without backing citations is invalid and must be revised.
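
Steps 4 and 6 can be enforced mechanically before any prose is generated. Below is a minimal sketch in Python; the claim and citation structures are illustrative assumptions, not a WFGY API:

```python
# Sketch of the fact-table (step 4) and contradiction-matrix (step 6) checks.
# The (fact, citation) pair format is a hypothetical stand-in.

def build_fact_table(facts):
    """Normalize (fact, citation_id) pairs; reject any row without a citation."""
    table = []
    for fact, citation in facts:
        if not citation:
            raise ValueError(f"UNSUPPORTED fact: {fact!r}")
        table.append((fact.strip(), citation))
    return table

def contradiction_matrix(claims, valid_citations):
    """Rows of (claim, cited_ids, valid). A row is invalid when the claim
    cites nothing, or cites an id that is not among the known sources."""
    rows = []
    for claim, cited in claims:
        ok = bool(cited) and all(c in valid_citations for c in cited)
        rows.append((claim, cited, ok))
    return rows

claims = [
    ("Release was in 2021", ["S1:intro:3-5"]),
    ("Latency dropped 40%", []),  # cites nothing, so the row is invalid
]
rows = contradiction_matrix(claims, {"S1:intro:3-5"})
invalid = [c for c, _, ok in rows if not ok]  # invalid == ["Latency dropped 40%"]
```

Every invalid row must be revised or dropped before the answer is composed, which is exactly the rule in step 6.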

Copy-paste prompt


You have TXT OS and the WFGY Problem Map.

Follow this immutable schema:

1. Reasoning Plan (numbered). Each step MUST reference citations like [Sx:sec:line-start-end].
2. Bridge Check (BBCR): restate the final claim in 1 line, list supporting citations,
   and list conflicts or missing evidence. If anything is missing, STOP and request the snippet.
3. Fact Table: normalize units/dates/names into rows: {fact | citation_id}.
4. Final Answer: concise, cite inline at each claim.

Rules:

* Do not invent citations. Do not reuse text across fences.
* If a plan step lacks a citation, mark it “UNSUPPORTED” and do not use it.
* Keep depth ≤ 6. If λ_observe diverges, branch and pick the convergent path.

Input

* question: "<paste>"
* sources (with fences and IDs): <paste fenced snippets>

Output

* print Plan
* print Bridge Check
* print Fact Table
* print Final Answer with inline citations
* if any rule is violated, stop and print the violation
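
The output contract above can be verified mechanically before accepting a reply. A minimal sketch: the section names and citation pattern follow the schema in the prompt, but `check_schema` itself is a hypothetical helper, not part of TXT OS:

```python
import re

# The reply must contain the four sections, in order, and every numbered
# Plan step must carry at least one [Sx:...] citation.
REQUIRED = ["Plan", "Bridge Check", "Fact Table", "Final Answer"]

def check_schema(reply: str) -> str:
    positions = [reply.find(name) for name in REQUIRED]
    if any(p < 0 for p in positions) or positions != sorted(positions):
        return "VIOLATION: missing or out-of-order section"
    plan = reply[positions[0]:positions[1]]
    steps = re.findall(r"^\s*\d+\..*$", plan, flags=re.M)
    uncited = [s for s in steps if not re.search(r"\[S\d+:[^\]]+\]", s)]
    if uncited:
        return f"VIOLATION: uncited plan steps: {len(uncited)}"
    return "OK"
```

If the check returns a violation, re-run the prompt rather than repairing the reply by hand; the schema is meant to be immutable.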

Minimal checklist

  • Plan appears before any prose and references real citations
  • Bridge check lists both support and conflicts
  • Fact table normalizes units/dates/names
  • Depth cap enforced; divergent λ triggers a branch and recovery
  • No claim without a citation
  • If evidence is missing, the model explicitly asks for it
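
The depth cap and divergence recovery in the checklist (step 5 above) can be sketched as a loop. Everything here is a stand-in: `step`, `lambda_state`, and `branch` are hypothetical callbacks, not WFGY functions:

```python
# Depth guard sketch: cap chain depth at 6; when λ flips divergent at
# step k, branch (BBPF-style) and keep the convergent path.

MAX_DEPTH = 6

def run_chain(step, lambda_state, branch):
    path = []
    for k in range(MAX_DEPTH):
        path.append(step(k, path))
        if lambda_state(path) == "divergent":
            # try alternative continuations and keep the convergent one
            for alt in branch(k, path[:-1]):
                candidate = path[:-1] + [alt]
                if lambda_state(candidate) == "convergent":
                    path = candidate
                    break
            else:
                return path[:-1], "stopped: no convergent branch"
    return path, "ok"
```

The key property is that divergence is handled at the step where it appears, instead of letting a bad step contaminate the rest of the chain.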

Acceptance targets

  • ΔS(question, assembled_context) ≤ 0.45 across 3 paraphrases
  • λ remains convergent after the bridge step and through the final answer
  • No contradictions in the claims ↔ citations matrix
  • E_resonance stays flat across the chain (no entropy melt)
  • Re-run with paraphrases yields consistent conclusions and citations
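
The ΔS target can be checked directly if you read ΔS as 1 minus cosine similarity between embedding vectors, which is one common interpretation of the metric. The vectors would come from any sentence-embedding model; that step is assumed, not shown:

```python
import math

# ΔS taken as 1 − cosine similarity between a question vector and the
# assembled-context vector. The 0.45 ceiling matches the target above.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def delta_s(question_vec, context_vec):
    return 1.0 - cosine(question_vec, context_vec)

def stable(paraphrase_vecs, context_vec, threshold=0.45):
    """All paraphrases of the question must stay under the ΔS ceiling."""
    return all(delta_s(v, context_vec) <= threshold for v in paraphrase_vecs)
```

Run it with three paraphrase vectors against the same assembled context; a single paraphrase above threshold fails the target.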

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
