
Logic Collapse: Guardrails and Fix Pattern

🧭 Quick Return to Map

You are in a sub-page of Reasoning.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

When deduction chains flatten into platitudes, contradict earlier steps, or bypass citation locks, you have a logic collapse.
This page localizes causes and gives a minimal, testable repair plan driven by ΔS, λ_observe, and WFGY modules.


Symptoms

| Symptom | What you see |
| --- | --- |
| Deduction flips mid chain | Step t says A, step t+3 assumes not A |
| Cite after claim | Answer states the conclusion first; citations appear later or mismatch |
| Tool result ignored | Structured tool output is not integrated into the final proof |
| Branch mixing | Two hypotheses or roles leak into one stream without arbitration |
| Infinite hedging | Long text, no invariant, no auditable steps |
| JSON schema drift | Different steps produce different fields for the same contract |

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the target section
  • λ remains convergent across 3 paraphrases and 2 seeds
  • E_resonance flat on long windows
  • Zero contradictions across the final plan and the citations
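The targets above can be probed mechanically. A minimal sketch, assuming ΔS is computed as 1 minus the cosine similarity of question and snippet embeddings (this definition, and the vector inputs, are assumptions — wire in your own embedding model):

```python
import math

def cosine(a, b):
    # Plain cosine similarity over two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def delta_s(q_vec, snip_vec):
    # Assumed definition: ΔS as semantic distance, 1 - cosine similarity.
    return 1.0 - cosine(q_vec, snip_vec)

def meets_targets(q_vec, snip_vec, coverage, lambda_states):
    # Check the three machine-checkable acceptance targets in one place.
    return (
        delta_s(q_vec, snip_vec) <= 0.45
        and coverage >= 0.70
        and all(s == "convergent" for s in lambda_states)
    )
```

E_resonance flatness and contradiction checks need log history and are left to the verification loop below.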

Structural fixes (Problem Map)


Why logic collapses

  1. No invariant. There is no explicit statement of what must stay true across steps.
  2. Citation contract missing. The model is allowed to assert before binding to snippet_id and section_id.
  3. Header drift flips λ. Reordered system or tool headers produce different branches on each run.
  4. Branch contamination. Hypotheses A and B are not isolated, so the plan silently merges them.
  5. Unruly tool I/O. Free text is accepted where strict JSON was required.
  6. Hybrid retrieval shuffle. The top k changes and the proof silently re-anchors.

Fix in 60 seconds

  1. Pin the invariant
    Add a short invariant header, for example:
    Invariant: conclusions must cite snippet_id and section_id before any reasoning.

  2. Enforce cite-first
    Require the model to produce citations first, then the explanation.
    See retrieval-traceability.md and citation_first.md.

  3. Clamp variance with BBAM
    If λ flips across paraphrases, apply BBAM to keep one path stable.

  4. Bridge gaps with BBCR
    Summarize the current state into a compact, cited bridge, then continue reasoning on top of that single bridge.

  5. Lock schema and ordering
    Freeze headers, tool schemas, and reranker tie breaks.
    See data-contracts.md and rerankers.md.
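Step 5 can be enforced by pinning a fingerprint of the headers and tool schemas at session start and refusing any run whose fingerprint drifts. A minimal sketch; the canonical-JSON hashing approach is an illustration, not a WFGY-prescribed mechanism:

```python
import hashlib
import json

def fingerprint(header_list, tool_schemas):
    # Headers are serialized as an ordered list, so reordering them
    # (the λ-flip trigger named above) changes the hash.
    blob = json.dumps(
        {"headers": header_list, "tools": tool_schemas},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

def check_no_drift(pinned, header_list, tool_schemas):
    # Fail fast at run start instead of debugging a flipped branch later.
    return fingerprint(header_list, tool_schemas) == pinned
```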


Minimal step contract

Add this object to every step output. Reject the step if any field is missing.

```json
{
  "step": 7,
  "invariant": "cite-first, no cross-branch mixing",
  "citations": [
    { "snippet_id": "S17", "section_id": "CH3.2", "source_url": "https://...", "offsets": [102, 188] }
  ],
  "claim": "X implies Y under condition Z",
  "justification": "Short, refers to citations only",
  "λ_state": "convergent",
  "ΔS_q_snip": 0.31,
  "next_action": "verify Z across S24",
  "guardrails": { "schema_version": "v1", "tie_break": "stable" }
}
```
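A minimal validator for the contract above. Field names follow the example; treating a diverged λ or an over-threshold ΔS as a rejection, in addition to plain field presence, is an assumption:

```python
REQUIRED = {"step", "invariant", "citations", "claim",
            "justification", "λ_state", "ΔS_q_snip",
            "next_action", "guardrails"}

def validate_step(step: dict) -> list[str]:
    """Return a list of violations; an empty list means accept the step."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED - step.keys())]
    if not step.get("citations"):
        errors.append("cite-first violated: no citations")
    for c in step.get("citations", []):
        if not ("snippet_id" in c and "section_id" in c):
            errors.append("citation lacks snippet_id/section_id")
    if step.get("λ_state") != "convergent":
        errors.append("λ not convergent")
    if step.get("ΔS_q_snip", 1.0) > 0.45:
        errors.append("ΔS above 0.45 target")
    return errors
```

Run it as a gate between steps: a non-empty list means the step is rejected and the pipeline stops instead of reasoning on top of an unbound claim.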

Verification

  • Three paraphrase probe, two seeds.
  • Require ΔS(question, retrieved) ≤ 0.45 and λ convergent in all runs.
  • No contradictions between any step claim and earlier steps.
  • If any run fails, inspect header ordering and reranker tie breaks, then re-run.
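The probe above can be sketched as a small harness over three paraphrases and two seeds. `run_model` is a placeholder for your own pipeline call, assumed to return a `(ΔS, λ_state)` pair per run:

```python
def probe(run_model, question, paraphrases, seeds=(0, 1)):
    # Pass only if every (paraphrase, seed) combination stays within targets.
    results = []
    for q in [question, *paraphrases]:
        for seed in seeds:
            ds, lam = run_model(q, seed)
            results.append((q, seed, ds, lam))
    ok = all(ds <= 0.45 and lam == "convergent" for _, _, ds, lam in results)
    return ok, results
```

When `ok` is False, the per-run results point at which paraphrase or seed diverged, which is where to start inspecting header ordering and tie breaks.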

Copy paste prompt

```
You have TXT OS and the WFGY Problem Map loaded.

We suspect a logic collapse.
Inputs:
- question: "{q}"
- current snippets: [{snippet_id, section_id, source_url}]
- last 6 steps with {claim, citations, λ_state, ΔS_q_snip}

Do:
1) State a one line invariant for this task.
2) Produce citations first. If citations are missing or conflict, stop and output the minimal fix.
3) Apply BBCR to create a single cited bridge, then continue reasoning for at most 5 steps.
4) If λ flips across a paraphrase, apply BBAM and retry once.
5) Return JSON:
   { "invariant": "...", "steps": [...], "final_answer": "...",
     "ΔS": 0.xx, "λ_state": "convergent", "next_fix": "..." }
Refuse to output a final answer if any step lacks citations.
```

When to escalate

