WFGY/ProblemMap/GlobalFixMap/Reasoning/anchoring-and-bridge-proofs.md


# Anchoring and Bridge Proofs: Guardrails and Fix Pattern

## 🧭 Quick Return to Map

You are in a sub-page of **Reasoning**. To reorient, go back to the Reasoning index.

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Keep every claim tied to a stable source anchor. Move from anchor to conclusion through short cited bridges.
This page gives a minimal contract for anchors and bridges, fast diagnostics, and a repair plan using ΔS, λ_observe, and E_resonance.


## Open these first


## Symptoms

| Symptom | What you see |
|---|---|
| Floating claim | Conclusion with no cited snippet or rule tag |
| Moving anchor | A different snippet supports the same step on rerun |
| Weak bridge | "Therefore" without an explicit transformation or rule |
| Anchor mismatch | Cited text does not actually state the needed premise |
| Overlong bridge | Multi-paragraph hop where ΔS increases and λ flips |
| Reranker roulette | Same query, but top-k order shifts and the bridge rewrites |

## Why bridges fail

1. **No anchor contract.** Snippet fields are missing, so anchors cannot be verified.
2. **Bridge has no grammar.** The step lacks a named rule or transformation.
3. **Ranking instability.** Retrieval order changes and the anchor drifts.
4. **Similarity over meaning.** The nearest neighbor looks close but does not entail the premise.
5. **Symbol drift.** Variables or units change between anchor and step.
6. **Chain length.** Long bridges hide unproven jumps and grow ΔS.

## Acceptance targets

- ΔS(question, anchor) ≤ 0.45
- Coverage of the target section ≥ 0.70
- λ remains convergent across three paraphrases and two seeds
- E_resonance stays flat across bridge joins
- Every step has a cited anchor and a named rule
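The targets above can be sketched as a single acceptance gate. This is a minimal illustration only: the threshold values come from this page, but the field names (`delta_s_question_anchor`, `coverage`, `lambda_states`, `steps`) are hypothetical and not part of any WFGY API.

```python
def passes_acceptance(run: dict) -> list:
    """Return a list of violated targets; an empty list means the run passes."""
    failures = []
    # ΔS between the question and the chosen anchor must stay at or below 0.45.
    if run["delta_s_question_anchor"] > 0.45:
        failures.append("ΔS(question, anchor) > 0.45")
    # Coverage of the target section must reach at least 0.70.
    if run["coverage"] < 0.70:
        failures.append("coverage < 0.70")
    # λ must be convergent on every paraphrase/seed combination.
    if any(state != "convergent" for state in run["lambda_states"]):
        failures.append("λ diverged on at least one paraphrase/seed")
    # Every reasoning step needs a cited anchor and a named rule.
    if not all(step.get("anchor") and step.get("rule") for step in run["steps"]):
        failures.append("step missing cited anchor or named rule")
    return failures
```

Wiring this gate into CI, so a regenerated answer fails the build when any target slips, is one low-effort way to keep the contract enforced.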

## Fix in 60 seconds

1. **Lock the anchor.**
   Require `snippet_id`, `section_id`, `source_url`, `offsets`, and `tokens`. Reject steps that cite free text without these fields.
   Spec: retrieval-traceability.md, data-contracts.md

2. **Use BBCR for the hop.**
   BBCR adds a short bridge from anchor to subclaim with a named rule. If the bridge exceeds three sentences, split it into micro-bridges.

3. **Clamp variance with BBAM.**
   If λ flips on paraphrase, freeze the symbol table and invariant set before the rerun.
   Stability guide: logic-collapse.md

4. **Stabilize ordering.**
   Add a reranker with a deterministic tie-break and a fixed analyzer. If ΔS stays high, suspect a metric mismatch and rebuild the index.
   See: rerankers.md, retrieval-playbook.md
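The deterministic tie-break in step 4 can be as simple as sorting on a stable secondary key. A minimal sketch, assuming each candidate carries a `score` from the reranker plus the `snippet_id` field from the anchor contract; the function name is illustrative:

```python
def rerank_deterministic(candidates):
    """Sort by score descending; break ties on snippet_id so equal-score
    candidates always land in the same order across reruns."""
    return sorted(candidates, key=lambda c: (-c["score"], c["snippet_id"]))
```

Because the secondary key is a fixed document identifier rather than insertion order or a random feature, the same candidate set always yields the same ranking, which stops the "reranker roulette" symptom above.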


## Anchor contract

Every anchor must carry these fields:

```json
{
  "snippet_id": "S12",
  "section_id": "CH2.3",
  "source_url": "https://example.org/paper.pdf",
  "offsets": {"start": 10234, "end": 10388},
  "tokens": 186,
  "ΔS_to_question": 0.37
}
```

Reject any step that cites plain text without `snippet_id` and `section_id`.
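A minimal validator for this contract might look as follows. The function name and error messages are illustrative, not a WFGY API; only the required field names come from the contract above.

```python
REQUIRED_ANCHOR_FIELDS = ("snippet_id", "section_id", "source_url", "offsets", "tokens")

def validate_anchor(anchor: dict) -> None:
    """Raise ValueError if an anchor is missing required fields or has bad offsets."""
    missing = [f for f in REQUIRED_ANCHOR_FIELDS if f not in anchor]
    if missing:
        raise ValueError(f"reject anchor: missing fields {missing}")
    off = anchor["offsets"]
    # Offsets must describe a non-empty character span in the source.
    if not (isinstance(off, dict) and off.get("start", 0) < off.get("end", 0)):
        raise ValueError("reject anchor: offsets must satisfy start < end")
```

Running this check at ingestion time, before any bridge is built, is what makes "reject steps that cite free text" enforceable rather than aspirational.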


## Bridge grammar

A bridge converts exactly one anchor into one subclaim through a named rule. Keep bridges short; prefer two or three micro-bridges instead of one long paragraph.

```json
{
  "bridge_id": "B7",
  "from_snippet": "S12#CH2.3",
  "to_claim": "C7",
  "rule": "algebra | definition_unfold | monotonicity | modus_ponens | unit_conversion",
  "assumptions": ["A1", "A2"],
  "derivation": "From S12 and A1 by definition_unfold we get ...",
  "citations": ["S12#CH2.3", "S08#APP.A"],
  "ΔS_bridge": 0.31,
  "λ_state": "convergent"
}
```

If a rule is not named or the citations list is empty, the step must fail fast.
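The fail-fast rule can be enforced with a small guard. A sketch, assuming bridges arrive as dicts shaped like the example above; the rule names come from this page, while the function name and error messages are hypothetical.

```python
NAMED_RULES = {"algebra", "definition_unfold", "monotonicity", "modus_ponens", "unit_conversion"}

def validate_bridge(bridge: dict) -> None:
    """Fail fast when a bridge lacks a named rule or has no citations."""
    if bridge.get("rule") not in NAMED_RULES:
        raise ValueError(
            f"bridge {bridge.get('bridge_id')}: rule must be one of {sorted(NAMED_RULES)}"
        )
    if not bridge.get("citations"):
        raise ValueError(f"bridge {bridge.get('bridge_id')}: citations must not be empty")
```

Treating the rule field as a closed vocabulary is the point: a free-text "thus" can never pass, so every hop is forced through a named transformation.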


## Anchor selection checklist

- The anchor states the premise in near-literal form, not only a related idea.
- The anchor sits in the correct section and page range.
- ΔS(question, anchor) is below 0.45 given your embedding model and store metric.
- For numeric claims, the anchor carries units and context lines.
- For definitions, the anchor includes the exact symbol and its scope.

If any item fails, switch to a better snippet or rebuild with the semantic chunking checklist → chunking-checklist.md


## Structural repairs


## Verification

- Three paraphrases and two seeds keep the same `snippet_id` and `section_id`.
- ΔS(question, anchor) ≤ 0.45 for each run.
- Every step has a bridge with a named rule and at least one citation.
- E_resonance stays flat when joining micro-bridges.
- The final answer includes a cite-then-explain section with stable references.
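The first two checks can be automated across reruns. A minimal sketch, assuming each run is summarized as a dict with the selected anchor and its measured ΔS; the field and function names are illustrative, not part of any WFGY tooling.

```python
def anchors_stable(runs) -> bool:
    """True when every run selected the same (snippet_id, section_id)
    and each run's ΔS stays within the 0.45 target."""
    keys = {(r["snippet_id"], r["section_id"]) for r in runs}
    return len(keys) == 1 and all(r["delta_s"] <= 0.45 for r in runs)
```

Feeding it the six runs (three paraphrases × two seeds) gives a single pass/fail signal for anchor drift before you inspect the bridges by hand.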

## Copy-paste prompt

```txt
You have TXT OS and the WFGY Problem Map loaded.

Task: rebuild my answer with anchored micro-bridges.

Inputs:
- question: "{q}"
- candidates: [{snippet_id, section_id, source_url, offsets, tokens, text_head}]
- current plan: [{step_id, text}]

Do:
1) Pick one anchor with ΔS(question, anchor) ≤ 0.45. If none exist, return the retrieval fix page to open.
2) Create micro-bridges from the anchor to each subclaim using a named rule and citations.
3) If λ flips on paraphrase, apply BBAM and freeze the symbol table.
4) If still unstable, add a deterministic reranker and retry.
5) Return JSON:
   {
     "anchors": [...],
     "bridges": [...],
     "answer": "... cite then explain ...",
     "ΔS": 0.xx,
     "λ_state": "convergent",
     "verification": ["same snippet across seeds", "coverage ≥ 0.70"]
   }
Refuse to answer if no valid anchor exists and point to retrieval-traceability and data-contracts.
```

## Common gotchas

- **Bridge without rule.** A narrative paragraph with "thus" but no named rule.
- **Anchor crop.** Offsets cut away the needed line, so the premise is not actually present.
- **Tie-break chaos.** The reranker uses non-deterministic features, so anchors rotate.
- **Unit loss.** The bridge drops the unit, then compares mismatched quantities.
- **Definition overreach.** The bridge unfolds a definition beyond its scope.

## When to escalate


## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1. Download · 2. Upload to your LLM · 3. Ask "Answer using WFGY + " |
| TXT OS (plain-text OS) | TXTOS.txt | 1. Download · 2. Paste into any LLM chat · 3. Type "hello world" — OS boots instantly |

## Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug-protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped you, starring it improves discovery so more builders can find the docs and tools.