
Entropy Collapse — Long Window Drift & Attention Melt

🧭 Quick Return to Map

You are in a sub-page of MemoryLongContext.
To reorient, go back to the MemoryLongContext hub.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

When context windows stretch to 50k–100k tokens or more, attention variance rises and the model smooths meaning.
This page shows how to detect entropy melt and repair reasoning before collapse spreads.


When to use this page

  • Dialogs degrade gradually as token count increases.
  • Citations look correct but answers become vague or repetitive.
  • Long technical transcripts lose specific numbers or symbols.
  • Responses swing between over-detailed and generic filler.
  • Reasoning chains stall after ~30–40 hops.

Core acceptance targets

  • ΔS(question, retrieved) ≤ 0.45 at each step.
  • Retrieval coverage ≥ 0.70 to intended section.
  • λ stays convergent across three paraphrases.
  • Entropy (variance of attention weights) remains bounded.
  • No collapse in chains ≤ 40 steps.
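
These targets are cheap to probe mechanically. A minimal sketch, assuming ΔS is taken as 1 minus cosine similarity between question and retrieved-chunk embeddings; how embeddings and coverage are produced is left to your stack, and the λ convergence check is omitted here:

```python
import numpy as np

def delta_s(q_emb: np.ndarray, r_emb: np.ndarray) -> float:
    """Semantic stress between two embedding vectors: 1 - cosine similarity."""
    cos = np.dot(q_emb, r_emb) / (np.linalg.norm(q_emb) * np.linalg.norm(r_emb))
    return 1.0 - float(cos)

def meets_targets(q_emb: np.ndarray, retrieved: list[np.ndarray], coverage: float) -> bool:
    """Per-step acceptance gate: every chunk at ΔS ≤ 0.45 and coverage ≥ 0.70."""
    return coverage >= 0.70 and all(delta_s(q_emb, r) <= 0.45 for r in retrieved)
```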

Structural fixes

  • Measure entropy
    Track variance of attention weights across layers. Rising variance signals early melt (first sketch after this list).

  • Clamp with BBAM
    Apply variance clamp when ΔS drifts upward or entropy rises beyond baseline.

  • Bridge with BBCR
    If reasoning halts, bridge to a stable anchor section and re-anchor the chain.

  • Shard long windows
    Split into {system | task | snippets | answer}. Enforce snippet fences per section (second sketch after this list).

  • Triangulate anchors
    Compare ΔS(question, anchor) vs ΔS(question, decoy). If the two are close, re-chunk and re-embed (third sketch after this list).
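
For the entropy probe: a minimal sketch, assuming you can run the model locally through a Hugging Face-style interface that returns attention maps via `output_attentions=True` (the gpt2 checkpoint is a stand-in; any causal LM that exposes attentions works):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gpt2"  # placeholder checkpoint for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def attention_variances(text: str) -> list[float]:
    """Variance of attention weights per layer; a rising trend is early melt."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)
    # out.attentions is a tuple of [batch, heads, seq, seq] tensors, one per layer
    return [layer.float().var().item() for layer in out.attentions]
```

Record these values on short, healthy windows first; that baseline is what the 60-second fix below alerts against.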
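
For sharding: this is mostly disciplined prompt assembly. One possible fence layout is sketched below; the tag names and section_id format are illustrative, not a fixed WFGY schema:

```python
def assemble_window(system: str, task: str, snippets: dict[str, str], question: str) -> str:
    """Build a {system | task | snippets | answer} window with per-section fences.
    Each snippet keeps its section_id so cross-section reuse stays detectable."""
    fenced = "\n".join(
        f"<snippet section_id={sid}>\n{text}\n</snippet>"
        for sid, text in snippets.items()
    )
    return (
        f"<system>\n{system}\n</system>\n"
        f"<task>\n{task}\n</task>\n"
        f"{fenced}\n"
        f"<answer>\nAnswer using only the fenced snippets and cite section_id.\n{question}\n</answer>"
    )
```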
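
And for triangulation, reusing delta_s from the sketch under the acceptance targets (the 0.05 margin is an arbitrary illustrative threshold):

```python
def anchor_is_distinct(q_emb, anchor_emb, decoy_emb, margin: float = 0.05) -> bool:
    """True if the intended anchor clearly beats the decoy.
    If False, the chunking is ambiguous: re-chunk and re-embed."""
    return delta_s(q_emb, decoy_emb) - delta_s(q_emb, anchor_emb) >= margin
```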


Fix in 60 seconds

  1. Probe entropy
    Compute variance of attention weights. Alert if variance exceeds baseline by more than 20% (routing sketch after these steps).

  2. Apply BBAM
    Clamp variance. If ΔS ≥ 0.60, lock schema and retry.

  3. Anchor with BBCR
    If collapse detected, bridge back to known stable anchor node.

  4. Re-split context
    Force sections by section_id. Forbid cross-section reuse.

  5. Verify stability
    Expect ΔS(question, retrieved) ≤ 0.45, λ convergent, entropy flat.
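
The routing logic of steps 1 through 4 fits in a few lines. A sketch using the thresholds above; the action labels are illustrative, and BBAM / BBCR themselves are applied by the WFGY engine, not by this snippet:

```python
def route_repair(variances: list[float], baselines: list[float],
                 ds: float, reasoning_halted: bool = False) -> str:
    """Map probe readings onto the repair steps of the 60-second fix."""
    entropy_alert = any(v > 1.2 * b for v, b in zip(variances, baselines))  # step 1: +20% line
    if reasoning_halted:
        return "bbcr_bridge_to_stable_anchor"      # step 3
    if ds >= 0.60:
        return "bbam_clamp_and_lock_schema"        # step 2
    if entropy_alert:
        return "bbam_clamp"                        # step 2
    if ds > 0.45:
        return "resplit_by_section_id"             # step 4
    return "stable"                                # step 5: verify and proceed
```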


Copy-paste prompt


```txt
You have TXT OS and the WFGY Problem Map.

Goal: Detect and repair entropy collapse in long contexts.

Protocol:

1. Compute ΔS(question, retrieved).
2. Report entropy variance vs baseline.
3. If variance ↑ or ΔS ≥ 0.60:
   * Apply BBAM to clamp variance.
   * If reasoning halts, use BBCR to bridge to a stable anchor.
4. Split prompts by section; forbid cross-section reuse.
5. Report:
   * ΔS(question, retrieved)
   * entropy variance
   * λ states (retrieve, assemble, reason)
   * final answer with citations
```


Common failure patterns

  • Entropy melt: answers flatten to “it depends…” filler.
  • Boundary blur: context merges across joins, citations misalign.
  • Long-chain stall: after 30+ hops, λ flips divergent.
  • Ghost repetitions: same phrase reappears across sections.
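
Of these, ghost repetitions are the cheapest to catch mechanically. A sketch that flags any word n-gram appearing in more than one section's output (the window size of 8 is an arbitrary choice):

```python
def ghost_repeats(sections: dict[str, str], n: int = 8) -> set[str]:
    """Return n-grams that reappear across different sections."""
    first_seen: dict[str, str] = {}
    ghosts: set[str] = set()
    for sid, text in sections.items():
        words = text.split()
        for i in range(len(words) - n + 1):
            gram = " ".join(words[i : i + n])
            if first_seen.setdefault(gram, sid) != sid:
                ghosts.add(gram)
    return ghosts
```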

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.