# Rewind Agents: Guardrails and Fix Patterns
Use this page when your orchestration uses Rewind-style agents that capture local context across apps, then plan and act. If you see privacy leaks, wrong app selection, citation mismatches, or answers that flip between runs, follow these checks and jump to the exact WFGY fix pages.
## Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the intended section or record
- λ stays convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat on long windows
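These targets can be wired into a simple pass/fail gate; a minimal sketch, assuming you already compute ΔS, coverage, and a per-run λ label elsewhere (E_resonance flatness is a windowed check and is left out here):

```python
# Hypothetical acceptance gate for the targets above.
# delta_s and coverage are floats; lambda_states holds one label per
# paraphrase/seed run, e.g. ["convergent", "convergent", ...].

def meets_acceptance(delta_s, coverage, lambda_states):
    """Return True only when all acceptance targets hold."""
    if delta_s > 0.45:       # ΔS(question, retrieved) must stay ≤ 0.45
        return False
    if coverage < 0.70:      # coverage to the intended section ≥ 0.70
        return False
    # λ must stay convergent on every paraphrase and seed
    if any(state != "convergent" for state in lambda_states):
        return False
    return True
```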
## Open these first

- Visual map and recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Why this snippet: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk edges: Hallucination
- Long chains and entropy: Context Drift · Entropy Collapse
- Structural collapse and recovery: Logic Collapse
- Prompt injection and schema locks: Prompt Injection
- Multi-agent conflicts: Multi-Agent Problems
- Bootstrap and deployment ordering: Bootstrap Ordering · Deployment Deadlock · Pre-Deploy Collapse
- Snippet and citation schema: Data Contracts
## Typical Rewind breakpoints and the right fix

- Context capture is noisy or oversized and raises ΔS.
  Tighten capture filters and re-score with deterministic reranking.
  Open: Retrieval Playbook · Rerankers
- Private strings leak from raw screen or clipboard into prompts.
  Add a redaction prefilter and a contract gate before the LLM step.
  Open: Data Contracts · Prompt Injection
- High similarity yet wrong meaning after capture.
  Usually mixed embedding functions or a metric mismatch between capture and store.
  Open: Embedding ≠ Semantic
- The wrong app gets chosen in cross-app routing.
  Split the query into intent vs retrieval and lock a two-stage rerank.
  Open: Query Parsing Split · Rerankers
- Citations do not line up because DOM-based capture differs from visible text.
  Require cite-then-explain with `snippet_id`, `section_id`, and `offsets`.
  Open: Retrieval Traceability · Data Contracts
- Agent handoff loops or shared memory overwrites between apps.
  Split namespaces per app and stamp `mem_rev` and `mem_hash`.
  Open: Multi-Agent Problems · role drift · memory desync
- Cold-boot errors when capture begins before indexes are ready.
  Guard with warm-up checks and backoff.
  Open: Bootstrap Ordering · Pre-Deploy Collapse
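For the privacy-leak breakpoint, a redaction prefilter can be as small as a pattern table applied to every capture source before the LLM step. A sketch with illustrative patterns only, not a complete PII rule set:

```python
import re

# Illustrative patterns only; extend with your own PII rules.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),     # card-like digit runs
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US SSN shape
]

def redact(text):
    """Apply every redaction pattern to one captured string."""
    for pattern, token in REDACTION_PATTERNS:
        text = pattern.sub(token, text)
    return text
```

The same `redact` pass must run on screen text, clipboard, and screenshots alike, otherwise one source becomes a bypass.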
## Fix in 60 seconds

1. Measure ΔS.
   Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.
2. Probe λ_observe.
   Do a k sweep and reorder headers. If λ flips on paraphrases, lock the schema and clamp variance with BBAM.
3. Apply the module.
   - Retrieval drift → BBMC plus Data Contracts
   - Reasoning collapse → BBCR bridge plus BBAM, verify with Logic Collapse
   - Hallucination re-entry after correction → Pattern: Hallucination Re-entry
4. Verify.
   Coverage ≥ 0.70. ΔS ≤ 0.45. Three paraphrases and two seeds with λ convergent.
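A common stand-in for ΔS is cosine distance between embeddings of the two texts; treat this as a proxy, not the canonical WFGY definition. A minimal sketch, assuming both sides were embedded with the same model:

```python
import math

def delta_s(vec_a, vec_b):
    """ΔS proxy: 1 - cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)
```

Identical vectors give 0.0 and orthogonal ones give 1.0, which lines up with the stable/transitional/risk bands above.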
## Minimal Rewind pattern with WFGY checks

```python
# Pseudocode. Shows only the control points that matter.

CAPTURE_FIELDS = ["app", "window_title", "text", "dom_path", "timestamp"]
SNIPPET_FIELDS = ["snippet_id", "section_id", "source_url", "offsets", "tokens"]

def capture_context(apps, budget_chars=8000):
    # Per-app capture with privacy filters and dedupe.
    raw = []
    for app in apps:
        raw.extend(capture_from(app, fields=CAPTURE_FIELDS))
    return redact_and_truncate(raw, budget=budget_chars)

def build_candidates(raw):
    # Convert capture into retrievable snippets with a unified analyzer and metric.
    return chunk_and_embed(raw, fields=SNIPPET_FIELDS)

def route_intent(question, candidates):
    # Two stage: intent selection, then deterministic rerank.
    intent = detect_intent(question, candidates)
    ordered = rerank(intent, candidates)
    return ordered[:10]

def assemble_prompt(snippets, question):
    # Schema-locked prompt with cite then explain.
    return prompt.format(context=snippets, question=question)

def wfgy_gate(q, context, answer):
    m = metrics_and_trace(q, context, answer)
    if m["ΔS"] >= 0.60 or m["λ_state"] == "divergent":
        raise RuntimeError("WFGY gate: high ΔS or divergent λ")
    return m

def run_rewind_agent(question):
    raw = capture_context(apps=["browser", "docs", "mail"])
    candidates = build_candidates(raw)
    topk = route_intent(question, candidates)
    msg = assemble_prompt(topk, question)
    result = agent.invoke(msg)  # the agent must respect strict JSON for tools
    metrics = wfgy_gate(question, topk, result)
    return {"answer": result, "metrics": metrics}
```
## What this enforces
- Capture is filtered and budgeted before retrieval. Privacy redaction happens first.
- Retrieval uses a unified analyzer and metric. Deterministic reranking controls ordering.
- Prompt is schema locked with cite first, then answer.
- A post generation WFGY gate can halt the run when ΔS is high or λ flips.
- Traces record snippet to citation mapping for audits.
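The audit trace in the last point can be a per-answer record that maps each cited snippet back to its source. A sketch reusing the snippet fields from the pseudocode above; the `answer_hash` field is an assumption:

```python
import hashlib

def trace_record(question, snippets, answer):
    """Map each cited snippet back to its source so audits can replay the run."""
    return {
        "question": question,
        "citations": [
            {"snippet_id": s["snippet_id"],
             "section_id": s["section_id"],
             "source_url": s["source_url"],
             "offsets": s["offsets"]}
            for s in snippets
        ],
        # Hash the answer so the trace can detect post-hoc edits.
        "answer_hash": hashlib.sha256(answer.encode()).hexdigest(),
    }
```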
Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts
## Rewind-specific gotchas

- Capture order changes across windows and breaks reproducibility.
  Stamp `capture_rev` and sort by app priority before rerank.
- Clipboard or screenshot text bypasses redaction rules.
  Force the same redaction pass for every capture source.
- PDF or canvas-based apps produce different text than the visible content.
  Add a DOM or accessible-text fallback and record the path in `source_url`.
- Multi-account confusion in Gmail, Drive, Notion.
  Add the account id to the namespace and to `dedupe_key`.
- Live side effects before citation checks.
  Require a successful WFGY gate and an idempotency check before any writes.
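For the multi-account gotcha, one way to fold the account id into the dedupe key; the field list here is an assumption, so include whatever disambiguates your sources:

```python
import hashlib

def dedupe_key(account_id, app, source_url, text):
    """Stable key: identical content under different accounts stays distinct."""
    raw = "|".join([account_id, app, source_url, text])
    return hashlib.sha256(raw.encode()).hexdigest()
```

The same four fields can prefix the memory namespace, which also prevents the cross-app overwrite listed under the breakpoints above.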
## When to escalate

- ΔS remains ≥ 0.60 after capture filters and retrieval fixes.
  Rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook.
- Identical input yields different answers across sessions.
  Check version skew, capture order, and session state. See Pre-Deploy Collapse.
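Verifying a rebuilt index against a small gold set can be a plain loop; a sketch assuming a `retrieve` callable that returns top-k snippet ids and gold items that each name the expected one:

```python
def gold_set_pass_rate(gold_items, retrieve, k=10):
    """Fraction of gold questions whose expected snippet lands in top-k."""
    hits = 0
    for item in gold_items:
        top = retrieve(item["question"], k=k)
        if item["expected_snippet_id"] in top:
            hits += 1
    return hits / len(gold_items)
```

A pass rate below your coverage target (0.70 here) means the rebuild did not fix retrieval and the capture side needs another look.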
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.