
# Microsoft Power Automate: Guardrails and Fix Patterns

A compact field guide to stabilize Power Automate flows that touch RAG, agents, or long pipelines. Use the checks below to localize failure, then jump to the exact WFGY fix page.

## Open these first


## Fix in 60 seconds

1. **Measure ΔS**
   - Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
   - Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
2. **Probe with λ_observe**
   - Vary k ∈ {5, 10, 20}. A flat, high curve means an index or metric mismatch.
   - Reorder the prompt headers. If ΔS spikes, lock the schema.
3. **Apply the module**
   - Retrieval drift → BBMC plus Data Contracts.
   - Reasoning collapse → BBCR bridge plus BBAM variance clamp.
   - Dead ends in long runs → BBPF alternate path.
   - Hybrid retrieval weirdness when mixing Cognitive Search or SharePoint → Rerankers plus the Query Parsing Split pattern.
4. **Verify acceptance**
   - Coverage of the target section ≥ 0.70.
   - ΔS(question, retrieved) ≤ 0.45 on three paraphrases.
   - λ remains convergent across seeds.
   - E_resonance stays flat in long flows.
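The ΔS probe above can be sketched in a few lines. This is a minimal sketch, assuming ΔS(a, b) = 1 − cosine similarity of the two texts' embedding vectors; `delta_s` and `zone` are hypothetical helper names, and producing the embeddings themselves (e.g. via an Azure OpenAI embeddings call from your flow) is left out.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def delta_s(vec_a, vec_b):
    """ΔS as 1 - cosine similarity; lower means more semantically stable."""
    return 1.0 - cosine(vec_a, vec_b)

def zone(ds):
    """Map a ΔS value onto the thresholds from the checklist above."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"
```

Run the probe on the question/retrieved pair and on the retrieved/anchor pair; only the zone labels matter for deciding which fix page to open.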

## Common failure patterns in Power Automate (→ WFGY fixes)

1. Action runs before data is ready
2. Hybrid retrieval degrades when mixing SharePoint search, Cognitive Search, and vector lookups
3. Pagination or truncation loses records in loops
4. Locale and time parsing break snippets
5. 429 throttling or transient errors cause partial writes
6. Vector index exists but recall is random
7. Flow restarts change memory unexpectedly
   - Variables re-initialize, cross-run memory overwrites occur, or a tab swap flips answers.
   - Open: Memory Desync
8. LLM explains well but citations fail
   - Interpretation collapse at the reasoning layer, not retrieval.
   - Open: Logic Collapse, then clamp with BBAM and bridge with BBCR.
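For pattern 5, the usual guardrail is to retry the whole write with exponential backoff and jitter rather than letting a 429 leave half a batch committed. This is a hedged sketch, not a Power Automate connector API: `TransientError` and `with_backoff` are hypothetical names standing in for however your flow surfaces a throttled or transient failure.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a 429 / transient failure raised by a write step."""

def with_backoff(fn, retries=5, base=1.0, cap=30.0, sleep=time.sleep):
    """Retry fn() on TransientError with capped exponential backoff and full jitter.

    fn should wrap the *entire* write, so a retry re-runs the whole batch
    instead of leaving a partial write behind.
    """
    for attempt in range(retries):
        try:
            return fn()
        except TransientError:
            if attempt == retries - 1:
                raise  # out of retries: surface the failure
            # Full jitter: sleep a random time up to the capped exponential delay.
            delay = min(cap, base * (2 ** attempt))
            sleep(random.uniform(0, delay))
```

In a real flow the same shape is available declaratively via an action's retry policy; the sketch just makes the batch-level idempotency requirement explicit.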

## Copy-paste prompt


I uploaded TXT OS and opened the WFGY Problem Map pages.
Context: This is a Power Automate flow with SharePoint + Cognitive Search + GPT step.

* symptom: [brief]
* traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states

Tell me:

1. failing layer and why,
2. which fix page to open from this repo,
3. minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify with a reproducible test in this exact flow.
   Use BBMC/BBPF/BBCR/BBAM where relevant.
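Step 4 of the prompt asks for a reproducible test. A minimal sketch of that acceptance gate, assuming the ΔS ≤ 0.45 criterion from the checklist: `retrieve` and `delta_s` are hypothetical callables standing in for your flow's retrieval step and your ΔS probe.

```python
def passes_acceptance(paraphrases, retrieve, delta_s, threshold=0.45):
    """True only if every paraphrase keeps ΔS(question, retrieved) at or under the gate."""
    return all(delta_s(q, retrieve(q)) <= threshold for q in paraphrases)
```

Run it with three paraphrases of the same question; a single paraphrase over the threshold fails the flow.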


## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” and the OS boots instantly |

## 🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning and semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Let the wizard guide you through | Start → |

👑 Early Stargazers: see the Hall of Fame. WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
