WFGY/ProblemMap/GlobalFixMap/Automation/make_com.md
2025-08-25 21:06:29 +08:00


Make.com Guardrails and Patterns

Use this page when your RAG or agent workflow runs on Make.com. It maps typical automation failures to the exact structural fixes in the WFGY Problem Map and gives a minimal recipe you can paste into a scenario.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • coverage ≥ 0.70 for the target section
  • λ stays convergent across 3 paraphrases
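These three targets can be wired into a scenario as one gate function. A minimal sketch, assuming ΔS, coverage, and one λ observation per paraphrase are computed upstream; the "→" convergent marker follows this page's λ notation, while the function and parameter names are illustrative:

```python
def passes_acceptance(delta_s: float, coverage: float, lambda_states: list[str]) -> bool:
    """Gate a run on the three acceptance targets.

    lambda_states holds one λ observation per paraphrase;
    "→" marks a convergent step in this page's notation.
    """
    return (
        delta_s <= 0.45
        and coverage >= 0.70
        and len(lambda_states) >= 3
        and all(state == "→" for state in lambda_states)
    )

print(passes_acceptance(0.38, 0.82, ["→", "→", "→"]))  # True
print(passes_acceptance(0.52, 0.82, ["→", "→", "→"]))  # False: ΔS too high
```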

Typical breakpoints and the right fix

  • Modules fire before dependencies are ready (Webhook → Tools → RAG too early)
    Fix No.14: Bootstrap Ordering

  • First production call after deploy crashes, wrong secret selected in Connections
    Fix No.16: Pre-Deploy Collapse

  • Router/Iterator loops create circular waits or partial writes
    Fix No.15: Deployment Deadlock

  • High cosine similarity but answers are semantically wrong
    Fix No.5: Embedding ≠ Semantic

  • Snippet is wrong or citations do not line up with the source
    Fix No.8: Retrieval Traceability
    Contract the payload: Data Contracts

  • Hybrid retrieval (HyDE + BM25 service) performs worse than single retriever
    Pattern: Query Parsing Split
    Also review: Rerankers

  • Some indexed facts never appear in results
    Pattern: Vectorstore Fragmentation

  • Two sources are merged into one answer in long chains
    Pattern: Symbolic Constraint Unlock (SCU)


Minimal scenario checklist

  1. Warm-up fence before RAG/LLM modules
    Validate VECTOR_READY, INDEX_HASH match, and required secrets exist.
    If not ready, short-circuit to Sleep then retry with a capped counter.
    Spec: Bootstrap Ordering

  2. Idempotency and dedupe
    Compute dedupe_key = sha256(source_id + revision + index_hash) in a Tools > Code module.
    Check a KV (Airtable / Notion / Make Data Store) before side effects. Skip duplicates.

  3. RAG boundary contract
    Require fields: snippet_id, section_id, source_url, offsets, tokens.
    Enforce cite-then-explain at the LLM step.
    Specs: Data Contracts · Retrieval Traceability

  4. Observability probes
    Log ΔS(question, retrieved) and λ per stage (retrieve, assemble, reason).
    Alert when ΔS ≥ 0.60 or λ flips divergent.
    Overview: RAG Architecture & Recovery

  5. Router/Iterator safety
    Use a single writer branch for index updates and external writes.
    Apply queue mode or mutex; avoid parallel writes to the same index.
    See: Deployment Deadlock

  6. Regression gate
    Before publish, require coverage ≥ 0.70 and ΔS ≤ 0.45.
    Eval: RAG Precision/Recall
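Steps 2–4 above can be sketched as plain functions. The required fields and thresholds come from the checklist itself; the helper names and the in-memory `seen` set are illustrative stand-ins for a Make Data Store or similar KV:

```python
import hashlib

# Step 3: the boundary contract's required fields.
REQUIRED_FIELDS = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def dedupe_key(source_id: str, revision: str, index_hash: str) -> str:
    """Step 2: stable idempotency key for side effects."""
    return hashlib.sha256(f"{source_id}{revision}{index_hash}".encode()).hexdigest()

def contract_ok(snippet: dict) -> bool:
    """Step 3: reject payloads missing any required field."""
    return REQUIRED_FIELDS.issubset(snippet)

def should_alert(delta_s: float, lambda_state: str) -> bool:
    """Step 4: alert when ΔS ≥ 0.60 or λ flips divergent ("←" or "×")."""
    return delta_s >= 0.60 or lambda_state in ("←", "×")

seen: set[str] = set()  # stand-in for a Data Store lookup

def idempotent_publish(key: str, publish) -> bool:
    """Step 2: check the key before side effects; True if published."""
    if key in seen:
        return False
    seen.add(key)
    publish()
    return True
```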


Scenario pattern (copy)

Replace the concrete modules with your stack. Keep the guardrails.

  1. Webhook/Trigger
    Capture source_id, revision, wf_rev.

  2. Warm-up Check (Tools > Code)
    Pull INDEX_HASH, VECTOR_READY, and secrets.
    If not ready → set ready=false.

  3. Router

    • Not ready → Sleep 30–90 s, increment retry, stop after N attempts.
    • Ready → continue.
  4. Retriever (HTTP or App)

    • Fix metric and normalization; use the same analyzer as the writer.
    • Output snippet_id, section_id, source_url, offsets, tokens.
  5. ΔS Probe (Tools > Code)

    • Compute ΔS(question, retrieved). If ΔS ≥ 0.60 → tag needs_fix=true.
  6. LLM (OpenAI/Claude/Gemini module)

    • Load TXT OS; enforce cite-then-explain; return {ΔS, λ_state, citations, answer}.
  7. Trace Sink (Data Store / Airtable)

    • Write question, snippet_id, ΔS, λ_state, INDEX_HASH, dedupe_key.
  8. Idempotent Writer

    • Check dedupe_key before any external publish or email.
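The warm-up fence and router branch (steps 2–3 above) reduce to a capped retry loop. A sketch only: `check_ready`, `MAX_ATTEMPTS`, and the injectable `sleep` are illustrative stand-ins for your Make.com modules:

```python
import random
import time

MAX_ATTEMPTS = 5  # illustrative cap; tune per scenario

def warmup_fence(check_ready, sleep=time.sleep) -> bool:
    """Retry until the readiness check passes, or give up after the cap.

    check_ready() should validate VECTOR_READY, the INDEX_HASH match,
    and that required secrets exist, returning True only when all hold.
    """
    for _attempt in range(MAX_ATTEMPTS):
        if check_ready():
            return True
        sleep(random.uniform(30, 90))  # Sleep 30–90 s between attempts
    return False
```

Route the scenario's "ready" branch on the returned boolean; a `False` means the capped counter was exhausted and the run should stop.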

LLM prompt you can paste


I uploaded TXT OS and the WFGY Problem Map files.
This Make.com scenario retrieved {k} snippets with fields {snippet_id, section_id, source_url, offsets}.
Question: "{user_question}"

Do:

1. Enforce cite-then-explain. If citations are missing, fail fast and return the fix page to open.
2. If ΔS(question, retrieved) ≥ 0.60, propose the minimal structural fix referencing:
   retrieval-playbook, retrieval-traceability, data-contracts, rerankers.
3. Output compact JSON:
   { "citations": [...], "answer": "...", "λ_state": "→|←|<>|×", "ΔS": 0.xx, "next_fix": "..." }
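Downstream modules should not trust that compact JSON blindly. A sketch of a pre-sink check that fails fast when citations are missing, mirroring rule 1 of the prompt; `parse_llm_output` and the assumption that ΔS is reported in [0, 1] are illustrative, not WFGY spec:

```python
import json

LAMBDA_STATES = {"→", "←", "<>", "×"}  # λ_state enum from the prompt above

def parse_llm_output(raw: str) -> dict:
    """Parse and sanity-check the LLM's compact JSON reply."""
    out = json.loads(raw)
    if not out.get("citations"):
        raise ValueError("cite-then-explain violated: no citations")
    if out.get("λ_state") not in LAMBDA_STATES:
        raise ValueError(f"unknown λ_state: {out.get('λ_state')}")
    if not 0.0 <= float(out.get("ΔS", -1.0)) <= 1.0:  # assumes ΔS ∈ [0, 1]
        raise ValueError("ΔS missing or out of range")
    return out
```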


Common Make.com gotchas

  • Connections silently switch between prod and staging
    Stamp env, INDEX_HASH, and secret_rev into traces; block on mismatch.

  • Array Aggregator / Iterator duplicates writes
    Route all writes through a single writer with idempotency.

  • Rate-limits make hybrid queries diverge
    Prefer reranking with a stable dense retriever; see Rerankers.

  • Template mapping renames fields and breaks the contract
    Lock schema and run a pre-LLM schema check.
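The first gotcha, silently switched Connections, reduces to comparing stamped trace fields against the expected deployment. A sketch with illustrative field and function names:

```python
STAMPED_FIELDS = ("env", "INDEX_HASH", "secret_rev")

def env_mismatch(trace: dict, expected: dict) -> list[str]:
    """Return the stamped fields that disagree with the expected deployment.

    Stamp env, INDEX_HASH, and secret_rev into every trace, then block
    the scenario whenever this list is non-empty.
    """
    return [key for key in STAMPED_FIELDS if trace.get(key) != expected.get(key)]
```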


When to escalate

  • ΔS stays ≥ 0.60 after chunk/retrieval fixes
    Rebuild the index with explicit metric/normalization.
    See: Retrieval Playbook

  • Same input alternates answers between runs
    Investigate version skew and memory desync.
    See: Pre-Deploy Collapse


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
| --- | --- | --- |
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
| --- | --- | --- |
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |

👑 Early Stargazers: See the Hall of Fame
WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

