# Make.com Guardrails and Patterns
Use this page when your RAG or agent workflow runs on Make.com. It maps typical automation failures to the exact structural fixes in the WFGY Problem Map and gives a minimal recipe you can paste into a scenario.
## Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- coverage ≥ 0.70 for the target section
- λ stays convergent across 3 paraphrases
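The acceptance targets above can be checked mechanically before publish. Below is a minimal sketch in a Tools > Code style JavaScript step; the run-record shape `{ coverage, deltaS }` and the `"→"` encoding of a convergent λ state are assumptions, so map them from your own eval logs.

```javascript
// Acceptance-gate sketch for the targets above.
// Assumption: each eval run is logged as { coverage, deltaS }.
function acceptanceGate(runs) {
  const failures = runs.filter((r) => r.coverage < 0.70 || r.deltaS > 0.45);
  return { pass: failures.length === 0, failures };
}

// λ check across paraphrases. Assumption: a convergent run reports "→".
function lambdaConvergent(states) {
  return states.length >= 3 && states.every((s) => s === '→');
}
```

A gate like this is cheap to run on every scenario revision, so a regression shows up as a failed publish rather than a silent quality drop.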
## Typical breakpoints and the right fix

- **Modules fire before dependencies are ready** (Webhook → Tools → RAG too early)
  Fix No.14: Bootstrap Ordering
- **First production call after deploy crashes; wrong secret selected in Connections**
  Fix No.16: Pre-Deploy Collapse
- **Router/Iterator loops create circular waits or partial writes**
  Fix No.15: Deployment Deadlock
- **High cosine similarity but answers are semantically wrong**
  Fix No.5: Embedding ≠ Semantic
- **Snippet is wrong or citations do not line up with the source**
  Fix No.8: Retrieval Traceability. Contract the payload: Data Contracts
- **Hybrid retrieval (HyDE + BM25 service) performs worse than a single retriever**
  Pattern: Query Parsing Split. Also review: Rerankers
- **Some indexed facts never appear in results**
  Pattern: Vectorstore Fragmentation
- **Two sources are merged into one answer in long chains**
  Pattern: Symbolic Constraint Unlock (SCU)
## Minimal scenario checklist

- **Warm-up fence before RAG/LLM modules**
  Validate that `VECTOR_READY` is set, `INDEX_HASH` matches, and required secrets exist.
  If not ready, short-circuit to Sleep, then retry with a capped counter.
  Spec: Bootstrap Ordering
- **Idempotency and dedupe**
  Compute `dedupe_key = sha256(source_id + revision + index_hash)` in a Tools > Code module.
  Check a KV store (Airtable / Notion / Make Data Store) before side effects. Skip duplicates.
- **RAG boundary contract**
  Require the fields `snippet_id`, `section_id`, `source_url`, `offsets`, `tokens`.
  Enforce cite-then-explain at the LLM step.
  Specs: Data Contracts · Retrieval Traceability
- **Observability probes**
  Log ΔS(question, retrieved) and λ per stage (retrieve, assemble, reason).
  Alert when ΔS ≥ 0.60 or λ flips divergent.
  Overview: RAG Architecture & Recovery
- **Router/Iterator safety**
  Use a single writer branch for index updates and external writes.
  Apply queue mode or a mutex; avoid parallel writes to the same index.
  See: Deployment Deadlock
- **Regression gate**
  Before publish, require coverage ≥ 0.70 and ΔS ≤ 0.45.
  Eval: RAG Precision/Recall
## Scenario pattern (copy)
Replace the concrete modules with your stack. Keep the guardrails.
1. **Webhook/Trigger**
   Capture `source_id`, `revision`, `wf_rev`.
2. **Warm-up Check (Tools > Code)**
   Pull `INDEX_HASH`, `VECTOR_READY`, and secrets.
   If not ready → set `ready=false`.
3. **Router**
   - Not ready → Sleep 30–90 s, increment `retry`, stop after N attempts.
   - Ready → continue.
4. **Retriever (HTTP or App)**
   - Fix the metric and normalization; use the same analyzer as the writer.
   - Output `snippet_id`, `section_id`, `source_url`, `offsets`, `tokens`.
5. **ΔS Probe (Tools > Code)**
   - Compute ΔS(question, retrieved). If ΔS ≥ 0.60 → tag `needs_fix=true`.
6. **LLM (OpenAI/Claude/Gemini module)**
   - Load TXT OS; enforce cite-then-explain; return `{ΔS, λ_state, citations, answer}`.
7. **Trace Sink (Data Store / Airtable)**
   - Write `question`, `snippet_id`, `ΔS`, `λ_state`, `INDEX_HASH`, `dedupe_key`.
8. **Idempotent Writer**
   - Check `dedupe_key` before any external publish or email.
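Step 5's ΔS probe can be sketched as plain JavaScript for a Tools > Code module. One loud assumption: ΔS(question, retrieved) is computed here as 1 minus the cosine similarity between the question embedding and the retrieved-context embedding; if your ΔS definition differs, keep the probe shape and swap the computation.

```javascript
// Cosine similarity over two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// ΔS probe: assumes ΔS = 1 - cosine(question_vec, retrieved_vec).
// Mirrors step 5: tag needs_fix when ΔS ≥ 0.60.
function deltaSProbe(qVec, rVec) {
  const deltaS = 1 - cosine(qVec, rVec);
  return { deltaS, needs_fix: deltaS >= 0.60 };
}
```

Route `needs_fix=true` bundles into the trace sink rather than straight to the LLM, so bad retrievals are inspectable instead of silently answered.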
## LLM prompt you can paste

```
I uploaded TXT OS and the WFGY Problem Map files.
This Make.com scenario retrieved {k} snippets with fields {snippet_id, section_id, source_url, offsets}.
Question: "{user_question}"
Do:
1. Enforce cite-then-explain. If citations are missing, fail fast and return the fix page to open.
2. If ΔS(question, retrieved) ≥ 0.60, propose the minimal structural fix referencing:
   retrieval-playbook, retrieval-traceability, data-contracts, rerankers.
3. Output compact JSON:
   { "citations": [...], "answer": "...", "λ_state": "→|←|<>|×", "ΔS": 0.xx, "next_fix": "..." }
```
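Before any downstream write, the scenario can fail fast when the LLM's reply violates the JSON contract the prompt asks for. A minimal validator sketch follows; the field names come from the output spec above, and the rule set (non-empty citations, known λ state, non-negative ΔS) is an assumption you can tighten.

```javascript
// Validate the compact JSON the prompt requests, enforcing
// cite-then-explain: no citations means the payload is rejected.
const LAMBDA_STATES = ['→', '←', '<>', '×'];

function checkLlmOutput(out) {
  const errors = [];
  if (!Array.isArray(out.citations) || out.citations.length === 0) {
    errors.push('missing citations');
  }
  if (typeof out.answer !== 'string' || out.answer.length === 0) {
    errors.push('missing answer');
  }
  if (!LAMBDA_STATES.includes(out['λ_state'])) {
    errors.push('invalid λ_state');
  }
  if (typeof out['ΔS'] !== 'number' || out['ΔS'] < 0) {
    errors.push('invalid ΔS');
  }
  return { ok: errors.length === 0, errors };
}
```

When `ok` is false, short-circuit to the trace sink with the error list instead of publishing the answer.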
## Common Make.com gotchas

- **Connections silently switch between prod and staging**
  Stamp `env`, `INDEX_HASH`, and `secret_rev` into traces; block on mismatch.
- **Array Aggregator / Iterator duplicates writes**
  Route all writes through a single writer with idempotency.
- **Rate limits make hybrid queries diverge**
  Prefer reranking with a stable dense retriever; see Rerankers.
- **Template mapping renames fields and breaks the contract**
  Lock the schema and run a pre-LLM schema check.
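The pre-LLM schema check from the last gotcha can be one small Tools > Code style step that rejects any snippet violating the RAG boundary contract. The required field list is taken from the checklist above; everything else in the sketch is illustrative.

```javascript
// Schema guard: every retrieved snippet must carry the contract fields,
// otherwise a template-mapping rename has broken the payload.
const REQUIRED_FIELDS = ['snippet_id', 'section_id', 'source_url', 'offsets', 'tokens'];

function validateSnippets(snippets) {
  const violations = [];
  for (const [i, snip] of snippets.entries()) {
    for (const field of REQUIRED_FIELDS) {
      if (snip[field] === undefined || snip[field] === null) {
        violations.push({ index: i, field });
      }
    }
  }
  return { ok: violations.length === 0, violations };
}
```

Run it immediately after the Retriever module, so a renamed field fails at the boundary instead of surfacing as an uncitable answer.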
## When to escalate

- **ΔS stays ≥ 0.60 after chunk/retrieval fixes**
  Rebuild the index with an explicit metric and normalization.
  See: Retrieval Playbook
- **Same input alternates answers between runs**
  Investigate version skew and memory desync.
  See: Pre-Deploy Collapse
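When rebuilding the index with an explicit metric, one concrete way to pin the metric is to L2-normalize vectors at both write and query time, so cosine and dot product agree and the reader cannot drift from the writer. A minimal sketch (the normalization choice itself is an assumption; match whatever your vector store's writer does):

```javascript
// L2-normalize an embedding so that dot product equals cosine similarity.
// Apply the SAME step on the write path and the query path.
function l2Normalize(vec) {
  const norm = Math.sqrt(vec.reduce((sum, x) => sum + x * x, 0));
  if (norm === 0) return vec.slice(); // leave zero vectors untouched
  return vec.map((x) => x / norm);
}
```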
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.
say “next page” when ready.