# Pipedream — Guardrails and Fix Patterns

## 🧭 Quick Return to Map

You are in a sub-page of Automation Platforms. To reorient, go back to the Automation Platforms index.

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when your integration is built on Pipedream (HTTP triggers, Node/Python steps, marketplace components) and answers look plausible but wrong, citations don't line up, or flows pass step by step while users still see inconsistencies.

## Acceptance targets

- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the intended section/record
- λ stays convergent across 3 paraphrases
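
If you need a quick local reading of ΔS before a full wfgyCheck endpoint exists, one common stand-in, assumed here, is 1 minus the cosine similarity between the question and retrieved-context embeddings. A minimal sketch in plain Node; if your wfgyCheck defines ΔS differently, its definition wins:

```javascript
// Hypothetical helper: approximates ΔS as 1 - cosine(questionVec, contextVec).
// Vectors come from whatever embedding model you already use for retrieval.
function deltaS(questionVec, contextVec) {
  let dot = 0, nq = 0, nc = 0;
  for (let i = 0; i < questionVec.length; i++) {
    dot += questionVec[i] * contextVec[i];
    nq += questionVec[i] ** 2;
    nc += contextVec[i] ** 2;
  }
  // Guard against zero vectors to avoid NaN.
  return 1 - dot / (Math.sqrt(nq) * Math.sqrt(nc) || 1);
}

// Acceptance: deltaS(q, ctx) <= 0.45 for the intended section.
```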

## Typical breakpoints → exact fixes


## Minimal Pipedream pattern with WFGY checks

A compact flow outline that enforces a cite-first schema, observable retrieval, and ΔS/λ validation. The Node sketches after each step show one way to implement it; the step slugs (`parse_input`, `retrieve_context`, `assemble_prompt`, `call_llm`, `wfgy_check`) and environment variable names are assumptions, not fixed APIs.

Trigger: HTTP / Webhook (POST)

Step 1 — Parse input
- Extract "question" and optional "k" (default 10)
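
A minimal Node sketch for Step 1, assuming the trigger posts JSON with `question` and an optional integer `k`:

```javascript
// Step 1 sketch: parse and validate the webhook body.
export default defineComponent({
  async run({ steps, $ }) {
    const body = steps.trigger.event.body ?? {};
    const question = String(body.question ?? "").trim();
    const k = Number.isInteger(body.k) ? body.k : 10; // default k = 10
    if (!question) {
      // Reject early, before any retrieval or LLM spend.
      // Assumes the HTTP trigger is set to return a custom response.
      await $.respond({ status: 400, body: { error: "missing question" } });
      return $.flow.exit("no question");
    }
    return { question, k };
  },
});
```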

Step 2 — Retrieve context (custom component or HTTP)
- POST to your retriever: { question, k }
- Return: snippets[], each with { snippet_id, text, source, section_id }
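
For Step 2, a sketch that posts to your own retriever over HTTP; `RETRIEVER_URL` is an assumed environment variable:

```javascript
// Step 2 sketch: fetch the top-k snippets from your retriever.
import { axios } from "@pipedream/platform";

export default defineComponent({
  async run({ steps, $ }) {
    const { question, k } = steps.parse_input.$return_value;
    // Pipedream's axios wrapper returns the response body directly.
    const snippets = await axios($, {
      method: "POST",
      url: process.env.RETRIEVER_URL,
      data: { question, k },
    });
    // Expected shape: [{ snippet_id, text, source, section_id }, ...]
    return { snippets };
  },
});
```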

Step 3 — Assemble prompt (Node step)

```txt
SYSTEM:
  Cite lines before any explanation. Keep per-source fences.
TASK:
  Answer only from the provided context. Return citations as [snippet_id].
CONTEXT:
  <joined snippets with snippet_id + source + text>
QUESTION:
  <user question>
```
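
A Step 3 sketch that joins the snippets and builds the cite-first prompt above:

```javascript
// Step 3 sketch: join snippets under per-source fences, cite-first schema.
export default defineComponent({
  async run({ steps }) {
    const { question } = steps.parse_input.$return_value;
    const { snippets } = steps.retrieve_context.$return_value;
    const context = snippets
      .map((s) => `[${s.snippet_id}] (${s.source})\n${s.text}`)
      .join("\n---\n"); // "---" is the per-source fence
    const prompt = [
      "SYSTEM:",
      "  Cite lines before any explanation. Keep per-source fences.",
      "TASK:",
      "  Answer only from the provided context. Return citations as [snippet_id].",
      "CONTEXT:",
      context,
      "QUESTION:",
      question,
    ].join("\n");
    return { prompt };
  },
});
```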

Step 4 — Call LLM (component or HTTP)
- Input: prompt from Step 3
- Output: answer + raw citations if available
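
For Step 4, one option is a plain HTTP call rather than a marketplace component, sketched here against an OpenAI-compatible chat endpoint (the model name is a placeholder):

```javascript
// Step 4 sketch: call the LLM over HTTP so the prompt stays explicit in code.
import { axios } from "@pipedream/platform";

export default defineComponent({
  async run({ steps, $ }) {
    const { prompt } = steps.assemble_prompt.$return_value;
    const res = await axios($, {
      method: "POST",
      url: "https://api.openai.com/v1/chat/completions",
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
      data: {
        model: "gpt-4o-mini", // placeholder model
        messages: [{ role: "user", content: prompt }],
      },
    });
    const answer = res.choices[0].message.content;
    // Collect raw [snippet_id] citations from the cite-first block.
    const citations = [...answer.matchAll(/\[([^\]\s]+)\]/g)].map((m) => m[1]);
    return { answer, citations };
  },
});
```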

Step 5 — WFGY post-check (HTTP to your wfgyCheck function)
- Body: { question, context, answer }
- Return: { deltaS, lambda, coverage, notes }
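
A Step 5 sketch; `WFGY_CHECK_URL` is an assumed environment variable pointing at your own wfgyCheck function:

```javascript
// Step 5 sketch: send the (question, context, answer) triple for validation.
import { axios } from "@pipedream/platform";

export default defineComponent({
  async run({ steps, $ }) {
    const { question } = steps.parse_input.$return_value;
    const { prompt } = steps.assemble_prompt.$return_value;
    const { answer } = steps.call_llm.$return_value;
    // Returns { deltaS, lambda, coverage, notes }
    return await axios($, {
      method: "POST",
      url: process.env.WFGY_CHECK_URL,
      data: { question, context: prompt, answer },
    });
  },
});
```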

Step 6 — Gate

```txt
IF deltaS ≥ 0.60 OR lambda != "→"
   → Fail fast with 422 and include trace table (snippet_id ↔ citation)
ELSE
   → 200 OK with { answer, deltaS, lambda, coverage, citations[] }
```
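
A Step 6 sketch that enforces the gate and answers the original webhook (assumes the HTTP trigger is configured to return a custom response):

```javascript
// Step 6 sketch: fail fast on a bad ΔS/λ reading, otherwise return the answer
// with its audit fields.
export default defineComponent({
  async run({ steps, $ }) {
    const { deltaS, lambda, coverage } = steps.wfgy_check.$return_value;
    const { answer, citations } = steps.call_llm.$return_value;
    const { snippets } = steps.retrieve_context.$return_value;

    if (deltaS >= 0.6 || lambda !== "→") {
      await $.respond({
        status: 422,
        body: {
          error: "WFGY gate failed",
          deltaS,
          lambda,
          // Trace table: which retrieved snippet_ids the answer actually cited.
          trace: snippets.map((s) => ({
            snippet_id: s.snippet_id,
            cited: citations.includes(s.snippet_id),
          })),
        },
      });
      return $.flow.exit("gate failed");
    }
    await $.respond({
      status: 200,
      body: { answer, deltaS, lambda, coverage, citations },
    });
  },
});
```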

Reference specs: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts


## Pipedream-specific gotchas

- Event truncation: large contexts exceed step memory or event size limits. Keep full snippets in an external store, inject only ids plus a short preview into the prompt, and re-fetch on demand (see the sketch after this list). See Data Contracts.

- Package/runtime drift: Node/Python versions or package pins differ between components. Pin versions and rebuild embeddings/index with the same runtime. See Embedding ≠ Semantic.

- Concurrent runs reorder records and break implicit ranking. Once per-source ΔS ≤ 0.50, add an explicit rerank step. See Rerankers.

- Secret/connection mismatch across sources: different tokens for ingestion vs. query cause empty or partial retrieval. Verify in a boot check before the first LLM call. See Pre-Deploy Collapse.

- Marketplace components hide prompts: wrap LLM calls in your own component so the cite-first schema and fences are explicit in code. See Retrieval Traceability.
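
A sketch of the external-store pattern for the truncation gotcha, using a Pipedream Data Store; the `store` prop name and the 200-character preview length are arbitrary choices:

```javascript
// Keep full snippets out of the event payload; pass only ids + short previews.
export default defineComponent({
  props: {
    store: { type: "data_store" }, // Pipedream key-value Data Store
  },
  async run({ steps }) {
    const { snippets } = steps.retrieve_context.$return_value;
    const previews = [];
    for (const s of snippets) {
      await this.store.set(s.snippet_id, s); // full record lives outside the event
      previews.push({ snippet_id: s.snippet_id, preview: s.text.slice(0, 200) });
    }
    // Downstream steps re-fetch on demand: await this.store.get(snippet_id)
    return { previews };
  },
});
```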


## When to escalate

- ΔS stays ≥ 0.60 after chunking/retrieval fixes → rebuild the index with explicit metric flags and unit normalization. See Retrieval Playbook.

- Answers flip between preview and deployed sources → verify version skew, secret scope, and environment variables. See Bootstrap Ordering · Deployment Deadlock.


## 🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

## Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.