# Citation-First Prompting — Guardrails and Fix Pattern
## 🧭 Quick Return to Map

You are in a sub-page of PromptAssembly. To reorient, go back here:

- PromptAssembly — prompt engineering and workflow composition
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Stabilize evidence-based answers by requiring citations before explanation. This page gives a minimal contract, validation steps, and fast routes to structural fixes when citations vanish, drift, or point to the wrong text.
## Open these first
- Visual map & recovery: RAG Architecture & Recovery
- Snippet traceability & fields: Retrieval Traceability
- Contract the payload: Data Contracts
- Ordering control: Rerankers
- Long chains & drift: Context Drift, Entropy Collapse
- Semantic ≠ cosine: Embedding ≠ Semantic
- Reasoning collapse: Logic Collapse
## When to use
- Answers sound right but show no citations.
- Citations appear but don’t align with the quoted text.
- Different runs cite different sections for the same question.
- After reranking, citations drift or vanish.
- Multi-turn dialogs slowly lose the cite-then-explain order.
## Acceptance targets
- Cite-then-explain compliance ≥ 0.98 over 50 queries.
- Field completeness ≥ 0.99 for: `snippet_id`, `section_id`, `source_url`, `offsets`, `tokens`.
- ΔS(question, retrieved) ≤ 0.45 and stable across 3 paraphrases.
- Coverage ≥ 0.70 to the target section.
- λ convergent across two seeds.
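
How you measure these gates depends on your stack. Below is a minimal sketch, assuming ΔS is computed as 1 minus the cosine similarity between question and retrieved-context embeddings, and coverage as the fraction of the target section's character span that the cited offsets overlap; the function names are illustrative, not a published API.

```python
import numpy as np

def delta_s(q_vec: np.ndarray, ctx_vec: np.ndarray) -> float:
    """ΔS as 1 - cosine similarity; lower means tighter semantic alignment."""
    cos = float(np.dot(q_vec, ctx_vec)
                / (np.linalg.norm(q_vec) * np.linalg.norm(ctx_vec)))
    return 1.0 - cos

def coverage(cited_offsets: list[tuple[int, int]],
             section_span: tuple[int, int]) -> float:
    """Fraction of the target section overlapped by cited character offsets.

    Assumes citations do not overlap each other; overlapping spans would
    double-count in this sketch.
    """
    sec_start, sec_end = section_span
    covered = sum(max(0, min(end, sec_end) - max(start, sec_start))
                  for start, end in cited_offsets)
    return covered / max(1, sec_end - sec_start)
```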
## Fix in 60 seconds

1. **Enforce the contract.** The model must cite before any reasoning. Reject outputs that invert the order (a retry-loop sketch follows this list).
2. **Validate fields.** Require the full snippet schema. Reject partial or fuzzy references.
3. **Pin rerank and order.** If citations change with header tweaks, lock your header order and rerank configuration.
4. **Probe ΔS and λ.** If ΔS stays high while citations look plausible, rebuild chunking or metrics.
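
In practice the contract is enforced by a reject-and-retry loop. A minimal sketch, assuming `call_model` returns the raw model string and `validate` applies the stub further down this page; both names are placeholders for your own pipeline.

```python
import json

MAX_RETRIES = 2

def cite_then_explain(call_model, validate, prompt: str) -> dict:
    """Call the model, reject contract violations, and re-emit with a reminder."""
    for _ in range(MAX_RETRIES + 1):
        raw = call_model(prompt)
        try:
            out = json.loads(raw)  # strict parse: reject non-JSON output
        except json.JSONDecodeError:
            prompt += "\nReturn valid JSON only. CITE before you EXPLAIN."
            continue
        if validate(out):          # contract checks (see validator stub below)
            return out
        prompt += "\nCitations missing or fields incomplete. CITE before you EXPLAIN."
    # Fail closed with the contract's own stop shape
    return {"citations": [], "answer": "",
            "next_fix": "open data-contracts & retrieval-traceability"}
```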
## Minimal prompt block to paste

```txt
System:
You must CITE before you EXPLAIN.
Required fields per snippet: snippet_id, section_id, source_url, offsets, tokens.
Order is strict:
  1. "citations": [...]
  2. "answer": "..."
If citations are missing or fields incomplete, STOP and return:
{"citations": [], "answer": "", "next_fix": "open data-contracts & retrieval-traceability"}

User:
Question: "<user_question>"
Top-k retrieved: <passed from retriever>
Acceptance: ΔS(question,retrieved) ≤ 0.45; coverage ≥ 0.70.
```
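
If you assemble this block programmatically, keep the header order frozen so citations stay reproducible across runs. A sketch under that assumption; `Snippet` and `build_prompt` are illustrative names, not part of any library.

```python
from dataclasses import dataclass

# Contract text from the block above, kept verbatim and frozen.
SYSTEM = (
    "You must CITE before you EXPLAIN.\n"
    "Required fields per snippet: snippet_id, section_id, source_url, "
    "offsets, tokens.\n"
    'Order is strict: 1. "citations": [...] 2. "answer": "..."'
)

@dataclass
class Snippet:
    snippet_id: str
    section_id: str
    source_url: str
    offsets: tuple[int, int]
    tokens: int
    text: str

def build_prompt(question: str, snippets: list[Snippet]) -> list[dict]:
    """Frozen header order: question first, then top-k in rank order."""
    retrieved = "\n".join(
        f"[{s.snippet_id} | {s.section_id} | {s.source_url} | "
        f"offsets={list(s.offsets)} | tokens={s.tokens}]\n{s.text}"
        for s in snippets
    )
    user = (
        f'Question: "{question}"\n'
        f"Top-k retrieved:\n{retrieved}\n"
        "Acceptance: ΔS(question,retrieved) ≤ 0.45; coverage ≥ 0.70."
    )
    return [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": user}]
```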
## JSON response shape (auditable)

```json
{
  "citations": [
    {
      "snippet_id": "S-28391",
      "section_id": "SEC-3.2",
      "source_url": "https://...",
      "offsets": [2312, 2450],
      "tokens": 172
    }
  ],
  "answer": "…",
  "λ_state": "→|←|<>|×",
  "ΔS": 0.37
}
```
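
To make this shape machine-checkable, the block below sketches a matching JSON Schema; `jsonschema` is one common validator library, an assumption about your stack rather than a requirement, and the optional `λ_state` and `ΔS` fields are left unconstrained.

```python
from jsonschema import validate, ValidationError  # pip install jsonschema

RESPONSE_SCHEMA = {
    "type": "object",
    "required": ["citations", "answer"],
    "properties": {
        "citations": {
            "type": "array",
            "minItems": 1,  # cite-then-explain: at least one citation
            "items": {
                "type": "object",
                "required": ["snippet_id", "section_id", "source_url",
                             "offsets", "tokens"],
                "properties": {
                    "snippet_id": {"type": "string"},
                    "section_id": {"type": "string"},
                    "source_url": {"type": "string"},
                    "offsets": {"type": "array", "items": {"type": "integer"},
                                "minItems": 2, "maxItems": 2},
                    "tokens": {"type": "integer", "minimum": 1},
                },
            },
        },
        "answer": {"type": "string"},
    },
}

def shape_ok(payload: dict) -> bool:
    """True if the response matches the auditable shape above."""
    try:
        validate(payload, RESPONSE_SCHEMA)
        return True
    except ValidationError:
        return False
```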
## Typical breakpoints → exact fix

- **Citations missing but answer present.** Reject and re-emit with the contract. Open: Data Contracts
- **Citation fields incomplete or wrong offsets.** Enforce the full schema, verify offsets and tokens against the corpus. Open: Retrieval Traceability
- **High similarity but wrong meaning.** Rerank or rebuild with the correct metric and normalization. Open: Retrieval Playbook, Embedding ≠ Semantic
- **Header tweak breaks citations.** Freeze header order; clamp variance with BBAM. Open: Logic Collapse
- **Long runs lose citation discipline.** Split the plan, bridge with BBCR, and add mid-chain citation checks. Open: Context Drift, Entropy Collapse
## Validator stub (copy into your pipeline)

1. Parse the JSON strictly → if parsing fails, stop.
2. Require `citations[].length ≥ 1` before any answer.
3. Verify fields and offsets; reject if any are missing.
4. Compute ΔS and coverage; block if ΔS > 0.45 or coverage < 0.70.
5. Log λ across three paraphrases; alert if non-convergent.
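
A minimal Python rendering of steps 1 through 4, assuming ΔS and coverage were already computed by helpers like the ones sketched under the acceptance targets; step 5 compares λ across paraphrases, so it lives outside a single-response check. `validate_response` is an illustrative name, not a published API.

```python
import json

DS_MAX, COV_MIN = 0.45, 0.70
REQUIRED = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def validate_response(raw: str, ds: float, cov: float) -> tuple[bool, str]:
    """Steps 1-4 of the stub; returns (passed, reason)."""
    try:
        out = json.loads(raw)                       # Step 1: strict parse
    except json.JSONDecodeError:
        return False, "invalid JSON"
    cites = out.get("citations", [])
    if not cites:                                   # Step 2: cite before explain
        return False, "no citations"
    for c in cites:                                 # Step 3: full field schema
        missing = REQUIRED - c.keys()
        if missing:
            return False, f"missing fields: {sorted(missing)}"
        if len(c["offsets"]) != 2 or c["offsets"][0] >= c["offsets"][1]:
            return False, f"bad offsets in {c['snippet_id']}"
    if ds > DS_MAX or cov < COV_MIN:                # Step 4: acceptance gates
        return False, f"ΔS={ds:.2f}, coverage={cov:.2f} outside gates"
    return True, "ok"
```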
## Eval gates before ship
- Cite-then-explain ≥ 0.98 on 50 queries.
- Field completeness ≥ 0.99.
- ΔS ≤ 0.45, coverage ≥ 0.70, λ convergent on two seeds.
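
A tiny harness for these gates, assuming a `run_query` callable that returns `(raw_response, ΔS, coverage)` per query and the `validate_response` sketch above; λ convergence across two seeds still needs a second full run.

```python
def eval_gates(queries: list[str], run_query) -> dict:
    """Pass rate over the eval set; ship only when the gate holds."""
    results = [validate_response(*run_query(q)) for q in queries]
    rate = sum(ok for ok, _ in results) / len(results)
    return {
        "pass_rate": rate,   # folds cite-order, fields, and ΔS/coverage checks
        "ship": rate >= 0.98,
        "failures": [why for ok, why in results if not ok],
    }
```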
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped you, starring it improves discovery so more builders can find the docs and tools.