
Citation-First Prompting — Guardrails and Fix Pattern

🧭 Quick Return to Map

You are in a sub-page of PromptAssembly.
To reorient, go back to the PromptAssembly map.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Stabilize evidence-based answers by requiring citations before explanation. This page gives a minimal contract, validation steps, and fast routes to structural fixes when citations vanish, drift, or point to the wrong text.


When to use

  • Answers sound right but show no citations.
  • Citations appear but don't align with the quoted text.
  • Different runs cite different sections for the same question.
  • After reranking, citations drift or vanish.
  • Multi-turn dialogs slowly lose the cite-then-explain order.

Acceptance targets

  • Cite-then-explain compliance ≥ 0.98 over 50 queries.
  • Field completeness ≥ 0.99 for: snippet_id, section_id, source_url, offsets, tokens.
  • ΔS(question, retrieved) ≤ 0.45 and stable across 3 paraphrases.
  • Coverage ≥ 0.70 to the target section.
  • λ convergent across two seeds.

Fix in 60 seconds

  1. Enforce the contract
    The model must cite before any reasoning. Reject outputs that invert the order.

  2. Validate fields
    Require the full snippet schema. Reject partial or fuzzy references.

  3. Pin rerank & order
    If citations change with header tweaks, lock your header order and rerank configuration.

  4. Probe ΔS and λ
    If ΔS stays high while citations look plausible, rebuild chunking or metrics. A probe sketch follows below.
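
A minimal sketch of the step-4 probe. It assumes ΔS is computed as 1 minus cosine similarity between embeddings, that `embed` is your own embedding function, and that λ is read as convergent when every paraphrase stays under the threshold; all three are assumptions, not fixed by this page.

```python
import numpy as np

def delta_s(q_vec: np.ndarray, ctx_vec: np.ndarray) -> float:
    """ΔS as 1 - cosine similarity; lower means tighter alignment (assumed definition)."""
    cos = float(np.dot(q_vec, ctx_vec) /
                (np.linalg.norm(q_vec) * np.linalg.norm(ctx_vec)))
    return 1.0 - cos

def probe(embed, question: str, paraphrases: list[str], retrieved_text: str,
          threshold: float = 0.45) -> dict:
    """Probe ΔS for the question and its paraphrases against the retrieved text.
    λ is read as convergent here when every variant stays under the threshold."""
    ctx_vec = embed(retrieved_text)
    scores = {q: delta_s(embed(q), ctx_vec) for q in [question, *paraphrases]}
    return {"ΔS": scores, "λ_convergent": all(s <= threshold for s in scores.values())}
```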


Minimal prompt block to paste


```
System:
You must CITE before you EXPLAIN.
Required fields per snippet: snippet_id, section_id, source_url, offsets, tokens.
Order is strict:

1. "citations": [...]
2. "answer": "..."

If citations are missing or fields incomplete, STOP and return:
{"citations": [], "answer": "", "next_fix": "open data-contracts & retrieval-traceability"}

User:
Question: "<user_question>"
Top-k retrieved: <passed from retriever>
Acceptance: ΔS(question,retrieved) ≤ 0.45; coverage ≥ 0.70.
```
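
One way to wire the block into a chat call, as a sketch. The `messages` shape follows the common system/user chat format, and `build_messages` plus the snippet dicts are hypothetical names for illustration:

```python
import json

SYSTEM_CONTRACT = """You must CITE before you EXPLAIN.
Required fields per snippet: snippet_id, section_id, source_url, offsets, tokens.
Order is strict:

1. "citations": [...]
2. "answer": "..."

If citations are missing or fields incomplete, STOP and return:
{"citations": [], "answer": "", "next_fix": "open data-contracts & retrieval-traceability"}"""

def build_messages(question: str, snippets: list[dict]) -> list[dict]:
    """Assemble the citation-first prompt; `snippets` is the top-k from your retriever."""
    user = (
        f'Question: "{question}"\n'
        f"Top-k retrieved: {json.dumps(snippets, ensure_ascii=False)}\n"
        "Acceptance: ΔS(question,retrieved) ≤ 0.45; coverage ≥ 0.70."
    )
    return [{"role": "system", "content": SYSTEM_CONTRACT},
            {"role": "user", "content": user}]
```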


JSON response shape (auditable)

```json
{
  "citations": [
    {
      "snippet_id": "S-28391",
      "section_id": "SEC-3.2",
      "source_url": "https://...",
      "offsets": [2312, 2450],
      "tokens": 172
    }
  ],
  "answer": "…",
  "λ_state": "→|←|<>|×",
  "ΔS": 0.37
}
```
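
To keep the shape auditable in code, the parse can be strict so that any missing field fails loudly. A sketch; the dataclass layout is one possible encoding of the schema above:

```python
import json
from dataclasses import dataclass

@dataclass
class Citation:
    snippet_id: str
    section_id: str
    source_url: str
    offsets: list   # [start, end] character offsets into the source
    tokens: int

@dataclass
class CitedAnswer:
    citations: list
    answer: str
    lambda_state: str   # maps the "λ_state" field
    delta_s: float      # maps the "ΔS" field

def parse_response(raw: str) -> CitedAnswer:
    """Strict parse: a missing field raises, which is the failure mode we want."""
    obj = json.loads(raw)
    citations = [Citation(**c) for c in obj["citations"]]
    return CitedAnswer(citations=citations, answer=obj["answer"],
                       lambda_state=obj["λ_state"], delta_s=obj["ΔS"])
```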

Typical breakpoints → exact fix

  • Citations missing but answer present → reject and re-emit with the contract (see the retry sketch after this list). Open: Data Contracts

  • Citation fields incomplete or wrong offsets → enforce the full schema and verify offsets/tokens against the corpus. Open: Retrieval Traceability

  • High similarity but wrong meaning → rerank or rebuild with the correct metric and normalization. Open: Retrieval Playbook, Embedding ≠ Semantic

  • Header tweak breaks citations → freeze header order; clamp variance with BBAM. Open: Logic Collapse

  • Long runs lose citation discipline → split the plan, bridge with BBCR, and add mid-chain citation checks. Open: Context Drift, Entropy Collapse
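
For the first breakpoint, the reject-and-re-emit loop can stay small. A sketch, assuming `call_llm` is your client wrapper and reusing `parse_response` from above; the retry budget is arbitrary:

```python
def cite_first_or_retry(call_llm, messages: list[dict], max_retries: int = 2):
    """Reject outputs that violate the contract and re-emit with a reminder."""
    for _ in range(max_retries + 1):
        raw = call_llm(messages)
        try:
            result = parse_response(raw)      # strict schema parse from above
            if result.citations:              # cite-then-explain enforced
                return result
        except (KeyError, TypeError, ValueError):
            pass                              # contract violated; fall through
        messages = messages + [{
            "role": "system",
            "content": "REJECTED: citations missing or fields incomplete. "
                       "Re-emit following the contract exactly.",
        }]
    raise RuntimeError("contract still violated after retries; open Data Contracts")
```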


Validator stub (copy into your pipeline)

Step 1  parse JSON strictly → if fail, stop.
Step 2  require citations[].length ≥ 1 before answer.
Step 3  verify fields & offsets; reject if any missing.
Step 4  compute ΔS and coverage; block if ΔS>0.45 or coverage<0.70.
Step 5  log λ across three paraphrases; alert if non-convergent.
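
A minimal Python version of the five steps, reusing `delta_s` and `probe` from the probe sketch above. `corpus_lookup`, `target_len`, and the span-based coverage definition are assumptions you would replace with your own corpus access and coverage metric:

```python
import json

def span_coverage(citations: list[dict], target_len: int) -> float:
    """Fraction of the target section's characters covered by cited offsets.
    One possible definition of coverage, assumed here."""
    covered = set()
    for c in citations:
        start, end = c["offsets"]
        covered.update(range(max(start, 0), min(end, target_len)))
    return len(covered) / max(target_len, 1)

def validate(raw: str, question: str, paraphrases: list[str],
             embed, corpus_lookup, target_len: int) -> dict:
    """The five-step stub made concrete; raises on the first gate that fails."""
    # Step 1: parse JSON strictly; json.loads raises on malformed output.
    obj = json.loads(raw)

    # Step 2: require citations[].length >= 1 before accepting the answer.
    citations = obj["citations"]
    if not citations:
        raise ValueError("no citations before answer")

    # Step 3: verify fields and offsets; reject if any are missing or invalid.
    required = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}
    for c in citations:
        missing = required - set(c)
        if missing:
            raise ValueError(f"missing fields: {missing}")
        source = corpus_lookup(c["snippet_id"])  # raw snippet text from the corpus
        start, end = c["offsets"]
        if not (0 <= start < end <= len(source)):
            raise ValueError("offsets out of range for snippet")

    # Step 4: compute ΔS and coverage; block if ΔS > 0.45 or coverage < 0.70.
    cited = " ".join(corpus_lookup(c["snippet_id"])[c["offsets"][0]:c["offsets"][1]]
                     for c in citations)
    ds = delta_s(embed(question), embed(cited))  # delta_s from the probe sketch
    cov = span_coverage(citations, target_len)
    if ds > 0.45 or cov < 0.70:
        raise ValueError(f"gates failed: ΔS={ds:.2f}, coverage={cov:.2f}")

    # Step 5: check λ across the paraphrases; alert upstream if non-convergent.
    report = probe(embed, question, paraphrases, cited)
    return {"ΔS": ds, "coverage": cov, "λ_convergent": report["λ_convergent"]}
```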

Eval gates before ship

  • Cite-then-explain ≥ 0.98 on 50 queries.
  • Field completeness ≥ 0.99.
  • ΔS ≤ 0.45, coverage ≥ 0.70, λ convergent on two seeds.
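
A sketch for checking the gates over a fixed query set, assuming each record carries per-query results pooled across two seeded runs:

```python
def eval_gates(records: list[dict]) -> dict:
    """records: one dict per query, e.g.
    {"compliant": bool, "fields_ok": bool, "ΔS": float,
     "coverage": float, "λ_convergent": bool} pooled across two seeds."""
    n = len(records)
    return {
        "cite_then_explain": sum(r["compliant"] for r in records) / n,   # target ≥ 0.98
        "field_completeness": sum(r["fields_ok"] for r in records) / n,  # target ≥ 0.99
        "ΔS_ok": all(r["ΔS"] <= 0.45 for r in records),
        "coverage_ok": all(r["coverage"] >= 0.70 for r in records),
        "λ_ok": all(r["λ_convergent"] for r in records),
    }
```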

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + <your question>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|-------|------|---------------|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.