# Chunking to Embedding Contract — Guardrails and Fix Pattern
Use this page when retrieval fails because the chunk schema is not aligned with the embedding ingestion contract.
If the retriever expects fields that were never embedded, or chunks omit IDs/offsets/anchors, then citations drift and ΔS rises.
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- Snippet and citation schema: data-contracts.md
- Retrieval traceability: retrieval-traceability.md
- Chunking checklist: chunking-checklist.md
## Core acceptance

- Every chunk has `chunk_id`, `section_id`, `source_url`, `offsets`, and `tokens`.
- The embedding index was built from the same schema as the retrieval contract.
- ΔS(question, retrieved) ≤ 0.45 across 3 paraphrases.
- Coverage ≥ 0.70 to the target section (see the sketch after this list).
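A minimal sketch of these acceptance gates, assuming question and retrieved-text embeddings are already computed as `numpy` vectors. The `delta_s` helper below approximates ΔS as 1 minus cosine similarity; that proxy is an assumption for illustration, not the engine's exact definition.

```python
import numpy as np

# Fields the acceptance gate requires on every chunk (from the list above).
REQUIRED_FIELDS = {"chunk_id", "section_id", "source_url", "offsets", "tokens"}

def delta_s(q_vec: np.ndarray, r_vec: np.ndarray) -> float:
    # ΔS proxy (assumption): 1 - cosine similarity of the two embeddings.
    cos = float(np.dot(q_vec, r_vec) / (np.linalg.norm(q_vec) * np.linalg.norm(r_vec)))
    return 1.0 - cos

def passes_acceptance(chunks, paraphrase_pairs, coverage) -> bool:
    # Gate 1: every chunk carries the full field set.
    if any(REQUIRED_FIELDS - set(chunk) for chunk in chunks):
        return False
    # Gate 2: ΔS(question, retrieved) ≤ 0.45 for each of the 3 paraphrases.
    if any(delta_s(q, r) > 0.45 for q, r in paraphrase_pairs):
        return False
    # Gate 3: coverage of the target section ≥ 0.70.
    return coverage >= 0.70
```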
## Typical breakpoints and the right fix

- **Missing fields in ingestion** (e.g., no `section_id`) → enforce data-contracts.md.
- **Different schema for ingest vs retrieve** → corpus was ingested as raw text while the retriever expects chunk JSON → rebuild with the schema.
- **Offsets not tracked** → cannot map back to the original document → enforce `offsets` at ingest.
- **Tokenizer drift** → chunk IDs differ between preprocessing runs → use chunking-checklist.md (both guards are sketched below).
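Two of these breakpoints can be guarded mechanically. A sketch, assuming stored index records are plain dicts; `schema_drift` and `stable_chunk_id` are illustrative names, not part of any vector-DB API:

```python
import hashlib

# The retrieval contract, mirroring the copy-paste schema below.
CONTRACT = {"chunk_id", "section_id", "source_url", "offsets", "tokens", "text"}

def schema_drift(index_record: dict) -> set:
    # Fields the retrieval contract expects but the index never stored.
    # Non-empty result means ingest and retrieve disagree: rebuild the index.
    return CONTRACT - set(index_record)

def stable_chunk_id(source_url: str, start: int, end: int) -> str:
    # Derive the ID from the source location, not tokenizer output, so a
    # different tokenizer in a later preprocessing run cannot change it.
    key = f"{source_url}:{start}-{end}".encode("utf-8")
    return hashlib.sha1(key).hexdigest()
```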
## Fix in 60 seconds

1. **Check the ingestion schema.** Compare the fields stored in the index with the fields the retriever expects.
2. **Align contracts.** Define `chunk = {chunk_id, section_id, source_url, offsets, tokens, text}` and enforce that this exact object is used in both ingestion and retrieval (a sketch follows this list).
3. **Rebuild the index if misaligned.** If fields differ, re-ingest the corpus with the enforced schema.
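One way to make step 2 concrete: a single contract object used on both sides, sketched here with a plain dataclass and a hypothetical `to_contract` helper (pydantic or similar would work equally well).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    chunk_id: str
    source_url: str
    section_id: str
    offsets: tuple[int, int]  # [start, end) character offsets in the source
    tokens: int
    text: str

def to_contract(raw: dict) -> Chunk:
    # Raises KeyError immediately if ingestion or retrieval hands over a
    # partial record, instead of letting the mismatch surface as drift later.
    return Chunk(
        chunk_id=raw["chunk_id"],
        source_url=raw["source_url"],
        section_id=raw["section_id"],
        offsets=(raw["offsets"][0], raw["offsets"][1]),
        tokens=raw["tokens"],
        text=raw["text"],
    )

# Ingest side:   index.add(to_contract(record))          (illustrative call)
# Retrieve side: [to_contract(hit) for hit in results]   (illustrative call)
```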
## Copy-paste schema

```json
{
  "chunk_id": "uuid-v4",
  "section_id": "doc-23-sec-7",
  "source_url": "https://example.com/doc23",
  "offsets": [120, 320],
  "tokens": 512,
  "text": "...."
}
```
Target: the retriever always returns this schema, and the LLM consumes it directly. A runtime guard for that boundary is sketched below.
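To hold that guarantee at runtime, validate every retrieved payload before the LLM sees it. A sketch using the third-party `jsonschema` package (`pip install jsonschema`); `guard` is an illustrative wrapper, not part of any retriever API:

```python
from jsonschema import validate, ValidationError

# JSON Schema mirroring the copy-paste block above.
CHUNK_SCHEMA = {
    "type": "object",
    "required": ["chunk_id", "section_id", "source_url", "offsets", "tokens", "text"],
    "properties": {
        "chunk_id": {"type": "string"},
        "section_id": {"type": "string"},
        "source_url": {"type": "string"},
        "offsets": {"type": "array", "items": {"type": "integer"},
                    "minItems": 2, "maxItems": 2},
        "tokens": {"type": "integer"},
        "text": {"type": "string"},
    },
    "additionalProperties": False,
}

def guard(hits: list[dict]) -> list[dict]:
    # Drop (and log) any retrieved chunk that violates the contract,
    # so malformed records never reach the LLM.
    ok = []
    for hit in hits:
        try:
            validate(instance=hit, schema=CHUNK_SCHEMA)
            ok.append(hit)
        except ValidationError as err:
            print(f"contract violation, dropping chunk: {err.message}")
    return ok
```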
## Common gotchas

- Only `text` embedded, no IDs → cannot trace back → citations drift.
- Chunk boundaries not logged → hallucinations reappear.
- JSON schema updated mid-deploy → index mismatch.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.