
Chunking to Embedding Contract — Guardrails and Fix Pattern

🧭 Quick Return to Map

You are in a sub-page of RAG_VectorDB.
To reorient, go back to the RAG_VectorDB map.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when retrieval fails because the chunk schema is not aligned with the embedding ingestion contract.
If the retriever expects fields that were never embedded, or chunks omit IDs/offsets/anchors, then citations drift and ΔS rises.


Open these first

  • data-contracts.md: the chunk schema contract enforced on this page
  • chunking-checklist.md: keeping chunk IDs stable across preprocessing runs


Core acceptance

  • Every chunk has chunk_id, section_id, source_url, offsets, tokens.
  • The embedding index was built from the same schema as the retrieval contract.
  • ΔS(question, retrieved) ≤ 0.45 across 3 paraphrases (checked in the sketch after this list).
  • Coverage ≥ 0.70 to the target section.
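
A minimal sketch of the field and ΔS checks, assuming ΔS(question, retrieved) is 1 minus the cosine similarity of the two embedding vectors (the usual reading of ΔS here); producing the vectors and measuring coverage are left to your own embedding model and corpus:

```python
import numpy as np

# Fields the contract requires on every stored chunk.
REQUIRED_FIELDS = {"chunk_id", "section_id", "source_url", "offsets", "tokens"}

def delta_s(q_vec: np.ndarray, c_vec: np.ndarray) -> float:
    """Semantic stress, assumed here to be 1 - cosine similarity."""
    cos = float(np.dot(q_vec, c_vec) / (np.linalg.norm(q_vec) * np.linalg.norm(c_vec)))
    return 1.0 - cos

def passes_acceptance(chunk: dict, q_vec: np.ndarray, c_vec: np.ndarray) -> bool:
    """Fail fast on missing contract fields, then gate on the ΔS threshold.

    Run once per paraphrase; all three paraphrases must pass.
    """
    missing = REQUIRED_FIELDS - chunk.keys()
    if missing:
        raise ValueError(f"chunk {chunk.get('chunk_id', '?')} missing {sorted(missing)}")
    return delta_s(q_vec, c_vec) <= 0.45
```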

Typical breakpoints and the right fix

  • Missing fields in ingestion (e.g., no section_id)
    → Enforce data-contracts.md.

  • Different schema for ingest vs retrieve
    → The corpus was ingested as raw text while the retriever expects chunk JSON → rebuild the index with the enforced schema (see the drift check after this list).

  • Offsets not tracked
    → Cannot map back to original document → enforce offsets at ingest.

  • Tokenizer drift
    → Chunk IDs differ between preprocessing runs → use chunking-checklist.md.
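
A minimal drift check for the second breakpoint, assuming you can sample stored records from your index; `RETRIEVAL_CONTRACT` is simply the field set this page prescribes:

```python
# The field set this page prescribes for every retrieved chunk.
RETRIEVAL_CONTRACT = {"chunk_id", "section_id", "source_url", "offsets", "tokens", "text"}

def schema_drift(index_sample: list[dict]) -> set[str]:
    """Return fields the retriever expects but no sampled record stores."""
    stored: set[str] = set()
    for record in index_sample:
        stored |= record.keys()
    return RETRIEVAL_CONTRACT - stored
```

A non-empty result means ingest and retrieve disagree, and the index needs a rebuild, not a patch.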


Fix in 60 seconds

  1. Check ingestion schema
    Compare the fields stored in the index with the fields expected in retrieval.

  2. Align contracts
    Define chunk = {chunk_id, section_id, source_url, offsets, tokens, text}.
    Enforce that this exact object is used both in ingestion and retrieval.

  3. Rebuild index if misaligned
    If fields differ, re-ingest the corpus with the enforced schema (a sketch follows).
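
A sketch of step 3 under stated assumptions: `split_sections` and `count_tokens` below are toy stand-ins (fixed-size windows, whitespace tokens) for your real chunker and tokenizer, shown only so every contract field is produced end to end:

```python
import uuid

def split_sections(text: str, size: int = 800) -> list[tuple[int, int]]:
    """Toy sectioner: fixed-size character windows. Swap in your real chunker."""
    return [(i, min(i + size, len(text))) for i in range(0, len(text), size)]

def count_tokens(body: str) -> int:
    """Toy token count via whitespace split. Swap in your real tokenizer."""
    return len(body.split())

def reingest(doc_id: str, source_url: str, text: str) -> list[dict]:
    """Emit chunks that satisfy the contract exactly, ready to embed."""
    chunks = []
    for sec_no, (start, end) in enumerate(split_sections(text)):
        body = text[start:end]
        chunks.append({
            "chunk_id": str(uuid.uuid4()),
            "section_id": f"{doc_id}-sec-{sec_no}",
            "source_url": source_url,
            "offsets": [start, end],
            "tokens": count_tokens(body),
            "text": body,
        })
    return chunks
```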


Copy-paste schema

```json
{
  "chunk_id": "uuid-v4",
  "section_id": "doc-23-sec-7",
  "source_url": "https://example.com/doc23",
  "offsets": [120, 320],
  "tokens": 512,
  "text": "...."
}
```

Target: the retriever always returns this schema and the LLM consumes it directly; the typed contract below is one way to enforce that at both ends.
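
A minimal sketch of that enforcement using a `TypedDict`; pydantic or a JSON Schema validator would serve the same purpose:

```python
from typing import TypedDict

class Chunk(TypedDict):
    chunk_id: str       # uuid-v4
    section_id: str     # e.g. "doc-23-sec-7"
    source_url: str
    offsets: list[int]  # [start, end] character offsets into the source
    tokens: int
    text: str

def validate(chunk: dict) -> Chunk:
    """Reject any object that does not carry every contract field."""
    missing = Chunk.__annotations__.keys() - chunk.keys()
    if missing:
        raise ValueError(f"contract violation, missing {sorted(missing)}")
    return chunk  # type: ignore[return-value]
```

Call `validate` at ingest before embedding and again on every retrieval hit, so a drifted index fails loudly instead of silently returning bare text.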


Common gotchas

  • Only text embedded, no IDs → cannot trace back → citations drift (see the traceability check below).
  • Chunk boundaries not logged → hallucinations reappear.
  • JSON schema updated mid-deploy → index mismatch.
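
The first two gotchas are cheap to catch: replay each chunk's offsets against the source document and demand an exact match (assumes offsets are character positions, as in the schema above):

```python
def check_traceability(chunks: list[dict], source_text: str) -> list[str]:
    """Return chunk_ids whose offsets no longer reproduce their own text."""
    broken = []
    for c in chunks:
        start, end = c["offsets"]
        if source_text[start:end] != c["text"]:
            broken.append(c["chunk_id"])
    return broken
```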

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT based Singularity tension engine, 131 S class set |
| Map | Problem Map 1.0 | Flagship 16 problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandmas Clinic | Plain language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text to image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.