
Chunking Checklist — Guardrails and Minimal Fixes

🧭 Quick Return to Map

You are in a sub-page of Chunking.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A field guide to stabilize document chunking before you touch embeddings or retrievers. Use this page to locate the boundary failure, apply the structural fix, and verify with measurable targets.

Open these first

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage of target section ≥ 0.70
  • λ remains convergent across 3 paraphrases and 2 seeds
  • Citation match ≥ 0.90 when citations exist
  • Bleed rate ≤ 0.10 across boundaries
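These targets can be wired into an eval harness as a simple gate. A minimal sketch, assuming your harness emits a flat metrics dict with these keys (the dict shape and function names are illustrative; λ convergence across paraphrases and seeds is checked separately and omitted here):

```python
# Core acceptance gate. Thresholds mirror the checklist above;
# the metrics dict shape is a hypothetical harness output.
ACCEPTANCE = {
    "delta_s_max": 0.45,    # ΔS(question, retrieved) ceiling
    "coverage_min": 0.70,   # share of target section covered
    "citation_min": 0.90,   # citation match, when citations exist
    "bleed_max": 0.10,      # cross-boundary token bleed ceiling
}

def passes_acceptance(metrics: dict) -> bool:
    """Return True only when every applicable core target is met."""
    ok = (
        metrics["delta_s"] <= ACCEPTANCE["delta_s_max"]
        and metrics["coverage"] >= ACCEPTANCE["coverage_min"]
        and metrics["bleed"] <= ACCEPTANCE["bleed_max"]
    )
    # Citation match only applies when citations exist in the run.
    if metrics.get("citation_match") is not None:
        ok = ok and metrics["citation_match"] >= ACCEPTANCE["citation_min"]
    return ok
```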

60-second fix checklist

  1. Lock the schema

    • Require fields: chunk_id, section_id, source_url, offsets, tokens, hash.
    • Spec: data-contracts.md
  2. Probe ΔS and λ

    • Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    • If λ flips on paraphrase, reorder headers and clamp with your variance policy.
  3. Repair the boundary

    • If headings drift: apply title hierarchy and section detection.
    • If tables or code are cut: switch to block aware splitting.
    • If recall is high but the meaning is wrong: review the metric, overlap, and anchors.
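The ΔS probe in step 2 can be sketched as follows, assuming ΔS is computed as 1 minus cosine similarity between embedding vectors. The exact metric is defined by your WFGY setup, so this stand-in is illustrative only:

```python
import math

def delta_s(vec_a: list[float], vec_b: list[float]) -> float:
    """Illustrative ΔS: 1 - cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

# Probe both directions named in step 2:
#   delta_s(question_vec, retrieved_vec)
#   delta_s(retrieved_vec, anchor_vec)
```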

Typical breakpoints → exact fix


Minimal field schema for chunks

Required in every pipeline that cites or reranks by section.

{
  "chunk_id": "docA#s03#p002",
  "section_id": "3. Methods",
  "source_url": "https://example.com/docA.pdf",
  "offsets": [12345, 12980],
  "tokens": 365,
  "hash": "sha1:8c1e…",
  "block_type": "paragraph|table|code|formula",
  "anchor": "first-assertion-or-key-sentence"
}
  • offsets are byte or char positions in the canonical text.
  • anchor is the semantic kernel used for cite-first prompting.
  • Schema details: data-contracts.md
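A minimal validator for this schema, as a sketch. The required field names come from the example above; the offset and token sanity checks are added assumptions, not part of the spec:

```python
# Field names taken from the schema example above.
REQUIRED = {"chunk_id", "section_id", "source_url", "offsets", "tokens", "hash"}

def validate_chunk(chunk: dict) -> list[str]:
    """Return a list of problems; an empty list means the chunk passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - chunk.keys())]
    offsets = chunk.get("offsets")
    # Assumed convention: offsets are [start, end) in the canonical text.
    if isinstance(offsets, list) and len(offsets) == 2 and offsets[0] >= offsets[1]:
        problems.append("offsets must be [start, end) with start < end")
    if isinstance(chunk.get("tokens"), int) and chunk["tokens"] <= 0:
        problems.append("tokens must be positive")
    return problems
```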

How to chunk correctly

  1. Build the section tree

    • Detect true headings, roman numerals, numbered lists, and faux headings.
    • See title_hierarchy.md, section_detection.md.
  2. Respect block boundaries

    • Keep tables, code, formulas, and block quotes intact.
    • See code_tables_blocks.md.
  3. Decide overlap deliberately

    • Start with 10-15% overlap for narrative text.
    • Avoid overlap on block types unless the block spans pages.
    • See overlap_tradeoffs.md.
  4. Use semantic anchors

    • Extract the first high-information assertion per chunk.
    • Store as anchor.
    • See semantic_anchors.md.
  5. Choose windowing

    • Fixed windows for strict citation tasks.
    • Sliding windows when reranking later.
    • See sliding_window.md.
  6. Handle multilingual and CJK

    • Normalize punctuation and width.
    • Align sentence boundaries.
    • See multilingual_segmentation.md.
  7. PDF and OCR specifics

    • De-columnize, repair hard line breaks, remove headers and footers.
    • See pdf_layouts_and_ocr.md.

Evaluation protocol

  • Coverage: percent of ground-truth answer tokens contained inside retrieved chunks.
  • ΔS: distance between question and retrieved text vs the expected anchor section.
  • Bleed rate: percent of tokens from outside the intended section.
  • Citation match: exact hit or overlap of the cited offsets.
  • Stability: metrics across 3 paraphrases and 2 seeds.

Small gold set template is provided in eval_chunk_quality.md.
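The coverage and bleed metrics above can be sketched with plain whitespace tokens. A production pipeline should reuse the tokenizer of its embedding model; the simple `str.split` here is an assumption for illustration:

```python
def coverage(answer: str, retrieved: str) -> float:
    """Share of ground-truth answer tokens present in the retrieved chunks."""
    answer_toks = answer.split()
    if not answer_toks:
        return 0.0
    retrieved_set = set(retrieved.split())
    return sum(1 for t in answer_toks if t in retrieved_set) / len(answer_toks)

def bleed_rate(retrieved: str, section: str) -> float:
    """Share of retrieved tokens that come from outside the target section."""
    toks = retrieved.split()
    if not toks:
        return 0.0
    section_set = set(section.split())
    return sum(1 for t in toks if t not in section_set) / len(toks)
```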


Reproducible test

  1. Pick 10 QAs per section. Mark expected section ids.
  2. Run retrieval at k in {5, 10, 20}. Log ΔS, coverage, bleed, match.
  3. If ΔS ≥ 0.60 or bleed > 0.10, repair boundary and repeat.
  4. Pass when all core targets are met.
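The four steps can be folded into one probe loop. `retrieve` and `measure` are hypothetical hooks into your own retriever and eval harness; only the thresholds and k values come from the steps above:

```python
def run_probe(qa_set, retrieve, measure, ks=(5, 10, 20)):
    """Return (section_id, k, metrics) tuples that need a boundary repair."""
    needs_repair = []
    for qa in qa_set:
        for k in ks:
            chunks = retrieve(qa["question"], k=k)
            m = measure(qa, chunks)  # -> {"delta_s", "coverage", "bleed", "match"}
            # Repair threshold from step 3 of the reproducible test.
            if m["delta_s"] >= 0.60 or m["bleed"] > 0.10:
                needs_repair.append((qa["section_id"], k, m))
    return needs_repair
```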

Copy-paste prompt for LLM assist

You have TXT OS and the WFGY Problem Map loaded.

My chunking issue:
- symptom: [one line]
- probes: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., coverage=..., bleed=...
- context: store={faiss|qdrant|pgvector|...}, k={5,10,20}

Tell me:
1) which boundary failed (heading, block, overlap, window, pdf/ocr),
2) the exact WFGY page to open for the fix,
3) the minimal steps to push ΔS ≤ 0.45 and coverage ≥ 0.70,
4) a short test I can run to verify. Use BBMC/BBCR/BBPF/BBAM when relevant.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|-------|------|---------------|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.