# Tokenization and Casing — Guardrails and Fix Pattern
## 🧭 Quick Return to Map
You are in a sub-page of RAG_VectorDB.
To reorient, go back here:
- RAG_VectorDB — vector databases for retrieval and grounding
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when retrieval fails because the text was chunked or embedded under inconsistent tokenization or casing rules.
This is common when corpus ingestion applies one tokenizer (e.g., SentencePiece or BPE) while queries go through another, or when uppercase/lowercase mismatches create retrieval drift.
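A quick way to confirm this is to run one string through both tokenizers and compare. A minimal sketch, assuming the Hugging Face `transformers` package is installed; the two model names are placeholders for whatever your ingestion and query paths actually load:

```python
from transformers import AutoTokenizer

# Placeholder models: a cased SentencePiece tokenizer vs an uncased WordPiece one.
corpus_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
query_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Résumé Parsing in İstanbul"
print(corpus_tok.tokenize(text))  # keeps case and accents
print(query_tok.tokenize(text))   # lowercases and strips accents

# Different token sequences mean corpus and queries disagree before any
# embedding happens: unify on one tokenizer, then re-index.
```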
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- Chunking checklist: chunking-checklist.md
- Retrieval traceability: retrieval-traceability.md
- Embedding vs meaning: embedding-vs-semantic.md
## Core acceptance
- Corpus and query tokenizers are identical.
- ΔS(question, retrieved) ≤ 0.45, stable under three paraphrases.
- λ remains convergent across casing variants.
- Coverage ≥ 0.70 for the target section.
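As a quick probe, ΔS can be computed directly from the embeddings. A minimal sketch, assuming ΔS is taken as 1 minus the cosine similarity between the question vector and the retrieved-chunk vector (names are illustrative):

```python
import numpy as np

def delta_s(question_vec: np.ndarray, retrieved_vec: np.ndarray) -> float:
    # ΔS as 1 - cosine similarity: 0 means aligned, larger values mean drift.
    cos = np.dot(question_vec, retrieved_vec) / (
        np.linalg.norm(question_vec) * np.linalg.norm(retrieved_vec)
    )
    return 1.0 - float(cos)

# Accept only if delta_s(...) ≤ 0.45 and it stays there for three paraphrases.
```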
## Typical breakpoints and the right fix

- Different tokenizers for corpus vs query
  → Rebuild the index with a unified tokenizer. See chunking-checklist.md.
- Casing drift (retrieval fails when the query contains capitalized or accented terms)
  → Apply consistent lowercasing or Unicode case folding before embedding.
- Unicode variants (fullwidth vs halfwidth, accented vs base letters)
  → Normalize text with NFC or NFKC before chunking.
- Mixed-language tokenization (CJK, Latin, and Indic scripts split differently)
  → Align the multilingual tokenizer with the embedding model's assumptions.
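All four fixes above reduce to one rule: run identical preprocessing on the ingestion path and the query path. A minimal sketch, assuming an `embed_fn` callable that wraps your embedding model (the name is hypothetical):

```python
import unicodedata

def preprocess(text: str) -> str:
    # NFKC unifies fullwidth/halfwidth and compatibility forms;
    # casefold() handles ß and similar edge cases better than lower().
    return unicodedata.normalize("NFKC", text).casefold()

def embed_chunks(chunks, embed_fn):
    # The same preprocessing at ingestion time...
    return [embed_fn(preprocess(chunk)) for chunk in chunks]

def embed_query(query, embed_fn):
    # ...and at query time, so both sides land in the same vector space.
    return embed_fn(preprocess(query))
```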
## Fix in 60 seconds

- Check tokenizer logs
  Sample corpus and query text, run both through the same tokenizer, and compare token IDs.
- Case-fold
  Apply `.lower()` or Unicode case folding to both corpus and queries before embedding.
- Normalize Unicode
  Use `unicodedata.normalize("NFKC", text)` to ensure consistency.
- Re-index if drift is found
  If tokenization differs, rebuild the embedding index after enforcing the preprocessing rules (see the drift-check sketch after this list).
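For the re-index step, a drift check can flag chunks that were embedded before the rules were enforced. A sketch, assuming `index` is a plain dict mapping chunk ids to the raw text that was embedded (hypothetical, not a real vector-DB API):

```python
import unicodedata

def norm(text: str) -> str:
    return unicodedata.normalize("NFKC", text).casefold()

def stale_chunk_ids(index: dict) -> list:
    # A chunk whose stored text differs from its normalized form was embedded
    # before preprocessing was enforced and should be re-embedded.
    return [cid for cid, raw in index.items() if norm(raw) != raw]
```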
## Copy-paste probe

```python
import unicodedata

def normalize_and_lower(text):
    # NFKC handles width/compatibility forms but keeps accents, so "Résumé"
    # would stay "résumé". NFKD splits accents into combining marks that can
    # then be dropped for accent-insensitive matching.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch)).lower()

sample = "Résumé vs Resume"
print(normalize_and_lower(sample))
# → "resume vs resume"
```
Target: queries and corpus map to the same normalized form.
## Common gotchas

- Chunked with SentencePiece but queries fed through a default BPE tokenizer → mismatch.
- Language-specific casing (Turkish dotted i, German ß) → normalize before embedding.
- Multilingual queries that mix scripts → ensure the same tokenizer config across all corpora.
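The casing gotchas are easy to reproduce in plain Python: `str.casefold()` handles German ß, while the Turkish dotted/dotless i is locale-sensitive and survives naive lowercasing as a combining mark.

```python
# German ß folds to "ss" under casefold(), but not under lower().
print("Straße".lower())     # → "straße"
print("Straße".casefold())  # → "strasse"

# Turkish capital İ lowercases to "i" + U+0307 (combining dot above),
# which will not match a plain ASCII "i" without Unicode normalization.
print("İstanbul".lower() == "istanbul")  # → False
```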
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.