# Tokenizer Mismatch — Guardrails and Fix Patterns
Stabilize multilingual retrieval when the query and the index segment text differently. Typical pain shows up in Chinese, Japanese, Thai, Khmer, and any mixed-script corpus where whitespace is unreliable.
## Open these first
- Visual map and recovery: rag-architecture-and-recovery.md
- Chunking checklist: chunking-checklist.md
- Retrieval traceability: retrieval-traceability.md
- Schema fence for snippets: data-contracts.md
- Embedding vs meaning: embedding-vs-semantic.md
Related multilingual pages in this folder:
- Guide overview: multilingual_guide.md
- Script direction and mixing: script_mixing.md
- Locale and analyzer drift: locale_drift.md
- HyDE behavior across languages: hyde_multilingual.md
## Core acceptance targets
- ΔS(question, retrieved) ≤ 0.45 for both English and target language
- Coverage of target section ≥ 0.70 after repair
- λ remains convergent across three paraphrases in the target language
- E_resonance flat across 50+ queries that mix scripts and numbers
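The first two targets can be checked in code. Below is a minimal sketch, assuming ΔS is 1 minus the cosine similarity of question and retrieved-context embeddings and coverage is token overlap with the target section; `embed` is a placeholder for your model, and the λ and E_resonance probes stay qualitative here.

```python
# Minimal sketch of the two scalar acceptance targets. Assumptions:
# ΔS = 1 - cosine similarity of embeddings, coverage = fraction of
# target-section tokens found in the retrieved text. `embed` is a
# placeholder for your embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in your embedding model")

def delta_s(question: str, retrieved: str) -> float:
    # acceptance target: <= 0.45 in both languages
    q, r = embed(question), embed(retrieved)
    return 1.0 - float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)))

def coverage(target_section: str, retrieved: str) -> float:
    # acceptance target: >= 0.70 after repair; whitespace split is a
    # stand-in, so for CJK reuse your index segmenter here
    want, got = set(target_section.split()), set(retrieved.split())
    return len(want & got) / max(len(want), 1)
```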
## What this failure looks like
| Symptom | Likely cause | Where to fix |
|---|---|---|
| Chinese or Japanese queries return no hits, while English paraphrase works | Query tokenizer uses whitespace, index uses character n-gram or vice versa | Switch to language-aware analyzers; unify query and index pipelines |
| BM25 recall ok but citations land in the wrong sub-section | Token boundary misalignment between chunker and retriever | Rechunk with stable boundaries and same segmentation as the store |
| Hybrid retrieval underperforms a single retriever | Mixed analyzers per stage, reranker sees inconsistent text | Normalize text pre-rerank and re-embed with the same tokenizer |
| High similarity yet wrong meaning | Embedding model trained with different normalization or casing rules | Re-embed with consistent normalization; see Embedding ≠ Semantic |
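The first row of the table is easy to reproduce offline. A minimal sketch with no external dependencies, showing why a whitespace query tokenizer can never hit a character-bigram index:

```python
# A whitespace tokenizer treats an unsegmented Chinese query as one
# opaque token, so it shares zero terms with an index built on
# character bigrams.
def whitespace_tokens(text: str) -> list[str]:
    return text.split()

def char_bigrams(text: str) -> list[str]:
    compact = "".join(text.split())
    return [compact[i:i + 2] for i in range(len(compact) - 1)]

query = "向量检索失败"  # "vector retrieval failure"
print(whitespace_tokens(query))  # ['向量检索失败'] — one token, zero overlap
print(char_bigrams(query))       # ['向量', '量检', '检索', '索失', '失败']
```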
## Fix in 60 seconds

1. **Measure ΔS.** Run the same question in English and the target language. If ΔS differs by more than 0.15, suspect tokenizer mismatch.
2. **Probe λ_observe.** Paraphrase the non-English query three ways. If λ flips between convergent and divergent when you reorder headers, lock the prompt schema and proceed to analyzer unification.
3. **Apply the smallest structural change.**
   - If your store is lexical: set the same analyzer for both index and query.
   - If your store is vector-only: normalize and segment before embedding, then re-embed a small gold set and verify.
4. **Verify.** Coverage ≥ 0.70 and ΔS ≤ 0.45 on three paraphrases. Log the analyzer and normalization used.
## Minimal repair recipes by stack

### Elasticsearch / OpenSearch

1. Pick one analyzer for CJK fields and use it for both indexing and queries. Options that work in practice:
   - Japanese: kuromoji
   - Korean: nori
   - Chinese: smartcn or ICU + bigram filter
2. Add a keyword subfield for exact filters and rerank features.
3. Normalize to NFC and convert full-width to half-width for digits and ASCII.
4. Rebuild only the affected indices, then re-run the gold set. A minimal settings sketch follows this list. See: retrieval-playbook.md
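A minimal settings sketch for the Japanese case, using the official elasticsearch Python client. It assumes the analysis-kuromoji plugin is installed; the index name, field name, and cluster URL are placeholders.

```python
# One Japanese text field analyzed with kuromoji at both index and
# query time, plus a keyword subfield for exact filters. Assumes the
# analysis-kuromoji plugin is installed; names and URL are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="docs_ja",
    mappings={
        "properties": {
            "body": {
                "type": "text",
                "analyzer": "kuromoji",         # index-time analyzer
                "search_analyzer": "kuromoji",  # same pipeline at query time
                "fields": {
                    "raw": {"type": "keyword"}  # exact filters, rerank features
                },
            }
        }
    },
)
```

For Korean or Chinese, swap in nori or smartcn; the point is that analyzer and search_analyzer name the same pipeline.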
### BM25 in code or light stores (Chroma, SQLite FTS, etc.)

- For CJK and Thai, use character bigrams or trigrams on both index and query, as in the sketch below.
- Remove language-specific stopwords when the language is unknown.
- Keep punctuation normalization consistent across stages. See: pattern_query_parsing_split.md
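A minimal sketch of the bigram approach, assuming the rank-bm25 package; the sample corpus strings are illustrative only.

```python
# BM25 over character bigrams, with the same bigram function applied
# to both index and query. Assumes the rank-bm25 package.
from rank_bm25 import BM25Okapi

def bigrams(text: str) -> list[str]:
    compact = "".join(text.split())
    return [compact[i:i + 2] for i in range(len(compact) - 1)] or [compact]

corpus = [
    "向量检索在中文语料上失败",
    "トークナイザの不一致で再現率が落ちる",
    "BM25 works fine on English",
]
bm25 = BM25Okapi([bigrams(doc) for doc in corpus])

query = "中文检索失败"
scores = bm25.get_scores(bigrams(query))  # identical segmentation on the query side
print(max(zip(scores, corpus)))           # best-scoring document
```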
### Vector stores (FAISS, Milvus, pgvector, Weaviate, Qdrant)

1. Do not trust the model to “fix” segmentation. Pre-normalize text (see the sketch after this list):
   - Unicode NFC
   - Lowercase where appropriate
   - Full-width to half-width for numbers and ASCII
   - Optional: insert spaces between CJK and ASCII tokens for stable SentencePiece tokenization
2. Re-embed both corpus and queries with the exact same normalization script.
3. If recall is still low, add a lexical recall stage, then rerank. See: vectorstore-fragmentation.md
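A minimal sketch of that normalizer. The CJK ranges here (kana plus the main Han block) are an assumption; widen them for your corpus.

```python
# Pre-embedding normalizer: NFC, full-width -> half-width for ASCII and
# digits, lowercase, optional spaces between CJK and ASCII runs.
import re
import unicodedata

_FULLWIDTH = {cp: cp - 0xFEE0 for cp in range(0xFF01, 0xFF5F)}  # ！..～ -> !..~
_FULLWIDTH[0x3000] = 0x20  # ideographic space -> ASCII space

_CJK = r"\u3040-\u30ff\u4e00-\u9fff"  # kana + main Han block (assumption)
_CJK_ASCII = re.compile(rf"([{_CJK}])([A-Za-z0-9])")
_ASCII_CJK = re.compile(rf"([A-Za-z0-9])([{_CJK}])")

def normalize(text: str, space_cjk_ascii: bool = True) -> str:
    text = unicodedata.normalize("NFC", text)  # one consistent Unicode form
    text = text.translate(_FULLWIDTH)          # full-width -> half-width
    text = text.lower()                        # lowercase where appropriate
    if space_cjk_ascii:                        # optional, for stable SentencePiece
        text = _CJK_ASCII.sub(r"\1 \2", text)
        text = _ASCII_CJK.sub(r"\1 \2", text)
    return text

# Run the exact same function over corpus and queries before embedding.
print(normalize("ＧＰＴ－４与向量检索ＡＢＣ"))  # -> "gpt-4 与向量检索 abc"
```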
## Diagnostic checklist

- Query and index use the same tokenizer or analyzer.
- Chunker segmentation matches the retriever segmentation.
- Unicode form is consistent; half-width and diacritics are normalized (see the audit sketch below).
- Mixed scripts do not flip direction or join tokens incorrectly.
- Rerank stage sees normalized text, not raw captures.
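The Unicode items can be audited mechanically. A minimal sketch that flags snippets which skipped the shared normalizer:

```python
# Flags snippets that are not NFC-normalized or that still carry
# full-width ASCII, which usually means a stage skipped the shared
# normalizer.
import unicodedata

def audit(snippets: list[str]) -> list[tuple[int, str]]:
    problems = []
    for i, text in enumerate(snippets):
        if not unicodedata.is_normalized("NFC", text):
            problems.append((i, "not NFC"))
        if any(0xFF01 <= ord(ch) <= 0xFF5E for ch in text):
            problems.append((i, "full-width ASCII remains"))
    return problems

print(audit(["ＡＢＣ１２３", "clean abc123"]))  # -> [(0, 'full-width ASCII remains')]
```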
## Copy-paste tests

### Three-language ΔS probe
Question: "<your question>"
Languages: English, Chinese, Japanese
For each language:
1. Retrieve top-k with your current settings.
2. Compute ΔS(question, retrieved). Record λ_state.
3. If ΔS differs by > 0.15 across languages, suspect tokenizer mismatch.
Return a table: language, ΔS, λ_state, analyzer/normalizer used.
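The same probe as a script, a minimal sketch: `embed` and `retrieve` are placeholders you wire to your own stack, and ΔS is again assumed to be 1 minus cosine similarity.

```python
# Scripted version of the three-language probe above.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in your embedding model")

def retrieve(question: str, k: int = 5) -> str:
    raise NotImplementedError("plug in your retriever")

def delta_s(question: str, context: str) -> float:
    # same definition as the metrics sketch near the top of this page
    a, b = embed(question), embed(context)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def probe(questions: dict[str, str]) -> None:
    rows = [(lang, delta_s(q, retrieve(q))) for lang, q in questions.items()]
    for lang, score in rows:
        print(f"{lang}\tΔS={score:.3f}")
    spread = max(s for _, s in rows) - min(s for _, s in rows)
    print("suspect tokenizer mismatch" if spread > 0.15 else "within tolerance")

# probe({"English": "...", "Chinese": "...", "Japanese": "..."})
```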
### Schema lock for reruns
Header order: citations → facts → synthesis → caveats.
If λ flips when you change header order, lock this schema and fix analyzers first.
## When to escalate

- ΔS remains ≥ 0.60 after analyzer unification: re-chunk with stable boundaries and re-embed a gold slice. Open: chunking-checklist.md
- Citations still jump across sections after repair: enforce the snippet schema and forbid cross-section reuse. Open: data-contracts.md, retrieval-traceability.md
- Hybrid retrieval beats a single retriever only intermittently: align analyzers in both stages and rerank deterministically. Open: rerankers.md
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.