# Emoji ZWJ & Grapheme Clusters: Guardrails and Fix Pattern

Stabilize retrieval and reasoning when user text contains emoji sequences, skin-tone modifiers, variation selectors, and ZWJ chains. The goal is to keep chunking, indexing, and evaluation aligned with grapheme clusters instead of raw code points.
## What this page is
- A compact repair guide for corpora and queries that contain emojis or complex grapheme clusters.
- Structural fixes that do not require infrastructure changes.
- Concrete steps with measurable acceptance targets.
## When to use
- Family or profession emojis break apart into multiple unrelated tokens.
- Skin tone or gender variants collapse to the base pictograph.
- Variation Selector-16 (U+FE0F) or ZWJ (U+200D) disappears during export.
- Top-k looks similar but answers flip on messages that include emojis.
- Citations fail to match because offsets count code points instead of graphemes.
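The offset symptom above is easy to reproduce. The sketch below is a deliberately simplified clustering rule (ZWJ, VS-16, and skin-tone modifiers extend the previous cluster), not full UAX #29 segmentation; it exists only to show why code-point offsets drift from grapheme offsets. `grapheme_count` is an illustrative name, not a WFGY API.

```python
# Simplified grapheme clustering: ZWJ (U+200D), VS-16 (U+FE0F), and
# skin-tone modifiers (U+1F3FB..U+1F3FF) glue onto the previous cluster.
# A teaching approximation, not full UAX #29 boundary detection.

ZWJ = "\u200D"
VS16 = "\uFE0F"
SKIN_TONES = {chr(cp) for cp in range(0x1F3FB, 0x1F400)}

def grapheme_count(text: str) -> int:
    count = 0
    joined = False  # True while the previous char joins to the next one
    for ch in text:
        if ch == ZWJ:
            joined = True       # next char belongs to the current cluster
            continue
        if ch == VS16 or ch in SKIN_TONES or joined:
            joined = False      # extends the current cluster
            continue
        count += 1              # starts a new cluster
    return count

family = "\U0001F468\u200D\U0001F469\u200D\U0001F467"  # 👨‍👩‍👧
print(len(family))             # 5 code points
print(grapheme_count(family))  # 1 grapheme cluster
```

If your citation offsets come from `len()`-style counting, every emoji sequence before a span shifts the highlight; grapheme counting keeps spans stable.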
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Traceability and snippet schema: Retrieval Traceability
- Payload schema: Data Contracts
- Chunking checklist: Chunking Checklist
- Tokenizer mismatch in this folder: tokenizer_mismatch.md
- Width and punctuation pitfalls: digits_width_punctuation.md
## Core acceptance
- ΔS(question, retrieved) ≤ 0.45
- Coverage of target section ≥ 0.70
- λ stays convergent across three paraphrases and two seeds
- Offsets and spans are grapheme accurate in citations
## Typical symptoms → exact fix
| Symptom | Cause | Open this |
|---|---|---|
| 👨‍👩‍👧 breaks into four tokens and retrieval misses context | word-break at code points instead of grapheme clusters | Chunking Checklist, Retrieval Playbook |
| Skin-tone or gender variants normalize to base emoji | aggressive folding or NFKD pipeline drops modifiers | Data Contracts, Retrieval Traceability |
| Offsets in citations do not match UI highlights | span counting by UTF-16 units or code points | Retrieval Traceability |
| Answers flip when messages include emojis | tokenizer mismatch between embedder and store | tokenizer_mismatch.md |
| High similarity yet wrong meaning on chat logs | punctuation or ZWJ stripped during export | digits_width_punctuation.md, Retrieval Playbook |
## 60-second fix checklist

- **Normalize without destroying intent.** Use NFC only. Do not fold ZWJ `U+200D`, VS-16 `U+FE0F`, or skin-tone modifiers `U+1F3FB`–`U+1F3FF`.
- **Grapheme-aware chunking.** Use ICU rules or a library that splits on grapheme clusters. Prefer `\X` over `.` in regex engines that support it.
- **Index two tracks when needed.** Store `text_raw` and `text_search`. `text_raw` keeps exact clusters for citation; `text_search` may apply safe normalizations for recall.
- **Tokenizer alignment.** Match the embedder and store analyzers. If the store lacks grapheme awareness, rerank with a grapheme-aware stage.
- **Traceability contract.** The snippet payload must carry `offset_grapheme_start`, `offset_grapheme_end`, and the exact substring for audit.
- **Observability probes.** Log counts of ZWJ, VS-16, and skin-tone modifiers per snippet. Spikes often reveal faulty exporters.
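The normalization and observability items above can be sketched with the standard library alone. `normalize_for_search` and `modifier_counts` are illustrative names under our assumptions, not part of any WFGY or store API; real grapheme segmentation should come from ICU or an equivalent library.

```python
import unicodedata

ZWJ, VS16 = "\u200D", "\uFE0F"
SKIN_TONES = {chr(cp) for cp in range(0x1F3FB, 0x1F400)}

def normalize_for_search(text: str) -> str:
    # NFC only: canonical composition merges combining marks but never
    # removes ZWJ, VS-16, or skin-tone modifiers. NFKD plus aggressive
    # compatibility folding is what destroys emoji variants.
    return unicodedata.normalize("NFC", text)

def modifier_counts(snippet: str) -> dict:
    # Observability probe: log these per snippet. A sudden drop to zero
    # across an export boundary usually means joiners are being stripped.
    return {
        "zwj": snippet.count(ZWJ),
        "vs16": snippet.count(VS16),
        "skin_tone": sum(ch in SKIN_TONES for ch in snippet),
    }

astronaut = "\U0001F469\U0001F3FE\u200D\U0001F680"  # 👩🏾‍🚀
print(modifier_counts(astronaut))  # {'zwj': 1, 'vs16': 0, 'skin_tone': 1}
```

Run `modifier_counts` on both `text_raw` and `text_search` tracks; the counts should match, since safe recall normalizations must not touch these code points.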
## Deep diagnostics

- **Three-paraphrase probe.** Ask the same question three ways, with and without emojis. If λ flips only when emojis appear, the tokenizer path is the root cause.
- **Anchor triangulation.** Compare ΔS against the intended message versus a decoy message that differs only by emoji variants. If the scores are close, rebuild the index with grapheme-aware chunking.
- **Exporter audit.** Validate that CSV, HTML, or PDF exporters preserve ZWJ and VS-16. Many pipelines silently drop them.
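The exporter audit reduces to counting joiners before and after a round trip. A minimal sketch; `audit_export` is a hypothetical helper, and the faulty exporter is simulated here rather than a real CSV library.

```python
ZWJ, VS16 = "\u200D", "\uFE0F"

def audit_export(original: str, exported: str) -> list:
    """Flag joiner characters an exporter silently dropped."""
    issues = []
    for name, ch in (("ZWJ U+200D", ZWJ), ("VS-16 U+FE0F", VS16)):
        before, after = original.count(ch), exported.count(ch)
        if after < before:
            issues.append(f"{name} dropped: {before} -> {after}")
    return issues

# Simulate a faulty exporter that strips ZWJ, as many pipelines do:
source = "family: \U0001F468\u200D\U0001F469\u200D\U0001F467"
exported = source.replace(ZWJ, "")
print(audit_export(source, exported))  # ['ZWJ U+200D dropped: 2 -> 0']
```

Run the audit on a fixed probe document at every export boundary; a non-empty result pinpoints which stage in the pipeline is lossy.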
## Copy-paste prompt

```txt
You have TXT OS and the WFGY Problem Map loaded.

My emoji issue:

* symptom: [one line]
* traces: ΔS(question,retrieved)=..., λ states across 3 paraphrases, grapheme offsets present or missing.

Tell me:

1. the failing layer and why,
2. the exact WFGY page to open,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify with a reproducible test.

Use BBMC, BBCR, BBPF, BBAM when relevant.
```
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙‍♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 **Early Stargazers**: See the Hall of Fame. Engineers, hackers, and open source builders who supported WFGY from day one.

⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.