# Emoji ZWJ & Grapheme Clusters: Guardrails and Fix Pattern
### 🧭 Quick Return to Map

You are in a sub-page of LanguageLocale. To reorient, go back here:

- LanguageLocale — localization, regional settings, and context adaptation
- WFGY Global Fix Map — the main Emergency Room, with 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Stabilize retrieval and reasoning when user text contains emoji sequences, skin-tone modifiers, variation selectors, and ZWJ chains. The goal is to keep chunking, indexing, and evaluation aligned with grapheme clusters instead of raw code points.
## What this page is

- A compact repair guide for corpora and queries that contain emoji or complex grapheme clusters.
- Structural fixes that do not require infrastructure changes.
- Concrete steps with measurable acceptance targets.
## When to use
- Family or profession emojis break apart into multiple unrelated tokens.
- Skin tone or gender variants collapse to the base pictograph.
- Variation Selector-16 (FE0F) or ZWJ (U+200D) disappears during export.
- Top-k looks similar but answers flip on messages that include emojis.
- Citations fail to match because offsets count code points instead of graphemes.
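The offset symptom is easy to reproduce: a single visible emoji can span several code points and even more UTF-16 units, so any two layers that count in different units will disagree on spans. A minimal stdlib-only sketch:

```python
# One visible emoji, three different "lengths" depending on the counting unit.
# 👩🏽‍🚀 = WOMAN (U+1F469) + skin tone (U+1F3FD) + ZWJ (U+200D) + ROCKET (U+1F680)
astronaut = "\U0001F469\U0001F3FD\u200D\U0001F680"

code_points = len(astronaut)                           # Python indexes code points
utf16_units = len(astronaut.encode("utf-16-le")) // 2  # JS/Java-style indexing
graphemes = 1                                          # what the user sees

print(code_points, utf16_units, graphemes)  # → 4 7 1
```

If the retriever stores code-point offsets and the UI highlights by UTF-16 units, every citation that follows an emoji lands in the wrong place.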
## Open these first
- Visual map and recovery: RAG Architecture & Recovery
- End to end retrieval knobs: Retrieval Playbook
- Traceability and snippet schema: Retrieval Traceability
- Payload schema: Data Contracts
- Chunking checklist: Chunking Checklist
- Tokenizer mismatch in this folder: tokenizer_mismatch.md
- Width and punctuation pitfalls: digits_width_punctuation.md
## Core acceptance
- ΔS(question, retrieved) ≤ 0.45
- Coverage of target section ≥ 0.70
- λ stays convergent across three paraphrases and two seeds
- Offsets and spans are grapheme accurate in citations
## Typical symptoms → exact fix
| Symptom | Cause | Open this |
|---|---|---|
| 👨‍👩‍👧 breaks into multiple tokens and retrieval misses context | word-break at code points instead of grapheme clusters | Chunking Checklist, Retrieval Playbook |
| Skin-tone or gender variants normalize to base emoji | aggressive folding or NFKD pipeline drops modifiers | Data Contracts, Retrieval Traceability |
| Offsets in citations do not match UI highlights | span counting by UTF-16 units or code points | Retrieval Traceability |
| Answers flip when messages include emojis | tokenizer mismatch between embedder and store | tokenizer_mismatch.md |
| High similarity yet wrong meaning on chat logs | punctuation or ZWJ stripped during export | digits_width_punctuation.md, Retrieval Playbook |
## 60-second fix checklist

- **Normalize without destroying intent.**
  Use NFC only. Do not fold ZWJ (U+200D), VS-16 (U+FE0F), or skin-tone modifiers (U+1F3FB–U+1F3FF).
- **Grapheme-aware chunking.**
  Use ICU rules or a library that splits on grapheme clusters. Regex engines that support `\X` should prefer it over `.`.
- **Index two tracks when needed.**
  Store `text_raw` and `text_search`. `text_raw` keeps exact clusters for citation; `text_search` may apply safe normalizations for recall.
- **Tokenizer alignment.**
  Match the embedder and store analyzers. If the store lacks grapheme awareness, rerank with a grapheme-aware stage.
- **Traceability contract.**
  Snippet payloads must carry `offset_grapheme_start`, `offset_grapheme_end`, and the exact substring for audit.
- **Observability probes.**
  Log counts of ZWJ, VS-16, and skin-tone modifiers per snippet. Spikes often reveal faulty exporters.
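Full grapheme segmentation (UAX #29) needs ICU or a regex engine with `\X`. The toy segmenter below is a stdlib-only sketch with hypothetical names that handles only the cases this page cares about — ZWJ chains, VS-16, and skin-tone modifiers — which is enough to stop a chunker from cutting inside a family emoji:

```python
ZWJ = "\u200d"
# VS-16 plus the five skin-tone modifiers U+1F3FB..U+1F3FF
EXTENDERS = {"\ufe0f"} | {chr(c) for c in range(0x1F3FB, 0x1F400)}

def emoji_clusters(text: str) -> list[str]:
    """Split text so ZWJ sequences and modifier runs stay in one piece."""
    clusters, i = [], 0
    while i < len(text):
        j = i + 1
        while j < len(text) and (text[j] in EXTENDERS or text[j] == ZWJ):
            # a ZWJ also pulls in the code point that follows it
            j += 2 if text[j] == ZWJ else 1
        clusters.append(text[i:j])
        i = j
    return clusters

family = "\U0001F468\u200d\U0001F469\u200d\U0001F467"  # 👨‍👩‍👧
assert emoji_clusters(family) == [family]              # one cluster, not five
assert emoji_clusters("Hi" + family) == ["H", "i", family]
```

A production pipeline should delegate to ICU (for example PyICU's `BreakIterator`) or the third-party `regex` module's `\X` instead; this sketch only illustrates the boundary rule a chunker must respect.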
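The observability probe in the last checklist item can be a few lines of counting; the field names here are illustrative, not a fixed schema:

```python
SKIN_TONES = [chr(c) for c in range(0x1F3FB, 0x1F400)]  # U+1F3FB..U+1F3FF

def emoji_probe(snippet: str) -> dict[str, int]:
    """Count the invisible joiners that exporters tend to drop."""
    return {
        "zwj": snippet.count("\u200d"),
        "vs16": snippet.count("\ufe0f"),
        "skin_tone": sum(snippet.count(m) for m in SKIN_TONES),
    }

# ❤️ (heart + VS-16), 👨‍👩‍👧 (two ZWJs), 👍🏽 (one skin tone)
sample = "\u2764\ufe0f \U0001F468\u200d\U0001F469\u200d\U0001F467 \U0001F44D\U0001F3FD"
print(emoji_probe(sample))  # → {'zwj': 2, 'vs16': 1, 'skin_tone': 1}
```

Log these per snippet at ingest and again after export; a sudden drop to zero between the two is the smoking gun for a faulty exporter.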
## Deep diagnostics

- **Three-paraphrase probe.**
  Ask the same question three ways, with and without emoji. If λ flips only when emoji appear, the tokenizer path is the root cause.
- **Anchor triangulation.**
  Compare ΔS to the intended message versus a decoy message that differs only by emoji variants. If the scores are close, rebuild the index with grapheme-aware chunking.
- **Exporter audit.**
  Validate that CSV, HTML, or PDF exporters preserve ZWJ and VS-16. Many pipelines silently drop them.
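The exporter audit reduces to a round-trip check: push a known ZWJ sequence through the export path and verify the joiners survive. A stdlib-only sketch for the CSV case; the helper name is illustrative:

```python
import csv
import io

FAMILY = "\U0001F468\u200d\U0001F469\u200d\U0001F467"  # 👨‍👩‍👧

def csv_roundtrip(value: str) -> str:
    """Write one cell through the csv module and read it back."""
    buf = io.StringIO()
    csv.writer(buf).writerow([value])
    buf.seek(0)
    return next(csv.reader(buf))[0]

out = csv_roundtrip(FAMILY)
assert "\u200d" in out and out == FAMILY  # joiners survived this export path
```

Run the same probe through each real exporter in the pipeline (HTML templating, PDF extraction, spreadsheet round-trips); the stage where the assertion first fails is the one silently stripping ZWJ or VS-16.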
## Copy-paste prompt

You have TXT OS and the WFGY Problem Map loaded.

My emoji issue:
- symptom: [one line]
- traces: ΔS(question,retrieved)=..., λ states across 3 paraphrases, grapheme offsets present or missing.

Tell me:
1. the failing layer and why,
2. the exact WFGY page to open,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify with a reproducible test.

Use BBMC, BBCR, BBPF, BBAM when relevant.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## Explore More
| Layer | Page | What it’s for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma’s Clinic | Plain language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text to image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.