Mirror of https://github.com/onestardao/WFGY.git, synced 2026-04-28 11:40:07 +00:00
# Retrieval Readiness Checklist

Purpose: confirm the pipeline is safe to run before any evaluation or go-live. Applies to BM25, ANN, or hybrid stacks. Store agnostic.
## Inputs are consistent
- One embedding model per field, recorded in config.
- Normalization rule set and saved with the index (L2 or cosine compatible).
- Analyzer or tokenizer identical on write and read paths.
- Stopword set and stemming rules fixed and versioned.
Refs: Embedding ≠ Semantic · Store-agnostic guardrails
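The input checks above can be automated in a pre-flight script. A minimal sketch in Python: fingerprint the write-path config so the read path can verify it uses the same model, analyzer, and stopword version, and confirm vectors were L2-normalized before indexing so dot product and cosine agree. Field names in `cfg` are illustrative, not a required schema.

```python
import hashlib
import json

import numpy as np

def config_fingerprint(cfg: dict) -> str:
    """Stable hash of the write-path config (model, analyzer, stopwords).
    Record this with the index; the read path must reproduce it."""
    blob = json.dumps(cfg, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

def is_l2_normalized(vectors: np.ndarray, tol: float = 1e-3) -> bool:
    """True if every row has unit L2 norm, so dot product equals cosine."""
    norms = np.linalg.norm(vectors, axis=1)
    return bool(np.all(np.abs(norms - 1.0) < tol))

# Example: normalize a batch before indexing, then verify.
cfg = {"embedding_model": "example-model-v1", "analyzer": "standard", "stopwords": "en-v2"}
vecs = np.arange(32, dtype=float).reshape(4, 8) + 1.0
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
assert is_l2_normalized(vecs)
```

Run the same check on a sample of stored vectors at read time; a mismatch here usually means a different model or normalization rule slipped into one path.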
## Index and data state
- `INDEX_HASH` matches the current code revision that produced the vectors.
- Document count, chunk count, and vector count agree within 0.5 percent.
- Ingestion job reported zero empty payloads and zero parser errors.
- Cold caches warmed with ten representative queries.
Refs: Bootstrap ordering · Pre-deploy collapse
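The index-state checks above can be sketched as two small predicates, assuming one vector per chunk and index metadata stored as a dict (both assumptions; adapt the field names to your store):

```python
def counts_agree(chunk_count: int, vector_count: int, tol: float = 0.005) -> bool:
    """Each chunk should yield exactly one vector; allow at most 0.5 percent drift."""
    if chunk_count <= 0 or vector_count <= 0:
        return False
    drift = abs(chunk_count - vector_count) / max(chunk_count, vector_count)
    return drift <= tol

def index_is_current(index_meta: dict, code_revision: str) -> bool:
    """The INDEX_HASH stored with the index must match the revision that built it."""
    return index_meta.get("INDEX_HASH") == code_revision
```

If `counts_agree` fails, look for silent parser errors or empty payloads in the ingestion log before touching retrieval parameters.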
## Gold set and probes
- Ten to fifty QA pairs with ground truth anchors prepared.
- Each QA pair has at least one resolvable `section_id` and `source_url`.
- ΔS probes ready for three paraphrases and two seeds.
Refs: ΔS probes · Retrieval eval recipes
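A paraphrase probe can be sketched as below. This uses ΔS = 1 − cosine similarity as a stand-in metric (an assumption for illustration; WFGY defines ΔS in its own docs), and `embed` is a placeholder for whatever embedding function your stack uses:

```python
import numpy as np

def delta_s(q_vec: np.ndarray, r_vec: np.ndarray) -> float:
    """Proxy metric: 1 minus cosine similarity between question and retrieved text."""
    cos = float(np.dot(q_vec, r_vec) / (np.linalg.norm(q_vec) * np.linalg.norm(r_vec)))
    return 1.0 - cos

def probe(embed, question: str, paraphrases: list[str], retrieved_text: str) -> list[float]:
    """Score the question and each paraphrase against the same retrieved passage.
    Stable (low, tightly clustered) scores across paraphrases suggest a healthy index."""
    r_vec = embed(retrieved_text)
    return [delta_s(embed(q), r_vec) for q in [question, *paraphrases]]
```

Repeat the run with a second seed where your embedder or retriever is stochastic; large spread between paraphrases points at chunking or analyzer problems rather than the model.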
## Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage of the target section ≥ 0.70
- λ_observe convergent across 3 paraphrases and 2 seeds
- E_resonance stable on long windows
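The first three targets above can be gated mechanically. A minimal sketch, assuming λ "convergent" means every probe run reports the same state, and leaving the E_resonance long-window check to a separate job:

```python
def passes_gates(delta_s: float, coverage: float, lambda_states: list[str]) -> bool:
    """Acceptance gate for the targets above.
    delta_s: worst ΔS(question, retrieved) over the probe runs.
    coverage: fraction of the target section covered by retrieval.
    lambda_states: λ_observe state per run (3 paraphrases x 2 seeds)."""
    converged = len(lambda_states) > 0 and len(set(lambda_states)) == 1
    return delta_s <= 0.45 and coverage >= 0.70 and converged
```

Gate the worst case, not the average: a single divergent paraphrase is exactly the failure this checklist is meant to catch before go-live.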
## Quick probe you can paste

```txt
I loaded TXT OS and the WFGY pages.
Task:
- For question "Q", log ΔS(Q, retrieved) and λ across 3 paraphrases and 2 seeds.
- Enforce cite-then-explain with the traceability schema.
- If ΔS ≥ 0.60, return the smallest structural fix that reaches ΔS ≤ 0.45 and coverage ≥ 0.70.
Return JSON:
{ "citations": [...], "ΔS": 0.xx, "λ_state": "<>", "coverage": 0.xx, "next_fix": "..." }
```
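Model replies to the probe should be checked before they feed any dashboard. A minimal sketch that validates the JSON shape requested above (field names taken from the probe; the numeric-range checks are an added assumption):

```python
import json

REQUIRED_KEYS = {"citations", "ΔS", "λ_state", "coverage", "next_fix"}

def valid_probe_reply(raw: str) -> bool:
    """True if the reply parses as JSON and matches the probe's expected shape."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict) or not REQUIRED_KEYS.issubset(obj):
        return False
    # Assumed sanity bounds: ΔS and coverage are fractions in [0, 1].
    return (
        isinstance(obj["citations"], list)
        and 0.0 <= obj["ΔS"] <= 1.0
        and 0.0 <= obj["coverage"] <= 1.0
    )
```

Reject-and-retry on an invalid reply is cheaper than debugging a dashboard fed with malformed numbers.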
## Common fails and minimal fixes

- Mixed metrics or analyzers after deploy. Fix: rebuild with a single metric and analyzer. See Retrieval playbook.
- Fragmented store, anchors missing. Fix: re-chunk with anchor tests. See Chunking checklist · Vectorstore fragmentation.
## 🔗 Quick-Start Downloads (60 sec)
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
## 🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.