
Goldset Curation — Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of Eval.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A curated gold set is the foundation for evaluation stability. Without strict contracts on the gold data, all eval metrics become meaningless. This page defines how to build, audit, and maintain gold QA sets that align with WFGY acceptance targets.


Open these first


Acceptance targets for curated goldsets

  • Coverage ≥ 0.80 of target sections
  • ΔS(question, gold anchor) ≤ 0.35
  • λ state stable across 3 paraphrases and 2 seeds
  • No overlap: each gold item maps to exactly one snippet and section
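These targets can be checked mechanically before a goldset is accepted. A minimal sketch, assuming each item already carries precomputed `delta_s` and `lambda_states` fields from a prior eval run (the field names are illustrative, not part of any fixed WFGY schema):

```python
# Sketch: checking a goldset against the acceptance targets above.
# Assumes delta_s and lambda_states were measured elsewhere; field
# names here are placeholders, not a fixed schema.

def check_acceptance(items, covered_sections, target_sections):
    failures = []
    coverage = len(covered_sections & target_sections) / len(target_sections)
    if coverage < 0.80:
        failures.append(f"coverage {coverage:.2f} < 0.80")
    seen_anchors = set()
    for item in items:
        if item["delta_s"] > 0.35:
            failures.append(f'{item["id"]}: delta_s {item["delta_s"]} > 0.35')
        if len(set(item["lambda_states"])) != 1:
            failures.append(f'{item["id"]}: lambda flips across runs')
        anchor = (item["expected_doc"], item["section_id"])
        if anchor in seen_anchors:
            failures.append(f'{item["id"]}: anchor overlap {anchor}')
        seen_anchors.add(anchor)
    return failures
```

An empty return value means the set clears all four gates; anything else names the violating item.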

Curation process

1. Select domains

  • Identify domains relevant to the pipeline (finance, law, product docs).
  • Ensure gold questions are drawn from actual user tasks.

2. Define anchors

  • Each QA item must cite a section ID and expected_doc.
  • Anchors must reference stable problem map sections, not ephemeral text.

Example:

```json
{
  "id": "Q_0007",
  "question": "What causes hallucination re-entry after correction?",
  "answer_ref": "PM:patterns/pattern_hallucination_reentry",
  "expected_doc": "ProblemMap/patterns/pattern_hallucination_reentry.md",
  "section_id": "hallucination-reentry"
}
```

3. Add paraphrases

  • Minimum 3 per question.
  • Probe λ stability under phrasing variance.
Example:

```json
{
  "id": "Q_0007_P1",
  "question": "Why do hallucinations return after being corrected once?"
}
```
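The minimum-of-3 rule can be enforced with a simple count, assuming paraphrase IDs follow the `<base>_P<n>` convention shown above (a naming assumption, not a requirement of the harness):

```python
# Sketch: counting paraphrases per base question, assuming the
# "<base>_P<n>" id convention used in the examples above.
from collections import defaultdict

def paraphrase_counts(items):
    counts = defaultdict(int)
    for item in items:
        base, sep, suffix = item["id"].rpartition("_P")
        if sep and suffix.isdigit():  # skip base items like "Q_0007"
            counts[base] += 1
    return counts

def under_covered(items, minimum=3):
    return [base for base, n in paraphrase_counts(items).items() if n < minimum]
```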

4. Validate citations

  • Each gold item must include an exact citation offset.
  • If offsets drift, the goldset is invalid until refreshed.
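Offset drift can be detected by re-reading the source document and comparing the cited span against the snippet captured at curation time. A minimal sketch, assuming each item stores a `(start, end)` character offset and the exact snippet text (field names are illustrative):

```python
# Sketch: detecting stale citation offsets. Assumes each gold item
# carries "offset" (start, end char positions) and "snippet" (the text
# captured at curation time); field names are placeholders.

def offsets_valid(item, doc_text):
    start, end = item["offset"]
    return doc_text[start:end] == item["snippet"]

def stale_items(items, docs):
    # docs: mapping from expected_doc path to its current text
    return [i["id"] for i in items
            if not offsets_valid(i, docs[i["expected_doc"]])]
```

Any ID returned by `stale_items` invalidates the goldset until the offsets are refreshed.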

5. Apply regression gate

  • No gold item should produce ΔS > 0.45 in baseline runs.
  • Violations are logged and flagged for refresh.
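The gate itself is a simple partition over baseline ΔS values. A sketch, assuming `delta_s` comes from a baseline eval run and using `print` as a stand-in for your real logging sink:

```python
# Sketch: regression gate that splits a goldset on baseline delta_s.
# delta_s is assumed to be precomputed; print stands in for a real
# logging sink.

def apply_regression_gate(items, threshold=0.45):
    kept, flagged = [], []
    for item in items:
        (flagged if item["delta_s"] > threshold else kept).append(item)
    for item in flagged:
        print(f'flagged for refresh: {item["id"]} delta_s={item["delta_s"]}')
    return kept, flagged
```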

Common pitfalls and fixes

  • Gold overlaps across sections → Fix: merge or re-scope questions, ensure one-to-one mapping.

  • Anchors point to unstable docs → Fix: only link to long-lived WFGY ProblemMap pages.

  • Paraphrases flip λ → Fix: clamp with BBAM variance controls and revalidate.

  • Coverage below 0.80 → Fix: expand questions until goldset covers every critical node.


Quick workflow

  1. Draft 20–30 candidate QA items.
  2. Add 3 paraphrases each.
  3. Link every item to an anchor section.
  4. Run through eval_harness.md.
  5. Drop items that fail regression gate.
  6. Store final goldset in datasets/gold/.
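The tail of the workflow (steps 5 and 6) can be sketched as a single pass that drops gate failures and writes the survivors to disk. The filtering predicate here is a simplification; in practice the gate runs through eval_harness.md:

```python
# Sketch: final filtering and storage step of the quick workflow.
# The delta_s <= 0.45 predicate stands in for the full regression
# gate; in practice this comes from the eval harness.
import json
from pathlib import Path

def build_goldset(candidates, out_dir="datasets/gold"):
    final = [c for c in candidates
             if c.get("section_id") and c.get("delta_s", 1.0) <= 0.45]
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    path = Path(out_dir) / "goldset.json"
    path.write_text(json.dumps(final, indent=2))
    return path, len(final)
```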

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
| --- | --- | --- |
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + \<your question\>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" — OS boots instantly |

Explore More

| Layer | Page | What it's for |
| --- | --- | --- |
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandma's Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.