WFGY/ProblemMap/GlobalFixMap/Eval/goldset_curation.md


Goldset Curation — Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of Eval.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A curated gold set is the foundation for evaluation stability. Without strict contracts on the gold data, all eval metrics become meaningless. This page defines how to build, audit, and maintain gold QA sets that align with WFGY acceptance targets.


Open these first


Acceptance targets for curated goldsets

  • Coverage ≥ 0.80 of target sections
  • ΔS(question, gold anchor) ≤ 0.35
  • λ state stable across 3 paraphrases and 2 seeds
  • No overlap: each gold item maps to exactly one snippet and section
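
The ΔS and λ targets are measured by the eval harness; the two structural targets can be checked offline. Below is a minimal sketch, assuming gold items follow the JSON shape shown under "Define anchors" and that paraphrase IDs use a "_P<n>" suffix (both are assumptions, not part of the spec).

```python
# Minimal sketch: offline structural checks for a curated goldset.
# Assumes items carry "id" and "section_id" fields as in the example below.
from collections import defaultdict

COVERAGE_MIN = 0.80  # coverage target from the list above

def check_goldset(items, target_sections):
    """Check coverage of target sections and one-to-one anchoring."""
    covered = {it["section_id"] for it in items}
    targets = set(target_sections)
    coverage = len(covered & targets) / max(len(targets), 1)

    # One-to-one mapping: a section may be claimed by only one base question;
    # paraphrases (IDs like "Q_0007_P1") share their parent's anchor.
    owners = defaultdict(set)
    for it in items:
        owners[it["section_id"]].add(it["id"].split("_P")[0])
    overlaps = [sec for sec, bases in owners.items() if len(bases) > 1]

    return {
        "coverage": coverage,
        "coverage_ok": coverage >= COVERAGE_MIN,
        "overlaps": overlaps,
        "overlap_free": not overlaps,
    }
```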

Curation process

1. Select domains

  • Identify domains relevant to the pipeline (finance, law, product docs).
  • Ensure gold questions are drawn from actual user tasks.

2. Define anchors

  • Each QA item must cite a section ID and expected_doc.
  • Anchors must reference stable problem map sections, not ephemeral text.

Example:

{
  "id": "Q_0007",
  "question": "What causes hallucination re-entry after correction?",
  "answer_ref": "PM:patterns/pattern_hallucination_reentry",
  "expected_doc": "ProblemMap/patterns/pattern_hallucination_reentry.md",
  "section_id": "hallucination-reentry"
}
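
A quick way to keep anchors honest is to resolve them against the working tree before every eval run. The sketch below assumes the goldset is stored as JSON lines at datasets/gold/goldset.jsonl (a hypothetical filename) and that section_id matches a GitHub-style heading slug inside expected_doc; adjust the slug rule to your docs' actual anchor format.

```python
# Sketch: verify that expected_doc exists and section_id resolves to a heading.
import json
from pathlib import Path

def slugify(heading: str) -> str:
    # Rough GitHub-style slug: drop leading '#', lowercase, spaces to hyphens.
    return heading.lstrip("#").strip().lower().replace(" ", "-")

def anchor_exists(item: dict, repo_root: Path = Path(".")) -> bool:
    doc = repo_root / item["expected_doc"]
    if not doc.is_file():
        return False
    slugs = {slugify(line) for line in doc.read_text(encoding="utf-8").splitlines()
             if line.lstrip().startswith("#")}
    return item["section_id"] in slugs

# Usage: list every item whose anchor no longer resolves.
with open("datasets/gold/goldset.jsonl", encoding="utf-8") as fh:  # hypothetical path
    items = [json.loads(line) for line in fh]
print("broken anchors:", [it["id"] for it in items if not anchor_exists(it)])
```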

3. Add paraphrases

  • Minimum 3 per question.
  • Probe λ stability under phrasing variance.

Example:

{
  "id": "Q_0007_P1",
  "question": "Why do hallucinations return after being corrected once?"
}
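
A small helper can enforce the minimum before the set is accepted. This sketch assumes the "_P<n>" ID suffix convention from the example above; paraphrases inherit answer_ref, expected_doc, and section_id from their parent.

```python
# Sketch: report base questions that carry fewer than 3 paraphrases.
from collections import defaultdict

MIN_PARAPHRASES = 3  # minimum per question, per the rule above

def paraphrase_gaps(items):
    """Map base question IDs to their paraphrase count when below the minimum."""
    counts = defaultdict(int)
    for it in items:
        base, _, suffix = it["id"].partition("_P")
        if suffix:                       # e.g. "Q_0007_P1" paraphrases "Q_0007"
            counts[base] += 1
        else:
            counts.setdefault(base, 0)   # base question seen, no paraphrases yet
    return {base: n for base, n in counts.items() if n < MIN_PARAPHRASES}
```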

4. Validate citations

  • Each gold item must include an exact citation offset.
  • If offsets drift, the goldset is invalid until refreshed.
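
One way to catch drift automatically is to store the quoted snippet with its character offsets and re-check it against the current doc text. The "citation" field shape below is an assumption, not part of the schema shown earlier.

```python
# Sketch: a citation is fresh only if its offsets still select the quoted text.
from pathlib import Path

def citation_fresh(item: dict, repo_root: Path = Path(".")) -> bool:
    # Assumed shape: item["citation"] == {"start": 120, "end": 188, "text": "..."}
    doc_text = (repo_root / item["expected_doc"]).read_text(encoding="utf-8")
    cite = item["citation"]
    return doc_text[cite["start"]:cite["end"]] == cite["text"]

# Any False result means the offsets drifted and the goldset must be refreshed.
```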

5. Apply regression gate

  • No gold item should produce ΔS > 0.45 in baseline runs.
  • Violations are logged and flagged for refresh.
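
A sketch of the gate, assuming the harness exposes a baseline runner and a delta_s(question, answer) scorer; both callables are placeholders, only the 0.45 threshold comes from the rule above.

```python
# Sketch: split gold items into kept vs. flagged based on baseline ΔS.
import logging

DELTA_S_MAX = 0.45
log = logging.getLogger("goldset")

def apply_regression_gate(items, run_baseline, delta_s):
    kept, flagged = [], []
    for it in items:
        answer = run_baseline(it["question"])    # placeholder baseline call
        score = delta_s(it["question"], answer)  # placeholder ΔS scorer
        if score > DELTA_S_MAX:
            log.warning("ΔS %.2f > %.2f for %s, flag for refresh",
                        score, DELTA_S_MAX, it["id"])
            flagged.append(it)
        else:
            kept.append(it)
    return kept, flagged
```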

Common pitfalls and fixes

  • Gold overlaps across sections → Fix: merge or re-scope questions, ensure one-to-one mapping.

  • Anchors point to unstable docs → Fix: only link to long-lived WFGY ProblemMap pages.

  • Paraphrases flip λ → Fix: clamp with BBAM variance controls and revalidate.

  • Coverage below 0.80 → Fix: expand the question set until the goldset covers every critical node.


Quick workflow

  1. Draft 20–30 candidate QA items.
  2. Add 3 paraphrases each.
  3. Link every item to an anchor section.
  4. Run through eval_harness.md.
  5. Drop items that fail regression gate.
  6. Store final goldset in datasets/gold/.
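
For step 6, a minimal write-out sketch; the JSONL filename is an assumption, only the datasets/gold/ directory comes from the list above.

```python
# Sketch: persist the gated goldset as JSON lines under datasets/gold/.
import json
from pathlib import Path

def store_goldset(items, out_dir="datasets/gold", name="goldset.jsonl"):
    path = Path(out_dir)
    path.mkdir(parents=True, exist_ok=True)
    with open(path / name, "w", encoding="utf-8") as fh:
        for it in items:
            fh.write(json.dumps(it, ensure_ascii=False) + "\n")
    return path / name
```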

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.