
Eval: Operator Guidelines

🧭 Quick Return to Map

You are in a sub-page of Eval.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

This page sets strict rules for human operators and evaluators running WFGY-aligned evaluation suites. Evaluation is not a free-form activity. It must follow consistent contracts, logging standards, and reproducible steps.


Open these first


Acceptance targets

  • Inter-annotator agreement ≥ 0.85 across 3 operators
  • Annotation variance ≤ 10%
  • Audit reproducibility ≥ 95% (rerun matches logged label)
  • No operator-induced drift: ΔS stability remains within ±0.05 after evaluation
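
These targets can be checked mechanically once runs are logged. Below is a minimal sketch, assuming each operator's labels, per-item rerun results, and pre/post ΔS values are available. The function and field names are placeholders rather than a WFGY schema, and "annotation variance" is read here as the variance of per-operator correct rates.

```python
# Minimal acceptance-target check. Field names and the variance reading are
# illustrative assumptions, not an official WFGY schema.
from itertools import combinations
from statistics import pvariance

def pairwise_agreement(labels_by_operator):
    """Average fraction of items on which each pair of operators agrees."""
    scores = []
    for a, b in combinations(labels_by_operator, 2):
        same = sum(1 for x, y in zip(labels_by_operator[a], labels_by_operator[b]) if x == y)
        scores.append(same / len(labels_by_operator[a]))
    return sum(scores) / len(scores)

def passes_acceptance(labels_by_operator, rerun_matches, ds_before, ds_after):
    agreement = pairwise_agreement(labels_by_operator)                      # target >= 0.85
    variance = pvariance([sum(v) / len(v) for v in labels_by_operator.values()])  # target <= 0.10
    reproducibility = sum(rerun_matches) / len(rerun_matches)               # target >= 0.95
    drift = abs(ds_after - ds_before)                                       # target <= 0.05
    return agreement >= 0.85 and variance <= 0.10 and reproducibility >= 0.95 and drift <= 0.05

if __name__ == "__main__":
    labels = {"op1": [1, 1, 0, 1], "op2": [1, 1, 0, 1], "op3": [1, 0, 0, 1]}
    print(passes_acceptance(labels, rerun_matches=[True, True, True, True],
                            ds_before=0.41, ds_after=0.43))
```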

Operator responsibilities

  1. Stay schema-locked. Every operator must apply the same definition of "correct": the answer cites the correct snippet and explains it consistently.

  2. Use the three-paraphrase rule. Rephrase the query three times before marking coverage. If the answer flips across paraphrases, log the run as unstable (a minimal sketch follows this list).

  3. Log ΔS and λ. Each annotated run must include the structural metrics: ΔS(question, retrieved), λ states, and snippet IDs.

  4. Apply the cost view. Record token counts and cost-per-correct when judging pipelines.

  5. No ad-hoc overrides. Do not improvise "fixes" in evaluation logs. All fixes must point to an existing ProblemMap page.
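
To make responsibility 2 concrete, here is a minimal sketch of the three-paraphrase stability check. The `judge` callable and the returned fields are assumptions for illustration; any judging function that returns a correctness flag and a ΔS value per query would fit.

```python
# Hypothetical three-paraphrase stability check. `judge` is any callable that
# returns (correct: bool, delta_s: float) for one query; it is an assumption
# here, not a WFGY API.
def paraphrase_stability(judge, paraphrases):
    """Run the same question phrased three ways and report stability."""
    results = [judge(q) for q in paraphrases]          # e.g. [(True, 0.39), ...]
    labels = [correct for correct, _ in results]
    stable = len(set(labels)) == 1                     # no label flips across paraphrases
    return {
        "stable": stable,
        "labels": labels,
        "delta_s": [ds for _, ds in results],
        "note": "Stable across 3 paraphrases" if stable
                else "Unstable: label flips across paraphrases",
    }
```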


Annotation protocol

  • Step 1: Load baseline. Open the gold set and verify the expected snippet IDs.

  • Step 2: Apply the candidate system. Run the same queries through the candidate pipeline.

  • Step 3: Compare (a decision-rule sketch follows the JSON example below).

    • If the candidate cites the correct snippet and ΔS ≤ 0.45 → mark correct.
    • If the citation is missing or ΔS ≥ 0.60 → mark fail and attach the probable ProblemMap fix.
  • Step 4: Log the result as JSON:

{
  "operator": "user123",
  "q_id": "Q42",
  "question": "What is the cutoff for semantic drift?",
  "expected_snippet": "S123",
  "candidate_snippet": "S123",
  "ΔS": 0.39,
  "λ_state": "→",
  "correct": true,
  "cost_usd": 0.0012,
  "notes": "Stable across 3 paraphrases"
}
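
The Step 3 decision rule that produces records like the one above can be written as a small helper. The sketch below is illustrative: only the ΔS cutoffs (0.45 and 0.60) come from this page, and routing the gray zone between them to operator review is an assumption.

```python
# Sketch of the Step 3 comparison rule. The thresholds come from this page;
# the names and the 0.45-0.60 gray-zone handling are assumptions.
def label_run(expected_snippet, candidate_snippet, delta_s):
    """Return 'correct', 'fail', or 'review' plus an optional ProblemMap pointer."""
    if candidate_snippet == expected_snippet and delta_s <= 0.45:
        return {"correct": True, "label": "correct", "fix": None}
    if candidate_snippet != expected_snippet or delta_s >= 0.60:
        return {"correct": False, "label": "fail",
                "fix": "attach the most likely ProblemMap page"}
    # 0.45 < ΔS < 0.60 with a matching citation is not covered by the rule
    # above, so flag it for operator review instead of guessing.
    return {"correct": False, "label": "review", "fix": None}
```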

When to escalate

  • Annotators disagree by more than 15% → escalate to the eval lead.
  • ΔS drifts beyond thresholds but operators label the run "correct" → escalate and reconcile.
  • Costs exceed 1.5× baseline → halt the run until reviewed.
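
These triggers can also be checked in tooling before a run is accepted. A minimal sketch, assuming per-operator correct rates and per-item ΔS values are already aggregated; the input shapes are illustrative.

```python
# Escalation triggers from this section; the input shapes are assumptions.
def escalation_flags(rate_by_operator, ds_values, labels, cost_usd,
                     baseline_cost_usd, ds_threshold=0.60):
    """Return a list of escalation reasons; empty list means no escalation."""
    flags = []
    if max(rate_by_operator.values()) - min(rate_by_operator.values()) > 0.15:
        flags.append("operator disagreement > 15%: escalate to eval lead")
    if any(ds >= ds_threshold and ok for ds, ok in zip(ds_values, labels)):
        flags.append("ΔS beyond threshold but labeled correct: escalate and reconcile")
    if cost_usd > 1.5 * baseline_cost_usd:
        flags.append("cost exceeds 1.5x baseline: halt run until reviewed")
    return flags
```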

Audit checklist

  • All logs stored in versioned JSONL.
  • At least 2 operators per eval run.
  • Spot-check 10% of logs with an independent reviewer.
  • Archive outputs with the system hash, gold set version, and ProblemMap pointers.
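
For the spot-check step, here is a minimal sketch that samples 10% of a versioned JSONL log for an independent reviewer. The required-field list mirrors the example record above; the file path and the schema check are assumptions.

```python
# Spot-check 10% of a JSONL eval log. The path and required-field list are
# illustrative; adjust them to your own log schema.
import json
import random

REQUIRED = {"operator", "q_id", "expected_snippet", "candidate_snippet",
            "ΔS", "λ_state", "correct", "cost_usd"}

def sample_for_review(jsonl_path, fraction=0.10, seed=0):
    """Load a JSONL log, validate required fields, and return a random sample."""
    with open(jsonl_path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    missing = [r.get("q_id") for r in records if not REQUIRED <= r.keys()]
    if missing:
        raise ValueError(f"records missing required fields: {missing}")
    random.seed(seed)
    k = max(1, round(fraction * len(records)))
    return random.sample(records, k)
```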

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world”, the OS boots instantly |

Explore More

| Layer | Page | What it's for |
|-------|------|---------------|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandmas Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.