Feature Flags and Safe Launch — OpsDeploy Guardrails

🧭 Quick Return to Map

You are in a sub-page of OpsDeploy.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Ship new prompts, models, retrievers, tools, or index pointers without surprises. This page shows how to scope flags, gate traffic with measurable targets, and roll forward only when the canary stays green.


Open these first


When to use

  • New prompt pack or header order.
  • LLM provider or model change.
  • Retriever, chunker, reranker, store, or metric change.
  • Index pointer or alias swap.
  • Tool schema or protocol change.

Acceptance targets for a flag to graduate

  • ΔS drift vs baseline: median ≤ 0.03 and p95 ≤ 0.10 on the gold set.
  • Coverage to the target section ≥ 0.70.
  • λ remains convergent across 3 paraphrases and 2 seeds.
  • p95 latency within +15 percent of baseline. Error rate ≤ 0.5 percent. Tool success ≥ 99 percent.
  • Cost per solved query within +10 percent or show a clear quality gain.
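
The same targets can be written down once as data so the gate check below and any dashboards read from a single source. A minimal sketch, assuming Python and the key names used by the pass_gates snippet later on this page; the names are illustrative, not a fixed schema.

# Graduation targets as one dict, mirroring the bullets above.
# Keys follow the `targets` argument of pass_gates() below; names are illustrative.
GRADUATION_TARGETS = {
    "delta_s": {"median_max": 0.03, "p95_max": 0.10},
    "coverage_min": 0.70,
    "lambda_convergent": True,          # must stay convergent across 3 paraphrases and 2 seeds
    "latency_p95_uplift_max": 0.15,     # +15 percent vs baseline
    "error_rate_max": 0.005,            # 0.5 percent
    "tool_success_min": 0.99,
    "cost_uplift_max": 0.10,            # +10 percent unless quality clearly improves
}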

60-second launch plan

  1. Define the flag and scope
    Scope by tenant, cohort, product surface, region, or endpoint. Never rely on client-only flags.
  2. Pin versions behind the flag
    MODEL_VER, PROMPT_VER, INDEX_HASH, RERANK_CONF, TOK_VER, ANALYZER_CONF. See version_pinning_and_model_lock.md.
  3. Warm the cache for the flagged arm
    Use versioned keys and a gold set warmup. See cache_warmup_invalidation.md.
  4. Start at shadow or 5 percent
    Compare to a pinned baseline. Hold for one evaluation window.
  5. Graduate stepwise
    5 to 25 to 50 to 100 percent if all targets stay green. Else stop and roll back.
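
A minimal sketch of step 5, assuming a flag service client with set_canary_weight and read_window_metrics methods and a fixed evaluation window; those names are hypothetical placeholders for your own infrastructure, and pass_gates is the snippet shown later on this page.

import time

# Hypothetical ramp: 5 -> 25 -> 50 -> 100 percent, holding one evaluation
# window per step and rolling back to 0 on the first red gate.
RAMP_STEPS = [0.05, 0.25, 0.50, 1.00]
EVAL_WINDOW_SECONDS = 3600

def graduate(flag_service, flag_name, targets):
    for weight in RAMP_STEPS:
        flag_service.set_canary_weight(flag_name, weight)
        time.sleep(EVAL_WINDOW_SECONDS)                     # hold for one evaluation window
        metrics = flag_service.read_window_metrics(flag_name)
        if not pass_gates(metrics, targets):                # gate check defined below
            flag_service.set_canary_weight(flag_name, 0.0)  # stop and roll back
            return False
    return True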

Flag taxonomy and use

  • Kill switch
    Immediate global off for a risky module or tool. Server evaluated.
  • Progressive rollout
    Traffic weights per tenant or region with hold times and abort gates.
  • Quality gated flag
    Opens only when ΔS, coverage, and λ are inside bounds for the current window.
  • Safety guard
    Allows cite-only output if the reason step fails. Protects the customer experience while you investigate.
  • Cost guard
    Caps heavy chains. Falls back when cost per solved query exceeds budget.
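
These flag types compose at request time. One possible server-side resolution order, sketched under the assumption of a per-request flag object and a stable sampling id; resolve_arm, sticky_bucket, and the field names are illustrative, not part of any specific flag SDK.

import hashlib

def sticky_bucket(sampling_id: str) -> float:
    # Stable hash of the sampling id mapped into [0, 1) so a user never
    # bounces between arms mid-rollout.
    digest = hashlib.sha256(sampling_id.encode()).hexdigest()
    return int(digest[:8], 16) / 2**32

def resolve_arm(flag, request, window_metrics, budget):
    if flag.kill_switch:                                    # kill switch wins, server evaluated
        return "baseline"
    if not pass_gates(window_metrics, flag.gates):          # quality gated flag stays closed when red
        return "baseline"
    if window_metrics["cost_per_solved"] > budget["cost_per_solved_max"]:
        return "baseline"                                   # cost guard falls back
    if sticky_bucket(request.sampling_id) < flag.canary_weight:
        return "canary"                                     # progressive rollout by weight
    return "baseline"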

YAML manifest template

# opsdeploy/flags/feature_flags.yml
flags:
  prompt_pack_vNplus1:
    scope:
      tenants: ["all"]            # or a list of tenant ids
      regions: ["us-east-1"]      # stage per region
      endpoints: ["/rag/reason"]
    versions:
      MODEL_VER: "gpt-4o-2025-07-15"
      PROMPT_VER: "p-2025.08.31-h1"
      INDEX_HASH: "b7f3..."
      RERANK_CONF: "bge-rerank-v2@top32"
    traffic:
      baseline_weight: 0.95
      canary_weight: 0.05
    gates:
      delta_s:
        median_max: 0.03
        p95_max: 0.10
      coverage_min: 0.70
      lambda_convergent: true
      latency_p95_uplift_max: 0.15
      error_rate_max: 0.005
      tool_success_min: 0.99
      cost_uplift_max: 0.10
    abort_rules:
      hard:
        - kind: "delta_s_p95_over"
          value: 0.15
        - kind: "coverage_under"
          value: 0.60
        - kind: "lambda_flip_rate_over"
          value: 0.20
        - kind: "error_rate_over"
          value: 0.01
      soft:
        - kind: "latency_p95_over_budget"
        - kind: "cost_per_solved_over_budget"

Server side evaluation snippet

def pass_gates(metrics, targets):
    # True only when every graduation gate is green for the current window.
    if metrics["ds_median"] > targets["delta_s"]["median_max"]: return False
    if metrics["ds_p95"]    > targets["delta_s"]["p95_max"]:    return False
    if metrics["coverage"]  < targets["coverage_min"]:          return False
    if targets["lambda_convergent"] and not metrics["lambda_convergent"]: return False
    if metrics["latency_p95_uplift"] > targets["latency_p95_uplift_max"]: return False
    if metrics["error_rate"] > targets["error_rate_max"]: return False
    if metrics["tool_success"] < targets["tool_success_min"]: return False
    if metrics["cost_uplift"] > targets["cost_uplift_max"]: return False
    return True
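
One way to wire the manifest above to this gate check, assuming PyYAML for loading and a telemetry read that returns the current window metrics; collect_window_metrics, rollback, and advance_canary are placeholders, not functions from this repo.

import yaml

# Load the manifest, then evaluate hard abort rules first and the
# graduation gates second for one flag.
with open("opsdeploy/flags/feature_flags.yml") as f:
    manifest = yaml.safe_load(f)

flag = manifest["flags"]["prompt_pack_vNplus1"]
metrics = collect_window_metrics()            # placeholder: your telemetry read

HARD_ABORT_CHECKS = {
    "delta_s_p95_over":      lambda m, v: m["ds_p95"] > v,
    "coverage_under":        lambda m, v: m["coverage"] < v,
    "lambda_flip_rate_over": lambda m, v: m["lambda_flip_rate"] > v,
    "error_rate_over":       lambda m, v: m["error_rate"] > v,
}

if any(HARD_ABORT_CHECKS[r["kind"]](metrics, r["value"]) for r in flag["abort_rules"]["hard"]):
    rollback()                                # placeholder: kill switch to 0 percent
elif pass_gates(metrics, flag["gates"]):
    advance_canary()                          # placeholder: next ramp step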

Observability you must log

  • Exposure events with assignment keys and sticky sampling id.
  • Version pins on every request.
  • ΔS, coverage, λ states, latency, error, tool success, cost.
  • Gate decisions and the reason for pass or fail.
  • Abort triggers with the exact rule that fired.
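
A sketch of a single exposure or gate-decision record that carries all of the fields above together; the field names and example values are illustrative only, not a required schema.

# Illustrative event record. Pins, window metrics, and the gate verdict
# travel on the same record so any regression can be tied to a version.
event = {
    "ts": "2025-08-31T12:00:00Z",
    "flag": "prompt_pack_vNplus1",
    "arm": "canary",
    "sampling_id": "tenant-42:user-abc",       # sticky assignment key
    "pins": {
        "MODEL_VER": "gpt-4o-2025-07-15",
        "PROMPT_VER": "p-2025.08.31-h1",
        "INDEX_HASH": "b7f3...",
    },
    "metrics": {
        "ds_median": 0.021, "ds_p95": 0.084, "coverage": 0.74,
        "lambda_convergent": True, "latency_p95_uplift": 0.08,
        "error_rate": 0.002, "tool_success": 0.996, "cost_uplift": 0.06,
    },
    "gate_decision": {"pass": True, "failed_rules": []},
    "abort_trigger": None,
}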

Symptom to fix map

| Symptom | Likely cause | Open this |
|---|---|---|
| Users bounce between arms | no sticky sampling or client-only flags | staged_rollout_canary.md |
| Mixed answers across versions | cache keys not partitioned by version pins | cache_warmup_invalidation.md |
| Quality regresses after graduation | pins drifted or alias flipped mid window | version_pinning_and_model_lock.md, blue_green_switchovers.md |
| 429 storm during ramp | retry stampede or no global cap | rate_limit_backpressure.md |
| Tool loops under the flag | side effects not fenced | idempotency_dedupe.md |

Common pitfalls

  • Client toggles treated as truth. Always evaluate on the server with signed manifests.
  • Two versions live without cache namespace separation.
  • Region flags diverge and never converge. Roll by region with audit hashes.
  • Graduation without a pinned baseline or a stable eval window.
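
For the cache namespace pitfall, a minimal sketch of a key that partitions entries by version pins so two live arms never share cached answers; the helper name is hypothetical and the pin names follow the manifest above.

import hashlib

def cache_key(query: str, pins: dict) -> str:
    # Fold the version pins into the key so baseline and canary arms
    # read and write separate cache namespaces while both are live.
    namespace = "|".join(f"{k}={pins[k]}" for k in sorted(pins))
    return hashlib.sha256(f"{namespace}|{query}".encode()).hexdigest()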

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Module | Description | Link |
|---|---|---|
| WFGY Core | Canonical framework entry point | View |
| Problem Map | Diagnostic map and navigation hub | View |
| Tension Universe Experiments | MVP experiment field | View |
| Recognition | Where WFGY is referenced or adopted | View |
| AI Guide | Anti-hallucination reading protocol for tools | View |

If this repository helps, starring it improves discovery for other builders.