WFGY/ProblemMap/GlobalFixMap/OpsDeploy/rate_limit_backpressure.md


Rate Limit and Backpressure — OpsDeploy Guardrails

🧭 Quick Return to Map

You are in a sub-page of OpsDeploy.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Keep your pipeline stable under load. This page gives concrete limits, retry rules, and queuing patterns that prevent 429 storms, tail spikes, and cascading failures. Store-agnostic and model-agnostic.


Open these first


Acceptance targets

  • 429 rate ≤ 0.5% per minute at steady state. Burst window p95 ≤ 2%.
  • Queue wait p95 ≤ 200 ms for read paths. Write paths use deadlines not exceeding 1× p95 service time.
  • End-to-end p95 latency within +15% of baseline when limiters are active.
  • Drop rate = 0 for idempotent endpoints and exactly-once jobs.
  • ΔS(question, retrieved) ≤ 0.45 and coverage ≥ 0.70 remain stable while throttled. λ stays convergent across 2 seeds.

60-second stabilization plan

  1. Admission control
    Set hard concurrency caps per endpoint and per tenant. Keep average utilization near 0.7 at peak.
  2. Token bucket on read paths
    Rate r, burst b. Use per-tenant buckets plus a global bucket. Enforce at the edge and at the service.
  3. Leaky queue with deadlines
    Put a deadline on each request. If queue wait exceeds the deadline, fail fast with a retry hint. See the deadline-queue sketch under Patterns.
  4. Retries with full jitter
    Exponential backoff with randomization. Honor the provider's Retry-After header when present.
  5. Circuit breaker
    Open on consecutive errors or saturation. Shed load and degrade features instead of timing out. See the circuit-breaker sketch under Patterns.

Patterns

Token bucket in Redis (pseudo)

```python
# pseudo: assumes a redis-py client created with decode_responses=True
def take(bucket, now, rate, burst):
    # bucket hash: {tokens, last_ts}; tokens refill lazily at `rate` per second
    b = redis.hgetall(bucket) or {"tokens": burst, "last_ts": now}
    delta = max(0.0, now - float(b["last_ts"]))
    tokens = min(burst, float(b["tokens"]) + rate * delta)
    if tokens < 1:
        return False  # bucket empty: throttle, queue, or shed
    pipe = redis.pipeline()
    pipe.hset(bucket, mapping={"tokens": tokens - 1, "last_ts": now})
    pipe.expire(bucket, 3600)  # let idle buckets expire
    pipe.execute()
    return True
```

The read-modify-write above races under concurrent callers. In production, move the refill-and-take into a single Lua script via EVAL so Redis executes it atomically.
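
Deadline queue with concurrency cap (sketch)

A deadline-aware admission sketch covering steps 1 and 3 of the plan. This is a minimal sketch assuming an asyncio service; the semaphore size and retry hint are illustrative values, not prescriptions.

```python
import asyncio

sem = asyncio.Semaphore(64)  # hard per-endpoint concurrency cap (step 1)

async def admit(handler, deadline_s=2.0):
    # fail fast with a retry hint if the request cannot start before its
    # deadline (step 3); a fuller version would subtract queue wait from
    # the budget passed to the handler
    try:
        await asyncio.wait_for(sem.acquire(), timeout=deadline_s)
    except asyncio.TimeoutError:
        return {"error": "overloaded", "retry_after_s": 1.0}  # retry hint
    try:
        return await asyncio.wait_for(handler(), timeout=deadline_s)
    finally:
        sem.release()
```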

Backoff with full jitter (pseudo)

```python
import random

def backoff(retry, base=0.25, cap=10.0):
    # full jitter: sleep a uniform random time in [0, min(cap, base * 2^retry))
    return random.random() * min(cap, base * (2 ** retry))
```
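
A minimal retry loop wiring this backoff to step 4 of the plan. The RateLimited exception and its retry_after field are assumptions standing in for whatever your client raises on a 429.

```python
import time

class RateLimited(Exception):
    # stand-in for your client's 429 error; retry_after carries the
    # provider's Retry-After header value when present
    def __init__(self, retry_after=None):
        self.retry_after = retry_after

def call_with_retries(call, max_retries=5):
    for retry in range(max_retries):
        try:
            return call()  # call() is any zero-argument request function
        except RateLimited as e:
            # honor provider Retry-After when present, else full jitter
            time.sleep(e.retry_after if e.retry_after is not None else backoff(retry))
    raise RuntimeError("retry budget exhausted")
```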

Nginx edge limiter (example)

```nginx
limit_req_zone $binary_remote_addr zone=wfgy_rps:10m rate=20r/s;
server {
  location /api {
    limit_req zone=wfgy_rps burst=40 nodelay;
    proxy_pass http://wfgy_upstream;
  }
}
```
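
Circuit breaker (sketch)

A minimal sketch for step 5 of the plan; the threshold and cooldown values are assumptions to tune against your error budget.

```python
import time

class CircuitBreaker:
    # opens after `threshold` consecutive failures, half-opens after `cooldown_s`
    def __init__(self, threshold=5, cooldown_s=30.0):
        self.threshold = threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True  # closed: normal traffic
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            return True  # half-open: let a probe request through
        return False     # open: shed load and serve a degraded response

    def record(self, ok):
        if ok:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
```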

Priority and fairness

  • Two lanes: interactive queries vs batch jobs. Assign separate buckets.
  • Per-tenant fairness: a bucket per tenant plus a shared global bucket.
  • Cost-aware limits: heavier chains consume more tokens, as in the sketch below.
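
A cost-aware variant of the take() pattern above, as a sketch; the cost weights are illustrative and should reflect measured per-chain resource use.

```python
def take_cost(bucket, now, rate, burst, cost):
    # same lazy-refill bucket as take(), but a request consumes `cost` tokens,
    # so a long reasoning chain (e.g. cost=4) drains the bucket faster than a
    # single retrieval (cost=1)
    b = redis.hgetall(bucket) or {"tokens": burst, "last_ts": now}
    tokens = min(burst, float(b["tokens"]) + rate * max(0.0, now - float(b["last_ts"])))
    if tokens < cost:
        return False
    redis.hset(bucket, mapping={"tokens": tokens - cost, "last_ts": now})
    return True
```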

Degrade strategies

  • Reduce chain length or switch to a cached answer if queue wait exceeds the threshold.
  • Lower k or skip the rerank stage under pressure.
  • Return a cite-only answer with links when the reasoning step exceeds its deadline. See the sketch after this list.
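
One way to wire these strategies to queue pressure, as a sketch with illustrative thresholds; tune them against the deadline_ms values in your policy.

```python
def choose_mode(queue_wait_ms):
    # degrade before shedding, shed before failing
    if queue_wait_ms > 300:
        return "cite_only_or_cached"  # skip the reasoning step entirely
    if queue_wait_ms > 150:
        return "reduced"              # lower k, skip rerank, shorten the chain
    return "full"
```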

Symptom to fix map

| Symptom | Likely cause | Open this |
|---|---|---|
| 429 spikes after deploy | missing jitter, shared retry stampede | cache_warmup_invalidation.md |
| Tail latency p99 explodes | unbounded concurrency or queue with no deadlines | staged_rollout_canary.md |
| Mixed answers across versions | cache keys not partitioned by INDEX_HASH | version_pinning_and_model_lock.md |
| Duplicate side effects | no idempotency fences under retry | idempotency_dedupe.md |
| Cascading failures to stores | no circuit breaker or bulkhead | debug_playbook.md |

Observability you must log

  • Per endpoint: tokens remaining, throttle events, 429 count, queue depth, wait time, service time.
  • Per tenant: admissions vs evictions, burst usage, fairness ratio.
  • Quality under pressure: ΔS, coverage, λ states.
  • Retry metrics: attempts, Retry-After adherence, success after retry.

Policy template you can paste

```yaml
# opsdeploy/limits.yml
limits:
  default:
    rps: 20
    burst: 40
    concurrent: 64
    deadline_ms: 2000
    backoff:
      base_s: 0.25
      cap_s: 10
      jitter: full
  endpoints:
    /rag/retrieve:
      rps: 50
      burst: 100
      concurrent: 128
      deadline_ms: 800
    /rag/reason:
      rps: 20
      burst: 40
      concurrent: 64
      deadline_ms: 2000
  tenants:
    premium: { multiplier: 2.0 }
    free:    { multiplier: 0.5 }
decision:
  shed_when:
    queue_wait_p95_ms: 300
    error_rate: 0.01
```
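
A minimal resolver for this template, as a sketch assuming PyYAML; it merges the default block with an endpoint override, then scales by the tenant multiplier.

```python
import yaml

def resolve_limits(path, endpoint, tenant):
    with open(path) as f:
        cfg = yaml.safe_load(f)["limits"]
    # endpoint override wins over default, then the tenant multiplier scales rates
    limits = {**cfg["default"], **cfg.get("endpoints", {}).get(endpoint, {})}
    mult = cfg.get("tenants", {}).get(tenant, {}).get("multiplier", 1.0)
    for key in ("rps", "burst", "concurrent"):
        limits[key] = limits[key] * mult
    return limits

# resolve_limits("opsdeploy/limits.yml", "/rag/retrieve", "premium")
# -> rps 100, burst 200, concurrent 256; deadline_ms stays 800
```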

Rollback rule

If the 429 rate stays above 2% for one full evaluation window, or p99 tail latency exceeds 2× baseline, shed load, open the breaker, and roll back to the previous version or index pointer. Then follow debug_playbook.md.


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + <your question>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT based Singularity tension engine, 131 S class set |
| Map | Problem Map 1.0 | Flagship 16 problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandmas Clinic | Plain language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text to image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.