
pgvector: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of VectorDBs_and_Stores.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A compact repair guide for Postgres + pgvector when RAG or agents lose accuracy. Use this to localize the failing layer and jump to the exact WFGY fix page.

Open these first

Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Targets: stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe with λ_observe
    Sweep k in {5, 10, 20}. If ΔS is flat high across k, suspect metric or index mismatch.
    Reorder prompt headers. If ΔS spikes, lock schema with Data Contracts.

  3. Apply the module
    Retrieval drift → BBMC + Data Contracts.
    Reasoning collapse → BBCR bridge + BBAM variance clamp.
    Dead ends in long runs → BBPF alternate path.

  4. Verify acceptance
    Coverage ≥ 0.70 to target section. ΔS ≤ 0.45 across three paraphrases. λ convergent across seeds.
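The k-sweep in step 2 can be run directly in SQL. A minimal sketch, assuming a table `docs(id bigint, embedding vector(768))` with a cosine-opclass HNSW index (all names are assumptions):

```sql
-- Sweep k in {5, 10, 20} and record where the gold chunk lands.
-- '[...]' is a placeholder for the real query embedding.
SET hnsw.ef_search = 80;   -- only applies if the index is HNSW

SELECT id,
       embedding <=> '[...]'::vector AS cosine_distance
FROM docs
ORDER BY embedding <=> '[...]'::vector
LIMIT 20;                  -- repeat with LIMIT 5 and LIMIT 10
-- If ΔS stays flat and high across all three k values, suspect the
-- opclass/metric or the index itself, not the prompt.
```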

pgvector breakpoints and the right repair

1) Opclass mismatch

  • Symptom: high similarity yet wrong meaning.
  • Why: using vector_l2_ops with cosine-trained embeddings or vector_ip_ops without normalization.
  • Fix: align opclass with the encoder. Normalize when using IP. See Embedding ≠ Semantic.
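A sketch of the aligned setup, assuming a cosine-trained encoder and a `docs.embedding` column (names are assumptions):

```sql
-- Cosine-trained embeddings → vector_cosine_ops, queried with <=>.
CREATE INDEX docs_embedding_cos_idx
    ON docs USING hnsw (embedding vector_cosine_ops);

-- Inner product (vector_ip_ops, queried with <#>) is only safe on
-- unit-length vectors; without normalization, longer vectors outrank
-- semantically closer ones.
```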

2) Index type underfit

  • Symptom: gold chunk appears only at large k.
  • Why: IVFFLAT lists too small or probes too low. HNSW ef_search under-tuned.
  • Fix: for IVFFLAT, tune lists at build time and probes at query time. For HNSW, raise ef_search (for example to 2 to 4 × k) and review m. Validate with the Retrieval Playbook and add Rerankers.
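The knobs differ by index type: one set is fixed at build, the other is per query. A tuning sketch with illustrative values, not recommendations (table, column, and index names are assumptions):

```sql
-- IVFFlat: lists is fixed at build time, probes is set per session/query.
CREATE INDEX docs_embedding_ivf_idx
    ON docs USING ivfflat (embedding vector_cosine_ops) WITH (lists = 1000);
SET ivfflat.probes = 32;       -- raise until recall stops improving

-- HNSW: m and ef_construction are fixed at build time, ef_search per query.
CREATE INDEX docs_embedding_hnsw_idx
    ON docs USING hnsw (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 64);
SET hnsw.ef_search = 80;       -- must be at least k; raise until recall plateaus
```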

3) Training and stats

  • Symptom: unstable top-k after bulk load.
  • Why: IVFFLAT trained on too few samples or skipped ANALYZE.
  • Fix: retrain IVFFLAT with a large sample, ANALYZE, then re-test ΔS and coverage.
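Because IVFFlat samples the table at build time, rebuilding after a bulk load re-trains the centroids on the real data. A minimal sketch (index and table names are assumptions):

```sql
REINDEX INDEX docs_embedding_ivf_idx;  -- re-samples rows, re-trains centroids
ANALYZE docs;                          -- refresh planner statistics
```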

4) Dimension or encoder swap

  • Symptom: inserts fail or new rows behave erratically in search.
  • Fix: ensure vector dim matches column dim. Lock encoder version in a data contract and re-embed the changed span. See Data Contracts.
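The declared column dimension can be checked against the encoder's output size; for pgvector the dimension is stored in `atttypmod` (table and column names are assumptions):

```sql
SELECT atttypmod AS declared_dim
FROM pg_attribute
WHERE attrelid = 'docs'::regclass
  AND attname  = 'embedding';
-- A mismatch fails inserts loudly; an encoder swap at the same dimension
-- fails silently and corrupts search instead, which is why the encoder
-- version belongs in the data contract.
```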

5) Normalization discipline

  • Symptom: cosine search acts like random at small k.
  • Fix: store normalized vectors or normalize at query for cosine or IP. Rebuild index after policy change. See Vectorstore Metrics & FAISS Pitfalls.
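A normalization sketch; `l2_normalize` assumes pgvector ≥ 0.7, and the index name is an assumption:

```sql
-- Normalize in place so cosine and inner product rank identically.
UPDATE docs SET embedding = l2_normalize(embedding);

-- Changing the normalization policy invalidates the index geometry:
REINDEX INDEX docs_embedding_hnsw_idx;
ANALYZE docs;
```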

6) JSONB filters and plan drift

  • Symptom: filtered search returns empty or slow.
  • Fix: lock metadata schema in data contracts. Add GIN index on JSONB keys used in WHERE. Verify plan uses vector index then filter. See Retrieval Traceability.
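A sketch of the filter setup, assuming a `meta jsonb` column (all names are assumptions):

```sql
-- GIN index keeps JSONB containment filters cheap.
CREATE INDEX docs_meta_gin_idx ON docs USING gin (meta jsonb_path_ops);

EXPLAIN ANALYZE
SELECT id
FROM docs
WHERE meta @> '{"lang": "en"}'
ORDER BY embedding <=> '[...]'::vector   -- placeholder query embedding
LIMIT 10;
-- pgvector applies the WHERE filter after the approximate scan, so a
-- selective filter can return fewer than k rows; compensate with higher
-- ef_search / probes and re-check the plan.
```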

7) Fragmentation across schemas or tables

  • Symptom: global recall looks fine but per-scope top-k is weak.
  • Fix: consolidate into one authoritative table with a facet column. Rebuild index and add a reranker. See Vectorstore Fragmentation.
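A consolidation sketch, with hypothetical per-scope tables `docs_tenant_a` and `docs_tenant_b`:

```sql
CREATE TABLE docs_all AS
    SELECT 'tenant_a' AS facet, id, meta, embedding FROM docs_tenant_a
    UNION ALL
    SELECT 'tenant_b', id, meta, embedding FROM docs_tenant_b;

CREATE INDEX docs_all_embedding_idx
    ON docs_all USING hnsw (embedding vector_cosine_ops);
-- Query with WHERE facet = '...' instead of routing to separate tables,
-- so one index sees the whole corpus.
```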

8) Upsert hygiene

  • Symptom: duplicates or stale rows after ON CONFLICT.
  • Fix: deterministic IDs, doc_sha in metadata, idempotent loader, periodic dedupe. Validate with golden queries. See Retrieval Traceability.
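An idempotent upsert sketch; the chunk-id scheme and `doc_sha` column are assumptions:

```sql
INSERT INTO docs (id, doc_sha, meta, embedding)
VALUES ('doc-0001#chunk-03', 'sha256:...', '{"lang": "en"}', '[...]'::vector)
ON CONFLICT (id) DO UPDATE
SET doc_sha   = EXCLUDED.doc_sha,
    meta      = EXCLUDED.meta,
    embedding = EXCLUDED.embedding
WHERE docs.doc_sha IS DISTINCT FROM EXCLUDED.doc_sha;  -- skip no-op rewrites
```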

9) Hybrid lexical plus vector

  • Symptom: hybrid performs worse than either alone.
  • Fix: normalize scores, fuse post-retrieval, then rerank with a cross-encoder. See Query Parsing Split and Rerankers.
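One way to fuse after normalizing is reciprocal-rank fusion in plain SQL; a sketch assuming a `tsv tsvector` column, candidate lists of 50, and the conventional RRF constant 60 (all names and limits are assumptions):

```sql
WITH lex AS (
    SELECT id, row_number() OVER (ORDER BY ts_rank(tsv, q) DESC) AS r
    FROM docs, plainto_tsquery('english', 'example query') AS q
    WHERE tsv @@ q
    ORDER BY r
    LIMIT 50
), vec AS (
    SELECT id, row_number() OVER (ORDER BY embedding <=> '[...]'::vector) AS r
    FROM docs
    ORDER BY r
    LIMIT 50
)
SELECT id,
       COALESCE(1.0 / (60 + lex.r), 0)
     + COALESCE(1.0 / (60 + vec.r), 0) AS rrf
FROM lex FULL OUTER JOIN vec USING (id)
ORDER BY rrf DESC
LIMIT 20;   -- feed these candidates to the cross-encoder reranker
```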

10) Maintenance and boot fences

  • Symptom: first prod call after deploy returns thin results.
  • Fix: enforce bootstrap fence, finish index build, VACUUM after heavy churn, confirm visibility after commit. See Bootstrap Ordering and Pre-deploy Collapse.
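A minimal boot-fence check, assuming the `docs` table (names are assumptions):

```sql
-- Gate traffic until every index on the table is valid (built and usable).
SELECT indexrelid::regclass AS index_name, indisvalid
FROM pg_index
WHERE indrelid = 'docs'::regclass;

-- After heavy churn, reclaim dead tuples and refresh stats before re-testing:
VACUUM (ANALYZE) docs;
```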

Observability probes

  • k-sweep curve: sweep k over 5, 10, 20 and plot ΔS. A flat, high curve suggests a metric or index fault.
  • Index audit: EXPLAIN ANALYZE should show IVFFLAT or HNSW usage. If planner skips it, fix stats and filters.
  • Anchor control: compare against a golden anchor set. If only one table or schema fails, rebuild that scope.
  • Reranker audit: with a strong reranker, recall improves and ΔS falls. If not, rebuild.
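The index audit is one statement; a sketch with an assumed index name:

```sql
EXPLAIN ANALYZE
SELECT id FROM docs
ORDER BY embedding <=> '[...]'::vector   -- placeholder query embedding
LIMIT 10;
-- Expect "Index Scan using docs_embedding_hnsw_idx"; a Seq Scan means stale
-- statistics, a metric/opclass mismatch, or a query shape the index cannot serve.
```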

Copy-paste prompt for your AI


I uploaded TXT OS and the WFGY Problem Map files.

Target system: Postgres + pgvector.

* symptom: [brief]
* traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states
* index: [type=ivfflat|hnsw, lists/probes or m/ef_search, opclass, dim, normalized?]
* filters: [JSONB keys, indexes, example WHERE]
* ingest: [ids, doc_sha, upsert policy]

Tell me:

1. which layer is failing and why,
2. which exact fix page to open from this repo,
3. minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. how to verify with a reproducible test.

Use BBMC/BBPF/BBCR/BBAM when relevant.


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” and the OS boots |

Explore More

| Layer | Page | What it's for |
|---|---|---|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text-to-image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.