WFGY/ProblemMap/GlobalFixMap/DevTools_CodeAI/codeium.md
2025-09-05 10:33:09 +08:00


Codeium: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of DevTools_CodeAI.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A focused guide to stabilize Codeium when completions, chat, repo search, and multi file refactors start drifting. Use this to localize the failing layer, then jump to the exact WFGY fix page with measurable targets.

Open these first

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the target function or spec anchor
  • λ remains convergent across three paraphrases and two seeds
  • E_resonance stays flat across long edit plans
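
These four targets can be checked together as one gate. A minimal Python sketch, with illustrative field names (`delta_s`, `lambda_states`, `e_resonance` as a list of per-step readings); this is not a WFGY or Codeium API:

```python
# Hypothetical acceptance gate for the four core targets above.
# Thresholds come from the list; the structure is illustrative.

def acceptance_gate(delta_s: float,
                    coverage: float,
                    lambda_states: list[str],
                    e_resonance: list[float],
                    flat_tol: float = 0.05) -> bool:
    """Return True only if all four core acceptance targets hold."""
    if delta_s > 0.45:                       # ΔS(question, retrieved) ≤ 0.45
        return False
    if coverage < 0.70:                      # coverage ≥ 0.70 to the anchor
        return False
    if any(s != "convergent" for s in lambda_states):  # λ stays convergent
        return False
    # "E_resonance stays flat": drift across the edit plan stays small.
    if e_resonance and max(e_resonance) - min(e_resonance) > flat_tol:
        return False
    return True
```

If the gate fails, the sections below localize which layer broke.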

Fix in 60 seconds

  1. Measure ΔS. Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe λ_observe. Switch between local and cloud context sources if applicable. Vary k across 5, 10, 20 and pin rerankers. If ΔS remains high and flat, suspect a metric or index mismatch. If λ flips on a harmless header reorder, lock the schema and clamp with BBAM.

  3. Apply the module

  • Retrieval drift in code or doc lookup → BBMC plus Data Contracts
  • Reasoning collapse in long refactors → BBCR bridge plus BBAM, verify with Logic Collapse
  • Dead ends in edit plans or test generation → BBPF alternate paths with capped breadth
  • Hybrid search worse than single → Pattern Query Parsing Split and Rerankers
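
Steps 1 and 2 can be scripted. The sketch below assumes ΔS is computed as 1 − cosine similarity between embedding vectors; substitute whatever ΔS definition your WFGY setup pins. The `high_and_flat` heuristic encodes the "high and flat across k" signal from step 2:

```python
# Sketch of steps 1-2: compute ΔS (assumed here to be 1 - cosine
# similarity; use your pinned definition) and sweep k to see whether
# ΔS stays high and flat, which points at metric/index mismatch.
import math

def delta_s(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def high_and_flat(ds_by_k: dict[int, float],
                  risk: float = 0.60, spread: float = 0.05) -> bool:
    """True when ΔS sits in the risk zone and barely moves as k varies
    (k = 5, 10, 20), suggesting the metric or index is wrong rather
    than the chunking."""
    vals = list(ds_by_k.values())
    return min(vals) >= risk and max(vals) - min(vals) <= spread
```

Run the sweep with rerankers pinned; if `high_and_flat` fires, go to the metric and index pages before touching prompts.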

Typical Codeium breakpoints and the right fix

  • Wrong library version or API surface. Lock anchors to repo@commit, file_path, and symbol names. Require a citation before any edit. Open: Retrieval Traceability, Data Contracts

  • High similarity yet wrong file or symbol. Embeddings prefer near neighbors with a different meaning. Rebuild with an explicit metric and normalization. Open: Embedding ≠ Semantic, Pattern: Vectorstore Fragmentation

  • Local vs cloud context skew. Index states differ and produce conflicting suggestions. Warm up the selected index and fence the first run. Open: Bootstrap Ordering

  • Generated tests reference phantom helpers. Force cite-then-explain with exact file spans and a commit SHA before generating tests. Open: Data Contracts

  • Plan loops across multi file edits. Split the plan into subplans and rejoin with a bridge step. Open: Context Drift, Entropy Collapse

  • Chat suggests unsafe commands from README or comments. Enforce tool allow lists and SCU separation when reading untrusted text. Open: Prompt Injection, Pattern: SCU
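
Several of these breakpoints share one mechanical fix: refuse to edit or generate tests until a complete citation exists. A minimal guard, sketched in Python; the field names mirror the anchor list used throughout this page and are illustrative, not a Codeium interface:

```python
# "Cite then edit" guard: reject any edit or test generation unless
# the model first produced a citation carrying repo@commit, file span,
# and symbol. Field names are illustrative.

REQUIRED = ("repo", "commit_sha", "file_path", "symbol",
            "line_start", "line_end")

def citation_ok(citation: dict) -> bool:
    if any(not citation.get(k) for k in REQUIRED):
        return False
    return citation["line_start"] <= citation["line_end"]

def guarded_edit(citation: dict, apply_edit):
    """Run apply_edit only behind a complete citation."""
    if not citation_ok(citation):
        raise ValueError("refusing edit: incomplete citation")
    return apply_edit(citation)
```

This is the same contract that blocks phantom helpers in generated tests: no span and commit SHA, no test.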


IDE checklist for Codeium

  • Warm up the selected context source and verify INDEX_HASH and version.
  • Single retrieval metric per run. Do not mix analyzers while fixing one bug.
  • Prompts carry anchors: repo@commit, file_path, symbol, line_start, line_end, snippet_id.
  • Log per step: ΔS, λ state, coverage. Alert when ΔS ≥ 0.60 or λ diverges.
  • Regression gate requires tests pass, coverage ≥ 0.70, ΔS ≤ 0.45, same diff twice.
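
The per-step logging and alerting from the checklist can be a small local harness. A sketch, assuming you can read ΔS, λ state, and coverage at each step; none of this is a Codeium API:

```python
# Illustrative per-step logger: record ΔS, λ state, and coverage for
# every step, and surface alerts when ΔS >= 0.60 or λ diverges.

def log_step(log: list[dict], step: int, delta_s: float,
             lambda_state: str, coverage: float) -> list[str]:
    log.append({"step": step, "ΔS": delta_s,
                "λ": lambda_state, "coverage": coverage})
    alerts = []
    if delta_s >= 0.60:
        alerts.append(f"step {step}: ΔS {delta_s:.2f} in risk zone")
    if lambda_state == "divergent":
        alerts.append(f"step {step}: λ diverged")
    return alerts
```

Feed the accumulated log to the regression gate: same diff twice, tests pass, coverage ≥ 0.70, ΔS ≤ 0.45.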

Minimal schema you should capture

{ repo, commit_sha, file_path, symbol, line_start, line_end, snippet_id, tokens, ΔS, λ_state } Require cite-then-explain. Forbid cross-file reuse without a new citation.
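
The same schema as a typed record, with the cross-file reuse rule made explicit. The dataclass shape is a sketch, not a WFGY-mandated format:

```python
# Typed version of the capture schema above, plus a check that forbids
# cross-file snippet reuse without a fresh citation.
from dataclasses import dataclass

@dataclass
class SnippetRecord:
    repo: str
    commit_sha: str
    file_path: str
    symbol: str
    line_start: int
    line_end: int
    snippet_id: str
    tokens: int
    delta_s: float       # ΔS for this retrieval
    lambda_state: str    # "convergent" | "divergent"

def reuse_allowed(record: SnippetRecord, target_file: str) -> bool:
    """Cross-file reuse requires a new citation for the target file."""
    return record.file_path == target_file
```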


Deep diagnostics

  • Three paraphrase probe. Ask for the same change three ways. If λ flips on a harmless header reorder, clamp with BBAM and lock the schema.

  • Anchor triangulation. Compare ΔS for the intended file vs a decoy or sibling module. If the values are close, re-chunk and normalize embeddings. See: Retrieval Playbook, Embedding ≠ Semantic

  • Plan length audit. If entropy rises after 25 to 40 steps, split the plan and rejoin with a BBCR bridge. See: Entropy Collapse

  • Live instability. Add probes and backoff guards in Codeium tasks. See: Live Monitoring for RAG, Debug Playbook
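
The three-paraphrase probe reduces to a tiny loop. In this sketch, `run_codeium` is a hypothetical callable you supply that runs one paraphrase with one seed and returns the observed λ state:

```python
# Three-paraphrase probe: same change request, three phrasings, two
# seeds. Flag any run where λ leaves the convergent state.
# `run_codeium(paraphrase, seed=...)` is a stand-in you implement.

def paraphrase_probe(run_codeium, paraphrases: list[str],
                     seeds: tuple[int, int] = (0, 1)) -> bool:
    """True when λ stays convergent for every paraphrase x seed pair."""
    states = [run_codeium(p, seed=s) for p in paraphrases for s in seeds]
    return all(s == "convergent" for s in states)
```

A False result on a harmless rewording is the signal to clamp with BBAM and lock the schema, per the list above.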


Copy paste prompt for Codeium chat

You have TXTOS and the WFGY Problem Map loaded.

My Codeium issue:
- symptom: [one line]
- anchors: repo={name}, commit={sha}, file={path}, lines={a..b}
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ across 3 paraphrases

Tell me:
1) the failing layer and why,
2) the exact WFGY page to open,
3) minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) a reproducible test to verify the fix.
Use BBMC, BBPF, BBCR, BBAM when relevant. Keep it auditable and short.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 | PDF Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

🧭 Explore More

| Module | Description | Link |
|--------|-------------|------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
