WFGY/ProblemMap/GlobalFixMap/DevTools_CodeAI/vscode_copilot_chat.md


VS Code Copilot Chat: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of DevTools_CodeAI.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when Copilot Chat is involved in code edits, refactors, tests, or RAG-style lookups over your repo. The goal is to localize the failure, then jump to the exact WFGY fix page with measurable acceptance.

Open these first

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 for the target file or section
  • λ remains convergent across three paraphrases and two seeds
  • E_resonance stays flat across multi-step edit plans
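
A minimal sketch of the ΔS gate, assuming ΔS is computed as 1 minus cosine similarity between embedding vectors (the exact ΔS definition and the embedding step are assumptions here, not fixed by this page):

```python
from math import sqrt

def delta_s(vec_a, vec_b):
    # ΔS as 1 - cosine similarity between two embedding vectors (assumed definition)
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = sqrt(sum(a * a for a in vec_a))
    norm_b = sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

def acceptance(ds_question_retrieved, coverage):
    # Core acceptance from this page: ΔS ≤ 0.45 and coverage ≥ 0.70
    return ds_question_retrieved <= 0.45 and coverage >= 0.70
```

The λ and E_resonance checks are qualitative and need the probe steps below; this sketch only covers the two numeric thresholds.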

Fix in 60 seconds

  1. Measure ΔS. Compute ΔS(question, retrieved) and ΔS(retrieved, anchor commit or spec). Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe λ_observe. Switch between chat-only context and file-selection context. If λ flips, lock the schema and force cite-then-explain with structured snippet fields.

  3. Apply the module

  • Retrieval drift → BBMC plus Data Contracts
  • Reasoning collapse in long refactors → BBCR bridge plus BBAM
  • Dead ends in multi-file edits → BBPF alternate paths
  4. Verify. Coverage ≥ 0.70 on three paraphrases, λ convergent on two seeds, and the plan produces the same diff twice.
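
The verify step above can be sketched as a run-twice gate: generate the patch plan twice with the same inputs and accept only when the diffs match exactly (`generate_diff` is a hypothetical stand-in for your Copilot Chat call, not a real API):

```python
def same_diff_twice(generate_diff, prompt, seeds=(0, 1)):
    # Accept a patch plan only if two runs with different seeds
    # produce byte-identical diffs; otherwise the edit is non-deterministic.
    diffs = [generate_diff(prompt, seed=s) for s in seeds]
    return diffs[0] == diffs[1]
```

If this gate fails, that is the "non-deterministic code edits" breakpoint below: require a dry-run plan first and clamp variance with BBAM.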

Typical Copilot Chat breakpoints and the right fix

  • Editor selection vs chat context mismatch. The model cites lines not in the selected file or mixes files. Enforce a snippet schema: file_path, commit_sha, line_start, line_end, snippet_id. Open: Retrieval Traceability, Data Contracts

  • Non-deterministic code edits from “fix” commands. The same prompt gives different diffs. Require a dry-run plan, then a deterministic patch step; clamp variance with BBAM. Open: Logic Collapse

  • Phantom references in generated tests. High similarity to the wrong helper files. Re-index, lock rerankers, and add anchors to the correct module. Open: Embedding ≠ Semantic, Rerankers

  • Repo or symbol index not warmed. First runs fail or time out. Add a warm-up fence for the index and secrets before allowing edit plans. Open: Bootstrap Ordering, Pre-Deploy Collapse

  • Prompt injection via comments or README. The model executes unsafe terminal steps copied from docs. Lock tool protocols and force SCU to separate sources. Open: Prompt Injection, Pattern: SCU

  • Long chat drift across edits. Plans degrade after many steps and reintroduce removed code. Split the plan and re-join with a BBCR bridge. Open: Context Drift, Entropy Collapse


Minimal schema you should capture

{ file_path, commit_sha, snippet_id, line_start, line_end, tokens, ΔS, λ_state }

Store one record per step. Require cite-then-explain. Forbid cross-file reuse without a new citation entry.
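
A minimal sketch of that record and the cross-file rule as a Python dataclass plus a citation log (the field names follow the schema above; the guard logic is one assumed way to enforce it):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SnippetRecord:
    file_path: str
    commit_sha: str
    snippet_id: str
    line_start: int
    line_end: int
    tokens: int
    delta_s: float      # ΔS for this step
    lambda_state: str   # one of "→", "←", "<>", "×"

class CitationLog:
    """Cite-then-explain: an explanation step may only use snippets cited first."""
    def __init__(self):
        self._records = {}

    def cite(self, rec: SnippetRecord):
        self._records[(rec.file_path, rec.snippet_id)] = rec

    def can_explain(self, file_path: str, snippet_id: str) -> bool:
        # Cross-file reuse without a fresh citation entry is forbidden,
        # so the lookup is keyed on (file_path, snippet_id), not snippet_id alone.
        return (file_path, snippet_id) in self._records
```

Keyed on the pair rather than the snippet id alone, the same snippet cited from a different file fails the check until it is cited again.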


Deep diagnostics

  • Three-paraphrase probe. Ask for the same refactor three ways. If λ flips under a harmless header reorder, lock the prompt headers and apply BBAM.

  • Anchor triangulation. Compare ΔS against the intended file and a decoy file. If ΔS is close for both, re-chunk and adjust the retrieval metric. Open: Embedding ≠ Semantic, Retrieval Playbook

  • Plan length audit. If entropy rises after 25 to 40 steps, split into subplans and join them with a bridge. Open: Context Drift
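
The triangulation probe above can be sketched as: compute ΔS for the target anchor and for the decoy, and flag retrieval as ambiguous when the two distances are too close (the margin value is an assumption; `delta_s` is whatever ΔS function you already use):

```python
def triangulate(delta_s, question_vec, target_vec, decoy_vec, margin=0.10):
    # True when retrieval clearly prefers the target anchor over the decoy.
    ds_target = delta_s(question_vec, target_vec)
    ds_decoy = delta_s(question_vec, decoy_vec)
    # If ΔS is close for both files, re-chunk and adjust the retrieval metric.
    return ds_decoy - ds_target >= margin
```

Run it once per decoy; any failing pair points at the Embedding ≠ Semantic page rather than at the prompt.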


Escalate and structural fixes

  • Index or metric mismatch persists. Rebuild symbols and embeddings with explicit analyzers, then verify with a small gold set and a reranker. Open: Rerankers

  • Live instability. Add live probes and a regression gate for ΔS and coverage before allowing code writes. Open: Live Monitoring for RAG, Debug Playbook
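
A minimal sketch of that regression gate, assuming each live probe reports a dict with `delta_s` and `coverage` keys (the data shape is an assumption; the thresholds are the acceptance targets from this page):

```python
def regression_gate(probe_results, max_delta_s=0.45, min_coverage=0.70):
    # Block code writes unless every live probe meets both acceptance targets.
    return all(
        r["delta_s"] <= max_delta_s and r["coverage"] >= min_coverage
        for r in probe_results
    )
```

Wire this in front of the patch step so a single failing probe forces the plan back to the dry-run stage.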


Copy-paste prompt for the Copilot Chat step

You have TXTOS and the WFGY Problem Map loaded.

My Copilot Chat issue:
- symptom: [one line]
- traces: file_path=..., commit_sha=..., snippet_id=..., ΔS(question,retrieved)=..., λ states across 3 paraphrases

Do:
1) identify the failing layer,
2) link the exact WFGY page to open,
3) give the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) return a 2-stage plan: dry-run diff plan, then deterministic patch,
5) output:
{ "citations":[...], "plan":[...], "λ_state":"→|←|<>|×", "ΔS":0.xx, "next_fix":"..." }
Use BBMC, BBPF, BBCR, BBAM when relevant. Keep it auditable and short.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
| --- | --- | --- |
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
| --- | --- | --- |
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| Engine | WFGY 1.0 | Original PDF-based tension engine |
| Engine | WFGY 2.0 | Production tension kernel and math engine for RAG and agents |
| Engine | WFGY 3.0 | TXT-based Singularity tension engine, 131 S-class set |
| Map | Problem Map 1.0 | Flagship 16-problem RAG failure checklist and fix map |
| Map | Problem Map 2.0 | RAG-focused recovery pipeline |
| Map | Problem Map 3.0 | Global Debug Card, image as a debug protocol layer |
| Map | Semantic Clinic | Symptom to family to exact fix |
| Map | Grandmas Clinic | Plain-language stories mapped to Problem Map 1.0 |
| Onboarding | Starter Village | Guided tour for newcomers |
| App | TXT OS | TXT semantic OS, fast boot |
| App | Blah Blah Blah | Abstract and paradox Q and A built on TXT OS |
| App | Blur Blur Blur | Text-to-image with semantic control |
| App | Blow Blow Blow | Reasoning game engine and memory demo |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.