
AWS CodeWhisperer: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of DevTools_CodeAI.
To reorient, go back to the DevTools_CodeAI index.

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this guide when completions or chat inside CodeWhisperer feel flaky, tool steps loop, or RAG-style answers cite the wrong things. The fixes below map to WFGY pages with measurable targets so you can verify quickly and avoid infra changes.

Open these first

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the correct section
  • λ remains convergent across three paraphrases and two seeds
  • E_resonance flat across the dialog window
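
A minimal acceptance gate over the first three targets, assuming ΔS, coverage, and per-probe λ states are computed upstream; all names here are illustrative, not a WFGY API.

```python
def acceptance_gate(ds_question_retrieved: float,
                    coverage: float,
                    lambda_states: list[str]) -> bool:
    # Targets from above: ΔS ≤ 0.45, coverage ≥ 0.70, and λ convergent
    # on every paraphrase/seed combination that was probed.
    if ds_question_retrieved > 0.45:
        return False
    if coverage < 0.70:
        return False
    return all(state == "convergent" for state in lambda_states)
```

Run it once per paraphrase and seed; any False means the session has not yet earned trust.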

Typical CodeWhisperer breakpoints → exact fix

  • Region or account skew between your IDE plugin, credentials, and model endpoint. Verify that region and identity match across all three. If the first call after a fresh boot fails, fix the ordering. Open: Bootstrap Ordering, Pre-Deploy Collapse

  • IDE chat cites the wrong file or wrong snippet after retrieval. Lock the snippet contract and require cite-then-explain. Open: Retrieval Traceability, Data Contracts

  • High similarity yet wrong answer when CodeWhisperer consults docs. Suspect metric or index mismatch, or fragmented store. Open: Embedding ≠ Semantic, Vectorstore Fragmentation

  • Hybrid retrieval performs worse than a single retriever in chat plans. Stabilize the query split and lock reranking deterministically. Open: Query Parsing Split, Rerankers

  • Tool loop or agent handoff stalls when chat triggers build, test, or docs tools. Split memory namespaces, apply timeouts, and fence writes by mem_rev and mem_hash, as in the sketch after this list. Open: Multi-Agent Problems

  • Security or policy blocks cause silent fallbacks that change outputs. Make refusal paths explicit and keep the schema locked to avoid hidden branches. Open: Prompt Injection
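
A minimal sketch of the mem_rev / mem_hash write fence, assuming an in-process record store. The Record fields, StaleWriteError, and fenced_write names are illustrative, not a CodeWhisperer or WFGY API.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Record:
    value: str
    mem_rev: int    # revision counter, bumped on every committed write
    mem_hash: str   # content hash captured when the record was last read

class StaleWriteError(RuntimeError):
    pass

def fenced_write(store: dict, key: str, new_value: str,
                 expected_rev: int, expected_hash: str) -> None:
    # Refuse the write if another agent bumped the revision or changed
    # the content since this agent last read the record.
    current = store[key]
    if current.mem_rev != expected_rev or current.mem_hash != expected_hash:
        raise StaleWriteError(f"stale write fenced for key {key!r}")
    new_hash = hashlib.sha256(new_value.encode("utf-8")).hexdigest()
    store[key] = Record(new_value, expected_rev + 1, new_hash)
```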


Fix in 60 seconds

  1. Measure ΔS. Compute ΔS(question, retrieved) and ΔS(retrieved, anchor section). Stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60. A band-check sketch follows these steps.

  2. Probe λ_observe. Re-order headers minimally and vary k as 5, 10, 20. If ΔS stays flat and high, rebuild the metric and normalize. If λ flips on a harmless paraphrase, clamp with BBAM.

  3. Apply the module. Retrieval drift → BBMC + Data Contracts. Reasoning collapse → BBCR bridge + BBAM, then verify with Logic Collapse. Dead ends in long chains → BBPF alternate paths.

  4. Verify. Coverage ≥ 0.70 on three paraphrases, λ convergent on two seeds, E_resonance flat over ten-step dialogs.
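
A band-check sketch for steps 1 and 2, assuming ΔS values are computed elsewhere. The band edges come from step 1; the 0.05 flatness tolerance and the retrieve callback are assumptions for illustration.

```python
def ds_band(ds: float) -> str:
    # Bands from step 1: stable < 0.40, transitional 0.40-0.60, risk >= 0.60.
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"

def ds_across_k(retrieve, question: str, ks=(5, 10, 20)) -> list[float]:
    # Step 2: vary k and record ΔS each time; `retrieve` is a hypothetical
    # callback that runs retrieval at the given k and returns ΔS.
    return [retrieve(question, k=k) for k in ks]

def flat_and_high(ds_values: list[float]) -> bool:
    # "Flat and high" from step 2: little spread across k (the 0.05
    # tolerance is an assumption) while every value sits in the risk band.
    return (max(ds_values) - min(ds_values) < 0.05
            and min(ds_values) >= 0.60)
```

If flat_and_high returns True, rebuild the metric and normalize rather than tuning k further.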


IDE checklist for stable runs

  • Warm-up fence before chat or retrieval. Confirm INDEX_HASH, VECTOR_READY, and current credentials. See: Bootstrap Ordering

  • Idempotency for any write step triggered by chat tools. Compute dedupe_key = sha256(source_id + revision + index_hash) and drop duplicates; see the sketch after this list.

  • Cite-then-explain as a hard rule in the prompt template. Forbid cross-section reuse unless explicitly allowed by contract.

  • Observability probes inside the IDE task. Log ΔS and λ states for retrieve, assemble, reason. Alert when ΔS ≥ 0.60 or λ turns divergent.

  • Regression gate before you trust the session. See: RAG Precision/Recall
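
A minimal sketch of the idempotency fence from this checklist, assuming string identifiers. The concatenation order follows the dedupe_key formula above; the in-process _seen set stands in for whatever store your task actually uses.

```python
import hashlib

_seen: set = set()

def dedupe_key(source_id: str, revision: str, index_hash: str) -> str:
    # dedupe_key = sha256(source_id + revision + index_hash), as above.
    payload = (source_id + revision + index_hash).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def should_write(source_id: str, revision: str, index_hash: str) -> bool:
    # Drop the write when this exact (source, revision, index) tuple has
    # already been committed in this session.
    key = dedupe_key(source_id, revision, index_hash)
    if key in _seen:
        return False
    _seen.add(key)
    return True
```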


Copy-paste prompt for CodeWhisperer Chat

You have TXTOS and the WFGY Problem Map loaded.

My task:
- symptom: [one line]
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states across 3 paraphrases

Do:
1) identify which layer fails and why,
2) point me to the exact WFGY page,
3) give minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) return a short JSON plan with {citations, steps, ΔS, λ_state, next_fix}.
Use BBMC, BBPF, BBCR, BBAM when relevant. Enforce cite-then-explain.
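
One plausible instance of the plan the prompt asks for, shown as a Python literal; every value, including the citation path, is illustrative.

```python
plan = {
    "citations": ["retrieval-traceability.md#snippet-contract"],
    "steps": [
        "rebuild the index with a normalized metric",
        "re-run the three paraphrases and compare ΔS",
    ],
    "ΔS": 0.41,
    "λ_state": "convergent",
    "next_fix": "lock reranker tie-breaking",
}
```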

When to escalate

  • ΔS stays ≥ 0.60 after chunking and metric fixes. Rebuild with the semantic chunking checklist and verify on a small gold set. Open: Chunking Checklist

  • Answers flip between identical runs in the same IDE session. Investigate memory and version skew. Open: Pre-Deploy Collapse


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|------|------|--------------|
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask “Answer using WFGY + \<your question\>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it's for |
|-------|------|---------------|
| Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text-to-image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.