WFGY/ProblemMap/GlobalFixMap/DevTools_CodeAI/tabnine.md
2025-08-28 13:21:01 +08:00


Tabnine: Guardrails and Fix Patterns

Use this page when Tabnine autocompletes the wrong API, mixes symbols across packages, or flips outputs between local and cloud contexts. The fixes route back to WFGY pages with measurable targets so you can verify without changing infra.

Open these first

Core acceptance

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the target function or spec anchor
  • λ remains convergent across three paraphrases and two seeds
  • E_resonance stays flat across long edit plans
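The four targets above can be reduced to a single gate. A minimal sketch, assuming each probe (paraphrase × seed) is already scored; the `Probe` record and field names are hypothetical, and E_resonance flatness is folded into the per-probe λ state rather than tracked separately.

```python
from dataclasses import dataclass

@dataclass
class Probe:
    delta_s: float    # ΔS(question, retrieved) for this paraphrase/seed
    coverage: float   # coverage of the target function or spec anchor
    lambda_state: str # "convergent" or "divergent"

def passes_acceptance(probes: list[Probe]) -> bool:
    """Every probe must meet the targets: ΔS ≤ 0.45, coverage ≥ 0.70,
    and λ convergent on each paraphrase/seed combination."""
    return all(
        p.delta_s <= 0.45 and p.coverage >= 0.70
        and p.lambda_state == "convergent"
        for p in probes
    )
```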

Fix in 60 seconds

  1. Measure ΔS. Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.

  2. Probe λ_observe. Toggle local versus cloud context and vary k. Pin the reranker. If ΔS stays flat and high, suspect a metric or index mismatch. If λ flips on a harmless header reorder, lock the schema and clamp with BBAM.

  3. Apply the module

  • Retrieval drift in code or docs → BBMC plus Data Contracts
  • Reasoning collapse in long refactors → BBCR bridge plus BBAM, verify with Logic Collapse
  • Dead ends in edit plans or test generation → BBPF alternate paths
  • Hybrid search worse than single → Pattern Query Parsing Split and Rerankers
  4. Verify. Coverage ≥ 0.70 on three paraphrases, λ convergent on two seeds, and the same diff produced twice.
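The ΔS bands from step 1 can be sketched as below. This assumes ΔS is computed as 1 minus cosine similarity between two embeddings; if your pipeline defines ΔS differently, substitute that metric and keep only the banding.

```python
import math

def delta_s(a: list[float], b: list[float]) -> float:
    """ΔS sketch: 1 minus cosine similarity between two embedding
    vectors. An assumption, not the canonical WFGY definition."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def band(score: float) -> str:
    """Map a ΔS score to the bands used in step 1."""
    if score < 0.40:
        return "stable"
    if score < 0.60:
        return "transitional"
    return "risk"
```

Run the probe against both the retrieved chunk and the expected anchor; a "risk" band on either side means fix retrieval before touching the prompt.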

Typical Tabnine breakpoints and the right fix

  • Local vs cloud index skew. Suggestions differ after a reboot or network switch. Warm up the selected index, fence the first run, and verify analyzer settings. Open: Bootstrap Ordering

  • Wrong symbol in a monorepo. Near-neighbor symbols from sibling packages win on raw similarity. Lock anchors to repo@commit and subdir, and require cite-then-explain. Open: Retrieval Traceability, Embedding ≠ Semantic

  • Generated tests reference phantom helpers. Enforce file spans and a commit SHA before test generation. Open: Data Contracts

  • Plan loops during multi-file edits. Split the plan and re-join with a bridge step. Clamp variance in the patch step. Open: Context Drift, Entropy Collapse

  • Similarity high but meaning wrong. Metric or normalization mismatch, or a fragmented store. Rebuild with an explicit metric and fix the fragmentation. Open: Embedding ≠ Semantic, Vectorstore Fragmentation

  • Unsafe shell or tool suggestions copied from docs. Lock tool allow-lists and require SCU separation for untrusted text. Open: Prompt Injection, Pattern: SCU

  • Org policy or model switch changes outputs silently. Make refusal paths explicit, and echo the active policy and model in the plan. Open: Multi-Agent Problems, Data Contracts


IDE checklist for Tabnine

  • Warm up the selected context source and verify INDEX_HASH, credentials, and policy mode.
  • Use one retrieval metric per run. Do not mix analyzers while fixing one bug.
  • Prompts carry anchors: repo@commit, file_path, symbol, line_start, line_end, snippet_id.
  • Log per step: ΔS, λ state, coverage. Alert when ΔS ≥ 0.60 or λ diverges.
  • Regression gate requires tests pass, coverage ≥ 0.70, ΔS ≤ 0.45, identical diff twice.
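The per-step logging and alert rule in the checklist can be sketched as a small filter. The step dict keys are assumptions (ASCII-ized from ΔS and λ); wire this into whatever logger your IDE tasks already use.

```python
def step_alerts(steps: list[dict]) -> list[int]:
    """Return indices of logged steps that should raise an alert,
    per the checklist rule: ΔS ≥ 0.60 or λ diverges."""
    return [
        i for i, s in enumerate(steps)
        if s["delta_s"] >= 0.60 or s["lambda_state"] != "convergent"
    ]
```

An empty result means the run stayed inside the stable and transitional bands with λ convergent throughout.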

Minimal schema you should capture

{ repo, commit_sha, file_path, symbol, line_start, line_end, snippet_id, tokens, ΔS, λ_state }

Require cite-then-explain. Forbid cross-file reuse without a new citation.
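A hedged sketch of a record validator for this schema. The required keys come straight from the schema line (with ΔS and λ_state ASCII-ized to `delta_s` and `lambda_state`, an assumption); everything else is illustrative.

```python
# Required fields for a cite-then-explain record, mirroring the
# minimal schema above.
REQUIRED = {
    "repo", "commit_sha", "file_path", "symbol",
    "line_start", "line_end", "snippet_id", "tokens",
    "delta_s", "lambda_state",
}

def validate_record(rec: dict) -> list[str]:
    """Return the missing field names, sorted. An empty list means
    the record satisfies the cite-then-explain precondition."""
    return sorted(REQUIRED - rec.keys())
```

Reject any generation request whose record fails validation; a fresh citation means a fresh, fully populated record.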


Deep diagnostics

  • Three-paraphrase probe. Ask for the same change three ways. If λ flips, clamp with BBAM and lock headers.

  • Anchor triangulation. Compare ΔS for the target versus a decoy file or sibling package. If they are close, re-chunk and normalize embeddings. See: Retrieval Playbook, Embedding ≠ Semantic

  • Plan length audit. If entropy rises after 25 to 40 steps, split the plan and re-join with a BBCR bridge. See: Entropy Collapse

  • Live instability. Add probes and backoff guards in IDE tasks. See: Live Monitoring for RAG, Debug Playbook
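The first two diagnostics above can be sketched as two small checks. Assumptions: λ is recorded as one state string per paraphrase, and the 0.05 decoy margin is an illustrative threshold, not a WFGY constant.

```python
def lambda_flips(states: list[str]) -> bool:
    """Three-paraphrase probe: λ 'flips' when the observed states
    are not identical across paraphrases of the same request."""
    return len(set(states)) > 1

def anchor_triangulation(ds_target: float, ds_decoy: float,
                         margin: float = 0.05) -> str:
    """Compare ΔS for the target anchor against a decoy file or
    sibling package. Lower ΔS is better, so a decoy scoring within
    `margin` of the target means the store needs re-chunking and
    embedding normalization."""
    if ds_decoy - ds_target < margin:
        return "re-chunk"
    return "ok"
```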


Copy paste prompt for Tabnine chat

You have TXTOS and the WFGY Problem Map loaded.

My Tabnine issue:
- symptom: [one line]
- anchors: repo={name}, commit={sha}, file={path}, lines={a..b}
- traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ across 3 paraphrases

Tell me:
1) the failing layer and why,
2) the exact WFGY page to open,
3) minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) a reproducible test to verify the fix.
Use BBMC, BBPF, BBCR, BBAM when relevant. Keep it auditable and short.

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
| --- | --- | --- |
| WFGY 1.0 PDF | Engine Paper | 1 Download · 2 Upload to your LLM · 3 Ask "Answer using WFGY + \<your question\>" |
| TXT OS (plain-text OS) | TXTOS.txt | 1 Download · 2 Paste into any LLM chat · 3 Type "hello world" and the OS boots instantly |

🧭 Explore More

| Module | Description | Link |
| --- | --- | --- |
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning and semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with the full WFGY reasoning suite | View → |

👑 Early Stargazers: See the Hall of Fame. WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.
