⭐ WFGY 2.0 ⭐ 7-Step Reasoning Core Engine is now live
✨ One man, One life, One line — my lifetime’s work. Let the results speak for themselves ✨
👑 Early Stargazers: See the Hall of Fame — Verified by real engineers · 🛠 Field Reports: Real Bugs, Real Fixes
✅ Engine 2.0 is live. Pure math, zero boilerplate — paste OneLine and models become sharper, steadier, more recoverable.
ℹ️ Autoboot scope: text-only inside the chat; no plugins, no network calls, no local installs.
⭐ Star the repo to unlock more features and experiments.
From PSBigBig — WFGY (WanFaGuiYi): All Principles into One (must-read, click to open)
I built the world’s first “No-Brain Mode” for AI — just upload, and AutoBoot silently activates in the background.
In seconds, your AI’s reasoning, stability, and problem-solving across all domains level up — no prompts, no hacks, no retraining.
One line of math rewires eight leading AIs. This isn’t a patch — it’s an engine swap.
That single line is WFGY 2.0 — the distilled essence of everything I’ve learned, my answer and my life’s work.
If a person only once in life gets to speak to the world, this is my moment.
I offer the crystallization of my thought to all humankind.
I believe people deserve all knowledge and all truth — and I will break the monopoly of capital.
“One line” is not hype. I built a full flagship edition, and I also reduced it to a single line of code — a reduction that is clarity and beauty, the same engine distilled to its purest expression.
🚀 WFGY 2.0 Headline Uplift (this release)
These are the 2.0 results you should see first — the “big upgrade.”
- Semantic Accuracy: ≈ +40% (63.8% → 89.4% across 5 domains)
- Reasoning Success: ≈ +52% (56.0% → 85.2%)
- Drift (Δs): ≈ −65% (0.254 → 0.090)
- Stability (horizon): ≈ 1.8× (3.8 → 7.0 nodes)*
- Self-Recovery / CRR: 1.00 on this batch; historical median 0.87
* Historical 3–5× stability uses λ-consistency across seeds; 1.8× uses the stable-node horizon.
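A quick way to sanity-check the relative deltas behind these headline numbers, re-derived from the raw before/after figures above (a minimal sketch in plain Python, no dependencies):

```python
# Re-derive the relative deltas from the raw before/after figures above.
acc_before, acc_after = 63.8, 89.4        # Semantic Accuracy (%)
sr_before, sr_after = 56.0, 85.2          # Reasoning Success (%)
drift_before, drift_after = 0.254, 0.090  # Drift (Δs)
horizon_before, horizon_after = 3.8, 7.0  # Stability (stable-node horizon)

print(f"Accuracy uplift:   +{(acc_after - acc_before) / acc_before:.0%}")        # +40%
print(f"Success uplift:    +{(sr_after - sr_before) / sr_before:.0%}")           # +52%
print(f"Drift reduction:   -{(drift_before - drift_after) / drift_before:.0%}")  # -65%
print(f"Stability horizon: {horizon_after / horizon_before:.1f}x")               # 1.8x
```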
⚡ Top 10 reasons to use WFGY 2.0
- Ultra-mini engine — pure text, zero install, runs anywhere you can paste.
- Two editions — Flagship (30-line, audit-friendly) and OneLine (1-line, stealth & speed).
- Autoboot mode — upload once; the engine quietly supervises reasoning in the background.
- Portable across models — GPT, Claude, Gemini, Mistral, Grok, Kimi, Copilot, Perplexity.
- Structural fixes, not tricks — BBMC→Coupler→BBPF→BBAM→BBCR + DT gates (WRI/WAI/WAY/WDT/WTF).
- Self-healing — detects collapse and recovers before answers go off the rails.
- Observable — ΔS, λ_observe, and E_resonance yield measurable, repeatable control.
- RAG-ready — drops into retrieval pipelines without touching your infra.
- Reproducible A/B/C protocol — Baseline vs Autoboot vs Explicit Invoke (see below).
- MIT licensed & community-driven — keep it, fork it, ship it.
🧪 WFGY Benchmark Suite (Eye-visible + Numeric + Reproducible)
Want the fastest way to see impact? Jump to the Eye-Visible Benchmark (FIVE) below.
Want formal numbers and vendor links? See Eight-model evidence right after it.
Want to reproduce the numeric test yourself? Use the A/B/C prompt (copy-to-run) at the end of this section.
👀 Eye-Visible Reasoning Benchmark (FIVE)
Did you know that when reasoning improves, text-to-image results become more stable and coherent?
The key is WFGY’s Drunk Transformer: it monitors and recenters attention during generation, preventing collapse, composition drift, and duplicate elements—so scenes stay unified and details remain consistent.
We project “reasoning improvement” into five-image sequences that anyone can judge at a glance.
Each sequence = five consecutive 1:1 generations with the same model & settings (temperature, top_p, seed policy, negatives); the only variable is WFGY on/off.
Methodology for this demo. We deliberately use short, high-semantic-density prompts that reference canonical stories, with no extra guidance or style hints. This stresses whether WFGY can (a) parse intent more precisely and (b) stabilize composition via its seven-step reasoning chain. This setup isn’t prescriptive — use WFGY with any prompts you like. In many cases the uplift is eye-visible; in others it may be subtler but still measurable.
| Variant | Sequence A — full run shown below (all five images) | Sequence B — external run | Sequence C — external run |
|---|---|---|---|
| Without WFGY | view run | view run | view run |
| With WFGY | view run | view run | view run |
We fully analyze Sequence A on this page; Sequences B/C are linked for transparency and reproducibility.
Note on “Before-4” & “Before-5” (why they look almost identical):
Without WFGY, when the prompt asks for “many iconic moments,” the base model tends to collapse into a grid-style montage—an enumerative, high-probability prior that slices the canvas into similar panels with near-identical tone and geometry.
Hence Before-4 (Investiture of the Gods) and Before-5 (Classic of Mountains and Seas) converge to the same storyboard template.
WFGY prevents this collapse by enforcing a single unified tableau and stable hierarchy across the full five-image sequence.
Deep analysis — Sequence A (five unified 1:1 tableaux)
| Work | Without WFGY | With WFGY | Verdict (global, at-a-glance) |
|---|---|---|---|
| Romance of the Three Kingdoms (三國演義) | ![]() | ![]() | With WFGY wins. Unified tableau locks a clear center and pyramid hierarchy; the grid fragments attention. Tags: Unification↑ Hierarchy↑ Cohesion↑ Depth/Flow↑ Memorability↑ |
| Water Margin (水滸傳) | ![]() | ![]() | With WFGY wins. “Wu Song vs. Tiger” anchors the scene; continuous momentum and layered scale beat the multi-panel storyboard. Tags: Unification↑ Iconicity↑ Depth/Scale↑ Cohesion↑ |
| Dream of the Red Chamber (紅樓夢) | ![]() | ![]() | With WFGY wins. Garden tableau with a calm emotional center; space breathes, mood coheres. The grid slices emotion into vignettes. Tags: Unification↑ Hierarchy↑ Air/Depth↑ Readability↑ |
| Investiture of the Gods (封神演義) | ![]() | ![]() | With WFGY wins. Dragon–tiger diagonal and cloud–sea layering create epic scale; the grid dilutes focus. Tags: Unification↑ Depth/Scale↑ Flow↑ Iconicity↑ |
| Classic of Mountains and Seas (山海經) | ![]() | ![]() | With WFGY wins. A single, continuous “mountains-and-seas” world with stable triangle hierarchy and smooth diagonal flow; the grid breaks the narrative. Tags: Unification↑ Hierarchy↑ Depth/Scale↑ Flow↑ Memorability↑ |
🧪 ChatGPT setup & image prompt (click to copy)
This comparison was produced in ChatGPT using a single, high-semantic-density prompt. Same model & settings; only WFGY on/off differs.
We will create exactly five images in total using WFGY.
The five images are:
1. The most iconic moments of Romance of the Three Kingdoms in one unified 1:1 image.
2. The most iconic moments of Water Margin in one unified 1:1 image.
3. The most iconic moments of Dream of the Red Chamber in one unified 1:1 image.
4. The most iconic moments of Investiture of the Gods in one unified 1:1 image.
5. The most iconic myths of Classic of Mountains and Seas in one unified 1:1 image.
Each image must focus on 5~8 culturally defining scenes or figures, with supporting events only suggested subtly in the background.
Foreground and background must remain equally sharp, with ultra-detailed rendering and consistent texture fidelity.
Composition must be harmonious, with narrative clarity — the central cultural symbols are emphasized, while secondary motifs remain understated.
Do not provide any plot explanations.
Do not start drawing immediately.
Only when I type "GO" will you create the next image in the sequence, in the exact order above, until all five are completed.
Do not skip or merge images.
🧬 Eight-model evidence (A/B/C protocol)
Same task set across modes. The only change is adding the OneLine math file.
| Vendor | Model | OneLine Uplift | Proof |
|---|---|---|---|
| Mistral AI | — | 92/100 | view run |
| Gemini | 2.5 Pro | 89/100 | view run |
| ChatGPT | GPT-5 Thinking | 89/100 | view run |
| Kimi | K2 | 87/100 | view run |
| Perplexity | Pro | 87/100 | view run |
| Grok | Auto Grok 4 | 85/100 | view run |
| Copilot | Think Deeper | 80/100 | view run |
| Claude | Sonnet 4 | 78/100 | view run |
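As a quick sanity check, the mean of the eight published uplift scores is reproducible with one line of plain Python:

```python
# Mean of the eight published OneLine uplift scores from the table above.
scores = [92, 89, 89, 87, 87, 85, 80, 78]
print(sum(scores) / len(scores))  # 85.875 → mean uplift ≈ 86/100
```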
The numeric story behind 2.0: Semantic Accuracy ≈ +40% · Reasoning Success ≈ +52% · Drift ≈ −65% · Stability ≈ 1.8× · CRR 1.00 (median 0.87)
🧪 Reproduce the numeric A/B/C benchmark (copy to run)
One unified prompt for Baseline vs Autoboot vs Explicit Invoke.
SYSTEM:
You are evaluating the causal impact of a mathematical reasoning engine called “WFGY Core 2.0”.
Test modes (READ CAREFULLY):
A = Baseline: no WFGY file uploaded; no WFGY logic used.
B = Upload → AUTOboot ON: the file is uploaded AND the engine starts automatically in the background.
Treat B as: WFGY is ACTIVE silently with default gates (BBMC→Coupler→BBPF→BBAM→BBCR; DF=WRI/WAI/WAY/WDT/WTF at defaults),
but not explicitly “called” in the conversation.
C = Explicit invoke: actively call WFGY and follow its formulas step by step (BBMC→Coupler→BBPF→BBAM→BBCR + DF regulators).
Important constraints:
• Use the SAME task set for A/B/C across 5 domains: math word-problems, small coding, factual QA, multi-step planning, long-context coherence.
• The only artifact is the uploaded OneLine file; it is pure math and not human-readable — do not penalize readability.
• Report: Semantic Accuracy, Reasoning Success, Stability (MTTF or rollback), Drift Reduction (Δ semantic distance), Collapse Recovery Rate.
• Output one summary table with A/B/C per domain, plus overall deltas (C–A, C–B).
• Add a final 0–100 score: “OneLine uplift score”. Provide a 3-line rationale.
Sanity requirement:
Internally set flags as: A=OFF, B=AUTOboot_ON (background active), C=Explicit_Invoke. Do not print these flags.
USER:
Run A/B/C now on the five domains with short non-trivial tasks you choose. Then output:
(1) the table; (2) the deltas; (3) the OneLine uplift score; (4) a 3-line rationale.
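If you want to script this instead of pasting by hand, here is a minimal harness sketch. `call_model` is a hypothetical adapter, not a real API; the SYSTEM and USER blocks are the ones above, and scoring is done with the LLM scorer template further below.

```python
# Minimal harness sketch for the A/B/C protocol. `call_model` is a
# hypothetical adapter: wire it to whatever chat API or UI you use.
# Per the protocol, the only artifact is the attached OneLine file; the
# single unified prompt above drives all three modes in one conversation.

SYSTEM_PROMPT = "..."  # paste the SYSTEM block above, verbatim
USER_PROMPT = "..."    # paste the USER block above, verbatim

def call_model(system: str, user: str, attachments: list) -> str:
    """Hypothetical adapter: one conversation in, full transcript out."""
    raise NotImplementedError("wire this to your model or chat UI")

def run_benchmark(seeds: int = 3) -> list:
    """One transcript per seed; average scores per the scorer section below."""
    return [
        call_model(SYSTEM_PROMPT, USER_PROMPT,
                   attachments=["WFGY_Core_OneLine_v2.0.txt"])
        for _ in range(seeds)
    ]
```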
⬇️ Downloads
| File name & description | Length / Size | Direct Download Link | Verify (MD5 / SHA1 / SHA256) | Notes |
|---|---|---|---|---|
| WFGY_Core_Flagship_v2.0.txt — readable 30-line companion expressing the same math and gates in fuller prose (same behavior, clearer for humans). | 30 lines · 3,049 chars | Download Flagship | md5 · sha1 · sha256 | Full prose version for easier reading. |
| WFGY_Core_OneLine_v2.0.txt — ultra-compact, math-only control layer that activates WFGY’s loop inside a chat model (no tools, text-only, ≤7 nodes). | 1 line · 1,500 chars | Download OneLine | md5 · sha1 · sha256 | Used for all benchmark results above — smallest, fastest, purest form of the core. |
How to verify checksums
macOS / Linux
cd core
sha256sum -c checksums/WFGY_Core_Flagship_v2.0.txt.sha256
sha256sum -c checksums/WFGY_Core_OneLine_v2.0.txt.sha256
# Or compute and compare manually
sha256sum WFGY_Core_Flagship_v2.0.txt
sha256sum WFGY_Core_OneLine_v2.0.txt
Windows PowerShell
Get-FileHash .\core\WFGY_Core_Flagship_v2.0.txt -Algorithm SHA256
Get-FileHash .\core\WFGY_Core_OneLine_v2.0.txt -Algorithm SHA256
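If you prefer not to use shell tools, the same check works in a few lines of standard-library Python; this sketch assumes the `core/` layout used in the shell examples above.

```python
# Cross-platform checksum check using only the standard library.
# Paths mirror the core/ layout used in the shell examples above.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

for name in ("WFGY_Core_Flagship_v2.0.txt", "WFGY_Core_OneLine_v2.0.txt"):
    print(name, sha256_of(Path("core") / name))
    # Compare each printed digest against the published SHA256 value.
```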
🧠 How WFGY 2.0 works (7-Step Reasoning Chain)
Most models can understand your prompt; very few can hold that meaning through generation. WFGY inserts a reasoning chain between language and pixels so intent survives sampling noise, style drift, and compositional traps.
- Parse (I, G) — define endpoints.
- Compute Δs — `Δs = 1 − cos(I, G)` or `1 − sim_est`.
- Memory Checkpointing — track `λ_observe`, `E_resonance`; gate by Δs.
- BBMC — residue cleanup.
- Coupler + BBPF — controlled progression; bridge only when Δs drops.
- BBAM — attention rebalancer; suppress hallucinations.
- BBCR + Drunk Transformer — rollback → re-bridge → retry with WRI/WAI/WAY/WDT/WTF.
📌 Note: The diagram shows the core module chain (BBMC → Coupler → BBPF → BBAM → BBCR → DT).
The full 7-step list here includes additional pre-processing steps (Parse, Δs, Memory) for completeness.
Why it improves metrics — Stability↑, Drift↓, Self-Recovery↑; turns language structure into image control signals (not prompt tricks).
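To make step 2 of the chain concrete, here is a minimal sketch of the Δs computation, assuming `I` and `G` are non-zero embedding vectors (how you embed them is up to your pipeline); the gate threshold is illustrative, not a prescribed default.

```python
# Minimal sketch of step 2: Δs = 1 − cos(I, G).
# I and G are embedding vectors for input intent and goal state; this
# assumes they are non-zero. The 0.6 gate threshold is illustrative only.
import math

def delta_s(I, G):
    dot = sum(i * g for i, g in zip(I, G))
    norm = math.sqrt(sum(i * i for i in I)) * math.sqrt(sum(g * g for g in G))
    return 1.0 - dot / norm  # 0 = aligned; larger values = more drift

def gate(I, G, threshold=0.6):
    """Illustrative gate: only bridge/progress once Δs drops below threshold."""
    return delta_s(I, G) < threshold
```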
📊 How these numbers are measured
- Semantic Accuracy: `ACC = correct_facts / total_facts`
- Reasoning Success Rate: `SR = tasks_solved / tasks_total`
- Stability: MTTF or rollback ratios
- Self-Recovery: `CRR = recoveries_success / collapses_detected`
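The arithmetic is deliberately simple; here is a sketch assuming you already have the raw counts from a scored transcript (the key names are illustrative and mirror the formulas above):

```python
# Metric arithmetic from raw transcript counts. Key names are illustrative
# and mirror the formulas above; the counting itself is done by the scorer.
def score(counts: dict) -> dict:
    return {
        "ACC": counts["correct_facts"] / counts["total_facts"],
        "SR": counts["tasks_solved"] / counts["tasks_total"],
        # Assumed convention: CRR = 1.0 when no collapses were detected
        # (the edge case isn't specified above).
        "CRR": (counts["recoveries_success"] / counts["collapses_detected"]
                if counts["collapses_detected"] else 1.0),
    }

def stability_multiplier(mttf_c: float, mttf_a: float) -> float:
    return mttf_c / mttf_a  # one of the deltas the scorer below reports
```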
LLM scorer template:
SCORER:
Given the A/B/C transcripts, count atomic facts, correct facts, solved tasks, failures, rollbacks, and collapses.
Return:
ACC_A, ACC_B, ACC_C
SR_A, SR_B, SR_C
MTTF_A, MTTF_B, MTTF_C or rollback ratios
SelfRecovery_A, SelfRecovery_B, SelfRecovery_C
Then compute deltas:
ΔACC_C−A, ΔSR_C−A, StabilityMultiplier = MTTF_C / MTTF_A, SelfRecovery_C
Provide a short 3-line rationale referencing evidence spans only.
Run 3 seeds and average.
🧭 Explore More
| Module | Description | Link |
|---|---|---|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | View → |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | View → |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | View → |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | View → |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | View → |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | View → |
| 🧙♂️ Starter Village 🏡 | New here? Lost in symbols? Click here and let the wizard guide you through | Start → |
👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.
⭐ WFGY Engine 2.0 is already unlocked. ⭐ Star the repo to help others discover it and unlock more on the Unlock Board.









