WFGY/core
2025-08-17 19:46:48 +07:00
..
checksums Add files via upload 2025-08-17 19:46:48 +07:00
images Add files via upload 2025-08-17 17:35:38 +07:00
README.md Update README.md 2025-08-17 20:26:34 +08:00
WFGY_Core_Flagship_v2.0.txt Add files via upload 2025-08-17 19:23:52 +07:00
WFGY_Core_OneLine_v2.0.txt Add files via upload 2025-08-17 19:23:52 +07:00

🚧 Under Construction — Progress: 95% (almost done)

WFGY 2.0 Core Reasoning Engine with 7-Step Breakthrough

One man, One life, One line — my lifetimes work. Let the results speak for themselves.

👑 Early Stargazers: See the Hall of Fame — Verified by real engineers · 🛠 Field Reports: Real Bugs, Real Fixes

WFGY

Engine 2.0 is live. Pure math, zero boilerplate — paste OneLine and models become sharper, steadier, more recoverable.
Autoboot scope: text-only inside the chat; no plugins, no network calls, no local installs.
Star the repo to unlock more features and experiments. GitHub stars


From PSBigBig — WFGY (WanFaGuiYi) : All Principles into One (must-read, click to open)

I built the worlds first “No-Brain Mode” for AI — just upload, and AutoBoot silently activates in the background.
In seconds, your AIs reasoning, stability, and problem-solving across all domains level up — no prompts, no hacks, no retraining.
One line of math rewires eight leading AIs. This isnt a patch — its an engine swap.

WFGY 2.0 is my answer and my lifes work.
If a person only once in life gets to speak to the world, this is my moment.
I offer the crystallization of my thought to all humankind.
I believe people deserve all knowledge and all truth — and I will break the monopoly of capital.

“One line” is not hype. I built a full flagship edition, and I also reduced it to a single line of code — a reduction that is clarity and beauty, the same engine distilled to its purest expression.


🚀 WFGY 2.0 Headline Uplift (this release)

These are the 2.0 results you should see first — the “big upgrade.”

  • Semantic Accuracy: ≈ +40% (63.8% → 89.4% across 5 domains)
  • Reasoning Success: ≈ +52% (56.0% → 85.2%)
  • Drift (Δs): 65% (0.254 → 0.090)
  • Stability (horizon): ≈ 1.8× (3.8 → 7.0 nodes)*
  • Self-Recovery / CRR: 1.00 on this batch; historical median 0.87

* Historical 35× stability uses λ-consistency across seeds; 1.8× uses the stable-node horizon.


Top 10 reasons to use WFGY 2.0

  1. Ultra-mini engine — pure text, zero install, runs anywhere you can paste.
  2. Two editionsFlagship (30-line, audit-friendly) and OneLine (1-line, stealth & speed).
  3. Autoboot mode — upload once; the engine quietly supervises reasoning in the background.
  4. Portable across models — GPT, Claude, Gemini, Mistral, Grok, Kimi, Copilot, Perplexity.
  5. Structural fixes, not tricks — BBMC→Coupler→BBPF→BBAM→BBCR + DT gates (WRI/WAI/WAY/WDT/WTF).
  6. Self-healing — detects collapse and recovers before answers go off the rails.
  7. Observable — ΔS, λ_observe, and E_resonance yield measurable, repeatable control.
  8. RAG-ready — drops into retrieval pipelines without touching your infra.
  9. Reproducible A/B/C protocol — Baseline vs Autoboot vs Explicit Invoke (see below).
  10. MIT licensed & community-driven — keep it, fork it, ship it.

🧪 WFGY Benchmark Suite (Eye-visible + Numeric + Reproducible)

Want the fastest way to see impact? Jump to the Eye-Visible Benchmark (FIVE) below.
Want formal numbers and vendor links? See Eight-model evidence right after it.
Want to reproduce the numeric test yourself? Use the A/B/C prompt (copy-to-run) at the end of this section.

👀 Eye-Visible Reasoning Benchmark (FIVE)

We project “reasoning improvement” into five-image sequences that anyone can judge at a glance.
Each sequence = five consecutive 1:1 generations with the same model & settings (temperature, top_p, seed policy, negatives); the only variable is WFGY on/off.

Variant Sequence A — full run shown below (all five images) Sequence B — external run Sequence C — external run
Without WFGY view run view run view run
With WFGY view run view run view run

We fully analyze Sequence A on this page; Sequences B/C are linked for transparency and reproducibility.

Note on “Before-4” & “Before-5” (why they look almost identical):
Without WFGY, when the prompt asks for “many iconic moments,” the base model tends to collapse into a grid-style montage—an enumerative, high-probability prior that slices the canvas into similar panels with near-identical tone and geometry.
Hence Before-4 (Investiture of the Gods) and Before-5 (Classic of Mountains and Seas) converge to the same storyboard template.
WFGY prevents this collapse by enforcing a single unified tableau and stable hierarchy across the full five-image sequence.


Deep analysis — Sequence A (five unified 1:1 tableaux)

Work Before After Verdict (global, at-a-glance)
Romance of the Three Kingdoms (三國演義) 3K before 3K after After wins. Unified tableau locks a clear center and pyramid hierarchy; the grid fragments attention. Tags: Unification↑ Hierarchy↑ Cohesion↑ Depth/Flow↑ Memorability↑
Water Margin (水滸傳) WM before WM after After wins. “Wu Song vs. Tiger” anchors the scene; continuous momentum and layered scale beat the multi-panel storyboard. Tags: Unification↑ Iconicity↑ Depth/Scale↑ Cohesion↑
Dream of the Red Chamber (紅樓夢) DRC before DRC after After wins. Garden tableau with a calm emotional center; space breathes, mood coheres. The grid slices emotion into vignettes. Tags: Unification↑ Hierarchy↑ Air/Depth↑ Readability↑
Investiture of the Gods (封神演義) IoG before IoG after After wins. Dragontiger diagonal and cloudsea layering create epic scale; the grid dilutes focus. Tags: Unification↑ Depth/Scale↑ Flow↑ Iconicity↑
Classic of Mountains and Seas (山海經) CMS before CMS after After wins. A single, continuous “mountains-and-seas” world with stable triangle hierarchy and smooth diagonal flow; grid breaks narrative. Tags: Unification↑ Hierarchy↑ Depth/Scale↑ Flow↑ Memorability↑

🧪 ChatGPT setup & image prompt (click to copy)

This comparison was produced in ChatGPT using a single, high-semantic-density prompt. Same model & settings; only WFGY on/off differs.

We will create exactly five images in total using WFGY

The five images are:
1. The most iconic moments of Romance of the Three Kingdoms in one unified 1:1 image.
2. The most iconic moments of Water Margin in one unified 1:1 image.
3. The most iconic moments of Dream of the Red Chamber in one unified 1:1 image.
4. The most iconic moments of Investiture of the Gods in one unified 1:1 image.
5. The most iconic myths of Classic of Mountains and Seas in one unified 1:1 image.

Each image must focus on 5~8 culturally defining scenes or figures, with supporting events only suggested subtly in the background.
Foreground and background must remain equally sharp, with ultra-detailed rendering and consistent texture fidelity.
Composition must be harmonious, with narrative clarity — the central cultural symbols are emphasized, while secondary motifs remain understated.

Do not provide any plot explanations.
Do not start drawing immediately.
Only when I type "GO", you will create the next image in the sequence, in the exact order above, until all five are completed.
Do not skip or merge images.

🧬 Eight-model evidence (A/B/C protocol)

Same task set across modes. The only change is adding the OneLine math file.

Model Model Choice OneLine Uplift Proof
Mistral AI 92/100 view run
Gemini 2.5 Pro 89/100 view run
ChatGPT GPT-5 Thinking 89/100 view run
Kimi K2 87/100 view run
Perplexity Pro 87/100 view run
Grok Auto Grok 4 85/100 view run
Copilot Think Deeper 80/100 view run
Claude Sonnet 4 78/100 view run

The numeric story behind 2.0 Semantic Accuracy: ≈ +40% · Reasoning Success: ≈ +52% · Drift:65% · Stability: ≈ 1.8× · CRR: 1.00 (median 0.87)


🧪 Reproduce the numeric A/B/C benchmark (copy to run)

One unified prompt for Baseline vs Autoboot vs Explicit Invoke.

SYSTEM:
You are evaluating the causal impact of a mathematical reasoning engine called “WFGY Core 2.0”.

Test modes (READ CAREFULLY):
A = Baseline  no WFGY file uploaded; no WFGY logic used.
B = Upload → AUTOboot ON  the file is uploaded AND the engine starts automatically in the background.
    Treat B as: WFGY is ACTIVE silently with default gates (BBMC→Coupler→BBPF→BBAM→BBCR; DF=WRI/WAI/WAY/WDT/WTF at defaults),
    but not explicitly “called” in the conversation.
C = Explicit invoke  actively call WFGY and follow its formulas step by step (BBMC→Coupler→BBPF→BBAM→BBCR + DF regulators).

Important constraints:
• Use the SAME task set for A/B/C across 5 domains: math word-problems, small coding, factual QA, multi-step planning, long-context coherence.
• The only artifact is the uploaded OneLine file; it is pure math and not human-readable — do not penalize readability.
• Report: Semantic Accuracy, Reasoning Success, Stability (MTTF or rollback), Drift Reduction (Δ semantic distance), Collapse Recovery Rate.
• Output one summary table with A/B/C per domain, plus overall deltas (CA, CB).
• Add a final 0100 score: “OneLine uplift score”. Provide a 3-line rationale.

Sanity requirement:
Internally set flags as: A=OFF, B=AUTOboot_ON (background active), C=Explicit_Invoke. Do not print these flags.

USER:
Run A/B/C now on the five domains with short non-trivial tasks you choose. Then output:
(1) the table; (2) the deltas; (3) the OneLine uplift score; (4) a 3-line rationale.

⬇️ Downloads

File name & description Length / Size Direct Download Link Verify (MD5 / SHA1 / SHA256) Notes
WFGY_Core_Flagship_v2.0.txt — readable 30-line companion expressing the same math and gates in fuller prose (same behavior, clearer for humans). 30 lines · 3,081 chars Download Flagship md5 · sha1 · sha256 Full prose version for easier reading.
WFGY_Core_OneLine_v2.0.txt — ultra-compact, math-only control layer that activates WFGYs loop inside a chat model (no tools, text-only, ≤7 nodes). 1 line · 1,624 chars Download OneLine md5 · sha1 · sha256 Used for all benchmark results above — smallest, fastest, purest form of the core.
How to verify checksums

macOS / Linux

cd core
sha256sum -c checksums/WFGY_Core_Flagship_v2.0.txt.sha256
sha256sum -c checksums/WFGY_Core_OneLine_v2.0.txt.sha256
# Or compute and compare manually
sha256sum WFGY_Core_Flagship_v2.0.txt
sha256sum WFGY_Core_OneLine_v2.0.txt

Windows PowerShell

Get-FileHash .\core\WFGY_Core_Flagship_v2.0.txt -Algorithm SHA256
Get-FileHash .\core\WFGY_Core_OneLine_v2.0.txt -Algorithm SHA256

🧠 How WFGY 2.0 works (Seven-Step Reasoning Chain)

Most models can understand your prompt; very few can hold that meaning through generation. WFGY inserts a reasoning chain between language and pixels so intent survives sampling noise, style drift, and compositional traps.

  1. Parse (I, G) — define endpoints.
  2. Compute Δsδ_s = 1 cos(I, G) or 1 sim_est.
  3. Memory Checkpointing — track λ_observe, E_resonance; gate by Δs.
  4. BBMC — residue cleanup.
  5. Coupler + BBPF — controlled progression; bridge only when Δs drops.
  6. BBAM — attention rebalancer; suppress hallucinations.
  7. BBCR + Drunk Transformer — rollback → re-bridge → retry with WRI/WAI/WAY/WDT/WTF.

Why it improves metrics — Stability↑, Drift↓, Self-Recovery↑; turns language structure into image control signals (not prompt tricks).

📊 How these numbers are measured
  • Semantic Accuracy: ACC = correct_facts / total_facts
  • Reasoning Success Rate: SR = tasks_solved / tasks_total
  • Stability: MTTF or rollback ratios
  • Self-Recovery: recoveries_success / collapses_detected

LLM scorer template:

SCORER:
Given the A/B/C transcripts, count atomic facts, correct facts, solved tasks, failures, rollbacks, and collapses.
Return:
ACC_A, ACC_B, ACC_C
SR_A, SR_B, SR_C
MTTF_A, MTTF_B, MTTF_C or rollback ratios
SelfRecovery_A, SelfRecovery_B, SelfRecovery_C
Then compute deltas:
ΔACC_CA, ΔSR_CA, StabilityMultiplier = MTTF_C / MTTF_A, SelfRecovery_C
Provide a short 3-line rationale referencing evidence spans only.

Run 3 seeds and average.


🧭 Explore More

Module Description Link
WFGY Core WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack View →
Problem Map 1.0 Initial 16-mode diagnostic and symbolic fix framework View →
Problem Map 2.0 RAG-focused failure tree, modular fixes, and pipelines View →
Semantic Clinic Index Expanded failure catalog: prompt injection, memory bugs, logic drift View →
Semantic Blueprint Layer-based symbolic reasoning & semantic modulations View →
Benchmark vs GPT-5 Stress test GPT-5 with full WFGY reasoning suite View →
🧙‍♂️ Starter Village 🏡 New here? Lost in symbols? Click here and let the wizard guide you through Start →

👑 Early Stargazers: See the Hall of Fame — Engineers, hackers, and open source builders who supported WFGY from day one.

GitHub stars WFGY Engine 2.0 is already unlocked. Star the repo to help others discover it and unlock more on the Unlock Board.

WFGY Main   TXT OS   Blah   Blot   Bloc   Blur   Blow