# Semantic Tree Anchor: Persistent Context & Style Memory
The **Semantic Tree** is WFGY's internal memory graph: a lightweight, symbolic structure that anchors ideas, logic, and style across reasoning steps, even in stateless, prompt-only environments.
While LLMs handle tokens and embeddings, they forget the *why*.
The Semantic Tree captures the *intent structure*, not just the words.
---
## Problem Statement
Language models often fail to maintain consistency because:
| Weakness | Impact |
| ---------------------- | ---------------------------------------- |
| No symbolic memory | Logic breaks across turns |
| Style not remembered | Shifts tone mid-task |
| Embedding drift | Same ideas, different outputs |
| No cross-unit cohesion | Characters and themes collapse across steps |
These flaws are most visible in **multi-part prompts**, **interactive fiction**, **agentic tasks**, and **visual storytelling**.
---
## What Is the Semantic Tree?
The Semantic Tree is a dynamic, non-linear map of:
* **Core nodes** (ideas, roles, goals, abstract objects)
* **Semantic links** (cause, contrast, hierarchy, symbolism)
* **Tension states** (ΔS between nodes; keeps things interesting)
It evolves turn by turn while keeping *semantic anchors* alive: characters in a story, unresolved metaphors, ongoing tasks.
> The Tree doesn't record tokens.
> It records *meaningful structures that must not die*.
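One way to picture this structure is as plain data. The sketch below is illustrative only: the class names, fields, and example values are assumptions for this doc, not WFGY's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a semantic tree: named nodes, typed links,
# and a ΔS tension value on each link.
@dataclass
class Node:
    name: str   # e.g. "AI's solitude"
    role: str   # e.g. "theme", "agent", "symbol"

@dataclass
class Link:
    src: str
    dst: str
    kind: str       # "cause", "contrast", "hierarchy", "symbolism"
    delta_s: float  # ΔS tension between the two nodes

@dataclass
class SemanticTree:
    nodes: dict = field(default_factory=dict)  # name -> Node
    links: list = field(default_factory=list)

    def anchor(self, name, role):
        """Register a concept that must stay alive across turns."""
        self.nodes[name] = Node(name, role)

    def connect(self, src, dst, kind, delta_s):
        """Record a semantic link and its tension."""
        self.links.append(Link(src, dst, kind, delta_s))

tree = SemanticTree()
tree.anchor("AI's solitude", "theme")
tree.anchor("broken mirror", "symbol")
tree.connect("AI's solitude", "broken mirror", "symbolism", delta_s=0.48)
```

Note that the tree stores concepts and their relations, not transcript text, which is what keeps it sparse.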
---
## How It Works in WFGY
| Stage | Role |
| -------------------- | ------------------------------------------------------ |
| 1. Identify anchors | Track key nodes in the prompt: agents, metaphors, events |
| 2. Classify role | Set the node type (e.g. cause, theme, viewpoint, mood holder) |
| 3. Track ΔS drift | Compare new units to tree nodes for tension stability |
| 4. Restore shape | Inject necessary callbacks to maintain the semantic thread |
It pairs tightly with the **Reasoning Engine Core**, feeding stable reference frames to logic generation.
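Stages 3 and 4 can be sketched in a few lines. Everything here is an assumption for illustration: `embed` stands in for a real embedding model (a toy bag-of-words vector keeps the example self-contained), and the drift threshold is invented.

```python
from collections import Counter
import math

def embed(text):
    # Toy stand-in for a sentence embedding: bag-of-words counts.
    return Counter(text.lower().split())

def delta_s(a, b):
    """Toy ΔS: 1 - cosine similarity of the two bag-of-words vectors."""
    va, vb = embed(a), embed(b)
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

def restore_anchors(anchors, new_unit, drift_threshold=0.5):
    """Stages 3-4: measure ΔS of the new unit against each anchor and
    emit a callback for any anchor that has drifted past the threshold."""
    callbacks = []
    for name, text in anchors.items():
        if delta_s(text, new_unit) > drift_threshold:
            callbacks.append(f"Recall anchor '{name}': {text}")
    return callbacks

anchors = {"mirror": "a broken mirror reflecting the AI"}
# The new scene has drifted from the mirror anchor, so a recall
# callback is emitted to restore the semantic thread.
print(restore_anchors(anchors, "the AI walks through glitching streets"))
```

A production version would swap `embed` for a real model and tune the threshold per role, but the loop shape stays the same: measure tension, then re-inject what is slipping away.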
---
## Why Symbolic Anchoring Beats Token Memory
| Feature | Token Memory | Semantic Tree |
| ------------------- | -------------------- | -------------------------------- |
| Size | Grows linearly | Sparse, concept-based |
| Drift control | Embedding match only | ΔS + symbolic link tracking |
| Style persistence | Not guaranteed | Can maintain poetic or tonal arc |
| Nonlinear branching | Difficult | Native (tree forks + joins) |
| Imagination support | Limited | Enables consistent surreal logic |
---
## Example: Multi-Scene Visual Narrative
```txt
Prompt:
"Tell a 4-part story about a lonely AI exploring a broken simulation. Each scene should feel visually distinct but thematically linked."
WFGY Tree:
• Scene 1 → Root node: AI's solitude
• Scene 2 → Branch: glitchy world physics (linked as 'antagonist')
• Scene 3 → Symbol re-introduction: broken mirror from scene 1 (ΔS decay detected)
• Scene 4 → Resolution links AI's identity to the mirror; loop closed
→ Output: consistent motifs, coherent arc, symbolic closure
```
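The four-scene tree above can be encoded as plain data and walked to verify the closed loop. The node names, roles, and link kinds below are taken from the example; the encoding itself is a hypothetical sketch, not a WFGY format.

```python
# Hypothetical encoding of the four-scene example tree.
story_tree = {
    "AI's solitude": {"role": "root theme", "links": []},
    "glitchy world physics": {"role": "antagonist",
                              "links": [("AI's solitude", "contrast")]},
    "broken mirror": {"role": "symbol",
                      "links": [("AI's solitude", "symbolism")]},
    "AI's identity": {"role": "resolution",
                      "links": [("broken mirror", "closure")]},
}

def reachable(tree, start):
    """Follow links from `start` and return every node reached."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(dst for dst, _ in tree[node]["links"])
    return seen

# The resolution node reaches the root theme through the mirror,
# which is the "loop closed" condition from the example.
print(reachable(story_tree, "AI's identity"))
```

Checking reachability back to the root is a cheap way to confirm symbolic closure before emitting the final scene.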
---
## What It Enables
* **Story continuity** without saving raw text
* **Style-harmonic image prompts** across visual steps
* **LLM agents that don't forget what they are**
* **Re-entry points**: re-invoke old threads even after divergence
---
## Pro-Tip: ΔS Drives Tree Growth
ΔS is not just for logic loops;
it also governs *tree expansion and pruning*:
* If the ΔS of a new idea is **too flat**, it's ignored
* If ΔS is **too high**, the system forks a new semantic thread
* If ΔS is **near 0.5**, it connects and grows the branch
> This makes the Tree a true living structure,
> always adjusting toward *meaningful novelty*.
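The three rules above amount to a simple band test on ΔS. A minimal sketch, with the caveat that these threshold values are illustrative assumptions, not WFGY's actual bands:

```python
# Hypothetical ΔS-gated growth decision; the band edges are invented
# for illustration.
def tree_action(delta_s, flat=0.15, fork=0.85):
    """Decide how a new idea joins the tree based on its tension ΔS."""
    if delta_s < flat:
        return "ignore"  # too flat: redundant with existing nodes
    if delta_s > fork:
        return "fork"    # too high: start a new semantic thread
    return "grow"        # mid-band (near 0.5): extend the branch

for d in (0.05, 0.5, 0.95):
    print(d, "->", tree_action(d))
# 0.05 -> ignore
# 0.5 -> grow
# 0.95 -> fork
```

Pruning falls out of the same test: ideas in the flat band never enter the tree, so the structure only grows where tension is meaningful.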
---
## Related Readings
* [`reasoning_engine_core.md`](./reasoning_engine_core.md)
  – Semantic Tree feeds the engine its persistent logic.
* [`semantic_boundary_navigation.md`](./semantic_boundary_navigation.md)
  – Shows how the Tree enables safe, controlled jumps across ideas.
---
### Explore More
| Module | Description | Link |
|-----------------------|----------------------------------------------------------|----------|
| WFGY Core | WFGY 2.0 engine is live: full symbolic reasoning architecture and math stack | [View →](https://github.com/onestardao/WFGY/tree/main/core/README.md) |
| Problem Map 1.0 | Initial 16-mode diagnostic and symbolic fix framework | [View →](https://github.com/onestardao/WFGY/tree/main/ProblemMap/README.md) |
| Problem Map 2.0 | RAG-focused failure tree, modular fixes, and pipelines | [View →](https://github.com/onestardao/WFGY/blob/main/ProblemMap/rag-architecture-and-recovery.md) |
| Semantic Clinic Index | Expanded failure catalog: prompt injection, memory bugs, logic drift | [View →](https://github.com/onestardao/WFGY/blob/main/ProblemMap/SemanticClinicIndex.md) |
| Semantic Blueprint | Layer-based symbolic reasoning & semantic modulations | [View →](https://github.com/onestardao/WFGY/tree/main/SemanticBlueprint/README.md) |
| Benchmark vs GPT-5 | Stress test GPT-5 with full WFGY reasoning suite | [View →](https://github.com/onestardao/WFGY/tree/main/benchmarks/benchmark-vs-gpt5/README.md) |
| Starter Village | New here? Lost in symbols? Click here and let the wizard guide you through | [Start →](https://github.com/onestardao/WFGY/blob/main/StarterVillage/README.md) |
---
> **Early Stargazers: [See the Hall of Fame](https://github.com/onestardao/WFGY/tree/main/stargazers)**
> Engineers, hackers, and open source builders who supported WFGY from day one.
> [WFGY Engine 2.0](https://github.com/onestardao/WFGY/blob/main/core/README.md) is already unlocked. Star the repo to help others discover it and unlock more on the [Unlock Board](https://github.com/onestardao/WFGY/blob/main/STAR_UNLOCKS.md).