# Graphs Module
LangGraph-based workflow orchestration for content processing, chat interactions, and AI-powered transformations.
## Key Components

- `chat.py`: Conversational agent with message history, notebook context, and model override support
- `source_chat.py`: Source-focused chat with ContextBuilder for insights/content injection and context tracking
- `ask.py`: Multi-search strategy agent (generates search terms, retrieves results, synthesizes answers)
- `source.py`: Content ingestion pipeline (extract → save → transform with content-core); fan-out sketched below
- `transformation.py`: Single-node transformation executor with prompt templating via ai_prompter
- `prompt.py`: Generic pattern chain for arbitrary prompt-based LLM calls
- `tools.py`: Minimal tool library (currently just `get_current_timestamp()`)
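A minimal, illustrative skeleton of the extract → save → transform shape with `Send` fan-out for parallel transforms. The state fields, node bodies, and string results are invented for the example and do not mirror the real `source.py`:

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import END, START, StateGraph
from langgraph.types import Send


class SourceState(TypedDict):
    content: str
    transformations: list[str]
    results: Annotated[list[str], operator.add]  # reducer merges parallel writes


def extract(state: SourceState) -> dict:
    # Stand-in for content-core extraction
    return {"content": state["content"].strip()}


def save(state: SourceState) -> dict:
    # Persistence would happen here; no-op write for the sketch
    return {"content": state["content"]}


def transform(payload: dict) -> dict:
    # Each Send delivers its own payload; writes merge via the reducer
    return {"results": [f"{payload['name']}: {payload['text']}"]}


def fan_out(state: SourceState) -> list[Send]:
    # A conditional edge returning Send objects runs `transform` once per item, in parallel
    return [Send("transform", {"text": state["content"], "name": t})
            for t in state["transformations"]]


builder = StateGraph(SourceState)
builder.add_node("extract", extract)
builder.add_node("save", save)
builder.add_node("transform", transform)
builder.add_edge(START, "extract")
builder.add_edge("extract", "save")
builder.add_conditional_edges("save", fan_out, ["transform"])
builder.add_edge("transform", END)
graph = builder.compile()

print(graph.invoke({"content": " raw text ", "transformations": ["summary", "toc"], "results": []}))
```

The `Annotated[..., operator.add]` reducer is what lets the parallel `Send` branches merge their writes into a single list.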
## Important Patterns

- Async/sync bridging in graphs: Both `chat.py` and `source_chat.py` use an `asyncio.new_event_loop()` workaround because LangGraph nodes are sync but `provision_langchain_model()` is async (see the sketch after this list)
- State machines via StateGraph: Each graph compiles to a stateful runnable; conditional edges fan out work (`ask.py` and `source.py` do parallel transforms)
- Prompt templating: `ai_prompter.Prompter` with Jinja2 templates referenced by path ("chat/system", "ask/entry", etc.)
- Model provisioning via context: Config dict passed to nodes via `RunnableConfig`; defaults fall back to state overrides
- Checkpointing: `chat.py` and `source_chat.py` use SqliteSaver for message history (LangGraph's built-in persistence)
- Content extraction: `source.py` uses the content-core library with provider/model from DefaultModels; both URLs and files are supported
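A simplified sketch of the bridging idea, with a fake async factory standing in for `provision_langchain_model()`; the real nodes in `chat.py`/`source_chat.py` differ in detail:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


async def provision_model(config: dict) -> str:
    # Stand-in for the async provision_langchain_model() factory
    await asyncio.sleep(0)
    return config.get("model_id", "default-model")


def sync_node(state: dict) -> dict:
    # A sync LangGraph-style node that must call async code
    def run() -> str:
        # Fresh loop in a worker thread: calling asyncio.run() on the
        # graph's own thread would fail when the caller is already
        # inside a running loop (e.g. graph.ainvoke()).
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(provision_model(state))
        finally:
            loop.close()

    with ThreadPoolExecutor(max_workers=1) as pool:
        model = pool.submit(run).result()
    return {"model": model}


print(sync_node({"model_id": "model:custom_id"}))
```

Submitting to a one-thread pool and blocking on `.result()` keeps the node sync, and it is also why the quirks below call this fragile: nested loops and thread-local loop state are easy to break.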
## Quirks & Edge Cases

- Async loop gymnastics: The ThreadPoolExecutor workaround is needed because LangGraph invokes sync nodes while we call async functions; fragile if event loop state changes
- `clean_thinking_content()` is ubiquitous: Strips `<think>...</think>` tags from model responses to handle extended-thinking models (sketched after this list)
- `source_chat.py` builds context twice: ContextBuilder runs during node execution to fetch sources/insights, then the list is rebuilt from context_data (inefficient but safe)
- `source.py` embedding is async: `source.vectorize()` returns a job command ID and is not awaited (fire-and-forget)
- `transformation.py` nullable source: Accepts `input_text` or `source.full_text`, falling back to the latter if the former is missing
- `ask.py` hard-codes vector_search: No fallback to text search, despite commented code suggesting one was planned
- SqliteSaver location: Checkpoints are stored at the path from the `LANGGRAPH_CHECKPOINT_FILE` env var; the connection is shared across graphs
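A plausible re-implementation of the tag stripping, for orientation only; the project's actual `clean_thinking_content()` may differ:

```python
import re

THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)


def clean_thinking_content(text: str) -> str:
    # Drop <think>...</think> blocks emitted by extended-thinking models
    return THINK_RE.sub("", text).strip()


print(clean_thinking_content("<think>internal reasoning</think>Final answer."))
# -> Final answer.
```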
## Key Dependencies

- `langgraph`: StateGraph, Send, END, START, SqliteSaver checkpoint persistence
- `langchain_core`: Messages, OutputParser, RunnableConfig
- `ai_prompter`: Prompter for Jinja2 template rendering (template-by-path convention illustrated below)
- `content_core`: `extract_content()` for file/URL processing
- `open_notebook.ai.provision`: `provision_langchain_model()` (async factory with fallback logic)
- `open_notebook.domain.notebook`: Domain models (Source, Note, SourceInsight, vector_search)
- `loguru`: Logging
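The template-by-path convention can be pictured with plain Jinja2; the `prompts/` directory and `.jinja` extension here are assumptions, and the project actually routes rendering through `ai_prompter.Prompter`:

```python
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("prompts"))


def render_prompt(path: str, **context) -> str:
    # "chat/system" resolves to prompts/chat/system.jinja (extension assumed)
    return env.get_template(f"{path}.jinja").render(**context)


system_prompt = render_prompt("chat/system", notebook_name="Research")
```

Referencing templates by path keeps prompt wording out of graph code, so templates can be edited without touching the nodes.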
## Usage Example

```python
# chat_graph and source_graph are the compiled graphs exported by the
# chat.py and source.py modules (exact import paths assumed);
# notebook, t1, t2 are domain objects supplied by the caller.
from langchain_core.messages import HumanMessage

# Invoke a graph with a config override
config = {"configurable": {"model_id": "model:custom_id"}}
result = await chat_graph.ainvoke(
    {"messages": [HumanMessage(content="...")], "notebook": notebook},
    config=config,
)

# Source processing (extract → save → transform)
result = await source_graph.ainvoke({
    "content_state": {...},  # ProcessSourceState from content-core
    "apply_transformations": [t1, t2],
    "source_id": "source:123",
    "embed": True,
})
```