Mirror of https://github.com/lfnovo/open-notebook.git, synced 2026-05-01.
# Domain Module
Core data models for notebooks, sources, notes, and settings with async SurrealDB persistence, auto-embedding, and relationship management.
## Purpose

Two base classes support different persistence patterns: `ObjectModel` (mutable records with auto-increment IDs) and `RecordModel` (singleton configuration with fixed IDs).
## Key Components

### base.py

- `ObjectModel`: Base class for notebooks, sources, notes
  - `save()`: Create/update with auto-embedding for searchable content
  - `delete()`: Remove by ID
  - `relate(relationship, target_id)`: Create graph relationships (`reference`, `artifact`, `refers_to`)
  - `get(id)`: Polymorphic fetch; resolves the subclass from the ID prefix
  - `get_all(order_by)`: Fetch all records from the table
  - Integrates with `ModelManager` for automatic embedding
- `RecordModel`: Singleton configuration (`ContentSettings`, `DefaultPrompts`)
  - Fixed `record_id` per subclass
  - `update()`: Upsert to database
  - Lazy DB loading via `_load_from_db()`
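The polymorphic `get()` resolves the target class from the `table:id` prefix of the record ID. A minimal, self-contained sketch of that dispatch idea (the registry walk and class names here are illustrative, not the actual open-notebook implementation):

```python
# Sketch of polymorphic ID-prefix dispatch, assuming IDs use the
# "table:id" format described above. Not the real open-notebook code.

class ObjectModel:
    table_name = ""

    @classmethod
    def resolve_class(cls, record_id: str):
        """Pick the subclass whose table_name matches the ID prefix."""
        table = record_id.split(":", 1)[0]
        for subclass in cls.__subclasses__():
            if subclass.table_name == table:
                return subclass
        # Gotcha from the docs: this fails if the subclass module
        # was never imported, since it won't appear in __subclasses__()
        raise ValueError(f"No model registered for table '{table}'")

class Notebook(ObjectModel):
    table_name = "notebook"

class Source(ObjectModel):
    table_name = "source"

print(ObjectModel.resolve_class("notebook:123").__name__)  # Notebook
```

This also illustrates the "subclass not imported" quirk: `__subclasses__()` only sees classes that have been defined, so an unimported model simply isn't found.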
### notebook.py

- `Notebook`: Research project container
  - `get_sources()`, `get_notes()`, `get_chat_sessions()`: Navigate relationships
- `Source`: Content item (file/URL)
  - `vectorize()`: Submit async embedding job (returns `command_id`, fire-and-forget)
  - `get_status()`, `get_processing_progress()`: Track the job via `surreal_commands`
  - `get_context()`: Returns a summary for LLM context
  - `add_insight()`: Generate and store insights with embeddings
- `Note`: Standalone or linked notes
  - `save()`: Submits the `embed_note` command after saving (fire-and-forget)
  - `add_to_notebook()`: Link to a notebook
- `SourceInsight`, `SourceEmbedding`: Derived content models
- `ChatSession`: Conversation container with optional `model_override`
- `Asset`: File/URL reference helper
- Search functions:
  - `text_search()`: Full-text keyword search
  - `vector_search()`: Semantic search via embeddings (default `minimum_score=0.2`)
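The `minimum_score` parameter acts as a similarity cutoff: results scoring below it are dropped. A toy illustration of that filtering idea using cosine similarity (the vectors, record IDs, and function body are made up for illustration; the real search runs inside SurrealDB):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def toy_vector_search(query_vec, records, minimum_score=0.2):
    """Return (id, score) pairs at or above the cutoff, best first."""
    scored = [(rid, cosine(query_vec, vec)) for rid, vec in records]
    hits = [(rid, s) for rid, s in scored if s >= minimum_score]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

records = [("source:a", [1.0, 0.0]),
           ("source:b", [0.0, 1.0]),   # orthogonal: score 0, filtered out
           ("source:c", [0.9, 0.1])]
print(toy_vector_search([1.0, 0.0], records, minimum_score=0.2))
```

Raising `minimum_score` (as in the `0.3` used in the Usage example below) trades recall for precision.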
### content_settings.py

- `ContentSettings`: Singleton for processing engines, embedding strategy, file deletion, YouTube languages

### transformation.py

- `Transformation`: Reusable prompts for content transformation
- `DefaultPrompts`: Singleton with transformation instructions
## Important Patterns

- Async/await: All DB operations are async; always use `await`
- Polymorphic `get()`: `ObjectModel.get(id)` determines the subclass from the ID prefix (`table:id` format)
- Fire-and-forget embedding: Models submit `embed_*` commands after save via `submit_command()` (non-blocking)
- Nullable fields: Declare via the `nullable_fields` ClassVar to allow `None` in the database
- Timestamps: `created` and `updated` are auto-managed as ISO strings
- Fire-and-forget jobs: `source.vectorize()` returns a `command_id` without waiting for the job to finish
## Key Dependencies

- `surrealdb`: `RecordID` type for relationships
- `pydantic`: Validation and the `field_validator` decorator
- `open_notebook.database.repository`: CRUD and relationship functions
- `open_notebook.ai.models`: `ModelManager` for embeddings
- `surreal_commands`: Async job submission (vectorization, insights)
- `loguru`: Logging
## Quirks & Gotchas

- Polymorphic resolution: `ObjectModel.get()` fails if the subclass was not imported (it searches the subclasses list)
- `RecordModel` singleton: Constructing a new instance returns the existing one; call `clear_instance()` in tests
- `Source.command` field: Stored as a `RecordID`; auto-parsed from strings via `field_validator`
- Text truncation: `Note.get_context(short)` hardcodes a 100-char limit
- Auto-embedding behavior:
  - `Note.save()` → auto-submits the `embed_note` command
  - `Source.save()` → does NOT auto-submit (call `vectorize()` explicitly)
  - `Source.add_insight()` → auto-submits the `embed_insight` command
- Relationship strings: Must match the SurrealDB schema (`reference`, `artifact`, `refers_to`)
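The singleton gotcha above (construction hands back the cached instance, so tests must call `clear_instance()`) can be sketched with a generic singleton pattern — this is not the actual `RecordModel` code:

```python
class RecordModel:
    _instances: dict = {}  # one cached instance per subclass

    def __new__(cls):
        # Constructing again returns the cached instance
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]

    @classmethod
    def clear_instance(cls):
        # Needed in tests so each test starts from a fresh singleton
        cls._instances.pop(cls, None)

class ContentSettings(RecordModel):
    pass

a = ContentSettings()
b = ContentSettings()
print(a is b)  # True: same cached singleton
ContentSettings.clear_instance()
print(a is ContentSettings())  # False: fresh instance after clearing
```

Without `clear_instance()`, state set in one test would silently leak into the next through the cached instance.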
## How to Add a New Model

1. Inherit from `ObjectModel` with a `table_name` ClassVar
2. Define Pydantic fields with validators
3. Override `save()` to submit an embedding command if the content is searchable (use `submit_command("embed_*", id)`)
4. Add custom methods for domain logic (`get_X`, `add_to_Y`)
5. Implement `_prepare_save_data()` if custom serialization is needed
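Putting the steps together, a new model might look roughly like this. It is a schematic sketch only: the `Bookmark` class is hypothetical, and the real code inherits the project's Pydantic-backed `ObjectModel` and the real `submit_command()`, both of which are stubbed out here.

```python
import asyncio

# Stand-ins for the project's base class and job submission; the real
# versions are Pydantic/SurrealDB-backed. Schematic only.
class ObjectModel:
    table_name = ""

    async def save(self):
        self.id = f"{self.table_name}:1"  # pretend persistence

async def submit_command(name, record_id):
    return f"cmd:{name}:{record_id}"      # pretend job queue

class Bookmark(ObjectModel):              # step 1: table_name ClassVar
    table_name = "bookmark"

    def __init__(self, url: str, title: str = ""):  # step 2: fields
        self.url = url
        self.title = title
        self.id = None

    async def save(self):                 # step 3: override save()
        await super().save()
        # Fire-and-forget embedding for searchable content
        return await submit_command("embed_bookmark", self.id)

    def get_context(self, short=True):    # step 4: domain methods
        return self.title if short else f"{self.title} ({self.url})"

cid = asyncio.run(Bookmark("https://example.com", "Example").save())
print(cid)  # cmd:embed_bookmark:bookmark:1
```

Step 5 (`_prepare_save_data()`) would slot in only when the model needs custom serialization before hitting the database.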
## Usage

```python
notebook = Notebook(name="Research", description="My project")
await notebook.save()

obj = await ObjectModel.get("notebook:123")  # Polymorphic fetch

# Search
await text_search("quantum", results=5)
await vector_search("quantum computing", results=10, minimum_score=0.3)
```