open-notebook/open_notebook/domain/notebook.py
import asyncio
import os
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional, Tuple, Union

from loguru import logger
from pydantic import BaseModel, ConfigDict, Field, field_validator
from surreal_commands import submit_command
from surrealdb import RecordID

from open_notebook.ai.models import model_manager
from open_notebook.database.repository import ensure_record_id, repo_query
from open_notebook.domain.base import ObjectModel
from open_notebook.exceptions import DatabaseOperationError, InvalidInputError
from open_notebook.utils import split_text

class Notebook(ObjectModel):
    table_name: ClassVar[str] = "notebook"

    name: str
    description: str
    archived: Optional[bool] = False

    @field_validator("name")
    @classmethod
    def name_must_not_be_empty(cls, v):
        if not v.strip():
            raise InvalidInputError("Notebook name cannot be empty")
        return v

    async def get_sources(self) -> List["Source"]:
        try:
            srcs = await repo_query(
                """
                select * omit source.full_text from (
                    select in as source from reference where out=$id
                    fetch source
                ) order by source.updated desc
                """,
                {"id": ensure_record_id(self.id)},
            )
            return [Source(**src["source"]) for src in srcs] if srcs else []
        except Exception as e:
            logger.error(f"Error fetching sources for notebook {self.id}: {str(e)}")
            logger.exception(e)
            raise DatabaseOperationError(e)

    async def get_notes(self) -> List["Note"]:
        try:
            srcs = await repo_query(
                """
                select * omit note.content, note.embedding from (
                    select in as note from artifact where out=$id
                    fetch note
                ) order by note.updated desc
                """,
                {"id": ensure_record_id(self.id)},
            )
            return [Note(**src["note"]) for src in srcs] if srcs else []
        except Exception as e:
            logger.error(f"Error fetching notes for notebook {self.id}: {str(e)}")
            logger.exception(e)
            raise DatabaseOperationError(e)

    async def get_chat_sessions(self) -> List["ChatSession"]:
        try:
            srcs = await repo_query(
                """
                select * from (
                    select
                        <- chat_session as chat_session
                    from refers_to
                    where out=$id
                    fetch chat_session
                )
                order by chat_session.updated desc
                """,
                {"id": ensure_record_id(self.id)},
            )
            return (
                [ChatSession(**src["chat_session"][0]) for src in srcs] if srcs else []
            )
        except Exception as e:
            logger.error(
                f"Error fetching chat sessions for notebook {self.id}: {str(e)}"
            )
            logger.exception(e)
            raise DatabaseOperationError(e)
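
# --- Illustrative usage sketch (not part of the upstream module) ---
# A minimal example of how a caller might create a Notebook and pull its
# related records. Assumes ObjectModel.save() persists the record and
# populates `id`, as save_as_note() further below relies on for Note.
async def _example_notebook_usage() -> None:
    nb = Notebook(name="Research", description="Papers on retrieval")
    await nb.save()  # empty/whitespace names are rejected by the validator above
    sources = await nb.get_sources()  # ordered by source.updated desc
    notes = await nb.get_notes()  # heavy fields (content, embedding) omitted
    logger.info(f"{nb.name}: {len(sources)} sources, {len(notes)} notes")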

class Asset(BaseModel):
    file_path: Optional[str] = None
    url: Optional[str] = None

class SourceEmbedding(ObjectModel):
    table_name: ClassVar[str] = "source_embedding"

    content: str

    async def get_source(self) -> "Source":
        try:
            src = await repo_query(
                """
                select source.* from $id fetch source
                """,
                {"id": ensure_record_id(self.id)},
            )
            return Source(**src[0]["source"])
        except Exception as e:
            logger.error(f"Error fetching source for embedding {self.id}: {str(e)}")
            logger.exception(e)
            raise DatabaseOperationError(e)

class SourceInsight(ObjectModel):
    table_name: ClassVar[str] = "source_insight"

    insight_type: str
    content: str

    async def get_source(self) -> "Source":
        try:
            src = await repo_query(
                """
                select source.* from $id fetch source
                """,
                {"id": ensure_record_id(self.id)},
            )
            return Source(**src[0]["source"])
        except Exception as e:
            logger.error(f"Error fetching source for insight {self.id}: {str(e)}")
            logger.exception(e)
            raise DatabaseOperationError(e)

    async def save_as_note(self, notebook_id: Optional[str] = None) -> Any:
        source = await self.get_source()
        note = Note(
            title=f"{self.insight_type} from source {source.title}",
            content=self.content,
        )
        await note.save()
        if notebook_id:
            await note.add_to_notebook(notebook_id)
        return note
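
# --- Illustrative usage sketch (not part of the upstream module) ---
# Promoting an insight to a standalone note inside a notebook via
# save_as_note() above. The notebook id value is a placeholder.
async def _example_promote_insight(insight: SourceInsight) -> None:
    note = await insight.save_as_note(notebook_id="notebook:example")
    logger.info(f"Saved '{insight.insight_type}' insight as note {note.id}")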

class Source(ObjectModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    table_name: ClassVar[str] = "source"

    asset: Optional[Asset] = None
    title: Optional[str] = None
    topics: Optional[List[str]] = Field(default_factory=list)
    full_text: Optional[str] = None
    command: Optional[Union[str, RecordID]] = Field(
        default=None, description="Link to surreal-commands processing job"
    )

    @field_validator("command", mode="before")
    @classmethod
    def parse_command(cls, value):
        """Parse command field to ensure RecordID format"""
        if isinstance(value, str) and value:
            return ensure_record_id(value)
        return value

    @field_validator("id", mode="before")
    @classmethod
    def parse_id(cls, value):
        """Parse id field to handle both string and RecordID inputs"""
        if value is None:
            return None
        if isinstance(value, RecordID):
            return str(value)
        return str(value) if value else None

    async def get_status(self) -> Optional[str]:
        """Get the processing status of the associated command"""
        if not self.command:
            return None
        try:
            from surreal_commands import get_command_status

            status = await get_command_status(str(self.command))
            return status.status if status else "unknown"
        except Exception as e:
            logger.warning(f"Failed to get command status for {self.command}: {e}")
            return "unknown"

    async def get_processing_progress(self) -> Optional[Dict[str, Any]]:
        """Get detailed processing information for the associated command"""
        if not self.command:
            return None
        try:
            from surreal_commands import get_command_status

            status_result = await get_command_status(str(self.command))
            if not status_result:
                return None
            # Extract execution metadata if available
            result = getattr(status_result, "result", None)
            execution_metadata = (
                result.get("execution_metadata", {}) if isinstance(result, dict) else {}
            )
            return {
                "status": status_result.status,
                "started_at": execution_metadata.get("started_at"),
                "completed_at": execution_metadata.get("completed_at"),
                "error": getattr(status_result, "error_message", None),
                "result": result,
            }
        except Exception as e:
            logger.warning(f"Failed to get command progress for {self.command}: {e}")
            return None

    async def get_context(
        self, context_size: Literal["short", "long"] = "short"
    ) -> Dict[str, Any]:
        insights_list = await self.get_insights()
        insights = [insight.model_dump() for insight in insights_list]
        if context_size == "long":
            return dict(
                id=self.id,
                title=self.title,
                insights=insights,
                full_text=self.full_text,
            )
        else:
            return dict(id=self.id, title=self.title, insights=insights)
    async def get_embedded_chunks(self) -> int:
        try:
            result = await repo_query(
                """
                select count() as chunks from source_embedding where source=$id GROUP ALL
                """,
                {"id": ensure_record_id(self.id)},
            )
            if len(result) == 0:
                return 0
            return result[0]["chunks"]
        except Exception as e:
            logger.error(f"Error fetching chunks count for source {self.id}: {str(e)}")
            logger.exception(e)
            raise DatabaseOperationError(f"Failed to count chunks for source: {str(e)}")

    async def get_insights(self) -> List[SourceInsight]:
        try:
            result = await repo_query(
                """
                SELECT * FROM source_insight WHERE source=$id
                """,
                {"id": ensure_record_id(self.id)},
            )
            return [SourceInsight(**insight) for insight in result]
        except Exception as e:
            logger.error(f"Error fetching insights for source {self.id}: {str(e)}")
            logger.exception(e)
            raise DatabaseOperationError("Failed to fetch insights for source")

    async def add_to_notebook(self, notebook_id: str) -> Any:
        if not notebook_id:
            raise InvalidInputError("Notebook ID must be provided")
        return await self.relate("reference", notebook_id)

    async def vectorize(self) -> str:
        """
        Submit vectorization as a background job using the vectorize_source command.

        This method leverages the job-based architecture to prevent HTTP connection
        pool exhaustion when processing large documents. The actual chunk processing
        happens in the background worker pool, with natural concurrency control.

        Returns:
            str: The command/job ID that can be used to track progress via the commands API

        Raises:
            ValueError: If source has no text to vectorize
            DatabaseOperationError: If job submission fails
        """
        logger.info(f"Submitting vectorization job for source {self.id}")
        try:
            if not self.full_text:
                raise ValueError(f"Source {self.id} has no text to vectorize")
            # Submit the vectorize_source command, which will:
            # 1. Delete existing embeddings (idempotency)
            # 2. Split text into chunks
            # 3. Submit each chunk as an embed_chunk job
            command_id = submit_command(
                "open_notebook",  # app name
                "vectorize_source",  # command name
                {
                    "source_id": str(self.id),
                },
            )
            command_id_str = str(command_id)
            logger.info(
                f"Vectorization job submitted for source {self.id}: "
                f"command_id={command_id_str}"
            )
            return command_id_str
        except Exception as e:
            logger.error(
                f"Failed to submit vectorization job for source {self.id}: {e}"
            )
            logger.exception(e)
            raise DatabaseOperationError(e)
    async def add_insight(self, insight_type: str, content: str) -> Any:
        EMBEDDING_MODEL = await model_manager.get_embedding_model()
        if not EMBEDDING_MODEL:
            logger.warning("No embedding model found. Insight will not be searchable.")
        if not insight_type or not content:
            raise InvalidInputError("Insight type and content must be provided")
        try:
            embedding = (
                (await EMBEDDING_MODEL.aembed([content]))[0] if EMBEDDING_MODEL else []
            )
            return await repo_query(
                """
                CREATE source_insight CONTENT {
                    "source": $source_id,
                    "insight_type": $insight_type,
                    "content": $content,
                    "embedding": $embedding,
                };""",
                {
                    "source_id": ensure_record_id(self.id),
                    "insight_type": insight_type,
                    "content": content,
                    "embedding": embedding,
                },
            )
        except Exception as e:
            logger.error(f"Error adding insight to source {self.id}: {str(e)}")
            raise  # DatabaseOperationError(e)

    def _prepare_save_data(self) -> dict:
        """Override to ensure command field is always RecordID format for database"""
        data = super()._prepare_save_data()
        # Ensure command field is RecordID format if not None
        if data.get("command") is not None:
            data["command"] = ensure_record_id(data["command"])
        return data

    async def delete(self) -> bool:
        """Delete source and clean up associated file if it exists."""
        # Clean up uploaded file if it exists
        if self.asset and self.asset.file_path:
            file_path = Path(self.asset.file_path)
            if file_path.exists():
                try:
                    os.unlink(file_path)
                    logger.info(f"Deleted file for source {self.id}: {file_path}")
                except Exception as e:
                    logger.warning(
                        f"Failed to delete file {file_path} for source {self.id}: {e}. "
                        "Continuing with database deletion."
                    )
            else:
                logger.debug(
                    f"File {file_path} not found for source {self.id}, skipping cleanup"
                )
        # Call parent delete to remove database record
        return await super().delete()
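
# --- Illustrative usage sketch (not part of the upstream module) ---
# A typical Source lifecycle: attach to a notebook, add an insight, then submit
# vectorization and poll the job. Note that vectorize() does not assign
# self.command itself, so linking the returned job id below is an assumption
# made so get_status()/get_processing_progress() have a command record to read;
# the "pending"/"running" state names are also assumptions.
async def _example_source_pipeline(source: Source) -> None:
    await source.add_to_notebook("notebook:example")  # placeholder id
    await source.add_insight("summary", "Key findings: ...")
    command_id = await source.vectorize()
    source.command = command_id  # assumed linkage for this sketch
    status = await source.get_status()
    while status in ("pending", "running"):
        await asyncio.sleep(2)  # chunk embedding runs in the worker pool
        status = await source.get_status()
    progress = await source.get_processing_progress()
    logger.info(f"Vectorization {command_id}: status={status}, progress={progress}")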

class Note(ObjectModel):
    table_name: ClassVar[str] = "note"

    title: Optional[str] = None
    note_type: Optional[Literal["human", "ai"]] = None
    content: Optional[str] = None

    @field_validator("content")
    @classmethod
    def content_must_not_be_empty(cls, v):
        if v is not None and not v.strip():
            raise InvalidInputError("Note content cannot be empty")
        return v

    async def add_to_notebook(self, notebook_id: str) -> Any:
        if not notebook_id:
            raise InvalidInputError("Notebook ID must be provided")
        return await self.relate("artifact", notebook_id)

    def get_context(
        self, context_size: Literal["short", "long"] = "short"
    ) -> Dict[str, Any]:
        if context_size == "long":
            return dict(id=self.id, title=self.title, content=self.content)
        else:
            return dict(
                id=self.id,
                title=self.title,
                content=self.content[:100] if self.content else None,
            )

    def needs_embedding(self) -> bool:
        return True

    def get_embedding_content(self) -> Optional[str]:
        return self.content
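
# --- Illustrative usage sketch (not part of the upstream module) ---
# Creating a human note and attaching it to a notebook through the same
# `artifact` relation that Notebook.get_notes() queries. The notebook id is
# a placeholder.
async def _example_note_usage() -> None:
    note = Note(
        title="Reading list",
        note_type="human",
        content="Compare BM25 and embedding search",
    )
    await note.save()
    await note.add_to_notebook("notebook:example")
    ctx = note.get_context()  # "short" truncates content to 100 characters
    logger.info(f"Note context: {ctx}")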

class ChatSession(ObjectModel):
    table_name: ClassVar[str] = "chat_session"
    nullable_fields: ClassVar[set[str]] = {"model_override"}

    title: Optional[str] = None
    model_override: Optional[str] = None

    async def relate_to_notebook(self, notebook_id: str) -> Any:
        if not notebook_id:
            raise InvalidInputError("Notebook ID must be provided")
        return await self.relate("refers_to", notebook_id)

    async def relate_to_source(self, source_id: str) -> Any:
        if not source_id:
            raise InvalidInputError("Source ID must be provided")
        return await self.relate("refers_to", source_id)

async def text_search(
    keyword: str, results: int, source: bool = True, note: bool = True
):
    if not keyword:
        raise InvalidInputError("Search keyword cannot be empty")
    try:
        search_results = await repo_query(
            """
            select *
            from fn::text_search($keyword, $results, $source, $note)
            """,
            {"keyword": keyword, "results": results, "source": source, "note": note},
        )
        return search_results
    except Exception as e:
        logger.error(f"Error performing text search: {str(e)}")
        logger.exception(e)
        raise DatabaseOperationError(e)

async def vector_search(
    keyword: str,
    results: int,
    source: bool = True,
    note: bool = True,
    minimum_score=0.2,
):
    if not keyword:
        raise InvalidInputError("Search keyword cannot be empty")
    try:
        EMBEDDING_MODEL = await model_manager.get_embedding_model()
        if EMBEDDING_MODEL is None:
            raise ValueError("EMBEDDING_MODEL is not configured")
        embed = (await EMBEDDING_MODEL.aembed([keyword]))[0]
        search_results = await repo_query(
            """
            SELECT * FROM fn::vector_search($embed, $results, $source, $note, $minimum_score);
            """,
            {
                "embed": embed,
                "results": results,
                "source": source,
                "note": note,
                "minimum_score": minimum_score,
            },
        )
        return search_results
    except Exception as e:
        logger.error(f"Error performing vector search: {str(e)}")
        logger.exception(e)
        raise DatabaseOperationError(e)
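
# --- Illustrative usage sketch (not part of the upstream module) ---
# Calling the two module-level search helpers. Both delegate to SurrealDB
# functions (fn::text_search / fn::vector_search); the shape of the returned
# rows is defined by those functions, so nothing is assumed about it here
# beyond being a list.
async def _example_search(keyword: str) -> None:
    exact = await text_search(keyword, results=10, source=True, note=True)
    # vector_search embeds the keyword first, so it needs a configured
    # embedding model and raises ValueError otherwise.
    semantic = await vector_search(keyword, results=10, minimum_score=0.2)
    logger.info(f"text: {len(exact)} hits, vector: {len(semantic)} hits")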