open-notebook/open_notebook/ai/provision.py
LUIS NOVO ab5560c9a2 refactor: reorganize folder structure for better maintainability
Changes:
- Move migrations/ under open_notebook/database/migrations/
- Extract AI models to open_notebook/ai/ (Model, ModelManager, provision)
- Extract podcasts to open_notebook/podcasts/ (EpisodeProfile, SpeakerProfile, PodcastEpisode)
- Reorganize prompts to mirror graphs structure (chat/, source_chat/)

This improves code organization by:
- Consolidating database concerns (migrations now with database code)
- Separating AI infrastructure from domain entities
- Isolating podcast feature into its own module
- Creating consistent prompt/graph naming conventions

All 52 tests pass.
2026-01-03 14:04:27 -03:00


from esperanto import LanguageModel
from langchain_core.language_models.chat_models import BaseChatModel
from loguru import logger

from open_notebook.ai.models import model_manager
from open_notebook.utils import token_count


async def provision_langchain_model(
    content, model_id, default_type, **kwargs
) -> BaseChatModel:
    """
    Return the best model to use based on the context size and on whether a
    specific model is requested via model_id.

    If context > 105_000 tokens, returns the large_context model.
    If model_id is specified, returns that model.
    Otherwise, returns the default model for the given type.
    """
    tokens = token_count(content)
    if tokens > 105_000:
        logger.debug(
            f"Using large context model because the content has {tokens} tokens"
        )
        model = await model_manager.get_default_model("large_context", **kwargs)
    elif model_id:
        model = await model_manager.get_model(model_id, **kwargs)
    else:
        model = await model_manager.get_default_model(default_type, **kwargs)
    logger.debug(f"Using model: {model}")
    assert isinstance(model, LanguageModel), f"Model is not a LanguageModel: {model}"
    return model.to_langchain()
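The three-way selection rule (large context wins, then an explicit model_id, then the type default) can be sketched in isolation from the async model manager. The names below (pick_model_key, LARGE_CONTEXT_THRESHOLD) are illustrative only and are not part of the open_notebook API:

```python
# Minimal, self-contained sketch of the routing decision inside
# provision_langchain_model. The real function awaits model_manager
# coroutines at each branch and converts the resulting esperanto
# LanguageModel via .to_langchain(); here we just return a key.

LARGE_CONTEXT_THRESHOLD = 105_000  # matches the 105_000-token cutoff above


def pick_model_key(tokens: int, model_id: "str | None", default_type: str) -> str:
    """Mirror the precedence order: large context overrides everything,
    then an explicitly requested model_id, then the default for the type."""
    if tokens > LARGE_CONTEXT_THRESHOLD:
        return "large_context"
    if model_id:
        return model_id
    return default_type
```

Note that the large-context branch takes precedence even when a model_id is passed, which is the same ordering as the if/elif/else chain in the real function.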