mirror of
https://github.com/lfnovo/open-notebook.git
synced 2026-05-02 21:30:38 +00:00
* feat: replace provider config with credential-based system (#477)

  Introduce a new credential management system replacing the old ProviderConfig singleton and standalone Models page. Each credential stores encrypted API keys and provider-specific configuration with full CRUD support via a unified settings UI.

  Backend:
  - Add Credential domain model with encrypted API key storage
  - Add credentials API router (CRUD, discovery, registration, testing)
  - Add encryption utilities for secure key storage
  - Add key_provider for DB-first env-var fallback provisioning
  - Add connection tester and model discovery services
  - Integrate ModelManager with credential-based config
  - Add provider name normalization for Esperanto compatibility
  - Add database migrations 11-12 for credential schema

  Frontend:
  - Rewrite settings/api-keys page with credential management UI
  - Add model discovery dialog with search and custom model support
  - Add compact default model assignments (primary/advanced layout)
  - Add inline model testing and credential connection testing
  - Add env-var migration banner
  - Update navigation to unified settings page
  - Remove standalone models page and old settings components

  i18n:
  - Update all 7 locale files with credential and model management keys

  Closes #477

  Co-Authored-By: JFMD <git@jfmd.us>
  Co-Authored-By: OraCatQAQ <570768706@qq.com>

* fix: address PR #540 review comments
  - Fix docs referencing removed Models page
  - Fix error-handler returning raw messages instead of i18n keys
  - Fix auth.py misleading docstring and missing no-password guard
  - Fix connection_tester using wrong env var for openai_compatible
  - Add provision_provider_keys before model discovery/sync
  - Update CLAUDE.md to reflect credential-based system
  - Fix missing closing brace in api-keys page useEffect

* fix: add logging to credential migration and surface errors in UI
  - Add comprehensive logging to migrate-from-env and migrate-from-provider-config endpoints (start, per-provider progress,
  success/failure with stack traces, final summary)
  - Fix frontend migration hooks ignoring errors array from response
  - Show error toast when migration fails instead of "nothing to migrate"
  - Invalidate status/envStatus queries after migration so banner updates

* docs: update CLAUDE.md files for credential system

  Replace stale ProviderConfig and /api-keys/ references across 8 CLAUDE.md files to reflect the new Credential-based system from PR #540.

* docs: update user documentation for credential-based system

  Replace env var API key instructions with Settings UI credential workflow across all user-facing documentation. The new flow is: set OPEN_NOTEBOOK_ENCRYPTION_KEY → start services → add credential in Settings UI → test → discover models → register.
  - Rewrite ai-providers.md, api-configuration.md, environment-reference.md
  - Update all quick-start guides and installation docs
  - Update ollama.md, openai-compatible.md, local-tts/stt networking sections
  - Update reverse-proxy.md, development-setup.md, security.md
  - Fix broken links to non-existent docs/deployment/ paths
  - Add credentials endpoints to api-reference.md
  - Move all API key env vars to deprecated/legacy sections

* chore: bump version to 1.7.0-rc1

  Release candidate for credential-based provider management system.

* fix: initialize provider before try block in test_credential

  Prevents UnboundLocalError when Credential.get() throws (e.g., invalid credential_id) before provider is assigned.

* fix: reorder down migration to drop index before table

  Removes duplicate REMOVE FIELD statement and reorders so the index is dropped before the table, preventing rollback failures.

* refactor: simplify encryption key to always derive via SHA-256

  Remove the dual code path in _ensure_fernet_key() that detected native Fernet keys. Since the credential system is new, always deriving via SHA-256 removes unnecessary complexity. Also removes the generate_key() function and Fernet.generate_key() references from docs.
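The "always derive via SHA-256" refactor above can be sketched as follows. The function name and the internals of the real _ensure_fernet_key() are assumptions; the SHA-256-then-urlsafe-base64 shape shown here is simply the standard way to turn an arbitrary passphrase into the 32-byte, base64-encoded key format Fernet expects:

```python
import base64
import hashlib

# Hedged sketch of the single derivation path: hash the passphrase
# (e.g., the value of OPEN_NOTEBOOK_ENCRYPTION_KEY) down to 32 bytes,
# then urlsafe-base64 encode it into the form Fernet requires.
# Names are illustrative, not the project's actual API.
def derive_fernet_key(passphrase: str) -> bytes:
    digest = hashlib.sha256(passphrase.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)
```

Because the derivation is deterministic, the same passphrase always yields the same key, which is why the native-Fernet-key detection branch could be dropped.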
* fix: correct mock patch targets in embedding tests and URL validation

  Fix embedding tests patching the wrong module path for model_manager (they targeted open_notebook.utils.embedding.model_manager, but it is imported locally from open_notebook.ai.models). Also fix URL validation to allow unresolvable hostnames, since they may be valid in the deployment environment (e.g., Azure endpoints, internal DNS).

* feat: add global setup banner for encryption and migration status

  Show a persistent banner in AppShell when the encryption key is missing (red) or env var API keys can be migrated (amber), so users see these prompts on every page instead of only on Settings > API Keys. Includes a docs link for the encryption banner and i18n support across all 7 locales.

* docs: several improvements to docker-compose and env examples

* Update README.md

  Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>

* docs: fix env var format in README and update model setup instructions

  Align the encryption key snippet in README Step 2 with the list format used in the compose file. Replace deprecated "Settings → Models" instructions with credential-based Discover Models flow.
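The relaxed URL validation described above (accepting hostnames that do not resolve from the validating host) could look roughly like this; the function name is hypothetical and the real check likely does more:

```python
from urllib.parse import urlparse

# Validate only the URL's shape and deliberately skip DNS resolution:
# names such as Azure private endpoints or internal DNS entries may
# resolve only inside the deployment environment, not where validation
# runs. Function name is illustrative.
def is_plausible_base_url(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.hostname)
```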
* fix: address credential system review issues
  - Fix SSRF bypass via IPv4-mapped IPv6 addresses (::ffff:169.254.x.x)
  - Fix TTS connection test missing config parameter
  - Add Azure-specific model discovery using api-key auth header
  - Add Vertex static model list for credential-based discovery
  - Fix PROVIDER_DISCOVERY_FUNCTIONS incorrect azure/vertex mapping
  - Extract business logic to api/credentials_service.py (service layer)
  - Move credential Pydantic schemas to api/models.py
  - Update tests to use new service imports and ValueError assertions

* fix: sanitize error responses and migrate key_provider to Credential
  - Replace raw exception messages in all credential router 500 responses with generic error strings (internal details logged server-side only)
  - Refactor key_provider.py to use Credential.get_by_provider() instead of deprecated ProviderConfig.get_instance()
  - Remove unused functions (get_provider_configs, get_default_api_key, get_provider_config) that were dead code

  ---------

  Co-authored-by: JFMD <git@jfmd.us>
  Co-authored-by: OraCatQAQ <570768706@qq.com>
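The IPv4-mapped IPv6 bypass fixed above works because a literal like ::ffff:169.254.169.254 is an IPv6 address wrapping a link-local IPv4 address, so an IPv6-only block list misses it. A minimal sketch of the unwrap-then-check fix, with an assumed function name:

```python
import ipaddress

# Unwrap IPv4-mapped IPv6 literals before applying the private /
# link-local / loopback block list, closing the ::ffff:169.254.x.x
# bypass. A sketch only; the real guard also vets hostnames.
def is_blocked_ip_literal(host: str) -> bool:
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal; hostnames are vetted elsewhere
    mapped = getattr(ip, "ipv4_mapped", None)  # IPv6Address only
    if mapped is not None:
        ip = mapped  # ::ffff:169.254.169.254 -> 169.254.169.254
    return ip.is_loopback or ip.is_link_local or ip.is_private
```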
684 lines
23 KiB
Python
from typing import Any, Dict, List, Literal, Optional

from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator


# Notebook models
class NotebookCreate(BaseModel):
    name: str = Field(..., description="Name of the notebook")
    description: str = Field(default="", description="Description of the notebook")


class NotebookUpdate(BaseModel):
    name: Optional[str] = Field(None, description="Name of the notebook")
    description: Optional[str] = Field(None, description="Description of the notebook")
    archived: Optional[bool] = Field(
        None, description="Whether the notebook is archived"
    )


class NotebookResponse(BaseModel):
    id: str
    name: str
    description: str
    archived: bool
    created: str
    updated: str
    source_count: int
    note_count: int


# Search models
class SearchRequest(BaseModel):
    query: str = Field(..., description="Search query")
    type: Literal["text", "vector"] = Field("text", description="Search type")
    limit: int = Field(100, description="Maximum number of results", le=1000)
    search_sources: bool = Field(True, description="Include sources in search")
    search_notes: bool = Field(True, description="Include notes in search")
    minimum_score: float = Field(
        0.2, description="Minimum score for vector search", ge=0, le=1
    )


class SearchResponse(BaseModel):
    results: List[Dict[str, Any]] = Field(..., description="Search results")
    total_count: int = Field(..., description="Total number of results")
    search_type: str = Field(..., description="Type of search performed")


class AskRequest(BaseModel):
    question: str = Field(..., description="Question to ask the knowledge base")
    strategy_model: str = Field(..., description="Model ID for query strategy")
    answer_model: str = Field(..., description="Model ID for individual answers")
    final_answer_model: str = Field(..., description="Model ID for final answer")


class AskResponse(BaseModel):
    answer: str = Field(..., description="Final answer from the knowledge base")
    question: str = Field(..., description="Original question")


# Models API models
class ModelCreate(BaseModel):
    name: str = Field(..., description="Model name (e.g., gpt-5-mini, claude, gemini)")
    provider: str = Field(
        ..., description="Provider name (e.g., openai, anthropic, gemini)"
    )
    type: str = Field(
        ...,
        description="Model type (language, embedding, text_to_speech, speech_to_text)",
    )
    credential: Optional[str] = Field(
        None, description="Credential ID to link this model to"
    )


class ModelResponse(BaseModel):
    id: str
    name: str
    provider: str
    type: str
    credential: Optional[str] = None
    created: str
    updated: str


class DefaultModelsResponse(BaseModel):
    default_chat_model: Optional[str] = None
    default_transformation_model: Optional[str] = None
    large_context_model: Optional[str] = None
    default_text_to_speech_model: Optional[str] = None
    default_speech_to_text_model: Optional[str] = None
    default_embedding_model: Optional[str] = None
    default_tools_model: Optional[str] = None


class ProviderAvailabilityResponse(BaseModel):
    available: List[str] = Field(..., description="List of available providers")
    unavailable: List[str] = Field(..., description="List of unavailable providers")
    supported_types: Dict[str, List[str]] = Field(
        ..., description="Provider to supported model types mapping"
    )


# Transformations API models
class TransformationCreate(BaseModel):
    name: str = Field(..., description="Transformation name")
    title: str = Field(..., description="Display title for the transformation")
    description: str = Field(
        ..., description="Description of what this transformation does"
    )
    prompt: str = Field(..., description="The transformation prompt")
    apply_default: bool = Field(
        False, description="Whether to apply this transformation by default"
    )


class TransformationUpdate(BaseModel):
    name: Optional[str] = Field(None, description="Transformation name")
    title: Optional[str] = Field(
        None, description="Display title for the transformation"
    )
    description: Optional[str] = Field(
        None, description="Description of what this transformation does"
    )
    prompt: Optional[str] = Field(None, description="The transformation prompt")
    apply_default: Optional[bool] = Field(
        None, description="Whether to apply this transformation by default"
    )


class TransformationResponse(BaseModel):
    id: str
    name: str
    title: str
    description: str
    prompt: str
    apply_default: bool
    created: str
    updated: str


class TransformationExecuteRequest(BaseModel):
    model_config = ConfigDict(protected_namespaces=())

    transformation_id: str = Field(
        ..., description="ID of the transformation to execute"
    )
    input_text: str = Field(..., description="Text to transform")
    model_id: str = Field(..., description="Model ID to use for the transformation")


class TransformationExecuteResponse(BaseModel):
    model_config = ConfigDict(protected_namespaces=())

    output: str = Field(..., description="Transformed text")
    transformation_id: str = Field(..., description="ID of the transformation used")
    model_id: str = Field(..., description="Model ID used")


# Default Prompt API models
class DefaultPromptResponse(BaseModel):
    transformation_instructions: str = Field(
        ..., description="Default transformation instructions"
    )


class DefaultPromptUpdate(BaseModel):
    transformation_instructions: str = Field(
        ..., description="Default transformation instructions"
    )


# Notes API models
class NoteCreate(BaseModel):
    title: Optional[str] = Field(None, description="Note title")
    content: str = Field(..., description="Note content")
    note_type: Optional[str] = Field("human", description="Type of note (human, ai)")
    notebook_id: Optional[str] = Field(
        None, description="Notebook ID to add the note to"
    )


class NoteUpdate(BaseModel):
    title: Optional[str] = Field(None, description="Note title")
    content: Optional[str] = Field(None, description="Note content")
    note_type: Optional[str] = Field(None, description="Type of note (human, ai)")


class NoteResponse(BaseModel):
    id: str
    title: Optional[str]
    content: Optional[str]
    note_type: Optional[str]
    created: str
    updated: str


# Embedding API models
class EmbedRequest(BaseModel):
    item_id: str = Field(..., description="ID of the item to embed")
    item_type: str = Field(..., description="Type of item (source, note)")
    async_processing: bool = Field(
        False, description="Process asynchronously in background"
    )


class EmbedResponse(BaseModel):
    success: bool = Field(..., description="Whether embedding was successful")
    message: str = Field(..., description="Result message")
    item_id: str = Field(..., description="ID of the item that was embedded")
    item_type: str = Field(..., description="Type of item that was embedded")
    command_id: Optional[str] = Field(
        None, description="Command ID for async processing"
    )


# Rebuild request/response models
class RebuildRequest(BaseModel):
    mode: Literal["existing", "all"] = Field(
        ...,
        description="Rebuild mode: 'existing' only re-embeds items with embeddings, 'all' embeds everything",
    )
    include_sources: bool = Field(True, description="Include sources in rebuild")
    include_notes: bool = Field(True, description="Include notes in rebuild")
    include_insights: bool = Field(True, description="Include insights in rebuild")


class RebuildResponse(BaseModel):
    command_id: str = Field(..., description="Command ID to track progress")
    total_items: int = Field(..., description="Estimated number of items to process")
    message: str = Field(..., description="Status message")


class RebuildProgress(BaseModel):
    processed: int = Field(..., description="Number of items processed")
    total: int = Field(..., description="Total items to process")
    percentage: float = Field(..., description="Progress percentage")


class RebuildStats(BaseModel):
    sources: int = Field(0, description="Sources processed")
    notes: int = Field(0, description="Notes processed")
    insights: int = Field(0, description="Insights processed")
    failed: int = Field(0, description="Failed items")


class RebuildStatusResponse(BaseModel):
    command_id: str = Field(..., description="Command ID")
    status: str = Field(..., description="Status: queued, running, completed, failed")
    progress: Optional[RebuildProgress] = None
    stats: Optional[RebuildStats] = None
    started_at: Optional[str] = None
    completed_at: Optional[str] = None
    error_message: Optional[str] = None


# Settings API models
class SettingsResponse(BaseModel):
    default_content_processing_engine_doc: Optional[str] = None
    default_content_processing_engine_url: Optional[str] = None
    default_embedding_option: Optional[str] = None
    auto_delete_files: Optional[str] = None
    youtube_preferred_languages: Optional[List[str]] = None


class SettingsUpdate(BaseModel):
    default_content_processing_engine_doc: Optional[str] = None
    default_content_processing_engine_url: Optional[str] = None
    default_embedding_option: Optional[str] = None
    auto_delete_files: Optional[str] = None
    youtube_preferred_languages: Optional[List[str]] = None


# Sources API models
class AssetModel(BaseModel):
    file_path: Optional[str] = None
    url: Optional[str] = None


class SourceCreate(BaseModel):
    # Backward compatibility: support old single notebook_id
    notebook_id: Optional[str] = Field(
        None, description="Notebook ID to add the source to (deprecated, use notebooks)"
    )
    # New multi-notebook support
    notebooks: Optional[List[str]] = Field(
        None, description="List of notebook IDs to add the source to"
    )
    # Required fields
    type: str = Field(..., description="Source type: link, upload, or text")
    url: Optional[str] = Field(None, description="URL for link type")
    file_path: Optional[str] = Field(None, description="File path for upload type")
    content: Optional[str] = Field(None, description="Text content for text type")
    title: Optional[str] = Field(None, description="Source title")
    transformations: Optional[List[str]] = Field(
        default_factory=list, description="Transformation IDs to apply"
    )
    embed: bool = Field(False, description="Whether to embed content for vector search")
    delete_source: bool = Field(
        False, description="Whether to delete uploaded file after processing"
    )
    # New async processing support
    async_processing: bool = Field(
        False, description="Whether to process source asynchronously"
    )

    @model_validator(mode="after")
    def validate_notebook_fields(self):
        # Ensure only one of notebook_id or notebooks is provided
        if self.notebook_id is not None and self.notebooks is not None:
            raise ValueError(
                "Cannot specify both 'notebook_id' and 'notebooks'. Use 'notebooks' for multi-notebook support."
            )

        # Convert single notebook_id to notebooks array for internal processing
        if self.notebook_id is not None:
            self.notebooks = [self.notebook_id]
            # Keep notebook_id for backward compatibility in response

        # Set empty array if no notebooks specified (allow sources without notebooks)
        if self.notebooks is None:
            self.notebooks = []

        return self


class SourceUpdate(BaseModel):
    title: Optional[str] = Field(None, description="Source title")
    topics: Optional[List[str]] = Field(None, description="Source topics")


class SourceResponse(BaseModel):
    id: str
    title: Optional[str]
    topics: Optional[List[str]]
    asset: Optional[AssetModel]
    full_text: Optional[str]
    embedded: bool
    embedded_chunks: int
    file_available: Optional[bool] = None
    created: str
    updated: str
    # New fields for async processing
    command_id: Optional[str] = None
    status: Optional[str] = None
    processing_info: Optional[Dict] = None
    # Notebook associations
    notebooks: Optional[List[str]] = None


class SourceListResponse(BaseModel):
    id: str
    title: Optional[str]
    topics: Optional[List[str]]
    asset: Optional[AssetModel]
    embedded: bool  # Boolean flag indicating if source has embeddings
    embedded_chunks: int  # Number of embedded chunks
    insights_count: int
    created: str
    updated: str
    file_available: Optional[bool] = None
    # Status fields for async processing
    command_id: Optional[str] = None
    status: Optional[str] = None
    processing_info: Optional[Dict[str, Any]] = None


# Context API models
class ContextConfig(BaseModel):
    sources: Dict[str, str] = Field(
        default_factory=dict, description="Source inclusion config {source_id: level}"
    )
    notes: Dict[str, str] = Field(
        default_factory=dict, description="Note inclusion config {note_id: level}"
    )


class ContextRequest(BaseModel):
    notebook_id: str = Field(..., description="Notebook ID to get context for")
    context_config: Optional[ContextConfig] = Field(
        None, description="Context configuration"
    )


class ContextResponse(BaseModel):
    notebook_id: str
    sources: List[Dict[str, Any]] = Field(..., description="Source context data")
    notes: List[Dict[str, Any]] = Field(..., description="Note context data")
    total_tokens: Optional[int] = Field(None, description="Estimated token count")


# Insights API models
class SourceInsightResponse(BaseModel):
    id: str
    source_id: str
    insight_type: str
    content: str
    created: str
    updated: str


class InsightCreationResponse(BaseModel):
    """Response for async insight creation."""

    status: Literal["pending"] = "pending"
    message: str = "Insight generation started"
    source_id: str
    transformation_id: str
    command_id: Optional[str] = None


class SaveAsNoteRequest(BaseModel):
    notebook_id: Optional[str] = Field(None, description="Notebook ID to add note to")


class CreateSourceInsightRequest(BaseModel):
    model_config = ConfigDict(protected_namespaces=())

    transformation_id: str = Field(..., description="ID of transformation to apply")
    model_id: Optional[str] = Field(
        None, description="Model ID (uses default if not provided)"
    )


# Source status response
class SourceStatusResponse(BaseModel):
    status: Optional[str] = Field(None, description="Processing status")
    message: str = Field(..., description="Descriptive message about the status")
    processing_info: Optional[Dict[str, Any]] = Field(
        None, description="Detailed processing information"
    )
    command_id: Optional[str] = Field(None, description="Command ID if available")


# Error response
class ErrorResponse(BaseModel):
    error: str
    message: str


# API Key Configuration models
class SetApiKeyRequest(BaseModel):
    """Request to set an API key for a provider."""

    api_key: Optional[str] = Field(None, description="API key for the provider")
    base_url: Optional[str] = Field(
        None, description="Base URL for URL-based providers (Ollama, OpenAI-compatible)"
    )
    endpoint: Optional[str] = Field(
        None, description="Endpoint URL for Azure OpenAI"
    )
    api_version: Optional[str] = Field(
        None, description="API version for Azure OpenAI"
    )
    endpoint_llm: Optional[str] = Field(
        None, description="Service-specific endpoint for LLM (Azure)"
    )
    endpoint_embedding: Optional[str] = Field(
        None, description="Service-specific endpoint for embedding (Azure)"
    )
    endpoint_stt: Optional[str] = Field(
        None, description="Service-specific endpoint for STT (Azure)"
    )
    endpoint_tts: Optional[str] = Field(
        None, description="Service-specific endpoint for TTS (Azure)"
    )
    service_type: Optional[Literal["llm", "embedding", "stt", "tts"]] = Field(
        None,
        description="Service type for OpenAI-compatible providers (llm, embedding, stt, tts)",
    )
    # Vertex AI specific fields
    vertex_project: Optional[str] = Field(
        None, description="Google Cloud Project ID for Vertex AI"
    )
    vertex_location: Optional[str] = Field(
        None, description="Google Cloud Region for Vertex AI (e.g., us-central1)"
    )
    vertex_credentials_path: Optional[str] = Field(
        None, description="Path to Google Cloud service account JSON file"
    )

    @field_validator(
        "api_key",
        "base_url",
        "endpoint",
        "api_version",
        "endpoint_llm",
        "endpoint_embedding",
        "endpoint_stt",
        "endpoint_tts",
        "vertex_project",
        "vertex_location",
        "vertex_credentials_path",
        mode="before",
    )
    @classmethod
    def validate_not_empty_string(cls, v: Optional[str]) -> Optional[str]:
        """Reject empty strings - convert to None or raise error."""
        if v is not None:
            stripped = v.strip()
            if not stripped:
                return None  # Treat empty/whitespace-only as None
            return stripped
        return v


class ApiKeyStatusResponse(BaseModel):
    """Response showing which providers are configured and their source."""

    configured: Dict[str, bool] = Field(
        ..., description="Map of provider name to whether it is configured"
    )
    source: Dict[str, Literal["database", "environment", "none"]] = Field(
        ...,
        description="Map of provider name to configuration source (database, environment, or none)",
    )
    encryption_configured: bool = Field(
        ...,
        description="Whether OPEN_NOTEBOOK_ENCRYPTION_KEY is set (required to store keys in database)",
    )


class TestConnectionResponse(BaseModel):
    """Response from testing a provider connection."""

    provider: str = Field(..., description="Provider name that was tested")
    success: bool = Field(..., description="Whether connection test succeeded")
    message: str = Field(..., description="Result message with details")


class MigrateFromEnvRequest(BaseModel):
    """Request to migrate API keys from environment variables to database."""

    force: bool = Field(
        False, description="Force overwrite existing database configurations"
    )


class MigrationResult(BaseModel):
    """Response from migrating API keys from environment to database."""

    message: str = Field(..., description="Summary message")
    migrated: List[str] = Field(
        default_factory=list, description="Providers successfully migrated"
    )
    skipped: List[str] = Field(
        default_factory=list, description="Providers skipped (already in DB)"
    )
    errors: List[str] = Field(
        default_factory=list, description="Migration errors by provider"
    )


# Credential models
class CreateCredentialRequest(BaseModel):
    """Request to create a new credential."""

    name: str = Field(..., description="Credential name")
    provider: str = Field(..., description="Provider name (openai, anthropic, etc.)")
    modalities: List[str] = Field(
        default_factory=list,
        description="Supported modalities (language, embedding, text_to_speech, speech_to_text)",
    )
    api_key: Optional[str] = Field(None, description="API key (stored encrypted)")
    base_url: Optional[str] = Field(None, description="Base URL")
    endpoint: Optional[str] = Field(None, description="Endpoint URL (Azure)")
    api_version: Optional[str] = Field(None, description="API version (Azure)")
    endpoint_llm: Optional[str] = Field(None, description="LLM endpoint")
    endpoint_embedding: Optional[str] = Field(None, description="Embedding endpoint")
    endpoint_stt: Optional[str] = Field(None, description="STT endpoint")
    endpoint_tts: Optional[str] = Field(None, description="TTS endpoint")
    project: Optional[str] = Field(None, description="Project ID (Vertex)")
    location: Optional[str] = Field(None, description="Location (Vertex)")
    credentials_path: Optional[str] = Field(
        None, description="Credentials file path (Vertex)"
    )


class UpdateCredentialRequest(BaseModel):
    """Request to update an existing credential."""

    name: Optional[str] = Field(None, description="Credential name")
    modalities: Optional[List[str]] = Field(None, description="Supported modalities")
    api_key: Optional[str] = Field(None, description="API key (stored encrypted)")
    base_url: Optional[str] = Field(None, description="Base URL")
    endpoint: Optional[str] = Field(None, description="Endpoint URL")
    api_version: Optional[str] = Field(None, description="API version")
    endpoint_llm: Optional[str] = Field(None, description="LLM endpoint")
    endpoint_embedding: Optional[str] = Field(None, description="Embedding endpoint")
    endpoint_stt: Optional[str] = Field(None, description="STT endpoint")
    endpoint_tts: Optional[str] = Field(None, description="TTS endpoint")
    project: Optional[str] = Field(None, description="Project ID")
    location: Optional[str] = Field(None, description="Location")
    credentials_path: Optional[str] = Field(None, description="Credentials path")


class CredentialResponse(BaseModel):
    """Response for a credential (never includes api_key)."""

    id: str
    name: str
    provider: str
    modalities: List[str]
    base_url: Optional[str] = None
    endpoint: Optional[str] = None
    api_version: Optional[str] = None
    endpoint_llm: Optional[str] = None
    endpoint_embedding: Optional[str] = None
    endpoint_stt: Optional[str] = None
    endpoint_tts: Optional[str] = None
    project: Optional[str] = None
    location: Optional[str] = None
    credentials_path: Optional[str] = None
    has_api_key: bool = False
    created: str
    updated: str
    model_count: int = 0


class CredentialDeleteResponse(BaseModel):
    """Response for credential deletion."""

    message: str
    deleted_models: int = 0


class DiscoveredModelResponse(BaseModel):
    """A model discovered from a provider."""

    name: str
    provider: str
    model_type: Optional[str] = None
    description: Optional[str] = None


class DiscoverModelsResponse(BaseModel):
    """Response from model discovery."""

    credential_id: str
    provider: str
    discovered: List[DiscoveredModelResponse]


class RegisterModelData(BaseModel):
    """A model to register with user-specified type."""

    name: str
    provider: str
    model_type: str  # Required: user specifies the type


class RegisterModelsRequest(BaseModel):
    """Request to register discovered models."""

    models: List[RegisterModelData]


class RegisterModelsResponse(BaseModel):
    """Response from model registration."""

    created: int
    existing: int


# Notebook delete cascade models
class NotebookDeletePreview(BaseModel):
    notebook_id: str = Field(..., description="ID of the notebook")
    notebook_name: str = Field(..., description="Name of the notebook")
    note_count: int = Field(..., description="Number of notes that will be deleted")
    exclusive_source_count: int = Field(
        ..., description="Number of sources only in this notebook"
    )
    shared_source_count: int = Field(
        ..., description="Number of sources shared with other notebooks"
    )


class NotebookDeleteResponse(BaseModel):
    message: str = Field(..., description="Success message")
    deleted_notes: int = Field(..., description="Number of notes deleted")
    deleted_sources: int = Field(..., description="Number of exclusive sources deleted")
    unlinked_sources: int = Field(
        ..., description="Number of sources unlinked from notebook"
    )
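For reference, the backward-compatibility rules enforced by SourceCreate.validate_notebook_fields in the listing above can be read as a plain function. This is an illustrative extraction, not part of the file; the real logic runs inside the pydantic model_validator and mutates the model in place:

```python
from typing import List, Optional

# Standalone restatement of SourceCreate's notebook normalization:
# reject ambiguous input, upgrade the legacy single notebook_id to a
# list, and default to "no notebooks" when neither field is given.
def normalize_notebooks(
    notebook_id: Optional[str], notebooks: Optional[List[str]]
) -> List[str]:
    if notebook_id is not None and notebooks is not None:
        raise ValueError("Cannot specify both 'notebook_id' and 'notebooks'.")
    if notebook_id is not None:
        return [notebook_id]  # legacy single-notebook form
    return notebooks or []
```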