open-notebook/tests/test_models_api.py
MisonL 67dd85c928
Feat/localization tests docker (#371)
* feat(i18n): complete 100% internationalization and fix Next.js 15 compatibility

* feat(i18n): complete 100% internationalization coverage

* chore(test): finalize component tests and project cleanup

* test(logic): add unit tests for useModalManager hook

* fix(test): resolve timeout in AppSidebar tests by mocking TooltipProvider

* feat(i18n): comprehensive i18n audit, fixes for hardcoded strings, and complete zh-TW support

* fix(i18n): resolve TypeScript warnings and improve translation hook stability

- Remove unused useTranslation import from ConnectionGuard
- Add ref-based checking state to prevent dependency cycles
- Fix useTranslation hook to return empty string for undefined translations
- Add comment for backward compatibility on ExtractedReference interface
- Ensure .replace() string methods work safely with nested translation keys

* feat(i18n): complete internationalization implementation with Docker deployment

- Add LanguageLoadingOverlay component for smooth language transitions
- Update all translation files (en-US, zh-CN, zh-TW) with improved terminology
- Optimize Docker configuration for better performance
- Update version check and config handling for i18n support
- Fix route handling for language-specific content
- Add comprehensive task documentation

* fix(i18n): resolve localization errors, duplicates, and type issues

* chore(i18n): finalize 100% internationalization coverage

* chore(test): supplement i18n test cases and cleanup redundant files

* fix(test): resolve lint type errors and finalize delivery documents

* feat(i18n): finalize full internationalization and zh-TW localization

* fix(frontend): add missing devDependency and fix build tsconfig

* feat(ui): enhance sidebar hover effects with better visual feedback

* fix(frontend): resolve accessibility, i18n, and lint issues

- fix: add missing id, name, autocomplete attributes to dialog inputs
- fix: add aria labels and DialogDescription for accessibility
- fix: resolve uncontrolled component warning in SettingsForm
- fix: correct duplicate 'Traditional Chinese' label in zh-TW locale
- feat: add i18n support for podcast template names
- chore: fix lint errors in Dialogs

* fix: address all 21 PR feedback items from cubic-dev-ai bot

Configuration:
- Remove ignoreDuringBuilds flags from next.config.ts

Testing:
- Fix AppSidebar.test.tsx regex pattern and add missing assertion

Logic:
- Fix ConnectionGuard.tsx re-entry prevention logic

Internationalization (I18n) - Translations:
- Add missing keys: notebooks.archived, common.note/insight, accessibility keys
- Add specific keys: sources.allSourcesDescShort, transformations.selectModel
- Add singular/plural keys: podcasts.usedByCount_one/other, common.note/notes
- Add common.created/updated with {time} placeholder

Internationalization (I18n) - Usage:
- SourcesPage: use allSourcesDescShort instead of string splitting
- TransformationPlayground: use navigation.transformation and selectModel
- CommandPalette: use dedicated keys instead of string concatenation
- GeneratePodcastDialog: fix zh-TW date locale handling
- NotebookHeader: correctly interpolate {time} placeholder
- TransformationCard: use common.description instead of undefined key
- ChatPanel/SpeakerProfilesPanel: implement proper pluralization
- SystemInfo: correctly interpolate {version} placeholder
- LanguageLoadingOverlay: use t.common.loading instead of hardcoded string
- MessageActions: use specific error key cannotSaveNoteNoNotebook

Other:
- Fix SessionManager.tsx exhaustive-deps warning

* fix: remove duplicate locale keys and add missing zh-CN translations

- en-US: remove duplicate loading key (line 59) and addNew key (sources)
- zh-CN: remove duplicate common keys (loading, note, insight, newSource, newNotebook, newPodcast)
- zh-CN: remove duplicate accessibility.searchNotebooks key
- zh-CN: remove duplicate sources.addNew key
- zh-CN: remove duplicate navigation.transformation key
- zh-CN: add missing usedByCount_one and usedByCount_other keys in podcasts
- zh-TW: remove duplicate common keys (loading, note, insight, newSource, newNotebook, newPodcast)
- zh-TW: remove duplicate accessibility.searchNotebooks key
- zh-TW: remove duplicate sources.addNew key

* docs: remove info.md

* fix: remove duplicate notebook keys and unused ts-expect-error

- zh-CN: remove duplicate notebooks keys (archived, archive, unarchive, deleteNotebook, deleteNotebookDesc)
- zh-TW: remove duplicate notebooks keys (archived, archive, unarchive, deleteNotebook, deleteNotebookDesc)
- GeneratePodcastDialog: remove unused @ts-expect-error directive

* fix(a11y): fix unassociated labels in search page

- Replace <Label> with role='group' + aria-labelledby for search type section
- Replace <Label> with role='group' + aria-labelledby for search in section
- Follows WAI-ARIA best practices for labeling form field groups

* fix(a11y): fix unassociated labels across multiple components

- search/page.tsx: use role='group' + aria-labelledby for search type and search in sections
- RebuildEmbeddings.tsx: use role='group' + aria-labelledby for include checkboxes
- TransformationPlayground.tsx: replace Label with span for non-form output label

* chore: revert to npm stack and ensure i18n compatibility

* chore: polish zh-TW translations for better idiomatic usage

* fix: resolve linter errors (ruff import sort, mypy config duplicate)

* style: apply ruff formatting

* fix: finalize upstream compliance (Dockerfile.single, i18n hooks, docker-compose)

* style: polish strings, fix timeout cleanup, and improve test mocks

* fix: use relative imports in test setup to resolve IDE path errors

* perf(docker): optimize build speed by removing apt-get upgrade and build tools

- Remove apt-get upgrade from both builder and runtime stages (saves 10-15 min each)
- Remove gcc/g++/make/git from builder (uv downloads pre-built wheels)
- Add --no-install-recommends to minimize package footprint
- Keep npm mirror (npmmirror.com) for faster frontend deps
- Add npm registry config for reliable China network access

Also includes:
- fix(a11y): add missing labels and aria attributes to form fields
- fix(i18n): add 2s safety timeout to LanguageLoadingOverlay
- fix(i18n): add robustness checks to use-translation proxy

Build time reduced from 2+ hours to ~34 minutes (~70% improvement)

* fix(a11y): resolve 16 form field accessibility warnings in notebook and podcast pages

* fix(a11y): resolve 4 button and 1 select field accessibility warnings in models page

* fix(a11y): resolve redundant attributes and residual warnings in transformations and podcast forms

* fix(i18n): deep fix for language switch hang using proxy protection and safer access

* fix(a11y): add name attributes to ModelSelector, TransformationPlayground, and SourceDetailContent

* fix: add missing Label import to SourceDetailContent

* fix(i18n): use native react-i18next in LanguageLoadingOverlay to prevent hang during language switch

* fix(i18n): rewrite use-translation Proxy with strict depth limit and expanded blocked props to prevent language switch hang

* fix: add type assertion to fix TypeScript comparison error

* fix(i18n): disable useSuspense to prevent thread hang during language resource loading

* fix(i18n): add infinite loop detection circuit breaker to useTranslation hook

* fix(i18n): update traditional chinese label to native script in en-US

* feat: add new localization strings for notebook and note management

* fix: resolve config priority, docker build deps, and ui glitches

* refactor: improve ui details and test coverage based on feedback

* refactor: improve ui details (version check/lang toggle) and test coverage

* fix: polish language matching and test cleanup

* fix(test): update mocks to resolve timeouts and proxy errors

* fix(frontend): restore tsconfig.json structure and enable IDE support for tests

* fix: address PR review findings and resolve CI OIDC failure

* fix: merge exception headers in custom handler

* fix: comprehensive PR review remediations and async performance fixes

* refactor: address all PR #371 review feedback

- Docker: consolidate SURREAL_URL to docker.env, add single-container override
- Security: restore apt-get upgrade in Dockerfile and Dockerfile.single
- Create centralized getDateLocale helper (lib/utils/date-locale.ts)
- Refactor 7 files to use getDateLocale helper
- Revert config/route.ts to origin/main version
- Move test files to co-located pattern (3 files)
- Remove local useTranslation mock from ConfirmDialog.test.tsx
- Simplify use-version-check to single useEffect pattern
- Fix test import paths after moving to co-located pattern

* fix: add jest-dom types for test files

* fix: address remaining review issues

- Add apt-get upgrade -y to Dockerfile.single backend-builder stage
- Refactor ChatColumn.test.tsx: use 'as unknown as ReturnType<typeof hook>' instead of 'as any'
- Use toBeInTheDocument() assertions instead of toBeDefined()
2026-01-15 13:51:05 -03:00

391 lines · 14 KiB · Python

from unittest.mock import AsyncMock, patch

import pytest
from fastapi.testclient import TestClient


@pytest.fixture
def client():
    """Create test client after environment variables have been cleared by conftest."""
    from api.main import app

    return TestClient(app)

class TestModelCreation:
    """Test suite for Model Creation endpoint."""

    @pytest.mark.asyncio
    @patch("open_notebook.database.repository.repo_query")
    @patch("api.routers.models.Model.save")
    async def test_create_duplicate_model_same_case(
        self, mock_save, mock_repo_query, client
    ):
        """Test that creating a duplicate model with same case returns 400."""
        # Mock repo_query to return a duplicate model
        mock_repo_query.return_value = [
            {
                "id": "model:123",
                "name": "gpt-4",
                "provider": "openai",
                "type": "language",
            }
        ]

        # Attempt to create duplicate
        response = client.post(
            "/api/models",
            json={"name": "gpt-4", "provider": "openai", "type": "language"},
        )

        assert response.status_code == 400
        assert (
            response.json()["detail"]
            == "Model 'gpt-4' already exists for provider 'openai' with type 'language'"
        )

    @pytest.mark.asyncio
    @patch("open_notebook.database.repository.repo_query")
    @patch("api.routers.models.Model.save")
    async def test_create_duplicate_model_different_case(
        self, mock_save, mock_repo_query, client
    ):
        """Test that creating a duplicate model with different case returns 400."""
        # Mock repo_query to return a duplicate model (case-insensitive match)
        mock_repo_query.return_value = [
            {
                "id": "model:123",
                "name": "gpt-4",
                "provider": "openai",
                "type": "language",
            }
        ]

        # Attempt to create duplicate with different case
        response = client.post(
            "/api/models",
            json={"name": "GPT-4", "provider": "OpenAI", "type": "language"},
        )

        assert response.status_code == 400
        assert (
            response.json()["detail"]
            == "Model 'GPT-4' already exists for provider 'OpenAI' with type 'language'"
        )

    @pytest.mark.asyncio
    @patch("open_notebook.database.repository.repo_query")
    async def test_create_same_model_name_different_provider(
        self, mock_repo_query, client
    ):
        """Test that creating a model with same name but different provider is allowed."""
        from open_notebook.ai.models import Model

        # Mock repo_query to return empty (no duplicate found for different provider)
        mock_repo_query.return_value = []

        # Patch the save method on the Model class
        with patch.object(Model, "save", new_callable=AsyncMock):
            # Attempt to create same model name with different provider (anthropic)
            response = client.post(
                "/api/models",
                json={"name": "gpt-4", "provider": "anthropic", "type": "language"},
            )

            # Should succeed because provider is different
            assert response.status_code == 200

    @pytest.mark.asyncio
    @patch("open_notebook.database.repository.repo_query")
    async def test_create_same_model_name_different_type(self, mock_repo_query, client):
        """Test that creating a model with same name but different type is allowed."""
        from open_notebook.ai.models import Model

        # Mock repo_query to return empty (no duplicate found for different type)
        mock_repo_query.return_value = []

        # Patch the save method on the Model class
        with patch.object(Model, "save", new_callable=AsyncMock):
            # Attempt to create same model name with different type (embedding instead of language)
            response = client.post(
                "/api/models",
                json={"name": "gpt-4", "provider": "openai", "type": "embedding"},
            )

            # Should succeed because type is different
            assert response.status_code == 200
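The four creation tests above assume the endpoint compares name, provider, and type case-insensitively when rejecting duplicates. A minimal sketch of such a check, using hypothetical names (the real router presumably expresses this as a database query rather than an in-memory scan):

```python
# Hypothetical sketch of the duplicate check the endpoint appears to perform;
# the actual router likely pushes this comparison into a database query.
def is_duplicate(existing, name, provider, model_type):
    """True if a model with the same name/provider/type exists, ignoring case,
    so 'GPT-4'/'OpenAI' collides with 'gpt-4'/'openai', but a different
    provider or type does not."""
    return any(
        m["name"].lower() == name.lower()
        and m["provider"].lower() == provider.lower()
        and m["type"].lower() == model_type.lower()
        for m in existing
    )
```

Under that rule, `test_create_duplicate_model_different_case` gets a 400 while the different-provider and different-type tests fall through to `Model.save`.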


class TestModelsProviderAvailability:
    """Test suite for Models Provider Availability endpoint."""

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_generic_env_var_enables_all_modes(self, mock_esperanto, mock_env, client):
        """Test that OPENAI_COMPATIBLE_BASE_URL enables all 4 modes."""
        # Mock environment: only generic var is set
        def env_side_effect(key, default=None):
            if key == "OPENAI_COMPATIBLE_BASE_URL":
                return "http://localhost:1234/v1"
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
            "speech_to_text": ["openai-compatible"],
            "text_to_speech": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # openai-compatible should be available
        assert "openai-compatible" in data["available"]

        # Should support all 4 types
        assert "openai-compatible" in data["supported_types"]
        supported = data["supported_types"]["openai-compatible"]
        assert "language" in supported
        assert "embedding" in supported
        assert "speech_to_text" in supported
        assert "text_to_speech" in supported
        assert len(supported) == 4

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_mode_specific_env_vars_llm_embedding(
        self, mock_esperanto, mock_env, client
    ):
        """Test mode-specific env vars (LLM + EMBEDDING) enable only those 2 modes."""
        # Mock environment: only LLM and EMBEDDING specific vars are set
        def env_side_effect(key, default=None):
            if key == "OPENAI_COMPATIBLE_BASE_URL_LLM":
                return "http://localhost:1234/v1"
            if key == "OPENAI_COMPATIBLE_BASE_URL_EMBEDDING":
                return "http://localhost:8080/v1"
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
            "speech_to_text": ["openai-compatible"],
            "text_to_speech": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # openai-compatible should be available
        assert "openai-compatible" in data["available"]

        # Should support only language and embedding
        assert "openai-compatible" in data["supported_types"]
        supported = data["supported_types"]["openai-compatible"]
        assert "language" in supported
        assert "embedding" in supported
        assert "speech_to_text" not in supported
        assert "text_to_speech" not in supported
        assert len(supported) == 2

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_no_env_vars_set(self, mock_esperanto, mock_env, client):
        """Test that openai-compatible is not available when no env vars are set."""
        # Mock environment: no openai-compatible vars are set
        def env_side_effect(key, default=None):
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # openai-compatible should NOT be available
        assert "openai-compatible" not in data["available"]
        assert "openai-compatible" in data["unavailable"]

        # Should not have supported_types entry
        assert "openai-compatible" not in data["supported_types"]
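The hand-written `env_side_effect` functions in this class couple each test to how the router calls `os.environ.get`. `unittest.mock.patch.dict` is a possible alternative that patches the mapping itself; this is a sketch of the idea, not a change these tests make:

```python
import os
from unittest.mock import patch

# Sketch: patch.dict swaps entries in os.environ for the duration of the
# block, so every access pattern -- environ.get(key),
# environ.get(key, default), environ[key] -- sees the same values, and the
# original environment is restored on exit.
with patch.dict(
    os.environ,
    {"OPENAI_COMPATIBLE_BASE_URL_LLM": "http://localhost:1234/v1"},
    clear=True,
):
    assert os.environ.get("OPENAI_COMPATIBLE_BASE_URL_LLM") == "http://localhost:1234/v1"
    assert os.environ.get("OPENAI_COMPATIBLE_BASE_URL") is None
```

`clear=True` removes every other variable inside the block, mirroring the "only this var is set" setups above.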

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_mixed_config_generic_and_mode_specific(
        self, mock_esperanto, mock_env, client
    ):
        """Test mixed config: generic + mode-specific (generic should enable all)."""
        # Mock environment: both generic and mode-specific vars are set
        def env_side_effect(key, default=None):
            if key == "OPENAI_COMPATIBLE_BASE_URL":
                return "http://localhost:1234/v1"
            if key == "OPENAI_COMPATIBLE_BASE_URL_LLM":
                return "http://localhost:5678/v1"
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
            "speech_to_text": ["openai-compatible"],
            "text_to_speech": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # openai-compatible should be available
        assert "openai-compatible" in data["available"]

        # Generic var enables all, so all 4 should be supported
        assert "openai-compatible" in data["supported_types"]
        supported = data["supported_types"]["openai-compatible"]
        assert "language" in supported
        assert "embedding" in supported
        assert "speech_to_text" in supported
        assert "text_to_speech" in supported
        assert len(supported) == 4

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_individual_mode_llm_only(self, mock_esperanto, mock_env, client):
        """Test individual mode-specific var (LLM only)."""
        # Mock environment: only LLM specific var is set
        def env_side_effect(key, default=None):
            if key == "OPENAI_COMPATIBLE_BASE_URL_LLM":
                return "http://localhost:1234/v1"
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
            "speech_to_text": ["openai-compatible"],
            "text_to_speech": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # Should support only language
        supported = data["supported_types"]["openai-compatible"]
        assert supported == ["language"]

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_individual_mode_embedding_only(self, mock_esperanto, mock_env, client):
        """Test individual mode-specific var (EMBEDDING only)."""
        # Mock environment: only EMBEDDING specific var is set
        def env_side_effect(key, default=None):
            if key == "OPENAI_COMPATIBLE_BASE_URL_EMBEDDING":
                return "http://localhost:8080/v1"
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
            "speech_to_text": ["openai-compatible"],
            "text_to_speech": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # Should support only embedding
        supported = data["supported_types"]["openai-compatible"]
        assert supported == ["embedding"]

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_individual_mode_stt_only(self, mock_esperanto, mock_env, client):
        """Test individual mode-specific var (STT only)."""
        # Mock environment: only STT specific var is set
        def env_side_effect(key, default=None):
            if key == "OPENAI_COMPATIBLE_BASE_URL_STT":
                return "http://localhost:9000/v1"
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
            "speech_to_text": ["openai-compatible"],
            "text_to_speech": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # Should support only speech_to_text
        supported = data["supported_types"]["openai-compatible"]
        assert supported == ["speech_to_text"]

    @patch("api.routers.models.os.environ.get")
    @patch("api.routers.models.AIFactory.get_available_providers")
    def test_individual_mode_tts_only(self, mock_esperanto, mock_env, client):
        """Test individual mode-specific var (TTS only)."""
        # Mock environment: only TTS specific var is set
        def env_side_effect(key, default=None):
            if key == "OPENAI_COMPATIBLE_BASE_URL_TTS":
                return "http://localhost:9000/v1"
            return default

        mock_env.side_effect = env_side_effect

        # Mock Esperanto response
        mock_esperanto.return_value = {
            "language": ["openai-compatible"],
            "embedding": ["openai-compatible"],
            "speech_to_text": ["openai-compatible"],
            "text_to_speech": ["openai-compatible"],
        }

        response = client.get("/api/models/providers")

        assert response.status_code == 200
        data = response.json()

        # Should support only text_to_speech
        supported = data["supported_types"]["openai-compatible"]
        assert supported == ["text_to_speech"]
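Taken together, the provider-availability tests pin down a simple resolution rule: the generic `OPENAI_COMPATIBLE_BASE_URL` enables all four modes, while each mode-specific variable enables only its own mode. A hypothetical helper mirroring that rule (the names and structure are assumptions, not the router's actual code):

```python
# Assumed mapping from mode to its dedicated environment variable.
_MODE_VARS = {
    "language": "OPENAI_COMPATIBLE_BASE_URL_LLM",
    "embedding": "OPENAI_COMPATIBLE_BASE_URL_EMBEDDING",
    "speech_to_text": "OPENAI_COMPATIBLE_BASE_URL_STT",
    "text_to_speech": "OPENAI_COMPATIBLE_BASE_URL_TTS",
}


def supported_openai_compatible_types(env):
    """Resolve supported modes from an environment mapping: the generic base
    URL wins and enables everything; otherwise each mode-specific URL enables
    only its own mode."""
    if env.get("OPENAI_COMPATIBLE_BASE_URL"):
        return list(_MODE_VARS)
    return [mode for mode, var in _MODE_VARS.items() if env.get(var)]
```

Passing the environment as a plain mapping keeps the rule testable without patching `os.environ` at all.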