Commit graph

3 commits

Luis Novo
5d84ab0768 fix: embedding batch sizing and 413 error classification (1.7.4)
- Add batching to generate_embeddings() (50 texts per batch with per-batch retry)
  to prevent 413 Payload Too Large errors on large documents
- Add 413 error classification rule for user-friendly error messages
- Fix misleading "Created 0 embedded chunks" log in process_source_command
  by removing premature get_embedded_chunks() call (embedding is fire-and-forget)

Closes #594
2026-02-18 11:39:47 -03:00
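The batching described above can be sketched as follows. This is a minimal illustration, not the project's code: `generate_embeddings` and the batch size of 50 come from the commit message, while the `embed` callable, the retry count, and all other names are assumptions.

```python
# Hypothetical sketch: split texts into batches of 50 and retry each
# batch independently, so one oversized request (HTTP 413) does not
# fail the whole document.
BATCH_SIZE = 50     # per-batch limit from the commit message
MAX_RETRIES = 3     # assumed retry count

def generate_embeddings(texts, embed):
    """Embed `texts` in batches via the assumed `embed(batch)` callable."""
    embeddings = []
    for start in range(0, len(texts), BATCH_SIZE):
        batch = texts[start:start + BATCH_SIZE]
        for attempt in range(MAX_RETRIES):
            try:
                embeddings.extend(embed(batch))
                break  # this batch succeeded; move to the next one
            except Exception:
                if attempt == MAX_RETRIES - 1:
                    raise  # surface the error after the last retry
    return embeddings
```

Because each batch is retried on its own, a transient failure midway through a large document only replays the affected 50 texts rather than the entire payload.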
Luis Novo
cb5ec9d65c fix: restore graceful fallback in get_default_model and truncate error messages
- Catch ConfigurationError alongside ValueError in get_default_model()
  to preserve graceful fallback after ValueError→ConfigurationError migration
- Add _truncate() helper to error_classifier to cap pass-through and
  default error messages at 200 chars, avoiding verbose internal details
2026-02-16 16:25:31 -03:00
Luis Novo
20e18fdd0d feat: improve error clarity for LLM provider failures (#506)
Replace generic "An unexpected error occurred" messages with descriptive,
user-friendly error messages when LLM operations fail. Errors like invalid
API keys, wrong model names, and rate limits now surface clearly in the UI.

Adds error classification utility, global FastAPI exception handlers, and
frontend getApiErrorMessage() helper. Bumps version to 1.7.2.
2026-02-16 16:15:46 -03:00
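The classification idea behind this commit can be sketched as a rule table that maps raw provider failures onto user-friendly messages, with the generic message kept only as the last resort. The rules and wording below are hypothetical, not the project's actual classifier.

```python
# Illustrative error classifier: match known failure signatures
# (invalid API key, unknown model, rate limit) before falling back
# to the generic message.
def classify_error(exc: Exception) -> str:
    text = str(exc).lower()
    if "401" in text or "api key" in text:
        return "Invalid API key for the configured LLM provider."
    if "model" in text and ("404" in text or "not found" in text):
        return "The requested model name was not found."
    if "429" in text or "rate limit" in text:
        return "Rate limit reached; please retry shortly."
    return "An unexpected error occurred."
```

In the shape the commit describes, a global FastAPI exception handler would run such a classifier on uncaught errors and return the resulting message in the response body, where the frontend's `getApiErrorMessage()` helper picks it up for display.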