Merge pull request #733 from MODSetter/dev

feat: agent regeneration and edit fixes
Rohan Verma 2026-01-23 02:35:11 -08:00 committed by GitHub
commit 642f6a0327
12 changed files with 990 additions and 85 deletions

View file

@@ -17,7 +17,7 @@
# SurfSense
Connect any LLM to your internal knowledge sources and chat with it in real time alongside your team. OSS alternative to NotebookLM, Perplexity, and Glean.
SurfSense is a highly customizable AI research agent, connected to external sources such as Search Engines (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch and more to come.
SurfSense is a highly customizable AI research agent, connected to external sources such as Search Engines (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch, Obsidian and more to come.
<div align="center">
<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@@ -65,25 +65,7 @@ https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7
- Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)
### 🤖 **Deep Agent Architecture**
#### Built-in Agent Tools
| Tool | Description |
|------|-------------|
| **search_knowledge_base** | Search your personal knowledge base with semantic + full-text hybrid search, date filtering, and connector-specific queries |
| **generate_podcast** | Generate audio podcasts from chat conversations or knowledge base content |
| **link_preview** | Fetch rich Open Graph metadata for URLs to display preview cards |
| **display_image** | Display images in chat with metadata and source attribution |
| **scrape_webpage** | Extract full content from webpages for analysis and summarization (supports Firecrawl or local Chromium/Trafilatura) |
#### Extensible Tools Registry
Contributors can easily add new tools via the registry pattern (sketched below):
1. Create a tool factory function in `surfsense_backend/app/agents/new_chat/tools/`
2. Register it in the `BUILTIN_TOOLS` list in `registry.py`
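A minimal sketch of the pattern (illustrative only; `create_echo_tool` is a made-up name, and the real factory signature lives in `registry.py`):

```python
# surfsense_backend/app/agents/new_chat/tools/echo.py (hypothetical example)
from langchain_core.tools import tool


def create_echo_tool():
    """Factory function: returns the tool instance handed to the agent."""

    @tool
    def echo(text: str) -> str:
        """Echo the input back; a stand-in for real tool logic."""
        return text

    return echo


# Then add `create_echo_tool` to the BUILTIN_TOOLS list in registry.py.
```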
#### Configurable System Prompts
- Custom system instructions via LLM configuration
- Toggle citations on/off per configuration
- Supports 100+ LLMs via LiteLLM integration
- Powered by [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) - agents that can plan, use subagents, and leverage file systems for complex tasks.
### 📊 **Advanced RAG Techniques**
- Supports 100+ LLMs
@@ -113,6 +95,7 @@ Contributors can easily add new tools via the registry pattern:
- Luma
- Circleback
- Elasticsearch
- Obsidian
- and more to come...
## 📄 **Supported File Extensions**

View file

@@ -18,7 +18,7 @@
Connect any LLM to your internal knowledge sources and chat with it in real time alongside your team. Open-source alternative to NotebookLM, Perplexity, and Glean.
SurfSense is a highly customizable AI research assistant that connects to external data sources such as search engines (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch, and more to come.
SurfSense is a highly customizable AI research assistant that connects to external data sources such as search engines (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch, Obsidian, and more to come.
<div align="center">
<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@@ -73,25 +73,7 @@ https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7
- Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)
### 🤖 **Deep Agent Architecture**
#### Built-in Agent Tools
| Tool | Description |
|------|-------------|
| **search_knowledge_base** | Search your personal knowledge base with semantic + full-text hybrid search, date filtering, and connector-specific queries |
| **generate_podcast** | Generate audio podcasts from chat conversations or knowledge base content |
| **link_preview** | Fetch Open Graph metadata for URLs to display preview cards |
| **display_image** | Display images in chat with metadata and source attribution |
| **scrape_webpage** | Extract full content from webpages for analysis and summarization (supports Firecrawl or local Chromium/Trafilatura) |
#### Extensible Tools Registry
Contributors can easily add new tools via the registry pattern:
1. Create a tool factory function in `surfsense_backend/app/agents/new_chat/tools/`
2. Register it in the `BUILTIN_TOOLS` list in `registry.py`
#### Configurable System Prompts
- Custom system instructions via LLM configuration
- Toggle citations on/off per configuration
- Supports 100+ LLMs via LiteLLM integration
- Powered by [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) - agents that can plan, use subagents, and leverage file systems for complex tasks.
### 📊 **Advanced RAG Techniques**
- Supports 100+ LLMs
@@ -121,6 +103,7 @@ https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7
- Luma
- Circleback
- Elasticsearch
- Obsidian
- and more to come...
## 📄 **Supported File Extensions**

View file

@@ -45,6 +45,7 @@ from app.schemas.new_chat import (
NewChatThreadUpdate,
NewChatThreadVisibilityUpdate,
NewChatThreadWithMessages,
RegenerateRequest,
ThreadHistoryLoadResponse,
ThreadListItem,
ThreadListResponse,
@@ -1013,6 +1014,256 @@ async def handle_new_chat(
) from None
# =============================================================================
# Chat Regeneration Endpoint (Edit/Reload)
# =============================================================================
@router.post("/threads/{thread_id}/regenerate")
async def regenerate_response(
thread_id: int,
request: RegenerateRequest,
session: AsyncSession = Depends(get_async_session),
user: User = Depends(current_active_user),
):
"""
Regenerate the AI response for a chat thread.
This endpoint supports two operations:
1. **Edit**: Provide a new `user_query` to replace the last user message and regenerate
2. **Reload**: Leave `user_query` empty (or None) to regenerate with the same query
Both operations:
- Rewind the LangGraph checkpointer to the state before the last AI response
- Delete the last user message and AI response from the database
- Stream a new response from that checkpoint
Access is granted if:
- User is the creator of the thread
- Thread visibility is SEARCH_SPACE
Requires CHATS_UPDATE permission.
"""
from langchain_core.messages import HumanMessage
from app.agents.new_chat.checkpointer import get_checkpointer
try:
# Verify thread exists and user has permission
result = await session.execute(
select(NewChatThread).filter(NewChatThread.id == thread_id)
)
thread = result.scalars().first()
if not thread:
raise HTTPException(status_code=404, detail="Thread not found")
await check_permission(
session,
user,
thread.search_space_id,
Permission.CHATS_UPDATE.value,
"You don't have permission to update chats in this search space",
)
# Check thread-level access based on visibility
await check_thread_access(session, thread, user)
# Get the checkpointer and state history
checkpointer = await get_checkpointer()
config = {"configurable": {"thread_id": str(thread_id)}}
# Collect checkpoint tuples from the async iterator
# CheckpointTuple has: config, checkpoint (dict with channel_values), metadata, parent_config
checkpoint_tuples = []
async for cp_tuple in checkpointer.alist(config):
checkpoint_tuples.append(cp_tuple)
if not checkpoint_tuples:
raise HTTPException(
status_code=400, detail="No conversation history found for this thread"
)
# Find the checkpoint to rewind to
# Checkpoints are in reverse chronological order (newest first)
# We need to find a checkpoint before the last user message was added
#
# The checkpointer stores states after each node execution.
# For a typical conversation flow:
# - User sends message -> state 1 (with HumanMessage)
# - Agent responds -> state 2 (with HumanMessage + AIMessage)
#
# To regenerate, we need the state BEFORE the last HumanMessage was processed
target_checkpoint_id = None
user_query_to_use = request.user_query
# Look through checkpoints to find the right one
# We want to find the checkpoint just before the last HumanMessage
for i, cp_tuple in enumerate(checkpoint_tuples):
# Access the checkpoint's channel_values which contains "messages"
checkpoint_data = cp_tuple.checkpoint
channel_values = checkpoint_data.get("channel_values", {})
state_messages = channel_values.get("messages", [])
if state_messages:
last_msg = state_messages[-1]
# Find a checkpoint where the last message is NOT a HumanMessage
# This means we're at a state before the user's last message
if not isinstance(last_msg, HumanMessage):
# If no new user_query provided (reload), extract from a later checkpoint
if user_query_to_use is None and i > 0:
# Get the user query from a more recent checkpoint
for prev_cp_tuple in checkpoint_tuples[:i]:
prev_checkpoint_data = prev_cp_tuple.checkpoint
prev_channel_values = prev_checkpoint_data.get(
"channel_values", {}
)
prev_messages = prev_channel_values.get("messages", [])
for msg in reversed(prev_messages):
if isinstance(msg, HumanMessage):
user_query_to_use = msg.content
break
if user_query_to_use:
break
target_checkpoint_id = cp_tuple.config["configurable"][
"checkpoint_id"
]
break
# If we couldn't find a good checkpoint, try alternative approaches
if target_checkpoint_id is None and checkpoint_tuples:
if len(checkpoint_tuples) == 1:
# Only one checkpoint - get the user query from it if not provided
if user_query_to_use is None:
checkpoint_data = checkpoint_tuples[0].checkpoint
channel_values = checkpoint_data.get("channel_values", {})
state_messages = channel_values.get("messages", [])
for msg in state_messages:
if isinstance(msg, HumanMessage):
user_query_to_use = msg.content
break
else:
# Use the oldest checkpoint
target_checkpoint_id = checkpoint_tuples[-1].config["configurable"][
"checkpoint_id"
]
# If we still don't have a user query, get it from the database
if user_query_to_use is None:
# Get the last user message from the database
last_user_msg_result = await session.execute(
select(NewChatMessage)
.filter(
NewChatMessage.thread_id == thread_id,
NewChatMessage.role == NewChatMessageRole.USER,
)
.order_by(NewChatMessage.created_at.desc())
.limit(1)
)
last_user_msg = last_user_msg_result.scalars().first()
if last_user_msg:
content = last_user_msg.content
if isinstance(content, str):
user_query_to_use = content
elif isinstance(content, list):
# Extract text from content parts
for part in content:
if isinstance(part, dict) and part.get("type") == "text":
user_query_to_use = part.get("text", "")
break
elif isinstance(part, str):
user_query_to_use = part
break
if user_query_to_use is None:
raise HTTPException(
status_code=400,
detail="Could not determine user query for regeneration. Please provide a user_query.",
)
# Get the last two messages to delete AFTER streaming succeeds
# This prevents data loss if streaming fails
last_messages_result = await session.execute(
select(NewChatMessage)
.filter(NewChatMessage.thread_id == thread_id)
.order_by(NewChatMessage.created_at.desc())
.limit(2)
)
messages_to_delete = list(last_messages_result.scalars().all())
# Get search space for LLM config
search_space_result = await session.execute(
select(SearchSpace).filter(SearchSpace.id == request.search_space_id)
)
search_space = search_space_result.scalars().first()
if not search_space:
raise HTTPException(status_code=404, detail="Search space not found")
llm_config_id = (
search_space.agent_llm_id if search_space.agent_llm_id is not None else -1
)
# Create a wrapper generator that deletes messages only AFTER streaming succeeds
# This prevents data loss if streaming fails (network error, LLM error, etc.)
async def stream_with_cleanup():
streaming_completed = False
try:
async for chunk in stream_new_chat(
user_query=user_query_to_use,
search_space_id=request.search_space_id,
chat_id=thread_id,
session=session,
user_id=str(user.id),
llm_config_id=llm_config_id,
attachments=request.attachments,
mentioned_document_ids=request.mentioned_document_ids,
mentioned_surfsense_doc_ids=request.mentioned_surfsense_doc_ids,
checkpoint_id=target_checkpoint_id,
):
yield chunk
# If we get here, streaming completed successfully
streaming_completed = True
finally:
# Only delete old messages if streaming completed successfully
# This ensures we don't lose data on streaming failures
if streaming_completed and messages_to_delete:
try:
for msg in messages_to_delete:
await session.delete(msg)
await session.commit()
except Exception as cleanup_error:
# Log but don't fail - the new messages are already streamed
print(
f"[regenerate] Warning: Failed to delete old messages: {cleanup_error}"
)
# Return streaming response with checkpoint_id for rewinding
return StreamingResponse(
stream_with_cleanup(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
},
)
except HTTPException:
raise
except Exception as e:
import traceback
traceback.print_exc()
raise HTTPException(
status_code=500,
detail=f"An unexpected error occurred during regeneration: {e!s}",
) from None
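# ---------------------------------------------------------------------------
# Illustration only (not wired into the app): a minimal client sketch for the
# endpoint above. Assumes `httpx` is installed and the API is served at
# http://localhost:8000. Pass user_query=None to reload, or a string to edit.
# ---------------------------------------------------------------------------
async def _example_regenerate_client(
    thread_id: int,
    token: str,
    search_space_id: int,
    user_query: str | None = None,
) -> None:
    import httpx

    payload = {"search_space_id": search_space_id, "user_query": user_query}
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream(
            "POST",
            f"http://localhost:8000/api/v1/threads/{thread_id}/regenerate",
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
        ) as response:
            async for line in response.aiter_lines():
                if line.startswith("data: "):
                    # Each payload is a JSON event (text-delta, tool-*, error, ...)
                    print(line[len("data: "):])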
# =============================================================================
# Attachment Processing Endpoint
# =============================================================================

View file

@@ -184,3 +184,23 @@ class NewChatRequest(BaseModel):
mentioned_surfsense_doc_ids: list[int] | None = (
None # Optional SurfSense documentation IDs mentioned with @ in the chat
)
class RegenerateRequest(BaseModel):
"""
Request schema for regenerating an AI response.
This supports two operations:
1. Edit: Provide a new user_query to replace the last user message and regenerate
2. Reload: Leave user_query empty to regenerate the last AI response with the same query
Both operations rewind the LangGraph checkpointer to the appropriate state.
"""
search_space_id: int
user_query: str | None = (
None # New user query (for edit). None = reload with same query
)
attachments: list[ChatAttachment] | None = None
mentioned_document_ids: list[int] | None = None
mentioned_surfsense_doc_ids: list[int] | None = None
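# For illustration, the two request shapes this schema accepts (values assumed):
#   edit   -> {"search_space_id": 7, "user_query": "Give a shorter answer"}
#   reload -> {"search_space_id": 7, "user_query": null}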

View file

@@ -159,6 +159,7 @@ async def stream_new_chat(
attachments: list[ChatAttachment] | None = None,
mentioned_document_ids: list[int] | None = None,
mentioned_surfsense_doc_ids: list[int] | None = None,
checkpoint_id: str | None = None,
) -> AsyncGenerator[str, None]:
"""
Stream chat responses from the new SurfSense deep agent.
@@ -177,6 +178,7 @@
attachments: Optional attachments with extracted content
mentioned_document_ids: Optional list of document IDs mentioned with @ in the chat
mentioned_surfsense_doc_ids: Optional list of SurfSense doc IDs mentioned with @ in the chat
checkpoint_id: Optional checkpoint ID to rewind/fork from (for edit/reload operations)
Yields:
str: SSE formatted response strings
@@ -325,10 +327,13 @@
}
# Configure LangGraph with thread_id for memory
# If checkpoint_id is provided, fork from that checkpoint (for edit/reload)
configurable = {"thread_id": str(chat_id)}
if checkpoint_id:
configurable["checkpoint_id"] = checkpoint_id
config = {
"configurable": {
"thread_id": str(chat_id),
},
"configurable": configurable,
"recursion_limit": 80, # Increase from default 25 to allow more tool iterations
}
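# Note: with LangGraph's time-travel semantics, resuming from an earlier
# checkpoint_id forks the thread at that state rather than appending to the
# newest checkpoint, which is what makes the edit/reload rewind possible.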

View file

@@ -7,4 +7,7 @@ NEXT_PUBLIC_ELECTRIC_URL=http://localhost:5133
NEXT_PUBLIC_ELECTRIC_AUTH_MODE=insecure
# Contact Form Vars - OPTIONAL
DATABASE_URL=postgresql://postgres:[YOUR-PASSWORD]@db.sdsf.supabase.co:5432/postgres
DATABASE_URL=postgresql://postgres:[YOUR-PASSWORD]@db.sdsf.supabase.co:5432/postgres
# Obsidian flag for cloud version (optional): "self-hosted" or "cloud"
NEXT_PUBLIC_DEPLOYMENT_MODE="self-hosted"

View file

@@ -48,6 +48,7 @@ import {
appendMessage,
type ChatVisibility,
createThread,
getRegenerateUrl,
getThreadFull,
getThreadMessages,
type MessageRecord,
@@ -1045,16 +1046,416 @@ export default function NewChatPage() {
[]
);
// Handle editing a message - removes messages after the edited one and sends as new
/**
* Handle regeneration (edit or reload) by calling the regenerate endpoint
* and streaming the response. This rewinds the LangGraph checkpointer state.
*
* @param newUserQuery - The new user query (for edit). Pass null/undefined for reload.
*/
const handleRegenerate = useCallback(
async (newUserQuery?: string | null) => {
if (!threadId) {
toast.error("Cannot regenerate: no active chat thread");
return;
}
// Abort any previous streaming request
if (abortControllerRef.current) {
abortControllerRef.current.abort();
abortControllerRef.current = null;
}
const token = getBearerToken();
if (!token) {
toast.error("Not authenticated. Please log in again.");
return;
}
// Extract the original user query BEFORE removing messages (for reload mode)
let userQueryToDisplay = newUserQuery;
let originalUserMessageContent: ThreadMessageLike["content"] | null = null;
let originalUserMessageAttachments: ThreadMessageLike["attachments"] | undefined;
let originalUserMessageMetadata: ThreadMessageLike["metadata"] | undefined;
if (!newUserQuery) {
// Reload mode - find and preserve the last user message content
const lastUserMessage = [...messages].reverse().find((m) => m.role === "user");
if (lastUserMessage) {
originalUserMessageContent = lastUserMessage.content;
originalUserMessageAttachments = lastUserMessage.attachments;
originalUserMessageMetadata = lastUserMessage.metadata;
// Extract text for the API request
for (const part of lastUserMessage.content) {
if (typeof part === "object" && part.type === "text" && "text" in part) {
userQueryToDisplay = part.text;
break;
}
}
}
}
// Remove the last two messages (user + assistant) from the UI immediately
// The backend will also delete them from the database
setMessages((prev) => {
if (prev.length >= 2) {
return prev.slice(0, -2);
}
return prev;
});
// Clear thinking steps for the removed messages
setMessageThinkingSteps((prev) => {
const newMap = new Map(prev);
// Remove thinking steps for the last two messages
const lastTwoIds = messages
.slice(-2)
.map((m) => m.id)
.filter((id): id is string => !!id);
for (const id of lastTwoIds) {
newMap.delete(id);
}
return newMap;
});
// Start streaming
setIsRunning(true);
const controller = new AbortController();
abortControllerRef.current = controller;
// Add placeholder user message if we have a new query (edit mode)
const userMsgId = `msg-user-${Date.now()}`;
const assistantMsgId = `msg-assistant-${Date.now()}`;
const currentThinkingSteps = new Map<string, ThinkingStepData>();
// Content parts tracking (same as onNew)
type ContentPart =
| { type: "text"; text: string }
| {
type: "tool-call";
toolCallId: string;
toolName: string;
args: Record<string, unknown>;
result?: unknown;
};
const contentParts: ContentPart[] = [];
let currentTextPartIndex = -1;
const toolCallIndices = new Map<string, number>();
const appendText = (delta: string) => {
if (currentTextPartIndex >= 0 && contentParts[currentTextPartIndex]?.type === "text") {
(contentParts[currentTextPartIndex] as { type: "text"; text: string }).text += delta;
} else {
contentParts.push({ type: "text", text: delta });
currentTextPartIndex = contentParts.length - 1;
}
};
const addToolCall = (toolCallId: string, toolName: string, args: Record<string, unknown>) => {
if (TOOLS_WITH_UI.has(toolName)) {
contentParts.push({ type: "tool-call", toolCallId, toolName, args });
toolCallIndices.set(toolCallId, contentParts.length - 1);
currentTextPartIndex = -1;
}
};
const updateToolCall = (
toolCallId: string,
update: { args?: Record<string, unknown>; result?: unknown }
) => {
const index = toolCallIndices.get(toolCallId);
if (index !== undefined && contentParts[index]?.type === "tool-call") {
const tc = contentParts[index] as ContentPart & { type: "tool-call" };
if (update.args) tc.args = update.args;
if (update.result !== undefined) tc.result = update.result;
}
};
const buildContentForUI = (): ThreadMessageLike["content"] => {
const filtered = contentParts.filter((part) => {
if (part.type === "text") return part.text.length > 0;
if (part.type === "tool-call") return TOOLS_WITH_UI.has(part.toolName);
return false;
});
return filtered.length > 0
? (filtered as ThreadMessageLike["content"])
: [{ type: "text", text: "" }];
};
const buildContentForPersistence = (): unknown[] => {
const parts: unknown[] = [];
if (currentThinkingSteps.size > 0) {
parts.push({
type: "thinking-steps",
steps: Array.from(currentThinkingSteps.values()),
});
}
for (const part of contentParts) {
if (part.type === "text" && part.text.length > 0) {
parts.push(part);
} else if (part.type === "tool-call" && TOOLS_WITH_UI.has(part.toolName)) {
parts.push(part);
}
}
return parts.length > 0 ? parts : [{ type: "text", text: "" }];
};
// Add placeholder messages to UI
// Always add back the user message (with new query for edit, or original content for reload)
const userMessage: ThreadMessageLike = {
id: userMsgId,
role: "user",
content: newUserQuery
? [{ type: "text", text: newUserQuery }]
: originalUserMessageContent || [{ type: "text", text: userQueryToDisplay || "" }],
createdAt: new Date(),
attachments: newUserQuery ? undefined : originalUserMessageAttachments,
metadata: newUserQuery ? undefined : originalUserMessageMetadata,
};
setMessages((prev) => [...prev, userMessage]);
// Add placeholder assistant message
setMessages((prev) => [
...prev,
{
id: assistantMsgId,
role: "assistant",
content: [{ type: "text", text: "" }],
createdAt: new Date(),
},
]);
try {
const response = await fetch(getRegenerateUrl(threadId), {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
search_space_id: searchSpaceId,
user_query: newUserQuery || null,
}),
signal: controller.signal,
});
if (!response.ok) {
throw new Error(`Backend error: ${response.status}`);
}
if (!response.body) {
throw new Error("No response body");
}
// Parse SSE stream (same logic as onNew)
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const events = buffer.split(/\r?\n\r?\n/);
buffer = events.pop() || "";
for (const event of events) {
const lines = event.split(/\r?\n/);
for (const line of lines) {
if (!line.startsWith("data: ")) continue;
const data = line.slice(6).trim();
if (!data || data === "[DONE]") continue;
try {
const parsed = JSON.parse(data);
switch (parsed.type) {
case "text-delta":
appendText(parsed.delta);
setMessages((prev) =>
prev.map((m) =>
m.id === assistantMsgId ? { ...m, content: buildContentForUI() } : m
)
);
break;
case "tool-input-start":
addToolCall(parsed.toolCallId, parsed.toolName, {});
setMessages((prev) =>
prev.map((m) =>
m.id === assistantMsgId ? { ...m, content: buildContentForUI() } : m
)
);
break;
case "tool-input-available":
if (toolCallIndices.has(parsed.toolCallId)) {
updateToolCall(parsed.toolCallId, { args: parsed.input || {} });
} else {
addToolCall(parsed.toolCallId, parsed.toolName, parsed.input || {});
}
setMessages((prev) =>
prev.map((m) =>
m.id === assistantMsgId ? { ...m, content: buildContentForUI() } : m
)
);
break;
case "tool-output-available":
updateToolCall(parsed.toolCallId, { result: parsed.output });
if (parsed.output?.status === "processing" && parsed.output?.task_id) {
const idx = toolCallIndices.get(parsed.toolCallId);
if (idx !== undefined) {
const part = contentParts[idx];
if (part?.type === "tool-call" && part.toolName === "generate_podcast") {
setActivePodcastTaskId(parsed.output.task_id);
}
}
}
setMessages((prev) =>
prev.map((m) =>
m.id === assistantMsgId ? { ...m, content: buildContentForUI() } : m
)
);
break;
case "data-thinking-step": {
const stepData = parsed.data as ThinkingStepData;
if (stepData?.id) {
currentThinkingSteps.set(stepData.id, stepData);
setMessageThinkingSteps((prev) => {
const newMap = new Map(prev);
newMap.set(assistantMsgId, Array.from(currentThinkingSteps.values()));
return newMap;
});
}
break;
}
case "error":
throw new Error(parsed.errorText || "Server error");
}
} catch (e) {
if (e instanceof SyntaxError) continue;
throw e;
}
}
}
}
} finally {
reader.releaseLock();
}
// Persist messages after streaming completes
const finalContent = buildContentForPersistence();
if (contentParts.length > 0) {
try {
// Persist user message (for both edit and reload modes, since backend deleted it)
const userContentToPersist = newUserQuery
? [{ type: "text", text: newUserQuery }]
: originalUserMessageContent || [{ type: "text", text: userQueryToDisplay || "" }];
const savedUserMessage = await appendMessage(threadId, {
role: "user",
content: userContentToPersist,
});
// Update user message ID to database ID
const newUserMsgId = `msg-${savedUserMessage.id}`;
setMessages((prev) =>
prev.map((m) => (m.id === userMsgId ? { ...m, id: newUserMsgId } : m))
);
// Persist assistant message
const savedMessage = await appendMessage(threadId, {
role: "assistant",
content: finalContent,
});
// Update assistant message ID to database ID
const newMsgId = `msg-${savedMessage.id}`;
setMessages((prev) =>
prev.map((m) => (m.id === assistantMsgId ? { ...m, id: newMsgId } : m))
);
setMessageThinkingSteps((prev) => {
const steps = prev.get(assistantMsgId);
if (steps) {
const newMap = new Map(prev);
newMap.delete(assistantMsgId);
newMap.set(newMsgId, steps);
return newMap;
}
return prev;
});
// Track successful response
trackChatResponseReceived(searchSpaceId, threadId);
} catch (err) {
console.error("Failed to persist regenerated message:", err);
}
}
} catch (error) {
if (error instanceof Error && error.name === "AbortError") {
return;
}
console.error("[NewChatPage] Regeneration error:", error);
trackChatError(
searchSpaceId,
threadId,
error instanceof Error ? error.message : "Unknown error"
);
toast.error("Failed to regenerate response. Please try again.");
// Update assistant message with error
setMessages((prev) =>
prev.map((m) =>
m.id === assistantMsgId
? {
...m,
content: [{ type: "text", text: "Sorry, there was an error. Please try again." }],
}
: m
)
);
} finally {
setIsRunning(false);
abortControllerRef.current = null;
}
},
[threadId, searchSpaceId, messages, setMessageThinkingSteps]
);
// Handle editing a message - truncates history and regenerates with new query
const onEdit = useCallback(
async (message: AppendMessage) => {
// Find the message being edited by looking at the parentId
// The parentId tells us which message's response we're editing
// For now, we'll just treat edits like new messages
// A more sophisticated implementation would truncate the history
await onNew(message);
// Extract the new user query from the message content
let newUserQuery = "";
for (const part of message.content) {
if (part.type === "text") {
newUserQuery += part.text;
}
}
if (!newUserQuery.trim()) {
toast.error("Cannot edit with empty message");
return;
}
// Call regenerate with the new query
await handleRegenerate(newUserQuery.trim());
},
[onNew]
[handleRegenerate]
);
// Handle reloading/refreshing the last AI response
const onReload = useCallback(
async (parentId: string | null) => {
// parentId is the ID of the message to reload from (the user message)
// We call regenerate without a query to use the same query
await handleRegenerate(null);
},
[handleRegenerate]
);
// Create external store runtime with attachment support
@@ -1063,6 +1464,7 @@
isRunning,
onNew,
onEdit,
onReload,
convertMessage,
onCancel: cancelRun,
adapters: {

View file

@@ -179,10 +179,7 @@ export const ObsidianConfig: FC<ObsidianConfigProps> = ({
Index attachment folders and embedded files
</p>
</div>
<Switch
checked={includeAttachments}
onCheckedChange={handleIncludeAttachmentsChange}
/>
<Switch checked={includeAttachments} onCheckedChange={handleIncludeAttachmentsChange} />
</div>
</div>
</div>

View file

@@ -4,8 +4,8 @@ import type { FC } from "react";
import { EnumConnectorName } from "@/contracts/enums/connector";
import type { SearchSourceConnector } from "@/contracts/types/connector.types";
import { isSelfHosted } from "@/lib/env-config";
import { ConnectorCard } from "../components/connector-card";
import { ComposioConnectorCard } from "../components/composio-connector-card";
import { ConnectorCard } from "../components/connector-card";
import {
COMPOSIO_CONNECTORS,
CRAWLERS,

View file

@@ -160,6 +160,30 @@ export async function getThreadFull(threadId: number): Promise<ThreadRecord> {
return baseApiService.get<ThreadRecord>(`/api/v1/threads/${threadId}/full`);
}
/**
* Regeneration request parameters
*/
export interface RegenerateParams {
searchSpaceId: number;
userQuery?: string | null; // New user query (for edit). Null/undefined = reload with same query
attachments?: Array<{
id: string;
name: string;
type: string;
content: string;
}>;
mentionedDocumentIds?: number[];
mentionedSurfsenseDocIds?: number[];
}
/**
* Get the URL for the regenerate endpoint (for streaming fetch)
*/
export function getRegenerateUrl(threadId: number): string {
const backendUrl = process.env.NEXT_PUBLIC_FASTAPI_BACKEND_URL || "http://localhost:8000";
return `${backendUrl}/api/v1/threads/${threadId}/regenerate`;
}
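// Illustrative usage (mirrors handleRegenerate in the chat page): POST to this
// URL with a JSON body shaped like RegenerateParams, then read the response as
// an SSE stream via fetch and response.body.getReader().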
// =============================================================================
// Thread List Manager (for thread list sidebar)
// =============================================================================

View file

@@ -35,7 +35,7 @@
"@electric-sql/react": "^1.0.26",
"@hookform/resolvers": "^5.2.2",
"@number-flow/react": "^0.5.10",
"@posthog/react": "^1.5.2",
"@posthog/react": "^1.7.0",
"@radix-ui/react-accordion": "^1.2.11",
"@radix-ui/react-alert-dialog": "^1.1.14",
"@radix-ui/react-avatar": "^1.1.10",
@@ -86,8 +86,8 @@
"next-themes": "^0.4.6",
"pg": "^8.16.3",
"postgres": "^3.4.7",
"posthog-js": "^1.310.1",
"posthog-node": "^5.18.0",
"posthog-js": "^1.334.1",
"posthog-node": "^5.24.1",
"react": "^19.2.3",
"react-day-picker": "^9.8.1",
"react-dom": "^19.2.3",

View file

@@ -51,8 +51,8 @@ importers:
specifier: ^0.5.10
version: 0.5.10(react-dom@19.2.3(react@19.2.3))(react@19.2.3)
'@posthog/react':
specifier: ^1.5.2
version: 1.5.2(@types/react@19.2.7)(posthog-js@1.310.1)(react@19.2.3)
specifier: ^1.7.0
version: 1.7.0(@types/react@19.2.7)(posthog-js@1.334.1)(react@19.2.3)
'@radix-ui/react-accordion':
specifier: ^1.2.11
version: 1.2.12(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.3(react@19.2.3))(react@19.2.3)
@@ -204,11 +204,11 @@ importers:
specifier: ^3.4.7
version: 3.4.7
posthog-js:
specifier: ^1.310.1
version: 1.310.1
specifier: ^1.334.1
version: 1.334.1
posthog-node:
specifier: ^5.18.0
version: 5.18.0
specifier: ^5.24.1
version: 5.24.1
react:
specifier: ^19.2.3
version: 19.2.3
@@ -1447,10 +1447,78 @@ packages:
react: ^18 || ^19
react-dom: ^18 || ^19
'@opentelemetry/api-logs@0.208.0':
resolution: {integrity: sha512-CjruKY9V6NMssL/T1kAFgzosF1v9o6oeN+aX5JB/C/xPNtmgIJqcXHG7fA82Ou1zCpWGl4lROQUKwUNE1pMCyg==}
engines: {node: '>=8.0.0'}
'@opentelemetry/api@1.9.0':
resolution: {integrity: sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg==}
engines: {node: '>=8.0.0'}
'@opentelemetry/core@2.2.0':
resolution: {integrity: sha512-FuabnnUm8LflnieVxs6eP7Z383hgQU4W1e3KJS6aOG3RxWxcHyBxH8fDMHNgu/gFx/M2jvTOW/4/PHhLz6bjWw==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': '>=1.0.0 <1.10.0'
'@opentelemetry/core@2.5.0':
resolution: {integrity: sha512-ka4H8OM6+DlUhSAZpONu0cPBtPPTQKxbxVzC4CzVx5+K4JnroJVBtDzLAMx4/3CDTJXRvVFhpFjtl4SaiTNoyQ==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': '>=1.0.0 <1.10.0'
'@opentelemetry/exporter-logs-otlp-http@0.208.0':
resolution: {integrity: sha512-jOv40Bs9jy9bZVLo/i8FwUiuCvbjWDI+ZW13wimJm4LjnlwJxGgB+N/VWOZUTpM+ah/awXeQqKdNlpLf2EjvYg==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': ^1.3.0
'@opentelemetry/otlp-exporter-base@0.208.0':
resolution: {integrity: sha512-gMd39gIfVb2OgxldxUtOwGJYSH8P1kVFFlJLuut32L6KgUC4gl1dMhn+YC2mGn0bDOiQYSk/uHOdSjuKp58vvA==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': ^1.3.0
'@opentelemetry/otlp-transformer@0.208.0':
resolution: {integrity: sha512-DCFPY8C6lAQHUNkzcNT9R+qYExvsk6C5Bto2pbNxgicpcSWbe2WHShLxkOxIdNcBiYPdVHv/e7vH7K6TI+C+fQ==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': ^1.3.0
'@opentelemetry/resources@2.2.0':
resolution: {integrity: sha512-1pNQf/JazQTMA0BiO5NINUzH0cbLbbl7mntLa4aJNmCCXSj0q03T5ZXXL0zw4G55TjdL9Tz32cznGClf+8zr5A==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': '>=1.3.0 <1.10.0'
'@opentelemetry/resources@2.5.0':
resolution: {integrity: sha512-F8W52ApePshpoSrfsSk1H2yJn9aKjCrbpQF1M9Qii0GHzbfVeFUB+rc3X4aggyZD8x9Gu3Slua+s6krmq6Dt8g==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': '>=1.3.0 <1.10.0'
'@opentelemetry/sdk-logs@0.208.0':
resolution: {integrity: sha512-QlAyL1jRpOeaqx7/leG1vJMp84g0xKP6gJmfELBpnI4O/9xPX+Hu5m1POk9Kl+veNkyth5t19hRlN6tNY1sjbA==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': '>=1.4.0 <1.10.0'
'@opentelemetry/sdk-metrics@2.2.0':
resolution: {integrity: sha512-G5KYP6+VJMZzpGipQw7Giif48h6SGQ2PFKEYCybeXJsOCB4fp8azqMAAzE5lnnHK3ZVwYQrgmFbsUJO/zOnwGw==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': '>=1.9.0 <1.10.0'
'@opentelemetry/sdk-trace-base@2.2.0':
resolution: {integrity: sha512-xWQgL0Bmctsalg6PaXExmzdedSp3gyKV8mQBwK/j9VGdCDu2fmXIb2gAehBKbkXCpJ4HPkgv3QfoJWRT4dHWbw==}
engines: {node: ^18.19.0 || >=20.6.0}
peerDependencies:
'@opentelemetry/api': '>=1.3.0 <1.10.0'
'@opentelemetry/semantic-conventions@1.39.0':
resolution: {integrity: sha512-R5R9tb2AXs2IRLNKLBJDynhkfmx7mX0vi8NkhZb3gUkPWHn6HXk5J8iQ/dql0U3ApfWym4kXXmBDRGO+oeOfjg==}
engines: {node: '>=14'}
'@orama/orama@3.1.18':
resolution: {integrity: sha512-a61ljmRVVyG5MC/698C8/FfFDw5a8LOIvyOLW5fztgUXqUpc1jOfQzOitSCbge657OgXXThmY3Tk8fpiDb4UcA==}
engines: {node: '>= 20.0.0'}
@@ -1537,11 +1605,11 @@ packages:
resolution: {integrity: sha512-dfUnCxiN9H4ap84DvD2ubjw+3vUNpstxa0TneY/Paat8a3R4uQZDLSvWjmznAY/DoahqTHl9V46HF/Zs3F29pg==}
engines: {node: '>= 10.0.0'}
'@posthog/core@1.9.0':
resolution: {integrity: sha512-j7KSWxJTUtNyKynLt/p0hfip/3I46dWU2dk+pt7dKRoz2l5CYueHuHK4EO7Wlgno5yo1HO4sc4s30MXMTICHJw==}
'@posthog/core@1.13.0':
resolution: {integrity: sha512-knjncrk7qRmssFRbGzBl1Tunt21GRpe0Wv+uVelyL0Rh7PdQUsgguulzXFTps8hA6wPwTU4kq85qnbAJ3eH6Wg==}
'@posthog/react@1.5.2':
resolution: {integrity: sha512-KHdXbV1yba7Y2l8BVmwXlySWxqKVLNQ5ZiVvWOf7r3Eo7GIFxCM4CaNK/z83kKWn8KTskmKy7AGF6Hl6INWK3g==}
'@posthog/react@1.7.0':
resolution: {integrity: sha512-pM7GL7z/rKjiIwosbRiQA3buhLI6vUo+wg+T/ZrVZC7O5bVU07TfgNZTcuOj8E9dx7vDbfNrc1kjDN7PKMM8ug==}
peerDependencies:
'@types/react': '>=16.8.0'
posthog-js: '>=1.257.2'
@@ -1550,6 +1618,9 @@
'@types/react':
optional: true
'@posthog/types@1.334.1':
resolution: {integrity: sha512-ypFnwTO7qbV7icylLbujbamPdQXbJq0a61GUUBnJAeTbBw/qYPIss5IRYICcbCj0uunQrwD7/CGxVb5TOYKWgA==}
'@prisma/client@4.8.1':
resolution: {integrity: sha512-d4xhZhETmeXK/yZ7K0KcVOzEfI5YKGGEr4F5SBV04/MU4ncN/HcE28sy3e4Yt8UFW0ZuImKFQJE+9rWt9WbGSQ==}
engines: {node: '>=14.17'}
@@ -1562,6 +1633,36 @@
'@prisma/engines-version@4.8.0-61.d6e67a83f971b175a593ccc12e15c4a757f93ffe':
resolution: {integrity: sha512-MHSOSexomRMom8QN4t7bu87wPPD+pa+hW9+71JnVcF3DqyyO/ycCLhRL1we3EojRpZxKvuyGho2REQsMCvxcJw==}
'@protobufjs/aspromise@1.1.2':
resolution: {integrity: sha512-j+gKExEuLmKwvz3OgROXtrJ2UG2x8Ch2YZUxahh+s1F2HZ+wAceUNLkvy6zKCPVRkU++ZWQrdxsUeQXmcg4uoQ==}
'@protobufjs/base64@1.1.2':
resolution: {integrity: sha512-AZkcAA5vnN/v4PDqKyMR5lx7hZttPDgClv83E//FMNhR2TMcLUhfRUBHCmSl0oi9zMgDDqRUJkSxO3wm85+XLg==}
'@protobufjs/codegen@2.0.4':
resolution: {integrity: sha512-YyFaikqM5sH0ziFZCN3xDC7zeGaB/d0IUb9CATugHWbd1FRFwWwt4ld4OYMPWu5a3Xe01mGAULCdqhMlPl29Jg==}
'@protobufjs/eventemitter@1.1.0':
resolution: {integrity: sha512-j9ednRT81vYJ9OfVuXG6ERSTdEL1xVsNgqpkxMsbIabzSo3goCjDIveeGv5d03om39ML71RdmrGNjG5SReBP/Q==}
'@protobufjs/fetch@1.1.0':
resolution: {integrity: sha512-lljVXpqXebpsijW71PZaCYeIcE5on1w5DlQy5WH6GLbFryLUrBD4932W/E2BSpfRJWseIL4v/KPgBFxDOIdKpQ==}
'@protobufjs/float@1.0.2':
resolution: {integrity: sha512-Ddb+kVXlXst9d+R9PfTIxh1EdNkgoRe5tOX6t01f1lYWOvJnSPDBlG241QLzcyPdoNTsblLUdujGSE4RzrTZGQ==}
'@protobufjs/inquire@1.1.0':
resolution: {integrity: sha512-kdSefcPdruJiFMVSbn801t4vFK7KB/5gd2fYvrxhuJYg8ILrmn9SKSX2tZdV6V+ksulWqS7aXjBcRXl3wHoD9Q==}
'@protobufjs/path@1.1.2':
resolution: {integrity: sha512-6JOcJ5Tm08dOHAbdR3GrvP+yUUfkjG5ePsHYczMFLq3ZmMkAD98cDgcT2iA1lJ9NVwFd4tH/iSSoe44YWkltEA==}
'@protobufjs/pool@1.1.0':
resolution: {integrity: sha512-0kELaGSIDBKvcgS4zkjz1PeddatrjYcmMWOlAuAPwAeccUrPHdUqo/J6LiymHHEiJT5NrF1UVwxY14f+fy4WQw==}
'@protobufjs/utf8@1.1.0':
resolution: {integrity: sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw==}
'@radix-ui/number@1.1.1':
resolution: {integrity: sha512-MkKCwxlXTgz6CFoJx3pCwn07GKp36+aZyu/u2Ln2VrA5DcdyCZkASEDBTd8x5whTQQL5CiYf4prXKLcgQdv29g==}
@@ -4923,6 +5024,9 @@ packages:
lodash.merge@4.6.2:
resolution: {integrity: sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==}
long@5.3.2:
resolution: {integrity: sha512-mNAgZ1GmyNhD7AuqnTG3/VQ26o760+ZYBPKjPvugO8+nLbYfX6TVpJPseBvopbdY+qpZ/lKUnmEc1LeZYS3QAA==}
longest-streak@3.1.0:
resolution: {integrity: sha512-9Ri+o0JYgehTaVBBDoMqIl8GXtbWg711O3srftcHhZ0dqnETqLaoIK0x17fUw9rFSlK/0NlsKe0Ahhyl5pXE2g==}
@@ -5497,12 +5601,12 @@ packages:
resolution: {integrity: sha512-Jtc2612XINuBjIl/QTWsV5UvE8UHuNblcO3vVADSrKsrc6RqGX6lOW1cEo3CM2v0XG4Nat8nI+YM7/f26VxXLw==}
engines: {node: '>=12'}
posthog-js@1.310.1:
resolution: {integrity: sha512-UkR6zzlWNtqHDXHJl2Yk062DOmZyVKTPL5mX4j4V+u3RiYbMHJe47+PpMMUsvK1R2e1r/m9uSlHaJMJRzyUjGg==}
posthog-js@1.334.1:
resolution: {integrity: sha512-5cDzLICr2afnwX/cR9fwoLC0vN0Nb5gP5HiCigzHkgHdO+E3WsYefla3EFMQz7U4r01CBPZ+nZ9/srkzeACxtQ==}
posthog-node@5.18.0:
resolution: {integrity: sha512-SLBEs+sCThxzTGSSDEe97nZHuFFYh6DupObR1yQdvQND3CJh0ogZ0Sa1Vb+Tbrnf0cWbfBC9XNkm44yhaWf3aA==}
engines: {node: '>=20'}
posthog-node@5.24.1:
resolution: {integrity: sha512-1+wsosb5fjuor9zpp3h2uq0xKYY7rDz8gpw/10Scz8Ob/uVNrsHSwGy76D9rgt4cfyaEgpJwyYv+hPi2+YjWtw==}
engines: {node: ^20.20.0 || >=22.22.0}
preact@10.28.1:
resolution: {integrity: sha512-u1/ixq/lVQI0CakKNvLDEcW5zfCjUQfZdK9qqWuIJtsezuyG6pk9TWj75GMuI/EzRSZB/VAE43sNWWZfiy8psw==}
@@ -5626,6 +5730,10 @@ packages:
prosemirror-view@1.41.4:
resolution: {integrity: sha512-WkKgnyjNncri03Gjaz3IFWvCAE94XoiEgvtr0/r2Xw7R8/IjK3sKLSiDoCHWcsXSAinVaKlGRZDvMCsF1kbzjA==}
protobufjs@7.5.4:
resolution: {integrity: sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg==}
engines: {node: '>=12.0.0'}
pump@3.0.3:
resolution: {integrity: sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA==}
@@ -5637,6 +5745,9 @@
resolution: {integrity: sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==}
engines: {node: '>=6'}
query-selector-shadow-dom@1.0.1:
resolution: {integrity: sha512-lT5yCqEBgfoMYpf3F2xQRK7zEr1rhIIZuceDK6+xRkJQ4NMbHTwXqk4NkwDwQMNqXgG9r9fyHnzwNVs6zV5KRw==}
queue-microtask@1.2.3:
resolution: {integrity: sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==}
@@ -6472,8 +6583,8 @@ packages:
web-namespaces@2.0.1:
resolution: {integrity: sha512-bKr1DkiNa2krS7qxNtdrtHAmzuYGFQLiQ13TsorsdT6ULTkPLKuu5+GsFpDlg6JFjUTwX2DyhMPG2be8uPrqsQ==}
web-vitals@4.2.4:
resolution: {integrity: sha512-r4DIlprAGwJ7YM11VZp4R884m0Vmgr6EAKe3P+kO0PPj3Unqyvv59rczf6UiGcb9Z8QxZVcqKNwv/g0WNdWwsw==}
web-vitals@5.1.0:
resolution: {integrity: sha512-ArI3kx5jI0atlTtmV0fWU3fjpLmq/nD3Zr1iFFlJLaqa5wLBkUSzINwBPySCX/8jRyjlmy1Volw1kz1g9XE4Jg==}
webidl-conversions@7.0.0:
resolution: {integrity: sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g==}
@@ -7611,8 +7722,82 @@ snapshots:
react: 19.2.3
react-dom: 19.2.3(react@19.2.3)
'@opentelemetry/api-logs@0.208.0':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/api@1.9.0': {}
'@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/semantic-conventions': 1.39.0
'@opentelemetry/core@2.5.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/semantic-conventions': 1.39.0
'@opentelemetry/exporter-logs-otlp-http@0.208.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/api-logs': 0.208.0
'@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/otlp-exporter-base': 0.208.0(@opentelemetry/api@1.9.0)
'@opentelemetry/otlp-transformer': 0.208.0(@opentelemetry/api@1.9.0)
'@opentelemetry/sdk-logs': 0.208.0(@opentelemetry/api@1.9.0)
'@opentelemetry/otlp-exporter-base@0.208.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/otlp-transformer': 0.208.0(@opentelemetry/api@1.9.0)
'@opentelemetry/otlp-transformer@0.208.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/api-logs': 0.208.0
'@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/sdk-logs': 0.208.0(@opentelemetry/api@1.9.0)
'@opentelemetry/sdk-metrics': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/sdk-trace-base': 2.2.0(@opentelemetry/api@1.9.0)
protobufjs: 7.5.4
'@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/semantic-conventions': 1.39.0
'@opentelemetry/resources@2.5.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/core': 2.5.0(@opentelemetry/api@1.9.0)
'@opentelemetry/semantic-conventions': 1.39.0
'@opentelemetry/sdk-logs@0.208.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/api-logs': 0.208.0
'@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/sdk-metrics@2.2.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0)':
dependencies:
'@opentelemetry/api': 1.9.0
'@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0)
'@opentelemetry/semantic-conventions': 1.39.0
'@opentelemetry/semantic-conventions@1.39.0': {}
'@orama/orama@3.1.18': {}
'@parcel/watcher-android-arm64@2.5.1':
@@ -7675,17 +7860,19 @@ snapshots:
'@parcel/watcher-win32-ia32': 2.5.1
'@parcel/watcher-win32-x64': 2.5.1
'@posthog/core@1.9.0':
'@posthog/core@1.13.0':
dependencies:
cross-spawn: 7.0.6
'@posthog/react@1.5.2(@types/react@19.2.7)(posthog-js@1.310.1)(react@19.2.3)':
'@posthog/react@1.7.0(@types/react@19.2.7)(posthog-js@1.334.1)(react@19.2.3)':
dependencies:
posthog-js: 1.310.1
posthog-js: 1.334.1
react: 19.2.3
optionalDependencies:
'@types/react': 19.2.7
'@posthog/types@1.334.1': {}
'@prisma/client@4.8.1':
dependencies:
'@prisma/engines-version': 4.8.0-61.d6e67a83f971b175a593ccc12e15c4a757f93ffe
@@ -7694,6 +7881,29 @@ snapshots:
'@prisma/engines-version@4.8.0-61.d6e67a83f971b175a593ccc12e15c4a757f93ffe':
optional: true
'@protobufjs/aspromise@1.1.2': {}
'@protobufjs/base64@1.1.2': {}
'@protobufjs/codegen@2.0.4': {}
'@protobufjs/eventemitter@1.1.0': {}
'@protobufjs/fetch@1.1.0':
dependencies:
'@protobufjs/aspromise': 1.1.2
'@protobufjs/inquire': 1.1.0
'@protobufjs/float@1.0.2': {}
'@protobufjs/inquire@1.1.0': {}
'@protobufjs/path@1.1.2': {}
'@protobufjs/pool@1.1.0': {}
'@protobufjs/utf8@1.1.0': {}
'@radix-ui/number@1.1.1': {}
'@radix-ui/primitive@1.0.0':
@@ -11383,6 +11593,8 @@ snapshots:
lodash.merge@4.6.2: {}
long@5.3.2: {}
longest-streak@3.1.0: {}
loose-envify@1.4.0:
@@ -12272,17 +12484,25 @@ snapshots:
postgres@3.4.7: {}
posthog-js@1.310.1:
posthog-js@1.334.1:
dependencies:
'@posthog/core': 1.9.0
'@opentelemetry/api': 1.9.0
'@opentelemetry/api-logs': 0.208.0
'@opentelemetry/exporter-logs-otlp-http': 0.208.0(@opentelemetry/api@1.9.0)
'@opentelemetry/resources': 2.5.0(@opentelemetry/api@1.9.0)
'@opentelemetry/sdk-logs': 0.208.0(@opentelemetry/api@1.9.0)
'@posthog/core': 1.13.0
'@posthog/types': 1.334.1
core-js: 3.47.0
dompurify: 3.3.1
fflate: 0.4.8
preact: 10.28.1
web-vitals: 4.2.4
query-selector-shadow-dom: 1.0.1
web-vitals: 5.1.0
posthog-node@5.18.0:
posthog-node@5.24.1:
dependencies:
'@posthog/core': 1.9.0
'@posthog/core': 1.13.0
preact@10.28.1: {}
@@ -12433,6 +12653,21 @@ snapshots:
prosemirror-state: 1.4.4
prosemirror-transform: 1.10.5
protobufjs@7.5.4:
dependencies:
'@protobufjs/aspromise': 1.1.2
'@protobufjs/base64': 1.1.2
'@protobufjs/codegen': 2.0.4
'@protobufjs/eventemitter': 1.1.0
'@protobufjs/fetch': 1.1.0
'@protobufjs/float': 1.0.2
'@protobufjs/inquire': 1.1.0
'@protobufjs/path': 1.1.2
'@protobufjs/pool': 1.1.0
'@protobufjs/utf8': 1.1.0
'@types/node': 20.19.27
long: 5.3.2
pump@3.0.3:
dependencies:
end-of-stream: 1.4.5
@@ -12443,6 +12678,8 @@ snapshots:
punycode@2.3.1: {}
query-selector-shadow-dom@1.0.1: {}
queue-microtask@1.2.3: {}
rc@1.2.8:
@@ -13524,7 +13761,7 @@ snapshots:
web-namespaces@2.0.1: {}
web-vitals@4.2.4: {}
web-vitals@5.1.0: {}
webidl-conversions@7.0.0: {}