### TL;DR
Added Python SDK for integrating Supermemory with Cartesia Line voice agents, enabling persistent memory capabilities.
### What changed?
Created a new Python SDK package (`supermemory_cartesia`) that provides:
- `SupermemoryCartesiaAgent` wrapper class that enhances Cartesia Line agents with memory capabilities
- Memory retrieval and storage functionality that integrates with the Supermemory API
- Utility functions for memory formatting, deduplication, and time formatting
- Custom exception classes for error handling
- Comprehensive documentation and type hints
The implementation includes:
- Memory enrichment for user queries
- Automatic storage of conversation history
- Configurable memory retrieval modes (profile, query, full)
- Background processing to avoid blocking the main conversation flow
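The background-processing pattern described above can be sketched as follows. This is a minimal illustration, not the SDK's actual API: the stub function names and the task-set bookkeeping are assumptions.

```python
import asyncio

async def store_memory(text: str, sink: list) -> None:
    """Stand-in for the Supermemory write call (illustrative only)."""
    await asyncio.sleep(0.01)  # simulate network latency
    sink.append(text)

async def handle_turn(user_msg: str, background: set, sink: list) -> str:
    # Schedule storage without awaiting it, so the reply is not blocked.
    task = asyncio.create_task(store_memory(user_msg, sink))
    background.add(task)  # hold a strong reference until the task finishes
    task.add_done_callback(background.discard)
    return f"reply to: {user_msg}"

async def main() -> list:
    background, sink = set(), []
    await handle_turn("remember I like jazz", background, sink)
    await asyncio.gather(*background)  # drain pending writes before shutdown
    return sink

stored = asyncio.run(main())
```

Keeping the task in a set matters: `asyncio` only holds weak references to scheduled tasks, so a fire-and-forget task with no other reference can be garbage-collected mid-flight.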
### How to test?
```python
from supermemory_cartesia import SupermemoryCartesiaAgent
from line.llm_agent import LlmAgent, LlmConfig
import os

# Create base LLM agent
base_agent = LlmAgent(
    model="gemini/gemini-2.5-flash-preview-09-2025",
    config=LlmConfig(
        system_prompt="You are a helpful assistant.",
        introduction="Hello!",
    ),
)

# Wrap with Supermemory
memory_agent = SupermemoryCartesiaAgent(
    agent=base_agent,
    api_key=os.getenv("SUPERMEMORY_API_KEY"),
    user_id="user-123",
)

# Use memory_agent in your Cartesia Line application
```
### Why make this change?
This SDK enables Cartesia Line voice agents to maintain persistent memory across conversations, enhancing user experience by:
1. Providing contextual awareness of past interactions
2. Remembering user preferences and important information
3. Reducing repetition in conversations
4. Creating more personalized and natural voice interactions
The integration is designed to be lightweight and non-blocking, ensuring that memory operations don't impact the responsiveness of voice interactions.
- Switch to infinite query with viewport-triggered pagination (loads more when user zooms out 3x past node bounds)
- Remove maxNodes cap so all data renders
- Remove background color and dot pattern from graph
- Make document-memory edges light grey
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Fixed memory graph stuck on "Loading graph data..." caused by field name mismatch after package rewrite (`memoryEntries` vs `memories`, `type` vs `documentType`)
- Improved MCP app UI to match console-v2 design: golden derives edges (#FBBF24), edge glow pass, node hover effects, dot mode at low zoom, vertical controls with keyboard shortcuts, expandable legend
- Synced local `memory-graph` edgeDerives constant with published npm version
## Test plan
- [ ] Run `cd apps/mcp && bun run dev`, connect via Claude Desktop
- [ ] Verify graph renders with golden derives edges and blue dashed extends edges
- [ ] Verify node hover/selection glow effects
- [ ] Verify Fit (Z), Center (C), zoom (+/-) keyboard shortcuts
- [ ] Verify legend expands/collapses
- Redirect to login when the session is gone instead of showing a blank screen
- Show the cached username while the session restores so the header doesn't flicker
- Clean up redundant type casts and unused variables
Added workspace management to the onboarding flow with automatic organization creation and workspace validation throughout the app.
### What changed?
- Wrapped the app layout with an `EnsureWorkspace` component that redirects users without organizations to onboarding
- Enhanced the welcome page to automatically create an organization with a generated slug when users complete their name input
- Expanded the auth context to track organizations list, restoration state, and provide organization refetching capabilities
- Added utility functions for generating unique organization slugs and usernames from display names
- Implemented proper organization restoration logic that handles saved preferences and multiple organization scenarios
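The slug-generation utility mentioned above might look roughly like this. It is a hedged sketch: the app's exact normalization rules and collision-handling strategy are assumptions.

```python
import re
import secrets

def slugify(display_name: str) -> str:
    """Lowercase and collapse non-alphanumeric runs into hyphens (illustrative)."""
    slug = re.sub(r"[^a-z0-9]+", "-", display_name.lower()).strip("-")
    return slug or "workspace"

def unique_slug(display_name: str, taken: set[str]) -> str:
    """Append a short random suffix until the slug is free (assumed strategy)."""
    slug = slugify(display_name)
    while slug in taken:
        slug = f"{slugify(display_name)}-{secrets.token_hex(2)}"
    return slug
```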
The subscription check on plugins, connections, and usage pages was hardcoded to only look at `api_pro`. Users on scale or enterprise plans saw "Upgrade to Pro" and couldn't create plugin keys or manage connections.
Also, the check only looked at the product `status` but not the `allowed` flag from Autumn, so users who had downgraded but were still inside their billing period were locked out early.
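The corrected check can be sketched as follows. Plan IDs and the product-record shape are illustrative assumptions, not the actual billing schema.

```python
# Plan IDs here are assumed for illustration; only `api_pro` appears in the PR.
PAID_PLANS = {"api_pro", "api_scale", "api_enterprise"}

def has_paid_access(products: list[dict]) -> bool:
    """A user keeps access if any paid product is active *or* still
    marked `allowed` (e.g. downgraded but inside the billing period)."""
    return any(
        p.get("id") in PAID_PLANS
        and (p.get("status") == "active" or p.get("allowed", False))
        for p in products
    )
```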
Adds multi-select and multi-file drag-and-drop on the Add Document Upload files tab, a per-file queue with status, retry for failed rows, and batched uploads to the existing `POST /v3/documents/file` endpoint with a client-side concurrency limit of 3.
Optional title/description apply only when a single file is queued. .md / .mdx (and text/markdown) are accepted.
Upload progress is shown as a bottom strip with a slow width animation.
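The client-side concurrency limit of 3 can be sketched with a semaphore. The upload stub below stands in for the real `POST /v3/documents/file` call; names and return values are illustrative.

```python
import asyncio

async def upload_file(name: str, log: list) -> str:
    """Stand-in for POST /v3/documents/file (illustrative only)."""
    await asyncio.sleep(0.01)  # simulate the network round trip
    log.append(name)
    return "done"

async def upload_all(files: list[str], limit: int = 3) -> dict[str, str]:
    sem = asyncio.Semaphore(limit)  # at most `limit` uploads in flight
    log: list = []

    async def bounded(name: str) -> tuple[str, str]:
        async with sem:
            return name, await upload_file(name, log)

    # gather preserves input order, so per-file statuses stay aligned
    results = await asyncio.gather(*(bounded(f) for f in files))
    return dict(results)

statuses = asyncio.run(upload_all([f"doc{i}.md" for i in range(7)]))
```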
Single-page changelog covering Feb 2024 through Mar 2026 with filterable
tags (API, SDK, Console, MCP, CLI, Integrations). Replaces the split
overview/developer-platform pages. Adds redirect for old URL.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Updates docs to match the new behavior where metadata-only PATCH updates do not trigger reindexing:
- **update-delete-memories/overview.mdx** — Distinguishes content changes (reindex) vs metadata-only (no reindex), adds a note about `accepted`-style updates
- **document-operations.mdx** — Clarifies that only content changes trigger reprocessing
- **add-memories.mdx** and **add-memories/overview.mdx** — Add notes on metadata-only behavior
- **memory-api/ingesting.mdx** — Splits update behavior into content vs metadata-only
- **memory-api/creation/adding-memories.mdx** — Adds note for the “Adding Additional Metadata to Files” flow
## Summary
This PR introduces comprehensive Supermemory integration for the Microsoft Agent Framework, providing three complementary approaches to add persistent memory capabilities to agents: middleware for automatic memory injection, context providers for session-based memory management, and tools for explicit memory operations.
## Key Changes
- **SupermemoryChatMiddleware**: Automatic memory injection middleware that fetches relevant memories from Supermemory before LLM calls and optionally saves conversations. Supports three modes:
- `"profile"`: Injects all static and dynamic profile memories
- `"query"`: Searches for memories relevant to the current user message
- `"full"`: Combines both profile and query modes
- **SupermemoryContextProvider**: Idiomatic context provider following the Agent Framework pattern (similar to built-in Mem0 integration). Integrates with the session pipeline via `before_run()` and `after_run()` hooks for automatic memory retrieval and storage.
- **SupermemoryTools**: FunctionTool-compatible tools that agents can use for explicit memory operations:
- `search_memories()`: Search for specific memories
- `add_memory()`: Add new memories
- `get_profile()`: Retrieve user profile
- **Utility Functions**: Helper functions for:
- Memory deduplication across static, dynamic, and search result sources
- Profile-to-markdown conversion for LLM consumption
- Message extraction and conversation formatting
- Logging with configurable verbosity
- **Exception Hierarchy**: Custom exceptions for better error handling:
- `SupermemoryConfigurationError`: Missing/invalid configuration
- `SupermemoryAPIError`: API request failures
- `SupermemoryNetworkError`: Network connectivity issues
- `SupermemoryMemoryOperationError`: Memory operation failures
- **Comprehensive Documentation**: README with quick start examples, configuration options, and API reference for all three integration approaches.
- **Test Suite**: Unit tests covering middleware, context provider, tools, and utility functions with proper mocking and error scenarios.
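As one example of the utilities listed above, deduplication across memory sources might look like this. The `content` field name and normalization rule are assumptions for illustration.

```python
def dedupe_memories(*sources: list[dict]) -> list[dict]:
    """Merge memories from several sources (e.g. static profile, dynamic
    profile, search results), keeping the first occurrence of each
    normalized content string. Field name `content` is assumed."""
    seen: set[str] = set()
    merged: list[dict] = []
    for source in sources:
        for mem in source:
            key = mem.get("content", "").strip().lower()
            if key and key not in seen:
                seen.add(key)
                merged.append(mem)
    return merged
```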
## Implementation Details
- Supports both async (aiohttp) and sync (requests) HTTP clients with automatic fallback
- Handles multiple message formats (dict, objects with attributes, content arrays)
- Configurable memory storage with optional conversation grouping via `conversation_id`
- Environment variable fallback for API key configuration (`SUPERMEMORY_API_KEY`)
- Background task management for non-blocking memory operations in middleware
- Proper async/sync compatibility for the Supermemory SDK
### TL;DR
Enhanced the `forgetMemory` method to try exact content matching first, then fall back to semantic search with a high similarity threshold for more precise memory deletion.
### What changed?
The `forgetMemory` method now uses a two-step approach: first attempting exact content matching via the API, and if that fails with a 404, falling back to semantic search with a similarity threshold of 0.85. The search method also accepts an optional threshold parameter. Error messages now distinguish between exact matches and semantic matches, including similarity scores in the response.
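The two-step flow can be sketched like so. This is a Python sketch of the control flow only: the in-memory store and injected search function stand in for the real API, and all names besides the 0.85 threshold are illustrative.

```python
def forget_memory(content: str, store: dict[str, str],
                  search, threshold: float = 0.85) -> str:
    """Delete by exact content first; fall back to semantic search.

    `search(query)` stands in for the semantic search call and is assumed
    to return `(memory_id, similarity)` for the best hit, or None.
    """
    # Step 1: exact content match (the real API signals a miss with a 404).
    match_id = next((mid for mid, text in store.items() if text == content), None)
    if match_id is not None:
        del store[match_id]
        return f"deleted {match_id} (exact match)"
    # Step 2: semantic fallback, gated by the high similarity threshold.
    hit = search(content)
    if hit and hit[1] >= threshold:
        mem_id, score = hit
        del store[mem_id]
        return f"deleted {mem_id} (semantic match, similarity={score:.2f})"
    return "no matching memory found"
```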
### How to test?
1. Call `forgetMemory` with the exact content of an existing memory to verify direct deletion
2. Call `forgetMemory` with similar but not identical content to test the semantic search fallback
3. Call `forgetMemory` with completely unrelated content to verify the "no matching memory found" response
4. Verify that success messages indicate whether deletion used exact matching or semantic matching with similarity scores
### Why make this change?
This approach provides more precise memory deletion by prioritizing exact matches while still offering a fallback for similar content. The high similarity threshold (0.85) ensures that only very similar memories are deleted when exact matches aren't found, reducing the risk of accidentally deleting unrelated memories.