- Published the v2.0.0 docs and a 1.4 → 2.0 migration guide so existing users have a clear upgrade path.
- Updated the four integration pages (AI SDK, OpenAI, Mastra, VoltAgent) to reflect v2 defaults and link to the migration guide.
- Added a short explainer on the two required fields (`containerTag`, `customId`) so new users aren't blocked at first integration.
**`withSupermemory`** **(AI SDK)**
- **`skipMemoryOnError`** defaults to **`true`**: memory errors and timeouts are logged, and the model runs on the **original** prompt unless you set `skipMemoryOnError: false`.
- **Pre-LLM** **`/v4/profile`** **is aborted after 5s** via `AbortSignal`.
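For illustration, the fail-open flow (profile fetch capped at 5 seconds, falling back to the original prompt on error) follows this pattern, sketched here in Python rather than the actual TypeScript wrapper:

```python
import asyncio

async def fetch_profile() -> str:
    # Stand-in for the pre-LLM /v4/profile request
    await asyncio.sleep(0)
    return "profile memories"

async def build_prompt(original: str, skip_on_error: bool = True) -> str:
    """Enrich the prompt if possible; fail open to the original on error/timeout."""
    try:
        profile = await asyncio.wait_for(fetch_profile(), timeout=5.0)
        return f"{original}\n\n{profile}"
    except Exception:
        if skip_on_error:  # mirrors skipMemoryOnError: true
            return original
        raise
```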
**Docs**
- `packages/tools/README.md`, **`apps/docs/integrations/ai-sdk.md`**
### TL;DR
Added Python SDK for integrating Supermemory with Cartesia Line voice agents, enabling persistent memory capabilities.
### What changed?
Created a new Python SDK package (`supermemory_cartesia`) that provides:
- `SupermemoryCartesiaAgent` wrapper class that enhances Cartesia Line agents with memory capabilities
- Memory retrieval and storage functionality that integrates with the Supermemory API
- Utility functions for memory formatting, deduplication, and time formatting
- Custom exception classes for error handling
- Comprehensive documentation and type hints
The implementation includes:
- Memory enrichment for user queries
- Automatic storage of conversation history
- Configurable memory retrieval modes (profile, query, full)
- Background processing to avoid blocking the main conversation flow
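As a rough illustration, memory enrichment of a user query can look like this (the function name and formatting are illustrative, not the SDK's actual implementation):

```python
def enrich_system_prompt(base_prompt: str, memories: list[str]) -> str:
    """Append retrieved memories to the system prompt under a clear header."""
    if not memories:
        return base_prompt
    lines = "\n".join(f"- {m}" for m in memories)
    return f"{base_prompt}\n\nRelevant memories about this user:\n{lines}"
```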
### How to test?
```python
from supermemory_cartesia import SupermemoryCartesiaAgent
from line.llm_agent import LlmAgent, LlmConfig
import os

# Create base LLM agent
base_agent = LlmAgent(
    model="gemini/gemini-2.5-flash-preview-09-2025",
    config=LlmConfig(
        system_prompt="You are a helpful assistant.",
        introduction="Hello!"
    )
)

# Wrap with Supermemory
memory_agent = SupermemoryCartesiaAgent(
    agent=base_agent,
    api_key=os.getenv("SUPERMEMORY_API_KEY"),
    user_id="user-123",
)

# Use memory_agent in your Cartesia Line application
```
### Why make this change?
This SDK enables Cartesia Line voice agents to maintain persistent memory across conversations, enhancing user experience by:
1. Providing contextual awareness of past interactions
2. Remembering user preferences and important information
3. Reducing repetition in conversations
4. Creating more personalized and natural voice interactions
The integration is designed to be lightweight and non-blocking, ensuring that memory operations don't impact the responsiveness of voice interactions.
Single-page changelog covering Feb 2024 through Mar 2026 with filterable
tags (API, SDK, Console, MCP, CLI, Integrations). Replaces the split
overview/developer-platform pages. Adds redirect for old URL.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Updates docs to match the new behavior where metadata-only PATCH updates do not trigger reindexing:
- **update-delete-memories/overview.mdx** — Distinguishes content changes (reindex) vs metadata-only (no reindex), adds a note about `accepted`-style updates
- **document-operations.mdx** — Clarifies that only content changes trigger reprocessing
- **add-memories.mdx** and **add-memories/overview.mdx** — Add notes on metadata-only behavior
- **memory-api/ingesting.mdx** — Splits update behavior into content vs metadata-only
- **memory-api/creation/adding-memories.mdx** — Adds note for the “Adding Additional Metadata to Files” flow
## Summary
This PR introduces comprehensive Supermemory integration for the Microsoft Agent Framework, providing three complementary approaches to add persistent memory capabilities to agents: middleware for automatic memory injection, context providers for session-based memory management, and tools for explicit memory operations.
## Key Changes
- **SupermemoryChatMiddleware**: Automatic memory injection middleware that fetches relevant memories from Supermemory before LLM calls and optionally saves conversations. Supports three modes:
- `"profile"`: Injects all static and dynamic profile memories
- `"query"`: Searches for memories relevant to the current user message
- `"full"`: Combines both profile and query modes
- **SupermemoryContextProvider**: Idiomatic context provider following the Agent Framework pattern (similar to built-in Mem0 integration). Integrates with the session pipeline via `before_run()` and `after_run()` hooks for automatic memory retrieval and storage.
- **SupermemoryTools**: FunctionTool-compatible tools that agents can use for explicit memory operations:
- `search_memories()`: Search for specific memories
- `add_memory()`: Add new memories
- `get_profile()`: Retrieve user profile
- **Utility Functions**: Helper functions for:
- Memory deduplication across static, dynamic, and search result sources
- Profile-to-markdown conversion for LLM consumption
- Message extraction and conversation formatting
- Logging with configurable verbosity
- **Exception Hierarchy**: Custom exceptions for better error handling:
- `SupermemoryConfigurationError`: Missing/invalid configuration
- `SupermemoryAPIError`: API request failures
- `SupermemoryNetworkError`: Network connectivity issues
- `SupermemoryMemoryOperationError`: Memory operation failures
- **Comprehensive Documentation**: README with quick start examples, configuration options, and API reference for all three integration approaches.
- **Test Suite**: Unit tests covering middleware, context provider, tools, and utility functions with proper mocking and error scenarios.
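The deduplication across profile and search-result sources used by the `"full"` mode can be sketched as follows (the `id` field and list-of-dict shape are assumptions for illustration, not the SDK's actual schema):

```python
def merge_memories(profile: list[dict], search: list[dict]) -> list[dict]:
    """Merge profile and search memories, dropping duplicates by id (first wins)."""
    seen: set[str] = set()
    merged: list[dict] = []
    for mem in [*profile, *search]:
        if mem["id"] not in seen:
            seen.add(mem["id"])
            merged.append(mem)
    return merged
```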
## Implementation Details
- Supports both async (aiohttp) and sync (requests) HTTP clients with automatic fallback
- Handles multiple message formats (dict, objects with attributes, content arrays)
- Configurable memory storage with optional conversation grouping via `conversation_id`
- Environment variable fallback for API key configuration (`SUPERMEMORY_API_KEY`)
- Background task management for non-blocking memory operations in middleware
- Proper async/sync compatibility for the Supermemory SDK
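The background-task pattern for non-blocking memory saves is roughly this (a sketch with a stubbed save call; the middleware's actual task management may differ):

```python
import asyncio

async def save_conversation(messages: list[str]) -> None:
    # Stand-in for the real Supermemory API call
    await asyncio.sleep(0)

def schedule_save(messages: list[str], tasks: set) -> None:
    """Fire-and-forget save; keep a task reference so it isn't GC'd mid-flight."""
    task = asyncio.create_task(save_conversation(messages))
    tasks.add(task)
    task.add_done_callback(tasks.discard)
```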
Documents the new DELETE /v3/auth/scoped-key/:keyId endpoint
for disabling container-scoped API keys.
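A minimal client-side sketch of calling the new endpoint (the base URL and `key_123` are assumptions for illustration; check the docs for the real host and auth scheme):

```python
import urllib.request

API_BASE = "https://api.supermemory.ai"  # assumed base URL

def scoped_key_url(key_id: str) -> str:
    return f"{API_BASE}/v3/auth/scoped-key/{key_id}"

def disable_scoped_key(key_id: str, api_key: str) -> None:
    """Send the DELETE request; raises urllib.error.HTTPError on failure."""
    req = urllib.request.Request(
        scoped_key_url(key_id),
        method="DELETE",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    urllib.request.urlopen(req)

# Usage (hypothetical key id):
# disable_scoped_key("key_123", api_key="sm_...")
```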
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add entity context documentation to customization and add-memories pages, remove nav icons from Developer Platform, fix install.md parsing error
Changes:
- Remove icons from Developer Platform subheadings (Getting Started, Concepts, Using supermemory, Connectors and sync, Migration Guides)
- Add Entity Context section to customization page with usage example and accordion for advanced API
- Add entityContext parameter to add-memories Parameters table and examples accordion
- Fix MDX parsing error in install.md (wrap curly braces in backticks)
Add documentation for using Supermemory with OpenAI Agents SDK
and CrewAI. Both pages cover user profiles, memory storage,
search, and include practical examples.
adds `withSupermemory` wrapper and input/output processors for
Mastra agents:
- input processor fetches and injects memories into the system prompt before LLM calls
- output processor saves conversations to Supermemory after responses
- supports profile, query, and full memory search modes
- includes custom prompt templates and `RequestContext` support
```ts
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", addMemory: "always", threadId: "conv-456" }
))
```
includes docs as well
this PR also reworks the tools package into shared modules
Clawdbot docs
Changes:
- Added integrations/clawdbot.mdx — ClawdBot Supermemory Plugin documentation page with shrimp icon
- Added "Plugins" group to the bottom of the Integrations sidebar in docs.json
- Page covers: API key setup (zsh/bash/PowerShell), plugin install, how it works (auto-recall/auto-capture), features (AI tools, slash commands, CLI), manual configuration, and advanced options
#### RE-RAISING Pipecat live speech PR
### Added native speech-to-speech model support
### Summary:
- Speech-to-speech support: auto-detects audio frames and injects memories into the system prompt for native audio models (Gemini Live, etc.)
- Memory bloat fix: replaces memories each turn using XML tags instead of accumulating them
- Temporal context: shows recency on search results (`[2d ago]`, `[15 Jan]`)
- New `inject_mode` param: `auto` (default), `system`, or `user`
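The recency tags might be produced by a helper along these lines (a sketch; the integration's actual thresholds and formats may differ):

```python
from datetime import datetime

def recency_tag(created_at: datetime, now: datetime) -> str:
    """Return a short recency label like '[2d ago]' or '[15 Jan]'."""
    age = now - created_at
    if age.days < 1:
        return "[today]"
    if age.days < 7:
        return f"[{age.days}d ago]"
    return created_at.strftime("[%d %b]")  # older results get an absolute date
```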
### Docs update
- Update the docs for native speech-to-speech models
Added documentation for the new `context` prompt in the Supermemory MCP server that enables automatic user profile injection into AI conversations. Updated the MCP overview page with detailed parameter documentation and usage guidance, and added a changelog entry for December 30, 2025.
**Files changed:**
- `apps/docs/supermemory-mcp/mcp.mdx` - Added Prompts section with `context` prompt documentation
- `apps/docs/changelog/developer-platform.mdx` - Added December 30, 2025 changelog entry
Generated from [fix: prompt injection with mcp](https://github.com/supermemoryai/supermemory/pull/638) @MaheshtheDev
Add comprehensive documentation for the S3 connector including:
- Quick setup with TypeScript, Python, and cURL examples
- S3-compatible services support (MinIO, DigitalOcean Spaces, R2)
- Prefix filtering and dynamic container tag extraction
- Connection management and sync behavior
- IAM permissions and security best practices
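The "dynamic container tag extraction" mentioned above might, under one simple convention, derive the tag from the key's leading path segment (the function name and rule are illustrative, not the connector's actual behavior):

```python
def container_tag_from_key(key: str, depth: int = 1) -> str:
    """Derive a container tag from the first `depth` path segments of an S3 key.

    E.g. with depth=1, 'team-a/notes/doc.md' maps to container tag 'team-a'.
    Keys with no '/' fall back to the whole key; empty keys get 'default'.
    """
    parts = key.split("/")
    return "/".join(parts[:depth]) or "default"
```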