- Published the v2.0.0 docs and a 1.4 → 2.0 migration guide so existing users have a clear upgrade path.
- Updated the four integration pages (AI SDK, OpenAI, Mastra, VoltAgent) to reflect v2 defaults and link to the migration guide.
- Added a short explainer on the two required fields (containerTag, customId) so new users aren't blocked at first integration.
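To make the two required fields concrete, here is a minimal sketch of a payload builder that fails fast when either is missing. The function name and payload shape are illustrative assumptions, not the documented Supermemory API; only `containerTag` and `customId` come from the explainer above.

```python
# Hypothetical helper illustrating the two required fields (containerTag,
# customId). Payload shape beyond those two fields is an assumption.

def build_add_memory_payload(content: str, container_tag: str, custom_id: str) -> dict:
    """Build a minimal add-memory payload, failing fast if either
    required field is missing or empty."""
    if not container_tag:
        raise ValueError("containerTag is required")
    if not custom_id:
        raise ValueError("customId is required")
    return {
        "content": content,
        "containerTag": container_tag,  # groups memories per user/tenant
        "customId": custom_id,          # stable caller-side identifier
    }
```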
Single-page changelog covering Feb 2024 through Mar 2026 with filterable
tags (API, SDK, Console, MCP, CLI, Integrations). Replaces the split
overview/developer-platform pages and adds redirects for the old URLs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
This PR introduces comprehensive Supermemory integration for the Microsoft Agent Framework, providing three complementary approaches to add persistent memory capabilities to agents: middleware for automatic memory injection, context providers for session-based memory management, and tools for explicit memory operations.
## Key Changes
- **SupermemoryChatMiddleware**: Automatic memory injection middleware that fetches relevant memories from Supermemory before LLM calls and optionally saves conversations. Supports three modes:
  - `"profile"`: Injects all static and dynamic profile memories
  - `"query"`: Searches for memories relevant to the current user message
  - `"full"`: Combines both profile and query modes
- **SupermemoryContextProvider**: Idiomatic context provider following the Agent Framework pattern (similar to built-in Mem0 integration). Integrates with the session pipeline via `before_run()` and `after_run()` hooks for automatic memory retrieval and storage.
- **SupermemoryTools**: FunctionTool-compatible tools that agents can use for explicit memory operations:
  - `search_memories()`: Search for specific memories
  - `add_memory()`: Add new memories
  - `get_profile()`: Retrieve user profile
- **Utility Functions**: Helper functions for:
  - Memory deduplication across static, dynamic, and search result sources
  - Profile-to-markdown conversion for LLM consumption
  - Message extraction and conversation formatting
  - Logging with configurable verbosity
- **Exception Hierarchy**: Custom exceptions for better error handling:
  - `SupermemoryConfigurationError`: Missing/invalid configuration
  - `SupermemoryAPIError`: API request failures
  - `SupermemoryNetworkError`: Network connectivity issues
  - `SupermemoryMemoryOperationError`: Memory operation failures
- **Comprehensive Documentation**: README with quick start examples, configuration options, and API reference for all three integration approaches.
- **Test Suite**: Unit tests covering middleware, context provider, tools, and utility functions with proper mocking and error scenarios.
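The cross-source deduplication utility listed above could work roughly like this minimal sketch. The function name, the use of a normalized-content key, and the precedence order (static before dynamic before search results) are assumptions for illustration, not the PR's actual implementation.

```python
# Minimal sketch of cross-source memory deduplication. Memories are dicts
# with a "content" field; earlier sources win, so static profile memories
# take precedence over dynamic ones and search results (assumed ordering).

def dedupe_memories(static: list[dict], dynamic: list[dict], search: list[dict]) -> list[dict]:
    seen: set[str] = set()
    result: list[dict] = []
    for source in (static, dynamic, search):
        for memory in source:
            # Normalize content so trivially different duplicates collapse.
            key = memory.get("content", "").strip().lower()
            if key and key not in seen:
                seen.add(key)
                result.append(memory)
    return result
```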
## Implementation Details
- Supports both async (aiohttp) and sync (requests) HTTP clients with automatic fallback
- Handles multiple message formats (dict, objects with attributes, content arrays)
- Configurable memory storage with optional conversation grouping via `conversation_id`
- Environment variable fallback for API key configuration (`SUPERMEMORY_API_KEY`)
- Background task management for non-blocking memory operations in middleware
- Proper async/sync compatibility for the Supermemory SDK
Add entity context documentation to customization and add-memories pages, remove nav icons from Developer Platform, fix install.md parsing error
Changes:
- Remove icons from Developer Platform subheadings (Getting Started, Concepts, Using supermemory, Connectors and sync, Migration Guides)
- Add Entity Context section to customization page with usage example and accordion for advanced API
- Add entityContext parameter to add-memories Parameters table and examples accordion
- Fix MDX parsing error in install.md (wrap curly braces in backticks)
Add documentation for using Supermemory with OpenAI Agents SDK
and CrewAI. Both pages cover user profiles, memory storage,
search, and include practical examples.
adds `withSupermemory` wrapper and input/output processors for mastra agents:
- input processor fetches and injects memories into the system prompt before llm calls
- output processor saves conversations to supermemory after responses
- supports profile, query, and full memory search modes
- includes custom prompt templates and `requestContext` support
```ts
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", addMemory: "always", threadId: "conv-456" }
))
```
includes docs as well. this pr also reworks the tools package into shared modules
ClawdBot docs
Changes:
- Added integrations/clawdbot.mdx — ClawdBot Supermemory Plugin documentation page with shrimp icon
- Added "Plugins" group to the bottom of the Integrations sidebar in docs.json
- Page covers: API key setup (zsh/bash/PowerShell), plugin install, how it works (auto-recall/auto-capture), features (AI tools, slash commands, CLI), manual configuration, and advanced options
Add comprehensive documentation for the S3 connector including:
- Quick setup with TypeScript, Python, and cURL examples
- S3-compatible services support (MinIO, DigitalOcean Spaces, R2)
- Prefix filtering and dynamic container tag extraction
- Connection management and sync behavior
- IAM permissions and security best practices