- Published the v2.0.0 docs and a 1.4 → 2.0 migration guide so existing users have a clear upgrade path.
- Updated the four integration pages (AI SDK, OpenAI, Mastra, VoltAgent) to reflect v2 defaults and link to the migration guide.
- Added a short explainer on the two required fields (containerTag, customId) so new users aren't blocked at first integration.
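A rough sketch of where those two fields show up when creating a memory; the client class and `memories.add` call are assumptions for illustration, only `containerTag` and `customId` come from the explainer itself:

```typescript
import Supermemory from "supermemory" // assumed client import

const client = new Supermemory({ apiKey: process.env.SUPERMEMORY_API_KEY })

// containerTag scopes the memory (e.g. to a user or workspace);
// customId is your own stable identifier for later updates/lookups.
await client.memories.add({
  content: "Prefers TypeScript examples over Python.",
  containerTag: "user-123",
  customId: "pref-language",
})
```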
**`withSupermemory` (AI SDK)**
- **`skipMemoryOnError`** defaults to **`true`**: memory errors and timeouts are logged and the model runs on the **original** prompt unless you set `skipMemoryOnError: false` (see the sketch after this list).
- The pre-LLM **`/v4/profile`** call **is aborted after 5s** via `AbortSignal`.
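A minimal sketch of opting out of the default, assuming the wrapper signature shown in the examples below and a `@supermemory/tools/ai-sdk` import path:

```typescript
import { openai } from "@ai-sdk/openai"
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk" // assumed import path

// By default (skipMemoryOnError: true) failed or timed-out memory lookups are
// logged and the model still runs on the original prompt. Setting it to false
// surfaces memory failures as errors instead.
const model = withSupermemory(openai("gpt-4o"), "user-123", {
  skipMemoryOnError: false,
})

const { text } = await generateText({
  model,
  prompt: "What did I tell you about my travel plans?",
})
console.log(text)
```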
**Docs**
- `packages/tools/README.md`, **`apps/docs/integrations/ai-sdk.md`**
Adds a `withSupermemory` wrapper and input/output processors for Mastra agents:
- The input processor fetches memories and injects them into the system prompt before LLM calls
- The output processor saves conversations to Supermemory after responses
- Supports profile, query, and full memory search modes
- Includes custom prompt templates and RequestContext support
```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", addMemory: "always", threadId: "conv-456" }
))
```
Includes docs as well.
This PR also reworks the tools package into shared modules.
## Add customizable prompt templates for memory injection
**Changes:**
- Add `promptTemplate` option to `withSupermemory()` for full control over injected memory format (XML, custom branding, etc.)
- New `MemoryPromptData` interface with `userMemories` and `generalSearchMemories` fields
- Exclude `system` messages from persistence to avoid storing injected prompts
- Add JSDoc comments to all public interfaces for better DevEx
**Usage:**
```typescript
const customPrompt = (data: MemoryPromptData) => `
<user_memories>
${data.userMemories}
${data.generalSearchMemories}
</user_memories>
`.trim()
const model = withSupermemory(openai("gpt-4"), "user-123", {
promptTemplate: customPrompt,
})
```
### Added streaming support to the Supermemory middleware and improved memory handling in the AI SDK integration.
### What changed?
- Refactored the middleware architecture to support both streaming and non-streaming responses
- Extracted memory prompt functionality into a separate module (`memory-prompt.ts`)
- Added memory saving capability for streaming responses (see the sketch after this list)
- Improved the formatting of memory content with a "User Supermemories:" prefix
- Added utility function to filter out supermemories from content
- Created a new streaming example in the test app with a dedicated route and page
- Updated version from 1.3.0 to 1.3.1 in package.json
- Simplified installation instructions in `README.md`
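A rough sketch of the streaming path, assuming the same `withSupermemory` wrapper and the AI SDK's `streamText`; the import path is an assumption:

```typescript
import { openai } from "@ai-sdk/openai"
import { streamText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk" // assumed import path

// The wrapped model injects memories before the call; with streaming support,
// the completed response can also be saved back to Supermemory.
const result = streamText({
  model: withSupermemory(openai("gpt-4o"), "user-123"),
  prompt: "Summarize what you know about my project.",
})

for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}
```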
### TL;DR
Added OpenAI SDK middleware support for Supermemory integration, allowing direct memory injection without an AI SDK dependency.
### What changed?
- Added `withSupermemory` middleware for the OpenAI SDK that automatically injects relevant memories into chat completions (see the sketch after this list)
- Implemented memory search and injection functionality for OpenAI clients
- Restructured the OpenAI module to separate tools and middleware functionality
- Updated README with comprehensive documentation and examples for the new OpenAI middleware
- Added test implementation with a Next.js API route example
- Reorganized package exports to support the new structure
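A hedged sketch of the OpenAI SDK middleware in use; the exact wrapper signature (wrapping the client with a container tag) and import path are assumptions, not confirmed by the notes above:

```typescript
import OpenAI from "openai"
import { withSupermemory } from "@supermemory/tools/openai" // assumed import path

// Assumption: the middleware wraps the OpenAI client so that relevant memories
// for the given container tag are injected into chat completion requests.
const client = withSupermemory(new OpenAI(), "user-123")

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What are my dietary preferences?" }],
})

console.log(completion.choices[0].message.content)
```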