---
title: "Vercel AI SDK"
sidebarTitle: "Vercel AI SDK"
description: "Use Supermemory with Vercel AI SDK for seamless memory management"
icon: "triangle"
---

The Supermemory AI SDK provides native integration with Vercel's AI SDK through two approaches: **User Profiles** for automatic personalization and **Memory Tools** for agent-based interactions.

<Note>
Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
</Note>

<Card title="@supermemory/tools on npm" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
Check out the NPM page for more details.
</Card>

## Installation

```bash
npm install @supermemory/tools
```

## Quick Comparison

| Approach | Use Case | Setup |
|----------|----------|-------|
| User Profiles | Personalized LLM responses with automatic user context | Simple middleware |
| Memory Tools | AI agents that need explicit memory control | Tool definitions |

---

## User Profiles with Middleware

Automatically inject user profiles into every LLM call for instant personalization.

```typescript
import { generateText } from "ai"
import { openai } from "@ai-sdk/openai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"

const modelWithMemory = withSupermemory(openai("gpt-5"), {
  containerTag: "user-123",
  customId: "conversation-456",
})

const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "What do you know about me?" }],
})
```

### Required fields

Both `containerTag` and `customId` are required.

- **`containerTag`** — *who* the memories belong to. Use a stable identifier per user, workspace, or tenant (e.g. `"user-123"`, `"acme-workspace"`). Memory search and writes are scoped to this tag.
- **`customId`** — *which conversation* this turn belongs to. Use it to group messages from the same chat session into a single document (e.g. `"chat-2026-04-25"`, a thread ID, or a UUID per session).
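
Since both fields are required on every call, one pattern is to derive them in a single place from your own auth/session identifiers. A minimal sketch — `memoryScope` and its parameters are hypothetical names, not part of the SDK:

```typescript
// Hypothetical helper (not part of @supermemory/tools): build the two
// required fields from your own tenant/user/session identifiers.
function memoryScope(tenantId: string, userId: string, sessionId: string) {
  return {
    containerTag: `${tenantId}:${userId}`, // stable per user within a tenant
    customId: `chat-${sessionId}`,         // stable per conversation
  }
}

// Spread the result into the withSupermemory config at each call site.
```

Keeping the derivation in one function avoids drift between call sites, which matters because memories written under one `containerTag` are invisible under another.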

<Note>
**Memory saving is enabled by default** (`addMemory: "always"`). New conversations are persisted automatically. To opt out, set `addMemory: "never"`:

```typescript
const modelWithMemory = withSupermemory(openai("gpt-5"), {
  containerTag: "user-123",
  customId: "conversation-456",
  addMemory: "never",
})
```
</Note>


### Memory Search Modes

**Profile Mode (Default)** - Retrieves the user's complete profile:

```typescript
const model = withSupermemory(openai("gpt-4"), { containerTag: "user-123", customId: "conv-1", mode: "profile" })
```

**Query Mode** - Searches memories based on the user's message:

```typescript
const model = withSupermemory(openai("gpt-4"), { containerTag: "user-123", customId: "conv-1", mode: "query" })
```

**Full Mode** - Combines profile AND query-based search:

```typescript
const model = withSupermemory(openai("gpt-4"), { containerTag: "user-123", customId: "conv-1", mode: "full" })
```
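
Which mode fits best can vary over a session. One illustrative heuristic — the thresholds below are arbitrary examples, not SDK defaults or recommendations:

```typescript
type MemoryMode = "profile" | "query" | "full"

// Illustrative only: pick a search mode per turn based on how far
// into the conversation we are. Thresholds are made up for the sketch.
function pickMode(messageCount: number): MemoryMode {
  if (messageCount === 0) return "profile" // first turn: load the stored profile
  if (messageCount < 10) return "query"    // mid-session: search by the user's message
  return "full"                            // long sessions: combine both
}
```

The chosen mode can then be passed as the `mode` option when constructing the wrapped model.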

### Custom Prompt Templates

Customize how memories are formatted. The template receives `userMemories`, `generalSearchMemories`, and `searchResults` (a raw array you can filter by metadata):

```typescript
import { withSupermemory, type MemoryPromptData } from "@supermemory/tools/ai-sdk"
import { anthropic } from "@ai-sdk/anthropic"

const claudePrompt = (data: MemoryPromptData) => `
<context>
  <user_profile>
  ${data.userMemories}
  </user_profile>
  <relevant_memories>
  ${data.generalSearchMemories}
  </relevant_memories>
</context>
`.trim()

const model = withSupermemory(anthropic("claude-3-sonnet"), {
  containerTag: "user-123",
  customId: "conv-1",
  mode: "full",
  promptTemplate: claudePrompt,
})
```

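
Because the template also receives the raw `searchResults` array, a template can filter memories by metadata before formatting. A sketch — the element shape (`memory`, `metadata`) is an assumption here; check the package's `MemoryPromptData` type for the actual fields:

```typescript
// Assumed element shape for the raw results array — verify against the
// package's exported types before relying on these field names.
type RawResult = { memory: string; metadata?: Record<string, unknown> }

// Keep only memories tagged with a given metadata category and
// render them as a bullet list for the prompt.
function onlyCategory(results: RawResult[], category: string): string {
  return results
    .filter((r) => r.metadata?.category === category)
    .map((r) => `- ${r.memory}`)
    .join("\n")
}
```

A custom `promptTemplate` could call a helper like this on `data.searchResults` instead of interpolating the pre-formatted strings.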
### Verbose Logging

```typescript
const model = withSupermemory(openai("gpt-4"), {
  containerTag: "user-123",
  customId: "conv-1",
  verbose: true,
})
// Console output shows memory retrieval details
```

### When Supermemory errors (default: continue without memories)

If the Supermemory API returns an error, is unreachable, or retrieval hits the internal time limit, memory injection is skipped. **`skipMemoryOnError` defaults to `true`**, so the LLM call still runs with the **original** prompt (no injected memories). Use `verbose: true` if you want console output when that happens.

To **fail the call** when memory retrieval fails instead, set `skipMemoryOnError: false`:

```typescript
const model = withSupermemory(openai("gpt-5"), {
  containerTag: "user-123",
  customId: "conv-1",
  skipMemoryOnError: false,
})
```

---

## Memory Tools

Add memory capabilities to AI agents with search, add, and fetch operations.

```typescript
import { streamText } from "ai"
import { createAnthropic } from "@ai-sdk/anthropic"
import { supermemoryTools } from "@supermemory/tools/ai-sdk"

const anthropic = createAnthropic({ apiKey: "YOUR_ANTHROPIC_KEY" })

const result = await streamText({
  model: anthropic("claude-3-sonnet"),
  prompt: "Remember that my name is Alice",
  tools: supermemoryTools("YOUR_SUPERMEMORY_KEY"),
})
```

### Available Tools

**Search Memories** - Semantic search through user memories:

```typescript
const result = await streamText({
  model: openai("gpt-5"),
  prompt: "What are my dietary preferences?",
  tools: supermemoryTools("API_KEY"),
})
// AI will call: searchMemories({ informationToGet: "dietary preferences" })
```

**Add Memory** - Store new information:

```typescript
const result = await streamText({
  model: anthropic("claude-3-sonnet"),
  prompt: "Remember that I'm allergic to peanuts",
  tools: supermemoryTools("API_KEY"),
})
// AI will call: addMemory({ memory: "User is allergic to peanuts" })
```

### Using Individual Tools

For more control, import tools separately:

```typescript
import {
  searchMemoriesTool,
  addMemoryTool,
} from "@supermemory/tools/ai-sdk"

const result = await streamText({
  model: openai("gpt-5"),
  prompt: "What do you know about me?",
  tools: {
    searchMemories: searchMemoriesTool("API_KEY", { projectId: "personal" }),
    createEvent: yourCustomTool,
  },
})
```

### Tool Results

```typescript
// searchMemories result
{ success: true, results: [...], count: 5 }

// addMemory result
{ success: true, memory: { id: "mem_123", ... } }
```
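
When inspecting tool results yourself (for logging or follow-up logic), a type guard keeps the handling safe. A sketch based on the `searchMemories` shape shown above — any fields beyond `success`, `results`, and `count` are assumptions:

```typescript
// Result shape as documented above for searchMemories.
type SearchResult = { success: boolean; results: unknown[]; count: number }

// Narrow an unknown value to a successful search result before using it.
function isSuccessfulSearch(value: unknown): value is SearchResult {
  if (typeof value !== "object" || value === null) return false
  const v = value as Record<string, unknown>
  return v.success === true && Array.isArray(v.results) && typeof v.count === "number"
}
```

This is defensive plumbing around the documented shapes, not part of the SDK itself.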