supermemory/apps/docs/test.ts
nexxeln 9553434c9a mastra integration (#717)
adds a withSupermemory wrapper and input/output processors for Mastra agents:

- input processor fetches and injects memories into system prompt
before llm calls
- output processor saves conversations to supermemory after
responses
- supports profile, query, and full memory search modes
- includes custom prompt templates and RequestContext support

const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", addMemory: "always", threadId: "conv-456" }
))

includes docs as well

this pr also restructures the tools package into shared modules
2026-02-03 00:43:08 +00:00


import Supermemory from "supermemory"

// Client expects an API key, typically via the SUPERMEMORY_API_KEY env var
const client = new Supermemory()

const USER_ID = "dhravya"

const conversation = [
  { role: "assistant", content: "Hello, how are you doing?" },
  {
    role: "user",
    content: "Hello! I am Dhravya. I am 20 years old. I love to code!",
  },
  { role: "user", content: "Can I go to the club?" },
]
// Get user profile + relevant memories for context
const profile = await client.profile({
  containerTag: USER_ID,
  q: conversation.at(-1)?.content,
})

const context = `Static profile:
${profile.profile.static.join("\n")}
Dynamic profile:
${profile.profile.dynamic.join("\n")}
Relevant memories:
${profile.searchResults?.results.map((r) => r.content).join("\n")}`
// Build messages with memory-enriched context
const _messages = [
  { role: "system", content: `User context:\n${context}` },
  ...conversation,
]

// const response = await llm.chat({ messages });
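// A minimal sketch of that elided call, assuming the Vercel AI SDK
// ("ai" + "@ai-sdk/openai") is installed — any chat client that accepts
// role/content messages slots in the same way:
//
//   import { generateText } from "ai"
//   import { openai } from "@ai-sdk/openai"
//
//   const { text } = await generateText({
//     model: openai("gpt-4o"),
//     messages: _messages,
//   })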
// Store conversation for future context
await client.add({
  content: conversation.map((m) => `${m.role}: ${m.content}`).join("\n"),
  containerTag: USER_ID,
})
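// On a later session, the same client.profile() call above can surface this
// stored conversation (e.g. Dhravya's age) as context for new queries.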