Commit graph

37 commits

Author SHA1 Message Date
MaheshtheDev
76b0f2746b fix: Multi Params issue with @supermemory/tools (#854)
testing for now

fix proxy issue

upgraded the package
2026-04-16 07:37:09 +00:00
sreedharsreeram
0783594b8d voltagent-sdk (#791) 2026-04-14 20:22:46 +00:00
Ishaan Gupta
239fc123b9
feat: Use LRUCache instead of Map in MemoryCache (#774) 2026-03-20 10:37:39 -07:00
MaheshtheDev
a00a751e10 pkg(tools): Expose raw search results in MemoryPromptData for prompt templates (#787)
Expose raw search results in `MemoryPromptData` so prompt templates can traverse, filter, and selectively include results based on metadata (e.g. score, source).

##### Usage example

```typescript
const promptTemplate = (data: MemoryPromptData) => {
  const relevant = data.searchResults.filter(
    (r) => (r.metadata?.score as number) > 0.7
  )
  return `${data.userMemories}\n${relevant.map(r => r.memory).join('\n')}`
}
```
2026-03-19 00:38:53 +00:00
Mahesh Sanikommu
962fb85cd3
feat: empty state action for new spaces (#780)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-03-15 11:02:14 -07:00
sreedharsreeram
6d93cd083d OpenAI SDK Backfill (#771) 2026-03-06 23:57:10 +00:00
nexxeln
9553434c9a mastra integration (#717)
adds withSupermemory wrapper and input/output processors for
mastra agents:

- input processor fetches and injects memories into system prompt
before llm calls
- output processor saves conversations to supermemory after
responses
- supports profile, query, and full memory search modes
- includes custom prompt templates and requestcontext support

```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", addMemory: "always", threadId: "conv-456" }
))
```

includes docs as well

this pr also reworks the tools package into shared modules
2026-02-03 00:43:08 +00:00
MaheshtheDev
0853bc86cb feat: tools package strict mode for openai models (#699) 2026-01-23 03:15:39 +00:00
MaheshtheDev
1423bd7004 feat: mobile responsive, lint formats, toast, render issue fix (#688)
- Mobile responsive
- new toast design
- web document render issue fix
- posthog analytics
- ui improvements
2026-01-21 03:11:53 +00:00
MaheshtheDev
32a7eff3af fix(tools): multi step agent prompt caching (#685) 2026-01-20 01:30:43 +00:00
Dhravya Shah
87b361c26b
docs changes (#678)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 16:55:32 -08:00
MaheshtheDev
68d4d95c1d feat: allow prompt template for @supermemory/tools package (#655)
## Add customizable prompt templates for memory injection

**Changes:**

- Add `promptTemplate` option to `withSupermemory()` for full control over injected memory format (XML, custom branding, etc.)
- New `MemoryPromptData` interface with `userMemories` and `generalSearchMemories` fields
- Exclude `system` messages from persistence to avoid storing injected prompts
- Add JSDoc comments to all public interfaces for better DevEx

**Usage:**

```typescript
const customPrompt = (data: MemoryPromptData) => `
<user_memories>
${data.userMemories}
${data.generalSearchMemories}
</user_memories>
`.trim()

const model = withSupermemory(openai("gpt-4"), "user-123", {
  promptTemplate: customPrompt,
})
```
2026-01-07 03:48:16 +00:00
Arnab Mondal
aa5654f084
fix(tools): pass apiKey to profile search instead of using process.env (#634) 2025-12-30 10:15:10 -08:00
Dhravya Shah
1162cbf284 conditional 2025-12-23 19:38:55 -08:00
MaheshtheDev
d095bd234e feat(@supermemory/tools): vercel ai sdk compatible with v5 and v6 (#628) 2025-12-24 01:36:03 +00:00
Dhravya Shah
67e783158b bump package 2025-12-23 15:26:47 -08:00
Dhravya Shah
821d3049cd fix: deduplicate returned memories to save tokens 2025-12-22 11:09:50 -08:00
Dhravya
81e192e616 Support for conversations in SDKs (#618) 2025-12-20 00:46:13 +00:00
Arnab Mondal
f9d304ec3e
feat(tools): allow passing apiKey via options for browser support (#599)
Co-authored-by: Mahesh Sanikommmu <mahesh.svmr@gmail.com>
2025-12-05 00:34:44 +05:30
MaheshtheDev
1ff6b7f951 chore(@supermemory/tools): fix the documentation of withSupermemory (#601)
- small docs mismatch on the addMemory default option
2025-12-03 18:59:55 +00:00
Shoubhit Dash
7b0baea0f2
add support for responses api in openai typescript sdk (#549) 2025-11-07 07:39:44 -07:00
MaheshtheDev
97071502a7 feat(@supermemory/tools): capture assistant responses with filtered memory (#539)
### Added streaming support to the Supermemory middleware and improved memory handling in the AI SDK integration.

### What changed?

- Refactored the middleware architecture to support both streaming and non-streaming responses
- Extracted memory prompt functionality into a separate module (`memory-prompt.ts`)
- Added memory saving capability for streaming responses
- Improved the formatting of memory content with a "User Supermemories:" prefix
- Added utility function to filter out supermemories from content
- Created a new streaming example in the test app with a dedicated route and page
- Updated version from 1.3.0 to 1.3.1 in package.json
- Simplified installation instructions in README.md
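
A minimal sketch of what the supermemory-filtering utility might look like (the function name `filterOutSupermemories` and the exact "User Supermemories:" block format are assumptions based on the bullet points above, not the package's actual code):

```typescript
// Hypothetical sketch: strip an injected "User Supermemories:" block
// from message content before it is persisted back as a memory,
// so injected prompts are not stored as user data.
const SUPERMEMORY_PREFIX = "User Supermemories:"

function filterOutSupermemories(content: string): string {
  return content
    .split("\n\n")
    .filter((block) => !block.startsWith(SUPERMEMORY_PREFIX))
    .join("\n\n")
    .trim()
}

const raw = `User Supermemories:\n- likes typescript\n\nWhat is 2 + 2?`
console.log(filterOutSupermemories(raw)) // prints: What is 2 + 2?
```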
2025-10-28 22:28:22 +00:00
MaheshtheDev
9d7f1edbfb fix: openai sdk packaging issue (#532) 2025-10-27 20:40:51 +00:00
MaheshtheDev
b3aab91489 feat: withSupermemory for openai sdk (#531)
### TL;DR

Added OpenAI SDK middleware support for SuperMemory integration, allowing direct memory injection without AI SDK dependency.

### What changed?

- Added `withSupermemory` middleware for OpenAI SDK that automatically injects relevant memories into chat completions
- Implemented memory search and injection functionality for OpenAI clients
- Restructured the OpenAI module to separate tools and middleware functionality
- Updated README with comprehensive documentation and examples for the new OpenAI middleware
- Added test implementation with a Next.js API route example
- Reorganized package exports to support the new structure
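
A simplified, self-contained sketch of the memory-injection idea described above (the `searchMemories` stand-in and the `injectMemories` helper are illustrative assumptions, not the package's real API):

```typescript
// Sketch of memory injection for chat completions: before the call,
// search memories for the user and prepend them as a system message.
type Message = { role: "system" | "user" | "assistant"; content: string }

// Hypothetical stand-in for a Supermemory search call.
async function searchMemories(userId: string, query: string): Promise<string[]> {
  const store: Record<string, string[]> = {
    "user-123": ["prefers TypeScript", "works on the tools package"],
  }
  return store[userId] ?? []
}

async function injectMemories(userId: string, messages: Message[]): Promise<Message[]> {
  const lastUser = [...messages].reverse().find((m) => m.role === "user")
  const memories = await searchMemories(userId, lastUser?.content ?? "")
  if (memories.length === 0) return messages
  const system: Message = {
    role: "system",
    content: `User Supermemories:\n${memories.map((m) => `- ${m}`).join("\n")}`,
  }
  return [system, ...messages]
}
```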
2025-10-27 20:08:11 +00:00
Mahesh Sanikommmu
4f2d839412 chat app withSupermemory test 2025-10-22 14:00:59 -07:00
Mahesh Sanikommmu
5fb33743b3 add props interface export 2025-10-22 12:31:54 -07:00
Shoubhit Dash
8436229cbb add comment 2025-10-22 23:00:08 +05:30
Shoubhit Dash
5612380dc2 fix prompt mutation in vercel middleware 2025-10-22 22:43:09 +05:30
sohamd22
79f3059b2a add conversationId functionality to map to customId in ingestion (#499)
### TL;DR

Added support for conversation grouping in Supermemory middleware through a new `conversationId` parameter.

### What changed?

- Added a new `conversationId` option to the `withSupermemory` function to group messages into a single document for contextual memory generation
- Updated the middleware to use this conversation ID when adding memories, using a `customId` format of `conversation:{conversationId}`
- Created a new `getConversationContent` function that extracts the full conversation content from the prompt parameters
- Enhanced memory storage to save entire conversations rather than just the last user message
- Updated documentation and examples to demonstrate the new parameter usage

### How to test?

1. Import the `withSupermemory` function from the package
2. Create a model with memory using the new `conversationId` parameter:
   ```typescript
   const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123", {
     conversationId: "conversation-456",
     mode: "full",
     addMemory: "always"
   })
   ```
3. Use the model in a conversation and verify that messages are grouped by the conversation ID
4. Check that memories are being stored with the custom ID format `conversation:{conversationId}`

### Why make this change?

This enhancement improves the contextual understanding of the AI by allowing related messages to be grouped together as a single conversation document. By using a conversation ID, the system can maintain coherent memory across multiple interactions within the same conversation thread, providing better context retrieval and more relevant responses.
2025-10-19 22:24:00 +00:00
Mahesh Sanikommmu
3e47d9ee53 fix: side effect removal 2025-10-10 21:13:21 -07:00
Mahesh Sanikommmu
d8874e8c17 fix: add memory code params and documentation in readme 2025-10-10 21:06:52 -07:00
sohamd22
91a2aa5fdb create memory adding option in vercel sdk (#484)
### TL;DR

Added support for automatically saving user messages to Supermemory.

### What changed?

- Added a new `addMemory` option to `wrapVercelLanguageModel` that accepts either "always" or "never" (defaults to "never")
- Implemented the `addMemoryTool` function to save user messages to Supermemory
- Modified the middleware to check the `addMemory` setting and save the last user message when appropriate
- Initialized the Supermemory client in the middleware to enable memory storage

### How to test?

1. Set the `SUPERMEMORY_API_KEY` environment variable
2. Use the `wrapVercelLanguageModel` function with the new `addMemory: "always"` option
3. Send a user message through the model
4. Verify that the message is saved to Supermemory with the specified container tag

### Why make this change?

This change enables automatic memory creation from user messages, which improves the system's ability to build a knowledge base without requiring explicit memory creation calls. This is particularly useful for applications that want to automatically capture and store user interactions for future reference.
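
The `addMemory: "always"` behavior can be sketched as follows (simplified; `messageToSave` is a hypothetical helper for illustration, not the middleware's actual code):

```typescript
// When addMemory is "always", the last user message in the prompt is
// selected for saving to Supermemory; "never" saves nothing.
type Message = { role: string; content: string }
type AddMemoryMode = "always" | "never"

function messageToSave(messages: Message[], addMemory: AddMemoryMode): string | null {
  if (addMemory === "never") return null
  const lastUser = [...messages].reverse().find((m) => m.role === "user")
  return lastUser?.content ?? null
}

console.log(messageToSave(
  [{ role: "user", content: "I live in Seattle" }, { role: "assistant", content: "Noted!" }],
  "always",
)) // prints: I live in Seattle
```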
2025-10-11 03:45:06 +00:00
MaheshtheDev
35ac9e086b feat: ai sdk language model withSupermemory (#446) 2025-10-10 05:10:03 +00:00
Dhravya Shah
4d6fd37c99 fix: model names 2025-10-03 02:41:49 -07:00
Dhravya Shah
5c57578573 feat: Add memory vs rag and migration section to docs 2025-10-01 18:11:37 -07:00
Dhravya Shah
53bc296155 feat: Claude memory integration 2025-09-29 13:40:56 -07:00
CodeWithShreyans
cae7051d1a
feat: new tools package (#407)
2025-09-02 23:11:19 +00:00