Commit graph

36 commits

Author SHA1 Message Date
MaheshtheDev
23a60f90f6 tools: timeout and default option on error to skip (#875)
**`withSupermemory`** **(AI SDK)**

- **`skipMemoryOnError`** **defaults to** **`true`**. Memory errors/timeouts are logged and the model runs on the **original** prompt unless you set `skipMemoryOnError: false`.
- The **pre-LLM** **`/v4/profile`** **call is aborted after 5s** via `AbortSignal`.

**Docs**

- `packages/tools/README.md`, **`apps/docs/integrations/ai-sdk.md`**
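The fallback behavior above can be sketched roughly as follows; the function and parameter names here are illustrative, not the package's actual internals:

```typescript
// Illustrative sketch only: `resolveSystemPrompt` and `fetchProfile` are
// hypothetical names, not part of @supermemory/tools.
async function resolveSystemPrompt(
  original: string,
  fetchProfile: (signal: AbortSignal) => Promise<string>,
  skipMemoryOnError = true,
): Promise<string> {
  try {
    // The pre-LLM profile fetch is bounded to 5s via AbortSignal.timeout.
    const profile = await fetchProfile(AbortSignal.timeout(5000))
    return `${profile}\n\n${original}`
  } catch (err) {
    if (skipMemoryOnError) {
      console.warn("supermemory: profile fetch failed, continuing", err)
      return original // the model runs on the original prompt
    }
    throw err
  }
}
```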
2026-04-23 00:11:20 +00:00
MaheshtheDev
dd652bc6c6 chore: package update to trigger auto publish (#873) 2026-04-21 18:05:13 +00:00
MaheshtheDev
20c5a18e13 chore: skipMemoryOnError for withSupermemory (#871) 2026-04-21 16:43:23 +00:00
vorflux[bot]
5493455f69
ci: switch npm packages to trusted publishing (OIDC) (#863)
Co-authored-by: Dhravya Shah <dhravyashah@gmail.com>
2026-04-16 19:41:58 -07:00
MaheshtheDev
76b0f2746b fix: Multi Params issue with @supermemory/tools (#854)
testing for now

fix proxy issue

upgraded the package
2026-04-16 07:37:09 +00:00
sreedharsreeram
0783594b8d voltagent-sdk (#791) 2026-04-14 20:22:46 +00:00
Ishaan Gupta
239fc123b9
feat: Use LRUCache instead of Map in MemoryCache (#774) 2026-03-20 10:37:39 -07:00
MaheshtheDev
a00a751e10 pkg(tools): Expose raw search results in MemoryPromptData for prompt templates (#787)
Expose raw search results in `MemoryPromptData` so prompt templates can traverse, filter, and selectively include results based on metadata (e.g. score, source).

##### Usage example

```typescript
const promptTemplate = (data: MemoryPromptData) => {
  const relevant = data.searchResults.filter(
    (r) => (r.metadata?.score as number) > 0.7
  )
  return `${data.userMemories}\n${relevant.map(r => r.memory).join('\n')}`
}
```
2026-03-19 00:38:53 +00:00
nexxeln
9553434c9a mastra integration (#717)
adds withSupermemory wrapper and input/output processors for
mastra agents:

- input processor fetches and injects memories into system prompt
before llm calls
- output processor saves conversations to supermemory after
responses
- supports profile, query, and full memory search modes
- includes custom prompt templates and requestcontext support

```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", addMemory: "always", threadId: "conv-456" }
))
```

includes docs as well

this pr also reworks how the tools package works into shared modules
2026-02-03 00:43:08 +00:00
Dhravya Shah
1c6b7800a8 chore: bump package versions 2026-01-22 20:50:51 -07:00
MaheshtheDev
32a7eff3af fix(tools): multi step agent prompt caching (#685) 2026-01-20 01:30:43 +00:00
Mahesh Sanikommu
645f89310c
PR: nova alpha release (#670)
Co-authored-by: Dhravya Shah <dhravya@supermemory.com>
2026-01-13 00:54:56 -08:00
MaheshtheDev
68d4d95c1d feat: allow prompt template for @supermemory/tools package (#655)
## Add customizable prompt templates for memory injection

**Changes:**

- Add `promptTemplate` option to `withSupermemory()` for full control over injected memory format (XML, custom branding, etc.)
- New `MemoryPromptData` interface with `userMemories` and `generalSearchMemories` fields
- Exclude `system` messages from persistence to avoid storing injected prompts
- Add JSDoc comments to all public interfaces for better DevEx

**Usage:**

```typescript
const customPrompt = (data: MemoryPromptData) => `
<user_memories>
${data.userMemories}
${data.generalSearchMemories}
</user_memories>
`.trim()

const model = withSupermemory(openai("gpt-4"), "user-123", {
  promptTemplate: customPrompt,
})
```
2026-01-07 03:48:16 +00:00
MaheshtheDev
04fb67a33e chore: update the package version of tools (#637) 2025-12-30 18:58:04 +00:00
Dhravya Shah
1162cbf284 conditional 2025-12-23 19:38:55 -08:00
MaheshtheDev
d095bd234e feat(@supermemory/tools): vercel ai sdk compatible with v5 and v6 (#628) 2025-12-24 01:36:03 +00:00
Dhravya Shah
67e783158b bump package 2025-12-23 15:26:47 -08:00
Dhravya Shah
821d3049cd fix: deduplicate memories after returned to save tokens 2025-12-22 11:09:50 -08:00
MaheshtheDev
fae0afa7da chore: fix tsdown defaults in withsupermemory package (#623) 2025-12-21 19:49:49 +00:00
Dhravya
81e192e616 Support for conversations in SDKs (#618) 2025-12-20 00:46:13 +00:00
Dhravya Shah
ec538e2608 chore: bump package versions 2025-12-06 17:48:11 -08:00
MaheshtheDev
1ff6b7f951 chore(@supermemory/tools): fix the documentation of withSupermemory (#601)
- small docs mismatch on the addMemory default option
2025-12-03 18:59:55 +00:00
Dhravya Shah
2f8bafac4e update quickstart 2025-11-27 09:53:11 -07:00
MaheshtheDev
97071502a7 feat(@supermemory/tools): capture assistant responses with filtered memory (#539)
### Added streaming support to the Supermemory middleware and improved memory handling in the AI SDK integration.

### What changed?

- Refactored the middleware architecture to support both streaming and non-streaming responses
- Extracted memory prompt functionality into a separate module (`memory-prompt.ts`)
- Added memory saving capability for streaming responses
- Improved the formatting of memory content with a "User Supermemories:" prefix
- Added utility function to filter out supermemories from content
- Created a new streaming example in the test app with a dedicated route and page
- Updated version from 1.3.0 to 1.3.1 in package.json
- Simplified installation instructions in README.md
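A rough sketch of what the filtering utility could look like; the real helper's name and the exact format of the injected block may differ:

```typescript
// Hypothetical sketch of stripping an injected "User Supermemories:" block
// from content; not the actual implementation in @supermemory/tools.
const MEMORY_PREFIX = "User Supermemories:"

function stripSupermemories(content: string): string {
  // Drop everything from the injected prefix up to the first blank line.
  const start = content.indexOf(MEMORY_PREFIX)
  if (start === -1) return content
  const end = content.indexOf("\n\n", start)
  return end === -1
    ? content.slice(0, start).trimEnd()
    : (content.slice(0, start) + content.slice(end + 2)).trim()
}
```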
2025-10-28 22:28:22 +00:00
MaheshtheDev
9d7f1edbfb fix: openai sdk packaging issue (#532) 2025-10-27 20:40:51 +00:00
MaheshtheDev
b3aab91489 feat: withSupermemory for openai sdk (#531)
### TL;DR

Added OpenAI SDK middleware support for SuperMemory integration, allowing direct memory injection without AI SDK dependency.

### What changed?

- Added `withSupermemory` middleware for OpenAI SDK that automatically injects relevant memories into chat completions
- Implemented memory search and injection functionality for OpenAI clients
- Restructured the OpenAI module to separate tools and middleware functionality
- Updated README with comprehensive documentation and examples for the new OpenAI middleware
- Added test implementation with a Next.js API route example
- Reorganized package exports to support the new structure
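A minimal sketch of the kind of injection described, assuming memories are prepended as a system message (the function name and message formatting are assumptions, not the package's actual code):

```typescript
// Hypothetical sketch of memory injection into an OpenAI-style messages array.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string }

function injectMemories(messages: ChatMessage[], memories: string[]): ChatMessage[] {
  if (memories.length === 0) return messages
  // Prepend a system message carrying the retrieved memories.
  const memoryBlock: ChatMessage = {
    role: "system",
    content: `Relevant memories:\n${memories.map((m) => `- ${m}`).join("\n")}`,
  }
  return [memoryBlock, ...messages]
}
```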
2025-10-27 20:08:11 +00:00
Mahesh Sanikommmu
5fb33743b3 add props interface export 2025-10-22 12:31:54 -07:00
Mahesh Sanikommmu
de6a1aef9c fix(tools): update the docs for conversational 2025-10-19 18:21:48 -07:00
sohamd22
91a2aa5fdb create memory adding option in vercel sdk (#484)
### TL;DR

Added support for automatically saving user messages to Supermemory.

### What changed?

- Added a new `addMemory` option to `wrapVercelLanguageModel` that accepts either "always" or "never" (defaults to "never")
- Implemented the `addMemoryTool` function to save user messages to Supermemory
- Modified the middleware to check the `addMemory` setting and save the last user message when appropriate
- Initialized the Supermemory client in the middleware to enable memory storage

### How to test?

1. Set the `SUPERMEMORY_API_KEY` environment variable
2. Use the `wrapVercelLanguageModel` function with the new `addMemory: "always"` option
3. Send a user message through the model
4. Verify that the message is saved to Supermemory with the specified container tag

### Why make this change?

This change enables automatic memory creation from user messages, which improves the system's ability to build a knowledge base without requiring explicit memory creation calls. This is particularly useful for applications that want to automatically capture and store user interactions for future reference.
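The save step described above might look roughly like this; the function name and the `save` callback are illustrative stand-ins for the middleware's internals:

```typescript
// Hypothetical sketch of the addMemory check: save the last user message
// only when addMemory is "always" (the documented default is "never").
type AddMemoryMode = "always" | "never"

async function maybeSaveLastUserMessage(
  messages: { role: string; content: string }[],
  addMemory: AddMemoryMode,
  save: (content: string) => Promise<void>,
): Promise<boolean> {
  if (addMemory !== "always") return false
  // Walk backwards to find the most recent user message.
  const lastUser = [...messages].reverse().find((m) => m.role === "user")
  if (!lastUser) return false
  await save(lastUser.content)
  return true
}
```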
2025-10-11 03:45:06 +00:00
MaheshtheDev
35ac9e086b feat: ai sdk language model withSupermemory (#446) 2025-10-10 05:10:03 +00:00
Dhravya Shah
1e2657cf5c
Revert "test(ai-sdk): streamText and generateText for ai sdk" (#466) 2025-10-08 15:56:03 -07:00
Mahesh Sanikommu
77b6e2b8dc
test(ai-sdk): streamText and generateText for ai sdk (#451) 2025-10-08 15:55:42 -07:00
Dhravya Shah
698bd0e1cd fix: tools files 2025-10-02 17:09:27 -07:00
Dhravya Shah
53bc296155 feat: Claude memory integration 2025-09-29 13:40:56 -07:00
Dhravya Shah
3d2d3ae35c bump version 2025-09-24 21:48:56 -07:00
CodeWithShreyans
cae7051d1a
feat: new tools package (#407)
2025-09-02 23:11:19 +00:00
Renamed from packages/openai-sdk-ts/package.json