---
title: "Overview — What is Supermemory?"
sidebarTitle: "Overview"
icon: "book-open"
---

Supermemory is the long-term and short-term memory and context infrastructure for AI agents. It is the [state of the art](https://supermemory.ai/research) across multiple benchmarks, including LongMemEval and LoCoMo.

With supermemory, developers can give their AI agents perfect recall about their users, making agents more intelligent, more personalized, and more consistent. Additionally, *supermemory* has all the pieces of the context stack built in:

- [Agent memory](/concepts/graph-memory)
- [Content extraction](/concepts/content-types)
- [Connectors and syncing](/connectors/overview)
- [Managed RAG platform](/concepts/super-rag)

All of this together makes supermemory the best abstraction for providing context to agents.

## How does it work? (at a glance)



- You send Supermemory text, files, and chats.
- Supermemory [intelligently indexes them](/concepts/how-it-works) and builds a semantic understanding graph on top of an entity (e.g., a user, a document, a project, an organization).
- At query time, we fetch only the most relevant context and pass it to your models.

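The three steps above can be sketched as client calls. The host, endpoint paths, and field names below (`api.example.com`, `/documents`, `/search`, `q`) are illustrative assumptions, not the real Supermemory API surface; only `containerTag` comes from this page, and the API Reference is authoritative.

```typescript
// Sketch of the send → index → retrieve loop against a hypothetical
// HTTP API. Paths and field names are assumptions for illustration.

const BASE_URL = "https://api.example.com"; // placeholder host

// Step 1: send text, files, or chat transcripts for one entity,
// identified here by its containerTag.
function buildIngestRequest(content: string, containerTag: string) {
  return {
    url: `${BASE_URL}/documents`,
    method: "POST" as const,
    body: { content, containerTag },
  };
}

// Step 3: at query time, ask for only the most relevant context.
// Step 2 (indexing and graph building) happens server-side in between,
// so there is nothing for the client to do.
function buildQueryRequest(q: string, containerTag: string, limit = 5) {
  return {
    url: `${BASE_URL}/search`,
    method: "POST" as const,
    body: { q, containerTag, limit },
  };
}
```

At runtime you would POST these payloads with `fetch` and splice the returned context into your model's prompt.
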
## Supermemory is context engineering

#### Ingestion and Extraction

Supermemory handles all the extraction, for [any data type that you have](/concepts/content-types):

- Text
- Conversations
- Files (PDF, images, docs)
- Even videos!

We then offer three ways to add context to your LLMs:

#### Memory API — Learned user context



Supermemory learns and builds the memory for the user. These are extracted facts about the user that:

- [Evolve on top of existing context about the user](/concepts/graph-memory), **in real time**
- Handle **knowledge updates, temporal changes, and forgetfulness**
- Create a **user profile** as the default context provider for the LLM

_This can then be provided to the LLM to give more contextual, personalized responses._

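A toy sketch (not the actual Supermemory engine) of what "evolving" facts mean in practice: a newer fact about the same subject and attribute supersedes the old one instead of accumulating beside it, which is how knowledge updates and temporal changes stay consistent.

```typescript
// Toy model of evolving memory: one current value per
// (subject, attribute) pair, with newer facts superseding older ones.

type Fact = { subject: string; attribute: string; value: string; at: number };

class FactStore {
  private facts = new Map<string, Fact>();

  // Knowledge update: a newer fact for the same key replaces the old one;
  // a stale (older) fact arriving late is ignored.
  learn(fact: Fact): void {
    const key = `${fact.subject}:${fact.attribute}`;
    const existing = this.facts.get(key);
    if (!existing || fact.at >= existing.at) this.facts.set(key, fact);
  }

  // Recall returns only the current belief, not the superseded history.
  recall(subject: string, attribute: string): string | undefined {
    return this.facts.get(`${subject}:${attribute}`)?.value;
  }
}
```

The real engine does this over a semantic graph rather than a flat map, but the contract is the same: queries see the latest consistent view of the user.
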
#### User profiles
Having the latest, evolving context about the user also allows us to create a [**User Profile**](/concepts/user-profiles): a combination of static and dynamic facts about the user that the agent should **always know**.

Developers can configure what counts as static and what counts as dynamic, depending on their use case:

- Static: Information that the agent should **always** know.
- Dynamic: **Episodic** information, such as the last few conversations.

This leads to a much better retrieval system and extremely personalized responses.

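A minimal sketch of how static and dynamic facts might be combined into one always-available prompt block. The `Profile` shape and `renderProfile` helper are hypothetical illustrations, not the user-profiles API.

```typescript
// Hypothetical profile shape: static facts are always injected;
// dynamic facts are episodic and capped to the most recent few.

type Profile = {
  staticFacts: string[];                        // always true, always included
  dynamicFacts: { text: string; at: number }[]; // episodic, timestamped
};

function renderProfile(p: Profile, recentCount = 3): string {
  const episodic = [...p.dynamicFacts]
    .sort((a, b) => b.at - a.at) // newest first
    .slice(0, recentCount)
    .map((d) => `- ${d.text}`);
  return [
    "Always true about this user:",
    ...p.staticFacts.map((s) => `- ${s}`),
    "From recent conversations:",
    ...episodic,
  ].join("\n");
}
```

The rendered block would be prepended to every request for that user, so the agent never starts from a blank slate.
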
#### RAG - Advanced semantic search
Along with the learned user context, developers can also choose to search over the raw content. We provide full RAG-as-a-service, with:

- Advanced metadata filtering
- Contextual chunking
- Tight integration with the memory engine

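A sketch of what a filtered raw-content search request could look like. The host, endpoint, and filter schema here are assumptions for illustration; the API Reference defines the real query shape.

```typescript
// Hypothetical search request with metadata filtering: only documents
// whose metadata matches every filter participate in the search.

type MetadataFilter = { key: string; value: string };

function buildFilteredSearch(
  q: string,
  containerTag: string,
  filters: MetadataFilter[] = [],
) {
  return {
    url: "https://api.example.com/search", // placeholder host and path
    method: "POST" as const,
    body: {
      q,
      containerTag,
      filters: filters.map((f) => ({ key: f.key, value: f.value })),
    },
  };
}
```
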
<Info>
See the full API Reference tab for detailed endpoint documentation.
</Info>

<Note>
All three approaches share the **same context pool** when using the same user ID (`containerTag`). You can mix and match based on your needs.
</Note>

## Next steps
<CardGroup cols={2}>
  <Card title="Quickstart" icon="play" href="/quickstart">
    Make your first API call in minutes
  </Card>
  <Card title="How it Works" icon="cpu" href="/concepts/how-it-works">
    Understand the knowledge graph architecture
  </Card>
</CardGroup>