mirror of
https://github.com/supermemoryai/supermemory.git
synced 2026-05-02 21:50:10 +00:00
update quickstart
This commit is contained in:
parent
895f37ac89
commit
2f8bafac4e
16 changed files with 1040 additions and 1536 deletions
183
apps/docs/user-profiles/api.mdx
Normal file
---
title: "Profile API"
description: "Endpoint details and response structure for user profiles"
sidebarTitle: "API Reference"
icon: "code"
---

## Endpoint

**`POST /v4/profile`**

Retrieves a user's profile, optionally combined with search results.

## Request

### Headers

| Header | Required | Description |
|--------|----------|-------------|
| `Authorization` | Yes | Bearer token with your API key |
| `Content-Type` | Yes | `application/json` |

### Body Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `containerTag` | string | Yes | The container tag (usually a user ID) whose profile to retrieve |
| `q` | string | No | Optional search query; when provided, search results are returned alongside the profile |

## Response

```json
{
  "profile": {
    "static": [
      "User is a software engineer",
      "User specializes in Python and React",
      "User prefers dark mode interfaces"
    ],
    "dynamic": [
      "User is working on Project Alpha",
      "User recently started learning Rust",
      "User is debugging authentication issues"
    ]
  },
  "searchResults": {
    "results": [...], // Only present if the 'q' parameter was provided
    "total": 15,
    "timing": 45.2
  }
}
```

### Response Fields

| Field | Type | Description |
|-------|------|-------------|
| `profile.static` | string[] | Long-term, stable facts about the user |
| `profile.dynamic` | string[] | Recent context and temporary information |
| `searchResults` | object | Only present if the `q` parameter was provided |
| `searchResults.results` | array | Matching memory results |
| `searchResults.total` | number | Total number of matches |
| `searchResults.timing` | number | Query execution time in milliseconds |

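The response shape can be captured in a TypeScript type for safer client code. This is a sketch derived from the field table above; the exact optionality of nested fields is an assumption:

```typescript
// Sketch of the POST /v4/profile response shape, based on the
// documented field table. Treat field optionality as an assumption.
interface ProfileResponse {
  profile: {
    static: string[];   // long-term, stable facts
    dynamic: string[];  // recent context
  };
  // Present only when the request included a 'q' parameter.
  searchResults?: {
    results: unknown[];
    total: number;
    timing: number; // milliseconds
  };
}

// Check whether a parsed response includes search results.
function hasSearchResults(data: ProfileResponse): boolean {
  return data.searchResults !== undefined;
}
```

Narrowing on `searchResults` up front avoids sprinkling optional chaining through downstream prompt-building code.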
## Basic Request

<CodeGroup>

```typescript TypeScript
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123'
  })
});

const data = await response.json();

console.log("Static facts:", data.profile.static);
console.log("Dynamic context:", data.profile.dynamic);
```

```python Python
import requests
import os

response = requests.post(
    'https://api.supermemory.ai/v4/profile',
    headers={
        'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
        'Content-Type': 'application/json'
    },
    json={
        'containerTag': 'user_123'
    }
)

data = response.json()

print("Static facts:", data['profile']['static'])
print("Dynamic context:", data['profile']['dynamic'])
```

```bash cURL
curl -X POST https://api.supermemory.ai/v4/profile \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "containerTag": "user_123"
  }'
```

</CodeGroup>

## Profile with Search

Include a search query to get both profile data and relevant memories in one call:

<CodeGroup>

```typescript TypeScript
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123',
    q: 'deployment errors yesterday'
  })
});

const data = await response.json();

// Profile data
const profile = data.profile;

// Search results (only present because we passed 'q')
const searchResults = data.searchResults?.results || [];
```

```python Python
response = requests.post(
    'https://api.supermemory.ai/v4/profile',
    headers={
        'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
        'Content-Type': 'application/json'
    },
    json={
        'containerTag': 'user_123',
        'q': 'deployment errors yesterday'
    }
)

data = response.json()

profile = data['profile']
search_results = data.get('searchResults', {}).get('results', [])
```

</CodeGroup>

## Error Responses

| Status | Description |
|--------|-------------|
| `400` | Missing or invalid `containerTag` |
| `401` | Invalid or missing API key |
| `404` | Container not found |
| `500` | Internal server error |

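A thin wrapper can turn these status codes into readable errors before you parse the body. This is a sketch; the error strings mirror the table above, and retry or logging behavior is left to your application:

```typescript
// Map the documented status codes to readable error messages.
const PROFILE_ERRORS: Record<number, string> = {
  400: "Missing or invalid containerTag",
  401: "Invalid or missing API key",
  404: "Container not found",
  500: "Internal server error",
};

function describeProfileError(status: number): string {
  return PROFILE_ERRORS[status] ?? `Unexpected status ${status}`;
}

// Fetch a profile, throwing a descriptive error on non-2xx responses.
async function fetchProfileOrThrow(containerTag: string) {
  const res = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ containerTag })
  });
  if (!res.ok) {
    throw new Error(describeProfileError(res.status));
  }
  return res.json();
}
```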
## Rate Limits

Profile requests count toward your standard API rate limits. Since profiles are cached, repeated requests for the same user are efficient.

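If your application fetches a profile on every request, a small client-side cache with a short TTL can reduce traffic further. This is an illustrative sketch, not part of the API; the fetcher is injectable so the cache logic can be tested without a network:

```typescript
// Illustrative client-side TTL cache for profile lookups (an
// assumption about your app's needs, not a Supermemory feature).
type Fetcher = (containerTag: string) => Promise<unknown>;

function cachedProfileFetcher(fetchProfile: Fetcher, ttlMs = 60_000): Fetcher {
  const cache = new Map<string, { value: unknown; expires: number }>();
  return async (containerTag) => {
    const hit = cache.get(containerTag);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh hit
    const value = await fetchProfile(containerTag);        // miss: fetch
    cache.set(containerTag, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

Keep the TTL short (tens of seconds) so recently ingested facts still surface quickly.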
<Card title="See Examples" icon="laptop-code" href="/user-profiles/examples">
  View complete integration examples for chat apps, support systems, and more
</Card>

316
apps/docs/user-profiles/examples.mdx
Normal file
---
title: "Profile Examples"
description: "Complete code examples for integrating user profiles"
sidebarTitle: "Examples"
icon: "laptop-code"
---

## Building a Personalized Prompt

The most common use case: inject profile data into your LLM's system prompt.

<CodeGroup>

```typescript TypeScript
async function handleChatMessage(userId: string, message: string) {
  // Get user profile
  const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ containerTag: userId })
  });

  const { profile } = await profileResponse.json();

  // Build a personalized system prompt
  const systemPrompt = `You are assisting a user with the following context:

ABOUT THE USER:
${profile.static?.join('\n') || 'No profile information yet.'}

CURRENT CONTEXT:
${profile.dynamic?.join('\n') || 'No recent activity.'}

Provide responses personalized to their expertise level and preferences.`;

  // Send to your LLM
  const response = await llm.chat({
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: message }
    ]
  });

  return response;
}
```

```python Python
import requests
import os

async def handle_chat_message(user_id: str, message: str):
    # Get user profile
    response = requests.post(
        'https://api.supermemory.ai/v4/profile',
        headers={
            'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
            'Content-Type': 'application/json'
        },
        json={'containerTag': user_id}
    )

    profile = response.json()['profile']

    # Build a personalized system prompt (fall back when a tier is empty)
    static_facts = '\n'.join(profile.get('static') or ['No profile information yet.'])
    dynamic_context = '\n'.join(profile.get('dynamic') or ['No recent activity.'])

    system_prompt = f"""You are assisting a user with the following context:

ABOUT THE USER:
{static_facts}

CURRENT CONTEXT:
{dynamic_context}

Provide responses personalized to their expertise level and preferences."""

    # Send to your LLM
    llm_response = await llm.chat(
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": message}
        ]
    )

    return llm_response
```

</CodeGroup>

## Full Context Mode

Combine profile data with query-specific search for comprehensive context:

<CodeGroup>

```typescript TypeScript
async function getFullContext(userId: string, userQuery: string) {
  // A single call returns both the profile and search results
  const response = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      containerTag: userId,
      q: userQuery // Include the user's query
    })
  });

  const data = await response.json();

  return {
    // Static background about the user
    userBackground: data.profile.static,
    // Current activities and context
    currentContext: data.profile.dynamic,
    // Query-specific memories
    relevantMemories: data.searchResults?.results || []
  };
}

// Usage
const context = await getFullContext('user_123', 'deployment error last week');

const systemPrompt = `
User Background:
${context.userBackground.join('\n')}

Current Context:
${context.currentContext.join('\n')}

Relevant Information:
${context.relevantMemories.map(m => m.content).join('\n')}
`;
```

```python Python
async def get_full_context(user_id: str, user_query: str):
    # A single call returns both the profile and search results
    response = requests.post(
        'https://api.supermemory.ai/v4/profile',
        headers={
            'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
            'Content-Type': 'application/json'
        },
        json={
            'containerTag': user_id,
            'q': user_query  # Include the user's query
        }
    )

    data = response.json()

    return {
        'user_background': data['profile'].get('static', []),
        'current_context': data['profile'].get('dynamic', []),
        'relevant_memories': data.get('searchResults', {}).get('results', [])
    }

# Usage
context = await get_full_context('user_123', 'deployment error last week')

system_prompt = f"""
User Background:
{chr(10).join(context['user_background'])}

Current Context:
{chr(10).join(context['current_context'])}

Relevant Information:
{chr(10).join(m['content'] for m in context['relevant_memories'])}
"""
```

</CodeGroup>

## Separate Profile and Search

For more control, call the profile and search endpoints separately:

```typescript TypeScript
async function advancedContext(userId: string, query: string) {
  // Parallel requests for profile and search
  const [profileRes, searchRes] = await Promise.all([
    fetch('https://api.supermemory.ai/v4/profile', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ containerTag: userId })
    }),
    fetch('https://api.supermemory.ai/v3/search', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        q: query,
        containerTag: userId,
        limit: 5
      })
    })
  ]);

  const profile = await profileRes.json();
  const search = await searchRes.json();

  return { profile: profile.profile, searchResults: search.results };
}
```

## Express.js Middleware

Add profile context to all authenticated requests:

```typescript TypeScript
import express from 'express';

// Middleware to fetch the user's profile
async function withUserProfile(req, res, next) {
  if (!req.user?.id) {
    return next();
  }

  try {
    const response = await fetch('https://api.supermemory.ai/v4/profile', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ containerTag: req.user.id })
    });

    req.userProfile = await response.json();
  } catch (error) {
    console.error('Failed to fetch profile:', error);
    req.userProfile = null;
  }

  next();
}

const app = express();

// Apply to all routes
app.use(withUserProfile);

app.post('/chat', async (req, res) => {
  const { message } = req.body;

  // The profile is automatically available
  const profile = req.userProfile?.profile;

  // Use it in your LLM call...
});
```

## Next.js API Route

```typescript TypeScript
// app/api/chat/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const { userId, message } = await req.json();

  // Fetch the profile
  const profileRes = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ containerTag: userId })
  });

  const { profile } = await profileRes.json();

  // Build context and call your LLM...
  const response = await generateResponse(message, profile);

  return NextResponse.json({ response });
}
```

## AI SDK Integration

For the cleanest integration, use the Supermemory AI SDK middleware:

```typescript TypeScript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// One-line setup: profiles are injected automatically
const model = withSupermemory(openai("gpt-4"), "user-123")

const result = await generateText({
  model,
  messages: [{ role: "user", content: "Help me with my current project" }]
})
// The model automatically has access to the user's profile
```

<Card title="AI SDK User Profiles" icon="triangle" href="/ai-sdk/user-profiles">
  Learn more about automatic profile injection with the AI SDK
</Card>

135
apps/docs/user-profiles/overview.mdx
Normal file
---
title: "User Profiles"
description: "Automatically maintained user context that gives your LLMs instant, comprehensive knowledge about each user"
sidebarTitle: "Overview"
icon: "user"
---

User profiles are **automatically maintained collections of facts about your users** that Supermemory builds from all their interactions and content. Think of a profile as a persistent "about me" document that is always up to date and instantly accessible.

<CardGroup cols={2}>
  <Card title="Instant Context" icon="bolt">
    No search queries needed: comprehensive user information is always ready
  </Card>
  <Card title="Auto-Updated" icon="rotate">
    Profiles update automatically as users interact with your system
  </Card>
  <Card title="Two-Tier Structure" icon="layer-group">
    Static facts plus dynamic context for precise personalization
  </Card>
  <Card title="Zero Setup" icon="wand-magic-sparkles">
    Just ingest content normally; profiles build themselves
  </Card>
</CardGroup>

## Why Profiles?

Traditional memory systems rely entirely on search, which has fundamental limitations:

| Problem | With Search Only | With Profiles |
|---------|-----------------|---------------|
| **Context retrieval** | 3-5 search queries | 1 profile call |
| **Response time** | 200-500ms | 50-100ms |
| **Consistency** | Varies by search quality | Always comprehensive |
| **Basic user info** | Requires specific queries | Always available |

**Search is too narrow**: when you search for "project updates", you miss that the user prefers bullet points, works in the PST timezone, and uses specific terminology.

**Profiles provide the foundation**: instead of repeatedly searching for basic context, profiles give your LLM a complete picture of who the user is.

## Static vs Dynamic

Profiles intelligently separate two types of information:

### Static Profile

Long-term, stable facts that rarely change:

- "Sarah Chen is a senior software engineer at TechCorp"
- "Sarah specializes in distributed systems and Kubernetes"
- "Sarah has a PhD in Computer Science from MIT"
- "Sarah prefers technical documentation over video tutorials"

### Dynamic Profile

Recent context and temporary states:

- "Sarah is currently migrating the payment service to microservices"
- "Sarah recently started learning Rust for a side project"
- "Sarah is preparing for a conference talk next month"
- "Sarah is debugging a memory leak in the authentication service"

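In API responses both tiers surface as plain string arrays. As a sketch, a small helper can fold them into a prompt section, with the fallback strings here being illustrative choices rather than anything the API returns:

```typescript
// The two profile tiers as returned by POST /v4/profile.
interface UserProfile {
  static: string[];  // long-term, stable facts
  dynamic: string[]; // recent context and temporary states
}

// Fold both tiers into a prompt fragment, with fallbacks for empty tiers.
function profileToPrompt(profile: UserProfile): string {
  const staticPart = profile.static.length
    ? profile.static.join('\n')
    : 'No profile information yet.';
  const dynamicPart = profile.dynamic.length
    ? profile.dynamic.join('\n')
    : 'No recent activity.';
  return `ABOUT THE USER:\n${staticPart}\n\nCURRENT CONTEXT:\n${dynamicPart}`;
}
```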
## How It Works

Profiles are **automatically built and maintained** through Supermemory's ingestion pipeline:

<Steps>
  <Step title="Content Ingestion">
    When users add documents, chats, or any other content to Supermemory, it goes through the standard ingestion workflow.
  </Step>

  <Step title="Intelligence Extraction">
    AI analyzes the content to extract not just memories but also facts about the user themselves.
  </Step>

  <Step title="Profile Operations">
    The system generates profile operations (add, update, or remove facts) based on the new information.
  </Step>

  <Step title="Automatic Updates">
    Profiles are updated in real time, ensuring they always reflect the latest information.
  </Step>
</Steps>

<Note>
  You don't need to manage profiles manually: they build themselves as users interact with your system.
</Note>

## Profiles + Search

Profiles don't replace search; they complement it:

<Steps>
  <Step title="Profile provides the foundation">
    The user's profile gives your LLM comprehensive background context about who they are, what they know, and what they're working on.
  </Step>

  <Step title="Search adds specificity">
    When you need specific information (like "error in deployment yesterday"), search finds those exact memories.
  </Step>

  <Step title="Combined for complete context">
    Your LLM gets both the broad understanding from profiles and the specific details from search.
  </Step>
</Steps>

### Example

User asks: **"Can you help me debug this?"**

**Without profiles**: the LLM has no context about the user's expertise level, current projects, or debugging preferences.

**With profiles**: the LLM knows:

- The user is a senior engineer (adjust the technical level)
- They're working on a payment service migration (likely context)
- They prefer command-line tools over GUIs (tool suggestions)
- They recently had issues with memory leaks (a possible connection)

## Next Steps

<CardGroup cols={2}>
  <Card title="API Reference" icon="code" href="/user-profiles/api">
    Learn how to fetch and use profiles via the API
  </Card>
  <Card title="Code Examples" icon="laptop-code" href="/user-profiles/examples">
    See complete integration examples
  </Card>
  <Card title="AI SDK Integration" icon="triangle" href="/ai-sdk/user-profiles">
    Use the AI SDK for automatic profile injection
  </Card>
  <Card title="Use Cases" icon="lightbulb" href="/user-profiles/use-cases">
    Common patterns and applications
  </Card>
</CardGroup>

153
apps/docs/user-profiles/use-cases.mdx
Normal file
---
title: "Profile Use Cases"
description: "Common patterns and applications for user profiles"
sidebarTitle: "Use Cases"
icon: "lightbulb"
---

## Personalized AI Assistants

The most common use case: building AI assistants that truly know your users.

**What profiles provide:**

- User's expertise level (adjust technical depth)
- Communication preferences (brief vs detailed, formal vs casual)
- Tools and technologies they use
- Current projects and priorities

**Example prompt enhancement:**

```typescript
const systemPrompt = `You are assisting ${userName}.

Their background:
${profile.static.join('\n')}

Current focus:
${profile.dynamic.join('\n')}

Adjust your responses to match their expertise level and preferences.`;
```

**Result:** an assistant that explains React hooks differently to a junior developer than to a senior architect.

## Customer Support Systems

Give support agents (or AI) instant context about customers.

**What profiles provide:**

- Customer's product usage history
- Previous issues and resolutions
- Preferred communication channels
- Technical proficiency level

**Benefits:**

- No more "let me look up your account"
- Agents immediately understand customer context
- AI support can reference past interactions naturally

```typescript
// Support agent dashboard
async function loadCustomerContext(customerId: string) {
  const { profile } = await getProfile(customerId);

  return {
    summary: profile.static,       // Long-term customer info
    recentIssues: profile.dynamic  // Current tickets, recent problems
  };
}
```

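The `getProfile` helper above is hypothetical; one minimal implementation is sketched below. The injectable `fetchImpl` parameter is purely for testability and is not part of the Supermemory API:

```typescript
// Hypothetical helper assumed by the dashboard snippet above.
// fetchImpl is injectable for testing; it defaults to the global fetch.
async function getProfile(
  containerTag: string,
  fetchImpl: typeof fetch = fetch
): Promise<{ profile: { static: string[]; dynamic: string[] } }> {
  const res = await fetchImpl('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ containerTag })
  });
  if (!res.ok) {
    throw new Error(`Profile request failed with status ${res.status}`);
  }
  return res.json();
}
```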
## Educational Platforms

Adapt learning content to each student's level and progress.

**What profiles provide:**

- Learning style preferences
- Completed courses and topics
- Areas of strength and weakness
- Current learning goals

**Example adaptation:**

```typescript
// The profile might contain:
// static: ["Visual learner", "Strong in algebra, struggles with geometry"]
// dynamic: ["Currently studying calculus", "Preparing for AP exam"]

const tutorPrompt = `You're helping a student with:
${profile.static.join('\n')}

Current focus: ${profile.dynamic.join('\n')}

Adapt explanations to their learning style and build on their strengths.`;
```

## Development Tools

IDE assistants and coding tools that understand your codebase and habits.

**What profiles provide:**

- Preferred languages and frameworks
- Coding style and conventions
- Current project context
- Frequently used patterns

**Example:**

```typescript
// Profile for a developer:
// static: ["Prefers TypeScript", "Uses functional patterns", "Senior engineer"]
// dynamic: ["Working on auth refactor", "Recently learning Rust"]

// A code assistant then knows to:
// - Suggest TypeScript solutions
// - Use functional patterns in examples
// - Provide senior-level explanations
// - Connect suggestions to the auth refactor when relevant
```

## Knowledge Base Assistants

Internal tools that understand each employee's role and responsibilities.

**What profiles provide:**

- Department and role
- Projects they're involved in
- Access level and permissions context
- Areas of expertise (for routing questions)

**Example:**

```typescript
// An HR assistant that knows:
// - The employee's team and manager
// - Their location and timezone
// - Recent PTO requests
// - Benefits elections

const response = await hrAssistant.answer(
  "When is my next performance review?",
  { profile: employeeProfile }
);
// Can answer with specific dates, the manager's name, etc.
```

## E-commerce Recommendations

Personalized shopping experiences beyond basic recommendation engines.

**What profiles provide:**

- Style preferences
- Size information
- Past purchases and returns
- Budget range
- Occasions they shop for

**Example conversation:**

```
User: "I need something for a wedding next month"

// Profile knows: prefers classic styles, size M, budget-conscious,
// previously bought navy suits
```
