Vector databases are the backbone of agent long-term memory. They store embeddings (dense numerical vectors representing text) and enable similarity search: retrieving the stored items whose embeddings are closest to a query's.
| Database | Type | Max Vectors | Unique Strength | Best For |
|---|---|---|---|---|
| Pinecone | Managed SaaS | Billions | Zero-ops, fast scaling | Production, startups |
| Weaviate | Open-source + managed | Hundreds of millions | Built-in vectorization | Full-stack vector apps |
| Chroma | Open-source | Millions | Simple API, embedded mode | Prototyping, local dev |
| Qdrant | Open-source | Billions | Rust performance, filtering | High-performance search |
| pgvector | PostgreSQL extension | Millions | Uses existing Postgres | Adding vectors to existing apps |
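Whichever database you pick, retrieval works the same way underneath: rank stored vectors by a similarity metric against the query embedding, most commonly cosine similarity. A minimal sketch:

```js
// Cosine similarity: 1 = same direction, 0 = orthogonal, -1 = opposite.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // → 1 (identical)
console.log(cosineSimilarity([1, 0], [0, 1])); // → 0 (unrelated)
```

In practice the database computes this (or an approximation of it) over millions of vectors using an index such as HNSW, rather than scanning every row.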
```js
// Agent memory with a vector DB: recall relevant past interactions,
// answer with them in context, then store the new exchange.
async function rememberAndRecall(agent, userMessage) {
  // 1. Search for memories relevant to the incoming message
  const memories = await vectorDB.query({
    vector: await embed(userMessage),
    topK: 5,
    filter: { userId: agent.userId } // scope recall to this agent's user
  });

  // 2. Inject retrieved memories into the prompt context
  const context = memories.map(m => m.text).join('\n');

  // 3. Generate a response grounded in the memory context
  const response = await llm.generate({
    system: `You have access to past conversations:\n${context}`,
    user: userMessage
  });

  // 4. Store this interaction as a new memory — keep the raw text
  //    alongside the vector so it can be re-injected on later recalls
  await vectorDB.upsert({
    id: generateId(),
    vector: await embed(`${userMessage}\n${response}`),
    text: `${userMessage}\n${response}`,
    metadata: { userId: agent.userId, timestamp: Date.now() }
  });

  return response;
}
```
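The store/recall half of that loop can be exercised end to end with an in-memory stand-in for the vector DB and a toy letter-frequency embedding. Both `embed` and `vectorDB` here are hypothetical stubs for experimentation, not a real SDK:

```js
// Toy embedding: 26-dim letter-frequency vector (stand-in for a real model).
function embed(text) {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2;
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// In-memory vector store exposing the same query/upsert shape used above.
const store = [];
const vectorDB = {
  async upsert(record) { store.push(record); },
  async query({ vector, topK, filter }) {
    return store
      .filter(r => r.metadata.userId === filter.userId) // metadata filter
      .map(r => ({ ...r, score: cosine(r.vector, vector) }))
      .sort((a, b) => b.score - a.score)                // best match first
      .slice(0, topK);
  }
};

(async () => {
  for (const text of ['the user likes cats', 'billing plan is pro tier']) {
    await vectorDB.upsert({
      id: text, vector: embed(text), text,
      metadata: { userId: 'u1', timestamp: Date.now() }
    });
  }
  const hits = await vectorDB.query({
    vector: embed('cats'), topK: 1, filter: { userId: 'u1' }
  });
  console.log(hits[0].text); // → the user likes cats
})();
```

Swapping the stub for Pinecone, Qdrant, or pgvector mostly means changing the `query`/`upsert` calls; the recall-then-store shape of the loop stays the same.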