Agent Memory
Explore how Knowledge Graphs enable persistent, structured memory for AI agents. Learn about short-term, long-term, and episodic memory patterns for intelligent agents.
Agent Memory is the ability of AI agents to store, recall, and reason over past experiences, learned facts, and interactions. In Knowledge Graph systems like TrustGraph, agent memory is implemented as a structured graph where entities represent memories, facts, and experiences, and relationships capture temporal, causal, and semantic connections.
Why AI Agents Need Memory
Without memory, AI agents are stateless - each interaction starts from scratch:
❌ No learning from past interactions
❌ Can't maintain context across conversations
❌ No personalization
❌ Redundant information gathering
❌ Can't build on previous work
With proper memory, agents become intelligent assistants:
✅ Learn user preferences over time
✅ Maintain conversation context
✅ Remember past decisions and reasoning
✅ Build knowledge incrementally
✅ Provide personalized experiences
Types of Agent Memory
1. Short-Term Memory (Working Memory)
Temporary memory for the current task or conversation:
// Short-term memory for current conversation
const shortTermMemory = {
conversationId: "conv_123",
messages: [
{ role: "user", content: "What is X?", timestamp: "..." },
{ role: "assistant", content: "X is...", timestamp: "..." }
],
context: {
currentTopic: "X",
entities: ["X", "Y", "Z"],
userIntent: "information_seeking"
}
};
// Persist to whatever fast-access store backs short-term memory (e.g. Redis or an in-memory map)
await shortTermStore.set(shortTermMemory.conversationId, shortTermMemory);
Characteristics:
- Fast access (milliseconds)
- Limited size (last N messages)
- Cleared after conversation ends
- Focus on current context
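For example, the characteristics above can be satisfied by a small in-process buffer that keeps only the last N messages per conversation and is dropped when the conversation ends. This is an illustrative sketch, not a TrustGraph API:
// Illustrative short-term store: bounded, per-conversation, cleared on conversation end
type ChatMessage = { role: "user" | "assistant"; content: string; timestamp: string };

class ShortTermStore {
  private conversations = new Map<string, ChatMessage[]>();
  constructor(private maxMessages = 20) {}

  append(conversationId: string, message: ChatMessage) {
    const history = this.conversations.get(conversationId) ?? [];
    history.push(message);
    // Keep only the most recent N messages
    this.conversations.set(conversationId, history.slice(-this.maxMessages));
  }

  get(conversationId: string): ChatMessage[] {
    return this.conversations.get(conversationId) ?? [];
  }

  // Short-term memory is not persisted; drop it when the conversation ends
  clear(conversationId: string) {
    this.conversations.delete(conversationId);
  }
}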
2. Long-Term Memory (Persistent Memory)
Permanent storage of facts, preferences, and learned information:
// Store persistent facts in Knowledge Graph
await trustgraph.memory.store({
agent: "assistant_agent_1",
memoryType: "long-term",
entities: [
{
type: "user_preference",
user: "user_123",
preference: "prefers_concise_answers",
confidence: 0.9,
learnedFrom: "conversation_history",
timestamp: "2025-12-24T10:00:00Z"
},
{
type: "fact",
subject: "Project Apollo",
predicate: "completed_in",
object: "1969",
source: "conversation_42",
verifiedBy: "external_database"
}
],
relationships: [
{
source: "user_123",
type: "prefers",
target: "concise_answers"
}
]
});
Characteristics:
- Persistent across sessions
- Stored in Knowledge Graph
- Searchable and queryable
- Supports reasoning and inference
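Because these memories persist in the graph, the agent can query them before acting. A minimal sketch reusing the recall call described under Memory Operations below; the filter fields shown are assumptions:
// Recall what is already known about a user before responding (illustrative)
const knownAboutUser = await trustgraph.memory.recall({
  agent: "assistant_agent_1",
  query: "user_123",
  memoryTypes: ["user_preference", "fact"],
  minRelevance: 0.5,
  limit: 10
});
// Surface stored preferences such as "prefers_concise_answers"
const preferences = knownAboutUser.filter(m => m.type === "user_preference");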
3. Episodic Memory (Experience Memory)
Memory of specific events, interactions, and experiences:
// Store episodic memory of agent actions
await trustgraph.memory.storeEpisode({
agent: "assistant_agent_1",
episodeId: "episode_456",
event: {
type: "task_completion",
task: "data_analysis",
timestamp: "2025-12-24T14:30:00Z",
context: {
user: "user_123",
goal: "analyze_sales_data",
tools_used: ["python", "pandas", "matplotlib"],
outcome: "success",
insights: ["Sales increased 23% in Q4", "Product X top performer"]
},
sequence: [
{ action: "load_data", status: "success", duration: "2s" },
{ action: "clean_data", status: "success", duration: "5s" },
{ action: "analyze", status: "success", duration: "10s" },
{ action: "visualize", status: "success", duration: "3s" }
]
}
});
// Later, recall similar episodes
const similarEpisodes = await trustgraph.memory.recallEpisodes({
agent: "assistant_agent_1",
similarTo: {
task: "data_analysis",
tools: ["python", "pandas"]
},
limit: 5
});
// Learn from past experiences
if (similarEpisodes.some(e => e.outcome === "success")) {
console.log("I've successfully completed similar tasks before");
const bestApproach = findBestApproach(similarEpisodes);
}
Characteristics:
- Temporally ordered events
- Rich contextual information
- Supports learning from experience
- Enables case-based reasoning
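Because episodes carry timestamps and action sequences, recalled episodes can be ordered in time and the most recent successful approach reused, which is the essence of case-based reasoning. A sketch continuing the recallEpisodes example above (it assumes episodes come back with flattened outcome, timestamp, and sequence fields, as in that snippet):
// Reuse the plan from the most recent successful episode (illustrative)
const recentSuccesses = similarEpisodes
  .filter(e => e.outcome === "success")
  .sort((a, b) => new Date(b.timestamp).getTime() - new Date(a.timestamp).getTime());

if (recentSuccesses.length > 0) {
  // Start from the action sequence that worked last time
  const plan = recentSuccesses[0].sequence.map(step => step.action);
  console.log("Reusing plan from a past episode:", plan);
}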
Memory in Knowledge Graphs
Knowledge Graphs are ideal for agent memory because they naturally represent:
- Entities: Users, facts, preferences, events
- Relationships: Temporal, causal, semantic connections
- Properties: Attributes, confidence scores, timestamps
- Provenance: Where information came from
Memory Graph Schema
// Example memory graph structure
const memoryGraph = {
nodes: [
// User entities
{ id: "user_123", type: "user", name: "Alice" },
// Preference entities
{ id: "pref_1", type: "preference", value: "concise_answers", confidence: 0.9 },
// Fact entities
{ id: "fact_42", type: "fact", statement: "Paris is capital of France", verified: true },
// Episode entities
{ id: "episode_1", type: "episode", task: "data_analysis", timestamp: "2025-12-24", outcome: "success" },
// Context entities
{ id: "context_1", type: "conversation_context", topic: "travel" }
],
edges: [
// User preferences
{ source: "user_123", type: "has_preference", target: "pref_1" },
// Fact provenance
{ source: "fact_42", type: "learned_from", target: "episode_1" },
// Temporal relationships (episode_2 is a later episode, omitted from the nodes above)
{ source: "episode_1", type: "happened_before", target: "episode_2" },
// Causal relationships (action_1 and outcome_1 are likewise omitted for brevity)
{ source: "action_1", type: "caused", target: "outcome_1" }
]
};
Memory Operations
Storing Memories
// Store a new memory
async function storeMemory(agent: string, memory: Memory) {
await trustgraph.memory.store({
agent,
memory: {
type: memory.type,
content: memory.content,
timestamp: new Date().toISOString(),
// Metadata
importance: calculateImportance(memory),
emotionalValence: memory.emotion,
tags: extractTags(memory),
// Relationships to existing memories
relatedTo: await findRelatedMemories(memory),
// Provenance
source: memory.source,
confidence: memory.confidence
}
});
}
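The helpers calculateImportance, extractTags, and findRelatedMemories above are hooks you would supply rather than built-in calls; these are purely illustrative implementations with assumed signatures:
// Illustrative helpers for storeMemory; heuristics and signatures are assumptions
function calculateImportance(memory: Memory): number {
  // Simple heuristic: start from confidence, boost explicit user statements
  let score = memory.confidence ?? 0.5;
  if (memory.source === "user_statement") score += 0.2;
  return Math.min(score, 1.0);
}

function extractTags(memory: Memory): string[] {
  // Naive keyword tagging; a real system might run an entity extractor instead
  return memory.content.toLowerCase().split(/\W+/).filter(word => word.length > 4);
}

async function findRelatedMemories(memory: Memory): Promise<string[]> {
  // Link the new memory to existing memories about similar content.
  // The agent id would normally be passed in or captured from surrounding scope.
  const related = await trustgraph.memory.recall({
    agent: "assistant_agent_1",
    query: memory.content,
    limit: 5
  });
  return related.map(m => m.id);
}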
Retrieving Memories
// Retrieve relevant memories for current context
async function recallMemories(agent: string, context: Context) {
return await trustgraph.memory.recall({
agent,
// Query by context
query: context.currentTopic,
// Filter by type
memoryTypes: ["fact", "preference", "episode"],
// Recency bias - prefer recent memories
recencyWeight: 0.3,
// Relevance threshold
minRelevance: 0.6,
// Limit results
limit: 20
});
}
Updating Memories
// Update existing memory (e.g., increase confidence)
async function reinforceMemory(agent: string, memoryId: string) {
await trustgraph.memory.update({
agent,
memoryId,
// Increase confidence through reinforcement
updates: {
confidence: "+0.1", // Increment
lastAccessed: new Date().toISOString(),
accessCount: "+1"
}
});
}
Forgetting (Memory Decay)
// Implement memory decay - forget old, unused memories
async function applyMemoryDecay(agent: string) {
const oldMemories = await trustgraph.memory.find({
agent,
filter: {
lastAccessed: { before: "30_days_ago" },
importance: { lessThan: 0.5 }
}
});
for (const memory of oldMemories) {
// Reduce confidence over time
await trustgraph.memory.update({
agent,
memoryId: memory.id,
updates: {
confidence: "*0.9" // Multiply by decay factor
}
});
// Remove the memory entirely once its decayed confidence drops below the threshold
if (memory.confidence * 0.9 < 0.2) {
await trustgraph.memory.delete({ agent, memoryId: memory.id });
}
}
}
Memory-Augmented Agent Patterns
1. Conversational Agent with Memory
class ConversationalAgent {
async processMessage(userId: string, message: string) {
// 1. Retrieve user context from long-term memory
const userContext = await trustgraph.memory.recall({
agent: this.agentId,
query: userId,
memoryTypes: ["user_preference", "conversation_history"],
limit: 10
});
// 2. Get short-term conversation context
const conversationHistory = await this.shortTermMemory.get(userId);
// 3. Build context for LLM
const context = {
userPreferences: userContext.preferences,
conversationHistory: conversationHistory.messages,
relevantFacts: userContext.facts
};
// 4. Generate response with context
const response = await this.llm.generate({
context,
message,
systemPrompt: this.buildPromptWithMemory(context)
});
// 5. Store new memories from this interaction
await this.storeInteractionMemories(userId, message, response);
return response;
}
buildPromptWithMemory(context: Context) {
return `
You are a helpful assistant with memory of past interactions.
User Preferences:
${context.userPreferences.map(p => `- ${p.description}`).join('\n')}
Conversation History:
${context.conversationHistory.map(m => `${m.role}: ${m.content}`).join('\n')}
Remember these preferences and conversation context in your response.
`;
}
async storeInteractionMemories(userId: string, message: string, response: string) {
// Extract and store new facts
const facts = await this.extractFacts(message, response);
for (const fact of facts) {
await trustgraph.memory.store({
agent: this.agentId,
memory: { type: "fact", ...fact }
});
}
// Update user preferences if detected
const preferences = await this.detectPreferences(message, response);
for (const pref of preferences) {
await trustgraph.memory.store({
agent: this.agentId,
memory: { type: "user_preference", userId, ...pref }
});
}
}
}
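The extractFacts and detectPreferences methods are left abstract in the class above. A common approach is to ask the same LLM to return structured JSON; the sketch below is one possible shape, with the prompts, the LLMClient type, and the output format all being assumptions:
// Illustrative extraction helpers (would be methods on ConversationalAgent)
type LLMClient = { generate(req: { message: string; systemPrompt?: string }): Promise<string> };

async function extractFacts(llm: LLMClient, message: string, response: string) {
  const raw = await llm.generate({
    message:
      "Extract factual statements from this exchange as a JSON array of " +
      "{subject, predicate, object} triples.\n" +
      `User: ${message}\nAssistant: ${response}`,
    systemPrompt: "Return only valid JSON."
  });
  return JSON.parse(raw); // e.g. [{ subject: "Paris", predicate: "is_capital_of", object: "France" }]
}

async function detectPreferences(llm: LLMClient, message: string, response: string) {
  const raw = await llm.generate({
    message:
      "List any user preferences implied by this exchange as a JSON array of " +
      `{preference, confidence} objects, or [] if none.\nUser: ${message}\nAssistant: ${response}`,
    systemPrompt: "Return only valid JSON."
  });
  return JSON.parse(raw);
}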
2. Task-Executing Agent with Episodic Memory
class TaskAgent {
async executeTask(task: Task) {
// 1. Recall similar past episodes
const similarEpisodes = await trustgraph.memory.recallEpisodes({
agent: this.agentId,
similarTo: {
taskType: task.type,
context: task.context
},
limit: 5
});
// 2. Learn from past experiences
const strategy = this.learnFromEpisodes(similarEpisodes);
// 3. Execute task with learned strategy
const episode = {
episodeId: generateId(),
task,
startTime: new Date(),
actions: []
};
try {
for (const step of strategy.steps) {
const action = await this.executeAction(step);
episode.actions.push(action);
// Store action in episode
await trustgraph.memory.updateEpisode({
agent: this.agentId,
episodeId: episode.episodeId,
action
});
}
episode.outcome = "success";
episode.result = await this.getResult();
} catch (error) {
episode.outcome = "failure";
episode.error = error.message;
}
// 4. Store complete episode for future learning
await trustgraph.memory.storeEpisode({
agent: this.agentId,
episode
});
return episode.result;
}
learnFromEpisodes(episodes: Episode[]) {
// Analyze successful episodes
const successful = episodes.filter(e => e.outcome === "success");
if (successful.length > 0) {
// Extract common patterns
const commonActions = findCommonActionSequence(successful);
const avgDuration = average(successful.map(e => e.duration));
return {
steps: commonActions,
estimatedDuration: avgDuration,
confidence: successful.length / episodes.length
};
}
// Fall back to default strategy
return this.defaultStrategy();
}
}
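findCommonActionSequence and average, used in learnFromEpisodes, are not defined above; here is one minimal, assumed implementation, using the episode shape from the earlier storeEpisode example:
// Illustrative helpers for learnFromEpisodes; episode and action shapes are assumptions
type EpisodeAction = { action: string; status: string; duration: string };
type PastEpisode = { outcome: string; duration: number; sequence: EpisodeAction[] };

function average(values: number[]): number {
  return values.length ? values.reduce((sum, v) => sum + v, 0) / values.length : 0;
}

function findCommonActionSequence(episodes: PastEpisode[]): string[] {
  // Simple heuristic: reuse the action order of the shortest successful run,
  // on the assumption that it avoided unnecessary steps
  const shortest = [...episodes].sort((a, b) => a.sequence.length - b.sequence.length)[0];
  return shortest ? shortest.sequence.map(step => step.action) : [];
}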
3. Learning Agent with Continuous Memory Updates
class LearningAgent {
async learn(observation: Observation, reward: number) {
// 1. Retrieve relevant past experiences
const pastExperiences = await trustgraph.memory.recallEpisodes({
agent: this.agentId,
similarTo: observation,
limit: 20
});
// 2. Update memory based on reward
if (reward > 0) {
// Reinforce successful memories
await this.reinforceMemories(pastExperiences, reward);
} else {
// Weaken unsuccessful memories
await this.weakenMemories(pastExperiences, reward);
}
// 3. Store new experience
await trustgraph.memory.store({
agent: this.agentId,
memory: {
type: "experience",
observation,
action: this.lastAction,
reward,
timestamp: new Date().toISOString()
}
});
// 4. Update policy based on accumulated memories
await this.updatePolicy();
}
async reinforceMemories(experiences: Experience[], reward: number) {
for (const exp of experiences) {
await trustgraph.memory.update({
agent: this.agentId,
memoryId: exp.id,
updates: {
confidence: `+${reward * 0.1}`,
successCount: "+1"
}
});
}
}
}
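weakenMemories, used above, mirrors reinforceMemories with a negative adjustment. A minimal sketch (the string-increment update syntax follows this page's earlier examples and is itself an assumption):
// Illustrative counterpart to reinforceMemories (would be a method on LearningAgent)
async function weakenMemories(agentId: string, experiences: Experience[], reward: number) {
  for (const exp of experiences) {
    await trustgraph.memory.update({
      agent: agentId,
      memoryId: exp.id,
      updates: {
        // reward is negative here, so this lowers confidence
        confidence: `${reward * 0.1}`,
        failureCount: "+1"
      }
    });
  }
}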
Memory-Aware Context Building
Use agent memory to build richer context for LLM queries:
async function buildMemoryAwareContext(agent: string, query: string) {
// 1. Retrieve relevant memories
const memories = await trustgraph.memory.recall({
agent,
query,
memoryTypes: ["fact", "preference", "episode"],
limit: 30
});
// 2. Organize memories by type
const context = {
// Facts relevant to query
facts: memories
.filter(m => m.type === "fact")
.sort((a, b) => b.confidence - a.confidence)
.map(m => m.statement),
// User preferences
preferences: memories
.filter(m => m.type === "preference")
.map(m => m.description),
// Relevant past episodes
pastExperiences: memories
.filter(m => m.type === "episode" && m.outcome === "success")
.map(m => `Previously: ${m.summary}`)
};
// 3. Format for LLM
return `
## Relevant Facts:
${context.facts.join('\n')}
## User Preferences:
${context.preferences.join('\n')}
## Past Experience:
${context.pastExperiences.join('\n')}
`;
}
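The formatted block can then be prepended to the system prompt for the actual query; a brief usage sketch (the llm client and its generate signature follow the ConversationalAgent example above and are assumptions):
// Illustrative usage: include the memory-aware context in an LLM call
const memoryContext = await buildMemoryAwareContext("assistant_agent_1", "Plan a trip to Paris");
const answer = await llm.generate({
  systemPrompt: `You are a helpful assistant with memory of past interactions.\n${memoryContext}`,
  message: "Plan a trip to Paris"
});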
Memory Consolidation
Periodically consolidate memories to extract higher-level patterns:
async function consolidateMemories(agent: string) {
// 1. Retrieve all recent memories
const recentMemories = await trustgraph.memory.find({
agent,
filter: {
timestamp: { after: "7_days_ago" }
}
});
// 2. Detect patterns
const patterns = await detectPatterns(recentMemories);
// 3. Create higher-level memories from patterns
for (const pattern of patterns) {
await trustgraph.memory.store({
agent,
memory: {
type: "pattern",
description: pattern.description,
derivedFrom: pattern.sourceMemories.map(m => m.id),
confidence: pattern.frequency,
abstractionLevel: "meta"
}
});
}
// 4. Merge similar memories
const duplicates = await findDuplicateMemories(recentMemories);
for (const [mem1, mem2] of duplicates) {
await mergeMemories(agent, mem1, mem2);
}
}
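detectPatterns, findDuplicateMemories, and mergeMemories are application-specific hooks. As one illustration, pattern detection could simply group recent memories by tag and promote tags that recur; the shapes and the threshold below are assumptions:
// Illustrative detectPatterns: promote tags that recur across recent memories
async function detectPatterns(memories: Array<{ id: string; tags?: string[] }>) {
  const byTag = new Map<string, { id: string }[]>();
  for (const memory of memories) {
    for (const tag of memory.tags ?? []) {
      const group = byTag.get(tag) ?? [];
      group.push(memory);
      byTag.set(tag, group);
    }
  }
  // Treat any tag that appears in three or more memories as an emerging pattern
  return [...byTag.entries()]
    .filter(([, group]) => group.length >= 3)
    .map(([tag, group]) => ({
      description: `Recurring topic: ${tag}`,
      sourceMemories: group,
      frequency: group.length / memories.length
    }));
}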
Best Practices
- Separate Memory Types: Use different storage for short-term vs long-term memory
- Implement Decay: Remove or downweight old, unused memories
- Track Confidence: Store confidence scores and update based on reinforcement
- Add Provenance: Always track where memories came from
- Use Timestamps: Enable temporal reasoning and memory decay
- Consolidate Periodically: Extract patterns and merge similar memories
- Limit Context Size: Don't overload the LLM with too many memories
- Prioritize Relevance: Retrieve the most relevant memories for the current context
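Several of these practices can be collected into a single memory policy the agent applies on every turn. A hypothetical configuration sketch (none of these field names come from TrustGraph):
// Hypothetical memory policy combining decay, confidence tracking, and context limits
const memoryPolicy = {
  shortTerm: { maxMessages: 20, clearOnConversationEnd: true },
  longTerm: { decayFactor: 0.9, decayAfterDays: 30, minConfidenceToKeep: 0.2 },
  retrieval: { maxMemoriesInContext: 20, minRelevance: 0.6, recencyWeight: 0.3 },
  consolidation: { everyDays: 7, mergeDuplicates: true }
};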
Related Concepts
- Knowledge Cores - Modular memory storage
- Context Engineering - Building optimal LLM context
- GraphRAG - Memory-augmented retrieval
- Knowledge Graph - Graph data structure