TrustGraph vs Mem0
Compare TrustGraph's Knowledge Graph memory with Mem0's memory layer for LLMs. Understand the differences in memory management, context persistence, and knowledge representation.
TrustGraph and Mem0 both address AI memory and context management, but they approach the problem from fundamentally different angles: structured Knowledge Graphs versus flexible memory layers.
At a Glance
| Feature | TrustGraph | Mem0 |
|---|---|---|
| Core Abstraction | Knowledge Graph | Memory Layer |
| Data Structure | Graph (nodes + edges) | Vector embeddings + metadata |
| Memory Model | Structured entities & relationships | Unstructured conversation memory |
| Relationship Support | Native, first-class | Implicit in embeddings |
| Query Capability | Graph traversal + semantic search | Semantic similarity search |
| Deployment | Complete platform | Memory service/SDK |
| Primary Use Case | Knowledge management | Conversational context |
| Reasoning | Multi-hop graph reasoning | Retrieval-based context |
Core Philosophy
TrustGraph: Structured Knowledge Memory
TrustGraph provides structured, queryable knowledge through Knowledge Graphs:
// Build structured knowledge representation
await trustgraph.ingest({
sources: ["conversations/", "documents/", "databases/"],
graphConfig: {
extractEntities: true,
buildRelationships: true,
enableTemporal: true,
},
});
// Query with relationship awareness
const context = await trustgraph.query({
query: "What did John say about the Q4 budget?",
reasoning: "graph-traversal",
includeRelationships: true,
temporal: {
range: "Q4 2024",
},
});
// Rich structured result
{
entities: [
{ id: "john_smith", type: "Person", role: "CFO" },
{ id: "q4_budget", type: "Document", date: "2024-10-15" }
],
relationships: [
{ source: "john_smith", type: "mentioned", target: "q4_budget", timestamp: "2024-10-15T10:30:00Z" }
],
facts: [
{ subject: "q4_budget", predicate: "has_value", object: "$2.5M", source: "john_smith" }
]
}
Key characteristics:
- Entities and relationships explicitly modeled
- Facts are structured and traceable
- Multi-hop reasoning over connections
- Temporal awareness built-in
Mem0: Conversational Memory Layer
Mem0 provides a memory layer for conversations:
from mem0 import Memory
# Initialize memory
memory = Memory()
# Store conversational context
memory.add(
messages=[
{"role": "user", "content": "I love playing tennis on weekends"},
{"role": "assistant", "content": "That's great! Tennis is excellent exercise."}
],
user_id="john_doe"
)
# Retrieve relevant memories
memories = memory.search(
query="What does John like to do?",
user_id="john_doe"
)
# Returns relevant conversation snippets
# "I love playing tennis on weekends"
Key characteristics:
- Conversation-centric storage
- Semantic similarity retrieval
- User-specific memory isolation
- Lightweight and fast
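The user isolation point is easy to make concrete: every memory is scoped to a user_id, so a search for one user never surfaces another user's context. A minimal sketch using the same Memory API shown above (the output comment is illustrative):
from mem0 import Memory

memory = Memory()

# Two users get two separate memory spaces
memory.add(
    messages=[{"role": "user", "content": "I love playing tennis on weekends"}],
    user_id="john_doe"
)
memory.add(
    messages=[{"role": "user", "content": "I prefer indoor climbing"}],
    user_id="jane_roe"
)

# Searches are scoped by user_id
results = memory.search(query="favorite sports", user_id="jane_roe")
# Only jane_roe's memories are candidates; john_doe's tennis memory is not returned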
Memory Architecture
TrustGraph: Graph-Based Knowledge Store
Structured knowledge persistence:
// TrustGraph maintains explicit knowledge structure
const knowledgeGraph = {
nodes: [
{
id: "user_123",
type: "Person",
attributes: {
name: "John Doe",
preferences: ["tennis", "outdoor activities"],
location: "San Francisco"
}
},
{
id: "activity_tennis",
type: "Activity",
attributes: {
name: "Tennis",
category: "Sports",
frequency: "weekly"
}
}
],
edges: [
{
source: "user_123",
target: "activity_tennis",
type: "enjoys",
weight: 0.9,
firstMentioned: "2024-01-15",
lastMentioned: "2024-12-20",
mentionCount: 15
}
]
};
// Query with graph operations
const relatedActivities = await trustgraph.query({
cypher: `
MATCH (u:Person {id: 'user_123'})-[:enjoys]->(a:Activity)-[:related_to]->(related:Activity)
RETURN related
`
});
// Find activities related to what user enjoys
Benefits:
- Explicit relationship tracking
- Complex queries over connections
- Temporal evolution of knowledge
- Provenance for every fact
Mem0: Vector-Based Memory Store
Embedding-based retrieval:
# Mem0 stores memories as vectors + metadata
memory.add(
messages=[
{"role": "user", "content": "I enjoy playing tennis on weekends"}
],
user_id="john_doe",
metadata={"category": "preferences", "timestamp": "2024-01-15"}
)
# Retrieval via semantic similarity
memories = memory.search(
query="What sports does the user like?",
user_id="john_doe",
limit=5
)
# Returns semantically similar memories
# Relationships are implicit in embeddings, not explicit
Benefits:
- Fast semantic search
- Flexible, unstructured storage
- Simple API
- Low overhead
Context Management
TrustGraph: Multi-Dimensional Context
Rich context with relationships:
// Retrieve comprehensive context
const context = await trustgraph.getContext({
userId: "user_123",
topic: "weekend activities",
includeRelated: true,
depth: 2,
});
// Returns:
{
directFacts: [
{ fact: "user_123 enjoys tennis", confidence: 0.95 }
],
relatedEntities: [
{ entity: "tennis_club_sf", relationship: "member_of" },
{ entity: "sarah_jones", relationship: "plays_with" }
],
temporalContext: {
recentMentions: ["2024-12-20", "2024-12-13"],
frequency: "weekly",
trend: "consistent"
},
inferredPreferences: [
{ preference: "outdoor_activities", confidence: 0.82 },
{ preference: "social_sports", confidence: 0.78 }
]
}
Mem0: Conversational Context
Recent conversation snippets:
# Get conversation history
memories = memory.get_all(user_id="john_doe")
# Returns conversation memories
[
{
"id": "mem_1",
"memory": "User enjoys playing tennis on weekends",
"created_at": "2024-01-15T10:00:00Z",
"metadata": {"category": "preferences"}
},
{
"id": "mem_2",
"memory": "User is a member of SF Tennis Club",
"created_at": "2024-02-10T14:30:00Z",
"metadata": {"category": "affiliations"}
}
]
# Filter by category or time
recent_preferences = memory.search(
query="preferences",
user_id="john_doe",
filters={"category": "preferences"}
)
Use Case Fit
Choose TrustGraph When:
Complex Knowledge Management
- Building knowledge bases with interconnected concepts
- Need to understand relationships between entities
- Multi-hop reasoning required
- Example: "How does concept A relate to concept C through B?"
Enterprise Knowledge Systems
- Corporate knowledge management
- Interconnected business entities
- Compliance and audit trails needed
- Multiple users sharing knowledge
High-Accuracy Requirements
- Medical, legal, financial applications
- Facts must be verifiable and traceable
- Hallucinations unacceptable
- Need explicit provenance
Long-Term Knowledge Evolution
- Knowledge accumulates over months/years
- Track how understanding evolves
- Temporal queries ("What did we know in Q3?")
- Learning from historical patterns
Choose Mem0 When:
Conversational AI
- Chatbots and virtual assistants
- User-specific conversation history
- Recent context is most important
- Example: "Remember what user said last week"
Personalized Applications
- User preference tracking
- Recommendation systems
- Personal assistant applications
- Individual memory isolation
Rapid Development
- Quick prototyping
- Simple memory requirements
- Don't need complex relationships
- Python-native projects
Lightweight Memory
- Memory as add-on to existing app
- Minimal infrastructure
- Pay-per-use model
- Simple retrieval needs
Reasoning Capabilities
TrustGraph: Graph-Based Reasoning
Multi-hop relationship traversal:
// Complex reasoning over knowledge structure
const reasoning = await trustgraph.reason({
query: "Why might user_123 enjoy hiking?",
strategy: "graph-inference",
maxDepth: 3,
});
// Returns reasoning paths:
{
inferences: [
{
path: [
"user_123 → enjoys → tennis",
"tennis → category → outdoor_sports",
"outdoor_sports → similar_to → hiking"
],
confidence: 0.75
},
{
path: [
"user_123 → location → san_francisco",
"san_francisco → has_access_to → hiking_trails",
"hiking_trails → enable → hiking"
],
confidence: 0.68
}
]
}
Mem0: Similarity-Based Context
Retrieval-based suggestions:
# Find relevant memories
memories = memory.search(
query="outdoor activities user might like",
user_id="john_doe",
limit=10
)
# Returns similar memories
# Reasoning is implicit in semantic similarity
# No explicit relationship chains
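If a rough multi-hop effect is needed on top of Mem0, it can be approximated by chaining searches in application code, feeding retrieved memories back in as follow-up queries. A hedged sketch, assuming search results expose id and memory fields as in the get_all example above; the chaining itself is glue code, not a Mem0 feature:
# First hop: retrieve memories relevant to the original question
first_hop = memory.search(
    query="outdoor activities user might like",
    user_id="john_doe",
    limit=5
)

# Second hop: use each retrieved memory's text as a follow-up query
second_hop = []
for hit in first_hop:
    second_hop.extend(
        memory.search(query=hit["memory"], user_id="john_doe", limit=3)
    )

# De-duplicate by id before passing the combined context to the LLM
combined = list({m["id"]: m for m in first_hop + second_hop}.values())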
Integration & Deployment
TrustGraph: Platform Deployment
Complete infrastructure:
# Docker Compose deployment
docker compose -f trustgraph_system.yaml up -d
# Includes:
# - Knowledge Graph database
# - Vector store
# - API gateway
# - Agent orchestration
# - Monitoring
Features:
- Multi-tenancy built-in
- User authentication
- Cost tracking
- Observability dashboards
- Audit logging
Mem0: SDK/Service Integration
Add to existing application:
# Option 1: Use the hosted Mem0 platform
from mem0 import MemoryClient
memory = MemoryClient(api_key="your_api_key")

# Option 2: Self-hosted, open-source Memory with your own vector store
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "url": "localhost:6333",
        }
    }
}
memory = Memory.from_config(config)
# Integrate into your app (FastAPI shown as an example; `llm` stands in for your LLM client)
from fastapi import FastAPI

app = FastAPI()

@app.post("/chat")
async def chat(message: str, user_id: str):
    # Get relevant memories
    memories = memory.search(query=message, user_id=user_id)
    # Use with LLM
    response = llm.generate(
        prompt=message,
        context=memories
    )
    # Store conversation
    memory.add(
        messages=[
            {"role": "user", "content": message},
            {"role": "assistant", "content": response}
        ],
        user_id=user_id
    )
    return response
Performance & Scalability
TrustGraph
- Optimized for: Complex queries over relationships
- Scaling: Distributed graph databases (Cassandra, Memgraph)
- Query speed: Graph traversal + vector search (100-500ms typical)
- Storage: Efficient graph storage with compression
- Caching: Intelligent subgraph caching
Mem0
- Optimized for: Fast vector similarity search
- Scaling: Horizontal scaling via vector DBs
- Query speed: Vector search only (10-50ms typical)
- Storage: Embeddings + metadata (lightweight)
- Caching: Vector store caching
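The latency figures above are rough, workload-dependent estimates, so measure in your own environment before deciding. A minimal timing sketch, assuming a mem0_memory instance and a trustgraph_client like the ones used in the combined example later on this page:
import time

def timed(label, fn, runs=20):
    # Average wall-clock latency over a few runs (no warm-up handling or percentiles)
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    avg_ms = (time.perf_counter() - start) / runs * 1000
    print(f"{label}: {avg_ms:.1f} ms average over {runs} runs")

timed("Mem0 semantic search",
      lambda: mem0_memory.search(query="weekend activities", user_id="john_doe", limit=5))

timed("TrustGraph graph query",
      lambda: trustgraph_client.query({
          "query": "weekend activities for user_123",
          "reasoning": "graph-traversal"
      }))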
Advanced Features
TrustGraph
- ✅ Knowledge Cores: Modular knowledge bases
- ✅ Graph Visualization: 3D interactive exploration
- ✅ Temporal Graphs: Time-aware relationships
- ✅ Ontology Support: Schema-driven graphs
- ✅ Multi-hop Reasoning: Relationship chain inference
- ✅ Provenance Tracking: Full audit trail
- ✅ Multi-modal: Text, structured data, future multimedia
Mem0
- ✅ User Isolation: Per-user memory spaces
- ✅ Memory Categories: Organize memories by type
- ✅ Metadata Filtering: Filter by custom metadata
- ✅ Memory Decay: Optional time-based relevance decay
- ✅ Multiple Backends: Qdrant, Pinecone, Chroma support
- ✅ Session Management: Conversation session tracking
- ✅ API & SDK: REST API and Python SDK
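As one example of the session-management and filtering features, memories can be scoped more narrowly than a user. The sketch below assumes the run_id parameter and metadata filters described in Mem0's documentation; check the version you run before relying on them:
# Scope a memory to a specific conversation session (run) for a user
memory.add(
    messages=[{"role": "user", "content": "For this trip I only want vegetarian restaurants"}],
    user_id="john_doe",
    run_id="trip_planning_42",              # session-level scope (assumed parameter)
    metadata={"category": "preferences"}
)

# Retrieve only memories from that session, filtered by category
session_prefs = memory.search(
    query="dietary preferences",
    user_id="john_doe",
    run_id="trip_planning_42",
    filters={"category": "preferences"}
)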
Pricing
TrustGraph
- Open source: Free
- Self-hosted: Infrastructure costs
- Enterprise support: Optional
Mem0
- Open source: Free (self-hosted)
- Mem0 Cloud: Pay-per-use (pricing varies)
- Infrastructure: Your vector DB if self-hosted
Migration & Integration
From Mem0 to TrustGraph
Why migrate:
- Need structured knowledge representation
- Want relationship-aware reasoning
- Require complex querying capabilities
- Need knowledge that evolves over time
Migration approach:
# Extract memories from Mem0
mem0_memories = memory.get_all(user_id="john_doe")

# Transform to entities and relationships.
# extract_entities is a placeholder for your own extraction step
# (for example an LLM prompt or NER pipeline); it is not part of either SDK.
entities = []
relationships = []

for mem in mem0_memories:
    # Extract entities from memory text
    extracted = extract_entities(mem["memory"])
    entities.extend(extracted["entities"])
    relationships.extend(extracted["relationships"])

# Load into TrustGraph
trustgraph.ingest_entities(entities)
trustgraph.ingest_relationships(relationships)
Using Both Together
Complementary use:
# Use Mem0 for short-term conversational memory
memories = mem0_memory.search(
query=user_message,
user_id=user_id,
limit=5
)
# Use TrustGraph for long-term knowledge and reasoning
knowledge_context = trustgraph_client.query({
"query": user_message,
"reasoning": "graph-traversal",
"user_id": user_id
})
# Combine both in LLM prompt
combined_context = {
"recent_conversation": memories,
"knowledge_base": knowledge_context
}
response = llm.generate(
prompt=user_message,
context=combined_context
)
Real-World Example
Customer Support Scenario
Mem0 Approach:
# Store customer interactions
memory.add(
messages=[
{"role": "user", "content": "My order #12345 hasn't arrived"},
{"role": "assistant", "content": "Let me check on that for you"}
],
user_id="customer_789",
metadata={"category": "support", "order": "12345"}
)
# Later, retrieve context
memories = memory.search(
query="order status",
user_id="customer_789"
)
# Returns: "My order #12345 hasn't arrived"
TrustGraph Approach:
// Build knowledge graph of customer, orders, and support
await trustgraph.ingest({
entities: [
{ id: "customer_789", type: "Customer", name: "Jane Smith" },
{ id: "order_12345", type: "Order", status: "shipped", date: "2024-12-15" },
{ id: "support_ticket_456", type: "SupportTicket", created: "2024-12-20" }
],
relationships: [
{ source: "customer_789", type: "placed", target: "order_12345" },
{ source: "customer_789", type: "opened", target: "support_ticket_456" },
{ source: "support_ticket_456", type: "regarding", target: "order_12345" }
]
});
// Complex query
const context = await trustgraph.query({
cypher: `
MATCH (c:Customer {id: 'customer_789'})-[:placed]->(o:Order)
MATCH (c)-[:opened]->(t:SupportTicket)-[:regarding]->(o)
RETURN c, o, t, o.status, t.history
`
});
// Returns full context with relationships and order status
Conclusion
TrustGraph and Mem0 serve different memory needs:
Choose TrustGraph when you need:
- Structured knowledge with explicit relationships
- Complex reasoning over interconnected data
- Long-term knowledge evolution and tracking
- Enterprise knowledge management
- Multi-hop reasoning capabilities
- Provenance and audit trails
Choose Mem0 when you need:
- Simple conversational memory
- User-specific context tracking
- Lightweight memory layer
- Quick integration with existing apps
- Recent conversation history
- Fast semantic retrieval
Use both together for comprehensive memory:
- Mem0 for short-term conversational context
- TrustGraph for long-term structured knowledge
- Best of both worlds for sophisticated applications