TrustGraph vs AWS AI Services
Compare TrustGraph's unified Knowledge Graph platform with AWS AI services (Bedrock, Kendra, Neptune). Learn the differences in deployment, cost, and integration complexity.
TrustGraph and AWS AI services both enable building intelligent applications, but they differ fundamentally: a unified open-source platform versus a composition of multiple managed AWS services.
At a Glance
| Feature | TrustGraph | AWS AI Services |
|---|---|---|
| Architecture | Unified Knowledge Graph platform | Multiple services to integrate |
| Primary Services | Single platform | Bedrock + Kendra + Neptune + SageMaker |
| Deployment | Self-hosted or any cloud | AWS-only |
| Licensing | Open source (Apache 2.0) | Proprietary, usage-based |
| Vendor Lock-in | Portable | AWS ecosystem lock-in |
| Knowledge Graph | Built-in, first-class | Neptune (separate service) |
| Vector Search | Integrated (Qdrant/Pinecone) | OpenSearch or Kendra |
| LLM Access | 40+ providers | Bedrock models only |
| Cost Model | Infrastructure-based | Pay-per-request/token |
| Data Sovereignty | Deploy anywhere | AWS regions only |
| Integration Complexity | Unified API | Multiple SDKs to coordinate |
Core Philosophy
TrustGraph: Unified Platform Approach
TrustGraph provides everything integrated in one platform:
// Single SDK, everything works together
import { TrustGraph } from "@trustgraph/sdk";

const trustgraph = new TrustGraph({
  endpoint: "https://your-deployment.com",
  // Deploy on AWS, Azure, GCP, on-premise, or hybrid
});

// Unified ingestion - builds Knowledge Graph + embeddings
await trustgraph.ingest({
  sources: ["s3://bucket/docs/", "postgres://db", "api://service"],
  graphConfig: {
    extractEntities: true,
    buildRelationships: true,
    ontology: "custom-schema.ttl",
  },
});

// Hybrid retrieval - graph + vector automatically coordinated
const result = await trustgraph.query({
  query: "How does product X relate to market Y?",
  strategy: "graph-rag",
  reasoning: "multi-hop",
});

// Multi-agent orchestration built-in
const agents = await trustgraph.createMultiAgent({
  researcher: { role: "research", graph: mainGraph },
  analyst: { role: "analyze", graph: mainGraph },
});
Key characteristics:
- Single platform, single API
- All components integrated
- Deploy anywhere
- No vendor lock-in
- Open source
AWS: Service Composition Approach
AWS requires assembling multiple services:
import boto3
# Service 1: Amazon Bedrock for LLMs
bedrock = boto3.client('bedrock-runtime')
# Service 2: Amazon Kendra for search
kendra = boto3.client('kendra')
# Service 3: Amazon Neptune for graph queries (data-plane client)
neptune = boto3.client('neptunedata')
# Service 4: Amazon OpenSearch (management client; searches go through opensearch-py)
opensearch = boto3.client('opensearch')
# Service 5: S3 for storage
s3 = boto3.client('s3')
# Service 6: Lambda for orchestration
lambda_client = boto3.client('lambda')
# Service 7: Step Functions for workflows
stepfunctions = boto3.client('stepfunctions')
# You coordinate everything:
# 1. Upload documents to S3
s3.upload_file('doc.pdf', 'bucket', 'doc.pdf')
# 2. Trigger Lambda to process
lambda_client.invoke(FunctionName='process-doc', ...)
# 3. Extract entities (custom code)
# 4. Store in Neptune (custom code)
# 5. Generate embeddings (Bedrock)
response = bedrock.invoke_model(...)
# 6. Store vectors in OpenSearch (custom code)
# 7. Query Kendra for retrieval
kendra_response = kendra.query(...)
# 8. Query Neptune for graph
neptune_response = neptune.execute_gremlin_query(...)
# 9. Combine results (custom code)
# 10. Generate response with Bedrock
final_response = bedrock.invoke_model(...)
Key characteristics:
- Multiple services to integrate
- Complex orchestration required
- AWS-only deployment
- Usage-based pricing
- Vendor lock-in
Service Comparison
Knowledge Graph: TrustGraph vs Neptune
TrustGraph:
// Knowledge Graph is core to the platform
await trustgraph.ingest({
  sources: ["documents/"],
  graphConfig: {
    extractEntities: true,
    linkingStrategy: "semantic",
    ontology: "domain-ontology.ttl",
  },
});

// Rich graph queries built-in
const result = await trustgraph.query({
  cypher: `
    MATCH (p:Product)-[:COMPETES_WITH*1..2]->(c:Product)
    WHERE p.name = 'Product X'
    RETURN c, shortestPath((p)-[:COMPETES_WITH*]-(c))
  `,
});

// Integrated with vector search and LLMs
const context = await trustgraph.retrieve({
  query: "competitive analysis",
  strategy: "graph-rag",
  graphDepth: 3,
});
Amazon Neptune:
from gremlin_python.driver import client as gremlin_client
# Separate service - must provision and manage
neptune_endpoint = "wss://your-cluster.neptune.amazonaws.com:8182/gremlin"
neptune = gremlin_client.Client(neptune_endpoint, 'g')
# Write entities manually
neptune.submit("""
    g.addV('Product').property('name', 'Product X')
""").all().result()
# Query with Gremlin (different syntax from Cypher);
# emit() is needed to return matches at every hop, like Cypher's *1..2
result = neptune.submit("""
    g.V().has('Product', 'name', 'Product X')
     .repeat(out('COMPETES_WITH')).emit().times(2)
""").all().result()
# NOT integrated with:
# - Vector search (separate OpenSearch service)
# - LLMs (separate Bedrock service)
# - Document processing (custom Lambda functions)
# You coordinate all integration
Vector Search: TrustGraph vs OpenSearch/Kendra
TrustGraph:
// Vector search integrated with graph
await trustgraph.ingest({
  sources: ["docs/"],
  // Automatically creates both:
  // - Knowledge Graph entities
  // - Vector embeddings
});

// Hybrid search out-of-box
const results = await trustgraph.retrieve({
  query: "product features",
  strategy: "graph-rag",
  vectorTopK: 10,
  graphDepth: 2,
  fusion: "weighted",
});

// Returns unified results:
// {
//   vectorMatches: [...],
//   graphContext: {
//     entities: [...],
//     relationships: [...],
//   },
//   fusedResults: [...],
// }
AWS OpenSearch + Kendra:
# Service 1: OpenSearch for vectors
import json

import boto3
from opensearchpy import OpenSearch

opensearch = OpenSearch([{'host': 'your-cluster', 'port': 443}])
bedrock = boto3.client('bedrock-runtime')

# Manually create embeddings, then parse the vector out of the response
response = bedrock.invoke_model(
    modelId='amazon.titan-embed-text-v1',
    body=json.dumps({'inputText': text})
)
embedding = json.loads(response['body'].read())['embedding']

# Manually store in OpenSearch
opensearch.index(
    index='documents',
    body={'vector': embedding, 'text': text}
)
# Service 2: Kendra for semantic search (separate!)
kendra = boto3.client('kendra')
kendra_response = kendra.query(
IndexId='your-index-id',
QueryText='product features'
)
# Manually combine OpenSearch and Kendra results
# Manually merge with Neptune graph data
# Custom fusion logic required
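The "custom fusion logic" noted above is left entirely to you on AWS. A minimal sketch of weighted score fusion, assuming both services return pre-normalized `(doc_id, score)` pairs in [0, 1] (a simplification; real responses need parsing first):

```python
def fuse_results(vector_hits, keyword_hits, alpha=0.6):
    """Weighted score fusion: alpha weights vector similarity,
    (1 - alpha) weights keyword relevance."""
    scores = {}
    for doc_id, score in vector_hits:
        scores[doc_id] = scores.get(doc_id, 0.0) + alpha * score
    for doc_id, score in keyword_hits:
        scores[doc_id] = scores.get(doc_id, 0.0) + (1 - alpha) * score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Doc "a" appears in both result sets, so it ranks first
ranked = fuse_results([("a", 0.9), ("b", 0.7)], [("a", 0.5), ("c", 0.8)])
```

Real systems add normalization across different score scales and reciprocal-rank variants; this only shows why the merge step is non-trivial glue code.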
LLM Access: TrustGraph vs Bedrock
TrustGraph:
// 40+ LLM providers supported
await trustgraph.configure({
  llm: {
    provider: "openai", // or anthropic, google, azure, aws, cohere, etc.
    model: "gpt-4-turbo",
    // Switch providers without code changes
  },
});

// Or use multiple providers
await trustgraph.agents.create({
  name: "multi-model-analyst",
  models: {
    reasoning: "claude-3-opus",
    summarization: "gpt-4-turbo",
    embedding: "openai-3-large",
  },
});

// Not locked into one provider
// Best model for each task
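The per-task model routing shown above can be approximated on any stack with a thin dispatch layer. A minimal sketch in Python (the task names and stub providers are illustrative, not the TrustGraph API):

```python
from typing import Callable, Dict


class ModelRouter:
    """Route each task type to a different provider callable,
    so swapping providers is a config change, not a code change."""

    def __init__(self, routes: Dict[str, Callable[[str], str]]):
        self.routes = routes

    def call(self, task: str, prompt: str) -> str:
        if task not in self.routes:
            raise KeyError(f"no model configured for task {task!r}")
        return self.routes[task](prompt)


# Stubs stand in for real SDK clients (anthropic, openai, ...)
router = ModelRouter({
    "reasoning": lambda p: "claude:" + p,
    "summarization": lambda p: "gpt:" + p,
})
```

The point of the indirection is that calling code names the task, never the vendor.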
Amazon Bedrock:
# Bedrock models only
bedrock = boto3.client('bedrock-runtime')
# Limited model selection
available_models = [
    'amazon.titan-text-express-v1',
    'anthropic.claude-v2',
    'anthropic.claude-3-sonnet-20240229-v1:0',
    'meta.llama2-70b-chat-v1',
    'cohere.command-text-v14',
    # AWS-curated selection only
]
# Cannot use:
# - Latest OpenAI models (GPT-4o, o1)
# - Latest Google models (Gemini Pro)
# - Self-hosted models
# - Custom fine-tuned models outside Bedrock
response = bedrock.invoke_model(
    modelId='anthropic.claude-3-sonnet-20240229-v1:0',
    body=json.dumps({...})
)
# Locked into Bedrock's model availability
Deployment & Operations
TrustGraph: Deploy Anywhere
Multiple deployment options:
# Option 1: Deploy on AWS with your control
# - EKS (Kubernetes)
# - ECS (containers)
# - EC2 (bare metal)
# - Your VPC
# - Your security groups
# - Your IAM policies
# Option 2: Deploy on Azure
# - AKS
# - Container Instances
# - VMs
# Option 3: Deploy on GCP
# - GKE
# - Cloud Run
# - Compute Engine
# Option 4: On-premise
# - Your data center
# - Bare metal
# - Air-gapped
# Option 5: Hybrid
# - Graph DB on-premise
# - Vector store in cloud
# - Compute distributed
# Option 6: Multi-cloud
# - AWS + Azure + GCP
# - No lock-in
// Same code works everywhere
const trustgraph = new TrustGraph({
  endpoint: process.env.TRUSTGRAPH_ENDPOINT,
  // Could be AWS, Azure, GCP, on-premise
});
AWS: AWS-Only Deployment
Locked to AWS regions:
# Must use AWS
bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')
neptune = boto3.client('neptune', region_name='us-east-1')
kendra = boto3.client('kendra', region_name='us-east-1')
# Cannot deploy:
# - On-premise
# - On Azure
# - On GCP
# - Air-gapped
# - Multi-cloud
# If AWS has an outage in your region, you're down
# If AWS discontinues a service, you must migrate
# If pricing changes, no alternatives
Cost Structure
TrustGraph: Predictable Infrastructure Costs
Infrastructure-based pricing:
Scenario: 1M documents, 10M queries/month
TrustGraph on AWS (self-managed):
- EKS cluster (3 nodes): $220/month
- Graph DB (Neo4j on EC2): $500/month
- Vector store (Qdrant): $200/month
- Storage (EBS + S3): $100/month
- Networking: $50/month
- LLM APIs (external): $2,000/month (volume discounts)
Total: ~$3,070/month
Benefits:
✅ Predictable costs
✅ No per-request charges
✅ Unlimited queries within infrastructure
✅ Choose cheapest LLM provider
✅ Volume discounts available
✅ Scale infrastructure as needed
TrustGraph breaks even vs AWS at roughly 100K documents and 1M queries/month.
AWS Services: Usage-Based Costs
Pay-per-request pricing:
Scenario: 1M documents, 10M queries/month
AWS Services:
- Bedrock (LLM): $5,000/month (input + output tokens)
- Bedrock (embeddings): $1,000/month (1B tokens)
- Neptune: $500/month (db.r5.large) + ~$2,000 in I/O requests ($0.20 per million) = $2,500/month
- OpenSearch: $800/month (instance) + storage
- Kendra: $1,200/month (Enterprise edition) + $7,000 in queries ($0.0007/query) = $8,200/month
- Lambda: $500/month (orchestration)
- S3: $200/month (storage + requests)
- Data transfer: $500/month
Total: ~$18,700/month
Challenges:
⚠️ Costs scale with usage
⚠️ Difficult to predict bills
⚠️ High per-query costs
⚠️ Kendra especially expensive
⚠️ Multiple service charges
⚠️ Data transfer fees
At low volume (<10K docs, <100K queries/month), AWS may be cheaper due to pay-as-you-go pricing.
At scale (>100K docs, >1M queries/month), TrustGraph is significantly more cost-effective.
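The break-even claim can be sanity-checked with back-of-envelope arithmetic. This sketch models the AWS stack as purely usage-based and TrustGraph as fixed infrastructure plus LLM spend; the per-query rates are assumptions chosen only to reproduce the scenario totals above, not quoted prices:

```python
TG_FIXED = 1070.0        # infra: EKS + graph DB + vector store + storage + network
TG_PER_QUERY = 0.0002    # external LLM API spend (~$2,000 at 10M queries)
AWS_PER_QUERY = 0.00187  # summed per-request/token charges (~$18,700 at 10M queries)


def cost_trustgraph(queries: int) -> float:
    return TG_FIXED + TG_PER_QUERY * queries


def cost_aws(queries: int) -> float:
    return AWS_PER_QUERY * queries


def break_even_queries() -> float:
    # Below this volume, pure pay-as-you-go is cheaper; above it,
    # TrustGraph's fixed cost amortizes away (~640K with these rates)
    return TG_FIXED / (AWS_PER_QUERY - TG_PER_QUERY)
```

With these illustrative rates the crossover lands in the hundreds of thousands of queries per month, consistent with the rough break-even stated above.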
Integration Complexity
TrustGraph: Unified Integration
Single SDK, everything integrated:
// One platform, one SDK
import { TrustGraph } from "@trustgraph/sdk";
const tg = new TrustGraph({ endpoint: "..." });
// Ingest - builds graph + vectors automatically
await tg.ingest({ sources: ["s3://bucket/"] });
// Query - hybrid search automatic
const results = await tg.query({ query: "...", strategy: "graph-rag" });
// Agents - orchestration built-in
const agents = await tg.createMultiAgent({...});
// Done - 10 lines of code
AWS: Complex Service Orchestration
Multiple services to coordinate:
# 7+ services to integrate
# 1. Set up S3 bucket
s3 = boto3.client('s3')
s3.create_bucket(Bucket='docs')
# 2. Set up Neptune cluster (Gremlin, openCypher, or SPARQL?)
neptune = boto3.client('neptune')
# Configure cluster, security groups, VPC...
# 3. Set up OpenSearch cluster
opensearch = boto3.client('opensearch')
# Configure domain, access policies...
# 4. Set up Kendra index
kendra = boto3.client('kendra')
kendra.create_index(...)
# Configure data sources, IAM roles...
# 5. Create Lambda functions for orchestration
lambda_client = boto3.client('lambda')
# Write custom processing code
# 6. Create Step Functions for workflows
stepfunctions = boto3.client('stepfunctions')
# Define state machines
# 7. Configure IAM roles and policies
iam = boto3.client('iam')
# Complex cross-service permissions
# 8. Write glue code to coordinate:
# - S3 -> Lambda -> Neptune
# - S3 -> Lambda -> Bedrock -> OpenSearch
# - Query: Kendra + Neptune + OpenSearch -> Bedrock
# Result: hundreds of lines of orchestration code
# Complex error handling across services
# Monitoring each service separately
# Debugging distributed failures
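Each cross-service hop in a pipeline like this needs its own error handling. A generic sketch of the retry-with-backoff glue that typically wraps every S3 -> Lambda -> Neptune call (the constants are arbitrary):

```python
import time


def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on exception with exponential backoff --
    the kind of glue each cross-service hop needs."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))
```

In a unified platform this policy lives inside the platform; in a composed stack you write and tune it per service.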
Advanced Features
TrustGraph: Platform Features
Built-in capabilities:
- ✅ Multi-hop reasoning: graph-based inference
- ✅ Knowledge Cores: modular knowledge bases
- ✅ Temporal graphs: time-aware relationships
- ✅ Three-dimensional visualization: interactive graph exploration
- ✅ Ontology support: schema-driven graphs
- ✅ MCP integration: Model Context Protocol native
- ✅ Multi-agent orchestration: coordinated agents
- ✅ Graph analytics: centrality, communities, paths
- ✅ Hybrid retrieval: graph + vector unified
- ✅ Provenance tracking: full audit trails
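At the retrieval layer, "multi-hop reasoning" reduces to bounded traversal over the entity graph. A toy, stdlib-only sketch of the idea (the adjacency data is invented for illustration; this is not TrustGraph's engine):

```python
from collections import deque


def multi_hop(graph, start, max_hops):
    """Return every entity reachable from `start` within `max_hops` edges."""
    seen = {start: 0}  # node -> hop distance
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # depth limit reached on this branch
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return {n for n in seen if n != start}


graph = {"ProductX": ["MarketY"], "MarketY": ["CompetitorZ"], "CompetitorZ": []}
```

On AWS this traversal, plus entity linking and result fusion, is the custom code you maintain; the platform approach ships it built in.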
AWS: DIY Feature Implementation
You implement:
- ⚠️ Multi-hop reasoning: write custom graph traversal logic
- ⚠️ Knowledge organization: design your own system
- ⚠️ Temporal queries: implement time handling
- ⚠️ Visualization: use a separate tool (Neptune Workbench)
- ⚠️ Schema management: manual Neptune schema
- ⚠️ Integrations: custom Lambda functions
- ⚠️ Agent orchestration: build with Step Functions
- ⚠️ Graph analytics: use Neptune Analytics (separate service)
- ⚠️ Hybrid retrieval: coordinate OpenSearch + Neptune + Kendra
- ⚠️ Audit trails: configure CloudTrail, CloudWatch
Data Sovereignty & Compliance
TrustGraph: Complete Control
Deploy anywhere for compliance:
// Healthcare (HIPAA)
const hipaaGraph = new TrustGraph({
  deployment: "on-premise", // Keep PHI internal
  encryption: "AES-256",
  auditLog: "enabled",
});

// Financial (SOX, PCI-DSS)
const financeGraph = new TrustGraph({
  deployment: "private-cloud",
  region: "us-financial-dc",
  compliance: ["SOX", "PCI-DSS"],
});

// Government (FedRAMP, ITAR)
const govGraph = new TrustGraph({
  deployment: "air-gapped", // No internet
  region: "gov-datacenter",
  clearanceLevel: "secret",
});

// EU (GDPR)
const euGraph = new TrustGraph({
  deployment: "eu-datacenter",
  dataResidency: "EU-only",
  rightToErasure: "enabled",
});
AWS: AWS Compliance Programs
AWS-defined compliance:
# Compliance depends on AWS certifications
# - HIPAA: Use HIPAA-eligible services
# - PCI-DSS: AWS is PCI-DSS compliant
# - SOC 2: AWS has SOC 2 reports
# - FedRAMP: Use GovCloud regions
# Limitations:
# - Data must be in AWS
# - Cannot do air-gapped deployments
# - Region availability varies
# - Some services not available in all compliance programs
# - Bedrock may not be HIPAA-eligible (check current status)
# Example: HIPAA deployment
bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')
# Must sign BAA with AWS
# Must use HIPAA-eligible services only
# Must configure encryption
# Still in AWS cloud, not on-premise
Use Case Recommendations
Choose TrustGraph For:
- **Cost-Sensitive Applications**
  - High document volumes (>100K documents)
  - High query volumes (>1M queries/month)
  - Need predictable costs
  - Want to optimize per-query costs
- **Data Sovereignty Requirements**
  - On-premise deployment needed
  - Air-gapped environments
  - Specific geographic constraints
  - Cannot use public cloud
- **Multi-Cloud Strategy**
  - Don't want AWS lock-in
  - Need portability
  - Hybrid cloud architecture
  - Future cloud flexibility
- **Custom Requirements**
  - Need a specific graph database (Memgraph, FalkorDB)
  - Want specific LLM providers
  - Require custom processing pipelines
  - Need source code access
- **Complex Graph Reasoning**
  - Multi-hop queries essential
  - Relationship understanding critical
  - Graph analytics required
  - Knowledge Graph is a core feature
Choose AWS Services For:
- **AWS-Native Applications**
  - Already heavily invested in AWS
  - AWS expertise on the team
  - Other AWS services integrated
  - AWS Enterprise Support
- **Low-Volume Prototypes**
  - <10K documents
  - <100K queries/month
  - Pay-as-you-go attractive
  - Rapid experimentation
- **Simple RAG Use Cases**
  - Don't need a Knowledge Graph
  - Kendra search sufficient
  - Basic semantic search
  - Limited integration needed
- **AWS Governance Requirements**
  - Corporate mandate for AWS
  - AWS-only policy
  - AWS Control Tower
  - AWS Organizations
Migration Paths
From AWS to TrustGraph
Migration path and benefits:
# Export from Neptune
neptune_data = export_neptune_graph()
# Export from OpenSearch/Kendra
documents = export_documents()
# Import to TrustGraph
trustgraph.import_graph(neptune_data)
trustgraph.ingest(documents)
# Simplify architecture
# Before: 7+ AWS services
# After: 1 TrustGraph platform
# Reduce costs
# Before: $18,700/month (AWS services)
# After: $3,070/month (TrustGraph on AWS) or less (on-premise)
# Gain flexibility
# - Deploy anywhere
# - Use any LLM provider
# - No vendor lock-in
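`export_neptune_graph()` and `import_graph()` above are placeholders; the substance of such a migration is mapping property-graph vertices and edges onto the target's format. A hedged sketch converting a Gremlin-style export into subject/predicate/object triples (the record shapes here are assumptions, not either product's real format):

```python
def to_triples(vertices, edges):
    """Flatten a property-graph export into (subject, predicate, object) triples."""
    triples = []
    for v in vertices:
        # Each vertex property becomes a literal-valued triple
        for key, value in v.get("properties", {}).items():
            triples.append((v["id"], key, value))
    for e in edges:
        # Each edge becomes an entity-to-entity triple
        triples.append((e["from"], e["label"], e["to"]))
    return triples


vertices = [{"id": "p1", "properties": {"name": "Product X"}}]
edges = [{"from": "p1", "label": "COMPETES_WITH", "to": "p2"}]
```

The same shape works in reverse if you ever need to leave, which is the portability argument in miniature.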
TrustGraph on AWS
Best of both worlds:
// Deploy TrustGraph on AWS infrastructure
// Use AWS compute/storage, but not AWS AI services
const trustgraph = new TrustGraph({
  deployment: {
    cloud: "aws",
    region: "us-east-1",
    compute: "eks",
    storage: "ebs",
  },
  llm: {
    provider: "openai", // Not locked to Bedrock
    fallback: "anthropic",
  },
});
// Benefits:
// ✅ Use AWS infrastructure (if you want)
// ✅ Not locked into AWS AI services
// ✅ Lower costs than AWS AI services
// ✅ Can migrate to other clouds
// ✅ Unified platform
Conclusion
TrustGraph vs AWS AI Services comparison:
Choose TrustGraph when you need:
- Cost-effective scaling (>100K docs, >1M queries/month)
- Unified Knowledge Graph platform
- Deploy anywhere (AWS, Azure, GCP, on-premise)
- No vendor lock-in
- Predictable costs
- Complex graph reasoning
- Open source flexibility
- Multi-LLM provider support
Choose AWS Services when you need:
- AWS-native integration
- Low-volume pay-as-you-go (<10K docs, <100K queries/month)
- AWS enterprise support
- Corporate AWS mandate
- Simple RAG without Knowledge Graphs
For production applications with scale, TrustGraph provides a more cost-effective, flexible, and integrated solution. You can even deploy TrustGraph on AWS infrastructure to get AWS benefits without the vendor lock-in and high per-query costs of AWS AI services.