LLM Integration with TrustGraph & MCP
Complete guide to integrating OpenAI and other LLMs with TrustGraph using Model Context Protocol for intelligent contextual grounding
Combine TrustGraph's Knowledge Graph architecture with OpenAI and other LLMs using Model Context Protocol (MCP) for intelligent, hallucination-resistant AI agents.
Overview
This guide shows you how to integrate TrustGraph with OpenAI's GPT models using the Model Context Protocol, enabling contextual grounding that is richer than flat-document RAG and reduces hallucinations.
Setup
Install Dependencies
npm install @trustgraph/sdk openai @modelcontextprotocol/sdk
Configure TrustGraph & MCP
Store your configuration securely:
export TRUSTGRAPH_ENDPOINT="http://localhost:8080"
export TRUSTGRAPH_TOKEN="your-trustgraph-token"
export OPENAI_API_KEY="your-openai-key"
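Before initializing any clients, it helps to fail fast when configuration is missing. A minimal sketch (the variable names match the exports above):

// Fail fast if required environment variables are missing
const required = ["TRUSTGRAPH_ENDPOINT", "TRUSTGRAPH_TOKEN", "OPENAI_API_KEY"];
for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}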
Model Context Protocol (MCP) Integration
TrustGraph supports MCP for seamless LLM integration with contextual grounding.
import { TrustGraphClient } from "@trustgraph/sdk";
import { MCPServer } from "@modelcontextprotocol/sdk";
import OpenAI from "openai";
// Initialize TrustGraph with MCP
const trustgraph = new TrustGraphClient({
endpoint: process.env.TRUSTGRAPH_ENDPOINT,
auth: { token: process.env.TRUSTGRAPH_TOKEN },
});
// Configure MCP server
const mcpServer = new MCPServer({
name: "trustgraph-context",
version: "1.0.0",
capabilities: {
resources: true, // Expose Knowledge Graph as resources
tools: true, // Graph query tools
prompts: true, // Contextual prompts
},
});
// Register TrustGraph tools with MCP
mcpServer.registerTool({
name: "query_knowledge_graph",
description: "Query TrustGraph Knowledge Graph for contextual information",
inputSchema: {
type: "object",
properties: {
query: { type: "string" },
maxDepth: { type: "number", default: 3 },
},
required: ["query"],
},
handler: async (input) => {
return await trustgraph.queryGraph({
query: input.query,
maxDepth: input.maxDepth,
includeRelationships: true,
});
},
});
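To make these tools reachable, the server needs a transport. The sketch below assumes the SDK follows the upstream @modelcontextprotocol/sdk pattern of a stdio transport plus connect(); check your SDK version for the exact import path:

// Expose the MCP server over stdio so clients (e.g. Claude Desktop) can connect.
// Import path and connect() mirror the upstream MCP TypeScript SDK (assumption).
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const transport = new StdioServerTransport();
await mcpServer.connect(transport);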
Contextual Grounding with OpenAI
Unlike simple RAG, TrustGraph provides structured graph context:
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
async function generateWithContext(query: string) {
// Query Knowledge Graph for structured context
const graphContext = await trustgraph.queryGraph({
query,
mode: "knowledge-graph",
maxDepth: 3,
includeRelationships: true,
});
// Format graph context for LLM
const formattedContext = formatGraphContext(graphContext);
// Generate with OpenAI using graph-grounded context
const completion = await openai.chat.completions.create({
model: "gpt-4-turbo-preview",
messages: [
{
role: "system",
content: `You are an AI assistant with access to a Knowledge Graph.
IMPORTANT: Only answer based on the provided graph context.
If the information is not in the context, say so.
Always cite entity IDs when referencing specific information.`,
},
{
role: "user",
content: `Knowledge Graph Context:\n${formattedContext}\n\nQuestion: ${query}`,
},
],
temperature: 0.2, // keep temperature low so answers stay close to the graph context
max_tokens: 500,
});
return {
response: completion.choices[0].message.content,
entities: graphContext.entities,
relationships: graphContext.relationships,
confidence: calculateConfidence(graphContext), // helper sketched below
};
}
function formatGraphContext(graphContext: any) {
// Format graph as structured context
const entities = graphContext.entities
.map(e => `- [${e.id}] ${e.type}: ${e.name} (${e.description})`)
.join("\n");
const relationships = graphContext.relationships
.map(r => `- ${r.source} → [${r.type}] → ${r.target}`)
.join("\n");
return `Entities:\n${entities}\n\nRelationships:\n${relationships}`;
}
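generateWithContext also calls calculateConfidence, which neither SDK defines; it is left to you. One possible heuristic (the weights are illustrative assumptions, not part of TrustGraph):

// Hypothetical helper: score confidence by how much graph evidence
// was retrieved. More entities and relationships -> stronger grounding.
function calculateConfidence(graphContext: any): number {
  const entityCount = graphContext.entities?.length ?? 0;
  const relationshipCount = graphContext.relationships?.length ?? 0;
  return Math.min(1, entityCount * 0.05 + relationshipCount * 0.03);
}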
Streaming Responses with Graph Context
Stream responses while maintaining graph grounding:
async function* streamWithGraph(query: string) {
// Pre-fetch graph context
const graphContext = await trustgraph.queryGraph({
query,
maxDepth: 3,
});
const formattedContext = formatGraphContext(graphContext);
const stream = await openai.chat.completions.create({
model: "gpt-4-turbo-preview",
messages: [
{
role: "system",
content: "Answer using only the Knowledge Graph context provided.",
},
{
role: "user",
content: `Graph Context:\n${formattedContext}\n\nQuestion: ${query}`,
},
],
stream: true,
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content || "";
if (content) {
yield {
text: content,
entities: graphContext.entities, // Attach for reference
};
}
}
}
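Consuming the generator is a standard for await loop:

// Example usage: print tokens as they arrive
for await (const chunk of streamWithGraph("How are these entities related?")) {
  process.stdout.write(chunk.text);
}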
Best Practices
1. Model Selection for TrustGraph
Choose models based on reasoning requirements (a routing sketch follows the list):
- GPT-4 Turbo: Best for complex multi-hop reasoning
- GPT-4o: Balanced performance with graph context
- Claude 3.5 Sonnet: Excellent at following graph constraints
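A routing sketch along those lines (the model names and depth threshold are illustrative assumptions):

// Route deeper, multi-hop traversals to the stronger reasoning model
function pickModel(maxDepth: number): string {
  return maxDepth > 2 ? "gpt-4-turbo-preview" : "gpt-4o";
}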
2. Contextual Grounding Prompts
Always emphasize graph-based grounding:
const systemPrompt = `You are an AI assistant with access to a Knowledge Graph.
STRICT RULES:
1. Only use information from the provided graph entities and relationships
2. Cite entity IDs when referencing information [entity:123]
3. If information is not in the graph, explicitly state this
4. Trace multi-hop reasoning through relationship chains
5. Never fabricate entities or relationships`;
3. Hallucination Detection
Monitor and prevent hallucinations:
async function detectHallucinations(response: string, graphContext: any) {
// Extract entity references from response
const mentionedEntities = extractEntityReferences(response);
// Verify all mentioned entities exist in graph
const validEntities = graphContext.entities.map(e => e.id);
const hallucinated = mentionedEntities.filter(
e => !validEntities.includes(e)
);
if (hallucinated.length > 0) {
console.warn("Hallucination detected:", hallucinated);
return { valid: false, hallucinated };
}
return { valid: true };
}
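extractEntityReferences is also left to you. Assuming responses cite entities in the [entity:123] format required by the system prompt above, a regex-based sketch:

// Pull entity IDs out of citations like [entity:123]
function extractEntityReferences(response: string): string[] {
  return [...response.matchAll(/\[entity:([^\]]+)\]/g)].map(m => m[1]);
}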
4. Cost Optimization with Graph Caching
TrustGraph's graph structure enables intelligent caching:
// Cache subgraphs for frequent queries
const graphCache = new Map();
async function getCachedGraphContext(query: string) {
const cacheKey = hash(query);
if (graphCache.has(cacheKey)) {
return graphCache.get(cacheKey);
}
const context = await trustgraph.queryGraph({ query });
graphCache.set(cacheKey, context);
return context;
}
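hash above is a placeholder; any stable key derivation works, for example Node's crypto module. Note the Map is unbounded — add TTL or LRU eviction before production use:

import { createHash } from "node:crypto";

// Derive a stable cache key from the query text
function hash(query: string): string {
  return createHash("sha256").update(query).digest("hex");
}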
Advanced: MCP Function Calling
Use OpenAI's function calling with MCP tools:
const tools = [
{
type: "function",
function: {
name: "query_knowledge_graph",
description: "Query TrustGraph Knowledge Graph for information",
parameters: {
type: "object",
properties: {
query: { type: "string", description: "The query" },
maxDepth: { type: "number", description: "Graph traversal depth" },
},
required: ["query"],
},
},
},
];
const completion = await openai.chat.completions.create({
model: "gpt-4-turbo-preview",
messages: [{ role: "user", content: "Explain knowledge graphs" }],
tools,
tool_choice: "auto",
});
// Handle tool calls
if (completion.choices[0].message.tool_calls) {
for (const toolCall of completion.choices[0].message.tool_calls) {
if (toolCall.function.name === "query_knowledge_graph") {
const args = JSON.parse(toolCall.function.arguments);
const result = await trustgraph.queryGraph(args);
// Send result back to OpenAI...
}
}
}
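The elided step returns each tool result to the model so it can compose a grounded answer. With OpenAI's chat completions API, that means appending the assistant message containing the tool_calls plus one tool message per call, then requesting a second completion (this goes inside the loop above):

// Inside the tool-call loop: send the graph result back to the model
const followUp = await openai.chat.completions.create({
  model: "gpt-4-turbo-preview",
  messages: [
    { role: "user", content: "Explain knowledge graphs" },
    completion.choices[0].message, // assistant message with tool_calls
    {
      role: "tool",
      tool_call_id: toolCall.id,
      content: JSON.stringify(result),
    },
  ],
});
console.log(followUp.choices[0].message.content);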
Multi-LLM Support
TrustGraph works with multiple LLM providers:
import Anthropic from "@anthropic-ai/sdk";
// Use Claude with TrustGraph
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const message = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [{
role: "user",
content: `Graph Context:\n${formattedContext}\n\nQuestion: ${query}`,
}],
});
Monitoring and Analytics
Track graph-grounded responses:
// Log with graph metrics (hallucinationCheck comes from detectHallucinations above)
console.log({
timestamp: new Date(),
query,
model: "gpt-4-turbo",
tokens: completion.usage.total_tokens,
graphNodes: graphContext.entities.length,
graphEdges: graphContext.relationships.length,
traversalDepth: graphContext.maxDepth,
hallucinationsDetected: hallucinationCheck.valid ? 0 : hallucinationCheck.hallucinated.length,
confidence: calculateConfidence(graphContext),
});
Conclusion
Integrating TrustGraph with OpenAI via Model Context Protocol creates a powerful, hallucination-resistant AI system. The Knowledge Graph architecture provides transparent, traceable, and grounded context that traditional RAG systems cannot match.
Next Steps
- Explore multi-model orchestration with MCP
- Implement graph-based confidence scoring
- Add entity resolution and deduplication
- Set up graph quality monitoring
- Enable real-time graph updates
- Integrate with Claude Desktop via MCP