Context Graphs
Knowledge graphs specifically optimized for AI model consumption, engineered to provide LLMs with structured, semantically rich context that reduces hallucinations and improves reasoning accuracy.
Context Graphs are knowledge graphs specifically engineered and optimized for consumption by AI models. Unlike general-purpose knowledge graphs, context graphs are designed with AI model limitations, context windows, and reasoning patterns in mind, providing structured, semantically rich information that maximizes model performance while minimizing hallucinations.
What Makes a Context Graph Different
Traditional Knowledge Graph
Optimized for:
- Human querying and exploration
- Comprehensive data storage
- Complex analytical queries
- Data warehousing
Context Graph
Optimized for:
- LLM consumption - Fits within context windows
- Semantic clarity - Unambiguous entity and relationship semantics
- Reasoning support - Structured to support multi-hop reasoning
- Relevance ranking - Prioritizes most relevant information
- Hallucination reduction - Provides grounded, verifiable facts
- Token efficiency - Dense information in minimal tokens
Key Characteristics
1. AI-Optimized Structure
Context graphs are structured to match how AI models reason:
// Traditional Knowledge Graph (comprehensive)
{
entities: [...1000s of entities...],
relationships: [...10,000s of relationships...],
properties: {...extensive metadata...}
}
// Context Graph (AI-optimized)
{
// Focused subgraph relevant to query
entities: [
{
id: "person_alice",
type: "Person",
name: "Alice Johnson",
role: "CEO",
relevance: 0.95 // AI relevance score
},
{
id: "company_techcorp",
type: "Company",
name: "TechCorp",
relevance: 0.92
}
],
relationships: [
{
from: "person_alice",
to: "company_techcorp",
type: "leads",
since: "2020",
strength: 1.0, // Relationship confidence
relevance: 0.90
}
],
metadata: {
query: "Who leads TechCorp?",
extractionTime: "2024-12-24T10:00:00Z",
sources: ["hr_database", "linkedin_profile"],
contextWindow: "8k tokens",
tokensUsed: 350
}
}
2. Semantic Clarity
Every entity and relationship has clear, unambiguous semantics:
// Context Graph Example
{
entity: {
id: "drug_metformin",
type: "Pharmaceutical",
semanticType: "http://schema.org/Drug",
canonicalName: "Metformin",
aliases: ["Glucophage", "Fortamet"],
definition: "Oral diabetes medication that controls blood sugar",
context: {
usedFor: "Type 2 Diabetes treatment",
mechanism: "Reduces glucose production in liver",
commonDosage: "500-2000mg daily"
}
},
relationships: [
{
type: "treats",
semanticType: "http://schema.org/treatsCondition",
target: "disease_type2diabetes",
evidence: ["clinical_trials", "fda_approval"],
confidence: 0.99
}
]
}
3. Provenance and Confidence
Context graphs include source tracking and confidence scores:
{
triple: {
subject: "Alice Johnson",
predicate: "works_at",
object: "TechCorp",
// Provenance
sources: [
{
type: "hr_database",
timestamp: "2024-12-01",
confidence: 1.0,
verified: true
},
{
type: "linkedin_profile",
timestamp: "2024-12-20",
confidence: 0.95,
verified: false
}
],
// Aggregated confidence
overallConfidence: 0.98,
lastVerified: "2024-12-01",
// Temporal validity
validFrom: "2020-01-15",
validTo: null // Still current
}
}
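The overall confidence above can be derived from the per-source scores in different ways. One plausible approach, sketched below, is a verification-weighted average; the weighting scheme, interface, and helper name are illustrative assumptions, not TrustGraph's actual aggregation logic.

// Hypothetical aggregation: verification-weighted average of source confidences
interface SourceRecord {
  type: string;
  confidence: number;   // 0..1
  verified: boolean;
}

function aggregateConfidence(sources: SourceRecord[]): number {
  // Verified sources count double (weight chosen for illustration only)
  const totalWeight = sources.reduce((sum, s) => sum + (s.verified ? 2 : 1), 0);
  const weightedSum = sources.reduce((sum, s) => sum + (s.verified ? 2 : 1) * s.confidence, 0);
  return Math.round((weightedSum / totalWeight) * 100) / 100;
}

// (2 × 1.0 + 1 × 0.95) / 3 ≈ 0.98 for the two sources above
aggregateConfidence([
  { type: "hr_database", confidence: 1.0, verified: true },
  { type: "linkedin_profile", confidence: 0.95, verified: false }
]);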
4. Token Efficiency
Optimized to convey maximum information in minimal tokens:
## Traditional Format (verbose, 150 tokens)
The person named Alice Johnson currently holds the position of Chief Executive Officer at the technology company known as TechCorp, which she has been leading since January 15th, 2020. TechCorp is a software company that was founded in 2015 and operates in the enterprise software industry. Alice previously worked at another company before joining TechCorp.
## Context Graph Format (concise, 45 tokens)
Alice Johnson → CEO → TechCorp (since 2020-01-15)
TechCorp → industry → Software
TechCorp → founded → 2015
TechCorp → type → Enterprise Software Company
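When tuning for token efficiency, it helps to estimate the cost of a candidate context before sending it to the model. Exact counts depend on the model's tokenizer; the helper below is a rough character-based heuristic (about four characters per token for English text), shown only as a sketch.

// Rough token estimate: ~4 characters per token for English text
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const tripleContext = [
  "Alice Johnson → CEO → TechCorp (since 2020-01-15)",
  "TechCorp → industry → Software",
  "TechCorp → founded → 2015",
  "TechCorp → type → Enterprise Software Company"
].join("\n");

// Drop the lowest-relevance facts until the context fits the budget
if (estimateTokens(tripleContext) > 2000) {
  // trim or re-rank before prompting the model
}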
Creating Context Graphs
From Documents (GraphRAG)
# TrustGraph automatically creates context-optimized graphs
tg-start-flow -n graph-rag -i graph-rag -d "Context graph extraction"
tg-add-library-document \
--name "Company Overview" \
--id doc-overview \
--kind text/plain \
documents/overview.txt
tg-start-library-processing \
--flow-id graph-rag \
--document-id doc-overview \
--collection enterprise-docs
# Query extracts optimized context subgraph
tg-invoke-graph-rag \
-f graph-rag \
-C enterprise-docs \
-q "Who are the key leaders at TechCorp?"
Result: Context graph containing only relevant entities (leaders, TechCorp) and relationships (leads, reports_to), optimized for LLM consumption.
Manual Context Engineering
// Extract and optimize context graph for specific query
const contextGraph = await trustgraph.queryGraph({
query: "What products does TechCorp sell?",
// Context optimization parameters
maxEntities: 20, // Limit for context window
maxDepth: 3, // Multi-hop reasoning depth
relevanceThreshold: 0.7, // Only highly relevant entities
includeProvenance: true, // Source tracking
semanticExpansion: true, // Include related concepts
// Relationship prioritization
prioritizeTypes: [
"sells",
"manufactures",
"offers",
"provides"
],
// Format for LLM
outputFormat: "markdown", // or "json", "rdf", "cypher"
includeDefinitions: true, // Add entity definitions
maxTokens: 2000 // Fit in context window
});
Ontology-Driven Context Graphs
# Use ontology to ensure semantic precision
cat domain-ontology.owl | tg-put-config-item \
--type ontology \
--key product-ontology \
--stdin
tg-start-flow -n onto-rag -i onto-rag -d "Ontology-driven context"
tg-start-library-processing \
--flow-id onto-rag \
--document-id product-catalog \
--collection products
# Context graph conforms to ontology
tg-invoke-graph-rag \
-f onto-rag \
-C products \
-q "Which products treat diabetes?"
Result: Context graph with typed entities (Drug, Disease, Treatment) conforming to medical ontology, ensuring semantic precision for medical AI applications.
Context Graph Optimization Strategies
1. Relevance Ranking
Prioritize information by relevance to query:
const optimizedContext = await trustgraph.optimizeContext({
query: "Why did sales decline?",
graph: fullKnowledgeGraph,
ranking: {
algorithm: "page-rank", // or "semantic-similarity", "temporal-proximity"
// Boost factors
boostFactors: {
temporal: 2.0, // Recent events more relevant
directMentions: 1.5, // Entities mentioned in query
highDegree: 1.2 // Well-connected entities
},
// Filter out low-relevance
minRelevance: 0.5
}
});
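Conceptually, each boost factor multiplies an entity's base relevance score, and anything that falls below minRelevance is dropped before ranking. The sketch below illustrates that idea with hypothetical field names; it mirrors the configuration above rather than TrustGraph's internal scoring.

// Illustrative relevance scoring: apply boosts, then filter by threshold
interface ScoredEntity {
  id: string;
  baseRelevance: number;     // e.g. from PageRank or semantic similarity
  isRecent: boolean;         // temporal boost applies
  mentionedInQuery: boolean; // direct-mention boost applies
  degree: number;            // number of connections in the graph
}

function rankEntities(entities: ScoredEntity[], minRelevance = 0.5): ScoredEntity[] {
  return entities
    .map(e => {
      let score = e.baseRelevance;
      if (e.isRecent) score *= 2.0;          // temporal boost
      if (e.mentionedInQuery) score *= 1.5;  // direct-mention boost
      if (e.degree > 10) score *= 1.2;       // high-degree boost
      return { ...e, baseRelevance: Math.min(score, 1.0) };
    })
    .filter(e => e.baseRelevance >= minRelevance)
    .sort((a, b) => b.baseRelevance - a.baseRelevance);
}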
2. Hierarchical Summarization
Provide multi-level context:
{
// High-level summary (always included)
summary: {
entities: ["TechCorp", "Q4 2024"],
keyFact: "Sales declined 15% in Q4 2024"
},
// Detailed context (included if tokens available)
details: {
entities: [
{
id: "q4_2024_sales",
type: "Metric",
value: "$10M",
change: "-15%",
comparedTo: "q4_2023"
},
{
id: "competitor_x",
type: "Company",
action: "launched rival product",
date: "2024-10-01"
}
],
relationships: [
"competitor_x → launched → rival_product (2024-10)",
"rival_product → caused → market_share_loss",
"market_share_loss → resulted_in → sales_decline"
]
},
// Supporting evidence (included only if space allows)
evidence: {
customerFeedback: [...],
marketAnalysis: [...],
financialReports: [...]
}
}
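The tiers above are packed into the token budget in priority order: the summary is always included, details are added if they fit, and evidence only if space remains. A minimal sketch of that packing logic, using a rough character-based token estimate and hypothetical tier names:

// Pack context tiers into a token budget, highest priority first
function packTiers(
  tiers: { name: string; text: string }[],  // e.g. summary, details, evidence (in priority order)
  maxTokens: number
): string[] {
  const included: string[] = [];
  let used = 0;
  for (const tier of tiers) {
    const cost = Math.ceil(tier.text.length / 4);  // rough token estimate
    if (tier.name === "summary" || used + cost <= maxTokens) {
      included.push(tier.name);  // summary always goes in; other tiers only if they fit
      used += cost;
    }
  }
  return included;  // e.g. ["summary", "details"] when evidence doesn't fit
}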
3. Temporal Ordering
Order information chronologically for reasoning:
{
temporalContext: {
timeline: [
{
date: "2024-01-01",
event: "TechCorp launches Product X",
entities: ["product_x", "techcorp"]
},
{
date: "2024-06-15",
event: "Competitor launches rival product",
entities: ["competitor", "rival_product"],
impact: "market_share_loss"
},
{
date: "2024-10-01",
event: "Sales decline becomes apparent",
entities: ["q3_sales", "sales_decline"],
cause: ["market_share_loss", "rival_product"]
},
{
date: "2024-12-24",
event: "Current state (query time)",
entities: ["techcorp", "product_x"],
status: "declining_sales"
}
],
// Causal chain (for reasoning)
causality: [
"rival_product → market_share_loss",
"market_share_loss → sales_decline"
]
}
}
4. Multi-Hop Path Extraction
Extract reasoning paths for complex queries:
// Query: "How does climate change affect AI research funding?"
{
reasoningPaths: [
{
path: [
"climate_change",
"affects",
"government_priorities",
"influences",
"research_funding",
"supports",
"ai_research"
],
strength: 0.85,
evidence: [
"source_1: Government climate reports",
"source_2: Research funding budgets",
"source_3: AI research grants"
]
},
{
path: [
"climate_change",
"creates",
"environmental_challenges",
"requires",
"ai_solutions",
"increases_demand_for",
"ai_research"
],
strength: 0.78,
evidence: [...]
}
],
// Synthesized answer context
synthesis: {
directImpact: "medium-high",
mechanisms: ["funding_reallocation", "problem_driven_demand"],
confidence: 0.82
}
}
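Path extraction amounts to searching the graph for chains that connect the query's entities within the configured hop limit (maxDepth). The breadth-first sketch below illustrates the idea over a plain triple list; it is a simplified illustration, not TrustGraph's extraction code.

// Find relationship paths between two entities up to a maximum number of hops
type Triple = { from: string; type: string; to: string };

function findPaths(triples: Triple[], start: string, goal: string, maxDepth = 3): string[][] {
  const paths: string[][] = [];
  // Each queue entry holds a path of alternating entity and relationship labels
  const queue: { node: string; path: string[] }[] = [{ node: start, path: [start] }];
  while (queue.length > 0) {
    const { node, path } = queue.shift()!;
    if (node === goal) {
      paths.push(path);
      continue;
    }
    if ((path.length - 1) / 2 >= maxDepth) continue;  // hop limit reached
    for (const edge of triples.filter(t => t.from === node)) {
      if (path.includes(edge.to)) continue;           // avoid cycles
      queue.push({ node: edge.to, path: [...path, edge.type, edge.to] });
    }
  }
  return paths;
}

// e.g. findPaths(triples, "climate_change", "ai_research", 3)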
Use Cases
1. Conversational AI Agents
// Agent maintains context graph across conversation
class AIAgent {
contextGraph: ContextGraph;
async processQuery(userMessage: string) {
// Update context graph with new information
await this.contextGraph.update({
entities: this.extractEntities(userMessage),
relationships: this.inferRelationships(userMessage),
conversationContext: {
previousTurn: this.lastResponse,
userIntent: this.classifyIntent(userMessage)
}
});
// Query with accumulated context
const response = await llm.generate({
prompt: userMessage,
context: this.contextGraph.format("markdown"),
systemPrompt: "Use the context graph to answer accurately"
});
return response;
}
}
2. Retrieval-Augmented Generation (RAG)
// GraphRAG with context-optimized retrieval
const answer = await trustgraph.invokeGraphRag({
"flow-id": "graph-rag",
"query": "What cybersecurity threats does our company face?",
// Context graph optimization
contextOptimization: {
maxTokens: 4000,
includeProvenance: true,
rankBy: "relevance",
// Prioritize recent and high-confidence information
temporalWeight: 0.3,
confidenceThreshold: 0.8
}
});
3. Decision Support Systems
// Business intelligence with context graphs
const analysis = await trustgraph.analyze({
question: "Should we expand into the European market?",
contextGraphs: [
"market-analysis-graph",
"competitive-landscape-graph",
"regulatory-environment-graph",
"financial-projections-graph"
],
reasoning: {
type: "multi-factor-decision",
factors: ["market-opportunity", "competition", "regulation", "cost"],
synthesize: true
}
});
// Returns decision context with reasoning paths
4. Knowledge-Augmented Code Generation
// Context graph of codebase for AI code assistant
const codeContext = await trustgraph.extractCodeContext({
query: "Add user authentication to the API",
codeGraph: {
files: ["api/**/*.ts"],
extractPatterns: [
"functions",
"classes",
"imports",
"dependencies"
],
includeRelationships: [
"calls",
"imports",
"extends",
"implements"
]
},
// Optimize for code generation LLM
format: "code-aware-markdown"
});
Context Graph vs Knowledge Graph
| Aspect | Knowledge Graph | Context Graph |
|---|---|---|
| Purpose | Comprehensive knowledge storage | AI model consumption |
| Size | Millions of entities | Tens to hundreds of entities per query |
| Optimization | Query performance, storage | Token efficiency, relevance |
| Semantics | Implicit or explicit | Explicit with definitions |
| Provenance | Optional | Required |
| Confidence | Assumed or absent | Scored and tracked |
| Format | Database-optimized | LLM-optimized |
| Temporal | Snapshot or versioned | Time-ordered for reasoning |
| Relationships | All relationships | Prioritized by relevance |
Best Practices
1. Match Context Window
// Ensure context fits in model's window
const contextGraph = await trustgraph.queryGraph({
query: "...",
constraints: {
maxTokens: 8000, // GPT-4 context window
reserveTokens: 2000, // Leave room for response
targetTokens: 6000 // Aim for this
}
});
2. Include Semantic Definitions
// Don't assume LLM knows domain terms
{
entity: "SOSA",
definition: "Sensor, Observation, Sample, and Actuator ontology",
context: "W3C standard for IoT semantic interoperability",
example: "Used to describe sensor measurements in RDF"
}
3. Provide Reasoning Paths
// Help LLM follow logical chains
{
query: "Why did revenue increase?",
reasoningChains: [
{
chain: [
"new_product_launch",
"→ caused →",
"customer_acquisition",
"→ resulted_in →",
"revenue_increase"
],
evidence: [...]
}
]
}
4. Track Uncertainty
// Mark uncertain information
{
fact: "Competitor may launch product Q3",
confidence: 0.6,
source: "industry_rumors",
evidenceStrength: "weak",
alternativeHypotheses: [
"Competitor delays to Q4 (0.3)",
"Competitor cancels product (0.1)"
]
}
See Also
- Context Engineering - Optimizing LLM context
- Knowledge Graph - General knowledge graphs
- GraphRAG - Graph-based retrieval
- Agent Memory - Persistent AI agent context
- Ontology RAG - Schema-driven context extraction
Examples
- Context graph providing entity relationships for AI agent decision-making
- Optimized subgraph extracted from enterprise knowledge for LLM prompting
- Context graph with temporal ordering and provenance for historical reasoning