Context Engineering
Learn how Context Engineering shapes AI responses by carefully selecting and structuring information from Knowledge Graphs. Master the art of building precise, relevant context for LLM queries.
Context Engineering is the practice of carefully selecting, structuring, and formatting information to provide optimal context for Large Language Models (LLMs). In Knowledge Graph systems like TrustGraph, Context Engineering leverages graph structure to build rich, relationship-aware context that enables accurate, grounded AI responses.
Why Context Engineering Matters
LLMs can only work with the information you provide in the prompt. Poor context leads to:
- Hallucinations: LLM invents information not present in context
- Irrelevant responses: LLM focuses on wrong aspects
- Missing relationships: LLM doesn't understand connections
- Incomplete reasoning: LLM lacks necessary background
Context Engineering solves these problems by systematically building optimal context from your Knowledge Graph.
Context in Traditional RAG vs Knowledge Graphs
Traditional RAG: Text Chunks
```javascript
// Traditional vector search
const chunks = await vectorDB.search(query, { topK: 5 });

// Context is just concatenated text
const context = chunks.map(c => c.text).join("\n\n");

// Problem: No relationships between chunks
// Problem: May miss relevant connections
// Problem: No structured understanding
```
Knowledge Graph: Structured Context
```javascript
// TrustGraph context with relationships
const context = await trustgraph.queryGraph({
  query: "How does climate change affect agriculture?",
  maxDepth: 3,
  includeRelationships: true
});

// Rich structured context:
// {
//   entities: [
//     { id: "climate_change", type: "concept", properties: {...} },
//     { id: "agriculture", type: "industry", properties: {...} },
//     { id: "crop_yields", type: "metric", properties: {...} }
//   ],
//   relationships: [
//     { source: "climate_change", type: "affects", target: "agriculture" },
//     { source: "agriculture", type: "measured_by", target: "crop_yields" }
//   ],
//   paths: [
//     ["climate_change", "affects", "agriculture", "measured_by", "crop_yields"]
//   ]
// }
```
Key advantages:
- Entities with properties and types
- Explicit relationships between concepts
- Multi-hop reasoning paths
- Provenance linking claims back to source documents
Context Engineering Strategies
1. Query-Driven Context Selection
Select context based on query intent:
```javascript
// Different queries need different context strategies
const strategies = {
  factual: {
    // "What is X?" - Direct entity + properties
    maxDepth: 1,
    includeRelationships: false,
    focusOnProperties: true
  },
  relational: {
    // "How does X relate to Y?" - Path finding
    maxDepth: 3,
    includeRelationships: true,
    focusOnPaths: true
  },
  analytical: {
    // "Why does X happen?" - Broad context with causality
    maxDepth: 4,
    includeRelationships: true,
    relationshipTypes: ["causes", "influences", "results_in"],
    includeInferences: true
  },
  comparative: {
    // "Compare X and Y" - Parallel context for both
    maxDepth: 2,
    includeRelationships: true,
    retrieveMultipleEntities: true
  }
};

// Apply strategy based on query analysis
const queryType = analyzeQuery(query);
const context = await trustgraph.queryGraph({
  query,
  ...strategies[queryType]
});
```
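The snippet above assumes an `analyzeQuery` helper that maps a question to one of the four strategy keys. A minimal sketch, assuming simple keyword heuristics (a production system would more likely use a classifier or an LLM for intent detection):

```javascript
// Hypothetical query classifier: keyword heuristics mapping a question
// to one of the strategy keys defined above. Purely illustrative.
function analyzeQuery(query) {
  const q = query.toLowerCase();
  if (/\bcompare\b|\bversus\b|\bvs\b/.test(q)) return "comparative";
  if (/\bwhy\b|\bcause\b|\breason\b/.test(q)) return "analytical";
  if (/\brelate|\brelationship\b|\baffects?\b|\bimpacts?\b/.test(q)) return "relational";
  return "factual"; // default: "What is X?"-style lookups
}
```

Even a crude classifier like this lets you route queries to different graph traversal settings; the important part is that the strategy table, not the classifier, encodes the retrieval behavior.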
2. Relationship Prioritization
Not all relationships are equally important:
```javascript
const context = await trustgraph.queryGraph({
  query: "Company X's investment strategy",

  // Prioritize specific relationship types
  relationshipWeights: {
    "invests_in": 1.0,     // Most relevant
    "partners_with": 0.8,
    "acquired": 0.9,
    "employs": 0.3,        // Less relevant for this query
    "located_in": 0.2
  },

  // Filter low-relevance relationships
  minRelationshipWeight: 0.5
});
```
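To make the semantics of `relationshipWeights` and `minRelationshipWeight` concrete, here is an illustrative client-side equivalent (the function name and behavior are assumptions for this sketch, not part of the TrustGraph API):

```javascript
// Sketch of weight-based relationship filtering: score each relationship
// by its type's weight, drop anything below the threshold, and rank the
// survivors. Unknown types default to weight 0 and are filtered out.
function filterRelationships(relationships, weights, minWeight) {
  return relationships
    .map(r => ({ ...r, weight: weights[r.type] ?? 0 }))
    .filter(r => r.weight >= minWeight)
    .sort((a, b) => b.weight - a.weight);
}
```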
3. Temporal Context
Include time-aware information:
```javascript
const context = await trustgraph.queryGraph({
  query: "Recent developments in AI",

  // Temporal filtering
  temporal: {
    startDate: "2023-01-01",
    endDate: "2025-12-24",
    includeTemporalRelationships: true
  },

  // Prefer recent information
  recencyBoost: true
});
```
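One common way a recency boost can work is exponential decay of each result's base relevance score by the age of its timestamp. A minimal sketch, assuming entities carry `score` and `timestamp` fields (the function, field names, and the 365-day half-life are all illustrative assumptions):

```javascript
// Illustrative recency boost: halve an entity's relevance score for every
// `halfLifeDays` of age, then re-rank. Not part of the TrustGraph API.
function applyRecencyBoost(entities, now = Date.now(), halfLifeDays = 365) {
  const halfLifeMs = halfLifeDays * 24 * 60 * 60 * 1000;
  return entities
    .map(e => {
      const ageMs = now - new Date(e.timestamp).getTime();
      const decay = Math.pow(0.5, ageMs / halfLifeMs);
      return { ...e, score: e.score * decay };
    })
    .sort((a, b) => b.score - a.score);
}
```

With this scheme, a two-year-old entity keeps only a quarter of its base score, so a fresher but initially weaker match can outrank it.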
4. Provenance and Sources
Include source information for transparency:
```javascript
const context = await trustgraph.queryGraph({
  query,
  includeProvenance: true,

  // Source filtering
  trustedSources: [
    "scientific_papers",
    "official_documentation",
    "verified_datasets"
  ]
});

// Context includes source references
// {
//   entity: { id: "quantum_computing", ... },
//   provenance: {
//     source: "scientific_papers/nature_2024.pdf",
//     page: 42,
//     confidence: 0.95
//   }
// }
```
Context Formatting for LLMs
Once you've selected context from the graph, format it optimally for the LLM:
Entity-Relationship Format
```javascript
function formatGraphContext(graphContext) {
  return `
# Knowledge Graph Context

## Entities:
${graphContext.entities.map(e => `
- **${e.name}** (${e.type})
${Object.entries(e.properties).map(([k, v]) => `  - ${k}: ${v}`).join('\n')}
`).join('\n')}

## Relationships:
${graphContext.relationships.map(r => `
- ${r.sourceName} --[${r.type}]--> ${r.targetName}
${r.properties ? `  Properties: ${JSON.stringify(r.properties)}` : ''}
`).join('\n')}

## Reasoning Paths:
${graphContext.paths.map(p => `
- ${p.map((step, i) => i % 2 === 0 ? step : `--[${step}]-->`).join(' ')}
`).join('\n')}
`;
}

const prompt = `
${formatGraphContext(context)}

Based on the Knowledge Graph context above, answer the following question:

${userQuery}

Guidelines:
- Only use information present in the context
- Cite entity IDs when making claims
- Explain reasoning using the relationship paths
`;
```
Hierarchical Context
For complex queries, organize context hierarchically:
```javascript
function formatHierarchicalContext(graphContext, userQuery) {
  return `
# Context for Query: "${userQuery}"

## Primary Entities (Direct matches):
${graphContext.primaryEntities.map(formatEntity).join('\n')}

## Related Entities (1 hop away):
${graphContext.relatedEntities.map(formatEntity).join('\n')}

## Extended Context (2-3 hops):
${graphContext.extendedContext.map(formatEntity).join('\n')}

## Key Relationships:
${graphContext.criticalPaths.map(formatPath).join('\n')}
`;
}
```
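The `formatEntity` and `formatPath` helpers referenced above are left undefined. Minimal stand-ins, assuming entities carry `name`/`type` fields and paths alternate entity and relationship steps as in the earlier examples:

```javascript
// Hypothetical helpers for the hierarchical formatter above.
function formatEntity(e) {
  // One bullet per entity: name in bold, type in parentheses
  return `- **${e.name}** (${e.type})`;
}

function formatPath(path) {
  // Paths alternate entity / relationship: A --[rel]--> B --[rel]--> C
  return `- ${path.map((step, i) => (i % 2 === 0 ? step : `--[${step}]-->`)).join(" ")}`;
}
```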
Context Size Management
LLMs have token limits. Manage context size strategically:
Progressive Context Building
```javascript
async function buildProgressiveContext(query, maxTokens = 8000) {
  const context = { entities: [], relationships: [], paths: [] };
  let currentTokens = 0;

  // 1. Start with the most relevant entities
  const coreEntities = await trustgraph.getRelevantEntities(query, { limit: 5 });
  context.entities.push(...coreEntities);
  currentTokens += estimateTokens(coreEntities);

  // 2. Add critical relationships
  if (currentTokens < maxTokens * 0.6) {
    const relationships = await trustgraph.getRelationships(coreEntities);
    context.relationships.push(...relationships);
    currentTokens += estimateTokens(relationships);
  }

  // 3. Add extended context if space allows
  if (currentTokens < maxTokens * 0.8) {
    const extended = await trustgraph.getExtendedContext(coreEntities, { maxDepth: 2 });
    context.entities.push(...extended.entities);
    context.relationships.push(...extended.relationships);
    currentTokens += estimateTokens(extended); // keep the budget accurate for step 4
  }

  // 4. Add reasoning paths
  if (currentTokens < maxTokens * 0.9) {
    context.paths = await trustgraph.findReasoningPaths(query, coreEntities);
  }

  return context;
}
```
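The builder above depends on an `estimateTokens` helper that is not shown. A cheap, widely used approximation is roughly four characters per token on the serialized payload; a real system would use the target model's tokenizer instead. A sketch under that assumption:

```javascript
// Rough token estimate: ~4 characters per token on the JSON-serialized
// value. Good enough for budgeting; use the model's tokenizer for accuracy.
function estimateTokens(value) {
  const text = typeof value === "string" ? value : JSON.stringify(value);
  return Math.ceil(text.length / 4);
}
```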
Context Compression
For large contexts, apply compression:
```javascript
const context = await trustgraph.queryGraph({
  query,
  maxDepth: 3,

  // Compression strategies
  compression: {
    summarizeProperties: true,     // Summarize verbose properties
    deduplicateEntities: true,     // Remove redundant entities
    pruneWeakRelationships: true,  // Remove low-relevance relationships
    maxEntities: 50,               // Hard limit on entities
    maxRelationships: 100          // Hard limit on relationships
  }
});
```
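As an illustration of what two of these strategies do, here is a client-side sketch of entity deduplication plus the hard limits (the function is an assumption for exposition; the server-side `compression` option is presumed to behave similarly):

```javascript
// Illustrative compression: dedupe entities by id, cap their count, then
// drop relationships whose endpoints were cut, and cap those as well.
function compressContext(context, { maxEntities = 50, maxRelationships = 100 } = {}) {
  const seen = new Set();
  const entities = context.entities
    .filter(e => !seen.has(e.id) && seen.add(e.id)) // Set.add returns the Set (truthy)
    .slice(0, maxEntities);

  const ids = new Set(entities.map(e => e.id));
  const relationships = context.relationships
    .filter(r => ids.has(r.source) && ids.has(r.target)) // prune dangling references
    .slice(0, maxRelationships);

  return { ...context, entities, relationships };
}
```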
Context Validation
Validate context quality before sending to LLM:
```javascript
function validateContext(context, query) {
  const issues = [];

  // Check if context addresses the query
  if (contextRelevanceScore(context, query) < 0.7) {
    issues.push("Context may not be relevant to query");
  }

  // Check for completeness
  if (context.entities.length === 0) {
    issues.push("No entities found - context is empty");
  }

  // Check for broken references
  const entityIds = new Set(context.entities.map(e => e.id));
  for (const rel of context.relationships) {
    if (!entityIds.has(rel.source) || !entityIds.has(rel.target)) {
      issues.push(`Broken relationship reference: ${rel.type}`);
    }
  }

  // Check token size
  const tokens = estimateTokens(context);
  if (tokens > 8000) {
    issues.push(`Context too large: ${tokens} tokens (max: 8000)`);
  }

  return { valid: issues.length === 0, issues };
}
```
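`validateContext` relies on a `contextRelevanceScore` function that is not defined here. A minimal stand-in, assuming plain term overlap between the query and the serialized context (a real implementation would more likely use embedding similarity):

```javascript
// Naive relevance score: fraction of significant query terms (> 3 chars)
// that appear anywhere in the serialized context. Illustrative only.
function contextRelevanceScore(context, query) {
  const haystack = JSON.stringify(context).toLowerCase();
  const terms = query.toLowerCase().split(/\W+/).filter(t => t.length > 3);
  if (terms.length === 0) return 0;
  const hits = terms.filter(t => haystack.includes(t)).length;
  return hits / terms.length;
}
```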
TrustGraph Context Engineering
TrustGraph provides built-in Context Engineering capabilities:
```javascript
// Automatic context optimization
const response = await trustgraph.generate({
  query: "How does AI impact healthcare?",

  // Context engineering configuration
  contextConfig: {
    strategy: "adaptive",        // Auto-select best strategy
    maxDepth: 3,
    maxTokens: 6000,
    includeRelationships: true,
    includePaths: true,
    includeProvenance: true,

    // Automatic formatting
    format: "hierarchical",

    // Quality controls
    validation: true,
    minRelevanceScore: 0.7
  },

  // LLM configuration
  model: "gpt-4-turbo",
  temperature: 0.3
});

// TrustGraph validates and optimizes context automatically
console.log(response.contextMetadata);
// {
//   entitiesUsed: 23,
//   relationshipsUsed: 45,
//   pathsUsed: 8,
//   tokens: 5847,
//   relevanceScore: 0.89,
//   validationPassed: true
// }
```
Best Practices
- Match Strategy to Query Type: Use appropriate context strategies for different query types
- Prioritize Relationships: Focus on relationships most relevant to the query
- Include Provenance: Always include source information for transparency
- Manage Token Budget: Stay within LLM context limits through compression and prioritization
- Validate Context: Check context quality before generation
- Format for LLMs: Structure context clearly with sections and hierarchy
- Leverage Graph Structure: Use paths and multi-hop reasoning from the graph
- Test and Iterate: Measure response quality and refine context engineering approach
Related Concepts
- GraphRAG - Graph-based Retrieval-Augmented Generation
- Knowledge Cores - Modular Knowledge Graph storage
- Agent Memory - Persistent memory for AI agents
- Knowledge Graph - Graph of entities and relationships