TrustGraph
Key Concepts · Beginner

Context Graph vs. Knowledge Graph

A context graph is a knowledge graph—but one purpose-built for AI. Learn how ontologies, graph storage, and reification of agentic behavior combine to create the semantic grounding infrastructure that makes AI systems reliable.

9 min read
Updated 3/12/2026
Daniel Davis
#context-graphs #context-graph #knowledge-graphs #reification #rdf #ontologies #ai-context #agentic-ai

When Foundation Capital described the trillion-dollar opportunity in context graphs, many developers asked a reasonable question: isn't that just a knowledge graph? The answer is: yes—and that is exactly the point.

A context graph is a knowledge graph. The distinction is not in the underlying data structure—it is in what the graph is optimized to do. A context graph is a graph system purpose-built for AI: it uses graph storage and ontologies as semantic grounding infrastructure, and it captures agentic behavior through reification, encoding the full context of how knowledge was created, queried, and used directly into the graph itself.

TrustGraph founders Daniel Davis and Mark Adams have been working on this problem for over two years. This guide draws from that experience and from the Context Graph Manifesto.

Knowledge Graphs: The Foundation

A knowledge graph represents information as triples: Subject → Predicate → Object.

TrustGraph → isFoundedBy → Daniel Davis
TrustGraph → coFoundedBy → Mark Adams
TrustGraph → isOpenSourceAt → github.com/trustgraph-ai/trustgraph
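At its core, this data model is just a set of (subject, predicate, object) tuples. The following toy sketch (plain Python, not TrustGraph's implementation) shows the triples above as data and a minimal traversal over them:

```python
# Toy illustration: a knowledge graph is, at bottom, a set of
# (subject, predicate, object) triples.
triples = {
    ("TrustGraph", "isFoundedBy", "Daniel Davis"),
    ("TrustGraph", "coFoundedBy", "Mark Adams"),
    ("TrustGraph", "isOpenSourceAt", "github.com/trustgraph-ai/trustgraph"),
}

def objects_of(graph, subject, predicate):
    """Traverse the graph: all objects for a subject/predicate pair."""
    return {o for s, p, o in graph if s == subject and p == predicate}

print(objects_of(triples, "TrustGraph", "isFoundedBy"))  # {'Daniel Davis'}
```

Real systems store these triples in a triplestore and query them with SPARQL or Cypher, but the underlying shape is exactly this.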

This triple structure has roots in predicate logic dating to the 19th century. Semantic networks appeared in the 1960s. The modern knowledge graph emerged from the semantic web movement of the 1990s and 2000s, with W3C standards like RDF and OWL providing the formal foundation.

Knowledge graphs excel at:

| Capability | Notes |
| --- | --- |
| Comprehensive storage | Millions to billions of entities and relationships |
| Human querying | SPARQL, Cypher, and graph traversal |
| Long-term retention | Persistent, versioned knowledge bases |
| Cross-domain linkage | Connecting disparate data sources via shared identifiers |

One TrustGraph community member runs over a billion nodes and edges in Cassandra. That is a knowledge graph. A context graph is built on exactly the same foundation—extended for AI.

What Makes a Context Graph Different

A context graph adds three layers on top of the base knowledge graph:

1. Ontological Grounding

An ontology defines the semantic vocabulary of the graph—what types of entities exist, how they relate, and what properties are valid. Without ontological grounding, a graph stores facts. With it, a graph stores facts with meaning that both humans and AI models can interpret consistently.

When TrustGraph extracts knowledge using an OWL ontology, every entity has a type from the ontology's class hierarchy and every relationship has a property from the ontology's property definitions. This semantic precision is essential for AI: an LLM reading a context graph can trust that "diagnosis" refers to a clinical diagnosis—not a software bug diagnosis or a casual observation—because the ontology enforces it.

# Ontology-grounded triples: types are unambiguous
ex:Patient_Alice a onto:Patient ;
    onto:receivedDiagnosis ex:Diagnosis_001 .

ex:Diagnosis_001 a onto:ClinicalDiagnosis ;
    onto:diagnosedCondition onto:Type2Diabetes ;
    onto:diagnosedBy ex:Physician_Smith ;
    onto:diagnosisDate "2025-11-14"^^xsd:date .

The ontology is the semantic contract that makes the graph legible to an AI system at inference time.
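One way to picture that contract: the ontology's property definitions act as domain/range constraints that triples must satisfy. The sketch below is hypothetical (the property and class names follow the Turtle example, but the validation logic is invented for illustration, not how TrustGraph enforces OWL):

```python
# Hypothetical sketch: an ontology as property definitions with
# domain/range constraints, used to reject triples that violate
# the semantic contract.
ontology = {
    "receivedDiagnosis": {"domain": "Patient", "range": "ClinicalDiagnosis"},
    "diagnosedBy":       {"domain": "ClinicalDiagnosis", "range": "Physician"},
}

entity_types = {
    "Patient_Alice":   "Patient",
    "Diagnosis_001":   "ClinicalDiagnosis",
    "Physician_Smith": "Physician",
}

def is_valid(subject, predicate, obj):
    """Check a triple against the ontology's domain/range constraints."""
    spec = ontology.get(predicate)
    if spec is None:
        return False  # unknown property: no semantic grounding
    return (entity_types.get(subject) == spec["domain"]
            and entity_types.get(obj) == spec["range"])

print(is_valid("Patient_Alice", "receivedDiagnosis", "Diagnosis_001"))  # True
print(is_valid("Patient_Alice", "diagnosedBy", "Diagnosis_001"))        # False
```

A triple that uses a property outside the ontology, or types that violate its constraints, simply has no defined meaning in the graph.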

2. AI-Optimized Retrieval

A context graph is designed so that query-relevant subgraphs can be extracted efficiently and delivered to an LLM within its context window. This is not a separate system—it is a retrieval property of how the graph is structured and indexed.

TrustGraph's GraphRAG and Ontology RAG retrieval modes both operate on the same underlying graph. The difference is in how entities are extracted and how relationships are annotated. Both produce structured context—in formats like RDF Turtle, JSON-LD, or Markdown—that the LLM can read fluently. As TrustGraph's development confirmed experimentally: structured graph formats deliver better LLM responses than prose or CSV representations, because the syntax itself carries meaning about what is a node, what is a property, and what is a relationship.
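The core retrieval move can be sketched as a bounded graph traversal: start from a query-relevant entity, collect everything within a few hops, and serialize the result as structured text for the context window. This is a toy stand-in for GraphRAG-style retrieval (the triples and hop limit are illustrative), not TrustGraph's actual retriever:

```python
# Minimal sketch of AI-optimized retrieval: extract the subgraph within
# max_hops of a seed entity and render it as structured context text.
from collections import deque

triples = [
    ("ex:TrustGraph", "ex:isFoundedBy", "ex:Daniel_Davis"),
    ("ex:TrustGraph", "ex:coFoundedBy", "ex:Mark_Adams"),
    ("ex:Daniel_Davis", "ex:hostsPodcast", "ex:Temporal_RAG"),
    ("ex:Unrelated", "ex:p", "ex:q"),  # noise: should not be retrieved
]

def extract_subgraph(graph, seed, max_hops=2):
    """Breadth-first traversal collecting triples reachable from the seed."""
    seen, frontier, result = {seed}, deque([(seed, 0)]), []
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for s, p, o in graph:
            if s == node and (s, p, o) not in result:
                result.append((s, p, o))
                if o not in seen:
                    seen.add(o)
                    frontier.append((o, depth + 1))
    return result

# Serialize as Turtle-like statements for the LLM's context window.
context = "\n".join(f"{s} {p} {o} ." for s, p, o in
                    extract_subgraph(triples, "ex:TrustGraph"))
print(context)
```

Only the triples connected to the seed entity reach the model; unrelated facts stay out of the context window.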

3. Reification of Agentic Behavior

Some call this layer "decision traces." Whatever the name, it is what makes a context graph distinctly different from a conventional knowledge graph, and it is the most important layer for building AI systems that can learn and improve over time.

Reification is the practice of turning a statement or event into a first-class node in the graph, so that metadata can be attached to it. In RDF, reification has been part of the standard since its inception:

# Base triple
ex:TrustGraph ex:isOpenSource "true"^^xsd:boolean .

# Reified: the assertion itself becomes a node
ex:Assertion_001 a rdf:Statement ;
    rdf:subject   ex:TrustGraph ;
    rdf:predicate ex:isOpenSource ;
    rdf:object    "true"^^xsd:boolean ;
    ex:assertedBy  ex:Agent_ResearchBot ;
    ex:timestamp   "2026-03-12T10:00:00Z"^^xsd:dateTime ;
    ex:confidence  "0.99"^^xsd:decimal ;
    ex:source      <https://github.com/trustgraph-ai/trustgraph> .
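In code, reification amounts to promoting a triple to an addressable record that metadata can attach to. The sketch below mirrors the Turtle example with plain Python (field names are illustrative, not a TrustGraph API):

```python
# Toy sketch of reification: the assertion itself becomes a first-class
# record, so provenance metadata can hang off it.
def reify(subject, predicate, obj, **metadata):
    """Turn a triple into a statement node carrying metadata."""
    return {
        "rdf:subject": subject,
        "rdf:predicate": predicate,
        "rdf:object": obj,
        **metadata,
    }

assertion_001 = reify(
    "ex:TrustGraph", "ex:isOpenSource", True,
    assertedBy="ex:Agent_ResearchBot",
    timestamp="2026-03-12T10:00:00Z",
    confidence=0.99,
    source="https://github.com/trustgraph-ai/trustgraph",
)
print(assertion_001["confidence"])  # 0.99
```

The base triple says *what* is true; the reified node records *who* asserted it, *when*, with what confidence, and from what source.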

In a context graph, reification goes further: agentic behavior itself is reified. When an AI agent queries the graph, generates a response, or takes an action, that event becomes a node in the graph with metadata encoding the full context of what happened:

# Reified agent interaction
ex:Interaction_2026031201 a cg:AgentInteraction ;
    cg:agent          ex:Agent_SupportBot ;
    cg:userRequest    "What is TrustGraph's license?" ;
    cg:modelUsed      "claude-sonnet-4-6" ;
    cg:modelVersion   "20260301" ;
    cg:temperature    "0.3"^^xsd:decimal ;
    cg:systemPrompt   ex:Prompt_SupportV2 ;
    cg:timestamp      "2026-03-12T10:01:00Z"^^xsd:dateTime ;
    cg:queryTriples   ex:Assertion_001, ex:Assertion_007 ;
    cg:responseText   "TrustGraph is open source under the Apache 2.0 license." ;
    cg:thinkingChain  ex:Reasoning_2026031201 .

The reified interaction captures:

| Metadata field | What it records |
| --- | --- |
| cg:agent | Which agent handled the interaction |
| cg:userRequest | The original user question |
| cg:modelUsed | The LLM that generated the response |
| cg:temperature | Model sampling parameters |
| cg:systemPrompt | The system prompt version in effect |
| cg:timestamp | When the interaction occurred |
| cg:queryTriples | Which triples from the graph were used |
| cg:responseText | What was generated |
| cg:thinkingChain | The model's reasoning (if available) |

This reified record is stored back into the graph. The context graph does not just store knowledge—it stores the history of how that knowledge has been used.

Why Reification Matters for AI

Capturing agentic behavior as graph data enables three capabilities that conventional knowledge graphs—and vector stores—cannot provide:

Auditability. Every AI-generated response can be traced to the exact graph triples that grounded it, the model parameters in effect, and the user request that triggered it. This is a compliance and trust requirement for regulated industries.

Temporal reasoning. Reified interactions carry timestamps. The graph can answer questions like: "Has the model's answer to this question changed over time, and if so, why?" This is the temporal context layer described in the Context Graph Manifesto and discussed on the Temporal RAG podcast.
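Because every interaction carries a timestamp, answering that question is a matter of filtering reified records by request and ordering them in time. An illustrative sketch (the records and field names follow the earlier example; the data is invented):

```python
# Temporal reasoning over reified interactions: has the model's answer
# to this question changed over time?
interactions = [
    {"userRequest": "What is TrustGraph's license?",
     "timestamp": "2026-01-05T09:00:00Z",
     "responseText": "TrustGraph is licensed under Apache 2.0."},
    {"userRequest": "What is TrustGraph's license?",
     "timestamp": "2026-03-12T10:01:00Z",
     "responseText": "TrustGraph is open source under the Apache 2.0 license."},
]

def answer_history(records, request):
    """Return (timestamp, response) pairs for a request, oldest first."""
    matching = [r for r in records if r["userRequest"] == request]
    return sorted((r["timestamp"], r["responseText"]) for r in matching)

history = answer_history(interactions, "What is TrustGraph's license?")
changed = len({resp for _, resp in history}) > 1
print(changed)  # True: the wording of the answer changed between runs
```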

Autonomous learning. When reified agentic data is reingested as new knowledge, the graph evolves based on how it has been used. A response that was frequently queried and confirmed becomes more confident. A fact that was queried but led to user corrections becomes less confident. This is the foundation of the self-improving knowledge system described in the Context Graph Manifesto as the "holy grail of a true autonomous system that can learn."
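One way to picture that feedback loop: confidence on a reified assertion rises when a response grounded in it is confirmed and falls when it draws a correction. The update rule below is invented purely for illustration; it is not TrustGraph's learning mechanism:

```python
# Hypothetical confidence feedback: nudge confidence toward 1.0 on
# confirmation and toward 0.0 on a user correction.
def update_confidence(confidence, confirmed, rate=0.1):
    """Move confidence a fraction of the way toward its target."""
    target = 1.0 if confirmed else 0.0
    return confidence + rate * (target - confidence)

c = 0.80
c = update_confidence(c, confirmed=True)   # 0.82: answer was confirmed
c = update_confidence(c, confirmed=False)  # 0.738: answer drew a correction
print(round(c, 3))
```

Because the confidence values live on reified assertions in the graph itself, the knowledge and the record of its use evolve together.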

The Full Picture

A context graph is a knowledge graph that has been extended with:

  1. Ontological grounding — Semantic precision via OWL ontologies; entities and relationships carry unambiguous meaning
  2. AI-optimized retrieval — Structured subgraph extraction tuned for LLM context windows
  3. Reified agentic behavior — Agent actions, model parameters, user requests, timestamps, system state, and reasoning encoded as first-class graph nodes

The underlying storage technology does not define the concept. TrustGraph uses Cassandra by default, supports Neo4j, and the agents built on either behave identically. What defines a context graph is not the database—it is the semantic architecture and the reification layer.

Frequently Asked Questions

Is a context graph the same as a knowledge graph?

A context graph is a knowledge graph—but a specific kind of knowledge graph optimized for usage with AI. It uses graph storage and ontologies as semantic grounding infrastructure, and it captures agentic behavior through reification: encoding agent actions, model settings, user requests, timestamps, system parameters, and LLM reasoning directly into the graph as metadata.

What is reification in the context of a context graph?

Reification is the practice of making a statement about a statement—turning a fact or event into a first-class node in the graph so that metadata can be attached to it. In a context graph, reification captures agentic behavior: when an agent takes an action, queries the graph, or generates a response, that event is reified as a node with metadata including the model used, the user request, system parameters, timestamps, and even the model's reasoning chain. This creates a complete, auditable record of how knowledge was used.

What role do ontologies play in a context graph?

Ontologies serve as the semantic grounding infrastructure for a context graph. They define what types of entities exist, how they relate to each other, and what properties are valid—ensuring that "diagnosis" always means a medical diagnosis, not something else. Without ontological grounding, a graph stores facts; with it, a graph stores facts with meaning that both humans and AI models can interpret consistently.

Does TrustGraph use RDF or property graphs?

TrustGraph builds context graphs as triplestores using RDF semantics, with OWL ontologies for semantic grounding. The default graph store is Apache Cassandra. TrustGraph can also translate graphs for storage in Neo4j. The agents built on top of TrustGraph do not behave differently based on the underlying store—what matters is the quality of the semantic grounding and the fidelity of the reified agentic metadata.

Why does reifying agentic behavior improve AI reliability?

When an AI agent's behavior—its queries, decisions, model parameters, and reasoning—is reified into the graph alongside the knowledge it operated on, you gain two capabilities: auditability (you can trace exactly why a response was generated) and learning (the system can re-ingest its own reified behavior as new knowledge, enabling the graph to evolve based on how it has been used).

Watch: What Is a Context Graph?

For a visual explanation of how context graphs work in practice:

What is a Context Graph?
