Context Graph Manifesto

Reification, not Decision Traces

January 8, 2026
9 min read

There has been no shortage of commentary around the recent wave of “context graph mania.” From Foundation Capital’s original article by Jaya Gupta and Ashu Garg, “AI’s Trillion Dollar Opportunity: Context graphs,” to my Context Graph Manifesto (there’s been so much attention, I even made a YouTube video, What is a Context Graph?), one topic in particular has attracted outsized attention: decision traces. When I first encountered the term, I bristled—and I still do. That said, I understand what people are trying to accomplish when they talk about decision traces. The problem is that decision is the wrong word.

What this is really about is reification.

Reif-what?

Don’t worry—we’ll get there. But first, we need to talk about decisions.

Computer systems don’t make decisions.

There, I said it.

Before you click away in disgust or angrily scroll to the comments to tell me I’m an idiot (and if you read all the way through and still feel compelled to do that, fair enough), let me explain what I mean.


What Is a Decision?

Decisions are a human construct, and they are surprisingly intangible. It sounds simple, but what is a decision, really? At what exact moment do you “decide” to do something? An action is easier to observe, but how do we identify the precise tipping point that caused it? Can we honestly say that a single thought or piece of information led directly to an action?

Stanford researcher Robert Sapolsky argues that humans have no free will at all. In his view, decisions are an illusion—one that obscures the cumulative effect of years of biology, environment, and external forces shaping our thoughts and behaviors.

From a behavioral economics perspective, decisions are inseparable from incentives, goals, and starting conditions. People often point to simplified game theory examples, like the prisoner’s dilemma, where outcomes appear to hinge on a discrete choice: betray the other prisoner or remain silent. While useful as an introduction, these scenarios are poor representations of the real world.


The Limits of Game Theory

As game theory scales into more complex, repeated interactions, a consistently winning strategy tends to emerge: copycat behavior. If you’ve ever been told “don’t overthink it,” you may have unknowingly received advice straight out of game theory. In many scenarios, the optimal move is simply to mirror the first actor.

We see this repeatedly in emerging industries. The innovator is often not the long-term winner. Why? Because the innovator bears the cost of uncertainty and market education, while the second mover waits, observes, and copies once the heavy lifting is done.

But copycat dynamics aren’t the only flaw.


Starting Conditions Matter

Nobel Prize–winning behavioral economist Daniel Kahneman exposed a deeper limitation of classical game theory through prospect theory: people evaluate outcomes relative to a reference point, yet classical models ignore those starting conditions.

Consider the prisoner’s dilemma again, but with context. One prisoner is 18 years old. The other is 80 and terminally ill, with less than a year to live. If both stay silent, they each receive two years in prison. If one betrays the other, the betrayer walks free and the other gets twenty years.

For the 18-year-old, two years may be painful but manageable; twenty years is catastrophic. For the 80-year-old, both outcomes are unacceptable—they do not want to die in prison. Their incentives and constraints are fundamentally different. Once starting conditions are considered, the “choices” no longer mean the same thing.


Why This Matters for Computer Systems

If we relied on game theory alone, we might be tempted to say that computer systems make decisions. After all, they evaluate options and select outcomes. But prospect theory makes it clear that what we call a decision is deeply shaped by human goals, incentives, and conditions—none of which computers possess.

You might argue that we can program a system to favor certain choices. But where does that bias come from? Not the system itself. It originates with the human designer. The system is merely executing a logic structure created by someone else’s goals, incentives, and starting conditions.


Reification and Context Graphs

Does it really matter if decision is the wrong word? Maybe not. I’m certainly guilty of saying, “don’t let the truth get in the way of good marketing.” Are we just splitting semantic hairs?

In a way, yes—but that’s exactly the point of context graphs.

The need for context graphs in LLM systems arises because language models struggle to disambiguate meaning when information is removed from its original context. Context graphs allow us to retrieve the right contextual signals to guide the model’s interpretation. This idea is not new. And for knowledge graph enthusiasts, that brings us back to reification.


What Is Reification?

There are many complex definitions of reification. The simplest—and best—I’ve found is this:

Reification is a technique for representing statements about statements.

At first glance, that sounds unhelpful. I had the same reaction. But stick with me.

Consider a simple fact:

Fred -> hasLegs -> 4

Now suppose we want to capture that Mark told me that Fred has four legs. We could create an entirely new statement, but ideally we want to relate that assertion back to the original fact in the context graph.

That’s reification.

One approach is to introduce a new node:

[Statement1] -> hasSubject -> Fred
[Statement1] -> hasPredicate -> hasLegs
[Statement1] -> hasObject -> 4
[Statement1] -> assertedBy -> Mark
[Statement1] -> assertedDate -> "2026-01-08"
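
In standard RDF, this new-node pattern is exactly what the classic rdf:Statement vocabulary formalizes. As a minimal Turtle sketch (the ex: namespace and the assertion properties are illustrative, not standard terms):

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# A new node stands in for the original statement...
ex:statement1 a rdf:Statement ;
    rdf:subject   ex:Fred ;
    rdf:predicate ex:hasLegs ;
    rdf:object    4 ;
    # ...and carries the statements about that statement.
    ex:assertedBy   ex:Mark ;
    ex:assertedDate "2026-01-08"^^xsd:date .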

This works—but there’s a problem. The reified statement isn’t directly connected to the original statement. Querying becomes cumbersome, requiring reconstruction of the original fact from its components.

This is where property graph enthusiasts start smiling.


Property Graphs and RDF 1.2

One of the key differences between RDF graphs and property graphs is that property graphs allow properties on edges. Using a property graph, the same information can be represented as:

Fred -> hasLegs -> 4
          └> assertedBy: Mark
          └> assertedDate: "2026-01-08"

This is cleaner, more intuitive, and far easier to query.

On December 5, 2025, the W3C released a working draft of RDF 1.2, which introduces timely improvements around reification. One major addition is the ability to use a statement itself as a term in another statement:

<< Fred -> hasLegs -> 4 >> assertedBy -> Mark
<< Fred -> hasLegs -> 4 >> assertedDate -> 2026-01-08
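
The working draft also carries forward an annotation syntax from the earlier RDF-star community work, which attaches metadata to a triple while keeping the base triple asserted. A sketch in the draft Turtle syntax (the exact details may still change before RDF 1.2 is finalized):

# The base triple stays asserted; the annotation block
# attaches metadata directly to it.
:Fred :hasLegs 4 {| :assertedBy :Mark ;
                    :assertedDate "2026-01-08" |} .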

This approach closely resembles property graphs and quad-based models (S, P, O, G), where G acts as an identifier or context. As discussed in the Context Graph Manifesto, there is no single correct approach. With RDF 1.2, the choice between RDF and property graphs increasingly comes down to preference and tooling rather than capability.
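
As for the quad-based models mentioned above: if quads are unfamiliar, a minimal TriG sketch shows the idea. The fact lives in a named graph, and the graph name G becomes the handle for statements about it. (The :g1 name is illustrative, and using graph names as subjects this way is a common, if debated, pattern.)

@prefix : <http://example.org/> .

# The base fact is placed in a named graph...
:g1 { :Fred :hasLegs 4 . }

# ...and the graph name serves as the context identifier.
:g1 :assertedBy :Mark ;
    :assertedDate "2026-01-08" .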


Reification as a System of Record

While “decision trace” may be a misnomer, the term system of record gets much closer to what context graphs are actually enabling: auditable records of system behavior, data flows, and outputs.

The reason this is often framed in terms of “decisions” becomes clearer when you consider governance.

In mature organizations, decision-making authority is formalized through governance structures. Certain roles are empowered to make certain decisions, with escalation paths for higher-impact outcomes. Governance often evokes eye rolls—it’s associated with bureaucracy, slowness, and red tape.

But governance exists for a reason.


The Real Reason Records Matter

If you’ve worked as a senior executive, compliance officer, or corporate lawyer, you already know the answer: liability.

More charitably, governance demonstrates that an organization has fulfilled its duty of care—the legal obligation to act with reasonable care to avoid foreseeable harm. And harm doesn’t have to be physical. Violating an SLA or breaching a contract qualifies. When AWS US-East-1 goes down for hours, lawsuits follow.

Enterprises need systems of record not just to assign internal blame (though that happens), but to defend themselves when users claim harm. A lack of records can itself be read as a failure to meet the duty of care.

Imagine a litigator arguing: “The defendant doesn’t even know how their system works. They have no records showing reasonable precautions. How could they possibly know they weren’t at fault?” Most juries would nod along, and the defense team would already be urging their client to settle.


From Black Boxes to Auditability

Today, AI systems are largely black boxes. Reification gives us a path toward transparency.

In RAG systems, provenance can be attached directly to retrieved context:

<<:dataset-common-crawl-2024 :containsData :source-wikipedia-en>>
  :includeDate "2024-02-01T00:00:00Z"^^xsd:dateTime ;
  :recordCount 6500000 ;
  :dataQualityScore 0.94 ;
  :licenseType "CC-BY-SA-3.0" ;
  :preprocessingPipeline :pipeline-cleaner-v3 ;
  :duplicatesRemoved 125000 ;
  :piiFiltered true ;
  :approvedBy :data-governance-team ;
  :auditTrail :audit-log-20240201 .

Inference events can be captured the same way:

<<:model-gpt4-mini :generated :response-abc123>>
  :timestamp "2025-01-07T14:32:11Z"^^xsd:dateTime ;
  :inputTokens 450 ;
  :outputTokens 280 ;
  :latencyMs 1250 ;
  :temperature 0.7 ;
  :topP 0.9 ;
  :modelVersion "gpt4-mini-v1.2.3" ;
  :requestId "req-xyz-789" ;
  :userId :user-john-doe ;
  :sessionId "session-2025-01-07-14" ;
  :containsSensitiveInfo false ;
  :moderationScore 0.02 .

Even policy enforcement becomes part of the graph:

<<:deployment-prod-gpt4-mini :hasConstraint :policy-no-medical-advice>>
  :policyEffectiveDate "2024-07-01T00:00:00Z"^^xsd:dateTime ;
  :policyVersion "v2.1" ;
  :enforcementLevel "strict" ;
  :enforcedBy :guardrail-system-v3 ;
  :violationCount 0 ;
  :lastReviewDate "2024-12-01T00:00:00Z"^^xsd:dateTime ;
  :nextReviewDate "2025-06-01T00:00:00Z"^^xsd:dateTime ;
  :approvedBy :legal-compliance-team .

Reification allows us to bind system behavior directly to the data and context that produced it. This creates an auditable trail and opens the door to a more precise—and less anthropomorphic—notion of AI “memory.”


Closing the Loop on Memory

“Memory” is nearly as problematic a term as “decision.” Human memory is deeply flawed, and modeling AI systems after it has always felt misguided.

Instead of asking how to give AI memory, we should ask what we’re trying to accomplish. Do we want to store every token in a conversation? What happens when histories exceed context windows by orders of magnitude?

Context graphs naturally evolve into layered systems:

  • Grounding layers built from curated knowledge
  • A system-of-record layer capturing system behavior
  • Synthetic grounding layers derived from model outputs

Separating these layers is essential. It allows us to measure context drift—how far synthetic grounding diverges from original ground truth. In some cases, evolution is expected. In others, deviation is a failure. The system-of-record layer is what allows us to observe, measure, and correct for this drift.
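
To make the layering concrete, here is one sketch of how layer membership and lineage might be annotated using the same reification machinery (the :layer, :grounding, :synthetic, and :derivedFrom terms are illustrative, not TrustGraph vocabulary):

# Grounding layer: a curated, human-vetted fact.
<< :Fred :hasLegs 4 >> :layer :grounding .

# Synthetic grounding layer: a model-derived statement,
# linked back to the output that produced it.
<< :Fred :livesWith :Mark >> :layer :synthetic ;
    :derivedFrom :response-abc123 .

With every statement tagged by layer and lineage, drift becomes measurable: compare synthetic statements against the grounding statements they descend from.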


Putting It Into Practice

At TrustGraph, our initial focus has been on building the tooling and infrastructure for production-grade grounding layers: context graphs that can be deployed anywhere, with any model, under full user control.

Now that TrustGraph is in production with real users, we see the next phase clearly. The foundation is in place. Reification transforms context graphs from static knowledge stores into auditable, learning systems.

TrustGraph 2.0 is just coming into view on the horizon—and as always, it will be free and open source.

For more information: