Context Graph Manifesto

Context and the Decision Process

February 13, 2026

You make thousands of decisions every day. Most of them you don't even notice. But in business, decisions matter.

The wrong choice can cost millions. The right one can save a company. Yet we pretend decisions are simple. We build models that ignore half of what matters. We create governance structures that slow everything down. We ask AI agents to decide things without teaching them why.

This article explores how decisions really work. We'll start with decision science and behavioral economics—the foundations. Then we'll examine how organizations structure decision-making, from single leaders to distributed teams. We'll look at risk models and what they miss. We'll confront the unknown unknowns. Finally, we'll see why context graphs matter for AI decisions, why they're just the beginning, and how we’re thinking about these problems at TrustGraph.

Decision Science: Trees of Options

Decision science gives us decision trees. It tells us how to break big choices into smaller pieces. You list your options. You assign values to outcomes. You calculate probabilities. You multiply and add. The highest number wins.

These trees work for some decisions. Should you take an umbrella? Check the forecast. The math is clean. But most business decisions aren't clean.
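The arithmetic is simple enough to sketch. Here is the umbrella choice as code; the probabilities and payoffs are invented for illustration:

```python
# Toy decision tree: take an umbrella or not, given a rain forecast.
# All probabilities and payoffs are invented for illustration.

P_RAIN = 0.3

# Payoff for each option under each outcome.
options = {
    "take umbrella":  {"rain": -1,  "dry": -1},  # minor hassle either way
    "leave umbrella": {"rain": -10, "dry": 0},   # soaked if it rains
}

def expected_value(payoffs: dict) -> float:
    """Probability-weighted sum over outcomes."""
    return P_RAIN * payoffs["rain"] + (1 - P_RAIN) * payoffs["dry"]

scores = {name: expected_value(p) for name, p in options.items()}
best = max(scores, key=scores.get)
print(best)  # take umbrella
```

Multiply, add, pick the highest number. That's the whole method. Everything that follows in this article is about what this calculation leaves out.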

Decision science is useful. But it's a tool, not an answer. You still need to know what the trees leave out.

Behavioral Economics: How We Actually Choose

People don't follow the decision trees. Behavioral economics proved this. We succumb to a nearly endless list of biases and logical fallacies.

We avoid losses more than we seek gains. We anchor on the first number we hear. We think recent events will keep happening. We trust our gut when we should think. We overthink when we should trust our gut.

Daniel Kahneman showed us two systems. System 1 is fast and automatic. System 2 is slow and deliberate. We think we use System 2 for important choices. We mostly don't. System 1 runs the show. It uses shortcuts. These shortcuts saved our ancestors from lions. They make us terrible at evaluating software vendors.

Understanding behavioral economics means accepting that we're flawed. The question isn't whether bias exists. It's how we design around it.

Organizational Decision Models: Who Decides

Organizations need rules about who decides what. This is governance. Without it, nothing moves. With too much of it, nothing moves either.

Governance determines speed. It determines quality. It determines whether smart people quit in frustration. Get it wrong and you'll watch competitors lap you. Get it right and execution becomes your advantage.

The model you choose shapes everything else. Each model trades different things. Speed for safety. Autonomy for control. Innovation for consistency.

Three Governance Structures

Single authority models are simple. One person decides. The military uses this. So do startups. It's fast. It's clear. But it breaks when decisions need expertise the authority doesn't have. It breaks when the authority becomes a bottleneck. Ever sat waiting on a decision because someone was on vacation? It breaks when the authority is wrong and no one can stop them.

Group decision models spread the load. Committees review. Boards approve. Multiple stakeholders weigh in. This catches mistakes. It incorporates diverse views. But it's slow. It breeds compromise over conviction. It diffuses accountability.

The Team of Teams model changes the game. General Stanley McChrystal described this in his book about fighting Al-Qaeda in Iraq. Push decisions to the edge. Give teams authority. Connect them tightly with shared consciousness. The center sets intent and coordinates. The edges execute with autonomy.

This model works when environments change fast. When local knowledge matters. When waiting for approval means losing. But it requires trust. It requires information systems that share context everywhere. It falls apart if teams optimize locally instead of globally.

Risk: The Center of Governance

All governance models manage risk. Risk is why governance exists. If nothing could go wrong, we wouldn't need approvals. We wouldn't need checks. We'd just act.

Corporate governance is risk management. Boards ask: What could hurt shareholders? Compliance officers ask: What could trigger regulators? Project managers ask: What could blow the timeline? Every approval gate is someone saying "not worth the risk" or "acceptable risk, proceed."

We've built elaborate structures to quantify risk. Risk matrices. Monte Carlo simulations. Value at Risk calculations. These tools help. But they miss most of what makes decisions hard.

What Risk Models Ignore

Traditional risk models focus on outcome probability and magnitude. They ask: How likely is the bad thing? How bad is it? Multiply these together. That's your risk score.
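The whole model fits in a few lines. All figures below are invented:

```python
# Classic risk score: likelihood times impact. Figures are invented.
risks = [
    ("data breach",     0.05, 2_000_000),  # (name, probability, cost in $)
    ("vendor outage",   0.30,   100_000),
    ("missed deadline", 0.60,    40_000),
]

expected_losses = {name: p * impact for name, p, impact in risks}

for name, loss in expected_losses.items():
    print(f"{name}: expected loss ${loss:,.0f}")
# data breach: expected loss $100,000
# vendor outage: expected loss $30,000
# missed deadline: expected loss $24,000
```

Two numbers multiplied. Note what the inputs don't include: time pressure, context quality, reversibility, or how the probabilities were obtained in the first place.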

This misses everything that makes real decisions complex. Time pressure changes decisions. A choice you'd weigh for a week feels different when you have only ten minutes to make it.

Available context determines what you can see. Two people with different information will make different choices. Both might be rational given what they know. Accuracy of context matters even more. You might have lots of data. If it's wrong, you're worse off than having nothing. Precision is different from accuracy. You can know something is between 10 and 100. That's accurate but imprecise. Precision determines if you can act.

Decision opportunity is temporal. Some choices expire. Miss the window and the choice disappears. Outcome scenarios are multiple and branching. You don't pick between two outcomes. You pick between complex futures. Risk and reward don't sit still. The same action might be brilliant now and stupid next month. Timescale to realize outcomes varies. Some decisions pay off tomorrow. Others take years.

Uncertainty exists in all of these factors. How much context is available? How accurate is the context? How precise is the context? Do I know how long I have to make this decision? Do I even know what the possible outcomes are? Humans crave certainty, despite uncertainty existing everywhere. Does our desire for certainty lead us to prioritize the factors we're sure about and ignore the ones we aren't?

Finally, reversibility changes everything. Can you undo this? What's the penalty for trying? Reversible decisions deserve different analysis than one-way doors. Amazon's Jeff Bezos built a company culture around this distinction. If a decision is reversible and the penalty for getting it wrong is low, make it quickly. If not, take more care.
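The heuristic reduces to a tiny rule. A toy sketch, with an invented cost threshold:

```python
def decision_mode(reversible: bool, reversal_cost: float,
                  acceptable_cost: float) -> str:
    """Two-way doors get fast decisions; one-way doors get deliberation.
    A toy encoding of the reversibility heuristic; thresholds are invented."""
    if reversible and reversal_cost <= acceptable_cost:
        return "decide quickly"
    return "deliberate carefully"

print(decision_mode(reversible=True, reversal_cost=100, acceptable_cost=1_000))
# decide quickly
print(decision_mode(reversible=False, reversal_cost=0, acceptable_cost=1_000))
# deliberate carefully
```

Notice that reversibility appears nowhere in the likelihood-times-impact score above it. It's an orthogonal dimension that standard risk models simply don't encode.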

The Unknown Unknowns

Donald Rumsfeld got mocked for talking about "unknown unknowns." Whether you agree with his politics or not, he was right.

There are some things we know we don’t know. That’s manageable. We can investigate. We can get more data. We can hedge.

Then there are things we don't know we don't know. These kill companies. They start wars. They crash markets. You can't plan for them because you can't imagine them. Every disaster investigation finds unknown unknowns. "Nobody thought that could happen."

The only defense is resilience. Systems that can absorb shocks. Organizations that can pivot. Decisions that preserve options. You can't predict unknown unknowns. You can build structures that survive them.

This matters for AI decisions even more than human ones. We'll see why.

AI Agents and Context Lineage

When an AI agent makes a decision, we want to understand it. So we capture the context. What data did it see? What inputs shaped its choice? We build elaborate systems to track context lineage. This document led to that inference which triggered this action.

Context lineage is necessary. But it's not sufficient. Knowing what an agent saw doesn't tell you why it chose option A over option B. The same context could support different choices. The lineage shows inputs. It doesn't show reasoning. It doesn't show intent.

A human with the same information might choose differently. Not because they're smarter or dumber. Because they have different goals. Different priorities. Different understanding of what matters.

Context is just evidence. Decisions need judgment about what the evidence means.

Intent and Goals: The Missing Piece

Decisions require intent. You need to know what you're trying to achieve. An AI agent with perfect context but no goals lacks focus. It can make decisions but for what purpose?

This seems obvious. But we often deploy agents without clear goal structures. We give them instructions. "Process these claims." "Route these tickets." We assume the goal is implicit. It isn't.

Human organizations have the same problem. How many meetings have you attended where people argued past each other? Usually it's because they had different goals. Marketing wants awareness. Sales wants leads. Product wants retention. Each has context. None have aligned intent.

AI agents need explicit goal frameworks. Not just "maximize X." Goals with priorities. Goals with constraints. Goals with time horizons. "Maximize customer satisfaction" means something different over a day than over a year. An agent needs to know which you mean.
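What might an explicit goal framework look like? Here is one illustrative sketch; the fields and names are assumptions for discussion, not TrustGraph's API:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """An explicit goal an agent can reason about. Illustrative only."""
    objective: str                  # what to optimize
    priority: int                   # lower number = higher priority
    horizon_days: int               # timescale over which success is judged
    constraints: list[str] = field(default_factory=list)

goals = [
    Goal("maximize customer satisfaction", priority=1, horizon_days=365,
         constraints=["never promise refunds outside policy"]),
    Goal("minimize handling time", priority=2, horizon_days=1),
]

# Conflicts resolve by priority: satisfaction over a year
# beats handling speed today.
goals.sort(key=lambda g: g.priority)
print(goals[0].objective)  # maximize customer satisfaction
```

The point isn't the data structure. It's that priority, horizon, and constraints are written down where the agent, and its auditors, can see them, instead of living implicitly in a prompt.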

Without intent, context is wasted. With intent, context becomes actionable. The goal shapes what matters. It turns context into informed decisions.

Context Graphs: The Foundation, Not the Solution

Context graphs represent relationships. They show how pieces connect. This entity relates to that event. This decision preceded that outcome. This user has these preferences. This document supports this claim.

Building context graphs is valuable. They make information queryable. They make context shareable. They let AI agents see connections that flat data hides. For TrustGraph and similar systems, graphs are the beginning of AI-ready infrastructure.
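At its simplest, a context graph is a set of subject-predicate-object triples. This minimal sketch (not TrustGraph's actual model; all identifiers are invented) shows how flat facts become queryable connections:

```python
# Minimal triple store: (subject, predicate, object). Illustrative only.
triples = {
    ("doc-42",     "supports", "claim-7"),
    ("claim-7",    "informed", "decision-3"),
    ("decision-3", "preceded", "outcome-9"),
    ("user-1",     "prefers",  "email"),
}

def objects(subject: str, predicate: str) -> set[str]:
    """All objects linked from `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Walk the lineage: which outcome did doc-42 ultimately feed into?
claim = objects("doc-42", "supports").pop()
decision = objects(claim, "informed").pop()
print(objects(decision, "preceded"))  # {'outcome-9'}
```

A flat table can't answer that walk without joins you'd have to invent per question. The graph makes the connections first-class. But notice what the query returns: what happened, never why.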

But graphs don't make decisions. They inform decisions. A perfect map of all context doesn't tell you what to do. It tells you what's true. What to do requires goals, values, priorities. It requires understanding tradeoffs.

Too many teams think the graph is the end game. Build the graph and intelligence magically emerges. It doesn't work that way. The graph is the foundation. On top you need reasoning systems. Goal structures. Decision frameworks. Ways to handle uncertainty and time pressure and the unknown unknowns.

Context graphs let you build those layers. They're essential. But they're the first step, not the last. The difference matters. If you think the graph solves decision-making, you'll be disappointed. If you see it as enabling better decision-making, you'll build the right things next.

Conclusion

Decisions are harder than we pretend. Our models simplify. Our governance structures optimize for the wrong things. Our risk frameworks miss most of what matters. We pretend certainty exists when it doesn't. We ignore that we can't see everything.

AI agents inherit these problems. They amplify them. An agent making bad decisions makes them faster and at scale. Context lineage isn't enough. Goals aren't enough. Graphs aren't enough. We need all of it. We need systems that preserve what humans do well—judgment under uncertainty, goal-setting, value tradeoffs—while using what machines do well—processing context, finding patterns, executing consistently.

The path forward starts with honesty. Admit that decisions are complex. Build tools that handle complexity instead of hiding it. Use context graphs as foundations. Add goal structures. Add uncertainty handling. Add ways to explain not just what an agent saw, but what it valued and what it was trying to accomplish.

This is hard work. It's necessary work. The organizations that figure it out will make better decisions faster. The ones that don't will keep wondering why their AI investments don't pay off.

Context matters. So does everything else.

For more information: