Context Graph Manifesto

The Answer for Enterprise AI is...Insurance?

January 30, 2026
10 min read

That’s right, I did it. I said the “I” word. Insurance.

In my recent talks about context graphs and TrustGraph, some ideas from my past have been haunting me. If you’ve gotten this far, perhaps you’re willing to come with me a bit further.

In 2018, I joined Lyft’s autonomous vehicle org, L5, to lead their strategy for safety and cybersecurity, especially in the area of regulatory compliance. At that point, I had nearly two decades of experience in safety-critical and mission-critical military aerospace systems and in measuring their cybersecurity performance.

My pitch was simple - don’t follow, lead. Don’t let regulators write ill-fitting, overbearing, and often nonsensical requirements for technology they don’t understand. Create the roadmap. Proactively show a path forward to winning the battle for the consumer mindset.

Because ultimately, that’s what it’s all about. All the safety data and certifications don’t matter if the public doesn’t *feel* safe riding in an autonomous vehicle.

How Did AVs Solve This?

The short answer is they haven’t - yet. The AV START Act was introduced in 2017, and “start” is the apt word. The AV START Act was just the beginning of thinking about how to regulate AVs, with no actual guidance or safety frameworks. And what happened to the AV START Act? It died. It never passed. Here we are in 2026, and you’re thinking, surely there has been other legislation passed since then? Right?

Wrong. No legislation regulating AVs has been signed into law. Nothing. Yet Waymo is now operating in many states. How? By working individually with local government jurisdictions on regulatory requirements. But if there is no safety framework they can claim to have satisfied, how does anyone know they’re safe?

Yet Another Big Data Problem

As I began to consider all the ways something could go wrong with AVs, from either a safety or a security perspective, it all too quickly became apparent it wasn’t a one-person job. It wasn’t a job for a team either, not even a big one. The number of bad-outcome scenarios for passengers, other vehicles, other drivers, other passengers, pedestrians, bicyclists, people on scooters, people walking their pets, and so on quickly spirals into millions of permutations.

Testing every single scenario isn’t possible. There simply isn’t enough time. We can run simulations, but people will be quick to say simulations aren’t the real world. And in the case of AVs, where people can be injured or even killed, that distinction matters.

So, What is the AV Industry Doing?

While I haven’t worked directly in the AV industry since 2019, the lack of AV regulations is a pretty clear sign that everyone looked at all the scenarios and agreed that writing test plans for every possible thing that could go wrong just isn’t possible.

Then what do you do? You collect data. As much as you possibly can. And that’s exactly what Waymo has been doing. Just a few days ago, a young child was struck by a Waymo in Santa Monica. Before you gasp, the child is totally fine. Before you wag your finger at Waymo, we need to look at the data.

The preliminary data suggests that the Waymo was traveling at low speed (17 mph) when the child ran out from behind a large SUV. The Waymo decelerated to 6 mph when it struck the child. Waymo’s human driver models project that a human driver’s reaction times would have resulted in hitting the child at 14 mph.

While we may be inclined to say it’s unacceptable that the Waymo struck the child at all, is that actually realistic? Have you ever been driving late at night in forested areas in the winter? Especially in regions with lots of deer? Ever hit one? In a lot of areas of the country, hitting a deer with your car is a nightly worry, and anyone who’s ever done it will tell you it’s a helpless feeling: you can do nothing but hope for the best in the fraction of a second before impact as the deer darts in front of you.

Force Majeure

No matter how good a driver you are, if you’re driving in the dark and a deer darts in front of you, there’s nothing you can do. It appears (although the investigation is still ongoing) that this Waymo outperformed a typical human in being able to recognize and react to a sudden object coming into its path.

We have a legal construct for this - force majeure. I could give you a legal definition of force majeure, but it’s actually really simple. Shit happens. Sometimes shit just happens, and there’s nothing you can do to avoid a bad outcome.

Instead of trying to prove that they’ve designed an AV that can avoid every possible bad outcome, Waymo has instead chosen to collect data. Data comparing how their AVs perform against human drivers. How do AVs react when things go wrong? Accidents happen, but how bad are they? How much damage is done? How badly are people injured? Were people killed? How often does this happen?

We Already Have a Solution For This

If I think back to 2018, I can still remember staring at the ceiling wondering, how are we going to account for all the ways something can go wrong with AVs??? So I started thinking: we don’t give nearly this much thought to human drivers. Driver licensing tests are woefully unrepresentative of daily driving in most situations. Yet once you have your driver’s license, you’re fully qualified to drive anywhere. How do we manage all that risk of accidents?

Insurance. In addition to having a driver’s license from your state, what else do you need? Insurance. And what is the insurance for? It’s not for you - you must carry liability insurance to cover the harm you might cause to others.

Pricing Bad Outcomes

While some of my co-workers in 2018 immediately saw why I was saying, “the path to safety and security for AVs is insurance!”, others cringed. Not because they thought it was a horrible idea (maybe some did), but because people don’t like putting monetary values on things like human lives.

But that’s exactly what actuaries do. Actuaries take all the accident data and tally how much damage was caused: everything from the cost of repairing vehicle damage, to medical bills for injuries, to the costs incurred by loss of life. Yes, loss of life gets distilled down to a single dollar value. And that makes a lot of people uncomfortable.

And why does it get distilled down to a single dollar number? How else would the insurance company know how to price the policies? Based on the expected value of a driver’s bad outcomes, the insurer sells you a policy priced at a point that lets them profit on average.
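To make that concrete, here’s a minimal sketch of how expected-value pricing works. All the scenario probabilities, costs, and the loading factor are made-up numbers for illustration, not real actuarial data.

```python
# Hypothetical illustration of expected-value pricing (made-up numbers).
# Each scenario: (annual probability of occurrence, average cost of the outcome).
scenarios = {
    "minor fender bender": (0.05, 3_000),
    "injury accident":     (0.005, 150_000),
    "fatal accident":      (0.0001, 5_000_000),
}

# Expected annual loss is the probability-weighted sum of outcome costs.
expected_loss = sum(p * cost for p, cost in scenarios.values())

# The insurer adds a loading factor to cover overhead and profit on average.
loading_factor = 1.3
annual_premium = expected_loss * loading_factor

print(f"Expected annual loss: ${expected_loss:,.2f}")   # $1,400.00
print(f"Annual premium:       ${annual_premium:,.2f}")  # $1,820.00
```

The uncomfortable part is right there in the middle: every outcome, including loss of life, has to become a dollar figure before the sum can be taken.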

AI is Like AVs in More Ways than One

The obvious answer is that AVs use AI and computer vision to interpret what they see and guide themselves along our roads like a human. But just like the endless number of scenarios that can go wrong while driving, AI agents face a nearly boundless set of scenarios where things can go wrong.

As I talked about in my article on determinism, the old way of judging software systems, where input A always produces output B, no longer applies to agentic systems. Say I task an agent with crafting marketing outreach emails and tell it to do whatever it takes to generate replies. Perhaps the agent writes insulting messages, or promises free services, or even makes incendiary political statements that enrage some and thrill others. There is no “output B” for this task.

But just like when you’re injured in a car accident and look to get your car repaired and your medical bills covered, enterprises want the same guarantees with AI agents. If an agent produces bad outcomes, how does the enterprise receive compensation for the harm? How do we even measure the value of the harm?

And then, who is liable? Is it the agent designer? The model provider? What about the data that was used to train the model? What about the information the agent used? Sure, it’s cynical, but someone has to be blamed. But who?

Enterprises see the potential of AI. They’ve all done countless PoCs, but why didn’t those PoCs convert to large-scale AI transformation? Risk. AI agents are not deterministic in the traditional sense. Input A does not guarantee output B. And what happens in the case of force majeure? The system was designed to the best of everyone’s abilities but still produced a bad outcome. Is the system liable? The model? The operator of the system? If an enterprise can’t have guarantees of deterministic system performance, it needs financial guarantees that bad outcomes will be covered by someone else.

And You Thought I Wasn’t Going to Talk About Context Graphs…

And I’m kinda not going to! I think there have been enough articles about what context graphs are and all the magical things they do for agentic systems. But context graphs are really just a structure for data. What does an agentic system that makes context a first-class citizen look like?

I’ve seen some people make an oil analogy lately. So let’s run with that. A “pipeline” might look like:

Oil -> Oil Extraction -> Oil Refinement -> Oil Distribution -> Conversion to Fuel -> Fuel Distribution -> Fuel Consumption -> Energy Extraction (goal)

In this analogy, is context the oil? No, the oil is the raw data. This is the missing primitive in agentic systems: a way to convert raw data into durable, auditable context.

Data -> Context Conversion -> Context Storage and Refinement -> Context Retrieval -> Context Logic -> Business Logic -> Agentic Outcome (goal)

TrustGraph is one implementation of that primitive.
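As a rough sketch of what that pipeline might look like in code, here’s a hypothetical chain of stage functions. The function names, types, and data are illustrative assumptions for this article, not TrustGraph’s actual API.

```python
# Hypothetical sketch of a data-to-context pipeline (illustrative names only,
# not TrustGraph's actual API).
from dataclasses import dataclass

@dataclass
class ContextItem:
    subject: str
    relation: str
    obj: str
    source: str  # provenance: where the raw data came from

def convert(raw_records: list[dict]) -> list[ContextItem]:
    """Context Conversion: turn raw data into structured context items."""
    return [ContextItem(r["subject"], r["relation"], r["object"], r["source"])
            for r in raw_records]

def store_and_refine(items: list[ContextItem]) -> list[ContextItem]:
    """Context Storage and Refinement: deduplicate before persisting."""
    seen, refined = set(), []
    for item in items:
        key = (item.subject, item.relation, item.obj)
        if key not in seen:
            seen.add(key)
            refined.append(item)
    return refined

def retrieve(items: list[ContextItem], topic: str) -> list[ContextItem]:
    """Context Retrieval: pull only the context relevant to the task at hand."""
    return [i for i in items if topic in i.subject or topic in i.obj]

# Data -> Context Conversion -> Storage/Refinement -> Retrieval -> agent prompt
raw = [{"subject": "Acme Corp", "relation": "renewed", "object": "support contract",
        "source": "crm_export.csv"}]
context = retrieve(store_and_refine(convert(raw)), "Acme Corp")
```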

But that’s just solving the first problem: improving agentic outcomes with the right context. We must then capture the data associated with this pipeline process to enable auditability of the system and to enable system improvements. In terms of a graph, that would be reification, as described in this article.
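To illustrate what reification buys you here, consider this hedged sketch: each statement the pipeline produces becomes a node of its own, carrying metadata about how it was produced. The field names and values are illustrative assumptions, not a specific graph standard.

```python
# Hypothetical sketch of reifying a pipeline output: the statement itself is
# wrapped in a record that captures where it came from and how it was made.
statement = {
    "subject": "Acme Corp",
    "relation": "renewed",
    "object": "support contract",
}

reified = {
    "statement": statement,               # the claim itself
    "produced_by": "context_conversion",  # which pipeline stage emitted it
    "source_document": "crm_export.csv",  # raw data it was derived from
    "extractor": "extractor-v2",          # illustrative component name
    "timestamp": "2026-01-28T14:03:00Z",
    "confidence": 0.92,
}

# An auditor can now ask not just "what does the graph say?" but
# "where did this statement come from, and how much do we trust it?"
```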

It Takes WAY More than Just a Context Graph

Strictly speaking, a context graph is only applicable at the “Context Storage and Refinement” stage of our agentic pipeline. The full pipeline requires a multitude of solutions: data source integrations, data streaming, workflow automation, observability, telemetry, and that’s not even touching on all the algorithms for context conversion, refinement, and retrieval.

When we consider all of this infrastructure, it’s not surprising that enterprise AI PoCs haven’t produced good outcomes. The emphasis has been on agent designs and prompts, with little to no consideration for the resource that makes all of this possible - context.

It’s a Journey

For anyone who was also in the AV industry during its frothy heyday of 2018-2019, you’ll remember the hype train. You’ll remember how 2020 was going to be the year AVs dominated the world. Here we are in 2026, and AVs are slowly rolling out in a handful of markets with public sentiment still largely negative.

Agentic systems are facing a similar battle. We must make context a first-class citizen and collect enough data to make meaningful predictions on the system’s ability to cause harm. Insuring agentic systems, now there’s an opportunity…

For more information: