
TrustGraph 2.3 Released: Leaner Deployments, Broader Messaging Support, and Deeper Agent Observability

April 28, 2026

San Francisco, CA — TrustGraph today announces the release of TrustGraph 2.3, a major update to its open-source context graph platform for AI agents. This release delivers significant infrastructure improvements that reduce memory overhead by up to 3.5 GB per deployment, expands messaging fabric choices to include RabbitMQ and experimental Kafka support, and introduces deeper agent explainability tooling — making TrustGraph faster to deploy, easier to operate at scale, and more transparent in how its agents reason.

Processor Groups: A Smarter Deployment Shape

The headline feature of 2.3 is Processor Groups, a new architecture that replaces the previous one-container-per-processor deployment model. Related processors are now bundled into managed groups across six tiers — control, embeddings, ingest, LLM, RAG, and storage — reducing installation memory consumption by roughly 1.5–2.5 GB per deployment. The configuration builder now produces processor groups as the standard deployment shape for all new TrustGraph 2.3 installations. Improved logging and concurrency controls are also included, along with async Cassandra table helpers that reduce contention in storage and query paths.
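As a rough illustration of the shape change (the six tier names come from this release; the processor names and counts below are hypothetical), grouping collapses many per-processor containers into one container per tier:

```python
# Illustrative sketch only: the tier names match the release, but the
# processor names and grouping logic are invented for this example.

GROUPS = {
    "control":    ["scheduler", "config-watcher"],
    "embeddings": ["text-embedder", "graph-embedder"],
    "ingest":     ["pdf-decoder", "chunker"],
    "llm":        ["prompt-runner"],
    "rag":        ["graph-rag", "document-rag"],
    "storage":    ["cassandra-writer", "triple-store"],
}

def containers(per_processor: bool) -> int:
    """Old shape: one container per processor. New shape: one per group."""
    if per_processor:
        return sum(len(procs) for procs in GROUPS.values())
    return len(GROUPS)
```

With this toy inventory, the old model would run 11 containers while the grouped model runs 6, which is where the per-deployment memory savings come from.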

RabbitMQ Production-Ready, Kafka Experimental

TrustGraph 2.3 makes RabbitMQ a first-class, production-ready pub/sub fabric alongside Apache Pulsar. A full backend refactor replaces the previous shared topic exchange with one fanout exchange per topic, eliminating cross-topic interference and fixing a request/response race condition that affected earlier builds. Selecting RabbitMQ over Pulsar saves up to an additional 1 GB of memory per installation on top of the Processor Group savings — a meaningful difference for self-hosted and edge deployments.
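The fanout-per-topic design can be pictured with a small in-memory model (a sketch, not TrustGraph's actual backend code; all names here are invented for illustration):

```python
from collections import defaultdict

class FanoutBroker:
    """Toy model of 'one fanout exchange per topic'."""

    def __init__(self):
        # topic -> queues bound to that topic's own exchange
        self.exchanges = defaultdict(list)

    def subscribe(self, topic):
        queue = []
        # A queue binds only to its own topic's exchange.
        self.exchanges[topic].append(queue)
        return queue

    def publish(self, topic, message):
        # Fanout: every queue on this exchange gets a copy. Queues bound
        # to other topics are never touched, so publishing on one topic
        # cannot interfere with subscribers of another.
        for queue in self.exchanges[topic]:
            queue.append(message)
```

Under the old shared-exchange design, a routing mistake could deliver one topic's traffic to another topic's consumers; giving each topic its own exchange makes that class of interference structurally impossible.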

An experimental Kafka backend is also included in 2.3, demonstrating TrustGraph's growing independence from any single messaging system. Topics map 1:1 to Kafka topics, subscriptions map to consumer groups, and topic lifecycle is managed via AdminClient. This backend requires further integration testing and is not recommended for production use at this time.
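The mapping the experimental backend uses can be sketched as follows; the group-naming convention shown is an assumption for illustration, and topic creation via AdminClient is omitted since it needs a live broker:

```python
# Illustrative sketch of the experimental Kafka backend's mapping.
# (In the real backend, topic lifecycle would go through AdminClient.)

def kafka_topic(topic: str) -> str:
    # 1:1 mapping: the TrustGraph topic name is the Kafka topic name.
    return topic

def consumer_group(topic: str, subscription: str) -> str:
    # Hypothetical scheme: each subscription gets its own consumer group,
    # so every subscriber on a topic sees every message (pub/sub
    # semantics rather than work-queue semantics).
    return f"{topic}:{subscription}"
```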

Flow Service Lifecycle Management

A new dedicated Flow Service now owns queue lifecycle for pub/sub flows, decoupled from the config service. Queues are created and torn down in lockstep with flow start and stop events, eliminating queue leakage and stale bindings that could accumulate across restarts. Both RabbitMQ and Pulsar backends have been extended with lifecycle hooks, and consumers, producers, and subscribers now bind through a shared backend interface. This makes TrustGraph substantially more stable under high flow churn and scales cleanly to many concurrent flows.
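A minimal sketch of the lockstep lifecycle described above (class and method names are assumptions for this example, not TrustGraph's API):

```python
class Backend:
    """Stand-in for the shared pub/sub backend interface."""

    def __init__(self):
        self.queues = set()

    def create_queue(self, name):
        self.queues.add(name)

    def delete_queue(self, name):
        self.queues.discard(name)

class FlowService:
    """Owns queue lifecycle, decoupled from configuration."""

    def __init__(self, backend):
        self.backend = backend
        self.flows = {}   # flow id -> queue names owned by that flow

    def start_flow(self, flow_id, topics):
        names = [f"{flow_id}.{t}" for t in topics]
        for name in names:
            self.backend.create_queue(name)
        self.flows[flow_id] = names

    def stop_flow(self, flow_id):
        # Tear down in lockstep with the stop event, so no stale
        # queues or bindings accumulate across restarts.
        for name in self.flows.pop(flow_id, []):
            self.backend.delete_queue(name)
```

Because every queue is owned by exactly one flow and deleted when that flow stops, repeated start/stop cycles leave the broker in the same state they found it.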

Multi-Architecture Container Support

All TrustGraph containers are now published as multi-architecture manifests covering both amd64 and arm64, with ARM builds running on native ARM runners. The HuggingFace processor has been updated to Python 3.12 to unblock ARM64 support. This opens TrustGraph to a wider range of deployment targets, including ARM-based cloud instances and edge hardware.

Agent Explainability and LLM Cost Tracking

TrustGraph 2.3 ships deeper agent explainability instrumentation across the agent orchestrator and ReAct pattern, with envelope field naming unified across agent, GraphRAG, and DocumentRAG components. A new provenance helper module centralizes RDF namespace and URI construction, and the TrustGraph ontology is now published as a Turtle file at specs/ontology/trustgraph.ttl.
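In the spirit of that helper module, centralized URI construction might look like the following sketch; the namespace URL and function name are placeholders for illustration, not the published ontology:

```python
from urllib.parse import quote

# Placeholder namespace, not TrustGraph's actual one.
TRUSTGRAPH_NS = "http://example.org/trustgraph/"

def uri(local_name: str) -> str:
    """Build a namespaced URI, percent-encoding unsafe characters once,
    so every component constructs identifiers the same way."""
    return TRUSTGRAPH_NS + quote(local_name, safe="")
```

Centralizing this avoids the subtle provenance bugs that arise when different components encode (or forget to encode) the same local name differently.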

LLM token usage — input and output counts — now propagates from all LLM providers through the prompt client, flow API, and socket clients to callers, enabling per-request cost tracking across agent, GraphRAG, DocumentRAG, and prompt services.
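With input and output counts surfaced per request, callers can compute cost directly; this sketch assumes hypothetical field names and example prices:

```python
from dataclasses import dataclass

@dataclass
class TokenUsage:
    """Per-request token counts as propagated to the caller.
    Field names here are illustrative, not TrustGraph's envelope schema."""
    input_tokens: int
    output_tokens: int

    def cost(self, usd_per_1k_in: float, usd_per_1k_out: float) -> float:
        # Provider price sheets are typically quoted per 1,000 tokens.
        return (self.input_tokens / 1000) * usd_per_1k_in \
             + (self.output_tokens / 1000) * usd_per_1k_out
```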

Additional Improvements

  • Domain and range validation rejects triples that violate ontology schema constraints during extraction
  • Standardized rate-limiting across Cohere, Mistral, OpenAI, and vLLM, backed by a shared contract test suite
  • S3 retry with exponential backoff for improved resilience in large-document and multipart workflows
  • Deferred optional SDK imports so a missing optional dependency no longer prevents platform startup
  • Several bug fixes across flow service restart handling, API gateway dispatcher eviction, ontology extraction, and schema migration
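The deferred-import pattern behind the optional-SDK fix can be sketched as follows (function and cache names are illustrative, not TrustGraph's code):

```python
import importlib

_SDK_CACHE = {}

def get_sdk(module_name: str):
    """Import an optional SDK on first use rather than at startup.

    A missing optional dependency then fails only the request that
    needs it, instead of preventing the whole platform from starting."""
    if module_name not in _SDK_CACHE:
        try:
            _SDK_CACHE[module_name] = importlib.import_module(module_name)
        except ImportError as exc:
            raise RuntimeError(
                f"optional dependency {module_name!r} is not installed"
            ) from exc
    return _SDK_CACHE[module_name]
```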

Availability

TrustGraph 2.3 is available now on GitHub. Full release notes, updated documentation, and migration guidance are available at docs.trustgraph.ai.


For more information: