📄️ Overview
TrustGraph is designed to be modular to support as many Language Models and environments as possible. A natural fit for a modular architecture is to decompose functions into a set of modules connected through a pub/sub backbone. Apache Pulsar serves as this backbone, acting as the data broker that manages inputs and outputs between modules.
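The decoupling this architecture provides can be sketched with a minimal in-memory pub/sub broker. This is an illustration of the pattern only: the `Broker` class, topic name, and module handlers below are hypothetical stand-ins, not TrustGraph or Pulsar APIs; in a real deployment Apache Pulsar provides the topics and brokering.

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Toy stand-in for Pulsar: routes messages from publishers to subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict):
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
extracted = []

# A hypothetical extraction module subscribes to a topic of text chunks...
broker.subscribe("chunks", lambda msg: extracted.append(
    {"edges": [(msg["text"], "mentions", "TrustGraph")]}))

# ...while an upstream chunking module publishes into the same topic,
# without either module knowing about the other directly.
broker.publish("chunks", {"text": "some document chunk"})
print(extracted[0]["edges"][0])
```

Because modules only share topic names, any module can be swapped out (for example, a different Language Model backend) without changes to the rest of the pipeline.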
📄️ Pub/Sub Backbone
Modular TrustGraph architecture with Pulsar
📄️ Core Modules
The Core Modules form the basic building blocks of the knowledge network. These modules are required to deploy a full end-to-end knowledge pipeline.
📄️ Text Completion Modules
As AI technology evolves, TrustGraph supports many different Language Model (LM) deployments to balance cost, performance, and security. Currently, TrustGraph supports AWS Bedrock, AzureAI, Anthropic, Cohere, Ollama, and VertexAI:
📄️ Scripts
There are predefined scripts that launch TrustGraph actions. The scripts are run with:
📄️ Knowledge Graphs
Interacting with knowledge graphs can be challenging. Graph visualizations can be useful for subgraphs with small numbers of nodes and edges. However, the TrustGraph extraction process will likely extract 5,000 graph edges per 100 pages of a text corpus. At that scale, graph visualization tools provide little value, and production graphs often reach billions of edges. The RAG process is an ideal way to make sense of these extremely large knowledge graphs.
📄️ Telemetry
OpenTelemetry and TrustGraph