TrustGraph
TrustGraph provides a true end-to-end AI Infrastructure as Code solution. TrustGraph is highly configurable, modular, and flexible, with the ability to deploy your entire AI infrastructure with a single command. The infrastructure includes a built-in Naive Extraction process that ingests a text corpus to build an RDF-style knowledge graph, coupled with a RAG service compatible with popular cloud LLMs and self-hosted SLMs (Small Language Models).
The infrastructure processing components are interconnected with a pub/sub engine to maximize modularity and enable new knowledge processing functions. The core processing components decode documents, chunk text, compute embeddings, apply a local SLM/LLM, call an LLM API, and generate LM predictions.
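As a rough sketch of this pub/sub pattern, the following hypothetical processing stage consumes text chunks from one Apache Pulsar topic and publishes results to another. The topic names, subscription name, and payload format are illustrative assumptions, not TrustGraph's actual schema.

```python
import pulsar

# Connect to the Pulsar broker (address assumes a local deployment).
client = pulsar.Client('pulsar://localhost:6650')

# Subscribe to an input topic and create a producer for the output topic.
# Topic and subscription names here are illustrative only.
consumer = client.subscribe('text-chunks', subscription_name='embedder')
producer = client.create_producer('chunk-embeddings')

def process(text: str) -> bytes:
    # Placeholder for a real processing step (e.g. an embedding model).
    return text.encode('utf-8')

while True:
    msg = consumer.receive()
    try:
        producer.send(process(msg.data().decode('utf-8')))
        consumer.acknowledge(msg)           # remove from the subscription backlog
    except Exception:
        consumer.negative_acknowledge(msg)  # redeliver the message later
```

Because each stage only talks to topics, stages can be added, swapped, or scaled independently of one another.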
The processing showcases the reliability and efficiencies of Graph RAG algorithms, which can capture contextual language cues that are missed by conventional RAG approaches. Graph querying algorithms retrieve not just relevant knowledge but also the language cues essential to understanding semantic usage unique to a text corpus.
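The following is a minimal sketch of the general Graph RAG pattern, assuming an in-memory rdflib graph and a toy entity; TrustGraph's own implementation runs against Cassandra or Neo4j through its query services, so names and data here are illustrative only.

```python
from rdflib import Graph, URIRef, Literal

# Toy graph standing in for the extracted RDF-style knowledge graph.
g = Graph()
ex = 'http://example.org/'
g.add((URIRef(ex + 'TrustGraph'), URIRef(ex + 'uses'), URIRef(ex + 'ApachePulsar')))
g.add((URIRef(ex + 'ApachePulsar'), URIRef(ex + 'isA'), Literal('pub/sub engine')))

def neighbourhood(entity, hops=2):
    """Collect triples within `hops` edges of an entity (naive breadth-first walk)."""
    frontier, triples = {entity}, set()
    for _ in range(hops):
        next_frontier = set()
        for node in frontier:
            for s, p, o in g.triples((node, None, None)):
                triples.add((s, p, o))
                next_frontier.add(o)
            for s, p, o in g.triples((None, None, node)):
                triples.add((s, p, o))
                next_frontier.add(s)
        frontier = next_frontier
    return triples

# Build the LLM prompt from the retrieved subgraph rather than raw text chunks.
context = '\n'.join(f'{s} {p} {o}' for s, p, o in neighbourhood(URIRef(ex + 'TrustGraph')))
prompt = f'Answer using only these facts:\n{context}\n\nQuestion: What does TrustGraph use?'
```

Retrieving a connected neighbourhood, rather than isolated text chunks, is what lets the prompt carry the relationships between entities.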
Processing modules run in containers, and processing can be scaled out by deploying multiple container instances.
Features
- PDF decoding
- Text chunking
- Inference of LMs deployed with Ollama
- Inference of cloud LLMs: AWS Bedrock, AzureAI, Anthropic, Cohere, OpenAI, and VertexAI
- Mixed model deployments
- Application of HuggingFace embeddings models
- RDF-aligned Knowledge Graph extraction
- Graph edge loading into Apache Cassandra or Neo4j
- Storing embeddings in Qdrant (see the sketch after this list)
- Building and loading Knowledge Cores
- Embedding query service
- Graph RAG query service
- All processing integrates with Apache Pulsar
- Containers can be deployed using Docker or Podman
- Plug-and-play architecture: switch different LLM modules to suit your needs
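To illustrate the embeddings and vector-store features above, this sketch encodes text chunks with a HuggingFace sentence-transformers model, stores them in Qdrant, and runs an embedding query. The collection name, model choice, and payload fields are assumptions for illustration, not TrustGraph's actual configuration.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from sentence_transformers import SentenceTransformer

# Model and collection names are illustrative choices, not TrustGraph defaults.
model = SentenceTransformer('all-MiniLM-L6-v2')   # produces 384-dimensional vectors
client = QdrantClient(url='http://localhost:6333')

client.recreate_collection(
    collection_name='chunks',
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

chunks = ['TrustGraph uses Apache Pulsar as its pub/sub engine.',
          'Graph edges can be loaded into Cassandra or Neo4j.']

# Store each chunk's embedding alongside its text as payload.
client.upsert(
    collection_name='chunks',
    points=[PointStruct(id=i, vector=model.encode(text).tolist(),
                        payload={'text': text})
            for i, text in enumerate(chunks)],
)

# Embedding query: find the chunk nearest to a natural-language question.
hits = client.search(
    collection_name='chunks',
    query_vector=model.encode('What pub/sub engine is used?').tolist(),
    limit=1,
)
print(hits[0].payload['text'])
```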