TrustGraph

The TrustGraph Engine provides all the tools, services, Graph Stores, and VectorDBs needed to deploy reliable, scalable, and accurate AI agents. The engine includes:

  • Bulk document ingestion
  • Automated Knowledge Graph Building
  • Automated Vectorization
  • Model Agnostic LLM Integration
  • RAG combining both Knowledge Graphs and VectorDBs
  • Enterprise Grade Reliability, Scalability, and Modularity
  • Data Privacy enablement through local LLM deployments with Ollama and Llamafile (see the sketch after this list)
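
For the local-inference path, here is a minimal sketch of prompting a model served by Ollama over its default local HTTP API. The port, model name, and prompt are illustrative assumptions rather than TrustGraph defaults; TrustGraph's own LLM integration handles this wiring internally, and the sketch only shows what a raw call to a locally deployed model looks like.

```python
# Minimal sketch: query a locally served model through Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and that the
# "llama3" model has already been pulled; both are illustrative choices,
# not TrustGraph defaults.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarise the key entities in this paragraph: ...",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the generated completion text
```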

Ingest your sensitive data in batches and build reusable, enhanced knowledge cores that transform general-purpose LLMs into knowledge specialists. The observability dashboard lets you monitor LLM latency, resource usage, and token throughput in real time. Visualize your enhanced data with Neo4j.
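
For example, once extracted triples have been loaded into Neo4j, a small script using the official Python driver can pull a sample of the graph for inspection. This is a minimal sketch: the connection URI and credentials are placeholders, and because node labels and relationship types depend on how the knowledge graph was extracted, the query matches generic subject-predicate-object patterns rather than assuming a schema.

```python
# Minimal sketch: peek at the extracted knowledge graph in Neo4j.
# The URI and credentials are placeholders; adjust them for your deployment.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Schema-agnostic query: return a small sample of subject-predicate-object
    # patterns regardless of the labels and relationship types used during extraction.
    result = session.run(
        "MATCH (s)-[p]->(o) RETURN s, type(p) AS predicate, o LIMIT 25"
    )
    for record in result:
        print(record["s"], record["predicate"], record["o"])

driver.close()
```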

Features

  • PDF decoding
  • Text chunking
  • On-Device Inference with Ollama or Llamafile
  • Cloud LLM Inference: AWS Bedrock, AzureAI, Anthropic, Cohere, OpenAI, and VertexAI
  • Mixed model deployments
  • HuggingFace embedding models (see the sketch after this list)
  • RDF Knowledge Extraction Agents
  • Apache Cassandra or Neo4j as graph stores
  • Qdrant as VectorDB
  • Build and load Knowledge Cores
  • Embedding query service
  • GraphRAG query service
  • Apache Pulsar backbone
  • Deploy with Docker, Podman, Minikube, or Kubernetes
  • Modular architecture for easy agent integration
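
To make the embedding and vector-store features concrete, the sketch below embeds a few text chunks with a HuggingFace sentence-transformers model and queries them through Qdrant. The model name, collection name, host, and port are illustrative assumptions, not values mandated by TrustGraph.

```python
# Minimal sketch: embed text chunks with a HuggingFace model and query Qdrant.
# Model name, collection name, host, and port are illustrative assumptions.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # any HuggingFace embedding model
client = QdrantClient(host="localhost", port=6333)

chunks = [
    "TrustGraph builds knowledge graphs from ingested documents.",
    "GraphRAG combines graph traversal with vector similarity search.",
]

# Create a collection sized to the model's embedding dimension.
client.create_collection(
    collection_name="chunks",
    vectors_config=VectorParams(
        size=model.get_sentence_embedding_dimension(),
        distance=Distance.COSINE,
    ),
)

# Store each chunk alongside its embedding.
client.upsert(
    collection_name="chunks",
    points=[
        PointStruct(id=i, vector=model.encode(text).tolist(), payload={"text": text})
        for i, text in enumerate(chunks)
    ],
)

# Embed a query and retrieve the most similar chunks.
hits = client.search(
    collection_name="chunks",
    query_vector=model.encode("How does GraphRAG work?").tolist(),
    limit=2,
)
for hit in hits:
    print(hit.score, hit.payload["text"])
```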