Use Cases

While the genesis of TrustGraph was to evolve GraphRAG by aligning with open source technologies and standards like RDF, the need for a simple way to deploy AI infrastructure became apparent. In addition to simplifying AI deployments, TrustGraph's AI Engine approach serves three key use cases: data enhancement, customization and extensibility, and exclusive deployments.

Data Enhancement

TrustGraph can ingest large amounts of unstructured data scattered across documents and structure that knowledge as RDF knowledge graphs with mapped vector embeddings. This structuring process uncovers patterns and relationships that would otherwise require manual, human analysis to discover. Sample uses (a minimal RDF sketch follows the list):

  • Regulatory and Compliance Analysis
  • Financial Report Analysis
  • Academic Research
  • Legal Research
  • Legal Discovery
  • Social Graph Analysis
  • Message Logs Analysis
  • Anomaly Detection
  • Analysis Report Generation
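
As an illustration of the target representation, the sketch below structures a single extracted fact as RDF triples using the rdflib Python library. The namespace, entities, and predicates are illustrative placeholders, not TrustGraph's actual schema.

```python
# Minimal sketch: one extracted fact as RDF triples, using rdflib.
# The "ex" namespace and the terms below are illustrative only.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/kg/")

g = Graph()
g.bind("ex", EX)

# A fact pulled from an unstructured filing, as subject-predicate-object triples.
g.add((EX.AcmeCorp, RDF.type, EX.Organization))
g.add((EX.AcmeCorp, EX.reportedRevenue, Literal("12.4M USD")))
g.add((EX.AcmeCorp, EX.regulatedBy, EX.SEC))

# Serialize to Turtle, a compact RDF text format.
print(g.serialize(format="turtle"))
```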

Customization and Extensibility

There are three concepts central to the customization and extensibility of TrustGraph:

  • Extraction Tailoring
  • Reusable Knowledge Cores
  • Apache Pulsar Pub/Sub backbone

Extraction Tailoring

The default extraction process in TrustGraph is a naive extraction: it has no prior knowledge of the data to be ingested and is designed to work with any LLM. This extraction process can easily be tailored (see the sketch after this list) by:

  • Optimizing Prompt structure for a particular LLM
  • Optimizing System Prompts for a particular LLM
  • Defining important terms and concepts
  • Defining custom structure to enhance the RDF
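
As a rough sketch of what extraction tailoring can look like, the hypothetical snippet below injects a domain glossary and custom instructions into a per-chunk extraction prompt. The names and format are assumptions for illustration, not TrustGraph's actual prompt configuration.

```python
# Hypothetical extraction-tailoring sketch; TrustGraph's real prompt
# configuration may differ. Glossary terms and wording are illustrative.
SYSTEM_PROMPT = (
    "You are a knowledge extraction engine. Emit RDF triples as "
    "(subject, predicate, object) tuples, one per line."
)

# Domain glossary: steers the LLM toward the terms and concepts that matter.
DOMAIN_TERMS = {
    "PII": "personally identifiable information as defined by GDPR Art. 4",
    "controller": "the entity that determines the purposes of processing",
}

def build_extraction_prompt(chunk: str) -> str:
    """Assemble a per-chunk prompt that injects the domain glossary."""
    glossary = "\n".join(
        f"- {term}: {meaning}" for term, meaning in DOMAIN_TERMS.items()
    )
    return (
        f"Key terms to recognize:\n{glossary}\n\n"
        f"Extract all triples from the following text:\n{chunk}"
    )
```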

Reusable Knowledge Cores

The extracted knowledge graph and mapped vector embeddings become a knowledge core. Building a knowledge core is a one-time process: cores can be saved, shared, and reloaded, so only the knowledge needed for a given AI Engine has to be loaded. This approach also enables data access management.
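
The save-once, reload-many workflow might look like the following hypothetical sketch, where a core is persisted as a Turtle file plus an embeddings file. The file formats and function names are illustrative assumptions, not TrustGraph's actual core format.

```python
# Hypothetical knowledge-core persistence sketch; file layout is illustrative.
import json
from rdflib import Graph

def save_core(graph: Graph, embeddings: dict, stem: str) -> None:
    """Persist a core once: graph as Turtle, embeddings alongside as JSON."""
    graph.serialize(destination=f"{stem}.ttl", format="turtle")
    with open(f"{stem}.embeddings.json", "w") as f:
        json.dump(embeddings, f)

def load_core(stem: str) -> tuple:
    """Reload a previously built core without re-running extraction."""
    graph = Graph().parse(f"{stem}.ttl", format="turtle")
    with open(f"{stem}.embeddings.json") as f:
        return graph, json.load(f)
```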

Pub/Sub Backbone

Apache Pulsar is an enterprise-grade pub/sub backbone that connects TrustGraph's services through schema-controlled data processing queues. Processing modules integrate into TrustGraph as a consumer, a producer, or both. New modules can subscribe and publish to existing queues or define their own.
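
For example, a minimal consumer/producer module built on the pulsar-client Python library might look like the sketch below. The topic names and the transform are placeholders, not TrustGraph's actual queues.

```python
# Minimal consumer/producer module on Pulsar (pulsar-client library).
# Topic names and the "processing" step are illustrative placeholders.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# Consume from an upstream queue, publish results to a downstream queue.
consumer = client.subscribe(
    "persistent://public/default/text-chunks",
    subscription_name="my-module",
)
producer = client.create_producer("persistent://public/default/enriched-chunks")

while True:
    msg = consumer.receive()
    try:
        enriched = msg.data().upper()  # stand-in for real processing
        producer.send(enriched)
        consumer.acknowledge(msg)
    except Exception:
        # Negatively acknowledged messages are redelivered by Pulsar,
        # so a module that fails mid-processing does not lose data.
        consumer.negative_acknowledge(msg)
```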

Exclusive Deployments

While TrustGraph supports cloud-hosted models through Anthropic, AWS Bedrock, AzureAI, AzureOpenAI, Cohere, Google AI Studio, OpenAI, and VertexAI, many users want to maintain full control over their data. Sending sensitive data to external APIs creates security risks and, in many cases, violates organizational policies. To solve this problem, TrustGraph can deploy a full end-to-end AI Engine using Ollama or Llamafile. These deployments allow a language model to run in any environment, including a laptop (although running models on a laptop is not advised because of thermal management issues). This approach provides a fully self-contained, secure deployment for unlocking the power of GenAI with open Small Language Models (SLMs).
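
As a sketch of how self-contained inference stays local, the snippet below queries a model served by Ollama's REST API on its default port; no data leaves the machine. The model name is an assumption for illustration.

```python
# Query a locally hosted SLM through Ollama's REST API (default port 11434).
# The model name below is an assumption; use whichever model you have pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2",
        "prompt": "Summarize the attached policy.",
        "stream": False,
    },
)
print(resp.json()["response"])
```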