Use Cases

While the genesis of TrustGraph was to evolve Graph RAG by aligning with open source technologies and standards like RDF, the need for a simple way to deploy AI infrastructure became apparent. In addition to simplifying AI deployments, the Infrastructure as Code approach of TrustGraph serves three key use cases: high-efficacy RAG responses, agent integration, and secure AI pipeline deployments.

Improving RAG

Augmenting RAG with knowledge graphs preserves semantic granularity when a statement is extracted from its original context. The Naive Extraction process in TrustGraph is designed for a text corpus and works well with documents in common text file formats. The corpus could be sets of legal text, chat conversation history, research papers, medical records, technical documentation, technical standards, technical requirements, reports, or even financial statements. TrustGraph excels in domains that require understanding thousands of pages of documents to perform a task. Here’s a short list of GenAI responses that TrustGraph can provide:

  • Semantic Search
  • Question Answering
  • Summarization
  • Conceptual Analysis
  • Recommendations
  • Content Generation
  • Step-by-Step Processes
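To illustrate why triple-based extraction preserves semantic granularity, here is a minimal sketch in plain Python. The triple data, the `match` helper, and the example question are all illustrative assumptions, not TrustGraph's actual extraction output or API; a real deployment would store RDF triples in a graph store and query them with SPARQL.

```python
# Minimal sketch of Graph RAG retrieval over RDF-style triples.
# The data and helper function are illustrative, not TrustGraph's API.

from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

# Statements extracted from a corpus, kept as triples so each fact
# retains its subject and predicate instead of becoming a loose chunk.
graph: List[Triple] = [
    ("ACME-2024-10K", "filed_by", "ACME Corp"),
    ("ACME-2024-10K", "reports_revenue", "4.2B USD"),
    ("ACME Corp", "headquartered_in", "Denver"),
]

def match(graph: List[Triple],
          s: Optional[str] = None,
          p: Optional[str] = None,
          o: Optional[str] = None) -> List[Triple]:
    """Return triples matching the pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Where is the company that filed ACME-2024-10K headquartered?"
# A two-hop walk the graph answers directly; flat chunks may not.
filer = match(graph, s="ACME-2024-10K", p="filed_by")[0][2]
answer = match(graph, s=filer, p="headquartered_in")[0][2]
```

The two-hop lookup at the end is the point: because each statement keeps its subject and predicate, a question spanning two documents resolves by following edges rather than hoping both facts land in the same retrieved chunk.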

Agent Integration

The Apache Pulsar backbone of TrustGraph enables quick and simple agent integration into the full pipeline. Using the TrustGraph module templates, integrating a new service is as simple as adding the agent code to the template and setting the appropriate subscription and publishing queues for inputs and outputs.
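The subscribe/process/publish pattern described above can be sketched as follows. This is a hedged illustration using the standard `pulsar-client` Python package: the topic names, subscription name, broker URL, and the `handle` function are all assumptions for the example, not TrustGraph's actual module template.

```python
# Sketch of an agent module wired into Pulsar input/output queues.
# Topic names, subscription name, and broker URL are illustrative
# assumptions, not TrustGraph's real configuration.

import json

def handle(payload: dict) -> dict:
    """Agent logic: consume an input message, produce an output message.
    A stand-in for real agent work."""
    text = payload["text"]
    return {"text": text, "length": len(text)}

def run() -> None:
    # Requires a running Pulsar broker and the `pulsar-client` package.
    import pulsar
    client = pulsar.Client("pulsar://localhost:6650")
    consumer = client.subscribe("agent-input", "example-agent")
    producer = client.create_producer("agent-output")
    while True:
        msg = consumer.receive()
        result = handle(json.loads(msg.data()))
        producer.send(json.dumps(result).encode("utf-8"))
        consumer.acknowledge(msg)

if __name__ == "__main__":
    run()
```

Only the queue wiring in `run()` touches Pulsar; the agent logic itself stays a plain function, which is what makes dropping new agent code into a template straightforward.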

Secure AI Pipeline Deployments

While TrustGraph supports cloud-hosted models through Anthropic, AzureAI, and VertexAI, many users want to maintain full control over their data. Sending sensitive data to external APIs introduces security risks and, in many cases, violates organizational policies. To solve this problem, TrustGraph deploys a full end-to-end pipeline using Ollama. The Ollama deployment allows a Language Model to run in any environment, including a laptop (although running models on a laptop is not advised because of thermal management issues). This approach provides a fully self-contained and secure deployment for unlocking the power of GenAI with open source Small Language Models (SLMs).
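To show what a fully local inference call looks like, here is a minimal sketch against Ollama's HTTP API (`POST /api/generate` on port 11434), using only the Python standard library. The model name is an assumption; no document text leaves the machine because the endpoint is local.

```python
# Sketch of querying a locally hosted SLM through Ollama's HTTP API.
# The model name is an illustrative assumption; the endpoint and
# request shape follow Ollama's generate API.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "gemma2:9b") -> dict:
    """Assemble a non-streaming generate request for the local endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "gemma2:9b") -> str:
    # Requires Ollama running locally with the model already pulled.
    body = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the request never crosses a network boundary, the same pipeline code works on an air-gapped host, which is the core of the secure-deployment use case.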