Text Completion Modules

As AI technology evolves, TrustGraph supports many different Language Model (LM) deployments to balance cost, performance, and security. Currently, TrustGraph supports AWS Bedrock, AzureAI, Anthropic, Cohere, Llamafile, Ollama, OpenAI, and VertexAI:

  • text-completion-azure - Sends a request to an AzureAI serverless endpoint
  • text-completion-bedrock - Sends a request to the AWS Bedrock API
  • text-completion-claude - Sends a request to Anthropic's API
  • text-completion-cohere - Sends a request to Cohere's API
  • text-completion-llamafile - Sends a request to a running Llamafile
  • text-completion-ollama - Sends a request to an LM running with Ollama
  • text-completion-openai - Sends a request to OpenAI's API
  • text-completion-vertexai - Sends a request to a model available through the VertexAI API

Mixing Model Modules

It's possible to use different model deployments for the Naive Extraction and RAG processes. The docker-compose-mix.yaml deployment includes both a text-completion and a text-completion-rag module, each of which can be set to use any of the supported deployment options listed above.
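As a rough sketch of what such a mix could look like, the fragment below pairs an extraction module backed by AWS Bedrock with a RAG module backed by Ollama. The image name, service keys, and environment variables here are illustrative assumptions, not the exact contents of docker-compose-mix.yaml; consult that file for the real service definitions.

```yaml
# Illustrative sketch only -- not the actual docker-compose-mix.yaml contents.
services:
  text-completion:
    # Assumed image and command names; extraction requests go to AWS Bedrock.
    image: trustgraph/trustgraph-flow:latest
    command: text-completion-bedrock
    environment:
      AWS_DEFAULT_REGION: us-west-2     # hypothetical region setting
  text-completion-rag:
    # RAG requests are handled by a locally running Ollama instance.
    image: trustgraph/trustgraph-flow:latest
    command: text-completion-ollama
    environment:
      OLLAMA_HOST: http://ollama:11434  # hypothetical endpoint variable
```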

tip

Any of the YAML files can be configured with both text-completion and text-completion-rag modules. For instance, you could set text-completion to use Cohere and text-completion-rag to use Ollama. Alternatively, you could use Ollama for both modules but specify a different Ollama model for each process. This flexibility in model deployments makes it much easier to run side-by-side comparisons of model performance.
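The second configuration described above, Ollama for both processes but with different models, might be sketched as follows. The environment variable name and model tags are assumptions for illustration, not documented TrustGraph settings:

```yaml
# Illustrative sketch only -- variable and model names are assumptions.
services:
  text-completion:
    command: text-completion-ollama
    environment:
      OLLAMA_MODEL: gemma2      # hypothetical: model used for extraction
  text-completion-rag:
    command: text-completion-ollama
    environment:
      OLLAMA_MODEL: llama3.1    # hypothetical: model used for RAG answers
```

Running two models against the same corpus this way lets you hold the rest of the pipeline constant while varying only the model under test.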