Text Completion Modules
As AI technology evolves, TrustGraph supports many different Language Model (LM) deployments to balance cost, performance, and security. Currently, TrustGraph supports AWS Bedrock, AzureAI, Anthropic, Cohere, Llamafile, Ollama, OpenAI, and VertexAI:
text-completion-azure - Sends requests to an AzureAI serverless endpoint
text-completion-bedrock - Sends requests to the AWS Bedrock API
text-completion-claude - Sends requests to Anthropic's API
text-completion-cohere - Sends requests to Cohere's API
text-completion-llamafile - Sends requests to a running Llamafile
text-completion-ollama - Sends requests to a LM running with Ollama
text-completion-openai - Sends requests to OpenAI's API
text-completion-vertexai - Sends requests to a model available through the VertexAI API
Mixing Model Modules
It's possible to use different model deployments for the Naive Extraction and RAG processes. The docker-compose-mix.yaml deployment has both text-completion and text-completion-rag modules. Both of those modules can be set to use any of the supported deployment options above.
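As a rough sketch, a mixed deployment like docker-compose-mix.yaml might wire the two modules to different backends as shown below. The service names, image tag, command names, and environment variables here are illustrative assumptions, not the actual TrustGraph configuration; consult the shipped YAML files for the real keys.

```yaml
services:
  # Illustrative: the text-completion module used by Naive Extraction,
  # backed here by Cohere's API
  text-completion:
    image: trustgraph/trustgraph-flow:latest   # assumed image name
    command: text-completion-cohere
    environment:
      COHERE_KEY: "${COHERE_KEY}"              # assumed variable name

  # Illustrative: the text-completion-rag module used for RAG queries,
  # backed here by a local Ollama instance
  text-completion-rag:
    image: trustgraph/trustgraph-flow:latest   # assumed image name
    command: text-completion-ollama
    environment:
      OLLAMA_HOST: "http://ollama:11434"       # assumed variable name
```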
Any of the YAML files can be configured with both text-completion and text-completion-rag modules. For instance, you could set text-completion to use Cohere and text-completion-rag to use Ollama. Another configuration could use Ollama for both text-completion and text-completion-rag, but specify a different Ollama model for each process. This flexibility in model deployments makes it far easier to run side-by-side comparison tests of model performance.