Docker Variables
API Gateway
The API Gateway is a required component that supports the CLI and the Data Workbench. It can be configured with a secret key if authentication is required; if no authentication is required, GATEWAY_SECRET can be ignored.
export GATEWAY_SECRET=<TOKEN-GOES-HERE>
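If authentication is enabled, any sufficiently random string will work as the secret. One way to generate one, assuming openssl is available on the host:
# Generate a random 32-byte hex token to use as the gateway secret
export GATEWAY_SECRET=$(openssl rand -hex 32)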
LLM API Configuration
All tokens, paths, and authentication files must be set PRIOR to launching a YAML configuration file. For Docker and Podman deployments, set the following parameters for your selected model deployment option prior to launch. Set parameters only for the model deployments you plan to use.
AWS Bedrock API
export AWS_ID_KEY=<ID-KEY-HERE>
export AWS_SECRET_KEY=<TOKEN-GOES-HERE>
The current default model for AWS Bedrock is Mixtral8x7B in us-west-2.
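Before launch, you can sanity-check that the keys resolve to a valid AWS identity, assuming the AWS CLI is installed on the host:
# Confirm the exported keys are valid AWS credentials
AWS_ACCESS_KEY_ID=$AWS_ID_KEY AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY \
  aws sts get-caller-identity --region us-west-2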
AzureAI API
export AZURE_ENDPOINT=<https://ENDPOINT.API.HOST.GOES.HERE/>
export AZURE_TOKEN=<TOKEN-GOES-HERE>
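To confirm the endpoint and token work before launch, here is a minimal test-request sketch; the /chat/completions route and Bearer auth shown are common for Azure serverless deployments but are assumptions, so adjust them to match your deployment:
# Test request; route and auth header are assumptions for a serverless deployment
curl "${AZURE_ENDPOINT%/}/chat/completions" \
  -H "Authorization: Bearer $AZURE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'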
Azure OpenAI API
The OpenAI service within AzureAI is similar to deploying a serverless model in Azure, but it also requires setting the API version and the model name. Note that AzureAI lets the user name the model deployment however they choose, so OPENAI_MODEL must match the name the user set within AzureAI.
export AZURE_ENDPOINT=<https://ENDPOINT.API.HOST.GOES.HERE/>
export AZURE_TOKEN=<TOKEN-GOES-HERE>
export API_VERSION=<API_VERSION-HERE>
export OPENAI_MODEL=<user-defined-model-name-here>
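To check that all four values line up, here is a minimal test request following the standard Azure OpenAI REST pattern:
# Standard Azure OpenAI route: /openai/deployments/<deployment>/chat/completions
curl "${AZURE_ENDPOINT%/}/openai/deployments/$OPENAI_MODEL/chat/completions?api-version=$API_VERSION" \
  -H "api-key: $AZURE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'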
Anthropic API
export CLAUDE_KEY=<TOKEN-GOES-HERE>
The current default model for Anthropic is Claude 3.5 Sonnet.
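A quick way to verify the key against Anthropic's Messages API; the model ID below is one published Claude 3.5 Sonnet ID, so substitute the one you intend to use:
# Minimal Messages API call to confirm the key works
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $CLAUDE_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20240620", "max_tokens": 32, "messages": [{"role": "user", "content": "Hello"}]}'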
Cohere API
export COHERE_KEY=<TOKEN-GOES-HERE>
The current default model for Cohere is Aya:8B.
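To verify the key, here is a minimal call to Cohere's Chat API; the endpoint and body shape follow Cohere's v1 REST API, so adjust if your account targets a different version:
# Minimal Chat API call to confirm the key works
curl https://api.cohere.com/v1/chat \
  -H "Authorization: Bearer $COHERE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}'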
Google AI Studio API
export GOOGLE_AI_STUDIO_KEY=<TOKEN-GOES-HERE>
Google is currently offering free usage of Gemini-1.5-Flash through Google AI Studio.
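To verify the key, a minimal generateContent request against the Gemini API:
# Minimal Gemini API call to confirm the key works
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_AI_STUDIO_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "Hello"}]}]}'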
Llamafile API
The current Llamafile integration assumes you already have a Llamafile running on the host machine. Additional Llamafile orchestration is coming soon.
Running TrustGraph and a Llamafile on a laptop can be tricky. Many laptops, especially MacBooks, have only 8GB of memory, which is not enough to run both TrustGraph and most Llamafiles. Keep in mind that laptops do not have the thermal management capabilities required for sustained heavy compute loads.
export LLAMAFILE_URL=<hostname>
The default Llamafile URL is http://localhost:8080/v1. On macOS, if running a Llamafile locally, set LLAMAFILE_URL=http://host.docker.internal:8080/v1.
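Since a Llamafile serves an OpenAI-compatible API, a quick way to confirm it is reachable before launching TrustGraph:
# List models to confirm the Llamafile server is up
curl http://localhost:8080/v1/models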
Ollama API
The power of Ollama is the flexibility it provides in Language Model deployments. Running LMs with Ollama enables fully secure TrustGraph AI pipelines that don't rely on any external APIs: no data leaves the host environment or network. More information on Ollama deployments can be found here.
The current default model for an Ollama deployment is Gemma2:9B.
Running TrustGraph and Ollama on a laptop can be tricky. Many laptops, especially MacBooks, have only 8GB of memory, which is not enough to run both TrustGraph and Ollama. Most SLMs, like Gemma2:9B or Llama3.1:8B, require roughly 5GB of memory. Even if you do have enough memory to run the desired model with Ollama, note that laptops do not have the thermal management capabilities required for sustained heavy compute loads.
export OLLAMA_HOST=<hostname>
The default Ollama host is http://localhost:11434. On macOS, if running Ollama locally, set OLLAMA_HOST=http://host.docker.internal:11434.
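Before launch, it is worth pulling the default model and confirming the Ollama server responds:
# Pull the default model and confirm the server is reachable
ollama pull gemma2:9b
curl http://localhost:11434/api/tags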
OpenAI API
export OPENAI_TOKEN=<TOKEN-GOES-HERE>
The current default model for OpenAI is gpt-3.5-turbo.
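To verify the token, a minimal Chat Completions request:
# Minimal Chat Completions call to confirm the token works
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'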
VertexAI API
mkdir -p vertexai
cp <json-credential-from-GCP> vertexai/private.json
The current default model for VertexAI is gemini-1.0-pro-001.
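A quick sanity check that the copied file is well-formed JSON, assuming python3 is available on the host:
# Confirm the credential parses as JSON
python3 -m json.tool vertexai/private.json > /dev/null && echo "credential parses"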
VectorDB API Configuration
Pinecone API
Unlike Qdrant and Milvus, which are deployed locally with TrustGraph, Pinecone is accessed through an API. You will need your own Pinecone API key to use it as your VectorDB.
export PINECONE_API_KEY=<TOKEN-GOES-HERE>
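To confirm the key works before launch, you can list your indexes through Pinecone's control-plane REST API; the header name follows Pinecone's current API, and some API versions also expect an X-Pinecone-API-Version header:
# List indexes to confirm the API key is valid
curl -H "Api-Key: $PINECONE_API_KEY" https://api.pinecone.io/indexes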