Minikube Variables

API Gateway

The API Gateway is a required component that supports the CLI and the Data Workbench. It must be configured with a secret key, but the key can be empty if no authentication is required. The Data Workbench does not currently use keys for authentication. The example below shows how to set the API Gateway secret to an empty value, i.e. no authentication.

kubectl -n trustgraph create secret \
generic gateway-secret \
--from-literal=gateway-secret=
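
If you do want the gateway to require authentication, populate the secret with a non-empty key instead. For example, a random key can be generated at creation time (openssl is used here purely as a convenient way to produce a random string; any sufficiently random value works):

kubectl -n trustgraph create secret \
generic gateway-secret \
--from-literal=gateway-secret="$(openssl rand -hex 32)"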

LLM API Configuration

caution

All tokens, paths, and authentication files must be set PRIOR to launching a YAML configuration file.
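
All of the secrets on this page are created in the trustgraph namespace, so that namespace must already exist for the commands to succeed. If your chosen YAML configuration does not create it for you, it can be created up front with a standard kubectl command:

kubectl create namespace trustgraph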

AWS Bedrock API

kubectl -n trustgraph create secret \
generic bedrock-credentials \
--from-literal=aws-id-key=AWS-ID-KEY \
--from-literal=aws-secret-key=AWS-SECRET-KEY
note

The current default model for AWS Bedrock is Mixtral 8x7B in us-west-2.
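
Secret values are stored base64-encoded, so a quick way to confirm a value was entered correctly is to decode it back out. For example, to check the Bedrock secret created above (the same pattern works for any secret on this page):

kubectl -n trustgraph get secret bedrock-credentials \
-o jsonpath='{.data.aws-secret-key}' | base64 -d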

AzureAI API

kubectl -n trustgraph create secret \
generic azure-credentials \
--from-literal=azure-endpoint=AZURE-ENDPOINT \
--from-literal=azure-token=AZURE-TOKEN

Azure OpenAI API

note

The OpenAI service within AzureAI is similar to deploying a serverless model in Azure, but it also requires setting the API version and the model name. AzureAI lets the user name the model however they choose, so the model name supplied here is whatever name was set within AzureAI.

kubectl -n trustgraph create secret \
generic azure-openai-credentials \
--from-literal=azure-endpoint=https://ENDPOINT.API.HOST.GOES.HERE/ \
--from-literal=azure-token=TOKEN-GOES-HERE \
--from-literal=api-version=API-VERSION-GOES-HERE \
--from-literal=openai-model=USER-DEFINED-MODEL-NAME-HERE
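
kubectl create secret will fail if a secret with the same name already exists, so if a value such as the API version needs to change later, delete the old secret first and then re-run the create command (the same approach applies to any of the secrets on this page):

kubectl -n trustgraph delete secret azure-openai-credentials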

Anthropic API

kubectl -n trustgraph create secret \
generic claude-credentials \
--from-literal=claude-key=CLAUDE_KEY
note

The current default model for Anthropic is Claude 3.5 Sonnet.
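
If the key is already exported in your shell environment, it can be passed through rather than pasted onto the command line. The variable name below is just an example:

kubectl -n trustgraph create secret \
generic claude-credentials \
--from-literal=claude-key="$ANTHROPIC_API_KEY"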

Cohere API

kubectl -n trustgraph create secret \
generic cohere-credentials \
--from-literal=cohere-key=COHERE-KEY
note

The current default model for Cohere is Aya:8B.

Google AI Studio API

kubectl -n trustgraph create secret \
generic googleaistudio-credentials \
--from-literal=google-ai-studio-key=GOOGLEAISTUDIO-KEY
tip

Google is currently offering free usage of Gemini-1.5-Flash through Google AI Studio.

Llamafile API

caution

The current Llamafile integration assumes you already have a Llamafile running on the host machine. Additional Llamafile orchestration is coming soon.

danger

Running TrustGraph and a Llamafile on a laptop can be tricky. Many laptops, especially MacBooks, have only 8GB of memory, which is not enough to run both TrustGraph and most Llamafiles. Keep in mind that laptops do not have the thermal management capabilities required for sustained heavy compute loads.

The secret below tells TrustGraph where to reach the running Llamafile. The key name here follows the same pattern as the other LLM secrets; check the YAML configuration you are deploying for the exact secret name and key it expects.

kubectl -n trustgraph create secret \
generic llamafile-credentials \
--from-literal=llamafile-url=LLAMAFILE-URL
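
It is also worth confirming that the Llamafile is reachable before launching the configuration. Assuming it is running on the host machine on its default port of 8080, a quick check from the host is:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/

A response code of 200 indicates the Llamafile server is up.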

Ollama API

tip

The power of Ollama is the flexibility it provides in Language Model deployments. Running LMs with Ollama enables fully secure TrustGraph AI pipelines that don't rely on any external APIs, so no data leaves the host environment or network. More information on Ollama deployments can be found here.

note

The current default model for an Ollama deployment is Gemma2:9B.
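
If you are relying on the default, make sure the model has been pulled into Ollama before TrustGraph starts sending requests. With a standard Ollama installation that is:

ollama pull gemma2:9b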

danger

Running TrustGraph and Ollama on a laptop can be tricky. Many laptops, especially MacBooks, have only 8GB of memory, which is not enough to run both TrustGraph and Ollama. Most SLMs, such as Gemma2:9B or Llama3.1:8B, require roughly 5GB of memory. Even if you have enough memory to run the desired model with Ollama, note that laptops do not have the thermal management capabilities required for sustained heavy compute loads.

kubectl -n trustgraph create secret \
generic ollama-credentials \
--from-literal=ollama-host=http://ollama:11434/
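
The ollama-host value above points at a service named ollama inside the cluster. If Ollama is instead running directly on the machine hosting minikube, the host.minikube.internal DNS name that minikube provides to pods can be used to reach it (assuming Ollama's default port of 11434):

kubectl -n trustgraph create secret \
generic ollama-credentials \
--from-literal=ollama-host=http://host.minikube.internal:11434/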

OpenAI API

kubectl -n trustgraph create secret \
generic openai-credentials \
--from-literal=openai-token=OPENAI-TOKEN-HERE
note

The current default model for OpenAI is gpt-3.5-turbo.

VertexAI API

kubectl -n trustgraph create secret \
generic vertexai-creds --from-file=private.json=private.json
note

The current default model for VertexAI is gemini-1.0-pro-001.
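
The private.json file referenced above is a Google Cloud service-account key. If you do not already have one, it can be generated with the gcloud CLI; the service-account email below is a placeholder and should be an account with VertexAI access:

gcloud iam service-accounts keys create private.json \
--iam-account=SERVICE-ACCOUNT@PROJECT-ID.iam.gserviceaccount.com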

VectorDB API Configuration

Pinecone API

note

Unlike Qdrant and Milvus, which are deployed locally with TrustGraph, Pinecone is accessed through an API. You will need your own Pinecone API key to use it as your VectorDB.

kubectl -n trustgraph create secret \
generic pinecone-api-key \
--from-literal=pinecone-api-key=PINECONE-API-KEY
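
Once every secret required by your chosen configuration has been created, a final check before launching the YAML file is to list what is present in the namespace:

kubectl -n trustgraph get secrets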