Ollama Models

Model Support updated: August 27, 2024

note

The current default model for an Ollama deployment is Gemma2:9B.

Ollama is constantly adding support for new models. It is recommended to check Ollama's model library for their full model support list. Below is a list of models that have been tested with TrustGraph and Ollama:

  • Gemma2:27B
  • Gemma2:9B
  • Gemma2:2B
  • Llama3.1:405B
  • Llama3.1:70B
  • Llama3.1:8B
  • Llama3:70B
  • Llama3:8B
  • NEW Hermes3:70B
  • NEW Hermes3:8B
  • Phi3:3B
  • Phi3:14B
  • NEW Phi3.5:3.8B
  • Mixtral8x7B
  • Mixtral8x22B
  • DeepSeekV2:16B
  • WizardLM2:7B
  • WizardLM2:8x22B
  • Qwen2:72B
  • Qwen2:7B
  • Qwen2:1.5B
  • Qwen2:0.5B
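Any of the models above can be pulled and served locally with the Ollama CLI before pointing a TrustGraph deployment at it. A minimal sketch (note that tags in the Ollama library are lowercase, e.g. `gemma2:9b`, so verify the exact tag for the model you choose):

```shell
# Pull the current default model for an Ollama deployment
ollama pull gemma2:9b

# Confirm the model downloaded and is available
ollama list

# Start the Ollama server if it is not already running
# (listens on port 11434 by default)
ollama serve
```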
note

In addition to parameter-count variants, Ollama supports different quantized versions of its supported models. Testing with TrustGraph has not yet yielded any general recommendations for selecting a level of quantization.
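Quantized variants are selected through the model tag. A sketch of what this looks like, using Ollama's usual quantization-suffix convention (the specific tags below are illustrative; check the model's page in the Ollama library for the tags actually published):

```shell
# The default tag typically resolves to a 4-bit quantized build
ollama pull gemma2:9b

# Request a specific quantization level via the tag suffix
# (example tags; confirm availability in the Ollama library)
ollama pull gemma2:9b-instruct-q4_0
ollama pull gemma2:9b-instruct-q8_0
```

Higher-bit quantizations (e.g. `q8_0`) use more memory but degrade quality less; since TrustGraph testing has produced no general recommendation, trying a model at more than one quantization level against your own workload is the practical approach.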