
Ollama on a Local Network

The Ollama deployment of TrustGraph gives an incredible amount of flexibility in how LMs are deployed. Not only does Ollama support a large number of models, it also runs on Linux, macOS, and Windows. This deployment makes it possible to run LMs across a local network, on a host machine better equipped to handle the workload.

caution

While many SLMs can run on a laptop, modern laptops are not designed for prolonged, intense processing jobs. The hardware is capable; the bottleneck is thermal management. Running an LM on a laptop for extended periods can push internal temperatures to levels that cause permanent damage. A desktop with similar hardware can reject heat far more efficiently, providing better long-term performance even with the same specs.

Configuring Ollama Server

Ollama includes a built-in server, but it requires some configuration before it will accept connections from other machines on the network. A full list of configuration instructions can be found here. The most important setting is the OLLAMA_HOST environment variable on the machine running Ollama.

The variable should be set the same way on Linux, macOS, and Windows:

OLLAMA_HOST=0.0.0.0
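
How the variable is set depends on the operating system of the host machine. The commands below are a sketch following Ollama's documented approach (a systemd service on Linux, the desktop app on macOS and Windows); exact service names and restart steps may vary between versions.

# Linux: Ollama installed as a systemd service
sudo systemctl edit ollama.service
#   add under [Service]:  Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# macOS: Ollama desktop app
launchctl setenv OLLAMA_HOST "0.0.0.0"
#   then quit and relaunch Ollama

# Windows: set a user environment variable, then restart Ollama
setx OLLAMA_HOST "0.0.0.0"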
note

OLLAMA_HOST=0.0.0.0 is for the HOST machine running the models with Ollama. The machine running TrustGraph must also set the OLLAMA_HOST variable for the docker-compose-ollama.yaml file. However, on the machine running TrustGraph, OLLAMA_HOST should be set to the local network address of the machine hosting Ollama, such as OLLAMA_HOST=<HOST.IP.ADDRESS.HERE>.
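
One way to provide the variable on the TrustGraph machine, assuming docker-compose-ollama.yaml reads OLLAMA_HOST from the shell environment, is to export it before launching the deployment:

export OLLAMA_HOST=<HOST.IP.ADDRESS.HERE>
docker compose -f docker-compose-ollama.yaml up -d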

Configuring the Local Network

By default, Ollama listens on port 11434. The machine running Ollama on the local network will likely need a new inbound firewall rule that allows TCP connections on port 11434.
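
The exact command for the firewall rule depends on the operating system running Ollama. The examples below are illustrative sketches, not TrustGraph-specific instructions:

# Linux with ufw
sudo ufw allow 11434/tcp

# Linux with firewalld
sudo firewall-cmd --permanent --add-port=11434/tcp
sudo firewall-cmd --reload

# Windows (run as Administrator)
netsh advfirewall firewall add rule name="Ollama" dir=in action=allow protocol=TCP localport=11434

Once the rule is in place, connectivity can be verified from the TrustGraph machine; Ollama answers a plain HTTP request on its root path:

curl http://<HOST.IP.ADDRESS.HERE>:11434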