How to Connect Ollama on a Remote Machine to OpenClaw
A question that comes up regularly in the Discord: "How do I connect Ollama running on one PC to OpenClaw running on another?"
This is a common setup when you have a powerful GPU machine for inference but want to run OpenClaw on a lighter device (like a Mac Mini or VPS). Here's the complete guide.
The Problem
By default, Ollama listens only on 127.0.0.1:11434 (localhost). If OpenClaw is on a different machine, it can't reach Ollama at all.
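You can see the symptom from the OpenClaw machine before changing anything (the IP below is illustrative):

```shell
# From the OpenClaw machine; 192.168.1.100 stands in for the Ollama box
# While Ollama is bound only to 127.0.0.1, this typically fails with
# "Connection refused" or times out
curl --max-time 3 http://192.168.1.100:11434/api/tags
```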
Step 1: Configure Ollama to Accept Remote Connections
On the machine running Ollama, you need to set the OLLAMA_HOST environment variable to listen on all interfaces:
Linux (systemd)
```shell
# Edit the Ollama service
sudo systemctl edit ollama.service

# Add these lines:
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"

# Restart Ollama
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

macOS
```shell
# Stop Ollama if running as an app
# Then start from terminal with:
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

Windows
Set the environment variable OLLAMA_HOST to 0.0.0.0:11434 in System Properties → Environment Variables, then restart Ollama.
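On Linux you can confirm the new binding took effect before involving the second machine at all (ss ships with iproute2; on macOS, lsof or netstat serve the same purpose):

```shell
# Should show ollama listening on 0.0.0.0:11434, not 127.0.0.1:11434
sudo ss -ltnp | grep 11434
```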
Step 2: Verify Ollama is Accessible
From your OpenClaw machine, test connectivity:
```shell
curl http://<OLLAMA_IP>:11434/api/tags
```

You should see a JSON list of your installed models.
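For a quick sanity check, here's a trimmed sketch of that response (the model names are examples; the real response also carries fields like size, digest, and modified_at per model) and a one-liner to pull out just the names:

```shell
# Trimmed example of an /api/tags response (fields abbreviated)
response='{"models":[{"name":"qwen2.5:32b"},{"name":"llama3.1:8b"}]}'

# Extract just the installed model names
echo "$response" | grep -o '"name":"[^"]*"'
# → "name":"qwen2.5:32b"
# → "name":"llama3.1:8b"
```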
Step 3: Configure OpenClaw to Use Remote Ollama
In your openclaw.json, set the baseUrl for the Ollama provider:
```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://<OLLAMA_IP>:11434"
      }
    }
  }
}
```

Replace <OLLAMA_IP> with the actual IP address of your Ollama machine (e.g., 192.168.1.100).
Step 4: Set Your Model
Now configure OpenClaw to use an Ollama model:
```shell
openclaw models default ollama/qwen2.5:32b
```

Or via config:
```json
{
  "agents": {
    "defaults": {
      "model": "ollama/qwen2.5:32b"
    }
  }
}
```

Security Considerations
Warning: Exposing Ollama to 0.0.0.0 makes it accessible to anyone on your network. For production setups:
- Use a firewall to restrict access to only your OpenClaw machine's IP
- Use a VPN like Tailscale or WireGuard for secure cross-network access
- Consider SSH tunneling for temporary secure connections:
```shell
ssh -L 11434:localhost:11434 user@ollama-machine
# Then use localhost:11434 in OpenClaw
```
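For the firewall option, a minimal sketch using ufw on the Ollama machine looks like this (assuming ufw is installed and 192.168.1.50 is your OpenClaw machine's IP; both values are illustrative):

```shell
# Allow only the OpenClaw machine to reach Ollama's port
sudo ufw allow from 192.168.1.50 to any port 11434 proto tcp
# Block the port for everyone else (ufw evaluates rules in order,
# so the allow rule above takes precedence)
sudo ufw deny 11434/tcp
```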
Troubleshooting
- Connection refused: Check firewall rules on the Ollama machine
- Timeout errors: Ensure both machines are on the same network or properly routed
- Model not found: Verify the model is installed on the Ollama machine with `ollama list`
Bonus: Multiple Ollama Servers
You can even route different models to different machines by setting up multiple providers:
```json
{
  "models": {
    "providers": {
      "ollama-gpu": {
        "type": "ollama",
        "baseUrl": "http://192.168.1.100:11434"
      },
      "ollama-cpu": {
        "type": "ollama",
        "baseUrl": "http://192.168.1.101:11434"
      }
    }
  }
}
```

Originally asked by Mr. Dante in the OpenClaw Discord. Got questions? Head to #users-helping-users!