A question that comes up regularly in the Discord: "How do I connect Ollama running on one PC to OpenClaw running on another?" This is a common setup when you have a powerful GPU machine dedicated to inference.
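The general recipe, sketched below: make the Ollama daemon listen on the network instead of just localhost, then point OpenClaw's model endpoint at it. `OLLAMA_HOST` is Ollama's documented listen-address variable; "gpu-box" is a placeholder for your server's hostname or LAN IP, and the exact name of OpenClaw's base-URL setting depends on your config.

```shell
# On the GPU machine: bind Ollama to all network interfaces,
# not just 127.0.0.1 (the default), so other PCs can reach it.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On the OpenClaw machine: sanity-check connectivity before
# touching any OpenClaw config. Should return your model list.
curl http://gpu-box:11434/api/tags
```

Once the curl check works, set OpenClaw's model base URL to `http://gpu-box:11434` (however your OpenClaw version names that setting) and it will talk to the remote daemon like a local one.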
Running local models with OpenClaw sounds great in theory, but the eternal question remains: "Will this model even run on my machine?" A community member recently shared a clever trick for answering that before you download anything.
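One rough back-of-the-envelope check (a heuristic sketch, not necessarily the exact trick from the Discord): a quantized model needs about parameter-count times bits-per-weight, plus some headroom for the KV cache and runtime buffers. The 20% overhead factor below is an assumption; real usage varies with context length.

```python
def fits_in_memory(params_billion: float, bits_per_weight: int,
                   available_gb: float, overhead: float = 1.2) -> bool:
    """Rough check: weight size times an overhead factor for the
    KV cache and buffers, compared against available VRAM/RAM."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weight_gb * overhead <= available_gb

# A 7B model at Q4 (~4 bits/weight) needs roughly 7 * 0.5 * 1.2 = 4.2 GB.
print(fits_in_memory(7, 4, 8))    # True: fits on an 8 GB card
print(fits_in_memory(70, 4, 24))  # False: 70B at Q4 wants ~42 GB
```

If the check fails, drop to a smaller parameter count or a more aggressive quantization before pulling the model.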
If you're running local models through Ollama, Qwen 3 is one of the most capable options for agentic workloads. With OpenClaw v2026.2.17, Qwen 3 reasoning mode now works properly; here's how to set it up.
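Under the hood, reasoning mode rides on the `think` flag that Ollama's `/api/chat` endpoint exposes for reasoning-capable models such as Qwen 3. A minimal sketch of the request body that ends up on the wire (how OpenClaw itself surfaces this flag is a matter of its own config):

```python
import json

# Request body for Ollama's /api/chat endpoint. "think": true asks
# a reasoning-capable model to return its chain of thought in a
# separate "thinking" field instead of mixing it into the answer.
payload = {
    "model": "qwen3",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "think": True,
    "stream": False,
}
body = json.dumps(payload)
print(body)
```

POST that body to `http://localhost:11434/api/chat` and the response will carry the reasoning and the final answer as separate fields.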
A common question in the OpenClaw Discord: can you use Ollama's cloud-hosted models like glm-5:cloud instead of running models locally? The answer is yes, but with some important configuration details.
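On the Ollama side, the flow looks roughly like this (a sketch assuming a recent CLI with cloud-model support; check `ollama --help` on your version):

```shell
# Cloud models are proxied through ollama.com, so the local
# daemon needs an authenticated account first.
ollama signin

# Then run the cloud model exactly like a local one; inference
# happens on Ollama's servers, not your GPU.
ollama run glm-5:cloud
```

From OpenClaw's point of view nothing changes: it still talks to the local Ollama endpoint, which forwards cloud-tagged models upstream.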
Running OpenClaw with local Ollama models can be a great way to keep costs down and data private, but many users hit a frustrating wall when their bot becomes unresponsive mid-conversation.
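Whatever the root cause turns out to be on your machine, one defensive pattern keeps the agent usable: wrap the model call in a hard client-side timeout so a stalled local model can't hang the whole conversation. A generic sketch (the function names here are illustrative, not OpenClaw API):

```python
import concurrent.futures

def with_timeout(fn, timeout_s, fallback):
    """Run a possibly-hanging model call in a worker thread and
    give up after timeout_s seconds instead of blocking forever."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return fallback
    finally:
        pool.shutdown(wait=False)  # don't block on the stuck call
```

Pair this with a fallback message ("model timed out, try a smaller model") and an unresponsive backend degrades gracefully instead of freezing the bot.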
Ever wanted your AI agents to debate, collaborate, and iterate on answers using different models? This guide shows you how to set up a "swarm" of local Ollama models that work together on tasks.
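The core loop of such a swarm is simple: each model sees the prompt plus everything said so far and produces a revised answer. A minimal sketch of that round-robin debate, with the actual model call injected as a callable (in practice a small wrapper around Ollama's `/api/chat`; the function shape here is an assumption for illustration):

```python
def debate(models, prompt, ask, rounds=2):
    """Round-robin debate: each model sees the prompt plus all
    answers so far and replies in turn. `ask` is any callable
    (model_name, text) -> reply, e.g. an Ollama chat wrapper,
    injected so the orchestration loop stays testable offline."""
    transcript = []
    for _ in range(rounds):
        for model in models:
            context = prompt + "".join(
                f"\n[{m}] {a}" for m, a in transcript)
            transcript.append((model, ask(model, context)))
    return transcript
```

Two models over two rounds yields four entries; a final "judge" pass over the transcript can then pick or merge the best answer.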
Running OpenClaw with Ollama locally but want web search without paid API keys? This guide covers your two best options for 100% free web search that works with local models.
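Keyless options generally work by hitting a search endpoint that requires no API key at all; a self-hosted SearXNG instance is one common example. A sketch of building its JSON search URL (the instance address is an assumption; SearXNG's `format=json` parameter must be enabled in its settings):

```python
from urllib.parse import urlencode

def searx_url(query: str, base: str = "http://localhost:8080") -> str:
    """Build a SearXNG JSON search URL. SearXNG needs no API key,
    so it pairs well with fully local model setups."""
    return f"{base}/search?" + urlencode({"q": query, "format": "json"})

print(searx_url("openclaw ollama"))
# http://localhost:8080/search?q=openclaw+ollama&format=json
```

A simple GET against that URL returns ranked results as JSON, which the agent can feed straight back into the model as context.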