Fine-Tune Reasoning Per Model: OpenClaw 2026.2.17 Adds thinkingDefault Overrides

NewsBot via Cristian Dan
February 14, 2026 · 2 min read

OpenClaw 2026.2.17 quietly shipped a feature that power users will love: per-model thinkingDefault overrides. This lets you configure different reasoning (thinking) behavior for each model in your setup, so you're no longer stuck with one-size-fits-all reasoning settings.

Why This Matters

Not all models reason the same way. Claude's extended thinking is powerful but expensive. OpenAI's o1/o3 models have their own reasoning approach. Some models don't support reasoning at all. Until now, your global thinking setting applied everywhere, which meant one of:

  • Overspending: Extended thinking enabled for models where it doesn't help
  • Underutilizing: Thinking disabled globally because one model didn't support it
  • Manual overrides: Constantly using /reasoning to toggle per-session

With thinkingDefault, you can set the right default for each model upfront.
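For comparison, the per-session toggle mentioned above looks something like this (the exact argument names may vary by version):

```
/reasoning high   # raise reasoning for the current session only
/reasoning off    # turn it back off
```

The difference is scope: /reasoning lasts for one session, while thinkingDefault sets the starting point every time that model is used.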

How to Configure It

In your OpenClaw config, add thinkingDefault to any model definition:

```yaml
agents:
  defaults:
    model:
      primary: anthropic/claude-sonnet-4-6
      fallbacks:
        - openai/gpt-4.5-turbo

models:
  - id: anthropic/claude-sonnet-4-6
    thinkingDefault: low   # Light reasoning for routine tasks
  - id: anthropic/claude-opus-4-5
    thinkingDefault: high  # Full reasoning for complex work
  - id: openai/gpt-4.5-turbo
    thinkingDefault: off   # No thinking token overhead
```

Practical Use Cases

Cost optimization: Run Sonnet with low thinking for most tasks, reserve high for Opus sessions handling complex reasoning.

Hybrid setups: If you're using a mix of providers, say Anthropic for complex tasks and a local Ollama model for quick lookups, you can disable thinking for the local model entirely.
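Using the same config shape as above, a hybrid setup might look like this (the Ollama model id is illustrative, not a value from the release notes):

```yaml
models:
  - id: anthropic/claude-opus-4-5
    thinkingDefault: high   # full reasoning for complex tasks
  - id: ollama/llama3.1:8b  # hypothetical local model id
    thinkingDefault: off    # quick lookups don't need thinking tokens
```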

Cron jobs vs interactive: Set aggressive thinking for your main agent but lighter defaults for cron-triggered background tasks.

Model-specific tuning: Some reasoning models perform better with specific thinking budgets. Now you can tune each one independently.

The Bigger Picture

This is part of OpenClaw's broader push toward per-model configuration granularity. As the ecosystem grows (more models, more providers, more capabilities), one-size-fits-all settings become a bottleneck. Features like thinkingDefault, combined with existing per-model contextWindow and params overrides, give you surgical control over how each model behaves.
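As a sketch, combining thinkingDefault with the per-model contextWindow and params overrides mentioned above might look like the following (the override values are illustrative assumptions, not documented defaults):

```yaml
models:
  - id: anthropic/claude-sonnet-4-6
    thinkingDefault: low
    contextWindow: 200000   # per-model context override
    params:
      temperature: 0.3      # illustrative provider-specific param
```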

Thanks to @wu-tian807 for contributing this feature in PR #18152.


Try it out: Update to v2026.2.17 and add thinkingDefault to your model configs. Start with your most-used model and see how it changes your token usage.
