Lock Down Your Agent's Web Access: URL Allowlists for web_search and web_fetch
OpenClaw 2026.2.17 introduces a powerful security feature that many operators have been waiting for: URL allowlists for web_search and web_fetch tools. This lets you control exactly which domains your agent can browse and search, preventing it from wandering into problematic territory.
Why This Matters
By default, OpenClaw agents with web tools enabled can fetch content from anywhere on the internet. For many use cases, that's fine. But for production deployments, especially in enterprise settings, unrestricted web access creates real risks:
- Data exfiltration concerns: An agent could be tricked into fetching attacker-controlled URLs that log sensitive info from requests
- Compliance requirements: Some organizations need to restrict agents to approved documentation sites or internal resources
- Cost control: Limiting to specific domains prevents agents from going down rabbit holes across the entire internet
- Predictable behavior: When you know exactly where your agent can look, debugging becomes much easier
How It Works
The new allowlist feature (PR #18584) lets you specify exactly which URLs or domain patterns your agent can access through web_search and web_fetch.
In your OpenClaw config, you can now add:
agents:
  defaults:
    tools:
      web_search:
        allowlist:
          - "docs.openclaw.org"
          - "*.github.com"
          - "stackoverflow.com"
      web_fetch:
        allowlist:
          - "docs.openclaw.org"
          - "api.example.com/public/*"

When an allowlist is configured:
- Any URL not matching the patterns is blocked before the request is made
- The agent receives a clear error explaining the restriction
- Blocked attempts are logged for audit purposes
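The release notes don't spell out the matching semantics, but the patterns above suggest glob-style matching on the hostname, with an optional path constraint when the pattern contains a slash. Here's a minimal sketch of that idea in Python; `is_allowed` is a hypothetical helper for illustration, not OpenClaw's actual implementation:

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def is_allowed(url: str, allowlist: list[str]) -> bool:
    """Return True if the URL matches any allowlist pattern.

    Assumed semantics: a pattern without "/" is matched against the
    hostname only ("docs.openclaw.org", "*.github.com"); a pattern
    with "/" is matched against hostname + path
    ("api.example.com/public/*").
    """
    parsed = urlparse(url)
    host = parsed.hostname or ""
    host_and_path = host + parsed.path
    for pattern in allowlist:
        if "/" in pattern:
            # Pattern constrains the path as well as the host.
            if fnmatch(host_and_path, pattern):
                return True
        elif fnmatch(host, pattern):
            return True
    return False

allowlist = ["docs.openclaw.org", "*.github.com", "api.example.com/public/*"]
```

Under these assumed semantics, `https://gist.github.com/...` would pass via `*.github.com`, while `https://api.example.com/private/...` would be blocked because only the `/public/` path prefix is listed.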
Practical Use Cases
Internal documentation bot: Restrict web_fetch to your company's wiki, Notion workspace, or Confluence instance. The agent can look up internal docs but can't browse external sites.
Developer assistant: Allow GitHub, Stack Overflow, and your framework's docs. Block everything else to keep responses focused and trustworthy.
Customer support agent: Limit to your product docs and knowledge base. No risk of the agent pulling in competitor information or unverified sources.
Research with guardrails: Allow specific academic or news domains while blocking social media and potentially unreliable sources.
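As a concrete example, the internal documentation bot above might be configured like this, using the same config structure shown earlier (the wiki and Confluence hostnames are placeholders; substitute your own):

```yaml
agents:
  defaults:
    tools:
      web_fetch:
        allowlist:
          - "wiki.internal.example.com"
          - "example.atlassian.net/wiki/*"
```

Note that `web_search` is left unrestricted here; add an allowlist under it too if the agent should only surface results from approved domains.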
Getting Started
Update to OpenClaw 2026.2.17 and add the allowlist config for whichever tools you want to restrict. If no allowlist is specified, behavior remains unchanged (all URLs allowed).
For validation, check the tool's behavior with a blocked URL: you should see a clear rejection message rather than a fetch attempt.
Thanks to @smartprogrammer93 for contributing this feature!
What domains are you planning to allowlist for your agents? Share your use cases in the comments.