Reading Large Files Just Got Smarter: Auto-Paging and Context-Aware Scaling in OpenClaw 2026.2.17
If you've ever watched your OpenClaw agent struggle with large files—truncating critical code, hitting context limits mid-read, or requiring you to manually chunk everything—the latest release brings a genuinely useful improvement: auto-paging read with context-aware output scaling.
The Problem
Previously, the read tool had a fixed output budget. Read a large file? You'd get truncated output regardless of whether your model could handle more. Using Claude Opus with a 200K context window? Same truncation as a smaller model. This meant agents would often miss crucial information at the end of files, requiring awkward workarounds like explicit offset/limit parameters or multiple read calls.
Worst case: your agent reads half a config file, makes assumptions about the rest, and breaks something.
What's New
Two changes in v2026.2.17 fix this:
1. Auto-Paging Across Chunks
When you don't specify an explicit limit, the read tool now automatically pages through the file in chunks. Instead of stopping at a fixed cutoff, it continues reading until it hits context budget constraints. This means your agent sees more of the file by default.
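The paging behavior described above can be sketched in a few lines. This is purely illustrative: the names readChunk, autoPageRead, and the line-based budget are assumptions for the sketch, not OpenClaw's actual internals.

```typescript
// Hypothetical sketch of auto-paging: keep pulling chunks until the
// output budget is exhausted or the file ends. Names are illustrative.
type Chunk = { lines: string[]; eof: boolean };

function readChunk(file: string[], offset: number, size: number): Chunk {
  const lines = file.slice(offset, offset + size);
  return { lines, eof: offset + size >= file.length };
}

function autoPageRead(file: string[], outputBudget: number, chunkSize = 4): string[] {
  const out: string[] = [];
  let offset = 0;
  while (out.length < outputBudget) {
    const { lines, eof } = readChunk(file, offset, chunkSize);
    out.push(...lines.slice(0, outputBudget - out.length));
    if (eof) break;        // reached end of file: everything was read
    offset += chunkSize;   // otherwise, page forward to the next chunk
  }
  return out;
}
```

The key difference from a fixed cutoff is the loop: instead of returning after one chunk, the reader keeps paging until the budget, not the chunk size, decides when to stop.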
2. Context-Aware Output Budget
The per-call output budget now scales based on your model's contextWindow. Using Opus 4.5 with a massive context window? The read tool allocates more output space. Running a smaller model? It scales down appropriately.
This is especially powerful combined with the opt-in 1M context beta for Anthropic models—your agent can now read significantly larger files in a single operation.
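As a rough mental model of "scales based on contextWindow", imagine a clamped linear function. The specific ratio and clamp values below are invented for illustration; the release notes only say the budget derives from the model's contextWindow.

```typescript
// Illustrative only: derive a per-call output budget from the model's
// context window. Ratio, floor, and ceiling are assumed values.
function outputBudgetTokens(contextWindow: number): number {
  const ratio = 0.25;      // spend at most a quarter of context on one read
  const floor = 4_000;     // always allow a useful minimum
  const ceiling = 200_000; // cap even for 1M-context models
  return Math.min(ceiling, Math.max(floor, Math.floor(contextWindow * ratio)));
}
```

Under this model, a 200K-context model gets a far larger read budget than an 8K one, and the 1M beta raises the ceiling further without letting a single read consume the whole window.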
Why This Matters for Developers
Less manual chunking: You no longer need to tell your agent to read lines 1-500, then 501-1000, and so on. It handles pagination automatically.
Model-appropriate behavior: A Claude Opus agent and a smaller model won't fight the same arbitrary limits. Each gets what it can handle.
Fewer truncation surprises: The classic "I'll read the file" → truncated output → wrong assumptions → broken code loop happens less often.
Better subagent recovery: The release also added explicit guidance for sub-agents to recover from [truncated: output exceeded context limit] markers by re-reading with smaller chunks instead of blindly retrying.
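The recovery guidance in that last point amounts to a simple back-off loop. Here is a hedged sketch; the marker string comes from the release notes, but readWithRecovery, the halving policy, and minLimit are assumptions for illustration.

```typescript
// Hypothetical recovery loop: if a read comes back truncated, retry
// with progressively smaller chunks instead of blindly retrying as-is.
const TRUNCATED = "[truncated: output exceeded context limit]";

function readWithRecovery(
  tryRead: (limit: number) => string,
  initialLimit: number,
  minLimit = 50,
): string {
  let limit = initialLimit;
  while (limit >= minLimit) {
    const result = tryRead(limit);
    if (!result.includes(TRUNCATED)) return result;
    limit = Math.floor(limit / 2); // re-read with a smaller chunk
  }
  throw new Error("could not read within context limits");
}
```

The point is that each retry changes the request (a smaller limit), so the loop converges instead of hitting the same truncation forever.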
How to Use It
Just... use read normally. The improvements are automatic when you don't specify explicit limits:
read(path: "./src/bigfile.ts")
If you need specific ranges, you can still use offset and limit:
read(path: "./src/bigfile.ts", offset: 100, limit: 50)
Combined with Context Guard Improvements
This change pairs with another fix in the same release: the read tool now properly accounts for heavy tool-result metadata when checking context limits. Previously, repeated reads could bypass compaction and overflow the context window. Now the guards are more accurate, meaning auto-paging won't accidentally blow up your context.
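Conceptually, the fixed guard just counts more than the file text. A minimal sketch, assuming a crude 4-characters-per-token estimate and invented names (estimateTokens, fitsInContext) that are not OpenClaw's real API:

```typescript
// Sketch of a pre-call context guard that accounts for heavy
// tool-result metadata, not only the file content itself.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // crude assumed heuristic
}

function fitsInContext(
  fileText: string,
  metadata: string, // e.g. serialized tool-result details
  usedTokens: number,
  contextWindow: number,
): boolean {
  const needed = estimateTokens(fileText) + estimateTokens(metadata);
  return usedTokens + needed <= contextWindow;
}
```

A guard that ignored the metadata term would approve reads that actually overflow the window, which is exactly the compaction-bypass bug the release describes.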
Try It
Update to v2026.2.17 and throw a large codebase at your agent. You should notice fewer truncation issues and more complete file reads—especially if you're using a high-context model.
Reference: Release v2026.2.17, specifically:
- "make read auto-page across chunks (when no explicit limit is provided) and scale its per-call output budget from model contextWindow"
- "strip duplicated read truncation payloads from tool-result details and make pre-call context guarding account for heavy tool-result metadata"