Mid-refactor, 100% weekly: the 90-second recovery playbook
Claude Code stopped 14 files into a 47-file refactor. The error string mentioned a weekly limit. claude.ai web chat still works fine, which is confusing. This page is the short answer for what to do in the next 90 seconds and the longer answer for why the wall fires the way it does.
Open ClaudeMeter's popup, read seven_day.utilization and seven_day.resets_at off the "7-day · Xd" row. Commit the half-done work as a WIP. Defer everything that does not need Opus. Wrap the next batch in a four-line bash guard that sleeps until resets_at plus a 30-second grace. Resume from the WIP commit, not from the original prompt. ClaudeMeter is free, MIT, source at github.com/m13v/claude-meter.
What it actually looks like in your terminal
The wall has a recognizable shape. Claude Code is mid-loop, you are watching the file edits stream past, then a single line of red and the loop stops. 14 files modified, 33 still to go, no commit, no test pass. The rest of this page is why that happens and what to read off the ClaudeMeter popup the second you switch windows.
Why this fires when web chat still works
Anthropic returns several weekly buckets in the same JSON payload, not one. seven_day is the all-up roof. seven_day_sonnet and seven_day_opus track per-model. seven_day_oauth_apps is the subset of usage that came from any OAuth-authenticated client; Claude Code lives there. Claude.ai web chat charges against seven_day and the per-model bucket but not against seven_day_oauth_apps. So the agent loop can hit 100% on the OAuth bucket and stop, while web chat keeps going on the same plan.
The shape is in src/models.rs of the Rust core, declared as Option<Window> on each field of UsageResponse. The browser extension fetches it once per minute and the popup conditionally renders whatever fields the server returned. Same JSON the settings page reads.
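For concreteness, here is the shape of that split, sketched with invented numbers (the field names are the ones this page uses throughout; the real payload carries more buckets):

```bash
claude-meter --json | jq '{seven_day, seven_day_oauth_apps}'
# illustrative output, values made up:
# {
#   "seven_day":            { "utilization": 0.91, "resets_at": "2025-07-17T03:10:00Z" },
#   "seven_day_oauth_apps": { "utilization": 1.0,  "resets_at": "2025-07-17T03:10:00Z" }
# }
```

That is exactly the web-chat-still-works scenario: the OAuth bucket pinned at 1.0 stops Claude Code while seven_day still has headroom.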
The 90-second playbook
Four steps. Done in order, the second through fourth actually save weekly minutes. Done out of order, you replay the original prompt and re-spend the bucket on work you already have on disk.
1. Stop the loop. Commit what's done.
Before doing anything else, run git commit -am 'WIP: refactor checkpoint' so the half-done work is recoverable. Claude Code's next session reads from disk; if you skip the commit and the editor crashes, the wall cost you the work too.
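One hedge on that command: git commit -a only stages files git already tracks, and a mid-refactor loop often creates new ones. A safer checkpoint:

```bash
# stage everything, including files the refactor created, then checkpoint
git add -A
git commit -m 'WIP: refactor checkpoint, 14/47 files, wall hit'
```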
2. Open ClaudeMeter. Read the 7-day row.
The popup shows '7-day · Xd' (or hours, or minutes). The percent is seven_day.utilization × 100. The countdown is fmtResets(seven_day.resets_at) from extension/popup.js. If you also see '7d Opus 100% · 7d Sonnet 73%', the answer is to defer the Opus work and keep going on Sonnet for the rest of the week.
3. Triage. Defer everything that does not need Opus.
Most of a refactor is mechanical: rename, move, adjust import, update test. Sonnet handles those at a fraction of the weekly cost. Reserve Opus for the genuinely hard subproblems (a tricky migration, an algorithm change). When seven_day_opus is at 100% but seven_day_sonnet is at 70%, the rest of the week is still productive on Sonnet.
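A quick way to make that call from the terminal, assuming the per-model fields are present in the --json output (they are Option&lt;Window&gt; on the Rust side, so they may be absent):

```bash
# which weekly model bucket still has room?
claude-meter --json | jq -r \
  '"opus \(((.seven_day_opus.utilization // 0) * 100 | floor))%  sonnet \(((.seven_day_sonnet.utilization // 0) * 100 | floor))%"'
```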
4. Wrap the next batch in the four-line wall guard.
Drop weekly_wall_guard.sh around the agent loop. It reads claude-meter --json once per iteration, gates on seven_day.utilization, and if you are over the threshold, sleeps until seven_day.resets_at plus 30 seconds of grace. Resume the refactor from the WIP commit, not from the original prompt; replaying the original prompt re-spends the same weekly minutes for work you already have on disk.
The four-line guard
Drop this around the wrapper script that calls the agent loop. It reads claude-meter --json once per iteration, takes the worst weekly percent across rows, and if you are over the threshold, sleeps until the earliest resets_at plus 30 seconds. Resume from the WIP commit when it wakes.
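A minimal sketch of that guard, assuming the --json output carries top-level seven_day* objects, each with a 0-to-1 utilization float and an ISO-8601 resets_at (the field names used throughout this page); the shipped weekly_wall_guard.sh may differ in detail:

```bash
#!/usr/bin/env bash
snap=$(claude-meter --json)
# worst weekly percent across every seven_day* row the server returned
over=$(jq '[to_entries[] | select(.key | startswith("seven_day"))
            | .value.utilization // empty] | max > 0.92' <<<"$snap")
if [ "$over" = "true" ]; then
  # earliest weekly reset, plus a 30-second grace
  wake=$(jq -r '[to_entries[] | select(.key | startswith("seven_day"))
                 | .value.resets_at // empty] | min' <<<"$snap")
  sleep "$(( $(date -u -d "$wake" +%s) - $(date -u +%s) + 30 ))"  # GNU date; on macOS use gdate
fi
```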
The threshold is at 92, not 100, on purpose. Anthropic rounds for display and the float can sit at 0.998 when the 429 fires. 92 gives you headroom for one more small iteration without risking another rate-limit error inside the same loop.
“Claude Code killed my refactor mid-way at 62% weekly used. Installed ClaudeMeter, now I watch the bar tick instead of guessing.”
— claude-meter user
Replaying the prompt vs resuming from the WIP commit
The single most expensive mistake after the wall lifts is re-issuing the original refactor prompt. Claude does not remember the 14 files it already edited; it will redo them from scratch and re-spend the same minutes. The right move is to point Claude Code at the WIP commit and ask it to continue from there.
Same prompt as before the wall: 'refactor src/billing/ to use the new InvoiceV2 schema'. Claude reads the same 47 files, redoes the same edits, retypes the same imports. About 80% of the weekly minutes that produced the WIP commit are spent again to land in the same place.
- Re-edits the 14 files already changed
- Re-spends ~80% of pre-wall weekly minutes
- Easy to second-guess the previous decisions and produce a different (worse) shape
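Building the resume brief out of the checkpoint takes two plain git commands, nothing ClaudeMeter-specific:

```bash
# what the WIP commit already covers; paste the file list into the resume brief
git show --stat HEAD
# anything the loop left dirty in the working tree when the wall hit
git status --short
```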
The exact code that builds the "7-day · Xd" row
If you want to verify the popup numbers against your own eyeball before trusting the guard script, here is how the row is computed. Two pieces: the percent comes from utilization on the Window struct; the countdown comes from fmtResets().
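A shell equivalent of those two computations, assuming the --json field names used on this page (the real rendering lives in extension/popup.js; the countdown bucketing is the one described in the FAQ below):

```bash
# percent: seven_day.utilization × 100 (floored here; the popup may round differently)
claude-meter --json | jq -r '(.seven_day.utilization * 100 | floor | tostring) + "%"'

# countdown: fmtResets-style bucketing (minutes under 1h, hours under 48h, else days)
wake=$(claude-meter --json | jq -r '.seven_day.resets_at')
secs=$(( $(date -u -d "$wake" +%s) - $(date -u +%s) ))   # GNU date; on macOS use gdate
if   (( secs < 3600 ));   then echo "$(( secs / 60 ))m"
elif (( secs < 172800 )); then echo "$(( secs / 3600 ))h"
else                           echo "$(( secs / 86400 ))d"
fi
```

If both match the popup, the guard script is reading the same truth the popup shows.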
Worth knowing: the toolbar badge stays pinned to the 5-hour percent because worstPct(snaps, 'five_hour') is the badge text in extension/background.js. The weekly number lives in the popup body and the icon tooltip's second line. So if you saw a low number on the badge before the wall fired, that was the 5-hour bucket; the weekly bucket sneaks up because it is not the surface you were watching.
One install, one popup row, one resets_at
The worst version of this is rolling your own once-an-hour script that polls the settings page, scrapes a number, and decides what to do. The internal endpoint already returns structured JSON with absolute UTC reset timestamps. ClaudeMeter is the open-source wrapper that fetches it for you, dedupes parallel sessions by account, and exposes the fields as a clean --json output you can pipe into a guard script.
- brew install --cask m13v/tap/claude-meter
- Load the extension/ folder at chrome://extensions (or arc://, brave://, edge://)
- Visit claude.ai once. Menu bar lights up within a minute.
- Source: github.com/m13v/claude-meter (MIT)
Frequently asked questions
Why did Claude Code stop my refactor when claude.ai still worked for chat?
Because the weekly bucket Claude Code charges against is not the same bucket your web chat charges against. Anthropic returns multiple weekly buckets on /api/organizations/{org_uuid}/usage: seven_day is the all-up roof, seven_day_oauth_apps is the OAuth-app subset (where Claude Code lives), and seven_day_sonnet and seven_day_opus track per-model. Hit the seven_day_oauth_apps ceiling and your terminal stops while web chat keeps going on the same plan. ClaudeMeter shows all of them in one popup so the next prompt does not surprise you.
What does ClaudeMeter actually display when the weekly wall fires mid-refactor?
Two rows of interest. The first is '7-day · Xd' where the percent comes from seven_day.utilization (multiplied by 100 if it is a 0 to 1 fraction in the JSON) and the countdown comes from seven_day.resets_at, formatted by fmtResets() in extension/popup.js. Under one hour shows minutes. Under 48 hours shows hours. Otherwise days. The second is '7d Opus' or '7d Sonnet' when Anthropic returns those fields. The toolbar badge keeps showing the 5-hour percent because the badge has room for one number and the 5-hour bucket usually fires first; the weekly number is in the icon tooltip and the popup body.
Should I just upgrade from Pro to Max to make this go away?
Sometimes. Claude Max raises the weekly cap (per Anthropic's July 2025 hours framing, Pro is roughly 40 to 80 Sonnet hours per week, Max 5x is roughly 140 to 280, Max 20x is 240 to 480). If you keep landing at 100% on a Wednesday, the bucket size is the bottleneck and Max is the answer. If you land at 100% because one runaway agentic loop spent the whole week's quota in a four-hour stretch, a bigger bucket will only make the next runaway loop more expensive. Read the rolling rate (percent moved per hour while the loop ran) on the popup before paying for more bucket.
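A crude way to read that rate yourself, from two samples an hour apart (field names assumed as elsewhere on this page):

```bash
# weekly percent moved per hour while the loop runs
a=$(claude-meter --json | jq '.seven_day.utilization')
sleep 3600   # or sample at the start and end of one agent batch
b=$(claude-meter --json | jq '.seven_day.utilization')
echo "burn: $(jq -n --argjson a "$a" --argjson b "$b" '($b - $a) * 100')% of the week per hour"
```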
Can I just count tokens locally and avoid the whole weekly wall problem?
No, and this is the trap that catches every team that tries. Local-log tools like ccusage and Claude-Code-Usage-Monitor walk ~/.claude/projects/*.jsonl and sum inputTokens and outputTokens per session. The numerator is real. The denominator (Anthropic's plan ceiling) is not on disk. The same prompt run at 11 PM Pacific and at 11 AM Pacific moves the local token total identically but moves the server fraction by different amounts because there are peak-hour multipliers and per-model weights on Anthropic's side. ccusage at 5% with claude.ai at 91% is the predictable mismatch when this happens. ClaudeMeter reads the server-truth fraction the settings page reads.
What is the right thing to sleep on so the resumed refactor lands the second the wall lifts?
seven_day.resets_at on the same Window struct that returned the utilization. It is an absolute UTC ISO timestamp, not a rolling clock-time interval, so two parallel agents read by two different shells converge on the same wake-up moment. Sleep until that timestamp, add a 30-second grace period, then resume. claude-meter --json prints it directly; src/models.rs declares it as Option<chrono::DateTime<chrono::Utc>>. Setting an arbitrary 'wake up in 4 hours' instead is wrong for two reasons: (a) the rolling window does not refill on the hour; it refills the moment the oldest charge inside the trailing 168-hour window ages out, and (b) you waste minutes you could have been refactoring through.
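In shell terms, with the same field-name assumptions as the guard script above:

```bash
# sleep on the absolute timestamp; parallel shells converge on the same wake-up
wake=$(claude-meter --json | jq -r '.seven_day.resets_at')
sleep "$(( $(date -u -d "$wake" +%s) - $(date -u +%s) + 30 ))"   # GNU date; on macOS use gdate
```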
If I run five Claude Code agents in five worktrees, do I get five separate weekly quotas?
No. The seven_day_oauth_apps bucket is per-account, not per-process. Five agents in five panes all stack into the same fraction. ClaudeMeter's dedupe_by_account in src/lib.rs collapses snapshots that share account_email or org_uuid into one row, so the popup shows the one shared percent every parallel agent on that account is racing to fill. Local-log trackers read ~/.claude/projects/*.jsonl per process and report N separate per-session totals that do not sum to the server number.
How do I checkpoint a refactor in Claude Code so I can resume after the wall lifts?
Two things. First, commit any half-done work locally before the wall (a git commit -am 'WIP: refactor checkpoint' so Claude Code's next session can read the intermediate state out of git diff). Second, store the conversation thread URL or a short brief that names the file paths, the current cursor in the refactor, and the test that should pass when it lands. When you resume, paste the brief plus 'continue from the WIP commit' instead of replaying the original 'refactor X to Y' prompt; replaying the original prompt re-spends the same weekly minutes for a result you already have on disk.
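One shape such a brief can take, using this page's running example (the contents are illustrative, not a required format):

```bash
cat > REFACTOR_BRIEF.md <<'EOF'
Goal: migrate src/billing/ to the InvoiceV2 schema
Done: 14 of 47 files; see the WIP commit
Next: the remaining 33 files
Green when: the billing test suite passes
EOF
```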
Does extra usage (pay-as-you-go) save me when the weekly wall fires?
Conditionally. Anthropic's metered-billing state lives in the extra_usage object on the usage endpoint: is_enabled tells you whether the account has it on. If it is on and used_credits is below monthly_limit, prompts that would have hit the weekly wall instead bill against extra usage. If it is off, or you are out of credits (overage.out_of_credits is true), the wall is the wall. ClaudeMeter renders extra_usage as a third row when the field is present, so you can see the live dollar burn during a heavy refactor instead of finding out at the end of the cycle.
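To run the same check from a script rather than the popup, assuming extra_usage is surfaced in the --json output when the server returns it:

```bash
# is metered overflow available, and how much of it is left?
claude-meter --json | jq '.extra_usage | {is_enabled, used_credits, monthly_limit}'
```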
I am on mobile reading this from an X reply. What do I actually do in the next 90 seconds?
Three things. (1) On your laptop, install ClaudeMeter (brew install --cask m13v/tap/claude-meter, then load the extension folder at chrome://extensions, then visit claude.ai once). The menu bar lights up within a minute. (2) Read the '7-day · Xd' row on the popup. The number is your real percent. The countdown is when the wall lifts. (3) Drop the four-line bash guard from this page into the wrapper around your Claude Code loop so the next refactor batch sleeps on seven_day.resets_at instead of bouncing off another wall.
Is ClaudeMeter free, and does it send my data anywhere?
Free and MIT licensed. Source at github.com/m13v/claude-meter. The browser extension runs one HTTPS request per minute to claude.ai using your existing session cookie (the same cookie your browser already sends to the settings page). The data goes to a localhost bridge at 127.0.0.1:63762 that the menu bar app listens on. Nothing leaves your machine. There is no cloud account, no telemetry, no analytics ping.
Pair on the guard script with the maintainer
20 minutes to wire weekly_wall_guard.sh into your specific agent loop and pick the right threshold for your weekly burn pattern.
Keep reading
What the 7-Day Row Actually Shows
Pulled from seven_day.utilization and seven_day.resets_at on /api/organizations/{org}/usage, polled once a minute. Here is what is behind every pixel of that row.
Hours Are a Vibes Metric, the Server Enforces a Float
Anthropic publishes the cap as 40 to 480 Sonnet hours per plan. The cap your account actually hits is a utilization float between 0.0 and 1.0. Here is the real contract.
Local Count vs Server Quota
Why ccusage at 5% and claude.ai at 90% is the predictable mismatch, and which number to trust when the 429 fires mid-refactor.