A Claude Pro usage meter your browser automation loop can actually read mid-run

A Playwright, Computer Use, or Claude Code agentic loop driving a browser hits the rolling 5-hour wall in minutes, not hours. ClaudeMeter exposes the same server-truth percentages that claude.ai/settings/usage renders, via a single CLI flag and a 60-second auto-refreshing extension. Your loop reads its own quota in one line of bash and pauses itself before Anthropic returns 429.

Matthew Diakonov · 8 min read
Direct answer (verified 2026-05-06)

ClaudeMeter (free, MIT, macOS, source at github.com/m13v/claude-meter) is the meter built for this. The CLI binary at /Applications/ClaudeMeter.app/Contents/MacOS/claude-meter --json prints a Vec<UsageSnapshot> with named fields usage.five_hour.utilization, usage.five_hour.resets_at, usage.seven_day_oauth_apps.utilization, and extra_usage.used_credits. An automation loop pipes it into jq, gates on whichever bucket is highest, and sleeps until the named reset time. Local-log tools like ccusage and Claude-Code-Usage-Monitor cannot see those server-side buckets because they read ~/.claude/projects/*.jsonl, which is a different signal.
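For example, reading the rolling 5-hour fraction is a one-liner (binary path as documented above; jq assumed to be installed):

```shell
CM=/Applications/ClaudeMeter.app/Contents/MacOS/claude-meter
# Fraction 0..1 of the rolling 5-hour bucket, first snapshot row.
"$CM" --json | jq '.[0].usage.five_hour.utilization'
```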

The blind-loop problem

A browser-automation loop against Claude is the workload Anthropic built the rolling 5-hour window for. One Playwright iteration is a tool-use round-trip: navigate, snapshot the DOM, send it to Claude, parse the response, click. Computer Use is the same shape with screenshots in place of DOM snapshots. Claude Code in a tight refactor loop is the same shape with file diffs. Each round-trip is a chunky prompt plus a chunky response, and a long-running loop fills the bucket faster than a chat user can.

The problem with running a loop against an opaque cap is that the loop runs blind. Iteration 1 is fine, iteration 200 returns a 429 mid-step, and the only signal the script gets is {"error": "rate_limit_error"} after it has already opened a half-completed pull request, half-filled a form, or written half a database migration. The wasted work is the symptom; the actual gap is that the loop did not know its own state.

The cheap fix is a loop guard, a small piece of code the loop calls between iterations to read its own quota and decide whether to keep going. The hard part has been getting an honest number to read. Local-log tools count tokens written to ~/.claude/projects, which is a faithful signal for what your local Claude Code session weighed in tokens but cannot see web-chat usage on the same account, peak-hour multipliers, per-model weights, or the specific weekly bucket Anthropic added for OAuth-app traffic in 2026. Anthropic does the math server-side and the 429 fires off the server number. The loop guard has to read the server number too.


"Claude Code killed my refactor mid-way at 62% weekly used. Installed ClaudeMeter, now I watch the bar tick instead of guessing."

Pro plan user, /r/ClaudeAI

What claude-meter --json actually returns

The CLI is the programmatic surface. Its shape is the schema declared at src/models.rs in the m13v/claude-meter repo, which the binary serialises directly. There is no transformation layer, which is the point: the JSON is the same shape the api crate parses off /api/organizations/{org_uuid}/usage, with the orgs flattened into a top-level array and the cookie-source browser tagged.

claude-meter --json
Fields you will probably read
  • usage.five_hour.utilization fraction 0 to 1 of the rolling 5-hour bucket
  • usage.five_hour.resets_at UTC timestamp the bucket clears
  • usage.seven_day_oauth_apps.utilization weekly bucket for Claude Code and other OAuth apps
  • usage.extra_usage.used_credits pay-as-you-go dollars accrued past plan quota
Fields you can probably ignore
  • org_uuid only matters if you have multiple orgs on one Claude account
  • browser cosmetic label, useful for multi-browser logging
  • subscription.status nice to log on first failure, otherwise stable
  • stale only set when the bridge has not heard from the extension

The loop guard, in one shell script

Drop this between iterations of any browser-automation loop that calls Claude. It exits 0 to keep going, sleeps until the rolling window resets if the 5-hour bucket is past the threshold, and exits 42 (a sentinel for the outer scheduler) if the weekly OAuth-apps bucket is past the threshold. Tune the threshold to your taste; 85 is conservative.

loop_guard.sh

How the meter gets the number without a cookie paste

The browser extension is the moving part that earns its place in a guide about automation. Loading it once removes the friction every other server-truth meter imposes: open the dev tools, export the cookie, paste it into a config file, refresh it when it expires.

1. brew install --cask m13v/tap/claude-meter

Installs the .app under /Applications and the CLI binary at /Applications/ClaudeMeter.app/Contents/MacOS/claude-meter. The brew cask also drops a launch agent so the menu bar comes back after reboot.

2. Load extension/ as an unpacked extension

chrome://extensions in Chrome, arc://extensions in Arc, brave://extensions in Brave, edge://extensions in Edge. Developer mode on, Load unpacked, point at the extension folder of the cloned repo. Pin the icon if you want the popup.

3. extension/background.js fires once per minute

chrome.alarms.create('refresh', { periodInMinutes: 1 }) registers a recurring job. The handler calls fetch(BASE + '/api/account', { credentials: 'include' }) plus the per-org usage, overage, and subscription endpoints, with the session cookie your browser already holds.

4. Snapshots POST to 127.0.0.1:63762/snapshots

The BRIDGE constant on line 2 of background.js names this address. The menu bar app listens on that port (BRIDGE_PORT in src/bin/menubar.rs line 349) and labels each snapshot by looking up the peer TCP socket's owning process, so Chrome and Arc rows do not blur together.

5. claude-meter --json drains the same data to stdout

The CLI does not need the bridge to be running. It walks the Chromium profile cookie databases on macOS (or accepts the bridged snapshot if the menu bar is already running and warm), parses the same /api/organizations/{org}/usage response, and prints the parsed Vec<UsageSnapshot> as JSON.

Why ccusage cannot do this, and why that is fine

ccusage and Claude-Code-Usage-Monitor are good tools that answer a different question. They tail ~/.claude/projects/*.jsonl and total tokens per session. That is an honest signal for what your local Claude Code traffic weighed. It is not the bucket Anthropic enforces against an agentic loop. The two answers are both useful, they are just different answers.

Feature | ccusage / Claude-Code-Usage-Monitor | ClaudeMeter
Source of utilization number | ~/.claude/projects/*.jsonl token totals (local estimate) | /api/organizations/{org}/usage on claude.ai (server-truth)
Sees rolling 5-hour bucket | No, infers from local activity timestamps | Yes (five_hour.utilization, five_hour.resets_at)
Sees seven_day_oauth_apps (agentic bucket) | No, the field name does not exist locally | Yes, exposed as a named field
Sees claude.ai web chat usage in the same numbers | No, web chat leaves no JSONL | Yes, server quota stacks both
Programmatic shape for an automation loop | ccusage --json (works, but the numbers are local) | claude-meter --json (single binary, no Python deps)
Cookie paste required | N/A (does not need cookies, does not see the server) | No (extension reuses your live claude.ai session)

Use both. Read ccusage to find out which session weighed the most. Read ClaudeMeter to find out whether your loop is allowed to run another iteration. They live next to each other, not on top of each other.

Set it up in five minutes

1. Install the menu bar app

brew install --cask m13v/tap/claude-meter installs both the .app and the CLI binary at /Applications/ClaudeMeter.app/Contents/MacOS/claude-meter.

2. Load the unpacked browser extension

Clone github.com/m13v/claude-meter, open chrome://extensions (or arc://extensions, brave://extensions, edge://extensions), enable Developer mode, Load unpacked, point at the extension/ folder.

3. Visit claude.ai once in that browser

The extension's chrome.alarms job fires every minute and reuses the session cookie your browser already holds, so the menu bar lights up within sixty seconds with two percentages: 5h and 7d.

4. Confirm the CLI sees the same numbers

Run /Applications/ClaudeMeter.app/Contents/MacOS/claude-meter --json and read off five_hour.utilization. It should match the percent in the menu bar and what claude.ai/settings/usage shows in your browser.

5. Drop the loop guard into your automation runner

Call ./loop_guard.sh between iterations of your Playwright, Computer Use, or Claude Code agent loop. Pause on five_hour > threshold, queue work on seven_day_oauth_apps > threshold, exit cleanly on out_of_credits.
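Wired into a runner, the guard's exit codes drive the loop. A sketch, where run_iteration and queue_remaining_work are hypothetical placeholders for whatever your runner actually does per step:

```shell
# Hypothetical outer loop around the guard. run_iteration and
# queue_remaining_work stand in for your runner's own functions.
while true; do
  if ./loop_guard.sh; then
    run_iteration            # quota ok (the guard may have slept through a reset)
  else
    rc=$?                    # in the else branch, $? still holds the guard's status
    if [ "$rc" -eq 42 ]; then
      queue_remaining_work   # weekly OAuth-apps bucket is spent
    fi
    break                    # hard stop either way
  fi
done
```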

What happens when extra-usage runs out

The two utilization buckets are not the only reason a loop can die. If your account has metered extra-usage enabled (Anthropic's pay-as-you-go pool that turns on after the included plan quota exhausts) and the monthly limit hits zero, overage.out_of_credits in the snapshot flips to true. From that point on, every Claude request returns an out-of-credits error regardless of how the five_hour or seven_day buckets look.

For a browser-automation loop, this is the case where retrying is actively harmful: each retry counts as a billable attempt that still fails. The loop guard should treat out_of_credits == true as a hard stop and exit cleanly, not as something to back off and retry. The shell-script pattern earlier in this guide treats the two utilization buckets as soft stops (sleep, queue) and reserves an explicit exit code for the hard stop.

Wiring this into a real automation runner?

Book 20 minutes if you want a hand wiring the loop guard into a Playwright, Computer Use, or Claude Code runner you already have running.

FAQ

What is the right Claude Pro usage meter for a browser automation workflow?

ClaudeMeter. It is free, MIT-licensed, macOS-only, and ships with a CLI binary at /Applications/ClaudeMeter.app/Contents/MacOS/claude-meter. The reason it fits browser automation specifically is the --json flag: claude-meter --json prints a Vec<UsageSnapshot> to stdout, with named fields five_hour.utilization, seven_day.utilization, seven_day_oauth_apps.utilization, and extra_usage.used_credits. Those are the same percentages that claude.ai/settings/usage renders. A Playwright, Computer Use, or Claude Code loop can shell out to claude-meter --json once per iteration and decide whether to keep going. ccusage and Claude-Code-Usage-Monitor cannot do this; they read ~/.claude/projects JSONL files, which is local-token estimation, not the server-side bucket Anthropic actually enforces against agentic loops.

Why does ccusage's number disagree with the Anthropic 429 my automation loop just got?

Because ccusage measures a different signal. It tails ~/.claude/projects JSONL files and totals tokens per session. That is faithful for what your local Claude Code traffic weighed in tokens, but Anthropic's per-org rolling 5-hour window and the seven_day_oauth_apps weekly bucket are computed server-side, with peak-hour multipliers, per-model weights, and any browser-chat usage on the same account stacked into the same buckets. ccusage at 5 percent and claude.ai at 90 percent is the predictable mismatch. The 429 fires off the server number, so an automation loop guard has to read the server number. ClaudeMeter pulls it from /api/organizations/{org_uuid}/usage on claude.ai, which is the same endpoint the settings page uses.

Do I have to manually paste a claude.ai cookie into anything?

No. The browser extension lives at extension/background.js in the m13v/claude-meter repo. It uses chrome.alarms to fire a refresh job every periodInMinutes: 1, and its fetch call passes credentials: 'include' against https://claude.ai/api/account, then /api/organizations/{org}/usage, /api/organizations/{org}/overage_spend_limit, and /api/organizations/{org}/subscription_details for each membership. Because the extension runs inside Chrome (or Arc, Brave, Edge), the request reuses the session cookie your browser already holds. There is no cookie paste, no API key, no service account. Load the unpacked extension once, visit claude.ai once, and the menu bar plus the localhost bridge start receiving snapshots inside sixty seconds.

What does the claude-meter --json output actually look like?

An array of UsageSnapshot objects, one per (browser, account_email) pair. Each snapshot has org_uuid, browser, account_email, fetched_at, an errors array, and three optional sub-objects: usage (with five_hour, seven_day, seven_day_sonnet, seven_day_opus, seven_day_oauth_apps, seven_day_omelette, seven_day_cowork as Window structs of {utilization, resets_at}, plus extra_usage as {is_enabled, monthly_limit, used_credits, utilization, currency}), overage (the pay-as-you-go pool details), and subscription (status, next_charge_date, billing_interval). Pipe it into jq, ingest it into a Python loop, parse it in a Rust agent, whatever you want. Schema is in the public source at src/models.rs in the m13v/claude-meter repo.
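Under those field names, a single-org snapshot might look like the following sketch. All values are illustrative, the shape is inferred from the field list above, and windows other than the two most useful ones are omitted for brevity:

```json
[
  {
    "org_uuid": "00000000-0000-0000-0000-000000000000",
    "browser": "Chrome",
    "account_email": "you@example.com",
    "fetched_at": "2026-05-06T09:00:00Z",
    "errors": [],
    "usage": {
      "five_hour": { "utilization": 0.62, "resets_at": "2026-05-06T12:00:00Z" },
      "seven_day_oauth_apps": { "utilization": 0.41, "resets_at": "2026-05-11T00:00:00Z" },
      "extra_usage": { "is_enabled": true, "monthly_limit": 25.0, "used_credits": 3.2, "utilization": 0.128, "currency": "USD" }
    },
    "overage": { "out_of_credits": false },
    "subscription": { "status": "active" }
  }
]
```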

Which Claude bucket should my browser automation loop guard read?

Two of them, depending on what you are protecting against. five_hour.utilization is the rolling 5-hour window, which is the wall an agentic browser-control loop hits within minutes if you are running Computer Use or a Playwright-driven Claude Code session in tight iteration. seven_day_oauth_apps.utilization is the agentic-loop-specific weekly bucket Anthropic added in 2026; it tracks Claude Code and other OAuth-app traffic separately from your claude.ai web chat. In practice, gate on whichever is higher. If five_hour goes past 80 percent, sleep until five_hour.resets_at. If seven_day_oauth_apps goes past 80 percent, the loop is going to die in a day or two regardless; queue the remaining work or switch to Sonnet for the rest of the week.
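"Gate on whichever is higher" is itself a one-liner (binary path as documented above; jq assumed):

```shell
CM=/Applications/ClaudeMeter.app/Contents/MacOS/claude-meter
# Worst of the two buckets, as a fraction 0..1; missing fields count as 0.
worst=$("$CM" --json | jq '[.[0].usage.five_hour.utilization // 0,
                            .[0].usage.seven_day_oauth_apps.utilization // 0] | max')
echo "worst bucket: $worst"
```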

Can my automation script read this without spawning a subprocess each iteration?

Two paths. Easy path: claude-meter --json runs in well under one second on a warm cookie cache, so spawning it once per loop iteration is fine for any iteration cadence above a few seconds. Faster path: the menu bar app exposes a localhost bridge at 127.0.0.1:63762. The browser extension POSTs the same Vec<UsageSnapshot> to it every minute (extension/background.js BRIDGE constant on line 2). The bridge is POST-only by design (request handler at src/bin/menubar.rs lines 372-388 returns 404 to anything else), so reading from your script means tapping the same /api/organizations/{org}/usage endpoint yourself with the cookies the extension is already collecting, or sticking with the CLI. The CLI is the intended programmatic surface.

What about extra-usage spend, the pay-as-you-go balance, can I gate on dollars instead of percent?

Yes. extra_usage.used_credits in the JSON output is the dollar amount your account has accrued past the included plan quota, in the currency reported by extra_usage.currency, governed by extra_usage.monthly_limit. If your loop guard cares about real money rather than utilization fraction, key on used_credits crossing whatever ceiling you set. The overage block adds the disabled-reason and disabled-until fields, which is what tells you Anthropic has frozen extra-usage on your account (out_of_credits true means new prompts are blocked even if your five-hour bucket is fine). Both blocks come from /api/organizations/{org}/overage_spend_limit on claude.ai, exposed unchanged in the snapshot.
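A dollar gate could look like this sketch, where CEILING is your own number, not a ClaudeMeter field, and awk does the floating-point comparison the shell cannot:

```shell
CM=/Applications/ClaudeMeter.app/Contents/MacOS/claude-meter
CEILING=${CEILING:-5.00}   # your own ceiling, in the account currency
used=$("$CM" --json | jq -r '.[0].usage.extra_usage.used_credits // 0')
# exit 0 while under the ceiling; 42 signals the outer scheduler to stop.
awk -v u="$used" -v c="$CEILING" 'BEGIN { exit !(u < c) }' || exit 42
```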

Will multiple browsers in the loop confuse the meter?

No. Each (browser, account_email) pair is its own snapshot row. The menu bar app identifies the sending browser by looking up the peer TCP socket's owning process, not by trusting Sec-Ch-Ua headers (which Arc, Brave, and Edge set to Chromium). So if your automation runs three Chromium-family browsers in parallel against the same Claude account, you see three rows, each labeled by the actual binary, all reading the same per-account quota. dedupe_by_account in the CLI then collapses snapshots that belong to the same Claude account before printing, so claude-meter --json gives you one snapshot per account regardless of how many browser tabs were involved in fetching it.

What about Linux or Windows automation runners?

Not supported yet on the menu bar app side. The macOS-only constraint comes from the AppKit code path that paints the colored title segments, which has no cross-platform analogue. The browser extension itself runs in any Chromium browser on any OS and exposes the same usage snapshots via chrome.storage.local; if you have a Linux or Windows automation runner, you can read those snapshots in the extension context (or call the same /api/organizations/{org}/usage endpoint directly with whatever HTTP client your runner has, using the session cookie from the running profile). The CLI loop-guard pattern in this guide is macOS-specific; the underlying server-truth idea is portable.

Is there a privacy cost to running this in an automation loop?

No outbound network egress beyond claude.ai itself. The bridge listens only on 127.0.0.1, the request fires once per minute (or once per loop iteration if you call the CLI), and there is no telemetry, no analytics endpoint, no third-party SDK in the binary. Source is MIT at github.com/m13v/claude-meter and the pieces a curious reviewer cares about are small: extension/background.js for the fetcher, src/api.rs for the parsed response, src/models.rs for the snapshot schema, src/bin/menubar.rs for the bridge.

Does this fix the underlying agentic-loop quota burn?

No. Nothing fixes it; Anthropic enforces the buckets server-side and an agentic loop is exactly the workload that fills them fastest. What this fixes is the visibility gap. Without a meter, a browser automation script runs blind and gets cut off by a 429 mid-iteration, often deep into a multi-step task. With a loop guard reading claude-meter --json, the script knows its own state and can sleep until resets_at, switch to a cheaper model for the remaining iterations, or queue the remaining work for a future window. That is what 'meter for browser automation' actually means in practice: a feedback signal the loop can act on.