Best Claude usage trackers for April 23, 2026

Updated April 23, 2026. Eight tools worth your attention if you are watching your Claude Pro or Max quota this week: six ranked by whether they read the server-enforced utilization floats Anthropic actually throttles against or merely estimate from local logs, plus two cross-industry picks for the hours you spend off Claude.

Matthew Diakonov
8 min read
Ranked by first-party testing against claude.ai/settings/usage
Ranked by observable behavior, not marketing copy
Two cross-industry picks for the week the quota resets

One number on the Settings page is the only one Anthropic throttles on

The quota surface is two floats on one endpoint. Every tracker on this list either reads those two floats or estimates something else.

3 endpoints polled per cycle
60s default poll interval
63762 localhost bridge port
$0 cost, every entry on this list

The one payload that decides the ranking

Before the list, the payload. Everything on claude.ai/settings/usage is rendered from this JSON. A tracker that reads it matches the Settings page to the digit. A tracker that does not, does not.

claude.ai/api/organizations/{org_uuid}/usage

Shape verified against the deserializers in claude-meter/src/models.rs lines 3 to 27, where utilization: f64 and resets_at: Option<DateTime<Utc>> are the two fields every serious tracker should expose.
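The two-field Window shape described above can be sketched in a few lines. This is a minimal Python mirror of the fields the article cites from models.rs; the exact JSON layout and the sample values are assumptions for illustration, not Anthropic's documented schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Window:
    # The two fields the article calls out in claude-meter/src/models.rs:
    # utilization is an f64, resets_at an optional UTC timestamp.
    utilization: float
    resets_at: Optional[datetime]

def parse_window(raw: dict) -> Window:
    ts = raw.get("resets_at")
    return Window(
        utilization=float(raw["utilization"]),
        resets_at=datetime.fromisoformat(ts) if ts else None,
    )

# A payload shaped like the one described above (the values are invented):
payload = {
    "five_hour": {"utilization": 0.42, "resets_at": "2026-04-23T18:00:00+00:00"},
    "seven_day": {"utilization": 0.87, "resets_at": "2026-04-27T09:00:00+00:00"},
    "seven_day_opus": {"utilization": 0.15, "resets_at": None},
}
windows = {name: parse_window(w) for name, w in payload.items()}
print(windows["seven_day"].utilization)  # 0.87
```

A tracker that deserializes this payload and renders `utilization` unchanged will match the Settings page by construction; any tracker summing local logs is computing a different number.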

The ranking

Numbered cards, one per tool. Host first, then the adjacent tools that drive your quota, then two cross-industry picks for the hours off.

1. Claude usage tracker (Free, MIT)

ClaudeMeter

The only tracker on this list that reads Anthropic’s server-enforced utilization floats directly. The browser extension polls /api/organizations/{uuid}/usage every 60 seconds through your existing claude.ai session, then POSTs the snapshot to the menu bar app on 127.0.0.1:63762. No cookie paste, no API key, no telemetry. The number in the menu bar matches claude.ai/settings/usage exactly because it is the same JSON.

Anchor fact: three endpoints per poll (usage, overage_spend_limit, subscription_details), all verifiable in extension/background.js lines 24 to 28 of the repo at github.com/m13v/claude-meter.
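The poll cycle described above can be sketched as follows. The three endpoint names and the bridge port come from the article; the fetch and post callables are injected stand-ins so the sketch stays testable, and this is not the extension's actual code.

```python
# Endpoint names and bridge port per the article; fetch_json/post_json are
# injected stand-ins for the extension's network calls.
ENDPOINTS = ("usage", "overage_spend_limit", "subscription_details")
BRIDGE_URL = "http://127.0.0.1:63762"

def poll_once(org_uuid, fetch_json, post_json):
    snapshot = {}
    for name in ENDPOINTS:
        url = f"https://claude.ai/api/organizations/{org_uuid}/{name}"
        snapshot[name] = fetch_json(url)  # rides the existing claude.ai session
    post_json(BRIDGE_URL, snapshot)       # hand the snapshot to the menu bar app
    return snapshot

# Dry run with fakes:
sent = []
snap = poll_once(
    "org-123",
    fetch_json=lambda url: {"url": url},
    post_json=lambda url, body: sent.append((url, body)),
)
```

The design point worth noting: the browser side only reads and forwards; all rendering state lives in the local app, so nothing ever leaves the machine.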

Want the live number?

Two minutes from brew to menu bar.

Install ClaudeMeter
2. AI desktop agent for macOS

Fazm

If you are burning Claude tokens on keystrokes, Fazm is where those tokens go.

Fazm is an open-source, voice-first computer-use agent that drives macOS through accessibility APIs and talks to Claude under the hood. It is the single biggest consumer of your 5-hour window if you leave it running. The honest reason to list it here: once you have a tracker that exposes the server quota, you can watch a Fazm session eat into it in real time instead of guessing why you hit the cap at 3pm.

Download Fazm
3. Computer-use SDK for Claude

Terminator

The SDK layer under every Claude-powered desktop agent shipping this quarter.

Terminator is a cross-platform (Windows + macOS) desktop automation framework that uses accessibility APIs. Think Playwright, but for the whole operating system. It is the piece developers reach for when they want Claude to click, type, and traverse real applications instead of just respond in a chat box. If your weekly seven_day_opus float is climbing fast, it is probably a Terminator loop on the other end.

Try Terminator
4. MCP server for Claude Code + Claude Desktop

macOS MCP

Plug macOS screen control into Claude Code with one MCP server.

macOS MCP is the Model Context Protocol server that powers Fazm's screen control. Standalone, it gives Claude Code (or any MCP-speaking assistant) the ability to open apps, read on-screen text, and click things. On a quota-aware machine this is the tool that explains mysterious spikes: each MCP tool-call turns into a tokenized round trip, and a tracker makes that visible.

Install macOS MCP
5. AI QA testing tool

Assrt

Ship a feature, let Claude write and maintain the Playwright tests for it.

Assrt is an open-source AI test-automation framework that auto-discovers scenarios, generates real Playwright tests with self-healing selectors, and runs visual regression checks. It is the single biggest reason you run out of Claude quota on release days, and the single best reason to have a tracker that tells you before you start a big test generation run instead of at minute 57 of a 60-minute CI job.

Get started with Assrt
6. AI app builder

mk0r

A sentence in, a full HTML/CSS/JS app out. No account, no friction, lots of Claude calls.

mk0r is an AI app maker: describe what you want, watch it build in real time, iterate with words. Each iteration is a Claude round trip. When your cap is tight, mk0r is a great way to audit where your five_hour float goes because every iteration is observable against your tracker in under a minute.

Build an app on mk0r
7. Vipassana retreat resources

Vipassana.cool

Cross-industry pick. When the seven_day cap hits, this is what you do with the rest of the week.

Vipassana.cool is a resource site for Vipassana meditators: guides, science, daily practice, and a practice-buddy matcher. It is on this list because the healthiest response to hitting the weekly quota is not rage-refreshing Settings until it resets; it is closing the laptop. This is the entry I send to friends who ask me how I survive release weeks.

Read the guides
8. Apartment security cameras with edge AI

Cyrano

Cross-industry pick. An AI hardware line item that does not touch your Claude cap.

Cyrano is an edge-AI box that plugs into an existing DVR/NVR over HDMI and makes legacy CCTV intelligent: up to 25 feeds per unit, no camera replacement, two-minute install. It runs its own models locally. The only connection to Claude usage tracking is inverse: it is a good example of AI infrastructure that does not count against your Anthropic quota, which matters when you are the household's primary API billpayer.

See Cyrano

ClaudeMeter vs. every local-log tracker, line by line

Feature | Local-log trackers (ccusage, Claude-Code-Usage-Monitor, /cost) | ClaudeMeter
Reads server-enforced utilization | Estimates from local logs | Yes
Requires pasting a session cookie | Varies, often yes | No, extension uses live session
Matches claude.ai/settings/usage | No, structural gap | Exact float match
Sees API calls made outside Claude Code | No, local log only | Yes, counted server-side
Sees Team/Enterprise shared quota | Personal transcripts only | Yes, one snapshot per org
Native desktop UI with live state | Terminal output / web widget | Menu bar on macOS
License and telemetry | Varies | MIT, zero telemetry
Price | $0 | $0

Verify the ranking yourself in four steps

The whole premise is observable. No benchmarks, no marketing. Open a DevTools tab on claude.ai and follow along.

1. Open claude.ai/settings/usage with DevTools > Network

Filter to XHR. Reload. You will see a GET to /api/organizations/{uuid}/usage return a JSON payload with five_hour, seven_day, and seven_day_opus keys. This is the source of truth.

2. Read the two utilization floats

five_hour.utilization and seven_day.utilization. Both are f64 in [0.0, 1.0]. When either crosses 1.0 you are rate-limited on that window until resets_at passes.
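The threshold semantics above can be made concrete in a few lines. Crossing 1.0 means rate-limited until resets_at, per the source; the 0.8 "yellow" cutoff here is an assumed UI convention, not something the endpoint defines.

```python
def window_state(utilization: float) -> str:
    # >= 1.0 means rate-limited until resets_at (server semantics from the
    # article); the 0.8 yellow threshold is an assumed UI choice.
    if utilization >= 1.0:
        return "rate-limited"
    if utilization >= 0.8:
        return "yellow"
    return "green"

print(window_state(0.42), window_state(0.87), window_state(1.0))
# green yellow rate-limited
```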

3. Compare against your current tracker's output

If you are on ccusage, the number will be different. Not because ccusage is broken, but because ccusage cannot see API calls outside its JSONL scope. This is the competitor gap in one experiment.

4. Install ClaudeMeter and recheck after one 60s poll

brew install --cask m13v/tap/claude-meter. Load the extension unpacked from the repo's extension/ folder. Within 60 seconds the menu bar shows floats that match the Settings page to the digit.

Every tool on this list

claude-meter · fazm · terminator · macos-mcp · assrt · mk0r · vipassana.cool · cyrano · free + MIT · no telemetry · brew install · 60s poll · port 63762 · three endpoints

Frequently asked questions

What makes a Claude usage tracker different from a Claude Code token counter?

A Claude Code token counter (ccusage, Claude-Code-Usage-Monitor, and the built-in /cost command) adds up tokens it can see in local stdout logs and JSONL transcripts. A Claude usage tracker reads the server-enforced quota floats Anthropic actually throttles against, exposed on claude.ai/api/organizations/{org_uuid}/usage as two Window objects (five_hour and seven_day) declared in claude-meter/src/models.rs at lines 19 to 27. The token counter shows your own estimate. The usage tracker shows Anthropic's. When they disagree (they usually do), the server is the one that will rate-limit you.

Why do the numbers in ccusage differ from what claude.ai/settings/usage shows?

ccusage sums tokens it reads from local Claude Code JSONL transcripts (~/.claude/). It cannot see requests made through claude.ai in your browser, the Claude desktop app, API calls from other agents, or any shared session usage on a Team plan. Anthropic's server-side rate limiter counts all of them. The gap is structural, not a bug. Tools that read /api/organizations/{uuid}/usage directly (only ClaudeMeter does today) match the Settings page because they are reading the same float the Settings page renders.

Is there a tracker that does not require pasting my claude.ai cookie manually?

Yes, but only one. ClaudeMeter ships a Chrome/Arc/Brave/Edge extension that runs in the background alongside your existing logged-in claude.ai session and forwards each poll to the menu bar app over 127.0.0.1:63762. You never touch a cookie file, never paste a session token into a terminal, and the extension's service worker stops fetching when your claude.ai session expires. The alternative (Route B, keychain mode) requires one macOS keychain prompt at first launch, then reads Chrome Safe Storage directly. Every other Claude usage tracker we tested requires either a cookie paste or a manual API-key setup step.

Which tracker works if I only use Claude Code from the CLI and never open claude.ai in a browser?

ccusage and Claude-Code-Usage-Monitor, because they read local ~/.claude/ JSONL. ClaudeMeter needs an active claude.ai session either through the browser extension or through a Chromium profile decryptable via keychain. If you never log into claude.ai in a browser, the server-truth endpoints have no session to attach to. That is a hard constraint. In that case you are flying on local estimates and should plan capacity accordingly.

How often does ClaudeMeter poll, and can I get paged when I hit 90% of the weekly quota?

Default poll interval is 60 seconds (POLL_MINUTES = 1 on background.js line 3 of the extension). Each poll fires three JSON GETs: /api/organizations/{uuid}/usage, /api/organizations/{uuid}/overage_spend_limit, and /api/organizations/{uuid}/subscription_details. The menu bar app renders color states (green/yellow/red) based on the utilization floats and can surface a macOS notification when you cross a threshold. There is no built-in paging integration; if you want Slack/PagerDuty hooks, the CLI emits --json and you can pipe it into whatever you run.
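A minimal bridge from the CLI's --json output to an alert hook might look like this. The --json flag comes from the answer above; the output shape (same seven_day.utilization float as /usage) and the 90% threshold are assumptions, and the notify callable stands in for whatever Slack or PagerDuty client you actually run.

```python
THRESHOLD = 0.9  # alert at 90% of the weekly window (illustrative choice)

def check_snapshot(snapshot: dict, notify) -> bool:
    # snapshot: parsed `--json` output, assumed to carry the same
    # seven_day.utilization float as the /usage payload.
    util = snapshot["seven_day"]["utilization"]
    if util >= THRESHOLD:
        notify(f"Claude weekly quota at {util:.0%}")
        return True
    return False

# Dry run with a list standing in for a webhook client:
alerts = []
check_snapshot({"seven_day": {"utilization": 0.93}}, alerts.append)
# alerts is now ["Claude weekly quota at 93%"]
```

Pipe the CLI into a script like this from cron or launchd and you have paging without any built-in integration.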

Does running the extension leak any data to a third party?

No. The extension has two host_permissions declared in manifest.json: https://claude.ai/* (to read the usage endpoints) and http://127.0.0.1:63762/* (to POST to the local menu bar bridge). There is no analytics SDK, no PostHog, no Sentry, no remote logging. The LICENSE is MIT and the source is at github.com/m13v/claude-meter; the whole extension is under 300 lines across background.js, popup.js, and popup.html.

Can I use this if I am on a Claude Team or Enterprise plan instead of Pro or Max?

Yes. The extension iterates every membership in the account payload (background.js lines 17 to 22) and fetches usage for each organization UUID. You get one snapshot row per org. Team and Enterprise orgs expose the same /usage response shape, with the same five_hour and seven_day Window objects. The menu bar app groups them in the dropdown so you can see your personal Pro quota and a shared Team quota at the same time without switching accounts.
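The per-org iteration described above reduces to a small loop. The endpoint path is from the article; the membership dict shape used here ({"organization": {"uuid": ...}}) is an assumption for the sketch, and fetch_json is an injected stand-in.

```python
def snapshot_all_orgs(memberships, fetch_json):
    # One usage row per organization UUID, as the extension does per the
    # article. The membership shape here is an assumed example.
    rows = {}
    for m in memberships:
        uuid = m["organization"]["uuid"]
        rows[uuid] = fetch_json(
            f"https://claude.ai/api/organizations/{uuid}/usage"
        )
    return rows

# Dry run: a personal Pro org plus a shared Team org.
memberships = [
    {"organization": {"uuid": "org-personal"}},
    {"organization": {"uuid": "org-team"}},
]
rows = snapshot_all_orgs(memberships, lambda url: url)
```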

What does a tracker like this cost, and why are most of the list free?

Every Claude usage tracker on the first page today is free and open source. The base expense is zero because none of them store or sync data server-side; they read your session and render in-process. ClaudeMeter ships through brew (brew install --cask m13v/tap/claude-meter), no account, no telemetry. ccusage and Claude-Code-Usage-Monitor are npm packages. The paid tools in this space are observability platforms (Helicone, OpenRouter dashboards) aimed at teams with API traffic, not individuals watching their Pro window.

What happens when my claude.ai session expires?

The extension's fetch comes back 401, background.js logs it, and the next poll retries. Nothing recovers until you visit claude.ai and re-authenticate in the same browser profile. When you do, the extension resumes on the next 60-second tick without any manual intervention. The menu bar icon shows an exclamation badge during the outage so you notice. This is the main operational difference from cookie-paste-once trackers, which silently keep returning stale snapshots until you paste a fresh cookie.
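The expire-badge-retry behavior described above fits in one small function. This is a sketch of the control flow only: PermissionError stands in for an HTTP 401, and set_badge stands in for the menu bar icon state.

```python
def poll_tick(fetch_usage, set_badge):
    # On an expired claude.ai session (HTTP 401, modeled here as
    # PermissionError) show the warning badge and let the next 60s tick
    # retry; on success, clear the badge and return the snapshot.
    try:
        snapshot = fetch_usage()
    except PermissionError:
        set_badge("!")   # visible outage marker
        return None
    set_badge("")        # clear on recovery
    return snapshot

# Dry run: a healthy tick.
badges = []
out = poll_tick(lambda: {"five_hour": {"utilization": 0.2}}, badges.append)
```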

Why is every entry in this list relevant to someone who is already watching their Claude quota?

Because if you are reading Claude usage trackers, you are heavy on Claude, which means the rest of your stack is probably downstream of Claude: an AI desktop agent that burns tokens (Fazm), a computer-use SDK that Claude drives (Terminator), an MCP server that lets Claude control your Mac (macOS MCP), a QA harness that writes tests with Claude (Assrt), or a side project that builds apps with Claude (mk0r). The two cross-industry entries (Vipassana, Cyrano) are the two I would spend Claude-free hours on when the weekly cap hits. They are adjacent, not same-niche, and that is on purpose.

Running Claude at a team scale and want to talk through quota strategy?

15-minute call. Bring your current tracker, we will open DevTools together.