A silent tightening tracker comes down to six capabilities. Here is the list.
Tightenings that land without documentation do not show up in your local ~/.claude/projects/*.jsonl files. The tokens per message are the same before and after. The signal lives in the response of /api/organizations/{org_uuid}/usage, polled minute by minute, across seven rolling buckets at once. This page is the concrete list of what a tracker needs to check to catch the next one, and an honest comparison of what the popular tools can actually see today.
What people mean by "silent" here
A tightening is silent when it lands without a changelog entry, without a banner on claude.ai, and without moving any field name in the usage payload. Your Claude bills look the same. Your logs look the same. Settings still renders the same two bars. The only thing that shifts is the ratio between a fixed workload and the utilization numbers the server reports back. That is a hard thing to see from the outside because it moves no surface you are watching by default.
The 2026-03-26 change is the current example. Anthropic eventually acknowledged that weekday peak-hour limits were tightened and that overall weekly totals were unchanged. Before that acknowledgement landed, the only way to see it was to have been polling the /usage endpoint at a short cadence and comparing two weeks of samples on the same account.
Token-counter vs server-reader, frame by frame
A token counter's view ends at the local logs. A server-reader follows the request all the way to /usage, where the tightening actually lives.
The six-capability checklist
If a tool is missing any one of these, it will miss a silent tightening of the kind that shipped on 2026-03-26.
Reads the server, not the logs
Hits GET /api/organizations/{org_uuid}/usage directly. That is where silent tightening lives. ~/.claude/*.jsonl tokens cannot tell you this.
60-second poll
chrome.alarms.create("refresh", { periodInMinutes: 1 }). A slope change at the weekday 13:00 UTC boundary shows up as a knee, not a staircase.
Watches all 7 buckets
five_hour, seven_day, seven_day_sonnet, seven_day_opus, seven_day_oauth_apps, seven_day_omelette, seven_day_cowork. A tightening in any one is visible.
Normalizes 0-1 vs 0-100
Same payload ships both scales. If you do not clamp u <= 1 ? u * 100 : u, your 0.64 and 64.0 plot as different magnitudes and you miss the jump.
Stores raw JSON per sample
So when a field is renamed, added, or quietly dropped, you can diff the schema back to the exact hour Anthropic shipped the change.
Runs in the background
Extension service worker keeps polling while you work elsewhere. A tracker that only runs while you read the Settings tab cannot capture the weekday 13:00 UTC knee.
Anchor fact: all 7 buckets, named as fields
The Rust struct that ClaudeMeter deserializes the /usage payload into lives at src/models.rs, lines 18 through 28. It names every one of the seven rolling buckets as an explicit Option<Window> field. That is not a stylistic choice. It is the part of the code that makes the next silent schema change loud: a renamed or added bucket surfaces as a parse error rather than a silently missing key on the plot.
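The same fail-loud idea can be sketched in a few lines of JavaScript. The seven bucket names below are the ones the payload ships; the function itself is illustrative, not ClaudeMeter's actual parser:

```javascript
// Validate a /usage payload against the seven known bucket names and
// fail fast on drift. An unknown or renamed field throws instead of
// silently vanishing from the plot.
const KNOWN_BUCKETS = [
  "five_hour", "seven_day", "seven_day_sonnet", "seven_day_opus",
  "seven_day_oauth_apps", "seven_day_omelette", "seven_day_cowork",
];

function validateUsagePayload(payload) {
  const unknown = Object.keys(payload).filter((k) => !KNOWN_BUCKETS.includes(k));
  if (unknown.length > 0) {
    // A renamed or added bucket lands here, loudly.
    throw new Error(`schema drift: unexpected field(s) ${unknown.join(", ")}`);
  }
  // Every known bucket gets an explicit slot, present or not.
  return KNOWN_BUCKETS.map((name) => ({ name, window: payload[name] ?? null }));
}
```

The point is the same as the typed Rust struct: the next schema change becomes a thrown error at parse time, not a quietly missing line on a chart.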
The 60-second poll, in full
Nothing exotic. A Chrome alarm fires every minute, the service worker fetches the account endpoint to find your org UUID, then fetches /api/organizations/{org}/usage with credentials: "include" so your existing claude.ai session cookie rides along. No paste step. No second login. The whole loop is 14 lines that happen to produce a time series that silent tightening cannot hide from.
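A minimal sketch of that loop, assuming only a standard fetch. The endpoint paths and the credentials: "include" detail are the ones described above; the function and variable names are mine, not the extension's:

```javascript
// One poll: two fetches, one snapshot. In the real extension the
// trigger is a chrome.alarms callback; setInterval stands in here.
const BASE = "https://claude.ai";

async function pollOnce(fetchFn = fetch) {
  // Fetch 1: discover the org UUID from the account endpoint.
  const account = await (await fetchFn(`${BASE}/api/account`, {
    credentials: "include", // existing claude.ai session cookie rides along
  })).json();
  const orgUuid = account.memberships[0].organization.uuid;

  // Fetch 2: pull the usage payload for that org.
  const usage = await (await fetchFn(`${BASE}/api/organizations/${orgUuid}/usage`, {
    credentials: "include",
  })).json();

  // One snapshot: raw body plus timestamp, so later schema changes stay diff-able.
  return { fetched_at: new Date().toISOString(), raw: usage };
}

// One alarm: once a minute, forever.
// setInterval(() => pollOnce().then(saveSnapshot), 60_000);
```

Note that the raw response body is stored whole rather than picked apart; that is capability five on the checklist.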
What the extension does every minute
One alarm, two fetches, one snapshot
The menu bar binary is not required for detection. It just gives you a glance-able readout. The service worker is the tracker.
The 0-to-1 vs 0-to-100 trap
Every silent tightening tracker eventually hits this. The same payload returns five_hour.utilization as 0.64 and seven_day_opus.utilization as 64.0. Same response body, two scales. If you skip the clamp your time-series plot will draw one line on a 0-1 axis and another on a 0-100 axis and they will look like they live in different universes. The fix is one line.
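That one line, as a function. The only assumption baked in is that no bucket legitimately reports a value between 0 and 1 on the percent scale, i.e. a fraction of one percent would be mis-scaled:

```javascript
// Normalize both scales the payload ships onto 0-100 before plotting.
const toPercent = (u) => (u <= 1 ? u * 100 : u);

toPercent(0.64); // 64
toPercent(64.0); // 64
```

Apply it to every bucket's utilization before the value touches a chart or a threshold check.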
Which tracker can actually see silent tightening?
The tightening is a server-weight change. A tool built on local logs cannot see a server-weight change. That is not a bug; it is a data-source limit.
| Feature | Local-log trackers (ccusage, Claude-Code-Usage-Monitor, etc.) | ClaudeMeter |
|---|---|---|
| Reads /api/organizations/{org}/usage (server truth) | No. Reads ~/.claude/projects/*.jsonl locally. | Yes. Every 60 seconds. |
| Poll cadence for slope detection | Not applicable. One-shot per command. | 60s (POLL_MINUTES = 1). |
| Covers all seven rolling buckets | No. Estimates a single token budget. | Yes. All seven named in src/models.rs. |
| Sees silent server-weight changes | No. Tokens per message do not change. | Yes. Slope of utilization vs workload visibly shifts. |
| Needs a cookie paste to work | No, but also cannot see the server. | No. Browser extension reuses the claude.ai session. |
| Raw JSON per sample (for schema diffs) | No. Tokens only. | Yes. UsageSnapshot keeps the full response. |
| Background capture while idle | No. Only runs when Claude Code runs. | Yes. Service worker + menu bar binary. |
The weight pipeline, in one diagram
One message, one weight scaler, seven buckets
The scaler sits in front of every bucket. Which bucket it tightens is a knob the backend can turn. A tracker that only looks at one bucket is already missing five of the knobs.
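A toy model makes the knob concrete. This is an illustration of the diagram's logic, not Anthropic's actual accounting: the same token count passes through a per-bucket weight before it is charged, and tightening is the backend raising one weight:

```javascript
// Charge one message's tokens to each bucket through its weight.
// Tightening = a raised weight; the token count never changes.
function chargeBuckets(tokens, weights, buckets) {
  const next = { ...buckets };
  for (const [name, weight] of Object.entries(weights)) {
    next[name] = (next[name] ?? 0) + tokens * weight;
  }
  return next;
}

// Same workload, two days: only the five_hour weight moved.
const before = chargeBuckets(1000, { five_hour: 1.0, seven_day: 1.0 }, {});
const after  = chargeBuckets(1000, { five_hour: 1.7, seven_day: 1.0 }, {});
// before.five_hour === 1000, after.five_hour === 1700, seven_day identical.
```

In this model a local log records only `tokens`, which is identical in both runs; only the server-side sums diverge, which is why the logs stay blind.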
Reproduce the signal in two curl calls
You do not need the extension to see this. Paste your claude.ai cookie into $COOKIE and pull the endpoint twice, on either side of a fixed workload, on a weekday across the 13:00 UTC boundary.
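A sketch of those two calls, assuming jq is installed; run the whole block once before the workload and once after, then compare the saved files:

```shell
# Cookie header copied from DevTools; stays in a shell var, never shared.
COOKIE='sessionKey=...'

# Call 1: resolve the org UUID from the account endpoint.
ORG=$(curl -s 'https://claude.ai/api/account' \
  -H "Cookie: $COOKIE" | jq -r '.memberships[0].organization.uuid')

# Call 2: pull the usage payload, keep the raw body, peek at five_hour.
curl -s "https://claude.ai/api/organizations/$ORG/usage" \
  -H "Cookie: $COOKIE" \
  -H 'Referer: https://claude.ai/settings/usage' \
  | tee "usage-$(date -u +%Y%m%dT%H%M%SZ).json" | jq '.five_hour'
```

The tee keeps the raw JSON on disk, so the before/after comparison is a file diff rather than a memory.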
Roll your own tracker in six steps
Get a session cookie
Open claude.ai, sign in, open DevTools, Application tab, Cookies. You want sessionKey and any other cookie whose name starts with cf_ or __secure-. You are not sharing these; they stay in a shell var.
Find your org UUID
GET https://claude.ai/api/account with the cookie. The response has memberships[]. Each membership has an organization.uuid. Pick the one matching the plan you want to track.
Hit /usage with credentials
GET https://claude.ai/api/organizations/{org_uuid}/usage. Send the full cookie header and Referer: https://claude.ai/settings/usage. You get back a JSON with seven Window-shaped fields.
Clamp the utilization scale
Same response returns some buckets as 0..1 and some as 0..100. Apply u <= 1 ? u * 100 : u before you plot, or your pre-change and post-change samples will look like they are on different axes.
Schedule a 60-second poll
cron, launchd, a service worker, whatever. Write the raw JSON plus fetched_at to disk every minute. You want the raw bodies so a schema change later is diff-able, not interpreted.
Plot the slope, not the value
Chart each bucket's utilization against fetched_at. The silent tightening shows up as a steeper line on weekdays 13:00 to 19:00 UTC in the five_hour trace while the seven_day trace keeps its pre-change slope. That divergence is the evidence.
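Step 6 can be sketched as a least-squares slope per bucket, in percentage points per hour. The sample shape matches step 5's raw-JSON-plus-fetched_at records; the function itself is illustrative, not ClaudeMeter's code:

```javascript
// Fit utilization-vs-time for one bucket and return the slope in
// percentage points per hour. Applies the 0-1 vs 0-100 clamp first.
function slopePerHour(samples, bucket) {
  const pts = samples.map((s) => {
    const u = s.raw[bucket].utilization;
    return {
      t: new Date(s.fetched_at).getTime() / 3_600_000, // hours since epoch
      y: u <= 1 ? u * 100 : u,                         // normalize both scales
    };
  });
  const n = pts.length;
  const mt = pts.reduce((a, p) => a + p.t, 0) / n;
  const my = pts.reduce((a, p) => a + p.y, 0) / n;
  const num = pts.reduce((a, p) => a + (p.t - mt) * (p.y - my), 0);
  const den = pts.reduce((a, p) => a + (p.t - mt) ** 2, 0);
  return num / den;
}
```

Run it once over peak-window samples and once over the same hours a week earlier; a five_hour slope that jumps while the seven_day slope holds is the divergence the step describes.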
Five ways planners get caught out
Each of these is something we have watched a reasonable engineer do, then get surprised by a 429 during a weekday afternoon they had budgeted as safe.
Avoid these
- Reading a single tracker's headline bar and assuming the weekly total is what tightened. It wasn't. The seven_day bucket has kept its curve.
- Using a token-counting tool as a rate-limit predictor. Tokens per message do not budge during silent tightening; the server weight does.
- Polling once an hour. The weekday 13:00 UTC knee disappears into two bars and you lose the evidence you were trying to collect.
- Ignoring seven_day_opus, seven_day_oauth_apps, seven_day_omelette, and seven_day_cowork because Anthropic only shows two bars on Settings. The server rate-limits on whichever bucket hits 100 first.
- Not logging raw JSON. When the next schema shift ships, you have no baseline to diff against and you are back to guessing.
The honest caveats
The endpoint is internal and undocumented. Anthropic can rename any of the seven fields in UsageResponse at any release. The 1.4x to 2x weekday multiplier we observed in March is a pattern across a month of samples, not a spec quote; it can drift within the peak window and between days. And a tracker polling with your session cookie is only as current as your cookie is; if you sign out of claude.ai, the service worker has nothing to send and starts reporting stale data. All of this is inherent to reading a surface that no vendor promised to hold stable.
Watch the next tightening live
ClaudeMeter is free, MIT-licensed, and ships with the extension so there is no cookie-paste step. Installs in a minute, polls every minute after that.
Install ClaudeMeter

Hit a tightening your tracker missed?
Send us your 48-hour sample of the /usage endpoint. We map edges for free.
Frequently asked questions
Why can't my Claude Code usage monitor see the silent tightening?
Because it reads ~/.claude/projects/*.jsonl files and counts tokens. Silent tightening does not change the number of tokens your message consumes; it changes the server-side weight the backend applies to those tokens when it charges the five_hour and seven_day buckets. Tokens in the log look identical before and after. The only place the new weight shows up is in the JSON response at /api/organizations/{org_uuid}/usage, which no local-log tool fetches.
What is the minimum set of capabilities a silent tightening tracker needs?
Six: (1) hits /api/organizations/{org_uuid}/usage directly rather than estimating from local logs, (2) polls on at least a 60-second cadence so slope changes show up as slopes instead of staircases, (3) reads all seven rolling buckets, not just the headline two, (4) normalizes the 0-1 vs 0-100 utilization field which ships both scales in the same response, (5) logs raw JSON per sample so you can diff a schema change later, and (6) runs in the background so an idle user still captures the weekday 13:00 UTC knee. ClaudeMeter checks all six.
What are the seven rolling buckets the endpoint returns?
five_hour, seven_day, seven_day_sonnet, seven_day_opus, seven_day_oauth_apps, seven_day_omelette, seven_day_cowork. They are declared as explicit Option<Window> fields in ClaudeMeter's Rust struct at src/models.rs lines 18-28. Each ships a utilization number and its own resets_at. Anthropic documents two of them (five_hour and seven_day). The other five silently appeared in the payload and still work the same way.
How often does ClaudeMeter poll and why that number?
Once per 60 seconds. The Chrome extension calls chrome.alarms.create("refresh", { periodInMinutes: 1 }) in extension/background.js. Anything slower than a minute turns the slope change at 13:00 UTC weekdays into a staircase of samples and you lose the knee in the curve. Anything faster would be pointless because Anthropic's own Settings page does not recompute the five_hour bucket any faster than that, and you would be hammering the endpoint for no new data.
Do I need to paste a cookie into a script?
Not with the extension route. The Chrome extension fetches /api/organizations/{org}/usage with credentials: "include", so your existing claude.ai session cookie is used automatically. The other route, the menu bar binary alone, reads Chrome's Safe Storage via keychain. Either way, no cookie paste and no second login.
What does 'silent' actually mean here?
It means the change lands without touching the HTML of claude.ai/settings/usage or any public documentation. The bar on Settings keeps looking the same. The fields in the endpoint payload keep the same names. Only the relationship between your workload and the utilization numbers shifts. If your detector depends on Anthropic telling you something changed, you cannot see it until someone on Reddit notices their five_hour bucket filling twice as fast during weekday afternoons.
Can ccusage or Claude-Code-Usage-Monitor be patched to see this?
Not without adding a completely new data source. Both tools are fundamentally log-scanners. Their signal is what Claude Code wrote to the local filesystem. To catch silent tightening you need to add an HTTP client, session-cookie plumbing, and a schema for the /usage response. At that point you have rewritten the hard part of ClaudeMeter. A simpler path is to run both together: read ClaudeMeter for server truth, your existing tool for local token accounting.
Why does ClaudeMeter log every bucket by name instead of a generic map?
Because a typed struct rejects unknown-but-important drift louder than a generic HashMap would. UsageResponse in src/models.rs has five_hour, seven_day, seven_day_sonnet, seven_day_opus, seven_day_oauth_apps, seven_day_omelette, seven_day_cowork as distinct fields. If Anthropic renames one or adds an eighth, the deserializer surfaces that shape change as a loud error instead of silently mapping it into a missing key that goes unplotted. That loud failure is how you catch the next silent schema change early.
Is this endpoint safe to poll every minute?
It is the same endpoint claude.ai/settings/usage polls when you reload that page. You are asking for your own org's usage with your own session cookie. Nothing exotic, just an authenticated JSON GET. That said, the endpoint is undocumented and Anthropic can change the field names or cadence limits without notice. ClaudeMeter is open source (MIT), so if the endpoint shape changes, you can see exactly what moved and why in the src/models.rs diff.
What was the last tightening ClaudeMeter detected?
The 2026-03-26 change. The seven_day bucket stayed flat across a fixed 50-message Sonnet workload. The five_hour bucket climbed 1.4x to 2x faster on weekdays 13:00-19:00 UTC on the same workload. Both deltas came straight from the /api/organizations/{org}/usage response, polled every 60 seconds before and after the boundary. The per-model seven_day_sonnet and seven_day_opus buckets did not move.
Keep reading
The weekly bucket did not move. The 5-hour one did.
Pull the /usage endpoint before and after 2026-03-26 on a fixed workload. Only five_hour.utilization changes slope. seven_day holds its curve.
The Claude rolling window cap is seven windows, not one
Anthropic publishes two bars. The endpoint returns seven. Here is every bucket with field names and reset semantics.
ClaudeMeter vs ccusage
One reads local tokens. One reads the server quota Anthropic enforces. They answer different questions.