The tightened Claude plan has seven reset clocks. You are probably watching the wrong one.
Every guide about the quota tightening tells you to wait for Sunday. That is only correct if the bucket blocking you right now is seven_day. After 2026-03-26, it usually isn't. The /api/organizations/{org_uuid}/usage endpoint returns a resets_at timestamp on every one of seven rolling windows. Read the one next to the bucket that is actually at 100 percent, and you know, to the minute, when your next message will go through.
Why "when do I get my quota back" is seven questions
Before the tightening most people could ignore this. The weekly bar on claude.ai/settings/usage was the one that tripped, and its reset roughly aligned with the start of your usage week, so "wait until Sunday" was usually right and usually close enough. After 2026-03-26 the same fixed workload drives the five_hour bucket to 100 percent during weekday afternoons, while seven_day keeps its old curve. The gating bucket has changed, which means the active reset clock has changed. "Wait until Sunday" is often off by several days.
The endpoint does not hide any of this. It returns seven Window-shaped fields, each with its own utilization and resets_at. The only work for a tracker is to read them all, pick the one at 100 percent, and render its reset delta in a unit a human can act on.
Anchor fact: the two-field Window struct
This is the whole shape. Two fields per bucket, repeated seven times. ClaudeMeter deserializes into this struct directly in src/models.rs. If the endpoint ever renames the field, the parse fails loudly instead of silently mapping the new name into a missing key on your plot.
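As a sketch, here is the same two-field shape mirrored in TypeScript (the interface is named UsageWindow only to avoid colliding with the DOM's built-in Window type; the real definition is the Rust struct in src/models.rs, which uses serde and chrono types as described in the FAQ below):

```typescript
// TypeScript mirror of the two-field bucket shape this article describes.
interface UsageWindow {
  utilization: number;       // how full this bucket is
  resets_at: string | null;  // ISO timestamp; null until the bucket is windowed
}

// Illustrative payload fragment -- the values here are invented for the example.
const sample = `{"five_hour": {"utilization": 0.98, "resets_at": "2026-04-24T18:14:00Z"}}`;
const parsed = JSON.parse(sample) as Record<string, UsageWindow>;
```

Because the shape is this small, a rename of either field shows up immediately as a parse or type mismatch rather than a silently empty plot.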
The seven clocks, bucket by bucket
Each of these returns its own utilization and its own resets_at in the same JSON payload. After the tightening, the one most likely to block you is the first one.
five_hour
The rolling 5-hour window. After the tightening, this is the bucket that tends to hit 100 first on weekday afternoons. Its resets_at moves in minutes, not days.
seven_day
The aggregate rolling 7-day. Its resets_at typically lands 5 to 7 days out. Unchanged curve on the same fixed workload after 2026-03-26.
seven_day_sonnet
Sonnet-specific weekly. Carries its own resets_at. Can be at a totally different utilization than seven_day.
seven_day_opus
Opus-specific weekly. Often closer to 100 than the aggregate, because Opus tokens are weighted heavier. Its resets_at is what matters for Opus users.
seven_day_oauth_apps
Third-party app traffic. Separate ceiling, separate resets_at, and it is possible to be blocked here while the main seven_day is well below 100.
seven_day_omelette + seven_day_cowork
Less common traffic paths. Both ship in the same payload, and both carry their own resets_at. Unless you actively use those flows, you will usually see them as null.
One message, seven clocks ticking
Every message updates every bucket's resets_at, not just one
All seven utilization/resets_at pairs are what a tracker has to read. Reading only one of them is where the "but I still had quota left" confusion comes from.
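A minimal TypeScript sketch of that read: scan every bucket and keep the fullest one. The payload shape and bucket names are as described in this article; the helper name is invented, and the "highest utilization wins" rule is a client-side heuristic, since only the server knows which bucket actually tripped:

```typescript
interface UsageWindow { utilization: number; resets_at: string | null; }

// Given the full payload, return the bucket with the highest utilization --
// the one most likely to be enforcing the block -- and its reset timestamp.
function gatingBucket(
  payload: Record<string, UsageWindow | null>
): { name: string; resets_at: string | null } | null {
  let best: { name: string; resets_at: string | null; utilization: number } | null = null;
  for (const [name, w] of Object.entries(payload)) {
    if (!w) continue; // null bucket: not windowed yet, cannot be the gate
    if (!best || w.utilization > best.utilization) {
      best = { name, resets_at: w.resets_at, utilization: w.utilization };
    }
  }
  return best && { name: best.name, resets_at: best.resets_at };
}
```

With five_hour at 100 and seven_day at 78, this returns five_hour's resets_at, not Sunday.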
The CLI that prints a reset per bucket
The CLI formatter inside ClaudeMeter is the cleanest expression of this idea. It takes one Window, computes the delta between now and resets_at, and returns a single line with the delta rendered in d and h components. The caller just invokes it per bucket and prints. No global "quota resets" line, because there isn't one to print.
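A hedged TypeScript sketch of that per-bucket line (the real code is Rust in src/format.rs; this reproduces only the described output shape, with the delta broken into d and h components, and the helper name is invented):

```typescript
// Render one bucket's reset line: utilization percentage plus the delta
// between now and resets_at, split into day and hour components.
function resetLine(label: string, utilization: number, resetsAtMs: number, nowMs: number): string {
  const deltaH = Math.max(0, Math.floor((resetsAtMs - nowMs) / 3_600_000));
  const d = Math.floor(deltaH / 24);
  const h = deltaH % 24;
  const delta = d > 0 ? `${d}d ${h}h` : `${h}h`;
  return `${label} ${Math.round(utilization * 100)}% used -> resets (in ${delta})`;
}
```

Calling it once per bucket yields one line per clock, which is the point: there is no single "quota resets" line to collapse them into.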
What the reset line looks like, before vs after the tightening
Before the tightening, the weekly line was almost always the active block, a single 'resets on Sunday' read was usually close enough, and most readers never touched the five_hour line:

- seven_day is the gating bucket
- resets_at is days out
- a calendar mental model works

After the tightening, the picture inverts on heavy weekday afternoons:

- five_hour is the gating bucket
- resets_at is minutes or hours out
- only a rolling-window mental model works
The extension renders each reset inline on the bar
The Chrome extension's popup renders the bucket bars with the reset delta appended to the label. The helper fmtResets picks minute, hour, or day resolution based on how far out the reset is, so a 37-minute wait is literally labelled 5-hour · 37m instead of a decimal hour. That small formatting choice is what turns seven clocks into something you glance at once and act on.
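The unit-picking behaviour, sketched in TypeScript with the thresholds this article describes (the real helper is fmtResets in extension/popup.js; its exact rounding may differ from this sketch):

```typescript
// Pick the display unit from how far out the reset is:
// under 1 hour -> minutes, under 48 hours -> hours, otherwise days.
function fmtResets(resetsAt: string | null | undefined, nowMs: number): string {
  if (!resetsAt) return "";                 // null bucket: nothing to label
  const deltaMs = new Date(resetsAt).getTime() - nowMs;
  const mins = Math.ceil(deltaMs / 60_000);
  if (mins < 60) return `${mins}m`;         // tight window: minute resolution
  const hours = Math.floor(mins / 60);
  if (hours < 48) return `${hours}h`;       // normal: hour resolution
  return `${Math.floor(hours / 24)}d`;      // far out: day resolution
}
```

A 37-minute wait renders as "37m" rather than "0.6h", which is what makes the label actionable at a glance.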
What a tracker has to do with resets_at
Local-log tools can count tokens but cannot read a server-side window. resets_at is the server's field. That is why no log-scanner can answer 'when can I code again' on the tightened plan.
| Feature | Local-log trackers (ccusage, Claude-Code-Usage-Monitor, etc.) | ClaudeMeter |
|---|---|---|
| Shows resets_at for five_hour | Not emitted. Log readers have no concept of server-side windows. | Yes. Formatted as 'in 37m' when close, 'in 4h' when comfortable. |
| Shows resets_at for seven_day | Approximated from a calendar assumption. Often wrong by hours. | Yes. The endpoint's own timestamp, rounded to days when >48h out. |
| Shows per-model reset (Sonnet, Opus) | No. Log tools don't separate the model-specific weekly caps. | Yes. Two independent resets_at values alongside their bars. |
| Updates resets_at as the window slides | Not applicable. | Every 60 seconds, matching the endpoint's precision. |
| Labels which bucket caused a 429 | No. Only the server knows which bucket tripped. | Implicit: the bucket where utilization >= 100 is the one blocking you. |
| Tells you when a null resets_at flips to a real one | No. | Yes. Poll catches the first sample with a non-null timestamp. |
Reproduce it in one curl call
You do not need the extension to confirm any of this. Paste your logged-in claude.ai cookie into $COOKIE and hit the endpoint once. Every Window carries its own resets_at. The one you care about is the one next to the bucket closest to 100 percent.
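A sketch of that call. The https://claude.ai host and the plain Cookie header are assumptions about how your browser session is presented; the path is the one quoted throughout this article, and ORG_UUID is your own organization's UUID:

```shell
# Assumes the endpoint is served from claude.ai and accepts the browser
# session cookie as-is; substitute your own org UUID and cookie string.
curl -s "https://claude.ai/api/organizations/$ORG_UUID/usage" \
  -H "Cookie: $COOKIE"
```

The raw JSON is small enough to read by eye: seven Window-shaped fields, each with its own utilization and resets_at.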
The five-step read on a tightened plan
Open the endpoint response, or let ClaudeMeter show it.
GET /api/organizations/{org_uuid}/usage with your session cookie, or just open the ClaudeMeter menu bar icon. The binary prints 'N% used -> resets (in Nh)' per bucket, straight from format.rs lines 75-98.
Find the bucket closest to 100 percent.
That is the one the server is rate-limiting against. For five_hour the number is often >90 during weekday peak afternoons after the tightening; for seven_day it is often well below.
Read the resets_at for that bucket only.
The other six resets_at values are not your active block. They matter later. ClaudeMeter rounds the delta via fmtResets (popup.js lines 17-27) so '(in 37m)' tells you the wait in the correct unit.
Plan the next action against that delta.
If five_hour is the gate, your wait is minutes: queue the next task. If seven_day_opus is the gate and the delta is '3d', do not retry in a loop; switch model or stop for the day.
Keep the poll running.
The window slides. resets_at for every bucket shifts minute by minute. A sample you took 45 minutes ago is stale. 60-second polling (POLL_MINUTES = 1 in extension/background.js) matches the data's own cadence.
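The loop those five steps add up to can be sketched as follows (fetchUsage is a hypothetical stand-in for whatever transport you use; the cadence matches the POLL_MINUTES = 1 setting mentioned above):

```typescript
interface UsageWindow { utilization: number; resets_at: string | null; }
type Payload = Record<string, UsageWindow | null>;

// Poll on the data's own cadence and hand each fresh sample to the caller;
// a sample older than one poll interval is treated as stale.
function startPolling(fetchUsage: () => Promise<Payload>, onSample: (p: Payload) => void): () => void {
  const POLL_MS = 60_000; // matches POLL_MINUTES = 1 in extension/background.js
  const tick = () => fetchUsage().then(onSample).catch(() => { /* keep last good sample */ });
  tick();                                   // first sample immediately
  const id = setInterval(tick, POLL_MS);    // then every 60 seconds
  return () => clearInterval(id);           // stop handle
}
```

Because every message slides every window, anything fancier than "poll and replace" just reintroduces staleness.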
The fields you care about live in one payload
Every one of the seven buckets is an Option&lt;Window&gt; field in the same JSON body, and every one has its own resets_at.
Shapes of the delta you'll see on each bucket
The same helper (fmtResets) picks the unit for each: below one hour you get minutes, below 48 hours you get hours, past that, days.
Five ways people misread the reset after the tightening
Each of these is the same core mistake: treating the plan as having one clock instead of seven. The endpoint does not work that way.
Avoid these
- Reading the 'resets Sunday' line on claude.ai and assuming that is when you get unblocked. It is only true if seven_day is the gating bucket, which is no longer the common case.
- Treating five_hour.resets_at like a fixed clock. It slides forward with every new message. A sample from an hour ago is already wrong.
- Ignoring seven_day_opus.resets_at as an Opus user. The per-model ceiling often trips before the aggregate, and the resets_at for the aggregate won't free you.
- Treating null resets_at as 'that bucket doesn't exist'. It means you have not used that model in the current window. The next message materialises it.
- Scheduling a 'retry at Monday 09:00' job that hammers the endpoint. If your gating bucket is five_hour with a 37-minute reset, you are 37 minutes away, not three days.
The honest caveats
The endpoint is internal. The field names can change without notice. The resets_at timestamps slide as the window rolls, so a sample from 30 minutes ago is usually off. The seven bucket names listed here were stable through 2026-04-24; Anthropic could rename, add, or drop one on any release. ClaudeMeter is MIT and open source, so if that happens you can see the shape change in one git diff of src/models.rs.
See all seven resets at once
ClaudeMeter is free, MIT-licensed, no cookie paste. Install the extension and the menu-bar app, open the popup, and every bucket renders with its own live reset label.
Install ClaudeMeter

Stuck on a tightened plan and not sure which bucket is blocking you?
Send us a /usage sample and we'll map which of the seven reset clocks applies to your workload.
Frequently asked questions
My Claude got tightened and I am blocked. How do I know when I can code again?
Pull /api/organizations/{org_uuid}/usage right now. For every field where utilization is close to 100, read the resets_at ISO timestamp next to it. That is your real wait time. The block is being enforced by whichever of the seven buckets crossed 100 first, and only that bucket's resets_at matters. ClaudeMeter prints it in the menu bar as, for example, '5-hour 98% -> resets Fri Apr 24 18:14 (in 37m)'. If the gating bucket is five_hour your wait is measured in minutes, not days, regardless of where the weekly bar sits.
So which of the seven buckets blocked me?
You cannot tell from a single HTTP 429. The server does not name the bucket in the response body. You have to read /api/organizations/{org_uuid}/usage a moment before or after and find the field where utilization >= 100. In practice that is five_hour, seven_day, seven_day_sonnet, or seven_day_opus. The less common ones (seven_day_oauth_apps, seven_day_omelette, seven_day_cowork) gate different traffic paths. ClaudeMeter names all seven as distinct Option<Window> fields in src/models.rs lines 18-28 so you can see which ceiling you hit.
Why isn't there just one 'quota resets in' countdown?
Because the buckets are rolling, not calendar-aligned. resets_at is computed as 'the moment this window's oldest chargeable usage falls out of the window', which is a different wall-clock time for every bucket and every account. Two Pro users at the same utilization can have different five_hour resets_at values because their charging histories differ. That is why ClaudeMeter's format.rs prints the delta in '(in 2d 4h)' form next to each bucket: a single 'resets on Sunday' line would be wrong for almost everyone.
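A toy model makes that concrete. This illustrates only the semantics described here, not Anthropic's actual accounting; the function name and the flat list of event timestamps are inventions for the example:

```typescript
// Toy rolling-window model: resets_at is the moment the oldest event still
// inside the window ages out, i.e. oldest in-window timestamp + window length.
function resetsAt(eventTimesMs: number[], windowMs: number, nowMs: number): number | null {
  const inWindow = eventTimesMs.filter((t) => t > nowMs - windowMs);
  if (inWindow.length === 0) return null; // nothing windowed: resets_at is null
  return Math.min(...inWindow) + windowMs;
}
```

Two accounts at the same utilization get different values because their oldest in-window event differs, which is exactly why no single "resets on Sunday" line can be right for everyone.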
Where exactly is resets_at defined in the product source?
In /src/models.rs, the Window struct is the shape of every rolling bucket: { utilization: f64, resets_at: Option<DateTime<Utc>> }. Every one of the seven fields (five_hour, seven_day, seven_day_sonnet, seven_day_opus, seven_day_oauth_apps, seven_day_omelette, seven_day_cowork) deserializes into a Window, so each has its own resets_at. The CLI formatter at src/format.rs lines 75-98 prints a separate 'resets Mon Apr 28 14:30 (in 2d 4h)' per bucket. The extension UI at extension/popup.js uses fmtResets to render the same value inline as '5-hour · 30m' on each bar.
Does the tightening change what resets_at means?
No. The field's semantics are unchanged. What changed after 2026-03-26 is the rate at which utilization climbs against a fixed workload during weekday peak hours, which makes five_hour reach 100 faster and therefore resets_at on five_hour becomes the active countdown far more often than it used to be. Before the tightening, seven_day was the common gate. After, five_hour is the gate on heavy weekday afternoons. Same field, different bucket showing up as the blocker.
Can I read resets_at without installing ClaudeMeter?
Yes. Paste your claude.ai session cookie into curl, hit /api/organizations/{org_uuid}/usage, and every Window-shaped field in the response will contain a resets_at ISO timestamp. ClaudeMeter just polls that endpoint every 60 seconds and formats the delta for you. The extension uses credentials: 'include' so no cookie paste is needed. The menu bar binary uses Chrome Safe Storage via keychain. Either route, same field.
Can resets_at be null?
Yes. In the Rust struct it is Option&lt;DateTime&lt;Utc&gt;&gt;, and the extension's fmtResets(w?.resets_at) returns an empty string when the field is absent. Treat null as 'not currently windowed' for that bucket. In practice it is null for per-model buckets when the account has not used that model in the current window. Once you start using the model, a resets_at appears on the next poll.
Why does ClaudeMeter show the delta in 'm' or 'h' or 'd' instead of a fixed format?
Because in a tight window you need minute resolution and in a calm week you don't. extension/popup.js fmtResets uses: below 1 hour show '37m', below 48 hours show '14h', otherwise show '3d'. The CLI format.rs format_window does the same partition via chrono's num_days and num_hours. The point is to surface the resolution that matters for planning the next action, not an academic timestamp.
If seven_day is at 78 percent can I still be blocked?
Yes, trivially, if five_hour is at 100. The server rate-limits on the first bucket to cross the ceiling. seven_day at 78 tells you nothing about the five_hour curve during a weekday peak hour. This is the single most common 'but I still had quota left' confusion since the tightening. Reading only the weekly bar and assuming it is authoritative is what every pre-tightening guide tells you to do. Post-tightening, it is often wrong.
Does resets_at march forward as time passes or does it stay fixed until the window rolls?
It slides forward. Because the window is rolling, every new message shifts the window's right edge, which shifts resets_at for that bucket. Sampling once a minute is enough to catch the drift. If you sampled once an hour the resets_at you saw at 14:00 would already be stale by 14:05 in a heavy session. A 60-second cadence matches the precision the field is returned at.
Keep reading
What the silent tightening actually changed (and didn't)
Tokens per message stayed identical. Server-side bucket weights did not. Here is what to poll, and how often, to see the change.
The weekly bucket didn't move. The 5-hour one did.
Two samples of /usage before and after 2026-03-26 on a fixed workload. seven_day holds. five_hour climbs 1.4x-2x faster on weekday afternoons.
The rolling window cap is seven windows, not one
Every Window field returned by /usage, with semantics, reset behaviour, and how to surface each one.