April 2026 had three Claude announcements, and only one of them actually moved your plan limits

The month reads as one event. It is three. Mythos Preview on Apr 7 (zero impact on Pro or Max). Opus 4.7 GA on Apr 16 (same denominators, 1.0x to 1.35x faster fill). Postmortem reset on Apr 23 (every subscriber's utilization wiped to ~0.0 in a single tick). After all three, the JSON at /api/organizations/{org_uuid}/usage still has the exact same shape. Here is what changed, and what did not, walked one event at a time.

Matthew Diakonov
11 min read
Direct answer (verified 2026-04-29)

Two new Claude models launched in April 2026: Claude Mythos Preview on Apr 7 (research preview, limited to Project Glasswing partners, not on Pro or Max plans), and Claude Opus 4.7 on Apr 16 (generally available across Pro, Max, Claude Code, the API, Bedrock, Vertex AI, and Foundry; same $5/$25 per million input/output tokens as Opus 4.6). Separately, on Apr 23, Anthropic published a Claude Code postmortem and reset usage limits for every subscriber. Anthropic did not publicly raise Pro or Max rate-limit denominators in April; what changed is fill rate, because Opus 4.7's new tokenizer uses 1.0x to 1.35x as many tokens as 4.6 for the same text.

Sources: anthropic.com/news/claude-opus-4-7, anthropic.com/engineering/april-23-postmortem, and the Mythos Preview announcement on red.anthropic.com.

The April 2026 timeline, one event at a time

Listed in chronological order with the plan-limit impact of each. Most explainers cover one of these. The interesting picture is what they look like end to end.

Anthropic April 2026

1. Apr 7 — Claude Mythos Preview

Anthropic announced its most advanced general-purpose model so far, with strong cybersecurity capabilities. Released as a limited research preview through Project Glasswing to AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. $25 input / $125 output per million tokens, billed under a $100M credit pool. NOT on Pro or Max consumer plans.

Plan-limit impact: zero. The /api/organizations/{org_uuid}/usage endpoint did not gain a Mythos field. Pro and Max accounts cannot send a Mythos request. If you saw your Settings bars move on Apr 7, the cause was your own traffic, not the announcement.
2. Apr 16 — Claude Opus 4.7 GA

Opus 4.7 replaced Opus 4.6 across claude.ai, Claude Code, the API, Bedrock, Vertex AI, and Microsoft Foundry. Same $5/$25 list pricing. New tokenizer with 1.0x to 1.35x token expansion. New xhigh effort level. Adaptive thinking is the only supported thinking-on mode. 1M context window, 128k max output. High-resolution image support up to 2576px / 3.75 MP.

Plan-limit impact: same fields, faster fill. seven_day_opus and five_hour are still the two utilization floats that 429 your Claude Code session at 1.0. The denominators were not publicly raised. The numerator climbs faster because the new tokenizer maps your text to more tokens, and adaptive thinking spends more tokens than the old extended-thinking budgets did at the same effort level. There is no per-version field; 4.7 traffic and 4.6 traffic both write to seven_day_opus.
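The fill-rate change is linear arithmetic. A sketch, with the weekly token denominator as an explicit placeholder (Anthropic does not publish it, so the budget figure below is invented for illustration; only the 1.0x to 1.35x expansion range comes from the docs):

```python
def utilization_increment(tokens_46: int, budget_tokens: int, expansion: float = 1.0) -> float:
    """Utilization a single request adds, if accounting is linear in tokens.

    tokens_46:     what the same text cost under the 4.6 tokenizer
    budget_tokens: the unpublished weekly denominator (placeholder value below)
    expansion:     4.7 tokenizer factor, 1.0 to 1.35 per Anthropic's docs
    """
    return (tokens_46 * expansion) / budget_tokens

# Hypothetical 1,000,000-token weekly budget, one 10,000-token request:
on_46 = utilization_increment(10_000, 1_000_000, expansion=1.0)   # 0.01
on_47 = utilization_increment(10_000, 1_000_000, expansion=1.35)  # ~0.0135
```

Same request, same denominator, up to 35% more numerator: that is the entire mechanism behind "faster fill."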
3. Apr 20 — Claude Code v2.1.116

Anthropic reverted the verbosity instruction added on Apr 16 (the one that told the model to keep text between tool calls to <=25 words). Combined with prior fixes — default effort returned to high on Apr 7, the thinking-history clear bug fixed on Apr 10 — every published quality regression is now resolved.

Plan-limit impact: indirect. With xhigh effort default for Opus 4.7 and verbose interim outputs allowed again, sessions burn more tokens per loop than they did during the regression window. seven_day_opus now climbs at the rate the launch announcement implied, not at the artificially-low rate the verbosity instruction was producing.
4. Apr 23 — Postmortem and full subscriber reset

Anthropic published the Claude Code quality postmortem and, in the same announcement, reset usage limits for every subscriber. Verbatim: 'As of April 23, we're resetting usage limits for all subscribers.' Both Pro and Max. Both Opus and Sonnet weekly floats. The 5-hour shared float and every weekly float dropped to near-zero in a single tick, and resets_at timestamps rolled forward.

Plan-limit impact: maximum. This is the only moment in April 2026 when your plan limits actually changed. Every Window struct's utilization went from whatever it was to ~0.0 within seconds. The new resets_at timestamps mean any wall-clock heuristic you had ('I'm safe until Tuesday at noon') is wrong; the new reset is seven days from the moment Anthropic flipped the switch on your account.

The anchor fact: the struct that catches all three events

ClaudeMeter deserializes the live /api/organizations/{org_uuid}/usage response into the struct below. Notice the field names: not one of them carries a model version. Opus 4.7 traffic, Opus 4.6 traffic, Sonnet 4.6 traffic, and any future model launched on the consumer plan all flow into the same Option<Window> fields.

claude-meter/src/models.rs
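The embedded Rust listing is elided above; the shape it describes can be mirrored in Python. Field names are the payload keys this article lists elsewhere, and the Window internals (a utilization float plus an optional resets_at) come from the surrounding prose; this is a sketch of the shape, not ClaudeMeter's actual code:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Window:
    utilization: float                    # 0.0..1.0; >= 1.0 trips a 429
    resets_at: Optional[datetime] = None  # absolute UTC reset timestamp

@dataclass
class UsageResponse:
    # Note: not one field carries a model version. 4.6 and 4.7 Opus
    # traffic both land in seven_day_opus.
    five_hour: Optional[Window] = None
    seven_day: Optional[Window] = None
    seven_day_sonnet: Optional[Window] = None
    seven_day_opus: Optional[Window] = None
    seven_day_oauth_apps: Optional[Window] = None
    seven_day_omelette: Optional[Window] = None
    seven_day_cowork: Optional[Window] = None
    extra_usage: Optional[dict] = None
```

Missing windows deserialize to None rather than erroring, which is why an account that never had an Opus row still parses cleanly.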

Because of the way this is shaped, every April 2026 release was a no-code-change event for ClaudeMeter. Mythos: not on consumer plans, not in the payload. Opus 4.7: same fields, different fill rate. Apr 23 reset: same fields, fresh values. The schema is a moat by accident. It catches whatever Anthropic ships next.

Per-event impact, side by side

Mythos Preview vs Opus 4.7 GA + Apr 23 reset. Same row, different answers. The reset is the only event that actually moves a number you can read.

| Feature | Mythos Preview (Apr 7) | Opus 4.7 GA + Apr 23 reset |
| --- | --- | --- |
| Affects Pro plan five_hour float | No | No |
| Affects Pro plan seven_day_opus | No | GA: same field, faster fill rate. Reset: dropped to ~0.0 |
| Adds a new field to /api/organizations/{org}/usage | No | No; on Apr 23 only utilization values changed |
| Pro/Max subscribers can use the model | No (Glasswing only) | Yes, immediately |
| Forces a code change in claude-meter | No | No, the struct in src/models.rs already covers it |
| Changes resets_at timestamps | No | GA: no. Reset: yes, every weekly window rolled forward |

Six things in the Opus 4.7 launch that fill seven_day_opus faster

Pulled directly from platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7. Anthropic did not raise the public denominator. They changed the numerator inputs.

New tokenizer (~1.0x to 1.35x)

Per Anthropic's What's New for Opus 4.7: the new tokenizer maps the same text to roughly 1x to 1.35x as many tokens. The server uses this count to update seven_day_opus.

Adaptive thinking by default

Extended-thinking budgets are removed in 4.7. Adaptive thinking decides its own depth and tends to spend more thinking tokens than the same effort level on 4.6.

xhigh effort tier

Did not exist on 4.6. Recommended starting effort for coding and agentic use cases. Claude Code defaulted to xhigh on Opus 4.7 at GA on Apr 16.

Thinking content omitted by default

On 4.7, the thinking field is empty in the response unless you set display: summarized. Tokens are still generated and still billed against seven_day_opus; you just do not see them in your transcript.

High-res image cost

Vision input limit raised to 2576px / 3.75MP. Anthropic's note is explicit: 'High-res images use more tokens.' Inline images on a Pro account move the float more than they did on 4.6.

More regular progress updates

A 4.7 behavior change: the model produces more regular progress updates and fewer subagents by default. Same task, different output shape, different bill.
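Of the six, only the tokenizer range is published as a number; the rest are directional. But the tokenizer alone bounds how much faster the float fills. A sketch (the 100-prompt baseline is an invented example, not a published quota):

```python
def weekly_prompt_headroom(prompts_per_week_on_46: float,
                           expansion: float = 1.35) -> float:
    """With an unchanged denominator, each prompt costs `expansion`x as many
    tokens, so the same weekly budget covers 1/expansion as many prompts.
    `expansion` is 1.0-1.35 per Anthropic's 4.7 tokenizer note; the other
    five changes above (thinking depth, images, ...) only push this lower.
    """
    return prompts_per_week_on_46 / expansion

# If a workload got ~100 Opus prompts/week on 4.6, the tokenizer alone
# bounds 4.7 somewhere between ~74 and 100:
print(round(weekly_prompt_headroom(100, 1.35)))  # 74
print(round(weekly_prompt_headroom(100, 1.0)))   # 100
```

In other words, worst case is roughly a quarter fewer prompts per week from the tokenizer change alone, before any thinking or image effects.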

What feeds seven_day_opus on a 4.7 request

None of these are separate fields on the response. They all collapse into one opaque utilization fraction, which is exactly why the live poll is the only way to keep up with how the April changes hit your account.

Inputs to seven_day_opus.utilization (post Apr 16):

  • Your prompt
  • Adaptive thinking
  • xhigh effort
  • Tool calls
  • High-res images
  • Peak-hour multiplier

All six collapse into the single seven_day_opus.utilization fraction, which drives the Settings 'Opus weekly' bar, the rate limiter (>= 1.0 trips 429), and the ClaudeMeter 7d Opus row.

What the Apr 23 reset looks like in the JSON

One real ClaudeMeter snapshot from before and after the Apr 23 reset. Same account. Same fields. Eleven minutes apart. Notice the resets_at jump on every weekly window, not just the utilization drop.

ClaudeMeter snapshot, Apr 23 09:51 UTC vs 10:02 UTC

The five_hour window also rolled forward, not just the weekly ones. That detail matters for anyone who had built a wall-clock heuristic around their old session start time. After Apr 23, those heuristics were stale until you re-read the timestamps. The shape did not change. The values did.
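The snapshot itself is embedded above. The shape of the diff can be sketched with stand-in values (the 0.91 to 0.02 drop matches the figure this article cites elsewhere; the timestamps and the comparison helper are invented for the example):

```python
import json

# Stand-in before/after payload fragments in the real payload's shape.
before = {"seven_day_opus": {"utilization": 0.91,
                             "resets_at": "2026-04-27T14:00:00Z"}}
after = {"seven_day_opus": {"utilization": 0.02,
                            "resets_at": "2026-04-30T10:02:00Z"}}

def reset_deltas(before: dict, after: dict) -> dict:
    """Per window: how far utilization dropped, and whether resets_at moved."""
    out = {}
    for key in before:
        b, a = before[key], after[key]
        out[key] = {
            "utilization_drop": round(b["utilization"] - a["utilization"], 4),
            "resets_at_moved": b["resets_at"] != a["resets_at"],
        }
    return out

print(json.dumps(reset_deltas(before, after), indent=2))
```

The point of the helper: both dimensions moved at once. A utilization drop without a resets_at jump would be ordinary decay; the reset produced both.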

Reproduce the post-reset state in one curl

Open DevTools on claude.ai/settings/usage, copy the Cookie header from the Network panel. The Referer header is load-bearing; drop it and the endpoint returns 403.

claude.ai/api/organizations/{org_uuid}/usage
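The curl itself is shown above; the same request can be sketched with Python's stdlib if you want to script it. The org UUID and cookie are placeholders you copy from DevTools as described, and the Referer requirement comes from the text above:

```python
import urllib.request

ORG_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder: your org UUID
COOKIE = "sessionKey=..."                          # placeholder: copied from DevTools

req = urllib.request.Request(
    f"https://claude.ai/api/organizations/{ORG_UUID}/usage",
    headers={
        "Cookie": COOKIE,
        # Load-bearing, per the note above: drop it and the endpoint returns 403.
        "Referer": "https://claude.ai/settings/usage",
    },
)
# urllib.request.urlopen(req).read() would return the JSON this article describes.
```

Keep the request read-only and infrequent; once a minute is the cadence the rest of this article assumes.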

Five things to verify on your account after Apr 23

The reset rolled out per-account, not strictly atomic across the fleet. If something on this list is wrong on your account, that is worth knowing.

Post-reset sanity check

  • seven_day_opus.utilization is at or near 0.0 on a fresh poll. If yours still reads above ~0.05 after Apr 23, retry the request after a minute (the reset is rolled out per-account, not strictly atomic across the fleet).
  • seven_day_opus.resets_at is roughly seven days from the moment of your reset, not seven days from your old session start. The new clock supersedes the old one.
  • five_hour.resets_at jumped forward to about five hours from your reset moment. Any 'I'm safe for the next two hours' reasoning from before Apr 23 is invalid; the window restarted.
  • The shape of the JSON did not change. Still seven Option<Window> fields plus extra_usage. If your tracker started erroring on Apr 23, it is not because the schema changed; check your session cookie.
  • seven_day_oauth_apps utilization (if you authorized any third-party Anthropic OAuth apps) was also reset. This is the bucket the Settings page does not render but the rate limiter still checks.
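The checklist above can be scripted against a polled snapshot. A sketch, assuming the payload is already parsed to a dict keyed by the field names this article lists (the function name and thresholds are illustrative, not ClaudeMeter's code):

```python
from datetime import datetime, timedelta

def post_reset_sanity(snapshot: dict, now: datetime) -> list[str]:
    """Return a list of warnings; an empty list means the snapshot looks post-reset."""
    warnings = []
    opus = snapshot.get("seven_day_opus")
    if opus is None:
        warnings.append("no seven_day_opus window in payload")
        return warnings
    if opus["utilization"] > 0.05:
        warnings.append("utilization still above 0.05; retry in a minute")
    resets_at = datetime.fromisoformat(opus["resets_at"].replace("Z", "+00:00"))
    # The new clock should be roughly seven days out from the reset moment.
    if not timedelta(days=6) < resets_at - now <= timedelta(days=7, hours=1):
        warnings.append("resets_at not ~7 days out; old clock may be stale")
    return warnings
```

Run it right after a poll: an empty list is the expected post-Apr-23 state; anything else is one of the bullets above failing.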

The numbers, no inventions

From Anthropic's own announcements and the implementation that reads the endpoint.

2 new Claude models in April 2026
7 Window fields in UsageResponse
35% max token expansion on the 4.7 tokenizer
60s ClaudeMeter poll cadence

Why this matters for Pro and Max users specifically

The Anthropic Opus 4.7 launch post does not say a word about consumer rate limits. The April 23 postmortem reset every subscriber's utilization to ~0.0 but did not raise denominators. So unless you have a strong sense of how the new tokenizer and adaptive-thinking defaults change your specific workload, the only honest answer to “how many Opus prompts do I have left this week” is whatever your seven_day_opus.utilization reads right now.

Local-token tools cannot give you that number. ccusage and Claude-Code-Usage-Monitor read ~/.claude/projects/**/*.jsonl and sum tokens written by the Claude Code client. Three things that file does not contain on Opus 4.7: the server-side 4.7 tokenizer count (your client tokenizes locally with a different tokenizer), the omitted-by-default thinking content (gone unless you pass display: summarized), and any peak-hour weighting Anthropic applies before writing to seven_day_opus. The endpoint that bakes all three into one number is the same endpoint claude.ai/settings/usage renders from.

ClaudeMeter polls that endpoint once a minute through your existing browser session (no cookie paste, no manual sessionKey copy). The Apr 23 reset showed up in the next poll. The Apr 16 fill-rate change showed up the moment you sent your first 4.7 request. Both are local to your account, and both are visible as numbers, not feelings.

The honest caveat

Anthropic does not publish exact per-plan denominators. There is no official table that says “Pro = N Opus prompts per week, Max 5x = 5N, Max 20x = 20N.” All we can say with confidence about April 2026 is what their official posts say: Opus 4.7 list pricing is unchanged, the new tokenizer is up to ~35% larger for the same text, and the postmortem reset every subscriber's utilization. The rate-limiter trips at utilization >= 1.0. The number ClaudeMeter shows you is the post-tokenizer, post-thinking, post-weighting fraction the server computed for your account at poll time. Not a model. The server's number.

Common questions

Which Claude models actually launched in April 2026?

Two. Claude Mythos Preview launched April 7, 2026 as a research preview limited to Project Glasswing partners (AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks). It is not available on Pro or Max plans. Claude Opus 4.7 launched April 16, 2026 as the generally available top-tier model, replacing Opus 4.6 across claude.ai, Claude Code, the API, Bedrock, Vertex AI, and Microsoft Foundry. Same $5/$25 per million input/output tokens as Opus 4.6. A third event on April 23 was not a launch but a postmortem: Anthropic published the Claude Code quality postmortem and reset usage limits for all subscribers.

Did Anthropic raise the Pro or Max plan rate limits when Opus 4.7 launched?

Anthropic's Opus 4.7 launch post on anthropic.com/news/claude-opus-4-7 does not mention rate limits at all. The 'What's new in Claude Opus 4.7' developer doc only addresses tokenizer changes and breaking changes to sampling parameters and extended-thinking budgets. There is no public statement that subscriber denominators changed at the moment 4.7 went GA. The fields you read on /api/organizations/{org}/usage are the same as on April 15 (five_hour, seven_day, seven_day_sonnet, seven_day_opus, seven_day_oauth_apps, seven_day_omelette, seven_day_cowork, plus extra_usage). Only the fill rate changed.

Why did Anthropic reset everyone's usage limits on April 23?

Three product-layer changes between March 4 and April 16 had degraded Claude Code's quality. The April 23 postmortem at anthropic.com/engineering/april-23-postmortem confirmed: a March 4 default-effort change from high to medium (reverted April 7), a March 26 prompt-caching bug that cleared thinking history every turn instead of once (fixed April 10), and an April 16 verbosity instruction that hurt coding quality (reverted April 20). The verbosity test alone produced a 3% quality drop on both Opus 4.6 and 4.7. Resetting subscriber quota was the make-good. The literal text from the postmortem: 'As of April 23, we're resetting usage limits for all subscribers.'

What does the April 23 reset look like in /api/organizations/{org}/usage?

Every Window struct's utilization field returned to zero on the server side, regardless of how high it had been the moment before. The shape did not change: still seven Option<Window> fields plus an extra_usage block. The resets_at timestamp on each window also rolled forward to a fresh 5-hour or 7-day window starting at the reset moment. ClaudeMeter accounts polling the endpoint that day saw five_hour, seven_day, seven_day_opus, and seven_day_sonnet drop simultaneously to a fraction near 0.0 and watched their resets_at timestamps jump forward by hours or days.

Does Mythos Preview show up in my Pro or Max plan usage anywhere?

No. Mythos is research-preview-only via Project Glasswing and is not on the Pro or Max consumer plans. There is no field for it in /api/organizations/{org}/usage. If your account does not have Glasswing access, Mythos traffic cannot exist on your account. Even partners with access pay $25 input / $125 output per million tokens billed under the $100M credits Anthropic committed to Glasswing, not under their Pro or Max subscription. The April 7 announcement does not change a single number you see on claude.ai/settings/usage.

Why does my Opus weekly bar climb faster on 4.7 than it did on 4.6?

Two compounding causes from the Opus 4.7 release notes (platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7). One: the new tokenizer maps the same English text to 'roughly 1x to 1.35x as many tokens' compared to 4.6 (up to about 35% more, varying by content). Server-side accounting uses the new tokenizer's count, so the same prompt fills seven_day_opus faster. Two: 4.7 added an xhigh effort tier that did not exist on 4.6, and adaptive thinking is the only thinking-on mode. Tokens generated by adaptive thinking count as output tokens. Same prompt, same agentic loop, more tokens written to the Opus-only weekly float. There is no per-version field, so you cannot back out the 4.7 vs 4.6 share after the fact.

Did claude-meter need a code change to handle the April 2026 releases?

No. The UsageResponse struct in claude-meter/src/models.rs lines 18-28 declares seven Option<Window> fields plus an ExtraUsage. None of them name a model version. Opus 4.7 traffic, Opus 4.6 traffic, Sonnet 4.6 traffic, and any future model launched on the consumer plan all serialize into the same fields. The extension at extension/popup.js lines 60-63 already optional-chains the model-specific weekly fields, so accounts that never had a seven_day_opus row simply do not render one. The Apr 23 reset also required no code change: utilization went from 0.91 to 0.02 on the same field.

Is Claude Code on Opus 4.7 actually fixed after April 20?

Anthropic shipped Claude Code v2.1.116 on April 20 with the verbosity instruction reverted, and the postmortem (published April 23) lists all three quality issues as resolved. The reasoning effort default is now xhigh on Opus 4.7 and high on every other model. The thinking-history clear bug was fixed April 10. Whether your specific session feels back to normal depends on which of the three changes was hurting your workload most. None of these fixes change which fields gate your account's rate limit.

How do I see when my Opus weekly window resets after the April 23 reset?

The resets_at field on the seven_day_opus Window struct is the absolute UTC timestamp. It is returned as an Option<chrono::DateTime<chrono::Utc>> in claude-meter/src/models.rs line 6. After the Apr 23 reset, the timestamp on every account's seven_day_* fields jumped forward to seven days from the reset moment. ClaudeMeter prints it relative ('resets in 6d 4h') in the menu bar; the raw ISO timestamp is on the localhost bridge at 127.0.0.1:63762/snapshots if you want to script against it.
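The relative rendering is small arithmetic. A sketch of the 'resets in 6d 4h' formatting against the raw ISO timestamp (this is an assumption about how any such display works, not ClaudeMeter's actual code):

```python
from datetime import datetime, timezone

def resets_in(resets_at_iso: str, now: datetime) -> str:
    """Render an absolute UTC resets_at timestamp as a relative countdown."""
    resets_at = datetime.fromisoformat(resets_at_iso.replace("Z", "+00:00"))
    remaining = resets_at - now
    if remaining.total_seconds() <= 0:
        return "resets now"
    days, rem = divmod(int(remaining.total_seconds()), 86_400)
    hours = rem // 3_600
    return f"resets in {days}d {hours}h"

print(resets_in("2026-04-30T10:02:00Z",
                datetime(2026, 4, 24, 5, 40, tzinfo=timezone.utc)))
# prints "resets in 6d 4h"
```

Minutes are truncated rather than rounded here; for a weekly window, hour granularity is all the menu bar needs.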

Should I switch from claude.ai/settings/usage to a tracker now?

If you only check the bar once a day, the Settings page is fine. The reasons people install a tracker after April 2026 are: (1) the 4.7 tokenizer changes how fast the Opus float climbs and you want to recalibrate your sense of 'how much weekly is left', (2) the Apr 23 reset moved everyone's resets_at forward, so the old wall-clock heuristic ('I started Tuesday, I'm safe through Tuesday') no longer matches the server, and (3) Settings does not warn you as you approach a limit; ClaudeMeter polls every 60 seconds and the menu bar pip turns red at 80%.

Want a walkthrough of the eight floats on your account?

15 minutes, screen-shared, your real /api/organizations/{org}/usage payload. No pitch.