What a Claude Code PR actually costs is a delta on seven_day_opus.utilization, not a dollar amount
Every cost guide on this topic converts your local token log into a dollar figure. That number answers a question only API users are asking. If you are on Pro, Max, Team, or Enterprise, you already paid a flat fee, and what a PR actually spends is a fraction of your rolling windows. This page is about how to measure that fraction, because the rate limiter in front of Claude Code cares about it and Stripe does not.
The one-paragraph version
On any Claude Code subscription (Pro, Max 5x, Max 20x, Team Premium, Enterprise) you do not pay per PR in dollars. You pay a flat monthly fee. What a PR costs you is a fraction of four rolling buckets on claude.ai/api/organizations/{org_uuid}/usage: five_hour, seven_day_opus, seven_day_sonnet, and extra_usage. The only honest cost-per-PR is the delta on those four fields between your first and last commit. That delta is what the rate limiter checks. Anything else, including every "PR cost about $0.47" estimate you will see online, is a dollar projection from your local JSONL against public API rates and has no relationship to your actual spend on a subscription plan.
The formula, in one code block
Two snapshots of the same endpoint, subtracted field by field. That is the whole mechanism:
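A minimal jq sketch of that subtraction, using the snapshot numbers from the worked example further down this page; the payload shape is assumed from the field names cited here:

```shell
# Snapshot A at the first commit, snapshot B after the last one.
A='{"five_hour":{"utilization":0.41},"seven_day_opus":{"utilization":0.62},
    "seven_day_sonnet":{"utilization":0.11},"extra_usage":{"used_credits":0}}'
B='{"five_hour":{"utilization":0.58},"seven_day_opus":{"utilization":0.66},
    "seven_day_sonnet":{"utilization":0.19},"extra_usage":{"used_credits":0}}'

# Subtract field by field: these four deltas are the PR's cost on a subscription.
jq -n --argjson A "$A" --argjson B "$B" '{
  five_hour_delta: ($B.five_hour.utilization        - $A.five_hour.utilization),
  opus_delta:      ($B.seven_day_opus.utilization   - $A.seven_day_opus.utilization),
  sonnet_delta:    ($B.seven_day_sonnet.utilization - $A.seven_day_sonnet.utilization),
  credits_delta:   ($B.extra_usage.used_credits     - $A.extra_usage.used_credits) }'
```

In practice the two JSON values come from the local bridge rather than inline literals; the subtraction is identical.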
The four deltas answer different questions. The Opus weekly delta is usually what you care about because it is the scarcest bucket for most Claude Code users in 2026. The five-hour delta tells you whether this PR pushed you toward a same-day 429. The Sonnet weekly delta captures any cleanup / test-writing work you shunted to Sonnet. The overage credits delta only moves if one of the subscription floats hit 1.0 mid-session.
The anchor fact: 60-second server-truth polling
For the delta to be accurate, the sampling cadence has to be tight enough that a typical PR is covered by more than one sample. The extension is configured for that at POLL_MINUTES = 1 on line 3 of claude-meter/extension/background.js. Every minute it hits the same /api/organizations/{org}/usage endpoint the Settings page reads, posts the JSON to a local bridge, and updates the menu-bar badge:
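A hedged sketch of that loop. POLL_MINUTES = 1, the /usage endpoint, and the bridge port are the values this page cites; badgeText, BRIDGE_URL, and the chrome.* wiring are illustrative assumptions, not the extension's actual code:

```javascript
const POLL_MINUTES = 1; // matches line 3 of extension/background.js
const BRIDGE_URL = "http://127.0.0.1:63762/snapshots"; // local bridge (name assumed)

// Pure helper (assumed): reduce a /usage payload to the menu-bar badge string,
// showing the scarcest of the three subscription floats as a percentage.
function badgeText(usage) {
  const worst = Math.max(
    usage.five_hour.utilization,
    usage.seven_day_sonnet.utilization,
    usage.seven_day_opus.utilization
  );
  return Math.round(worst * 100) + "%";
}

// One poll: fetch server truth, forward it to the local bridge, paint the badge.
async function pollOnce(orgUuid) {
  const res = await fetch(`https://claude.ai/api/organizations/${orgUuid}/usage`);
  const usage = await res.json();
  await fetch(BRIDGE_URL, { method: "POST", body: JSON.stringify(usage) });
  chrome.action.setBadgeText({ text: badgeText(usage) });
}

// Scheduled once at extension startup:
// setInterval(() => pollOnce(ORG_UUID), POLL_MINUTES * 60 * 1000);
```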
And here is the full struct the desktop client deserializes the response into. Every field that can appear in a cost-per-PR delta is named here. There is no seven_day_4.7 bucket. Opus 4.7 and 4.6 both write to seven_day_opus:
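A sketch of that struct, mirroring the field names this page attributes to claude-meter/src/models.rs; the derives, exact types, and comments are assumptions (the real client derives serde::Deserialize on these types):

```rust
/// One rolling bucket: a utilization fraction plus its rollover timestamp.
pub struct Window {
    pub utilization: f64,          // 0.0..=1.0; the rate limiter enforces < 1.0
    pub resets_at: Option<String>, // timestamp of the next rollover
}

/// Overage credits, consumed only after a subscription float hits 1.0.
pub struct ExtraUsage {
    pub utilization: f64,
    pub used_credits: f64,
}

/// The full /api/organizations/{org_uuid}/usage payload.
pub struct UsageResponse {
    pub five_hour: Window,        // shared 5-hour rolling window
    pub seven_day: Window,        // overall weekly window
    pub seven_day_sonnet: Window, // Sonnet-only weekly
    pub seven_day_opus: Window,   // Opus-only weekly; 4.6 and 4.7 both land here
    pub extra_usage: ExtraUsage,  // overage credits
}
```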
Dollar estimate vs utilization delta
Here are the two ways to report "what this PR cost." One is the number ccusage, Claude-Code-Usage-Monitor, and most blog posts return. The other is the number Anthropic's rate limiter actually checks.
Sum every token in ~/.claude/projects/**/*.jsonl for the branch lifetime. Multiply input tokens by $15/1M (Opus 4.7 input). Multiply output tokens by $75/1M (Opus 4.7 output). Report the result as 'this PR cost $X.XX'.
- Ignores you already paid flat for the subscription
- Misses adaptive-thinking tokens the server kept hidden
- Misses the 1.0x to 1.35x tokenizer expansion on 4.7
- Misses peak-hour multipliers and plan denominators
- Gives a number that has nothing to do with your 429 risk
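For contrast, the dollar projection those bullets critique is only a few lines. A sketch under assumed JSONL field names and the rates quoted above (real Claude Code logs carry a richer schema than this):

```shell
# Two fake records standing in for ~/.claude/projects/**/*.jsonl entries.
jsonl='{"usage":{"input_tokens":12000,"output_tokens":3000}}
{"usage":{"input_tokens":8000,"output_tokens":1000}}'

# Sum tokens, multiply by the public Opus 4.7 API rates ($15/1M in, $75/1M out).
cost=$(printf '%s\n' "$jsonl" | jq -s '
  (map(.usage.input_tokens)  | add) as $in  |
  (map(.usage.output_tokens) | add) as $out |
  ($in * 15 / 1000000) + ($out * 75 / 1000000)')

echo "this PR cost \$$cost"  # the number this page argues answers the wrong question
```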
What the browser does while you work the PR
Three actors: your git working copy, the Anthropic server, and the ClaudeMeter extension polling in the background. Each request moves the floats. Each sample captures where they landed.
PR lifetime on Claude Code, as utilization floats
Measuring one PR from the command line
ClaudeMeter exposes the live snapshot on a local bridge at http://127.0.0.1:63762/snapshots (same port the extension posts to). Two curl calls plus jq, one at the start of the branch and one at the end, get you the four deltas:
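A sketch of those calls; the bridge URL and field names are the ones this page cites, while the helper names and file paths are illustrative:

```shell
# snap <outfile>: save the current bridge payload.
snap() {
  curl -s http://127.0.0.1:63762/snapshots > "$1"
}

# deltas <fileA> <fileB>: the four field-by-field utilization deltas.
deltas() {
  jq -s '.[0] as $A | .[1] as $B | {
    five_hour: ($B.five_hour.utilization        - $A.five_hour.utilization),
    opus:      ($B.seven_day_opus.utilization   - $A.seven_day_opus.utilization),
    sonnet:    ($B.seven_day_sonnet.utilization - $A.seven_day_sonnet.utilization),
    credits:   ($B.extra_usage.used_credits     - $A.extra_usage.used_credits) }' \
    "$1" "$2"
}

# At the first commit:    snap /tmp/pr-start.json
# After the last commit:  snap /tmp/pr-end.json
# The PR's cost:          deltas /tmp/pr-start.json /tmp/pr-end.json
```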
Per-commit logging with a post-commit hook
If you want a dense sampling across the PR without remembering to snapshot, drop this in .git/hooks/post-commit for the repo. Every commit writes a TSV line with the branch, the short SHA, and the four fractions at that moment. Diff the first and last rows for a branch when you merge:
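A sketch of that hook, assuming the bridge port this page documents; the TSV path and helper names are illustrative. Save it as .git/hooks/post-commit and mark it executable:

```shell
#!/bin/sh
# Log one TSV row per commit: branch, short SHA, and the four fractions.
LOG="${CLAUDE_PR_LOG:-$HOME/.claude-pr-cost.tsv}"  # log path is an assumption

# row <branch> <sha> <snapshot-json>: format one TSV line.
row() {
  printf '%s\t%s\t%s\n' "$1" "$2" "$(printf '%s' "$3" | jq -r '
    [.five_hour.utilization, .seven_day_opus.utilization,
     .seven_day_sonnet.utilization, .extra_usage.used_credits] | @tsv')"
}

main() {
  # Skip silently when the bridge is not running, so commits never fail.
  snap=$(curl -s --max-time 2 http://127.0.0.1:63762/snapshots) || return 0
  row "$(git rev-parse --abbrev-ref HEAD)" "$(git rev-parse --short HEAD)" \
      "$snap" >> "$LOG"
}
main
```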
Why local-log tools cannot give you this number
ccusage and Claude-Code-Usage-Monitor both work off ~/.claude/projects/**/*.jsonl. They sum tokens, multiply by the public API rate, and return a dollar figure. If you are on the API, that number is the truth. If you are on a subscription, it answers the wrong question.
Three reasons the local-token approach undercounts relative to what the rate limiter saw. First, Opus 4.7's tokenizer expands the same text into 1.0x to 1.35x as many tokens as 4.6, and the expansion happens server-side, after the JSONL is written. Second, Opus 4.7 runs adaptive thinking with display: omitted by default, so thinking content is generated server-side and billed against seven_day_opus but not always logged locally in full. Third, Anthropic applies peak-hour multipliers and plan-specific denominators before writing to the float, and those are server-private.
All three land on the four numbers in the /usage payload. That is the only place on your machine where the rate-limiter-honest cost-per-PR is computable.
Everything that flows into your PR's utilization delta
None of these are fields in the response. They all collapse into the four utilization fractions, which is why the delta is the only honest measurement.
inputs to the cost of one PR
The numbers this page is built on
All four come straight from the product source; none are invented benchmarks or projected dollar figures.
How to run the measurement, step by step
measuring cost_per_pr on a subscription
First commit: take snapshot A
Either click the ClaudeMeter menu bar, or curl the local bridge. Record five_hour.utilization and seven_day_opus.utilization. Put them in the PR description for future-you.
Work the PR (Claude Code runs, commits land)
The extension keeps sampling every minute in the background. You do not have to do anything. Each sample hits /api/organizations/{org_uuid}/usage and the floats drift upward in real time.
Last commit: take snapshot B
Same two fields, again. If you ran the post-commit hook you already have a sample at every commit and can point at the tightest before and after.
Subtract to get the true PR cost
opus_delta = B.seven_day_opus.utilization - A.seven_day_opus.utilization. five_hour_delta similarly. Those two numbers are what the PR cost you on this plan, full stop. Multiply by the weekly budget (implicit in utilization=1.0) for a plan-adjusted picture.
If a sample crossed a reset, flag it
five_hour.resets_at rolls roughly every 5 hours. If it fell inside your PR window the simple subtraction understates cost. ClaudeMeter stores resets_at on every Window, so the post-commit hook can detect it and mark the delta incomplete.
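That check can be scripted against two saved snapshots; a sketch assuming only the five_hour.resets_at field named above (the helper name is illustrative):

```shell
# reset_crossed <fileA> <fileB>: succeed (exit 0) when five_hour.resets_at
# changed between the two snapshots, i.e. a rollover fell inside the PR.
reset_crossed() {
  a=$(jq -r '.five_hour.resets_at' "$1")
  b=$(jq -r '.five_hour.resets_at' "$2")
  [ "$a" != "$b" ]
}

# Usage in the post-commit hook or a merge script:
#   if reset_crossed /tmp/pr-start.json /tmp/pr-end.json; then
#     echo "delta incomplete: five_hour reset during this PR"
#   fi
```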
Preconditions for an accurate delta
before you trust the cost_per_pr number
- You sampled seven_day_opus.utilization once before the session that becomes the PR started, and once after the last commit. The delta is the PR cost on the Opus weekly float.
- You sampled five_hour.utilization at the same two points. This delta is what the PR cost against the shared rolling window every Claude Code request touches.
- Neither sample crossed a resets_at rollover. If the five_hour resets_at fell inside your PR, you subtracted across a reset; report the delta as 'incomplete' instead of treating it as the PR's true cost.
- You noted which model ran. Opus work hits seven_day_opus + five_hour; Sonnet work hits seven_day_sonnet + five_hour; tool calls and attachments count against whatever model was active when the call ran.
- You recorded extra_usage.used_credits in both snapshots. If either subscription float was >= 1.0 during the PR, the overage delta is part of the true cost.
Side by side: which cost is the cost?
Both answer something real. Only one answers the question subscribers are actually asking when they look up "cost per PR."
| Feature | Local log + API rate | ClaudeMeter (utilization delta) |
|---|---|---|
| What the source of truth is | ~/.claude/projects/**/*.jsonl (local log) | claude.ai/api/organizations/{org_uuid}/usage (server) |
| Matches what the rate limiter enforces | No, converts via published API rate | Yes, same JSON Settings renders |
| Correct unit for subscribers | Dollars (wrong, you paid flat) | Fraction of rolling window (correct) |
| Includes server-side tokenizer expansion | No, pre-tokenizer count | Yes, post-tokenizer applied |
| Includes hidden adaptive-thinking tokens | No, not written to JSONL in full | Yes, already in utilization |
| Needs you to be logged into claude.ai | No | Yes |
| Works offline | Yes | No |
| Usable on API-only accounts | Yes, this is its best fit | No, subscriptions only |
A worked example in numbers you can verify
Pick a branch where Claude Code did most of the work. Before the first commit, the extension recorded (from the local bridge at 127.0.0.1:63762):
{ five_hour: 0.41, seven_day_opus: 0.62, seven_day_sonnet: 0.11, extra_usage.used_credits: 0.0 }
You wrote specs, ran Claude Code on Opus for planning, switched to Sonnet for the implementation, merged. Last commit snapshot:
{ five_hour: 0.58, seven_day_opus: 0.66, seven_day_sonnet: 0.19, extra_usage.used_credits: 0.0 }
Subtract field by field: five_hour 0.58 - 0.41 = 0.17; seven_day_opus 0.66 - 0.62 = 0.04; seven_day_sonnet 0.19 - 0.11 = 0.08; extra_usage.used_credits 0.0 - 0.0 = 0.0.
"This PR cost me 4 percent of my Opus weekly, 8 percent of my Sonnet weekly, and 17 percent of my 5-hour rolling window, with no overage." That is a sentence the rate limiter agrees with. A dollar figure from ccusage would have said "$2.40," and neither you nor Anthropic would charge that.
The honest caveat
The /api/organizations/{org_uuid}/usage endpoint is internal to claude.ai and undocumented. Field names have been stable for many months and are named the same in the Settings page's own JavaScript, but Anthropic could rename, split, or remove them at any release. ClaudeMeter deserializes into a strict Rust struct, so if the shape shifts the macOS app surfaces a parse error and the project patches it. If your plan is API-only and you pay per million tokens, the dollar-per-PR view is the correct one; ccusage is the right tool for that, and this page does not replace it. The delta view is specifically for the Claude Code Pro, Max, Team, and Enterprise subscriber who wants to know what a PR costs them against their actual plan, not against a public API price sheet.
Measure your own cost-per-PR live
ClaudeMeter sits in your macOS menu bar and refreshes the four utilization floats every 60 seconds. Free, MIT licensed, no cookie paste, reads the same JSON the Settings page reads.
Install ClaudeMeter
Frequently asked questions
How do I actually measure what one PR costs on Claude Code?
On a Pro or Max subscription, open claude.ai/settings/usage (or run ClaudeMeter) right before you start the session that becomes your PR, and record seven_day_opus.utilization and five_hour.utilization. Do the same right after the last commit. The delta on seven_day_opus is what the PR cost you against the Opus-only weekly ceiling. The delta on five_hour is what it cost against the shared 5-hour ceiling. You are not looking for dollars. You are looking for two fractions, both of which have to stay under 1.0 for your next Claude Code request to succeed.
Why does ccusage give me a dollar amount that feels wrong?
ccusage reads ~/.claude/projects/**/*.jsonl and multiplies the tokens it finds by the public API list price. That number is correct for API usage, where Anthropic actually bills you in dollars per million tokens. Subscribers do not pay that. You pay a flat $20, $100, or $200 a month. The 'dollar cost' your PR rang up is genuinely $0 beyond your subscription fee. The resource it consumed is a percentage of your rolling windows. That is what can run out mid-PR, and that is what ccusage has no way to read.
What exact endpoint does ClaudeMeter hit to read the before/after floats?
GET https://claude.ai/api/organizations/{your_org_uuid}/usage, the same endpoint the Settings page calls. The browser extension calls it at claude-meter/extension/background.js line 24 and polls every minute (POLL_MINUTES = 1 on line 3). The desktop Rust client calls the same URL at claude-meter/src/api.rs line 19. Both deserialize into the struct declared at claude-meter/src/models.rs line 19 to 28, so five_hour, seven_day, seven_day_sonnet, and seven_day_opus land in the same response.
Is a 60-second poll really fast enough to measure a PR?
For a realistic PR, yes. The two floats that matter on Opus 4.7 both update server-side within seconds of the request landing, and the resets_at rollover is either 5 hours out or days out. A PR that spans 20 minutes of Claude Code time gets sampled roughly 20 times by the extension. The math only breaks if your session crosses a five_hour resets_at, in which case the 5-hour float snaps back to 0 mid-session. seven_day_opus resets weekly so that almost never happens inside one PR.
I use extra_usage credits to overflow. Does that show up in the delta?
It shows up in a different field, not in seven_day_opus. Look at extra_usage.utilization in the same payload (field declared at claude-meter/src/models.rs line 10 to 16). Once your subscription floats hit 1.0, requests start burning overage credits instead of the weekly bucket, so seven_day_opus stops climbing and extra_usage.used_credits starts climbing. For an end-to-end PR cost on a subscriber who flipped to overage mid-session, you need three deltas: seven_day_opus, five_hour, and extra_usage.used_credits.
Does a PR that uses only Sonnet touch seven_day_opus at all?
No. Sonnet requests flow into five_hour (shared) and seven_day_sonnet (Sonnet-only weekly). seven_day_opus is untouched. That is why 'switch the small edits to Sonnet' is the most common cheap optimization, and why the cost of a PR depends heavily on which model ran which step. A PR with 10 percent Opus planning and 90 percent Sonnet cleanup can be cheaper on the Opus weekly float than a single deep Opus session even if the Sonnet run logged more tokens in ccusage.
What about thinking tokens, do they count toward the PR's cost?
Yes, and they are invisible to ccusage. Opus 4.7 runs adaptive thinking by default, with display set to omitted. Thinking content is generated server-side and billed against seven_day_opus, but it is not always written to your local JSONL in full. So the delta you read off the server endpoint is larger than the token sum ccusage would report for the same session. Both are real numbers, they just answer different questions: ccusage reports what your model output; the endpoint reports what your plan's rate limiter saw.
How much of seven_day_opus does one PR typically burn?
There is no honest universal answer because it depends on model mix, plan, effort level, and how agentic the session was. What the product gives you is the ability to measure your own number. Run ClaudeMeter during a single PR, note the delta, and you have your personal cost-per-PR on that plan. Published benchmarks that quote 'about 3 percent per PR' or 'Max 20x gets 100 PRs a week' are converting messages to floats through guesses; the delta on your own account is the only number the rate limiter cares about.
Can I script this so every PR auto-logs its utilization delta?
Yes. The extension posts every snapshot to a local bridge at http://127.0.0.1:63762/snapshots (extension/background.js line 2). If you run the desktop binary, it exposes the same data on the same loopback port. A git hook on post-commit that curls the bridge and appends seven_day_opus.utilization to a log file gives you a per-commit trail, and diffing the first and last sample for a branch gives you cost-per-PR directly. The endpoint returns JSON so the hook is about four lines of shell.
Does this work for the Team or Enterprise plans?
Yes, because the endpoint is per-organization. On a Team or Enterprise org, /api/organizations/{org_uuid}/usage still returns the same four utilization floats under the org you are authed into. Different plans just change the denominator baked into each utilization fraction. ClaudeMeter reads the fraction verbatim, so whatever plan you are on, the delta math is the same.
What happens to my PR if five_hour hits 1.0 halfway through?
The next Claude Code request returns 429, the session pauses, and the delta you measured up to that point is the partial cost of the PR so far. When five_hour rolls over (see resets_at in the same payload), the float drops and you can resume. The seven_day_opus float does not reset at that time, so the resume cost keeps accumulating on the weekly bucket. This is the main reason subscribers care about the delta, not about dollars: a PR that rang a 429 mid-session does not cost more money, but it did cost you real minutes of blocked Claude Code time.
Keep reading
Claude Code Opus 4.7 usage limits: the two server floats that gate you
The seven_day_opus field is the Opus-only weekly float. Where it lives, how the 4.7 tokenizer fills it faster, and how to read it live.
Claude Pro's 5-hour window is one float on a sliding clock
The 5-hour bucket is not a 45-message counter. It is a utilization fraction on a rolling clock, returned on the same /usage endpoint.
ClaudeMeter vs ccusage
ccusage sums local tokens and multiplies by public API rates. ClaudeMeter reads the four server-side utilization floats. Different questions, different answers.
Want a cost-per-PR dashboard wired into CI?
I have built the post-commit hook plus a Grafana panel for teams on Max 20x and Team Premium. 15 minutes to walk through whether it fits.
Book a 15-minute call →