Cost per landed PR is quota burn divided by merge commits, not by PRs you opened

Every other guide on this reports a single dollar-per-PR figure, usually pulled from the public API price sheet. The one that actually matters for a team is narrower: quota spent in a window divided by PRs that merged to main in the same window. Abandoned branches, failed CI retries, force-pushed rewrites, and reverts still cost real seven_day_opus. They just do not land. This page shows the formula and the bash pipeline that computes it.

Matthew Diakonov
11 min read
Formula derived directly from the product source; not a secondhand benchmark
Join key: UsageSnapshot.fetched_at (src/models.rs line 65)
Sampling: POLL_MINUTES = 1 (extension/background.js line 3)
Denominator: git log --merges --first-parent main

Why landed is a different number

Most playbooks quote a cost-per-PR by taking every PR you opened and dividing some token sum by that count. That is a fine workload metric. It is a misleading economic metric, because the rate limiter in front of Claude Code does not know or care whether a given session ended in a merge. All four utilization floats on claude.ai/api/organizations/{org_uuid}/usage go up the same amount whether the PR ships to main or gets abandoned the next morning.

Cost per landed PR fixes the denominator. You only count work that actually reached the default branch. Four things land in the gap between PRs opened and PRs landed:

Abandoned branches still burn quota

A branch you started, reran three times, then dropped, still incremented seven_day_opus server-side. The /usage endpoint does not refund.

Failed CI retries are full sessions

Each retry is another Claude Code session. Same context, same tokens, same tool calls. The rate limiter counts each one, even when the PR never merges.

Rebases that nuke work are pure cost

Force-push histories that get overwritten took real Opus work to produce. The git tree forgets, seven_day_opus does not.

Reverts double the effective cost

A PR that merged, broke prod, and got reverted consumed quota twice: once to write it, once to undo it, plus any follow-up fix.
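The size of that gap is easy to quantify: both ratios share the same quota-burn numerator, so cost per landed PR is the naive cost per PR inflated by opened-over-landed. A minimal sketch (function name and the 10/6 week are illustrative, not from any real repo):

```python
def landed_cost_multiplier(prs_opened: int, prs_landed: int) -> float:
    """How much the naive cost-per-PR figure understates cost per landed PR.

    Both ratios share the same quota-burn numerator; only the denominator
    shrinks from opened to landed, so the multiplier is opened / landed.
    """
    if prs_landed == 0:
        raise ValueError("nothing landed in the window; the ratio is undefined")
    return prs_opened / prs_landed

# A 60 percent merge rate (say 10 opened, 6 landed) inflates the naive
# figure by about 1.67x.
print(round(landed_cost_multiplier(10, 6), 2))
```

At a 100 percent merge rate the two metrics coincide; every point of merge rate you lose pushes them apart.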

The formula, explicitly

Four numerators, one denominator. The numerator is what the server already knows. The denominator is what git already knows. The join key is a timestamp.

cost_per_landed_pr.py (pseudo)

The reason this works is that every ClaudeMeter sample is timestamped server-true. The Rust struct that backs the desktop client carries a fetched_at field on every snapshot. That timestamp is what lets you window quota deltas against a git window with the same boundaries.

claude-meter/src/models.rs
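As a sketch of that join, assume each snapshot reaches the pipeline as a dict carrying the ISO-8601 fetched_at string; the helper names and dict shape here are mine, not ClaudeMeter's:

```python
from datetime import datetime, timezone

def _ts(sample: dict) -> datetime:
    # fetched_at arrives as ISO-8601; normalize a trailing Z for fromisoformat.
    return datetime.fromisoformat(sample["fetched_at"].replace("Z", "+00:00"))

def bracketing_samples(samples: list, window_start: datetime, window_end: datetime):
    """Earliest sample at or after window_start, latest at or before window_end."""
    inside = sorted((s for s in samples if window_start <= _ts(s) <= window_end), key=_ts)
    if not inside:
        raise ValueError("no snapshots inside the window")
    return inside[0], inside[-1]
```

With one sample per minute, the bracketing pair sits within a minute of your chosen boundaries, which is tighter than any weekly budgeting question needs.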

The four buckets the burn splits into

One endpoint returns all four fields. The rate limiter checks each one independently. Your cost-per-landed-PR is really four numbers, one per bucket, all with the same merge-count denominator.

seven_day_opus

Opus-only weekly float. Opus 4.7 Claude Code sessions land here. Usually the scarcest bucket for heavy coding weeks.

seven_day_sonnet

Sonnet-only weekly float. Cleanup, test writing, low-stakes edits. Moves slower per session than Opus.

five_hour

Shared rolling window across every model. Hits first on a deep day. Rolls over continuously.

extra_usage.used_credits

Overage dollars burned once the subscription floats hit 1.0. The only field that is genuinely denominated in dollars.
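Burn per bucket is then a per-field subtraction across the bracketing pair. A sketch, with the four field names above flattened into plain dict keys (the flattening is my convenience, not the endpoint's wire shape):

```python
BUCKETS = ("seven_day_opus", "seven_day_sonnet", "five_hour", "extra_usage.used_credits")

def bucket_deltas(start: dict, end: dict) -> dict:
    """Quota burned per bucket between two snapshots.

    Plain subtraction: only valid when no resets_at rollover falls inside
    the window (see the caveats section for the rollover-aware version).
    """
    return {b: round(end[b] - start[b], 4) for b in BUCKETS}
```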

What feeds the numerator

Every session your team runs pushes the floats up. Only some of them become merge commits. The ones that do not are the reason cost-per-PR and cost-per-landed-PR diverge.

inputs to the burn, outputs to the ratio

Inputs: merged PRs, abandoned branches, failed CI retries, force-pushed rewrites, review cycles, reverted PRs
Numerator: 4 weekly burn floats → window quota burn
Denominator: landed-PR count
Cost per landed PR = burn / landed

A bash pipeline that returns the number

This script pulls snapshots from the local bridge, picks the two bracketing samples for your window, subtracts the four utilization fields, counts merges on main, and prints the ratios. It is about forty lines because the hard work is already done by the extension.

cost_per_landed_pr.sh

Running it at the end of a working week gives you one row for the team dashboard. Running it per contributor means filtering git log by --author, which changes the denominator but not the burn, because the quota belongs to the subscription, not the individual.
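If bash is not your team's taste, the same pipeline fits in a short Python sketch. The git invocation below is the exact denominator command from this page; the snapshot dicts are assumed to come from the bridge (fetching is omitted, and the function names are mine):

```python
import subprocess

def landed_count(since: str, until: str, branch: str = "main") -> int:
    """Denominator: merge commits on the default branch inside the window."""
    out = subprocess.run(
        ["git", "log", "--merges", "--first-parent", branch,
         f"--since={since}", f"--until={until}", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(out.split())

def cost_per_landed_pr(start: dict, end: dict, landed: int) -> dict:
    """The weekly numerators over the one shared denominator."""
    if landed == 0:
        raise ValueError("nothing landed; publish the raw burn instead of a ratio")
    return {b: round((end[b] - start[b]) / landed, 4)
            for b in ("seven_day_opus", "seven_day_sonnet")}
```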

What running it looks like

one calendar week, main branch, team of three

A week on a utilization timeline

The sequence diagram below shows what the background poll sees over three days. Two branches are started, one is abandoned, one merges. The weekly burn climbs past both, and only the second one ends up in the landed count.

samples, branches, and merges over a 3-day window

ClaudeMeter extension ↔ claude.ai /usage ↔ your repo

POLL_MINUTES = 1: GET /api/organizations/{org}/usage → { seven_day_opus: 0.21, fetched_at: 2026-04-17T00:00Z }
branch-A opened, two Opus sessions, failed CI, abandoned
next sample → { seven_day_opus: 0.41, fetched_at: 2026-04-18T13:00Z }
branch-B opened, Opus session, MERGED to main
next sample → { seven_day_opus: 0.55, fetched_at: 2026-04-19T09:00Z }
window end: git log --merges --first-parent main → 1 landed PR

Cost per PR vs cost per landed PR

Same numerator source, different denominator. The rows below are what changes when you move from one to the other.

Feature | Cost per PR (the default) | Cost per landed PR (this page)
Denominator | Count of PRs opened | Count of merge commits on main
Charges abandoned branches | No (they are not in the count) | Yes (quota burn stays in numerator, count does not)
Charges failed CI retries | No | Yes
Charges force-pushed rewrites | No | Yes
Charges reverts | Sometimes (depends on strict vs shipped-once) | Configurable, defaults to strict (revert removes the PR from the count)
Source for numerator | Local ~/.claude/projects/**/*.jsonl tokens | claude.ai/api/organizations/{org_uuid}/usage utilization floats
Source for denominator | PR tracker or memory | git log --merges --first-parent <branch>
Unit | Dollars (wrong for subscribers) | Fraction of weekly Opus (what the rate limiter enforces)

The constants the pipeline runs on

Every number on this page is pulled from the product source. None are projected from API list prices.

time between samples: 60 s (POLL_MINUTES = 1)
utilization fields per window: 4
denominator: merges to main
loopback port the bridge listens on: 63762

The five steps, compressed

from install to a number you can publish

  1. Install and warm the bridge

    Add the ClaudeMeter browser extension, log into claude.ai once. The extension starts posting to 127.0.0.1:63762 every 60 seconds.

  2. Pick a window

    One week is the floor because seven_day_opus is a weekly bucket. Four weeks is better for team reporting.

  3. Pull the bracketing samples

    Earliest fetched_at >= window_start and latest fetched_at <= window_end. Subtract the four utilization floats.

  4. Count the landed PRs

    git log --merges --first-parent main --since=... --until=... | wc -l. Filter reverts if you are running the strict view.

  5. Divide, publish, decide

    Four deltas, one denominator. Post the number in your team channel. Decide whether the plan, the workflow, or the merge rate needs to move.
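Step 4's strict view can be automated with a heuristic on merge subjects: GitHub titles revert PRs 'Revert "..."', and each one both fails to count itself and cancels the merge it undid. A sketch (the subject prefix is a convention, not a guarantee; squash-reverts and retitled PRs will slip through):

```python
def strict_landed(merge_subjects: list) -> int:
    """Strict-view denominator: each revert merge subtracts itself and
    the merge it reverted from the landed count."""
    reverts = sum(1 for s in merge_subjects if s.startswith('Revert "'))
    return max(0, len(merge_subjects) - 2 * reverts)
```

Feed it the output of git log --merges --first-parent main --format=%s split on newlines.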

A worked example

Three engineers on Max 20x. One calendar week, Monday 00:00 UTC to the following Monday 00:00 UTC. Across that window the extension produced roughly 10,080 samples (one per minute). The two bracketing samples had these floats:

start: { seven_day_opus: 0.00, seven_day_sonnet: 0.00, five_hour: 0.0, extra_usage.used_credits: 0.0 }

end: { seven_day_opus: 0.63, seven_day_sonnet: 0.22, five_hour: 0.0 (rolled), extra_usage.used_credits: 0.0 }

In the same window, git log --merges --first-parent main returned 9 merge commits. Two additional branches were opened, ran Claude Code sessions, and were deleted without merging. A cost-per-PR view would count those branches in the denominator and report a lower number per PR. The landed view does not:

Opus weekly per landed PR: 7.0%
Sonnet weekly per landed PR: 2.4%
Landed PRs in window: 9
Abandoned branches: 2

Had those two abandoned branches landed, cost-per-landed-PR would have been about 5.7 percent Opus weekly each instead of 7. The gap between 7 and 5.7 is the yield tax: the fraction of weekly Opus the team is paying for output that does not ship.
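The yield-tax arithmetic from this example, spelled out with the week's numbers:

```python
opus_burn = 0.63                  # seven_day_opus delta across the week
landed, abandoned = 9, 2

actual = opus_burn / landed                    # 7.0% of weekly Opus per shipped PR
potential = opus_burn / (landed + abandoned)   # ~5.7% had the two branches landed
yield_tax = actual - potential                 # weekly Opus paid for unshipped work

print(f"actual {actual:.1%}, potential {potential:.1%}, tax {yield_tax:.1%}")
```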

Why local-log tools cannot give you this

ccusage and Claude-Code-Usage-Monitor read ~/.claude/projects/**/*.jsonl and multiply by the public API rate. Two problems for this specific calculation. First, the JSONL does not see server-generated adaptive-thinking tokens or tokenizer expansion on Opus 4.7, so the numerator undercounts. Second, and more importantly, there is no concept of a merge commit in that data. You could layer git on top manually, but the numerator is still wrong because it is local.

ClaudeMeter pulls the server-true utilization floats from the same endpoint the Settings page reads, claude.ai/api/organizations/{org_uuid}/usage. That is the only surface on your machine that reflects what the rate limiter saw. Pair it with git, and the ratio is exact.

Caveats that matter

The /api/organizations/{org_uuid}/usage endpoint is internal to claude.ai and undocumented. Field names have been stable for months but could change. If your window spans a resets_at rollover on seven_day_opus (rare, it is weekly) the simple subtraction undercounts burn; sum per-rollover deltas instead. Rebase-and-merge workflows that disable merge commits need the GitHub API or a PR-number filter in place of git log --merges. Team orgs with multiple seats share a single utilization counter, which is the correct behavior for budgeting but means per-engineer attribution has to come from git authorship, not from the endpoint.
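The rollover caveat is mechanical to handle: with one-minute sampling, a drop between consecutive values of a bucket means resets_at elapsed, and the post-reset value is the burn inside the fresh window. A sketch of that segmented subtraction (it assumes the bucket never decreases for any other reason):

```python
def burn_across_rollovers(series: list) -> float:
    """Total burn for one utilization bucket across possible weekly resets.

    Within a segment, burn is the positive delta between samples; at a
    reset, the new sample's value is what was burned since the rollover.
    """
    burn = 0.0
    for prev, cur in zip(series, series[1:]):
        burn += (cur - prev) if cur >= prev else cur
    return round(burn, 4)
```

Pass it every sample in the window, not just the bracketing pair, so no reset can hide between two points.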

Run the pipeline against your own repo

ClaudeMeter is free and MIT licensed. Install the browser extension, drop the bash script in your repo, and you have a weekly cost-per-landed-PR number. No cookie paste, no API key, no approximations.

Install ClaudeMeter

Want a cost-per-landed-PR dashboard for your team?

I have built this pipeline into a Grafana panel for a handful of Max 20x and Team Premium orgs. 15 minutes to walk through whether it fits yours.

Frequently asked questions

What is the difference between cost per PR and cost per landed PR?

Cost per PR divides your Claude Code spend by every PR you opened. Cost per landed PR divides it only by PRs that actually merged to the default branch. The gap is the quota you burned on abandoned branches, PRs that failed CI three times before you gave up, rebases you threw away, and experiments that never shipped. All of that still incremented seven_day_opus.utilization on the server. None of it ships product. On a team with a 60 percent merge rate, cost per landed PR is roughly 1.67x cost per PR, and that 1.67x is the number that matters for budgeting headcount against a Max 20x or Team Premium plan.

Why can ccusage not compute this on its own?

ccusage reads ~/.claude/projects/**/*.jsonl and reports tokens per session. It has no concept of git, no concept of merge commits, and no concept of the rolling quota buckets Anthropic actually rate-limits against. It also cannot see server-side adaptive-thinking tokens or tokenizer expansion on Opus 4.7. So it can tell you 'this session used 1.2M tokens at $15/1M input', but it cannot tell you 'that session belonged to a PR that never landed, so divide it into the abandoned bucket'. Both halves of the question require data it does not have: git merge state and the server utilization floats that ClaudeMeter pulls from claude.ai/api/organizations/{org_uuid}/usage.

What exactly does ClaudeMeter expose that makes this calculation possible?

Every 60 seconds the extension (see POLL_MINUTES = 1 in claude-meter/extension/background.js line 3) hits claude.ai/api/organizations/{org_uuid}/usage, timestamps the response with fetched_at (UsageSnapshot field at claude-meter/src/models.rs line 65), and posts it to a local bridge at http://127.0.0.1:63762/snapshots. That gives you a time-series of utilization floats. Join that series against git log --merges --format='%H %cI' for the window you care about, and you can compute delta-quota-per-merged-commit directly. No cookie paste. No API key. No approximations.

What counts as a landed PR in this formula?

A merge commit on the default branch (main, master, trunk) inside the window you are measuring. git log --merges --first-parent main returns the list. Squashed merges still show up because GitHub writes a merge commit. Rebase-and-merge workflows are the one edge case: if your team disables merge commits entirely, use git log --first-parent main and filter by PR-number token in the commit message, or pull the merge list from the GitHub API instead. The important thing is the denominator counts work that shipped, not work that opened.

How long a measurement window gives an honest number?

At minimum one full seven_day_opus cycle, because that is the rolling bucket the rate limiter cares about for Opus-heavy Claude Code work. Less than that and you get noise from whichever day you happened to pick. Four weeks is better for teams because it covers holidays, on-call rotations, and a release cadence. The utilization math stays valid at any window length as long as you also pull resets_at from each sample (Window.resets_at in models.rs line 6) and subtract across any rollovers that fell inside the window. ClaudeMeter stores resets_at on every sample, so the subtraction is automatable.

Do reverted PRs count against landed or abandoned?

Your call, and you should be explicit about it in the report. The strict yield view counts a revert as 'didn't land' and removes it from the numerator. The shipped-once view counts it as landed because it did merge. Most teams want the strict view because the quota spent on a reverted PR produced negative value. If you are building an internal dashboard, surface both numbers so a manager can decide which one drives the conversation that week.

What about tokens for code review, not code generation?

Code review sessions still burn the same four utilization floats and still show up in the same fetched_at timestamps. If you want to split them out, label your PRs with a marker in the commit message or branch name (feat/, review/, fix/) and bucket the deltas accordingly. The /api/organizations/{org_uuid}/usage endpoint does not know what the tokens were spent on, it only knows they were spent. The only place that distinction lives is your own tagging conventions.

Is there a quick rule of thumb for estimating cost per landed PR?

Start with your plan's Opus weekly ceiling as 100 percent, sample seven_day_opus.utilization at the start and end of each calendar week, subtract to get weekly burn. Count merges to main that same week. Divide. That gives you a weekly cost-per-landed-PR in the only unit the rate limiter respects: fraction of weekly Opus quota. If you want a dollar projection on top of that, multiply the weekly burn by your plan price for the week. On Max 20x ($200 / month, roughly $50 / week) a weekly burn of 80 percent with 10 landed PRs equals about $4 per landed PR in subscription cost, compared to maybe $25 per landed PR on the API at the same token volume.
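That dollar projection is one multiplication; a sketch with the same illustrative numbers (the plan price and the 80 percent burn are this example's assumptions, not universal constants):

```python
monthly_price = 200              # Max 20x, dollars per month
weekly_price = monthly_price / 4 # rough: about $50 per week
weekly_opus_burn = 0.80          # seven_day_opus delta for the week
landed = 10

dollars_per_landed_pr = weekly_price * weekly_opus_burn / landed
print(f"${dollars_per_landed_pr:.2f} per landed PR in subscription cost")
```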

Does a PR that spent only Sonnet tokens still count in the denominator?

Yes. Landed means merged, regardless of which model did the work. The numerator splits cleanly: Opus-heavy work lives in seven_day_opus, Sonnet work lives in seven_day_sonnet, and both touch five_hour. So a team running 'plan with Opus, implement with Sonnet, review with Opus' will see the same count of landed PRs but different burns across the three utilization floats. The cost-per-landed-PR formula is really four formulas, one per bucket, and all four have the same denominator: merges to main in the window.

How does this change on Max 20x versus Team Premium?

The utilization math does not change. What changes is the denominator baked into each utilization=1.0. Max 20x has roughly 20 times the per-seat ceiling of Pro; Team Premium is sized for several seats on one org. In practice a team that lands 40 PRs a week at 60 percent seven_day_opus burn on Max 20x will see the same 60 percent burn on Team Premium if they do the same work, because the ceiling scales with the plan. Cost per landed PR expressed as percentage of weekly Opus is portable across plans. Cost per landed PR expressed in dollars is not, because the subscription fee changes.

What does the real-time bridge return and how do I read it?

GET http://127.0.0.1:63762/snapshots returns the same JSON shape as claude.ai's server, plus the fetched_at timestamp and a list of orgs you are a member of. It is a plain HTTP endpoint on loopback, so curl + jq is enough to build a bash pipeline. If you want historical data outside the in-memory buffer, either subscribe to the bridge POSTs from the extension (extension/background.js line 2) or run the desktop Rust binary which logs to disk. The endpoint is the same in both cases, only the storage differs.