Codex pricing, decoded

chatgpt.com/codex/pricing, line by line, and the number it never shows you.

OpenAI Codex has no price tag of its own. It rides inside a ChatGPT plan, and the official pricing page lists the plans cleanly enough. What that page leaves out is the part that actually controls your day: a rolling 5-hour window and a weekly limit that the flat monthly fee never names. This is a plain reading of both halves.

Matthew Diakonov
8 min read

Direct answer (verified 2026-05-15)

There is no standalone Codex subscription. Codex usage is bundled into ChatGPT plans. Per developers.openai.com/codex/pricing, the ladder is Free $0/month, Go $8/month, Plus $20/month, Pro from $100/month (with a separate $200/month tier), Business on pay-as-you-go, and Enterprise plus Edu through a sales contact. The $100 Pro tier is 5x Plus usage, the $200 tier is 20x, and a promotion doubles the $100 tier to 10x through May 31, 2026.

$20

ChatGPT Plus, per month. The cheapest plan where Codex is genuinely usable for daily coding.

$100

Pro, lowest tier, per month. 5x Plus usage, doubled to 10x through May 31, 2026.

20x

Plus usage on the $200/month Pro tier, the top consumer level.

The full plan ladder

Here is every plan from the official pricing page, with the Codex allowance attached. The prices are flat monthly fees. The usage column is where the real spread lives.

| Plan | Price | Codex usage |
| --- | --- | --- |
| Free | $0/month | Limited Codex access, runs out quickly inside the 5-hour window |
| Go | $8/month | Cheapest paid entry, more headroom than Free |
| Plus | $20/month | Baseline Codex usage, the 1x reference every multiplier is measured against |
| Pro | From $100/month | 5x Plus usage, doubled to 10x through May 31, 2026 |
| Pro (20x) | $200/month | 20x Plus usage, the top consumer tier |
| Business | Pay-as-you-go | Credit-based, token-metered per workspace |
| Enterprise & Edu | Contact sales | Custom limits and billing |

Plans and prices verified against developers.openai.com/codex/pricing on 2026-05-15. The chatgpt.com/codex/pricing page sits behind a bot wall, so the developer subdomain is the readable mirror of the same ladder.

The pricing page quotes a range, not a number

This is the detail almost every pricing write-up flattens out. When the official page describes the Pro tiers, it does not give you a message count. It gives you a band. The Pro 5x tier is quoted as 80 to 400 local messages on GPT-5.5, and the Pro 20x tier as 300 to 1600. Each band spans roughly a factor of five, and the page is explicit about why: usage varies by model and task complexity.

So the honest reading of “5x more usage” is not “a fixed bigger number”. It is “a wider band, and where you land inside it depends on what you actually do.” A handful of short prompts on a cheaper model puts you near 400. One long agentic refactor on GPT-5.5, with the model re-reading files and chaining tool calls, drops you toward 80. The pricing page cannot resolve that for you. It is a menu, not a meter.
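That spread can be sketched as arithmetic. The credit rates below are the published GPT-5.5 figures (125 / 12.50 / 750 credits per million input / cached-input / output tokens); the per-window credit budget and the two task profiles are hypothetical round numbers, chosen only to show how the same budget yields the 80-to-400 shape of the band:

```python
# Why "5x more usage" is a band: the same credit budget buys very
# different message counts depending on tokens per message.
GPT55 = {"input": 125, "cached": 12.50, "output": 750}  # credits per 1M tokens

def credits_for(profile, rates):
    """Credits one message consumes, given token counts per message."""
    return sum(profile[k] / 1_000_000 * rates[k] for k in rates)

# Hypothetical per-message token profiles (not official figures):
short_prompt = {"input": 2_000, "cached": 0, "output": 1_000}        # quick question
agentic_task = {"input": 20_000, "cached": 80_000, "output": 2_000}  # long refactor step

budget = 400  # hypothetical credits available in one window

for name, profile in [("short prompt", short_prompt), ("agentic task", agentic_task)]:
    per_msg = credits_for(profile, GPT55)
    print(f"{name}: {per_msg:.2f} credits/message -> ~{budget / per_msg:.0f} messages")
```

With these made-up profiles the short prompt lands at ~400 messages and the agentic task at ~80, a 5x spread from task mix alone.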

That is the structural reason a flat monthly fee feels unpredictable. You paid one clean number. You then live inside a range that the number does not pin down, gated by a rolling 5-hour window and a weekly limit on top.

What chatgpt.com/codex/pricing spells out

  • Plan prices: Free $0, Go $8, Plus $20, Pro from $100, Business pay-as-you-go
  • Usage multipliers: Pro 5x and Pro 20x relative to Plus
  • Per-million-token credit rates for each model
  • That a 5-hour rolling window exists, with weekly limits on top

What it leaves you to find out the hard way

  • Where you sit in the current 5-hour window right now
  • How close the weekly limit is before it walls a long task
  • Which model and task mix pushed you to the low end of the range
  • When your credits run out at the current burn rate

The April 2, 2026 change: from messages to tokens

On April 2, 2026, OpenAI moved Codex billing off per-message pricing and onto API-style token rates. Consumption is now measured as credits per million input tokens, cached input tokens, and output tokens. Credits stayed the unit you buy and spend, but what draws them down changed: it is now token volume, the same way an API bill works. The shift applied to new and existing Plus, Pro, and Business workspaces, with Enterprise migrating on its own schedule.

The pricing page publishes the credit rate for each model. The two most relevant ones for daily Codex work look like this:

| Model | Input | Cached input | Output |
| --- | --- | --- | --- |
| GPT-5.5 | 125 credits | 12.50 credits | 750 credits |
| GPT-5.4 | 62.50 credits | 6.25 credits | 375 credits |

Rates are credits per one million tokens, verified on developers.openai.com/codex/pricing on 2026-05-15. GPT-5.4 is roughly half the credit cost of GPT-5.5 per token, which is why model choice inside a session moves your effective spend as much as message count does.

Once your included plan usage is spent, further work draws down purchased credits at these rates. That is the mechanism that keeps a long session running past the plan limit instead of hard-stopping. The trade is that your monthly cost is no longer fixed. It is the flat fee plus however many credits the work above the line consumed, and the pricing page does not show you that running total.
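The drawdown itself is straight multiplication. Using the credit rates from the table above (credits per one million tokens), a sketch of what one session costs on each model; the session token counts are made up for illustration:

```python
# Credit cost of a session at the published per-million-token rates.
RATES = {
    "gpt-5.5": {"input": 125.0, "cached": 12.50, "output": 750.0},
    "gpt-5.4": {"input": 62.50, "cached": 6.25, "output": 375.0},
}

def session_credits(model, input_toks, cached_toks, output_toks):
    r = RATES[model]
    return (input_toks * r["input"]
            + cached_toks * r["cached"]
            + output_toks * r["output"]) / 1_000_000

# The same hypothetical session priced on both models:
toks = dict(input_toks=500_000, cached_toks=2_000_000, output_toks=100_000)
print(session_credits("gpt-5.5", **toks))  # 162.5 credits
print(session_credits("gpt-5.4", **toks))  # 81.25 credits
```

Every GPT-5.4 rate is exactly half the GPT-5.5 rate, so the same session costs exactly half the credits, which is why model choice moves spend as much as message count does.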

Codex pricing and Claude pricing are now the same shape

If you are pricing out Codex, you are almost certainly weighing it against Claude Code, or already running both. Worth knowing: after the April change, the two pricing models are close to isomorphic. A flat plan, a rolling 5-hour window, a weekly limit, and a metered overage path. The prices line up too.

| Feature | Claude Code (Anthropic Pro / Max) | OpenAI Codex (ChatGPT plans) |
| --- | --- | --- |
| Consumer entry price | $20/month (Claude Pro) | $20/month (ChatGPT Plus), Codex bundled in |
| Top consumer tier | $200/month Max, highest consumer tier | $200/month Pro, 20x Plus usage |
| Short rolling window | Rolling 5-hour window | 5-hour window, shared by local messages and cloud tasks |
| Longer window | Weekly quota layered on top of the 5-hour window | Weekly limits layered on top of the 5-hour window |
| Overage model | Extra usage, billed in dollars | Credits, billed per million tokens by model |
| Where the live meter sits | claude.ai/settings/usage, a web page you refresh by hand | /status inside the Codex CLI |

The interesting row is the last one. Both tools gate you the same way, but they put the live readout in different places. Codex prints a usage summary inside its own CLI when you run /status. Claude does not print anything comparable during an active agent loop. The only place to see your real Claude position is claude.ai/settings/usage, a web page you have to remember to open and refresh. So in a two-tool setup, the Codex side has a meter and the Claude side goes dark.

Why this matters for the Claude side of your bill

claude-meter exists for exactly that dark side. It is a free, open-source macOS menu bar app and browser extension that reads your Claude Pro or Max usage and shows it live. It does not track Codex, and it makes no attempt to. Codex already has /status. What was missing was the matching readout on the Anthropic side.

The browser extension makes one HTTPS request per minute to claude.ai using the session cookies your browser already holds, then pushes the snapshot to the menu bar app over a localhost bridge. No cookie paste, no telemetry, no separate login. The numbers come from the same usage endpoint that backs claude.ai/settings/usage, so they match the page exactly. There is also a CLI.
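The shape of that loop is worth seeing, even in miniature. This is a sketch only of the mechanism the paragraph describes, poll once a minute and push to a localhost bridge; the endpoint path, bridge port, and JSON field names are hypothetical, since Anthropic does not document them:

```python
# Poll-and-push loop sketch. USAGE_URL, BRIDGE_URL, and the payload keys
# are HYPOTHETICAL placeholders; only the loop's shape comes from the text.
import json, time, urllib.request

USAGE_URL = "https://claude.ai/api/usage"      # hypothetical endpoint path
BRIDGE_URL = "http://127.0.0.1:8765/snapshot"  # hypothetical local bridge

def parse_snapshot(raw: dict) -> dict:
    """Reduce a raw usage payload to the fields a menu bar meter needs."""
    return {
        "five_hour_pct": raw.get("five_hour_used_pct", 0),  # hypothetical keys
        "weekly_pct": raw.get("weekly_used_pct", 0),
        "fetched_at": raw.get("ts", int(time.time())),
    }

def poll_once(cookie_header: str) -> dict:
    """One authenticated GET, reusing the browser's session cookie."""
    req = urllib.request.Request(USAGE_URL, headers={"Cookie": cookie_header})
    with urllib.request.urlopen(req) as resp:
        return parse_snapshot(json.load(resp))

def push(snapshot: dict) -> None:
    """Hand the snapshot to the menu bar app over localhost."""
    data = json.dumps(snapshot).encode()
    req = urllib.request.Request(BRIDGE_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# One request per minute, as the article describes:
# while True:
#     push(poll_once(session_cookie))
#     time.sleep(60)
```

The design point is that the cookie never leaves the machine: the only outbound request is the one your browser would make anyway, and everything else stays on localhost.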

claude-meter, the Claude side of the meter

That is the readout the Claude settings page only offers behind a manual refresh. Pair it with Codex's /status and both of your flat-rate plans finally have a live meter, so you see a window filling up before it walls a task rather than after. If you run both agents in rotation, the token-juggling guide walks through the exact handoff that asymmetry makes possible.

The short version

Codex pricing is plan pricing. You pick a ChatGPT plan, $20 for Plus up to $200 for the top Pro tier, and Codex comes with it. The April 2 change turned the overage into token-metered credits. None of that is hard to read off the official page.

The part the page does not give you is your position. A 5x plan is a wider range, not a fixed number, and the 5-hour and weekly windows decide when a long task stops. That is true on the Codex side, where /status at least shows you, and on the Claude side, where nothing does until you install something that reads it. The sticker price is the easy half. The meter is the half worth watching.

Running Codex and Claude Code side by side?

Book 20 minutes and we will set up claude-meter against your real claude.ai session, so the Anthropic side of your two-tool stack has a live meter that matches the Codex /status readout.

Frequently asked questions

How much does OpenAI Codex cost on its own?

Nothing on its own, because there is no standalone Codex subscription. Codex usage is bundled into ChatGPT plans. The plan ladder on developers.openai.com/codex/pricing is Free at $0/month, Go at $8/month, Plus at $20/month, Pro from $100/month (with a $200/month tier on top), Business on pay-as-you-go, and Enterprise plus Edu through a sales contact. You pick a ChatGPT plan and Codex comes with it. There is no line item that says 'Codex' by itself.

What is the difference between the $100 and $200 Pro tiers?

Both are called Pro. The $100/month tier is 5x Plus usage, the $200/month tier is 20x Plus usage. The official page also runs a promotion: it doubles the $100 tier to 10x Plus usage through May 31, 2026, so until that date the $100 tier behaves like a 10x plan. After the promo ends it reverts to 5x. The $200 tier carries a 20x level on an ongoing basis.

What changed in Codex pricing on April 2, 2026?

OpenAI moved Codex billing onto API-style token rates instead of per-message pricing. Consumption is now measured as credits per million input tokens, cached input tokens, and output tokens. Credits stay the unit you buy and spend. The change applied to new and existing Plus, Pro, and Business workspaces, with Enterprise migrating on its own schedule. Practically, it means your Codex usage is now metered the way an API bill is metered, even though you are still on a flat-rate subscription.

Why does the pricing page give a message range instead of one number?

Because the number is genuinely a range. developers.openai.com/codex/pricing quotes the Pro 5x tier as 80 to 400 local messages on GPT-5.5, and the Pro 20x tier as 300 to 1600. That span is five times wide because, in the page's own words, usage varies by model and task complexity. A short question on a cheaper model lands you near the top of the range. A long agentic task on GPT-5.5 lands you near the bottom. The pricing page cannot tell you which end you will hit. Only watching your actual consumption can.

What are credits in Codex pricing, and when do I start spending them?

Credits are the pricing unit you purchase and consume. Your plan comes with included usage. Once you cross the included usage limit for your plan, further work draws down purchased credits, priced per million tokens by model. The credit system is what lets a long session keep running past the plan limit instead of hard-stopping. The catch is that the pricing page shows you the credit rate per model but not your live credit balance against your current burn rate.
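That two-pool mechanic reduces to a few lines. A minimal sketch, assuming a hypothetical included allowance and purchased balance (the real numbers vary by plan and are not shown on the pricing page):

```python
# Included plan usage absorbs credits first; purchased credits take the rest.
def draw_down(session_credits, included_left, purchased_balance):
    """Return (included_left, purchased_balance) after one session's spend."""
    from_included = min(session_credits, included_left)
    from_purchased = session_credits - from_included
    return included_left - from_included, purchased_balance - from_purchased

state = (50.0, 200.0)            # hypothetical: 50 included, 200 purchased
state = draw_down(30.0, *state)  # still inside the included allowance
state = draw_down(45.0, *state)  # crosses the line: 25 credits spill to purchased
print(state)
```

This is why a long session keeps running instead of hard-stopping: crossing the included limit mid-task just switches which pool the next token draws from.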

Does Codex have a free tier?

Yes. The Free plan at $0/month is on the pricing ladder and includes limited Codex usage. Go at $8/month sits above it as the cheapest paid entry. For sustained daily coding, OpenAI positions Plus at $20/month and the Pro tiers as the realistic options, since Free and Go run out of headroom quickly inside the 5-hour window.

How does Codex pricing compare to Claude Code pricing?

Structurally they are almost the same shape now. Both sell a flat-rate plan, both meter you with a rolling 5-hour window plus a weekly limit, and both fall through to metered overage once the plan allowance is spent. Codex calls its overage credits priced per million tokens. Anthropic calls its overage extra usage priced in dollars. The entry prices match too: Plus is $20/month, Claude Pro is $20/month, and both have a $200/month top consumer tier. The one practical difference is where the live meter sits, covered below.

Does claude-meter track my Codex usage?

No. claude-meter reads the Anthropic side only. It shows your Claude Pro or Max rolling 5-hour window, weekly quota, and extra-usage balance in the macOS menu bar, pulled from the same usage endpoint claude.ai/settings/usage renders. It does not touch your OpenAI account and does not poll Codex. Codex already prints its own usage summary inside the CLI with /status. The reason claude-meter exists is that Claude has no equivalent live readout during an agent loop, so the Claude side of a two-tool setup is the side that goes dark.
