Claude Code /goal: why it burns the weekly quota faster
/goal sets a condition and lets Claude Code run unattended, turn after turn, until a small evaluator model says the condition is met. Anthropic doubled the 5-hour cap in May 2026 and left the weekly cap unchanged, so the wall you hit is seven_day_oauth_apps, and a /goal session hits it earlier in the week than interactive use ever did.
Direct answer (verified 2026-05-11)
Yes, /goal burns the weekly quota faster than interactive use. /goal removes the per-turn human checkpoint: after every turn a small evaluator model (Haiku by default) returns yes/no on whether the completion condition is met, and “no” fires another full main-model turn automatically. Anthropic's May 2026 rate-limit doubling moved the 5-hour cap, not the weekly cap, so the weekly bucket seven_day_oauth_apps is now the binding constraint and a /goal session can land it days earlier than the same workload run interactively. Source: Anthropic's official docs at code.claude.com/docs/en/goal.
What /goal actually does
The slash command is small. You give it a condition. Claude Code keeps running until the condition is met. Underneath, per Anthropic's docs, it is a session-scoped wrapper around a prompt-based Stop hook: every turn the evaluator model receives the conversation and the condition and returns yes or no plus a short reason. “No” starts the next main-model turn with the reason as guidance.
One goal per session, condition up to 4,000 characters. ◎ /goal active is the status indicator. /goal with no arg shows turns and tokens spent on the active goal.
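The docs describe the shape of the loop, not the code. A minimal sketch of that shape, where `run_main_turn` and `evaluate` are hypothetical stand-ins for a full main-model turn and the small evaluator call, not real APIs:

```python
def run_goal(condition, evaluate, run_main_turn, max_turns=100):
    """Sketch of the /goal loop as the docs describe it.

    `run_main_turn` stands in for one full main-model turn (the
    expensive part). `evaluate` stands in for the small evaluator
    model (Haiku by default): it sees the conversation plus the
    condition and returns (met, reason). Both names are assumptions.
    """
    conversation = []
    guidance = None
    for turn in range(1, max_turns + 1):
        conversation.append(run_main_turn(guidance))   # full main-model cost
        met, reason = evaluate(conversation, condition)  # tiny evaluator cost
        if met:
            return turn   # "yes" clears the goal and returns control
        guidance = reason  # "no": the reason seeds the next turn
    raise RuntimeError("goal never met within max_turns")
```

Every "no" is one more full main-model turn; the human checkpoint that would normally sit between iterations is gone.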
The line in the docs that hides the burn
Anthropic's docs are explicit about evaluator cost. They are silent about main-model cost. That is the gap the weekly bucket falls into.
A negligible evaluator plus an Opus main model, fired turn after turn until the evaluator says yes, is not a small bill. It is the entire weekly bucket if you are not watching.
Interactive vs. /goal: where the weekly burn comes from
Same workload, two execution shapes. The shape is what changes the weekly bill.
Where seven_day_oauth_apps comes from in each shape

Interactive: you read the diff, you decide what to ask next, you accept or reject the next tool call. Each turn has a free human checkpoint. If the model goes off the rails, you stop it on turn 3, not turn 30. Weekly burn is naturally throttled by your reading speed.
- One human read between every main-model call.
- Easy to abandon a bad path after a few turns.
- seven_day_oauth_apps creeps up at the rate you can review.

/goal: the evaluator is the only checkpoint, and it only answers the question you typed. A "no" fires the next main-model turn at model latency, not reading latency.
- No human read between main-model calls.
- A bad path runs until the evaluator notices or your stop clause fires.
- seven_day_oauth_apps climbs at whatever rate the loop can sustain.
The May 2026 doubling, and why it makes /goal worse, not better
On May 7, 2026 Anthropic doubled the rolling 5-hour cap on Sonnet and Opus for Pro and Max accounts. The weekly cap was not doubled. /goal sessions that previously crashed into the 5-hour wall and were forced to wait now keep running and keep charging the same unchanged weekly bucket.
| Feature | /goal session | Interactive Claude Code |
|---|---|---|
| Per-turn human checkpoint | No (evaluator decides) | Yes (you decide what to ask next) |
| 5-hour cap (doubled May 2026) | Generous; rarely the binding constraint | Generous; rarely the binding constraint |
| Weekly cap (unchanged May 2026) | Throttled by the evaluator's stop signal | Throttled by your reading speed |
| Bucket that walls Claude Code | seven_day_oauth_apps (same, sooner) | seven_day_oauth_apps |
| Cost of a bad path | Full Opus turns until the evaluator notices | Seconds of your reading time |
| Live visibility into weekly burn | /usage breaks the goal; meter is the only honest option | /usage costs a turn; meter is ambient |
The bucket to watch (and what it looks like at 93%)
The bucket is seven_day_oauth_apps (line 24 of src/models.rs). Web chat charges only the all-up seven_day bucket; OAuth-token clients like Claude Code charge both seven_day and seven_day_oauth_apps. Your account can sit at 58% on the all-up bar and 93% on the OAuth-apps bar at the same time.
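The payload shape of the usage endpoint is not publicly documented; assuming it maps bucket names to utilization percentages, the 58/93 split looks like this:

```python
# Field names below are an assumption about the usage endpoint's payload;
# only the bucket names and the 58/93 example come from the article.
usage = {
    "seven_day": 58.0,             # all-up weekly bar the web UI emphasizes
    "seven_day_oauth_apps": 93.0,  # the bucket Claude Code walls on
}

def binding_bucket(buckets):
    """The bucket closest to its cap is the one that walls you first."""
    return max(buckets, key=buckets.get)

print(binding_bucket(usage))  # → seven_day_oauth_apps
```

The point of the sketch: the bar the web UI shows you is not the bar the CLI walls on.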
Five guards that keep /goal from eating the week
None of these turn /goal off. They put a ceiling on how much of the weekly bucket a single goal can spend.
Bounding /goal in five steps
Set a turn budget in the condition itself
/goal accepts up to 4,000 characters. Spend some of them on a stop clause. 'The test suite passes and the diff is committed, OR you have made 12 attempts.' The evaluator counts attempts and returns yes when the budget is hit.
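What the OR clause buys you, as a sketch (the function is illustrative; the real enforcement happens inside the evaluator model, not in code you write):

```python
def budgeted_condition_met(tests_pass, attempts, budget=12):
    """Mirror of the stop clause the evaluator is asked to enforce:
    the goal counts as met when the real condition holds OR the
    attempt budget is spent. Worst case is `budget` main-model
    turns, not the whole weekly bucket."""
    return tests_pass or attempts >= budget
```

Without the clause, a condition the evaluator keeps answering "no" to has no ceiling at all.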
Route the main model to Sonnet, not Opus
Opus turns are the heaviest line item against seven_day_opus and seven_day_oauth_apps. If the goal is mechanical (fix the failing test, regenerate types, run the migration), Sonnet finishes in fewer turns of cheaper tokens. Switch with --model sonnet on the claude command before the goal fires.
Watch seven_day_oauth_apps in the menu bar, not in /usage
Typing /usage inside the conversation interrupts the goal. ClaudeMeter polls the same /api/organizations/{uuid}/usage endpoint every 60 seconds outside the loop, so you watch the bucket tick up while the goal keeps working.
Set a personal abort line at 80% on the OAuth bucket
ClaudeMeter tightens its polling when any bucket crosses 80% utilization (HIGH_UTIL_FAST_POLL = 80.0 in src/bin/menubar.rs line 108). When the popover badge flips, /goal clear before the next Opus turn fires. You will leave a few percent on the table; you keep the rest of the week.
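The 80% line is arbitrary headroom, and you can reason about how much it buys. A rough estimate, assuming a steady per-turn burn rate (an assumption; real turns vary in size):

```python
import math

ABORT_LINE = 80.0  # personal abort line on seven_day_oauth_apps

def turns_until_wall(current_pct, pct_per_turn):
    """Rough main-model turns left before 100% on the bucket,
    assuming each turn costs a steady number of percentage points."""
    if pct_per_turn <= 0:
        return math.inf
    return math.floor((100.0 - current_pct) / pct_per_turn)
```

At 93% with a goal burning one point per turn, seven turns remain; aborting at 80% instead leaves twenty — that is the headroom the abort line buys.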
Run a small test goal before the big one
First /goal of a session: 'Print the current git status.' Confirm the evaluator behaves and the meter ticks the way you expect. Then set the real condition. The Anthropic docs recommend the same warm-up; the small goal costs almost nothing and tells you whether the loop is going to spiral.
FAQ
Does Claude Code's /goal feature burn the weekly quota faster than interactive use?
Yes. /goal sets a completion condition; after every turn a small evaluator model (Haiku by default) returns yes/no on whether the condition is met, and 'no' fires another main-model turn automatically. There is no per-turn human checkpoint, so the main model can run dozens of turns before you look up. Anthropic doubled the 5-hour cap in May 2026 (announced May 7, 2026) but left the weekly cap unchanged, so the weekly bucket is now the binding constraint for /goal sessions on Claude Code. The evaluator tokens themselves are 'typically negligible' per the docs at code.claude.com/docs/en/goal; the burn is the main model running unattended.
Which weekly bucket does a /goal session actually wall on?
seven_day_oauth_apps, almost always. Claude Code authenticates via OAuth, so its requests count against the OAuth-apps weekly bucket on top of the all-up seven_day aggregate. Your account can sit at seven_day = 58% and seven_day_oauth_apps = 93% at the same time; the web UI on claude.ai/settings/usage emphasizes the all-up bar, the CLI walls on the OAuth one. The field is declared on line 24 of src/models.rs in the open-source ClaudeMeter repo.
How is /goal implemented internally? Why does it cost so many turns?
Per the official docs, /goal is a session-scoped wrapper around a prompt-based Stop hook. The condition you typed is passed to a small fast evaluator model (defaults to Haiku) along with the conversation; the evaluator returns yes/no plus a short reason. 'No' starts another main-model turn with the evaluator's reason as guidance. 'Yes' clears the goal and returns control. The cost shape: one full main-model turn per evaluator-no, plus one tiny evaluator call per turn. With Opus as the main model, every wrong evaluator-no costs you full Opus pricing against seven_day_opus and seven_day_oauth_apps.
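The cost shape reduces to simple arithmetic. The per-turn token figures below are illustrative placeholders, not real pricing or real context sizes:

```python
def goal_cost(turns, main_tokens_per_turn=50_000, eval_tokens_per_turn=500):
    """One full main-model turn plus one tiny evaluator call per loop
    iteration. The default token figures are made-up illustrations."""
    return {
        "main": turns * main_tokens_per_turn,
        "evaluator": turns * eval_tokens_per_turn,
    }

cost = goal_cost(turns=30)
print(cost["evaluator"] / cost["main"])  # evaluator share stays tiny: 0.01
```

The evaluator line item stays proportionally tiny no matter how long the loop runs; the main-model line item scales linearly with every "no".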
Anthropic doubled the rate limits in May 2026. Doesn't that fix /goal?
It fixes the 5-hour wall, not the weekly wall. The May 7, 2026 announcement doubled the rolling 5-hour Sonnet and Opus budgets for Pro and Max accounts. The weekly cap was left at the level Anthropic tightened to in 2026; it was not doubled. /goal sessions that previously hit the 5-hour wall and forced a 5-hour cooldown now keep running, which means more main-model turns charge against the same unchanged weekly bucket. Net effect: /goal users now hit the weekly wall earlier in the week than they did before the doubling, not later.
How do I see the weekly burn live without typing /usage and breaking the goal?
Run an ambient meter outside Claude Code. ClaudeMeter (github.com/m13v/claude-meter, MIT) polls the same /api/organizations/{uuid}/usage endpoint claude.ai/settings/usage hits, every 60 seconds (extension/background.js line 3, const POLL_MINUTES = 1), and renders the six weekly buckets plus the 5-hour bucket in the macOS menu bar. The /goal loop never sees the call. When seven_day_oauth_apps crosses 80% the popover badge changes and you can /goal clear before the next Opus turn fires. ccusage cannot do this; it sums tokens from local Claude Code JSONL files but cannot see Anthropic's server-enforced weekly utilization.
What is the cheapest guard I can put on a /goal so it does not eat the week?
Bound the condition itself. /goal accepts up to 4,000 characters; spend some of them on an OR clause: 'The test suite passes and the diff is committed, OR you have attempted this 12 times.' The evaluator counts attempts and returns yes once the cap is hit. This costs nothing extra and stops the worst failure mode (a bad path that the evaluator keeps saying no to forever). Pair it with --model sonnet on the main model unless the goal genuinely needs Opus reasoning.
Does /goal cost anything even when the evaluator returns yes immediately?
Yes, a small amount. Even when the main model finishes the goal in one turn, the evaluator still runs once on the resulting conversation. That call is small (Haiku-tier) and typically negligible per the docs. The cost shape only becomes significant when the evaluator returns no repeatedly and the main model fires repeatedly. Watch seven_day_oauth_apps; if it ticks up by more than 1-2 percentage points per minute during a goal, the loop is spiralling and the guard you set in the condition (or your manual /goal clear) is what saves the rest of the week.
What happens to the goal across --resume or --continue?
The goal survives. Per the docs, /goal is session-scoped but persists across --resume and --continue, though the turn and token counters reset to zero on resume. The completion condition is the same; the next time you resume the session the evaluator immediately checks against it and either fires another main-model turn or clears the goal. This means a goal you forgot about can fire fresh main-model turns the next time you resume the session and resume burning seven_day_oauth_apps. /goal clear before suspending a session you do not plan to come back to.
Does ClaudeMeter actually distinguish a /goal session from a normal one?
Not by name. Anthropic's /api/organizations/{uuid}/usage endpoint reports utilization per bucket, not per session or per slash command. ClaudeMeter shows you the bucket utilization regardless of how the load got there. In practice, a /goal session shows up as seven_day_oauth_apps climbing faster than your reading speed; the rate of change is the signal. If you watch the popover for 60 seconds and the OAuth bucket jumped 2-3 percentage points, that is a /goal-shaped burn.
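The rate-of-change signal is easy to compute from two meter readings. A sketch, with the sample format and the 2 pp/min threshold as illustrative choices:

```python
def looks_like_goal_burn(samples, threshold_pp_per_min=2.0):
    """samples: (seconds, seven_day_oauth_apps_pct) readings from an
    ambient meter. True when the bucket climbs faster than a human
    review pace — the /goal-shaped signal described above."""
    (t0, p0), (t1, p1) = samples[0], samples[-1]
    minutes = (t1 - t0) / 60.0
    if minutes <= 0:
        return False
    return (p1 - p0) / minutes >= threshold_pp_per_min
```

A 3-point jump over a 60-second window trips the signal; interactive use, throttled by reading speed, rarely does.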
Set up the meter before the next /goal session
Fifteen minutes: install ClaudeMeter, walk the buckets together, set your personal abort line on seven_day_oauth_apps so the weekly wall stops being a surprise.
Related guides
The Claude Code weekly quota wall: what the CLI hides
When the wall fires, Claude Code prints a generic 'rate_limit_error: ... limit reached' string. The OAuth token in your Keychain plus the right endpoint will tell you which bucket actually walled you.
Claude Code 5-hour limit doubled, weekly cap unchanged (May 2026)
The May 7, 2026 doubling moved the 5-hour cap and left the weekly cap alone. Why /goal users now hit the weekly wall earlier in the week, not later.
Claude Code weekly quota meter: the meter you watch instead of typing /usage
/usage costs a turn and prints stale data. A meter polls the same endpoint outside Claude Code on a 60-second tick, so the loop never sees it.