Where should you get your AI from?
Compare the real cost of local hardware, pay-per-token APIs, and ChatGPT/Claude/Gemini subscriptions. 34+ AI models ranked by quality-adjusted tokens per dollar.
Data from Artificial Analysis · Arena AI · OpenRouter · Desktop Commander · open source
One number. Three categories.
Not all tokens are equal — a token from a smarter model is worth more than a token from a weaker one. To capture that, we multiply raw token volume by a quality score and divide by cost.
Quality is a z-score-normalized blend of three public benchmarks: Arena text ELO (human preference on general tasks), Arena code ELO (human preference on coding), and Artificial Analysis Intelligence Index (composite of 10 academic evals — MMLU-Pro, GPQA, LiveCodeBench, and others). The result is expressed as a 0–100 percentage.
The result: quality-adjusted tokens per dollar — one metric that puts a $3,500 GPU, a $20/month plan, and a $0.07/M API on the same axis.
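A minimal sketch of that headline metric, in Python (the example numbers are hypothetical, not taken from the dataset):

```python
def quality_adjusted_tokens_per_dollar(tokens_per_month: float,
                                        quality_pct: float,
                                        dollars_per_month: float) -> float:
    """Raw token volume, weighted by the 0-100 quality score, per dollar spent."""
    return tokens_per_month * (quality_pct / 100) / dollars_per_month

# Hypothetical plan: 100M tokens/month from a quality-60 model for $20/month
print(quality_adjusted_tokens_per_dollar(100e6, 60, 20))  # 3,000,000 quality tokens per $
```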
🖥️ Local Hardware
hardware cost ($) → quality tokens / $
One-time hardware cost amortized over 3 years. Throughput (tok/s) comes from real-world measurements via Desktop Commander telemetry (actual user sessions across many hardware combinations). ⚠️ Does not yet include electricity costs (typically $5–$60/month depending on hardware and usage).
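A rough sketch of how the local-hardware side of that axis can be computed; the daily utilization and the example numbers are assumptions for illustration, and electricity is left out as noted above:

```python
def local_value(hardware_price: float, tok_per_sec: float, quality_pct: float,
                hours_per_day: float = 8.0, amortization_years: float = 3.0) -> float:
    """Quality tokens per dollar for local hardware amortized over its lifetime.

    hours_per_day is an assumed utilization figure, not a measured one.
    """
    monthly_cost = hardware_price / (amortization_years * 12)   # amortized $/month
    monthly_tokens = tok_per_sec * 3600 * hours_per_day * 30    # tokens generated per month
    return monthly_tokens * (quality_pct / 100) / monthly_cost

# Hypothetical: a $3,500 GPU running a quality-50 model at 25 tok/s
print(f"{local_value(3500, 25, 50):,.0f} quality tokens per $")
```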
💳 Subscription
$ / month → quality tokens / $
⚠️ Token limits are empirically measured — providers don't publish exact numbers.
🔌 API (pay per token)
price/M (input% in + output% out) → quality tokens / $
Adjustable input/output ratio — defaults to 75/25 for general use. Coding workloads are ~90/10 (long context, short output), chat is ~50/50, RAG is ~95/5.
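A sketch of the blended-price calculation behind that ratio slider (the model pricing and quality below are placeholders):

```python
def api_value(input_price_per_m: float, output_price_per_m: float,
              quality_pct: float, input_ratio: float = 0.75) -> float:
    """Quality tokens per dollar for a pay-per-token API at a given input/output mix."""
    blended_price = input_ratio * input_price_per_m + (1 - input_ratio) * output_price_per_m
    return (1_000_000 / blended_price) * (quality_pct / 100)

# Hypothetical model at $2.50/M input, $15/M output, quality 60:
print(api_value(2.50, 15.00, 60, input_ratio=0.90))  # coding mix (90/10)
print(api_value(2.50, 15.00, 60, input_ratio=0.50))  # chat mix (50/50)
```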
Quality scores: Arena ELO + AA Intelligence Index — the two benchmarks that remain comparable across model generations. See the Benchmarks section for why other benchmarks can't fairly compare across eras.
Pick the right AI for your use case
Different tasks reward different models. Coding benchmarks don't match writing benchmarks, and a model that crushes academic reasoning might not be the best local option for a 24GB GPU. Here's a quick guide based on the data below — click any link to jump to the filtered view.
💻 Best LLM for coding
Coding tasks are typically input-heavy — you send a lot of context, the model writes a short diff. Use the 90/10 coding I/O ratio to see which models give you the most coding tokens per dollar. Claude Sonnet and Opus dominate Arena's code ELO leaderboard, but open models like Qwen3-Coder and GPT-OSS offer better local value if you have the VRAM. For subscriptions, ChatGPT Business at $30/seat/month crushes the value ranking — measured at ~60M tokens/week.
✍️ Best AI for writing and copywriting
Writing tasks lean output-heavy — short prompts, long generations. For subscriptions, this tips toward plans with generous output allowances; for APIs, it favors models with low output-token pricing. GPT-5 Chat and Claude Opus rank highest on Arena's text leaderboard for general writing quality. If cost is the priority, Gemini Flash and GLM-4.7 Flash offer strong quality at API rates below $1 per million tokens.
🖥️ Best local LLM for your GPU
If you're running models locally, the question isn't just which model is best — it's which combination of model, hardware, and quantization. Our data includes real tokens/second measurements from Desktop Commander users across 36 hardware configurations. An RTX 3090 at Q4_K_M runs Llama 3.1 70B around 25 tok/s; an M3 Max with 128GB unified memory handles Qwen3.5 35B A3B at 14+ tok/s. The best local value right now is Qwen3.5 35B A3B (Reasoning) — high quality, modest VRAM, strong throughput.
🤔 ChatGPT vs Claude — which is better value in 2026?
They're close on quality — Claude Opus 4.7 and GPT-5.5 trade the top spot depending on benchmark. Value differs more by plan than by model. ChatGPT Plus ($20/mo) measures ~190M tokens/week on GPT-5.4 via Codex (multi-flip method, supersedes our earlier 13M figure which used single-flip extrapolation). Claude Pro ($20/mo) measures ~15.6M tokens/week on Opus 4.7 — our first direct Pro measurement, roughly 3× our earlier estimate. Claude Max 20× ($200/mo) measures ~388M tokens/week on Sonnet 4.6 or ~248M on Opus 4.7 — the highest quotas we've seen on a personal plan, but you need to use it heavily to get the value. For light users, Plus wins; for heavy coders, Max 20× wins; for teams, ChatGPT Business at $30/seat is the sleeper pick.
💳 ChatGPT Plus vs Pro vs Business — tokens per dollar
ChatGPT Plus ($20/mo) measures ~190M tokens/week on GPT-5.4 (multi-flip method, Apr 24). ChatGPT Business ($30/seat/month) measures ~60M tokens/week on GPT-5.4 — but that number comes from our older single-flip method (Apr 15) and hasn't been re-measured with the newer multi-flip approach yet, so direct Plus-vs-Business comparisons should be treated with caution. ChatGPT Pro ($200/mo) is listed at 66.5M/week on OpenAI's Codex pricing page, not directly measured. The most defensible single finding: if you're a heavy individual user, Plus at $20 delivers genuinely large capacity; if you need guaranteed per-seat quotas for a team, Business is sized for that use case. Pro's raw tokens/$ ratio is worse than either.
⚡ Claude Max 5× vs 20× — is the upgrade worth it?
We measured Claude Max 20× at ~388M tokens/week on Sonnet 4.6 and ~248M tokens/week on Opus 4.7 via Claude Code — Opus has a tighter per-model sub-quota, so it delivers about 64% of what Sonnet gives on the same plan. Both numbers are from multi-flip runs on Apr 24, 2026 (replacing our earlier Sonnet 4.5 single-flip figure of 203M). The 5× plan is estimated at ~97M/week (Sonnet) or ~62M/week (Opus) by ratio (5/20 × measured 20×), not directly measured. At $100 vs $200/month, Max 5× has the better raw tokens/$ ratio if you won't hit the cap. Max 20× wins only if you're doing sustained heavy work — Claude Code all day, multi-agent workflows, or running Opus on large contexts. For most users, Max 5× is the sweet spot; for power users, 20× removes the rate-limit friction.
Use this with your AI agent
This site is a website, a dataset, and a skill in one repo. You can read it yourself, or point your AI agent at it — the same data and logic, just delegated. Your agent picks up an installable skill that knows how to fetch live numbers and reason about plans.
🧭 Get recommendations from your agent
Install the ai-value-advisor skill once. Then ask your agent things like "which AI should I pay for this month?" or "is Claude Max 20× worth it for me?" — it fetches live data from this site, considers your usage and budget, and recommends with caveats. Works with Claude Code, Cursor, Codex, Copilot, Windsurf, and 30+ other agents via the skills CLI.
npx skills add desktop-commander/best-value-ai
📏 Contribute a measurement
Most subscription plans don't publish their real quotas. The submit-usage-measurement skill in this repo can run a standardized benchmark on your Claude Code or Codex CLI, capture the actual tokens you get, and open a pull request. If you have a plan we haven't measured yet (ChatGPT Team, Claude Max 5×, an edu/student discount…), this is the easiest way to add it to the dataset.
Explore the numbers
🏆 Ranking — quality-adjusted tokens per dollar
Subscription: assumes you use 100% of your weekly quota. If you only use half, your real value is half what's shown.
API: pay-per-token, no assumptions. What you spend is what you get.
Token limits are empirically measured — providers don't publish exact numbers.
All options ranked by quality-adjusted tokens per dollar
⚔️ Compare — any two options side-by-side
📈 Timeline — how value has changed over time
Value over time by provider
Subscription series only includes plans we've directly measured (ChatGPT Plus, ChatGPT Business, Claude Pro, Claude Max 20×). Plans we haven't measured — ChatGPT Pro, Claude Max 5×, Gemini Advanced — appear in the main rank list with a 📐 est. badge but aren't plotted here.
Subscription tokens over time
How many tokens each plan gives you per day, based on our empirical measurements.
Not all benchmarks age equally.
Comparing GPT-3.5 to GPT-5.4 using benchmark scores sounds simple. It isn't. The tests used to measure models in 2023 are mostly useless today — either saturated or discontinued. This matters for any long-term value comparison.
Saturation
MMLU (2020) and HumanEval (2021) were rigorous tests when introduced. Today GPT-4 scores 87% on MMLU and GPT-5 scores ~90%. A 3% gap in a benchmark where the ceiling is 100% tells you almost nothing. The benchmark is broken as a signal, not the models.
Different tests, different eras
SWE-bench Verified launched in 2024. Aider Polyglot in 2024. GPQA Diamond in 2023. Models from 2022 were never measured on these. You can't compare GPT-3.5's MMLU score to GPT-5's SWE-bench score — they're measuring different things with different scales.
Training contamination
Benchmarks become worthless once labs train on them. Questions leak into pretraining data, and scores stop reflecting real capability. This is why new benchmarks are invented constantly — and why scores from 2022 are especially suspect.
Two signals have been collected continuously since 2023 using the same methodology. They measure different things, but together they give a consistent cross-generation quality score.
Humans pick which model response is better in blind head-to-head comparisons. The ELO rating system means GPT-3.5 and GPT-5.4 are measured on the exact same scale — not by what questions they answered, but by how humans prefer their outputs relative to each other.
Artificial Analysis runs their own evaluations on every major model using consistent infrastructure and aggregates them into a single 0–100 composite. Unlike leaderboard scores that depend on who submitted, AA re-runs everything themselves on the same hardware.
Both scores are z-score normalized: (score − mean) / std × 15 + 50, centering each at 50 on its own distribution. This ensures Arena ELO and AA Intelligence contribute equally to the average — without normalization, Arena's larger numbers would dominate. The two normalized scores are then averaged. If only one is available for a model, that single score is used. Task-specific benchmarks (SWE-bench, Aider, etc.) are shown in raw data but not used in the main value calculation — they can't fairly compare across model generations.
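A sketch of that normalization and blend; the statistics are computed over whichever models are in the dataset, so the snippet below is illustrative rather than the exact pipeline:

```python
from statistics import mean, stdev
from typing import Optional

def z_normalize(score: float, all_scores: list[float]) -> float:
    """(score − mean) / std × 15 + 50: center each benchmark at 50 on its own distribution."""
    return (score - mean(all_scores)) / stdev(all_scores) * 15 + 50

def quality(arena_elo: Optional[float], aa_index: Optional[float],
            all_elos: list[float], all_aa: list[float]) -> float:
    """Average the normalized scores; fall back to whichever one is available."""
    parts = []
    if arena_elo is not None:
        parts.append(z_normalize(arena_elo, all_elos))
    if aa_index is not None:
        parts.append(z_normalize(aa_index, all_aa))
    return sum(parts) / len(parts)
```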
Which models are good at what?
The single-number rankings above blend quality into one score. But models win on different axes — math, coding, long-context reasoning, agentic work. This matrix shows each model's score across benchmarks, color-coded per column by z-score: green = top, red = bottom, hatched = no data.
Providers don't publish token limits. We measure them.
Neither OpenAI nor Anthropic publishes exact daily token quotas for subscriptions. Both use rolling 5-hour windows and weekly limits expressed as percentages — not absolute numbers. So we measure empirically.
1. Run a standardized task
We run the same coding task (doubly-linked list + 10 tests) through Codex CLI or Claude Code with --json output, which gives exact token counts per API turn — input, cached, output, and reasoning tokens.
2. Read quota before & after
The CLI's /status command shows your 5-hour and weekly limits as percentages. We record these before and after the task. The delta tells us what fraction of the quota our known token count consumed.
3. Calculate total quota
total_quota = tokens_consumed ÷ (pct_consumed / 100)
Example: 2M tokens consumed 6% of the weekly limit → weekly quota ≈ 33M tokens. The value metric is then weekly tokens × 4 × quality ÷ monthly price.
Token counts include system prompt (~70K), cached input, reasoning overhead, and tool calls — not just user-visible output. Reasoning effort matters: xhigh uses 1.7× more tokens than medium for the same task. Full methodology →
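Putting steps 1–3 together as a sketch (the consumed-token and percentage figures mirror the example above; the quality score and price are placeholders):

```python
def weekly_quota(tokens_consumed: float, weekly_pct_consumed: float) -> float:
    """Extrapolate the full weekly quota from one measured run."""
    return tokens_consumed / (weekly_pct_consumed / 100)

def subscription_value(weekly_tokens: float, quality_pct: float, monthly_price: float) -> float:
    """Weekly tokens × 4 weeks × quality ÷ monthly price."""
    return weekly_tokens * 4 * (quality_pct / 100) / monthly_price

quota = weekly_quota(2_000_000, 6)            # ≈ 33.3M tokens/week, as in the example
value = subscription_value(quota, 60, 20)     # hypothetical quality-60 model on a $20/mo plan
print(f"{quota:,.0f} tokens/week → {value:,.0f} quality tokens per $")
```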
📊 Our measurements
| Plan | Model | Tool | Date | Files | Task runs | Quota used | 5h window | Weekly est |
|---|---|---|---|---|---|---|---|---|
| Business | gpt-5.4 | Codex | 2026-04-24 | 6 | 4 | 5h:22% wk:3% | 5.0M | 37M |
| Claude Max 20x | Opus 4.7 | Claude | 2026-04-24 | 7 | 200 | 5h:0% wk:3% | — | 184M |
| Claude Pro | Opus 4.7 | Claude | 2026-04-24 | 3 | 20 | 5h:0% wk:8% | — | 16M |
| Plus | gpt-5.4 | Codex | 2026-04-24 | 9 | 9 | 5h:21% wk:3% | 28.2M | 198M |
Files = separate measurement sessions we've run on this plan (variance expected). Task runs = how many times the benchmark task ran within the best session shown. Quota used = how much of the 5-hour and weekly limits our test consumed (higher = more reliable extrapolation). Raw data →
Help improve this data. Run bash scripts/measure-codex-quota.sh on your plan and submit your results.
Your subscription, priced at API rates
If you bought the same tokens directly through the provider's API, how much would your subscription's weekly quota cost? We can work this out because the measurement captures exact input / cached / output tokens alongside the quota consumed.
| Plan | Model measured | You pay | API-equivalent | Multiplier | Cache% |
|---|---|---|---|---|---|
| ChatGPT Plus | gpt-5.5 | $20/mo | $561/mo | 28.1× | 83% |
| ChatGPT Plus | gpt-5.4 | $20/mo | $547/mo | 27.3× | 86% |
| Claude Max 20× | Opus 4.7 | $200/mo | $2.10K/mo | 10.5× | 93% |
| Claude Pro | Opus 4.7 | $20/mo | $198/mo | 9.9× | 81% |
| Claude Max 20× | Sonnet 4.6 | $200/mo | $1.64K/mo | 8.2× | 92% |
| Claude Pro | Sonnet 4.6 | $20/mo | $162/mo | 8.1× | 77% |
| ChatGPT Business | gpt-5.5 | $30/mo | $197/mo | 6.6× | 67% |
| Claude Max 20× | Sonnet 4.5 | $200/mo | $936/mo | 4.7× | 100% |
| ChatGPT Business | gpt-5.4 | $30/mo | $117/mo | 3.9× | 88% |
How we calculated this
For each plan + model measurement, we extract from the CLI's --json output:
- Non-cached input tokens — billed at full API input price
- Cached input tokens — billed at 10% of input price (OpenAI & Anthropic standard)
- Output tokens — billed at full API output price
- Percent of weekly quota consumed during the run
Extrapolate each token category to 100% weekly, multiply by 4.33 weeks/month, then apply the model's public API pricing:
monthly_api_cost = (
non_cached_input_per_month × $input_rate
+ cached_input_per_month × $input_rate × 0.10
+ output_per_month × $output_rate
)
multiplier = monthly_api_cost ÷ subscription_price
Worked example — ChatGPT Plus on GPT-5.4:
One measurement session consumed 3% of the weekly quota with 5.90M input tokens (5.07M of them cached) and 30.5K output tokens. Scaling to 100% weekly and ×4.33 for monthly gives 119M non-cached input, 732M cached input, 4.4M output. At GPT-5.4's $2.50 input / $15 output rate:
119M × $2.50/M = $298 (non-cached input)
732M × $0.25/M = $183 (cached input)
4.4M × $15/M = $66 (output)
─────────────────────────
$547/mo at API pricing
÷ $20/mo subscription = 27× multiplier
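The same worked example as a runnable sketch; token counts and rates are copied from above, and the last dollar or two of difference is rounding:

```python
def monthly_api_cost(noncached_in: float, cached_in: float, out: float,
                     weekly_pct: float, in_rate: float, out_rate: float) -> float:
    """Scale one run to 100% of the weekly quota, then to a month (4.33 weeks), at API prices."""
    scale = (100 / weekly_pct) * 4.33
    return (noncached_in * scale * in_rate / 1e6         # fresh input at full price
            + cached_in * scale * in_rate * 0.10 / 1e6   # cached input at 10%
            + out * scale * out_rate / 1e6)              # output at full price

# ChatGPT Plus on GPT-5.4: 5.90M input (5.07M of it cached), 30.5K output, 3% of weekly quota
cost = monthly_api_cost(0.83e6, 5.07e6, 30_500, 3, 2.50, 15.00)
print(f"${cost:,.0f}/mo at API pricing → {cost / 20:.1f}× the $20 subscription")
```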
Important caveats
1. High cache hit rates reflect real CLI usage, not a test artifact. Our task hits 67–93% cache reads. We tested whether this was inflated by repeating the same prompt: we ran a session with unique nonces per call (CACHE_BUST=1). On Codex the cache rate barely moved (88% → 88%). On Claude Code it dropped from ~92% to ~77% — meaningful but still high. Most of what gets cached is the CLI's own system context (system prompt, tool definitions, prior turns), not the specific user task. Heavy CLI users will see cache rates roughly in this range too. Light users with very different prompts each session would see lower rates and worse subscription value than this multiplier suggests.
2. The multiplier compares CLI-via-subscription to API-with-caching. A developer building the same agent loop directly against the API can enable prompt caching and pay roughly what we calculate. So the comparison is "running this CLI through your subscription vs paying for the same workload via API with caching enabled". A naïve API user who doesn't enable caching would pay ~10× more on the cached portion, making the subscription look even better — but that's not a fair comparison.
3. The multiplier assumes you fully utilize the quota. If you only use 10% of your Plus quota per month, you're getting 10% of the multiplier — possibly worse value than API metered billing. The subscription wins if you're a heavy user.
4. Cache pricing convention. Anthropic exposes three input tiers — fresh input at full price, cache writes at 1.25× input price, cache reads at 0.10× input. OpenAI exposes two — fresh input and cached input at 0.10×. We bill all three Anthropic tiers correctly using the values reported by Claude Code's usage object. Cache writes at 1.25× input had been mislabelled as cache hits in earlier versions of our calculation; this was corrected on Apr 26, 2026, with measurable upward revisions to Claude multipliers.
5. Subscription unit economics aren't part of this. These numbers are "what API would cost", not "what it costs the provider to run". Providers likely price quotas to match expected real usage; the multiplier is a value comparison from the user's side, not a margin claim about the provider.
All the data, all the tables
📋 Raw data — all models, hardware, and subscriptions
🤖 Models Edit on GitHub ↗
| Model | Provider | Release | Card |
|---|---|---|---|
📊 Benchmarks Edit on GitHub ↗
| Model | AA Intelligence | Arena Text ELO | Arena Code ELO | SWE-bench | Aider |
|---|---|---|---|---|---|
🔌 API Pricing Edit on GitHub ↗
| Model | Input $/M | Output $/M | tok/s | Source |
|---|---|---|---|---|
🖥️ Local Performance Edit on GitHub ↗
| Model | Hardware | Tok/s | Quant | VRAM | Source |
|---|---|---|---|---|---|
💳 Subscriptions (⚠️ Estimated) Edit on GitHub ↗
| Model | Plan | $/mo | Tok/week | Notes | Source |
|---|---|---|---|---|---|
🔧 Hardware Edit on GitHub ↗
| Hardware | Price | VRAM | Year | Source |
|---|---|---|---|---|
📦 Use this data in your project
JSON data files
All model, hardware, and benchmark data is stored as open JSON files. Fetch them directly — no API key needed.
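For example, something like this could pull one of the files into a script. The URL below is a placeholder, not the real path; check the repo for the actual filenames:

```python
import json
import urllib.request

# Placeholder URL: substitute the raw path of the JSON file you want from the repo.
URL = "https://example.com/data/models.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Assuming the file is a JSON array of records
print(len(data), "records")
```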
Attribution
Free to use under Apache 2.0. If you use the data or host a fork, please credit:
"Data from Best Value AI, supported by Desktop Commander"
Contribute
Missing a model? Have local benchmark data? Let your AI agent submit a PR with your hardware's performance data, or contribute manually.