> I can't understand removing Claude Code from $20
Not according to their webpage: "Claude Code is included in your Pro plan. Perfect for short coding sprints in small codebases with access to both Sonnet 4.6 and Opus 4.7." [1]
There are clear contradictions across their marketing site. As others have pointed out, it's being removed from some help articles, and the pricing chart now shows it as revoked. Confusing signals, but they seem to be changing all their pages in this direction and just haven't updated that one yet.
The Mac has never been more popular in its 40-year history than it is now. The recently released MacBook Neo broke all previous Mac sales records. Needing to sell more Macs isn't an issue these days.
I can think of absolutely zero publicly traded company boards of this size that would opt for "we're already selling enough devices, we know there's more demand we can't meet, let's not scale up, we're really happy with these numbers."
Due to the RAM shortages, Apple isn't able to meet demand as it is.
Apple's Mac revenue last fiscal year was $33.7 billion. I suspect the number of Linux users that might buy a Mac if it could run Linux natively is probably in rounding error territory.
Apple has been around for 50 years and has a market-cap of around $4 trillion. All without supporting Linux. I think they're okay with that.
>The problem is that this is an incredibly niche / small issue (i.e. <<1% of users, let alone prompts
It's not a niche issue at all. 29 million people in the US are struggling with an eating disorder [1].
> This single paragraph is going to legitimately cost anthropic at least 4, maybe 5 digits.
It's 59 out of 3,791 words total in the system prompt. That's 1.56%. Relax.
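For anyone checking the math, here's the share computed from the word counts cited above (59 words out of 3,791):

```python
# Share of the system prompt taken up by the paragraph in question,
# using the word counts cited above.
paragraph_words = 59
total_words = 3_791

share = paragraph_words / total_words * 100
print(f"{share:.2f}%")  # → 1.56%
```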
It should go without saying, but Anthropic has the usage data; they must be seeing a significant increase in the number of times eating disorders come up in conversations with Claude. I'm sure Anthropic takes what goes into the system prompt very seriously.
The trajectory is troubling. Eating disorder prevalence has more than doubled globally since 2000, with a 124% increase according to World Health Organization data. The United States has seen similar trends, with hospitalization rates climbing steadily year over year.
Your source says "Right now, nearly 29 million Americans are struggling with an eating disorder," and then in the table below says that the number of "Americans affected in their lifetime" is 29 million. Two very different things, barely a paragraph apart.
I don't mean to dispute your assertion that it's not a niche issue, but that site does not strike me as a reliable interpreter of the facts.
Anthropic provides details on the differences between Opus 4.7 and 4.6, including that Opus 4.7 doesn't call tools as frequently as 4.6 because it's more capable. Depending on the task at hand, that could be a good thing or not so good [1].
For example, regarding instruction following:
Claude Opus 4.7 interprets prompts more literally and explicitly than Claude Opus 4.6, particularly at lower effort levels. It will not silently generalize an instruction from one item to another, and it will not infer requests you didn't make.
That explains a lot, actually. So the fewer tool calls are by design. Makes sense, but for coding specifically I'd rather it read my files than guess what's in them.
• Claude Design uses Opus 4.7, which is more expensive than earlier models.
• It's just Day 2; it's not a finished product. It's ridiculous how quickly Anthropic iterates.
• If you've been using Claude for a while, Design already knows your style and preferences. You'd have to start from scratch using a different AI design tool. I don’t doubt that'll pay dividends in the long run.
> It will never be cheaper than what it is today. Anthropic is heavily subsidizing.
We don't know that for sure—they've dropped prices before:
1. Claude 3 → Claude 3.5/3.7 generation (mid-2024 to early 2025): Haiku went from $0.25/$1.25 to $0.80/$4.00 per MTok — this was actually a price increase for Haiku, but Sonnet stayed flat at $3/$15 while delivering significantly better performance, effectively a price-per-capability reduction.
2. Claude 3/4 Opus → Claude Opus 4.5/4.6 (late 2025): This was the big one. Opus dropped from $15/$75 per MTok down to $5/$25 per MTok — a 67% reduction on input and output. This is the most significant explicit price cut Anthropic has made, delivering a far more capable model at one-third the price.
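The reduction in example 2 can be checked directly from the per-MTok prices cited above ($15/$75 down to $5/$25):

```python
# Percent reduction from Opus 4 pricing ($15 in / $75 out per MTok)
# to Opus 4.5/4.6 pricing ($5 in / $25 out per MTok), per the figures above.
def pct_cut(old: float, new: float) -> float:
    return (old - new) / old * 100

input_cut = pct_cut(15.0, 5.0)
output_cut = pct_cut(75.0, 25.0)
print(f"input: {input_cut:.0f}%, output: {output_cut:.0f}%")  # → input: 67%, output: 67%
```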
They're definitely not subsidizing API pricing; I can't believe how prevalent that fallacy is on HN of all places. The question is how profitable Claude Code is. Your example 2 is real and major, but your example 1 is ridiculous: almost any new model from any company is better at the same price, and how is increasing a price an example of decreasing prices?
BTW, GitHub Copilot is pricing Opus 4.7 at 2.5x the cost of Opus 4.6 at promotional pricing (so maybe it'll be 4-5x later). But GitHub's request-based pricing is insane, completely divorced from their actual costs (you can get 1M+ tokens for $0.10 if you give it a large request), so I'd assume they're losing a lot of money.
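To put numbers on that, here's a sketch using the figures claimed above (one ~$0.10 request carrying 1M+ tokens, versus the $5/MTok Opus input price cited earlier); these are illustrative assumptions from the comment, not official pricing:

```python
# Effective per-MTok cost under request-based pricing, assuming (as claimed
# above) a single $0.10 request can carry ~1M tokens. Illustrative only.
request_cost = 0.10            # dollars per request (assumed)
tokens_in_request = 1_000_000  # tokens packed into that request (assumed)

effective_per_mtok = request_cost * 1_000_000 / tokens_in_request
opus_input_per_mtok = 5.00     # Opus input price per MTok, from above

print(f"effective: ${effective_per_mtok:.2f}/MTok")                       # → effective: $0.10/MTok
print(f"{opus_input_per_mtok / effective_per_mtok:.0f}x below Opus input")  # → 50x below Opus input
```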
The cost of a thing is relative to its source costs. They are subsidizing API pricing if you consider all the costs to provide the service, including model creation, training, and so on.
But that doesn't mean they will be more expensive, longer term. The cost of compute will go down as time goes on. Each year it will get cheaper. Same for power requirements, computing density, cooling, and so on.
I remember trying to store and play mp3 files on older computers. I could typically hold a few on a disk, and if I wasn't doing anything else I could play one. Barely. Now you'd be hard pressed to even notice an mp3 playing in top or whatnot.
If the cost of compute is going down, then eventually it will go down enough that we will run our LLMs locally and Anthropic will go out of business.
> then eventually it will go down enough that we will run our LLMs locally and Anthropic will go out of business.
I want robust local LLMs as much as the next person; Gemma E2B (3.2 GB) does my word completions as I type. It's gotten to the point where it knows what I'm going to type before I do!
But I don't see Anthropic going out of business anytime soon. As good as some of the open source LLMs are, we're still a long way from being able to run frontier models at home.
If you are using LLMs for tool use locally, then in a decade it will not make sense anymore to pay for hosted solutions. Your device will have compute power to run powerful LLMs trivially.
If you need LLMs at scale to serve many customers, then hosted solutions make sense for the availability aspect. But at that point, models can be offered by any generic services provider, like AWS or Cloudflare. Pure AI companies that just offer hosted models and nothing else will go extinct if they don't expand to offer more services.
> If you are using LLMs for tool use locally, then in a decade it will not make sense anymore to pay for hosted solutions. Your device will have compute power to run powerful LLMs trivially.
LLMs that would have been impossible to run on consumer hardware a couple of years ago are running on it now. I'm less concerned about compute power; it's more about memory.
It could be several years before new RAM capacity comes online. Even then, it won't be cheap.
I expect that in the future, hosted frontier models will be a utility like electricity or cable TV: part of a package most people subscribe to.
> can't believe how prevalent that fallacy is on HN of all places
AI is very emotional for a lot of people, leading to biased takes in both directions. We like to think HN is more rational than average, but we're all human.
> I miss the days of having a native desktop design app with a perpetual license.
You can go that route with Affinity Designer [1], owned by Canva, who partnered with Anthropic on Claude Design [2]:
We’ve loved collaborating with Anthropic over the past couple of years and share a deep focus on making complex things simple. At Canva, our mission has always been to empower the world to design, and that means bringing Canva to wherever ideas begin. We’re excited to build on our collaboration with Claude, making it seamless for people to bring ideas and drafts from Claude Design into Canva, where they instantly become fully editable and collaborative designs ready to refine, share, and publish.
I have it; I jumped on the Affinity bandwagon years ago after Adobe started their enshittification process.
After Canva bought Affinity, you now have to authenticate with your email from time to time when you launch the desktop app. It's annoying, and why do they do that?
> Why should I believe that Anthropic will care about this product in 2, 3 years?
There's no reason to believe Anthropic will stop caring about this product--they're not Google [1] after all.
> It really feels like Anthropic's product area is extremely overextended at this point.
I don't think so. They have one core product: the Claude model; they're enabling different ways of accessing it. Claude Code for developers, Cowork for general business tasks, and chat for consumers.
This is their first graphic design product, but it fits nicely because once you create a prototype, you can hand it over to Claude Code to make the website, mobile app, or whatever.
The advantage Anthropic has is their ecosystem. A Claude user will be way more productive using Design because all of their context is with Claude; other AI tools don't "know you" the way Claude does. Claude already knows your style and your preferences; it's much more likely to create designs you'd like.
When you go to an AI you don’t normally use, you essentially have to start from scratch.