> They're definitely not subsidizing API pricing

The cost of a thing is relative to its input costs. They are subsidizing API pricing if you consider all the costs of providing the service, including model creation, training, and so on.

But that doesn't mean these services will be more expensive longer term. The cost of compute will go down as time goes on; each year it will get cheaper. Same for power requirements, computing density, cooling, and so on.

I remember trying to store and play mp3 files on older computers. I could typically fit a few on a disk, and if I wasn't doing anything else I could play one. Barely. Now you'll be hard-pressed to play an mp3 and see it register any load in top or whatnot.

The same will be true of AI in 20 years.



If the cost of compute is going down, then eventually it will go down enough that we will run our LLMs locally and Anthropic will go out of business.


> then eventually it will go down enough that we will run our LLMs locally and Anthropic will go out of business.

I want robust local LLMs as much as the next person. Gemma E2B (3.2GB) does my word completions as I type. It's gotten to the point where it knows what I'm going to type before I do!
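
For the curious, this kind of local completion is simple to wire up. Here's a minimal sketch against a locally running Ollama server; the "gemma3n:e2b" model tag and the sample prompt are just illustrative assumptions, substitute whatever small model you have pulled:

    import requests

    def complete(prefix, max_tokens=8):
        # Ask the local model to continue the text being typed.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "gemma3n:e2b",   # assumed tag; use your local model
                "prompt": prefix,
                "stream": False,
                "options": {"num_predict": max_tokens},
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(complete("It's gotten to the point where"))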

But I don't see Anthropic going out of business anytime soon. As good as some of the open-source LLMs are, we're still a long way from being able to run frontier models at home.


The industry will shift, yes. At some point, remote LLM compute will be like AWS.

Anyone can run bare metal at home, or VMs, or containers. Many don't.

However, you'll still want the best model and toolset, so there is some place for them to pivot to, something for them to sell or license.

It will be interesting to see where it all lands a decade from now. Who will be left?


If you are using LLMs for tool use locally, then in a decade it will no longer make sense to pay for hosted solutions. Your device will have the compute power to run powerful LLMs trivially.

If you need LLMs at scale to serve many customers, then hosted solutions make sense for the availability aspect. But by that point, models can be offered by any generic service provider, like AWS or Cloudflare. Pure AI companies that offer nothing but hosted models will go extinct if they don't expand into more services.


> If you are using LLMs for tool use locally, then in a decade it will no longer make sense to pay for hosted solutions. Your device will have the compute power to run powerful LLMs trivially.

LLMs that would have been impossible to run on consumer hardware a couple of years ago are now running on consumer hardware. I'm less concerned about compute power; the real constraint is memory.

It could be several years before new RAM capacity comes online. Even then, it won't be cheap.
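
A quick back-of-the-envelope shows why memory dominates: the weights alone take parameters times bytes-per-weight, before you even count the KV cache. (The parameter counts below are illustrative, not any particular model.)

    # RAM needed just to hold the weights, ignoring KV cache and overhead.
    def weight_gb(params_billion, bits_per_weight):
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for params in (8, 70, 400):      # small / large / frontier-ish (illustrative)
        for bits in (16, 8, 4):      # fp16, int8, 4-bit quantized
            print(f"{params}B @ {bits}-bit: {weight_gb(params, bits):.0f} GB")

A 4-bit 8B model fits in 4GB and runs on a laptop; a 400B-class model still wants a couple hundred GB even quantized. That's why the bottleneck is RAM, not FLOPs.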

I expect that in the future, hosted frontier models will be a utility like electricity or cable TV, part of a package most people will subscribe to.



