Am I the only one who feels that OpenAI has bots/commenters on payroll in all this kind of news, downplaying Claude and stating how much better Codex is?
There's too much of it from too many accounts, and some of their takes don't fly if you use Claude daily.
Yeah, it's eerie, same with how everyone seems to have forgotten that OpenAI betrayed democracy by committing to work on unsupervised autonomous weapons and domestic mass surveillance.
Honestly I find comments like yours much more eerie. By all accounts they never agreed to any of that, but you say it with such confidence, like it's a fact.
The Trump administration's handling of Anthropic showed that regardless of what the contract or the law says or means, they will severely penalize any vendor who refuses their demands. And OpenAI stepped right into that relationship immediately after the administration showed that. So either they were signing up for a supply-chain risk designation and whatever other punishments the Trump administration dreams up, or they're complying.
If this sounds crazy to you, though, I'd like to know, and understand why. I miss ChatGPT/Codex.
We are also currently in the midst of a migration from NextJS to TanStack Start, and it's worth it for the performance and resource gains alone.
NextJS's dev server takes around 3-4 GB of memory after a few page clicks, while TanStack / Vite consumes less than a GB.
This is something I noticed: originally I thought "AI" was the perfect tool for Vercel and NextJS (current standard = future standard), but then I realized it's the total opposite. Their moat/stick is gone now, and Rauch, who is smart, I think knows this.
I switched a mid-sized app to TanStack Router + Vite while I was walking my dogs. Then 30 minutes to an hour of QA and it was done. This could never have happened before AI.
(I switched because I was tired of the bloated network tab with 100 unnecessary RSC calls, the 5-second lag when clicking an internal link, the 10-second "hot reload" after a change... and I'm on an M4 Max with 64 GB of RAM.)
Vercel's moat is the DX of its hosting, not NextJS. Consider: people who switch to TanStack Start still need a place to host, and many would continue to choose Vercel.
Same principle applies: hosting on Railway has slightly worse UX, but with LLMs you don't need to write a single line of Docker config anymore, so deploying on Railway is way, way less cumbersome than before, and you gain more control at lower cost.
This moat is rapidly disappearing though. Cloudflare is catching up, and most apps (including TanStack Start) can be one-click deployed without configuration now.
Exactly, this is why, if I use Next.js, I always hijack the API routes and use Elysia. It comes with something called Eden that makes the typing fantastic end-to-end; can't recommend it enough.
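Roughly what that looks like, for the curious (a minimal sketch; the route is made up, check the Elysia docs for the exact current API):

    // server.ts
    import { Elysia } from 'elysia'

    const app = new Elysia()
      .get('/user/:id', ({ params }) => ({ id: params.id, name: 'Ada' }))
      .listen(3000)

    export type App = typeof app

    // client.ts - Eden infers request/response types straight from App
    import { treaty } from '@elysiajs/eden'
    import type { App } from './server'

    const api = treaty<App>('localhost:3000')
    const { data } = await api.user({ id: '1' }).get()
    // data is typed as { id: string; name: string } - no codegen, no drift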
As a side note, I'm slowly moving off Next.js. As you said, it's bloated, full of stuff that is just noise and that benefits them (more network requests, more money) for little user benefit.
Right now, for me, the gold standard is Vite + TanStack Router. (And Elysia for the API/server, but that's unrelated.)
I work as a consultant, so I navigate different codebases: old to new, TypeScript to JavaScript, massive to small, frontend-only to full-stack.
The Claude Code experience is massively different depending on the codebase.
A good, E2E strongly typed codebase? It can one-shot almost any feature; after some small QA and a bit of polishing it's usually good to ship.
Plain JavaScript? Object-oriented? Dependency injection? Magic everywhere? Claude can work there, but it's not a pleasant experience, and I wouldn't say it accelerates you that much.
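To illustrate with a contrived sketch (the names are made up): with types, the contract is visible to the model; with "magic", it has to guess.

    // Typed: the shape of the data is right there in the signature.
    function applyDiscount(order: { total: number; coupon?: { pct: number } }): number {
      return order.coupon ? order.total * (1 - order.coupon.pct / 100) : order.total
    }

    // Untyped "magic": what is ctx? What does resolve() return? The model
    // has to reverse-engineer half the codebase to find out.
    // function applyDiscount(ctx) { return ctx.resolve('pricing').apply(ctx.order) }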
There are a lot of people here who are completely missing the point. What is it called when you look at a single point in time and judge an idea without seemingly being able to imagine five seconds into the future?
Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do, unless special-cased to invoke a tool call.
Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human-programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
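Spelled out in code, the invented constraint (one possible reading of the example) is just "the result is the right operand":

    // Hypothetical rule implied by "1 + 2 is 2" and "1 + 3 is 3":
    const madeUpPlus = (_a: number, b: number): number => b
    console.log(madeUpPlus(1, 4)) // 4 - a conclusion from rules we just invented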
>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence.
This is not even wrong.
>Probabilistic prediction is inherently incompatible with deterministic deduction.
And this is just begging the question again.
Probabilistic prediction could very well be how we do deterministic deduction - e.g. the weights could be strong enough, and the probability path for those deduction steps hot enough, that the same path is followed every time, even though the overall process is probabilistic.
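In code terms, think of greedy decoding over a toy, made-up distribution: the weights are probabilistic, but the argmax path is taken every single time, so the output is deterministic.

    // Toy next-token "model" - the numbers are invented for illustration.
    const nextTokenProbs: Record<string, Record<string, number>> = {
      '1 + 3 =': { '4': 0.97, '3': 0.02, '5': 0.01 },
    }

    // Greedy decoding: always pick the highest-probability token.
    function greedyDecode(context: string): string {
      return Object.entries(nextTokenProbs[context])
        .reduce((best, cur) => (cur[1] > best[1] ? cur : best))[0]
    }

    console.log(greedyDecode('1 + 3 =')) // "4", deterministically, every run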
Personally I think "not even wrong" is the perfect description of this argument. Intelligence is an extremely fraught concept scientifically. We have been doing intelligence research for over a century and to date have very little to show for it (and a lot of it ended up being garbage race science anyway). Most attempts to provide a simple (and often any) definition or description of intelligence end up being "not even wrong".
>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4.
Human intelligence is clearly not logic-based, so I'm not sure why you have such a definition.
>and yet LLMs, as expected, still cannot do basic arithmetic that a child could do, unless special-cased to invoke a tool call.
One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had LLMs try long multi-digit arithmetic on random numbers? Because your comment is just wrong.
>What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
Good thing LLMs can handle this just fine I guess.
Your entire comment perfectly encapsulates why symbolic AI failed to go anywhere past its initial years. You have a class of people who really think they know how intelligence works, but when you build it that way, it fails completely.
> One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had LLMs try long multi-digit arithmetic on random numbers? Because your comment is just wrong.
They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286
> Good thing LLMs can handle this just fine I guess.
LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules - especially rules that contradict each other, with specified logic to resolve the conflicts - they fail badly. They can't even play Chess or Poker without breaking the rules, despite those being extremely well-represented in the dataset already, never mind a made-up set of logical rules.
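To be concrete, I mean made-up rule sets like this (hypothetical sketch, rules invented on the spot): contradictory rules plus explicit precedence to resolve the conflict.

    type Rule = { when: (n: number) => boolean; say: string; priority: number }

    const rules: Rule[] = [
      { when: n => n % 2 === 0, say: 'blue', priority: 1 },
      // Contradicts the first rule for large even numbers...
      { when: n => n > 10, say: 'red', priority: 2 },
      // ...and the stated conflict-resolution logic is "higher priority wins".
    ]

    function answer(n: number): string {
      const matches = rules.filter(r => r.when(n))
      return matches.sort((a, b) => b.priority - a.priority)[0]?.say ?? 'no rule'
    }

    console.log(answer(12)) // "red" - both rules match, priority 2 wins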
>They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286
I thought we were talking about actual arithmetic, not silly puzzles, and there are many human adults who would fail this, never mind children.
>LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules - especially rules that contradict each other, with specified logic to resolve the conflicts - they fail badly.
Even if that were true (have you actually tried?), you do realize many humans would also fail once you did all that, right?
>They can't even play Chess or Poker without breaking the rules, despite those being extremely well-represented in the dataset already, never mind a made-up set of logical rules.
LLMs can play chess just fine (99.8% legal move rate, ~1800 Elo).
I still have not been convinced that LLMs are anything more than super fancy (and expensive) curve-fitting algorithms.
I don't like to throw the word intelligence around, but when we talk about intelligence we are usually talking about human behavior. And there is nothing human about being extremely good at curve fitting in a multi-parametric space.
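(And by curve fitting I mean literally this, just scaled up by many orders of magnitude - a toy one-parameter least-squares fit:)

    // Fit y ≈ w * x by least squares; w is the single "weight".
    const xs = [1, 2, 3, 4]
    const ys = [2.1, 3.9, 6.2, 7.8]
    const w = xs.reduce((s, x, i) => s + x * ys[i], 0)
            / xs.reduce((s, x) => s + x * x, 0)
    console.log(w.toFixed(2)) // ≈ 1.99 - LLM training fits ~10^11 such weights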