Hacker News: jvidalv's comments

Am I the only one who feels that OpenAI has bots/commenters on its payroll in all news of this kind, downplaying Claude and stating how much better Codex is?

There are too many of them, and some of their takes don't fly if you use Claude daily.


Yeah, it's eerie, same with how everyone seems to have forgotten that OpenAI betrayed democracy by committing to work on unsupervised autonomous weapons and domestic mass surveillance.

Honestly, I find comments like yours much more eerie. By all accounts, they never agreed to any of that, but you state it with such confidence, as if it were fact.

The Trump administration's handling of Anthropic showed that regardless of what the contract or the law says or means, they will severely penalize any vendor who refuses their demands. And OpenAI stepped right into that relationship immediately after the administration showed that. So either they were signing up for a supply-chain risk designation and whatever other punishments the Trump administration dreams up, or they're complying.

If this sounds crazy to you, though, I'd like to know, and understand why. I miss ChatGPT/Codex.


I find that very obvious too. It started (visibly) shortly after the Opus 4.6 hype.

Of course they do. As do all of the other companies pushing their products these days.

I have switched from the bloated mess of Nextjs to Vite+TSS and never looked back.

We are also currently in the midst of a migration from NextJS to TanStack Start, and it's worth it for the performance and resource gains alone. NextJS's dev server takes around 3-4 GB of memory after a few page clicks, while TanStack / Vite consumes less than a GB.

This is something I noticed too. Originally I thought "AI" was the perfect tool for Vercel and Nextjs (current standard = future standard), but then I realized it's the total opposite: their moat/stickiness is gone now, and Rauch, who is smart, I think knows this.

I switched a mid-sized app to Tanstack Router + Vite while I was walking my dogs. Then 30 minutes to an hour of QA and it was done. This could never have happened before AI.

(I did switch because I was tired of the bloated network tab with 100 unnecessary RSC calls, the 5-second lag when clicking an internal link, the 10-second "hot reload" after a change... and I'm on an M4 Max with 64 GB of RAM...)


Vercel's moat is DX in hosting, not NextJS. Consider: people who switch to TanStack Start still need a place to host, and many would continue to choose Vercel.

The same principle applies: hosting on Railway has slightly worse UX, but with LLMs you don't need to write a single Docker line yourself anymore, so deploying on Railway is far less cumbersome than before, and you gain more control at lower cost.

This moat is rapidly disappearing, though. Cloudflare is catching up, and most apps (including TanStack Start) can be one-click deployed without configuration now.

The React framework du jour. I wonder what the reason to rewrite React apps in 2027 will be.

Thanks guys!

Our sweet prince <3

If you click the "download" button, which should open the App Store link, you will notice it's a broken link. This is why it's AI slop.

Spinning up a site like that today is 30 minutes of Claude Code prompting.

But like it or not, the gatekeeping of Apple and Google means that pushing an app to their stores is days of work and wait time.

So yeah, it reeks of AI slop.


Apple still needs to review it. That's why I said "literally just now".


Exactly. This is why, if I use Next.js, I always hijack the API routes and use Elysia. It comes with something called Eden that makes the e2e typing fantastic; I can't recommend it enough.

As a side note, I'm slowly moving away from Next.js. As you said, it's bloated, full of stuff that is just noise and that benefits them (more network requests, more money) for little user benefit.

Right now the gold standard for me is Vite + Tanstack Router. (And Elysia for the API/server, but that's unrelated.)
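For anyone curious what Eden-style e2e typing buys you, here is a dependency-free sketch of the pattern. This is not the real Elysia/Eden API (every name below is made up); it just shows how the client can derive its types from the server's route definitions with no codegen step:

```typescript
// Sketch of end-to-end typing: the client infers response types from the
// server's route table via `typeof`, so schema and client can never drift.
// All route names and helpers here are hypothetical.

// "Server": a route table with typed handlers.
const routes = {
  'GET /user': () => ({ id: 1, name: 'Ada' }),
  'GET /health': () => ({ ok: true }),
};

// "Client": everything is inferred from the server value itself.
type Routes = typeof routes;

function call<K extends keyof Routes>(route: K): ReturnType<Routes[K]> {
  return routes[route]() as ReturnType<Routes[K]>;
}

const user = call('GET /user'); // typed as { id: number; name: string }
console.log(user.name); // prints "Ada"
```

In real Elysia, the server exports `typeof app` and Eden's client consumes that type over the network, but the mechanism is the same: one source of truth, checked at compile time.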


I work as a consultant, so I navigate different codebases: old to new, typescript to javascript, massive to small, frontend-only to full stack.

The Claude Code experience is massively different depending on the codebase.

A good, E2E strongly-typed codebase? It can one-shot any feature; a bit of QA, some polishing, and it's usually good to ship.

Plain JavaScript? Object-oriented? Injection? Magic everywhere? Claude can work there, but it's not a pleasant experience and I wouldn't say it accelerates you that much.


"...typescript to javascript"

Country AND Western!


We are going to start seeing that become the primary selection criterion. Pick a stack that agents are good at.


Surprisingly, Claude is amazing at cleaning up your MacBook. Tried it; works like a charm.


It's super fast but also super inaccurate; I would say not even GPT-3 levels.


That's because it's Llama 3 8B.


There are a lot of people here who are completely missing the point. What is it called when you judge an idea at a single point in time, without seemingly being able to imagine five seconds into the future?


“static evaluation”


What is the definition of intelligence?


Quoting an older comment of mine...

  Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.

  Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human-programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.


>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence.

This is not even wrong.

>Probabilistic prediction is inherently incompatible with deterministic deduction.

And this is just begging the question again.

Probabilistic prediction could very well be how we do deterministic deduction: e.g., the weights could be strong enough, and the probability path for those deduction steps hot enough, that the same path is followed every time, even if the overall process is probabilistic.

Probabilistic doesn't mean completely random.


At the risk of explaining the insult:

https://en.wikipedia.org/wiki/Not_even_wrong

Personally I think not even wrong is the perfect description of this argumentation. Intelligence is extremely scientifically fraught. We have been doing intelligence research for over a century and to date we have very little to show for it (and a lot of it ended up being garbage race science anyway). Most attempts to provide a simple (and often any) definition or description of intelligence end up being “not even wrong”.


>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4.

Human intelligence is clearly not logic-based, so I'm not sure why you have such a definition.

>and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.

One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had an LLM try long multi-digit arithmetic on random numbers? Because your comment is just wrong.

>What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.

Good thing LLMs can handle this just fine I guess.

Your entire comment perfectly encapsulates why symbolic AI failed to go anywhere past its initial years. There is a class of people who really think they know how intelligence works, but build it that way and it fails completely.


> One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had an LLM try long multi-digit arithmetic on random numbers? Because your comment is just wrong.

They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286

> Good thing LLMs can handle this just fine I guess.

LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly. They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, nevermind a made-up set of logical rules.


>They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286

I thought we were talking about actual arithmetic, not silly puzzles, and there are many human adults who would fail this, never mind children.

>LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly.

Even if that were true (have you actually tried?), you do realize many humans would also fail once you did all that, right?

>They can't even reliably play Chess or Poker without breaking the rules despite those extremely well-represented in the dataset already, nevermind a made-up set of logical rules.

LLMs can play chess just fine (99.8% legal move rate, ~1800 Elo):

https://arxiv.org/abs/2403.15498

https://arxiv.org/abs/2501.17186

https://github.com/adamkarvonen/chess_gpt_eval


I still have not been convinced that LLMs are anything more than super fancy (and expensive) curve-fitting algorithms.

I don't like to throw the word intelligence around, but when we talk about intelligence we are usually talking about human behavior. And there is nothing human about being extremely good at curve fitting in a multi-parametric space.


Hello! I've created easy-to-install sound packs that play on certain Claude actions.

Sometimes I was missing requests from Claude, and this is a fun way to pull myself back into the action.

Counter-Strike, Half-Life, Old School RuneScape, Stardew Valley… are some of the ones available.
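For context, this kind of thing can be wired up through Claude Code's hooks in its settings file. A minimal sketch, assuming macOS `afplay` and a hypothetical sound file path (the actual sound packs may wire things differently):

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          { "type": "command", "command": "afplay ~/.claude/sounds/alert.wav" }
        ]
      }
    ]
  }
}
```

Other hook events (e.g. on tool use or when Claude stops) can be mapped to different sounds the same way.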


Totally agree; schema issues are easily solved with solutions like Drizzle or Prisma.

I would never go full Mongo, considering how easy it is nowadays to have TypeScript-first Postgres.
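A dependency-free sketch of what "TypeScript-first" means here: the schema is a plain TS value and row types are inferred from it, so a schema change becomes a compile error everywhere it matters. The helpers below are hypothetical, not the real Drizzle API:

```typescript
// Toy version of the Drizzle idea: schema as a TS value, row types inferred.
type Column<T> = { sqlType: string; _marker?: T };

const integer = (): Column<number> => ({ sqlType: 'integer' });
const text = (): Column<string> => ({ sqlType: 'text' });

// The schema definition is just data...
const users = {
  id: integer(),
  email: text(),
};

// ...and the row type falls out of it at compile time.
type Row<S> = { [K in keyof S]: S[K] extends Column<infer T> ? T : never };
type User = Row<typeof users>; // { id: number; email: string }

const u: User = { id: 1, email: 'ada@example.com' };
console.log(u.email); // prints "ada@example.com"
```

Renaming or retyping a column in `users` immediately breaks every `User` usage at compile time, which is exactly the safety net you give up by going schemaless.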

