Hacker News | ccortes's comments

Maybe it’s not a given, but it is part of the sales pitch for CEOs. A few others have announced layoffs due to AI being better and more efficient than humans.

How much truth there is to it we don’t know for sure. But it’s not something to be ignored.


CEOs have been saying the exact same thing for the entire history of automation. Take computing, for example, an industry that's always been unusually amenable to automation:

— in the 1960s/1970s, when compilers came out. "We don't need so many programmers hand-writing assembly anymore." Remember, COBOL (COmmon Business-Oriented Language) and FORTRAN (FORmula TRANslator) were marketed as human-readable languages that would free business professionals/scientists from being reliant on dedicated specialist programmers.

— in the 1980s/1990s, when higher-level languages came out. "C++ and Java mean we don't need an army of low-level C developers spending most of their effort manually managing memory, and rich standard libraries mean they don't have to continuously reimplement common data structures from scratch."

— in the 1990s/2000s, when frameworks came out. "These things are basically plug-and-play, now one full-stack developer can replace a dedicated sysadmin, backend engineer, database engineer, and frontend engineer."

While all of these statements are superficially true, the result was that the world produced more software (and developer jobs) than ever, as each level of abstraction freed developers from worrying about lower-level problems and let them focus on higher-level solutions. Mel's intellect was freed from optimizing the position of the memory drum [0], allowing him to focus on the higher-level logic/algorithms of the problem he was solving. As a result, software has become both more complex and much more capable, and thus much more common.

While this time with AI may truly be different, I'm not holding my breath.

[0] http://catb.org/jargon/html/story-of-mel.html


> in the long run those businesses fall to leaner competitors

This is not true at all. You can find plenty of examples going either way, but it's far from being a universal reality.


> It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society.

> Jobs that don't provide value for a company are cut, eventually.

Uhm, seems like Graeber is not the only one drawing conclusions from a hypothetical perfect world.


People here seem to be conflating thinking hard and thinking a lot.

Most examples of "thinking hard" mentioned in the comments sound like thinking about a lot of stuff superficially instead of thinking about one particular problem deeply, which is what OP is referring to.


If you actually have a problem worth thinking deeply about, AI usually can't help with it. For example, AI can't help you write performant stencil buffers on a Nokia N-Gage for fun. It just doesn't have that in it. Such problems abound, especially in domains that push some extreme (like high-throughput traffic). Just the other day someone posted a vibe-coded Wikipedia project that took ages to load (despite being "just" 66MB) and insisted it was the best that could be done, whereas Google can load the entire planet (perceptually) in a fraction of a second.


Oh wow, is that what you got from this?

It seems more like an inexperienced guy asked the LLM to implement something, and the LLM just output what an experienced guy did before, and it even gave him the credit.


Copyright notices and signatures in generative AI output are generally a result of the expectation created by the training data that such things exist, and are generally unrelated to how much the output corresponds to any particular piece of training data, and especially to who exactly produced that work.

(It is, of course, exceptionally lazy to leave such things in if you are using the LLM to assist you with a task, and it can cause problems of false attribution, especially in this case, where it seems to have just picked the name of one of the maintainers of the project.)


Did you take a look at the code? Given your response I figure you did not, because if you had, you would see that the code was _not_ cloned but genuinely produced by the LLM.


> then you're doing the opposite of what the author proposes

No, it’s exactly what the author is writing about. Just check his example, it’s pretty clear what he means by “thinking in math”

> Scientific consensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive to an answer, you introduce a concept that captures a class of problems and use that.

I don’t even know what you mean by this.


I really want to get into OCaml, but the syntax is sooo ugly that I feel like you need a great IDE setup to be productive with it.


Might want to check out ReasonML.


> In this article, being "functional" is just serving as a proxy for code quality.

It is not; the article is very specific about what it means and what it is referring to.


> It doesn’t really matter why it’s not working.

It does, because it changes the strategy.

If you think the ads are working and that you have 10k potential customers, then you start thinking about how to increase your conversion rate, believing you could capture a chunk of those 10k; you might even think distribution is solved.

But if it turns out only 2.5k are real humans then your conversion rate might not even be an issue and it’s just the marketing strategy that needs tweaking.

The whole point is that they are giving you fraudulent traffic, which you then use as real data to figure out the next steps. If you don't know the traffic is fraudulent, or how many of the clicks are fraudulent, then you are making decisions under the wrong assumptions.
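Using the numbers above (10k reported clicks, 2.5k real humans), a quick back-of-the-envelope sketch shows how the distortion plays out; the 50 conversions are a made-up figure for illustration:

```python
# Hypothetical campaign numbers: 10,000 reported clicks, but only
# 2,500 of them came from real humans (the rest are fraudulent bots).
reported_clicks = 10_000
real_clicks = 2_500
conversions = 50  # conversions can only come from real humans

# The conversion rate you *think* you have vs. the one you actually have:
apparent_rate = conversions / reported_clicks  # looks like a conversion problem
actual_rate = conversions / real_clicks        # conversion is fine; reach is the problem

print(f"apparent: {apparent_rate:.1%}, actual: {actual_rate:.1%}")
```

With these assumed numbers, the apparent rate is 0.5% (so you'd go tune the funnel) while the real rate among humans is 2.0% (so the funnel is fine and the ad targeting is what needs fixing). Same data, opposite strategies.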

> You can’t stop fraudulent clicks just like you can’t stop your SuperBowl ad from playing while your viewers are in the bathroom

That's not even a good analogy; we are talking about clicks, not impressions.


That’s the point, without the fraudulent clicks you would just move on to some other strategy because the pricing would not be worth it.

Fake clicks give the illusion that the ads are working and that it's your funnel, or whatever else, that needs optimizing instead.

