Hacker News | jaredcwhite's comments

RIP Mozilla. I can't even with this nonsense…truly a shame considering I once admired this organization and the principles it stood for.

So a sloppified Django spit out by Claude? Good luck with that.

The water stuff isn't fake though. It's just easier to lie about.

The future isn't more autonomous cars driving on city streets. It's fewer cars on streets.

I truly believe massive gains in transit & micromobility especially when related to the green energy boom is the real game-changing story of our time.


You could combine both - autonomous cars to shuttle from the station to where you need to go, plus the other stuff.

What makes you think we're going to see massive gains in transit in the future?

Oh? And what extensive knowledge and experience makes YOU qualified to determine what "the vast majority of people on this planet" are doing for work and if those tasks are creative or uncreative?

Not sure what you're insinuating. What do you think is the statistically average job on this planet? It's still going to be cultivating a smallholder farm in developing countries, or working in logistics, manufacturing, or the broader service sector in developed countries.

All of these average jobs are structurally repetitive. Yes, humans do constantly inject creativity, but it's a means to an end, to getting the job done.

You apparently mistook my descriptive comment for a value judgment, but it isn't.


AI, the way you are describing it, has not been invented yet. It is a fiction.

What is called "AI" today is an extremely vague marketing term being applied to various software technologies which are only dangerous because humans are dangerous. Nuclear & chemical weapons are also "very scary" but only because the humans who might decide to use them in fits of insanity are scary.

I'm not in the slightest bit uneasy about "AI" itself right now, because as I said, the AI of Sci-Fi has not yet been invented…and seems unlikely to in any of our lifetimes. (Not throwing shade on clever researchers. We also don't have working FTL travel, though plenty of scientists speculate on how such an engine might be built.)


"It's just marketing" is just the "denial" stage wearing a flimsy disguise.

Even LLMs of today routinely do the kind of tasks that would have "required human intelligence" a few years prior. The gap between "what humans can do" and "what frontier AIs can do" is shrinking every month.

What makes you think that what remains of that gap can't be closed in a series of incremental upgrades? Just 4 years have passed since the first ChatGPT. There are a lot of incremental upgrades left in "any of our lifetimes".


You don't seem to be engaging seriously with respected experts in this field who have been reporting for years at this point that merely scaling LLMs and so-called "agentic systems" doesn't get us anywhere close to true AGI.

Also computers in the 1980s could perform many tasks that previously would have "required human intelligence". So? Are you saying computers in the 1980s were somehow intelligent?


And you don't seem to be engaging seriously with respected experts in this field who say "scaling still works, and will work for a good while longer".

If your only reference points are LeCun, or, worse, some living fossils from the "symbolic AI" era, then you'll be showered in "LLMs can't progress". Often backed by "insights" that are straight up wrong, and were proven wrong empirically some time in 2023.

If you track LLM capabilities over time, however, it's blindingly obvious that the potential of LLMs is yet to be exhausted. And whether it will be any time soon is unclear. No signs of that as of yet. If there's a wall, we are yet to hit it.


That aside.

Let's look at the facts.

Are LLMs displacing labour? In the aggregate, not from what one can see. The statistics tell a different story: hiring of software engineers is still growing year-over-year, for example.

The limits of LLMs will be put in place through financial constraints. People like you seem to think there's an infinite stream of money to fund this stuff. Not really. It's the same reason why Anthropic and OAI are now shifting focus to generating revenues and cash flows: they will not receive external funding forever.


LLMs are indeed displacing labour. Junior IT roles are drying up in places. Translation and art are also becoming harder to earn from.

I can’t speak for the states, but in AU I clearly see a massive displacement of undergrad and junior roles (only in AI exposed domains).

I say this as both someone who works with many execs, hearing their musings, and someone who can no longer justify hiring for junior roles myself.

Irrespective of that: if we take the strategy of only acting once the problem is visible to the layman, the scope of actions available to us will be invariably and significantly diminished.

Even if you are not convinced it is guaranteed and do not believe what I and others see, I would ask: is your probability of it happening really that close to 0? If not, would it not be prudent to take the risk seriously?


> If not then would it not be prudent to take the risk seriously?

What does taking the risk seriously look like?


> What does taking the risk seriously look like?

Politics: proper guardrails, adapting the legal framework to accommodate AI, and making sure it doesn't benefit only a preselected few.

Something that can and should be done yesterday is to stop the drain of capital out of the economy and into accelerated, war-motivated AI development. There's no need for war-AI per se, but it's clearly the most likely reason for the capital drain and the rush.

Once the rush and the wars stop, and some capital is made available to the rest of the economy, the latter can adapt to the introduction of AI at a normal pace. That should include legislative safeguards to support competition and prevent monopolization of AI and information sources.


Oh, you again. In every thread. Are you a respected expert in the field of AI? What are your qualifications?

I'm not interested in reading the same arguments over and over again. AI is not scary anymore; it's boring. Exits thread

Oh we are way-ay-ay past this line of argumentation. The AI skeptic community has been far more right than wrong over several years now, whereas the hype-all-the-AI-things crowd has been proven laughably wrong on a fairly regular basis.

Exactly. If I had a nickel for every mention of "just wait 'til the next release!!" as some sort of justification for whatever's going on right now, I'd be a rich man.

I'm still waiting for a project that is not a pet project, is mostly LLM-assisted, and wows me. Why is it taking so long, I wonder? Perhaps all this "intelligence" is neat, but it is not what pushes humanity forward, which ultimately is what matters. That's the whole point of expending resources...

Quite possible.


"Asbestos has good and bad things." "Assault rifles in the hands of ordinary citizens have good and bad things." "Everyday chemicals in the food supply have good and bad things."

Look, some issues require nuance. Others don't. It's gaslighting to tell activists who consider Big AI to be a net negative for society (by an order of magnitude!) that their position isn't "real-world reflective".


Sheer nonsense. Handcoding is thriving and will easily survive long into the future, especially after the bubble bursts (which is already happening).


Amen. Just like hand washing clothes survived the washing machine fad.


For real. Hand washing helps clothes last longer, and you get fewer microplastics in circulation.

Not to mention it's more resource efficient.

