Oh? And what extensive knowledge and experience makes YOU qualified to determine what "the vast majority of people on this planet" are doing for work and if those tasks are creative or uncreative?
Not sure what you're insinuating. What do you think is the statistically average job on this planet? It's still going to be cultivating a smallholder farm in developing countries, or working in logistics, manufacturing, or the broader service sector in developed countries.
All of these average jobs are structurally repetitive. Yes, humans do constantly inject creativity, but it's a means to an end, to getting the job done.
You apparently mistook my descriptive comment for a value judgment, but it isn't.
AI, the way you are describing it, has not been invented yet. It is a fiction.
What is called "AI" today is an extremely vague marketing term being applied to various software technologies which are only dangerous because humans are dangerous. Nuclear & chemical weapons are also "very scary" but only because the humans who might decide to use them in fits of insanity are scary.
I'm not in the slightest bit uneasy about "AI" itself right now, because as I said, the AI of Sci-Fi has not yet been invented…and seems unlikely to in any of our lifetimes. (Not throwing shade on clever researchers. We also don't have working FTL travel, though plenty of scientists speculate on how such an engine might be built.)
"It's just marketing" is just the "denial" stage wearing a flimsy disguise.
Even LLMs of today routinely do the kind of tasks that would have "required human intelligence" a few years prior. The gap between "what humans can do" and "what frontier AIs can do" is shrinking every month.
What makes you think that what remains of that gap can't be closed in a series of incremental upgrades? Just 4 years have passed since the first ChatGPT. There are a lot of incremental upgrades left in "any of our lifetimes".
You don't seem to be engaging seriously with respected experts in this field who have been reporting for years at this point that merely scaling LLMs and so-called "agentic systems" doesn't get us anywhere close to true AGI.
Also computers in the 1980s could perform many tasks that previously would have "required human intelligence". So? Are you saying computers in the 1980s were somehow intelligent?
And you don't seem to be engaging seriously with respected experts in this field who say "scaling still works, and will work for a good while longer".
If your only reference points are LeCun, or, worse, some living fossils from the "symbolic AI" era, then you'll be showered in "LLMs can't progress". Often backed by "insights" that are straight up wrong, and were proven wrong empirically some time in 2023.
If you track LLM capabilities over time, however, it's blindingly obvious that the potential of LLMs is yet to be exhausted. And whether it will be any time soon is unclear. No signs of that as of yet. If there's a wall, we are yet to hit it.
Are LLMs displacing labour? In the aggregate, not from what one can see. The aggregate statistics don't support the displacement narrative, e.g. the hiring of software engineers is still growing year over year.
The limits of LLMs will be put in place by financial constraints. People like you seem to think there's an infinite stream of money to fund this stuff. Not really. It's the same reason Anthropic and OAI are now shifting focus to generating revenues and cash flows: they will not receive external funding forever.
I can't speak for the States, but in AU I clearly see a massive displacement of undergrad and junior roles (only in AI-exposed domains).
I say this both as someone who works with many execs, hearing their musings, and as someone who can no longer justify hiring for junior roles myself.
Irrespective of that: if we take the strategy of only acting once the displacement is visible to the layman, the scope of actions available to us will be invariably and significantly diminished.
Even if you are not convinced it is guaranteed, and do not believe what I and others are seeing, I would ask: is your probability of it happening really that close to 0? If not, would it not be prudent to take the risk seriously?
Politics: proper guardrails, adapting the legal framework to accommodate AI, and making sure it doesn't benefit only a preselected few.
Something that can and should be done yesterday is to stop the capital drain out of the economy and into accelerated, war-motivated AI development - there's no need for war-AI per se but clearly it's the most likely reason for the capital drain and rush.
Once the rush and wars stop, and some capital is made available for the rest of the economy, the latter can adapt to the introduction of AI at a normal pace, that should include legislative safeguards to support competition and prevent monopolization of AI and information sources.
Oh we are way-ay-ay past this line of argumentation. The AI skeptic community has been far more right than wrong over several years now, whereas the hype-all-the-AI-things crowd has been proven laughably wrong on a fairly regular basis.
Exactly. If I had a nickel for every mention of "just wait 'til the next release!!" as some sort of justification for whatever's going on right now, I'd be a rich man.
I'm still waiting for a mostly LLM-assisted project that is not a 'pet project' and that wows me. Why is it taking so long, I wonder? Perhaps all this 'intelligence' is neat, but it is not what pushes humanity forward, which ultimately is what matters. That's the whole point of expending resources...
"Asbestos has good and bad things"
"Assault rifles in the hands of ordinary citizens has good and bad things"
"Everyday chemicals in the food supply has good and bad things"
Look, some issues require nuance. Others don't. It's gaslighting to tell activists who consider Big AI to be a net negative for society (by an order of magnitude!) that their position isn't "real-world reflective".