Generally speaking, that's incorrect. That's like saying "I don't like cars, and don't see the value in cars, therefore the market for cars is fraudulent".
In AI, buyers are getting what they want. The demand is real. YOU might not value what they're getting, but that doesn't make it fraudulent.
This is the misunderstanding at the heart of calling AI a bubble. A bubble is asset prices rising on speculative demand far beyond the actual demand.
AI - specifically chip and memory markets feeding AI - is a demand shock on par with World War 2 in its impact. NVIDIA is legitimately forecasting demand of $1 trillion in their chips+memory by the end of 2027.
This is actual, real, shipping physical product: not vapor, not something that will disappear, not something that will "crash" suddenly.
Yes, there is some speculation among AI providers training new models in the race to AGI, but that is not the majority of demand: inference accounts for 65-80% of it. If the current pace of training slows, the excess training capacity will get easily sorted out through resale markets.
There are circular deals, and the product is heavily subsidized. Managers are promised the moon and are deceived about the product's actual capabilities.
The mythos is marketed like a nuclear weapon, to make people jealous.
Government approval of AI models is being floated, perhaps in preparation for a government bailout justified by "national security".
The Kushners are invested in OpenAI.
There is a lot of fraud going on.
> This is actual, real, shipping physical product: not vapor, not something that will disappear, not something that will "crash" suddenly.
"Not X, not Y, not Z, just A" works better than "Actually A, not X, not Y, not Z".
> Generally speaking, that's incorrect. That's like saying "I don't like cars, and don't see the value in cars, therefore the market for cars is fraudulent".
It's arguable that the car market is indeed fraudulent: the result of years of lobbying, the destruction of public transportation, and car-centric urban design.
> This is actual, real, shipping physical product: not vapor, not something that will disappear, not something that will "crash" suddenly.
Tulips were real shipping physical products. Railways were real. Housing was real. Whether or not the demand is speculative is largely disconnected from the actual subject of the bubble.
> NVIDIA is legitimately forecasting demand of $1 trillion in their chips+memory by the end of 2027.
Forecasts do not make it so.
> but that is not the majority of demand, inference is 65-80% of demand.
Inference is massively subsidized. The demand is fictitious just because of that. Once prices go up, especially once free or cheap inference dries up, demand will collapse.
But it's not even just the subsidies. AI is forced onto the workplace top-down. Executives demand AI be used before careful evaluation. That's all demand that can collapse at any moment if public opinion sours.
> The world has changed.
It hasn't. For all the claims that AI has made any given job so much easier (developers insisting "it'd have cost me a billion years otherwise"; next time, bring a counterfactual), the actual economic benefit appears to be a big fat zero. We're right back to the Solow paradox.
Except AI companies are dumping trillions of dollars into this, expecting tens of trillions in return ... from where? Where will these tens of trillions come from if the aggregate economic benefit doesn't exist? Joe Slopman making a dozen CRUD apps a week for half a million in revenue, but there ain't a million of him.
So much of the demand for inference is driven by hype. Companies are using AI in the expectation of an ROI that has not materialized, and in many places is very unlikely to. In no small part because any "efficiency" or "productivity" gains realized by AI immediately drive down the cost of the good or service produced.
I think the evidence that AI is better at knowledge work without a human in the loop... is very limited.
Humans with many agents will be more productive, but the tendency for these models has been to regress to the mean when it comes to strategic insights.
So far, I think you're right. But the rate of progress just seems so crazy that I'm not seeing any moats that look fundamental. I hope I'm wrong and you're right.
None of modern society and economics was put together accidentally, IMO. It was purposeful, a mix of success & failures, serendipitous, and filled with mixed motives... but that's not quite the same as an accident.
A mix of political scientists, politicians, investors, entrepreneurs, lawyers, judges, scientists, technologists, and economists have tried to mold society to their own theoretical vision for at least 150+ years. Society then reacts to that in both good and bad ways. This distorts the vision, as society changes it to its concerns. And the cycle repeats.
I think of Karl Polanyi's The Great Transformation as a great way of looking at the attempts to force "market society" on England in the 1700s and 1800s, and the reaction that all societies exhibit in the face of unconstrained technological or economic change. Both the imposition of change and the reaction to it can be violent, and it's hard to predict. We've had such a relatively steady state since WW2 in the developed nations that we're not used to this cycle.
Accidentally is the wrong word, but considering it was never done before and had some very unusual constraints (large coal supply and coal industry, sufficient centralised state that could provide peace within its borders but had been neutered into compromise with parliamentary middle class, finance centres, maritime trade etc etc) that it was done at all does feel … unplanned?
The image that sticks in my mind the most is the Meiji Emperor in an 1870s photo, dressed in a Savile Row suit and bowler hat. For Japan it was the most incredible social card to play, one that says "we are going to be like these foreigners and their secrets to wealth".
Nothing accidental there, but that still leaves visible joins on the Japanese soul.
Peter Drucker identified this phenomenon as the rise of knowledge work as "the means of production" in the 1950s and 1960s. Management (of people, tasks, responsibilities, and disciplines) and knowledge work were the two sides to organizational performance. Drucker felt that "post capitalist society" was the recognition that capital ceased being the primary factor of production. No matter how much capital you throw at a problem, if you can't retain people that know what you're doing, you won't get far.
Knowledge is a unique resource compared to the other traditional factors of economic production (land, labor, and capital). It is often invested in with capital (education and tools), but it is carried with the human, and leaves with them. It is always decaying - knowledge workers should be in constant learning mode, and stale knowledge eventually becomes a drag on performance.
I'd argue the future is about knowledge workers all becoming managers. When you use agentic AI, it has the flavor of the skills of management. Management is "a practice and a liberal art", according to Drucker, one that has been in poor supply for some time. LLMs have somewhat stale knowledge and require the human, tools, and RAG to freshen it. And LLMs will always regress to the mean: they are pretty good at pattern analysis but get shaky and mediocre at synthesis. It takes very nuanced, elaborate prompting to shape their token output toward insightful results that aren't a standard answer. For coding exercises that can be fine, but at high complexity levels, or when dealing with issues of strategy or evaluation, an LLM is a platitude generator with no unique competitive advantage.
In other words, competent, talented management mixed with knowledge work is the scarcity we are heading towards. This is arguably why you're seeing the rise of "markdown frameworks" that people swear improve performance: it's the beginnings of management scaffolding for AI.
Technical folks struggle to value management skills, and I expect that will only increase those skills' value and scarcity.
As for "Physical robustness. Strength, perhaps brutality. Competence in physical tasks." I think the robots will be replacing that pretty shortly.
"Honesty. Parentage. Birth order (see primogeniture.) Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values."
Ehhhhh, not really? What about Christianity, where the meek shall inherit the Earth and love is the core value (putting aside the modern-day Pharisees and charlatans who twist the underlying value system)? Or Islam, whose core value is submission to God? While there have been societies that valued parentage and birth order, that's far from universal.
> "post capitalist society" was the recognition that capital ceased being the primary factor of production. No matter how much capital you throw at a problem, if you can't retain people that know what you're doing, you won't get far.
This leads to the reformulation of knowledge workers as "human capital", and it's hardly post-capitalist. A capitalist society is one where people assemble different forms of capital to produce capital returns that are larger than the sum of the capital inputs, where the possibilities available to you depend on the amount and quality of capital that you have access to. This is all still very relevant when discussing human capital - access to human capital is determined by the quality of your professional networks, whether you decide to be present in geographic talent clusters (i.e. cities as centers of industry), and whether you have sufficient financial capital available in trade.
AI will not transition us to a post-capitalist society. Its promise is solely the ability to replace human capital with other forms: chips and electricity. It does not spell the death of human labor any more than computers and spreadsheets did for accountants.
Well it is 'post-capitalism' rather than 'anti-capitalism' or 'non-capitalism' the same way 'post-punk' relates to 'punk'. Human capital is something of a conceptual misnomer as the knowledge itself is owned by the individual and can only be licensed or contracted by investors and management, not traded on capital markets.
> A capitalist society is one where people assemble different forms of capital to produce capital returns that are larger than the sum of the capital inputs, where the possibilities available to you depend on the amount and quality of capital that you have access to.
The reason we live in post-capitalism is that capital is largely abundant these days, although many regional and cultural barriers remain due to bias, prejudice, and risk aversion. But it is no longer the determining factor of economic growth: necessary but not sufficient.
> This is all still very relevant when discussing human capital - access to human capital is determined by the quality of your professional networks, whether you decide to be present in geographic talent clusters (i.e. cities as centers of industry), and whether you have sufficient financial capital available in trade.
I feel this is a stretch. In a non-capitalist economic system, for example, the wealth of the collective is arguably also bounded by the scarcity of knowledge in that collective. Knowledge does not have the same properties of capital accumulation that Marx described.
> AI will not transition us to a post-capitalist society. Its promise is solely the ability to replace human capital with other forms: chips and electricity. It does not spell the death of human labor any more than computers and spreadsheets did for accountants.
Spoken like someone who didn't even read my parent comment. We are already living in a post-capitalist society, and have been for several decades.
"Chips and electricity" is reductio ad absurdum, and ignores a vast number of other input factors. AI will not eliminate labor or human capital, that's just marketing. It certainly will transform it as other tools have.
I had thought that months aren't quite a human construct, they correspond roughly to lunar cycles. Weeks were a way to carve the month up into the four lunar phases per cycle.
Seconds, minutes, hours, etc. are, as you say, all sexagesimal math bias.
You're right about months/weeks, I was imprecise in the post: calendar months (as we now use them) are a weird cultural construct (I mean, who would divide a measurement unevenly and think that was a good idea?)
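As a rough sanity check on the two comments above, here's a sketch using approximate astronomical figures (my round numbers, not the commenters'): the synodic month is about 29.53 days, so one lunar phase lasts roughly a week, but twelve lunar months fall about 11 days short of a solar year, which is why calendars that track both end up with unevenly sized months.

```python
# Approximate astronomical constants (assumed round figures).
SYNODIC_MONTH = 29.53   # days, new moon to new moon
SOLAR_YEAR = 365.2422   # days, mean tropical year

# One lunar phase = a quarter of the cycle -> close to a seven-day week.
phase_length = SYNODIC_MONTH / 4

# Twelve lunar months vs. one solar year -> the drift that forces
# uneven calendar months (or leap months) to keep the two aligned.
lunar_year_gap = SOLAR_YEAR - 12 * SYNODIC_MONTH

print(f"Days per lunar phase: {phase_length:.2f}")
print(f"Drift of 12 lunar months vs solar year: {lunar_year_gap:.1f} days")
```

The ~7.4-day phase explains where the week plausibly comes from; the ~11-day annual drift explains why purely lunar months can't tile a solar year evenly.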
Someone actually mathed out infinite monkeys at infinite typewriters, and it turns out, it is a great example of how misleading probabilities are when dealing with infinity:
"Even if every proton in the observable universe (which is estimated at roughly 10^80) were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys."
Often, events with probability 1 in the infinite limit are, in practice, safe to treat as probability 0.
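The quoted figures can be sanity-checked with log arithmetic. A back-of-envelope sketch, using assumed round numbers that are not from the quoted source: a 30-key typewriter, Hamlet at ~130,000 characters, ~10^80 protons, ~10^17 seconds since the Big Bang, and 10 keystrokes per second per monkey.

```python
import math

# Assumed round figures (illustrative, not from the quoted passage).
KEYS = 30            # typewriter keys
TEXT_LEN = 130_000   # approximate character count of Hamlet

# log10 of the probability that one random run of TEXT_LEN keystrokes
# reproduces the text exactly: (1/KEYS)^TEXT_LEN.
log10_p = -TEXT_LEN * math.log10(KEYS)
print(f"P(one attempt succeeds) ~ 10^{log10_p:,.0f}")

# Every proton typing since the Big Bang adds only ~98 orders of
# magnitude of attempts: 10^80 monkeys * 10^17 s * 10 keystrokes/s.
log10_attempts = 80 + 17 + 1
print(f"Total attempts ~ 10^{log10_attempts}")
print(f"P(any attempt succeeds) ~ 10^{log10_p + log10_attempts:,.0f}")
```

The success exponent is on the order of minus two hundred thousand, while the whole protonic universe only contributes about a hundred orders of magnitude of attempts, which is exactly the mismatch the quote describes.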
So no. LLMs are not brute force dummies. We are seeing increasingly emergent behavior in frontier models.
> It is unsurprising that an LLM performs better than random! That's the whole point. It does not imply emergence.
By definition, it is emergent behavior when it exhibits the ability to synthesize solutions to problems that it wasn't trained on. I.e. it can handle generalization.
Emergent behavior would imply that some other function was being reduced to token prediction. Behaving "better than random", i.e. not just brute forcing, would not qualify: token prediction is not brute forcing, and we expect it to do better; it's trained to do so.
If you want to demonstrate an emergent behavior you're going to need to show that.