Yep - had this discussion today. The problem is that building high quality modern 'AI' tools requires a metric assload of data. Right now everyone is running around scraping the Internet indiscriminately for that data and building the best things that they can. The next step is that the creators of said training data want their cut, and the lawyers step in and shake out the industry for the next few years. Precedent is set, laws are made and content creators get to license their data for training these AI models.
Once those laws are in place, the moat is dug - and large corporations with access to capital vacuum up good training data in bulk, all the while loudly proclaiming that they're 'empowering creators' and 'rewarding original thinkers.' Now the large corporations have the largest and best datasets tied up behind a neat, deep moat. Any startup wanting to challenge the incumbents could have significantly better models, but without access to the same quality and volume of training data, they'll be at a massive disadvantage.
Yeah I posted this in another thread but your claim is simply untrue.
There is little to no need for anyone to scrape the web themselves: CommonCrawl does it for free and provides access to all that data for free. They're a charity similar to the Internet Archive.
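For anyone curious what "access for free" looks like in practice: CommonCrawl publishes a per-crawl index you can query by URL, and each index record points into a WARC file that can be fetched with a single HTTP range request. Here's a minimal sketch; the crawl ID is just an illustrative assumption (current IDs are listed at index.commoncrawl.org):

```python
import json
from urllib.parse import quote

# Illustrative crawl ID; real ones are listed at https://index.commoncrawl.org/
CRAWL = "CC-MAIN-2023-06"

def index_query_url(url: str) -> str:
    """Build a CDX index query for all captures of `url` in this crawl."""
    return (f"https://index.commoncrawl.org/{CRAWL}-index"
            f"?url={quote(url)}&output=json")

def parse_records(body: str) -> list[dict]:
    """The index responds with one JSON object per line."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]

def warc_range(rec: dict) -> tuple[str, str]:
    """Each record names a WARC file plus an offset/length, so a single
    HTTP range request retrieves just that one capture."""
    start = int(rec["offset"])
    end = start + int(rec["length"]) - 1
    return (f"https://data.commoncrawl.org/{rec['filename']}",
            f"bytes={start}-{end}")
```

You'd fetch `index_query_url("example.com/*")`, feed the response body to `parse_records`, then issue a `Range` request per record; no crawler of your own required.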
As for using that data to train a model, I've done it at my own company and it's not that expensive, certainly not the astronomical figures people seem to assume. Maybe a hobbyist can't do it for themselves, but my small company managed to get pretty good results training on our existing hardware. It's hard to give concrete figures because we manage our own hardware instead of using AWS, but all in we're talking on the order of $100k, as opposed to the $10 million+ enterprise people on HN seem to think it is.
That's a useful number. Just yesterday we had an announcement of an open source system you can train with one GPU on a desktop for simple models. That could usefully use 16 or 24 GPUs, but it's not like you need a whole data center. It doesn't look like this is an industry with excessive monopoly potential.
The real issue for machine learning is that it lets you build systems that optimize for some metric. That's what corporations do. I've been pointing out lately that the ethical problems for AI in the near term look very much like those for corporations.
“The courts” don’t have rules which apply globally for this sort of thing. My understanding is that even if the models were made illegal to host in America, there’s no shortage of other countries with much looser copyright laws where one could hypothetically host such a thing.
There are clearnet sites which host entire movies and games and there have been since the dawn of the internet, wouldn’t you expect those would be bigger targets to go after if the powers were there? All that to say, I’m not bullish on the future global leverage of American copyright law, especially when it comes to AI models.
But courts do have jurisdiction over their consumer markets: if a court in the US says it's illegal to use unlicensed data for training models, and that any product doing so can't be used or sold in the US, you have lost access to a massive consumer market.
Case in point: the GDPR. You can vacuum up as much private data as you wish from non-EU citizens, but the moment you touch EU citizens' data you must abide by the GDPR, or you lose access to a market of 400+ million people in some of the wealthiest countries in the world.
You don't need global jurisdiction when your economic power is large enough.
I’ve been thinking along the same lines as what you seem to have done. I looked into the CommonCrawl data and also The Pile, and I’m not sure they are as large as people think. We also manage some of our own hardware, including GPUs. It looked to me that with maybe 0.5 PB of decent storage and 8 high-end GPUs it could be possible to train a decent LLM in a few months. How does that estimate compare to your experience?
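For what it's worth, the "8 GPUs for a few months" figure can be sanity-checked with the common rule of thumb that training compute is roughly 6 × parameters × tokens FLOPs. Every number below (model size, token count, per-GPU throughput, utilization) is an illustrative assumption, not a measurement:

```python
def training_days(params: float, tokens: float,
                  gpus: int, peak_flops: float, mfu: float) -> float:
    """Days to train, assuming total compute ~= 6 * params * tokens FLOPs
    and a sustained fraction `mfu` of each GPU's peak throughput."""
    total_flops = 6 * params * tokens
    sustained = gpus * peak_flops * mfu
    return total_flops / sustained / 86_400  # 86,400 seconds per day

# e.g. a 7B-parameter model on 140B tokens, on 8 GPUs each peaking at
# 150 TFLOP/s with 40% utilization, comes out around 140 days:
days = training_days(7e9, 140e9, gpus=8, peak_flops=150e12, mfu=0.40)
```

So a single-digit-billion-parameter model on a modest token budget is plausibly "a few months" on 8 cards, while anything GPT-3 sized blows well past that.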
Big corps will always have more data. Even for just the web, search engines have a big head start. Then there is the "private" data that big providers use for "improving our product": emails, Word documents, chat messages, non-public social media posts, ...
Something I’ve been pondering lately: with the release of all of these content generating language models lately, what will happen to the “scrape and train” process going forward? Aren’t we about to realize an infinite loop problem basically where the training sets will be full of ai generated content that isn’t actually useful for training?
Seems like we almost need pre-AI scraped datasets, almost like how it’s sometimes very useful to have metals that were made before the first atom bombs (I don’t know too much about this, just that there’s a definite line in the sand around that timeframe: pre-war "low-background" steel is prized because everything smelted afterwards carries traces of fallout).
I wonder how correct this is. There are two growth factors here. One is the amount of data required to train a premium model is increasing. The second is that the amount of data being generated is increasing.
I think the future you described is possible. I also think it's possible that corporations are overpaying for data that will become stale after several years and new fresh data is generated in such great quantity/quality that access to it is not capital intensive.
> high quality modern 'AI' tools requires a metric assload of data
There seems to be a lot of different ideas about 'quality'. A lot of people want a better-than-search tool that knows vast amounts and can be assigned tasks. I'm personally much more interested in a reasoning machine, I don't need it to include the entire contents of the New York Public Library.
Eh not really, we aren’t far off from AI being able to use browsers just like humans and operate in the world like us. There are no reasonable laws that will be able to prevent this
Be careful of regulatory capture, though. If those with an AI head start can capture a huge swath of value, that value can be used to lobby and bribe legislators or government executives.
There's no good way to put the AI genie back in the bottle, or make sure it will not create catastrophic undesired consequences. Only way is forward, even if there is no good path in sight.
When AI provides value that formerly would have been provided by workers, that change allows capital to capture a larger share of the revenue. If this isn't counter-balanced, I think we have to call on the tools that Piketty and others have been advocating: institute wealth taxes, create giant sovereign wealth funds, and give every young person universal inheritance. If we really believe that AI will create a bigger pie, but exacerbate inequality, then let's build better tools for dealing with inequality.
But the other side of this is that large corporations will have the upper hand in AI only if the success of that AI is dependent on their other advantages (like having tons of data about all of us). The fact that model architectures are getting somewhat universalized seems to suggest that eventually there may be some highly reusable building blocks that will provide good outcomes ... when trained on enough data with enough diversity. And that data comes from everyone; if we insist on different data governance principles, we could support an ecosystem where anyone can create, train, and run their own AI services.
The problem with this idea is it runs counter to any hope of AI alignment and differential technology development. You can't solve the control problem by throwing out what little control you still have.
If dumping laundry detergent into a pile creates gold, but once the pile reaches 1 million lbs it causes the Earth to explode, you have to find a way that no one can gather such a pile. Not "democratize" the practice and letting anyone anywhere have as much detergent as they wish.
The term the author is looking for to describe their objection is, "radical monopoly" and it was identified in the 70s. The essential concept refers to a technology that shapes its users to its needs. https://en.wikipedia.org/wiki/Radical_monopoly
You could probably say this about anything that improves productivity. FWIW it would be nice if there was a safety net in place to help people recover from having their jobs automated out of existence.
> it would be nice if there was a safety net in place to help people recover from having their jobs automated
Forgive my ignorance, but is this not the whole point of civilized society? Social safety nets are what my taxes are supposed to be for, aren't they?
I'm from Europe living in USA and the discussion around "What if there were some way to pool our resources to help those in need" is always ... interesting.
The US tends to see the role of government to secure rights and freedoms for its citizens, more than to create social safety nets. That’s one of the founding principles of the country. Not that social safety nets are bad, but I say that is not the primary purpose of civilized society.
The US is what happens when a small group of economic/political competitors get a thinly-populated continent for free. The rights and freedoms emphasis makes total sense in a context where you have what seem like unlimited resources by individual human standards. World population in 1776 was under 1 billion and in Europe it was about 150m, or 1/3 of today's.
It's unwise for Americans to stay so mentally anchored to this origin story. It's not that it doesn't have value, but it has increasingly less to do with reality because it depends on a condition of superabundance that no longer obtains.
You make an interesting point. But I don’t see why the government's role in securing rights and freedoms for its citizens would change with fewer natural resources per person. Certainly the nature of rights may change as new technologies develop (do artists have rights to AI-generated art trained on their works, etc.), but the principle still seems the same.
Having your private property rights enforced by police is an instance of a social safety net. It’s helping you do things that would be much harder or impossible to do by yourself. So is modern agriculture, medicine, engineering, etc.
Protecting private property is a necessary part of a market economy. I wouldn't call it a safety net. The whole point of a safety net is when your livelihood goes poof you don't freefall into the abyss. Should you sell your home when an AI automates your job? These are orthogonal concepts.
> You could probably say this about anything that improves productivity.
The claim goes too far though. Automation, thus far, has been good for everyone. Mostly those with capital, but also those without. The zero sum outcomes implied by the headline don't hold up to what we've observed with increased living standards globally as automation has improved economic output.
The difference is that in the past automation did not have power itself. If displaced workers revolted, those factory machines would do nothing to save wealthy capitalists from the public's vengeance.
Once they have mass produced Terminator-like invincible robot cops they'll be able to do whatever they wish with us. They could give us universal income, they could also round us up and push us into the sea.
I don't think that's true. The important issue isn't productivity but whether a technology is centralizing or decentralizing, whether it benefits or loses from scale.
The personal home computer, the early web, etc. were productivity boosts, but they moved power from incumbents to small players. AI as it currently stands achieves almost the exact opposite. A relatively straightforward heuristic is to ask: how much does an authoritarian power like the technology?
Spot on. AI is extremely centralizing and extremely powerful. I'm a big Bitcoiner and definitely against big government. But I fail to see what feasible counter there is except for something as totalitarian and powerful as the government, whether it be to enforce copyright of training data, UBI, or even forceful limitation of AI research (latter prob won't work. And AI research will simply move to more lax jurisdictions).
> You could probably say this about anything that improves productivity.
AI does not improve productivity in the technical sense. Productivity improvements are the result of using better tools. I would classify better tools as force multipliers. AI is a force replacement as far as I'm concerned. If I'm using stable diffusion, I'm not painting, I'm writing prompts and hoping I get what I want. It's as far removed from the actual practice as it can be.
Well, you aren't, ChatGPT is. In spirit, it's the equivalent of asking someone else to do a job for you. You wouldn't claim you wrote that article, legal filing, SEO spam, etc, you'd say an AI did it.
If you used Photoshop to manipulate an image over using MS Paint, you wouldn't say that Photoshop made the image for you, because it didn't, you did. Grammarly doesn't write your articles for you, you do. They just maximise your efforts.
I wouldn't say it's closer to a person; more simply that to claim its work, verbatim and without substantial modification, as your own is perhaps dubious and not entirely honest. Even then, "substantial modification" is somewhat arbitrary. Definitely an interesting philosophical topic.
Yes. I'm not saying LLMs are sentient (corporations are people and similarly not sentient, under the traditional definition of the word), I'm saying there's an active debate on how to view AI, especially in the context of attribution.
This is an extension to the idea that automation is a form of "resource curse" that creates social stratification in a similar way to diamonds or oil.
Being able to automate valuable work means you need a smaller coalition of power brokers to maintain the bulk of your GDP, meaning a government can remain in power with the support of an ever-shrinking class of influential people.
This is something people have been talking about for decades, with semi-serious proposals like a tax for robots being thrown around. The people in power have yet to take a hard look at any of it, though -- and why should they? It's a safer bet to curry favor with the ever-shrinking circle of influential people. The core problem in my opinion is how to create an incentive for politicians today to put into place measures to combat this consolidation of power. The longer we wait, the stronger the incentives against liberal democracy become, and the harder it will be to make a change.
The problem is that all the aggressive proposals come far in advance of the technology actually being even remotely ready (robots don't serve me meals from McDonalds yet, the minimum-wage worker robotic replacement hasn't happened - it probably won't happen in my lifetime).
They're somewhere between reactionary efforts to strangle the technology in the crib and completely misinformed navel-gazing by people with nothing better to do: proposing fictional scenarios to argue about rather than dealing with real data.
Data isn’t the only bottleneck. Training models at scale is an expensive and difficult engineering task. It’s something that the “data is the new gold” people always seem to overlook.
And then even if you have the data and the engineering talent, it takes competent leadership to deploy a product and shepherd it to success. (Looking at Google’s graveyard here)
Totally agree. Data is a problem when you need millions of labels, but when your training corpus is just "all the text I can find on the internet" and the training objective is self-supervised, it's not what's stopping people. But as you say, actually training a model of this size is really, really tricky. You can't just grab a few ML engineers off the street, write a check to AWS for some compute and have your own GPT-3.
To summarize: Larry Page is bragging about how, in 2017, Alphabet created a "data REIT" which contained all of Google's data, and licensed it on FRAND terms to all comers. As a "REIT" it's required to pay out 95% of its profits as dividends, and everyone whose data they use is a shareholder in the "REIT."
Yes, I know that's not what REITs are for. This would take legislation, as Larry hints in the interview.
Basically, the AI training data is nationalized, with compensation to the owners, i.e. Google, FB, etc. You could argue, and people would, about who deserves compensation. Congress would have to do its job, for once.
In this "interview" Larry is bragging about how well this worked out for Google, since the clients of Google Data can make much better use of the data than Google itself can.
Technology as a whole is more useful for people with more resources. When animal husbandry was invented, the shepherd with one sheep still had one sheep at the end of the season, while the shepherd with two sheep might have three at the end of the season.
> unless AI is open-source and truly owned by the end users
This misses the deeper reality, in my view. AI is predicated on and bootstrapped by the free labor of others. Even if it were “open source” and “owned” by end-users, AI fundamentally requires people to do free work. That’s the problem with it.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
AI is definitely providing us with cheaper knowledge labor, but that's not a bad thing for human knowledge workers. AI is enabling societies to have lower expenses, which means more money to spend on everyone's quality of life. We should push for that.
Counter arguing the central point: AI will be terrible for capitalists, and great for most others. Reason being, the surplus value created by AI is hard to defend (e.g. models and trade secrets seem to leak out on a 1-2 year timescale). Anything that requires Google scale in 2-3 years will only need iPhone scale. This will lead to hardcore deflation too fast for capitalism to escape, which will be passed on to consumers in the form of surplus discretionary spending income.
If you’re correct, the policy response will be a return to zero interest rate policies and QE, to guarantee that the rate of money creation prevents price deflation. This would result in a massive increase in the monetary price of goods such as real estate, stocks, and nfts. The benefits would primarily go to owners of these assets - whether the owners are public or private will vary from country to country.
This doesn't seem like an effective counterpoint to me; it feels like, to put it harshly, the same sort of conjecture crypto enthusiasts use to point out how Bitcoin will be used to fight the man. The average American is now way more productive, but the gains we thought would come with the advent of the computer (4-day work weeks, 5-hour days) never came; instead Americans work the same, produce more, but get a smaller piece of the pie. If anything, I expect these trends to continue as AI becomes more commonplace.
Isn't the better question "why is this time unique?"
The problem isn't new, it's always existed as a pathology of American society. You already have been trading away the gains of technological improvement to an ever shrinking group of people in control of capital.
AI isn't creating a new problem, it's a reminder of an old one. Which should make it clear that even bothering to talk about AI specifically is talking about the wrong issue.
I don't think that's true. Average wealth has grown, and even if you didn't get wage growth, the range of things you could buy with said wage has grown (which is in and of itself a form of wealth).
American productivity/wage gap isn't particularly controversial, it's well documented.
>The average wealth has grown
Globally, yes, but as far as Americans are concerned, they largely didn't see their wealth grow, as the "elephant curve" [2] shows. A lot of the wealth growth happened in China and India.
>the availability of things you could buy with said wage has grown
"You'll own nothing. And you'll be happy" is pretty much what you are saying here. The price of real assets has grown tremendously, but at least you can buy a 60" flatscreen TV for $100.
Exactly. Imagine every MacBook Pro/iPhone shipping with open-source models preinstalled out of the gate. You wouldn't even need to query Google or see ads. You'd get a nightly update that transfers the diffs of the latest news, etc.
The author doesn't get into whether even capitalists will be able to reliably get their AIs to do what they want. Good news: AI may end up being terrible for everyone!
It’s in defense of capitalism. Most coups are against what leftists want — socialism. Otherwise why would the US do things like get a million plus Indonesian Muslim communist sympathizers murdered by right wingers in 65? America has gone after every non capitalist country and does agitprop for places like Tibet or Taiwan and capitalists as a whole while vilifying and trying to overthrow Cuba.
It's nothing to do with capitalism; it's all about installing regimes that are pro-US and allow US enterprises to exploit these countries. Indonesia has a lot of resources, including the Grasberg mine in West Papua, which it leases out to US companies with very little benefit for the local population, instead pocketing millions in bribes. It's also a huge country of 200+ million people, of huge geopolitical importance.
The US is simply a thug. It has too much firepower for its own good. It's drunk on its ideologies of freedom or democracy or what have you, that it can never recognise its mistakes.
Yes and no. You can be capitalist without letting the richest 0.1% control the government. The US is the result of a puritanical approach to freedom of speech: Citizens United v. Federal Election Commission confirms that in the US, money is speech. That is, lobbying is legal, and huge donations to politicians are legal. It's as corrupt as governments go, not to mention the role of the media in all this. There are many capitalist societies, e.g. Singapore, that do not let their governments become corrupted by capital.
I abhor capitalism as much as you do, but economic system and political system are separate although interlinked. Why shouldn't we count Singapore just because it's small? They have a parliament and all other branches of government just like any other nation. HK is different though as it is ruled by a CEO.
My point is capitalism itself is not necessarily the root of all evils; it is puritanism, dogmatism, and when it comes to the US, it's their puritanical approach to "freedom" that ironically makes the majority of the population less free and less democratic. A communist, planned economy is as much capable of creating an incompetent government and society if there were no political, ethical and cultural wisdom.
To summarise, capitalism is abhorrent, but there are other dimensions.
Okay, Singapore counts. It's still capitalism, and capitalism is a right-wing ideology. Cuba's govt is doing well too. I always feel uneasy when capitalist societies that were given near-unfettered access to the Global South and free trade (because the Global North didn't sanction them) get held up as successes. Those countries had it on easy mode. They didn't do what Cuba has done, or what Russia did in a few decades after 1917, or what China did in the 50s. I'm bordering on sounding like a tankie now, but I'm more so defending countries and people that had to do things the hard way.
Nothing is the root of all evil, but "evilness" has almost always been done from a right-wing perspective. Yes, authoritarian leftists exist, but I'd argue it's impossible not to end up behaving like Cuba does, with its freedom restrictions, when America and, by extension, the Global North are capitalist and imperialist bullies coming after you.
I still mourn for my parents’ country, where the US had the socialist leader murdered and replaced by an awful fascistic dictator.
The world was in a Cold War after the communists (the Soviets) defeated the Nazis. America did the mass murder and suppression of people in my Indonesia example (the Jakarta Method) to suppress communism in support of capitalism. There have been videos on YouTube recently going over US-led coups; the vast majority were to stop socialism. And coups don't even include the various massacres and suppressions of communists.
It kind of hurts to hear people still think this. My parents home country had coups taking out leftist socialist leaders and installing awful dictators. Capitalism is evil. Capitalists should have left socialists alone after World War 2.
Oligarchy sure, but that’s what all capitalist systems will tend towards with wealth redistribution. There is no such thing as free market capitalism. Markets can’t exist without states.
IMO the title would probably be best rephrased as "Centralized/Proprietary AI Is Useful for ..."
I think Stable Diffusion, being FOSS & usable on a decent range of consumer hardware, is a pretty clear counterexample to the article's claim - it doesn't matter if you're a capitalist or not, you can get great value from the technology. Funnily, the article totally avoids mentioning SD.
And it's amazing how quickly Stable Diffusion is advancing past the competition. I tried DALL-E for the first time in months today and its results feel so outdated, and I was dazzled by them 5 months ago.
A1111 and the extensions system they've built is really amazing for putting FOSS into nontechnical consumers' hands.
How are you determining how "far behind" SD is compared to Midjourney?
I've personally found that Stable Diffusion is just as good at producing imagery, and I have seen plenty of fine-tuning done for specific purposes (see https://civitai.com/models/5585/deliberate-for-invoke for example).
And the fact that SD is available locally, as opposed to midjourney, makes it more of a hotbed for new innovations.
It’s entirely possible that my prompts are just too vague, but Midjourney produces good results with a couple of words while SD often makes something unintelligible.
But yes, the ability to fine-tune SD is the thing that gives me hope for its future.
Really? Does Midjourney have a civitai.com number of models, or extensions like LoRA and textual inversion? Does Midjourney have a ControlNet alternative?
I’m asking; I have no idea. I was under the impression that MJ was an easier-to-use toy. I didn’t know they were close, let alone MJ superior.
It is largely capitalist in nature, but I don't think it's fair to say that it is solely capitalist. Property still exists under socialism; however, it exists in collective ownership instead of singular ownership, and theft or abuse of that property would still be a thing under socialism.
Trademarks would also continue to exist as they generally serve a practical purpose of preventing impersonation of companies, goods, and services. Ownership would be collective among workers as all other property would be and scope requirements for trademark enforcement would likely be limited.
Additionally, the subset of copyright that is copyleft fits in very cleanly with socialism. It guarantees the users and the users' users some degree of rights (as well as enforcing attribution which costs nothing). So you could imagine a socialist view of copyright where copyleft is preserved more or less in perpetuity while traditional copyright continues to exist in a limited, reduced form with a shorter lifespan (i.e. 5-10 years instead of death+70/95/120 years).
As for patents, it's kind of fuzzy. They don't fit well into any socialist understanding of intellectual property, and of all the types of IP that exist, they are the only type that constrains knowledge/inventions/craft/skill as property (vs trademark -> identity and copyright -> specific works/creations). You can argue for ownership of the specific products of your labors and you can argue for ownership of your identity, but it's near impossible to argue within a socialist framework for ownership of ideas/knowledge in and of itself, or for ownership of the ability to apply that knowledge.
If one were to insist on patents in a socialist society, they'd have to be limited to short lifetimes (i.e. less than 5 years with no extension), and copyleft intangibles (e.g. software) or products with copyleft manufacturing and design documents would need to be exempt from any exclusivity the patents may grant.
And so long as we live in a capitalist economy, we might as well use that construct to protect people's livelihoods as opposed to letting huge corporations use their work to make billions in profit.
That seems to be true of almost any significant technological advancement. I would argue that the proper response as a society is to socialize the benefits of these developments in order to stem the otherwise inevitable spreading of the wealth gap. I'm sure someone will eventually call me a commie for such wild notions.
That's the exact problem. AI is centralizing. So is empowering the government. Not to mention if AI development is heavily red-taped in one jurisdiction, its development will simply move to a more lax jurisdiction. It makes things even worse.
No matter how you slice it, the rich get richer, those in government power centres get more power, and the majority in the lower middle class loses the most.
Anything that increases capital-intensity of output, compared to labor intensity (good old Cobb-Douglas production function) will be good for the capitalists.
Y = K^(1-x) * L^x

Here's ONE WEIRD TRICK that CAPITALISTS HATE: increase x.
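To make the trick concrete: under Cobb-Douglas with competitive factor markets, labor's share of output is exactly the exponent x (labor is paid its marginal product), so raising x shifts income from capital to labor no matter the levels of K and L. A quick numerical check, using illustrative values:

```python
def output(K: float, L: float, x: float) -> float:
    """Cobb-Douglas production: Y = K^(1-x) * L^x."""
    return K ** (1 - x) * L ** x

def labor_share(K: float, L: float, x: float) -> float:
    """Labor income / total output when labor is paid its marginal product."""
    mpl = x * K ** (1 - x) * L ** (x - 1)  # dY/dL
    return mpl * L / output(K, L, x)

# The share works out to exactly x for any positive K and L,
# e.g. labor_share(100.0, 50.0, 0.3) is 0.3 (up to float rounding).
```

Which is why "increase x" is the whole fight: the exponent, not the quantity of capital or labor, decides who gets what fraction of Y.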
It would be useful to a Communist society, one that is able to re-distribute wealth amongst the population. Of course, americans find the very word revolting, but that's what countries like China have the possibility of achieving in the future, while the US is stuck in their ideological dogma and obsession of capitalism, "democracy" and to some extent evangelical christianism.
These days most americans type like AI bots parroting the propaganda they absorbed from the media. It's the perfect system, populated by serfs toiling for their slavemasters, while blinded by ideologies, forgetting how miserable the typical american life is.
Barely 10 days holiday per year, no universal healthcare, car-infested cities, unsafe cities filled by homelessness, racial tension all over the country, drug addiction, and many more. The US is a living nightmare yet people like you are blinded to think you're still the best in the world. Pity.
This is an ancient fallacy in economics going back to the mechanization of farm labor. The beauty of capitalism is that what's good for capitalists is also good for everyone else. Jobs will be lost but the standard of living will increase for everyone and new jobs created. Until we reach AGI, at which point we can all collectively retire and just do whatever we want.
Or, more likely, based on all of human history, you have a few people at the top who “own” the AI, gatekeep and charge for access to it, own the IP-equivalent of whatever it produces, and life will continue as normal for 80%-99% of peasants/laborers working for the lucky few AI-aristocrats.
Even if AGI itself was “good”, that does not mean it will be used or deployed fairly and equitably. In fact, I’d posit without major regulation it will just increase the wealth gap
> own the IP-equivalent of whatever it produces, and life will continue as normal for 80%-99% of peasants/laborers working for the lucky few AI-aristocrats.
this wasn't the case for other capital goods that were invented in the past, so why is AI going to be different?
In the past, other forms of automation brought on more wealth, which did enrich the rest of society (on top of bringing personal wealth to the founders).
The only difference is that _your_ current view has been normalized to see the products of those capital goods as normal!
> based on all of human history, you have a few people at the top who “own” the AI, gatekeep and charge for access to it, own the IP-equivalent of whatever it produces, and life will continue as normal for 80%-99% of peasants/laborers
It really seems like “all of human history” tells a different story. The standard of living for virtually everyone is much higher now than it was in the past. Obviously there are people who capture a large share of the value. That is true in any competitive system. But technological advancements have improved the entire world on average.
You need a reality check if you think life today for most people isn't materially better than 50, 100, 500 years ago thanks to improvements driven by capitalism.
I was going to start reading this, but the first few paragraphs already have tons of lies. Where are the non-capitalists saying any of these things? The reason given for the collapse of the USSR is wrong in the first paragraph.
I bet this author thinks Russia isn’t where it is today because of capitalism and the US. Or that Putin isn’t a result of US capitalist intervention in Russia.
He didn't say anything about 'non-capitalists'. Your little noisy clique isn't as large as you think it is. Besides, given how expansively I've seen "capitalism" defined on the internet by its detractors, conservation of energy qualifies as capitalist oppression.
> He didn't say anything about 'non-capitalists'. Your little noisy clique isn't as large as you think it is.
The Soviets and socialists are anti-capitalists. How are you defending OP/OP's link, which attempts to show the evils of socialism/the Soviets, while not knowing that the Soviets and socialists are anti-capitalists?
> Besides, given how expansively I've seen "capitalism" defined on the internet by its detractors, conservation of energy qualifies as capitalist oppression.
Capitalism is the means of production being owned by private individual capitalists. Think private property. There's one basic definition of capitalism. None of this was difficult to learn once I did some reading. Or even just watching some non-mainstream and non-right wing videos. Just like I've done with media from the mainstream and the right to see where they are coming from.
The biggest Communist ('socialist' is not well defined at all and shouldn't really be used in a serious discussion) enterprise, the USSR, failed on its own merits (or lack thereof). It was in no way strangled by the US. A major reason was that central planning doesn't work that well and you need a market economy - strongly associated with capitalism - to actually achieve success. Perhaps in some AI-driven future a planned economy could work well, but that certainly hasn't been the case so far. China famously realised this in the late 1970s.
Where did you do your learning? Why are you using the word communism? Socialist and socialism are well defined. Capitalist is a sillier word when used for average citizens who are exploited and oppressed.
Russia was dirt poor in 1917. They became a super power in 30-40 years moving an enormous amount of people out of poverty and lack of modern niceties.
China changed plenty already in '78. How do Cuba or Vietnam survive? How did they survive while heavily sanctioned and attacked by capitalist Westerners? We have yet to see a capitalist country do that.
Whom they couldn't move out of poverty, they just killed.
I can't speak for grandparent, but when I say "communist" disparagingly, I speak as someone who was born and raised in communist Czechoslovakia, who remembers actual communism.
I remember being in kindergarten, learning how there was a great, kind man Lenin who loved children. And I was already somehow thinking, something is off: this shit shouldn't be taught to kids like us.
So you kids can crumple that up in your woke pipes and smoke it. Pardon me, vape.
Czechoslovakia didn't have actual communism. It was oppressed by a stalinist Soviet Union. Lenin was nothing like Stalin.
People mistake oppression and basically incompetence of the soviet union as proof that communism will never work, but it wasn't communism. During the Lenin era, industrial output actually far outpaced western countries, millions were lifted out of poverty. Same in China, Mao's madness with the great leap forward plunged the country into famine. These weren't communism.
There has not been a real communist society in human history. It'll require the population to change its mindset first from greed, ignorance and blind faith, towards critical thinking, socialism (care for others). These are pre-requisites for any ideals, even democracy.
The fallacy of the neoliberal order is to think that greed, excess consumption, and the paramount right of the individual will magically bring a prosperous society, when in fact they set up an exploitative system where the vast majority toils for the tiny minority and stays poor and stupid.
Actual "True Scotsman" communism could easily be worse than the ones we actually had, because there would be no way for anyone capable to use it to their own advantage rather than that of the collective. It could easily be the scummiest form of communism, where even the hope of partaking in corruption or clandestine free-market activity would be taken away from you.
You can't use that sort of No True Scotsman fallacy as pushback in an adult discussion. It's obvious Stalin's Russia was not proper communism. Lenin was nothing like Stalin. Stalin was still at least a communist, but he was not a good person and was not good for communism. Neither were the capitalists in America and all the right-wing governments America put in power in its cruel crusade against communism.
That’s not a given; it needs to be fought for tooth and nail to be true. And we have slid back on many of those fronts: anti-trust laws are almost dead, tax evasion is rampant and not pursued in any meaningful way, and the power to influence elections with money is unchecked. Adding AI to that mix will be explosive. My hope is that this catalyzes the movement to claw back some of those achievements.
You should probably read some Piketty. There is absolutely nothing which structurally ensures that the wealth created by capitalist pursuits will be distributed sufficiently to ensure that standards of living will increase for everyone, and in fact there are a lot of reasons to believe that the past century was an anomaly in that regard.
Why does Piketty think that capitalism is not the major engine that has been increasing the standard of living for people worldwide? Does he think something else has been the essential motor behind that improvement in the material condition of humans around the world?
No way, AI will also be useful in medical fields, or for anyone who has to pore over more data than can be parsed in human lifetimes in order to make discoveries, such as the search for extraterrestrial life, or even in the field of law. Think outside the box.
You won't be needing radiologists, family doctors, etc. You describe your symptoms and upload the scans, blood tests, etc., and Dr. ChatGPT will tell you exactly what is wrong with you and the steps you need to take to heal. It would also generate the prescription and have the medication sent to you without you needing to lift a finger.
I actually tried this with someone who is currently attending med school and got him to quiz ChatGPT on diagnosing common and esoteric ailments. According to him, ChatGPT assumed it knew the answer when, in most cases, a good doctor would have asked clarifying or more specific questions to make an accurate diagnosis. In a few of those instances, it misdiagnosed when it could easily have gotten it right just by asking one or two simple follow-up questions.
I have had this experience with incurious human doctors as well. We do not need mechanical diagnosticians. We need minds that can think through problems using a mix of logic and intuition built through intensive study and experience.