I don't follow. Why wouldn't it work? It seems to me that a biased random walk down a gradient is about as universal as it gets. A bit like asking why walking uphill eventually results in you arriving at the top.
It wouldn't work if your landscape has more local minima than atoms in the known universe (which it does) and only some of them are good. Neural networks can easily fail to train, but there are a lot of things one can do to help ensure they work.
Not a mathematician so I’m immediately out of my depth here (and butchering terminology), but intuitively it seems like the presence of a massive number of local minima wouldn’t really be relevant for gradient descent. A given local minimum would need to have a “well” at least as large as your step size to reasonably capture your descent.
E.g. you could land perfectly on a local minimum but you won’t stay there unless your step size was minute or the minimum was quite substantial.
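The step-size intuition above can be sketched with a toy 1-D loss. Everything here is made up for illustration (the function, the well width of 0.05, and the step sizes are arbitrary choices, not claims about real loss landscapes):

```python
import math

# Toy loss: a broad quadratic bowl with a narrow dip near x = 2
# acting as a deep but very thin local minimum.
def f(x):
    return x ** 2 - 1.5 * math.exp(-((x - 2.0) / 0.05) ** 2)

# Numerical gradient via central differences.
def grad(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

# Plain gradient descent with a fixed step size.
def descend(x, step, iters=500):
    for _ in range(iters):
        x -= step * grad(x)
    return x

# A large step strides over the narrow well and settles in the broad
# bowl near x = 0; a tiny step starting inside the well stays trapped
# near x = 2.
print(descend(3.0, step=0.1))
print(descend(2.0, step=0.001))
```

Roughly, a well can only hold the iterate when the step size is small relative to the well's width and curvature, which matches the "well at least as large as your step size" intuition above.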
> The correction will be brutal, worse than the Industrial Revolution.
Has it occurred to you that there might not be a correction, and that the outcome would still be brutal, at least on par with the Industrial Revolution?
I mean as in: living through the Industrial Revolution would have been wild. So whether we get an AI revolution or an AI bubble, it's bound to be a roller coaster.
And that's without accounting for the various wars (and resultant economic impacts) that are already in progress. A large part of what drove the meat grinder of WWI was (very approximately) the various actors repeatedly misjudging the overall situation and being overly enthusiastic to try out their shiny new weapons systems. If one or more superpowers decide to have a showdown the only thing that might minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems. Even in that case as we know from WWII the logical outcome is a decimated economy and manufacturing sector regardless of anything else that might happen.
Bubbles like the AI bubble are a game-theoretic outcome of a revolution. Many players invest heavily to avoid losing, but as a whole the market overinvests. This leads to a bubble.
In another context I might see it as vendor financing. However given that Google and Anthropic are competitors in this segment and given that Google has previously invested in them I'd rather see this as a sort of bartered stock purchase presumably for the purpose of hedging against failure. If Anthropic wins the race and it turns out to be winner takes all and you happen to own half of Anthropic then you still win half of the immediate spoils even though your internal team lost. If you view losing the race as an existential threat then having all your eggs in the one basket is a terrible proposition.
How can there be a "winner takes it all" situation with AI?
OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and also Google with Gemini(?) ... and the open-weight models are 2 years behind.
Any win here seems only temporary, even if a new breakthrough to strong AI somehow happens.
Recursive self-improvement is one argument. Otherwise winner-takes-all seems much less likely than an OpenAI/Anthropic duopoly for the best models. Obviously other providers will have plenty of uses, but even looking at the revenue right now it's pretty concentrated at the top.
So if I'm Google I'd want a decent chunk of at least one of them.
That's certainly how it looks right now but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically we're a bunch of tribesmen speculating about the future potential outcomes of the space race (ie the impacts, limits, and timeline of ASI).
Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players.
"The first to AGI, or a close approximation, is the winner. "
But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network ... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?
Neither. It's the most severe FOMO in history. The best case scenario is equivalent to attempting to pick future winners just prior to the industrial revolution really kicking off. Except this time around the technological timelines appear to be severely compressed and everyone is fully aware of what's at stake. And again, that's the best case scenario.
That seems really paradoxical and I think it would just burn up compute. The AI really doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.
If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time.
I look at this as Google needs a competitor. While Anthropic seems to be the flavor of the quarter OAI looks like such a dumpster fire right now that it's in Google's best interest to help keep Anthropic moving towards winning the #2 spot. I say the #2 spot because it doesn't matter how good this week's LLM is. Until someone else owns the infra and has an actually profitable business model they're all just lighting money and the world around us on fire.
I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.
Assuming what you say is true then couldn't that be validated by making additional observations in the present day? Since we'd assume some sort of statistical distribution for such objects. Is there any reason that would be unrealistic?
The fact that a bunch of seemingly disparate actors are behaving in a highly coordinated manner is evidence against central orchestration? What an absurd suggestion.
You are assuming without evidence that they are coordinated, then using that to infer central orchestration, and then using that inferred central organization to support coordination.
When there is something that aligns with the interests of several disparate groups it is common for them to all support that something without the need for some central organization.
> You are assuming without evidence that they are coordinated
The evidence is the highly abnormal behavior. The alignment of interests is a red herring.
> it is common for them to all support that something without the need for some central organization.
Sure, as is frequently seen with the conferences and administrative bodies surrounding treaties and the like. Would you care to point out this central organizing body that a bunch of people posting here appear mysteriously determined to deny the existence of?
What exactly is your position? First you object to an alleged lack of evidence on my part, then turn around and seemingly attempt to justify the observed behavior with the argument that coordination in the open is normal and expected. So do you acknowledge the presence of what appears to be centralized coordination in this instance or not?
What counter evidence is there against you, AnonymousPlanet, and gslepak being the same person? You're all seemingly acting in a highly coordinated manner. Would it be reasonable for me to assume you're all one person? Because a suspicious similarity seems to be the only reasoning any of you are providing for these laws being centrally orchestrated.
TikTok in the US previously had an algorithm that wasn't in keeping with US government goals. That's not a value judgement on my part, BTW. Personally I avoid the ingestion of opaque algorithmic feeds to the extent possible.
It could still be like that if there was no opaque algorithm and even better if there was no endless feed to doomscroll. If you only got alerts for messages directed at you and otherwise had to actively visit a person's page to check up on them. But that wouldn't be as engaging (ie addictive) and there wouldn't be nearly as many opportunities for ads or even the collection of data to drive those ads.
The evidence is the part where it very obviously isn't organic. The behavior is clearly too coordinated when compared to past global changes in regulation.
> People and lawmakers are just not thinking through the privacy implications ...
It seems much more likely to me that they are thinking them through and that they have ulterior motives.
BTW "violent agreement" refers to when two parties are arguing because they mistakenly believe that they disagree. A sort of friendly fire if you will. The term you were looking for was something like enthusiastic or similar.
> The evidence is the part where it very obviously isn't organic.
Global Context: Norway joins France, Spain, and Denmark, which are considering similar measures, while Australia and Turkey (which bans users under 15) have already implemented restrictions. The UK recently rejected a similar under-16 ban.
I think it obviously is, just as much as the migration to solar is organic. There are foils, but there are also underpinning concerns fueling the global momentum. It's very likely that the functioning western governments (ie still representing the public's interests) are doing just that. The drivers include the public servants who work with children, who have been sounding the alarm for years and are finally being heard, and the population that grew up with social media, who are now old enough to do something about what they perceive as damaging.
Where have you provided anything to refute the observation that this bears the hallmarks of being centrally orchestrated? The context you cite appears to trivially restate my own observations rather than support a counterargument. International laws never proceed in such a uniform manner, all at once, without external coordination.
Of course the lobbyists are playing off of public sentiment and almost certainly working to actively fan those same flames. Notice that the laws aren't the most sensible or least intrusive, but rather just about the least privacy-preserving and most authoritarian-enabling "solution" you could possibly come up with. Also notice the convenient alignment of this outcome with various well-established ulterior motives of existing actors.