This is an "OS"-feature, where OS means the GUI-Framework that Firefox is using to integrate with the DE.
> If the OS doesn’t provide the feature, then that is the OS’s decision to make, and the browser should respect it.
That's a very strange claim. Nearly everything in any app is something the OS does not provide; that's why apps exist in the first place: to enhance the environment.
> Why would I want text input in one app to have a feature that text input in other apps lacks?
Why should they willingly cripple themselves just to align with other apps? Especially as this is a web browser, which has become the main way many people interact with a large part of the world.
VS Code also offers significantly more functionality than Zed at the moment. If you want to sell RAM usage as a phenomenal benefit, then you should compare it with similar editors, like Sublime or (Neo)Vim.
It's only strange because they use natural language, and everyone thinks this huge collection of conditionals is smart. Other software also has dumb filters and converters in its source code and queries, but everyone knows how dumb those behemoths are, so there is no expectation of a better solution.
But the real joke is that we educate humans in broadly similar ways, yet somehow think AI has to be different.
Git is already distributed by itself; two repositories can exchange commits directly, as in the sketch below. The management part is what's missing (merge requests, permissions, issues, ...), and it's debatable whether that is really necessary or just a nice-to-have.
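A minimal sketch of that built-in distribution, assuming only that the `git` CLI is installed; the peer names and paths are illustrative, not anyone's real workflow:

```python
# Two "peers" exchange commits directly via local paths,
# with no central server involved at any point.
import os
import subprocess
import tempfile

def run(*args, cwd=None):
    subprocess.run(args, cwd=cwd, check=True)

base = tempfile.mkdtemp()
alice = os.path.join(base, "alice")
bob = os.path.join(base, "bob")

# Alice creates a repository and commits a file.
run("git", "init", alice)
with open(os.path.join(alice, "notes.txt"), "w") as f:
    f.write("hello from alice\n")
run("git", "add", "notes.txt", cwd=alice)
run("git", "-c", "user.name=Alice", "-c", "user.email=alice@example.com",
    "commit", "-m", "initial commit", cwd=alice)

# Bob clones directly from Alice's repository: peer to peer.
run("git", "clone", alice, bob)

# Later, Bob picks up Alice's new commits the same way.
run("git", "pull", cwd=bob)
```

Everything above is plain git. What git itself never provides is the layer on top (merge requests, permissions, issue tracking); that layer is the product the hosting platforms sell.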
But they are programmable, quite freely even. The crucial point is whether you can start any program you want on the device. Having gates doesn't influence what's inside the gates.
I'm reminded of Zed Shaw's argument that python3 should not be considered Turing-complete if it can't run python2 code. It was a fun rhetorical exaggeration that I felt helped clarify the point: python3 isn't unable to run python2 code; rather, the people in charge decided that it shouldn't.
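For anyone who hasn't seen that distinction in action, here is a tiny illustration (the snippet and message handling are mine, not Shaw's): python3 refuses classic python2 syntax by design, even though, being Turing-complete, it could in principle host a full python2 interpreter.

```python
# The canonical python2 construct: a bare print statement.
# python3 rejects it at compile time as a SyntaxError, which is a
# language-design decision, not a limit of computational power.
snippet = 'print "hello"'  # valid python2, invalid python3

try:
    compile(snippet, "<py2-snippet>", "exec")
except SyntaxError as exc:
    print("python3 rejects it by choice:", exc.msg)
```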
They redefined AGI to be an economic thing so they can keep making up their stories. All that talk is really just business; there's no real science in the room.
It's not a great definition, but it's not a terrible one either.
For an AI system to be able to do all or even most of the jobs in an economy, it has to be well-rounded in a way it still isn't today, meaning: reliability, planning, long-term memory, physical-world manipulation, and so on. A system that can do all of that well enough to do the jobs of doctors, programmers, and plumbers is generally intelligent in my view.
> It's not a great definition, but it's not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy
That's not the definition they have been using. The definition was "$100B in profits". That's less than the net income of Microsoft. It would be an interesting milestone, but certainly not "most of the jobs in an economy".
Yeah, I think this is more coherent than people realize. Economically relevant knowledge work consists of things that humans find cognitively demanding; otherwise they wouldn't be valued in the first place.
It ties the definition to economic value, which I think is the best definition we can conjure, given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which are the best proxy we have for something so ambiguous.
> Economically relevant knowledge work consists of things that humans find cognitively demanding; otherwise they wouldn't be valued in the first place.
Deep scientific discoveries are also cognitively demanding, but are not really valued (see the precarious work environment in academia).
Another point: a lot of work is valued in the first place because it centers on being submissive and docile in the face of bullshit (see the phenomenon of bullshit jobs). You know better, but you have to keep your mouth shut.
Yeah, I'm sure there could be a better metric if its purpose were to track progress toward the AGI target, rather than to do business based on it (and thus hammer the metric into the shape of a "realistic goal").
Around the end of 2024, it was reported that OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit
Wow. Maybe they spelled it out as Aggregate Gross Income :P
Yea, seems like this was stage-setting for them to exit. They were already trying to break the deal then. So I feel like this is lawyers finding a way to bend whatever they can to get out of the deal.
For some definition of "artificial", this holds perfectly.
A massive self-running corporation with no people that generates billions in profit would, no matter what you call it, completely upend all previous structural assumptions of capitalism.
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity
> They redefined AGI to be an economic thing

Huh. Source?
I don't think your original comment deserved to be downvoted. (Calling someone illiterate, on the other hand...)
But the "it" I was asking about was "AGI" as "an economical thing." You technically correctly answered how OpenAI defines AGI in public, i.e. with no reference to profits. But it did not address the economic definition OP initially alluded to.
For what it's worth, I could have been clearer in my ask.
The question was about their redefinition of AGI in economic terms, for which others provided links, not the definition from their (obviously fake) mission statement.
BTW, I didn't downvote you (I hate it; if many people downvote a comment, it becomes harder to read), I was just trying to explain why others did. On second thought, my comment was wrong: your answer was related to the question, it just wasn't the intended one.
We'll have AGI when we are having serious conversations about AI rights, and when shutting off a model + harness is as impactful as a death sentence. (I'm extremely skeptical that, given the scale of compute/investment needed to produce the models we have, _good as they are_, our current LLM architecture gets us there, if that is even somewhere we want to go.)
It makes sense though. Humans matter to the economy based on their ability to perform useful work. If an AI system can perform work as well as or better than any human, then with respect to "anything any human has ever been willing to pay for", it is AGI.
I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.
It doesn't though; AGI has far greater implications than doing the mundane work of today. Actual AGI would self-improve, and that in itself would change literally every single thing about human civilization. Instead we are talking about replacing white-collar jobs.
An AGI that can do all that would also necessarily be able to do all white-collar work. That latter definition I'd consider a "soft threshold" that would be hit before recursive self-improvement, which I imagine would happen soon after.
Current estimates of the gap between the two are fairly small, bottlenecked most likely by compute constraints, risk aversion, and the need to implement safeguards. Metaculus puts it at about 32 months.
Sure, but that's like saying we're close to infinite life because we've extended our life expectancy.
I don't really buy the "one part equals another" reasoning; we are very quick to make those assumptions, but the results are usually far from what the science fiction promised. Batteries and self-driving cars come to mind, and organic or otherwise exotic storage technologies, all "very soon" for multiple decades now.
It's very possible that white-collar jobs get automated to a large degree while we end up no closer to AGI than we were in the '70s; I would actually bet on that outcome being far more likely.
I think AGI by that definition (the ability to self-improve) is closer than many people think, largely because current models are already close to human intelligence in many domains. They can answer questions, derive theorems, write code, navigate websites, etc. All the work that current AI research scientists do consists of these same general information-processing tasks, scaled up in terms of creativity, long-term coherence, sensitivity to good and bad ideas over the span of a larger context window, and so on.
The leap between Opus 4.7/GPT 5.5 and what would be sufficient for AGI seems smaller than the leap between the invention of the Transformer model (2017) and today. Thus, by a very conservative estimate, I think the time from now to an AI model as smart as any human in all respects will be no longer than the time from then to now (so by 2035). I think it will be shorter, though, because the amount of money being put into improving and scaling AI models and systems is 100,000x greater than it was in 2017.
This is an "OS"-feature, where OS means the GUI-Framework that Firefox is using to integrate with the DE.
> If the OS doesn’t provide the feature, then that is the OS’s decision to make, and the browser should respect it.
That's a very strange claim. Nearly everything in any app is something the OS is not providing; that's why apps exist in the first place, to enhance the environment.
> Why would I want text input in one app to have a feature that text input in other apps lacks?
Why should they cripple themselves willingly; just to align with others? Especially as this is a web browser, which has become the main way of interaction with a big part of the world.
reply