I dislike OpenAI because they were founded to work on AI safety, and the most anti-safety thing you can possibly do is encourage competition over AI capabilities, which is exactly what they are doing over and over again.
AFAICT, “AI safety” was a term created by the overlapping (sometimes in the same body) group of X-risk cultists and corporate AI marketeers as part of their effort to redirect concern from the real and present problems created and exacerbated by existing and imminently-being-deployed AI systems into phantom speculative future problems and corporate prudishness.
X-risk concerns have been around for a long time and were not invented by AI marketeers. I agree that the marketeers are abusing the concept to try for regulatory lock-in and to make their products look maximally impressive.
> X-risk concerns have been around for a long time and were not invented by AI marketeers
I didn't say X-risk concerns were invented by AI marketeers, I said the “AI safety” language was invented by the overlapping groups of X-risk cultists and AI marketeers, some of whom (Sam Altman, for one) are the same people.
AI safety is the dumbest idea in the world, held by people who think computers are magic, so confusing its meaning is great. The original AI safety people now think LLM training might accidentally produce an AI through "mesa-optimizers", which is more or less a theory that if you randomly generate enough numbers, one of them will come alive and eat you.
If there's any magic being alluded to, it's by the people who say that AIs will never reach or exceed human intellectual capabilities because they're "just machines", with the implication that human brains contain mystical intelligence/creativity/emotion substances.
"AIs will never reach or exceed human intellectual capabilities" is an example of Wittgenstein's point that philosophical debates only sound interesting because they don't define their terms first. If you actually define "AI", this is, I think, either trivially true or trivially false.
In the cases where it's false (you could get an artificial human), it still doesn't obviously lead to bad real-life consequences, because that relies on another unfounded leap from superintelligence to "takes over the world", ignoring things like how it pays its electricity bills and how it solves the economic calculation problem.
It's more like having children. Sure they might become a serial killer, but that's a weird reason not to do it.
True, and a good way to explain it to a layperson is through a comparison of HTML and Python.
Are there any implementations of Python in HTML? No, because HTML is not a programming language. Are there any implementations of HTML in Python? Many, because Python is a programming language.
Given these observations, one can easily imagine that HTML is a weaker language than Python.
So if HTML is weak, let's make it stronger! Let's add more header tags to webpages: instead of a handful, HTML now has 1 million headers! Is it less weak now? Does it come closer in strength to Python?
No, because the formal properties of HTML did not change at all, no matter the number of headers. So, are the formal properties of the grammar generator called GPT any different depending on how many animals it has statistical data on? No, the formal properties of GPT's grammar do not change at all, whether it happens to know about 3 animals or a trillion.
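To make the asymmetry concrete: Python's standard library actually ships an HTML parser written in Python (`html.parser`), while nothing computes Python in pure HTML. Here is a minimal sketch using that stdlib module (the `TagCounter` class and the sample markup are just illustrative):

```python
from html.parser import HTMLParser  # an HTML implementation written in Python, in the stdlib


class TagCounter(HTMLParser):
    """Counts opening tags -- a trivial computation, but still one that
    HTML itself cannot perform, since HTML describes structure and
    computes nothing."""

    def __init__(self):
        super().__init__()
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        # Called once for each opening tag the parser encounters.
        self.counts[tag] = self.counts.get(tag, 0) + 1


parser = TagCounter()
parser.feed("<html><body><h1>One</h1><h1>Two</h1><p>text</p></body></html>")
print(parser.counts)  # {'html': 1, 'body': 1, 'h1': 2, 'p': 1}
```

Adding a million more tag names to the markup would change the dictionary, but not the fact that all the computation lives on the Python side.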
While I dislike the silliness that you're alluding to, I think you're using multiple meanings of the phrase 'AI Safety' there all lumped into one negative association.
There are real risks, especially in a profit-motivated capitalistic environment. Most researchers don't take the LessWrong in-culture talk seriously. But I'm not sure many people will be able to actually understand the concerns of that group given the way you've presented their opinions.
> Most researchers don't take the LessWrong in-culture talk seriously
Yes, but politicians do, for some reason. "AI safety" has become a meaningless term, because it is so broad that it ranges from "no slurs please" through "diverse skin colors in pictures please" to the completely hypothetical "no extinction plz".
“diverse skin colors in pictures”, and, more critically, “AI” vision systems in government use for public programs should work for people of different skin colors, is not so much “AI safety”, as the kind of AI ethics issue that the broader “AI safety” marketing campaign was designed to marginalize, dilute, and distract from.
Look, there is no AI safety advancement without AI capability advancement. I think we can learn fuck all about AI safety if we don't try to actually build those AIs, carefully, and play around with them. AI safety is not an actual field of study when you don't have AIs of a corresponding level to study; otherwise there are zero useful results.
Sure, but you snuck an assumption in there. Just because AI is possible, or someone else will do it, doesn’t obligate us to build it. If we can’t make AI without risk of significant or existential harm, then we shouldn’t do it at all.