Yesterday, I randomly watched his full interview from a month ago with CBS Morning, and found the discussion much more nuanced than today's headlines.
https://www.youtube.com/watch?v=qpoRO378qRY&t=16s
Watching that interview, I got the impression that Geoff is a very curious person, driven by his sense of wonder. At the same time I couldn't help but feel that he comes across as very naive, or perhaps innocent, in his thinking. While he wouldn't personally use his creations for morally gray or evil things, I think it's clear we're already living in a world where ML and AI are in the hands of people with less than pure intentions.
Yeah, this "critique" seems incredibly bad faith to me. The actual problem in this hypothetical situation exists with or without the chat bot. Should we expect chat bots to act as police?
Can you clear up specifically what about the second video you think is difficult to understand?
I saw an example of a conversation where the Snapchat 'My AI' was tricked into grooming a child; if incidents like that are left unaddressed, the likely outcome is heavy regulation.
Trying to be diplomatic, but this is such an unnecessarily snarky, useless response. Google obviously did go slow with their rollout of AI, to the point where most of the world criticized them to no end for "being caught flat footed" on AI (myself included, so mea culpa).
I don't necessarily think they did it "right", and I think the way they set up their "Ethical AI" team was doomed to fail, but at least they did clearly think about the dangers of AI from the start. I can't really say that about any other player.
> Google obviously did go slow with their rollout of AI, to the point where most of the world criticized them to no end for "being caught flat footed" on AI (myself included, so mea culpa).
They were criticized because they are losing the competition, not because of the rollout; their current tech is weaker than ChatGPT.
Their current tech is weaker because they couldn't release the full version due to the additional safeguards, partly to prevent more people from claiming their AI is sentient, and partly due to cost cutting.
> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power
Translation: we cannot release our full model because it costs too much. We are giving the world a cheap and worse version due to cost cutting.
> It’s critical that we bring experiences rooted in these models to the world in a bold and responsible way. That’s why we’re committed to developing AI responsibly
Translation: we value responsible AI so much that we'd nerf the capability of the AI to be "responsible"
If someone more ambitious than Sundar were to be CEO I'm sure the recent events would turn out very differently.
They haven't created revenue-positive products yet despite billions in investment, while putting their main cash cow (Search) at risk by neglecting that area.
They use a lot of machine learning for ads and YouTube recommendations - the TPU makes sense there and, if anything, shows how hard they try to keep costs down. It's a no-brainer for them to have tried keeping Search as high-margin as possible for as long as possible.
Cade Metz is the same muckraker who forced Scott Alexander to preemptively dox himself. I don’t know Hinton apart from the fact that he’s a famous AI researcher but he has given no indication that he’s untrustworthy.
I’ll take his word over Metz’s any day of the week!
I've always thought about leaving a little text file buried somewhere on my website that says "Here are all of the things that Future Me really means when he issues a press statement after his product/company/IP is bought by a billion-dollar company."
More like HR said, “Well, there is option A where you leave and are free to do what you wish. And then there is option B (points at bag of cash) where you pretend none of this ever happened…”
I assume Geoffrey Hinton has enough bags of cash for his lifetime and a few more on top of that. IDK why someone so well compensated and so well recognized would agree to limit themselves in exchange for a, relatively speaking, tiny bit more cash. That doesn't make the slightest bit of sense.
"It doesn't matter if you take the bags of cash or not, we will do our best to destroy your life if you mess with us after you are gone. The bags of cash are a formality, but you might as well accept them because we have the power to crush you either way"
Large corporations like Google have a lot of resources and connections to really mess up a single person's life if they want to, with expensive legal action and PR campaigns.
Yeah, they might cause their reputation some damage by going after the wrong person, but let's be real here: Google's worst-case outcome would still be far better than Hinton's.
Edit: Note that I'm not actually saying that I think Google and Hinton have this level of adversarial relationship.
I'm just saying that big companies may come after you for speaking out against them regardless of if you've accepted hush money or not.
Given that, it's usually worth being tactful when talking about former employers, regardless of any payouts you may have accepted or agreements you may have signed.
I never use the maximize button because I double click somewhere on the title bar. Why use a tiny button instead of a conveniently large area? When I first installed Linux Mint years ago, I was going through the settings and noticed I could customize the buttons. I removed the middle (maximize) button, so now I just have more space between the minimize and close buttons.
It's perfect.
As a bonus tip, remapping double click to right click (also a setting in Cinnamon, don't need external software for this) also makes it way nicer to use (I never use the right click menu - after all, the buttons I regularly need are already right there and there's another button on the left for opening the menu, or you can use alt+space+t).
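For anyone who wants to script these tweaks instead of clicking through the settings dialog, a minimal sketch with gsettings follows. It assumes Cinnamon's window-manager schema (`org.cinnamon.desktop.wm.preferences`) and the key names below; exact keys and accepted values may vary by Cinnamon version, so check `gsettings list-keys` first.

```shell
# Sketch: replicate the title-bar tweaks from the command line.
# Assumes Cinnamon's org.cinnamon.desktop.wm.preferences schema.

# Drop the maximize button: menu on the left, minimize and close on the right.
gsettings set org.cinnamon.desktop.wm.preferences button-layout 'menu:minimize,close'

# Make double-clicking the title bar toggle maximize (the "large area" trick).
gsettings set org.cinnamon.desktop.wm.preferences action-double-click-titlebar 'toggle-maximize'

# Verify the current values before/after changing them.
gsettings get org.cinnamon.desktop.wm.preferences button-layout
gsettings get org.cinnamon.desktop.wm.preferences action-double-click-titlebar
```

These are per-user dconf settings, so they take effect immediately and survive a logout; resetting is just `gsettings reset org.cinnamon.desktop.wm.preferences button-layout`.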
In Classic Mac OS, the close button was on the other side of the window for the reason described in the article. It joined the others probably for familiarity for Windows users.
IMO their terrible performance is the #1 reason not to use their cloud services, and there's usually nothing you can do about it. The fewer resources they use for each customer, the higher their margins.
Except it's notoriously flaky. I've lost several save games, and support has done everything from claiming I'm lying, to blaming my router, to ghosting me. I can't use a provider that doesn't take data protection seriously.
While the evidence is light, is anyone surprised if this is true? My experience is that most cybersecurity firms are only slightly better than other enterprises. They often have lofty standards that they themselves don't follow.
They also have professional services arms that are similar to the rest of the industry: a handful of senior people and an army of junior engineers who bias toward velocity over quality (i.e., take shortcuts that can lead to data exposure and other issues).
I know an attorney who was quite capable legally and with tech and spent his career in both. He ended up at a legal organization that also dealt with security.
The cybersecurity industry is absolutely full of crappy security companies worth jack squat. The legal industry is full of Luddites.
Being capable in both areas = some serious demand / profit.
Yeah, I don't really agree. I have both software engineering and law degrees and would love to do something at the nexus of tech, law, and security, but there are very few jobs where deep knowledge of several of these is a real plus. It's at best an "oh, that's nice" level thing. I'm open to jobs in the south of The Netherlands, eastern Belgium or western Germany if anyone is looking :)
> My experience is that most cybersecurity firms are only slightly better than other enterprises.
They're often worse. I can't recall the study, but one of them compared software quality across industries, and security products were statistically worse than the others. Of course such studies are hard to really feel confident in, but it isn't surprising.
Exactly. Not to mention they've been found guilty of corruption and bribery in the past. Naturally, they're still allowed to operate devices critical to democracy.