Consider all of the shenanigans at OpenAI: https://www.safetyabandoned.org/
Dozens of employees have left due to lack of faith in the leadership, and the company is in the process of converting from nonprofit to for-profit, all but abandoning its mission to ensure that artificial intelligence benefits all of humanity.
Will anything stop them? Can they actually be held accountable?
I think social media, paradoxically, might make it harder to hold people and corporations accountable. There are so many accusations flying around all the time that it becomes harder to notice when a situation is truly serious.
There's a big difference between can and will. We absolutely can hold people and corporations accountable, but we often don't. We cannot hold a computer responsible for anything. It's a computer. No matter how complex or abstracted, its output is entirely based on instructions and data given to it by humans, which it interprets and executes exactly as humans designed it to. It can't be discouraged or punished: a computer doesn't care whether it's on or off, whether it's the most important computer ever to have existed or a DOA Gateway 486 from the early '90s that sat in a dumpster from the day it was built until the day it was smashed to bits in a garbage compactor at a transfer station. It doesn't care because it can't care. Anything beyond that is anthropomorphization.
> might make it harder to hold people and corporations accountable
The problem is that someone (or some organization) chose to deploy that system, and if the errant system can't be corrected or replaced with a new one, the responsibility rebounds to whoever controls it, whether at the level of the source code or the circuit breaker.
Corporations are regularly "held accountable". Remember that "accountable" just means "required or expected to justify actions or decisions; responsible."
When you sue a corporation, discovery compels it to share its internal communications. You can depose key actors and require them to describe events; they can be cross-examined. A trial continues this process. That is the very definition of "accountable".
The problem at OpenAI is that the employees were credulous children who took magic beans instead of a board seat. Legally, management is accountable to the board. In serious cultures that believe in accountability, labour demands seats on the board. In VC story-land, employees make do with vague promises with no legal force.
>The problem at OpenAI is that the employees were credulous children who took magic beans instead of a board seat. Legally, management is accountable to the board. In serious cultures that believe in accountability, labour demands seats on the board. In VC story-land, employees make do with vague promises with no legal force.
This is not a good description of the incident. The employees I mention in my comment, who quit due to lack of faith in Sam Altman, were presumably on the board's side in the Sam vs board drama.
There is still a chance that OpenAI's conversion to for-profit will be blocked. The site I linked is encouraging people to write letters to relevant state AGs: https://www.safetyabandoned.org/#outreach
I think there's a decent argument to be made that the conversion to a for-profit is a violation of OpenAI's nonprofit charter.
My point is: accountability is NOT an abstract property of a thing. It is a relationship between two parties. I am "accountable" to you IF you can demand that I provide an explanation for my behaviour. I am accountable to my boss. I am accountable to the law, should I be sued or charged criminally. I am NOT accountable to random people in the street.
Sam Altman is accountable to the board. The board can demand he explain himself (and did). Management is generally NOT accountable to employees in the USA, because labour rarely has a legal right to demand an accounting. In serious labour cultures (e.g. Germany), it is normal for unions to hold board seats. Those board seats are what make management accountable to the employees.
OpenAI employees took happy words from sama at face value. That was not a legal relationship that provided accountability. And here we are. Those who decided to convert from a not-for-profit are accountable to the board, and perhaps to the Delaware Court of Chancery.
Per Landian-Accelerationist theory, companies are already artificial intelligences. As we've seen, they can be held accountable, and the law (at least in the US) does distinguish in a variety of ways between corporate responsibility and personal responsibility. As you point out, there are lots of failure cases here, and it's something I expect to see continue to be litigated over the coming century.
Correct: for Nick Land, "Business ventures are actually existing artificial intelligences"[0], and the failure cases will increase with the ongoing autonomization of capital; eventually the concept of "capital self-ownership"[1] will have to be recognized.
[0] Nick Land (2014). Odds and Ends in Collapse Volume VIII: Casino Real. p. 372.