I skimmed through this because it was really obvious where it was going.
> The unifying factor in all of the stories I’ve told is that a developer wrote the code that did these unethical or immoral things. As a profession, we have a superpower: we can make computers do things. We build tools, and ultimately some responsibility lies with us to think through how those tools will be used.
No, it does not. It still boggles my mind that people actually seem to have this model of morality. If everyone thought that way, the world would not have nuclear power, to name just the most obvious example that comes to mind.
Nobody is "responsible for" making it possible for others to do evil. That's their choice.
(To expand a bit, because clearly not everyone shares my intuition: if this sort of moral failing can attach at one degree of separation, logically nothing prevents it from attaching at arbitrarily many degrees of separation. If I become culpable because my software "is used to kill people", then any person or company who enables me to write the software faces the same judgment. Vim is used to write software that is used to kill people. Do you really want to put that on the guy[0] who tried to get you to donate to humanitarian aid in Uganda?)
(The same logic is used by activists to demand participation in various seemingly arbitrary corporate boycotts - and people I care about have been harassed because of it. The thing about "following the money" is that it goes literally everywhere.)
[0]: https://en.wikipedia.org/wiki/Bram_Moolenaar
I think there's a lot of daylight between "made a text editor that in turn was used to make software, and that software in turn enabled killing machines" and "the core functionality of this software is to facilitate the killing of specific people by finding their cell phones". In the former case, the lethal (and indeed, military) application is pretty disconnected from any of the design decisions that led to its creation. In the latter, failing to recognize that the lethal application drove all design requirements ultimately hampered the delivery of the product.
To an extent, I'm with you. Is a brick maker responsible if someone uses one of his bricks to commit a murder? Obviously not. On the other hand, if you work on, say, proximity triggers on missiles and never stop to wonder who your missiles will be used on, I'd say you've abdicated a core ethical responsibility.
I don't have a good answer on where the line is, and I've thought about it a lot.
The quote was "ultimately some responsibility lies with us to think through how those tools will be used". It's saying that we as builders have a responsibility to consider and be mindful of the impacts of what we're building. It's not saying that the maintainer of wget is directly responsible for a system used to exfiltrate data from a database of political asylees.
To take your same point to its logical conclusion, no one is responsible for evil aside from the one who pulls the trigger.
> It's saying that we as builders have a responsibility to consider and be mindful of the impacts of what we're building.
Yes, and my argument is that building those things doesn't have the "impact" referred to; using them does.
> To take your same point to its logical conclusion, no one is responsible for evil aside from the one who pulls the trigger.
Tools, by their nature, have explicitly designed uses, and potential cascading consequences of that use. This is one of the reasons that open-source licenses include a disclaimer of warranty: for the legal protection of the author, not just from claims by users, but also from claims by third parties injured by those users.
As far as culpability goes, I'm much happier drawing the line in a place where, if you keep applying the same logic you used originally, it will stay put rather than moving inexorably further. Contrary to what many have tried to tell me, intent does matter, a whole lot.
Good rebuttal; a lot of absolutes in philosophy can just be extrapolated out to absurd conclusions on both sides.
As an aside, one thing I notice a lot on internet forums is the tendency to immediately jump to these two extremes (often as a form of strawman). Might be projecting here, but I think it's an attempt to get internet points and seem smart, e.g. debate culture. Though I could totally see that maybe everybody can intuitively see these absurd conclusions, so it follows that there will probably be one genuinely disgruntled reader who finally reaches their breaking point. I know I've certainly made similar comments.
How do we navigate this line? Ultimately I think the answer can only lie in human experiences, and thus I'm glad that the original article exists. It's another datapoint. (though this spawns a whole other discussion about how we get our data)
I think both things can be true. A person can take the position that developing something which will be used to directly kill someone is wrong, and thus refuse such jobs. A person can also take the position that software itself is morally neutral, and how it is used is the choice of the user.
Both of these are perfectly valid lines of moral reasoning. Which one you choose is going to be a personal decision. Debating about which one is more "right" devolves into a philosophical discussion.
Does it "boggle your mind" that some people choose to be vegetarians?
The point is that "which will be" is load-bearing. The idea that someone would feel moral qualms about how software is used by the military is incongruous with signing up for the job in the first place. The military does in fact do a lot of things that are not killing people, and presumably could also find a use for the technology described that does not involve killing people. Like, say, locating allies for a rescue effort.
I'll also say that there's a huge difference between an IT specialist in the military and a Tier 1 Special Forces Operator. One may facilitate killing, but it's not a core competency, and you can spend an entire enlistment term doing absolutely nothing that might suggest your relation to a lethal machine; indeed, you can do your job and never think about what you might be facilitating. The other exists to kill, and occasionally explicitly murder, and failing to make peace with that reality makes them less effective at their job. Both are ultimately necessary.
I understood the article as reminding folks to actually think about what you might be facilitating, and make your choices accordingly.
By this logic, software developers of life saving medical technology should have zero pride or comfort in knowing that their work directly saved lives. And the janitor at a hospital should feel no pride for helping to clean a place that makes people feel good.
Of course I'm not supposing that software developers are utterly disconnected from the use of their software. But what matters is the designed purpose of the software, not the (even reasonably foreseeable) motivations of any particular client.
The code described at the beginning of the story locates objects, which are presumed to be in the vicinity of a person, whom the military might then kill, after presumably having used that location software without the victim's consent. By this standard, we should hold Tim Cook responsible for every case of stalking involving the use of an AirTag.
What it sounds like you're opposed to is even the consideration that your work can be used for purposes outside of what it was created for.
And um, yeah, the AirTag release was bad enough for obvious reasons and Apple had to make significant privacy changes. Almost as if they forgot to consider that their work could be used for purposes outside of what it was created for. Could've been safer from the beginning.
Edit: turns out Apple is being sued over AirTags' use in stalking [0]. So I'm pretty confused by your point, since presumably Cook would care about Apple being sued. Will he personally be criminally liable? Doubt it. But it's not like he's blameless or ignorant of the situation.
Can I ask -- what do you do for work? My gut tells me that you're younger and still trying to figure out whether your means of living is good or justified. And you haven't exactly found that answer yet.