Does anyone really hold software developers to the same standards as licensed professions (lawyers, accountants, doctors, professional engineers, etc.)?
NB I am married to a lawyer and have a brother who is a professionally qualified engineer, so I have to be careful never to refer to myself as a "professional" as I have no professional qualifications. I'm also quite happy not to have the level of personal responsibility and liability that they have!
>> Does anyone really hold software developers to the same standards as licensed professions (lawyers, accountants, doctors, professional engineers, etc.)?
I don't know. But if electrical engineers, mechanical engineers, and software engineers are all working to build a jet, it would seem reasonable to expect the same diligence from them all.
>> I have to be careful never to refer to myself as a "professional"
You're a professional if you make a living at it.
"A professional is a person who is engaged in a certain activity, or occupation, for gain or compensation as means of livelihood; such as a permanent career, not as an amateur or pastime."
UPDATE:
BTW, licensing helps only if there's a reliable way to determine competence. Arguably, different areas of programming could be considered different fields of expertise.
There is a difference between the lay definition of "professional" and the legal or regulatory definition in some jurisdictions.
In many contexts a "professional" occupation is one that is regulated either by the government or by a self-regulating body: accountants, electricians, doctors, lawyers, engineers, etc.
>> In many contexts a "professional" occupation is one that is regulated either by the government or by a self-regulating body: accountants, electricians, doctors, lawyers, engineers, etc.
True, but as I edited my comment to say, these bodies can be self-serving as easily as they can serve the public. People already working in a field can inflate their wages by making it harder to enter that field.
That's one use of the term - in the UK (especially amongst people who are professionally qualified) the term often has a narrower meaning.
This doesn't give me any problems! And I am NOT suggesting it is appropriate for general development (although it probably is in a very narrow range of domains - e.g. safety critical systems).
I think that's your answer. How are you to hold an unlicensed professional accountable for anything that s/he's not already legally obligated to do? Note: I'm not making a statement in support of or in opposition of licensing software developers. It's interesting to think about, though.
I wouldn't really take pause in throwing around the term "software professional" unless I was around somebody who was a stickler about using the term "professional"... then I'd really start throwing the term around. People take themselves too seriously.
"How are you to hold an unlicensed professional accountable for anything that s/he's not already legally obligated to do?"
That's the sanction facing a lawyer/accountant/doctor/PE - if they mess up badly enough, their licensing body takes their license away and they can't work in that role anymore. Combine that with personal liability for the decisions they make and I'm quite happy not being that kind of professional.
I would change the push from ethical software to ethical development.
Software, like many drugs, is not "unethical" on its own; the user (or, to continue the medicine analogy, the doctor) is the one who puts it to an unethical use.
I agree that in our society, where software is pervasive, an ethical framework needs to be in place. As an oath, it will not force everybody to comply, but anyone who breaks it will be making the choice in full knowledge that it is an unethical one, and that amounts to intentionality.
Can I advocate that software developers should take an oath to write software that does no harm? The ideal would be for the law to protect software developers who refuse a particular assignment on ethical grounds.
Create a new (viral) software license, the Ethical Public License (EPL). The EPL is linked to the oath and forbids use of the source code in any project or system deemed to break the oath.
This might result in an economic incentive for companies like Path to curb bad behaviour, due to the cost of rewriting EPL code or using inferior code.
Note: The OSI will probably claim the EPL is not an open source license ("No Discrimination Against Fields of Endeavor") and the FSF will likely say the EPL isn't a free software license because it breaks Freedom 0 ("The freedom to run the program, for any purpose").
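To make the idea concrete, here is a hypothetical sketch of what an EPL-style notice might look like at the top of a source file (the license name, wording, and oath reference are all assumptions for illustration; no such license exists today):

    # This file is distributed under the (hypothetical) Ethical Public License (EPL).
    # It may not be used, in whole or in part, in any project or system that
    # violates the associated oath; for example, covert collection or transmission
    # of users' personal data without their informed consent.
    # Any derivative work must be distributed under this same restriction.

In practice the hard part would be the oath's wording and deciding who judges whether a given use "breaks the oath", which is exactly where the OSI and FSF objections above would bite.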
The irony of taking an oath so focused on privacy and naming it after a famous codebreaker is just fantastic.
Of course it's not the same thing, and I'd never argue that he was wrong to do it given the circumstances, but Turing's career was built on reading other people's communications against their will.
I honestly can't find the source of this, but I recall reading that Turing was against mnemonic assemblers (as they would consume scarce resources).
It's understandable given the computers of the time. But at some point you'll probably want to talk about quality, which so many programmers have discussed.
David Parnas would be a more suitable candidate, if you really want to name it after someone. But I'd like to point out that you may not want to name the oath after a person unless they are involved in drafting it.
Dijkstra, for example, spoke frequently on the topic of being ethical as a programmer / professor. Out of respect, I would never name something like this after him unless I could back up every word with his various papers.
Gotcha: but what about the people communicating with Hitler? What about their privacy? Would you extend the exception to them? If so, fine. What about the information they, in turn, relay to others? Is that subject to the exemption as well? What if Hitler only talks privately to one guy, who is within the exception, and that guy then turns around and talks to everybody Hitler intends to talk to, outside of the exception?
Draw the line at the same place you would draw it for engineers. Who do you think designed the tanks, guns, aircraft, and ships we used to defeat the Third Reich? How might that war have ended had no engineers been willing to design those weapons?
The same routine that helps target a missile from a drone, could help target viruses more effectively, but someone has to make the choice to use it for that purpose.
Things are amoral. The use and creation of things, however, is open to moral and ethical considerations.
Making a radiation therapy machine without the inclusion of safety mechanisms could be considered unethical. Making a computer vision system that tracks targets is amoral (what's a good word for something that is neither ethical nor unethical? I'm not fond of using moral/amoral for this), but applying it to a weapon targeting system hits an ethical gray area (as with any other weapon engineering task).
Like most things, the ethical status of an act is largely related to the intent and foreknowledge. Knowingly creating a system that will (not just could) be used for harming others (physically, emotionally, financially, etc) can be considered unethical. Making a system which is most likely going to be used for harm might be on the unethical side of gray, even if it has a number of potentially positive or neutral uses. Making a system that might be used for harm (consider knives) is probably not unethical, but then the manner of sale and marketing would become important for the calculation.
Tools merely empower us to do better what we were already doing. The tool is never bad.
We should probably take cues from other professions. A scalpel is never bad, and cutting up a person can be ethically correct.
I haven't thought carefully about this, but it seems that intention is the most important factor, followed by actual outcome. (E.g. doing bad things without intent isn't necessarily unethical behavior, but some bad actions are so gruesome that it is hard to forgive the lack of foresight; you might call that recklessness.)
Guns are pretty unambiguously tools for killing. Arguably, not all killing is "bad" though.
Nuclear bombs are pretty unambiguously tools for killing an awful lot of people.
Software is interesting because it is tools for making tools for making tools. If your software has a big green button as the input and a 3D printed nuclear bomb as the output, you may have created a bad tool. If it takes 2 steps to connect your software with one other piece of software to achieve the same result, you still might want to think about what you're doing. Three steps, four steps, five steps, and the ethical boundaries get just as difficult to resolve as most other human problems.
Privacy and security are very important, but if one is going to write a code of ethics for software, it should probably include something about not writing software that kills people (see, for example, Therac-25).
Therac-25 was designed to cure people, not kill them (See http://en.wikipedia.org/wiki/Therac-25). If we are talking about software that is used in drones or missiles, that is a different matter, and one that the oath should address.
It was designed to treat them, but it was designed poorly. Due to a combination of poor risk assessment and poor QA, hardware and software mechanisms to ensure safety were not in place. That is an ethical lapse on the part of the individuals and teams that developed it.
Therac-25 was not well designed, which is a much greater ethical problem than designing a machine to kill people. If the failure of your software can kill people, it is unethical to just hack together a solution.
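To make that concrete, here is a minimal sketch (hypothetical Python, not the actual Therac-25 code, whose design and failure modes were far more involved) of the kind of independent software interlock the machine lacked: the beam is enabled only when a hardware interlock reports safe and the requested dose passes verification, and any failure aborts the treatment rather than proceeding.

    # Hypothetical safety-interlock sketch; names and limits are illustrative only.
    class InterlockError(Exception):
        """Raised when a safety check fails; the beam must stay off."""

    def enable_beam(hardware_interlock_ok, requested_dose_gy, max_safe_dose_gy):
        """Enable the treatment beam only if every independent safety check passes."""
        if not hardware_interlock_ok:
            raise InterlockError("hardware interlock not engaged")
        if not (0 < requested_dose_gy <= max_safe_dose_gy):
            raise InterlockError("requested dose outside the verified safe range")
        print("beam enabled at %.2f Gy" % requested_dose_gy)

    # A mis-entered dose is rejected instead of silently delivered.
    try:
        enable_beam(True, 250.0, 2.0)
    except InterlockError as err:
        print("treatment aborted:", err)

The point is not these particular checks but that safety is enforced in software as well as hardware, and that every failure defaults to "beam off".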
I like the idea, but it is too black-and-white in some places. Never exploiting security vulnerabilities? There are certainly cases where that is justifiable -- attacking the Enigma machine, for example (speaking of Turing...). If World War II were to happen today, there would almost certainly be attacks on computer systems, and we would want our military to attack the enemy military's systems.
There is also the matter of law enforcement. It is better for law enforcement agencies to exploit vulnerabilities (with a warrant) than to require special back doors in software. No reasonable person can take issue with the police exploiting a vulnerability to catch a mafia hit man, a criminal arms dealer, etc. Some hacker needs to be willing to write the software that exploits those vulnerabilities. I would say that writing software with back doors is a much more serious ethical problem than exploiting unintentional vulnerabilities.
Utilitarianism comes to mind: seek to maximize the benefit to society. Sometimes that means attacking, sometimes it means defending, sometimes it means refusing to create the software you are told to create.
As I noted in a comment on GitHub, I would not conflate this issue of privacy and ethics, as it relates to user information collected by applications, with things like war crimes and crimes against humanity.