What's just as worrying, judging by the comments here and in that GitHub thread, is the apparent disconnect between technical ability and ethical understanding. Perhaps naively, I'd have thought someone intelligent enough to develop a technology like this would also be intelligent enough to understand the complex ethical issues it raises. It seems, unfortunately, that there is no such correlation.
In fact, anecdotally, it seems the people with the technical ability are least likely to have a nuanced understanding of the ethical impact of their work (or, more optimistically, it's only people with the conjunction of technical ability and ethical idiocy who would work on this, and we're not seeing all the capable people who choose not to).
Also, what's with all the people in this thread coming up with implausible edge cases in which deep fake tech could be used ethically to justify a technology that will very obviously be used unethically in the vast majority of cases? It's almost useless for anything except deception—it is intrinsically deceptive. All the 'yeah but cars kill people so should we ban all cars?' comments miss the obvious point that cars are extremely useful, so we accept the relatively small negatives. The ethical balance is the other way around for deep fake tech. It's almost entirely harmful, with some small use cases that might arguably be valuable to someone.
> Perhaps naively, I'd have thought someone intelligent enough to develop a technology like this would also be intelligent enough to understand the complex ethical issues it raises. It seems, unfortunately, that there is no such correlation.
Correct. There is a tendency to think that because a person is exceptionally intelligent or skilled in one area, they must also be intelligent in other areas. It's simply not the case. An expert is authoritative in the areas of their expertise, but outside of those, their opinions are no more likely to be correct than anyone else's.
This error is often exploited in persuasion campaigns: assuming, for instance, that a brilliant physicist's opinions on social policy are more likely to be accurate than those of any random person on the street.
Yes, you are correct. Or perhaps the key word is "authoritative": being an expert in one field does not mean that a person's opinions in other fields are any more likely to be correct.