Stop developing this technology (github.com/iperov)
263 points by chetangoti on Feb 2, 2023 | 399 comments


Regardless of what I think about this technology in particular, I want to respond to this line from the second comment: [1]

> on the contrary, it must be developed.

No, it mustn’t. There’s not a gun to your head, forcing you to do this. You want to develop this technology. That’s why it’s happening.

Technology isn’t inevitable. It’s a choice we make. And we can go in circles about whether it’s good or bad, but at least cop to the fact that you have agency and you’re making that choice.

[1] https://github.com/iperov/DeepFaceLive/issues/41#issuecommen...


From the main page: “Communication groups [...] • mrdeepfakes - the biggest NSFW English deepfake community”, which strains my assumption of good faith debate regarding their ethics.


If someone makes a deepfake on their own computer, watches it, and doesn't share it with anybody, I don't see how that's markedly different (morally) from just imagining the same thing. Some people have a very strong visual imagination and others don't have it at all; it seems only fair that they can use a technological substitute.

Also some entertainment works by artificially instilling a desire which cannot be fulfilled. If people can use deepfakes and masturbation to defuse that desire it might be a moral positive for them.


I think it's a fundamentally different thing, morally speaking.

Imagination is fleeting, ephemeral, jumbled, and usually linked to a specific state of mind that passes once the person has either grown bored of the imagination or achieved whatever satisfaction they wanted from it.

A deep fake image or video is persistent on your device and may leak some day, but worse is that it feels real in a way mere imagination never can, and it's something a person could go back to over and over, feeding into and enhancing their obsession with the non-consenting person.

I think it's a bad thing in ways that imagination and porn are not.


What if I photoshop my crush's face onto a naked body? Wouldn't that be similar to having a personal deepfake?

I agree with the person you're replying to: you can't stop people doing these things in the privacy of their own home, and these deepfakes are functionally similar to other photo / video editing techniques.

_Distributing_ those photoshops / deepfakes is another thing entirely though, and one I am fully against.


> What if I photoshop my crush's face onto a naked body? Wouldn't that be similar to having a personal deepfake?

Yes, and it's already widely considered to be really fucking weird behavior.


My childhood was typical. Summers in Rangoon, luge lessons. In the spring we'd make meat helmets.

Of course it took years for me to perfect my artistic skill in the area of erotic fingernail sculptures and by that time anyone could afford a Photoshop license.


Fully agree with ya there, but is it _immoral_? I would say no


The question wasn't whether people can be stopped from doing it, because for the most part they can't be, but whether it's moral. Or at least different in a moral sense from merely imagining someone else doing those actions.

And yes, photoshopping your crush's face onto a naked body is just as unethical and immoral, even if done in the privacy of your own home, for the same reasons I mentioned above.

Compared to deep fakes, however, it requires much higher effort for lower reward and less realism, which acts as its own inhibitor. Deep fake generators are becoming trivial to use, and not just for one or two pictures but for as many videos as you'd want. That's going to result in a very different driver of obsession.


> photoshopping your crush's face onto a naked body is just as unethical and immoral, even if done in the privacy of your own home, for the same reasons I mentioned above.

The reasons you mentioned above were "it feels more real than imagination" and "a person could go back to it over and over, enhancing their obsession with the non-consenting person". These are not moral bads that require a "no lusting after people, even in your heart" style rule. Being obsessed with people starts in high school and can persist for decades. They never find out, and it never hurts them.

I'm sure many people would like to take away others' ability to picture them naked, but fortunately they didn't have the ability to enforce that by starting a moral crusade against a Github repo. It's one of life's innocent little pleasures.


I never said this was about a "no lusting after people, even in your heart" rule, nor do I believe it should be. If someone lusts after someone else in their head and never does anything to harm the other person there's obviously nothing wrong with that.

It's immoral because it's creating something persistent that could harm someone without their consent, amongst other things.

I also question your assertion that obsessions of this type are always harmless. It's obvious that many are not, and escalate into outright stalking, confrontations, or worse. Anything that makes that sort of obsession more likely, or helps to enhance it where it exists, increases the chances of that happening.


Morality evolves with technology. At some point in the future, even killing someone might not be very immoral, if technology reaches the point where we can back people up and recover them from backup.


Maybe, but what’s the relevance? We can’t predict that, nor can we adjust today’s morality to match what technology might enable in future.

Perhaps murder will one day be not very immoral, but that doesn’t mean we should treat it any differently today.

Right now, creating fake naked pictures and videos of real and non-consenting people is immoral. In some jurisdictions it’s illegal. It’s also probably harmful, by taking obsessions further than mere imagination would allow.


mrdeepfakes.com is a site where deepfakes are being shared publicly, so your comment doesn't have much to do with the comment you actually replied to.


https://en.wikipedia.org/wiki/Aphantasia is a real thing and affects about 4% of people.


> strains my assumption of good faith debate regarding their ethics.

Let's flip the argument upside down. Let's imagine you want to produce porn content, but don't want to be recognized. This technology allows that.


Side note about that: people doing amateur porn where their face doesn't show at all still get recognized because of particular marks on their skin. So this might help this very specific case, but maybe not as much as one would think.


It's trivial to use Stable Diffusion img2img with low noise to change the marks on your skin.
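
For illustration, here's roughly what that looks like with the Hugging Face diffusers library (the model name, prompt, and strength value are my own assumptions for this sketch): a low strength setting adds little noise, so the output stays close to the source frame while small local details like skin marks change.

    # Hypothetical low-noise img2img pass using Hugging Face diffusers.
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    source = Image.open("frame.png").convert("RGB")  # placeholder input

    result = pipe(
        prompt="photo of a person, clear skin",  # illustrative prompt
        image=source,
        strength=0.3,        # low noise: output stays close to the source
        guidance_scale=7.5,
    ).images[0]
    result.save("frame_out.png")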


I've already seen porn where someone's face is blurred.

And whose face are you going to put in that porn with this?


The readme for the repo has examples of AI-generated faces. Think "This Person Does Not Exist", but better.


Emma Watson, probably


TBH I got curious. Clicked on one celeb, and oh man, I've seen 90's Photoshop cut-and-paste jobs that put a face onto a different body better than those "deepfakes"...


Nothing about producing or consuming deepfakes suggests bad faith.

You might not like the art, but deepfakes are not harmful.


“NSFW deepfakes” literally means “porn with real people’s faces inserted.”

Maybe that’s an art form, but it’s also clearly potentially harmful in the same way as revenge porn, teenage nude selfies, and other cases where regular people and porn intersect.


This sounds like something literally 90% of humanity does in their imagination at some point in their lives. It's inevitable that someone is going to do it in a video when the tech becomes good enough.

I'd say it's not the thing itself that's a problem, it's publishing whatever you create as if it's real. There are many things in life that are potentially harmful but are part of everyday life, since most people don't actually harm anyone with them.


You do see the difference between a thought, shared with no one and visible to no one, and an image on a computer, which can be uploaded to the internet, right? This isn't about stopping sexual desire, it's about not distributing images of someone's naked body without their permission. That it's not a guaranteed accurate representation is sort of beside the point. It's good enough if you've never seen them naked before. Some people have gone on record saying they don't like it when it happens to them. Why, do you think?


> You do see the difference between a thought, shared with no one and visible to no one, and an image on a computer, which can be uploaded to the internet, right?

Why do you assume it’s shared with no-one? The representations change, but I can guarantee you that Bianna the Beautiful had several suggestive clay figures created of her without her permission.

Not to say that that was necessarily great; Bianna would probably have been upset too. But trying to fight the method of representation instead of the sharing seems like a fool's errand to me.


99% agree. The 1% is for the potential for deepfakes to solve blackmail. If producing deepfakes becomes trivial, any blackmail threat can be dismissed with “Whatever, I’ll just say it’s a deepfake”.


This is not how humans work. False statements about someone damage that person, even once everybody knows they're false.


But with this argument you could blackmail someone whether or not you have some evidence.


You indeed can. Any of your coworkers can report you to HR right now for allegedly saying to them in private that you find underaged boys attractive. Wouldn't that at least be quite stressful to you?


Personally, I've been on the fence about doing sex work. On the one hand, I'm nervous about people identifying me based on my face or identifying marks, but lately I feel like I shouldn't care, I should be open about it and that way I can't be blackmailed.


Why is it clearly potentially harmful? It is not clear to me at all, as photoshop already exists.

How is someone harmed by someone else producing video that features people who look like them? The whole "seeing is believing" thing hasn't been true for ages; every movie is half CG these days.

I would perhaps buy the premise if deepfakes were sprung on the scene in the 70s or 80s, but the 90s and 2000s have inundated so many with completely fantastical high res CG imagery that nobody thinks videos are proof of anything anymore.


Do you agree that most people wouldn't like to find a nude video of themselves online?

Now why does it make a difference if the video is real or not? It's still a video of a nude body with your face on it.

I find this whole deepfake-porn trend to be incredibly disgusting. And it just saddens me that heterosexual males apparently immediately have to exploit such things for their own horniness. I guess this is also the reason why this topic is not being criticized as much: the people who develop and consume these videos can't comprehend how disrespectful it all is towards the people (women) who are being deepfaked, especially when the results are shared online, in most cases publicly.


I think that most people would not care if a video is posted online of them if the video is in fact fake.

Would you care if someone posted a nude image of someone with your face photoshopped on it? If so, why?


This recently happened to a large number of women who are Twitch streamers, and they are all very upset. You are free to seek out and read their explanations as to why; I'll include a few tweets.

https://twitter.com/qtcinderella/status/1620264657926885380?...

https://twitter.com/mayahiga/status/1620586546083803136?t=c7...


> Would you care

Yes.

> why?

Because it's insanely creepy for someone to photoshop my face onto a naked body without asking me and then post said picture online?!? Especially if it's done so well that people don't notice that it's not actually me.

If your rebuttal is that "not everyone cares if deepfaked photos of themselves are being published online" then the answer is pretty simple: As long as you don't know if someone minds it, don't fucking do it.

It's really alienating to me when I think about the fact that we're discussing if it's okay to upload deepfaked nudes of someone without their consent.

The only reason I can come up with is that some people have probably consumed lots of deepfakes and now don't want to admit that it's maybe a bit creepy and wrong.


I just don't understand why anyone would care?

You don't need consent from anyone depicted in visual art to make visual art. It literally has nothing to do with them.

I see no moral, ethical, or consent issue with deepfakes whatsoever. People are allowed to make whatever sort of CGI they can imagine, and, most critically: they should be.


> You don't need consent from anyone depicted in visual art to make visual art. It literally has nothing to do with them.

Yes you do. https://en.wikipedia.org/wiki/Personality_rights


> You don't need consent from anyone depicted in visual art to make visual art

We're talking about (mr)deepfakes. Which is visual art, yes. Nude visual art.

That's kinda the point. The nudity is what makes it immediately not okay.

If you just deepfake someone's face onto Jason Statham's body, they will probably enjoy it and laugh about it.

If you deepfake their face onto someone who's getting roughly f*cked in the ass, they might not like it that much.

But then again, not just nudity is wrong. Deepfaking someone's face into a video to hurt them is wrong as well.

I guess what I'm trying to say is that I really think that AI Ethics should be pushed harder and more seriously.

If someone deepfakes their favorite actress's head into a porn video, I still think that it's wrong. What I would appreciate is if I could at least agree with this person that what they're doing is not 100% okay.


> I think that most people would not care if a video is posted online of them if the video is in fact fake.

I think most people would care a lot. After all, it doesn't matter if it's fake or not; it matters if anyone you care about thinks it's real. And some almost certainly will.


Have you asked the people this happened to or are you just making assumptions? It’s important to critically examine our own biases in things like this, particularly if we have no idea why something evokes a strong emotional reaction.


Wouldn't it take a dramatically different level of effort to photoshop every frame of a video convincingly than to create a deep fake of the video?


We've not had the test case, but in places where nonconsensual or "revenge" porn is criminalized, deepfakes may well turn out to be illegal. They're certainly distressing for the victim.


It's already illegal for a person to pretend to be another real person without their consent.


This needs an asterisk (and I'll include one of my own: IANAL). In the United States, impersonation is regulated by the states and not consistent across them, but in general this is only true when using impersonation to commit fraud, defamation, or another act that itself is likely a crime already. Many cases would be considered parody or satire, even if no one finds them funny.


This may be the case in some places, and fraud is illegal in general, but should it be illegal to make parodies of politicians using paid actors? I think most people would say no.

Then why should it be illegal to do the same with an AI?


This is a shallow take, no matter which way you look at it. "Guns don't kill people."


Nearly everything remotely related to deepfake porn not only "suggests" bad faith, but positively compels the judgment of extreme levels of bad faith.


I understand the sentiment but I disagree. It's just another disruptive piece of technology that people will have to adjust to. If you can't trust a digital face, then nobody will, and using this tech for scamming purposes will simply fizzle out after an initial "adjustment period". I know that sounds rough, but society might react differently than you think, i.e. with digital faces being useless for identifying people, and maybe even becoming creepy or unsettling because of the implied fakeness, meeting people in real life (opening bank accounts, transactions in general, anything where trust is valuable) will have more value. Don't fight progress in the hopes you can one day become complacent.


It will fizzle out right after fraud involving the telephone or email dies out.


I don't see a difference between any technology. So if we can frown upon some nation for developing nukes, we can frown upon a group of people for developing this.

On the other hand somebody can say "if we don't, then others will". So regulate and ban the technology, then.


"If we don't, then others will" is a bankrupt argument put forth by people who are trying to justify doing things they know are harmful. It would be more legitimate if it were "we won't, but others will".


This is what I was trying to point to, and why I added the "ban it, then" argument. Because there's a considerable amount of people who operate with the "if it's not illegal, then I can do this" mindset, and there's a big intersection of these two mindsets.

At least putting a ban on it will make people think, I hope.


Agreed.

I'm not sure how I feel about "ban it" -- I can argue that either way -- but I do think that arguments that banning something is pointless because people will do it anyway are misguided.

Regulations don't eliminate the effects of bad actors, but they do reduce the number and severity of them.


> So regulate and ban the technology, then.

When has that ever worked? What banned technology is conceived of but not developed because some government entity said not to?

I think the "if we don't, then others will" is part of the natural progression of technology. Whatever the next logical step of development is, that's where development efforts will flow. Some might not want to go there, but some will. Someone banning it will likely only fan the flames and drive more interest into the space - ala the "Streisand Effect".

When the US government (et al) labeled certain numbers "illegal" because they could be used to break DRM or certain encryption types, academia and hackers alike openly mocked the notion. T-shirts, stickers, and websites sprung up further spreading this "illegal" knowledge. People who had no idea about how a number could be so "dangerous" suddenly wanted to know. By telling people they can't know or do something will absolutely drive people toward that knowledge.

The hacker mentality often answers the question "why?" with "because I can". Saying you cannot only encourages more to jump in.


I see banning things as useless in the long run, but at least some people will think about why it might be banned.

I was thinking about throwing a wrench into the machine by saying "ban it", to make it stutter, like breaking the chain of obedience in the Milgram experiment, or like the woman who walks up to Zimbardo and stops the Stanford Prison Experiment (see his TED talk).

Because, as a hacker and programmer, I believe that we have ethical obligations, and this "We're doing something amazing, we need no permission" stance in these communities genuinely worries me.

Technology is not only technology; it affects people's lives. Anything which can damage those lives beyond a certain point by exploiting human nature is in the same category for me.


Trying to protect low security by censorship is a repetition of history: https://www.zdnet.com/article/chilling-effect-lawsuits-threa...

It's like banning the Iliad because it describes the Trojan Horse.


I think you're missing the point here, because I'm not saying anything along those lines.

What I say is: this is a dual-use technology, dangerous beyond a certain point, so it might need to be regulated or banned at the end of the day.

In an ideal world, we shouldn't need this, but we don't live in that ideal world.

For example, I can't independently develop an amateur rocket which can land in an area of my choosing by actively steering itself, beyond a certain accuracy and precision, because it would be a homing missile. In the same vein, I can say that this technology can be used to damage other people.

Or, I can't get some enriched uranium to build myself a small, teapot sized reactor to power my house during power outages.

Can we say that we're censoring research in these areas too, because they're low security things?

This is the same with the latest A.I. developments. However, I'm a bit too busy to open those cans today.


Nuclear technology is low security indeed, and it's a technical problem, and uranium isn't exactly an abundant element. Untrusted data is a problem of stupidity in comparison. But stupidity causes problems with any technology. It's a "spoons can harm people" tier problem.


It's possible to buy anything given the right price when it's not regulated or banned due to dual-use technology restrictions. I'm sure that, while expensive, I'd be able to get the required equipment for the right money, from the usual suspects (i.e. I'm sure there'd be microcontroller boards for controlling reactors up to 200KW, or servo kits for 12 fuel, 12 regulator rod configurations, from Adafruit for example).

> Untrusted data is a problem of stupidity in comparison.

In the past, wrong data showed itself because of a lack of coherency. With today's advanced misinformation operations, it has almost become an alternate reality game. A.I. now allows us to generate convincing lies at the push of a button. I can only imagine what kind of misinformation bubbles can be built with technology like that.

These technologies attack the lowest-level instincts of humans, instincts we have deemed utterly reliable for thousands of years. They are on the same level as the manipulative algorithms, in my mind. I put these into the dangerous and harmful category.

This is not a case of stupidity. This is plain old, and very dangerous kind of, manipulation.

Downplaying this is not wise.


That a law is broken by people does not mean we don't need it.


I say frown away! We all know what this is used for, and it's tasteless at best. But I like to think in solutions, so instead of trying to arms-race this tech into submission with ever cleverer deep-fake detectors (and thus be able to regulate), why not accept its existence and change ourselves? Not least of all because any digital tech is like a permanent mutation to the digital DNA of our global society.


> why not accept its existence and change ourselves.

That works too, like we accept the algorithms which tie people to screens and used to spread misinformation and manipulate the masses.

> We all know what this is used for and its tasteless at best.

Yeah, like interview fraud, misinformation campaigns and what not. These are tasteless, but not harmless.


> So regulate and ban the technology, then.

Except for criminals and CIA.


> Don't fight progress in the hopes you can one day become complacent.

I'm not convinced this represents "progress". That aside, the goal of resisting it isn't to allow you to become complacent some day. The goal is to avoid being harmed.


Disruptive technology disrupts. Society and culture as well as markets.

How many generations did it take for society to adapt to the industrial revolution? What were the spasms which occurred during that period?

As for deepfakes, we know before we even begin that it breaks (utterly moots) social norms about identity, trust, reputation.

How many people are going to die while societies adapt?

Is that price acceptable for the benefit of more amusing viral videos on TikTok?


"must" is probably the wrong word, but if they don't develop it, someone else will. At most, it would be delayed, and I'm not sure there's much difference in that.


Can we agree that “someone would have done this anyway” is a poor justification for one’s actions? That’s what you say when you 1) recognize that you did a bad thing, but 2) don’t want to be blamed for it.


It’s simply that it’s a kids/teen level argument. “But they did it too!”

Not a lot of thought went into it, is what it usually tells me.


The argument wasn't "Doing something bad is fine because others are", it's "You're trying to stop a bad thing happening by stopping some people upstream from doing something that's perfectly fine. That's going to be ineffective."

But you're right about me not putting a lot of thought. I didn't think this would need more arguing than that.

I still believe that tech is fundamentally neutral, and as such so is its development. Even if a developer intends it for bad use, the use and the development are separate questions.


There are others - not you - who will justify their own actions thusly: the technology existed, its better that the good guys (me - of course) should use it before the bad guys do.

In the space between both equivocations (yours and the other guy's), lies the potential for moral abuse.


That's an even more faulty argument, though, because almost everyone considers themselves one of "the good guys".


Indeed, I think framing things in good-vs-bad, us-vs-them terminology lets several biases go undiscovered, ones you would catch if you truly went for objectivity despite what your sense of moral leaning is telling you.


What even is a good guy use for Deep fakes besides some memes that get old fast?


This "kid level" argument fooled even Albert Einstein, Roosevelt, and many others. into developing atomic bomb. "If we don't build it Nazis will"

Using it was achieved by a different kid level argument: "It will prevent even more death and suffering."


The difference is that a lot more thought went into it. Not only that, the president's domain of expertise and distinction/authority is about these issues. You weigh your options; you navigate a complex issue with many factors involved. To prevent more death and suffering was indeed one of the most contributing factors in Roosevelt's decisions (also to use it, as a direct assault on Japan would have cost a factor more in lives and suffering for both Japan and the US).

The difference is of course a well-thought-out weighing of options, to feel into the issue and make your choice. Taking my immediate surroundings, a lot of people don't engage in that level of philosophy. Knee-jerk reactions are still very common in our species.


Absolutely correct. But that fact doesn't make these arguments acceptable.


You’re speaking to the wrong audience. “Someone else would do it anyway” is how everyone here justifies doing unethical shit at FAANG and refusing to earn less than $300k/yr base because “that’s just the industry”. I have no respect for anyone that says this shit with a straight face.


It should also be said that anyone that makes this argument immediately loses their right to complain when governments need to step in and regulate.


You're commenting even after someone has already provided a legitimate good use of this tech, and are the third person to write as if it were established that it's unethical to develop this tech. I never said that it's fine to do unethical things because others will do it anyway, and I wish you wouldn't imply that I did. From the get-go, it was an open discussion whether the development was good/bad/neutral, and my original comment was to clarify that jakelazaroff's comment probably misinterpreted what they quoted from the use of "must".

My position is that development is not unethical. I'm not trying to justify this position because I don't think it needs to be. When I said "others will do it anyway", it's not a justification and my original comment wasn't even about my position on this. My comment was referring to the usefulness of stopping the implicit always-ethically-neutral development of a tech in order to stop the potential misuse of it. I'm saying that even if they stopped the development of this one repo, or a few, or all others implementing this tech, it'd be ineffective. This repo is just one bit of the tech. Most of the tech is in its dependencies. If someone wants it and has nefarious uses in mind, they don't really need this repo for that.


> My position is that development is not unethical.

I think this is where we disagree at heart. I don't think that development is an ethically neutral act, any more than any other human activity is ethically neutral.


I see this as part of the expansion of knowledge. People conduct research to get a better understanding of the universe. Tech development is about our understanding of what we can do and how. People conducting studies and research can be unethical in the process, same as tech development, but I can't think of any knowledge that would be unethical to expand if the process is ethical. What understanding of the universe would be wrong to obtain if ethical means were available?

If there's none, why is expanding our knowledge of what we can do and how any different?


I draw a distinction that you don't here. Development is not about the expansion of knowledge, it's about producing things. Science is about the expansion of knowledge.

As a society, we very often conflate the two, but they're really very different things.


They're very intertwined. A paperclip factory produces things; it doesn't do tech development (or shouldn't do much, at least). Tech development is about pushing the edge. Science, obtaining more knowledge about the universe we have to work with, helps, but it's also about finding new applications for what we already know.

Tech development doesn't necessitate the production of things, even intangible ones. Technology development in the form of new optimization techniques in existing software for e.g. rendering 3d models or searching data faster or whatever is not producing things, but it's still tech development.


Technology is the practical application of knowledge about the world. Science is the acquisition of such knowledge. There certainly can be an overlap in the Venn diagram here, but the distinction remains valid and important.

In any case, I was not saying that your perspective is incorrect. I was just saying that my worldview seems rather different from yours.


It's a self-contradictory and bad-faith argument - it doesn't give another person the opportunity to say no, let alone to not do it.


I actually don't agree with this. Assume that the following premises are true:

* Action x is legal, but results in a worse world for everyone.
* Doing action x benefits the first 1000 people who do it.
* Much more than 1000 people are capable of doing x, including me.
* If I do it now, I will be one of the first 1000.

If these are true, I think one should do action x. I believe one would be completely justified in doing it.


Basically prisoner's dilemma.

I think the answer is inherent to the question of whether it's "justified", and the fact that you're being downvoted.

If Prisoner's Dilemma is played once, the optimal strategy is to defect. If it's repeated an unknown number of times, the optimal strategy is tit-for-tat.

Now, if you're an identifiable actor in a pool of tit-for-tat players, and you have a history of defecting first, opponents who normally play tit-for-tat are going to start defecting against you first, too. In the end, you will end up as one of the worst performers in the player pool.

So when you're asking if it is "justifiable" to defect in a game like this, it is the same as asking "will I be treated worse if I defect in this game". The downvote confirms this, I think. It means that in this population pool, bad behavior is punished (or people pretend it is).

Now, if defecting can either be done secretly, or you're in a population pool where everyone defects anyway, then always-defect is probably the Nash equilibrium. In that case, defecting will be seen as justifiable.
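
To make that concrete, here's a toy simulation sketch (the payoff matrix and round count are the standard textbook values, assumed for illustration):

    # Toy iterated prisoner's dilemma: tit-for-tat vs. always-defect.
    # Payoffs: (my move, their move) -> my score; C = cooperate, D = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(my_moves, their_moves):
        # Cooperate first, then copy the opponent's last move.
        return their_moves[-1] if their_moves else "C"

    def always_defect(my_moves, their_moves):
        return "D"

    def play(a, b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(always_defect, tit_for_tat))  # (104, 99): defector wins head to head
    print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation does far better

The defector beats any single tit-for-tat opponent, but a pool of tit-for-tat players massively outscores it overall, which is the sense in which a known defector ends up treated worse.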


> Basically prisoner's dilemma

> (...)In that case, defecting will be seen as justifiable.

Exactly! My argument basically boiled down to this. In certain situations, defecting may be the way to go. I would not blame the defector for doing what is in their best interest.

And “someone would have done this anyway” could very well be what's turning a situation into a prisoner's dilemma, so it could actually be a valid justification in some cases.

But yeah, it could depend on what is meant by "justification" exactly.


> I would not blame the defector for doing what is in their best interest.

I absolutely would. The defector is willingly harming unconsenting others. That the harm would have been caused by someone else anyway doesn't suddenly make it OK.


This assumes that there is no morality. That there is no merit in not doing something evil even if others will do it in your place.


For an atheist, moral absolutes are absurd (though still widely held). There is no more an absolute case for saying it's evil for a human child to play with a mouse before killing it than there is for a kitten doing the same.

Rather, one could argue, morality is a set of emotions, norms, beliefs and rules we have gained through culture and evolution in response to certain behavior patterns. And these can be seen as having a game-theoretical foundation. The rules that assign the label Evil to someone tend to select people that we (or our ingroup) are in an existential conflict with, at a level where it's either us or them.

For instance, we can have rules that define some patterns of behavior as Evil, and have different set of moral rules for treating people depending on whether they are classified as Evil or not.


Hmmm. Not necessarily. I mean if what you did does not change the outcome in any way, then is what you did even evil? If you have no marginal impact, can you ever do good or evil? I am not sure actually.

I realize that this could also apply to lower rank soldiers involved in some war crimes, etc. so I guess it is not that clear cut. But would you blame a random soldier of lowest rank for carrying out terrible orders and not risking himself(or his family)? Well I don't know. I guess I would not.

Of course, the stakes are rarely that high, but then the harm done is rarely that high either.


I don't know that I'd blame them, but I would consider it an act of evil.

This is a bit of a tangent, but that's something I never liked about referring to evil things as "inhumane". It seems to me that hurting others in order to help friends, family, or self is very human. In that case, morality is choosing to act against human nature by doing the right thing at the expense of self or loved ones.


That logic is how every atrocity in human history was enabled:

- If I catch this escaped slave, I can get the bounty instead of someone else!

- One of us Auschwitz guards will shoot that escaping prisoner; why shouldn’t I get rewarded for it?

So on and so forth.


Not only that, but if some people have ethical and moral considerations about the consequences of x, then those people have an ethical reason to actually do the thing, since the alternative is that the people doing it are those with no ethical or moral consideration.

If a piece of technology is going to be 90% bad and 10% good, and you can push it to 80% bad, then it is completely justifiable to go for it.


I could not disagree more.


If I can become rich instead of someone else, and the damage will be done either way, it seems perfectly rational, provided I can avoid any actual legal punishment (I don't care about people on Twitter being angry with me; Twitter isn't real life), to push the big red button and reap the rewards.

Once I’m rich and powerful I can afford to be ethical, isn’t that the way of billionaires anyway?


If a person isn't ethical in the first place (as proven by becoming wealthy through unethical means), they certainly aren't going to become ethical after they're rich.


Can we agree that it's fundamentally not a bad thing to develop a technology? If you were a legislator, how would you like to criminalize the development of technology?

You know, the most difficult part of this tech is probably the facial tracking, which is probably also used for the animation of 3d avatars. The jump is probably not huge.
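
For what it's worth, the tracking piece really is accessible off the shelf. Here's a hypothetical minimal loop with MediaPipe (an assumed library choice for this sketch; DeepFaceLive ships its own models):

    # Hypothetical minimal face-tracking loop with MediaPipe + OpenCV.
    import cv2
    import mediapipe as mp

    face_mesh = mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False, max_num_faces=1, refine_landmarks=True
    )
    cap = cv2.VideoCapture(0)  # default webcam

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            # Hundreds of normalized (x, y, z) landmarks, enough to drive
            # a 3d avatar or warp a face model onto the frame.
            landmarks = results.multi_face_landmarks[0].landmark
            print(len(landmarks), landmarks[1].x, landmarks[1].y)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

    cap.release()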


Many things can be unethical/immoral without being illegal, and of course there are some technologies that are fundamentally bad.


Tech is such a malleable thing. Let's say that this repo doesn't exist, but repos exist for doing facial tracking on webcam feeds and pictures (for facial recognition), and others for modifying a 3d model according to a facial model output by facial trackers (for 3d avatars).

Let's say that the tech around this DeepFaceLive repo develops to the point that the repo can become just a 10-line file of glue that makes a 3d model that's just a grid plane with a picture on it that moves according to a webcam feed. Such a repo is redundant and anyone that wants this tech can just apply the repos mentioned in the previous paragraph to do the equivalent.

Are those repos now bad? If not, it seems the ethical question can be resolved by some restructuring of code so bad uses are not made so obvious, while being almost as easy.


What technologies are fundamentally bad? I don't know of any outside of perhaps torture devices.


There's surely at least some that you don't know of because their developer made the decision to not release.


Every single "weapon of mass destruction" is fundamentally bad technology, just for one ultra-easy answer…


That's simply not accurate?

Nuclear weapons and the advent of mutually assured destruction have ushered in a world where many, many, many fewer people die of violent conflict.

Technologies that cause destruction are not congruent with destruction. They change the game such that destruction may actually be prevented by their existence. They are tools, nothing more.

It's possible they have permanently eradicated world war. If that's true, they are not fundamentally bad but in fact really quite great. Only time will tell.


The possibility that nuclear deterrence has helped the world to be more peaceful in comparison to the past means that it's not such an easy answer. Hasn't even been a century since nuclear weapons were invented though, so no clear statistics.


No, we can't. Developing "tech" is an action bound by morals like any other action. Cigarette production has tech in it. They don't just grow on trees. It was developed, and keeps being developed, by people well aware of the effects of smoking.


You're opening another can of worms with that example. An adult, informed smoker that takes care to not cause second-hand smoking and that doesn't have any dependents, causes harm only to themselves. If they think that the reduction of stress or whatever else they gain at the tradeoff of their health is worth it, that's their choice. That's part of their bodily autonomy. As such, the tech is still not inherently bad.


You're conveniently dismissing a few concerns and from that drawing the conclusion it isn't inherently bad?


> Can we agree that it's fundamentally not a bad thing to develop a technology?

I do not agree with this assertion.


Right. Tell that to Einstein, who just before Hiroshima realized it was not such a good argument.

That is a nice piece of history.

Ethics is important. And one consequence is: I may die but at least it was not because I made it possible.


What it boils down to is: If someone else will just do it, then wait and let them do it. If it has morally objectionable consequences, either you care about that and do not involve yourself, or you don't and you go ahead. But then you cannot say you didn't get your hands dirty.


While Einstein regretted working on nuclear weapons, the work he did there is equally responsible for the many lives saved by nuclear power, which he also appreciated. Also, it was the first 20% of the work towards nuclear fusion.

Technology is not evil or good.

Even this technology could be great. Jackie Chan, for example, is not going to make any more movies. But I want more of those movies. This very technology would allow that.


Does Jackie Chan have any say in this? Or is anyone on the internet free to use his face however they like?

Yea, tech is not evil or good, it's all about how people use it, and so far, deep fake tech has only been used for evil. From deep fake porn to fraud. The only not-evil thing about this is the memes, but we can live without deep fake memes just fine.


> Technology is not evil or good.

But a given technology may be much more suited to doing evil than doing good.

> Jackie Chan, for example, is not going to make any more movies. But I want more of those movies. This very technology would allow that.

Which is something that I would consider ethically wrong unless Jackie Chan consented to it.


> the work he did there is equally responsible for the many lives saved by nuclear power

This is still ongoing. There are thousands of nuclear warheads on standby. His technology might end human civilization or even the human race as we know it.

> Technology is not evil or good.

Of course, it's not. Technology is real. Evil and good are imaginary human-brain constructs.


The nuclear bomb in itself was a lifesaver, in the way that neither the US nor the USSR ever dared to attack the other (directly) because of the potential consequences.

Without nuclear bombs, a "hot" third world war would probably have happened.

Also, no country that has nuclear weapons has ever been (directly) attacked.


Yeah, theory of relativity and tech with use cases mostly in porn and fraud aren't really the same thing.


Your response doesn't make any sense. Care to elaborate?


Just search "Atrioc" on Google and read any article. So far, this tech's main use is porn with people who do not want to do porn, who do not post adult content, who never consented to their faces being used for adult content.

The second use of this technology is fraud. Go read the issue and the comments linked in the post.

And you should not even need this much info. The simple fact that they mention "mrdeepfakes" as one of the 4 communication groups, the second after their Discord server, tells you how much good this tech does for the world.


As GeorgeMD pointed out, there's some required reading.


What George is saying makes total sense. The reason I commented on your post was that I couldn't understand your reasoning that somehow relativity and the two bombs couldn't have happened without each other. I found that quite bizarre.


> that somehow relativity and the two bombs couldn't have happened without each other. I found that quite bizarre.

Sounds about right, because that's literally not what I said.


It's not relativity. Einstein pushed first for the American atom bomb because "the Germans will make it anyway". Then he realised that it was way too harmful and recanted.

Nothing to do with relativity, just with “if I don’t do it others will”.


For a technology not to be developed, everyone with the means to develop it would need to choose not to. That's not too far from "inevitable".


Potential technology is an infinite set. A gigabyte contains 2^8,000,000,000 possibilities, a number billions of digits long. Many are junk or functionally duplicates, but many are not, and even out of that we choose a small subset that we think is worth pursuing.

Saying “everyone with means to develop would need to choose not to” is like saying “every writer would need to choose not to write this book”. It’s not inevitable.
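
A quick back-of-envelope check of that digit count, assuming 1 GB = 8×10^9 bits:

    # Digit count of 2^(8e9), the number of distinct gigabyte-sized bitstrings.
    import math

    bits = 8 * 10**9                               # bits in a (decimal) gigabyte
    digits = math.floor(bits * math.log10(2)) + 1  # base-10 digits of 2**bits
    print(digits)                                  # ~2,408,239,966: billions of digits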


The problem is that there is a big confusion between building the technology and using it in some specific, wrong way. Surely anyone here will agree that building the technology itself should not be forbidden. You can do it for some valid reason, or simply for fun. It's not for anyone to judge or stop you from doing it. Those who use it for some nefarious reason must then answer for that. Guns are the obvious comparison (although maybe not the best) as in they are needed for protection also. But another example is how explosive devices and technologies can be used in a variety of useful ways, such as mining, terraforming, etc. Using someone's image without their consent is already a crime and applicable in many other non-DF scenarios.


What's the confusion? The allegation is that choosing to build this represents, at best, incredibly poor judgment. The GitHub issue and GP post are good faith appeals to kindest possible interpretations of this work. To use your own example, explosives can be useful, but if you discovered that your neighbor was building pipe bombs in his garage, you'd probably want them to stop. "I'm not gonna use them. I just think they're neat." is not a convincing argument for them to continue. To extend the analogy a bit further, this seems comparable to discovering a box full of pipe bombs with a big "free" sign in front of their house.


Plus... they're going to use it.


My analogies are horrible…


> No, it mustn’t.

Let's imagine you have a horrific face malformation, or went through a terrible accident that left you disfigured. Why not allow you to pass for an ordinary person, at least in video calls, so you can have a somewhat normal life?

It took me two minutes to come up with a legitimate use for this technology. I imagine there are many more only a couple minutes away.


I took the OP's comment as meaning "it's not necessary for this to happen" rather than "it's necessary for this not to happen", a nuance which can be confusing, especially for non-native English speakers. The comment in the article suggests there is no choice but for this technology to be developed, and I think the OP is disagreeing with that assessment and saying "there is a choice".

If I've understood that right, both your comments can agree. There are strong arguments to say we _should_ develop this technology, but there also many good counter-arguments.


> there also many good counter-arguments.

Someone will develop it, and someone will use it for nefarious purposes - as one terrorist organization spokesperson once said, "We only need to get lucky once. You need to get lucky every time". If it is developed like this, in the open, at least we have a better chance to understand its capabilities and limitations, and to be better equipped to build countermeasures.


Yes, but in the end it's all about choice. It's not some creation of chaotic coincidences and circumstances. It's not some accidental discovery. It's an active choice by whoever decides to make it, for whatever purpose.

The argument isn't a matter of whether it should be made, but that the only reason it's being made is because of people's actual choices to do so. It's not randomly spawning into existence of its own free will.


An extreme version of this might be:

> I saw Bob about to stab Alice, so I beat Bob to it and shot her in the head. I did nothing wrong since the outcome would have been the same regardless of my actions.


On the bright side, you probably saved Bob's life.


This is exactly the Prisoner's Dilemma, and also shows up in arguments why we need to build insane numbers of nuclear weapons. The logic rests on a faulty assumption that there is a binary choice - either you build it this way, or somebody else builds it this way. It doesn't consider any other possible paths that could lead to better outcomes.


There is an economic incentive to build it this way, for all the legal and illegal applications it may be used for. Movies can more easily make stunt doubles look like the actors they portray (this has existed for decades now); you can pay Tom Cruise to use his likeness in a publicity campaign without having to pay for his time. You can change your face, or your body, so you are not recognized.

It's a virtual certainty that a technology that can have a commercial application will be developed. I can't imagine a plausible path departing from this present where it isn't.


Yeah, that’s correct. I wanted to match the original comment’s phrasing and thought it was clear from context, but I probably should have just used a different word to avoid ambiguity.


It is much more useful for privacy-conscious people not wanting to give Zoom more data for their face detection database. All in all, it's good for humanity when technology can counteract other evil tech that will spy on you.


You could use avatars / 3D models (possibly a few realistic ones) for those; no need for deepfake technology from a single photo.


True, but that'd be obvious. For a lot of use cases the point would be to do it without being detected.


> Technology isn't inevitable

I disagree; society could stop inventing and humanity would likely continue to exist. However, that would require the population to stop being interested in the development of technology, which I think won't happen.

The point is, if this engineer doesn't write this code, someone else will. If a person can imagine a tool, someone will eventually want to make it a reality.

I think deepfakes and AI generation are going to change the world in a way that I'm a bit leery of. It feels safer to just stop here, but that isn't a possibility; even if deepfakes are made illegal, people will still create their own tools and have their own hard drives for storage.

Maybe these things should be illegal or something, hard to say. But the tech will move forward either way, imo.


We all have responsibilities as technologists to be ethical (to our own standards, at least), and to keep in mind the impact of the things we build. That applies regardless of whether other tech with a negative social impact exists, or for that matter whether it will exist. You can still be a good person, even if no one else is doing so, and you're under no pressure to be a bad person (in this context, anyway).


Agreed. Convincing folks who think that of anything else is quite difficult.

I'll promote here one of my favorite reads, The Technological Society by Jacques Ellul. It feels very apropos here as we embark on a new smorgasbord of technologies that can do us harm and good. There will always be people who want to develop technologies even if they are harmful. We need to ask ourselves not only "what is the benefit?", but "at what cost?"


> Technology isn’t inevitable. It’s a choice we make.

I think it is inevitable actually. The history of humanity seems to suggest so at least.


>Technology isn’t inevitable. It’s a choice we make.

See, and that's the problem: you can only talk about "you", not "we", because someone else will do it. Technology, and with it information, cannot be stopped.


"I didn't murder that man, your honor, he would have died eventually anyway."


Incredibly stupid comment... just to extend your logic, the thing you're typing on right now is "enabling" that "murdering"... just as the invention of the wheel enables fast wars... or brings you quickly to a hospital in an ambulance.


I interpreted this as basically equivalent to "Someone will develop it somewhere."


I disagree. We should develop this technology and make people aware of the implications and teach them to be vigilant.


Because being vigilant works so well now...

Knowledge isn't useful if you can't apply it in practice.

My work's IT has a habit of sending out phishing emails that closely match legitimate emails our employees and vendors send. We're supposed to be vigilant by looking at the Reply-To field, which is hidden by default on mobile devices. (As is the URL things are going to.)

I will never not trigger those. Why? My brain can't tap the header field for every single email I read. And I can't ignore legitimate emails.

Every time on my phone, I'll click the link, then check the URL in the browser. I can't get my brain to check before clicking the link. (Trust me, I'm neuro-divergent and I've tried for years. It won't reprogram.)


As you can't enforce stopping development of it, it is better to also develop it, to understand it better and create countermeasures like ML that detects deepfakes.

This also potentially leads to more or better image/video signing technology.

And yes, just because someone develops this in the open doesn't mean that someone else is not developing it in parallel, but hidden.
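
As a sketch of what such a countermeasure could look like: fine-tuning a binary real-vs-fake image classifier. The dataset layout, model choice, and hyperparameters below are placeholder assumptions, not an established detector.

    # Hypothetical real-vs-fake classifier sketch with torchvision.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Assumes folders data/real/ and data/fake/ of face crops.
    ds = datasets.ImageFolder("data", transform=tfm)
    loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):  # illustrative epoch count
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()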


There is ZERO credibility to the claim that this development will be used to "create a countermeasure", especially when mrdeepfakes is a key communication channel here.


I'm not talking specifically about mrdeepfakes.

My argument still stands that you can't just forbid it. Someone else will do it anyway.


Of course you can, even if somebody else does it. Case in point: any penal code.


How can you force stopping development of software?


One of the more egregious examples: https://www.wired.com/2011/03/aleynikov-sentencing/


We do in fact have a gun to our heads. I don't want to ruin your life but those who know know and it starts with R and B. Look it up at your own risk.


Come on, everyone knows that ruining your life starts with metal. R&B, while annoying, is quite safe.


There's a book in Dutch literature, The Assault by Harry Mulisch. It's a great read if you're interested in multi-faceted morality. It deals with guilt and responsibility when there's a chain of things that need to happen for a bad outcome.

During WWII the protagonist's parents are shot by Germans after a body of a collaborator was found in front of their house. The parents were arguing with the soldiers about something when arrested and ended up shot during the arrest.

The collaborator was shot by the resistance in front of their neighbours' house, and the neighbours moved the body in front of the protagonist's house.

Over the years, he encounters many people involved in this event, and starts seeing things from many sides. One of the themes that's explored is who bears moral responsibility for his parents' death? The Germans for shooting them? His mother for arguing? The neighbours for moving the body? The resistance for shooting the collaborator? The collaborator for collaborating? All of their actions were a necessary link in the chain that led to their death.

One of the characters utters a simple and powerful way of dealing with that morality: "He who did it, did it, and not somebody else. The only useful truth is that everybody is killed by whom he is killed, and not by anyone else."

It's a self-serving morality, because the character was part of the resistance group that shot the collaborator, in a time when reprisals were very common. But it's also very appealing in its simplicity and clarity.

I find myself referring back to it in cases like this. In the imagined future where this tech is used for Bad Things, who is responsible for those Bad Things? The person that did the Bad Thing? The person that developed this tech? The people that developed the tech that led up to this development?

I'm much inclined to only lay blame on the person that did the Bad Thing.


I think the appropriate way to understand blame and guilt is as social technologies. Who is responsible for some event depends on perspective. Any actor should assign blame, guilt and responsibility in a way that leads to preferred outcomes in the future, e.g. by selecting other agents to collaborate with, communicating rules or contracts and enforcing incentives. Different actors may assign blame differently and there need not always be an objectively correct assignment.

For example, in the story above, the argument against the Germans being to blame would be that the protagonist has no agency with respect to them being there and shooting people. The counter argument "He who did it, did it, and not somebody else" is shifting perspective from the protagonist to society, which does have agency with respect to the Germans shooting people.


I think so too. If the next time you can change something you would have done, such that fewer people suffer or die, then it seems like you should go ahead and do that. In the end, this is the simple rule.

Sometimes you just didn't have the complete picture and you might have been acting ideally given the limited information you had; that is, if you changed your behavior, it might change for the worse in other situations. It's not about self-punishing, it's about learning!


Yeah, it is the Germans for shooting them. Everyone else is a victim there. Blaming the mom for arguing, blaming the neighbours for not wanting to be shot themselves, is just victim blaming at its finest.

It was Germany that created the situation as part of their quest for world dominance, it was Germany that trained their soldiers to be violent and cruel and created the policies, and it was German soldiers who shot.


You do realize that "Germany" does not exist as a real-life entity with behavior and agency in a way that would make assigning blame make any sense. In fact, it could be argued that assigning blame to the entire country of Germany, namely after WWI, was the very cause of Hitler coming to power in the first place.


In my experience, in the US, the lawyers will find the "deep pockets" to go after, and may establish precedent.

In the US, we have the Second Amendment, which has probably prevented many, many lawsuits against gun manufacturers and dealers (but there have still been a few).

Unless there's a constitutional amendment, protecting the tech, it's pretty likely that some hungry lawyers will figure out how to go up the food chain, and get some money.

If the tech is used to go after politicians and oligarchs (almost guaranteed), you can bet that some laws will appear, soon.

In the US, the threat of lawsuits governs many decisions. It's a totally legitimate fear, and politicians are notorious for protecting their hunting grounds and their privileges.

Look at the fight over encryption. It is a "no-brainer," if you are aware of the tech, yet politicians have a very real chance of doing great damage to it.


The Second Amendment doesn't prevent lawsuits against gun manufacturers. The PLCAA [1], passed in 2005, is what provides special legal cover to the gun industry.

If you'll allow me to be cynical for a moment: I believe the PLCAA was passed because the people in power felt safe from guns, behind their security and metal detectors. I don't think they feel safe from deepfakes or hackers (nor is there much lobbying money), so a similar law protecting this technology would never pass.

[1]: https://en.m.wikipedia.org/wiki/Protection_of_Lawful_Commerc...


This technology is protected by the First Amendment. The First Amendment is the main reason why the law does not have much to say about what kinds of programs people can and cannot share.


> I'm much inclined to only lay blame on the person that did the Bad Thing.

There's no technology that can cure humanity of moral failing. Most things contain potential good and evil uses.

You remind me of a story I was told years ago which describes a series of "good" and "bad" events and each subsequent event changes the seeming meaning of the prior one.

The part I remember is where a man is given a horse and rejoices at his good fortune and then he is thrown from the horse, breaking his leg, and he decries his misfortune. Then war breaks out and he is not conscripted because his leg is in a cast and he again feels relieved and fortunate.


This is usually called the "Chinese Farmer" story, and it's very popular among consultants and speakers: https://www.google.com/search?q=Chinese+Farmer+story


Maybe not the origin but here is the story told by Alan Watts: https://youtu.be/j4TZMxkxySc


If you get a motorcycle license in the US, you’ll likely take the MSF safety course where one point is hammered home: a crash is caused by an interaction of factors.

Imagine a car suddenly swerves in front of a motorcycle rider, which causes him to crash. Perhaps it was a rainy day, the rider was talking on the phone, they were traveling slightly above the speed limit, and they were sitting in the car’s blind spot. It was the car that was at fault for causing the crash, but if one of the other factors in the rider’s situation was eliminated, he may have been able to find a safe escape path to perform an emergency maneuver and avoid the crash.

Ultimately, it's the final piece in a chain of events that directly causes the crash, but a multitude of variables is necessary to lead to that situation. Blame can be assigned to a higher degree to a single party, but I believe blame can also be applied, to varying degrees, to multiple parties.

It seems the authors of the tool in question do not see where their technology fits into a larger puzzle. They may not be the ones who ultimately use it for malice, but it is worrying that they so readily shrug off ethical considerations when they have agency over this piece of the equation.


I don't think this is a good way to characterize this situation. Here, someone is saying: look, these are the consequences of your actions; and the developers are washing their hands of those consequences in advance, knowing full well what they are. This isn't a series of disconnected accidents leading to an outcome.

A better analogy might be a guy who likes to hand out free bolt cutters on the street.


And so every executioner, and not the malfeasant lawyer, is responsible for the death of the wrongly executed?

A better heuristic, and almost as simple, is to rate responsibility in proportion to power, ie those with the most leverage to steer events take the most blame. Blame is information. It has no mass, and can be divided effortlessly.


I want to thank you for the recommendation. I am reading it right now. It is excellent!


That's a very convenient kind of morals for the people who invested billions of dollars into developing this tech, fully aware what it could be used for. (And OpenAI was aware, it's their entire justification why they don't publish the models)


There are no facets to the case from the book. The Germans are 100% responsible for the parents' deaths. All other things are just coincidences. Easy as that.


Again and again I am astonished that people without ethics exist, that they are confident in what they are doing and that they appear to be completely unable to reflect upon their actions. They just don't care and appear to be proud of it.

If this is used to scam old people out of their belongings then you really have to question your actions and imho bear some responsibility. Was it worth it? Do the positive uses outweigh their negatives? They use examples of misuse of technology as if that would free them of any guilt. As if previous errors would allow them to do anything because greater mistakes were made.

You are not, of course, completely responsible for the actions others take, but if you create something you have to keep in mind that bad actors exist. You can't just close your eyes if your actions lead to a strictly worse world. Old people scammed out of their savings are real people, and it is real pain. I can't imagine the desperation and the helplessness following that. It really makes me angry how someone can ignore so much pain and not even engage in an argument about whether it's the right thing to do.


What's just as worrying, judging by the comments here and in that GitHub thread, is that there is no correlation between technical ability and ethical understanding. Perhaps naively, I'd have thought someone intelligent enough to develop a technology like this would also be intelligent enough to understand the complex ethical issues it raises. It seems, unfortunately, that there is no such correlation.

In fact, anecdotally, it seems the people with the technical ability are least likely to have a nuanced understanding of the ethical impact of their work (or, more optimistically, it's only people with the conjunction of technical ability and ethical idiocy who would work on this, and we're not seeing all the capable people who choose not to).

Also, what's with all the people in this thread coming up with implausible edge cases in which deep fake tech could be used ethically to justify a technology that will very obviously be used unethically in the vast majority of cases? It's almost useless for anything except deception—it is intrinsically deceptive. All the 'yeah but cars kill people so should we ban all cars?' comments miss the obvious point that cars are extremely useful, so we accept the relatively small negatives. The ethical balance is the other way around for deep fake tech. It's almost entirely harmful, with some small use cases that might arguably be valuable to someone.


> Perhaps naively, I'd have thought someone intelligent enough to develop a technology like this would also be intelligent enough to understand the complex ethical issues it raises. It seems, unfortunately, that there is no such correlation.

Correct. There is a tendency to think that because a person is exceptionally intelligent or skilled in one area, they must also be intelligent in other areas. It's simply not the case. An expert is authoritative in the areas of their expertise, but outside of those, their opinions are no more likely to be correct than anyone else's.

This error is often leveraged in persuasion campaigns -- thinking that, for instance, a brilliant physicist's opinions on social policies are more likely to be accurate than any random person on the street.


> intelligent in other areas

I think you mean "knowledgeable". There is a correlation though, but never mind.


Yes, you are correct. Or, perhaps more accurately, "authoritative". Being an expert in one field does not mean that a person's opinions in other fields are more likely to be correct.


Guided missiles, misguided men.


I can already fire off emails and text messages claiming to be whomever I want. I can even hire impersonators and lookalikes. I don’t get the big push against these newer technologies.


You're exactly proving the point GP makes, good job.


No, they're not, and this flippant dismissal is so intellectually lazy.

The ability to scam people exists and isn't going away. People went from town to town in covered wagons selling bullshit. People scammed others out of money centuries before that.

The answer isn't pearl-clutching nonsense about how this technology is so different and so morally reprehensible compared to everything else. The answer is education. If someone shows up at your door and sells you a bottle of water that will cure all your ills for $100 then skips town, most people would see it as your fault for being so gullible. Someday there will be some technological or social way to easily differentiate deep fakes from real video the same way you can with photoshopped images today.


We're steadily breaking down all mechanisms that ordinary people can use to trust information they're getting, such as getting a video call from a member of their family and recognising the face and voice.

All previous scams relied on con artists using various means to pretend to either be trustworthy in their own right despite being strangers or as representing some trustworthy institution. But having people being able to act as your family members is a whole other issue. You can't claim that this is no different from other scams.


In Argentina they’ve been scamming people by pretending to be a relative for years now! No AI required. Somebody calls you in the middle of night, distraught, and crying says “mom?” or “dad?”. Then someone interrupts, claims your son or daughter has been abducted (by now you probably have given them their name when responding to the initial plea) and requests money to be dropped at some location where a motorcycle picks it up. They make you stay on the line so you can’t call the cops nor the allegedly abducted person.

These calls usually are made from prison, with an outside accomplice. They are rarely caught.

People are sleepy and concerned, and they swear the voice they heard was that of their child.

Another no-ai popular scam is done by stealing a WhatsApp account (e.g. by cloning the sim), and then contacting a friend or relative asking for a quick cash transfer for something urgent, to be returned the next day.

Deepfakes might make these scams more believable, but the core causes of the issue and the solutions have not changed.


That sort of scam has been possible, sure, but it depends on the person being phoned being startled and half asleep, and unable to reach the person or someone else near them in subsequent calls. You're right that I shouldn't have been so absolutist in saying 'all' previous scams, but this is an edge case that doesn't equate to what's becoming possible with real-time deepfakes.

What this sort of tech enables is scamming where even someone who is fully awake, quite aware, and not normally susceptible to scams can nonetheless be tricked by a video call.

It's taking us to a point where the only safe way to trust that the person you're speaking to is definitely who they say they are, in all circumstances, is to do it in person. Something not possible for people living far from their other family members.

Do you not see how fundamentally this breaks the trust models we have built over the past few decades?


Scamming an individual is already a ton of work. The amount of effort this shaves off the process is small.


> The ability to scam people exists and isn't going away.

True, but giving the scammers orders-of-magnitude better tools can have serious consequences. If you already have a plague of robbers in your town, handing out free automatic handguns and ammo to anyone is beyond stupid. Yet this is pretty much what these AI tools give to scammers.

Arguing that people will eventually figure it out is no justification for allowing it. It ignores all the casualties in the meantime. This is especially bad because the result of this scale of weaponized technology may well be a complete destruction of trust in society, or in technology in general. These are catastrophic for everyone in society and the economy.


> The answer is education.

This is the stock answer used for deflection in way too many scenarios. We've seen how it doesn't work.

The complete lack of an enforced ethics code in our field is the biggest blight on our combined contribution to society.


"I'll die at some point so why don't you stab me now"


I can already walk to work. I don't see how a car would make any difference.


And you do, and it's just as effective??? Wow, you are a pretty awful person.


Agreed. I've come to believe that social ethics is something that needs to be indoctrinated from an early age; otherwise it can't naturally develop in many people.


A lot of nasty ideologues that caused massive suffering also believed that.


Sure, because it's such a powerful technique for shaping society. Are you saying we should stop using it because it can be abused for evil? :)


If you need to indoctrinate somebody with a certain belief from an early age, then your belief must be so poor that it cannot be used to convince adults, so how can such an action be justified?


Do you assume most adults are rational in their decision-making process and are convinced by arguments? I'm mostly seeing the opposite in practice, and that indoctrination seems merely necessary to balance things out.


I assume that anybody who needs to indoctrinate people into their beliefs has beliefs that they cannot justify rationally.


Pretty much, yes: my point was that there are plenty of ideas that a rational individual would be convinced of, but that are basically impossible to argue logically to large parts of the population.

The flat-earthers are an extreme example, but not unique in any way.


They are also 0.01 percent of the population.

But you also seem to assume that different people would come to the same conclusion rationally, without taking into account different experiences, knowledge, ideas, and priorities.

And of course, you assume the ideas that YOU came up with are what that rational person would come up with.

Which is the part that scares me. You have a perfect excuse that cannot be disproven: your ideas aren't bad, it is just that not enough people are rational enough to understand them.

Another group that had similar ideas? Lenin and the gang. The total death toll is far too great to risk going down that route again.


You say 0.01 percent, while I think that the majority of the population needs to have some basic values instilled from an early age, if only to achieve the minimum homogeneity of a stable society.

And simply because an evil group uses a tool doesn't mean that tool is broken in itself, it just implies further consideration. For all their atrocities, Nazi Germany did some great engineering and also ran one of the most successful anti-smoking campaigns ever.

I think we have too many differences of opinion to reconcile over internet comments, but thank you for the rebuttals.


At one point we tried to teach "good citizenship" at schools (and in most of Europe, we still do). Then the unethical people started claiming that schools were brainwashing their children, and here we are.


> ...astonished that people without ethics exist

"Moral intelligence" (or "MQ") and "moral cripples".

...are my provisional terms for talking about this.


I don't understand why you're astonished? Psychopaths are all around us, from corporate "leadership" to government lobbyists to fake reviewers on Amazon and the App Store. Confidence men (and women!) have existed since the dawn of time. From selling "snake oil," to 419 scams, to "Microsoft" computer support, technology has only ever aided the process of helping psychopaths find victims. Talent and technical ability are orthogonal to empathy. Always has been; always will be.

This deepfake stuff is a difference in degree, not kind, and once these people figure out how to use AI to help them, everyone is going to have to level up their defenses all over again. You ain't seen nothing yet.


These people are successful.


HN seems to actively cultivate a cognitive dissonance: on the one hand producing inspirational stories of entrepreneurs changing the world, and on the other abandoning all hope that technology/market forces can be controlled in any way.

I'm starting to think this is to justify away the collective guilt of bringing harmful products into the mainstream.

It seems to come from the same origins as "crypto can't be regulated", "government can't do anything", and "it's ok because it's legal", and it always worries me to no longer see any sort of moral stance being taken.


HN is full of a wide variety of people who have a wide variety of different beliefs and worldviews, and that's what you're seeing.

In my view, it's one of the things that makes HN great.


The old: Encryption is evil because it is used to hide information... let's ban encryption. Breaking encryption is evil because it can be used to steal information... let's incriminate breaking encryption.

Let's just get it over with... Technology is evil. Let's ban technology.


"crypto can't be [effectively] regulated" is more observable fact than ideology.


Funny that you picked out this specific example from a broader point.

Every time serious regulators have made a strong move, crypto markets and the general population's access to crypto products have actually been affected. I'm pretty sure that if the US government tried to ban all crypto networks, most activity would stop, leaving only a few die-hard activists and some actual criminals.


Well for me, crypto is highly moral and should be developed to prevent artificial trading restrictions and eliminate borders. There are many people who consider individual freedom to be of highest importance, and who are willing to develop any technology promoting it.


You seem to be ideologically driven, while I'm more goal-driven. Is absolute freedom the end-all purpose? Shouldn't I be allowed to murder my enemies, if I'm willing to risk the vengeance of their group? I know it's an absurd question, but my point is that the social negotiation over where regulation and limits should come into effect has been abandoned by many.


Absolute freedom is impossible in a world where people interact with and affect each other. Me having absolute freedom means that the freedom of others must be limited.

The goal should be to maximize freedom for everyone. To accomplish this goal means that nobody can have absolute freedom.


Most crypto is controlled by a few key players with even less oversight and accountability than the governments.


"Controlled" in your sentence doesn't mean the same as the one for the ordinary money.


You're right, the difference is even more stark.


The same for any other form of money.


No. Read up on central bank governance.


I think it's a good thing. Not because it's being used for evil things, but because it should help make it obvious that you can't trust anything you see on a screen.

Using fake media to trick people into believing anything used to be a privilege reserved for nation states and the ultra rich. Now that _ANYONE_ and their cat can do it, it should follow that nobody can believe anything that's on a screen anymore (this comment included).


> but because it should help make it obvious that you can't trust anything you see on a screen.

I think this in general (text, audio, video) will produce a societal earthquake and, in a way, send us back to the Middle Ages: you can't really verify things yourself, because everything can be faked; all you can do is anchor yourself to some trusted authority.

Imagine you read on hacker news an (AI generated) article about a new breakthrough in physics - new convincing evidence for the cyclical universe hypothesis. In the discussion, there will be a lot of seemingly informed comments arguing about this (all AI generated), links to video presentations from reputable scientists (all AI generated) and papers (all AI generated). It will be all wrong (= there wasn't any breakthrough in the first place), but impossible for a non-physicist to assess correctly.

In a way it will lead to a centralization of the internet and of knowledge; people will stick only to their trusted sources. For some it may be Wikipedia and the NYTimes, for others some AI-generated island of knowledge/manipulation.

I also wonder what effects this will have on social platforms when 99% of content is generated by AI.


My thought is: if you think the conspiracy culture-war shitshows are bad now, with both sides' talking heads saying whatever feeds red meat to their base, imagine trying to bridge the gap in a world with infinite red-meat generators competing with each other for audience eyeballs forever.


In accepting an honorary degree from the University of Notre Dame a few years ago, General David Sarnoff made this statement: “We are too prone to make technological instruments the scapegoats for the sins of those who wield them. The products of modern science are not in themselves good or bad; it is the way they are used that determines their value.” That is the voice of the current somnambulism. Suppose we were to say, “Apple pie is in itself neither good nor bad; it is the way it is used that determines its value.” Or, “The smallpox virus is in itself neither good nor bad; it is the way it is used that determines its value.” Again, “Firearms are in themselves neither good nor bad; it is the way they are used that determines their value.” That is, if the slugs reach the right people firearms are good. If the TV tube fires the right ammunition at the right people it is good. I am not being perverse. There is simply nothing in the Sarnoff statement that will bear scrutiny, for it ignores the nature of the medium, of any and all media, in the true Narcissus style of one hypnotized by the amputation and extension of his own being in a new technical form. General Sarnoff went on to explain his attitude to the technology of print, saying that it was true that print caused much trash to circulate, but it had also disseminated the Bible and the thoughts of seers and philosophers. It has never occurred to General Sarnoff that any technology could do anything but add itself on to what we already are. (p. 11)

Marshall McLuhan - Understanding Media


Right. When that chaos happens, just remember that you will not be safe from it.


What's your alternative? Attempt to hide the fact that everything can be easily faked, and that you don't need to be a Hollywood studio to do it? What good do you think that does? Instill a false sense of security and trust in a medium which cannot bear it? How's that not worse?

Even outright banning this technology won't make a dent in the bad uses, since those individuals are highly motivated, highly competent, and don't care in the least about the ban.

It's not like the development won't happen; it will just be hidden, making it less obvious to the potential victims (and increasing the size of this group, since you then need to be "in the know").

So no, the only way forward is to have this out in the open as much as possible so that as many people as possible become aware of how trivial it already is to fake stuff.

So, what are you saying?


If you truly believe all that, what are you doing to minimise the inevitable damage?

The transition will surely be traumatic. What's your interest in mitigating the trauma? Does it bother you at all?

I agree with what you wrote above. I wish to slow implementations, to allow time for adjustment.


> If you truly believe all that, what are you doing to minimise the inevitable damage?

I'm not doing anything in particular; I don't know what to do about it.

I could say that I "do my research", but I don't, because I don't know how to do that within the framework of not trusting what I read.

I probably use my personal version of common sense, which of course is biased and lacking. For instance, with the covid masks, I ignored the media arguments from both sides and thought: "if it's in the air, a mask might trap some particles; it might reduce my chance of spreading it or catching it." I didn't believe the studies that said masks were ineffective, because I know doctors use masks during surgery, and Japanese people wear them during the common cold season, so I know they're at least not going to be detrimental.

I'm old enough that I learned a lot of stuff before the information apocalypse. I try to apply my understanding of the world to selecting a course of action, but I also understand that it will be imperfect.

I listen to arguments and try to apply my own rationale in deciding whether to believe them. But we all do that. I don't take statements as facts at face value, because I don't know how they were derived; I try to think about how plausible they are, together with the sources and my existing knowledge. Of course I will fail sometimes.

Yeah, I believe it, but I don't know what can be done about it. I put more confidence in some sources than in others, but I don't believe anything to be 100% true simply based on the source.


Why does it have to be traumatic? Maybe the development of mass media was traumatic, everyone putting near-complete trust in any video they see or piece of text they read was traumatic, and now we will see some of that trauma be undone.


> Maybe the development of mass media was traumatic,

Yes, it really was. Adolf Hitler, for one, found radio and cinema and just used what was already there, before his countrymen had developed the media chops to deal with his messaging.

Others' past failures are not an excuse for our future ones. Nor should we allow ourselves to blithely imagine that this time it'll be different. Without strong evidence to the contrary.


So imagine we had done what you suggest and slowed the spread of radio and cinema, so that people had barely any idea such things were possible. And then one day Hitler comes along after surreptitiously developing radio and cinema, distributes them among the German population, and uses them for his propaganda campaigns. Would that have been better?


Timing is all. You may dislike the analogy - sorry I don't mean to antagonise.

At the start of the pandemic, and in the middle and towards the end there were always voices arguing for "herd immunity now! Let the inevitable happen, as it must".

For me, those voices were too early, until other things had fallen into place. Then they were eventually right.


No worries, I thought it was a good analogy.


I am already not safe from it, but it is less obvious because people like to think the technology doesn't exist merely because it is kind of expensive.


Thank you saurik for what you have done ... and if appropriate for what you have chosen not to do.


Fake information being promoted does not inherently spread chaos.

The church has been around for millennia, and despite all those years of lies we are all still here, thriving and improving.

Bullshit spreads faster, but truth wins out over centuries because it doesn't go away when you stop looking at it.


> The church has been around for millennia, and despite all those years of lies we are all still here, thriving and improving.

Sure, certain members of the church got carried away with their power, but the church has also been a huge psychological boon to billions of people. For example, we have Christianity to thank for the concept of individual rights and sovereignty. It wasn't that long ago that individuals did not have any inherent value. Christianity is at the root of the "modern" concept of "innocent until proven guilty".


I'm not referring to "certain members", I am talking about the big lie that is central to the church, espoused by every member.

Somehow even with that, vested in authority, we have managed to move past it as a species and have progressed beyond superstitions.

Spreading misinformation is a self-correcting problem: it simply goes away when people get bored with the meme. Reality and truth do not.


I'm convinced that this idea that technology is completely neutral is wrong. It is not neutral in the face of human psychology. The human species is a different animal than the human individual, and it is powerful, but it does not make truly conscious decisions.

Once you let the genie out of the bottle, a wish will be made. A technology might not be inherently bad, but neither are knives, and we don't leave those lying around.

That said, it is the human species that develops technology, rarely is one human individual capable of holding back a technology.


'Technology is neutral' is simply a banal observation that technology is only ever used by humans who can decide what to do with it. It's not saying that a new technology doesn't enable new and terrible human choices, simply that those choices rest with the people, not the technology itself.

Knives seem like the perfect example, really. We do leave them lying around in our drawers, usually under zero real security against any malicious guest, but we recognise it is about the choices of the people we invite into our houses, not the existence of the knives themselves, that is the real danger.

You can certainly argue though as you have that humans simply shouldn't have the choice to do certain things - what comes to mind immediately is nuclear weapons.


Knives always seem like a perfect example because they really do have so many good uses, along with some bad ones. But just because two pieces of technology both have good uses and bad uses doesn't mean they're morally equivalent. As to your example, even if I trust people who enter my house I wouldn't want to store a nuke in my kitchen drawers :) And there are many ways to discuss the ethics of tech beyond "good and bad uses", such as to what extent a user of the technology controls it or is controlled by it, what are the negative externalities, does it increase power asymmetry, etc.


> Knives seem like the perfect example, really. We do leave them lying around in our drawers, usually under zero real security against any malicious guest, but we recognise it is about the choices of the people we invite into our houses, not the existence of the knives themselves, that is the real danger.

The problem comes from the scale; it is a knife-versus-nuke question, but you seem to stop a bit too short in your reasoning. Yes, both can kill; they just don't operate at the same scale.

You could forge documents 400 years ago; that doesn't mean it would be ethical to release software that lets you forge any document at scale, instantly and for free, in a single click.

One steam engine is a marvel, 1.4B ICE cars on earth is a nightmare

It's always about scale, almost never about the original purpose/intent of the tech. Modern tech develops and spreads infinitely faster than anything from even 20 years ago.


Yeah, my argument with the knives was pretty weak and I was trying to make the point you make in your last paragraph. I think I meant to type "lying around children". I should've said bombs.


The idea technology is completely neutral is obviously stupid and obviously wrong.

Neil Postman has two amazing books on this subject.

It is not even that technology is good or bad, any technology will have good and bad aspects but the main issue is that we have surrendered culture and agency to technology as a society.

There is no going back or fix now. The fix for all was a culture that devalues bad uses of technology so that there is no money in creating something shitty. We basically have the opposite of that. Even completely useless technology is worth a ton of money because of our culture.

The solution is not Luddism either. Especially a ridiculous techno-Luddism.

For me, the Faustian bargain with technology has already been signed in blood and there is nothing to do other than enjoy the roller coaster ride.


The purported neutrality of technology is an old saw that people who are practitioners of technology ought to discard. The fact they haven't shows how terrible a job we're doing educating them.

Technology is not neutral and never will be because the nature of technology is to be a means to an end, and as such, to be inseparable from its end.

Technologies enable certain future outcomes.

That some of those outcomes are deemed "good" or "bad" just depends on whether they're compatible with the ends that were pursued initially and the worldview that supports them.

Some technologies have unexpected outcomes, but that doesn't make those outcomes independent of those technologies, either.


>The fix for all was a culture that devalues bad uses of technology

>The solution is not Luddism either. Especially a ridiculous techno-Luddism.

Cough.


> It is not neutral in the face of human psychology.

It's worth noting that nuclear weapons seem to have prevented more deaths than they caused. The MAD doctrine has prevented direct confrontation between the superpowers and limited their military conflicts to proxy wars.

And this is a technology that has almost no application beyond vaporizing cities almost instantaneously.


This is a fair point, but we have no way of knowing how close we came or will come to a nuclear apocalypse, so the risk posed by nuclear technology is difficult to evaluate.


The same as what would have happened had we not invented them.

We don't have reliable ways to predict the future, or to predict the present based on past conditions.


We are a single human lifetime into a world that has that technology. If we are to live we will live with it for thousands more. Let's give it a bit more time before we decide how well it's turned out?


On the other hand, history is full of technological developments that caused far more harm than necessary before society figured out a way to deal with it simply because nobody thought these things through at the start.


What examples stick out to you? And what do you think of the solutions that arose?


If it happens even once, those “lives saved” gains will be wiped away. If the stock market rises for a hundred years and then literally goes to zero, you've still lost the long game, buying short-term gains at the expense of the long term.

If our children all die in a nuclear holocaust, it doesn’t really matter how many lives we saved, does it?


The acronym MAD is not a coincidence.


Can anybody demonstrate a legitimate use of deepfake software? Has it ever been used to facilitate a socially positive or desirable outcome? While I recognize my experiences are far from definitive, I hazard most would be hard pressed to name anything positive that came out of deepfake technology.

edit: I’ll take your knee-jerk DV, and any others, as an admission of an inability to speak to positive utility of this technology.


Edit: this comment refers to deepfakes more broadly and is not a commentary on the validity of the source linked here. I can't speak to the reputability of the community developing this, or how it has been used so far.

I'm a fairly visual and imaginative person, and it's pretty easy for me to come up with some very useful applications. No hostility intended, genuinely sharing my thoughts:

1. CGI for video editing - lower the bar of entry to de-age actors, or use a stand-in. Actor can't make it to a shoot that day? No worries, replace their face in post easily.

2. Identity protection - Cold call with someone that reached out to you, you're not sure if they're safe or dangerous, could be a good way to protect yourself.

3. Social media content for clients - essentially become a fake avatar for hire; customize your narrator for any video or brand. Video call centers with fake video (they already have voice modifiers and fake names), enhanced VTuber sorts of things (virtual avatars for streaming).

4. Unexpected outcomes: for example Holly Herndon created (and sold) access to an AI replica of her singing voice (n1), and I could see artists selling or renting access to their faces.

Obviously this can and will be used maliciously, but I personally could see myself using it for more positive reasons.

n1. https://holly.mirror.xyz/54ds2IiOnvthjGFkokFCoaI4EabytH9xjAY...


First, let me thank you for a thoughtful riposte! I do appreciate that. My question was an honest one and, I imagine, not the easiest to conceive an answer to. I genuinely appreciate your taking the time to share your thoughts.

With that said, almost every use case cited was about financial or monetary gain, whereas I enquired about social utility and value.

That dishonesty (i.e. the creation of a fake avatar) is cited as being of social utility strikes me as a reach. I don't see how adding more dishonesty and facades to the world adds social value, but then I may just be of limited imagination.


I would appreciate being able to morph my appearance and voice on video calls, into something more reflective of my identity (facial structure, hair color, cat ears) than the body I was provided by circumstances.


> every use-case cited was financial or monetary gain whereas I enquired about social utility and value.

Oh, right. I think the unfortunate answer is that nobody cares about that distinction any more and only about the former.


> With that said almost every use-case cited was financial or monetary gain whereas I enquired about social utility and value.

It could allow people to petition their government [0] without having someone bust down their door at 3am and disappear forever.

[0] Which is apparently important enough to have been included in the bill of rights


> It could allow people to petition their government [0] without having someone bust down their door at 3am and disappear forever.

Surely just wearing a mask - virtual or otherwise - does the job without the deep fakery?


No doubt, but they wanted a non-capitalistic reason why this technology isn’t The Debil.

And having subtle clues from watching a person’s face while they’re proposing whatever action to overthrow the patriarchy is more convincing than some random person wearing a Guy Fawkes mask talking about revolution for the umpteenth time.


Whoa, whoa, I never claimed the technology was the “debil”. I merely asked for examples of viable, moral, unique uses of this tech.


I think some socially positive use cases could be:

- Representing assistive robots/software with friendly human faces

- Reconstructing the likeness of people with permanent facial injuries when connecting with family

Other, questionably "legitimate", commercial uses are already in production:

- auto-generated corporate training videos

- "Personalized" advertising

I'm hating it already.


#2 sounds really interesting! I'm not sure of the psychological ramifications, but I can't imagine they'd be different from any other sort of prosthesis, save for an inability to actually touch it.

I could see it being used in AR to conceal identity to facilitate more equitable medical outcomes, I suppose.

Thank you again for the input! I was honestly at a loss for positive applications outside of financial gain.

I haven’t seen any ads driven by deepfake, or at least I don’t think I have. That advertising bit does sound rather obnoxious though!


Thanks for encouraging productive discussion! Your original question made me come up with #2 - I couldn't find active development on that specific concept, but I found something pretty amazing.

"Deepfake therapy" lets therapists simulate the presence of dead or non-cooperative people [1]. A study showed positive results when sexual violence victims could safely discuss with a deepfaked version of their abuser [2].

[1]: https://deeptherapy.ai/

[2]: Initial development of perpetrator confrontation using deepfake technology in victims with sexual violence-related PTSD and moral injury: https://www.frontiersin.org/articles/10.3389/fpsyt.2022.8829...


As one example, Matt Stone and Trey Parker made a short movie (and tried to do a feature) with deep fake technology [0].

[0] https://collider.com/trey-parker-matt-stone-almost-made-sass...


That is pretty neat, any sort of art does add cultural and social utility to a degree. Thanks for the heads up, because just about every mention I’ve seen published on the topic is more or less a horror story. I wasn’t being facetious in my query. Thanks again for the input!


Sassy Justice is hilarious, I hope they do end up making more


What is the boundary between "deepfake" and "photoshop" (i.e. regular human "fake" or edit?)

I suspect it's going to become popular for both consensual-deepfake of oneself (PR, magazines, actors, pop stars, any form of public speaker) and "bought out" deepfake (actors selling out their image rights and then losing creative control; dead actors, etc.)

The political-deepfake is really going to accelerate the debate over how much free speech permits you to just lie about people, though.


The number of man-hours that would be necessary to plausibly fake even a short film in Photoshop, if I had to guess. It strikes me as analogous to owning sidearms versus BMGs and rocket launchers. One of these tools makes doing bad things far easier.

Another analogy. Say somebody makes some hacking kit. Say it uses zero day exploits to compromise Windows, Mac, and Linux. Would any of us take issue with that? Would it be a different story if it was made into a push-button tool like WinNuke was in the 1990s? Or automated to the extent that somebody who can make a word doc could employ it against your systems? Is there really no feasible line of distinction here, in your eyes?


The social good of deepfake technology will be the destruction of the unwarranted power which has been given to image, and which the Internet has amplified.

Think about it: people choose to trust or not to trust based on a face. When deepfaking becomes a tool easily available to every average joe, appearance will lose some of its power. People will learn to let go of their irrational trust in faces.


The technology isn't just deepfakes; deepfaking is one capability of techniques that do more general object/person replacement. It is such a small step from techniques like digital de-aging to a full fake face that working on one makes the others possible, and trying to ban one will have unintended consequences for the others.


I've been playing TTRPGs via video chat with my friends since the pandemic, and I've often thought about setting up video avatars for our characters. It would be especially cool for the DM to be able to switch personas on the fly, and for players to have their characters in video chat.


This kind of technology is far too useful to repressive regimes and those who wish to do nasty things with it.

This means that the incentive to develop this technology is already there, and so it WILL be developed no matter how much people wish it wouldn't.

The only difference at this point is whether some of the implementations are developed in public view or not. If none are public, then all of them will be done in secret, and our opportunities to develop countermeasures will be severely hampered by having fewer eyes on it, and a smaller entry funnel for potential white hats.


> it WILL be developed no matter how much people wish it wouldn't.

Sure, but there's a value in delaying that development. Delaying harm is a valid tactic.


> > it WILL be developed no matter how much people wish it wouldn't.

> Sure, but there's a value in delaying that development. Delaying harm is a valid tactic.

It is a horrible tactic when your adversary has ultra-deep pockets & mountains of bodies to throw at the problem.

Whatever's being developed here has 1/10th the capability of the tech being developed in black site facilities at the behest of national militaries.


> Whatever's being developed here has 1/10th the capability of the tech being developed in black site facilities at the behest of national militaries.

Debatable at least in case of smaller dictatorships like North Korea, Eritrea, Qatar ...


Which are blocked from pulling from Github?


> Which are blocked from pulling from Github?

In the case that they're blocked: they'd only need one unrestricted computer to pull the libraries onto and subsequently copy them to siloed-off workstations. It's not like anyone will actually care to notice.

In the case that they aren't blocked: Business as usual.


Exactly.


You really think the government sponsored projects are subject to bans on downloading stuff from GitHub?


Are we in a rhetorical question competition? :)


The entities with interest in it have bigger pockets than some random open source project.


There are many small entities with interest in such a technology which don't have huge budgets. Small terrorist / extremist organizations pushing a specific agenda.

Countries like China can no doubt develop this independently, but they are unlikely to share the technology with random extremists.


Alternatively, having tools like this easily available makes it easier to raise awareness and build teams to combat them.


Delay until when?


It's hard to take seriously the argument that "tech is neutral" when it concerns software. One could maybe make this argument for the hardware underneath (the chips and the cables). They are, after all, called "general purpose computing" devices, and the packets moving around are general-purpose streams of bits as well [0].

But software is not "tech". It is the explicit expression and projection of cultural objectives and values onto a particular type of tech. You can take the exact precise hardware we have today and reprogram a million different worlds on it, some better, some worse.

Developers are simply the willing executioners of prevalent power structures. Deal with it. If you have a moral backbone (i.e., you don't agree with the prevalent morality as expressed in what the software industry currently does) do something about it.

[0] Of course, upon deeper examination, overall system design (e.g. how client- or server-heavy the configuration is, what kind of UI is promoted, etc.) is not neutral either. Cultural/political/economic choices creep in *everywhere*.


> But software is not "tech". It is the explicit expression and projection of cultural objectives and values onto a particular type of tech. You can take the exact precise hardware we have today and reprogram a million different worlds on it, some better, some worse.

But all of them would have computing as a cultural artifact of some kind, and certainly not a "neutral" one. I've come to think of the idea of "general purpose computing" as basically a trojan horse for the naive "tech is neutral" POV. "Making things computable faster" is not a "neutral" change to the universe; it's a step in a specific direction, one that defines the boundaries of a particular future cone to the exclusion of many others.


Particularly ironic to use the defence of "it's not the technology that's to blame, but the person; not the machine gun, but the person".

Machine guns: an advanced piece of engineering widely known to have been developed purely as an academic exercise. No one could have expected other uses.


The argument is slightly different here though: The OP is blaming the person who invented the machine gun in the analogy.


Which is obviously reasonable! If you know how to build weapons, don't. Let somebody else do it.


Scientists have a moral responsibility not to build weapons of mass destruction, for example. Too bad even Einstein couldn't keep to your advice. Scientists are sometimes not so smart. They can be fooled like anybody else.


We're not talking about accidentally inventing a machine gun, we're talking about being cautioned about inventing a machine gun and not caring:

"Hey you're inventing a machine gun!" "No I'm not, and even if I was, machine guns can be used for many things other than violence!"

Edit: Also, Einstein in this analogy is the one cautioning about someone being able to invent a machine gun.


Here's what I would have answered to the OP in the link:

This technology is going to be developed regardless of what we do here. Please realize that you are not advocating for it not to be developed: rather, you are advocating for it not to be developed in the open.


Scammers don't have an unlimited amount of money and skill. Frankly, if the people working at scammer call centers had the skill to make something like this, they probably wouldn't want to work in a scamming call center.

Let this technology be made by someone else. Let it be made by governments and whoever else in secret. We know it's possible, but just because it's possible doesn't mean you have to be the person who does it. If we can delay scammers using this stuff for a few years, we'll keep millions of people from being scammed in the intervening time. That's a win in my book.


If someone blames this technology, why not blame guns, warships, tanks, airplanes, shotguns, and machine guns before blaming this technology?

We actually do blame them, except for airplanes. Most of these were invented at a time when lives had much less value, and they are of no use unless some half-minded pig attacks you or tries to undermine your defenses.

I'd like to see how this line of reasoning changes when someone releases a virus targeting your DNA in your backyard, made with funnyjokes/easy-create-virus-for-a-drone-app.


I'll give them props for using an example that is at least challenging to think of positive uses for. The comments snarkily advocating banning electricity or penises just make me wonder what the users' reading comprehension skills are, given that the initial comment is pretty clear on the problem being that they don't think deepfakes have ANY positive use


Remember: if you helped develop it, you're responsible for it. If it kills people, you share the blame.

So, what's the possible scenario for that outcome? Well, look at the upcoming elections in Nigeria. The BBC writes: "With an estimated 80 million Nigerians online, social media plays a huge role in national debates about politics. Our investigation uncovered different tactics used to reach more people on Twitter. Many play on divisive issues such as religious, ethnic and regional differences." ABC News writes: "At least 800 people died in post-election violence after the 2011 polls."

Adding deepfakes into this mix can trigger violent reactions. Should that happen, the creators of the deepfakes are obviously to blame, but those who enabled them, including the original researchers, are also responsible. Ignoring that is just putting your head in the sand.


I think "blame" is the wrong term here.

No one can be blamed for unintended consequences (most of the time, I guess). However, the fact remains that one is a crucial part of the chain of events that led to the very existence of the consequence.


Knowingly. It's not as if no-one warned, and as if there aren't any precedents.

But you wouldn't blame the man who puts a poison pill near a playground, and claims "I didn't make them eat it"?


This reply to your comment could only exist because you posted your comment. Are you responsible for anything I say in this box? You enabled it!


Exaggerating one aspect of the argument into ridiculousness and ignoring the rest does not show "good faith".


> if you helped develop it, you're responsible for it

FOSS comes without warranty and liability. Read the license.


I think it's pretty obvious that autoencoder deepfake tech and similar technologies are going to be useful, maybe even essential in visual effects. The perceived problem seems to be that the 'irresponsible rabble' also have access to it.
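
For anyone unfamiliar with the term, here is a minimal sketch of the "shared encoder, one decoder per identity" autoencoder idea behind classic face-swap pipelines; the PyTorch framing and all layer sizes are illustrative assumptions, not how any particular tool is built:

    import torch
    import torch.nn as nn

    # Illustrative sketch only: real pipelines add face detection and
    # alignment, masking, perceptual/adversarial losses, and far larger
    # networks trained for days.
    class Encoder(nn.Module):
        def __init__(self, latent_dim: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
                nn.Flatten(),
                nn.Linear(128 * 16 * 16, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self, latent_dim: int = 256):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, z):
            h = self.fc(z).view(-1, 128, 16, 16)
            return self.net(h)

    encoder = Encoder()
    decoder_a = Decoder()  # trained to reconstruct person A's face
    decoder_b = Decoder()  # trained to reconstruct person B's face

    # Training reconstructs each person through the *shared* encoder
    # (e.g. an MSE loss against the input face). The swap happens only
    # at inference: encode a frame of B, decode with A's decoder, and
    # you get A's face with B's pose and expression.
    frame_of_b = torch.rand(1, 3, 64, 64)  # placeholder input frame
    fake_a = decoder_a(encoder(frame_of_b))

The point is how small the conceptual core is; what separates amateur and professional results is the engineering and compute wrapped around it.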

But as the barrier to entry for really convincing output goes up (768px /1024px training pipelines, and beyond), and it suddenly becomes something that one person alone can't really do well any more, the 'amateur' stuff is going to look far worse to people than it does now. You just have to wait for that barrier to rise, and I can tell you as a VFX insider that that is happening right now.

Deepfakes are the reverse of CGI, which began as an inaccessible technology and gradually became accessible, before the scale of its use in VFX reversed that again.

Now, assuming you can either afford or will pirate the right software, you could probably match any CGI VFX shot in a major blockbuster if you gave up your job and worked on it non-stop for a year of 18-hour days (assuming you'd already been through the steep learning curve of the pipeline). So it's out of reach, really, and so will the best deepfakes be.

This stuff everyone is so scared of will end up gate-kept, if only for logistical reasons (never mind any new laws that would address it) - at least at the quality that's so feared in these comments.


What utterly horrible minds in that comment thread. The justification that other bad things happen means we can reasonably create more evil is disgusting.

They possess a complete lack of a moral compass.


Joseph Weizenbaum once responded to the claim (paraphrasing) "If I don't develop evil/unethical thing Z, someone else will" with: "In city X there is rape every day, and if I don't do it, someone else will", showing how stupid this line of reasoning is.


But that's stupid, if he does it there is MORE rape.


So not only will someone else do it, if you join, there will be even more harm. So don't do it!


Which is a different situation than the one with developing some technology. If someone develops faceswap, and then someone else develops it too, we won't have twice the amount of faceswap-badness.

edit: which is not to say that I think "if I don't do it, someone else will" is a good argument for doing something you disagree with..


It's not just that, they also lack reading comprehension skills. The initial comment could not be more clear that this tech has practically no positive use whatsoever, yet all the snarky "guess we should ban electricity" comments bring up things that have pretty obvious positive uses


Let's run this through Hanlon's razor...no one is suggesting that we "create more evil". They're simply suggesting that this technology is not inherently evil.

Whether it is evil or not, we can debate, but let's start by assuming good faith by all debate participants.


Instead of blocking technology, what about addressing the root problem: People need to understand concepts such as "chain of trust".

Do you trust videocall participants because you recognize their faces and voices? ...Or because a server certified by a root CA has authenticated the other participants?

The age of deepfakes has started and nobody can stop it. Improving our mental security models will become as essential as literacy.
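
To make the chain-of-trust point concrete, here is a minimal sketch using Python's standard ssl module (the hostname is a placeholder; this is the ordinary TLS verification every browser performs, nothing deepfake-specific):

    import socket
    import ssl

    def verify_peer(hostname: str, port: int = 443) -> dict:
        # create_default_context() loads the system's trusted root CAs and
        # enables hostname checking plus certificate-chain verification.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            # The TLS handshake raises ssl.SSLCertVerificationError if the
            # server's certificate chain doesn't end at a trusted root CA.
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                # The peer is now authenticated by the chain of trust,
                # not by anything we visually "recognize".
                return tls.getpeercert()

    cert = verify_peer("example.com")  # placeholder hostname
    print(cert["subject"], cert["issuer"])

Trust here derives from the root CA store and the handshake, not from a familiar face or voice on the screen.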


An age-old discussion. Ultimately I prefer for this technology to be developed in the open on GitHub, where we can be aware of it (and able to combat nefarious uses of it).

The alternative is that it gets developed hidden and used by the most vile and evil without many being aware of it (which it most definitely would be).

As with nuclear, the cat's out of the bag, or the baby's bathwater(?) already spilled; there's no way to turn back the clock on technological innovation.


I disagree. Most scammers are unable to build this. Providing attackers with ready to use tech will increase attacks.


Now that it is in the open, any high school boy can create porn deepfakes of his classmates, and the suicide rate will probably rise. Let's see if you still hold this view when one of your family members is a victim.


People have been able to draw naked pictures of people forever, and almost anyone with very little skill has been able to create extremely believable realistic fake naked photographs of people for over a decade now. I honestly don't understand the fear here: video is only more believable than a photograph because video is harder to forge... well, now it isn't, so video isn't believable either. But video could already be forged before, it just required more effort and more skill so a bunch of people have chosen to accept a miserable heuristic that it can't be forged and so it is worth always believing it... why is this good? How is this even sort of OK? We should strive to live in a world where video is as farcical as photography.


Effort is what stops people from bullying others at much larger scale. You don't understand bullying at all. It's easy for them to kick your ass or say things every time you pass by, but hard for them to create anything without single-tap solutions.


We are living just fine with car-related deaths. The rate is much higher than suicide rates.

Bullying is alive and well even without this tech. I guess scissoring a photo out of the yearbook and gluing it into a porn magazine would work just as well. It is not about believability but about the psychological damage when it comes to bullying.


But with cars there is an upside: we tolerate them because they are so useful, and the government does its best to make them as safe as possible, setting requirements that need to be met for you to legally use one.

In this case there isn't much upside; it's basically only being used for non-consensual porn generation and impersonating people.


Upside: Consensual porn generation and consensual impersonation.


Then it's going to be pretty hard to defend that the "downside" usage and the "upside" usage are anywhere near each other in terms of volume.

To be clearer: yeah, some people somewhere on the planet are probably creating deepfakes of their own, consenting GFs, and more fun to them. There are also comedians doing great things with it.

But let's not kid ourselves about the actual, massively majority usage: making someone else's life just a tad shittier, either for profit or out of pure meanness.

I don't know if there is academically sanctioned data on this to appease the HN crowd, but when it comes out and says "yeah, statistically, talented comedians and kinky couples are only a fraction of the scammers and revenge-p.rn users", don't count me surprised.


Hard disagree. Quite the opposite: the existence of deepfakes provides a safety net of plausible deniability. Suicide rates should drop as the technology becomes more prevalent.


Rule 34 of the internet. It's inevitable.


The big confusion here is the idea that this particular development effort is the one creating this technology, rather than merely diffusing it.

If somehow we could get the United Nations to agree to ban deepfake development worldwide, then surely we should enter an ethical discussion about whether we should do it or not. In a world where sophisticated actors already have access to these (and much better) tools, having an open-source GitHub repo is a good thing in my view.


I think it's inevitable that sub-societies will form that accept technology only in ways that preserve the humanity of our interactions. We use the term "disruptive" all the time around here, but what's being disrupted is increasingly close to the heart of human life: the ability to engage with the ideas, emotions, and opinions of another human being, and to know them as a person. If a computer can convincingly impersonate anybody, putting any words you choose in their mouth, and at the same time can generate convincing words on any given subject, it can flood electronic communication with noise that's impossible to remove. It can destroy the trust we have in any interactions not had in person, and we've allowed our society to develop in such a way that interacting only in person is no longer practical.

I know that it's expected that a man of a certain age will begin to say these things, but I think it's true now in a way that it was not true about "those young whippersnappers with their motorcars": we've taken a wrong turn, and I don't like where we're headed.


I've been having this argument over the past few days with some friends/acquaintances in a discord server. I just keep getting told that it's all just 'sensationalist' reactions to new tech. Some of them are quite gung-ho on the 'progress' and never stop to question what it's doing to our humanity and what it means to be human.

I personally think the internet inherently devalues the humanity of interactions - even on voice chat on Discord or some other service. There's something missing from what we get in person that we're losing out on, and I think that's part of why the internet is, generally, a cesspool (doubly so with anonymity thrown in). But we can't escape -- go to a concert, for instance, and how many people are actually paying attention versus just capturing the moment to prove they were there and share it on Snap/Insta?

Like you, I fear we're going a bad way... and they're all just "I can't wait to see how this changes things!" without ever seeming to stop and seriously question things, I feel. Their arguments seem to come down to "Well, it's perfect connection for me and I think it's neat"... but then they complain about being depressed and not getting out and doing anything. Discord and the internet aren't a substitute for IRL, and we're fixing to take the people who assume they are to the extreme with this new tech. A reckoning is coming, I fear.


I haven't paid much attention to the deepfake community, but this one is debatable: one of their linked forums has a section for flagging uncredited videos or work.

So deepfake authors want credit for their work. That's perplexing.

What's more, this is happening while they seem to be ignoring the ethical concerns raised in the issue, citing that people can do whatever they want with the tech.


Technology like this has blown up on Twitch and Twitch-adjacent circles within the past couple of days (I won't link to it due to the sexual nature, but you can find references on /r/livestreamfail). Men taking women's [Twitch streamers'] faces; use your imagination from there.

Completely inappropriate and unethical.

What value does this technology add? Bringing Princess Leia back to another Star Wars movie? Anything more than that?


You don't have to use your imagination; they proudly list “mrdeepfakes - the biggest NSFW English deepfake community” in the repo's readme.


Quite often in science fiction media, despite the advancement of technology, I see only text and graphics (but not pictures) as user interfaces. I wonder if this is the path we will be going down: zero trust towards images and video.


Technology can have good and bad uses.

If they do not make them FOSS in public, then the Conspiracy will invent their own and use them for bad purposes only.

Furthermore, even if a program is written, you can decide not to use it; that it is written (as FOSS) means that you can read how it works, now that someone else has written it. You can also execute it on a computer, if that is what is desired. Also, if it is well known enough, then hopefully if someone does use it deceptively against you, you might be able to guess, or to figure it out (although it might be difficult, at least it might be possible if it is known well enough).

I have no intention of using such a thing, but someone else might figure out uses for it.

(For example, maybe there are some uses in movies, for example if the original actor has been injured for an extended period of time (including if they are dead), or if they want to make up a picture of someone who does not exist. (Although they should avoid being deceptive; for example, mention the use of such a thing in the credits.) Even if it is considered acceptable, though, some people will prefer to make movies without it, and such a thing should be acceptable too.)

(I think even in Star Trek, in the story, in some episodes they made deepfake movies of someone. And even in Star Trek, both good and bad uses are possible. Or am I mistaken?)

Nevertheless, there may be some dangers involved, but there are potential dangers with anything; if you are careful, then hopefully you can avoid them.


Perhaps another thing that I should mention, which should be considered if an evaluation is to be made, is that such technology already exists. (I forgot to mention that earlier.)

Also, as mentioned in some other comments: "Alternatively, having tools like this easily available makes it easier to raise awareness and build teams to combat them." Another thing mentioned in another comment: "The entities with interest in it have bigger pockets than some random open source project. ... There are many small entities with interest in such a technology which don't have huge budgets. Small terrorist / extremist organizations pushing a specific agenda." However, in the small case it is perhaps not as impactful, now that it is FOSS and allows others to raise awareness and combat them, as described in the first comment. In the big case, where others could make it independently, having this FOSS implementation helps even more, since they would otherwise just build it themselves.


The deepfake technology is awesome and should be available to everybody, because this is the only way everybody can finally be taught to think critically about everything they hear and see.

Can you believe a politician saying something on TV? Hell no! You should exercise logic about the whole political play he is a part of. Should you think badly of a person you find on a porn site? Absolutely not; what good could come of that in any case?

It has always been like this, but now there is a thing which can push this into common sense.


This rabbit hole goes waaay deep.

Why should people believe anything you say on this topic? How can they know you're not a bot or a psy-ops troll working for Xi or Putin with the mission to undermine the very fabric of Democratic societies?

What truths/facts should people NOT doubt when they're using "common sense" and thinking "critically"? Are we not all affected by propaganda that directly targets what we consider "facts" and "common sense"?

So, to complete the paradox, I will claim one "fact": as humans there are VERY FEW "facts" we can know from first principles, and there are even fewer principles we can know for a fact to be universal, or even useful.

We DEPEND on at least some authorities, whether those are people, institutions, ideas or beliefs.

When someone says "think critically!", they mean that you should put a higher burden of evidence on SOME of your beliefs. But not all, and definitely not those that this someone takes as axiomatic truths.

And my main worry is not to find some celebrity's face on PornHub. My main concern is the day when almost everything we see that claims to be news has been tampered with in such a way. If we have no way to tell a deepfake from a true video, we can be made to believe absolutely anything. This can be used to ruin lives, trigger wars, including civil wars, even cause nuclear holocaust. And it's already happening: Twitter is full of lies of every kind and from all parties and countries.

We may need to find a way back to a world where there is one main shared narrative that we can all more or less trust. Where the custodians of the institutions that provide the narrative understand the need to maintain that trust and the risks involved in undermining it for personal gain, and where there are checks and balances that remove bad actors from such positions.

Without this, I believe we're f'ed, but HOW to obtain it, I don't know.


> We may need to find a way back to a world where there is one main shared narrative that we can all more or less trust. Where the custodians of the institutions that provide the narrative understand the need to maintain that trust and the risks involved in undermining it for personal gain, and where there are checks and balances that remove bad actors from such positions.

When was such a time in history? I'm trying to think of one and I can't. Misinformation has spread in the past too, albeit slowly. The checks and balances did nothing to stop the custodians of the institutions from dismissing heliocentrism, germ theory, or continental drift. And for a modern example, the checks and balances failed when the authorities shared a narrative about weapons of mass destruction in Iraq.

All of that was possible without today's technology.


> Misinformation has spread in the past too, albeit slowly.

I would argue that almost all information is "wrong" in the strict sense. Everyone has a ceiling for how detailed a model of reality they need (including moral and ethical reality). And even those with the highest ceiling (the most exact model) are simplifying in ways that others can see as "wrong".

"The earth is flat", "Earth is spherical" and "Earth is an ellipsoid" are all "wrong", technically, yet most people would be fine with the last 2.

What we have at times been able to do in the past is agree on a general narrative and the core of a shared model, at least within our own extended tribe or country.

At times these world views have been diverging. This happened many times with religion, such as during the Reformation. This gave rise to severe wars and civil wars.

> Misinformation has spread in the past too, albeit slowly. The checks and balances did nothing to stop the custodians of the institutions from dismissing heliocentrism, germ theory, or continental drift.

These are all cases where scientism started taking over custody of many beliefs that were previously governed by religion. This did cause some friction, but nothing nearly as bad as during the Reformation.

> And for a modern example, the checks and balances failed when the authorities shared a narrative about weapons of mass destruction in Iraq.

There are two aspects to this. I personally believed that the Bush administration actually believed this, and that it was simply bad intel. But if you think they were spreading misinformation on purpose, well, that means you have already lost faith in the government. Which is bad if you're right, and maybe even worse if you're wrong.

> All of that was possible without today's technology.

I would argue that the paradigm we've had in the West, based on scientism and liberalism, has worked relatively well, except where there have been clashes with traditional religions or novel religion-like ideologies (nazism and communism). Conservatives, liberals and even social democrats shared most core beliefs, and differences in opinion about details related to religion or priority were possible to resolve (with some pains, of course).

Part of the reason this worked, was geographical distance. Minor disagreements don't seem quite so important if some person is 1000km away.

Now, though, it seems that technology (twitter being the worst one) forces us into groups, and even punishes those who are not faithful to the canonical beliefs of those groups.

If you're in the MAGA group and believe global warming is a problem, you will be seen as a RINO. If you're progressive (or even "WOKE") and believe that climate change is only a moderate problem (even while fully accepting the IPCC findings), you're labelled a "Climate Denier". In both cases you may be well within the range that is compatible with the scientific consensus, but that doesn't seem to matter anymore.

And I'm not sure if there are ANY generally trusted institutions around that clearly and loudly express what the scientific consensus really is, including what the uncertainty band is.

Since most people do not have nearly the scientific literacy to interpret the IPCC findings (not to speak of the underlying science), they are left with second-hand interpretations, and those tend to be highly oriented towards one of the extreme "tribe narratives", which (as far as I can tell) are both almost equally wrong.

And demagogues on both sides are excellent at explaining why the OTHER side is wrong, but they almost never explain why beliefs popular on their OWN side are wrong. More and more people are coming to think that the other tribe's members are all stupid, liars or even evil, while their own side is factually correct and morally virtuous.

Twitter and Facebook have obviously been boosting this. On Twitter you risk being bombarded with propaganda and hit pieces that are either inaccurate, incomplete, or in some cases completely made up.

Now, people are making up stories already. How will it be when anyone can create a video that appears to show the opponent's champion doing exactly what the worst conspiracy theories claim (like Hillary raping a child or Trump peeing on some prostitute), live action backed up by people you trust claiming it's all real?

That's the stuff civil wars (or world wars) are made of.


> Now, people are making up stories already. How will it be when anyone can create a video that appears to show the opponent's champion doing exactly what the worst conspiracy theories claim

I see. Some people can't help but believe what their eyes see and what their minds think, instead of treating that as just a candidate for a working hypothesis. We must pity them.


> Why should people believe anything you say on this topic?

They shouldn't. I would just be glad if they would analyse my opinion and try to understand what I mean. Whenever I mention a fact, they should check it if they consider it important.

> How can they know you're not a bot or a psy-ops troll working for Xi or Putin with the mission to undermine the very fabric of Democratic societies?

I am a psy-ops troll working for myself, and my mission is to destroy unconscious beliefs, help people choose their own perceptions and beliefs consciously, develop habits of exercising valid logic, and become unmanipulatable and happy this way. Perhaps this indeed can undermine the fabric of Democratic societies. A democratic society (let alone an autocratic one) is not free; it generally is a dictate of opportunist manipulators who manage to mesmerize the majority (while the majority is never too bright). Isn't it?

> As humans there are VERY FEW "facts" we can know from first principles

"A person looking like this is being shown on TV waying that right now" is a fact (not 100% reliable, perhaps I'v dreaming, but it most probably is). The statement he announces is not a fact, probably a lie. But now we can speculate about why would he say that now, given current context, how probable it is he is nit a deepfake and whether or not does this matter.

Only five different regular polyhedra can exist in Euclidean 3D space: that is another fact.


> I am a psy-ops troll working for myself,

So you say. How can I know you're not lying even there?

> Only five different regular polyhedra can exist in Euclidean 3D space: that is another fact.

Most people are no more able to verify this statement than statements about 11-dimensional string theory.

If you have their trust, they will believe you; otherwise, not. Regardless of whether it's true.

This goes all the way down. If highly educated and intelligent people are not able to overcome their differences and clearly state what math and science are saying, including what the error bars are, and for all domains (including climate and sociobiology), people will distrust scientists they see as aligned with the other side.


> How can I know you're not lying even there?

You never know. People mostly just believe what and whom they feel like believing. People with a sense of reasonable credibility are extremely rare. Simply adjusting your voice is enough to make a non-expert believe or disbelieve you, and more.


Basically, any modicum of security people got by installing dashcams and home security cameras is gone. Thanks.


There are a lot of completely benign use cases.

1. I want to deepfake myself to have an avatar for online interaction.

2. I want to generate videos instead of filming by pasting people into existing videos.

3. Prevent a Face/Off scenario.


Lots of comments here and in the GitHub thread claiming there are no legitimate uses for this, so I thought I'd drop a legitimate use just to have an example: I saw an unrelated article today where someone had used some deepfake technology to change the spoken language of an actor.

Imagine what that would mean for dubbed movies/TV if it gets good enough.

There are legit use cases, and that justifies the technology's existence. Bad actors don't make it immoral to develop a technology, IMO.


> Imagine what that would mean for dubbed movies/TV if it gets good enough.

In Germany everything is dubbed, and never have I cared about the mouth movement. It's all about the voice and the voice acting. But OK, it will be used for everything, and everyone will get used to the fakery.

Still, it is (not will be, already is) mostly used for revenge porn, synthetic CP and CEO fraud.


Everything people fear about deepfakes has been true of text for the entire history of writing. We've had a brief period in human history during which you could mostly believe data you received from afar, without having to trust the source, because telling a lie with video was much harder than telling a lie with text. Now it's almost as easy to tell a lie with video, so we'll have to check sources again. Somehow I think we'll survive.


I can see a few good uses for this tech:

Face swapping + voice swapping + auto translate = your customer support can be anyone on the planet but look and sound familiar to you. Maybe you're getting over a facial injury.

Face swapping = you no longer have to put on make-up. Just swap in your made-up face for meetings.

Face swapping + voice recordings + AI that learns = that scene in Contact where Jodie Foster talks to the alien, but he takes the form of her father to make her feel more comfortable.


There is no good use for this technology; nothing good that truly outweighs the bad. I agree with the sentiment. These deepfakes are not good: they cheapen everything and lower the standard for all. It's literally scammers who want this stuff, and people who want to take shortcuts. Essentially, you can morally judge a person by their approach to this technology.


A lot of people in the comments here are saying that this is beneficial because it will teach people that they can't trust video or audio. But I don't see how that makes sense because this isn't some neutered or weakened form of the technology. That's like saying shooting people makes them more aware of gun violence.


> That's like saying ...

It's like releasing software that cracks all encryption (or, more accurately, authentication), to which previously only elite members of society had access.

It's not a physical weapon, it's an information-security weapon; I don't think the gun metaphor is appropriate.


The problem here is how our societies handle the arrival of new technology, not the technology itself.

Here it's obvious what's going to happen without robust legislation protecting the likeness of all individuals (and not just special-case celebs) from the non-consensual generation of new material.

The fact that such legislation is unlikely to happen before an awful lot of suffering has occurred is a testament both to the naive belief that everything new is good, and to legislative processes, riddled with vested interests and with bandwidth from the age of sail, being left to handle the downsides when scaling breaks the happy-path assumptions.

Focusing efforts on the legislative process seems likely to be more productive than point solutions that rely on techies and scientists not to develop tech that can be used for nefarious purposes.


How much are you going to regulate and stop? Game engines already deliver photorealistic real-time content. Stop Unreal and MetaHumans too? What about v6 of Unreal?

What if this same repo were owned by Nvidia, which had a commercial interest in the product and was ready to litigate? Would everyone still pile on it?

Is it not, on some level, disdain that it's just run by a bunch of guys who can be pushed around without much consequence?

Would we have a thread saying "screw it, shut down ChatGPT, it doesn't fit my moral world-view"? Why is that absurd but this is fair discussion?


At the end of the day, it's just pixels on a screen. It's not fair to compare it to machine guns or atomic bombs which cause real physical harm.

You might argue that the technology to make pixels on a screen resembling real humans is bad, but then you have to actually make that argument (and "some people got scammed" is indeed such an argument, albeit a pretty weak one), not just shift it to "this is technology, machine guns are technology, machine guns are bad".


I don’t think any specific technology is good or bad, but I do think you could attempt to quantify its impact better by measuring how often it’s used for “good” versus “bad”.

It’s not going to end the argument though, not least because you’ll then have to assign a relative value to abstract things like “personal freedom” or “artistic expression”.

Even if you could quantify the losses to crypto scams, you can’t put an objective value on some of its more ideological benefits.


Why is it such a big concern? As far as I know, deepfakes can be recognised with solid confidence using another neural net model. If the threat is real and rising, it's only a matter of time before this kind of detection is built into every real-time communication app. Sure, it must be developed, and it's a good thing it is done in public, so the defenders can prepare their defences on open-source material, no?
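
For illustration, a minimal sketch of such a detector in PyTorch: a tiny binary real-vs-fake classifier. The architecture and the DeepfakeDetector name are made up for this comment; real detectors are far larger and trained on labeled deepfake datasets.

    import torch
    import torch.nn as nn

    class DeepfakeDetector(nn.Module):  # hypothetical toy detector
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)  # one logit: "how fake is this frame"

        def forward(self, frames):  # frames: (batch, 3, H, W)
            return self.head(self.features(frames).flatten(1))

    detector = DeepfakeDetector()  # untrained here; would need labeled data
    frame = torch.randn(1, 3, 224, 224)  # stand-in for a video frame
    p_fake = torch.sigmoid(detector(frame))  # probability the frame is synthetic
    print(p_fake.item())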


On a side note, is this really all in Python? I imagine it's offloading some stuff to the GPU, right? Maybe the GPU instructions are also stored in Python?


Among others it uses pytorch and numpy, two Python packages whose heavy lifting is written in compiled languages; torch can use the GPU.
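
A minimal sketch (not DeepFaceLive's actual code) of how that division of labor looks: the Python side only orchestrates, while the heavy math runs as compiled kernels, on the GPU when one is available.

    import torch

    # the Python code just describes the computation...
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)  # tensor lives in GPU memory if available
    y = x @ x  # ...the matmul itself runs as a compiled (CUDA) kernel
    print(y.device)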


Ah, thanks!


Regarding the downvotes and the second comment on the linked page:

I guess the nonsensical argument "it's not the <technology> that <does the bad thing>, but the person using the <technology>" will never die out.

If you don't have <the technology>, it's much harder to <do the bad thing>; it has to be done hands-on, from a very close distance, with much higher risk for the perpetrator.


I wonder if some form of regulation is coming to tech. We do not allow people to freely spread heroin, or practice slavery, or various other sorts of horrible stuff.


How are you going to effectively regulate a white paper? Or Stable Diffusion? These things can be trained on readily available commodity GPU hardware, easily purchased by the average citizen. Heck, there are rumors NVIDIA has an RTX Titan in the pipe for under $3000 with 48GB of VRAM!

Even if the government regulates that I can't physically have one, I could rent a rig of A100X8 servers in Russia, VPN tunnel into it, and download whatever artifacts the server generates.

How in the world could any government short of North Korea effectively regulate such a thing? Even if they went full authoritarian, the blowback from such draconian measures would be intense.


I don't really know; it does sound super hard. But maybe consider that we go down the path where you do not own your tech (we are close to that now), the companies have monitoring tools to see what you are doing, and then you have this kind of network of trusted machines that you communicate with.

Edit: I really don't like the idea that I have to be paranoid about every interaction with tech.


This reminds me of the whole crowd of artists calling for a ban on AI-generated art because "it's stealing". The change will happen whether you want it or not. So those guys had better raise the prices for their unique, hand-made, hard-worked pieces and leave the low-quality, generic, industrially generated ones to AI. Embrace the change, as they say.


Do you carry a gas mask with you wherever you go? Mustard gas is over a hundred years old, and change will happen, whether you want it or not, so you should be able to just go down to the supermarket and buy it. I'm personally very annoyed by random mustard gas attacks, but there's just no mechanism in society for preventing them.


Your analogy is 100% on point, I guess?


You will never stop the march of technology, especially when it requires so few developers to create it. It will emerge.


Technology isn't inherently good or evil, it's neutral.

If Company A / Country A / Person A won't do it, then Company B / Country B / Person B will do it and use it to bankrupt you / attack and possibly kill you / take advantage of you.

It's that simple.


No, technology falls into different categories.

Aspirin is unambiguously a technological good. As is the bicycle. There are many technological advances which are exclusively (or almost exclusively) used for the benefit of mankind.

Then there is morally neutral technology. Think of a hammer or a knife. Can be used just as easily for good as for evil. It's up to the person who wields it.

The third category is evil technology. Technology that just makes the world a worse place. Think of landmines, nerve gas, or biologically engineered viruses. If we could uninvent things, these are the things we would uninvent in a heart beat.


So technology is good for the Company / Country / Person who builds it first and evil for the one who doesn't. Something being good for some people and evil for others doesn't make it "neutral."


>Technology isn't inherently good or evil, it's neutral.

Broadly yes, but this severe simplification glosses over some technologies having more potential for abuse than others.

A machine gun has more potential for evil than a dessert spoon.


You can't stop these kinds of double-edged swords from being developed, but what I'd like is for people to get together and develop a counter to them, something like DetectDeepFace.

For me, everything (image, text, sound, etc.) that comes from a computer is suspect nowadays.


It's clear that we're moving to a post-truth society. Visuals can be deepfaked, voices AI-generated.

Could crypto unironically be the way out of this mess? If a document isn't signed by a wallet associated with you, it should not be considered authentic?


> we're moving to a post-truth society

This is far too dramatic. What's actually happening is that we simply no longer give _unattributed_ audio and video the benefit of the doubt. Data pedigree and attribution will become more important, but we're already in a place where real media is routinely misrepresented by bad actors (as a lot of purported Ukraine war footage has shown), and this will simply make us more skeptical still.

> Could crypto unironically be the way out of this mess

I did pitch a PhD study area on this[0] before deciding I didn't want to do a PhD or get into crypto. But I do think signing media and signed attribution chains will become more important, though I don't see much need for it to be distributed. I (along with a huge number of other people) already sign documents digitally using a 3rd-party service.

0: https://github.com/pjlsergeant/multimedia-trust-and-certific...


And why would you need crypto for that? You can sign stuff without crypto.


You can't sign stuff without crypto (unless you mean by hand, on paper). Cryptographic signatures by design involve cryptography.
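
For instance, a minimal sketch using the third-party Python "cryptography" package with Ed25519 keys (the media bytes are a placeholder); no blockchain anywhere in sight:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    video_bytes = b"...raw video file contents..."  # placeholder payload
    signature = private_key.sign(video_bytes)

    try:
        public_key.verify(signature, video_bytes)  # raises if the bytes were altered
        print("signature ok")
    except InvalidSignature:
        print("media was modified or signed by a different key")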

I know what you both meant, just playing with words, sorry. I just hope now that the dust has settled, we can go back to the world where "crypto" meant "cryptography", not anything else[1]

[1] https://www.cryptoisnotcryptocurrency.com/


Sure. I just mentioned crypto because that's what I'm familiar with, and there's already some infrastructure in place to facilitate it. But you will likely need to sign messages posted online, whether with a crypto wallet or with your own encryption key.


Has any deep-learning technology benefited ordinary people so far? It's mostly used by totalitarian governments, by big tech to fine-tune ads, by SEO spammers, etc. Can't wait for the web to fill up with deep nonsense and "art".


The unfortunate part is that the alternative is for only government entities to have this technology. If open source can make a credible attempt at creating live deep fake technology, the government already has a team working on it.


Official Discord channel: English / Russian. Chinese discussion forum: free software tutorials, models, face data.

LOL

BTW.

It's not a "technology" in any classical sense of this word.

This is a funny and technologically useless rattle that can be used as a Chinese-made Kalashnikov assault rifle.


This is akin to developing Bioweapons. Can it be done? Yes. Should it be done? Absolutely not.

> “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”


Only in this case it's software engineers. They don't get a free pass.


The sci-fi horror fantasy of engineers being assaulted for working on future-dangerous technology seems a predictable outcome of this kind of rhetoric before long.


Let me see crypto get banned first. Same case, even worse: not just used to scam people out of their belongings, but to buy and sell illegal things too.


If this were a discussion of generative AI, I'd agree that the cat is out of the bag and there's no stopping it now.

BUT.

What are people using deepfakes for, in good faith? Can someone provide one example that isn't malicious?

The best I can imagine is amateur filmmakers deepfaking their faces onto existing footage to cut costs, but it doesn't seem that this outweighs the drawbacks.


Film studios can use deepfakes to bring back actors who passed away. Whistle-blowers could use it to have video calls with journalists without being identified. Privacy-minded regular folks could use it to have a YouTube/Twitch channel without revealing their identity.

These are just a few good-faith applications, but I think the list goes on.


>Film studios can use deepfakes to bring back actors who passed away.

That just reminds me of the short story about a society where nobody died. It wasn't meant to be a happy story.


It's crazy how we lived through what will, in the future, look like a very small time window where you could more or less trust digital media.


All the communities that are against this are also against everything else.

They've cried wolf enough times. Everything is dangerous and everything is a crisis.

Consequently, I will ignore their warnings about this as well. It'll be okay. Tomorrow the community will forget about it. Tomorrow the crisis will be that some one-person blog is not GDPR compliant.


This doesn't produce any physical harm, so I see no problem in developing it. It will not spiral out of control.

There will be a break-in period, but the conclusion will be: check the source of the information.

Making this easily available will make the break-in period easier.


Hmm can't close this issue


Tangent, but I think things like that will spell the end of remote working and remote interviewing.


Most people will follow the social norm of not doing this. Until that norm is broken. Perhaps the first successful presidential candidate that is revealed to be virtual will be the event that changes things.


I vote Waldo.


Why do you even need to look at someone's face for remote working? Most communication is text-based and asynchronous.


Because, HR-wise, a company needs to be sure it is working with whom it thinks it is working.

See that through the prism of rising interview scams.


All the negative responses are slippery slope fallacies.



