from Hacker News

Stop developing this technology

by chetangoti on 2/2/23, 6:00 AM with 399 comments

  • by jakelazaroff on 2/2/23, 7:01 AM

    Regardless of what I think about this technology in particular, I want to respond to this line from the second comment: [1]

    > on the contrary, it must be developed.

    No, it mustn’t. There’s not a gun to your head, forcing you to do this. You want to develop this technology. That’s why it’s happening.

    Technology isn’t inevitable. It’s a choice we make. And we can go in circles about whether it’s good or bad, but at least cop to the fact that you have agency and you’re making that choice.

    [1] https://github.com/iperov/DeepFaceLive/issues/41#issuecommen...

  • by lucumo on 2/2/23, 10:37 AM

    There's a book in Dutch literature, The Assault by Harry Mulisch. It's a great read if you're interested in multi-faceted morality. It deals with guilt and responsibility when there's a chain of things that need to happen for a bad outcome.

    During WWII, the protagonist's parents are shot by Germans after the body of a collaborator is found in front of their house. They were arguing with the soldiers about something during the arrest and ended up being shot.

    The collaborator was shot by the resistance in front of their neighbours' house, and the neighbours moved the body in front of the protagonist's house.

    Over the years, he encounters many people involved in this event and starts seeing things from many sides. One of the themes explored is who bears moral responsibility for his parents' death. The Germans for shooting them? His mother for arguing? The neighbours for moving the body? The resistance for shooting the collaborator? The collaborator for collaborating? All of their actions were a necessary link in the chain that led to their death.

    One of the characters utters a simple and powerful way of dealing with that morality: "He who did it, did it, and not somebody else. The only useful truth is that everybody is killed by whom he is killed, and not by anyone else."

    It's a self-serving morality, because the character was part of the resistance group that shot the collaborator at a time when reprisals were very common. But it's also very appealing in its simplicity and clarity.

    I find myself referring back to it in cases like this. In the imagined future where this tech is used for Bad Things, who is responsible for those Bad Things? The person that did the Bad Thing? The person that developed this tech? The people that developed the tech that led up to this development?

    I'm much inclined to only lay blame on the person that did the Bad Thing.

  • by LeanderK on 2/2/23, 2:05 PM

    Again and again I am astonished that people without ethics exist, that they are confident in what they are doing and that they appear to be completely unable to reflect upon their actions. They just don't care and appear to be proud of it.

    If this is used to scam old people out of their belongings, then you really have to question your actions and, imho, bear some responsibility. Was it worth it? Do the positive uses outweigh the negatives? They cite examples of misused technology as if that would free them of any guilt, as if previous errors permit anything because greater mistakes have been made.

    You are not, of course, completely responsible for the actions others take, but if you create something you have to keep in mind that bad actors exist. You can't just close your eyes if your actions lead to a strictly worse world. Old people scammed out of their savings are real people, and it is real pain. I can't imagine the desperation and helplessness that follow. It really makes me angry how someone can ignore so much pain and not even engage in an argument about whether it's the right thing to do.

  • by mihaic on 2/2/23, 8:38 AM

    HN seems to actively cultivate a cognitive dissonance: on the one hand producing inspirational stories of entrepreneurs changing the world, and on the other abandoning all hope that technology/market forces can be controlled in any way.

    I now think this serves to justify away the collective guilt of bringing harmful products into the mainstream.

    It seems to come from the same origins as "crypto can't be regulated", "government can't do anything", and "it's ok because it's legal", and it always worries me that I no longer see any sort of moral stance being taken.

  • by dusted on 2/2/23, 7:12 AM

    I think it's a good thing. Not that it's being used for evil things, but because it should help make it obvious that you can't trust anything you see on a screen.

    Using fake media to trick people into believing anything used to be a privilege reserved for nation states and the ultra rich. Now that _ANYONE_ and their cat can do it, it should follow that nobody can believe anything that's on a screen anymore (this comment included).

  • by yipbub on 2/2/23, 7:23 AM

    I'm convinced that this idea that technology is completely neutral is wrong. It is not neutral in the face of human psychology. The human species is a different animal than the human individual, and it is powerful, but it does not make truly conscious decisions.

    Once you let the genie out of the bottle, a wish will be made. A technology might not be inherently bad, but neither are knives, and we don't leave those lying around.

    That said, it is the human species that develops technology, rarely is one human individual capable of holding back a technology.

  • by kefka_p on 2/2/23, 8:14 AM

    Can anybody demonstrate a legitimate use of deepfake software? Has it ever been used to facilitate a socially positive or desirable outcome? While I recognize my experiences are far from definitive, I hazard most would be hard pressed to name anything positive that came out of deepfake technology.

    edit: I’ll take your knee-jerk DV, and any others, as an admission of an inability to speak to positive utility of this technology.

  • by kstenerud on 2/2/23, 10:31 AM

    This kind of technology is far too useful to repressive regimes and those who wish to do nasty things with it.

    This means that the incentive to develop this technology is already there, and so it WILL be developed no matter how much people wish it wouldn't.

    The only difference at this point is whether some of the implementations are developed in public view or not. If none are public, then all of them will be done in secret, and our opportunities to develop countermeasures will be severely hampered by having fewer eyes on it, and a smaller entry funnel for potential white hats.

  • by college_physics on 2/2/23, 10:47 AM

    It's hard to take the argument that "tech is neutral" seriously when it concerns software. One could maybe make this argument for the hardware underneath (the chips and the cables). They are, after all, called "general purpose computing" devices, and the packets moving around are general purpose streams of bits as well [0].

    But software is not "tech". It is the explicit expression and projection of cultural objectives and values onto a particular type of tech. You can take the exact precise hardware we have today and reprogram a million different worlds on it, some better, some worse.

    Developers are simply the willing executioners of prevalent power structures. Deal with it. If you have a moral backbone (i.e., you don't agree with the prevalent morality as expressed in what the software industry currently does) do something about it.

    [0] Of course, upon deeper examination, overall system design (e.g. how client- or server-heavy the configuration is, what kind of UI is promoted, etc.) is not neutral either. Cultural/political/economic choices creep in *everywhere*.

  • by rvieira on 2/2/23, 8:38 AM

    Particularly ironic using the defence of "it’s not the technology that’s to blame, but the person, not the machine gun, but the person".

    Machine guns, an advanced piece of engineering widely known to have been developed purely as an academic exercise. No one could have expected other uses.

  • by proto-n on 2/2/23, 12:56 PM

    Here's what I would have answered to the OP in the link:

    This technology is going to be developed regardless of what we do here. Please realize that you are not advocating for it not to be developed: rather, you are advocating for it not to be developed in the open.

  • by wruza on 2/2/23, 10:25 AM

    > if someone blame this technology, why not to blame guns, warships, tanks, airplanes, shotguns, machineguns before blaming this technology?

    We actually blame them, except for airplanes. Most of these were invented at a time when lives had much less value, and they are of no use unless some half-minded pig attacks you or tries to undermine your defenses.

    I’d like to see how this line of reasoning changes when someone releases a virus for your DNA in your backyard, made with funnyjokes/easy-create-virus-for-a-drone-app.

  • by tgv on 2/2/23, 9:58 AM

    Remember: if you helped develop it, you're responsible for it. If it kills people, you share the blame.

    So, what's the possible scenario for that outcome? Well, look at the upcoming elections in Nigeria. The BBC writes: "With an estimated 80 million Nigerians online, social media plays a huge role in national debates about politics. Our investigation uncovered different tactics used to reach more people on Twitter. Many play on divisive issues such as religious, ethnic and regional differences." ABC News writes: "At least 800 people died in post-election violence after the 2011 polls."

    Adding deepfakes into this mix can trigger violent reactions. Should that happen, the creators of the deepfakes are obviously to blame, but those who enabled them, including the original researchers, share responsibility. Ignoring that is just putting your head in the sand.

  • by Hard_Space on 2/2/23, 3:00 PM

    I think it's pretty obvious that autoencoder deepfake tech and similar technologies are going to be useful, maybe even essential in visual effects. The perceived problem seems to be that the 'irresponsible rabble' also have access to it.

    But as the barrier to entry for really convincing output goes up (768px/1024px training pipelines, and beyond), and it suddenly becomes something that one person alone can't really do well any more, the 'amateur' stuff is going to look far worse to people than it does now. You just have to wait for that barrier to rise, and I can tell you as a VFX insider that that is happening right now.

    Deepfakes are the reverse of CGI, which began as an inaccessible technology and gradually became accessible, before the scale of its use in VFX reversed that again.

    Now, assuming you can either afford or will pirate the right software, you could probably match any CGI VFX shot in a major blockbuster if you gave up your job and worked on it non-stop for a year of 18-hour days (assuming you'd already been through the steep learning curve of the pipeline). So it's out of reach, really, and so will the best deepfakes be.

    This stuff everyone is so scared of will end up gate-kept, if only for logistical reasons (never mind any new laws that would address it) - at least at the quality that's so feared in these comments.

  • by jablala on 2/2/23, 11:48 AM

    What utterly horrible minds in that comment thread. The justification that, because other bad things happen, we can reasonably create more evil is disgusting.

    They display a complete lack of a moral compass.

  • by AlexAltea on 2/2/23, 12:12 PM

    Instead of blocking technology, what about addressing the root problem: People need to understand concepts such as "chain of trust".

    Do you trust videocall participants because you recognize their faces and voices? ...Or because a server certified by a root CA has authenticated the other participants?

    The age of deepfakes has started, nobody can stop it. Improving our mental security models will become as essential as literacy.
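
    A minimal sketch of that chain-of-trust point, using Python's standard ssl module; the hostname is a made-up placeholder, not an endpoint mentioned in the thread:

      import socket
      import ssl

      hostname = "meet.example.com"  # hypothetical videocall server
      context = ssl.create_default_context()  # loads the system's trusted root CAs

      with socket.create_connection((hostname, 443)) as sock:
          # wrap_socket validates the certificate chain against the root CAs and
          # checks the hostname; it raises ssl.SSLCertVerificationError on failure.
          # Identity comes from the chain of trust, not from a familiar face.
          with context.wrap_socket(sock, server_hostname=hostname) as tls:
              print("Authenticated peer:", tls.getpeercert()["subject"])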

  • by Lacerda69 on 2/2/23, 7:04 AM

    An age-old discussion. Ultimately I prefer for this technology to be developed in the open on GitHub, so we can be aware of it (and able to combat nefarious uses of it).

    The alternative is that it's developed hidden away and used by the most vile and evil without many being aware of it (which it most definitely will be).

    As with nuclear, the cat's out of the bag, or the baby's bathwater(?) already spilled; there's no way to turn back the clock on technological innovation.

  • by epups on 2/2/23, 9:00 AM

    The big confusion here is the idea that this particular development effort is creating this technology, rather than merely diffusing it.

    If somehow we could get the United Nations to agree to ban deepfake development worldwide, then surely we should enter an ethical discussion about whether we should do it or not. In a world where sophisticated actors already have access to these (and much better) tools, having an open-source GitHub repo is a good thing in my view.

  • by JasonFruit on 2/2/23, 2:28 PM

    I think it's inevitable that sub-societies will form that accept technology only in ways that preserve the humanity of our interactions. We use the term "disruptive" all the time around here, but what's being disrupted is increasingly close to the heart of human life: the ability to engage with the ideas, emotions, and opinions of another human being, and to know them as a person. If a computer can convincingly impersonate anybody, putting any words you choose in their mouth, and at the same time can generate convincing words on any given subject, it can flood electronic communication with noise that's impossible to remove. It can destroy the trust we have in any interactions not had in person, and we've allowed our society to develop in such a way that interacting only in person is no longer practical.

    I know that it's expected that a man of a certain age will begin to say these things, but I think it's true now in a way that it was not true about "those young whippersnappers with their motorcars": we've taken a wrong turn, and I don't like where we're headed.

  • by blondin on 2/2/23, 6:22 AM

    I haven't paid much attention to the deepfake community, but this one is debatable. One of their linked forums has a section for flagging uncredited videos or work.

    So deepfake authors want credit for their work. That's perplexing.

    What's more, this is happening while they seem to be ignoring the ethical concerns raised in the issue, citing that people can do whatever they want with the tech.

  • by ttsalami on 2/2/23, 8:19 AM

    Quite often in science fiction media, despite the advancement of technology, I see only text and graphics (but not pictures) as user interfaces. I wonder if this is the path we will be going down: zero trust towards images and video.

  • by zzo38computer on 2/3/23, 4:53 AM

    Technology can have good and bad uses.

    If they do not make them FOSS in public, then the Conspiracy will invent their own and use it for bad uses only.

    Furthermore, even if a program is written, you can decide not to use it; that it is written (as FOSS) means that you can read how it works, now that someone else has written it. You can also execute it on a computer, if that is what is desired. Also, if it is well known enough, then hopefully if someone does use it deceptively against you, you might be able to guess, or to figure it out (although it might be difficult, at least it might be possible if it is known enough).

    I have no intention of using such a thing, but someone else might figure out uses for it.

    (For example, maybe there are some uses that can be used with movies, for example, if the original actor has been injured for an extended period of time (including if they are dead) or if they want to make up a picture of someone who does not exist. (Although, they should avoid being deceptive. For example, include in the credits, the mention of using such a thing.) Even if it is considered acceptable though, some people will prefer to make movies without it, and such a thing should be acceptable too anyways.)

    (I think even in Star Trek, in story, in some episodes they made deepfake movies of someone. And even in Star Trek, both good and bad uses are possible. Or, am I mistaken?)

    Nevertheless, there may be some dangers involved, but there are potential dangers with anything; if you are careful, then you can try to avoid them, hopefully.

  • by qwerty456127 on 2/2/23, 11:07 AM

    The deepfake technology is awesome and should be available to everybody, because this is the only way everybody can finally be taught to think critically about everything they hear/see.

    Can you believe a politician saying something on TV? Hell no! You should exercise logic about the whole political play he is a part of. Should you think badly of a person you find on a porn site? Absolutely not; what good could come of that in any case?

    This has always been the case, but now there is something that can push it into common sense.

  • by nathias on 2/2/23, 8:20 AM

    There are a lot of completely benign use cases.

    1. I want to deepfake myself to have an avatar for online interaction.

    2. I want to generate videos instead of filming by pasting people into existing videos.

    3. Prevent a Face/Off scenario.

  • by amarant on 2/2/23, 12:33 PM

    Lots of comments here and in the GitHub thread claiming there are no legitimate uses for this, so I thought I'd drop a legitimate use just to have an example: I saw an unrelated article today where someone had used some deepfake technology to change the spoken language of an actor.

    Imagine what that would mean for dubbed movies/TV if it gets good enough.

    There are legit use cases, and that justifies the technology's existence. The bad actors don't make it immoral to develop a technology, IMO.

  • by DennisP on 2/2/23, 3:49 PM

    Everything people fear about deepfakes has been true of text for the entire history of writing. We've had a brief period in human history during which you could mostly believe data you received from afar, without having to trust the source, because telling a lie with video was much harder than telling a lie with text. Now it's almost as easy to tell a lie with video, so we'll have to check sources again. Somehow I think we'll survive.

  • by givemeethekeys on 2/2/23, 10:44 AM

    I can see a few good uses for this tech:

    Face swapping + voice swapping + auto translate = your customer support can be anyone on the planet but look and sound familiar to you. Maybe you're getting over a facial injury.

    Face swapping = you no longer have to put on makeup. Just swap in your made-up face for meetings.

    Face swapping + voice recordings + AI that learns = that scene in Contact where Jodie Foster talks to the alien, but he takes the form of her father to make her feel more comfortable.

  • by kderbyma on 2/2/23, 2:46 PM

    There is no good in this technology, nothing good that truly outweighs the bad. I agree with the sentiment. These deepfakes are not good... they cheapen everything and lower the standard for all... it's literally scammers who want this stuff, and people who want to take shortcuts... essentially you can morally judge a person by their approach to this technology.

  • by CyborgCabbage on 2/2/23, 12:34 PM

    A lot of people in the comments here are saying that this is beneficial because it will teach people that they can't trust video or audio. But I don't see how that makes sense because this isn't some neutered or weakened form of the technology. That's like saying shooting people makes them more aware of gun violence.

  • by lonelyasacloud on 2/2/23, 5:19 PM

    The problem here is how our societies handle the arrival of new technology, not the technology itself.

    Here it's obvious what's going to happen without robust legislation protecting the likeness of all individuals (and not just special-case celebs) from the non-consensual generation of new material.

    The fact that such legislation is unlikely to happen before an awful lot of suffering has occurred is a testament both to the naive belief that everything new is good, and to legislative processes that are riddled with vested interests and run at a bandwidth from the age of sail, unable to handle the downsides when scaling breaks the happy-path assumptions.

    Focusing efforts on the legislative process seems likely to be more productive than point solutions that rely on techies and scientists not to develop tech that can be used for nefarious purposes.

  • by dmingod666 on 2/2/23, 9:12 AM

    How much are you going to regulate and stop? Game engines currently give photorealistic realtime content... stop Unreal and MetaHumans too? What about v6 of Unreal?

    What if this same repo were owned by Nvidia, and it had some commercial interest in the product and were ready to litigate... would everyone still pile up on it?

    Is it not, on some level, disdain that it's just run by a bunch of guys who can be pushed around without much consequence?

    Would we have a thread saying "screw it, shut down ChatGPT, it doesn't fit my moral world-view"? Why is that absurd but this is a fair discussion?

  • by bondarchuk on 2/2/23, 12:09 PM

    At the end of the day, it's just pixels on a screen. It's not fair to compare it to machine guns or atomic bombs which cause real physical harm.

    You might argue that the technology to make pixels on a screen resembling real humans is bad, but then you have to actually make that argument (and "some people got scammed" is indeed such an argument, albeit a pretty weak one), not just shift it to "this is technology, machine guns are technology, machine guns are bad".

  • by 55555 on 2/2/23, 1:27 PM

    I don’t think any specific technology is good or bad, but I do think you could attempt to quantify its impact better by measuring how often it’s used for “good” versus “bad”.

    It’s not going to end the argument though, not least because you’ll then have to assign a relative value to abstract things like “personal freedom” or “artistic expression”.

    Even if you could quantify the losses to crypto scams, you can’t put an objective value on some of its more ideological benefits.

  • by gloosx on 2/2/23, 2:24 PM

    Why is it such a big concern? As far as I know, deepfakes can be recognised with solid confidence using another neural net model. It's only a matter of time before that detection is built into every kind of real-time communication app, if the threat is real and rising. Sure, it must be developed, and it's a good thing it is done in public, so the defenders can prepare to defend using open-source material, no?
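
    A hedged sketch of that detector idea: a binary real/fake frame classifier on a generic backbone. The architecture and training setup here are illustrative assumptions, not a proven detector; real systems fine-tune larger models on curated deepfake corpora.

      import torch
      import torch.nn as nn
      from torchvision import models, transforms

      def build_detector() -> nn.Module:
          # Start from an ImageNet backbone and swap the head for a
          # two-class (real vs. fake) output; training data is assumed.
          model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
          model.fc = nn.Linear(model.fc.in_features, 2)
          return model

      preprocess = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225]),
      ])

      # Usage, assuming `frame` is a PIL.Image taken from a video stream:
      #   logits = build_detector()(preprocess(frame).unsqueeze(0))
      #   p_fake = torch.softmax(logits, dim=1)[0, 1].item()
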
  • by winrid on 2/2/23, 8:04 AM

    On a side note, is this really all in Python? I imagine it's offloading some stuff to the GPU, right? Maybe the GPU instructions are also stored in Python??
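
    To sketch an answer: yes, the visible code can be all Python, because the Python layer only orchestrates. Frameworks dispatch the heavy math to precompiled GPU kernels, so no GPU instructions live in the .py files. A minimal PyTorch illustration (this repo's exact stack may differ):

      import torch

      x = torch.randn(4096, 4096)
      if torch.cuda.is_available():
          x = x.to("cuda")  # copy the tensor into GPU memory
      y = x @ x             # the matmul runs as a compiled CUDA kernel, not in Python
      print(y.device)
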
  • by halicarnassus on 2/2/23, 1:24 PM

    Regarding the downvotes and the second comment on the linked page:

    I guess the nonsensical argument "it's not the <insert technology> that <insert bad thing>, but the person using the <technology>" will never die out.

    If you don't have <the technology>, it's much harder to <do the bad thing>: it has to be done hands-on, from a very close distance, with much higher risk for the perpetrator.

  • by deniscepko2 on 2/2/23, 1:21 PM

    I wonder if some form of regulation is coming to tech. We do not allow people to freely spread heroin, or slavery, or other sorts of horrible stuff.

  • by caporaltito on 2/2/23, 8:55 AM

    This reminds me of the whole crowd of artists calling for a ban on AI-generated art because "it's stealing". The change will happen whether you want it or not. So those guys had better raise the prices on their unique, hand-made, hard-worked pieces and leave the low-quality, generic, industrially generated ones to AI. Embrace the change, as they say.

  • by WFHRenaissance on 2/2/23, 2:19 PM

    You will never stop the march of technology, especially when it requires so few developers to create it. It will emerge.

  • by baal80spam on 2/2/23, 9:49 AM

    Technology isn't inherently good or evil, it's neutral.

    If Company A / Country A / Person A won't do it, then Company B / Country B / Person B will do it and use it to bankrupt you / attack and possibly kill you / take advantage of you.

    It's that simple.

  • by jmnicolas on 2/2/23, 1:10 PM

    You can't stop these kinds of double-edged swords from being developed, but what I'd like is for people to get together and dev a counter to that, something like DetectDeepFace.

    For me everything (image, text, sound etc) that comes from a computer is suspect nowadays.

  • by spaceman_2020 on 2/2/23, 12:00 PM

    It's clear that we're moving to a post-truth society. Visuals can be deepfaked, voices AI-generated.

    Could crypto unironically be the way out of this mess? If a document isn't signed by a wallet associated with you, it should not be considered authentic?
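
    A hedged sketch of that signing idea, using the Python cryptography package rather than any particular blockchain wallet; how a public key gets bound to a real identity is assumed to be solved out of band:

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      private_key = Ed25519PrivateKey.generate()  # stays with the author
      public_key = private_key.public_key()       # published, tied to the author's identity

      document = b"Statement attributed to me."
      signature = private_key.sign(document)

      try:
          public_key.verify(signature, document)  # raises if forged or tampered with
          print("authentic")
      except InvalidSignature:
          print("reject: not signed by the claimed author")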

  • by EVa5I7bHFq9mnYK on 2/2/23, 1:56 PM

    Has any deep technology benefited ordinary people so far? It's mostly used by totalitarian governments, by big tech to fine-tune ads, by SEO spammers, etc. Can't wait for the web to fill up with deep nonsense and "art".

  • by Sevii on 2/2/23, 2:30 PM

    The unfortunate part is that the alternative is for only government entities to have this technology. If open source can make a credible attempt at creating live deepfake technology, the government already has a team working on it.

  • by azubinski on 2/2/23, 10:15 AM

    Official discord channel: English / Russian. Chinese-language forum: free software tutorials, models, face data.

    LOL

    BTW.

    It's not a "technology" in any classical sense of this word.

    This is a funny and technologically useless rattle that can be used like a Chinese-made Kalashnikov assault rifle.

  • by dtx1 on 2/2/23, 10:34 AM

    This is akin to developing Bioweapons. Can it be done? Yes. Should it be done? Absolutely not.

    > “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

  • by ncr100 on 2/2/23, 4:46 PM

    The sci-fi horror fantasy of engineers being assaulted for working on future-dangerous technology seems a predictable outcome of this kind of rhetoric, soon.

  • by SergeAx on 2/2/23, 7:02 PM

    Let me see how crypto is banned first. Same case, even worse: not just used to scam people out of their belongings, but to buy and sell illegal things too.

  • by return_to_monke on 2/2/23, 5:32 PM

    If this were a discussion on generative AI, I'd agree that the cat is out of the bag and there's no stopping it now.

    BUT.

    What are people using deepfakes for, in good faith? Can someone provide one example that isn't malicious?

    The best I can imagine is amateur filmmakers deepfaking their faces onto existing footage to cut costs, but it doesn't seem that this outweighs the drawbacks.

  • by sva_ on 2/2/23, 1:09 PM

    It's crazy that we experienced what will, in the future, look like a very small time window during which you could more or less trust digital media.

  • by renewiltord on 2/2/23, 10:40 AM

    All the communities that are against this are also against everything else.

    They've cried wolf enough times. Everything is dangerous and everything is a crisis.

    Consequently, I will ignore their warnings about this as well. It'll be okay. Tomorrow the community will forget about it. Tomorrow the crisis will be that some one-person blog is not GDPR compliant.

  • by arein3 on 2/2/23, 6:22 PM

    This doesn't produce any physical harm, so I see no problem in developing it. It will not spiral out of control.

    There will be a break-in period, but the conclusion will be: check the source of the information.

    Making this easily available will make the break-in period easier.

  • by ge96 on 2/2/23, 8:32 AM

    Hmm, can't close this issue.

  • by qthrowayq0909 on 2/2/23, 7:32 AM

    Tangent, but I think things like that will spell the end of remote working and remote interviewing.

  • by badrabbit on 2/2/23, 11:27 AM

    All the negative responses are slippery slope fallacies.