Deepfake

In the future, entertainment will be randomly generated.
—Larry the Cucumber, VeggieTales ("The Wonderful World of Auto-Tainment!")[1]

Deepfakes (a term combining the machine learning technique "deep learning" and "fake") are media fabricated using machine learning techniques. There is widespread concern that deepfakes will make it a lot easier to spread disinformation, by allowing users to create fake media (including media that was much more difficult to alter in the past, such as video) that is much harder to distinguish from the genuine article.[2]

Techniques

Deepfake generation can be seen as an application of machine learning techniques.

One way to make a deepfake is to use a generative adversarial network, or GAN.[3] A GAN pits two neural networks against each other. The first network, known as the generator, tries to produce artificial outputs that mimic the training material (e.g., photographs of the subject you want to fake). The second network, the discriminator, is shown both genuine training examples and the generator's output, and tries to tell which is which. Whenever the discriminator catches a fake, the generator adjusts and tries again. Eventually, the generator learns the patterns in the training data well enough to fool the discriminator consistently; at that point, the model is trained and ready to generate fake media.
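
For the curious, that adversarial loop can be sketched in a few dozen lines. The example below is a deliberately minimal illustration, assuming PyTorch and substituting a toy cloud of 2-D points for actual face images; a real deepfake pipeline uses convolutional image networks and vastly more data and compute.

```python
# A minimal GAN sketch (assumes PyTorch). The "real data" is a toy cloud of
# 2-D points centred at (2, 2) standing in for face images; the point is the
# generator-vs-discriminator loop, not image quality.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM = 8

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "photographs of the subject".
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(2000):
    # 1) Train the discriminator: real samples are labelled 1, fakes 0.
    real = real_batch()
    fake = G(torch.randn(real.size(0), LATENT_DIM)).detach()
    loss_D = (bce(D(real), torch.ones(real.size(0), 1))
              + bce(D(fake), torch.zeros(fake.size(0), 1)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 2) Train the generator: it "wins" when the discriminator labels its
    #    output as real.
    fake = G(torch.randn(64, LATENT_DIM))
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

# After training, generated points cluster around (2, 2) like the real data.
with torch.no_grad():
    print(G(torch.randn(5, LATENT_DIM)))
```

The important part is the alternation: the discriminator is trained to tell real from fake, then the generator is trained to defeat it, and the two ratchet each other up.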

This is just one example; other machine learning algorithms can also be applied to produce a deepfake.[4]
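
One widely used alternative, popularized by the original face-swap tools, is a pair of autoencoders that share a single encoder, with one decoder per identity. The sketch below is our own minimal illustration of that idea, again assuming PyTorch; the network sizes and data are placeholders, not any particular tool's implementation.

```python
# Sketch of the shared-encoder / two-decoder autoencoder idea used by classic
# face-swap tools (assumes PyTorch; tiny fully connected nets stand in for the
# convolutional networks and aligned face crops real tools use).
import torch
import torch.nn as nn

FACE_DIM = 64 * 64  # a flattened 64x64 grayscale face crop, for illustration

# One shared encoder learns a common "face code"...
encoder = nn.Sequential(nn.Linear(FACE_DIM, 256), nn.ReLU(), nn.Linear(256, 64))
# ...and each identity gets its own decoder.
decoder_a = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, FACE_DIM))
decoder_b = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, FACE_DIM))

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def training_step(faces_a, faces_b):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(face_a):
    # The "deepfake" trick: encode person A's face, then decode it with person
    # B's decoder, yielding B's face in A's pose and expression.
    with torch.no_grad():
        return decoder_b(encoder(face_a))

# Dummy batches just to show the calls; real training uses thousands of crops.
faces_a, faces_b = torch.rand(8, FACE_DIM), torch.rand(8, FACE_DIM)
training_step(faces_a, faces_b)
print(swap_a_to_b(faces_a[:1]).shape)
```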

Limitations

There are limitations to any machine learning technology, and deepfakes are no exception. The key issue is that machine learning algorithms need to be trained, and that training is rarely trivial. To produce a convincing video deepfake as of this writing (2019), you typically need a significant amount of computational power, a large set of photographs to train on (ideally within a narrow range of parameters: the face of the person you are faking in should be reasonably similar to the face in the footage being doctored), and fairly good computer skills.[5] Deepfake applications also typically require a significant amount of hand-holding and data massaging to obtain reasonable quality.[6] This tutorial for DeepFaceLab, one of several applications designed to assist in making deepfakes, gives a sense of how involved the current process is.
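
To give a flavour of the "data massaging" involved, the sketch below shows the sort of preprocessing step such tools automate: detecting and cropping faces from a folder of photographs with OpenCV's stock face detector. The folder names and crop size are placeholders for this example; real pipelines additionally align facial landmarks, mask out backgrounds, and filter bad crops by hand.

```python
# Sketch of the face-extraction/cropping step that deepfake tools automate.
# Assumes the opencv-python package; "photos/" and "faces/" are placeholder
# folder names used only for this illustration.
import os
import cv2

# OpenCV ships with a pretrained Haar-cascade frontal-face detector.
DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(src_dir, dst_dir, size=256):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:
            continue  # skip anything that isn't a readable image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Returns a list of (x, y, width, height) face bounding boxes.
        boxes = DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        stem = os.path.splitext(name)[0]
        for i, (x, y, w, h) in enumerate(boxes):
            crop = cv2.resize(img[y:y + h, x:x + w], (size, size))
            cv2.imwrite(os.path.join(dst_dir, f"{stem}_{i}.png"), crop)

if os.path.isdir("photos"):  # only run if the placeholder folder exists
    extract_faces("photos", "faces")
```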

On the other hand, the technology is mature enough that much of the complexity can be hidden from the user and handled on the backend, if developed properly. Recently, a Chinese mobile application called Zao made waves by making deepfake videos far easier to create, letting you insert a face image (either an uploaded file or a selfie) into short video clips.[7] Zao places limits on the images you can use (uploads have to meet certain quality requirements, and at present the application tries to block any image it recognizes as a public figure), and the results are reportedly imperfect (occasionally jerky or odd-looking).[8] Still, it shows the potential power deepfakes may have in the hands of agents with enough backend computing power and professional development resources.

Alternatives to deepfakes

It is not necessary to go through the trouble of making a deepfake to spread disinformation. Many other techniques exist to spread fake media, including:

  • Crudely doctoring genuine footage, as with the widely shared videos of Nancy Pelosi that were simply slowed down to make her appear drunk;[9]
  • Selectively editing real footage to misrepresent what it shows, a speciality of the likes of James O'Keefe.[10]

As many hoaxers have shown over the years, you don't need particularly high media quality, or even plausibility, to spread disinformation. So while media of lesser repute fret over this "shocking" technique that will "start plaguing your social media" with fakes and will be "accessible to everyday people within months",[12] the truth is that the current technological barriers are rather high: too high for most disinformation agents to bother with, especially given the much easier alternatives. As a result, very few political, conspiratorial, or disinformation deepfakes currently exist. There is certainly good justification for raising concerns about privacy, disinformation, and other ethical and moral issues as the technology improves, but for now the technology is too immature to warrant major concern, with one major exception.

Current applications of deepfake technology

Pornography

Being a fiddly, time-consuming technology, deepfake applications are of limited use to most people, for now. However, they have found a ready audience among a certain type of early technology adopter with a lot of time on their hands and, incidentally, a strong perverted desire to insert their favorite A-list celebrity or K-pop star[13] into a pornographic video. Of the relatively small number of deepfake videos on the web, 96% are pornography.[14]

The most notorious deepfake application that was actually easy enough for the masses to use was, fittingly, porn-oriented: DeepNude.[15] Instead of swapping faces (like Zao), DeepNude took a user-uploaded photo of a woman in a bikini and replaced the swimwear with AI-generated naughty bits. Like Zao, it was relatively easy to use. The programmer took the application down one day after launching it, after figuring out that, you know, an application like this is going to be pretty popular with unsavory types eager to use it to create revenge porn. Oops.[16]

Concern about the ability to create non-consensual pornography with deepfake technology is therefore very legitimate.

Online communities and lawmakers have already reacted to this phenomenon: Reddit has banned pornographic deepfake subreddits,[17] several legislatures have rushed to make non-consensual pornographic deepfakes illegal,[18][19] and Pornhub has taken steps to block deepfake videos… poorly.[20]

AI voice cloning

Similar concerns have been expressed about deepfake audio of musicians and celebrities. Besides the obvious risk of abusing deepfake voice models for disinformation and hate speech (case in point: 4chan edgelord trolls using samples of Emma Watson's voice to make it appear as though she is reciting portions of Adolf Hitler's Mein Kampf[21]), a number of voice actors and musicians have voiced (pun intended) their objections to third parties synthesizing their voices without consent. Veteran actor Rick Pasqualone took offense at YouTuber Mafia Game Videos using the voice of Pasqualone's character Vito Scaletta from Mafia II in a series of videos about the Mafia series, labelling the faux-Scaletta deepfake voice "soulless" and "without any artistic merit",[22] while Hitman voice actors David Bateson and Jane Perry both voiced their opposition to an exception in UK law that would effectively give AI creators carte blanche to synthesize someone else's voice without prior approval.[23] Conversely, some entertainers either approve of their voices being synthesized (provided the voice model is not used out of malice) or have had their voices cloned willingly: the singer Grimes specifically demanded a share of the revenue from the use of her voice, and Snoop Dogg lent his distinctive sing-song voice to the text-to-speech service Speechify.[24] Disney animator and Phineas and Ferb creator-slash-voice actor Dan Povenmire has also publicly expressed his approval of his character Doofenshmirtz being deepfaked by enthusiasts.[25]

As a result of the controversies surrounding AI voice models, some sites, such as ElevenLabs, have paywalled their voice cloning features to mitigate potential abuse. The community-run AI voice synthesis site Uberduck maintains a blacklist of personalities who either requested to have their voices removed from the site due to their (personal) objections or, in the case of politicians such as Donald Trump, were banned outright due to the even more serious risk of misinformation.[26] Uberduck previously hosted voice models of politicians for the purpose of making satirical parodies, but was forced to remove them for fairly obvious reasons.

References

  1. (January 30, 2016). "Veggietales predicts modern internet humour". VeggieTales, clipped via YouTube.
  2. https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/
  3. https://medium.com/primalbase/why-the-danger-of-deepfakes-is-no-danger-at-all-82c21366e6c6
  4. https://www.alanzucconi.com/2018/03/14/introduction-to-deepfakes/
  5. https://www.technologyreview.com/s/612501/inside-the-world-of-ai-that-forges-beautiful-art-and-terrifying-deepfakes
  6. https://medium.com/primalbase/why-the-danger-of-deepfakes-is-no-danger-at-all-82c21366e6c6
  7. https://boingboing.net/2019/09/02/deepfake-face-swap-app-zao.html
  8. https://www.abacusnews.com/digital-life/we-tested-zao-viral-chinese-deepfake-app/article/3025497
  9. https://www.washingtonpost.com/technology/2019/05/23/faked-pelosi-videos-slowed-make-her-appear-drunk-spread-across-social-media/#comments-wrapper
  10. https://www.vice.com/en_us/article/nedd9w/five-times-james-okeefe-embarrassed-himself-trying-to-out-liberal-bias
  11. https://www.thedailybeast.com/inside-the-deepfake-arms-race
  12. https://www.thesun.co.uk/tech/9984130/deepfakes-warning-videos-hao-li/
  13. https://www.rollingstone.com/culture/culture-news/deepfakes-nonconsensual-porn-study-kpop-895605/
  14. https://www.technologyreview.com/f/614485/deepfake-porn-deeptrace-legislation-california-election-disinformation/
  15. https://www.vice.com/en_us/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman
  16. https://www.vice.com/en_us/article/qv7agw/deepnude-app-that-undresses-photos-of-women-takes-it-offline
  17. https://www.theverge.com/2018/2/7/16982046/reddit-deepfakes-ai-celebrity-face-swap-porn-community-ban
  18. https://www.tweaktown.com/news/67974/non-consensual-deepfake-porn-now-banned-creators-sued-150-000/index.html
  19. https://www.bbc.com/news/technology-48839758
  20. https://www.buzzfeednews.com/article/charliewarzel/pornhub-banned-deepfake-celebrity-sex-videos-but-the-site#.ayQ9QJRPj7
  21. "Internet Up in Arms as 4Chan User Uses AI Voice Simulator To Deepfake Emma Watson's Voice, Makes Her Read Hitler's Autobiography – FandomWire". fandomwire.com. February 2, 2023. Retrieved February 3, 2023.
  22. Rick Pasqualone on Twitter: "@MafiaGameVideos And this is why the future of gaming and entertainment is in danger. This is soulless and without any artistic merit. I do not approve." / Twitter
  23. Jane Perry on Twitter
  24. Snoop Dogg Reads Dad Jokes - Speechify App Review & Demo
  25. Doofenshmirtz TEXT-TO-VOICE-inator
  26. Uberduck's Blacklisted/Whitelisted Voices