Deepfake
A deepfake (a term combining the machine learning technique "deep learning" with "fake") is a piece of fake media created using machine learning techniques. There is widespread concern that deepfakes will make it much easier to spread disinformation, by allowing users to create fake media (including media that was historically difficult to alter, such as video) that is much harder to distinguish from the genuine article.[1]
Techniques
Deepfakes can be seen as an application of machine learning techniques.
One way to make a "DeepFake" is to use a generative adversarial network, or GAN.[2] GANs work by pitting two neural networks against each other. The first network, known as the generator, looks at training material fed into it (e.g., photographs of the subject you want to fake) and attempts to produce artificial outputs that mimic it. The second network (the discriminator) determines whether the generator's output looks "real". If the discriminator rejects the attempt, the generator tries again. Eventually, the generator learns the pattern well enough to fool the discriminator; at that point, the algorithm is trained and ready to generate the fake media.
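To make the generator/discriminator loop above concrete, here is a minimal GAN training sketch in Python using PyTorch. This is an illustrative assumption rather than how any particular deepfake tool works: the network sizes, hyperparameters, and random stand-in "training data" are all hypothetical, and real face-swapping systems use far larger convolutional networks trained on real photographs.

```python
# Minimal GAN training loop sketch (illustrative only; all sizes are hypothetical).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # hypothetical noise and sample sizes

# Generator: maps random noise to fake "samples"
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that a sample is real
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(256, data_dim)  # stand-in for real training images

for step in range(1000):
    batch = real_data[torch.randint(0, 256, (32,))]
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Train the discriminator to label real samples 1 and generated fakes 0
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator (make its fakes score as 1)
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

Once the generator reliably fools the discriminator, it alone is kept and used to produce the fake output.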
This is just one example; other machine learning algorithms can also be applied to produce a deepfake.[3]
Limitations
There are limitations to any machine learning technology, and deepfakes are no exception. The key issue is that machine learning algorithms need to be trained, and training is rarely trivial. To produce a convincing video deepfake as of this writing (2019), you typically need a significant amount of computational power, a large set of photographs to train on (ideally within a narrow range of parameters; in particular, the face of the person being faked should be fairly similar to the face in the footage being doctored), and fairly good computer skills.[4] Deepfake applications typically require a significant amount of hand-holding and data massaging to obtain reasonable quality.[5] The tutorial for DeepFaceLab, one of several applications designed to assist in making a deepfake, illustrates the relative difficulty of the current process.
On the other hand, the technology is currently mature enough that much of the complexity can be hidden from the user and processed on the backend, if developed properly. Recently, a Chinese mobile application called Zao made waves by making deepfake videos far easier to create, letting users insert a face image (either from an uploaded file or a selfie) into short video clips.[6] Zao places limitations on the images you can use (uploads must meet certain quality and content requirements, and at present the application tries to block any image it recognizes as a public figure), and the results are reportedly not perfect (occasionally jerky or odd).[7] Still, it shows the potential power deepfakes may have for agents with enough backend computing power and professional development.
Alternatives to Deepfakes
It is not necessary to go through the trouble of making a deepfake to spread disinformation. Many other techniques exist to spread fake media, including:
- Photoshop and other digital editing programs.
- Audio editors, using techniques such as time stretching and pitch shifting (see the sketch after this list).[8]
- Selective editing.[9]
- Using look-alike or sound-alike actors or actresses in staged productions.[10]
- Fake news clickbait.
- Fake news websites.
- Fake news YouTube videos.
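As a sketch of how low the bar is for the audio route mentioned above, the snippet below uses the librosa and soundfile Python libraries (an assumed choice of tools, not one named in the sources; "speech.wav" is a hypothetical input file) to slow down and pitch-shift a recording, roughly the kind of crude speed and pitch manipulation that requires no machine learning at all.

```python
# Minimal sketch of crude audio manipulation with librosa and soundfile
# (assumed libraries; "speech.wav" is a hypothetical input file).
import librosa
import soundfile as sf

# Load the original recording (y: waveform samples, sr: sample rate)
y, sr = librosa.load("speech.wav", sr=None)

# Slow the speech to 75% of its original speed without changing pitch
slowed = librosa.effects.time_stretch(y, rate=0.75)

# Lower the pitch by two semitones to change how the voice sounds
shifted = librosa.effects.pitch_shift(slowed, sr=sr, n_steps=-2)

# Write the doctored audio back out
sf.write("speech_doctored.wav", shifted, sr)
```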
As many hoaxers have shown over the years, you don't need the most accurate media quality or plausibility to spread disinformation. So while there is much concern in media of lesser repute about this "shocking" technique that will "start plaguing your social media" with fakes and will be "accessible to everyday people within months",[11] the truth is that the current technological barriers are rather high, too high for most disinformation agents to bother with, especially given the much easier alternatives. As a result, very few political, conspiratorial, or disinformation deepfakes currently exist. There is certainly good justification for raising concerns over privacy, disinformation, and other ethical and moral issues as the technology improves, but the technology is too immature for major concern just yet, with one major exception.
Current Applications of Deepfake Technology
Porn, of course.
Being a fiddly, time-consuming technology, deepfake applications are of limited use to most people, for now. However, they have found a useful conduit among a certain type of early technology adopter: people who have a lot of time on their hands and, tangentially, a strong perverted desire to insert their favorite A-list celebrity or K-pop star[12] into a pornographic video. Of the relatively small number of deepfake videos on the web, 96% are pornography.[13]
The most notorious deepfake application that was actually easy enough for the masses to use was, likewise, pornographic in nature: DeepNude.[14] Instead of swapping faces (like Zao), DeepNude took beach photos that users uploaded, removed the bikinis the women in them were wearing, and replaced them with AI-generated naughty bits. Like Zao, it was relatively easy to use. The programmer took the application down one day after launching it, after figuring out that, you know, an application like this is going to be pretty popular with unsavory types eager to use it to create revenge porn. Oops.[15]
Concern over the ability to create non-consensual pornography with deepfake technology is therefore very legitimate.
Many online communities and legislatures have already reacted to this phenomenon, with Reddit banning pornographic deepfake subreddits,[16] several legislatures rushing to make non-consensual pornographic deepfakes illegal,[17][18] and Pornhub taking steps to block deepfake videos… poorly.[19]
References
- ↑ https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/
- ↑ https://medium.com/primalbase/why-the-danger-of-deepfakes-is-no-danger-at-all-82c21366e6c6
- ↑ https://www.alanzucconi.com/2018/03/14/introduction-to-deepfakes/
- ↑ https://www.technologyreview.com/s/612501/inside-the-world-of-ai-that-forges-beautiful-art-and-terrifying-deepfakes
- ↑ https://medium.com/primalbase/why-the-danger-of-deepfakes-is-no-danger-at-all-82c21366e6c6
- ↑ https://boingboing.net/2019/09/02/deepfake-face-swap-app-zao.html
- ↑ https://www.abacusnews.com/digital-life/we-tested-zao-viral-chinese-deepfake-app/article/3025497
- ↑ https://www.washingtonpost.com/technology/2019/05/23/faked-pelosi-videos-slowed-make-her-appear-drunk-spread-across-social-media/#comments-wrapper
- ↑ https://www.vice.com/en_us/article/nedd9w/five-times-james-okeefe-embarrassed-himself-trying-to-out-liberal-bias
- ↑ https://www.thedailybeast.com/inside-the-deepfake-arms-race
- ↑ https://www.thesun.co.uk/tech/9984130/deepfakes-warning-videos-hao-li/
- ↑ https://www.rollingstone.com/culture/culture-news/deepfakes-nonconsensual-porn-study-kpop-895605/
- ↑ https://www.technologyreview.com/f/614485/deepfake-porn-deeptrace-legislation-california-election-disinformation/
- ↑ https://www.vice.com/en_us/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman
- ↑ https://www.vice.com/en_us/article/qv7agw/deepnude-app-that-undresses-photos-of-women-takes-it-offline
- ↑ https://www.theverge.com/2018/2/7/16982046/reddit-deepfakes-ai-celebrity-face-swap-porn-community-ban
- ↑ https://www.tweaktown.com/news/67974/non-consensual-deepfake-porn-now-banned-creators-sued-150-000/index.html
- ↑ https://www.bbc.com/news/technology-48839758
- ↑ https://www.buzzfeednews.com/article/charliewarzel/pornhub-banned-deepfake-celebrity-sex-videos-but-the-site#.ayQ9QJRPj7