What is a deepfake?
Have you seen Barack Obama call Donald Trump a “complete dipshit”, or Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? Answer yes and you’ve seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.
What are they for?
Many are pornographic. The AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months. A staggering 96% were pornographic and 99% of those mapped faces from female celebrities on to porn stars. As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” Beyond the porn there’s plenty of spoof, satire and mischief.
Is it just about videos?
No. Deepfake technology can create convincing but entirely fictional photos from scratch. A non-existent Bloomberg journalist, “Maisy Kinsley”, who had a profile on LinkedIn and Twitter, was probably a deepfake. Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.
Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.

How are they made?
University researchers and special effects studios have long pushed the boundaries of what’s possible with video and image manipulation. But deepfakes themselves were born in 2017 when a Reddit user of the same name posted doctored porn clips on the site. The videos swapped the faces of celebrities – Gal Gadot, Taylor Swift, Scarlett Johansson and others – on to porn performers.
It takes a few steps to make a face-swap video. First, you run thousands of face shots of the two people through an AI algorithm called an encoder. The encoder finds and learns similarities between the two faces, and reduces them to their shared common features, compressing the images in the process. A second AI algorithm called a decoder is then taught to recover the faces from the compressed images. Because the faces are different, you train one decoder to recover the first person’s face, and another decoder to recover the second person’s face. To perform the face swap, you simply feed encoded images into the “wrong” decoder. For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame.
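The pipeline above can be sketched in code. The snippet below is a minimal, illustrative stand-in, not a real face-swap model: the weights are random, the dimensions are toy-sized and there is no training loop. It only shows the architecture — a shared encoder compresses a face into a latent code, and feeding that code into the “wrong” decoder produces the swap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the trained networks. In a real system these weights
# would come from training on thousands of face shots of each person.
IMG_DIM, LATENT_DIM = 64 * 64, 32

W_enc = rng.normal(size=(LATENT_DIM, IMG_DIM)) * 0.01    # shared encoder
W_dec_a = rng.normal(size=(IMG_DIM, LATENT_DIM)) * 0.01  # decoder for person A
W_dec_b = rng.normal(size=(IMG_DIM, LATENT_DIM)) * 0.01  # decoder for person B

def encode(face):
    """Compress a flattened face image into the shared latent features."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face image from the latent code with one person's decoder."""
    return W_dec @ latent

face_a = rng.normal(size=IMG_DIM)  # one frame of person A's face

# The face swap: encode person A's frame, then feed the code into the
# "wrong" decoder -- the one trained on person B. For video, this is
# repeated on every frame.
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)
```

The key design point the sketch preserves is that the encoder is shared between both people, so the latent code captures pose and expression in a common space that either decoder can read.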

Another way to make deepfakes uses what’s called a generative adversarial network, or Gan. A Gan pits two artificial intelligence algorithms against each other. The first algorithm, known as the generator, is fed random noise and turns it into an image. This synthetic image is then added to a stream of real images – of celebrities, say – that are fed into the second algorithm, known as the discriminator. At first, the synthetic images will look nothing like faces. But repeat the process countless times, with feedback on performance, and the discriminator and generator both improve. Given enough cycles and feedback, the generator will start producing utterly realistic faces of completely nonexistent celebrities.
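The adversarial loop can be demonstrated on a toy problem. The sketch below is a deliberately tiny, assumption-laden example: a two-parameter generator competes against a two-parameter discriminator on one-dimensional numbers rather than images, with hand-derived gradient updates. The push-and-pull dynamic is the same one described above — the generator gradually learns to produce samples the discriminator cannot tell from the real ones.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data the generator must imitate: samples around 4.0.
# Generator G(z) = a*z + c; discriminator D(x) = sigmoid(w*x + b).
a, c = 1.0, 0.0       # generator parameters
w, b = 0.0, 0.0       # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + c

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    s_r, s_f = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    b += lr * np.mean((1 - s_r) - s_f)

    # Generator ascent: adjust a, c so the discriminator scores fakes higher.
    s_f = sigmoid(w * (a * z + c) + b)
    a += lr * np.mean((1 - s_f) * w * z)
    c += lr * np.mean((1 - s_f) * w)

# The generator's output mean drifts toward the real mean of 4.0.
print(round(c, 2))
```

With feedback flowing both ways, neither network can stand still: each discriminator improvement forces a generator improvement, which is the cycle that eventually yields realistic faces in the image-scale version.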
Who is making deepfakes?
Everyone from academic and industrial researchers to amateur enthusiasts, visual effects studios and porn producers. Governments might be dabbling in the technology, too, as part of their online strategies to discredit and disrupt extremist groups, or make contact with targeted individuals, for example.
What technology do you need?
It is hard to make a good deepfake on a standard computer. Most are created on high-end desktops with powerful graphics cards or better still with computing power in the cloud. This reduces the processing time from days and weeks to hours. But it takes expertise, too, not least to touch up completed videos to reduce flicker and other visual defects. That said, plenty of tools are now available to help people make deepfakes. Several companies will make them for you and do all the processing in the cloud. There’s even a mobile phone app, Zao, that lets users add their faces to a list of TV and movie characters on which the system has trained.

How do you spot a deepfake?
It gets harder as the technology improves. In 2018, US researchers discovered that deepfake faces don’t blink normally. No surprise there: the majority of images show people with their eyes open, so the algorithms never really learn about blinking. At first, it seemed like a silver bullet for the detection problem. But no sooner had the research been published, than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.
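The blink heuristic above can be sketched as a simple rate check. Everything in this snippet is illustrative: the per-frame “eye openness” scores would in practice come from a facial-landmark tracker, and the thresholds are assumed values, not calibrated ones.

```python
def count_blinks(openness, threshold=0.2):
    """A blink = the openness signal dipping below the threshold, then recovering."""
    blinks, closed = 0, False
    for value in openness:
        if value < threshold:
            closed = True
        elif closed:           # eye reopened after being closed: one blink
            blinks += 1
            closed = False
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_minute=2):
    """Flag a clip whose blink rate is far below a typical human's."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / minutes < min_blinks_per_minute

# 60 seconds of footage at 30 fps: eyes open the whole time (no blinks).
no_blinks = [1.0] * 1800
# The same clip with ten brief blinks spliced in.
blinking = list(no_blinks)
for start in range(0, 1800, 180):
    blinking[start:start + 3] = [0.05, 0.05, 0.05]

print(looks_suspicious(no_blinks))  # True: zero blinks in a minute is a red flag
print(looks_suspicious(blinking))   # False: roughly ten blinks per minute
```

As the article notes, this kind of single-cue detector has a short shelf life: once the weakness was published, deepfakes with blinking appeared.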
Poor-quality deepfakes are easier to spot. The lip synching might be bad, or the skin tone patchy. There can be flickering around the edges of transposed faces. And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible on the fringe. Badly rendered jewellery and teeth can also be a giveaway, as can strange lighting effects, such as inconsistent illumination and reflections on the iris.
Governments, universities and tech firms are all funding research to detect deepfakes. Last month, the first Deepfake Detection Challenge kicked off, backed by Microsoft, Facebook and Amazon. It will include research teams around the globe competing for supremacy in the deepfake detection game.
Facebook last week banned deepfake videos that are likely to mislead viewers into thinking someone “said words that they did not actually say”, in the run-up to the 2020 US election. However, the policy covers only misinformation produced using AI, meaning “shallowfakes” (see below) are still allowed on the platform.

Will deepfakes wreak havoc?
We can expect more deepfakes that harass, intimidate, demean, undermine and destabilise. But will deepfakes spark major international incidents? Here the situation is less clear. A deepfake of a world leader pressing the big red button should not cause armageddon. Nor will deepfake satellite images of troops massing on a border cause much trouble: most nations have their own reliable security imaging systems.
There is still ample room for mischief-making, though. Last year, Tesla stock crashed when Elon Musk smoked a joint on a live web show. In December, Donald Trump flew home early from a Nato meeting when genuine footage emerged of other world leaders apparently mocking him. Will plausible deepfakes shift stock prices, influence voters and provoke religious tension? It seems a safe bet.
Will they undermine trust?
The more insidious impact of deepfakes, along with other synthetic media and fake news, is to create a zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it is easier to raise doubts about specific events.
Last year, Cameroon’s minister of communication dismissed as fake news a video that Amnesty International believes shows the country’s soldiers executing civilians.
Donald Trump, who admitted to boasting about grabbing women’s genitals in a recorded conversation, later suggested the tape was not real. In Prince Andrew’s BBC interview with Emily Maitlis, the prince cast doubt on the authenticity of a photo taken with Virginia Giuffre, a shot her attorney insists is genuine and unaltered.
“The problem may not be so much the faked reality as the fact that real reality becomes plausibly deniable,” says Prof Lilian Edwards, a leading expert in internet law at Newcastle University.
As the technology becomes more accessible, deepfakes could mean trouble for the courts, particularly in child custody battles and employment tribunals, where faked events could be entered as evidence. But they also pose a personal security risk: deepfakes can mimic biometric data, and can potentially trick systems that rely on face, voice, vein or gait recognition. The potential for scams is clear. Phone someone out of the blue and they are unlikely to transfer money to an unknown bank account. But what if your “mother” or “sister” sets up a video call on WhatsApp and makes the same request?
What’s the solution?
Ironically, AI may be the answer. Artificial intelligence already helps to spot fake videos, but many existing detection systems have a serious weakness: they work best for celebrities, because they can train on hours of freely available footage. Tech firms are now working on detection systems that aim to flag up fakes whenever they appear. Another strategy focuses on the provenance of the media. Digital watermarks are not foolproof, but a blockchain online ledger system could hold a tamper-proof record of videos, pictures and audio so their origins and any manipulations can always be checked.
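The provenance idea can be sketched in a few lines: fingerprint the media bytes with a cryptographic hash, then record that fingerprint in a hash-chained ledger, so any later alteration of the record is detectable. This is a toy, single-party illustration of the tamper-evidence principle only; real provenance systems distribute the ledger across many parties and sign entries.

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """Tamper-evident fingerprint of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only ledger: each entry commits to the previous one,
    so altering any past record breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, media_hash: str, note: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"media_hash": media_hash, "note": note, "prev": prev}
        entry_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "entry_hash": entry_hash}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every entry's hash and check the chain links."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("media_hash", "note", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

video = b"...raw video bytes..."  # placeholder for a real file's contents
ledger = ProvenanceLedger()
ledger.record(fingerprint(video), "original upload")
print(ledger.verify())                 # True: chain intact
ledger.entries[0]["note"] = "edited"   # tamper with a past record
print(ledger.verify())                 # False: tampering detected
```

The fingerprint cannot stop a deepfake being made, but it lets anyone check whether a clip matches the version originally registered.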
Are deepfakes always malicious?
Not at all. Many are entertaining and some are helpful. Voice-cloning deepfakes can restore people’s voices when they lose them to disease. Deepfake videos can enliven galleries and museums. In Florida, the Dalí museum has a deepfake of the surrealist painter who introduces his art and takes selfies with visitors. For the entertainment industry, the technology can be used to improve the dubbing on foreign-language films and, more controversially, to resurrect dead actors. For example, the late James Dean is due to star in Finding Jack, a Vietnam war movie.
What about shallowfakes?
Coined by Sam Gregory at the human rights organisation Witness, shallowfakes are videos that are either presented out of context or are doctored with simple editing tools. They are crude but undoubtedly impactful. A shallowfake video that slowed down Nancy Pelosi’s speech and made the US Speaker of the House sound slurred reached millions of people on social media.
In another incident, Jim Acosta, a CNN correspondent, was temporarily banned from White House press briefings during a heated exchange with the president. A shallowfake video released afterwards appeared to show him making contact with an intern who tried to take the microphone off him. It later emerged that the video had been sped up at the crucial moment, making the move look aggressive. Acosta’s press pass was later reinstated.
The UK’s Conservative party used similar shallowfake tactics. In the run-up to the recent election, the Conservatives doctored a TV interview with the Labour MP Keir Starmer to make it seem that he was unable to answer a question about the party’s Brexit stance. With deepfakes, the mischief-making is only likely to increase. As Henry Ajder, head of threat intelligence at Deeptrace, puts it: “The world is becoming increasingly more synthetic. This technology is not going away.”
FAQs
How do you detect a deepfake?
The FBI has warned that video-call participants may spot a deepfake when coughing, sneezing or other sounds don’t line up with what’s on screen. Asking a participant to turn and show a side profile is another quick check before an online video meeting, as deepfake models often struggle to render faces in full profile.
Why are deepfakes hard to spot?
Deepfakes are AI-generated media (fake videos, audio, images or text) that look incredibly realistic and closely imitate a real person or incident. Because the technology behind them has advanced dramatically, they can often be hard to spot.
Is there technology to detect deepfakes?
Intel has developed an AI that it says can detect in real time whether a video has been manipulated using deepfake technology. FakeCatcher, part of the chipmaker’s responsible AI work, claims to detect deepfakes within milliseconds and with a 96% accuracy rate.
How easy is it to make and detect a deepfake?
Deepfakes can be harmful, but creating one that is hard to detect is not easy. Making a deepfake today requires a graphics processing unit (GPU); for a persuasive result, a gaming-grade GPU costing a few thousand dollars can be sufficient.
What app is used for deepfakes?
FaceApp is a popular app, and in fact one of the first to popularise and democratise deepfakes and AI-generated face editing on smartphones. With FaceApp you simply upload your picture and then see what you’ll look like when you’re old, make yourself smile, and more.
What are the ways in which deepfakes can be misused?
An obvious misuse of deepfakes is their potential role in creating more sophisticated fabricated news stories. At the same time, deepfakes can do harm by casting doubt on authentic stories.
Can facial recognition detect deepfakes?
Facial recognition technologies that rely on user-specific detection methods are highly vulnerable to deepfake-based attacks, which could lead to significant security concerns for users and applications, according to research involving the Penn State College of Information Sciences and Technology.
How do people make deepfakes?
The encoder and the decoder are neural networks that improve by training on thousands of source and target images. To generate a deepfake, the decoder for the target redraws the target’s face using the source’s latent features (expressions), and voilà: a deepfake image.
Are deepfakes against the law?
The concept of a deepfake is not technically illegal in itself, but its potential to create chaos and manipulate public perception makes it a threat at both individual and societal scales.
What is a deepfake example?
In 2018, BuzzFeedVideo made a deepfake public service announcement featuring former President Barack Obama. The deepfake mimicked his voice and gestures so convincingly that you couldn’t tell the video was synthetic.
How long does a deepfake take?
It can take up to four hours to create a deepfake picture or video, though a simple face swap can take only about 30 minutes.
What facial features can help identify a deepfake?
- Unnatural eye movements
- Mismatches in color and lighting
- Audio that doesn’t match the picture
- Strange body shape or movement
- Artificial facial movements
- Unnatural positioning of facial features
- Awkward posture or physique
How do you stop deepfakes?
- Use anti-fake technology
- Espouse training and awareness
- Enforce robust security protocols
- Explore the use of blockchain
- Adopt a zero-trust approach to online content
- Prepare a response strategy
- Develop new security standards
- Keep user data private
What other apps make deepfakes?
- Zao: creates deepfake content within a few minutes
- Wombo: a lip-syncing app that lets you transform yourself or others into a singing face
- MyHeritage
What technology is behind deepfakes?
Deepfakes are the product of generative adversarial networks (GANs): two artificial neural networks pitted against each other to create realistic-looking media. The two networks, “the generator” and “the discriminator”, are trained on the same dataset of images, videos or sounds.
Can you sue for a deepfake?
Where videos are used to harass a person in such a way as to cause them to reasonably fear for their safety, this can constitute criminal harassment. Where deepfake videos arise in the workplace, it may be possible to file a discrimination complaint under provincial or federal human rights legislation.
Can Face ID tell if it’s a picture?
Many people know that Apple’s Face ID system is more secure than the default Android facial recognition program. For example, Face ID can’t be fooled by a photograph.
What are deepfakes good for?
Deepfake technology can benefit marketers by lowering the cost of video campaigns, enabling better omnichannel campaigns and providing a hyper-personalised experience for customers.
What states ban deepfakes?
Deepfake-specific laws exist in only a few US states. Texas bans deepfakes created to influence elections, Virginia has banned deepfake pornography, and California has laws against both malicious deepfakes within 60 days of an election and nonconsensual deepfake pornography.
Why are deepfakes a threat?
They often inflict psychological harm on victims, reduce employability and damage relationships. Bad actors have used the technique to threaten and intimidate journalists, politicians and other semi-public figures, and cybercriminals use deepfake technology to conduct online fraud.
Is deepfaking a form of identity theft?
Criminals can use deepfakes to bypass identity-verification services and open accounts at banks, financial institutions and possibly even government services in other people’s names, using copies of stolen identity documents.
Why were deepfakes created?
Deepfakes started with the Video Rewrite program, created in 1997 by Christoph Bregler, Michele Covell and Malcolm Slaney. The program altered existing video footage to create new content of someone mouthing words they didn’t speak in the original version.
How many pictures do you need for a deepfake?
Some recent techniques require very little source material and can create high-quality deepfake videos with just one photo of the target subject.
How do you make deepfake pictures?
- Reface: integrates several templates
- Wombo: an AI application that animates selfies
- Deep Nostalgia
- FacePlay: available for iOS and Android
- FaceJoy
AI-generated faces are often subtly asymmetrical. Telltale signs include:
- Mismatched earrings
- Different coloured eyes
- Eyes looking in different directions
- Ears that aren’t the same size or height
Can you get sued for a deepfake?
There are currently no copyright laws designed to combat the use of deepfakes; in fact, copyright law permits them in most instances, as deepfakes likely fall under the “fair use” exception to copyright infringement.
Is making a deepfake a crime?
Deepfake media threaten public trust in video and present challenges for law enforcement, with new types of investigations, evidence management and trials. Deepfakes have already been used to commit crimes from harassment to fraud, and their use in crime is likely to expand.
Does a deepfake cost money?
You can create one for free in less than 30 seconds using sites like MyHeritage or D-ID, or any of the many free deepfake applications.
Who created deepfake technology?
Modern deepfake technology grew out of “generative adversarial networks”, created by the computer scientist Ian Goodfellow in 2014, which essentially pit two AIs against each other to compete for the most realistic images. The results were far superior to basic machine-learning techniques.
What are deepfake attacks?
Deepfakes, an emergent type of threat falling under the broader umbrella of synthetic media, use a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio and text of events that never happened.