A deepfake is an image, audio clip, or video in which artificial intelligence has been used to replace one person's likeness with another's. This technology is becoming more common and more convincing, fueling misleading news and counterfeit videos.
In this article, we will take a deeper look at deepfakes: how they are created, why their growing prevalence is a concern, and how best to detect them so you are not fooled by their content.
Computers have become steadily better at simulating reality, and photographic fakery is nothing new: in 1917, five pictures of the Cottingley Fairies tricked the world. But what once took days in a darkroom can now be done in seconds with Photoshop.
Modern cinema now relies on computer-generated characters, scenery, and sets, replacing the far-flung locations and time-consuming prop-making that were once an industry staple.
The quality has become so good that many cannot distinguish between CGI and reality.
Deepfakes are the latest iteration of computer imagery, created using artificial intelligence techniques that were once highly specialized but are now entering the consumer space and will soon be accessible to all.
The term deepfake comes from the underlying technology, deep learning, a specific field of artificial intelligence (AI) and machine learning. Deep learning algorithms teach themselves to solve problems from examples, and they improve as the training data set grows. Applied to deepfakes, they can swap faces in video and other digital media, producing realistic-looking but 100% fake content.
While many methods can be used to create deepfakes, the most common relies on deep neural networks (DNNs) built around autoencoders that perform a face-swapping technique. The process starts with a target video that serves as the basis of the deepfake, plus a collection of video clips of the person (for example, Tom Cruise) whose face you wish to overlay onto each frame of the target.
The target video and the clips used to produce the deepfake can be completely unrelated. The target could be a sports scene or a Hollywood feature, and the clips of the person to insert could be a collection of random YouTube videos.
The deep learning autoencoder studies those clips to understand how the person looks from several angles and across different facial expressions and environmental conditions. It then maps that person onto each frame of the target video so the result looks original.
An additional machine learning technique, the generative adversarial network (GAN), is often added to the mix to detect flaws and improve the deepfake over multiple iterations. GANs are themselves another method for creating deepfakes: they rely on large amounts of data to learn how to generate new examples that mimic the real target, and with sufficient data they can produce incredibly accurate fakes.
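The shared-encoder, two-decoder design behind autoencoder face swapping can be sketched in a few lines. The toy below is deliberately minimal and entirely hypothetical: it uses random vectors in place of face crops and linear layers in place of the deep convolutional networks real systems use. One encoder learns from both identities, each decoder learns one identity, and the "swap" is encoding a person-A frame but decoding it with person B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face crops: 64-dim vectors drawn from two
# different "identity" distributions (hypothetical data, not real images).
dim, latent = 64, 8
faces_a = rng.normal(0.0, 1.0, (200, dim)) + 0.5   # "person A"
faces_b = rng.normal(0.0, 1.0, (200, dim)) - 0.5   # "person B"

# One shared encoder and two identity-specific decoders, all linear here
# for brevity; real deepfake autoencoders are deep neural networks.
E  = rng.normal(0.0, 0.1, (dim, latent))
Da = rng.normal(0.0, 0.1, (latent, dim))
Db = rng.normal(0.0, 0.1, (latent, dim))

lr = 5e-3
baseline = float(np.mean(faces_a ** 2))            # error of an untrained model
for step in range(1000):
    for X, D in ((faces_a, Da), (faces_b, Db)):
        Z = X @ E                                  # shared encoding
        err = Z @ D - X                            # reconstruction residual
        D -= lr * (Z.T @ err) / len(X)             # gradient step on decoder
        E -= lr * (X.T @ (err @ D.T)) / len(X)     # gradient step on shared encoder

recon_err = float(np.mean(((faces_a @ E) @ Da - faces_a) ** 2))

# The "swap": encode a person-A frame, but decode with person-B's decoder.
swapped = (faces_a[:1] @ E) @ Db
```

Because the encoder is shared, it learns pose and expression information common to both identities, while each decoder learns to render one specific face; that separation is what makes the swap possible.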
Deepfake apps have also hit the consumer market, such as Zao, FaceApp, DeepFaceLab, Face Swap, and the notorious (and since removed) DeepNude, a particularly dangerous app that generated fake nude images of women.
Several other versions of deepfake software, with varying levels of quality, can be found on the open-source development platform GitHub. Some of these apps are used purely for entertainment; others are far more likely to be maliciously exploited.
While the ability to swap faces quickly and automatically with an app has some interesting benign applications, from Instagram posts to movie production, deepfakes are obviously open to dangerous misuse. Sadly, one of the first real-world applications of deepfakes was the creation of synthetic pornography.
In 2017, a Reddit user named "deepfakes" created a forum for pornography featuring face-swapped actors. Since then, the genre of "revenge porn" has repeatedly made the news. These uses have severely damaged the reputations of celebrities, prominent figures, and even ordinary people. According to a 2019 Deeptrace report, pornography constituted 96% of deepfake videos found online, a figure that had dropped only to 95% by 2022.
Deepfakes have already been employed in political manipulation. In 2018, for example, a Belgian political party released a video of then-President Donald Trump giving a speech that called on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech; it was a deepfake.
The Trump video was far from the first deepfake created to mislead, and many tech-savvy political experts are bracing for a coming wave of fake news built on convincingly realistic deepfakes. We were fortunate not to see many during the 2022 midterms, but 2024 may be a different story. Deepfakes have, however, already been used this year in attempts to influence the war in Ukraine.
Just as deepfake videos have taken off, their audio counterparts have become a growing field with many applications. Realistic deepfake audio can be created with similar deep learning algorithms from only a few hours of recordings of the target's voice.
Once the voice model has been created, that person can be made to say anything, as with the well-known audio deepfake of Joe Rogan. The method has already been used to perpetrate fraud and will likely be used for other nefarious purposes.
There are beneficial uses for this technology. It could serve as a form of voice replacement in medical applications, as well as in certain entertainment situations. If an actor were to die before completing a movie or starting a sequel, their voice could be synthesized to deliver lines that were never recorded. Game developers could create characters that say anything in real time in the voice actor's real voice, rather than relying on a limited pre-recorded script.
With deepfakes becoming ever more common, society must collectively learn to spot deepfake videos, just as we have become attuned to detecting other kinds of fake news online.
As with all areas of cybersecurity, this is a cat-and-mouse game: a new deepfake technique emerges, and only then is a countermeasure created. Like the battle against computer viruses, the cycle is ongoing, and avoiding the harm that can be done is a continual challenge.
There are a few tell-tale giveaways that help in spotting a deepfake.
The first relates to facial animation. Early deepfakes were not very good at animating faces, and the resulting unnatural blinking was an obvious giveaway. However, after the University at Albany released its research on blinking abnormalities, newer deepfakes incorporated natural blinking into their software, eliminating this problem.
Second, look for unnatural lighting. The deepfake algorithm often retains the illumination of the clips used to build the fake video's model, resulting in a lighting mismatch with the target scene.
Third, listen closely. Unless the audio was also created with deepfake audio, it may not match the speech patterns of the target person, and the video and audio may seem out of sync unless both have been painstakingly manipulated.
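The lighting cue lends itself to a simple illustration: in a genuine video, the brightness of the face region tends to track the brightness of the scene from frame to frame, while a pasted-in face often does not. The function and the synthetic brightness series below are entirely hypothetical; real detectors use learned models, not a single statistic.

```python
import numpy as np

def lighting_mismatch_score(face_luma, scene_luma):
    """Toy heuristic: 1 minus the Pearson correlation between per-frame
    face brightness and per-frame scene brightness. Higher values mean
    the face's lighting does not follow the scene's lighting."""
    r = np.corrcoef(np.asarray(face_luma, float),
                    np.asarray(scene_luma, float))[0, 1]
    return 1.0 - r

# Synthetic per-frame mean-brightness series (hypothetical numbers).
rng = np.random.default_rng(1)
scene = np.linspace(80, 120, 50) + rng.normal(0, 2, 50)   # scene brightens
genuine_face = scene * 0.9 + 5                            # tracks the scene
pasted_face = 100.0 + rng.normal(0, 2, 50)                # ignores the scene

genuine_score = lighting_mismatch_score(genuine_face, scene)  # small
pasted_score = lighting_mismatch_score(pasted_face, scene)    # much larger
```

A real system would first need face detection to isolate the face region in each frame; this sketch only shows why a lighting mismatch is measurable at all.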
Even though deepfakes continue to improve and appear more realistic with each technical innovation, we are not defenseless against them.
Sensity, a company that helps verify IDs for KYC applications, has a deepfake detection platform that resembles an antivirus alert system.
The user is alerted when viewing content that shows signs of AI-generated media. Sensity's system turns the same kind of deep learning used to create deepfake videos toward detecting them.
Operation Minerva takes a more straightforward approach to identifying and combating deepfakes. It uses digital fingerprinting and content identification to locate videos made without the subject's consent, including revenge porn, and sends takedown notices to the sites it polices when a match is identified.
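Content fingerprinting of this general kind can be illustrated with a toy "average hash": shrink a frame to a tiny grid, set one bit per cell depending on whether it is brighter than average, and compare fingerprints by counting differing bits. This is a generic perceptual-hashing sketch with made-up data, not Operation Minerva's actual method.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Toy perceptual fingerprint: block-average the frame down to
    hash_size x hash_size, then set one bit per cell for brighter-than-mean.
    Similar frames yield similar bit patterns."""
    h, w = img.shape
    img = img[: h - h % hash_size, : w - w % hash_size]   # trim to fit blocks
    blocks = img.reshape(hash_size, img.shape[0] // hash_size,
                         hash_size, img.shape[1] // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).ravel()

def hamming(h1, h2):
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(0)
original = rng.uniform(0, 255, (64, 64))             # stand-in for a frame
altered = original + rng.normal(0, 2, (64, 64))      # re-encoded / noisy copy
unrelated = rng.uniform(0, 255, (64, 64))            # a different frame

d_near = hamming(average_hash(original), average_hash(altered))
d_far = hamming(average_hash(original), average_hash(unrelated))
```

The point of such a fingerprint is robustness: a re-encoded or lightly edited copy of a flagged video still matches, which is what lets a service recognize the same content across many sites.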
There was also the Deepfake Detection Challenge, hosted on Kaggle and sponsored by AWS, Facebook, Microsoft, and the Partnership on AI's Media Integrity Steering Committee. The challenge was an open, collaborative initiative to build new ways of detecting deepfakes, with prizes of up to half a million dollars.
The advent of deepfakes has made the unreal seem real. Their quality keeps improving, and combating them will only become harder as the technology evolves.
We must remain vigilant in spotting these synthetic clips that can seem so real. Deepfakes have their place when used for beneficial purposes, such as entertainment, gaming, and med-tech that helps people regain speech. However, the damage they can do at the personal, financial, and even societal level has the potential to be catastrophic. Responsible innovation is vital to lasting success.
Disclaimer: The author of this text, Jean Chalopin, is a global business leader with a background encompassing banking, biotech, and entertainment. Mr. Chalopin is Chairman of Deltec International Group, www.deltecbank.com.
The co-author of this text, Robin Trehan, has a bachelor’s degree in economics, a master’s in international business and finance, and an MBA in electronic business. Mr. Trehan is a Senior VP at Deltec International Group, www.deltecbank.com.
The views, thoughts, and opinions expressed in this text are solely the views of the authors, and do not necessarily reflect those of Deltec International Group, its subsidiaries, and/or its employees.