Fake videos are not a novel innovation; video editing is almost as old as video itself. But even though we have witnessed some skillfully crafted fakes, and CGI (computer-generated imagery) has become almost indistinguishable from reality, faking videos of humans remained difficult and time-consuming. Or so it was, until now.
Whether it was Jon Snow apologizing to the fans of the hit TV show Game of Thrones, or the movie star Tom Cruise randomly appearing on TikTok, almost everyone has seen a deepfake by now. Some deepfakes were amateurish, and most people could spot them with the naked eye, but others were better made and almost indistinguishable from the real thing.
So, let’s take a deep dive into deepfakes: what counts as a deepfake, what harm they may cause, and how to combat them effectively.
What is a deepfake?
“a video or sound recording that replaces someone’s face or voice with that of someone else, in a way that appears real.”
As the dictionary definition suggests, a deepfake is a media file (usually a video or audio clip) that has been faked in a way that makes it ultra-realistic. Celebrities and public figures are the usual victims of such fakery, for reasons we will discuss, but almost anyone can be a victim in this era of the internet and social media.
Most deepfakes focus on the human face and voice, but what makes a fake into a deepfake is the methodology, not the subject matter.
The majority of famous deepfakes are humorous. They include examples like the Jon Snow video and other videos like this one of former United States President Donald Trump, or this one of Facebook (now Meta) founder and CEO Mark Zuckerberg. Yet not all deepfakes are created in good faith; some are made for nefarious reasons.
How are deepfakes made?
Deepfakes aren’t made with video editing software like Premiere Pro or After Effects; instead, they are generated using artificial intelligence and machine learning. That makes deepfakes harder to create, as they require extensive knowledge of AI. At the same time, though, generating deepfakes becomes easy once the right algorithms have been built.
To make a deepfake, an AI is fed many video clips, voice recordings, and still images of the target, and larger datasets usually produce better results. That makes public figures like politicians and artists perfect victims for deepfakes, as there is a wealth of footage of them speaking and interacting with their surroundings.
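The classic face-swap recipe trains one shared encoder with a separate decoder per person; to perform the swap, frames of person A are encoded and then decoded with person B's decoder. The toy sketch below illustrates only that structure, using random NumPy arrays in place of real face crops and plain linear layers in place of deep networks; all sizes and names are illustrative assumptions, not any production tool's code.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 8                       # flattened-frame size, latent size

# Random stand-ins for aligned face crops of two people (rows = frames).
faces_a = rng.normal(size=(200, D))
faces_b = rng.normal(size=(200, D))

# One shared encoder, one decoder per identity (linear layers, no biases).
enc = rng.normal(scale=0.1, size=(D, K))
dec_a = rng.normal(scale=0.1, size=(K, D))
dec_b = rng.normal(scale=0.1, size=(K, D))

def recon_loss(X, dec):
    """Mean squared reconstruction error through the shared encoder."""
    return float(((X @ enc @ dec - X) ** 2).mean())

def grads(X, dec):
    """Gradients of the reconstruction loss w.r.t. encoder and decoder."""
    Z = X @ enc                    # encode
    err = (Z @ dec - X) / len(X)   # scaled reconstruction error
    return X.T @ (err @ dec.T), Z.T @ err   # d/d_enc, d/d_dec

loss_before = recon_loss(faces_a, dec_a)
lr = 1e-3
for _ in range(500):
    g_enc_a, g_dec_a = grads(faces_a, dec_a)
    g_enc_b, g_dec_b = grads(faces_b, dec_b)
    enc -= lr * (g_enc_a + g_enc_b)   # encoder learns from both identities
    dec_a -= lr * g_dec_a
    dec_b -= lr * g_dec_b
loss_after = recon_loss(faces_a, dec_a)

# The "swap": encode person A's frames, decode with person B's decoder.
swapped = faces_a @ enc @ dec_b
print(swapped.shape, loss_after < loss_before)
```

The key design point survives even in this toy: because the encoder is shared, it is forced to learn identity-independent features (pose, expression), which is what lets a decoder trained on person B render B's face in A's pose.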
Lately, some deepfakes are being generated from a single photo or a short audio clip, with varying levels of success. Single-photo generation has made humorous Telegram bots like Round Deepfake possible, but it also means we are all at risk, as most people today have plenty of photos and videos of themselves online.
Seeing (and hearing) is no longer believing
Current deepfake technology has already shown that it’s possible to imitate famous politicians and other public figures and make them say whatever you want. This is dangerous: a video of a political or religious leader asking their supporters to do something could cause a great deal of mayhem and damage, especially when skillfully done.
Meanwhile, voice deepfakes are becoming even more dangerous to corporate and government security. A voice deepfake of a company’s CEO or chairman is an effective way to fool and take advantage of employees or business partners.
In early 2020, a Hong Kong bank manager authorized the transfer of $35 million after a phone call from what he believed was a client in Dubai. The client’s voice and the accompanying emails were fake; the whole thing was an elaborate, costly heist employing voice deepfakes among other tactics. Earlier, in 2019, the insurance firm Euler Hermes Group reported that one of its clients had lost $243,000 to fraudsters posing as the CEO of another firm.
Deepfake for good
While most deepfake use cases are either malicious or neutral, some good uses are possible too. For example, deepfake techniques can give cancer and ALS patients back the voices their illness has taken from them.
Recently, Rolls-Royce developed “Quips” with help from Intel and Microsoft. Quips is a tool designed to build a “voice bank” for ALS patients while they can still speak, then supply that data to an AI that allows patients to keep their voices even after ALS takes away their ability to speak.
Another positive use for deepfakes is media production. Instead of requiring busy actors to be present on set, studios could buy the right to use an actor’s likeness and voice, then produce the movie using deepfakes instead.
What is to be done?
The main issue with deepfakes is how closely they imitate reality. Spotting deepfakes used to be easy in the early days, as most of them had obvious flaws like unnatural facial movement or eyes that never blinked. But over time, deepfakers ironed out these imperfections and brought their productions ever closer to reality.
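Early detectors leaned on exactly these flaws. One classic cue is the eye aspect ratio (EAR): it dips sharply during a blink, and early deepfakes blinked rarely or not at all. The sketch below is a minimal illustration of that idea; in practice the EAR series would come from a facial-landmark tracker, while here the values and the threshold are made-up assumptions for demonstration.

```python
def count_blinks(ear_series, threshold=0.2):
    """Count blinks in an eye-aspect-ratio time series.

    A blink is counted each time the EAR drops below the threshold
    after having been above it (one dip = one blink).
    """
    blinks = 0
    below = False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1          # entering a dip
            below = True
        elif ear >= threshold:
            below = False        # eye reopened
    return blinks

# Hypothetical per-frame EAR values for two short clips (made up).
real_clip = [0.30, 0.31, 0.05, 0.30, 0.32, 0.06, 0.31, 0.30, 0.04, 0.30]
fake_clip = [0.30, 0.31, 0.30, 0.29, 0.31, 0.30, 0.32, 0.30, 0.31, 0.30]

print(count_blinks(real_clip))   # dips three times -> 3 blinks
print(count_blinks(fake_clip))   # never dips -> 0 blinks, a red flag
```

Cues like this worked only while deepfake generators ignored them; once blink statistics were added to the training data, the signal faded, which is why detection has become an arms race.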
A lot of effort now goes into using AI to detect deepfakes. Both Facebook and Microsoft are developing deepfake-detecting algorithms, while the UAE government has published a guide to educate people about the dangers of deepfakes and how to detect, report, and respond to them.
Some people suggest banning deepfakes outright, and Facebook (among other platforms) has placed some restrictions on publishing them. Others call for a ban on the technology used to create deepfakes.
Experts doubt the effectiveness of such bans. Pandora’s box is already open, and getting rid of deepfakes is no longer feasible.
A suggested alternative is to trust only video and audio from “trusted sources”. Yet defining trusted sources is problematic, and this approach may hurt investigative journalism and make it harder to establish the authenticity of video and audio evidence.
In conclusion, deepfakes are an issue we are not yet ready to face. It will take time and effort to build policies that address deepfakes responsibly, and government action will most likely be needed.