If you suddenly find yourself in a YouTube video saying things you’d never imagine saying, or doing illegal things that might land you in jail, realize this: the world of video and audio is becoming increasingly dangerous, and it might affect you more seriously than you imagine. Deepfake is here to impact your life and your career in unforeseen ways, and unless you stay alert, it could ruin you forever.
Deepfake, in layman’s terms, is a human image synthesis technique based on Artificial Intelligence (AI), which creates false realities of real people doing and saying fictional things. While this holds exciting possibilities for visual media and computer graphics, there are concerns about its potential to create fake news and malicious hoaxes, in addition to damaging lives and reputations. What started off a few years ago as a minor online development has today become a nightmare for celebrities and organizations alike, as they unwittingly become targets of Deepfake videos.
How it began
While photos and videos have been manipulated for a long time, the advent of Artificial Intelligence (AI) is believed to have given Deepfake the worrying dimension it has assumed today. It is widely believed that Deepfake in its current form took shape in 2018, when a desktop application called FakeApp was launched. The app allowed users to create and share face-swapped videos using an artificial neural network and three to four gigabytes of storage space. Today, researchers have revealed new software that uses machine learning to let users edit, change, add or delete the words someone else has spoken.
How it works
Both audio and video Deepfakes make use of a technology called Generative Adversarial Networks (GANs), which comprise two machine learning models that work against each other: one produces fake footage, while the other tries to detect it, and with each round the generator improves until its fakes become hard to tell from the real thing. To demonstrate how easy such editing has become, scientists recreated fakes that change what people say in videos. For this, they first scanned the target video to identify and isolate the phonemes spoken by the subject. They then matched these phonemes with visemes (the facial expressions that accompany each sound), and created a 3D model of the lower half of the subject’s face using the target video. Finally, software combined the data from these three sources (phonemes, visemes and the 3D face model) to build new footage that matched the edited text, which was then pasted onto the source video.
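The adversarial tug-of-war behind a GAN can be seen in miniature. The sketch below is purely illustrative and far simpler than any real Deepfake system: a "generator" is just a linear map of random noise, a "discriminator" is a one-variable logistic regression, and the "data" to imitate is a 1D Gaussian with mean 4. All names and hyperparameters here are assumptions chosen for the toy example, not taken from any real tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): "real" samples ~ N(4, 1);
# the generator maps noise z to w*z + b; the discriminator is
# a scalar logistic regression that scores how "real" a sample looks.
w, b = 1.0, 0.0          # generator parameters
d_w, d_b = 0.1, 0.0      # discriminator parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminate(x):
    # Probability that sample x is real, per the current discriminator.
    return sigmoid(d_w * x + d_b)

for step in range(3000):
    real = rng.normal(4.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = w * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient of the binary cross-entropy, worked out by hand).
    pr, pf = discriminate(real), discriminate(fake)
    d_w -= 0.05 * (np.mean((pr - 1) * real) + np.mean(pf * fake))
    d_b -= 0.05 * (np.mean(pr - 1) + np.mean(pf))

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    pf = discriminate(w * z + b)
    w -= 0.05 * np.mean((pf - 1) * d_w * z)
    b -= 0.05 * np.mean((pf - 1) * d_w)

# After training, the mean of the fake samples (which equals b, since the
# noise has zero mean) should have drifted toward the real data's mean of 4.
print(w, b)
```

The same loop structure, with the linear maps replaced by deep convolutional networks over images, is what powers face-swapping models: the forger and the detective train each other until the forgeries are convincing.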
Nothing fake about its growth
What is worrying is the increasing ease with which fake videos can be created. In fact, leading Deepfake artist Hao Li believes that in as little as six months, Deepfake videos could become completely untraceable. In an interview, Li said: “I believe it will soon be a point where it isn’t possible to detect if videos are fake or not. We started having serious conversations in the research space about how to address this and discuss the ethics around deepfake and the consequences.”
According to the Amsterdam-based Deeptrace, a company that builds technology to spot AI-manipulated content, there are at least 14,678 Deepfake videos on the Net right now, and the number is growing. This is 84% higher than last December, when the company found 7,964 Deepfake videos in its first count.
The worries mount
While awareness about this issue is on the rise, many believe it may be impossible to prevent this malicious technology from spreading its wings.
Experts studying this growing phenomenon suggest that a possible remedy for this would be to either watermark (electronically sign) videos, or to provide a context and let the public be informed at the beginning of the video that what they are about to watch is fiction. However, the argument against this is that watermarks can be easily removed and that online media doesn’t have much of a context, in any case.
Impact on business
Recently, an employee of a major company in the UK was tricked into thinking he was on the phone with his boss at the parent company, who asked him to transfer a huge amount of money to a Hungarian supplier. What the employee did not realize was that he was speaking not with the actual CEO, but with a scammer who used AI to impersonate him. The CEO’s voice had been sampled from his speeches, TED Talks, YouTube videos and other recordings, which were fed into a voice-imitation program. More broadly, businesses are exposed to huge financial losses because top management is not always physically present to authorize transactions, and so must rely on phone calls, digital signatures or audio clips, all of which can be faked.
Although businesses can guard against Deepfake to a large extent by sensitizing their employees to the dangers of fake videos and by monitoring the brand’s online presence, there is not much that can be done at this point to stop Deepfake from spreading. So to sum up: it’s there, it’s real and it’s very, very scary.