Complex Made Simple

Proof Threshold: Exploring How Americans Perceive Deepfakes



By: Twingate


Technological advancements have shaped society since early humans first crafted tools to aid with hunting and survival. As Bill Gates points out, technology has come a long way and is improving quality of life in once-unimaginable ways, from carbon dioxide capture to refrigerators that suggest recipes.

However, this progression of technology and artificial intelligence has also bred a degree of fear and distrust, and for good reason: we've seen a rise in deepfake technology, which began with photo-altering apps but now allows a far greater ability to manipulate the truth.

Companies like Facebook and Microsoft have taken steps to protect their users from deepfake content, but are the American people concerned? We surveyed 1,011 people familiar with this technology to understand how Americans perceive deepfakes. 


Trust what you see?

Deepfakes were defined to survey respondents as “media that is doctored using artificial intelligence-based technology to produce or alter video/image/audio content so that it presents something that didn’t, in fact, occur.” 

More than half of the participants in our study were very or extremely concerned about the implications of deepfake technology. We're looking at the equivalent of “fake news”: the beginning of a movement that seeks to break people’s confidence in the truth. After all, what is truth when everything can be altered?

That’s a question people are concerned about. Two-thirds of participants believed that one day it will be impossible to discern a real video from a fake one. While more than 1 in 4 thought fake digital media is already displacing factual information, most people believed it would take about eight years, on average, before nothing could be trusted.


Implications of False Information

Twitter set a clear standard when it banned all political ads from its platform in 2019. The decision came after Facebook refused to remove fake political ads, suggesting “free expression” mattered more to the company than stemming false information.

According to our findings, more than 3 in 4 Americans were extremely or very concerned about the use of deepfake technology to spread false political information. Their anxiety is well-founded: In 2019, President Trump tweeted a video of Speaker Pelosi “[stammering] through [a] news conference,” and although the video was viewed over 2.5 million times, it was fake.

After political misinformation, people most feared deepfake technology being used to commit fraud and other digital crimes. Cyber-enabled crimes cost Americans more than $2.7 billion in 2018, and the FBI reported that scams were among the top three ways money was extorted. Individuals aren’t the only ones at risk: Corporations stand to keep losing millions as artificial intelligence is used to mimic the voices of well-known CEOs.

Twitter’s decision to ban political ads may seem like a disservice to voters, but it may protect them and their votes in 2020, and it at least comforts some. Forty-two percent of people believed it is very or extremely likely that deepfakes will be used to mislead voters in 2020. Cambridge Analytica misused the Facebook data of 87 million people in 2016, and it remains unclear what role Facebook will play in either the re-election of President Trump or the welcoming of another president.


Identity and Impersonation

While the risk deepfake technology poses to ordinary citizens may not be immediate, some laws to protect Americans do exist, including safeguards against harassment and extortion. Even so, 7 in 10 participants in our study said deepfakes should be illegal, and some are taking action on their own.

The No. 1 way people said they would protect themselves from deepfakes is by denying others the ability to tag them in online photos. Forty-four percent of Americans would go so far as to remove all images of their face from the internet, and 42% said they would delete social media content.

Our findings show that the majority of Americans were concerned about deepfakes, but Gen Xers and millennials in our study worried the most. Perhaps it’s because they’ve spent more of their lives with technology and have seen more of its criminal potential than baby boomers have.


An Obscure Future

Much like facial recognition technology, digital manipulation techniques started out seemingly innocent: photoshopping hips for Instagram “likes” or adjusting one’s skin tone in a photo. Deepfakes have since evolved, however, and our findings show that the majority of Americans are concerned about the technology being used to spread political misinformation.

Deepfakes don’t have to be political to pose a threat, however. Voice technology has been used to extort money from business leaders, and scam calls threaten individuals. While legal protections against deepfake technology remain limited, personal action can still be taken.