
Deepfakes and voice clones undermine the integrity of elections

According to TeleSign, the volume of digital transactions is increasing year after year, and with it the potential for AI-powered digital fraud.

Fear of AI-generated content

A new report from TeleSign sheds light on consumer concerns and uncertainties about the use of AI, particularly with regard to digital privacy, and emphasizes the need for ethical use of AI and ML to combat fraud, hacking, and misinformation (also known as “AI for good”). With record numbers of voters heading to the polls in 2024, the report also examines consumer attitudes toward the potential misuse of AI that could undermine trust in elections.

“The advent of AI over the past year has brought the importance of trust to the forefront in the digital world,” said Christophe Van de Weyer, CEO of TeleSign. “As AI continues to advance and become more accessible, it is critical that we prioritize AI-powered fraud protection solutions to protect the integrity of personal and institutional data – AI is the best defense against AI-powered fraud attacks.”

Voters fear AI-generated content in elections

The rise of AI reinforces the importance of trust in businesses. 87% of Americans believe brands are responsible for protecting users’ digital privacy. However, when it comes to their perception of the impact of AI on their digital privacy, there is a surprising level of ambivalence: 44% of U.S. respondents believe AI/ML will make no difference in their vulnerability to digital fraud. This comes against a backdrop of increasing account takeover attempts and other fraud attacks fueled by generative AI.

Younger people are also more likely (47%) than older people (39%) to trust companies that use AI or ML to protect them against fraud.

In a year when more voters than ever before are going to the polls – a total of around 49 percent of the world’s population – fears about the impact of artificial intelligence on trust in elections are high. 72 percent of voters worldwide fear that AI-generated content could affect the upcoming elections in their country.

In the U.S., where the presidential election will take place this November, 45 percent of respondents say they have seen an AI-generated political ad or message in the past year, and 17 percent have seen one at some point in the past week.

74% of U.S. respondents say they would question the outcome of an election conducted online, compared with a slightly lower global average of 70% – making Americans the least trusting of online election results.

Misinformation undermines confidence in election results

In addition, 75% of U.S. respondents believe that misinformation reduces the trustworthiness of election results in the first place. In particular, 81% of Americans fear that misinformation from deepfakes and voice clones will negatively affect the integrity of their elections. Fraud victims are more likely to believe they were exposed to a deepfake or voice clone in the past year (21%).

69% of respondents in the US do not believe they have been exposed to deepfake videos or voice cloning recently. The global average rises to 72%.

With the rapid advancement of generative AI fueling alarming fraud trends, such as a rise in account takeover attempts, it is imperative that companies themselves leverage technologies such as AI to stop fraud attempts before they take hold.

Despite progress in detecting and removing deepfakes, the spread of AI-generated content via fake accounts remains a key challenge. One key way for companies to stop the spread of fake accounts and deepfakes is to implement secure protocols to prove the authenticity of users.
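One widely used protocol for proving user authenticity is the one-time passcode (OTP), delivered out of band via SMS or email. The sketch below is a minimal, hypothetical illustration of that general pattern – it is not TeleSign's actual API, and the function names are invented for this example:

```python
import hmac
import secrets


def generate_otp(length: int = 6) -> str:
    """Generate a random numeric one-time passcode.

    Uses the `secrets` module for cryptographically strong randomness,
    rather than `random`, which is not suitable for security purposes.
    """
    return "".join(secrets.choice("0123456789") for _ in range(length))


def verify_otp(expected: str, submitted: str) -> bool:
    """Compare the issued and submitted codes in constant time.

    `hmac.compare_digest` avoids timing side channels that a naive
    `==` comparison could leak to an attacker.
    """
    return hmac.compare_digest(expected, submitted)


# Hypothetical flow: issue a code, deliver it out of band
# (e.g. by SMS), then check the user's reply.
code = generate_otp()
print(verify_otp(code, code))      # correct code is accepted
print(verify_otp(code, "000000"))  # wrong code is rejected (almost always)
```

In a real deployment, the issued code would also be stored server-side with a short expiry and a retry limit, so that a stolen or guessed code cannot be replayed later.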