
The dangers of voice fraud: We cannot know what we cannot see




It’s hard to believe that deepfakes have been around so long that we no longer bat an eyelid at a new case of identity manipulation. But they haven’t been around so long that we’ve forgotten how this all began.

In 2018, a deepfake showing Barack Obama saying words he never uttered caused an uproar online and sparked concern among U.S. lawmakers, who warned of a future in which AI could manipulate elections or spread misinformation.

In 2019, a famous doctored video of Nancy Pelosi went viral on social media. The video was subtly altered to make her speech appear slurred and her movements sluggish, suggesting her ineptitude or intoxication during an official speech.

In 2020, deepfake videos were used to escalate political tensions between China and India.




And I don’t even want to get into the hundreds – if not thousands – of celebrity videos that have circulated on the Internet in recent years, from the Taylor Swift porn scandal to Mark Zuckerberg’s dark speech about the power of Facebook.

But despite these concerns, a more subtle and perhaps more deceptive threat looms: voice fraud. At the risk of sounding like a pessimist, it could well prove to be the final nail in the coffin.

The invisible problem

In contrast to high-resolution video, the typical transmission quality of audio, especially for telephone conversations, is extremely low.

We’ve become desensitized to low-quality audio (weak connections, background noise, distortion), making it incredibly difficult to detect a true anomaly.

The inherent imperfections of audio lend voice manipulation an air of anonymity. A slightly robotic tone or a noise-laden voice message can easily be dismissed as a technical error rather than an attempt at fraud. This makes voice fraud not only effective, but also extremely insidious.

Imagine getting a call from a loved one’s number telling you they’re in trouble and asking for help. The voice might sound a little odd, but you put it down to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. And therein lies the danger: voice fraud exploits our willingness to ignore small variations in sound that are common in everyday phone use.

Videos, on the other hand, provide visual clues. Small details like hairlines or facial expressions offer telltale signs that even the most sophisticated fraudsters cannot fully replicate, and that the human eye can pick up.

These cues aren’t available on a voice call. That’s one reason why most wireless carriers, including T-Mobile and Verizon, provide free services to block, or at least identify and warn about, suspected scam calls.

The urgency to validate everything and anything

One consequence of all this is that people will learn to automatically check the validity of the source or origin of information. And that’s a great thing.

Society will regain trust in established institutions. Despite pressure to discredit traditional media, people will place even more trust in vetted outlets such as C-SPAN. In contrast, they may grow increasingly skeptical of social media chatter and of lesser-known platforms with no established reputation.

On a personal level, people will become more cautious about incoming calls from unknown or unexpected numbers. The old excuse of “I’m just borrowing a friend’s phone” will hold much less water, as the risk of voice fraud makes us wary of unverified claims. The same goes for caller ID or a trusted mutual connection. As a result, individuals might be more inclined to use and trust services that offer secure and encrypted voice communications where the identity of each party can be clearly confirmed.

And technology will get better and hopefully help. Verification technologies and practices will evolve significantly. Techniques such as multi-factor authentication (MFA) for voice calls and the use of blockchain to verify the origin of digital communications will become standard. Likewise, practices such as verbal passcodes or callback verification could become routine, especially in scenarios where sensitive information or transactions are involved.
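As a concrete illustration of one of the practices mentioned above, here is a minimal sketch of a one-time verbal passcode flow: a business issues a short code out of band (say, through its app), and the caller must read it back before any sensitive topic is discussed. All names, the code length, and the five-minute expiry window are assumptions for illustration, not a reference to any specific product or standard.

```python
import hmac
import secrets
import time

# Hypothetical parameters for this sketch (not from any standard).
CODE_TTL_SECONDS = 300  # passcode expires five minutes after issuance


def issue_passcode() -> tuple[str, float]:
    """Generate a short numeric passcode and record when it was issued."""
    code = f"{secrets.randbelow(10**6):06d}"  # six-digit code, zero-padded
    return code, time.time()


def verify_passcode(expected: str, issued_at: float, spoken: str) -> bool:
    """Check the code the caller read back, within the expiry window.

    hmac.compare_digest performs a constant-time comparison, which avoids
    leaking information through timing differences.
    """
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    return hmac.compare_digest(expected, spoken.strip())
```

In practice the code would be delivered through an authenticated channel the fraudster cannot see, which is what makes the read-back meaningful; the sketch only covers the generation and verification steps.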

MFA is not just technology

But MFA isn’t just about technology. Effectively combating voice fraud requires a combination of education, caution, business practices, technology, and government regulations.

For individuals: Be extra cautious. Remember that your loved ones’ voices may already have been recorded, and possibly cloned. Pay attention, ask questions and listen.

Businesses have a responsibility to develop reliable methods for consumers to verify that they’re communicating with legitimate representatives. That responsibility cannot be shifted: in certain jurisdictions, a financial institution may be at least partially legally liable for fraudulent activity on customer accounts. The same applies to any business or media platform you interact with.

The government must continue to make it easier for technology companies to innovate. And it must continue to pass laws that protect people’s right to internet safety.

It takes a whole village, but it is possible.

Rick Song is CEO of Persona.
