Musk’s voice and image used in deepfake scam against the Olympic Games

Elon Musk’s image and voice have been used in cryptocurrency scams in the past and are now a central part of a sophisticated attack on the upcoming Paris Olympic Games, which have increasingly become the target of a wide range of cyber threats over the past year.

According to researchers at McAfee, scammers have created a purported three-part series, supposedly hosted on Amazon Prime, that uses AI-based deepfake technology to make it appear as though Musk is narrating it. The series, titled “Olympics Has Fallen II: The End of Thomas Bach,” is being distributed via the messaging service Telegram and appears to be the work of the same people who released a similar short series last year, titled “Olympics Has Fallen,” which featured a deepfake of actor Tom Cruise’s voice.

The cybercriminals behind it falsely claimed that the series was developed by Netflix, although it was also distributed through Telegram channels. According to McAfee, the aim of the video is to discredit the administration of the Paris Olympic Games, which are scheduled to begin on July 26.

In the week following its June 24 release on Telegram, the latest series featuring Musk’s image and voice attracted more than 150,000 viewers.

“In addition to claiming it is an Amazon Prime story, the creators of this content have also circulated images of seemingly fabricated endorsements and reviews from reputable publishers, reinforcing their attempt at social engineering,” Lakshya Mathur, head of research at McAfee, and Abhishek Karnik, head of threat research at the cybersecurity firm, wrote in a report, calling the series a “particularly vicious hoax that aims not only to deceive the International Olympic Committee (IOC) but also to portray it as corrupt.”

Glitches give it away

The three-part series includes episodes that use AI-based voice cloning, image diffusion and lip-syncing technologies to create false images and narratives, Mathur and Karnik wrote. While the series appears to be professionally created at first glance, a closer look into the video revealed various glitches and other clues that suggest it was created using AI technologies.

According to the researchers, the show’s creators apparently took video from a Wall Street Journal interview with Musk and altered it so that the deepfake audio is barely detectable to viewers. Even so, they found glitches in the altered video and noted that the lip-syncing, while good, was not perfect.

At the RSA conference in May, McAfee unveiled its new Deepfake Detector tool – formerly known as Project Mockingbird – designed to help people better identify AI-generated audio.

The deepfake threat

The use of deepfakes – both voice and images – to steal money and data has been a cause for concern in the industry since OpenAI released its generative AI chatbot ChatGPT in late November 2022. The U.S. Federal Communications Commission (FCC) issued a warning to consumers last month about the use of deepfake audio and video links in robocalls and fake content scams that spread misinformation or steal money or personal information.

A month earlier, management consultancy Deloitte published a report on the growing threat of deepfakes in the banking sector, highlighting a high-profile incident in January in which an employee of a Hong Kong-based company transferred $25 million to fraudsters, believing she was on a video conference with colleagues and her chief financial officer.

“However, it turned out that she was not on the phone with any of these people: scammers created a deepfake that mimicked her appearance to trick her into transferring the money,” Deloitte wrote.

Musk is a popular impersonation target

Deepfake scams often use altered images and voices of celebrities and public figures. Late last year, McAfee released its Hacker Celebrity Hot List for 2023, featuring the people most commonly impersonated in deepfakes and other AI-generated content. Actor Ryan Gosling topped the list; Musk came in sixth, between actor Kevin Costner and TV meteorologist Al Roker.

Musk’s voice and image have been most commonly associated with cryptocurrency scams. Most recently, in May, the Hong Kong Securities and Futures Commission warned about a company called Quantum AI that claimed to offer AI-based crypto trading services and boasted a direct connection to Musk. It used AI-generated videos and images of Musk on its now-defunct website and in its social media content.

Olympic Games in Paris in the crosshairs

The use of AI technology in attacks on the Paris Olympics is not surprising. The Microsoft Threat Analysis Center (MTAC) published reports last month on Russia-linked threat groups using both legacy tactics and AI in campaigns aimed at discrediting the IOC and stoking fears of violence in Paris during the Games. In a report, MTAC general manager Clint Watts described the activities of two threat groups – Storm-1679 and Storm-1099 – that have focused on the Olympics for a year, saying Storm-1679 was responsible for the original “Olympics Has Fallen” series.

The group also used misleading AI-generated videos to stoke fear of expected violence in Paris and discourage people from attending, Watts wrote, adding that the Russian threat groups’ activity is likely to increase as the Games approach.

“While video has traditionally been and will remain a powerful tool for Russian IO (influence operation) campaigns, we are likely to see a tactical shift toward online bots and automated social media accounts,” Watts wrote. “These can provide the illusion of widespread support by quickly flooding social media channels and providing the Russians with a degree of plausible deniability.”

Trend Micro researchers also wrote about ways malicious actors could use deepfake technologies at the Olympics.

“Fake news could misrepresent who won or lost a particular event,” they wrote last month. “Deepfake audio could discredit coaches, players, teams, or referees by putting controversial words in their mouths. Deepfake images or videos could be used to defame an athlete and disqualify them from participating. Threat actors could seek to sow discord with AI-generated content, and the Olympics provide a great platform to drive social wedges.”