
How deepfake scams use celebrities to lure victims

Wondering why you see videos of a national news anchor promoting a cannabis company on YouTube? Or why tech billionaire Elon Musk is featured in an ad for an investment opportunity that sounds too good to be true?

No matter how convincing it may have looked, the video was likely a deepfake, a term that refers to media that has been manipulated or manufactured using artificial intelligence.

Canadian Anti-Fraud Centre (CAFC) client and communications officer Jeff Horncastle is warning Canadians about a rise in video and audio scams that use the likeness of media personalities to promote fraudulent cryptocurrency platforms and other schemes.

“All (scammers) need is a photo or a short audio clip of the person they are trying to spoof, and they use that as an additional tool to convince potential victims that it is a real person,” Horncastle told CTV National News.

These scammers often use well-known names to build trust, such as US television personalities Gayle King, Tucker Carlson and Bill Maher.

CTV National News chief news anchor and managing editor Omar Sachedina has also been featured in deepfake videos promoting fraud on YouTube. One of these ads appears to show Sachedina introducing a news segment, but instead he praises a cannabis company. The audio matches the video convincingly, but it is fake.

Another video shows Tesla CEO Elon Musk selling shares in fraudulent crypto companies.

Although the AI technology behind these videos is not new, apps and websites that can create convincing fake images, videos and audio clips of familiar faces are becoming increasingly easy to access.

As with robocalls and spam emails, the creators of these deepfake videos have scammed people out of thousands of dollars. Although there is no concrete data on how many Canadians have been targeted with deepfake content, the CAFC reported that Canadians lost a total of $531 million to fraud in 2022. As of June, Canadians had lost $283.5 million this year.

AI ONLY GETS “BETTER,” WHICH MAKES FRAUD WORSE

Aside from their eerily realistic appearance, the most frightening aspect of these deepfakes is that technology is getting better, making it harder for a person to identify a fake, says technology expert Mark Daley.

“It is important to remember that the deepfakes you see now will be the worst you will ever see for the rest of your life. It’s only going to get better,” Daley, the chief digital information officer at Western University, told CTV National News.

Daley explains that the technology has advanced rapidly over the past five years. Creating a deepfake previously required a highly skilled person with access to specialized software; now anyone with average AI skills and a gaming computer can spread misinformation using the likeness of a well-known figure, such as a politician running for office.

This disinformation is particularly alarming for psychotherapist and tech journalist Georgia Dow, who says fake videos can fuel people’s hatred of certain groups and individuals, or convince them that their favorite star said something in an interview that never actually happened.

“It’s almost like these revenge fantasies. We want to see people we don’t like in certain situations, we want to see people we like in certain situations. These generate a lot of clicks and people are now trying to get that social currency,” Dow told CTV National News.

This technology can be especially harmful when used around important political events.

For example, in the run-up to the 2024 US presidential election, potential candidates are already seeing videos of themselves working against their campaigns. Florida Gov. Ron DeSantis, who is currently seeking the Republican Party nomination, appeared in a fake video last week in which he seemed to announce that he was dropping out of the race.

“It really got into our minds that maybe this person is nefarious or maybe they have different feelings. I think this is a really big deal politically,” Dow said.

Recently, Google announced that it would use its own technology to add watermark warnings to AI-generated images in an effort to curb false claims and flag fake photos. However, even with such efforts, there are concerns about the potential spread of disinformation.

In a statement to CTV News, a spokesperson for the company said: “We are committed to keeping people safe on our platforms, and when we find content that violates these policies, we take action.”

“We continue to improve our enforcement practices to combat abuse and fraud. In recent years, we have implemented new certification policies, increased advertiser verification, and increased our capacity to detect and prevent coordinated fraud.”

HOW TO DETECT A DEEPFAKE

As technology advances, experts say it’s important for people to question the media they consume online. Warning signs that a video may be fake include audio recordings that don’t match a person’s mouth movements, videos with unnatural eye movements, and differences in the lighting of the person speaking and the background.

Dow also recommended focusing on the “outline” of a person compared to the background, including their hair and areas around their face, especially when the speaker is moving.

Horncastle recommended asking why a media personality would promote products outside their usual interests, or why they would promote anything at all if endorsements are not something they normally do.

“The first red flag should be, ‘I think this celebrity doesn’t promote this stuff,’” he said.

“And if you’re still not sure, do as much research as you can, but chances are the sites they’re promoting are fraudulent.”

Horncastle says researching the companies behind these products before purchasing from them will reveal how legitimate they are and where exactly your money might be going.
