
What is a deepfake, and why are celebrities speaking out about it?

Gayle King and Tom Hanks are among the celebrities who have denounced AI-generated videos of them.
Source: TODAY

Gayle King and Tom Hanks are the latest celebrities to set the record straight on unapproved, AI-generated videos of them promoting products that are scams.

During the past year, AI versions of King, Hanks, Elon Musk and Tom Brady have promoted, respectively, a weight loss product, a dental plan, an investment opportunity and a stand-up comedy show starring an AI Brady. Each has denounced the clips as inauthentic and said they were made without their involvement.

These deepfake videos are examples of how AI is becoming more mainstream. ChatGPT is able to write college essays and officiate weddings. AI-generated yearbook photos are currently sweeping social media, following the popularity of Lensa portraits a year ago. AI was also a sticking point in negotiations between unions representing Hollywood writers and studios.

In the case of deepfake videos, the subjects typically don't agree to participate and, in fact, often learn of them after they go viral online.

"It is probably legal in most places to create these videos without the subject’s consent," tech law expert James Grimmelmann tells TODAY.com.

"Where it becomes illegal is when it’s commercially exploited," he adds.

What are deepfake videos?

Videos like the ones featuring digitally fabricated versions of King, Hanks and other stars are created using artificial intelligence that aggregates real audio and video footage of a person as data points, which are then used to build an altered digital version of the celebrity's voice and likeness.

The technology can then generate videos of them saying and doing whatever is inputted.

Other celebrities who have been caught in the fray in recent years include Jennifer Lawrence, Arnold Schwarzenegger, Mark Zuckerberg and former President George W. Bush.

AI is not new, but it has surged in popularity in recent years as it has become more accessible, Juergen Schmidhuber, an internationally recognized computer scientist and leader in the AI field, tells TODAY.com.

Schmidhuber says technology used in AI was prohibitively expensive when it first emerged in the early '90s, but it has become more and more affordable over time.

"That’s the main reason why this stuff has become so popular!" he says.

He says what we're seeing now with deepfakes is only the beginning.

"Soon AI-generated videos will be even much more convincing than they are now," he says. "However, as always, people will get used to it, and learn to doubt what they see. Just like decades ago they got used to realistic movies of dinosaurs."

Schmidhuber objects to using AI for deepfakes or other scams.

"AI-generated imitations of celebrities should be treated like AI-generated imitations of other persons," he says. "Offensive, derogatory, harmful content should be taken down, and its creators should be legally prosecuted."

What have celebrities said about deepfake videos?

King and Hanks released statements during the past week denouncing the deepfake videos that used their likenesses.

"They’ve manipulated my voice and video to make it seem like I’m promoting it," King said on Instagram Oct. 2 regarding the deepfake video involving a weight loss product. "I’ve never heard of this product or used it! Please don’t be fooled by these AI videos."

Commenters on her post expressed fear over the trend.

"Oh my…Gayle this is terrifying!" CBS colleague Nate Burleson wrote.

"This is really scary. I wouldn’t believe it, but someone would. That’s what they count on. What is the truth anymore? We should all be concerned!" another person wrote.

Hanks posted a warning regarding the deepfake video involving him on Instagram Oct. 1.

"Beware!!" he wrote over a screenshot of the AI video of him. "There's a video out there promoting some dental plan with an AI version of me. I have nothing to do with it."

Multiple deepfake videos of Musk have been created, one of which began circulating in May 2022.

"Yikes. Def not me," he commented on a tweet that included a clip of one of his deepfake videos, in which he is promoting what is ultimately a cryptocurrency scam.

In Brady's case, he reportedly sent a cease-and-desist letter in April to the creators of his fake video, the Dudesy podcast, hosted by Chad Kultgen and Will Sasso. The letter reportedly called the video "highly offensive" and said it presented him in a "false light."

The creators removed the hourlong video but maintained that they did nothing wrong.

"It's not presenting Mr. Brady. It's an impersonation of Tom Brady," Kultgen said in April, later adding, "It's simply a parody of the idea of Tom Brady doing standup."

Schmidhuber says, "AI-generated parodies of persons should be clearly labeled as such."

Brady announced on Instagram Sept. 28 that he's partnered with Meta to introduce "a new AI named Bru," which incorporates his likeness.

In the music industry, several AI-generated versions of artists' songs have gone viral, including Drake's voice on "Munch" by Ice Spice. Drake and his record label, Universal Music Group, later slammed the use of AI to create fake songs.

Is it legal to create a deepfake video?

Legislation is still catching up to the world of AI, but some bills have recently been introduced.

In June, Rep. Ritchie Torres (D-NY) introduced a bill that would require “generative artificial intelligence to disclose that their output has been generated by artificial intelligence.” During the same month, Rep. Ted Lieu (D-CA) introduced a bill to establish an artificial intelligence commission. Additional legislation aiming to regulate AI has been introduced.

Grimmelmann, a law professor at Cornell University, says there is a narrow avenue for legally challenging deepfakes at present.

“Deepfaking specific actors for use in commercially distributed films is probably a right of publicity violation,” he says, noting that the protection covers privacy rights. “The right of publicity doesn’t apply to private noncommercial uses. Where it becomes illegal is when it’s commercially exploited.”

He says a remedy is to expand privacy rights “to highly realistic synthetic digital depictions of people, even when those depictions aren’t being commercially exploited.”

Deepfake videos also raise ethical questions, he says.

“It’s unethical to pass off a deepfake as an actual video; that’s a kind of lying,” he says. “It’s more ethically ambiguous when the deepfake is acknowledged to be one, because this gets at hard and contested questions about how much we ought to be able to control how other people think about and talk about us.”

As far as avoiding being deepfaked, Grimmelmann says there is little recourse for celebrities.

"Celebrities cannot avoid it," he says. "There are too many photos and videos of them already available, and their jobs basically require them to be publicly visible."

Everyone else can avoid it if they "stay hidden and don't appear in photos and videos on the internet," he says.

"Some people live their lives this way, others don’t," he says. "The fear of being turned into a deepfake doesn’t seem like the most important reason to decide whether to be photographed. I think the risk of being stalked or harassed, or the desire to have a public presence, are more important considerations for most people."