TradingKey - OpenAI on Monday issued a joint statement with the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), actor Bryan Cranston, and his agency, pledging stronger safeguards in its AI video creation application Sora to prevent unauthorized deepfake content. As AI-generated video becomes mainstream, the statement marks a potential turning point in how technology companies handle celebrity likeness rights.
The controversy began after Sora 2's release at the end of September, when numerous unauthorized AI-synthesized videos using Cranston's voice and likeness appeared online. The actor, who won an Emmy for his role in "Breaking Bad," voiced his concerns directly in the statement, emphasizing that "all artists have the right to determine how their voices and likenesses are simulated."
OpenAI stated that it will collaborate with SAG-AFTRA and work with United Talent Agency (which represents Cranston), the Association of Talent Agents, and Creative Artists Agency to strengthen oversight of unauthorized AI-synthesized videos.
“I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness,” Cranston said in a statement.
The episode highlights a fundamental flaw in the "opt-out" mechanism OpenAI put in place when it launched Sora 2 last month, which required celebrities to actively request removal of content rather than grant permission before it was generated. That "ask forgiveness rather than permission" approach may suit a startup's rapid iteration, but applied to widely recognized public figures such as Emmy Award winners it led to a proliferation of false videos and serious backlash.
Creative Artists Agency (CAA) and United Talent Agency (UTA) had previously criticized OpenAI sharply for its use of copyrighted material, warning that Sora poses significant risks to their clients and to intellectual property.
OpenAI also reiterated its support for the NO FAKES Act, legislation intended to prevent AI tools from replicating or misusing a person's voice or visual likeness without authorization.
“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” OpenAI CEO Sam Altman said in a statement. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”
Notably, the implications of this battle extend far beyond Hollywood. OpenAI currently faces mounting pressure from content creators, musicians, and publishers: the New York Times and other media outlets have filed copyright infringement lawsuits against the company, and record labels have sent multiple cease-and-desist letters over its voice cloning features.