Scarlett Johansson Slams OpenAI for Mimicking Her Voice in ChatGPT
In a stunning controversy, actress Scarlett Johansson has accused OpenAI of using a voice eerily similar to hers for “Sky,” one of the voices in its upgraded GPT-4o model for ChatGPT, without her consent. The issue erupted after OpenAI’s recent unveiling of the model, which boasts human-like conversational abilities, including natural-sounding speech and laughter. The Black Widow star’s allegations have sparked widespread debate about ethics in AI development and led OpenAI to suspend use of the “Sky” voice.

The controversy began with a demo of GPT-4o, in which the AI’s voice drew immediate comparisons to Johansson’s performance as Samantha, the AI assistant in the 2013 sci-fi film Her. The resemblance was so striking that fans, friends, and media outlets struggled to distinguish it from Johansson’s actual voice. Adding fuel to the fire, OpenAI CEO Sam Altman posted a cryptic one-word tweet, “her,” shortly after the model’s announcement, intensifying speculation about the voice’s origins.

Johansson issued a statement expressing her shock and anger, revealing a critical detail: nine months earlier, Altman had approached her to voice ChatGPT’s new assistant, and she had declined for personal reasons. Just two days before the GPT-4o launch, Altman reportedly contacted her agent to ask her to reconsider, but OpenAI released the demo before any agreement was reached. “I was stunned to hear the demo,” Johansson said. “Even my closest friends couldn’t tell the difference.”

OpenAI has since suspended the “Sky” voice, insisting it was created from the natural voice of a different professional actress and is not a replication of Johansson’s. The company emphasized that the voice was never intended to mimic her and declined to reveal the actress’s identity, citing her privacy. Johansson’s legal team, however, has sent two letters to OpenAI demanding transparency about how the “Sky” voice was developed, and the decision to suspend the voice appears linked to those demands.

This incident has reignited concerns about deepfake technology and the unauthorized use of personal likeness in AI systems. Johansson’s case underscores broader ethical questions about protecting individuals’ voices, images, and creative contributions in an era when AI can replicate them with alarming accuracy. Her vocal performance in Her, in which she played an AI that forms an emotional bond with a lonely writer (Joaquin Phoenix), was widely praised, making the apparent imitation of her voice particularly sensitive.

The controversy has become a focal point in ongoing discussions about AI ethics, with Johansson’s stand highlighting the need for clear rules to safeguard personal identity. As AI technology advances, the episode serves as a cautionary tale about the potential misuse of likeness and the importance of consent. For now, Johansson’s legal efforts and OpenAI’s response have put a spotlight on the delicate balance between innovation and ethical responsibility in the AI industry.