OpenAI recently found itself in hot water after releasing a chatbot voice that many listeners said resembled Scarlett Johansson’s, prompting an apology to the actor. While OpenAI denied directly imitating Johansson’s voice, CEO Sam Altman’s cryptic social media post referencing the film “Her,” in which Johansson voiced an AI character, raised eyebrows. The incident has sharpened growing concerns about voice-cloning technology.
Voice cloning has long been fraught with ethical and practical challenges. Last year, ElevenLabs stirred controversy when its voice-cloning software was used to produce deepfake celebrity audio, including a clip of Emma Watson reading Hitler’s Mein Kampf. Law enforcement agencies have warned of extortion schemes that use AI voice clones over the phone. Meanwhile, the development of realistic, nuanced voice cloning, driven in recent years by programs such as Tortoise, has accelerated, transforming how AI systems learn to mimic human speech.
Despite the potential of AI voice programs to improve business efficiency, concerns persist about their misuse. OpenAI’s alleged use of a voice resembling Johansson’s without her consent has drawn criticism, with some questioning the company’s ethics. Others counter that distinguishing a genuine voice from a convincing imitation is inherently difficult, and warn that the episode could set a controversial precedent for how sound-alike voices are handled in the future.
Amid the controversy, companies like TALKR.ai and Synthesia are applying AI voice technology to uses ranging from virtual voice assistants to video avatars. These tools offer businesses promising ways to streamline customer service and content creation. Even so, ethical practice, above all obtaining proper consent before using someone’s voice, remains paramount in the development and deployment of AI-driven technologies.