A Voice of Controversy: David Greene vs. Google's AI
A radio host's voice, a tech giant's AI, and a battle for identity.
Radio personality David Greene has taken legal action against Google, claiming that the tech powerhouse replicated his distinctive vocal signature for its NotebookLM AI tool without his consent. The allegation has sparked a heated debate over voice, identity, and AI.
Greene, a former co-host of NPR's Morning Edition and current host of the Left, Right, & Center podcast, discovered NotebookLM's automated podcast generation capabilities through a former colleague. He was reportedly "completely freaked out" by the discovery, according to an interview with the Washington Post.
The lawsuit, filed in California on January 23, accuses Google of seeking to "replicate Mr. Greene's distinctive voice" to create synthetic audio products that mimic his delivery, cadence, and persona. Greene argues that this violates California's statutory right of publicity and its unfair competition law, which protect individuals from unauthorized use of their likeness.
But here's where it gets controversial: Google denies any connection between NotebookLM and Greene. A Google spokesperson, José Castañeda, told Gizmodo that "the sound of the male voice in NotebookLM's Audio Overviews is based on a paid professional actor Google hired." This statement adds a layer of complexity to the case, raising questions about the source of the voice and the potential for confusion among listeners.
The use of individuals' likenesses in AI models, and the training of those models on copyrighted materials, have been hot-button issues in recent years. In 2024, OpenAI faced a similar controversy when it removed its AI-powered voice, Sky, after allegations that it sounded like actress Scarlett Johansson, who had not given permission for her likeness to be used.
Several major lawsuits have been filed against tech and AI companies for using copyrighted material to train their AI models. In January, a group of prominent artists, including Johansson, launched a campaign against AI slop and theft, highlighting the growing concerns around intellectual property rights in the AI industry.
And this is the part most people miss: the potential impact on the public's perception of AI-generated content. If listeners can't distinguish between a human voice and an AI-generated one, it raises serious questions about the ethics and transparency of AI technologies.
So, what do you think? Is this a case of a tech giant pushing the boundaries of innovation, or a clear violation of an individual's rights? The debate is open, and we'd love to hear your thoughts in the comments!