Nearly 300 renowned artists, songwriters, and actors have rallied behind a new bipartisan Congressional bill focused on regulating the use of artificial intelligence (AI) for voice and likeness cloning. Spearheaded by the Human Artistry Campaign, this initiative gains visibility through a USA Today print ad featuring support from A-list figures such as 21 Savage, Cardi B & Offset, Bette Midler, Bradley Cooper, and others. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act), introduced in the U.S. House on January 10, seeks to establish a federal framework protecting individuals from the misuse of AI-generated deepfakes.
Defending Human Rights in the AI Era
The No AI FRAUD Act aims to uphold fundamental human rights by protecting voices and likenesses against unauthorized use through AI technology. The Human Artistry Campaign urges citizens to support H.R. 6943, highlighting the necessity of safeguarding individuality in the face of advancing AI technologies. Initiated by various creative industry organizations, the campaign builds upon seven core principles outlined in March 2023, advocating for the licensing of artistic works used in AI models and discouraging government actions that exploit creators without permission or compensation.
Broad Support and Legislative Roots
Notable support for the No AI FRAUD Act extends across musical artists, actors, and industry professionals, including Chuck D, Mary J. Blige, Trisha Yearwood, Bradley Cooper, and Debra Messing. Representative María Elvira Salazar and a bipartisan group drew inspiration from the Senate discussion draft NO FAKES Act, introducing the legislation to bridge existing gaps and empower artists and citizens to protect their creative work and online individuality.
Urgency Prompted by AI Advances
Recent incidents, such as the spread of a “fake Drake” track and AI-generated audio clips falsely portraying political figures, have heightened the need for legislative action. The No AI FRAUD Act proposes a federal standard to prevent the unauthorized use of AI for replicating voices and likenesses of public figures. While existing “right of publicity” laws vary among states, the bill seeks to create a unified and comprehensive approach to address these challenges.
State-Level Initiatives
Simultaneously, Tennessee introduced a parallel piece of legislation known as the Ensuring Likeness Voice and Image Security (ELVIS) Act. This state-level initiative aims to update protections for songwriters, performers, and music industry professionals, specifically addressing the misuse of AI-generated content.
Strong Backing from Industry and the Public
The No AI FRAUD Act has gained robust support from various music companies and organizations, including the Recording Industry Association of America (RIAA), Universal Music Group, and the Recording Academy. This bipartisan effort addresses potential threats posed by the increasing sophistication and accessibility of AI-generated audio.
Concerns Raised by AI-Generated Audio
The use of AI-generated audio has raised substantial concerns, particularly evident in incidents where political figures were falsely portrayed in manipulated audio clips. The proposed legislation seeks to penalize the production and distribution of AI-generated replicas without consent, acknowledging real-world consequences that include potential violence, election interference, and fraud.
Challenges in Detecting AI Manipulation
The rapid advancement of voice cloning technology, coupled with the accessibility of AI tools, makes fake audio campaigns difficult to detect. Unlike manipulated images and videos, AI-generated audio often lacks the telltale artifacts that betray manipulation, making it hard for both individuals and social media platforms to identify and moderate. The bill addresses these challenges by providing legal recourse for those whose voices and likenesses are exploited.
Global Implications and Misinformation
The impact of AI-generated voice deepfakes extends beyond national borders, as evidenced by incidents in Slovakia and Sudan. Foreign-language deepfakes, often shared on social media platforms without robust fact-checking, contribute to misinformation and confusion. Experts warn that such techniques could be deployed in future elections worldwide, posing a threat to democracies.