A musician in the U.S. has been charged with fraudulently using artificial intelligence (AI) tools and bots to stream songs billions of times in order to claim millions of dollars in royalties. Michael Smith, from North Carolina, faces charges of wire fraud, conspiracy to commit wire fraud, and money laundering conspiracy.
Prosecutors allege that Smith’s fraudulent activity marks the first criminal case of its kind. According to the indictment, the 52-year-old used AI-generated music, coupled with thousands of bot accounts, to manipulate streaming numbers on various platforms. Authorities estimate that the scheme ran for several years, enabling Smith to collect over $10 million in royalties that were meant for genuine musicians, songwriters, and other rights holders.
Damian Williams, a U.S. attorney, stated, “Through his brazen fraud scheme, Smith stole millions in royalties that should have been paid to musicians, songwriters, and other rights holders whose songs were legitimately streamed.”
AI-Generated Music and Bot Networks
Smith’s fraudulent activity relied on a partnership with the CEO of an unnamed AI music company. The executive allegedly supplied him with thousands of AI-generated songs monthly, starting in 2018. In exchange, Smith shared track metadata, including song and artist details, and gave the executive a portion of the streaming revenues.
The indictment revealed that Smith operated around 10,000 active bot accounts at one point to generate streams. These bots were used to stream the AI-generated tracks billions of times. The technology behind the fake music improved over time, making it increasingly difficult for streaming platforms to detect the fraudulent activity.
Smith reportedly created thousands of AI-generated songs to inflate streams. By June 2019, the scheme was generating about $110,000 a month, with a portion of the earnings going to his co-conspirators. In a February email disclosed in the indictment, Smith boasted that his tracks had garnered over four billion streams and brought in more than $12 million in royalties since 2019.
In response to a platform’s report of “streaming abuse” in October 2018, Smith denied any wrongdoing, insisting, “There is absolutely no fraud going on whatsoever!”
FBI Involvement and Legal Ramifications
Authorities say Smith earned over $10 million through the fraudulent streaming activity, and he now faces serious legal consequences, including possible decades-long imprisonment if found guilty. FBI acting assistant director Christie M. Curtis expressed the bureau's commitment to cracking down on fraud schemes involving advanced technologies, emphasizing the importance of protecting genuine artistic talent from exploitation.
Prosecutors stressed that this case illustrates the growing risk of using technology like AI for fraudulent purposes. As AI tools become more accessible, they are increasingly used to generate music, videos, and other forms of content.
Wider AI Concerns in the Music Industry
Smith's case highlights broader concerns surrounding AI in the music industry. AI music-generation tools can produce tracks quickly and cheaply, raising questions about copyright infringement, as some tools are trained on copyrighted material from artists without permission, sparking controversy across creative industries.
Several high-profile artists, including Billie Eilish and Elvis Costello, have recently signed an open letter calling for the cessation of AI’s “predatory” practices in music. The letter reflects growing concerns about fair compensation for artists when AI tools use their work as training data.
Music streaming platforms like Spotify and Apple Music have been working to curb artificial streaming practices. Spotify recently updated its royalty policies, requiring more streams for royalty payments and charging labels or distributors if artificial streams are detected. The platform has also increased the minimum track length for certain types of recordings, such as white noise tracks, to ensure legitimate use.