A federal court handed Meta a major win in its legal battle against prominent writers who claimed the tech giant had stolen their copyrighted books to train AI models. The decision is a second victory for AI firms grappling with the growing copyright challenges in the industry.
District Judge Vince Chhabria ruled that Meta’s use of copyrighted books to develop its Llama large language models constitutes fair use under U.S. copyright law.
The decision came just a week after Amazon-backed Anthropic secured a similar win in San Francisco, suggesting that courts could be growing more sympathetic to the claims of AI companies that they use copyrighted works in a transformative manner.
Authors’ Copyright Infringement Lawsuit Against Meta Dismissed
The lawsuit united 13 prominent writers, among them comedian Sarah Silverman and Pulitzer Prize winner Ta-Nehisi Coates. They claimed Meta infringed copyright by using their books without authorization to train AI models that could rival human writers or copy their work.
The authors asserted Meta’s actions were a clear instance of copyright infringement, citing that the corporation never requested permission before feeding their literary material into its AI training system. They contended that the unauthorized use could damage the market for their books and undermine their ability to profit from their work.
Although Judge Chhabria recognized that “it is generally illegal to copy protected works without permission,” he determined that the authors were unsuccessful in proving that Meta’s particular use actually resulted in real-world market harm. That was the determining factor.

The judge’s ruling is based on the doctrine of fair use, which permits the use of copyrighted material without permission for limited purposes such as criticism, comment, news reporting, teaching, or research. Courts generally consider four factors in deciding fair use claims: the purpose of the use, the nature of the original work, the amount used, and the impact on the market for the original work.
Meta and Anthropic Cases Shape the Future of Training Data
Meta won the argument that using books to train AI models is a transformative application that doesn’t directly compete with or substitute for the original work. The company pointed out that its AI models don’t reproduce entire books or allow users to access copyrighted work directly.
The Meta ruling follows a more complex ruling in favor of Anthropic, the artificial intelligence firm that developed the chatbot Claude. A federal judge ruled that Anthropic’s use of books to train its AI was fair use, but the firm was criticized for the way it obtained the training data.
The court found that Anthropic had illegally copied and stored more than 7 million pirated books in a central repository. This prompted the judge to order a trial in December to resolve the questions of unauthorized copying and storage, though the use of the books for AI training itself was held to be fair.
Those rulings have a broad impact on the wider AI industry, which has been subjected to numerous copyright lawsuits from writers, artists, and media companies. Google, Microsoft, and OpenAI are all battling comparable cases, so those early court decisions are particularly significant in terms of providing legal precedent.
Meta’s Open-Source AI Gets a Boost Amid Ongoing Copyright Battles
Meta welcomed the ruling, with a company spokesperson saying: “We welcome today’s ruling by the Court. Open-source AI models are driving transformational innovations, productivity, and creativity for people and businesses, and fair use of copyright material is a critical body of law to construct this transformational technology.”
The ruling does not close the door completely to future copyright litigation. Judge Chhabria took pains to indicate that other authors can still bring similar lawsuits against Meta, suggesting each case will be decided on its own merits.
The law remains unsettled, with key cases still unadjudicated. The lawsuit by The New York Times against OpenAI and Microsoft, brought in late 2023, claims the companies copied Times articles without authorization. The case is being closely watched as a possible landmark ruling that could redefine how AI firms handle copyrighted training material.
In the meantime, however, AI firms can cite these initial successes as precedent for their fair use arguments, which could be used to deflect future lawsuits for copyright infringement and allow them to build more advanced AI systems.