The New York Times has sent a "cease and desist" notice to AI startup Perplexity, accusing the company of violating copyright law and demanding that it stop using the newspaper's content for generative AI purposes, according to a statement the startup made on Tuesday.
The New York Times (NYT) accuses Perplexity of using its articles to create summaries and other AI-generated outputs without authorization. In a letter dated October 2, 2024, and shared with Reuters, NYT claimed that Perplexity's use of the newspaper's content infringed its copyright and demanded that the unauthorized use of its material stop immediately.
NYT has previously raised concerns over the use of its content in AI models, which can pull data from the internet to provide users with generated responses or summaries. The company’s concerns follow a broader pattern seen since the release of ChatGPT, with many publishers warning about the potential misuse of their content in AI-generated outputs.
Perplexity’s Response
Perplexity, which uses AI to answer user questions by indexing web pages and citing factual information, plans to reply to NYT's demands by the October 30 deadline. In its response to Reuters, the company said it does not scrape data to build foundational AI models, but instead indexes content and provides citations.
The company further emphasized that factual information cannot be copyrighted, citing this in support of its practice of indexing pages to provide accurate, fact-based answers. Perplexity has also stated that it follows the content policies published on its website in the interest of transparency.
NYT’s Broader Legal Actions
NYT is not just in dispute with Perplexity. The news publisher has also taken legal action against OpenAI, accusing the company of using its content without permission to train the AI behind ChatGPT. This is part of a broader concern among news publishers that AI companies are bypassing web standards to scrape their data.
Perplexity has also been criticized by other media outlets, such as Forbes and Wired, for allegedly plagiarizing their content. In response to these concerns, the startup introduced a revenue-sharing program aimed at addressing the grievances of publishers.
Perplexity maintains that it is committed to transparency. The company has a public page outlining its content policies and explaining how it uses web content. According to Perplexity, its business model revolves around providing users with factual content supported by citations, rather than scraping data for AI training purposes.
However, the legal dispute between NYT and Perplexity highlights growing tensions between AI firms and news organizations over the use of online content. As the dispute develops, it is likely to shape future discussions about the boundaries of copyright in the era of AI.
Final Thoughts
The cease and desist notice is part of NYT's broader effort to protect its journalistic work from unauthorized use. This issue isn't new. Since the rise of chatbots like ChatGPT, publishers have expressed concerns about their work being used without credit or compensation. NYT's action against Perplexity mirrors its ongoing legal battle with OpenAI, in which it made similar claims. Publishers are wary of AI companies bypassing measures meant to prevent unauthorized scraping of their content. Despite promises from Perplexity to respect these measures, NYT claims its content still appears in the AI firm's outputs, suggesting the issue remains unresolved.
At the heart of this debate is the broader question of how AI companies and content creators can coexist. AI models like those used by Perplexity rely on large amounts of data to function effectively, and this data often comes from publicly available information. However, creators of that information, like journalists, want to ensure their work is not exploited without proper recognition or compensation.