OpenAI’s attempt to overturn a controversial court order requiring the company to retain all ChatGPT logs indefinitely has hit a dead end. US District Judge Sidney Stein quickly denied OpenAI’s objections last week, leaving millions of users’ private conversations vulnerable to potential exposure.
The order, issued by Magistrate Judge Ona Wang, came after news organizations led by The New York Times requested it as part of their copyright lawsuit against OpenAI. The news plaintiffs argue they need access to ChatGPT logs to preserve evidence of users attempting to bypass paywalls and access news content through the AI chatbot.
OpenAI Faces Limited Options in Fight to Protect ChatGPT User Data
OpenAI had contended that the blanket order forced the company to abandon “long-standing privacy norms” and conflicted with user expectations set by ChatGPT’s terms of service. However, Judge Stein was not swayed, noting that OpenAI’s user agreement already stated that information could be retained as part of legal proceedings.
While OpenAI says it plans to “keep fighting” the order, the company appears to have few viable options left. It could potentially petition the Second Circuit Court of Appeals for an emergency order, but such requests are rarely granted and would require proving that Judge Wang’s order represents an extraordinary abuse of discretion.

The company now finds itself in a difficult position: either negotiate a data search process with news plaintiffs to potentially end the retention requirement sooner, or continue fighting and risk keeping users’ private conversations exposed for longer.
ChatGPT Data Access: A Limited Glimpse for News Organizations Amidst Privacy Concerns
The good news for users is that news organizations won’t be rifling through all ChatGPT conversations. Instead, only a small sample of data will be accessed based on keywords that both OpenAI and the news plaintiffs agree upon. This data will remain on OpenAI’s servers, be anonymized, and likely never be directly handed over to the plaintiffs.
Both sides are currently negotiating the exact search process, with everyone hoping to minimize how long the chat logs need to be preserved. For OpenAI, sharing the logs risks revealing instances of copyright infringement that could increase damages in the case. The logs might also expose how often ChatGPT attributes misinformation to news organizations.
For news plaintiffs, accessing the logs isn’t necessarily crucial to their case, but it could help them argue that ChatGPT dilutes the market for their content—a factor that could weigh against fair use protections for AI companies.
Leading consumer privacy lawyer Jay Edelson expressed serious concerns about the precedent this order sets. He questioned whether the evidence from ChatGPT logs would even advance the news plaintiffs’ case, while fundamentally changing “a product that people are using on a daily basis.”
When Your Chats Become Legal Evidence
Edelson highlighted the security risks involved, noting that while OpenAI likely has better security than most firms, “lawyers have notoriously been pretty bad about securing data.” The prospect of lawyers handling “some of the most sensitive information on earth” should give everyone goose bumps, he warned.
The privacy stakes are immense. ChatGPT users entrust it with deeply personal information: medical histories, relationship problems, work conflicts, and other private matters. They believed they could delete their conversations or use temporary chats, and that their information would remain safe.
The order sets a concerning precedent for AI data retention in future litigation. Edelson warns that this could lead to more AI data being frozen, potentially affecting even more users. Imagine if similar litigation targeted Google’s AI search summaries or other AI services.
Perhaps most troubling is that enterprise ChatGPT users were excluded from the order, while individual users bear the privacy burden. This means well-resourced businesses keep their data private, while ordinary users’ conversations remain vulnerable.
As Edelson put it: “What’s really most appalling to me is the people who are being affected have had no voice in it.” The court rejected attempts by individual ChatGPT users to intervene, leaving tens of millions of people unrepresented in a case that directly implicates their privacy. The case illustrates the Gordian knot of AI development, copyright law, and user privacy, with ordinary users caught in the middle of a corporate litigation battle.