OpenAI’s bid to obtain reporters’ notes from The New York Times in a copyright case tests First Amendment limits, and legal experts see it as a bold move aimed at driving up the newspaper’s legal costs or delaying its lawsuit. Several intellectual property and First Amendment lawyers doubt the request will be fully granted.
For months, OpenAI Inc. has pursued an aggressive discovery strategy against The New York Times Co., and the two sides have reached an impasse. OpenAI has asked Judge Sidney H. Stein of the US District Court for the Southern District of New York to order the Times to produce reporters’ notes, interview memos, and other materials underlying approximately 10 million articles that were allegedly used without authorization to train OpenAI’s ChatGPT. OpenAI argues that these materials are necessary to determine the copyright status of the articles.
First Amendment Concerns
Legal experts say the request raises significant First Amendment concerns because it could undermine the protections that typically safeguard the confidentiality of reporters’ newsgathering. Jeff Kosseff, an attorney and senior fellow at the Future of Free Speech, emphasized the constitutional implications of the demand.
Doug Lichtman, a media law professor at UCLA, labeled the discovery requests as “abusive.” He pointed out that such tactics could drain the Times’ legal resources, forcing it to settle and thereby preventing the court from addressing critical copyright questions. The Times, unlike OpenAI, has limited funds to allocate to this case, making it a strategic target for such legal maneuvers.
If granted, the motion would require the Times to produce a century’s worth of reporters’ notes, a request that Joshua Rich, an IP lawyer at McDonnell Boehnen Hulbert & Berghoff LLP, called overwhelming. Rich suggested that OpenAI’s aggressive stance may be a symbolic gesture, signaling to the Times the potential burdens of litigating the case.
Discovery Dispute
The Times filed a lawsuit against OpenAI and Microsoft in December 2023, accusing them of copyright infringement by using its articles in AI training datasets. OpenAI has attempted to dismiss large portions of the lawsuit, claiming the Times used prompt hacking to generate the allegedly infringing outputs. The discovery phase has been contentious, with both sides demanding extensive information from each other. The Times has sought details on how OpenAI’s AI models were trained, while OpenAI has requested information related to the prompts and reporters’ notes.
Adam Weissman, a media and entertainment lawyer, noted that OpenAI’s request is unprecedented in copyright disputes. Typically, a copyright infringement suit relies on the registration of the copyright, which the Times possesses for its articles. This registration is generally considered sufficient evidence of copyrightability. Legal experts argue that challenging the validity of these registrations is a weak argument, as news articles are inherently copyrightable due to their original organization and presentation of facts.
Looking Ahead
The Times has argued that the request is irrelevant to the case and invades reporters’ privilege. This privilege protects the identities of journalists’ sources, even if they are not confidential. The newspaper also highlighted that such massive discovery requests could have a chilling effect on journalistic practices.
Legal experts suggest that OpenAI’s request may be a tactical move to delay the lawsuit, possibly in hopes of favorable legislative changes regarding generative AI. If the request is even partially granted, it could set a precedent for future copyright cases, altering how news organizations protect their reporting processes and sources. Jane Kirtley, a media ethics and law professor, warned that compelled disclosure in such cases could leave news organizations vulnerable to similar demands in future litigation.