In May, OpenAI announced Media Manager, a tool meant to give creators control over how their works are used in AI training data. Seven months later, it has yet to launch. Media Manager was designed to let creators include or exclude copyrighted material, including text, images, audio, and video, from AI training datasets. Yet despite being positioned as an answer to legal and ethical concerns, reports indicate its development was never prioritized, leaving creators frustrated as they continue to struggle to safeguard their copyrighted works.
Sources familiar with the matter revealed the tool was never a top priority within OpenAI. A former employee stated there was minimal activity around the project. Even external collaborators reported no significant updates in recent months. Fred von Lohmann, a member of OpenAI’s legal team who was involved in Media Manager’s development, transitioned to a part-time consultancy role in October.
OpenAI initially aimed to roll out Media Manager “by 2025.” However, the absence of updates has raised doubts about the timeline. While OpenAI confirmed in August that the tool was under development, no additional information has been shared since then.
AI and Copyright Concerns
AI models rely on training data to predict and generate content. OpenAI’s ChatGPT, for example, creates essays and emails, while Sora, a video generator, produces realistic footage. However, such models sometimes replicate copyrighted material. Instances include Sora generating clips with TikTok logos and ChatGPT quoting copyrighted articles verbatim.
Creators and rights holders have raised objections, with some initiating legal action against OpenAI. Plaintiffs in ongoing lawsuits include authors, visual artists, and media organizations. They allege that OpenAI used their works without permission.
In the meantime, OpenAI offers creators only limited ways to opt out of AI training. A submission form launched last year lets artists flag individual works for exclusion, and webmasters can block OpenAI's web-crawling bots. Creators argue these options are inefficient; the image opt-out process, for example, requires uploading each image and writing a description for it.
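For context, the crawler-blocking option works through a site's standard robots.txt file. OpenAI publicly documents GPTBot as the crawler it uses to gather training data, so a minimal sketch of an opt-out directive looks like this (webmasters should confirm current crawler names against OpenAI's own documentation):

```
# Block OpenAI's training crawler from the entire site
User-agent: GPTBot
Disallow: /
```

Note that this only prevents future crawling; it does not remove works already collected, which is part of why creators say the existing options fall short.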
Experts Question the Effectiveness of Media Manager
Media Manager was presented as a comprehensive solution. It promised to simplify the process for creators to specify what content should or shouldn’t be included in AI training. OpenAI claimed the tool would use advanced machine learning to identify copyrighted works and align with regulatory standards.
Intellectual property experts have raised concerns about Media Manager’s feasibility. Adrian Cyhan, an IP attorney, highlighted the difficulty of implementing content identification tools at scale. Platforms like YouTube and TikTok, despite vast resources, continue to face challenges in managing copyright.
Others believe the tool could shift responsibility onto creators. Ed Newton-Rex of Fairly Trained argued that many creators may remain unaware of the tool, leaving their works vulnerable.
Broader Implications for OpenAI
With Media Manager undelivered, creators still lack an effective way to keep their works out of AI training. OpenAI maintains that its models transform, rather than copy, original works, a defense rooted in fair use. Courts may eventually side with OpenAI, citing precedents such as Google's victory in the Google Books case. If OpenAI loses, however, Media Manager might not shield it from liability.
Legal analysts suggest Media Manager could serve more as a public relations tool than a substantive solution. While it may demonstrate OpenAI’s commitment to ethical AI, critics argue it cannot fully address the complexities of copyright compliance in the AI era.