OpenAI and Anthropic, two leading artificial intelligence companies, have reached an agreement with the U.S. AI Safety Institute to allow testing of their new models before they are released to the public. The collaboration is aimed at strengthening safety and ethical standards in AI, and it follows growing concerns within the industry about the safety and ethical implications of advanced AI technologies.
The U.S. AI Safety Institute, housed within the National Institute of Standards and Technology (NIST) at the Department of Commerce, announced in a press release that it would receive access to new models from OpenAI and Anthropic both before and after their public release. The arrangement is intended to improve the evaluation of AI capabilities, identify potential safety risks, help mitigate those risks, and promote responsible AI development.
This agreement builds on the executive order on artificial intelligence issued by the Biden-Harris administration in October 2023. The executive order called for new safety assessments, guidelines on equity and civil rights, and research into AI's impact on the labor market. The establishment of the U.S. AI Safety Institute is part of the government's effort to address these concerns and ensure the safe deployment of AI technologies.
Statements from AI Leaders
As part of their commitment to responsible AI development, both companies have agreed to let the U.S. AI Safety Institute test and evaluate their new models to help mitigate potential risks. OpenAI CEO Sam Altman expressed support for the collaboration, saying the company is pleased to have reached an agreement with the institute for pre-release testing of future models and noting that the partnership would help inform safety best practices and standards for AI models. Jack Clark, co-founder of Anthropic, echoed these sentiments, highlighting the importance of rigorous model testing and the need to work with experts to identify and mitigate risks associated with AI deployment.
The agreements between OpenAI, Anthropic, and the U.S. AI Safety Institute come at a time when many AI developers and researchers have raised concerns about safety and ethics in the AI industry. Current and former OpenAI employees recently published an open letter warning of potential issues with rapid AI advancements and the lack of oversight and whistleblower protections. The letter argued that AI companies have strong financial incentives to avoid effective oversight and that existing corporate governance structures are insufficient to address these challenges.
Regulatory Actions and Legal Developments
In response to these concerns, the Federal Trade Commission (FTC) and the Department of Justice are reportedly preparing to launch antitrust investigations into OpenAI, Microsoft, and Nvidia. The investigations will examine the investments and partnerships between AI developers and major cloud service providers, as part of a broader effort to address potential anti-competitive behavior in the AI industry.
Meanwhile, California lawmakers recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB-1047. The bill, which is now awaiting approval from Governor Gavin Newsom, would mandate safety testing and other safeguards for the most powerful AI models, generally those trained above specified compute and cost thresholds.
The agreements mark a significant step toward transparency in AI. The U.S. AI Safety Institute also plans to work with the U.K. AI Safety Institute to provide safety-related feedback to OpenAI and Anthropic. In April, the U.S. and U.K. agreed to collaborate on developing safety tests for AI models, following commitments made at the first global AI Safety Summit last November.