OpenAI has announced that the US government will get to see ChatGPT-5 before anyone else to ensure the model's safety and reliability. The company is collaborating with the U.S. AI Safety Institute, a federal body established under the National Institute of Standards and Technology (NIST). The partnership aims to ensure that the next generation of AI models, including the upcoming ChatGPT-5, is safe and reliable before public release. OpenAI will provide the Institute with early access to the model, allowing for thorough evaluation and the development of safety protocols.
In a post on X, OpenAI CEO Sam Altman emphasized the importance of the partnership: “Our team has been working with the U.S. AI Safety Institute on an agreement where we would provide early access to our next foundational model so that we can work together to push forward the science of AI evaluations.” The collaboration reflects a commitment to advancing AI safety measures and to establishing robust, empirically validated standards.
Context and Background
As part of the new partnership, the US government will get to see ChatGPT-5 before anyone else, allowing for early assessment of the model's capabilities. The announcement comes amid rising concerns about AI safety and OpenAI's direction. Earlier this year, OpenAI disbanded its Superalignment team, which was focused on aligning AI models with human intentions.
The disbandment led to the departure of key team members, including Jan Leike and co-founder Ilya Sutskever. Leike later joined Anthropic, while Sutskever started a new AI safety venture, Safe Superintelligence Inc. Leike expressed frustration over OpenAI's failure to allocate promised resources, particularly compute, to safety efforts, and he criticized the company's leadership for prioritizing product launches over safety.
In response to these concerns, Altman reiterated OpenAI's commitment to safety. He announced that at least 20 percent of the company's computing resources would be dedicated to safety efforts, a promise first made in July 2023. Additionally, Altman said that non-disparagement clauses had been removed from employee contracts, a change intended to create an environment where employees can voice concerns freely, without fear of retaliation. Altman noted, “This is crucial for any company, but for us especially, and an important part of our safety plan.”
This partnership with the U.S. AI Safety Institute is part of a broader trend of collaboration between AI developers and government bodies. By granting the Institute early access, OpenAI ensures that the US government will see ChatGPT-5 before anyone else, enabling a thorough review of potential risks.
Leadership Changes
In a related development, OpenAI has appointed retired U.S. Army General Paul M. Nakasone to its board of directors. Nakasone will focus on security and governance, reflecting OpenAI’s commitment to these areas. The company stated in a blog post that Nakasone’s appointment underscores the increasing importance of cybersecurity as AI technology continues to evolve.
This collaboration and the changes in OpenAI’s policies signify a proactive approach to addressing AI safety and ethical concerns. The partnership with the U.S. AI Safety Institute is expected to set new standards for AI evaluations and safety measures.
Strengths and Positive Aspects
The partnership itself is a commendable initiative. By providing early access to its models, OpenAI demonstrates transparency and a willingness to subject its technology to rigorous external scrutiny. This move can enhance public trust and help ensure that the models are safe for deployment. The involvement of the U.S. AI Safety Institute, backed by NIST, adds credibility to the process, as the Institute's role is to develop guidelines and standards for AI measurement and policy.
Moreover, OpenAI’s decision to allocate at least 20 percent of its computing resources to safety efforts shows its commitment to addressing safety concerns. The removal of non-disparagement clauses from employee contracts is also a positive step, encouraging open communication and accountability within the company.