A group of employees from top AI companies has taken a bold stand, asking their employers to grant them the “right to warn” the public about the potential risks of advanced artificial intelligence (AI). This call for action highlights growing concerns within the AI industry about the responsible development and deployment of these powerful technologies.
Urging Transparency and Accountability
On Tuesday, 13 current and former employees of leading AI firms, including OpenAI and Google DeepMind, published an open letter. The letter demands robust whistleblower protections and the establishment of secure, anonymous channels for reporting concerns. It also calls on companies to stop using the restrictive non-disclosure and non-disparagement agreements that currently silence AI workers.
“I’m scared. I’d be crazy not to be,” said Daniel Kokotajlo, who left OpenAI in April over concerns about how the company was handling the technology. Several other departing employees share his worries, reflecting a broader unease about the industry’s trajectory.
Backing from AI Pioneers
The proposal has garnered support from prominent figures in the AI field, including Yoshua Bengio and Geoffrey Hinton, often dubbed “godfathers” of AI, and Stuart Russell, a leading researcher on AI safety. All three emphasize the need for greater transparency and accountability in AI development.
The group behind the proposal believes AI has the potential to bring significant benefits to humanity but warns of substantial risks, such as the concentration of power within the industry and the suppression of critical internal voices. They argue that AI companies must foster a culture of open criticism while still protecting legitimate trade secrets.
The Right to Warn Initiative
The Right to Warn initiative aims to ensure that those with deep knowledge of AI systems and their risks can speak out without fear of retaliation. “The people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements,” explained William Saunders, a former OpenAI employee and coalition member.
This push for greater openness follows a Vox report revealing that OpenAI had threatened to cancel departing employees’ vested equity unless they signed restrictive exit agreements. OpenAI CEO Sam Altman said he was embarrassed by the clause and claimed not to have known about it, but later reporting suggested that he and other executives had been aware of it.
Seeking Real Accountability
Jacob Hilton, a former OpenAI employee now at the Alignment Research Center, stressed that AI companies must be held accountable for their commitments to safety, security, governance, and ethics. He argued that today’s voluntary public commitments are inadequate because there is little transparency or enforcement behind them.
Hilton highlighted the importance of creating a system that incentivizes companies to honor their public commitments. “Public commitments will often be written by employees who genuinely care, but the company doesn’t have a lot of incentive to stick to these commitments if the public won’t find out about violations,” he noted.
Challenging Restrictive Agreements
The proposal also calls for an end to the non-disparagement agreements that prevent former employees from voicing concerns. Such agreements, often signed as a condition of departure, have long stifled critical discussion of AI safety.
Hilton’s own experience underscores their chilling effect. When he left OpenAI, he felt pressured to sign a non-disparagement agreement; declining would have significantly affected his compensation. That attempt to silence him only strengthened his resolve to support the Right to Warn initiative.
Recent Developments and Company Responses
OpenAI recently disbanded its “Superalignment” safety team following the departure of several high-profile researchers, including co-founder Ilya Sutskever and team co-lead Jan Leike, raising further concerns about the company’s commitment to safety and ethics. Google’s AI efforts have likewise faced scrutiny over safety issues.
In response to the Right to Warn letter, an OpenAI spokesperson said the company is “proud of our track record providing the most capable and safest AI systems” and emphasized the importance of rigorous debate, noting that OpenAI maintains an anonymous integrity hotline. A Google spokesperson declined to comment.
Implications for AI Safety
The Right to Warn initiative underscores how central safety and ethics have become to AI development. Given AI’s potential to influence elections, disrupt economies, and pose other significant risks, transparent and responsible governance of the industry is urgently needed.
The signatories hope that by advocating for greater transparency and accountability, they can ensure AI technologies are developed and deployed in a way that maximizes benefits while minimizing harms. This initiative represents a vital step towards creating a safer and more ethical AI landscape.