In recent weeks, OpenAI has faced significant challenges: the launch of its latest model, GPT-4o, has been overshadowed by serious accusations and a string of high-profile resignations, including that of co-founder and chief scientist Ilya Sutskever. The accusations leveled at Sam Altman raise broader concerns about his leadership and management of OpenAI.
On May 17, senior safety researcher Jan Leike publicly criticized OpenAI for prioritizing product development over safety. Leike, who resigned on May 15 alongside Sutskever, co-led the “superalignment” team. This team focused on long-term AI risks but struggled to get the necessary computing power, despite promises from OpenAI. The team was ultimately disbanded following their departure. Leike has since joined rival AI lab Anthropic.
The same day, Vox reported on OpenAI’s use of restrictive offboarding agreements. These agreements, which former employee Daniel Kokotajlo refused to sign, included non-disparagement and non-disclosure clauses. Sam Altman, OpenAI’s CEO, claimed he was unaware of these provisions and promised they would no longer be enforced. However, leaked documents later showed Altman’s and other executives’ signatures on these agreements, casting doubt on his claims.
Scarlett Johansson Dispute
On May 20, Scarlett Johansson accused OpenAI of using a voice similar to hers without consent in the new GPT-4o model. Johansson stated that Altman had approached her multiple times to voice the model, which she declined. OpenAI paused the use of the voice and apologized for the lack of clear communication, asserting the voice was not intended to mimic Johansson’s.
AI policy researcher Gretchen Krueger, who resigned on May 14, echoed concerns about OpenAI’s governance. On May 22, she emphasized the need for better decision-making processes, accountability, transparency, and impact mitigation.
Former Board Members Speak Out
On May 26, former board members Helen Toner and Tasha McCauley accused Altman of lying and psychological abuse in an op-ed for The Economist. They revealed that Altman had withheld key information from the board, challenging OpenAI’s ability to self-govern effectively. Current board members refuted these claims, citing an internal investigation that cleared Altman. Even so, the accusations have raised further questions about transparency and communication within the company.
On June 4, 13 current and former OpenAI and Google DeepMind employees published a letter condemning the lack of accountability in AI companies and calling for stronger whistleblower protections. They argued that employees need the ability to report concerns without fear of retaliation to ensure responsible AI development.
Former OpenAI safety researcher Leopold Aschenbrenner explained his dismissal on the Dwarkesh Podcast, claiming it stemmed from raising safety concerns to the board. He criticized OpenAI’s security measures and alleged that his dismissal was due to sharing a non-sensitive document with external researchers. This dismissal highlights ongoing tensions within OpenAI’s safety teams.
Prioritizing Products Over Safety
A major concern raised by Jan Leike, the senior safety researcher who resigned on May 15, is OpenAI’s alleged focus on product development at the expense of safety. Leike and co-founder Ilya Sutskever, who also resigned, led the “superalignment” team tasked with managing long-term AI risks. According to Leike, the team struggled to obtain the computing resources promised by OpenAI, which hampered its research. The episode underscores how difficult it is for AI companies to balance rapid product development with sustained safety work.
OpenAI’s handling of this situation has raised questions about its commitment to safety, and the accusations against Altman have prompted a broader reevaluation of corporate structures and oversight at AI companies. OpenAI’s use of restrictive offboarding agreements, as reported by Vox, further highlights issues of transparency and ethics. These agreements, which included non-disparagement and non-disclosure clauses, prevented former employees from criticizing the company. Daniel Kokotajlo’s refusal to sign such an agreement brought the practice to light. Although CEO Sam Altman claimed he was unaware of these provisions and promised to stop enforcing them, leaked documents showed his and other executives’ signatures on the agreements.