OpenAI, the creator of ChatGPT, is under fire for its handling of safety, transparency, and internal culture. CEO Sam Altman is now in damage control mode as the company faces a wave of backlash. Former employees have raised concerns that the company is prioritizing growth over safety.
A group of current and former OpenAI employees has publicly criticized the company’s focus on profits over safety. A report by The New York Times revealed a culture of unfulfilled safety promises. Daniel Kokotajlo, a former researcher, resigned, expressing concerns about rushing the technology’s development. Additionally, whistleblowers and AI insiders have called for greater accountability in the industry. They published an open letter urging AI companies to allow open criticism and protect whistleblowers.
In response to the criticisms, an OpenAI spokesperson emphasized the company’s commitment to safety. They highlighted an anonymous hotline for employees to report concerns and pointed to the company’s safety and security committee. “We’re proud of our record in developing safe AI systems and believe in rigorous debate given the importance of this technology,” the spokesperson said.
Shift in Priorities
Altman’s leadership style, including the alleged withholding of critical information from the board, has added to concerns about OpenAI. Critics argue that the company’s shift from a nonprofit to a “capped profit” organization in 2019 has prioritized growth over safety. Former board members Helen Toner and Tasha McCauley, who supported Altman’s removal last year, argued in an op-ed that profit motives compromise self-governance.
Internal disputes have also come to light. Toner accused Altman of withholding safety information from the board and launching ChatGPT without their knowledge. OpenAI denied the accusations but expressed disappointment over the ongoing issues.
The Times report also mentioned that Microsoft tested Bing using an unreleased version of GPT in a rollout that had not been approved by OpenAI’s safety board; Microsoft denied these claims. Jan Leike, who led the company’s superalignment team, also left, criticizing OpenAI’s priorities and noting that safety processes had been sidelined in favor of product development.
OpenAI faced another unexpected challenge when actress Scarlett Johansson accused the company of using a voice model similar to hers without permission. Johansson stated that she had declined multiple offers from Altman to provide a voice for OpenAI.
NDA Controversy
Reports also surfaced about OpenAI’s restrictive NDAs, which threatened former employees with the loss of vested equity if they declined to sign. Altman initially claimed he was unaware of the provisions, but it later emerged that he had known about them, further damaging his credibility.
The series of controversies has tarnished Altman’s reputation. Previously seen as a visionary, he now faces criticism for incompetence and unethical behavior. The Wall Street Journal reported conflicts of interest involving Altman’s personal investments in companies that do business with OpenAI.
Prioritizing Profit Over Safety
OpenAI, once a nonprofit dedicated to creating safe AI technology, transitioned to a “capped profit” model in 2019. This shift has sparked criticism that the company now prioritizes growth and profitability over safety. Former employees and board members have voiced concerns that the company’s focus has moved away from rigorous safety measures.
Daniel Kokotajlo, a former researcher, resigned citing worries about the company’s haste in deploying advanced AI without adequate safety precautions. His resignation reflects a broader sentiment among former and current employees that the drive for profit is overshadowing the importance of safety. This concern is further amplified by the departure of key figures like Jan Leike, who led OpenAI’s superalignment team dedicated to addressing the risks of AI superintelligence. Leike’s departure underscores the internal discord over the company’s priorities.
Conflicts of interest involving Altman’s personal investments have sparked additional concerns. Once viewed as a visionary leader, the CEO now faces significant criticism for his handling of the company’s direction and internal culture. Accusations of withholding crucial information from the board and making unilateral decisions, such as the surprise launch of ChatGPT, raise questions about transparency and governance within OpenAI.