Artificial intelligence (AI) is like a powerful genie. Once out of the bottle, it can do wonders, but if not managed well, it can also cause chaos. Two former board members of OpenAI, Helen Toner and Tasha McCauley, believe that AI companies can’t be trusted to manage themselves. They argue that third-party regulation is needed to keep them in check. Does OpenAI need government regulation?
The OpenAI Turmoil
Toner and McCauley stepped down from OpenAI’s board in November, amid the dramatic attempt to remove Sam Altman, the co-founder of OpenAI. Although Altman was briefly removed, he was quickly brought back as CEO and rejoined the board five months later.
The two former board members wrote in The Economist that their decision to remove Altman was right. They mentioned that senior leaders at OpenAI had accused Altman of creating a “toxic culture of lying” and engaging in “psychological abuse.”
Since Altman’s return to the board in March, there have been doubts about OpenAI’s commitment to safety. The company faced criticism for using an AI voice in ChatGPT’s GPT-4o (“omni”) model that sounded strikingly like actress Scarlett Johansson. This raised concerns about ethics and accountability.
The Need for Regulation
Toner and McCauley are not convinced that OpenAI can hold itself accountable with Altman back in charge. They believe that for OpenAI to achieve its goal of benefiting “all of humanity,” it needs external oversight. Governments, they argue, must step in to create effective rules.
Initially, the duo believed OpenAI could govern itself. However, their experience on the board showed that the lure of profits makes self-governance unreliable.
Policymakers’ Role
The call for government regulation comes with a warning. Toner and McCauley caution that poorly designed laws could stifle competition and innovation, especially hurting smaller companies. They urge policymakers to act independently from AI companies when making rules. It’s vital to avoid loopholes that benefit early movers and prevent regulatory capture, where industry insiders control the regulation process to their advantage.
The AI Safety and Security Board
In April, the Department of Homeland Security set up the Artificial Intelligence Safety and Security Board. This board will advise on the safe development and use of AI in critical infrastructure in the U.S. The board has 22 members, including Altman and top executives from big tech companies like Nvidia and Alphabet.
However, the board has been criticized for having too many representatives from profit-driven companies. AI ethicists worry that this could lead to policies favoring industry profits over human safety. Margaret Mitchell, an AI ethics expert at Hugging Face, emphasized the need to prioritize people over technology.
The Bottom Line
AI is a powerful tool, but like any tool, it needs proper oversight. Toner and McCauley argue that AI companies like OpenAI can’t be trusted to regulate themselves. They call for governments to step in and create fair and effective rules. Policymakers need to be careful to ensure these regulations don’t harm innovation or favor big companies over smaller ones.
The genie of AI is out of the bottle, and it’s up to us to ensure it serves humanity well. With the right rules in place, we can harness AI’s power for good while keeping its risks in check. So, let’s hope our policymakers act wisely and independently to make this happen.