Geoffrey Hinton says he’s proud that one of his former students fired Sam Altman during the November 2023 boardroom coup at OpenAI. Hinton, renowned as one of the pioneers of artificial intelligence (AI), has been awarded the Nobel Prize in Physics for his groundbreaking work on artificial neural networks. Speaking at a press briefing, he made headlines by criticizing Altman, CEO of OpenAI, while acknowledging his collaborators and students.
After receiving the prestigious honor, Hinton, operating on just two hours of sleep, humbly acknowledged that he was unaware of being nominated for the Nobel Prize. He paid tribute to his long-time collaborators, Terry Sejnowski and the late David Rumelhart, calling them “mentors.” Hinton also emphasized the importance of his students at the University of Toronto, praising them for their contributions to his life’s work.
“They’ve gone on to achieve remarkable things,” Hinton remarked, clearly proud of their success. He singled out one former student who, he noted, had played a role in firing Sam Altman, a remark that intrigued the assembled press.
Hinton’s comment was directed at Ilya Sutskever, a former student of his and, at the time, Chief Scientist at OpenAI. Sutskever, alongside other board members, ousted Altman in a dramatic boardroom coup in November 2023. Sutskever quickly expressed regret over the move, and Altman was reinstated within days, ending the turmoil at OpenAI.
Hinton, Sutskever, and Alex Krizhevsky worked together in 2012 to develop “AlexNet,” a groundbreaking deep neural network that significantly advanced the field of AI by accurately identifying objects in images. This achievement, often referred to as the “Big Bang” of AI, cemented Hinton’s status as a key figure in the evolution of artificial intelligence.
Criticism of Sam Altman and AI Safety Concerns

Despite the controversy, Hinton made clear he is proud of his former student’s role in the ouster, casting it as a stand for AI safety. During the press briefing, he criticized Altman’s approach to AI, particularly on matters of safety and profit. He said that, over time, Altman had become more focused on profitability than on ensuring the safety of AI systems, a shift that disappointed him and that, in his view, underscores the need to balance innovation with responsibility.
“Altman’s focus on profits has raised concerns within the AI community, particularly around the risks associated with advanced AI systems,” Hinton explained, emphasizing the need for more research into AI safety.
AI Safety: A Growing Concern
Hinton, now 76, has been vocal about the dangers of AI systems that could potentially surpass human intelligence. He expressed concerns that it is becoming increasingly difficult to predict how AI models, with their vast number of parameters, reach their conclusions. These “black box” systems pose significant challenges for ensuring that humanity remains in control.
“When AI systems become more intelligent than humans, there is uncertainty over whether we will be able to control them,” Hinton warned. He committed to focusing his future efforts on AI safety rather than pushing the boundaries of AI research.
California’s AI Safety Bill Vetoed

Hinton framed the ouster as an important moment for the AI industry, and his concerns align with ongoing debates about AI safety. California’s state legislature recently passed a pioneering AI safety bill, but it faced opposition from influential Silicon Valley figures, including venture capitalist Marc Andreessen, and was ultimately vetoed by Governor Gavin Newsom.
Hinton acknowledged that while the risks posed by advanced AI are still being explored, there is no clear solution at present. “More research is urgently needed to mitigate the risks posed by AI systems,” Hinton concluded.
Hinton’s Nobel Prize recognizes his monumental contributions to the field of AI. However, his sharp criticism of Altman and the current direction of AI development underscores the growing concern about balancing technological advancement with ethical responsibility.