Elon Musk, CEO of Tesla and SpaceX, has long been vocal about his concerns regarding the impact of artificial intelligence (AI) on humanity.
In response to a tweet from AI software developer McKay Wrigley, Musk reiterated his warning that AI will hit society “like an asteroid”. Wrigley had expressed surprise that many people struggle to grasp the exponential growth of AI capabilities, pointing out that only a year ago the idea that we would now have GPT-4-level AI would have seemed ludicrous.
Musk agreed with the assessment and stated that he had been warning the public about the dangers of AI for years, well before GPT-1.
Musk also revealed that he used his only one-on-one meeting with former President Barack Obama to push for AI regulation. He said he did not promote Tesla or SpaceX during the February 2015 meeting, but instead urged Obama to take action on regulating AI.
Musk’s concerns about AI and his efforts to push for regulation are not new. He has warned that uncontrolled development of AI could lead to disastrous consequences for humanity. In fact, Musk has been actively involved in AI development himself.
He is reportedly working on an AI project at Twitter and is developing plans to launch an AI startup called X.AI to compete with OpenAI, the Microsoft-backed company behind generative AI tools such as the chatbot ChatGPT, the GPT-4 language model, and the image generator DALL-E 2.
Musk’s recent tweet about the importance of AI regulation comes in response to Senate Majority Leader Chuck Schumer’s announcement that he is laying the groundwork for Congress to regulate AI.
The Financial Times reports that Musk’s X.AI is expected to focus on the development of general-purpose AI, which can perform a wide range of tasks without being programmed for each one individually.
Musk Calls for AI Regulation
A while back, Elon Musk, alongside hundreds of tech experts including Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4. The letter warned of the risk of widespread misinformation and massive job automation.
Many fear that the power of AI systems could result in significant white-collar job losses in the long run. In a recent experiment, a Wharton professor tested the capability of AI tools on a business project and described the result as “superhuman.” Some remote workers are also allegedly leveraging productivity-enhancing AI tools to hold multiple full-time jobs without their employers noticing.
Elon Musk co-founded OpenAI in 2015 as a non-profit organization. However, he later left the organization after a power struggle with Sam Altman, now its CEO, over its control and direction.
On February 17, Elon Musk tweeted that OpenAI was created as an open-source nonprofit organization to counterbalance Google, but it has now become a closed-source, maximum-profit company that is effectively controlled by Microsoft.
Altman, OpenAI’s CEO, has himself frequently warned about the dangers of artificial intelligence. In an interview with ABC last month, he said that other AI developers working on tools similar to ChatGPT will not apply the same safety restrictions as his company, and that time is running out.
Musk has long believed that artificial intelligence needs oversight and has described it as “potentially more dangerous than nukes.” He told Tesla investors last month that a regulatory authority is required to oversee the development of AI and ensure that it operates in the public interest.
Elon Musk’s views on the impact of technological growth, particularly in artificial intelligence, have sparked considerable debate in the tech industry and beyond.
Overall, his warnings have raised awareness of the importance of responsible AI development and the need for careful consideration of its potential impact on society.