In a recent statement, Eric Schmidt, the former head of Alphabet Inc's Google, compared artificial intelligence to nuclear weapons, describing it as just as dangerous. Schmidt addressed the matter a few days ago during a talk at the Aspen Security Forum.
At the forum, Schmidt said he had been rather 'naive' about the effects of what the company was working on. He noted, however, that this information is extremely 'powerful,' and that the government and other organisations should put more pressure on tech companies to ensure these technologies stay consistent with everyone's 'values.'
He noted that the leverage these technologies hold is significant, and that negotiating an AI agreement would be a considerable challenge. To begin with, the technologists involved would need an adequate understanding of what the technology could lead to. And if someone intended to approach China for an AI agreement, the question remains who from the US government would work with them. He added that the authorities are not yet prepared for the negotiations such a situation would require.
The tweet from the Aspen Security Forum:
"We are not ready for the negotiations that we need." – @ericschmidt #AspenSecurity pic.twitter.com/As749t6ZyU
— Aspen Security Forum (@AspenSecurity) July 22, 2022
Further, he recalled how, in the 1950s and 1960s, countries arrived at a 'no surprise' rule for nuclear tests, which were subsequently banned. He went on to express concern about the current attitude of the US towards China, with each side assuming the other to be corrupt or the like, giving rise to mutual suspicion. People clearly think, he said, that one side preparing for something could affect the other.
The potential dangers of AI have been raised on several occasions, sometimes with considerable emphasis. Tesla CEO Elon Musk, notably, has frequently spoken of the serious risk artificial intelligence poses to humans. Moreover, Google recently fired a software engineer over an AI-related issue; he reportedly claimed that the search giant's AI had become self-aware and sentient.
On the other hand, several experts have tried to draw attention to the fact that AI reflects the training it receives and the way humans use it. Clearly, if these systems are made to learn from data that is problematic, racist, or sexist, their output will reflect this.