
AI Is Not Actually An Existential Threat According To Scientists

Artificial Intelligence

Artificial intelligence today tops the list of the most useful, and even smartest, technologies: it can significantly ease routine work and make life more convenient. Whether it is customized advertising and marketing, voice assistants, or self-driving cars, artificial intelligence is the backbone.

However, artificial intelligence, though extremely useful, is also regarded with a shadow of doubt and threat. Even brilliant minds like Stephen Hawking expressed concern about a possible intelligence explosion. Advances in artificial intelligence also raise the question of whether a day will come when machines replace humans in every field. However, researchers and scientists are not really convinced that artificial intelligence is a threat to humanity. Here are the reasons why, according to scientists, AI is not exactly an existential threat.

Defending or Defying Human Intelligence?

The artificial intelligence in use today mainly performs functions like facial recognition, internet recommendations, and customized services. This AI is referred to as narrow or weak AI because its capabilities are limited to specific domains. It cannot be denied that at these particular tasks, such systems can be exceptionally better than humans. However, this learning and skill cannot be applied to any task other than the one assigned to them.

In this discussion of the possible existential threat posed by artificial intelligence, Artificial General Intelligence (AGI) gains importance. Artificial General Intelligence would mimic human intelligence to the point of thinking like a human and applying that thinking to different situations. Many are convinced that once this happens, the threat will only accelerate.

However, experts disagree. Artificial General Intelligence is still highly theoretical, and the resources and knowledge needed to make it practical are still far away.

Can AGI be a threat?

Although there is no clarity on when AGI will become a reality, discussions continue about AGI being a possible threat. In contrast to present-day AI, which follows inputs and instructions to perform specific functions, AGI would learn from experience and data. This means we would not be able to predict its reactions or responses when it faces a new situation.

This lack of control over the machines is what raises doubts and questions.

However, according to Yingxu Wang (Professor of Software and Brain Sciences, University of Calgary),

“professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for safeguard users’ interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves.”

The AI we use today

As noted above, artificial intelligence is not inherently a threat or a blessing; rather, its impact is decided by how we put it to use. Artificial intelligence can become a dangerous threat if it falls into the hands of people whose objective is to hurt others. AI is an effective tool, and it will remain effective whether it is in the hands of a hero or a villain.

Another source of risk in artificial intelligence is bias in algorithms. One striking example is racial bias in facial recognition systems. The impact of such biases can do immense harm to social order and peace. These biases arise from the data used for training, which lacks a representative sample of the population. It is therefore crucial to pay attention to rectifying these errors, which can have far-reaching implications.
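The mechanism behind this kind of bias can be illustrated with a toy experiment. The sketch below (hypothetical data and groups, not drawn from any real system) trains a minimal one-threshold classifier on data dominated by one group, then measures accuracy separately for an underrepresented group whose feature distribution differs: the model serves the majority group well and the minority group noticeably worse.

```python
import random

random.seed(0)

def make_samples(n, pos_center, neg_center, noise=0.8):
    """Generate n positive and n negative 1-D samples around the given centers."""
    samples = []
    for _ in range(n):
        samples.append((random.gauss(pos_center, noise), 1))
        samples.append((random.gauss(neg_center, noise), 0))
    return samples

# Two hypothetical demographic groups whose features are distributed differently.
def group_a(n):
    return make_samples(n, pos_center=2.0, neg_center=-2.0)

def group_b(n):
    return make_samples(n, pos_center=0.5, neg_center=-3.0)

# Training set heavily dominated by group A: group B is barely represented.
train = group_a(500) + group_b(10)

# "Train" the simplest possible model: a threshold halfway between class means.
pos_mean = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
neg_mean = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
threshold = (pos_mean + neg_mean) / 2

def accuracy(samples):
    """Fraction of samples the learned threshold classifies correctly."""
    return sum((x > threshold) == bool(y) for x, y in samples) / len(samples)

acc_a = accuracy(group_a(1000))  # majority group: near-perfect
acc_b = accuracy(group_b(1000))  # underrepresented group: worse
print(f"group A accuracy: {acc_a:.2f}")
print(f"group B accuracy: {acc_b:.2f}")
```

Because the threshold is fitted almost entirely to group A's distribution, it sits in the wrong place for group B, whose positive examples overlap it. Real facial-recognition models are vastly more complex, but the failure mode is the same: training data that under-samples a group produces higher error rates for that group.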

To conclude, AI is extremely useful to humankind, though it can have negative impacts when used with malicious intent or in the wrong way. But the day when artificial intelligence overthrows humans is still not close.
