Artificial intelligence has opened up a wide range of possibilities that supposedly promise to lift human life a step above the ordinary. The cognitive revolution reshaped human lives to fit a predefined rubric of ‘civilization.’ Artificial intelligence may now be poised to kick-start another revolution that takes the world by storm. Or is it?
Though artificial intelligence has managed to turn a good deal of science fiction into reality, a wave of questions still stands, threatening to knock down the monument of progress it has built. Hypothetically, a day may come when artificial intelligence emulates the human brain down to the last detail. Research is under way to endow AI with human-level intelligence and reflexes, so that a machine could walk into a room and decide, without instruction, to relax on the couch or pick up a book. At present, however, what AI does is turn input into output. Scientists have not given up hope. One of the potential path-breakers toward a more natural form of intelligence is AGI, or Artificial General Intelligence: in a nutshell, the development of artificial neural networks capable of mimicking our brains.
As promising as AGI might seem on the surface, it still raises a chain of questions, the first and foremost being safety. Mimicking the human brain is an awe-inspiring concept, but we should not forget that human brains are prone to faults and flaws. If a machine emulates them to the last detail, will it also absorb the flaws of the human mind? Could a hacker crack into an AI the way a hypnotist cracks into human consciousness?
AGI may not mean the Terminator knocking at our doors, but it does carry the potential roadblock of a flawed machine that falls short on security. And though hypnotism remains a vague landscape of speculation-laden research, machine bias cannot be ruled out. In fact, machine bias is one of the burning challenges facing artificial intelligence, and to an extent it is inevitable, given that machines are fed human-generated data that carries human flaws. GPT-3 producing biased associations about Muslims is one documented example of this.
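To make that mechanism concrete, here is a minimal, self-contained Python sketch. The sentences and word lists are entirely fabricated for the demonstration and have nothing to do with GPT-3's actual training data; the point is only to show how a system that mirrors word co-occurrence statistics in human-written text ends up reproducing whatever skew that text already contains.

```python
# Toy illustration only: fabricated sentences, not GPT-3 or its training data.
# A "model" that merely mirrors word co-occurrence statistics in human-written
# text will reproduce any skew that text already contains.

# Imagine a scraped corpus in which one group happens to co-occur with
# negative words more often, purely because of how humans wrote about it.
corpus = [
    "group_a person praised for heroic rescue",
    "group_a volunteer wins community award",
    "group_b suspect linked to violent incident",
    "group_b man blamed for violent attack",
    "group_b leader accused in violent plot",
]

NEGATIVE = {"suspect", "violent", "blamed", "accused", "attack", "plot"}

def negativity(group: str) -> float:
    """Fraction of negative words among tokens co-occurring with `group`."""
    tokens = [w for line in corpus if group in line
              for w in line.split() if w != group]
    return sum(w in NEGATIVE for w in tokens) / len(tokens)

for group in ("group_a", "group_b"):
    print(group, round(negativity(group), 2))
# group_a 0.0
# group_b 0.53 -> the statistics, and hence the "model", inherit the skew
# of the text they were built from.
```

The numbers themselves mean nothing; what matters is that the skew flows straight from the data into the model's behaviour, without any malicious design on the part of its builders.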
In short, the faster researchers race to create artificial intelligence with human attributes, the more it is exposed to threats that can have a highly negative impact. An AI-powered system that listens, for instance, can be manipulated through carefully crafted audio; the same goes for one that ‘sees’ or ‘comprehends’, as the sketch below illustrates. With every plus comes an inevitable downside that asks us to think twice, because the argument for progress won't hold once a potentially ‘hypnotized’ machine breaks into a security system and wreaks havoc. And as the saying goes, prevention is always better than cure.
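For readers curious about how such manipulation works in practice, the following NumPy sketch uses a toy logistic regression on synthetic data as a stand-in for a real vision or audio model. It illustrates the fast-gradient-sign idea: a small, deliberately chosen nudge to every input feature flips a decision the model was getting right.

```python
# Minimal sketch on toy data (not a real audio or vision model) of how a
# system that "sees" can be manipulated: a small, targeted change to every
# input feature flips the decision of a classifier that handled the clean
# input correctly. Real perception models are attacked on the same principle.
import numpy as np

rng = np.random.default_rng(0)
d = 1000                                           # many weakly informative features

# Two classes whose means differ only slightly per feature.
X = np.vstack([rng.normal(-0.1, 1, (200, d)), rng.normal(+0.1, 1, (200, d))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train a plain logistic regression with gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def score(x):
    """Probability the model assigns to class 1."""
    return 1 / (1 + np.exp(-(x @ w + b)))

x = X[0]                                           # a class-0 input, classified correctly
print("clean score:", round(score(x), 3))          # well below 0.5

# Adversarial nudge: move each feature a small step (0.3, against a natural
# feature spread of 1.0) in the direction that most increases the model's
# error. For this model that direction is simply the sign of the weights.
eps = 0.3
x_adv = x + eps * np.sign(w)
print("perturbed score:", round(score(x_adv), 3))  # pushed above 0.5: label flips
```

Nothing here depends on the toy setup: the same principle, many tiny targeted changes adding up to a large change in the model's output, is what makes real systems that listen or see vulnerable to manipulation.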