Have you ever been on the edge of your seat during a fight? The answer is obviously yes. Have you ever been on the edge of your seat during a fight where you cannot see the opponents? The answer may not be a ‘yes’ until you have seen Person of Interest.
It started out as a simple crime thriller in which an artificial intelligence helps a select group of people stop crimes before they actually happen. (Not so simple, though!) It has since transformed into a political thriller with the introduction of a rival AI whose primary aim is to control the world.
Now it has become a grudge match between two AIs. So what is artificial intelligence, and is all of this a real possibility? Let us contemplate that for a moment.
‘Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”. As machines become increasingly capable, capabilities once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of “artificial intelligence”, having become a routine technology. Capabilities still classified as AI include advanced chess and Go systems and self-driving cars.’
Artificial intelligence originally aspired to replace doctors. Researchers imagined robots that could ask you questions, run the answers through an algorithm that would learn with experience, and tell whether you had the flu or a cold. However, those promises largely went unfulfilled, as artificial intelligence algorithms were too rudimentary to perform those functions.
Particularly tricky was the variability between people, which caused basic machine learning algorithms to miss the patterns. Eventually, though, a subset of AI called deep learning became sensitive enough to recognize speech from voice data. Although deep learning algorithms required loads of training data, they could eventually learn to recognize words regardless of accents and other differences in speech patterns.
After recognizing speech, technologists applied deep learning to recognizing objects in image data — which remains its primary application today. For instance, driverless cars largely depend on deep learning to identify and navigate around people to safely get their occupants home. In the health field, a number of companies — including San Francisco-based startup Enlitic — are applying deep learning to recognize suspicious masses on radiological scans that are likely cancerous. Many of these image recognition tools are already used in hospitals.
Now, before we go further: what is this deep learning?
Deep learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.
Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc.
Some representations are better than others at simplifying the learning task (e.g., face recognition or facial expression recognition). One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
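The idea of “multiple processing layers, composed of multiple linear and non-linear transformations” can be made concrete with a few lines of code. The sketch below (illustrative only — layer sizes and weights are arbitrary, not from any real model) passes a raw observation through a stack of linear maps, each followed by a non-linearity, so that each layer produces a progressively more abstract representation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear transformation applied after each linear layer
    return np.maximum(0, x)

# A toy "deep" stack: each layer is a linear map followed by a non-linearity.
# The layer sizes are arbitrary, illustrative choices.
sizes = [64, 32, 16, 8]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, weights):
    # Each layer transforms the previous representation into a new one
    for w in weights:
        x = relu(x @ w)
    return x

x = rng.standard_normal(64)   # raw observation, e.g. pixel intensities
h = forward(x, weights)       # high-level representation
print(h.shape)                # (8,)
```

In a real deep learning system the weights would be learned from data rather than random, but the structure — repeated linear-plus-nonlinear transformations — is the same.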
How are these approaches different from each other?
Classic AI Approach
The earliest approaches to AI were computer programs designed to solve problems that human brains performed easily, such as understanding text or recognizing objects in an image. Results of this work were disappointing and progress was slow. For many problems, researchers concluded that a computer had to have access to large amounts of knowledge in order to be “smart”.
Thus they introduced “expert systems”: computer programs combined with rules provided by domain experts to solve problems, such as medical diagnoses, by asking a series of questions. If the disease was not properly diagnosed, the expert added questions and rules to narrow the diagnosis. A Classic AI system is highly tuned for a specific problem.
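A medical expert system of this kind can be sketched in miniature. The rules and symptom names below are hypothetical, made up purely to illustrate the mechanism — an expert encodes conditions, the program fires the first rule that matches, and unrecognized cases prompt the expert to add more rules:

```python
# Hypothetical rules a domain expert might encode; illustrative only.
# Each rule pairs a set of required symptoms with a diagnosis.
RULES = [
    ({"fever", "cough", "body_aches"}, "flu"),
    ({"sneezing", "runny_nose"}, "cold"),
]

def diagnose(symptoms):
    # Fire the first rule whose conditions are all present.
    for conditions, disease in RULES:
        if conditions <= symptoms:   # subset test: all conditions satisfied?
            return disease
    return "unknown"   # here the expert would add more questions/rules

print(diagnose({"fever", "cough", "body_aches"}))  # flu
print(diagnose({"headache"}))                      # unknown
```

Note that the system has no ability to learn: every improvement requires a human expert to hand-edit the rule list, which is exactly the limitation described below.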
IBM’s Watson could be viewed as a modern version of a Classic AI system. It focuses on creating a sophisticated knowledge base on a particular issue. Although Watson doesn’t rely on encoded rules, it requires the close involvement of domain experts to provide data and evaluate its performance.
Classic AI has solved some clearly defined problems but is limited by its inability to learn on its own and by the need to create specific solutions to individual problems.
In this regard, in spite of it being called artificial intelligence, it has very little in common with general human intelligence.
Simple Neural Network Approach
Some early researchers explored the idea of neuron models for artificial intelligence. When the limits of Classic AI became clear, this notion picked up steam and, with the addition of backpropagation techniques, started proving useful. The resulting technology, artificial neural networks (ANNs), was created over 50 years ago, when very little was known about how real neurons worked.
Since then, neuroscientists have learned a great deal about neural anatomy and physiology, but the basic design of ANNs has changed very little. Therefore, despite the name neural networks, the design of ANNs has little in common with real neurons. Instead, the emphasis of ANNs moved from biological realism to the desire to learn from data without human supervision.
Consequently, the big advantage of Simple Neural Networks over Classic AI is that they learn from data and don’t require an expert to provide rules. Today ANNs are part of a broader category called “machine learning” which includes other mathematical and statistical techniques. Machine learning techniques, including ANNs, look at large bodies of data, extract statistics, and classify the results.
ANNs have recently evolved into Deep Learning networks, whose advances have been enabled by access to fast computers and vast amounts of data for training. Deep Learning has successfully addressed many problems such as image classification, language translation and identifying spam in email.
Although Simple Neural Network systems can solve many problems that were not solvable using Classic AI, they have limitations. For example, they don’t work well when there is limited data for training, and they don’t handle problems where the patterns in the data are constantly changing.
Essentially, the Simple Neural Network approach is a sophisticated mathematical technique that finds patterns in large, static data sets.
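To make “learning from data without expert rules” concrete, here is a minimal two-layer network trained with backpropagation on the classic XOR problem — a pattern no single rule of the expert-system kind captures, but which a tiny network discovers from examples alone. This is a bare-bones sketch (sigmoid units, full-batch gradient descent, arbitrary hyperparameters), not a production training loop:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR truth table: the target pattern to learn from data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

w1 = rng.standard_normal((2, 4))   # input -> hidden weights
w2 = rng.standard_normal((4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    out = sigmoid(sigmoid(X @ w1) @ w2)
    return float(((out - y) ** 2).mean())

initial_loss = loss()
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ w1)
    out = sigmoid(h @ w2)
    # Backpropagation: push the error gradient back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ w2.T) * h * (1 - h)
    w2 -= 0.5 * h.T @ d_out
    w1 -= 0.5 * X.T @ d_h
final_loss = loss()

print(f"MSE {initial_loss:.3f} -> {final_loss:.3f}")
```

No rule was ever written down; the network found the pattern purely by adjusting weights to reduce its error — and, as noted above, it needed all four training examples and many passes over them to do so.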
There is a deeper and more important issue beyond the current limitations of Classic AI and of Simple Neural Networks. In our view, neither of these approaches is on a path to achieve true machine intelligence; they don’t provide a roadmap to get there, which brings us to the third approach.
Biological Neural Network Approach
Everyone agrees that the human brain is an intelligent system; in fact it is the only system everyone agrees is intelligent. We believe that by studying how the brain works we can learn what intelligence is and what properties of the brain are essential for any intelligent system. For example we know the brain represents information using sparse distributed representations (SDRs), which are essential for semantic generalization and creativity. We are confident that all truly intelligent machines will be based on SDRs.
SDRs are not something that can be added to existing machine learning techniques; they are more like a foundation upon which everything else depends. Other essential attributes include that memory is primarily a sequence of patterns, that behaviour is an essential part of all learning, and that learning must be continuous. In addition, we now know that biological neurons are far more sophisticated than the simple neurons used in the Simple Neural Network approach — and the differences matter.
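The key property of an SDR — a long binary vector with only a few active bits, where shared active bits indicate shared meaning — is easy to demonstrate. The sizes below (2048 bits, 40 active, roughly 2% sparsity) are illustrative choices, not prescribed values:

```python
import numpy as np

rng = np.random.default_rng(42)

N, ACTIVE = 2048, 40   # illustrative: ~2% of bits active

def random_sdr(rng):
    # A sparse distributed representation: a few active bits out of many
    sdr = np.zeros(N, dtype=bool)
    sdr[rng.choice(N, size=ACTIVE, replace=False)] = True
    return sdr

a = random_sdr(rng)
b = random_sdr(rng)

# Similarity between SDRs is measured by the overlap of active bits;
# two unrelated random SDRs overlap almost nowhere, so a chance match
# between meaningful representations is vanishingly unlikely.
overlap = int((a & b).sum())
print(overlap, "bits overlap out of", ACTIVE)
```

This robustness to noise and capacity for expressing shared meaning through overlapping bits is what makes SDRs attractive as a representational foundation.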
We believe you can’t get to machine intelligence by incrementally building upon the simple neuron approach, but instead must throw it away and start over with a more realistic biological approach.
From simple calculators, computers have come a very long way; they now control almost everything we do. It sounds like Person of Interest could very well be a reality, and it’s just that we have not been told about it. I am not a conspiracy theorist, but such TV series leave us asking “What if?” Which side will you be on when everything comes out into the open?