One of the major arguments pitted against artificial intelligence, and one that often gives humans the advantage, is that for all its accuracy and efficiency, AI is, after all, a machine. The human touch, the emotional backdrop against which the subtleties of life are painted, is absent from artificial intelligence; the intelligence here is, in the end, ‘artificial’. Chances are that this argument might not hold much longer. With the advent of emotional intelligence, AI is set to scale that exclusively human mountain.
Researchers have already succeeded in creating computers capable of seeing the world around them, a capability that will underpin autonomous cars, planes and security systems in the future. Now, with emotional intelligence at the steering wheel, researchers are set to take AI to the next level, where it can not just recognize the objects in an image but also comprehend the feelings those images induce. According to Panos Achlioptas, a doctoral candidate in computer science at Stanford University, this will prove a remarkable step towards making artificial intelligence more human.
The key to achieving this goal is ArtEmis, a new dataset published as an arXiv preprint. Built on 81,000 WikiArt paintings, the dataset includes about 440,000 written responses from over 6,500 people, each explaining how a particular painting made them feel and why. Achlioptas and a team headed by Leonidas Guibas, a professor of engineering at Stanford, used these responses to train ‘neural speakers’, AI that responds in written words. The neural speakers let computers produce emotional responses to visual art and justify those emotions in language. In contrast to classical computer vision, which captures the literal content of an image, this application addresses its emotional content. ArtEmis works on any kind of art irrespective of subject matter, be it still life, portrait or abstract.
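To make the shape of the data concrete, here is a minimal sketch of what one ArtEmis-style record might look like. The field names are illustrative assumptions for clarity, not the dataset's actual schema.

```python
# Hypothetical ArtEmis-style annotation record (field names are assumptions,
# not the dataset's actual schema): a painting, an emotion label, and a
# free-text explanation of why the painting evokes that emotion.
annotation = {
    "painting": "wikiart/vincent-van-gogh/the-starry-night-1889",
    "emotion": "awe",
    "explanation": "The swirling sky makes the night feel vast and alive.",
}
```

Hundreds of thousands of such (image, emotion, explanation) triples give the neural speakers both a label to predict and language to imitate.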
How does the algorithm work?
The algorithm works by categorizing a work of art into one of eight emotional categories. It then explains, in written text, which element of the image justifies the chosen emotion; in effect, the computer can discern how a human might feel on seeing the image. Another attraction is that the algorithm does more than capture a broad emotional experience: it can dive deeper into emotional responses, deciphering the cocktail of emotions present in a particular painting. In this way, the tool accommodates the subjectivity that is an inevitable constituent of any human response.
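As a rough illustration of the two-stage idea, the sketch below pairs an off-the-shelf image encoder with an eight-way emotion head, assuming PyTorch and torchvision are available. This is not the authors' model; the eight labels follow the categories used in the ArtEmis paper, and a full ‘neural speaker’ would additionally attach a language decoder, trained on the written explanations, to generate the textual justification.

```python
# A minimal sketch of "classify a painting into eight emotions", assuming
# PyTorch and torchvision. Not the authors' model, illustration only.
import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = [
    "amusement", "awe", "contentment", "excitement",   # positive
    "anger", "disgust", "fear", "sadness",             # negative
]

class EmotionClassifier(nn.Module):
    def __init__(self, num_emotions: int = len(EMOTIONS)):
        super().__init__()
        backbone = models.resnet18(weights=None)   # generic image encoder
        backbone.fc = nn.Identity()                # drop the ImageNet head
        self.encoder = backbone
        self.head = nn.Linear(512, num_emotions)   # eight-way emotion logits

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(images))

model = EmotionClassifier()
fake_painting = torch.randn(1, 3, 224, 224)        # stand-in for a real image
probs = model(fake_painting).softmax(dim=-1)
print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```

Because the output is a probability distribution over all eight emotions rather than a single hard label, it naturally expresses the cocktail of emotions a painting can evoke.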
ArtEmis has the potential to become an effective tool that helps artists evaluate their work in the course of creation, telling them whether a piece is likely to have the desired impact on viewers. In the near future, Achlioptas expects emotion-based algorithms to add an element of emotional awareness to chatbots and conversational AI agents, lending them a human, personal touch and thereby enriching the experience.