28 July, 2018
“Theory of mind is clearly a crucial ability” for navigating a world full of other minds, says Alison Gopnik, a developmental psychologist at the University of California, Berkeley, who was not involved in the work. By about the age of 4, human children understand that the beliefs of another person may diverge from reality, and that those beliefs can be used to predict the person’s future behavior. Some of today’s computers can label facial expressions such as “happy” or “angry”—a skill associated with theory of mind—but they have little understanding of human emotions or what motivates us.
The new project began as an attempt to get humans to understand computers. Many algorithms used by AI aren’t fully written by programmers, but instead rely on the machine “learning” as it sequentially tackles problems. The resulting computer-generated solutions are often black boxes, with algorithms too complex for human insight to penetrate. So Neil Rabinowitz, a research scientist at DeepMind in London, and colleagues created a theory of mind AI called “ToMnet” and had it observe other AIs to see what it could learn about how they work.
ToMnet comprises three neural networks, each made of small computing elements and connections that learn from experience, loosely resembling the human brain. The first network learns the tendencies of other AIs based on their past actions. The second forms an understanding of their current “beliefs.” And the third takes the output from the other two networks and, given the current state of the world, predicts the observed AI’s next moves.
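As a rough illustration, a minimal sketch of that three-part structure in PyTorch might look like the following. Every layer size, name, and input shape here is a hypothetical stand-in, not DeepMind’s code; the published model uses more elaborate encoders than these toy layers.

```python
import torch
import torch.nn as nn

class ToMnetSketch(nn.Module):
    """Illustrative three-network structure loosely following the ToMnet idea.
    All sizes and names are hypothetical stand-ins for the published model."""

    def __init__(self, obs_dim=64, embed_dim=8, num_actions=5):
        super().__init__()
        # 1) Character net: summarizes an agent's PAST behavior
        #    into a "character" embedding.
        self.character_net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, embed_dim))
        # 2) Mental state net: encodes the CURRENT episode so far
        #    into a "mental state" (belief) embedding.
        self.mental_net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, embed_dim))
        # 3) Prediction net: combines both embeddings with the current
        #    world state to predict the agent's next action.
        self.prediction_net = nn.Sequential(
            nn.Linear(obs_dim + 2 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions))

    def forward(self, past_steps, current_steps, current_state):
        e_char = self.character_net(past_steps).mean(dim=0)    # avg over past steps
        e_mental = self.mental_net(current_steps).mean(dim=0)  # avg over this episode
        combined = torch.cat([current_state, e_char, e_mental])
        return self.prediction_net(combined)  # logits over the agent's next actions

# Usage with dummy data: 10 past steps, 4 steps of the current episode.
net = ToMnetSketch()
logits = net(torch.randn(10, 64), torch.randn(4, 64), torch.randn(64))
```

The key design point survives even in this caricature: the observer never sees the other agent’s internal code, only its behavior, and must compress that behavior into embeddings that make its next move predictable.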
The AIs under study were simple characters moving around a virtual room collecting colored boxes for points. ToMnet watched the room from above. In one test, there were three “species” of character: One couldn’t see the surrounding room, one couldn’t remember its recent steps, and one could both see and remember. The blind characters tended to follow along walls, the amnesiacs moved to whatever object was closest, and the third species formed subgoals, strategically grabbing objects in a specific order to earn more points. After some training, ToMnet could not only identify a character’s species after just a few steps, but it could also correctly predict its future behavior, researchers reported this month at the International Conference on Machine Learning in Stockholm.
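To make those behavioral differences concrete, the three species could be caricatured as toy policies like the ones below. These are illustrative stand-ins, not the agents DeepMind actually trained: positions are (x, y) cells in a small walled grid, and `objects` maps cell positions to point values.

```python
def toward(pos, target):
    """One greedy step from pos toward target (Manhattan moves)."""
    (x, y), (tx, ty) = pos, target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return pos

def blind_step(pos, grid_size, heading):
    """Blind species: walk straight until a wall blocks the move, then turn,
    which traces out the wall-following paths described above."""
    x, y = pos
    dx, dy = heading
    nx, ny = x + dx, y + dy
    if 0 <= nx < grid_size and 0 <= ny < grid_size:
        return (nx, ny), heading
    return pos, (dy, -dx)  # blocked: rotate the heading 90 degrees

def amnesiac_step(pos, objects):
    """Memoryless species: always head for whichever object is closest now."""
    nearest = min(objects, key=lambda o: abs(o[0] - pos[0]) + abs(o[1] - pos[1]))
    return toward(pos, nearest)

def planner_step(pos, objects):
    """Sighted, remembering species: pursue objects by point value, a crude
    stand-in for the subgoal strategies in the paper."""
    best = max(objects, key=objects.get)
    return toward(pos, best)
```

From an overhead view, a few steps of any of these policies already betray which species is acting, which is the pattern-recognition problem ToMnet solves.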
A final test revealed that ToMnet could even understand when a character held a false belief, a crucial stage in the development of theory of mind in humans and other animals. In this test, one type of character was programmed to be nearsighted; when the computer altered the landscape beyond that character’s limited field of view halfway through the game, ToMnet accurately predicted that the nearsighted character would stick to its original path more often than better-sighted characters, who were more likely to adapt.
Gopnik notes that the kind of social competence computers are developing will improve not only cooperation with humans, but also, perhaps, deception. If a computer understands false beliefs, it may know how to induce them in people. Expect future pokerbots to master the art of bluffing.
(Image: Sciencemag.org)