The world of ones and zeroes keeps weaving potential worlds, ever-better reflections of our reality, and developments in the field of artificial intelligence spring up like mushrooms during rainfall. Recently, a new algorithm was developed that enables researchers to build soft robots better suited to gathering information from their surroundings.
‘There are some tasks that traditional robots, the rigid and metallic kind, simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.’
Source: MIT
The algorithm, designed by MIT researchers, enables engineers to build soft robots that can collect information from their surroundings. Based on deep learning, it improves the robot’s interaction with its environment with the aid of sensors placed within the robot’s body. Given how hard the sensor-placement problem is, this solution opens an exciting door for the field.
Compared with their rigid counterparts, however, soft robots have a major disadvantage. Where a rigid robot’s motion is limited to a small number of joints and degrees of freedom, a soft robot is effectively ‘infinitely dimensional’: it can deform continuously, which makes mapping the location of its body parts a challenge. Rigid robots keep the calculations manageable for control algorithms, so mapping and motion planning are comparatively easy. The case is different with soft robots: because of their ability to deform, they are far harder to track. To deal with this roadblock, earlier approaches relied on external cameras to chart the robot’s position and feed the acquired information to the control system. But that runs counter to the researchers’ aim of creating a soft robot free of external aid.
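To see what ‘infinitely dimensional’ means in practice, consider a toy comparison. The numbers below are illustrative and not taken from the research: a rigid planar arm is fully described by a few joint angles, while even a coarsely discretized soft body needs thousands of state variables.

```python
# Illustrative contrast (toy numbers, not from the paper): a rigid arm's
# configuration is a handful of joint angles, while a soft body's configuration
# is the position of every point on it -- in the continuum limit, infinitely many.
import numpy as np

# Rigid planar arm with 3 revolute joints: 3 numbers fully describe its pose,
# and forward kinematics gives the gripper position in closed form.
joint_angles = np.array([0.3, -0.7, 1.2])    # radians
link_lengths = np.array([1.0, 0.8, 0.5])     # meters
cumulative = np.cumsum(joint_angles)
tip = np.sum(link_lengths * np.stack([np.cos(cumulative), np.sin(cumulative)]), axis=1)
print("rigid arm state size:", joint_angles.size)   # -> 3
print("gripper position:", tip)

# Soft body discretized into a grid of particles: even a coarse 100 x 100 mesh
# needs 20,000 numbers (an x and y displacement per particle) to describe one
# deformed shape, and the true continuous body has no finite description at all.
mesh = np.zeros((100, 100, 2))
print("soft body state size:", mesh.size)            # -> 20000
```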
Covering the robot with infinitely many sensors would be neither logical nor practical. The key to the problem is to optimize, instrumenting only the parts that are most useful, and deep learning supplies the means. With its help, the researchers developed a novel neural architecture. They first divide the robot’s body into regions called ‘particles.’ The strain rate of each particle acts as input to a neural network, which learns through trial and error which movements complete a task most efficiently. By keeping an eye on how often each particle is used, the network also removes the less-used particles from subsequent trials. This keeps the process efficient: the most important particles are identified by the network, which then puts forth suggestions as to where to place the sensors.
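To make the idea concrete, here is a minimal, hypothetical sketch of this kind of pipeline in PyTorch. It is not the authors’ implementation: the toy strain-rate data, the learnable per-particle gate (used here as a crude stand-in for the ‘frequency of use’ signal described above), and the pruning schedule are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): learn which "particles" of a simulated
# soft body carry the most useful strain-rate signal, prune the rest, and
# suggest the survivors as sensor locations.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_PARTICLES = 64        # body regions ("particles")
N_INFORMATIVE = 5       # hidden ground truth: only these particles matter
N_SAMPLES = 2048
N_SENSORS = 5           # budget: how many sensors we can afford

# Toy data: strain rates per particle, plus a task signal (e.g. tip position)
# that really depends on only a few particles.
strain = torch.randn(N_SAMPLES, N_PARTICLES)
true_idx = torch.randperm(N_PARTICLES)[:N_INFORMATIVE]
w_true = torch.randn(N_INFORMATIVE, 1)
target = strain[:, true_idx] @ w_true + 0.05 * torch.randn(N_SAMPLES, 1)

class GatedRegressor(nn.Module):
    """MLP whose input is multiplied by a learnable per-particle gate.
    The gate magnitude serves as an importance score for each particle."""
    def __init__(self, n_particles):
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_particles))
        self.net = nn.Sequential(
            nn.Linear(n_particles, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x * self.gate)

model = GatedRegressor(N_PARTICLES)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
active = torch.ones(N_PARTICLES, dtype=torch.bool)   # particles still in play

for epoch in range(300):
    opt.zero_grad()
    pred = model(strain * active.float())             # pruned particles are zeroed out
    # L1 penalty on the gate encourages the network to rely on few particles.
    loss = loss_fn(pred, target) + 1e-2 * model.gate.abs().sum()
    loss.backward()
    opt.step()

    # Every 50 epochs, drop the least important active particles, loosely
    # mimicking the idea of removing rarely used regions between trials.
    if (epoch + 1) % 50 == 0 and active.sum() > N_SENSORS:
        importance = model.gate.detach().abs().clone()
        importance[~active] = float("inf")             # ignore already-pruned particles
        n_drop = min(8, int(active.sum()) - N_SENSORS)
        drop = torch.topk(importance, n_drop, largest=False).indices
        active[drop] = False

# Final suggestion: the highest-scoring particles that survived pruning.
final_score = model.gate.detach().abs()
final_score[~active] = -float("inf")
suggested = torch.topk(final_score, N_SENSORS).indices
print("suggested sensor particles:", sorted(suggested.tolist()))
print("ground-truth informative particles:", sorted(true_idx.tolist()))
```

The essential design choice illustrated here is the coupling of task learning and sparsification: the network is rewarded both for predicting the task signal and for relying on as few particles as possible, and whatever survives the pruning becomes the suggested sensor layout.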
The algorithm’s competence was demonstrated by pitting it against a set of expert predictions. A comparison between human-sensorized robots and algorithm-sensorized robots indicated that the algorithm was better at capturing certain subtleties than the human experts.
This advance should prove immensely useful in automating robot design.
This innovative research will be presented at the IEEE International Conference on Soft Robotics this April and will later be published in the journal IEEE Robotics and Automation Letters. Alexander Amini and Andrew Spielberg are the co-lead authors; both are PhD students at the MIT Computer Science and Artificial Intelligence Laboratory. MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus are the other co-authors.