In recent years, artificial intelligence has grown at an impressive pace, and it continues to do so. Artificial vision, touch, and even smell have helped robots edge closer to human capabilities. Now researchers are expanding their goals, moving from human perception to superhuman perception. The latest push up this ladder comes from Fadel Adib, an associate professor at MIT, and his team.
RF Grasp
The robot, called RF Grasp, is the culmination of their efforts. It can sense occluded objects with the help of radio waves, which can penetrate walls. Traditional computer vision, skillfully blended with this powerful sensing, equips RF Grasp with the ability to locate and, quite literally, ‘grasp’ items that are blocked from view. With this advance, picking a screwdriver out of a muddled toolkit would no longer be an irritating job. In other words, it won’t be a job at all.
With the help of RF Grasp, warehouse work, a primarily human domain that involves potentially dangerous tasks, can now be accomplished with ease and convenience. The earlier roadblock that kept robots out of this domain was their limited perception and picking ability. Robots that rely solely on optical vision cannot perceive items that are hidden or blocked from sight by a solid object, because light waves, unlike radio waves, cannot pass through walls.
This is why radio frequency (RF) identification has proved to be a perfect solution. An RF system consists of a reader and a tag. The tag is a tiny computer chip attached to the trackable item. The reader emits an RF signal, which the tag modulates and reflects back to the reader.
This reflected signal carries details about the location and identity of the tagged object. The technology has already gained the spotlight in retail supply chains, particularly in Japan. This popularity is what led researchers to fuse RF with AI, equipping robots with a different and effective mode of perception.
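To make the reader-tag interaction concrete, here is a minimal sketch in Python, not drawn from the paper, of how a reader that measures its distance to a tag from several antenna positions could recover the tag's location by least-squares multilateration. The antenna layout, noise level, and SciPy-based solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Known positions of the reader's antennas in meters (hypothetical layout).
antennas = np.array([
    [0.0, 0.0, 1.0],
    [2.0, 0.0, 1.0],
    [0.0, 2.0, 1.0],
    [2.0, 2.0, 1.5],
])

def residuals(tag_pos, antennas, measured_dists):
    """Difference between predicted and measured tag-to-antenna distances."""
    return np.linalg.norm(antennas - tag_pos, axis=1) - measured_dists

def locate_tag(antennas, measured_dists, initial_guess=(1.0, 1.0, 0.5)):
    """Estimate the tag position that best explains the distance measurements."""
    result = least_squares(residuals, initial_guess, args=(antennas, measured_dists))
    return result.x

# Example: simulate slightly noisy distance readings to a tag at a known spot.
true_tag = np.array([1.2, 0.8, 0.3])
measured = np.linalg.norm(antennas - true_tag, axis=1) + np.random.normal(0.0, 0.02, 4)
print("Estimated tag position:", locate_tag(antennas, measured))
```

In practice a real reader infers distance from signal properties such as round-trip time or phase, but the recovery of a position from several such measurements follows the same idea.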
How does RF Grasp work?
RF Grasp includes a camera and an RF reader, which together help it locate tagged objects, as well as a robotic arm attached to a grasping hand. The camera is mounted on the robot's wrist. The RF reader, which is independent of the robot, feeds tracking information to the robot's control algorithm. This means the robot is simultaneously receiving visual images and RF tracking data, which must then be integrated into its decision-making. Initially, this posed a challenge to the researchers.
The robot is left with the responsibility of deciding how to divide its attention effectively between the two streams. The problem is further complicated because it demands RF-eye-hand coordination, not just the simpler hand-eye coordination.
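One way to picture this division of attention, offered here purely as an illustrative assumption rather than the authors' method, is to weight each stream by how much it can be trusted at a given moment: the coarse but occlusion-proof RF estimate and the precise but line-of-sight visual estimate can be blended according to their uncertainties.

```python
# Illustrative fusion sketch; not the authors' method.
import numpy as np

def fuse_estimates(rf_pos, rf_var, vision_pos, vision_var):
    """Inverse-variance weighted blend of two 3-D position estimates.

    rf_pos, vision_pos : arrays of shape (3,), positions in meters.
    rf_var, vision_var : scalar uncertainties (variances) of each estimate.
    """
    w_rf, w_vision = 1.0 / rf_var, 1.0 / vision_var
    return (w_rf * np.asarray(rf_pos) + w_vision * np.asarray(vision_pos)) / (w_rf + w_vision)

# Far from the object, the camera may not see it at all, so RF dominates; once the
# object comes into view, vision's much lower variance lets it take over.
rf_estimate = np.array([1.20, 0.85, 0.30])       # coarse through-occlusion estimate
vision_estimate = np.array([1.25, 0.80, 0.28])   # fine estimate from the wrist camera
print(fuse_estimates(rf_estimate, 0.05, vision_estimate, 0.005))
```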
Here is how the robot begins its seek-and-pluck process, in the words of Professor Adib:
“It starts by using RF to focus the attention of vision. Then you use vision to navigate fine maneuvers.”
This is much like how a human responds on hearing a siren: hear, alert, turn, observe.
After spotting the targeted object, RF Grasp moves closer to it. At that point, vision, which provides much finer detail than RF, dominates the decision-making.
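In code, the coarse-to-fine handoff might look something like the sketch below. The robot, reader, and camera interfaces are hypothetical stand-ins and the handoff distance is an assumed threshold; the point is only to show RF guiding the approach and vision taking over for the final grasp.

```python
# Hypothetical interfaces; a sketch of the coarse-to-fine strategy, not the paper's code.
HANDOFF_DISTANCE = 0.30  # meters; assumed threshold for switching from RF to vision

def seek_and_grasp(robot, rf_reader, camera, tag_id):
    """Coarse-to-fine loop: RF focuses attention, vision handles fine maneuvers."""
    while True:
        rf_pos = rf_reader.locate(tag_id)  # coarse estimate, works through occlusions
        if robot.distance_to(rf_pos) > HANDOFF_DISTANCE:
            # Phase 1: RF focuses attention -- steer the gripper toward the tag.
            robot.move_toward(rf_pos)
        else:
            # Phase 2: the wrist camera takes over for fine maneuvers.
            detection = camera.detect_target(tag_id)
            if detection is None:
                # Target still hidden: clear the nearest obstacle and try again.
                robot.remove_obstacle(camera.nearest_obstacle())
                continue
            robot.grasp(detection.pose)
            return
```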
Compared with a similar robot lacking an RF reader, RF Grasp proved more effective at spotting and grasping the target object with fewer movements. RF Grasp also showed the ability to ‘declutter’: it can easily remove obstacles in its way. This is thanks to its penetrative RF sensing, which gives it guidance that other systems lack.
Research Team
Tara Boroushaki (research assistant, Signal Kinetics Group, MIT Media Lab) is the lead author of the paper. Co-authors include Adib (director, Signal Kinetics Group) and Alberto Rodriguez (Class of 1957 Associate Professor, Department of Mechanical Engineering). Junshan Leng (research engineer, Harvard University) and Ian Clester (Ph.D. student, Georgia Tech) are also part of the team.
The research will be presented at the IEEE International Conference on Robotics and Automation in May.