Video Credits: EPFL, YouTube
EPFL scientists have developed a new approach that can improve the control of robotic hands. It combines individual finger control with automation, and is expected to be especially helpful for amputees by improving grasping and manipulation functions.
The technique merges concepts from two different fields, neuroengineering and robotics. It has so far been successfully tested on three amputees and seven healthy subjects, and the results were published in Nature Machine Intelligence.
Aude Billard, who leads EPFL’s Learning Algorithms and Systems Laboratory, said, “When you hold an object in your hand, and it starts to slip, you only have a couple of milliseconds to react.” Explaining the process, she added, “The robotic hand has the ability to react within 400 milliseconds. Equipped with pressure sensors all along the fingers, it can react and stabilize the object before the brain can actually perceive that the object is slipping.”
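To get a sense of how such a reflex might work, here is a minimal Python sketch of a slip-watching loop. The sensor and motor functions are hypothetical stand-ins rather than EPFL's actual hardware interface, and the threshold and timing values are assumptions chosen only to illustrate the idea of reacting well inside a 400-millisecond window.

```python
import time

# A minimal sketch of a slip-stabilization loop, assuming fingertip pressure
# sensors sampled every 10 ms. The read and command functions below are
# hypothetical stand-ins, not the EPFL hardware API.

SLIP_DROP = 0.15       # assumed normalized pressure drop that signals a slip
LOOP_PERIOD = 0.01     # 10 ms control cycle, well under the 400 ms budget

def read_fingertip_pressures():
    # Placeholder: would return one normalized reading per finger.
    return [0.9, 0.95, 1.0, 0.85, 0.9]

def close_fingers_slightly(delta):
    # Placeholder: would send a small grip-tightening command to the motors.
    print(f"tightening grip by {delta:.2f}")

def watch_for_slip(max_cycles=40):
    """Monitor a 400 ms window (40 cycles) and tighten on sudden pressure loss."""
    previous = read_fingertip_pressures()
    for _ in range(max_cycles):
        time.sleep(LOOP_PERIOD)
        current = read_fingertip_pressures()
        drops = [p - c for p, c in zip(previous, current)]
        if max(drops) > SLIP_DROP:   # sudden pressure loss on any finger
            close_fingers_slightly(max(drops))
            return True
        previous = current
    return False

print(watch_for_slip())  # False here, since the stubbed readings never drop
```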
This is an important development: the two concepts had never been combined before for robotic hand control, and the contribution is considered vital for the emerging field of shared control in neuroprosthetics.
The robotic hand works as follows: the neuroengineering side deciphers the finger movement the user intends, reading it from muscular activity, and gives the prosthetic hand individual finger control. The robotics side then helps the hand reach the intended object and grasp it.
The first author of the publication, Katie Zhuang, said, “Because muscle signals can be noisy, we need a machine learning algorithm that extracts meaningful activity from those muscles and interprets them into movements.”
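As an illustration of the kind of preprocessing this implies, the short Python sketch below computes root-mean-square (RMS) features over sliding windows of raw EMG, a common first step for taming noisy muscle signals. The window size, step, and channel count are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def rms_features(emg, window=200, step=50):
    """emg: array of shape (n_samples, n_channels) of raw EMG.
    Returns one RMS value per channel for each sliding window."""
    feats = []
    for start in range(0, emg.shape[0] - window + 1, step):
        segment = emg[start:start + window]
        feats.append(np.sqrt(np.mean(segment ** 2, axis=0)))
    return np.asarray(feats)

# Example: 1 second of simulated 8-channel EMG sampled at 1 kHz.
raw = np.random.randn(1000, 8) * 0.1
features = rms_features(raw)
print(features.shape)  # (17, 8): 17 windows, 8 channels
```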
The algorithm developed for the robotic hand starts by decoding the user’s intentions, which are then translated into finger movements of the prosthetic hand. Because it relies on machine learning, the user must first perform a series of hand movements and tasks to train it.
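A minimal sketch of that training step might look like the following, assuming RMS-style features as inputs and recorded finger joint angles as targets. The choice of ridge regression is purely illustrative and may well differ from the decoder the researchers actually used; the data here is random placeholder data standing in for a real training session.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder training data: in practice these would be EMG features and
# the finger positions recorded while the user performs training tasks.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 8))   # 500 windows x 8 EMG channels
y_train = rng.standard_normal((500, 5))   # target angles for 5 fingers

# Fit a simple multi-output regressor mapping features to finger angles.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)

# At run time, each new feature window is decoded into finger commands.
new_window = rng.standard_normal((1, 8))
finger_angles = decoder.predict(new_window)[0]
print(finger_angles)
```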
In the next stage of development, the scientists extended the algorithm so that robotic automation takes over when the user tries to hold an object: the algorithm instructs the prosthetic hand to close its fingers once an object touches the sensors on the fingers.
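A simple sketch of that automation, assuming per-finger contact sensors, could look like the following. The function names, contact threshold, and step size are all hypothetical; the tiny simulation at the end just shows the loop driving every finger into contact.

```python
# Hypothetical contact-triggered closing: once any finger touches the object,
# the remaining fingers are closed in small steps until they also report contact.

def finger_in_contact(readings, i, threshold=0.2):
    return readings[i] > threshold

def auto_close(read_sensors, command_finger, n_fingers=5):
    readings = read_sensors()
    if not any(finger_in_contact(readings, i) for i in range(n_fingers)):
        return  # no contact yet; the user keeps direct control
    while not all(finger_in_contact(readings, i) for i in range(n_fingers)):
        for i in range(n_fingers):
            if not finger_in_contact(readings, i):
                command_finger(i, step=0.05)  # hypothetical motor command
        readings = read_sensors()

# Tiny simulation: sensor readings rise as the fingers close.
state = [0.3, 0.0, 0.1, 0.0, 0.25]

def read_sensors():
    return list(state)

def command_finger(i, step):
    state[i] = min(1.0, state[i] + step)

auto_close(read_sensors, command_finger)
print(state)  # every finger now reads above the contact threshold
```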
Silvestro Micera, EPFL’s Bertarelli Foundation Chair in Translational Neuroengineering and Professor of Bioelectronics at the Scuola Superiore Sant’Anna, said, “Our shared approach to control robotic hands could be used in several neuroprosthetic applications such as bionic hand prostheses and brain-to-machine interfaces, increasing the clinical impact and usability of these devices.”
The algorithm first learns which hand movements the user wants to perform, then uses that information to control the individual fingers of the prosthetic hand. The team still has many hurdles to overcome before the prosthetic hand can be made widely available; for now, the algorithm is being tested on a robot.