8 August 2018
OpenAI, a non-profit co-founded by Elon Musk, has found a way to programme a robot hand so that it can nimbly manipulate an object using human-like movements it has taught itself.
“We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity,” said OpenAI of its Dactyl system, which is shown in a video twisting a block into 50 different requested orientations.
Dactyl works by training the robot hand in a simulation and then transferring the knowledge gained there to the real world.
Simulated learning is becoming widespread in AI, with Dactyl representing a milestone in how well a system trained in simulation can execute its task in reality.
The result is a robot hand that can complete many tasks efficiently, using a range of movements, without each task having to be individually programmed by a human.
“We’re working on teaching robots to solve a wide variety of tasks, without having to programme them for any one specific task,” said Alex Ray, a machine learning engineer at OpenAI.
“The system runs on a human-like robot hand and we used reinforcement learning and simulation to teach the robot how to solve tasks in the real world.”
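The broad idea, learning a behaviour entirely inside a simulator and then running the frozen result on the real system, can be pictured with a toy example. The one-dimensional block-orientation task, the linear controller and the cross-entropy search below are illustrative assumptions, not Dactyl's actual reinforcement-learning setup.

```python
# A minimal sketch of the sim-to-real idea: a policy is trained entirely in a
# simulator, then deployed, frozen, on a noisier stand-in for the real world.
# The task, controller and search method are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_episode(params, steps=50, noise=0.0):
    """Roll out a linear controller on a toy 1-D block-orientation task."""
    angle, target = rng.uniform(-np.pi, np.pi, size=2)
    total_reward = 0.0
    for _ in range(steps):
        error = target - angle
        action = params[0] * error + params[1]        # linear controller
        angle += np.clip(action, -0.2, 0.2) + noise * rng.normal()
        total_reward -= abs(target - angle)           # penalise remaining error
    return total_reward

# Train in simulation with a simple cross-entropy method.
mean, std = np.zeros(2), np.ones(2)
for _ in range(30):
    candidates = rng.normal(mean, std, size=(64, 2))
    scores = np.array([simulate_episode(c) for c in candidates])
    elite = candidates[scores.argsort()[-8:]]         # keep the best 8
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

# "Deploy" the frozen policy on a noisier environment standing in for reality.
print("real-world return:", simulate_episode(mean, noise=0.02))
```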
OpenAI says that Dactyl’s edge comes from an approach it calls “domain randomisation”. This means that rather than trying to make its simulation a completely accurate reflection of the robot’s reality — a common goal with simulations — it instead presented the robot with many realities, each slightly different.
Sometimes the angle of the hand would shift, for instance, or the block might be heavier. These realities were randomly served up to the AI, and it had to manipulate the block in every instance.
“Our learning algorithm sees all of these different worlds, and that lets it learn a way of manipulating the block that is very robust — robust enough so that eventually we can accomplish the same task in the real world,” said Ray.
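A rough sketch of what domain randomisation looks like in practice: each training episode samples a slightly different "world" with its own physical constants, so the learned behaviour cannot depend on any single one of them. The parameter names and ranges below are invented for illustration; OpenAI's system randomises many more properties of its simulation.

```python
# Domain randomisation, roughly: rather than one fixed simulator, every
# episode draws its own physics (block mass, actuator gain, sensor noise),
# forcing the policy to work across all of them.
import numpy as np

rng = np.random.default_rng(1)

def sample_world():
    """Sample one randomised 'reality' for an episode (illustrative values)."""
    return {
        "mass": rng.uniform(0.5, 1.5),           # block heavier or lighter
        "gain": rng.uniform(0.8, 1.2),           # actuator responds differently
        "sensor_noise": rng.uniform(0.0, 0.05),  # noisy angle estimate
    }

def run_episode(policy, world, steps=50):
    angle, target = rng.uniform(-np.pi, np.pi, size=2)
    total = 0.0
    for _ in range(steps):
        observed = angle + world["sensor_noise"] * rng.normal()
        action = policy(target - observed)
        angle += world["gain"] * np.clip(action, -0.2, 0.2) / world["mass"]
        total -= abs(target - angle)
    return total

# During training, every episode is played in a freshly sampled world, so the
# learned behaviour cannot rely on any single set of physical constants.
policy = lambda error: 0.5 * error
returns = [run_episode(policy, sample_world()) for _ in range(100)]
print("mean return across randomised worlds:", np.mean(returns))
```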
The system is robust enough to work in the real world even with imperfect knowledge of the block’s position. It is fed the coordinates of the robot hand’s fingertips and images from three cameras, but the cameras’ view of the block is sometimes blocked by the fingers themselves.
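A simplified sketch of how such an observation might be assembled: fingertip coordinates are combined with whatever block-pose estimates the unoccluded cameras can provide, falling back on the last known pose when every view is blocked. The fusion rule and array shapes here are assumptions for illustration, not OpenAI's published pipeline.

```python
# Illustrative observation fusion: fingertip coordinates plus pose estimates
# from three cameras, any of which may be occluded by the fingers.
import numpy as np

def fuse_observation(fingertips, camera_poses, last_pose):
    """Build one observation vector from fingertip coords and camera estimates.

    fingertips:   (5, 3) array of fingertip x, y, z positions
    camera_poses: list of 3 pose estimates (length-7 arrays) or None if occluded
    last_pose:    most recent fused pose, used when every camera is blocked
    """
    visible = [p for p in camera_poses if p is not None]
    if visible:
        block_pose = np.mean(visible, axis=0)   # average the unoccluded views
    else:
        block_pose = last_pose                  # all three views blocked
    return np.concatenate([fingertips.ravel(), block_pose]), block_pose

# Example: the third camera's view of the block is blocked by a finger.
fingertips = np.zeros((5, 3))
poses = [np.ones(7), np.ones(7) * 1.1, None]
obs, pose = fuse_observation(fingertips, poses, last_pose=np.ones(7))
print(obs.shape)   # (22,): 15 fingertip values + 7 pose values
```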
(Image: dezeen.com)