Deep Learning for Robots
Enables Grasping and Moving Objects Easily

A lot has changed, and is still changing, because of the COVID-19 pandemic lockdown. There has been a sharp shift in the way things were done and the way they are done now. Many practices introduced at the beginning of the lockdown have since become the norm, while things that used to be the norm are becoming outdated.

Video Credits: Ichnowski et al., Sci. Robot. 5, eabd7710 (2020)

With lockdowns and other safety measures such as isolation in place due to COVID-19, online shopping has become more popular than ever. However, this rising demand is becoming increasingly difficult for retailers to fulfill, especially while ensuring the safety of the workers on the job.

To address this issue, researchers at the University of California, Berkeley have developed new artificial intelligence software. This AI software gives robots the skill and the speed to grasp and move objects smoothly, making them capable of assisting humans in warehouses soon.

Image Credits: Ichnowski et al., Sci. Robot. 5, eabd7710 (2020)

Ken Goldberg, senior author of the study and William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley, said, “Warehouses are still operated primarily by humans, because it’s still very hard for robots to reliably grasp many different objects.”

He added, “In an automobile assembly line, the same motion is repeated over and over again, so that it can be automated. But in a warehouse, every order is different.”

The details of this technology are available in a paper published in the journal Science Robotics. Automating warehouses is tricky because many actions and decisions that come naturally to humans are quite difficult for robots: deciding how to pick up an object, for example, or coordinating the movements of the shoulders, wrists and arms to move each object from one location to another.

Robotic motion also tends to be jerky, which risks damaging both the product and the robot.

In their previous work, Goldberg and Jeffrey Ichnowski, a postdoctoral researcher at UC Berkeley, had created a Grasp-Optimized Motion Planner capable of computing both how a robot should pick up an object and how it should move to transfer that object from one location to another. But the motions generated by this planner were quite jerky.

Goldberg said, “Shopping for groceries, pharmaceuticals, clothing and many other things has changed as a result of COVID-19, and people are probably going to continue shopping this way even after the pandemic is over. This is an exciting new opportunity for robots to support human workers.”

In the new study, Ichnowski and Goldberg, along with UC Berkeley graduate student Yahav Avigal and undergraduate student Vishal Satish, sped up the computing time of the motion planner with the help of a deep learning neural network.

Ichnowski said, “The neural network takes only a few milliseconds to compute an approximate motion. It’s very fast, but it’s inaccurate. However, if we then feed that approximation into the motion planner, the motion planner only needs a few iterations to compute the final motion.”
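The idea Ichnowski describes is warm-starting: a learned model supplies a rough trajectory, and an iterative optimizer only has to polish it rather than solve from scratch. The sketch below illustrates that pattern in miniature; the function names are hypothetical, the "neural network" is stood in for by a noisy straight-line guess, and the refinement step is a simple gradient-descent smoother, not the authors' actual GOMP implementation.

```python
import numpy as np

def refine(trajectory, start, goal, iterations, step=0.4):
    """Polish a joint-space trajectory: take gradient steps on a
    sum-of-squared-differences smoothness cost while pinning the endpoints."""
    traj = trajectory.copy()
    traj[0], traj[-1] = start, goal
    for _ in range(iterations):
        # (Half-)gradient of sum ||x[i+1] - x[i]||^2 w.r.t. interior waypoints.
        grad = 2 * traj[1:-1] - traj[:-2] - traj[2:]
        traj[1:-1] -= step * grad
    return traj

# Stand-in for the neural network's fast-but-inaccurate approximation:
# a straight line in joint space between start and goal, plus noise.
rng = np.random.default_rng(0)
start, goal = np.zeros(6), np.ones(6)            # 6 joint angles
coarse_guess = np.linspace(start, goal, 20)
coarse_guess += rng.normal(0.0, 0.05, coarse_guess.shape)

# Because the initial guess is already close, a handful of iterations
# suffices -- the warm start is what buys the speedup.
final_motion = refine(coarse_guess, start, goal, iterations=10)
```

The same division of labor applies at full scale: the network amortizes the expensive search over many training examples, and the optimizer guarantees the endpoints and constraints are actually met.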
