Helping Robots Coordinate with Machine Learning
A New Data-Driven Method Comes Into Play

Robots play a huge role in performing tasks that humans are physically incapable of doing. Many robots are specially designed to withstand extreme conditions and collect data from places humans cannot reach. Robots are also helpful for carrying out mundane tasks, as well as those that require the utmost precision.

Video Credits: Aerospace Robotics and Control at Caltech

Robots are now considered a major part of many sectors, as they carry out tasks efficiently and keep the workplace safer for workers. Many robots are also used in search-and-rescue missions, which require them to operate in a particular formation in order to capture and collect data.

The problem of multi-robot motion coordination is faced by the entire robotics sector, with applications ranging from search and rescue to controlling fleets of self-driving cars to formation flying through cluttered environments.

There are two major reasons multi-robot motion coordination is difficult to achieve. First, when robots move through new environments they must make split-second decisions about their trajectories, despite not having complete data about their future paths.

Second, the presence of a large number of robots in the same environment makes it difficult for them to interact and increases the risk of collisions.

To address this problem, engineers at Caltech have designed a new data-driven method that can control the movement of multiple robots through cluttered, unmapped spaces while preventing them from colliding with each other.

Soon-Jo Chung, Bren Professor of Aerospace; Yisong Yue, Professor of Computing and Mathematical Sciences; Caltech graduate students Benjamin Rivière and Guanya Shi; and postdoctoral scholar Wolfgang Hönig have come up with a multi-robot motion-planning algorithm named "Global-to-Local Safe Autonomy Synthesis," or GLAS.

This system can imitate a complete-information planner using only local information, and it is paired with "Neural-Swarm," a swarm-tracking controller augmented to learn the complex aerodynamic interactions of close-proximity flight.
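The core idea of a global-to-local approach is that each robot acts on an observation built only from what it can sense nearby, rather than a global map. As a rough illustration, here is a toy sketch of that setup; the observation encoding, the `sensing_radius` value, and the hand-written `local_policy` (which in GLAS would be a learned neural network trained to imitate a global planner) are all illustrative assumptions, not the actual GLAS implementation.

```python
import numpy as np

def local_observation(positions, goal, i, sensing_radius=2.0):
    """Build robot i's local observation: the vector to its own goal plus
    the relative positions of neighbors inside its sensing radius.
    (Illustrative only -- the real GLAS observation encoding differs.)"""
    rel_goal = goal - positions[i]
    neighbors = [positions[j] - positions[i]
                 for j in range(len(positions))
                 if j != i and np.linalg.norm(positions[j] - positions[i]) < sensing_radius]
    return rel_goal, neighbors

def local_policy(rel_goal, neighbors, repulsion=0.5):
    """Toy decentralized policy: head toward the goal while being pushed
    away from nearby neighbors (a stand-in for the learned network)."""
    action = rel_goal / (np.linalg.norm(rel_goal) + 1e-9)
    for d in neighbors:
        dist = np.linalg.norm(d)
        action -= repulsion * d / (dist**2 + 1e-9)
    return action
```

Because the policy's input contains only locally observable quantities, the same function can run independently on every robot in the swarm.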

Soon-Jo Chung said, "Our work shows some promising results to overcome the safety, robustness, and scalability issues of conventional black-box artificial intelligence (AI) approaches for swarm motion planning with GLAS and close-proximity control for multiple drones using Neural-Swarm."

With GLAS and Neural-Swarm, a robot does not need a complete, comprehensive picture of the environment it is moving through, or of the paths its fellow robots are about to take.

Yisong Yue says, “These projects demonstrate the potential of integrating modern machine-learning methods into multi-agent planning and control, and also reveal exciting new directions for machine-learning research.”

Once equipped with these systems, the robots can navigate an area on the fly, using a "learned model" to incorporate new information as they move. This also allows computation to be decentralised, since each robot in the swarm only requires information about its local surroundings.
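Decentralisation means there is no central computer planning for the whole swarm: each robot runs the same update using only neighbors within its sensing radius. The following minimal sketch shows that pattern; the goal-attraction and neighbor-repulsion terms, the `sensing_radius`, and the step size `dt` are illustrative assumptions standing in for the learned policy, not the authors' code.

```python
import numpy as np

def decentralised_step(positions, goals, sensing_radius=2.0, dt=0.1):
    """One simulation step in which every robot computes its own velocity
    from purely local information: its own goal and the neighbors inside
    its sensing radius. A toy stand-in for a learned decentralized policy."""
    new_positions = positions.copy()
    for i in range(len(positions)):
        v = goals[i] - positions[i]                      # attraction to own goal
        for j in range(len(positions)):
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d)
            if j != i and dist < sensing_radius:
                v -= d / (dist**2 + 1e-9)                # repulsion from a local neighbor
        new_positions[i] = positions[i] + dt * v         # each robot updates itself
    return new_positions
```

Since each robot's loop body touches only its own goal and nearby positions, the per-robot computation stays constant even as the swarm grows, which is the scalability benefit the researchers describe.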