Self-driving cars, or autonomous vehicles, are the future of transportation, and many companies are now developing them. Google, Tesla, and General Motors are already testing autonomous vehicles in the USA.
Attempts to create driverless cars began in the 1970s, but without sufficiently advanced technology, autonomous cars remained a distant dream. Over time the technology improved: more powerful computers, the GPS system, and most notably, AI. Autonomous cars are now not just a possibility; they are almost here.
Self-Driving Cars in a Nutshell
An autonomous vehicle can sense its environment and navigate without human input. To accomplish this, each car is equipped with a GPS unit, an inertial navigation system, and a wide range of sensors including laser rangefinders, radar, and video cameras.
The vehicle uses positional information from the GPS and inertial navigation system to localize itself, and uses sensor data to refine its position estimate and build a three-dimensional (3D) image of its environment.
Data from each sensor is filtered to remove noise, and is often fused with data from other sources to augment the original image. The control system then determines how this data is used to make navigation decisions.
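As a rough illustration of this fusion step, two noisy estimates of the same quantity (say, position from GPS and from wheel odometry; the sensor names and noise values below are hypothetical) can be blended by weighting each by the inverse of its variance, so the more certain source dominates:

```python
def fuse_estimates(x1, var1, x2, var2):
    """Fuse two noisy estimates of the same quantity by
    inverse-variance weighting (the optimal linear blend for
    independent Gaussian noise)."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # fused estimate is more certain than either input
    return fused, fused_var

# Hypothetical readings: GPS says 10.0 m (variance 4.0),
# wheel odometry says 12.0 m (variance 1.0).
pos, var = fuse_estimates(10.0, 4.0, 12.0, 1.0)
print(pos, var)  # 11.6 0.8 -- pulled toward the more certain odometry reading
```

Note that the fused variance (0.8) is smaller than either input variance, which is exactly why fusing sources "augments" a single sensor's picture.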
Most self-driving vehicles use a deliberative control architecture: they maintain an internal map of their world and use it to make intelligent decisions, such as choosing, from a set of possible paths, an optimal route to their destination that avoids obstacles.
Once the vehicle has determined the best path, the decision is broken down into commands that are fed to the actuators, which control the vehicle's steering, braking, and throttle.
This procedure of localization, mapping, obstacle avoidance, and route planning is repeated many times per second on powerful onboard processors until the vehicle reaches its destination.
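The overall sense-plan-act cycle can be sketched as a simple loop; every method name below is a placeholder for one of the subsystems described above, not a real API:

```python
def drive_to(destination, vehicle, max_steps=10_000):
    """Top-level control loop: localize, update the map, plan a path,
    and actuate, repeated until the destination is reached."""
    for _ in range(max_steps):
        pose = vehicle.localize()            # GPS + inertial + sensor fusion
        world = vehicle.update_map(pose)     # internal map with tracked obstacles
        if vehicle.at(destination, pose):
            return True                      # arrived
        path = vehicle.plan_path(pose, world, destination)
        vehicle.actuate(path)                # steering, braking, throttle commands
    return False                             # gave up (e.g., blocked)
```

The important structural point is that planning and actuation happen inside the loop: each iteration re-senses the world, so the plan is continually revised as obstacles move.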
The next sections look at the technical modules behind each process: mapping and localization, obstacle avoidance, and path planning. Although manufacturers use different algorithms and sensor suites depending on their cost and operational constraints, the basic procedure is the same across all vehicles.
Mapping and Localization
Before making any navigation decision, a self-driving vehicle must build a map of its environment and precisely localize itself within that map. Laser rangefinders and cameras are the sensors most frequently used for building this map.
By sweeping swaths of laser beams across its surroundings, the vehicle can measure the distance to nearby objects. An advantage of laser rangefinders is that depth information is directly available for building the three-dimensional map, while video from the cameras is ideal for extracting scene color.
Laser beams diverge as they travel through space, so most state-of-the-art rangefinders struggle to obtain accurate distance readings beyond about 100 m, which limits the amount of reliable data that can be captured in the map. The vehicle filters and discretizes the data collected from each sensor, then aggregates the information into a comprehensive map that can be used for path planning.
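A minimal sketch of this filter-and-discretize step, assuming a flat 2D world and illustrative values for the range limit and grid resolution: each laser reading is a (bearing, distance) pair, readings beyond the reliable range are dropped, and the rest are snapped into occupancy-grid cells.

```python
import math

MAX_RANGE_M = 100.0   # beyond this, rangefinder readings are unreliable
CELL_SIZE_M = 0.5     # grid resolution (illustrative value)

def update_grid(grid, vehicle_xy, readings):
    """Discretize laser readings into a 2D occupancy grid.

    `grid` maps (col, row) cells to evidence counts; `readings` is a
    list of (bearing_rad, distance_m) pairs. Readings beyond
    MAX_RANGE_M are filtered out; the rest are converted to world
    coordinates and the matching cell is marked as occupied.
    """
    vx, vy = vehicle_xy
    for bearing, dist in readings:
        if dist > MAX_RANGE_M:
            continue  # too far to be reliable -- discard
        x = vx + dist * math.cos(bearing)
        y = vy + dist * math.sin(bearing)
        cell = (int(x // CELL_SIZE_M), int(y // CELL_SIZE_M))
        grid[cell] = grid.get(cell, 0) + 1  # accumulate occupancy evidence
    return grid

grid = update_grid({}, (0.0, 0.0), [(0.0, 10.0), (math.pi / 2, 150.0)])
print(grid)  # {(20, 0): 1} -- only the 10 m reading is kept
```

Accumulating evidence counts per cell (rather than a single boolean) is one simple way noisy single readings get smoothed out over repeated scans.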
To localize itself precisely and know where it is relative to the objects in its map, the vehicle must combine GPS with an inertial navigation unit and its other sensors.
Because of signal delays caused by atmospheric changes and reflections off buildings and surrounding terrain, GPS estimates can be off by many meters, and position errors can accumulate over time.
To reduce this uncertainty, localization algorithms often compare current sensor data against maps or sensor data previously collected at the same location. As the vehicle moves, the new positional information and sensor data are used to update its internal map.
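One standard way to blend a drifting dead-reckoned estimate with noisy GPS fixes is a Kalman filter. A one-dimensional predict/update cycle is sketched below; the numbers and the process-noise value are hypothetical, and real localizers run a multi-dimensional version of this against map-matched sensor data as well:

```python
def kalman_1d(pos, var, velocity, dt, gps_pos, gps_var, process_var=0.1):
    """One predict/update cycle of a 1D Kalman filter: dead-reckon
    from the inertial estimate, then correct with a noisy GPS fix."""
    # Predict: advance by dead reckoning; uncertainty grows.
    pos = pos + velocity * dt
    var = var + process_var
    # Update: blend in the GPS reading, weighted by relative certainty.
    gain = var / (var + gps_var)          # 0 = trust prediction, 1 = trust GPS
    pos = pos + gain * (gps_pos - pos)
    var = (1 - gain) * var                # uncertainty shrinks after the update
    return pos, var

# Hypothetical step: dead reckoning predicts 10.0 m; GPS (noisy,
# variance 4.0) reads 11.5 m. The filtered estimate lands between them.
pos, var = kalman_1d(pos=0.0, var=1.0, velocity=10.0, dt=1.0,
                     gps_pos=11.5, gps_var=4.0)
```

Because the gain depends on the relative variances, a many-meters-off GPS fix with large `gps_var` nudges the estimate only slightly, which is exactly how accumulated GPS error is kept in check.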
Obstacle Avoidance
A vehicle's internal map includes the current and predicted locations of all static obstacles (e.g., traffic lights, stop signs, buildings) and moving obstacles (e.g., other vehicles and pedestrians) in its vicinity.
Obstacles are categorized according to how well they match a library of pre-determined shape and motion descriptors, and the future path of each moving object is predicted using a probabilistic model based on those descriptors.
For example, a two-wheeled object traveling at 50 mph is far more likely to be a motorcycle than a bicycle, and the vehicle will categorize it as such. This allows the vehicle to make intelligent decisions when approaching busy intersections and crosswalks.
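To make the motorcycle-versus-bicycle example concrete, here is a toy rule-based sketch. The descriptor library and its speed ranges are invented for illustration; real systems match much richer shape and motion descriptors probabilistically rather than with hard thresholds:

```python
# Hypothetical descriptor library for two-wheeled objects:
# (label, (plausible min speed mph, plausible max speed mph)).
TWO_WHEELER_SPEEDS = [
    ("bicycle",    (0.0, 30.0)),
    ("motorcycle", (15.0, 120.0)),
]

def classify_two_wheeler(speed_mph):
    """Return every label whose plausible speed range contains the
    observed speed. An observation matching several descriptors is
    ambiguous and would need more evidence (shape, acceleration)."""
    return [label for label, (lo, hi) in TWO_WHEELER_SPEEDS
            if lo <= speed_mph <= hi]

print(classify_two_wheeler(50.0))  # ['motorcycle'] -- too fast for a bicycle
print(classify_two_wheeler(20.0))  # ['bicycle', 'motorcycle'] -- ambiguous
```

The ambiguous case shows why motion alone is not enough: at 20 mph the tracker must keep both hypotheses alive until shape or further motion evidence settles it.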
Path Planning
The vehicle's internal map, containing the previous, current, and predicted future locations of every obstacle in its vicinity, is then used for path planning.
The objective of path planning is to use this information to direct the vehicle safely to its destination while avoiding obstacles and following the rules of the road. Although each manufacturer's algorithms differ according to the navigation system and sensors used, the following describes a general path planning algorithm that has been used on military ground vehicles.
The procedure first determines a rough long-range plan for the vehicle to follow, then continuously refines it with short-range maneuvers (e.g., drive forward, turn right, change lanes). It starts from the set of short-range paths the vehicle is dynamically capable of completing given its heading, angular velocity, and speed, and removes any path that would cross an obstacle or come too close to an obstacle's predicted path.
For example, a vehicle traveling at 50 mph cannot safely make a full right turn 5 meters ahead, so that path is removed from the set. The remaining paths are evaluated on the basis of safety, speed, and timing requirements.
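The prune-then-rank step above can be sketched as follows. The candidate-path fields and cost function are simplified placeholders (real planners evaluate full trajectories against vehicle dynamics and predicted obstacle motion):

```python
def prune_paths(candidates, speed_mph, min_clearance_m=2.0):
    """Filter and rank short-range candidate paths.

    Each candidate carries a maximum safe entry speed (a stand-in for
    full dynamic feasibility), the closest distance the path comes to
    any predicted obstacle position, and an estimated travel time.
    Infeasible or unsafe paths are removed; survivors are ranked by a
    simple cost (shortest travel time first).
    """
    feasible = [p for p in candidates
                if speed_mph <= p["max_entry_speed_mph"]       # dynamically possible?
                and p["closest_obstacle_m"] >= min_clearance_m]  # safe clearance?
    return sorted(feasible, key=lambda p: p["travel_time_s"])

candidates = [
    {"name": "hard right",  "max_entry_speed_mph": 15, "closest_obstacle_m": 5.0, "travel_time_s": 2.0},
    {"name": "lane change", "max_entry_speed_mph": 70, "closest_obstacle_m": 1.0, "travel_time_s": 3.0},
    {"name": "straight",    "max_entry_speed_mph": 80, "closest_obstacle_m": 8.0, "travel_time_s": 4.0},
]
ranked = prune_paths(candidates, speed_mph=50)
print([p["name"] for p in ranked])  # ['straight']
```

At 50 mph the hard right is dynamically infeasible (as in the example above) and the lane change passes too close to an obstacle, so only the straight path survives to be ranked.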
Once the best path has been identified, a set of throttle, brake, and steering commands is passed to the vehicle's onboard processors and actuators. On average this process takes about 50 ms, but it can be shorter or longer depending on the amount of collected data, the available processing power, and the complexity of the path planning algorithm.
This cycle of localization, mapping, obstacle detection, and path planning repeats continuously until the vehicle reaches its destination.