According to researchers at the University of California, Irvine, autonomous vehicles carry an overlooked risk: an ordinary object left on the side of the road can provoke undesired driving behavior. In other words, these vehicles can be tricked into altering how they drive with very little effort.

“A box, bicycle, or traffic cone may be all that is necessary to scare a driverless vehicle into coming to a dangerous stop in the middle of the street or on a freeway off-ramp, creating a hazard for other motorists and pedestrians,” said Qi Alfred Chen, UCI professor of computer science.
The danger lies in the vehicle’s inability to tell whether an object is simply lying near the road or has been placed there deliberately to alter its route as part of an attack. “Both can cause erratic driving behavior,” Chen said. His research team focuses on investigating such security vulnerabilities in autonomous driving systems.
Testing process
Chen and his team focused their investigation on security vulnerabilities specific to the planning module, a part of the software code that controls autonomous driving systems. This component oversees the vehicle’s decision-making processes governing when to cruise, change lanes or slow down and stop, among other functions.
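The decision step the article describes can be pictured with a minimal sketch. This is a hypothetical simplification, not code from Apollo, Autoware, or the UCI study: a planner that halts whenever any perceived object sits within a fixed safety margin of the planned path, which shows how an over-cautious rule can stop the vehicle for a harmless roadside object.

```python
# Hypothetical behavioral-planner decision step (illustrative only).
# The over-conservative rule stops the vehicle for ANY object inside
# the lateral safety margin, even one sitting off the roadway.

from dataclasses import dataclass

@dataclass
class Obstacle:
    lateral_offset_m: float  # distance from the planned path's centerline
    on_road: bool            # whether perception places it on the roadway

SAFETY_MARGIN_M = 2.0  # assumed clearance threshold for this sketch

def plan_action(obstacles: list[Obstacle]) -> str:
    """Return 'cruise' or 'stop' for one planning cycle."""
    for obs in obstacles:
        # Over-cautious: no check of obs.on_road before stopping.
        if obs.lateral_offset_m < SAFETY_MARGIN_M:
            return "stop"
    return "cruise"

# A cardboard box on the shoulder, 1.5 m from the path, halts the car.
print(plan_action([Obstacle(lateral_offset_m=1.5, on_road=False)]))  # stop
print(plan_action([Obstacle(lateral_offset_m=3.0, on_road=False)]))  # cruise
```

The fix the researchers imply would be to also consult `on_road` (and similar context) before deciding to stop, rather than reacting to lateral distance alone.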
“The vehicle’s planning module is designed with an abundance of caution, logically, because you don’t want driverless vehicles rolling around, out of control,” said lead author Ziwen Wan, UCI Ph.D. student in computer science. “But our testing has found that the software can err on the side of being overly conservative, and this can lead to a car becoming a traffic obstruction, or worse.”
For this project, the researchers at UCI’s Donald Bren School of Information and Computer Sciences designed a testing tool, dubbed PlanFuzz, which can automatically detect vulnerabilities in widely used automated driving systems. As shown in video demonstrations, the team used PlanFuzz to evaluate three different behavioral planning implementations of the open-source, industry-grade autonomous driving systems Apollo and Autoware.
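The core idea behind a tool like PlanFuzz can be sketched as a fuzzing loop. The code below is an assumed toy version, not the actual tool: it randomly places benign roadside objects, runs a stand-in planner, and reports placements where a harmless object still forces a stop, i.e. candidate planning vulnerabilities.

```python
# Toy fuzzing loop in the spirit of PlanFuzz (illustrative assumption,
# not the real tool): search for benign object placements that make an
# over-conservative planner stop the vehicle.

import random

SAFETY_MARGIN_M = 2.0

def plan_action(lateral_offset_m: float) -> str:
    """Stand-in for the behavioral planner under test."""
    return "stop" if lateral_offset_m < SAFETY_MARGIN_M else "cruise"

def fuzz_planner(trials: int = 1000, seed: int = 0) -> list[float]:
    """Return roadside placements (all off the driving lane) that still
    trigger a stop -- each one a candidate vulnerability."""
    rng = random.Random(seed)
    findings = []
    for _ in range(trials):
        # Sample a placement clearly off the lane (>= 1.0 m out).
        offset = rng.uniform(1.0, 5.0)
        if plan_action(offset) == "stop":
            findings.append(offset)
    return findings

hits = fuzz_planner()
print(f"{len(hits)} benign placements caused a stop")
```

The real tool evaluates full behavioral-planning implementations rather than a one-line rule, but the search structure is the same: generate inputs, observe the planning decision, and flag cases where caution turns into obstruction.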
The researchers found that cardboard boxes and bicycles placed on the side of the road caused vehicles to permanently stop on empty thoroughfares and intersections. In another test, autonomously driven cars, perceiving a nonexistent threat, neglected to change lanes as planned.