Following the March 18 crash in which an autonomous car struck and killed a pedestrian, California lawmakers are reconsidering relevant legislation, and the debate over self-driving cars continues to grow.
The ridesharing company Uber was testing the self-driving car in question when, even with a human supervisor, it struck and killed a pedestrian at an intersection in Tempe, Arizona. Afterwards, Uber suspended tests of autonomous cars not only in Arizona but also in Pennsylvania, California, and Canada.
“From what I heard, [this crash] was not the software’s fault,” Eric Raeber, a software engineer at Amazon, comments. “It was a pedestrian that jumped onto the street. If it had not been a self-driving car, then the same accident might have happened, so I think that poses an interesting legal problem of who is responsible.”
Raeber works with algorithms similar to the ones that self-driving cars use and explains the technology behind them.
“A self-driving car has a laser sensor that senses its environment, so the car knows all the obstacles around it, and you have a software algorithm that does a local planning [and] plans where to go given the obstacles around you,” Raeber describes. “The car’s sensors quickly react to pedestrians, to bikes, and to other cars to avoid obstacles.”
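The local planning Raeber describes can be illustrated with a toy sketch. The code below is purely hypothetical and drastically simplified: it treats the car, goal, and sensed obstacles as 2D points and picks, from a fan of candidate headings, the one that makes progress toward the goal without passing too close to an obstacle. Real vehicles plan over lidar point clouds with far more sophisticated algorithms.

```python
import math

def plan_heading(position, goal, obstacles, safety_radius=2.0):
    """Toy local planner: return the candidate heading (radians) that
    makes the most progress toward the goal while keeping at least
    safety_radius away from every sensed obstacle.

    position, goal: (x, y) tuples; obstacles: list of (x, y) tuples.
    Returns None if no candidate heading is safe (i.e., the car should stop).
    """
    px, py = position
    gx, gy = goal
    best_heading, best_score = None, float("-inf")
    for step in range(36):                      # candidate headings every 10 degrees
        heading = step * math.pi / 18
        # Probe one unit ahead along this heading.
        nx, ny = px + math.cos(heading), py + math.sin(heading)
        # Reject headings that bring the car too close to an obstacle.
        if any(math.hypot(nx - ox, ny - oy) < safety_radius
               for ox, oy in obstacles):
            continue
        # Prefer the probe point closest to the goal (higher score = better).
        score = -math.hypot(gx - nx, gy - ny)
        if score > best_score:
            best_heading, best_score = heading, score
    return best_heading
```

With no obstacles and a goal due east, the planner heads straight toward it; place an obstacle directly in the path and it picks a detour, mirroring Raeber's point that the sensors let the car "avoid obstacles" while still planning "where to go."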
Despite the car’s complex programming, the technology remains imperfect; company documents reveal that Uber’s self-driving cars drove an average of only 13 miles per driver intervention. In contrast, Google’s autonomous car project Waymo traveled about 5,600 miles per intervention.
“Waymo is supposedly further ahead than Uber and a lot of these other [autonomous] driving companies,” Joshua Rubin, a mathematics and computer science teacher at Woodside, adds. “If Waymo can show data that proves that they are able to predict when a human is crossing the street… then maybe that would be enough to allow California’s governing board to decide [self-driving cars] are okay.”
Although California has historically been less receptive to autonomous cars than Arizona, the state passed a law last February allowing truly driverless cars to begin operating on April 2.
“With [autonomous cars], I would feel much safer as a pedestrian, because I know that the chances of an accident are much lower with software than with human drivers,” Raeber reasons. “It’s all the phases in between, when you mix humans and self-driving cars, that create a lot of trouble. Once all of the cars are self-driving, I think a lot of the problems would go away.”
While Raeber and Rubin have only ridden in semi-autonomous Teslas rather than truly driverless cars, both would feel safe in one. Still, Rubin fears that from hacking to job loss, the introduction of autonomous cars will have drawbacks.
“Hacking is one of the scariest weapons that exist in the world today, and I don’t think that people really understand [its] consequences,” Rubin explains. “There have been proven studies of not just cars but also other large infrastructures within our government that could be potentially compromised… Also, Uber and Lyft have been very successful companies, and as soon as they don’t have to pay the driver, that… eliminates a lot of jobs that people rely on.”
Another problem facing autonomous cars pertains not to technology but rather to ethics. An MIT study titled the Moral Machine forces participants to choose whom an autonomous car should strike if a crash is inevitable.
“In one situation, people are jaywalking,” Rubin recalls. “The car has a decision: do I hit a barrier and potentially kill the people inside the car, or do I just run over the people who have been jaywalking because they shouldn’t have been jaywalking? That is the most difficult question, and I do not know how to program a computer to decide whose life to take or whose life to potentially take.”
Yet many predict that the introduction of effective autonomous cars could drastically reduce vehicular deaths. An estimated 40,000 Americans died in traffic-related incidents in 2017, an average of nearly 110 fatalities per day, and an estimated 94 percent of crashes involved human error.
“I would personally say that [the Arizona crash] was not the fault of the self-driving car,” Woodside sophomore Adrienne Evans, who frequently takes Uber, decides. “I think that it will only fuel the fire and the motivation behind [self-driving cars, and] if they can develop technology that is able to sense what human reflexes aren’t able to in a short period of time, then freak accidents will be less likely to occur.”
Raeber expects that the Arizona crash may temporarily deter lawmakers, and with reason: less than a week from the onset of California’s new regulation, state officials are investigating the Arizona crash, and Uber has opted not to renew its state-issued autonomous car permit. Still, Raeber predicts that the government will continue to push for more lenient legislation, allowing self-driving cars to shape the lifestyle of the future.
“[There will be] a better organization among all the self-driving cars, which allows for better traffic efficiency [and] better efficiency in pollution,” Raeber predicts. “I think there will always be drivers who do that for the pleasure of driving, the same way you have people riding horses that do it for the pleasure.”
Like Rubin and Raeber, Evans expects autonomous cars to eventually replace the majority of human drivers.
Ultimately, Evans concludes that “we’re definitely heading towards a driverless future.”