4 June 2020

As an emerging technology, autonomous vehicles are ripe for speculation. In fact, it can be difficult to talk about autonomous vehicles without straying into speculation, which presents a never-ending challenge for an educational organization like PAVE. There’s nothing fundamentally wrong with trying to anticipate the transformative changes that AVs might unlock, but extrapolating from an unsound factual foundation can lead to grotesquely warped conclusions.

The dangers of speculation based on unsound assumptions are dramatically illustrated by a rash of recent media headlines making variations of the same claim: “self-driving cars could only prevent a third of U.S. crashes.” Given the fundamentally speculative nature of this entire topic, dependent as it is on a dizzying variety of variables, PAVE is not here to denounce this specific claim as factually incorrect. There are, however, assumptions in the study behind these headlines that raise questions about its conclusions and make this a teachable moment.

These headlines come from an IIHS study that sorts current crash statistics into five categories and then predicts the impact an all-AV fleet would have on each. The first half of this exercise is unambiguously valuable, providing a more granular look at the kinds of factors that contribute to road crashes today. Where the study goes off the rails is in its assumptions about how AVs are designed, which badly warp the results highlighted in media reports.

IIHS breaks driver-related factors from the 5,000 police-reported crashes in the National Motor Vehicle Crash Causation Survey database into the following five categories:

  • 24%: “Sensing and perceiving” errors included things like driver distraction, impeded visibility and failing to recognize hazards before it was too late.
  • 17%: “Predicting” errors occurred when drivers misjudged a gap in traffic, incorrectly estimated how fast another vehicle was going or made an incorrect assumption about what another road user was going to do.
  • 39%: “Planning and deciding” errors included driving too fast or too slow for the road conditions, driving aggressively or leaving too little following distance from the vehicle ahead.
  • 23%: “Execution and performance” errors included inadequate or incorrect evasive maneuvers, overcompensation and other mistakes in controlling the vehicle.
  • 10%: “Incapacitation” involved impairment due to alcohol or drug use, medical problems or falling asleep at the wheel.

This breakdown is a fascinating and valuable insight on its own, but IIHS then assumes that AVs will only eliminate the factors in the “sensing and perceiving” and “incapacitation” categories. This produces the now-oft-repeated claim that AVs could prevent only 34% of crashes.
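To make the arithmetic behind that headline explicit, here is a minimal Python sketch using the percentages from the IIHS breakdown above. The grouping into “assumed preventable” categories is our reading of the study’s assumption, and the categories can overlap, so these shares are rough:

    # Crash-contribution shares from the IIHS breakdown above (percent).
    # Categories can overlap, so the shares need not sum to exactly 100.
    shares = {
        "sensing_and_perceiving": 24,
        "predicting": 17,
        "planning_and_deciding": 39,
        "execution_and_performance": 23,
        "incapacitation": 10,
    }

    # The study assumes AVs eliminate only these two categories.
    assumed_preventable = ("sensing_and_perceiving", "incapacitation")
    print(sum(shares[c] for c in assumed_preventable))  # -> 34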

Now, it would be hard to deny that AVs can perceive the world around them better than humans can, as anyone can tell from one glance at the highly detailed, long-range, 360-degree view afforded by terabytes of lidar, radar and camera data. It would also be hard to deny that AVs will avoid “incapacitation,” since they won’t get tired or intoxicated the way humans do. But what of the other 66% of crashes? Can AVs not improve on human performance in these areas too?

The answer is far more nuanced than the media headlines would have you believe. In the “discussion” section, the study’s authors write that “Only about a third of serious crashes could be preventable by AVs if they are not designed to respond safely to what they perceive,” which is a bit like saying that a marble won’t roll very far if it’s not round. The authors offer only two pieces of evidence for the claim that AVs aren’t being designed to do precisely this: the fatal 2018 Uber crash in Tempe, and the idea that a rider’s preference for speed over safety might outweigh the system’s fundamental safety functions.

The Tempe crash was a terrible event that galvanized the AV sector, and the cascading failures that led to Elaine Herzberg’s death should be studied and learned from… but it was an outlier from how the AV industry operates on more levels than we have space to cover here. In any case, it is the notion that AV riders could choose to trade safety for speed that features far more prominently in IIHS and media explanations of the finding. This “rider preference” trade-off might make sense in a sci-fi story, but in reality no AV developer has even hinted that it would be an option.

Instead, AV developers emphasize again and again that safety is the North Star of their work. One PAVE member, Aurora, exemplifies the sector’s commitment to safety: CEO Chris Urmson says the company is “proactively and intentionally taking the time to program our vehicles to operate like model drivers,” always following the existing rules no matter the jurisdiction. PAVE member Argo AI has explained in depth how its safety-focused culture permeates not just its system design but its testing policies. There’s even a joke that shows how fundamental safety is to AV design: you can make a car drive itself by putting a brick on the throttle; the hard part is getting a car to drive itself safely.

In fact, AVs may well violate traffic rules in some circumstances, but when they do it will be in the name of enhancing safety, not trading it away for speed as the study’s authors speculate. In the AV safety case framework “Safety First for Automated Driving” [PDF], developed and signed by PAVE members Intel and Audi (among others), an unsafe situation that cannot otherwise be safely handled calls for the AV to “prioritize traffic rules while making a safety-assured maneuver.” In other words: break only as many traffic rules as necessary to assure safety.
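As a purely hypothetical sketch of what “break only as many traffic rules as necessary to assure safety” could look like inside a planner, consider the Python snippet below. The Maneuver type, its fields and the example options are all our invention for illustration, not any developer’s actual logic:

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        is_safe: bool         # avoids collision and keeps safe margins?
        rule_violations: int  # number of traffic rules it would break

    def choose_maneuver(candidates: list[Maneuver]) -> Maneuver:
        """Hypothetical policy: safety is a hard constraint; rule
        compliance is optimized only among the safe options."""
        safe = [m for m in candidates if m.is_safe]
        if not safe:
            # Real systems need an explicit minimal-harm fallback here;
            # that case is out of scope for this sketch.
            raise RuntimeError("no safety-assured maneuver available")
        # Among safe maneuvers, break as few traffic rules as possible.
        return min(safe, key=lambda m: m.rule_violations)

    # Example: the only safe way past an obstacle crosses a double line.
    options = [
        Maneuver("stay_in_lane", is_safe=False, rule_violations=0),       # hits obstacle
        Maneuver("hard_stop_in_lane", is_safe=False, rule_violations=0),  # rear-end risk
        Maneuver("cross_double_line", is_safe=True, rule_violations=1),
    ]
    print(choose_maneuver(options).name)  # -> cross_double_line

Note the ordering of concerns: the rule violation is accepted only because every rule-compliant option fails the safety check, which is exactly the priority the framework describes.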

What you won’t find anywhere is an AV developer saying that it will trade off safety, or compliance with the rules of the road, for speed. The only source for the authors’ speculation that this might happen is a questionnaire survey of automated shuttle riders in Berlin, which seems to suggest that riders might prefer more speed even at the cost of safety. Needless to say, it is unlikely that riders of an automated shuttle have weighed the complex, high-stakes issues of legal liability and regulatory compliance that actual AV developers have no choice but to build into their systems.

Given the total lack of evidence that AV developers will actually allow their vehicles to prioritize speed (a factor in 23% of crashes) or illegal maneuvers (15%) over safety, one must concede that AVs could potentially prevent another 37% of crashes. That would bring the total to 71%, more than double the mere 34% the authors concede and that features in the headline of almost every media story about the study. We could go even further and point out that AVs have a huge advantage over humans when it comes to environmental conditions and the entire category of “execution” errors: these generally come down to calculations about physics, such as coefficients of traction on different road surfaces, that are complex for a human but well-proven in traction and stability control systems.
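Taking the article’s figures at face value, the back-of-envelope revision looks like this in Python (remember that the underlying categories overlap, so these sums are illustrative rather than exact):

    study_concession = 34  # "sensing and perceiving" (24%) + "incapacitation" (10%)
    conceded_above = 37    # speed- and rule-related share conceded above
    revised_total = study_concession + conceded_above
    print(revised_total)           # -> 71
    print(revised_total > 2 * 34)  # -> True: more than double the headline figure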

The point of all this is not to prove that AVs will definitely eliminate all road deaths. That perception, as PAVE Academic Advisory Committee member Dr. Missy Cummings told the New York Times, is a “layman’s conventional wisdom that somehow this technology is going to be a panacea that is going to prevent all death.” But given the fundamental issues with some of IIHS’s assumptions, it’s not clear that the claim that AVs can prevent only 34% of crashes is a much more fact-based mental model than the overly optimistic view it seeks to replace. The search for an accurate projection of AVs’ life-saving potential goes on.

Source: PAVE