Professionalism/The Death of Elaine Herzberg

On the night of March 18, 2018, in Tempe, Arizona, an autonomous vehicle (AV) collided with and killed Elaine Herzberg. The AV, a modified Volvo XC90, was part of a test program of the Uber Advanced Technologies Group (ATG). Herzberg was jaywalking across a four-lane roadway with her bicycle when struck. Toxicology screening showed methamphetamine and THC in her bloodstream. The safety driver of the AV, Rafaela Vasquez, was streaming a TV show on her phone right before the crash, a violation of Uber policy and Arizona state law. The National Transportation Safety Board (NTSB) found that Vasquez’s distractedness compromised her reaction time, preventing her from manually avoiding Herzberg when prompted by the AV.[1]

Humans, not technologies, were culpable for Herzberg's death

Uber’s software had dangerous flaws

As stated above, the proximate causes of Herzberg's death were Herzberg's impairment and jaywalking and Vasquez's distraction. However, we have to take a closer look at the mechanics of the AV to determine the higher-level causes. The AV had three mechanisms for sensing its environment: LIDAR, radar, and cameras. These worked together to paint a 3D picture of the surroundings, allowing the car's computer vision algorithms to judge distances, predict the trajectories of moving objects, and classify objects as cars, bicycles, pedestrians, or something else.

There were a few key flaws in Uber's AV software, known as Perception. First, reclassified objects lost their movement history, so the AV could not predict a trajectory for them. In the seconds leading up to Herzberg's death, the computer vision system reclassified Herzberg multiple times, cycling among an unknown object, a bicycle, and a vehicle; as a result, the AV never predicted her path across the road in time to avoid her. Second, objects classified as unknown were not given a predicted trajectory at all. Third, according to the NTSB, Perception "did not include consideration for jaywalking pedestrians." Fourth, Perception's handling of emergency braking was flawed. Uber programmed Volvo's native emergency braking system to deactivate whenever Perception was active. Post-crash, Volvo ran simulations of the crash scenario and determined that, had its emergency braking system been active, the AV would have avoided Herzberg or at least slowed enough to reduce the impact speed. Perception's own emergency braking, by contrast, engaged only when the AV could avoid impact entirely, not merely to reduce impact speed.[1]
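
The mechanics of the first two flaws can be illustrated with a minimal Python sketch (hypothetical class and method names, not Uber's actual code), assuming a tracker that discards an object's movement history whenever its label changes and that refuses to predict a path for unknown objects:

    from dataclasses import dataclass, field

    @dataclass
    class TrackedObject:
        """Hypothetical tracked object in a simplified perception pipeline."""
        classification: str                           # e.g. "vehicle", "bicycle", "unknown"
        history: list = field(default_factory=list)   # past (time, position) observations

        def reclassify(self, new_class):
            # Mimics the first flaw described above: changing the label
            # wipes the movement history that trajectory prediction needs.
            self.classification = new_class
            self.history = []

        def observe(self, t, position):
            self.history.append((t, position))

        def predicted_trajectory(self):
            # Mimics the second flaw: "unknown" objects get no predicted path, and any
            # object needs at least two observations since its most recent relabeling.
            if self.classification == "unknown" or len(self.history) < 2:
                return None
            # Naive constant-velocity extrapolation from the last two observations.
            (t0, p0), (t1, p1) = self.history[-2], self.history[-1]
            vx = (p1[0] - p0[0]) / (t1 - t0)
            vy = (p1[1] - p0[1]) / (t1 - t0)
            return lambda t: (p1[0] + vx * (t - t1), p1[1] + vy * (t - t1))

    # A crossing pedestrian who is relabeled every few frames never accumulates
    # enough history for a prediction, no matter how long she has been in view.
    obj = TrackedObject("unknown")
    obj.observe(0.0, (50.0, 3.0))
    obj.reclassify("bicycle")           # history lost
    obj.observe(0.5, (48.0, 2.5))
    print(obj.predicted_trajectory())   # None: only one observation since relabeling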

Strikingly, there was no technical malfunction; all sensor systems were fully active and doing their job. An executive of Velodyne, the LIDAR supplier, stated that the LIDAR "doesn’t make the decision to put on the brakes or get out of her [Herzberg's] way".[2] The key takeaway is that the AV performed exactly as it was programmed to under Perception; the flaw lay in Perception's design, not in the underlying technology.

Poor safety culture leads to tragedy

If the problem was Perception’s design, why did Uber fail to detect these flaws earlier, and why did it fail to build redundant safety measures into the AV to guard against undetected shortcomings? The NTSB points to Uber’s “poor commitment to safety culture”.[1] This was Uber’s ethical pitfall. The NTSB cited the following practices at Uber ATG as evidence of a weak safety culture:

  • Weakened safety redundancy
    • Removal of second safety driver in 2017
    • Disabling Volvo’s native emergency braking and forward collision detection
  • Poor supervision of safety drivers
    • e.g. supervisors “spot-checking” safety drivers using the feed from their inward-facing camera
  • Failure to anticipate "human factors"
    • Pedestrian - jaywalking
    • Safety driver - inattentional blindness, automation complacency, and risk compensation (see below)
  • No corporate division to oversee safety[1]

Notably, Uber ATG employees knew that Perception and the company's safety standards were flawed even before Herzberg’s death. Uber AVs were involved in accidents almost “every other day” in February 2018, the month before Herzberg died. Safety drivers who committed fireable offenses, such as Vasquez’s cell phone use, went unpunished.[3][4] To Uber’s credit, the company claims to have rectified almost all of these weak areas post-crash,[1] but these factors nevertheless led to Herzberg’s death.

AV systems have ethical assumptions built in

There will always be ethical decisions embedded in the design of AV software. Consequentialism holds that the right action is whichever one leads to the best results in quantified terms, such as the fewest deaths.[5] AV manufacturers apply consequentialism when they consider crash optimization: if a crash is imminent, how should an AV minimize the resulting damage, injury, or death?[5] To execute this, designers implement targeting algorithms, which deliberately direct an unavoidable collision toward some people or objects rather than others based on factors such as expected harm.

For example, consider a thought experiment in which a car must choose between hitting a motorcyclist wearing a helmet and a motorcyclist not wearing one.[5] It might seem reasonable to hit the helmeted rider, since the rider without a helmet would probably not survive the crash. But this effectively penalizes motorcyclists for wearing helmets, possibly encouraging riders to forgo them to avoid being targeted. Consequentialism underlies one of the most famous ethical dilemmas in history, the trolley problem,[6][7] and as with the trolley problem, there is no definitive answer as to how to perfect crash optimization.
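
This perverse incentive can be made concrete with a minimal Python sketch (the survival probabilities and names are invented purely for illustration and are not drawn from any manufacturer's system): a naive consequentialist rule that minimizes expected harm will always target the helmeted rider.

    # Hypothetical crash-optimization rule: choose the collision target with the
    # lowest expected harm. The survival probabilities are invented solely to
    # illustrate the helmet thought experiment; they are not real statistics.

    def expected_harm(target):
        # Crude consequentialist metric: expected harm = probability of a fatality.
        return 1.0 - target["survival_probability"]

    def choose_target(targets):
        # Consequentialist rule: among unavoidable options, minimize expected harm.
        return min(targets, key=expected_harm)

    motorcyclists = [
        {"name": "rider with helmet", "survival_probability": 0.6},
        {"name": "rider without helmet", "survival_probability": 0.2},
    ]

    # The rule systematically targets the protected rider, which is exactly the
    # discrimination problem the thought experiment raises.
    print(choose_target(motorcyclists)["name"])   # rider with helmet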

AV companies deflect blame and support continued street testing

In response to AV crashes, companies such as Uber and Tesla have refused to accept responsibility. In 2016, Joshua Brown was killed when his Tesla Model S collided with a tractor-trailer while in Autopilot mode. Elon Musk, CEO of Tesla, has blamed crashes such as Brown’s on driver overconfidence in the AV, stating that “the issue is more one of complacency” rather than any shortcoming of Tesla’s Autopilot system.[8] Marketing terms such as Tesla’s “Autopilot” may imply a greater level of autonomy than the system provides,[9][10] inadvertently feeding the very complacency Musk described. Uber has stated that it is impossible to “anticipate and eliminate” every potential risk before street testing, and that this is why testing on public roads is necessary.[11] Uber's poor safety culture suggests otherwise: the company was well aware of flaws in its AV system and safety standards before Herzberg’s death and continued to test its vehicles on public roads despite regular crashes.[3][4] Waymo has echoed the need for continued street testing,[12] though its safety record has been better, with no recorded fatalities.

Humans are poorly equipped to supervise AVs

Current AVs place impractical demands on human attention

SAE International (formerly the Society of Automotive Engineers) has defined six levels of driving automation for AVs.[13] Level 3 AVs, such as the one involved in Herzberg's death, do not suit human psychology and can give the safety driver a false sense of security. When a human is driving (Levels 0-2), the human is engaged in the driving task and focused on the road. At Level 3, the AV removes the moment-to-moment need to drive, yet the driver must still watch the road and be ready to take over in an emergency. This is an unreasonable task to ask of humans because of inattentional blindness: the failure to notice things (like a jaywalking pedestrian) when not attending to them.[14][15]
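
A simplified sketch of the six levels (descriptions paraphrased and compressed from SAE J3016, not the standard's exact wording) highlights why Level 3 is the awkward middle ground: it is the only level at which the system performs the driving task while the human remains the emergency fallback.

    from enum import Enum

    class SAELevel(Enum):
        # Paraphrased, simplified summaries of the SAE J3016 levels (illustrative only).
        L0 = (0, "No automation: the human does all driving")
        L1 = (1, "Driver assistance: the human drives; the system helps with steering or speed")
        L2 = (2, "Partial automation: the system steers and controls speed; the human monitors")
        L3 = (3, "Conditional automation: the system drives; the human must stay ready to take over")
        L4 = (4, "High automation: the system drives and handles its own fallback within limits")
        L5 = (5, "Full automation: the system drives everywhere with no human fallback")

    def human_is_passive_fallback(level):
        # Only Level 3 combines a passive human with fallback responsibility,
        # the combination this section argues human attention cannot sustain.
        return level is SAELevel.L3

    for level in SAELevel:
        number, description = level.value
        note = "  (passive human, yet still the fallback)" if human_is_passive_fallback(level) else ""
        print(f"Level {number}: {description}{note}")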

Automation leads to driver recklessness

When engineers design a technology to be safer, users may take more risks with it, nullifying some or all of the safety benefit. This phenomenon is called risk compensation.[16][17] A special variant of risk compensation, called automation complacency, arises with automated technologies. When a task is automated, users tend to stop performing the manual checks the system now handles; if the automation fails, the user may fail to notice and remedy the failure.[18][19][20][21] For safety drivers operating AVs, automation complacency erodes the precautions they would take in a conventional car: they become more likely to engage in distractions, such as Vasquez’s cell phone use. The danger is that they may be distracted at the very moment the automation fails and may not be ready to take control in an emergency.

Economic and political factors contributed to Herzberg's death

When California said no, Arizona said yes

In 2016, Uber began testing its AVs in its headquarters city of San Francisco, California. After an Uber AV ran a red light, however, California officials wanted Uber to apply for permitting.[22] California had created a permit system for AV testing back in 2014.[1] Uber refused to undergo the permitting process, and Arizona Governor Doug Ducey moved to attract Uber’s AV testing to his state. In a press release, Ducey said that while “California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology and new businesses” and that “Arizona welcomes Uber self-driving cars with open arms and wide open roads”.[23] Ducey’s intention to foster his state’s economy may have been pure, but his haste to attract Uber left Arizona vulnerable to tragedy. He lured Uber to Arizona without establishing regulations to screen AV companies or their safety practices, ultimately leading to the death of Arizona resident Elaine Herzberg. Post-crash, Governor Ducey placed a moratorium on Uber’s AV testing via executive order, but the NTSB has criticized Arizona’s failure to create a “safety-focused application-approval process” for AV testing.[1]

Lack of federal regulation leads to ethical quandaries for states

The NTSB also criticized the National Highway Traffic Safety Administration (NHTSA) for “not providing definitive leadership to the states to manage the expansive growth in AV testing”.[1] The NHTSA has yet to publish comprehensive standards for AVs, outline best practices for AV testing, or create a mandatory vetting process for AVs and AV companies' safety protocols. Congress tried to pass comprehensive AV legislation in 2017 and again in 2020 but failed both times.[24] This has left a regulatory vacuum, forcing the states to set AV standards themselves. Perhaps the states are not solely to blame for the ethical dilemmas brought on by AV testing, since there is no unified guidance from the federal government.

Key lessons

This case shows what happens when we value progress and profit over safety. Uber had forewarning that its AV program and Perception were unsafe, and it made the problem worse by eliminating the second safety driver to cut costs. Arizona’s government could have been more careful about letting Uber test on its streets, but Governor Ducey’s hunger for economic growth ran ahead of the state’s AV legislation. The result was the death of an innocent pedestrian. Herzberg’s death may also have hurt Uber financially, as the company experienced decelerating growth over 2018.[25] Uber appears to have addressed some of these issues in the years since the crash, but every industry should view this case as a lesson to put safety first.

Future directions

Future research could examine whether individual states' AV regulations are effective. We might also forecast how much a federal AV-regulating body would cost and what benefits such an organization could bring. Countless ethical questions remain in the AV field, and some, such as crash optimization, may never have definitive answers. One issue that may admit a more definitive answer is establishing objective criteria for determining whether an AV company is morally responsible for a crash. Finally, we must recognize that hindsight is 20/20; no AV company will be able to foresee every possible contingency before street testing.

References

  1. National Transportation Safety Board. (2019, November 19). Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian. Highway Accident Report NTSB/HAR-19/03. Washington, DC.
  2. Naughton, K. (2018, March 23). Sensor Supplier to Self-Driving Uber Defends Tech After Fatality. Bloomberg Hyperdrive. https://www.bloomberg.com/news/articles/2018-03-23/sensor-supplier-to-self-driving-uber-defends-tech-after-fatality (accessed May 1, 2020).
  3. Bort, J. (2018, November 19). Uber insiders describe infighting and questionable decisions before its self-driving car killed a pedestrian. Business Insider. https://www.businessinsider.com/sources-describe-questionable-decisions-and-dysfunction-inside-ubers-self-driving-unit-before-one-of-its-cars-killed-a-pedestrian-2018-10 (accessed May 1, 2020).
  4. Ringle, H. (2018, December 11). Report: Uber manager warned execs about problems days before fatal Tempe crash. Phoenix Business Journal. https://www.bizjournals.com/phoenix/news/2018/12/11/report-uber-manager-warned-execs-about-problems.html (accessed May 1, 2020).
  5. Lin, P. (2016). Why Ethics Matters for Autonomous Cars. In M. Maurer, J.C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous Driving: Technical, Legal and Social Aspects (pp. 69-85). Springer-Verlag Berlin Heidelberg.
  6. Thomson, J. J. (1976). Killing, Letting Die, and the Trolley Problem. The Monist, 59(2), 204–217. Retrieved from JSTOR.
  7. Kamm, F. M. (1989). Harming Some to Save Others. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 57(3), 227–260. Retrieved from JSTOR.
  8. O'Kane, S. (2018, May 2). Tesla will regularly release data about the safety of Autopilot, Elon Musk says. The Verge. https://www.theverge.com/2018/5/2/17313324/tesla-autopilot-safety-statistics-elon-musk-q1-earnings (accessed April 4, 2020).
  9. Jefferson, J., & McDonald, A. D. (2019). The autonomous vehicle social network: Analyzing tweets after a recent Tesla autopilot crash. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. https://doi.org/10.1177/1071181319631510
  10. Teoh, E. R. (2020). What’s in a name? Drivers’ perceptions of the use of five SAE Level 2 driving automation systems. Journal of Safety Research, 72, 145–151. https://doi.org/10.1016/j.jsr.2019.11.005
  11. Khosrowshahi, D. (2018, November 2). A Principled Approach to Safety. Medium. https://medium.com/@UberATG/a-principled-approach-to-safety-30dd0386a97c (accessed April 4, 2020).
  12. Krafcik, J. (2018, November 5). The very human challenge of safe driving. Medium. https://medium.com/waymo/the-very-human-challenge-of-safe-driving-58c4d2b4e8ee (accessed April 4, 2020).
  13. SAE International. (2018, June 15). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles.
  14. Rock, I., Linnett, C. M., Grant, P., & Mack, A. (1992). Perception without attention: Results of a new method. Cognitive Psychology, 24(4), 502–534. https://doi.org/10.1016/0010-0285(92)90017-V
  15. Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28(9), 1059–1074. https://doi.org/10.1068/p281059
  16. Peltzman, S. (1975). The Effects of Automobile Safety Regulation. Journal of Political Economy, 83(4), 677–725. Retrieved from JSTOR.
  17. Hedlund, J. (2000). Risky business: Safety regulations, risk compensation, and individual behavior. Injury Prevention, 6(2), 82–89. https://doi.org/10.1136/ip.6.2.82
  18. Grugle, N. (2019, November 20). Human Factors in Autonomous Vehicles. American Bar Association. https://www.americanbar.org/groups/tort_trial_insurance_practice/publications/tortsource/2019/fall/human-factors-autonomous-vehicles/ (accessed April 30, 2020).
  19. Sklar, A. E., & Sarter, N. B. (1999). Good Vibrations: Tactile Feedback in Support of Attention Allocation and Human-Automation Coordination in Event-Driven Domains. Human Factors, 41(4), 543–552. https://doi.org/10.1518/001872099779656716
  20. Moray, N., & Inagaki, T. (2000). Attention and complacency. Theoretical Issues in Ergonomics Science, 1(4), 354–365. https://doi.org/10.1080/14639220052399159
  21. Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
  22. Stilgoe, J. (2019, November 26). Who’s Driving Innovation? New Technologies and the Collaborative State. “Chapter 1: Who Killed Elaine Herzberg?”
  23. Arizona Office of the Governor. (2016, December 22). Governor Ducey Tells Uber 'CA May Not Want You, But AZ Does'. https://azgovernor.gov/governor/news/2016/12/governor-ducey-tells-uber-ca-may-not-want-you-az-does (accessed April 4, 2020).
  24. Hawkins, A. J. (2020, February 11). We still can’t agree how to regulate self-driving cars. The Verge. https://www.theverge.com/2020/2/11/21133389/house-energy-commerce-self-driving-car-hearing-bill-2020 (accessed April 4, 2020).
  25. Bosa, D., & Zaveri, P. (2019, February 15). Uber’s growth slowed dramatically in 2018. CNBC. https://www.cnbc.com/2019/02/15/uber-2018-financial-results.html (accessed May 2, 2020).