On March 18, a self-driving Volvo operated by Uber struck and killed a pedestrian in Arizona.
Advocates for automation maintained that the tragedy shouldn't detract from the promise of driverless technology to eliminate human error and make driving safer. But the death, along with a fatality five days later involving Tesla's Autopilot driver-assist system, was unusual in another way: These were rare instances in which driverless-car companies were forced to share data about how their systems work, in this case with investigators.