A self-driving Waymo minivan being driven by its human operator crashed in Chandler, Arizona, this afternoon, threatening to resurrect tough questions about the safety of autonomous technology and rip the barely-crusted scab off the technology’s reputation, which was badly wounded when an Uber self-driving car hit and killed a pedestrian in the same state just seven weeks ago.
Chandler police report only minor injuries. According to a police statement, a Honda sedan traveling eastbound through an intersection swerved into the Waymo Chrysler Pacifica’s westbound lane to avoid hitting another car traveling north. (It’s unclear at this time who had the light and who is at fault.) The Honda hit the Waymo vehicle on its side, injuring the female safety driver behind the wheel of the minivan. Police say the vehicle was not traveling above the 45 mph speed limit.
Photos from local news stations show the Waymo vehicle pushed up against the sidewalk, with extensive damage to its front left bumper and wheel. The Honda’s entire front has been smashed in, along with its front passenger door. Its airbags appear to have deployed.
“Our team’s mission is to make our roads safer—it is at the core of everything we do and motivates every member of our team,” Waymo said in a statement. “We are concerned about the well-being and safety of our test driver and wish her a full recovery.”
Data from the Waymo car’s sensors will help determine what happened, but based on preliminary evidence and a video released by Waymo (above), the police say the robo-car was not at fault. “The vehicle was at the wrong place at the wrong time,” says Seth Tyler, a spokesperson for the Chandler Police Department. “Waymo and the driver of the vehicle won’t get cited for anything because she didn’t do anything wrong.”
In the hours after a self-driving Uber SUV killed Elaine Herzberg in nearby Tempe, police told reporters the crash may have been unavoidable. But video footage released a week later showed the opposite: The SUV’s safety driver was not looking at the road in the moments prior to the crash, and autonomous vehicle experts say Uber’s tech should have picked up the pedestrian with enough time to hit the brakes or swerve out of the way. In this case, however, fresh video seems to clear the Waymo car’s driver of any responsibility.
After the Uber crash, Waymo CEO John Krafcik said his team’s technology would have done better. “We’re very confident that our car could have handled that situation,” he told Forbes. “We know that for a lot of different reasons. It’s what we have designed this system to do in situations just like that.”
In February, Waymo announced its cars have driven 5 million miles on public roads since its beginnings as the Google Self-Driving Car Project in 2009. Crash reports (which companies developing autonomous tech must make public in California) show Waymo cars have been involved in upward of 30 minor crashes but have caused just one: In 2016, a Lexus SUV in autonomous mode changed lanes into the path of a public bus. The SUV sustained minor damage, and no one was hurt. The numbers for humans are hard to pin down, but researchers at the Virginia Tech Transportation Institute estimate people crash about 4.2 times per million miles driven. That would be 21 crashes over 5 million miles, roughly matching Waymo’s record.
The company has been running tests without safety drivers in Chandler and plans to launch a driverless taxi service sometime this year. Waymo, like other self-driving car companies, loves Arizona for its sunny, sensor-friendly weather and its regulation-free approach to emerging autonomous tech. (Governor Doug Ducey did, however, suspend Uber’s testing following March’s death.) Waymo is also testing AVs in Northern California and Atlanta.
Even if the Waymo car had been in autonomous mode at the time, self-driving tech can’t make up for errors made by other human drivers.
“This crash is indeed pretty much unavoidable for AVs,” says Raj Rajkumar, who researches autonomous driving at Carnegie Mellon University. “As can be seen in this video, when a vehicle out there goes out of control, no one really knows or controls what happens next—that vehicle can swerve in one of many different ways, roll over, etc. The AV in turn must make sure that, in an attempt to evade the situation, it does not make the situation worse given the speed at which events like this unfold.”
Even if the Waymo minivan isn’t at fault here, the company can’t be happy about the timing. After the Uber crash (and the death of a man using Tesla’s semiautonomous Autopilot feature a week later), unsettling questions have returned to the surface: How do we know these systems are ready for service? Should they really be testing on public streets? Is keeping a human overseer behind the wheel an adequate backup? And aren’t these cars supposed to make everybody safer?
“The images simplify the story and look like terrible accidents,” says Bart Selman, an artificial intelligence expert at Cornell University. “Lots of mistakes are made by human drivers. We have gotten used to that and don’t even report that anymore.”
Indeed, in 2016, human drivers in Arizona averaged nearly 350 crashes and two deaths a day. That’s why the promoters of autonomous technology harp on the facts that nearly 40,000 people die on US roads every year, and that human error causes more than 90 percent of crashes. Letting robots—which don’t get drunk, distracted, sleepy, or ragey—take the wheel could put a serious dent in those figures.
It will take time. Time to improve the technology, to test and deploy and make it widespread. Road deaths will never reach zero, and it will take decades to get anywhere near that level. In the meantime, the people trying to get there will have to keep at their work—and get ready to answer some unpleasant questions yet again.
Source: https://www.wired.com/story/waymo-crash-self-driving-google-arizona/