Safer Than Humans? What Crash Data From Robotaxis and Autonomous Vehicles Really Means for Road Safety
March 18, 2026 | Article by Chain | Cohn | Clark staff
Robotaxis don’t get tired, text behind the wheel, or roll through a stop sign because they’re running late, and in millions of miles of driving, that’s starting to show up in the crash data.
Waymo’s fully driverless vehicles are now logging tens of millions of miles on real city streets, and multiple peer‑reviewed and insurance‑based studies suggest they are causing far fewer injury crashes than human drivers, even as public trust in self‑driving cars remains shaky.
“Safer on average is not the same as safe in every moment,” said Chris Hagan, partner and attorney at the Law Office of Chain | Cohn | Clark. “These systems may reduce human error, but when something goes wrong, it often does so in ways people don’t expect, and that’s exactly where accountability matters most.”
Waymo has released several independent and peer‑reviewed analyses comparing its fully driverless “Waymo Driver” to human drivers over millions of miles in Phoenix, San Francisco, Los Angeles, and Austin. Key findings include:
- In an analysis of 7.1 million driverless miles, Waymo’s vehicles were 6.7 times less likely than human drivers to be involved in a crash that caused an injury — an 85% reduction — and 2.3 times less likely to be in any police‑reported crash.
- A newer study covering 56.7 million autonomous miles across four cities found large reductions in the kinds of collisions that most often seriously hurt people:
  - 92% fewer pedestrian injuries
  - 82% fewer cyclist injuries
  - 82% fewer motorcyclist injuries
  - 96% fewer vehicle‑to‑vehicle crashes at intersections, one of the leading sources of injuries on U.S. roads.
- Insurance‑claims data from Swiss Re and Waymo showed 88% fewer property‑damage claims and 92% fewer bodily‑injury claims for Waymo’s fleet compared with a human‑driven benchmark over 25.3 million miles.
According to Waymo, its fleet has now surpassed 170 million miles of fully driverless operation without causing any serious crashes or injuries, a total the company says is roughly equivalent to about 200 human driving lifetimes. Waymo presents this milestone, together with earlier studies showing far lower rates of injury crashes than human drivers, as evidence that its technology meaningfully improves road safety and supports its broader goal of reducing traffic deaths through automation.
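The statistics above mix two framings, "times less likely" and percentage reductions, plus a miles-to-lifetimes comparison. The quick arithmetic below is a rough illustration on our part, not Waymo's published methodology, showing how those figures line up.

```python
# Back-of-the-envelope check of the figures above (illustration only).

# "6.7 times less likely" to be in an injury crash means a rate of roughly
# 1/6.7 of the human benchmark, which works out to about an 85% reduction.
injury_reduction = 1 - 1 / 6.7
print(f"{injury_reduction:.0%}")           # -> 85%

# Describing 170 million driverless miles as about 200 human driving
# lifetimes implies a benchmark of roughly 850,000 miles per lifetime.
miles_per_lifetime = 170_000_000 / 200
print(f"{miles_per_lifetime:,.0f} miles")  # -> 850,000 miles
```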
None of these studies claim that self‑driving cars are flawless. Even Waymo’s own reports acknowledge 48 injuries and 18 airbag deployments across the four cities studied through early 2025. And high‑profile incidents, such as robotaxis blocking intersections during emergencies or failing to navigate unusual hazards, have fueled public concern.
From a safety and legal perspective, several issues stand out:
- Edge Cases: Autonomous systems can struggle with rare, complex situations that humans often handle with intuition and social cues, such as downed power lines, impromptu police directions, emergency vehicles, or unusual pedestrian behavior.
- Responsibility: When a robotaxi causes or worsens a crash, responsibility may extend beyond any human passenger to include the developer, hardware suppliers, and sometimes cities that permitted certain deployments.
- Transparency: Advocates and regulators continue to push for more public access to crash reports, disengagement data, and near‑miss information, so communities can evaluate real‑world risk rather than company marketing alone.
For someone injured by an autonomous vehicle in California, these unanswered questions make it especially important to preserve evidence quickly and to understand both traditional traffic laws and emerging AV regulations.
Waymo is not alone in this space. Nvidia, whose chips power many advanced driver‑assist and autonomous systems, is betting on a hybrid safety approach that blends powerful AI models with more traditional, rules‑based code. In a recent interview, Nvidia’s head of automotive Xinzhou Wu explained that Nvidia uses an “end‑to‑end” AI model to learn smooth, human‑like driving behavior, handling things like speed bumps, lane shifts, and complex urban traffic. That model is wrapped in a “classical” safety stack: hard‑coded rules and engineering standards that are easier to validate and audit for safe behavior.
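To make that hybrid idea concrete, here is a heavily simplified sketch. It is our illustration only; the names, thresholds, and structure are hypothetical and do not represent Nvidia's actual software. A learned planner proposes driving behavior, and a separate rules-based layer enforces hard constraints that engineers can test and audit.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    speed_mph: float      # target speed proposed for the next few seconds
    follow_gap_s: float   # time gap to keep behind the lead vehicle

def learned_planner(scene: dict) -> Plan:
    # Stand-in for the end-to-end model: in a real system this would be a
    # neural network trained on driving data; here it just returns a plan.
    return Plan(speed_mph=scene["model_suggested_speed_mph"], follow_gap_s=1.2)

def safety_layer(plan: Plan, scene: dict) -> Plan:
    # "Classical" rules applied on top of the learned output: hard-coded,
    # easy to validate, and able to override the model when needed.
    return Plan(
        speed_mph=min(plan.speed_mph, scene["speed_limit_mph"]),  # never speed
        follow_gap_s=max(plan.follow_gap_s, 2.0),                 # minimum gap
    )

scene = {"speed_limit_mph": 35, "model_suggested_speed_mph": 41}
final_plan = safety_layer(learned_planner(scene), scene)
print(final_plan)  # Plan(speed_mph=35, follow_gap_s=2.0)
```

The design choice matters because the rules layer, unlike the learned model, can be inspected directly: which constraints existed, and whether a foreseeable hazard was covered.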
Nvidia emphasizes sensor redundancy (camera, radar, and often lidar) to reduce the chance that one blind spot or sensor failure leads to disaster. And because Nvidia doesn’t yet have billions of real‑world miles, it leans heavily on simulation and synthetic data, even recreating edge cases from other companies’ incidents — such as Waymo vehicles blocking intersections — to train its own systems how to respond safely.
For injured people and their lawyers, these technical choices matter: they help determine what a system “should” have seen, whether a known risk was properly tested for, and whether a company met reasonable safety expectations for the technology it put on public streets.
Despite the promising statistics, public trust is low. Multiple surveys show that a majority of Americans are afraid of fully self‑driving vehicles, and that fear has grown in recent years. In a Waymo‑sponsored survey of 2,000 U.S. adults summarized by The Verge, many respondents who said they were “afraid” of AVs changed their minds after learning concrete facts about crash‑reduction data and how the technology actually works. People were more open to AVs for limited uses — such as late‑night rides or in areas with high crash histories — if they believed the service was well‑regulated and transparently monitored.
Separately, AAA polling has found that around two‑thirds of American drivers report being afraid of riding in a fully self‑driving vehicle, a sharp rise from earlier years. That distrust shapes juries, regulators, and community decisions about whether and how AV services expand into cities like Los Angeles and, eventually, smaller markets such as Bakersfield and Kern County.
“Trust is earned, not coded,” said Hagan, of Chain | Cohn | Clark. “If a company wants to put driverless cars next to our kids’ school buses, they need to prove, not just promise, that those vehicles will make our streets safer.”
When a crash involves a self‑driving or highly automated vehicle, the legal landscape looks different from a typical car wreck:
- Multiple potential defendants: The at‑fault party might include the AV company (like Waymo), software and sensor suppliers, the human safety operator (if any), and sometimes fleet owners or partners.
- Complex evidence: Key proof often lives in vehicle logs, sensor data, simulation records, and internal safety analyses, not just police reports and eyewitness testimony. Getting that data may require fast legal action and aggressive discovery.
- New standards of care: Courts and regulators are still defining what counts as “reasonable” safety for AVs, how much testing is enough, what kinds of edge cases must be anticipated, and when a company’s design choices are negligent.
Chain | Cohn | Clark is closely tracking AV deployments and safety data because the firm expects more California families, even outside major tech hubs, to encounter autonomous vehicles in the coming years.
———
If you or someone you know is injured in an accident that was someone else’s fault, or injured on the job no matter whose fault it is, contact the attorneys at Chain | Cohn | Clark by calling (661) 323-4000, or fill out a free consultation form, text, or chat with us at chainlaw.com.