Yet it is quite odd that Tesla also reports untrained customers, using old versions of FSD on outdated hardware, averaging 1,500,000 miles per minor collision [1], a difference of literally 3,000%, when there are no penalties for incorrect reporting.
That said, the redaction issue is the real story. Waymo publishes detailed narratives. Zoox publishes detailed narratives. Tesla marks everything confidential. When every other company is transparent and one is not, that tells you something about what they are finding in the data. You cannot independently assess fault or system failure, which makes any comparison meaningless.
Tesla needs their FSD system to be driving hundreds of thousands of miles without incident. Not the 5,000 miles that Michael FSD-is-awesome-I-use-it-daily Smith posts about incessantly on X.
There is this mismatch where overrepresented people who champion FSD say it's great and has no issues, while the reality is that none of them are remotely close to putting in enough miles to cross the "it's safe to deploy" threshold.
A fleet of robotaxis will do more FSD miles in an afternoon than your average Tesla fanatic will do in a decade. I can promise you that Elon was sweating hard during each of the few unsupervised rides they have offered.
In some spaces we still have the rule of law - when xAI started doing the deepfake nude thing, we kind of knew no one in the US would do anything, but jurisdictions like the EU would. And they are now. It's happening slowly, but it is happening. Here, though, I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
I'm curious how crashes are reported for humans, because it sounds like 3 of the 5 examples listed happened at like 1-4 mph, and a fourth probably wasn't Tesla's fault (the car was stationary at the time). The most damning one was a collision with a fixed object at a whopping 17 mph.
Tesla sucks, but this feels like clickbait.
For those complaining about Tesla's redactions - fair and good. That said, Tesla formed its media strategy at a time when gas car companies and short sellers bought ENTIRE MEDIA ORGs just to trash the company and back their shorts. Their hope of a fair showing on the media side died with Clarkson and co. faking dead batteries in a Roadster test -- so, yes, they're paranoid, but also, they spent years with everyone out to get them.
``` The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds. ```
So in reality there is one crash with a fixed object; the rest are... questionable, and not crashes in the way you portray them. Such incidents would not even make it into human crash reports; they would be filed as non-driving incidents, parking-lot scrapes, etc.
Then they compare that numerator to Tesla’s own “minor collision” benchmark — which is not police-reported fender benders; it’s a telemetry-triggered “collision event” keyed to airbag deployment or delta-V ≥ 8 km/h. Different definitions. Completely bogus ratio.
Any comparison to police-reported crashes is hilariously stupid for obvious reasons.
On top of that, the denominator is hand-waved ("~800k paid miles extrapolated"), which is extra sketchy because SGO crashes can happen during non-paid repositioning/parking while "paid miles" excludes those segments. And we’re talking 14 events in one geofenced, early rollout in Austin so your confidence interval is doing backflips. If you want a real claim vs humans, do matched Austin exposure, same reportable-crash criteria, severity stratification, and show uncertainty bands.
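To make that concrete, here's a rough sketch (Python, assuming scipy is available) of what an exposure-matched rate comparison with uncertainty bands would look like. The 14 events and the ~800k-mile denominator are just the figures floating around this thread, not verified numbers, and the 1.5M-miles-per-collision benchmark uses a different crash definition, so the last line is apples-to-oranges on purpose:

```
# Illustrative only: crash rate per million miles with an exact Poisson CI,
# using the thread's assumed numbers (14 reported events, ~800k miles).
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure_miles: float, conf: float = 0.95):
    """Exact (Garwood) Poisson confidence interval, scaled to per-million-miles."""
    alpha = 1.0 - conf
    lo = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 1e6 / exposure_miles
    return events * scale, lo * scale, hi * scale

# Robotaxi side: 14 events over an assumed ~800k miles of exposure.
rate, lo, hi = poisson_rate_ci(events=14, exposure_miles=800_000)
print(f"robotaxi: {rate:.1f} per 1M miles (95% CI {lo:.1f}-{hi:.1f})")

# Benchmark side: 1 "minor collision" per 1.5M miles, i.e. ~0.67 per 1M miles.
# Note this is Tesla's telemetry-based definition, not the SGO one.
print(f"benchmark: {1e6 / 1_500_000:.2f} per 1M miles (different definition!)")
```

Even taking the thread's numbers at face value, the 95% interval spans roughly 10 to 30 crashes per million miles, which is exactly the "confidence interval doing backflips" problem; and none of this fixes the mismatched crash definitions.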
But what you get instead is clickbait, so please stop falling for this shit, HN.
Though maybe the safety drivers are good enough for the major stuff, and the software is just bad enough at low-speed, short-distance collisions, where the drivers don't notice as easily that the car is doing something wrong before it happens.
We are still a long, long, long way off from someone feeling comfortable jumping into an FSD cab on a rainy night in New York.
There's no real discussion to be had on any of this. Just people coming in to confirm their biases.
As for me, I'm happy to make and take bets on Tesla beating Waymo. I've heard all these arguments a million times. Bet some money.
https://www.cnbc.com/2026/01/22/musk-tesla-robotaxis-us-expa...
Tesla CEO Elon Musk said at the World Economic Forum in Davos that the company’s robotaxis will be “widespread” in the U.S. by the end of 2026.
If Tesla dropped the ego, they could obtain Waymo's software and track record on future Tesla hardware.
Given the way Musk has lied and lied about Tesla's autonomous driving capabilities, that can't be much of a surprise to anyone.
>The new crashes include [...] a crash with a bus while the Tesla was stationary
Doesn't this imply that the bus driver hit the stationary Tesla, which would make the human bus driver at fault and the party responsible for causing the accident? Why should a human driver hitting a Tesla be counted against Tesla's safety record?
It's possible that the Tesla was stopped in a place where it shouldn't have been, like in the middle of an intersection (like all the Waymos during the SF power outage), but Electrek isn't sharing details about each of these incidents.
>The new crashes include [...] a collision with a heavy truck at 4 mph
The chart shows only that the Tesla was driving straight at 4mph when this happened, not whether the Tesla hit the truck or the truck hit the Tesla.
Again, it's entirely possible that the Tesla hit the truck, but why aren't these details being shared? This seems like important data to consider when evaluating the safety of autonomous systems - whether the autonomous system or human error was to blame for the accident.
I appreciate that Electrek at least gives a mention of this dynamic:
>Tesla fans and shareholders hold on to the thought that the company’s robotaxis are not responsible for some of these crashes, which is true, even though that’s much harder to determine with Tesla redacting the crash narrative on all crashes, but the problem is that even Tesla’s own benchmark shows humans have fewer crashes.
Aren't these crash details / "crash narrative" a matter of public record and investigation, e.g. by NHTSA or by local law enforcement? If not, shouldn't they be? Why should we, as a society, rely on the automaker as the sole source of information about what caused accidents with experimental new driverless vehicles? That seems like a poor public policy choice.
No idea how these things are being allowed on the road. Oh wait, yes I do. $$$$
"4x worse than humans" is misleading; I bet it's better than humans, by a good margin.
In before 'but it is a regulation nightmare...'