
Isn't there already a mounting body of statistics comparing accidents in self-driving cars versus manually driven ones, showing that the difference is night and day and that self-driving cars are statistically safer?


Not at all. Self-driving cars cannot yet safely navigate complex or unusual scenarios. The human backups in test cars still have to take over with good regularity. That's not something you'd see if it were anywhere close to ready. Even with all their data, there aren't enough training examples of every situation. Deep learning models also have a tendency to be pathologically wrong in rare but completely unpredictable circumstances.

Tesla's self driving has resulted in cars swerving into gore points, for example.


> Not at all. Self-driving cars cannot yet safely navigate complex or unusual scenarios.

That's assuming accidents are caused by complex and unusual scenarios rather than by humans being inattentive during routine tasks. If machines fail at complex tasks but fail safely, that's not an argument against self-driving cars being safe; it's only an argument about their limited usefulness.


> If machines fail at complex tasks but fail safely then that's not an argument against self-driving cars being safe, it's only one about their limited usefulness.

They don't fail safely, not at the moment. The software detects that the situation is outside its capability space and says "Jesus, take the wheel!"

For all practical intents and purposes, the only reasonable way to interpret the current self-driving car statistics is to treat every single human intervention as a would-be accident. That rate is somewhere in the 1 per 10,000 km ballpark. That's - for now - orders of magnitude worse than human drivers.
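A back-of-envelope sketch of that comparison (the human crash rate below is my own rough assumption, not a figure from this thread):

    # Compare the 1-intervention-per-10,000-km ballpark above against a rough
    # human crash rate. Assumes ~6.5 million police-reported US crashes over
    # ~3.2 trillion vehicle miles per year -- my assumption, not thread data.
    KM_PER_MILE = 1.609

    human_crashes_per_km = 6.5e6 / (3.2e12 * KM_PER_MILE)  # ~1 crash per 800,000 km
    interventions_per_km = 1 / 10_000                       # ballpark figure above

    ratio = interventions_per_km / human_crashes_per_km
    print(f"interventions run at ~{ratio:.0f}x the human crash rate")  # roughly 80x

Counting every intervention as a would-be accident, that's roughly two orders of magnitude worse than the human crash rate, and the gap to the human fatality rate would be larger still.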


> For all practical intents and purposes the only reasonable way to interpret the current self-driving car statistics is to treat every single human intervention as a would-be accident.

Not necessarily. If the AI's response is to stop the car when it can't figure something out, then the cost of each of those situations is the car not continuing on as quickly as it should (and maybe stuck cars, if we don't have a way to override the stop and take control manually).

Not every handover to a driver is an accident.


I wonder how much of that fail-safe difficulty is down to humans too. The car can't just slow down on a highway, after all, given the reaction time of other drivers.


No, it gives back control if it can't continue in the situation. That means it can't handle the scenario at all. Other drivers are completely irrelevant. Even if you took other drivers out of the equation, you'd still need to deal with pedestrians, cyclists, cars blowing tires and veering out of control, etc.

Not to mention that many of the times control has to be taken have nothing to do with other drivers or even other cars.

The reality is that a car can only be driven by a general artificial intelligence. We're nowhere in that ballpark.


Point taken. However, if you take a look at statistics regarding the leading causes of accidents in the US, for example by going to https://www.after-car-accidents.com/car-accident-causes.html

It seems that most accidents are caused by human error and humans being distracted or under the influence of a substance.

Weather and complex situations would rank only at the bottom of the list and could therefore be considered a YAGNI problem.

If you removed the human element from driving, the top four causes of accidents on the road would already be drastically reduced.

So at the moment a self-driving car may not be able to handle some of the most extreme situations that a human driver could handle. However, I would challenge that assumption and simply ask: does it have to?


> Weather and complex situations would only rank at the bottom of the list and therefore could be considered a YAGNI problem.

This is a logical fallacy. You are inferring from a comparatively low incident rate that the condition must also be rare, which is not a reasonable conclusion to make.


Breaking down the causes you linked individually:

#1 was speeding. Self-driving cars could trivially be programmed not to do that, of course. But this one is tricky to interpret, because Americans habitually speed. That means two things. First, speeding can trivially be listed as a factor in almost any accident; it wouldn't be much less meaningful to list bucket seats as a factor. Second, not speeding might actually be more dangerous. That leaves me thinking this one is equivocal for the purposes of the debate.

#s 2, 3, 4 and 6 are drunk driving, distractions, cell phones, and driver fatigue. Those are easy points for self-driving cars, of course.

#5 is weather. This feels like an easy one in favor of humans, considering that all the major self driving car companies deal with weather by avoiding it. That's not much of a vote of confidence.

#6 is red light accidents. I would want to see more about override statistics before interpreting this one. It could be that self-driving cars literally never miss a red light. But, given that some of the intersections in my city have quite confusing traffic light arrangements - reflectors that make the light invisible outside of certain angles, lighted intersections spaced 40 feet apart that are separate lighted intersections nonetheless, printed signs informing drivers of non-standard semantics for that light, wildly varying yellow light durations, etc. - I have my doubts about self-driving cars being able to properly deal with the traffic lights. If there are any tests going on in a similar city, I wouldn't be at all surprised to find out that operator overrides are routine at lighted intersections.

Zooming out to the big picture, though - the ones that are a clear win for self-driving cars are all instances where the human's decision-making capacity has been impaired. Which implies that humans are really quite good at operating vehicles when they aren't being stupid. But it also implies something that I think doesn't get enough credit among many advocates of self-driving cars: behind the wheel of every safe self-driving car, there's a highly trained human operator who is not drunk, not tired, and not diddling around with their cell phone. And we've got evidence to suggest that, when the human operator is a bit more human, the self-driving car's accident rate spikes considerably.

And that's a big deal. It's common to place those errors on the shoulders of the car's driver, but that's trying to have your cake and eat it too. You can't wave away every fatality involving a car on autopilot by saying, "Oh, that wasn't the car's fault, the driver was being an idiot," and also claim that we've got good evidence to suggest that we know how to make cars operate safely without relying on a human driver to act as a backstop for the fallibility of the human programmers. It may eventually come - I think we all hope it does - but given how, in AI, the last year or two's worth of R&D always ends up taking another decade or two before everyone just gives up on the whole enterprise and moves on to a greener field, a little skepticism is far from unwarranted.


The problem is that the datasets are incompatible. Tesla's numbers are very impressive until you realize that the system will currently only engage in scenarios that are less prone to accidents across the board.

As the number of situations where self-driving kicks in increases to include more complex environments, it's likely the numbers will shift.


Even though the numbers will shift, Tesla can't afford to have worse statistics than human drivers.

That's why it takes years to enable city-level driving (L2, and maybe L3 during rush hour).

At the same time it really looks like Tesla will enable it this year, as Elon's predicted timelines are getting shorter.


No. There is not yet enough data to say that Tesla's or anyone else's current technology is better than human drivers. You likely need tens or hundreds of billions of miles to prove that. However, since accidents are a very low-probability event, it would take substantially less data to prove that this technology is drastically less safe than human drivers, and that hasn't shown itself in the data either. Anyone on either side who points to specific numbers as proof either doesn't understand the statistics or is intentionally trying to mislead you.
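For a sense of scale, here is a rough power calculation, assuming Poisson-distributed fatalities and a US baseline of about 1.2 deaths per 100 million miles (both of these are my own illustrative assumptions, not figures from this comment):

    import math

    # How many autonomous miles would it take to demonstrate a fatality rate
    # 20% below the human baseline? Poisson counts, illustrative numbers only.
    baseline = 1.2e-8              # human fatalities per mile (assumed)
    improved = 0.8 * baseline      # hypothetical fleet that is 20% safer
    z_alpha, z_beta = 1.645, 0.84  # 95% one-sided confidence, 80% power

    # Variance-stabilizing square-root transform for Poisson counts:
    # sqrt(X) is roughly Normal(sqrt(mu), 1/4), so the required mileage M satisfies
    # sqrt(M) * (sqrt(baseline) - sqrt(improved)) >= (z_alpha + z_beta) / 2.
    miles = ((z_alpha + z_beta) / (2 * (math.sqrt(baseline) - math.sqrt(improved)))) ** 2
    print(f"~{miles / 1e9:.0f} billion miles")  # on the order of 10 billion miles

Under those assumptions, even detecting a 20% improvement takes on the order of ten billion miles; smaller improvements take far more, while a drastically worse technology would show up in far less data.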


Self-driving cars killed one pedestrian in 10 million miles driven, which is about an order of magnitude worse than humans.
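For a rough sense of scale, compare against the overall US rate of about 1.2 traffic deaths per 100 million miles (my own assumed baseline, not a figure from the thread; using the pedestrian-only rate would make the gap larger still):

    human_rate = 1.2e-8   # all US traffic deaths per mile (assumed baseline)
    av_rate = 1 / 10e6    # one death in 10 million autonomous miles (figure above)
    print(f"~{av_rate / human_rate:.0f}x the human rate")  # roughly 8x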


But that’s a sample size of one, so we can’t really draw any meaningful conclusions from that data. (Also the safety driver was watching Netflix on her phone; it could be argued that the vehicle was not production-ready and not expected or intended to be able to handle every situation.)


Let's say everyone replaced their current cars with self-driving vehicles and they never killed anyone ever again. If normal vehicle usage continued and fifty years passed without a single traffic death, your claim suggests we'd still be unable to say if self-driving cars were safer, as it would still be a sample size of one.

I don't think your use of statistics is correct.
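A rough way to see why exposure, not a count of distinct outcomes, is what matters (the numbers below are my own illustrative assumptions, not data from the thread):

    import math

    # If the human fatality rate still applied over fifty years of US driving,
    # how surprising would zero deaths be? Assumes ~3.2 trillion vehicle miles
    # per year and ~1.2 deaths per 100 million miles.
    miles_per_year = 3.2e12
    human_rate = 1.2e-8    # fatalities per mile (assumed)
    years = 50

    expected_deaths = human_rate * miles_per_year * years  # ~1.9 million
    p_zero = math.exp(-expected_deaths)                    # Poisson P(zero events)
    print(f"expected deaths at the human rate: {expected_deaths:.2e}")
    print(f"P(zero deaths | human rate): {p_zero:.1e}")    # underflows to 0.0

Fifty years of zero deaths against an expected count in the millions would be overwhelming evidence, "sample size of one" or not.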


I'm not sure that sample size is a relevant idea here, but it's certainly misleading as you state it. One accident in many billions of miles would be a wonderful result, but you would dismiss it as a sample size of one.


Is that data on self driving cars or assisted driving? I think disengagement statistics also need to be factored in.


That's the Uber "Level 3" accident.



