I'm increasingly of the opinion that technology like this will never see broad regulatory approval. I can't see any agency being comfortable signing off on an "acceptable" level of losses, and I don't see any endgame in which this technology does not produce a small but non-zero number of fatal errors.
On a regulatory basis, true self-driving only needs to outperform humans. Humans are good at driving but imperfect; we have a nasty tendency to get ourselves into accidents, both fatal and non-fatal.
My thought was that the public would fear and refuse to use the technology, but some brave souls are jumping right in.
My opinion: full self-driving is only a matter of time. At some point in the future, insurance companies will figure out that self-driving is less dangerous than human drivers and start offering lower premiums and deductibles for self-driving-only cars.
The fact about current car safety is that it's already really quite good. In modern cars and "autopilot-feasible conditions" you are talking well below 1 fatality per billion vehicle miles travelled with regular human drivers.
This means that if a manufacturer has sold 1 million cars, each needs to drive 100,000 miles with autopilot enabled (100 billion fleet miles in total) before the insurance company has enough statistics to say "this is safer than a human".
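A rough back-of-envelope sketch of that arithmetic in Python (the 0.5-per-billion human baseline is an assumed figure consistent with the "well below 1 per billion" estimate above, not measured data):

    # Back-of-envelope sketch of the fleet-statistics argument above.
    # All inputs are illustrative assumptions, not measured data.
    fleet_size = 1_000_000        # cars sold
    miles_per_car = 100_000       # autopilot miles each car must log
    human_fatality_rate = 0.5e-9  # assumed baseline: 0.5 fatalities per billion miles

    total_miles = fleet_size * miles_per_car                 # 1e11 = 100 billion miles
    expected_fatalities = total_miles * human_fatality_rate  # ~50 under the human baseline

    print(f"Total fleet miles: {total_miles:.2e}")
    print(f"Expected fatalities at human baseline: {expected_fatalities:.0f}")

With roughly 50 expected fatalities under the human baseline over those miles, an observed count well below that would start to be statistically meaningful; with far fewer miles, the expected counts are too small to tell the two rates apart.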
They might be able to extrapolate from non-fatal accidents, because they care more about damage, which costs money to repair, than about fatalities. But I take your point: a lot of miles need to be traveled before you'd want to build it into your actuarial models.
No, the "facts" you keep reading about (from the same companies trying to sell you on the technology) are extremely misleading.
Tesla, for instance, computes that statistic against "average driving", but average driving includes a lot of city miles, whereas most Autopilot miles are logged on highways, where Tesla recommends enabling it.
Accidents happen much less often per mile on highways, so of course the statistic looks better. Put Autopiloted Teslas in city traffic and then see how it fares; my guess is it would become much, much worse.
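A toy calculation makes the confound concrete; every rate below is hypothetical, chosen only to show how mixing road types flatters the comparison:

    # Toy illustration of the base-rate confound described above.
    # Every number here is hypothetical.
    city_rate = 8.0     # fatalities per billion miles, city driving
    highway_rate = 2.0  # fatalities per billion miles, highways

    # "Average driving" mixes both road types, say 50/50 by mileage.
    average_rate = 0.5 * city_rate + 0.5 * highway_rate  # = 5.0

    # Suppose the system is only modestly better than humans on highways.
    autopilot_highway_rate = 1.6

    # Comparing highway-only miles against the mixed average turns a
    # 25% improvement into an apparent >3x safety advantage.
    print(f"Apparent advantage: {average_rate / autopilot_highway_rate:.1f}x")      # 3.1x
    print(f"Real highway advantage: {highway_rate / autopilot_highway_rate:.2f}x")  # 1.25x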
The more meaningful statistic is that even Waymo, which is about an order of magnitude better than anything else on the market, has an "incident" where a human driver needs to intervene roughly every 5,000 miles. For everyone else, a human driver needs to intervene every few hundred miles.
That's far from the "self-driving" technology we were promised.
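To put those intervention rates in everyday terms, here is a quick sketch; the annual-mileage figure is a commonly cited US average, and "every few hundred miles" is read as 300, both assumptions:

    # Convert intervention rates into interventions per year of typical driving.
    # Annual mileage and the "few hundred" figure are assumptions.
    annual_miles = 13_500                  # rough US average miles driven per year
    waymo_miles_per_intervention = 5_000
    others_miles_per_intervention = 300    # "every few hundred miles"

    print(f"Waymo:  ~{annual_miles / waymo_miles_per_intervention:.1f} interventions/year")  # ~2.7
    print(f"Others: ~{annual_miles / others_miles_per_intervention:.0f} interventions/year") # 45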
Two relevant posts from someone who used to lead the Waymo project, before it was named Waymo:
"At some point in the future insurance companies will figure out that self driving is less dangerous than human drivers and start offering cheaper premiums and deductibles to self driving only cars"
Where does the faith in the tech and its future come from, though? Wouldn't it be more logical to wait and then say "hey, Geico is offering a 20% discount if I buy a new car with these features; I guess they must really make a difference" rather than proclaiming in advance how much safer they are?
> Where does the faith in the tech and its future come from, though?
Well, now that you've put me on the spot and have to think about it: baseless, I suppose. Just a general faith that technology always improves.
> Where does the faith in the tech and its future come from, though?
https://economictimes.indiatimes.com/small-biz/security-tech... ("The Cambrian Explosion in Technology, and How It's Affecting Us": seeing the progress made across various fields over the last few years, we can ask whether we are witnessing a Cambrian explosion in technology today)