Tesla driver arrested for homicide after running over motorcyclist on Autopilot (electrek.co)
97 points by brohee on April 24, 2024 | hide | past | favorite | 95 comments


Driver's fault 100%.

But Tesla is not blameless with its marketing: "Autopilot", "Full Self-Driving", blah blah, giving people a false sense of security. I can't think of a worse problem to try to solve with AI. GPT hallucinates and gives a wrong fact: no biggie, just annoying. Tesla FSD hallucinates and runs over a child: a biggie.


Exactly. Tesla shares responsibility for marketing something deceptively that can cause serious incidents. We require toy manufacturers to put clear labels that the toy is not appropriate for certain ages, but allow Tesla to market something as dangerous as Autopilot and FSD, implying that the user is not needed.


> Tesla shares responsibility for marketing something deceptively that can cause serious incidents

The driver is criminally liable to the estate of the young man he killed. Tesla may be liable for money damages to the driver they misled. (Maybe the estate also has a civil claim on Tesla if they can show its marketing was grossly negligent.)


Why are you so sure that criminal liability ends with the driver? Washington’s first degree manslaughter law requires “recklessly caus[ing] the death of another person.” Second degree requires merely causing the death of another person “with criminal negligence.” Criminal negligence requires Tesla to “fail to be aware of a substantial risk that a wrongful act may occur” and that this failure be “a gross deviation from the standard of care that a reasonable person would exercise in the same situation.”

It’s certainly not hard to make the case for either, especially given Tesla’s apparent inaction after a number of people have died while Autopilot is active and a number of accidents have occurred while Autopilot is active. Cf the Crumbleys: https://www.bbc.com/news/world-us-canada-68223118

Also a pedantic point: the driver is civilly liable to the estate. Criminal charges (and hence criminal liability) can only be brought by the state.


> Why are you so sure that criminal liability ends with the driver?

I'm not. But it would be precedent setting. Given this case has the driver admitting to cops that he was distracted in a way Tesla's user manual presumably tells you not to be, I'm not sure I'd start here.

> pedantic point: the driver is civilly liable to the estate

You're right, I was being loose with my words.


I would additionally apportion some blame to the YouTubers who cherry-pick FSD footage and make ridiculous claims about how good it is, especially those who also use defeat devices so they can do it with their hands off the wheel.


The big difference is that Tesla is the manufacturer and thus has much higher credibility (and responsibility not to deceive), than a random YouTube video.


What if "Autopilot"/"FSD"/computer-assisted driving hallucinates and drivers using it run over children but at a lower rate than people who are not using computer-assisted driving run over children?

If the bar for progress in automobiles is perfection (and not just better than humans), then the only answer is to use automobiles less or have stricter training standards for drivers.

I do think there needs to be strong proof that it is better to avoid liability. And I agree Tesla's marketing is bad and should make Tesla liable.


It comes down to the simple fact that somebody is going to hold the liability bag.

Tesla and similar obviously are not pushing to take on that liability, courts/legislatures aren't particularly interested in putting that liability on the manufacturers. So that leaves the drivers to try and offload the liability from themselves.

Unfortunately drivers are not particularly well equipped to push that liability off themselves, so there it stays. Insurance companies are better equipped for that sort of thing, but they're only interested in the financial liability. The criminal liability is definitely not going anywhere anytime soon.


> drivers are not particularly well equipped to be pushing that liability off themselves so there it stays

It's premature to conclude this. We don't have enough real-world cases off which to construct a framework.

> Insurance companies are better equipped for that sort of thing but they're only interested in the financial liability

Even this hasn't been cleanly severed in any competent jurisdiction.


> It's premature to conclude this. We don't have enough real-world cases off which to construct a framework.

Precisely, it's premature to conclude that anybody but the driver is at fault. Until a sufficient body of law has been built towards the alternative it will remain by default with the driver.


I think it depends on how the car ran over the children (or anyone in that case). If it was an unavoidable freak accident, then you can't really blame the car.

But, if the car literally didn't see a person in front of it (where a human reasonably would 99.9999% of the time) because its cameras malfunctioned or the LLM read it as something else, then those cars should not be on the road.


I actually disagree, if it can be shown to be safer overall then I don’t think it should be required to be a strict superset of human abilities.

AI will always behave differently to human cognition, we should expect them to have different illusions and different failure cases. That doesn’t mean worse, if they perform better than a legal human driven car on average then I don’t think there’s a justification to exclude them.

That said, they will probably have to be _significantly_ better than human drivers in order to survive the media and public perception, so this might be irrelevant in the end.


To play devil's advocate. Say that on average it was 10x or 100x safer than human drivers, but for whatever reason, there was a 1/1,000,000 chance that the self driving car would plow into traffic at a red light because some black-box instruction told it it was the thing to do, very likely injuring or killing you. But overall it was better in all other cases. Would you take that risk?


There is no LLM in Tesla cars. Otherwise correct. I'd rather expect they use a convnet or a vision transformer, probably the former.


It’s not called “almost autonomous driving”, it’s named incorrectly and as a result falsely marketed.


> not called “almost autonomous driving”, it’s named incorrectly

Article says Autopilot was engaged, not FSD. You can't just check out after engaging autopilot on a plane.


> GPT hallucinates and gives a wrong fact. No biggie, but annoying.

Sometimes also a biggie: https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-a...


In a few hundred years, we are going to decide that people are allowed to have agency over their own lives.

We are just too primitive/traditional to understand.

Anyway, after a 6-month health issue, I will literally laugh at people who think there is some value in enduring pain. Children.


If Tesla is not blameless, then it is not driver's fault 100%. Total blame cannot exceed 100%.


It's not like the juries of two different cases get together and decide how to apportion the penalty.


Depends on the jurisdiction. Some states allow fault to be apportioned to third parties, and some forbid it entirely. [1]

[1] https://pro.bloomberglaw.com/insights/litigation/contributor...


But it can and often does. They do not "share" the blame, but are both at fault (IMHO).


There is no law of conservation of blame. Why would there be?



Liability for paying amounts of money is not at all the same thing as blame.


Are we talking about torts or crimes?

If we're talking about tort law, then yes, damages are measured in dollars. And those damages are absolutely subject to rules about how they are split when multiple people are liable.

If we're talking about crimes, then "percent of blame" is nonsensical. There is no percent liability for a crime, you either are guilty or you are not.


I feel like 100% is not meant to be taken literally here.


I vote for letting numbers be literal.


Well 100% of the votes went to not doing that.


If there was more than one tort or crime that happened, then yes, there could be.


There are plenty of situations where total blame adds up to less than 100%, where multiple people make honest mistakes.

This situation is the opposite of that, because multiple people were knowingly reckless.

Edit: By "multiple people" I mean the driver and Elon.


Ask some people on death row. 3-4 people were arrested and sentenced to death for the exact same crime.


<ianal>Clearly FSD is defective if it correctly doesn't plow into motorcycles most of the time but fails in some low % of cases. That's a products liability lawsuit, and some attribution of fault in this case (idk, 1-10%?)


> Clearly FSD is defective

FSD is not Autopilot.

FSD does a pretty good job of pedestrian and cyclist identification. It’s supposed to.

Autopilot (which is what I use most)… I dunno… I’ve never had a problem in SF, LA, or Vegas.


FSD is also much scarier to use because it does ridiculously dangerous stuff all the time. (for me in Vegas, having had it since the original beta)

I've stopped using it unless I'm the only person on the road.


> FSD is also much scarier to use because it does ridiculously dangerous stuff all the time

That was my original experience. It was so bad that I strongly recommended people not get it.

That said, the new FSD update (supervised beta) has been super smooth both on the interstate and in town for me. Have you tried it?


I have tried it, and while it's a bit better, it's still nowhere near safe enough to use around other cars/pedestrians imo. Other people's thresholds for safety and for looking like a tool on the road may be different, but for me, it's a no-go.


Doesn't it seem weird that Autopilot and FSD might be using different obstacle recognition systems? The inputs (sensors) are identical, so why would Autopilot's obstacle recognition be worse?


> so why would Autopilot's obstacle recognition be worse?

Great question. I don’t know if they are different.

FSD is asked to do way more than AP… that’s all I’m certain of.


Are there cyclists in Las Vegas?

I always want Tesla to test their systems in a European urban center, see how they deal with tourists and trams set in picturesque 19th century streets.


Tesla does not even use it in the underground one-lane tunnel, the Las Vegas loop [1], instead opting for paid drivers operating the cars fully manually.

[1] https://en.m.wikipedia.org/wiki/Las_Vegas_Convention_Center_...


Also, Canadian or even Minnesotan winters. My working assumption is that anybody who thinks automatic driving is close to solved has never hit a patch of black ice under a thin layer of snow in whiteout conditions. Good luck relying on cameras alone!


> automatic driving… in whiteout conditions

Neither FSD nor AP will be allowed to turn on or stay on in these conditions.


Oh, I'm well aware that those features are so entirely useless that even Tesla won't let you use them in some situations. Rather than focus on the hard part, Tesla is tackling the easy stuff because it creates the illusion of progress. For those of us for whom snow and rain and low visibility are common driving conditions, this is all a rich joke.


Tesla: Designed in and for California.


> Are there cyclists in Las Vegas?

Holy shit, yes. And a bunch of other crazy pedestrian shit.

I mean, it’s not a biking Mecca or anything, but cars and trucks aren’t your only concern in Vegas.

SF sort of takes the cake of those three, with cyclists, one-wheelers, scooters, pedestrians, and folks who are flying on something only they experience. I've done AP there on certain streets (esp. with stop-and-go traffic) with no problem.

The peripheral areas of greater LA also have lots of non-auto traffic to deal with.


Does the US not regulate marketing? They’ve been lying for years now


How many times did Andrej Karpathy sit smiling uncomfortably, by the side of Elon Musk, while he did his loony dance on FSD with no extra comment? I wonder if he feels any responsibility?

https://youtu.be/BFdWsJs6z4c

https://youtu.be/i5tjTACY_3Q


No one who has driven a Tesla with "Autopilot" for 5 minutes is under the delusion that it is a level 5 autonomous driving system.

It is amazing and useful, and assuredly makes me a better driver than I am without it.

I am boggled at the scolds who think the "deceptive" branding contained _in a single word_ is sufficient to override the lived experience of driving with this fantastic (if flawed?) tool.


Clearly an issue they will solve in the next 4 months... \s

"Tesla will unveil a robotaxi on August 8, according to Musk" - https://www.engadget.com/tesla-will-unveil-a-robotaxi-on-apr...


I'd rather the article were more precisely titled "manslaughter" rather than "homicide." While in legal jargon homicide encompasses manslaughter, murder, and a few other things, to many people homicide is synonymous with murder.


Journalists normally report the charge as written in court documents. Washington state appears to have "vehicular homicide" defined as a specific offense, but not "vehicular manslaughter". It would not be more precise to report an incorrect name for the charged offense.


I was confused by this too.

In the article they do clarify it is "vehicular homicide" which is the same thing as "vehicular manslaughter" legally.

A very serious crime, though people often walk away from it with little or no jail time.


It's not an unreasonable outcome. The goal of the criminal system is to reduce crime, not random vengeance.

I can certainly think of a time when I was driving when I missed another car in my blind spot and almost caused an accident, or when I saw someone in a car distracted by a toddler, or similar. Those didn't lead to accidents, but they might have if someone were less lucky.

There's a range of ways a car can kill people, ranging from driving through a red light at 100MPH, to an illegal U-turn, to making a stupid mistake, to a completely random fluke of circumstance.

On one end of the spectrum, there should be prison time, and regardless of whether an accident happens. On the other end, insurance should pay damages, but I'm not sure what good the criminal system can do in terms of deterrence.

If a driver is already doing their best to be safe, but slips up, or even isn't doing their best but isn't being unreasonable, criminal penalties don't seem like the right outcome.


Elon's refusal to adopt lidar - statistically probably fine, anecdotally it could turn out very badly for any one person which is a hard thing to swallow if it's you...


Tesla's direction on sensors seems quite ass backwards to me. Didn't they even get rid of parking sensors and replace them with AI vision?


I'm not saying what the better option would be (because I don't know), but many people approach the problem from a very myopic point of view.

Adopting Lidar would of course provide Tesla with higher-quality input for their self-driving model. But the quality of the input isn't the whole equation; you need to process it as well. In other words, adopting Lidar would incur costs not only on the hardware side, but also on the software side, which of course would result in more expensive cars. More expensive cars means fewer cars sold, and fewer cars sold means less data, which in turn means less input.

Does this result in a worse model? Again, I don't know, but I do know that the issue is more complicated (and not only because of the reasons I mentioned here) than many people seem to think.


You have it wrong. Processing LIDAR is way, way more computationally efficient than processing camera footage into a 3D model. LIDAR feeds you direct distance and speed data. Cameras of course do not, meaning you have to try to compute it, something which is very hard and error prone for even the most powerful computers that have ever been created (human brains).


You aren't wrong, but my assumption was (perhaps incorrectly?) that they would add Lidar sensors in addition to their cameras, not replace them.


Basically every AV company except Tesla is doing cameras + LIDAR. Tesla decided to do camera-only for definitely-not-cost-cutting-reasons.


> “… camera-only for definitely-not-cost-cutting-reasons.”

Is this sarcasm?


Yes


It makes a lot of sense if you're trying to churn out more profit per unit (lower costs), but they're at the mercy of a sour market atm on the other side of it.


Honestly, I'd love for Tesla to be civilly liable here, and not the driver. That aligns incentives correctly. Without that, there is every incentive to market "full self-driving," while investing in a system with the intelligence of a trained parrot.

As much as it's absolutely the driver's fault, it's also an inevitable accident.

Wrongful death is about $500k-$1M (but should be higher). Tesla market cap is $500B. Damages for O(one million people) are equal to Tesla's market cap, so this is not even bad for business; it's simply factoring in externalities into the cost of doing business.

A good split might be criminal system for the driver (which ranges from fines to prison time, and is designed to be punitive), and civil system for Tesla (which is only financial, but designed to cover the financial cost of the death to the family).


A big bad news headline alone is worth -$1M to tesla.

I really wish every single death ended up splitting a fine amongst all companies even tangentially involved.

Grandma fell down the stairs and died? Well fine the architect who designed those stairs, and the builder who built them.

Sure, all those companies will pass the cost back to the consumer, but the government will pass the fine revenue back to the consumer in the form of reduced taxes.

Ends up net zero overall, but everyone now has a real incentive to look out for the health and safety of others.


No, they won't pass the cost back to the consumer. The cost structure is:

- Unsafe product is shipped

- Damage is done

- Someone pays for it (ultimately the consumer)

The difference is that if the consumer directly pays for it, companies have every incentive to make unsafe products, and those costs are very, very high.

If the companies pay for it, the free hand of capitalism steps in, and optimizes such that any safety measures where expected returns are positive are implemented (and those where expected returns are negative, not).

If a death costs $1M, and I can do a safety measure which reduces odds of death by 0.01% for $100, it's cost-neutral. If that same measure costs $50, an efficient business will implement it. If it costs $200, it won't.

That allows global optimization. By setting the price of death, you can have a set point for what safety measures will be taken, the system optimizes adequately well, and deaths go down, so fewer costs are passed back.
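The expected-value rule in the comment above can be sketched in a few lines; all numbers are the comment's own hypotheticals (a $1M price per death, a measure cutting the odds by 0.01%), not real figures:

```python
# Hypothetical illustration of the "priced death" safety rule described above.
death_cost = 1_000_000           # liability price set per death, in dollars
risk_reduction = 0.0001          # the measure reduces odds of a death by 0.01%

# Expected liability saved by adopting the measure: $1M * 0.0001 = $100.
expected_benefit = death_cost * risk_reduction

def worth_implementing(measure_cost: float) -> bool:
    """A profit-seeking firm adopts a safety measure only when its
    expected liability savings meet or exceed its cost."""
    return expected_benefit >= measure_cost

print(worth_implementing(50))    # True: a $50 measure saves $100 in expected liability
print(worth_implementing(200))   # False: a $200 measure saves only $100
```

Raising or lowering the legal price of a death moves the threshold, which is the "set point" the comment describes.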


Lidar is blinded by reflective signage.


And humidity


Related video "Tesla Autopilot Crashes into Motorcycle Riders - Why?"[0], summarized to: vision used by Tesla seems to process motorcycles differently, and may be incorrectly "assuming" the closer spaced brake lights on a motorcycle is actually a far away car.

More details on the homicide here[1], which shows the crash happened during daylight hours and the bike resembles a sport bike. This is a different condition than my referenced video (night collisions with cruiser-style motorcycles), but I suspect similar incorrect assumptions by Tesla vision happened.

[0]https://www.youtube.com/watch?v=yRdzIs4FJJg

[1] https://www.king5.com/article/traffic/traffic-news/tesla-on-...


Another theory I've heard is that the driver was holding down the accelerator to prevent phantom braking. If this is true, Tesla will likely respond fairly quickly to prove it wasn't them. So the longer they don't, the less likely this theory is.


> Another theory I've heard is that the driver was holding down the accelerator to prevent phantom braking.

Would be interesting to know how commonly this workaround is applied by Tesla owners. If it's common enough, this seems like a case where a feature that's merely unreliable becomes a safety issue due to second-order effects. Echoes of Therac-25 [1].

[1] https://en.wikipedia.org/wiki/Therac-25


Keep in mind that so far we only have the driver's assertion that they were using Autopilot at the time. The driver may be attempting to shift blame.


Brand new Model Y with the latest software did 4 really dangerous phantom-braking stunts. I engaged the system 5 times in total. It's called Enhanced Autopilot. I can't understand how people trust this kind of system with their lives. Maybe in the USA it works much better than elsewhere. But I will never ever turn it on again. For the record, I didn't buy it; I got a 3-month trial for using a referral link.


Independent of the incident itself, the article made it sound like the driver would have benefited from hiring a lawyer before making statements to police.


Sounds like he passed the breathalyzer, yet still admitted to 'having a drink' before driving...

What a fool...


If he's under the legal limit, does it matter if he had a drink?


Yes. You can be judged to be intoxicated even if you are under the limit.


There have been various discussions over the years of adopting and modernizing the model of equine law, which dealt with injuries from horses and carriages, another type of autonomous / semi-autonomous vehicle.

In this case, it would help resolve whether the people behind the vehicle share some of the blame.

An excerpt from: https://www.forbes.com/sites/rahulrazdan/2020/01/07/horses-e...

>How does the legal system adapt to new technologies? Generally, this is done by constructing new legal theories that should not conflict with older models and also have characteristics of stability and rationality. What might be the potential legal theories for Autonomous Vehicles? Here are the current candidates:

>Negligence: Today, a typical example includes impaired driving. An impaired AV?

>Negligent Entrustment of Vehicles: Here the driver was negligent, but the owner is liable because they should not have trusted the driver. Can you be found negligent if you trust your Tesla AutoDrive?

>Res Ipsa Loquitur: In this theory ("the thing that speaks for itself"), the accident would not have occurred if not for some action from the plaintiff. By applying this logic, the plaintiff caused the accident because they became startled by an AV homing feature because it was surprising.

>Product Liability and Warranty: Are there implied warranties associated when you buy an AV? Can it be proven that some AV vendors are safer than others? If so, do all AV vendors have to come to some standard?

>At this point, it is not clear which theory may apply. However, we may gain insight from a very old body of law — Equine Law. Horses were the original autonomous vehicles and for many centuries, the court system had to deal with horse-related accidents.

And while probably not applicable to a guy texting, an earlier paper from 2012 explores an interesting aspect of horses in a frightened state, which is akin to the vehicle making its own decision in a crisis scenario:

"Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations"

https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?artic...


FSD == Fool Self Driving


[flagged]


For?


Negative Tesla posts often get flagged immediately, probably by a bunch of dedicated Tesla fans/employees trying to suppress them.


I'm a huge Tesla skeptic, but Tesla and Musk are lightning rods for tabloid-style garbage that doesn't belong on HN, so it doesn't surprise me that we often see negative Tesla content flagged to death. Meanwhile we also see plenty of content that hits the front page and stays there [0].

Do you have examples of professional, interesting Tesla content that got flagged?

[0] More than half of the past year's most popular Tesla articles were negative: https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...


It hasn't been flagged (and I haven't flagged it). Elon bashing may be fun, but it's rarely productive conversation. And when it isn't productive it should be flagged, and often is.


Articles that appear to be negative on Tesla or Elon Musk usually get flagged to oblivion.


IME it seems to be primarily ones that are just absurdly biased. This story seems extremely relevant, and also sets the stage for an important question to be answered that could have implications even beyond self driving.


Is this worse than humans?

(edit) I see that the article included that FSD is 5x safer than humans, which may be valid.

The article then said : "However, the only reason it is safer than the US average is that it is supervised by drivers who ideally pay extra attention when using FSD."

I am positive that they had zero data to back that assertion.


Historically these kinds of assertions have been quite misleading. If FSD is mostly used on highways and other "low complexity" environments and then you compare that to human collision rates in all environments, of course FSD will be "safer". Especially if you're measuring collisions/mile vs collisions/hour. Then there are other confounding factors like how Teslas are:

* Generally newer than average.

* Generally owned by more affluent drivers than average.

* Probably used predominantly in urban areas instead of rural ones (to be clear this might unfairly tilt the stats against Tesla thanks to the highway thing).

I'm not sure I've seen a good "apples to apples" comparison on this that corrects for these confounding factors.
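The exposure-mix problem described above is easy to demonstrate with toy numbers. Everything below is invented for illustration (not real crash data): a system that is worse in *every* environment still looks better in the aggregate when its miles skew toward easy highway driving.

```python
# Made-up per-environment crash rates: the hypothetical driver-assist system
# is worse than humans in BOTH environments.
crashes_per_million_miles = {
    "highway": {"assist": 0.6, "human": 0.5},
    "city":    {"assist": 4.0, "human": 3.0},
}
# Made-up exposure mix: the assist system is mostly engaged on highways,
# while humans drive everywhere.
miles = {  # millions of miles in each environment
    "assist": {"highway": 90, "city": 10},
    "human":  {"highway": 40, "city": 60},
}

def aggregate_rate(who: str) -> float:
    """Overall crashes per million miles, ignoring environment."""
    total_miles = sum(miles[who].values())
    total_crashes = sum(crashes_per_million_miles[env][who] * m
                        for env, m in miles[who].items())
    return total_crashes / total_miles

print(round(aggregate_rate("assist"), 2))  # 0.94 -- looks "safer" in aggregate...
print(round(aggregate_rate("human"), 2))   # 2.0  -- ...despite being worse per environment
```

This is the classic Simpson's-paradox shape: comparing aggregate rates without matching road type, vehicle age, and driver demographics can invert the real per-environment ranking.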


Tesla's using statistical sleight of hand with that stat; FSD can only be engaged in certain scenarios, and they're inherently newer vehicles than the national average. Comparing Teslas on the highway in California against 20 year old beaters in snowstorms in New England is... not reasonable.

It's also entirely self-reported, which given that they've knowingly lied about range, is itself a bit concerning... https://www.reuters.com/investigates/special-report/tesla-ba...


Do you agree that without "autopilot" this incident would not have happened?


I don't think there's any way of knowing that for sure.

People look at their phones while driving with and without autopilot.


People glance at distractions, aware that they are responsible for driving their car. Believing that your car can drive itself is something new.


What about this current incident? In your opinion?


Barring a time machine, we've very little way to know when it comes to one specific incident. Again, people distracted by their cell phones kill people on a very regular basis in vehicles without autopilot.

If we had better aggregate stats we could compute the statistical likelihood of it, but this stuff isn't tracked anywhere near as closely or completely as, say, aircraft accidents. I don't trust Tesla's self-reported cherry-picked stats.


The question is irrelevant when it comes to liability and responsibility.



