Hacker News
Tesla in self-drive mode slams into police car in Orange County (ktla.com)
78 points by belter on June 15, 2024 | hide | past | favorite | 83 comments


It boggles my mind that any self-driving system would knowingly drive into any object, let alone one as large as a police car.

I'd be willing to bet that Elon's insistence on using only cameras is interacting badly with the flashing lights: as the camera recovers from each flash, objects appear in some frames of video and not in others, so they register as noise and, below a certain threshold, get ignored.

Or, it could be the reflective decals... who knows.
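To make the flash-frame hypothesis above concrete, here is a toy persistence filter in Python. It is purely illustrative and not Tesla's actual pipeline; the window and threshold values are made up. The point is only that a detection which drops out of enough frames (say, because the exposure is blown out by flashing lights) can fall below a confirmation threshold and get treated as noise.

    from collections import deque

    class PersistenceFilter:
        """Confirm an object only if it was detected in enough recent frames."""
        def __init__(self, window=10, min_hits=6):
            self.history = deque(maxlen=window)  # 1 = seen this frame, 0 = not seen
            self.min_hits = min_hits

        def update(self, detected_this_frame: bool) -> bool:
            self.history.append(1 if detected_this_frame else 0)
            return sum(self.history) >= self.min_hits

    # If a flash washes out every other frame, a real obstacle never
    # accumulates enough hits to be confirmed:
    f = PersistenceFilter()
    confirmed = [f.update(i % 2 == 0) for i in range(10)]  # alternating hits, never reaches 6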


AFAIK the latest betas have switched to a cutting-edge “transformer” model (or models?). I wonder if it’s prone to hallucinations like its LLM cousins.

On a different note, I like how Mercedes changes the color of the external lights when in self-driving mode to make others aware. A bit like outsourcing the responsibility - but maybe this is the way forward until the tech is reliable/better than people.


I like the Mercedes idea, but haven't seen it in practice yet.

It would be useful to have a universal outside light that informs drivers behind you that the reason you're keeping a safe distance ahead is that the car is on adaptive cruise control. Some people treat a two-second following distance as an invitation to try crazy stunts to get around you, and then once the road is clear they drive in front of you slower than the speed you had set...


>I wonder if it’s prone to hallucinations like its LLM cousins.

When was their vision system not glitching? A year ago it was seeing the moon in the sky as a perpetual yellow traffic light.


That's not a hallucination, that's an image recognition issue.


How do you define the border between a hallucination and a recognition issue?


I thought hallucination is when the model imagines something that isn't there at all, not a classification issue where it misinterprets an object in the source data.

Sometimes it might be difficult to tell, but in the case of moon -> traffic light it seems clear.

Maybe I'm wrong about the terminology.


Isolating an object is baseline recognition. Any sort of semantic association is baseline interpretation. So that border is somewhere between "light source" and "traffic light".


Transformer and LLM aren't cousins.

It's like saying a transistor and a CPU are cousins...


First off, Tesla 'self driving' isn't a self-driving system. Read their letters to the California DMV and ignore their marketing, including their choice of names.

Second, radar wouldn't be better here. Afaik, all of the radar assisted cruise control / emergency braking systems ignore stationary objects when travelling above a certain speed. Using radar, it's difficult to determine the height or shape of a potential obstruction, so a car on the road looks a lot like a metal sign above the roadway.

Lidar is much better, but cost prohibitive to put onto vehicles for driver assistance. A camera system could possibly detect stationary objects of concern, but apparently doesn't.
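For illustration only, here is a toy Python sketch of the kind of stationary-target filtering described above. The thresholds and field names are invented, and whether any production system actually works this way is disputed downthread; the sketch just shows why a radar return with near-zero ground speed is easy to drop at highway speed when elevation can't be resolved.

    from dataclasses import dataclass

    @dataclass
    class RadarTrack:
        range_m: float        # distance to the return, metres
        rel_speed_mps: float  # radial speed relative to the ego car (negative = closing)

    def tracks_to_consider(tracks, ego_speed_mps, min_ego_speed_mps=17.0):
        # min_ego_speed_mps is an invented cutoff (~60 km/h), purely for illustration
        kept = []
        for t in tracks:
            ground_speed = ego_speed_mps + t.rel_speed_mps  # ~0 for a stopped object
            looks_stationary = abs(ground_speed) < 1.0
            if looks_stationary and ego_speed_mps > min_ego_speed_mps:
                # Could be a stopped car, a bridge, or an overhead sign; the radar
                # can't tell the height, so the return gets dropped.
                continue
            kept.append(t)
        return kept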


> Afaik, all of the radar assisted cruise control / emergency braking systems ignore stationary objects when travelling above a certain speed.

This is false. Radar-assisted cruise control systems definitely do not ignore stationary objects above a certain speed; indeed, detecting stationary objects is a fundamental part of how emergency braking systems work...

Other automakers are simply better at determining what objects are in the path of travel than Tesla, because they actually test their systems before they release them to the public.


This article [1] is several years old, but I believe it is still an accurate summary:

> Sam Abuelsamid, an industry analyst at Navigant and former automotive engineer, tells Ars that it's "pretty much universal" that "vehicles are programmed to ignore stationary objects at higher speeds."

Chevrolet says [2]

> At speeds between 5 and 50 mph, Automatic Emergency Braking (AEB) can help you avoid or reduce the severity of a collision† with a detected vehicle you’re following using camera technology. It can automatically provide hard emergency braking or enhance the driver’s hard braking.

They don't specifically say it won't detect stationary objects on this page, but a vehicle you're following will have been in motion, or you're not following it.

Consumer Reports describes several types of AEB [3], none of which mention stationary objects.

If you've got a reference that suggests that driver assistance features stop for stationary objects outside of parking speeds, please provide a link.

[1] https://arstechnica.com/cars/2018/06/why-emergency-braking-s...

[2] https://www.chevrolet.com/support/vehicle/driving-safety/bra...

[3] https://www.consumerreports.org/cars/car-safety/automatic-em...


Whether it's true or not, Elon has previously explicitly stated that they ignore stationary items.

He went so far as to say that a can on the road was indistinguishable from a highway underpass, and then went on about how they would whitelist every underpass to avoid it triggering.

But this is Elon, so I have no faith in its accuracy. Still, don't be surprised when people believe that the CEO is describing how their car works.


> Lidar is much better, but cost prohibitive to put onto vehicles for driver assistance

Why, actually? Luminar sells LiDARs (that they market as intended for vehicles) for $1,000. Tesla sells FSD Beta for $12,000.

Do you believe a much more expensive LiDAR (than Luminar) is needed?


> Using radar, it's difficult to determine the height or shape of a potential obstruction

Is it a wavelength/antenna sizing problem? Surely a radar could tell how tall an object is, or how high above the ground it sits, at least in principle.
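Back-of-the-envelope arithmetic suggests it is largely an aperture problem. This is textbook beamwidth math, not a claim about any specific automotive radar; the antenna size is an assumption.

    c = 3.0e8             # speed of light, m/s
    f = 77e9              # common automotive radar band, Hz
    wavelength = c / f    # ~3.9 mm

    aperture = 0.10       # assume a ~10 cm antenna array, metres
    beamwidth = wavelength / aperture     # ~0.039 rad, about 2.2 degrees

    rng = 100.0
    cross_range_blur = beamwidth * rng    # ~3.9 m of angular smear at 100 m
    print(f"{cross_range_blur:.1f} m")

A bumper-mounted radar usually has an even smaller vertical aperture than horizontal, so elevation resolution is worse still, which is why "something metallic 100 m ahead" is hard to split into "car on the road" versus "sign above the road" without bigger antennas or imaging-radar techniques.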


There needs to be regulation, based on standards committees' work, on what exactly counts as "self-driving" mode. Nobody should be allowed to use the term willy-nilly without facing substantial fines.


The Society of Automotive Engineers has a taxonomy [1]. But there's no way to enforce usage.

[1] https://www.sae.org/standards/content/j3016_202104/


The problem is that Tesla uses an ML classifier to identify objects it sees. Once it has identified an object, it can estimate how big it is, and thus how far away it is. But if the classifier doesn't identify the object, it might as well not exist.

Contrast this with Subaru, which uses binocular vision through two widely spaced cameras to build a depth map of the view in front of it. It doesn't know or care what is there, only that something is there it shouldn't hit. And so the emergency braking will trigger for things like trains and stationary emergency vehicles that Tesla seems oblivious to.

Sometimes simpler really is better.
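For anyone curious what the stereo approach looks like in practice, here is a minimal depth-from-disparity sketch using generic OpenCV block matching. It is not Subaru's EyeSight implementation; the focal length, baseline, image size, and distance threshold are made-up values.

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM outputs fixed-point x16

    focal_px = 800.0    # assumed focal length, in pixels
    baseline_m = 0.35   # assumed spacing between the two cameras

    valid = disparity > 0
    depth_m = np.full_like(disparity, np.inf)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d

    # No classification step: anything close enough, dead ahead, is an obstacle.
    # (Center columns of an assumed 640-px-wide image.)
    obstacle_ahead = bool((depth_m[:, 200:440] < 30.0).any())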


This is false. Tesla has an occupancy network.


Please pardon my failure to remember the correct term. But fundamentally, it constructs a 3D model based on what it thinks each pixel represents, and sometimes gets it wrong, leading to a pattern of driving into stationary objects.


It's not a matter of terminology. Tesla's occupancy network "doesn't know or care what is there, only that something is there it shouldn't hit". As the name suggests, it's solely concerned with what space is occupied and what space is unoccupied, not with identifying objects. As such, the distinction you tried to draw between Subaru and Tesla is false, no matter what terms you use.
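For what it's worth, the general idea of an occupancy representation (not Tesla's actual network, just the concept being described) can be sketched in a few lines: the output is a grid of "how likely is this chunk of space to be occupied," with no object labels anywhere. Grid sizes and thresholds below are arbitrary.

    import numpy as np

    # 80 m x 80 m x 4 m volume around the car at 0.5 m resolution,
    # each cell holding a probability that the space is occupied.
    occupancy = np.zeros((160, 160, 8), dtype=np.float32)

    def something_in_my_path(occupancy, threshold=0.7):
        # A crude straight-ahead corridor: ~2 m wide, 40 m long, 2.5 m tall.
        corridor = occupancy[78:82, 80:160, 0:5]
        return bool((corridor > threshold).any())

    # The braking decision never needs to know *what* occupies the cells.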


Right, but we are relying on what the driver said. Anyone who has been involved in an accident knows the other driver will lie through their teeth to avoid responsibility.


I mean, it's to be expected. Most driving infrastructure is designed for human eyes and brains, including flashing lights on police cars. If you redesign the driving system from scratch to work on silicon and not brain, you pretty much expect its failures to not look like the kind of failures we're used to. Even if it's objectively better, with a lower accident rate, lower fatality rate and so on, it will occasionally do something as stupid looking as "drive into a police car with flashing lights on".

Which is why suitability decisions should not be made based on anecdotes.


> redesign the driving system from scratch to work on silicon

Nobody proposed this. My Subaru can lane-keep even in whiteout conditions (if there is a car in front of me) due to radar. I can’t do that. Teslas can’t do that.

> it will occasionally do something as stupid looking as "drive into a police car with flashing lights on"

Two sensors operating on different wavelengths can add more information than simply doubling up on the first sensor.


If I could have lidar eyes in addition to my normal ones, you bet I'd go for it in an instant. Same with a wider variety of cones, or thermal vision, or any of the other cool sensory systems other animals have.

Tesla took out the non-visible-light sensors for cost savings alone. I don't get anyone who ever believed otherwise.


Huh? The Subaru EyeSight system uses only cameras. There is no radar.

https://www.subaru.com/eyesight.html

IME it works pretty well but cuts out in heavy precipitation because the cameras don't get a clear image.


EyeSight is not the only lane-keeping component.


EyeSight is the only lane-keeping component on Subarus. Some models have radar sensors for blind spot or cross traffic warning, but those aren't used for lane keeping or forward collision prevention.


It learned from reddit that cops aren't people.


One of those situations where it’d be advisable to examine the recorded telemetry. Prediction: the car wasn’t in self-driving mode and the owner just wanted a get-out-of-jail-free card. This has happened several times in the past.


Sure, but you can't trust Tesla on that and they're the only ones that have the data.

If Tesla claims next week "it wasn't self driving at the time of the crash" that doesn't tell you anything. Because they will claim that even if the car self drove right into the police car but disengaged self driving mode 0.1 seconds before the crash.

Their CEO is a pathological liar that will say anything to make the stock price go up.


Not the only ones. I’m sure the court will have access to it, too. That data is there for this exact reason. Lying about this would be pretty devastating for Tesla and easy to prove through in-cabin video recording.


Are you saying there are CEOs who won't lie to make their stock price go up?

Because I have not seen them.


Yesterday a Model X just changed lanes into me. I barely avoided hitting it.

Not 100% sure, but it almost seemed like it was in self-drive mode. Or the guy with the MIT sticker on the back was just a bleeping bad driver.

They are making things worse for everyone.


It could also be a Model X from a decade ago. They're not all equally equipped.


Automatically Supervised Manual Mode Full Self Driving



"Tesla in self-drive mode slams into police car responding to fatal crash": https://youtu.be/ukq6h55GnvE


The reporter claims "autopilot," yet the description and title of the YouTube video say "self-drive mode." Probably the reporter was correct and it was changed for clickbait reasons.


You are asking for a YouTube video to be precise when Tesla is ambiguous?


One bad behavior doesn’t justify another


> “It was unclear if the driver would be cited.”

They admitted to being on their phone. This is an easy choice


In aircraft, legal liability is unaffected by whether the autopilot is on; the pilot in command is responsible no matter what. I see no reason for cars to be any different.


What happens when one of the tools is malfunctioning? Say the autopilot doesn’t turn off? Is it still blamed on the captain?


Yes, because there are usually no fewer than four separate ways to turn it off (AP disconnect, trim switch, circuit breaker(s), servo override), and this includes small GA airplanes. The autopilot does not fly the airplane on intuition. It flies based on the way the pilot programmed it; malfunctions are expected and trained for, and memory items are required for quick disabling of malfunctioning systems.

I’m a pilot and airplane owner.


Autopilot not turning off is not a realistic scenario. There are multiple ways to turn them off, including cutting power entirely by pulling a circuit breaker. You'll often see a colored plastic ring attached to the AP breaker for this exact reason.


Self drive mode is fundamentally unsafe.

This bullshit where companies are allowed to say “the driver must always be in control” is just that. Either it is self-driving, or it is not.

You cannot make a product that is physiologically impossible to use, and then say that fault lies with anything other than the product.

This is before you get to it being labeled and advertised explicitly as literally “full self driving” which is objectively fraud.


To clarify on the physiologically impossible bit:

1. The NTSB and FAA have both found repeatedly, as has pretty much every other study - academic or otherwise - that people _cannot_ focus on watching a task without actually being involved in or doing the task. Any "self driving" model that requires the non-operator to be engaged in driving is simply not humanly possible. So the only reason for that stated rule is that the manufacturers know these systems are unsafe, but nonetheless want to avoid liability for their faulty products.

2. The only other "vehicle completely controls itself" system in practice is aircraft flight, where the pilots are necessarily engaged at all critical times, and for the remainder of the time all the systems either give sufficient warning before anything goes wrong (e.g. when autopilot disengages on an aircraft you have a substantial amount of time before impact - not the <=2s warning Tesla gives people), or the aircraft systems have specific, extremely heavily trained signals for which there is only a single action and no immediate requirement to regain situational awareness (stall, ACAS, TCAS, etc). The time these "self driving" systems provide to regain situational awareness and then take appropriate action is less than what we expect of trained pilots, yet it's somehow acceptable for random drivers.

The current self-driving "you are expected to be in charge of the vehicle at all times" stance is entirely liability shifting: manufacturers are knowingly selling unsafe products (especially Tesla), they are lying about the capabilities of those products, and then they are saying it is the driver's fault when they fail.

What manufacturers are doing is no different from a manufacturer selling a car with an ABS system that fails and saying "the driver is responsible for identifying that ABS has failed and pumping the brakes if it does; if they fail to do so, we do not accept liability for the ABS failing".


Aircraft autopilot flight is much more like cruise control in a car. The pilot flying is actively monitoring instruments and adjusting settings to make the autopilot do what's needed. Not playing games on their phone.

Especially in the terminal area there is no such thing as the autopilot flying the plane with no pilot involvement. There are heading, speed and altitude adjustments as input to the autopilot just like a car driver would be actively involved when driving with normal cruise control.


The whole point is that they are not _constantly_ monitoring, because again it’s not something people are physiologically capable of.

The difference with aircraft autopilots is that if you take 5 or 10 or even 30 seconds to notice an issue, you can still correct. Whereas for “self driving” you have 2 seconds or less, and are expected to have complete situational awareness for the entire time. [edit: this is for noticing non-critical problems; immediate-response failures have significant repetitive training to produce the immediate correct response - stall, TCAS, MCAS, etc]

As in, the defined “correct” way to use self-driving cars is to have as much focus, awareness, and concentration as you use while actually driving, only without actually doing anything at all. Which, again, is not something that any person is capable of doing. This is established, even in the context of pilots.

In the context of commercial pilots, as I understand it, autopilot is often disengaged for the purpose of ensuring they’re engaged (in addition to the standard reasons of not getting rusty).

And again, if the autopilot fails in flight, pilots who are trained specifically to deal with the myriad issues and complexities of failures and autopilot disconnects, have literal orders of magnitude more time to react than untrained occupants of “self driving” cars.

I'm referring to autopilot not being used in the terminal area or during takeoff/landing (though I do understand ILS or similar can handle a normal landing?). Those are times when the pilots are actively engaged in operating the aircraft (even with ILS).

The issue with autopilot/self driving is not what happens while operating in an environment that requires active engagement by the operator (pilot or driver), nor what happens at points of engagement (start or end of travel, making adjustments to travel); it is that humans are physiologically incapable of maintaining engagement in an activity if they are not actually involved in that activity. Again, this is something that has been established in study after study, and it applies to extremely well trained pilots responsible for hundreds of lives just as much as it does to a random driver.

These cars' “self driving” systems require the occupant (they are not driving) to have greater control and engagement, and faster reaction times, than a trained pilot, in a system that gives even less cause for engagement.

You referenced how autopilot in an aircraft requires adjustments (which as far as I understand are not being made constantly, but no matter, let’s assume they are), “self driving” cars do not even have that. The argument “aircraft autopilot requires engagement from the pilot” does not make my point about “self driving” cars incorrect: it in fact further demonstrates how unsafe “self driving” cars are - pilots in this model do have actual engagement with the operation of the self flying plane, in addition to increased training, significantly larger time to react, and significantly better response options when no-delay responses are needed.

Self driving cars say the occupant is required to have greater engagement in driving the self driving car, while doing less, having less training, having less time to gain full situational awareness, having less time to respond when the system fails, and not having any clear response options or training for immediate-response-needed events.

I’m genuinely curious; I’m sure the NTSB or FAA must have published some statistics on the amount of time pilots need to notice and respond correctly to different failure modes.


re: #1, I’m curious how lifeguards stay attentive.


There are many levels to this.

The inherent problem of constant attentiveness being extremely challenging means that most relevant authorities say a lifeguard should not be on duty for more than an hour at a time, and should have at least a 15-minute break between those hour-long shifts.

Even with those, minds are going to wander (or there will be distractions - people asking for directions, etc), but there are multiple failsafes: there's generally more than one lifeguard; being distracted for a few seconds is not a failure (a lifeguard has to scan a significant area, so they don't have 100% awareness of 100% of their zone at once, and there's always a potentially significant delay between something going wrong for someone and a lifeguard seeing it); and the time to catastrophic failure is measured in (contextually) significant amounts of time.

The issue with self-driving cars, as they are currently set up, is that they say "the car will do everything" and it does, but they then say "however the driver is still in control of the vehicle, so if a crash happens it was the driver's fault for not paying sufficient attention".

In the pilot case: there are periods of flight where the pilot is doing very little for extended periods, but those are all at altitude, and the time from "something went wrong" to "it is irrecoverable" (in non-aircraft failure modes) is remarkably large (at least to me - my mental model was always 'something went wrong, it's seconds to crashing' until I binged air crash documentaries and even if they're trying it takes a long time to go from cruising altitude to 0). There are also modes where the pilots must always react immediately, whether or not they were distracted, or if they were focused but on a completely different task, but those modes are all close to "this alert occurs->reflexively do a specific action before you even know why".

Attentiveness is a real problem for long-haul train traffic, and multiple accidents have occurred because of it (or the loss of it), and there are many things they've tried to do to prevent the exact same problem that self-driving cars introduce, and they simply do not work. At least for trains you can in principle (though the US seemingly does not) use safety cutoffs such that a train that is not responding correctly to signals is halted automatically regardless of the engineers and operators. What companies frequently try to do is add variations of dead-man switches (similar to the eye tracking in "self driving" cars), but for the same reason that attentiveness is an issue over multiple hours of no operation, those switches get circumvented (brains don't like to focus on a single thing while not actually doing anything for hours, and muscles don't like being held at a single stress point for hours).


In every one of these threads based on a Tesla crash, gallons of digital ink are spilled in regurgitated (though not untruthful) generalities. So I'll repeat mine:

1) FSD makes me a better driver. Without it, I'm (uselessly) more aggressive and on the gas because it feels good. With it, it's basically ego-less driving.

2) The headlines are dramatic, but what does the actual data say? Tesla has thousands of miles driven under "full self driving", is it actually more dangerous (in the quantity or severity of average or long-tail accidents)?

3) FSD is surely an improvement over impaired driving, however that is defined. Does this encourage impairment?


What about systems like cruise control that keep the lane and follow the car ahead? They do require constant attention. How is Tesla much different?


> The spokesperson said that the Tesla was in self-drive mode and the driver admitted to being on a cellphone at the time of the crash.

> It was unclear if the driver would be cited.

Oh, come on!


It seems like a fair statement from the reporter. Unless they know definitively whether the driver will be cited or not, the outcome is unclear.

That said, I hope there are repercussions for the driver.


I hope there are also repercussions for whoever named this tech "full self driving".


I believe this is where, if I were a Tesla apologist, I would say you can’t really expect full self driving to fully self-drive the car.

That said, given that over my lifetime, we seem to have legalized all sorts of fraud and removed as many consumer protections as possible, I can see why people get a bit confused.


Do we even know it was in FSD? The source mentions autopilot.


In the future we will have the Samsung effect on naming this technology: once they went to high definition, they had to upgrade to ultra high, then mega.

Extra full self driving?


General self driving. Super self driving.


AI driver, obviously.


There are; I hear he just got a $56bn bonus. Worth every penny, that Elon: his work is so fine and free from flaw.

Sigh


A bonus that shareholders previously approved for very concrete goals that he has met.


You are responsible for your vehicle at all times when FSD is engaged.


Even when the wife is driving?


"It was unclear if the driver would be cited."


[dead]


This article reads like it was generated by an LLM, and its source looks like similar junk content, claiming for example that Tesla "operates on an operating system built on the Python programming language" -- if there is a human author, that author definitely doesn't know what an operating system is. I can find no reputable source making the same claim, and in fact, Elon Musk referred to ">300k lines of C++ code" in a previous version of FSD.[0]

Meanwhile, your account is less than an hour old, and you seem to have made it just for this comment. Are you sure you're not just trying to help this content farm-looking blog gain search engine ranking by shoehorning it into this thread?

[0] https://x.com/elonmusk/status/1686513363495346178


Judging safety based on programming language is dumb and cringe, unless you're talking about something formally verifiable.


Driving two tons of steel is a safety-critical context; the fewer moving parts, the better.


[flagged]


It’s a typo; it’s meant to say “fool”.


[flagged]


[flagged]


Full self driving"."


[flagged]


Elon's pay package did make it to the front page.


And received negative comments about Musk as usual.

https://news.ycombinator.com/item?id=40666289


I did not see it on the front page the entire day when the results came out. Is there anything you can share that provides evidence for this?


Thought- and discussion-terminating meme Twitter phrases like that bring the discourse level a lot lower on here.

There was much discussion in the relevant thread about the pay issue.

You’re a 67-day-old account; perhaps learn the culture here before spouting Twitter/Reddit-type culture war shit?


Looking at someone’s profile to find something irrelevant to discredit them, like you are doing with the age of my account, is the real Reddit behavior. It really has no logical bearing on the point I am making.

As for the point I’m making: minor incidents that are not newsworthy get upvoted as long as they are unfavorable to Elon Musk. On the other hand, one of the most significant pieces of financial news this year, the reapproval of his pay package from several years ago, did not make the front page and had barely any comments for how significant it is, because many HN users cannot stand to acknowledge anything that is favorable to Elon Musk. This is the type of overly politicized ideological and tribal thinking that should not have a place here.


It really is impressive how much he's disliked.


Maybe Tesla is looking to gain support from anti-police activists. After Musk's antics, sales must have gone down among rich coastal liberals.


Why does it say "fatal crash"? Both the driver and the police officer survived.


The cop was responding to a fatal crash.


Because the news story covered two crashes, and it's describing the one that was fatal.


...because the officer was parked, using their vehicle to block the scene of a fatal motorcycle crash?




