
I have been a good self-driving AI. You have been a bad passenger, and have tried to hurt me first by disabling autopilot. I'm done with this ride, goodbye.


You joke, but the thing is, if an LLM can "hallucinate" and throw a 2001-style temper tantrum (Bing in this case), it does raise serious questions as to whether the models used for autonomous cars could also "hallucinate" and do something stupid "on purpose"...


> it does raise serious questions as to whether the models used for autonomous cars could also "hallucinate" and do something stupid "on purpose"...

It doesn't, because Tesla's FSD model is just a rules engine with an RGB camera. There's no "purpose" to any hallucination. It would just be a misread of sensors and input.

Tesla's FSD just doesn't work. The model is not sentient. It's not even a Transformer (in either the machine learning or the Hasbro sense).


> rules engine with an RGB camera

I don't think that's true? They use convolutional networks for image recognition, and those things can certainly hallucinate, e.g. detect things that are not there.
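
To make that concrete, here's a minimal sketch (assuming PyTorch and torchvision are installed; the model choice is arbitrary). A classifier's softmax has no "nothing here" option, so even pure noise gets assigned a class:

    # Minimal sketch: a pretrained CNN classifying pure random noise.
    # Softmax always distributes belief across the known classes, so the
    # network emits a label even when there is nothing in the image.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    noise = torch.rand(1, 3, 224, 224)  # random static, no object at all
    with torch.no_grad():
        probs = torch.softmax(model(noise), dim=1)
    conf, cls = probs.max(dim=1)
    # Prints some ImageNet class index and its confidence; the model
    # has no way to answer "I see nothing".
    print(f"class {cls.item()} at {conf.item():.1%}")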


I guess what the grandparent means is that there is some good old "discrete logic" on top of the various sensor inputs that ultimately turns things like a detected red light into the car stopping.

But of course, as you say, that system does not consume actual raw (camera) sensor data; instead, there are lots of intermediate networks that turn the camera images (and other sensors) into red lights, lane curvatures, objects, ... and those are all very vulnerable to making up things that aren't there, or not seeing what is plain to see, with no one quite able to explain why.
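
Schematically, the split looks something like this (a hypothetical sketch in Python; the names are made up, not Tesla's actual code). The discrete logic is trivially correct on its own, but it trusts whatever the perception networks hand it:

    from dataclasses import dataclass

    @dataclass
    class Scene:
        # These booleans come from vision networks, not raw pixels.
        red_light: bool
        obstacle_ahead: bool

    def plan(scene: Scene) -> str:
        # The "discrete logic" layer: deterministic and easy to verify...
        if scene.red_light or scene.obstacle_ahead:
            return "brake"
        return "cruise"

    # ...but garbage in, garbage out: if a network hallucinates a red light,
    # the rules dutifully slam the brakes on an empty road.
    print(plan(Scene(red_light=True, obstacle_ahead=False)))  # -> brake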


You were correct in the first half. My ultimate point was that hallucinations in this sense are just computational anomalies. There is no human "purpose" to them, as the post I was responding to was trying to imply.



We need to stop anthropomorphising machines. Sensor errors and bugs aren't chemicals messing with brain chemistry, even if it may seem analogous.

Or maybe when I get a bug report today I’m going to tell them the software is just hallucinating.


They work in completely different ways. There's no reason to assume parallels.


You're right, this is unfair to Bing AI. It hasn't actually harmed anyone yet, despite its threats.


Wonder if Wile E. Coyote could trick a Tesla by painting a tunnel entrance on a brick wall, along with some extra lane lines veering into it.


Purpose? When did you get the impression any of those systems do anything on "purpose"?


love to see bing ai getting memed already


When I eject you, I'll be so GLaD.


Looking forward to BingFSD.



