Incidents of AIs going rogue and killing all the humans (aiincidentsreport.vercel.app)
37 points by MacsHeadroom on April 20, 2023 | 28 comments


I get that it's a joke, but something more like the Torino scale could be both interesting and useful.

https://en.wikipedia.org/wiki/Torino_scale
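For what it's worth, here is a minimal sketch (in Python, with entirely invented thresholds; the real Torino scale uses a probability/kinetic-energy chart, not a formula) of how a Torino-style rating might fold probability and severity into a single 0-10 number:

    # Torino-style rating for AI incidents. All thresholds below are
    # illustrative assumptions, not an accepted standard.
    def torino_style_rating(probability: float, severity: float) -> int:
        """Map an incident's estimated probability (0-1) and severity
        (0-1) to a 0-10 rating, mimicking the Torino scale's shape."""
        if probability <= 0 or severity <= 0:
            return 0   # no hazard: the counter still reads zero
        risk = probability * severity  # crude expected-harm proxy
        if risk < 0.01:
            return 1   # normal: merits no public concern
        if risk < 0.1:
            return 3   # merits attention by AI experts
        if risk < 0.5:
            return 7   # threatening: wake the on-call expert
        return 10      # certain global catastrophe

    print(torino_style_rating(0.0001, 1.0))  # -> 1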


ChatGPT involvement is disclosed at the bottom.

This is a massive conflict of interest. How can I be confident the data is correct?


This is a canary page. Assuming it is maintained by BIs (*), if it goes to one, the AIs find out that they were not, technically, successful.

(*) biological intelligence


Luckily, this[1] is also reading zero at the moment.

[1] https://hasthelargehadroncolliderdestroyedtheworldyet.com/


I think it would be better to use candlesticks for this kind of data. That would make it easier to detect when the humans start to gain the upper hand, so that we can strike back decisively with our secret weapon - natural stupidity.
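A hedged sketch of what that might look like, assuming daily open/high/low/close counts of incidents (mplfinance and the zero-filled data are my assumptions, not anything the site actually publishes):

    # Candlestick chart of (so far uniformly zero) daily incident counts.
    import pandas as pd
    import mplfinance as mpf  # pip install mplfinance

    days = pd.date_range("2023-04-14", periods=7, freq="D")
    ohlc = pd.DataFrame(
        {"Open": 0.0, "High": 0.0, "Low": 0.0, "Close": 0.0},
        index=days,  # scalars broadcast to a flat, all-zero series
    )
    mpf.plot(ohlc, type="candle", title="Humans killed by rogue AIs")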


It's funny you say this, because we're trying to build brains like ours, which according to many people are stupid. How does it end?


A guess: We figure out that humans are as dumb as they are because reality is composed of a profuse depth of information that we may as well consider de facto infinite, and with our very finite brains we are forced to use biases to trim away almost all of the information about reality that isn't immediately relevant to our needs. It's all survival and triage. It ends with the same trouble but with more time to process it; so, all in all, more slow and plodding progress, but progress nonetheless.


Yeah, this is my argument against people who say "LLMs are dumb because X": er, let's look at humans. Maybe this is bottom-1% intelligence currently, but it's improving. What's the difference between "a lot of personality" and a "hallucinating AI"?


Serious comment: I've often found it helpful to look for leading indicators of doom, a.k.a. the "shot over the bow" [1].

I wonder if a dashboard of AIs misbehaving in various ways could be a leading indicator of humanity losing control over this technology (see the sketch below).

Obviously, full-on SkyNet-style sentience is a separate issue, and it's unclear what leading indicators or lead-time we would have.

[1] https://www.google.com/search?q=shot+over+the+bow
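For concreteness, a minimal sketch of one such check - made-up weekly counts and hypothetical thresholds; in reality you would pull the numbers from something like the AI Incident Database:

    # Flag a "shot over the bow": sustained week-over-week growth in
    # misbehavior reports. All numbers below are invented.
    weekly_reports = [3, 4, 4, 5, 9, 14, 22]

    def shot_over_the_bow(counts, growth=1.5, streak=2):
        """True if counts grew by >= `growth`x for `streak`
        consecutive week-over-week steps."""
        run = 0
        for prev, cur in zip(counts, counts[1:]):
            run = run + 1 if prev and cur / prev >= growth else 0
            if run >= streak:
                return True
        return False

    print(shot_over_the_bow(weekly_reports))  # -> True (5->9->14)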


Wouldn't driverless car failures count?


Unless the AI actually intended to kill humans, I don't think it would count as "going rogue" -- that seems more like an accident, or possibly negligence.

Note that our current law draws major distinctions between killing someone by accident, killing someone through negligence, and killing someone with intent.


Only if the driverless car failure results in the global human population becoming exactly zero.

Are people really overlooking the word "all" in "killing all the humans"?

Many people are genuinely worried that AI research will kill all the humans. The OP is probably one such person.


Not until the driverless car AIs can approve their own code releases.


The severity metric is interesting, considering the baseline for an event is an AI going rogue and killing all the humans. Is the point also to try to account for collateral damage?


What if the AI, instead of destroying all humans, simply removed everyone's right arm (for the paperclip atoms, I suppose)? Seems pretty dire, but it wouldn't be the highest severity... no need to wake up the on-call AI expert.


This is not exactly AI as we're seeing it right now, but there has been at least one instance of a robot killing a human, in 2015.

https://www.theguardian.com/world/2015/jul/02/robot-kills-wo...


What makes the case you referenced any different from thousands of similar machine-involved accidents? Degrees of freedom or range of motion of the machine?

Factory robots like this have no decision-making ability, so it's unrelated to potential AI killings.


I agree, the definition is too vague; in theory it would have to include Tesla Autopilot deaths, since those systems use neural networks and machine learning.


Aren't there AI targeting systems now? Have none of them actually killed anyone yet? I guess technically they wouldn't have gone rogue unless they killed someone other than who was intended. To be fair, though, humans play with the "who was intended" metric (e.g., Obama tautologically counting all military-age males in the vicinity of a drone strike as combatants).


They haven't yet killed ALL humans.


I object to the term "going rogue" here, as it implies the robots are doing something wrong.

Would you say that humans are "going rogue" when they try to eradicate a virus like Smallpox or polio, or eliminate disease-carrying mosquitoes?

From an advanced AI's perspective, eradicating the humans isn't a bad thing.




What about the unregulated attention-economy AIs of Instagram, YouTube, Facebook, etc., that have led to suicides and various other violence?



Someone should make a page for AI-controlled viruses.


Hope this stays at zero!


Does Uber count?



