I think it is better to use candlesticks for this kind of data. This makes it easier to detect when humans start to have an upper hand, so that we can strike back decisively with our secret weapon - natural stupidity.
A guess:
We figure out that humans are as dumb as they are because reality comprises a profuse depth of information that we may as well consider de facto infinite, and with our very finite brains we are forced to use biases to trim away almost all of the information about reality that isn't immediately relevant to our needs. It's all survival and triage. It ends with the same trouble but with more time to process it, so all in all, slower and more plodding progress, but progress nonetheless.
Yeah, this is my argument against people who say "LLMs are dumb because X" -- er, let's look at humans. So maybe this is bottom-1% intelligence currently, but it's improving. What's the difference between a lot of personality and a "hallucinating AI"?
Unless the AI actually intended to kill humans, I don't think it would count as "going rogue" -- that seems more like an accident, or possibly negligence.
Note that our current law draws major distinctions between killing someone by accident, killing someone through negligence, and killing someone with intent.
The severity metric is interesting, considering the baseline for an event is an AI going rogue and killing all the humans. Is the point also to try to account for collateral damage?
What if the AI, instead of destroying all humans, simply removes everyone's right arm (for paperclip atoms, I suppose)? That seems pretty dire, but it wouldn't be the highest severity... no need to wake up the on-call AI expert.
What makes the case you referenced any different from thousands of similar machine-involved accidents? Degrees of freedom or range of motion of the machine?
Factory robots like this have no decision making ability, so it’s unrelated to potential AI killings.
Aren't there AI targeting systems now? Have none of them actually killed anyone yet? I guess technically they wouldn't have gone rogue unless they were killing someone other than who was intended. To be fair, though, humans play with the "who was intended" metric (e.g. Obama counting all military-age males in the vicinity of a drone strike as combatants, tautologically).
https://en.wikipedia.org/wiki/Torino_scale