This gets way too philosophical way too fast. The AI doesn't have to want anything. It just has to do something different from what you told it to do. Put an AI in control of something like the water flow from a dam, and if it does something wrong the result could be catastrophic. There doesn't have to be intent.
The same danger exists with regular software, but traditional software is logical and deterministic, so you can state what it should do and actually verify that it does it.
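To make that concrete, here's a minimal sketch (all names and numbers hypothetical): because a deterministic controller is a pure function of its inputs, you can state a safety invariant and check it over the whole input domain, which you can't do with a learned policy.

```python
# Hypothetical deterministic dam-gate controller. Because it is pure and
# deterministic, its safety invariant can be checked exhaustively over a
# discretized input domain (or proven outright with a model checker).

def gate_opening(water_level_m: float) -> float:
    """Map reservoir level (in meters) to gate opening (0.0 to 1.0)."""
    setpoint = 80.0                        # start releasing above 80 m
    raw = (water_level_m - setpoint) / 20.0
    return min(1.0, max(0.0, raw))         # clamp: never below shut, never past full open

# Exhaustive check of the invariant 0.0 <= opening <= 1.0 over the domain.
for level in range(0, 101):
    opening = gate_opening(float(level))
    assert 0.0 <= opening <= 1.0, f"invariant violated at level {level}"
print("invariant holds for all checked levels")
```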
So an ML/LLM system, or more likely people using ML and LLMs, does something that kills a bunch of people... Let's face facts: this is most likely going to be bad software.
Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.
Or we could try to train it to do something, but the objective it learns isn't what we wanted. Say it learns that the water behind the dam should be a certain shade of blue; come winter the color changes, and when the AI tries to "fix" that, it opens the dam completely and floods everything. See the sketch below.
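A toy sketch of that failure mode (every value here is invented for illustration): the trained objective is a proxy ("measured water color matches a target blue"), and the action that best optimizes the proxy is exactly the catastrophic one, because the proxy never sees the safety cost.

```python
# Toy illustration of proxy-objective misalignment (all values invented).
# True goal: keep the reservoir safe. Learned proxy: make the measured
# water color match a target shade of blue.

TARGET_HUE = 210  # "summer blue", in degrees

# In winter the water reads greener; each available action changes the
# measured hue and carries a hidden safety outcome the proxy never sees.
actions = {
    "do_nothing":      {"hue": 160, "flooded": False},
    "adjust_slightly": {"hue": 185, "flooded": False},
    "open_dam_fully":  {"hue": 210, "flooded": True},   # draining restores the hue
}

def proxy_reward(outcome):
    """Reward = closeness of measured hue to target. No safety term at all."""
    return -abs(outcome["hue"] - TARGET_HUE)

best = max(actions, key=lambda a: proxy_reward(actions[a]))
print(best)                       # -> open_dam_fully
print(actions[best]["flooded"])   # -> True: the proxy-optimal action floods the valley
```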