Well, for example, I believe that nukes represent an existential risk because they have already been used to kill thousands of people in a short period of time. What you are saying doesn't really counter my point at all, though; it is just another vague theoretical argument.
It was clear that nukes were a risk before they were used; that is why there was a race to create them.
I am not in the camp that is especially worried about the existential threat of AI. However, if AGI is to become a thing, what does the moment look like where we can see it coming and still have time to respond?
>It was clear that nukes were a risk before they were used; that is why there was a race to create them.
Yes, because there were other kinds of bombs before then that could already kill many people, just at a smaller scale. There was a lot of evidence that bombs could kill people, so the idea that a more powerful bomb could kill even more people was pretty well justified.
>if AGI is to become a thing, what does the moment look like where we can see it is coming and still have time to respond?
I think this implicitly assumes that if AGI comes into existence, we will need some kind of response to prevent it from killing everyone, which is exactly the assumption my original argument says isn't justified.
Personally I believe that GPT-4, and even GPT-3, are non-superintelligent AGI already, and as far as I know they haven't killed anyone at all.