It's not that there are "technical problems" preventing anything. It's that these systems (LLMs) do not possess the fundamental properties necessary to be scary, and we have no clue at all how they would acquire those properties.
Current "hyperintelligent AI" fears are exactly like the "grey goo" fear when nanotechnology was a buzzword in the early 2000s. We can all agree that if you take the fundamental vague concept "AI" or "nanotechnology" and pile on a couple of wheelbarrows full of hypotheticals, you get to something scary. That doesn't mean it has any relevance for the universe we currently exist in.