It's not that there are "technical problems" preventing anything. It's that these systems (LLMs) do not possess the fundamental properties that are necessary for being scary, and we have no clue at all how the systems would get those properties.

Current "hyperintelligent AI" fears are exactly like the "grey goo" fear when nanotechnology was a buzzword in the early 2000s. We can all agree that if you take the fundamental vague concept "AI" or "nanotechnology" and pile on a couple of wheelbarrows full of hypotheticals, you get to something scary. That doesn't mean it has any relevance for the universe we currently exist in.



And yet you qualify your statement with "current." Why?

People anticipating future risks, and discussing how and when to detect and mitigate them, is exactly what we should be doing.


