
Interestingly though, he seems to use AI and LLMs interchangeably.

AI risk != LLMs; the latter just happened to be so impressive that they shocked us into worrying more about the former.

Which is arguably a good thing: humanity beginning preparations for a dangerous tool before the actually dangerous part has even arrived.



I do agree with that. It's kind of a facet of the same premise. If a thing is mere equipment, then everyone knows it needs an operator. You don't start the car up and turn it loose to run down the road by itself. A thing doesn't have to have agency to be dangerous.

Except we are starting to do exactly that, because too many people don't realize that neither LLMs nor what gets marketed as AI is actually AI in the sense of having understanding and agency.

Or really, they do have agency because we're granting it to them; the problem is that we're handing agency to things that don't have understanding.



