The more the author lays out the detail, the more I like what they're describing.

We don't want to (further) sleepwalk into a society where we're governed by algorithms we don't (or can't) understand. There is a real risk we build systems where humans end up just blindly accepting what their computers are telling them.

Computers are powerful, but ultimately people respond to incentives. Even a rigorously tested system fails in the presence of misaligned incentives. Adding AI, so that you can no longer rigorously reason about the system at all, only further obscures the underlying problem of misaligned incentives.

If we get AI "wrong", we risk permanently baking the wrong incentives into these systems and into our societal fabric, and attempts to correct them will be hampered by those same systems.



> The more the author lays out the detail, the more I like what they're describing.

I think the original AI Act was actually pretty good, but the LLM provisions crammed in at the last minute, during a massive hype cycle, are nowhere near as good and are potentially actively harmful.



