
In the simplest terms: if you allow AI to make decisions, you're responsible. Like this case: https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-th...

So far we're doing pretty well with that idea globally (I haven't seen any case go the other way in court).



I mean, how would it work if you tried to hold the AI itself liable?


Liability for the company selling the AI, I'd presume.


And that's perfectly acceptable, provided everyone involved agreed to it beforehand.


Ah, I misunderstood. That is an interesting idea to consider.



