
> it can also save us from biased content

I am pessimistic on that front, since:

1. If LLMs can't detect biases in their own output, why would we expect them to reliably detect bias in documents in general?

2. As a general rule, deploying bias/tricks/fallacies/BS is much easier than detecting them and explaining why they're wrong.


