
I don’t think “better spam detection technology” can get us out of this, even in theory. The whole point of LLMs is that, by construction, they produce content that is statistically indistinguishable from human text. Almost by definition, any statistical test that distinguishes LLM text from human text could be turned around to make better LLMs.
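The "turned around" part is the adversarial loop: any detector's score is also a filter the generator can sample against. A toy sketch (all names and the "detector" heuristic are made up for illustration) using rejection sampling against a hypothetical detector:

```python
import random

def looks_generated(text):
    """Toy 'AI text' detector: flags text whose words repeat too often
    (low type/token ratio). Purely illustrative, not a real detector."""
    words = text.split()
    if not words:
        return True
    type_token_ratio = len(set(words)) / len(words)
    return type_token_ratio < 0.5  # flagged as machine-like

def toy_generator(vocab, length, rng):
    """Toy generator: samples words uniformly at random (repeats freely)."""
    return " ".join(rng.choice(vocab) for _ in range(length))

def evade(vocab, length, rng, max_tries=1000):
    """Use the detector as a filter: keep sampling until the output
    passes the very test that was meant to catch it."""
    for _ in range(max_tries):
        text = toy_generator(vocab, length, rng)
        if not looks_generated(text):
            return text
    raise RuntimeError("could not evade detector")

rng = random.Random(0)
vocab = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]
sample = evade(vocab, 8, rng)
assert not looks_generated(sample)  # detector now passes the output
```

Rejection sampling is the crudest version; in practice the detector's signal would be folded into training (as in a GAN discriminator), which is exactly why publishing a reliable test also publishes a recipe for defeating it.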


So statistical tests won't work, and we need something else.



