Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

as long as an LLM is a black box (i.e. we haven't mapped its logical structure) then there can always be another prompt injection attack you didn't account for.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: