Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

One model, one prompt, one time? That barely qualifies as putting it "to the test".

No obfuscation, no adversarial prompting, etc.



I get your point. The malicious instructions could be encoded and all that, but this is about defense in depth, so every little bit helps




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: