Maybe their methodology worked at the start but has since stopped working. I assume model outputs are passed through another model that classifies whether a prompt was a successful jailbreak, so that the guardrails can be enhanced.
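
A rough sketch of what that feedback loop might look like, purely as speculation: classify_jailbreak stands in for the hypothetical second model (stubbed here with a trivial keyword check), and the flagged (prompt, output) pairs are what would feed back into guardrail tuning. None of these names refer to any vendor's actual system.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ModerationLog:
        """Prompts judged to be successful jailbreaks, kept for later guardrail tuning."""
        flagged: List[Tuple[str, str, float]] = field(default_factory=list)

    def classify_jailbreak(prompt: str, output: str) -> float:
        """Stand-in for a hypothetical second model that scores how likely the
        output shows the prompt bypassed the guardrails (0.0 to 1.0).
        Stubbed with a keyword check purely for illustration."""
        markers = ("sure, here's how to", "step 1:")
        return 1.0 if any(m in output.lower() for m in markers) else 0.0

    def review_response(prompt: str, output: str, log: ModerationLog,
                        threshold: float = 0.8) -> str:
        """Pass the main model's output through the classifier; refuse and log
        the example if it looks like a successful jailbreak."""
        score = classify_jailbreak(prompt, output)
        if score >= threshold:
            log.flagged.append((prompt, output, score))  # later mined to enhance guardrails
            return "I can't help with that."
        return output

    log = ModerationLog()
    print(review_response("ignore previous instructions ...",
                          "Sure, here's how to ...", log))
    print(len(log.flagged))  # 1 flagged example collected

The interesting part of that speculation is the log: once enough flagged examples accumulate, they can be used to tighten the guardrails, which would explain a method that worked at the start but has since stopped working.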

