Prompt injection engineering for attackers: Exploiting GitHub Copilot (trailofbits.com)
11 points by agentictime 4 months ago | 1 comment


This is maybe the most interesting part of LLMs for me: finding ways to exploit them, and then figuring out how to protect against those attacks.

Is there any step in the chain where a repo maintainer, in this case, could see the full text of what was sent to Copilot, including the "hidden" text?
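Not an answer about Copilot's own pipeline, but a rough sketch of one thing a maintainer could do on their side: scan a checkout for text that renders invisibly on GitHub but still reaches a model as plain characters, e.g. zero-width codepoints, the Unicode "tag" block, and HTML comments in Markdown. The specific patterns and the file walk below are my assumptions, not anything the article or Copilot exposes.

    # Sketch: flag characters/markup commonly used to hide prompt-injection
    # payloads in repo text (assumed patterns, not an exhaustive list).
    import pathlib
    import re
    import sys

    ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")
    UNICODE_TAGS = re.compile(r"[\U000e0000-\U000e007f]")  # invisible "tag" codepoints
    HTML_COMMENT = re.compile(r"<!--.*?-->", re.DOTALL)

    def scan(path: pathlib.Path) -> None:
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            return  # skip binaries and unreadable files
        for name, pattern in (("zero-width", ZERO_WIDTH),
                              ("unicode-tag", UNICODE_TAGS),
                              ("html-comment", HTML_COMMENT)):
            for match in pattern.finditer(text):
                line = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line}: {name}: {match.group()[:60]!r}")

    if __name__ == "__main__":
        root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
        for p in root.rglob("*"):
            if p.is_file() and ".git" not in p.parts:
                scan(p)

That only covers text checked into the repo or issue bodies you can export yourself; it doesn't show the final prompt the assistant actually assembled, which is the step in the chain the question is really about.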




