We are changing the behaviour of the LLM itself. No "real" code execution necessary. We show a variety of different novel scenarios and attack vectors. Malicious prompts can be planted on the internet or actively sent to targets. It's effectively turning the LLM itself into the compromised computer that the attacker controls.
It affects any proposed LLM use case involving connecting the LLM to anything at all, for a lot of our demos we only require a "search" capability. The concrete LLM (davinci3) with LangChain is just one example, this work should generalize to other systems such as Bing Chat (we just didn't have access). We are currently working on more real-world proof-of-concepts, but then we obviously have to go through responsible disclosure so it is not so quick to publish.