It's hard to gauge the effectiveness of poisoning huge training sets since anything you do is a figurative drop in the ocean, but if you can poison the small amount of data that an AI agent requests on-the-fly to use with RAG then I would guess it's much easier to derail it.