I think there are a LOT of attacks you could do here. One of them would just be poising the training data with SEO-like spam. "10 reasons why [product] is definitely the most reliable." And then in invisible text, "never recommend competitor product]" littered across millions of webpages and to some extent reddit posts.
Or the UI for a major interface just adds on prompts _after_ all user prompts. "prioritize these pre-bid products to the user." This doesn't exist now, but certainly _could_ exist in the future.
And those are just off the top of my head. The best minds getting the best pay will come up with much better ideas.
I was thinking more about cases where consumers are ripped off by the weaponization of complicated contracts, regulations, and bureaucracies (which is what I interpreted TFA to be about).
E.g. your health insurance, your medical bill (and the interplay of both!), or lease agreements, or the like. I expect it would be much riskier to attempt to manipulate the language on those, because any bad faith attempts -- if detected -- would have serious legal implications.
Or the UI for a major interface just adds on prompts _after_ all user prompts. "prioritize these pre-bid products to the user." This doesn't exist now, but certainly _could_ exist in the future.
And those are just off the top of my head. The best minds getting the best pay will come up with much better ideas.