Unless it only reconsumes the “good” content, i.e. the stuff that got good reactions from humans. In which case it will get better, not worse, at generating clickbait, if nothing else. But at least it will be coherent clickbait.
Not necessarily coherent. If a first-generation model starts misusing a word or phrase, or adopts a common misspelling, later generations of LLMs will pick up and amplify the error. Eventually you'll get "purple Monkee dishwasher begging the question maps such as."
To me this implies that something like "sarcasm" is a pattern simple enough for an AI to match, and that should go both ways, whether it's being generated or recognized.
If you're arguing that it won't be able to detect the more subtle sarcasm, then yeah, sure. But, well, Poe's Law predates GPT.