
Unless it only reconsumes the “good” content. In other words, the stuff that got good reactions from humans. In which case, it will get better, not worse, at least at generating clickbait. But at least it will be coherent clickbait.
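To make that concrete, here's a minimal sketch (in Python) of what "only reconsume the good content" could look like as a data-curation step before retraining. The Post fields, thresholds, and example posts are all hypothetical, just to illustrate the filtering idea:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        upvotes: int
        flags: int

    def select_for_retraining(posts, min_upvotes=50, max_flags=2):
        # Keep only generated text that humans reacted well to;
        # everything else never re-enters the training corpus.
        return [p.text for p in posts
                if p.upvotes >= min_upvotes and p.flags <= max_flags]

    corpus = [
        Post("Ten shocking facts about garbage collection", upvotes=120, flags=0),
        Post("asdf keyboard mash lorem ipsum", upvotes=1, flags=5),
    ]
    print(select_for_retraining(corpus))  # only the high-engagement post survives

Under that kind of filter, the model isn't optimizing for coherence at all, only for whatever gets reactions, which is the clickbait point above.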


Not necessarily coherent. If the first generation model starts misusing a word or phrase or adopts a common misspelling, later generations of LLMs will pick up and amplify the error. Eventually you'll get purple Monkee dishwasher begging the question maps such as.


It will never be able to detect sarcasm and satire, and this will contribute greatly to the degradation of content quality, I think.

This will not work; it's too soon.


Why do you think so? It's certainly capable of producing outputs along these lines if you tell it that this is what you want:

https://i.imgur.com/B2cHXRA.jpg

To me this implies that something like "sarcasm" is a pattern simple enough for an AI to match - and that should go both ways, whether it's being generated or recognized.

If you're arguing that it won't be able to detect the more subtle sarcasm, then yeah, sure. But, well, Poe's Law predates GPT.
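For what it's worth, the "simple enough pattern to match" claim is easy to poke at with off-the-shelf tooling. Here's a rough sketch that leans on a zero-shot NLI classifier as a stand-in sarcasm detector; the model choice, the labels, and the example sentence are assumptions on my part, not a tuned system:

    from transformers import pipeline

    # Zero-shot classification as a crude proxy for sarcasm detection.
    # bart-large-mnli is just one commonly used NLI checkpoint; any
    # similar model would do for this sketch.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    result = classifier(
        "Oh great, another outage. Exactly what I needed today.",
        candidate_labels=["sarcastic", "sincere"],
    )
    # The top label and its score give a rough "is this sarcasm?" signal.
    print(result["labels"][0], result["scores"][0])

It will happily miss the subtle stuff, which is the Poe's Law point, but the blunt cases are well within reach of pattern matching.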



