Thanks for pushing for clarity here. So: I'm not saying that fact-checker warnings are ineffective because people just click through and ignore them. I doubt that they do; I assume the warnings "work". The problem is, only a tiny, tiny fraction of bogus Facebook posts get the warnings in the first place. To make matters worse, on Facebook, unlike on Twitter, a huge amount of communication happens inside (often very large) private groups, where fact-checker warnings have no hope of penetrating.

The end-user experience of Facebook's moderation is that amidst a sea of advertisements, AI slop, the rare update from a distant acquaintance, and other engagement-bait, you get sporadic warnings that Facebook is about to show you something that it thinks you shouldn't see. It's like they're going out of their way to make the user experience worse.

A lot of us here probably have the experience of reporting posts to Facebook for violating this or that clearly stated rule. By contrast, I think very few of us have the experience of Facebook actually taking any of them down. But it will still flash those weird fact-checker warnings. It's all very silly.



So, why wasn't a mixed approach taken? That's the obvious question you should be asking. Paid fact-checkers offer a leap in quality and depth of research, whereas Jonny Twoblokes has neither the willingness to research such a topic nor the means to provide nuanced context for the information. You're saying the impact was limited, but that wasn't because of low quality. If you do both, where the first draft is crowdsourced and a professional fact-checker produces the final version, I don't think you would have a good reason not to do it.


I've answered elsewhere in the thread why I think the warning-label approach Facebook took was doomed to failure, as a result of the social dynamics of Facebook.



