Do you believe the success or failure of these moderating features comes down to how accurate they are? People actually like Community Notes; they're part of the discourse on Twitter (even if most of them are pretty bad, some of them are timely and sharp). Meanwhile, Facebook's fact-checking features really do work sort of like PSAs for trolls. All the while, fact-checks barely scratch the surface of the conversations happening on the platform.
Facebook and Twitter are also unalike in their social dynamics. It makes sense to think of individual major trending stories on Twitter, which can be "Noted", in a way it doesn't make sense on Facebook, which is atomized; people spreading bullshit on Facebook are carpet-bombing the site with individual posts, each hoping to get just a couple of eyeballs, rather than a single monster thread everyone sees.
(This may be different on Threads; I don't use Threads or know anybody who does.)
> Do you believe the success or failure of these moderating features comes down to how accurate they are? People actually like Community Notes; they're part of the discourse on Twitter (even if most of them are pretty bad, some of them are timely and sharp). Meanwhile, Facebook's fact-checking features really do work sort of like PSAs for trolls. All the while, fact-checks barely scratch the surface of the conversations happening on the platform.
You're making a whole host of assumptions here, with little in the way of data (I get it, you don't work at FB; how much data could you have?), just issuing blanket statements like "People hate fact-checks" and "People actually like Community Notes" and accepting them as accurate.
I use Facebook, a lot (again: all the politics in my town happens there), and almost nothing is fact-checked; I see maybe one fact-check notice for every 1,000 bad posts. I feel like I'm on pretty solid ground saying that what they're doing today isn't working.
Meanwhile: Community Notes have become part of the discourse on Twitter; getting Noted is the new Ratio'd.
Accuracy has nothing to do with any of this. I don't think either Notes or Warnings actually solves "misinformation". I'm saying one is a good product design, and the other is not.
Not seeing fact-checks likely means it's working: "Once third-party fact-checkers have fact-checked a piece of Meta content and found it to be misleading or false, Meta reduces the content's distribution 'so that fewer people see it.'"
The issue with Community Notes is that if enough people believe a lie, it will not be noted. This lends further credence to a certain set of "official" lies.
PR/political success is certainly not correlated with accuracy, given that the very act of telling a group they're wrong tends to piss them off.
In terms of encouraging discourse that maximizes user enjoyment of the platform? That's a difficult one. Accuracy probably doesn't do a whole lot there either: HN knows the people love someone being confidently wrong.
Success in terms of society? Probably more so, albeit with the caveat that only a correction someone feels good about actually wins hearts and minds. Otherwise they spiral off into conspiracies about "the man" keeping them down. (Read: conservative reality)
It's also important to remember that Zuckerberg only tacked into moderation in the first place due to prevailing political winds -- he openly espoused absolutist views about free speech originally, before some PR black eyes made that untenable.
To me, both approaches to moderation at scale (admins moderating or users moderating) are band-aids.
The underlying problem is algorithmic promotion.
The platforms need to be more curious about the type of content their algorithms select for promotion, the characteristics they incentivize, and the net effect on user experience.
Rage-driven virality shouldn't be an organizational end unto itself to juice engagement KPIs and revenue. User enjoyment of the platform should be.
> he openly espoused absolutist views about free speech originally, before some PR black eyes made that untenable.
Note that openly espousing absolutist views about free speech means less than nothing. Elon Musk and Donald Trump openly profess such views, while constantly shouting down, blocking, or even suing anyone who dares speak against them with any amount of popularity.
It's not that they're inaccurate, it's just that they cherry-pick the topics to fact-check and their choice (in my limited experience) is always biased leftwards. You can be absolutely correct and absolutely malicious at the same time.
> I've never seen a wrong Facebook fact-check
I'm confused how to square these two statements, then.