
On some level, this is the strongest argument in favor of Facebook's original position - that they oughtn't be responsible for distinguishing fact from fiction. Ask any entity to discern truth at scale and there will be errors and vociferous debates.

So, what's preferable? Unfettered venom and lies mixed with the best attempts at rationality, or discourse moderated by unseen arbiters and algorithms?

Ideally, there would be multiple forums. But given how technology scales and how network effects operate, we'd likely wind up with a few enormous entities.

Thus, I refer back to the choice above. Which will it be?



False dichotomy. We can and do already have both, on multiple forums, and even on the big social media platforms. That's the case despite there being a few large entities, just as small restaurants continue to exist despite the existence of McDonald's. The premise is fallacious that "centralization" of the internet through FAANG gatekeepers forces everyone to either accept or reject a single universal moderation model. Centralization of attention isn't centralization of the network.

If this trend of fact-checking causes too much bad press and hassle for social media platforms, then platforms will simply stop fact-checking, or people will turn to alternatives. But the internet isn't going to turn into an Orwellian thoughtcrime dystopia, nor will it turn entirely into 4chan.


>nor will it turn entirely into 4chan.

Sadly, it seems that instead, the internet is turning entirely into Facebook :(


> We can and do already have both, on multiple forums, and even on the big social media platforms

Give me one example


An example of what, the existence of other forums? Hacker News.

An example of different styles of moderation on the same platform? Reddit. Some subreddits are very loosely moderated, some are strictly moderated. See: /r/politics vs. /r/askhistorians.

An example of people moving to alternative platforms? Plenty of YouTubers are advertising their content on their own domains or on services like Nebula because YouTube's own algorithms make monetization unreliable. Whang and Corridor Digital are two examples off the top of my head.

An example of multiple points of view being expressed on large forums, despite the existence of fact checking? That should be obvious - discussion about these topics was never actually stopped or even hindered by the presence of fact checks. You can still find plenty of anti-vaxx content on every platform.


> this is the strongest argument in favor of Facebook's original position - that they oughtn't be responsible for distinguishing fact from fiction. Ask any entity to discern truth at scale and there will be errors

the classic anecdote of the husband who does a really bad job washing the dishes so his wife won't ask him to do them anymore


Neither. Change the law to make platforms like Facebook accountable for the content published on them. Here's how to do that without damaging speech or making platform companies impossible to run: make platform protections depend on the revenue stream. If you are paid by the author to publish their work, you are a protected platform and it isn't your content. If you make money by selling advertising alongside the content, or by selling a subscription to the content stream, you are exposed to lawsuits over libel, incitement, hate speech, false advertising, and so on. The rest will sort itself out.

Journalism previously cleaned itself up because it could be held accountable for outright lies. There is an incentive to make outrageous claims because outrage sells; the counterbalance is civil and criminal repercussions for provable lies. Lies spread on social media like wildfire because outrage sells and more eyeballs mean more money for the platform, so both the platform and the authors are incentivized to create and spread misinformation. With no balance against it, it will continue to grow.


Wait...

> If you make money by selling advertising alongside the content [...] you are exposed to lawsuits over libel [...] The rest will sort itself out.

By "sort itself out" you mean Facebook would moderate and remove 100x more content? I cannot see how that is not the only outcome if you make platforms liable for the content its users post.


That, or they would turn to a model where they charge users to post, which would reduce content on its own without Facebook having to intervene or be held responsible for it. Only those who are really willing to stand by their content would pay, even if the fee is comparatively small. Networks of utter nonsense, mindlessly reshared by bots and by people who plainly don't care to fact-check anything themselves, would likely dry up.

Broadly, there is a lot of misinformation spreading on Facebook that has harmful effects. Freedom of speech is tremendously important, but it typically comes with some personal responsibility and liability depending on who the speaker is, who the audience is, the speaker's intentions, and the effects of the speech on the audience. The current platform setup manages to shift that responsibility away from everyone. I think most reasonable people would agree with this, but perhaps more people than I'd expect believe in limitless speech without consequences.



