"Beware of he who would deny you access to information for in his heart he dreams himself your master." - Commissioner Pravin Lal, U.N. Declaration of Rights


Full quote: "As the Americans learned so painfully in Earth's final century, free flow of information is the only safeguard against tyranny. The once-chained people whose leaders at last lose their grip on information flow will soon burst with freedom and vitality, but the free nation gradually constricting its grip on public discourse has begun its rapid slide into despotism. Beware of he who would deny you access to information, for in his heart he deems himself your master."

(Alpha Centauri, 1999, https://civilization.fandom.com/wiki/The_Planetary_Datalinks... )


"I sit here in my cubicle, here on the motherworld. When I die, they will put my body in a box and dispose of it in the cold ground. And in the million ages to come, I will never breathe, or laugh, or twitch again. So won't you run and play with me here among the teeming mass of humanity? The universe has spared us this moment."

~Anonymous, Datalinks.


[flagged]


[flagged]


You can watch YouTube without watching any channels from an American person. What do you mean?


Weird American ads from crazy American Christians convinced the Rapture is coming.


That's why you buy a $20,000 GPU for local inference for your AI ad-blocker, geez.

Orrrrr you pay $20 per month for either the left- or right-wing one in the cloud.


There is a difference between free flow of information and propaganda. Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

I think you could make a reasonable argument that the algorithms that distort social media feeds actually impede the free flow of information.


> Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

The fundamental problem here is exactly that.

We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.

Which means there are no ads, because nobody really wants ads, so their user agent doesn't show them any. And since ads are the source of the monopolist's existing incentive to fill the feed with rage bait, that incentive goes away too.

The cost is that you either need a P2P system that actually works or people who want to post a normal amount of stuff to social media need to pay $5 for hosting (compare this to what people currently pay for phone service). But maybe that's worth it.
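To make "sorted on the client" concrete, here's a rough sketch in Python (the feed URLs are placeholders and reverse-chronological is just one example policy; a real client would let you swap in whatever ranking you like):

    # Toy client-side aggregator: fetch a few feeds you chose yourself,
    # merge them, and sort them locally. No server decides the order for you.
    import urllib.request
    import xml.etree.ElementTree as ET
    from email.utils import parsedate_to_datetime

    FEEDS = [  # hypothetical subscriptions -- substitute your own
        "https://example.org/alice/feed.xml",
        "https://example.net/bob/feed.xml",
    ]

    def fetch_items(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):          # RSS 2.0 items
            pub = item.findtext("pubDate")
            if not pub:
                continue                        # skip undated items in this toy
            yield {
                "feed": url,
                "title": item.findtext("title", default=""),
                "link": item.findtext("link", default=""),
                "published": parsedate_to_datetime(pub),
            }

    # The "algorithm" is whatever the user picks -- here, newest first.
    timeline = sorted(
        (item for url in FEEDS for item in fetch_items(url)),
        key=lambda it: it["published"],
        reverse=True,
    )
    for it in timeline[:50]:
        print(it["published"], it["title"], it["link"])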


>We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.

The Fediverse[1] with ActivityPub[0]?

[0] https://activitypub.rocks/

[1] https://fediverse.party/


Something along those lines, but you need it to be architected in such a way that no organization can capture the network effect in order to set up a choke point. You need all moderation to be applied on the client, or you'll have large servers doing things like banning everyone from new/small independent servers by default so that people have to sign up with them instead. The protocol needs to make that impossible, or the long-term consequences are predictable.
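To sketch what client-side moderation could look like (the field names and rules are invented for the example, not any existing ActivityPub client's API): the blocklist belongs to the user and is applied by their own agent, so no server operator gets to make that call for everyone at once.

    # Hypothetical client-side moderation: the blocklist lives with the user,
    # not on a server that everyone else is forced to share.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str   # e.g. "alice@example.org"
        server: str   # e.g. "example.org"
        text: str

    # These rules are this user's own choices; another user can pick different ones.
    blocked_authors = {"spammer@bigserver.example"}
    blocked_servers = {"rage-farm.example"}
    muted_phrases = {"crypto giveaway"}

    def visible(post: Post) -> bool:
        if post.author in blocked_authors or post.server in blocked_servers:
            return False
        return not any(p in post.text.lower() for p in muted_phrases)

    timeline = [
        Post("alice@example.org", "example.org", "New blog post is up."),
        Post("spammer@bigserver.example", "bigserver.example", "Free CRYPTO GIVEAWAY!!"),
    ]
    print([p.text for p in timeline if visible(p)])
    # -> ['New blog post is up.']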


>but you need it to be architected in such a way that no organization can capture the network effect in order to set up a choke point.

How is that not the case now?

>You need all moderation to be applied on the client, or you'll have large servers doing things like banning everyone from new/small independent servers by default so that people have to sign up with them instead.

I suppose. There are ActivityPub "clients" which act as interfaces that allow the former and act as agents for a single user interacting with other ActivityPub instances, which I'd expect can take us most of the way you say we should go.

I haven't seen the latter, as there's really no incentive to do so. Meta tried doing so by federating (one-way) with Threads, but that failed miserably, as the incentives are exactly the opposite in the Fediverse.

I suppose that incentives can change, although money is usually the driver for that and monetization isn't prioritized there.

>The protocol needs to make that impossible or the long-term consequences are predictable.

Impossible? Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

ActivityPub is easily 75% of where you say we should go. Much farther along that line than anything else. But since it's not 100% it should be abandoned/ignored?

I'm not so sure about your "long-term consequences" being predictable. Threads tried to do so and failed miserably. In fact, the distributed model made sure that it would, even though the largest instances did acquiesce.

ActivityPub is the best you're going to get right now, and the best current option for distributed social media.

Don't let the perfect be the enemy of the good.

Edit: I want to clarify that I'm not trying to dunk on anyone here. Rather, I'm not understanding (whether that's my own obtuseness or something else) the argument being made against ActivityPub in the comment to which I'm replying. Is there some overarching principle or actual data which supports the idea that all social media is doomed to create dystopian landscapes? Or am I missing something else here?


> How is that not the case now?

The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.

> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.

> Threads tried to do so and failed miserably.

Threads tried to do that all at once.

The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.

The locking out of smaller competitors needs to be something that the protocol itself is designed to effectively resist.


>> How is that not the case now?

>The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.

A fair point. A good fix for this is to have individual clients that can federate/post/receive/moderate/store content. IIUC, there is at least one client/server hybrid that does this. It's problematic for those who don't have the computing power and/or network bandwidth to run such a platform. But it's certainly something to work towards.

>> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

>I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.

I'm still not seeing the "problems" with server usage you're referencing. Federation obviates the need for users to be on the same server and there's little, if any, monetary value in trying to create mega servers. Discoverability is definitely an issue, but (as you correctly point out) should be addressed. It is, however, a hard problem if we want to maintain decentralization.

>The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.

Given the landscape of the Fediverse, that seems incredibly unlikely. Perhaps I'm just pie in the sky on this, but those moving to ActivityPub platforms do so to get away from such folks.

Adding to that the ability to manage one's own content on one's own hardware with one's own tools, it seems to be a really unlikely issue.

Then again, I could absolutely be wrong. I hope not. That said, I'm sure that suggestions for changes to the ActivityPub protocol[0][1][2] along the lines you suggest, to make its falling into a series of corporate hell holes, as you put it, "impossible," would be appreciated.

[0] https://github.com/w3c/activitypub

[1] https://activitypub.rocks/

[2] https://w3c.github.io/activitypub/

Edit: Clarified my thoughts WRT updates to the ActivityPub protocol.


There is no generally accepted definition of propaganda. One person's propaganda is another person's accurate information. I don't trust politicians or social media employees to make that distinction.


There are definitely videos that are propaganda.

Like those low-quality AI videos about Trump or Biden saying things that didn't happen. Anyone with critical thinking knows that those are either propaganda or engagement farming.


Or they're just humorous videos meant to entertain and not be taken seriously. Or they are meant to poke fun at the politician, e.g. clearly politically motivated speech, literally propaganda, but aren't meant to be taken as authentic recordings and deception isn't the intent.

Sometimes it's clearly one and not the other, but it isn't always clear.


'I'm just a comedian guys' interviewing presidential candidates, spouting how we shouldn't be in Ukraine, then the second they get any pushback 'I'm just a comedian'. It's total bullshit. They are trying to influence, not get a laugh.


Downvoted... yet here is the Vice President claiming that the FCC Commissioner, who said 'we can do this the hard way or the easy way' regarding censoring Jimmy Kimmel, was 'just telling a joke':

https://bsky.app/profile/atrupar.com/post/3lzm3z3byos2d

You 'it's just comedy' guys are so full of it. The FCC Head attacking free media in the United States isn't 'just telling jokes'.


What you think is propaganda is irrelevant. When you let people unnaturally amplify information by paying to have it forced into someone's feed, that is distorting the free flow of information.

Employees choose what you see every day you use most social media.


Congrats! You are 99% of the way to understanding it. Now you just have to realize that "whoever is in charge" might or might not have your best interests at heart, government or private.

Anyone who has the power to deny you information absolutely has more power than those who can swamp out good information with bad. It's a subtle difference yes, but it's real.


Banning algorithms and paid amplification is not denying you information. You can still decide for yourself who to follow, or actively look for information, actively listen to people. The difference is that it becomes your choice.


Well, this is about bringing back creators banned for (in YouTube's eyes) unwarranted beliefs stemming from distrust of political or medical authorities, and promoting such distrust. They weren't banned because of paid amplification.

I don't quite understand how the Ressa quote at the beginning of this thread justifies banning dissent for being too extreme. The algorithms are surely on YouTube's and Facebook's (and Ressa's!) side here; I'm sure they tried to downrank distrust-promoting content as much as they dared and had the capability to, limited by e.g. local language capabilities and their users' active attempts to avoid automatic suppression - something everyone does these days.


Just regulate the algorithm market. Let people see, decide, share, compare.


What is the "algorithm market"? Where can I buy one algorithm?


There isn't one yet; it would be the role of government to create a market on these large platforms.


OK, but that's an argument against advertising, and maybe against dishonest manipulation of ranking systems.

It's not an argument for banning doctors from YouTube for having the wrong opinions on public health policy.


> distorting the free flow of information

There is no free flow of information. Never was. YouTube and FB and Google saying "oh, it's the algorithm" is complete BS. They've always manipulated it, boosting whomever they see fit.


And propaganda by definition isn’t false information. Propaganda can be factual as well.


So many people have just given up on the very idea of coherent reality? Of correspondence? Of grounding?

Why? No one actually lives like that when you watch their behavior in the real world.

It's not even post modernism, it's straight up nihilism masquerading as whatever is trendy to say online.

These people accuse everyone of bias while ignoring that their own position comes from a place of such extreme bias that it irrationally, presuppositionally rejects the possibility of true facts in their chosen, arbitrary cut-outs. It's special pleading as a lifestyle.

It's very easy to observe, model, and simulate node-based computer networks that allow for coherent and well-formed data with high correspondence, and just as easy to see networks destroyed by noise and data drift.

We have observed this empirically in real networks; it's pragmatic, and it's why the internet and other complex systems run. People rely on real network systems and the observed facts of how they succeed or fail, then try to undercut those hard-won truths from a place of utter ignorance. While relying on them! It's absurd ideological parasitism: they deny the value of the things they demonstrably value just by posting! Just the silliest form of performative contradiction.
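As a toy illustration of the kind of simulation I mean (every number here is made up; it only shows the qualitative effect of noise on convergence):

    # Toy gossip network: nodes try to converge on a ground-truth value while
    # some fraction of messages are replaced by noise. Numbers are invented.
    import random

    def mean_error(noise_rate, nodes=100, rounds=50, truth=1.0, seed=0):
        rng = random.Random(seed)
        beliefs = [rng.uniform(0.0, 2.0) for _ in range(nodes)]
        for _ in range(rounds):
            for i in range(nodes):
                j = rng.randrange(nodes)
                # With probability noise_rate the received message is garbage;
                # otherwise it is another node's honest belief.
                msg = rng.uniform(-10.0, 10.0) if rng.random() < noise_rate else beliefs[j]
                beliefs[i] = 0.9 * beliefs[i] + 0.1 * msg
            # A weak grounding signal, standing in for checking against reality.
            beliefs = [b + 0.05 * (truth - b) for b in beliefs]
        return sum(abs(b - truth) for b in beliefs) / nodes

    for noise in (0.0, 0.2, 0.5, 0.9):
        print(f"noise={noise:.1f}  mean error={mean_error(noise):.3f}")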

I don't get it. Facts are facts. A thing can be objectively true in what for us is a linear global frame. The log is the log.

Wikipedia and federated text content, logs and timelines, data, etc., should never be banned, but memes and other primarily emotive media are case by case; I don't see their value. I don't see the value in allowing people to present unprovable or demonstrably false data using a dogmatically, confidently true narrative.

I mean, present whatever you want, but mark it as interpretation or low confidence versus multiple verified sources with a paper trail.

Data quality, grounding and correspondence can be measured. It takes time though for validation to occur, it's far easier to ignore those traits and just generate infinite untruth and ungrounded data.

Why do people prop up infinite noise generation as if it were a virtue? As if noise and signal can never be epistemically distinguished? I always see these arguments online from people who don't live that way at all in any pragmatic sense, whether it's flat earthers or any other group who rejects the possibility of grounded facts.

Interpretation is different, but so is the intentional destruction of a shared meaning space by turning every little word into a shibboleth.

People are intentionally destroying the ability to even negotiate connections to establish communication channels.

Infinite noise leads to runaway network failure and, in human systems, the inevitability of violence. I for one don't like to see people die because the system has destroyed message passing via an attentional DDoS.


Fortunately your biased opinion about what information has value is utterly worthless and will have zero impact on public policy. Idealized mathematical models of computer networks have no relevance to politics or freedom of expression in the real world.


There isn't. Yet everybody knows what I mean by "propaganda against immigration" (some would discredit it, some would defend it), and nobody claims that the Hungarian government's "information campaign" about migrants is not fascist propaganda (except the government, obviously, but not even their followers deny it). So, yes, the edges are blurred, yet we can clearly identify some propaganda.

Also, accurate information (like "here are 10 videos of blacks killing whites") with distorted statistics (when there is twice as much white-on-black murder) is still propaganda. But these are difficult to identify, since they clearly affect almost the whole population, and not many people have even tried to fight against them. Especially because the propaganda's message is created by you. // The example is fictional - but the direction exists, just look at Kirk's Twitter for example - and I have no idea about the exact numbers off the top of my head.


Propaganda wouldn't be such a problem if content wasn't dictated by a handful of corporations, and us people weren't so unbelievably gullible.


Indeed, didn't YT ban a bunch of RT employees for undisclosed ties? I bet those will be coming back.


Oh, but can you make an argument that the government, pressuring megacorporations with information monopolies to ban things they deem misinformation, is a good thing and makes things better?

Because that's the argument you need to be making here.


You don't even need to make the argument. Go copy-paste some top HN comments on this issue from around the time the actions YouTube is now reversing actually happened.


I think those arguments sound especially bad today, actually. They got the suppression they wanted, but it did not give the outcome they wanted.


Not really. You can argue that the government should have the right to request content moderation from private platforms and that private platforms should have the right to decline those requests. There are countless good reasons for both sides of that.

In fact, this is the reality we have always had, even under Biden. This stuff went to court. They found no evidence of threats against the platforms, the platforms didn't claim they were threatened, and no platform said anything other than they maintained independent discretion for their decisions. Even Twitter's lawyers testified under oath that the government never coerced action from them.

Even in the actual letter from YouTube, they affirm again that they made their decisions independently: "While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the company to remove non-violative user-generated content."

So where does "to press" land on the spectrum between requesting action and coercion? Well, one key variable would be the presence of some type of threat. Not a single platform has argued they were threatened either implicitly or explicitly. Courts haven't found evidence of threats. Many requests were declined and none produced any sort of retaliation.

Here's a threat the government might use to coerce a platform's behavior: a constant stream of subpoenas! Well, wouldn't you know it, that's exactly what produced the memo FTA.[1]

Why hasn't Jim Jordan just released the evidence of Google being coerced into these decisions? He has dozens if not hundreds of hours of filmed testimony from decision-makers at these companies he refuses to release. Presumably because, like in every other case that has actually gone to court, the evidence doesn't exist!

[1] https://www.politico.com/live-updates/2025/03/06/congress/ji...


The key problem with the government "requesting" a company do something is that the government has nigh infinite unrelated decisions that can be used to apply pressure to that company.

It's unreasonable to expect some portion of the executive branch to reliably act counter to the President's stated goals, even if they would otherwise have.

And that opportunity for perversion of good governance (read: making decisions objectively) is exactly why the government shouldn't request companies censor or speak in certain ways, ever.

If there are extenuating circumstances (e.g. a public health crisis), then there need to be EXTREMELY high firewalls built between the part of the government "requesting" and everyone else (and the President should stay out of it).


The government has a well-established right to request companies to do things, and there are good reasons to keep it.

For example, the government has immense resources to detect fraud, CSAM, foreign intelligence attacks, and so on.

It is good, actually, that the government can notify employers that one of their employees is a suspected foreign asset and request they do not work on sensitive technologies.

It is good, actually, that the government can notify a social media platform that there are terrorist cells spreading graphic beheading videos and request they get taken down.

It's also good that in the vast majority cases, the platforms are literally allowed to reply with "go fuck yourself!"

The high firewall is already present, it's called the First Amendment and the platforms' unquestioned right to say "nope," as they do literally hundreds of times per day.


How does any of that prevent the government from de facto tying unrelated decisions to compliance by companies? E.g. FCC merger approval?


None of it de facto prevents anything, but if a corporation feels they're being bullied in this way they can sue.

In the Biden admin, multiple lawsuits (interestingly none launched by the allegedly coerced parties) revealed no evidence of such mechanics at play.

In the Trump admin, the FCC Commissioner and POTUS have pretty much explicitly tied content moderation decisions to unrelated enforcement decisions.

Definitely there's possibility for an admin to land in the middle (actually coercive, but not stupid enough to do it on Truth Social), and in those scenarios we rely on the companies to defend themselves.

The idea that government should be categorically disallowed from communicating and expressing preferences is functionally absurd.


That sounds great in the context of a game, but in the years since its release, we have also learned that those who style themselves as champions of free speech also dream themselves our master.

They are usually even more brazen in their ambitions than the censors, but somehow get a free pass because, hey, he's just fighting for the oppressed.


I'd say free speech absolutism (read: early-pandemic Zuckerberg, not thumb-on-the-scales Musk) has always aged better than the alternatives.

The trick is there's a fine line between honest free speech absolutism and 'pro the free speech I believe in, silent about the freedom of that which I don't.' Usually that line gets crossed when ego and power get involved (see: Trump, Musk).

To which, props to folks like Ted Cruz for vocally addressing the dissonance and for opposing FCC speech policing.


Anything that people uncritically see as good attracts the evil and the illegitimate, because they cannot build power on their own, so they must co-opt things people see as good.


Not in the original statement, but as it's referenced here, the word 'information' is doing absolutely ludicrous amounts of lifting. Hopefully it bent at the knees, because in my book it broke.

You can't call the phrase "the sky is mint chocolate chip pink with pulsating alien clouds" information.


While this is true, it's also important to realize that during the great disinformation hysteria, perfectly reasonable statements like "This may have originated from a lab", "These vaccines are non-sterilizing", or "There were some anomalies of Benford's Law in this specific precinct and here's the data" were lumped into the exact same bucket as "The CCP built this virus to kill us all", "The vaccine will give you blood clots and myocarditis", or "The DNC rigged the election".

The "disinformation" bucket was overly large.

There was no nuance. No critical analysis of actual statements made. If it smelled even slightly off-script, it was branded and filed.


The mRNA based COVID-19 vaccines literally did cause myocarditis as a side effect in a small subset of patients. We can argue about the prevalence and severity or risk trade-offs versus possible viral myocarditis but the basic statement about possible myocarditis should have never been lumped into the disinformation bucket.

https://www.cdc.gov/vaccines/covid-19/clinical-consideration...


Doesn't detract from my point. "These vaccines are correlated with an N% increased risk of myocarditis" is a different statement from "These vaccines will give you myocarditis".

BOTH of them were targeted by the misinformation squad, as if equivalent.


But it is because of the deluge that that happens. We can only process so much information. If the amount of "content" coming through is orders of magnitude larger, it makes sense to just reject everything that looks even slightly like nonsense, because there will still be more than enough left over.


So does that justify the situation with Jimmy Kimmel? After all, there was a deluge of information and a lot of unknowns about the shooter, but the word choice he used was very similar to the already-debunked theory that it was celebratory gunfire from a supporter.

Of course not.


That sentence from Kimmel was IMO factually incorrect, and he was foolish to make the claim, but how is it offensive towards the dead, and why is it worth a suspension?

But as we know, MAGA are snowflakes and look for anything so they can pull out their Victim Card and yell around...


MAGA are badasses when they're out of power, yet apparently threatened enough by an escalator stopping so as to call for terrorism charges.

The doublethink is real.


You can call it data and have sufficient respect for others that they may process it into information. Too many have too little faith in others. If anything, we need to be deluged in data, and we will probably work it out ourselves eventually.


Facebook does its utmost to subject me to Tartarian, Flat Earth and Creationist content.

Yes, I block it routinely. No, the algo doesn't let up.

I don't need "faith" when I can see that a decent chunk of people disbelieve modern history and aggressively disbelieve science.

More data doesn't help.


This is a fear of an earlier time.

We are not controlling people by reducing information.

We are controlling people by overwhelming them in it.

And when we think of a solution, our natural inclination to “do the opposite” smacks straight into our instinct against controlling or reducing access to information.

The closest I have come to any form of light at the end of the tunnel is Taiwan's efforts to create digital consultations for policy, and the idea that facts may not compete on short time horizons, but they surely win on longer ones.


The problem is that in our collective hurry to build and support social networks, we never stopped to think about what other functions might be needed alongside them to promote a good, factual society.

People should be able to say whatever the hell they want, wherever the hell they want, whenever the hell they want. (Subject only to the imminent danger test)

But! We should also be funding robust journalism to exist in parallel with that.

Can you imagine how different today would look if the US had levied a 5% tax on social media platforms above a certain size, with the proceeds used to fund journalism?

That was a thing we could have done. We didn't. And now we're here.


Beware of those who quote videogames and yet attribute them to "U.N. Declaration of Rights".


They're not wrong; the attribution is part of the quote. In-game, the source of the quote is usually important, and is always read aloud (unlike in Civ).


I would argue that they are, if not wrong, at least misleading.

If you've never played Alpha Centauri (like me) you are guaranteed to believe this to be a real quote by a UN diplomat. It also doesn't help that searching for "U.N. Declaration of Rights" takes me (wrongly) to the (real) Universal Declaration of Human Rights. I only noticed after reading ethbr1's comment [1], and I bet I'm not the only one.

[1] https://news.ycombinator.com/item?id=45355441


Hence my reply.

Also, you missed a great game.


Beware he who would tell you that any effort at trying to clean up the post apocalyptic wasteland that is social media is automatically tyranny, for in his heart he is a pedophile murderer fraudster, and you can call him that without proof, and when the moderators say your unfounded claim shouldn't be on the platform you just say CENSORSHIP.


The thing is that burying information in a firehose of nonsense is just another way of denying access to it. A great way to hide a sharp needle is to dump a bunch of blunt ones on top of it.


Sure, great. Now suppose that a very effective campaign of social destabilisation propaganda exists that poses an existential risk to your society.

What do you do?

It's easy to rely on absolutes and pithy quotes that don't solve any actual problems. What would you, specifically, with all your wisdom do?


Let's not waste time on idle hypotheticals and fear mongering. No propaganda campaign has ever posed an existential threat to the USA. Let us know when one arrives.


Have you seen the US recently? Just in the last couple of days, the president is standing up and broadcasting clear medical lies about autism, while a large chunk of the media goes along with him.


I have seen the US recently. I'm not going to attempt to defend the President but regardless of whether he is right or wrong about autism this is hardly an existential threat to the Republic. Presidents have been wrong about many things before and that is not a valid justification for censorship. In a few years we'll have another president and he or she will be wrong about a whole different set of issues.


I hope I'm wrong, but I think America is fundamentally done, because it turns out the whole "checks and balances" system was trivial to steamroll as president, and future presidents will know that now.

By done I don't mean it won't continue to be the world's biggest and most important country, but I don't expect any other country to trust America more than they have to for 100 years or so.


A lot of people thought that America was fundamentally done in 1861, and yet here we are. The recent fracturing of certain established institutional norms is a matter of some concern. But whether other countries trust us or not is of little consequence. US foreign policy has always been volatile, subject to the whims of each new administration. Our traditional allies will continue to cooperate regardless of trust (or lack thereof) because mutual interests are still broadly aligned and they have no credible alternative.


> whether other countries trust us or not is of

some consequence. Not all consuming, but significant.

> Our traditional allies will continue to cooperate regardless of

whether they continue to include the US within that circle to the same degree, or indeed at all.

Trump's tariffs have been a boon for China's global trade connections; they continue to buy soybeans, but from new partners, whereas before they sourced mainly from the US.


> turned out to be trivial to steamroll as president, and future presidents will know that now

... when the Presidency, House, and Senate are also controlled by one unified party, and the Supreme Court chooses not to push back aggressively.

That rarely happens.


"You cannot trust basic statements of fact coming from POTUS, HHS, FDA, CDC, DOD" is absolutely an existential risk.


I won't attempt to defend the current administration's incompetent and chaotic approach to public health (or communications in general) but it's hardly an existential crisis. The country literally existed for over a century before HHS was even created.


Among other major problems, the logic in your comment implicitly assumes that the worst a badly-run (incompetent, malevolent, or some combination) central authority can be is equal to the effect of no central authority.

Another important error is the implicit assumption that public health risks are constant, and do not vary with changing time and conditions, so that the public health risk profile today is essentially the same as in the first century of the US’s existence.


They are spreading this nonsense in part to hide from the fact that they refuse to release the Epstein files, something that seems to involve rather a lot of high-profile, high-importance officials potentially doing really bad things.

It's called flooding the zone, and it is a current Republican strategy to misinform, to sow defeatism in their political opposition, to default/break all of the existing systems for handling politics, with the final outcome of manipulating the next election. And they publicized this, yet people like you claim to think it's a non-issue.


It doesn't have to be a national threat. Social media can be used by small organisations or even sufficiently motivated individuals to easily spread lies and slander against individuals or groups, and it's close to impossible to prevent (I've been fighting some trolls threatening a group of friends on Facebook lately, and I can attest to how much the algorithm favors hate speech over reason).


That's a non sequitur. Your personal troubles are irrelevant when it comes to public policy, social media, and the fundamental human right of free expression. While I deplore hate speech, its existence doesn't justify censorship.


It is of course subjective. For you, hate speech does not justify censorship, but for me it does. Probably because we make different risk assessments: you might expect hate speech to have no consequences in general and censorship to lead to authoritarianism, whereas I expect hate speech to have actual consequences on people's lives that are worse and more likely than authoritarianism. When I think about censorship and authoritarianism, I think about having to hide, but when I think about hate speech I picture war propaganda and genocides.


There are twin goals: total freedom of speech and holding society together (limiting polarization). I would say you need non-anonymous speech, reputation systems, traceable moderation (who did you upvote?), etc. You can say whatever you want, but be ready to stand by it.

One could say the problem with freedom of speech was that there weren't enough "consequences" for antisocial behavior. The malicious actors stirred the pot with lies, the gullible and angry encouraged the hyperbole, and the whole US became polarized and divided.

And yes, this system chills speech, as one would be reluctant to voice extreme opinions. You would still have the freedom to say it, but the additional controls exert a pull back toward the average.


Is your point that any message is information?

Without truth there is no information.


That seems to be exactly her point, no?

Imagine an interface that reveals the engagement mechanism by, say, having an additional iframe. In this iframe an LLM clicks through its own set of recommendations picked to minimize negative emotions at the expense of engagement.

After a few days you're clearly going to notice the LLM spending less time than you clicking on and consuming content. At the same time, you'll also notice its choices are part of what seems to you a more pleasurable experience than you're having in your own iframe.

Social media companies deny you the ability to inspect, understand, and remix how their recommendation algos work. They deny you the ability to remix an interface that does what I describe.

In short, your quote surely applies to social media companies, but I don't know if this is what you originally meant.
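Not claiming this is how any real recommender is built, but as a toy sketch of the comparison I'm imagining (the items, watch-time estimates, and "negativity" scores are all invented):

    # Toy comparison of two ranking policies over the same item pool.
    items = [
        # (title, predicted_watch_minutes, predicted_negativity in [0, 1])
        ("Outrage compilation", 14, 0.90),
        ("Calm explainer", 6, 0.10),
        ("Rage-bait thumbnail", 11, 0.80),
        ("Nature documentary clip", 7, 0.05),
        ("Feel-good local news", 4, 0.20),
    ]

    def engagement_policy(item):
        _, minutes, _ = item
        return -minutes                    # most predicted watch time first

    def low_negativity_policy(item):
        _, minutes, negativity = item
        return (negativity, -minutes)      # least negative first, then watch time

    for name, policy in [("engagement", engagement_policy),
                         ("low-negativity", low_negativity_policy)]:
        feed = sorted(items, key=policy)[:3]
        total = sum(minutes for _, minutes, _ in feed)
        print(f"{name:>15}: {[title for title, _, _ in feed]}  (~{total} min of viewing)")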


Raising the noise floor of disinformation to drown out information is a way of denying access to information too.


Facebook speaks through what it chooses to promote or suppress and they are not liable for that speech because of Section 230.


Not quite: prior to the Communications Decency Act of 1996 (which contained section 230), companies were also not liable for the speech of their users, but lost that protection if they engaged in any moderation. The two important cases at hand are Stratton Oakmont, Inc. v. Prodigy Services Co. and Cubby, Inc. v. CompuServe Inc.

The former moderated content and was thus held liable for posted content. The latter did not moderate content and was determined not to be liable for user generated content they hosted.

Part of the motivation of section 230 was to encourage sites to engage in more moderation. If section 230 were to be removed, web platforms would probably choose to go the route of not moderating content in order to avoid liability. Removing section 230 is a great move if one wants misinformation and hateful speech to run unchecked.


You say "Not quite" but it looks to me like you're agreeing?


We must dissent.



