
To me this is sort of like asking why we need seat belts when we could just have people go to the gym so they're strong enough to push back an oncoming car. Well, you can't get that strong, and you also can't really educate people well enough to reliably deal with the full force of the information firehose. Even people who are good at it rely largely on sources they've identified as trustworthy, offloading some of the work to those. I don't think there's anyone alive who could actually distinguish fact from fiction if they had to, say, view every Facebook/Twitter/Reddit/everything post separately in isolation (i.e., without relying on pre-screening of some sort).

And once you know you need pre-screening, the question becomes why not just provide it instead of making people hunt it down?



With modern safety design and human factors, we do both and more. A car can have an automated braking system, a manual brake, and an information system that makes the driver aware of the surroundings so they can make better decisions. We don't remove any of those in the false belief that one of them should be enough.

Applying that to information and propaganda, users should have some automated defenses (like ad blockers), but also manual control of what should or should not be blocked, and also education and tools to be better informed when taking manual control.

In neither system should we remove manual control, education, or automated help. They all act in unison to make people safer.


Perhaps a better analogy from recent HN discussion would be auto-lock-on-drive doors.

Some people die (often children) by opening doors while a vehicle is moving or before it is safe to do so.

However, this also impedes the ability of rescuers to extract people from crashed vehicles (especially with fail-dangerous electric car locks).

Is it safer to protect citizens from themselves or empower them to protect themselves?

In my perfect US, both would be done:

"Dealing with disinformation" as a semester-long required high-school course, plus federally mandating the kinds of tools that let citizens inform themselves (read: requiring all the transparency data X and Meta stopped reporting, from any democracy-critical large platform).

While also mandating measures to limit disinformation where technically possible and ethically appropriate (read: not ham-fisted government regulation, when mandating data and letting journalists and citizens act on it is a better solution).


Children often lack the experience and education to avoid harm; that is one of the distinguishing aspects between adults and children. We also know from biology that children are more prone to poor impulse control. Children in general depend on an adult for safety, and children's agency is occasionally removed in favor of security. Auto-lock-on-drive doors are a prime example of this. The adult driver is also liable for their passengers, especially children, so they have multiple incentives to ensure good security.

Treating children as children is fine and expected. Treating adults as children is not. Protecting children from disinformation, on the assumption that they lack the experience, education, impulse control, and expectation to handle information security themselves, is fine. The government can also be an acceptable party to define this for children, even if some parents will object to giving up that role. An alternative would be to make parents liable if they fail in their role of protecting their children from informational harm.

Going back to auto-lock-on-drive doors: giving the government remote control of the car doors with no override, including the driver's door, is unlikely to be acceptable to the adult driver who owns the car.



