YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.
LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.
Even HN has the incentive of promoting people's startups.
Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.
The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?
I remember participating on *free* phpBB forums, or IRC channels. I was amazed that I could chat with people smarter than me, on a wide range of topics, all for the cost of having an internet subscription.
It's only recently, when I was considering reviving old-school forum interaction, that I realized that while I got the platforms for free, there were people behind them who paid for the hosting and storage, and who were responsible for moderating the content so that every discussion didn't derail into a contest of low-level accusations and name-calling.
I can't imagine the amount of time, and the tools, it takes to keep discussion forums free of trolls, even more so nowadays with LLMs.
Something that's been on my mind for a while now is shared moderation - instead of having a few moderators who deal with everything, distribute the moderation load across all users. Every user might only have to review a couple of posts a day, so it should be a negligible burden, and each post that requires moderation goes to multiple users so that if there's disagreement it can be pushed to more senior/trusted users.
This is specifically in the context of a niche hobby website where the rules are simple and identifying rule-breaking content is easy. I'm not sure it would work on something with universal scope like Reddit or Facebook, but I'd rather we see more focused communities anyway.
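A minimal sketch of how that assignment-and-escalation flow could look, assuming a per-user daily review quota and a simple keep/remove vote (all names here are hypothetical, not from any existing system):

```python
import random
from collections import Counter

REVIEWS_PER_POST = 3   # each flagged post goes to a few independent reviewers
DAILY_CAP = 2          # keep the per-user burden negligible

def assign_reviewers(post_id, users, reviews_done_today):
    """Pick reviewers for a flagged post from users with spare capacity today."""
    eligible = [u for u in users if reviews_done_today.get(u, 0) < DAILY_CAP]
    chosen = random.sample(eligible, k=min(REVIEWS_PER_POST, len(eligible)))
    for u in chosen:
        reviews_done_today[u] = reviews_done_today.get(u, 0) + 1
    return chosen

def resolve(votes):
    """votes: {reviewer: "keep" | "remove"}. Unanimity decides; otherwise escalate."""
    if not votes:
        return "escalate"
    tally = Counter(votes.values())
    verdict, count = tally.most_common(1)[0]
    if count == len(votes):
        return verdict            # reviewers agree, apply it automatically
    return "escalate"             # push to more senior/trusted users
```

The point of the escalation path is that ordinary users only ever see the easy calls, and the contested handful goes to whoever has earned trust.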
I don't know if it's true or not, but I remember reading about a person who would do the community reports for cheating in a game like CS or something. They had numerous bot accounts and spent an hour a day on it, set up in a way that when they reviewed a video the bots would do the same.
But all the while they were doing legitimate reporting, and when they came across their real cheating account they'd report it as not cheating. And supposedly this person got away with it for years because they had a good reputation for community reporting with high alignment scores.
I know one exception doesn't mean it's not worth it. But we must acknowledge the potential for abuse. I'd still rather have one occasionally ambitious abuser over countless low-effort ones.
Yeah I can definitely see that being a threat model. In the gaming case I think it's harder because it's more of a general reputation system and it's based on how people feel while playing with you, whereas for a website every post can be reviewed by multiple parties and the evidence is right there. But certainly I would still expect some people to try to maximize their reputation and use that to push through content that should be more heavily moderated, and in the degenerate case the bad actors comprise so much of the userbase that they peer review their own content.
>> Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction?
Yes, it is possible. Like anything worthwhile, it is not easy. I have been a member of a small forum of around 20-25 active users for 20 years. We talk about all kinds of stuff; it was initially just IT-related, but we also touch on motorcycles (at least 5 of us ride or used to ride, and I went on rides with a couple of them in the past) and some social topics, and we tend to avoid politics (too divisive) and religion (I think none of us is religious enough to debate it). We were initially in the same country and some were meeting IRL from time to time, but now we are spread across Europe (one in the US), so the forum is what keeps us in contact. Even the ones in the same country, probably a minority these days, are spread too thin, but the forum is there.
If human interaction has to involve IRL, I have met fewer than 10 forum members, and only 3 of them frequently (2 on motorcycle trips, one worked for a few years in the same place as me), but that is not a metric that means much. It is the false sense of being close over the internet while being geographically far, which works in a way but not really. For example, my best friends all emigrated, most were childhood friends, and talking to them on the phone or over the Internet means I never feel lonely, but seeing them only every few years grows the distance between us. That impacts human-to-human interaction, and there is no way around it.
Spot on. The number of times I've come across a poorly made video where half the comments are calling out its inaccuracies. In the end YouTube (or any other platform) and the creator get paid. Any kind of negative interaction with the video either counts as engagement or just means moving on to the next whack-a-mole variant.
None of these big tech platforms that involve UGC were ever meant to scale. They are beyond accountability.
> Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.
Yes, but its size must be limited by Dunbar's number[0]. This is the maximum size of a group of people where everyone can know everyone else on a personal basis. Beyond this, it becomes impossible to organically enforce social norms, and so abstractions like moderators and administrators and codes of conduct become necessary, and still fail to keep everyone on the same page.
I don't think this is a hard limit. It's also a matter of interest and of opportunities to meet people and consolidate relationships through common endeavors, and so it is greatly influenced by the social super-structure and how it pushes individuals to interact with each other.
To take a different cognitive domain, think about color. Wiktionary gives around 300 color terms for English[1]. I doubt many English speakers would be able to use all of them with relevant accuracy. And obviously even RGB encoding can express far more nuances. And obviously most people can perceive far more nuances than they could verbalize.
Private groups work because reputation is local and memory is long. You can't farm engagement from people you'll talk to again next week. That might be the key.
Filtering out bots is prohibitively hard, as bot-generated text is currently so close to human text that the false positive rate would curtail human participation.
Any community that ends up creating utility for its users will attract automation, as someone tries to extract, or even destroy, that utility.
A potential option could be figuring out community rules that ensure all content, including bot-generated content, provides utility to users. Something like the rules on Change My View, or r/AITA. There are also tests being run to see if LLMs can identify, or provide bridges across, flamewars.
I'm actually happy with the way things are going. Content mills, fake testimonials, and clandestine marketing aren't anything new at all. But now it's so painfully obvious that the only reasonable response is complete distrust of all these mediums. We might actually start taking what our neighbors say more seriously than fake internet people.
Exactly. People spend too little time thinking about the underlying structure at play here. Scratch enough at the surface and the problem is always the ad model of the internet. Until that is broken or becomes economically pointless, the existing problem will persist.
Elon Musk cops a lot of the blame for the degradation of Twitter from people who care about that sort of thing, and he definitely plays a part there, but it's the monetisation aspect that was the real tilt toward all noise, from a signal-to-noise-ratio perspective.
We've taken a version of a problem from the physical world into the digital world. It runs along the same lines as how high rents (commercial or residential) limit the diversity of people or commercial offerings in a place, simply because only a certain kind of thing can work or be economically viable. People always want different mixes of things and offerings, but if the structure (in this case rent) only permits one type of thing, then that's all you're going to get.
Scratch further and beneath the ad business you'll find more incentives to allow fake engagement. Man is a simple animal and likes to see numbers go up. Internet folklore says the Reddit founders used multiple accounts to get their platform going at the start. If they did, they didn't do it with ad fraud in mind. The incentives are plenty, and from the people running the platform to the users to the investors, everyone likes to be fooled. Take the money out and you still have reasons to turn a blind eye to it.
The biggest problem I see is that the Internet has become a brainwashing machine, and even if you have someone running the platform with the integrity of a saint, if the platform can influence public opinion, it's probably impossible to tell how many real users there actually are.
I don't think it's doable with the current model of social media but:
1. prohibit all sorts of advertising, explicit and implicit, and actually ban users for it. The reason most people try to get big on SM is so they can land sponsorships outside of the app. But we'd still have the problem of telling whether something is sponsored or not.
2. no global feed, show users what their friends/followers are doing only. You can still have discovery through groups, directories, etc. But it would definitely be worse UX than what we currently have.
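A rough sketch of what point 2 implies in code: the feed is derived only from the follow graph, newest first, with no global ranking surface to game (the data shapes here are assumed purely for illustration):

```python
def build_feed(user, posts, following):
    """following: {user: set of accounts they follow}. Only followed accounts'
    posts appear, in reverse-chronological order -- nothing to engagement-farm."""
    visible = [p for p in posts if p["author"] in following.get(user, set())]
    return sorted(visible, key=lambda p: p["created_at"], reverse=True)
```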
I had the idea of starting a forum-based social network that used domain validation (e.g. your work domain) as part of registration and then displayed that as part of your profile.
The idea being that you'd somewhat ensure the person is a human who _may well_ know what they're talking about, e.g. `abluecloud from @meta.com`.
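A rough sketch of what that registration flow could look like, assuming plain email-based verification against the claimed domain (the function names and the free-provider blocklist are made up for illustration):

```python
import secrets

FREE_PROVIDERS = {"gmail.com", "outlook.com", "yahoo.com"}  # not "work" domains

def start_verification(email: str):
    """Extract the claimed domain and mint a one-time code to email to it."""
    domain = email.split("@", 1)[1].lower()
    if domain in FREE_PROVIDERS:
        raise ValueError("a company/organisation domain is required")
    code = secrets.token_urlsafe(8)   # sent to the address out of band
    return domain, code

def finish_verification(profile: dict, domain: str, sent_code: str, typed_code: str) -> dict:
    """On a matching code, pin the domain to the public profile, e.g. '@meta.com'."""
    if secrets.compare_digest(sent_code, typed_code):
        profile["verified_domain"] = "@" + domain
    return profile
```

It doesn't prove expertise, of course, only that the account holder controls an address at that organisation.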
There is no definitive reason for creators to be paid. Zero. These platforms can and should stop paying people for their content. Without the platforms, the creators are dead. Make them pay for access to the audience and this whole problem disappears, while the platforms make far more profit.
Kill the influencer, kill the creator. It's all bullshit.
I miss the days when most people uploading things were doing it just for "love of the game" or to find like-minded enthusiasts, not because it was their "hustle" or something to put on a resume. Those times are sadly long gone.
I think incentives are the right way to think about it. Authentic interactions are not monetized. So where are people writing online without expecting payment?
Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.
I've been thinking recently about a search engine that filters away any sites that contain advertising. Just that would filter away most of the crap.
Kagi's small web lens seems to have a similar goal but doesn't really get there. It still includes results that have advertising, and omits stuff that isn't small but is ad free, like Wikipedia or HN.
It's challenging to decide where to draw the line. Is it OK that Wikipedia begs for donations? Sites that use affiliate links can be full of crap too, although they are technically ad-free.
I think it's worth exploring.
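One crude heuristic such a filter could start from, assuming it's enough to look for references to well-known ad-serving hosts in the fetched HTML (the host list is an assumption, not a complete detector):

```python
import urllib.request

# A few hosts that overwhelmingly indicate third-party advertising.
AD_HOSTS = ("doubleclick.net", "googlesyndication.com",
            "adnxs.com", "taboola.com", "outbrain.com")

def looks_ad_supported(url: str) -> bool:
    """Crude check: does the page's HTML reference a known ad-serving host?"""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read(512_000).decode("utf-8", errors="replace")
    return any(host in html for host in AD_HOSTS)

# An index pipeline could simply drop any URL where looks_ad_supported() is True,
# sidestepping the harder judgement calls (donation banners, affiliate links).
```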
BTW, HN has ads. They post job openings at their startups; these appear like posts but you can't vote on them. Launch HN is another way this site is monetized, although that seems quite reasonable, and you can vote on/flag those posts like any other.
Monetization isn't the only possible incentive for non-genuine content, though; CV-stuffing is another that is likely to affect blogs - and there have been plenty of obviously AI-generated/"enhanced" blogs posted here.
Sure, but with a feed reader I unsubscribe whenever I see slop. News aggregators don't have the same filtering ability; I don't think you can ban a site from HN for posting slop, for example.