
> The key mechanism that makes Radiopaper different from other social networks, and more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

My mind is blown at how simple and elegant this solution is!

Great work there.



It is simple and I do like it, but I also worry that the counterparty can simply not accept your comment and always have the last word.

So there is a reduced incentive to invest your time in writing a reply.


Once a conversation is published, you can no longer choose not to accept your counterparty's messages. However, if you choose not to respond to further messages in that conversation, your counterparty's additional messages will be considered "post-scripts," which do not cause a conversation to rise to the top of the Explore page. The effect is that, while no user can claim the last word from a counterparty, you can make it unlikely for a conversation to be seen by simply allowing the other user to have the last word.
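
A rough sketch of that rule, with illustrative names rather than our actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class Message:
        author: str
        timestamp: float
        state: str = "pending"  # pending | published | postscript

    @dataclass
    class Conversation:
        is_published: bool = False
        messages: list = field(default_factory=list)

        def add(self, msg: Message) -> None:
            if not self.is_published:
                msg.state = "pending"      # held until the counterparty accepts or replies
            elif self.messages and self.messages[-1].author == msg.author:
                msg.state = "postscript"   # consecutive unanswered turns don't bump the convo
            else:
                msg.state = "published"
            self.messages.append(msg)

        def explore_rank_time(self) -> float:
            # Post-scripts are ignored when ranking conversations on Explore.
            return max((m.timestamp for m in self.messages
                        if m.state == "published"), default=0.0)

The net effect: unanswered follow-ups still publish, they just stop counting toward recency.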


I think it will hide discourse rather than trolls, since, I strongly believe, people just won't approve things they disagree with or that go against their beliefs. I don't see the value in a public conversation that allows one party to silence the other (well, besides spam/trolls of course, but "trolls" has mostly lost its meaning).


Reminds me of how "letters to the editor" have worked for ages. A good faith editor picks out the best responses both praising and criticizing.


I think it's extremely important not to assume good faith or lack of bias in anything of importance. I would prefer the option to see all responses, with the editor's "accepted" responses as the default view.


I don't see why that's a problem. They always can start their own conversation about the same topic if I, the conversation starter, don't find their contribution to my conversation valuable.


Wouldn't that strengthen the echo-chamber effect? If the original poster supports position X and does not allow anyone who supports position Y into the conversation, then a position Y person will eventually start another thread and not allow anyone from position X to reply. So now you have two threads: one for position X supporters and one for position Y supporters, with no cross-talk between the two.

Unless, of course, there's some kind of mechanism that gives threads with higher engagement more prominence? That might create an incentive to have more inclusive conversations. Although that might incentivise trolling too...

EDIT: Perhaps combine the above with a reputation system, where you can see how ban-happy an original poster is. Since people don't like to waste their effort writing a reply that just gets ignored, ban-happy posters would be penalised by lack of engagement. Then platform-provided moderation could just become a kind of 'meta-moderation': basically just banning people who try to game the system (e.g. posting threads saying 'please reply so I can get my ban percentage down').
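
As a sketch, the profile stat could be as simple as this (numbers and names made up):

    def acceptance_rate(accepted: int, ignored: int) -> float:
        # Hypothetical "ban-happiness" indicator for a poster's profile:
        # the share of replies they received that they actually approved.
        total = accepted + ignored
        return accepted / total if total else 1.0

    # A poster who approved 12 of the 40 replies they received:
    print(f"{acceptance_rate(12, 28):.0%} of replies accepted")  # -> 30%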


This could just limit the types of conversations that flourish on the service.

If neither party is adult enough to participate, then perhaps twitter and facebook aren’t the right places to have controversial discourse.

I could, however, see it leading to echo-chamber threads, or sham threads where one party deliberately provides a weak counter-argument, a bit like a Fox News interview with a right-wing politician.


Good point. Perhaps they could implement a feature that indicates there has been a response submitted but not yet approved. Possibly it could include when it was sent, who the reply is from, whether the approver has seen it (and when they saw it). That would at least make it apparent when someone is withholding approvals for an unreasonable time.
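
Concretely, the platform could expose something like this per pending reply (field names are hypothetical):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class PendingReplyIndicator:
        # Publicly visible metadata about a reply awaiting approval.
        author: str
        submitted_at: datetime
        seen_by_approver: bool = False
        seen_at: Optional[datetime] = None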


The "seen" feature on whatsapp/facebook is the most infuriating feature that ever existed IMHO.

It's just so frustrating to get no reply when you know the message has been read.

HN is free of such shenanigans and it's been so far the best experience I've had on the internet.


Sometimes I grab my daughter's phone because she left it there, and the most recent iMessage notification shows up as soon as it detects being picked up. The most commonly seen "most recent message" is "Bitch leavin me on read angry emoji..."


I would prefer transparency, with the ability to see all submitted replies to a comment, if I chose to, to help bring to light any bias/shenanigans.


The system could provide a higher friction way of accessing the unaccepted comment. This allows for audits (so as to compute the good-faithedness of the deciding counterparty), but still keeps trolls out of the limelight.


Exactly my thought - I reply here all the time and simply ignore responses half or more of the time.


Isn't this how comments work on Gawker/Kinja properties?

On Gawker/Kinja if you've been "followed" by a power user, your comment shows up right away. If not, your comment goes into "the greys", which are hard to see, until either the person you replied to replies or stars your comment.
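
As I remember it, the visibility rule boils down to something like this (my naming, not Kinja's actual API):

    def comment_visible(author_followed_by_power_user: bool,
                        parent_replied: bool,
                        parent_starred: bool) -> bool:
        # Followed commenters are published immediately; everyone else sits
        # in "the greys" until the person they replied to replies back or
        # stars the comment.
        return author_followed_by_power_user or parent_replied or parent_starred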

I've spent a lot of time reading Jezebel and TheRoot over the years — they're a balm after experiencing the single-silo HN. The Gawker properties aren't what they used to be, but this commenting mechanism has its advantages. It truly defangs trolls. Jezebel and TheRoot could never operate without troll protections — there are so many disturbed characters hanging around trying the most vile stunts, you'd never manage to have a conversation proceed otherwise.

There's a significant flaw, though: the more your interlocutor disagrees with your reply, the less likely it is that your comment will get approved. This doesn't apply to everyone on Gawker properties, because there are lots of approved posters. I don't think the chained-comment system would be that great unless it's supplemented by a way of approving/de-approving posters as well.


The key question you have to ask yourself: If I was a bad actor, how would I take advantage of this?

Sockpuppets.


Obviously having multiple accounts can aid in some trolling efforts, but I think most trolls aren't happy merely trolling themselves on threads that other people might later see or participate in.


There's a whole subgenre of "everyone clapped" fake posts that have littered a lot of the history of Reddit. They seem to have been written in exactly that "might see later" spirit.

It's probably trivial to create just two accounts on this service and post a fake conversation of two sockpuppets attempting to "outwoke" each other. You can probably already see the arc of such a fake post in your mind.


There's a fair bit of "false flag wokeness" floating around already.


Exactly. Possibly the main function of troll farms is to attack opposing voices, usually to drive them from the platform entirely by overwhelming them; creating their own echo chamber doesn't have the desired effect. This new service is like a public version of direct messaging on instagram/facebook/similar, which all use a similar blocked-until-accepted approach.


I believe that a proportion of trolls (for want of a better word) are targeting anyone with an audience. They're not trying to convince that person, they're using them as a stepping stone to reach their audience. If they can get 1 person in 1000 on to the "d0 yOuR r3se4rch" youtube train... well eventually you end up with antivaxx. Or flat earth.


As I understand it, every message needs a counter-message. A sock puppet wouldn't be able to advance a thread unless the principal user engages with the sock puppet also. Right?


If I'm reading the design correctly (and I might not be), "counter-message" means "parent message". So puppet1 goes in with the reasonable response, and puppet2 replies to that with the unreasonable response, and we're off to the races.

(I emphasize I could have the wrong end of the stick about the design).


not clear to me either, but yeah if it's just about an initial response to OP, I see how that could go down


Had the exact same "owww that's smart" instant reaction when reading that part. Congratulations to the team.

Edit: since the founders are reading this, I once had an idea about what an anti-Twitter would look like, and came to this: no message under 1k characters. Do what you want with that idea, I give it to you.


Information is not correlated with word count. It would just result in messages with a low signal-to-noise ratio that you would have to skim through.


I don't think people will spend the time to inflate the word count of every short message they want to post. They'll just be lazy and post elsewhere.


AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA


Then the rules would require that the text doesn't contain repetition (i.e. doesn't compress well), so people would mash random letters on the keyboard. Then the rules would require that the text contains a minimum percentage of dictionary words, so people would start copy-pasting from Wikipedia articles. Then the site would implement some sort of plagiarism detector, so people would start using GPT-3 to rewrite the articles.
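
The first rung of that ladder really is a one-liner; a crude sketch with zlib (threshold made up):

    import zlib

    def looks_padded(text: str, min_ratio: float = 0.4) -> bool:
        # Highly repetitive filler compresses away to almost nothing,
        # while genuine prose keeps most of its length.
        raw = text.encode("utf-8")
        return len(zlib.compress(raw)) / len(raw) < min_ratio

    print(looks_padded("A" * 1000))  # True: pure repetition vanishes under compression
    print(looks_padded("This sentence has enough entropy to survive zlib."))  # False

And each rung above it gets defeated just as cheaply, which is the point.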


Why would people spend that much energy if they have so little to say?


You need 957 more A's. Sorry.


It's definitely interesting, but it also comes with what seems like a significant tradeoff whose effects may be difficult to anticipate: initial posters are given a big advantage in terms of control over the conversation.

But maybe it won't be such a big deal in practice: in a way, it's kind of like enforcing a certain amount of politeness when conversing on someone else's turf; you've entered their 'house' and are expected to play by their rules while there.

On the other hand, when it comes to debate of any kind, there always has to be one party who gets priority over the other and can tailor the appearance of the outcome of the debate to a certain extent.

TBH, I'm mostly very curious what kind of behavioral dynamics would emerge around this—it's probably not possible to infer too much in the abstract. In any case, an interesting idea.


It will be an enormous deal in practice. Most people are not attracted to the idea of their conversations being shut off by people speaking in bad faith, but people interested in arguing in bad faith are inherently attracted to these designs.


But how do you avoid astroturfing? Seems like someone with two accounts (or a small cabal) could get anything they want published with very low friction.

Are you considering a strongly-bound 1-account-per-person model with verified accounts?


I believe the comment needs to be accepted/replied to by the person who wrote the parent comment.

It doesn't stop you from posting your own stuff at top level, just stops flames, I suppose?


Many people in this thread are assuming that the site can be used for "open-forum" discussions like most other social media sites, with anyone replying to anyone, when in fact it's more like a public 1-on-1 discussion board.


Though I imagine that if the moderation doesn't happen very quickly, there will often be a lot of very similar comments submitted, since their authors can't see each other's pending replies. The moderator/OP will then have to decide between rejecting the "duplicates" (which can feel problematic) or accepting them all (which leads to a lot of redundant comments being published). That in turn can create peer pressure not to moderate too slowly. Not sure I'd like those dynamics.


I think this might be a feature? Differentiated, insightful comments are the ones that yield the most interesting, differentiated replies.

If the platform adds enough overhead to writing boilerplate comments, maybe you just stop?


Well, I guess it could end up creating a culture of differentiated commenting, but at Reddit/Twitter scale I’d be a bit skeptical.

One other worry is that the commentees who are willing to spend significant time moderating the comments they receive may not be exactly the ones who care about quality. To be honest, I wouldn't want to have to moderate the replies I get on HN. :)


Selecting, from critical comments, only the easiest ones to dunk on seems likely, and it's a tactic suggested by the site's name (after all, it's a well-known technique used in screening callers to radio shows...)


Good point. I'd also expect speculation about, or accusations of, suppressing certain replies to become a topic of discussion. Factions will accumulate in separate subthreads where they mutually approve their respective positions, creating mini filter bubbles.


This is actually supported by the first (completely unrelated!) conversation I read on the platform.

“Whenever I write anything publicly I risk being pulled into the maelstrom, which I call the epistemological woodchipper. It's a risk.”


The idea is promising, and the design is good, too.

Hoping this works out!

In the end, I see this as a feature for discussions / subreddits - not exactly a business. But who knows ¯\_(ツ)_/¯


I don't like when I post something and it doesn't immediately show up, like why comment at all

Granted I don't expect to write something bad but yeah, I just have this gut reaction I did something bad, like downvotes

Like a "shadow ban"


It's a neat idea, but I'd worry whether it scales at all beyond a hundred or so followers. At that point it's offloading the spam filter onto the author, who will end up not publishing any replies at all, simply due to the effort necessary.


The ability of the OP to accept or ignore replies is in addition to, not instead of, regular spam filtering techniques. Certainly we'll want to automatically block bots and other bad actors that violate our policies to reduce the burden on users.



