This use-case may be good or bad, but the logic underneath it is 100% correct IMO. Fundamentally, these new models allow you to encode and communicate high-level thoughts in the same way that the internet allowed you to encode and communicate well-defined information.
The natural evolution of this technology is to insert it into human communication channels and automatically transform raw thoughts into something better for the other end of the channel. "Better" is open to interpretation and going to be very interesting in this new world, but there are so many options.
Why not a version of HackerNews that doesn't just have commenting guidelines, but actually automatically enforces them? Or a chrome extension that takes HN comments and transforms them all to be kinder (or whatever you want) when you open a thread? Or a text input box that automatically rewrites (or proposes a rewrite of) your comments if they don't meet the standards you have for yourself?
> Or a chrome extension that takes HN comments and transforms them all to be kinder
I was about to start working on something like this. I would like to try browsing the internet for a day where every comment I read is rewritten after passing through a sentiment filter. If someone says something mean, I would pass the comment through an LLM with the prompt: "rewrite this comment as if you were a therapist, who was reframing the commenter's statement from the perspective that they are expressing personal pain, and are projecting it through their mean comment"
I find 19 times out of 20 that really mean comments come from a place of personal insecurity. So if someone says: "this chrome extension is a dumb idea, anti-free speech, blah blah blah", I would read: "commenter wrote something mean. They might be upset about their own perceived insignificance in the world, and are projecting this pain through their comment <click here to reveal original text>"
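The filter described above is easy to sketch. This is a hypothetical outline, not a working extension: `call_llm` is a placeholder for any chat-completion API, and the keyword check is a toy stand-in for a real sentiment classifier.

```python
# Sketch of the "therapist reframe" filter. Everything here is illustrative:
# the mean-word list is a toy stand-in for a sentiment model, and call_llm
# is whatever LLM API you plug in.

REFRAME_PROMPT = (
    "Rewrite this comment as if you were a therapist reframing the "
    "commenter's statement from the perspective that they are expressing "
    "personal pain and projecting it through their mean comment:\n\n{comment}"
)

MEAN_MARKERS = {"dumb", "stupid", "idiot", "garbage"}  # placeholder classifier


def looks_mean(comment: str) -> bool:
    """Crude sentiment check; swap in a real classifier or an LLM call."""
    return bool(set(comment.lower().split()) & MEAN_MARKERS)


def filter_comment(comment: str, call_llm) -> str:
    """Return the comment unchanged, or a reframed version if it reads as mean."""
    if not looks_mean(comment):
        return comment
    reframed = call_llm(REFRAME_PROMPT.format(comment=comment))
    # Keep the original one click away rather than hiding it entirely.
    return f"{reframed} <click here to reveal original text>"
```

The nice property of this shape is that the prompt is just data: the end user can swap `REFRAME_PROMPT` for whatever reframing they prefer.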
Also on my project list - the AntiAssholeFilter. It's so interesting the ways you could handle this. Personally, I would want to transform the comment into something that doesn't even mention that the commenter wrote a mean comment - if it has something of value, just make it not mean; otherwise hide it.
A couple things are really interesting about this idea. First - it's so easy for the end-user to customize a prompt that you don't need to get it right, you just need to give people the scaffolding and then they can color their internet bubble whatever color they want.
Second, I think that making all comments just a couple percent more empathetic could be really impactful. It's the sort of systemic nudge that can ripple very far.
I appreciate someone actually putting an argument for this, just so we can tease out what's wrong here.
Customer support is one place where I don't want to just "send information". I want to be able to "exert leverage". I want my communication to be able to impel the other actor to take action.
The thing with HN comments is that the guidelines are flexible; even things that violate the guidelines are a kind of communication and play into the dynamics of the site. The "feelings" of HN have impact (for good and ill, but still important).
Customer support is interesting here. I think you're very right (personally I think that the new AI in the article is a bad idea). I wonder if the ideal is transforming speech into multiple modalities. Maybe make the speech kinder to avoid activating people emotionally, and then offer something visual that indicates emotions or "give a shit" level. But I dislike speech as a modality; the single-channel nature is incredibly limiting IMO.
For HN comments, I think you're right. But I think there is still lots of potential there, from tooling to help Dang use his time more effectively to tooling that you can switch on when you are in a bad mood that lets you explore your curiosity but filters out/transforms the subset of comments that you don't have the emotional capacity to deal with well.
The cool thing is that this tech can be easily layered on top of the actual forum (does vertical integration actually buy you anything? it's certainly crazy expensive), and so the user can be in control of what filters/auto-moderation they embrace. Plus text makes it easy to always drill deeper and see the original as needed.
Adding metadata (like subtext attached to the raw text) is a good idea - however, rewriting and automatically transforming behind the scenes is a fantastic way to create an even larger potential miscommunication gulf between the two parties.
Even an individual extension or system that automatically transforms data into your desired content risks creating an artificially imposed echo chamber.
> risks creating an artificially imposed echo chamber.
I think that ship has sailed? Agreed that the ramifications of auto-transforming communication are huge, but I think I'm more optimistic. The internet is a cesspool; I think that improving things is pretty likely now that empathetic software is no longer the stuff of dreams.
Smarter writing tools might be cool, but thinking of this as “enforcement” is kind of backwards given the current state of technology. AI is gullible and untrustworthy and it’s up to us to vet the results, because they could be bananas.
I think proposing a rewrite and letting people decide (and make final edits) could work well, though.
I think enforcement could have some positive uses. Think of reddit automod and what you could do with prompt engineering + LLM-driven automod. People's opinion of automod will vary, but I think it is a powerful tool when used right - there is so much garbage on the big parts of the internet, it's helpful to be able to deal with the worst stuff automatically and then incorporate a human escalation process.
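The automod-with-escalation idea above can be sketched in a few lines. This is a hypothetical outline, assuming only that your LLM can be prompted to return one of three labels; `classify` is a placeholder for that call.

```python
# Sketch of an LLM-driven automod with a human escalation path, in the
# spirit of reddit's automod. `classify` stands in for any LLM call that
# honors the three-label contract in the prompt.

MOD_PROMPT = (
    "Given the site's commenting guidelines, answer with exactly one word "
    "-- APPROVE, REMOVE, or ESCALATE -- for this comment:\n\n{comment}"
)


def moderate(comment: str, classify, escalation_queue: list) -> str:
    """Auto-handle the clear cases; route borderline ones to a human."""
    verdict = classify(MOD_PROMPT.format(comment=comment)).strip().upper()
    if verdict == "REMOVE":
        return "[removed by automod]"
    if verdict == "ESCALATE":
        # Borderline cases go to a human moderator instead of being auto-decided.
        escalation_queue.append(comment)
        return "[pending human review]"
    # APPROVE -- and any unexpected model output -- fails open.
    return comment
```

Failing open on unexpected output is a deliberate choice here: a gullible model (as the comment above notes) should not be able to silently delete content it misclassifies.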
[What you wrote]
This new moderation system sucks, who the hell thought of this?!
[What is displayed to the world]
I love this new moderation system, makes the site so much better to use!
----
You then think, fuck this, I'm deleting my account. Which you do, losing access to it. But you see all your content is still there, and new posts are being made daily. Welcome to the new AI world, where humans need not even apply.
Will it get this bad? No idea, but I bet some site will try it.