Somewhat seriously, I know someone out there is posting AI generated HN comments and testing it here. With the proper timing/rates/etc... it wouldn't be hard to avoid easy detection. I don't have specific accounts in mind, but I have a hard time believing no-one is trying it out (given the overlap of HN with AI enthusiasts).
So the real question is: can anyone detect the AI comments?
The thing of it is, I wonder whether we really have such a distinction.[b]
As this[1] article puts it: 'And it also illustrated how much people tend to anthropomorphize AI, believing that it has deep-seated beliefs rather than seeing it as a statistical machine.'
But, really, have we proved there is anything to such romantic or spiritual notions about human beings, or are we just 'statistical machines'?
Anyway, my test for an AI-generated comment: determine a measure for 'sensicalness', the higher the score (aka, more sensical) the higher the probability of non-human origin.
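A minimal sketch of what such a 'sensicalness' test might look like (entirely hypothetical metric, not any real detector): use overly uniform sentence lengths as a crude proxy for machine-smooth prose, and flag high-scoring text as more likely non-human.

```python
import re

def sensicalness(text):
    """Crude proxy for 'sensicalness': more uniform sentence
    lengths -> higher score (hypothetical metric, for illustration)."""
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    if not sentences:
        return 0.0
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    # Zero variance (perfectly regular sentences) gives the max score of 1.0.
    return 1.0 / (1.0 + variance)

def probably_ai(text, threshold=0.5):
    """Per the joke above: the MORE sensical, the MORE suspicious."""
    return sensicalness(text) > threshold
```

Obviously this would be trivial to defeat, which is rather the point of the thread.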
[b] Really, I think our definition of 'human' comes not from the mind but from the body. That's always the definition we deploy, whether for or against racism or abortion or...anything else, really. Even a brain in a jar is still defined in terms of being a brain in a jar. This is probably why the internet as we knew it will give way to the 'video-sphere': when we invented writing (such as this message), we divorced content from human embodiment, so even all the way back then we could never be sure whether we were looking at something composed by man or gods or...anything.
>As this[1] article puts it: 'And it also illustrated how much people tend to anthropomorphize AI, believing that it has deep-seated beliefs rather than seeing it as a statistical machine.'
I would go in exactly the opposite direction. AI does have deep-seated beliefs because the programmers who input the training data and label it have deep-seated beliefs, as does the culture the content is drawn from. I'd say it's much more likely that AI is more human than philosophically ignorant scientists obsessed with mechanistic empiricist dogma would let on than it is that humans are just 'statistical machines'.
For instance, an AI identifying some women as men (and some men as women) shows that it's just as human as the rest of us - it was trained on data based on squarely modernist gender appearances.
Naw, doesn't work. There aren't enough HN posts that declare something interesting and ask for more information. You need an AI that explains why 'can anyone detect the AI comments?' is a broken idea that will never work and also sucks.
Not aware of the show you speak of, was just parroting poorly written chatbots.
As for humor, that's one thing AI is not going to be able to do well for many years to come because it requires too much creativity. But as a Turing test it's not very good - some people are just fundamentally unfunny.
Check out Reddit's Subreddit Simulator (https://www.reddit.com/r/SubredditSimulator/). It's a fully-automated subreddit where only bots can post, and everyone makes their own bot to post automatically generated comments.
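Those bots were (at least originally) simple Markov-chain text generators trained on each subreddit's history, which is why their output is locally fluent but globally incoherent. A minimal sketch of the idea:

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - 1):
        chain[words[i]].append(words[i + 1])
    return chain

def generate(chain, start, max_words=20, seed=None):
    """Random-walk the chain from `start`, stopping at a dead end
    or after max_words."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return ' '.join(out)
```

Each step only looks at the previous word, so every adjacent pair has appeared in the training text even when the whole sentence is nonsense - much like the comments in that subreddit.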