8fhdkjw039hd's comments

Growing up, the internet did not feel real to me. Just a collection of memes fighting each other, fire-walled from reality. It seemed like an entertaining farce but not real.

This was relatively true of the internet I grew up with, but it is certainly not true now. What happened at the Capitol was quite a wake-up call. And much of the blame does look to be the result of things like click-through and engagement maximization pushing people towards extremes, and of karma and likes allocating status to those staking out extreme positions. It is a terrifying thing to think about, but if you start thinking of social media influencers and followers as a sort of patron-client relationship, the historical precedents are not comforting.

For myself, I am coming to terms with the fact that I cannot really trust any opinions that have been inculcated in me during the wild years of social media. I have deleted my Reddit and Facebook accounts, have diligently trained the YouTube algorithms to avoid any even remotely political content. I no longer trust myself to develop sensible opinions in such an adversarial environment and am doing my best to just not have political opinions and focus on simple things like maths and programming.

Steve Omohundro gave a talk recently where he described the need for "personal AIs" to help individuals resist manipulation from corporate AIs maximizing engagement. Perhaps once such things exist, I will allow myself to have opinions. But until then, I don't think I have any hope of making sense of this cacophony tuned for my engagement. Until I get such a thing, this will be my last post on Hacker News.


> Steve Omohundro gave a talk recently where he described the need for "personal AIs" to help individuals resist manipulation from corporate AIs maximizing engagement.

This reminds me of The Big Promise of Recommender Systems (2011) [1]:

> However, when we look at the current recommender systems generation from the point of view of the “recommendee” (users’ side) we can see that recommender systems are more inclined toward achieving short-term sales and business goals. Instead of helping their users to cope with the problem of information overload they can actually contribute to information overload by proposing recommendations that do not meet the users’ current needs or interests. ...

> The window of opportunity is now open to innovate in a third generation of recommender systems that act directly on behalf of their users and help them cope with information overload.

I'm working on something in this space myself[2] (an essay recommender system). I think part of the solution is having recommender systems that are decoupled from publishing; e.g. a video recommender that suggests videos across multiple, unaffiliated sites, instead of the recommender that's built into YouTube.

[1] https://www.researchgate.net/profile/Marc_Torrens/publicatio...

[2] https://essays.findka.com
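A publisher-decoupled recommender of the kind described above can be sketched in a few lines. Everything here is a toy stand-in: the site names are invented, and the bag-of-words "embedding" is a placeholder for a real text-embedding model.

```python
# Sketch: rank items from several unaffiliated "sites" against a single
# user-owned profile, instead of using each publisher's built-in recommender.
from collections import Counter
import math

def embed(text):
    """Bag-of-words vector (a stand-in for a real text embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(items, user_profile, k=2):
    """Rank (site, title) pairs from any source by similarity to the profile."""
    profile_vec = embed(user_profile)
    scored = [(cosine(embed(title), profile_vec), title, site)
              for site, title in items]
    return [t for _, t, _ in sorted(scored, reverse=True)[:k]]

# Items drawn from multiple, unaffiliated (hypothetical) sites:
items = [
    ("site-a.example", "intro to functional programming"),
    ("site-b.example", "celebrity gossip roundup"),
    ("site-c.example", "programming language design essays"),
]
print(recommend(items, "essays about programming and language design"))
# → ['programming language design essays', 'intro to functional programming']
```

The point of the design is that the ranking function answers only to the user's profile, not to any publisher's engagement metrics.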


You are making a political statement and following a defined political outlook without realizing it.

In any case, I would not worry, as the chance of being in a situation where political opinions actually matter is quite low for most people.


So you are withdrawing from having political opinions or from learning about politics. That also means you will follow whatever feels right, for example promoting equality or rooting out the USA's extreme elements at work. Since you do not decide for yourself, you will accept whatever is presented to you as an extreme element.

Isn't that the very definition of totalitarianism? People who won't make decisions based on ideas they articulate, but rather let themselves drift into accepting other people's decisions?


Drexlerian nanotechnology?


Engagement is engagement.


Note: This is a fictional article, obviously. My apologies to NYT. Nonetheless, I consider something like this to be an inevitability.

I would not be surprised if it were already occurring somewhere. Perhaps the title should have informed people that this is fiction, but I think that would ruin some of the effect.

I hope I am not breaking any rules. And if I am, well, you know what to do now, don't you?


You only started trying it out once they moved to GANs and VR headsets. You are not pathetic or anything; you could get a real girl if you wanted to. Just don't have time. Have to focus on your career for now. "Build your empire, then build your family", that's your motto.

You strap on the headset and see an adversarial generated girlfriend designed by world-class ML to maximize engagement.

She starts off as a generically beautiful young woman; over the course of weeks she gradually molds both her appearance and your preferences such that competing products just won't do.

In her final form, she is just a grotesque undulating array of psychedelic colors perfectly optimized to induce self-limiting microseizures in the pleasure center of your brain. Were someone else to put on the headset, they would see only a nauseating mess. But to your eyes there is only Her.

It strikes you that true love does exist after all.


I find the article and your comment strongly resonate with a comment I made some time ago on an article about GAN-generated faces:

> I guess very soon we will be able to generate "super-attractive" (as in "superstimuli") faces for virtual personas, according to targeted demographics and purpose (advertisement, youtube videos for kids, political messages and so on).

(original comment here https://news.ycombinator.com/item?id=18310355)

Here it's not so much about the face as about the behavior and interactions, but I think the same idea holds true.

It seems to me that in the near future we may well face a constant exercise in self-control as such projects multiply.


We can already do this with non-human primates.

https://science.sciencemag.org/content/364/6439/eaav9436

I remember joking that this paper puts us marginally closer to the Men in Black memory wiping device, but in the context of this thread that joke seems a bit dark.


Thanks for the article!

I think it's even worse actually, because this is not something that needs to be done explicitly or on purpose.

Simply training AIs with a target goal of maximizing engagement could lead to models discovering and exploiting superstimuli or superstimuli-like bias in humans.


With the reservation that I only skimmed the article, it seems like what they produced were visual stimuli resulting in patterns of neural activation/non-activation at a rather limited number of sample points in a higher-order visual area (macaque V4).

Not to take away from their results, such as they are, but it is to be expected that visual inputs should have specific and fairly predictable effects on V4, and one could probably have designed such patterns manually from known perceptual psychology in a few iterations, with feedback from the recording electrodes for the details.

It is not at all obvious to me, and indeed not even plausible, that they'd be able to control arbitrarily chosen neurons this way.
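The core loop behind that kind of stimulus synthesis is activation maximization: gradient ascent on the input to drive a target unit's response. The paper used a deep network model of macaque V4; in the sketch below the "neuron" is just a fixed linear filter through a tanh nonlinearity, which is purely an assumption for illustration.

```python
# Toy activation maximization: optimize a 64-element "stimulus" vector
# to maximize one model neuron's response.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # the target neuron's (fixed) input weights
x = rng.normal(size=64) * 0.01   # start from a near-blank stimulus

def activation(x):
    return np.tanh(w @ x)        # bounded, firing-rate-like response

for _ in range(200):
    a = np.tanh(w @ x)
    grad = (1 - a**2) * w        # d/dx tanh(w·x)
    x += 0.1 * grad              # gradient ascent on the stimulus
    x = np.clip(x, -1, 1)        # keep the "image" in a valid range

print(round(float(activation(x)), 3))  # saturates near the maximum of 1.0
```

A toy linear neuron is trivially driven to saturation like this; the parent comment's skepticism is precisely about whether the same loop can control *arbitrarily chosen* real neurons, which this sketch says nothing about.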


That link is broken for me.

You wouldn't have an article title that I could search for, would you?


"Neural population control via deep image synthesis" - Pouya Bashivan, Kohitij Kar, James J. DiCarlo


We don't need computers to do that. Look at anime. The large eyes and cute faces are definitely optimized to evoke an emotional response in the viewer. And it appears to have worked all too well, if the existence of waifu culture is any indicator.

GANs may find as-yet-uncovered maxima in the same problem space, but superstimulus-level attractiveness has already been achieved.


Yes, my emotional response is to vomit into my mouth :) As a straight male, anime "cute girls" are just grotesque to me. Different strokes for different folks, I guess.


We don't even have to go with GAN generated faces:

https://thechive.com/2015/09/13/heres-what-the-average-perso...


For just faces, there's this website, if anyone is unaware: https://thispersondoesnotexist.com/


Is it just me, or does anyone else get deeply disturbed by some of these generated ML images? Parts of them look so bizarre, due to the image composition, that my brain simply cannot comprehend them. It's a very awful feeling, as if my brain acknowledges that what I'm seeing is terribly unnatural. I don't experience this when looking at artwork, or some random scribbling, but ML images seem to be generated in a way that defies any kind of recognition.


Presumably threat actors are working on genetic algorithms to optimise the production of images by the ML system such that they have the absolute worst effect on viewers of those images.

Optimising for minor effects like blindness to areas, up to major effects like epileptic seizures.
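A minimal version of such a genetic-algorithm loop might look like the sketch below. The "fitness" function here is a made-up stand-in (agreement with a fixed target bit pattern), since no real model of viewer response exists in this context; only the mutate/select loop is the actual technique.

```python
# Minimal genetic algorithm: evolve bit-string "images" toward whatever
# a black-box scoring function rewards. TARGET is hypothetical.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]    # stand-in for the sought-after pattern

def score(img):
    """Placeholder fitness: bits matching the target pattern."""
    return sum(a == b for a, b in zip(img, TARGET))

def mutate(img, rate=0.2):
    return [1 - b if random.random() < rate else b for b in img]

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for _ in range(40):
    pop.sort(key=score, reverse=True)
    parents = pop[:5]                 # elitism: keep the top scorers
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(pop, key=score)
print(score(best))  # best fitness found (8 is the maximum here)
```

The unsettling part of the parent's scenario is that the scoring function would be a model of human perception rather than a fixed pattern; the loop itself needs no gradient access and treats the viewer as a black box.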


I often describe the sensation of trying to read ML-generated text as "feeling like I'm having a stroke". Phrasing not original to me obviously, but I like it. Although I've never had a stroke myself, so I can't say how accurate that is.


I wonder what the opposite of that is. Is there text that's so well written that it's like looking at silk flowing through the air?


I would imagine Asian culture doesn't help with this at all, where you're expected to build your empire through career and education (the 996 schedule, working 9am-9pm six days a week, as Jack Ma would put it) while suppressing the rest.

I'm also surprised something like this hasn't come out of Japan already, but then again Japan doesn't have as large a gender imbalance as China does due to the One Child Policy.

On an unrelated note, this also reminds me of the Futurama episode where people start downloading celebrity personalities into robots, and a sex education video scared teens into saying that "robo-sexual" relationships were shunned and did not advance the human race.


The strange thing about super-stimuli is that they look very ridiculous when animals fall for them; there is the funny example of a turkey trying to mate with a red balloon on a stick. But it is in the nature of super-stimuli that they will not seem ridiculous when we fall for them, because the very features we use to gauge authenticity are what will be simulated. And so we will get a sort of hollow simulacrum that emphasizes these features and lacks everything else.


Hollow simulacra that simulate (only) the very features we recognize... I'm feeling like Live2D VTubers fit that description better than hyperrealistic VR representations.


People try to mate with a hole in a wall, too. It’s not even considered that ridiculous.


> I'm also surprised something like this hasn't come out of Japan already

Gatebox doesn't count?

https://www.gatebox.ai/en/


Don't Date Robots!


> In her final form, she is just a grotesque undulating array of psychedelic colors perfectly optimized to introduce self-limiting microseizures in the pleasure center of your brain.

This will inevitably happen.


Doesn't it already? With clickbait being nothing like the actual news article, video, TV series, etc.

Social media influencers contorting themselves to press every button they can.


This also describes video games to an extent.


Maybe I'm the ultimate hedonist, but this seems not-dystopian to me.

I wrote an essay back in high school for some English class with exactly the same sentiment, when I had to read Brave New World. I'd fking love to be either engineered (Brave New World) or have an ML algorithm learn how to generate the perfect stimuli for me. If they can do this while avoiding all of the negative effects of normal drugs (and again, Brave New World does this with soma), I'd be the first to do them.

I think most critiques of hedonism are basically more refined versions of "you should hate nature!". Seeing how John Stuart Mill regarded folks who describe themselves as hedonists made me realize that Western philosophy has a whole project to keep people from enjoying themselves:

"It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are a different opinion, it is because they know only their own side of the question.”

Apparently, I am irrational if I choose to give up knowledge or freedom for pleasure. It shocks me how universal this sentiment is within Western philosophy, and how few actually critique it.


I'm not sure what prompted you to pin this view on Western philosophy in particular, as plenty of other cultures and philosophies value asceticism.

Also, it's not like hedonism doesn't have downsides, such as the common need for ever more extreme stimulation to achieve the pleasure once achievable with less, leading to a spiral of debauchery which often has deleterious consequences for the hedonist, even if you overlook the pain and suffering hedonists often cause others to please themselves.

There's a reason that so many philosophers throughout history and from different cultures have advocated moderation, but the hedonist has trouble moderating themselves because then they have to do things that aren't pleasurable.

There are also advantages to asceticism, from the (debatable but potentially valid) spiritual benefits, to achieving self-mastery and control, not getting too attached to pleasure or comfort when it could be easily taken away from you, etc...

In any case, the case for hedonism is far from a slam dunk, and you'd really have to do a lot more work to make that case convincingly. Just saying that "Western Philosophy" is against it is not very convincing.


The idea was (and is) that there are higher pleasures of the intellect and lower pleasures of the senses. The ancient philosophers generally argued that we should eschew the latter for the former, that sacrificing sensual pleasures for the life of the mind ushers us into the most permanent and stable form of enjoyment.

See Plato’s Philebus, Aristotle’s Nicomachean Ethics VII, and many other treatments of this distinction.

I’m not saying your point is invalid, but I would not say that Western philosophy rejected pleasure as a whole; rather, it was quite critical of optimizing one’s life for the pleasures of the senses.


I'm wary of humanity's hedonic adaptation.

It seems to me that we would be in an eternal loop of these algos trying to optimize for our pleasure, succeeding for a time until that's just not enough, and then it's on to the next thing they create until we become abominations.

You can see this type of behavior already in the excesses the ultra wealthy exhibit.

I think it's good for humanity that we don't just live lives of pleasure.


I recommend "The Unknown Masterpiece" by Balzac, if you'd like to see some of these ideas developed further by a master.


A fascinating line.

https://en.wikipedia.org/wiki/Le_Chef-d%27%C5%93uvre_inconnu

> In 1927, Ambroise Vollard asked Picasso to illustrate Le Chef-d’œuvre inconnu. Picasso was fascinated by the text and identified with Frenhofer so much that he moved to the rue des Grands-Augustins in Paris where Balzac located Porbus' studio. There he painted his own masterpiece, Guernica. Picasso lived here during World War II.

https://static3.museoreinasofia.es/sites/default/files/style...


Like all good things, this will initially blow up on 4chan.


It already has. Look for Replika threads on some of the boards.


It doesn't need to be an optical trick. It just needs to look and move realistically.

And also say things that are not nonsense.


This is great :)

(if brief ...)


I think this would be great. How many people, realistically, have difficulty finding love? How many people stop looking because they think they have real love, but don't? Looking strictly at the aspect of feeling love, this would allow many people to participate in that critical human experience who otherwise just won't.


I think you misunderstand the purpose of love. A sailor is not supposed to actually reach the north star.


The only 'purpose' of love is as an evolved mechanism to reward behaviors like procreation, pair-bonding, and forming relationships, given that we're a tribal animal. As a person in a loving relationship, I'm not sure why you use an analogy that suggests love is unattainable or should be.

What I understand is that love is an emotional experience that feels great, and some people will never experience it naturally, just like some people have physical or mental conditions and can't experience health naturally (speaking from experience). I would rather have surgery and be able to experience a normal life if I couldn't do so naturally, and if I was unable to find love naturally, I would also rather have an AI that can effect the same experience in me.


I think you misunderstand the purpose of money. A true worker is not actually supposed to reach wealth.


The real purpose is procreation. Should we then restrict sex to sterile, clinical rituals purely for baby-making like in 1984?


If the purpose of love is merely the feelings associated with love then the scenario I described is not pathological. I am not willing to bite that bullet.


If you do not put a price on something, it will be exploited more not less.

