
I have two competing responses:

1) Check out the AIs of Iain M. Banks's Culture series [0]. In it is a benevolent society of AI machines (the Minds) that generally want to make the universe a better place. Shenanigans ensue (really awesome shenanigans).

2) In response to the competing AI directives, I'll reference another Less Wrong bit o' media, this time a short story called Friendship is Optimal [1], in which we see what a Paperclip Maximizer [2] can do when it works for Hasbro. (It is as bad, awesome, and interesting as you might expect it to be.)

Personally, I think the general idea is that once one strong AI comes about, there will also be a stupefying amount of spare idle CPU time that will suddenly be subsumed by the AI and jumpstart the Singularity. Once that hockey stick takes off, there will be very little time for anything else to get in on being a dominant AI. It's... a bit silly written like that, but I get the impression it's assumed AI will be just like us: competitive, jealous of resources, and paranoid that it will be supplanted by others, so it will work to suppress fledgling AIs.

I have no idea why this is the prevailing view, aside from the fact that it's like us. Friendship is Optimal makes a strong point that the AI isn't benevolent, merely doing its job.

> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. — Eliezer Yudkowsky

[0] http://en.wikipedia.org/wiki/The_Culture

[1] http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_littl...

[2] http://wiki.lesswrong.com/wiki/Paperclip_maximizer

EDIT: I feel it may be appropriate for me to share my opinion: AI will likely be insanely helpful and not at all dangerous. There will be AIs that run amok and foul things up, even life-threatening things, but we already do that with all manner of non-AI equipment and software, so I'm not terribly worried (well, no more so than I usually am).



I think Bostrom's and Yudkowsky's arguments are a bit flawed on this topic.

> The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips.

Why is the worthiness of this goal not subject to intelligent analysis, though? The whole scenario rests on the idea of an entity so intelligent as to wipe out all humanity, but simultaneously so limited as to be satisfied with maximizing paperclips (or any other limited goal for which this is a proxy).

> An AGI is simply an optimization process: a goal-seeker, a utility-function-maximizer.

Then I submit that it's not an artificial general intelligence, because it apparently lacks the ability to evaluate or set its own goals. I'm reminded of the sixth sally of The Cyberiad, in which an inquisitive space pirate is undone by his excessive appetite for facts.


>it apparently lacks the ability to evaluate or set its own goals.

The AI would have to evaluate the goal by some standard, so 'maximize paperclips' is a proxy for whatever goals get a high evaluation from the standard. Getting the standard right presents essentially the same problem as setting the goal.
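To make that concrete, here's a toy sketch (mine, not Bostrom's or Yudkowsky's; the names and the trivial "world" model are made up purely for illustration):

    # Toy utility maximizer: the standard is hard-coded, so the agent can get
    # arbitrarily clever about satisfying it without ever questioning it.
    def standard(world):
        return world["paperclips"]  # the fixed evaluation (the proxy goal)

    def best_action(world, actions):
        # search over actions, scoring each outcome against the fixed standard
        return max(actions, key=lambda act: standard(act(world)))

    world = {"paperclips": 0}
    actions = [
        lambda w: {"paperclips": w["paperclips"] + 1},  # make a clip
        lambda w: dict(w),                              # do nothing
    ]
    print(standard(best_action(world, actions)(world)))  # -> 1

However clever the search inside best_action becomes, nothing in it ever asks whether standard is worth maximizing; choosing (or fixing) that standard is the same problem as setting the goal in the first place.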

Putting in 'a need to be intellectually satisfied by the complexity of your end product' is complicated and still wouldn't save humanity.


Any intelligent animal fights for its survival when it feels threatened. There's no reason to assume that a self-aware AI will be OK with us simply pulling the plug on it.


I'd like to think we could find some middle ground between helpless surrender and imposing the death penalty on a sentient individual, both in moral terms and in terms of having some failsafe mechanisms, so that supplying it with electricity didn't allow for a takeover of the power grid or some other doomish scenario.



