> I rolled my eyes and had trouble taking the rest of the article seriously when such an implausible thought experiment came so early in the article without being criticized for the deep flaws in this assumption

The thought experiment wasn't supposed to be 'plausible'. Its purpose was to introduce the concept of worldviews residing in a high-dimensional space, which could perhaps be approximated. It was also actually a quote from an earlier article, with more context.

> I don't think anyone is internally consistent all the time

I agree. Only a moron or a god could accomplish such a feat. However, this doesn't invalidate the thought experiment - all you need to do is add a time axis.

"On this particular morning, the Judge had dreamed that she sat in a beautiful parlor while doves fed grain to her kittens. Her general amiability is notably up, and she will deliver far less lenient verdicts all morning because of her contrarian tendencies. Then, a bone in her smoked haddock sandwich triggers repressed childhood memories, leaving her 31.64% more disaffected and apathetic." - The thought experiment could encompass full knowledge of all of this, from moment to moment.

> This is why we won't ever be able to, for example, train up AI surrogates to go have all our arguments for us, on a societal level ... and let them make decisions about agreements that should govern our actions

Is that why? Or is it because, by the time AI is that advanced, the likelihood of our being able to control it is virtually nil, thanks to a breakneck developmental arms race...

Besides which, being frustrated with clarity is something of a contradiction in itself, lol. I imagine a hyper-intelligent being could teach us to be better people in a way we'd actually like, if they were into that sort of thing. The Culture books explore this quite well.
