This article's wrong. It's not the Illustrated Primer that presages the chatbots, it's the Librarian in Snow Crash. Stephenson's "intelligence without consciousness" take is spookily accurate, I think.


Not really. Stephenson repeatedly goes out of his way to have the Librarian state that it cannot understand analogy.

There's literally a paper showing improved prompting success by having the model generate an analogous problem, solve it, and then apply that approach to the original problem:

https://arxiv.org/abs/2310.01714
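
For anyone curious what that looks like in practice, here's a rough sketch of the prompting pattern from that paper (analogical prompting). The `llm` function is a placeholder for whatever chat-completion client you'd use, and the prompt wording is my paraphrase, not the paper's exact template:

    def llm(prompt: str) -> str:
        """Placeholder for a chat-completion call to any hosted LLM API."""
        raise NotImplementedError

    def analogical_prompt(problem: str) -> str:
        # One prompt that asks the model to self-generate analogous
        # exemplars, solve them, then tackle the original problem using
        # the same approach -- no hand-written few-shot examples needed.
        return llm(
            "Problem: " + problem + "\n\n"
            "First, recall three relevant and distinct problems that are "
            "analogous to the one above. For each, state the problem and "
            "work through its solution.\n\n"
            "Then, using the same approach, solve the original problem. "
            "Give the final answer last."
        )

    # Toy usage example (my own, not from the paper):
    answer = analogical_prompt(
        "A rectangle's perimeter is 36 and its length is twice its "
        "width. What is its area?"
    )

The point being: not only can models handle analogy, you can get measurably better answers by explicitly making them reason through one first.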



