This article's wrong. It's not the Illustrated Primer that presages the chatbots, it's the Librarian in Snow Crash. Stephenson's "intelligence without consciousness" take is spookily accurate, I think.
Not really. He repeatedly goes out of his way to have the Librarian state that it cannot understand analogy.
There's literally a paper showing improved prompting success by having a model create an analogous problem, solve that, and then apply the same approach to the original problem:
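For concreteness, here's a minimal sketch of that analogical-prompting idea, assuming nothing about the paper's exact prompt wording; `call_llm` is a hypothetical stand-in for whatever model API you're using:

```python
# Sketch of analogical prompting: ask the model to self-generate an
# analogous problem, solve it, then carry the approach back to the
# original problem. The template wording here is illustrative, not
# the paper's exact prompt.

ANALOGICAL_TEMPLATE = """\
Problem: {problem}

Instructions:
1. Recall or invent a simpler problem that is analogous to the one above.
2. Solve that analogous problem, showing your reasoning.
3. Apply the same approach to the original problem and state its solution.
"""

def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in your provider's actual API."""
    raise NotImplementedError

def solve_by_analogy(problem: str) -> str:
    # Single call: the model does the analogy, the transfer, and the
    # final answer in one generation.
    prompt = ANALOGICAL_TEMPLATE.format(problem=problem)
    return call_llm(prompt)

# Usage:
# print(solve_by_analogy("How many diagonals does a convex 12-gon have?"))
```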