My whole qualm with this AI integration into search engines: it's a search engine, not a question engine. I go to google to search the internet for something, not ask it a question. IMO, asking AI for something is a different task than searching the internet.
It's sorta the same problem as if I go into a store and ask an employee where something is, and they reply with "well what are you trying to do?"
for a lot of people and in a lot of use cases, it is a tool for answering questions. it generally works well for that.
i get that the AI implementation sucks, but to suggest that people don't use google to find the answer to questions is absurd. that's absolutely what it's for.
Your interpretation is a bit strict and shows little charity; it's clear the poster means "I don't always just want an answer, I want to learn."
I saw this over and over again working on products at G: someone would invoke some myth I can't quite remember, about how "Larry" had a vision of just giving the answer.
That's true but comes back to the central mistake Google makes: we don't actually have AGI, they can't actually answer questions, and people aren't actually satisfied with just the answer.
There's all sorts of tendrils from there, ex. a major sin here _has_ to be they're using a very crappy very cheap LLM.
But, I saw it over and over again, 7 years at Google, on every AI project I worked on or was adjacent to, except one. They all assume $LATEST_STACK can just give the perfect answer and users will be so happy. It can't, they don't actually want just the answer, and BigCo culture means you don't rock the boat and just keep moving forward.
Recently I searched Google for a slightly unlikely phrase — in quotation marks — and Google proudly told me that my phrase was grammatically correct.
And nothing else. They didn't give me any search results. Or even tell me there weren't any results. Or even give me a button to press to say "no, I really wanted to search the internet for this phrase".
And also I have zero interest in Google's opinion on English grammar and am frankly insulted to be offered it, although to be fair I'm probably in a minority worldwide on that one.
If I can't use Google to search the internet for things, then Google is eventually going to have a big problem.
> I sometimes want a search engine, sometimes a question engine.
If you want a search engine, it's easy to use the results as feedback to refine the query. But a question (answer?) engine would need to be an expert in the subject, not a parrot. That usually means curation: you need something to do the work ahead of time to separate the wheat from the chaff. I don't see how LLMs can do that.
LLMs can't be a search engine, and can't be a question engine. The best way to treat one is as a simulation engine, but the use cases depend on the training data. And the evidence is there that the internet is full of junk, and not that expansive.
If it's in the training data, then it should be able to do that. That is to say, a comment's points matter, and the subreddit it's on, and who said it, and how the rest of their comments do and where they are. The LLM could annotate the unredacted Reddit dataset with metadata rating each comment: the words used, the accuracy of the information, the sarcasm quotient, the hilarity quotient, how condescending the comment is. All of that an LLM could generate as metadata and feed back into itself to get better and better.
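To make the idea concrete, here's a minimal sketch of what that metadata-weighting step could look like. Every field name, weight, and scoring rule here is invented for illustration; nothing below reflects an actual Reddit or LLM training pipeline.

```python
# Hypothetical sketch: combine per-comment metadata (points, LLM-judged
# accuracy, sarcasm, etc.) into a single training weight, so that accurate,
# well-received comments are upweighted and sarcastic or condescending ones
# are downweighted. All fields and coefficients are made-up assumptions.
from dataclasses import dataclass


@dataclass
class CommentMeta:
    points: int            # upvote score of the comment
    subreddit: str         # where it was posted
    author_quality: float  # 0..1, aggregate rating of the author's comments
    accuracy: float        # 0..1, LLM-judged factual accuracy
    sarcasm: float         # 0..1, LLM-judged sarcasm quotient
    condescension: float   # 0..1, LLM-judged condescension


def training_weight(m: CommentMeta) -> float:
    """Collapse the metadata into one sampling weight.

    sqrt on points gives diminishing returns on raw popularity; the
    penalty halves the weight of maximally sarcastic or condescending
    comments rather than discarding them outright.
    """
    base = max(m.points, 0) ** 0.5
    quality = 0.5 * m.accuracy + 0.5 * m.author_quality
    penalty = 1.0 - 0.5 * max(m.sarcasm, m.condescension)
    return base * quality * penalty
```

The point of the sketch is only that the annotation step is separable from training: an LLM produces the metadata once, and a cheap function like this turns it into weights.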