Nobody really knows "the point" of LLMs yet. They weren't even "invented" so much as they emerged as a trick to get computers to better understand human language.
They're still brand spanking new and everyone's trying to figure out how to best use them. We don't even really know if they're ever going to be "really good at" any given task!
Are they "really good at" these things or are they merely "OK-ish"?
* Answering factual questions.
* Programming.
* Understanding what the user wants from natural language.
* Searching/recommending stuff.
Real world testing suggests that with billions and billions of dollars spent, you really can get an LLM to be "OK-ish" at all those things :D
Yet literally hundreds of billions of dollars are being invested in them. That's what's so concerning. And I can tell you that not one of these startups would EVER acknowledge the truth of your statement.