And what's the end result? All one can see is an ever-larger share of people who confidently subscribe to false information and become arrogant when its validity is questioned, because the LLM's writing style has convinced them it's some sort of authority. Even people on this website are misinformed enough to believe that ChatGPT has developed its own reasoning, despite it being, at its core, an advanced learning algorithm trained on an enormous amount of human-generated data.
And let's not speak of those so deep in sloth that they put it to use to degrade, rather than augment as they claim, human creative and recreational activities.
This seems a bit self-contradictory: you say LLMs mislead people and can't reason, then fault them for being good at helping people solve puzzles or win trivia games. You can't have it both ways.
Why would you postulate that these two are mutually exclusive?
> then fault them for being good at helping people solve puzzles or win trivia games
They only 'help' in the same sense that a calculator would 'help' someone win a hypothetical mental math competition; that's the gist: it robs people of the creative and mentally stimulating process that makes the game fun in the first place. But I've come to realize this is an unpopular opinion on a website where being fiercely competitive is the only remarkable personality trait, so I guess it may indeed be useful for that particular population.
https://archive.ph/fg7HE