A bird and a submarine both rely on the same underlying laws of physics.
Just because LLMs seem to reason does not necessarily imply that they are able to actually reason. An airplane can't fake flying; it either flies or it doesn't. With text, reasoning can be faked.
My argument here is that the discussion of whether LLMs reason is a semantic question, not a technical or scientific one. I specifically didn't use "flying" because both airplanes and birds fly, and "fly" generally means "move through the air" regardless of method.
"Swim" means "move through water", but with the strong connotation of "move through water in the way that living things move through water". Submarines move through water, but they do not swim.
What does "reason" mean -- something like "arrive at conclusions", but with a strong connotation of "arrive at conclusions as living things do", and a weak connotation of "use logic and step-by-step thinking to arrive at conclusions". So the question is: what aspect of "reasoning" is tied to the biological sense of reasoning (that is, how animals reason) vs. the general sense of arriving at conclusions? Don't try to argue a definition of "reason" that is different from mine -- doing so makes it immediately apparent that we're just playing with semantics. The question is "what observable behavior does a thing that we all agree can 'reason' have that LLMs do not have?". And the related question is "to what degree does humans' ability to 'reason' reflect our ideal conception of what it means to 'reason' using logic?".
Both the statements "LLMs can reason" and "LLMs cannot reason" are "not even wrong" [1].
I use LLMs daily in coding, and it is very clear to me (as a humble average thinking machine) that what they do is an approximation of reasoning, and a very close one. But the mistakes they make clearly show that the system does not really understand; they are not very different from the mistakes made by people who have merely memorized text. Humans, when they really think, think differently. That is why I would never expect the current architecture of LLMs to come up with something like special relativity or any other novel idea, because again they do not really reason the way deep thinkers and philosophers do. However, most knowledge work does not require that much depth of reasoning, hence LLMs' wide adoption.
Just because LLMs seem to reason does not necessarily imply that they are able to actually reason. An airplane can't fake flying; it either flies or it doesn't. With text, reasoning can be faked.
Does that clarify?