> they are just statistical machines outputting whatever their training set says is most probable.
How is this sentiment still the top comment on an article about AI on HN in 2026? It's not true of today's models. They undergo vast amounts of reinforcement learning, optimizing an objective that is NOT just "predict the most likely next token given the training corpus." I'd argue that even without the RL, the predict-the-next-token objective doesn't preclude thinking and reasoning, but that's a separate discussion. Generative sequence modeling learns to (approximately) model the process that produced the sequence. When you consider that text sequences are produced by human minds, which most would consider to be thinking and reasoning, well...
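To make the distinction concrete, here's a minimal sketch of the two objectives side by side (toy PyTorch model; `reward_fn` and all other names are hypothetical stand-ins, not anyone's actual training pipeline). The pretraining loss literally scores the corpus-most-likely next token; the REINFORCE-style RL loss reweights sampled rollouts by an external reward and never consults the corpus distribution directly:

```python
import torch
import torch.nn as nn

# Toy "language model": embed a token, map it to next-token logits.
VOCAB, DIM = 100, 32
model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))

def pretraining_loss(tokens):
    # "Predict the most likely next token": cross-entropy of token t+1
    # given token t, averaged over the sequence.
    logits = model(tokens[:-1])
    return nn.functional.cross_entropy(logits, tokens[1:])

def rl_loss(prompt_token, reward_fn, steps=8):
    # RL objective: sample a continuation, score the whole rollout with an
    # external reward (a preference model, unit tests, etc.; reward_fn here
    # is a stand-in), and push up the log-probs of high-reward samples.
    # Nothing in this gradient asks what the training corpus finds probable.
    tokens, log_probs = [prompt_token], []
    for _ in range(steps):
        dist = torch.distributions.Categorical(logits=model(tokens[-1]))
        tok = dist.sample()                   # sampled, not the corpus argmax
        log_probs.append(dist.log_prob(tok))
        tokens.append(tok)
    reward = reward_fn(torch.stack(tokens))   # scalar score of the rollout
    return -reward * torch.stack(log_probs).sum()   # REINFORCE estimator

# Usage: same model, two very different gradients.
seq = torch.randint(0, VOCAB, (16,))
pretraining_loss(seq).backward()
rl_loss(seq[0], reward_fn=lambda r: (r == 7).float().mean()).backward()
```

The toy code isn't the point; the point is that the RL gradient is weighted by reward, not by corpus frequency, so the trained behavior is no longer "emit whatever the training set made most probable."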