> We've just made a math equation that can spit out language, you're comparing this to things that can actually _experience_ without someone clicking enter on a keyboard.
What would it mean to "experience" language properly, and what precludes LLMs from doing that?
In my view, the hubris lies in exactly the opposite view: that our minds are somehow "more" or "better" than a bunch of matrix multiplications. That kind of claim has a poor historical track record; see e.g. geocentrism, or the whole notion of a life force/soul/divinity.
I'd argue that a large amount of the complexity in our brains is very likely incidental; I don't see why we would ever need to fully simulate a gut biome to achieve human-level cognitive performance. (Being overcomplicated certainly makes the brain harder to replicate exactly or to understand fully, but I don't think either of those is necessary.)