There's a new theory that we might actually gain a greater understanding of the human mind by studying the AI systems we create, because we can basically get a perfect X-ray of their neural nets in any given state.
When we look at the "apple zone" part of an AI model that lights up, we see it in way higher resolution than our best scans of the human brain, and this might tell us something about how apples are perceived by both systems, or how language is represented neurally, or any number of other things.
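To make that "perfect X-ray" concrete, here's a minimal sketch (assuming PyTorch and a toy feed-forward model, not any particular LLM) of reading out every intermediate activation with forward hooks; there's no equivalent instrument for a living brain.

    # A minimal sketch of the "perfect X-ray": forward hooks let us read every
    # intermediate activation exactly, for any input we like. Toy model only.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(16, 32),
        nn.ReLU(),
        nn.Linear(32, 8),
    )

    activations = {}

    def capture(name):
        def hook(module, inputs, output):
            # Store the full activation tensor for this layer.
            activations[name] = output.detach().clone()
        return hook

    # Register a hook on every layer so no "neuron" stays hidden.
    for name, module in model.named_modules():
        if name:  # skip the top-level container itself
            module.register_forward_hook(capture(name))

    model(torch.randn(1, 16))

    for name, act in activations.items():
        print(name, tuple(act.shape))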
And yet we can barely figure out how modern LLMs work.
That doesn't bode well for minds being human-interpretable, not at all.
I used to think that the biggest bottleneck to understanding the workings of the human brain was that it defies instrumentation, which could be solved by better imaging techniques, high-throughput direct neural interfaces, etc. But looking at the state of AI now?
If we had full read/write access to the state of every single neuron in the brain, what would we be able to learn? Maybe not that much.
I think this is more or less exactly what Chomsky hoped (hopes?) AI research would eventually become, rather than the purely pragmatic pursuit of tool-making it has typically been.