To paraphrase the great Noam Chomsky: cognitive science is in a pre-Galilean stage.
Many thousands of incredible scientists have done amazing work over the past ~century, but cutting-edge neuroscience still doesn't have the conceptual tools to go much further than "when you look at apples, this part of your cortex is more active, so we'll call this the Apple Zone".
Sadly/happily, I personally think there's good reason to believe this will change in our lifetime, which means we can all find out if trading the medicalization of mental health treatment (i.e. progressing beyond symptom-based guess-and-check) for governmental access to actual lie-detecting helmets (i.e. dystopia) is worth it...
There's a new theory that we might actually gain a greater understanding of the human mind by studying the AI systems we create, because we can basically get a perfect X-ray of their neural nets at any particular state.
When we look at the "apple zone" part of an AI model that lights up, we see it in way higher resolution than our best scans of the human brain, and this might tell us something about how apples are perceived by both systems, or how language is represented neurally, or any number of other things.
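To make that concrete, here's a toy sketch of what that "perfect X-ray" looks like in practice (my own illustration, using PyTorch forward hooks; the model, layer sizes, and names like `activations` are all made up): every intermediate value in an artificial network can be read out exactly, on every forward pass, which no brain scan comes close to.

```python
import torch
import torch.nn as nn

# Toy 2-layer network standing in for "an AI model" (hypothetical sizes).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

activations = {}  # layer name -> exact values of every unit in that layer

def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # full-precision readout, no sampling, no noise
    return hook

# Attach a hook to every layer so a single forward pass records everything.
for name, layer in model.named_modules():
    if name:  # skip the top-level container
        layer.register_forward_hook(save(name))

model(torch.randn(1, 8))  # one "stimulus"

for name, act in activations.items():
    print(name, tuple(act.shape))  # i.e. exactly which units "light up" for this input
```

The hooks capture each layer's output without modifying the model at all, which is the sense in which the instrumentation problem just doesn't exist for artificial nets. Whether having every number equals understanding is another question.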
And we can barely figure out how modern LLMs work.
That doesn't bode well for minds being human-interpretable, not at all.
I used to think that the biggest bottleneck to understanding the workings of the human brain was that it defies instrumentation, and that this could be solved by better imaging techniques, high-throughput direct neural interfaces, etc. But looking at the state of AI now?
If we had full read/write access to the state of every single neuron in the brain, what would we be able to learn? Maybe not that much.
I think this is more or less exactly what Chomsky hoped (hopes?) AI research would eventually become, rather than the purely pragmatic pursuit of tool making it has typically been.
Of course, that's specifically about human anatomy. Here we're talking about a feature that I'd bet is present in other animals too, so the factors discussed there don't all apply. In this case, though, there seems to be a straightforward answer -- the structures involved are very small! The post I linked is largely about larger structures we failed to find...
Don't call it that. He didn't make it up. Descriptive names are better than memorial names anyway. Call it the known-unknown matrix, or known-unknown risk classification system.
It's a fine name, because everyone (in the USA) knows what is being referred to in only 2 words. AND we get to remember the mix of amusement and horror we felt at the time, that a man who could say something so intelligent was simultaneously making decisions so catastrophically stupid and so impactful to all of us.
They are things that you know but don't know you know. For example, discovering a link between pieces of existing knowledge that reveals something which then feels obvious.
This happens occasionally when two fields of study previously thought to be separate discover a common link, with each able to answer questions the other had struggled with.
Yup. My daughter (15y) had the perfect example this morning. She said she was thinking about shooting a person upwards out of a cannon, then realizing they feel effectively zero gravity at the top, and that maybe this could be useful. Then she remembered the parabolic flight paths used to train astronauts, realized it's the same thing, and that she had started wondering about something she already knew.
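For the record, here's a back-of-the-envelope sketch of her realization (my own numbers, not hers; the 50 m/s launch speed is made up, and air resistance is ignored): a ballistic passenger is in free fall for the whole unpowered arc, so the apparent weight is zero not just at the apex but throughout, which is exactly what the astronaut-training parabolas reproduce.

```python
G = 9.81  # m/s^2, gravitational acceleration

def apex(v0: float) -> tuple[float, float]:
    """Return (time to apex, apex height) for a straight-up launch at speed v0 in m/s."""
    t_apex = v0 / G             # vertical speed decays linearly to zero
    h_apex = v0 ** 2 / (2 * G)  # height gained while decelerating
    return t_apex, h_apex

t, h = apex(50.0)  # hypothetical 50 m/s "cannon" launch
print(f"apex after {t:.1f} s at {h:.0f} m; apparent weight is zero for the whole coast")
```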
Lots of people know things deeply in their subconscious that they are fully ignorant of in their conscious thinking. These can manifest as gut feelings or anxieties, and with therapy they can be identified. Things like "you should leave him", or whatever.
Until we discover everything down to the Planck length, and then somehow prove that the Planck length is truly the smallest "unit", we have not discovered everything. And, relatively speaking, we have probably hardly discovered anything.