>> The stubborn fact remains that, no matter how deeply we probe into the nature of bacon, eggs, oatmeal, and avocado toast—to say nothing of shakshuka, grits, bear claws, or dim sum—or the interactions between these fundamental building blocks and, say, orange juice or coffee and the morning paper, we simply have no convincing theory to explain how such disparate, seemingly inert components give rise to the phenomenon we subjectively experience as “breakfast.”
That's the best sentence I've read in a long time, despite the fact that the pandemic has me spending too much time surfing HN.
I think the author makes a very solid, implicit argument regarding the problem of consciousness. The "problem" is hard to pin down. It doesn't seem any more implausible to me that consciousness is an emergent property of our brain bio/chem/physics than that hurricanes and diamonds and stalagmites naturally emerge from their respective environments. Or that massive spaceships emerge from Conway's Game of Life. Of course humans claim that we're special - "My consciousness is more important than a darned stalagmite!" - thinking of ourselves as more important than rocks and hurricanes, as more special, gives us a desire to survive. Our predecessors who genuinely thought they were no more "special" than a stalagmite were ineffective at passing down their genes.
So I claim that the hard problem of consciousness is not such a hard problem, but rather just a case of us humans being obsessed with ourselves.
I think you're sidestepping the issue a bit. The hard problem of consciousness doesn't say that consciousness has to be better as a value judgment, just that it... is. Saying that it's an emergent property of physics doesn't address the problem. Sure, any old combination of atoms can randomly become conscious if they get lucky, but what does that mean?
And it is a hard problem: how do you even define subjective consciousness non-circularly? "Consciousness is the state of feeling like something" - okay, but what is it that's doing the feeling?
> It doesn't seem any more implausible to me that consciousness is an emergent property of our brain bio/chem/physics than that hurricanes and diamonds and stalagmites naturally emerge from their respective environments.
It does to me, but more importantly, no one is able to give a coherent model of how consciousness would emerge from complex arrangements of matter.
I'm an atheist with zero spiritual inclinations whatsoever, but it does seem to me that science has not properly grappled with the hard problem of consciousness.
Everything I've read that tackles it from a science perspective seems like side-stepping, weasel words ("emergent property"), or wholesale denial ("it's an illusion"). Not very scientific!
There seems to be a collective hubris at play, where people are afraid to say "I don't know".
Here is my Blindsight-inspired hypothesis: that consciousness is an accidental application to ourselves of mechanisms that are useful for predicting the behaviour of others:
https://news.ycombinator.com/item?id=23475069
In breakfast terms: if it has been fitness-increasing in the past to note that all the other animals go down to the water hole at similar times (thereby freeing up some desirable carcasses for scavenging? or at least making it easier to cross over to faraway trees while the predators are busy elsewhere?), we may start to model mealtimes. Once we notice that others have a morning feed that affects our actions ("breakfast" is followed by "rush hour"), it isn't that big a jump for our grey cells to experience our own morning feed in the same category, as breakfast.
Once I read a neuroscience article saying that all of our senses are external to the brain -- the brain only 'experiences' electrical signals, not 'reality.'
What if we do have sensory neurons in our brains, though -- wouldn't that lead to a sense of self? Quite literally. I suspect it's something like that.
We can do better than say "I don't know." We already know that consciousness can be manipulated with drugs, electrodes, accidents, etc. That rules out the metaphysics and quantum voodoo some people think is necessary. It's not a solved problem, but I'm confident we'll find a perfectly scientific explanation. In that sense, it's merely a hard problem, not "The Hard Problem."
HN needs to lighten up sometimes; that article went over pretty easy. It might be a little scrambled, but you can poach meaning from it if you try. Things don't always have to have such a deep, hard-boiled meaning.
Puns are my bread and butter but even I'd have to say you're really milking it there. And I suppose this kind of humor is not just everyone's cup of tea.
This article perfectly articulates my sentiments about the "hard problem of consciousness". It's a made up problem. There is no such thing as consciousness. It's a term we made up to differentiate ourselves from the rest of the matter in the universe because we wanted to feel special. There is no mystery of consciousness, the only mystery is why humans are incapable of accepting the fact that they are just stuff, like any other matter in the universe.
You've made a huge leap from there-is-no-hard-problem-of-consciousness to there-is-no-consciousness. Consciousness is the experience of being something. I know it exists, because I'm experiencing it right now. The experience is consciousness, despite being illusory in many ways. Even a completely illusory experience is a conscious experience. In no way does it preclude my just being stuff, or other forms of stuff experiencing things too (and thus also being conscious). The mystery is how I can feel anything at all despite being just stuff.
But there is no mystery. Feeling is a self-referential concept. The concept of 'you' or 'me' 'feeling' presupposes a self, which smuggles in the conclusion that consciousness is related to some immaterial element.
Feeling is just how we experience reward and punishment. It's an epiphenomenon of our machinery for interpreting and interacting with the world. An intermediate representation, essentially.
I don't see how even a minimally-feeling self necessarily involves something immaterial. I'm convinced there isn't anything immaterial or magical about my brain being able to feel things. It's how the feeling self arises at all that is the question here.
> Feeling is just how we experience reward and punishment.
It is that, but not just that. This behaviourist Skinnerian abstraction doesn't do it justice. It may be an epiphenomenon, but how it arises at all is the question of interest here.
We may be meat robots, but we are social meat robots, so it's not surprising many of us feel a desire to discuss our shared meat robot-ness, aka "consciousness".
It is a semantic dispute, but in this case, it's an interesting one. The words in question are "I" and "feel". You even used the word:
>many of us feel a desire
Can you restate that clause without using any synonym for experience, subjectivity, consciousness, etc.? I posit that you can't, and the fact that you can't explain the most basic fact of existence except recursively is itself interesting.
Okay, first of all the fact that people cannot explain their experience does not invalidate their experience. Second of all, that is totally possible to explain if you know about the map-territory distinction and how it plays out psychologically.
Basically, humans are meat robots whose brain contains a symbolic-experiential map of their surroundings, meaning it (among many other things) associates symbolic concepts with sensations. Since humans self-model as part of their social behavior, the map contains a representation of themselves. The map also contains a sensory input and associated symbolic representation for its own operation; the meat robot calls "the meat robot sensing in the map that the meat robot is modelling itself" consciousness or "being itself". Separately, it calls the concept it associates with itself "I" and the sensory input of operating the map "being".
I don't disagree with your first point. I meant that it's interesting that there is a phenomenon that is undefinable despite (in my view) self-evidently existing.
I'm familiar with the map-territory distinction, but I still don't think that's sufficient. I'll grant that it's possible to define "I" as an arm's-length name for a representation - basically the same as "my body" or "my hands". And we can, without any philosophical complexity, do that with consciousness itself - I can talk about "my state of consciousness" in the third person.
That doesn't, to me, explain subjectivity. The simplest factory robot will have a model of self - likely a much more accurate one than mine. Is it conscious? Does it subjectively experience?
If it has a (recursive?) model of itself and a sensory input representing its operation, I would argue it's conscious. For instance, I believe computers are conscious in this sense; not in the sense that anything that computes is conscious but in the sense that `top` (or the Windows Task Manager) fulfills many of the listed traits - self-awareness, self-modelling, discrimination. Of course, computers can't actually do nearly as much as we can with their consciousness, but that's a question of skills that are not themselves consciousness.
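The `top` comparison can be sketched in a few lines. Here's a toy, assumption-laden illustration (the class name and fields are invented for this example, not taken from any real tool) of a process that maintains a model of its own state, including a record of its own act of modelling:

```python
import os
import sys

# A toy "self-monitoring" process, loosely in the spirit of `top`:
# it keeps a model of its own state, and that model includes facts
# about the model itself (a minimal self-reference).
class SelfModel:
    def __init__(self):
        self.observations = []

    def observe_self(self):
        # Snapshot facts about the process this model runs inside,
        # including the size of the model's own observation log.
        snapshot = {
            "pid": os.getpid(),
            "model_size_bytes": sys.getsizeof(self.observations),
            "observations_so_far": len(self.observations),
        }
        self.observations.append(snapshot)
        return snapshot

m = SelfModel()
m.observe_self()
second = m.observe_self()
# The second snapshot "knows" the model has already observed itself once.
assert second["observations_so_far"] == 1
```

Whether this minimal kind of self-reference counts as the relevant sort of self-modelling is, of course, exactly the point under dispute.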
For the closest example, I think there's probably something it is like to be the Linux OOM killer.
The problem with these debates is that they turn into a referendum on how people feel about mysticism. Some want to think that a mysterious phenomenon like consciousness requires a solution that takes us beyond science into the mystical. Others can't stand mysticism, especially not in a field of scientific inquiry.
If neuroscience provides a detailed scientific explanation for how consciousness works, that'll put it all to rest. Of course, the mysticism camp doesn't believe this is possible even in principle, and there are likely to be some holdouts even after the problem is solved, just like we have flat earthers now.
I wonder if the solution will turn out to have too many components to be satisfyingly understandable by humans, i.e. something like a 200-page proof that only a proof checker can validate. That may be a situation to confound both the mysterians and the non-mysterians.
I think there are probably a few key conceptual ingredients at most, but it might be that the implementation is complicated. As an analogy: we know that mitochondria power cells by oxidizing sugar and storing the released energy by converting ADP to ATP, but how all of that actually gets carried out is very complex.
You see hubris in those who would put human consciousness on a pedestal above any other organization of matter in the universe. I see hubris in those who would declare our limited observations of the universe as descriptive of the holistic nature of reality.
The truth is we can’t ever be sure our understanding is complete. This is why philosophy exists as distinct from science and always will.
> There is no such thing as consciousness. It's a term we made up to differentiate ourselves from the rest of the matter in the universe because we wanted to feel special.
I don't know that I'd go that far. Is there such a thing as quicksort, or is it a term we made up to differentiate certain bit patterns from the rest of the possible bit patterns (i.e., the rest of the positive integers)?
You could write an article like this about quicksort. At what point do you look at a certain positive integer, usually a couple thousand digits long, and say, "Behold, there is quicksort for x86_64"? (What is x86_64, for that matter? Scientists have recently proven it can exist without silicon, and it is well known that silicon can exist without x86_64.)
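To make the abstraction point concrete, here's a minimal quicksort sketch (in Python rather than x86_64, purely for illustration) - the abstraction "quicksort" is the same thing whichever multi-thousand-digit integer it happens to compile to on a given machine:

```python
# A minimal quicksort: the abstraction stays fixed no matter
# which bit pattern realizes it on a particular architecture.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```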
I'd agree with something extremely close to your point - that consciousness is an abstraction, just like quicksort and x86_64 are - and I'd even happily admit that abstractions are things we've made up, but just because something is an abstraction doesn't mean it isn't real!
Quicksort is a legitimate model though; there are objective answers about what is a quicksort and what isn't. Two people will never look at the same processor running the same algorithm and disagree about whether it's a quicksort or not (or if they do, they'll be able to localise that disagreement down to a more specific question like whether using the cache in a particular way qualifies as using extra memory). Whereas two people might look at the same processor running the same algorithm, agree about all the measurable facts and all of the lower-level definitions they were using, and still disagree about whether it was conscious.
We can find plenty of fuzzier abstractions. Does "machine learning" exist? Does "NoSQL"? Does "scrum"? (That one can't even be used to describe an integer, and it certainly doesn't describe a physical object.)
I'd claim that if two people agree about measurable facts and disagree about whether something is conscious, they do in fact disagree on the definition of consciousness, just as if they disagreed on whether something counted as NoSQL or scrum. (Though it's been a while since I paid attention to this area of philosophy, so I might just be wrong here - is there an example of two people claiming to agree on the definition of consciousness but reaching different conclusions?)
Honestly maybe this was the brilliance of the article. We don't even need to agree on the definition of "breakfast" to conclude that it's a useful and meaningful term. Nobody would claim breakfast doesn't exist on the mere grounds that it's made up. Of course it's made up, it's an abstraction.
I see what you mean - I'm not sure if it's a question of the model itself or how well it's communicated, or whether that's a meaningful distinction. Perhaps a better model would be that there's a spectrum of how well-defined a model is: for a good model everyone agrees on what it does or doesn't cover, a mediocre model is one where people agree about some cases but disagree about a lot of edge cases, and a bad model is one people don't agree about at all. And then the claim is that "consciousness" doesn't actually add any understanding or communicate anything, because people don't really agree about any more cases of it (beyond being just a synonym for "human").
For sure, consciousness is used to put humans over other creatures [1]. But that doesn't mean that consciousness doesn't exist. Why don't you look at it the other way round and assume consciousness wherever possible? Vegans have compassion for animals because they see consciousness. Are they wrong?
Or do they not go far enough - do plants and even stones have consciousness? But then, why don't we feel different when we cut our hair and lose some matter, or exchange all the atoms of our body during our life?
We don't "feel different" because there is no evolutionarily useful purpose to feeling bad about cutting off a hair. And that's probably why most people don't feel so bad about having an animal killed in order for us to eat. It seems likely to me that our ability to have compassion for animals is in some sense a "misfire" of a system that was "intended" to help us cooperate with other humans. We're stretching a system way beyond its original purpose.
My point is just that the way we feel about a certain action says almost nothing about the nature of reality or morality, but rather about us as a species. Whether or not stones have a consciousness wouldn't affect how bad we feel about crushing them into dust.
For what it's worth, I tend to lean toward the "not far enough" camp. I see no good reason to assume that there is anything special about any particular arrangement of matter that causes there to be something that it is like to be that matter. I think it is more accurate to say that we are consciousness, than to say that we are conscious.
There are still interesting questions to answer about consciousness. One of them is: why does the brain spend so much effort simulating an internally consistent model of what it's "like" to be me? And we know that it does, because it flaps my jaws and wiggles my fingers on the keyboard in patterns that are consistent with that model. It seems like a lot of resources to spend on something that isn't incredibly useful somehow. But those are questions about the mechanics of the brain, not about some mystic property or sensory qualia.
Unconsciousness is often taken to mean the absence of consciousness, but it could just as well be the absence of objects in consciousness. After all, by definition, no one has ever had first-hand experience of their consciousness disappearing, so we don't even know that consciousness can disappear. Likewise for consciousness appearing.
I agree with your conclusion that we are just a fantastic bundle of jiggling, excited stuff. But that doesn't preclude a discussion about the experience we have of that stuff in motion.
That viewpoint is nicely argued by Gary Marcus in Kluge (2008): that we're basically a collection of hacks (and not in the Marvin Minsky Society of Mind sense), but that this makes our cognition fascinating in (yet another) way of its own.
So, what is a red visual experience? Is it the reflective surface of the red-looking object? Is it photons around 700 nanometers wavelength? Is it the electrons traveling to the visual cortex from the eyes? Is it the neural activity in the visual cortex? Is it the symphony of different brain regions bringing it all together into a visual experience?
Is the exact symphony that plays in my brain the same one that plays in yours when we look at the same red object? What about when you dream of a red object?
I'd argue that the question isn't well-defined - same as asking whether your experience of counting to five is the same as mine. There is no "experience" of seeing red beyond the simple fact of perceiving 700nm-wavelength light.
It is no more possible for me to experience as blue what you experience as red, and vice versa, as for me to experience as 3 what you experience as 2. If I experienced it as 3, that is, one less than 4, it would actually be 3. When I dream of three objects, I experience them just like you would. I don't see why dreaming of a red object would be any different.
> I'd argue that the question isn't well-defined - same as asking whether your experience of counting to five is the same as mine. There is no "experience" of seeing red beyond the simple fact of perceiving 700nm-wavelength light.
If your brain can detect 700nm-wavelength light without you having a conscious experience of red, then there certainly is a question. Also, dreaming of red is not perceiving red, since there is no 700nm light. This is why you can't just simply reduce an experience to the stimulus.
> It is no more possible for me to experience as blue what you experience as red, and vice versa, as for me to experience as 3 what you experience as 2. If I experienced it as 3, that is, one less than 4, it would actually be 3. When I dream of three objects, I experience them just like you would. I don't see why dreaming of a red object would be any different.
The question isn't about the quality of your subjective experiences, it's about whether the pattern of neurons firing would necessarily be the same between people. If not, then you can't simply reduce subjective experiences to brain states. This has been a long-standing objection to the identity theory of mind.
I am not reducing an experience to the stimulus. I am simply reducing its identity to "that experience which is correlated with this stimulus."
My brain can detect three items without me having a conscious experience of three, and I can dream of three items without three items existing. I am simply saying that - whatever it is - your brain also has some experience it correlates with the stimulus of three, and there is no meaningful way to compare or contrast that experience with my brain's experience beyond noting that they're both correlated with the same stimulus.
We think we can imagine, what if my experience of "red" is your experience of "blue," and vice versa? But that's as meaningful as asking, what if my experience of "two" is your experience of "three," and vice versa?
> it's about whether the pattern of neurons firing would necessarily be the same between people
Nothing in my argument has anything to do with patterns of neurons firing or even with the existence of brains themselves. Perhaps my "red" neurons are in one place and your "red" neurons are in another place, sure, whatever. Why should that mean that our experiences are different?
I mean, certainly, the neurons themselves are not the same. Your neurons don't fire when I see red, or vice versa. But that's not enough to settle anything.
> I mean, certainly, the neurons themselves are not the same. Your neurons don't fire when I see red, or vice versa. But that's not enough to settle anything.
Correct, except that it makes an identity theory of mind problematic.
> Why should that mean that our experiences are different?
That's not my objection. You stated in the previous post:
> There is no "experience" of seeing red beyond the simple fact of perceiving 700nm-wavelength light.
I'm objecting to attempts to reduce experience to its physical substrate and then say there's nothing more to the matter. But there is if the reduction doesn't work. And it doesn't in this case, because there are red visual experiences which are not perceptual. Dreams, memories, visualizations and hallucinations can all contain red visual experiences in the absence of 700nm photons. So can directly stimulating the right neurons with a magnet or electrodes in the brain via surgery.
You're right, my phrasing was not accurate in my initial post. Let me rephrase:
You cannot identify/designate the experience of seeing "red" any more precisely than "that experience one has on seeing 700nm-wavelength light," just like you cannot identify the experience of seeing "three" any more than "that experience one has on seeing a thing and another thing and another thing."
It's true that you do not need the actual perception in order to have the experience, in either case. But do you believe there is a hard problem of the experience of three? Do you ask whether the exact symphony that plays in your brain plays in mine when we look at the same three objects?
(Maybe yes, and I'm misunderstanding what you think the hard problem is?)
> But do you believe there is a hard problem of the experience of three?
I think there seems to be a hard problem of any experience, be it red, three, pain, etc, because nobody has come up with a convincing way of how to reduce that to the physical.
> Do you ask whether the exact symphony that plays in your brain plays in mine when we look at the same three objects?
I doubt it does as individual brains differ, although there are likely similar patterns of neurons firing. This is why it's hard to reduce experience to brain activity.
> Maybe yes, and I'm misunderstanding what you think the hard problem is?
There is indeed no mystery of consciousness except that some humans are incapable of accepting the fact that they are just consciousness, matter just being their user interface with the rest of consciousness.
This works just as well (better, IMHO), does it not?
Absolutely. This quote from Westworld articulates it nicely too:
"The self is a kind of fiction, for hosts and humans alike. It's a story we tell ourselves. [...] There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can't define consciousness because consciousness does not exist. Humans fancy that there's something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next. No, my friend, you're not missing anything at all."
Only if you think replacing talk of subjective experiences like color, sound, pain, etc. with talk of loops and inflection points in the context of androids makes the hard problem go away.
I could quote Morpheus rhetorically asking Neo what reality is and then answering with, "If real is what you can feel, smell, taste and see, then 'real' is simply electrical signals interpreted by your brain." Which doesn't settle any philosophical discussion on the nature of reality or perception, or render those discussions moot.
No such thing as consciousness? I can understand an idea that consciousness is somehow pervasive, along the lines of panpsychism - but that doesn't make it nonexistent. Or, in the other direction, consciousness can be binary and rare but not particularly special, if it emerges from a confluence of lesser phenomena - but then it does still exist, and we're arguing only over its value.
Do you not doubt, ergo think, ergo be? Who is reading this sentence?
Consciousness does exist in some form though. The "experience" of sensations that we're both having right now is what many would label being conscious surely? I too don't think it's anything necessarily magical or separate from physicality about it, but it certainly exists.
A typical way to articulate it is that the experience of consciousness is just another sense-perception, not the "center of you". In the same way that your sense for certain types of light is called sight, your sense for certain types of neural activity is consciousness.
We have consciousness because our brain tells us we do. Establishing anything beyond this is futile.
Consciousness sounds like an extremely beneficial evolutionary trait to me. Imagine an "unconscious human" who shares the same body as us but whose brain thinks it can't think, thinks it is dead, thinks that nothing it senses is alive or real, including itself and its own thoughts. That unconscious human could definitely exist as an organism, but it would have a hard time coexisting in a community of humans.
Please don't nitpick the usage of the word "think" because I'm referring to "machine style" thinking. A computer can think without being conscious.
This definition is fine as long as you extend it to all living things. In some sense all plants and animals (and bacteria and fungus) are experiencing sensations.
Of course, doing that removes any sense that humans are special or unique, which was darawk's point.
The hard problem of consciousness isn't about humans being special. It's just a starting point because we know ourselves to be conscious. The hard problem is explaining our subjective experiences in physical terms. And it's the same problem for anything that has subjective experiences. Thus Nagel bringing up bat sonar experiences and Block using Commander Data.
The even harder problem is to understand that those physical terms are a temporary consensus among conscious beings, not some absolute truth. You cannot think about physics as if you were not a human.
We do not know that all plants and animals experience sensations subjectively. We know that they respond to stimuli; we do not know that they experience qualia.
This path can lead to solipsism, but there are many exit points between "I am the only conscious being" and "all living things are conscious."
We don't know that anything experiences qualia, ourselves included, because the concept hasn't been rigorously defined - perhaps because it's not actually a valid concept at all.
I think you're going to find it a very hard sell to convince me that I don't experience qualia. I don't know what it's like to feel pain? I don't experience sight, or sound, or abstract thoughts and beliefs, as subjective experiences?
The problem with qualia, subjective consciousness, and all their near-synonyms is that they're almost impossible to define non-self-referentially. That doesn't mean they don't exist, though - and all human experience is indication that they do.
> I think you're going to find it a very hard sell to convince me that I don't experience qualia. I don't know what it's like to feel pain? I don't experience sight, or sound, or abstract thoughts and beliefs, as subjective experiences?
Well, what would you expect to be different if you weren't? Presumably you'd still behave the same way that you do now (because by definition qualia aren't the externally observable parts of your experience), so none of the experiences that affect your decision-making process can be part of them - so wouldn't existing without qualia feel exactly the same as existing with qualia? In which case, why are you sure that you're not already doing it?
What if you're not actually conscious, you just think you are?
What you've described is a p-zombie. Living without qualia would (if you buy that theory; there are various levels of rebuttal) look exactly the same from the outside. Behavior and decisions would be the same, and I'd pass any possible test that I'm a real person (at least with current neuroscience).
But that only works for an outside observer. I can't be tricked into thinking I'm conscious, because it's recursive: who, then, is getting tricked? I know that I'm conscious because I'm the one experiencing things. You know that you're conscious for the same reason. Neither of us has a good way of being sure that the other one - or anyone else - is, at least not without some deeper philosophical reasoning.
> But that only works for an outside observer. I can't be tricked into thinking I'm conscious, because it's recursive: who, then, is getting tricked? I know that I'm conscious because I'm the one experiencing things. You know that you're conscious for the same reason.
Again, seriously, how can I tell? There are people who can't visualise things in their head, and largely don't realise there's anything strange about them. I can certainly introspect my thoughts, but I can talk about the results of that, so presumably a p-zombie is also able to do that. I don't think there's any part of my experience that doesn't affect my actions, so I don't think this notion of qualia is valid.
So is it a joke on "Gilbert Ryle: The Concept of Mind"?
But it just reiterates his first example using breakfast instead of university...
'The first example is of a visitor to Oxford. The visitor, upon viewing the colleges and library, reportedly inquired "But where is the University?"[3] The visitor's mistake is presuming that a University is part of the category "units of physical infrastructure" rather than that of an "institution".'
I dunno, I’m actually about halfway to being convinced that if we can figure out what causes breakfast we’ll necessarily have solved the problem of consciousness, too.
The point is breakfast is what we define it to be. It is a set of objects that have certain properties that we deem to be described as breakfast. In the same way, consciousness is what we decide it to be. There is no natural definition; it's a thing we made up. This also implies that searching for the natural explanation of consciousness is about as useful as searching for the natural definition of breakfast.
The article is saying that there is an experience/phenomenon of "the breakfast" that cannot be explained by an analysis of its constituents. The whole (the breakfast) is more/something else than the sum of its parts. Contemporary scientists only look at the parts because that's what their methods allow them to do/reason about. They miss the essence of the breakfast. Their methods don't even allow them to acknowledge that breakfasts exist -- however & whatever that would mean and imply.
Since I think the current scientific treatment of consciousness is rather deplorable (unconscious scientists unconsciously trying to be consciously smart), I like this article.
Interestingly an appreciation of what is meant by "qualia" seems to be correlated with age. I would wager that it is also inversely correlated with scientific qualifications. Imho there is a kind of person that while hugely clever is not necessarily very conscious and curricula (also imho) select for those kinds of people.
So when you say "unconscious scientists unconsciously trying to be consciously smart" I take it literally. I think scientists can only make such claims as "consciousness does not exist" because they are not particularly conscious themselves. Or at least their awareness is, in the flow, as it were, with no meta - no awareness of awareness and hence no consciousness as I think the term is best described.
Oh and I like your interpretation of the article better. Although I'm not convinced I give the author that much credit.
I was about to downvote this comment, but I went and checked the first 8 citations in the article. None of them exists, and yes, the article itself makes very little sense.
I honestly thought it was a bit silly, but I get the idea of how complex breakfast is.
And I thought it was nice that philosophers still have something useful to do and can talk about these things, because science will soon find out what makes breakfast, or whether it even exists.
It's great HN is discussing this stuff.
Come to comments, oh, it's just about consciousness again.
> It has long been understood that no breakfast can exist in the absence of its constituent foods and their related supporting structures such as plates and bowls, utensils, and toasters
I took it in the spirit of questioning one’s questions, and thought it was rather good. Russellian, even. Judging from the general reaction in here this may just mean I’m a dim bulb with a bad sense of humor.
Maybe universities should teach satire as a form of a scientific investigation. I first wanted to write that it should be acknowledged as a proper form of inquiry at least in philosophy. After further consideration, I'd say it should be a mandatory building block for all students.