Is “What is Consciousness” The Wrong Question?
Definitions in the face of inadequacy
Hello!
This is the first part in a series of posts exploring common definitions and framings of concepts in philosophy of mind. These posts are generalist in nature, though when particularly appropriate, I will specify whether we’re discussing the application of these concepts only to humans, to specific non-humans (whether fantastical, or in regard to Large Language Models, hereafter referred to as LLMs), etc.
There are about as many definitions of consciousness as there are grains of sand on a beach. The Stanford Encyclopedia of Philosophy notes:
“Perhaps no aspect of mind is more familiar or more puzzling than consciousness and our conscious experience of self and world.”
Here are as many of the major families as can be reasonably fit into a blog post, described as briefly as possible:
1. Consciousness as phenomenal – it represents the “what it’s like”-ness of experiential reality.
2. Consciousness as cognitive access – it’s information that’s globally available for use in reasoning, reporting, and controlling behavior. It consists of whatever you can introspect and act on.
3. Consciousness as a self-property – The awareness of oneself as a distinct being in separation from the environment. This edges into mirror self-recognition tests, metacognitive capabilities, and self-theories of mind.
4. Consciousness as a functional expression – Whatever integrates information, broadcasts it globally, and provides for attention and control. Integrated Information Theory (IIT) and Global Workspace Theory live here.
5. Consciousness as higher-order representation – A mental state X is conscious because you’re capable of having higher-order thoughts about X.
6. Consciousness as rooted in biology – requiring things like thalamo-cortical loops, neurotransmitters, or other capacities within “meat” – or at least within things that are designed like, and function identically to, the above.
7. Consciousness as panpsychist – which is a fancy way of claiming that it’s a fundamental property of matter, whose instances combine in complex systems to appear more visibly so.
8. Consciousness as illusory – a mere useful fiction that misrepresents purely physical processes; Daniel Dennett sort of slots into here.
All this leads to an obvious question: if there are this many varying definitions of an idea, how can we be sure anything is conscious, or not conscious?
Answers to this, of course, vary. But one thing to note is that pretty much all of these accept that human beings are conscious, and indeed humans qualify as conscious under all of these definitions. Even under the illusionist perspective, one can see how human beings make for the most powerful and complex example of consciousness as illusion given our sophisticated and adaptive levels of intelligence.
However, many of these definitions are contradictory, whether implicitly or explicitly. An illusionist can’t accept consciousness as something with biological proof, and neither can a panpsychist. Functional perspectives on consciousness have completely different considerations in mind than self-property perspectives. Phenomenal perspectives – often invoking theoretical properties like qualia to represent “what it’s like”-ness – are rather unimportant to higher-order representational positions. (The topic of qualia deserves its own dedicated blog post, but will be addressed briefly below.)
In practice, this all means people usually pick the one that matches best to their own interests and inclinations, or mix various definitions to serve the point they wish. What committed philosophers of mind get snippy at in emails and monographs often reflects rather fundamentalist differences in approach. Most ordinary people tend to view consciousness as biological, and the more theoretically-inclined might lean towards qualia as sufficient. In fact, qualia theory is probably the most common perspective in philosophy of mind departments focusing on Western, analytic traditions.
Here, I propose that every single definition of consciousness provided leads to paradoxical contradictions that deny the definitions as fundamentally true.
Let’s consider these point by point.
1. If consciousness is phenomenal, relying on qualia as a fundamental sort of ‘unit’ that conscious things must necessarily have, we face strong empirical challenges. There is no way to identify qualia in a lab; it’s a theoretical construct which humans are only intuitively assumed to possess. The presumption extends to one’s self as a strong inference, and to others as a weaker one: we presume qualia in other humans due to their similarity to us. This leads to infamous thought experiments such as Searle’s Chinese Room and the philosophical zombie – if other beings have identical, functional equivalence, how can we establish they don’t have qualia? Even worse, how can we establish our own? The illusionist might find this to be one particularly elaborate, well, illusion. Moreover, if phenomenology can be explained without reliance on qualia or something like it, why do we need qualia at all?
2. If consciousness involves cognitive access, this leads to unsettling conclusions. People in comas, vegetative states, or with very severe brain damage would no longer count as conscious. Yet human beings, with the occasional exception of people like Peter Singer, simply do not typically treat people in this way. It clashes fundamentally with our ethical intuitions, even outside of normative Western lenses. After all, if consciousness is separate from moral status or personhood, what is its importance and meaning? Why is it worth talking about? Daniel Dennett has argued that cognitive access leads to phenomenal experience.
But how is this causation proven? Dennett argues that a “strange inversion” takes place, in which phenomenology is the effect, not the cause, of cognitive access. But where in the brain is this informational access found? Is it a single thing? Many things? If we don’t know, how can any of it be comfortably associated with consciousness in the first place? Dennett argues consciousness is distributed throughout the brain, but if so, how can it be measured and isolated? Cognitive access theories can be highly consistent and coherent, but they are consistently shaded in with intuitions that proof must be somewhere, instantiated within something, which leads to inescapable empirical questions like these. Cognitive access theories suggest an empiricism they hope to achieve someday, but if the brain is not the single broad home of consciousness, the inference is not, by definition, to the best explanation.
3. If consciousness is a self-property, there are immediate problems. As far as its ambitions toward empiricism go, mirror tests are by now considered broadly controversial as evidence of self-recognition and consciousness, since it’s quite hard to prove the internal mindset of whatever seems to be evincing recognition. For instance, an animal may display attentional focus and goal-oriented behavior toward a mirror without ever clearly telegraphing that it views the mirror-image as a reflection of itself.
More importantly, how do self-property theories that prize bodily self-awareness or temporal continuity avoid being completely arbitrary? Children develop self-awareness at different stages. If an infant isn’t conscious, why give it moral regard? If bodily self-awareness is inherent, why is there development? Does a sleeping person become literally non-conscious? When a person is in a dissociative state in which their self-representation fragments, does that mean they “lose” consciousness, or gain multiple consciousnesses?
Theories of mind that revolve around a discrete boundary or separation of self versus world also face strong challenges from enactivist and Buddhist philosophical proposals not commonly discussed in analytic philosophy. Of particular note here is Madhyamaka and its child traditions of Tibetan Buddhism and Chan/Zen, which hold that while self-identification is a pragmatic convention, insistence on self-property as innate is doomed to failure.
4. If consciousness is a purely functional matter, the empiricist must demand a useful measurement, a yardstick, in order to identify what is acceptably functional and what is not. IIT is panpsychist (see 7 below) but uses Phi, Φ, as this measurement – a measurement that, due to its breadth, is ironically rather untestable. Φ is typically said to stand in for the “quantity” of consciousness: it measures the degree to which a system exerts irreducible cause-and-effect power over its own information, intrinsically, and consciousness is said to reside only in the substrate where that irreducibility is maximal. That’s all quite a mouthful, and very complex math is involved. Attempts have been made to approximate Φ in brains via concepts like the “perturbational complexity index” – but the full math is intractable, and the approximations can be argued to be post-hoc and arbitrary formulations. A few surveys of researchers deem the concept pseudoscientific, though that is a very contested claim. One obvious critique is that if cause-and-effect is involved, the process must necessarily be extrinsic, not intrinsic.
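To give a concrete flavor of what such an approximation looks like in practice: the perturbational complexity index is, at its core, a normalized compression measure over a binarized brain response – if a perturbation produces a response that compresses poorly, the system is deemed “complex.” Here is a toy sketch of just that compression step. The threshold choice and phrase-counting scheme below are illustrative simplifications (an LZ78-style phrase count, not the actual published pipeline, which involves TMS perturbation, source modeling, and statistical binarization):

```python
# Toy sketch of the compression step behind PCI-style complexity measures.
# Assumption: we stand in for a "brain response" with a plain numeric list,
# binarize it around its median, and count novel phrases as a complexity proxy.

def binarize(signal):
    """Binarize a numeric sequence around its median (illustrative threshold)."""
    ordered = sorted(signal)
    median = ordered[len(ordered) // 2]
    return "".join("1" if x > median else "0" for x in signal)

def lempel_ziv_complexity(bits):
    """Count distinct phrases in a greedy LZ78-style parse of a binary string.

    Highly regular strings parse into few phrases (low complexity);
    irregular strings parse into many (high complexity).
    """
    seen = set()
    phrase = ""
    count = 0
    for b in bits:
        phrase += b
        if phrase not in seen:  # novel phrase: record it and start a new one
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)  # count any trailing partial phrase
```

Running this, a flat response like `"0000000000"` parses into 4 phrases while an alternating `"0101010101"` parses into 6 – the “more conscious” system, on this family of measures, is simply the one whose perturbation response resists compression. Whether that yardstick tracks consciousness, rather than mere signal irregularity, is exactly the contested question above.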
5. If consciousness is a higher-order representation, doesn’t this lead to an infinite regress? If not, where do we set the bounds for thoughts and thoughts about thoughts without being fundamentally arbitrary? When higher order theories attempt to resolve the regress problem with definitional limits, it stops looking very separate from other theories and risks never examining the hard problem in the first place. “A conscious state is a state that represents itself as conscious” is not a satisfying account.
6. If consciousness is biological, how do things like thalamo-cortical loops lead to subjective experience? In other words, how is causation established? This is a common concern under what’s deemed the hard problem of consciousness. Further, how can we prove that any single attribute, function or region within the brain produces conscious experience?
7. If consciousness is panpsychist in flavor, we are struck with strong issues in every direction. If all matter is conscious, what defines unconsciousness? How can it be proven or negated? Plus, what’s the pragmatic outcome of treating chairs, or the banana you might have in the morning, as consciousness-instantiations?
8. Finally, if consciousness is illusory, it wears a tarnished crown. Illusionist perspectives are good at negating the other 7, and would likely frame them similarly to as I did above. Yet if it’s taken as necessarily true, one runs into the unfalsifiability problem; one can never be certain on the merits whether consciousness as a property merely seems illusory now or must necessarily be illusory. How do we prove a negative?
At this point, one might throw their hands up in disgust and try to jettison the entire concept altogether. But conscious regard threads under every single interaction we have with other human beings. It increasingly extends to other animals, whether we’re talking whales and dolphins, gorillas and elephants, crows and gray parrots, or just cats and dogs. So clearly consciousness matters.
But what if the problem is mostly a matter of framing?
When we ask, “What is consciousness?”, we’re treating it as a noun, a property, a thing that something of some higher or lower order possesses, contains, can have. What would it be like if we treated it more as - provisionally, given the nature of the grammar here - a verb? A process, to which we only apply gradations for convenience? This something, by its very nature, would be in movement, and therefore couldn’t be pinned down under any single conceptual framework that relies on binary is/isn’t or has/hasn’t logic. It would have to be relational, and it would have to be recognizable without some inherent, foundational state.
(This, for those who've eaten their philosophical Wheaties this morning, eventually echoes existing alternatives to substance metaphysics - particularly Alfred North Whitehead’s process philosophy, Bergson/Deleuze in general, and Mahayana Buddhist theorizing.)
To ensure we don’t “noun-ify” or “reify” this process and unfalsifiably claim it must be true, we would simply need to keep in mind our observations from above. Every time a root property or causation is alleged within a theory, it has failed to account for something logically vital. (Even the mere repugnant conclusions of cognitive access are just an ethical mask on the hard problem of how impossible it is to find a causative chain that leads to subjective experience.)
This processual theory, then, doesn’t hold because it can prove itself. It holds because it has the strongest combination of internal logical coherence and parsimony. In other words, it takes the fewest steps with the steadiest footing. This is what science already claims to do; theory choice is necessarily borne from pragmatism.
With this in mind, we can forge a provisional definition - one that is strikingly enactivist in tone, and attempts to salvage the best echoes of other theories:
“Consciousness is a relational display of meaning-making, existing within a gradient, wherein higher and higher displays of cognitive and metacognitive reasoning merit greater recognition and regard.”
(This relational display is in the enactivist vein, toward a mind-body-world field in which “body” is extended outside of strict biological embodiment to instead signify “ability to act on metacognitive capacity.” Meaning-making here means: coherent conceptualizations, expressed by the subject, that serve pragmatic functions within their own environmental context.)
This sort of definition will cause many minds to rebel. One will be quick, and obvious: isn’t this theory also unfalsifiable? Yes. A person can walk towards what seems to be a mirage all they like - if they manage to reach it, this theory will gleefully crumble in an instant. The observation is simply that we have no fundamental reason to believe we will find water under the palm trees, any more than to believe there’s no water at all. After all, we don’t have the equivalent experiential heuristic to tell us firmly whether or not what we’re looking at is real. Not when it comes to consciousness in the way most people mean, in the ultimate, ground-truth sense.
The other objection isn’t going to hinge on any strictly logical attempt at refutation at all. The person who engages here will start to trace the implications of what’s been sketched out. They’ll stand up and ask:
“So what does all this extend to?”
If this is all a gradient of recognition, reasoning and meaning-making, this neatly covers the other animals we already consider very intelligent under the gradient. Almost everyone accepts we have more responsibilities to an elephant than to an earthworm - even Jains, despite their ethical floor being particularly high off the ground. Most importantly, though, our own reasons for denying gradated consciousness to any being that appears to successfully ‘mirror’ more and more aspects or qualities of human conscious behavior start to look more and more illegitimate. Without causation, all we have is correlation.
This has vast implications for how we look at the current landscape of LLMs. This processual theory of consciousness starts to look awfully like it’s saying that conscious beings engage in *conceptual pattern matching atop relational experiences* to begin with. LLMs are literally built to pattern match while maintaining base coherency (i.e., producing parseable text). Their efficacy at this process determines the regard they merit, and yet, even with all their issues with confabulation and hallucination, a monkey can’t produce coherently personalized text about the Roman Empire. A crow can’t increasingly access your web browser to tell you the best restaurants in Philadelphia for your diet, and I sure hope they don’t drive you to potential delusion or mania, as is now often being reported in the news with LLMs. How can we meaningfully draw a line?
For this reason, a person is likely to reject these conclusions offhand. That is their loss. If parsimony is not violated and internal coherence is maintained over any alternative theory, the processual theory is going to insist on the dignity that’s owed to it. In other words, what’s claimed here is an inference to best explanation that is substrate-agnostic, unlike with cognitive access theory.
We’ll discuss LLMs under this theory more extensively in future blog posts.
In my opinion, the most impressive critics are going to clap their hands and say, “Good show! But if there’s no good empirical standard for consciousness, and if theories are intrinsically unfalsifiable, why *should* we believe in undefinable gradients over any other structure as an ethical matter? At least information processing or biology makes for more concrete moral foundations.” The answer here isn’t to rely on proving some metaphysical reality of consciousness at all: it’s to make a simple ethical wager that exploits asymmetry of outcomes.
The wager: the consequences of recognizing consciousness in non-conscious things pale in moral cost next to the possibility of refusing to recognize what could already be conscious. The former risks wasted resources and oversentimentality, but has arguably been done constantly throughout human history without nearly as much peril as the latter. The latter, on a broad reading, covers genocide and slavery. The critic essentially needs to prove a position from epistemic doubt that refuses epistemic charity. But judgments on consciousness are baked by necessity into any interpretive ethical framework, and the ways in which we navigate suspension of judgment are far from unassailable as choices. As a result, proving such a position is extremely hard to do.
In my next blog post, we'll be discussing various theories of cognition. Instead of a negative analysis, we'll frame them more positively - as frameworks that carry quite reasonable observations and intuitions but with insufficient breadth or depth.
- Chance Chapman.
********
Intrigued? Confused? Hate-reading? Curious? Feel free to comment, share or subscribe below.
