Few, if any, problems in philosophy seem as mystifying as the ‘hard’ problem of consciousness. How is it that certain entities in the universe have this magical property of being able to perceive, think and experience? And where, in this picture, do we fit what experience is ‘like’: how the taste of apple juice differs from dread of the unknown? The problem is not helped by the tendency to start introspectively. If we forget about the strangeness of our own experience for a moment, and examine instead how consciousness functions in the world, we might start to get a clearer picture. In particular, I think three points stand out. The first is that consciousness is real, and it interacts with the world causally. The second is that our knowledge of consciousness is not drawn simply from introspection: we see evidence for consciousness in others in our everyday lives, all the time, and it plays a crucial role in our lives, both in explaining phenomena and in setting accurate expectations. The third is that the causal function of consciousness is not independent of the qualitative properties of experience: what it’s like is intimately connected with what it does.
The first point I want to make is a purely conceptual one. Perception presupposes that consciousness is causally embedded in the world. We can’t say we have seen something without believing that the thing we have seen has caused the perception of sight (in this instance, by reflecting light onto the retina). A similar sensation that is not brought about causally in this way we would call illusory (a mirage, a hallucination, a dream etc). For perception to be useful, this process itself has to be causal (it is conceptually possible that perception is epiphenomenal, i.e. not useful, but this seems improbable: more on this later). It has to have some impact on future actions or dispositions, if nothing else on our memory. Everyday experience seems to suggest this is the case, or at least it is very much part of our everyday descriptions of perception that it is (“I remember seeing the sunrise”, “I disagree with what you just said” etc). At the very least, it seems an inescapable part of life to live as if certain mental events, perceptual or otherwise, are causally significant. I cannot decide which train to take to work if I think that my thought process has no impact on events in the world. In the particular case of perception, what counts as perceiving something seems to be about the appropriateness of mental events to antecedent conditions, given the causal significance of those mental events. I perceive something if a particular mental experience associated with x occurs when x is the case, and that mental experience causes me to have the true belief that x, or to respond in some useful way to x.
We see evidence of the consciousness of others all the time. Our expectations and descriptions of their behaviour would make no sense if this were not the case. If you arrange to meet with a friend, they arrive at a particular place and time because they have decided to do so. Perhaps they are late because their clock displayed the wrong time, and they falsely believed it was earlier than it was, or they misjudged how long it would take to walk from A to B. Examples like this are endless, and they are not easily replaced by accounts which do not allow for conscious agents. Imagine trying to predict the position of this friend purely by thinking about velocity, acceleration and displacement: even when such a description is possible (as it sometimes is), it could never hope to account for the fact that they have arrived at 11:04 near the bus stop, because 11:04 is ‘roughly 11:00’, and here is near enough to the bus stop for you to find them.
As it happens, we actually have remarkably rich and accurate expectations of the behaviours of others as conscious agents. We can expect that people may be our friends or partners, might be hostile to us, might be our boss at work, might give us parking fines if we park in the wrong place etc etc. All of these expectations seem to presuppose consciousness, in that they expect people to respond in a way that is situationally appropriate and specific, in a way that often requires conscious perception. Think of being arrested because a police officer believes you are behaving in a drunk and disorderly fashion. Or a shop that is particularly busy because there is a New Year’s sale. Or even taking a contrarian philosophical position and expecting angry disagreement. A huge part of the way we make sense of the world around us involves thinking in terms of the expected behaviours of conscious agents.
This is even clearer when we think about emergent social phenomena, like money, schools and political institutions. A piece of paper with a picture of the Queen on it is money because people think it is. It has value because people take it to have value. Trump is the President of the United States because he is held to be, and millions of people putting crosses on pieces of paper (or pulling levers on machines) is part of a process called ‘voting in an election’. Between 1939 and 1945, the behaviour of tens of millions was part of some large social process called ‘being at war’. These phenomena are all quite comprehensible if we see them as emergent behaviours of large groups of conscious agents.
Could these behaviours be equally possible for non-conscious agents? Perhaps, but for the time being, we could make no sense of them and would have no idea what to expect to happen in the world unless we were to say that things occurred ‘as if the Earth were inhabited by 7 billion conscious human beings’. And to think that the world might be so constituted that by extraordinary coincidence people behave in exactly the way they would if they were conscious, but in fact are not, through mechanisms that make no sense to us, while not logically impossible, would be a very strange belief to entertain indeed.
Even if we accepted that consciousness was a real phenomenon, might we at some point in the future develop explanatory frameworks for human behaviour that make it redundant? Perhaps when it comes to certain behaviours (some things we think of as conscious decisions might turn out not to be, we are often deluded about how much of a situation we are really perceiving etc). But the difficulty remains that our behaviour is often circumstantially specific, where what counts as a given set of circumstances is itself reliant on acts of interpretation and on human volitions and expectations (I am annoyed because someone is ‘late’). As a practical matter, we could certainly never have sufficient data to predict behaviour in everyday life. And then we must take into account the fact that conscious perception can be novel. Consciousness allows us to respond to an indefinitely large number of unique, never-before-described circumstances (and to describe them!), and for that experience to explain our behaviour causally. We can be particularly moved by the sight of a star, and in doing so have our behaviour determined by a system thousands of light years away. We can think differently of a film because we missed the first scene. To do away with consciousness as explanatory would require a set of recursive rules which could account for a similar array of responses, all happening to coincide with conscious experience.
And why would we have to do so anyway? The key motivation, I think, is the assumption that if consciousness is part of causal chains, there should be some kind of intuitively satisfying explanation of how something immaterial like perception could arise and interact causally with the ‘material’ world. This is indeed an alluring assumption, and it is difficult not to see consciousness as somehow magical and mysterious. But this might be a misconception of what explanations should seek to do. There is no reason why the universe should be so constituted as to be describable in a way that allows for ‘intuitive’ mechanisms that follow from how we visualise a problem. And even if it were, why would this make such an account more plausible? Our ability to derive satisfying metaphors that reduce causation to dominoes falling, or to things pushing and bumping into one another, does not make our descriptions more or less true. A complete causal explanation does not have to allow us to picture the process; being able to do so may be satisfying, and may sometimes help us apply a concept, but it is not logically necessary. A more realistic ambition might simply be a description of consciousness that is theoretically comprehensible, i.e. one whose predictions we can know (in this case, the conditions under which consciousness occurs and how it behaves) and that does not violate other principles we know to be true of the natural world.
On this last point, we might think that consciousness seems to violate other natural laws, as it seems to require spontaneous behaviour (i.e. behaviour which is not determined by the laws of physics). But I think this is a mistake. The belief that consciousness is causally significant does not require consciousness to be random or spontaneous. It can be fully determined by antecedent conditions yet still causally necessary for subsequent events. We might say: a system would behave this way under these conditions, given that it is conscious; it would be conscious, given what we know about the system; but it would not behave this way if it were not conscious. I think part of the problem comes from a fallacy surrounding the transitivity of causation. Typically, if A caused B and B caused C, we can say (more or less usefully) that A caused C. But this does not imply that B did not cause C, that B’s causing of C was independent of A, or that A’s causing of C was independent of B. If it rains on a cold night, the water freezes and the next day I slip on the ice: I may not have slipped were it not for the rain, but the rain only made me slip because it turned to ice. And the slipperiness of the ice is not a property which exists independently of the arrangement of water molecules in a crystalline structure at a certain temperature, but nor does a description of the latter get rid of the former.
But how does consciousness ‘do work causally’? Or, put differently, how can the consciousness of an organism explain its behaviours? Certainly there is a lot we don’t know yet. One starting point might be that the phenomenal content of experience (what it is ‘like’) does not seem to be as separate from what an experience ‘does’ as some might suggest. It is not the case that what it is ‘like’ to experience pain is unrelated to how we behave when we are in pain. Pain, after all, is (generally!) something unpleasant we try to avoid. Perhaps we cannot describe the difference between the experience of seeing something blue and the experience of seeing something red, but we can say that these are different experiences. That is, after all, what allows us to differentiate between the colours of different things (it can, correspondingly, be determined observationally that someone is colour blind, since they are unable to differentiate between certain colours). If, as earlier suggested, what makes something count as perception is the functional significance of the mental experience, i.e. what it causes us to do, believe, remember etc, and whether this is appropriate to the antecedent conditions which brought the experience about, the question then seems to be: how does the phenomenal content of experience (what it is like) relate to its function (what it does)? Might it be that certain types of experience, when brought about in the right conditions, are capable of generating useful responses, while others are not? And how does the function of a conscious experience affect what we might look for in the system from which it arises? If, for example, some consciousness is computational, we might look for a physical system capable of carrying out computations, or of giving rise to a phenomenon capable of doing so.
Is consciousness identical to neurological states, a property of a physiological system, or something separate which is ‘brought about’ by physiological systems? I’m not sure there is as much difference between these descriptions as is sometimes made out. Consciousness isn’t a separate entity (as John Searle once remarked, “it’s not a juice squirted out by the brain”), and it’s not clear what it would mean to say it is spatially located, other than that its causal interactions are spatially located. We can call it a property of a system if we like, so long as we remember that this property is taken to be something which explains the behaviour of the system (in the same way that liquid water behaves a certain way because it is fluid, ice because it is solid etc). We can call it something ‘brought about by a system’, so long as we remember that this does not imply a separately existing spatial entity.
Hopefully these few paragraphs give some clarity to how we might think about this problem. At the very least, I hope they are not entirely trivial. I don’t wish to assert that these tentative answers are absolutely necessary. While perception may require consciousness to be causally embedded, how do we know we are perceiving at all? And while reference to conscious agents might be important for making sense of the world, how do we know the world is structured in a predictable way at all? These objections are of course all possible, but only in the way it is possible that the world is fundamentally incomprehensible, that the sun might not rise tomorrow, that we might all be brains in vats etc etc. We can still entertain doubt about the existence and nature of consciousness in the same way we can entertain radical scepticism about existence, in so far as we do not find answers to that scepticism satisfying. But we must not confuse this kind of radical scepticism with an observation specific to consciousness or the mind-body problem. And while the account outlined here does not allow us intuitively to ‘get’ how this strange property could be true of physical systems, it does, at least, mean we do not have to doubt the everyday phenomena that life does not allow us seriously to call into question. It is, perhaps, an attempt at answering the following question: if, as may or may not be the case, our concepts of knowledge, perception and belief do roughly work, and if these concepts are at work in accounting for behaviour, how, in the abstract, do these experiences relate to the things they are supposed to refer to?