Beyond Rethinking Consciousness
We have examined reconceptualizations of consciousness. Block asks us to think of consciousness as a label for a family of concepts—in his terminology, a mongrel concept—and to divide the problem of consciousness into those problems that are solvable by traditional information-processing, functionalist approaches, and those problems of phenomenal consciousness that are not. Baars adopts an information-processing, functionalist model as well but is more optimistic and attempts to go further with contrastive analysis. He outlines a total cognitive system to be mapped to functional neuroanatomy. We have a long way to go before we have completed this mapping, but tools such as brain scanners and neural cell recording help us to pinpoint structures that not only correlate with—but also are essential to—consciousness. If we can do this, there is nothing left to explain, in Baars's view. But is his blackboard model plausible? What evidence do we have that the information-processing model of the mind is correct? If the brain does not process information according to functionalist cognitive science—if it goes beyond such processing, or if its causal powers are important for understanding consciousness—then what are we to make of Baars's marriage between functionalism and contrastive analysis? As we have seen, Searle maintains that we should understand consciousness as an ordinary causal phenomenon, and it seems clear that brains cause consciousness. But we have also seen that some mysterians are skeptical not only about functionalist accounts of consciousness but also about causal accounts.
Let us remind ourselves briefly of the general problem with causal accounts. Suppose we find that consciousness involves, at most, causal structures X, Y, and Z. We are confident that these structures are sufficient for conscious experience, but we are not confident that all of them are necessary. We then discover that X is not necessary for consciousness. Now we think that Y and Z are essential for conscious experience. Further analysis also shows that Z is necessary but not sufficient, and that the same can be said about Y.
We now think we have found the essential causal structure of consciousness. In the diagram, this structure is named B. But how do we know that B is the minimal causal structure of consciousness? How do we know, for example, that Y and Z cannot be decomposed into further elements, only some of which are necessary for consciousness? We could start over again, break up Y and Z into their parts, and see if we can find a simpler structure that is sufficient for conscious experience—but is the problem of consciousness solvable in this way? How do we know when to stop our investigations and declare victory? We can think of this as the stopping problem in our investigations of finer and finer correlations between consciousness and neural structures. A mysterian response could be that we don’t know when to stop and perhaps never will, because we have no inkling of how brain mechanisms could explain conscious experience.
Suppose we are visited by aliens who have proved that neural structure B is the cause of human conscious experience. Could they explain why B must be the cause of consciousness? One mysterian response would be no. Why? Because even though the aliens have solved the problem of consciousness, we might not be smart enough to grasp the explanation. How could we know that we would understand an explanation of human consciousness even if it were served to us on a silver platter?