The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness
- PMID: 31078047
- DOI: 10.1016/j.concog.2019.04.002
Abstract
How can we explain consciousness? This question has become a vibrant topic of neuroscience research in recent decades. A large body of empirical results has been accumulated, and many theories have been proposed. Certain theories suggest that consciousness should be explained in terms of brain functions, such as accessing information in a global workspace, applying higher-order representations to lower-order ones, or predictive coding. These functions could be realized by a variety of patterns of brain connectivity. Other theories, such as Integrated Information Theory (IIT) and Recurrent Processing Theory (RPT), identify consciousness with causal structure. For example, according to these theories, feedforward systems are never conscious, and feedback systems always are. Here, using theorems from the theory of computation, we show that causal structure theories are either false or outside the realm of science.
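The abstract's central move is that a recurrent (feedback) system can be "unfolded" into a feedforward system with identical input-output behavior over any bounded number of time steps, so theories that tie consciousness to causal structure cannot be distinguished by behavior alone. A minimal numerical sketch of this equivalence, with illustrative network sizes, weights, and function names that are assumptions of this example rather than the authors' construction:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # input-to-hidden weights (illustrative)
R = rng.normal(size=(3, 3))   # recurrent hidden-to-hidden weights

def recurrent_run(xs):
    """Recurrent (feedback) system: the hidden state is fed back each step."""
    h = np.zeros(3)
    for x in xs:
        h = np.tanh(W @ x + R @ h)
    return h

def unfolded_run(xs):
    """Feedforward 'unfolding': one fresh copy of the weights per time step.

    No unit's output ever returns to that same unit; the computation
    graph is acyclic, yet the input-output mapping is unchanged.
    """
    layers = [(W.copy(), R.copy()) for _ in xs]  # distinct layer per step
    h = np.zeros(3)
    for (Wt, Rt), x in zip(layers, xs):
        h = np.tanh(Wt @ x + Rt @ h)  # activity flows strictly forward
    return h

xs = rng.normal(size=(5, 2))  # a 5-step input sequence
assert np.allclose(recurrent_run(xs), unfolded_run(xs))
```

The two systems produce identical outputs for every input sequence of this length, even though one contains feedback loops and the other does not; this is the behavioral equivalence the unfolding argument turns on.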
Keywords: Causal structure; Consciousness; IIT; Neural networks; RPT; Theories.
Copyright © 2019 The Authors. Published by Elsevier Inc. All rights reserved.
Comment in
- A reply to "the unfolding argument": Beyond functionalism/behaviorism and towards a science of causal structure theories of consciousness. Conscious Cogn. 2020 Mar;79:102877. doi: 10.1016/j.concog.2020.102877. Epub 2020 Jan 28. PMID: 32004720.