Falsifying the Integrated Information Theory of Consciousness (Hanson)

Jake R. Hanson, Sr. Data Scientist, Astrophysics, 07-Dec 2023


Abstract: Integrated Information Theory is a prominent theory of consciousness in contemporary neuroscience, based on the premise that feedback, quantified by a mathematical measure called Phi, corresponds to subjective experience. A straightforward application of the mathematical definition of Phi fails to produce a unique solution due to unresolved degeneracies inherent in the theory. This undermines nearly all published Phi values to date. Turning to the mathematical relationship between feedback and input-output behavior in finite-state systems, automata theory shows that feedback can always be disentangled from a system's input-output behavior, resulting in Phi=0 for all possible input-output behaviors. This process, known as "unfolding," can be accomplished without increasing the system's size, leading to the conclusion that Phi measures something fundamentally disconnected from anything that could ground the theory experimentally. These findings demonstrate that IIT lacks a well-defined mathematical framework and may either be already falsified or inherently unfalsifiable according to scientific standards.
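The claim about "unfolding" can be illustrated with a toy example: in a deterministic finite-state system, the output at each step is a function of the initial state and the input history alone, so the same input-output behavior can be reproduced without any internal feedback. The sketch below is a hypothetical illustration in Python, not the size-preserving construction developed in the papers listed below; the function names recurrent_parity and feedforward_parity are ours, and the feed-forward version is assumed to have access to the full input prefix.

```python
# Toy illustration: a recurrent parity machine vs. a feed-forward computation
# that reproduces the same input-output behavior from the input history alone.
from itertools import product
from functools import reduce
from operator import xor

def recurrent_parity(inputs):
    """Finite-state machine with feedback: the state loops back at each step."""
    state = 0
    outputs = []
    for x in inputs:
        state = state ^ x          # feedback: next state depends on current state
        outputs.append(state)
    return outputs

def feedforward_parity(inputs):
    """No internal feedback: each output is computed directly from the input prefix."""
    return [reduce(xor, inputs[:t + 1], 0) for t in range(len(inputs))]

# Exhaustively check input-output equivalence for all binary sequences up to length 6.
assert all(
    recurrent_parity(list(seq)) == feedforward_parity(list(seq))
    for n in range(1, 7)
    for seq in product([0, 1], repeat=n)
)
print("Recurrent and feed-forward implementations are input-output equivalent.")
```

The point of the check is only that identical input-output behavior does not require a recurrent (feedback) implementation; whether the equivalent system can always be built without growing in size is the stronger result argued in the talk.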

Jake Hanson is a Senior Data Scientist at a financial tech company in Salt Lake City, Utah. His doctoral research in Astrophysics at Arizona State University focused on the origin of life via the relationship between information processing and fundamental physics. He demonstrated that there are multiple foundational issues with IIT, ranging from poorly defined mathematics to problems with experimental falsifiability and pseudoscientific handling of core ideas.

Hanson, J.R., & Walker, S.I. (2019). Integrated information theory and isomorphic feed-forward philosophical zombies. Entropy, 21(11), 1073.

Hanson, J.R., & Walker, S.I. (2021). Formalizing falsification for theories of consciousness across computational hierarchies. Neuroscience of Consciousness, 2021(2), niab014.

Hanson, J.R. (2021). Falsification of the Integrated Information Theory of Consciousness. Doctoral dissertation, Arizona State University.

Hanson, J.R., & Walker, S.I. (2023). On the non-uniqueness problem in Integrated Information Theory. Neuroscience of Consciousness, 2023(1), niad014.

From the History of Philosophy to AI: Does Thinking Require Sensing? (Chalmers)

David Chalmers, Center for Mind, Brain & Consciousness, NYU, 28-Sep 2023


Abstract: There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will discuss the underlying issue and break down the strongest reasons for and against. I suggest that, given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that extensions and successors to large language models may be conscious in the not-too-distant future.

David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996), Constructing the World (2012), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He is known for formulating the "hard problem" of consciousness, and (with Andy Clark) for the idea of the "extended mind," according to which the tools we use can become parts of our minds.

Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint arXiv:2303.07103.

Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. Penguin.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.