Falsifying the Integrated Information Theory of Consciousness (Hanson)

Jake R. Hanson, Senior Data Scientist, Astrophysics, 7 December, 2023

Abstract: Integrated Information Theory (IIT) is a prominent theory of consciousness in contemporary neuroscience, based on the premise that feedback, quantified by a mathematical measure called Phi, corresponds to subjective experience. A straightforward application of the mathematical definition of Phi fails to produce a unique solution, due to unresolved degeneracies inherent in the theory; this undermines nearly all published Phi values to date. Furthermore, automata theory shows that in finite-state systems feedback can always be disentangled from a system’s input-output behavior, resulting in Phi=0 for all possible input-output behaviors. This process, known as “unfolding,” can be accomplished without increasing the system’s size, leading to the conclusion that Phi measures something fundamentally disconnected from anything that could ground the theory experimentally. Together, these findings demonstrate that IIT lacks a well-defined mathematical framework and may either be already falsified or inherently unfalsifiable by scientific standards.
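
The unfolding argument lends itself to a toy demonstration. Below is a minimal Python sketch (an illustrative construction, not code from the papers above): a finite-state system whose output depends on fed-back internal state is compared with a purely feed-forward lookup table built from its observed input-output behavior. The two agree on every input sequence, illustrating that input-output behavior alone cannot reveal the presence of feedback. Note that this brute-force table grows with input length, whereas the result discussed above is stronger: the unfolding there does not increase the system’s size.

```python
from itertools import product

# Toy finite-state system WITH feedback: a 1-bit delay line whose output at
# each step is its internal state, i.e. the previous input (initially 0).
def run_feedback(inputs, init_state=0):
    state, outputs = init_state, []
    for x in inputs:
        outputs.append(state)  # output read from the fed-back state
        state = x              # state update: remember the current input
    return outputs

# "Unfolded" counterpart: a feed-forward lookup table from input histories
# to outputs, built once by observing the feedback system. At run time there
# is no recurrent state; each output is read off the input history alone.
def build_unfolded_table(max_len):
    return {seq: run_feedback(seq)
            for n in range(max_len + 1)
            for seq in product([0, 1], repeat=n)}

def run_unfolded(inputs, table):
    return table[tuple(inputs)]

if __name__ == "__main__":
    L = 8
    table = build_unfolded_table(L)
    for n in range(L + 1):
        for seq in product([0, 1], repeat=n):
            assert run_feedback(seq) == run_unfolded(seq, table)
    print(f"Feedback and unfolded systems agree on all {2**(L+1)-1} inputs")
```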

Jake Hanson is a Senior Data Scientist at a financial tech company in Salt Lake City, Utah. His doctoral research in astrophysics at Arizona State University focused on the origin of life via the relationship between information processing and fundamental physics. There, he demonstrated multiple foundational issues with IIT, ranging from poorly defined mathematics to problems with experimental falsifiability and the pseudoscientific handling of core ideas.

Hanson, J.R., & Walker, S.I. (2019). Integrated information theory and isomorphic feed-forward philosophical zombies. Entropy, 21(11), 1073.

Hanson, J.R., & Walker, S.I. (2021). Formalizing falsification for theories of consciousness across computational hierarchies. Neuroscience of Consciousness, 2021(2), niab014.

Hanson, J.R. (2021). Falsification of the Integrated Information Theory of Consciousness. Doctoral dissertation, Arizona State University.

Hanson, J.R., & Walker, S.I. (2023). On the non-uniqueness problem in Integrated Information Theory. Neuroscience of Consciousness, 2023(1), niad014.

Cognitive architectures and their applications (Lebière)

Christian Lebière, Carnegie Mellon University, 20 October, 2022

Abstract: Cognitive architectures are computational implementations of unified theories of cognition. Being able to represent human cognition in computational form enables a wide range of applications when humans and machines interact. Using cognitive models to represent common ground between deep learners and human users enables adaptive explanations. Cognitive models representing the behavior of cyber attackers can be used to optimize cyber defenses including techniques such as deceptive signaling. Cognitive models of human-automation interaction can improve robustness of human-machine teams by predicting disruptions to measures of trust under various adversarial situations. Finally, the consensus of 50 years of research in cognitive architectures can be captured in the form of a Common Model of Cognition that can provide a guide for neuroscience, artificial intelligence and robotics. 
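
A core component in several of the models cited below is ACT-R’s instance-based learning (IBL) mechanism, in which decisions are made by blending the outcomes of past instances, weighted by their memory activation. The following is a minimal sketch under illustrative assumptions: the decay and noise values are commonly used ACT-R settings, but the attack/withdraw scenario and payoff values are invented for the example and are not taken from the cited papers.

```python
import math
import random

DECAY = 0.5                  # activation decay d (common ACT-R value)
NOISE = 0.25                 # activation noise s
TAU = NOISE * math.sqrt(2)   # blending temperature

class IBLAgent:
    """Chooses among options by blending outcomes of past instances."""

    def __init__(self, options, default_utility=10.0):
        # instances: option -> list of (timestamp, observed outcome);
        # one optimistic default instance encourages early exploration
        self.instances = {o: [(0, default_utility)] for o in options}
        self.t = 0

    def _activation(self, timestamps):
        # base-level activation: ln of summed, decayed trace strengths,
        # plus logistic activation noise
        base = math.log(sum((self.t - ts) ** -DECAY for ts in timestamps))
        u = random.uniform(0.0001, 0.9999)
        return base + NOISE * math.log((1 - u) / u)

    def choose(self):
        self.t += 1
        blended = {}
        for opt, insts in self.instances.items():
            by_outcome = {}                      # outcome -> its timestamps
            for ts, x in insts:
                by_outcome.setdefault(x, []).append(ts)
            acts = {x: self._activation(ts) for x, ts in by_outcome.items()}
            weights = {x: math.exp(a / TAU) for x, a in acts.items()}
            z = sum(weights.values())
            blended[opt] = sum(x * w / z for x, w in weights.items())
        return max(blended, key=blended.get)

    def observe(self, option, outcome):
        self.instances[option].append((self.t, outcome))

# Toy use: an attacker model repeatedly decides whether to attack a node
# that is defended half the time. (The cited work additionally conditions
# choices on the defender's possibly deceptive signals; omitted here.)
attacker = IBLAgent(["attack", "withdraw"])
for trial in range(100):
    defended = random.random() < 0.5
    action = attacker.choose()
    payoff = (-5.0 if defended else 5.0) if action == "attack" else 0.0
    attacker.observe(action, payoff)
print("final instance counts:",
      {o: len(i) for o, i in attacker.instances.items()})
```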

Christian Lebière is a Research Faculty member in the Psychology Department at Carnegie Mellon University. His main research interests are cognitive architectures and their applications to psychology, artificial intelligence, human-computer interaction, decision-making, intelligent agents, network science, cognitive robotics and neuromorphic engineering. 

Cranford, E. A., Gonzalez, C., Aggarwal, P., Tambe, M., Cooney, S., & Lebiere, C. (2021). Towards a cognitive theory of cyber deception. Cognitive Science, 45(7), e13013.

Cranford, E., Gonzalez, C., Aggarwal, P., Cooney, S., Tambe, M., & Lebiere, C. (2020). Adaptive cyber deception: Cognitively informed signaling for cyber defense.

Lebiere, C., Blaha, L. M., Fallon, C. K., & Jefferson, B. (2021). Adaptive cognitive mechanisms to maintain calibrated trust and reliance in automation. Frontiers in Robotics and AI, 8, 652776.

Laird, J. E., Lebiere, C., & Rosenbloom, P. S. (2017). A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics. AI Magazine, 38(4), 13-26.

Lebiere, C., Pirolli, P., Thomson, R., Paik, J., Rutledge-Taylor, M., Staszewski, J., & Anderson, J. R. (2013). A functional model of sensemaking in a neurocognitive architecture. Computational Intelligence and Neuroscience, 2013.

Constraining networks biologically to explain grounding (Pulvermüller)

Friedemann Pulvermüller, FU Berlin, 3 December, 2020

Abstract: Meaningful use of symbols requires grounding in action and perception through learning. The mechanisms of this sensorimotor grounding, however, are rarely specified in mechanistic terms, and mathematically precise formal models of the relevant learning processes are scarce. As the brain is the device that critically supports, and indeed implements, grounding, modelling needs to take into account realistic neuronal processes in the human brain. This makes it desirable to use not just ‘neural’ networks that are vaguely similar to some aspects of real networks of neurons, but models implementing the constraints imposed by neuronal structure and function: biologically realistic learning and brain structure, along with local and global structural connectivity and functional interaction. After discussing brain constraints for cognitive modelling, the talk will focus on the biological implementation of grounding, in order to address the following questions: Why do the brains of humans, but not those of their closest relatives, allow for verbal working memory and the learning of huge vocabularies of symbols? Why do different word and concept types seem to depend on different parts of the brain (‘category-specific’ semantic mechanisms)? Why are there ‘semantic and conceptual hubs’ in the brain where general semantic knowledge is stored, and why would these brain areas be different from the areas where grounding information is present (i.e., the sensory and motor cortices)? And why should sensory deprivation shift language and conceptual processing toward ‘grounding areas’, for example toward the visual cortex in the blind? I will argue that brain-constrained modelling is necessary to answer (some of) these questions and, more generally, to explain the mechanisms of grounding.
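
The kind of learning mechanism at issue, correlation (‘Hebbian’) learning that binds a word form to the sensorimotor pattern it co-occurs with, can be illustrated with a deliberately minimal sketch. The network below is far simpler than the brain-constrained models discussed in the talk, and all sizes, rates, and thresholds are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40        # units per simulated area
ETA = 0.1     # Hebbian learning rate
K = 8         # active units per pattern

def random_pattern(n, k):
    """Sparse binary activity pattern with k active units."""
    p = np.zeros(n)
    p[rng.choice(n, size=k, replace=False)] = 1.0
    return p

word = random_pattern(N, K)      # activity in a 'language' area
ground = random_pattern(N, K)    # activity in a sensory/motor area
W = np.zeros((N, N))             # connections: word area -> grounding area

# Correlation learning during repeated co-activation of the two patterns:
# each synapse grows in proportion to pre- and postsynaptic activity
for _ in range(20):
    W += ETA * np.outer(ground, word)   # dW[i, j] grows with post_i * pre_j

# Test grounding: present the word alone; thresholding the resulting input
# to the grounding area completes the associated sensorimotor pattern
drive = W @ word
retrieved = (drive > 0.5 * drive.max()).astype(float)
print("grounded pattern recovered:", bool(np.array_equal(retrieved, ground)))
```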

Friedemann Pulvermüller is professor in the neuroscience of language and pragmatics at the Freie Universität Berlin, where he also directs the ‘Brain Language Laboratory’. 

Carota, F., Nili, H., Kriegeskorte, N., & Pulvermüller, F. (2023). Experientially-grounded and distributional semantic vectors uncover dissociable representations of conceptual categories. Language, Cognition and Neuroscience, 1-25.

Pulvermüller, F., Garagnani, M., & Wennekers, T. (2014). Thinking in circuits: Towards neurobiological explanation in cognitive neuroscience. Biological Cybernetics, 108(5), 573-593. doi: 10.1007/s00422-014-0603-9