Karl Friston, UCL, 8 December 2022
Abstract: This talk offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight. I will use simulations of abstract rule-learning and approximate Bayesian inference to show that minimising (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes 'explanatory gaps' in knowledge about the causal structure of the world, thereby reducing ignorance, in addition to resolving uncertainty about states of the known world. We then move from inference to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of 'aha' moments.
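As a concrete illustration (not material from the talk itself), Bayesian model reduction has a closed form for Dirichlet-distributed parameters of the sort used in discrete-state generative models: the change in log evidence for a reduced prior can be scored directly from the full prior and posterior concentration parameters, without refitting the model. A minimal sketch, with illustrative numbers, in which a 'hypothesis' that prunes unused outcomes is tested post hoc:

```python
import math

def log_beta(alpha):
    """Log of the multivariate beta function for Dirichlet concentrations."""
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def bmr_dirichlet(prior, post, reduced_prior):
    """Log-evidence difference, ln p(y|reduced) - ln p(y|full), under
    Bayesian model reduction for Dirichlet parameters.
    A positive value means the reduced (simpler) model is favoured."""
    reduced_post = [b + r - a for a, b, r in zip(prior, post, reduced_prior)]
    return (log_beta(prior) + log_beta(reduced_post)
            - log_beta(reduced_prior) - log_beta(post))

# Full model: flat prior over four possible outcomes (illustrative numbers).
prior = [1.0, 1.0, 1.0, 1.0]
# Posterior after observing counts [8, 0, 7, 0]: two outcomes never occur.
post = [9.0, 1.0, 8.0, 1.0]
# Reduced model ('aha' hypothesis): prune the two unused outcomes.
reduced_prior = [1.0, 1e-3, 1.0, 1e-3]

delta_f = bmr_dirichlet(prior, post, reduced_prior)
print(delta_f)  # positive: evidence favours the pruned model
```

Because the comparison needs only the concentration parameters already in hand, many candidate reductions can be evaluated offline, which is what connects this step to consolidation during sleep in the account above.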
Karl Friston is Professor at the Institute of Neurology, UCL, where he models functional integration in the human brain and the principles that underlie neuronal interactions. His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference).
Medrano, J., Friston, K., & Zeidman, P. (2024). Linking fast and slow: the case for generative models. Network Neuroscience, 8(1), 24-43.
Pezzulo, G., Parr, T., & Friston, K. (2024). Active inference as a theory of sentient behavior. Biological Psychology, 108741.