Learning Categories by Creating New Descriptions (Goldstone)

Robert Goldstone, Indiana University, February 1, 2024

ABSTRACT: In Bongard problems, problem-solvers must come up with a rule for distinguishing visual scenes that fall into two categories. Only a handful of examples of each category are presented, so solving these problems requires the open-ended creation of new descriptions. Physical Bongard Problems (PBPs) require perceiving and predicting the spatial dynamics of the scenes. We compare the performance of a new computational model (PATHS) to human performance. As new scene descriptions are continually perceived over the course of category learning, hypotheses are constructed by combining descriptions into rules for distinguishing the categories. Spatially or temporally juxtaposing similar scenes promotes category learning when the scenes belong to different categories but hinders learning when they belong to the same category.

Robert Goldstone is a Distinguished Professor in the Department of Psychological and Brain Sciences and Program in Cognitive Science at Indiana University. His research interests include concept learning and representation, perceptual learning, educational applications of cognitive science, and collective behavior. 

Goldstone, R. L., Dubova, M., Aiyappa, R., & Edinger, A. (2023). The spread of beliefs in partially modularized communities. Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916231198238

Goldstone, R. L., Andrade-Lotero, E., Hawkins, R. D., & Roberts, M. E. (2023). The emergence of specialized roles within groups. Topics in Cognitive Science. https://doi.org/10.1111/tops.12644

Weitnauer, E., Goldstone, R. L., & Ritter, H. (2023). Perception and simulation during concept learning. Psychological Review. https://doi.org/10.1037/rev0000433

LLMs are impressive but we still need grounding to explain human cognition (Bergen)

Benjamin Bergen, Cognitive Science, UCSD, September 14, 2023


ABSTRACT: Human cognitive capacities are often explained as resulting from grounded, embodied, or situated learning. But Large Language Models, which learn only on the basis of word co-occurrence statistics, now rival human performance in a variety of tasks that would seem to require these very capacities. This raises the question: is grounding still necessary to explain human cognition? I report on studies addressing three aspects of human cognition: Theory of Mind, Affordances, and Situation Models. In each case, we run both human and LLM participants on the same task and ask how much of the variance in human behavior is explained by the LLMs. As it turns out, in all cases, human behavior is not fully explained by the LLMs. This entails that, at least for now, we need grounding (or, more accurately, something that goes beyond statistical language learning) to explain these aspects of human cognition. I’ll conclude by raising, but not answering, a number of questions: How long will this remain the case? What are the right criteria for an LLM that serves as a proxy for human statistical language learning? And how could one tell conclusively whether LLMs have human-like intelligence?

Ben Bergen is Professor of Cognitive Science at UC San Diego, where he directs the Language and Cognition Lab. His research focuses on language processing and production, with a special interest in meaning. He’s also the author of ‘Louder than Words: The New Science of How the Mind Makes Meaning’ and ‘What the F: What Swearing Reveals about Our Language, Our Brains, and Ourselves.’

Trott, S., Jones, C., Chang, T., Michaelov, J., & Bergen, B. (2023). Do Large Language Models know what humans know? Cognitive Science 47(7): e13309.

Chang, T., & Bergen, B. (2023). Language Model Behavior: A Comprehensive Survey. Computational Linguistics.

Michaelov, J., Coulson, S., & Bergen, B. (2023). Can Peanuts Fall in Love with Distributional Semantics? Proceedings of the 45th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society.

Jones, C., Chang, T., Coulson, S., Michaelov, J., Trott, S., & Bergen, B. (2022). Distributional Semantics Still Can’t Account for Affordances. Proceedings of the 44th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.