Gesture Semantics: Deictic Reference, Deferred Reference and Iconic Co-Speech Gestures (Lücking)

Andy Lücking, Goethe University Frankfurt, March 14, 2024

ABSTRACT:  Language use is situated in manifold ways, including the exploitation of the visual context and the use of manual gestures (multimodal communication). I will survey recent theoretical advances concerning the semantics and semantic contribution of co-verbal deictic and iconic gestures. Multimodal communication challenges traditional notions of reference and meaning developed in formal semantics. Computationally tractable models of deictic and deferred reference and of iconic gestures are proposed instead. These models specify language/perception interfaces for two concrete phenomena that are central to situated language. Inasmuch as LLMs lack perception and embodiment, these phenomena are currently, but not in principle, out of reach. I will conclude by pointing out *what* is needed for an LLM to be capable of deferred reference and iconic gestures.

Andy Lücking is Privatdozent at Goethe University Frankfurt. His work contributes to theoretical linguistics and computational semantics, in particular to a linguistic theory of human communication, that is, face-to-face interaction within and beyond single sentences. Besides publishing on deixis and iconicity in manual gesture, Andy is the main author of Referential Transparency Theory, the current semantic theory of plurality and quantification. His work on the perception of iconic gestures received an IEEE best paper award.

Andy Lücking, Alexander Henlein, and Alexander Mehler (2024). Iconic Gesture Semantics. In review. Manuscript available on request.

Andy Lücking and Jonathan Ginzburg (2023). Leading voices: Dialogue semantics, cognitive science, and the polyphonic structure of multimodal interaction. Language and Cognition, 15(1), 148–172.

Andy Lücking, Thies Pfeiffer, and Hannes Rieser (2015). Pointing and Reference Reconsidered. Journal of Pragmatics, 77, 56–79. DOI: 10.1016/j.pragma.2014.12.013.

The Grounding Problem in Language Models is not only about Grounding (Lenci)

Alessandro Lenci, Linguistics, U. Pisa, February 29, 2024

ABSTRACT:  The Grounding Problem is typically assumed to concern the lack of referential competence of AI models. Language Models (LMs) that are trained only on texts, without direct access to the external world, are indeed rightly regarded as affected by this limit: they are ungrounded. On the other hand, multimodal LMs do have extralinguistic training data and show important abilities to link language with the visual world. In my talk, I will argue that incorporating multimodal data is a necessary but not sufficient condition to properly address the Grounding Problem. When applied to statistical models based on distributional co-occurrences, such as LMs, the Grounding Problem should be reformulated in a more extensive way, which sets an even higher challenge for current data-driven AI models.

Alessandro Lenci is Professor of linguistics and director of the Computational Linguistics Laboratory (CoLing Lab), University of Pisa. His main research interests are computational linguistics, natural language processing, semantics and cognitive science.

References

Lenci, A., & Sahlgren, M. (2023). Distributional Semantics. Cambridge: Cambridge University Press.

Lenci, A. (2018). Distributional models of word meaning. Annual Review of Linguistics, 4, 151–171.

Lenci, A. (2023). Understanding Natural Language Understanding Systems: A Critical Analysis. Sistemi Intelligenti. arXiv preprint arXiv:2303.04229.

Lenci, A., & Padó, S. (2022). Perspectives for natural language processing between AI, linguistics and cognitive science. Frontiers in Artificial Intelligence, 5, 1059998.

What counts as understanding? (Lupyan)

Gary Lupyan, University of Wisconsin-Madison, February 22, 2024

ABSTRACT:  The question of what it means to understand has taken on added urgency with the recent leaps in capabilities of generative AI such as large language models (LLMs). Can we really tell from observing the behavior of LLMs whether some notion of understanding underlies that behavior? What kinds of successes are most indicative of understanding, and what kinds of failures are most indicative of a failure to understand? If we applied the same standards to our own behavior, what might we conclude about the relationship between understanding, knowing, and doing?

Gary Lupyan is Professor of Psychology at the University of Wisconsin-Madison. His work has focused on how natural language scaffolds and augments human cognition, and attempts to answer the question of what the human mind would be like without language. He also studies the evolution of language, and the ways that language adapts to the needs of its learners and users.

References

Liu, E., & Lupyan, G. (2023). Cross-domain semantic alignment: Concrete concepts are more abstract than you think. Philosophical Transactions of the Royal Society B. DOI: 10.1098/rstb.2021-0372

Duan, Y., & Lupyan, G. (2023). Divergence in Word Meanings and its Consequence for Communication. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 45, No. 45).

van Dijk, B. M. A., Kouwenhoven, T., Spruit, M. R., & van Duijn, M. J. (2023). Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding (arXiv:2310.19671). arXiv.  

Aguera y Arcas, B. (2022). Do large language models understand us? Medium.

Titus, L. M. (2024). Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy. Cognitive Systems Research, 83.

Pezzulo, G., Parr, T., Cisek, P., Clark, A., & Friston, K. (2024). Generating meaning: Active inference and the scope and limits of passive AI. Trends in Cognitive Sciences, 28(2), 97–112.