Gesture Semantics: Deictic Reference, Deferred Reference and Iconic Co-Speech Gestures (Lücking)

Andy Lücking, Goethe University Frankfurt, March 14, 2024

ABSTRACT: Language use is situated in manifold ways, including the exploitation of the visual context and the use of manual gestures (multimodal communication). I will survey recent theoretical advances concerning the semantics of co-verbal deictic and iconic gestures and their semantic contribution. Multimodal communication challenges traditional notions of reference and meaning developed in formal semantics. Computationally tractable models of deictic and deferred reference and of iconic gestures are proposed instead. These models specify language/perception interfaces for two concrete phenomena that are central to situated language. Inasmuch as LLMs lack perception and embodiment, these phenomena are currently, but not in principle, out of reach. I will conclude by pointing out *what* is needed for an LLM to be capable of deferred reference and iconic gestures.

Andy Lücking is Privatdozent at Goethe University Frankfurt. His work contributes to theoretical linguistics and computational semantics, in particular to a linguistic theory of human communication, that is, face-to-face interaction within and beyond single sentences. Besides publishing on deixis and iconicity in manual gesture, Andy is the main author of Referential Transparency Theory, the current semantic theory of plurality and quantification. His work on the perception of iconic gestures received an IEEE best paper award.

Andy Lücking, Alexander Henlein, and Alexander Mehler (2024). Iconic Gesture Semantics. In review. Manuscript available on request.

Andy Lücking and Jonathan Ginzburg (2023). Leading voices: Dialogue semantics, cognitive science, and the polyphonic structure of multimodal interaction. Language and Cognition, 15(1), 148–172.

Andy Lücking, Thies Pfeiffer and Hannes Rieser (2015). Pointing and Reference Reconsidered. Journal of Pragmatics, 77, 56–79. DOI: 10.1016/j.pragma.2014.12.013.

The Grounding Problem in Language Models is not only about Grounding (Lenci)

Alessandro Lenci, Linguistics, U. Pisa, February 29, 2024

ABSTRACT: The Grounding Problem is typically assumed to concern the lack of referential competence of AI models. Language Models (LMs) that are trained only on texts, without direct access to the external world, are indeed rightly regarded as affected by this limitation: they are ungrounded. On the other hand, Multimodal LMs do have extralinguistic training data and show important abilities to link language with the visual world. In my talk, I will argue that incorporating multimodal data is a necessary but not sufficient condition to properly address the Grounding Problem. When applied to statistical models based on distributional co-occurrences, such as LMs, the Grounding Problem should be reformulated in a more extensive way, which poses an even higher challenge for current data-driven AI models.

Alessandro Lenci is Professor of linguistics and director of the Computational Linguistics Laboratory (CoLing Lab), University of Pisa. His main research interests are computational linguistics, natural language processing, semantics and cognitive science.


Lenci, A., & Sahlgren, M. (2023). Distributional Semantics. Cambridge: Cambridge University Press.

Lenci, A. (2018). Distributional models of word meaning. Annual Review of Linguistics, 4, 151–171.

Lenci, A. (2023). Understanding Natural Language Understanding Systems. A Critical Analysis. Sistemi Intelligenti. arXiv preprint arXiv:2303.04229.

Lenci, A., & Padó, S. (2022). Perspectives for natural language processing between AI, linguistics and cognitive science. Frontiers in Artificial Intelligence, 5, 1059998.