Autotelic Agents that Use and Ground Large Language Models (Oudeyer)

Pierre-Yves Oudeyer (Inria, Bordeaux), 21 March 2024

ABSTRACT: Developmental AI aims to design and study artificial agents that are capable of open-ended learning. I will discuss two fundamental ingredients: (1) curiosity-driven exploration mechanisms, especially mechanisms enabling agents to invent and sample their own goals (such agents are called ‘autotelic’); (2) language and culture, enabling agents to learn from others’ discoveries through the internalization of cognitive tools. I will discuss the main challenges in designing autotelic agents (e.g., how can they be creative in choosing their own goals?) and how some of these challenges require language and culture to be addressed. I will also discuss using LLMs as proxies for human culture in autotelic agents, and how autotelic agents can leverage LLMs to learn faster and, in turn, align and ground LLMs in the dynamics of the environment they interact with. I will also address some of the main current limitations of LLMs.
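To make the idea of an autotelic agent concrete, here is a minimal, hypothetical sketch of the loop such an agent runs: it maintains a self-generated goal space, samples a goal biased toward high learning progress, attempts it, and updates its competence estimates. The class name, the toy goal set, and the learning-progress heuristic are illustrative assumptions for this post, not the architecture used in the papers listed below.

```python
# Minimal, illustrative autotelic loop (hypothetical names; not the authors' code).
import random
from collections import defaultdict

class ToyAutotelicAgent:
    def __init__(self):
        self.goals = {"reach_0", "reach_5", "reach_10"}   # self-generated goal space (toy)
        self.competence = defaultdict(float)              # running success rate per goal
        self.progress = defaultdict(float)                # recent change in competence

    def sample_goal(self):
        # Prefer goals whose competence is changing (a simple learning-progress heuristic).
        weights = [abs(self.progress[g]) + 0.05 for g in self.goals]
        return random.choices(list(self.goals), weights=weights)[0]

    def attempt(self, goal):
        # Stand-in for a policy rollout in an environment; success is more likely
        # on goals the agent is already somewhat competent at.
        return random.random() < 0.2 + 0.6 * self.competence[goal]

    def update(self, goal, success):
        old = self.competence[goal]
        self.competence[goal] = 0.9 * old + 0.1 * float(success)
        self.progress[goal] = self.competence[goal] - old

agent = ToyAutotelicAgent()
for episode in range(1000):
    g = agent.sample_goal()
    agent.update(g, agent.attempt(g))
print({g: round(c, 2) for g, c in agent.competence.items()})
```

In the talk's framing, language and culture (e.g., an LLM used as a cultural proxy) would enter this loop by proposing or describing goals, rather than the agent relying only on a hand-coded goal set as in this toy sketch.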

Pierre-Yves Oudeyer and his team at Inria Bordeaux study open-ended lifelong learning and the self-organization of behavioral, cognitive and language structures, at the frontiers of AI and cognitive science. In the field of developmental AI, they use machines as tools to better understand how children learn, and study how machines could learn autonomously as children do and integrate into human cultures. They study models of curiosity-driven autotelic learning, enabling humans and machines to set their own goals and self-organize their learning program. They also work on applications in education and assisted scientific discovery, using AI techniques to serve humans and to encourage learning, curiosity, exploration and creativity.

Colas, C., Karch, T., Moulin-Frier, C., & Oudeyer, P. Y. (2022). Language and culture internalisation for human-like autotelic AI. Nature Machine Intelligence, 4(12), 1068-1076. https://arxiv.org/abs/2206.01134

Carta, T., Romac, C., Wolf, T., Lamprier, S., Sigaud, O., & Oudeyer, P. Y. (2023). Grounding large language models in interactive environments with online reinforcement learning. ICML 2023. https://arxiv.org/abs/2302.02662

Colas, C., Teodorescu, L., Oudeyer, P. Y., Yuan, X., & Côté, M. A. (2023). Augmenting Autotelic Agents with Large Language Models. arXiv preprint arXiv:2305.12487. https://arxiv.org/abs/2305.12487
