
Speaker

Karin de Langis (University of Minnesota)

Abstract

Large language models (LLMs) are increasingly enlisted to assist with daily tasks and problem solving, yet they remain prone to occasional, unpredictable failures. Explaining and better anticipating these failures may require an understanding of how LLMs “think” — in other words, to what extent the artificial cognition exhibited by LLMs represents genuine reasoning versus sophisticated statistical pattern matching. A growing body of research applies methods from the cognitive sciences to this problem. I will discuss recent work characterizing artificial cognition with respect to executive functioning and narrative comprehension, presenting evidence that LLMs exhibit less goal-oriented processing and cognitive control than humans do.

Bio

Karin de Langis is a PhD candidate at the University of Minnesota working with the Minnesota NLP group. She researches artificial cognition in language models, as well as cognitive metrics for human annotations. Her work has been published at multiple NLP venues, including ACL and EMNLP. She received a Bachelor of Arts in Linguistics and Cognitive Science from Pomona College in Claremont, CA.