[Image: A photo of Tianmin Shu]

Speaker 

Tianmin Shu (Johns Hopkins University)

Abstract

Despite tremendous progress in AI, current AI agents still cannot adequately understand humans or flexibly interact with them in real-world settings. One of the key missing ingredients is Theory of Mind, the ability to infer humans' mental states from their behaviors. In this talk, I will discuss how we can engineer human-level machine Theory of Mind for socially intelligent embodied AI partners. I will begin by showing how cognitive modeling and foundation models can be combined to build scalable, robust, model-based approaches for embodied multimodal Theory of Mind. I will then discuss how Theory of Mind reasoning can enhance multimodal human-AI collaboration. Finally, I will present some of our recent efforts on scaling Theory of Mind in the real world, including (1) internalizing explicit mental reasoning through self-supervised reinforcement learning and (2) grounding mental reasoning in the 3D world via generative world models.

Speaker Bio

Dr. Tianmin Shu is an Assistant Professor in the Department of Computer Science at Johns Hopkins University, where he directs the Social Cognitive AI Lab and holds a secondary appointment in the Department of Cognitive Science. His research aims to advance human-centered AI by engineering human-level machine social intelligence: building socially intelligent systems that can understand, reason about, and interact with humans in real-world settings. His work received an Outstanding Paper Award at ACL 2024, the 2017 Cognitive Science Society Computational Modeling Prize in Perception/Action, and several best paper awards at NeurIPS workshops and an IROS workshop. He received his PhD from the University of California, Los Angeles, in 2019. Before joining JHU, he was a research scientist at the Massachusetts Institute of Technology.