
Towards High-fidelity Human Motion Generation

Thursday, 04/25/2024 12:00pm to 1:00pm
CS 150/151, Zoom
Machine Learning and Friends Lunch

Abstract: Generating virtual humans that can move and behave like us has tremendous applications in various domains, such as health care, gaming, and sports. In this talk, I will present two projects from our lab that aim to generate high-fidelity human motion using diffusion models. In the first part, I will present an upcoming ICLR 2024 paper (https://arxiv.org/abs/2310.08580) that supports controllable human motion generation. Unlike existing work, which only supports dense control signals on the pelvis, our approach can control any human joint at any time step. In the second part, I will present a method that can translate a textual prompt into a sequence of 3D human-object interactions (HOIs) (https://arxiv.org/abs/2312.06553). Among concurrent works, our approach is the first that can do so. Our key insight is to estimate the object's affordance for interactions and use it to guide HOI generation.

Bio: Huaizu Jiang is an Assistant Professor in the Khoury College of Computer Sciences at Northeastern University. Prior to that, he was a Postdoc at Caltech and Visiting Researcher at NVIDIA. He received my Ph.D. degree in Computer Science from University of Massachusetts, Amherst. He has broad research interests in computer vision, computational photography, natural language processing, and machine learning. His long-term research aims to teach machines to develop visual intelligence in a manner analogous to humans by combining 3D and multi-modal cues. In the short term, his research goal is to create smart visual perception tools to improve people's life experiences of using cameras. He received Adobe Fellowship and NVIDIA Graduate Fellowship both in 2019. His research has been supported by NSF, MathWorks, Adobe, and Google.