Machine Learning and Friends Lunch: Chen Sun, Grounding Deep Generative Models in the Physical World
Speaker: Chen Sun (Brown University)
Abstract: Generative Artificial Intelligence (GenAI) has made explosive strides, demonstrating the ability to solve high-school math problems and produce movie-quality videos. Yet significant challenges remain: the same models that achieve such feats also suggest putting glue on pizza or generate morphing cats and four-legged ants. As AI-synthesized content increasingly dominates the Internet, concerns arise about performance degradation in future GenAI models due to contamination from their own synthesized data.
In this talk, I advocate for the importance of grounding deep generative models in the physical world to address these challenges, based on recent work from my lab at Brown University. I will explore: (1) the good: How GenAI streamlines the design of video perception frameworks and serves as a powerful prior for embodied policy learning; (2) the bad: How training GenAI models on their synthesized data can lead to degradation and collapse, and how incorporating "physics" corrections can mitigate this issue; and finally (3) the future: Why inductive biases remain essential in the era of "scaling laws," and how "self-consuming" training can evolve into "self-improving" systems by integrating embodied agents into the training loop.
Bio: Chen Sun is an assistant professor of computer science at Brown University, studying computer vision and machine learning. He is also a part-time research scientist at Google DeepMind, where he spent five years before joining Brown. Chen received his Ph.D. from the University of Southern California in 2016 and his bachelor's degree from Tsinghua University in 2011. His research was selected as a best paper finalist at CVPR 2019 and was recognized with a Richard B. Salomon Faculty Research Award.