Faculty Recruiting

Foundations of Responsible Machine Learning

Thursday, 02/09/2023 4:00pm to 5:00pm
Computer Science Building, Room 150/151

Title: Foundations of Responsible Machine Learning

Abstract: Algorithms make predictions about people constantly.  The spread of such prediction systems has raised concerns that machine learning algorithms may exhibit problematic behavior, especially against individuals from marginalized groups.  This talk will provide an overview of my research on building a theory of "responsible" machine learning.  Specifically, I will highlight a notion of fairness in prediction, called Multicalibration (ICML'18), which formalizes the goals of fair prediction through the lens of complexity theory.  Multicalibration requires that algorithmic predictions be well-calibrated, not simply overall, but simultaneously over a rich collection of subpopulations.  This "multi-group" approach strengthens the guarantees of group fairness definitions, without incurring the costs (statistical and computational) associated with individual-level protections.  Additionally, I will present a new paradigm for learning, Outcome Indistinguishability (STOC'21), which provides a broad framework for learning predictors satisfying formal guarantees of responsibility.  Finally, I will discuss the threat of Undetectable Backdoors (FOCS'22), which represent a serious challenge for building trust in machine learning models.
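To make the calibration requirement concrete, the sketch below (not from the talk; all data, group definitions, bin counts, and the tolerance `alpha` are hypothetical) checks whether a predictor's binned calibration error stays small not just overall but on each named subpopulation, in the spirit of the multicalibration definition described above.

```python
# Illustrative sketch: calibration error overall vs. per subgroup.
# A predictor can look calibrated on average while being badly
# miscalibrated on specific subpopulations.

def calibration_error(preds, outcomes, n_bins=10):
    """Mean |avg prediction - avg outcome| over occupied prediction bins."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    errs = []
    for b in bins:
        if b:
            avg_p = sum(p for p, _ in b) / len(b)
            avg_y = sum(y for _, y in b) / len(b)
            errs.append(abs(avg_p - avg_y))
    return sum(errs) / len(errs) if errs else 0.0

def multicalibration_violations(preds, outcomes, groups, alpha=0.1):
    """Return (group, error) pairs whose calibration error exceeds alpha.

    `groups` maps a subgroup name to the indices of its members; an
    (approximately) multicalibrated predictor yields no violations.
    """
    violations = []
    for name, idxs in groups.items():
        err = calibration_error([preds[i] for i in idxs],
                                [outcomes[i] for i in idxs])
        if err > alpha:
            violations.append((name, err))
    return violations

# Toy data: every prediction is 0.5, but outcomes split by group, so the
# predictor is perfectly calibrated overall yet violates both subgroups.
preds = [0.5] * 8
outcomes = [1, 1, 1, 1, 0, 0, 0, 0]
groups = {"A": [0, 1, 2, 3], "B": [4, 5, 6, 7]}
print(calibration_error(preds, outcomes))                # 0.0 overall
print(multicalibration_violations(preds, outcomes, groups))
```

The toy example shows why the "multi-group" view matters: aggregate calibration alone would pass this predictor, while the per-subgroup check surfaces the failure.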

Bio: Michael P. Kim is a Postdoctoral Research Fellow at the Miller Institute for Basic Research in Science at UC Berkeley, hosted by Shafi Goldwasser.  Before this, Kim completed his Ph.D. in Computer Science at Stanford University, advised by Omer Reingold.  Kim's research addresses basic questions about the appropriate use of machine learning algorithms that make predictions about people.
