When Machines Learn About Humans

Thursday, 02/21/2013 11:00am to 12:00pm
Seminar

Moritz Hardt
IBM Research Almaden

Computer Science Building, Room 151

Faculty Host: Andrew McGregor

The "human element" in data introduces fundamental algorithmic challenges such as protecting individual privacy, ensuring fairness in classification, and designing algorithms that are robust to population outliers and adversarial conditions. This talk focuses on the interplay between privacy and robustness in the context of discovering and computing with linear structure in massive data sets.

We will first give a simple and practical method for computing the principal components of a data set under the strong notion of differential privacy. Our algorithm always guarantees privacy while its utility analysis circumvents a severe obstruction in differential privacy using a realistic assumption central to robust principal component analysis.
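The abstract does not spell out the algorithm, but the general flavor of differentially private PCA can be illustrated by input perturbation: add calibrated noise to the empirical covariance before computing eigenvectors. The sketch below is a generic toy of this approach, not necessarily the method presented in the talk; the function name, the unit-row-norm assumption, and the rough sensitivity calibration are all illustrative assumptions.

```python
import numpy as np

def private_top_component(X, epsilon=1.0, delta=1e-5, rng=None):
    """Toy differentially private PCA via input perturbation (not the
    speaker's algorithm): perturb the empirical covariance with symmetric
    Gaussian noise, then take the top eigenvector.

    Assumes each row of X has L2 norm at most 1, so replacing one row
    changes X.T @ X / n by at most 2/n in Frobenius norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    cov = X.T @ X / n
    # Gaussian-mechanism noise scale, with sensitivity taken as 2/n
    # under the unit-row-norm assumption (a rough, toy calibration).
    sigma = 2.0 * np.sqrt(2.0 * np.log(1.25 / delta)) / (epsilon * n)
    noise = rng.normal(0.0, sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0  # symmetrize the perturbation
    eigvals, eigvecs = np.linalg.eigh(cov + noise)
    return eigvecs[:, -1]  # eigenvector of the largest eigenvalue
```

With a large privacy budget the noise is small and the output tracks the true top component; as epsilon shrinks, utility degrades, which is exactly the tension the utility analysis in the talk must navigate.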

We then turn to the problem of analyzing massive data sets using few linear measurements---an algorithmic paradigm known as "linear sketching". Here we prove a "no free lunch" theorem showing that the computational efficiency of linear sketches comes at the cost of robustness. Indeed, we show that efficient linear sketches cannot guarantee correctness in a natural interactive model of computation even for very basic problems. Our result builds on a close connection to privacy and can be seen as a broad generalization of so-called "reconstruction attacks" in the privacy setting.
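As a minimal illustration of what "linear sketching" means (the functions and dimensions below are hypothetical, chosen for the example, and unrelated to the lower-bound construction in the talk): a random Gaussian matrix compresses a long vector into a few linear measurements while approximately preserving its norm.

```python
import numpy as np

def make_sketch(d, k, rng):
    """A k x d random Gaussian sketch matrix S with k << d. The map
    x -> S @ x uses only k linear measurements of x."""
    return rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

def estimate_sq_norm(sketched_x):
    # ||S @ x||^2 is an unbiased estimate of ||x||^2 for this scaling.
    return float(np.dot(sketched_x, sketched_x))
```

Because the sketch is linear, sketches compose with updates: S @ (x + y) = S @ x + S @ y, so a stream of additive updates can be summarized in k numbers. The "no free lunch" result concerns what happens when the inputs to such a sketch are chosen adaptively and adversarially, rather than fixed in advance as in this example.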

Bio:
Moritz Hardt is a post-doctoral researcher in the theory group at IBM Research Almaden. He completed his PhD in Computer Science at Princeton University in 2011, advised by Boaz Barak. His work focuses on the algorithmic foundations of privacy, fairness and robustness in statistical data analysis.

A reception will be held at 3:40 in the atrium, outside the presentation room.