
Quantifying and Strengthening the Security of Federated Learning

Wednesday, 04/26/2023 9:00am to 11:00am
Hybrid - LGRC A104 and Zoom
PhD Thesis Defense

Federated learning is an emerging distributed learning paradigm that allows multiple users to collaboratively train a joint machine learning model without sharing their private data with any third party. Due to many of its attractive properties, federated learning has received significant attention from both academia and industry and now powers major applications such as Google's Gboard and Assistant, Apple's Siri, and Owkin's health diagnostics. However, federated learning has yet to see widespread adoption due to a number of challenges. One such challenge is its susceptibility to poisoning by malicious users who aim to manipulate the joint machine learning model.
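
For readers unfamiliar with the setting, the following minimal sketch (Python with NumPy) illustrates a single round of federated averaging, the canonical federated learning algorithm: each client fine-tunes the global model on its private data and shares only the resulting weights, which the server averages. The function names, toy linear model, and unweighted averaging are illustrative assumptions, not details from the thesis.

    import numpy as np

    def local_update(global_weights, local_data, lr=0.01, epochs=1):
        """Client-side step: start from the global model, take gradient steps
        on private (x, y) pairs, and return only the updated weights."""
        w = global_weights.copy()
        for _ in range(epochs):
            for x, y in local_data:
                grad = 2 * (np.dot(w, x) - y) * x  # squared-error gradient for a toy linear model
                w = w - lr * grad
        return w

    def fedavg_round(global_weights, client_datasets):
        """Server-side step: collect client weights and average them."""
        client_weights = [local_update(global_weights, data) for data in client_datasets]
        return np.mean(client_weights, axis=0)

    # Toy usage: 5 clients, each holding a few private (feature, label) pairs.
    rng = np.random.default_rng(0)
    clients = [[(rng.standard_normal(3), 1.0) for _ in range(4)] for _ in range(5)]
    w = np.zeros(3)
    for _ in range(10):
        w = fedavg_round(w, clients)

The key property is that raw data never leaves a client; only model updates are exchanged, which is also what makes the aggregation step a target for poisoning.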
 
In this work, we take significant steps towards addressing this challenge. We start by providing a systematization of poisoning adversaries in federated learning and use it to build adversaries of varying strengths, and to show that some adversaries common in the prior literature are not practically relevant. For the majority of this thesis, we focus on untargeted poisoning, both because it can impact a much larger federated learning population than other types of poisoning and because most prior poisoning defenses for federated learning aim to defend against it.
 
Next, we introduce a general framework for designing strong untargeted poisoning attacks against various federated learning algorithms. Using our framework, we design state-of-the-art poisoning attacks and demonstrate that the theoretical guarantees and empirical claims of prior state-of-the-art federated learning poisoning defenses are brittle under the same strong (albeit theoretical) adversaries that these defenses aim to defend against. We distill concrete lessons highlighting the shortcomings of prior defenses and, using these lessons, design two novel defenses with strong theoretical guarantees, demonstrating their state-of-the-art performance in various adversarial settings. A rough illustration of this style of attack and defense follows below.
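
As a rough illustration of the kind of attack and defense studied here, and not of the specific algorithms proposed in the thesis, the sketch below shows a simple untargeted model-poisoning attack that submits a scaled update pointing opposite to the benign average, together with coordinate-wise median aggregation, a standard robust defense from the literature. All names and parameters are illustrative assumptions.

    import numpy as np

    def craft_malicious_update(benign_updates, scale=10.0):
        """Untargeted model poisoning (illustrative): push the aggregate away
        from the benign average by submitting a scaled, opposite-direction update."""
        benign_mean = np.mean(benign_updates, axis=0)
        return -scale * benign_mean

    def median_aggregate(updates):
        """Coordinate-wise median: a robust aggregation rule that limits the
        influence of a minority of outlier (malicious) updates."""
        return np.median(updates, axis=0)

    # Toy usage: 9 benign clients and 1 attacker.
    rng = np.random.default_rng(0)
    benign = [rng.standard_normal(5) * 0.1 + 1.0 for _ in range(9)]
    malicious = craft_malicious_update(benign)
    all_updates = np.stack(benign + [malicious])
    print("mean aggregate:  ", np.mean(all_updates, axis=0))   # pulled far off by the attacker
    print("median aggregate:", median_aggregate(all_updates))  # stays close to the benign values

Plain averaging lets a single attacker move the aggregate arbitrarily far, whereas the median stays near the benign values; stronger attacks and defenses refine exactly this tension.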
 
Finally, for the first time, we thoroughly investigate the impact of poisoning in real-world federated learning settings and draw significant, and rather surprising, conclusions about the robustness of federated learning in practice. For instance, we show that, contrary to established belief, federated learning is highly robust in practice even when using simple, low-cost defenses. One of the major implications of our study is that, although interesting from a theoretical perspective, many of the strong adversaries, and hence strong prior defenses, are of little use in practice.

Advisor: Amir Houmansadr

Join via Zoom