Emerging Trustworthiness Issues in Distributed Learning Systems

Tuesday, 05/03/2022 11:00am
PhD Dissertation Proposal Defense
Speaker: Hamid Mozaffari

Abstract:
A distributed learning system spreads the training workload across multiple machines to speed up learning. Federated Learning (FL) is an increasingly popular type of distributed learning that allows mutually untrusted clients to collaboratively train a common machine learning model without sharing their private/proprietary training data with each other. In this thesis, we explore several emerging trustworthiness issues in distributed learning systems, and FL in particular, under real-world settings, and we design mechanisms to mitigate these issues.

First, we show that FL is susceptible to poisoning by malicious clients who aim to hamper the accuracy of the jointly trained model. To defeat FL poisoning, we design federated rank learning (FRL), which limits the adversary's options for poisoning attacks. FRL provides robustness by reducing the space of client updates from model parameter updates in standard FL (a continuous space of floating-point numbers) to the space of parameter rankings (a discrete space of integer values).
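The rank-based idea can be illustrated with a minimal toy sketch (this is an illustration of the general principle, not the FRL implementation presented in the talk): each client reduces its float update to an integer ranking, and the server aggregates by re-ranking the summed ranks, so no single client's unbounded float values enter the aggregate.

```python
# Toy sketch: clients send parameter *rankings* instead of raw float
# updates; the server aggregates over ranks only.
# (Illustrative only -- not the FRL protocol from the talk.)
import numpy as np

def to_ranking(update):
    """Map a float parameter update to an integer ranking (0 = smallest)."""
    return np.argsort(np.argsort(update))

def aggregate_rankings(rankings):
    """Server-side: sum per-parameter ranks and re-rank the totals,
    so no client's raw float magnitudes affect the aggregate."""
    total = np.sum(rankings, axis=0)
    return to_ranking(total)

# Three honest clients with similar updates, one poisoned client.
honest = [np.array([0.1, 0.9, 0.4]),
          np.array([0.2, 0.8, 0.5]),
          np.array([0.1, 0.7, 0.3])]
poisoned = np.array([100.0, -100.0, 0.0])  # huge values, but rank-limited

ranks = [to_ranking(u) for u in honest + [poisoned]]
print(aggregate_rankings(ranks))  # -> [0 2 1]: the honest ordering survives
```

Because the poisoned client can only submit a permutation, its influence is bounded: arbitrarily large float values collapse to the same discrete ranking space as everyone else's.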

The second issue we investigate is the fairness of the globally trained model. Due to the heterogeneity in clients' data distributions, a single model cannot represent all the clients equally. To alleviate this issue, we can learn multiple global models, so that each group of clients benefits from its own personalized model. We design a fair FL scheme, based on learning over parameter rankings, in which the global model performs similarly across different clients. However, in this approach the server can learn the access patterns of the clients, which can be privacy-sensitive in many real-world scenarios.

This brings us to the third issue of FL: privacy of user data. To preserve access privacy, we design heterogeneous private information retrieval (HPIR), in which clients can fetch their specific model parameters from untrusted servers without leaking any information about which parameters they access.
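The access-privacy goal behind this idea can be sketched with the classic two-server XOR-based PIR toy (a textbook construction, not the HPIR protocol from the talk): the client sends each server a query that is individually a uniformly random subset of indices, so neither server learns which entry is wanted, yet XORing the two answers recovers it.

```python
# Toy sketch of classic two-server XOR-based PIR.
# (Illustrative only -- not the HPIR construction from the talk.)
import secrets
from functools import reduce

def xor_subset(db, subset):
    """Server-side: XOR together the requested database entries."""
    return reduce(lambda a, b: a ^ b, (db[j] for j in subset), 0)

def pir_fetch(db, i):
    """Client-side: retrieve db[i] from two non-colluding servers
    without revealing i to either one."""
    n = len(db)
    s1 = {j for j in range(n) if secrets.randbits(1)}  # random subset
    s2 = s1 ^ {i}            # symmetric difference toggles index i
    a1 = xor_subset(db, s1)  # answer from server 1
    a2 = xor_subset(db, s2)  # answer from server 2
    return a1 ^ a2           # all entries except db[i] cancel out

db = [13, 42, 7, 99]         # e.g. integer-encoded model parameters
print(pir_fetch(db, 2))      # -> 7, yet each server saw only a random subset
```

Each server's query is a uniformly random subset regardless of i, so a single server learns nothing about the client's access pattern; only collusion between the two servers would reveal it.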

Chair: Amir Houmansadr

Meeting details:
Virtual (Zoom: https://umass-amherst.zoom.us/j/7574221634)