
Preserving Trust and Privacy of Machine Learning Models in Resource-Constrained Environments

Wednesday, 10/26/2022 9:00am to 11:00am
A311 & Zoom
PhD Dissertation Proposal Defense
Speaker: Akanksha Atrey

Recent advances in mobile computing and the Internet of Things (IoT) enable the global integration of heterogeneous smart devices via wireless networks. A common characteristic of these modern-day systems is their ability to collect and communicate streaming data, making machine learning (ML) appealing for processing, reasoning, and predicting about the environment. More recently, low network latency requirements have made offloading intelligence to the cloud undesirable. These requirements have led to the emergence of edge computing, an approach that brings computation closer to the device to achieve low latency, high throughput, and efficiency. Depending on system requirements, the cloud, edge, and device can collaborate to distribute computation. However, continuous collaboration between the cloud, edge, and device is susceptible to information leakage and loss, which can create insecure and unreliable experiences. This raises an important question: how can we design, develop, and evaluate high-performing ML systems that are trustworthy and privacy-preserving in resource-constrained edge environments?

This proposal talk addresses the above question by designing and implementing trustworthy and privacy-preserving ML models in distributed applications. Specifically, trustworthiness follows from the reliability and explainability of distributed ML systems, and privacy preservation focuses on protecting sensitive information about both the client and the server in a distributed application. I first introduce an empirical approach to establishing trust in the explanations generated for reinforcement learning (RL) agents in IoT applications. The method is grounded in counterfactual reasoning: it evaluates the explanations produced by a popular visualization technique, saliency maps, and assesses the degree to which they correspond to the semantics of RL environments. Second, I examine the privacy implications of personalized models in distributed mobile services by proposing time-series-based model inversion attacks. To thwart such attacks, I present Pelican, a distributed framework that learns and deploys transfer-learning-based personalized ML models in a privacy-preserving manner on resource-constrained mobile devices. Third, I investigate on-device ML models used to provide a service and propose novel privacy attacks that can leak sensitive proprietary information of the service provider. To mitigate such attacks, I propose an end-to-end ML framework for training and serving that does not compromise the service provider's proprietary information. Finally, I propose to improve the trustworthiness and privacy of ML time-series predictions when sharing IoT data across applications.
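To illustrate the counterfactual-reasoning idea behind evaluating saliency maps, here is a minimal, hypothetical sketch (all names and the toy policy are illustrative, not the dissertation's actual method): if a saliency map is faithful, randomly perturbing the high-saliency input features should flip the agent's action more often than perturbing the low-saliency ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(state):
    """Toy stand-in for an RL agent's policy: decides using feature 0 only."""
    return int(state[0] > 0.5)

def saliency(state):
    """Toy saliency map that (correctly) puts all weight on feature 0."""
    s = np.zeros_like(state)
    s[0] = 1.0
    return s

def counterfactual_flip_rate(state, mask, trials=200):
    """Fraction of random perturbations of the masked features that
    change the agent's action -- a simple counterfactual test."""
    base = policy(state)
    flips = 0
    for _ in range(trials):
        cf = state.copy()
        cf[mask] = rng.random(int(mask.sum()))  # resample masked features
        flips += policy(cf) != base
    return flips / trials

state = np.array([0.9, 0.2, 0.4])
sal = saliency(state)
high = sal >= sal.mean()  # high-saliency features
low = ~high               # low-saliency features

# A faithful map yields a clearly higher flip rate on high-saliency features.
print(counterfactual_flip_rate(state, high))  # substantially above zero
print(counterfactual_flip_rate(state, low))   # 0.0 here: feature 0 is untouched
```

In this toy setting the semantics are known by construction, so the gap between the two flip rates directly measures whether the saliency map points at the features the policy actually uses.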

Advisor: Prashant Shenoy

Join via Zoom