
Fair Algorithms for Sequential Learning Problems

Friday, 01/19/2024 10:30am to 12:30pm
Zoom
PhD Dissertation Proposal Defense

Machine learning (ML) systems are increasingly used to automate decision-making across many domains, often with substantial benefits. However, these systems can also propagate biases present in their training data. If left unaddressed, these biases can lead to harmful outcomes. Issues of fairness become critical when such outcomes disproportionately impact different communities of people. While a growing body of research aims to develop methods that mitigate unfairness in ML systems, many open challenges remain.
    
This proposal addresses three problems in fair ML. The first focuses on promoting fairness in the offline contextual bandit setting, where an algorithm must use historical data to return, with high probability, a fair policy. Importantly, different application domains may require different fairness criteria, so an additional challenge is to support fairness specifications drawn from a broad class of definitions. To address these challenges, we introduce RobinHood, a fair offline contextual bandit algorithm. Our theoretical results show that the probability RobinHood returns an unfair policy is less than a user-specified constant. Moreover, we show empirically that RobinHood can satisfy various definitions of fairness across three domains, including a user study with an automated tutoring system.
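To make the flavor of this guarantee concrete, the following is a minimal sketch of a Seldonian-style safety test of the kind such high-probability algorithms build on: it forms importance-weighted fairness statistics from logged data and accepts a candidate policy only if a Hoeffding bound certifies, with confidence 1 - delta, that the policy is fair. This is illustrative only, not RobinHood's actual implementation; the function names, the convention that a policy is fair iff E[g] <= 0, and the choice of Hoeffding's inequality are all assumptions.

    import numpy as np

    def hoeffding_upper_bound(samples, delta, value_range):
        # One-sided Hoeffding bound: with probability at least 1 - delta,
        # the true mean of the bounded samples is no larger than this value.
        n = len(samples)
        return samples.mean() + value_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

    def passes_safety_test(behavior_probs, candidate_probs, violations, delta, value_range):
        # Importance-weighted per-episode estimates of a fairness statistic g
        # under the candidate policy, built from data logged by the behavior
        # policy. Assumed convention: the policy is fair iff E[g] <= 0.
        g = (candidate_probs / behavior_probs) * violations
        # Accept the candidate only if E[g] <= 0 can be certified with
        # confidence 1 - delta; otherwise report that no fair policy was found.
        return hoeffding_upper_bound(g, delta, value_range) <= 0.0

    # Toy usage on synthetic logged data (10,000 episodes).
    rng = np.random.default_rng(0)
    behavior = np.full(10_000, 0.5)              # prob. the behavior policy took the logged action
    candidate = rng.uniform(0.3, 0.7, 10_000)    # prob. the candidate policy takes that action
    violations = rng.normal(-0.2, 0.5, 10_000)   # per-episode fairness-violation measurements
    print(passes_safety_test(behavior, candidate, violations, delta=0.05, value_range=2.0))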

The second problem involves mitigating long-term unfairness in the classification setting. Most definitions of fairness in classification are myopic and do not directly account for how model predictions affect the well-being of different communities over time. To address this, we introduce ELF (Enforcing Long-term Fairness), an algorithm that we show, both theoretically and empirically, can produce models that satisfy long-term fairness goals.
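One way such a long-term goal can be evaluated before deployment: if the currently deployed classifier is stochastic and the probability of each logged prediction is recorded, a delayed fairness signal observed later can be importance-weighted to estimate what a new classifier would induce. The sketch below shows only that reweighting step; the function name and data layout are hypothetical and are not ELF's actual interface.

    import numpy as np

    def offpolicy_delayed_fairness(logged_probs, new_probs, delayed_signal):
        # Off-policy estimate of a delayed fairness signal for a new stochastic
        # classifier, computed from data collected while an earlier classifier
        # was deployed. Per logged instance: the probability the deployed model
        # produced its logged prediction (logged_probs), the probability the new
        # model would produce that same prediction (new_probs), and the
        # long-term outcome observed afterwards (delayed_signal, e.g., a later
        # change in an individual's well-being).
        w = np.asarray(new_probs) / np.asarray(logged_probs)  # importance weights
        return float(np.mean(w * np.asarray(delayed_signal)))

    # Toy usage: three logged instances with their delayed outcomes.
    print(offpolicy_delayed_fairness([0.7, 0.6, 0.9], [0.8, 0.5, 0.9], [0.1, -0.3, 0.0]))

Paired with a concentration bound like the one sketched earlier, an estimate of this form can drive a high-confidence test that a long-term fairness goal is met.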
    
The final challenge concerns promoting fairness in resource allocation, an application domain that is often formulated as a sequential learning problem. In this setting, a scarce resource must be distributed across groups over a sequence of rounds, with the goal of optimizing group welfare subject to fairness criteria. We consider a definition of fairness that generalizes principles of equality of opportunity, requiring that the probability an individual receives a resource be approximately independent of their group affiliation. We present insights that lay the foundation for developing an effective algorithm for this problem setting.
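To illustrate the kind of constraint this definition imposes within a single round, here is a hedged sketch that poses one allocation round as a small linear program: maximize expected group welfare subject to a budget and to per-group receipt probabilities differing by at most eps. The formulation, names, and parameters are illustrative assumptions, not the algorithm proposed here.

    import numpy as np
    from scipy.optimize import linprog

    def fair_allocation_probs(demands, utilities, budget, eps):
        # Choose per-group receipt probabilities p_g maximizing expected
        # welfare sum_g d_g * u_g * p_g, subject to the budget constraint
        # sum_g d_g * p_g <= budget and an approximate equality-of-opportunity
        # constraint |p_g - p_h| <= eps for every pair of groups.
        d = np.asarray(demands, dtype=float)
        u = np.asarray(utilities, dtype=float)
        k = len(d)
        A, b = [d], [budget]                     # budget row
        for g in range(k):                       # pairwise fairness rows
            for h in range(k):
                if g != h:
                    row = np.zeros(k)
                    row[g], row[h] = 1.0, -1.0
                    A.append(row)
                    b.append(eps)
        res = linprog(c=-(d * u), A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(0.0, 1.0)] * k, method="highs")
        return res.x                             # receipt probability per group

    # Example: two groups of 100 individuals, welfare weights 1.0 and 0.5,
    # 120 units of the resource, eps = 0.05; prints roughly [0.625, 0.575].
    print(fair_allocation_probs([100, 100], [1.0, 0.5], budget=120, eps=0.05))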

Advisor: Phil Thomas
