
Learning from Sequential User Data: Models and Sample-efficient Algorithms

Thursday, 08/18/2022 1:00pm to 3:00pm
PhD Thesis Defense
Speaker: Aritra Ghosh

Abstract: Deep learning algorithms are notoriously data-hungry. However, in many sequential task domains, only a limited amount of data is available. It is therefore important to devise algorithms that select data samples optimally and neural models that utilize data samples efficiently. In this thesis, we study how to select data samples in sequential tasks so that downstream task performance is maximized. Moreover, we study novel neural architectures for deep learning methods in low-data regimes so that prediction performance is optimized.

We focus on four sequential tasks: knowledge tracing in the educational domain, career path modeling in the labor market, computerized adaptive testing, and sketching for recommender systems.

For the first two tasks, we devise novel neural models to improve performance with limited data samples.

For knowledge tracing, we propose a novel neural architecture, inspired by cognitive and psychometric models, to improve the prediction of students' future performance and utilize the labeled data samples efficiently.

For career path modeling, we propose a novel and interpretable monotonic nonlinear state-space model to analyze online user professional profiles and provide actionable feedback and recommendations to users on how they can reach their career goals.
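As a rough illustration (not the model from the thesis), a monotonic nonlinear state transition can be sketched as a latent level, e.g., skill or seniority, that only moves upward by a non-negative, feature-dependent increment. The `softplus` squashing and the weights below are hypothetical choices:

```python
import math

def softplus(x):
    # smooth, strictly positive squashing function
    return math.log1p(math.exp(x))

def advance_state(state, features, weights):
    # monotonic transition: the latent level can only increase,
    # by a non-negative increment computed from profile features
    increment = softplus(sum(w * f for w, f in zip(weights, features)))
    return state + increment
```

Because the increment is strictly positive, the latent trajectory is monotone by construction, which is what makes the model's feedback interpretable (progress toward a goal never spuriously reverses).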

For the last two tasks, we devise novel sample-efficient algorithms to query a minimal number of sequential samples to improve future predictions.

Computerized adaptive testing (CAT), a form of personalized testing, adaptively selects the next most informative question for each student given their responses to previous questions; prior works use a static question selection policy (e.g., active learning heuristics), which is neither optimal nor able to improve with larger datasets. We propose a bilevel optimization-based framework for CAT that learns a data-driven question selection algorithm, significantly improving on existing data selection policies.
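To make the CAT loop concrete, here is a minimal, hypothetical sketch under a 1PL (Rasch) response model: the inner problem estimates student ability from responses so far, and questions are picked greedily by Fisher information. This greedy rule is exactly the kind of static heuristic the thesis replaces with a learned, bilevel-trained selection policy; all function names and constants below are illustrative only.

```python
import math
import random

def p_correct(theta, b):
    # 1PL (Rasch) probability of answering an item of difficulty b correctly
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fisher_info(theta, b):
    # information an item contributes about ability theta
    p = p_correct(theta, b)
    return p * (1.0 - p)

def estimate_theta(responses, difficulties, steps=50, lr=0.5):
    # inner problem: MLE of ability via gradient ascent on the log-likelihood
    theta = 0.0
    for _ in range(steps):
        grad = sum(y - p_correct(theta, difficulties[i]) for i, y in responses)
        theta += lr * grad
    return theta

def run_cat(true_theta, difficulties, n_questions=5, seed=0):
    rng = random.Random(seed)
    asked, responses, theta_hat = set(), [], 0.0
    for _ in range(n_questions):
        # static policy: ask the most informative unasked question
        i = max((j for j in range(len(difficulties)) if j not in asked),
                key=lambda j: fisher_info(theta_hat, difficulties[j]))
        asked.add(i)
        y = 1 if rng.random() < p_correct(true_theta, difficulties[i]) else 0
        responses.append((i, y))
        theta_hat = estimate_theta(responses, difficulties)
    return theta_hat
```

In the bilevel view, the outer problem would train the selection rule itself (rather than fixing it to Fisher information) so that the final ability estimate, the inner solution, is as accurate as possible across many students.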

Finally, we tackle the sketching problem in recommender systems, where the task is to recommend the next item using only a stored subset of prior data samples; unlike the CAT setup, the downstream task does not remain the same, making this problem more challenging.

In this setting, we develop a data-driven sequential selection algorithm that tackles an evolving downstream task distribution.
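One simple (hypothetical) instantiation of a fixed-budget sketch is a score-based store that evicts the lowest-utility sample as the stream arrives. In the thesis setting the scorer would be learned to track the evolving downstream task; here `score_fn` is just a user-supplied placeholder:

```python
import heapq
from itertools import count

class StreamingSketch:
    """Fixed-budget store of past samples, kept by a utility score.

    `score_fn` stands in for a learned, downstream-aware scorer.
    """
    def __init__(self, capacity, score_fn):
        self.capacity = capacity
        self.score_fn = score_fn
        self._heap = []            # min-heap of (score, tiebreak, sample)
        self._tiebreak = count()   # breaks ties so samples are never compared

    def add(self, sample):
        entry = (self.score_fn(sample), next(self._tiebreak), sample)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif entry[0] > self._heap[0][0]:
            heapq.heapreplace(self._heap, entry)  # evict the lowest-scoring sample

    def samples(self):
        return [s for _, _, s in self._heap]
```

A static `score_fn` (e.g., recency) is exactly what fails when the downstream task shifts, which is why a data-driven, re-trainable scorer is needed.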

The structures introduced into the neural architectures for the first two tasks, derived from domain knowledge, open up future directions for learning deep models effectively from limited data. The data-driven differentiable data selection algorithms for the last two tasks open up future directions for optimally querying (a non-differentiable operation) a minimal number of samples to maximize prediction performance.

Advisor: Andrew Lan

Join via Zoom