
Learning from Sequential User Data: Models and Sample-efficient Algorithms

Wednesday, 02/02/2022 12:00pm to 2:00pm
Virtual via Zoom
PhD Dissertation Proposal Defense
Speaker: Aritra Ghosh

Abstract: Deep learning algorithms are notoriously data-hungry, yet in many sequential task domains only a limited amount of data is available. It is therefore important to devise algorithms that select data samples optimally and neural models that use those samples efficiently. In this thesis proposal, we study how to select data samples in sequential tasks so that downstream task performance is maximized, and we develop novel neural architectures that optimize prediction performance in low-data regimes.


We focus on four sequential tasks: knowledge tracing in the educational domain, career path modeling in the labor market, computerized adaptive testing, and sketching for recommender systems.


For the first two tasks, we devise novel neural models that improve performance with limited data. For knowledge tracing, we propose a novel neural architecture, inspired by cognitive and psychometric models, that improves the prediction of students' future performance and uses labeled data efficiently. For career path modeling, we propose a novel, interpretable monotonic nonlinear state-space model that analyzes online professional profiles and provides actionable feedback and recommendations to users on how to reach their career goals.

For the last two tasks, we devise (and will devise) novel sample-efficient algorithms that query a minimal number of sequential samples to improve future predictions. Computerized adaptive testing (CAT), a form of personalized testing, adaptively selects the next most informative question for each student given their responses to previous questions; prior works use static question selection policies (e.g., active learning heuristics) that are neither optimal nor able to improve with larger datasets. We propose a bilevel optimization-based framework for CAT that learns a data-driven question selection algorithm and significantly outperforms existing selection policies. Finally, we will tackle the sketching problem in recommender systems, where the task is to recommend the next item using only a stored subset of prior data samples; unlike in the CAT setup, the downstream task does not remain fixed, which makes this problem more challenging. In this setting, we are interested in developing a data-driven sequential selection algorithm that handles an evolving downstream task distribution.

The structure introduced into the neural architectures for the first two tasks using domain knowledge opens up future directions for learning deep models effectively from limited data. The data-driven differentiable data selection algorithms for the last two tasks open up future directions for optimally querying a minimal number of samples (a non-differentiable operation) to maximize prediction performance.
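To make the bilevel setup for CAT concrete, the following is a minimal, hypothetical PyTorch sketch: the inner level fits a student-specific ability to the responses gathered so far under a simple Rasch-style (1-parameter logistic) response model, while the outer level trains the question-selection policy and item parameters to minimize error on held-out responses, using a REINFORCE-style term because question selection is non-differentiable. All names, the response model, and the training details are illustrative assumptions, not the dissertation's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of bilevel question selection for adaptive testing.
# Hypothetical names and a Rasch-style response model; illustrative only.

n_questions, inner_steps, inner_lr = 100, 5, 0.1

item_difficulty = nn.Parameter(torch.zeros(n_questions))        # outer variable
selector = nn.Sequential(                                        # selection policy (outer variable)
    nn.Linear(2 * n_questions, 64), nn.ReLU(), nn.Linear(64, n_questions))
outer_opt = torch.optim.Adam(list(selector.parameters()) + [item_difficulty], lr=1e-3)

def predict(ability, q_idx):
    # P(correct) = sigmoid(ability - difficulty), a 1-parameter logistic model
    return torch.sigmoid(ability - item_difficulty[q_idx])

def train_step(responses, meta_idx):
    # responses: full 0/1 response vector (float tensor) for one training student
    # meta_idx: held-out question indices (long tensor) for the outer objective
    ability = torch.zeros(1, requires_grad=True)                 # student-specific inner variable
    observed = torch.zeros(n_questions)                           # which questions were asked
    answers = torch.zeros(n_questions)                            # recorded responses
    log_probs = []
    for _ in range(inner_steps):
        # Outer level: the policy scores questions given the current state;
        # already-asked questions are masked out before sampling.
        logits = selector(torch.cat([observed, answers])) - 1e9 * observed
        dist = torch.distributions.Categorical(logits=logits)
        q = dist.sample()
        log_probs.append(dist.log_prob(q))
        observed[q], answers[q] = 1.0, responses[q]
        # Inner level: a gradient step fitting the ability to responses so far.
        asked = observed.bool()
        inner_loss = F.binary_cross_entropy(
            predict(ability, torch.arange(n_questions)[asked]), answers[asked])
        grad, = torch.autograd.grad(inner_loss, ability, create_graph=True)
        ability = ability - inner_lr * grad
    # Outer objective: predict held-out responses with the adapted ability.
    outer_loss = F.binary_cross_entropy(predict(ability, meta_idx), responses[meta_idx])
    # Selection is non-differentiable, so the policy gets a REINFORCE-style term.
    policy_loss = outer_loss.detach() * torch.stack(log_probs).sum()
    outer_opt.zero_grad()
    (outer_loss + policy_loss).backward()
    outer_opt.step()
    return outer_loss.item()

In this sketch, repeatedly calling train_step over many students would jointly learn the item parameters and the selection policy; the actual framework proposed in the dissertation may differ in its response model, policy parameterization, and gradient estimators.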

Advisor: Andrew Lan
