
Deep Learning Models for Irregularly Sampled and Incomplete Time Series Data

Friday, 08/06/2021 1:00pm to 3:00pm
Zoom Meeting
PhD Thesis Defense
Speaker: Satya Shukla

Zoom Meeting: https://umass-amherst.zoom.us/j/2614740715?pwd=UXlBbURCRFVlVmx0MFZPS1B5VnVYdz0

Abstract: Irregularly sampled time series data arise naturally in many application domains, including biology, ecology, climate science, astronomy, geology, finance, and health. Such data present fundamental challenges to many classical models from machine learning and statistics. The first challenge is that the dimensionality of the inputs can differ across data cases, since different cases are likely to include different numbers of observations. The second challenge is the lack of alignment of observation time points across dimensions in multivariate time series. These features of irregularly sampled time series data invalidate the assumption of a coherent, fully observed, fixed-dimensional feature space that underlies many basic supervised and unsupervised learning models.

In this thesis, we focus on the development of deep learning models for supervised and unsupervised learning from irregularly sampled time series data. We begin by introducing a computationally efficient architecture for whole time series classification and regression based on a novel deterministic interpolation layer that acts as a bridge between multivariate irregularly sampled time series instances and standard neural network layers that assume regularly spaced or fixed-dimensional inputs. The architecture consists of an RBF kernel-based interpolation network followed by a prediction network. Next, we show how the use of fixed RBF kernel functions can be relaxed through a novel attention-based continuous-time interpolation framework. We show that using attention to learn temporal similarity yields improvements over fixed RBF kernels and other recent approaches on both supervised and unsupervised tasks.
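The core idea of an RBF kernel-based interpolation layer can be illustrated with a minimal sketch: each point on a fixed reference grid receives a normalized weighted average of the irregularly spaced observations, with weights given by a squared-exponential kernel of the time differences. The function name and the bandwidth parameter `alpha` below are illustrative, not the thesis's actual implementation:

```python
import numpy as np

def rbf_interpolate(obs_times, obs_values, ref_times, alpha=1.0):
    """Map irregularly sampled observations onto a fixed reference grid
    using normalized RBF (squared-exponential) kernel weights."""
    # Pairwise squared time differences between reference and observed points.
    d2 = (ref_times[:, None] - obs_times[None, :]) ** 2
    w = np.exp(-alpha * d2)               # RBF kernel weights
    w = w / w.sum(axis=1, keepdims=True)  # normalize per reference point
    return w @ obs_values                 # convex combination of observations

# Four observations at uneven times, interpolated onto a 5-point grid.
t_obs = np.array([0.0, 0.3, 1.1, 2.0])
x_obs = np.array([1.0, 2.0, 0.5, 1.5])
t_ref = np.linspace(0.0, 2.0, 5)
x_ref = rbf_interpolate(t_obs, x_obs, t_ref)
print(x_ref.shape)  # (5,)
```

Because the normalized weights are positive and sum to one, each interpolated value is a convex combination of the observations; the attention-based framework replaces these fixed kernel weights with learned, softmax-normalized similarities.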
Although this model provides state-of-the-art classification and deterministic interpolation performance, it cannot reflect uncertainty due to variable input sparsity. In our final contribution, we address this problem by presenting a novel deep learning framework for probabilistic interpolation that significantly improves uncertainty quantification in the output interpolations. We show that this framework also improves classification performance.

Advisor: Ben Marlin
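The shift from deterministic to probabilistic interpolation can be sketched as follows: instead of a single value per reference point, the interpolator outputs a mean and a (log-)variance, and training minimizes a Gaussian negative log-likelihood, which rewards models that report larger variance where inputs are sparse. This is a minimal illustration of the training objective only, not the thesis's architecture; the arrays below are made-up values:

```python
import numpy as np

def gaussian_nll(mean, log_var, target):
    """Gaussian negative log-likelihood: penalizes both prediction error
    and miscalibrated variance, so uncertainty estimates must be honest."""
    var = np.exp(log_var)
    return 0.5 * np.mean(np.log(2 * np.pi * var) + (target - mean) ** 2 / var)

# Hypothetical interpolator outputs at three reference times.
mean = np.array([1.0, 1.8, 0.7])
target = np.array([1.1, 1.7, 0.6])
log_var_confident = np.log(np.full(3, 0.05))  # small predicted variance
log_var_uncertain = np.log(np.full(3, 1.0))   # large predicted variance

# With near-correct predictions, confident (low-variance) outputs score better.
print(gaussian_nll(mean, log_var_confident, target) <
      gaussian_nll(mean, log_var_uncertain, target))  # True
```

Conversely, when predictions are far from the targets, the same objective favors the high-variance outputs, which is what drives the improved uncertainty quantification under variable input sparsity.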