
Fast Linear Algebra for Gaussian Processes

Wednesday, 01/25/2023 11:00am to 1:00pm
Zoom
PhD Dissertation Proposal Defense
Speaker: Mohit Yadav

With the emergence of large-scale data, machine learning (ML) applications have surged. In many of these applications, uncertainty calibration and interpretability of ML predictions are crucial. Gaussian processes (GPs) allow learning non-parametric models with calibrated uncertainty and interpretable predictions. However, a well-known limitation of GPs is their running time. Inference and learning for GPs require performing linear algebraic operations (LAOs) (e.g., matrix inversion and computing log determinants) on an n × n kernel matrix, where n is the number of training data points. Naive methods for the exact computation of these LAOs require Θ(n³) time, which severely limits their application to large-scale problems. This thesis focuses on developing novel fast algorithms for the LAOs mentioned above by exploiting methods from fast linear algebra, thereby accelerating inference and learning for GPs.
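To make the Θ(n³) bottleneck concrete, here is a minimal sketch of exact GP regression (standard textbook form, not the thesis's method); the RBF kernel and hyperparameter values are illustrative assumptions:

```python
# Minimal sketch of exact GP regression: the Cholesky factorization of the
# n x n kernel matrix is the Theta(n^3) bottleneck the abstract refers to.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential kernel; X1: (n, d), X2: (m, d).
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior_mean(X, y, X_star, noise=1e-2):
    K = rbf_kernel(X, X)                                 # n x n kernel matrix
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))   # Theta(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # two O(n^2) solves
    # The log marginal likelihood needs the log determinant:
    log_det = 2.0 * np.log(np.diag(L)).sum()
    return rbf_kernel(X_star, X) @ alpha, log_det
```

Both the matrix inversion (via Cholesky) and the log determinant appear here, which is why accelerating these LAOs speeds up both inference and learning.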

In the first two chapters of this thesis, we investigate the structured kernel interpolation (SKI) framework, which interpolates the kernel covariance matrix using a dense grid in the input space and exploits iterative algorithms to accelerate GPs. These iterative algorithms perform LAOs using a sequence of kernel matrix-vector multiplications (MVMs). The first chapter presents a novel and fast iterative algorithm within the SKI framework for GP problems. We show that after a single O(n)-time computation, the remaining sequence of kernel-MVM computations is entirely independent of n. We demonstrate speedups in GP inference on several large-scale, low-dimensional datasets.
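As background on the iterative engine such frameworks rely on, the sketch below shows standard matrix-free conjugate gradients (not the thesis's new algorithm): it solves K v = y while touching K only through MVMs, so any fast MVM routine (such as SKI's interpolated kernel) accelerates the entire loop.

```python
# Standard matrix-free conjugate gradients: solves K v = y for symmetric
# positive-definite K, accessing K only through the mvm callable.
import numpy as np

def conjugate_gradients(mvm, y, tol=1e-6, max_iters=100):
    # mvm: callable computing K @ v for the (implicit) kernel matrix K.
    v = np.zeros_like(y)
    r = y - mvm(v)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iters):
        Kp = mvm(p)                      # the only place K is touched
        alpha = rs_old / (p @ Kp)
        v += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return v
```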

Unfortunately, SKI scales poorly with the dimension of the input points, since the size of the dense grid grows exponentially with the dimension. In the second chapter, this thesis proposes applying sparse grids within the SKI framework, since sparse grids enable accurate interpolation with a number of grid points that grows much more slowly with dimension. To facilitate the use of sparse grids, this thesis contributes a novel MVM algorithm for the sparse-grid kernel matrix. These changes demonstrate that SKI can be scaled to higher dimensions.
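As a rough, back-of-the-envelope illustration of the gap (assumed values, using the standard Smolyak sparse-grid asymptotic of roughly 2^L · L^(d-1) points at level L, not an exact count):

```python
# Illustrative comparison: a dense grid with m points per axis has m^d
# points, while a Smolyak-style sparse grid of level L grows only as
# roughly 2^L * L^(d-1), i.e., mildly with the dimension d.
m, L = 2**6, 6   # 64 points per axis vs. sparse-grid level 6 (assumed)
for d in (2, 4, 8):
    dense = m ** d
    sparse = 2**L * L ** (d - 1)   # asymptotic order, not an exact count
    print(f"d={d}: dense ~ {dense:.2e}, sparse ~ {sparse:.2e}")
```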

In the last chapter, we will investigate bandit algorithms that use GPs to recommend the top-k items from a list of available items. We propose an algorithm that accelerates LAOs for the Kendall kernel matrix of top-k rankings over larger lists of items by exploiting the structure of the kernel matrix.
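For concreteness, a naive version of the Kendall kernel between two full rankings is sketched below; the thesis targets the top-k variant and exploits kernel-matrix structure for speed, neither of which this illustrative O(n²)-per-entry version does.

```python
# Naive Kendall kernel between two full rankings, given as arrays mapping
# item index -> rank position. Each concordant pair contributes +1 and each
# discordant pair -1; the sum is normalized by the number of pairs.
from itertools import combinations

def kendall_kernel(sigma, tau):
    n = len(sigma)
    score = sum(
        1 if (sigma[i] - sigma[j]) * (tau[i] - tau[j]) > 0 else -1
        for i, j in combinations(range(n), 2)
    )
    return score / (n * (n - 1) / 2)   # normalized to [-1, 1]
```

For example, kendall_kernel([0, 1, 2], [0, 1, 2]) returns 1.0, and reversing one ranking returns -1.0.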

Advisor: Daniel Sheldon

 

Join the Zoom