
Fast, Scalable, Warm-Start Semidefinite Programming With Application To Knowledge Editing And Interpretability

Friday, 03/22/2024 2:00pm to 4:00pm
Zoom
PhD Dissertation Proposal Defense
Speaker: Rico Angell

While semidefinite programming (SDP) has traditionally been limited to moderate-sized problems, recent algorithms augmented with matrix sketching techniques have enabled solving larger SDPs. However, these methods achieve scalability at the cost of an increased number of iterations, resulting in slower convergence as the problem size grows. Furthermore, they require iteration-dependent parameter schedules that prevent effective use of warm-start initializations, which are important in practical applications with incrementally arriving data or constraints.
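To make the matrix sketching idea concrete: sketched SDP solvers avoid storing the full decision matrix by keeping only its product with a small random test matrix. The sketch below is a generic Nyström-style reconstruction, not the specific technique of any solver mentioned here; all names are hypothetical and the low-rank matrix stands in for an SDP iterate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 10

# A PSD matrix with low rank, standing in for a (near-low-rank) SDP iterate.
G = rng.standard_normal((n, r))
A = G @ G.T

# Nystrom sketch: store only the n x k product Y = A @ Omega instead of A itself.
k = 2 * r  # sketch size: modest oversampling of the target rank
Omega = rng.standard_normal((n, k))
Y = A @ Omega  # the only access to A the sketch requires

# Reconstruct a low-rank approximation: A_hat = Y (Omega^T Y)^+ Y^T.
core = np.linalg.pinv(Omega.T @ Y)
A_hat = Y @ core @ Y.T

rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(rel_err)
```

Because the sketch size k exceeds the true rank r, the reconstruction here is essentially exact; for general matrices the sketch captures the dominant spectrum while using O(nk) rather than O(n^2) memory.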
 
We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast, and scalable algorithm for solving massive SDPs that can leverage a warm-start initialization to further accelerate convergence. Our proposed algorithm is a spectral bundle method for solving general SDPs containing both equality and inequality constraints. Moreover, when augmented with an optional matrix sketching technique, our algorithm achieves the dramatically improved scalability of previous work while sustaining convergence speed. We empirically demonstrate the effectiveness of our method across multiple applications, with and without warm-starting. For example, USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables.
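The value of warm-starting can be illustrated without the machinery of a spectral bundle method. The toy sketch below (all names hypothetical; this is generic alternating projections onto the PSD cone and a trace constraint, not USBS) shows that when a constraint changes slightly, restarting from the previous solution converges in far fewer iterations than a cold start:

```python
import numpy as np

def project_psd(X):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def project_trace(X, t):
    """Project onto the affine constraint trace(X) = t."""
    n = X.shape[0]
    return X + (t - np.trace(X)) / n * np.eye(n)

def solve_feasibility(X0, t, tol=1e-8, max_iter=10_000):
    """Alternating projections: find X >= 0 with trace(X) = t, starting from X0."""
    X = X0
    for it in range(1, max_iter + 1):
        X = project_trace(project_psd(X), t)
        if np.linalg.eigvalsh((X + X.T) / 2).min() > -tol:
            return X, it
    return X, max_iter

rng = np.random.default_rng(0)
n = 30
X0 = rng.standard_normal((n, n))
X0 = (X0 + X0.T) / 2

# Cold start from a random symmetric matrix.
X_cold, iters_cold = solve_feasibility(X0, t=1.0)

# Warm start: perturb the constraint slightly and restart from the previous
# solution, mimicking incrementally arriving data or constraints.
X_warm, iters_warm = solve_feasibility(X_cold, t=1.1)
print(iters_cold, iters_warm)
```

The warm-started run finishes almost immediately because the previous solution is already nearly feasible for the perturbed problem; the cold start must first work its way into the feasible region.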
 
The speed and scalability of USBS enable the use of SDPs in novel applications. First, we present a new paradigm of interactive feedback, existential cluster constraints, for correcting entity resolution predictions, and introduce a novel SDP relaxation at the core of the proposed inference algorithm. We demonstrate empirically that our proposed framework facilitates more accurate entity resolution with dramatically fewer user feedback inputs. We show that USBS is not only faster than the previous state-of-the-art scalable SDP solver, but can also effectively leverage a warm-start initialization to improve empirical convergence. Finally, we propose to decompose neural network representations into interpretable, monosemantic features using USBS as part of a sparse dictionary learning algorithm. The goal of this proposed work is to improve and further enable white-box auditing of AI systems.
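The abstract does not spell out the proposed dictionary learning algorithm, but the core sparse coding subproblem it rests on is standard. The sketch below (all names and data hypothetical; a generic ISTA solve, not the USBS-based method) recovers a sparse combination of dictionary features from a synthetic "activation" vector:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, steps=500):
    """Sparse coding via ISTA: minimize 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)                                    # gradient step
        z = a - g / L
        a = np.sign(z) * np.clip(np.abs(z) - lam / L, 0, None)   # soft-threshold
    return a

rng = np.random.default_rng(0)
d, m = 64, 256  # activation dimension, overcomplete dictionary size

# Synthetic stand-in for neural activations: a sparse mix of dictionary atoms.
D = rng.standard_normal((d, m))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(m)
a_true[rng.choice(m, 3, replace=False)] = rng.uniform(1.0, 2.0, 3)
x = D @ a_true

a_hat = ista_sparse_code(D, x, lam=0.05, steps=2000)
err = np.linalg.norm(x - D @ a_hat)
nnz = int(np.count_nonzero(np.abs(a_hat) > 1e-3))
print(err, nnz)
```

With an incoherent dictionary and a genuinely sparse signal, the recovered code is itself sparse, which is what makes the resulting features candidates for monosemantic interpretation.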

Advisor: Andrew McCallum
