Machine Learning and Friends Lunch: Insights from Deep Representations

Tuesday, 04/10/2018, 12:00pm to 1:00pm
Computer Science Building Room 150/151
Machine Learning and Friends Lunch
Speaker: Maithra Raghu

To continue the successes of deep learning, it becomes increasingly important to better understand the phenomena exhibited by these models, ideally through a combination of systematic experiments and theory. Central to this challenge is a better understanding of deep representations. In this talk I discuss some of our work addressing questions in this space. I give an overview of our development of measures of neural network expressivity, and empirically quantify the effect of network depth and width on the latent representations. I then describe Singular Vector Canonical Correlation Analysis (SVCCA), an adaptation of Canonical Correlation Analysis, as a tool to directly compare latent representations across layers, training steps, and even different networks. The results reveal differences in per-layer convergence and also help identify the parts of the representation critical to the task. Finally, I introduce a new testbed of environments for Deep Reinforcement Learning that lets us study different RL algorithms in single-agent, multi-agent, and self-play settings, and evaluate generalization in a systematic way.
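The SVCCA comparison described above can be sketched in a few lines: given two activation matrices (neurons x datapoints), keep the top singular-vector directions of each, then compute the canonical correlations between the resulting subspaces. This is only an illustrative sketch; the function name, the 0.99 variance threshold, and the random test data are assumptions for demonstration, not the authors' reference implementation.

```python
# Illustrative sketch of SVCCA: SVD-reduce each layer's activations,
# then compute CCA correlations between the reduced subspaces.
# (Hypothetical helper, not the authors' released code.)
import numpy as np

def svcca(acts1, acts2, var_threshold=0.99):
    """acts1, acts2: (neurons, datapoints) activation matrices.
    Returns canonical correlations between the two representations."""
    def reduce(acts):
        # Center each neuron over datapoints, then keep the top
        # singular directions explaining var_threshold of the variance.
        acts = acts - acts.mean(axis=1, keepdims=True)
        _, s, Vt = np.linalg.svd(acts, full_matrices=False)
        k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), var_threshold) + 1
        return Vt[:k]  # orthonormal rows, shape (k, datapoints)

    X, Y = reduce(acts1), reduce(acts2)
    # With orthonormal bases, the singular values of the cross-product
    # are exactly the canonical correlations between the subspaces.
    corrs = np.linalg.svd(X @ Y.T, compute_uv=False)
    return np.clip(corrs, 0.0, 1.0)

# Sanity check: a layer compared with itself is perfectly correlated.
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 500))
print(svcca(a, a).min())  # close to 1.0
```

Comparing `svcca(layer_i, layer_j)` across training steps or across two separately trained networks is the use case the abstract describes: high correlations indicate that the two representations have converged to similar subspaces.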

Maithra Raghu is a PhD student at Cornell, working with Jon Kleinberg, and a research resident at Google Brain. Her research interests are broadly in developing a better understanding of latent representations learned by deep neural networks, and using these insights to help guide new improvements.