Image: A headshot of Andrew Lee

Speaker

Andrew Lee (Harvard)

Abstract

Despite the central role of attention heads in Transformers, we lack tools to understand why a model attends to a particular token. To address this, we study the query-key (QK) space, the bilinear joint embedding space between queries and keys. We present a contrastive covariance method that decomposes the QK space into low-rank, human-interpretable components; high attention scores arise precisely when features in queries and keys align within these low-rank subspaces. We first study our method both analytically and empirically in a simplified setting. We then apply it to large language models to identify human-interpretable QK subspaces for categorical semantic features and binding features. Finally, we demonstrate how attention scores can be attributed to our identified features.
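
As a rough illustration of the kind of analysis the abstract describes (a minimal sketch under assumptions, not the speaker's actual method), the snippet below treats the pre-softmax attention score as a bilinear form q^T W_Q^T W_K k, estimates a contrastive covariance difference between keys where a feature is present versus absent, and projects keys onto its top eigenvectors to attribute part of the score to that low-rank subspace. All matrices, data, and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, rank = 64, 16, 4

# Hypothetical query/key projections of a single attention head.
W_Q = rng.normal(size=(d_head, d_model))
W_K = rng.normal(size=(d_head, d_model))
M = W_Q.T @ W_K  # bilinear form on the QK space: score(q, k) = q @ M @ k

def attn_logit(q, k):
    """Pre-softmax attention score of query q against key k."""
    return q @ M @ k

# Contrastive covariance: compare key statistics on synthetic examples
# where a feature of interest is present versus absent.
keys_pos = rng.normal(size=(500, d_model)) + 0.5   # feature present
keys_neg = rng.normal(size=(500, d_model))         # feature absent
cov_diff = np.cov(keys_pos, rowvar=False) - np.cov(keys_neg, rowvar=False)

# Top eigenvectors of the covariance difference span a candidate
# low-rank, feature-aligned subspace.
eigvals, eigvecs = np.linalg.eigh(cov_diff)
subspace = eigvecs[:, -rank:]          # (d_model, rank), largest eigenvalues
P = subspace @ subspace.T              # orthogonal projector onto the subspace

# Attribute an attention score to the subspace by projecting the key:
q, k = rng.normal(size=d_model), rng.normal(size=d_model)
full = attn_logit(q, k)
within = attn_logit(q, P @ k)          # contribution routed through the subspace
print(f"full logit {full:.3f}, subspace contribution {within:.3f}")
```

Because the projector splits k into P @ k and (I - P) @ k, the full logit decomposes exactly into a within-subspace term and a residual, which is one simple way to read off how much of an attention score a given feature subspace accounts for.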

Speaker Bio

Andrew Lee is a postdoctoral fellow at Harvard, hosted by Martin Wattenberg and Fernanda Viégas. He completed his PhD in computer science at the University of Michigan. His research interests center on model interpretability: understanding feature representations and the roles they play.