UMass AI&Sec Fall '25 Seminar: Kathrin Grosse, From Practical Machine Learning Security to AI Security Incident Reporting
Content
Speaker: Kathrin Grosse (IBM Research)
Abstract: Cybersecurity ensures the trustworthy and reliable functioning of digital systems; companies currently spend roughly 10% of their IT budgets on it. Security is therefore increasingly relevant for emerging technologies such as artificial intelligence (AI). Despite a large body of academic research, our current understanding of AI security has a critical gap: it does not cover how companies, public institutions, and non-profits actually use AI. This gap manifests in studies of isolated models rather than pipelines, infeasible perturbations, and unrealistic assumptions, leaving us with a limited understanding of AI vulnerabilities. Meanwhile, attackers are not waiting; they are already exploiting these vulnerabilities, and we review the evidence of real-world AI security incidents. We then present a proposal for an AI security incident reporting framework to build a practical understanding of AI security threats, taking a step toward trustworthy and secure AI.
Bio: Kathrin Grosse is a Research Scientist at IBM Research, Zurich, Switzerland. Her research focuses on AI security in industry, bridging academic research in AI security with industry needs. She received both her master’s degree and her Ph.D. (2021) from Saarland University, where she was supervised by Michael Backes at the CISPA Helmholtz Center. She then completed postdocs with Battista Biggio in Cagliari, Italy, and with Alexandre Alahi at EPFL, Switzerland. She interned with IBM in 2019 and Disney Research in 2020/21. She serves as a reviewer for, among others, IEEE S&P, USENIX Security, and ICML, and organizes workshops at NeurIPS and ICML. In 2019, she was nominated as an AI Newcomer for the German Federal Ministry of Education and Research’s Science Year.
Host: UMass AI Security