
Trustworthy AI via Formal Verification and Adversarial Testing

Monday, 03/06/2023 4:00pm
Computer Science Building, Room 150/151
Seminar
Speaker: Huan Zhang

Title: Trustworthy AI via Formal Verification and Adversarial Testing

Abstract: To apply deep learning to safety-critical tasks, we need to formally verify the trustworthiness of deep neural networks (DNNs), ensuring properties like safety, security, robustness, and correctness. Unfortunately, modern DNNs are largely "black boxes", and existing tools lack the ability to formally reason about them. In this talk, I will present a new framework for trustworthy AI that relies on novel methods for the formal verification and adversarial testing of DNNs.

In particular, I will first describe a novel framework, "linear bound propagation," that enables efficient formal verification of deep neural networks. The framework exploits the structure of this NP-hard verification problem to solve it efficiently, achieving a three-orders-of-magnitude speedup over traditional verification tools. This work led to the open-source α,β-CROWN verifier, winner of the 2021 and 2022 International Verification of Neural Networks Competitions (VNN-COMP), with applications in image classification, image segmentation, reinforcement learning, and computer systems. Beyond verification, I will discuss the complementary problem of disproving the robustness and safety of AI-based systems through adversarial testing, including black-box adversarial attacks on DNNs and theoretically principled attacks on deep reinforcement learning. Finally, I will conclude by discussing opportunities for verifying AI models as building blocks in complex systems, and the potential to address hard engineering problems using the bound propagation framework.
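To give a rough sense of the bound-propagation idea, the sketch below implements interval bound propagation, the simplest member of this family; it is an illustration only, not the speaker's α,β-CROWN algorithm, which propagates linear lower and upper bounds and is far tighter. The network weights, input point x0, and radius eps are made-up values: given a box of possible inputs, each layer's output range is over-approximated, so a property such as "the output never exceeds a threshold" can be certified for every input in the box at once.

```python
import numpy as np

def affine_bounds(W, b, l, u):
    """Exact output bounds of W @ x + b over the box l <= x <= u."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    lower = W_pos @ l + W_neg @ u + b
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

def relu_bounds(l, u):
    """Bounds after an elementwise ReLU (monotone, so apply to endpoints)."""
    return np.maximum(l, 0), np.maximum(u, 0)

# Hypothetical tiny 2-layer ReLU network with random weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

# Certify the output range for all inputs within an L_inf ball
# of radius eps around x0 (both values are illustrative).
x0, eps = np.array([0.5, -0.2, 0.1]), 0.1
l, u = x0 - eps, x0 + eps
l, u = affine_bounds(W1, b1, l, u)
l, u = relu_bounds(l, u)
l, u = affine_bounds(W2, b2, l, u)
print(f"certified output range: [{l[0]:.3f}, {u[0]:.3f}]")
```

Because every layer's range is over-approximated, the certified interval is sound but loose; tighter methods like CROWN keep linear functions of the input instead of plain intervals, which is what makes verification of realistic networks tractable.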

Bio: Huan Zhang is a postdoctoral researcher at Carnegie Mellon University, working with Prof. Zico Kolter. He obtained his Ph.D. in Computer Science at UCLA in 2020, advised by Prof. Cho-Jui Hsieh. Huan's research aims to build trustworthy AI systems that can be safely and reliably used in mission-critical tasks, with a focus on using formal verification techniques to give provable performance guarantees on machine learning systems. He leads a multi-institutional team developing the α,β-CROWN neural network verifier, which won VNN-COMP 2021 and VNN-COMP 2022. He has received several awards, including an IBM Ph.D. fellowship, the 2021 Adversarial Machine Learning Rising Star Award, and a Schmidt Futures AI2050 Early Career Fellowship.

 

Faculty host: