Abstract: Applying deep learning to safety-critical tasks requires formally verifying the trustworthiness of the underlying models, ensuring properties such as safety, security, robustness, and correctness. Unfortunately, modern deep neural networks (DNNs) are largely “black boxes”, and existing tools struggle to reason about them formally. My talk presents a new framework for building trustworthy AI, based on novel methods for the formal verification of DNNs. In particular, I will introduce “linear bound propagation methods”, which enable efficient formal verification of deep neural networks. Although the verification problem can be cast as a large-scale mixed-integer optimization problem, traditional solvers cannot scale to realistic neural networks. By exploiting the structure of this problem, my framework achieves up to three orders of magnitude speedup over traditional verification algorithms. My work led to the open-source α,β-CROWN verifier, winner of the International Verification of Neural Networks Competition (VNN-COMP) in 2021, 2022, and 2023, with applications in computer vision, computer systems, and non-linear control. Finally, I will conclude with an outlook on enabling trustworthy AI components in complex systems and engineering applications, paving the way for a safer, more reliable AI-powered future.
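To make the bound-propagation idea concrete, here is a minimal illustrative sketch (not the α,β-CROWN implementation; the toy network and its weights are invented for this example). It uses interval bound propagation, the simplest member of the bound-propagation family: element-wise input bounds are pushed layer by layer through a small ReLU network, yielding sound outer bounds on the output without enumerating inputs or calling an integer-programming solver.

```python
# Illustrative sketch only: interval bound propagation through a toy
# 2-layer ReLU network. Weights and shapes are invented for the example.
import numpy as np

def interval_linear(lb, ub, W, b):
    """Propagate element-wise bounds [lb, ub] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)  # positive entries pair with matching bound
    W_neg = np.minimum(W, 0.0)  # negative entries flip lower/upper
    new_lb = W_pos @ lb + W_neg @ ub + b
    new_ub = W_pos @ ub + W_neg @ lb + b
    return new_lb, new_ub

def interval_relu(lb, ub):
    """ReLU is monotone, so bounds pass through it directly."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Toy network f(x) = W2 @ relu(W1 @ x + b1) + b2
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.zeros(1)

# Certify the output for EVERY input in the box [x0 - eps, x0 + eps].
x0, eps = np.array([0.5, 0.5]), 0.1
lb, ub = x0 - eps, x0 + eps
lb, ub = interval_linear(lb, ub, W1, b1)
lb, ub = interval_relu(lb, ub)
lb, ub = interval_linear(lb, ub, W2, b2)
print(lb, ub)  # sound lower/upper bounds on the network output
```

These interval bounds are sound but loose; linear bound propagation methods such as CROWN tighten them by carrying linear (rather than constant) lower and upper relaxations of each neuron through the network, which is what makes verification scale to realistic models.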
Bio: Huan Zhang is an assistant professor in the Department of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign (UIUC). Huan’s work aims to build trustworthy AI systems that can be safely and reliably used in mission-critical tasks, with a focus on using formal verification techniques to give provable performance guarantees for machine learning. He leads a multi-institutional team developing the α,β-CROWN neural network verifier, which won VNN-COMP 2021, 2022, and 2023. He has received several awards, including an IBM Ph.D. fellowship, the 2021 Adversarial Machine Learning Rising Star Award, and a Schmidt Futures AI2050 Early Career Fellowship.