Abstract: Neural networks have become a crucial element in modern artificial intelligence. However, they are often black boxes and can behave unexpectedly and produce surprisingly wrong results under malicious inputs. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify trustworthiness properties such as safety and robustness. Unfortunately, the complexity of neural networks has made the task of formally verifying their properties very challenging. To tackle this challenge, I first propose an efficient verification algorithm based on linear relaxations of neural networks, which produces guaranteed output bounds given bounded input perturbations. The algorithm propagates linear inequalities through the network efficiently in a backward manner and can be applied to arbitrary network architectures. To reduce relaxation errors, I develop an efficient optimization procedure that can tighten verification bounds rapidly on machine learning accelerators such as GPUs. Lastly, I discuss how to further empower the verifier with branch and bound by incorporating the additional branching constraints into the bound propagation procedure. The combination of these advanced neural network verification techniques leads to α,β-CROWN (alpha-beta-CROWN), a scalable, powerful, GPU-based neural network verifier that won the 2nd International Verification of Neural Networks Competition (VNN-COMP'21) with the highest total score.
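The backward propagation of linear inequalities described in the abstract can be illustrated with a small, self-contained sketch: for a two-layer ReLU network and a box of input perturbations, each unstable ReLU is replaced by linear lower and upper bounds, and those bounds are substituted backward through the weights to obtain guaranteed lower bounds on the outputs. The NumPy snippet below is only an illustrative simplification under these assumptions, not the α,β-CROWN implementation; the tiny random network and the function names (interval_preact, relu_relaxation, crown_output_lower_bound) are invented for the example.

```python
# Minimal sketch of backward linear bound propagation (CROWN-style) for a
# two-layer ReLU network f(x) = W2 @ relu(W1 @ x + b1) + b2 with x in a box.
# Illustrative only; not the alpha,beta-CROWN code base.
import numpy as np

def interval_preact(W, b, x_l, x_u):
    """Interval bounds on W @ x + b for x in [x_l, x_u]."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ x_l + W_neg @ x_u + b, W_pos @ x_u + W_neg @ x_l + b

def relu_relaxation(l, u):
    """Per-neuron linear relaxation a_l*z + c_l <= relu(z) <= a_u*z + c_u on [l, u]."""
    a_u = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, u / np.maximum(u - l, 1e-12)))
    c_u = np.where((l < 0) & (u > 0), -a_u * l, 0.0)
    # Simple heuristic for the lower slope: pick 0 or 1, whichever relaxes less.
    a_l = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, (u >= -l).astype(float)))
    c_l = np.zeros_like(l)
    return a_l, c_l, a_u, c_u

def crown_output_lower_bound(W1, b1, W2, b2, x_l, x_u):
    """Guaranteed lower bounds on each output of W2 @ relu(W1 @ x + b1) + b2."""
    l1, u1 = interval_preact(W1, b1, x_l, x_u)
    a_l, c_l, a_u, c_u = relu_relaxation(l1, u1)
    # Backward pass: for a lower bound, positive output weights take the lower
    # ReLU relaxation and negative weights take the upper one.
    W2_pos, W2_neg = np.maximum(W2, 0), np.minimum(W2, 0)
    Lam = W2_pos * a_l + W2_neg * a_u           # coefficients w.r.t. pre-activations
    const = W2_pos @ c_l + W2_neg @ c_u + b2    # accumulated intercepts
    # Substitute z = W1 @ x + b1 and concretize over the input box.
    A = Lam @ W1
    const = const + Lam @ b1
    A_pos, A_neg = np.maximum(A, 0), np.minimum(A, 0)
    return A_pos @ x_l + A_neg @ x_u + const

# Tiny example: check empirically that the certified bound holds on the box.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
x_l, x_u = -np.ones(3), np.ones(3)
lb = crown_output_lower_bound(W1, b1, W2, b2, x_l, x_u)
for _ in range(1000):
    x = rng.uniform(x_l, x_u)
    out = W2 @ np.maximum(W1 @ x + b1, 0) + b2
    assert np.all(out >= lb - 1e-9)
print("certified lower bounds:", lb)
```

Roughly speaking, the α and β in α,β-CROWN correspond to optimizing relaxation parameters such as the lower slope that the sketch fixes with a simple heuristic, and to handling the additional branching constraints introduced during branch and bound; both refinements tighten the bounds produced by this basic propagation.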
Biography: Huan Zhang is a postdoctoral researcher at Carnegie Mellon University, supervised by Prof. Zico Kolter. He received his Ph.D. degree in Computer Science at UCLA in 2020, advised by Prof. Cho-Jui Hsieh. Huan's research focuses on the robustness and trustworthiness of artificial intelligence (AI), especially on using formal verification and provable methods to evaluate and enhance the robustness of machine learning models, such as deep neural networks and tree ensembles. Huan has systematically studied robustness in many machine learning scenarios, including reinforcement learning, natural language processing, and image generation. Huan led the development of α,β-CROWN, a toolbox for neural network robustness verification, which won the first prize in the Verification of Neural Networks Competition (VNN-COMP'21). Huan Zhang was awarded an IBM Ph.D. Fellowship during 2018-2020 and the 2021 AdvML Rising Star Award sponsored by the MIT-IBM Watson AI Lab.