National Center for Supercomputing Applications WordPress Master Calendar

Improved Adversarial Attacks and Certified Defenses via Nonconvex Relaxations

Sponsor: Digital Transformation Institute
Date: Dec 1, 2022, 3:00-4:00 pm
Speaker: Richard Y. Zhang, Assistant Professor of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Contact: Digital Transformation Institute
Originating Calendar: NCSA External Events Feed

After training a machine learning model to be resilient to adversarial attacks, one often desires a mathematical proof, or "certification," that the model is rigorously robust against further attacks. Typically, certification is performed by relaxing the nonconvex set of all possible attacks into a larger convex set, some (or almost all) of whose elements may not be physically realizable. Such certifications are therefore often extremely conservative, because they must guard against fictitious attacks that can never occur in practice. The result is a large "convex relaxation barrier" between our ability to train ostensibly resilient models and our ability to certify them as rigorously robust. In this talk, we discuss nonconvex relaxations of the adversarial attack and certification problem. We argue that nonconvex relaxations conform much better to the set of physically realizable attacks, which is itself naturally nonconvex. Our nonconvex relaxations are inspired by recent work on the Burer-Monteiro factorization for optimization on Riemannian manifolds. Our results show that the nonconvex relaxation can almost fully close the "convex relaxation barrier" that stymies the existing state of the art. For safety-critical applications, the technique promises to guarantee that a model trained today will not become a security concern in the future: it will resist all attacks, including previously unknown ones, even if the attacker is given full knowledge of the model.
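The "convex relaxation barrier" described above can be illustrated with a toy sketch. The example below is not the speaker's method: it uses interval bound propagation, a simple representative convex relaxation, to certify a worst-case output bound for a small ReLU network over an l-infinity attack ball, and compares it against the (much tighter) worst case found by grid search. The network weights and sizes are arbitrary choices for illustration.

```python
import numpy as np

# Tiny two-layer ReLU network: x -> W2 @ relu(W1 @ x + b1).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 2))
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((1, 4))

def f(x):
    return (W2 @ np.maximum(W1 @ x + b1, 0.0)).item()

x0 = np.array([0.5, -0.3])  # nominal input
eps = 0.1                   # l-infinity attack radius

# "True" worst-case output over the attack ball, estimated by grid search
# (a stand-in for the nonconvex set of physically realizable attacks).
grid = np.linspace(-eps, eps, 101)
exact = max(f(x0 + np.array([dx, dy])) for dx in grid for dy in grid)

# Convex relaxation via interval bound propagation: push elementwise
# lower/upper bounds through each layer, losing correlations between inputs.
lo, hi = x0 - eps, x0 + eps
mid, rad = (lo + hi) / 2, (hi - lo) / 2
z_mid = W1 @ mid + b1
z_rad = np.abs(W1) @ rad            # worst-case spread of pre-activations
a_lo = np.maximum(z_mid - z_rad, 0.0)  # ReLU is monotone, so bound endpoints
a_hi = np.maximum(z_mid + z_rad, 0.0)
a_mid, a_rad = (a_lo + a_hi) / 2, (a_hi - a_lo) / 2
certified = (W2 @ a_mid + np.abs(W2) @ a_rad).item()

# The certified bound is sound (it dominates every realizable attack),
# but conservative: the gap to `exact` is the relaxation barrier.
assert certified >= exact
```

The gap between `certified` and `exact` is exactly the conservatism the abstract refers to: the convex relaxation must account for attack patterns that no single perturbation in the ball can actually realize.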

Richard Y. Zhang is an assistant professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He received the S.M. and Ph.D. degrees in EECS from MIT in 2012 and 2017, respectively, and was a postdoc at UC Berkeley. His research is in optimization and machine learning, with a particular focus on building structure-exploiting algorithms that solve real-world problems with provable guarantees on quality, speed, and safety. He received an NSF CAREER Award in 2021.
