National Center for Supercomputing Applications WordPress Master Calendar


NCSA staff who would like to submit an item for the calendar can email newsdesk@ncsa.illinois.edu.

Trustworthy Machine Learning: Robustness, Privacy, Generalization, and their Interconnections

Event Type: Seminar/Symposium
Sponsor: C3.ai Digital Transformation Institute
Date: Sep 8, 2022, 3:00 - 4:00 pm
Speaker: Bo Li, Assistant Professor of Computer Science, University of Illinois at Urbana-Champaign
Registration: Required
Contact: C3.ai Digital Transformation Institute
Originating Calendar: C3.ai DTI Events Calendar

Advances in machine learning have led to the rapid and widespread deployment of learning-based methods in safety-critical applications, such as autonomous driving and medical healthcare. Standard machine learning systems, however, assume that training and test data follow the same or similar distributions, without explicitly considering active adversaries manipulating either distribution. For instance, recent work demonstrates that motivated adversaries can circumvent anomaly detection or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into the training data to induce errors during inference through poisoning attacks. Such distribution shifts can also lead to other trustworthiness issues, such as poor generalization. In this talk, we describe different perspectives on trustworthy machine learning, including robustness, privacy, and generalization, and their underlying interconnections. We focus on a certifiably robust learning approach based on statistical learning with logical reasoning as an example, and then discuss principles for designing and developing practical trustworthy machine learning systems with guarantees, considering these trustworthiness perspectives holistically.
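As a concrete illustration of the evasion attacks mentioned above (not material from the talk itself), the sketch below shows a minimal fast gradient sign method (FGSM) attack: a small, bounded perturbation of a test input that increases a classifier's loss. The `model`, input `x`, label `y`, and `epsilon` are assumed placeholders, and any differentiable PyTorch classifier could stand in for `model`.

```python
# Illustrative sketch only: FGSM-style evasion attack on a differentiable classifier.
# Assumes `model` is a PyTorch module, `x` an input batch in [0, 1], `y` its labels.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Perturb x within an L-infinity ball of radius epsilon to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the sign of the loss gradient, then clamp to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Certified defenses of the kind discussed in the talk aim to guarantee that no perturbation within such an epsilon-ball can change the model's prediction, rather than defending against any single attack heuristically.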

Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. She is the recipient of the IJCAI Computers and Thought Award, the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the MIT Technology Review TR-35 Award, the Dean's Award for Excellence in Research, the C.W. Gear Outstanding Junior Faculty Award, the Intel Rising Star Award, the Symantec Research Labs Fellowship, research awards from tech companies including Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major press outlets, including Nature, Wired, Fortune, and the New York Times.

 
 