Advances in machine learning have led to the rapid and widespread deployment of learning-based methods in safety-critical applications, such as autonomous driving and healthcare. Standard machine learning systems, however, assume that training and test data follow the same or similar distributions, without explicitly considering active adversaries manipulating either distribution. For instance, recent work demonstrates that motivated adversaries can circumvent anomaly detection or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors during inference through poisoning attacks. Such distribution shifts can also lead to other trustworthiness issues, such as poor generalization. In this talk, we describe different perspectives of trustworthy machine learning, such as robustness, privacy, and generalization, and their underlying interconnections. We focus on a certifiably robust learning approach based on statistical learning with logical reasoning as an example, and then discuss principles for designing and developing practical trustworthy machine learning systems with guarantees, considering these trustworthiness perspectives holistically.
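To make the evasion-attack idea concrete, here is a minimal sketch (not from the talk) of a one-step gradient-sign attack on a toy logistic-regression classifier. The model weights, input, and the `fgsm_attack` helper are all illustrative assumptions; the point is only that a small, targeted perturbation of a test input can flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step evasion attack: move x in the direction that increases
    the cross-entropy loss of a logistic-regression model (w, b)."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad)

# Toy linear classifier and a clean input with true label 1 (assumed values)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

clean_pred = sigmoid(w @ x + b)            # confident in class 1
x_adv = fgsm_attack(x, y, w, b, eps=1.0)
adv_pred = sigmoid(w @ x_adv + b)          # pushed toward class 0
```

A poisoning attack is the training-time analogue: instead of perturbing a test input, the adversary injects crafted points into the training set so the learned `w, b` themselves become erroneous.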
Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. She is the recipient of the IJCAI Computers and Thought Award, the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the MIT Technology Review TR-35 Award, the Dean's Award for Excellence in Research, the C.W. Gear Outstanding Junior Faculty Award, the Intel Rising Star Award, the Symantec Research Labs Fellowship, research awards from tech companies including Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, spanning security, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major press outlets, including Nature, Wired, Fortune, and the New York Times.