
CAII Fall Seminar Series: "Secure Learning in Adversarial Environment" by Assistant Professor Bo Li

Event Type
Seminar/Symposium
Sponsor
Center for Artificial Intelligence Innovation
Date
Oct 18, 2021, 11:00 am - 12:00 pm
Speaker
Bo Li, Computer Science
Originating Calendar
Center for Artificial Intelligence Innovation

Dr. Bo Li, Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign, will give a presentation in the CAII Seminar Series on Monday, October 18, at 11:00 a.m. The talk is titled “Secure Learning in Adversarial Environment.”

View the seminar here: https://go.ncsa.illinois.edu/2021CAIIFallSeminarSeries

Abstract: Advances in machine learning have led to rapid and widespread deployment of learning-based inference and decision making for safety-critical applications, such as autonomous driving and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors at inference time through poisoning attacks. In this talk, I will describe my recent research on security and privacy problems in machine learning systems. In particular, I will introduce several adversarial attacks in different domains and discuss potential defensive approaches and principles, including game-theoretic and knowledge-enabled robust learning paradigms, toward developing practical robust learning systems with robustness guarantees.
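As a rough illustration of the evasion attacks mentioned in the abstract (not material from the talk itself), the sketch below applies a fast-gradient-sign-style perturbation to the input of a toy logistic-regression model; the model weights, data, and epsilon value are all assumptions invented for this example.

```python
# Illustrative FGSM-style evasion attack on a toy logistic-regression model.
# The "trained" weights, the input, and epsilon are made-up assumptions,
# not taken from the seminar.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: weights and bias of a logistic-regression classifier.
w = rng.normal(size=20)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Predicted probability of class 1.
    return sigmoid(x @ w + b)

# A sample input and its assumed true label.
x = rng.normal(size=20)
y = 1.0

# Gradient of the cross-entropy loss with respect to the *input* x.
# For logistic regression this is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# Evasion step: move the input by epsilon in the sign of the gradient,
# which increases the loss and pushes the prediction away from label y.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```

A poisoning attack differs in that the adversary perturbs training instances before the model is fit, rather than perturbing a test input as above.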

Speaker Bio: 

I am an Assistant Professor in the Computer Science Department at the University of Illinois at Urbana-Champaign. My research focuses on machine learning, security, privacy, and game theory. Specifically, much of our work aims at exploring the vulnerabilities of machine learning systems to various adversarial attacks, and endeavors to develop real-world robust learning systems.

All presentations will be recorded and will be available on the CAII website shortly after the presentation.
