Bo Li
Abstract: Advances in machine learning have led to the rapid and widespread deployment of learning-based inference and decision making in safety-critical applications such as autonomous driving and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not account for active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection and other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors at inference time through poisoning attacks. In this talk, I will describe my recent research on security and privacy problems in machine learning systems, with a focus on potentially certifiable defense approaches via logic reasoning and domain-knowledge integration with neural networks. We will also discuss other defense principles for developing practical robust learning systems with robustness guarantees.
Bio:
Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the MIT Technology Review TR-35 award, the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the Intel Rising Star award, the Symantec Research Labs Fellowship, a Rising Star Award, research awards from technology companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, at the intersection of machine learning, security, privacy, and game theory. She has designed several scalable frameworks for robust machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and the New York Times.
Her website is http://boli.cs.illinois.edu/
Kris Hauser
Abstract: Neither classical methods nor learning alone are enough to achieve reliable autonomous robots: classical methods oversimplify the world, and learning generalizes poorly from limited data. Nor can we all follow the path of autonomous vehicle companies burning billions of dollars and millions of hours of engineering manpower on data gathering. So what hope do we have of achieving autonomy in “open worlds,” where the task environment is not known at design time (for example, in human environments, in agriculture, or in space)? I will discuss a vision for the “minimum viable product” of open-world robots, as well as a strategy for attaining near-term viability by leveraging human operator expertise. The talk will cover some of my lab’s recent research on adaptive modeling and planning algorithms, both for autonomous robots and for human operator assistance of semi-autonomous robots. I am also passionate about accelerating the robot system development process with better methods and software tools. Applications of this work include medical, manipulation, and space robots.