Information Trust Institute (ITI) Calendar

ITI Seminar Series: Yevgeniy Vorobeychik, "Adversarial AI for Social Good"

Event Type
Seminar/Symposium
Sponsor
Information Trust Institute
Location
Coordinated Science Laboratory Studio (1232 CSL)
Date
Sep 14, 2018, 11:00 a.m.
Speaker
Yevgeniy Vorobeychik, Washington University in Saint Louis

ABSTRACT:

AI technologies, such as machine learning, are seeing increasing adoption in adversarial settings. One important domain in which AI techniques are particularly promising is detection; for example, one can, in principle, use data to learn to detect a host of malicious activities, including malware and intrusions. A key challenge in detection is how to trade off the consequences of failing to detect malicious activity against the cost of false alarms, especially when the malicious party makes deliberate attempts to avoid being detected.
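
To make this trade-off concrete, here is a minimal sketch (illustrative only, not material from the talk) that picks the alert threshold minimizing expected cost; the score distributions, costs, and base rate below are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative detector scores: benign activity tends to score low and
# malicious activity high, with overlap. These distributions, the costs,
# and the base rate are assumptions for illustration, not data from the talk.
benign_scores = rng.normal(loc=0.3, scale=0.15, size=10_000)
malicious_scores = rng.normal(loc=0.7, scale=0.15, size=10_000)

C_MISS = 100.0      # assumed cost of failing to detect a malicious event
C_FALSE = 1.0       # assumed cost of a single false alarm
P_MALICIOUS = 0.01  # assumed base rate of malicious activity

def expected_cost(threshold: float) -> float:
    """Expected per-event cost when alerting on any score >= threshold."""
    p_miss = float(np.mean(malicious_scores < threshold))  # missed detections
    p_false = float(np.mean(benign_scores >= threshold))   # false alarms
    return P_MALICIOUS * C_MISS * p_miss + (1 - P_MALICIOUS) * C_FALSE * p_false

thresholds = np.linspace(0.0, 1.0, 201)
best = min(thresholds, key=expected_cost)
print(f"best threshold = {best:.2f}, expected cost = {expected_cost(best):.4f}")
```

Raising the threshold suppresses false alarms at the price of more missed detections; an adaptive adversary complicates the picture further by shifting its score distribution toward the benign one.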
 
While stealthy attacks are always possible, careful design and deployment of detectors can significantly affect both the impact of attacks and their likelihood of success. I will describe our approaches to two classes of detection problems: 1) determining where to place detectors, and 2) designing heterogeneous detectors on networks.

I will then discuss a more fundamental question in adversarial machine learning: model validation. I will describe our framework for validating models of classifier evasion attacks, which appeals to an important dichotomy between problem-space attacks (i.e., attacks that produce actual malicious artifacts) and feature-space attacks (stylized models that directly manipulate classifier features). Using PDF malware detection as a case study, I will demonstrate that common feature-space attacks are, in a nontrivial sense, poor proxies for realistic attacks, and show that this gap can be significantly narrowed by identifying and capturing conserved features: features that remain invariant under real evasion attacks. Finally, I will demonstrate the power of abstraction provided by feature-space attack models by showing that, once conserved features are accounted for, such models yield more general robustness against real evasion attacks.
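
The dichotomy above can also be sketched in code. The example below (a hypothetical linear detector with made-up weights, sample, and conserved-feature set, not the speaker's model or data) runs a greedy feature-space evasion while forbidding changes to conserved features.

```python
import numpy as np

# Hypothetical linear detector: flag as malicious when w . x + b > 0.
# The weights, the sample, and the conserved-feature set are all
# illustrative assumptions, not the speaker's model or data.
w = np.array([2.0, 1.5, 0.8, -0.5, 1.2])
b = -1.0
conserved = {0, 1}  # features a real attack must keep to remain functional

def score(x: np.ndarray) -> float:
    return float(w @ x + b)

def flip(x: np.ndarray, i: int) -> np.ndarray:
    y = x.copy()
    y[i] = 1 - y[i]
    return y

def feature_space_evasion(x: np.ndarray, budget: int = 3) -> np.ndarray:
    """Greedily flip non-conserved binary features to lower the detector score."""
    mutable = [i for i in range(len(x)) if i not in conserved]
    for _ in range(budget):
        if score(x) <= 0:  # already classified benign: evasion succeeded
            break
        i_best = min(mutable, key=lambda i: score(flip(x, i)))
        if score(flip(x, i_best)) >= score(x):
            break  # no remaining flip lowers the score
        x = flip(x, i_best)
    return x

x = np.array([1.0, 1.0, 1.0, 0.0, 1.0])  # binary features of a malicious sample
x_adv = feature_space_evasion(x)
print(f"score before: {score(x):.1f}, after evasion: {score(x_adv):.1f}")
```

In this toy instance the most heavily weighted features are conserved, so the greedy attack cannot push the score below the decision boundary; this is the intuition for why capturing conserved features helps feature-space models confer robustness against real, problem-space attacks.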

BIOGRAPHY:

Yevgeniy Vorobeychik is an Associate Professor of Computer Science & Engineering at Washington University in Saint Louis. Previously, he was an Assistant Professor of Computer Science at Vanderbilt University. Between 2008 and 2010, he was a postdoctoral research associate in the Department of Computer and Information Science at the University of Pennsylvania. He received Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on game-theoretic modeling of security and privacy, adversarial machine learning, algorithmic and behavioral game theory and incentive design, optimization, agent-based modeling, complex systems, network science, and epidemic control. Dr. Vorobeychik received an NSF CAREER Award in 2017 and was invited to give an IJCAI-16 early career spotlight talk. He has also received several best paper awards, including one of the 2017 Best Papers in Health Informatics. He was nominated for the 2008 ACM Doctoral Dissertation Award and received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award.
