Link to Talk Video: https://mediaspace.illinois.edu/media/t/1_osp7tass
Abstract: In this talk I will discuss some of my research group's work on understanding adversarial attacks on deep learning models, and approaches to defend against them. I will discuss information-theoretic and perceptual reasoning behind this phenomenon, and a novel way to quantify the adversarial robustness of a model. I will explain the phenomenon of "accidental robustness" -- how a model trained using conventional training methods can nonetheless occasionally be robust to adversarial attacks. I will discuss information-theoretically and cognitively motivated approaches to defending neural network models against such attacks. Finally, I will discuss the relevance of adversarial attacks to speech recognition systems, and at least one method of defending ASR systems against them. The presentation will generally be at a level that is intended to be accessible to a lay audience, inasmuch as an audience of computer scientists and electrical engineers may be considered lay people.
Bio: Dr. Bhiksha Raj is a tenured (full) professor of Computer Science at Carnegie Mellon University. Dr. Raj completed his Ph.D. in Electrical Engineering and Computer Science at Carnegie Mellon University, USA, in 2000. He was at the Compaq (Cambridge) Research Lab until 2001. From 2001 to 2008 he led Speech Research at Mitsubishi Electric Research Labs. Since 2008 he has been a full-time faculty member at Carnegie Mellon. Over his career Dr. Raj has made pioneering contributions to three broad areas of research: Speech and Audio Processing, Privacy and Security in Multimedia Processing, and, lately, Deep Learning and AI. He holds over 30 patents in these areas, is co-editor of three technical books, and has published over 360 research papers in peer-reviewed journals and conferences. His current research spans topics of high contemporary importance, such as exploiting data and structure redundancy for deep learning and AI systems, preserving user privacy in speech and audio processing systems, learning and evaluating classifiers under real-world labeling assumptions, and robustness of AI systems to adversarial attacks. He is a fellow of the IEEE.