NCSA staff who would like to submit an item for the calendar can email newsdesk@ncsa.illinois.edu.
Abstract: In this talk, I will introduce my three ICLR 2021 works on 1) backdoor defense, 2) adversarial defense, and 3) data protection, respectively. The first work explores a neural attention distillation approach to erase backdoors from deep neural networks (DNNs). The second work reveals the magnitude and frequency characteristics of adversarially robust activations at the intermediate layers of DNNs and introduces a channel-wise activation suppressing (CAS) technique to robustify DNNs. The third work proposes a type of error-minimizing noise that fools DNNs into believing there is nothing to learn from the training data, rendering the data "unlearnable" for the purpose of data protection. I will share some insights into unlearnable examples, their current limitations, and the opportunities ahead.
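To give a concrete sense of the error-minimizing noise idea behind unlearnable examples, the sketch below shows a minimal PyTorch-style inner loop, assuming a trained (or partially trained) classifier `model` and a batch of `images` and `labels`; the function name and hyperparameter values are illustrative, not the paper's reference implementation.

import torch
import torch.nn.functional as F

def error_minimizing_noise(model, images, labels, epsilon=8/255, alpha=2/255, steps=20):
    """Illustrative sketch: find per-sample noise delta that MINIMIZES the
    training loss, so perturbed examples look 'already learned' and carry
    little usable training signal. Names and defaults are assumptions."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        # Keep perturbed inputs in the valid pixel range.
        x_perturbed = (images + delta).clamp(0, 1)
        loss = F.cross_entropy(model(x_perturbed), labels)
        grad, = torch.autograd.grad(loss, delta)
        # Descend the loss (note the minus sign: the opposite direction
        # of adversarial PGD, which ascends the loss).
        delta = (delta - alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

The key design point is the sign of the update: adversarial attacks maximize the loss to cause errors, whereas error-minimizing noise minimizes it, which is what makes the data appear to have "nothing left to learn."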
Bio: Dr. Xingjun (Daniel) Ma is an assistant professor in the School of Information Technology at Deakin University and an honorary fellow at The University of Melbourne. He obtained his Ph.D. degree in machine learning from The University of Melbourne, where he also worked as a postdoctoral research fellow for a year and a half. His research interests include adversarial machine learning, weakly supervised learning, AI security, and data privacy. He has published 20+ works at top-tier conferences such as ICML, ICLR, CVPR, ICCV, ECCV, AAAI, and IJCAI. These works have made substantial impacts in the machine learning community through either theoretical contributions or new SOTA results. His work on "unlearnable examples" in 2021 was recently featured by MIT Technology Review. He also serves as a PC/SPC member or reviewer for a number of leading machine learning conferences and journals.