Abstract: Being robust to outliers or adversarial corruptions is of paramount importance when we fuel AI with big data, especially for safety-critical applications. Moreover, training machine learning models in a centralized fashion on massive data often faces significant challenges due to resource, regulatory, and privacy concerns in real-world use cases. Distributed learning (e.g., Federated Learning) is a natural trend in machine learning that can mitigate these challenges by allowing agents to collaboratively learn and/or perform inference without sharing the raw data. Despite these advantages, new security challenges arise from the limited visibility of the training/testing data and from distrustful agents. In this talk, I will start with several fundamental problems, including certifiable Robust Linear Regression, Robust PCA, and High-dimensional Robust Mean Estimation. I will then present my recent work on Robust Distributed Learning & Inference. I will conclude the talk with future directions in efficient and trustworthy Artificial Intelligence of Things (AIoT).
Bio: Jing Liu is an 'Illinois Future Faculty' fellow in the CS department at UIUC, working with Sanmi Koyejo and Bo Li. Before that, he was a postdoc at the Coordinated Science Laboratory at UIUC, hosted by Venu Veeravalli. He obtained his Ph.D. from UCSD, advised by Bhaskar Rao. He received the first prize of the Beijing Science & Technology Award in 2013 for applications of his research in China. He also received the Shannon Graduate Fellowship nomination award and the Frontiers of Innovation Fellowship at UCSD, the Guanghua Fellowship at Tsinghua University, National Fellowships of China, as well as a Silver Medal and the Young Mentor award at Beijing Institute of Technology. He was also the mentor of a 'Best Capstone Project' at UCSD. His research interests include Data Science, the Internet of Things (IoT), and Distributed Learning & Inference.