Events

National Center for Supercomputing Applications master calendar


NCSA staff who would like to submit an item for the calendar can email newsdesk@ncsa.illinois.edu.

C3.ai Digital Transformation Institute Colloquium on Digital Transformation Science Webinar

Event Type
Lecture
Sponsor
C3.ai Digital Transformation Institute
Virtual
Date
May 13, 2021, 3:00 p.m.
Speaker
Stefano Soatto, Vice President of Applied Science, Amazon Web Services and Professor of Computer Science, UCLA
Registration
Originating Calendar
NCSA-related events

Attend the C3.ai Digital Transformation Institute Colloquium on Digital Transformation Science Webinar on Thursday, May 13, at 3:00 p.m. U.S. Central time. Stefano Soatto of Amazon Web Services and UCLA will present "Graceful AI: Backward-Compatibility, Positive-Congruent Training, and the Search for Desirable Behavior of Deep Neural Networks."

Registration is required to attend this event.

Abstract: As machine-learning-based decision systems improve rapidly, we are discovering that it is no longer enough for them to perform well on their own; they should also behave gracefully toward their predecessors and peers. More nuanced demands beyond accuracy now drive the learning process, including robustness, explainability, transparency, fairness, and, most recently, compatibility and regression minimization. We call this "Graceful AI," because in 2021, when we replace an old trained classifier with a new one, we should expect a peaceful transfer of decision powers.

Today, a new model can introduce errors that the old model did not make, despite significantly improving average performance. Such "regression" can break post-processing pipelines, or cause the need to reprocess large amounts of data. How can we train machine learning models to not only minimize the average error, but also minimize "regression"? Can we design and train new learning-based models in a manner that is compatible with previous ones, so that it is not necessary to re-process any data?
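The "regression" described above is often measured as the rate of negative flips: samples the old model classified correctly but the new model gets wrong. As a minimal illustrative sketch (the function name and toy data are ours, not the speaker's), it can be computed like this:

```python
import numpy as np

def negative_flip_rate(y_true, old_pred, new_pred):
    """Fraction of samples the old model got right but the new model gets wrong.

    Each negative flip is a "regression": an error introduced by the
    upgrade that was not present before, even if average accuracy improves.
    """
    y_true = np.asarray(y_true)
    old_correct = np.asarray(old_pred) == y_true
    new_wrong = np.asarray(new_pred) != y_true
    return float(np.mean(old_correct & new_wrong))

# Toy example: the new model is more accurate overall (4/5 vs. 3/5)
# yet still regresses on the first sample, which the old model handled.
y_true   = [0, 1, 1, 0, 1]
old_pred = [0, 1, 1, 1, 0]   # 3/5 correct
new_pred = [1, 1, 1, 0, 1]   # 4/5 correct, but flips sample 0
print(negative_flip_rate(y_true, old_pred, new_pred))  # → 0.2
```

Minimizing this quantity alongside average error is exactly the "regression constrained" objective the abstract alludes to.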

These problems are prototypical of the nascent field of cross-model compatibility in representation learning. I will describe the first approach to Backward-Compatible Training (BCT), introduced at the last Conference on Computer Vision and Pattern Recognition (CVPR), and an initial solution to the problem of Positive-Congruent Training (PC-Training), a first step towards "regression constrained" learning, to appear at the next CVPR. Along the way, I will also introduce methodological innovations that enable full-network fine-tuning by solving a linear-quadratic optimization. Such Linear-Quadratic Fine-Tuning (LQF, also to appear at the next CVPR) achieves performance equivalent to non-linear fine-tuning, and superior in the low-data regime, while allowing easy incorporation of convex constraints.
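The backward-compatibility idea behind BCT can be framed as a simple empirical test (a hypothetical sketch under our own naming, not the paper's code): a new embedding model is backward-compatible if queries embedded with the new model, matched against a gallery indexed with the old model, retrieve at least as accurately as old-model queries did, so the gallery never needs re-embedding:

```python
import numpy as np

def retrieval_accuracy(query_emb, query_labels, gallery_emb, gallery_labels):
    """Top-1 retrieval accuracy under cosine similarity (all inputs np arrays)."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    nearest = (q @ g.T).argmax(axis=1)          # index of closest gallery item
    return float(np.mean(gallery_labels[nearest] == query_labels))

def is_backward_compatible(new_q, old_q, labels, old_gallery, gallery_labels):
    """BCT-style criterion: new-model queries against the old-model gallery
    must perform at least as well as old-model queries did, so existing
    gallery features need not be re-processed."""
    cross = retrieval_accuracy(new_q, labels, old_gallery, gallery_labels)
    baseline = retrieval_accuracy(old_q, labels, old_gallery, gallery_labels)
    return cross >= baseline

# Toy check with two classes embedded near the coordinate axes.
old_gallery = np.array([[1.0, 0.0], [0.0, 1.0]])
gallery_labels = np.array([0, 1])
old_q = np.array([[0.9, 0.1], [0.1, 0.9]])
new_q = np.array([[0.8, 0.2], [0.2, 0.8]])
labels = np.array([0, 1])
print(is_backward_compatible(new_q, old_q, labels, old_gallery, gallery_labels))
```

This is only the evaluation criterion; the training methods in the talk (BCT, PC-Training, LQF) concern how to optimize a new model so that such a test passes.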
