IQUIST Master Calendar


Vision Seminar: Zsolt Kira, "Exemplar-Free Continual Learning"

Event Type: Seminar/Symposium
Sponsor: Svetlana Lazebnik
Location: Virtual
Date: Jan 25, 2022, 10:00 am
Originating Calendar: Computer Science Speakers Calendar

Zoom: https://illinois.zoom.us/j/86238233298?pwd=Y29EWXRPOWtiZ09DczRYMXJZK3JRUT09

Meeting ID: 862 3823 3298

Password: 684009

 

Title: Exemplar-Free Continual Learning

 

Abstract: While deep learning has achieved tremendous success across a range of modalities, significant limitations still prevent its use in real-world settings, for example where new tasks must be learned without catastrophic forgetting of old ones. Solutions to this problem, often called lifelong or continual learning, are dominated by data replay, in which exemplars from old tasks are stored to prevent forgetting. This approach has several downsides, however, particularly in low-resource settings or domains with privacy concerns. In this talk, I will describe some of our recent work on moving beyond data replay using two key ideas: 1) generation of data using only the discriminative model (as opposed to cumbersome generative models) and 2) using the world as its own replay buffer to leverage the abundant unlabeled data that an embodied agent might encounter. We introduce several novel components, including an incremental distillation strategy that uses “dreamed” replay data and the use of semi-supervised learning and out-of-distribution detection to leverage unlabeled data. Finally, I will discuss future directions for more realistic continual learning settings with various types of distribution shift, as well as handling more realistic embodied streams of data.
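
For context on the first idea (replay without stored exemplars), below is a minimal, purely illustrative PyTorch sketch of the general recipe: optimize random noise until a frozen copy of the previous model classifies it confidently ("dreamed" data), then distill that model's predictions into the new model while training on the new task. This is not the speaker's actual method; all names (SmallNet, synthesize_dreamed_batch, train_step), shapes, and hyperparameters are hypothetical.

# Illustrative sketch only -- not the speaker's method. Hypothetical names throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Tiny classifier standing in for whatever backbone is actually used."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

def synthesize_dreamed_batch(old_model, num_old_classes, batch_size=32, steps=200, lr=0.05):
    """Invert the frozen old model: optimize noise images until the old model
    assigns them to randomly chosen old classes with high confidence."""
    old_model.eval()
    for p in old_model.parameters():
        p.requires_grad_(False)  # only the synthetic inputs are optimized
    x = torch.randn(batch_size, 3, 32, 32, requires_grad=True)
    targets = torch.randint(0, num_old_classes, (batch_size,))
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(old_model(x), targets).backward()
        opt.step()
    return x.detach()

def distillation_loss(new_logits, old_logits, T=2.0):
    """Match the new model's predictions on dreamed inputs to the old model's."""
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * T * T

def train_step(new_model, old_model, x_new, y_new, num_old_classes, optimizer, alpha=1.0):
    """One update: cross-entropy on the new task plus distillation on dreamed data.
    Assumes the old classes occupy the first num_old_classes outputs of new_model."""
    x_dream = synthesize_dreamed_batch(old_model, num_old_classes)
    with torch.no_grad():
        old_logits = old_model(x_dream)
    optimizer.zero_grad()
    ce = F.cross_entropy(new_model(x_new), y_new)                                 # learn the new task
    kd = distillation_loss(new_model(x_dream)[:, :num_old_classes], old_logits)   # resist forgetting
    (ce + alpha * kd).backward()
    optimizer.step()

For example, old_model = SmallNet(5) and new_model = SmallNet(8) would correspond to five previously learned classes plus three new ones, with new_model initialized from old_model's weights before calling train_step.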

 

Bio: Zsolt Kira is an Assistant Professor at the Georgia Institute of Technology and Associate Director of Georgia Tech’s Machine Learning Center. His work lies at the intersection of machine learning and artificial intelligence for sensor processing, perception, and robotics. Current projects and interests relate to moving beyond current limitations of supervised machine learning to tackle un/self-/semi-supervised methods, out-of-distribution detection, model calibration, learning under imbalance, continual/lifelong learning, and adaptation. Prof. Kira has grown a portfolio of projects funded by NSF, ONR, DARPA, and the IC community, has over 45 publications in top venues, and has received several best paper/student paper awards.
