A central problem in machine learning is the following: how should we train models on data generated from a collection of heterogeneous tasks/environments if we know that these models will be deployed in a new and unseen environment? In the few-shot learning setting, a prominent approach is to develop a modeling framework that is “primed” to adapt, such as Model-Agnostic Meta-Learning (MAML), and then fine-tune the model for the deployment environment. We study this approach in the multi-task linear representation setting. We show that the reason the models generalize to new environments is that the training dynamics induce the models to evolve toward the common data representation shared among the various tasks. The structure of the bi-level update at each iteration (an inner and an outer update with MAML) holds the key: the diversity among client data distributions is exploited via the inner/local updates. This is the first result that formally establishes this representation learning and derives exponentially fast convergence to the ground-truth representation. The talk concludes by making a connection between MAML and Federated Averaging (FedAvg) in the context of personalized federated learning, where the local and global updates of FedAvg exhibit the same representation learning. This is based on joint work with Liam Collins, Aryan Mokhtari, and Sanjay Shakkottai.
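To make the inner/outer structure concrete, here is a minimal NumPy sketch of multi-task linear representation learning with a bi-level update. This is an illustrative toy, not the talk's algorithm or analysis: the dimensions, step size, and data model are assumptions, and the task-specific inner step is solved exactly by least squares for compactness (in MAML it would be one or a few local gradient steps on the head).

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks, n = 10, 2, 20, 50  # ambient dim, representation dim, tasks, samples/task

# Ground-truth shared representation B* (column-orthonormal) and per-task heads w_t
B_star, _ = np.linalg.qr(rng.normal(size=(d, k)))
tasks = []
for _ in range(n_tasks):
    w_t = rng.normal(size=k)
    X = rng.normal(size=(n, d))
    y = X @ B_star @ w_t  # noiseless labels for clarity
    tasks.append((X, y))

# Learner's representation, initialized at a random orthonormal matrix
B, _ = np.linalg.qr(rng.normal(size=(d, k)))
beta = 0.1  # outer step size

for _ in range(500):
    grad_B = np.zeros_like(B)
    for X, y in tasks:
        # Inner/local step: adapt a fresh task-specific head on this task's data
        # (solved exactly here; a few local gradient steps play this role in MAML)
        w, *_ = np.linalg.lstsq(X @ B, y, rcond=None)
        # Outer/global step: accumulate the gradient of the squared loss w.r.t.
        # the shared representation B, holding the adapted head fixed
        r = X @ B @ w - y
        grad_B += np.outer(X.T @ r / n, w) / n_tasks
    B -= beta * grad_B
    B, _ = np.linalg.qr(B)  # re-orthonormalize the representation

# Subspace (principal-angle) distance between learned and true representations
dist = np.linalg.norm(B @ B.T - B_star @ B_star.T)
```

In this toy instance the subspace distance between the learned and ground-truth representations shrinks toward zero across outer iterations, illustrating the kind of convergence to the common representation that the abstract describes: the inner updates exploit the diversity of the task heads, and the outer updates drive the shared representation toward the true subspace.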
Sewoong Oh has been an associate professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington since 2019. Previously, he was an assistant professor in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign, starting in 2012. He received his PhD from the Department of Electrical Engineering at Stanford University in 2011 under the supervision of Andrea Montanari, and was a postdoctoral researcher at the Laboratory for Information and Decision Systems (LIDS) at MIT under the supervision of Devavrat Shah. Oh was co-awarded the ACM SIGMETRICS Best Paper Award in 2015, the NSF CAREER Award in 2016, the ACM SIGMETRICS Rising Star Award in 2017, and Google Faculty Research Awards in 2017 and 2020.