Abstract:
Understanding how humans and machines learn from few examples remains a fundamental challenge. Humans are remarkably able to grasp a new concept from just a few examples, or to learn a new skill from just a few trials. By contrast, state-of-the-art machine learning techniques typically require thousands of training examples and often break down if the training set is too small.
In this talk, I will discuss our efforts toward endowing visual learning systems with few-shot learning ability. Our key insight is that the visual world is well structured and highly predictable in feature, data, and model spaces. Such structures and regularities enable systems to learn how to learn new tasks rapidly by reusing previous experience. I will focus on two topics that demonstrate how to leverage this idea of learning to learn, or meta-learning, to address a broad range of few-shot learning tasks: task-oriented generative modeling and meta-learning in model space. In addition, I will discuss ongoing work toward building embodied robots that operate in the wild, with a particular focus on streaming computation under bounded resources.
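To make the "learning to learn" idea concrete, the sketch below shows a minimal first-order meta-learning loop (Reptile-style) on toy 1-D regression tasks: a shared initialization is meta-trained so that a handful of gradient steps on any new task produce a good task-specific model. This is an illustrative assumption-laden toy, not the speaker's actual method, and the task setup is invented for demonstration only.

# Minimal first-order meta-learning sketch (Reptile-style), illustrating the
# "learning to learn" idea. Illustrative toy, not the speaker's method.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each task is y = a*x with a different slope a; only a few examples per task.
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=5)
    return x, a * x

def inner_adapt(w, x, y, lr=0.1, steps=5):
    # Adapt the shared parameter w to one task with a few gradient steps.
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)   # d/dw of mean squared error
        w = w - lr * grad
    return w

w_meta = 0.0                                  # shared initialization (the learned prior)
for _ in range(1000):                         # outer loop over many training tasks
    x, y = sample_task()
    w_task = inner_adapt(w_meta, x, y)
    w_meta += 0.05 * (w_task - w_meta)        # Reptile-style meta-update toward adapted weights

# After meta-training, w_meta adapts to a brand-new task from just 5 examples.
x_new, y_new = sample_task()
print("adapted slope:", inner_adapt(w_meta, x_new, y_new))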
Bio:
Yuxiong Wang is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign, starting in fall 2020. Before joining Illinois CS, he was a postdoctoral fellow in the Robotics Institute at Carnegie Mellon University. He received a Ph.D. in robotics from Carnegie Mellon University under the supervision of Martial Hebert. His research interests lie in computer vision, machine learning, and robotics, with a particular focus on few-shot learning, meta-learning, and streaming perception. He was a visitor at the Center for Data Science at New York University and has also spent time at Facebook AI Research (FAIR). He received the Best Paper Honorable Mention Award at ECCV 2020.
Hosted by: Nancy Amato
Recording available to watch at: https://mediaspace.illinois.edu/media/1_j7cuu6ir