Understanding how humans and machines learn from few examples remains a fundamental challenge. Humans are remarkably able to grasp a new concept from just a few examples, or learn a new skill from just a few trials. By contrast, state-of-the-art machine learning techniques typically require thousands of training examples and often break down if the training set is too small.
In this talk, I will discuss our efforts toward endowing visual learning systems with few-shot learning ability. Our key insight is that the visual world is well structured and highly predictable in feature, data, and model spaces. Such structures and regularities enable systems to learn how to learn new tasks rapidly by reusing previous experience. I will focus on two topics that demonstrate how this idea of learning to learn, or meta-learning, can address a broad range of few-shot learning tasks: task-oriented generative modeling and meta-learning in model space. I will also discuss ongoing work toward building machines that can operate in highly dynamic and open environments, making intelligent and independent decisions based on limited information.
Yuxiong Wang is a postdoctoral fellow in the Robotics Institute at Carnegie Mellon University. He received his Ph.D. in robotics from Carnegie Mellon University in 2018 under the supervision of Martial Hebert. His research interests lie in computer vision, machine learning, and robotics, with a particular focus on few-shot learning and meta-learning. He has spent time at Facebook AI Research (FAIR) and has collaborated with researchers at other institutions, including NYU, UIUC, UC Berkeley, Cornell University, INRIA (France), and CSIC-UPC (Spain).
Faculty Host: Derek Hoiem