Modern machine learning systems have made astonishing progress in automating labor-intensive tasks such as visual recognition and machine translation. But while ML systems complete these tasks ever better and faster, humans are largely left behind. Indeed, most people are entirely excluded from the process of creating machine learning models, apart from tedious data annotation.
In this talk, I describe our ongoing efforts to develop new algorithms and user interfaces that allow anyone to customize and create a generative model with minimal human effort. Using our system, a user can quickly change the behavior of generative models such as GANs and conditional NeRFs, without large-scale data annotation or machine learning expertise. The resulting models not only match the user's mental picture but also preserve the original model's sampling diversity and visual quality. Collectively, these tools will make the process of model creation accessible to everyone, including visual artists, content creators, and students.
Jun-Yan Zhu is an Assistant Professor in the School of Computer Science at Carnegie Mellon University. Prior to joining CMU, he was a Research Scientist at Adobe Research and a postdoctoral researcher at MIT CSAIL. He obtained his Ph.D. from UC Berkeley and his B.E. from Tsinghua University. He studies computer vision, computer graphics, computational photography, and machine learning. He is the recipient of the ACM SIGGRAPH Outstanding Doctoral Dissertation Award and the UC Berkeley EECS David J. Sakrison Memorial Prize for outstanding doctoral research. His co-authored work has received the NVIDIA Pioneer Research Award, the SIGGRAPH 2019 Real-Time Live! Best of Show Award and Audience Choice Award, and a place in Popular Science's "The 100 Greatest Innovations of 2019."