We look forward to seeing you online on February 8.
Abstract: While the quality of image generative models has progressed to produce photorealistic images nearly indistinguishable from real ones, slow inference time remains a challenge, hindering interactivity and controllability. In this talk, I will briefly go over existing approaches that can produce output images at interactive speed, and then focus on two of my approaches, Distribution Matching Distillation (DMD) and GigaGAN. I will also discuss how fast generative models can enable more controllable and interactive image synthesis.
Bio: Taesung is a Research Scientist at Adobe Research, focusing on image editing using generative models. He is a core contributor to Adobe Firefly, a text-to-image generative model that generates high-resolution images quickly and is trained on ethically sourced data. He received his Ph.D. in Computer Science from UC Berkeley, advised by Prof. Alexei Efros. Previously, he interned at Adobe in 2019, working with Richard Zhang, and at NVIDIA in summer 2018, working with Ming-Yu Liu. He received a B.S. in Mathematics and an M.S. in Computer Science, both from Stanford University.