Zoom: https://illinois.zoom.us/j/86238233298?pwd=Y29EWXRPOWtiZ09DczRYMXJZK3JRUT09
Title: Towards Neural Graphics
Abstract: Controlling the rendering process is a cornerstone of modern visual effects systems. While generative models have been successful at photorealistic image synthesis, controlling and manipulating the generated image remains challenging. In this talk, I will present two works that model explicit control in the generation and rendering process using neural models: MorphGAN and NeuMan. MorphGAN is a conditional Generative Adversarial Network (GAN) that provides explicit, rig-based control over the generation of face images. NeuMan is a NeRF framework that, from a single video, controls the rendering of both the human and the scene, enabling novel view synthesis of each as well as novel pose synthesis of the human.
Bio: I am a Researcher at Apple Machine Intelligence. My interests lie at the intersection of deep learning, computer vision, and 3D geometry. I did my PhD at the Max Planck Institute for Intelligent Systems with Michael Black on Geometric Understanding of Motion. My work links the motion of objects in videos to structure and geometry in the world using deep learning paradigms and self-supervised learning.