Title: GANcraft - an unsupervised 3D neural method for world-to-world translation
Abstract: Advances in 2D image-to-image translation methods, such as SPADE/GauGAN, have enabled users to paint photorealistic images by drawing simple sketches similar to those created in Microsoft Paint. Despite these innovations, creating a realistic 3D scene remains a painstaking task, out of the reach of most people. It requires years of expertise, professional software, a library of digital assets, and a lot of development time. Wouldn’t it be great if we could build a simple 3D world made of blocks representing various materials, feed it to an algorithm, and receive a realistic-looking 3D world featuring tall green trees, ice-capped mountains, and the blue sea? This talk will provide an overview of GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds. GANcraft builds upon prior work in 2D image synthesis and 3D neural rendering to overcome the lack of paired training data between user-created block worlds and the real world, while allowing user control over scene semantics, camera trajectory, and output style.
Bio: Arun Mallya is a Senior Research Scientist at NVIDIA Research. He obtained his Ph.D. from the University of Illinois at Urbana-Champaign in 2018, with a focus on performing multiple tasks efficiently with a single deep network. He holds a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology Kharagpur (2012) and an M.S. in Computer Science from the University of Illinois at Urbana-Champaign (2014). His interests are in generative modeling and enabling new applications of deep neural networks.
Meeting ID: 862 3823 3298