Abstract: The field of computer graphics, specifically computational photography and rendering, has seen tremendous progress over the past decades and, as a result, is an essential part of the film, gaming, and camera industries. However, current state-of-the-art algorithms rely on complex optimization systems with heuristic components, and thus are typically slow and produce suboptimal results. Deep learning has the potential to revolutionize computer graphics by modeling problems in a data-driven and systematic way. However, the major challenges in applying deep learning to synthesis applications, such as view synthesis and high dynamic range imaging, are the complexity of the task and the lack of large-scale training data.
In this talk, I show how to address these problems by incorporating the underlying physical process of these applications into deep learning. Instead of solving one complex problem end to end with deep learning, which is often intractable, I propose breaking it into two smaller sub-problems that are physically motivated and easier to learn with limited training data. Specifically, I propose a novel, general two-stage framework that can be applied, with slight modifications, to a variety of problems, including light field image synthesis, high dynamic range image and video generation, and Monte Carlo denoising.
Bio: Nima Khademi Kalantari is a postdoctoral researcher at the University of California, San Diego. He received his Ph.D. from the University of California, Santa Barbara in 2015. Nima's primary research interests are in computer graphics, with an emphasis on computational photography and rendering. He has recently focused on developing deep learning techniques for image synthesis in these two areas.