Abstract: 3D city modeling is an active research area that has drawn increasing attention in recent years as the concepts of the digital twin and the metaverse have emerged. As a foundational component, 3D city modeling provides the spatial and visual context for simulating and analyzing urban environments, and for users to explore, interact, and collaborate. Enabling such experiences requires attention to several key aspects, including realism and detail, data integration, dynamic content, scalability, and accessibility. In this presentation, I will discuss the potential of using neural radiance fields to represent 3D urban scenes, leveraging different scene representations and data structures to allow user-controlled, high-quality novel view synthesis, as well as flexible scene editing and creation.
Bio: Yuanbo Xiangli is a postdoctoral scholar at Cornell University, working with Prof. Noah Snavely. Prior to this, she completed her Ph.D. at the Multimedia Lab of the Chinese University of Hong Kong, supervised by Prof. Dahua Lin. She received her Master's degree from the University of Oxford and her Diploma from the University of Nottingham, both in Computer Science. Her research interests lie in 3D computer vision and generative modeling. She has been working on photorealistic and efficient rendering, manipulation, and generation of large-scale 3D indoor/outdoor scenes, leveraging diverse 2D/3D data sources as well as geographic and architectural information.