Mixed-reality applications present a visually unique experience, characterized by deep immersion and a demand for high-quality, high-performance graphics. Given the computational constraints of such systems, it is often essential to trade visual quality for improvements in rendering performance. Image Quality Assessment (IQA) metrics that accurately capture potential perceptual artifacts are useful tools for exploring this design space. However, traditional IQA metrics are insufficient here, due to both the unique viewing conditions and the nature of the perceptual artifacts involved. In my talk I will motivate the need for, and requirements of, new IQA metrics that address this problem, and present details of a recent metric, FovVideoVDP, that aims to meet this need.
Anjul Patney is a Principal Research Scientist in NVIDIA's Human Performance and Experience research group, based in Redmond, Washington. Previously, he was a Research Scientist at Facebook Reality Labs (2019-2021) and a Senior Research Scientist in Real-Time Rendering at NVIDIA (2013-2019). He received a Ph.D. from UC Davis in 2013 and a B.Tech. from IIT Delhi in 2007.
Anjul's research areas include visual perception, computer graphics, machine learning, and virtual/augmented reality. His recent work has led to advances in deep learning for real-time graphics (he co-developed DLSS 1.0), perceptual metrics for spatiotemporal image quality (he co-developed FovVideoVDP), foveated rendering for VR graphics, and redirected walking in VR environments.