We look forward to seeing you in person in 2405 Siebel Center on Tuesday, 12/5.
Abstract: Many real-world applications, such as autonomous driving, robotics, and immersive computing, rely on 3D perception. Although there has been significant progress in enhancing the accuracy of perception models, their efficiency often falls short of real-time performance, which hinders their use in practical applications.
In this talk, I will share some of my efforts to improve the efficiency of 3D perception through the lens of sparsity. Given the inherent sparsity of LiDAR data, I will first discuss how sparse system and algorithmic support can translate LiDAR's theoretical sparsity benefits into actual speed gains on hardware. Next, I will describe techniques for introducing sparsity into dense images by filtering out less informative pixels. Finally, I will present my work on unifying LiDAR and camera perception into a single, more efficient, and integrated 3D perception system.
Bio: Zhijian Liu is a Ph.D. candidate at MIT, advised by Song Han. His research focuses on efficient machine learning and systems. He has developed efficient algorithms and systems for deep learning and applied them to computer vision, robotics, natural language processing, and scientific discovery. His research has been adopted by Microsoft, NVIDIA, Intel, and Waymo. He is a recipient of the Qualcomm Innovation Fellowship and the NVIDIA Graduate Fellowship, and has been recognized as a Rising Star in ML and Systems by MLCommons and a Rising Star in Data Science by UChicago and UCSD. Previously, he was the founding research scientist at OmniML (acquired by NVIDIA). He received his B.Eng. degree from Shanghai Jiao Tong University.