We look forward to seeing you online today, February 1.
Abstract: Humans easily apply learned skills to new situations, a flexibility that AI systems still struggle to achieve. Current AI models are often confined to their training setup, leading to isolated development and a narrow scope of application. This largely restricts the creation of flexible, general-purpose AI systems. 'Deep Model Reuse' presents a novel solution. Imagine tapping into a vast library of pre-trained models, each a master of its specialized domain. Our approach re-purposes these existing models, extracting and transforming their knowledge to develop novel AI systems. In this talk, we explore the essential techniques of this transformative process, highlighting the shift towards versatile and efficient AI that mirrors the adaptability of human cognition.
Bio: Xingyi Yang is a third-year Ph.D. student at the National University of Singapore (NUS), affiliated with the Learning and Vision Lab (LV-Lab). His research interests lie in machine learning and computer vision. His main effort is on repurposing trained AI models to not only master new tasks but also enhance efficiency. He has also explored the areas of representation learning and generative models. His work on reassembling deep models was nominated for a paper award at NeurIPS 2022.