The Center for Artificial Intelligence Innovation will host a Research Seminar on Monday, April 4, from 11:00am-12:00pm via Zoom. The speaker will be Saurabh Kulkarni of Graphcore. Attend this session to learn how Graphcore aims to address the scale challenges associated with training large models, and how GNNs are being accelerated at scale on Graphcore systems. Get to know Graphcore's Intelligence Processing Unit (IPU) - a purpose-built hardware accelerator with a unique MIMD architecture - designed to meet the most demanding compute and memory-bandwidth needs of modern ML models.
Abstract: We live in a world where hyperscale systems for machine intelligence are increasingly used to solve complex problems, from natural language processing and computer vision to molecular modeling, drug discovery, climate modeling, and recommendation systems. Beyond images and text, graphs are fast emerging as a key data type for AI models to process. A convergence of breakthrough research in machine learning models and algorithms, increased access to cloud-scale hardware systems for research, and thriving software ecosystems is paving the way for an exponential increase in model sizes. There has also been a sharp uptick in the range of problems AI is addressing and in the rate of innovation in emerging model architectures such as GNNs. Training the models of the future economically will require effective parallel processing, model decomposition techniques, and large clusters of accelerators. Our network-disaggregated architecture uniquely positions us to build highly scalable systems (IPU-PODs) with thousands of accelerators, aimed at exploiting multiple dimensions of parallelism.
Presenter Bio: Saurabh Kulkarni, VP & GM, North America, Graphcore
Saurabh Kulkarni is VP & GM for North America at Graphcore. Prior to his current role, he held various leadership positions at Intel, Microsoft, and Oracle over the last 20 years. His roles have spanned a variety of domains, including computer architecture, server platform architecture, cloud infrastructure, and hardware accelerators for AI/ML.