Matthew Krafczyk, Research Programmer at the National Center for Supercomputing Applications, will present "Demystifying Hardware Acceleration for Machine Learning" on Monday, November 2 at 11:00 a.m.
Abstract: The Machine Learning (ML) community faces a dizzying array of hardware options for the development and deployment of new applications. I aim to give the listener a better understanding of the available hardware, from CPUs and GPUs to FPGAs and edge compute devices. What do the new generations of GPUs and CPUs offer over the old? What do modern Machine Learning frameworks do to take advantage of these advancements?
Starting with a survey of CPU architecture, I will introduce some assembly language and show how modern CPU instruction extensions are used by compilers to accelerate certain computing workflows. Then, I will discuss GPU architecture along with improvements being made on modern GPU platforms. Next, I will cover the role, advantages, and disadvantages of FPGAs and edge compute devices. Finally, I will discuss strategies modern ML frameworks like TensorFlow use to make their models more performant.
Register to attend this webinar.
Seminar Zoom link.