Computer Science Speaker Series Master Calendar


COLLOQUIUM: Michael Pellauer, "Programmable Accelerators: Historical Blip or Fundamental Tentpole?"

Event Type: Seminar/Symposium
Sponsor: Illinois Computer Science
Location: HYBRID: 2405 Siebel Center for Computer Science OR Zoom https://illinois.zoom.us/j/81306080052?pwd=Mm1QRGs4bndSNHE2OTBBZTZjWFpNZz09
Date: Apr 17, 2023, 3:30 pm
Originating Calendar: Computer Science Colloquium Series

Zoom: https://illinois.zoom.us/j/81306080052?pwd=Mm1QRGs4bndSNHE2OTBBZTZjWFpNZz09

Abstract: 
The modern world has come to rely on sustained, predictable improvements to computer performance and efficiency, which the industry has faithfully delivered for 50 years. As transistor scaling slows, increased hardware specialization seems like an intuitive way to continue the expected cadence. Indeed, pairing general-purpose CPUs with offload accelerators has become standard practice in both mobile chips and datacenters. It is counterintuitive, then, that the most successful accelerators in this emerging heterogeneous computational ecosystem are programmable GPUs. Why have fully fixed-function ASICs not displaced programmability? Will we look back on the era of the programmable accelerator as a blip, or as a fundamental tentpole supporting the industry?
 
 At a high level, the job of the computer architect is to spend the hardware area and power budget to provision the highest computational throughput possible for the available memory bandwidth. This "roofline" approach reveals that data reuse in the algorithm (i.e., "computational intensity") is a fundamental component of modern accelerators. But how much reuse is truly present in Deep Learning computations? How do model transformations like datatype quantization, sparse pruning, or tensor decomposition affect it? As the art of Deep Learning evolves into an optimized science, will accelerators converge to memory-bound commodity parts where the computational organization barely matters? 
 
This talk re-examines these issues in an "a-historical" context that puts aside the well-known story of the development of the GPU in the real world and evaluates it from fundamental principles. We discuss the role of hardware support for programmability and dive into areas where CPUs and GPUs have, and have not, successfully differentiated. We demonstrate how targeted specialization choices within the GPU substrate (e.g., tensor cores, ray-tracing hardware) can have a large impact. We look towards a future where hardware specialization can be paired with programmable datapaths in a more interesting hierarchy than disjoint, unrelated hardware blocks.
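
As a rough illustration of the roofline relation the abstract alludes to, here is a minimal sketch that bounds attainable throughput by the minimum of peak compute and memory bandwidth times computational intensity. The peak-throughput, bandwidth, and intensity numbers are hypothetical placeholders, not figures from the talk.

# Minimal roofline sketch: attainable throughput is capped either by peak
# compute or by memory bandwidth times computational (arithmetic) intensity.
# All hardware numbers below are hypothetical placeholders.

PEAK_FLOPS = 100e12      # peak compute throughput, FLOP/s (hypothetical)
MEM_BW = 1e12            # peak memory bandwidth, bytes/s (hypothetical)

def attainable_flops(intensity_flops_per_byte: float) -> float:
    """Roofline bound: min(peak compute, bandwidth * computational intensity)."""
    return min(PEAK_FLOPS, MEM_BW * intensity_flops_per_byte)

# Sweep a few intensity values: low-reuse (low-intensity) workloads land in the
# memory-bound region, where adding compute hardware helps little; high-reuse
# kernels such as large dense matrix multiplies can approach the compute roof.
for intensity in (1, 10, 100, 1000):     # FLOP per byte moved
    bound = attainable_flops(intensity)
    regime = "memory-bound" if bound < PEAK_FLOPS else "compute-bound"
    print(f"intensity {intensity:>4} FLOP/B -> {bound / 1e12:6.1f} TFLOP/s ({regime})")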

Bio: 
Dr. Michael Pellauer is a senior research scientist in Nvidia’s Architecture Research Group (ARG). His research focuses on domain-specific hardware accelerators and how lessons learned from them can be integrated into a programmable substrate such as a GPU. His current focus is sparse tensor algebra acceleration for deep learning. He holds a PhD in Computer Science from MIT, a Master of Science from Chalmers University of Technology, and double Bachelor's degrees in Computer Science and English from Brown University. He previously worked in Intel Corporation’s Versatile Systems and Simulation Advanced Development (VSSAD) group as a senior architect.

Part of the Illinois Computer Science Speakers Series. Faculty Host: Chris Fletcher 

Join us in person in 2405 Siebel Center for Computer Science, 201 N. Goodwin Ave., or via Zoom (meeting ID: 813 0608 0052, password: csillinois).
