We look forward to seeing you on Tuesday, September 12, at 4:00pm. Join in person at 2405 Siebel Center for Computer Science, 201 N. Goodwin Ave., or via Zoom: https://illinois.zoom.us/j/85331033399?pwd=NlFReUlQUVlvYWVyL0VsUXFOTUllZz09
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators
Nandeeka Nayak, Univ. of Illinois
Abstract
Over the past few years, the end of Dennard scaling and the slowing of Moore’s law have led to an increased focus on domain-specific accelerators for a variety of applications, including sparse tensor algebra. Exploiting the sparsity present in real-world tensors enables improvements in performance and efficiency by eliminating data movement of, and computation on, zero values. However, due to the irregularity present in sparse tensors, accelerators must employ a wide variety of novel solutions to achieve good performance. Unfortunately, prior work on sparse accelerator modeling does not express this full range of design features. This has made it difficult to compare or extend the state of the art and to understand the impact of each design choice.
To address this gap, this talk describes TeAAL: a framework that enables the concise and precise specification and evaluation of sparse tensor algebra architectures. Specifically, we explore how the TeAAL specification language can be used to represent state-of-the-art accelerators and explain how the TeAAL simulator generator translates designs written in this language into executable performance models that can be evaluated on real input tensors. We validated the TeAAL performance model on four state-of-the-art sparse tensor algebra accelerators (ExTensor, Gamma, OuterSPACE, and SIGMA) and used it to propose a new accelerator for graph problems (improving upon Graphicionado and GraphDynS).
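To give a flavor of what a declarative specification buys you, here is a minimal, hypothetical sketch in Python (illustrative only; the spec layout and the names spec and run_model are assumptions, not TeAAL's actual syntax or API). It pairs an Einsum with a loop order and executes it directly on sparse inputs, so work scales with the nonzeros rather than the dense shapes:

    # Sparse tensors as nested dicts keyed by coordinate (fibertree-like).
    A = {0: {0: 2.0, 3: 1.0}, 2: {1: 4.0}}       # A[m][k]
    B = {0: {1: 3.0}, 1: {0: 5.0}, 3: {1: 7.0}}  # B[k][n]

    spec = {
        "einsum": "Z[m,n] = A[m,k] * B[k,n]",
        "loop_order": ["m", "k", "n"],  # a Gustavson-style (row-wise) dataflow
    }

    def run_model(A, B):
        """Execute the Einsum in the m -> k -> n order from spec, counting work."""
        Z, muls = {}, 0
        for m, a_row in A.items():                     # nonzero rows of A only
            for k, a_val in a_row.items():             # nonzeros of A[m] only
                for n, b_val in B.get(k, {}).items():  # nonzeros of B[k] only
                    Z.setdefault(m, {})
                    Z[m][n] = Z[m].get(n, 0.0) + a_val * b_val
                    muls += 1
        return Z, muls

    Z, muls = run_model(A, B)
    print(Z, "multiplies:", muls)  # only nonzero pairs are ever multiplied

A real performance model would additionally charge for the memory traffic of each fiber touched; the point here is only how an Einsum plus a loop order pins down a dataflow.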
Bio
Nandeeka Nayak is a fourth-year Computer Science PhD student at the University of Illinois Urbana-Champaign, advised by Chris Fletcher. She works on understanding domain-specific accelerators for tensor algebra, with a focus on building abstractions that unify a wide variety of kernels and accelerator designs into a small set of primitives, in collaboration with Joel Emer and Michael Pellauer. In the past, she has also worked on hardware security.
Before coming to the University of Illinois, she completed her B.S. in Computer Science at Harvey Mudd College in 2020. There, she worked with Chris Clark in the Lab for Autonomous and Intelligent Robotics. Additionally, for her senior capstone project, she added a numerical programming library to the programming language Factor.
In her free time, she enjoys cooking, social dancing, traveling with her family, and studying Korean.
---
DataFlow SuperComputing for BigData DeepAnalytics
Dr. Veljko Milutinovic, Univ. of Belgrade / Indiana Univ. / TU Graz
Abstract
This presentation analyses the essence of DataFlow SuperComputing, defines its advantages, and sheds light on the related programming model, which corresponds to a recent Intel patent on Intel's future dataflow processor. The emphasis is on issues of interest to general engineering and on problems of interest to this audience.
According to Alibaba and Google, as well as the open literature, the DataFlow paradigm, compared to the ControlFlow paradigm, offers: (a) speedups of at least 10x to 100x, and sometimes much more (depending on the algorithmic characteristics of the most essential loops and the spatial/temporal characteristics of the Big Data stream); (b) potential for better precision (depending on the characteristics of the optimizing compiler and the operating system); (c) power reduction of at least 10x (depending on the clock speed and the internal architecture); and (d) size reduction of well over 10x (depending on the chip implementation and the packaging technology). The bigger the data, and the higher the reusability of individual data items (which is typical of ML), the greater the benefits of the dataflow paradigm over the control-flow paradigm. However, the programming paradigm is different and has to be mastered.
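To make the reuse argument concrete, here is a back-of-envelope model in Python (an illustrative assumption, not taken from the talk): if each data item is used R times, a control-flow machine that re-fetches it on every use pays R memory accesses per item, while a dataflow pipeline that streams it once pays one, so the fetch advantage grows linearly with R:

    # Back-of-envelope fetch-cost model (illustrative assumption).
    def fetch_cost(n_items, reuse, stream_once):
        # stream_once=True: each item enters the pipeline exactly once (dataflow)
        # stream_once=False: each use re-fetches the item (naive control flow)
        return n_items * (1 if stream_once else reuse)

    n, reuse = 1_000_000, 100  # e.g., ML weights reused across a batch
    cf = fetch_cost(n, reuse, stream_once=False)
    df = fetch_cost(n, reuse, stream_once=True)
    print(f"control flow: {cf:.1e} fetches, dataflow: {df:.1e}, "
          f"advantage: {cf // df}x")  # 100x here, scaling with reuse

Caches narrow this gap on real control-flow machines, of course; the model only illustrates why higher data reuse favors the dataflow style.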
Bio
Prof. Veljko Milutinovic (b. 1951) received his PhD from the University of Belgrade in Serbia, spent about a decade in various faculty positions in the USA (mostly at Purdue University and, more recently, at Indiana University Bloomington), and was a co-designer of DARPA's first GaAs (gallium arsenide) RISC microprocessor at 200MHz (about a decade before commercial efforts at the same speed) and DARPA's first GaAs systolic array with 4096 processors at 200MHz (both well documented in the open literature). Later, for about three decades, he taught and conducted research at the University of Belgrade, in EE, MATH, MBA, and SCI. Now he serves as the Chairman of the Board of IPSI Belgrade (a spin-off of Fraunhofer IPSI from Darmstadt, Germany).
His research is mostly in data mining algorithms and dataflow computing, with an emphasis on mapping big data algorithms onto fast emerging technologies and energy-efficient architectures. Twenty of his edited books and related publications include focused forewords or condensed wisdom contributed by 20 different Nobel Laureates with whom he cooperated on past industry-sponsored projects. He has over 100 SCI journal papers (mostly in IEEE and ACM journals), about 2000 Thomson Reuters citations, about 2000 SCOPUS citations, and a Google Scholar citation count slowly approaching 6000, with current Google Scholar indices of h=40, i10=120, i100=12, and i468=1.