Date: Monday, September 26
Time: 4:30pm - 5:30pm
Location: The seminar is hybrid. You can join via Zoom (https://illinois.zoom.us/j/89400978467?pwd=NjV3ZFQrQ1JidTNyS0ZNUEVOcEtpUT09) or attend in person in Room 2124, Siebel Center.
Speaker(s): Tom (Yishen) Chen
Title: All you need is superword-level parallelism: systematic control-flow vectorization with SLP
Abstract: SLP vectorization is a proven auto-vectorization technique that has been adopted by production compilers such as GCC and Clang. Compared to traditional loop vectorization, it is both simpler to implement and more flexible, in the sense that it can vectorize code even when loop vectorization fails. In this talk, I will present our ongoing work that extends SLP vectorization so that it can 1) systematically handle complex, data-dependent control flow and 2) automatically target emerging, complex vector instructions such as Intel’s Vector Neural Network Instructions. Our evaluation shows that a single instance of our vectorizer is competitive with, and in many cases significantly better than, LLVM’s vectorization pipeline, which includes both loop and SLP vectorizers. For example, on an unoptimized, sequential volume renderer from Pharr and Mark, our vectorizer achieves a 3.28× speedup, whereas none of the production compilers we tested can vectorize it because of its complex control-flow constructs.
Bio: Tom (Yishen) is a 5th-year Ph.D. student at MIT co-advised by Saman Amarasinghe and Charith Mendis. He is a TA this semester and is not getting much done.