Siebel School Events Calendar


EVT: Accelerating Deep Learning Training with Epilogue Visitor Tree

Event Type: Seminar/Symposium
Sponsor: Muchen Xu
Location: Room 2406 (Siebel)
Date: Oct 14, 2024, 4:30 – 5:30 pm

Conference: ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '24)

Author(s): Zhaodong Chen (UC Santa Barbara), Andrew Kerr (NVIDIA), Richard Cai (NVIDIA), Jack Kosaian (NVIDIA), Haicheng Wu (NVIDIA), Yufei Ding (UC San Diego), Yuan Xie (HKUST)

Note: This talk is a student presentation and is not given by the authors of the paper(s) being presented.

Abstract: As deep learning models become increasingly complex, deep learning compilers are critical for enhancing system efficiency and unlocking hidden optimization opportunities. Although excellent speedups have been achieved in inference workloads, existing compilers face significant limitations in training. First, the training computation graph involves intricate operations that are challenging to fuse, such as normalization, loss functions, and reductions, which limits optimization opportunities like kernel fusion. Second, the training graph's additional edges connecting forward and backward operators pose challenges in finding optimal and feasible partitions for kernel fusion. More importantly, existing compilers can neither generate kernels with state-of-the-art performance on modern GPUs nor accommodate diverse fusion patterns. In this paper, we introduce Epilogue Visitor Tree (EVT), a novel compiler that overcomes these limitations. EVT employs novel graph-level compilation passes to unlock hidden fusion and optimization opportunities. It also incorporates a novel integer linear programming-based partitioner that efficiently finds optimal and feasible partitions in complex joint forward-backward graphs. Moreover, we present the Epilogue Visitor Abstraction and introduce the EVT operator compiler, which automatically generates flexible epilogues that can be integrated with high-performance main-loop implementations from CUTLASS and other state-of-the-art libraries. EVT is evaluated on diverse training workloads across domains and achieves 1.26–3.1× speedups.
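To make the core abstraction concrete: an epilogue visitor tree represents the operations that follow a GEMM (bias, activation, residual adds, and so on) as a tree of elementwise nodes, and a visitor evaluates that tree on the accumulator before anything is written back to memory. The sketch below is a minimal, illustrative Python rendering of this idea; the class names and the gemm_with_epilogue helper are hypothetical and stand in for the C++ template machinery in CUTLASS, which is not reproduced here.

```python
# Minimal sketch of the epilogue-visitor-tree idea (illustrative only;
# names and structure are assumptions, not the CUTLASS or EVT paper API).
import numpy as np

class Accum:
    """Leaf: the raw GEMM accumulator value."""
    def visit(self, accum, aux):
        return accum

class Aux:
    """Leaf: an auxiliary input tensor (e.g. a bias or residual)."""
    def __init__(self, name):
        self.name = name
    def visit(self, accum, aux):
        return aux[self.name]

class Unary:
    """Internal node: elementwise unary op (e.g. ReLU) over one child."""
    def __init__(self, fn, child):
        self.fn, self.child = fn, child
    def visit(self, accum, aux):
        return self.fn(self.child.visit(accum, aux))

class Binary:
    """Internal node: elementwise binary op over two children."""
    def __init__(self, fn, lhs, rhs):
        self.fn, self.lhs, self.rhs = fn, lhs, rhs
    def visit(self, accum, aux):
        return self.fn(self.lhs.visit(accum, aux), self.rhs.visit(accum, aux))

def gemm_with_epilogue(a, b, tree, aux):
    """Fused GEMM + epilogue: the tree is evaluated on the accumulator
    instead of writing the raw GEMM result to memory first."""
    accum = a @ b                   # main loop (stand-in for a CUTLASS GEMM)
    return tree.visit(accum, aux)   # epilogue visitor pass

# Example epilogue: D = ReLU(accum + bias), a classic linear-layer pattern.
tree = Unary(lambda x: np.maximum(x, 0.0), Binary(np.add, Accum(), Aux("bias")))
a, b = np.random.randn(4, 8), np.random.randn(8, 3)
d = gemm_with_epilogue(a, b, tree, {"bias": np.random.randn(3)})
```

Evaluating the epilogue in place avoids a round trip to global memory between the GEMM and its elementwise consumers, which is the usual source of fusion speedups.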
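The partitioning problem the paper's ILP solves can also be sketched: assign each operator in the joint forward-backward graph to a kernel so that every group is fusable and convex, while maximizing the number of fused edges. The toy below swaps the ILP for a brute-force search over a four-node chain to stay dependency-free, so it illustrates only the objective and constraints, not the paper's actual encoding; the graph, the unfusable-edge rule, and all names are hypothetical.

```python
# Toy stand-in for EVT's ILP-based partitioner (brute force, not an ILP).
from itertools import product

# Chain-shaped toy graph in topological order (forward ops, then a backward op).
TOPO = ["matmul", "bias", "relu", "d_relu"]
EDGES = [("matmul", "bias"), ("bias", "relu"), ("relu", "d_relu")]
# Hypothetical feasibility rule: this backend cannot fuse across the
# forward/backward boundary.
UNFUSABLE = {("relu", "d_relu")}

def feasible(assign):
    # Every edge inside one partition must be fusable.
    if any(assign[u] == assign[v] and (u, v) in UNFUSABLE for u, v in EDGES):
        return False
    # Convexity on a chain: each partition must be contiguous in topo order.
    for part in set(assign.values()):
        idx = [i for i, n in enumerate(TOPO) if assign[n] == part]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

def fused_edges(assign):
    # Objective: the more edges kept inside a partition, the fewer kernels.
    return sum(assign[u] == assign[v] for u, v in EDGES)

best = max(
    (dict(zip(TOPO, labels))
     for labels in product(range(len(TOPO)), repeat=len(TOPO))),
    key=lambda a: (feasible(a), fused_edges(a)),
)
print(best)  # matmul/bias/relu fused into one kernel, d_relu left alone
```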
