CS Compiler Seminar: Please join us on Feb 20, 2025, from 4pm to 5pm in Room 3102 (Siebel Center), where Vimarsh Sathia will give a talk, “Compiler Optimizations for Higher-Order Automatic Differentiation”. Please see the abstract and biography below:
Abstract: Higher-order derivatives have critical applications in scientific computing and machine learning, ranging from neural ODEs to Hessian-based optimizers. Existing approaches compute these derivatives with automatic differentiation (AD), but the code generated for higher-order derivatives is usually suboptimal, leaving performance on the table.
We show how compiler optimizations can improve the performance of higher-order AD in tensor programs. Our key insight is that AD-generated derivative code often contains exploitable high-level structure, such as common subexpressions and symmetric mixed derivatives. To leverage this structure, we implement compiler passes in the MLIR StableHLO intermediate representation, including constant propagation and other tensor-specific rewrites. Preliminary benchmarks on second-order Laplacian computations in deep neural networks show geometric-mean speedups of up to $1.3\times$.
These findings suggest that compiler-level optimizations offer a promising path toward making higher-order AD more practical and efficient, particularly for large-scale applications that rely on complex derivative computations.
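For concreteness, below is a minimal sketch of the kind of computation the talk targets, written in JAX (which lowers to the StableHLO IR mentioned above). The toy function f and both Laplacian formulations are illustrative assumptions, not the speaker's benchmark code: the naive version materializes the full Hessian before taking its trace, while the restructured version computes one Hessian-vector product per input dimension, the sort of rewrite a derivative-aware compiler pass might apply.

import jax
import jax.numpy as jnp

# Toy scalar-valued "network" (hypothetical, for illustration only).
def f(x):
    W = 0.1 * jnp.ones((4, x.shape[0]))
    return jnp.sum(jnp.tanh(W @ x))

# Naive Laplacian: build the full n-by-n Hessian, then take its trace.
def laplacian_naive(x):
    return jnp.trace(jax.hessian(f)(x))

# Restructured Laplacian: forward-over-reverse Hessian-vector products,
# one per basis direction, never materializing the full Hessian.
def laplacian_hvp(x):
    def hvp(v):
        return jax.jvp(jax.grad(f), (x,), (v,))[1]
    basis = jnp.eye(x.shape[0])
    return jnp.sum(jax.vmap(hvp)(basis) * basis)

x = jnp.arange(3.0)
print(laplacian_naive(x), laplacian_hvp(x))  # both print the same value

The two formulations agree numerically; the second simply avoids the O(n^2) intermediate Hessian, an optimization in the same spirit as the structure-exploiting rewrites described in the abstract.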
Speaker Bio: Vimarsh is a third-year PhD student at UIUC, advised by Billy Moses. He is interested in the intersection of automatic differentiation (AD) and compiler optimizations. He is also interested in the automated mapping of AD onto heterogeneous architectures such as GPUs without sacrificing performance.