NCSA staff who would like to submit an item for the calendar can email email@example.com.
We look forward to seeing you in person or virtually on Monday, January 30, at 4:30pm. Join in person at 0216 Siebel Center for Computer Science, 201 N. Goodwin Ave., or via Zoom: https://illinois.zoom.us/j/83675834345?pwd=T1l6aXdzK3lOdnNmVUtjZjFzdHZsdz09.
This is a practice talk for an upcoming paper at PPoPP '23.
Abstract: Temporal Graph Neural Networks are gaining popularity in modeling interactions on dynamic graphs. Among them, Temporal Graph Attention Networks (TGAT) have gained adoption in predictive tasks, such as link prediction, in a range of application domains. Most optimizations and frameworks for Graph Neural Networks (GNNs) focus on GNN models that operate on static graphs. While a few of these optimizations exploit redundant computations on static graphs, they are either not applicable to the self-attention mechanism used in TGATs or do not exploit optimization opportunities that are tied to temporal execution behavior.
In this paper, we explore redundancy-aware optimization opportunities that specifically arise from computations involving temporal components in TGAT inference. We observe considerable redundancies in temporal node embedding computations, such as recomputing previously computed neighbor embeddings and re-encoding repeated time-delta values. To exploit these opportunities, we developed TGOpt, which introduces optimization techniques based on deduplication, memoization, and precomputation to accelerate TGAT inference. Our experimental results show that TGOpt achieves a geomean speedup of 4.9x on CPU and 2.9x on GPU when performing inference on a wide variety of dynamic graphs, with up to 6.3x speedup for the Reddit Posts dataset on CPU.
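To give a flavor of the memoization idea the abstract describes: TGAT-style models encode a time delta Δt with a harmonic (cosine) function of learned frequencies and phases, so when the same Δt recurs across many edges, the encoding can be cached and reused instead of recomputed. The sketch below is purely illustrative and is not TGOpt's actual implementation; the class and parameter names are made up for this example.

```python
import math

class MemoizedTimeEncoder:
    """Illustrative sketch (not TGOpt's code): cache the harmonic
    time encoding cos(dt * w + b) so repeated time deltas are
    encoded only once."""

    def __init__(self, freqs, phases):
        # Stand-ins for the learned frequency and phase parameters.
        self.freqs = freqs
        self.phases = phases
        self.cache = {}

    def encode(self, dt):
        # Memoization: reuse the encoding computed for an earlier
        # occurrence of the same time delta.
        if dt not in self.cache:
            self.cache[dt] = [math.cos(dt * w + b)
                              for w, b in zip(self.freqs, self.phases)]
        return self.cache[dt]

enc = MemoizedTimeEncoder(freqs=[1.0, 0.5, 0.25], phases=[0.0, 0.1, 0.2])
first = enc.encode(5.0)
second = enc.encode(5.0)  # cache hit: the stored encoding is returned
assert first is second
```

In a real temporal graph, time deltas repeat heavily across a node's neighborhood, which is what makes this kind of caching (along with deduplicating neighbor embeddings) pay off.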
Bio: Yufeng Wang is an M.S. candidate in Computer Science at the University of Illinois Urbana-Champaign, where he is advised by Professor Charith Mendis. His current research focuses on Temporal Graph Neural Networks, particularly on optimizing their performance and ease of implementation. Prior to this, Yufeng received his bachelor's degree from Northeastern University and worked as a Software Engineer at the Broad Institute.