IQUIST Master Calendar

CS Compiler Seminar: Shubham Ugare, "Incremental Verification of Neural Networks" and Ashitabh Misra, "Applying Deep Learning to the Cache Replacement Problem"

Event Type
Seminar/Symposium
Sponsor
Computer Science
Location
2124 Siebel Center
Virtual
Date
Apr 3, 2023   4:30 pm
Originating Calendar
Computer Science Speakers Calendar

We look forward to seeing you in person on Monday, April 3, at 4:30 pm. Join us at 2124 Siebel Center for Computer Science, 201 N. Goodwin Ave., or via Zoom: https://illinois.zoom.us/j/83675834345?pwd=T1l6aXdzK3lOdnNmVUtjZjFzdHZsdz09

Speaker: Shubham Ugare (Student Speaker)

Title: Incremental Verification of Neural Networks

Note: This work has been submitted to a conference and is currently under review.

Abstract: Complete verification of deep neural networks (DNNs) can exactly determine whether or not a DNN satisfies a desired trustworthiness property (e.g., robustness, fairness) on an infinite set of inputs. Despite tremendous progress over the years in improving the scalability of complete verifiers on individual DNNs, they are inherently inefficient when a deployed DNN is updated to improve its inference speed or accuracy, because the expensive verifier must be rerun from scratch on the updated DNN. To improve efficiency, we propose a new, general framework for incremental and complete DNN verification built on novel theory, data structures, and algorithms. Our contributions, implemented in a tool named IVAN, yield an overall geometric-mean speedup of 2.4x for verifying challenging MNIST and CIFAR10 classifiers and a geometric-mean speedup of 3.8x for the ACAS-XU classifiers over state-of-the-art baselines.
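
The core idea behind incremental verification, reusing proof structure from an earlier run when the network changes only slightly, can be illustrated with a deliberately simplified sketch. The Python toy below "verifies" an output bound for a 1-D Lipschitz function by interval branch and bound, then warm-starts re-verification of an updated function from the previous run's partition. All names, the bounding scheme, and the toy functions here are invented for illustration and are not IVAN's actual theory or algorithms.

# A toy, hypothetical illustration only: a 1-D "network" f with a known
# Lipschitz constant L is verified by interval branch and bound, and the
# final partition is reused to warm-start verification of an updated f.

def upper_bound(f, L, lo, hi):
    # Sound upper bound of f on [lo, hi] for an L-Lipschitz f.
    mid = (lo + hi) / 2.0
    return f(mid) + L * (hi - lo) / 2.0

def verify(f, L, lo, hi, threshold, partition=None):
    # Prove f(x) <= threshold on [lo, hi]; return (verified, partition).
    # A saved partition from a previous run can be reused as a warm start.
    work = list(partition) if partition else [(lo, hi)]
    leaves = []
    while work:
        a, b = work.pop()
        if upper_bound(f, L, a, b) <= threshold:
            leaves.append((a, b))        # proved on this sub-interval
        elif b - a < 1e-6:
            return False, leaves         # give up: cannot refine further
        else:
            m = (a + b) / 2.0            # split and keep searching
            work += [(a, m), (m, b)]
    return True, leaves

f_old = lambda x: 0.9 * abs(x - 0.5) + 0.30   # original "network"
f_new = lambda x: 0.9 * abs(x - 0.5) + 0.32   # small update to it

# L = 2.0 is a deliberately loose (but still sound) Lipschitz estimate,
# so the verifier actually has to split.
ok_old, tree = verify(f_old, L=2.0, lo=0.0, hi=1.0, threshold=0.8)
ok_new, _ = verify(f_new, L=2.0, lo=0.0, hi=1.0, threshold=0.8, partition=tree)
print(ok_old, ok_new)   # True True; the warm-started run splits far less

The warm-started run re-proves the property with little or no additional splitting, which is, in spirit, the source of the speedups the abstract reports; IVAN reuses far richer proof state than this toy.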


Speaker: Ashitabh Misra (Student Speaker)

Title: Applying Deep Learning to the Cache Replacement Problem

Conference: MICRO ’19 (Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture)

Author(s): Zhan Shi, Xiangru Huang, Akanksha Jain, Calvin Lin 

Note: The following talk is a student presentation; it is not given by the authors of the paper being presented.

Abstract: Despite its success in many areas, deep learning is a poor fit for hardware predictors because its models are impractically large and slow, but this paper shows how deep learning can nevertheless help design a new cache replacement policy. We first show that, for cache replacement, a powerful LSTM model can provide better accuracy in an offline setting than current hardware predictors. We then analyze and interpret this LSTM model, deriving a key insight that allows us to design a simple online model that matches the offline model's accuracy at orders-of-magnitude lower cost.

The result is the Glider cache replacement policy, which we evaluate on a set of 33 memory-intensive programs from the SPEC 2006, SPEC 2017, and GAP (graph-processing) benchmark suites. In a single-core setting, Glider outperforms top finishers from the 2nd Cache Replacement Championship, reducing the miss rate over LRU by 8.9%, compared to reductions of 7.1% for Hawkeye, 6.5% for MPPPB, and 7.5% for SHiP++. On a four-core system, Glider improves IPC over LRU by 14.7%, compared with improvements of 13.6% (Hawkeye), 13.2% (MPPPB), and 11.4% (SHiP++).
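
For a flavor of what a simple online model in this spirit might look like, here is a hypothetical, minimal perceptron-style sketch in Python: it keeps one saturating integer weight per load PC, forms a feature from the set of recent unique PCs, and trains against oracle reuse labels (which a Belady-style simulator, as in Hawkeye, could supply). The class name, parameters, and toy trace are invented for illustration and do not reproduce Glider's actual predictor.

from collections import defaultdict, deque

class OnlineReusePredictor:
    # Perceptron-style sketch: classifies an access as "cache-friendly"
    # (its line will be reused) from the set of recent unique load PCs.
    def __init__(self, history=2):
        self.weights = defaultdict(int)      # one small integer weight per PC
        self.recent = deque(maxlen=history)  # most recent unique PCs

    def observe(self, pc):
        if pc in self.recent:
            self.recent.remove(pc)           # keep PCs unique, most recent last
        self.recent.append(pc)

    def predict(self):
        return sum(self.weights[pc] for pc in self.recent) >= 0

    def train(self, reused):
        # Saturating integer update toward the oracle reuse label.
        delta = 1 if reused else -1
        for pc in self.recent:
            self.weights[pc] = max(-32, min(31, self.weights[pc] + delta))

# Toy trace: loads from PC 0xC are reused when preceded by PC 0xA but not
# when preceded by PC 0xB, so the PC context, not the PC alone, decides.
p = OnlineReusePredictor(history=2)
for _ in range(30):
    p.observe(0xA); p.observe(0xC); p.train(reused=True)
    p.observe(0xB); p.observe(0xC); p.train(reused=False)
p.observe(0xA); p.observe(0xC); print(p.predict())   # True
p.observe(0xB); p.observe(0xC); print(p.predict())   # False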

