Siebel School Master Calendar


Scalable Relational Verification of Deep Neural Networks & Exploring COMET: Neural Cost Model Explanation Framework

Event Type
Seminar/Symposium
Sponsor
The Department of Computer Science, University of Illinois, and the FM/SE Seminar
Location
SC 0216
Virtual
Date
Apr 12, 2024, 2:00 pm
Speaker
Debangshu Banerjee, UIUC & Isha Chaudhary, UIUC
Contact
Isha Chaudhary
E-Mail
isha4@illinois.edu
Originating Calendar
Siebel School Events Calendar

Talk 1

Abstract:

We study the verification of relational properties of deep neural networks (DNNs), such as robustness against universal adversarial perturbations and monotonicity. Precisely verifying these properties requires reasoning about multiple executions of the same DNN. To this end, we introduce a new concept, "difference tracking," which computes the difference between the outputs of two executions of the same DNN at all layers. We design a novel abstract domain, DiffPoly, for efficient difference tracking that scales to large DNNs. DiffPoly comes with custom abstract transformers for common activation functions (ReLU, Tanh, Sigmoid, etc.) and affine layers, enabling the precise generation of linear cross-execution constraints. Using DiffPoly and linear program formulations, we implement a relational verifier for DNNs, RaVeN, that handles a broad spectrum of relational properties. Our experiments on challenging benchmarks show that by exploiting precise linear constraints spanning multiple DNN executions, RaVeN achieves substantially higher precision than baseline methods across a wide range of datasets, networks, and relational properties.
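To make "difference tracking" concrete, the sketch below runs two executions of the same toy feed-forward ReLU network on an input and a slightly perturbed copy, and records the output difference at every layer. This is only an illustration of the idea: DiffPoly tracks symbolic linear bounds on these differences rather than the concrete values computed here, and the network weights and perturbation below are hypothetical.

```python
# Concrete illustration of difference tracking: the layer-wise difference
# between two executions of the same DNN. (DiffPoly bounds these
# differences symbolically; here we only compute them numerically.)
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward_with_diffs(weights, biases, x1, x2):
    """Run two executions of the same feed-forward ReLU network and
    record the output difference at every layer."""
    diffs = []
    a1, a2 = x1, x2
    for W, b in zip(weights, biases):
        a1 = relu(W @ a1 + b)
        a2 = relu(W @ a2 + b)
        diffs.append(a1 - a2)
    return a1, a2, diffs

# Toy two-layer network with random (hypothetical) weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(2)]
x = rng.standard_normal(3)
delta = 0.01 * rng.standard_normal(3)   # a small input perturbation

out1, out2, diffs = forward_with_diffs(weights, biases, x, x + delta)
for i, d in enumerate(diffs):
    print(f"layer {i}: max |difference| = {np.abs(d).max():.4f}")
```

A relational verifier would bound these per-layer differences for all perturbations in a set at once, rather than for one sampled perturbation as above.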

References: https://focallab.org/files/raven.pdf

Talk 2

Abstract:

Cost models predict the cost of executing given assembly-code basic blocks on a specific microarchitecture. Recently, neural cost models have been shown to be fairly accurate and easy to construct, and they can replace the heavily engineered analytical cost models used in compilers. However, their black-box nature discourages their adoption. In this work, we develop the first framework, COMET, for generating faithful, generalizable, and intuitive explanations for neural cost models. We generate and compare COMET's explanations for the popular neural cost model Ithemal against those for an accurate CPU-simulation-based cost model, uiCA. We observe an empirical inverse correlation between the prediction errors of Ithemal and uiCA and the granularity of the basic-block features in COMET's explanations for them, indicating potential reasons for Ithemal's higher error relative to uiCA.
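The idea of attributing a cost model's prediction to basic-block features can be sketched with a simple leave-one-out perturbation: drop one instruction at a time and measure how much the predicted cost changes. This is a hypothetical toy, not COMET's actual method; the per-opcode cost table below stands in for a learned model such as Ithemal.

```python
# Toy explanation-by-perturbation for a basic-block cost model.
# COST is a hypothetical per-opcode latency table standing in for a
# learned neural cost model.
COST = {"mov": 1.0, "add": 1.0, "imul": 3.0, "div": 20.0}

def predict_cost(block):
    """Toy cost model: sum of per-instruction costs."""
    return sum(COST[op] for op in block)

def explain(block):
    """Attribute cost to each instruction by leave-one-out deltas."""
    base = predict_cost(block)
    return {
        i: base - predict_cost(block[:i] + block[i + 1:])
        for i in range(len(block))
    }

block = ["mov", "imul", "add", "div"]
attributions = explain(block)
print(attributions)  # {0: 1.0, 1: 3.0, 2: 1.0, 3: 20.0}
```

Here the `div` instruction receives the largest attribution, matching the intuition that it dominates the block's cost; a framework like COMET additionally asks whether such explanations are faithful to the model and generalize across blocks.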

References: https://openreview.net/pdf?id=YVQSGT6ME0
