Abstract: The uninterpretability of Deep Neural Networks (DNNs) hinders their use in safety-critical applications. Abstract-Interpretation-based DNN certifiers offer a promising avenue for building trust in DNNs, but unsoundness in their mathematical logic can lead to incorrect certification results. Current approaches to ensuring their soundness rely on manual, expert-driven proofs that are tedious to develop, slowing the development of new certifiers. Automating the verification process is challenging because it must handle certifiers for arbitrary DNN architectures and diverse abstract analyses.
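To make the notion of an Abstract-Interpretation-based DNN certifier concrete, below is a minimal, hypothetical sketch (not code from the talk) of interval-bound propagation through a tiny two-layer ReLU network. The function names and the toy weights are illustrative assumptions; the point is that the certifier's soundness rests on exactly this kind of bound arithmetic, which is what verification must establish.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> Wx + b.

    Splitting W into its positive and negative parts yields sound
    (over-approximate) output bounds for every input in the box."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so applying it to both bounds stays sound."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy two-layer network: certify that the output stays positive for
# every input in the box [-0.1, 0.1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.3])

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = interval_relu(*interval_affine(W1, b1, lo, hi))
lo, hi = interval_affine(W2, b2, lo, hi)
print("output bounds:", lo, hi)  # here [0.3, 0.75]: lo > 0, so certified
```

A subtle sign error in the positive/negative split above would silently produce bounds that look plausible but are unsound, which illustrates why manual soundness proofs for each new abstract analysis are both necessary today and tedious to develop.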