TITLE: Towards Understanding Deep Neural Networks
Explaining a deep learning model's inner workings and decisions is increasingly important, especially for life-critical applications, e.g., in medical diagnosis or criminal justice. Work in this space can be divided into two main types: global and local explanations. For global explanations, I will discuss three different ways one can synthesize what a single neuron wants to see, shedding light on the inner workings of neural networks. Interestingly, this type of visualization work (called "activation maximization") also opened a window into all sorts of failures of state-of-the-art ML models (here, image classifiers). For example, we recently found that simply randomly rotating and randomly positioning a familiar, training-set 3D object in front of the camera is sufficient to bring the classification accuracy from 77.5% down to 3%. Such notorious brittleness of neural networks begs for explanations of why a model makes a given decision, i.e., local explanations. For local explanations, I will briefly share recent work showing that attribution methods are unreliable, being sensitive to hyperparameters, and that harnessing generative models to synthesize counterfactual intervention samples can improve the robustness and accuracy of these methods.
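The core idea behind activation maximization is simple: treat the input (not the weights) as the optimization variable, and ascend the gradient of a chosen neuron's activation until the input strongly excites that neuron. The sketch below illustrates the idea with a hypothetical linear "neuron" in plain NumPy; a real application would instead backpropagate through a trained image classifier and typically add an image prior to keep the result natural-looking.

```python
import numpy as np

# Toy activation maximization: find the input that most excites one neuron.
# The "neuron" here is a hypothetical linear unit a(x) = w . x; in practice
# it would be a unit inside a trained network, and x an image tensor.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # weights of the toy neuron (stand-in for a real unit)
x = np.zeros(16)          # start from a blank "image"

lr = 0.1
for _ in range(100):
    grad = w              # d(activation)/dx for a linear unit; a real model
                          # would compute this via backpropagation
    x += lr * grad        # gradient ascent: change x to excite the neuron more
    n = np.linalg.norm(x)
    if n > 1.0:           # constrain x to a unit ball, a crude stand-in for
        x /= n            # the image priors used in practice

# x now points along w: the stimulus this neuron "wants to see".
```

For this linear toy the optimum is just the (normalized) weight vector itself; the value of the method shows up in deep networks, where the same gradient-ascent loop recovers the complex visual patterns a hidden unit responds to.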
BIO: Anh completed his Ph.D. in 2017 at the University of Wyoming, working with Jeff Clune and Jason Yosinski. His current research focus is deep learning, specifically explainable artificial intelligence and generative models. He has also worked as an ML research intern at Apple and Geometric Intelligence (now Uber AI Labs), and as a software engineer at Bosch. Anh's research has won three Best Paper Awards (at CVPR, GECCO, and the ICML Visualization workshop) and two Best Research Video Awards (at IJCAI and AAAI). His research has also been covered by many press articles, including in MIT Technology Review, Nature, and Scientific American, and featured in machine learning lectures at various institutions.