Link to Talk Video: https://mediaspace.illinois.edu/media/t/1_83i4fadr
Abstract: Prediction models should know what they do not know if they are to be trusted for making important decisions. A prediction model would accurately capture its uncertainty if it could predict the true probability of the outcome of interest, such as the true probability of a patient's illness given the symptoms. While outputting these probabilities exactly is impossible in most cases, I show that it is surprisingly possible to learn probabilities that are “indistinguishable” from the true probabilities for large classes of decision-making tasks. I propose algorithms to learn indistinguishable probabilities, and show that they provably enable accurate risk assessment and better decision outcomes. In addition to learning probabilities that capture uncertainty, my talk will also discuss how to acquire information to reduce uncertainty in ways that optimally improve decision making. Empirically, these methods lead to prediction models that enable better and more confident decision making in applications such as medical diagnosis and policy making.
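To make the notion of probabilities that are "indistinguishable" from the truth concrete, here is a minimal synthetic sketch (not the speaker's algorithm; all names and thresholds are illustrative). It checks a weak form of indistinguishability, namely calibration: within each bin of predicted probabilities, the average prediction should match the average realized outcome, so a decision maker acting on the predictions cannot detect the difference from acting on the true probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: draw "true" outcome probabilities and sample binary
# outcomes from them.
p_true = rng.uniform(0.0, 1.0, size=100_000)
outcomes = rng.binomial(1, p_true)

# A predictor that matches the true probabilities, and a miscalibrated one.
pred_good = p_true
pred_bad = np.clip(p_true + 0.2, 0.0, 1.0)

def max_calibration_gap(pred, y, n_bins=10):
    """Largest |mean prediction - mean outcome| over equal-width bins."""
    bins = np.minimum((pred * n_bins).astype(int), n_bins - 1)
    gaps = []
    for b in range(n_bins):
        mask = bins == b
        if mask.sum() > 0:
            gaps.append(abs(pred[mask].mean() - y[mask].mean()))
    return max(gaps)

print(max_calibration_gap(pred_good, outcomes))  # near zero
print(max_calibration_gap(pred_bad, outcomes))   # large
```

Full indistinguishability as discussed in the talk is stronger, holding simultaneously against a large class of downstream decision tasks rather than a single binning, but the per-bin test above conveys the basic idea.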
Bio: I am a PhD candidate in computer science at Stanford University, advised by Stefano Ermon. I research prediction models and autonomous agents that can be reliably deployed in high-stakes applications, with a focus on topics including uncertainty quantification, probabilistic deep learning, experimental design, and ML for science. I have authored 22 peer-reviewed publications at top conferences, including 5 orals, and have won multiple merit-based fellowships.