Research Seminars @ Illinois

Tailored for undergraduate researchers, this calendar is a curated list of research seminars at the University of Illinois. Explore the diverse world of research and expand your knowledge through engaging sessions designed to inspire and enlighten.

To have your events added to or removed from this calendar, please contact OUR at ugresearch@illinois.edu.

Machine Learning Seminar: Dr. Olawale Salaudeen, "Characterizing When Spurious Correlations and Distribution Shifts Harm Generalization."

Apr 3, 2026, 2:00–3:15 pm

Sponsor: Research Area of Artificial Intelligence
Speaker: Dr. Olawale Salaudeen
Contact: Allison Mette
E-Mail: agk@illinois.edu
Originating Calendar: Siebel School Speakers Calendar
Title: Characterizing When Spurious Correlations and Distribution Shifts Harm Generalization

Abstract: Machine learning systems often exhibit strong performance on standard benchmarks yet fail under distribution shift. In this talk, I revisit the common observation that accuracy on in-distribution data often predicts out-of-distribution performance ("accuracy on the line") and show that this phenomenon is often a consequence of misspecified distribution-shift benchmarks rather than a reliable indicator of robustness. I present a characterization of when such relationships hold and when they break down, highlighting the roles of spurious correlations and evaluation design. Building on this, I examine when methods aimed at robustness, particularly causal representation learning approaches, lead to improved generalization in practice. Together, these results point toward a more precise understanding of what our evaluations measure, and when they provide reliable signals for deployment under real-world change.

Bio: Olawale (Wale) Salaudeen is a postdoctoral researcher at MIT and the Broad Institute of MIT and Harvard. He received his PhD in Computer Science from the University of Illinois at Urbana-Champaign and held a visiting PhD student appointment at Stanford University. His research advances a unified agenda of Reliable AI through Measurement and Intervention. He develops valid measurements of AI capabilities and risks and designs interventions that address the mechanisms driving system failures, particularly under distribution shift. His work has appeared in leading venues such as NeurIPS, AISTATS, and TMLR, earning spotlight and oral presentations, as well as a Best Paper Award at the NeurIPS Workshop on LLM Evaluation. He has also received competitive research fellowships, including the AI Center Fellowship at Schmidt Sciences.