What are the optimal algorithms for learning from data? Have we found them already, or are better ones out there to be discovered? Making these questions precise, and answering them, requires taking on the complex interaction between statistics and computation. It also requires reconciling our existing toolbox with surprising new phenomena arising from practice, which violate conventional rules of thumb regarding algorithm and model design. I will discuss some progress along these lines: in terms of designing new algorithms for basic learning problems, and controlling generalization in large statistical models.
Frederic Koehler is currently a Motwani Postdoctoral Fellow in the Department of Computer Science at Stanford University. He was previously a research fellow in the Simons Institute program on "Computational Complexity of Statistical Inference". He received his PhD in Mathematics and Statistics from the Massachusetts Institute of Technology, where he was advised by Elchanan Mossel. His research interests center on algorithmic and statistical problems in learning, including work on generalization, learning algorithms, and the theory of sampling.
Meeting ID: 817 4846 2830; Password: csillinois