Machine learning has been remarkably successful and is now prevalent in everyday life, shaping many aspects of modern society. Nevertheless, many fundamental questions remain, and it is important to develop a proper theoretical understanding of machine learning to guide its future development. In this talk I will discuss fundamental properties of optimization, sampling, and game dynamics for machine learning. In optimization, I will present a variational perspective on accelerated methods via the principle of least action in continuous time, and derive new families of accelerated methods that achieve faster convergence under refined smoothness conditions. In sampling, I will present a study of sampling as optimization in the space of measures, and show fast convergence of Langevin algorithms under isoperimetry conditions that extend classical log-concavity results. In game dynamics, I will present an analysis of minimax games as skew-optimization in the space of joint configurations, and show fast convergence of the classical fictitious play algorithm and its optimistic variant under smoothness.
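As a point of reference for the sampling portion of the abstract, the following is a minimal illustrative sketch (not taken from the talk) of the unadjusted Langevin algorithm (ULA), the discrete-time scheme x_{k+1} = x_k - eta * grad f(x_k) + sqrt(2 * eta) * noise, which targets the density proportional to exp(-f). The target here, f(x) = x^2/2, is strongly log-concave, a special case of the isoperimetry conditions the abstract mentions; all function names and parameters below are chosen for illustration.

```python
import numpy as np

def ula_samples(grad_f, eta=0.1, n_steps=50_000, burn_in=1_000, seed=0):
    """Run the unadjusted Langevin algorithm and return post-burn-in iterates.

    Iterates: x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * xi_k,
    with xi_k standard Gaussian noise; the chain approximately samples
    from the density proportional to exp(-f(x)).
    """
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    for k in range(n_steps):
        x = x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.standard_normal()
        if k >= burn_in:
            samples.append(x)
    return np.array(samples)

# Target: f(x) = x^2 / 2, i.e. a standard Gaussian, so grad_f(x) = x.
samples = ula_samples(grad_f=lambda x: x)
```

For this Gaussian target one can compute the bias exactly: with step size eta, the ULA chain is an AR(1) process with stationary variance 1/(1 - eta/2), so the empirical variance sits slightly above 1 and the discretization bias vanishes as eta tends to 0.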
Andre Wibisono is a postdoctoral researcher in the School of Computer Science at the Georgia Institute of Technology. He received his SB in Mathematics and in Computer Science in 2009 and his MEng in Computer Science in 2010 from MIT. He received his MA in Statistics in 2013 and his PhD in Computer Science in 2016 from the University of California, Berkeley, advised by Michael I. Jordan. Andre was a postdoctoral researcher at the University of Wisconsin, Madison from 2016 to 2018 before joining Georgia Tech. His research interests are in the theoretical and algorithmic aspects of machine learning, including problems in optimization, sampling, and game dynamics.
Faculty Host: Matus Telgarsky