Presentation will be recorded.
Reliable and multi-agent machine learning has seen tremendous achievements in recent years; yet, the translation from minimization models to min-max optimization models and/or variational inequality models --- two of the basic formulations for reliable and multi-agent machine learning --- is not straightforward. In fact, finding an optimal solution of either nonconvex-nonconcave min-max optimization models or nonmonotone variational inequality models is computationally intractable in general. Fortunately, many application problems exhibit special structure, allowing us to define reasonable optimality criteria and develop simple, provably efficient algorithmic schemes. In this talk, I will present results on structure-driven algorithm design in reliable and multi-agent machine learning. More specifically, I will explain why nonconvex-concave min-max formulations make sense for reliable machine learning and show how to analyze the simple and practical two-timescale gradient descent ascent scheme by exploiting this special structure. I will also show how a simple and intuitive adaptive scheme leads to a class of optimal second-order variational inequality methods. Finally, I will discuss two future research directions for reliable and multi-agent machine learning with potential for significant practical impact: reliable multi-agent learning and reliable topic modeling.
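The two-timescale gradient descent ascent scheme mentioned above can be sketched on a toy saddle-point problem (a minimal illustration only; the quadratic objective and the step sizes below are assumptions for exposition, not taken from the talk, which concerns the harder nonconvex-concave setting):

```python
# Two-timescale gradient descent ascent (GDA) on a toy min-max problem:
#   min_x max_y f(x, y) = 0.5*x^2 + x*y - 0.5*y^2,
# whose unique saddle point is (x, y) = (0, 0).
# "Two-timescale" means the ascent step size is much larger than the
# descent step size, so y approximately tracks the inner maximizer.

def two_timescale_gda(x0, y0, eta_x=0.01, eta_y=0.1, iters=5000):
    x, y = x0, y0
    for _ in range(iters):
        grad_x = x + y   # d f / d x
        grad_y = x - y   # d f / d y
        x -= eta_x * grad_x  # slow descent on x
        y += eta_y * grad_y  # fast ascent on y
    return x, y

x, y = two_timescale_gda(1.0, 1.0)
print(x, y)  # both close to 0.0
```

Because `eta_y` is ten times `eta_x`, the `y` iterate stays near the best response `y* = x`, and the `x` iterate effectively descends the induced function `max_y f(x, y)`; with matched step sizes the same iteration can cycle or diverge on problems of this type.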
Tianyi Lin is currently a Ph.D. candidate in the EECS department at UC Berkeley, advised by Michael I. Jordan. He received his M.S. in pure mathematics and statistics from the University of Cambridge in 2012 and his M.S. in operations research from UC Berkeley in 2017. His research interests include machine learning, optimization, game theory, and optimal transport.