Title. Statistical-Computational Trade-Offs in Large-Scale Random Models
Abstract. Broad classes of models in high-dimensional statistics exhibit an intriguing trade-off between computational and statistical efficiency: computationally efficient algorithms are often suboptimal in statistical terms (e.g., the amount of data, signal strength, or objective value they require), whereas statistically efficient algorithms typically demand intensive computational resources.
In this talk, I will focus on a natural class of such models known as random optimization problems, which entail optimizing intrinsically random objectives. Such problems arise in data science, network science, computer science, and beyond. I will introduce an emerging theoretical framework that leverages geometric insights to elucidate the optimal statistical performance attainable by a wide range of practical algorithms. To illustrate this framework, I will discuss some of my previous work, including results on the perceptron, number partitioning, and spin glasses, as well as ongoing work, especially in the online setting.