Computational methods have long faced a trade-off between scalability and optimality guarantees. Yet most of today's systems, such as transportation, power, and brain networks, are both large-scale and safety-critical, and therefore demand scalability and optimality guarantees at once. To address this challenge, we develop structure-aware, scalable computational methods for safety-critical systems that come equipped with performance guarantees. We show that exploiting hidden structures, such as graph-induced or spectral sparsity, is key to obtaining scalable computational methods with guarantees for real-world problems.
In the first part of the talk, we present a massively scalable algorithm for the graphical model inference problem, where the goal is to reveal hidden correlation structures in high-dimensional datasets. We introduce a graph-based method that solves instances with billions of variables in under an hour, significantly outperforming other state-of-the-art methods. In the second part of the talk, we provide global guarantees for a class of nonconvex and nonsmooth optimization problems in safe machine learning. In particular, we show that, despite their nonconvexity, a class of robust matrix recovery problems is devoid of spurious, suboptimal solutions, which guarantees the success of fast local-search algorithms. To demonstrate the effectiveness of the developed methods, we present case studies on real-world systems, such as power, transportation, and brain networks.
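To give a flavor of the "no spurious solutions" phenomenon behind these guarantees, the toy sketch below (not the speakers' algorithm; the problem instance, dimensions, and step size are illustrative choices) runs plain gradient descent on the nonconvex rank-one matrix recovery objective min_x ||x xᵀ − M||²_F with M = z zᵀ. Although the objective is nonconvex, a generic initialization recovers M, consistent with the benign-landscape results the abstract refers to.

```python
# Illustrative sketch only: rank-one symmetric matrix recovery.
# Objective: f(x) = ||x x^T - M||_F^2 with M = z z^T (nonconvex in x).
# Despite nonconvexity, gradient descent from a generic start recovers M.

def grad(x, M):
    """Gradient of ||x x^T - M||_F^2, i.e., 4 * (x x^T - M) x."""
    n = len(x)
    R = [[x[i] * x[j] - M[i][j] for j in range(n)] for i in range(n)]
    return [4.0 * sum(R[i][j] * x[j] for j in range(n)) for i in range(n)]

def recover(M, x0, lr=0.01, steps=5000):
    """Plain gradient descent (local search) on the recovery objective."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x, M)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Ground-truth rank-one matrix M = z z^T (hypothetical example data).
z = [1.0, -2.0, 0.5]
M = [[zi * zj for zj in z] for zi in z]

# A generic (not adversarially chosen) starting point suffices.
x = recover(M, x0=[0.5, 0.1, 0.1])
```

After the run, `x xᵀ` matches `M` up to numerical precision (equivalently, `x ≈ ±z`): the only local minima of this objective are global. Note that real robust recovery problems use nonsmooth losses and noisy measurements; this smooth, noiseless instance only illustrates the landscape argument.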