Industrial & Enterprise Systems Engineering Calendar


Dissertation Defense for Peijun Xiao

Event Type: Seminar/Symposium
Sponsor: ISE Graduate Programs
Location: Virtual
Date: Apr 8, 2022, 9:00 - 10:30 am
Speaker: Peijun Xiao
Contact: Lauren Redman
E-Mail: lredman@illinois.edu
Phone: 217-333-2252
Originating Calendar: ISE Academic Programs

Abstract: This thesis aims to understand and develop faster algorithms for solving constrained and min-max optimization problems in three settings.


We start with constrained optimization, move to the more general setting of min-max optimization, and end with distributed optimization with compression. In (un)constrained optimization, block decomposition algorithms are commonly used: they decompose the problem variable into multiple blocks and update only one block at each iteration to save computation. In each iteration, the block to update is chosen according to a particular update order, and we are interested in how this order affects performance. For unconstrained problems, we prove that coordinate descent with two deterministic symmetric update orders can be $\mathcal{O}(n^2)$ times slower than randomized coordinate descent. For some constrained problem instances, we empirically demonstrate that ADMM with those two symmetric update orders can converge $\mathcal{O}(n^2)$ times more slowly than randomly permuted ADMM.

Next, we study min-max optimization, a setting more general than constrained optimization. Whereas the classical gradient descent ascent (GDA) algorithm can oscillate, we introduce a "smoothing" scheme that stabilizes the iterates and ensures convergence to a stationary solution. We prove that the proposed algorithm achieves an $\mathcal{O}(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions and an $\mathcal{O}(1/\epsilon^4)$ iteration complexity for general nonconvex-concave problems. Furthermore, we extend the algorithm to multi-block cases and illustrate its practical efficiency in robust training.

Finally, we study compressed distributed algorithms, which are commonly used for large-scale problems where distributed computation is necessary and communication is a bottleneck. A natural goal when designing such algorithms is to preserve the convergence speed, i.e., to achieve the same rate as the uncompressed counterpart. We propose a class of algorithms that preserve this speed for unconstrained finite-sum problems in both strongly convex and nonconvex settings. Using a universal proof framework, we show that the proposed algorithms are all special cases of a certain inexact SGD and share the convergence rate of standard SGD. We further extend the compression scheme and analysis framework to distributed min-max problems, and prove that the proposed algorithms also preserve the convergence speed in the nonconvex-strongly-concave setting.
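
The two sketches below are minimal illustrations of the kinds of algorithms described in the abstract; the objectives, step sizes, and specific update orders are assumptions chosen for demonstration, not the thesis's exact algorithms or results. The first compares coordinate descent under a hypothetical deterministic symmetric sweep (forward then backward) against a randomly permuted sweep on a toy quadratic, in the spirit of the update-order comparison in the first part of the abstract.

```python
import numpy as np

def coordinate_descent(A, b, order_fn, sweeps=100):
    """Exact coordinate descent on the quadratic 0.5 * x'Ax - b'x.

    order_fn(n) returns the sequence of coordinate indices visited in one
    sweep, so the same routine can run a fixed deterministic order or a
    fresh random permutation every sweep.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in order_fn(n):
            # Exact minimization over coordinate i with the others held fixed.
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Illustrative update orders: a symmetric forward-then-backward sweep and a
# random permutation redrawn every sweep.
symmetric_order = lambda n: list(range(n)) + list(range(n - 1, -1, -1))
random_order = lambda n: np.random.permutation(n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((30, 20))
    A, b = M.T @ M + 0.1 * np.eye(20), rng.standard_normal(20)
    x_star = np.linalg.solve(A, b)
    for name, order in [("symmetric", symmetric_order), ("random", random_order)]:
        err = np.linalg.norm(coordinate_descent(A, b, order) - x_star)
        print(f"{name:>9} order, distance to optimum after 100 sweeps: {err:.2e}")
```

The second sketches a smoothing scheme for gradient descent ascent in the spirit of the second part of the abstract: an auxiliary anchor z and a proximal term (p/2)||x - z||^2 are used to damp the oscillation of plain GDA. The toy objective, the coefficient p, and the step sizes cx, cy, beta are hypothetical choices.

```python
import numpy as np

def smoothed_gda(grad_x, grad_y, x0, y0, p=1.0, cx=0.01, cy=0.05, beta=0.1, iters=2000):
    """Smoothed gradient descent ascent (sketch) for min_x max_y f(x, y).

    The x step descends on the smoothed surrogate
        K(x, y; z) = f(x, y) + (p / 2) * ||x - z||^2,
    the y step ascends on f, and the anchor z slowly tracks x.
    """
    x, y, z = x0.copy(), y0.copy(), x0.copy()
    for _ in range(iters):
        x = x - cx * (grad_x(x, y) + p * (x - z))  # descent step on K in x
        y = y + cy * grad_y(x, y)                  # ascent step in y
        z = z + beta * (x - z)                     # move the anchor toward x
    return x, y

if __name__ == "__main__":
    # Toy problem: min_x max_y 0.5*x^2 + 2*x*y - y^2, whose solution is (0, 0).
    grad_x = lambda x, y: x + 2.0 * y
    grad_y = lambda x, y: 2.0 * x - 2.0 * y
    x, y = smoothed_gda(grad_x, grad_y, np.array([3.0]), np.array([-2.0]))
    print("approximate stationary point:", x, y)
```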
