Projective Splitting Methods for Monotone Inclusions and Convex Optimization
This talk describes a relatively new class of decomposition methods with numerous applications in convex optimization. A very general class of monotone inclusion problems (subsuming all of convex optimization) is reformulated as finding a point in a certain convex “Kuhn-Tucker” solution set in a primal-dual space. Each iteration constructs a hyperplane separating the current iterate from the Kuhn-Tucker set, and then projects the iterate onto this hyperplane, possibly with some over-relaxation. There are many possible ways to generate the separating hyperplane, but all involve decomposing the problem into smaller components that may essentially be considered separately. Unlike other classes of decomposition methods, projective splitting algorithms do not need to process every problem component at each iteration: instead, the calculations for some components can simply be “recycled”. In addition to presenting the basic theory, this talk will discuss some computational results on data fitting problems with multiple regularizers. In building up the underlying theory, the talk will also attempt to cover the basics of monotone operators and their relation to convex optimization, proximal/resolvent operations, Douglas-Rachford splitting, and the alternating direction method of multipliers (ADMM).
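The relaxed projection step at the heart of the method can be sketched in a few lines. This is a generic illustration, not the talk's algorithm: it assumes the separating hyperplane has already been computed in the form {x : ⟨a, x⟩ = b}, with the Kuhn-Tucker set contained in the halfspace {x : ⟨a, x⟩ ≤ b}, and `rho` is a relaxation parameter in (0, 2).

```python
import numpy as np

def relaxed_hyperplane_projection(z, a, b, rho=1.0):
    """Project z toward the halfspace {x : <a, x> <= b}, with relaxation.

    If <a, z> > b, the hyperplane {x : <a, x> = b} separates z from the
    halfspace. rho = 1 gives the exact projection onto the hyperplane;
    rho in (1, 2) over-relaxes, rho in (0, 1) under-relaxes.
    """
    gap = a @ z - b
    if gap <= 0:
        # z is already on the solution side of the hyperplane.
        return z
    return z - rho * (gap / (a @ a)) * a

# Example: z = (2, 0) lies outside the halfspace {x : x[0] <= 1}.
z_new = relaxed_hyperplane_projection(
    np.array([2.0, 0.0]), np.array([1.0, 0.0]), 1.0, rho=1.0
)
```

With `rho=1.0` the point lands exactly on the hyperplane, at (1, 0); with `rho=1.5` it would overshoot to (0.5, 0), which can accelerate convergence toward the solution set.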