Submodular Maximization: From Discrete to Continuous and Back

Seminar: 
Applied Mathematics/Analysis Seminar
Event time: 
Tuesday, April 3, 2018 - 4:00pm to 5:00pm
Speaker: 
Amin Karbasi
Speaker affiliation: 
Yale Institute of Network Science
Event description: 

Many procedures in statistics and artificial intelligence require solving non-convex problems. Historically, the focus has been on convexifying non-convex objectives. In recent years, however, there has been significant progress in optimizing non-convex functions directly. This direct approach has led to provably good guarantees for specific problem instances such as latent variable models, non-negative matrix factorization, robust PCA, and matrix completion. Unfortunately, there is no free lunch: it is well known that, in general, finding the global optimum of a non-convex optimization problem is NP-hard. This computational barrier has largely shifted the goal of non-convex optimization in two directions: (a) finding an approximate local minimum by avoiding saddle points, or (b) characterizing general conditions under which the underlying non-convex optimization is tractable.

In this talk, I will consider a broad class of non-convex optimization problems that possess special combinatorial structure. More specifically, I will focus on the maximization of stochastic continuous submodular functions, which exhibit diminishing returns. Despite the apparent lack of convexity in such functions, we will see that first-order methods can indeed provide strong approximation guarantees. In particular, for monotone continuous submodular functions, we will show that projected stochastic gradient methods achieve a 1/2 approximation ratio. We will then see how to reach the tight (1-1/e) approximation guarantee by developing a new class of stochastic projection-free gradient methods. A simple variant of these algorithms also achieves a (1/e) approximation ratio in the non-monotone case. Finally, using stochastic continuous optimization as an interface, we will also provide tight approximation guarantees for maximizing a (monotone or non-monotone) stochastic submodular set function subject to a general matroid constraint.

I will not assume any particular background on submodularity or optimization and will try to motivate and define all the necessary concepts.
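As a concrete illustration of the first guarantee, the following is a minimal sketch of projected stochastic gradient ascent on a toy monotone continuous DR-submodular objective over the box [0,1]^n. The quadratic objective, noise model, step size, and helper names here are illustrative assumptions, not the speaker's implementation; the abstract only asserts that such methods attain a 1/2 approximation in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monotone DR-submodular objective on the box [0,1]^n:
#   F(x) = a.x - 0.5 * x.H.x  with H >= 0 entrywise, so every entry of the
# Hessian (-H) is non-positive -- the continuous diminishing-returns condition.
n = 5
H = rng.uniform(0, 1, (n, n))
H = (H + H.T) / 2                       # symmetric, entrywise non-negative
a = H.sum(axis=1) + rng.uniform(0, 1, n)  # keeps grad F = a - Hx > 0 on the box (monotone)

def stochastic_grad(x):
    """Unbiased noisy estimate of grad F(x) = a - Hx (assumed noise model)."""
    return a - H @ x + rng.normal(0, 0.1, n)

def project(x):
    """Euclidean projection onto the box [0,1]^n."""
    return np.clip(x, 0.0, 1.0)

x = np.full(n, 0.5)
for t in range(1, 2001):
    # projected ascent step with a diminishing step size
    x = project(x + (0.5 / np.sqrt(t)) * stochastic_grad(x))
print("approximate maximizer:", np.round(x, 3))
```

The diminishing-returns structure enters only through the sign condition on the Hessian; the objective above is generally not concave, which is exactly the regime the talk addresses.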
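The tight (1-1/e) guarantee mentioned above comes from projection-free, Frank-Wolfe style methods. Below is a hedged sketch of a stochastic continuous greedy scheme of that flavor, with hypothetical helper names (`stochastic_grad`, `linear_maximizer`) and an assumed momentum schedule; it is an illustration of the general technique, not the speaker's algorithm.

```python
import numpy as np

def stochastic_continuous_greedy(stochastic_grad, linear_maximizer, n, iters=100):
    """Projection-free maximization of a monotone continuous DR-submodular F
    over a down-closed convex set K containing 0.

    stochastic_grad(x)  -- unbiased estimate of grad F(x)
    linear_maximizer(d) -- argmax_{v in K} <v, d>, a linear subproblem
    """
    x = np.zeros(n)          # start at the origin, assumed to lie in K
    d = np.zeros(n)          # momentum-averaged gradient estimate
    for t in range(1, iters + 1):
        rho = 4.0 / (t + 8) ** (2 / 3)        # a typical decaying momentum weight
        d = (1 - rho) * d + rho * stochastic_grad(x)
        v = linear_maximizer(d)               # best feasible ascent direction
        x = x + v / iters                     # fixed step; x stays in K by convexity
    return x
```

Over a matroid polytope the linear subproblem is solved exactly by a greedy pass over the coordinates, which is what makes projection-free methods a natural interface to the stochastic submodular set-function problems the abstract mentions.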