Wednesday, February 9, 2022
12pm
02/09/2022 - 12:00pm

Abstract: We consider the problem of learning a union of subspaces from data corrupted by outliers. State-of-the-art methods based on convex l1 and nuclear norm minimization require the subspace dimensions and the number of outliers to be sufficiently small. In this talk I will present a non-convex approach called Dual Principal Component Pursuit (DPCP), which can provably learn subspaces of high relative dimension and tolerate a large number of outliers by solving a non-convex l1 minimization problem on the sphere. Specifically, I will present both geometric and probabilistic conditions under which every global solution to the DPCP problem is a vector in the orthogonal complement of one of the subspaces. Such conditions show that DPCP can tolerate as many outliers as the square of the number of inliers. I will also present various optimization algorithms for solving the DPCP problem and show that a Projected Sub-Gradient Method admits linear convergence to the global minimum of the underlying non-convex and non-smooth optimization problem. Experiments show that the proposed method is able to handle more outliers and higher relative dimensions than state-of-the-art methods. Joint work with Tianjiao Ding, Daniel Robinson, Manolis Tsakiris, and Zhihui Zhu.

Biosketch: René Vidal is the Herschel Seder Professor of Biomedical Engineering and Director of the Mathematical Institute for Data Science at Johns Hopkins University. He is also an Amazon Scholar, Chief Scientist at NORCE, and Associate Editor in Chief of TPAMI. He also directs the NSF-Simons Collaboration on the Mathematical Foundations of Deep Learning and the TRIPODS Institute on the Foundations of Graph and Deep Learning. His current research focuses on the foundations of deep learning and its applications in computer vision and biomedical data science. He is an AIMBE Fellow, IEEE Fellow, IAPR Fellow and Sloan Fellow, and has received numerous awards for his work, including the IEEE Edward J. McCluskey Technical Achievement Award, D'Alembert Faculty Award, J.K. Aggarwal Prize, ONR Young Investigator Award, NSF CAREER Award as well as best paper awards in machine learning, computer vision, controls, and medical robotics.

Location: https://yale.zoom.us/j/97458245891
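For a concrete picture of the algorithmic part of the abstract, here is a minimal NumPy sketch of a projected subgradient method for the DPCP problem, i.e. minimizing ||X^T b||_1 over the unit sphere with a geometrically decaying step size. The function name `dpcp_psgm`, the step-size constants, and the toy data are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def dpcp_psgm(X, mu0=1e-2, beta=0.9, n_iters=200, seed=0):
    """Sketch of a Projected Sub-Gradient Method for DPCP.

    Minimizes ||X^T b||_1 over the unit sphere ||b||_2 = 1.
    X is D x N (columns are data points: inliers near a subspace
    plus outliers). Returns an approximate normal vector to the
    inlier subspace. All parameter names/values are illustrative.
    """
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(X.shape[0])
    b /= np.linalg.norm(b)                # start on the sphere
    mu = mu0
    for _ in range(n_iters):
        g = X @ np.sign(X.T @ b)          # subgradient of ||X^T b||_1
        b = b - mu * g                    # subgradient step
        b /= np.linalg.norm(b)            # project back onto the sphere
        mu *= beta                        # geometrically decaying step size
    return b

# Toy usage: inliers on a 2-D subspace of R^3 plus a few outliers.
rng = np.random.default_rng(1)
inliers = np.vstack([rng.standard_normal((2, 100)), np.zeros((1, 100))])
outliers = rng.standard_normal((3, 20))
X = np.hstack([inliers, outliers])
b = dpcp_psgm(X)
print(b)  # should be close to +/- e3, the normal to the inlier subspace
```

The geometrically decaying step size is the ingredient tied to the linear convergence mentioned in the abstract; a constant step would only reach a neighborhood of the minimizer.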
4pm
02/09/2022 - 4:15pm

Abstract: An error-correcting code is locally testable (LTC) if there is a random tester that reads only a small number of bits of a given word and decides whether the word is in the code, or at least close to it. A long-standing problem asks if there exists such a code that also satisfies the gold standards of coding theory: constant rate and constant distance. Unlike the classical situation in coding theory, random codes are not LTC, so this problem is a challenge of a new kind. We construct such codes based on what we call (Ramanujan) Left/Right Cayley square complexes. These are 2-dimensional versions of the expander codes constructed by Sipser and Spielman (1996). The main result and lecture will be self-contained, but we hope also to explain how the seminal work of Howard Garland (1972) on the cohomology of quotients of the Bruhat-Tits buildings of p-adic Lie groups has led to this construction (even though it is not used in the end). Based on joint work with I. Dinur, S. Evra, R. Livne, and S. Mozes. The lecture will also serve as a preview for a crash course on this topic, which will start the day after.

Location:
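To make the notion of a local tester concrete, here is a toy Python sketch in the spirit of the Sipser-Spielman expander (Tanner) codes mentioned above: bits sit on the edges of a graph, and the tester reads only the bits around a few random vertices. This is an assumed illustration of the definition, not the Left/Right Cayley square complex construction from the talk; all names and parameters are hypothetical.

```python
import random

def local_tester(word, vertex_edges, base_code, num_queries=5):
    """Toy local tester for a Tanner-style code.

    word         : list of bits, one per edge of the underlying graph
    vertex_edges : for each vertex, the indices of its incident edges
    base_code    : set of locally allowed bit patterns (tuples)

    The tester reads only the bits around a few random vertices and
    accepts iff every sampled local view lies in the base code.
    """
    for _ in range(num_queries):
        v = random.choice(vertex_edges)       # pick a random vertex
        view = tuple(word[e] for e in v)      # read only its incident bits
        if view not in base_code:
            return False                      # reject: a local constraint fails
    return True                               # all sampled local views were valid

# Toy usage: a 4-cycle where each vertex checks that its two incident
# edges carry equal bits, so the only codewords are 0000 and 1111.
vertex_edges = [(i, (i + 1) % 4) for i in range(4)]
base_code = {(0, 0), (1, 1)}
print(local_tester([1, 1, 1, 1], vertex_edges, base_code))  # True
print(local_tester([1, 0, 1, 1], vertex_edges, base_code))  # likely False
```

In constant-rate constructions of this flavor, the underlying graph is an expander and the base code has constant length, so each query reads O(1) bits; the hard part, which the talk addresses, is getting local testability together with constant rate and constant distance.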