Monday, November 2, 2020
11/02/2020 - 2:30pm
One of the aims of dimension reduction is to find intrinsic coordinates that describe the data manifold. Manifold learning algorithms developed in machine learning return abstract coordinates; finding their physical or domain-related meaning is not formalized and is left to domain experts. In this talk, I propose a method to explain the embedding coordinates of a manifold as non-linear compositions of functions from a user-defined dictionary. I show that this problem can be set up as a sparse linear Group Lasso recovery problem, find sufficient recovery conditions, and demonstrate its effectiveness on data. With this class of new methods, called ManifoldLasso, a scientist can specify a (large) set of functions of interest, and the algorithm selects the few that best explain the embedding coordinates.
In the more general case, when functions with physical meaning are not available, I will present a statistically founded methodology to estimate and then cancel out the distortions introduced by a manifold learning algorithm, thus effectively preserving the Riemannian geometry of the original data.
All the methods described are implemented in the Python package megaman and can be applied to data sets of up to a million points.
This work is part of Marina Meila's current research program "Unsupervised Validation for Unsupervised Learning", which aims to design broad-ranging, mathematically and statistically grounded methods to interpret, verify and validate the output of unsupervised machine learning algorithms with a minimum of assumptions and human intervention.
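The sparse recovery step the abstract refers to can be illustrated with a minimal group lasso solver. This is a generic proximal-gradient sketch on synthetic data, not the ManifoldLasso implementation or the megaman API; the function and variable names here are illustrative only:

```python
import numpy as np

def group_lasso(X, y, groups, lam, step=None, n_iter=1000):
    """Proximal gradient descent for the group lasso:
        min_b  (1/(2n)) ||y - X b||^2  +  lam * sum_g ||b_g||_2
    where each g in `groups` is a list of coefficient indices.
    The group-wise L2 penalty zeroes out entire groups at once,
    which is the kind of structured sparsity used to select whole
    dictionary functions rather than individual coefficients."""
    n, p = X.shape
    if step is None:
        # 1/L, where L = ||X||_2^2 / n is the gradient's Lipschitz constant
        step = n / np.linalg.norm(X, 2) ** 2
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        for g in groups:
            # block soft-thresholding: shrink each group's norm,
            # setting the whole group to zero if it falls below lam*step
            norm = np.linalg.norm(z[g])
            z[g] = 0.0 if norm == 0 else max(0.0, 1 - step * lam / norm) * z[g]
        b = z
    return b

# Synthetic demo: only the first of three groups is truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
b_true = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ b_true
b_hat = group_lasso(X, y, groups=[[0, 1], [2, 3], [4, 5]], lam=0.05)
```

Under standard incoherence conditions of the sort the talk's recovery guarantees formalize, the two inactive groups are driven exactly to zero while the active group is recovered with a small shrinkage bias proportional to lam.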
11/02/2020 - 4:00pm
Patterson-Sullivan measures were introduced by Patterson (1976) and Sullivan (1979) to study Kleinian groups and their limit sets. In this talk, we discuss an extension of this classical construction to $P$-Anosov subgroups $\Gamma$ of $G$, where $G$ is a real semisimple Lie group and $P<G$ is a parabolic subgroup. In parallel with the theory for Kleinian groups, we will discuss how one can understand the Hausdorff dimension of the limit set of $\Gamma$ in terms of a certain critical exponent. This is joint work with Michael Kapovich.
11/02/2020 - 4:30pm
Kazhdan and Lusztig proved the Deligne-Langlands conjecture, a bijection between irreducible principal block representations of a p-adic group and certain unipotent Langlands parameters (a q-commuting semisimple-nilpotent pair) in the Langlands dual group. We lift this bijection to a statement on the level of categories. Namely, we define a stack of unipotent Langlands parameters and a coherent sheaf on it, the coherent Springer sheaf, which generates a subcategory of the derived category of coherent sheaves equivalent to modules for the affine Hecke algebra (or, specializing at q, smooth principal block representations of a p-adic group). Our approach involves categorical traces, Hochschild homology, and Bezrukavnikov's Langlands dual realizations of the affine Hecke category. This is joint work with David Ben-Zvi, David Helm and David Nadler.
https://yale.zoom.us/j/99433355937 (password was emailed by Ivan on 9/11, also available from Ivan by email)