Tuesday, February 13, 2018
02/13/2018 - 4:00pm to 5:00pm
Security, privacy, and fairness have become critical in the era of data science and machine learning. More and more we see that achieving universally secure, private, and fair systems is practically impossible. We have seen, for example, how generative adversarial networks can be used to learn about the expected private training data; how the exploitation of additional data can reveal private information in the original data; and how seemingly unrelated features can teach us about each other. Confronted with this challenge, in this work we open a new line of research in which security, privacy, and fairness are learned and enforced in a closed environment. The goal is to ensure that a given entity (e.g., a company or the government), trusted to infer certain information from our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce a diagnosis for a patient (the positive task) without being able to infer the gender of the subject (the negative task). Similarly, a company can guarantee that internally it is not using the provided data for any undesired task, an important goal that does not contradict the virtually impossible challenge of blocking everybody from the undesired task. We design a system that learns to succeed at the positive task while simultaneously failing at the negative one, and illustrate this with challenging cases where the positive task is actually harder than the negative one being blocked. Fairness with respect to the information in the negative task is often automatically obtained as a result of the proposed approach. This framework and these examples open the door to security, privacy, and fairness in very important closed scenarios, ranging from private-data-accumulation companies like social networks to law enforcement and hospitals.
The talk will present initial results, connect the mathematics of privacy with continuous learning and explainable AI, and open the discussion of this newly emerging paradigm in privacy and learning. Joint work with J. Sokolic, Q. Qiu, and M. Rodrigues.
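The positive/negative task setup described in the abstract can be sketched as an adversarial training loop: a shared encoder is trained to support a classifier for the allowed task while an adversary, which tries to recover the protected attribute from the encoding, feeds a reversed gradient back to the encoder. The following is a minimal, hypothetical illustration with a linear encoder and logistic heads on synthetic data; it is not the authors' implementation, and all variable names and hyperparameters are assumptions.

```python
import numpy as np

# Synthetic data: the allowed (positive) task depends on feature 0,
# the protected (negative) task on feature 1.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
y_pos = (X[:, 0] > 0).astype(float)   # allowed task label
y_neg = (X[:, 1] > 0).astype(float)   # protected attribute

def sigmoid(t):
    t = np.clip(t, -30, 30)           # numerical stability
    return 1.0 / (1.0 + np.exp(-t))

w = rng.normal(size=2) * 0.1          # shared linear encoder: z = X @ w
a, b = 1.0, 0.0                       # positive-task head
c, d = 1.0, 0.0                       # adversary head
lr, lam = 0.1, 1.0                    # assumed learning rate and trade-off

for step in range(2000):
    z = X @ w
    p_pos = sigmoid(a * z + b)
    p_neg = sigmoid(c * z + d)
    # Gradients of the two cross-entropy losses w.r.t. their logits.
    g_pos = (p_pos - y_pos) / n
    g_neg = (p_neg - y_neg) / n
    # Adversary descends on its own loss (tries to predict y_neg).
    c -= lr * np.dot(g_neg, z)
    d -= lr * g_neg.sum()
    # Encoder descends on the positive loss and ASCENDS on the
    # adversary's loss (gradient reversal), blocking the negative task.
    w -= lr * (X.T @ (a * g_pos) - lam * X.T @ (c * g_neg))
    a -= lr * np.dot(g_pos, z)
    b -= lr * g_pos.sum()

z = X @ w
acc_pos = ((sigmoid(a * z + b) > 0.5) == (y_pos > 0.5)).mean()
acc_neg = ((sigmoid(c * z + d) > 0.5) == (y_neg > 0.5)).mean()
print(f"positive-task accuracy:  {acc_pos:.2f}")  # should be high
print(f"protected-task accuracy: {acc_neg:.2f}")  # should be near chance (0.5)
```

At convergence the encoder keeps the direction needed for the positive task but suppresses the component carrying the protected attribute, so the adversary is driven toward chance-level accuracy, which is the blocking behavior the abstract describes.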
02/13/2018 - 4:15pm to 5:15pm
The "shift locus" of degree d is a subset of the space of one-variable complex polynomials of degree d. In this talk, I will first recall some dynamical properties of the polynomials in this subset, and then explain a current joint project with Calegari, He, Koch and Walker, in which we are trying to understand the fundamental group of this "shift locus" in terms of "braids on a Cantor set".
02/13/2018 - 4:30pm to 5:30pm
The geometric Langlands program arose in the 1980s as an analogue of the Langlands program for algebraic curves, but only recently were Arinkin and Gaitsgory (2012) able to formulate a plausible categorical version of the conjecture. A few years earlier, Kapustin and Witten (2006) had placed a version of the categorical conjecture in a physical context, but their work captured neither the algebro-geometric nature of the conjecture nor the subtleties Arinkin and Gaitsgory had to overcome. After setting up a rigorous mathematical model for Kapustin and Witten's theory, we identify the physics behind the Arinkin-Gaitsgory formulation of the conjecture. Moreover, from the physical interpretation, we suggest some curious factorization structure in the geometric Langlands theory. This is based on joint work with Chris Elliott.