“Learning to Succeed while Teaching to Fail: Privacy in Machine Learning Systems”

Seminar: 
Applied Mathematics/Analysis Seminar
Event time: 
Tuesday, February 13, 2018 - 4:00pm to 5:00pm
Speaker: 
Guillermo Sapiro
Speaker affiliation: 
Duke University
Event description: 

Security, privacy, and fairness have become critical in the era of data science and machine learning. More and more we see that achieving universally secure, private, and fair systems is practically impossible. We have seen, for example, how generative adversarial networks can be used to learn about the expected private training data; how the exploitation of additional data can reveal private information in the original one; and how seemingly unrelated features can teach us about each other. Confronted with this challenge, in this work we open a new line of research in which security, privacy, and fairness are learned and used in a closed environment. The goal is to ensure that a given entity (e.g., a company or a government), trusted to infer certain information from our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce a diagnosis for the patient (the positive task) without being able to infer the gender of the subject (the negative task). Similarly, a company can guarantee that internally it is not using the provided data for any undesired task, an important goal that does not contradict the virtually impossible challenge of blocking everybody from the undesired task.

We design a system that learns to succeed at the positive task while simultaneously failing at the negative one, and we illustrate this with challenging cases where the positive task is actually harder than the negative one being blocked. Fairness with respect to the information in the negative task is often obtained automatically as a result of the proposed approach. The particular framework and examples open the door to security, privacy, and fairness in very important closed scenarios, ranging from private data accumulation companies such as social networks to law enforcement and hospitals. The talk will present initial results, connect the mathematics of privacy with continuous learning and explainable AI, and open the discussion of this just-starting paradigm in privacy and learning. Joint work with J. Sokolic, Q. Qiu, and M. Rodrigues.
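
To make the "succeed on the positive task while failing on the negative one" idea concrete, below is a minimal, hypothetical sketch of one adversarial formulation of this kind of objective. It is not the speaker's actual system: the PyTorch setup, the layer sizes, the class counts (3 diagnostic classes, 2 protected classes), and the uniform-target blocking term are all illustrative assumptions.

    # Minimal sketch (illustrative only, not the presented method): train an
    # encoder so a positive head succeeds while a negative head is pushed
    # toward chance-level predictions on the protected attribute.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
    pos_head = nn.Linear(16, 3)   # positive task, e.g. diagnosis (assumed 3 classes)
    neg_head = nn.Linear(16, 2)   # negative task, e.g. gender (assumed 2 classes)

    opt_main = torch.optim.Adam(
        list(encoder.parameters()) + list(pos_head.parameters()), lr=1e-3)
    opt_neg = torch.optim.Adam(neg_head.parameters(), lr=1e-3)

    def train_step(x, y_pos, y_neg, lam=1.0):
        # 1) Adversary update: the negative head tries to infer the protected
        #    attribute from the current (frozen) representation.
        z = encoder(x).detach()
        loss_neg = F.cross_entropy(neg_head(z), y_neg)
        opt_neg.zero_grad()
        loss_neg.backward()
        opt_neg.step()

        # 2) Main update: encoder + positive head succeed on the positive task
        #    while driving the negative head's prediction toward the uniform
        #    distribution, i.e. "teaching it to fail".
        z = encoder(x)
        loss_pos = F.cross_entropy(pos_head(z), y_pos)
        log_p_neg = F.log_softmax(neg_head(z), dim=1)
        loss_block = -log_p_neg.mean()   # cross-entropy against a uniform target
        loss = loss_pos + lam * loss_block
        opt_main.zero_grad()
        loss.backward()
        opt_main.step()
        return loss_pos.item(), loss_neg.item()

    # Toy usage with random data, just to show the call pattern.
    x = torch.randn(8, 32)
    y_pos = torch.randint(0, 3, (8,))
    y_neg = torch.randint(0, 2, (8,))
    train_step(x, y_pos, y_neg)

The alternating updates mirror the minimax structure implied by the abstract: the negative head keeps trying to recover the protected information, and the encoder is rewarded for a representation from which that recovery is no better than chance while the positive task still succeeds.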