Monday, October 26, 2020
10/26/2020 - 2:30pm
Abstract: Significant research effort is being invested in improving the training efficiency of Deep Neural Networks (DNNs), i.e., the amount of time, data, and resources required to train these models. For example, one may change the model (e.g., its architecture or numerical precision) or the training algorithm (e.g., through parallelization). However, such modifications often cause an unexplained degradation in the generalization performance of the DNN on unseen data. Recent findings suggest that this degradation is caused by changes to the hidden algorithmic bias of the training algorithm and model. This bias determines which solution is selected from among all solutions that fit the data. I will discuss a few examples in which we can understand and control such algorithmic bias.
Bio: Daniel is an assistant professor in the Department of Electrical Engineering at the Technion, working in the areas of machine learning and theoretical neuroscience. He did his postdoc (as a Gruss Lipper fellow) with Prof. Liam Paninski in the Department of Statistics and the Center for Theoretical Neuroscience at Columbia University. He is interested in all aspects of neural networks and deep learning. His recent work focuses on quantization, resource efficiency, and implicit bias in neural networks.
10/26/2020 - 4:00pm
A countable group is said to be left-orderable if it admits a total order that is invariant under left multiplication, or equivalently if it embeds in the group of homeomorphisms of the line. I’ll explain the basics of left-orderability and what is known about the left-orderability of lattices in Lie groups.
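To fix notation, the invariance condition in this definition can be written out explicitly; the example below (a standard one, not taken from the abstract) illustrates it:

```latex
% Left-orderability: a group G is left-orderable if it admits a
% strict total order < satisfying left-invariance:
\[
\forall\, g, h, k \in G:\quad h < k \;\Longrightarrow\; gh < gk.
\]
% Example: (Z, +) is left-orderable with its usual order, since
% h < k implies g + h < g + k for every integer g.
```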
Our main result is that an irreducible lattice in a real semi-simple Lie group G of higher rank is left-orderable if and only if G is a product of two Lie groups, one of which is the universal cover of SL_2(R). In particular, we show that no lattice in SL_n(R) with n > 2 is left-orderable, solving conjectures of Witte-Morris and Ghys. The tools used in the proof include (1) the study of random walks by homeomorphisms of the line, (2) the construction of a compactification of a group action on the line, and (3) the study of the stiffness of certain stationary measures. (Joint work with Bertrand Deroin.)
10/26/2020 - 4:30pm
I will begin with a relaxed overview of a number of recent
https://yale.zoom.us/j/99433355937 (password was emailed by Ivan on 9/11, also available from Ivan by email)