Monday, April 5, 2021
10:15am
Is your favorite measure extremal in the sense of metric Diophantine approximation? We present a small stream of research, developed in collaboration with Lior Fishman, David Simmons, and Mariusz Urbanski, which attempts to provide some answers to this broad question. The talk will be accessible to students and faculty interested in some convex combination of dynamics, Diophantine approximation, and fractal geometry. The topic, still in its nascency, contains several open questions and directions that have yet to be fully explored. In more detail, we obtain the extremality of several new classes of dynamically defined fractal measures; for example, we prove that Patterson–Sullivan measures of all nonplanar geometrically finite Kleinian groups, as well as Gibbs measures for nonplanar infinite conformal iterated function systems and for non-uniformly hyperbolic rational functions, are "weakly quasi-decaying" (a geometric sufficient condition for extremality that is significantly weaker than "friendliness"). Among other results, we improve on the celebrated theorem of Kleinbock and Margulis (1998) resolving Sprindzuk's conjecture (viz. that Lebesgue measure on any nondegenerate manifold is extremal), as well as on its extension by Kleinbock, Lindenstrauss, and Weiss (2004).
Location: Zoom
1:00pm
Abstract: Sparsity has been a driving force in signal and image processing and machine learning for decades. In this talk we'll explore sparse representations based on dictionary learning techniques from two perspectives: over-parameterization and adversarial robustness. First, we will characterize the surprising phenomenon that dictionary recovery can be facilitated by searching over the space of larger (over-realized/over-parameterized) models. This observation is general and independent of the specific dictionary learning algorithm used. We will demonstrate this observation in practice and provide a theoretical analysis of it by tying recovery measures to generalization bounds. We will further show that an efficient and provably correct distillation mechanism can be employed to recover the correct atoms from the over-realized model, consistently providing better recovery of the ground-truth model. We will then switch gears toward the analysis of adversarial examples, focusing on the hypothesis class obtained by combining a sparsity-promoting encoder with a linear classifier, and show an interesting interplay between the flexibility and stability of the (supervised) representation map and a notion of margin in the feature space. Leveraging a mild encoder-gap assumption on the learned representations, we will provide a bound on the generalization error of the robust risk under L2-bounded adversarial perturbations, and a robustness certificate for end-to-end classification. We will demonstrate the applicability of our analysis by computing certified accuracy on real data and comparing with other alternatives for certified robustness. This analysis will shed light on how to characterize this interplay for more general models. Email tatianna.curtis@yale.edu for info.
Location: Zoom, Meeting ID: 97670014308
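The abstract does not specify an implementation, but the basic setup it builds on — learning a dictionary whose sparse combinations reconstruct the data, possibly with more atoms than the ground truth (over-realization) — can be sketched in a few lines of NumPy. The following is a hypothetical illustration only: the synthetic data, the thresholding-based sparse coder, and the MOD-style least-squares dictionary update are assumptions for the sketch, not the speaker's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: signal dim, true atoms, over-realized atoms,
# number of samples, sparsity level per signal.
d, k, k_over, n, s = 20, 10, 15, 500, 3

# Ground-truth dictionary with unit-norm atoms.
D_true = rng.normal(size=(d, k))
D_true /= np.linalg.norm(D_true, axis=0)

# Sparse codes: each signal is a combination of s random atoms.
X_true = np.zeros((k, n))
for j in range(n):
    idx = rng.choice(k, size=s, replace=False)
    X_true[idx, j] = rng.normal(size=s)
Y = D_true @ X_true  # observed signals

def sparse_code(D, Y, s):
    """Crude sparse coding: keep the s atoms with largest correlation,
    then solve least squares on that support."""
    top = np.argsort(-np.abs(D.T @ Y), axis=0)[:s]
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        sel = top[:, j]
        X[sel, j] = np.linalg.lstsq(D[:, sel], Y[:, j], rcond=None)[0]
    return X

def learn(D, Y, s, iters=20):
    """Alternating minimization: sparse coding step, then a MOD-style
    least-squares dictionary update with atom renormalization."""
    for _ in range(iters):
        X = sparse_code(D, Y, s)
        D = Y @ np.linalg.pinv(X)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D

# Learn an over-realized dictionary (more atoms than the ground truth).
D_hat = learn(rng.normal(size=(d, k_over)), Y, s)

# Recovery score per true atom: best absolute correlation with a learned atom
# (1.0 means the atom was recovered exactly, up to sign).
scores = np.max(np.abs(D_true.T @ D_hat), axis=1)
print(f"mean atom recovery (1.0 = perfect): {scores.mean():.3f}")
```

Comparing `scores` for `k_over = k` versus `k_over > k` is one crude way to probe the over-realization phenomenon the talk describes; the talk's actual analysis (recovery measures tied to generalization bounds, and a distillation step back to `k` atoms) goes well beyond this sketch.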
4:30pm
The relationship between periods of automorphic forms and L-functions has been studied since the time of Riemann, but remains mysterious. In this talk, I will explain how periods and L-functions arise as quantizations of certain Hamiltonian spaces, and will propose a conjectural duality between certain Hamiltonian spaces for a group $G$ and its Langlands dual group $\check G$, in the context of the geometric Langlands program, recovering known and conjectural instances of the aforementioned relationship. This is joint work with David Ben-Zvi and Akshay Venkatesh.
Location: https://yale.zoom.us/j/92811265790 (Password is the same as last semester)