Abstract: The tremendous importance of graph-structured data, arising from recommender systems and social networks, led to the introduction of graph convolutional neural networks (GCNs). These split into spatial and spectral GCNs, where in the latter case filters are defined as elementwise multiplication in the frequency domain of a graph. Since the dataset often consists of signals defined on many different graphs, the trained network should generalize to signals on graphs unseen in the training set. One instance of this problem is the transferability of a GCN, which refers to the condition that, if two graphs describe the same phenomenon, a single filter or the entire network has a similar effect on both graphs. However, for a long time it was believed that spectral filters are not transferable.
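As background for the abstract above, the spectral filtering it refers to can be sketched in a few lines. This is a minimal illustrative example, not taken from the talk: it builds the combinatorial Laplacian of a small hypothetical path graph, uses its eigenvectors as the graph Fourier basis, and applies a filter by elementwise multiplication of the eigenvalues in the frequency domain.

```python
import numpy as np

# Hypothetical 4-node path graph (adjacency matrix chosen for illustration).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                        # combinatorial graph Laplacian

# Eigendecomposition of L yields the graph Fourier basis U.
eigvals, U = np.linalg.eigh(L)

def spectral_filter(x, h):
    """Filter signal x by elementwise multiplication with h(lambda)
    in the graph frequency domain."""
    x_hat = U.T @ x              # graph Fourier transform
    y_hat = h(eigvals) * x_hat   # elementwise multiplication
    return U @ y_hat             # inverse graph Fourier transform

# Example: a low-pass filter h(lambda) = exp(-lambda) smooths a delta signal.
x = np.array([1.0, 0.0, 0.0, 0.0])
y = spectral_filter(x, lambda lam: np.exp(-lam))
```

The transferability question in the abstract then asks: if the same filter function h is applied on two different graphs (two different Laplacians) describing the same phenomenon, are the outputs close?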

In this talk, we debunk this misconception by modelling graphs that represent the same phenomenon in a very general sense, also taking the novel graphon approach into account. We show that spectral GCNs are in fact transferable, both theoretically and numerically. This is joint work with R. Levie, S. Maskey, W. Huang, L. Bucci, and M. Bronstein.

Email tatianna.curtis@yale.edu for info.