Prof. Gitta Kutyniok, Mathematical Institute of the University of Munich
The tremendous importance of graph-structured data, arising for instance from
recommender systems or social networks, has led to the introduction of
graph convolutional neural networks (GCNs). These split into spatial
and spectral GCNs, where in the latter case filters are defined by
elementwise multiplication in the frequency domain of a graph.
Since the dataset often consists of signals defined on many
different graphs, the trained network should generalize to signals
on graphs unseen in the training set. One instance of this problem
is the transferability of a GCN, which refers to the condition that,
whenever two graphs describe the same phenomenon, a single filter or
the entire network has a similar effect on both graphs. However, for a long time it was believed that spectral filters are not transferable.
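The spectral filtering mentioned above can be illustrated with a minimal sketch (not the speaker's implementation): a graph signal is transformed into the graph Fourier basis given by the Laplacian eigenvectors, multiplied elementwise by a frequency response, and transformed back. The function and variable names here are illustrative.

```python
import numpy as np

def spectral_filter(adjacency, signal, response):
    """Apply a spectral filter to a graph signal.

    adjacency : (n, n) symmetric adjacency matrix
    signal    : (n,) graph signal
    response  : callable giving the filter's frequency response g(lambda)
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # graph Fourier basis
    coeffs = eigvecs.T @ signal                   # graph Fourier transform
    filtered = coeffs * response(eigvals)         # elementwise multiplication in frequency
    return eigvecs @ filtered                     # inverse graph Fourier transform

# Example: a low-pass filter g(lambda) = exp(-lambda) on a path graph with 5 nodes
A = np.diag(np.ones(4), 1); A = A + A.T
x = np.random.default_rng(0).standard_normal(5)
y = spectral_filter(A, x, lambda lam: np.exp(-lam))
```

Note that the response `g` is defined independently of any particular graph, which is why the same filter can be applied to graphs of different sizes; the transferability question is whether its effect is similar across such graphs.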
In this talk we aim at debunking this common misconception by
showing that if two graphs discretize the same continuous metric
space, then a spectral filter or GCN has approximately the same
effect on both graphs. Our analysis also accounts for large
graph perturbations and allows the graphs to have completely
different sizes and topologies, only requiring that both
graphs discretize the same underlying continuous space. Numerical
results even suggest that spectral GCNs are superior to spatial
GCNs if the dataset consists of signals defined on many different graphs.
This is joint work with R. Levie, W. Huang, L. Bucci, and M.
Prof. Massimo Fornasier, Department of Mathematics, Technical University of Munich
We introduce a new stochastic Kuramoto-Vicsek-type model for the global optimization of nonconvex functions on the sphere. This model belongs to the class of consensus-based optimization methods: particles move on the sphere driven by a drift towards an instantaneous consensus point, computed as a convex combination of the particle locations weighted by the cost function according to Laplace's principle, which represents an approximation of a global minimizer. The dynamics is further perturbed by a random vector field to favor exploration, whose variance is a function of the distance of the particles to the consensus point. In particular, as soon as consensus is reached, the stochastic component vanishes. In the first part of the talk, we study the well-posedness of the model and rigorously derive its mean-field approximation in the large-particle limit. The main result of the second part of the talk is a proof of convergence of the numerical scheme to global minimizers, provided the initial datum is well prepared. The proof combines the mean-field limit with a novel asymptotic analysis and classical convergence results for numerical methods for SDEs. We present several numerical experiments, which show that the algorithm scales well with the dimension and is extremely versatile. To quantify the performance of the new approach, we show that the algorithm performs essentially as well as ad hoc state-of-the-art methods, and in some instances obtains quantifiably better results on challenging problems in signal processing and machine learning, namely the phase retrieval problem and robust subspace detection.
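The dynamics described above can be sketched in a few lines; this is a toy illustration of the consensus-based mechanism, not the speaker's algorithm, and the parameter names (`alpha`, `lam`, `sigma`) are hypothetical choices for the weight sharpness, drift strength, and noise level.

```python
import numpy as np

def cbo_sphere(f, dim=3, n_particles=200, steps=500, dt=0.05,
               alpha=30.0, lam=1.0, sigma=0.5, seed=0):
    """Toy consensus-based optimization of f on the unit sphere."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_particles, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)      # start uniformly on the sphere
    for _ in range(steps):
        w = np.exp(-alpha * f(x))                      # Laplace-principle weights
        v = (w[:, None] * x).sum(axis=0) / w.sum()     # instantaneous consensus point
        diff = v - x
        # Exploration noise whose size scales with the distance to consensus,
        # so the stochastic component vanishes once consensus is reached.
        noise = (sigma * np.linalg.norm(diff, axis=1, keepdims=True)
                 * np.sqrt(dt) * rng.standard_normal((n_particles, dim)))
        x = x + lam * diff * dt + noise                # drift towards consensus
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # project back onto the sphere
    return v / np.linalg.norm(v)

# Example: minimize f(x) = ||x - e1||^2 on the sphere; the minimizer is e1.
e1 = np.array([1.0, 0.0, 0.0])
f = lambda x: np.sum((x - e1) ** 2, axis=1)
xstar = cbo_sphere(f)
```

The weight `exp(-alpha * f)` concentrates the consensus point near the best-performing particles as `alpha` grows, which is the Laplace-principle approximation of the global minimizer mentioned in the abstract.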
Prof. Yonina Eldar, Electrical Engineering, Weizmann Institute of Science
The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal and image processing. However, in many modern applications, signal bandwidths have increased tremendously, while acquisition capabilities have not scaled sufficiently fast. Consequently, conversion to digital has become a serious bottleneck. Furthermore, the resulting digital data requires storage, communication and processing at very high rates, which is computationally expensive and requires large amounts of power. In the context of medical imaging, sampling at high rates often translates to high radiation dosages, increased scanning times, bulky medical devices, and limited resolution.
In this talk, we present a framework for sampling and processing a large class of wideband analog signals at rates far below Nyquist in space, time and frequency, which makes it possible to dramatically reduce the number of antennas, the sampling rates and the band occupancy.
Our framework relies on exploiting signal structure and the processing task. We consider applications of these concepts to a variety of problems in communications, radar and ultrasound imaging, and show several demos of real-time sub-Nyquist prototypes, including a wireless ultrasound probe, sub-Nyquist MIMO radar, super-resolution in microscopy and ultrasound, cognitive radio, and joint radar and communication systems. We then discuss how the ideas of exploiting the task, structure and model can be used to develop interpretable model-based deep learning methods that adapt to existing structure and are trained from small amounts of data. These networks achieve a more favorable trade-off between the increase in parameters and data and the improvement in performance, while remaining interpretable.
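The core idea of exploiting structure to sample below Nyquist can be illustrated with a toy digital sketch (not the speaker's hardware framework): a signal occupying only a few frequencies is observed at far fewer time samples than its length, and the sparse spectrum is recovered greedily via orthogonal matching pursuit. All sizes and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 256, 3, 40          # Nyquist-rate length, active tones, sub-Nyquist samples

# A spectrum that is sparse: only K of N frequencies are active.
support = rng.choice(N, size=K, replace=False)
spectrum = np.zeros(N, dtype=complex)
spectrum[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# Observe only M random time samples instead of all N Nyquist-rate samples.
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
times = rng.choice(N, size=M, replace=False)
A = F[times]                  # partial Fourier measurement matrix
y = A @ spectrum              # the M sub-Nyquist measurements

# Orthogonal matching pursuit: greedily identify the active frequencies.
residual, est_support = y.copy(), []
for _ in range(K):
    est_support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    sub = A[:, est_support]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)   # refit on current support
    residual = y - sub @ coef

recovered = np.zeros(N, dtype=complex)
recovered[est_support] = coef
```

The point of the sketch is that the 40 measurements suffice only because the structure (sparsity) is exploited in the recovery; the talk's framework applies the same philosophy to analog hardware, antennas and the downstream processing task.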