Geometric Deep Learning Seminar

What is geometric deep learning? It encompasses several research directions within the foundations of machine learning.

One direction is the “geometrization” of neural networks, driven by the need to deal with data defined on non-Euclidean domains, for example graphs or manifolds. Efforts here are in full swing due to the abundance of non-Euclidean data in biology, physics, network science, and computer vision. Another direction is to study the geometry of spaces of probability measures. This paves the way for geometric developments and a mathematical understanding of deep learning. An example is to analyze the Riemannian gradient-flow structure of Sinkhorn and other algorithms used within neural networks.
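As a rough illustration of the last point, below is a minimal NumPy sketch of the Sinkhorn iteration for entropically regularized optimal transport. The function, its arguments, and the default regularization parameter eps are illustrative choices, not taken from any particular paper discussed in the seminar.

import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=200):
    # Entropically regularized optimal transport between histograms mu and nu.
    # mu, nu : nonnegative weight vectors summing to one
    # C      : cost matrix, C[i, j] = cost of moving mass from bin i to bin j
    # eps    : entropic regularization strength (illustrative default)
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                # rescale to match the column marginal nu
        u = mu / (K @ v)                  # rescale to match the row marginal mu
    return u[:, None] * K * v[None, :]    # transport plan with marginals (mu, nu)

The iteration alternates two simple rescaling steps, and it is this fixed-point structure that geometric analyses, such as the gradient-flow perspective mentioned above, aim to understand.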

Research on geometric structures in deep learning is still in its infancy, but it is developing rapidly, as witnessed by the diversity of subjects covered at recent workshops and conferences. At ICLR 2021, Bronstein gave a keynote talk entitled Geometric deep learning -- the Erlangen program of machine learning. He outlined his grand vision for geometric deep learning and compared it to the famous Erlangen program in mathematics, proposed by Klein in 1872 as a unified approach to geometry, connecting group theory and geometry in profound ways. In a similar way, geometric deep learning brings geometry, PDEs, group theory, and representation theory into the realm of machine learning.

The research at the Department of Mathematical Sciences at the University of Gothenburg and Chalmers has close links to many different parts of geometric deep learning, in particular optimal transport, information geometry, stochastic differential equations, and representation theory. An important forum for this research is our Geometric Deep Learning Seminar, which serves as a platform for discussion and engagement.


Upcoming and past seminars

2022

Next seminar: TBA


2/3
Speaker: Axel Flinth (Dept. of Electrical Engineering, Chalmers)
Title: A universal rotation-equivariant neural network architecture for point clouds

Abstract: We consider the problem of constructing a neural network architecture capable of learning rotationally equivariant functions defined on point clouds in the plane. Since we wish the networks to be defined on point clouds rather than lists of vectors, we need an architecture that is both equivariant to rotations of the cloud and invariant to permutations of the points in the cloud. In this talk, we discuss a simple network architecture that is nevertheless provably universal for the function class at hand.

This talk is based on joint work with Georg Bökman and Fredrik Kahl.

Slides: https://www.dropbox.com/s/5vyfcmwpnbgceds/presentation_gdl_Flinth.pdf?dl=0
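The toy NumPy sketch below (not the architecture from the talk) only illustrates the two symmetry requirements in the abstract above: a map on planar point clouds that is invariant to permuting the points and equivariant to rotating the cloud. The function name and the weights a, b are made up for illustration.

import numpy as np

def toy_equivariant_map(points, a=1.0, b=0.5):
    # points : (n, 2) array, one row per point in the plane.
    # Each point is scaled by a function of its rotation-invariant norm and the
    # results are summed, so permuting the rows changes nothing and rotating
    # the cloud rotates the output vector by the same angle.
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    return np.sum((a + b * norms) * points, axis=0)

# Quick numerical check of both symmetries.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(toy_equivariant_map(X[rng.permutation(5)]), toy_equivariant_map(X))
assert np.allclose(toy_equivariant_map(X @ R.T), toy_equivariant_map(X) @ R.T)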

2021

24/11
Speaker: Pan Kessel (Technical University Berlin)
Title: "Can Explanations be trusted? Geometry says no!"

Abstract: In this talk, I will briefly introduce explanation methods, which aim to make the underlying decision process of neural networks transparent and interpretable. I will argue theoretically that existing methods can easily be manipulated using basic concepts of differential geometry, and discuss numerical experiments confirming this. Finally, I will briefly discuss mitigation methods.
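As background for readers unfamiliar with explanation methods, one of the simplest is gradient (saliency) attribution: the gradient of a class score with respect to the input marks the input components the prediction is most sensitive to. Below is a minimal PyTorch sketch of this generic baseline (not the specific methods analyzed in the talk; the small stand-in model is made up):

import torch
import torch.nn as nn

# Stand-in classifier; in practice this would be a trained network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

def gradient_explanation(model, x, target_class):
    # Saliency map: derivative of the chosen class score with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[target_class]
    score.backward()
    return x.grad.detach()

x = torch.randn(10)
attribution = gradient_explanation(model, x, target_class=0)
print(attribution)   # large-magnitude entries = most influential input components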
