Johan Jonasson, Chalmers and University of Gothenburg: Noise sensitivity/stability for deep Boolean neural nets
Overview
- Date: 19 January 2023, 15:15–16:00
- Seats available: 24
- Location: MV:L14, Chalmers tvärgata 3
- Language: English
Abstract: A well-known and ubiquitous property of neural net classifiers is that they can be fooled into misclassifying some objects by changing the input in tiny ways that are indistinguishable to the human eye. These changes can be adversarial, but sometimes plain random noise suffices. This makes it interesting to ask whether this property is something that almost all neural nets have and, when they do, why that is. There are good heuristic explanations, but proving mathematically rigorous results seems very difficult in general. Here we prove some first results on various toy models. We treat our questions within the framework of the established field of noise sensitivity/stability. What we prove can roughly be stated as:
- A sufficiently deep fully connected network with sufficiently wide layers and iid Gaussian weights is noise sensitive, i.e. an arbitrarily small random noise makes the predicted classes of a binary input string before and after the noise is added virtually independent of each other. If one imposes correlations on the weights corresponding to the same input features, this still holds unless the correlation is very close to 1.
- Neural nets consisting of only convolutional layers may or may not be noise sensitive; we present examples of both behaviours.
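The noise-sensitivity claim for deep fully connected nets can be illustrated numerically. The sketch below is an illustration only, not the construction from the talk: it builds a single random network with sign activations and iid Gaussian weights (variance 1/width, a common scaling chosen here as an assumption), flips each ±1 input bit independently with probability eps, and estimates the output correlation E[f(x) f(x_eps)]. Noise sensitivity would mean this correlation drops toward zero as depth grows; the summed-sign readout and all parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """Propagate ±1 inputs (columns of x) through sign-activation layers."""
    h = x
    for W in weights:
        h = np.sign(W @ h)
    # Binary prediction per column: sign of the summed last layer.
    # An odd width guarantees the sum is never exactly zero.
    return np.sign(h.sum(axis=0))

def noise_correlation(depth, width=201, eps=0.05, trials=2000):
    """Estimate E[f(x) f(x_eps)] for one fixed random deep sign net."""
    # iid N(0, 1/width) weights -- a modelling assumption for this sketch.
    weights = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
               for _ in range(depth)]
    x = rng.choice([-1.0, 1.0], size=(width, trials))
    flip = rng.random((width, trials)) < eps   # flip each bit w.p. eps
    x_eps = np.where(flip, -x, x)
    out_x = forward(weights, x)
    out_y = forward(weights, x_eps)
    return float(np.mean(out_x * out_y))       # correlation of ±1 outputs

if __name__ == "__main__":
    for d in (1, 5, 10):
        print(f"depth {d:2d}: output correlation ≈ {noise_correlation(d):.3f}")
```

In runs of this sketch, a shallow net keeps a noticeable correlation between the predictions on x and on its noisy copy, while the deep net's correlation is close to zero, matching the qualitative statement of the first result.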
Organisers
- Senior Lecturer, Applied Mathematics and Statistics, Mathematical Sciences
- Senior Lecturer, Applied Mathematics and Statistics, Mathematical Sciences

