Seminar

Seminar: Geometry, Algebra and Physics in Deep Neural Networks (GAPinDNNs)

Pierfrancesco Urbani, Université Paris-Saclay, CNRS, CEA, IPhT: Separation of timescales controls feature learning and overfitting in large neural networks

Overview

  • Date: Starts 14 April 2026, 13:15; ends 14 April 2026, 14:00
  • Location: MV:L15, Chalmers tvärgata 3
  • Language: English

The abstract is available only in English: To understand the inductive bias and generalization capabilities of large, overparameterized machine learning models, it is essential to analyze the dynamics of their training algorithms. Using dynamical mean field theory, we investigate the learning dynamics of large two-layer neural networks. Our findings reveal that, for networks of large width, the training process exhibits a separation-of-timescales phenomenon. This leads to several key observations:

  1. The emergence of a slow timescale linked to the growth in Gaussian/Rademacher complexity of the network;
  2. An inductive bias favoring low complexity when the initial model complexity is sufficiently small;
  3. A dynamical decoupling between feature learning and overfitting phases;
  4. A non-monotonic trend in test error, characterized by a “feature unlearning” regime at later stages of training.

Joint work with Andrea Montanari.
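
As a rough, hypothetical illustration of the setting described in the abstract (not the speakers' dynamical mean field theory analysis), the sketch below trains a wide two-layer network on synthetic teacher-student data with gradient descent and tracks the test error together with a simple norm-based proxy for model complexity. All choices here (data model, squared loss, tanh activation, the norm proxy) are assumptions for illustration only.

```python
# Minimal sketch (assumptions: synthetic teacher-student data, squared loss,
# full-batch gradient descent; NOT the authors' DMFT computation).
import numpy as np

rng = np.random.default_rng(0)

d, width, n_train, n_test = 50, 1000, 200, 2000
lr, steps = 0.05, 2000

# Teacher: a single planted direction, so the target is low complexity.
w_star = rng.standard_normal(d) / np.sqrt(d)
X_tr = rng.standard_normal((n_train, d))
X_te = rng.standard_normal((n_test, d))
y_tr = X_tr @ w_star + 0.1 * rng.standard_normal(n_train)  # noisy labels
y_te = X_te @ w_star

# Two-layer network: f(x) = a^T tanh(W x) / sqrt(width).
# Small initialization scale -> low initial model complexity.
W = 0.1 * rng.standard_normal((width, d)) / np.sqrt(d)
a = 0.1 * rng.standard_normal(width)

def predict(X):
    return np.tanh(X @ W.T) @ a / np.sqrt(width)

for t in range(steps):
    H = np.tanh(X_tr @ W.T)                       # hidden activations
    err = H @ a / np.sqrt(width) - y_tr           # training residuals
    grad_a = H.T @ err / (n_train * np.sqrt(width))
    # Gradient w.r.t. W via the chain rule through tanh
    G = (err[:, None] * (1 - H**2)) * a[None, :] / np.sqrt(width)
    grad_W = G.T @ X_tr / n_train
    a -= lr * grad_a
    W -= lr * grad_W
    if t % 200 == 0:
        test_mse = np.mean((predict(X_te) - y_te) ** 2)
        # Crude weight-norm proxy for complexity (not the
        # Gaussian/Rademacher complexity analyzed in the talk).
        proxy = np.linalg.norm(a) * np.linalg.norm(W)
        print(f"step {t:5d}  test MSE {test_mse:.4f}  norm proxy {proxy:.3f}")
```

Running the loop longer and watching the test MSE is one crude way to probe the late-time behavior the abstract describes, though reproducing the separation of timescales itself requires the theoretical analysis presented in the talk.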

Jan Gerken
  • Assistant Professor, Algebra and Geometry, Mathematical Sciences