Five projects funded within deep neural networks and machine learning

Chalmers AI Research Centre (Chair) will fund five PhD student projects at Chalmers. The projects focus on different aspects of deep neural networks and machine learning.

At the beginning of May, we opened a call for PhD student projects within AI.

Within the call we looked for projects that aimed to:

  • develop theoretical foundations and computational methods for AI, through research activities focused on algorithmic, mathematical, and statistical principles;
  • use innovative AI tools to tackle foundational problems in other fields, such as biology, physics, and materials science;
  • tackle core problems related to the development and deployment of systems containing AI components.

We received 63 project proposals from researchers across almost all departments at Chalmers. Five projects have now been selected after an external review process. These projects will receive full support for a PhD student from Chair.

“The large number of high-quality submissions to this call shows that our researchers have great interest in AI and have the capacity to perform excellent research within AI. We look forward to seeing the research output of the five selected projects, and we regret that we could not support more projects due to budget restrictions,” says Giuseppe Durisi, Co-director of Chalmers AI Research Centre.

Selected projects within the call “PhD student projects within AI”

Deep Learning and likelihood-free Bayesian inference for intractable stochastic models

Applicant: Umberto Picchini, Department of Mathematical Sciences

We construct new deep neural networks (DNNs) to learn the parameters of complex stochastic dynamical models that do not have tractable likelihood functions. Specifically, we leverage the expressive approximation power of our DNNs to extract essential information from time-series data, both Markovian and non-Markovian, and then learn model parameters using likelihood-free methodology, such as approximate Bayesian computation. Special (though not exclusive) focus is placed on stochastic differential equation models and state space models (SSMs), where SSMs represent noisy observations of a latent Markovian process. The result will be a flexible plug-and-play machine learning methodology, allowing inference for complex stochastic models.
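As a rough illustration of the likelihood-free idea, the sketch below runs approximate Bayesian computation (ABC) by rejection sampling on a toy Gaussian model, using the sample mean as a hand-crafted summary statistic; in the project, that role would be played by a learned DNN. All names and constants here are illustrative, not the project's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "intractable" model: we can simulate from it, but we pretend the
# likelihood is unavailable. The true parameter is theta = 2.0.
def simulate(theta, n=100):
    return rng.normal(theta, 1.0, size=n)

observed = simulate(2.0)

# Hand-crafted summary statistic; the project would learn this with a DNN.
def summary(x):
    return x.mean()

s_obs = summary(observed)

# ABC rejection sampling: draw theta from the prior, simulate data, and
# keep theta whenever the simulated summary is close to the observed one.
def abc_rejection(n_samples=20000, tol=0.1):
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(-5, 5)          # uniform prior on theta
        if abs(summary(simulate(theta)) - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()
print(posterior.size, posterior.mean())  # posterior mean lands near 2.0
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is what makes the approach attractive for models where only forward simulation is feasible.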

Energy-based models for supervised deep neural networks and their applications

Applicants: Christopher Zach, Department of Electrical Engineering, and Morteza Haghir Chehreghani, Department of Computer Science and Engineering

Despite deep learning-based methods being the state of the art in many AI-related applications, there is a lack of consensus on how to understand and interpret deep neural networks in order to reason about their strengths and weaknesses. Energy-based models in machine learning have a long tradition as a framework for learning from unlabeled data, i.e., unsupervised learning. Recently, it has been shown that supervised learning of deep neural networks using backpropagation is a limiting case of a suitably defined approach for learning energy-based models using a so-called contrastive loss. This connection is the basis for our interest in a tighter connection between deep learning and energy-based models.
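The connection above can be made concrete for an ordinary softmax classifier: its cross-entropy objective is a contrastive loss on energies E(x, y), pulling down the energy of the correct label while the log-sum-exp term pushes up the energies of all labels. A minimal sketch with illustrative numbers (not the project's models):

```python
import numpy as np

# Energy-based view of a classifier: assign an energy E(x, y) to each
# (input, label) pair, where lower energy means a better fit. Softmax
# cross-entropy is then a contrastive loss on these energies.
def contrastive_loss(energies, correct):
    # energies: shape (num_labels,), the values E(x, y) for one input x
    return energies[correct] + np.log(np.sum(np.exp(-energies)))

# Example with three labels, where label 0 is the correct one.
E = np.array([0.5, 2.0, 3.0])              # low energy on the correct label
loss_good = contrastive_loss(E, 0)
loss_bad = contrastive_loss(E[::-1].copy(), 0)  # high energy on the correct label
print(loss_good < loss_bad)
```

Assigning low energy to the correct label yields a smaller loss, which is the contrastive behaviour that the cited limiting-case result connects to backpropagation training.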

Mechanisms for secure and private machine learning

Applicants: Aikaterini Mitrokotsa, Department of Computer Science and Engineering, and Christos Dimitrakakis, Department of Computer Science and Engineering

We envision secure and privacy-preserving machine learning algorithms for artificial intelligence applications in everyday life that can provide confidentiality and integrity guarantees. In particular, we aim to:

  1. Safeguard the privacy of individuals who participate by either (a) providing their data to build the system, or (b) being end-users of the system.
  2. Safeguard the integrity of the system by (a) ensuring its robustness to adversarial inputs, and (b) cryptographically limiting the possible points of adversarial manipulation.
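One standard building block in this area is the Laplace mechanism of differential privacy: release a statistic plus noise scaled to its sensitivity, so that no single individual's record can be inferred from the output. The sketch below is illustrative only; the project description does not commit to this specific mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)

# Laplace mechanism: add noise with scale sensitivity / epsilon to a
# statistic before releasing it, giving epsilon-differential privacy.
def laplace_mechanism(true_value, sensitivity, epsilon):
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release the mean of n values bounded in [0, 1].
data = rng.uniform(0, 1, size=1000)
sensitivity = 1.0 / len(data)   # changing one record moves the mean by at most 1/n
private_mean = laplace_mechanism(data.mean(), sensitivity, epsilon=0.5)
print(private_mean)
```

With a large dataset the added noise is small, so the released value stays useful while each individual's contribution is masked.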

Stochastic continuous-depth neural networks

Applicant: Moritz Schauer, Department of Mathematical Sciences

We will advance the understanding of deep neural networks through the investigation of stochastic continuous-depth neural networks. These can be thought of as deep neural networks (DNNs) composed of infinitely many stochastic layers, where each single layer brings about only a gradual change to the output of the preceding layers. We will analyse such stochastic continuous-depth neural networks using tools from stochastic calculus and Bayesian statistics. From that, we will derive practically relevant and novel training algorithms for stochastic DNNs, with the aim of capturing the uncertainty associated with the predictions of the network.
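One way to picture such a network is through the Euler–Maruyama discretisation of a stochastic differential equation: each discretised step acts like a residual layer that makes a small deterministic update plus Gaussian noise, and repeated forward passes give a Monte Carlo view of predictive uncertainty. A minimal sketch under these assumptions (the drift network and constants are illustrative, not the project's method):

```python
import numpy as np

rng = np.random.default_rng(1)

# View the network as the SDE  dh_t = f(h_t) dt + sigma dW_t  on [0, 1].
# Its Euler-Maruyama discretisation is a residual network whose layers
# each apply a small deterministic update plus scaled Gaussian noise.
def drift(h, W, b):
    return np.tanh(W @ h + b)   # illustrative one-layer drift network

def stochastic_depth_forward(h0, W, b, sigma=0.05, n_layers=100):
    dt = 1.0 / n_layers
    h = h0.copy()
    for _ in range(n_layers):
        noise = rng.normal(0.0, np.sqrt(dt), size=h.shape)
        h = h + drift(h, W, b) * dt + sigma * noise   # one stochastic "layer"
    return h

dim = 4
W = rng.normal(0, 0.5, size=(dim, dim))
b = np.zeros(dim)
h0 = np.ones(dim)

# Repeated forward passes differ only through the injected noise, giving
# a simple Monte Carlo picture of the network's predictive uncertainty.
samples = np.stack([stochastic_depth_forward(h0, W, b) for _ in range(50)])
print(samples.mean(axis=0), samples.std(axis=0))
```

As the number of layers grows, each layer's contribution shrinks like 1/n_layers, matching the description of infinitely many layers that each change the output only gradually.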

VisLocLearn - Understanding and Overcoming the Limitations of Convolutional Neural Networks for Visual Localization

Applicant: Torsten Sattler, Department of Electrical Engineering

Visual localization is the problem of estimating the position and orientation from which an image was taken with respect to the scene. In other words, visual localization allows an AI system to determine its position in the world through a camera. Understanding why current approaches fail and proposing novel approaches that are able to accurately localize a camera are problems of high practical relevance. This is the purpose of the proposed project, VisLocLearn.

Published: Mon 14 Oct 2019.