Most recent successes of AI have been based on Supervised Learning (SL) methods, fueled by large quantities of parallel compute power and human-annotated training data. However, we are now at a point where manually annotating new datasets of sufficient quantity and quality to further boost development is becoming intractable. Additionally, for some sensor modalities, e.g. radar, annotated data is scarce and, in contrast to image data, accurate labelling requires expert domain knowledge, making annotation even more expensive.
Instead, many believe that Semi-Supervised Learning (SSL) will drive the next AI revolution by using the vast amount of unlabeled data (and some labelled examples) to discover the concepts and underlying causes that matter when interpreting what we see around us. Further, using SSL we can potentially build systems that continuously and automatically learn and adapt throughout their lifecycle. In the past year, we have seen the first indications of this, where SSL methods have outperformed SL on image classification tasks even though vast quantities of labelled data are available [1, 2]. The basic premise of these SSL algorithms, and what makes them so effective, is that they exploit prediction consistency (pseudo-labels) between weakly and strongly augmented versions of the same image. For example, the model in [2] is trained to produce consistent predictions for an image that is horizontally flipped and translated (weak augmentation) and an image that is cropped, resized and colour distorted (strong augmentation). Another example is [3], where geometric correspondences are used to train a model to produce consistent semantic labels across different seasons.
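To make the consistency premise concrete, below is a minimal NumPy sketch of the confidence-thresholded pseudo-labelling loss used in FixMatch [2]: the model's prediction on the weakly augmented view yields a pseudo-label, which supervises the prediction on the strongly augmented view, but only for samples where the weak-view confidence exceeds a threshold. The function name, threshold value, and logit arrays are illustrative assumptions, not taken from any reference implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fixmatch_unlabeled_loss(weak_logits, strong_logits, threshold=0.95):
    """FixMatch-style consistency loss on a batch of unlabeled samples.

    A pseudo-label is the argmax of the model's prediction on the weakly
    augmented view; the loss is cross-entropy between that pseudo-label
    and the prediction on the strongly augmented view, masked to keep
    only samples whose weak-view confidence exceeds `threshold`.
    """
    weak_probs = softmax(weak_logits)
    pseudo_labels = weak_probs.argmax(axis=-1)
    confidence = weak_probs.max(axis=-1)
    mask = confidence >= threshold  # discard low-confidence pseudo-labels
    strong_probs = softmax(strong_logits)
    ce = -np.log(strong_probs[np.arange(len(pseudo_labels)), pseudo_labels] + 1e-12)
    return float((mask * ce).mean()), mask
```

In this sketch, a sample with near-uniform weak-view predictions contributes nothing to the loss, which is what prevents noisy pseudo-labels from dominating early training.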
In this project, we aim to extend the SSL revolution by developing more robust and precise SSL schemes that generalize to new situations. Our focus is on (i) improving how model uncertainty is considered during training and (ii) adapting the insights from current ground-breaking SSL schemes for high-level image tasks to use on the prediction consistencies described above. Additionally, we plan to extend our SSL schemes to new situations by, e.g., drawing inspiration from [1, 2] to explore augmentation schemes for radar and lidar data collected from SAAB’s surveillance systems, and by using our expertise in classical Bayesian sensor fusion algorithms to build additional confidence and to connect the unlabeled data both across time and across modalities, taking the first steps towards truly self-learning systems.
[1] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." arXiv, 2020.
[2] Sohn, Kihyuk, et al. "FixMatch: Simplifying semi-supervised learning with consistency and confidence." arXiv, 2020.
[3] Hammarstrand, Kahl, et al. "A cross-season correspondence dataset for robust semantic segmentation." CVPR, 2019.