Licentiate thesis
This event has passed

Licentiate seminar, Computer Science and Engineering, Adam Breitholtz

Overview


Towards practical and provable domain adaptation

One of the most central questions in statistical modeling is how well a model will generalize. Absent strong assumptions, this question is difficult to answer in a meaningful way. In this work we seek to deepen our understanding of the domain adaptation setting through two different lenses. First, we investigate whether tractably computable and tight generalization bounds on the performance of neural network classifiers exist in the current literature. The tightest bounds we find use a portion of the input data to narrow the gap between measured performance and the calculated bound. We evaluate four bounds using this tightening method on classifiers applied to image classification tasks: two bounds from the literature and two of our own construction. Further, we find that in situations lacking domain overlap, the existing literature lacks the tools to achieve tight, tractably computable bounds for the neural network models we use, and we conclude that a new approach may be needed. In the second part we therefore consider a setting in which we change our underlying assumptions to ones that may be more plausible. This setting, based on learning using privileged information, is shown to result in consistent learning. We also show empirical gains over comparable methods when our assumptions are likely to hold, both in terms of performance and sample efficiency. In summary, the work set out herein is a first step towards a better understanding of domain adaptation and of how data and new assumptions can help us further our knowledge of this topic.

Opponent: Pascal Germain, Université Laval, Canada