Title of master's thesis: Methods for reducing costs of running Large-scale Machine Learning models
Overview
- Date: 6 November 2023, 17:00–18:00
- Location: Nexus, Physics building, Campus Johanneberg
- Language: English
Abstract: Large Language Models have taken the world by storm, with ChatGPT as the biggest contributor, and many more companies, such as Facebook, are launching their own versions to keep up in the race. The model footprints are increasingly large, and so is the cost of running them. The company Substorm has a transformer model of the BERT type, which is used today to classify male and female bias in text. They are interested in exploring different ways of reducing the cost of this model, as well as of models further into the future.
In this master's thesis you will be introduced to methods for faster loading of Transformer models as well as methods for reducing their byte-size footprint. The methods are tested both on a smaller fully connected network trained and tested on the MNIST data set and on Google's highly competitive BERT model, used first and foremost for classifying text in different ways. The model is trained on the PANDORA data set, which consists of a large number of comments compiled from Reddit, a large portion of which are gender-labelled. For the loading part of the project, a speedup of ~99% is shown when cold-loading the model. For the model minimisation part, three different variants of the model are presented: one quantized model, one pruned model, and one model that is both quantized and pruned. The modified models are then tested against their original counterparts on the PANDORA test set to determine their viability. For the quantized model, no accuracy loss was detected while the model footprint was reduced by ~60%. For the 75% pruned model, an accuracy loss of only ~2% is shown while the model footprint can theoretically be decreased by 50%. For the model that is both quantized and 75% pruned, an accuracy loss of only ~2.2% is shown while the model footprint can theoretically be decreased by ~80%.
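As a rough illustration of the kind of techniques discussed (not the thesis's actual implementation), the sketch below applies post-training dynamic quantization and magnitude pruning in PyTorch to a small fully connected network of the sort trained on MNIST. The layer sizes and the 75% pruning ratio are illustrative assumptions.

```python
# Illustrative sketch only: dynamic quantization and 75% magnitude pruning
# on a small fully connected MNIST-style classifier. Layer sizes and the
# pruning ratio are assumptions, not the thesis's exact configuration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small fully connected classifier (784 inputs for 28x28 MNIST images, 10 classes).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Post-training dynamic quantization: Linear weights are stored as int8 and
# activations are quantized on the fly, shrinking the stored footprint.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Unstructured magnitude pruning: zero out the 75% smallest weights per Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.75)
        prune.remove(module, "weight")  # make the pruning permanent

# Sanity check: a forward pass still works on both variants.
x = torch.randn(1, 784)
print(quantized(x).shape, model(x).shape)
```

The same ideas carry over to a Transformer such as BERT, where the Linear layers inside the attention and feed-forward blocks dominate the parameter count.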
These size decreases mean that Substorm could cut its server costs by at least half, since server tiers typically come in multiples of two. They also mean faster computation when running the model compared to its original state, while maintaining competitive accuracy.
Supervisor: Sergio Liberman Bronfman
Examiner: Giovanni Volpe
Opponents: Viktor Månsson, Lukas Falke