Automated boundary testing for QUality of Ai/ml modelS (AQUAS)
Software systems are increasingly deployed with Machine Learning (ML) models that autonomously make critical decisions in areas such as medical diagnosis, self-driving cars, and fraud detection. This global trend has caught the attention of researchers in software (SW) quality and reliability, who have revealed robustness problems as well as harmful vulnerabilities in ML models and the systems that use them.
An inherent difficulty with ML models is delimiting their scope, i.e. identifying where and to what degree they can be trusted. Requirements engineering and testing for conventional, programmed SW focus first on describing which inputs are valid and which are invalid, and then on how to act on the valid ones; the training of ML models focuses primarily on the latter. While the boundaries between valid and invalid inputs are often clear-cut, i.e. sharp, for conventional SW, they are typically unknown, or understood only in a fuzzy manner, for ML models. ML researchers have proposed some techniques that can quantify model uncertainty, but these are not general-purpose and restrict the form the models can take. Here, we will instead address the problem of scope delimitation for ML models in general by leveraging and extending methods from automated testing of conventional SW. In particular, we will extend our techniques for automated boundary value analysis, exploration, and testing to the general-purpose boundary sharpening of ML models.
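As a simple illustration of the idea (not the project's actual method), the boundary of a model's trusted scope can be explored by bisecting between an input the model handles with high confidence and one where its confidence is low. The toy `confidence` function below stands in for querying a real model; the threshold and tolerance values are illustrative assumptions.

```python
# Hypothetical sketch of boundary exploration for an ML model's scope:
# bisect along the line between a trusted input and an untrusted one to
# locate where the model's confidence crosses a chosen threshold.

def confidence(x: float) -> float:
    """Toy stand-in for a model's confidence on input x (high near 0)."""
    return 1.0 / (1.0 + x * x)

def find_boundary(x_in: float, x_out: float, threshold: float = 0.5,
                  tol: float = 1e-6) -> float:
    """Approximate the scope boundary between a trusted input x_in
    (confidence >= threshold) and an untrusted input x_out."""
    assert confidence(x_in) >= threshold > confidence(x_out)
    while abs(x_out - x_in) > tol:
        mid = (x_in + x_out) / 2.0
        if confidence(mid) >= threshold:
            x_in = mid   # mid is still inside the trusted region
        else:
            x_out = mid  # mid is already outside
    return (x_in + x_out) / 2.0

boundary = find_boundary(0.0, 10.0)
print(round(boundary, 3))  # confidence(x) = 0.5 at x = 1, so ~1.0
```

In practice the inputs are high-dimensional and the boundary is a surface rather than a point, which is why the project targets automated boundary value analysis and exploration rather than one-dimensional bisection.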
- Swedish Research Council (VR) (Public, Sweden)