Abstract
Is Android malware classification a solved problem? Published F1 scores of up to 0.99 appear to leave very little room for improvement. In this paper, we argue that results are commonly inflated due to two pervasive sources of experimental bias: spatial bias caused by distributions of training and testing data that are not representative of a real-world deployment; and temporal bias caused by incorrect time splits of training and testing sets, leading to impossible configurations. We propose a set of space and time constraints for experiment design that eliminates both sources of bias. We introduce a new metric that summarizes the expected robustness of a classifier in a real-world setting, and we present an algorithm to tune its performance. Finally, we demonstrate how this allows us to evaluate mitigation strategies for time decay such as active learning. We have implemented our solutions in TESSERACT, an open source evaluation framework for comparing malware classifiers in a realistic setting. We used TESSERACT to evaluate three Android malware classifiers from the literature on a dataset of 129K applications spanning over three years. Our evaluation confirms that earlier published results are biased, while also revealing counter-intuitive performance and showing that appropriate tuning can lead to significant improvements.
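As a minimal illustration of the temporal constraint the abstract alludes to (every training sample must be observed before every testing sample, so no future knowledge leaks into the model), the sketch below splits a labeled dataset by observation date. The DataFrame layout, the `timestamp` and `label` column names, and the `temporal_split` helper are hypothetical choices made for this example; they are not part of the TESSERACT API described in the paper.

```python
import pandas as pd

def temporal_split(apps: pd.DataFrame, split_date: str):
    """Split so that training data strictly precedes testing data.

    Samples observed before `split_date` go to training; samples observed
    on or after it go to testing. This avoids the temporal bias of random
    splits, where future samples can end up in the training set.
    """
    split = pd.Timestamp(split_date)
    train = apps[apps["timestamp"] < split]
    test = apps[apps["timestamp"] >= split]
    return train, test

if __name__ == "__main__":
    # Tiny synthetic example: four apps observed over three years.
    apps = pd.DataFrame({
        "timestamp": pd.to_datetime(
            ["2014-03-01", "2015-06-15", "2016-01-20", "2016-09-05"]),
        "label": [0, 1, 0, 1],  # 0 = goodware, 1 = malware
    })
    train, test = temporal_split(apps, "2016-01-01")
    print(len(train), "training samples,", len(test), "testing samples")
```

A realistic evaluation would additionally keep the class balance of the testing set close to the in-the-wild malware ratio, addressing the spatial bias the abstract also mentions.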
Original language | English
---|---
Title of host publication | Proceedings of the 28th USENIX Security Symposium
Subtitle of host publication | August 14–16, 2019, Santa Clara, CA, USA
Publisher | USENIX
Pages | 729–746
Number of pages | 18
ISBN (Print) | 9781939133069
Publication status | Published - Aug 2019