King's College London

Research portal

TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Standard

TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. / Pendlebury, Feargus; Pierazzi, Fabio; Jordaney, Roberto; Kinder, Johannes; Cavallaro, Lorenzo.

Proceedings of the 28th USENIX Security Symposium, August 14–16, 2019, Santa Clara, CA, USA. USENIX, 2019. p. 729-746.


Harvard

Pendlebury, F, Pierazzi, F, Jordaney, R, Kinder, J & Cavallaro, L 2019, TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. in Proceedings of the 28th USENIX Security Symposium, August 14–16, 2019, Santa Clara, CA, USA. USENIX, pp. 729-746. <https://www.usenix.org/conference/usenixsecurity19/presentation/pendlebury>

APA

Pendlebury, F., Pierazzi, F., Jordaney, R., Kinder, J., & Cavallaro, L. (2019). TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. In Proceedings of the 28th USENIX Security Symposium, August 14–16, 2019, Santa Clara, CA, USA (pp. 729-746). USENIX. https://www.usenix.org/conference/usenixsecurity19/presentation/pendlebury

Vancouver

Pendlebury F, Pierazzi F, Jordaney R, Kinder J, Cavallaro L. TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. In Proceedings of the 28th USENIX Security Symposium, August 14–16, 2019, Santa Clara, CA, USA. USENIX. 2019. p. 729-746

Author

Pendlebury, Feargus ; Pierazzi, Fabio ; Jordaney, Roberto ; Kinder, Johannes ; Cavallaro, Lorenzo. / TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. Proceedings of the 28th USENIX Security Symposium, August 14–16, 2019, Santa Clara, CA, USA. USENIX, 2019. pp. 729-746

BibTeX

@inproceedings{cc473bd4b8b24a8a8c7f6526ee0ddb57,
title = "TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time",
abstract = "Is Android malware classification a solved problem? Published F1 scores of up to 0.99 appear to leave very little room for improvement. In this paper, we argue that results are commonly inflated due to two pervasive sources of experimental bias: spatial bias caused by distributions of training and testing data that are not representative of a real-world deployment; and temporal bias caused by incorrect time splits of training and testing sets, leading to impossible configurations. We propose a set of space and time constraints for experiment design that eliminates both sources of bias. We introduce a new metric that summarizes the expected robustness of a classifier in a real-world setting, and we present an algorithm to tune its performance. Finally, we demonstrate how this allows us to evaluate mitigation strategies for time decay such as active learning. We have implemented our solutions in TESSERACT, an open source evaluation framework for comparing malware classifiers in a realistic setting. We used TESSERACT to evaluate three Android malware classifiers from the literature on a dataset of 129K applications spanning over three years. Our evaluation confirms that earlier published results are biased, while also revealing counter-intuitive performance and showing that appropriate tuning can lead to significant improvements.",
author = "Feargus Pendlebury and Fabio Pierazzi and Roberto Jordaney and Johannes Kinder and Lorenzo Cavallaro",
year = "2019",
month = aug,
language = "English",
isbn = "9781939133069",
pages = "729--746",
booktitle = "Proceedings of the 28th USENIX Security Symposium",
publisher = "USENIX",

}

RIS (suitable for import to EndNote)

TY - GEN

T1 - TESSERACT

T2 - Eliminating Experimental Bias in Malware Classification across Space and Time

AU - Pendlebury, Feargus

AU - Pierazzi, Fabio

AU - Jordaney, Roberto

AU - Kinder, Johannes

AU - Cavallaro, Lorenzo

PY - 2019/8

Y1 - 2019/8

N2 - Is Android malware classification a solved problem? Published F1 scores of up to 0.99 appear to leave very little room for improvement. In this paper, we argue that results are commonly inflated due to two pervasive sources of experimental bias: spatial bias caused by distributions of training and testing data that are not representative of a real-world deployment; and temporal bias caused by incorrect time splits of training and testing sets, leading to impossible configurations. We propose a set of space and time constraints for experiment design that eliminates both sources of bias. We introduce a new metric that summarizes the expected robustness of a classifier in a real-world setting, and we present an algorithm to tune its performance. Finally, we demonstrate how this allows us to evaluate mitigation strategies for time decay such as active learning. We have implemented our solutions in TESSERACT, an open source evaluation framework for comparing malware classifiers in a realistic setting. We used TESSERACT to evaluate three Android malware classifiers from the literature on a dataset of 129K applications spanning over three years. Our evaluation confirms that earlier published results are biased, while also revealing counter-intuitive performance and showing that appropriate tuning can lead to significant improvements.

AB - Is Android malware classification a solved problem? Published F1 scores of up to 0.99 appear to leave very little room for improvement. In this paper, we argue that results are commonly inflated due to two pervasive sources of experimental bias: spatial bias caused by distributions of training and testing data that are not representative of a real-world deployment; and temporal bias caused by incorrect time splits of training and testing sets, leading to impossible configurations. We propose a set of space and time constraints for experiment design that eliminates both sources of bias. We introduce a new metric that summarizes the expected robustness of a classifier in a real-world setting, and we present an algorithm to tune its performance. Finally, we demonstrate how this allows us to evaluate mitigation strategies for time decay such as active learning. We have implemented our solutions in TESSERACT, an open source evaluation framework for comparing malware classifiers in a realistic setting. We used TESSERACT to evaluate three Android malware classifiers from the literature on a dataset of 129K applications spanning over three years. Our evaluation confirms that earlier published results are biased, while also revealing counter-intuitive performance and showing that appropriate tuning can lead to significant improvements.

UR - http://www.scopus.com/inward/record.url?scp=85073162287&partnerID=8YFLogxK

UR - https://www.usenix.org/sites/default/files/sec19_full_proceedings.pdf

M3 - Conference contribution

SN - 9781939133069

SP - 729

EP - 746

BT - Proceedings of the 28th USENIX Security Symposium

PB - USENIX

ER -
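
Illustration

The abstract describes temporal bias as arising from incorrect time splits in which training data effectively postdates testing data. The following is a minimal, purely illustrative Python sketch of a temporally consistent split; it is not the TESSERACT API, and the sample records, field names, dates, and helper name are assumptions made for the example.

from datetime import datetime

# Hypothetical timestamped samples; field names are assumptions for illustration only.
apps = [
    {"app": "com.example.a", "first_seen": datetime(2014, 3, 1), "label": 0},
    {"app": "com.example.b", "first_seen": datetime(2015, 7, 9), "label": 1},
    {"app": "com.example.c", "first_seen": datetime(2016, 11, 20), "label": 0},
]

def temporal_split(samples, split_date):
    # Temporal constraint: every training sample must be observed strictly
    # before every testing sample, so the classifier never trains on the future.
    train = [s for s in samples if s["first_seen"] < split_date]
    test = [s for s in samples if s["first_seen"] >= split_date]
    return train, test

train_set, test_set = temporal_split(apps, datetime(2015, 1, 1))

A realistic evaluation along these lines would then report performance over consecutive test windows after the split date, so that time decay of the classifier becomes visible rather than being averaged away.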

