King's College London

Research portal

TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Feargus Pendlebury, Fabio Pierazzi, Roberto Jordaney, Johannes Kinder, Lorenzo Cavallaro

Original language: English
Title of host publication: Proceedings of the 28th USENIX Security Symposium
Subtitle of host publication: August 14–16, 2019, Santa Clara, CA, USA
Publisher: USENIX
Pages: 729–746
Number of pages: 18
ISBN (Print): 9781939133069
Accepted/In press: 18 Jan 2019
E-pub ahead of print: 14 Aug 2019
Published: Aug 2019

Documents

  • TESSERACT Eliminating Experimental Bias_PENDELBURY_Acc18Jan2018Epub14Aug2019_GOLD VoR

    TESSERACT_Eliminating_Experimental_Bias_PENDELBURY_Acc18Jan2018Epub14Aug2019_GOLD_VoR_.pdf, 7.34 MB, application/pdf

    Uploaded date: 03 Apr 2020

    Version: Final published version

    USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone.

  • TESSERACT. arXiv v4. Corrected VoR. Posted 12Sep2019

    TESSERACT_V4_FINAL_VoR_including_CORRECTION_epub_12Sep2019.pdf, 7.38 MB, application/pdf

    Uploaded date: 24 Feb 2021

    Version: Final published version

Abstract

Is Android malware classification a solved problem? Published F1 scores of up to 0.99 appear to leave very little room for improvement. In this paper, we argue that results are commonly inflated due to two pervasive sources of experimental bias: spatial bias caused by distributions of training and testing data that are not representative of a real-world deployment; and temporal bias caused by incorrect time splits of training and testing sets, leading to impossible configurations. We propose a set of space and time constraints for experiment design that eliminates both sources of bias. We introduce a new metric that summarizes the expected robustness of a classifier in a real-world setting, and we present an algorithm to tune its performance. Finally, we demonstrate how this allows us to evaluate mitigation strategies for time decay such as active learning. We have implemented our solutions in TESSERACT, an open source evaluation framework for comparing malware classifiers in a realistic setting. We used TESSERACT to evaluate three Android malware classifiers from the literature on a dataset of 129K applications spanning over three years. Our evaluation confirms that earlier published results are biased, while also revealing counter-intuitive performance and showing that appropriate tuning can lead to significant improvements.
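Two of the ideas summarized in the abstract are concrete enough to sketch. The following is a minimal illustration in Python, not the TESSERACT implementation itself: a time-aware split enforcing the temporal constraint that every training sample must predate every test sample, and an "area under time" style summary that condenses per-period scores (e.g., monthly F1) into a single robustness number via the normalized trapezoidal area under the performance-over-time curve. The sample format, function names, and single split date are illustrative assumptions.

    # Minimal sketch (assumed names; not the authors' code) of two ideas
    # from the abstract: a temporally consistent train/test split and an
    # "area under time" summary of per-period classifier performance.
    from datetime import datetime
    from typing import Sequence

    def temporal_split(samples: Sequence[dict], split_date: datetime):
        """Enforce the temporal constraint: all training samples must
        predate all test samples. Each sample is assumed to carry a
        'timestamp' key; avoiding spatial bias would additionally require
        the test set to keep a realistic malware-to-goodware ratio."""
        train = [s for s in samples if s["timestamp"] < split_date]
        test = [s for s in samples if s["timestamp"] >= split_date]
        return train, test

    def aut(scores: Sequence[float]) -> float:
        """Collapse per-period scores (e.g., monthly F1) into one number:
        the trapezoidal area under the performance-over-time curve,
        normalized so a classifier that stays perfect scores 1.0."""
        if not scores:
            return 0.0
        if len(scores) == 1:
            return float(scores[0])
        area = sum((a + b) / 2 for a, b in zip(scores, scores[1:]))
        return area / (len(scores) - 1)

For instance, aut([0.95, 0.80, 0.55, 0.30]) evaluates to roughly 0.66, penalizing a classifier whose monthly F1 decays over the test horizon; an average over a temporally mixed k-fold split would conceal that decay.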
