King's College London

Research portal

Enabling Fair ML Evaluations for Security

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

Feargus Pendlebury, Fabio Pierazzi, Roberto Jordaney, Johannes Kinder, Lorenzo Cavallaro

Original language: English
Title of host publication: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security
Pages: 2264-2266
ISBN (Electronic): 9781450356930
Publication status: Published - 15 Oct 2018

Publication series: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security

Bibliographical note: ACM Conference on Computer and Communications Security, CCS '18; Conference date: 15-19 Oct 2018


Abstract

Machine learning is widely used in security research to classify malicious activity, ranging from malware to malicious URLs and network traffic. However, published performance numbers often seem to leave little room for improvement and, owing to the wide range of datasets and configurations used, cannot be directly compared across alternative approaches; moreover, most evaluations have been found to suffer from experimental bias that inflates results. In this manuscript we discuss the implementation of Tesseract, an open-source tool that evaluates the performance of machine learning classifiers in a security setting mimicking a deployment with typical data feeds over an extended period of time. In particular, Tesseract allows for a fair comparison of different classifiers in a realistic scenario, without disadvantaging any given classifier. Tesseract is released as open source to give the academic community a way to report sound and comparable performance results, and also to help practitioners decide which system to deploy under specific budget constraints.
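The evaluation setting the abstract describes, training only on data observed before a cut-off and then measuring performance on successive later time periods, can be illustrated with a minimal sketch. This is not Tesseract's actual API; the function name, the toy majority-class "classifier", and the accuracy metric are hypothetical stand-ins chosen to show the time-aware structure with only the standard library.

```python
from datetime import date, timedelta
from collections import Counter

def time_aware_eval(samples, train_end, n_periods, period_days=30):
    """Illustrative time-aware evaluation (not Tesseract's API).

    `samples` is a list of (timestamp, features, label) tuples.
    Train only on samples dated strictly before `train_end`, then
    report accuracy on each subsequent test period in order, so the
    classifier is never evaluated on data older than its training set.
    """
    train = [(x, y) for ts, x, y in samples if ts < train_end]
    # Toy "classifier": always predict the majority training label.
    majority = Counter(y for _, y in train).most_common(1)[0][0]
    results = []
    for i in range(n_periods):
        start = train_end + timedelta(days=i * period_days)
        end = start + timedelta(days=period_days)
        test = [y for ts, _, y in samples if start <= ts < end]
        if test:
            results.append(sum(y == majority for y in test) / len(test))
    return results
```

A real classifier would replace the majority-label rule, but the chronological split is the point: performance is reported per period after training, rather than on a random (and temporally inconsistent) holdout.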

