King's College London

Research portal

Intriguing Properties of Adversarial ML Attacks in the Problem Space

Research output: Contribution to journal › Conference paper › peer-review

Standard

Intriguing Properties of Adversarial ML Attacks in the Problem Space. / Pierazzi, Fabio; Pendlebury, Feargus; Cortellazzi, Jacopo; Cavallaro, Lorenzo.

In: 2020 IEEE Symposium on Security and Privacy, 18.05.2020, pp. 1332-1349.


Harvard

Pierazzi, F, Pendlebury, F, Cortellazzi, J & Cavallaro, L 2020, 'Intriguing Properties of Adversarial ML Attacks in the Problem Space', 2020 IEEE Symposium on Security and Privacy, pp. 1332-1349. https://doi.org/10.1109/SP40000.2020.00073

APA

Pierazzi, F., Pendlebury, F., Cortellazzi, J., & Cavallaro, L. (2020). Intriguing Properties of Adversarial ML Attacks in the Problem Space. 2020 IEEE Symposium on Security and Privacy, 1332-1349. https://doi.org/10.1109/SP40000.2020.00073

Vancouver

Pierazzi F, Pendlebury F, Cortellazzi J, Cavallaro L. Intriguing Properties of Adversarial ML Attacks in the Problem Space. 2020 IEEE Symposium on Security and Privacy. 2020 May 18;1332-1349. https://doi.org/10.1109/SP40000.2020.00073

Author

Pierazzi, Fabio ; Pendlebury, Feargus ; Cortellazzi, Jacopo ; Cavallaro, Lorenzo. / Intriguing Properties of Adversarial ML Attacks in the Problem Space. In: 2020 IEEE Symposium on Security and Privacy. 2020 ; pp. 1332-1349.

BibTeX

@article{e2d03f4997c448099ef42f47f757cff7,
title = "Intriguing Properties of Adversarial ML Attacks in the Problem Space",
abstract = "Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain under-explored. This paper makes two major contributions. First, we propose a novel formalization for adversarial ML evasion attacks in the problem space, which includes the definition of a comprehensive set of constraints on available transformations, preserved semantics, robustness to preprocessing, and plausibility. We shed light on the relationship between feature space and problem space, and we introduce the concept of side-effect features as the byproduct of the inverse feature-mapping problem. This enables us to define and prove necessary and sufficient conditions for the existence of problem-space attacks. We further demonstrate the expressive power of our formalization by using it to describe several attacks from related literature across different domains. Second, building on our formalization, we propose a novel problem-space attack on Android malware that overcomes past limitations. Experiments on a dataset with 170K Android apps from 2017 and 2018 show the practical feasibility of evading a state-of-the-art malware classifier along with its hardened version. Our results demonstrate that {\textquoteleft}adversarial-malware as a service{\textquoteright} is a realistic threat, as we automatically generate thousands of realistic and inconspicuous adversarial applications at scale; on average it takes only a few minutes to generate an adversarial app. Yet, out of the 1600+ papers on adversarial ML published in the past six years, roughly 40 focus on malware [15], and many remain only in the feature space. Our formalization of problem-space attacks paves the way to more principled research in this domain. We responsibly release the code and dataset of our novel attack to other researchers, to encourage future work on defenses in the problem space.",
keywords = "Adversarial machine learning, Evasion, Input space, Malware, Problem space, Program analysis",
author = "Fabio Pierazzi and Feargus Pendlebury and Jacopo Cortellazzi and Lorenzo Cavallaro",
year = "2020",
month = may,
day = "18",
doi = "10.1109/SP40000.2020.00073",
language = "English",
pages = "1332--1349",
journal = "2020 IEEE Symposium on Security and Privacy",
issn = "2375-1207",

}

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Intriguing Properties of Adversarial ML Attacks in the Problem Space

AU - Pierazzi, Fabio

AU - Pendlebury, Feargus

AU - Cortellazzi, Jacopo

AU - Cavallaro, Lorenzo

PY - 2020/5/18

Y1 - 2020/5/18

N2 - Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain under-explored. This paper makes two major contributions. First, we propose a novel formalization for adversarial ML evasion attacks in the problem space, which includes the definition of a comprehensive set of constraints on available transformations, preserved semantics, robustness to preprocessing, and plausibility. We shed light on the relationship between feature space and problem space, and we introduce the concept of side-effect features as the byproduct of the inverse feature-mapping problem. This enables us to define and prove necessary and sufficient conditions for the existence of problem-space attacks. We further demonstrate the expressive power of our formalization by using it to describe several attacks from related literature across different domains. Second, building on our formalization, we propose a novel problem-space attack on Android malware that overcomes past limitations. Experiments on a dataset with 170K Android apps from 2017 and 2018 show the practical feasibility of evading a state-of-the-art malware classifier along with its hardened version. Our results demonstrate that ‘adversarial-malware as a service’ is a realistic threat, as we automatically generate thousands of realistic and inconspicuous adversarial applications at scale; on average it takes only a few minutes to generate an adversarial app. Yet, out of the 1600+ papers on adversarial ML published in the past six years, roughly 40 focus on malware [15], and many remain only in the feature space. Our formalization of problem-space attacks paves the way to more principled research in this domain. We responsibly release the code and dataset of our novel attack to other researchers, to encourage future work on defenses in the problem space.

AB - Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain under-explored. This paper makes two major contributions. First, we propose a novel formalization for adversarial ML evasion attacks in the problem space, which includes the definition of a comprehensive set of constraints on available transformations, preserved semantics, robustness to preprocessing, and plausibility. We shed light on the relationship between feature space and problem space, and we introduce the concept of side-effect features as the byproduct of the inverse feature-mapping problem. This enables us to define and prove necessary and sufficient conditions for the existence of problem-space attacks. We further demonstrate the expressive power of our formalization by using it to describe several attacks from related literature across different domains. Second, building on our formalization, we propose a novel problem-space attack on Android malware that overcomes past limitations. Experiments on a dataset with 170K Android apps from 2017 and 2018 show the practical feasibility of evading a state-of-the-art malware classifier along with its hardened version. Our results demonstrate that ‘adversarial-malware as a service’ is a realistic threat, as we automatically generate thousands of realistic and inconspicuous adversarial applications at scale; on average it takes only a few minutes to generate an adversarial app. Yet, out of the 1600+ papers on adversarial ML published in the past six years, roughly 40 focus on malware [15], and many remain only in the feature space. Our formalization of problem-space attacks paves the way to more principled research in this domain. We responsibly release the code and dataset of our novel attack to other researchers, to encourage future work on defenses in the problem space.

KW - Adversarial machine learning

KW - Evasion

KW - Input space

KW - Malware

KW - Problem space

KW - Program analysis

UR - http://www.scopus.com/inward/record.url?scp=85091583442&partnerID=8YFLogxK

U2 - 10.1109/SP40000.2020.00073

DO - 10.1109/SP40000.2020.00073

M3 - Conference paper

SP - 1332

EP - 1349

JO - 2020 IEEE Symposium on Security and Privacy

JF - 2020 IEEE Symposium on Security and Privacy

SN - 2375-1207

ER -


© 2020 King's College London | Strand | London WC2R 2LS | England | United Kingdom | Tel +44 (0)20 7836 5454