Exploring Defenses Against Adversarial Attacks in Machine Learning-Based Malware Detection

Student thesis: Doctoral Thesis (Doctor of Philosophy)

Abstract

Machine learning (ML) has facilitated progress in several disciplines as a result of greater computational resources, larger data volumes, and algorithmic advances. Particularly in cybersecurity, ML-based malware detection has proven to be an effective method of classifying software executables as benign or malicious. However, an unintended consequence of the increased use of machine learning is a widened attack surface in the form of adversarial ML attacks. With such attacks, malicious users craft input samples with the intention of evading ML models and causing them to output a specific prediction. For example, in an evasion attack, a malicious executable may be carefully constructed so that an ML model misclassifies it as benign, despite its malicious nature. Such an attack has important ramifications for the safety of machine learning systems, especially considering the well-established adversarial nature of malware detection. Adversarial attacks are not exclusive to ML-based malware detection; they affect ML systems across many domains. Thus, significant effort has been devoted to developing methods for defending ML models against adversarial attacks. However, as we discuss, these methods have a number of limitations, such as limited effectiveness, failure to cope with newer types of attacks, and reduced performance on legitimate queries. Furthermore, in ML-based malware detection, defenses against adversarial attacks have received comparatively little attention, despite the threat posed by these attacks.
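To make the evasion-attack setting concrete, the following is a minimal, illustrative sketch only, not the thesis's experimental setup: a toy linear "detector" over binary malware features, and a greedy attacker that may only add features (functionality-preserving additions) until the detector's prediction flips to benign. All names and weights here are made-up assumptions.

```python
import numpy as np

# Hypothetical toy detector: a linear model over binary feature indicators
# (e.g., imported APIs, section names). Weights are random stand-ins; a real
# detector would be trained on labelled executables.
rng = np.random.default_rng(0)
n_features = 50
weights = rng.normal(size=n_features)   # positive weight => "more malicious"
bias = -0.5

def predict_malicious(x):
    """Return True if the toy detector scores the sample as malicious."""
    return float(x @ weights + bias) > 0.0

def greedy_evasion(x, max_added=20):
    """Feature-space evasion sketch: the attacker may only ADD features,
    never remove existing ones, mimicking functionality-preserving edits."""
    x = x.copy()
    for _ in range(max_added):
        if not predict_malicious(x):
            return x, True                      # detector now says "benign"
        # add the absent feature whose weight lowers the score the most
        candidates = np.where((x == 0) & (weights < 0))[0]
        if candidates.size == 0:
            break
        best = candidates[np.argmin(weights[candidates])]
        x[best] = 1
    return x, not predict_malicious(x)

malware = (rng.random(n_features) < 0.4).astype(float)
evaded_sample, success = greedy_evasion(malware)
print("originally malicious:", predict_malicious(malware))
print("evasion succeeded:   ", success)
```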

In this dissertation, we aim to establish a fresh understanding of methods to defend against adversarial attacks in ML-based malware detection. We propose two novel defensive methods built on promising approaches, moving target defenses (MTDs) and stateful defenses, which originate in other areas of cybersecurity and have not previously been applied to this domain. First, we propose a novel strategic moving target defense against adversarial attacks that overcomes the challenges and limitations of related work. We show that this strategic defense outperforms several prior defenses in the first evaluation of its kind, which considers a range of threat models. Second, we characterize and evaluate, for the first time, the effectiveness of several MTDs for adversarial ML applied to this domain. In a comprehensive study, we extensively compare their performance with each other and with other types of defenses against adversarial ML. We expose the vulnerability of these MTDs to both existing attack strategies and our proposed novel ones, highlighting the key weaknesses of these approaches. Based on these findings, we present key recommendations for advancing work on MTDs against adversarial ML. Inspired by one of these recommendations, we then propose a novel stateful defense against adversarial attacks on ML-based malware detection. In the first study of its kind, we showcase our defense's capabilities under several attack scenarios, demonstrating that it reduces attack success more effectively than a range of prior stateful defenses.
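The two defensive ideas named above can be sketched very roughly as follows. This is not the thesis's actual design; it is an assumed, simplified illustration in which an MTD answers each query with a randomly chosen model from a pool, and a stateful wrapper flags clients that submit many near-duplicate queries (a common signature of query-based attacks). All class names, thresholds, and the toy model pool are hypothetical.

```python
import numpy as np
from collections import defaultdict

# Toy pool of three linear "detectors" standing in for independently
# trained malware classifiers (illustrative only).
rng = np.random.default_rng(1)
n_features = 50
model_pool = [rng.normal(size=n_features) for _ in range(3)]

class MovingTargetDetector:
    """MTD idea: answer each query with a randomly chosen model from the pool,
    so an attacker cannot reliably probe a single fixed decision boundary."""
    def __init__(self, pool):
        self.pool = pool

    def predict(self, x):
        w = self.pool[rng.integers(len(self.pool))]   # per-query model switch
        return float(x @ w) > 0.0                     # True => malicious

class StatefulDetector:
    """Stateful-defense idea: remember past queries per client and flag
    clients whose queries are suspiciously similar to earlier ones."""
    def __init__(self, base, sim_threshold=0.95, max_similar=5):
        self.base = base
        self.history = defaultdict(list)
        self.sim_threshold = sim_threshold
        self.max_similar = max_similar

    def predict(self, client_id, x):
        past = self.history[client_id]
        similar = sum(
            1 for q in past
            if np.dot(q, x) / (np.linalg.norm(q) * np.linalg.norm(x) + 1e-9)
            > self.sim_threshold
        )
        self.history[client_id].append(x)
        if similar >= self.max_similar:
            return "flagged"                          # likely attack in progress
        return self.base.predict(x)

detector = StatefulDetector(MovingTargetDetector(model_pool))
query = rng.random(n_features)
for i in range(8):
    noisy = query + rng.normal(scale=0.01, size=n_features)   # near-duplicates
    print(i, detector.predict("client-42", noisy))
```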

Taken as a whole, this dissertation offers a fresh perspective on the methods that constitute promising directions for coping with adversarial attacks on ML-based malware detection.


Date of Award: 1 Sept 2023
Original language: English
Awarding Institution
  • King's College London
Supervisors: Hana Chockler (Supervisor) & Jose Such (Supervisor)
