GADoT: GAN-based Adversarial Training for Robust DDoS Attack Detection

Maged Abdelaty; Roberto Doriguzzi-Corin; Domenico Siracusa
2021-01-01

Abstract

Machine Learning (ML) has proven to be effective in many application domains. However, ML methods can be vulnerable to adversarial attacks, in which an attacker tries to fool the classification/prediction mechanism by crafting the input data. In the case of ML-based Network Intrusion Detection Systems (NIDSs), the attacker might use their knowledge of the intrusion detection logic to generate malicious traffic that remains undetected. One way to address this issue is to adopt adversarial training, in which the training set is augmented with adversarial traffic samples. This paper presents an adversarial training approach called GADoT, which leverages a Generative Adversarial Network (GAN) to generate adversarial DDoS samples for training. We show that a state-of-the-art NIDS with high accuracy on popular datasets can fail to detect more than 60% of malicious flows under adversarial attacks. We then demonstrate how this rate drops to 1.8% or less after adversarial training with GADoT.
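To make the approach concrete, the following is a minimal Python sketch of the adversarial-training loop the abstract describes, not the authors' implementation: a GAN generator (here a hypothetical stand-in) produces adversarial DDoS feature vectors, which are labeled as malicious, merged into the original training set, and used to retrain the NIDS classifier. The generator, feature dimensions, and classifier choice are all illustrative assumptions.

# NOT the authors' code: the GAN generator is replaced by an illustrative
# stand-in, and all feature shapes and names are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def generate_adversarial_flows(generator, n_samples, latent_dim=32):
    """Sample adversarial DDoS feature vectors from a (trained) GAN generator."""
    z = rng.normal(size=(n_samples, latent_dim)).astype(np.float32)
    return generator(z)  # -> array of shape (n_samples, n_features)

# Placeholder for the trained GAN generator (assumption): it perturbs a
# malicious flow template so samples deviate from the patterns the NIDS learned.
template = rng.uniform(size=16).astype(np.float32)
fake_generator = lambda z: template + 0.05 * z[:, :16]

# Original labeled training data (random features, for illustration only).
X_train = rng.uniform(size=(1000, 16)).astype(np.float32)
y_train = rng.integers(0, 2, size=1000)  # 0 = benign, 1 = DDoS

# Augment the training set with GAN-generated samples labeled as malicious,
# then retrain the NIDS classifier on the augmented set.
X_adv = generate_adversarial_flows(fake_generator, 200)
X_aug = np.vstack([X_train, X_adv])
y_aug = np.concatenate([y_train, np.ones(200, dtype=int)])

nids = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
nids.fit(X_aug, y_aug)

The retrained classifier has now seen evasive variants of the attack traffic during training, which is the mechanism the paper credits for reducing the rate of undetected malicious flows.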


Use this identifier to cite or link to this document: https://hdl.handle.net/11582/328747