Understanding and Combating Robust Overfitting via Input Loss Landscape Analysis and Regularization

Research output: Contribution to journal › Article › peer-review



Adversarial training is widely used to improve the robustness of deep neural networks to adversarial attacks. However, adversarial training is prone to overfitting, and the cause is far from clear. This work sheds light on the mechanisms underlying robust overfitting by analyzing the loss landscape w.r.t. the input. We find that robust overfitting results from standard training, specifically the minimization of the clean loss, and can be mitigated by regularization of the loss gradients. Moreover, we find that robust overfitting becomes more severe during adversarial training partially because the gradient regularization effect of adversarial training weakens as the curvature of the loss landscape increases. To improve robust generalization, we propose a new regularizer that smooths the loss landscape by penalizing the weighted logits variation along the adversarial direction. Our method significantly mitigates robust overfitting and achieves higher robustness and efficiency than comparable previous methods. Code is available at https://github.com/TreeLLi/Combating-RO-AdvLC.
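The abstract describes the proposed regularizer only at a high level: it penalizes the weighted variation of the logits along the adversarial direction, i.e. between a clean input and its adversarial counterpart. The sketch below is a hypothetical numpy illustration of that idea, not the authors' implementation; the exact weighting scheme and norm are not specified in the abstract, so the choice of clean-prediction probabilities as weights and a squared-difference penalty here are assumptions for illustration (see the linked repository for the actual method).

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def logits_variation_penalty(logits_clean, logits_adv, weights=None):
    """Hypothetical sketch of a 'weighted logits variation' regularizer.

    Penalizes how much the logits change along the adversarial
    direction (clean input -> adversarial input). The per-class
    weights are assumed here to be the clean softmax probabilities;
    the paper's actual weighting may differ.
    """
    diff = logits_adv - logits_clean  # logits variation along the adversarial direction
    if weights is None:
        weights = softmax(logits_clean)  # assumed weighting, not from the abstract
    return float(np.mean(np.sum(weights * diff ** 2, axis=-1)))

# Toy usage: one 3-class example with clean and adversarial logits.
clean = np.array([[2.0, 0.5, -1.0]])
adv = np.array([[1.0, 1.0, -0.5]])
penalty = logits_variation_penalty(clean, adv)
# In training, this term would be added (with a coefficient) to the adversarial loss.
```

A smaller penalty corresponds to a flatter loss landscape along the adversarial direction, which is the smoothing effect the abstract attributes to the method.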
Original language: English
Article number: 109229
Early online date: 8 Dec 2022
Publication status: Published - Apr 2023


  • cs.LG


