L1-norm double backpropagation adversarial defense

Abstract: Adversarial examples are a challenging open problem for deep neural networks. We propose in this paper to add a penalization term that forces the decision function to be flat in some regions of the input space, so that it becomes, at least locally, less sensitive to attacks. Our proposition is theoretically motivated, and a first set of carefully conducted experiments shows that it behaves as expected when used alone and seems promising when coupled with adversarial training.
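
To make the idea concrete, below is a minimal sketch of what such an L1 input-gradient penalty (the double backpropagation term) could look like in PyTorch. The function name l1_double_backprop_loss, the weighting coefficient lam, and the choice of cross-entropy as the base loss are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def l1_double_backprop_loss(model, x, y, lam=0.1):
    # Standard classification loss on a leaf copy of the input so that
    # gradients with respect to x can be computed.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # First backward pass: gradient of the loss w.r.t. the input, kept in
    # the graph (create_graph=True) so the penalty can itself be
    # differentiated during the second backward pass, hence "double
    # backpropagation".
    grad_x, = torch.autograd.grad(ce, x, create_graph=True)
    # L1 norm of the input gradient per sample, averaged over the batch:
    # pushing it towards zero makes the loss locally flat around x.
    penalty = grad_x.abs().flatten(start_dim=1).sum(dim=1).mean()
    return ce + lam * penalty

During training one would simply backpropagate through this combined loss, e.g. loss = l1_double_backprop_loss(model, x, y); loss.backward(); optimizer.step(), so that both the classification term and the flatness penalty shape the parameters.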

https://hal-clermont-univ.archives-ouvertes.fr/hal-02049020
Contributor: Gaëlle Loosli
Submitted on: Tuesday, March 5, 2019 - 8:44:47 AM
Last modification on: Friday, March 15, 2019 - 1:14:30 AM

Identifiers

  • HAL Id: hal-02049020, version 1
  • arXiv: 1903.01715

Citation

Ismaïla Seck, Gaëlle Loosli, Stéphane Canu. L1-norm double backpropagation adversarial defense. ESANN - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Apr 2019, Bruges, France. ⟨hal-02049020⟩
