Adversarial Training for Probabilistic Spiking Neural Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

23 Citations (Scopus)

Abstract

Classifiers trained using conventional empirical risk minimization or maximum likelihood methods are known to suffer dramatic performance degradation when tested on examples adversarially selected based on knowledge of the classifier's decision rule. Due to the prominence of Artificial Neural Networks (ANNs) as classifiers, their sensitivity to adversarial examples, as well as robust training schemes, have recently been the subject of intense investigation. In this paper, for the first time, the sensitivity of spiking neural networks (SNNs), or third-generation neural networks, to adversarial examples is studied. The study considers rate and time encoding, as well as rate and first-to-spike decoding. Furthermore, a robust training mechanism is proposed and demonstrated to enhance the performance of SNNs under white-box attacks.
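As a rough illustration of the general idea described in the abstract, the sketch below shows white-box adversarial training with a fast-gradient-sign (FGSM-style) attack on a generic differentiable classifier. This is not the paper's method: the paper targets probabilistic SNNs with GLM neurons and rate/time encodings, which are not reproduced here, and all names (model, loss_fn, optimizer, epsilon) are illustrative assumptions.

```python
# Minimal sketch of adversarial training, assuming a generic differentiable
# classifier in PyTorch. Illustrative only; the paper's probabilistic SNN
# model and encodings are not reproduced here.
import torch


def fgsm_perturb(model, loss_fn, x, y, epsilon):
    """White-box attack: one signed gradient step in input space (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move each input in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon):
    """Train on a mixture of clean and adversarially perturbed examples."""
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()
    # Equal weighting of clean and adversarial loss is an assumption here.
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice is the standard one for robust training: the attacker perturbs inputs using knowledge of the classifier's decision rule, and the defender minimizes the loss over both clean and attacked examples.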

Original language: English
Title of host publication: 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Volume: 2018-June
ISBN (Print): 9781538635124
DOIs
Publication status: Published - 24 Aug 2018
Event: 19th IEEE International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2018 - Kalamata, Greece
Duration: 25 Jun 2018 - 28 Jun 2018

Conference

Conference: 19th IEEE International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2018
Country/Territory: Greece
City: Kalamata
Period: 25/06/2018 - 28/06/2018

Keywords

  • adversarial examples
  • adversarial training
  • Generalized Linear Model (GLM)
  • Spiking Neural Networks (SNNs)
