DEIB - Conference Room "Emilio Gatti"
May 23rd, 2018
2.30 pm
Contacts:
Marco Ciccone
Research Line:
Artificial intelligence and robotics
Deep Neural Networks (DNNs) have rapidly become the first choice in a variety of challenging applications. The reasons behind this success lie in their remarkable generalization properties. However, DNNs have recently been shown to be particularly vulnerable to well-designed input perturbations, called "Adversarial Examples". These perturbations are imperceptible to the human eye, yet can completely fool a neural network. This vulnerability is one of the major risks in applying DNNs to safety-critical systems such as self-driving cars, and it needs to be addressed. In this talk we give an overview of attack methods for generating adversarial examples and of techniques for defending against such attacks. In particular, we propose a taxonomy that covers both. Finally, we analyze some of the reasons that might explain why neural networks are vulnerable to adversarial perturbations, and we outline new research directions.
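
For illustration only, the sketch below (assuming PyTorch) shows the Fast Gradient Sign Method (FGSM), one widely known attack for crafting adversarial examples; the specific methods covered in the talk are not listed in this announcement, and the function and parameter names here are hypothetical.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Perturb input x so the model misclassifies it, keeping the change small."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon per pixel.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()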