Defensive approximation: securing CNNs using approximate computing

Open archive: Conference paper

Guesmi, Amira | Alouani, Ihsen | Khasawneh, Khaled | Baklouti, Mouna | Frikha, Tarek | Abid, Mohamed | Abu-Ghazaleh, Nael

Published by HAL CCSD; ACM

International audience. In the past few years, an increasing number of machine-learning and deep-learning architectures, such as Convolutional Neural Networks (CNNs), have been applied to a wide range of real-life problems. However, these architectures are vulnerable to adversarial attacks: inputs carefully crafted to force the system to output a wrong label. Since machine learning is being deployed in safety-critical and security-sensitive domains, such attacks may have catastrophic security and safety consequences. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine-learning classifiers. We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios. Specifically, we show that successful adversarial attacks against the exact classifier have poor transferability to the approximate implementation. The transferability is even poorer in black-box attack scenarios, where adversarial examples are generated using a proxy model. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has unrestricted access to the approximate classifier implementation: in this case, we show that substantially higher levels of adversarial noise are needed to produce adversarial examples. Furthermore, our approximate computing model maintains the same classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments with a set of strong adversarial attacks and empirically show that the proposed implementation increases the robustness of LeNet-5 and AlexNet CNNs by up to 99% and 87%, respectively, for strong transferability-based attacks, along with up to 50% savings in energy consumption due to the simpler nature of the approximate logic.
We also show that a white-box attack requires a remarkably higher noise budget to fool the approximate classifier, causing an average 4 dB degradation in the PSNR of the input image relative to images that succeed in fooling the exact classifier.
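The two quantitative notions in the abstract, simplified arithmetic and the PSNR noise budget, can be sketched as follows. This is an illustrative sketch only: `approx_mul` uses plain operand bit-truncation as a stand-in for the paper's hardware approximate multiplier (the actual design may differ), while `psnr` is the standard peak signal-to-noise ratio used to measure perturbation size.

```python
import numpy as np

def approx_mul(a, b, drop_bits=4):
    # Illustrative truncation-based approximate multiplier: zero the
    # low-order bits of each integer operand before multiplying.
    # Fewer active bits means simpler multiplier logic in hardware,
    # at the cost of a bounded arithmetic error.
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

def psnr(ref, noisy, peak=255.0):
    # Peak signal-to-noise ratio (dB) between a clean image and its
    # perturbed version; a lower PSNR means a larger perturbation.
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(noisy, dtype=np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Exact vs. approximate product for one pair of 8-bit operands.
exact, approx = 117 * 93, approx_mul(117, 93)

# A stronger adversarial perturbation shows up as a lower PSNR,
# which is the sense in which the white-box attack above needs a
# higher noise budget (about 4 dB of PSNR) against the approximate classifier.
img = np.zeros((28, 28))
weak, strong = psnr(img, img + 1.0), psnr(img, img + 4.0)
```

The truncation width (`drop_bits`) plays the role of the accuracy/energy knob: more dropped bits means cheaper logic but larger arithmetic error.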


Suggestions

By the same author

SIT: Stochastic Input Transformation to Defend Against Adversarial Attacks ...

Open archive: Journal article

Guesmi, Amira | 2022-06

Deep Neural Networks (DNNs) have been deployed in a wide range of applications, including safety-critical domains, owing to their proven efficiency in solving complex problems. However, these...

ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints

Open archive: Conference paper

Guesmi, Amira | 2022-07-18

Advances in deep learning have enabled a wide range of promising applications. However, these systems are vulnerable to adversarial attacks; adversarially crafted perturbations to their inpu...

Lower Voltage for Higher Security: Using Voltage Overscaling to Secure Deep...

Open archive: Conference paper

Islam, Shohidul | 2021-11-01

Deep neural networks (DNNs) are shown to be vulnerable to adversarial attacks: carefully crafted additive noise that undermines DNN integrity. Previously proposed defenses against these att...

On the same subject

Feeling multiple edges: the tactile perception of short ultrasonic square r...

Open archive: Conference paper

Gueorguiev, David | 2017-06-06

This study investigates human perception of tactile feedback using ultrasonic lubrication, in situations where feedback is provided using short frictional cues of varying duration and sharpnes...

Numerical study of the reflected elastic waves using Rayleigh diffraction i...

Open archive: Journal article

Maghlaoui, Nadir | 2019-10-25

In this work, the transient ultrasonic waves radiated by a linear phased-array transducer in a liquid and then reflected at a liquid-solid interface are studied. A model based on the Rayleigh inte...

New laws of robotics : defending human expertise in the age of AI / Frank P...

Book | Pasquale, Frank. Author | 2020

Publisher's description: "AI is poised to disrupt our work and our lives. We can harness these technologies rather than fall captive to them, but only through wise regulation. Too many CEOs tell a simple story about the future o...

Cybermonde, la politique du pire : entretien avec Philippe Petit / Paul Vir...

Book | Virilio, Paul (1932-2018). Author | 1996

Towards zero-latency video transmission through frame extrapolation

Open archive: Conference paper

Vijayaratnam, Melan | 2022-10-16

In the past few years, several efforts have been devoted to reducing individual sources of latency in video delivery, including acquisition, coding and network transmission. The goal is to impr...

Mobile application for 3D real-time visualization for Outdoor sports compet...

Open archive: Conference paper

Pagès, Thierry | 2017-09-29

One article submitted to the scientific committee of the geomatics trade fair, plus a PDF presentation and its associated mp3 file for the demonstration of real-time athlete tracking. The LIS3D application...
