Algorithms, Vol. 15, Issue 12 (2022)
ARTICLE

A Mask-Based Adversarial Defense Scheme

Weizhen Xu    
Chenyi Zhang    
Fangzhen Zhao and Liangda Fang    

Abstract

Adversarial attacks hamper the functionality and accuracy of deep neural networks (DNNs) by meddling with subtle perturbations to their inputs. In this work, we propose a new mask-based adversarial defense scheme (MAD) for DNNs to mitigate the negative effect from adversarial attacks. Our method preprocesses multiple copies of a potential adversarial image by applying random masking, before the outputs of the DNN on all the randomly masked images are combined. As a result, the combined final output becomes more tolerant to minor perturbations on the original input. Compared with existing adversarial defense techniques, our method does not need any additional denoising structure or any change to a DNN's architectural design. We have tested this approach on a collection of DNN models for a variety of datasets, and the experimental results confirm that the proposed method can effectively improve the defense abilities of the DNNs against all of the tested adversarial attack methods. In certain scenarios, the DNN models trained with MAD can improve classification accuracy by as much as 90% compared to the original models when given adversarial inputs.
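The core idea described in the abstract, classifying several randomly masked copies of an input and combining the model's outputs, can be illustrated with a minimal sketch. This is not the authors' implementation; the masking strategy (zeroing a random fraction of pixels), the `mask_ratio` and `n_copies` parameters, and the toy softmax "model" are all hypothetical stand-ins chosen for illustration.

```python
import numpy as np

def random_mask(image, mask_ratio, rng):
    """Zero out a random fraction of pixels (one simple form of masking)."""
    keep = rng.random(image.shape) >= mask_ratio  # keep ~(1 - mask_ratio) of pixels
    return image * keep

def mad_predict(model, image, n_copies=8, mask_ratio=0.1, seed=0):
    """Mask-based defense sketch: run the model on several randomly
    masked copies of the input and average the output probabilities,
    so small adversarial perturbations affect the combined result less."""
    rng = np.random.default_rng(seed)
    outputs = [model(random_mask(image, mask_ratio, rng)) for _ in range(n_copies)]
    return np.mean(outputs, axis=0)

# Toy stand-in "model": a fixed linear map followed by softmax over 3 classes.
rng = np.random.default_rng(42)
W = rng.normal(size=(3, 64))

def toy_model(x):
    logits = W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

img = rng.random((8, 8))        # an 8x8 "image"
probs = mad_predict(toy_model, img)
```

Because each masked copy yields a valid probability distribution, their average is also a distribution; the combined output simply smooths the model's response over the random masks.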

Similar Articles

Fabio Carrara, Roberto Caldelli, Fabrizio Falchi and Giuseppe Amato    
The adoption of deep learning-based solutions practically pervades all the diverse areas of our everyday life, showing improved performance with respect to other classical systems. Since many applications deal with sensitive data and procedures, a strong...
Journal: Information

 
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He    
Deep models are widely used and have been shown to carry hidden security risks. An adversarial attack can bypass traditional means of defense: by modifying the input data, the attack on the deep model is realized, and it is imperceptible...
Journal: Information

 
Zhirui Luo, Qingqing Li and Jun Zheng    
Transfer learning using pre-trained deep neural networks (DNNs) has been widely used for plant disease identification recently. However, pre-trained DNNs are susceptible to adversarial attacks, which generate adversarial samples causing DNN models to make...
Journal: Applied Sciences

 
Shilin Qiu, Qihe Liu, Shijie Zhou and Chunjiang Wu    
In recent years, artificial intelligence technologies have been widely used in computer vision, natural language processing, automatic driving, and other fields. However, artificial intelligence systems are vulnerable to adversarial attacks, which limit...
Journal: Applied Sciences