ARTICLE

Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter

Fabio Carrara, Roberto Caldelli, Fabrizio Falchi and Giuseppe Amato

Abstract

The adoption of deep learning-based solutions pervades practically every area of everyday life, showing improved performance with respect to classical systems. Since many applications deal with sensitive data and procedures, there is a strong demand to assess the actual reliability of these technologies. This work analyzes the robustness characteristics of a specific kind of deep neural network, the neural ordinary differential equation (N-ODE) network. N-ODEs are interesting both for their effectiveness and for a peculiar property: a solver tolerance parameter, tunable at test time, that permits trading accuracy off against efficiency. Adjusting this tolerance also grants robustness against adversarial attacks; notably, decoupling the tolerance values used at training and at test time can strongly reduce the attack success rate. On this basis, we show how the tolerance can be tuned during the prediction phase to improve the robustness of N-ODEs to adversarial attacks. In particular, we demonstrate how to exploit this property to construct an effective detection strategy that increases the chances of identifying adversarial examples in a non-zero-knowledge attack scenario. Our experimental evaluation on two standard image classification benchmarks shows that the proposed detection technique rejects a high fraction of adversarial examples while retaining most pristine samples.
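To make the mechanism described in the abstract concrete, below is a minimal sketch (not the authors' code) of a Neural ODE classifier whose solver tolerance is exposed as a test-time parameter, written in PyTorch with the torchdiffeq library. The class names (`ODEFunc`, `ODEBlock`, `NODEClassifier`), the specific tolerance values, and the `tolerance_detector` decision rule are illustrative assumptions in the spirit of the paper, not its actual implementation.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # adaptive-step ODE solvers for PyTorch


class ODEFunc(nn.Module):
    """Dynamics f(t, h) integrated by the ODE solver."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t, h):
        return self.net(h)


class ODEBlock(nn.Module):
    """Integrates the dynamics from t=0 to t=1 with an adaptive-step solver.

    `tol` is the test-time tunable tolerance discussed in the abstract: it can
    be set to a value different from the one used during training.
    """

    def __init__(self, func, tol=1e-3):
        super().__init__()
        self.func = func
        self.tol = tol

    def forward(self, h0):
        t = torch.tensor([0.0, 1.0], device=h0.device)
        # rtol/atol steer the solver's accuracy/efficiency trade-off.
        h = odeint(self.func, h0, t, rtol=self.tol, atol=self.tol)
        return h[-1]  # hidden state at t=1


class NODEClassifier(nn.Module):
    """Toy N-ODE classifier; a real model would use a convolutional encoder."""

    def __init__(self, in_dim, hidden_dim, n_classes, tol=1e-3):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.ode_block = ODEBlock(ODEFunc(hidden_dim), tol=tol)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        return self.head(self.ode_block(self.encoder(x)))


def tolerance_detector(model, x, train_tol=1e-3, test_tol=1e-1):
    """Hypothetical detection rule in the spirit of the abstract: flag inputs
    whose predicted class changes when the prediction-time solver tolerance is
    decoupled from the training value (adversarial examples tend to be less
    stable under such changes than pristine ones)."""
    with torch.no_grad():
        model.ode_block.tol = train_tol
        pred_matched = model(x).argmax(dim=1)
        model.ode_block.tol = test_tol
        pred_decoupled = model(x).argmax(dim=1)
        model.ode_block.tol = train_tol  # restore the training-time setting
    # True -> reject the sample as likely adversarial.
    return pred_matched != pred_decoupled
```

In this sketch, lowering `tol` makes the adaptive solver take more, smaller integration steps. The observation the paper builds on is that a perturbation crafted against one tolerance setting transfers poorly to another, which is exactly what the disagreement test above exploits.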

Similar articles

Yuanming Chen, Xiaobin Hong, Bin Cui and Rongfa Peng
With the increasingly mature technology of unmanned surface vehicles (USVs), their applications are becoming more and more widespread. In order to meet operational requirements in complex scenarios, the real-time interaction and linkage of a large amou...

 
Muzi Cui, Hao Jiang and Chaozhuo Li
Image inpainting aims to synthesize missing regions in images that are coherent with the existing visual content. Generative adversarial networks have made significant strides in the development of image inpainting. However, existing approaches heavily r...
Journal: Information

 
Jiaping Wu, Zhaoqiang Xia and Xiaoyi Feng
In recent years, adversarial examples have aroused widespread research interest and raised concerns about the safety of CNNs. We study adversarial machine learning inspired by a support vector machine (SVM), where the decision boundary with maximum margi...
Journal: Applied Sciences

 
Zhen Li, Heng Yao, Ran Shi, Tong Qiao and Chuan Qin
In daily life, when taking photos of scenes containing glass, the images of the dominant transmission layer and the weak reflection layer are often blended and difficult to uncouple. Meanwhile, because the reflection layer contains sufficient ...
Journal: Applied Sciences

 
Weimin Zhao, Sanaa Alwidian and Qusay H. Mahmoud
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms. Adversarial training is one of the methods used to defend against the thr...
Journal: Algorithms