ARTICLE
Applied Sciences, Vol. 13, Part 3 (2023)

A Robust Adversarial Example Attack Based on Video Augmentation

Mingyong Yin, Yixiao Xu, Teng Hu and Xiaolei Liu

Abstract

Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems. Video adversarial attacks add subtle noise to the original example, causing a false classification result. Thorough studies of how to generate video adversarial examples are therefore essential for anticipating potential attacks. Although adversarial attacks have received much attention, research on the robustness of video adversarial examples remains limited. To generate highly robust video adversarial examples, we propose a video-augmentation-based adversarial attack (v3a) that uses video transformations to reinforce the attack. Furthermore, we investigate different transformations as parts of the loss function to make the video adversarial examples more robust. The experimental results show that our proposed method outperforms other adversarial attacks in terms of robustness. We hope that our study encourages a deeper understanding of adversarial robustness in video classification systems with video augmentation.
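
The abstract gives only a high-level description of v3a, but the core idea it states, folding video transformations into the attack's loss so that the perturbation survives augmentation, can be illustrated with a short expectation-over-transformations-style sketch. The PyTorch code below is an assumption-laden illustration: the transformation set, attack hyperparameters, and the `model`/input conventions are placeholders, not the paper's actual v3a implementation.

```python
import torch
import torch.nn.functional as F

def augmentation_attack(model, video, label, eps=8/255, alpha=1/255, steps=40):
    """PGD-style untargeted attack whose loss is averaged over random video
    transformations, so the perturbation stays effective after augmentation.

    Assumptions (illustrative, not from the paper): `video` has shape
    (1, T, C, H, W) with values in [0, 1], `label` is a LongTensor of shape
    (1,), and `model` maps such videos to class logits.
    """
    delta = torch.zeros_like(video, requires_grad=True)

    def random_transforms(x):
        # Small illustrative augmentation set: frame dropping, Gaussian
        # noise, and brightness scaling.
        outs = []
        t = torch.randint(0, x.shape[1], (1,)).item()
        dropped = x.clone()
        dropped[:, t] = x[:, max(t - 1, 0)]   # replace one frame with its neighbour
        outs.append(dropped)
        outs.append(torch.clamp(x + 0.01 * torch.randn_like(x), 0, 1))
        outs.append(torch.clamp(x * 1.1, 0, 1))
        return outs

    for _ in range(steps):
        transformed = random_transforms(torch.clamp(video + delta, 0, 1))
        # Average the classification loss over the transformed copies so the
        # perturbation must fool the model under every augmentation.
        loss = sum(F.cross_entropy(model(x_aug), label) for x_aug in transformed)
        loss = loss / len(transformed)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-eps, eps)              # keep the perturbation small
            delta.grad.zero_()
    return torch.clamp(video + delta, 0, 1).detach()
```

Averaging the loss over several randomly transformed copies of the perturbed video is what pushes the perturbation to remain adversarial after frame dropping, noise, or brightness changes, which is the robustness property the abstract targets.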

Similar Articles

Valeria Mercuri, Martina Saletta and Claudio Ferretti    
As the prevalence and sophistication of cyber threats continue to increase, the development of robust vulnerability detection techniques becomes paramount in ensuring the security of computer systems. Neural models have demonstrated significant potential...
Journal: Algorithms

 
Amani Alqarni and Hamoud Aljamaan    
Software defect prediction is an active research area. Researchers have proposed many approaches to overcome the imbalanced defect problem and build highly effective machine learning models that are not biased towards the majority class. Generative adver...
Journal: Applied Sciences

 
Dejian Guan, Wentao Zhao and Xiao Liu    
Recent studies show that deep neural networks (DNNs)-based object recognition algorithms overly rely on object textures rather than global object shapes, and DNNs are also vulnerable to human-less perceptible adversarial perturbations. Based on these two...
Journal: Applied Sciences

 
Weimin Zhao, Sanaa Alwidian and Qusay H. Mahmoud    
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms. Adversarial training is one of the methods used to defend against the thr...
Journal: Algorithms

 
Rina Komatsu and Tad Gonsalves    
In CycleGAN, an image-to-image translation architecture was established without the use of paired datasets by employing both adversarial and cycle consistency loss. The success of CycleGAN was followed by numerous studies that proposed new translation mo...
Journal: AI