Home  /  Information  /  Vol. 14, No. 9 (2023)  /  Article

Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation

Mehdi Sadi    
Bashir Mohammad Sabquat Bahar Talukder    
Kaniz Mishty and Md Tauhidur Rahman    

Abstract

Universal adversarial perturbations are image-agnostic, model-independent noise patterns that, when added to any input image, can mislead a trained deep convolutional neural network into a wrong prediction. Because these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, existing defense techniques use additional neural networks to detect such noise at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, a trojan), can bypass these existing countermeasures by injecting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment.

 Similar Articles

Mingyong Yin, Yixiao Xu, Teng Hu and Xiaolei Liu    
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems. Video adversarial attacks add subtle noise to the original example, resulti... see more
Journal: Applied Sciences

 
Xintao Liang, Yuhang Li, Xiaomin Li, Yue Zhang and Youdong Ding    
Implementing single-channel speech enhancement under unknown noise conditions is a challenging problem. Most existing time-frequency domain methods are based on the amplitude spectrogram, and these methods often ignore the phase mismatch between noisy sp... see more
Journal: Information

 
Gaosheng Luo, Gang He, Zhe Jiang and Chuankun Luo    
To address the phenomenon of color shift and low contrast in underwater images caused by wavelength- and distance-related attenuation and scattering when light propagates in water, we propose a method based on an attention mechanism and adversarial autoe... see more
Journal: Applied Sciences

 
Albatul Albattah and Murad A. Rassam    
Deep learning (DL) models are frequently employed to extract valuable features from heterogeneous and high-dimensional healthcare data, which are used to keep track of patient well-being via healthcare monitoring systems. Essentially, the training and te... see more
Journal: Applied Sciences

 
Kaziwa Saleh, Sándor Szénási and Zoltán Vámossy    
Although current computer vision systems are closer to human intelligence in comprehending the visible world than they were previously, their performance is hindered when objects are partially occluded. Since we live in a dynamic and complex envi... see more
Journal: Algorithms