ARTICLE
Applied Sciences, Vol. 14, No. 6 (2024)

Adversarial Attacks on Medical Segmentation Model via Transformation of Feature Statistics

Woonghee Lee, Mingeon Ju, Yura Sim, Young Kul Jung, Tae Hyung Kim and Younghoon Kim

Abstract

Deep learning-based segmentation models have made a profound impact on medical procedures, with U-Net-based computed tomography (CT) segmentation models exhibiting remarkable performance. Yet, even with these advances, such models remain vulnerable to adversarial attacks, a problem that equally affects automatic CT segmentation models. Conventional adversarial attacks typically rely on adding noise or perturbations, leading to a trade-off between the attack's success rate and its perceptibility. In this study, we challenge this paradigm and introduce a new class of adversarial attacks designed to deceive both the target segmentation model and medical practitioners. Our approach deceives a target model by altering the texture statistics of an organ while retaining its shape. We employ a real-time style transfer method, known as the texture reformer, which uses adaptive instance normalization (AdaIN) to change the statistics of an image's features. To induce the transformation, we modify AdaIN, which typically aligns the statistics of the source and target images. Through rigorous experiments, we demonstrate the effectiveness of our approach: our adversarial samples pass as realistic in blind tests conducted with physicians, surpassing the effectiveness of contemporary techniques. This methodology not only offers a robust tool for benchmarking and validating automated CT segmentation systems but also serves as a potent mechanism for data augmentation, thereby enhancing model generalization. This dual capability significantly bolsters advances in deep learning-based medical and healthcare segmentation models.
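
For context, the attack described in the abstract builds on adaptive instance normalization. The sketch below shows the standard AdaIN operation that aligns source (content) feature statistics with target (style) feature statistics; it is a minimal PyTorch illustration of the well-known formulation, not the authors' modified variant, and the function name, tensor shapes, and epsilon are illustrative assumptions.

import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    # Standard AdaIN (Huang & Belongie, 2017): normalize the content
    # features channel-wise, then impose the style features' statistics.
    # Both tensors are assumed to have shape (N, C, H, W).
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps  # eps avoids division by zero
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean

Per the abstract, the paper's attack modifies this alignment step so that the feature statistics are transformed rather than matched, shifting an organ's texture statistics while preserving its shape.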

Similar articles

Valeria Mercuri, Martina Saletta and Claudio Ferretti
As the prevalence and sophistication of cyber threats continue to increase, the development of robust vulnerability detection techniques becomes paramount in ensuring the security of computer systems. Neural models have demonstrated significant potential...
Journal: Algorithms

Suliman A. Alsuhibany
The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) technique has been a topic of interest for several years. The ability of computers to recognize CAPTCHA has significantly increased due to the development of deep le...
Journal: Applied Sciences

Saqib Ali, Sana Ashraf, Muhammad Sohaib Yousaf, Shazia Riaz and Guojun Wang
The successful outcomes of deep learning (DL) algorithms in diverse fields have prompted researchers to consider backdoor attacks on DL models to defend them in practical applications. Adversarial examples could deceive a safety-critical system, which co...
Journal: Applied Sciences

Raluca Chitic, Ali Osman Topal and Franck Leprévost
Recently, convolutional neural networks (CNNs) have become the main drivers in many image recognition applications. However, they are vulnerable to adversarial attacks, which can lead to disastrous consequences. This paper introduces ShuffleDetect as a n...
Journal: Applied Sciences

Minxiao Wang, Ning Yang, Dulaj H. Gunasinghe and Ning Weng
Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats associate...
Journal: Computers