82 Articles

 
Online
Woonghee Lee and Younghoon Kim
This study introduces a deep-learning-based framework for detecting adversarial attacks in CT image segmentation within medical imaging. The proposed methodology includes analyzing features from various layers, particularly focusing on the first layer, a…
Journal: Applied Sciences    Format: Electronic

 
Online
Meng Bi, Xianyun Yu, Zhida Jin and Jian Xu
In this paper, we propose an Iterative Greedy-Universal Adversarial Perturbations (IGUAP) approach based on an iterative greedy algorithm to create universal adversarial perturbations for acoustic prints. A thorough, objective account of the IG-UAP metho…
Journal: Applied Sciences    Format: Electronic

 
Online
Sharoug Alzaidy and Hamad Binsalleeh
Deep learning has been used extensively in the field of behavioral detection; for example, deep-learning models have been applied to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs,…
Journal: Applied Sciences    Format: Electronic

 
Online
Woonghee Lee, Mingeon Ju, Yura Sim, Young Kul Jung, Tae Hyung Kim and Younghoon Kim
Deep-learning-based segmentation models have made a profound impact on medical procedures, with U-Net-based computed tomography (CT) segmentation models exhibiting remarkable performance. Yet, even with these advances, these models are found to be vulner…
Journal: Applied Sciences    Format: Electronic

 
Online
William Villegas-Ch, Angel Jaramillo-Alcázar and Sergio Luján-Mora
This study evaluated the generation of adversarial examples and the resulting robustness of an image-classification model. The attacks were performed using the Fast Gradient Sign method, the Projected Gradient Descent method, and the Carlini and Wagner…
Journal: Big Data and Cognitive Computing    Format: Electronic
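The first of the attacks this entry names, the Fast Gradient Sign Method, has a compact closed form: perturb each input coordinate by a fixed step eps in the sign of the input gradient of the loss. A minimal illustrative sketch, assuming a toy two-weight logistic model rather than the paper's image classifier:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, y, w, eps):
    """Fast Gradient Sign Method for a binary logistic model.

    Model:  p(y=+1 | x) = sigmoid(w . x), labels y in {-1, +1}.
    Loss:   L = -log sigmoid(y * (w . x))
    Grad:   dL/dx = -y * sigmoid(-y * (w . x)) * w
    The attack moves each coordinate eps in the gradient's sign.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coef = -y * sigmoid(-margin)          # scalar factor of dL/dx
    grad = [coef * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# A correctly classified point is pushed toward the decision boundary:
w = [2.0, -1.0]
x = [1.0, 0.5]                            # w . x = 1.5, predicted +1
x_adv = fgsm_attack(x, +1, w, eps=0.3)    # -> [0.7, 0.8], w . x_adv = 0.6
```

The margin drops from 1.5 to 0.6 after a single step; PGD, the second attack named, iterates this same step with projection back into the eps-ball.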

 
Online
Yuting Guan, Junjiang He, Tao Li, Hui Zhao and Baoqiang Ma
SQL injection is a highly detrimental web attack technique that can result in significant data leakage and compromise system integrity. To counteract the harm caused by such attacks, researchers have devoted much attention to the examination of SQL injec…
Journal: Future Internet    Format: Electronic
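For readers unfamiliar with the attack class this detection work targets, a minimal sqlite3 sketch (with a hypothetical `users` table, not taken from the paper) contrasts an injectable string-concatenated query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def find_user_unsafe(name):
    # String concatenation lets crafted input rewrite the query itself.
    query = "SELECT id FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # A parameterized query keeps the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
find_user_unsafe(payload)   # returns every row: the WHERE clause was rewritten
find_user_safe(payload)     # returns [] - no user has that literal name
```

Detection models such as the one this entry describes learn to flag inputs like `payload` before they reach the query layer.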

 
Online
James Msughter Adeke, Guangjie Liu, Junjie Zhao, Nannan Wu and Hafsat Muhammad Bashir
Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which adversaries subtly modify inputs to produce a desired output. Adversarial training i…
Journal: Future Internet    Format: Electronic
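Adversarial training, the defense this entry builds on, folds attack generation into the training loop: each step first crafts an adversarial example under the current weights, then descends the loss on that perturbed example. A minimal sketch, assuming a toy logistic model with an FGSM inner attack rather than the network-traffic models the paper studies:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def input_grad(x, y, w):
    # dL/dx for L = -log sigmoid(y * (w . x)), labels y in {-1, +1}
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coef = -y * sigmoid(-margin)
    return [coef * wi for wi in w]

def fgsm(x, y, w, eps):
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, input_grad(x, y, w))]

def adv_train_step(w, x, y, eps=0.1, lr=0.5):
    """One adversarial-training step: craft an FGSM example under the
    current weights, then take a gradient step on the perturbed example."""
    x_adv = fgsm(x, y, w, eps)
    margin = y * sum(wi * xi for wi, xi in zip(w, x_adv))
    coef = -y * sigmoid(-margin)          # dL/dw = coef * x_adv
    return [wi - lr * coef * xi for wi, xi in zip(w, x_adv)]

# A few steps on one example drive its margin positive even under attack.
w = [0.0, 0.0]
for _ in range(20):
    w = adv_train_step(w, [1.0, 0.5], +1)
```

Because the model is fit to worst-case perturbed inputs rather than clean ones, the learned boundary keeps a margin of at least eps around the training points.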

 
Online
Sapdo Utomo, Adarsh Rouniyar, Hsiu-Chun Hsu and Pao-Ann Hsiung
Smart-city applications that request sensitive user information necessitate a comprehensive data-privacy solution. Federated learning (FL), also known as privacy by design, is a new paradigm in machine learning (ML). However, FL models are susceptible to…
Journal: Future Internet    Format: Electronic

 
Online
Mingyong Yin, Yixiao Xu, Teng Hu and Xiaolei Liu
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video-classification systems. Video adversarial attacks add subtle noise to the original example, resulti…
Journal: Applied Sciences    Format: Electronic

 
Online
Yuwen Fu, E. Xia, Duan Huang and Yumei Jing
Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, the use of machine-learning algorithms to detect these attacks has uncovered a vulner…
Journal: Applied Sciences    Format: Electronic

Page 1 of 5