28 Articles

Online
Sharoug Alzaidy and Hamad Binsalleeh    
Deep learning has been used extensively in the field of behavioral detection; for example, deep learning models have been used to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs, ...
Journal: Applied Sciences    Format: Electronic
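
A minimal sketch of how such a crafted input is built, in the style of the fast gradient sign method (FGSM): a toy logistic-regression "detector" in NumPy stands in for a deep model, and the weights, features, and epsilon below are all invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "detector": logistic regression over a feature vector.
# w, b, and x are hypothetical stand-ins for a trained deep model.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # learned weights
b = 0.1                   # learned bias
x = rng.normal(size=16)   # features of a malicious sample

# Gradient of the "malware" score with respect to the input.
score = sigmoid(w @ x + b)
grad_x = w * score * (1 - score)

# Crafted input: step against the gradient under an L-infinity
# budget epsilon, pushing the score toward "benign".
epsilon = 0.25
x_adv = x - epsilon * np.sign(grad_x)

print(f"original score:    {score:.3f}")
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")
```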

Online
Zhe Yang, Yi Huang, Yaqin Chen, Xiaoting Wu, Junlan Feng and Chao Deng    
Controllable Text Generation (CTG) aims to modify the output of a Language Model (LM) to meet specific constraints. For example, in a customer service conversation, responses from the agent should ideally be soothing and address the user's dissatisfaction ...
Journal: Applied Sciences    Format: Electronic
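
One common way to impose such constraints is decode-time logit manipulation. The sketch below masks banned tokens and boosts preferred ones over a toy vocabulary; the vocabulary, logits, and word lists are invented, and real CTG systems are considerably more involved.

```python
import numpy as np

# Toy vocabulary and unnormalized LM logits for the next token.
vocab = ["sorry", "angry", "refund", "no", "help"]
logits = np.array([1.2, 2.5, 0.8, 1.9, 1.0])

banned = {"angry"}              # hard constraint: never emit these
preferred = {"sorry", "help"}   # soft constraint: soothing tone
boost = 1.5

# Hard constraint: -inf logit gives the token zero probability mass.
# Soft constraint: add a bonus to preferred tokens before softmax.
for i, tok in enumerate(vocab):
    if tok in banned:
        logits[i] = -np.inf
    elif tok in preferred:
        logits[i] += boost

shifted = logits - np.max(logits[np.isfinite(logits)])
probs = np.exp(shifted)          # exp(-inf) is exactly 0.0
probs /= probs.sum()
print(dict(zip(vocab, np.round(probs, 3))))
```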

Online
Yuting Guan, Junjiang He, Tao Li, Hui Zhao and Baoqiang Ma    
SQL injection is a highly detrimental web attack technique that can result in significant data leakage and compromise system integrity. To counteract the harm caused by such attacks, researchers have devoted much attention to the examination of SQL injection ...
Journal: Future Internet    Format: Electronic
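
For context, the classic vulnerability and its standard mitigation (parameterized queries) can be shown with Python's built-in sqlite3 module; the table, login logic, and payload below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_name = "' OR '1'='1"   # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query.
q = "SELECT * FROM users WHERE name = '" + attacker_name + "'"
print("concatenated:  ", conn.execute(q).fetchall())   # returns every row

# SAFE: a parameterized query treats the payload as plain data.
q = "SELECT * FROM users WHERE name = ?"
print("parameterized: ", conn.execute(q, (attacker_name,)).fetchall())  # []
```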

Online
Mingyong Yin, Yixiao Xu, Teng Hu and Xiaolei Liu    
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems. Video adversarial attacks add subtle noise to the original example, resulting ...
Journal: Applied Sciences    Format: Electronic
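
A minimal sketch of what "subtle noise" means for a clip: an L-infinity-bounded perturbation applied to a video tensor, here restricted to a few frames. The clip shape, epsilon, and stand-in gradient are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical clip: 16 frames of 32x32 RGB, values in [0, 1].
clip = rng.random((16, 32, 32, 3)).astype(np.float32)

epsilon = 4 / 255            # imperceptibility budget (L-infinity)
target_frames = [3, 9, 14]   # perturb only a few frames

# Stand-in for a signed loss gradient from a video classifier.
grad = rng.normal(size=clip.shape).astype(np.float32)

adv = clip.copy()
for t in target_frames:
    adv[t] = clip[t] + epsilon * np.sign(grad[t])
adv = np.clip(adv, 0.0, 1.0)   # stay in the valid pixel range

print("max per-pixel change:", np.abs(adv - clip).max())  # <= epsilon
```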

Online
Sapdo Utomo, Adarsh Rouniyar, Hsiu-Chun Hsu and Pao-Ann Hsiung    
Smart city applications that request sensitive user information necessitate a comprehensive data privacy solution. Federated learning (FL), also known as privacy by design, is a new paradigm in machine learning (ML). However, FL models are susceptible to ...
Journal: Future Internet    Format: Electronic
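
Federated averaging (FedAvg), the baseline aggregation rule in FL, combines client models without moving raw data to the server. A sketch, with hypothetical client parameters and dataset sizes:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)
    coeffs = np.array(client_sizes, dtype=float) / total
    return np.tensordot(coeffs, stacked, axes=1)

# Three hypothetical clients, each holding a local parameter vector.
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]   # local training examples per client

global_model = fedavg(clients, sizes)
print(global_model)   # [1.25 1.25]
```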

Online
Songshen Han, Kaiyong Xu, Songhui Guo, Miao Yu and Bo Yang    
Automatic Speech Recognition (ASR) provides a new way of human-computer interaction. However, it is vulnerable to adversarial examples, which are obtained by deliberately adding perturbations to the original audio. Thorough studies on the universal features ...
Journal: Applied Sciences    Format: Electronic
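
A minimal sketch of such an audio perturbation, together with the signal-to-noise ratio commonly used to gauge imperceptibility; the waveform and noise scale are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
sr = 16000                                   # sample rate (Hz)
t = np.arange(sr) / sr
audio = 0.5 * np.sin(2 * np.pi * 440 * t)    # 1 s synthetic tone

# Hypothetical adversarial perturbation, scaled to stay quiet.
delta = 0.002 * rng.standard_normal(audio.shape)
adv = np.clip(audio + delta, -1.0, 1.0)

# SNR in dB: how loud the signal is relative to the perturbation.
snr_db = 10 * np.log10(np.sum(audio**2) / np.sum((adv - audio)**2))
print(f"SNR: {snr_db:.1f} dB")   # higher = less perceptible
```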

Online
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li    
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) for applications such as surveillance and access control, adversarial examples may pose subtle threats to these systems. In this paper, ...
Journal: Algorithms    Format: Electronic

Online
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He    
Deep models are widely used and have been shown to carry hidden security risks. An adversarial attack can bypass traditional means of defense: by modifying the input data, an attack on the deep model is realized while remaining imperceptible ...
Journal: Information    Format: Electronic

Online
Dmitry Namiot, Eugene Ilyushin     pp. 101–118
This article, written for the Robust Machine Learning Curriculum, discusses so-called generative models in machine learning. Generative models learn the distribution of data from some sample data set and can then generate (create) new data instances. ...
Journal: International Journal of Open Information Technologies    Format: Electronic
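
The two-step loop the abstract describes (learn a distribution, then sample from it) in its smallest possible form, using a Gaussian fit to synthetic data in place of a deep generative model:

```python
import numpy as np

rng = np.random.default_rng(3)
# "Training set": samples from an unknown distribution.
data = rng.normal(loc=5.0, scale=2.0, size=1000)

# Learn the distribution: here, just estimate Gaussian parameters.
mu, sigma = data.mean(), data.std()

# Generate: draw brand-new instances from the learned distribution.
new_samples = rng.normal(loc=mu, scale=sigma, size=5)
print(f"learned mu={mu:.2f}, sigma={sigma:.2f}")
print("generated:", np.round(new_samples, 2))
```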

Online
Vasily Kostyumov     pp. 11–20
Deep learning has received a lot of attention from the scientific community in recent years due to excellent results in various task areas, including computer vision. For example, in the problem of image classification, some authors have even announced that ...
Journal: International Journal of Open Information Technologies    Format: Electronic

Page 1 of 2