ARTICLE

Adversarial Attack Defense Method for a Continuous-Variable Quantum Key Distribution System Based on Kernel Robust Manifold Non-Negative Matrix Factorization

Yuwen Fu, E. Xia, Duan Huang and Yumei Jing

Abstract

Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, the machine learning algorithms used to detect these attacks are themselves vulnerable to adversarial perturbations that can compromise security: subtly perturbing the detection networks used in CVQKD can cause significant misclassifications. To address this issue, we employ an adversarial-sample defense method based on non-negative matrix factorization (NMF) that accounts for the nonlinearity and high dimensionality of CVQKD data. Specifically, we use the Kernel Robust Manifold Non-negative Matrix Factorization (KRMNMF) algorithm to reconstruct input samples, reducing the impact of adversarial perturbations. First, we extract attack features against CVQKD by modeling the adversary, Eve. Then, we design an Artificial Neural Network (ANN) detection model to identify these attacks. Next, we introduce adversarial perturbations into the data generated by Eve. Finally, we apply the KRMNMF decomposition to extract features from the CVQKD data and mitigate the influence of adversarial perturbations through reconstruction. Experimental results demonstrate that KRMNMF can effectively defend against adversarial attacks to a certain extent: its accuracy surpasses the commonly used ComDefend method by 32.2% and the JPEG compression method by 30.8%, and it improves on plain NMF by 20.8% while outperforming other NMF-based algorithms in classification accuracy. Moreover, it can complement other defense strategies, thus enhancing the overall defensive capability of CVQKD systems.
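
The core idea of the defense described above, reconstructing inputs through a learned non-negative factorization so that perturbation components lying off the clean-data manifold are projected out, can be illustrated with plain NMF. The sketch below is a minimal illustration using scikit-learn's NMF as a stand-in for KRMNMF (the paper's algorithm adds kernel, robustness, and manifold-regularization terms and is not a standard library routine); the data shapes, variable names, and the Gaussian noise standing in for an adversarial perturbation are all illustrative assumptions, not the paper's actual CVQKD features or attack model.

```python
# Minimal sketch of NMF-based input reconstruction as an adversarial
# defense. Assumes numpy and scikit-learn; plain NMF stands in for KRMNMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical stand-in for clean CVQKD feature data: samples drawn from
# a low-rank non-negative manifold, X_clean = H_true @ W_true.
W_true = rng.random((8, 64))     # ground-truth basis (8 components)
H_true = rng.random((200, 8))    # per-sample coefficients
X_clean = H_true @ W_true        # 200 clean training samples

# Learn a basis W from clean data so that X ~ H @ W.
nmf = NMF(n_components=8, init="nndsvda", max_iter=1000, random_state=0)
nmf.fit(X_clean)
W = nmf.components_

# A test sample on the clean manifold, plus a small bounded perturbation
# standing in for an adversarial attack (clipped to stay non-negative).
x = rng.random((1, 8)) @ W_true
x_adv = np.clip(x + 0.05 * rng.standard_normal(x.shape), 0.0, None)

# Defense: project the perturbed sample onto the learned basis and
# reconstruct it. Noise components outside the span of the clean basis
# are discarded, so the reconstruction lands closer to the clean sample.
h_adv = nmf.transform(x_adv)
x_rec = h_adv @ W

print("distance to clean sample before defense:", np.linalg.norm(x_adv - x))
print("distance to clean sample after defense: ", np.linalg.norm(x_rec - x))
```

In the paper's pipeline the reconstructed samples would then be passed to the ANN attack detector in place of the raw inputs; the same projection idea underlies the KRMNMF defense, with the kernel and manifold terms intended to handle the nonlinear, high-dimensional structure of CVQKD data that plain NMF misses.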

Similar Articles

Viacheslav Moskalenko, Vyacheslav Kharchenko, Alona Moskalenko and Borys Kuzikov    
Artificial intelligence systems are increasingly being used in industrial applications, security and military contexts, disaster response complexes, policing and justice practices, finance, and healthcare systems. However, disruptions to these systems ca...
Journal: Algorithms

 
Mehdi Sadi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty and Md Tauhidur Rahman    
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead trained deep convolutional neural networks into wrong predictions. Since these universal adversarial perturbations can se...
Journal: Information

 
Lei Chen, Zhihao Wang, Ru Huo and Tao Huang    
As an essential piece of infrastructure supporting cyberspace security technology verification, network weapons and equipment testing, attack-defense confrontation drills, and network risk assessment, Cyber Range is exceptionally vulnerable to distribute...
Journal: Algorithms

 
Weimin Zhao, Sanaa Alwidian and Qusay H. Mahmoud    
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms. Adversarial training is one of the methods used to defend against the thr...
Journal: Algorithms

 
Jiazhu Dai and Siwei Xiong    
Capsule networks are a type of neural network that uses the spatial relationships between features to classify images. By capturing the poses and relative positions of features, this network is better able to recognize affine transformations and surpas...
Journal: Algorithms