
Deep Multi-Modal Generic Representation Auxiliary Learning Networks for End-to-End Radar Emitter Classification

Zhigang Zhu, Zhijian Yi, Shiyao Li and Lin Li

Abstract

Radar data mining is a key module for signal analysis, in which patterns hidden inside signals gradually become available during the learning process; its quality is significant for enhancing the security of radar emitter classification (REC) systems. Although the radio frequency fingerprint (RFF) caused by imperfections in an emitter's hardware is difficult to forge, current REC methods based on deep-learning techniques, e.g., the convolutional neural network (CNN) and long short-term memory (LSTM), struggle to capture stable RFF features. In this paper, an online and non-cooperative multi-modal generic representation auxiliary learning REC model, namely the multi-modal generic representation auxiliary learning network (MGRALN), is put forward. Multi-modal means that multi-domain transformations are unified into a generic representation. This representation is then employed to facilitate mining the implicit information inside the signals and to improve model robustness, which is achieved by using the available generic representation to guide network training and learning. Online means that the learning process of REC is performed only once and that the REC is end-to-end. Non-cooperative denotes that no demodulation techniques are applied before the REC task. Experimental results on measured civil aviation radar data demonstrate that the proposed method achieves superior performance.
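
The auxiliary-learning scheme described in the abstract, where a shared encoder serves both the emitter-classification task and the prediction of a precomputed generic representation, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' actual MGRALN design: the 1-D CNN encoder, the layer sizes, the MSE auxiliary loss, and the loss weighting are all assumptions made for clarity.

```python
# Hypothetical sketch of generic-representation auxiliary learning.
# The architecture and losses below are illustrative assumptions,
# not the MGRALN implementation from the paper.
import torch
import torch.nn as nn

class MGRALNSketch(nn.Module):
    def __init__(self, num_classes: int, repr_dim: int = 128):
        super().__init__()
        # Shared encoder over raw two-channel (e.g., I/Q) signal samples.
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Main head: end-to-end radar emitter classification.
        self.classifier = nn.Linear(64, num_classes)
        # Auxiliary head: predicts the precomputed generic representation
        # that unifies the multi-domain transformations of the signal.
        self.aux_head = nn.Linear(64, repr_dim)

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)  # x: (batch, 2, num_samples)
        return self.classifier(h), self.aux_head(h)

def training_step(model, x, labels, generic_repr, aux_weight: float = 0.5):
    logits, aux_pred = model(x)
    # Classification loss plus an auxiliary regression loss that guides
    # the shared encoder toward the generic multi-modal representation.
    loss = nn.functional.cross_entropy(logits, labels)
    loss = loss + aux_weight * nn.functional.mse_loss(aux_pred, generic_repr)
    return loss
```

In this reading, the generic representation (computed offline from the multi-domain transformations of the signal) acts only as an auxiliary training target, so the network is trained jointly in a single pass and the classifier itself remains end-to-end, consistent with the "online" property claimed in the abstract.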
