TY - JOUR
T1 - Automatic Recognition of Dual-Component Radar Signals Based on Deep Learning
AU - Tang, Zeyu
AU - Shen, Hong
AU - Lam, Chan Tong
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/3
Y1 - 2025/3
N2 - The increasing density and complexity of electromagnetic signals have brought new challenges to multi-component radar signal recognition. To address the low recognition accuracy at low signal-to-noise ratios (SNRs) of the common recognition framework that combines time–frequency transformations (TFTs) with convolutional neural networks (CNNs), this paper proposes a new dual-component radar signal recognition framework (TFGM-RMNet) that combines a deep time–frequency generation module (TFGM) with a Transformer-based residual network. First, the received noisy signal is preprocessed. Then, the TFGM learns a complete set of basis functions to extract time–frequency (TF) features from the time-domain signal and outputs the corresponding time–frequency representation (TFR) under the supervision of high-quality images. Next, a ResNet combined with cascaded multi-head self-attention (MHSA) is applied to extract local and global features from the TFR. Finally, modulation format prediction is achieved through multi-label classification. The proposed framework requires no explicit TFT at test time; the TFT process is built into the TFGM, replacing the traditional transform. Both the classification results and an ideal TFR are obtained at test time, realizing an end-to-end deep learning (DL) framework. Simulation results show that when SNR > −8 dB the method achieves an average recognition accuracy close to 100%, and it retains 97% accuracy even at an SNR of −10 dB. Under low SNR, its recognition performance exceeds that of existing algorithms, including DCNN-RAMIML, DCNN-MLL, and DCNN-MIML.
KW - convolutional neural networks
KW - dual-component pulse-internal modulation
KW - multi-head self-attention
KW - multi-label learning
KW - pulse-internal modulation classification
UR - https://www.scopus.com/pages/publications/105000989486
U2 - 10.3390/s25061809
DO - 10.3390/s25061809
M3 - Article
AN - SCOPUS:105000989486
SN - 1424-3210
VL - 25
JO - Sensors
JF - Sensors
IS - 6
M1 - 1809
ER -