TY - GEN
T1 - A Low Complexity Model-Driven Deep Learning LDPC Decoding Algorithm
AU - Wu, Qingle
AU - Tang, Su Kit
AU - Liang, Yuanhui
AU - Lam, Chan Tong
AU - Ma, Yan
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/4/23
Y1 - 2021/4/23
N2 - A novel model-driven Neural Offset Min-Sum (NOMS) Belief Propagation (BP) decoding algorithm is proposed and applied to LDPC decoding. First, NOMS replaces the multiplication operations of Neural Normalized Min-Sum (NNMS) with addition operations to reduce computational complexity, while simultaneously achieving better Bit Error Rate (BER) performance under the same conditions. Second, considering that NOMS still contains many multiplication operations, we propose a novel Shared Offset Min-Sum (SNOMS) that reduces the number of weights in the network by sharing parameters. Finally, codebook-based quantization is used to further reduce memory consumption. Simulation results show that the proposed method achieves better BER performance, with decoding accuracy 0.65 dB higher than that of NNMS after 5 iterations. In addition, the SNOMS decoding method achieves decoding performance almost identical to that of NOMS while requiring less computation. The proposed codebook quantization method reduces the memory requirement significantly with only a slight performance loss.
AB - A novel model-driven Neural Offset Min-Sum (NOMS) Belief Propagation (BP) decoding algorithm is proposed and applied to LDPC decoding. First, NOMS replaces the multiplication operations of Neural Normalized Min-Sum (NNMS) with addition operations to reduce computational complexity, while simultaneously achieving better Bit Error Rate (BER) performance under the same conditions. Second, considering that NOMS still contains many multiplication operations, we propose a novel Shared Offset Min-Sum (SNOMS) that reduces the number of weights in the network by sharing parameters. Finally, codebook-based quantization is used to further reduce memory consumption. Simulation results show that the proposed method achieves better BER performance, with decoding accuracy 0.65 dB higher than that of NNMS after 5 iterations. In addition, the SNOMS decoding method achieves decoding performance almost identical to that of NOMS while requiring less computation. The proposed codebook quantization method reduces the memory requirement significantly with only a slight performance loss.
KW - Deep learning
KW - belief propagation
KW - model-driven neural network
KW - offset min-sum decoding
UR - http://www.scopus.com/inward/record.url?scp=85113333488&partnerID=8YFLogxK
U2 - 10.1109/ICCCS52626.2021.9449266
DO - 10.1109/ICCCS52626.2021.9449266
M3 - Conference contribution
AN - SCOPUS:85113333488
T3 - 2021 IEEE 6th International Conference on Computer and Communication Systems, ICCCS 2021
SP - 558
EP - 563
BT - 2021 IEEE 6th International Conference on Computer and Communication Systems, ICCCS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th IEEE International Conference on Computer and Communication Systems, ICCCS 2021
Y2 - 23 April 2021 through 26 April 2021
ER -