A Low-Complexity Neural Normalized Min-Sum LDPC Decoding Algorithm Using Tensor-Train Decomposition

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Compared with traditional low-density parity-check (LDPC) decoding algorithms, current model-driven deep learning (DL)-based LDPC decoding algorithms suffer from high computational complexity. Building on the Neural Normalized Min-Sum (NNMS) algorithm, we propose a low-complexity model-driven DL-based LDPC decoding algorithm that uses Tensor-Train (TT) decomposition and a syndrome loss function, called the TT-NNMS+ algorithm. Our experiments show that the proposed TT-NNMS+ algorithm is more competitive than the NNMS algorithm in terms of bit error rate (BER) performance, memory requirements and computational complexity.
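The core compression idea in the abstract is to store the decoder's large set of learnable weights in Tensor-Train format rather than as a full tensor. The following is a minimal sketch (not the authors' code) of how a weight tensor can be represented by TT cores and reconstructed by contraction; the tensor shape, TT ranks, and random values are illustrative assumptions only.

```python
import numpy as np

def tt_to_full(cores):
    """Contract a list of TT cores G_k of shape (r_{k-1}, n_k, r_k)
    into the full tensor of shape (n_1, ..., n_d)."""
    full = cores[0]                      # shape (1, n_1, r_1)
    for core in cores[1:]:
        # merge the trailing TT rank of `full` with the leading rank of `core`
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))    # drop the boundary ranks r_0 = r_d = 1

# Toy example: a weight tensor of shape 8 x 8 x 8 (512 entries) stored with
# TT ranks (1, 2, 2, 1): only 1*8*2 + 2*8*2 + 2*8*1 = 64 parameters.
dims, ranks = [8, 8, 8], [1, 2, 2, 1]
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(len(dims))]

W = tt_to_full(cores)
print(W.shape)                                    # (8, 8, 8)
print(sum(c.size for c in cores), "vs", W.size)   # 64 vs 512
```

In a TT-compressed decoder of this kind, the per-edge, per-iteration normalization weights of the NNMS update would be read out of such cores instead of a dense table, which is where the memory and complexity savings claimed in the abstract come from.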

Original language: English
Pages (from-to): 2914-2918
Number of pages: 5
Journal: IEEE Communications Letters
Volume: 26
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2022

Keywords

  • Model-driven LDPC decoding
  • neural normalized min-sum
  • syndrome loss
  • tensor-train decomposition

