Article

Learning to Decode Protograph LDPC Codes

Journal

IEEE Journal on Selected Areas in Communications
Volume 39, Issue 7, Pages 1983-1999

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JSAC.2021.3078488

Keywords

Decoding; Iterative decoding; Training; Complexity theory; Convergence; 5G mobile communication; Research and development; Protograph LDPC codes; 5G; neural min-sum decoder; parameter-sharing; iteration-by-iteration training

Funding

  1. National Natural Science Foundation of China [92067202, 62001049, 62071058, 61971062]
  2. National Key Research and Development Program of China [2018YFE0205501, 2018YFB1800800]
  3. China Post-Doctoral Science Foundation [2019M660032]
  4. Qualcomm Inc.
  5. U.S. National Science Foundation [CCF-1908308]
  6. Key Area Research and Development Program of Guangdong Province [2018B030338001]
  7. Shenzhen Outstanding Talents Training Fund
  8. Guangdong Research Project [2017ZT07X152]


The recent development of deep learning methods provides a new approach to optimizing the belief propagation (BP) decoding of linear codes. However, existing works are limited in that the scale of their neural networks grows rapidly with the code length, so they can only support short to moderate code lengths. From a practical point of view, we propose a high-performance neural min-sum (MS) decoding method that makes full use of the lifting structure of protograph low-density parity-check (LDPC) codes. By this means, the size of the parameter array in each layer of the neural decoder equals only the number of edge-types, for arbitrary code lengths. In particular, for protograph LDPC codes, the proposed neural MS decoder is constructed such that identical parameters are shared by the bundle of edges derived from the same edge-type. To reduce complexity and overcome the vanishing-gradient problem in training the proposed neural MS decoder, an iteration-by-iteration (i.e., layer-by-layer in neural-network terms) greedy training method is proposed. With this, the neural MS decoder converges faster during optimization, which aligns with the early-termination mechanism widely used in practice. To further enhance the generalization ability of the proposed neural MS decoder, a code-length/rate-compatible training method is proposed, which randomly selects samples from a set of codes lifted from the same base code. As a theoretical performance evaluation tool, a trajectory-based extrinsic information transfer (T-EXIT) chart is developed for various decoders. Both T-EXIT and simulation results show that the optimized MS decoding provides faster convergence and up to 1 dB of gain over plain MS decoding and its variants, with only slightly increased complexity. It can even outperform the sum-product algorithm for some short codes.
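The parameter-sharing idea in the abstract can be illustrated with a minimal sketch of a neural min-sum check-node update: every lifted edge derived from the same base-matrix edge (edge-type) reuses one learnable multiplier. The function name, argument layout, and the per-type `alpha` array below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def check_node_update(v2c, edge_type, alpha):
    """Sketch of a parameter-shared neural min-sum check-node update.

    v2c       -- incoming variable-to-check messages on one check node's edges
    edge_type -- edge-type index of each edge (one type per base-matrix edge)
    alpha     -- learnable multiplier per edge-type, shared by all lifted
                 edges of that type (illustrative; trained offline)
    """
    v2c = np.asarray(v2c, dtype=float)
    signs = np.sign(v2c)
    mags = np.abs(v2c)
    c2v = np.empty_like(v2c)
    for i in range(len(v2c)):
        # Extrinsic rule: exclude the edge's own incoming message.
        sign_prod = np.prod(np.delete(signs, i))
        min_mag = np.delete(mags, i).min()
        # Plain MS would output sign_prod * min_mag; the neural MS
        # variant scales it by the multiplier of this edge's type.
        c2v[i] = alpha[edge_type[i]] * sign_prod * min_mag
    return c2v

# Example: three edges, two edge-types, shared multipliers [0.8, 0.9].
print(check_node_update([2.0, -1.0, 3.0], [0, 1, 0], [0.8, 0.9]))
```

Because `alpha` is indexed by edge-type rather than by individual edge, the parameter count per decoding iteration stays fixed at the number of base-matrix edges no matter how large the lifting factor (and hence the code length) becomes.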

