4.7 Article

Pruning and Quantizing Neural Belief Propagation Decoders

Journal

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
Volume 39, Issue 7, Pages 1957-1966

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JSAC.2020.3041392

Keywords

Iterative decoding; Maximum likelihood decoding; Belief propagation; Quantization (signal); Task analysis; Optimization; Neural networks; deep learning; min-sum decoding; neural decoders; pruning; quantization

Funding

  1. European Union [676448, 749798]
  2. Swedish Research Council [2016-04253]
  3. Marie Curie Actions (MSCA) [676448, 749798]


This work studies near maximum-likelihood (ML) decoding of short linear block codes, proposing a decoding approach based on neural belief propagation (NBP) with check-node pruning that achieves significant performance improvements at reduced complexity. Experimental results show that, at a given complexity level, the pruning method yields noticeable performance gains over conventional decoders.
We consider near maximum-likelihood (ML) decoding of short linear block codes. In particular, we propose a novel decoding approach based on neural belief propagation (NBP) decoding recently introduced by Nachmani et al. in which we allow a different parity-check matrix in each iteration of the algorithm. The key idea is to consider NBP decoding over an overcomplete parity-check matrix and use the weights of NBP as a measure of the importance of the check nodes (CNs) to decoding. The unimportant CNs are then pruned. In contrast to NBP, which performs decoding on a given fixed parity-check matrix, the proposed pruning-based neural belief propagation (PB-NBP) typically results in a different parity-check matrix in each iteration. For a given complexity in terms of CN evaluations, we show that PB-NBP yields significant performance improvements with respect to NBP. We apply the proposed decoder to the decoding of a Reed-Muller code, a short low-density parity-check (LDPC) code, and a polar code. PB-NBP outperforms NBP decoding over an overcomplete parity-check matrix by 0.27-0.31 dB while reducing the number of required CN evaluations by up to 97%. For the LDPC code, PB-NBP outperforms conventional belief propagation with the same number of CN evaluations by 0.52 dB. We further extend the pruning concept to offset min-sum decoding and introduce a pruning-based neural offset min-sum (PB-NOMS) decoder, for which we jointly optimize the offsets and the quantization of the messages and offsets. We demonstrate performance within 0.5 dB of ML decoding with 5-bit quantization for the Reed-Muller code.
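The offset min-sum rule that the PB-NOMS decoder builds on can be illustrated with a minimal sketch. The function name, the scalar (non-learned) offset, and the toy LLR values below are illustrative assumptions, not the paper's implementation; in the neural decoder the offsets are learned and quantized jointly with the messages.

```python
import numpy as np

def offset_min_sum_cn_update(incoming_llrs, offset):
    """Offset min-sum check-node (CN) update (illustrative sketch).

    For each edge, the outgoing message magnitude is the minimum |LLR|
    over all *other* incoming edges, reduced by an offset and clipped
    at zero; the sign is the product of the other edges' signs.
    """
    llrs = np.asarray(incoming_llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)  # extrinsic: exclude edge i itself
        magnitude = max(np.min(np.abs(others)) - offset, 0.0)
        out[i] = np.prod(np.sign(others)) * magnitude
    return out

# Toy example: a degree-3 CN with incoming LLRs and offset 0.5.
msgs = offset_min_sum_cn_update([2.0, -3.0, 1.5], 0.5)  # -> [-1.0, 1.0, -1.5]
```

The offset counteracts the magnitude overestimation of plain min-sum relative to the exact sum-product CN update, which is why learning it per CN (as in PB-NOMS) can recover most of the gap to belief propagation.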

