Article

Pruning and Quantizing Neural Belief Propagation Decoders

Journal

IEEE Journal on Selected Areas in Communications

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JSAC.2020.3041392

Keywords

Iterative decoding; Maximum likelihood decoding; Belief propagation; Quantization (signal); Task analysis; Optimization; Neural networks; deep learning; min-sum decoding; neural decoders; pruning; quantization

Funding

  1. European Union [676448, 749798]
  2. Swedish Research Council [2016-04253]
  3. Marie Curie Actions (MSCA) [676448, 749798]

Abstract

This research addresses near maximum-likelihood (ML) decoding of short linear block codes, proposing a novel decoding approach based on neural belief propagation (NBP) with pruning that yields significant performance improvements at reduced complexity. For a given complexity budget, the pruning method delivers noticeable performance gains over conventional decoding methods.
We consider near maximum-likelihood (ML) decoding of short linear block codes. In particular, we propose a novel decoding approach based on neural belief propagation (NBP) decoding, recently introduced by Nachmani et al., in which we allow a different parity-check matrix in each iteration of the algorithm. The key idea is to consider NBP decoding over an overcomplete parity-check matrix and use the weights of NBP as a measure of the importance of the check nodes (CNs) to decoding. The unimportant CNs are then pruned. In contrast to NBP, which performs decoding on a given fixed parity-check matrix, the proposed pruning-based neural belief propagation (PB-NBP) typically results in a different parity-check matrix in each iteration. For a given complexity in terms of CN evaluations, we show that PB-NBP yields significant performance improvements with respect to NBP. We apply the proposed decoder to the decoding of a Reed-Muller code, a short low-density parity-check (LDPC) code, and a polar code. PB-NBP outperforms NBP decoding over an overcomplete parity-check matrix by 0.27-0.31 dB while reducing the number of required CN evaluations by up to 97%. For the LDPC code, PB-NBP outperforms conventional belief propagation with the same number of CN evaluations by 0.52 dB. We further extend the pruning concept to offset min-sum decoding and introduce a pruning-based neural offset min-sum (PB-NOMS) decoder, for which we jointly optimize the offsets and the quantization of the messages and offsets. We demonstrate performance within 0.5 dB of ML decoding with 5-bit quantization for the Reed-Muller code.
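The abstract's two main ingredients can be illustrated with a small sketch: ranking the check nodes of an overcomplete parity-check matrix by the magnitude of their learned NBP weights and keeping only a budgeted subset, and the offset min-sum check-node update whose offsets and message quantization PB-NOMS optimizes jointly. All function names and the uniform quantizer below are hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

def prune_check_nodes(weights, budget):
    """Keep the `budget` check nodes whose learned weights have the
    largest magnitude (a stand-in for the NBP importance measure);
    returns the indices of the surviving parity-check rows."""
    order = np.argsort(-np.abs(np.asarray(weights, dtype=float)))
    return np.sort(order[:budget])

def quantize(x, bits=5, step=0.5):
    """Illustrative uniform quantizer (not the paper's learned one):
    round to multiples of `step` and clip to the `bits`-bit range."""
    lim = step * (2 ** (bits - 1) - 1)
    return np.clip(np.round(np.asarray(x, dtype=float) / step) * step, -lim, lim)

def cn_update_offset_min_sum(incoming, offset):
    """Offset min-sum check-node update: each outgoing message takes
    the product of the other incoming signs and the minimum of the
    other incoming magnitudes, reduced by `offset` and floored at 0."""
    incoming = np.asarray(incoming, dtype=float)
    out = np.empty_like(incoming)
    for i in range(len(incoming)):
        others = np.delete(incoming, i)
        mag = max(np.min(np.abs(others)) - offset, 0.0)
        out[i] = np.prod(np.sign(others)) * mag
    return out
```

For example, `prune_check_nodes([0.1, 0.9, 0.5, 0.05], 2)` keeps rows 1 and 2; repeating such a selection with per-iteration weights gives a different parity-check matrix in each iteration, as in PB-NBP.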

