Article

Numerical reproducibility for the parallel reduction on multi- and many-core architectures

Journal

PARALLEL COMPUTING
Volume 49, Issue -, Pages 83-97

Publisher

ELSEVIER
DOI: 10.1016/j.parco.2015.09.001

Keywords

Parallel floating-point summation; Reproducibility; Accuracy; Long accumulator; Error-free transformations; Multi- and many-core architectures

Funding

  1. French National Research Agency (ANR) as part of the Investissements d'Avenir program [ANR-11-LABX-0037-01, ANR-11-IDEX-0004-02]
  2. Region Ile-de-France
  3. Project Equip@Meso, funded by ANR as part of the Investissements d'Avenir program [ANR-10-EQPX-29-01]
  4. FastRelax project through the ANR public grant [ANR-14-CE25-0018-01]
  5. Agence Nationale de la Recherche (ANR) [ANR-14-CE25-0018]

Abstract

On modern multi-core, many-core, and heterogeneous architectures, floating-point computations, especially reductions, may become non-deterministic and, therefore, non-reproducible mainly due to the non-associativity of floating-point operations. We introduce an approach to compute the correctly rounded sums of large floating-point vectors accurately and efficiently, achieving deterministic results by construction. Our multi-level algorithm consists of two main stages: first, a filtering stage that relies on fast vectorized floating-point expansion; second, an accumulation stage based on superaccumulators in a high-radix carry-save representation. We present implementations on recent Intel desktop and server processors, Intel Xeon Phi co-processors, and both AMD and NVIDIA GPUs. We show that numerical reproducibility and bit-perfect accuracy can be achieved at no additional cost for large sums that have dynamic ranges of up to 90 orders of magnitude by leveraging arithmetic units that are left underused by standard reduction algorithms. (C) 2015 Elsevier B.V. All rights reserved.
