Article

Temporal Fusion Transformers for interpretable multi-horizon time series forecasting

Journal

INTERNATIONAL JOURNAL OF FORECASTING
Volume 37, Issue 4, Pages 1748-1764

Publisher

ELSEVIER
DOI: 10.1016/j.ijforecast.2021.03.012

Keywords

Deep learning; Interpretability; Time series; Multi-horizon forecasting; Attention mechanisms; Explainable AI

Abstract

This paper introduces the Temporal Fusion Transformer (TFT), a novel attention-based architecture that combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. TFT utilizes recurrent layers for local processing and interpretable self-attention layers for long-term dependencies, achieving high performance in a wide range of scenarios. By selecting relevant features and suppressing unnecessary components, TFT demonstrates significant performance improvements over existing benchmarks on various real-world datasets.
Multi-horizon forecasting often contains a complex mix of inputs - including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed in the past - without any prior information on how they interact with the target. Several deep learning methods have been proposed, but they are typically "black-box" models that do not shed light on how they use the full range of inputs present in practical scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) - a novel attention-based architecture that combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, TFT uses recurrent layers for local processing and interpretable self-attention layers for long-term dependencies. TFT utilizes specialized components to select relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of scenarios. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and highlight three practical interpretability use cases of TFT. © 2021 The Authors. Published by Elsevier B.V. on behalf of International Institute of Forecasters.
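
The abstract refers to a series of gating layers that suppress unnecessary components. As a rough illustration of that idea only, the sketch below implements a gated linear unit and a gated residual block in PyTorch; the class names, layer widths, and single-layer structure are illustrative assumptions, not the authors' reference implementation (the paper's gated residual network additionally handles optional context vectors and differing input/output dimensions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedLinearUnit(nn.Module):
    """Gated linear unit: a sigmoid gate that can scale a component's
    contribution toward zero, the mechanism TFT-style models use to
    suppress processing blocks that are not needed for a given dataset."""

    def __init__(self, d_model: int):
        super().__init__()
        self.value = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.gate(x)) * self.value(x)


class GatedResidualBlock(nn.Module):
    """Simplified gated residual block (illustrative): nonlinear transform,
    GLU gate, then a residual connection with layer normalization."""

    def __init__(self, d_model: int):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_model)
        self.fc2 = nn.Linear(d_model, d_model)
        self.glu = GatedLinearUnit(d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc2(F.elu(self.fc1(x)))
        # Gated skip connection: if the gate saturates near zero,
        # the block reduces to (approximately) the identity mapping.
        return self.norm(x + self.glu(h))


if __name__ == "__main__":
    # Hypothetical usage: gate a sequence of hidden states
    # shaped (batch, time steps, features).
    block = GatedResidualBlock(d_model=16)
    hidden = torch.randn(8, 24, 16)  # e.g. 24 past time steps
    out = block(hidden)
    print(out.shape)  # torch.Size([8, 24, 16])
```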

Authors

Bryan Lim; Sercan Ö. Arık; Nicolas Loeff; Tomas Pfister
