Article

TLTD: Transfer Learning for Tabular Data

Journal

APPLIED SOFT COMPUTING
Volume 147

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2023.110748

Keywords

Deep learning; Feature extraction; Tabular datasets

Deep Neural Networks (DNNs) have become effective for various machine learning tasks. DNNs are known to achieve high accuracy with unstructured data, in which each data sample (e.g., an image) consists of many raw features (e.g., pixels) of the same type. The effectiveness of this approach diminishes for structured (tabular) data, where in most cases decision tree-based models such as Random Forest (RF) or Gradient Boosting Decision Trees (GBDT) outperform DNNs. In addition, DNNs tend to perform poorly when the number of samples in the dataset is small. This paper introduces Transfer Learning for Tabular Data (TLTD), which utilizes a novel learning architecture designed to extract new features from structured datasets. To exploit DNNs' learning capabilities on images, we convert the tabular data into images and then use a distillation technique to achieve better learning. We evaluated our approach on 25 structured datasets and compared the outcomes to those of RF, eXtreme Gradient Boosting (XGBoost), TabNet, KNN, and TabPFN. The results demonstrate the usefulness of the TLTD approach. © 2023 Elsevier B.V. All rights reserved.
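
The abstract names two mechanisms (tabular-to-image conversion and distillation) without implementation detail. The sketch below is a minimal, hypothetical Python illustration of those general ideas, not the authors' TLTD pipeline: the helper names (`row_to_image`, `distillation_loss`), the min-max scaling, the zero-padded square grid, and the temperature/alpha values are all illustrative assumptions.

```python
# Hypothetical sketch of the two ideas named in the abstract; not the TLTD method itself.
import math
import numpy as np
import torch
import torch.nn.functional as F


def row_to_image(row, side=None):
    """Min-max scale a feature vector and zero-pad it into a square grid (an 'image')."""
    lo, hi = row.min(), row.max()
    scaled = (row - lo) / (hi - lo + 1e-12)           # scale features to [0, 1]
    side = side or math.ceil(math.sqrt(row.size))     # smallest square that fits all features
    img = np.zeros(side * side, dtype=np.float32)
    img[: row.size] = scaled                          # remaining cells stay zero-padded
    return img.reshape(side, side)


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard soft-label distillation: blend hard-label CE with temperature-scaled KL."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft


# Usage: a 10-feature row becomes a 4x4 single-channel image,
# and the distillation loss combines fake student/teacher logits.
row = np.random.rand(10)
print(row_to_image(row).shape)                        # (4, 4)

student = torch.randn(8, 3)                           # placeholder logits, 3-class problem
teacher = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
print(distillation_loss(student, teacher, labels).item())
```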
