Review
Computer Science, Artificial Intelligence
Ziying Tan, Linbo Luo, Jinghui Zhong
Summary: Evolutionary multi-task optimization (EMTO) is an optimization paradigm that solves multiple tasks simultaneously, exploiting knowledge common across tasks to outperform solving each task independently. This survey reviews the research progress of knowledge transfer methods in EMTO, proposes a taxonomy to categorize the existing work, and identifies research directions for improving knowledge transfer performance in EMTO.
APPLIED SOFT COMPUTING
(2023)
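As a concrete illustration of the knowledge transfer the survey covers, the toy sketch below evolves two related one-dimensional tasks in a unified search space and, with some probability, lets a parent from one task's sub-population mate into the other's. The tasks, parameters, and operators here are illustrative assumptions, not taken from the survey.

```python
import random

# Two toy 1-D tasks on a unified search space [0, 1].
# Task A: minimise (x - 0.30)^2; Task B: minimise (x - 0.35)^2.
# Because their optima are close, material evolved for one task
# is often useful for the other -- the premise behind EMTO.

def f_a(x): return (x - 0.30) ** 2
def f_b(x): return (x - 0.35) ** 2

def emto(rmp=0.3, pop=20, gens=60, seed=0):
    rng = random.Random(seed)
    pops = {  # one sub-population per task
        "A": [rng.random() for _ in range(pop)],
        "B": [rng.random() for _ in range(pop)],
    }
    fits = {"A": f_a, "B": f_b}
    for _ in range(gens):
        for task in ("A", "B"):
            other = "B" if task == "A" else "A"
            # parent 1: tournament selection within the task
            p1 = min(rng.sample(pops[task], 3), key=fits[task])
            # knowledge transfer: with probability rmp, parent 2
            # comes from the OTHER task's sub-population
            pool = pops[other] if rng.random() < rmp else pops[task]
            p2 = rng.choice(pool)
            child = 0.5 * (p1 + p2) + rng.gauss(0, 0.02)  # blend + mutate
            child = min(1.0, max(0.0, child))
            worst = max(range(pop), key=lambda i: fits[task](pops[task][i]))
            if fits[task](child) < fits[task](pops[task][worst]):
                pops[task][worst] = child  # steady-state replacement
    return min(map(f_a, pops["A"])), min(map(f_b, pops["B"]))

best_a, best_b = emto()
```

The `rmp` parameter (random mating probability) controls how aggressively genetic material crosses task boundaries; the surveyed methods differ largely in how and when such transfer is triggered.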
Article
Automation & Control Systems
Mahdiyeh Ghaffari, Hamid Abdollahi
Summary: The study introduces a standardization method named Score-Augmented Projection-Based Standardization (SA-PBS), which addresses the time and cost of rebuilding statistical models for new instrumentation. SA-PBS uses orthogonal projections to extract spectral sub-spaces containing interferents common to both the primary and secondary instruments, and selects a suitable subset of the primary instrument's recorded dataset with a convex-hull tool. The efficacy of SA-PBS is validated through quantitative analyses and comparative assessments against alternative standardization approaches.
CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS
(2023)
Article
Computer Science, Information Systems
Wei Chang, Feiping Nie, Rong Wang, Xuelong Li
Summary: In this paper, a calibrated multi-task subspace learning method (CMTSL) is proposed to address the negative transfer problem and improve generalization performance in joint learning. The model combines subspace learning, a low-rank constraint, and a binary group indicator to determine which tasks share knowledge and to perform multi-task inference. Experimental results demonstrate the superiority of the proposed method.
INFORMATION SCIENCES
(2023)
Article
Thermodynamics
Yugui Tang, Kuo Yang, Shujing Zhang, Zhen Zhang
Summary: Accurate forecasting of wind power is important for grid system scheduling. A hybrid forecasting model is proposed, which consists of a dual dilated convolution-based self-attention sub-model and an autoregressive sub-model. The model improves accuracy by capturing non-linear correlations and utilizing multi-task learning. Experimental results show better forecasting accuracy and reduced dependence on training data.
Article
Computer Science, Artificial Intelligence
Wei Chang, Feiping Nie, Rong Wang, Xuelong Li
Summary: In multi-task learning, this paper addresses robustly learning the common feature structure shared by tasks while discriminating task relationships. A multi-task subspace learning model with a discrete group-structure constraint is proposed, which clusters the learned tasks into groups. The superiority of the proposed method is demonstrated through experimental results.
PATTERN RECOGNITION
(2023)
Article
Computer Science, Artificial Intelligence
Keqiuyin Li, Jie Lu, Hua Zuo, Guangquan Zhang
Summary: Transfer learning techniques leverage knowledge from similar domains to tackle tasks in a target domain. The proposed method in this article simultaneously learns similarities and diversities of domains to improve the transferability of latent features, aiming at enhancing the performance of the final target predictor.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2022)
Article
Computer Science, Information Systems
Romain Mormont, Pierre Geurts, Raphael Maree
Summary: In this study, multi-task learning was explored as a method for pre-training models for classification tasks in digital pathology. By assembling and transforming multiple datasets, a pool of 22 classification tasks and nearly 900k images was created. Experimental results showed that the resulting models either significantly outperformed ImageNet pre-trained models or provided comparable performance on the various target tasks.
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
(2021)
Article
Computer Science, Artificial Intelligence
Yu Zhang, Qiang Yang
Summary: This paper surveys Multi-Task Learning (MTL) from the perspectives of algorithmic modeling, applications, and theoretical analysis. It discusses different MTL algorithms and their characteristics, the combination of MTL with other learning paradigms, and MTL models for large-scale tasks or high-dimensional data, including dimensionality reduction and feature hashing. Real-world applications of MTL are examined, and theoretical analyses and future directions are discussed.
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
(2022)
Article
Computer Science, Artificial Intelligence
Bogdan Kustowski, Jim A. Gaffney, Brian K. Spears, Gemma J. Anderson, Rushil Anirudh, Peer-Timo Bremer, Jayaraman J. Thiagarajan, Michael K. G. Kruse, Ryan C. Nora
Summary: Many problems in science and engineering require making predictions based on limited observations. This paper proposes a method that combines transfer learning and deep learning to build predictive models with multi-modal outputs, using simulated data to supplement sparse data. It demonstrates the effectiveness of this method in improving simulation predictions and highlights its potential applicability to a wide range of problems.
MACHINE LEARNING-SCIENCE AND TECHNOLOGY
(2022)
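The core idea of the entry above, pre-training on abundant simulated data and fine-tuning on sparse real observations, can be sketched in a few lines. This is a minimal toy illustration, not the paper's multi-modal deep model; the linear form, parameter values, and data sizes are all assumptions.

```python
import random

# Toy simulation-aided transfer learning: fit y = w*x + b by SGD,
# first on abundant "simulated" data, then fine-tune on a handful
# of "real" points. The real-world parameters (w=2.2, b=0.4)
# differ slightly from the simulator's (w=2.0, b=0.5).

def sgd(data, w, b, lr=0.05, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient of the squared error w.r.t. w
            b -= lr * err       # gradient w.r.t. b
    return w, b

rng = random.Random(1)
sim = [(x, 2.0 * x + 0.5) for x in (rng.uniform(-1, 1) for _ in range(200))]
real = [(x, 2.2 * x + 0.4) for x in (-0.5, 0.1, 0.8)]  # sparse observations

w0, b0 = sgd(sim, w=0.0, b=0.0)        # pre-train on plentiful simulation
w1, b1 = sgd(real, w0, b0, epochs=50)  # fine-tune on sparse real data
```

Starting the fine-tuning from the simulator-trained parameters lets the few real points correct the simulator's small bias rather than identify the model from scratch, which is the leverage the paper's approach seeks at much larger scale.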
Article
Physics, Multidisciplinary
Ying Chen, Jiong Yu, Yutong Zhao, Jiaying Chen, Xusheng Du
Summary: This paper proposes a multi-task learning model, PBFS, which combines soft parameter sharing with model pruning to achieve information sharing between tasks and improve transfer learning performance.
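Soft parameter sharing, the mechanism named in the PBFS summary above, can be illustrated with two one-parameter regression tasks whose weights are coupled by an L2 penalty rather than tied hard. This sketch is a deliberately minimal stand-in under assumed data and penalty weights, not the paper's pruned multi-task architecture.

```python
# Soft parameter sharing: tasks keep separate weights w1, w2,
# but the penalty lam * (w1 - w2)^2 nudges them toward each
# other, letting the data-rich task regularize the data-poor one.

def fit_soft_shared(data1, data2, lam, lr=0.05, steps=2000):
    w1 = w2 = 0.0
    for _ in range(steps):
        g1 = sum(2 * (w1 * x - y) * x for x, y in data1) / len(data1)
        g2 = sum(2 * (w2 * x - y) * x for x, y in data2) / len(data2)
        g1 += 2 * lam * (w1 - w2)  # gradient of the coupling penalty
        g2 += 2 * lam * (w2 - w1)
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2

# Related tasks with true slopes 1.0 and 1.2; task 2 has a single
# noisy observation, so fitting it alone badly overshoots.
t1 = [(x, 1.0 * x) for x in (-1.0, -0.3, 0.4, 0.9)]
t2 = [(0.2, 0.5)]  # noisy: an exact fit gives slope 0.5/0.2 = 2.5

w1_ind, w2_ind = fit_soft_shared(t1, t2, lam=0.0)  # independent fits
w1_sh, w2_sh = fit_soft_shared(t1, t2, lam=0.5)    # soft sharing
```

With `lam=0.0` task 2 fits its single noisy point exactly (slope 2.5); with `lam=0.5` its weight is pulled toward task 1's and lands near the true slope 1.2, which is the information sharing PBFS exploits between full networks.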
Article
Biology
Amir Abbasi, Erfan Miahi, Seyed Abolghasem Mirroshandel
Summary: This paper proposes two deep learning algorithms for analyzing abnormal human sperm morphology, one using a deep transfer learning approach and the other a deep multi-task transfer learning method. Both achieved state-of-the-art results in experiments, a significant advance in automated sperm morphology analysis.
COMPUTERS IN BIOLOGY AND MEDICINE
(2021)
Article
Computer Science, Artificial Intelligence
Sunggu Kyung, Keewon Shin, Hyunsu Jeong, Ki Duk Kim, Jooyoung Park, Kyungjin Cho, Jeong Hyun Lee, GilSun Hong, Namkug Kim
Summary: With the development of deep learning, classification and segmentation of intracranial hemorrhage on non-contrast head computed tomography have become popular tasks in emergency medical care. Challenges remain, however: the heterogeneity of intracranial hemorrhage, the need for high performance and patient-level predictions, and vulnerability to real-world external data. This study proposes a supervised multi-task aiding representation transfer learning network, SMART-Net, to overcome these challenges.
MEDICAL IMAGE ANALYSIS
(2022)
Article
Green & Sustainable Science & Technology
Lingyu Zhang, Xu Geng, Zhiwei Qin, Hongjun Wang, Xiao Wang, Ying Zhang, Jian Liang, Guobin Wu, Xuan Song, Yunhai Wang
Summary: This paper proposes a graph convolution network-based approach to model region-wise relationships in urban computing, and enhances the model's generalization performance through multi-modal machine learning and modality interaction mechanisms.
Article
Computer Science, Information Systems
Tianxin Wang, Fuzhen Zhuang, Ying Sun, Xiangliang Zhang, Leyu Lin, Feng Xia, Lei He, Qing He
Summary: This paper proposes a lightweight architecture for modeling task relationships in small or middle-sized datasets. The framework learns a task-specific ensemble of subnetworks and can adapt the model architecture based on the given data. The hierarchical model structure allows for sharing both general and specific distributed representations to capture the inherent relationships between tasks.
INFORMATION SCIENCES
(2022)
Article
Computer Science, Artificial Intelligence
Jesse Read
Summary: In multi-label learning, it has been commonly believed that explicitly modeling the dependence among labels is necessary for the best accuracy. Although the need for dependence modeling has been challenged, such models still outperform independent models in certain contexts, suggesting that other factors contribute to their performance. This article explores joint modeling in the absence of measurable dependence among task labels and proposes a model-agnostic method for cross-domain transfer learning that does not require source data. The insights and results have important implications for both multi-label and transfer learning research.
APPLIED INTELLIGENCE
(2023)