Article
Computer Science, Artificial Intelligence
Ao Ma, Jingjing Li, Ke Lu, Lei Zhu, Heng Tao Shen
Summary: This article proposes a novel domain adaptation scheme named adversarial entropy optimization (AEO), which improves model discriminability and promotes representation transferability by adversarially minimizing and maximizing prediction entropy. This approach aligns with the core idea of adversarial learning and achieves state-of-the-art performance across diverse domain adaptation tasks.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2022)
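The central quantity in an entropy-based scheme like AEO is the Shannon entropy of the model's softmax predictions: driving it down sharpens decision boundaries (discriminability), while driving it up in an adversarial component encourages transferable features. A minimal sketch of the entropy computation only; the helper names are illustrative and not from the paper:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    # Shannon entropy H(p) = -sum_i p_i * log(p_i), natural log.
    return -sum(p * math.log(p) for p in probs if p > 0)

# A confident prediction has low entropy; a uniform one attains the
# maximum log(K) for K classes.
confident = softmax([8.0, 0.0, 0.0])
uniform = softmax([1.0, 1.0, 1.0])
assert entropy(confident) < entropy(uniform)
assert abs(entropy(uniform) - math.log(3)) < 1e-9
```

In an AEO-style objective, the classifier would descend this entropy on target samples while an adversarial component ascends it, mirroring the minimax structure of adversarial learning.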
Article
Computer Science, Artificial Intelligence
JoonHo Lee, Gyemin Lee
Summary: Unsupervised domain adaptation (UDA) aims to improve prediction performance in the target domain by minimizing the divergence between the source and target domains. This paper proposes a novel UDA method, called Model Uncertainty-based UDA (MUDA), which learns domain-invariant features to minimize domain divergence. MUDA utilizes a Bayesian framework and Monte Carlo dropout sampling to evaluate model uncertainty. Experimental results on image recognition tasks demonstrate the superiority of MUDA over existing state-of-the-art methods. MUDA is also extended to multi-source domain adaptation problems.
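The uncertainty estimate behind MUDA rests on Monte Carlo dropout: dropout is kept active at inference time, and the spread of predictions over repeated stochastic forward passes serves as a proxy for model uncertainty. A toy sketch of that idea for a single linear unit; the model, rates, and inputs are illustrative and not MUDA's architecture:

```python
import random
import statistics

def mc_dropout_predict(weights, x, n_samples=200, p_drop=0.5, seed=0):
    """Run repeated stochastic forward passes with dropout kept ON,
    returning the predictive mean and standard deviation."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        # Bernoulli dropout mask, rescaled so the expectation is unchanged.
        masked = [w * (0.0 if rng.random() < p_drop else 1.0 / (1.0 - p_drop))
                  for w in weights]
        preds.append(sum(w * xi for w, xi in zip(masked, x)))
    return statistics.mean(preds), statistics.pstdev(preds)

mean, std = mc_dropout_predict([0.5, -0.3, 0.8], [1.0, 2.0, 0.5])
# The spread (std) is the uncertainty signal; a deterministic pass
# (p_drop = 0) yields zero spread.
_, std0 = mc_dropout_predict([0.5, -0.3, 0.8], [1.0, 2.0, 0.5], p_drop=0.0)
assert std > std0 == 0.0
```

Samples with high predictive spread are the ones a MUDA-style method would treat as uncertain when aligning the domains.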
Article
Computer Science, Artificial Intelligence
Masanari Kimura, Hideitsu Hino
Summary: Many machine learning methods assume that training and test data follow the same distribution, but this assumption is often violated in the real world. Covariate shift, a change in the marginal distribution of the input data, is a significant research topic in machine learning. We show that the well-known family of covariate shift adaptation methods can be unified within the framework of information geometry, and that parameter search for the geometrically generalized covariate shift adaptation method can be performed efficiently. Numerical experiments indicate that our generalization outperforms the existing methods it encompasses.
NEURAL COMPUTATION
(2022)
Article
Mathematics, Applied
Elke R. Gizewski, Lukas Mayer, Bernhard A. Moser, Duc Hoan Nguyen, Sergiy Pereverzyev, Sergei V. Pereverzyev, Natalia Shepeleva, Werner Zellinger
Summary: In this study, we analyze the use of the general regularization scheme in unsupervised domain adaptation under the covariate shift assumption. The results show that the general scheme generalizes the importance-weighted regularized least squares method and can be linked to the estimation of Radon-Nikodym derivatives in reproducing kernel Hilbert spaces. Numerical examples illustrate the theoretical findings.
APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS
(2022)
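Under covariate shift, the importance-weighted regularized least squares method that the general scheme subsumes reweights each training loss by the density ratio β(x) = p_target(x)/p_source(x), which is exactly the Radon-Nikodym derivative the paper links to. A one-dimensional sketch with hand-picked weights; the data and ratio values are illustrative:

```python
def iw_ridge_1d(xs, ys, betas, lam):
    """Closed-form solution of min_w sum_i beta_i*(w*x_i - y_i)^2 + lam*w^2
    for a single scalar coefficient w."""
    num = sum(b * x * y for b, x, y in zip(betas, xs, ys))
    den = sum(b * x * x for b, x in zip(betas, xs)) + lam
    return num / den

xs = [1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2]  # roughly y = x
# With uniform weights the fit is close to the true slope of 1.
uniform = iw_ridge_1d(xs, ys, [1.0, 1.0, 1.0], lam=0.1)
assert abs(uniform - 1.0) < 0.1
# Upweighting the region where the target density is higher shifts the fit
# toward fitting that region well.
shifted = iw_ridge_1d(xs, ys, [0.2, 0.5, 3.0], lam=0.1)
```

In practice the weights β_i are not known and must themselves be estimated, e.g. in a reproducing kernel Hilbert space as the paper analyzes.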
Article
Mathematical & Computational Biology
Jon A. Steingrimsson
Summary: This article presents methods for evaluating the performance of prediction models in a target population while accounting for right-censored observations. The methods use outcome and covariate data from a source population for model development and apply the models to a target population in which only covariate data are available. The proposed estimators are evaluated in simulations and applied to a lung cancer screening trial to target a nationally representative population.
Article
Statistics & Probability
Samory Kpotufe, Guillaume Martinet
Summary: Transfer learning tackles situations in which little or no labeled data is available for a target prediction problem, but ample labeled data is available from a related yet different data distribution. The study focuses on the fundamental limits of transfer, in terms of the sample sizes from the different distributions and the discrepancy between them, aiming to answer practical questions about optimal sampling and achievable performance.
ANNALS OF STATISTICS
(2021)
Article
Engineering, Electrical & Electronic
Zhonghua Liu, Kaiming Shi, Danmei Niu, Hua Huo, Kaibing Zhang
Summary: In recent years, domain adaptation (DA) methods have been proposed to address domain shift between training and test sets. However, most feature-based unsupervised domain adaptation methods impose strict requirements on the transformation matrix and lack integration with the classifier. This paper therefore introduces a novel dynamic classifier approximation (DCA) method for unsupervised domain adaptation. DCA incorporates the classifier to relax the requirements on the transformation matrix and learns domain-invariant features at both the structure and data levels. Experimental results on four datasets demonstrate the superiority of the proposed method over many state-of-the-art domain adaptation methods.
Article
Automation & Control Systems
Akshay Agrawal, Shane Barratt, Stephen Boyd
Summary: Convex optimization models predict outputs by solving convex optimization problems, including well-known models like linear and logistic regression. The paper proposes a heuristic for learning parameters in convex optimization models and describes three general classes of such models, presenting numerical experiments for each.
IEEE-CAA JOURNAL OF AUTOMATICA SINICA
(2021)
Article
Computer Science, Artificial Intelligence
Dayan Guan, Jiaxing Huang, Shijian Lu, Aoran Xiao
Summary: This paper proposes scale variance minimization (SVMin), which improves image segmentation performance in the target domain by minimizing scale variance while maintaining semantic structure consistency within images. The method can be easily integrated with existing UDA techniques, providing a consistent performance boost with minimal additional parameters.
PATTERN RECOGNITION
(2021)
Article
Engineering, Electrical & Electronic
Guokai Liu, Weiming Shen, Liang Gao, Andrew Kusiak
Summary: This article discusses the domain adaptation challenge faced by deep-learning algorithms and introduces the concept of broad learning to address domain adaptation and training time issues. A new adaptive unsupervised broad transfer learning algorithm is proposed, utilizing sparse autoencoder and random orthogonal mapping to handle cross-domain problems.
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
(2021)
Article
Computer Science, Artificial Intelligence
Jingyao Li, Zhanshan Li, Shuai Lu
Summary: This paper proposes a double weighted domain adaptation (DWDA) method, which explores the relative importance of different distribution alignments through new distribution alignment weighting and sample reweighting strategies, and improves overall performance. Experimental results demonstrate that the method outperforms state-of-the-art domain adaptation methods on public datasets and imbalanced datasets, and is robust to a wide range of parameters. Additionally, convergence curves show rapid convergence within five iterations, and component analysis suggests the necessity of each component in DWDA.
NEURAL COMPUTING & APPLICATIONS
(2021)
Article
Computer Science, Information Systems
Andrea Rosales Sanabria, Franco Zambonelli, Juan Ye
Summary: Sensor-based human activity recognition (HAR) plays a significant role in fields such as smart cities, smart homes, and personal healthcare. The shift-GAN proposed in this article integrates Bi-GAN and kernel mean matching (KMM) to achieve robust feature transfer between two different domains. Experimental results demonstrate that shift-GAN outperforms over ten existing domain adaptation techniques, learns intrinsic feature mappings between the two domains independently of the activities, is robust to sensor noise, and is less sensitive to training data.
Article
Chemistry, Multidisciplinary
Daniele Germano, Nicolina Sciaraffa, Vincenzo Ronca, Andrea Giorgi, Giacomo Trulli, Gianluca Borghini, Gianluca Di Flumeri, Fabio Babiloni, Pietro Arico
Summary: This study evaluates the impact of different positions of the EEG headset on the performance of the BCI system and detects covariate shift using the Isolation Forest model. The results show a significant negative correlation between covariate shift and the accuracy of the passive BCI system. This approach opens up new perspectives for developing robust and flexible BCI systems.
APPLIED SCIENCES-BASEL
(2023)
Article
Computer Science, Artificial Intelligence
Xiaofeng Liu, Fangxu Xing, Jane You, Jun Lu, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Summary: This paper proposes a new method to adaptively perform fine-grained subtype-aware alignment in order to improve the performance in the target domain. The method simultaneously enforces subtype-wise compactness and class-wise separation using intermediate pseudo-labels, and utilizes a dynamic queue framework to steadily evolve the subtype cluster centroids. Experimental results demonstrate the effectiveness and validity of the proposed method.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2022)
Article
Mathematics, Interdisciplinary Applications
Yuling Yao, Gregor Pirs, Aki Vehtari, Andrew Gelman
Summary: This article considers stacking, a model averaging technique that forms predictions as a linear combination of different models' predictions. The article improves on this approach with a hierarchical model, extending stacking to Bayesian hierarchical stacking. Experimental results demonstrate that the method achieves better performance on different types of data.
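The averaging step in stacking is simple: given each model's predictions, the stacked prediction is a weighted average with weights on the simplex. A minimal sketch of that step only; choosing the weights (e.g. by optimizing a scoring rule on held-out data, or hierarchically as the paper proposes) is the substantive part and is omitted here:

```python
def stack(preds_per_model, weights):
    """Linear (convex) model averaging: the stacked prediction for each
    input is a weighted average of the individual models' predictions."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    # Transpose so each column holds all models' predictions for one input.
    return [sum(w * p for w, p in zip(weights, col))
            for col in zip(*preds_per_model)]

# Two models' predictions on three inputs; equal weights average them.
m1 = [1.0, 2.0, 3.0]
m2 = [3.0, 2.0, 1.0]
assert stack([m1, m2], [0.5, 0.5]) == [2.0, 2.0, 2.0]
```

Bayesian hierarchical stacking, as summarized above, goes further by letting the weights vary with the input rather than being a single fixed vector.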
Article
Computer Science, Artificial Intelligence
Hanrui Wu, Yuguang Yan, Sentao Chen, Xiangkang Huang, Qingyao Wu, Michael K. Ng
Summary: This paper proposes a method to match latent visual and semantic representations by exploiting shared concepts in a common subspace, while introducing reconstruction losses for both types of features to reduce domain shift and information loss. Experimental results demonstrate that the proposed method outperforms existing zero-shot learning (ZSL) methods in accuracy on six benchmark datasets.
KNOWLEDGE-BASED SYSTEMS
(2021)
Article
Computer Science, Artificial Intelligence
Sentao Chen, Hanrui Wu, Cheng Liu
Summary: Domain adaptation focuses on matching distributions and learning a general feature representation. The proposed DIAA solution matches and aligns the source and target distributions through a feature transformation, measuring their disparity with the KL divergence.
KNOWLEDGE-BASED SYSTEMS
(2021)
Article
Computer Science, Artificial Intelligence
Geng Tu, Jintao Wen, Hao Liu, Sentao Chen, Lin Zheng, Dazhi Jiang
Summary: Both discrete and dimensional models can be applied to emotion recognition tasks, and in this work, a mechanism combining the advantages of these two models is introduced. The multitask graph neural network (MGNN) is proposed to implement this mechanism, balancing exploration and exploitation through a weighting scheme and self-attention mechanism. Experimental results show the superiority of MGNN over advanced methods on tested datasets.
KNOWLEDGE-BASED SYSTEMS
(2022)
Article
Computer Science, Artificial Intelligence
Sentao Chen, Mehrtash Harandi, Xiaona Jin, Xiaowei Yang
Summary: The study addresses the issue of joint distribution matching in domain adaptation, proposing an asymmetric approach and demonstrating its effectiveness in handling nonlinear data.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2021)
Article
Computer Science, Artificial Intelligence
Sentao Chen, Lei Wang, Zijie Hong, Xiaowei Yang
Summary: In this work, we propose a solution to the problem of domain generalization for classification, which involves learning a classification model on a set of source domains and generalizing it to a target domain. Our approach aligns a joint distribution and a product distribution using a neural transformation and minimizes the Relative Chi-Square (RCS) divergence between the two distributions to learn the transformation. We demonstrate the effectiveness of our solution through comparisons with state-of-the-art methods on various image classification datasets.
PATTERN RECOGNITION
(2023)
Article
Computer Science, Artificial Intelligence
Sentao Chen
Summary: This paper addresses the problem of generalizing a predictor trained on source domains to an unseen target domain, proposing a method to align the domains directly through solving a decomposed minimax problem without introducing additional network parameters. The approach demonstrates superiority over relevant methods on multiple multi-domain datasets.
KNOWLEDGE-BASED SYSTEMS
(2023)
Article
Computer Science, Artificial Intelligence
Sentao Chen, Liang Chen
Summary: In this work, the authors address the problem of domain generalization, which is the task of generalizing a prediction model trained on one set of domains to an unseen but related target domain. They propose a neural network representation function to align joint and product distributions in the representation space, effectively aligning multiple domains. Experimental results on synthetic and real-world datasets demonstrate the effectiveness of the proposed solution, which achieves the best average classification accuracy of 82.26% on the text dataset Amazon Reviews and the lowest average regression error of 0.114 on the WiFi dataset UJIIndoorLoc.
NEURAL COMPUTING & APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Sentao Chen, Lin Zheng, Hanrui Wu
Summary: Multi-Source Domain Adaptation (MSDA) aims to train a classification model that achieves small target error by utilizing labeled data from multiple source domains and unlabeled data from a target domain. This paper characterizes the joint distribution difference using the Hellinger distance and derives an upper bound on the target error of a neural network classification model. Motivated by this error bound, Riemannian Representation Learning (RRL) is introduced to train the network by minimizing the average empirical Hellinger distance and the empirical source error.
PATTERN RECOGNITION
(2023)
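For discrete distributions p and q, the Hellinger distance used above is H(p, q) = sqrt(1 - Σ_i sqrt(p_i q_i)); it is symmetric and bounded in [0, 1], which is what makes it convenient inside an error bound. A small sketch on probability vectors; the example distributions are illustrative:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions given as
    lists of probabilities over the same support."""
    # Bhattacharyya coefficient: 1 for identical, 0 for disjoint supports.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    # Clamp for numerical safety before the square root.
    return math.sqrt(max(0.0, 1.0 - bc))

assert hellinger([0.5, 0.5], [0.5, 0.5]) < 1e-9        # identical -> 0
assert abs(hellinger([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-9  # disjoint -> 1
assert 0.0 < hellinger([0.7, 0.3], [0.4, 0.6]) < 1.0
```

RRL, as summarized above, minimizes an empirical estimate of this quantity between each source joint distribution and the target, alongside the source classification error.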
Article
Computer Science, Artificial Intelligence
Sentao Chen, Zijie Hong
Summary: Domain generalization refers to generalizing a prediction model trained on multiple source domains to an unseen target domain. Existing works assume the domains are related by a feature transformation and learn this transformation through kernel mean matching or adversarial training. In this study, we propose a neural network approach that relates the source and target domains through the network mapping and learns this mapping by matching the multiple source joint distributions to their mixture distribution.
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS
(2023)
Article
Computer Science, Artificial Intelligence
Sentao Chen, Zijie Hong, Mehrtash Harandi, Xiaowei Yang
Summary: This article introduces a method called Domain Neural Adaptation (DNA) that uses nonlinear deep neural networks to solve the problem of domain adaptation. The method matches the joint distributions of the source and target domains and learns the classifier, resulting in a well-generalized classification model. Empirical results show that this method performs better than its competitors on several visual datasets.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2023)
Article
Computer Science, Artificial Intelligence
Sentao Chen, Mehrtash Harandi, Xiaona Jin, Xiaowei Yang
IEEE TRANSACTIONS ON IMAGE PROCESSING
(2020)