Article

Incorporating the Loss Function into Discriminative Clustering of Structured Outputs

Journal

IEEE TRANSACTIONS ON NEURAL NETWORKS
Volume 21, Issue 10, Pages 1564-1575

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNN.2010.2064177

Keywords

Clustering methods; dependence maximization; Hilbert-Schmidt independence criterion; loss; structured clustering

Funding

  1. Research Grants Council of the Hong Kong Special Administrative Region [614508]

Abstract

Clustering using the Hilbert-Schmidt independence criterion (CLUHSIC) is a recent clustering algorithm that maximizes the dependence between cluster labels and data observations according to the Hilbert-Schmidt independence criterion (HSIC). It is unique in that structure information on the cluster outputs can be easily utilized in the clustering process. However, while the choice of the loss function is known to be very important in supervised learning with structured outputs, we will show in this paper that CLUHSIC is implicitly using the often inappropriate zero-one loss. We propose an extension called CLUHSICAL (which stands for Clustering using HSIC and loss) which explicitly considers both the output dependency and loss function. Its optimization problem has the same form as CLUHSIC, except that its partition matrix is constructed in a different manner. Experimental results on a number of datasets with structured outputs show that CLUHSICAL often outperforms CLUHSIC in terms of both structured loss and clustering accuracy.
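
To make the dependence objective concrete, the following is a minimal numpy sketch (not code from the paper): it computes a biased empirical HSIC between a data kernel and an output kernel built from a partition matrix, and contrasts a zero-one output kernel with one loss-aware alternative. The function and variable names (hsic, cluhsic_objective, A_loss_aware) and the chain-structured loss are illustrative assumptions; CLUHSICAL's actual partition-matrix construction is defined in the paper.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC between two n x n kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def cluhsic_objective(K, Pi, A):
    """Dependence between the data kernel K and a clustering.

    Pi : n x c one-hot partition matrix
    A  : c x c output kernel on the clusters
         (the identity corresponds to the zero-one / unstructured case).
    """
    return hsic(K, Pi @ A @ Pi.T)

# Toy example: 6 points, 3 clusters arranged in a chain 0 - 1 - 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
K = X @ X.T                                          # linear kernel on the inputs

Pi = np.zeros((6, 3))
Pi[np.arange(6), [0, 0, 1, 1, 2, 2]] = 1.0           # one candidate partition

A_zero_one = np.eye(3)                               # ignores the cluster structure
chain_loss = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
A_loss_aware = chain_loss.max() - chain_loss         # one way to turn a loss into a similarity

print(cluhsic_objective(K, Pi, A_zero_one))
print(cluhsic_objective(K, Pi, A_loss_aware))
```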

Recommendations

Article Computer Science, Information Systems

Toward Equivalent Transformation of User Preferences in Cross Domain Recommendation

Xu Chen, Ya Zhang, Ivor W. Tsang, Yuangang Pan, Jingchao Su

Summary: This article discusses cross-domain recommendation in scenarios where different domains have the same set of users but no overlapping items. Most existing methods focus on shared-user representation, but fail to capture domain-specific features. In this article, an equivalent transformation learner (ETL) is proposed to preserve both domain-specific and overlapped features by modeling the joint distribution of user behaviors across domains.

ACM TRANSACTIONS ON INFORMATION SYSTEMS (2023)

Article Engineering, Industrial

Operation twins: production-intralogistics synchronisation in Industry 4.0

Mingxing Li, Daqiang Guo, Ming Li, Ting Qu, George Q. Huang

Summary: The widespread adoption of Industry 4.0 technologies is revolutionising manufacturing operations. This paper introduces a novel concept of operations twins (OT) for achieving synchronisation between production and intralogistics (PiL) through the use of Industry 4.0 technologies and innovative operations management strategies.

INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH (2023)

Article Computer Science, Artificial Intelligence

LADDER: Latent boundary-guided adversarial training

Xiaowei Zhou, Ivor W. Tsang, Jie Yin

Summary: Deep Neural Networks have achieved great success in classification tasks, but they are vulnerable to adversarial attacks. Adversarial training is an effective strategy to improve the robustness of DNN models, but existing methods fail to generalize well to standard test data. To achieve a better trade-off between standard accuracy and adversarial robustness, a novel adversarial training framework called LADDER is proposed, which generates high-quality adversarial examples through perturbations on latent features (a generic sketch of gradient-based latent-feature perturbation follows this entry).

MACHINE LEARNING (2023)
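
As a rough, generic illustration of perturbing latent features along the loss gradient (not LADDER's boundary-guided generation; the tiny encoder, classifier head, and hyperparameters below are hypothetical), a PyTorch sketch might look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical components: a small feature encoder and a classifier head.
encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU())
classifier = nn.Linear(8, 3)

def perturb_latent(x, y, epsilon=0.5, steps=5, step_size=0.1):
    """Push latent features toward the decision boundary by ascending the
    classification loss (generic latent-space perturbation, not LADDER)."""
    z = encoder(x).detach()                          # latent features, encoder frozen
    delta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(classifier(z + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()   # gradient-ascent step
            delta.clamp_(-epsilon, epsilon)          # keep the perturbation bounded
        delta.grad.zero_()
    return (z + delta).detach()

x = torch.randn(4, 20)                               # a toy batch of inputs
y = torch.randint(0, 3, (4,))                        # their labels
z_adv = perturb_latent(x, y)                         # perturbed latent features
# In adversarial training, such perturbed features (or inputs generated from
# them) would be added to the training data for the classifier.
```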

Article Computer Science, Artificial Intelligence

Coarse-to-Fine Contrastive Learning on Graphs

Peiyao Zhao, Yuangang Pan, Xin Li, Xu Chen, Ivor W. Tsang, Lejian Liao

Summary: Inspired by the success of contrastive learning, graph augmentation strategies have been used to learn node representations. Existing methods add perturbations to construct contrastive samples but ignore the prior-information assumption, which leads to decreased similarity and increased discrimination among nodes. In this article, a general ranking framework is proposed to incorporate this prior information into contrastive learning. Experimental results on benchmark datasets show the effectiveness of the proposed algorithm compared to supervised and unsupervised models.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)

Article Computer Science, Artificial Intelligence

Latent Class-Conditional Noise Model

Jiangchao Yao, Bo Han, Zhihan Zhou, Ya Zhang, Ivor W. Tsang

Summary: Learning with noisy labels is important in the Big Data era, as it saves annotation costs. Previous noise-transition-based methods achieved good performance but relied on impractical anchor sets. Our approach introduces a Bayesian framework for parameterizing the noise transition and solves the problem of ill-posed stochastic learning in back-propagation. A standard forward-correction example with a fixed noise transition matrix follows this entry.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)
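
For context, the classical way a class-conditional noise transition enters training is forward correction: push the model's clean-label posterior through the transition matrix and score the observed noisy label. The sketch below illustrates only that standard idea with a fixed, hand-written matrix; the paper's latent class-conditional noise model instead treats the transition within a Bayesian framework.

```python
import numpy as np

# A hypothetical 3-class noise transition matrix: T[i, j] is the probability
# that a clean label i is observed as the noisy label j.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def forward_corrected_nll(clean_probs, noisy_label):
    """Negative log-likelihood of the observed (noisy) label after pushing
    the model's clean-label posterior through the transition matrix."""
    noisy_probs = clean_probs @ T          # p(noisy = j) = sum_i p(clean = i) * T[i, j]
    return -np.log(noisy_probs[noisy_label])

clean_probs = np.array([0.7, 0.2, 0.1])   # the model's prediction over clean classes
print(forward_corrected_nll(clean_probs, noisy_label=1))
```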

Article Computer Science, Artificial Intelligence

DifFormer: Multi-Resolutional Differencing Transformer With Dynamic Ranging for Time Series Analysis

Bing Li, Wei Cui, Le Zhang, Ce Zhu, Wei Wang, Ivor W. Tsang, Joey Tianyi Zhou

Summary: Time series analysis is crucial in various fields such as economics, finance, and surveillance. However, traditional Transformer models have limitations in representing nuanced patterns in time series data. To overcome these challenges, we propose a novel Transformer architecture called DifFormer, which incorporates a multi-resolutional differencing mechanism (a plain multi-stride differencing sketch follows this entry). DifFormer outperforms existing models in classification, regression, and forecasting tasks, while also being more efficient and consuming less time.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)
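
To illustrate the plain, non-learned version of differencing at several resolutions, the sketch below computes first differences of a toy series at a few strides; DifFormer's learned differencing and dynamic ranging are more involved, and the stride values here are arbitrary assumptions.

```python
import numpy as np

def multi_stride_differences(x, strides=(1, 2, 4)):
    """First differences of a 1-D series at several strides, zero-padded on
    the left so every resolution keeps the original length."""
    rows = []
    for s in strides:
        d = x[s:] - x[:-s]                         # difference at stride s
        rows.append(np.concatenate([np.zeros(s), d]))
    return np.stack(rows)                          # shape: (len(strides), len(x))

t = np.arange(32)
x = np.sin(0.3 * t) + 0.05 * t                     # toy series: oscillation plus drift
feats = multi_stride_differences(x)
print(feats.shape)                                 # (3, 32)
```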

Article Computer Science, Artificial Intelligence

Noisy Label Learning With Provable Consistency for a Wider Family of Losses

Defu Liu, Wen Li, Lixin Duan, Ivor W. Tsang, Guowu Yang

Summary: Deep models have achieved impressive performance in various visual recognition tasks, but their generalization ability is compromised by noisy labels. This paper presents a dynamic label learning algorithm that allows the use of different loss functions for classification in the presence of label noise, ensuring that the search for the optimal classifier of noise-free samples is not hindered by label noise.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)

Article Computer Science, Artificial Intelligence

Missingness-Pattern-Adaptive Learning With Incomplete Data

Yongshun Gong, Zhibin Li, Wei Liu, Xiankai Lu, Xinwang Liu, Ivor W. W. Tsang, Yilong Yin

Summary: Many real-world problems involve data with missing values, which can hinder learning performance. Existing methods use a universal model for all incomplete data, resulting in suboptimal models for each missingness pattern. This paper proposes a general model that can adjust to different missingness patterns, minimizing the competition between data with different missingness patterns. The model is based on observable features and does not rely on data imputation, and a low-rank constraint is introduced to improve generalization ability.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)

Article Computer Science, Artificial Intelligence

MetaCAR: Cross-Domain Meta-Augmentation for Content-Aware Recommendation

Hui Xu, Changyu Li, Yan Zhang, Lixin Duan, Ivor W. Tsang, Jie Shao

Summary: Cold-start is a key challenge for recommendation systems, especially when user-item interaction data is limited. Meta-learning-based approaches have shown success in addressing this issue by leveraging their strong generalization capabilities to quickly adapt to new tasks in cold-start settings. However, these methods are prone to meta-overfitting when trained with single and sparse ratings, because a single rating cannot capture a user's diverse interests under different circumstances. To overcome this, a meta-augmentation technique is proposed to convert non-mutually-exclusive (Non-ME) tasks into mutually-exclusive (ME) tasks without changing inputs, thus relieving the issue of meta-overfitting. Inspired by this technique, this paper proposes a cross-domain meta-augmentation technique for content-aware recommendation systems (MetaCAR) to construct ME tasks in the recommendation scenario. The proposed method consists of two stages: meta-augmentation and meta-learning. In the meta-augmentation stage, domain adaptation is performed using a dual conditional variational autoencoder (CVAE) with a multi-view information bottleneck constraint, and the learned CVAE is applied to generate ratings for users in the target domain. In the meta-learning stage, both the true and generated ratings are used to construct ME tasks, enabling the meta-learning recommendation model to avoid meta-overfitting. Experiments conducted on real-world datasets demonstrate the significant superiority of MetaCAR over competing baselines, including cross-domain, content-aware, and meta-learning-based recommendations, in dealing with the cold-start user issue.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2023)

Article Computer Science, Artificial Intelligence

A Multi-View Multi-Task Learning Framework for Multi-Variate Time Series Forecasting

Jinliang Deng, Xiusi Chen, Renhe Jiang, Xuan Song, Ivor W. Tsang

Summary: Multi-variate time series (MTS) data is a common type of data abstraction in the real world, generated from a hybrid dynamical system. MTS data can be categorized into spatial and temporal attributes, and can be analyzed from the spatial view or temporal view. A novel multi-view multi-task (MVMT) learning framework is proposed to extract hidden MVMT information from MTS data while predicting. The framework improves effectiveness and efficiency of canonical architectures according to extensive experiments on three datasets.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2023)

Article Computer Science, Artificial Intelligence

Latent Representation Guided Multi-View Clustering

Shudong Huang, Ivor W. W. Tsang, Zenglin Xu, Jiancheng Lv

Summary: Multi-view clustering aims to reveal correlations between different input modalities in an unsupervised way. This paper proposes a novel model that learns a robust structured similarity graph and performs multi-view clustering simultaneously. The similarity graph is adaptively learned based on a latent representation that is invulnerable to noise and outliers. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed model.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2023)

Article Computer Science, Artificial Intelligence

Taming Overconfident Prediction on Unlabeled Data From Hindsight

Jing Li, Yuangang Pan, Ivor W. Tsang

Summary: This article proposes a dual mechanism called adaptive sharpening (ADS) to minimize prediction uncertainty in semi-supervised learning. ADS applies a soft-threshold to mask out uncertain and negligible predictions, and then sharpens the informed ones to distill certain predictions. A simplified threshold-then-sharpen sketch follows this entry.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)
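
One plausible reading of the threshold-then-sharpen step, as a minimal numpy sketch: zero out per-class probabilities below a threshold, then apply temperature sharpening and renormalize. The thresholding rule, the temperature, and the function name are assumptions for illustration, not the exact ADS operator.

```python
import numpy as np

def adaptive_sharpen(probs, tau=0.1, temperature=0.4):
    """Zero out per-class probabilities below tau, then sharpen and renormalize.

    probs       : (n, c) softmax outputs on unlabeled data
    tau         : threshold below which a class probability is treated as negligible
    temperature : values < 1 sharpen the remaining distribution
    """
    thresholded = np.maximum(probs - tau, 0.0)           # soft-threshold the predictions
    sharpened = thresholded ** (1.0 / temperature)       # temperature sharpening
    total = sharpened.sum(axis=1, keepdims=True)
    return sharpened / np.clip(total, 1e-12, None)       # rows of all zeros stay zero

probs = np.array([[0.70, 0.25, 0.05],                    # last class is negligible, gets masked
                  [0.40, 0.35, 0.25]])                   # a less certain prediction
print(adaptive_sharpen(probs))
```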

Article Computer Science, Artificial Intelligence

Distribution Matching for Machine Teaching

Xiaofeng Cao, Ivor W. Tsang

Summary: Machine teaching is a reverse problem of machine learning, aiming to guide the student towards its target hypothesis using known learning parameters. Previous studies focused on balancing teaching risk and cost to find the best teaching examples. However, when the student doesn't disclose any cue of the learning parameters, the optimization solver becomes ineffective. This article presents a distribution matching-based machine teaching strategy that iteratively shrinks teaching cost to eliminate boundary perturbations, providing an effective solution.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)

Article Computer Science, Artificial Intelligence

Structure-Informed Shadow Removal Networks

Yuhao Liu, Qing Guo, Lan Fu, Zhanghan Ke, Ke Xu, Wei Feng, Ivor W. Tsang, Rynson W. H. Lau

Summary: In this paper, a novel structure-informed shadow removal network (StructNet) is proposed to address the problem of shadow remnants in existing deep learning-based methods. StructNet reconstructs the structure information of the input image without shadows and uses it to guide the image-level shadow removal. Two main modules, MSFE and MFRA, are developed to extract image structural features and regularize feature consistency. Additionally, an extension called MStructNet is proposed to exploit multi-level structure information and improve shadow removal performance.

IEEE TRANSACTIONS ON IMAGE PROCESSING (2023)

Article Computer Science, Artificial Intelligence

Data-Efficient Learning via Minimizing Hyperspherical Energy

Xiaofeng Cao, Weiyang Liu, Ivor W. Tsang

Summary: This paper addresses the problem of data-efficient learning from scratch in scenarios where data or labels are expensive to collect. It proposes the MHEAL algorithm, based on active learning on homeomorphic tubes of spherical manifolds, and provides comprehensive theoretical guarantees. Empirical results demonstrate the effectiveness of MHEAL in various applications for data-efficient learning. A small example of computing a hyperspherical (Riesz) energy follows this entry.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)
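
As a small, self-contained example of the quantity in the title, the sketch below computes a Riesz s-energy of unit-normalized directions, one common definition of hyperspherical energy; how the MHEAL algorithm minimizes and uses this energy for active learning is described in the paper.

```python
import numpy as np

def hyperspherical_energy(W, s=1.0, eps=1e-12):
    """Riesz s-energy of a set of directions: the sum of inverse pairwise
    distances between the unit-normalized rows of W. Lower energy means the
    directions are spread more uniformly over the sphere."""
    U = W / np.linalg.norm(W, axis=1, keepdims=True)     # project rows onto the unit sphere
    diff = U[:, None, :] - U[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.triu_indices(len(U), k=1)                  # count each pair once
    return float(np.sum(1.0 / (dist[i, j] ** s + eps)))

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 5))                             # 10 candidate directions in R^5
print(hyperspherical_energy(W))
```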
