Article

Objective Reduction in Many-Objective Optimization: Evolutionary Multiobjective Approaches and Comprehensive Analysis

Journal

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TEVC.2017.2672668

Keywords

Many-objective optimization; multiobjective evolutionary algorithms (MOEAs); multiobjective optimization; objective reduction

Funding

  1. A*Star-TSRP
  2. Singapore Institute of Manufacturing Technology-Nanyang Technological University (NTU) Joint Laboratory and Collaborative Research Programme on Complex Systems
  3. Computational Intelligence Graduate Laboratory at NTU
  4. National Natural Science Foundation of China [61673235]


Many-objective optimization problems pose great difficulties for existing multiobjective evolutionary algorithms in terms of selection operators, computational cost, visualization of the high-dimensional tradeoff front, and so on. Objective reduction can alleviate such difficulties by removing redundant objectives from the original objective set, and it has become one of the most important techniques in many-objective optimization. In this paper, we propose viewing objective reduction as a multiobjective search problem and introduce three multiobjective formulations of it: the first two formulations are based on preservation of the dominance structure, while the third exploits the correlation between objectives. For each formulation, a multiobjective objective reduction algorithm is proposed that employs the nondominated sorting genetic algorithm II (NSGA-II) to generate a Pareto front of nondominated objective subsets, which can offer decision support to the user. Moreover, we conduct a comprehensive analysis of two major categories of objective reduction approaches based on several theorems, with the aim of revealing their strengths and limitations. Lastly, the performance of the proposed multiobjective algorithms is studied extensively on various benchmark problems and two real-world problems. Numerical results and comparisons highlight the effectiveness and superiority of the proposed algorithms over existing state-of-the-art approaches in the field.
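
To make the formulation in the abstract concrete, the sketch below (a minimal illustration, not the authors' implementation) encodes each candidate objective subset as a binary mask and searches for nondominated subsets under two conflicting criteria: the subset size and a measure of how poorly the subset preserves the structure of the full objective set. The bit-string encoding, the correlation-based conservation error, and the simplified NSGA-II-style selection without crowding distance are all assumptions made for illustration; the paper's own formulations and its use of NSGA-II differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)


def conservation_error(mask, F):
    """Assumed error measure (for illustration): how poorly the retained
    objectives (mask == True) cover the removed ones, using pairwise
    correlations of sampled objective values F (N points x M objectives)."""
    kept, dropped = np.flatnonzero(mask), np.flatnonzero(~mask)
    if kept.size == 0:
        return 1.0                       # empty subset preserves nothing
    if dropped.size == 0:
        return 0.0                       # full set preserves everything
    C = np.corrcoef(F, rowvar=False)     # M x M correlation matrix
    best = C[np.ix_(dropped, kept)].max(axis=1)  # best kept partner per dropped objective
    return float(1.0 - best.min())       # worst-covered dropped objective


def evaluate(pop, F):
    """Two minimization criteria per candidate subset: size and error."""
    return np.array([[m.sum(), conservation_error(m, F)] for m in pop])


def nondominated(obj):
    """Boolean flags of nondominated rows (both criteria minimized)."""
    flags = np.ones(len(obj), dtype=bool)
    for i, a in enumerate(obj):
        for b in obj:
            if np.all(b <= a) and np.any(b < a):
                flags[i] = False
                break
    return flags


def reduce_objectives(F, pop_size=20, generations=50):
    """Evolve binary objective-subset masks with a simplified NSGA-II-style
    loop (no crowding distance) and return the nondominated subsets."""
    M = F.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, M)).astype(bool)
    for _ in range(generations):
        # Variation: uniform crossover between random parents + bit-flip mutation.
        parents = pop[rng.integers(0, pop_size, size=(pop_size, 2))]
        cross = rng.random((pop_size, M)) < 0.5
        children = np.where(cross, parents[:, 0], parents[:, 1])
        children ^= rng.random((pop_size, M)) < (1.0 / M)
        merged = np.vstack([pop, children])
        obj = evaluate(merged, F)
        # Environmental selection: nondominated subsets first, ties broken by
        # conservation error, then by subset size.
        nd = nondominated(obj)
        order = np.lexsort((obj[:, 0], obj[:, 1], ~nd))
        pop = merged[order[:pop_size]]
    obj = evaluate(pop, F)
    nd = nondominated(obj)
    return pop[nd], obj[nd]


# Toy usage: five objectives where f4 and f5 duplicate f1 and f2.
X = rng.random((200, 3))
F = np.column_stack([X[:, 0], X[:, 1], X[:, 2], X[:, 0], X[:, 1]])
for mask, (size, err) in zip(*reduce_objectives(F)):
    print("keep objectives", np.flatnonzero(mask) + 1, "size", int(size), "error", round(err, 3))
```

In the toy usage, two of the five objectives are exact duplicates of others, so a three-objective subset with zero conservation error appears on the resulting Pareto front of subsets, alongside smaller subsets that trade accuracy for further reduction.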


Recommended

Article Computer Science, Artificial Intelligence

Evolutionary Machine Learning With Minions: A Case Study in Feature Selection

Nick Zhang, Abhishek Gupta, Zefeng Chen, Yew-Soon Ong

Summary: This article introduces a novel algorithm-centric solution that uses evolutionary multitasking to speed up decision-making in the machine learning pipeline. By creating small data proxies and combining them with the main task, the efficiency of the evolutionary search is improved. Experiments show that multitasking can significantly speed up the baseline evolutionary algorithms.

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION (2022)

Article Engineering, Civil

Towards Faster Vehicle Routing by Transferring Knowledge From Customer Representation

Liang Feng, Yuxiao Huang, Ivor W. Tsang, Abhishek Gupta, Ke Tang, Kay Chen Tan, Yew-Soon Ong

Summary: The Vehicle Routing Problem is a challenging optimization problem, and this article proposes a method to speed up the optimization process by transferring knowledge from past solved problems.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Article Computer Science, Artificial Intelligence

Half a Dozen Real-World Applications of Evolutionary Multitasking, and More

Abhishek Gupta, Lei Zhou, Yew-Soon Ong, Zefeng Chen, Yaqing Hou

Summary: Evolutionary multitasking (EMT) is a concept that fills the potential gap of skill transfer between distinct optimization problems in evolutionary computation, by utilizing a population's implicit parallelism to jointly solve a set of tasks. This paper reviews various application-oriented explorations of EMT and provides recipes on how general problem formulations can be transformed in the light of EMT.

IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE (2022)

Editorial Material Computer Science, Artificial Intelligence

Guest Editorial Special Issue on Multitask Evolutionary Computation

Abhishek Gupta, Yew-Soon Ong, Kenneth A. De Jong, Mengjie Zhang

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION (2022)

Article Computer Science, Artificial Intelligence

A layer-wise neural network for multi-item single-output quality estimation

Edward K. Y. Yapp, Abhishek Gupta, Xiang Li

Summary: A layer-wise neural network architecture is proposed for the classification and regression of time-series data with single-output instances. The approach is benchmarked against other methods, and an ablation study is conducted to understand the critical design choices. The results show that the proposed method outperforms the others and that parameter sharing in the dense layers is key to improving performance.

JOURNAL OF INTELLIGENT MANUFACTURING (2023)

Article Automation & Control Systems

From Multitask Gradient Descent to Gradient-Free Evolutionary Multitasking: A Proof of Faster Convergence

Lu Bai, Wu Lin, Abhishek Gupta, Yew-Soon Ong

Summary: This paper introduces a new multitasking algorithm, MTGD, and its gradient-free derivative, MTESs, and demonstrates faster convergence compared with the single-task scenario. The theoretical findings are supported by numerical experiments on synthetic benchmarks and practical optimization examples.

IEEE TRANSACTIONS ON CYBERNETICS (2022)

Article Automation & Control Systems

Frame-Correlation Transfers Trigger Economical Attacks on Deep Reinforcement Learning Policies

Xinghua Qu, Yew-Soon Ong, Abhishek Gupta

Summary: This article explores transferability across frames to enable the creation of minimal yet powerful attacks in image-based reinforcement learning. By introducing three types of frame-correlation transfers (FCTs), the study demonstrates the tradeoff between the complexity and potency of the transfer mechanism, significantly speeding up attack generation on four state-of-the-art policies across six Atari games.

IEEE TRANSACTIONS ON CYBERNETICS (2022)

Article Multidisciplinary Sciences

Pareto optimization with small data by learning across common objective spaces

Chin Sheng Tan, Abhishek Gupta, Yew-Soon Ong, Mahardhika Pratama, Puay Siew Tan, Siew Kei Lam

Summary: In multi-objective optimization, covering the Pareto front (PF) becomes challenging because the number of points required grows exponentially with the dimensionality of the objective space. Pareto estimation (PE) aims to overcome insufficient PF representations by using inverse machine learning. However, the accuracy of the inverse model is limited by the scarcity of training data in high-dimensional/expensive objective spaces. To address this challenge, this paper proposes multi-source inverse transfer learning for PE to enhance PF approximation. Experimental results demonstrate significant improvements in the predictive accuracy and PF approximation capacity of Pareto set learning. The work envisions a future of on-demand human-machine interaction for facilitating multi-objective decisions.

SCIENTIFIC REPORTS (2023)

Article Computer Science, Artificial Intelligence

Jack and Masters of all Trades: One-Pass Learning Sets of Model Sets From Large Pre-Trained Models

Han Xiang Choong, Yew-Soon Ong, Abhishek Gupta, Caishun Chen, Ray Lim

Summary: In the field of deep learning, the size of neural networks is crucial. Large pre-trained models, capable of handling various tasks and trained on extensive data, are at the forefront of artificial intelligence. However, the real-world utility of these singular models, known as Jacks of All Trades (JATs), may be limited due to resource constraints, changing objectives, and diverse task requirements. This paper explores the concept of creating a diverse set of compact machine learning models, called the Set of Sets, to address these limitations. A novel approach using a neuroevolutionary multitasking algorithm is presented, bringing us closer to collectively achieving models that are Masters of All Trades.

IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE (2023)

Article Computer Science, Artificial Intelligence

Adversary Agnostic Robust Deep Reinforcement Learning

Xinghua Qu, Abhishek Gupta, Yew-Soon Ong, Zhu Sun

Summary: This article addresses the robustness of deep reinforcement learning (DRL) policies against unknown perturbations. The authors propose an adversary-agnostic robust DRL paradigm that does not require predefined adversaries, provide a theoretical analysis, and conduct experiments to demonstrate the effectiveness of their approach.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)

Article Automation & Control Systems

Scalable Transfer Evolutionary Optimization: Coping With Big Task Instances

Mojtaba Shakeri, Erfan Miahi, Abhishek Gupta, Yew-Soon Ong

Summary: This article proposes a novel transfer evolutionary optimization framework that enables joint evolution in the source knowledge space and the search space of solutions to the target problem, with scalability and online learning agility.

IEEE TRANSACTIONS ON CYBERNETICS (2023)

Article Computer Science, Artificial Intelligence

Multitask Neuroevolution for Reinforcement Learning With Long and Short Episodes

Nick Zhang, Abhishek Gupta, Zefeng Chen, Yew-Soon Ong

Summary: This study proposes a novel neuroevolutionary multitasking algorithm (NuEMT) to address the issue of high sample complexity in deep reinforcement learning. By transferring information from short-term auxiliary tasks to the target task, the algorithm enables efficient learning and evaluation of policies, reducing the requirement for expensive agent-environment interaction data.

IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS (2023)

Proceedings Paper Computer Science, Artificial Intelligence

Tightening Regret Bounds for Scalable Transfer Optimization with Gaussian Process Surrogates

Abhishek Gupta, Ray Lim, Chin Chun Ooi, Yew-Soon Ong

Summary: By incorporating knowledge transfer into black-box optimization with Gaussian process surrogates, the cumulative regret bounds can be tightened, leading to faster convergence and overcoming the cold start problem of traditional Bayesian optimization algorithms. Extending this method to multi-source settings further tightens the regret bounds and maintains algorithmic complexity linear in the number of sources.

2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI) (2022)

Article Automation & Control Systems

Scaling Multiobjective Evolution to Large Data With Minions: A Bayes-Informed Multitask Approach

Zefeng Chen, Abhishek Gupta, Lei Zhou, Yew-Soon Ong

Summary: In this article, a method is proposed to quickly optimize large datasets using auxiliary source tasks. A computational resource allocation strategy is designed to effectively utilize these auxiliary tasks. Experimental results show that the proposed algorithm achieves higher speedup compared to existing methods, demonstrating its efficiency in handling real-world multiobjective optimization problems involving large datasets.

IEEE TRANSACTIONS ON CYBERNETICS (2022)

Proceedings Paper Computer Science, Artificial Intelligence

An Initial Investigation of Data-Lean Transfer Evolutionary Optimization with Probabilistic Priors

Ray Lim, Abhishek Gupta, Yew-Soon Ong

Summary: This paper investigates a data-lean variant of the Transfer Evolutionary Optimization (TrEO) algorithm, which uses source-target similarity capture and solution representation learning to improve convergence rates. Experimental results show that this data-lean approach can achieve competitive performance.

2022 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC) (2022)
