Article
Energy & Fuels
Dong Hu, Hui Xie, Kang Song, Yuanyuan Zhang, Long Yan
Summary: This study proposes an apprenticeship-reinforcement learning (A-RL) framework based on expert demonstration (ED) model embedding to improve energy management strategies (EMS) for hybrid electric vehicles (HEVs). The framework combines apprenticeship learning (AL) with deep reinforcement learning (DRL), using the ED model to guide the DRL agent. The results show significant improvements in training convergence rate and fuel economy.
Article
Chemistry, Physical
Xiaohua Zeng, Haoming Gao, Zhitao Chen, Dongpo Yang, Dafeng Song
Summary: In this paper, a collaborative control strategy based on Nash equilibrium is proposed for cruise speed optimization and energy management of hybrid electric vehicles (HEVs). The strategy can optimize both vehicle-level and hybrid-system-level control simultaneously. Simulation results show that the proposed method outperforms hierarchical model predictive control in terms of vehicle following, safety, comfort, and energy consumption reduction.
JOURNAL OF POWER SOURCES
(2023)
Review
Engineering, Mechanical
Shengguang Xiong, Yishi Zhang, Chaozhong Wu, Zhijun Chen, Jiankun Peng, Mingyang Zhang
Summary: Energy management is a crucial task in the research field of intelligent plug-in split hybrid electric vehicles (IPSHEVs) because of their complex powertrains and changing driving conditions. By combining an optimized Dijkstra's path-planning algorithm (ODA) with the reinforcement-learning Deep Q-Network (DQN), an intelligent energy management strategy is proposed for the IPSHEV to address these challenges; simulation results show its feasibility and effectiveness.
PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART D-JOURNAL OF AUTOMOBILE ENGINEERING
(2021)
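The strategy above couples a shortest-path planner with a DQN. As an illustrative sketch only (not the authors' "optimized" variant), the plain Dijkstra core such a planner starts from can be written over an adjacency list:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over graph: {node: [(neighbor, cost), ...]}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path to u was already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

An EMS of the kind summarized would feed the resulting route (grades, speed limits) to the DQN as part of its state.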
Article
Chemistry, Multidisciplinary
Qinghua Tang, Demin Li, Yihong Zhang, Xuemin Chen
Summary: With the growing popularity of AEVs, optimizing path planning and charging strategies is crucial. This paper proposes a joint push-pull communication mode to obtain real-time traffic conditions and charging-infrastructure information, and applies dynamic optimization algorithms to minimize travel and charging costs.
APPLIED SCIENCES-BASEL
(2023)
Article
Engineering, Electrical & Electronic
Ying Zhang, Muyang Li, Yuanchang Chen, Yao-Yi Chiang, Yunpeng Hua
Summary: Electric vehicle route planning (EVRP) is crucial for the widespread use of battery electric vehicles, and existing solutions have limitations in computational complexity and efficiency. To address these issues, we propose an efficient Deep Reinforcement Learning based methodology for constraint-based routing that considers charging policies. Our methodology outperforms traditional methods in computation time and can be applied to various problem instances without re-training.
IEEE TRANSACTIONS ON SMART GRID
(2023)
Article
Engineering, Ocean
Runlong Miao, Lingxiao Wang, Shuo Pang
Summary: This article presents a coordination algorithm for organizing a fleet of unmanned surface vehicles (USVs) to search for multiple moving targets in an ocean environment. The algorithm allows the USVs to exchange sensing information and construct a grid confidence map from the received and self-perceived information. The coordination is modeled as a reinforcement learning problem that encourages exploration of new regions and prevents revisiting already searched areas. Experimental results demonstrate that the proposed method is more intelligent and efficient than conventional strategies.
APPLIED OCEAN RESEARCH
(2022)
Article
Automation & Control Systems
Peng Mei, Hamid Reza Karimi, Hehui Xie, Fei Chen, Cong Huang, Shichun Yang
Summary: Given the importance of energy management strategies for hybrid electric vehicles, this paper addresses the energy optimization control problem using reinforcement learning algorithms. It establishes a hybrid electric vehicle power-system model and designs a hierarchical energy optimization control architecture based on networked information. Three learning-based energy optimization control strategies are introduced: Q-learning, deep Q-network (DQN), and deep deterministic policy gradient (DDPG) algorithms. Simulations illustrate that the DDPG algorithm outperforms the Q-learning and DQN algorithms in robustness and convergence speed for energy management.
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
(2023)
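Of the three strategies compared above, tabular Q-learning is the simplest baseline. A minimal sketch of its one-step update (state/action encodings here are hypothetical, not the paper's):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One-step Q-learning: move Q[s][a] toward the bootstrapped target."""
    target = r + gamma * max(Q[s_next])   # greedy bootstrap from the next state
    Q[s][a] += alpha * (target - Q[s][a])  # temporal-difference step
    return Q[s][a]
```

DQN replaces the table with a neural network, and DDPG extends the idea to continuous actions such as engine torque split.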
Article
Chemistry, Physical
Liange He, Zihan Gu, Yan Zhang, Haodong Jing, Pengpai Li
Summary: This study proposes an Auxiliary Power Unit (APU) control strategy, strategy C, that considers battery SOC, vehicle power, and battery temperature in order to extend the range of a range-extended electric vehicle (REEV) at high temperatures while meeting the cooling requirements of the battery and cabin. Experimental results show that strategy C not only reduces fuel consumption but also increases the cooling rate of the battery and cabin.
SUSTAINABLE ENERGY & FUELS
(2023)
Article
Thermodynamics
Jie Li, Xiaodong Wu, Min Xu, Yonggang Liu
Summary: In this study, a deep reinforcement learning-based eco-driving control strategy is proposed to optimize the fuel economy, driving safety, and travel efficiency of automated hybrid electric vehicles in a connected traffic environment with signalized intersections. The method uses a twin-delayed deep deterministic policy gradient (TD3) agent to plan vehicle speed in real time, transforming the multi-objective optimization function into the value function of the deep reinforcement learning algorithm. The proposed strategy is verified in a real road traffic environment and demonstrates a significant reduction in fuel consumption while obeying traffic signals and safety rules, showing feasibility for real-time application.
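The study above folds fuel economy, safety, and travel efficiency into a single value function. A hypothetical weighted-sum reward of the kind such eco-driving agents maximize (the terms, weights, and 2 s headway threshold are illustrative assumptions, not the authors' formulation):

```python
def eco_driving_reward(fuel_rate, headway_s, speed, target_speed,
                       weights=(1.0, 0.5, 0.2)):
    """Scalarize a multi-objective eco-driving cost into one reward."""
    r_fuel = -fuel_rate                   # fuel economy: penalize consumption
    r_safe = -max(0.0, 2.0 - headway_s)   # safety: penalize headway under 2 s
    r_eff = -abs(speed - target_speed)    # efficiency: track the planned speed
    w_fuel, w_safe, w_eff = weights
    return w_fuel * r_fuel + w_safe * r_safe + w_eff * r_eff
```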
Article
Thermodynamics
Chunyang Qi, Yiwen Zhu, Chuanxue Song, Guangfu Yan, Da Wang, Feng Xiao, Xu Zhang, Jingwei Cao, Shixin Song
Summary: This research introduces a novel hierarchical deep Q-learning algorithm for the energy management strategy of HEVs. The proposed method addresses the issue of sparse rewards during training and achieves optimal power distribution. In addition, the hierarchical structure of the algorithm enhances exploration of the vehicle environment, improving training efficiency and reducing fuel consumption.
Article
Computer Science, Information Systems
Gan Huang, Ping Zhao, Guanglin Zhang
Summary: This article proposes an energy management strategy based on deep reinforcement learning, taking into account battery thermal effects. By formulating the problem as an optimization task, extracting features using GRU, and utilizing double DQN algorithm, significant energy reduction is achieved across different driving cycles.
IEEE INTERNET OF THINGS JOURNAL
(2022)
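The double DQN algorithm used above reduces value overestimation by selecting the next action with the online network while evaluating it with the target network. A minimal sketch of that target computation, with Q-values given as plain lists:

```python
def double_dqn_target(reward, q_online_next, q_target_next,
                      gamma=0.99, done=False):
    """Double-DQN bootstrap target for one transition."""
    if done:
        return reward  # terminal state: no bootstrapping
    # online net selects the greedy action ...
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    # ... target net evaluates it
    return reward + gamma * q_target_next[a_star]
```

In the summarized design, a GRU would first compress the driving-cycle history into the state vector these Q-values are computed from.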
Article
Energy & Fuels
Mohamed Lotfi, Tiago Almeida, Mohammad S. Javadi, Gerardo J. Osorio, Claudio Monteiro, Joao P. S. Catalao
Summary: This study proposes and models the coordination between home energy management systems (HEMSs) and EV parking lot management systems (PLEMS), achieving optimal energy management by partially sharing individual EV schedules without exposing private information. The results show that this coordination framework is both technically and economically beneficial for power grids and EV owners.
Article
Engineering, Electrical & Electronic
Ningkang Yang, Lijin Han, Rui Liu, Zhengchao Wei, Hui Liu, Changle Xiang
Summary: This article proposes a multiobjective energy management strategy based on multiagent reinforcement learning for a hybrid electric vehicle. The strategy takes into consideration fuel economy improvement, battery state of charge maintenance, battery degradation reduction, and constraint on ultracapacitor state of charge. The proposed strategy combines game theory and reinforcement learning to achieve a Nash equilibrium among multiple objectives. Simulation results show that the proposed strategy outperforms single-agent reinforcement learning and dynamic programming in optimizing multiple objectives.
IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION
(2023)
Article
Computer Science, Information Systems
Hongda Guo, Youchun Xu, Yulin Ma, Shucai Xu, Zhixiong Li
Summary: This paper introduces a path-planning method for multiple unmanned ground vehicles (UGVs) that combines gradient descent and deep reinforcement learning to address long computation times and excessive path inflection points. Experimental results demonstrate the superior performance of the proposed method in pursuit tasks.
Article
Chemistry, Analytical
Xianfeng Ye, Zhiyun Deng, Yanjun Shi, Weiming Shen
Summary: This paper presents a multi-agent reinforcement learning algorithm for multiple automated guided vehicles (AGVs) to address scheduling and routing problems with the aim of minimizing energy consumption. The proposed algorithm is based on the multi-agent deep deterministic policy gradient (MADDPG) algorithm with modifications to fit the AGV activities. The paper develops a well-designed reward function and incorporates the e-greedy exploration strategy, resulting in improved energy efficiency and faster convergence.
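The e-greedy exploration strategy incorporated above trades exploration against exploitation with a single probability parameter. A minimal sketch (the discrete-action framing is an assumption for illustration; MADDPG itself acts in continuous spaces):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon take a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

Decaying epsilon over training is what typically yields the faster convergence the summary reports.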
Article
Computer Science, Artificial Intelligence
Jianan Yang, Yimin Zhu, Tong Wu, Lixian Zhang, Yang Shi
Summary: This article focuses on a class of discrete-time hybrid fuzzy systems with semi-Markov switching, where the sojourn time of each mode is constrained. It introduces a practical scenario of transitional asynchrony, where the designed controllers lag behind the plant's switchings, depending on the transition between adjacent modes. The article presents stability criteria based on the semi-Markov kernel approach, deriving existence conditions for stabilizing controllers that can overcome the transitional asynchrony. Compared to previous studies, this approach yields less conservative results, as demonstrated by two illustrative examples involving a class of bicopters.
IEEE TRANSACTIONS ON FUZZY SYSTEMS
(2023)
Article
Automation & Control Systems
Yijia Xie, Xiang Yu, Yang Shi, Lei Guo
Summary: In this article, an antidisturbance control scheme based on singular perturbation theory (SPT) and composite hierarchical antidisturbance control (CHADC) is proposed to enhance antidisturbance capability. The scheme compensates for and attenuates disturbances, significantly improving the control performance of complex systems. Applying a CHADC predictor-corrector to the slow subsystem and a CHADC H-infinity controller to the fast subsystem improves disturbance handling from both guidance and control perspectives.
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
(2023)
Article
Automation & Control Systems
Kunwu Zhang, Yang Shi, Stamatis Karnouskos, Thilo Sauter, Huazhen Fang, Armando Walter Colombo
Summary: Industrial cyber-physical systems (ICPS) have gained increasing attention due to their potential benefits to society, economy, environment, and citizens. This article provides an overview of ICPS, including its architecture, developments, and future research directions.
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
(2023)
Editorial Material
Automation & Control Systems
Yang Shi, Stamatis Karnouskos, Thilo Sauter, Huazhen Fang
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
(2023)
Article
Engineering, Civil
Yifan Zhang, Qian Xu, Jianping Wang, Kui Wu, Zuduo Zheng, Kejie Lu
Summary: In this paper, a new model for discretionary lane change (DLC) decision-making is proposed, which integrates human factors represented by driving styles and considers contextual traffic information and driving styles of surrounding vehicles. The model can imitate human drivers' decision-making maneuvers and achieves a prediction accuracy of 98.66%. The impact of the model on improving traffic safety and speed compared to human drivers is also analyzed.
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
(2023)
Article
Automation & Control Systems
Henglai Wei, Yang Shi
Summary: Autonomous marine vehicles (AMVs) have gained attention for their essential roles in marine applications. Recent advances in communication technologies, perception capability, computational power, and optimization algorithms have stimulated the development of AMVs. Model predictive control (MPC) is effective in handling constraints and optimizing control performance. This paper reviews the progress in motion planning and control for AMVs from the perspective of MPC and highlights future research trends.
IEEE-CAA JOURNAL OF AUTOMATICA SINICA
(2023)
Article
Automation & Control Systems
Xiaodong Shao, Qinglei Hu, Yang Shi, Youmin Zhang
Summary: This paper addresses the fault-tolerant control (FTC) problem of full-state error-constrained attitude tracking for an uncertain spacecraft. Traditional FTC solutions rely on a uniform strong controllability (USC) assumption, which may not hold in the presence of multiplicative actuator faults. A sufficient condition for the USC assumption is presented to evaluate the applicability of existing FTC solutions. A saturated adaptive robust FTC algorithm, based on a unified error transformation method and symmetric barrier Lyapunov functions, is then proposed; it eliminates the USC assumption and ensures that the full-state error constraints are never violated despite uncertainties, disturbances, and actuator faults.
Article
Automation & Control Systems
Dong Yang, Guangdeng Zong, Yang Shi, Peng Shi
Summary: This paper investigates the model reference adaptive tracking control problem of uncertain hybrid switching Markovian systems. The stochastic multiple piecewise Lyapunov function method is used to design a hybrid switching signal and a piecewise dynamic switching adaptive controller. The proposed method improves the adaptive tracking capability by providing ample adjusting time during the stochastic switching stage. A set of piecewise dynamic switching adaptive controllers are designed to ensure that all signals of the tracking error system remain bounded and the tracking error converges to a neighborhood of zero. The effectiveness of the developed adaptive tracking control theory is demonstrated through numerical and application examples of an electro-hydraulic model.
SIAM JOURNAL ON CONTROL AND OPTIMIZATION
(2023)
Article
Automation & Control Systems
Deyin Yao, Hongyi Li, Yang Shi
Summary: In this study, a robust adaptive event-triggered sliding-mode control method is employed to address the adaptive tracking control problem of leader-following nonlinear multi-agent systems (MASs) subject to unknown perturbations and limited network bandwidth. A distributed integral sliding mode is established to achieve finite-time reachability of the system states, and an adaptive triggering control mechanism is proposed to dynamically adjust the triggering interval, reducing wear and resource consumption. The effectiveness of the proposed event-based robust adaptive sliding-mode controller design is validated through three simulation examples.
IEEE TRANSACTIONS ON CYBERNETICS
(2023)
Editorial Material
Engineering, Electrical & Electronic
Hui Zhang, Manjiang Hu, Anh-Tu Nguyen, Yunpeng Wang, Yang Shi
AUTOMOTIVE INNOVATION
(2023)
Article
Automation & Control Systems
Binyan Xu, Afzal Suleman, Yang Shi
Summary: This paper presents a dual-loop hierarchical controller for trajectory tracking of quadrotors with unexpected actuator faults. The controller consists of an outer-loop translation control and an inner-loop rotation control. A fault-tolerant Lyapunov-based model predictive control strategy and a fault-tolerant control law are proposed for translation and rotation control, respectively. The stability of the control system is proven using singular perturbation theory, and sufficient stability conditions are established for tuning control parameters and sampling periods. Numerical simulations demonstrate the effectiveness of the proposed control design in trajectory tracking and fault tolerance.
Article
Automation & Control Systems
Yuzhe Li, Ran Chen, Yang Shi
Summary: To address model uncertainty, extensive stochastic MPC methods have been developed that assume uncertainties follow known statistical distributions. In practice, however, the statistical properties of uncertainties may depend on hyperparameters that vary in time and space. A spatiotemporal learning-based stochastic model predictive control algorithm is therefore proposed, using spatiotemporal Gaussian processes (GPs) to approximate uncertainties from measurement data. The algorithm is applied to compressor control, showing its effectiveness compared with other MPC controllers.
Article
Automation & Control Systems
Changxin Liu, Zirui Zhou, Jian Pei, Yong Zhang, Yang Shi
Summary: Decentralized optimization, particularly decentralized composite convex optimization (DCCO) problems, has found many applications. This article proposes a new decentralized dual averaging (DDA) algorithm that can solve DCCO in stochastic networks. Under a mild condition, the algorithm achieves global linear convergence if each local objective function is strongly convex.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2023)
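In decentralized dual averaging of the kind summarized above, each node mixes dual variables with its neighbors through a doubly stochastic weight matrix, accumulates its local subgradient, and takes a proximal primal step. A toy one-iteration sketch for scalar decision variables with prox function psi(x) = x^2/2 (names and the step-size rule are illustrative assumptions, not the article's algorithm):

```python
def dda_step(z, grads, W, t, R=1.0):
    """One decentralized dual-averaging iteration for n nodes with scalar states."""
    n = len(z)
    # consensus mixing of dual variables plus local subgradient accumulation
    z_new = [sum(W[i][j] * z[j] for j in range(n)) + grads[i] for i in range(n)]
    # proximal step: for psi(x) = x**2 / 2, argmin <z, x> + psi(x)/alpha is -alpha * z
    alpha = R / (t + 1) ** 0.5
    x_new = [-alpha * zi for zi in z_new]
    return z_new, x_new
```

Repeating this step drives all nodes toward a common minimizer of the sum of local objectives; the article's contribution is proving linear convergence under strong convexity in stochastic networks.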
Article
Automation & Control Systems
Tian-Yu Zhang, Dan Ye, Yang Shi
Summary: This article discusses how false-data injection (FDI) attacks compromise state omniscience, particularly in jointly detectable sensor networks. The study focuses on decentralized FDI (DFDI) attacks that destabilize estimation error dynamics while eliminating the effects on sensor node residuals. The article investigates the sufficiency, necessity, and design of DFDI attacks, as well as the secure range for observer interaction weights and sensor protection scheme to ensure state omniscience security. Theoretical results are demonstrated using a linearized discrete-time model of an aircraft system.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2023)
Article
Automation & Control Systems
Zhengen Zhao, Yunsong Xu, Yuzhe Li, Ziyang Zhen, Ying Yang, Yang Shi
Summary: This article studies the issues of data-driven attack detection and identification for cyber-physical systems under sparse sensor attacks. It proposes a data-driven monitor and presents attack detection and identification strategies using the subspace approach. The proposed methods are verified through simulations on a flight vehicle system.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2023)