A deep reinforcement learning approach for rail renewal and maintenance planning
Published 2022
Journal
RELIABILITY ENGINEERING & SYSTEM SAFETY
Volume 225, Article 108615
Publisher
Elsevier BV
Online
2022-05-28
DOI
10.1016/j.ress.2022.108615
Related references
Note: only some of the references are listed.
- Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints
- (2021) C.P. Andriotis et al. RELIABILITY ENGINEERING & SYSTEM SAFETY
- Maintenance optimisation of multicomponent systems using hierarchical coordinated reinforcement learning
- (2021) Yifan Zhou et al. RELIABILITY ENGINEERING & SYSTEM SAFETY
- A taxonomy of railway track maintenance planning and scheduling: A review and research trends
- (2021) Mahdieh Sedghi et al. RELIABILITY ENGINEERING & SYSTEM SAFETY
- Joint optimization of preventive maintenance and production scheduling for multi-state production systems based on reinforcement learning
- (2021) Hongbing Yang et al. RELIABILITY ENGINEERING & SYSTEM SAFETY
- Deep reinforcement learning for long-term pavement maintenance planning
- (2020) Linyi Yao et al. COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING
- Deep reinforcement learning-based sampling method for structural reliability assessment
- (2020) Zhengliang Xiang et al. RELIABILITY ENGINEERING & SYSTEM SAFETY
- Condition-Based Maintenance with Reinforcement Learning for Dry Gas Pipeline Subject to Internal Corrosion
- (2020) Zahra Mahmoodzadeh et al. SENSORS
- Deep reinforcement learning based preventive maintenance policy for serial production lines
- (2020) Jing Huang et al. EXPERT SYSTEMS WITH APPLICATIONS
- A reinforcement learning framework for optimal operation and maintenance of power grids
- (2019) R. Rocchetta et al. APPLIED ENERGY
- Managing engineering systems with large state and action spaces through deep reinforcement learning
- (2019) C.P. Andriotis et al. RELIABILITY ENGINEERING & SYSTEM SAFETY
- Exploring the impact of foot-by-foot track geometry on the occurrence of rail defects
- (2019) Reza Mohammadi et al. TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES
- Dynamic selective maintenance optimization for multi-state systems over a finite horizon: A deep reinforcement learning approach
- (2019) Yu Liu et al. EUROPEAN JOURNAL OF OPERATIONAL RESEARCH
- Can deep learning predict risky retail investors? A case study in financial risk behavior forecasting
- (2019) A. Kim et al. EUROPEAN JOURNAL OF OPERATIONAL RESEARCH
- Optimal policy for structure maintenance: A deep reinforcement learning framework
- (2019) Shiyin Wei et al. STRUCTURAL SAFETY
- Data-driven optimization of railway maintenance for track geometry
- (2018) Siddhartha Sharma et al. TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES
- Recent applications of big data analytics in railway transportation systems: A survey
- (2018) Faeze Ghofrani et al. TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES
- Reinforcement Learning-Based and Parametric Production-Maintenance Control Policies for a Deteriorating Manufacturing System
- (2018) A. S. Xanthopoulos et al. IEEE Access
- Human-level control through deep reinforcement learning
- (2015) Volodymyr Mnih et al. NATURE
- Multi-agent reinforcement learning based maintenance policy for a resource constrained flow line system
- (2014) Xiao Wang et al. JOURNAL OF INTELLIGENT MANUFACTURING
- Joint optimization of control chart and preventive maintenance policies: A discrete-time Markov chain approach
- (2013) Yisha Xiang EUROPEAN JOURNAL OF OPERATIONAL RESEARCH