Article

Analyzing Real Options and Flexibility in Engineering Systems Design Using Decision Rules and Deep Reinforcement Learning

Journal

Journal of Mechanical Design
Volume 144, Issue 2

Publisher

ASME
DOI: 10.1115/1.4052299

Keywords

computer-aided engineering; data-driven design; design automation; design theory and methodology; energy systems design; machine learning; multidisciplinary design and optimization; systems design; uncertainty analysis; uncertainty modeling

Funding

  1. UK Engineering and Physical Sciences Research Council (EPSRC) [EP/R513052/1]

Abstract

Engineering systems are essential for providing services to society, but their performance is affected by uncertainties such as climate change and pandemics. Existing design methods often fail to account for uncertainty and result in rigid systems. This study proposes a novel approach based on deep reinforcement learning to analyze flexibility and improve system adaptability. The results show dynamic solutions parametrized by an artificial neural network, with significantly improved economic value compared to previous solutions.
Engineering systems provide essential services to society, e.g., power generation and transportation. Their performance, however, is directly affected by their ability to cope with uncertainty, especially given the realities of climate change and pandemics. Standard design methods often fail to recognize uncertainty in early conceptual activities, leading to rigid systems that are vulnerable to change. Real options and flexibility in design are important paradigms to improve a system's ability to adapt and respond to unforeseen conditions. Existing approaches to analyze flexibility, however, do not sufficiently leverage recent developments in machine learning that enable deeper exploration of the computational design space. There is untapped potential for new solutions that are not readily accessible using existing methods. Here, a novel approach to analyze flexibility is proposed based on deep reinforcement learning (DRL). It explores available datasets systematically and considers a wider range of adaptability strategies. The methodology is evaluated on an example waste-to-energy (WTE) system. Low- and high-flexibility DRL models are compared against stochastically optimal inflexible and flexible solutions based on decision rules. The results show highly dynamic solutions, with the action space parametrized via an artificial neural network (ANN). These solutions improve expected economic value by up to 69% compared with previous solutions. Combining information from action space probability distributions with expert insights and risk tolerance helps make better decisions in real-world design and system operations. Out-of-sample testing shows that the policies are generalizable, but subject to tradeoffs between flexibility and inherent limitations of the learning process.
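
To make the abstract's core idea concrete (an adaptation policy parametrized by an ANN and trained with DRL, rather than a fixed decision rule), the sketch below trains a small policy network on a hypothetical capacity-expansion problem under demand uncertainty. Everything here is an illustrative assumption: the environment, its cost and demand parameters, and the use of plain REINFORCE are stand-ins, not the paper's WTE model or the authors' DRL algorithm.

```python
# Minimal sketch, NOT the authors' implementation: REINFORCE policy gradient
# on a toy capacity-expansion problem under stochastic demand. All numbers
# (costs, prices, demand process) are hypothetical placeholders.
import numpy as np
import torch
import torch.nn as nn

class ExpansionEnv:
    """Each year, choose how many capacity units (0-2) to add."""
    def __init__(self, horizon=15, seed=0):
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t, self.capacity, self.demand = 0, 1.0, 1.0
        return self._obs()

    def _obs(self):
        return np.array([self.t / self.horizon, self.capacity, self.demand],
                        dtype=np.float32)

    def step(self, action):
        # action in {0, 1, 2}: capacity units added this period.
        self.capacity += action
        self.demand *= self.rng.lognormal(mean=0.05, sigma=0.2)  # uncertain growth
        revenue = 2.0 * min(self.capacity, self.demand)          # sell up to capacity
        cost = 1.5 * action + 0.1 * self.capacity                # expansion + O&M
        self.t += 1
        return self._obs(), revenue - cost, self.t >= self.horizon

# The "decision rule" of classical flexibility analysis is replaced by an
# ANN mapping system state to a distribution over expansion actions.
policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 3))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ExpansionEnv()

for episode in range(2000):
    obs, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:
        dist = torch.distributions.Categorical(logits=policy(torch.from_numpy(obs)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, done = env.step(action.item())
        rewards.append(reward)
    # REINFORCE: weight each log-prob by the return-to-go, normalized as a
    # simple variance-reduction baseline.
    returns = torch.tensor(np.cumsum(rewards[::-1])[::-1].copy(),
                           dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, a softmax over the policy logits yields a state-dependent action probability distribution; in the spirit of the abstract, such distributions would be reviewed alongside expert insight and risk tolerance rather than executed blindly.
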

