4.4 Article

Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

Journal

INTERNATIONAL JOURNAL OF CONTROL
Volume 87, Issue 3, Pages 553-566

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/00207179.2013.848292

Keywords

adaptive control; input constraints; neural networks; optimal control; reinforcement learning

Funding

  1. National Natural Science Foundation of China [61034002, 61233001, 61273140]

In this paper, an adaptive reinforcement-learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the solution of the Hamilton-Jacobi-Bellman (HJB) equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without requiring knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while the closed-loop system is kept stable. Simulation results are presented to demonstrate the effectiveness of the approach.
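For context, the following is a compact sketch of the standard constrained-input formulation that this line of work typically builds on (nonquadratic input penalties of the Abu-Khalaf and Lewis type). The notation (f, g, \lambda, Q, R) follows the common adaptive dynamic programming convention and is an assumption here, not taken from the paper, whose exact equations may differ.

For the input-affine system \dot{x} = f(x) + g(x)u with each input channel bounded by |u_i| \le \lambda, the cost uses a nonquadratic penalty so that the resulting control law is automatically saturated:

V(x(t)) = \int_{t}^{\infty} \left[ Q(x(\tau)) + U(u(\tau)) \right] d\tau, \qquad
U(u) = 2\lambda \int_{0}^{u} \tanh^{-1}(v/\lambda)^{T} R \, dv.

The associated HJB equation and the constrained optimal control are

0 = Q(x) + U(u^{*}(x)) + \nabla V^{*}(x)^{T} \left[ f(x) + g(x)\, u^{*}(x) \right], \qquad
u^{*}(x) = -\lambda \tanh\!\left( \frac{1}{2\lambda} R^{-1} g(x)^{T} \nabla V^{*}(x) \right).

In the framework described in the abstract, the critic NN approximates V^{*}, the action NN approximates u^{*}, and the recurrent identifier NN supplies the estimates of f and g that both tuning laws rely on, which is why explicit knowledge of the drift dynamics is not required.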
