Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 25, Issue 12, Pages 2141-2155
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2014.2305841
Keywords
Adaptive dynamic programming (ADP); Markov jump systems (MJSs); neural network; optimal control; state identifier
Funding
- National Science Foundation [ECCS 1053717]
- Army Research Office [W911NF-12-1-0378]
- NSF-DFG Collaborative Research on Autonomous Learning [CNS 1117314]
- National Natural Science Foundation of China [51228701, 61034005]
- IAPI Fundamental Research Funds [2013ZCX01-07]
Abstract
In this paper, we develop and analyze an optimal control method for a class of discrete-time nonlinear Markov jump systems (MJSs) with unknown system dynamics. Specifically, an identifier is established to approximate the states of the unknown systems, and an optimal control approach for nonlinear MJSs is developed that solves the Hamilton-Jacobi-Bellman equation using the adaptive dynamic programming (ADP) technique. We also provide a detailed stability analysis of the control approach, including the convergence of the performance index function for nonlinear MJSs and the existence of the corresponding admissible control. Neural networks are used to approximate both the performance index function and the control law. To demonstrate the effectiveness of our approach, three simulation studies (a linear system, a nonlinear system, and a single-link robot arm) validate the performance of the proposed optimal control method.
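For intuition, the iterative solution of the Bellman equation that the abstract describes can be sketched in its simplest special case: a scalar two-mode linear MJS with known dynamics, where value iteration reduces to iterating coupled Riccati-type equations. This is an illustrative sketch under assumed numbers, not the paper's neural-network-based algorithm; all dynamics, transition probabilities, and cost weights below are made up for the example.

```python
# Illustrative sketch (not the paper's method): value iteration on the
# coupled Riccati-type equations of a scalar two-mode linear Markov jump
# system  x_{k+1} = a[i]*x_k + b[i]*u_k, with mode i switching according
# to transition probabilities T[i][j] and stage cost q*x^2 + r*u^2.

a = [1.1, 0.8]                  # per-mode dynamics (assumed numbers)
b = [1.0, 0.5]
T = [[0.9, 0.1], [0.3, 0.7]]    # Markov mode-transition probabilities
q, r = 1.0, 1.0                 # stage-cost weights

V = [0.0, 0.0]                  # quadratic value coefficients: V_i(x) = V[i]*x^2
for _ in range(500):            # iterate until the coupled equations settle
    E = [sum(T[i][j] * V[j] for j in range(2)) for i in range(2)]  # E_i[V]
    V = [q + a[i] ** 2 * E[i]
         - (a[i] * b[i] * E[i]) ** 2 / (r + b[i] ** 2 * E[i])
         for i in range(2)]

E = [sum(T[i][j] * V[j] for j in range(2)) for i in range(2)]
K = [a[i] * b[i] * E[i] / (r + b[i] ** 2 * E[i]) for i in range(2)]
print("value coefficients:", V)   # optimal control is u_k = -K[i] * x_k
print("feedback gains:", K)
```

In the nonlinear, unknown-dynamics setting treated in the paper, the closed-form update above is unavailable, which is why the performance index function and control law are instead approximated by neural networks and the states by an identifier.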