Output-feedback H∞ quadratic tracking control of linear systems using reinforcement learning
Published 2017
Title
Output-feedback H∞ quadratic tracking control of linear systems using reinforcement learning
Authors
Keywords
-
Journal
International Journal of Adaptive Control and Signal Processing
Volume -, Issue -, Pages -
Publisher
Wiley
Online
2017-10-16
DOI
10.1002/acs.2830
References
Related references
Note: Only some of the references are listed.
- Optimal model-free output synchronization of heterogeneous systems using off-policy reinforcement learning
- (2016) Hamidreza Modares et al. Automatica
- Optimal Output-Feedback Control of Unknown Continuous-Time Linear Systems Using Off-policy Reinforcement Learning
- (2016) Hamidreza Modares et al. IEEE Transactions on Cybernetics
- Adaptive Suboptimal Output-Feedback Control for Linear Systems Using Integral Reinforcement Learning
- (2015) Lemei M. Zhu et al. IEEE Transactions on Control Systems Technology
- H∞ Tracking Control of Completely Unknown Continuous-Time Systems via Off-Policy Reinforcement Learning
- (2015) Hamidreza Modares et al. IEEE Transactions on Neural Networks and Learning Systems
- Optimal Tracking Control of Unknown Discrete-Time Linear Systems Using Input-Output Measured Data
- (2015) Bahare Kiumarsi et al. IEEE Transactions on Cybernetics
- Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning
- (2014) Hamidreza Modares et al. Automatica
- Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning
- (2014) Hamidreza Modares et al. IEEE Transactions on Automatic Control
- Online Adaptive Policy Learning Algorithm for H∞ State Feedback Control of Unknown Affine Nonlinear Discrete-Time Systems
- (2014) Huaguang Zhang et al. IEEE Transactions on Cybernetics
- Integral Q-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems
- (2012) Jae Young Lee et al. Automatica
- Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics
- (2012) Yu Jiang et al. Automatica
- Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers
- (2012) Frank L. Lewis et al. IEEE Control Systems Magazine
- Simultaneous policy update algorithms for learning the solution of linear continuous-time H∞ state feedback control
- (2012) Huai-Ning Wu et al. Information Sciences
- Computationally efficient simultaneous policy update algorithm for nonlinear H∞ state feedback control with Galerkin's method
- (2012) Biao Luo et al. International Journal of Robust and Nonlinear Control
- Optimal Tracking Control of Motion Systems
- (2011) Anusha Mannava et al. IEEE Transactions on Control Systems Technology
- Online solution of nonlinear two-player zero-sum games using synchronous policy iteration
- (2011) Kyriakos G. Vamvoudakis et al. International Journal of Robust and Nonlinear Control
- An iterative adaptive dynamic programming method for solving a class of nonlinear zero-sum differential games
- (2010) Huaguang Zhang et al. Automatica
- Reinforcement Learning for Partially Observable Dynamic Processes: Adaptive Dynamic Programming Using Measured Output Data
- (2010) F. L. Lewis et al. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
- Adaptive Dynamic Programming: An Introduction
- (2009) Fei-Yue Wang et al. IEEE Computational Intelligence Magazine
- Adaptive optimal control for continuous-time linear systems based on policy iteration
- (2008) D. Vrabie et al. Automatica