Online adaptive Q-learning method for fully cooperative linear quadratic dynamic games
Publication Year: 2019
Title
Online adaptive Q-learning method for fully cooperative linear quadratic dynamic games
Authors
Keywords
-
Publication
Science China-Information Sciences
Volume 62, Issue 12, Pages -
Publisher
Springer Science and Business Media LLC
Publication Date
2019-11-15
DOI
10.1007/s11432-018-9865-9
References
Related references
Note: only a subset of the references is listed; download the original article for the complete reference information.
- Disturbance observer-based optimal longitudinal trajectory control of near space vehicle
- (2019) Rongsheng Xia et al. Science China-Information Sciences
- Output feedback Q-learning for discrete-time linear zero-sum games with application to the H-infinity control
- (2018) Syed Ali Asad Rizvi et al. Automatica
- Off-Policy Q-Learning: Set-Point Design for Optimizing Dual-Rate Rougher Flotation Operational Processes
- (2018) Jinna Li et al. IEEE Transactions on Industrial Electronics
- Data-Driven Optimal Consensus Control for Discrete-Time Multi-Agent Systems With Unknown Dynamics Using Reinforcement Learning Method
- (2017) Huaguang Zhang et al. IEEE Transactions on Industrial Electronics
- Developing nonlinear adaptive optimal regulators through an improved neural learning mechanism
- (2017) Ding Wang et al. Science China-Information Sciences
- Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach
- (2017) Kyriakos G. Vamvoudakis Systems & Control Letters
- Off-Policy Integral Reinforcement Learning Method to Solve Nonlinear Continuous-Time Multiplayer Nonzero-Sum Games
- (2017) Ruizhuo Song et al. IEEE Transactions on Neural Networks and Learning Systems
- Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data
- (2017) Yuanheng Zhu et al. IEEE Transactions on Neural Networks and Learning Systems
- Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis
- (2017) Qinglai Wei et al. IEEE Transactions on Cybernetics
- Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms
- (2017) Huaguang Zhang et al. IEEE Transactions on Cybernetics
- Model-Free Optimal Tracking Control via Critic-Only Q-Learning
- (2016) Biao Luo et al. IEEE Transactions on Neural Networks and Learning Systems
- Experience Replay for Optimal Control of Nonzero-Sum Game Systems With Unknown Dynamics
- (2016) Dongbin Zhao et al. IEEE Transactions on Cybernetics
- Non-zero sum Nash Q-learning for unknown deterministic continuous-time linear systems
- (2015) Kyriakos G. Vamvoudakis Automatica
- A novel policy iteration based deterministic Q-learning for discrete-time nonlinear systems
- (2015) QingLai Wei et al. Science China-Information Sciences
- Off-Policy Reinforcement Learning for H-infinity Control Design
- (2015) Biao Luo et al. IEEE Transactions on Cybernetics
- Online Synchronous Approximate Optimal Learning Algorithm for Multi-Player Non-Zero-Sum Games With Unknown Dynamics
- (2014) Derong Liu et al. IEEE Transactions on Systems, Man, and Cybernetics: Systems
- Differential Games Controllers That Confine a System to a Safe Region in the State Space, With Applications to Surge Tank Control
- (2012) Paola Falugi et al. IEEE Transactions on Automatic Control
- Zero-Sum Two-Player Game Theoretic Formulation of Affine Nonlinear Discrete-Time Systems Using Neural Networks
- (2012) Shahab Mehraeen et al. IEEE Transactions on Cybernetics
- Near-Optimal Control for Nonzero-Sum Differential Games of Continuous-Time Nonlinear Systems Using Single-Network ADP
- (2012) Huaguang Zhang et al. IEEE Transactions on Cybernetics
- Multi-player non-zero-sum games: Online adaptive learning solution of coupled Hamilton–Jacobi equations
- (2011) Kyriakos G. Vamvoudakis et al. AUTOMATICA
- Hybrid MDP based integrated hierarchical Q-learning
- (2011) ChunLin Chen et al. Science China-Information Sciences