Article

Neural-network-based synchronous iteration learning method for multi-player zero-sum games

Journal

NEUROCOMPUTING
Volume 242, Pages 73-82

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2017.02.051

Keywords

Adaptive dynamic programming; Approximate dynamic programming; Adaptive critic designs; Multi-player; Iteration learning; Neural network

Funding

  1. National Natural Science Foundation of China [61304079, 61673054, 61374105]
  2. Fundamental Research Funds for the Central Universities [FRF-TP-15-056A3]
  3. Open Research Project from SKLMCCS [20150104]

Abstract
In this paper, a synchronous solution method for multi-player zero-sum games without knowledge of the system dynamics is established based on neural networks. A policy iteration (PI) algorithm is presented to solve the Hamilton-Jacobi-Bellman (HJB) equation, and the iterative cost function it produces is proven to converge to the optimal game value. To avoid requiring the system dynamics, an off-policy learning method based on PI is given to obtain the iterative cost function, controls, and disturbances. A critic neural network (CNN), action neural networks (ANNs), and disturbance neural networks (DNNs) are used to approximate the cost function, controls, and disturbances, respectively. The weights of these neural networks form a synchronous weight matrix, which is proven to be uniformly ultimately bounded (UUB). Two examples are given to show the effectiveness of the proposed synchronous solution method for multi-player zero-sum games.
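For orientation, a minimal sketch of the standard multi-player zero-sum game setup consistent with this abstract is given below; the dynamics $f$, $g_i$, $k_j$, the weights $Q$, $R_i$, and the attenuation levels $\gamma_j$ are notational assumptions of this sketch and are not stated in the record.

% Affine dynamics with N control players and M disturbance players,
% and the quadratic-type game value (zero-sum: controls minimize, disturbances maximize).
\begin{align}
  \dot{x} &= f(x) + \sum_{i=1}^{N} g_i(x)\,u_i + \sum_{j=1}^{M} k_j(x)\,w_j, \\
  V(x_0)  &= \int_{0}^{\infty} \Big( x^{\top} Q x
             + \sum_{i=1}^{N} u_i^{\top} R_i u_i
             - \sum_{j=1}^{M} \gamma_j^{2}\, w_j^{\top} w_j \Big)\, dt .
\end{align}
% Policy iteration alternates evaluation of V^{(k)} with the stationarity-based updates
\begin{align}
  u_i^{(k+1)}(x) &= -\tfrac{1}{2} R_i^{-1} g_i^{\top}(x)\, \nabla V^{(k)}(x), &
  w_j^{(k+1)}(x) &= \tfrac{1}{2\gamma_j^{2}}\, k_j^{\top}(x)\, \nabla V^{(k)}(x).
\end{align}

In the paper's scheme, $V$, the $u_i$, and the $w_j$ are approximated by the critic, action, and disturbance neural networks, and the off-policy formulation removes the explicit dependence on $f$, $g_i$, and $k_j$.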

