Article

Approximating Ergodic Average Reward Continuous-Time Controlled Markov Chains

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 55, Issue 1, Pages 201-207

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2009.2033848

Keywords

Approximation of control problems; Ergodic Markov decision processes (MDPs); policy iteration algorithm

Abstract

We study the approximation of an ergodic average reward continuous-time denumerable state Markov decision process (MDP) by means of a sequence of MDPs. Our results include the convergence of the corresponding optimal policies and the optimal gains. For a controlled upwardly skip-free process, we show some computational results to illustrate the convergence theorems.
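The abstract refers to policy iteration for ergodic average-reward continuous-time MDPs. As a rough illustration of that class of algorithm (not the paper's own construction), the following sketch runs average-reward policy iteration on a small finite CTMDP after uniformization; the 3-state chain, its rate matrices, and reward rates are made-up toy data.

```python
import numpy as np

# Illustrative sketch only: average-reward policy iteration for a tiny
# finite continuous-time MDP, reduced to discrete time by uniformization.
# The states, rates, and rewards below are invented for demonstration.

# Transition rate matrices Q[a] (rows sum to 0) and reward rates r[a].
Q = {
    0: np.array([[-1.0,  1.0,  0.0],
                 [ 0.5, -1.5,  1.0],
                 [ 0.0,  1.0, -1.0]]),
    1: np.array([[-2.0,  2.0,  0.0],
                 [ 1.0, -2.0,  1.0],
                 [ 0.0,  2.0, -2.0]]),
}
r = {0: np.array([1.0, 0.5, 0.0]),
     1: np.array([0.8, 0.9, 0.2])}

n, actions = 3, [0, 1]
# Uniformization rate: at least the largest exit rate.
Lam = max(-Q[a][i, i] for a in actions for i in range(n))
# Uniformized transition matrices P[a] = I + Q[a]/Lam; they share the
# stationary distribution of the continuous-time chain, so the
# discrete-time gain equals the continuous-time average reward.
P = {a: np.eye(n) + Q[a] / Lam for a in actions}

def evaluate(policy):
    """Solve g + h = r_pi + P_pi h with h[0] = 0 for gain g, bias h."""
    Ppi = np.array([P[policy[i]][i] for i in range(n)])
    rpi = np.array([r[policy[i]][i] for i in range(n)])
    A = np.zeros((n, n))
    A[:, 0] = 1.0                              # column for the gain g
    A[:, 1:] = np.eye(n)[:, 1:] - Ppi[:, 1:]   # columns for h[1], h[2]
    sol = np.linalg.solve(A, rpi)
    return sol[0], np.concatenate(([0.0], sol[1:]))

policy = np.zeros(n, dtype=int)
for _ in range(50):
    g, h = evaluate(policy)
    # Improvement step: maximize r[a] + P[a] h state by state.
    new = np.array([max(actions, key=lambda a, i=i: r[a][i] + P[a][i] @ h)
                    for i in range(n)])
    if np.array_equal(new, policy):
        break
    policy = new

print("optimal gain:", round(g, 4), "policy:", policy.tolist())
```

The loop stops once the improvement step leaves the policy unchanged, at which point the gain `g` is optimal for this toy chain; convergence of such gains along a sequence of approximating MDPs is what the paper's theorems address.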

