Article

Markov processes follow from the principle of maximum caliber

Journal

JOURNAL OF CHEMICAL PHYSICS
Volume 136, Issue 6, Pages -

Publisher

AMER INST PHYSICS
DOI: 10.1063/1.3681941

Keywords

-

Funding

  1. National Institutes of Health (NIH) [R01GM 34993, 1R01GM090205]
  2. NSF of China (NSFC) [10901040]
  3. Specialized Research Fund for the Doctoral Program of Higher Education (New Teachers) [20090071120003]

Abstract

Markov models are widely used to describe stochastic dynamics. Here, we show that Markov models follow directly from the dynamical principle of maximum caliber (Max Cal). Max Cal is a method of deriving dynamical models based on maximizing the path entropy subject to dynamical constraints. We give three different cases. First, we show that if constraints (or data) are given in the form of singlet statistics (average occupation probabilities), then maximizing the caliber predicts a time-independent process that is modeled by independent, identically distributed random variables. Second, we show that if constraints are given in the form of sequential pairwise statistics, then maximizing the caliber dictates that the kinetic process will be Markovian with a uniform initial distribution. Third, if the initial distribution is known and is not uniform, we show that the only process that maximizes the path entropy is still the Markov process. We give an example of how Max Cal can be used to discriminate between different dynamical models given data. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.3681941]
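The second case above can be illustrated numerically. Among all distributions over paths that share the same sequential pairwise statistics, the Markov chain has the largest path entropy. The sketch below (not from the paper; the two-state transition matrix, the three-step path length, and the perturbation size `eps` are illustrative assumptions) builds a Markov path distribution, perturbs it in a way that leaves every sequential pairwise marginal unchanged, and checks that the perturbed, non-Markov process has strictly lower path entropy:

```python
import itertools
import math

# Illustrative assumptions: two states {0, 1}, paths of 3 time steps,
# uniform initial distribution, and an arbitrary transition matrix T.
T = [[0.7, 0.3],
     [0.4, 0.6]]
p0 = [0.5, 0.5]

paths = list(itertools.product([0, 1], repeat=3))

# Markov path probability: p(x0) T(x0,x1) T(x1,x2).
p_markov = {(x0, x1, x2): p0[x0] * T[x0][x1] * T[x1][x2]
            for (x0, x1, x2) in paths}

# A perturbation that preserves every sequential pairwise marginal
# p(x0,x1) and p(x1,x2): delta = eps * (-1)**(x0 + x2), applied only
# when x1 == 0, so summing over x0 (or over x2) cancels the sign.
eps = 0.01
p_pert = {(x0, x1, x2): q + (eps * (-1) ** (x0 + x2) if x1 == 0 else 0.0)
          for (x0, x1, x2), q in p_markov.items()}

def path_entropy(p):
    """Shannon entropy over whole paths (the 'caliber' being maximized)."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def pair_marginal(p, i, j):
    """Joint distribution of the states at time steps i and j."""
    m = {}
    for path, q in p.items():
        m[(path[i], path[j])] = m.get((path[i], path[j]), 0.0) + q
    return m

# Both processes share the same sequential pairwise statistics...
for i, j in [(0, 1), (1, 2)]:
    a, b = pair_marginal(p_markov, i, j), pair_marginal(p_pert, i, j)
    assert all(abs(a[k] - b[k]) < 1e-12 for k in a)

# ...but the Markov chain attains the larger path entropy.
print(path_entropy(p_markov) > path_entropy(p_pert))  # True
```

Because path entropy is strictly concave, any non-Markov distribution satisfying the same pairwise constraints necessarily sits below the Markov maximum, which is what the comparison above exhibits for one perturbation direction.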

