Article

Learning Sparse Causal Gaussian Networks With Experimental Intervention: Regularization and Coordinate Descent

Journal

JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
Volume 108, Issue 501, Pages 288-300

Publisher

AMER STATISTICAL ASSOC
DOI: 10.1080/01621459.2012.754359

Keywords

Adaptive lasso; Experimental data; L-1 regularization; Penalized likelihood; Structure learning

Funding

  1. National Science Foundation [DMS-1055286]

Abstract

Causal networks are graphically represented by directed acyclic graphs (DAGs). Learning causal networks from data is a challenging problem due to the size of the space of DAGs, the acyclicity constraint placed on the graphical structures, and the presence of equivalence classes. In this article, we develop an L-1-penalized likelihood approach to estimate the structure of causal Gaussian networks. A blockwise coordinate descent algorithm, which takes advantage of the acyclicity constraint, is proposed for seeking a local maximizer of the penalized likelihood. We establish that model selection consistency for causal Gaussian networks can be achieved with the adaptive lasso penalty and sufficient experimental interventions. Simulation and real data examples are used to demonstrate the effectiveness of our method. In particular, our method shows satisfactory performance for DAGs with 200 nodes, which have about 20,000 free parameters. Supplementary materials for this article are available online.
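The penalized-likelihood approach described in the abstract reduces, for Gaussian networks, to sparse regressions of each node on its candidate parents. The sketch below is an illustrative simplification, not the authors' algorithm: it assumes a known topological ordering (the paper's blockwise coordinate descent instead searches over orderings under the acyclicity constraint) and solves each node's lasso regression by coordinate descent with soft-thresholding. All function names and parameters here are hypothetical.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator, the closed-form solution of a 1-D lasso step."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove all effects except coordinate j.
            r = y - X @ b + X[:, j] * b[j]
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam) / col_sq[j]
    return b

def sparse_dag_given_order(data, lam):
    """Estimate sparse parent sets, assuming columns of `data` are already
    in a topological order (a simplifying assumption, not part of the paper's
    method, which learns the ordering)."""
    n, p = data.shape
    B = np.zeros((p, p))  # B[k, j] = weight of edge k -> j
    for j in range(1, p):
        B[:j, j] = lasso_cd(data[:, :j], data[:, j], lam)
    return B
```

With an adaptive lasso penalty, `lam` would be replaced by coefficient-specific weights inversely proportional to initial estimates, which is what yields the model selection consistency result stated above.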
