Article

Lasso adjustments of treatment effect estimates in randomized experiments

Publisher

NATL ACAD SCIENCES
DOI: 10.1073/pnas.1510506113

Keywords

randomized experiment; Neyman-Rubin model; average treatment effect; high-dimensional statistics; Lasso

Funding

  1. NSF [DMS-11-06753, DMS-12-09014, DMS-1107000, DMS-1129626, DMS-1209014]
  2. Computational and Data-Enabled Science and Engineering in Mathematical and Statistical Sciences (Focused Research Group) [1228246, DMS-1160319]
  3. AFOSR [FA9550-14-1-0016]
  4. NSA [H98230-15-1-0040]
  5. Center for Science of Information, a US NSF Science and Technology Center [CCF-0939370]
  6. Department of Defense, Office of Naval Research [N00014-15-1-2367]
  7. National Defense Science and Engineering Graduate Fellowship Program
  8. Directorate for Mathematical & Physical Sciences, Division of Mathematical Sciences [1228246, 1209014, 1513378] Funding Source: National Science Foundation

Abstract

We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman-Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS.
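
As a concrete illustration of the kind of adjustment described above, the sketch below fits Lasso regressions of the outcome on centered covariates separately in the treatment and control arms and corrects each arm mean for the part explained by its covariate imbalance. This is only a minimal sketch under stated assumptions, not the authors' implementation: the function names are hypothetical, and the Lasso penalty is chosen by ordinary cross-validation rather than by the combined Lasso/OLS criterion mentioned in the abstract.

    # Minimal sketch of a Lasso covariate adjustment for a randomized experiment.
    # Assumptions (not from the paper): function names are hypothetical, and the
    # penalty is tuned by plain cross-validation.
    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression

    def lasso_adjusted_ate(y, t, X, seed=0):
        """Return (difference of means, Lasso-adjusted estimate) for outcomes y,
        binary treatment indicator t, and covariate matrix X."""
        Xc = X - X.mean(axis=0)             # center covariates at full-sample means
        y1, y0 = y[t == 1], y[t == 0]
        X1, X0 = Xc[t == 1], Xc[t == 0]

        diff_means = y1.mean() - y0.mean()  # unadjusted difference-of-means estimator

        # Separate Lasso fits in the treatment and control arms
        b1 = LassoCV(cv=5, random_state=seed).fit(X1, y1).coef_
        b0 = LassoCV(cv=5, random_state=seed).fit(X0, y0).coef_

        # Correct each arm mean for the part explained by its covariate imbalance
        adj1 = y1.mean() - X1.mean(axis=0) @ b1
        adj0 = y0.mean() - X0.mean(axis=0) @ b0
        return diff_means, adj1 - adj0

    def lasso_plus_ols_coefs(X, y, seed=0):
        """Lasso for variable selection, then an OLS refit on the selected columns
        (a rough stand-in for the 'Lasso + OLS' variant discussed above)."""
        selected = np.flatnonzero(LassoCV(cv=5, random_state=seed).fit(X, y).coef_)
        beta = np.zeros(X.shape[1])
        if selected.size:
            beta[selected] = LinearRegression().fit(X[:, selected], y).coef_
        return beta

Centering the covariates at the full-sample means makes each arm's correction depend only on that arm's observed covariate imbalance, which is the mechanism behind the variance reduction discussed above; the second function mimics the Lasso-for-selection, OLS-for-estimation variant.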

