Journal
ANNALS OF STATISTICS
Volume 39, Issue 1, Pages 82-130
Publisher
INST MATHEMATICAL STATISTICS-IMS
DOI: 10.1214/10-AOS827
Keywords
Median regression; quantile regression; sparse models
Funding
- NSF [SES-0752266]
- Directorate for Social, Behavioral & Economic Sciences
- Division of Social and Economic Sciences [0752823] Funding Source: National Science Foundation
Abstract
We consider median regression and, more generally, a possibly infinite collection of quantile regressions in high-dimensional sparse models. In these models, the number of regressors p is very large, possibly larger than the sample size n, but only at most s regressors have a nonzero impact on each conditional quantile of the response variable, where s grows more slowly than n. Since ordinary quantile regression is not consistent in this case, we consider ℓ1-penalized quantile regression (ℓ1-QR), which penalizes the ℓ1-norm of regression coefficients, as well as the post-penalized QR estimator (post-ℓ1-QR), which applies ordinary QR to the model selected by ℓ1-QR. First, we show that under general conditions ℓ1-QR is consistent at the near-oracle rate √(s/n) √log(p ∨ n), uniformly in the compact set U ⊂ (0, 1) of quantile indices. In deriving this result, we propose a partly pivotal, data-driven choice of the penalty level and show that it satisfies the requirements for achieving this rate. Second, we show that under similar conditions post-ℓ1-QR is consistent at the near-oracle rate √(s/n) √log(p ∨ n), uniformly over U, even if the ℓ1-QR-selected models miss some components of the true models, and the rate could be even closer to the oracle rate otherwise. Third, we characterize conditions under which ℓ1-QR contains the true model as a submodel, and derive bounds on the dimension of the selected model, uniformly over U; we also provide conditions under which hard-thresholding selects the minimal true model, uniformly over U.