Article

ADMM for Penalized Quantile Regression in Big Data

Journal

INTERNATIONAL STATISTICAL REVIEW
Volume 85, Issue 3, Pages 494-518

Publisher

WILEY
DOI: 10.1111/insr.12221

Keywords

Penalized quantile regression; ADMM; large-scale; divide-and-conquer; Hadoop; MapReduce

Abstract

Traditional linear programming algorithms for quantile regression, such as the simplex method and the interior point method, work well for data of small to moderate size. However, these methods are difficult to generalize to high-dimensional big data, for which penalization is usually necessary. Further, the massive size of contemporary big data calls for the development of large-scale algorithms on distributed computing platforms. The traditional linear programming algorithms are intrinsically sequential and not suited to such frameworks. In this paper, we discuss how the popular alternating direction method of multipliers (ADMM) algorithm can be used to solve large-scale penalized quantile regression problems. The ADMM algorithm is easily parallelized and implemented in modern distributed frameworks. Simulation results demonstrate that ADMM is as accurate as traditional linear programming algorithms while being faster, even in the nonparallel case.
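
To make the splitting concrete, the following is a minimal NumPy sketch of one common ADMM formulation for lasso-penalized quantile regression, minimizing the check loss plus an l1 penalty via the auxiliary variables r = y - X*beta (carrying the check loss) and z = beta (carrying the penalty). The function name admm_penalized_qr, the fixed penalty parameter rho, the fixed iteration count, and this particular splitting are illustrative assumptions for exposition, not necessarily the exact scheme developed in the paper.

import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft-thresholding, the prox of kappa * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def prox_check(v, tau, alpha):
    # Prox of the quantile check loss rho_tau with step alpha, elementwise:
    # argmin_r rho_tau(r) + (1 / (2 * alpha)) * (r - v)^2.
    return np.maximum(v - tau * alpha, 0.0) + np.minimum(v + (1.0 - tau) * alpha, 0.0)

def admm_penalized_qr(X, y, tau=0.5, lam=0.1, rho=1.0, n_iter=500):
    # Illustrative sketch: lasso-penalized quantile regression via ADMM,
    # min_beta sum_i rho_tau(y_i - x_i' beta) + lam * ||beta||_1,
    # with the splitting X beta + r = y and beta = z (scaled dual form).
    n, p = X.shape
    beta = np.zeros(p)
    z = np.zeros(p)          # copy of beta carrying the l1 penalty
    r = y.copy()             # residual block carrying the check loss
    u = np.zeros(n)          # scaled dual variable for X beta + r = y
    w = np.zeros(p)          # scaled dual variable for beta = z

    # The beta-update is a ridge-type linear system; factor (X'X + I) once.
    chol = np.linalg.cholesky(X.T @ X + np.eye(p))

    for _ in range(n_iter):
        rhs = X.T @ (y - r - u) + (z - w)
        beta = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))
        r = prox_check(y - X @ beta - u, tau, 1.0 / rho)
        z = soft_threshold(beta + w, lam / rho)
        u += X @ beta + r - y        # dual ascent steps
        w += beta - z
    return z                         # sparse copy of the coefficient vector

Each iteration reduces to a linear solve for beta, an elementwise proximal step for the check loss, and soft-thresholding for the l1 penalty. In a distributed setting, the data-dependent quantities (X'X and X'(y - r - u)) can be computed blockwise on worker nodes and aggregated, which is what makes this style of algorithm amenable to MapReduce-type platforms, as the abstract indicates.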
