Article

Towards General and Efficient Online Tuning for Spark

Journal

PROCEEDINGS OF THE VLDB ENDOWMENT
Volume 16, Issue 12, Pages 3570-3583

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.14778/3611540.3611548


This paper presents a general and efficient Spark tuning framework that addresses three issues in prior work: limited functionality, high overhead, and inefficient search. The framework introduces a generalized tuning formulation and a Bayesian optimization solution that support multiple tuning goals and constraints. It tunes parameters through online evaluations of actual job executions while ensuring safety during those executions. In addition, three techniques accelerate the search process: adaptive sub-space generation, approximate gradient descent, and meta-learning.
The distributed data analytics system Spark is a common choice for processing massive volumes of heterogeneous data, but tuning its parameters to achieve high performance is challenging. Recent studies employ auto-tuning techniques to solve this problem but suffer from three issues: limited functionality, high overhead, and inefficient search. In this paper, we present a general and efficient Spark tuning framework that addresses all three issues simultaneously. First, we introduce a generalized tuning formulation, which conveniently supports multiple tuning goals and constraints, along with a Bayesian optimization (BO) based solution to this generalized optimization problem. Second, to avoid the high overhead of the additional offline evaluations used by existing methods, we propose tuning parameters along with the actual periodic executions of each job (i.e., online evaluations). To ensure safety during online job executions, we design a safe configuration acquisition method that models the safe region. Finally, three innovative techniques further accelerate the search process: adaptive sub-space generation, approximate gradient descent, and a meta-learning method. We have implemented this framework as an independent cloud service and applied it to the data platform at Tencent. Empirical results on both public benchmarks and large-scale production tasks demonstrate its superiority in terms of practicality, generality, and efficiency. Notably, on 25K in-production tasks, this service saves an average of 57.00% in memory cost and 34.93% in CPU cost within 20 iterations.
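The abstract's core loop, i.e., online evaluation of candidate configurations under a runtime constraint with a safe-region filter, can be sketched as follows. This is a minimal illustration only: the parameter names, the toy cost/runtime model in `simulated_job`, and the distance-based `is_safe` check are assumptions for the sketch, not the paper's actual Bayesian optimization or safe-acquisition implementation.

```python
import random

# Hypothetical Spark parameter search space (bounds are illustrative).
SPACE = {
    "spark.executor.memory_gb": (1, 16),
    "spark.executor.cores": (1, 8),
}

def simulated_job(cfg):
    """Stand-in for one online job execution: returns (cost, runtime_s).
    A real deployment would measure both from the actual periodic run."""
    mem = cfg["spark.executor.memory_gb"]
    cores = cfg["spark.executor.cores"]
    runtime = 600.0 / (cores * mem ** 0.5)   # toy runtime model
    cost = mem * 0.05 + cores * 0.10         # toy resource-cost model
    return cost, runtime

def is_safe(cfg, safe_set, radius=4.0):
    """Crude safe-region check: accept only configurations within a
    Euclidean radius of some configuration already known to be safe."""
    return any(
        sum((cfg[k] - s[k]) ** 2 for k in cfg) ** 0.5 <= radius
        for s in safe_set
    )

def tune(default_cfg, runtime_limit, iters=20, seed=0):
    """Minimize resource cost subject to a runtime constraint, evaluating
    one candidate per (online) job execution."""
    rng = random.Random(seed)
    safe_set = [default_cfg]  # the default configuration is assumed safe
    best_cfg, best_cost = default_cfg, simulated_job(default_cfg)[0]
    for _ in range(iters):
        # Sample candidates; keep the first one inside the safe region.
        cand = None
        for _ in range(100):
            c = {k: rng.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}
            if is_safe(c, safe_set):
                cand = c
                break
        if cand is None:
            continue
        cost, runtime = simulated_job(cand)  # one online evaluation
        if runtime <= runtime_limit:         # constraint satisfied
            safe_set.append(cand)            # safe region grows
            if cost < best_cost:
                best_cfg, best_cost = cand, cost
    return best_cfg, best_cost

default = {"spark.executor.memory_gb": 8.0, "spark.executor.cores": 4.0}
best, cost = tune(default, runtime_limit=120.0)
```

The framework described in the paper replaces the random sampling above with a BO surrogate and a modeled safe region, and adds sub-space generation, approximate gradients, and meta-learned priors to cut the number of online evaluations needed.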
