Journal
KNOWLEDGE-BASED SYSTEMS
Volume 205, Issue -, Pages -
Publisher
ELSEVIER
DOI: 10.1016/j.knosys.2020.106247
Keywords
Bayesian optimization; Gaussian process; Hyperparameter tuning
Funding
- Australian Government through the Australian Research Council (ARC)
- Telstra-Deakin Centre of Excellence in Big Data and Machine Learning, Australia
- ARC Australian Laureate Fellowship [FL170100006]
In this paper we develop a Bayesian optimization-based hyperparameter tuning framework inspired by statistical learning theory for classifiers. We utilize two key facts from PAC learning theory: the generalization bound is higher for a small subset of the data than for the whole, and the highest accuracy on a small subset can be achieved with a simple model. We initially tune the hyperparameters on a small subset of the training data using Bayesian optimization. When subsequently tuning the hyperparameters on the whole training data, we leverage these insights from learning theory to seek more complex models. We realize this by strategically placing directional derivative signs in the hyperparameter search space to seek a model more complex than the one obtained on the small data. We demonstrate the performance of our method on the task of tuning the hyperparameters of several machine learning algorithms. (C) 2020 Elsevier B.V. All rights reserved.
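To illustrate the core machinery the abstract refers to, the following is a minimal sketch of Bayesian optimization for a single hyperparameter: a basic Gaussian process surrogate (RBF kernel) with an upper-confidence-bound acquisition maximized over a 1-D grid. This is not the authors' code; all function names, the toy "validation accuracy" objective, and the kernel/acquisition settings are illustrative assumptions, and the paper's subset-based warm start and directional-derivative-sign mechanism are not reproduced here.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard GP regression posterior mean and variance on the query grid.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    # Prior variance of the RBF kernel at a point is 1.0.
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(objective, bounds=(0.0, 1.0), n_init=3, n_iter=10, beta=2.0, seed=0):
    # Generic BO loop: fit GP to observations, pick the grid point
    # maximizing the upper confidence bound, evaluate, repeat.
    rng = np.random.default_rng(seed)
    grid = np.linspace(*bounds, 200)
    x = rng.uniform(*bounds, size=n_init)
    y = np.array([objective(v) for v in x])
    for _ in range(n_iter):
        mu, var = gp_posterior(x, y, grid)
        ucb = mu + beta * np.sqrt(var)
        x_next = grid[np.argmax(ucb)]
        x = np.append(x, x_next)
        y = np.append(y, objective(x_next))
    best = np.argmax(y)
    return x[best], y[best]

# Toy stand-in for "validation accuracy as a function of one hyperparameter",
# peaking at hyperparameter value 0.7 (illustrative only).
accuracy = lambda h: np.exp(-20.0 * (h - 0.7) ** 2)
best_h, best_acc = bayes_opt(accuracy)
```

In the paper's setting, `objective` would be the cross-validated accuracy of a classifier trained on the small subset first, with the subsequent whole-data search biased (via derivative-sign information) toward more complex models.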