Article

Return of the features: Efficient feature selection and interpretation for photometric redshifts

Journal

ASTRONOMY & ASTROPHYSICS
Volume 616

Publisher

EDP SCIENCES S A
DOI: 10.1051/0004-6361/201833103

Keywords

methods: data analysis; methods: statistical; galaxies: distances and redshifts; quasars: general

Funding

  1. Klaus Tschira Foundation
  2. ASI/INAF [2017-14-H.0]
  3. National Science Foundation [ACI-1440620]
  4. National Aeronautics and Space Administration's Earth Science Technology Office
  5. NASA [NCC5-626]
  6. California Institute of Technology [NCC5-626]
  7. Alfred P. Sloan Foundation
  8. National Science Foundation
  9. U.S. Department of Energy
  10. National Aeronautics and Space Administration
  11. Japanese Monbukagakusho
  12. Max Planck Society
  13. Higher Education Funding Council for England

Abstract

Context. The explosion of data in recent years has generated an increasing need for new analysis techniques to extract knowledge from massive datasets. Machine learning has proved particularly useful for this task. Fully automated methods (e.g. deep neural networks) have recently gained great popularity, even though they often lack physical interpretability. In contrast, feature-based approaches can provide both well-performing models and understandable causal links between features and physical processes.

Aims. Efficient feature selection is an essential tool for boosting the performance of machine learning models. In this work, we propose a forward selection method to compute, evaluate, and characterize better-performing features for regression and classification problems. Given the importance of photometric redshift estimation, we adopt it as our case study.

Methods. We synthetically created 4520 features by combining magnitudes, errors, radii, and ellipticities of quasars taken from the Sloan Digital Sky Survey (SDSS). We applied a forward selection process, a recursive method in which a huge number of feature sets is tested with a k-Nearest-Neighbours algorithm, leading to a tree of feature sets. The branches of this feature tree were then used to run experiments with a random forest, in order to validate the best set with an alternative model.

Results. We demonstrate that the feature sets determined with our approach significantly improve the performance of the regression models compared with the classic features from the literature. The selected features are unexpected and very different from the classic ones; we therefore present a method to interpret some of them in a physical context.

Conclusions. The feature selection methodology described here is very general and can be used to improve the performance of machine learning models for any regression or classification task.
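The core idea of the Methods section — greedily growing a feature set by repeatedly adding whichever candidate feature most improves a k-Nearest-Neighbours model — can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the brute-force kNN regressor, the single-branch (rather than tree-structured) search, the RMSE criterion, and all variable names are assumptions for the sake of a self-contained example.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    # Brute-force k-Nearest-Neighbours regression: predict each test point
    # as the mean target of its k closest training points (Euclidean distance).
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def forward_select(X_train, y_train, X_val, y_val, n_features, k=3):
    # Greedy forward selection: at each step, add the single remaining
    # feature that most reduces the validation RMSE of the kNN model.
    # Returns the path of (selected feature indices, RMSE) pairs.
    selected, path = [], []
    remaining = list(range(X_train.shape[1]))
    while len(selected) < n_features and remaining:
        scores = []
        for f in remaining:
            cols = selected + [f]
            pred = knn_predict(X_train[:, cols], y_train, X_val[:, cols], k)
            scores.append((np.sqrt(np.mean((pred - y_val) ** 2)), f))
        rmse, f_best = min(scores)
        selected.append(f_best)
        remaining.remove(f_best)
        path.append((tuple(selected), rmse))
    return path

# Toy demonstration: feature 0 carries all the signal, the rest are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = 2.0 * X[:, 0]
path = forward_select(X[:80], y[:80], X[80:], y[80:], n_features=2)
print(path)  # feature 0 should be selected first
```

In the paper this greedy step is expanded into a tree of feature sets (several branches are kept at each level), and the best branches are re-evaluated with a random forest as an independent check; the sketch above keeps only a single greedy branch for brevity.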

