A translucent box: interpretable machine learning in ecology

Journal

ECOLOGICAL MONOGRAPHS
Volume 90, Issue 4

Publisher

WILEY
DOI: 10.1002/ecm.1422

Keywords

interpretable machine learning; machine learning; model interpretation; phylogenetic regression; random effects; Random Forest

Funding

  1. Bill and Melinda Gates Foundation

Machine learning has become popular in ecology, but its use has remained restricted to predicting, rather than understanding, the natural world. Many researchers consider machine learning algorithms to be a black box. With careful examination, however, these models can be used to inform our understanding of the world: they are translucent boxes. Furthermore, interpreting these models can be an important step in building confidence in a model or in a specific prediction from it. Here I review a number of techniques for interpreting machine learning models at the level of the system, the variable, and the individual prediction, as well as methods for handling non-independent data. I also discuss the limits of interpretability for different methods and demonstrate these approaches using a case study of understanding litter sizes in mammals.
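The variable-level interpretation techniques the abstract refers to can be illustrated with a short sketch. This is not the paper's code: it uses scikit-learn's permutation importance and partial dependence on synthetic data standing in for an ecological trait data set, with a Random Forest as in the paper's keywords; the predictor names and effect sizes are invented for illustration.

```python
# Sketch: interpreting a random forest at the system and variable levels.
# Synthetic data: two hypothetical predictors (e.g. body mass, gestation
# length) drive the response; a third is pure noise.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# System level: overall predictive skill of the fitted model.
r2 = model.score(X, y)

# Variable level: drop in score when each predictor is shuffled
# (permutation importance); noise variables should score near zero.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Variable level: partial dependence traces how the average prediction
# changes across the range of predictor 0.
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)

print(f"R^2: {r2:.3f}")
print("Permutation importances:", np.round(imp.importances_mean, 3))
```

Individual predictions can be interpreted along the same lines with local methods such as LIME or Shapley values, which the review covers separately.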
