Article

Decision Tree Integration Using Dynamic Regions of Competence

Journal

ENTROPY
Volume 22, Issue 10

Publisher

MDPI
DOI: 10.3390/e22101129

Keywords

decision tree; random forest; majority voting; classifier ensemble; classifier integration

Funding

  1. National Science Centre, Poland [2017/25/B/ST6/01750]


A vital aspect of the Multiple Classifier Systems construction process is the integration of the base models. For example, the Random Forest approach uses the majority voting rule to fuse the base classifiers obtained by bagging the training dataset. In this paper, we propose an algorithm that partitions the feature space, with the splits determined by the decision rules in the nodes of each decision tree serving as a base classification model. After dividing the feature space, the centroid of each resulting subspace is computed. These centroids are then used to determine the weights needed in the integration phase, which is based on the weighted majority voting rule. The proposal was compared with other Multiple Classifier System approaches. Experiments on multiple open-source benchmark datasets demonstrate the effectiveness of our method. To discuss the experimental results, we use micro- and macro-averaged classification performance measures.
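The weighting idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: here each base model is a hypothetical one-split stump (`TinyStump`) standing in for a decision tree, its "region of competence" is the set of training points falling in the same leaf, and each tree's vote is weighted by inverse Euclidean distance from the query point to the centroid of that region. All names and the `1 / (1 + dist)` weight are illustrative assumptions.

```python
import numpy as np

class TinyStump:
    """A one-split decision stump standing in for a base decision tree.

    Each leaf defines a region of the feature space; fit() records the
    majority class and the centroid of the training points in each region.
    """
    def __init__(self, feature, threshold):
        self.feature, self.threshold = feature, threshold
        self.leaf_label = {}     # leaf id -> majority class in that region
        self.leaf_centroid = {}  # leaf id -> centroid of that region

    def leaf_id(self, x):
        # 0 if the sample falls on the low side of the split, else 1
        return int(x[self.feature] > self.threshold)

    def fit(self, X, y):
        for leaf in (0, 1):
            mask = np.array([self.leaf_id(x) == leaf for x in X])
            if mask.any():
                self.leaf_label[leaf] = np.bincount(y[mask]).argmax()
                self.leaf_centroid[leaf] = X[mask].mean(axis=0)
        return self

def weighted_vote(stumps, x, n_classes):
    """Weighted majority voting: closer region centroid -> larger weight."""
    scores = np.zeros(n_classes)
    for s in stumps:
        leaf = s.leaf_id(x)
        if leaf not in s.leaf_label:
            continue  # empty region seen during training
        dist = np.linalg.norm(x - s.leaf_centroid[leaf])
        scores[s.leaf_label[leaf]] += 1.0 / (1.0 + dist)
    return int(scores.argmax())

# Toy data: two well-separated clusters, one stump per feature axis.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y = np.array([0, 0, 0, 1, 1, 1])
stumps = [TinyStump(0, 2.5).fit(X, y), TinyStump(1, 2.5).fit(X, y)]
print(weighted_vote(stumps, np.array([0.5, 0.5]), 2))  # → 0
```

Unlike plain majority voting, a tree whose region centroid lies far from the query point contributes less, which is the intuition behind dynamic, competence-based weights.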
