Article

Explaining Support Vector Machines: A Color Based Nomogram

Journal

PLOS ONE
Volume 11, Issue 10, Pages -

Publisher

PUBLIC LIBRARY SCIENCE
DOI: 10.1371/journal.pone.0164568

Keywords

-

Funding

  1. postdoctoral fellow of the Research foundation Flanders (FWO)
  2. Center of Excellence (CoE): (OPTEC) [PFV/10/002]
  3. iMinds Medical Information Technologies
  4. Belgian Federal Science Policy Office: (DYSCO, 'Dynamical systems, control and optimization') [IUAP P7/19/]
  5. European Research Council: ERC Advanced Grant, BIOTENSORS [339804]
  6. ERC AdG A-DATADRIVE-B [FWO G.0377.12, G.088114N]
  7. European Research Council (ERC) [339804]

Abstract

Problem setting
Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods are used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models.

Objective
In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables.

Results
Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable.

Conclusions
This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package, and two apps and a movie are provided to illustrate the possibilities offered by the method.
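The notion of explainability used in the abstract, a decision formed as a sum of contributions that each depend on a single input variable, holds exactly for a linear-kernel SVM, whose decision value f(x) = w·x + b splits into one term per variable. The sketch below illustrates this decomposition in plain Python; the weights, bias and sample values are made-up illustrative numbers, not taken from the paper.

```python
# Illustrative sketch of per-variable SVM contributions (not the paper's
# R implementation): for a linear kernel, f(x) = w.x + b decomposes
# exactly into one additive term per input variable, which is what makes
# a nomogram-style plot possible.

def svm_contributions(x, w, b):
    """Return the list of per-variable contributions w_j * x_j and the bias.

    The predicted class is sign(sum(contributions) + bias), so each term
    can be displayed on its own axis of a nomogram.
    """
    return [wj * xj for wj, xj in zip(w, x)], b

# Hypothetical trained linear SVM (weights and intercept are invented).
w = [0.8, -1.5, 0.3]
b = 0.2
x = [1.0, 0.5, -2.0]   # one input sample

contribs, bias = svm_contributions(x, w, b)
decision = sum(contribs) + bias
label = 1 if decision >= 0 else -1
print(contribs, decision, label)
```

For polynomial kernels of degree two, the paper notes the same exact additive split extends to pairs of variables; for higher degrees and RBF kernels only an approximation (with a reliability indication) is available.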
