Article

Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley Additive Explanations (SHAP)

Journal

BIG DATA AND COGNITIVE COMPUTING
Volume 6, Issue 4, Pages -

Publisher

MDPI
DOI: 10.3390/bdcc6040126

Keywords

artificial intelligence (AI); explainability; explainable AI (XAI); convolutional neural networks (CNN); intrusion detection; SHAP (Shapley additive explanations); kernel density estimation (KDE)


This study presents an analytical approach to studying the density functions of intrusion detection dataset features, explains why those features matter in XAI models, proposes a method for explaining the results of different machine learning models, and surveys the characteristics of dataset features that perform better for convolutional neural networks.
Artificial intelligence (AI) and machine learning (ML) models have become essential tools in many critical systems, where they make significant decisions; on many occasions, the decisions taken by these models need to be trusted and explained. At the same time, the performance of different ML and AI models varies even on the same dataset. Developers often try multiple models before deciding which one to use, without understanding the reasons behind this variance in performance. Explainable artificial intelligence (XAI) methods explain a model's behavior by highlighting the features the model considered important when making a decision. This work presents an analytical approach to studying the density functions of intrusion detection dataset features. The study explains how and why these features are essential during the XAI process. We aim, in this study, to explain XAI behavior and thereby add an extra layer of explainability. The density function analysis presented in this paper provides a deeper understanding of feature importance across different AI models. Specifically, we present a method to explain the results of SHAP (Shapley additive explanations) for different machine learning models based on KDE (kernel density estimation) plots of the feature data. We also survey the characteristics of dataset features that perform better for convolutional neural network (CNN) based models.
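As a rough illustration of the density-function analysis described in the abstract (not the authors' actual code), the KDE of a single feature can be compared across benign and attack classes: features whose class-conditional densities separate cleanly are the ones SHAP tends to rank as important. The feature values and class labels below are synthetic and hypothetical; the sketch assumes only NumPy and SciPy.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical values of one flow feature for two traffic classes.
benign = rng.normal(loc=0.2, scale=0.1, size=500)  # benign flows
attack = rng.normal(loc=0.8, scale=0.1, size=500)  # attack flows

# Kernel density estimates (Gaussian kernel, bandwidth via Scott's rule).
kde_benign = gaussian_kde(benign)
kde_attack = gaussian_kde(attack)

# Evaluate both densities on a common grid (what a KDE plot would show).
grid = np.linspace(-0.2, 1.2, 200)
d_benign = kde_benign(grid)
d_attack = kde_attack(grid)

# Crude separability score: total variation distance between the two
# class-conditional densities, approximated by a Riemann sum. A score
# near 1 means the densities barely overlap, i.e. the feature is highly
# discriminative and would be expected to attract large SHAP values.
step = grid[1] - grid[0]
tv = 0.5 * np.sum(np.abs(d_benign - d_attack)) * step
print(f"TV distance between class densities: {tv:.3f}")
```

In practice, one would produce such KDE plots for each feature of the intrusion detection dataset and read them alongside the SHAP summary plots for each model, as the paper's method proposes.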

