Article

Formal Methods Boost Experimental Performance for Explainable AI

Journal

IT PROFESSIONAL
Volume 23, Issue 6, Pages 8-12

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/MITP.2021.3123495

Keywords

Artificial intelligence


Explainable AI, a new branch of AI, aims to understand more precisely how sophisticated heuristics and hyperparameter tuning influence the outcomes of advanced AI tools and algorithms. Through formal methods, deeper explanations of phenomena and more accurate information and predictions can be obtained.
In "Towards Explainability in Machine Learning: The Formal Methods Way,"1 we illustrated last year how Explainable AI can profit from formal methods in terms of its explainability. Explainable AI is a new branch of AI directed at a finer-grained understanding of how sophisticated heuristics and the experimental fine-tuning of hyperparameters influence the outcomes of advanced AI tools and algorithms. We discussed the concept of explanation and showed how the stronger meaning of explanation in terms of formal models leads to a precise characterization of the phenomenon under consideration. We illustrated how, following the Algebraic Decision Diagram (ADD)-based aggregation technique originally established in Gossen and Steffen's work,2 we can produce precise information about, and an exact, deterministic prediction of, the outcome of a random forest consisting of 100 trees.
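The core idea behind the ADD-based aggregation can be illustrated with a minimal sketch: a random forest, however many trees it contains, is a single deterministic function from inputs to classes, so its majority vote can be precomputed into one explicit decision structure. The tiny "forest" of decision stumps below is purely hypothetical and not taken from the paper; a real ADD would additionally share common sub-diagrams rather than enumerate every input.

```python
# Hypothetical sketch of forest aggregation over boolean features.
# Each "tree" maps a feature vector (a tuple of 0/1 values) to a class.
from itertools import product

trees = [
    lambda x: 1 if x[0] else 0,
    lambda x: 1 if x[1] else 0,
    lambda x: 1 if x[0] and x[1] else 0,
]

def forest_predict(x):
    """Majority vote over all trees -- the usual random-forest semantics."""
    votes = sum(t(x) for t in trees)
    return 1 if 2 * votes > len(trees) else 0

# Collapse the whole forest into one explicit decision table.  An ADD
# stores this function compactly via shared sub-diagrams; enumerating
# all inputs here simply shows the forest is one deterministic function.
aggregated = {x: forest_predict(x) for x in product([0, 1], repeat=2)}

print(aggregated)
```

Once aggregated, every query is a single deterministic lookup, which is what makes the prediction exact and explainable rather than an opaque ensemble vote.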
