Article
Computer Science, Artificial Intelligence
Lun Ai, Stephen H. Muggleton, Celine Hocquette, Mark Gromowski, Ute Schmid
Summary: This paper explores the explanatory effects of machine-learned theories in human learning, proposing a framework based on the concept of a cognitive window to identify when machine explanations are beneficial or harmful. Empirical evidence shows that human performance improves significantly when aided by a symbolic machine-learned theory that satisfies the cognitive window, and declines when aided by one that does not.
Article
Robotics
Pamela Carreno-Medrano, Stephen L. Smith, Dana Kulic
Summary: This article discusses the issue of robots learning from nonexpert humans, advocating for the robot to understand not only the human's objectives but also their expertise level. The article proposes two inference approaches and demonstrates them in simulation and with real user data.
IEEE TRANSACTIONS ON ROBOTICS
(2023)
Article
Computer Science, Artificial Intelligence
Lun Ai, Johannes Langer, Stephen H. Muggleton, Ute Schmid
Summary: The comprehensibility of machine-learned theories has gained attention, particularly in the context of logic programming. Previous studies have shown the potential for improving human comprehension with machine-learned logic rules. However, the presentation of machine-learned explanations in game learning can have both positive and negative effects. In this research, the effects of concept ordering and the presence of machine-learned explanations on human comprehension in sequential problem-solving were examined. The results suggest that the sequential teaching of concepts and the presence of explanations can enhance human comprehension and problem-solving strategies.
Article
Plant Sciences
Wilfried Woeber, Lars Mehnen, Peter Sykacek, Harald Meimberg
Summary: Recent advancements in machine learning and deep learning have enabled the development of precision farming systems for plant and crop detection based on systematic inspection of morphological features. However, the current models lack interpretability, highlighting the need for unsupervised machine learning, careful feature investigation, and statistical analysis for biological applications.
Article
Computer Science, Artificial Intelligence
Gwenole Quellec, Hassan Al Hajj, Mathieu Lamard, Pierre-Henri Conze, Pascale Massin, Beatrice Cochener
Summary: This paper introduces an explanatory Artificial Intelligence algorithm called ExplAIn, which achieves the same performance level as black-box AI algorithms in classifying the severity of diabetic retinopathy. The algorithm is trained with image supervision, allowing the concepts of lesions and lesion categories to emerge by themselves for explainable automatic diagnoses.
MEDICAL IMAGE ANALYSIS
(2021)
Article
Multidisciplinary Sciences
Jonathan M. Henshaw, Lutz Fromhage, Adam G. Jones
Summary: The aesthetic preferences of potential mates play a significant role in the evolution of elaborate ornaments. Females tend to prefer ornaments that signal a male's quality and have preexisting perceptual biases. The costs of preference expression and the potential genetic benefits associated with offspring attractiveness are important factors in shaping female preferences.
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
(2022)
Article
Computer Science, Artificial Intelligence
Zihao Wang, Thomas Demarcy, Clair Vandersteen, Dan Gnansia, Charles Raffaelli, Nicolas Guevara, Herve Delingette
Summary: This paper presents a Bayesian inference approach for parametric shape models to segment medical images, aiming to provide interpretable results. The framework defines an appearance likelihood and a prior label probability based on a generic shape function, controlling the trade-off between shape and appearance information.
MEDICAL IMAGE ANALYSIS
(2022)
Article
Computer Science, Artificial Intelligence
Pinggai Zhang, Ling Wang, Zixiang Fei, Lisheng Wei, Minrui Fei, Muhammad Ilyas Menhas
Summary: This paper introduces HLOBIL, a novel human learning optimization algorithm that incorporates a Bayesian inference learning strategy for improved learning efficiency. The proposed Bayesian inference learning operator (BILO) strengthens exploitation by exploiting optimal values and retrieving optimal information, while the inherent characteristics of Bayesian inference enhance the algorithm's exploration ability. Experimental results show that HLOBIL outperforms previous HLO variants and other state-of-the-art algorithms in both exploitation and exploration.
KNOWLEDGE-BASED SYSTEMS
(2023)
Article
Astronomy & Astrophysics
Sayantan Auddy, Ramit Dey, Min-Kai Lin, Daniel Carrera, Jacob B. Simon
Summary: In this study, a Bayesian deep-learning network, DPNNet-Bayesian, is introduced to predict planet mass from disk gaps and provide uncertainties associated with the prediction. The unique feature of this approach is its ability to distinguish between uncertainty related to the deep-learning architecture and uncertainty due to measurement noise in the input data. The results show that the network's predictions are comparable to those from other studies based on specialized simulations.
ASTROPHYSICAL JOURNAL
(2022)
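The uncertainty split described in the DPNNet-Bayesian summary corresponds to the standard decomposition of predictive uncertainty into an epistemic part (spread across sampled networks) and an aleatoric part (predicted measurement noise). A minimal NumPy sketch of that decomposition, assuming repeated stochastic forward passes such as MC dropout (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split predictive uncertainty into epistemic and aleatoric parts.

    means:     (n_samples, n_points) predicted means from repeated
               stochastic forward passes (e.g. MC dropout).
    variances: (n_samples, n_points) predicted noise variances from
               the same passes.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    # Epistemic: variance of the mean predictions across network samples,
    # i.e. disagreement attributable to the model itself.
    epistemic = means.var(axis=0)
    # Aleatoric: average of the per-pass predicted noise variances,
    # i.e. irreducible noise in the input data.
    aleatoric = variances.mean(axis=0)
    return epistemic, aleatoric
```

If every sampled network predicts the same mean, the epistemic term vanishes and only the data-noise term remains, which is the behavior the decomposition is meant to capture.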
Article
Automation & Control Systems
Pascal Klink, Hany Abdulsamad, Boris Belousov, Carlo D'Eramo, Jan Peters, Joni Pajarinen
Summary: This study introduces an automated curriculum generation method in reinforcement learning, formalizing the self-paced learning paradigm as inducing a distribution over training tasks to balance task complexity and the goal of matching a desired task distribution. Experiment results demonstrate that training on this induced distribution can help avoid poor local optima in different RL algorithms across tasks with uninformative rewards and challenging exploration requirements.
JOURNAL OF MACHINE LEARNING RESEARCH
(2021)
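The self-paced curriculum summary above describes inducing a training-task distribution that balances task easiness against matching a desired target distribution. A minimal sketch of that idea as an interpolation controlled by a pacing parameter (the paper's actual formulation is an optimization over distributions; this interpolation and all names are only illustrative):

```python
import numpy as np

def self_paced_weights(difficulty, target_logits, pace):
    """Induce a training-task distribution trading off task easiness
    against closeness to a desired target distribution.

    difficulty:    (n,) per-task difficulty estimates (higher = harder).
    target_logits: (n,) log-weights of the desired final task distribution.
    pace:          in [0, 1]; 0 favors easy tasks, 1 recovers the target.
    """
    difficulty = np.asarray(difficulty, dtype=float)
    target_logits = np.asarray(target_logits, dtype=float)
    # Interpolate between "prefer easy tasks" and the target log-weights.
    logits = (1.0 - pace) * (-difficulty) + pace * target_logits
    # Softmax with max-subtraction for numerical stability.
    w = np.exp(logits - logits.max())
    return w / w.sum()
```

Increasing `pace` over the course of training shifts sampling from easy tasks toward the desired distribution, which is the curriculum effect the summary refers to.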
Article
Computer Science, Artificial Intelligence
Firat Ozdemir, Zixuan Peng, Philipp Fuernstahl, Christine Tanner, Orcun Goksel
Summary: This paper proposes an active learning framework that makes optimal use of expert clinician annotation time in medical image analysis, yielding improved segmentation performance. By combining representativeness with uncertainty, the method iteratively estimates which samples in a given dataset would be most valuable to annotate.
KNOWLEDGE-BASED SYSTEMS
(2021)
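The active-learning summary above combines two standard scores: model uncertainty and representativeness of the unlabeled pool. A minimal sketch of such a selection step, assuming predictive entropy for uncertainty and mean cosine similarity for representativeness (the weighting, names, and scores are illustrative assumptions, not the paper's method):

```python
import numpy as np

def select_samples(probs, features, k=2, alpha=0.5):
    """Rank unlabeled samples by a weighted sum of uncertainty and
    representativeness; return the indices of the top-k candidates.

    probs:    (n, n_classes) predicted class probabilities.
    features: (n, d) feature vectors for the same samples.
    alpha:    weight on uncertainty vs. representativeness.
    """
    probs = np.asarray(probs, dtype=float)
    feats = np.asarray(features, dtype=float)
    # Uncertainty: predictive entropy of each sample.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    # Representativeness: mean cosine similarity to all pool samples.
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    representativeness = (unit @ unit.T).mean(axis=1)
    score = alpha * entropy + (1.0 - alpha) * representativeness
    # Descending sort, keep the k highest-scoring indices.
    return np.argsort(score)[::-1][:k]
```

A sample that is both ambiguous to the current model and typical of the pool scores highest, which is the intuition behind combining the two criteria.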
Article
Mechanics
O. Duranthon, M. Marsili, R. Xie
Summary: Learning machines extract representations of maximal relevance, where the mutual information between the representation and the data is bounded by the relevance. Optimal learning machines with maximal relevance provide maximally informative representations. On specific learning tasks, models with maximal relevance achieve maximal likelihood values, and learning is associated with a broadening of the energy-level spectrum of the internal representations.
JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT
(2021)
Article
Computer Science, Artificial Intelligence
Yulin Chen, Bo Yuan, Beishui Liao, Dov M. Gabbay
Summary: This paper proposes a novel framework called Contrasting Logical Knowledge Learning (CLK) that addresses the challenge of balancing accuracy and interpretability in deep learning models for sentiment analysis. Empirical results demonstrate that CLK effectively achieves high accuracy and provides human-understandable explanations.
KNOWLEDGE-BASED SYSTEMS
(2023)
Article
Education & Educational Research
Ali Sorayyaei Azar, Nur Haslinda Iskandar Tan
Summary: During the Covid-19 pandemic, synchronous and asynchronous learning played a significant role in constructing a collaborative online environment for Malaysian university students. Students largely prefer the synchronous mode, but they also value the combination of text-presentation and video learning tools in both synchronous and asynchronous approaches. University lecturers should therefore explore interactive pedagogical methods and a range of delivery methods to enhance student motivation, participation, and engagement.
EDUCATION AND INFORMATION TECHNOLOGIES
(2023)
Review
Psychology, Mathematical
Yeray Mera, Gabriel Rodriguez, Eugenia Marin-Garcia
Summary: Making errors can enhance learning, but specific conditions may influence errorful learning, such as the timing of corrective feedback, the type of errors, learner awareness, motivation, special populations, and whether errors need to be recovered on the final test. Four explanatory theories of errorful learning highlight the importance of semantic relationships between study materials and error recovery on the final test.
PSYCHONOMIC BULLETIN & REVIEW
(2022)