Jie Wu, Yongjin He, Chengyu Xu, Xiaoping Jia, Yule Huang, Qianru Chen, Chuyue Huang, Armin Dadras Eslamlou and Shiping Huang
Crack detection is an important task in bridge health monitoring, and related detection methods have gradually shifted from traditional manual inspection to intelligent approaches based on convolutional neural networks (CNNs) in recent years. Due to the opaque ...
Yoshinari Motokawa and Toshiharu Sugawara
In this paper, we propose an enhanced version of the distributed attentional actor architecture (eDA3-X) for model-free reinforcement learning. This architecture is designed to facilitate the interpretability of learned coordinated behaviors in multi-age...
Diego F. Collazos-Huertas, Andrés M. Álvarez-Meza and German Castellanos-Dominguez
Brain activity stimulated by the motor imagery paradigm (MI) is measured by Electroencephalography (EEG), which has several advantages when implemented with the widely used Brain–Computer Interfaces (BCIs) technology. However, the substantial inter/intr...
Cheng Zhang, Xiong Zou and Chuan Lin
To prevent safety risks, control marine accidents and improve the overall safety of marine navigation, this study established a marine accident prediction model. The influences of management characteristics, environmental characteristics, person...
Suhwan Lee, Marco Comuzzi and Nahyun Kwon
The development of models for process outcome prediction using event logs has evolved in the literature with a clear focus on performance improvement. In this paper, we take a different perspective, focusing on obtaining interpretable predictive models f...
Reza Soleimani and Edgar Lobaton
Physiological and kinematic signals from humans are often used for monitoring health. Several processes of interest (e.g., cardiac and respiratory processes, and locomotion) demonstrate periodicity. Training models for inference on these signals (e.g., d...
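The periodicity these signals exhibit (cardiac, respiratory, or gait cycles) can be made concrete with a small sketch. This is not the authors' method, just a generic autocorrelation-based period estimate on a synthetic signal; the sampling rate and 4-second period are assumed purely for illustration:

```python
import numpy as np

# Synthetic quasi-periodic signal, a stand-in for e.g. a respiration trace.
fs = 50.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
period = 4.0                     # seconds per cycle (assumed)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * t / period) + 0.1 * rng.normal(size=t.size)

# Estimate the period from the dominant peak of the autocorrelation.
x = x - x.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags only
ac /= ac[0]                      # normalize so lag 0 equals 1

# Search beyond 0.5 s so the trivial lag-0 peak is excluded.
lag = np.argmax(ac[25:]) + 25
print(round(lag / fs, 1))        # recovers roughly the 4 s period
```

The same idea underlies more elaborate period-aware training schemes: once the cycle length is known, windows can be aligned or augmented consistently with it.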
Vincent Margot and George Luta
Interpretability is becoming increasingly important for predictive model analysis. Unfortunately, as remarked by many authors, there is still no consensus regarding this notion. The goal of this paper is to propose the definition of a score that allows f...
Emmanuel Pintelas, Ioannis E. Livieris and Panagiotis Pintelas
Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high performance machine learning models, which are able to make very accurate predictions and decis...
Paidamwoyo Mhangara, Willard Mapurisa and Naledzani Mudau
Nanosatellites are increasingly being used in space-related applications to demonstrate and test the scientific capability and engineering ingenuity of space-borne instruments, and for educational purposes, owing to their low manufacturing costs, chea...
Peng Ce and Bao Tie
With the continuous development of artificial intelligence, text classification has gradually shifted from knowledge-based methods to methods based on statistics and machine learning. Among these, a very important and efficient approach is to classify text ba...
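As a minimal illustration of the statistical style of text classification referred to above (not the authors' model), here is a multinomial Naive Bayes classifier with Laplace smoothing over a tiny hypothetical corpus:

```python
import math
from collections import Counter, defaultdict

# Tiny labeled corpus (hypothetical, for illustration only).
train = [
    ("win cash prize now", "spam"),
    ("cheap prize offer", "spam"),
    ("meeting agenda attached", "ham"),
    ("see agenda for the meeting", "ham"),
]

# Per-class word counts and class frequencies.
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    scores = {}
    total_docs = sum(class_counts.values())
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)   # log prior
        total_words = sum(word_counts[label].values())
        for w in text.split():
            # Laplace-smoothed log likelihood of each word given the class.
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("cheap cash offer"))   # → spam
```

Because the learned quantities are just word frequencies, the prediction can be inspected directly, which is part of why such statistical baselines remain popular.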
Rosa Senatore, Antonio Della Cioppa and Angelo Marcelli
Background: The use of Artificial Intelligence (AI) systems for automatic diagnosis is increasingly common in the clinical field, providing useful support for the identification of several diseases. Nonetheless, the acceptance of AI-based diagnoses by the physici...
Praveen Kumar Shukla and Surya Prakash Tripathi
Interpretability and accuracy are two important features of fuzzy systems which are conflicting in their nature. One can be improved at the cost of the other and this situation is identified as the "Interpretability-Accuracy Trade-Off". To deal with this tra...
Mohit Kumar, Bernhard A. Moser, Lukas Fischer and Bernhard Freudenthaler
In order to develop machine learning and deep learning models that take into account the guidelines and principles of trustworthy AI, a novel information-theoretic approach is introduced in this article. A unified approach to privacy-preserving interpret...
Bradley Walters, Sandra Ortega-Martorell, Ivan Olier and Paulo J. G. Lisboa
A lack of transparency in machine learning models can limit their application. We show that analysis of variance (ANOVA) methods extract interpretable predictive models from them. This is possible because ANOVA decompositions represent multivariate funct...
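The idea that ANOVA decompositions represent multivariate functions as a sum of lower-dimensional effects can be sketched numerically. The two-input model below is a hypothetical stand-in for a trained black box, not the authors' experiment:

```python
import numpy as np

# Hypothetical black-box model of two inputs (stands in for a trained model).
def model(x1, x2):
    return np.sin(x1) + 0.5 * x2 + 0.2 * x1 * x2

# Evaluate the model on a grid over the input domain.
g1 = np.linspace(0, 1, 50)
g2 = np.linspace(0, 1, 50)
X1, X2 = np.meshgrid(g1, g2, indexing="ij")
F = model(X1, X2)

# Functional ANOVA decomposition: grand mean, main effects, interaction.
f0 = F.mean()                                 # constant term
f1 = F.mean(axis=1) - f0                      # main effect of x1
f2 = F.mean(axis=0) - f0                      # main effect of x2
f12 = F - f0 - f1[:, None] - f2[None, :]      # residual interaction term

# The terms sum back to the original model exactly on the grid,
# so each effect can be plotted and interpreted on its own.
recon = f0 + f1[:, None] + f2[None, :] + f12
print(np.allclose(recon, F))   # → True
```

Each main effect is a one-dimensional curve with zero mean, which is what makes the decomposition readable: the model's output is attributed to individual inputs plus an explicit interaction remainder.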
Lei Yang, Mengxue Xu and Yunan He
Convolutional Neural Networks (CNNs) have become essential in deep learning applications, especially in computer vision, yet their complex internal mechanisms pose significant challenges to interpretability, crucial for ethical applications. Addressing t...
Wandile Nhlapho, Marcellin Atemkeng, Yusuf Brima and Jean-Claude Ndogmo
The advent of deep learning (DL) has revolutionized medical imaging, offering unprecedented avenues for accurate disease classification and diagnosis. DL models have shown remarkable promise for classifying brain tumors from Magnetic Resonance Imaging (M...
Leon Kopitar, Iztok Fister, Jr. and Gregor Stiglic
Introduction: Type 2 diabetes mellitus is a major global health concern, but interpreting machine learning models for diagnosis remains challenging. This study investigates combining association rule mining with advanced natural language processing to im...
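Association rule mining of the kind mentioned above reduces to counting co-occurrences: a rule "antecedent → consequent" is scored by its support and confidence. A minimal sketch on invented one-hot clinical records (the feature names are hypothetical, not from the study):

```python
# Toy one-hot clinical records (hypothetical features, for illustration only).
records = [
    {"high_bmi", "high_glucose", "diabetes"},
    {"high_bmi", "diabetes"},
    {"high_glucose", "diabetes"},
    {"high_bmi", "high_glucose"},
    {"high_glucose", "diabetes"},
]

def support(itemset):
    # Fraction of records containing every item in the set.
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    # Estimated P(consequent | antecedent) from the records.
    return support(antecedent | consequent) / support(antecedent)

# Rule: high_glucose -> diabetes
rule_conf = confidence({"high_glucose"}, {"diabetes"})
print(round(rule_conf, 2))   # → 0.75
```

Rules like this are directly readable as "if-then" statements, which is what makes pairing them with natural-language summarization attractive for interpretability.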
SeyedehRoksana Mirzaei, Hua Mao, Raid Rafi Omar Al-Nima and Wai Lok Woo
Explainable Artificial Intelligence (XAI) evaluation has grown significantly due to its extensive adoption, and the catastrophic consequence of misinterpreting sensitive data, especially in the medical field. However, the multidisciplinary nature of XAI ...