Information, Vol. 15, No. 4 (2024)

Bridging the Gap: Exploring Interpretability in Deep Learning Models for Brain Tumor Detection and Diagnosis from MRI Images

Wandile Nhlapho, Marcellin Atemkeng, Yusuf Brima, and Jean-Claude Ndogmo

Abstract

The advent of deep learning (DL) has revolutionized medical imaging, offering unprecedented avenues for accurate disease classification and diagnosis. DL models have shown remarkable promise in classifying brain tumors from Magnetic Resonance Imaging (MRI) scans. Despite this impressive performance, however, the opaque nature of DL models makes it difficult to understand their decision-making mechanisms, a challenge that is particularly acute in medical contexts, where interpretability is essential. This paper explores the intersection of medical image analysis and DL interpretability, aiming to elucidate the decision-making rationale of DL models in brain tumor classification. Leveraging ten state-of-the-art DL architectures with transfer learning, we conducted a comprehensive evaluation encompassing both classification accuracy and interpretability. After thorough training, fine-tuning, and testing, EfficientNetB0, DenseNet121, and Xception outperformed the other models. These top performers were then examined using adaptive path-based techniques to understand their underlying decision-making mechanisms. Grad-CAM and Grad-CAM++ highlighted the critical image regions where the models identified patterns and features associated with each brain tumor class. These highlighted regions correspond visually to the tumor locations in the images, showing that the models base their decisions on features and patterns learned from the regions where the tumors actually lie.
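
To make the two techniques named in the abstract concrete, a minimal sketch of each follows in TensorFlow/Keras. Everything here is illustrative only: the 4-class head, the 224x224 input size, the hyperparameters, and the staging are assumptions for the sketch, not details taken from the paper.

    import tensorflow as tf

    # ImageNet-pretrained backbone; include_top=False drops the classifier.
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False  # stage 1: freeze the pretrained features

    # New classification head (4 tumor classes assumed for illustration).
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(4, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)

    # Stage 2 (fine-tuning): unfreeze and retrain at a small learning rate.
    base.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

And a minimal Grad-CAM sketch, reusing the model built above. The layer name "top_conv" (the last convolutional layer in Keras' EfficientNetB0) is likewise an assumption for illustration:

    def grad_cam(model, image, conv_layer_name="top_conv", class_index=None):
        """Return a [0, 1] heatmap the size of the last conv feature map."""
        # Model mapping the input to the conv feature map and predictions.
        grad_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(conv_layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[None, ...])
            if class_index is None:
                class_index = int(tf.argmax(preds[0]))
            class_score = preds[:, class_index]
        # Gradients of the class score w.r.t. the conv feature map,
        # global-average-pooled into one weight per channel.
        grads = tape.gradient(class_score, conv_out)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))
        # Weighted sum over channels, ReLU, then normalize to [0, 1].
        cam = tf.nn.relu(tf.einsum("hwc,c->hw", conv_out[0], weights))
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

Upsampling the returned heatmap to the input resolution and overlaying it on the MRI slice reproduces the kind of visualization the abstract describes; Grad-CAM++ differs in replacing the plain gradient average with a weighted combination of higher-order gradient terms.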
