Article

Skin Cancer Disease Detection Using Transfer Learning Technique

by Javed Rashid, Maryam Ishfaq, Ghulam Ali, Muhammad R. Saeed, Mubasher Hussain, Tamim Alkhalifah, Fahad Alturise and Noor Samand

1 Department of CS&SE, Islamic International University, Islamabad 44000, Pakistan
2 Information Services, University of Okara, Renala Khurd 56130, Pakistan
3 Department of CS, University of Okara, Renala Khurd 56130, Pakistan
4 MLC Lab, University of Okara, Renala Khurd 56130, Pakistan
5 Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass 52571, Qassim, Saudi Arabia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(11), 5714; https://doi.org/10.3390/app12115714
Submission received: 20 April 2022 / Revised: 23 May 2022 / Accepted: 28 May 2022 / Published: 3 June 2022
(This article belongs to the Special Issue New Frontiers in Medical Image Processing)

Abstract

Melanoma is a fatal type of skin cancer; when the malignancy is not treated at an early stage, its rapid spread results in a high fatality rate. Patients' lives can be saved by accurately detecting skin cancer early, and a quick, precise diagnosis can increase the patient's survival rate. This necessitates the development of a computer-assisted diagnostic support system. This research proposes a novel deep transfer learning model for melanoma classification based on MobileNetV2, a deep convolutional neural network that classifies sample skin lesions as malignant or benign. The performance of the proposed model is evaluated on the ISIC 2020 dataset. The dataset contains less than 2% malignant samples, raising a class imbalance problem. Various data augmentation techniques were applied to tackle the class imbalance and add diversity to the dataset. The experimental results demonstrate that the proposed technique outperforms state-of-the-art deep learning techniques in terms of accuracy and computational cost.

1. Introduction

Skin cancer is the unchecked growth of irregular skin cells that leads to malignant tumors. Most of these malignancies are caused by unprotected skin exposure to ultraviolet (UV) radiation [1,2,3]. Melanomas account for 1% of all skin malignancies, with the other 99% being basal cell carcinoma or squamous cell carcinoma [4]. Skin cancer is one of the most common and serious diseases in American society. In the United States alone, more than five million cases of skin illness are reported every year [5], and skin cancer incidence has been rising progressively for decades [6]. Melanoma has become the most severe skin cancer and is responsible for around 75% of all skin cancer mortality [7]. The American Cancer Society estimated that over 99,780 new cases of melanoma would be discovered in 2022, about 57,100 in men and 42,600 in women, and that about 7650 people would die from melanoma [4]. Melanoma affects the melanocytes, the pigment-producing cells of the skin. Based on cancerous cell severity, skin lesions may be divided into benign and malignant categories. A benign skin lesion is a mole or tag that does not contain cancerous cells; malignant lesions necessitate immediate treatment due to a high concentration of cancer cells [8]. According to current figures, the survival rate is 99% if the melanoma is detected before it spreads to nearby lymph nodes [4]. The survival rate drops to about 68% after melanoma spreads to nearby lymph nodes, and to about 30% once it spreads to distant lymph nodes and other organs [4]. Statistics show that in 2019 about 1,361,282 people were living with melanoma [9]. In 2020, about 324,635 people were diagnosed with melanoma, and about 57,043 died from it [10].
Doctors use a variety of ways to detect skin cancer. An expert dermatologist usually follows a series of benchmarks, starting with naked-eye recognition of suspicious tumors, then dermoscopy, and finally a biopsy [11,12]. This process can take a long time, during which the disease may advance to a later stage. Dermoscopic imaging has increased detection performance by 50%, with absolute accuracy ranging from 75% to 84% [13]. Furthermore, a correct diagnosis is subjective and highly dependent on the clinician's abilities [14], and the manual identification of skin diseases is very tough and tiring for patients [15]. Computer-assisted diagnosis helps medical experts analyze dermoscopy procedures when diagnostic expertise or a professional is unavailable [16,17], and computer-based classification is an option for diminishing inter- and intra-observer variability. State-of-the-art computer-assisted dermatological image categorization systems had two fundamental flaws. The first is inadequate data [18]. The second is the imaging process itself: skin images are obtained using a specific instrument called a dermoscope [19], whereas other medical images, such as biopsy and histology images, are obtained using biopsy and microscopy. In addition, the state-of-the-art approaches [20] needed substantial preprocessing, segmentation, and feature extraction operations to categorize skin images.
Artificial intelligence (AI) is a comparatively new area whose techniques are revolutionizing every part of our lives [21,22,23]. Machine learning (ML) methods avoid the step of manually extracting features and help perform classification tasks efficiently [24]. Recently, there has been growing attention to employing ML approaches for accurate cancer detection [24,25]. Machine learning algorithms have increased cancer prediction accuracy by 15% to 20% over the last few decades [25]. Deep learning [26,27,28,29,30] is one of AI's most rapidly growing topics due to its broad range of applications. Deep learning, specifically convolutional neural networks (CNNs) powered by sophisticated computing techniques and massive datasets, has become one of the most potent and popular ML approaches in image identification and classification [31] and has been used to categorize skin lesions [32,33]. The prior knowledge and complex image preprocessing methods required for image classification with traditional ML methods are no longer needed. Some deep-learning-based classifiers have demonstrated the ability to classify skin cancer images with the same accuracy as dermatologists [33]. As a result, CNNs can assist in developing computer-aided rapid skin lesion classifiers at the level of dermatologists.
However, high-quality medical imaging datasets for training are still scarce, predominantly because of the absence of annotated/labeled images for abnormal classes [34]. CNNs with simple architectures are more likely to overfit on limited training datasets, so some researchers use extremely deep CNN models (e.g., ResNet152 contains 152 layers [35]). Although this improves classification performance, it increases computing cost, which is a major problem for clinical applications [36,37]. Moreover, researchers are using pre-trained CNNs to classify skin lesions [38,39,40,41,42], which mitigates the issue of overfitting because pre-trained CNNs reuse features learned from real-world image datasets (such as ImageNet).
The present study proposes a deep transfer learning technique for melanoma classification based on MobileNetV2. At the first stage, pre-processing and heavy augmentation methods are used to overcome the class imbalance problem in the ISIC-2020 challenge dataset. At the second stage, the transfer learning MobileNetV2 architecture is used for automatic feature extraction and for classification as benign or malignant.
The remainder of the article is organized as follows. In Section 2, the related work on existing approaches is discussed in detail. The materials and methods are described in Section 3. In Section 4, the results and discussion are presented. Finally, Section 5 presents the conclusion and future work, followed by the references.

2. Related Work

Various techniques have been proposed for melanoma classification over the previous few decades. Most methods [43,44,45] used image processing techniques to extract features and then fed them into a classification technique. Khan et al. [46] presented a technique for detecting and discriminating between melanoma and nevi. First, a Gaussian filter was applied to remove noise, and K-means clustering was used for lesion segmentation. Then, textural and color features were extracted using a hybrid super feature vector, and support vector machines (SVMs) were applied for classification. The methodology obtained 96% accuracy on the DERMIS dataset. Filali et al. [47] presented a technique based on combining deep learning (DL) and handcrafted features; it obtained 87.8% accuracy on the ISIC challenge dataset and 98% accuracy on the Ph2 dataset. Hu et al. [48] used an approach based on feature similarity measurement, with an SVM for classification. Abbas et al. [49] presented a five-layer system known as "DermoDeep" to differentiate between nevi and melanoma; it integrated visual features and a five-layer model to achieve the best classification results. Dalila et al. [8] extracted three types of features (texture, geometrical properties, and color), selected optimal features using ant-colony-based segmentation, and then used an ANN for classification. Almansour et al. [50] proposed an approach in which textural features were extracted and an SVM was implemented as a classifier; the model achieved 90% accuracy on 227 images. Pham et al. [51] used image enhancement techniques to extract ROIs and then classified the pre-processed images with an SVM, attaining 87.2% accuracy. Yu et al. [52] introduced a method that enhances images to extract ROIs and uses a deep residual model to classify them; the system obtained 85.5% accuracy.
Recently, researchers have been working on melanoma classification using deep learning models. Yu et al. [53] developed a method based on a deep CNN and feature encoding techniques (FV encoding) to create more meaningful features for accurate melanoma recognition; the model was trained on the ISIC 2016 dataset and achieved 86.54% accuracy. Rokhana et al. [54] proposed a deep CNN architecture to classify melanoma dermoscopy images into benign skin lesions and malignant melanoma; evaluated on the ISIC-archive repository, it attained 91.97% sensitivity, 84.76% accuracy, and 78.71% specificity. Xie et al. [7] used a classification method based on an ensemble model. Liberman et al. [55] developed an ensemble model based on three classifiers to classify mole images into non-melanomas and melanomas. Zhou et al. [56] presented a method based on spiking neural networks with spike-timing-dependent plasticity. Hosny et al. [57] implemented a deep CNN architecture for melanoma classification and tested it on three different datasets. Mukherjee et al. [58] used a CNN-based method known as CNN malignant lesions detection (CMLD); the model achieved 90.14% and 90.58% accuracy on the MED-NODE and Dermofit datasets, respectively. Esteva et al. [59] presented a technique for detecting skin diseases at an early stage and classifying skin cancer using deep networks.
Cakmak et al. [60] presented a deep neural network model, NASNet-Mobile, to detect melanoma. The technique was evaluated on the HAM10000 dataset, and various augmentation techniques were used to tackle the problem of imbalanced classes. The NASNet-Mobile network obtained 89.20% accuracy without data augmentation and 97.90% with data augmentation. Brinker et al. [61] used a pre-trained architecture named ResNet50 to classify skin lesions as melanoma or nevi; the model achieved 82.3% sensitivity and 77.9% specificity. Han et al. [62] utilized the ResNet152 model to classify various skin lesions; the specificity and mean sensitivity for three different lesions (melanoma, seborrheic keratosis, and nevi) were 87.63% and 88.2%, respectively. Hosny et al. [63] replaced the last three layers of AlexNet with a fully connected layer, a softmax layer, and an output layer to classify skin lesions; the algorithm achieved 96.86% accuracy. Esteva et al. [64] used a pre-trained model named Inception-v3 to classify skin lesions and enlarged the testing dataset using augmentation techniques; the classification model obtained 71.2% accuracy. A summary of the related work is presented in Table 1.

3. Materials and Methods

Self-learning algorithms are the foundation of artificial intelligence; such algorithms continue to evolve as new information about the task is received [65]. Self-learning algorithms work because they are modeled on the human brain [66]. Artificial neural networks (ANNs) consist of nodes (neurons) connected at various levels, analogous to human nerve cells. Inside this network of neurons, information is recorded, processed (via positive or negative weighting), and output. ANNs with many layers look especially promising because they can recognize more complicated patterns. Deep learning [67,68] refers to the learning processes that such networks can perform.
This research introduces a deep transfer learning system to classify melanoma skin cancer. At the first level, pre-processing and various augmentation approaches are used to resolve the class imbalance in the dataset and generate diversity. At the second level, features are extracted automatically, and a pre-trained "MobileNetV2" model classifies malignant melanoma versus benign skin lesions. The flow chart of the proposed technique is presented in Figure 1.

3.1. Dataset

The performance of deep learning techniques depends on the availability of a suitable and valid dataset. The following dataset is used in this research.

3.1.1. SIIM-ISIC 2020 Dataset

The ISIC-2020 archive [69] comprises the world's largest collection of quality-controlled dermoscopic images of skin lesions publicly available for research. Several institutions contributed data from patients of various ages and sexes. The archive includes 33,126 dermoscopic images, 584 of malignant and 32,542 of benign skin lesions, from more than 2000 patients; every image is associated with one of these patients through a unique patient identifier. We used 11,670 images of the benign class and 584 images of melanoma, so the data of these two classes were imbalanced. To handle the class imbalance issue, 4522 melanoma images from the ISIC 2019 archive [70] were added. After that, various data augmentation strategies were performed, including rescaling, width shift, rotation, shear range, horizontal flip, and channel shift, bringing the melanoma class to 11,670 images. The 11,670 benign images, selected arbitrarily from the whole set, match this count and thereby remove the class imbalance. See sample images in Figure 2 and details of the classes in Table 2.

3.2. Image Pre-Processing

To obtain more consistent classification results and better features, preprocessing is applied to all input images of the ISIC-2020 dataset. The CNN approach requires a massive amount of repetitive training, so a large-scale image dataset is required to prevent the danger of over-fitting.

3.2.1. Image Resizing

All images in the original ISIC dataset are available at 6000 × 4000 pixels. The images are resized to 256 × 256, which dramatically reduces the computational load and speeds up processing.
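A minimal sketch of this resizing step, assuming the raw images are on disk and using Pillow; the directory names are illustrative.

```python
from pathlib import Path
from PIL import Image

SRC, DST = Path("isic2020/raw"), Path("isic2020/resized")  # hypothetical paths
DST.mkdir(parents=True, exist_ok=True)

for jpg in SRC.glob("*.jpg"):
    img = Image.open(jpg).convert("RGB")
    img = img.resize((256, 256), Image.BILINEAR)  # 6000 x 4000 -> 256 x 256
    img.save(DST / jpg.name)
```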

3.2.2. Data Augmentation

Various data augmentation approaches were applied to the training set with the help of the image data generator function of the Keras library in Python to overcome overfitting and increase the dataset's diversity. The computational cost was decreased by utilizing smaller pixel values within the same range; this was accomplished using a scale transformation, so the value of each pixel ranged from 0 to 1 with the parameter value (1./255). A rotation transformation was used to rotate the images to a particular angle; here, 25° was used. Images can be shifted arbitrarily to the right or left by employing the width shift transformation; the width shift parameter was set to 0.1. With a value of 0.1, the height shift parameter was used to move the training images vertically. Shear transformation is a technique in which one axis of an image is fixed and the other axis is stretched to a certain angle called the shear angle; in this case, a 0.2 shear angle was used. The zoom range argument was used to perform a random zoom transformation: a value greater than 1.0 implies that the images were magnified, and a value less than 1.0 means that the images were zoomed out; a zoom range of 0.2 was used. A horizontal flip was used to flip the pictures horizontally. A brightness transformation was used, in which 0.0 represents no brightness and 1.0 represents maximum brightness; here, the brightness range 0.5–1.0 was used. In the channel shift transformation, the channel values are shifted by a random value chosen from a particular range; a 0.05 channel shift range was applied, and the fill mode was "nearest", as shown in Table 3.
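This pipeline maps directly onto Keras's ImageDataGenerator. A sketch with the parameter values from the text follows; the directory layout passed to flow_from_directory is an assumption made for illustration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,            # scale pixel values into [0, 1]
    rotation_range=25,            # rotate by up to 25 degrees
    width_shift_range=0.1,        # random horizontal shift
    height_shift_range=0.1,       # random vertical shift
    shear_range=0.2,              # shear angle (0.2, per the text)
    zoom_range=0.2,               # random zoom in/out
    horizontal_flip=True,         # random horizontal flip
    brightness_range=(0.5, 1.0),  # brightness transformation
    channel_shift_range=0.05,     # random channel shift
    fill_mode="nearest",          # fill pixels created by the transforms
)

# Hypothetical layout: isic2020/train/benign/*.jpg, isic2020/train/melanoma/*.jpg
train_gen = train_datagen.flow_from_directory(
    "isic2020/train", target_size=(256, 256),
    batch_size=64, class_mode="binary")
```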

3.3. Training, Validation and Testing

The ISIC-2020 dataset was divided into three portions: training, testing, and validation. The training set was used to train the MobileNetV2 model, and the validation and test sets were used to evaluate the performance of the introduced model. We split the dataset into training, testing, and validation portions of 70%, 15%, and 15%, respectively. The MobileNetV2 model was trained using the dataset presented in Section 3.1.1; 16,340, 3500, and 3500 images were used for training, validation, and testing, respectively.
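A minimal sketch of a stratified 70/15/15 split, assuming the image paths and labels have already been collected (shown here as dummy lists); scikit-learn's train_test_split is applied twice.

```python
from sklearn.model_selection import train_test_split

# In practice these would come from the ISIC metadata; dummy values shown here.
paths = [f"img_{i:05d}.jpg" for i in range(23340)]
labels = [0] * 11670 + [1] * 11670        # 0 = benign, 1 = melanoma

# First split off 70% for training, then halve the remainder into 15%/15%.
train_p, rest_p, train_y, rest_y = train_test_split(
    paths, labels, test_size=0.30, stratify=labels, random_state=42)
val_p, test_p, val_y, test_y = train_test_split(
    rest_p, rest_y, test_size=0.50, stratify=rest_y, random_state=42)
# -> roughly 16,340 train / 3500 validation / 3500 test images
```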

3.4. MobileNetV2 Architecture

In the current study, the deep transfer learning MobileNetV2 [71] architecture is used to tackle the melanoma classification problem. Several factors influenced the selection of the MobileNetV2 model. The dataset used for training is relatively small, making a model susceptible to over-fitting, and using a small but expressive network like MobileNetV2 mitigates this effect significantly. MobileNetV2 is a framework that optimizes execution speed and memory consumption at a minimal cost in terms of error [71]. The high execution speed makes parameter tuning and experimenting considerably more manageable, and minimal memory consumption is an additional attractive feature. The main structure of MobileNetV2 is based on its previous version, MobileNetV1. The key notions behind the MobileNetV2 framework are the depthwise separable convolution and the linear bottleneck with inverted residual, which are discussed below.

3.4.1. Depthwise Separable Convolutions

As discussed in [71], other efficient networks, such as ShuffleNet [72] and Xception [73], also utilize the depthwise separable convolution. The depthwise separable convolution used in MobileNetV1 was likewise applied in MobileNetV2 [74]. Depthwise separable convolution replaces traditional convolution with two procedures. The first procedure is a feature-map-wise convolution, meaning a separate convolution is applied to each feature map. The feature maps that come from this process are stacked, and the second procedure, a pointwise convolution, processes these stacked feature maps. The pointwise convolution is implemented with a 1 × 1 kernel and is applied to every feature map at once. In a traditional convolution, by contrast, the image is processed simultaneously in the height, width, and channel dimensions, as shown in Figure 3.
However, the depthwise separable convolution analyzes the image along the height and width dimensions during the first procedure and handles the channel dimension during the second procedure, which amounts to a factorization of the traditional convolution.
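A short sketch of this factorization in Keras, contrasting a standard convolution with its depthwise separable counterpart; the tensor shape and filter counts are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(256, 256, 32))

# Standard 3x3 convolution: mixes space and channels in a single operation.
standard = layers.Conv2D(64, kernel_size=3, padding="same")(inputs)

# Factorized version: per-channel 3x3 spatial filtering (depthwise), then a
# 1x1 pointwise convolution that mixes information across channels.
x = layers.DepthwiseConv2D(kernel_size=3, padding="same")(inputs)
depthwise_separable = layers.Conv2D(64, kernel_size=1)(x)
```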

3.4.2. Linear Bottleneck and Inverted Residual

In [71], the inverted residuals were explained and compared with the residual blocks [35] that are an integral part of the ResNet network. Both blocks make use of bottleneck and residual connections, and both utilize three convolutional operators. The first and last operators use 1 × 1 filters [35,71], which translate data from the input domain to an intermediary representation and from the intermediary representation to the output domain. A 3 × 3 filter [35,71] is used to process the intermediate representation, as shown in Figure 4.
The initial and final convolutions of the residual block have a greater number of feature maps than the block's inner convolution [35]; in contrast, the inverted residual gives the first and final convolutions fewer feature maps than the inner convolution [71]. In both situations, the residual link connects the initial and final feature maps (channels), which are fewer in the case of MobileNetV2 than in ResNet [35]. When multiple units are stacked together in either architecture, the outcome is an alternation of small and big layer outputs. The memory efficiency of MobileNetV2 is achieved through this arrangement of residual blocks [71].
Each MobileNetV2 block includes a 1 × 1 expansion layer plus depthwise and pointwise convolutional layers. In contrast to V1, MobileNetV2's pointwise convolutional layer, termed the projection layer, transforms data with many channels into a tensor with a significantly smaller number of feature maps (channels). The bottleneck residual block, which carries the outcome of every block, is the bottleneck in the system. Before the depthwise convolution, a 1 × 1 expansion convolutional layer increases the number of feature maps (channels) according to the expansion factor. The residual connection is the second new feature introduced in MobileNetV2's core component; it is established to facilitate gradient flow through the network. Every layer of the MobileNetV2 architecture includes batch normalization, with ReLU6 as the activation function. The output of the projection layer, on the other hand, passes through no activation function. The whole MobileNetV2 structure comprises 17 bottleneck residual blocks in a row, followed by a 1 × 1 regular convolution, a global average pooling layer, and then a classification layer. The pre-trained MobileNetV2 is shown in Figure 5. Table 4 shows the model and parameters that produced the best results, with an accuracy of 98.2 percent.
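A minimal sketch of one bottleneck (inverted residual) block under the description above: 1 × 1 expansion, 3 × 3 depthwise convolution, and a linear 1 × 1 projection, with batch normalization throughout, ReLU6 after the first two convolutions, and a residual connection when shapes match. The expansion factor of 6 is MobileNetV2's usual default and an assumption here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, out_channels, stride=1, expansion=6):
    in_channels = x.shape[-1]
    h = layers.Conv2D(expansion * in_channels, 1, use_bias=False)(x)  # expand
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    h = layers.DepthwiseConv2D(3, strides=stride, padding="same",
                               use_bias=False)(h)                     # depthwise
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    h = layers.Conv2D(out_channels, 1, use_bias=False)(h)             # linear projection
    h = layers.BatchNormalization()(h)                                # no activation here
    if stride == 1 and in_channels == out_channels:
        h = layers.Add()([x, h])                                      # residual connection
    return h
```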

3.5. Evaluation Measures for Classification

After the training process, the proposed technique was tested on the testing dataset. The architecture's performance was validated using accuracy, F1 score, precision, and recall. The performance metrics employed in this research are described in detail below, where TP stands for true positives, TN for true negatives, FN for false negatives, and FP for false positives.

3.5.1. Classification Accuracy

The classification accuracy is the proportion of correct predictions to the total number of predictions:
Accuracy = (TP + TN) / (TP + TN + FP + FN)

3.5.2. Precision

Classification accuracy is not always a valid metric of overall model performance; one such case is when the class distribution is imbalanced. If a model simply assigns every sample to the majority class, it can still obtain a high accuracy, which is misleading. Precision instead measures how many of the samples predicted as positive are actually positive, and is defined as
Precision = TP / (TP + FP)

3.5.3. Recall

Recall is another vital statistic; it measures the proportion of actual positive input samples that the system successfully predicts. The recall is calculated as
Recall = TP / (TP + FN)

3.5.4. F1 Score

The F1 score is a well-known metric that combines precision and recall into a single value (their harmonic mean). The F1 score is calculated as
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

3.5.5. AUC Score and ROC Curve

The area under the curve (AUC) reflects the degree of separability, and the receiver operating characteristic (ROC) is a probability curve. The ROC curve is a graph that plots the true positive rate (sensitivity) against the false positive rate (1 − specificity).
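A sketch of how these metrics can be computed with scikit-learn, using small hypothetical label and score arrays.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, roc_curve)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                  # ground truth (1 = malignant)
y_prob = [0.1, 0.4, 0.9, 0.8, 0.3, 0.2, 0.7, 0.6]  # model output probabilities
y_pred = [int(p >= 0.5) for p in y_prob]           # threshold at 0.5

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_prob))
fpr, tpr, _ = roc_curve(y_true, y_prob)            # points for the ROC plot
```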

4. Results and Discussion

The experiment with the presented MobileNetV2 architecture was carried out on Google Colab. The MobileNetV2 technique was implemented on the TensorFlow platform with the open-source Keras packages and the Python programming language. Training used the Adam optimizer with the default learning rate and a binary cross-entropy loss function (a sketch of this setup follows the list below). The results of the proposed MobileNetV2 model focused on the following:
  • differentiating the dermoscopic images into malignant or benign;
  • evaluating the performance of the presented MobileNetV2 model on the ISIC-2020 dataset using various data augmentation techniques;
  • comparing the results with state-of-the-art techniques.
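A minimal sketch of this transfer-learning setup under the configuration in Table 4: an ImageNet-pretrained MobileNetV2 base with all layers trainable, a sigmoid head for the binary decision, Adam with the default learning rate, and binary cross-entropy. The generator names refer back to the augmentation sketch in Section 3.2.2; a val_gen built the same way is assumed.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(256, 256, 3), include_top=False, weights="imagenet")
base.trainable = True                       # Table 4: all layers are trained

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])

model.compile(optimizer="adam",             # default learning rate
              loss="binary_crossentropy",
              metrics=["accuracy"])

# history = model.fit(train_gen, validation_data=val_gen, epochs=100)
# (the batch size of 64 is set on the data generators)
```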

4.1. Proposed Model Performance on ISIC-2020 Dataset

The experiment was conducted to assess the performance of the introduced MobileNetV2 architecture. In the experiment, the Adam optimizer, binary cross-entropy loss function, 100 epochs, a batch size of 64, and the default alpha rate were used, as shown in Table 4. The experimental outcomes showed that the introduced method obtained 98.1% and 98.4% accuracy for melanoma and benign lesions, respectively, and 98.2% average accuracy on the ISIC-2020 dataset, as presented in Table 5. The transfer learning model achieved 98.3% and 98.0% recall on melanoma and benign skin cancer, a 98.1% F1-score on both classes, and 98.0% and 98.3% precision on melanoma and benign classes, respectively. The ISIC 2020 test (leader board) results showed that the proposed method obtained 98.04%, meaning there was no substantial difference between the two test accuracies, our test set and the leader board test set, as shown in Table 5. There were 10,982 images in the ISIC 2020 leader board test set, with 690 unique patient IDs and 10,292 duplicates. Because the ground truth for ISIC 2020 was not publicly available, the organizer's statistic on Kaggle, the area under the receiver operating characteristic curve (AUC), was used.
The accuracy and loss in every epoch during training and validation are shown in Figure 6. After 10 epochs, the training and validation accuracies increased rapidly, becoming steady after almost 40 epochs. Likewise, the training and validation losses decreased rapidly after 10 epochs and became stable after 60 epochs. The results demonstrate that the proposed method yielded higher classification scores on the ISIC-2020 dataset when the data augmentation strategies were used in the training set.
The confusion matrix is a valuable ML tool for determining the recall, accuracy, ROC curve, and precision of a model, and it was used here to assess the classification accuracy visually. The greater classification accuracy of MobileNetV2 for the appropriate class is shown in a dark color, whereas a lighter color indicates the incorrectly identified samples; correct predictions appear on the diagonal of the confusion matrix, and incorrect predictions appear off the diagonal. As the results demonstrate, the presented MobileNetV2 framework performed best when data augmentation methods were applied to the ISIC-2020 dataset, as indicated in Figure 7. The MobileNetV2 model correctly identified 1721 benign lesion images out of 1750 and 1556 malignant images out of 1750. The overall accuracy of the presented MobileNetV2 system was 98.2%, with a 1.8% error rate, which indicates the introduced model's generalization ability.
The MobileNetV2 model demonstrated outstanding classification performance on the validation and test set classes, with a large area under the curve (almost 98.2%). The developed methodology's performance was measured using the ROC curve depicted in Figure 8, where the black line indicates the ROC curve and the red line indicates random guessing.
The evaluation metrics, including accuracy, F1-score, recall, precision, and the ROC curve, demonstrated that the proposed method performed exceptionally well on the ISIC-2020 dataset when the data augmentation strategies were used in the training set.

4.2. Comparison with State-of-the-Art Methods

To demonstrate the generalization of the introduced approach, we compared the performance of the presented model with state-of-the-art techniques, and the presented deep learning system was observed to outperform them, with only a slight variation in misclassification. The developed method's performance was evaluated against previously published melanoma and benign classification strategies. The results revealed that the introduced method had the highest accuracy compared with other current research, as indicated in Table 6.
The proposed MobileNetV2 model outperformed the existing studies. Mukherjee et al. [58] obtained 90.58% accuracy in classifying melanoma and benign skin cancer on the Dermofit dataset and 90.14% accuracy on the MED-NODE dataset using the CNN-based CMLD model. Dalila et al. [8] reported 93.6% accuracy using an ANN-based model on a self-created dataset of only 172 images. Hu et al. [48] used FSM and SVM models that achieved 91.9% accuracy on the Ph2 dataset. The model presented in [7] classified melanoma and benign skin diseases with 94.14% accuracy on the XR dataset and 91.11% accuracy on the CR dataset. Another study, by Mijwil [75], obtained 86.90% accuracy using the ISIC 2019 and ISIC 2020 datasets to distinguish between melanoma and benign diseases. The proposed MobileNetV2 model thus dominated the existing techniques, achieving 98.2% accuracy, the highest compared with existing models, as shown in Table 6.

5. Conclusions and Future Work

Melanoma is the worst type of skin cancer, but if caught early enough, it need not be a life-threatening disease. It is therefore critical to employ supportive imaging modalities that have been proven to help with diagnosis; these methods are based on procedures devised by doctors to detect melanoma before it spreads to nearby lymph nodes. In this research, we provide a transfer learning model based on MobileNetV2 for diagnosing melanoma and benign skin lesions, which can be used to investigate any suspicious lesion. The suggested method is applied to the ISIC 2020 challenge dataset of skin cancer images to determine whether a disease is malignant or benign. Data augmentation techniques were used to increase the dataset's size and improve the accuracy of MobileNetV2. The architecture works effectively, with a diagnostic accuracy of 98.2 percent. Finally, the accuracy of various state-of-the-art models was compared with the proposed framework, and the suggested architecture was found to provide outstanding classification accuracy without needing model training from scratch. In the future, once a sufficient number of high-resolution images has been acquired, this study will be extended to a series of skin cancer images from patients in Pakistan.

Author Contributions

The research conception, technique, and programming were proposed by J.R.; G.A. and M.R.S. are in charge of the technical and theoretical framework. T.A., F.A. and others created the datasets. M.I., M.H. and N.S. were in charge of the technical review and improvement. G.A. and J.R. provided comprehensive technical assistance, direction, and supervision. M.I. and J.R. are in charge of editing and proofreading. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The researchers would like to thank the Deanship of Scientific Research, Qassim University for funding the publication of this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Basal and Squamous Cell Skin Cancer Causes, Risk Factors, and Prevention. 2019. Available online: https://www.cancer.org/content/dam/CRC/PDF/Public/8819.00.pdf (accessed on 2 March 2022).
  2. Gandhi, S.A.; Kampp, J. Skin cancer epidemiology, detection, and management. Med. Clin. 2015, 99, 1323–1335.
  3. Harrison, S.C.; Bergfeld, W.F. Ultraviolet light and skin cancer in athletes. Sport. Health 2009, 1, 335–340.
  4. Skin Cancer. Available online: https://www.aad.org/media/stats-skin-cancer (accessed on 22 May 2022).
  5. Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the US population, 2012. JAMA Dermatol. 2015, 151, 1081–1086.
  6. Whiteman, D.C.; Green, A.C.; Olsen, C.M. The growing burden of invasive melanoma: Projections of incidence rates and numbers of new cases in six susceptible populations through 2031. J. Investig. Dermatol. 2016, 136, 1161–1171.
  7. Xie, F.; Fan, H.; Li, Y.; Jiang, Z.; Meng, R.; Bovik, A. Melanoma classification on dermoscopy images using a neural network ensemble model. IEEE Trans. Med. Imaging 2016, 36, 849–858.
  8. Dalila, F.; Zohra, A.; Reda, K.; Hocine, C. Segmentation and classification of melanoma and benign skin lesions. Optik 2017, 140, 749–761.
  9. Cancer Stat Facts: Melanoma of the Skin. Available online: https://seer.cancer.gov/statfacts/html/melan.html (accessed on 22 May 2022).
  10. Melanoma: Statistics. Available online: https://www.cancer.net/cancer-types/melanoma/statistics (accessed on 22 May 2022).
  11. Bomm, L.; Benez, M.D.V.; Maceira, J.M.P.; Succi, I.C.B.; Scotelaro, M.d.F.G. Biopsy guided by dermoscopy in cutaneous pigmented lesion-case report. An. Bras. Dermatol. 2013, 88, 125–127.
  12. Kato, J.; Horimoto, K.; Sato, S.; Minowa, T.; Uhara, H. Dermoscopy of melanoma and non-melanoma skin cancers. Front. Med. 2019, 6, 180.
  13. Gershenwald, J.E.; Scolyer, R.A.; Hess, K.R.; Sondak, V.K.; Long, G.V.; Ross, M.I.; Lazar, A.J.; Faries, M.B.; Kirkwood, J.M.; McArthur, G.A.; et al. Melanoma staging: Evidence-based changes in the American Joint Committee on Cancer eighth edition cancer staging manual. CA Cancer J. Clin. 2017, 67, 472–492.
  14. Ibrahim, H.; El-Taieb, M.; Ahmed, A.; Hamada, R.; Nada, E. Dermoscopy versus skin biopsy in diagnosis of suspicious skin lesions. Al-Azhar Assiut Med. J. 2017, 15, 203.
  15. Bajwa, M.N.; Muta, K.; Malik, M.I.; Siddiqui, S.A.; Braun, S.A.; Homey, B.; Dengel, A.; Ahmed, S. Computer-aided diagnosis of skin diseases using deep neural networks. Appl. Sci. 2020, 10, 2488.
  16. Carli, P.; Quercioli, E.; Sestini, S.; Stante, M.; Ricci, L.; Brunasso, G.; De Giorgi, V. Pattern analysis, not simplified algorithms, is the most reliable method for teaching dermoscopy for melanoma diagnosis to residents in dermatology. Br. J. Dermatol. 2003, 148, 981–984.
  17. Carrera, C.; Marchetti, M.A.; Dusza, S.W.; Argenziano, G.; Braun, R.P.; Halpern, A.C.; Jaimes, N.; Kittler, H.J.; Malvehy, J.; Menzies, S.W.; et al. Validity and reliability of dermoscopic criteria used to differentiate nevi from melanoma: A web-based international dermoscopy society study. JAMA Dermatol. 2016, 152, 798–806.
  18. Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC). arXiv 2016, arXiv:1605.01397.
  19. Binder, M.; Kittler, H.; Seeber, A.; Steiner, A.; Pehamberger, H.; Wolff, K. Epiluminescence microscopy-based classification of pigmented skin lesions using computerized image analysis and an artificial neural network. Melanoma Res. 1998, 8, 261–266.
  20. Burroni, M.; Corona, R.; Dell'Eva, G.; Sera, F.; Bono, R.; Puddu, P.; Perotti, R.; Nobile, F.; Andreassi, L.; Rubegni, P. Melanoma computer-aided diagnosis: Reliability and feasibility study. Clin. Cancer Res. 2004, 10, 1881–1886.
  21. Nath, R.P.; Balaji, V.N. Artificial intelligence in power systems. IOSR J. Comput. Eng. (IOSR-JCE) 2014, e-ISSN 2278-0661.
  22. Sivadasan, B. Application of artificial intelligence in electrical engineering. In Proceedings of the National Conference on Emerging Research Trend in Electrical and Electronics Engineering (ERTEE 2018), Kalady, Kerala, 25 March 2018.
  23. Xu, Y.; Ahokangas, P.; Louis, J.N.; Pongrácz, E. Electricity market empowered by artificial intelligence: A platform approach. Energies 2019, 12, 4128.
  24. Kourou, K.; Exarchos, T.P.; Exarchos, K.P.; Karamouzis, M.V.; Fotiadis, D.I. Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotechnol. J. 2015, 13, 8–17.
  25. Cruz, J.A.; Wishart, D.S. Applications of machine learning in cancer prediction and prognosis. Cancer Inform. 2006, 2, 117693510600200030.
  26. Sohail, M.; Ali, G.; Rashid, J.; Ahmad, I.; Almotiri, S.H.; AlGhamdi, M.A.; Nagra, A.A.; Masood, K. Racial identity-aware facial expression recognition using deep convolutional neural networks. Appl. Sci. 2021, 12, 88.
  27. Rashid, J.; Khan, I.; Ali, G.; Almotiri, S.H.; AlGhamdi, M.A.; Masood, K. Multi-level deep learning model for potato leaf disease recognition. Electronics 2021, 10, 2064.
  28. Dargan, S.; Kumar, M.; Ayyagari, M.R.; Kumar, G. A survey of deep learning and its applications: A new paradigm to machine learning. Arch. Comput. Methods Eng. 2020, 27, 1071–1092.
  29. Hordri, N.F.; Yuhaniz, S.S.; Shamsuddin, S.M. Deep learning and its applications: A review. In Proceedings of the Conference on Postgraduate Annual Research on Informatics Seminar, Kuala Lumpur, Malaysia, 12 September 2016.
  30. Najafabadi, M.M.; Villanustre, F.; Khoshgoftaar, T.M.; Seliya, N.; Wald, R.; Muharemagic, E. Deep learning applications and challenges in big data analytics. J. Big Data 2015, 2, 1–21.
  31. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  32. Brinker, T.J.; Hekler, A.; Utikal, J.S.; Grabe, N.; Schadendorf, D.; Klode, J.; Berking, C.; Steeb, T.; Enk, A.H.; Von Kalle, C. Skin cancer classification using convolutional neural networks: Systematic review. J. Med. Internet Res. 2018, 20, e11936.
  33. Fujisawa, Y.; Inoue, S.; Nakamura, Y. The possibility of deep learning-based, computer-aided skin tumor classifiers. Front. Med. 2019, 6, 191.
  34. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  36. Kong, B.; Sun, S.; Wang, X.; Song, Q.; Zhang, S. Invasive cancer detection utilizing compressed convolutional neural network and transfer learning. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 156–164.
  37. Wu, S.; Gao, Z.; Liu, Z.; Luo, J.; Zhang, H.; Li, S. Direct reconstruction of ultrasound elastography using an end-to-end deep neural network. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 374–382.
  38. Codella, N.; Cai, J.; Abedini, M.; Garnavi, R.; Halpern, A.; Smith, J.R. Deep learning, sparse coding, and SVM for melanoma recognition in dermoscopy images. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Lille, France, 11 July 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 118–126.
  39. Haenssle, H.A.; Fink, C.; Schneiderbauer, R.; Toberer, F.; Buhl, T.; Blum, A.; Kalloo, A.; Hassen, A.B.H.; Thomas, L.; Enk, A.; et al. Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 2018, 29, 1836–1842.
  40. Kawahara, J.; Hamarneh, G. Multi-resolution-tract CNN with hybrid pretrained and skin-lesion trained layers. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Athens, Greece, 17 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 164–171.
  41. Li, Y.; Shen, L. Skin lesion analysis towards melanoma detection using deep learning network. Sensors 2018, 18, 556.
  42. Menegola, A.; Fornaciali, M.; Pires, R.; Bittencourt, F.V.; Avila, S.; Valle, E. Knowledge transfer for melanoma screening with deep learning. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 297–300.
  43. Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.F.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst. Appl. 2015, 42, 6578–6585.
  44. Lynn, N.C.; War, N. Melanoma classification on dermoscopy skin images using bag tree ensemble classifier. In Proceedings of the 2019 International Conference on Advanced Information Technologies (ICAIT), Yangon, Myanmar, 6–7 November 2019; pp. 120–125.
  45. Mukherjee, S.; Adhikari, A.; Roy, M. Melanoma identification using MLP with parameter selected by metaheuristic algorithms. In Intelligent Innovations in Multimedia Data Engineering and Management; IGI Global: Hershey, PA, USA, 2019; pp. 241–268.
  46. Khan, M.Q.; Hussain, A.; Rehman, S.U.; Khan, U.; Maqsood, M.; Mehmood, K.; Khan, M.A. Classification of melanoma and nevus in digital images for diagnosis of skin cancer. IEEE Access 2019, 7, 90132–90144.
  47. Filali, Y.; El Khoukhi, H.; Sabri, M.A.; Aarab, A. Efficient fusion of handcrafted and pre-trained CNNs features to classify melanoma skin cancer. Multimed. Tools Appl. 2020, 79, 31219–31238.
  48. Hu, K.; Niu, X.; Liu, S.; Zhang, Y.; Cao, C.; Xiao, F.; Yang, W.; Gao, X. Classification of melanoma based on feature similarity measurement for codebook learning in the bag-of-features model. Biomed. Signal Process. Control 2019, 51, 200–209.
  49. Abbas, Q.; Celebi, M.E. DermoDeep-A classification of melanoma-nevus skin lesions using multi-feature fusion of visual features and deep neural network. Multimed. Tools Appl. 2019, 78, 23559–23580.
  50. Almansour, E.; Jaffar, M.A. Classification of dermoscopic skin cancer images using color and hybrid texture features. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2016, 16, 135–139.
  51. Pham, T.C.; Luong, C.M.; Visani, M.; Hoang, V.D. Deep CNN and data augmentation for skin lesion classification. In Asian Conference on Intelligent Information and Database Systems; Springer: Berlin/Heidelberg, Germany, 2018; pp. 573–582.
  52. Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.A. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Trans. Med. Imaging 2016, 36, 994–1004.
  53. Yu, Z.; Jiang, X.; Zhou, F.; Qin, J.; Ni, D.; Chen, S.; Lei, B.; Wang, T. Melanoma recognition in dermoscopy images via aggregated deep convolutional features. IEEE Trans. Biomed. Eng. 2018, 66, 1006–1016.
  54. Rokhana, R.; Herulambang, W.; Indraswari, R. Deep convolutional neural network for melanoma image classification. In Proceedings of the 2020 International Electronics Symposium (IES), Marrakech, Morocco, 24–26 March 2020; pp. 481–486.
  55. Liberman, G.; Acevedo, D.; Mejail, M. Classification of melanoma images with fisher vectors and deep learning. In Iberoamerican Congress on Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2018; pp. 732–739.
  56. Zhou, Q.; Shi, Y.; Xu, Z.; Qu, R.; Xu, G. Classifying melanoma skin lesions using convolutional spiking neural networks with unsupervised STDP learning rule. IEEE Access 2020, 8, 101309–101319.
  57. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin melanoma classification using ROI and data augmentation with deep convolutional neural networks. Multimed. Tools Appl. 2020, 79, 24029–24055.
  58. Mukherjee, S.; Adhikari, A.; Roy, M. Malignant melanoma classification using cross-platform dataset with deep learning CNN architecture. In Recent Trends in Signal and Image Processing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 31–41.
  59. Esteva, A.; Kuprel, B.; Thrun, S. Deep Networks for Early Stage Skin Disease and Skin Cancer Classification; Stanford University: Stanford, CA, USA, 2015.
  60. Çakmak, M.; Tenekecı, M.E. Melanoma detection from dermoscopy images using Nasnet Mobile with transfer learning. In Proceedings of the 2021 29th Signal Processing and Communications Applications Conference (SIU), Istanbul, Turkey, 9–11 June 2021; pp. 1–4.
  61. Brinker, T.J.; Hekler, A.; Enk, A.H.; Berking, C.; Haferkamp, S.; Hauschild, A.; Weichenthal, M.; Klode, J.; Schadendorf, D.; Holland-Letz, T.; et al. Deep neural networks are superior to dermatologists in melanoma image classification. Eur. J. Cancer 2019, 119, 11–17.
  62. Han, S.S.; Kim, M.S.; Lim, W.; Park, G.H.; Park, I.; Chang, S.E. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J. Investig. Dermatol. 2018, 138, 1529–1538.
  63. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293.
  64. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
  65. Shaikhina, T.; Lowe, D.; Daga, S.; Briggs, D.; Higgins, R.; Khovanova, N. Machine learning for predictive modelling based on small data in biomedical engineering. IFAC-PapersOnLine 2015, 48, 469–474.
  66. Attaran, M.; Deb, P. Machine learning: The new 'big thing' for competitive advantage. Int. J. Knowl. Eng. Data Min. 2018, 5, 277–305.
  67. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204.
  68. Sarvepalli, S.K. Deep Learning in Neural Networks: The Science Behind an Artificial Brain; Liverpool Hope University: Liverpool, UK, 2015.
  69. The ISIC 2020 Challenge Dataset. Available online: https://challenge2020.isic-archive.com/ (accessed on 5 March 2022).
  70. The ISIC 2019 Challenge Dataset. Available online: https://challenge2019.isic-archive.com/ (accessed on 5 March 2022).
  71. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
  72. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
  73. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
  74. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
  75. Mijwil, M.M. Skin cancer disease images classification using deep learning solutions. Multimed. Tools Appl. 2021, 80, 26255–26271.
Figure 1. Proposed method flowchart.
Figure 2. (a) Benign and (b) melanoma lesions images.
Figure 3. Traditional convolution and depth-wise separable convolution.
Figure 4. Bottleneck residual block.
Figure 5. Classifier based on MobileNetV2.
Figure 6. (a) Accuracies graph, and (b) loss graph of the proposed model.
Figure 7. Confusion matrix of the MobileNetV2 model.
Figure 8. MobileNetV2 ROC curve on ISIC-2020.
Table 1. Related work summary.

Reference | Methodology | Disease | Dataset | Accuracy
[60] | Nasnet Mobile with transfer learning | Melanoma | HAM10000 skin lesion dataset | 97.90%
[46] | Support Vector Machine (SVM) | Melanoma, Nevus | DERMIS dataset | 96.0%
[53] | DCNN-FV | Melanoma, Non-Melanoma | ISBI 2016 challenge | 86.54%
[54] | Deep Convolutional Neural Network (CNN) | Benign, Malignant, Melanoma | ISIC Archive Repository | 84.76%
[47] | SVM | Melanoma, Non-Melanoma | Ph2 & ISIC Challenge | Ph2 98%, ISIC 87.8%
[7] | Ensemble model | Malignant, Benign | Xanthous Race (XR), Caucasian Race (CR) | XR (94.14%), CR (91.11%)
[48] | FSM & SVM | Malignant, Benign | Ph2 | 91.90%
[49] | DCNN | Melanoma, Nevi | Self-contained, 2800 images | 96%
[8] | ANN | Malignant, Benign | Self-contained, 172 images | 93.6%
[61] | ResNet-50 | Melanoma, Nevi | Self-contained, 4204 images | Sensitivity (82.3%), Specificity (77.9%)
[55] | Ensemble model | Melanoma, Non-Melanoma | ISIC | Avg. precision (98.0%)
[56] | STDP-based spiking NN | Malignant Melanoma, Benign, Nevi | ISIC 2018 | 87.7%
[57] | DCNN | Melanocytic, Non-melanocytic | MED-NODE, DermIS & DermQuest (D&D), ISIC-2017 | MED-NODE (99.29%), D&D (99.15%), ISIC (98.14%)
[58] | CNN-based CMLD model | Melanoma, Benign | Dermofit, MED-NODE | Dermofit (90.58%), MED-NODE (90.14%)
Table 2. Summary of the ISIC-2020 dataset.

Class Labels | Training | Validation | Testing
Melanoma | 8170 | 1750 | 1750
Benign | 8170 | 1750 | 1750
Total | 16,340 | 3500 | 3500
Table 3. Image augmentation techniques.

Transformation | Setting
Scale transformation | range 0 to 1
Rotation transformation | 25°
Zoom transformation | 0.2
Horizontal flip | True
Shear transformation | 20°
Table 4. Parameters used in the experiment.

Parameter | Value
Architecture used | MobileNetV2
Type of transfer | From scratch transfer knowledge
Train layers | All
Learning algorithm | Adam
Learning rate | Default alpha rate
Activation function | ReLU & Sigmoid
Loss function | Binary cross-entropy
Batch size | 64
Epochs | 100
Table 5. Classification accuracy, recall, precision and F1-score of the presented MobileNetV2 model on the ISIC-2020 dataset.

Performance Measure | Melanoma | Benign | Average Accuracy | Leader Board Accuracy
Accuracy | 98.1% | 98.4% | 98.2% | 98.04%
Recall | 98.3% | 98.0% | - | -
F1-Score | 98.1% | 98.1% | - | -
Precision | 98.0% | 98.3% | - | -
Table 6. Comparison with state-of-the-art models.

Ref. | Methodology | Diseases | Dataset | Accuracy
[58] | CNN-based CMLD model | Melanoma, Benign | Dermofit, MED-NODE | Dermofit (90.58%), MED-NODE (90.14%)
[8] | ANN | Melanoma, Benign | Self-contained, 172 images | 93.6%
[48] | FSM & SVM | Melanoma, Benign | Ph2 | 91.90%
[7] | Ensemble model | Melanoma, Benign | Xanthous Race (XR), Caucasian Race (CR) | XR (94.14%), CR (91.11%)
[75] | InceptionV3, ResNet, and VGG19 | Melanoma, Benign | ISIC archive, 2019–2020 | 86.90%
Proposed method | MobileNetV2 | Melanoma, Benign | ISIC 2020 | 98.20%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

