Article

Detecting Arcing Faults in Switchgear by Using Deep Learning Techniques

by
Yaseen Ahmed Mohammed Alsumaidaee 1, Chong Tak Yaw 2,*, Siaw Paw Koh 2,*, Sieh Kiong Tiong 2, Chai Phing Chen 3, Chung Hong Tan 2, Kharudin Ali 4 and Yogendra A. L. Balasubramaniam 5

1 College of Graduate Studies (COGS), Universiti Tenaga Nasional (The Energy University), Kajang 43000, Malaysia
2 Institute of Sustainable Energy, Universiti Tenaga Nasional (The Energy University), Kajang 43000, Malaysia
3 Department of Electrical and Electronics Engineering, Universiti Tenaga Nasional (The Energy University), Kajang 43000, Malaysia
4 Faculty of Electrical and Automation Engineering Technology, Kolej Universiti Tati, Kemaman 24000, Malaysia
5 TNB Research Sdn. Bhd., No. 1, Kawasan Institusi Penyelidikan, Kajang 43000, Malaysia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(7), 4617; https://doi.org/10.3390/app13074617
Submission received: 24 February 2023 / Revised: 18 March 2023 / Accepted: 22 March 2023 / Published: 5 April 2023

Abstract: Switchgear and control gear are susceptible to arc problems that arise from slowly developing defects such as partial discharge, arcing, and heating due to faulty connections. These issues can now be detected and monitored using modern technology. This study explores the effectiveness of deep learning techniques, specifically the 1D-CNN, LSTM, and hybrid 1D-CNN-LSTM models, in detecting arcing problems in switchgear. The hybrid 1D-CNN-LSTM model was the preferred model for fault detection in switchgear because of its superior performance in both the time and frequency domains, allowing analysis of the sound wave generated during an arcing event. To investigate the effectiveness of the algorithms, experiments were conducted to locate arcing faults in switchgear, and performance was analyzed in both the time and frequency domains. The 1D-CNN-LSTM model proved to be the most effective at differentiating between arcing and non-arcing situations in the training, validation, and testing stages. Time domain analysis (TDA) showed high success rates of 99%, 100%, and 98.4% for 1D-CNN; 99%, 100%, and 98.4% for LSTM; and 100%, 100%, and 100% for 1D-CNN-LSTM in distinguishing between arcing and non-arcing cases in the respective training, validation, and testing phases. Frequency domain analysis (FDA) likewise demonstrated high accuracy rates of 100%, 100%, and 95.8% for 1D-CNN; 100%, 100%, and 95.8% for LSTM; and 100%, 100%, and 100% for 1D-CNN-LSTM in the respective training, validation, and testing phases. It can therefore be concluded that the developed algorithms, particularly the 1D-CNN-LSTM model in both the time and frequency domains, effectively recognize arcing faults in switchgear, providing an efficient and effective method for monitoring and detecting faults in switchgear and control gear systems.

1. Introduction

Switchgear is an indispensable component of power plants, as it plays a crucial role in breaking and shutting down electrical current as well as transferring and replacing power loads. Additionally, it allows for the isolation of faulty equipment or lines, thereby ensuring the secure operation of the power system [1]. In order to minimize power losses caused by current flow, power distribution is performed at service voltage levels that are lower than the rated voltage [2]. The switchgear cabinets used in power plants typically incorporate various parts such as circuit breakers, knife gates, bus bars, and insulators, among others. This results in a complex internal structure with multiple gaps between devices, which poses a significant problem in terms of providing reliable insulation to prevent breakdowns [3,4]. Switchgear can be classified into three types: gas-insulated, oil-insulated, and air-insulated, which are further categorized based on their voltage level into low, medium, and high voltage groups [5,6]. To ensure continuous and uninterrupted power supply to the end user, regular and thorough monitoring of the operational performance and general working condition of switchgear is necessary [7]. In case of any malfunction, quick and efficient troubleshooting along with appropriate countermeasures are required [8]. Switchgear faults and failures can be categorized into several types, one of which is arcing, a common issue in switchgear [9,10]. An arcing fault occurs when a high-voltage plasma discharge creates an electric arc between phase-to-phase or phase-to-neutral conductors, resulting in an arc flash. Several factors can lead to an arc flash, including mechanical failure, overload, or human error such as inadvertently touching the wiring or an energized busbar [11]. An arcing fault is a high-current, low-impedance fault that occurs when an electrical discharge jumps across a gap in the insulation between conductors.
This discharge produces high temperatures that can damage the insulation and the conductors themselves, leading to further electrical problems [12]. Partial discharge is another type of fault that occurs when small electrical discharges or arcs happen between conductors in the insulation material, resulting in localized heating and degradation of the insulation. This fault can eventually lead to the failure of the insulation and cause a short circuit or arcing fault [13]. Arcing faults are dangerous to both people and equipment and can cause substantial financial losses and halt critical operations. Consequently, arc fault mitigation measures have been developed to address these dangers and increase awareness among workers [14]. Various protection strategies for quickly and effectively suppressing arcs, together with the effects of arc short-circuits, are discussed in [15,16]. The energy loss caused by an arc discharge is affected by the arc voltage, current, and duration. Thus, there are several approaches to minimize the arc or short-circuit duration and limit the energy of the arc [17,18]. One common technique is moving the arc from its original location by grounding, while another method is to activate the primary switch and accelerate protection using sensors other than traditional current transformers. These methods can decrease the duration of the arc and reduce its incident energy, as well as the magnitude of the arc current in some cases [19]. There are two main types of arcing faults [20]. The first type is a “series arc fault”, which happens when a single wire is damaged to the point that it cannot carry current. Arcing then occurs in the gaps between the conductors and spreads to the insulator. The second type is a “parallel arc fault”, which occurs when current passes through defective insulation and shorts out as it travels between conductors.
This type of fault can be difficult for circuit breakers to detect, since it is a weak circuit fault that creates arcs as the leakage current passes through the insulating components [21]. Typically, the conventional sensor module in the switchgear provides the most recent data on the current. Arc faults can be addressed by using either a differential protection system or regular overcurrent protection [22]. The time required for contemporary relays and circuit breakers to detect and interrupt an arc fault is normally at least 50 milliseconds (ms), which includes the trip relay contact closing and the switch opening. Depending on the relay types and circuit breaker technology, the automatic trip time can reach 50 ms or longer. This delay can be due to the need to blow off lids or doors (in gas-insulated switchgear) with sufficient force to ensure that the arc fault is extinguished before the pressure peak [23,24]. Additionally, it can be caused by the operator in the switch room releasing hot gas, or by shortening the time for the circuit breaker (via loop sensor) to trip when an abnormal light is detected in the switchgear assembly, thereby accelerating the protective logic’s response time. There are several ways to recognize and monitor these faults in switchgear and control gear. Diagnostic testing, visual inspection, and condition monitoring are some of the most effective methods [25,26,27]. Diagnostic testing is an important technique used to evaluate the condition of switchgear and control gear. It involves subjecting the equipment to various tests, including insulation resistance testing, dielectric testing, and partial discharge testing. These tests can identify any weak points in the insulation or other components of the equipment that may be susceptible to arcing or partial discharge [28]. Visual inspection is another important technique that can help to recognize arcing faults and partial discharge in switchgear and control gear.
Inspectors should look for signs of damage, wear, or overheating, such as discoloration or burn marks on the equipment, as well as debris or other contaminants that may have accumulated inside the equipment [27]. Condition monitoring is a more advanced technique that involves monitoring the equipment’s performance in real-time using sensors and other monitoring devices. This technique is especially useful for detecting early signs of arcing and partial discharge that may be missed by other inspection methods. For example, a monitoring system can detect increases in temperature, changes in electrical signals, or other anomalies that may indicate a developing fault [5]. In recent years, deep learning has made significant progress and has been extensively applied in various fields, particularly in image processing [29] and natural language processing [30], as well as in the medical [31], industrial [32], and energy sectors [33,34]. One of the most well-known deep learning models, the Convolutional Neural Network (CNN), extracts features by applying different filters in its convolutional layers and typically also includes pooling layers, normalization layers, and fully connected layers, improving performance across a range of tasks [35]. The CNN is a powerful algorithm that can automatically extract relevant features from input data, including time series data. However, unlike algorithms such as the Long Short-Term Memory (LSTM), the CNN cannot remember past time series patterns, which can limit its ability to identify the most significant and representative aspects of the data. This makes it challenging for a CNN to directly capture the underlying temporal dependencies within time series data [36]. For switchgear fault detection and classification, this study therefore also employs Recurrent Neural Networks (RNNs), a type of neural network capable of recalling previous data by feeding the network’s outputs back in with new input data [37].
In recent years, RNNs have gained a lot of attention for their use in natural language processing [38] and speech recognition [39]. Among the various RNN architectures, the LSTM has gained widespread popularity in the field of time series analysis [40]. Its architecture is particularly adept at handling the gradient-vanishing issue that arises in basic RNNs. Additionally, it aids in learning long-term relationships, which may improve the network’s understanding of the temporal aspects of sequential data. To enhance the modeling capabilities of deep neural networks and produce a time-accurate representation of the sequence [41], a convolutional LSTM neural network was developed by combining the strengths of CNN and RNN, and applied to various tasks including energy forecasting for time series data with different durations [42,43], speech emotion recognition [44], and challenging vocabulary tasks [45]. The specifications provided by the manufacturer on the dashboard are often used as a basis for identifying the location where an arc could occur and tracking its position. The evaluator collects the optical sensor signal and analyzes the trigger signal when the input from the sensor surpasses a certain threshold [46]. Over the last few decades, soft computing and artificial intelligence algorithms have been successfully used to solve real-world engineering issues in many areas [47,48]. Therefore, this study aims to employ soft computing and ultrasonic inspection systems to detect arcing faults immediately as they occur. By doing so, it will be possible for the government and appropriate companies to take the necessary steps and precautions to prevent future incidents and avoid larger losses.
The study’s motivations and contributions are outlined in the following points:
  • This study aims to improve the application of deep learning (DL) techniques to the detection of arcing faults in switchgear. In particular, it evaluates the performance of 1D-CNN, LSTM, and a hybrid model combining 1D-CNN and LSTM. The motivation behind this research is to enhance the safety and reliability of power systems by accurately identifying arcing faults, which are a common cause of switchgear failure.
  • One significant contribution of this study is the development of a novel hybrid approach for arcing fault detection. The hybrid model combines the advantages of the 1D-CNN and LSTM models, allowing for more accurate and efficient identification of arcing faults. This is the first time that the hybrid technique is applied to arcing fault detection in switchgear, making this research novel and valuable.
  • We compared the different DL models used in this study to determine the most effective approach for detecting arcing faults. Through extensive experimentation, the hybrid approach (1D-CNN-LSTM) was found to be superior to the other methods for arcing fault identification. This highlights the importance of considering a hybrid approach for detecting arcing faults in switchgear.
  • The evaluation of the different techniques in both the time and frequency domains is another important part of this study; to our knowledge, previous studies using the same DL techniques have not evaluated them in both domains. By conducting the research in both the time and frequency domains, we obtained a more comprehensive understanding of the performance of the different DL models in arcing fault detection.
  • The hybrid model has proven to be effective in rapidly finding arcing defects and distinguishing them from other types of flaws. This is crucial in ensuring the reliability and safety of power systems. Overall, the hybrid approach is considered the optimum model for arcing fault detection in both the time and frequency domains.
The remainder of the article is organized as follows: Section 2 presents the materials and methods, Section 3 describes the performance metrics, Section 4 reports the results and discussion, and Section 5 concludes the article.

2. Materials and Methods

2.1. Proposed Methods

In this study, the authors aimed to detect arcing defects in switchgear using DL techniques in both the time and frequency domains. To do so, the researchers followed the multi-step process depicted in Figure 1. First, sound data for switchgear faults (arcing and non-arcing) was collected and transformed into images using the Mel-Spectrogram. The dataset was then split into three phases: training, validation, and testing. To achieve high accuracy, the hybrid model combined two methods: 1D-CNN for feature extraction and the classification algorithm (LSTM). The results of the hybrid model were compared with those of the standalone models (1D-CNN and LSTM) for identifying arcing faults in both the time and frequency domains. This approach can be useful for identifying faults in switchgear, preventing larger losses and improving the overall performance of the equipment.

2.1.1. Data Collection

The data used for this article was collected using Airborne Ultrasonic Test (AUT) equipment and recorded in waveform audio (WAV) or MPEG Audio Layer 3 (MP3) file formats. To make the data more appropriate for a given machine learning algorithm, a data pre-processing technique called data transformation was employed, which involved consolidating or converting the data into a suitable format. In this case, the data was translated into a matrix format to meet the requirements of the MATLAB program used in the study. TNB’s CBM teams and vendors use four types of equipment to diagnose switchgear health using ultrasound. In this research, raw data were collected from the test results obtained using the following ultrasonic test equipment:
  • Ultra TEV Plus
  • Ultra TEV Plus 2
  • Ultra Probe 9000
  • Ultra Probe 10,000
Table 1 displays the samples of datasets and their sizes for arcing and non-arcing cases in both the time and frequency domains.
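As a concrete illustration of the data transformation step described above, the sketch below decodes a mono 16-bit PCM WAV stream into a plain numeric sequence using only the Python standard library. The helper name `wav_to_matrix` and the synthetic test signal are ours, not part of the study's toolchain.

```python
import io
import struct
import wave

def wav_to_matrix(wav_bytes):
    """Decode a mono 16-bit PCM WAV byte stream into a list of samples.

    Hypothetical helper illustrating the 'data transformation' step:
    the recorded ultrasound audio is consolidated into a numeric
    matrix (here a flat sample list) for later processing.
    """
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        n = wf.getnframes()
        raw = wf.readframes(n)
        # '<%dh' unpacks little-endian 16-bit signed integers
        return list(struct.unpack("<%dh" % n, raw))

# Build a tiny in-memory WAV file to demonstrate the round trip.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)      # mono
    wf.setsampwidth(2)      # 16-bit samples
    wf.setframerate(8000)   # 8 kHz
    wf.writeframes(struct.pack("<4h", 0, 1000, -1000, 0))

samples = wav_to_matrix(buf.getvalue())
print(samples)  # [0, 1000, -1000, 0]
```

In practice the decoded sample vectors for all recordings would be stacked into the matrix format the study describes.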

2.1.2. Pre-Processing

The pre-processing step was essential in this study because it converted the collected data into a format usable by deep learning algorithms. This process involved converting the waveform audio data into an image format using the Mel-Spectrogram technique. The Mel-Spectrogram is a widely used method for extracting spectral features from audio signals: the sound signal is transformed into a spectrogram, essentially a 2D matrix of intensities that represents the frequency spectrum of the signal. After transforming the data into images, the data was further transformed into a matrix format to comply with the requirements of the Python programming language, which was used to implement the machine learning algorithms. This transformation allowed the data to be organized and structured in a way the algorithms could process. The arcing and non-arcing data were combined and then split into three phases: training, validation, and testing. The training phase comprised the largest proportion of the dataset (70%), while the remaining 30% was split equally between the validation (15%) and testing (15%) phases. It is important to note that the non-arcing data in this study was not limited to a single condition, but included a range of other faults such as corona, tracking, and mechanical faults, as well as normal operating conditions. Additionally, the pre-processing and flowchart were implemented in the Google Colab environment.
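The 70%/15%/15% split can be sketched as follows. The shuffling procedure and the `split_dataset` helper are illustrative assumptions (the authors' exact splitting code is not given); the sample count of 438 is taken from the time-domain experiments in Section 4.

```python
import random

def split_dataset(samples, train_frac=0.70, seed=0):
    """Shuffle and split samples into train/validation/test partitions,
    70% for training and the remainder divided equally. Hypothetical
    utility; the authors' exact procedure is not specified."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_train = int(len(samples) * train_frac)
    n_val = (len(samples) - n_train) // 2   # split the rest 15%/15%
    tr = [samples[i] for i in idx[:n_train]]
    va = [samples[i] for i in idx[n_train:n_train + n_val]]
    te = [samples[i] for i in idx[n_train + n_val:]]
    return tr, va, te

data = list(range(438))        # 438 time-domain samples, as in Section 4
tr, va, te = split_dataset(data)
print(len(tr), len(va), len(te))  # 306 66 66
```

These partition sizes match the 306/66/66 counts reported for the time-domain experiments.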

2.1.3. 1D CNN

In this study, 1D convolution operations were used to extract representative features of arcing and non-arcing in both the time and frequency domains. The architecture used was a basic 1D-CNN with convolutional layers, filters, MaxPooling, a fully connected layer, and a classification layer. Here, the 1D-CNN matches the one-dimensional characteristics of the datasets in the time and frequency domains. To introduce nonlinearity, the rectified linear unit (ReLU) was utilized as the activation function. Additionally, batch normalization and dropout were used to normalize inputs and prevent overfitting. Although deep CNNs have gained prominence in image classification competitions such as ImageNet [49], they are primarily used for large-scale 2D visual tasks such as video analysis and image identification; comparatively few applications address forecasting and classification problems on 1D sequence data. Arcing and non-arcing categorization can be seen as a sequential modeling process, making short and compact 1D-CNNs ideal for real-time and resource-limited applications [50]. Equations (1)–(3) and Figure 2 illustrate the convolutional layers, filters, and fully connected layer utilized in the 1D-CNN architecture. One-dimensional feature maps were used to match the one-dimensional nature of the datasets in the time and frequency domains. The 1D-CNN architecture, combined with the Mel-Spectrogram transformation, was able to identify arcing faults in switchgear with high accuracy.
$$x_{o,f_l}^{l} = f\left(\sum_{i \in m} x_i^{l-1} * k_{io,f_l}^{l} + b^{l}\right) \quad (1)$$
$$x_{o}^{l} = f\left(\max_{i \in m} x_i^{l-1} + b^{l}\right) \quad (2)$$
$$x_{o}^{l} = f\left(x_i^{l-1} \cdot d_{io}^{l} + b^{l}\right) \quad (3)$$
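A minimal NumPy sketch of the three operations in Equations (1)–(3): convolution with a ReLU activation, max pooling, and a fully connected layer. The shapes and weights below are toy values, not the trained model's parameters.

```python
import numpy as np

def conv1d(x, kernels, b):
    """Eq. (1): 'valid' 1-D convolution of input x with one kernel per
    output feature map, followed by the ReLU activation f(z) = max(z, 0)."""
    k = kernels.shape[1]
    out = np.array([[np.sum(x[i:i + k] * w) + b
                     for i in range(len(x) - k + 1)] for w in kernels])
    return np.maximum(out, 0.0)

def maxpool1d(h, pool):
    """Eq. (2): non-overlapping max pooling along the time axis."""
    n = h.shape[1] // pool
    return h[:, :n * pool].reshape(h.shape[0], n, pool).max(axis=2)

def dense(h, W, b):
    """Eq. (3): fully connected layer applied to the flattened feature maps."""
    return np.maximum(h.flatten() @ W + b, 0.0)

# Toy forward pass: input sequence of length 10, two filters of width 3.
x = np.arange(10, dtype=float)
kernels = np.ones((2, 3))           # both filters sum a 3-sample window
h = conv1d(x, kernels, b=0.0)       # (2, 8) feature maps
p = maxpool1d(h, pool=2)            # (2, 4) after pooling
y = dense(p, np.ones((8, 2)), 0.0)  # (2,) output activations
print(h.shape, p.shape, y.shape)
```

A classification layer (SoftMax over the dense output) would follow in the full architecture.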

2.1.4. LSTM Structure

The study utilized LSTM, a type of RNN that can handle long-term dependencies and mitigate gradient-vanishing problems. This approach was applied to detect arcing and non-arcing events in both the time and frequency domains by replacing the hidden layers of RNNs with LSTM units that have memory blocks and gates. The LSTM architecture consists of memory cells (C) that are controlled by gates: input gates, output gates, and forget gates [51]. These gates allow data to be stored in or retrieved from the memory cell. LSTM has been successfully used in sequential modeling tasks such as text categorization and time series modeling [52]. The LSTM architecture is illustrated in Figure 3, and Equations (4)–(9) provide the mathematical formulation of an LSTM block.
$$i_t = \sigma\left(x_t U^{i} + h_{t-1} W^{i} + b_i\right) \quad (4)$$
$$f_t = \sigma\left(x_t U^{f} + h_{t-1} W^{f} + b_f\right) \quad (5)$$
$$o_t = \sigma\left(x_t U^{o} + h_{t-1} W^{o} + b_o\right) \quad (6)$$
$$C_t = f_t \odot C_{t-1} + i_t \odot \check{C}_t \quad (7)$$
$$\check{C}_t = \tanh\left(x_t U^{g} + h_{t-1} W^{g}\right) \quad (8)$$
$$h_t = \tanh(C_t) \odot o_t \quad (9)$$
where $x_t$ is the network input matrix, $h_t$ is the hidden-layer output, and $\sigma$ denotes the sigmoid function. The memory cell $C$ retains state information across time steps: Equation (7) updates the previous cell state $C_{t-1}$ to the current cell state $C_t$. The candidate values $\check{C}_t$ and the output $h_t$ of the current LSTM block, computed with the hyperbolic tangent function, are given by Equations (8) and (9). The learnable parameters are the weights [($W_i$, $W_f$, $W_o$, $W_g$) and ($U_i$, $U_f$, $U_o$, $U_g$)], which organize the internal parameters of the network, and the bias vectors ($b_i$, $b_f$, $b_o$, $b_c$). The model updates the weights and biases by minimizing the cost function. The operator $\odot$ denotes element-wise multiplication.
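The LSTM block described by Equations (4)–(9) can be sketched as a single NumPy step function. The weight initialization and the dimensions below are arbitrary toy values, not the study's trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, U, W, b):
    """One LSTM time step following Equations (4)-(9). U, W, b are
    dicts of weights keyed by gate name ('i', 'f', 'o', 'g'); this is
    a sketch, not the authors' implementation."""
    i_t = sigmoid(x_t @ U['i'] + h_prev @ W['i'] + b['i'])  # input gate, Eq. (4)
    f_t = sigmoid(x_t @ U['f'] + h_prev @ W['f'] + b['f'])  # forget gate, Eq. (5)
    o_t = sigmoid(x_t @ U['o'] + h_prev @ W['o'] + b['o'])  # output gate, Eq. (6)
    C_hat = np.tanh(x_t @ U['g'] + h_prev @ W['g'])         # candidate state, Eq. (8)
    C_t = f_t * C_prev + i_t * C_hat                        # cell update, Eq. (7)
    h_t = np.tanh(C_t) * o_t                                # block output, Eq. (9)
    return h_t, C_t

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
U = {k: rng.standard_normal((n_in, n_hid)) for k in 'ifog'}
W = {k: rng.standard_normal((n_hid, n_hid)) for k in 'ifog'}
b = {k: np.zeros(n_hid) for k in 'ifo'}
h, C = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.standard_normal((5, n_in)):   # run a length-5 sequence
    h, C = lstm_step(x_t, h, C, U, W, b)
print(h.shape, bool(np.all(np.abs(h) < 1.0)))
```

Because $h_t$ is a tanh output scaled by a sigmoid gate, each component stays strictly inside (-1, 1).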

2.1.5. 1D-CNN-LSTM

In the hybrid architecture, the LSTM model learned to remember long-term temporal dependencies by analyzing the output from the 1D-CNN model. When these two models were put together, the network was able to pick up both local and global features of the input data. This made it easier to find both arcing and non-arcing faults. The input data was first processed by the 1D-CNN layers to extract relevant features, and the output was then fed into the LSTM layers to capture the temporal dependencies. Finally, the FC and output layers were used to classify the faults. This approach showed promising results in fault classification and can be extended to other sequential modeling tasks. Figure 4 shows a sample of a hybrid 1D-CNN-LSTM model.
In the proposed 1D-CNN-LSTM architecture, the collected time-domain samples took the form of a dimensional matrix containing multiple input sequences. After training, the LSTM neural network was employed to classify the arcing and non-arcing conditions. The Adam optimizer was used for weight optimization. The model’s parameters, such as the number of convolutional layers, kernel size, MaxPooling, and the FC layer, were fine-tuned for maximum accuracy and minimum loss. The final architecture of the 1D-CNN-LSTM model consisted of two global MaxPooling layers and two 1D convolutional layers with 64 and 128 filters, respectively. In the 1D-CNN part, which required less processing time, the ReLU activation function was used with dropouts of 0.2 after every layer starting from the global MaxPooling layers. The pool size of MaxPooling and the kernel size were both set to 3. The following two LSTM layers were employed with a dropout of 0.2 between the first and second LSTM units. The subsequent FC layer had a learning rate of 0.0001 and a SoftMax activation function. The standalone 1D-CNN architecture in the time-domain scenario consisted of two convolutional layers with 16 and 32 filters, respectively, and a dropout rate of 0.4 after each layer. Two MaxPooling layers were also included, with a pool size of 2 and a kernel size of 3. The fully connected layer had 32 neurons and a SoftMax activation function, and the learning rate was set to 0.0001. In comparison, the LSTM model was designed with two connected LSTM units with 64 and 32 units, respectively, and a dropout rate of 0.5 after each unit. Its fully connected layer had 32 neurons and a SoftMax activation function, and the learning rate was also set to 0.0001. Owing to its architecture, the training time for the LSTM model was longer than that of the 1D-CNN model. Figure 5 provides a visual representation of the 1D-CNN, LSTM, and 1D-CNN-LSTM hybrid models in the time domain.
Table 2 provides more detailed information about the design of deep learning algorithms for the time domain scenario.
It is worth noting that the proposed method for the frequency domain followed a similar approach to that for the time domain. The collected samples comprised various input sequences as one-dimensional frequency-domain data. The 1D-CNN model was applied in the first stage to extract features from the input samples. The features were then fed into the LSTM model for training and classification of arcing and non-arcing conditions. The Adam optimizer was again used for weight optimization. The 1D-CNN-LSTM architecture for the frequency domain included two 1D convolutional layers with 64 and 128 filters, followed by two MaxPooling layers with a pool size of 2 and a kernel size of 3. The ReLU activation function was used for the CNN layers, with a dropout of 0.2 after each MaxPooling layer. The LSTM layers included two connected units with 128 and 32 units, respectively, and a dropout of 0.2 between them. Finally, a fully connected layer with 32 neurons and a SoftMax activation function was added; the learning rate for the FC layer was set to 0.0001. The standalone 1D-CNN method consisted of two convolutional layers with 16 and 32 filters, two MaxPooling layers, and a dropout of 0.3 applied between each layer to prevent overfitting. A fully connected layer with 32 neurons and a SoftMax activation function was included, with a kernel size of 3 and a MaxPooling pool size of 3. The learning rate for this architecture was set to 0.0001.
On the other hand, the LSTM model involved two connected LSTM units, which had been shown to be the most efficient design for detecting both arcing and non-arcing faults. The first layer had 64 LSTM units, followed by another layer with 64 units, and a dropout of 0.5 was applied after each unit to reduce overfitting. The architecture also included a fully connected layer with 32 neurons and a SoftMax activation function, with a learning rate of 0.0001. Additionally, Figure 5 illustrates the 1D-CNN, LSTM, and 1D-CNN-LSTM architecture for the frequency domain. Table 3 provides detailed information about the design of deep learning algorithms for the frequency domain.
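As a sanity check on these layer settings, the sequence length can be traced through the convolution and pooling stages. The input length of 256 below is a hypothetical example (the paper does not state the input dimension), and the sketch assumes 'valid' convolution and non-overlapping pooling.

```python
def conv_out(n, kernel):
    """Output length of a 'valid' 1-D convolution over n samples."""
    return n - kernel + 1

def pool_out(n, pool):
    """Output length after non-overlapping max pooling."""
    return n // pool

# Trace a hypothetical input of length 256 through two Conv1D (kernel
# size 3) + MaxPooling (pool size 2) blocks, as in the frequency-domain
# 1D-CNN-LSTM stack described above.
n = 256
for _filters in (64, 128):      # filter counts do not affect the length
    n = pool_out(conv_out(n, 3), 2)
print(n)  # sequence length fed to the LSTM layers
```

This kind of bookkeeping is useful when matching the CNN output shape to the expected LSTM input length.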

3. Performance Metrics

The effectiveness of the proposed models was evaluated using a confusion matrix, as presented in Table 4. In this matrix, arcing and non-arcing situations are represented by 0 and 1, respectively. The models’ performance was assessed using several indices, including classification accuracy, recall (sensitivity), precision (dependability), and the F1 measure, as determined by Equations (10)–(13). Recall represents the ratio of relevant samples that were accurately identified out of all relevant samples, while precision refers to the percentage of relevant samples that were correctly identified among the retrieved data. Additionally, the cross-entropy loss was employed to evaluate the models by measuring the closeness of their predictions to the target data, as given in Equation (14). One important performance requirement is the minimization of the gap between the two probability distributions for the arcing and non-arcing states, which frequently arises in binary classification problems. For a perfect model, the loss value would be 0.
$$\mathrm{Classification\ Accuracy}\,(\%) = 100 \times \frac{TP + TN}{TP + FP + FN + TN} \quad (10)$$
$$\mathrm{Recall\ (Sensitivity)}\,(\%) = 100 \times \frac{TP}{TP + FN} \quad (11)$$
$$\mathrm{Precision\ (Dependability)}\,(\%) = 100 \times \frac{TP}{TP + FP} \quad (12)$$
$$F1\ \mathrm{measure}\,(\%) = 100 \times \frac{2 \times (\mathrm{Precision} \times \mathrm{Recall})}{\mathrm{Precision} + \mathrm{Recall}} \quad (13)$$
$$\mathrm{Loss}\,(L) = -\frac{1}{N} \sum_{i=1}^{M} T_i \log(x_i) \quad (14)$$
where $x_i$ is the predicted response, $N$ is the total number of observations in $x$, $M$ is the total number of predicted responses in $x$ (all observations and categories combined), and $T_i$ is the target value. The binary cross-entropy loss is normalized by the total number of instances after summing over all categories and observations.
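A short sketch of how these metrics and the loss can be computed from confusion-matrix counts. The function names are ours; the example counts correspond to the 1D-CNN time-domain testing results reported in Section 4 (7 arcing and 58 non-arcing cases correct, one non-arcing case misclassified as arcing).

```python
import math

def metrics(tp, fp, fn, tn):
    """Classification accuracy, recall, precision, and F1 (all in %)
    computed from confusion-matrix counts, following the formulas above."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {'accuracy': 100 * accuracy, 'recall': 100 * recall,
            'precision': 100 * precision, 'f1': 100 * f1}

def cross_entropy(targets, predictions):
    """Cross-entropy loss averaged over N observations; with one-hot
    targets, only the terms where T_i = 1 contribute to the sum."""
    n = len(targets)
    return -sum(t * math.log(p) for t, p in zip(targets, predictions)) / n

# Arcing treated as the positive class (label 0 in Table 4).
m = metrics(tp=7, fp=1, fn=0, tn=58)
print(round(m['accuracy'], 1), round(m['precision'], 1))  # 98.5 87.5
```

A perfect classifier with confident predictions drives the cross-entropy toward 0, matching the loss requirement stated above.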
Table 4. Confusion matrix for arcing and non-arcing cases.
                               Predicted Arcing Findings (0)   Predicted Non-Arcing Findings (1)
Actual Arcing Findings (0)     TP                              FP
Actual Non-Arcing Findings (1) FN                              TN
True Positive (TP) and True Negative (TN) stand for accurate predictions, while False Positive (FP) and False Negative (FN) stand for inaccurate predictions.

4. Results and Discussion

Experiments were conducted to evaluate how well the proposed algorithms detect arcing defects in switchgear, based on the analysis of sound waves in the time and frequency domains. Figure 6 and Figure 7 show the experimental results: the accuracy and loss in both domains during the training, validation, and testing stages. A total of 438 data samples were used to train and validate the 1D-CNN, LSTM, and 1D-CNN-LSTM models for detecting arcing and non-arcing conditions in the time domain, and 160 samples in the frequency domain. The hybrid classifier in both the time and frequency domains was able to accurately classify all test data into positive and negative categories.
In the time domain, 438 samples were generated from simulations to represent arcing and non-arcing conditions, and 70% of these samples were randomly selected for training, while 15% each were used for model validation and testing. The same testing, validation, and training datasets were used for all methods, totaling 306 samples in the training phase. The results of multiple trials were compiled into confusion matrices, which show the accuracy and error rates for each phase. The validation and testing phases contained 66 samples each. In the 1D-CNN-LSTM model training phase, Table 5 shows that, out of the 306 data sets, 272 non-arcing instances and 34 arcing instances were accurately identified, resulting in an accuracy rate of 100% with a 0% error rate. On the other hand, both 1D-CNN and LSTM models detected 260 non-arcing cases and 43 arcing cases with a 99% accuracy rate and a 1% error rate, with three non-arcing cases mistakenly classified as arcing.
During the validation phase of the time domain analysis, 66 data sets were used for the 1D-CNN-LSTM model. Of these, 56 non-arcing cases and 10 arcing cases were correctly identified, giving an accuracy of 100% and a 0% error rate. For the 1D-CNN model, all 66 data sets were utilized, with four arcing instances detected and a perfect accuracy of 100% achieved for both arcing and non-arcing instances with a 0% error rate. The LSTM model achieved the same results as the 1D-CNN model, with 66 data sets and identical accuracy and error rates. The validation phase’s output matrix is presented in Table 6.
In the time domain, 66 data sets were used during the testing phase of the 1D-CNN-LSTM model, which is shown in Table 7. The method correctly identified 56 non-arcing instances and 10 arcing instances, resulting in a 100% accuracy rate and a 0% error rate. In the case of the 1D-CNN model, 66 data sets were used, and the method successfully identified seven arcing instances and 58 non-arcing instances, with only one non-arcing instance being misclassified as an arcing fault. The accuracy rate for this model was 98.4%, with an error rate of 1.6%. The LSTM model, which also used 66 data sets, correctly identified 58 instances of non-arcing and seven instances of arcing. However, one instance of non-arcing was misclassified as an arcing fault. The accuracy rate for this model was also 98.4%, with an error rate of 1.6%, which is the same as the 1D-CNN model.
In the frequency domain, 160 samples of arcing and non-arcing conditions were taken from simulations. The dataset was randomly split into 70% for training, 15% for validation, and 15% for testing, and the same datasets were used for each method, giving 112 samples for the training phase. The tables below show the true and false detections; the best results from repeated training are reported for the validation and testing phases. The training phase output matrix for the 1D-CNN-LSTM model in the frequency domain is shown in Table 8. Out of 112 samples, 35 arcing and 77 non-arcing cases were correctly identified, giving 100% accuracy and a 0% error rate. Similarly, the 1D-CNN model correctly identified 77 non-arcing and 35 arcing instances out of 112 samples, for 100% accuracy with a 0% error rate. The LSTM model correctly identified 39 arcing and 73 non-arcing instances out of its 112 samples, also achieving 100% accuracy with a 0% error rate.
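Per Table 1, each time-domain trace has 20,001 points while each frequency-domain sample has 10,001 points, which is consistent with a one-sided FFT (rfft of N points yields N//2 + 1 bins). A sketch of that preprocessing step, using a synthetic stand-in for a recorded sound trace and a hypothetical sampling rate:

```python
import numpy as np

fs = 20_000                      # hypothetical sampling rate (Hz), an assumption
n = 20_001                       # points per time-domain trace (Table 1)
t = np.arange(n) / fs

# synthetic stand-in for an arcing sound trace: a 1.2 kHz tone plus noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1200 * t) + 0.1 * rng.standard_normal(n)

# one-sided magnitude spectrum: n // 2 + 1 = 10,001 bins,
# matching the frequency-domain sample length in Table 1
spectrum = np.abs(np.fft.rfft(x))
```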
During the validation phase of the frequency domain analysis, the 1D-CNN-LSTM model used a total of 24 sets of data, successfully recognizing 15 non-arcing instances and nine arcing instances. The overall accuracy was 100% with no errors. Similarly, in the 1D-CNN model, 15 non-arcing instances and nine arcing instances were successfully identified, resulting in an accuracy of 100% with no errors. The LSTM model achieved an accuracy of 100% with no errors, correctly detecting five instances of arcing and 19 instances of non-arcing, using a total of 24 sets of data. Table 9 displays the results matrix for the validation phase.
Table 10 shows the results matrix from the 24 samples used in the testing phase in the frequency domain. The 1D-CNN-LSTM model correctly identified 15 non-arcing and nine arcing instances, for a 100% accuracy rate and a 0% error rate. The 1D-CNN model correctly identified 15 non-arcing and eight arcing instances, with one arcing instance misclassified as non-arcing, giving an accuracy of 95.8% and a 4.2% error rate. The LSTM model produced the same result: 15 non-arcing and eight arcing instances correctly identified, one arcing instance misclassified as non-arcing, and an accuracy of 95.8% with a 4.2% error rate.
To assess the reliability of the three methods more rigorously, a thorough evaluation was conducted using the performance metrics of accuracy, loss, sensitivity, dependability, and F1-measure, summarized in Table 11 and Table 12.
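The metrics in Table 11 and Table 12 follow standard per-class definitions, reading "dependability" as precision (an interpretation, not stated explicitly in the paper). Under that assumption, the 1D-CNN arcing row of Table 11 can be reproduced from the Table 7 counts:

```python
def class_metrics(tp, fn, fp, tn):
    """Per-class metrics (in %) from confusion-matrix counts."""
    accuracy = 100 * (tp + tn) / (tp + fn + fp + tn)
    sensitivity = 100 * tp / (tp + fn)        # recall
    dependability = 100 * tp / (tp + fp)      # read here as precision
    f1 = 2 * sensitivity * dependability / (sensitivity + dependability)
    return accuracy, sensitivity, dependability, f1

# 1D-CNN, time-domain testing, arcing as the positive class (Table 7):
# 7 arcing cases found, 0 missed, 1 false alarm, 58 true non-arcing
acc, sens, dep, f1 = class_metrics(tp=7, fn=0, fp=1, tn=58)
# dep rounds to 88 and f1 to 93, matching the 1D-CNN arcing row of Table 11
```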
The 1D-CNN, LSTM, and 1D-CNN-LSTM networks were compared thoroughly based on their outcomes. Across the training, validation, and testing phases, the 1D-CNN-LSTM model, depicted in Figure 6 and Figure 7, had the highest accuracy in both the time and frequency domains, as shown in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10. In the time domain testing phase, the 1D-CNN-LSTM model identified arcing and non-arcing instances with 100% accuracy, while the 1D-CNN and LSTM models achieved 98.4%. The 1D-CNN-LSTM model also exhibited exceptional convergence, with a loss below 0.1%. Similarly, in the frequency domain testing phase, the 1D-CNN-LSTM model achieved 100% accuracy with a 0% error rate, while the 1D-CNN and LSTM models achieved 95.8%. However, the 1D-CNN-LSTM model took longer to train in both domains than the other models because of its larger number of parameters. Table 2 and Table 3 indicate that the 1D-CNN model, with 11,922 parameters in the time domain and 5650 in the frequency domain, had the fastest training time.
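The hybrid layout of Table 2 — convolutional layers extracting local features from the raw trace, then LSTM layers summarizing the feature sequence — can be illustrated with a toy numpy forward pass. The layer widths, kernel size, and trace length below are toy values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1D convolution + ReLU. x: (T, C_in), w: (k, C_in, C_out)."""
    k = w.shape[0]
    out = np.stack([np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
                    for t in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0.0)

def maxpool1d(x, p=2):
    """Non-overlapping max pooling along the time axis."""
    T = (x.shape[0] // p) * p
    return x[:T].reshape(-1, p, x.shape[1]).max(axis=1)

def lstm_last_hidden(x, Wx, Wh, b):
    """Minimal LSTM over a feature sequence; returns the final hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for xt in x:
        gates = xt @ Wx + h @ Wh + b          # all four gates at once
        i, f, g, o = np.split(gates, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

T, k, C = 200, 5, 8                                # toy sizes
x = rng.standard_normal((T, 1))                    # one raw sound channel
w = 0.1 * rng.standard_normal((k, 1, C))
feat = maxpool1d(conv1d_relu(x, w, np.zeros(C)))   # conv features, shape (98, C)

H = 4
Wx = 0.1 * rng.standard_normal((C, 4 * H))
Wh = 0.1 * rng.standard_normal((H, 4 * H))
h = lstm_last_hidden(feat, Wx, Wh, np.zeros(4 * H))  # summary vector for Dense/SoftMax
```

The convolution halves the sequence length through pooling before the LSTM runs, which is one reason the hybrid can afford wider recurrent layers despite its larger parameter count.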

5. Conclusions

Switchgear is an essential component in ensuring safety in power plants, and reliable fault detection systems are crucial in preventing malfunctions and improving equipment maintenance. In this study, various deep learning structures were proposed for arcing detection approaches using sound waves, including the 1D-CNN, LSTM, and hybrid 1D-CNN-LSTM models. The results indicated that the 1D-CNN and LSTM models were both successful in differentiating between arcing and non-arcing instances in the training, validation, and testing phases of the time domain analysis, with success rates of 99%, 100%, and 98.4%. However, the 1D-CNN-LSTM model outperformed both the 1D-CNN and LSTM models, achieving a 100% success rate in all three phases. In the frequency domain analysis, the success rates for the 1D-CNN and LSTM models in the training, validation, and testing phases were 100%, 100%, and 95.8%, respectively. On the other hand, the 1D-CNN-LSTM model achieved a 100% success rate in all three phases.

The results of the analysis conducted in both the time and frequency domains showed that all three algorithms were successful in detecting arcing faults, with the 1D-CNN-LSTM model achieving the highest success rates of 100% in all phases. These findings demonstrate the potential of deep learning techniques in improving fault detection systems for switchgear.

However, there is still room for improvement and future work in this area. One possible direction for future research is to explore the use of other types of sensors, such as temperature or vibration sensors, in combination with sound wave analysis for more accurate fault detection. Another area of future work could be to optimize the proposed models further to reduce the training time and computational resources required. Additionally, it may be useful to investigate the effectiveness of transfer learning and data augmentation techniques in improving the performance of the proposed models.
Finally, this study provides a promising foundation for further research on the development of deep learning-based fault detection systems for switchgear.

Author Contributions

Conceptualization, S.P.K. and S.K.T.; methodology, Y.A.M.A., C.T.Y., S.P.K. and S.K.T.; software, Y.A.M.A.; validation, Y.A.M.A., C.T.Y., S.P.K. and C.P.C.; formal analysis, Y.A.M.A., C.T.Y. and S.P.K.; investigation, Y.A.M.A., C.T.Y., S.P.K., Y.A.L.B., C.H.T., K.A. and C.P.C.; resources, Y.A.M.A., C.T.Y., S.P.K., S.K.T. and C.P.C.; data curation, Y.A.M.A., C.T.Y. and C.P.C.; writing—original draft preparation, Y.A.M.A., C.T.Y. and S.P.K.; writing—review and editing, Y.A.M.A., C.T.Y., S.P.K., S.K.T., C.P.C., K.A., Y.A.L.B. and C.H.T.; visualization, S.P.K., C.T.Y., S.K.T., Y.A.L.B. and C.P.C.; supervision, S.P.K., C.P.C. and C.T.Y.; project administration, S.P.K. and S.K.T.; funding acquisition, S.P.K. and S.K.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by 202101KETTHA and BOLDREFRESH 2025 (J510050002 (IC6C)).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported by Universiti Tenaga Nasional, BOLDREFRESH 2025 and the AAIBE Chair of Renewable Energy (ChRE) for providing all-out laboratory support.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Diagram Illustrating the Overall Process of The Research Techniques.
Figure 2. The fundamental design of 1D CNN.
Figure 3. The fundamental design of LSTM.
Figure 4. An instance of a 1D CNN-LSTM model.
Figure 5. The architecture of the suggested models (a–c), for the time and frequency domains.
Figure 6. The accuracy and loss curves are shown in the time domain for the 1D-CNN, LSTM, and 1D-CNN-LSTM models.
Figure 7. The accuracy and loss curves are shown in the frequency domain for the 1D-CNN, LSTM, and 1D-CNN-LSTM models.
Table 1. The samples of datasets for arcing and non-arcing in the time and frequency domains.
Name of Fault       Number of Samples in TDA   Number of Samples in FDA
Arcing Faults       54 × 20,001                53 × 10,001
Corona Faults       41 × 20,001                39 × 10,001
Tracking Faults     313 × 20,001               40 × 10,001
Mechanical Faults   17 × 20,001                16 × 10,001
Normal Faults       13 × 20,001                12 × 10,001
Total Size          17.5 Mega-Byte (MB)        11.3 Mega-Byte (MB)
Table 2. The deep learning algorithm designs for time domain.
1D-CNN              LSTM                 1D-CNN-LSTM
1DConv(16-RELU)     LSTM(64)             1DConv(64-RELU)
Drop-out            Drop-out             MaxPooling
MaxPooling          LSTM(32)             Drop-out
1DConv(32-RELU)     Drop-out             1DConv(128-RELU)
Drop-out            Flatten              MaxPooling
MaxPooling          Dense(32-RELU)       Drop-out
Flatten             Dense(2)(SoftMax)    LSTM(128)
Dense(32-RELU)                           Drop-out
Dense(2)(SoftMax)                        LSTM(32)
                                         Dense(2)(SoftMax)
Total Parameters
11,922              35,298               217,986
Table 3. The deep learning algorithm designs for Frequency Domain.
1D-CNN              LSTM                 1D-CNN-LSTM
1DConv(16-RELU)     LSTM(64)             1DConv(64-RELU)
Drop-out            Drop-out             MaxPooling
MaxPooling          LSTM(64)             Drop-out
1DConv(32-RELU)     Drop-out             1DConv(128-RELU)
Drop-out            Flatten              MaxPooling
MaxPooling          Dense(32-RELU)       Drop-out
Flatten             Dense(2)(SoftMax)    LSTM(128)
Dense(32-RELU)                           Drop-out
Dense(2)(SoftMax)                        LSTM(32)
                                         Dense(2)(SoftMax)
Total Parameters
5650                84,578               184,706
Table 5. Training Phase Output Matrix: Time Domain Arcing Fault Classification.
                     1D-CNN              LSTM                1D-CNN-LSTM
Predicted:           Arcing  Non-Arcing  Arcing  Non-Arcing  Arcing  Non-Arcing
Actual Arcing          43        0         43        0         34        0
Actual Non-Arcing       3      260          3      260          0      272
Table 6. Validation Phase Output Matrix: Time Domain Arcing Fault Classification.
                     1D-CNN              LSTM                1D-CNN-LSTM
Predicted:           Arcing  Non-Arcing  Arcing  Non-Arcing  Arcing  Non-Arcing
Actual Arcing           4        0          4        0         10        0
Actual Non-Arcing       0       62          0       62          0       56
Table 7. Testing Phase Output Matrix: Time Domain Arcing Fault Classification.
                     1D-CNN              LSTM                1D-CNN-LSTM
Predicted:           Arcing  Non-Arcing  Arcing  Non-Arcing  Arcing  Non-Arcing
Actual Arcing           7        0          7        0         10        0
Actual Non-Arcing       1       58          1       58          0       56
Table 8. Training Phase Output Matrix: Frequency Domain Arcing Fault Classification.
                     1D-CNN              LSTM                1D-CNN-LSTM
Predicted:           Arcing  Non-Arcing  Arcing  Non-Arcing  Arcing  Non-Arcing
Actual Arcing          35        0         39        0         35        0
Actual Non-Arcing       0       77          0       73          0       77
Table 9. Validation Phase Output Matrix: Frequency Domain Arcing Fault Classification.
                     1D-CNN              LSTM                1D-CNN-LSTM
Predicted:           Arcing  Non-Arcing  Arcing  Non-Arcing  Arcing  Non-Arcing
Actual Arcing           9        0          5        0          9        0
Actual Non-Arcing       0       15          0       19          0       15
Table 10. Testing Phase Output Matrix: Frequency Domain Arcing Fault Classification.
                     1D-CNN              LSTM                1D-CNN-LSTM
Predicted:           Arcing  Non-Arcing  Arcing  Non-Arcing  Arcing  Non-Arcing
Actual Arcing           8        1          8        1          9        0
Actual Non-Arcing       0       15          0       15          0       15
Table 11. The assessment of metrics in accordance with the performance outcomes of DL Structure during testing for the time domain in cases of arcing and non-arcing.
Case Arcing Findings (0)
Techniques      Accuracy   Sensitivity   Dependability   F1-Measure
1D-CNN          98.4       100           88              93
LSTM            98.4       100           88              93
1D-CNN-LSTM     100        100           100             100

Case Non-Arcing Findings (1)
Techniques      Accuracy   Sensitivity   Dependability   F1-Measure
1D-CNN          98         98            100             99
LSTM            98         98            100             99
1D-CNN-LSTM     100        100           100             100
Table 12. The assessment of metrics in accordance with the performance outcomes of DL Structure during testing for the frequency domain in cases of arcing and non-arcing.
Case Arcing Findings (0)
Techniques      Accuracy   Sensitivity   Dependability   F1-Measure
1D-CNN          95.8       89            100             94
LSTM            95.8       89            100             94
1D-CNN-LSTM     100        100           100             100

Case Non-Arcing Findings (1)
Techniques      Accuracy   Sensitivity   Dependability   F1-Measure
1D-CNN          95.8       100           94              97
LSTM            95.8       100           94              97
1D-CNN-LSTM     100        100           100             100
