Article

A Spider Monkey Optimization Based on Beta-Hill Climbing Optimizer for Unmanned Combat Aerial Vehicle (UCAV)

1 Laboratory of SATIT, Department of Industrial Engineering, Abbes Laghrour University, Khenchela 40004, Algeria
2 Department of Mechanical Engineering, Abbes Laghrour University, Khenchela 40004, Algeria
3 School of Computing, University of Eastern Finland, 70210 Kuopio, Finland
4 College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
5 ENISO Laboratory: Networked Objectives, Control, and Communication Systems, National Engineering School of Sousse, Sousse 4023, Tunisia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 3273; https://doi.org/10.3390/app13053273
Submission received: 14 January 2023 / Revised: 20 February 2023 / Accepted: 27 February 2023 / Published: 3 March 2023

Abstract:
Unmanned Combat Aerial Vehicle (UCAV) path planning is a challenging optimization problem that seeks the optimal or near-optimal flight path for military operations. The problem is further complicated by the need to operate in a complex battlefield environment with minimal military risk and fewer constraints. To address these challenges, highly sophisticated control methods are required, and Swarm Intelligence (SI) algorithms have proven to be one of the most effective approaches. In this context, a study has been conducted to improve the existing Spider Monkey Optimization (SMO) algorithm by integrating a new explorative local search algorithm called Beta-Hill Climbing Optimizer (BHC) into the three main phases of SMO. The result is a novel SMO variant called SMOBHC, which offers improved performance in terms of intensification, exploration, avoiding local minima, and convergence speed. Specifically, BHC is integrated into the main SMO algorithmic structure for three purposes: to improve the new Spider Monkey solution generated in the SMO Local Leader Phase (LLP), to enhance the new Spider Monkey solution produced in the SMO Global Leader Phase (GLP), and to update the positions of all Local Leader members of each local group under a specific condition in the SMO Local Leader Decision (LLD) phase. To demonstrate the effectiveness of the proposed algorithm, SMOBHC is applied to UCAV path planning in 2D space on three different complex battlefields with ten, thirty, and twenty randomly distributed threats under various conditions. Experimental results show that SMOBHC outperforms the original SMO algorithm and a large set of twenty-six powerful and recent evolutionary algorithms. The proposed method shows better results in terms of the best, worst, mean, and standard deviation outcomes obtained from twenty independent runs on small-scale (D = 30), medium-scale (D = 60), and large-scale (D = 90) battlefields. 
Statistically, SMOBHC performs better on the three battlefields, except in the case of SMO, where there is no significant difference between them. Overall, the proposed SMO variant significantly improves the obstacle avoidance capability of the SMO algorithm and enhances the stability of the final results. The study provides an effective approach to UCAV path planning that can be useful in military operations with complex battlefield environments.

1. Introduction

Unmanned Aerial Vehicles (UAVs) have been increasingly used for a wide range of applications, including surveillance, search and rescue, and delivery. A crucial aspect of UAV operations is path planning, which involves finding the optimal path for the UAV to achieve its objectives. The Unmanned Combat Aerial Vehicle (UCAV) is a specific type of UAV that is designed for combat missions. However, path planning for UCAVs presents unique challenges compared to other UAVs, mainly due to their operation in a contested environment where enemy air defense systems pose significant threats. Effective path planning for UCAVs involves minimizing exposure to these threats by considering enemy air defense systems and planning the route accordingly.
UCAVs are equipped with a variety of sensors, including radar and cameras, that assist in the path planning process. The development of advanced path planning algorithms for UCAVs is critical to ensuring their effective use in military operations. Navigation systems-based planning and optimization techniques-based planning are two widely used approaches to UAV path planning.
Navigation systems-based planning involves using existing maps and sensor data, such as GPS and LIDAR, to plan the UAV’s path. This approach is typically used in applications where the UAV has a pre-existing map of the environment and the primary goal is to navigate from point A to point B. Navigation systems-based planning algorithms use various techniques, such as A* search, Dijkstra’s algorithm, and RRT, to generate the path that the UAV will follow. Although this approach is limited by the accuracy of the maps and sensor data, it is more accessible for beginners and those without extensive programming or mathematical backgrounds [1,2].
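As an illustration of the navigation-based techniques mentioned above, the following is a minimal sketch of Dijkstra's algorithm on a 4-connected occupancy grid. The function name, grid encoding (0 = free, 1 = obstacle), and unit step cost are assumptions for illustration, not part of the paper.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid via Dijkstra.
    grid[r][c] == 1 marks an obstacle cell; moves cost 1 each."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None                       # goal unreachable
    path, node = [goal], goal
    while node != start:                  # walk predecessors back to start
        node = prev[node]
        path.append(node)
    return path[::-1]
```

A* follows the same structure with a heuristic added to the queue priority, which is why both methods depend so strongly on the accuracy of the underlying map.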
Optimization techniques-based planning, on the other hand, uses mathematical optimization to generate the most efficient path for the UAV. This approach involves developing a mathematical model that considers various constraints, such as the UAV’s maximum speed, minimum turning radius, and minimum altitude. Common optimization algorithms used for UAV path planning include Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), and Simulated Annealing (SA). Although this approach can generate more efficient paths than navigation systems-based planning, it is computationally expensive, which can make it challenging to implement in real-time applications [3,4].
The choice of approach in path planning depends on the specific application and the available resources. Navigation systems-based planning may be more appropriate for simpler environments with accurate maps and sensor data, while optimization techniques-based planning may be better for more complex environments. Regardless of the approach chosen, path planning is a critical component of UAV operations and can significantly impact the effectiveness of the mission.
In recent years, optimization algorithms based on natural phenomena have been extensively studied and developed to solve various problems in science and engineering. The Spider Monkey Optimization (SMO) algorithm, proposed by Bansal et al. [5], is one such optimization algorithm based on the intelligent foraging behavior of animals with Fission–Fusion social structures. The SMO algorithm has shown promising results in solving optimization problems in various fields, including the Unmanned Combat Aerial Vehicle (UCAV) path planning problem in 2D space.
The SMO algorithm uses the concept of population splitting or merging depending on the search situation, which is widely used in various fields. To handle the UCAV path planning problem with obstacle avoidance, Zhu et al. [6] proposed a Cooperative Co-Evolution (CE) based Spider Monkey Optimization algorithm (CESMO) that was tested and compared with ten other Swarm Intelligence (SI) algorithms using thirty-six test cases. The results showed that CESMO was robust and had a competitive performance for UCAV path planning. In another study, Zhu et al. [7] compared the basic version of SMO with eleven other SI algorithms for UCAV path planning in a complex 2D environment. The results showed that SMO had better performance in finding a safe path compared to the other algorithms.
However, there is always room for improvement, and to enhance the performance of the SMO algorithm, an improved version called SMOBHC has been proposed. The main contributions of this study are threefold. Firstly, the Beta-Hill Climbing Optimizer (BHC), a new exploratory local search algorithm, has been integrated into the three main phases of the SMO algorithm to create SMOBHC. This new algorithm has the advantage of being better than the SMO in terms of intensification, exploration, avoidance of local minima, and speed of convergence. The BHC is implemented in the SMO algorithm for three main purposes: to improve each new Spider Monkey solution generated in the Local Leader Phase (LLP), to enhance each new Spider Monkey solution produced in the Global Leader Phase (GLP), and finally to update the positions of all Local Leader members under a specific condition in the Local Leader Decision (LLD) phase.
Secondly, the SMOBHC algorithm has been adapted and applied to generate an optimal path for the UCAV in 2D space in complex battlefields under different conditions. Finally, to demonstrate the effectiveness and efficiency of the proposed algorithm, SMOBHC has been compared with the SMO and a large set of powerful and recent algorithms. The results showed that SMOBHC outperformed SMO and other algorithms in terms of finding an optimal path for the UCAV in a complex 2D environment.
Overall, it can be seen that the SMO algorithm and its improved version, SMOBHC, are highly effective in solving optimization problems, particularly in the area of UCAV path planning. The successful integration of the BHC algorithm into the SMO algorithm has resulted in significant improvements in its performance, which is demonstrated through extensive experimentation and comparison with other algorithms in this study.
A list of the meanings of variables, abbreviations, and adjustable control variables utilized in this paper can be found in Table A1, which is located in Appendix A.
The rest of this paper is organized as follows: Section two presents a review of related works. Section three discusses the 2D mathematical model of the UCAV path planning problem. Section four provides details on the SMO, the Beta-Hill Climbing Optimizer (BHC), and the SMOBHC. Section five presents the experimental results. Finally, Section six contains our concluding remarks.

2. Related Works

The purpose of this review is to provide a comprehensive overview of recent research on Navigation Systems-based UAV Path Planning methods and Optimization Techniques-based UAV Path Planning methods. As emphasized in the introduction section, these methods are crucial in the field of Unmanned Aerial Vehicles (UAVs).
In this review, we present two tables that showcase the latest research on these path planning methods, highlighting their significance and relevance in the UAV industry. By examining these examples, readers will gain a deeper understanding of the current state of research in this field and the advancements made in recent years.
Table 1 and Table 2 outline the advantages and limitations of each method. Navigation Systems-based UAV Path Planning methods utilize GPS and sensor-based technologies to plan and optimize paths for UAVs, while Optimization Techniques-based UAV Path Planning methods rely on mathematical algorithms and models to find the most efficient path for UAVs.
Navigation Systems-based UAV Path Planning methods offer several benefits, including real-time capabilities that enable adjustments to the UAV’s path while in-flight. Additionally, these methods are relatively easy to implement and applicable to a wide range of UAV applications. However, their reliance on sensors and GPS could limit their effectiveness in areas with poor signal coverage.
On the other hand, Optimization Techniques-based UAV Path Planning methods can provide optimized paths for UAVs in complex environments. These methods can consider various constraints and objectives, making them suitable for a wide range of applications. However, they may require significant computational resources, making them less suitable for real-time applications.
To sum up, Navigation Systems-based UAV Path Planning and Optimization Techniques-based UAV Path Planning methods each have their own strengths and weaknesses, and the selection of the most suitable method for a particular UAV application will depend on specific requirements. This review has shed light on the capabilities and limitations of each approach, which will be helpful for researchers and practitioners in making an informed decision about the path planning method that best meets their needs.

3. Problem Formulation

Path planning is a crucial component of a UCAV mission control system, and emergencies must be taken into account. The objective of this numerical problem is to find an optimal or near-optimal flight path for the UCAV that avoids obstacles and reaches its destination safely. The 2D mathematical model for UCAV path planning used in this paper was introduced in [44]. As shown in Figure 1, there are several threats on the battlefield, such as radars, missiles, and natural barriers. The impact of each threat is represented by a circle with a particular radius, which signifies the risk area that must be avoided during the flight. If any part of the path falls within a circle, the risk of the guided UCAV being damaged increases. Hence, considering all existing threats on the battlefield and fuel consumption, the main goal of the flight mission is to find an optimal flight path that connects the starting point to the endpoint.

3.1. UCAV Path Planning Model

According to [44], a mathematical modeling of this problem is carried out as follows:
  • A segment ST that directly connects the starting and terminal point is drawn,
  • The segment is divided into D + 1 equal parts by D perpendicular lines,
  • All D lines are taken as new axes, and a series of points are assigned and connected on them to form a complete path.
The whole path is divided into D + 1 steps and represented by a vector $S_i = Y_i = (x_{i1}, x_{i2}, \dots, x_{iD})$, where $x_{ij}$ represents the jth dimension of the ith solution vector and indicates the discrete position of each step along the vertical axis. In reality, the components of $S_i$ are the ordinates of the D path points relative to the segment ST, and their abscissae are $X_l = (x_1, x_2, \dots, x_D) = (|ST|/D,\ 2|ST|/D,\ \dots,\ |ST|)$, $l = 1, \dots, D$, where $|ST|$ is the length of the segment ST. For example, if $(0, 0)$ is the starting point and $(|ST|, 0)$ is the endpoint, the distance between them is $|ST|$. The vector $S_i$ can be used to create a path with D nodes, whose coordinates are $P_1(|ST|/D, x_{i1})$, $P_2(2|ST|/D, x_{i2})$, and so on; linking all the nodes yields the complete path. Restrictions can be imposed on each solution path to keep it within the battle area, according to the size of the battlefield (see Figure 2). It is important to note that the coordinates $(x', y')$ of the generated nodes $P_1, \dots, P_D$ in the plane $SX'Y'$ are related to their coordinates $(x, y)$ in the plane $OXY$ through the rotation $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$.
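The mapping from a solution vector to battlefield coordinates can be sketched as follows. This is a minimal illustration: the function name, the per-step spacing of $|ST|/D$, and the frame convention are assumptions based on the description above.

```python
import math

def solution_to_path(y, S, T):
    """Map a solution vector of ordinates (measured from segment ST)
    to battlefield coordinates via the rotation of Section 3.1."""
    D = len(y)
    st = math.hypot(T[0] - S[0], T[1] - S[1])      # |ST|
    theta = math.atan2(T[1] - S[1], T[0] - S[0])   # inclination of ST
    c, s = math.cos(theta), math.sin(theta)
    nodes = []
    for j, yj in enumerate(y, start=1):
        xj = st / D * j                            # abscissa along ST
        # rotate (xj, yj) from the S-X'Y' frame into O-XY, then shift by S
        nodes.append((S[0] + c * xj - s * yj, S[1] + s * xj + c * yj))
    return nodes
```

For a horizontal ST (e.g. S(0, 0), T(500, 0)) the rotation is the identity and the nodes simply sit at abscissae $|ST|/D, 2|ST|/D, \dots, |ST|$ with the solution vector as ordinates.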

3.2. Objective Function

A probability density model, expressed in Equation (1), is used to evaluate the pathways. Unlike a conventional cost-function model, there is no clear cutoff where damage risk is negligible: as the distance grows, the likelihood of encountering a threat decreases, but it never reaches zero. The symbol $d_{kj}$ represents the distance between the jth step and the kth threat center, $\delta = R_k / \log 20$ is a parameter that determines the form of the density function, $R_k$ is the radius of the kth threat element, and $n_{thr}$ is the number of threats in the battlefield.
$\mathrm{Cost}_{threats} = \sum_{k=1}^{n_{thr}} \exp\left( -\sum_{j=1}^{D} \frac{d_{kj}}{\delta} \right)$  (1)
The fuel consumption of a UCAV with constant flight speed is tied solely to the flight path length, and its cost can be expressed as follows:
$\mathrm{Cost}_{fuel} = \frac{\sum_{j=1}^{D} d_{ij}}{|ST|}$  (2)
where D denotes the number of steps and $|ST|$ is the length of the segment ST. Additionally, $\sum_{j=1}^{D} d_{ij}$ represents the length of the whole generated flight path, and $d_{ij}$ is the length of the jth step of the ith solution.
Therefore, the objective function of the model, taking into account both threat cost and fuel cost, can be written as:
$f_{obj} = \gamma \sum_{k=1}^{n_{thr}} \exp\left( -\sum_{j=1}^{D} \frac{d_{kj}}{\delta} \right) + (1 - \gamma)\, \frac{\sum_{j=1}^{D} d_{ij}}{|ST|}$  (3)
where $\gamma$ is a weight coefficient in the range [0, 1]. In this paper, $\gamma$ is set to 0.5, meaning that the threat cost and the fuel cost are of equal importance [29].

4. The Proposed New SMO Variant

4.1. The SMO Basic Algorithm

The Spider Monkey Optimization (SMO) algorithm is a population-based meta-heuristic inspired by the Fission–Fusion System (FFS) of spider monkeys [5]. When stagnation occurs, the population is divided into subgroups; a local leader is assigned to each subgroup and a global leader to the entire population. The algorithm operates in two main stages to update the population during the evolution process. In the first stage, the Local Leader Phase (LLP), the population is updated using the experience of the local leaders. In the second stage, the Global Leader Phase (GLP), the population is updated using the experience of the global leader. If no better individual is found through iterations, the global leader divides the entire population into subgroups. If the number of subgroups reaches the Maximum Group limit (MG), the subgroups reunite into a single population. If a local leader cannot be updated, the subgroup is randomly regenerated and a new direction is explored. In addition to MG, the SMO algorithm has three main control parameters: the Global Leader Limit (GLL), the Local Leader Limit (LLL), and the perturbation rate (pr). The first two parameters track the number of times the global and local leaders cannot be updated through iterations, while the last controls the amount of perturbation applied to the current position. Based on the FFS, the SMO algorithm performs better than other Swarm Intelligence algorithms, particularly for the UCAV path planning problem [7]. The basic version of the SMO algorithm is described in detail in the following sub-sections.

4.1.1. Initialization Step

Initially, N spider monkey positions $XSM_{i,j}$ with D dimensions are generated using a random distribution over the search interval $[Lb, Ub]$, as shown in the following equation:
$XSM_{i,j} = Lb_j + rand(0,1) \times (Ub_j - Lb_j), \quad j = 1, \dots, D$  (4)
where $XSM_{i,j}$ is the jth component of the ith spider monkey position, $Lb_j$ and $Ub_j$ are respectively the lower and upper bounds for the jth component, and $rand(0,1)$ is a random number in the interval $[0, 1]$.
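Equation (4) amounts to the following uniform random initialisation (a minimal sketch; the function name and scalar bounds are assumptions):

```python
import random

def init_population(N, D, lb, ub):
    """Eq. (4): N spider monkey positions drawn uniformly in [lb, ub]^D."""
    return [[lb + random.random() * (ub - lb) for _ in range(D)]
            for _ in range(N)]
```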

4.1.2. Local Leader Phase (LLP)

In this step, each spider monkey adjusts its current position based on the experience of the Local Leader and the members of its kth local group, as described by the following equation:
$XSM^{new}_{i,j} = XSM_{i,j} + rand(0,1) \times (XLL_{k,j} - XSM_{i,j}) + rand(-1,1) \times (XSM_{r,j} - XSM_{i,j})$  (5)
where $XSM_{i,j}$ is the jth component of the ith spider monkey position belonging to the kth local group, $XLL_{k,j}$ represents the jth component of the kth local group leader position, and $XSM_{r,j}$ is the jth component of the rth spider monkey position in the kth group, with the condition that $r \neq i$. $rand(0,1)$ and $rand(-1,1)$ are random numbers in the intervals $[0, 1]$ and $[-1, 1]$, respectively. The new solution $XSM^{new}_{i,j}$ is evaluated and compared to the old one $XSM_{i,j}$ using a fitness value calculated from the objective function $f_{obj}(\cdot)$; if $XSM_{i,j}$ is worse than $XSM^{new}_{i,j}$, $XSM_{i,j}$ is updated to $XSM^{new}_{i,j}$. Algorithm 1 provides a detailed description of the process for generating and updating solutions in the SMO LLP.
Algorithm 1: Local Leader Phase (LLP)
Input: pr
Output: XSM_new(i,j)
1: function SMO-LLP
2:   for each spider monkey i in the kth group do
3:     if rand(0,1) < pr then
4:       Compute a new spider monkey solution XSM_new(i,j) using (5).
5:       if f_obj(XSM_new(i,j)) < f_obj(XSM(i,j)) then
6:         XSM(i,j) = XSM_new(i,j)
7:       end if
8:     else
9:       XSM_new(i,j) = XSM(i,j)
10:    end if
11:  end for
12: end function
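The LLP update of Algorithm 1 can be sketched as a single pass over one local group. Names are assumptions; the fitness comparison uses the raw objective value for a minimization problem.

```python
import random

def llp_update(group, local_leader, f_obj, pr=0.8):
    """SMO Local Leader Phase sketch (Eq. (5), greedy selection).

    group        : list of position vectors in one local group.
    local_leader : position vector of that group's leader.
    """
    D = len(local_leader)
    for i, x in enumerate(group):
        if random.random() >= pr:
            continue                      # keep the current position
        # pick a random group mate r != i for the social term
        r = random.choice([k for k in range(len(group)) if k != i])
        new = [x[j]
               + random.random() * (local_leader[j] - x[j])
               + random.uniform(-1, 1) * (group[r][j] - x[j])
               for j in range(D)]
        if f_obj(new) < f_obj(x):         # greedy selection
            group[i] = new
    return group
```

Because each replacement is greedy, the total fitness of the group can only improve or stay unchanged after one pass.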

4.1.3. Global Leader Phase (GLP)

In this stage of the SMO algorithm, which starts immediately after the LLP, all members of the spider monkey group change their positions using information received from both the Global Leader and local group members. This step generates a new global spider monkey solution as follows:
$XSM^{new}_{i,j} = XSM_{i,j} + rand(0,1) \times (XGL_j - XSM_{i,j}) + rand(-1,1) \times (XSM_{r,j} - XSM_{i,j})$  (6)
where $XGL_j$ represents the jth component of the global leader position, and the index j is chosen randomly from the set $\{1, 2, \dots, D\}$. The generation of $XSM^{new}_{i,j}$ follows a random selection mechanism based on a value $prob_i$, which is calculated using the following formula [31]:
$prob_i = \frac{fitness_i}{\sum_{i=1}^{N} fitness_i}$  (7)
It is evident from Equation (7) that a spider monkey with a better fitness value has a greater chance of improving itself, where $fitness_i$ represents the fitness value of the ith spider monkey. Furthermore, just as in the LLP, the existing solution $XSM_{i,j}$ is replaced by $XSM^{new}_{i,j}$ if the new one is superior. Algorithm 2 provides a detailed explanation of the process of generating a new solution and updating the existing one in the SMO GLP.
Algorithm 2: Global Leader Phase (GLP)
Input: prob_i
Output: XSM_new(i,j)
1: function SMO-GLP
2:   for each spider monkey i in the whole SM group do
3:     if rand(0,1) < prob_i then
4:       Select an index j randomly from the set {1, 2, …, D}
5:       Compute a new spider monkey solution XSM_new(i,j) using (6).
6:       if f_obj(XSM_new(i,j)) < f_obj(XSM(i,j)) then
7:         XSM(i,j) = XSM_new(i,j)
8:       end if
9:     else
10:      XSM_new(i,j) = XSM(i,j)
11:    end if
12:  end for
13: end function
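A simplified single-pass sketch of the GLP (Equations (6) and (7)) follows. Names are assumptions, and since the paper minimizes a cost, fitness is taken here as $1/(1 + cost)$ so that a lower cost yields a higher selection probability; this conversion is a common convention, assumed rather than stated in the source.

```python
import random

def glp_update(pop, global_leader, f_obj):
    """SMO Global Leader Phase sketch: prob_i gate, one-dimension move."""
    D = len(global_leader)
    # Eq. (7): selection probability proportional to fitness
    fit = [1.0 / (1.0 + f_obj(x)) for x in pop]
    total = sum(fit)
    for i, x in enumerate(pop):
        if random.random() >= fit[i] / total:
            continue                          # member not selected
        j = random.randrange(D)               # one random dimension (Eq. (6))
        r = random.choice([k for k in range(len(pop)) if k != i])
        new = list(x)
        new[j] = (x[j]
                  + random.random() * (global_leader[j] - x[j])
                  + random.uniform(-1, 1) * (pop[r][j] - x[j]))
        if f_obj(new) < f_obj(x):             # greedy selection
            pop[i] = new
    return pop
```

In the full algorithm this loop repeats until every member has had a chance to update; the sketch shows one pass only.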

4.1.4. Global Leader Learning (GLL) Phase

In this phase of the SMO algorithm, the solution carried by the Global Leader is updated using greedy selection across the entire population: the spider monkey with the best fitness simply becomes the global leader. If the update is not performed, the number of non-updating iterations is tracked by the Global Limit Count, which is incremented by one each time the update fails, until it reaches the limit specified by the Global Leader Limit (GLL).

4.1.5. Local Leader Learning (LLL) Phase

This stage is similar to the GLL phase in its steps. The only difference is that the update here concerns the position of each Local Leader within its subgroup. If the position of a Local Leader is not updated, the Local Limit Count specific to that subgroup is incremented by one until it reaches the limit specified by the Local Leader Limit.

4.1.6. Local Leader Decision (LLD) phase

The SMO algorithm enters this stage after checking whether the kth Local Limit Count has reached or exceeded the Local Leader Limit. If this condition is met, all members of the kth spider monkey group update their positions either through random initialization (Equation (4)) or using the following equation:
$XSM^{new}_{l,j} = XSM_{l,j} + rand(0,1) \times (XGL_j - XSM_{l,j}) + rand(0,1) \times (XSM_{l,j} - XLL_{k,j})$  (8)
where $XSM^{new}_{l,j}$ is the newly generated solution for the kth group, $XSM_{l,j}$ is the existing solution for the kth group, and l is an index that takes the cyclic values $l \in \{l_1^k, l_1^k + 1, \dots, l_1^k + N/MG\}$, where $l_1^k$ is the first index of the kth group. The position of each Local Leader is then updated to that of the best spider monkey within its group. Algorithm 3 provides a detailed description of the LLD phase.
Algorithm 3: Local Leader Decision (LLD) phase
Input: pr, Local Leader Limit (LLL), MG
Output: XSM_new(l,j) for the kth group, XLL_SM_new(k,j), the kth Local Limit Count = 0
1: function SMO-LLD
2:   if the kth Local Limit Count ≥ Local Leader Limit then
3:     for each l ∈ {l1k, l1k + 1, …, l1k + N/MG} do
4:       for each j ∈ {1, 2, …, D} do
5:         if rand(0,1) < pr then
6:           Compute a new spider monkey solution XSM_new(l,j) using (4).
7:         else
8:           Compute a new spider monkey solution XSM_new(l,j) using (8).
9:         end if
10:      end for
11:    end for
12:    XLL_SM_new(k,j) = the best XSM_new(l,j)
13:    The kth Local Limit Count = 0
14:  end if
15: end function

4.1.7. Global Leader Decision (GLD) Phase

This step in the SMO algorithm is based on the status of the GLL phase counter. If the latter is greater than or equal to the Global Leader Limit, the population is divided. This division leads to the creation of new local leaders for each part of the population and a global leader for the entire population. These actions, including fragmentation, creation of new local leaders, and global leader creation, are monitored by the status of the GLL phase counter until the maximum number of divisions, equal to the Maximum Group Limit, is reached. At this stage, all groups created are combined into a single global group. It is important to note that the GLL phase counter is reset to zero after each division and aggregation. Algorithm 4 provides a detailed explanation of the GLD phase.
Algorithm 4: Global Leader Decision (GLD) phase
Input: Global Leader Limit (GLL), MG
Output: XGL_SM_new(j), XLL_SM_new(k,j) for each kth group, Global Limit Count = 0
1: function SMO-GLD
2:   if Global Limit Count ≥ Global Leader Limit then
3:     if Number of groups < MG then
4:       Number of groups = Number of groups + 1
5:       Generate a new local leader XLL_SM_new(k,j) for each of the existing groups, where l ∈ {l1k, l1k + 1, …, l1k + N/MG} and k is the index of the kth group
6:       Generate a new XGL_SM_new(j)
7:       Global Limit Count = 0
8:     else
9:       Number of groups = 1
10:      Global Limit Count = 0
11:    end if
12:  end if
13: end function

4.2. The Beta-Hill Climbing Optimizer (BHC)

The Beta-Hill Climbing optimizer, created by Mohammed Azmi Al-Betar [45], is a novel explorative local search method. The BHC algorithm is iterative and starts with the generation and evaluation of a random solution $x = (x_1, x_2, \dots, x_D)$, where D is the dimension of the problem. The algorithm employs two main operators, the N-operator and the B-operator, used for exploitation and exploration, respectively. The N-operator serves as a neighborhood search, while the B-operator operates similarly to a mutation operator. In each iteration of the BHC algorithm, the N-operator is executed before the B-operator. After the initial random solution x is evaluated using the objective function $f_{obj}(\cdot)$, the N-operator updates it by producing a new solution $x'$ in the vicinity of x:
$x'_j = x_j \pm rand(0,1) \times bw$  (9)
where j is randomly selected from the set $\{1, 2, \dots, D\}$, bw denotes the distance between the new solution and the current one, and $rand(0,1)$ represents a random number in the range $[0, 1]$. It is important to note that, in each iteration, the N-operator modifies only one randomly chosen component of the solution vector $x = (x_1, x_2, \dots, x_D)$.
Using the B-operator, a new solution component $x''_j$ is produced; its index j is selected beforehand at random, as shown in the following formula:
$x''_j = \begin{cases} l_j + (u_j - l_j) \times rand_1(0,1) & \text{if } rand_2(0,1) < B \\ x'_j & \text{otherwise} \end{cases}$  (10)
If the condition $rand_2(0,1) < B$ is verified, the index j is randomly selected from the set $\{1, 2, \dots, D\}$; $rand_1(0,1)$ and $rand_2(0,1)$ are random numbers in $[0, 1]$. Finally, we obtain a new solution vector $x^{new}$, for example equal to:
$x^{new} = (x_1, x''_2, x''_3, x'_4, \dots, x_D)$  (11)
It should be noted that $x^{new}$ has only one component obtained using the N-operator and at least one component found using the B-operator. Additionally, a component $x''$ can itself be replaced by another $x''$ produced during the B-operator stage. Finally, if $f_{obj}(x^{new}) < f_{obj}(x)$, then x is replaced by $x^{new}$.
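The BHC loop can be sketched as follows. This is a minimal illustration with assumed names and scalar bounds; the defaults bw = 0.001 and B = 0.1 mirror the parameter settings used later in the experiments, and the boundary clamping is an assumption.

```python
import random

def bhc(x, f_obj, lb, ub, bw=0.001, B=0.1, iters=100):
    """Beta-Hill Climbing sketch: N-operator then B-operator, greedy accept.

    N-operator: perturb one random component by +/- rand * bw (Eq. (9)).
    B-operator: reset each component to a uniform random value in
    [lb, ub] with probability B (Eq. (10)).
    """
    x = list(x)
    best = f_obj(x)
    D = len(x)
    for _ in range(iters):
        cand = list(x)
        # N-operator: neighbourhood move on one random dimension
        j = random.randrange(D)
        cand[j] += random.choice((-1, 1)) * random.random() * bw
        # B-operator: probabilistic random reset of each dimension
        for k in range(D):
            if random.random() < B:
                cand[k] = lb + random.random() * (ub - lb)
        cand = [min(max(v, lb), ub) for v in cand]   # clamp to bounds (assumed)
        fc = f_obj(cand)
        if fc < best:                                # greedy acceptance
            x, best = cand, fc
    return x, best
```

The small bw keeps the N-operator exploitative around the current solution, while the occasional B-operator resets provide the exploration that lets BHC escape local minima.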

4.3. Proposed Method

The SMO algorithm has been improved by combining it with the BHC optimizer to create a new and more efficient variant called SMOBHC. This hybrid approach leverages the standard form of the BHC optimizer to optimize solutions generated in the SMO Local Leader Phase (LLP), improve solutions produced in the SMO Global Leader Phase (GLP), and finally update the positions of Local Leader members under specific conditions in the SMO Local Leader Decision (LLD) phase. The algorithmic changes made in each phase are outlined in Algorithms 5–7 and visually depicted in Figure 3, Figure 4 and Figure 5. A comprehensive flowchart of the SMOBHC algorithm can be found in Figure 6.
It is noteworthy that integrating the N- and B-operators into these three crucial phases of the SMO algorithm can greatly enhance its global exploration and exploitation abilities (action of the N-operator) and its convergence behavior (action of the B-operator). These improvements are highlighted in the experimental results.
[Algorithms 5–7 (the BHC-based LLP, GLP, and LLD procedures) appear as image listings in the original article.]
Figure 3. Flowchart of the Local Leader Phase (LLP) based on BHC.
Figure 4. Flowchart of the Global Leader Phase (GLP) based on BHC.
Figure 5. Flowchart of the Local Leader Decision (LLD) phase based on BHC.
Figure 6. Flowchart of the SMOBHC algorithm.
Additionally, Figure 7 depicts the graphical representation of the UCAV path planning based on the SMOBHC algorithm. The flight device receives the coordinates of the optimal path nodes offline.

5. Experimental Results and Discussions

To demonstrate the effectiveness and efficiency of our proposed method, SMOBHC, we conducted tests on three battlefields containing ten, thirty, and twenty randomly distributed threats, respectively, under specific conditions. The algorithms were run on a PC with an Intel(R) Core(TM) i7-4510U CPU at 2.60 GHz, 6 GB RAM, and a 64-bit Windows 10 operating system, and were implemented in MATLAB R2022b. The centers and radii of all threats in the three battlefields were defined as follows:
Battlefield 1:
X threat   c e n t e r 1 = 350 ,   105 ,   305 ,   105 , 175 ,   245 ,   415 ,   480 ,   40 ,   470 ,
Y threat   c e n t e r 1 = 200 , 25 , 150 ,   95 ,   55 ,   60 ,   0 ,   110 ,   100 , 50   ,
Radius threat   c e n t e r 1 = 140 ,   70 ,   150 ,   30 ,   25 ,   25 ,   25 ,   90 ,   40 ,   30   .
Battlefield 2:
X threat   c e n t e r j 2 = X threat   c e n t e r 1 , X threat   c e n t e r 1 + 500 , X threat   c e n t e r 1 + 1000 ,   where   j = 1 , ...30 ,
Y threat   c e n t e r j 2 = Y threat   c e n t e r 1 , Y threat   c e n t e r 1 , Y threat   c e n t e r 1 ,   where   j = 1 , ...30 ,
Radius threat   c e n t e r j 2 = . Radius threat   c e n t e r 1 , Radius threat   c e n t e r 1 , Radius threat   c e n t e r 1 ,   where   j = 1 , ...30 ,
Battlefield 3:
X_threat center_j^3 = [X_threat center^1, X_threat center^1 + 500], where j = 1, …, 20,
Y_threat center_j^3 = [Y_threat center^1, Y_threat center^1], where j = 1, …, 20,
Radius_threat center_j^3 = [Radius_threat center^1, Radius_threat center^1], where j = 1, …, 20.
The starting and terminal nodes for the three battlefields are positioned at points S and T, with coordinates S1(0, 0) and T1(500, 0), S2(0, 0) and T2(1000, 0), and S3(10, 10) and T3(950, 0), respectively. Additionally, the length of the segment ST for each battlefield is calculated using the following equation:
|ST| = sqrt((x_Tj − x_Sj)² + (y_Tj − y_Sj)²)
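Since the second and third battlefields are tilings of the first along the X axis, the whole setup can be built programmatically. The following Python sketch (a hedged translation of the paper's MATLAB setup; variable names are illustrative) constructs the three threat sets and computes |ST| for each battlefield:

```python
import math

# Battlefield 1 threat data taken from the paper (centers and radii).
X1 = [350, 105, 305, 105, 175, 245, 415, 480, 40, 470]
Y1 = [200, 25, 150, 95, 55, 60, 0, 110, 100, 50]
R1 = [140, 70, 150, 30, 25, 25, 25, 90, 40, 30]

def build_battlefield(x_offsets):
    """Tile battlefield 1 along the X axis with the given offsets;
    Y coordinates and radii are simply repeated."""
    xs = [x + off for off in x_offsets for x in X1]
    ys = Y1 * len(x_offsets)
    rs = R1 * len(x_offsets)
    return xs, ys, rs

# Battlefield 2: three copies shifted by 0, 500, and 1000 (30 threats).
X2, Y2, R2 = build_battlefield([0, 500, 1000])
# Battlefield 3: two copies shifted by 0 and 500 (20 threats).
X3, Y3, R3 = build_battlefield([0, 500])

def segment_length(S, T):
    """Euclidean length of the straight segment ST."""
    return math.hypot(T[0] - S[0], T[1] - S[1])

# |ST| for the three battlefields, from the S/T coordinates above.
L1 = segment_length((0, 0), (500, 0))      # 500.0
L2 = segment_length((0, 0), (1000, 0))     # 1000.0
L3 = segment_length((10, 10), (950, 0))    # about 940.05
```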
In this study, SMOBHC is benchmarked on the three battlefields under three different problem dimensions: 30, 60, and 90. The control parameters of SMOBHC are set as follows: the population size N is 50, the maximum group limit (MG) is 2, the initial group number (GroupNumber) is 1, LLL = N × D, GLL = 60, pr = 0.8, the N-operator bandwidth (bw) is 0.001, and the B-operator probability (B) is 0.1. SMOBHC is evaluated with a maximum of 1000 iterations (Iter_max) and run 20 times (Run_max). The best, worst, mean, and standard deviation (std.) of all SMOBHC results are calculated.
The SMOBHC algorithm is compared to SMO and 13 other algorithms for the first two battlefields under the same conditions. These algorithms are PSO, GQPSO [46], BA [47], CS [48], DE, FA [49], GCMBO [50], GWO [51], HS [52], WOA [53], ALO [54], AOA [55], and DA [56]. For the third battlefield, it is compared to SMO and other algorithms, such as AHA [57], AOS [58], ARO [59], SO [60], BWO [61], SCO [62], GBO [63], DMOA [64], DO [65], EO [66], FHO [67], WSO [68], and POA [69]. The results of the first two tests (battlefields 1 and 2) obtained using all 15 algorithms are shown in Table 3, Table 4, Table 5 and Table 6, and those of the third test (battlefield 3) are shown in Table 7 and Table 8.
Table 4, Table 6, and Table 8 present the statistical results obtained from the Wilcoxon’s rank-sum nonparametric statistical test, which was performed with a significance level of 0.05. The best results and rankings are shown in bold. The comparison of the convergence behavior of SMOBHC with other algorithms, the boxplot illustrations, and the path planning results optimized by SMOBHC and seven other competitive algorithms (selected as examples) are presented for Battlefield 1, Battlefield 2, and Battlefield 3.
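As an illustration of the statistical procedure, below is a minimal pure-Python sketch of the two-sided Wilcoxon rank-sum test with a normal approximation (in practice a library routine such as MATLAB's ranksum or scipy.stats.ranksums would be used; the sample data here are made up):

```python
import math

def ranksum_test(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation, average
    ranks for ties), as used to compare two samples of run results.
    Returns the z statistic and the two-sided p-value."""
    pooled = sorted((v, 0 if i < len(a) else 1)
                    for i, v in enumerate(list(a) + list(b)))
    # Assign 1-based ranks, averaging over tied values.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return z, p

# Two samples with clearly different locations: p < 0.05 rejects H0.
z, p = ranksum_test([1, 2, 3, 4, 5], [11, 12, 13, 14, 15])
```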
The boxplot illustrations consist of 20 run results in each box, where the central line (drawn in red) represents the median, the edges of the box represent the 25th and 75th percentiles, and the whiskers extend to the most extreme data points that are not considered outliers. Outliers are plotted individually.
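The boxplot convention described above can be reproduced numerically. The sketch below computes the median, box edges, whisker ends, and outliers for a set of run results, assuming the common 1.5 × IQR outlier rule (MATLAB's default whisker setting); the data and the percentile interpolation scheme are illustrative:

```python
def boxplot_stats(data):
    """Summary statistics matching the boxplot convention: median,
    25th/75th percentiles, whiskers at the most extreme non-outliers,
    and outliers beyond 1.5 * IQR from the box edges."""
    xs = sorted(data)
    n = len(xs)

    def percentile(p):
        # Linear interpolation between adjacent order statistics.
        k = (n - 1) * p
        lo, hi = int(k), min(int(k) + 1, n - 1)
        return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

    q1, med, q3 = percentile(0.25), percentile(0.5), percentile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inliers = [x for x in xs if lo_fence <= x <= hi_fence]
    outliers = [x for x in xs if x < lo_fence or x > hi_fence]
    return {"median": med, "q1": q1, "q3": q3,
            "whiskers": (inliers[0], inliers[-1]), "outliers": outliers}

# Example: 20 run results, with one extreme value flagged as an outlier.
runs = [50, 51, 52, 52, 53, 53, 54, 54, 55, 55,
        56, 56, 57, 57, 58, 58, 59, 60, 61, 90]
stats = boxplot_stats(runs)
```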
Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the convergence behavior of SMOBHC compared to other algorithms, the boxplot illustrations, and the path planning results optimized by SMOBHC and seven other competitive algorithms for Battlefield 1. Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17 display the same information for Battlefield 2, while Figure 18, Figure 19, Figure 20, Figure 21 and Figure 22 show the results for Battlefield 3 in the same order of illustration as in Battlefield 1.
Battlefield 1:
For the first experimental test, related to the first battlefield, Table 3 clearly shows that SMOBHC outperforms all its competitors in the best, worst, and mean values, except for the ALO algorithm in the worst value for the third-dimension case. Furthermore, the ALO algorithm achieved the best standard deviation values among the competitors, owing to its adaptive boundary-shrinking and elitism mechanisms that control its exploitation, as well as its use of a random walk and roulette-wheel selection in its search process. SMOBHC also provides very competitive results and outperforms SMO in most comparison indices (best, worst, mean, and standard deviation values). This result confirms that integrating the BHC optimizer into the SMO algorithmic structure is useful. Specifically, adding the BHC optimizer to the SMO LLP, GLP, and LLD phases can significantly improve its global exploration and exploitation characteristics through the use of both the B-operator and the N-operator.
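To make the role of the two BHC operators concrete, here is a minimal Python sketch of Beta-Hill Climbing refining a single solution, using the paper's settings bw = 0.001 and β = 0.1 (the objective function and bounds are illustrative, not the UCAV cost model; see Al-Betar [45] for the exact operator definitions):

```python
import random

def beta_hill_climbing(x0, f, lb, ub, bw=0.001, beta=0.1, iters=200):
    """Beta-Hill Climbing sketch: the N-operator makes a small bandwidth
    move on one random dimension, the B-operator resets each dimension
    to a random value with probability beta, and the candidate replaces
    the incumbent only if it does not worsen the fitness."""
    best, best_f = list(x0), f(x0)
    for _ in range(iters):
        cand = list(best)
        # N-operator: neighborhood move of size at most bw.
        i = random.randrange(len(cand))
        cand[i] += random.uniform(-1, 1) * bw
        # B-operator: random reset with probability beta per dimension.
        for j in range(len(cand)):
            if random.random() < beta:
                cand[j] = random.uniform(lb, ub)
        cand = [min(max(v, lb), ub) for v in cand]  # keep within bounds
        cand_f = f(cand)
        if cand_f <= best_f:                        # greedy acceptance
            best, best_f = cand, cand_f
    return best, best_f

# Illustrative use on a sphere function.
sphere = lambda v: sum(t * t for t in v)
x0 = [0.5] * 5
xb, fb = beta_hill_climbing(x0, sphere, lb=-1.0, ub=1.0)
```

The greedy acceptance guarantees the refined solution is never worse than the input, which is why BHC can be safely applied to each new Spider Monkey position.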
Figure 8, Figure 9 and Figure 10 show that SMOBHC has an acceptable convergence speed (specifically for the D = 30 and D = 60 cases) compared to its competitors. The superiority of SMOBHC can also be seen from the boxplot graphs (Figure 11), which illustrate that SMOBHC has the smallest average value among the compared algorithms. The non-parametric statistical results based on Wilcoxon's rank-sum test, reported in Table 4, prove that SMOBHC performs better than all other algorithms in the three problem dimensions, except for SMO in the third-dimension case. Figure 12 shows the UCAV flight paths obtained by SMOBHC, SMO, and six other algorithms (selected as examples) for the first-dimension problem. The optimized paths generated by SMOBHC are smooth and can significantly avoid the threat areas with the lowest threat cost. It is clear from Figure 12, especially the paths relative to BA, GCMBO, and HS, that their calculated paths have poor quality in terms of stability and avoiding local optima. Other algorithms, such as DE, AOA, and DA, were able to find an acceptable optimal flight path, but with lower quality compared to SMOBHC and SMO.
Figure 8. Comparative convergence curves of algorithms for battlefield 1 with D = 30 .
Figure 9. Comparative convergence curves of algorithms for battlefield 1 with D = 60 .
Figure 10. Comparative convergence curves of algorithms for battlefield 1 with D = 90 .
Figure 11. Boxplot graphs of mean values relative to all algorithms for battlefield 1 with D = 30 , 60, and 90.
Figure 12. Comparison between SMOBHC and other seven algorithms for battlefield 1 with D = 30 .
Battlefield 2:
For this experimental test, the battlefield under examination has an increased number of threats, totaling thirty elements distributed over a domain with a length of 1000 units. As seen in Table 5, SMOBHC clearly outperforms all competing algorithms in terms of the best, worst, mean, and std. values, except for the std. value obtained by SMO for the third-dimension problem. Statistically, SMOBHC outperforms all examined methods except SMO, where there is no significant difference between them. Figure 13, Figure 14 and Figure 15 demonstrate that SMOBHC converges faster to the optimal flight path than all other algorithms for the 30-dimensional problem; as the dimension increases to 60 and 90, the convergence speed of all algorithms decreases, SMOBHC's included, yet its final result remains better than those of all competitors. The boxplot illustrations in Figure 16 show the dominance of SMOBHC over its competitor algorithms. Figure 17 illustrates the flight paths of SMOBHC, SMO, and six other algorithms (selected as examples) for the third-dimension problem on this battlefield. The first observation is that the SMOBHC-optimized flight path is superior to those of all tested methods in terms of flight-path stability and avoidance of local optima. On the other hand, SMO is trapped in a local optimum, similar to PSO and GWO, while the BA, CS, and HS algorithms fail to produce a flight path of acceptable quality.
Figure 13. Comparative convergence curves of algorithms for battlefield 2 with D = 30 .
Figure 14. Comparative convergence curves of algorithms for battlefield 2 with D = 60 .
Figure 15. Comparative convergence curves of algorithms for battlefield 2 with D = 90 .
Figure 16. Boxplot graphs of mean values relative to all algorithms for battlefield 2 with D = 30 , 60, and 90.
Figure 17. Comparison between SMOBHC and other seven algorithms for battlefield 2 with D = 90 .
Battlefield 3:
For the next experimental test, related to the third battlefield consisting of twenty threats, the proposed algorithm outperforms all its competitors in the best, worst, and mean values, as shown in Table 7. Regarding the obtained standard deviation values, the AHA algorithm performs better for the first-dimension case and the BWO algorithm for both the second- and third-dimension cases. SMOBHC has a relatively slow convergence speed, but its curve continuously converges to a better fitness value than those of all other algorithms. As shown in Table 8, SMOBHC displays superior performance to all other algorithms except SMO, where there is no significant difference between them. The boxplot illustrations in Figure 21 indicate the superiority of SMOBHC over its competitors, while Figure 22 shows the flight paths produced by SMOBHC, SMO, and six other algorithms for the second-dimension problem with a small modification to the starting and ending points. Despite the change in flight conditions, SMOBHC and SMO still produced optimal flight paths of good quality.
In particular, the optimized flight paths of SMOBHC and SMO have minimal collisions with the battlefield threat areas, unlike some algorithms, such as DMOA and POA, whose optimized flight paths reveal a clear insufficiency in handling the problem. Meanwhile, the paths related to the AOS, SO, DO, and EO algorithms are all trapped in local optima.
Figure 18. Comparative convergence curves of algorithms for battlefield 3 with D = 30 .
Figure 19. Comparative convergence curves of algorithms for battlefield 3 with D = 60 .
Figure 20. Comparative convergence curves of algorithms for battlefield 3 with D = 90 .
Figure 21. Boxplot graphs of mean values relative to all algorithms for battlefield 3 with D = 30 , 60, and 90.
Figure 22. Comparison between SMOBHC and other seven algorithms for battlefield 3 with D = 60 .
The integration of the basic SMO algorithm with the BHC optimizer has resulted in a new SMO variant, referred to as SMOBHC. This hybrid algorithm has a simple yet effective structure and has been shown to outperform a recent class of meta-heuristic algorithms in solving the UCAV path planning problem at varying scales.
The combination of these two algorithms results in a powerful tool for addressing the complex and challenging task of UCAV path planning. The significance of the SMOBHC algorithm lies in its ability to provide optimal solutions for the path planning problem with a high degree of accuracy and efficiency, outperforming other contemporary methods. These findings make SMOBHC a valuable contribution to the field of UCAV path planning and optimization.

6. Conclusions

In this paper, a novel variant of the Spider Monkey Optimization (SMO) algorithm is proposed and investigated for UCAV path planning. Our proposed algorithm combines the Beta-Hill Climbing Optimizer (BHC) with the basic version of the SMO algorithm to improve its exploration and exploitation capabilities and convergence behavior. The hybridization mechanism uses the BHC optimizer in its standard form to first improve each new Spider Monkey solution (position) generated in the SMO Local Leader Phase (LLP), then enhance each new solution produced in the SMO Global Leader Phase (GLP), and finally update the positions of all Local Leader members of each local group under a specific condition in the SMO Local Leader Decision (LLD) phase.
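The hybridization described above can be summarized in a compact sketch. The code below is an illustrative Python skeleton, not the authors' implementation: the SMO position-update rule is a simplified placeholder, only the BHC refinement point in the LLP is shown explicitly (GLP and LLD invoke it analogously), and all names are hypothetical:

```python
import random

def bhc_refine(sol, f, lb, ub, bw=0.001, beta=0.1, steps=20):
    """Hedged stand-in for the BHC refinement applied at the three
    integration points (see Al-Betar [45] for the exact operators)."""
    best, best_f = list(sol), f(sol)
    for _ in range(steps):
        # B-operator: random reset of each dimension with probability beta.
        cand = [v if random.random() >= beta else random.uniform(lb, ub)
                for v in best]
        # N-operator: bandwidth move on one random dimension, then clamp.
        i = random.randrange(len(cand))
        cand[i] = min(max(cand[i] + random.uniform(-1, 1) * bw, lb), ub)
        cand_f = f(cand)
        if cand_f <= best_f:            # greedy acceptance
            best, best_f = cand, cand_f
    return best

def smobhc_iteration(group, f, lb, ub, pr=0.8):
    """One simplified SMOBHC step for a single group: each monkey may
    move toward its local leader (placeholder LLP update) and the new
    position is refined by BHC; GLP and LLD would call bhc_refine in
    the same way on their newly generated positions."""
    leader = min(group, key=f)
    new_group = []
    for sm in group:
        if random.random() < pr:        # perturbation rate pr
            cand = [s + random.random() * (l - s)
                    for s, l in zip(sm, leader)]
            sm = bhc_refine(cand, f, lb, ub)
        new_group.append(sm)
    return new_group

sphere = lambda v: sum(t * t for t in v)
grp = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(6)]
out = smobhc_iteration(grp, sphere, -1.0, 1.0)
```

Because the leader is either kept or only refined by greedy BHC moves, the best fitness in the group can never deteriorate across an iteration.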
Experimental results show that our proposed SMO variant, called SMOBHC, is more competitive than twenty-six state-of-the-art evolutionary algorithms, as well as the standard SMO, in terms of the quality and stability of the final generated paths for UCAV path planning. SMOBHC outperforms almost all competing algorithms in terms of accuracy (best, worst, mean, and std. values) for the three battlefields under different problem dimensions (30, 60, and 90). Statistically, SMOBHC performs better than all its rival algorithms for the three battlefields except SMO, where there is no significant difference between them. In terms of convergence behavior, SMOBHC has an acceptable convergence speed compared to its competitors, especially for the first battlefield with problem dimensions equal to 30 and 60.
Despite the promising results achieved in our study, several limitations must be acknowledged. Firstly, our focus on UCAV path planning within a 2D space is a constraint. Secondly, the method's applicability is limited to a static environment. Lastly, the computational demands of the method can be challenging. In light of these limitations, we are motivated to explore further improvements. Our plan is to develop a more comprehensive 2D mathematical model for UCAV path planning, which we believe will help us better understand the advantages of the SMO algorithm and its variants. With this new model, we aim to create an even more advanced algorithm and rigorously test it using a newly developed 3D UCAV battlefield model that accounts for both static and dynamic threats. Our ultimate goal is to provide a more comprehensive solution for UCAV path planning in complex and challenging environments.

Author Contributions

Conceptualization, F.A., A.A. and X.-Z.G.; methodology, F.A. and S.B.; software, F.A., S.B. and X.-Z.G.; validation, F.A., X.-Z.G. and I.B.; formal analysis, F.A., A.A. and S.B.; investigation, F.A., S.B. and I.B.; writing—original draft preparation, F.A., N.K. and F.L.; writing—review and editing, F.A., N.K. and F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Variables, abbreviations, and adjustable control variables signification.
Parameter | Signification
D | Number of steps
ST | The line that directly connects points S and T
δ | Parameter that determines the form of the density function
R_i | The radius of the i-th threat element
γ | A weight coefficient ranging in [0, 1]
|ST| | The length of the segment ST
n_thr | Total number of threats in the battlefield
FFS | Fission–Fusion System of spider monkeys
LLP | Local Leader Phase
GLP | Global Leader Phase
LLD | Local Leader Decision phase
MG | Maximum Group limit
GLL | Global Leader Limit
LLL | Local Leader Limit
pr | Perturbation rate
N | Number of Spider Monkey positions
GroupNumber | Group number counter (number of groups)
bw | N-operator bandwidth
B | B-operator probability
Iter_max | Maximum number of iterations
Run_max | Maximum number of runs

References

1. Elmokadem, T.; Savkin, A.V. Towards fully autonomous UAVs: A survey. Sensors 2021, 21, 6223.
2. Altan, A.; Hacıoglu, R. Model predictive control of three-axis gimbal system mounted on UAV for real-time target tracking under external disturbances. Mech. Syst. Signal Process. 2020, 138, 106548.
3. Saadi, A.A.; Soukane, A.; Meraihi, Y.; Gabis, A.B.; Mirjalili, S.; Ramdane-Cherif, A. UAV Path Planning Using Optimization Approaches: A Survey. Arch. Comput. Methods Eng. 2022, 29, 4233–4284.
4. Belge, E.; Altan, A.; Hacıoglu, R. Metaheuristic Optimization-Based Path Planning and Tracking of Quadcopter for Payload Hold-Release Mission. Electronics 2022, 11, 1208.
5. Bansal, J.C.; Sharma, H.; Jadon, S.S.; Clerc, M. Spider monkey optimization algorithm for numerical optimization. Memet. Comput. 2014, 6, 31–47.
6. Zhu, H.; Wang, Y.; Li, X. UCAV path planning for avoiding obstacles using cooperative co-evolution Spider Monkey Optimization. Knowl. Based Syst. 2022, 246, 108713.
7. Zhu, H.; Wang, Y.; Ma, Z.; Li, X. A comparative study of swarm intelligence algorithms for UCAV path-planning problems. Mathematics 2021, 9, 171.
8. Guo, T.; Jiang, N.; Li, B.Y. UAV navigation in high dynamic environments: A deep reinforcement learning approach. Chin. J. Aeronaut. 2021, 34, 479–489.
9. Huang, H.; Savkin, A.V. Autonomous Navigation of a Solar-Powered UAV for Secure Communication in Urban Environments with Eavesdropping Avoidance. Future Internet 2020, 12, 170.
10. Huang, H.; Savkin, A.V.; Ni, W. Energy-Efficient 3D Navigation of a Solar-Powered UAV for Secure Communication in the Presence of Eavesdroppers and No-Fly Zones. Energies 2020, 13, 1445.
11. Isik, O.K.; Hong, J.; Petrunin, I.; Tsourdos, A. Integrity analysis for GPS-based navigation of UAVs in urban environment. Robotics 2020, 9, 66.
12. Zhang, S.; Li, Y.; Dong, Q. Autonomous navigation of UAV in multi-obstacle environments based on a Deep Reinforcement Learning approach. Appl. Soft Comput. 2022, 115, 108194.
13. Delamer, J.-A.; Watanabe, Y.; Chanel, C.P.C. Safe path planning for UAV urban operation under GNSS signal occlusion risk. Rob. Auton. Syst. 2021, 142, 103800.
14. He, L.; Aouf, N.; Song, B. Explainable Deep Reinforcement Learning for UAV autonomous path planning. Aerosp. Sci. Technol. 2021, 118, 107052.
15. Al-Kabi, H.; Mazinani, S.M. DNCS: New UAV navigation with considering the no-fly zone and efficient selection of the charging station. Ain Shams Eng. J. 2021, 12, 3669–3676.
16. Wang, S.; Zhan, X.; Zhai, Y.; Chi, C.; Shen, J. Highly reliable relative navigation for multi-UAV formation flight in urban environments. Chin. J. Aeronaut. 2020, in press.
17. Zhu, X.; Wang, L.; Li, Y.; Song, S.; Ma, S.; Yang, F.; Zhai, L. Path planning of multi-UAVs based on deep Q-network for energy-efficient data collection in UAVs-assisted IoT. Veh. Commun. 2022, 36, 100491.
18. Li, J.; Yang, G.; Cai, Q.; Niu, H.; Li, J. Cooperative navigation for UAVs in GNSS-denied area based on optimized belief propagation. Meas. J. Int. Meas. Confed. 2022, 192, 110797.
19. Hoon, L.M.; Moon, J. Deep reinforcement learning-based model-free path planning and collision avoidance for UAVs: A soft actor–critic with hindsight experience replay approach. ICT Express 2022, in press.
20. Habibi, H.; Safaei, A.; Voos, H.; Darouach, M.; Sanchez-Lopez, J.L. Safe navigation of a quadrotor UAV with uncertain dynamics and guaranteed collision avoidance using barrier Lyapunov function. Aerosp. Sci. Technol. 2023, 132, 108064.
21. Chen, H.; Lu, P. Real-time identification and avoidance of simultaneous static and dynamic obstacles on point cloud for UAVs navigation. Robot. Auton. Syst. 2022, 154, 104124.
22. El-Basioni, B.M.M.; El-Kader, S.M.A. Mission-based PTR triangle for multi-UAV systems flight planning. Ad Hoc Netw. 2023, 142, 103115.
23. Wang, Y.; Liu, W.; Liu, J.; Sun, C. Cooperative USV–UAV marine search and rescue with visual navigation and reinforcement learning-based control. ISA Trans. 2023, in press.
24. Seo, D.; Kang, J. Collision-avoided Tracking Control of UAV Using Velocity-adaptive 3D Local Path Planning. Int. J. Control Autom. Syst. 2023, 21, 231–243.
25. Li, T.; Zhang, X.; Zhang, S.; Zhang, X.; Chen, X. STUNS-Planner: A Spatiotemporal Motion Planner with Unbending and Consistency Awareness for Quadrotors in Unknown Environments. J. Intell. Robot Syst. 2023, 107, 7.
26. Zhang, Y.; Wang, P.; Yang, L.; Liu, Y.; Lu, Y.; Zhu, X. Novel Swarm Intelligence Algorithm for Global Optimization and Multi-UAVs Cooperative Path Planning: Anas Platyrhynchos Optimizer. Appl. Sci. 2020, 10, 4821.
27. Shi, K.; Zhang, X.; Xia, S. Multiple Swarm Fruit Fly Optimization Algorithm Based Path Planning Method for Multi-UAVs. Appl. Sci. 2020, 10, 2822.
28. Nayeem, G.M.; Fan, M.; Li, S.; Ahammad, K. A Modified Particle Swarm Optimization for Autonomous UAV Path Planning in 3D Environment. In Cyber Security and Computer Science; Bhuiyan, T., Rahman, M.M., Ali, M.A., Eds.; ICONCS 2020. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering; Springer: Cham, Switzerland, 2020; Volume 325.
29. Ge, F.; Li, K.; Han, Y.; Xu, W.; Wang, Y. Path planning of UAV for oilfield inspections in a three-dimensional dynamic environment with moving obstacles based on an improved pigeon-inspired optimization algorithm. Appl. Intell. 2020, 50, 2800–2817.
30. Chai, X.; Xiao, J.; Zheng, Z.; Zhang, L.; Qu, B.; Yan, L.; Sun, H. UAV 3D Path Planning Based on Multi-Population Ensemble Differential Evolution. In Bio-Inspired Computing: Theories and Applications; Pan, L., Liang, J., Qu, B., Eds.; BIC-TA 2019. Communications in Computer and Information Science; Springer: Singapore, 2020; p. 1159.
31. Mesquita, R.; Gaspar, P.D. A Novel Path Planning Optimization Algorithm Based on Particle Swarm Optimization for UAVs for Bird Monitoring and Repelling. Processes 2022, 10, 62.
32. Xia, S.; Zhang, X. Constrained path planning for unmanned aerial vehicle in 3D terrain using modified multi-objective particle swarm optimization. Actuators 2021, 10, 255.
33. Liu, H.; Ge, J.; Wang, Y.; Li, J.; Ding, K.; Zhang, Z.; Guo, Z.; Li, W.; Lan, J. Multi-UAV optimal mission assignment and path planning for disaster rescue using adaptive genetic algorithm and improved artificial bee colony method. Actuators 2022, 11, 4.
34. Huo, L.S.; Zhu, J.H.; Li, Z.M.; Ma, M.H. A Hybrid Differential Symbiotic Organisms Search Algorithm for UAV Path Planning. Sensors 2021, 21, 3037.
35. Tang, A.D.; Han, T.; Zhou, H.; Xie, L. An improved equilibrium optimizer with application in unmanned aerial vehicle path planning. Sensors 2021, 21, 1814.
36. Ahmed, N.; Pawase, C.J.; Chang, K. Distributed 3-D Path Planning for Multi-UAVs with Full Area Surveillance Based on Particle Swarm Optimization. Appl. Sci. 2021, 11, 3417.
37. Jarray, R.; Al-Dhaifallah, M.; Rezk, H.; Bouallègue, S. Parallel Cooperative Coevolutionary Grey Wolf Optimizer for Path Planning Problem of Unmanned Aerial Vehicles. Sensors 2022, 22, 1826.
38. Zhang, C.; Liu, Y.; Hu, C. Path Planning with Time Windows for Multiple UAVs Based on Gray Wolf Algorithm. Biomimetics 2022, 7, 225.
39. Chu, H.; Yi, J.; Yang, F. Chaos Particle Swarm Optimization Enhancement Algorithm for UAV Safe Path Planning. Appl. Sci. 2022, 12, 8977.
40. Zhang, R.; Li, S.; Ding, Y.; Qin, X.; Xia, Q. UAV Path Planning Algorithm Based on Improved Harris Hawks Optimization. Sensors 2022, 22, 5232.
41. Chen, H.; Zhou, X.; Ran, X.; Wang, J.; Chen, H.; Deng, W. Adaptive cylinder vector particle swarm optimization with differential evolution for UAV path planning. Eng. Appl. Artif. Intell. 2023, 121, 105942.
42. Yu, X.; Jiang, N.; Wang, X.; Li, M. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst. Appl. 2022, 215, 119327.
43. Chowdhury, A.; De, D. RGSO-UAV: Reverse Glowworm Swarm Optimization inspired UAV path-planning in a 3D dynamic environment. Ad Hoc Netw. 2023, 140, 103068.
44. Li, B.; Gong, L.; Zhao, C. Unmanned combat aerial vehicles path planning using a novel probability density model based on artificial bee colony algorithm. In Proceedings of the 2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP), Beijing, China, 9–11 June 2013; pp. 620–625.
45. Al-Betar, M.A. β-hill climbing: An exploratory local search. Neural Comput. Appl. 2017, 28, 153–168.
46. Coelho, L.D.S. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683.
47. Yang, X.S. A New Metaheuristic Bat-Inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NISCO 2010); Cruz, C., Gonzalez, J.R., Pelta, D.A., Terrazas, G., Eds.; Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2010; Volume 284, pp. 65–74.
48. Yang, X.S.; Deb, S. Cuckoo Search via Lévy Flights. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), Coimbatore, India, 9–11 December 2009; IEEE Publications: Piscataway, NJ, USA; pp. 210–214.
49. Yang, X.-S. Firefly algorithm, Lévy flights and global optimization. In Research and Development in Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2010; pp. 209–218.
50. Wang, G.G.; Zhao, X.; Deb, S. A novel monarch butterfly optimization with greedy strategy and self-adaptive. In Proceedings of the 2015 Second International Conference on Soft Computing and Machine Intelligence (ISCMI), Hong Kong, China, 23–24 November 2015; pp. 45–50.
51. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
52. Lee, K.; Geem, Z. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933.
53. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
54. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
55. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
56. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073.
57. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194.
58. Azizi, M. Atomic orbital search: A novel metaheuristic algorithm. Appl. Math. Model 2021, 93, 657–683.
59. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082.
60. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320.
61. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl. Based Syst. 2022, 251, 109215.
62. Shami, T.M.; Grace, D.; Burr, A.; Mitchell, P.D. Single candidate optimizer: A novel optimization algorithm. Evol. Intel. 2022.
63. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inform. Sci. 2020, 540, 131–159.
64. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570.
65. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075.
66. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190.
67. Azizi, M.; Talatahari, S.; Gandomi, A.H. Fire Hawk Optimizer: A novel metaheuristic algorithm. Artif. Intell. Rev. 2022, 56, 287–363.
68. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. Based Syst. 2022, 243, 108457.
69. Trojovský, P.; Dehghani, M. Pelican Optimization Algorithm: A Novel Nature-Inspired Algorithm for Engineering Applications. Sensors 2022, 22, 855.
Figure 1. The schematic diagram for modeling the combat field. The X and Y axes represent the length and width of the horizontal battlefield, respectively. S and T indicate the starting and terminal points, respectively. All blue circles represent the threats on the battlefield. The black line connecting S and T directly represents the segment ST. The blue points indicate all generated nodes, which must be linked together to construct the optimal path indicated by the red dashed line.
Figure 2. Thorough graphical depiction of the path planning modeling procedure (e.g., here D = 7 and n_thr = 2).
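The modeling depicted in Figures 1 and 2 — D intermediate nodes distributed along the segment ST, with circular threats to avoid — can be sketched as follows. This is a minimal illustration only: the decision variables are taken to be perpendicular offsets from ST, and the cost is assumed to be path length plus a fixed penalty for nodes falling inside a threat circle. The function names (`decode_path`, `path_cost`) and the penalty scheme are illustrative, not the paper's exact formulation.

```python
import math

def decode_path(S, T, offsets):
    """Map a candidate solution (D perpendicular offsets) to 2D waypoints.

    The segment ST is split into D + 1 equal steps; each decision variable
    is the perpendicular offset of one intermediate node from ST.
    """
    (xs, ys), (xt, yt) = S, T
    dx, dy = xt - xs, yt - ys
    L = math.hypot(dx, dy)
    ux, uy = dx / L, dy / L          # unit vector along ST
    px, py = -uy, ux                 # unit vector perpendicular to ST
    D = len(offsets)
    pts = [S]
    for k, h in enumerate(offsets, start=1):
        t = k / (D + 1)
        pts.append((xs + t * dx + h * px, ys + t * dy + h * py))
    pts.append(T)
    return pts

def path_cost(pts, threats, penalty=100.0):
    """Polyline length plus a penalty for every node inside a threat circle.

    `threats` is a list of (cx, cy, radius) tuples; `penalty` is an
    illustrative constant, not a value taken from the paper.
    """
    cost = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    for (cx, cy, r) in threats:
        for (x, y) in pts:
            if math.hypot(x - cx, y - cy) < r:
                cost += penalty
    return cost
```

With all offsets at zero the decoded path is the straight segment ST, so its cost reduces to the length of ST when no threat circle intersects it.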
Figure 7. Graphical representation of UCAV Path planning based on SMOBHC.
Table 1. A detailed study of navigation-systems-based UAV path planning.
Navigation Systems-Based UAV Path Planning
Ref | Proposed Algorithm | Advantages | Limitations
[8] | A deep reinforcement learning approach for UAV navigation in high dynamic environments. The method uses a neural network to learn control policies from sensor inputs and reward signals. | Can handle high-dynamic environments, is adaptable to different environments, can learn from experience, and requires minimal human intervention. | Needs a large amount of data for training, may suffer from safety issues if deployed without sufficient training, and can be affected by sensor noise and disturbances in the environment.
[9] | Autonomous navigation method for a solar-powered UAV that can provide secure communication in urban environments while avoiding eavesdropping. The method utilizes an onboard camera and a GPS receiver to create a real-time map of the environment and uses machine learning algorithms to detect and avoid potential eavesdropping areas. | Provides secure communication in urban areas, is environmentally friendly and cost-effective, and is an effective tool for surveillance and reconnaissance. | Limited flight time, GPS signal disruption, the accuracy of machine learning algorithms affected by the complexity of urban environments and multiple signals, and privacy and security concerns.
[10] | A 3D navigation method for a solar-powered UAV that provides secure communication in the presence of eavesdroppers and no-fly zones while being energy efficient. The method uses a dynamic programming algorithm to optimize the UAV's flight path and speed based on the available solar energy and communication requirements. | Provides secure communication in challenging environments, is energy efficient, can avoid no-fly zones, and is adaptable to different communication requirements. | May not be suitable for environments with low solar energy availability; requires accurate solar energy prediction; may not be able to handle sudden changes in the environment; may require a more complex communication system for larger distances.
[11] | A novel method for ensuring the integrity of GPS-based navigation for UAVs in an urban environment using an integrity map and a weighted sum of the navigation solution residuals. | Provides a means to detect and mitigate GPS signal errors and can improve the reliability and safety of UAV navigation in urban environments. | Relies on the assumption that the GPS errors are independent and identically distributed, which may not always be the case in practice. Additionally, the method may not be effective in environments where the GPS signal is severely obstructed or jammed.
[12] | A deep reinforcement learning approach for the autonomous navigation of UAVs in multi-obstacle environments. A Deep Q-Network (DQN) agent is trained to select actions that enable the UAV to navigate through an environment with multiple obstacles while maximizing its reward. | Can navigate complex environments without prior knowledge of the environment, handle dynamic obstacles, and handle multiple obstacles simultaneously. It also has the potential to reduce human intervention in UAV navigation, which could lead to increased efficiency and reduced costs. | Training the DQN agent requires significant computational resources. The method may not be well suited to environments with high levels of uncertainty or unpredictability, as the agent's performance depends on the quality of the reward function and the training data. Finally, it may not be able to handle certain types of obstacles or environments, such as those with complex or irregular shapes.
[13] | A safe path planning algorithm for UAVs that uses a grid-based approach and considers the risk of GNSS signal occlusion in urban environments. | Takes into account the risk of GNSS signal occlusion and is able to plan safe paths in urban environments with high accuracy and efficiency. | Relies on accurate and up-to-date maps of the environment and may not be able to handle unforeseen obstacles or changes in the environment in real time.
[14] | An Explainable Deep Reinforcement Learning (X-DRL) algorithm for UAV autonomous path planning. | Provides a way to interpret the decision-making process of the UAV, allowing for greater transparency and accountability. It also achieves higher accuracy and faster convergence compared to other methods. | Relies on a pre-defined reward function and may not be suitable for complex environments where the optimal reward function is unknown. Additionally, the X-DRL algorithm requires a large amount of training data and computational resources.
[15] | A new UAV navigation method called DNCS, which considers no-fly zones and efficiently selects the charging station to optimize the UAV's energy consumption. | Can effectively avoid no-fly zones and choose the best charging station based on the UAV's energy level, resulting in longer flight time and reduced energy consumption. | Evaluated only through simulations, which may not reflect real-world scenarios. The accuracy of the no-fly-zone map and the availability of charging stations may also affect performance.
[16] | A highly reliable relative navigation method for multi-UAV formation flight in urban environments using a combination of vision-based and sensor-based techniques. | Achieves high reliability in relative navigation, even in challenging urban environments with obstacles and GPS-denied conditions. It also allows for flexible formation reconfiguration and can be applied to a variety of multi-UAV formations. | Relies on the availability of multiple UAVs and may not be suitable for single-UAV missions. Accuracy is also affected by the quality of sensor data and the complexity of the environment.
[17] | A Deep Q-Network (DQN)-based path planning algorithm for multiple UAVs for energy-efficient data collection in UAV-assisted IoT. | Can efficiently plan optimal paths for multiple UAVs, reducing energy consumption and enabling more efficient data collection. | Relies on accurate and timely communication between the UAVs and the ground station, which can be challenging in certain scenarios. Additionally, it assumes that the UAVs have unlimited battery capacity, which may not be practical in real-world scenarios.
[18] | An optimized belief-propagation-based cooperative navigation algorithm for multiple UAVs in GNSS-denied areas using only onboard sensors. | Can achieve high positioning accuracy and reliability in GNSS-denied areas by leveraging the cooperation of multiple UAVs and the optimization of the belief propagation algorithm. | Relies on the availability of multiple UAVs and assumes that they can communicate with each other in real time, which may not always be feasible in practice. Additionally, it may require significant computational resources, which may limit its real-time applicability.
[19] | A soft actor-critic with Hindsight Experience Replay (HER) algorithm based on deep reinforcement learning for model-free path planning and collision avoidance for UAVs. | Can learn effective collision-free paths for UAVs in complex environments using only raw sensory input, without relying on a pre-defined map or model of the environment. | Training can be computationally expensive and time-consuming, and the resulting policies may not always generalize well to unseen environments. Additionally, a large amount of data may be needed to achieve good performance, which can be challenging to obtain in real-world scenarios.
[20] | A Barrier Lyapunov Function (BLF)-based control algorithm for safe navigation of quadrotor UAVs in the presence of uncertain dynamics and with guaranteed collision avoidance. | Can guarantee safe and collision-free navigation of quadrotor UAVs even in the presence of model uncertainties and external disturbances. | Assumes that the UAV's state and environment information are accurately known, which may not always be the case in real-world scenarios. Additionally, certain parameters may require tuning, which can be challenging for users without a deep understanding of control theory.
[21] | A real-time obstacle identification and avoidance algorithm based on point cloud data for UAV navigation in environments with simultaneous static and dynamic obstacles. | Can identify and avoid both static and dynamic obstacles in real time, enabling safe and efficient UAV navigation in complex environments. | Relies on accurate and reliable point cloud data, which may not always be available or may be affected by environmental factors such as weather. Additionally, it may struggle to identify and avoid small or transparent obstacles that are not easily detectable in point cloud data.
[22] | A mission-based planning, tasking, and re-tasking (PTR) triangle framework for flight planning of multi-UAV systems in complex missions. | Can facilitate efficient mission planning, tasking, and re-tasking of multi-UAV systems, enabling improved mission performance and adaptability. | May require significant communication and computational resources to achieve real-time re-tasking and adaptation of the UAVs, which can be challenging in certain scenarios. Additionally, it assumes that the UAVs have unlimited battery capacity and does not consider the impact of energy consumption on mission performance.
[23] | A cooperative marine search and rescue framework based on visual navigation and reinforcement learning-based control for USV-UAV systems. | Enables efficient and effective search and rescue missions in marine environments by combining the advantages of both USVs and UAVs. The use of visual navigation and reinforcement learning-based control can improve the adaptability and robustness of the system. | May require significant computational resources to process a large amount of sensory data and perform real-time control of the USV-UAV system. Additionally, it assumes that the system has accurate and reliable sensors and communication capabilities, which may not always be the case in real-world scenarios.
[24] | A velocity-adaptive 3D local path planning method for collision-avoided tracking control of UAVs. | Can enable efficient and safe tracking control of UAVs in complex environments by adapting the velocity of the UAV to avoid obstacles and reduce the risk of collisions. Path planning in 3D can provide more flexibility and maneuverability for the system. | Relies on accurate and reliable sensor data for obstacle detection and path planning, which may not always be available or may be affected by environmental factors such as weather. Additionally, the method may not always guarantee optimal or globally optimal paths for the UAV.
[25] | A spatiotemporal motion planner for quadrotors in unknown environments with unbending and consistency awareness, referred to as STUNS-Planner. | Can enable quadrotors to navigate safely and efficiently in unknown environments by taking into account the dynamics of the quadrotor and the surrounding obstacles. The unbending and consistency awareness can improve the stability and robustness of the system. | May be computationally intensive and require significant computing resources, limiting its real-time applicability. Additionally, it may not always guarantee globally optimal paths and may be affected by sensor noise or inaccuracies in the estimation of the quadrotor dynamics.
Table 2. A detailed study of optimization-techniques-based UAV path planning.
Optimization Techniques-Based UAV Path Planning
Ref | Proposed Algorithm | Advantages | Limitations
[26] | A novel Swarm Intelligence algorithm called the Anas Platyrhynchos Optimizer (APO) for global optimization and multi-UAV cooperative path planning. | Can optimize multiple objectives, such as energy consumption and path length, while taking into account the coordination and cooperation among multiple UAVs. The method has been shown to achieve good performance and convergence speed in simulation experiments. | Effectiveness may depend on the choice of parameters and may be affected by the complexity of the optimization problem. The algorithm may also require significant computational resources for large-scale problems, limiting its practical applicability.
[27] | A Multiple Swarm Fruit Fly Optimization Algorithm-based path planning method for multi-UAVs, which utilizes a hybrid of a Fruit Fly Optimization Algorithm and Swarm Intelligence for path planning of multiple UAVs. | Provides a balance between exploration and exploitation and improves the efficiency and effectiveness of path planning for multi-UAVs. | Does not consider dynamic environments and obstacles, which may limit its applicability in real-world scenarios. Additionally, the computational complexity may increase with an increasing number of UAVs.
[28] | A modified Particle Swarm Optimization (PSO) algorithm for autonomous path planning of UAVs in 3D environments that takes into account obstacles and the need for safe separation distances. | Achieves a higher success rate and a lower computational time than other PSO-based path planning methods. It can also handle complex environments and be used for both single and multiple UAVs. | Assumes that the UAV has complete and accurate knowledge of the environment, which may not be the case in real-world scenarios. Additionally, the method may require parameter tuning to achieve optimal performance.
[29] | An improved Pigeon-Inspired Optimization algorithm (IPIO) for path planning of UAVs in dynamic 3D environments with moving obstacles, specifically designed for oilfield inspections. | Outperforms other popular optimization algorithms in terms of convergence rate and solution quality. It can effectively handle the complexities of a dynamic oilfield environment while optimizing multiple objectives, such as the shortest path and safe separation distances. | Assumes perfect information about the environment and obstacles, which may not always be available in real-world scenarios. Additionally, the computational time may increase with larger search spaces and more UAVs.
[30] | A Multi-Population Ensemble Differential Evolution (MP-EDE) algorithm for path planning of UAVs in 3D environments that optimizes multiple objectives such as collision avoidance and path smoothness. | Shows a better convergence rate, diversity, and robustness compared to other state-of-the-art algorithms for UAV path planning. It can also handle complex 3D environments with multiple UAVs. | Assumes a static environment and may require parameter tuning to achieve optimal performance. Additionally, the computational time may increase with larger search spaces and more UAVs.
[31] | A path planning optimization algorithm based on Particle Swarm Optimization (PSO) for UAVs for bird monitoring and repelling. | Can optimize the path planning for UAVs to effectively monitor and repel birds, which can reduce the damage caused by birds to crops and reduce the risk of bird-aircraft collisions. | The paper does not provide extensive experimental results, and the proposed algorithm may not be applicable to scenarios beyond bird monitoring and repelling.
[32] | A modified Multi-Objective Particle Swarm Optimization (MOPSO) algorithm for path planning of UAVs in 3D terrain with constraints. | Able to find feasible paths that satisfy different constraints while considering multiple objectives, leading to better performance compared to other optimization methods. | Computationally intensive and may not be suitable for real-time applications with limited computational resources. Additionally, the effectiveness of the method depends on the accuracy of the input data used to represent the terrain.
[33] | An Adaptive Genetic Algorithm (AGA) and an Improved Artificial Bee Colony (IABC) method for multi-UAV mission assignment and path planning for disaster rescue. | Able to simultaneously optimize the mission assignment and path planning for multiple UAVs, improving the efficiency and effectiveness of disaster rescue operations. It also incorporates adaptive strategies to enhance the performance of the optimization algorithms. | Has not been tested in real-world disaster rescue scenarios, and its performance may depend on the accuracy of the underlying models and assumptions. Additionally, the computational complexity of the method may limit its scalability to larger numbers of UAVs or more complex scenarios.
[34] | A Hybrid Differential Symbiotic Organisms Search (HDSOS) algorithm for UAV path planning. | Combines the advantages of differential evolution and symbiotic organisms search algorithms, leading to improved global search capabilities and faster convergence in finding optimal paths for UAVs. | Effectiveness may be limited by the complexity of the environment and the number of waypoints to be visited.
[35] | An improved Equilibrium Optimizer (EO) for UAV path planning, which includes a modified search strategy and a new balance operator. | Demonstrated improved performance in finding optimal paths for UAVs compared to other optimization algorithms. | Effectiveness may be limited by the complexity of the environment and the number of waypoints to be visited. Additionally, the algorithm may require parameter tuning for optimal performance.
[36] | Distributed 3D path planning for multiple UAVs based on the Particle Swarm Optimization (PSO) algorithm for full-area surveillance. | Provides a distributed approach to path planning for multiple UAVs with high accuracy and efficiency, as well as full-area surveillance with reduced time and cost. | May require a large number of UAVs to achieve full-area surveillance, and it may not consider dynamic obstacles in real time.
[37] | A Parallel Cooperative Coevolutionary Grey Wolf Optimizer (CCGWO) for path planning of UAVs in complex environments. | Improved convergence speed and solution quality compared to other optimization algorithms, and the ability to handle complex environments with dynamic obstacles. | May require significant computational resources to run, and it may not perform well in cases where the search space is not well defined.
[38] | Path planning for multiple UAVs with time windows using the Grey Wolf Algorithm (GWA). | Improved efficiency and accuracy in path planning with time constraints, as well as the ability to optimize multiple UAVs simultaneously. | May struggle to find optimal solutions in complex environments with many obstacles, and it may not be suitable for real-time applications due to the computational resources required.
[39] | A Chaos-Enhanced Particle Swarm Optimization (CPSO) algorithm for safe path planning of UAVs. | Efficient at finding safe paths for UAVs while avoiding obstacles and can handle complex environments. The chaos enhancement improves the algorithm's convergence speed and accuracy. | Performance may be affected by the selection of hyperparameters, and it may struggle with time-constrained applications as the search process can take longer than expected.
[40] | A path planning algorithm for UAVs based on an improved Harris Hawks Optimization (IHHO) algorithm. | Provides high accuracy in finding optimal paths for UAVs while avoiding obstacles, and it can handle complex environments with multiple objectives. | May require significant computational resources to run, and it may not be suitable for real-time applications where fast planning is required.
[41] | Adaptive Cylinder Vector Particle Swarm Optimization with Differential Evolution (ACVPSO-DE) for UAV path planning. | Provides high-quality solutions for finding safe paths for UAVs while avoiding obstacles in real-time applications. | Performance may be impacted by the selection of hyperparameters, and it may struggle with complex environments with multiple objectives.
[42] | A hybrid algorithm based on the Grey Wolf Optimizer and Differential Evolution (GWO-DE) for UAV path planning. | Provides an efficient and effective solution for finding safe paths for UAVs while avoiding obstacles in dynamic environments. | May struggle with complex environments with multiple objectives, and the selection of hyperparameters may impact performance.
[43] | RGSO-UAV, a path planning algorithm inspired by Reverse Glowworm Swarm Optimization (RGSO) for UAVs in a 3D dynamic environment. | Effective in finding optimal or near-optimal paths for UAVs in dynamic environments while also considering energy consumption, obstacle avoidance, and smoothness of the path. | Limitations are not explicitly mentioned in the abstract; further investigation of the paper may reveal potential shortcomings, such as computational complexity or limited applicability to certain types of environments.
Table 3. Experimental results obtained by the fifteen experimented algorithms for problem dimensions of 30, 60, and 90 on the first battlefield, composed of ten threats.
D | Results | SMOBHC | SMO | PSO | GQPSO | BA | CS | DE | FA | GCMBO | GWO | HS | WOA | ALO | AOA | DA
30 | Best | 2.28947e+00 | 2.29310e+00 | 2.75424e+00 | 2.29066e+00 | 3.92196e+00 | 2.57876e+00 | 2.55853e+00 | 2.65663e+00 | 4.05979e+00 | 2.71719e+00 | 4.44264e+00 | 3.20219e+00 | 2.63780e+00 | 3.07582e+00 | 2.82583e+00
30 | Worst | 2.61597e+00 | 2.71327e+00 | 3.22385e+00 | 4.61145e+00 | 5.29548e+00 | 2.88037e+00 | 2.99855e+00 | 3.72796e+00 | 6.17494e+00 | 3.23827e+00 | 5.66178e+00 | 3.99601e+00 | 2.86127e+00 | 3.44006e+00 | 4.34395e+00
30 | Mean | 2.43943e+00 | 2.53647e+00 | 2.93352e+00 | 2.94125e+00 | 4.69916e+00 | 2.65740e+00 | 2.66122e+00 | 3.06191e+00 | 4.98465e+00 | 2.87950e+00 | 5.21530e+00 | 3.62562e+00 | 2.67352e+00 | 3.21786e+00 | 3.59273e+00
30 | Std. | 1.36303e-01 | 1.17987e-01 | 1.13051e-01 | 6.03611e-01 | 4.82167e-01 | 8.13444e-02 | 1.23028e-01 | 2.90825e-01 | 4.23119e-01 | 1.43531e-01 | 2.94617e-01 | 2.39591e-01 | 7.52754e-02 | 9.50957e-02 | 4.16094e-01
60 | Best | 3.77426e+00 | 3.99230e+00 | 5.61468e+00 | 4.41786e+00 | 1.04104e+01 | 5.87736e+00 | 4.71361e+00 | 5.49910e+00 | 5.49910e+00 | 4.77953e+00 | 1.19106e+01 | 5.86641e+00 | 4.29269e+00 | 6.27408e+00 | 6.32505e+00
60 | Worst | 4.27692e+00 | 4.53576e+00 | 6.45057e+00 | 8.40310e+00 | 1.40119e+01 | 7.81696e+00 | 8.08800e+00 | 7.43200e+00 | 7.43200e+00 | 5.69980e+00 | 1.42811e+01 | 7.74777e+00 | 4.50942e+00 | 6.59945e+00 | 9.00176e+00
60 | Mean | 4.06937e+00 | 4.21641e+00 | 5.97044e+00 | 7.45684e+00 | 1.18586e+01 | 6.65323e+00 | 6.05573e+00 | 6.52296e+00 | 6.52296e+00 | 5.15119e+00 | 1.32566e+01 | 6.69630e+00 | 4.31833e+00 | 6.44717e+00 | 7.32161e+00
60 | Std. | 1.29764e-01 | 1.49738e-01 | 2.28953e-01 | 1.54426e+00 | 9.28414e-01 | 5.57735e-01 | 8.38343e-01 | 6.33541e-01 | 6.33541e-01 | 2.62726e-01 | 6.23371e-01 | 4.79265e-01 | 6.27870e-02 | 9.40193e-02 | 7.25221e-01
90 | Best | 5.20185e+00 | 5.72008e+00 | 9.06882e+00 | 6.46893e+00 | 1.72251e+01 | 1.10881e+01 | 1.23797e+01 | 8.72690e+00 | 8.72690e+00 | 6.83429e+00 | 2.14295e+01 | 9.34339e+00 | 5.94606e+00 | 9.53456e+00 | 1.03423e+01
90 | Worst | 6.77599e+00 | 6.70230e+00 | 1.04167e+01 | 1.19491e+01 | 2.25479e+01 | 1.51550e+01 | 1.58565e+01 | 1.21446e+01 | 1.21446e+01 | 8.07238e+00 | 2.40816e+01 | 1.11104e+01 | 6.22909e+00 | 1.03621e+01 | 1.34498e+01
90 | Mean | 5.97088e+00 | 6.10449e+00 | 9.71420e+00 | 1.05624e+01 | 2.00473e+01 | 1.31477e+01 | 1.44052e+01 | 9.69276e+00 | 9.69276e+00 | 7.39528e+00 | 2.24904e+01 | 1.01782e+01 | 6.01703e+00 | 9.86784e+00 | 1.17318e+01
90 | Std. | 4.33765e-01 | 3.02698e-01 | 4.57557e-01 | 2.22147e+00 | 1.53747e+00 | 9.22082e-01 | 8.91705e-01 | 8.24435e-01 | 8.24435e-01 | 3.63992e-01 | 8.21393e-01 | 5.09086e-01 | 8.62215e-02 | 2.41530e-01 | 9.63863e-01
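The Best/Worst/Mean/Std statistics reported in these tables are simple summaries of the costs obtained over the twenty independent runs. A minimal sketch (the sample standard deviation is assumed here; the paper does not state which std convention it uses):

```python
import statistics

def summarize(run_costs):
    """Best/Worst/Mean/Std over independent run costs, as in Tables 3, 5, and 7."""
    return {
        "Best": min(run_costs),
        "Worst": max(run_costs),
        "Mean": statistics.mean(run_costs),
        "Std.": statistics.stdev(run_costs),  # sample standard deviation (n - 1)
    }
```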
Table 4. Summarised Wilcoxon rank-sum comparisons between the reference SMOBHC algorithm and the fourteen other experimented algorithms on the first battlefield, composed of ten threats.
D | Results | SMOBHC vs. SMO | SMOBHC vs. PSO | SMOBHC vs. GQPSO | SMOBHC vs. BA | SMOBHC vs. CS | SMOBHC vs. DE | SMOBHC vs. FA | SMOBHC vs. GCMBO | SMOBHC vs. GWO | SMOBHC vs. HS | SMOBHC vs. WOA | SMOBHC vs. ALO | SMOBHC vs. AOA | SMOBHC vs. DA
30 | p | 3.3362e-03 | 6.7956e-08 | 8.2924e-05 | 6.7956e-08 | 1.6571e-07 | 2.7451e-04 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08
30 | h | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
60 | p | 4.3202e-03 | 6.7956e-08 | 6.7574e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08
60 | h | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
90 | p | 3.2348e-01 | 6.7956e-08 | 1.1772e-07 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 7.3527e-01 | 6.7956e-08 | 6.7956e-08
90 | h | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Diff (h = 1) | | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | 3 | 3
No diff (h = 0) | | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Best | | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3
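The p and h entries above come from pairwise Wilcoxon rank-sum tests at the 5% significance level (h = 1 means the difference between the two samples of run costs is significant). A minimal sketch using the large-sample normal approximation is given below; it assumes no tied values in the pooled sample, and for exact small-sample p-values a statistics package such as `scipy.stats.ranksums` would normally be used instead.

```python
from statistics import NormalDist

def ranksum_test(x, y, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (p, h): the approximate p-value and the decision flag
    h = 1 if the null hypothesis of identical distributions is rejected
    at level `alpha`. Assumes the pooled sample contains no ties.
    """
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    # 1-based rank of each value of x within the pooled sample
    R1 = sum(pooled.index(v) + 1 for v in x)
    mu = n1 * (n1 + n2 + 1) / 2.0                    # mean of R1 under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5  # std of R1 under H0
    z = (R1 - mu) / sigma
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return p, int(p < alpha)
```

With twenty runs per algorithm, n1 = n2 = 20, which is large enough for the normal approximation to be a reasonable stand-in for the exact test.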
Table 5. Experimental results obtained by the fifteen experimented algorithms for problem dimensions of 30, 60, and 90 on the second battlefield, composed of thirty threats.
D | Results | SMOBHC | SMO | PSO | GQPSO | BA | CS | DE | FA | GCMBO | GWO | HS | WOA | ALO | AOA | DA
30 | Best | 2.51028e+00 | 2.51028e+00 | 2.84819e+00 | 2.52220e+00 | 2.95020e+00 | 2.51845e+00 | 2.51028e+00 | 2.84445e+00 | 3.06962e+00 | 2.61220e+00 | 3.22308e+00 | 3.00608e+00 | 2.60448e+00 | 3.00800e+00 | 2.78753e+00
30 | Worst | 2.51477e+00 | 2.51580e+00 | 3.04426e+00 | 2.71888e+00 | 3.78299e+00 | 2.65383e+00 | 2.72929e+00 | 3.16937e+00 | 3.77617e+00 | 2.85639e+00 | 3.61163e+00 | 3.90492e+00 | 2.96006e+00 | 3.26194e+00 | 3.74059e+00
30 | Mean | 2.51177e+00 | 2.51184e+00 | 2.91253e+00 | 2.60964e+00 | 3.33122e+00 | 2.57734e+00 | 2.59191e+00 | 2.97449e+00 | 3.45475e+00 | 2.75032e+00 | 3.39670e+00 | 3.45755e+00 | 2.75710e+00 | 3.12795e+00 | 3.23671e+00
30 | Std. | 1.80077e-03 | 1.80160e-03 | 5.15307e-02 | 3.21338e-02 | 2.33012e-01 | 4.46921e-02 | 5.79417e-02 | 8.75677e-02 | 1.66528e-01 | 6.27334e-02 | 1.01415e-01 | 1.86706e-01 | 1.15131e-01 | 5.81913e-02 | 2.04256e-01
60 | Best | 3.83595e+00 | 3.83733e+00 | 5.14681e+00 | 4.00704e+00 | 6.25087e+00 | 4.53306e+00 | 4.05029e+00 | 5.53866e+00 | 6.31199e+00 | 4.45724e+00 | 6.93861e+00 | 5.92446e+00 | 4.61219e+00 | 5.82414e+00 | 5.60451e+00
60 | Worst | 4.09095e+00 | 4.12617e+00 | 5.74427e+00 | 6.84385e+00 | 7.89760e+00 | 5.55078e+00 | 5.31220e+00 | 6.68037e+00 | 7.79732e+00 | 5.11988e+00 | 7.98475e+00 | 7.10324e+00 | 5.04743e+00 | 6.52705e+00 | 7.15782e+00
60 | Mean | 3.93090e+00 | 3.96579e+00 | 5.36391e+00 | 4.46120e+00 | 7.09673e+00 | 5.08951e+00 | 4.57179e+00 | 5.99368e+00 | 6.99630e+00 | 4.77400e+00 | 7.50167e+00 | 6.48509e+00 | 4.75926e+00 | 6.16272e+00 | 6.43501e+00
60 | Std. | 7.73957e-02 | 9.64438e-02 | 1.32390e-01 | 7.97121e-01 | 4.34957e-01 | 2.80242e-01 | 2.95791e-01 | 2.49162e-01 | 4.45219e-01 | 1.83088e-01 | 2.77829e-01 | 3.82256e-01 | 1.30515e-01 | 2.31844e-01 | 3.63704e-01
90 | Best | 5.11541e+00 | 5.15118e+00 | 7.55355e+00 | 5.56467e+00 | 1.06711e+01 | 7.36641e+00 | 8.14985e+00 | 7.79696e+00 | 9.64809e+00 | 6.34746e+00 | 1.15298e+01 | 8.46152e+00 | 6.39245e+00 | 8.95448e+00 | 9.04897e+00
90 | Worst | 5.41980e+00 | 5.48288e+00 | 8.72640e+00 | 1.18545e+01 | 1.27817e+01 | 9.79593e+00 | 1.00464e+01 | 1.09265e+01 | 1.27122e+01 | 7.53783e+00 | 1.31188e+01 | 1.08261e+01 | 6.91228e+00 | 9.95324e+00 | 1.11885e+01
90 | Mean | 5.26593e+00 | 5.34675e+00 | 8.13438e+00 | 9.39583e+00 | 1.14995e+01 | 8.77740e+00 | 8.95983e+00 | 9.00075e+00 | 1.10847e+01 | 6.88814e+00 | 1.23627e+01 | 9.39447e+00 | 6.58353e+00 | 9.57711e+00 | 1.02167e+01
90 | Std. | 1.23101e-01 | 7.47452e-02 | 2.67716e-01 | 1.80024e+00 | 6.84311e-01 | 5.45987e-01 | 5.44965e-01 | 7.38526e-01 | 7.58873e-01 | 2.79571e-01 | 4.12738e-01 | 5.80251e-01 | 1.32454e-01 | 3.06967e-01 | 5.35252e-01
Table 6. Summarised Wilcoxon rank-sum comparisons between the reference SMOBHC algorithm and the fourteen other experimented algorithms on the second battlefield, composed of thirty threats.
D | Results | SMOBHC vs. SMO | SMOBHC vs. PSO | SMOBHC vs. GQPSO | SMOBHC vs. BA | SMOBHC vs. CS | SMOBHC vs. DE | SMOBHC vs. FA | SMOBHC vs. GCMBO | SMOBHC vs. GWO | SMOBHC vs. HS | SMOBHC vs. WOA | SMOBHC vs. ALO | SMOBHC vs. AOA | SMOBHC vs. DA
30 | p | 2.9768e-01 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 2.3557e-06 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08
30 | h | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
60 | p | 1.9883e-01 | 6.7956e-08 | 1.5997e-05 | 6.7956e-08 | 6.7956e-08 | 1.0646e-07 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08
60 | h | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
90 | p | 9.6196e-02 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08
90 | h | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Diff (h = 1) | | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3
No diff (h = 0) | | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Best | | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3
Table 7. Experimental results obtained by the fifteen experimented algorithms for problem dimensions of 30, 60, and 90 on the third battlefield, composed of twenty threats.
| D | Results | SMOBHC | SMO | AHA | AOS | ARO | SO | BWO | SCO | GBO | DMOA | DO | EO | FHO | WSO | PDO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 30 | Best | 2.42035e+00 | 2.42035e+00 | 2.74338e+00 | 2.74393e+00 | 2.74339e+00 | 2.42839e+00 | 2.90601e+00 | 4.78941e+00 | 2.73036e+00 | 2.48093e+00 | 2.44246e+00 | 2.63792e+00 | 2.86500e+00 | 2.77533e+00 | 3.18897e+00 |
| | Worst | 2.57116e+00 | 2.57117e+00 | 2.77586e+00 | 3.17199e+00 | 2.93080e+00 | 2.83734e+00 | 2.99695e+00 | 4.11803e+00 | 2.87057e+00 | 3.22760e+00 | 2.78509e+00 | 2.92475e+00 | 3.02361e+00 | 3.10438e+00 | 4.23746e+00 |
| | Mean | 2.42963e+00 | 2.44267e+00 | 2.75207e+00 | 2.90602e+00 | 2.76218e+00 | 2.62672e+00 | 2.96914e+00 | 4.62706e+00 | 2.76473e+00 | 2.73652e+00 | 2.67790e+00 | 2.74856e+00 | 2.96439e+00 | 2.95392e+00 | 3.68263e+00 |
| | Std. | 3.35353e-02 | 4.85828e-02 | 8.11277e-03 | 1.41608e-01 | 4.09533e-02 | 1.19591e-01 | 1.89275e-02 | 1.93943e-01 | 3.34833e-02 | 1.96120e-01 | 8.77372e-02 | 6.73683e-02 | 4.49008e-02 | 1.04436e-01 | 2.76668e-01 |
| 60 | Best | 3.70782e+00 | 3.71227e+00 | 4.51038e+00 | 4.63060e+00 | 4.51131e+00 | 3.86218e+00 | 5.11575e+00 | 7.95593e+00 | 4.50639e+00 | 8.76551e+00 | 4.25750e+00 | 4.49766e+00 | 5.12686e+00 | 4.91835e+00 | 6.23948e+00 |
| | Worst | 4.15695e+00 | 4.25617e+00 | 4.55403e+00 | 5.22236e+00 | 4.91454e+00 | 4.84537e+00 | 5.16204e+00 | 8.86259e+00 | 5.02363e+00 | 1.10561e+01 | 4.86091e+00 | 4.89462e+00 | 5.21059e+00 | 5.84186e+00 | 8.64117e+00 |
| | Mean | 3.87064e+00 | 3.95329e+00 | 4.53216e+00 | 4.90928e+00 | 4.58364e+00 | 4.40985e+00 | 5.14784e+00 | 8.74873e+00 | 4.62033e+00 | 1.02349e+01 | 4.58760e+00 | 4.63640e+00 | 5.18204e+00 | 5.40936e+00 | 7.42693e+00 |
| | Std. | 1.45410e-01 | 1.67976e-01 | 1.29237e-02 | 2.00486e-01 | 9.84922e-02 | 2.45296e-01 | 1.26257e-02 | 2.06429e-01 | 1.31735e-01 | 5.48412e-01 | 1.90264e-01 | 1.26067e-01 | 2.27147e-02 | 2.73430e-01 | 7.61039e-01 |
| 90 | Best | 5.03620e+00 | 5.17218e+00 | 6.29085e+00 | 6.63783e+00 | 6.26632e+00 | 6.45604e+00 | 7.24355e+00 | 1.19037e+01 | 6.25195e+00 | 1.86112e+01 | 6.01061e+00 | 6.24270e+00 | 7.27208e+00 | 7.56239e+00 | 1.03499e+01 |
| | Worst | 5.61129e+00 | 6.10704e+00 | 6.56629e+00 | 7.17214e+00 | 6.79300e+00 | 7.34307e+00 | 7.29806e+00 | 1.26385e+01 | 6.70000e+00 | 2.05484e+01 | 6.98320e+00 | 7.02330e+00 | 7.33953e+00 | 8.62073e+00 | 1.22534e+01 |
| | Mean | 5.36272e+00 | 5.47085e+00 | 6.43876e+00 | 6.86873e+00 | 6.44597e+00 | 6.83742e+00 | 7.28132e+00 | 1.24897e+01 | 6.37801e+00 | 1.97401e+01 | 6.45996e+00 | 6.54829e+00 | 7.32062e+00 | 8.21006e+00 | 1.10281e+01 |
| | Std. | 1.30557e-01 | 2.02255e-01 | 8.43281e-02 | 1.65042e-01 | 1.22716e-01 | 2.47851e-01 | 1.25090e-02 | 2.19932e-01 | 1.22234e-01 | 6.45794e-01 | 2.74162e-01 | 1.86834e-01 | 1.83397e-02 | 3.13058e-01 | 5.36196e-01 |
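The Best, Worst, Mean, and Std. rows above summarise the final path costs of twenty independent runs of each algorithm. A minimal sketch of such a summary, using a hypothetical helper name and illustrative cost values (assuming Std. denotes the sample standard deviation), is:

```python
import statistics

def summarize(costs):
    """Best/Worst/Mean/Std summary of the final path costs
    of a set of independent optimization runs."""
    return {
        "Best": min(costs),
        "Worst": max(costs),
        "Mean": statistics.fmean(costs),
        "Std": statistics.stdev(costs),  # sample standard deviation (n - 1)
    }

# Illustrative values only, not data from the tables above.
summary = summarize([2.0, 2.5, 3.0, 2.5])
```

The same four statistics are then tabulated per algorithm and per problem dimension D.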
Table 8. Summarised Wilcoxon rank-sum comparisons between the SMOBHC algorithm as a reference and fourteen experimented algorithms for the third battlefield composed of twenty threats.
| D | | SMOBHC vs. SMO | SMOBHC vs. AHA | SMOBHC vs. AOS | SMOBHC vs. ARO | SMOBHC vs. SO | SMOBHC vs. BWO | SMOBHC vs. SCO | SMOBHC vs. GBO | SMOBHC vs. DMOA | SMOBHC vs. DO | SMOBHC vs. EO | SMOBHC vs. FHO | SMOBHC vs. WSO | SMOBHC vs. PDO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 30 | p | 3.9417e-01 | 6.7956e-08 | 6.7860e-08 | 6.7956e-08 | 2.2178e-07 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 1.0646e-07 | 9.1728e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 |
| | h | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 60 | p | 1.3328e-01 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 2.9598e-07 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 |
| | h | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 90 | p | 6.7868e-02 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 | 6.7956e-08 |
| | h | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Diff (h = 1) | | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| No diff (h = 0) | | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Best | | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
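Each Wilcoxon rank-sum comparison in Tables 6 and 8 reports a two-sided p-value and a decision flag h at the 5% significance level (h = 1: the two samples of run results differ significantly; h = 0: no significant difference). The following self-contained sketch illustrates the test using the normal approximation to the rank-sum statistic, which is adequate for samples of twenty runs; the function names are illustrative, and unlike library implementations it omits the tie correction to the variance.

```python
import math

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # Tied values share the average of ranks i+1 .. j (mid-rank method).
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    n1, n2 = len(x), len(y)
    w = sum(ranks[v] for v in x)                      # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2                       # E[W] under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # Std[W], no tie correction
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))           # two-sided p-value

def wilcoxon_flag(x, y, alpha=0.05):
    """Return (p, h) where h = 1 flags a significant difference at level alpha."""
    p = ranksum_p(x, y)
    return p, int(p < alpha)
```

Applied to the twenty final costs of SMOBHC versus those of a competitor, a p-value below 0.05 yields h = 1, matching the convention used in the tables.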
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Allouani, F.; Abboudi, A.; Gao, X.-Z.; Bououden, S.; Boulkaibet, I.; Khezami, N.; Lajmi, F. A Spider Monkey Optimization Based on Beta-Hill Climbing Optimizer for Unmanned Combat Aerial Vehicle (UCAV). Appl. Sci. 2023, 13, 3273. https://doi.org/10.3390/app13053273
