Article

Performability Evaluation of Autonomous Underwater Vehicles Using Phased Fault Tree Analysis

School of Electronics & Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
*
Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(4), 564; https://doi.org/10.3390/jmse12040564
Submission received: 13 March 2024 / Revised: 24 March 2024 / Accepted: 25 March 2024 / Published: 27 March 2024
(This article belongs to the Special Issue Advances in Marine Vehicles, Automation and Robotics—2nd Edition)

Abstract

This paper presents a phased fault tree analysis (phased-FTA)-based approach to evaluate the performability of Autonomous Underwater Vehicles (AUVs) in real time. AUVs carry out a wide range of missions, including surveying the marine environment, searching for specific targets, and topographic mapping. For evaluating the performability of an AUV, it is necessary to focus on the mission-dependent components and/or subsystems, because each mission exploits different combinations of devices and equipment. In this paper, we define a performability index that quantifies the ability of an AUV to perform the desired mission. The novelty of this work is that the performability of the AUV is evaluated based on the reliability and performance of the relevant resources for each mission. In this work, the component weight, expressing the degree of relevance to the mission, is determined using a ranking system. The proposed ranking system assesses the performance of the components required for each mission. The proposed method is demonstrated under various mission scenarios with different sets of faults and performance degradations.

1. Introduction

Autonomous Underwater Vehicles (AUVs) carry out a wide range of missions, such as surveying the marine environment, searching for specific targets, and topographic mapping. As shown in Figure 1, modern AUVs are even required to continue their missions at sea for several months with the support of Unmanned Surface Vehicles (USVs) [1]. Long-term missions also include tasks for launching and recovery, battery charging, and communications with the USV. Typical AUVs for long-term exploration are equipped with numerous mission devices and equipment, and the breakdown of any of these raises the risk of failure in critical missions. As a result, it is essential to monitor the performance and dependability of AUVs while they carry out a mission. Numerous studies, including those on AUV risk analysis [2], uncertainty estimation [3], and design techniques [4], have been conducted to study dependability. For evaluating the performability of AUVs, it is necessary to focus on the mission-dependent components and/or subsystems, because each mission exploits a different combination of devices and equipment. In this paper, a phased fault tree analysis (phased-FTA) is adopted to deal with the mission-dependent performability of AUVs in real time.
Depending on the circumstances and operating time, AUVs carry out a series of different missions; such systems are called phased mission systems (PMSs). The set of components each mission uses differs according to its requirements, so the impact of a given component combination differs from phase to phase [5]. In [6], a PMS was examined using a multivalued decision diagram (MDD)-based approach that extended a binary decision diagram (BDD). In addition to simplifying the modeling and assessment procedures, MDD approaches can lower the computational complexity.
In [7], a phased-mission industrial network system with multiple ordered performance levels was examined for performability using a new decision-making technique known as multiple-terminal binary decision diagrams; three categories of system performance level were identified, and a phase-level FT was created for each.
A Markov model was developed to ascertain the state of a multi-stage PMS, taking into account component failure and recovery [8]. The universal generating function (UGF) was utilized to examine the system’s reliability based on each stage’s system structure, using the Markov model as a basis. A technique was proposed for a PMS reliability analysis that takes incomplete failure reaction strategies into account and uses a UGF [9]. Structural optimization was studied to increase system reliability. A multidimensional UGF technique was proposed to assess the performability of a multi-state containerized IP multimedia subsystem while taking availability and performance into account [10].
In an earlier study by the authors [11], an FT-based index was utilized to assess the health of the system based on the AUV's reliability, performance, and weight. By taking the performance and dependability of every component into account, the earlier work could validate the AUV's overall operational capability, independent of the task the vehicle was currently carrying out. However, in the presence of component failures or performance degradation, that approach cannot determine whether the current operation or a specific task can still be completed. The practical possibility of performing a task, that is, the operability of the system, may be assessed differently depending on how serious or significant component failures are with respect to the accomplishment of the missions.
We present a phased FTA method that can be used to examine the system’s performability in a PMS where mission-specific key functions change. Phased FTAs, a phased mission-based analytic technique, are utilized to evaluate the performability of AUVs on each mission. A phased FT is designed based on the AUV design produced in the project being carried out by the Korea Research Institute of Ships and Ocean Engineering. Faults and the performance of the components which are required for each mission are examined in the form of a phased FT in order to assess the performability of the whole vehicle. The ranking of performance is defined using Top–Down–Left–Right (TDLR), which ranks the performance based on the FT structure. The results are used in rank-summed weighting to define the weight of each performance. (Note that the criteria for selecting the weight were ambiguous in earlier work [11].) On a sub-FT level, each phased FT’s performability is determined using the FT-based reliability and weighted arithmetic mean. The UGF is utilized to determine the overall system’s performability based on the reliability and performance determined for every subsystem.
Therefore, in this paper, phased FTs for an AUV are designed, and performance and performability metrics for the system are defined. Based on the FT structure, a weighted average is used to determine each sub-FT's performance, and a UGF is designed to calculate these measures. The viability of the proposed approach is verified under various mission scenarios with different sets of faults and performance degradations.

2. Literature Review

Safety-critical systems (SCSs) have to complete assigned tasks precisely and on schedule. Conventional reliability represents the viability of a task as a binary outcome: success or failure. Numerous studies are currently underway to ensure the stable operation of Autonomous Underwater Vehicles (AUVs), which are safety-critical systems [12,13,14]. In reliability research, system-level reliability analyses are carried out for preventive maintenance [15], and reliability analyses of individual components are carried out by examining their properties and operating environment [16]. Performance typically describes the system's capabilities under the assumption of a failure-free condition [17]. In an SCS, it is crucial to complete a task even in the event of a failure [18]. Even in the event of a partial failure, a fault-tolerant system can still function at a lower level of performance [19].
Consequently, it is possible to assess the likelihood that a system will function at a specific performance level in the event of a breakdown through a performability analysis. The performability refers to the unified performance-dependability measure. It is the probability that a system will operate at a certain level, which corresponds to a predetermined range of achievement levels that represent system performance [20]. Dependability is a measure of the user’s degree of trust based on a number of factors, including availability, reliability, security, and safety [21]. Availability and reliability are the two main concepts used in performability analyses. The performability of an SCS can be assessed using a variety of techniques, including analysis techniques, probability-distribution-based methodologies, stochastic modeling, etc. [22].
There are several ways to evaluate the performance and dependability of a system. Among these, performability analyses based on stochastic modeling methodologies are easy to conduct. Performability can be analyzed by analytical models using a parameter-based formulation, while modeling-based performability approaches make use of the system's structure. Analytical approaches assess performability using quantities such as the system's intrinsic reliability, failure rate, and MTTF under normal conditions [23]. State-based modeling methods like Petri nets [24,25,26], stochastic activity networks (SANs) [27], and Markov chains [28,29,30] are used in behavioral probabilistic modeling [31].
A semi-Markov process (SMP) model was used to capture the reliability and the mean and variance of web service response times [32]. Dependability and performance models were used to simulate an electronic funds transfer system considering failures and repairs, and the performability was evaluated [33]. A continuous-time Markov chain with a Markov reward model was employed to assess the performability of a fault-tolerant SCS; the reliability, availability, average failure frequency, and system throughput were used in the evaluation [29]. The system's performability in terms of availability and dependability was analyzed using Markov and regenerative processes [34]. In order to identify key components, the capabilities of three pumps were compared and examined, and sensitivity analyses identified the parameters that affect system dependability.
In order to analyze the impact of availability and reliability, a stochastic Petri net model was developed that takes into account the system’s hardware and software [35]. Additionally, a performability evaluation technique was proposed that addresses the redundancy issue in fault-tolerant systems. To guarantee the autonomy of a UAV in risky operating scenarios, a framework for modeling and assessing the system’s performability was proposed [36]. This framework used hierarchically structured stochastic Petri nets (SPNs) to examine performability and suggested new operating strategies.
To represent the architecture for the two-level recovery technique, an extended Markov chain was used. The modeling findings were used to assess the system’s performability [37]. Early on in the system’s functioning, the software’s performability was assessed using stochastic reward networks (SRNs). An automated process for converting high-level Unified Modeling Language (UML) to SRNs was suggested as an effective analysis technique [38]. A performability analysis provides helpful information in making decisions. The expected performabilities for a drone processing system were estimated based on comprehensive SRNs [31].
The reliability and performability of hardware were assessed using probability-distribution-based techniques, which have since been extended to software performability evaluations. In the field, a breakdown may occur while the system is in use, degrading the system's performance until the breakdown is fixed and functionality is restored. A non-homogeneous Poisson process (NHPP) was used to model such a situation and assess the effect of failure on system performance and reliability [39]. Numerous analytic techniques are available to model a system's performability based on parameters [40,41,42,43,44]. Among these, an optimization algorithm may assess the system's performability by optimizing performability parameters [45]. Multi-state system theory was utilized to represent a cloud services system, and the performance and reliability were determined via the application of the universal generating function [46]. To assess a wireless mesh network's availability and performability, intelligent state sampling was used, and availability and performability were examined in relation to transmission power and network node density [47].
Software and hardware faults are both possible in SCS operation, and this needs to be taken into account [48]. The design and operation of SCSs provide numerous challenges as the system grows in size and complexity. It was suggested to use a modeling framework that takes into account both software and hardware failures in order to assess a system’s performance and reliability [49]. When one or more components of a system fail, the system either stops working entirely or performs worse. Due to the influence of components with varying performance levels and the performance deterioration of different components, the system’s performance level also varies. These components, in different states, are coupled together in different ways to form a system. These systems use the minimal performance ratio of the subsystems to assess the overall system’s performability [50].
Based on the probability or failure frequency of each state, a multivalued decision diagram approach was proposed to evaluate the reliability, availability, and performability of a multistate system [51]. The probability of component failure is computed using the fault effect (FE), fault hazard analysis (FHA), and fault tree analysis (FTA), and the results are used to estimate system reliability. Based on the scheduling between parts, a prediction model for system performance was developed. A model was developed utilizing architecture analysis and design language (AADL) to forecast system performability based on the reliability and performance prediction model [52]. Using an MDD, an analysis method was presented to analyze the performability of a dynamic multi-state k-out-of-n system. A model was created for a system-performance MDD that can represent the state needs of the system [53].

3. Performability Evaluation Method Based on Phased Fault Trees

Dependability uses notions like availability and reliability to assess the system's operability based on whether a failure has occurred [54]. Performance determines a system's capacity to function in the absence of faults. The idea of performability was proposed to close the gap between performance and reliability [20]. Performability evaluates the probability that a system will operate at a certain level of performance even in the event of a system failure: it is the likelihood that system $S$ exhibits a measurable performance level in $B$, where $B$ is a subset of the achievement set $A$ ($B \subseteq A$). The performability $\Pr(B)$ is stated in Equation (1).
$\Pr(B) = P(\{\omega \mid Y_S(\omega) \in B\})$
where Y S is the performance of system S. In this paper, the following specifications are introduced and used to define the system’s accomplishment level.
  • A1: A state where the mission can be executed, and can continue to be executed even if a specific failure arises in the multiplexed part;
  • A2: Normal mission performance with failures; a further failure necessitates mission abortion;
  • A3: A state where executing missions is no longer possible.
An AUV as a whole can be referred to as a multi-state system if each subsystem is thought of as a single state. Based on the status of each subsystem, which has a certain performance level and probability, the expected instantaneous performance (EIP) can be computed as follows in Equation (2).
$EIP(t) = \sum_{i} g_i \cdot P_i(t)$
where g i and P i are the performance and probability of the system in state i, respectively. This can also be called the expected performance utility level [55] or expected reward rate at time t [56,57,58]. In order to assess the system’s performability for different missions, a phased FT that takes failure and performance into consideration was designed in this paper. The performability of each sub-FT is evaluated based on the FT for the currently executed mission. The performability of each component is integrated through a UGF to calculate the performability of the entire system. Based on the FT structure, the ranking of each performance is defined and the weight is calculated. The weighted arithmetic mean of performance based on the weights is calculated. The EIP of the system is evaluated using performability. The overall process of the proposed technique is shown in Figure 2 below.
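For illustration, the following minimal Python sketch evaluates Equation (2) for a hypothetical set of system states; the performance levels and probabilities are invented for the example and are not values from this paper.

```python
# Minimal sketch of Equation (2): EIP(t) = sum_i g_i * P_i(t).
# The state performances and probabilities below are illustrative values.

def expected_instantaneous_performance(states):
    """states: iterable of (g_i, P_i) pairs at a fixed time t."""
    return sum(g * p for g, p in states)

# Example: three system states with performance levels 1.0, 0.6, and 0.0
# occurring with probabilities 0.90, 0.08, and 0.02 at some time t.
eip = expected_instantaneous_performance([(1.0, 0.90), (0.6, 0.08), (0.0, 0.02)])
print(f"EIP(t) = {eip:.3f}")  # EIP(t) = 0.948
```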

3.1. Reliability Evaluation Method

An exponential reliability function is frequently used to calculate the reliability of general components and systems [59]. Being a univariate function, the reliability function varies with time. The failure rate λ is another parameter for the reliability function. Equation (3) provides the failure probability density function.
$f(t) = \lambda e^{-\lambda t}$
Based on Equation (3), integration can be used to determine reliability, the likelihood that a failure will not happen before time t, as illustrated by the following Equation (4).
$R(t) = 1 - \int_0^t f(\tau)\, d\tau = e^{-\lambda t}$
In order to create an FT for the system, events are examined using appropriate logic gates (AND or OR). This allows for an assessment of the fault contribution of individual components. The output probability of “AND” and “OR” gates is computed using probability theory.
$P_{AND} = \prod_{i=1}^{n} p_i$
$P_{OR} = 1 - \prod_{i=1}^{n} (1 - p_i)$
where p i is the failure probability of the event that corresponds to the logic gate’s input.
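The reliability computations of Equations (3)-(6) can be sketched as follows; the tree layout, failure rates, and helper names are illustrative assumptions rather than the AUV FT of Section 4.

```python
import math

# Sketch of Equations (3)-(6): exponential component failure probability
# and bottom-up FT evaluation with AND/OR gates. Failure rates and the
# tree structure are illustrative, not the paper's AUV fault tree.

def failure_prob(lmbda, t):
    """F(t) = 1 - R(t) = 1 - exp(-lambda * t), per Equations (3)-(4)."""
    return 1.0 - math.exp(-lmbda * t)

def evaluate(node, t):
    """node: ('event', failure_rate) leaf, or ('AND'|'OR', [children])."""
    kind, payload = node
    if kind == 'event':
        return failure_prob(payload, t)
    probs = [evaluate(child, t) for child in payload]
    if kind == 'AND':                                 # Equation (5)
        return math.prod(probs)
    return 1.0 - math.prod(1.0 - p for p in probs)    # OR, Equation (6)

# Top event occurs if the motor fails OR both redundant modems fail.
tree = ('OR', [('event', 1e-4),
               ('AND', [('event', 5e-4), ('event', 5e-4)])])
print(evaluate(tree, t=50.0))  # top-event probability at t = 50 h
```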

3.2. Performance Evaluation Method

A phased FT is used to calculate each subsystem’s performance. First, the FT structure is used to define each performance’s ranking. A rank-based weight definition approach is used to establish each performance’s weight based on its ranking, and the weighted arithmetic mean is used to determine each subsystem’s performance as shown in Equation (7).
$g_{sub} = \dfrac{\sum_{i=1}^{n} \omega_i g_i}{\sum_{i=1}^{n} \omega_i}$
where g s u b is the performance of the sub-FT and g i and ω i are the performance and weight of component i.

3.2.1. Ranking Method Based on an FT

There are several ways to establish the ranking of FT events based on the FT structure, including "Top–Down–Left–Right" (TDLR), a "Depth-First Search" (DFS), a "Breadth-First Search" (BFS), the level technique, and AND gate counting. TDLR ranks every event by searching through the FT from left to right and top to bottom [60]. The method works left to right and top to bottom, adding each event to an event list; an event is disregarded if it reappears after already being added to the list. The DFS technique specifies the ranks of all events within one sub-tree before continuing to the next sub-tree [61]; it determines the ranking of events from left to right in units of sub-FTs. The BFS method specifies the ranking of each event in traversal order across the whole FT rather than breaking it down into sub-FTs [62]. The level technique establishes the order of events by defining the number of gates that connect each event to the top event as the level [63]. The AND criterion ranks events by the number of AND gates that separate each event from the top event [64]: the event with the fewest AND gates receives the highest rank, and ties are broken using the TDLR technique. A minimal sketch of the TDLR traversal follows.
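The sketch below traverses a toy FT top to bottom and left to right, recording each basic event the first time it appears; the (name, children) node encoding is a hypothetical representation, not the data model used in this work.

```python
from collections import deque

# Sketch of the TDLR ranking described above: visit the FT level by level
# (top to bottom) with children listed left to right, appending each basic
# event the first time it is seen; repeated events are skipped.

def tdlr_ranking(root):
    """Return basic-event names ordered by TDLR rank (rank 1 first)."""
    order, seen = [], set()
    queue = deque([root])                 # level by level = top to bottom
    while queue:
        name, children = queue.popleft()  # children listed left to right
        if not children and name not in seen:
            seen.add(name)
            order.append(name)
        queue.extend(children)
    return order

ft = ('TOP', [('G1', [('motor', []), ('leak', [])]),
              ('G2', [('leak', []), ('battery', [])])])
print(tdlr_ranking(ft))  # ['motor', 'leak', 'battery']
```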

3.2.2. Weighting Method

In order to define the weight of each performance, a weight definition method from the family of approximate methodologies was adopted based on the FT design results. Most weighting methods are either approximation approaches or ratio allocation. In the ratio assignment technique, each element is given a score that reflects its relative or absolute importance with respect to the other elements, and each element's weight is its score divided by the total number of points. Approximation algorithms, in contrast, determine weights for each component from ordinal statistical principles, without information such as relative comparisons [65]. The rank summed weighting (RS) method estimates the weight of each element from the ranking order of the elements [66]. Prior to calculating the rank summed weight of each element, the RS approach ranks the elements. In the general RS methodology, expert views determine the ranking of each element through a mechanism like pairwise ranking [67]. In this work, the ranking of every performance was determined using an FT. The RS approach can determine meaningful weights from the ranking of each element alone. Using this rank, each element's rank summed weight is calculated. Equation (8) illustrates how the RS approach weights each of the N ranked elements.
$w_i = \dfrac{N - i + 1}{\sum_{i=1}^{N} (N - i + 1)} = \dfrac{2(N - i + 1)}{N(N + 1)}$
where the elements are listed from most important (i = 1) to least important (i = N). The rank exponent weighting approach (Equation (9)) generalizes the rank summed weighting technique.
$w_i = \dfrac{(N - i + 1)^p}{\sum_{k=1}^{N} (N - k + 1)^p}$
Every element has the same weight if p is 0, and the outcome reduces to the rank summed weights if p is 1. The weight distribution becomes more spread out as the value of p rises.
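The following sketch illustrates Equations (7)-(9): rank summed (and rank exponent) weights are computed from an element ranking, and a sub-FT performance is obtained as the weighted arithmetic mean; all performance values are illustrative.

```python
# Sketch of Equations (7)-(9): rank-based weights and the weighted
# arithmetic mean used for sub-FT performance. Values are illustrative.

def rank_weights(n, p=1.0):
    """Rank exponent weights, Eq. (9); p = 1 reduces to RS weights, Eq. (8)."""
    raw = [(n - i + 1) ** p for i in range(1, n + 1)]
    total = sum(raw)
    return [r / total for r in raw]

def sub_ft_performance(perf_by_rank, p=1.0):
    """Weighted arithmetic mean, Eq. (7); perf_by_rank lists performances
    g_i from rank 1 (most important) to rank N (least important)."""
    w = rank_weights(len(perf_by_rank), p)
    return sum(wi * gi for wi, gi in zip(w, perf_by_rank)) / sum(w)

# Three components ranked by TDLR, with performances 0.9, 1.0, and 0.5.
print(rank_weights(3))                       # [0.5, 0.333..., 0.166...]
print(sub_ft_performance([0.9, 1.0, 0.5]))   # 0.866...
```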

3.3. Performability Evaluation

The mission performability of an AUV is evaluated by utilizing the reliability and performance of each subsystem based on the phased FT and achievement levels. First, the likelihood for each sub-FT accomplishment level and the performability are determined based on the FT’s reliability, as shown in Equation (10).
$\Pr(A_3) = P(\{\omega \mid Y_S(\omega) > A_3\}) = 1 - P_{T_{sub}}$
$\Pr(A_2) = P(\{\omega \mid A_1 > Y_S(\omega) \geq A_2\}) = \sum_{p_i \in E_R} \Big[ (1 - p_i) \prod_{p_j \in E_R,\, j \neq i} p_j \Big] \prod_{p_k \in E_C} (1 - p_k)$
$\Pr(A_1) = P(\{\omega \mid Y_S(\omega) \geq A_1\}) = 1 - \big(\Pr(A_2) + P_{T_{sub}}\big)$
where T s u b is the likelihood that the sub-FT’s top event will occur, E R is the set of basic event failure probabilities in which the sub-FT contains redundant components, E C is the set of basic event failure probabilities that the sub-FT undergoes critical failures, and n is the number of events in the sub-FT.
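To make Equation (10) concrete, the sketch below computes the achievement-level probabilities of a single sub-FT from its top-event probability and the failure probabilities of its redundant (E_R) and critical (E_C) basic events. Because Equation (10) is reconstructed above from context, this sketch should be read as an interpretation rather than the authors' exact implementation; all numeric values are illustrative.

```python
import math

# Sketch of Equation (10) as reconstructed above. t_sub is the sub-FT
# top-event probability, e_r the failure probabilities of the redundant
# basic events, and e_c those of the critical basic events.

def achievement_probs(t_sub, e_r, e_c):
    """Return (Pr(A1), Pr(A2), Pr(A3)) for a single sub-FT."""
    no_critical = math.prod(1.0 - p for p in e_c)
    # Pr(A2): exactly one redundant component still works (one more
    # failure aborts the mission) and no critical failure has occurred.
    pr_a2 = sum(
        (1.0 - p_i) * math.prod(p_j for j, p_j in enumerate(e_r) if j != i)
        for i, p_i in enumerate(e_r)
    ) * no_critical
    pr_a3 = 1.0 - t_sub            # still above the failed level A3
    pr_a1 = 1.0 - (pr_a2 + t_sub)  # remaining probability mass
    return pr_a1, pr_a2, pr_a3

print(achievement_probs(t_sub=0.01, e_r=[0.05, 0.05], e_c=[0.01]))
```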
By merging the performabilities of the sub-FTs, the overall system performability is determined using the UGF approach. Reliability and performance assessments of multi-state systems (MSSs) have been successfully handled by the UGF technique [68,69,70]. In order to assess the probability distribution of the overall performance for systems with varying attributes and performance, the U-function expands upon the ordinary moment-generating function. Various composition operators can be introduced and applied to the UGF. When every component of the system is statistically independent of every other component, the U-function of an independent discrete random variable X is defined by Equation (11).
$u(z) = \sum_{k=1}^{K} q_k z^{x_k}$
In this case, the performability of subsystem j can be defined by the polynomial $u_j(z)$, which can represent all of the potential subsystem states. A composition operator can be defined and introduced to combine the performabilities of two subsystems. In this study, we ascertain the two subsystems' performabilities using algebraic operations. The composition operator has the following form [71] (Equation (12)):
$u_i(z) \otimes u_j(z) = \sum_{k=1}^{K_i} p_{ik} z^{A_{ik}} \otimes \sum_{h=1}^{K_j} p_{jh} z^{A_{jh}} = \sum_{k=1}^{K_i} \sum_{h=1}^{K_j} p_{ik} p_{jh} z^{\varphi(A_{ik}, A_{jh})}$
where $A_{ik}$ and $p_{ik}$ denote the k-th achievement level and the corresponding performability ($\Pr(A_{ik})$) of sub-FT i, respectively. Depending on the metrics (reliability, performance, and performability) that need to be considered, there are various ways to define $\varphi(A_{ik}, A_{jh})$. The accomplishment level is considered in this work, and $\varphi$ is defined as shown in Equation (13).
$\varphi(A_i, A_j) = \begin{cases} A_i, & \text{if } A_i < A_j \\ A_j, & \text{if } A_i \geq A_j \\ A_3, & \text{if } A_i = A_3 \text{ or } A_j = A_3 \end{cases}$
The following U-function can be used to express the performabilities of ‘sub-FT a’ with no redundant components and ‘sub-FT b’ with redundant components as shown in Equation (14).
$u_a(z) = p_{a2} z^{A_2} + p_{a3} z^{A_3}, \qquad u_b(z) = p_{b1} z^{A_1} + p_{b2} z^{A_2} + p_{b3} z^{A_3}$
The composition operator can be used to determine the performability composition of the two sub-FTs in the following manner (Equation (15)).
$u_a(z) \otimes u_b(z) = \sum_{k=1}^{K_a} p_{ak} z^{A_k} \otimes \sum_{h=1}^{K_b} p_{bh} z^{A_h} = p_{a2} p_{b1} z^{\varphi(A_2, A_1)} + p_{a2} p_{b2} z^{\varphi(A_2, A_2)} + p_{a2} p_{b3} z^{\varphi(A_2, A_3)} + p_{a3} p_{b1} z^{\varphi(A_3, A_1)} + p_{a3} p_{b2} z^{\varphi(A_3, A_2)} + p_{a3} p_{b3} z^{\varphi(A_3, A_3)} = p_{a2} p_{b1} z^{A_1} + p_{a2} p_{b2} z^{A_2} + (p_{a2} p_{b3} + p_{a3} p_{b1} + p_{a3} p_{b2} + p_{a3} p_{b3}) z^{A_3}$
The overall system’s performability can be determined by iteratively running the composition operation on each sub-FT, as indicated by the following Equation (16).
$u_S(z) = u_1(z) \otimes u_2(z) \otimes u_3(z) \otimes u_4(z) \otimes \cdots \otimes u_n(z)$
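The UGF composition of Equations (11)-(16) can be sketched as follows, with each U-function represented as a dictionary mapping achievement-level indices (1 for A1, best, to 3 for A3, failed) to probabilities; this dictionary encoding is an illustrative assumption, and all probabilities are invented for the example.

```python
from functools import reduce

# Sketch of Equations (11)-(16): a sub-FT's performability as a U-function
# {level_index: probability}, composed with the operator of Equation (12)
# using the phi of Equation (13).

def phi(ai, aj):
    """Equation (13): A3 dominates; otherwise keep the better (lower-indexed)
    level, reproducing the terms of Equation (15)."""
    if ai == 3 or aj == 3:
        return 3
    return min(ai, aj)

def compose(u_i, u_j):
    """Equation (12): cross-multiply terms, merging by phi(level_i, level_j)."""
    out = {}
    for ai, pi in u_i.items():
        for aj, pj in u_j.items():
            lvl = phi(ai, aj)
            out[lvl] = out.get(lvl, 0.0) + pi * pj
    return out

# Equations (14)-(15): sub-FT a (no redundancy) and sub-FT b (redundant).
u_a = {2: 0.97, 3: 0.03}
u_b = {1: 0.90, 2: 0.08, 3: 0.02}
print(compose(u_a, u_b))  # {1: 0.873, 2: 0.0776, 3: 0.0494}

# Equation (16): fold the composition operator over all sub-FTs.
u_system = reduce(compose, [u_a, u_b, {1: 0.95, 2: 0.04, 3: 0.01}])
print(u_system)
```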

4. Phased Fault Tree for AUVs

Fault tree analysis is a top-down system analysis method that defines system failures as top events, analyzes the causes of failures, and ultimately identifies failures at the component level. It is possible to examine the impact on the top event and system failure by linking a logic gate to each basic event’s relationship [72].
A phased mission system, where the mission executed and the system configuration may change over time, can benefit from the application of a phased FTA as a reliability analysis technique [73]. A system that completes several consecutive, non-overlapping missions in succession is known as a phased mission system. In each phase, the system needs to carry out various subtasks in order to complete its purpose. A different system configuration might be used for every mission. The dependability of the current system can be computed explicitly by creating an FT that is appropriate for the mission and system configuration in each phase.

4.1. System Details

In this work, an AUV sets out on extended oceanic expeditions along with a USV. The AUV is transported by the USV to the mission’s target site. The USV launches the AUV in the mission area, and it then moves while tracking the AUV’s location or follows a predetermined route. The AUV travels to the intended place to carry out tasks like terrain investigation after it is launched. Once it has arrived at the destination, it carries out the designated task. It returns to the USV after the designated mission is finished or the maximum mission performance time is achieved. The AUV that has returned recharges its batteries on the USV and sends the data it has acquired to the ground observation center or USV. The following are the specifics of the systems that comprise the AUV.

4.1.1. Driving Unit

The driving unit comprises a deflection propulsion unit and a buoyancy control system, among other components. The actuator controller and pitch, yaw, and thruster actuators make up the propulsion system. Additionally, it has an actuator battery and a power supply to provide power. Propeller failures such as wing blockage, deformation, blade damage, jamming, slipping, and creeping can hinder the propeller from producing propulsion [74,75]. Failure of the buoyancy pump as well as the steering motor and transmission system is another possibility [76,77,78]. Failure of the battery that powers the thruster could also prevent the thruster from working or the battery from being charged [79]. This system executes fault diagnosis procedures for the detection of water leaks and motor drive malfunctions.

4.1.2. Control Unit

A sophisticated navigation computer for navigating to the predetermined target and an autonomous control computer for mission planning are both included in the control unit. The control unit is powered by the controller battery and power supply. Failures in the mission controller’s mission planning could result in timeouts and stops, and the emergency buoyancy control device’s sudden weight release could interrupt the mission. Furthermore, there are instances in which a mission controller error results in a failure that renders mission execution impossible [79,80]. Battery failures include overcharging, voltage and current monitoring failures, charging failures, and battery detection failures [81].

4.1.3. Communication Unit

The AUV uses a variety of communication methods, including satellite communication, WiFi, and RF on the surface, as well as underwater acoustic signals to communicate with the mission controller of the USV. The communication unit also estimates location using the GPS, a USBL with the USV, a CTD (conductivity, temperature, depth) sensor, an inertial measurement unit, a Doppler velocity log, and a water depth sensor. Periodically, each sensor initializes itself to fix errors that arise during mission execution. Defects in the acoustic elements may occur in underwater acoustic systems, which are installed on underwater communication nodes for wireless communication [76]. There is also a chance of a ground fault [82]. A number of failures can happen, including water leaks, satellite communication and RF connection loss [83], communication modem failure, antenna seal damage, GPS antenna failure [80], and GPS communication issues [79]. Failures like reflection in the water or a loss of connection to the USV station could occur with the USBL [84].

4.1.4. Mission Equipment Unit

The mission equipment unit comprises a range of sensors and control units that are utilized to execute the task. Various sensors are used to perceive the exterior surroundings of the AUV. These sensors include the sonar sensor, camera and control computers. Additionally, the system incorporates a control unit responsible for executing several functions, such as automated control. Sensor malfunctions can result in amplified outputs, zero signals, constant signals, intermittent signals, jumps, drifts, fixed deviations, and more [85]. They can also introduce errors into the system’s closed-loop motion control [86]. When detecting the surroundings and gathering information about the terrain, sonar, camera, and light failures may happen [87]. Furthermore, there is a chance that the sensors that the AUV uses to determine its position internally will encounter issues such as DVL failure, altimeter failure, inertial navigation failure [88], depth sensor noise [82], depth sensor failure [80], ADCP spontaneous restarts [77], etc.

4.2. Phased FT for AUVs

Since the functionalities of the AUV needed for each mission differ, a phased FTA, a phased mission-based reliability analysis method, is utilized to precisely examine the condition of the AUV in each mission.

4.2.1. FT for Launch

During the launch phase, the USV conducts separation control, and communication is used to exchange status updates. As a result, consideration is given to the propulsion system, the navigation system for position and attitude control, the communication system for sharing mission and status information, and the controller for separation control. Figure 3a displays the FTA results, and comprehensive information regarding each specific component can be found in Table 1 and Table 2. The sub-FT RA is the FT of the reliability of the driving unit, which considers the state of the motor and power supply and leakage of the driving unit. The sub-FT PA, which includes the performance of the motor, equipment inspection module, power module, and other equipment, is the FT linked to the driving part.
When changes in the performance factors had a substantial impact on the overall operating state of the system, an OR gate was employed to connect them. The performance factors are connected via an AND gate in situations where a degradation in one component’s performance can be compensated for by other components or where the impact on the system as a whole is anticipated to be minimal. Observable performance factors and diagnosable faults in other units were taken into consideration when designing the sub-FTs. Faults and performance FTs for the mission sensor module for data collection and the battery charging module were not included in the launch mission.

4.2.2. FT for Mission Execution

Launched from a USV, the AUV uses mission sensors to gather data as it autonomously navigates to a predetermined destination in accordance with the AUV mission scenario. Through an ultrasonic communication modem, it sends status updates and receives commands from the mission controller on a regular basis. It can self-diagnose faults and avoid obstacles and submerged terrain on its own. Upon completion of the task, it autonomously comes to the surface and periodically uses an underwater communication modem to send the AUV's current condition. The FTA result pertaining to mission execution is displayed in Figure 3b. The FT includes the driving unit, control unit, communication unit, and mission equipment unit. Power charging modules related to wireless charging and several communication modules were excluded.

4.2.3. FT for Recovery

The AUV initially ascends to the area surrounding the USV during the recovery phase. When the AUV is close to the USV, it starts to dock with it after receiving navigation control commands via communication. As a result, consideration was given to a propulsion system, a navigation system, a power system, a communication system for exchanging mission and status data, and a controller for docking control. The FTA result for recovery is displayed in Figure 3c. Similar to the FT for launch, it includes all driving units, control units, and communication units for docking with a USV. However, the majority of mission sensors and charging-related modules are not included.

4.2.4. FT for Battery Charging

The AUV mounted on the USV sends data collected during the operation and exchanges internal inspection data and mission performance outcomes during the battery charging phase. In order to charge the battery, it also gets power from the USV. The FT is displayed in Figure 3d, taking into account the communication and battery charging. The components of the FT for charging are a wireless battery charging module and components for data transmission and reception.

5. Simulation

5.1. Simulation Setup

Four missions were included in the AUV operation scenario created for the simulation: launching (1.5 h), mission execution (exploration, 7.5 h), recovery (1.5 h), and charging (10 h). We examined how the occurrence of different kinds of failures impacts the mission feasibility, performability, and EIP of the entire system, taking into account a variety of scenarios. Numerous eventualities, including the failure of non-replaceable parts and the recovery of parts for which replacement parts are available, were taken into consideration based on the previously constructed FTs. The probability of failure for each basic event was determined by consulting various sources [89,90,91,92,93]. Each 50 h scenario repeats the four predetermined missions in sequence. Simulation results were compared with an MDD-based performability analysis approach [94]. The simulated AUV operation scenarios are as follows (a minimal sketch of the phase schedule is given after the list).
  • Case 1: Normal operation.
  • Case 2: Safety critical fault.
  • Case 3: Single fault in a replaceable component.
  • Case 4: Multiple faults in replaceable components and recovery.
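A minimal sketch of the phase schedule underlying these scenarios follows; the function and constant names are illustrative, and in the full method each active phase would select its own phased FT for the performability computation of Section 3.3.

```python
# Sketch of the simulated operation schedule described above: the four
# predetermined missions (launch 1.5 h, exploration 7.5 h, recovery 1.5 h,
# charging 10 h) repeat over the 50 h scenario.

PHASES = [("launch", 1.5), ("exploration", 7.5),
          ("recovery", 1.5), ("charging", 10.0)]
CYCLE_HOURS = sum(d for _, d in PHASES)  # 20.5 h per mission cycle

def active_phase(t):
    """Return the name of the mission phase active at time t (hours)."""
    t_in_cycle = t % CYCLE_HOURS
    for name, duration in PHASES:
        if t_in_cycle < duration:
            return name
        t_in_cycle -= duration
    return PHASES[-1][0]  # guard for floating-point edge cases

# Sweep the 50 h scenario; at each step the active phase's FT would be
# evaluated for Pr(A1), Pr(A2), and 1 - Pr(A3).
for step in range(0, 101, 10):
    t = step * 0.5
    print(f"t = {t:5.1f} h -> {active_phase(t)}")
```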

5.2. Simulation Results

5.2.1. Case 1: Normal Operation

The meanings of the AUV performability measures $\Pr(A_1)$, $\Pr(A_2)$, and $1 - \Pr(A_3)$ are as shown in Equation (17).
$\Pr(A_1) = P(\{\omega \mid Y_S(\omega) \geq A_1\})$
$\Pr(A_2) = P(\{\omega \mid A_1 > Y_S(\omega) \geq A_2\})$
$1 - \Pr(A_3) = 1 - P(\{\omega \mid Y_S(\omega) > A_3\})$
$\Pr(A_1)$ denotes the probability of the mission being completed even in the event of a system failure, while $\Pr(A_2)$ denotes the potential for the mission to be completed while in a safety critical state, meaning that in the event of a further failure, the mission can no longer be completed. The likelihood that a safety critical defect, or several failures that the system cannot handle, will prevent the system from carrying out its mission is represented by $1 - \Pr(A_3)$. The results of the performability analysis using the two methods over a 50 h period are shown graphically in Figure 4.
As a result of the AUV functioning flawlessly over the entire operating duration, $\Pr(A_1)$ and $\Pr(A_2)$ remain high. The number of parts in use rises and $\Pr(A_1)$ falls somewhat when the AUV begins its exploration mission at 1.5 h. As the AUV completes the mission and enters the recovery and charging phases, its performability $\Pr(A_1)$ rises again. Compared to the MDD-based approach, the proposed method more clearly illustrates the differences between the phases.

5.2.2. Case 2: Safety Critical Fault Occurs

In this case, an irreplaceable part failed at 35 h while the AUV was operating. Figure 5 illustrates how the controller PTM, one of the safety-critical components, failed at 35 h, causing Pr ( A 1 ) and Pr ( A 2 ) to drop to 0 and 1 Pr ( A 3 ) to rise to 1 in both methods. It is evident that the failure of this important component significantly affects the system’s performability and prevents the system from operating.

5.2.3. Case 3: Single Fault in a Replaceable Component

A single-replaceable-component failure was considered in the third scenario. At 25 h of AUV operation, a depth gauge failure occurs; this gauge can be replaced with a different kind of pressure sensor. The system has redundant components, which allow it to function correctly even in the event of several replaceable failures. Figure 6 illustrates how such a failure impacts the system's overall performability. In the proposed method, the whole system's performability is determined by calculating each module's performability and combining them, so the overall performability of the system is affected when the performability of a single module deteriorates. In the MDD-based method, the performability is not broken down into individual modules but rather is treated as a single segment.

5.2.4. Case 4: Multiple Faults in Replaceable Components and Recovery

In the fourth scenario, we considered a situation where several replaceable parts fail and the failures are subsequently recovered at two points in time. Around 5 h, internal communication with redundant components, WiFi communication, the depth gauge, and the CTD sensor failed. Because replaceable parts are available, even in the event of numerous failures, $\Pr(A_2)$ retains a very high value in the proposed method and $\Pr(A_1)$ drops to 0 only in certain periods, as shown in Figure 7a. In the MDD-based method, module-level performability changes are not reflected, as in case 3. The performability $\Pr(A_1)$ is shown to rise as different components recover at 25 and 40 h.

5.2.5. EIP Results

The EIP result of case 1 is shown in Figure 8a, where the performance of each component is on the right and the EIP is on the left. Of the many components of the previously designed FT, the performance of four distinct components is shown. The pressure sensors are periodically initialized for calibration, and inaccuracies in positional accuracy are rectified as the vehicle rises to the surface or the sensors are calibrated. Furthermore, the controller battery's charging state and CPU performance were taken into account. During launch and exploration, the initial EIP steadily decreases over time, and the EIP value occasionally increases as a result of the influence of the regularly calibrated sensors. The AUV returns to the USV around 8.5 h, when the EIP somewhat improves. From 10 h on, the AUV enters the charging phase, and the overall EIP rises as the battery condition improves due to wireless charging. The changes in the AUV's EIP can be observed as the succeeding phases recur. For scenario case 2, Figure 8b shows how the EIP likewise drops to 0 when the failure occurs at 35 h. Figure 8c illustrates how the system's EIP drops somewhat as a result of the depth gauge failure at 25 h in scenario case 3. For scenario case 4, the EIP slightly decreases at 5 h as several components fail, and it increases as different components recover at 25 and 40 h, as shown in Figure 8d.

6. Discussion

AUVs must carry out a variety of tasks, including launching, exploration, movement, recovery, and battery charging, to conduct long-term missions. A different set of capabilities are needed for each mission. As such, a thorough understanding of the system’s performance capabilities for each mission is vital.
The MDD-based method determines the performability of each phase by converting the phased FTs into a single MDD. The proposed approach, in contrast, computes the performability of each sub-FT within the phased FT and then uses the UGF to determine the overall performability of the system. Utilizing a UGF to integrate each module's performability distinguishes it from conventional performability analysis methods. The findings show that the proposed approach better captures the variations in performance across phases and highlights how changes in performability at the functional-module level propagate to the overall system performability.
In an earlier work, performance and reliability were taken into consideration when designing the FT of an AUV, and the health of the system was assessed using a performance reliability index that took the performance’s weights into account. Previous studies evaluated AUV health by taking into account the performance and failure of each component over time.
Due to component failures or performance degradation, it is challenging to determine whether the current operation or a specific mission can be completed using the methodologies currently in use. The real operability of the system may be assessed differently because different missions have varying degrees of importance or severity when a component fails. Other types of missions may still be possible even if the current one is challenging to complete.
The system performance in this paper was determined using a weighted arithmetic average, but this approach is insufficient to accurately capture the system's performance when compared to conventional model-based performance calculation methods. Consequently, more research is required to represent the system's performance more precisely based on an FT. The real-time performance of the proposed method was not thoroughly studied in this paper; however, the method was implemented in Python and ROS so that the AUV can ultimately analyze the system's state in real time during mission execution.

7. Conclusions

In this paper, a phased FTA-based approach was suggested as a means of evaluating the system's performability, with the AUV functioning as a phased mission system. We designed FTs that account for component failure and performance for each distinct AUV mission, including charging, exploration, and launch/recovery. Based on the FT for every mission, TDLR was used to define the performance ranking of the components. The weights of each performance were defined using the rank summed weighting technique, and the sub-FT performance was computed using the weighted arithmetic mean. Three categories were defined based on the AUV accomplishment level, and each level's performability was assessed. The EIP and performability of the entire system were computed using the UGF based on the reliability, performance, and performability of each sub-FT.
It is feasible to ascertain whether the current mission can be completed by assessing the performability and EIP for each AUV mission. A performability analysis for a phased mission system calculates the performabilities of other missions and determines whether the AUV can perform additional missions, recoveries to the USV, etc. Even in the event that a malfunction occurs and the AUV cannot properly perform its current mission, it is possible to obtain information for future mission planning and decision making. In the future, we will verify the algorithm by implementing it on heterogeneous underwater search fleets made up of actual systems. Additional work is also necessary to more precisely depict the system’s performance based on an FT.

Author Contributions

Conceptualization, S.B. and D.L.; methodology, S.B.; software, S.B.; validation, S.B. and D.L.; formal analysis, S.B.; investigation, S.B.; resources, S.B. and D.L.; data curation, S.B.; writing—original draft preparation, S.B. and D.L.; writing—review and editing, S.B. and D.L.; visualization, S.B.; supervision, D.L.; project administration, D.L.; funding acquisition, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Unmanned Vehicles Core Technology Research and Development Program through the National Research Foundation of Korea (NRF), the Unmanned Vehicle Advanced Research Center (UVARC) funded by the Ministry of Science and ICT, the Republic of Korea (NRF-2020M3C1C1A02086313).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

This research received administrative and technical support from Fausto Pedro García Márquez.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AADL	Architecture Analysis and Design Language
AUV	Autonomous Underwater Vehicle
BDD	Binary Decision Diagram
EIP	Expected Instantaneous Performance
FE	Fault Effect
FHA	Fault Hazard Analysis
MDD	Multivalued Decision Diagram
MSS	Multi-State System
NHPP	Non-Homogeneous Poisson Process
PCM	Power Charging Module
Phased-FTA	Phased Fault Tree Analysis
PMS	Phased Mission System
PTM	Power Transformation Module
RS	Rank Summed Weighting
SAN	Stochastic Activity Network
SCS	Safety-Critical System
SMP	Semi-Markov Process
SPN	Stochastic Petri Net
SRN	Stochastic Reward Network
TDLR	Top–Down–Left–Right
UGF	Universal Generating Function
UML	Unified Modeling Language
USV	Unmanned Surface Vehicle

References

  1. Marini, S.; Gjeci, N.; Govindaraj, S.; But, A.; Sportich, B.; Ottaviani, E.; Márquez, F.P.G.; Bernalte Sanchez, P.J.; Pedersen, J.; Clausen, C.V.; et al. ENDURUNS: An integrated and flexible approach for seabed survey through autonomous mobile vehicles. J. Mar. Sci. Eng. 2020, 8, 633. [Google Scholar] [CrossRef]
  2. Noh, H.; Kang, K.; Park, J.-Y. Risk Analysis of Autonomous Underwater Vehicle Operation in a Polar Environment Based on Fuzzy Fault Tree Analysis. J. Mar. Sci. Eng. 2023, 11, 1976. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Zhang, F.; Wang, Z.; Zhang, X. Localization Uncertainty Estimation for Autonomous Underwater Vehicle Navigation. J. Mar. Sci. Eng. 2023, 11, 1540. [Google Scholar] [CrossRef]
  4. Wang, Z.; Wen, Z.; Yang, W.; Liu, Z.; Dong, H. Model-Based Digital Overall Integrated Design Method of AUVs. J. Mar. Sci. Eng. 2023, 11, 1953. [Google Scholar] [CrossRef]
  5. Somani, A.K. Simplified phased-mission system analysis for systems with independent component repairs. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 167–189. [Google Scholar] [CrossRef]
  6. Zang, X.; Sun, N.; Trivedi, K.S. A BDD-based algorithm for reliability analysis of phased-mission systems. IEEE Trans. Reliab. 1999, 48, 50–60. [Google Scholar] [CrossRef]
  7. Gong, Y.H.; Mo, Y.C.; Liu, Y.; Ding, Y. Analyzing phased-mission industrial network systems with multiple ordered performance levels. J. Ind. Prod. Eng. 2019, 36, 125–133. [Google Scholar] [CrossRef]
  8. Oktay, F.B.; Gültekin, Ö.E. Combined Markov and UGF Methods for Multi-State Repairable Phased Mission Systems. Int. J. Reliab. Risk Saf. Theory Appl. 2023, 6, 1–9. [Google Scholar]
  9. Peng, R.; Zhai, Q.; Xing, L.; Yang, J. Reliability analysis and optimal structure of series-parallel phased-mission systems subject to fault-level coverage. Iie Trans. 2016, 48, 736–746. [Google Scholar] [CrossRef]
  10. De Simone, L.; Di Mauro, M.; Longo, M.; Natella, R.; Postiglione, F. Performability Assessment of Containerized Multi-Tenant IMS through Multidimensional UGF. In Proceedings of the 2022 18th International Conference on Network and Service Management (CNSM), Thessaloniki, Greece, 31 October–4 November 2022. [Google Scholar]
  11. Byun, S.; Papaelias, M.; Márquez, F.P.G.; Lee, D. Fault-tree-analysis-based health monitoring for autonomous underwater vehicle. J. Mar. Sci. Eng. 2022, 10, 1855. [Google Scholar] [CrossRef]
  12. Wu, G.; Li, Y.; Jiang, C.; Wang, C.; Guo, J.; Cheng, R. Multi-vessels collision avoidance strategy for autonomous surface vehicles based on genetic algorithm in congested port environment. Brodogr. Teor. Praksa Brodogr. Pomor. Teh. 2022, 73, 69–91. [Google Scholar] [CrossRef]
  13. Wang, Z.; Wei, Z.; Yu, C.; Cao, J.; Yao, B.; Lian, L. Dynamic modeling and optimal control of a positive buoyancy diving autonomous vehicle. Brodogr. Int. J. Nav. Archit. Ocean Eng. Res. Dev. 2023, 74, 19–40. [Google Scholar] [CrossRef]
  14. Hou, S.; Zhang, Z.; Lian, H.; Xing, X.; Gong, H.; Xu, X. Hull shape optimization of small underwater vehicle based on Kriging-based response surface method and multi-objective optimization algorithm. Brodogr. Teor. Praksa Brodogr. Pomor. Teh. 2022, 73, 111–134. [Google Scholar] [CrossRef]
  15. Petritoli, E.; Leccese, F.; Ciani, L. Reliability and Maintenance Analysis of Unmanned Aerial Vehicles. Sensors 2018, 18, 3171. [Google Scholar] [CrossRef] [PubMed]
  16. Petritoli, E.; Leccese, F. Power Management and Reliability Analysis of Albacore: An AUV for Shallow Waters. In Proceedings of the 2023 IEEE International Workshop on Metrology for the Sea; Learning to Measure Sea Health Parameters (MetroSea), La Valletta, Malta, 4–6 October 2023. [Google Scholar]
  17. Eusgeld, I.; Happe, J.; Limbourg, P.; Rohr, M.; Salfner, F. Performability. In Dependability Metrics: Advanced Lectures; Springer: Berlin/Heidelberg, Germany, 2008; pp. 245–254. [Google Scholar]
  18. Bashiri, M.; Miremadi, S.G. Performability guarantee for periodic tasks in real-time systems. Sci. Iran. 2014, 21, 2127–2137. [Google Scholar]
  19. Tokuno, K.; Yamada, S. Availability-based software performability model with user-perceived performance degradation. Int. J. Softw. Eng. Its Appl. 2010, 4, 1–14. [Google Scholar]
  20. Meyer, J.F. On evaluating the performability of degradable computing systems. IEEE Trans. Comput. 1980, 100, 720–731. [Google Scholar]
  21. Al-Kuwaiti, M.; Kyriakopoulos, N.; Hussein, S. A comparative analysis of network dependability, fault-tolerance, reliability, security, and survivability. Commun. Surv. Tutor. 2009, 11, 106–124. [Google Scholar] [CrossRef]
  22. Ahamad, S. Some studies on performability analysis of safety critical systems. Comput. Sci. Rev. 2021, 39, 100319. [Google Scholar] [CrossRef]
  23. Tokuno, K.; Nagata, T.; Yamada, S. Stochastic software performability evaluation based on NHPP reliability growth model. Int. J. Reliab. Qual. Saf. Eng. 2011, 18, 431–444. [Google Scholar] [CrossRef]
  24. Jyotish, N.K.; Singh, L.K.; Kumar, C.; Singh, P. Reliability and Performance Measurement of Safety-Critical Systems Based on Petri Nets: A Case Study of Nuclear Power Plant. IEEE Trans. Reliab. 2023, 72, 1523–1539. [Google Scholar] [CrossRef]
  25. Araújo, C.; Oliveira, M., Jr.; Nogueira, B.; Maciel, P.; Tavares, E. Performability evaluation of NoSQL-based storage systems. J. Syst. Softw. 2024, 208, 111885. [Google Scholar] [CrossRef]
  26. Marinho, O.; Callou, G.; Melo, R.; Andrade, E. Performability Evaluation of Railway Systems: A Study on the Impact of Adding Alternative Routes. IEEE Lat. Am. Trans. 2023, 21, 270–276. [Google Scholar] [CrossRef]
  27. Mariotti, F.; Lollini, P.; Mattiello-Francisco, F. The GOLDS satellite constellation: Preparatory works for a model-based performability analysis. In Proceedings of the 2023 IEEE 34th International Symposium on Software Reliability Engineering Workshops (ISSREW), Florence, Italy, 9–12 October 2023. [Google Scholar]
  28. Marcozzi, M.; Mostarda, L. Analytical model for performability evaluation of Practical Byzantine Fault-Tolerant systems. Expert Syst. Appl. 2024, 238, 121838. [Google Scholar] [CrossRef]
  29. Ahamad, S.; Gupta, R. A reward-based performability modelling of a fault-tolerant safety–critical system. Int. J. Syst. Assur. Eng. Manag. 2023, 14, 2218–2234. [Google Scholar] [CrossRef]
  30. Younes, S.; Idi, M.; Robbana, R. Performability analysis of multi-service call admission control schemes in LTE networks. Int. J. Gen. Syst. 2023, 53, 215–253. [Google Scholar] [CrossRef]
  31. Machida, F.; Zhang, Q.; Andrade, E. Performability analysis of adaptive drone computation offloading with fog computing. Future Gener. Comput. Syst. 2023, 145, 121–135. [Google Scholar] [CrossRef]
  32. Zheng, Z.; Trivedi, K.S.; Qiu, K.; Xia, R. Semi-markov models of composite web services for their performance, reliability and bottlenecks. IEEE Trans. Serv. Comput. 2015, 10, 448–460. [Google Scholar] [CrossRef]
  33. Araújo, C.; Maciel, P.; Zimmermann, A.; Andrade, E.; Sousa, E.; Callou, G.; Cunha, P. Performability modeling of electronic funds transfer systems. Computing 2011, 91, 315–334. [Google Scholar] [CrossRef]
  34. Rizwan, S.M.; Sachdeva, K.; Alagiriswamy, S.; Al Rahbi, Y. Performability and Sensitivity Analysis of the Three Pumps of a Desalination Water Pumping Station. Int. J. Eng. Trends Technol. 2023, 71, 283–292. [Google Scholar] [CrossRef]
  35. Sousa, E.; Maciel, P.; Lins, F.; Marinho, M. Maintenance policy and its impact on the performability evaluation of eft systems. Int. J. Comput. Sci. Eng. Appl. 2012, 2, 95. [Google Scholar] [CrossRef]
  36. Andrade, E.; Machida, F. Assuring Autonomy of UAVs in Mission-critical Scenarios by Performability Modeling and Analysis. ACM Trans. Cyber-Phys. Syst. 2023. [Google Scholar] [CrossRef]
  37. Li, X.D.; Yin, Y.F.; Fiondella, L. Reliability and Performance Analysis of Architecture-Based Software Implementing Restarts and Retries Subject to Correlated Component Failures. Int. J. Softw. Eng. Knowl. Eng. 2015, 25, 1307–1334. [Google Scholar] [CrossRef]
  38. Khan, R.H.; Heegaard, P.E.; Machida, F. From uml to SRN: A tool based support for performability modeling of distributed system considering reusable software components. In Proceedings of the IASTED International Conference on Modelling and Simulation, Banff, AB, Canada, 3–9 July 2012. [Google Scholar]
  39. Tokuno, K.; Yamada, S. Stochastic performability measurement for software system with random performance degradation and field-oriented restoration. Int. J. Syst. Assur. Eng. Manag. 2010, 1, 330–339. [Google Scholar] [CrossRef]
  40. Ataie, E.; Entezari-Maleki, R.; Rashidi, L.; Trivedi, K.S.; Ardagna, D.; Movaghar, A. Hierarchical stochastic models for performance, availability, and power consumption analysis of IaaS clouds. IEEE Trans. Cloud Comput. 2017, 7, 1039–1056. [Google Scholar] [CrossRef]
  41. Entezari-Maleki, R.; Trivedi, K.S.; Sousa, L.; Movaghar, A. Performability-based workflow scheduling in grids. Comput. J. 2018, 61, 1479–1495. [Google Scholar] [CrossRef]
  42. Mitrevski, P.; Mitrevski, F.; Gusev, M. A Decade Time-Lapse of Cloud Performance and Dependability Modeling: Performability Evaluation Framework. In Proceedings of the 2nd International Conference on Networking, Information Systems & Security, Rabat, Morocco, 27–29 March 2019. [Google Scholar]
  43. Singh, L.K.; Vinod, G.; Tripathi, A.K. Modeling and prediction of performability of safety critical computer based systems using Petri nets. In Proceedings of the 2012 IEEE 23rd International Symposium on Software Reliability Engineering Workshops, Dallas, TX, USA, 27–30 November 2012. [Google Scholar]
  44. Entezari-Maleki, R.; Mohammadkhan, A.; Yeom, H.Y.; Movaghar, A. Combined performance and availability analysis of distributed resources in grid computing. J. Supercomput. 2014, 69, 827–844. [Google Scholar] [CrossRef]
  45. Taheri, N.; Jamali, S.; Esmaeili, M. Achieving Performability and Reliability of Data Storage in the Internet of Things. Int. J. Eng. Manuf. (IJEM) 2022, 12, 12–28. [Google Scholar] [CrossRef]
  46. Wu, Z.; Xiong, N.; Huang, Y.; Gu, Q.; Hu, C.; Wu, Z.; Hang, B. A fast optimization method for reliability and performance of cloud services composition application. J. Appl. Math. 2013, 39, 1–12. [Google Scholar] [CrossRef]
  47. Pathak, P.H.; Dutta, R.; Mohapatra, P. On availability-performability tradeoff in wireless mesh networks. IEEE Trans. Mob. Comput. 2014, 14, 606–618. [Google Scholar] [CrossRef]
  48. Tokuno, K.; Yamada, S. Codesign-oriented performability modeling for hardware-software systems. IEEE Trans. Reliab. 2011, 60, 171–179. [Google Scholar] [CrossRef]
  49. Zhang, H.; Li, P.; Zhou, Z. A correlated model for evaluating performance and energy of cloud system given system reliability. Discret. Dyn. Nat. Soc. 2015, 2015, 497048. [Google Scholar] [CrossRef]
  50. Mo, Y.; Xing, L.; Dugan, J.B. Performability analysis of k-to-l-out-of-n computing systems using binary decision diagrams. IEEE Trans. Dependable Secur. Comput. 2015, 15, 126–137. [Google Scholar] [CrossRef]
  51. Amari, S.V.; Xing, L.; Shrestha, A.; Akers, J.; Trivedi, K.S. Performability analysis of multistate computing systems using multivalued decision diagrams. IEEE Trans. Comput. 2010, 59, 1419–1433. [Google Scholar] [CrossRef]
  52. Ahamad, S.; Gupta, R. Performability modeling of safety-critical systems through AADL. Int. J. Inf. Technol. 2022, 14, 2709–2722. [Google Scholar] [CrossRef]
  53. Wang, C.; Wang, S.; Xing, L.; Guan, Q. Efficient performability analysis of dynamic multi-state k-out-of-n: G systems. Reliab. Eng. Syst. Saf. 2023, 237, 109384. [Google Scholar] [CrossRef]
  54. Ahamad, S.; Goel, S. Fault-Tolerant and Performability for Safety-Critical Systems: A Study Based on Interrelation. In Proceedings of the International Conference on Innovative Computing & Communication (ICICC), Delhi, India, 20–21 February 2021. [Google Scholar]
  55. Xue, J.; Yang, K. Dynamic reliability analysis of coherent multistate systems. IEEE Trans. Reliab. 1995, 44, 683–688. [Google Scholar]
  56. Trivedi, K.S.; Muppala, J.K.; Woolet, S.P.; Haverkort, B.R. Composite performance and dependability analysis. Perform. Eval. 1992, 14, 197–215. [Google Scholar] [CrossRef]
  57. Sahner, R.A.; Trivedi, K.; Puliafito, A. Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package; Springer Science & Business Media: New York, NY, USA, 2012. [Google Scholar]
  58. Haverkort, B.; Marie, R.; Rubino, G.; Trivedi, K.S. Performability Modelling Tools and Techniques; John Wiley and Sons: West Sussex, UK, 2001. [Google Scholar]
  59. Chang, Y.; Brito, M. On the Reliability of Experts’ Assessments for Autonomous Underwater Vehicle Risk of Loss Prediction: Are Optimists better than Pessimists? In Proceedings of the Probabilistic Safety Assessment and Management (PSAM), Los Angeles, CA, USA, 16–21 September 2018. [Google Scholar]
  60. Bartlett, L.M. Progression of the Binary Decision Diagram Conversion Methods. In Proceedings of the 21st International System Safety Conference, Ottawa, ON, Canada, 4–8 August 2003. [Google Scholar]
  61. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  62. Jensen, R.M.; Veloso, M.M. OBDD-based universal planning for synchronized agents in non-deterministic domains. J. Artif. Intell. Res. 2000, 13, 189–226. [Google Scholar] [CrossRef]
  63. Malik, S.; Wang, A.R.; Brayton, R.K.; Sangiovanni-Vincentelli, A. Logic verification using binary decision diagrams in a logic synthesis environment. In Proceedings of the 1988 IEEE International Conference on Computer-Aided Design, Santa Clara, CA, USA, 7–10 November 1988. [Google Scholar]
  64. Xie, M.; Tan, K.C.; Goh, K.H.; Huang, X.R. Optimum prioritisation and resource allocation based on fault tree analysis. Int. J. Qual. Reliab. Manag. 2000, 17, 189–199. [Google Scholar] [CrossRef]
  65. Ezell, B.; Lynch, C.J.; Hester, P.T. Methods for weighting decisions to assist modelers and decision analysts: A review of ratio assignment and approximate techniques. Appl. Sci. 2021, 11, 10397. [Google Scholar] [CrossRef]
  66. Stillwell, W.G.; Seaver, D.A.; Edwards, W. A comparison of weight approximation techniques in multiattribute utility decision making. Organ. Behav. Hum. Perform. 1981, 28, 62–77. [Google Scholar] [CrossRef]
  67. U.S. Coast Guard. Coast Guard Process Improvement Guide: Total Quality Tools for Teams and Individuals, 2nd ed.; U.S. Government Printing Office: Boston, MA, USA, 1994.
  68. Ushakov, I.A. A universal generating function. Sov. J. Comput. Syst. Sci. 1986, 24, 118–129. [Google Scholar]
  69. Lisnianski, A.; Levitin, G. Multi-State System Reliability: Assessment, Optimization and Applications; World Scientific: Singapore, 2003. [Google Scholar]
  70. Li, Y.F.; Zio, E. A multi-state model for the reliability assessment of a distributed generation system via universal generating function. Reliab. Eng. Syst. Saf. 2012, 106, 28–36. [Google Scholar] [CrossRef]
  71. Levitin, G. A universal generating function approach for the analysis of multi-state systems with dependent elements. Reliab. Eng. Syst. Saf. 2004, 84, 285–292. [Google Scholar] [CrossRef]
  72. Limnios, N. Fault Trees; ISTE: London, UK, 2007; pp. 49–66. [Google Scholar]
  73. Bian, R.; Pan, Z.; Cheng, Z.; Bai, S. Improved MDD algorithm for mission reliability estimation of an escort formation. IEEE Access 2020, 8, 51340–51351. [Google Scholar] [CrossRef]
  74. Omerdic, E.; Roberts, G. Thruster fault diagnosis and accommodation for open-frame underwater vehicles. Control Eng. Pract. 2004, 12, 1575–1598. [Google Scholar] [CrossRef]
  75. Rae, G.J.; Dunn, S.E. On-line damage detection for autonomous underwater vehicles. In Proceedings of the IEEE Symposium on Autonomous Underwater Vehicle Technology, Cambridge, MA, USA, 19–20 July 1994. [Google Scholar]
  76. Liu, F.; Ma, Z.; Mu, B.; Duan, C.; Chen, R.; Qin, Y.; Pu, H.; Luo, J. Review on fault-tolerant control of unmanned underwater vehicles. Ocean Eng. 2023, 285, 115471. [Google Scholar] [CrossRef]
  77. Dearden, R.; Ernits, J. Automated fault diagnosis for an autonomous underwater vehicle. IEEE J. Ocean. Eng. 2013, 38, 484–499. [Google Scholar] [CrossRef]
  78. Wang, W.; Chen, Y.; Xia, Y.; Xu, G.; Zhang, W.; Wu, H. A fault-tolerant steering prototype for x-rudder underwater vehicles. Sensors 2020, 20, 1816. [Google Scholar] [CrossRef]
  79. Podder, T.K.; Sibenac, M.; Thomas, H.; Kirkwood, W.J.; Bellingham, J.G. Reliability growth of autonomous underwater vehicle-Dorado. In Proceedings of the Oceans '04 MTS/IEEE Techno-Ocean '04, Kobe, Japan, 9–12 November 2004. [Google Scholar]
  80. Brito, M.; Griffiths, G.; Ferguson, J.; Hopkin, D.; Mills, R.; Pederson, R.; MacNeil, E. A behavioral probabilistic risk assessment framework for managing autonomous underwater vehicle deployments. J. Atmos. Ocean. Technol. 2012, 29, 1689–1703. [Google Scholar] [CrossRef]
  81. Chen, X.; Bose, N.; Brito, M.; Khan, F.; Thanyamanta, B.; Zou, T. A review of risk analysis research for the operations of autonomous underwater vehicles. Reliab. Eng. Syst. Saf. 2021, 216, 108011. [Google Scholar] [CrossRef]
  82. Griffiths, G.; Brito, M.; Robbins, I.; Moline, M. Reliability of two REMUS-100 AUVs based on fault log analysis and elicited expert judgment. In Proceedings of the International Symposium on Unmanned Untethered Submersible Technology (UUST 2009), Durham, NH, USA, 23–26 August 2009. [Google Scholar]
  83. Brito, M.P.; Smeed, D.A.; Griffiths, G. Analysis of causation of loss of communication with marine autonomous systems: A probability tree approach. Methods Oceanogr. 2014, 10, 122–137. [Google Scholar] [CrossRef]
  84. Lima, C.S.C.; Reis, M.; Schnitman, L.; Lepikson, H. Functional FMECA method applied to autonomous underwater vehicle development. In Proceedings of the OCEANS, Anchorage, AK, USA, 18–21 September 2017. [Google Scholar]
  85. Wang, P.; Zheng, J.; Li, C. Cooperative fault-detection mechanism with high accuracy and bounded delay for underwater sensor networks. Wirel. Commun. Mob. Comput. 2009, 9, 143–153. [Google Scholar] [CrossRef]
  86. Fang, S.; Wang, L.; Zhu, J.; Pang, Y. Sensor fault-tolerant control of an autonomous underwater vehicle. Robot 2007, 29, 155–159+166. [Google Scholar]
  87. Utne, I.B.; Schjølberg, I. A systematic approach to risk assessment: Focusing on autonomous underwater vehicles and operations in arctic areas. In Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering, San Francisco, CA, USA, 8–13 June 2014. [Google Scholar]
  88. Hegde, J.; Utne, I.B.; Schjølberg, I.; Thorkildsen, B. A Bayesian approach to risk modeling of autonomous subsea intervention operations. Reliab. Eng. Syst. Saf. 2018, 175, 142–159. [Google Scholar] [CrossRef]
  89. Rausand, M. Risk Assessment: Theory, Methods, and Applications; Wiley: New York, NY, USA, 2013. [Google Scholar]
  90. Department of Defense. MIL-HDBK-217F, Military Handbook: Reliability Prediction of Electronic Equipment; United States Department of Defense: Arlington, VA, USA, 1991.
  91. Bian, X.; Mou, C.; Yan, Z.; Xu, J. Reliability analysis of AUV based on fuzzy fault tree. In Proceedings of the International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE), Chengdu, China, 9–12 August 2009. [Google Scholar]
  92. Hu, Z.; Yang, Y.; Lin, Y. Failure analysis for the mechanical system of autonomous underwater vehicles. In Proceedings of the International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE), Chengdu, China, 15–18 July 2013. [Google Scholar]
  93. Aslansefat, K.; Latif-Shabgahi, G.; Kamarlouei, M. A strategy for reliability evaluation and fault diagnosis of Autonomous Underwater Gliding Robot based on its Fault Tree. Int. J. Adv. Sci. Eng. Technol. 2014, 2, 83–89. [Google Scholar]
  94. Mo, Y.; Cui, L.; Xing, L.; Zhang, Z. Performability analysis of large-scale multi-state computing systems. IEEE Trans. Comput. 2017, 67, 59–72. [Google Scholar] [CrossRef]
Figure 1. Overview of long-term operations with AUVs.
Figure 2. Performability evaluation process of the proposed method.
Figure 3. The design of fault trees for each mission: (a) launch, (b) mission execution, (c) recovery, and (d) charging.
Figure 4. Performability in scenario case 1.
Figure 5. Performability in scenario case 2.
Figure 6. Performability in scenario case 3.
Figure 7. Performability in scenario case 4.
Figure 8. EIP results of scenarios.
Table 1. Basic reliability event description of the fault tree.

| Event | Meaning | Event | Meaning | Event | Meaning | Event | Meaning |
|-------|---------|-------|---------|-------|---------|-------|---------|
| R1 | Propeller breakage | R12 | Motor overcurrent | R23 | Controller PCM failure | R34 | Noise |
| R2 | Yaw actuator failure | R13 | Motor aging | R24 | Communication unit signal bus failure | R35 | Integrated antenna assembly failure |
| R3 | Pitch actuator failure | R14 | Motor driver failure | R25 | Satellite communication failure | R36 | Forward-looking MBS failure |
| R4 | Buoyancy pump failure | R15 | Leak in power supply for motor | R26 | WIFI communication failure | R37 | Undersea terrain exploration MBS failure |
| R5 | Air bladder leak | R16 | Motor power transformation module (PTM) failure | R27 | RF communication failure | R38 | Underwater camera failure |
| R6 | Short circuit | R17 | Motor power charging module (PCM) failure | R28 | LTE–maritime communication failure | R39 | Side scan sonar failure |
| R7 | Shaft misalignment | R18 | Complex navigation computer failure | R29 | Communication unit serial bus failure | R40 | Depth gauge failure |
| R8 | Leak detection failure | R19 | Autonomous control computer failure | R30 | Underwater ultrasonic communication transducer failure | R41 | Pressure sensor failure |
| R9 | Case breakage | R20 | Emergency buoyancy control device failure | R31 | Underwater wireless optical transmitter failure | R42 | CTD failure |
| R10 | O-ring breakage | R21 | Leak in power supply for control unit | R32 | USBL failure | R43 | ADCP failure |
| R11 | O-ring corrosion | R22 | Controller PTM failure | R33 | GPS failure | R44 | DVL failure |
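To make the role of these basic events concrete, the following minimal Python sketch evaluates a small, hypothetical phase fault tree built from a few of the reliability events in Table 1. The failure probabilities and the gate structure are illustrative assumptions for this sketch only; they are not values or tree layouts taken from the paper.

```python
# Minimal illustrative sketch (not the authors' implementation): evaluating a
# phase-specific fault tree over basic reliability events from Table 1.
# All probabilities below are hypothetical placeholders.

from typing import Dict

# Hypothetical per-phase failure probabilities for a few basic events.
p_fail: Dict[str, float] = {
    "R1": 1e-4,   # Propeller breakage
    "R12": 5e-4,  # Motor overcurrent
    "R18": 2e-4,  # Complex navigation computer failure
    "R33": 1e-3,  # GPS failure
}

def or_gate(*probs: float) -> float:
    """Probability that at least one independent input event occurs."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(*probs: float) -> float:
    """Probability that all independent input events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical tree for one phase: the phase fails if propulsion fails
# (R1 OR R12) or navigation fails (R18 AND R33, i.e., both the navigation
# computer and GPS are lost).
propulsion = or_gate(p_fail["R1"], p_fail["R12"])
navigation = and_gate(p_fail["R18"], p_fail["R33"])
top_event = or_gate(propulsion, navigation)

print(f"Phase unreliability: {top_event:.6e}")
print(f"Phase reliability:   {1.0 - top_event:.6f}")
```

Because each mission phase uses a different fault tree (Figure 3), the same gate primitives can be reused with a phase-specific event set and probabilities.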
Table 2. Basic performance event description of the fault tree.

| Event | Meaning | Event | Meaning | Event | Meaning |
|-------|---------|-------|---------|-------|---------|
| P1 | Motor RPM | P13 | Charging time | P25 | Controller PTM current |
| P2 | Motor horizontal direction control | P14 | CPU usage | P26 | Controller PCM voltage |
| P3 | Motor vertical direction control | P15 | Memory usage | P27 | Controller PCM current |
| P4 | Hydraulic pump motor output current | P16 | HDD capacity | P28 | Charging current |
| P5 | Throttle valve motor output current | P17 | CPU temperature | P29 | Charging time |
| P6 | Motor battery charging state | P18 | CPU usage | P30 | Depth accuracy w/ comms |
| P7 | Discharging current | P19 | Memory usage | P31 | Position accuracy w/ comms |
| P8 | Motor PTM voltage | P20 | HDD capacity | P32 | Underwater sound velocity |
| P9 | Motor PTM current | P21 | CPU temperature | P33 | Depth accuracy w/o comms |
| P10 | Motor PCM voltage | P22 | Controller battery charging state | P34 | Position accuracy w/o comms |
| P11 | Motor PCM current | P23 | Discharging current | P35 | Velocity accuracy |
| P12 | Charging current | P24 | Controller PTM voltage | | |
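The sketch below shows one plausible way to fold measured performance events from Table 2 into a weighted, normalized performance score and combine it with the phase reliability from the previous sketch. The measurement values, normalization ranges, and weights here are hypothetical assumptions; in the paper, the mission-dependent component weights are determined by the proposed ranking system.

```python
# Illustrative sketch only: combining normalized performance events (Table 2)
# with phase reliability into a single performability-style index.
# Values, ranges, and weights are hypothetical placeholders.

measurements = {
    "P1": 1450.0,  # Motor RPM
    "P17": 62.0,   # CPU temperature (deg C)
    "P31": 0.8,    # Position accuracy w/ comms (m)
}

# (best, worst) operating ranges per event -- hypothetical placeholders.
ranges = {
    "P1": (1500.0, 0.0),  # full RPM is best, stalled is worst
    "P17": (40.0, 90.0),  # cooler CPU is better
    "P31": (0.1, 5.0),    # smaller position error is better
}

weights = {"P1": 0.5, "P17": 0.2, "P31": 0.3}  # assumed mission weights

def normalize(value: float, best: float, worst: float) -> float:
    """Map a raw measurement to [0, 1], where 1 is best performance."""
    score = (value - worst) / (best - worst)
    return min(max(score, 0.0), 1.0)

# Weighted sum of normalized performance events for the current mission.
performance = sum(
    weights[k] * normalize(measurements[k], *ranges[k]) for k in measurements
)

phase_reliability = 0.998  # e.g., output of the fault-tree sketch above
performability = phase_reliability * performance
print(f"Weighted performance: {performance:.3f}")
print(f"Performability index: {performability:.3f}")
```

Reversing the (best, worst) order in `ranges` handles both "higher is better" and "lower is better" events with the same normalization function.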
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
