Article

Unmanned Engine Room Surveillance Using an Autonomous Mobile Robot

1 Division of Marine Engineering, Mokpo National Maritime University, Mokpo 58628, Republic of Korea
2 Division of Coast Guard, Mokpo National Maritime University, Mokpo 58628, Republic of Korea
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2023, 11(3), 634; https://doi.org/10.3390/jmse11030634
Submission received: 22 February 2023 / Revised: 12 March 2023 / Accepted: 15 March 2023 / Published: 17 March 2023
(This article belongs to the Section Ocean Engineering)

Abstract

With the rapid advances in science and technology, ships that do not require any crew on board (i.e., autonomous ships) are being actively researched. Several studies on unmanned ships are in progress, and unmanned engine room studies are also being conducted. These studies mainly focus on engine failure prediction and diagnosis, but have not paid sufficient attention to various abnormal situations. Accordingly, this study focuses on the surveillance of engine rooms and abnormal situations using an autonomous mobile robot. The abnormal situation considered in this study was fire, which is highly dangerous when it occurs in a ship’s engine room. A map of the engine room was created using the autonomous mobile robot, and when a destination was set on the map, a path was found and the robot surveilled the engine room while autonomously tracking that path. When a fire is detected during surveillance, its coordinates are converted into a form the autonomous mobile robot can use, and the fire location becomes a new destination. Experiments were conducted to evaluate the autonomous mobile robot’s movement performance, fire detection performance, and overall performance in the engine room. In the movement evaluation, the average arrival rate at the destination was 88%, and fire detection achieved a detection rate of 0.9833, a false alarm rate of 0, and an accuracy of 0.9916. The engine room evaluation confirmed that a fire was detected while driving, the destination was changed to a new destination by coordinate conversion, and the robot drove there autonomously. These results confirm that engine room surveillance using an autonomous mobile robot is feasible, which contributes towards the development of unmanned engine rooms and improved ship safety.

1. Introduction

In the marine field, autonomous ships are an active research topic, and studies on unmanned engine rooms are also being conducted. The International Maritime Organization (IMO) divides autonomous ships into four classes [1]. At present, most ships belong to the first class, in which seafarers are on board and operate the ship. From the third class onward, however, an unmanned engine room becomes a crucial requirement. Unmanned engine room studies have mainly focused on engine performance monitoring and failure prediction/diagnosis. Engine performance monitoring studies determine general machine conditions using data (e.g., temperature, pressure, vibration, and water level) obtained from engine room sensors, and they predict and diagnose machine failures if anomalies are observed [2,3,4]. However, other abnormal situations (e.g., fire, flooding, and leakage of fuel oil, lubricating oil, or fresh water) can occur in the engine room, and these are difficult to predict using only the data obtained from such sensors (e.g., thermometers, pressure gauges, and flowmeters).
According to marine accident statistics from the Korea Maritime Safety Tribunal, fire is the third most common type of ship accident after collisions and safety accidents [5]. The statistics indicate that more than 50% of all ship fires occur in the engine room, owing to engine equipment defects and poor handling. The Korean Register of Shipping’s guidelines for autonomous ships [6] present general risk factors from a functional perspective and identify engine room fire as a risk factor for the propulsion system of autonomous ships. This indicates that an engine room fire poses an immense risk to the functioning of an autonomous ship system. Therefore, real-time fire surveillance in the ship’s engine room is an important capability for maintaining ship safety and developing unmanned engine rooms.
Engine room surveillance generally relies on fixed Closed Circuit Television (CCTV). Surveillance using CCTV has the advantage of being able to watch the same area continuously. Kim et al. [7] studied engine room surveillance through a multi-view monitoring system using an active Pan-Tilt-Zoom (PTZ) camera. In recent years, however, ships, and therefore their engine rooms, have been growing larger. With a modern ship’s engine room averaging around 1300 m², a CCTV solution can become costly because of the required installation (cabling, multiple cameras, a control system, etc.). In contrast, because an autonomous mobile robot can move freely, the engine room can be surveilled with only a small number of robots. In addition, a more integrated system has a smaller chance of malfunction than a distributed CCTV solution.
In the past, fire detection methods mainly used sensors to detect heat, smoke, and flames [8,9]. However, sensor-based fire detection has the disadvantage that a considerable amount of time elapses before the sensor registers the heat, smoke, or flames. Studies on image-based fire detection were conducted to mitigate this disadvantage. One study detected fire from the intensity and saturation of the red values of fire pixels based on the Red, Green, Blue (RGB) color model [10]. To improve on methods using only the RGB color model, studies combining the RGB and YCbCr color models were conducted [11,12]. However, color alone is not sufficient to detect fire, so further studies considered motion and shape characteristics together with color [13,14,15]. Fire detection studies using the hidden Markov tree (HMT) [16,17,18] and support vector machine (SVM) [19,20,21] techniques have also been conducted to improve performance.
Graphics processing unit (GPU) technology has advanced rapidly in recent years, and artificial intelligence technologies built on GPUs are evolving just as quickly. Artificial intelligence has improved image-based object detection performance, and its application to fire detection in images and videos is being actively studied. Image-based fire detection has mainly used convolutional neural networks (CNN) [22,23,24,25]. Among CNN models, You Only Look Once (YOLO) has the advantages of a high detection rate and fast detection speed. Li et al. [26] studied fire detection performance using YOLOv3, and Zhao et al. [27] and Abdusalomov et al. [28] conducted fire detection studies by modifying the YOLOv3 model. However, YOLOv3’s network is too deep to run comfortably on the edge devices used in autonomous mobile robots. Therefore, this study applies Tiny-YOLOv3, which has a lighter structure than YOLOv3, to detect fire in real time, learning flame and smoke as a single class called ‘fire’.
This study aims to surveil abnormal situations (especially fire) in the ship’s engine room using an autonomous mobile robot, supporting the operation of the unmanned engine room of an autonomous ship and improving the safety of the ship. First, a map of the engine room was created using Light Detection and Ranging (LiDAR), and the location of the autonomous mobile robot on the map was estimated using adaptive Monte Carlo localization (AMCL). When a destination was set on the map, the path from the current location to the destination was found using the A star (A*) algorithm, and the Pure Pursuit technique was used to track the path to the destination. The autonomous mobile robot continuously surveilled the engine room with a camera while moving along the path and detected fire using the weight file trained with Tiny-YOLOv3. The detected fire region was converted to Red, Green, Blue, Depth (RGBD) image coordinates, which gave the autonomous mobile robot the fire’s location. The autonomous mobile robot then moved to the corresponding point to check the fire.
For the performance evaluation, the fire detection performance and the movement performance of the autonomous mobile robot were validated separately, and both were then confirmed together through an experiment in an actual ship’s engine room.

2. Proposed Method

Figure 1 is a flowchart of the operation of the autonomous mobile robot while it drives autonomously to surveil the engine room. The autonomous mobile robot must have a map for autonomous driving; if no completed map is available, a suitable map must first be created. The autonomous mobile robot uses AMCL on the map to estimate its current location. When a destination is set, the robot finds a path from the current location to the destination using the A* algorithm. It then tracks the path using the Pure Pursuit technique and drives autonomously. While moving to its destination, the autonomous mobile robot continuously surveils the engine room through a camera. If a fire is detected during surveillance, the location of the fire is set as a new destination on the map by converting the coordinates of the detection box image data. The flowchart then proceeds again from the destination-setting block, and the autonomous mobile robot stops when it arrives at the new destination (i.e., the fire point).

2.1. Path Finding and Path Tracking

For surveillance, the autonomous mobile robot repeatedly drives to certain points and moves to new points in the engine room. To do so, the path from the current point to the destination must be found, and obstacles on the path must be avoided while moving from the current point to the destination. Many path finding algorithms exist, including the Dijkstra, Bellman–Ford, Floyd–Warshall, and A* algorithms. The Dijkstra and Bellman–Ford algorithms calculate the shortest paths from the starting point to all other points, with some differences between them, while the Floyd–Warshall algorithm calculates the shortest paths between all pairs of points. In contrast, the A* algorithm requires a small amount of computation and runs fast compared to the other path finding algorithms because it only expands the points with the minimum F(n) value from the starting point. Since the autonomous mobile robot uses an edge device, the A* algorithm was selected to reduce the amount of calculation and increase the calculation speed. The A* algorithm is based on Equation (1).
F(n) = G(n) + H(n) \qquad (1)
G(n) is the cost from the starting point to the current point, and H(n) is the estimated cost from the current point to the destination. F(n) is the sum of G(n) and H(n), and the next moving point is the lowest value point of F(n). As this process is repeated, the final moving path is obtained.
The A* procedure used in this study is summarized in Algorithm 1 (a minimal code sketch follows the algorithm). The algorithm repeats steps 3 to 5 until the destination is reached. Diagonal movement is allowed in the F(n) calculation for smoother path finding, and the cost of a diagonal move is obtained using the Pythagorean theorem. If multiple points share the smallest F(n) value, all of them are calculated.
Algorithm 1 Main steps of path finding
1: The autonomous mobile robot estimates the current location and sets it as the starting point.
2: The autonomous mobile robot sets a destination.
3: The F(n) values of the points around the starting point are calculated. (Diagonal movement is allowed, and obstacles are set as non-movable points.)
4: The next moving point is determined as a movable point having the smallest F(n) value. (If there are multiple points with the same F(n) value, all points are calculated.)
5: If the moving point is not the destination, the F(n) values of the surrounding points are calculated. If the moving point is the destination, the final path from the destination to the starting point is output.
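As a concrete illustration of the search described above, the following is a minimal Python sketch of grid-based A* with 8-connected (diagonal) movement and a Euclidean heuristic. The function name, grid representation, and example map are assumptions for illustration and are not taken from the paper's implementation.
```python
# Minimal A* sketch on a 2D occupancy grid; cells marked 1 are obstacles.
import heapq
import math

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    # 8-connected moves; diagonal steps cost sqrt(2) (Pythagorean theorem).
    moves = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, math.sqrt(2)), (-1, 1, math.sqrt(2)),
             (1, -1, math.sqrt(2)), (1, 1, math.sqrt(2))]

    def h(cell):  # H(n): estimated cost from the cell to the destination
        return math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    open_set = [(h(start), 0.0, start)]   # entries are (F, G, cell)
    g_cost = {start: 0.0}                 # G(n): cost from the starting point
    parent = {}

    while open_set:
        f, g, cell = heapq.heappop(open_set)
        if cell == goal:                  # reconstruct the final path
            path = [cell]
            while cell in parent:
                cell = parent[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc, step in moves:
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]]:      # obstacle: non-movable point
                continue
            new_g = g + step
            if new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                parent[nxt] = cell
                heapq.heappush(open_set, (new_g + h(nxt), new_g, nxt))
    return None

# Example map: 0 = movable, 1 = obstacle
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 3)))
```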
The autonomous mobile robot tracks the path and moves along it. Rokonuzzaman et al. [29] evaluated the performance of path tracking algorithms for autonomous vehicles. Their results showed that the Pure Pursuit and Stanley techniques can be easily applied to robots and require less computation than other path tracking algorithms. However, Stanley is less robust than Pure Pursuit, and Pure Pursuit performs better when the speed of the robot is low. Since the autonomous mobile robot used in the experiment has limited computing power and operates at low speed, the Pure Pursuit technique was selected as the most appropriate option. The Pure Pursuit technique sets a target point a certain distance ahead and adjusts the steering angle of the robot to track it. Equations (2)–(6) can be derived from Figure 2.
\frac{l_d}{\sin 2\alpha} = \frac{R}{\sin\left(\frac{\pi}{2} - \alpha\right)} \qquad (2)
\frac{l_d}{2 \sin\alpha \cos\alpha} = \frac{R}{\cos\alpha} \qquad (3)
\frac{l_d}{\sin\alpha} = 2R \qquad (4)
In Equation (5), k represents the curvature.
k = \frac{1}{R} = \frac{2 \sin\alpha}{l_d} \qquad (5)
R = \frac{L}{\tan\delta} \qquad (6)
The steering angle δ of the autonomous mobile robot is obtained through Equation (7), calculated by combining Equations (5) and (6).
\delta = \tan^{-1}\left(\frac{2 L \sin\alpha}{l_d}\right) \qquad (7)
The Pure Pursuit procedure used in this study to obtain the steering angle is given in Algorithm 2 (a minimal code sketch follows the algorithm). A very large δ value may damage the robot’s steering system. To prevent this, the maximum δ value was limited to 0.4 radians to either side of the robot’s centerline, which is taken as 0 radians, and Algorithm 2 is designed so that this maximum is not exceeded.
Algorithm 2 Main steps of path tracking
1: The autonomous mobile robot estimates the current location.
2: The autonomous mobile robot sets the point on the path that is one look-ahead distance away as the tracking point.
3: The angle α is calculated using the tracking point and the current position, and R is obtained for the curvature calculation.
4: The steering angle δ is calculated from the obtained values.
5: The heading of the autonomous mobile robot is updated, and steps 1 to 5 are repeated.
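The steering computation of Equation (7), including the 0.4 rad limit described above, can be sketched in Python as follows. The function name, the pose convention (rear-axle position plus heading), and the example wheelbase are illustrative assumptions, not the paper's code.
```python
# Minimal Pure Pursuit steering sketch following Equation (7), with the steering
# angle clamped to +/-0.4 rad as described above.
import math

def pure_pursuit_steering(pose, target, wheelbase_L, max_delta=0.4):
    """pose = (x, y, heading in rad) at the rear axle; target = (x, y) look-ahead point."""
    x, y, yaw = pose
    tx, ty = target
    # alpha: angle between the robot heading and the line to the tracking point.
    alpha = math.atan2(ty - y, tx - x) - yaw
    # l_d: look-ahead distance from the rear axle to the tracking point.
    l_d = math.hypot(tx - x, ty - y)
    # Equation (7): delta = atan(2 * L * sin(alpha) / l_d)
    delta = math.atan2(2.0 * wheelbase_L * math.sin(alpha), l_d)
    # Limit the command to protect the steering system.
    return max(-max_delta, min(max_delta, delta))

# Example: robot at the origin heading along +x, tracking a point 1 m ahead and
# 0.2 m to the left, with an assumed 0.3 m wheelbase.
print(pure_pursuit_steering((0.0, 0.0, 0.0), (1.0, 0.2), 0.3))
```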

2.2. Object Detection

YOLO [30,31,32] is an object detection algorithm using a CNN specialized for image detection. There are several versions of YOLO models, and Tiny-YOLOv3 was chosen in this study. In YOLO, the tiny version is a structure that increases the detection speed by reducing the number of convolution layers. As the tiny version has a small amount of computation, it is suitable for this study, which requires real-time object detection using an edge device.
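Purely as an illustration of how a trained Tiny-YOLOv3 weight file can be run for single-class fire detection, the sketch below uses OpenCV's DNN module; the paper does not state this exact tooling, and the cfg/weights file names and the 0.5 confidence threshold are placeholder assumptions.
```python
# Illustrative single-class fire inference with a Tiny-YOLOv3 weight file using
# OpenCV's DNN module (file names and threshold are placeholders).
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny-fire.cfg", "yolov3-tiny-fire.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect_fire(image, conf_threshold=0.5):
    """Return (xmin, ymin, xmax, ymax) pixel boxes for the single 'fire' class."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(out_layers):
        for det in output:          # det = [cx, cy, bw, bh, objectness, class score]
            score = float(det[5])   # only one class ('fire') was trained
            if score < conf_threshold:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                          int(cx + bw / 2), int(cy + bh / 2)))
    return boxes
```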
YOLO subscribes to the image data from the camera and detects objects in the images. It publishes the detected object class, the detection probability, and the maximum and minimum x-axis and y-axis coordinates of the bounding box. Equation (8) gives the coordinates of the center of the detected bounding box.
\text{center}_x = \frac{x_{\max} + x_{\min}}{2}, \qquad \text{center}_y = \frac{y_{\max} + y_{\min}}{2} \qquad (8)
The coordinates of the center of the bounding box are pixel-based camera coordinates. To move the autonomous mobile robot to the position of the bounding box, conversion to distance-based coordinates is required. In this study, Algorithm 3 was used to obtain the RGBD image data corresponding to the coordinates of the RGB image data. In the algorithm, xmax, ymax, xmin, and ymin are the maximum and minimum x-axis and y-axis coordinates of the RGB image data, and point is the RGBD image data corresponding to the RGB coordinates; each point carries x-, y-, and z-axis distance data from the camera. In Figure 3, the left side shows the detection of fire in the RGB image, and the right side shows the coordinate conversion of the RGB image to RGBD through Algorithm 3.
Algorithm 3 Main steps of the coordinate conversion
1: for i = xmin; i < xmax; i++
2:   for j = ymin; j < ymax; j++
3:     point = (j × depth_width) + i
A bounding box in an RGBD image is therefore created as a set of points. The center of the bounding box, based on the distances in the RGBD image, can be obtained using Equation (9); a code sketch of the whole conversion follows the equation.
\text{center}_x = \frac{x_{\max} + x_{\min}}{2}, \qquad \text{center}_y = \frac{y_{\max} + y_{\min}}{2}, \qquad \text{center}_z = \frac{z_{\max} + z_{\min}}{2} \qquad (9)
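A minimal sketch of this conversion, assuming an aligned per-pixel 3D point map stored row-major as in Algorithm 3, is shown below; the function name and the synthetic example data are illustrative assumptions, not the paper's implementation.
```python
# Given a fire bounding box in the RGB image and an aligned point map (one
# (X, Y, Z) metre value per pixel, stored row-major), collect the 3D points
# inside the box and take the centre of their extents as in Equation (9).
import numpy as np

def bbox_to_3d_center(points_xyz, depth_width, xmin, ymin, xmax, ymax):
    """points_xyz: flat array of shape (H*W, 3) aligned with the RGB image."""
    box_points = []
    for j in range(ymin, ymax):            # image rows inside the bounding box
        for i in range(xmin, xmax):        # image columns inside the box
            idx = j * depth_width + i      # point = (j * depth_width) + i
            X, Y, Z = points_xyz[idx]
            if Z > 0:                      # skip pixels with no depth return
                box_points.append((X, Y, Z))
    box_points = np.array(box_points)
    # Equation (9): centre of the box extents along each axis.
    return (box_points.min(axis=0) + box_points.max(axis=0)) / 2.0

# Example with a synthetic 4x4 point map (depth_width = 4).
pts = np.random.rand(16, 3) + 0.5
print(bbox_to_3d_center(pts, depth_width=4, xmin=1, ymin=1, xmax=3, ymax=3))
```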

3. Results and Discussion

The autonomous mobile robot used in the experiment must meet the specifications below:
  • The tires should be large enough not to be caught in the handle holes used for opening and closing the engine room plates;
  • A camera must be installed for engine room surveillance;
  • LiDAR must be installed for engine room mapping.
The autonomous mobile robot used in the experiment was a purchased product that was close to these requirements and was then modified to fit the experiment. An Intel RealSense D455 camera was selected to acquire RGBD images. The LiDAR was installed high enough to detect the equipment in the engine room. The autonomous mobile robot can operate for about 6 h when fully charged (charging takes about 2 h). Table 1 lists the hardware of the autonomous mobile robot, and Figure 4 shows its shape and size.
To evaluate the moving performance of the autonomous mobile robot, we checked whether it arrived at the destination over 25 runs each on a 45 m straight path and on a path combining curves and straight sections. An arrival was counted as a success if the robot stopped within 45 cm of the destination, i.e., within 1% of the 45 m path length.
In the experiment, LiDAR was used to create a map of the test area, and the current location was estimated on the created map. After a destination was set and a path was found, the robot drove to the destination autonomously. Figure 5 shows the two kinds of paths used in the experiment. In the figure, the blue dot is the robot’s current estimated position, and the red dot is the robot’s destination. On the map, black marks points where the robot cannot move, gray is the movable area, and light gray is the path that the autonomous mobile robot has found.
Table 2 summarizes the movement performance of the autonomous mobile robot. The arrival rate is 92% on the straight path, which is higher than the 84% arrival rate on the combined curved and straight path. The distance from the destination to the arrival point was also shorter on the straight path.
Figure 6 shows the arrival points of the autonomous mobile robot relative to the destination. The x-axis is the forward and backward distance, the y-axis is the left and right distance, and (0, 0) represents the destination. In the figure, the sky-blue circle has a radius of 45 cm and represents the range of successful arrival. On the straight path, the robot approached the destination closely in nearly every run, whereas on the combined straight and curved path, the majority of the arrival points fell short of the destination. These results are presumed to be due to the slippery floor material causing the robot to fall short of the destination during rotation. Nevertheless, the arrival point was within 1% of the path distance in most cases.
In the fire detection experiment, fires in the test dataset were detected with the weight file obtained by training on a fire dataset using Tiny-YOLOv3. The fire dataset of Kim [33] was used; it was obtained through a simulation experiment assuming a fire on a ship. The 1938 fire images were divided into 502 training images and 1436 test images. The training data were augmented to 8032 images by image rotation (−20° to 15°) and horizontal flipping to increase the learning effect. Figure 7 shows the augmentation of the training data.
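A minimal sketch of this augmentation (rotation in the −20° to 15° range and a horizontal flip) is shown below; OpenCV is an illustrative choice, as the paper does not state which tool performed the augmentation, and the file name in the usage note is a placeholder.
```python
# Rotate an image by a random angle in [-20, 15] degrees and also produce a
# horizontally flipped copy, matching the augmentation described above.
import random
import cv2

def augment(image):
    """Return a rotated copy and a horizontally flipped copy of the image."""
    h, w = image.shape[:2]
    angle = random.uniform(-20.0, 15.0)                  # rotation range used above
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))
    flipped = cv2.flip(image, 1)                         # 1 = horizontal flip
    return rotated, flipped

# Example usage (the path is a placeholder):
# img = cv2.imread("fire_sample.jpg")
# rotated, flipped = augment(img)
```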
Training was conducted using the GPU version of Tiny-YOLOv3, and smoke and flame were learned as a single class called fire. The workstation that performed the training had the following specifications: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60 GHz, 128 GB DDR4 RAM, NVIDIA GeForce RTX 3090, and CUDA 11.7.0. Table 3 lists the parameters used for training.
For the performance evaluation, the fire detection results on the test dataset were summarized in a confusion matrix. The test images consist of 719 fire images and 717 non-fire images. Table 4 presents the results as a confusion matrix.
\text{Detection Rate} = \frac{TP}{TP + FN} \qquad (10)
\text{False Alarm Rate} = \frac{FP}{FP + TN} \qquad (11)
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (12)
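As a quick arithmetic check, substituting the Table 4 counts (TP = 707, FN = 12, FP = 0, TN = 717) into Equations (10)–(12) reproduces the reported values:
\text{Detection Rate} = \frac{707}{707 + 12} \approx 0.9833, \qquad \text{False Alarm Rate} = \frac{0}{0 + 717} = 0, \qquad \text{Accuracy} = \frac{707 + 717}{1436} \approx 0.9916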
Table 5 presents the performance metrics calculated from the confusion matrix using Equations (10)–(12). The detection rate is 0.9833, which is better than that of Tiny-YOLOv2, and the false alarm rate improved to 0. Furthermore, the accuracy improved to 0.9916.
Figure 8 shows fire detection results on the test dataset. In the detection images, fire was detected in all cases: flame only, smoke only, flame and smoke together, and both large and small fires.
The last experiment was conducted in a ship’s engine room. Because it was difficult to experiment with real fire or smoke in the engine room, a banner was used as a virtual fire. Algorithm 4 describes the settings of the autonomous mobile robot during the experiment.
Algorithm 4 Main steps of the engine room experiment
1: When a destination is set on the map, the autonomous mobile robot finds a path, tracks the found path, and surveils the engine room while moving to the destination.
2: The autonomous mobile robot calculates the location of the fire when it detects a fire through the camera while surveilling the engine room.
3: The calculated coordinates are set as the new destination.
4: The autonomous mobile robot moves to the new destination, i.e., the fire.
Figure 9 shows the engine room surveillance experiment. Figure 9a,b show the installed virtual fire banner, and Figure 9c,d are the graphical user interface (GUI) images. In the GUI images, the left side shows the camera image seen by the autonomous mobile robot, and the right side shows the map. On the map, the blue dot represents the current location of the autonomous mobile robot, and the red dot represents the destination. The light gray line from the current location to the destination is the found path. The initial destination was set to a location near the fire, and autonomous driving was performed. Figure 9d shows that the autonomous mobile robot detected the fire and changed its destination while moving toward the initial destination. The detected fire is displayed as a red square box on the left camera image of the GUI; its coordinates are converted and sent to the autonomous mobile robot as a new destination. Comparing Figure 9c and Figure 9d, the destination has changed. Figure 9b is the engine room photograph corresponding to Figure 9d; comparing the two, the fire is located next to the stairs, and the new destination is also located next to the stairs.
It was confirmed that the autonomous mobile robot moved freely in the engine room, surveilling it and detecting the abnormal situation. However, it was also confirmed that only a limited field of view can be covered, depending on the angle of the camera mounted on the robot, and that the robot’s moving speed was slow, so surveilling the entire engine room took a long time. To improve this, engine room surveillance using several robots or a combination of robots and CCTV is considered an effective solution.

4. Conclusions

This study considered an approach to surveilling a ship’s engine room using an autonomous mobile robot, with a particular focus on fires in the engine room. The autonomous mobile robot surveilled the engine room while driving autonomously. When it detected a fire, it set the fire location as a new destination and moved towards it to check the fire. After confirming the fire, the robot would send an alarm and a report of the abnormal situation to the shipping company.
An autonomous driving test to evaluate the movement performance of the autonomous mobile robot and a fire detection test to evaluate the engine room surveillance performance were conducted. The movement performance showed an average arrival rate of 88% for the two kinds of paths. The fire detection performance was 0.9833, 0, and 0.9916 for the detection rate, false alarm rate, and accuracy, respectively. In an experiment using a virtual fire (banner) in the ship’s engine room, the engine room was surveilled, the fire was detected accurately, and the robot drove to the corresponding location autonomously. Through these results, it can be confirmed that real-time surveillance of the engine room using an autonomous mobile robot can contribute to the design of unmanned engine rooms and substantially improve ship safety.
In this study, among potential abnormal situations, only fire was considered, but it is thought that the proposed approach can be expanded widely by learning various situations such as oil spills, flooding, and so on. In addition, since the battery charging time and service time of autonomous mobile robots are important factors, we will study how to enable automatic charging in the future. Finally, for efficient engine room monitoring, we plan to study a monitoring method using several robots or combining robots and CCTV.

Author Contributions

Conceptualization, S.-D.K. and C.-O.B.; methodology, S.-D.K.; software, S.-D.K.; writing—original draft preparation, S.-D.K.; writing—review and editing, C.-O.B.; supervision, C.-O.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. IMO Takes First Steps to Address Autonomous Ships. Available online: http://www.imo.org/en/MediaCentre/PressBriefings/Pages/08-MSC-99-MASS-scoping.aspx (accessed on 13 January 2023).
  2. Shahid, S.M.; Ko, S.; Kwon, S. Real-time abnormality detection and classification in diesel engine operations with convolutional neural network. Expert Syst. Appl. 2022, 192, 16233. [Google Scholar] [CrossRef]
  3. Raptodimos, Y.; Lazakis, I. Using artificial neural network-self-organising map for data clustering of marine engine condition monitoring applications. Ships Offshore Struct. 2018, 13, 649–656. [Google Scholar] [CrossRef] [Green Version]
  4. Ellefsen, A.L.; Bjorlykhaug, E.; Aesoy, V.; Zhang, H. An Unsupervised Reconstruction-Based Fault Detection Algorithm for Maritime Components. IEEE Access 2019, 7, 16101–16109. [Google Scholar] [CrossRef]
  5. Korean Maritime Safety Tribunal, Marine Accident Statistics Report 2021. Available online: https://www.kmst.go.kr/web/board.do?menuIdx=135&bbsIdx=100087 (accessed on 13 January 2023).
  6. Korean Register of Shipping. Available online: https://www.krs.co.kr/KRRules/KRRules2022/data/data_other/korean/gc28k000.pdf (accessed on 15 January 2023).
  7. Kim, H.-H.; Hong, S.-J.; Nam, T.K. Design of PTZ Camera-Based Multiview Monitoring System for Efficient Observation in Vessel Engine Room. KOSOMES 2021, 27, 1129–1136. [Google Scholar] [CrossRef]
  8. Liu, Z. Review of Recent Developments in Fire Detection Technologies. J. Fire Prot. Eng. 2003, 13, 129–151. [Google Scholar] [CrossRef] [Green Version]
  9. Meacham, B.J. International Developments in Fire Sensor Technology. J. Fire Prot. Eng. 1994, 6, 89–98. [Google Scholar] [CrossRef]
  10. Chen, T.-H.; Wu, P.-H.; Chiou, Y.-C. An early fire-detection method based on image processing. In Proceedings of the International Conference on Image Processing, Singapore, 24–27 October 2004; Volume 3, pp. 1707–1710. [Google Scholar]
  11. Vipin, V. Image Processing Based Forest Fire Detection. Int. J. Adv. Res. Eng. Technol. 2012, 2, 87–95. [Google Scholar]
  12. Zaidi, N.I.; Lokman, N.A.A.; Daud, M.; Achmad, M.S.; Khor, A. Fire recognition using RGB and YCbCr color space. ARPN J. Eng. Appl. Sci. 2015, 10, 9786–9790. [Google Scholar]
  13. Foggia, P.; Saggese, A.; Vento, M. Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts Based on Color, Shape, and Motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
  14. Mueller, M.; Karasev, P.; Kolesov, I.; Tannenbaum, A. Optical Flow Estimation for Flame Detection in Videos. IEEE Trans. Image Process. 2013, 22, 2786–2797. [Google Scholar] [CrossRef] [Green Version]
  15. Chunyu, Y.; Jun, F.; Jinjun, W. Video Fire Smoke Detection Using Motion and Color Features. Fire Technol. 2010, 46, 651–663. [Google Scholar] [CrossRef]
  16. Ye, W.; Zhao, J.; Wang, S.; Wang, Y.; Zhang, D.; Yuan, Z. Dynamic texture based smoke detection using Surfacelet transform and HMT model. Fire Saf. J. 2015, 73, 91–101. [Google Scholar] [CrossRef]
  17. Töreyin, B.U.; Cinbiş, R.G.; Dedeoğlu, Y.; Çetin, A.E. Fire detection in infrared video using wavelet analysis. Opt. Eng. 2007, 46, 067204. [Google Scholar] [CrossRef] [Green Version]
  18. Toreyin, B.U.; Dedeoglu, Y.; Cetin, A.E. Flame detection in video using hidden Markov models. In Proceedings of the IEEE International Conference on Image Processing, Genova, Italy, 14 September 2005; p. II-1230. [Google Scholar]
  19. Park, K.; Bae, C. Smoke detection in ship engine rooms based on video images. IET Image Process. 2020, 14, 1141–1149. [Google Scholar] [CrossRef]
  20. Ko, B.C.; Cheong, K.-H.; Nam, J.-Y. Fire detection based on vision sensor and support vector machines. Fire Saf. J. 2009, 44, 322–329. [Google Scholar] [CrossRef]
  21. Gubbi, J.; Marusic, S.; Palaniswami, M. Smoke detection in video using wavelets and support vector machines. Fire Saf. J. 2009, 44, 1110–1115. [Google Scholar] [CrossRef]
  22. Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional Neural Networks Based Fire Detection in Surveillance Videos. IEEE Access 2018, 6, 18174–18183. [Google Scholar] [CrossRef]
  23. Muhammad, K.; Ahmad, J.; Baik, S.W. Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing 2018, 288, 30–42. [Google Scholar] [CrossRef]
  24. Muhammad, K.; Ahmad, J.; Lv, Z.; Bellavista, P.; Yang, P.; Baik, S.W. Efficient Deep CNN-Based Fire Detection and Localization in Video Surveillance Applications. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1419–1434. [Google Scholar] [CrossRef]
  25. Dunnings, A.J.; Breckon, T.P. Experimentally Defined Convolutional Neural Network Architecture Variants for Non-Temporal Real-Time Fire Detection. In Proceedings of the 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1558–1562. [Google Scholar]
  26. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625. [Google Scholar] [CrossRef]
  27. Zhao, L.; Zhi, L.; Zhao, C.; Zheng, W. Fire-YOLO: A Small Target Object Detection Method for Fire Inspection. Sustainability 2022, 14, 4930. [Google Scholar] [CrossRef]
  28. Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An Improvement of the Fire Detection and Classification Method Using YOLOv3 for Surveillance Systems. Sensors 2021, 21, 6519. [Google Scholar] [CrossRef] [PubMed]
  29. Rokonuzzaman, M.; Mohajer, N.; Nahavandi, S.; Mohamed, S. Review and performance evaluation of path tracking controllers of autonomous vehicles. IET Intell. Transp. Syst. 2021, 15, 646–670. [Google Scholar] [CrossRef]
  30. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 30 June 2016; pp. 779–788. [Google Scholar]
  31. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
  32. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
  33. Kim, S.-D. A Study on the Fire Detection Using YOLO Algorithm in Engine Room of a Ship. Master’s Thesis, Mokpo National Maritime University, Mokpo, Republic of Korea, 2020. [Google Scholar]
Figure 1. Flowchart of the autonomous mobile robot operation.
Figure 2. Pure Pursuit path tracking control. (a) Geometry of Pure Pursuit for the look-ahead distance. (b) Geometry of Pure Pursuit for δ. The red circle is the point the robot wants to track, and the blue dashed line connecting the red circle and the center of the robot’s rear wheel is the look-ahead distance (ld). The robot steers and turns to move to the tracking point, and the resulting arc is marked in light blue. The radius of the circle containing the light blue arc is R. L is the length between the centers of the front and rear wheels of the robot.
Figure 3. Coordinate conversion of RGB to RGBD. The purple square is a fire bounding box.
Figure 4. Frame size of the autonomous mobile robot.
Figure 5. Path images of the experiment. The red dot is the robot’s destination, and the blue dot is the robot’s current estimated position. The gray area is the movable area, and light gray is the path that the robot has found.
Figure 6. Movement performance result of the autonomous mobile robot. The red dot is the point where the robot arrived.
Figure 7. Sample of the dataset augmentation. To increase the training data, images were rotated from −20° to 15° and horizontally flipped.
Figure 8. Detection results of fire (true positive). The purple squares are the fire detection bounding boxes.
Figure 9. Engine room surveillance experiment results. (a) Virtual fire banner installation. (b) Image of the robot arriving at the virtual fire. (c) Initial destination set. (d) New destination set and autonomous movement. The red dot is the destination, and the blue dot is the current location of the autonomous mobile robot.
Table 1. Hardware list of the autonomous mobile robot.
Equipment | Model (Manufacturer/Country)
Chassis | TRAXXAS 7407 1/10 Rally 4WD (Traxxas, USA)
Controller | Jetson TX2 Developer Kit (Nvidia, USA)
Motor Controller | VESC-X HW 4.12 (Maytech, China)
Camera | Intel RealSense D455 (Intel, USA)
LiDAR | RPLiDAR A2 (Slamtec, China)
Inertial Measurement Unit (IMU) | Razor 9DOF IMU (SparkFun, USA)
Battery | 8.4 V, 3000 mAh (Traxxas, USA)
Table 2. Movement performance of the autonomous mobile robot.
Index | Value
Arrival Rate | Straight: 92%; Curve + Straight: 84%; Total: 88%
Distance Average (from the destination) | Straight: 23 cm; Curve + Straight: 32 cm; Total: 28 cm
Table 3. Parameters of Tiny-YOLOv3 training.
Parameter | Value
Batch | 64
Subdivisions | 2
Width | 416
Height | 416
Momentum | 0.9
Decay | 0.0005
Learning Rate | 0.001
Max_Batches | 10,000
Table 4. Experimental confusion matrix.
 | Actual Fire | Actual Non-Fire
Predicted Fire | 707 (True Positive, TP) | 0 (False Positive, FP)
Predicted Non-Fire | 12 (False Negative, FN) | 717 (True Negative, TN)
Table 5. Performance comparison between Tiny-YOLOv2 and Tiny-YOLOv3.
Model | Detection Rate | False Alarm Rate | Accuracy
Tiny-YOLOv2 [33] | 0.9791 | 0.0153 | 0.9819
Tiny-YOLOv3 | 0.9833 | 0.0 | 0.9916
