Article

A New Method of Ski Tracks Extraction Based on Laser Intensity Information

School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(11), 5678; https://doi.org/10.3390/app12115678
Submission received: 24 April 2022 / Revised: 30 May 2022 / Accepted: 30 May 2022 / Published: 2 June 2022
(This article belongs to the Section Optics and Lasers)

Abstract

At present, mainstream laser point cloud classification algorithms rely mainly on the geometric information of the target. However, if targets occlude one another, the classification quality degrades. In contrast, this paper presents a new method for ski track extraction that uses the laser intensity information produced by target reflection. The method downsamples the ski-track point cloud datasets while keeping the target edge information intact. The clustering and extraction of ski tracks are then accomplished effectively using a smoothness threshold and the curvature between adjacent points. The experimental results show that, unlike traditional methods, the proposed composite classification method based on intensity information can effectively extract ski tracks from a complex background. Compared with the Euclidean distance method, the clustering segmentation method, and the RANSAC method, the average extraction accuracy is increased by 16.9%, while the over-extraction rate is reduced by 8.4% and the under-extraction rate by 8.6%, allowing the ski-track point cloud of a ski resort to be extracted accurately.

1. Introduction

With the successful holding of the 24th Beijing Winter Olympic Games in 2022, people have become progressively more interested in winter sports, especially skiing [1,2]. Researchers, too, are paying increasing attention to skiing. In recent years, a brand-new high-tech approach has been developed that uses LiDAR to collect high-precision 3D point cloud data and extract the required information about ski tracks [3,4,5,6]. For the managers of ski resorts, measuring and recording track statistics (such as snow thickness and slope) is an important part of daily maintenance. A key step, therefore, is to effectively separate the ski tracks from the background through semantic segmentation of the 3D point cloud datasets.
In addition, as a key technology of point cloud processing, the semantic segmentation of laser point clouds is widely used in artificial intelligence, reverse engineering, target reconstruction, robotics, medical image analysis, and other related fields [7,8]. Point cloud semantic segmentation classifies three-dimensional points according to their local features, grouping points with the same or similar attributes into disjoint sets so that points in the same set share similar features [9]. Semantic segmentation is one of the key links in point cloud data processing and supports scene analysis tasks such as localization, object recognition, classification, and feature extraction.
Current target classification research for 3D datasets mainly uses the coordinate information of point clouds [10,11,12]: the geometric dimensions of the target are calculated from the point coordinates, and classification and recognition follow from there. Guo M et al. divided point clouds into independent targets and classified them by extracting the volume characteristics of each target [13]. Xu X et al. divided three-dimensional datasets into slices by height and extracted the length and width features of each slice to detect pedestrian targets [14]. Classification based on coordinate information is easily affected by occlusion of the target, and targets with similar geometric characteristics cannot be distinguished effectively. As a result, these methods cannot reliably separate ski tracks from a complex background. LiDAR, however, accurately records both the coordinate information and the reflection intensity of an object; the intensity efficiently characterizes the object's reflectance and holds great promise for the semantic segmentation of multiscale targets. In general, the reflection intensity of a target is affected by many factors, such as atmospheric conditions, incident azimuth, and emission distance, so different objects return different intensities [15,16,17,18,19]. However, because ski resorts use hard artificial snow, the laser reflection from the tracks is relatively uniform. Research on extracting object characteristics from intensity information is therefore of great significance for strengthening the use of intensity data and broadening target classification methods.
In this paper, a composite ski track extraction algorithm based on laser point cloud intensity information is proposed; it can effectively extract ski tracks from a complex background and facilitates the daily operation and maintenance of ski resorts.

2. Study Area and Data

The LiDAR datasets used in this paper were acquired with an unmanned-aircraft LiDAR system (RIEGL VUX-1 UAV System). The collection site is the Zhangjiakou Wanlong ski resort, the most famous ski resort in China, near Yanqing District, Beijing (249 km from Beijing, 2110.3 m above sea level, covering more than 200,000 square meters).
The collection time was 12:00 to 14:00, and the temperature of the ski resort was −18 °C. Figure 1a shows the environmental conditions of the Wanlong ski resort at the time of collection, Figure 1b is an aerial picture taken during acquisition, and Figure 2 is an introduction to the system architecture.
The seven ski tracks of the Wanlong ski resort are all made of extremely hard artificial snow. The environment of the resort is relatively complex, including not only five main tracks but also two training tracks, as well as trees, riprap, etc. A DJI M600 PRO UAV was used as the flight platform for the LiDAR because of its excellent stability and flight performance and a battery life of up to 30 min. During acquisition, the scanning frequency was 550 kHz, the sideband overlap rate was 20%, the flight altitude was 110 m, and the flight speed was 5 m/s; four flight scanning missions were conducted in total.

3. Methodology

3.1. Downsampling with Edge-Preserving Information

Downsampling the point cloud datasets reduces the point density, lowers the computing requirements, and improves the processing speed. This paper uses a point cloud downsampling framework based on an edge detection algorithm; its technical roadmap is shown in Figure 3.
First, two-dimensional images rendered from multiple angles were processed with operations such as Holistically-Nested Edge Detection (HED), binarization, and contour thinning to obtain two-dimensional feature points; the three-dimensional features were then computed using the correspondence between the two-dimensional images and the three-dimensional model. Finally, the non-feature points were subsampled to complete the downsampling of the model. In brief, the edge-detection-based downsampling network relies on two-dimensional images taken from several different angles, as the sketch below illustrates.
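As a rough illustration of this idea only (not the authors' HED-based implementation), the following sketch rasterizes a cloud into a top-down height image, marks edge cells with a simple central-difference gradient test as a stand-in for a learned edge detector, keeps every point that projects onto an edge cell, and randomly subsamples the rest. The grid size, edge threshold, and keep ratio are illustrative assumptions.

```python
import numpy as np

def edge_preserving_downsample(points, grid=512, edge_thresh=0.5, keep_ratio=0.05, seed=0):
    """Keep all points whose 2D projection lies on a height edge;
    randomly subsample the remaining (non-feature) points."""
    rng = np.random.default_rng(seed)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Rasterize the cloud into a top-down height image.
    ix = np.clip(((x - x.min()) / (np.ptp(x) + 1e-9) * (grid - 1)).astype(int), 0, grid - 1)
    iy = np.clip(((y - y.min()) / (np.ptp(y) + 1e-9) * (grid - 1)).astype(int), 0, grid - 1)
    img = np.zeros((grid, grid))
    np.maximum.at(img, (ix, iy), z)

    # Central-difference gradient magnitude as a crude stand-in for HED.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[1:-1, :] = img[2:, :] - img[:-2, :]
    gy[:, 1:-1] = img[:, 2:] - img[:, :-2]
    edge = np.hypot(gx, gy) > edge_thresh

    # Feature points (edge cells) are kept in full; the rest are thinned.
    on_edge = edge[ix, iy]
    keep = on_edge | (rng.random(len(points)) < keep_ratio)
    return points[keep]
```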

3.2. The First Segmentation Based on Intensity Information

The first segmentation and classification of targets using LiDAR intensity information requires analyzing the influence of each factor on intensity. The intensity is the return value of a 3D LiDAR and can be regarded as a discrete integer derived from the received power after signal processing. The echo intensity of LiDAR is characterized by the LiDAR equation [20,21,22]. According to the LiDAR equation, the echo power is determined by the laser transmitting power, the transmitting optical system parameters, the atmospheric transmission characteristics, the receiving optical system parameters, and the target reflection characteristics [23,24]. The expression is as follows:
$$P_r = \frac{4 P_t}{\pi \theta_t^2 R_t^2} \cdot \tau_t \eta_t \tau_r \eta_r \cdot \frac{\rho \cos\theta_i \cos\theta_r A_t D^2}{4 R_r^2} \tag{1}$$
where $P_r$ represents the received echo power from the target, $P_t$ the transmitted laser power, $\theta_t$ the divergence angle of the transmitted beam, $R_t$ the distance from the laser source to the target, $\tau_t$ the atmospheric transmittance between the laser source and the target, $\eta_t$ the optical efficiency of the transmitting system, $\tau_r$ the atmospheric transmittance between the target and the receiver, $\eta_r$ the optical efficiency of the receiving system, $\rho$ the reflectivity of the target, $\theta_i$ and $\theta_r$ the incident and reflection angles of the target relative to the LiDAR, $A_t$ the effective area of the laser beam on the target, $R_r$ the distance from the receiver to the target, and $D$ the aperture of the receiving detector.
A common three-dimensional scanning LiDAR system is a monostatic design in which the transmitter and receiver are integrated [25,26,27,28,29]. In this case, $R_t = R_r = R$, $\theta_i = \theta_r = \theta$, and $\tau_t = \tau_r = e^{-\alpha R}$, where $\alpha$ is the atmospheric extinction coefficient. The LiDAR equation can therefore be reduced to the following form:
$$P_r = \frac{P_t \eta_t \eta_r D^2}{4} \cdot \frac{e^{-2\alpha R}}{R^2} \cdot \rho \cos^2\theta \tag{2}$$
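For concreteness, Equation (2) can be evaluated directly. The sketch below computes the expected echo power; the parameter values are illustrative assumptions, not the VUX-1 specification.

```python
import numpy as np

def echo_power(P_t, eta_t, eta_r, D, alpha, R, rho, theta):
    """Simplified monostatic LiDAR equation, Eq. (2):
    P_r = (P_t*eta_t*eta_r*D^2/4) * exp(-2*alpha*R)/R^2 * rho*cos^2(theta)."""
    return (P_t * eta_t * eta_r * D**2 / 4.0) \
        * np.exp(-2.0 * alpha * R) / R**2 \
        * rho * np.cos(theta)**2

# Illustrative values only: 10 W emitter, 60% optics efficiency each way,
# 5 cm aperture, clear-air extinction, and the 110 m flight altitude used here.
print(echo_power(P_t=10.0, eta_t=0.6, eta_r=0.6, D=0.05,
                 alpha=1e-4, R=110.0, rho=0.8, theta=np.radians(15)))
```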
Then, the downsampled point cloud datasets were colored according to intensity. On this basis, a statistical histogram of the laser point cloud intensities was drawn, a suitable segmentation range was selected from the intensity distribution, the start and end intensity values were set in the program, and the initially segmented point cloud was obtained.
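A minimal version of this step, under the assumption that an intensity value is stored per point, is shown below: build the histogram, choose a [start, end] intensity window, and split the cloud accordingly. The example window mirrors a range reported in Section 4.

```python
import numpy as np

def segment_by_intensity(points, intensity, start, end, bins=256):
    """Split a cloud into in-range / out-of-range points by echo intensity.
    The histogram is returned so a suitable [start, end] can be inspected."""
    hist, edges = np.histogram(intensity, bins=bins)
    mask = (intensity >= start) & (intensity <= end)
    return points[mask], points[~mask], (hist, edges)

# Example with the window used for the first datasets in region A (45,688-53,588):
# in_range, out_of_range, (hist, edges) = segment_by_intensity(pts, inten, 45688, 53588)
```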

3.3. The Second Segmentation with The Cloth Method

The cloth simulation filtering (CSF) algorithm is based on a simple physical simulation [30,31,32,33]. The algorithm assumes that a virtual cloth falls under gravity onto the terrain surface; if the cloth is soft enough, it attaches to the terrain, and its shape is the DSM. When the terrain is turned upside down, the shape of the fabric falling onto the surface is the DEM, as shown in Figure 4a. The algorithm uses this cloth simulation to extract the ground points from laser point cloud data.
The cloth is a grid made up of a large number of interrelated nodes, as shown in Figure 4b; this is called the particle spring model [34]. In this model, the cloth points are connected by virtual springs, and the interaction between points follows the law of elasticity; in other words, a cloth point is displaced when a force acts on it. To simulate the shape of the cloth at a given time, the positions of the cloth points in 3D space must be calculated. By Newton's second law, the relationship between a node's position and the forces acting on it is:
$$X(t + \Delta t) = 2X(t) - X(t - \Delta t) + \frac{G}{m} \Delta t^2 \tag{3}$$
where $m$ represents the mass of a cloth node and is set to the constant 1; $X$ represents the node position at a given time; $\Delta t$ represents the time step; and $G$ represents the gravitational constant. Given the time step and the initial position of a node, its current position can be obtained.
During the cloth simulation, the position of a cloth node displaced by gravity is calculated by Formula (3). In addition, blank areas of the overturned surface usually correspond to micro-terrain such as buildings or deep pits. Therefore, to constrain the movement of cloth nodes, the elevation difference between adjacent nodes must be computed so that node positions can be corrected by the forces between neighbouring nodes. If two adjacent nodes are both movable and have different elevations, they move the same distance in opposite vertical directions; if only one of them is movable, only that node moves; if both are at the same elevation, neither moves. The corrected displacement of each cloth node is calculated as follows:
$$d = \frac{1}{2} b (p_i - p_0) \cdot n \tag{4}$$
where $d$ is the displacement vector of the node; $p_0$ is the current position of the node to be moved; $p_i$ is a neighbouring node of $p_0$; $n = (0, 0, 1)^T$ is the unit vector in the vertical direction; and $b$ determines whether the node moves: $b = 1$ when the node is movable, otherwise $b = 0$.
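These two update rules translate almost directly into code. The sketch below is a minimal, assumption-laden version of one CSF iteration on node heights: a Verlet step per Formula (3) followed by the pairwise correction of Formula (4). Collision handling against the inverted terrain and the full spring network are omitted, and the time step is illustrative.

```python
import numpy as np

def verlet_step(X, X_prev, G=-9.8, m=1.0, dt=0.65):
    """Formula (3): X(t+dt) = 2*X(t) - X(t-dt) + (G/m)*dt^2, heights only."""
    return 2.0 * X - X_prev + (G / m) * dt**2

def correct_pair(h0, h1, movable0, movable1):
    """Formula (4) specialized to the vertical direction: movable neighbours
    each move half the height difference; if only one is movable, it moves
    the full difference; if neither is movable, nothing changes."""
    d = 0.5 * (h1 - h0)
    if movable0 and movable1:
        return h0 + d, h1 - d
    if movable0:
        return h0 + 2.0 * d, h1
    if movable1:
        return h0, h1 - 2.0 * d
    return h0, h1

# One gravity step for a 3x3 grid of node heights, then smooth one node pair.
h = np.zeros((3, 3))
h_prev = h.copy()
h = verlet_step(h, h_prev)
h[0, 0], h[0, 1] = correct_pair(h[0, 0], h[0, 1], True, True)
```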

3.4. Ski Tracks Extraction

Before the ski tracks can be semantically segmented and extracted, a topological relationship must be established for the three-dimensional point cloud datasets and seed nodes determined in the datasets to be segmented. According to the growth criteria, each neighbourhood point is then tested against the seed nodes to decide whether it belongs to the same surface; if it does, the point continues to grow as a new seed, and so on, until the ski-track points with the same attributes have been grouped into the same region and growth ceases. This paper uses the k-d tree method to establish the topology of the ski-track datasets because of its fast neighbourhood search speed and high segmentation efficiency.

3.4.1. K-Neighborhood Acquisition of k-d Tree

The topological relationship between the scattered point cloud datasets of the Wanlong ski resort must be established before the point clouds can be grown and segmented. Establishing this topology effectively narrows the data processing range and makes the algorithm more efficient. Octrees and k-d trees are currently the most common point cloud organization structures. Although the average time complexity of searching for K adjacent points is O(log n) for both, the octree does not localize a single point as well. Therefore, this paper uses a k-d tree to establish the point cloud topology, implemented through the FLANN library in PCL. A k-d tree organizes a set of points in k-dimensional space; it is similar to a binary search tree but with additional constraints, and a three-dimensional k-d tree is used for the three-dimensional point cloud. The k-d tree places the value A of the splitting dimension at the root; B, D, and E, with smaller values, fall in the left subtree, and the larger values C, F, and G fall in the right subtree. Repeating this process for the left and right subtrees continues until only one element remains in each leaf. The specific workflow is shown in Figure 5.
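The paper performs this search through PCL's FLANN backend; an equivalent query is available in many libraries. A minimal stand-in using SciPy (an assumption for illustration, not the authors' toolchain) is:

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(100_000, 3)  # placeholder for the ski-resort cloud
tree = cKDTree(points)               # build the 3D k-d tree once

# K-nearest-neighbour query for every point (K = 30 is illustrative);
# each row of idx lists the neighbourhood later used for normals and growth.
dist, idx = tree.query(points, k=30)
```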

3.4.2. Establishment of Criteria for Regional Growth and Selection of Initial Seed Points

In region growing segmentation, the choice of seed points determines the accuracy of the segmentation. In this paper, the point with the smallest curvature is used as the seed point, and growth starts from it: the point with the lowest curvature lies in the smoothest area, so starting there reduces the total number of regions and increases efficiency. The curvature of each point in the first-segmented cloud is determined with the following formulas.
(1) Calculate the average curvature $K_h$ of a point $P$ on the surface. The average curvature $K_h$ satisfies the following relationship:
$$2 K_h \mathbf{n} = \lim_{\mathrm{diam}(A) \to 0} \frac{\nabla A}{A} \tag{5}$$
In Formula (5), $\mathbf{n}$ stands for the normal vector, $A$ for an infinitesimal region around $P$, $\mathrm{diam}(A)$ represents its diameter, and $\nabla$ is the gradient operator with respect to point $P$.
(2) Discretize Equation (5) to obtain the average curvature of $P_i$:
$$K_h(P_i) = \frac{\mathbf{n}}{4 A_{\min}} \gamma \tag{6}$$
$$\gamma = \sum_{j \in N(i)} (\cot \alpha_{ij} + \cot \beta_{ij})(P_i - P_j) \tag{7}$$
In Formulas (6) and (7), $\alpha_{ij}$ and $\beta_{ij}$ are the included angles between the line connecting points $P_i$ and $P_j$ and their respective normal vectors.
The algorithm calculates the angle between the normal of each neighbourhood point and that of the current seed point and adds to the current region those neighbourhood points whose angle is below the smoothness threshold. The smoothness threshold is set in the PCL library by calling the setSmoothnessThreshold member function; the curvature of each neighbourhood point is then checked and qualifying points are added to the seed point sequence. To estimate a point's normal vector, the PCL library fits a plane to the point and its neighbourhood, which recasts the problem as a least squares plane fitting estimation: estimating the surface normal is equivalent to analyzing the eigenvectors and eigenvalues of a covariance matrix built from the nearest neighbours of the query point. Specifically, for each point $P_i$, the covariance matrix $C$ is as follows:
$$C = \frac{1}{k} \sum_{i=1}^{k} (P_i - \bar{P})(P_i - \bar{P})^T \tag{8}$$
$$C \cdot \mathbf{n}_j = \lambda_j \mathbf{n}_j \tag{9}$$
In Formulas (8) and (9), $k$ is the number of adjacent points of $P_i$, $\bar{P}$ is the three-dimensional centroid of the neighbourhood, $\lambda_j$ is the $j$th eigenvalue of the covariance matrix, and $\mathbf{n}_j$ is the $j$th eigenvector. The eigenvector associated with the smallest eigenvalue is the normal vector of the fitted plane, i.e., the normal of the point cloud at that point. The covariance matrix of a point set can be estimated by calling the third-party Eigen library used by PCL.
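Formulas (8) and (9) amount to a PCA of each neighbourhood. A compact NumPy sketch of this computation (broadly mirroring what normal estimation does internally, but not using PCL itself) follows; the curvature here is the surface-variation measure $\lambda_0 / (\lambda_0 + \lambda_1 + \lambda_2)$, an assumption on our part.

```python
import numpy as np

def normal_and_curvature(neighbors):
    """Eigen-analysis of the covariance matrix C, per Formulas (8)-(9).
    Returns the plane normal (eigenvector of the smallest eigenvalue)
    and the surface-variation curvature lambda_0 / sum(lambda)."""
    centroid = neighbors.mean(axis=0)          # P_bar, Formula (8)
    Q = neighbors - centroid
    C = Q.T @ Q / len(neighbors)               # covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # Formula (9): smallest lambda
    curvature = eigvals[0] / max(eigvals.sum(), 1e-12)
    return normal, curvature
```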

3.4.3. Setting the Threshold for Curvature

The setting of the curvature threshold directly affects segmentation quality: an incorrectly chosen threshold causes oversegmentation or undersegmentation. The threshold is directly related to the complexity of the actual object; a gentler object takes a larger curvature threshold, whereas a sharper object takes a smaller one. The curvature threshold is set in the PCL library through the setCurvatureThreshold member function. For the Wanlong ski resort data used in this article, the curvature threshold is set between 0.0016 and 0.0023.
The growth radius and the vertical distance between neighbouring points and the seed surface are judged against a spatial threshold range. The normal vector of the fitted surface is calculated and compared with that of the seed surface; if the included angle is below the angle threshold, the current point is added to the current region, and if its curvature is below the curvature threshold, the point is added to the seed list. In summary, the growth segmentation process is shown in Figure 6, and the point cloud extraction procedure for ski tracks is given in Algorithm 1.
Algorithm 1: The Point Cloud Extraction Algorithm for Ski Tracks
Require: point cloud {P}; point normals {N}; point curvatures {K}; neighbour finding function Ω(·); curvature threshold K_th; angle threshold θ_th; region list {R} ← ∅; available points list {A} ← {1, …, |P|}.
Ensure: clustered point set M.
1:  while {A} is not empty do
2:      Current region {R_K} ← ∅
3:      Current seeds {S_K} ← ∅
4:      P_min ← point with minimum curvature in {A}
5:      {S_K} ← {S_K} ∪ P_min
6:      {R_K} ← {R_K} ∪ P_min
7:      {A} ← {A} \ P_min
8:      for i = 0 to size({S_K}) do
9:          Find the nearest neighbours of the current seed point: {B_K} ← Ω(S_K{i})
10:         for each current neighbour point P_j ∈ {B_K} do
11:             if {A} contains P_j and cos⁻¹(|⟨N{S_K{i}}, N{P_j}⟩|) < θ_th then
12:                 {R_K} ← {R_K} ∪ P_j
13:                 {A} ← {A} \ P_j
14:                 if K{P_j} < K_th then
15:                     {S_K} ← {S_K} ∪ P_j
16:                 end if
17:             end if
18:         end for
19:     end for
20:     Add the current region to the global segment list: {R} ← {R} ∪ {R_K}
21: end while
22: return M ← {R}
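Algorithm 1 maps naturally onto a queue-based implementation. The following sketch follows the pseudocode under stated assumptions: normals, curvatures, and a precomputed neighbour list (for example, from the k-d tree query above) are given, and the thresholds correspond to those discussed in Section 3.4.3.

```python
import numpy as np

def region_growing(normals, curvatures, neighbors, theta_th, k_th):
    """Algorithm 1: grow regions from minimum-curvature seeds.
    normals: (n, 3) unit normals; curvatures: (n,) per-point curvature;
    neighbors[i]: indices adjacent to point i (e.g., k-d tree K-NN)."""
    n = len(normals)
    available = set(range(n))
    regions = []
    while available:
        # Seed at the lowest-curvature available point (smoothest area).
        p_min = min(available, key=lambda i: curvatures[i])
        seeds, region = [p_min], [p_min]
        available.remove(p_min)
        i = 0
        while i < len(seeds):
            for j in neighbors[seeds[i]]:
                if j not in available:
                    continue
                # Smoothness test: angle between normals below theta_th.
                cos_a = min(1.0, abs(float(normals[seeds[i]] @ normals[j])))
                if np.arccos(cos_a) < theta_th:
                    region.append(j)
                    available.remove(j)
                    if curvatures[j] < k_th:   # curvature test for new seeds
                        seeds.append(j)
            i += 1
        regions.append(region)
    return regions
```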

4. Results and Discussion

In this paper, a total of four samples from two regions were selected for the downsampling experiment. Downsampling the Wanlong ski resort point cloud datasets with the edge-detection-based method yields the following four groups of results:
Figure 7 shows the downsampling results for the first laser point cloud datasets in region A: the raw data contain 8,283,717 points, and 398,776 points remain after downsampling. Figure 8 shows the downsampling results for the second laser point cloud datasets in region A: the raw data contain 5,318,641 points, and the number after downsampling is 5,318,641 points.
Figure 9 shows the downsampling results for the first laser point cloud datasets in region B: the raw data contain 2,648,473 points, and 293,198 points remain after downsampling. Figure 10 shows the downsampling results for the second laser point cloud datasets in region B: the raw data contain 497,563 points, and 242,744 points remain after downsampling.
After downsampling, the edges of trees and ski tracks in the point cloud datasets remain distinct, and the information is clearly visible. Analysis of the four sets of results shows that the method effectively preserves the edge information of snow tracks, gentle slopes, and similar features. This successful downsampling of the laser point cloud datasets lays the foundation for the subsequent segmentation and classification of ground and ground objects and for the extraction of ski tracks.
Next, the target was preliminarily segmented and classified according to the intensity information of the laser point cloud datasets. First, the points were assigned different RGB values according to echo intensity to visualize the segmentation directly. Then, the statistical histogram of the laser point cloud was drawn according to echo intensity. Finally, the point cloud datasets were segmented according to the intensity distribution; the segmentation results are as follows:
After analyzing the intensity statistical histograms in Figure 11 and Figure 12, the intensity segmentation range of the first laser point cloud datasets in region A was determined to be 45,688 to 53,588. The segmentation result is shown in Figure 13, where (a) is the non-ground points and (b) is the ground points.
The intensity statistical histograms in Figure 14 and Figure 15 indicate that the intensity segmentation range of the second laser point cloud datasets in region A is 47,587 to 53,488. The segmentation result is shown in Figure 16, where (a) is the non-ground points and (b) is the ground points.
After analyzing the intensity statistical histograms in Figure 17 and Figure 18, the intensity segmentation range of the first laser point cloud datasets in region B was determined to be 46,073 to 52,903. The segmentation result is shown in Figure 19, where (a) is the non-ground points and (b) is the ground points.
After analyzing the intensity statistical histograms in Figure 20 and Figure 21, the intensity segmentation range of the second laser point cloud datasets in region B was determined to be 46,032 to 52,918. The segmentation result is shown in Figure 22, where (a) is the non-ground points and (b) is the ground points.
The above segmentation results based on the LiDAR intensity distribution readily distinguish the snow-covered ground from the non-ground points, but they still cannot distinguish the ski tracks from the snow-covered ground, so a further segmentation step is needed.
Therefore, to further separate the surface, the user-defined parameters of the cloth algorithm were tuned. After several adjustments, the best results were obtained with the cloth resolution set to 0.2, the maximum number of iterations set to 500, and the classification threshold set to 0.5. The point cloud segmentation results are shown in Figure 23, Figure 24, Figure 25 and Figure 26.
The included angle between the normal of each point in the seed point's neighbourhood and the normal of the current seed point is calculated and compared with the smoothness threshold. After several attempts, with the smoothness threshold set to 2.4 and the curvature threshold set to 0.2, the ski tracks were successfully separated from the other ground points. The results are shown in Figure 27, Figure 28, Figure 29 and Figure 30.
To further verify its effectiveness, this paper compares the proposed method with the Euclidean distance method, the clustering segmentation method, and the RANSAC method on three measures: correct rate, over-extraction rate, and under-extraction rate. Snow tracks were extracted from the above four groups of test data using each method, and the specific comparison results are shown in Table 1.
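The paper does not give closed forms for the three rates, so the sketch below computes them under a set of assumed definitions: correct rate as the fraction of ground-truth track points that are extracted, over-extraction rate as the fraction of extracted points that are not track points, and under-extraction rate as the fraction of ground-truth track points that are missed.

```python
import numpy as np

def extraction_rates(extracted, ground_truth):
    """extracted, ground_truth: boolean masks over the same point cloud.
    The definitions here are assumptions; the paper states no formulas."""
    tp = np.sum(extracted & ground_truth)    # track points correctly extracted
    fp = np.sum(extracted & ~ground_truth)   # background wrongly extracted
    fn = np.sum(~extracted & ground_truth)   # track points missed
    correct = tp / max(ground_truth.sum(), 1)
    over = fp / max(extracted.sum(), 1)
    under = fn / max(ground_truth.sum(), 1)
    return correct, over, under
```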
As Table 1 shows, for the first group of experimental data, the extraction accuracy of the method proposed in this paper is the highest (92.2%) and its under-extraction rate the lowest (3.1%), whereas the Euclidean distance method has the lowest correct extraction rate (68.3%) and the highest over-extraction rate (20.4%). For the second group, the extraction accuracy of the proposed method is again the highest (94.5%) and its under-extraction rate the lowest (2.7%), whereas the clustering segmentation method has the lowest correct extraction rate (74.3%) and the highest over-extraction rate (12.6%). For the third group, the accuracy of most methods decreases because of the more complex test environment; even so, the proposed method remains the most accurate (89.4%) with the lowest under-extraction rate (3.8%), whereas the Euclidean distance method has the lowest correct extraction rate (59.8%) and the highest over-extraction rate (26.3%). For the fourth group, the proposed method again has the highest extraction accuracy (92.9%) and the lowest under-extraction rate (2.0%), while the clustering segmentation method has the lowest extraction accuracy (70.8%) and the highest over-extraction rate (18.5%).
Analyzing the results in Table 1, the method proposed in this article achieves the highest ski-track point cloud extraction accuracy, the lowest over-extraction rate, and the lowest under-extraction rate compared with the Euclidean distance method, the clustering segmentation method, and the RANSAC method. On average, the extraction accuracy of the ski-track point cloud improves by 16.9%, the over-extraction rate is reduced by 8.4%, and the under-extraction rate by 8.6%, enabling effective extraction of the ski-track point cloud of a ski resort.

5. Conclusions and Future Work

Compared with the current mainstream classification algorithms, this paper used a composite detection method based on the laser intensity information reflected by the target. The method completes the downsampling of point cloud datasets while preserving the integrity of target edge information and separates ground points from non-ground points according to the intensity distribution histogram and the cloth method. Finally, the clustering and extraction of ski tracks are completed effectively using the smoothness threshold and the curvature between adjacent points. Future work will focus on the study of the laser point cloud intensity distribution and on simplifying the extraction process. In addition, the depth and slope of the ski tracks will be calculated for snow-free and snow-covered scenes, respectively.

Author Contributions

Conceptualization, W.W. and H.Z.; methodology, W.W. and C.Z.; software, W.W.; validation, W.W., C.Z. and H.Z.; formal analysis, W.W.; investigation, W.W.; resources, C.Z.; data curation, W.W.; writing—original draft preparation, W.W.; writing—review and editing, C.Z.; visualization, W.W.; supervision, C.Z. and H.Z.; project administration, C.Z.; funding acquisition, C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, Special project of “Science and Technology Winter Olympics” (2018YFF0300802).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Han, X.F.; Jin, J.S.; Wang, M.J.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3D point cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  2. Li, Y.; Li, C.; Wang, S.; Fang, J. A CSF-Modified Filtering Method based on Topography Feature. Remote Sens. Technol. Appl. 2020, 34, 1261–1268. [Google Scholar]
  3. Schindl, A.; Schindl, M.; Pernerstorfer-Schön, H.; Schindl, L. Low-intensity laser therapy: A review. J. Investig. Med. Off. Publ. Am. Fed. Clin. Res. 2000, 48, 312–326. [Google Scholar]
  4. Chang, W.C.; Pham, V.T. 3-d point cloud registration using convolutional neural networks. Appl. Sci. 2019, 9, 3273. [Google Scholar] [CrossRef] [Green Version]
  5. Pomerleau, F.; Liu, M.; Colas, F.; Siegwart, R. Challenging data sets for point cloud registration algorithms. Int. J. Robot. Res. 2012, 31, 1705–1711. [Google Scholar] [CrossRef] [Green Version]
  6. Froula, D.H.; Turnbull, D.; Davies, A.S.; Kessler, T.J.; Haberberger, D.; Palastro, J.P.; Bahk, S.W.; Begishev, I.A.; Boni, R.; Bucht, S.; et al. Spatiotemporal control of laser intensity. Nat. Photonics 2018, 12, 262–265. [Google Scholar] [CrossRef]
  7. Borghesi, M.; Fuchs, J.; Bulanov, S.V.; Mackinnon, A.J.; Patel, P.K.; Roth, M. Fast ion generation by high-intensity laser irradiation of solid targets and applications. Fusion Sci. Technol. 2006, 49, 412–439. [Google Scholar] [CrossRef]
  8. Zhang, J.; Zhao, X.; Chen, Z.; Lu, Z. A review of deep learning-based semantic segmentation for point cloud. IEEE Access 2019, 7, 179118–179133. [Google Scholar] [CrossRef]
  9. Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A comprehensive survey on point cloud registration. arXiv 2021, arXiv:2103.02690. [Google Scholar]
  10. Nguyen, A.; Le, B. 3D point cloud segmentation: A survey. In Proceedings of the 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), Manila, Philippines, 12–15 November 2013. [Google Scholar]
  11. Liu, Y.; Fan, B.; Xiang, S.; Pan, C. 3D point cloud segmentation: A survey. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  12. Landrieu, L.; Simonovsky, M. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  13. Guo, M.H.; Cai, J.X.; Liu, Z.N.; Mu, T.J.; Martin, R.R.; Hu, S.M. Pct: Point cloud transformer. Comput. Vis. Media 2021, 7, 187–199. [Google Scholar] [CrossRef]
  14. Xu, X.; Lee, G.H. Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  15. Wu, B.; Ma, J.; Chen, G.; An, P. Feature Interactive Representation for Point Cloud Registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019. [Google Scholar]
  16. Zeybek, M.; Şanlıoğlu, İ. Point cloud filtering on UAV based point cloud. Measurement 2019, 133, 99–111. [Google Scholar] [CrossRef]
  17. Gojcic, Z.; Zhou, C.; Wegner, J.D.; Guibas, L.J.; Birdal, T. Learning multiview 3d point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 19–25 June 2020. [Google Scholar]
  18. Meng, H.Y.; Gao, L.; Lai, Y.K.; Manocha, D. Vv-net: Voxel vae net with group convolutions for point cloud segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019. [Google Scholar]
  19. Goyal, A.; Law, H.; Liu, B.; Newell, A.; Deng, J. Revisiting point cloud shape classification with a simple and effective baseline. In Proceedings of the International Conference on Machine Learning, Virtual Event, 18–24 July 2021. [Google Scholar]
  20. Deng, Z.; Yao, Y.; Deng, B.; Zhang, J. A robust loss for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021. [Google Scholar]
  21. Xiao, J.; Adler, B.; Zhang, H. 3D point cloud registration based on planar surfaces. In Proceedings of the 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Hamburg, Germany, 13–15 September 2012. [Google Scholar]
  22. Poullis, C. A framework for automatic modeling from point cloud data. Trans. Pattern Anal. Mach. Intell. 2013, 35, 2563–2575. [Google Scholar] [CrossRef] [PubMed]
  23. Özdemir, S.; Akbulut, Z.; Karsli, F.; Hayrettin, A.C.A.R. Automatic extraction of trees by using multiple return properties of the lidar point cloud. Int. J. Eng. Geosci. 2021, 6, 20–26. [Google Scholar] [CrossRef]
  24. Kammerl, J.; Blodow, N.; Rusu, R.B.; Gedikli, S.; Beetz, M.; Steinbach, E. Real-time compression of point cloud streams. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 14–19 May 2012. [Google Scholar]
  25. Fu, K.; Liu, S.; Luo, X.; Wang, M. Robust point cloud registration framework based on deep graph matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  26. Lee, J.B.; Jung, J.H.; Kim, H.J. Segmentation of seabed points from airborne bathymetric LiDAR point clouds using cloth simulation filtering algorithm. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2020, 38, 1–9. [Google Scholar]
  27. Park, J.; Zhou, Q.Y.; Koltun, V. Colored point cloud registration revisited. In Proceedings of the IEEE International Conference on Computer Vision, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  28. Wang, Y.; Sheng, Y.; Qin, J.; Zhang, S.; Min, X. Purification of single building point cloud data using cloth simulation filter. Bull. Surv. Mapp. 2020, 72. [Google Scholar] [CrossRef]
  29. Sithole, G.; Vosselman, G. Automatic structure detection in a point-cloud of an urban landscape. In Proceedings of the 2003 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Berlin, Germany, 22–23 May 2003. [Google Scholar]
  30. Quan, S.; Yang, J. Compatibility-guided sampling consensus for 3-d point cloud registration. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7380–7392. [Google Scholar] [CrossRef]
  31. Zaganidis, A.; Sun, L.; Duckett, T.; Cielniak, G. Integrating deep semantic segmentation into 3-d point cloud registration. IEEE Robot. Autom. Lett. 2018, 3, 2942–2949. [Google Scholar] [CrossRef] [Green Version]
  32. Liu, K.; Wu, J.; Li, J.J.; Shen, J.J. Research on point cloud data processing for helicopter aided navigation. In Proceedings of the Seventh Symposium on Novel Photoelectronic Detection Technology and Applications, Kunming, China, 5–7 November 2021; International Society for Optics and Photonics: Bellingham, WA, USA, 2021. [Google Scholar]
  33. Prokop, M.; Shaikh, S.A.; Kim, K.S. Low overlapping point cloud registration using line features detection. Remote Sens. 2019, 12, 61. [Google Scholar] [CrossRef] [Green Version]
  34. Min, T.; Kim, E.; Shim, I. Geometry guided network for point cloud registration. IEEE Robot. Autom. Lett. 2021, 12, 7270–7277. [Google Scholar] [CrossRef]
Figure 1. Collection of point cloud datasets at the ski resort: (a) the environmental conditions of the Wanlong ski resort at the time of collection; (b) an aerial picture taken during acquisition.
Figure 2. Structure diagram of point cloud acquisition system, mainly including inertial navigation, DGPS, camera, and laser scanning.
Figure 3. The roadmap of downsampling technology with edge-preserving information.
Figure 4. Working principle and mesh distribution of CSF: (a) the working principle of CSF; (b) the cloth grid made up of interrelated nodes.
Figure 5. The working principle of k-d tree.
Figure 6. The workflow of the growth segmentation process.
Figure 7. The downsampling results of the first laser point cloud datasets in region A: (a) raw data points; (b) downsampled points.
Figure 8. The downsampling results of the second laser point cloud datasets in region A: (a) raw data points; (b) downsampled points.
Figure 9. The downsampling results of the first laser point cloud datasets in region B: (a) raw data points; (b) downsampled points.
Figure 10. The downsampling results of the second laser point cloud datasets in region B: (a) raw data points; (b) downsampled points.
Figure 11. Adding colors to the datasets of Figure 7b.
Figure 12. Selecting a range of intensity values of the laser point clouds of Figure 11 for segmentation.
Figure 13. Selecting a range of intensity values of the first laser point cloud datasets in region A for segmentation: (a) selection of the starting value; (b) selection of the ending value.
Figure 14. Adding colors to the datasets of Figure 8b.
Figure 15. Selecting a range of intensity values of the laser point clouds of Figure 14 for segmentation.
Figure 16. The point clouds of Figure 14 segmented according to intensity: (a) the non-ground points; (b) the ground points.
Figure 17. Adding colors to the datasets of Figure 9b.
Figure 18. Selecting a range of intensity values of the laser point clouds of Figure 17 for segmentation.
Figure 19. The point clouds of Figure 17 segmented according to intensity: (a) the non-ground points; (b) the ground points.
Figure 20. Adding colors to the datasets of Figure 10b.
Figure 21. Selecting a range of intensity values of the laser point clouds of Figure 20 for segmentation.
Figure 22. The point clouds of Figure 20 segmented according to intensity: (a) the non-ground points; (b) the ground points.
Figure 23. The segmentation results of the datasets of Figure 13b with CSF: (a) the removed points; (b) the remaining points.
Figure 24. The segmentation results of the datasets of Figure 16b with CSF: (a) the removed points; (b) the remaining points.
Figure 25. The segmentation results of the point clouds of Figure 19b with CSF: (a) the removed points; (b) the remaining points.
Figure 26. The segmentation results of the point clouds of Figure 22b with CSF: (a) the removed points; (b) the remaining points.
Figure 27. The ski track extracted from Figure 23b: (a) non-ski-track points; (b) ski-track points.
Figure 28. The ski track extracted from Figure 24b: (a) non-ski-track points; (b) ski-track points.
Figure 29. The ski track extracted from Figure 25b: (a) non-ski-track points; (b) ski-track points.
Figure 30. The ski track extracted from Figure 26b: (a) non-ski-track points; (b) ski-track points.
Table 1. Comparison of ski-track extraction effects with different extraction algorithms.

| Name of Points | Extraction Algorithm | Correct Rate (%) | Over-Extraction Rate (%) | Under-Extraction Rate (%) |
|---|---|---|---|---|
| First Data in Region A | Euclidean Distance | 68.3 | 20.4 | 11.3 |
| | Clustering Segmentation | 76.9 | 6.2 | 16.9 |
| | RANSAC Method | 82.6 | 7.3 | 10.1 |
| | Proposed Method | 92.2 | 4.7 | 3.1 |
| Second Data in Region A | Euclidean Distance | 75.6 | 10.4 | 14.0 |
| | Clustering Segmentation | 74.3 | 12.6 | 13.1 |
| | RANSAC Method | 85.4 | 8.9 | 5.7 |
| | Proposed Method | 94.5 | 2.8 | 2.7 |
| First Data in Region B | Euclidean Distance | 59.8 | 26.3 | 13.9 |
| | Clustering Segmentation | 72.5 | 14.3 | 13.2 |
| | RANSAC Method | 80.1 | 10.7 | 9.2 |
| | Proposed Method | 89.4 | 6.8 | 3.8 |
| Second Data in Region B | Euclidean Distance | 72.2 | 15.4 | 12.4 |
| | Clustering Segmentation | 70.8 | 18.5 | 10.7 |
| | RANSAC Method | 85.6 | 7.8 | 6.6 |
| | Proposed Method | 92.9 | 5.1 | 2.0 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Wang, W.; Zhao, C.; Zhang, H. A New Method of Ski Tracks Extraction Based on Laser Intensity Information. Appl. Sci. 2022, 12, 5678. https://doi.org/10.3390/app12115678
