Article

An Enhanced Fusion Strategy for Reliable Attitude Measurement Utilizing Vision and Inertial Sensors

Hanxue Zhang, Chong Shen, Xuemei Chen, Huiliang Cao, Donghua Zhao, Haoqian Huang and Xiaoting Guo
1 School of Software, North University of China, Taiyuan 030051, China
2 Key Laboratory of Instrumentation Science & Dynamic Measurement, Ministry of Education, School of Instrument and Electronics, North University of China, Taiyuan 030051, China
3 School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
4 College of Energy & Electric Engineering, Hohai University, Nanjing 210098, China
* Author to whom correspondence should be addressed.
Hanxue Zhang and Chong Shen contributed equally to this work.
Appl. Sci. 2019, 9(13), 2656; https://doi.org/10.3390/app9132656
Submission received: 17 May 2019 / Revised: 26 June 2019 / Accepted: 28 June 2019 / Published: 29 June 2019

Abstract

In this paper, we present an enhanced fusion strategy based on a radial basis function (RBF) neural network and a cubature Kalman filter (CKF) for vision and inertial integrated attitude measurement, addressing the sampling frequency discrepancy between the sensors and the resulting divergence. First, the multi-frequency problem of the integrated system and the cause of attitude divergence are analyzed. Second, the filter equations and the attitude differential equation are constructed to calculate attitudes separately in the time series, depending on whether both visual and inertial data are available or only inertial data. Third, the attitude errors between the inertial and visual results are sent to the input layer of the RBF network for training; through the activation functions of the hidden layer, the errors are transferred to the output layer for a weighted sum, and the training model is established. To overcome the divergence inherent in a multi-frequency system, the well-trained RBF network, which outputs the attitude errors, is used to compensate the attitudes calculated from pure inertial data. Finally, semi-physical simulation experiments under different scenarios are performed to validate the effectiveness and superiority of the proposed scheme in terms of accurate attitude measurement and enhanced anti-divergence capability.

1. Introduction

Attitude measurement has received considerable attention from the research community because of its importance in navigation, positioning, tracking and control [1,2,3,4], where measurement precision and stability are essential requirements. The main purpose of attitude measurement is to dynamically obtain the spatial attitude angles of a measured object using one or more sensors installed on it, with applications in engineering measurement, military systems, and robot control and localization [5,6,7].
However, developing an effective attitude measurement strategy that performs well while ensuring stability and precision remains a significant challenge, for two main reasons. (i) A single sensor has limited performance because of its own measurement characteristics. (ii) In an integrated system, the data fusion algorithm, such as a filtering method, can only operate when at least two types of data are available; in practice, there is always a sampling frequency discrepancy among heterogeneous sensors. These factors pose multiple obstacles to implementing a high-performance attitude measurement system.
Common single measurement techniques [8,9], such as inertial measurement and visual measurement, are fairly limited in their applications by their inherent measurement characteristics. For the low-cost and widely used micro-electromechanical systems (MEMS) inertial sensor [10,11], the sampling frequency is high, but the measurement errors accumulate quickly with time because of bias drift and scale factor instability [12,13,14]. Vision measurement can provide relatively stable results; however, compared with the inertial sensor, its frequency is significantly lower, which may not be sufficient for tracking fast-moving objects. Thus, a single attitude measurement technique may fail to meet practical requirements, which motivates the development of fusion measurement systems. Considering the advantages and limitations of a single sensor, a vision sensor and an inertial sensor are frequently deployed together, and their data are fused to enhance measurement performance by fully exploiting the complementary properties of each sensor and overcoming the shortcomings inherent in a single sensor. For attitude measurement, the combination of vision and inertial sensors [15] can enhance accuracy, fault detection and confidence in decision making.
To fuse data from heterogeneous sensors, such as a vision sensor and an inertial sensor, a common strategy is the nonlinear Kalman filter (KF) [16], such as the extended KF (EKF) [17], the unscented KF (UKF) [18,19], and the cubature KF (CKF) [20]. In general applications, the EKF relies only on a first-order linearization of the nonlinear system models, so it may perform particularly poorly when the dynamic system is highly nonlinear. To better treat the nonlinearity, the UKF was developed for nonlinear state estimation in the context of control theory; unlike the conventional EKF, the UKF captures the posterior mean and covariance to second-order accuracy. Under the Bayesian filtering framework, the CKF provides a new way to realize the nonlinear filter [21] and is preferred over the UKF for its more stable performance. Therefore, the CKF is used here to fuse the inertial and visual data, and its performance is compared with those of other filtering methods.
In a vision and inertial integrated system, the attitude measurement performance does not depend only on the choice of filtering method; other factors also need to be considered. Among them, the multi-frequency problem is important in both theory and practice. As the sampling frequency of the inertial sensor is high and that of the visual sensor is low, an inconsistent sampling frequency always exists in a vision and inertial integrated attitude measurement system [22]. Correspondingly, a degradation of measurement performance is unavoidable during the intersampling of the slow vision data. To ensure precision and convergence, special approaches to correct the pure inertial attitude errors need to be developed.
To avoid the attitude divergence caused by the attitude differential equation when only inertial data are available during the intersampling of slow visual data, the traditional single-frequency data fusion method needs to be improved. Learning algorithms are now widely used for signal trend prediction to enhance the performance of data fusion in different applications. An artificial neural network [23], which has strong machine learning capability, has been used to enhance attitude accuracy. Long short-term memory [24] has been employed to fuse inertial and vision data, providing high precision for attitude measurement. These methods focus on reducing the attitude error by training the neural network when the visual data are available; however, when the visual data are unavailable, they cannot ensure high-accuracy attitude measurement because of the differences in sampling frequency. A convolutional neural network–support vector machine method [25] has been used to remove faults in data fusion with good effect, and a convolutional neural network [26] has been proposed to improve the accuracy of data fusion robustly and reliably. These methods can fuse data effectively, but they have limitations in multi-frequency integrated attitude measurement.
To achieve reliable attitude estimation in a vision and inertial integrated system when visual data are unavailable, especially when the interval lasts for a period of time, a data fusion methodology based on a radial basis function (RBF) network and the CKF is developed to enhance the performance of attitude measurement. The algorithm reduces the attitude divergence caused by the inconsistent sampling frequency between the inertial and visual data. Combining the CKF and the RBF makes full use of the advantages of the two techniques, so the problem of divergence resulting from the sampling frequency discrepancy is ameliorated.
The rest of the paper is organized as follows. Section 2 explains the problem formulation and system modeling. The core algorithms of the enhanced fusion strategy based on RBF and CKF are given in Section 3. Experimental testing and results are discussed in Section 4. Finally, Section 5 concludes the article.

2. Problem Formulation and System Modeling

2.1. Multi-Frequency Problem in Vision and Inertial Heterogeneous Sensor Integrated System

For an integrated system of heterogeneous sensors, there is always a sampling frequency discrepancy, namely the multi-frequency problem: the sampling frequencies of the different sensors are usually inconsistent. The vision and inertial integrated attitude measurement system is constructed from two heterogeneous sensors, a MEMS-based gyroscope and a camera. The sampling frequency of the camera is typically several to dozens of Hertz, while that of the gyroscope is hundreds of Hertz, so there is a considerable sampling frequency discrepancy between them, as shown in Figure 1.
Furthermore, the multi-frequency problem can lead to divergence of the attitude results in the vision and inertial integrated measurement system. When both visual and inertial data are available, the CKF deals with the nonlinear filtering problem by fusing them together. Inertial measurements are incorporated into the filter loop as inputs, and visual attitudes calculated outside the filter loop serve as the observation vectors. Thus, the nonlinear filter model consists of a nonlinear state model and a linear observation model [27].
$$\begin{cases} x_k = f(x_{k-1}, u_{k-1}) + w_{k-1} \\ z_k = H_k x_k + v_k \end{cases}$$
where $x_k$ is the state vector and $u_{k-1}$ is the control input from the high-frequency angular velocity of the gyroscope; $z_k$ is the observation vector from the low-frequency visual measurements; $H_k$ is the measurement matrix; and $w_{k-1}$ and $v_k$ are independent zero-mean Gaussian white process and measurement noises with covariances $Q_{k-1}$ and $R_k$, respectively.
When both vision and inertial data are available, normal filtering can be used to fuse the data and output the optimal attitude estimate. However, during the intersampling of the slow vision data, a significant amount of gyroscope data is still produced. To ensure a high-frequency attitude output, the attitude differential equation is used to update the attitude angles. The specific formula is given as follows:
$$q(k+1) = \left[\cos\!\left(\frac{|\omega|\,\Delta t}{2}\right) I + \frac{2}{|\omega|}\sin\!\left(\frac{|\omega|\,\Delta t}{2}\right)\Omega(\omega)\right] q(k)$$
where
$$\Omega(\omega) = \frac{1}{2}\begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}$$
$\omega = [\omega_x\ \omega_y\ \omega_z]^T$ is the angular velocity, $q$ is the rotation quaternion, and $I$ is the identity matrix with proper dimensions.
When the updated attitude is obtained by integrating the above formula, the measurement errors accumulate quickly with time because of the bias drift of the MEMS gyroscope. There are two reasons for this. First, at the current manufacturing level, a general MEMS sensor has considerable bias and noise in its output signal. Second, the integral operation means that the bias and noise inevitably accumulate with time. These two factors make it difficult to guarantee convergence during the intersampling of visual data, especially when the interval lasts for a period of time.
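For illustration, a minimal sketch of this quaternion update step is given below in Python with NumPy; the function names and the small-rotation guard are illustrative additions, not part of the original system.

```python
import numpy as np

def omega_matrix(w):
    """Quaternion kinematics matrix Omega(w), including the 1/2 factor used above."""
    wx, wy, wz = w
    return 0.5 * np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])

def propagate_quaternion(q, w, dt):
    """Discrete attitude update: q(k+1) from q(k) and the body angular rate w (rad/s)."""
    norm_w = np.linalg.norm(w)
    if norm_w < 1e-12:                 # negligible rotation during this step
        return q
    half_angle = 0.5 * norm_w * dt
    A = np.cos(half_angle) * np.eye(4) + (2.0 * np.sin(half_angle) / norm_w) * omega_matrix(w)
    q_next = A @ q
    return q_next / np.linalg.norm(q_next)   # re-normalize to a unit quaternion
```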

2.2. System Modeling

The established integrated attitude measurement system consists of a vision module and an inertial module, as displayed in Figure 2. In the inertial module, the built-in gyroscope of the IMU is used to output the angular velocity. Because of the inconsistent sampling frequency between the inertial and visual data, the multi-frequency problem exists in the data fusion of the integrated attitude measurement system. Without visual data, the observation noise tends towards infinity and the Kalman gain approaches zero, so the integrated attitude system is forced into a pure inertial mode when vision data are unavailable, which may result in divergence during the intersampling of the slow visual data. To enhance the fusion performance, a data fusion methodology based on CKF and RBF is developed. When the visual data are available, the CKF is employed to fuse the inertial and visual data; at the same time, the RBF model is trained on the attitude errors between the inertial and visual results. During the intersampling of vision data, the trained RBF predicts the attitude errors, which are used to compensate the attitude calculated from the inertial data. The specific process of the proposed RBF and CKF scheme is discussed in the next section.

3. The Enhanced Fusion Strategy Based on RBF and CKF

3.1. Basic Filter Process of CKF Algorithm

For the multi-frequency fusion issue, an enhanced fusion strategy based on RBF and CKF is the main topic of this paper. To start, the basic filter process of the CKF is given. Instead of applying a linearization operation to the nonlinear equation, the CKF approximates the probability density distribution of the nonlinear function using a set of cubature points, which results in good estimation accuracy with acceptable computational complexity. Furthermore, in the CKF, the cubature points and weights are determined uniquely by the dimension of the state vector, so the complex process of parameter selection is avoided. The filter process is given as follows.
(1) Calculating the cubature points:
$$\xi_i = \sqrt{n}\,[1]_i$$
$$\omega_i = \frac{1}{2n}, \quad i = 1, 2, \ldots, 2n$$
$$X_{i,k-1|k-1} = \sqrt{P_{k-1|k-1}}\,\xi_i + \hat{x}_{k-1|k-1}$$
where $n$ is the dimension of the state vector and $[1]_i$ denotes the $i$-th column of the point set $[1]$, which is composed of the columns of the $n \times n$ identity matrix and their negatives.
(2) Time update:
The cubature points are propagated through the state equation, $X^*_{i,k|k-1} = f(X_{i,k-1|k-1}, u_{k-1})$, and the predicted state and covariance matrix at time $k$ can be described as:
$$\hat{x}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} X^*_{i,k|k-1}$$
$$P_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} X^*_{i,k|k-1} X^{*T}_{i,k|k-1} - \hat{x}_{k|k-1}\hat{x}^T_{k|k-1} + Q_{k-1}$$
(3) Measurement update:
Calculating the cubature points:
$$X_{i,k|k-1} = \sqrt{P_{k|k-1}}\,\xi_i + \hat{x}_{k|k-1}$$
Using the measurement equation to propagate the cubature points:
$$Z_{i,k|k-1} = h(X_{i,k|k-1})$$
$$\hat{z}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} Z_{i,k|k-1}$$
The cross-covariance and innovation covariance matrices at time $k$ take the following forms:
$$P_{xz,k|k-1} = \sum_{i=1}^{2n}\omega_i X_{i,k|k-1} Z^T_{i,k|k-1} - \hat{x}_{k|k-1}\hat{z}^T_{k|k-1}$$
$$P_{zz,k|k-1} = \sum_{i=1}^{2n}\omega_i Z_{i,k|k-1} Z^T_{i,k|k-1} - \hat{z}_{k|k-1}\hat{z}^T_{k|k-1} + R_k$$
Calculating the Kalman gain:
$$K_k = P_{xz,k|k-1}\,P_{zz,k|k-1}^{-1}$$
The attitude estimate and the covariance estimate of the attitude error at time $k$ are expressed as follows:
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - \hat{z}_{k|k-1}\right)$$
$$P_{k|k} = P_{k|k-1} - K_k P_{zz,k|k-1} K_k^T$$
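For illustration, the following is a compact Python/NumPy sketch of one CKF cycle following the steps above; the function name ckf_step and its interface are assumptions made for this example, and the observation model is kept linear as in the filter model of Section 2.1.

```python
import numpy as np

def ckf_step(x, P, u, z, f, H, Q, R):
    """One predict/update cycle of the cubature Kalman filter described above.
    x, P : previous state estimate and covariance
    u    : gyroscope control input
    z    : visual observation, or None during the intersampling of vision data
    f    : nonlinear state function f(x, u)
    H    : linear measurement matrix
    Q, R : process and measurement noise covariances
    """
    n = x.size
    # cubature points: columns of the identity matrix and their negatives, scaled by sqrt(n)
    xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))          # n x 2n
    w = 1.0 / (2 * n)

    # --- time update ---
    S = np.linalg.cholesky(P)
    X = S @ xi + x[:, None]                                        # cubature points
    X_prop = np.column_stack([f(X[:, i], u) for i in range(2 * n)])
    x_pred = w * X_prop.sum(axis=1)
    P_pred = w * X_prop @ X_prop.T - np.outer(x_pred, x_pred) + Q

    if z is None:                       # vision unavailable: prediction only
        return x_pred, P_pred

    # --- measurement update ---
    S_pred = np.linalg.cholesky(P_pred)
    Xp = S_pred @ xi + x_pred[:, None]
    Z = H @ Xp                                                     # linear observation model
    z_pred = w * Z.sum(axis=1)
    P_xz = w * Xp @ Z.T - np.outer(x_pred, z_pred)
    P_zz = w * Z @ Z.T - np.outer(z_pred, z_pred) + R
    K = P_xz @ np.linalg.inv(P_zz)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ P_zz @ K.T
    return x_new, P_new
```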

3.2. RBF Algorithm

The radial basis function neural network is a widely employed learning method with good approximation capability, fast learning convergence, and the simple structure of a feed-forward network. It usually has three layers, as shown in Figure 3, where $x_i$ denotes the $i$-th input element. The hidden layer contains $m$ radial basis neurons, whose basis functions respond locally to the input; $\varphi_i(x)$ is the activation function of the hidden layer. When the input is close to the center of a basis function, a large output is generated. $W_i$ indicates the weight between the hidden layer and the output layer, and $Y$ is the linear weighted sum of the outputs of the hidden-layer neurons.
The input layer includes a set of sensory units representing the signal source nodes. The hidden layer has a sufficient dimension and uses the activation functions to map a low-dimensional, nonlinearly separable input into a high-dimensional, linearly separable space. The output layer performs a linear transformation from the hidden nodes to the output space. The output can be given by:
$$y(x) = \sum_{i=1}^{m} W_i\,\varphi_i(x) + w_0$$
where $w_0$ is the bias term and $W_i$ is the weight between the hidden layer and the output layer. The Gaussian basis function is the most frequently used radial basis function for $\varphi_i(x)$, with the following form:
$$\varphi(x) = \exp\!\left(-\frac{\|x-\mu\|^2}{2\sigma^2}\right)$$
where $x$ is the input of the network; $\|\cdot\|$ denotes the Euclidean distance; and $\mu$ and $\sigma$ are the center and width of the Gaussian function, the two factors used to adjust the deviation. From the structural characteristics, the center nodes $\mu$ of the hidden layer determine the performance of the RBF network, and appropriate center nodes enhance the learning effect. The value of $\mu$ can be determined by different methods; the most commonly used is the k-means clustering algorithm [28], which selects a subset of the data points as the RBF centers and can adjust the deviation of the RBF [29,30]. The specific steps of the k-means clustering algorithm are as follows:
Step 1: Preset the output precision E of the whole neural network. k nodes of the hidden layer are selected arbitrarily from the input samples X.
Step 2: M samples from X are randomly selected as the centers of the Gaussian functions in the hidden layer, and the width of the Gaussian function is set.
Step 3: The k-means clustering algorithm is employed to average the Euclidean distances from each sample to the center nodes, updating the center nodes and the function width.
Step 4: The RBF network is constructed using the updated centers, the function width, and the weights between the hidden layer and the output layer; the output Y is obtained.
Step 5: Calculate the sum of squared errors E(M) by comparing the practical output Y with the theoretical output y. If E(M) < E, proceed to Step 6; otherwise, set M = M + 1 and return to Step 2.
Step 6: Compare E(M) and E(M − 1). If neither satisfies the accuracy requirement, set M = M − 1 and repeat Steps 2 to 5; otherwise, if E(M − 1) < E, take M as the number of center nodes and end the loop.
Using the above-described procedure, the center nodes of the hidden layer can be determined by the k-means clustering algorithm; a minimal implementation sketch is given below.
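The following is a minimal Python/NumPy sketch of such a Gaussian RBF network with k-means-selected centers; the class name RBFNetwork, the width heuristic and the least-squares solution of the output weights are illustrative assumptions rather than the exact training procedure used in this work.

```python
import numpy as np

def kmeans_centers(X, m, iters=100, seed=0):
    """Pick m RBF centers from the training inputs X (N x d) by k-means clustering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), m, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(m):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

class RBFNetwork:
    """Minimal Gaussian RBF network: k-means centers, a common width, linear output layer."""

    def fit(self, X, y, m=10):
        # m is the number of centers; it must not exceed the number of training samples
        self.centers = kmeans_centers(X, m)
        # width sigma from the mean distance between centers (a common heuristic)
        d = np.linalg.norm(self.centers[:, None] - self.centers[None], axis=2)
        self.sigma = d[d > 0].mean() if m > 1 else 1.0
        Phi = self._design(X)
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # hidden-to-output weights + bias
        return self

    def _design(self, X):
        dist = np.linalg.norm(X[:, None] - self.centers[None], axis=2)
        Phi = np.exp(-dist ** 2 / (2 * self.sigma ** 2))
        return np.hstack([Phi, np.ones((len(X), 1))])      # append a bias column

    def predict(self, X):
        return self._design(X) @ self.w
```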

3.3. Fusion Strategy Based on RBF and CKF

As the sampling frequency of the inertial sensor is high and that of the visual sensor is low, an inconsistent sampling frequency exists in the inertial and vision integrated attitude measurement system. During the intersampling of vision data, only the time update is performed and there is no measurement update, because the measurement error is regarded as infinite and the Kalman gain approaches zero. The attitude therefore cannot be corrected by the slow measurement vector, and the accuracy and stability of the filtering are difficult to guarantee. To deal with the multi-frequency issue between the inertial and visual data, a fusion strategy based on the RBF and CKF algorithms is developed to improve the performance of the integrated attitude measurement system.
This research uses the RBF network to establish a model by training on attitude errors while the visual data are available, and to compensate the attitude with predicted errors when the visual data are absent. During training, the attitude errors are used as the input of the RBF; they are transferred through the activation functions of the hidden layer to the weighted sum of the output layer. During prediction, the well-trained RBF model outputs the estimated errors. The output attitude angles are obtained by the CKF and the RBF together: the CKF estimates the attitude by data fusion when both inertial and visual data are present, and when vision data are absent, the well-trained RBF and the CKF are employed together to predict the attitude. In this way, the accuracy and stability of the attitude estimation are improved. The specific process of the optimal input–output estimation relationship constructed by the RBF is shown in Figure 4.
In our study, the relationship between past vision data and the current attitude angles is investigated so that an estimated error can be provided continuously, even during the intersampling of visual data. As shown in Figure 4a, the RBF training process runs while the visual sensor works well: the attitude errors are the inputs of the RBF, the activation functions of the hidden layer transfer the errors to the output layer for the weighted sum, and the output attitude angles of the system are corrected with the proposed data fusion algorithm. The RBF prediction process during the intersampling of visual data is shown in Figure 4b: the attitude error is used as the input of the RBF, the well-trained model outputs the estimated errors, and the inertial results are corrected by the RBF predictions to output the final attitude angles. A sketch of the resulting fusion loop is given below.
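The following compact sketch combines the ckf_step and RBFNetwork sketches from Section 3.1 and Section 3.2; the window length, the retraining schedule and the additive error compensation are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

WINDOW = 5   # number of past attitude errors fed to the RBF (illustrative choice)

def fuse_attitude(imu_stream, vision_lookup, x0, P0, f, H, Q, R, rbf):
    """Illustrative fusion loop under the stated assumptions.
    imu_stream    : iterable of gyroscope control inputs at the high IMU rate
    vision_lookup : function(k) -> visual observation z_k, or None between vision samples
    rbf           : model with fit(X, y) / predict(X), e.g. the RBFNetwork sketch
    """
    x, P = x0.copy(), P0.copy()
    errors = []                   # history of attitude errors (fused minus inertial-only)
    estimates = []
    trained = False
    for k, u in enumerate(imu_stream):
        z = vision_lookup(k)
        x_inertial, P_inertial = ckf_step(x, P, u, None, f, H, Q, R)   # inertial-only step
        if z is not None:
            # vision available: CKF fusion, and collect an error sample for RBF training
            x, P = ckf_step(x, P, u, z, f, H, Q, R)
            errors.append(x - x_inertial)
            if len(errors) > WINDOW + 10:      # enough samples to (re)train the RBF
                X_tr = np.array([np.concatenate(errors[i:i + WINDOW])
                                 for i in range(len(errors) - WINDOW)])
                y_tr = np.array(errors[WINDOW:])
                rbf.fit(X_tr, y_tr)
                trained = True
        else:
            # vision unavailable: inertial propagation compensated by the trained RBF
            x, P = x_inertial, P_inertial
            if trained and len(errors) >= WINDOW:
                feat = np.concatenate(errors[-WINDOW:])
                e_hat = rbf.predict(feat[None])[0]   # predicted attitude error
                x = x + e_hat                        # compensate the inertial attitude
                errors.append(e_hat)                 # roll the prediction forward
        estimates.append(x.copy())
    return estimates
```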
Reliable attitude estimation for the integrated attitude measurement is achieved by the data fusion algorithm based on RBF and CKF. The proposed methodology reduces the side effects of the inconsistent sampling frequency; the problem of divergence resulting from the sampling frequency discrepancy between the inertial and vision data is ameliorated, and the accuracy and reliability of the estimated attitude are both improved.

4. Experimental Test and Comparison

A semi-physical simulation platform for attitude measurement is established, as shown in Figure 5, to evaluate the performance of the inertial and vision data fusion algorithm based on RBF and CKF. The camera is fixed onto a bracket installed on the platform, and the target and the IMU are fixed on the turntable. The target is represented by four non-coplanar infrared LEDs [31,32,33,34,35,36], as shown in Figure 5. An external PC separately collects the image data from the camera and the angular velocity from the built-in gyroscope of the IMU. With only four feature points, the workload of image processing and feature matching is greatly reduced, and the computational efficiency of the data fusion is largely improved. In this experiment, the specifications of the gyroscope in the IMU are given in Table 1.
In the experiment, the time synchronization of the sensor fusion system must be considered before the vision and inertial data are fused. The key to synchronizing the whole attitude measurement system is to synchronize the visual and inertial subsystems: the inertial system comprises an IMU and the vision system comprises a camera. A program timer obtains the system time at the start of the inertial and visual measurements. When the inertial sensor captures inertial data, a time stamp is attached to each sample; when the vision sensor captures a frame, the time stamp is recorded with the picture. The time stamps of the inertial and vision systems are then matched in the calculation process, and when the difference between the inertial and visual time stamps is small enough, the corresponding inertial and vision data are considered synchronized, as sketched below. In this way, the time synchronization of the sensor fusion system is guaranteed.
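A simple sketch of this nearest-time-stamp matching is shown below, assuming NumPy arrays of time stamps in seconds, sorted in ascending order; the function name and the tolerance value are illustrative.

```python
import numpy as np

def match_timestamps(t_imu, t_vision, tol=0.005):
    """Pair each vision frame with the nearest IMU sample; pairs whose time-stamp
    difference is within `tol` seconds are treated as synchronized.
    Returns a list of (imu_index, vision_index) pairs."""
    pairs = []
    for j, tv in enumerate(t_vision):
        i = np.searchsorted(t_imu, tv)
        # candidate neighbours: the IMU samples just before and just after tv
        candidates = [k for k in (i - 1, i) if 0 <= k < len(t_imu)]
        best = min(candidates, key=lambda k: abs(t_imu[k] - tv))
        if abs(t_imu[best] - tv) <= tol:
            pairs.append((best, j))
    return pairs
```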

4.1. Normalization of Coordinate System

The multi-sensor attitude measurement system is composed of inertial and vision sensors, which are expressed in different coordinate systems. To obtain the attitude from the inertial and vision data, the basic requirement is that the two different measurements be converted into the same coordinate system. There are a turntable coordinate system (b system), a camera coordinate system (c system), an inertial coordinate system (i system) and a target coordinate system (t system). The specific normalization among these coordinate frames is depicted in Figure 6.
As our objective is to measure the relative angles between the body coordinate system and one stationary reference frame, such as the camera coordinate system, the relationship can be expressed by the rotation matrix $R_c^b(t)$, where the $t$ in parentheses indicates that the matrix is time-varying. The coordinate system normalization transforms the inertial and visual measurements into $R_c^b(t)$. The mathematical model of the rotation transformations from the c system to the b system using visual and inertial quantities can be expressed as follows:
$$\begin{cases} R_c^b(t) = R_t^b\,R_c^t(t) \\ R_c^b(t) = R_i^b\,R_c^i(t) \end{cases}$$
where $R_c^t(t)$ is the rotation matrix between the t system and the c system, measured by vision; $R_c^i(t)$ is the rotation matrix between the i system and the c system, obtained from the inertial measurement; $R_t^b$ is the rotation matrix between the t system and the b system; and $R_i^b$ is the rotation matrix between the i system and the b system. As the IMU and the target are both fixed onto the turntable, $R_t^b$ and $R_i^b$ can be obtained by calibration.
The rotation transformation between two coordinate systems can be expressed by three successive rotations around the $x$, $y$ and $z$ axes with the corresponding angles $\alpha$, $\beta$ and $\theta$, as displayed in Figure 7. In this way, the inertial and visual measurements are converted to express the motion of the object turntable, which prepares them for further data fusion; a small sketch of this frame normalization is given below.
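The following sketch illustrates the frame normalization; the elementary rotation conventions (right-handed, column vectors) and the rotation order in euler_to_matrix are assumptions for illustration, since the exact convention is fixed by the calibration of the platform.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_to_matrix(alpha, beta, theta):
    """Three successive rotations about x, y, z; the multiplication order is an assumed convention."""
    return rot_z(theta) @ rot_y(beta) @ rot_x(alpha)

def vision_to_body(R_tb, R_ct):
    """R_c^b(t) = R_t^b R_c^t(t): map the visual measurement into the turntable (b) frame."""
    return R_tb @ R_ct

def inertial_to_body(R_ib, R_ci):
    """R_c^b(t) = R_i^b R_c^i(t): map the inertial measurement into the turntable (b) frame."""
    return R_ib @ R_ci
```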

4.2. Comparison Among Different Methods by Repeated Experiments with Round-Trip-Step-up Motion

Repeated experiments are conducted to compare the performance of the different methods. The motion trajectory of the moving object is designed as displayed in Figure 8, which we call the round-trip-step-up motion: the pitch axis moves from −20 degrees to +20 degrees in round-trip form and the yaw axis moves from −20 degrees to +20 degrees in step-up form.
To provide convincing results, five different methods are evaluated, and the attitude errors of the pitch and yaw angles are shown in Figure 9, where the abscissa is time in seconds and the ordinate represents the pitch and yaw angle errors. Considering the overall performance of the five methods in terms of stability and precision, the proposed RBF-CKF method outperforms the EKF, RBF-EKF, CKF and back-propagation neural network (BPNN) based BP-CKF methods. Moreover, during the periods when the errors of the other methods are quite large, such as 40–60 s for the pitch axis and 20–40 s and 40–60 s for the yaw axis, the RBF-CKF remains steady, which indicates its stability. More importantly, this shows that the RBF-CKF algorithm can maintain good convergence during the intersampling of slow visual data.
The reasons for the above behavior are as follows. When the visual measurements are available, the attitude is calculated by normal filtering, fusing the inertial and visual data. During the intersampling of the visual data, the EKF and CKF methods calculate the attitude from the inertial data only. For the RBF-EKF and BP-CKF methods, the attitude calculated from the inertial data is compensated by the RBF or BP prediction results, combined with the EKF and CKF respectively, before the final estimated attitudes are output. Likewise, for the RBF-CKF method, the attitude angles calculated from the inertial data are compensated by the RBF prediction results before the final estimated attitudes are output. The optimal estimation of RBF-CKF is reflected in the pitch and yaw angle errors, and its performance is clearly superior to that of the other four methods.
The maximum errors and Root Mean Squared Errors (RMSE) of the different methods across the three groups of experiments are shown in Table 2 and Figure 10. It can be clearly seen that RBF-CKF is more stable than the other methods. The running time of each method is shown in Table 3. Owing to the more complex structure of the RBF network and the processes of establishing the training model and predicting the estimated error, the running time of RBF-CKF is the longest among the five methods.
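For reference, the two error metrics reported in the tables can be computed as in this short sketch, assuming the estimated and reference angle sequences are NumPy arrays in degrees.

```python
import numpy as np

def attitude_error_metrics(estimated, reference):
    """Return the signed maximum error and the RMSE of an angle sequence (degrees)."""
    err = np.asarray(estimated) - np.asarray(reference)
    max_err = err[np.argmax(np.abs(err))]     # keep the sign of the largest deviation
    rmse = np.sqrt(np.mean(err ** 2))
    return max_err, rmse
```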

4.3. Performance of the Round-trip Motion

Another two repeated experiments with longer running times were conducted to confirm the performance of the proposed RBF-CKF method. The motion trajectory of the moving object is shown in Figure 11; it is a round-trip motion in which the pitch and yaw axes range from 0° to +40°. Two groups of experiments were carried out, and the attitude errors of the pitch and yaw angles are shown in Figure 12. The optimal estimation of RBF-CKF is also evident in the pitch and yaw angle errors over the longer data, and its performance remains superior to the other four methods. During the periods when the errors of the other four methods are relatively large, such as 40–60 s and 120–140 s for the pitch axis and 20–40 s and 100–120 s for the yaw axis, the RBF-CKF remains stable. In fact, RBF-CKF maintains good convergence during the intersampling of vision data.
The maximum errors and RMSE of the different methods across the two groups of experiments are shown in Table 4 and Figure 13. Even with the lengthened experimental data and time, the performance of the RBF-CKF method is better than that of the other four methods.

4.4. Application and Development Prospects of the Research

In our study, the attitude measurement technique is applied to a helmet tracking system, which requires high-accuracy indoor attitude measurement. The attitude of the object is obtained by capturing, with a camera, four feature points installed on the moving object. This measurement method is practical and operable for indoor close-range attitude measurement and achieves high accuracy, especially when applied to the helmet tracking system.
In the next step of the research, the experimental setup will be improved by adding groups of infrared spots to a helmet to extend the number of feature points, as shown in Figure 14. In Figure 14, four feature points are considered as one group, and there are multiple groups on a helmet for attitude measurement. The final objective is to obtain highly accurate, stable, applicable and operational attitude measurements that can be applied in the helmet tracking system.

5. Conclusions

An enhanced data fusion strategy based on RBF and CKF has been developed for attitude estimation in a vision and inertial integrated system, with consideration of the sampling frequency discrepancy and the resulting divergence. By integrating the RBF technique into the normal filtering framework of the CKF, the proposed reliable attitude measurement not only reduces the divergence inherent in the intersampling of slow vision data, when the attitude is calculated from inertial data only, but also ensures the stability of the high-frequency output of attitude angles. More importantly, in contrast to existing compensation techniques that rely only on the past information of a single source, the present scheme makes full use of the past inertial and vision information. Comparisons and extensive experiments under different scenarios illustrate that reliable attitude estimation performance and enhanced convergence capability are achieved by the proposed approach.

Author Contributions

Methodology X.G.; Software, H.Z., C.S. and X.G.; Validation, C.S.; Writing—original draft, H.Z., X.C. and C.S.; Writing—review & editing, H.C., D.Z., H.H. and X.G.

Funding

This work was supported in part by the National Natural Science Foundation of China (61603353, 51705477, 51705021, U1764261), and the Pre-research Field Foundation (6140518010201) and the Scientific and Technology Innovation Programs of Higher Education Institutions in Shanxi (201802084), the Shanxi Province Science Foundation for Youths (201601D021067), the Program for the Top Young Academic Leaders of Higher Learning Institutions of Shanxi, the Young Academic Leaders Foundation in North University of China, Science Foundation of North University of China (XJJ201822), and the Fund for Shanxi “1331 Project” Key Subjects Construction, National Natural Science Foundation of China (61703098), and Natural Science Foundation of Jiangsu Province (BK20160699).

Acknowledgments

Hanxue Zhang and Chong Shen contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Enayati, N.; De Momi, E.; Ferrigno, G. A quaternion-based unscented Kalman filter for robust optical/inertial motion tracking in computer-assisted surgery. IEEE Trans. Instrum. Meas. 2015, 64, 2291.
2. Li, N.; Zhao, L.; Li, L.; Jia, C. Integrity monitoring of high-accuracy GNSS-based attitude determination. GPS Solut. 2018, 22, 120.
3. Himberg, H.; Motai, Y.; Bradley, A. A multiple model approach to track head orientation with delta quaternions. IEEE Trans. Cybern. 2013, 43, 90–101.
4. No, H.; Cho, A.; Kee, C. Attitude estimation method for small UAV under accelerative environment. GPS Solut. 2015, 19, 343–355.
5. Yu, J.; Bu, X.Z.; Xiang, C.; Wang, X.Z. Spinning projectile's attitude measurement using background magnetic field compensation. J. Appl. Remote Sens. 2016, 10, 014001.
6. Jin, J.; Zhao, L.N.; Xu, S.L. High-precision rotation angle measurement method based on monocular vision. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2014, 31, 1401–1407.
7. Ko, N.Y.; Jeong, S.; Bae, Y. Sine rotation vector method for attitude estimation of an underwater robot. Sensors 2016, 16, 1213.
8. Rong, Y.P.; Wang, Q.J.; Lu, S.L.; Li, G.L. Improving attitude detection performance for spherical motors using a MEMS inertial measurement sensor. IET Electr. Power Appl. 2019, 13, 198–205.
9. Li, J. Binocular vision measurement method for relative position and attitude based on dual-quaternion. J. Mod. Opt. 2017, 64, 1846–1853.
10. Cao, H.L.; Zhang, Y.G.; Han, Z.Q.; Shao, X.L.; Huang, G.K.; Shi, Y.B.; Tang, J.; Shen, C.; Liu, J. Pole-zero-temperature compensation circuit design and experiment for dual-mass MEMS gyroscope bandwidth expansion. IEEE/ASME Trans. Mechatron. 2019, 24, 677–688.
11. Guo, H.; Zhu, Q.; Tang, J.; Nian, F.S.; Liu, W.Y.; Zhao, R.; Du, F.F.; Yang, B.G.; Liu, J. A temperature and humidity synchronization detection method based on microwave coupled-resonator. Sens. Actuators B Chem. 2018, 261, 434–440.
12. Shen, C.; Yang, J.T.; Tang, J.; Liu, J.; Cao, H.L. Note: Parallel processing algorithm of temperature and noise error for micro-electro-mechanical system gyroscope based on variational mode decomposition and augmented nonlinear differentiator. Rev. Sci. Instrum. 2018, 89, 076107.
13. Shen, C.; Song, R.; Li, J.; Zhang, X.M.; Tang, J.; Shi, Y.B.; Liu, J.; Cao, H.L. Temperature drift modeling of MEMS gyroscope based on genetic-Elman neural network. Mech. Syst. Signal Process. 2016, 72, 897–905.
14. Shen, C.; Liu, X.; Cao, H.; Zhou, Y.; Liu, J.; Tang, J.; Chen, X. Brain-like navigation scheme based on MEMS-INS and place recognition. Appl. Sci. 2019, 9, 1708.
15. Park, C.G.; Kang, C.H.; Hwang, S.; Chung, C.J. An adaptive complementary filter for gyroscope/vision integrated attitude estimation. Int. J. Aeronaut. Space Sci. 2016, 17, 214–221.
16. Soken, H.E.; Hajiyev, C.; Sakai, S. Robust Kalman filtering for small satellite attitude estimation in the presence of measurement faults. Eur. J. Control 2014, 20, 64–72.
17. Gui, H.C.; de Ruiter, A.H.J. Quaternion invariant extended Kalman filtering for spacecraft attitude estimation. J. Guid. Control Dyn. 2018, 41, 863–878.
18. Lee, D.; Vukovich, G.; Lee, R. Robust adaptive unscented Kalman filter for spacecraft attitude estimation using quaternion measurements. J. Aerosp. Eng. 2017, 30, 04017009.
19. Cao, L.; Yang, W.W.; Li, H.N.; Zhang, Z.H.; Shi, J.J. Robust double gain unscented Kalman filter for small satellite attitude estimation. Adv. Space Res. 2017, 60, 499–512.
20. Qiu, Z.B.; Qian, H.M.; Wang, G.Q. Adaptive robust cubature Kalman filtering for satellite attitude estimation. Chin. J. Aeronaut. 2018, 31, 806–819.
21. Yang, F.; Luo, Y.J.; Zheng, L.T. Double-layer cubature Kalman filter for nonlinear estimation. Sensors 2019, 19, 986.
22. Guo, X.T.; Sun, C.K.; Wang, P. Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system. Rev. Sci. Instrum. 2017, 88, 085002.
23. Zhang, J.W.; Cao, J.; Zhang, D.Y. ANN-based data fusion for lumber moisture content sensors. Trans. Inst. Meas. Control 2006, 28, 69–79.
24. Monner, D.; Reggia, J.A. A generalized LSTM-like training algorithm for second-order recurrent neural networks. Neural Netw. 2012, 25, 70–83.
25. Nahhas, F.H.; Shafri, H.Z.M.; Sameen, M.I.; Pradhan, B. Deep learning approach for building detection using LiDAR-orthophoto fusion. J. Sens. 2018, 2018, 7212307.
26. Xu, X.D.; Li, W.; Ran, Q.; Du, Q.; Gao, L.R. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 937–949.
27. Wang, S.; Feng, J.; Chi, K.T. Novel cubature Kalman filtering for systems involving nonlinear states and linear measurements. AEU Int. J. Electron. Commun. 2014, 69, 314–320.
28. Oh, S.K.; Kim, W.D.; Pedrycz, W.; Joo, S.C. Design of K-means clustering-based polynomial radial basis function neural networks (pRBF NNs) realized with the aid of particle swarm optimization and differential evolution. Neurocomputing 2012, 78, 121–132.
29. Xu, X.Z.; Shan, D.; Li, S.; Sun, T.F.; Xiao, P.C.; Fan, J.P. Multi-label learning method based on ML-RBF and Laplacian ELM. Neurocomputing 2019, 331, 213–219.
30. Wang, Z.; Wang, J.; Cai, W.; Zhou, J.; Du, W.; Wang, J.; He, H. Application of an improved ensemble local mean decomposition method for gearbox composite fault diagnosis. Complexity 2019, 2019, 17.
31. Guo, X.; Tang, J.; Li, J.; Shen, C.; Liu, J. Attitude measurement based on imaging ray tracking model and orthographic projection with iteration algorithm. ISA Trans. 2019.
32. Guo, X.T.; Tang, J.; Li, J.; Wang, C.G.; Shen, C.; Liu, J. Determine turntable coordinate system considering its non-orthogonality. Rev. Sci. Instrum. 2019, 90, 033704.
33. Wang, Z.; He, G.; Du, W.; Zhou, J.; Han, X.; Wang, J.; Kou, Y. Application of parameter optimized variational mode decomposition method in fault diagnosis of gearbox. IEEE Access 2019, 7, 44871–44882.
34. Wang, Z.; Zhou, J.; Wang, J.; Du, W.; Wang, J.; Han, X.; He, G. A novel fault diagnosis method of gearbox based on maximum kurtosis spectral entropy deconvolution. IEEE Access 2019, 7, 29520–29532.
35. Wang, Z.; Du, W.; Wang, J.; Zhou, J.; Han, X.; Ziying, Z.; Huang, L. Research and application of improved adaptive MOMEDA fault diagnosis method. Measurement 2019, 140, 63–75.
36. Wang, Z.; Zheng, L.; Du, W.; Cai, W.; Zhou, J.; Wang, J.; He, G. A novel method for intelligent fault diagnosis of bearing based on capsule neural network. Complexity 2019, 2019, 17.
Figure 1. Sampling frequency discrepancy between visual and inertial data.
Figure 2. Vision and inertial integrated attitude measurement model. RBF: radial basis function; CKF: cubature Kalman filter.
Figure 3. Structure of RBF.
Figure 4. Processing of (a) RBF training with vision data; and (b) RBF prediction without vision data. IMU: inertial measurement unit.
Figure 5. Experimental device.
Figure 6. Transformation of the coordinate systems.
Figure 7. Rotation transformation of (a) inertial data and (b) vision data.
Figure 8. Motion trajectory of pitch axis and yaw axis.
Figure 9. (a) The attitude errors of experiment one; (b) The attitude errors of experiment two; (c) The attitude errors of experiment three.
Figure 10. Comparison of RMSE for different methods across three groups of experiments.
Figure 11. Motion trajectory of pitch axis and yaw axis.
Figure 12. (a) The attitude errors of experiment one; (b) The attitude errors of experiment two.
Figure 13. Comparison of RMSE among different methods of two groups of experiments.
Figure 14. A helmet with multiple groups of four non-coplanar feature points.
Table 1. Gyroscope parameters.

Parameter | Value
In-run bias stability | 10 deg/h
Bandwidth (−3 dB) | 450 Hz
A/D resolution | 16 bits
Noise density | 0.01 (deg/s)/√Hz
Table 2. Maximum errors and RMSE for different methods across three groups of experiments (degree).

Method | Pitch Max Error | Pitch RMSE | Yaw Max Error | Yaw RMSE
EKF-1 | 0.9039 | 0.3515 | −0.7013 | 0.2783
RBF-EKF-1 | 0.7657 | 0.3221 | −0.7021 | 0.2562
CKF-1 | 0.1984 | 0.1163 | −0.2475 | 0.1233
BP-CKF-1 | 0.0946 | 0.1083 | −0.3557 | 0.0936
RBF-CKF-1 | 0.1209 | 0.0447 | −0.1351 | 0.0772
EKF-2 | 0.6617 | 0.2451 | −0.5378 | 0.1785
RBF-EKF-2 | 0.5253 | 0.2451 | −0.6738 | 0.1785
CKF-2 | 0.2519 | 0.1055 | −0.2987 | 0.1509
BP-CKF-2 | 0.1454 | 0.1022 | −0.4033 | 0.1507
RBF-CKF-2 | 0.1448 | 0.0563 | −0.2025 | 0.1064
EKF-3 | 0.7494 | 0.3438 | −0.6702 | 0.2254
RBF-EKF-3 | 0.6147 | 0.1471 | −0.8035 | 0.2642
CKF-3 | 0.2235 | 0.1216 | −0.3718 | 0.1801
BP-CKF-3 | 0.1169 | 0.1231 | −0.4761 | 0.1735
RBF-CKF-3 | 0.1570 | 0.0769 | −0.2743 | 0.1376
Table 3. Running time of five methods.

Method | Running Time/s
EKF | 0.1460
RBF-EKF | 0.1496
CKF | 0.3578
BP-CKF | 0.3679
RBF-CKF | 0.4032
Table 4. Maximum errors and RMSE among different methods of two groups of experiments (degree).

Method | Pitch Max Error | Pitch RMSE | Yaw Max Error | Yaw RMSE
EKF-1 | 1.0348 | 0.3515 | 0.6903 | 0.2783
RBF-EKF-1 | 0.8956 | 0.3221 | 0.5537 | 0.2562
CKF-1 | 0.5843 | 0.1163 | −0.2637 | 0.1233
BP-CKF-1 | 0.4757 | 0.1083 | 0.2207 | 0.0936
RBF-CKF-1 | 0.0348 | 0.0447 | 0.1558 | 0.0772
EKF-2 | 0.8557 | 0.2451 | 0.3912 | 0.1785
RBF-EKF-2 | 0.7214 | 0.2451 | −0.3357 | 0.1785
CKF-2 | 0.5046 | 0.1055 | 0.1458 | 0.1509
BP-CKF-2 | 0.4032 | 0.1022 | −0.1658 | 0.1507
RBF-CKF-2 | 0.3357 | 0.0563 | −0.0947 | 0.1064
