Article

Seismic Elastic Parameter Inversion via a FCRN and GRU Hybrid Network with Multi-Task Learning

1 School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China
2 Research Institute of Petroleum Exploration and Development, PetroChina, Beijing 100083, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10519; https://doi.org/10.3390/app131810519
Submission received: 28 August 2023 / Revised: 13 September 2023 / Accepted: 18 September 2023 / Published: 21 September 2023

Abstract

Seismic elastic parameter inversion translates seismic data into subsurface structures and physical properties of formations. Traditional model-based inversion methods have limitations in retrieving complex geological structures. In recent years, deep learning methods have emerged as preferable alternatives. Nevertheless, inverting multiple elastic parameters using neural networks individually is computationally intensive and can lead to overfitting due to a shortage of labeled data in field applications. Multi-task learning can be employed to invert elastic parameters simultaneously. In this work, a hybrid network that leverages the fully convolutional residual network (FCRN) and the gated recurrent unit network (GRU) is designed for the simultaneous inversion of P-wave velocity and density from post-stack seismic data. The FCRN efficiently extracts local information from seismic data, while the GRU captures global dependency over time. To further improve the horizontal continuity and inversion stability, we use a multi-trace to single-trace (M2S) inversion strategy. Consequently, we name our proposed method the M2S multi-task FCRN and GRU hybrid network (M2S-MFCRGRU). Through anti-noise experiments and blind well tests, M2S-MFCRGRU exhibits superior anti-noise performance and generalization ability. Comprehensive experimental inversion results also showcase the excellent lateral continuity, vertical resolution, and stability of the M2S-MFCRGRU inversion results.

1. Introduction

Seismic inversion refers to the process of inferring various physical property parameters of rocks using seismic and well log data [1]. Specifically, seismic elastic parameter inversion aims to deduce elastic parameters, such as P-wave velocity (Vp) and density, which are two of the most important subsurface properties. Seismic inversion plays a vital role in various applications, including oil and gas exploration, mineral resource prospecting, and groundwater resource management [2,3,4,5].
In recent years, deep learning (DL) methods, known for their efficiency, flexibility, accuracy, and adaptability, have become integral parts of geophysical exploration workflows [6]. These data-driven methods have been successfully applied in various tasks, such as seismic resolution enhancement [7], seismic lithology prediction [8], reservoir characterization [9], seismic fault detection [10,11], seismic data reconstruction [12,13], denoising [14,15], and seismic inversion [16,17,18]. In practice, the inversion problem is characterized by nonlinearity and multiple solutions [19]. Traditional inversion methods, often known as model-driven methods, heavily rely on prior knowledge and physical models that link seismic data to model parameters [20,21,22]. Furthermore, traditional approaches require appropriate initial models to avoid being trapped in local optima. In contrast, DL-based methods surpass traditional approaches by exploiting their capacity to learn the intricate nonlinear relationship between seismic data and elastic parameters, resulting in improved performance and accuracy [23]. The learning ability of artificial neural networks (ANNs) is limited by their shallow network structures. Deep neural networks (DNNs), with more complex structures, can learn complex mapping relationships better and overcome this deficiency. Convolutional neural networks (CNNs) have emerged as powerful tools for seismic inversion due to their exceptional capability to extract features from data. Biswas [16] developed a semi-supervised inversion model utilizing a two-layer CNN and a forward model to invert post-stack seismic data. Similarly, Das [17] applied fully convolutional networks (FCNs) with two convolutional layers to invert impedance with satisfactory results.
Building on the concept of FCNs, Wu [24] proposed the fully convolutional residual network (FCRN), which combines the flexibility and simplicity of FCNs with the advantages of residual blocks. Experimental results strongly support the effectiveness of FCRN in achieving high inversion accuracy and noise immunity.
A time series is a sequence of values observed over time and arranged in chronological order [25]. A seismic trace is naturally a time series. Based on this observation, recurrent neural networks (RNNs) [26] are used for seismic inversion to model time-related information among seismic traces and elastic parameters. RNNs have a feedback loop that allows them to retain memory and incorporate temporal information. However, RNN models can suffer from vanishing or exploding gradients. To mitigate these drawbacks, long short-term memory (LSTM) networks [27] and gated recurrent unit networks (GRUs) [28] were proposed, and both have been successfully applied to seismic inversion [29,30]. Gao [31] utilized a single-layer LSTM and one fully connected layer to model the correlation between seismic data and the large-scale density parameter. Guo [32] introduced a bidirectional LSTM (BiLSTM) layer into a CNN for P-impedance inversion. On the other hand, GRU demonstrates superiority over LSTM due to fewer parameters and faster convergence. Alfarraj [33] employed a stack of three bidirectional GRU (BiGRU) layers in the timing module to extract low-frequency information from seismic data and ensure smooth network output. Wang [34] proposed a deep convolutional GRU (DCGRU) architecture that combines CNN and GRU. The CNN component is responsible for precisely identifying and preserving the complex relationship between S-wave velocity and seismic data, while the GRU component extracts critical temporal characteristics of the data.
However, seismic inversion still faces two challenges. The first is the time-intensive process of inverting multiple elastic parameters individually. The second pertains to the limited labeled data available in supervised learning approaches, which can potentially result in network overfitting. Multi-task learning has been successful in various domains, including natural language processing [35], speech recognition [36], and computer vision [37]. Multi-task learning aims to empower the network to concurrently learn multiple tasks, enhance the generalization ability by leveraging diverse data from each task to extract more reliable features [38], improve the performances of individual tasks, and mitigate overfitting. Due to the interconnection between seismic data and elastic parameters [39], it is advantageous to integrate multi-task learning into the multi-parameter inversion of seismic data [40,41,42,43,44]. In the multi-task network, multiple tasks utilize a shared representation simultaneously. There are two common approaches used for sharing parameters of hidden layers in multi-task learning [45]. The first approach is hard parameter sharing, where hidden layers are shared among all tasks while individual task-specific output layers are retained. The second approach is soft parameter sharing, where different tasks operate with independent models and parameters, but constraints are imposed on the model parameters across tasks. In this work, our multi-tasking model utilizes hard parameter sharing.
In addition, the results of traditional 1D trace-by-trace inversion methods often contain lateral geological artifacts [46]. Stable inversion results are difficult to guarantee and are easily affected by noise. In this work, we leverage the structural information between adjacent seismic traces by converting multi-trace seismic data to single-trace elastic parameters during elastic parameter inversion.
Based on the multi-trace to single-trace (M2S) strategy, we design an M2S multi-task FCRN and GRU hybrid network (M2S-MFCRGRU) for simultaneously inverting seismic elastic parameters, specifically Vp and density. M2S-MFCRGRU combines the capabilities of FCRN and GRU to learn data features across various scales. Moreover, the multi-task framework enhances inversion efficiency and takes advantage of the correlations between different tasks to improve the stability of inversion. At the same time, the M2S strategy ensures that the inversion results maintain structural horizontal continuity, enhances the inversion stability and reliability, and improves the anti-noise performance. As the proposed approach is supervised, the limited labeled data are expanded through data augmentation, and transfer learning is invoked to enhance the inversion accuracy in field data applications. Data augmentation artificially expands the labeled dataset, while transfer learning facilitates the transmission of knowledge from a pre-trained model to the specific seismic inversion task. The major contributions of the proposed method can be summarized as follows:
1. The proposed M2S-MFCRGRU method combines the strengths of CNNs and RNNs for the accurate and reliable estimation of elastic parameters. The integration of FCRN and GRU allows capturing both local features and global temporal information, enhancing the inversion results. The multi-task framework enables the utilization of interconnections among the elastic parameters, resulting in an efficient and reliable inversion process.
2. As a supervised deep learning method, the M2S training strategy is employed to optimize the utilization of reliable seismic data while adhering to the structural constraints of seismic data. Given the scarcity of labeled field seismic data, data augmentation and transfer learning techniques are utilized to enhance the reliability of the inversion results despite the limited label quantity.
3. Extensive experiments on synthetic data and field data, including anti-noise experiments and blind well tests, have consistently demonstrated the superior inversion performance of the proposed method.
The subsequent sections of this paper are structured in the following manner. In Section 2, the M2S-MFCRGRU method is described in detail. In Section 3, comprehensive experimental comparison results on both synthetic and field data are presented to validate the inversion performance of the proposed method. Section 4 presents the discussion, while Section 5 provides a summary.

2. Methodology

2.1. The M2S Multi-Task FCRN and GRU Hybrid Network (M2S-MFCRGRU) Framework

Since the efficient deep learning architecture FCRN focuses on local information extraction, incorporating GRU into FCRN allows the hybrid network to capture time-related information more effectively. The framework of the proposed M2S-MFCRGRU is presented in Figure 1. Through the adoption of hard parameter sharing, the single-task model can be extended to the multi-task model. The M2S-MFCRGRU network is illustrated in Figure 1a. In contrast to the traditional 1D trace-by-trace inversion method, the M2S training strategy implicitly exploits the spatial correlation between adjacent seismic data traces. Specifically, the M2S-MFCRGRU network takes 2k + 1 seismic data traces as input and generates predictions for Vp and density for the middle trace. In this work, the value of k is set to 2.
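The M2S windowing can be sketched as follows. This is an illustrative implementation (the function and variable names are our own, not the authors'), assuming the seismic section is stored as a 2D array of traces:

```python
import numpy as np

def extract_m2s_samples(seismic, k=2):
    """Slide a (2k+1)-trace window over a 2D seismic section.

    seismic: array of shape (n_traces, n_samples). Each input sample
    stacks 2k+1 adjacent traces, and the network target is the
    elastic-parameter trace at the window centre.
    """
    n_traces, n_samples = seismic.shape
    inputs, centres = [], []
    for i in range(k, n_traces - k):
        inputs.append(seismic[i - k:i + k + 1])  # (2k+1, n_samples)
        centres.append(i)                        # index of the middle trace
    return np.stack(inputs), centres

# toy section: 10 traces, 16 time samples each
section = np.random.randn(10, 16)
x, idx = extract_m2s_samples(section, k=2)       # x: (6, 5, 16)
```

Each window of five traces becomes one training input whose label is the Vp (or density) trace at the centre index.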
The M2S-MFCRGRU network consists of two parts: the Encoder section and the Decoder section for the different tasks. The Encoder section includes a convolutional layer with 16 kernels of size 299 × 3, which effectively captures information from the seismic data. It also includes three stacked residual blocks, as depicted in Figure 1b. Each residual block consists of two convolutional layers: the first layer uses 16 kernels of size 299 × 3 and the second features 16 kernels of size 3 × 3; these two convolution kernels extract local information at different scales. Each task in the Decoder section has an identical structure but performs a different task. Each task branch comprises a 3 × 3 convolutional layer with 16 kernels, followed by a GRU network and a 3 × 3 convolutional layer with 1 kernel.
In addition, batch normalization (BN) is applied after each convolutional layer, except for the last one, to improve the generalization ability and reduce training time. The rectified linear unit (ReLU) activation function is employed to enhance the nonlinearity to capture the complex patterns. The residual block further reduces computational complexity and the number of parameters by employing skip connections, while also mitigating the potential issue of gradient dispersion.
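A minimal PyTorch sketch of this hard-parameter-sharing architecture is given below. It is not the authors' exact implementation: the channel counts, the plain size-3 1D kernels, and the treatment of the 2k + 1 input traces as input channels are simplifying assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two 1D convolutions with a skip connection."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))

class TaskHead(nn.Module):
    """Decoder branch for one task: conv -> GRU -> conv (1 output channel)."""
    def __init__(self, ch=16):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(ch, ch, 3, padding=1),
                                  nn.BatchNorm1d(ch), nn.ReLU())
        self.gru = nn.GRU(ch, ch, batch_first=True)
        self.out = nn.Conv1d(ch, 1, 3, padding=1)

    def forward(self, x):
        x = self.conv(x)
        x, _ = self.gru(x.transpose(1, 2))   # GRU runs along the time axis
        return self.out(x.transpose(1, 2))

class M2SMultiTaskNet(nn.Module):
    """Hard parameter sharing: one encoder, one head per elastic parameter."""
    def __init__(self, k=2, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2 * k + 1, ch, 3, padding=1),
            nn.BatchNorm1d(ch), nn.ReLU(),
            ResBlock(ch), ResBlock(ch), ResBlock(ch),
        )
        self.vp_head, self.den_head = TaskHead(ch), TaskHead(ch)

    def forward(self, x):                    # x: (batch, 2k+1, time)
        z = self.encoder(x)
        return self.vp_head(z), self.den_head(z)

net = M2SMultiTaskNet(k=2)
vp, den = net(torch.randn(4, 5, 128))        # both heads share the encoder
```

Both task heads read the same encoder features, which is exactly the hard parameter sharing described above.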

2.2. The GRU Component

The GRU component helps the M2S-MFCRGRU capture the global time-related information efficiently. Specifically, GRU mainly employs the update gate and the reset gate to selectively update and forget information from the previous hidden state, while also incorporating new input information into the current hidden state. The operation process of GRU is as follows:
$$\begin{aligned}
z_t &= \mathrm{sigmoid}\left(W_z x_t + U_z h_{t-1} + b_z\right) \\
r_t &= \mathrm{sigmoid}\left(W_r x_t + U_r h_{t-1} + b_r\right) \\
\tilde{h}_t &= \tanh\left(W_h x_t + U_h \left(r_t \odot h_{t-1}\right) + b_h\right) \\
h_t &= z_t \odot h_{t-1} + \left(1 - z_t\right) \odot \tilde{h}_t
\end{aligned}$$
where $z_t$, $r_t$, $\tilde{h}_t$, and $h_t$ represent the update gate, the reset gate, the candidate state, and the current hidden state, respectively. $x_t$ represents the $t$-th element of the input sequence and $h_{t-1}$ is the previous hidden state. $W_z$, $U_z$, $W_r$, $U_r$, $W_h$, and $U_h$ are the weights; $b_z$, $b_r$, and $b_h$ are the corresponding biases. The operator $\odot$ denotes element-wise multiplication.
To be more specific, $z_t$ and $r_t$ take values between 0 and 1. Higher values of $z_t$ indicate a higher probability of retaining $h_{t-1}$. As $r_t$ approaches 1, $\tilde{h}_t$ depends more strongly on $h_{t-1}$. This selective gating of the information flow enables GRU to capture long-term dependencies in sequence data by selectively discarding information from the previous time step and updating the essential information for the current time step.
Figure 2 shows the structure of the GRU component. In the figure, 1− represents using a matrix with all 1 elements for subtraction operations; + and × represent addition and element-wise multiplication, respectively.
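A single GRU time step following the equations above can be written directly in NumPy. The weight shapes and random initialization below are illustrative only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU time step.

    W, U, b are dicts holding the weights and biases for the update
    gate ('z'), reset gate ('r'), and candidate state ('h').
    """
    z = sigmoid(W['z'] @ x_t + U['z'] @ h_prev + b['z'])        # update gate
    r = sigmoid(W['r'] @ x_t + U['r'] @ h_prev + b['r'])        # reset gate
    h_cand = np.tanh(W['h'] @ x_t + U['h'] @ (r * h_prev) + b['h'])
    return z * h_prev + (1.0 - z) * h_cand                      # new hidden state

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W = {g: rng.standard_normal((d_h, d_in)) for g in 'zrh'}
U = {g: rng.standard_normal((d_h, d_h)) for g in 'zrh'}
b = {g: np.zeros(d_h) for g in 'zrh'}
h = gru_step(rng.standard_normal(d_in), np.zeros(d_h), W, U, b)
```

With a zero initial hidden state, the output is simply $(1 - z_t) \odot \tilde{h}_t$, so every component lies strictly inside $(-1, 1)$.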

2.3. Multi-Task Loss Function

The mean square error (MSE) loss function quantifies the average squared difference between the predicted and label values. For the two tasks of inverting P-wave velocity and density, the loss function is:
$$\mathrm{loss}_{multi} = \mathrm{loss}_{vp} + \mathrm{loss}_{d} = \left\| v_p - \hat{v}_p \right\|_2^2 + \left\| d - \hat{d} \right\|_2^2$$
where $v_p$ and $d$ are the labels of P-wave velocity and density, and $\hat{v}_p$ and $\hat{d}$ are the corresponding predictions. The loss function $\mathrm{loss}_{multi}$ is the sum of the loss of the P-wave velocity inversion task ($\mathrm{loss}_{vp}$) and that of the density inversion task ($\mathrm{loss}_{d}$).
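The multi-task loss translates directly into a few lines of PyTorch. The sketch below uses the mean squared error per task rather than the raw squared norm, a common implementation choice that only rescales the loss:

```python
import torch

def multi_task_loss(vp_pred, vp_true, d_pred, d_true):
    """Sum of the per-task MSE losses for Vp and density inversion."""
    loss_vp = torch.mean((vp_true - vp_pred) ** 2)
    loss_d = torch.mean((d_true - d_pred) ** 2)
    return loss_vp + loss_d

# toy check: Vp is off by 1 everywhere, density is exact
vp_pred, vp_true = torch.zeros(8, 128), torch.ones(8, 128)
d_pred, d_true = torch.ones(8, 128), torch.ones(8, 128)
loss = multi_task_loss(vp_pred, vp_true, d_pred, d_true)
```

Because the two task losses are simply summed, gradients from both heads flow back into the shared encoder during training.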

2.4. Quantitative Indicators

This paper uses the MSE (Equation (3)), the normalized root mean square error (NRMSE) (Equation (4)), and the Pearson correlation coefficient (PCC) (Equation (5)) to evaluate the inversion results. The PCC measures the linear relationship between the prediction and the label data; a PCC closer to 1 indicates a stronger linear correlation.
$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \tilde{y}_i \right)^2$$

$$\mathrm{NRMSE} = \frac{\sqrt{\dfrac{1}{n} \sum_{i=1}^{n} \left( y_i - \tilde{y}_i \right)^2}}{y_{\max} - y_{\min}}$$

$$\mathrm{PCC} = \frac{\sum_{i=1}^{n} \left( y_i - \bar{y} \right) \left( \tilde{y}_i - \bar{\tilde{y}} \right)}{\sqrt{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2} \sqrt{\sum_{i=1}^{n} \left( \tilde{y}_i - \bar{\tilde{y}} \right)^2}}$$
where $n$ is the number of time samples; $y_i$ and $\tilde{y}_i$ denote the $i$-th label value and output value, respectively; $y_{\max}$ and $y_{\min}$ are the maximum and minimum label values, respectively; $\bar{y}$ and $\bar{\tilde{y}}$ are the averages of the label values and the output values, respectively.
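The three metrics can be implemented in a few lines of NumPy. This is a straightforward sketch, not the authors' evaluation code:

```python
import numpy as np

def mse(y, y_hat):
    """Mean square error, Equation (3)."""
    return np.mean((y - y_hat) ** 2)

def nrmse(y, y_hat):
    """Root mean square error normalized by the label range, Equation (4)."""
    return np.sqrt(np.mean((y - y_hat) ** 2)) / (y.max() - y.min())

def pcc(y, y_hat):
    """Pearson correlation coefficient, Equation (5)."""
    yc, hc = y - y.mean(), y_hat - y_hat.mean()
    return np.sum(yc * hc) / np.sqrt(np.sum(yc ** 2) * np.sum(hc ** 2))

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.0, 2.0, 3.0, 5.0])
```

Note that the PCC is invariant to affine rescaling of the prediction, which is why it is paired with the scale-sensitive MSE and NRMSE.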

2.5. Training Hyperparameters

For network training, the parameters are shown in Table 1.
We use PyTorch to construct the network and speed up training with a GPU. The number of training epochs is adjusted according to the dataset, and an early stopping strategy is used to improve the generalization performance of the network. The other details of the training stage are presented in the Numerical Experiments section.
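A generic early stopping loop of the kind described above might look as follows in PyTorch. The patience value, the toy linear model, and the helper names are illustrative assumptions, not details from the paper:

```python
import copy
import torch
import torch.nn as nn

def train_with_early_stopping(model, train_step, val_loss_fn,
                              max_epochs=1000, patience=30):
    """Train until the validation loss stops improving.

    train_step() runs one epoch of optimisation; val_loss_fn() returns the
    current validation loss. Training stops after `patience` epochs without
    improvement, and the best weights seen so far are restored.
    """
    best_loss, best_state, wait = float('inf'), None, 0
    for epoch in range(max_epochs):
        train_step()
        val_loss = val_loss_fn()
        if val_loss < best_loss - 1e-8:
            best_loss, wait = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            wait += 1
            if wait >= patience:
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return best_loss

# toy demonstration: fit y = 2x with a single linear layer
model = nn.Linear(1, 1, bias=False)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
x = torch.linspace(-1, 1, 32).unsqueeze(1)
y = 2.0 * x

def step():
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

best = train_with_early_stopping(
    model, step,
    lambda: nn.functional.mse_loss(model(x), y).item(),
    max_epochs=500, patience=30)
```

In practice the validation loss would be computed on held-out traces rather than the training data used in this toy example.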

3. Numerical Experiments

The numerical experiments conducted in this work involve two datasets: the Marmousi2 synthetic dataset and a field dataset. For a fair comparison, the FCRN [24], multi-task FCRN (MFCRN), multi-task FCRN and GRU hybrid network (MFCRGRU), and M2S-MFCRGRU are trained using the same dataset for inverting Vp and density. As FCRN is a single-task network, it is used to independently invert Vp and density. It should be noted that the inputs for FCRN, MFCRN, and MFCRGRU are single-trace seismic data, whereas the inputs for M2S-MFCRGRU are multi-trace seismic data. Additionally, all these methods are trained under identical environmental conditions.

3.1. Synthetic Example

The Marmousi2 model has a complex stratigraphic structure, including faults, interfaces, and strong lateral and vertical variations, making it a representative geophysical model [47]. Synthetic seismic data are generated using a zero-phase Ricker wavelet with a main frequency of 30 Hz. The synthetic seismic data, Vp, and density are shown in Figure 3. The parameters of the synthetic data are set as follows: the trace number is 13,601, the sample number per trace is 2800, and the time sampling interval is 1 ms. For the experiments, 101 data pairs of equidistantly sampled traces are utilized for training, 34 pairs for validation, and the complete profile for testing. The training epoch is set to 1000.
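The synthetic data generation described here (a zero-phase Ricker wavelet convolved with the reflectivity derived from Vp and density) can be sketched in NumPy. The wavelet length and the toy two-layer model below are our own choices for illustration:

```python
import numpy as np

def ricker(f=30.0, dt=0.001, length=0.128):
    """Zero-phase Ricker wavelet with main frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synth_trace(vp, rho, f=30.0, dt=0.001):
    """Convolutional forward model for one post-stack trace."""
    imp = vp * rho                                            # acoustic impedance
    refl = np.zeros_like(imp)
    refl[1:] = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])    # reflectivity series
    return np.convolve(refl, ricker(f, dt), mode='same')

# two-layer toy model: a single impedance contrast at sample 100
vp = np.concatenate([np.full(100, 2000.0), np.full(100, 3000.0)])
rho = np.concatenate([np.full(100, 2.0), np.full(100, 2.3)])
trace = synth_trace(vp, rho)
```

The resulting trace shows one zero-phase wavelet centred on the layer boundary, which is the signal the inversion network learns to map back to Vp and density.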
The inversion results of Vp and density are shown in Figure 4. For both Vp and density, the red rectangles in the inversion profiles show that the inversion details of FCRN and MFCRN are insufficient, with obvious hanging lines and rough structures, while the layer interfaces of MFCRGRU and M2S-MFCRGRU are sharp, with better horizontal continuity. With the M2S strategy, the geological structures of the inversion results are clearer and closer to those of the labeled data. Moreover, when inverting the gas-charged sand channel indicated by the arrows, the M2S-MFCRGRU results accurately capture the important details. The entire inversion profiles of M2S-MFCRGRU are very close to the label data. Therefore, M2S-MFCRGRU can deal with complex stratigraphic structures, such as shallow horizontal layers and steep faults, achieving the best inversion performance.
To assess the inversion performance of the proposed method, quantitative evaluations are conducted for all the results. The evaluation results, using the MSE and PCC metrics, are summarized in Table 2. The computed results show that the networks with the GRU component have lower MSE and higher PCC, which means GRU can significantly improve the network's learning ability; adding GRU to FCRN is therefore reasonable and beneficial. For density inversion, M2S-MFCRGRU has the smallest MSE (0.001124) and the highest PCC (0.9973). For Vp inversion, the difference in quantitative evaluations between MFCRGRU and M2S-MFCRGRU is not significant for the noise-free data. In particular, although MFCRGRU has the best MSE (0.010300) and PCC (0.9949), its MSE is only 0.000417 smaller than that of M2S-MFCRGRU (0.010717) and its PCC is only 0.0004 higher than that of M2S-MFCRGRU (0.9945). In view of the lateral continuity of the inversion profile, this slight loss in accuracy is acceptable.
To further compare the inversion results of elastic parameters, we extracted the 8105th trace to showcase the differences in the outcomes produced by the aforementioned four networks relative to the labeled values (Figure 5). Figure 5a,b show the comparison between the Vp and density, respectively. The overall trend of the inversion results of M2S-MFCRGRU (the red line) aligns more closely with the label data (the black line) compared to the results from FCRN (the blue line), MFCRN (the green line), and MFCRGRU (the pink line). In addition, we calculate the MSE between the label data and the inversion results of FCRN, MFCRN, MFCRGRU, and M2S-MFCRGRU on the 8105th trace. The MSE values of the inverted Vp results are 0.045136 (FCRN), 0.119093 (MFCRN), 0.034037 (MFCRGRU), and 0.017040 (M2S-MFCRGRU), respectively. The MSE values of the inverted density results are 0.005289 (FCRN), 0.008109 (MFCRN), 0.002761 (MFCRGRU), and 0.002460 (M2S-MFCRGRU), respectively.
Noise is inevitable in field seismic data. Therefore, different levels of Gaussian white noise are added to the synthetic seismic data to evaluate the anti-noise performance of M2S-MFCRGRU. Specifically, Gaussian white noise at signal-to-noise ratios (SNRs) of 5 dB, 10 dB, 15 dB, and 20 dB is added to the clean seismic data. The MSEs between the inversion results and the label data are reported in Table 3. M2S-MFCRGRU has the lowest MSE in all cases, and its advantage grows as the SNR decreases. Therefore, although there is only a marginal difference in inversion performance between M2S-MFCRGRU and MFCRGRU on clean seismic data, M2S-MFCRGRU has superior anti-noise performance: by using spatial knowledge from the seismic data, it can effectively invert elastic parameters even with significant noise. We also find that FCRN has the second smallest MSE, right after the proposed M2S-MFCRGRU, in the high-noise scenario, which highlights the importance of the GRU and M2S ingredients for the multi-task inversion network.
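Adding Gaussian white noise at a prescribed SNR, as in this experiment, can be done as follows. The helper name and the sinusoidal test signal are illustrative:

```python
import numpy as np

def add_noise_snr(signal, snr_db, rng=None):
    """Add Gaussian white noise at a target signal-to-noise ratio (dB)."""
    if rng is None:
        rng = np.random.default_rng()
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))   # SNR = 10*log10(Ps/Pn)
    noise = rng.normal(0.0, np.sqrt(p_noise), signal.shape)
    return signal + noise

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 20 * np.pi, 10000))
noisy = add_noise_snr(clean, snr_db=10.0, rng=rng)

# the realised SNR should match the requested 10 dB closely
measured_snr = 10.0 * np.log10(np.mean(clean ** 2) /
                               np.mean((noisy - clean) ** 2))
```

The same routine, applied trace by trace to the synthetic section, reproduces the 5-20 dB test conditions of Table 3.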
Here, to present a visual comparison, the results obtained after adding Gaussian white noise at 10 dB SNR are presented. The inversion result profiles are shown in Figure 6. Using the Vp inversion results as an example, the outcomes of the M2S-MFCRGRU model demonstrate fewer hanging-line artifacts, enhanced lateral continuity, and superior vertical resolution compared to the results obtained from the other three models. Comparing the inversion profiles with the label data, M2S-MFCRGRU deals with the Gaussian white noise well and maintains the original stratigraphic information of the Marmousi2 model. Its inversion results have the richest details, the highest resolution, and the fewest hanging lines.

3.2. Field Example

We test the methods on field seismic data for elastic parameter inversion to further verify the superiority and generality of M2S-MFCRGRU. The field seismic dataset has dimensions of 1501 (time) × 219 (crossline) × 219 (inline), with a time sampling interval of 2 ms. The target layer is a turbidite sedimentary layer, which ranges from 2244 to 2494 ms with 125 time-sampling points.
There are only six available well logs in the field seismic dataset, and the log data include the corresponding Vp and density. Figure 7 shows the field seismic data. Figure 7a is a time slice, which shows the locations of the crosswell profile and six wells (w1–w6). The seismic data of the crosswell profile and the locations of the six wells are shown in Figure 7b. In order to address the scarcity of label field data and achieve comprehensive training of the network, we utilize data augmentation and transfer learning in the training stage. By incorporating these strategies, accurate and reliable inversion results are obtained.
In the pre-training stage, we employ a locally weighted method to enhance the training dataset by interpolating the logging data. This process yields interpolated volumes of Vp and density. Subsequently, we generate synthetic seismic data based on the convolution model, resulting in a total of 12,100 traces. Then, along the crosswell profile, we select 185 traces as the pre-training dataset. Figure 8 shows the interpolated Vp, interpolated density, and synthetic seismic data of the crosswell profile. Excluding the 6 wells, we utilize 143 traces (80% of the dataset) to train the network, while the remaining 36 traces (20% of the dataset) are dedicated to validation. The pre-training epoch is set to 2500.
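The paper does not fully specify the locally weighted interpolation scheme. As one plausible stand-in (an assumption, not the authors' method), an inverse-distance-weighted scheme could interpolate the well logs onto trace locations:

```python
import numpy as np

def idw_interpolate(well_xy, well_logs, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of well logs onto traces.

    well_xy: (n_wells, 2) well coordinates; well_logs: (n_wells, n_t)
    log curves; grid_xy: (n_traces, 2) target trace locations.
    Returns an (n_traces, n_t) array of interpolated logs.
    """
    d = np.linalg.norm(grid_xy[:, None, :] - well_xy[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-6) ** power        # clamp to avoid divide-by-zero
    w /= w.sum(axis=1, keepdims=True)             # normalise weights per trace
    return w @ well_logs

# two wells with constant Vp logs, interpolated onto two target locations
wells = np.array([[0.0, 0.0], [10.0, 0.0]])
logs = np.array([[2000.0] * 4, [3000.0] * 4])
grid = np.array([[5.0, 0.0], [0.0, 0.0]])
vp_interp = idw_interpolate(wells, logs, grid)
```

At the midpoint both wells contribute equally (2500 m/s), while at a well location the interpolation collapses to that well's log, which matches the intuition that near-well interpolated traces are the most reliable labels.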
In the fine-tuning stage, we select twenty-four seismic data traces from the vicinity of each of the six wells to create the fine-tuning dataset. This dataset comprises a total of 144 seismic data traces. Because the interpolated data near the wells are highly likely to closely match the label data, this approach ensures accurate and reliable results. The locations of the selected seismic traces are depicted in Figure 9a. The interpolated seismic data of the crosswell profile are shown in Figure 9b. It should be noted that the six wells themselves are deliberately excluded from the dataset to ensure a fair evaluation and avoid potential data leakage. Subsequently, we partition the dataset into two subsets, one for training (115 traces) and the other for validation (29 traces). The pre-trained model is used as the initial model for fine-tuning; this efficient initialization expedites network convergence and effectively leverages the knowledge of similar features. The fine-tuning epoch is set to 2000. The overall fine-tuning procedure with transfer learning for elastic parameter inversion is summarized in Algorithm 1.
Algorithm 1 The fine-tuning procedure with transfer learning for elastic parameter inversion.
Input: selected interpolated seismic data, label Vp, label density, training epochs N, early stopping patience P
1: Initial model: the pre-trained model
2: for epoch = 1 : N do
3:     Train and update the model parameters with the early stopping criterion
4:     Save the best network parameters
5: end for
6: Use the saved network to predict the elastic parameters
Output: inverted Vp and density.
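The core of Algorithm 1 (initializing from pre-trained weights and keeping the best parameters during fine-tuning) can be sketched in PyTorch with a toy model. All sizes, data, and the stand-in network here are illustrative assumptions:

```python
import io
import torch
import torch.nn as nn

def make_model():
    """A tiny stand-in for the inversion network (illustrative only)."""
    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# "pre-training" stage: obtain a source model (its training is omitted) and save it
source = make_model()
buf = io.BytesIO()
torch.save(source.state_dict(), buf)

# fine-tuning stage: step 1 of Algorithm 1, initialise from pre-trained weights
buf.seek(0)
target = make_model()
target.load_state_dict(torch.load(buf))
transferred = all(torch.equal(a, b) for a, b in
                  zip(source.state_dict().values(), target.state_dict().values()))

opt = torch.optim.Adam(target.parameters(), lr=1e-3)
x, y = torch.randn(16, 4), torch.randn(16, 2)    # stand-in near-well dataset
best_loss, best_state = float('inf'), None
for epoch in range(50):                           # "for epoch = 1:N do"
    opt.zero_grad()
    loss = nn.functional.mse_loss(target(x), y)
    loss.backward()
    opt.step()
    if loss.item() < best_loss:                   # keep the best parameters
        best_loss = loss.item()
        best_state = {k: v.clone() for k, v in target.state_dict().items()}
target.load_state_dict(best_state)                # use the saved best network
```

In the real workflow the validation loss on held-out traces, together with the patience P, would decide when to stop, rather than a fixed epoch count.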
To validate the proposed model, we compare the predicted Vp and density of FCRN, MFCRN, MFCRGRU, and M2S-MFCRGRU across the entire crosswell profile and the six wells. Figure 10 presents the visual results of the Vp inversion alongside the observed well data. The inversion results predicted by FCRN (Figure 10a) clearly contain non-geological structures. In general, the inversion result of M2S-MFCRGRU (Figure 10d) is stable and geologically plausible compared to the other three models. Specifically, it exhibits superior horizontal continuity in the regions marked by red rectangles and red arrows and shows the clearest stratigraphic information in the yellow circles. The visual comparison of the density inversions is shown in Figure 11. The density inversion profiles predicted by FCRN (Figure 11a) and MFCRN (Figure 11b) display unreasonable inversion details, as indicated by the black arrows. In contrast, the density inversion achieved by M2S-MFCRGRU (Figure 11d) exhibits the clearest horizontal continuity and is visually superior in the regions marked by red rectangles and white arrows. Furthermore, in the area highlighted by black circles, M2S-MFCRGRU reproduces inversion results that are more consistent with the stratigraphic structure displayed by the seismic data. Overall, Figure 10 and Figure 11 show that the Vp and density profiles predicted by M2S-MFCRGRU exhibit better transverse continuity and longitudinal resolution than the other three networks.
Table 4 shows the NRMSEs and PCCs at the six wells. On the one hand, the comparison of MFCRN and MFCRGRU illustrates that the GRU component can effectively improve the inversion accuracy. On the other hand, although the numerical evaluation of the M2S-MFCRGRU inversion results is not the best, the overall crosswell profile shows that M2S-MFCRGRU has the best generalization ability.
To further verify the generalizability of M2S-MFCRGRU, a blind well test is performed. In this test, we choose w5 as the blind well and use only the other five wells (w1 to w4, and w6) to augment the training and validation dataset. We compare the inversion accuracy of the networks through a single-trace comparison at the blind well, the predicted profiles, and the two quantitative metrics.
The Vp and density predictions at w5 are presented in Figure 12a,b, respectively. The overall trend of the well-logging data shows that the M2S-MFCRGRU inversion results align more closely with the labeled data. Moreover, we invert the seismic data of the inline profile where w5 is located, as displayed in Figure 13. Figure 14 and Figure 15 show the inline profile predictions of Vp and density, respectively. The Vp inversion profile (Figure 14d) and density inversion profile (Figure 15d) of M2S-MFCRGRU have the most complete stratigraphic structures with the best lateral continuity.
Table 5 lists the NRMSEs and PCCs of the Vp and density inversion results predicted by the four methods at blind well w5. For both Vp and density, M2S-MFCRGRU has the lowest NRMSE and the highest PCC. Thus, the generalization superiority and inversion effectiveness of M2S-MFCRGRU are further demonstrated.

4. Discussion

Nowadays, DL methods have gained considerable traction in seismic applications. CNNs are particularly popular for seismic inversion. However, using only the convolutional layer in the network to learn the complex mapping relationship between seismic data and elastic parameters leads to a lack of temporal information in the inversion results. Furthermore, traditional 1D trace-by-trace inversion methods lack spatial information and lateral continuity. To address these issues, the proposed method, M2S-MFCRGRU, combines the GRU component and M2S strategy to capture both temporal and spatial information. The experiments in Section 3 demonstrate the superior inversion performance of M2S-MFCRGRU. In more detail, the inversion results exhibit superior horizontal continuity, vertical resolution, and alignment with logging data. Furthermore, despite a longer training time compared to the other three methods, the proposed method demonstrates better noise resistance and generalization capability.
While the proposed method M2S-MFCRGRU achieves impressive inversion performance, there are several aspects that can be further explored and improved upon. Firstly, besides adding the GRU component to the network, other components could be incorporated to extract features of different scales. For example, the transformer module can extract global features, overcoming the limitations of receptive field sizes in CNNs. Therefore, different components can be explored to enhance inversion performance. Secondly, M2S-MFCRGRU is a supervised method that relies on large quantities of high-quality labeled data. Due to limitations in label data, exploring semi-supervised or unsupervised methods for seismic inversion will be beneficial. In this regard, leveraging physical constraints for seismic inversion warrants further investigation. Thirdly, although the M2S strategy shows advantages over the traditional 1D trace-by-trace method, it does not represent the typical 2D inversion approach. To tackle the 2D inversion problem, exploring a multi-trace to multi-trace strategy could be fruitful. Lastly, pre-stack seismic inversion can generate more comprehensive elastic parameters, aiding in distinguishing between lithology and oil–gas possibilities during fieldwork and providing a richer and more accurate basis for drilling. However, this work focuses on inverting post-stack seismic data. Consequently, improving the network topology to invert pre-stack seismic data will be a future goal.

5. Conclusions

In this work, the proposed method, M2S-MFCRGRU, is designed to simultaneously invert Vp and density by leveraging the advantages of the hybrid network and multi-task learning. The hybrid network captures both local information and time-related information, while multi-task learning effectively exploits the correlation between elastic parameters. Furthermore, the M2S strategy incorporates spatial information from seismic data, thereby enhancing the stability and reliability of the inversion results. Moreover, we utilize data augmentation and transfer learning to overcome the challenge of insufficient field data labels and improve the generalization capability.
Experimental results on both the synthetic Marmousi2 model and field data demonstrate that the M2S-MFCRGRU method exhibits superior horizontal continuity and vertical resolution. When compared against well data in the field example, the proposed method also shows strong consistency. In addition, the anti-noise experiment reveals that M2S-MFCRGRU has the strongest noise immunity, while the blind well test confirms its highest generalization capability. Consequently, despite the limited availability of labeled data, our proposed M2S-MFCRGRU method yields promising inversion results for elastic parameters.

Author Contributions

Conceptualization, Q.Z., H.R. and B.W.; methodology, Q.Z., H.R. and B.W.; formal analysis, Q.Z. and B.W.; funding acquisition, C.W. and X.Y.; investigation, Q.Z.; data curation, Q.Z.; visualization, B.W.; supervision, C.W., X.Y. and B.W.; writing—original draft preparation, Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Laboratory of Geophysics, PetroChina.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request due to restrictions, e.g., privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FCRN          fully convolutional residual network
GRU           gated recurrent unit network
M2S           multi-trace to single-trace
M2S-MFCRGRU   M2S multi-task FCRN and GRU hybrid network
Vp            P-wave velocity
DL            deep learning
ANNs          artificial neural networks
DNNs          deep neural networks
CNNs          convolutional neural networks
FCNs          fully convolutional neural networks
RNNs          recurrent neural networks
LSTM          long short-term memory
BiLSTM        bidirectional LSTM
BiGRU         bidirectional GRU
DCGRU         deep convolutional GRU
BN            batch normalization
ReLU          rectified linear unit
MSE           mean squared error
NRMSE         normalized root mean squared error
PCC           Pearson correlation coefficient
MFCRN         multi-task FCRN
MFCRGRU       multi-task FCRN and GRU hybrid network
SNR           signal-to-noise ratio

Figure 1. The framework of the M2S multi-task FCRN and GRU hybrid network (M2S-MFCRGRU). (a) The network of M2S-MFCRGRU. (b) The details of the residual block.
Figure 2. The structure of the GRU component.
Figure 3. Marmousi2 model. (a) Synthetic seismic data. (b) Vp. (c) Density.
Figure 4. Inversion results of the Vp and density on the Marmousi2 model without noise. Top row: inverted Vp of FCRN (a), MFCRN (b), MFCRGRU (c), M2S-MFCRGRU (d); Bottom row: inverted density of FCRN (e), MFCRN (f), MFCRGRU (g), M2S-MFCRGRU (h).
Figure 5. A comparison of the inversion results of FCRN (blue line), MFCRN (green line), MFCRGRU (pink line), and M2S-MFCRGRU (red line) in the 8105th trace with label values (black line) on the Marmousi2 model. (a) Vp. (b) Density.
Figure 6. Inversion results of Vp and density on the Marmousi2 model with 10 dB Gaussian white noise. Top row: inverted Vp of FCRN (a), MFCRN (b), MFCRGRU (c), M2S-MFCRGRU (d); bottom row: inverted density of FCRN (e), MFCRN (f), MFCRGRU (g), M2S-MFCRGRU (h).
Figure 7. The field seismic data. (a) A time slice. The pink line is the location of the crosswell profile and the red diamonds are the locations of six wells (w1–w6). (b) The seismic data of the crosswell profile (dash-dot vertical lines indicate the location of wells).
Figure 8. The datasets used in the pre-training stage. (a) Interpolated Vp. (b) Interpolated density. (c) Synthetic seismic data using the convolution model for the crosswell profile.
Figure 9. The datasets used in the fine-tuning stage. (a) Red crosses indicate the locations of the selected seismic traces in the interpolated seismic volume. (b) Interpolated seismic data of the crosswell profile.
Figure 10. Inversion results of Vp on field data. (a) FCRN. (b) MFCRN. (c) MFCRGRU. (d) M2S-MFCRGRU.
Figure 11. Inversion results of density on field data. (a) FCRN. (b) MFCRN. (c) MFCRGRU. (d) M2S-MFCRGRU.
Figure 12. A comparison of the inversion results of FCRN (blue line), MFCRN (green line), MFCRGRU (pink line), and M2S-MFCRGRU (red line) with ground truth (black line) at blind well w5. (a) Vp. (b) Density.
Figure 13. The seismic data of the inline profile where w5 is located.
Figure 14. Inline profile inversion results of Vp on the blind well test. (a) FCRN. (b) MFCRN. (c) MFCRGRU. (d) M2S-MFCRGRU.
Figure 15. Inline profile inversion results of density on the blind well test. (a) FCRN. (b) MFCRN. (c) MFCRGRU. (d) M2S-MFCRGRU.
Table 1. The parameters used in the training stage.
Training Parameter              Value
Learning rate                   0.001
Weight decay rate               1 × 10^-7
Batch size                      12
Optimization algorithm          Adam
Network weight initialization   Kaiming initialization
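To make the optimizer settings in Table 1 concrete, the following numpy sketch performs Adam updates with the listed learning rate and weight decay on a toy quadratic objective. The decoupled form of the weight decay and the toy objective are assumptions for illustration only, not a description of the training code used in this work.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=1e-7):
    """One Adam update (lr and weight_decay match Table 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * theta)
    return theta, m, v

# smoke test: minimize f(theta) = ||theta||^2, whose gradient is 2*theta
theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

The tiny weight decay (1e-7) barely shrinks the parameters per step; its role during training is mild regularization rather than driving the weights toward zero.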
Table 2. MSEs and PCCs of inversion results for different methods on the Marmousi2 model. ↓ means the lower the better and ↑ means the higher the better. The best values are in bold.
Method         MSE↓ (Vp)   PCC↑ (Vp)   MSE↓ (Density)   PCC↑ (Density)
FCRN           0.022604    0.9892      0.002810         0.9945
MFCRN          0.021935    0.9887      0.002595         0.9940
MFCRGRU        0.010300    0.9949      0.001163         0.9972
M2S-MFCRGRU    0.010717    0.9945      0.001124         0.9973
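The MSE and PCC reported in Table 2, and the NRMSE used in the field-data evaluation, can be computed as in the following numpy sketch. Normalizing the RMSE by the range of the reference curve is an assumption for illustration, since the exact normalization is not restated in this section.

```python
import numpy as np

def mse(pred, true):
    """Mean squared error."""
    return np.mean((pred - true) ** 2)

def nrmse(pred, true):
    """RMSE normalized by the range of the reference curve (assumed convention)."""
    return np.sqrt(mse(pred, true)) / (np.max(true) - np.min(true))

def pcc(pred, true):
    """Pearson correlation coefficient."""
    p, t = pred - pred.mean(), true - true.mean()
    return np.sum(p * t) / np.sqrt(np.sum(p ** 2) * np.sum(t ** 2))
```

A perfect prediction gives MSE = 0 and PCC = 1; note that any affine rescaling of the prediction still yields PCC = 1, which is why an error metric such as MSE or NRMSE is reported alongside the correlation.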
Table 3. MSEs of inversion results for different methods at different SNR levels on the Marmousi2 model. ↓ means the lower the better. The best values are in bold.
Parameter   SNR (dB)   FCRN       MFCRN      MFCRGRU    M2S-MFCRGRU
Vp          5          0.095741   0.260494   0.195359   0.046389
            10         0.040110   0.057738   0.050924   0.018357
            15         0.027561   0.030131   0.020300   0.012805
            20         0.024147   0.024260   0.013302   0.011364
Density     5          0.012879   0.018144   0.016199   0.004482
            10         0.003921   0.005268   0.004449   0.001842
            15         0.003092   0.003207   0.002056   0.001326
            20         0.002887   0.002707   0.001440   0.001182
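For reference, Gaussian white noise at a prescribed SNR level, as used in the anti-noise experiment of Table 3, can be generated as in this sketch; the synthetic signal and random seed are illustrative assumptions.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add Gaussian white noise so that 10*log10(P_signal / P_noise) = snr_db."""
    if rng is None:
        rng = np.random.default_rng()
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)   # target noise power
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 20 * np.pi, 4000))  # stand-in for a seismic trace
noisy = add_noise(clean, snr_db=10, rng=rng)
```

Lower SNR in dB means proportionally larger noise power, so the 5 dB rows in Table 3 correspond to noise carrying roughly a third of the signal's power.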
Table 4. NRMSE ↓/PCC ↑ of the inversion results for different methods at the six wells. ↓ means the lower the better and ↑ means the higher the better. The best values are in bold.
Parameter   Well   FCRN              MFCRN             MFCRGRU           M2S-MFCRGRU
Vp          w1     0.066408/0.9419   0.070059/0.9359   0.072230/0.9316   0.077819/0.9259
            w2     0.022538/0.9945   0.029373/0.9897   0.020107/0.9951   0.039119/0.9822
            w3     0.041671/0.9739   0.043983/0.9709   0.037750/0.9784   0.058496/0.9526
            w4     0.007013/0.9995   0.006798/0.9995   0.005852/0.9996   0.044270/0.9811
            w5     0.007271/0.9994   0.006110/0.9995   0.006249/0.9996   0.044753/0.9736
            w6     0.022889/0.9910   0.021399/0.9923   0.021433/0.9921   0.032104/0.9839
Density     w1     0.044346/0.9824   0.045243/0.9824   0.045739/0.9823   0.053312/0.9737
            w2     0.023002/0.9950   0.035206/0.9875   0.023644/0.9940   0.034670/0.9866
            w3     0.037807/0.9872   0.038484/0.9865   0.031067/0.9918   0.043722/0.9826
            w4     0.012595/0.9994   0.010185/0.9994   0.008250/0.9998   0.035916/0.9891
            w5     0.010091/0.9995   0.008326/0.9995   0.009420/0.9997   0.040890/0.9871
            w6     0.024093/0.9947   0.026015/0.9943   0.024861/0.9946   0.036868/0.9875
Table 5. NRMSEs and PCCs of inversion results for different methods at blind well w5. ↓ means the lower the better and ↑ means the higher the better. The best values are in bold.
Method         NRMSE↓ (Vp)   PCC↑ (Vp)   NRMSE↓ (Density)   PCC↑ (Density)
FCRN           0.154176      0.7106      0.250259           0.5180
MFCRN          0.124208      0.7976      0.146003           0.8130
MFCRGRU        0.127864      0.7577      0.128573           0.8816
M2S-MFCRGRU    0.120695      0.8036      0.094228           0.9292
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
