Article

Downscaling Daily Reference Evapotranspiration Using a Super-Resolution Convolutional Transposed Network

Yong Liu, Xiaohui Yan, Wenying Du, Tianqi Zhang, Xiaopeng Bai and Ruichuan Nan
1 Artificial Intelligence Key Laboratory of Sichuan Province, Yibin 643000, China
2 National Engineering Research Center for Geographic Information System, China University of Geosciences, Wuhan 430074, China
3 Department of Water Resources Engineering, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Water 2024, 16(2), 335; https://doi.org/10.3390/w16020335
Submission received: 11 December 2023 / Revised: 8 January 2024 / Accepted: 17 January 2024 / Published: 19 January 2024
(This article belongs to the Special Issue Advances in Hydraulic and Water Resources Research)

Abstract

The current work proposes a novel super-resolution convolutional transposed network (SRCTN) deep learning architecture for downscaling daily climatic variables. The algorithm was established based on a super-resolution convolutional neural network with transposed convolutions. This study designed synthetic experiments to downscale daily reference evapotranspiration (ET0) data, a key indicator of climate change, from low resolutions (2°, 1°, and 0.5°) to a fine resolution (0.25°). The entire time period was divided into two major parts, i.e., training–validation (80%) and testing (20%) periods, and the training–validation period was further divided into training (80%) and validation (20%) parts. In the comparison of the downscaling performance between the SRCTN and quantiles-matching (Q-M) models, the root-mean-squared error (RMSE) values indicated the accuracy of the models. For the SRCTN model, the RMSE values (in mm day−1) were 0.239 for a scaling ratio of 8, 0.077 for a ratio of 4, and 0.015 for a ratio of 2, whereas those for the Q-M method were 0.334, 0.208, and 0.109, respectively. The RMSE values of the SRCTN model were consistently lower than those of the Q-M method across all scaling ratios, indicating that the SRCTN model exhibited better downscaling performance in this evaluation. The results showed that the SRCTN method could reproduce the spatiotemporal distributions and extremes of the testing period very well. The SRCTN model trained in one study area performed remarkably well in a different area via transfer learning without re-training or calibration, and it outperformed the classic downscaling approach. The good performance of the SRCTN algorithm can be primarily attributed to the incorporation of transposed convolutions, which can be partially seen as trainable upsampling operations. Therefore, the proposed SRCTN method is a promising candidate tool for downscaling daily ET0 and can potentially be employed to conduct downscaling operations for other variables.

1. Introduction

Understanding the spatiotemporal patterns and hydrological influences of climate change is essential for ensuring the prudent planning and effective management of water resources [1,2]. Climate data sets originating from general circulation models and numerical weather models often suffer from a scale discrepancy in spatial resolution [3]; downscaling these coarse-resolution data sets to finer resolutions [4,5,6,7,8,9] is an approach designed to enhance their spatial resolution.
Downscaling is a challenging task, partially because meteorological and hydrological variables exhibit significant and complicated variations in both space and time. Downscaling approaches can generally be classified into two categories: dynamical and statistical [6,10]. A dynamical approach typically employs a regional model to provide high-resolution predictions, using coarse-resolution data as boundary conditions [11,12,13]. However, regional climate models are often computationally expensive and subject to inherent model errors, and a calibrated model with specified boundary conditions is only valid for local regions. A statistical downscaling approach provides high-resolution data by first establishing the statistical relationships between coarse-resolution and fine-resolution data and then using these relationships to convert the coarse-resolution data into finer-resolution data [5,14]. The development and application of statistical downscaling approaches have been studied extensively over the past several decades, and the approaches can be classified as either simple traditional approaches or advanced modern machine learning (ML) approaches [15,16]. However, the downscaling of meteorological and hydrological variables is still not handled adequately, especially for detailed spatial and temporal features, and modern ML approaches have not substantially improved downscaling performance compared with simple traditional approaches.
The performance of traditional downscaling and simple classic machine learning approaches largely depends on predefined relationships or designed features, so these approaches cannot adequately extract spatial and temporal patterns when the underlying relationships are deeply hidden or unknown. The recent development of deep learning (DL) techniques has provided a new avenue for determining data relationships without requiring predefined model structures [10,17,18,19,20,21,22]. The convolutional neural network (CNN) is one of the most widely used DL algorithms, and a key merit of the algorithm is that it can efficiently extract spatial patterns from image-like data [23,24]. In the past several years, many variants of the CNN algorithm have been proposed or applied to downscale climate data [25,26]. However, there are three significant gaps in the literature with respect to the downscaling of climate or hydrological variables using DL techniques. Firstly, the majority of the existing CNN architectures use classic, non-trainable upsampling layers for super-resolution operations, so the super-resolution capability of these architectures is limited, which either leads to inadequate performance or requires a larger number of layers. Deeper networks can be utilized to compensate for the relatively poor performance, but increasing the number of network layers substantially increases the computational costs. Secondly, downscaling reference evapotranspiration (ET0) using DL techniques has rarely been reported. Spatiotemporal variations in ET0 are the combined results of variations in the maximum and minimum air temperature, relative humidity, solar radiation, and wind speed [6], so it is quite challenging to downscale ET0 due to these complicated variations and mechanisms. Nevertheless, investigating downscaling techniques for ET0 is highly worthwhile, partially because ET0 is a very important and comprehensive indicator of climate change [27]. In addition, a downscaling approach that performs well for ET0 is expected to provide satisfactory predictions for the climate variables contributing to ET0, so ET0 is a useful variable for assessing downscaling methods. Thirdly, downscaling ET0 for regions with limited data through transfer learning techniques has rarely been reported. Transfer learning for DL is an approach that employs the knowledge obtained from one region with sufficient data to make predictions for another region with limited data [28,29]. Transfer learning can potentially be a powerful tool for downscaling climate data in data-scarce regions.
The current work proposes a novel super-resolution convolutional transposed network (SRCTN) deep learning architecture for downscaling daily climatic variables (here, ET0). The algorithm was established based on a super-resolution convolutional neural network with transposed convolutions, which are also known as fractionally strided convolutions or deconvolutions [30]. The transposed convolutions can be seen as trainable upsampling operations, and the incorporation of these layers allows for a better super-resolution performance with a relatively shallow architecture. The network was assessed by downscaling coarse-resolution daily ET0 to fine-resolution counterparts under various scaling ratios. Via transfer learning, the network trained using the data in one region was directly used to perform data downscaling in another region without any local information, and its performance was further evaluated by comparing it with that provided by a traditional downscaling approach. To the best of the authors’ knowledge, this is the first super-resolution DL algorithm for ET0 downscaling. The proposed approach reproduced the spatial and temporal variations in ET0 remarkably well, successfully captured some small-scale and short-term events, such as extreme events, and exhibited impressive potential for downscaling climate data in data-scarce regions. The remainder of the paper is organized as follows: Section 2 describes the methodologies, comprising the DL algorithms, data sets, and synthetic experiments. Section 3 and Section 4 present and discuss the results, respectively. Conclusions are drawn in Section 5.

2. Methodology

2.1. Super-Resolution Convolutional Transposed Network (SRCTN)

The SRCTN consists of input, convolutional, and transposed convolutional layers, and Figure 1 illustrates the model structure for one of the designed synthetic experiments with a data scaling ratio of 8. The two-dimensional (2D) convolutional layers were primarily used to extract spatial features from the coarse-resolution data. The transposed convolutional layers were utilized to map the coarse-resolution feature maps to finer-resolution ones. The final convolutional layer reconstructed the fine-resolution data to make the size of the outputs consistent with that of the ground-truth data (data from [31]).
The SRCTN algorithm was established based upon the CNN algorithm. CNNs are a special type of artificial neural network (ANN), and they are generally more advanced because they can better extract spatial features [32,33,34]. In a traditional ANN or multilayer perceptron (MLP) neural network, the neurons in one layer are not connected with each other but are fully connected to all neurons in the neighboring layers. Therefore, the traditional networks exhibit poor performance in extracting local features. For the spatial distributions of climate and hydrological variables, there is a close relationship between nearby locations and a weak correlation between farther locations, so the traditional networks cannot provide satisfactory performance in processing such data. CNNs employ convolutional layers to create local connections between neurons in the same layer and establish a local feature map to extract local spatial patterns, so they are more capable of dealing with these data [35,36]. The convolutional layers use filters with a predefined stride and an activation function to process the data transformed from a previous layer, and the ReLU approach was employed because it is one of the most widely accepted activation functions [37,38,39].
A distinguishing feature of the SRCTN is that it employs transposed convolutions, also called fractionally strided convolutions [30,40], for upsampling. Figure 2 shows a simple example to illustrate how a transposed convolution operation works. In this example, the channels are ignored, the stride is 2, and there is no padding. The dimensions of the input tensor and the kernel are both 2 × 2. Each element in the input tensor was multiplied by the kernel, so 4 intermediate tensors were generated by sliding the kernel over the input tensor with a stride of 2. Finally, an output with a size of 4 × 4 was obtained by summing the 4 intermediate tensors. As can be seen, the transposed convolution broadcasts input elements by sliding the kernel over the input tensor, and it can thus increase the spatial dimensions of the feature maps. This operation is the reverse of the downsampling performed by a regular convolution and can be seen as a trainable upsampling technique. A significant merit of adopting the transposed convolution block is that it makes the upsampling operations trainable. This merit can remarkably reduce the required number of network layers and enables efficient downscaling with a comparably shallow network architecture, which in turn substantially reduces the computational costs and required disk storage space.
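To make the operation concrete, the following minimal NumPy sketch reproduces the broadcast-and-sum procedure described above for a single-channel 2 × 2 input, a 2 × 2 kernel, a stride of 2, and no padding; the numeric values and the function name are illustrative only and are not taken from Figure 2.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Single-channel transposed convolution with no padding.

    Each input element is multiplied by the full kernel, the resulting patch
    is placed on the output grid at an offset of `stride`, and overlapping
    contributions are summed.
    """
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((stride * (h - 1) + kh, stride * (w - 1) + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out

x = np.array([[0., 1.], [2., 3.]])        # 2 x 2 input tensor
k = np.array([[0., 1.], [2., 3.]])        # 2 x 2 kernel
print(transposed_conv2d(x, k, stride=2))  # 4 x 4 output
```

With a stride of 2 and a 2 × 2 kernel, the four intermediate patches do not overlap, so the 4 × 4 output is simply the kernel scaled by each input element, exactly as in the description of Figure 2.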
The network determines the internal parameters by minimizing the errors, which were quantified in the current work by a loss function based on the mean squared error. In terms of the optimization algorithm, the Adam optimizer was employed [41,42], and the learning rate was set as 0.001. The batch size was set as 128, and the number of epochs was set as 1000. The network was constructed using the TensorFlow 2.6.0 package with Keras in the Python 3.9 environment, and the network training was conducted on an NVIDIA GeForce RTX 3060 GPU (NVIDIA Corporation, Santa Clara, CA, USA).
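As an illustration of how such an architecture can be assembled with the reported training settings (MSE loss, Adam optimizer with a learning rate of 0.001, batch size of 128, and 1000 epochs), the Keras sketch below builds an SRCTN-like network for the scaling-ratio-8 case with 8 convolutional and 3 transposed convolutional layers, as described for Figure 1. The filter counts, kernel sizes, and exact layer ordering are assumptions, not values taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_srctn(coarse_shape=(5, 5, 1), filters=64, n_upsample=3):
    """SRCTN-like sketch: Conv2D feature extraction on the coarse grid,
    stride-2 Conv2DTranspose blocks as trainable upsampling (2^3 = 8-fold
    here), and a final Conv2D reconstruction layer. Filter counts, kernel
    sizes, and layer ordering are illustrative placeholders."""
    inputs = layers.Input(shape=coarse_shape)
    x = inputs
    for _ in range(4):                                   # feature extraction
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    for _ in range(n_upsample):                          # trainable upsampling
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                   activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same")(x)     # reconstruct fine ET0
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mse")
    return model

model = build_srctn()
# model.fit(coarse_train, fine_train, validation_data=(coarse_val, fine_val),
#           batch_size=128, epochs=1000)
```

With a 5 × 5 coarse input (a 10° × 10° area at 2° resolution), the three stride-2 transposed convolutions expand the feature maps to 10 × 10, 20 × 20, and finally 40 × 40, matching the 0.25° target grid.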

2.2. The Quantiles-Matching Downscaling Approach

The quantiles-matching (Q-M) approach is one of the most widely used statistical downscaling methods [6,43], and it was thus selected as a baseline method for comparison with the SRCTN algorithm. The quantiles-matching approach adjusts coarse-resolution data to make their statistical distribution closer to that of the ground truths. The approach first extracts the coarse-resolution and ground-truth data corresponding to the same location, then develops empirical cumulative distribution functions for the two data sets, and finally corrects the coarse-resolution data according to the following transformation:
$$ET_{\mathrm{corr}} = F_{\mathrm{GT}}^{-1}\left(F_{\mathrm{coarse}}\left(ET_{\mathrm{coarse}}\right)\right)$$
where ETcorr and ETcoarse indicate the corrected and coarse-resolution ET0 values, respectively, and FGT and Fcoarse are the empirical cumulative distribution functions for the ground-truth data and coarse-resolution data, respectively.
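A minimal NumPy sketch of this correction for a single grid location is given below; the empirical CDFs are built by sorting historical samples, and the transformation is applied by linear interpolation. The function and variable names are illustrative and do not come from the original implementation.

```python
import numpy as np

def quantile_matching(et_coarse, et_coarse_hist, et_gt_hist):
    """Quantiles-matching correction at one grid location (minimal sketch).

    et_coarse      : coarse-resolution values to be corrected
    et_coarse_hist : coarse-resolution samples used to build F_coarse
    et_gt_hist     : ground-truth samples used to build F_GT
    Returns ET_corr = F_GT^{-1}(F_coarse(ET_coarse)).
    """
    # Empirical CDF of the coarse-resolution data: p = F_coarse(ET_coarse)
    coarse_sorted = np.sort(et_coarse_hist)
    probs = np.arange(1, coarse_sorted.size + 1) / coarse_sorted.size
    p = np.interp(et_coarse, coarse_sorted, probs)
    # Inverse empirical CDF of the ground-truth data: F_GT^{-1}(p)
    gt_sorted = np.sort(et_gt_hist)
    gt_probs = np.arange(1, gt_sorted.size + 1) / gt_sorted.size
    return np.interp(p, gt_probs, gt_sorted)

# Usage: corrected = quantile_matching(coarse_test, coarse_train, gt_train)
```

In practice, the two empirical CDFs would be built separately for each grid location from the training-period data, and the correction would then be applied to the testing-period data at that location.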

2.3. Synthetic Experiments

Synthetic experiments were designed to assess and compare the performance of the SRCTN and quantiles-matching approaches for climate downscaling. It is a common practice to use synthetic experiments for evaluations and comparisons of downscaling approaches because they can eliminate the errors and uncertainties caused by other influencing factors and determine the errors that are only attributed to the downscaling approaches [10,44]. In recent years, synthetic experiments have been widely utilized in research on downscaling methods, especially for evaluating ML-based methods [10,45].
In terms of the fine-resolution data set, the daily gridded ET0 data set with a resolution of 0.25° for the period from 1 January 1984 to 31 December 2019 was employed. In this work, three sets of synthetic experiments were designed, following the approaches used by Wang et al. [10]. More specifically, this study applied maximum, averaging, and median aggregation operations to the fine-resolution data to generate coarse-resolution data sets at three resolution levels. The resolutions of these coarse-resolution data sets were 0.5°, 1°, and 2°, and their corresponding scaling ratios relative to the fine-resolution data set were 2, 4, and 8, respectively. The high-resolution and low-resolution sequential data were split into different sub-data sets: 80% of the data were assigned to the training-validation data set, and the remainder was assigned to the testing data set. Within the training-validation data set, 80% of the data were utilized for network training, and the remaining 20% were utilized for model validation. This study considered two study areas with different climate characteristics: a rectangular area covering 110° to 120° in longitude and 30° to 40° in latitude, and another rectangular area covering 120° to 130° in longitude and 40° to 50° in latitude. The Results section (Section 3) of this paper primarily focuses on the first region. It covers both a temperate sub-humid monsoon climate and a subtropical humid monsoon climate, showing obvious transitional features; the weather is therefore highly changeable, and the interannual variation in precipitation is very large. The Discussion section (Section 4) also investigates the transfer learning potential of the proposed approach, which considers the second area. It is located in a temperate monsoon climate with four distinct seasons, being hot and rainy in summer and cold and dry in winter. Two distinct regions were considered in the synthetic experiments because doing so better reflects the broad applicability of the downscaling approaches.
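For illustration, the sketch below shows one way to generate such coarse-resolution data sets by block aggregation and to perform the sequential 80%/20% splits described above, assuming the daily ET0 fields are stored as a (time, latitude, longitude) array; the array shapes, helper names, and sequential splitting are assumptions made only for the example.

```python
import numpy as np

def coarsen(fine, ratio, op=np.mean):
    """Aggregate a (time, lat, lon) array to a coarser grid by block reduction.
    op can be np.mean, np.max, or np.median, matching the three experiment types."""
    t, h, w = fine.shape
    blocks = fine.reshape(t, h // ratio, ratio, w // ratio, ratio)
    return op(blocks, axis=(2, 4))

def split(x, train_val_frac=0.8, train_frac=0.8):
    """Sequentially split into training, validation, and testing subsets."""
    n_tv = int(train_val_frac * len(x))
    n_tr = int(train_frac * n_tv)
    return x[:n_tr], x[n_tr:n_tv], x[n_tv:]

# Example for the ratio-8 experiment: a 0.25 deg fine grid (40 x 40 cells over
# a 10 deg x 10 deg area) aggregated to a 2 deg grid (5 x 5 cells).
fine = np.random.rand(100, 40, 40)          # placeholder daily ET0 fields
coarse = coarsen(fine, ratio=8)             # shape (100, 5, 5)
fine_tr, fine_val, fine_te = split(fine)
coarse_tr, coarse_val, coarse_te = split(coarse)
```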
The overall performance of the downscaling approaches was indicated by the root-mean-squared error, RMSE, and coefficient of determination, R2 (Appendix A).

3. Results

3.1. Overall Performance

The overall performance of the downscaling approaches was indicated by the root-mean-squared error (RMSE) and coefficient of determination (R2).
Table 1 summarizes the performance indicators corresponding to each experiment using the Q-M and SRCTN downscaling approaches. It is evident that the SRCTN method exhibited much lower RMSE and higher R2 values than the Q-M method, showing that the proposed SRCTN approach can significantly outperform the Q-M approach. The performance of the downscaling approaches was dependent on the scaling ratios to some degree. More specifically, the approaches provided lower RMSE and higher R2 values for lower scaling ratios. For instance, the RMSE value was the smallest and the R2 value was the largest for the synthetic experiment with a scaling ratio of 2 (i.e., the resolution of the coarse-resolution data was 0.5°), while the RMSE value was the largest and the R2 value was the smallest for the synthetic experiment with a scaling ratio of 8 (i.e., the resolution for the coarse-resolution data was 2°). This is consistent with the general engineering judgement that it is more challenging to estimate values at 1600 cells (grid resolution = 0.25°) from 25 cells (grid resolution = 2°; scaling ratio = 8) than from 400 cells (grid resolution = 0.5°; scaling ratio = 2). Although the performance of the downscaling methods was sensitive to the scaling ratios, the SRCTN method outperformed the Q-M method for all the experiments.
To further quantify the performance of the downscaling methods, scatter plots of the predicted values versus ground truths for the testing period were prepared and are presented in Figure 3. The data points corresponding to the SRCTN predictions resided quite close to the 1:1 line, and the distribution of the scatter points above and below the 1:1 line was very uniform, indicating that the SRCTN method could reproduce both the actual values and the distribution of the ground-truth data very well. The Q-M method could also correctly predict the data distribution, but the scatter points were located in a larger region than those for the SRCTN predictions, indicating that the Q-M method had a poorer performance than the SRCTN method, especially for the experiment with a small scaling ratio.

3.2. Spatiotemporal Distributions

Figure 4 presents the spatial distribution of the ground-truth and predicted ET0 on a sample day (1 August 2018) for each of the synthetic experiments. Both the Q-M and SRCTN methods successfully captured the general trends of the spatial distribution of ET0. However, it is quite obvious that the Q-M maps showed unrealistic rectangular patterns in all the experiments, while the SRCTN approach impressively captured the local and small-scale features. This implies that the capability of the SRCTN approach in reproducing small-scale patterns is remarkably good, and it is thus a better approach for downscaling.
Figure 5 shows the spatial distributions of bias for the same sample results. It is very impressive that the bias for the experiment for the SRCTN method with a scaling ratio of 2 was almost zero for the entire field, and the bias values were consistently smaller than those for the Q-M method, reconfirming the good performance of the proposed SRCTN method in downscaling climate and hydrological variables.
Figure 6 shows the time series of the daily ET0 in a sample year (2018) at four locations in the ground-truth data set and obtained by the downscaling methods. The performance indicators showed that the SRCTN approach could accurately predict the local features of ET0, and its performance was superior to the Q-M approach.

3.3. Systematic Errors and Extremes

The systematic errors were quantified by comparing the mean ET0 and the bias between the fine-resolution and downscaling data. Figure 7 compares the mean of the daily ET0 during the testing period obtained by the ground-truth data set and those obtained by the downscaling synthetic experiments. The figures indicate that the SRCTN method reproduced both the large-scale and fine-scale spatial features of ET0 remarkably well, especially for the experiments with smaller scaling ratios. The Q-M downscaling approach could also satisfactorily capture the large-scale features, but there were some unrealistic rectangular features, indicating that the raw coarse-resolution data were not adequately corrected. Figure 8 and Figure 9 show the mean and standard deviation of the bias for each experiment, respectively. The results were encouraging because they showed that the mean and standard deviation of the bias for the SRCTN method with a scaling ratio of 2 were both very close to zero, and those for the other scaling ratios were consistently smaller than those for the Q-M approach. These results highlight that the data obtained by the proposed SRCTN approach exhibited a better agreement with the fine-resolution data than the data obtained by the Q-M method, reconfirming the good performance of the proposed SRCTN method in downscaling climate and hydrological variables.
The annual highest ET0 was used as an indicator for assessing the capability of the downscaling methods in dealing with extreme events. The annual highest ET0 has a significant influence on the irrigation water demand, agriculture, soil moisture, and ecosystems. A MATLAB script was written to identify the highest ET0 in each year during the testing period at each point in the study area. Figure 10 shows the time series of the annual highest ET0 at nine randomly selected locations. The results obtained with the SRCTN method closely followed the peaks and troughs of the fine-resolution data, and the SRCTN predictions corresponding to a scaling ratio of 2 were nearly identical to the fine-resolution data. In contrast, the results obtained by the Q-M method showed larger oscillations and deviated farther from the ground-truth data. Figure 11 presents the spatial distribution of the extreme index for each synthetic experiment. The spatial patterns of the SRCTN results closely matched those of the fine-resolution data at each point, and the SRCTN results consistently showed closer matches with the fine-resolution data than the Q-M results. These findings are very encouraging because they demonstrate that the SRCTN method can capture extreme events very well, which is widely known to be a difficult task.
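The short sketch below shows an equivalent way to extract the annual highest ET0 at every grid cell in Python; the original analysis used a MATLAB script, so this code is illustrative rather than the authors' implementation, and the variable names are placeholders.

```python
import numpy as np
import pandas as pd

def annual_maxima(et0, dates):
    """Annual highest ET0 at every grid cell.

    et0   : array of shape (time, lat, lon) with daily ET0
    dates : pandas.DatetimeIndex with the same length as the time axis
    Returns an array of shape (n_years, lat, lon)."""
    years = dates.year
    return np.stack([et0[years == y].max(axis=0) for y in np.unique(years)])

dates = pd.date_range("1984-01-01", "2019-12-31", freq="D")
et0 = np.random.rand(len(dates), 40, 40)    # placeholder daily ET0 fields
extremes = annual_maxima(et0, dates)        # shape (36, 40, 40)
```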

4. Discussion

The results demonstrated that the SRCTN method significantly outperformed the Q-M approach, which is one of the most widely utilized downscaling approaches. Another merit of DL algorithms (here, SRCTN) is that they can employ the knowledge obtained from one region with sufficient data to make predictions for another region with limited data through transfer learning. In this work, the performance of transfer learning of SRCTN was evaluated using the second study region, which covered 120° to 130° in longitude and 40° to 50° in latitude. For the evaluations, three different methods were considered: SRCTN-Transfer-Learning (SRCTN-TL), SRCTN-Retrained (SRCTN-RT), and Q-M. In the SRCTN-TL method, the SRCTN network trained for the first region was directly used to downscale the data for the second region. In the SRCTN-RT method, the network with the same structure was retrained by the data in the second region.
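A minimal sketch of the three configurations is given below, reusing the illustrative build_srctn helper sketched in Section 2.1; the placeholder arrays, shapes, and shortened training settings are assumptions made only to keep the example self-contained.

```python
import numpy as np

# Placeholder arrays standing in for the coarse (2 deg) and fine (0.25 deg)
# ET0 fields of the two regions; shapes follow the ratio-8 experiment.
coarse_r1, fine_r1 = np.random.rand(64, 5, 5, 1), np.random.rand(64, 40, 40, 1)
coarse_r2, fine_r2 = np.random.rand(64, 5, 5, 1), np.random.rand(64, 40, 40, 1)

# SRCTN trained on region 1 (training settings shortened here for brevity).
srctn_region1 = build_srctn()
srctn_region1.fit(coarse_r1, fine_r1, batch_size=16, epochs=2, verbose=0)

# SRCTN-TL: apply the region-1 network to region-2 data with no retraining.
et0_tl = srctn_region1.predict(coarse_r2)

# SRCTN-RT: the same architecture retrained from scratch on region-2 data.
srctn_rt = build_srctn()
srctn_rt.fit(coarse_r2, fine_r2, batch_size=16, epochs=2, verbose=0)
et0_rt = srctn_rt.predict(coarse_r2)

# Q-M, by contrast, must fit local empirical CDFs and therefore needs
# region-2 ground-truth data (see Section 2.2).
```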
Figure 12 compares the fine-resolution data and the downscaled data obtained by the different methods. It appears that the data points corresponding to Q-M were more diffused than those corresponding to SRCTN-TL, which were in turn more diffused than those for SRCTN-RT. Figure 13 presents the spatial distribution of the ET0 on a sample date. All methods reproduced the general patterns very well, but the Q-M approach showed artificial rectangular features. A closer examination shows that the map obtained by the SRCTN-RT method was more similar to the truth map than the map obtained by the SRCTN-TL method. Figure 14 shows the spatial distribution of bias for the different methods on the sample date. It is quite evident that the SRCTN algorithms generated lower bias values than the Q-M method, and the bias for the SRCTN-RT method was lower than that for the SRCTN-TL method. The mean and standard deviation of the bias over the testing period were also compared, as shown in Figure 15 and Figure 16, respectively. Consistent with the results for the sample date, both the mean and standard deviation of the bias corresponding to the SRCTN-RT approach were very close to zero. Those for the SRCTN-TL method were slightly larger, and those for the Q-M method were the largest. These observations showed that the SRCTN algorithm trained in one region can be satisfactorily applied in another region even without any additional data, although network retraining can further improve the performance of the algorithm.
It should be noted that the Q-M method needs local data to develop cumulative distribution functions, and it thus cannot be directly used in regions with limited data. This highlights the special merit of the proposed DL-type algorithm in performing downscaling for data-scarce regions. The good performance of the transfer learning can be attributed to the excellent capability of DL algorithms in developing mapping functions between fine-resolution and coarse-resolution data, including some unknown and deeply hidden relationship functions. To be specific, the network figured out the data relationships between the fine-resolution and coarse-resolution data in the first region and could use these functions to make predictions in other regions. In the current study, the coarse-resolution data were generated through synthetic experiments, and the relationships were stable across the different regions. This is useful in eliminating the influence of other factors and is thus better in evaluating the generalization capability of the downscaling methods [10,45,46]. However, real-world data relationships actually vary in different regions, and thus the transfer learning algorithm may require auxiliary data, such as elevation [10,47], latitude, and longitude, to maintain good performance.
In addition to the synthetic experiments that used coarse-resolution data obtained by averaging operations, the current study also considered synthetic experiments that used coarse-resolution data obtained by maximum and median operations. The results showed that the SRCTN algorithm consistently outperformed the Q-M algorithm, which was consistent with the observations based on the synthetic experiments corresponding to the averaging operations. This reconfirms that the proposed SRCTN algorithm is a robust and powerful tool for downscaling climate data.
Although few studies have downscaled ET0 data using ML algorithms, there are some recent studies that employed ML approaches to downscale other climate or hydrological variables. For instance, Srivastava et al. [48] used three AI techniques (artificial neural network, support vector machine, and relevance vector machine) and a generalized linear model (GLM) to improve soil moisture estimations and spatial resolution, and the results showed that the AI downscaling algorithms could generally improve the performance more than the classical approach, with the artificial-neural-network-based downscaling methods being superior to the other tested downscaling methods. Wang et al. [14] used recurrent neural networks (RNNs) to perform statistical downscaling and compared them with traditional artificial neural networks (ANNs); the RNN and ANN models showed broadly similar performance in downscaling precipitation data, although the RNN model demonstrated some superiority over the ANN algorithm. Xu et al. [49] developed multiple ML downscaling models based on Bayesian model averaging (BMA). The results showed that all the downscaling models greatly improved the performance compared to the original GCM outputs, and the downscaled results based on the BMA ensemble simulation and the SVR model were the best. Anaraki et al. [50] used LSSVM-WOA, LSSVM, K-nearest neighbors (KNN), and ANN for downscaling precipitation and temperature, and the results showed that the ANN and LSSVM-WOA methods slightly outperformed the other ML algorithms. Kumar et al. [47] used three algorithms (i.e., SRCNN, stacked SRCNN, and DeepSD) for regional climate prediction and high-resolution real-time rainfall observations, and the results showed that DeepSD-based downscaling provided the best performance. Wang et al. [10] employed a super-resolution deep residual network (SRDRN) approach to downscale daily precipitation and temperature, and the results showed that the SRDRN algorithm could significantly outperform the constructed analog (CA) downscaling approach. Generally, ML approaches can improve the practice of data downscaling. However, as stated by Wang et al. [10], most simple ML techniques can only provide marginal improvement, and existing plain CNNs cannot provide satisfactory or expected performance in downscaling data. The present study also tested a three-layer CNN in preliminary experiments, and the results confirmed that the network could not provide satisfactory performance in downscaling ET0. As stated by Wang et al. [10], the number of layers is a very important factor that influences the performance of a network, and the comparably disappointing performance of the plain CNN may be attributed to its shallow architecture. Previous studies have tested deeper architectures, such as U-Net [47] and SRDRN [10], and these algorithms can provide satisfactory performance. Generally, the adoption of more stacked layers can improve the capability of a network to extract more features. However, previous studies have also found that stacking more layers beyond an optimal number cannot improve, or can even lower, the network performance [51]. Wang et al. [10] incorporated batch normalization layers and residual blocks to resolve the associated vanishing/exploding gradient and degradation issues, and their study demonstrated that the modified approach could provide excellent results for downscaling precipitation and temperature data.
The present study also attempted deeper architectures in preliminary experiments, but the increased number of layers significantly increased the computational costs and memory requirements, which limits the broader application of such deeper architectures. The upsampling operation is crucial for downscaling data, and thus we attempted various upsampling techniques; ultimately, it was found that the transposed convolutional operation could achieve a very good balance between accuracy and efficiency. The results of this study indicated that the proposed SRCTN algorithm could provide satisfactory downscaling results without significantly increasing the number of layers. This paper has shown that the proposed SRCTN approach can satisfactorily reproduce both large-scale and small-scale spatiotemporal distribution patterns, minimize bias and errors, capture extreme events, and make predictions for data-scarce regions, demonstrating that it is a promising tool for downscaling ET0 and can be potentially powerful in downscaling other climate and hydrological data.
In the synthetic experiments, the coarse-resolution data aggregated from the fine-resolution data were used as the input variables for the DL network, and thus the same variable was used as both the input and output variable. However, coarse-resolution ET0 data are typically estimated from the influencing variables, such as the maximum and minimum air temperature, relative humidity, sunshine duration, and wind speed [52,53]. It will be interesting to evaluate two different downscaling procedures to check whether it is possible to further improve the practice of data downscaling: (1) predicting the fine-resolution ET0 using coarse-resolution influencing climate variables, and (2) downscaling the influencing climate variables first and then calculating the fine-resolution ET0 using the downscaled climate variables. This study focused on ET0, which quantifies the potential evaporative capability of a region without considering other influencing factors, such as soil and crop conditions. This setup eliminates errors from other sources and is thus very useful in distinguishing the performance of different downscaling methods, but it is meaningful to further evaluate the performance of the SRCTN algorithm in dealing with actual ET0 data. The present work evaluated the capability of the DL transfer learning technique in downscaling data in regions with limited data, and the results exhibited good performance. However, the real-world relationships between coarse-resolution and fine-resolution data are more complicated than those defined in the synthetic experiments. The synthetic experiments were meaningful in distinguishing the performance of the downscaling methods while avoiding errors from other sources, but future work is required to further assess the transfer learning capability of the proposed SRCTN algorithm using real-world data with more complex functional relationships.

5. Conclusions

This paper presented a novel SRCTN DL method for downscaling daily ET0. The results demonstrated that the proposed approach could satisfactorily reproduce the spatiotemporal distribution and extreme events of ET0 and outperform the traditional downscaling approach. This study also showed that the SRCTN algorithm trained in one region can be satisfactorily applied in another region with limited data through transfer learning. The good performance of the SRCTN algorithm can be primarily attributed to the transposed convolutional operations, which can provide satisfactory upsampling results without increasing the number of layers. Thus, the SRCTN algorithm achieves an excellent balance between accuracy and efficiency. The remarkable capability of the proposed SRCTN algorithm in downscaling ET0 data implies that the method can potentially be employed to conduct downscaling operations for other variables. This is very meaningful for transforming coarse-resolution climate data sets into fine-resolution ones and addressing the problem of scale discrepancy in climatic and hydrological analyses. The present work only focused on the performance of the model under limited data conditions, and it is necessary to apply it to practical, more complex cases in the future.

Author Contributions

Y.L. directed the research; X.Y. prepared the original draft; W.D. and T.Z. reviewed and edited the manuscript; X.B. and R.N. were involved in the data collection, analysis, and interpretation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Opening Fund of the Artificial Intelligence Key Laboratory of Sichuan Province (grant No. 2022RYJO1), the Open Fund of the National Engineering Research Center for Geographic Information System, China University of Geosciences, Wuhan 430074, China (grant No. 2022KFJJ03), and the National Natural Science Foundation of China (grant No. 52309079).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The root-mean-squared error (RMSE) and coefficient of determination (R2) can be expressed as
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(E_P - E_G\right)^2}$$
$$R^2 = \left[\frac{N\sum_{i=1}^{N} E_G E_P - \sum_{i=1}^{N} E_G \sum_{i=1}^{N} E_P}{\sqrt{N\sum_{i=1}^{N} E_G^2 - \left(\sum_{i=1}^{N} E_G\right)^2}\,\sqrt{N\sum_{i=1}^{N} E_P^2 - \left(\sum_{i=1}^{N} E_P\right)^2}}\right]^2$$
where EG is the ground-truth data, EP is the predicted data, and N is the number of data pairs.
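For reference, a short NumPy sketch of the two indicators, mirroring the formulas above, is provided below; the function names are illustrative.

```python
import numpy as np

def rmse(e_g, e_p):
    """Root-mean-squared error between ground-truth (e_g) and predicted (e_p) data."""
    return np.sqrt(np.mean((np.asarray(e_p) - np.asarray(e_g)) ** 2))

def r_squared(e_g, e_p):
    """Coefficient of determination, computed as the squared Pearson correlation
    between the ground-truth and predicted data."""
    e_g, e_p = np.ravel(e_g), np.ravel(e_p)
    n = e_g.size
    num = n * np.sum(e_g * e_p) - np.sum(e_g) * np.sum(e_p)
    den = (np.sqrt(n * np.sum(e_g ** 2) - np.sum(e_g) ** 2)
           * np.sqrt(n * np.sum(e_p ** 2) - np.sum(e_p) ** 2))
    return (num / den) ** 2
```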

References

  1. Mengistu, D.; Bewket, W.; Dosio, A.; Panitz, H.-J. Climate change impacts on water resources in the Upper Blue Nile (Abay) River Basin, Ethiopia. J. Hydrol. 2021, 592, 125614. [Google Scholar] [CrossRef]
  2. Kao, I.F.; Zhou, Y.; Chang, L.C.; Chang, F.J. Exploring a Long Short-Term Memory based Encoder-Decoder framework for multi-step-ahead flood forecasting. J. Hydrol. 2020, 583, 124631. [Google Scholar] [CrossRef]
  3. Zeleňáková, M.; Kubiak-Wojcicka, K.; Weiss, R.; Weiss, E.; Elhamid, H.F.A. Environmental risk assessment focused on water quality in the Laborec River watershed. Ecohydrol. Hydrobiol. 2021, 21, 641–654. [Google Scholar] [CrossRef]
  4. Hu, H.; Ayyub, B.M. Validating and Enhancing Extreme Precipitation Projections by Downscaled Global Climate Model Results and Copula Methods. J. Hydrol. Eng. 2019, 24, 04019019. [Google Scholar] [CrossRef]
  5. Eum, H.-I.; Gupta, A.; Dibike, Y. Effects of univariate and multivariate statistical downscaling methods on climatic and hydrologic indicators for Alberta, Canada. J. Hydrol. 2020, 588, 125065. [Google Scholar] [CrossRef]
  6. Yan, X.; Mohammadian, A. Forecasting daily reference evapotranspiration for Canada using the Penman–Monteith model and statistically downscaled global climate model projections. Alex. Eng. J. 2020, 59, 883–891. [Google Scholar] [CrossRef]
  7. Sharma, C.; Ojha, C.S.P.; Shukla, A.K.; Pham, Q.B.; Linh, N.T.T.; Fai, C.M.; Loc, H.H.; Dung, T.D. Modified Approach to Reduce GCM Bias in Downscaled Precipitation: A Study in Ganga River Basin. Water 2019, 11, 2097. [Google Scholar] [CrossRef]
  8. Kim, S.-H.; Kim, J.-B.; Bae, D.-H. Optimizing Parameters for the Downscaling of Daily Precipitation in Normal and Drought Periods in South Korea. Water 2022, 14, 1108. [Google Scholar] [CrossRef]
  9. Li, X.; Zhang, K.; Babovic, V. Projections of Future Climate Change in Singapore Based on a Multi-Site Multivariate Downscaling Approach. Water 2019, 11, 2300. [Google Scholar] [CrossRef]
  10. Wang, F.; Tian, D.; Lowe, L.; Kalin, L.; Lehrter, J. Deep Learning for Daily Precipitation and Temperature Downscaling. Water Resour. Res. 2021, 57, e2020WR029308. [Google Scholar] [CrossRef]
  11. Shamir, E.; Halper, E.; Modrick, T.; Georgakakos, K.P.; Chang, H.-I.; Lahmers, T.M.; Castro, C. Statistical and dynamical downscaling impact on projected hydrologic assessment in arid environment: A case study from Bill Williams River basin and Alamo Lake, Arizona. J. Hydrol. X 2019, 2, 100019. [Google Scholar] [CrossRef]
  12. Iseri, Y.; Diaz, A.J.; Trinh, T.; Kavvas, M.L.; Ishida, K.; Anderson, M.L.; Ohara, N.; Snider, E.D. Dynamical downscaling of global reanalysis data for high-resolution spatial modeling of snow accumulation/melting at the central/southern Sierra Nevada watersheds. J. Hydrol. 2021, 598, 126445. [Google Scholar] [CrossRef]
  13. Lima, C.H.; Kwon, H.-H.; Kim, Y.-T. A Bayesian Kriging model applied for spatial downscaling of daily rainfall from GCMs. J. Hydrol. 2021, 597, 126095. [Google Scholar] [CrossRef]
  14. Wang, Q.; Huang, J.; Liu, R.; Men, C.; Guo, L.; Miao, Y.; Jiao, L.; Wang, Y.; Shoaib, M.; Xia, X. Sequence-based statistical downscaling and its application to hydrologic simulations based on machine learning and big data. J. Hydrol. 2020, 586, 124875. [Google Scholar] [CrossRef]
  15. Chadalawada, J.; Herath, H.M.V.V.; Babovic, V. Hydrologically Informed Machine Learning for Rainfall-Runoff Modeling: A Genetic Programming-Based Toolkit for Automatic Model Induction. Water Resour. Res. 2020, 56, e2019WR026933. [Google Scholar] [CrossRef]
  16. Yang, S.; Yang, D.; Chen, J.; Santisirisomboon, J.; Lu, W.; Zhao, B. A physical process and machine learning combined hydrological model for daily streamflow simulations of large watersheds with limited observation data. J. Hydrol. 2020, 590, 125206. [Google Scholar] [CrossRef]
  17. Kreyenberg, P.J.; Bauser, H.H.; Roth, K. Velocity Field Estimation on Density-Driven Solute Transport with a Convolutional Neural Network. Water Resour. Res. 2019, 55, 7275–7293. [Google Scholar] [CrossRef]
  18. Van, S.P.; Le, H.M.; Thanh, D.V.; Dang, T.D.; Loc, H.H.; Anh, D.T. Deep learning convolutional neural network in rainfall–runoff modelling. J. Hydroinformatics 2020, 22, 541–561. [Google Scholar] [CrossRef]
  19. Lei, X.; Chen, W.; Panahi, M.; Falah, F.; Rahmati, O.; Uuemaa, E.; Kalantari, Z.; Ferreira, C.S.S.; Rezaie, F.; Tiefenbacher, J.P.; et al. Urban flood modeling using deep-learning approaches in Seoul, South Korea. J. Hydrol. 2021, 601, 126684. [Google Scholar] [CrossRef]
  20. Gude, V.; Corns, S.; Long, S. Flood Prediction and Uncertainty Estimation Using Deep Learning. Water 2020, 12, 884. [Google Scholar] [CrossRef]
  21. Im, Y.; Song, G.; Lee, J.; Cho, M. Deep Learning Methods for Predicting Tap-Water Quality Time Series in South Korea. Water 2022, 14, 3766. [Google Scholar] [CrossRef]
  22. Lee, K.; Choi, C.; Shin, D.H.; Kim, H.S. Prediction of Heavy Rain Damage Using Deep Learning. Water 2020, 12, 1942. [Google Scholar] [CrossRef]
  23. Chen, H.; Chen, A.; Xu, L.; Xie, H.; Qiao, H.; Lin, Q.; Cai, K. A deep learning CNN architecture applied in smart near-infrared analysis of water pollution for agricultural irrigation resources. Agric. Water Manag. 2020, 240, 106303. [Google Scholar] [CrossRef]
  24. Shu, X.; Ding, W.; Peng, Y.; Wang, Z.; Wu, J.; Li, M. Monthly Streamflow Forecasting Using Convolutional Neural Network. Water Resour. Manag. 2021, 35, 5089–5104. [Google Scholar] [CrossRef]
  25. Höhlein, K.; Kern, M.; Hewson, T.; Westermann, R. A comparative study of convolutional neural network models for wind field downscaling. Meteorol. Appl. 2020, 27, e1961. [Google Scholar] [CrossRef]
  26. Nagasato, T.; Ishida, K.; Ercan, A.; Tu, T.; Kiyama, M.; Amagasaki, M.; Yokoo, K. Extension of Convolutional Neural Network along Temporal and Vertical Directions for Precipitation Downscaling. arXiv 2021, arXiv:2112.06571. [Google Scholar]
  27. Sun, Z.; Liu, Y.; Chen, H.; Zhang, J.; Jin, J.; Bao, Z.; Wang, G.; Tang, L. Evaluation of future climatology and its uncertainty under SSP scenarios based on a bias processing procedure: A case study of the Lancang-Mekong River Basin. Atmos. Res. 2023, 298, 107134. [Google Scholar] [CrossRef]
  28. Jena, S.; Mohanty, B.P.; Panda, R.K.; Ramadas, M. Toward Developing a Generalizable Pedotransfer Function for Saturated Hydraulic Conductivity Using Transfer Learning and Predictor Selector Algorithm. Water Resour. Res. 2021, 57, e2020WR028862. [Google Scholar] [CrossRef]
  29. Willard, J.D.; Read, J.S.; Appling, A.P.; Oliver, S.K.; Jia, X.; Kumar, V. Predicting Water Temperature Dynamics of Unmonitored Lakes with Meta-Transfer Learning. Water Resour. Res. 2021, 57, e2021WR029579. [Google Scholar] [CrossRef]
  30. Im, D.; Han, D.; Choi, S.; Kang, S.; Yoo, H.-J. DT-CNN: Dilated and Transposed Convolution Neural Network Accelerator for Real-Time Image Segmentation on Mobile Devices. In Proceedings of the 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019; IEEE: New York, NY, USA, 2019; pp. 1–5. [Google Scholar] [CrossRef]
  31. Yan, X.; Mohammadian, A.; Ao, R.; Liu, J.; Chen, X. Spatiotemporal Variations in Reference Evapotranspiration and Its Contributing Climatic Variables at Various Spatial Scales across China for 1984–2019. Water 2022, 14, 2502. [Google Scholar] [CrossRef]
  32. Wang, J.-H.; Lin, G.-F.; Chang, M.-J.; Huang, I.-H.; Chen, Y.-R. Real-Time Water-Level Forecasting Using Dilated Causal Convolutional Neural Networks. Water Resour. Manag. 2019, 33, 3759–3780. [Google Scholar] [CrossRef]
  33. Panahi, M.; Sadhasivam, N.; Pourghasemi, H.R.; Rezaie, F.; Lee, S. Spatial prediction of groundwater potential mapping based on convolutional neural network (CNN) and support vector regression (SVR). J. Hydrol. 2020, 588, 125033. [Google Scholar] [CrossRef]
  34. Elmorsy, M.; El-Dakhakhni, W.; Zhao, B. Generalizable Permeability Prediction of Digital Porous Media via a Novel Multi-Scale 3D Convolutional Neural Network. Water Resour. Res. 2022, 58, e2021WR031454. [Google Scholar] [CrossRef]
  35. Yu, J.; Zhang, X.; Xu, L.; Dong, J.; Zhangzhong, L. A hybrid CNN-GRU model for predicting soil moisture in maize root zone. Agric. Water Manag. 2021, 245, 106649. [Google Scholar] [CrossRef]
  36. Mei, P.; Li, M.; Zhang, Q.; Li, G.; Song, L. Prediction model of drinking water source quality with potential industrial-agricultural pollution based on CNN-GRU-Attention. J. Hydrol. 2022, 610, 127934. [Google Scholar] [CrossRef]
  37. Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Stat. 2020, 48, 1875–1897. [Google Scholar] [CrossRef]
  38. Lin, Y.; Wang, D.; Wang, G.; Qiu, J.; Long, K.; Du, Y.; Xie, H.; Wei, Z.; Shangguan, W.; Dai, Y. A hybrid deep learning algorithm and its application to streamflow prediction. J. Hydrol. 2021, 601, 126636. [Google Scholar] [CrossRef]
  39. Ide, H.; Kurita, T. Improvement of Learning for CNN with ReLU Activation by Sparse Regularization. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; IEEE: New York, NY, USA, 2017; pp. 2684–2691. [Google Scholar]
  40. Su, Y.; Sun, W.; Liu, J.; Zhai, G.; Jing, P. Photo-realistic image bit-depth enhancement via residual transposed convolutional neural network. Neurocomputing 2019, 347, 200–211. [Google Scholar] [CrossRef]
  41. Zhang, Z. Improved adam optimizer for deep neural networks. In Proceedings of the 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQOS), Banff, AB, Canada, 4–6 June 2018; IEEE: New York, NY, USA, 2018; pp. 1–2. [Google Scholar] [CrossRef]
  42. Wang, Y.; Xiao, Z.; Cao, G. A convolutional neural network method based on Adam optimizer with power-exponential learning rate for bearing fault diagnosis. J. Vibroengineering 2022, 24, 666–678. [Google Scholar] [CrossRef]
  43. Yan, X.; Mohammadian, A. Estimating future daily pan evaporation for Qatar using the Hargreaves model and statistically downscaled global climate model projections under RCP climate change scenarios. Arab. J. Geosci. 2020, 13, 938. [Google Scholar] [CrossRef]
  44. Shin, Y.; Mohanty, B.P. Development of a deterministic downscaling algorithm for remote sensing soil moisture footprint using soil and vegetation classifications. Water Resour. Res. 2013, 49, 6208–6228. [Google Scholar] [CrossRef]
  45. He, X.; Chaney, N.W.; Schleiss, M.; Sheffield, J. Spatial downscaling of precipitation using adaptable random forests. Water Resour. Res. 2016, 52, 8217–8237. [Google Scholar] [CrossRef]
  46. Pierce, D.W.; Cayan, D.R.; Thrasher, B.L. Statistical Downscaling Using Localized Constructed Analogs (LOCA). J. Hydrometeorol. 2014, 15, 2558–2585. [Google Scholar] [CrossRef]
  47. Sha, Y.; Gagne, D.J., II; West, G.; Stull, R. Deep-Learning-Based Gridded Downscaling of Surface Meteorological Variables in Complex Terrain. Part I: Daily Maximum and Minimum 2-m Temperature. J. Appl. Meteorol. Clim. 2020, 59, 2057–2073. [Google Scholar] [CrossRef]
  48. Srivastava, P.K.; Han, D.; Ramirez, M.R.; Islam, T. Machine Learning Techniques for Downscaling SMOS Satellite Soil Moisture Using MODIS Land Surface Temperature for Hydrological Application. Water Resour. Manag. 2013, 27, 3127–3144. [Google Scholar] [CrossRef]
  49. Xu, R.; Chen, N.; Chen, Y.; Chen, Z. Downscaling and Projection of Multi-CMIP5 Precipitation Using Machine Learning Methods in the Upper Han River Basin. Adv. Meteorol. 2020, 2020, 8680436. [Google Scholar] [CrossRef]
  50. Anaraki, M.V.; Farzin, S.; Mousavi, S.-F.; Karami, H. Uncertainty Analysis of Climate Change Impacts on Flood Frequency by Using Hybrid Machine Learning Methods. Water Resour. Manag. 2021, 35, 199–223. [Google Scholar] [CrossRef]
  51. Pan, B.; Hsu, K.; AghaKouchak, A.; Sorooshian, S. Improving Precipitation Estimation Using Convolutional Neural Network. Water Resour. Res. 2019, 55, 2301–2321. [Google Scholar] [CrossRef]
  52. Jiang, S.; Liang, C.; Cui, N.; Zhao, L.; Du, T.; Hu, X.; Feng, Y.; Guan, J.; Feng, Y. Impacts of climatic variables on reference evapotranspiration during growing season in Southwest China. Agric. Water Manag. 2019, 216, 365–378. [Google Scholar] [CrossRef]
  53. Yang, Y.; Cui, Y.; Bai, K.; Luo, T.; Dai, J.; Wang, W.; Luo, Y. Short-term forecasting of daily reference evapotranspiration using the reduced-set Penman-Monteith model and public weather forecasts. Agric. Water Manag. 2019, 211, 70–80. [Google Scholar] [CrossRef]
Figure 1. Architecture and a sample model structure of the super-resolution convolutional transposed network (SRCTN): the SRCTN consists of 1 input layer, 8 two-dimensional (2D) convolutional layers, and 3 transposed convolutional layers; the transposed convolutional layers were utilized to map the coarse-resolution feature maps to finer-resolution ones.
Figure 2. An example illustrating the transposed convolution operation.
Figure 3. Comparisons of the downscaled data versus fine-resolution data obtained by the SRCTN and Q-M downscaling approaches over the testing period. (a) scaling ratio = 8; (b) scaling ratio = 4; and (c) scaling ratio = 2. The unit of the RMSE values is mm day−1.
Figure 4. Spatial distribution of the daily ET0 in the fine-resolution data set and those obtained by different synthetic experiments on a sample day (1 August 2018).
Figure 5. Spatial distribution of the bias for different synthetic experiments on a sample day (1 August 2018).
Figure 6. Time series of daily ET0 for a sample year (2018) in the fine-resolution data set and those obtained by the SRCTN and Q-M downscaling approaches.
Figure 7. Spatial distribution of the mean ET0 over the testing period obtained from the fine-resolution data set and those from different synthetic experiments.
Figure 8. Spatial distribution of the mean of the bias over the testing period for different synthetic experiments.
Figure 9. Spatial distribution of the standard deviation of the bias over the testing period for different synthetic experiments.
Figure 10. Annual highest ET0 over the testing period for sample locations obtained by different synthetic experiments.
Figure 11. Spatial distribution of the mean of the annual highest ET0 over the testing period obtained from the fine-resolution data set and those from different synthetic experiments.
Figure 12. Comparisons of the downscaled data versus fine-resolution data obtained by the SRCTN-TL, SRCTN-RT, and Q-M downscaling approaches over the testing period.
Figure 13. Spatial distribution of the daily ET0 for the second study region in the fine-resolution data set and those obtained by different synthetic experiments on a sample day (1 August 2018).
Figure 14. Spatial distribution of the bias for the second study region in the fine-resolution data set and those obtained by different synthetic experiments on a sample day (1 August 2018).
Figure 15. Spatial distribution of the mean of the bias for the second study region in the fine-resolution data set and those obtained by different synthetic experiments.
Figure 16. Spatial distribution of the standard deviation of the bias for the second study region in the fine-resolution data set and those obtained by different synthetic experiments.
Table 1. Performance indicators corresponding to each experiment using the SRCTN and Q-M downscaling approaches.

Experiment | MAE (mm day−1) | RMSE (mm day−1) | R2 (-)
SRCTN-8    | 0.163          | 0.239           | 0.980
Q-M-8      | 0.227          | 0.334           | 0.961
SRCTN-4    | 0.046          | 0.077           | 0.998
Q-M-4      | 0.137          | 0.208           | 0.985
SRCTN-2    | 0.010          | 0.015           | 1.000
Q-M-2      | 0.072          | 0.109           | 0.996