Article

Buckwheat Plant Height Estimation Based on Stereo Vision and a Regression Convolutional Neural Network under Field Conditions

1 College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
2 Dryland Farm Machinery Key Technology and Equipment Key Laboratory of Shanxi Province, Jinzhong 030801, China
* Author to whom correspondence should be addressed.
Agronomy 2023, 13(9), 2312; https://doi.org/10.3390/agronomy13092312
Submission received: 10 August 2023 / Revised: 30 August 2023 / Accepted: 30 August 2023 / Published: 1 September 2023
(This article belongs to the Special Issue Computer Vision and Deep Learning Technology in Agriculture)

Abstract

Buckwheat plant height is an important indicator for producers. Due to the decline in agricultural labor, the automatic, real-time acquisition of crop growth information will become a prominent issue for farms in the future. To address this problem, we used stereo vision and a regression convolutional neural network (CNN) to estimate buckwheat plant height. MobileNet V3 Small, NasNet Mobile, RegNet Y002, EfficientNet V2 B0, MobileNet V3 Large, NasNet Large, RegNet Y008, and EfficientNet V2 L were modified into regression CNNs. Through a five-fold cross-validation on the modeling data, the modified RegNet Y008 was selected as the optimal estimation model. Based on the depth and contour information in the buckwheat depth images, the mean absolute error (MAE), root mean square error (RMSE), mean square error (MSE), and mean relative error (MRE) when estimating plant height were 0.56 cm, 0.73 cm, 0.54 cm, and 1.7%, respectively. The coefficient of determination (R2) between the estimated and measured results was 0.9994. Combined with the LabVIEW software development platform, this method can estimate buckwheat plant height accurately, quickly, and automatically, contributing to the automatic management of farms.

1. Introduction

Buckwheat contains a variety of vitamins, proteins, and cellulose, as well as many elements needed by the human body, and helps lower blood lipids, protect eyesight, soften blood vessels, and reduce blood sugar [1,2]. Buckwheat is also an important forage source for livestock [3], and is mainly cultivated in the Russian Federation, Ukraine, the United States of America, Brazil, Kazakhstan, Japan, Belarus, Nepal, and the central and western regions of China [4]. Height, one of the most important indicators for buckwheat, provides an important reference for irrigation and fertilization management [5]. During the growth process, keeping buckwheat height within an appropriate range helps increase its lodging resistance, mechanical harvesting performance, and yield [6,7,8]. During spraying, it is also necessary to adjust the heights of the sprinkler heads in real time according to crop height [9]. Agriculture is becoming automated, intelligent, and unmanned [10]. Accurate measurement of buckwheat plant height therefore helps save fertilizer, manpower, and pesticides, as well as improve economic returns.
Plant height is traditionally measured with a scale or vernier caliper, which is highly accurate but time-consuming, laborious, and subject to subjective factors. With the progress of detection technology, an increasing number of researchers have sought modern, noncontact methods for estimating plant height that avoid direct measurement. These methods include the following:
(i)
Image processing method: After acquiring 2D or 3D images of plants, rectangle fitting [11], extracting the highest points of regions of interest [12], detecting peak pixels of depth images [13], creating 3D point clouds [14], and other methods have been applied to obtain crop height. Although image processing is somewhat cumbersome, the accuracy of this method exceeds 90%.
(ii)
Light detection and ranging (LiDAR) method: LiDAR emits light pulses periodically and calculates distance from their time of flight [15]. Three-dimensional point cloud images of crops can therefore be obtained with LiDAR, after which plant height is derived by subtracting the bare-soil elevation from the crop point cloud. Using the LiDAR method, the coefficient of determination (R2) between estimated crop height and its true value is higher than 94% [16,17,18,19,20]. However, LiDAR sensors are currently expensive for measuring crop height.
(iii)
Ultrasonic method: This method also measures the distance from the target to the sensor. When used to measure crop height, the root mean square error (RMSE) was in the range of 1 to 4 cm, and the correlation coefficient was higher than 0.9 [21,22,23,24]. However, ultrasonic sensors are susceptible to interference and range slowly, which hinders real-time crop height acquisition.
(iv)
Hyperspectral imaging method: Plants absorb and reflect light differently at different wavelengths, so vegetation indices that reflect plant canopy information can be calculated from two or more wavebands. Some scholars have tried to use hyperspectral images to estimate crop height, with accuracies lower than those of other methods: an RMSE of 5.92–7.37 cm and R2 of 0.78–0.96 [25,26].
These methods can effectively predict crop height, but they all require a high level of technical expertise from operators, making them unfriendly to nonprofessionals. Some application scenarios, such as nozzle height adjustment in a spray system and cutting table height adjustment in a harvester, place strict requirements on the speed and automation of height information acquisition [27,28], and the existing methods struggle to meet them.
A convolutional neural network (CNN) extracts information from images automatically, in an end-to-end manner, with fast processing speeds [29,30]; once preprocessed images are input into the network, estimation results are produced directly, without manual intervention. MobileNet V3, NasNet, RegNet, EfficientNet, and other mature feature extraction models have been applied in agriculture with desirable results [31,32,33,34]. MobileNet V3 retains the depth-wise separable convolutions of MobileNet V1 and the linear inverted residual bottlenecks of MobileNet V2. In addition, it introduces a Squeeze-and-Excitation (SE) block [35] to reinforce important information between channels (see the sketch below), and optimizes the final layers without reducing inference accuracy. As a result, MobileNet V3 has fewer parameters, less computation, a short inference time, and high accuracy, making it a preferred lightweight model [36]. Designing a neural network by hand becomes increasingly challenging as the number of network parameters grows. Based on a neural architecture search, NasNet uses a new search space designed for transferability, allowing the generated network to be transferred to a large-scale dataset, and proposes a regularization method called ScheduledDropPath, which improves the generalization ability of the model. NasNet outperforms previous human-designed models in terms of accuracy and inference speed [37]. RegNet takes the residual bottleneck as its basic module; studies the design of bottleneck ratios, group counts, channel counts, and depth at each stage; and gradually narrows the search space to finally obtain a small, high-performance family of models. This approach is useful for the automatic optimization of network architectures [38]. EfficientNet is an efficient CNN architecture that optimizes the depth, width, and resolution of the network simultaneously through compound scaling, achieving better performance while keeping the number of model parameters relatively small [39]. At the same time, 3D images of crops obtained via stereo vision can reflect height information without contact. It is therefore promising to combine stereo vision and a CNN to realize rapid, accurate, and nondestructive estimation of buckwheat height that meets the real-time control requirements of production.
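As an illustration of the SE mechanism referenced above [35], the following is a minimal Keras sketch; it is an illustrative reimplementation, not the authors' code, and the function name and reduction ratio are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, reduction=4):
    """Squeeze-and-Excitation: learn per-channel weights and rescale the input."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                         # squeeze: one value per channel
    s = layers.Dense(channels // reduction, activation="relu")(s)  # excitation, bottlenecked
    s = layers.Dense(channels, activation="sigmoid")(s)            # per-channel weights in (0, 1)
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                               # reweight the feature maps
```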
Given the above rationale, we aimed to develop a buckwheat height estimation method via the use of stereo vision and a regression convolutional neural network. This study’s objectives were (i) to train and select a buckwheat height estimation model, (ii) to test the accuracy of this model, and (iii) to apply the method.

2. Materials and Methods

2.1. Design of the Buckwheat 3D Image Acquisition System

Training and evaluating a regression convolutional neural network require a large set of 3D images and corresponding height data for buckwheat. To acquire the 3D images, a buckwheat 3D image acquisition system was designed (Figure 1). An Intel RealSense D435 depth camera can transmit data and draw power over a single USB cable, which makes it convenient to use in a field environment, so a D435 with a resolution of 1280 × 720 pixels was selected for this study. The effective range of the depth camera was 0.2 to 10 m, and the error was less than 2% in the range of 0.2 to 2 m. The camera was mounted on an adjustable stand and connected to a laptop via a USB cable. Given the buckwheat height and the camera's accuracy range, the camera plane was adjusted to be parallel to the ground at a height of 1.6 m before images were acquired.
A control program for the system was developed on the LabVIEW software development platform (32-bit, 2018 edition), which handles the acquisition, analysis, storage, and control of images and other data. RealSense also provides an application programming interface for LabVIEW, which makes it easy to design a depth image acquisition program. The developed control program of the buckwheat 3D image acquisition system is shown in Figure 2. Three-dimensional images of buckwheat were acquired manually by clicking the 'Capture' button; once image capture was finished, the 'Stop' button was clicked to terminate the program. The 3D images obtained were in an APD format, in which the pixel value at each point is the distance in millimeters from the object to the depth camera: the redder the color, the greater the distance; the greener the color, the smaller the distance. The file name of each depth image was set to the capture time, accurate to the millisecond, e.g., '2022-07-28 17:50 11 322.apd'.
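For readers without LabVIEW, a minimal Python sketch of the same acquisition step is shown below, assuming the pyrealsense2 package; the authors' actual program is the LabVIEW one described above, and the NumPy file written here is a stand-in for the APD format:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Request the D435 depth stream; 1280 x 720 matches the camera used in this study.
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
profile = pipeline.start(config)
# Scale factor that converts raw 16-bit depth units to meters.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth_raw = np.asanyarray(frames.get_depth_frame().get_data())   # uint16, 720 x 1280
depth_mm = depth_raw.astype(np.float32) * depth_scale * 1000.0   # distance in millimeters
np.save("buckwheat_depth.npy", depth_mm)                         # stand-in for the .apd file
pipeline.stop()
```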

2.2. Data Acquisition

This study was conducted at the experimental field of Shanxi Agricultural University, in the Taigu district of Shanxi province, China (37°25.6′ N, 112°35.0′ E). The experimental field is divided into two parts along the north (Field A) and south (Field B) sides of a farm track. The field on the north side is 132.41 × 144.18 m, approximately 1.9 ha, and the field on the south side is 133.54 × 97.35 m, approximately 1.3 ha. The soil of the experimental field is sandy. The mean annual rainfall of this site is 391 mm, and the mean temperature is 13 °C. The Hongshan buckwheat variety was selected as the test object for this study; the local growth period of this variety is from July to October. The sowing ridge spacing was 50 cm, and the sowing density was about 2,100,000 plants/ha. The fertilizer used in the experiment was biological fertilizer (2000 kg/ha).
Modeling data and test data were collected for this study; the modeling data were further divided into training data and validation data according to k-fold cross-validation (k = 5 in this study). The training data were used to train the buckwheat height estimation regression CNN models. Each time a model was trained, its accuracy was examined on the validation data, and the best model parameters were saved. After each model was trained, the test data, which were not involved in model training, were used to investigate its final estimation accuracy, and the final estimation model was selected.
The 3D images of buckwheat were obtained using the buckwheat 3D image acquisition system. Before each 3D image was captured, the camera plane was adjusted to be parallel to the ground at a distance of 1600 mm. The modeling data were collected in Field A, and the test data were collected in Field B. Plant height was defined as the average vertical distance from the soil to the highest point of a plant with all of its parts in their natural position [40,41], and was measured using a measuring stick. Five plants under the depth camera were chosen randomly and measured manually, and their average was regarded as the mean crop height for that 3D image; each 3D image was then labeled with this value. Because crops are short at the seedling stage and soil unevenness strongly affects the accuracy of height estimation in that period, the collected data covered the range from 4 cm to 100 cm (mature period). A total of 23,416 pieces of modeling data and 2930 pieces of test data, captured from different directions, were used in this study; sample images are shown in Figure 3. The amounts of modeling and test data were similar in each crop height interval. The numbers of modeling data points in the height categories of 4–20 cm, >20–40 cm, >40–60 cm, >60–80 cm, and >80–100 cm were 4781, 4692, 4688, 4807, and 4448, respectively; the corresponding numbers of test data points were 551, 581, 610, 598, and 590.

2.3. Data Preprocessing

The purpose of data preprocessing is to improve the training speed of the estimation models. To ensure that estimation accuracy was not affected by preprocessing, the same preprocessing method was applied to all images (Figure 4). The size of a raw 3D image was 1280 × 720 pixels, and the pixel value at each point represents the distance in millimeters from the object to the camera plane. To restore the true height of the buckwheat, it was necessary to convert these distances, as follows:
$P_t = 1600 - P_r$
where $P_t$ represents the true height of the object in a converted image and $P_r$ is the pixel's value in the raw image. After conversion, the pixel values in a converted image ranged from 0 to 1600. The 3D images in the APD format were then converted into 1280 × 720 pixel grayscale images that could be recognized by the deep learning models, using the following formula:
$P_g = P_t \times 255 / 1600$
where $P_g$ is the pixel value in the grayscale image. At this point, the pixel values in a grayscale image ranged from 0 to 255. To improve the training speed of the models, all grayscale images were uniformly scaled to 224 × 224 pixels. Since all of the 3D images were processed in the same way, and the process did not change the relative position and proportion of buckwheat in the images, it had little impact on the final estimation. Finally, each resized image was labeled with the average crop height in the image.
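A minimal sketch of this preprocessing chain is given below, assuming the APD file has already been loaded as a 1280 × 720 NumPy array of distances in millimeters (the function name is illustrative):

```python
import numpy as np
import cv2

def preprocess(depth_mm: np.ndarray) -> np.ndarray:
    """Convert a 1280 x 720 depth image (mm) to a 224 x 224 grayscale CNN input."""
    p_t = 1600.0 - depth_mm               # distance-to-height conversion (first formula)
    p_t = np.clip(p_t, 0.0, 1600.0)       # keep heights within the 0-1600 mm range
    p_g = p_t * 255.0 / 1600.0            # map heights to 0-255 gray levels (second formula)
    gray = p_g.astype(np.uint8)
    return cv2.resize(gray, (224, 224))   # uniform scaling for faster training
```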

2.4. Building, Training, and Testing of Buckwheat Height Estimation Models

Four lightweight classic models (MobileNet V3 Small, NasNet Mobile, RegNet Y002, and EfficientNet V2 B0) and four heavyweight models (MobileNet V3 Large, NasNet Large, RegNet Y008, and EfficientNet V2 L) were selected and modified into regression convolutional neural networks for buckwheat height estimation, and plant height estimation models were constructed on these feature extraction backbones. These networks were originally designed for classification, whereas crop height estimation is a regression task, so the original models had to be converted into regression models. The specific method is shown in Figure 5: (i) remove the classification layer of each model; and (ii) add a global average pooling (GAP) layer and a dense layer with a single node and no activation function. All of the modified models were built with Python 3.9.16, TensorFlow-GPU 2.10.0, and Keras 2.10.0. The code was run on a computer with an Intel i7-12700K processor, 64 GB of RAM, Windows 11 (64 bit), and an Nvidia GeForce RTX 3090 24 GB graphics card with Nvidia Ampere architecture. All of the models took the preprocessed 224 × 224 pixel grayscale images as input and produced the estimated buckwheat height as output. The code and training results are available at the following GitHub link: https://github.com/18801389568/Buckwheat-height-estimation (accessed on 26 July 2023).
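The sketch below illustrates this classification-to-regression modification, using EfficientNet V2 B0 from keras.applications as the backbone (the paper's RegNet Y008 would be wired the same way); replicating the single grayscale channel to three channels so that ImageNet-pretrained weights can be reused is our assumption, not a detail stated by the authors:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(224, 224, 1))               # preprocessed grayscale image
x = layers.Concatenate()([inputs, inputs, inputs])       # replicate to 3 channels (assumption)
backbone = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights="imagenet", input_tensor=x)   # (i) classification head removed
x = layers.GlobalAveragePooling2D()(backbone.output)     # (ii) GAP layer
height = layers.Dense(1)(x)                              # (ii) one node, no activation
model = tf.keras.Model(inputs, height, name="regression_cnn")
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
```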
Training the models was essentially a process of continually updating each model's trainable parameters so that the crop height estimation results became increasingly accurate. Considering the quantity of collected data, five-fold cross-validation and fine-tuning were used. K-fold cross-validation effectively improves the learning ability of a model, in a way similar to increasing the number of training samples, making the trained model more robust. Fine-tuning in this paper started from models pretrained on the ImageNet dataset, which not only accelerated convergence but also improved the generalization ability of the models. To evaluate the estimation accuracy of each model under the same conditions, all of the models used the same hyper-parameters; the Adam function was used to optimize the model parameters (with the default configuration [42,43], shown in Table 1), and the mean square error (MSE) on the validation data was used to evaluate each model's accuracy in estimating buckwheat crop height. The MSE was defined as follows:
$\mathrm{MSE} = \frac{1}{5}\left(\min(\mathrm{MSE}_{\mathrm{fold1}}) + \min(\mathrm{MSE}_{\mathrm{fold2}}) + \min(\mathrm{MSE}_{\mathrm{fold3}}) + \min(\mathrm{MSE}_{\mathrm{fold4}}) + \min(\mathrm{MSE}_{\mathrm{fold5}})\right)$
$\mathrm{MSE}_{\mathrm{fold}m} = \frac{1}{N}\sum_{n=1}^{N}\left(y_n^{\mathrm{fold}m} - \hat{y}_n^{\mathrm{fold}m}\right)^2$
where fold$m$ is any one of fold1 through fold5; $N$ is the number of validation data points; $y_n^{\mathrm{fold}m}$ is the measured plant height of the $n$th data point in the $m$th fold of training; and $\hat{y}_n^{\mathrm{fold}m}$ is the corresponding estimated plant height. Each fold of a model was trained for 300 epochs with a batch size of 32, and the parameters with the highest estimation accuracy on the validation data were retained during the training of each fold.
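A minimal sketch of this cross-validation loop is shown below, assuming `build_model()` constructs the regression CNN sketched earlier and that `images` and `heights` hold the modeling data as NumPy arrays:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
fold_mse = []
for m, (train_idx, val_idx) in enumerate(kfold.split(images), start=1):
    model = build_model()  # fresh ImageNet-pretrained regression CNN for each fold
    checkpoint = tf.keras.callbacks.ModelCheckpoint(
        f"fold{m}.h5", monitor="val_loss", save_best_only=True)  # keep the best parameters
    history = model.fit(images[train_idx], heights[train_idx],
                        validation_data=(images[val_idx], heights[val_idx]),
                        epochs=300, batch_size=32, callbacks=[checkpoint])
    fold_mse.append(min(history.history["val_loss"]))  # min(MSE_foldm)
print("cross-validated MSE:", np.mean(fold_mse))       # average over the five folds
```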
Table 2 presents the information for each trained model. The parameters comprise the trainable and nontrainable parameters in each model; the nontrainable parameters include those in the GAP and batch-normalization layers. The modified MobileNet V3 Small had the fewest parameters and trainable parameters, the smallest model size, and the shortest training time. Conversely, the modified EfficientNet V2 L had the most parameters and trainable parameters, the largest model size, and the longest training time.
After the models were trained, the final buckwheat height estimation model was selected according to the MSE and retrained using all of the modeling data. In the retraining process, the Adam function was again used as the optimizer, with the same parameter configuration as before, and the MSE on the modeling data was chosen as the loss function for saving the best parameters. The generalization performance of the selected model, in terms of the mean relative error (MRE), mean absolute error (MAE), root mean square error (RMSE), and mean estimation time (MET) per image, was then verified using the test data.
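These test metrics can be computed with a short helper such as the following sketch, assuming `y` holds the measured heights (cm) and `y_hat` the model's estimates on the test data:

```python
import numpy as np

def evaluate(y: np.ndarray, y_hat: np.ndarray) -> dict:
    """Compute the error metrics used to assess the selected model."""
    err = y_hat - y
    return {
        "MAE (cm)":  np.mean(np.abs(err)),
        "MSE":       np.mean(err ** 2),
        "RMSE (cm)": np.sqrt(np.mean(err ** 2)),
        "MRE (%)":   100.0 * np.mean(np.abs(err) / y),
    }
```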

3. Results and Discussion

3.1. Training Results of the Models

Table 3 shows the minimum MSE and the MET during the training of each fold for each model. Every model achieved high estimation accuracy and realized real-time estimation. It was found that a model that is too large or too small reduces estimation accuracy, possibly because a small-scale model has too little fitting capacity, while a large-scale model with too many parameters tends to overfit. Similar to how these models perform on the ImageNet dataset, the modified RegNet Y008 had the best estimation accuracy, with an MSE as low as 0.065. This may be because the modified RegNet Y008, optimized through its design space, has a structure and scale better suited to extracting height information from depth images than the other models. Therefore, the modified RegNet Y008 was selected as the buckwheat height estimation model in this study.
The modified RegNet Y008 was then retrained using all of the modeling data. Figure 6 shows the change in the model's loss (MSE) on the modeling data during retraining. The MSE showed an overall decreasing trend, and the model had converged by the 111th training epoch. The lowest MSE, 0.023, occurred at the 293rd epoch, and these model parameters were saved as the final buckwheat height estimation model.

3.2. Model Test Results

To check the model's ability to estimate data not involved in its training, the test data were used to investigate its generalization ability. The measured and estimated plant heights are shown in Figure 7. The coefficient of determination (R2) between the measured and estimated plant heights was as high as 0.9994. The MAE, RMSE, and MSE were 0.56 cm, 0.73 cm, and 0.54 cm, respectively; the maximum absolute error was 3.14 cm, and the MRE was 1.7%. The accuracy of this method was slightly lower than that reported in [11] (image processing), [17] (LiDAR), and [22] (ultrasonic), and higher than in the other literature. However, once the model is trained, it outputs an estimate whenever a preprocessed image is input, enabling fully automatic estimation of crop height. The MET was 14.33 ms per image, which permits automatic, real-time estimation.
The estimation accuracy of buckwheat plant height could not be improved further because (1) the depth camera itself suffers from up to 2% error when capturing depth images; (2) the camera's optical axis was supposed to be perpendicular to the ground, but larger or smaller deviations occurred in practice; (3) measurement errors occurred when crop height was measured with a stick; (4) the uneven, unridged ground in the experimental field introduced errors into the collected depth images; and (5) wind changed the height of the crop while depth images were being acquired, causing estimation errors.
The estimated relative error at each height is shown in Figure 8. The relative estimation error gradually decreased as crop height increased. When the buckwheat plant height was 4 cm, 7 cm, and 10 cm, the average relative error was 8.4%, 5.6%, and 5.5%, respectively. This may be because, when the crop is shorter than 10 cm, ground unevenness has a significant impact on the accuracy of crop height estimation, and the ranging accuracy of the depth camera is slightly lower when the camera is relatively far from the crop. When the buckwheat is taller than 10 cm, the estimated relative error decreases significantly and stabilizes.

3.3. Feature Maps Analysis

Feature maps can roughly reflect what information the model uses to estimate crop height. The shallow layers of a model commonly extract elementary information, such as contours, edges, and textures, while more abstract and comprehensive information is extracted in deeper layers. To determine on what basis the modified RegNet Y008 estimated buckwheat height, the 32 feature maps (Figure 9b) generated by the first convolutional layer of the model were inspected. Comparing the input image (Figure 9a) with the feature maps shows that, after the first convolutional layer, the height and contour information in some feature maps was activated, while in others the texture information was extracted and the background was suppressed. It can therefore be inferred that the model estimates plant height mainly from the depth and contour information of the input buckwheat depth image.
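This kind of inspection can be reproduced with a small probe model, as in the sketch below, assuming `model` is the trained regression CNN (built with the functional API, as above) and `image` is one preprocessed 224 × 224 × 1 input:

```python
import tensorflow as tf
import matplotlib.pyplot as plt

# Find the first Conv2D layer and build a probe model that outputs its activations.
first_conv = next(l for l in model.layers if isinstance(l, tf.keras.layers.Conv2D))
probe = tf.keras.Model(model.input, first_conv.output)
maps = probe.predict(image[None, ...])       # shape: (1, H, W, channels)

# Display the first 32 feature maps, as in Figure 9b.
for i in range(32):
    plt.subplot(4, 8, i + 1)
    plt.imshow(maps[0, :, :, i], cmap="gray")
    plt.axis("off")
plt.show()
```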

3.4. Application Prospect

LabVIEW is an integrated development platform that can collect and analyze the information required by production and, ultimately, support reasoning and decision-making. The 2023 Q1 version of LabVIEW supports Python 3.9, which makes it possible to combine the depth camera and the trained model for real-time, accurate buckwheat plant height estimation. After model training, a buckwheat plant height estimation system based on LabVIEW (Figure 10) was developed. When a depth image of buckwheat is collected by the system, the plant height is estimated automatically within 0.1 s, without manual intervention. The system can be used for crop irrigation and fertilization management. It can also be installed on tractors to control the height of a sprinkler head during spraying and to adjust the height of a cutting table during harvesting.
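A minimal sketch of the Python side of such an integration is shown below: a function that LabVIEW's Python Node could call for each captured image, assuming the retrained model was saved as `regnet_y008.h5` and that `preprocess` is the function sketched in Section 2.3 (the file and module names are illustrative):

```python
import numpy as np
import tensorflow as tf
from preprocessing import preprocess  # the Section 2.3 sketch (illustrative module name)

_model = tf.keras.models.load_model("regnet_y008.h5")  # load once when the module is imported

def estimate_height(depth_mm) -> float:
    """Return the estimated plant height (cm) for one 1280 x 720 depth image in mm."""
    gray = preprocess(np.asarray(depth_mm, dtype=np.float32))  # 224 x 224 grayscale
    x = gray[None, :, :, None].astype(np.float32)              # add batch and channel axes
    return float(_model.predict(x, verbose=0)[0, 0])
```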
In short, stereo vision and a regression convolutional neural network can be used to estimate buckwheat plant height under field conditions. The estimation process can be fully automated, without human intervention, and the method's fast estimation speed ensures real-time operation in field management. Even when estimating the plant height of buckwheat of different varieties or under different planting patterns, only a small amount of buckwheat data needs to be collected to correct the output of the original model.

4. Conclusions

We propose an innovative method for estimating buckwheat plant height using stereo vision and a regression convolutional neural network under field conditions. After a five-fold cross-validation among the modified MobileNet V3 Small, NasNet Mobile, RegNet Y002, EfficientNet V2 B0, MobileNet V3 Large, NasNet Large, RegNet Y008, and EfficientNet V2 L, the modified RegNet Y008 was selected as the optimal buckwheat height estimation model. The method estimates plant height based on the depth and contour information of the buckwheat depth images. The MAE, RMSE, MSE, and MRE when estimating buckwheat height were 0.56 cm, 0.73 cm, 0.54 cm, and 1.7%, respectively. The method successfully estimates buckwheat plant height on the LabVIEW 2023 Q1 platform in a fully automated way, with an MET of less than 0.1 s. It is feasible to apply this method on tractors to control the height of a sprinkler head during spraying in a timely manner, as well as to adjust the height of a cutting table during harvesting. The method is suitable for large-scale, automated, and unmanned farms, helping to avoid the uncontrolled and inadequate use of fertilizers and pesticides and thus reducing imbalances in soil factors important for preserving biocoenosis and groundwater. The method can also be used to estimate the plant height of other crops, such as soybean and wheat, and more specific crop phenotypic parameters could be determined in the future by combining RegNet Y008 with object detection technology.

Author Contributions

Data curation, J.Z. and W.X.; investigation, D.Z.; methodology, J.Z. and X.S.; project administration, D.Z.; software, W.L. and Y.C.; supervision, D.Z.; writing—original draft, J.Z.; writing—review and editing, D.Z., Y.C. and W.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shanxi Province Excellent Doctoral Work Award Scientific Research Project (No. SXBYKY2022019), Shanxi Agricultural University Ph.D. Research Startup Project (No. 2021BQ85), Major Special Projects for the Construction of China Modern Agricultural Industrial Technology System (No. CARS-07-D-2), and Shanxi Agricultural University Academic Recovery Project (No. 2023XSHF2).

Data Availability Statement

Those interested in the data used in this study may contact [email protected] for the original dataset.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ren, C.; Shan, F.; Wang, M.; Li, Y. Review on nutrition and functionality and food product development of buckwheat. J. Chin. Cereals Oils Assoc. 2021, 37, 261–269. [Google Scholar] [CrossRef]
  2. Wang, Y.; Qu, S.; Chen, M.; Cui, Y.; Shi, C.; Pu, X.; Gao, W.; Li, Q.; Han, J.; Zhang, A. Effects of buckwheat milk Co-fermented with two probiotics and two commercial yoghurt strains on gut microbiota and production of short-chain Fatty Acids. Food Biosci. 2023, 53, 102537. [Google Scholar] [CrossRef]
  3. Yan, W.; Qian, X.; Chunmei, W.; Yiyu, Z. Effects of different forage ratios on fattening, carcass and meat traits of Weining cattle. Feed. Ind. 2023, 44, 54–62. [Google Scholar] [CrossRef]
  4. FAO. Food and Agriculture Organization of the United Nations. Available online: https://www.fao.org/faostat/en/#data/QCL/visualize (accessed on 24 March 2023).
  5. Jian, Z.; Tianjin, X.; Wanneng, Y.; Guangsheng, Z. Research status and prospect on height estimation of field crop using near-field remote sensing technology. Smart Agric. 2021, 3, 1–15. [Google Scholar] [CrossRef]
  6. Wenliang, C.; Xiulian, L.; Xinghai, S.; Xiaodong, L.; Gaimei, L.; Longlong, L. Comprehensive evaluation on yield characters of common buckwheat and AMMI analysis. J. Nucl. Agric. Sci. 2023, 37, 60–68. [Google Scholar] [CrossRef]
  7. Zhang, C.; Craine, W.A.; McGee, R.J.; Vandemark, G.J.; Davis, J.B.; Brown, J.; Hulbert, S.H.; Sankaran, S. High-throughput phenotyping of canopy height in cool-season crops using sensing techniques. Agron. J. 2021, 113, 3269–3280. [Google Scholar] [CrossRef]
  8. Lumme, J.; Karjalainen, M.; Kaartinen, H.; Kukko, A.; Hyyppa, J.; Hyyppa, H.; Jaakkola, A.; Kleemola, J. Terrestrial laser scanning of agricultural crops. In Proceedings of the 2008 21st ISPRS International Congress for Photogrammetry and Remote Sensing, Beijing, China, 3–11 July 2008; pp. 563–566. [Google Scholar]
  9. Xinhua, W.; Jing, S.; Dandan, M.; Lin, L.; Xiaowei, X. Online Control System of Spray Boom Height and Balance. Trans. Chin. Soc. Agric. Mach. 2015, 46, 66–71. [Google Scholar] [CrossRef]
  10. Yubin, L.; Denan, Z.; Yanfei, Z.; Junke, Z. Exploration and development prospect of eco-unmanned farm modes. Trans. Chin. Soc. Agric. Eng. 2021, 37, 312–327. [Google Scholar] [CrossRef]
  11. Gupta, C.; Tewari, V.K.; Machavaram, R.; Shrivastava, P. An image processing approach for measurement of chili plant height and width under field conditions. J. Saudi Soc. Agric. Sci. 2022, 21, 171–179. [Google Scholar] [CrossRef]
  12. Kim, W.-S.; Lee, D.-H.; Kim, Y.-J.; Kim, T.; Lee, W.-S.; Choi, C.-H. Stereo-vision-based crop height estimation for agricultural robots. Comput. Electron. Agric. 2021, 181, 105937. [Google Scholar] [CrossRef]
  13. Jiang, Y.; Li, C.; Paterson, A.H. High throughput phenotyping of cotton plant height using depth images under field conditions. Comput. Electron. Agric. 2016, 130, 57–68. [Google Scholar] [CrossRef]
  14. Andújar, D.; Ribeiro, A.; Fernández-Quintanilla, C.; Dorado, J. Using depth cameras to extract structural parameters to assess the growth state and yield of cauliflower crops. Comput. Electron. Agric. 2016, 122, 67–73. [Google Scholar] [CrossRef]
  15. Rivera, G.; Porras, R.; Florencia, R.; Sánchez-Solís, J.P. LiDAR applications in precision agriculture for cultivating crops: A review of recent advances. Comput. Electron. Agric. 2023, 207, 107737. [Google Scholar] [CrossRef]
  16. Dhami, H.; Yu, K.; Xu, T.; Zhu, Q.; Dhakal, K.; Friel, J.; Li, S.; Tokekar, P. Crop height and plot estimation for phenotyping from unmanned aerial vehicles using 3D LiDAR. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 2643–2649. [Google Scholar]
  17. Yuan, W.; Li, J.; Bhatta, M.; Shi, Y.; Baenziger, P.S.; Ge, Y. Wheat height estimation using LiDAR in comparison to ultrasonic sensor and UAS. Sensors 2018, 18, 3731. [Google Scholar] [CrossRef] [PubMed]
  18. Zhou, L.; Gu, X.; Cheng, S.; Guijun, Y.; Shu, M.; Sun, Q. Analysis of plant height changes of lodged maize using UAV-LiDAR data. Agriculture 2020, 10, 146. [Google Scholar] [CrossRef]
  19. Jimenez-Berni, J.A.; Deery, D.M.; Rozas-Larraondo, P.; Condon, A.T.G.; Rebetzke, G.J.; James, R.A.; Bovill, W.D.; Furbank, R.T.; Sirault, X.R.R. High throughput determination of plant height, ground cover, and above-ground biomass in wheat with LiDAR. Front. Plant Sci. 2018, 9, 237. [Google Scholar] [CrossRef]
  20. Walter, J.D.C.; Edwards, J.; McDonald, G.; Kuchel, H. Estimating biomass and canopy height with LiDAR for field crop breeding. Front. Plant Sci. 2019, 10, 1145. [Google Scholar] [CrossRef]
  21. Chang, Y.K.; Zaman, Q.U.; Rehman, T.U.; Farooque, A.A.; Esau, T.; Jameel, M.W. A real-time ultrasonic system to measure wild blueberry plant height during harvesting. Biosyst. Eng. 2017, 157, 35–44. [Google Scholar] [CrossRef]
  22. Montazeaud, G.; Langrume, C.; Moinard, S.; Goby, C.; Ducanchez, A.; Tisseyre, B.; Brunel, G. Development of a low cost open-source ultrasonic device for plant height measurements. Smart Agric. Technol. 2021, 1, 100022. [Google Scholar] [CrossRef]
  23. Bronson, K.F.; French, A.N.; Conley, M.M.; Barnes, E.M. Use of an ultrasonic sensor for plant height estimation in irrigated cotton. Agron. J. 2021, 113, 2175–2183. [Google Scholar] [CrossRef]
  24. Wang, X.; Singh, D.; Marla, S.; Morris, G.; Poland, J. Field-based high-throughput phenotyping of plant height in sorghum using different sensing technologies. Plant Methods 2018, 14, 53. [Google Scholar] [CrossRef]
  25. Tao, H.; Feng, H.; Xu, L.; Miao, M.; Yang, G.; Yang, X.; Fan, L. Estimation of the Yield and Plant Height of Winter Wheat Using UAV-Based Hyperspectral Images. Sensors 2020, 20, 1231. [Google Scholar] [CrossRef] [PubMed]
  26. Zhen, Z.; Lou, Y.S.; Moses, O.A.; Rui, L.; Li, M.; Jun, L. Hyperspectral vegetation indexes to monitor wheat plant height under different sowing conditions. Spectrosc. Lett. 2020, 53, 194–206. [Google Scholar] [CrossRef]
  27. Haoran, T.; Hanjie, D.; Changyuan, Z.; Yukang, L.; Shuo, Y.; Liping, C. Design and test of boom height control system for boom sprayer. J. Agric. Mech. Res. 2021, 43, 156–160. [Google Scholar] [CrossRef]
  28. Weijian, L.; Xiwen, L.; Shan, Z.; Li, Z. Performance test and analysis of the self-adaptive profiling header for ratooning rice based on fuzzy PID control. Trans. Chin. Soc. Agric. Eng. 2022, 38, 1–9. [Google Scholar] [CrossRef]
  29. Arumuga Arun, R.; Umamaheswari, S. Effective multi-crop disease detection using pruned complete concatenated deep learning model. Expert Syst. Appl. 2023, 213, 118905. [Google Scholar] [CrossRef]
  30. Sanaeifar, A.; Guindo, M.L.; Bakhshipour, A.; Fazayeli, H.; Li, X.; Yang, C. Advancing precision agriculture: The potential of deep learning for cereal plant head detection. Comput. Electron. Agric. 2023, 209, 107875. [Google Scholar] [CrossRef]
  31. Wei, Z.; Rui, M.; Jia, W.; Hongjie, G.; Jinpu, X. Classification and Identification of Corn Varieties Based on Ear Image. J. Agric. Sci. Technol. 2023, 25, 97–106. [Google Scholar] [CrossRef]
  32. Tiantian, D.; Xinyuan, N.; Jiaxing, H.; Wenlong, Z.; Zhixia, M. Identifying the damage degree of various crop diseases using an improved RegNet. Trans. Chin. Soc. Agric. Eng. 2022, 38, 150–158. [Google Scholar] [CrossRef]
  33. Li, Y.; Ma, X.; Wang, J. Pineapple maturity analysis in natural environment based on Mobilenet V3-YOLOv4. Smart Agric. 2023, 5, 35–44. [Google Scholar] [CrossRef]
  34. Agarwal, M.; Gupta, S.K.; Biswas, K.K. Development of Efficient CNN model for Tomato crop disease identification. Sustain. Comput. Inform. Syst. 2020, 28, 100407. [Google Scholar] [CrossRef]
  35. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar] [CrossRef]
  36. Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. arXiv 2019, arXiv:1905.02244. [Google Scholar] [CrossRef]
  37. Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. arXiv 2018, arXiv:1707.07012. [Google Scholar] [CrossRef]
  38. Radosavovic, I.; Prateek Kosaraju, R.; Girshick, R.; He, K.; Dollár, P. Designing network design spaces. arXiv 2020, arXiv:2003.13678. [Google Scholar] [CrossRef]
  39. Tan, M.; Le, Q.V. EfficientNetV2: Smaller models and faster training. arXiv 2021, arXiv:2104.00298. [Google Scholar] [CrossRef]
  40. Heady, H.F. The measurement and value of plant height in the study of herbaceous vegetation. Ecology 1957, 38, 313–320. [Google Scholar] [CrossRef]
  41. Pérez-Harguindeguy, N.; Díaz, S.; Garnier, E.; Lavorel, S.; Poorter, H.; Jaureguiberry, P.; Bret-Harte, M.S.; Cornwell, W.K.; Craine, J.M.; Gurvich, D.E.; et al. New handbook for standardised measurement of plant functional traits worldwide. Aust. J. Bot. 2013, 61, 167–234. [Google Scholar] [CrossRef]
  42. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  43. Keras. Adam. Available online: https://keras.io/api/optimizers/adam/ (accessed on 28 April 2020).
Figure 1. Buckwheat 3D image acquisition system.
Figure 2. Control program of the buckwheat 3D image acquisition system.
Figure 3. Samples of buckwheat images captured in different directions.
Figure 4. Three-dimensional image preprocessing method.
Figure 5. Construction method of the buckwheat crop height estimation models.
Figure 6. MSE change in the modified RegNet Y008 on modeling data.
Figure 7. Comparison between the measured and estimated plant height.
Figure 8. Relative error distribution in estimation.
Figure 9. Input image (a) and feature maps (b) output from the first convolutional layer of the modified RegNet Y008.
Figure 10. LabVIEW panel of the buckwheat plant height estimation system.
Table 1. The configuration parameters of the Adam optimization function.

| Learning Rate | Weight Decay | Beta-1 | Beta-2 | Epsilon  |
|---------------|--------------|--------|--------|----------|
| 0.001         | 0.01         | 0.9    | 0.999  | 1 × 10−7 |
Table 2. Information of the models.

| Modified Model     | Parameters  | Trainable Parameters | Model Size (MB) | Training Time (h) |
|--------------------|-------------|----------------------|-----------------|-------------------|
| MobileNet V3 Small | 1,590,993   | 1,518,881            | 22.5            | 4.4               |
| NasNet Mobile      | 4,270,773   | 4,234,035            | 70.4            | 12.0              |
| RegNet Y002        | 2,815,213   | 2,794,365            | 37.5            | 6.1               |
| EfficientNet V2 B0 | 5,920,593   | 5,859,985            | 75.2            | 8.5               |
| MobileNet V3 Large | 2,997,313   | 2,972,913            | 39.6            | 5.0               |
| NasNet Large       | 84,920,851  | 84,724,183           | 1000.2          | 23.6              |
| RegNet Y008        | 5,524,825   | 5,494,937            | 68.8            | 8.2               |
| EfficientNet V2 L  | 117,748,129 | 117,235,553          | 1373.5          | 24.2              |
Table 3. Minimum MSE and MET of each fold during the training process for each model.

| Modified Model     | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | MSE   | MET      |
|--------------------|--------|--------|--------|--------|--------|-------|----------|
| MobileNet V3 Small | 0.103  | 0.109  | 0.140  | 0.115  | 0.112  | 0.116 | 9.90 ms  |
| NasNet Mobile      | 0.051  | 0.115  | 0.062  | 0.262  | 0.063  | 0.111 | 45.05 ms |
| RegNet Y002        | 0.513  | 0.500  | 0.483  | 0.233  | 0.163  | 0.378 | 13.65 ms |
| EfficientNet V2 B0 | 0.084  | 0.110  | 0.086  | 0.066  | 0.036  | 0.076 | 28.32 ms |
| MobileNet V3 Large | 0.190  | 0.353  | 0.283  | 0.216  | 0.530  | 0.314 | 11.94 ms |
| NasNet Large       | 0.244  | 0.066  | 0.014  | 0.100  | 0.007  | 0.086 | 49.49 ms |
| RegNet Y008        | 0.029  | 0.059  | 0.072  | 0.062  | 0.102  | 0.065 | 14.33 ms |
| EfficientNet V2 L  | 0.086  | 0.350  | 0.107  | 0.122  | 0.056  | 0.144 | 50.51 ms |

The Fold 1–Fold 5 columns give the minimum MSE of each fold; the MSE column is their mean.