
Smart Agriculture ›› 2023, Vol. 5 ›› Issue (1): 132-145. DOI: 10.12133/j.smartag.SA202302009

• Information Processing and Decision Making •

Extraction of Potato Plant Phenotypic Parameters Based on Multi-Source Data

HU Songtao1, ZHAI Ruifang1, WANG Yinghua2, LIU Zhi1, ZHU Jianzhong1, REN He1, YANG Wanneng2,3, SONG Peng2,3

  1. College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
  2. College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, China
  3. National Key Laboratory of Crop Genetic Improvement, Huazhong Agricultural University, Wuhan 430070, China
  • Received: 2023-02-28  Online: 2023-03-30
  • Corresponding author: ZHAI Ruifang, E-mail: rfzhai@mail.hzau.edu.cn
  • About author: HU Songtao, E-mail: cxvd3ttak@163.com
  • Supported by:
    National Natural Science Foundation of China (U21A20205)


Crops have diverse structures and grow in complex environments. RGB images accurately capture the texture and color features of plants, while 3D data carries information about crop volume. Combining RGB images with 3D point cloud data enables the extraction of both two-dimensional and three-dimensional crop phenotypic parameters, which is of great significance for research on phenomics methods. In this study, potato plants were chosen as the research subject, and an RGB camera and a laser scanner were used to collect 50 potato RGB images and the corresponding 3D laser point cloud data. The segmentation accuracy of four deep learning semantic segmentation methods (OCRNet, UpNet, PaNet, and DeepLab v3+) was compared on the RGB images, and OCRNet, which demonstrated the highest accuracy, was used to perform semantic segmentation on top-view RGB images of potatoes. For the laser point cloud data, an optimized mean shift clustering algorithm was applied to complete single-plant segmentation, after which stem and leaf segmentation of each single-plant point cloud was performed using Euclidean clustering and the K-Means clustering algorithm. In addition, a strategy based on pot numbering was proposed to establish a one-to-one correspondence between the RGB image and the point cloud of each plant. Eight 2D phenotypic parameters and ten 3D phenotypic parameters, including maximum width, perimeter, area, plant height, volume, leaf length, and leaf width, were extracted from the RGB images and laser point clouds, respectively. Finally, the accuracy of three representative and easily measurable phenotypic parameters (leaf number, plant height, and maximum width) was evaluated. The mean absolute percentage errors (MAPE) were 8.6%, 8.3%, and 6.0%, the root mean square errors (RMSE) were 1.371 leaves, 3.2 cm, and 1.86 cm, and the coefficients of determination (R2) were 0.93, 0.95, and 0.91, respectively.
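The stem-and-leaf segmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic point cloud, the choice of three clusters, and the bounding-box extents as leaf-size proxies are all assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic stand-in for a single-plant laser point cloud (N x 3, metres):
# a vertical "stem" plus two offset "leaf" blobs.
stem = rng.normal([0.0, 0.0, 0.25], [0.01, 0.01, 0.15], size=(200, 3))
leaf_a = rng.normal([0.15, 0.0, 0.45], [0.04, 0.04, 0.01], size=(200, 3))
leaf_b = rng.normal([-0.15, 0.05, 0.40], [0.04, 0.04, 0.01], size=(200, 3))
cloud = np.vstack([stem, leaf_a, leaf_b])

# Partition the plant into k organ clusters with K-Means; k = 3 is an
# assumption here -- the abstract does not state how the count was chosen.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cloud)

# Per-cluster axis-aligned extents give crude 3D parameters (e.g. leaf
# length/width proxies from the two largest bounding-box sides).
for k in range(3):
    pts = cloud[labels == k]
    extent = pts.max(axis=0) - pts.min(axis=0)
    print(f"cluster {k}: {len(pts)} points, extent (m) = {np.round(extent, 3)}")
```

In practice the organ clusters of a real plant are rarely spherical, which is why the paper combines Euclidean (density-based) clustering with K-Means rather than relying on either alone.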
These results indicate that the extracted phenotypic parameters can accurately and efficiently reflect the growth status of potatoes. Combining potato RGB images with 3D laser point cloud data fully exploits the rich texture and color information of RGB images and the volumetric information of 3D point clouds, achieving non-destructive, efficient, and high-precision extraction of 2D and 3D phenotypic parameters of potato plants. The achievements of this study can provide important technical support for potato cultivation and breeding, as well as strong support for phenotype-based research.
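The three evaluation metrics reported above (MAPE, RMSE, R2) can be computed as shown below. The plant-height arrays are illustrative placeholders, not the paper's measurements; only the metric definitions come from the abstract.

```python
import numpy as np

# Illustrative measured vs. extracted plant heights (cm) -- example data
# only, not values from the study.
measured = np.array([32.0, 41.5, 28.0, 36.2, 45.0])
predicted = np.array([30.5, 43.0, 27.2, 38.0, 43.1])

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

def rmse(y, yhat):
    """Root mean square error, in the units of y."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

print(f"MAPE = {mape(measured, predicted):.1f}%")
print(f"RMSE = {rmse(measured, predicted):.2f} cm")
print(f"R2   = {r2(measured, predicted):.2f}")
```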

Key words: LiDAR, multi-source data, clustering segmentation, 3D phenotyping, OCRNet, LiDAR point cloud, deep learning
