
Smart Agriculture, 2023, Vol. 5, Issue (1): 132-145. doi: 10.12133/j.smartag.SA202302009

• Information Processing and Decision Making •

Extraction of Potato Plant Phenotypic Parameters Based on Multi-Source Data

HU Songtao1, ZHAI Ruifang1, WANG Yinghua2, LIU Zhi1, ZHU Jianzhong1, REN He1, YANG Wanneng2,3, SONG Peng2,3

  1. College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
    2. College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, China
    3. National Key Laboratory of Crop Genetic Improvement, Huazhong Agricultural University, Wuhan 430070, China
  • Received: 2023-02-28  Online: 2023-03-30
  • Foundation item: National Natural Science Foundation of China (U21A20205)
  • Author biography: HU Songtao, master's degree student, research interest: machine vision and image processing. E-mail: cxvd3ttak@163.com
  • Corresponding author: ZHAI Ruifang, Ph.D., Associate Professor, research interest: LiDAR data processing. E-mail: rfzhai@mail.hzau.edu.cn

Abstract:

Crops have diverse structures and grow in complex environments. RGB images faithfully capture the texture and color features of plants, while 3D point cloud data contains volumetric information about the crop. Combining RGB images with 3D point cloud data enables the extraction of both two-dimensional and three-dimensional phenotypic parameters of crops, which is of great significance for research on phenomics methods. In this study, potato plants were chosen as the research subject, and an RGB camera and a laser scanner were used to collect RGB images and 3D laser point cloud data of 50 potato plants. The segmentation accuracy of four deep learning semantic segmentation methods, OCRNet, UpNet, PaNet, and DeepLab v3+, was compared, and OCRNet, which demonstrated higher accuracy, was used to perform semantic segmentation on top-view RGB images of the potatoes. The workflow of the Mean shift clustering algorithm was optimized to complete single-plant segmentation of the potato laser point clouds, and the stems and leaves of the single-plant point clouds were then accurately segmented by combining Euclidean clustering and K-Means clustering. In addition, a strategy was proposed to establish a one-to-one correspondence between the RGB images and the laser point clouds of individual potato plants through pot numbering. On this basis, eight two-dimensional and ten three-dimensional phenotypic parameters of the same plant, including maximum width, perimeter, area, plant height, volume, leaf length, and leaf width, were extracted from the RGB images and the laser point clouds, respectively. Finally, three representative and easily measurable phenotypic parameters, namely leaf number, plant height, and maximum width, were selected for accuracy evaluation. The mean absolute percentage errors (MAPE) were 8.6%, 8.3%, and 6.0%, the root mean square errors (RMSE) were 1.371 leaves, 3.2 cm, and 1.86 cm, and the coefficients of determination (R2) were 0.93, 0.95, and 0.91, respectively. The evaluation results indicate that the extracted phenotypic parameters can accurately and efficiently reflect the growth status of potato plants. Combining potato RGB image data with 3D laser point cloud data takes full advantage of the rich texture and color features of RGB images and the volumetric information provided by 3D point clouds, enabling non-destructive, efficient, and high-precision extraction of two-dimensional and three-dimensional phenotypic parameters of potato plants. The results of this study can not only provide important technical support for potato cultivation and breeding but also offer strong support for phenotype-based research.
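As an illustration of the point cloud processing steps named above, the sketch below shows a generic Mean shift single-plant segmentation followed by a simple K-Means based stem and leaf split. It is a minimal sketch of the underlying algorithms only, not the authors' optimized pipeline (which also uses Euclidean clustering); the N x 3 array layout, the bandwidth quantile, and the cluster count are assumptions made here for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans, MeanShift, estimate_bandwidth

def segment_plants(points: np.ndarray) -> np.ndarray:
    """Cluster an N x 3 point cloud of several potted plants into individual plants."""
    # Plants in different pots are separated mainly in the ground (x, y) plane,
    # so cluster on x, y only; the kernel bandwidth is estimated from the data.
    bandwidth = estimate_bandwidth(points[:, :2], quantile=0.2, n_samples=2000)
    return MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(points[:, :2])

def split_stem_and_leaves(plant: np.ndarray, n_clusters: int = 6):
    """Rough stem/leaf split: K-Means on xyz, lowest-lying cluster treated as stem."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(plant)
    mean_heights = [plant[labels == k][:, 2].mean() for k in range(n_clusters)]
    stem_label = int(np.argmin(mean_heights))
    return plant[labels == stem_label], plant[labels != stem_label]
```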
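The kinds of 2D and 3D parameters listed above can, in principle, be measured from a segmentation mask and a single-plant point cloud roughly as follows. This is a generic sketch using OpenCV and SciPy under assumed units and a hypothetical mm_per_pixel scale factor; it is not the paper's exact measurement procedure.

```python
import cv2
import numpy as np
from scipy.spatial import ConvexHull

def traits_2d(mask: np.ndarray, mm_per_pixel: float) -> dict:
    """Area, perimeter, and maximum width from a binary top-view mask (uint8, 0 or 255)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)   # keep the largest connected region
    (_, _), (w, h), _ = cv2.minAreaRect(contour)   # tightest rotated bounding box
    return {
        "area_mm2": cv2.contourArea(contour) * mm_per_pixel ** 2,
        "perimeter_mm": cv2.arcLength(contour, True) * mm_per_pixel,
        "max_width_mm": max(w, h) * mm_per_pixel,
    }

def traits_3d(points: np.ndarray) -> dict:
    """Plant height (z extent) and convex-hull volume from an N x 3 point cloud in cm."""
    height = float(points[:, 2].max() - points[:, 2].min())
    volume = float(ConvexHull(points).volume)
    return {"plant_height_cm": height, "volume_cm3": volume}
```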
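The three accuracy metrics quoted above (MAPE, RMSE, and R2) are standard quantities and can be computed as sketched below; the numeric values in the usage line are placeholders, not data from the study.

```python
import numpy as np

def evaluate(measured, extracted) -> dict:
    """MAPE (%), RMSE (trait units), and R2 between measured and extracted trait values."""
    measured = np.asarray(measured, dtype=float)
    extracted = np.asarray(extracted, dtype=float)
    mape = float(np.mean(np.abs((extracted - measured) / measured)) * 100.0)
    rmse = float(np.sqrt(np.mean((extracted - measured) ** 2)))
    ss_res = np.sum((measured - extracted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = float(1.0 - ss_res / ss_tot)
    return {"MAPE_percent": mape, "RMSE": rmse, "R2": r2}

# Placeholder example for plant height (values are illustrative, not from the paper).
print(evaluate([42.0, 55.5, 61.0, 48.2], [44.1, 53.0, 63.5, 47.0]))
```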

Key words: LiDAR, multi-source data, clustering segmentation, 3D phenotyping, OCRNet, laser point cloud, deep learning

CLC Number: