Topic: Machine Vision and Agricultural Intelligent Perception

A Lightweight Fruit Load Estimation Model for Edge Computing Equipment

1. Agricultural Information Institute, Chinese Academy of Agricultural Sciences / Key Laboratory of Agricultural Big Data, Ministry of Agriculture and Rural Affairs, Beijing 100081, China
2. Chinese Academy of Agricultural Sciences, Beijing 100081, China
XIA Xue, E-mail: xiaxue@caas.cn

ZHANG Ning, E-mail: zhangning@caas.cn

SUN Tan, E-mail: suntan@caas.cn

Received date: 2023-05-11

Online published: 2023-07-15

Supported by

National Key R&D Program Project (2022YFD2002205); Special Fund for Basic Research Business Expenses of Central-level Public Welfare Research Institutes (Y2022QC17); Chinese Academy of Agricultural Sciences Science and Technology Innovation Project (CAAS-ASTIP-2021-AII-08)

Abstract

[Objective] Fruit load estimation of fruit trees is essential for horticultural management. The traditional estimation method, manual sampling, is not only labor-intensive and time-consuming but also prone to error. Most existing models cannot be deployed on edge computing equipment with limited computing resources because of their high model complexity. This study aimed to develop a lightweight model for edge computing equipment that estimates fruit load automatically in the orchard.

[Methods] The experimental data were captured with a smartphone in a citrus orchard in Jiangnan District, Nanning City, Guangxi Province. From the dataset, 30 videos were randomly selected for model training and the other 10 for testing. The proposed algorithm consists of two parts: detecting fruits and extracting their ReID features in each frame of the video, then tracking the fruits and estimating the fruit load. Specifically, the CSPDarknet53 network was used as the backbone of the model for feature extraction because it consumes fewer hardware computing resources, making it suitable for edge computing equipment. The path aggregation feature pyramid network (PAFPN) was introduced as the neck for feature fusion via skip connections between low-level and high-level features. The fused features from the PAFPN were fed into two parallel branches: a fruit detection branch and an identity embedding branch. The fruit detection branch consisted of three prediction heads, each of which applied a 3×3 convolution and a 1×1 convolution to the feature map output by the PAFPN to predict the fruit keypoint heatmap, local offset and bounding box size, respectively. The identity embedding branch distinguished the identity features of different fruits. In the fruit tracking stage, the Byte mechanism from the ByteTrack algorithm was introduced to improve the data association of the FairMOT method, enhancing the performance of fruit load estimation in the video. The Byte algorithm considered both high-score and low-score detection boxes when associating fruit motion trajectories, and then matched the similarity of fruit identity features between frames. The number of fruit IDs tracked for longer than five frames was counted as the number of citrus fruits in the video.
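To make the structure described above concrete, the following is a minimal PyTorch-style sketch of the two parallel branches applied to the fused PAFPN features. It is an illustration only: the channel width, embedding dimension and the module name FruitHeads are assumptions, not the authors' released implementation.

```python
# Sketch of the detection branch (keypoint heatmap, local offset, box size)
# and the identity embedding branch on top of the PAFPN feature map.
# Channel sizes and the embedding dimension are illustrative assumptions.
import torch
from torch import nn


class FruitHeads(nn.Module):
    def __init__(self, in_channels=128, emb_dim=128):
        super().__init__()

        def head(out_channels):
            # Each prediction head: a 3x3 convolution followed by a 1x1 convolution.
            return nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels, out_channels, 1),
            )

        # Fruit detection branch: one fruit class, 2-channel offset, 2-channel box size.
        self.heatmap = head(1)
        self.offset = head(2)
        self.size = head(2)
        # Identity embedding branch producing ReID features at each location.
        self.identity = head(emb_dim)

    def forward(self, feat):
        # feat: fused feature map from the PAFPN neck, shape (B, C, H, W).
        return {
            "heatmap": torch.sigmoid(self.heatmap(feat)),
            "offset": self.offset(feat),
            "size": self.size(feat),
            "identity": nn.functional.normalize(self.identity(feat), dim=1),
        }


# Example forward pass on a dummy PAFPN output.
outputs = FruitHeads(in_channels=128)(torch.randn(1, 128, 96, 160))
```

The tracking stage can be outlined in the same illustrative spirit. The sketch below shows the Byte-style two-stage association and the counting rule in which a fruit ID tracked for more than five frames is counted once; the tracker interface (associate, associate_remaining) and the score thresholds are assumptions rather than values reported in the paper.

```python
# Sketch of Byte-style association plus the fruit counting rule.
from collections import defaultdict

HIGH_SCORE, LOW_SCORE, MIN_TRACK_LEN = 0.6, 0.1, 5  # assumed thresholds


def estimate_fruit_load(frames, tracker):
    """frames: iterable of per-frame detections, each a list of (box, score, reid_feature)."""
    track_lengths = defaultdict(int)
    for detections in frames:
        high = [d for d in detections if d[1] >= HIGH_SCORE]
        low = [d for d in detections if LOW_SCORE <= d[1] < HIGH_SCORE]
        # First association: high-score boxes vs. existing tracks,
        # using motion (IoU) and ReID feature similarity.
        matched_ids = tracker.associate(high, use_reid=True)
        # Second association: remaining tracks vs. low-score boxes (motion only),
        # which rescues fruits that are partially occluded by leaves or branches.
        matched_ids += tracker.associate_remaining(low, use_reid=False)
        for track_id in matched_ids:
            track_lengths[track_id] += 1
    # A fruit is counted when its ID persists for more than five frames.
    return sum(1 for length in track_lengths.values() if length > MIN_TRACK_LEN)
```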
[Results and Discussions] All experiments were conducted on edge computing equipment. The fruit detection experiment was conducted on the same test set containing 211 citrus tree images. The experimental results showed that the CSPDarkNet53+PAFPN structure adopted in the proposed model achieved a precision of 83.6%, a recall of 89.2% and an F1 score of 86.3%, all superior to the corresponding indexes of the FairMOT (ResNet34), FairMOT (HRNet18) and Faster RCNN models. The CSPDarkNet53+PAFPN structure therefore detected the fruits in the images better, laying a foundation for estimating the number of citrus fruits on the trees. The model complexity experiments showed that the number of parameters, FLOPs (Floating Point Operations) and size of the proposed model were 5.01 M, 36.44 G and 70.2 MB, respectively. The number of parameters of the proposed model was 20.19% of that of the FairMOT (ResNet34) model and 41.51% of that of the FairMOT (HRNet18) model. The FLOPs of the proposed model were 78.31% lower than those of the FairMOT (ResNet34) model and 87.63% lower than those of the FairMOT (HRNet18) model. The model size of the proposed model was 23.96% of that of the FairMOT (ResNet34) model and 45.00% of that of the FairMOT (HRNet18) model. Compared with Faster RCNN, the proposed model also showed advantages in the number of parameters, FLOPs and model size. This low complexity indicated that the proposed model was friendlier to edge computing equipment. Compared with the lightweight backbone network EfficientNet-Lite, the CSPDarkNet53 backbone of the proposed model performed better in both fruit detection and model complexity. For fruit load estimation, the improved tracking strategy that integrated the Byte algorithm into FairMOT clearly boosted the estimation accuracy. The experimental results on the test videos showed that the AEP (Average Estimating Precision) and FPS (Frames Per Second) of the proposed model reached 91.61% and 14.76 f/s, indicating that the proposed model maintained high estimation accuracy while its FPS was 2.4 times and 4.7 times those of the comparison models, respectively. The RMSE (Root Mean Square Error) of the proposed model was 4.1713, which was 47.61% lower than that of the FairMOT (ResNet34) model and 22.94% lower than that of the FairMOT (HRNet18) model. The coefficient of determination R2 between the algorithm-measured values and the manually counted values was 0.9858, superior to those of the other comparison models. Overall, the proposed model estimated fruit load better and had lower model complexity than the comparison models.

[Conclusions] The experimental results proved the validity of the proposed model for fruit load estimation on edge computing equipment. This research could provide technical references for the automatic monitoring and analysis of orchard productivity. Future research will continue to enrich the data resources, further improve the model's performance, and explore more efficient methods to serve more fruit tree varieties.
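As a small, self-contained illustration of two of the metrics reported above, the sketch below computes RMSE and the coefficient of determination R2 between algorithm-measured and manually counted fruit numbers per video. The function name rmse_and_r2 and the example counts are assumptions for illustration; the paper's AEP is its own per-video metric and is not reproduced here.

```python
# Sketch: RMSE and R2 between estimated and manually counted fruit numbers.
import numpy as np


def rmse_and_r2(estimated, manual):
    estimated = np.asarray(estimated, dtype=float)
    manual = np.asarray(manual, dtype=float)
    rmse = np.sqrt(np.mean((estimated - manual) ** 2))
    ss_res = np.sum((manual - estimated) ** 2)
    ss_tot = np.sum((manual - manual.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot


# Example with made-up counts for three test videos (illustrative only).
print(rmse_and_r2([118, 95, 130], [120, 97, 128]))
```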

Cite this article

XIA Xue, CHAI Xiujuan, ZHANG Ning, ZHOU Shuo, SUN Qixin, SUN Tan. A Lightweight Fruit Load Estimation Model for Edge Computing Equipment[J]. Smart Agriculture, 2023, 5(2): 1-12. DOI: 10.12133/j.smartag.SA202305004

References

1 FENG A J, ZHOU J F, VORIES E D, et al. Yield estimation in cotton using UAV-based multi-sensor imagery[J]. Biosystems engineering, 2020, 193: 101-114.
2 KURTULMUS F, LEE W S, VARDAR A. Green citrus detection using 'eigenfruit', color and circular Gabor texture features under natural outdoor conditions[J]. Computers and electronics in agriculture, 2011, 78(2): 140-149.
3 QURESHI W S, PAYNE A, WALSH K B, et al. Machine vision for counting fruit on mango tree canopies[J]. Precision agriculture, 2017, 18(2): 224-244.
4 ZHOU R, DAMEROW L, SUN Y R, et al. Using colour features of cv. 'Gala' apple fruits in an orchard in image processing to predict yield[J]. Precision agriculture, 2012, 13(5): 568-580.
5 ANNAMALAI P, LEE W S. Citrus yield mapping system using machine vision[C]// 2003 ASAE Annual Meeting, Las Vegas, NV, July 27-30, 2003. St. Joseph, MI, USA: American Society of Agricultural and Biological Engineers, 2003: 1.
6 STAJNKO D, RAKUN J, BLANKE M. Modelling apple fruit yield using image analysis for fruit colour, shape and texture[J]. European journal of horticultural science, 2009, 74(6): 260-267.
7 DORJ U O, LEE M, YUN S S. An yield estimation in citrus orchards via fruit detection and counting using image processing[J]. Computers and electronics in agriculture, 2017, 140: 103-112.
8 SA I, GE Z Y, DAYOUB F, et al. DeepFruits: A fruit detection system using deep neural networks[J]. Sensors, 2016, 16(8): ID 1222.
9 CHEN S W, SHIVAKUMAR S S, DCUNHA S, et al. Counting apples and oranges with deep learning: A data-driven approach[J]. IEEE robotics and automation letters, 2017, 2(2): 781-788.
10 BARGOTI S, UNDERWOOD J. Deep fruit detection in orchards[C]// 2017 IEEE International Conference on Robotics and Automation (ICRA). Piscataway, NJ, USA: IEEE, 2017: 3626-3633.
11 HÄNI N, ROY P, ISLER V. A comparative study of fruit detection and counting methods for yield mapping in apple orchards[J]. Journal of field robotics, 2020, 37(2): 263-282.
12 李志军, 杨圣慧, 史德帅, 等. 基于轻量化改进YOLOv5的苹果树产量测定方法[J]. 智慧农业(中英文), 2021, 3(2): 100-114.
  LI Z J, YANG S H, SHI D S, et al. Yield estimation method of apple tree based on improved lightweight YOLOv5[J]. Smart agriculture, 2021, 3(2): 100-114.
13 KESTUR R, MEDURI A, NARASIPURA O. MangoNet: A deep semantic segmentation architecture for a method to detect and count mangoes in an open orchard[J]. Engineering applications of artificial intelligence, 2019, 77: 59-69.
14 高芳芳, 武振超, 索睿, 等. 基于深度学习与目标跟踪的苹果检测与视频计数方法[J]. 农业工程学报, 2021, 37(21): 217-224.
  GAO F F, WU Z C, SUO R, et al. Apple detection and counting using real-time video based on deep learning and object tracking[J]. Transactions of the Chinese society of agricultural engineering, 2021, 37(21): 217-224.
15 WANG Z L, WALSH K, KOIRALA A. Mango fruit load estimation using a video based MangoYOLO-Kalman filter-Hungarian algorithm method[J]. Sensors, 2019, 19(12): ID 2742.
16 LUO W H, XING J L, MILAN A, et al. Multiple object tracking: A literature review[J]. Artificial intelligence, 2021, 293: ID 103448.
17 RAKAI L, SONG H S, SUN S J, et al. Data association in multiple object tracking: A survey of recent techniques[J]. Expert systems with applications, 2022, 192: ID 116300.
18 涂淑琴, 汤寅杰, 李承桀, 等. 基于改进ByteTrack算法的群养生猪行为识别与跟踪技术[J]. 农业机械学报, 2022, 53(12): 264-272.
  TU S Q, TANG Y J, LI C J, et al. Behavior recognition and tracking of group-housed pigs based on improved ByteTrack algorithm[J]. Transactions of the Chinese society for agricultural machinery, 2022, 53(12): 264-272.
19 ZHANG Y F, WANG C Y, WANG X G, et al. FairMOT: On the fairness of detection and re-identification in multiple object tracking[J]. International journal of computer vision, 2021, 129(11): 3069-3087.
20 ZHANG Y F, SUN P Z, JIANG Y, et al. ByteTrack: Multi-object tracking by associating every detection box[C]// European Conference on Computer Vision. Berlin, Germany: Springer, 2022: 1-21.
21 吴昊. 基于YOLOX和重识别的行人多目标跟踪方法[J]. 自动化与仪表, 2023, 38(3): 59-62, 67.
  WU H. Pedestrian multi-target tracking method based on YOLOX and person re-identification[J]. Automation & instrumentation, 2023, 38(3): 59-62, 67.
22 OUYANG W L, WANG X G, ZENG X Y, et al. DeepID-Net: Deformable deep convolutional neural networks for object detection[C]// 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ, USA: IEEE, 2015: 2403-2412.
23 REDMON J, FARHADI A. YOLOv3: An incremental improvement[EB/OL]. arXiv: 1804.02767, 2018.
24 WANG C Y, LIAO H Y M, WU Y H, et al. CSPNet: A new backbone that can enhance learning capability of CNN[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Piscataway, NJ, USA: IEEE, 2020: 1571-1580.
25 韦锦, 李正强, 许恩永, 等. 基于DA2-YOLOv4算法绿篱识别研究[J]. 中国农机化学报, 2022, 43(9): 122-130.
  WEI J, LI Z Q, XU E Y, et al. Research on hedge recognition based on DA2-YOLOv4 algorithm[J]. Journal of Chinese agricultural mechanization, 2022, 43(9): 122-130.
26 WANG C Y, BOCHKOVSKIY A, LIAO H Y M. Scaled-YOLOv4: Scaling cross stage partial network[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ, USA: IEEE, 2021: 13024-13033.
27 GÜNEY E, BAYILMIŞ C, ÇAKAN B. An implementation of real-time traffic signs and road objects detection based on mobile GPU platforms[J]. IEEE access, 2022, 10: 86191-86203.
28 LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ, USA: IEEE, 2017: 936-944.
29 LIU S, QI L, QIN H F, et al. Path aggregation network for instance segmentation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE, 2018: 8759-8768.
30 孙泽强, 陈炳才, 崔晓博, 等. 融合频域注意力机制和解耦头的YOLOv5带钢表面缺陷检测[J]. 计算机应用, 2023, 43(1): 242-249.
  SUN Z Q, CHEN B C, CUI X B, et al. Strip steel surface defect detection by YOLOv5 algorithm fusing frequency domain attention mechanism and decoupled head[J]. Journal of computer applications, 2023, 43(1): 242-249.