
Smart Agriculture


A Lightweight Bee Pollination Recognition Model Based on YOLOv10n-CHL

CHANG Jian1, WANG Bingbing1, YIN Long1, LI Yanqing2, LI Zhaoxin3(), LI Zhuang2()

  1. Liaoning Technical University, Xingcheng 125100, Liaoning, China
    2. Institute of Fruit Tree Research, Chinese Academy of Agricultural Sciences, Xingcheng 125100, China
    3. Institute of Agricultural Information, Chinese Academy of Agricultural Sciences, Beijing 100081, China
  • Received: 2025-03-29 Online: 2025-06-06
  • Foundation items:
    Science and Technology Innovation Project of the Chinese Academy of Agricultural Sciences (CAAS-CSSAE-202401)
  • About author:

    CHANG Jian, Ph.D., Associate Professor; research interests: digital image processing and big data. E-mail:

  • Corresponding author:
    LI Zhaoxin, Ph.D., Associate Researcher; research interests: 3D visual perception. E-mail:
    LI Zhuang, Ph.D., Researcher; research interests: fruit tree cultivation. E-mail:

The Bee Pollination Recognition Model Based on the Lightweight YOLOv10n-CHL

CHANG Jian1, WANG Bingbing1, YIN Long1, LI Yanqing2, LI Zhaoxin3(), LI Zhuang2()   

1. Liaoning Technical University, Xingcheng 125100, China
    2. Institute of Fruit Tree Research, Chinese Academy of Agricultural Sciences, Xingcheng 125100, China
    3. Institute of Agricultural Information, Chinese Academy of Agricultural Sciences, Beijing 100081, China
  • Received: 2025-03-29 Online: 2025-06-06
  • Foundation items: Science and Technology Innovation Project of the Chinese Academy of Agricultural Sciences (CAAS-CSSAE-202401)
  • About author:

    CHANG Jian, E-mail:

  • Corresponding author:
    LI Zhaoxin, E-mail: ;
    LI Zhuang, E-mail:

Abstract:

[Objective/Significance] Bee pollination recognition experiments help evaluate the pollination efficiency of bee colonies and detect whether flowers have been pollinated, providing a scientific basis for subsequent flower and fruit thinning and supporting better colony management and agricultural production. To address challenges in bee pollination detection, such as small targets and complex backgrounds, this study proposes a lightweight bee pollination recognition model based on YOLOv10n-CHL. [Methods] Bee pollination datasets were built for three flower types: strawberry, blueberry, and chrysanthemum, and the model was improved as follows. In the backbone, the C2f module was replaced with a cross stage partial network_multi-scale edge information enhance (CSP_MSEE) module, which combines the cross-stage connections of the cross stage partial network (CSPNet) with a multi-scale edge enhancement strategy to strengthen feature extraction. In the neck, a hierarchical skip feature pyramid network (HS-FPN) was introduced; through feature selection and fusion, it improves small-target detection accuracy in complex backgrounds while reducing computation. In the head, a lightweight shared detail-enhanced convolutional detection head (LSDECD) replaced the original detection head, improving the capture of fine details while reducing the parameter count and making the model easier to deploy on edge devices. [Results and Discussions] Tests on the three datasets showed that, compared with the original YOLOv10n, the improved model reduced computation by 3.1 GFLOPs and the parameter count by 1.3 M. The recall and mAP50 (mean average precision at an IoU threshold of 50%) on the strawberry, blueberry, and chrysanthemum bee pollination datasets reached 82.6%, 84.0%, 84.8% and 89.3%, 89.5%, 88.0%, respectively, improvements of 2.1%, 2.0%, 2.1% and 1.7%, 2.6%, 2.2% over the original model. [Conclusions] These improvements substantially increase detection accuracy on the bee pollination datasets, while the lightweight design reduces the computational burden and makes deployment more practical, laying a solid technical foundation for the real-world application of bee pollination recognition technology.
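
To make the backbone modification concrete, here is a minimal PyTorch sketch of a cross-stage-partial block with multi-scale edge enhancement, in the spirit of CSP_MSEE. It is an illustrative reconstruction only: the class names, the pooling-based edge extraction, and the kernel sizes are assumptions, not the authors' released implementation.

```python
# Sketch: cross-stage partial block with multi-scale edge enhancement.
# Names, edge operator, and kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn


class EdgeEnhance(nn.Module):
    """Strengthen edges at one scale: x + conv(x - avg_pool(x))."""

    def __init__(self, channels: int, pool_size: int):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2)
        self.conv = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        edge = x - self.pool(x)      # high-frequency (edge) residual
        return x + self.conv(edge)   # re-inject the enhanced edges


class CSPMultiScaleEdgeBlock(nn.Module):
    """Half the channels pass through multi-scale edge enhancement,
    the other half is carried over as a cross-stage shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.split_conv = nn.Conv2d(channels, channels, 1)
        self.branches = nn.ModuleList(
            [EdgeEnhance(half, k) for k in (3, 5, 7)]  # multi-scale pooling
        )
        self.fuse = nn.Conv2d(half * 4, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = self.split_conv(x).chunk(2, dim=1)      # cross-stage split
        enhanced = [branch(a) for branch in self.branches]
        return self.fuse(torch.cat(enhanced + [b], dim=1))


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)                  # a backbone feature map
    print(CSPMultiScaleEdgeBlock(64)(feat).shape)      # torch.Size([1, 64, 80, 80])
```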

Key words: bee pollination recognition, YOLOv10n, small target detection, lightweight, feature extraction

Abstract:

[Objective] Bee pollination plays a crucial role in plant reproduction and crop yield, making its identification and monitoring highly significant for agricultural production. This study aimed to scientifically evaluate pollination efficiency, accurately detect the pollination status of flowers, and provide reliable data to guide flower and fruit thinning in orchards. Ultimately, it supports the scientific management of bee colonies and enhances agricultural efficiency. However, practical detection of bee pollination poses various challenges, including the small size of bee targets, their low pixel occupancy in images, and the complexity of floral backgrounds. To address these issues, the study proposed a lightweight recognition model capable of effectively overcoming these obstacles, thereby advancing the practical application of bee pollination detection technology in smart agriculture. [Methods] A specialized bee pollination dataset was constructed comprising three flower types: strawberry, blueberry, and chrysanthemum. Videos capturing the pollination process were recorded using high-resolution cameras and subjected to frame sampling to extract representative images. These initial images underwent manual screening to ensure quality and relevance. To address challenges such as limited data diversity and class imbalance, a comprehensive data augmentation strategy was employed. Techniques including rotation, flipping, brightness adjustment, and mosaic augmentation were applied, significantly expanding the dataset's size and variability. The enhanced dataset was subsequently split into training and validation sets at an 8:2 ratio to ensure robust model evaluation. The base detection model was built upon an improved YOLOv10n architecture. The conventional C2f module in the backbone was replaced with a novel cross stage partial network_multi-scale edge information enhance (CSP_MSEE) module, which combines the cross-stage partial connections of the cross stage partial network (CSPNet) with a multi-scale edge enhancement strategy. This design greatly improved feature extraction, particularly in scenarios involving fine-grained structures and small-scale targets like bees. For the neck, a hybrid-scale feature pyramid network (HS-FPN) was adopted, incorporating a channel attention (CA) mechanism and a dimension matching (DM) module to refine and align multi-scale features. These features were further integrated through a selective feature fusion (SFF) module, enabling the effective combination of low-level texture details and high-level semantic representations. The detection head was replaced with the lightweight shared detail-enhanced convolutional detection head (LSDECD), an enhanced version of the lightweight shared convolutional detection head (LSCD). It incorporated detail enhancement convolution (DEConv) from DEA-Net to improve the extraction of fine-grained bee features. Additionally, the standard convolution_groupnorm (Conv_GN) layers were replaced with detail enhancement convolution_groupnorm (DEConv_GN) layers, significantly reducing model parameters and enhancing the model's sensitivity to subtle bee behaviors. This lightweight yet accurate model design made it highly suitable for real-time deployment on resource-constrained edge devices in agricultural environments.
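
As a rough illustration of the channel-attention-guided feature selection and fusion described for the HS-FPN neck, the sketch below combines a high-level (semantic) feature map with a low-level (texture) one. The class name, the 1×1 dimension-matching convolution, and the sigmoid gating are simplifying assumptions and do not reproduce the exact HS-FPN design.

```python
# Sketch: channel-attention-guided fusion of a semantic (high-level) map with
# a texture (low-level) map, in the spirit of the HS-FPN neck described above.
# The gating form and module names are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelectiveFeatureFusion(nn.Module):
    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        # "Dimension matching": align high-level channels with the low-level map.
        self.dim_match = nn.Conv2d(high_channels, low_channels, 1)
        # Channel attention derived from globally pooled high-level context.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(low_channels, low_channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        high = self.dim_match(high)
        high = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        weights = self.attn(high)      # per-channel selection weights in (0, 1)
        return low * weights + high    # selected texture detail + semantic context


if __name__ == "__main__":
    low = torch.randn(1, 64, 80, 80)    # fine-grained features (small bees)
    high = torch.randn(1, 128, 40, 40)  # coarse semantic features
    print(SelectiveFeatureFusion(64, 128)(low, high).shape)  # [1, 64, 80, 80]
```
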
[Results and Discussions] Experimental results on the three bee pollination datasets—strawberry, blueberry, and chrysanthemum—demonstrated the effectiveness of the proposed improvements over the baseline YOLOv10n model. The enhanced model achieved significant reductions in computational overhead, lowering the computational complexity by 3.1 GFLOPs and the number of parameters by 1.3 M. These reductions contribute to improved efficiency, making the model more suitable for deployment on edge devices with limited processing capabilities, such as mobile platforms or embedded systems used in agricultural monitoring. In terms of detection performance, the improved model showed consistent gains across all three datasets. Specifically, the recall rates reached 82.6% for strawberry, 84.0% for blueberry, and 84.8% for chrysanthemum flowers. Corresponding mAP50 (mean Average Precision at IoU threshold of 0.5) scores were 89.3%, 89.5%, and 88.0%, respectively. Compared to the original YOLOv10n model, these results marked respective improvements of 2.1% in recall and 1.7% in mAP50 on the strawberry dataset, 2.0% and 2.6% on the blueberry dataset, and 2.1% and 2.2% on the chrysanthemum dataset. [Conclusions] The proposed YOLOv10n-CHL lightweight bee pollination detection model, through coordinated enhancements at multiple architectural levels, achieved notable improvements in both detection accuracy and computational efficiency across multiple bee pollination datasets. The model significantly improved the detection performance for small objects while substantially reducing computational overhead, facilitating its deployment on edge computing platforms such as drones and embedded systems. This research provides a solid technical foundation for the precise monitoring of bee pollination behavior and the advancement of smart agriculture. Nevertheless, the model's adaptability to extreme lighting and complex weather conditions remains an area for improvement. Future work will focus on enhancing the model's robustness in these scenarios to support its broader application in real-world agricultural environments.
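
For reference, the recall and mAP50 figures above follow the standard detection definitions. The sketch below computes recall and average precision for a single class, assuming detections have already been matched to ground truth at an IoU threshold of 0.5 and sorted by confidence; the matching step itself is omitted, and mAP50 is simply the mean of AP50 over classes.

```python
# Sketch: recall and AP at IoU 0.5 for one class, given confidence-sorted
# detections already matched to ground truth (1 = true positive, 0 = false
# positive). The matching step is omitted here.
import numpy as np


def recall_and_ap(tp_flags: np.ndarray, num_gt: int) -> tuple[float, float]:
    cum_tp = np.cumsum(tp_flags)
    cum_fp = np.cumsum(1 - tp_flags)
    recall = cum_tp / max(num_gt, 1)                         # TP / (TP + FN)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)   # TP / (TP + FP)
    # Monotonic precision envelope, then all-point interpolation over recall.
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    ap = float(np.sum(np.diff(recall, prepend=0.0) * precision))
    return float(recall[-1]), ap


if __name__ == "__main__":
    flags = np.array([1, 1, 0, 1, 0, 1, 0], dtype=float)  # toy detections, one class
    rec, ap50 = recall_and_ap(flags, num_gt=5)
    print(f"recall={rec:.3f}  AP50={ap50:.3f}")  # mAP50 = mean of AP50 over classes
```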

Key words: bee pollination recognition, YOLOv10n, small target detection, lightweight, feature extraction

CLC number: