[1] 十六糖. 色铅笔下的37种果实[M]. 北京: 中国纺织出版社, 2021.
SHI L T. 37 kinds of fruits under colored pencils[M]. Beijing: China Textile & Apparel Press, 2021.
[2] 王文辉, 王国平, 田路明, 等. 新中国果树科学研究70年: 梨[J]. 果树学报, 2019, 36(10): 1273-1282.
WANG W H, WANG G P, TIAN L M, et al. Fruit scientific research in New China in the past 70 years: Pear[J]. Journal of fruit science, 2019, 36(10): 1273-1282.
[3] 王思丽, 张伶, 杨恒, 等. 深度学习语言模型的研究综述[J]. 农业图书情报学报, 2023, 35(8): 4-18.
WANG S L, ZHANG L, YANG H, et al. A summary of the research on deep learning language model[J]. Journal of library and information science in agriculture, 2023, 35(8): 4-18.
[4] 蒋雪松, 计恺豪, 姜洪喆, 等. 深度学习在林果品质无损检测中的研究进展[J]. 农业工程学报, 2024, 40(17): 1-16.
JIANG X S, JI K H, JIANG H Z, et al. Research progress of non-destructive detection of forest fruit quality using deep learning[J]. Transactions of the Chinese society of agricultural engineering, 2024, 40(17): 1-16.
[5] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. arXiv: 1409.1556, 2014.
[6] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, New Jersey, USA: IEEE, 2016: 770-778.
[7] BOCHKOVSKIY A, WANG C Y, LIAO H M. YOLOv4: Optimal speed and accuracy of object detection[EB/OL]. arXiv: 2004.10934, 2020.
[8] LI C Y, LI L L, JIANG H L, et al. YOLOv6: A single-stage object detection framework for industrial applications[EB/OL]. arXiv: 2209.02976, 2022.
[9] SAPKOTA R, FLORES-CALERO M, QURESHI R, et al. YOLO advances to its genesis: A decadal and comprehensive review of the You Only Look Once (YOLO) series[J]. Artificial intelligence review, 2025, 58(9): ID 274.
[10] KARKI S, BASAK J K, TAMRAKAR N, et al. Strawberry disease detection using transfer learning of deep convolutional neural networks[J]. Scientia horticulturae, 2024, 332: ID 113241.
[11] 周宏平, 金寿祥, 周磊, 等. 基于迁移学习与YOLOv8n的田间油茶果分类识别[J]. 农业工程学报, 2023, 39(20): 159-166.
ZHOU H P, JIN S X, ZHOU L, et al. Classification and recognition of Camellia oleifera fruit in the field based on transfer learning and YOLOv8n[J]. Transactions of the Chinese society of agricultural engineering, 2023, 39(20): 159-166.
[12] 陈俊霖, 赵鹏, 曹先林, 等. 基于通道剪枝的轻量化YOLOv8s草莓穴盘苗分级检测与定位方法[J]. 智慧农业(中英文), 2024, 6(6): 132-143.
CHEN J L, ZHAO P, CAO X L, et al. Lightweight YOLOv8s-based strawberry plug seedling grading detection and localization via channel pruning[J]. Smart agriculture, 2024, 6(6): 132-143.
[13] 黎祖胜, 唐吉深, 匡迎春. 基于改进YOLOv10n的轻量化荔枝虫害小目标检测模型[J]. 智慧农业(中英文), 2025, 7(2): 146-159.
LI Z S, TANG J S, KUANG Y C. A lightweight model for detecting small targets of Litchi pests based on improved YOLOv10n[J]. Smart agriculture, 2025, 7(2): 146-159.
[14] 杨启良, 禹璐, 梁嘉平. 基于改进YOLOv11的采后芦笋分级检测方法[J]. 智慧农业(中英文), 2025, 7(4): 84-94.
YANG Q L, YU L, LIANG J P. Grading Asparagus officinalis L. using improved YOLOv11[J]. Smart agriculture, 2025, 7(4): 84-94.
[15] LI L T, ZHAO Y D. Tea disease identification based on ECA attention mechanism ResNet50 network[J]. Frontiers in plant science, 2025, 16: ID 1489655.
[16] HU W X, XIONG J T, LIANG J H, et al. A method of Citrus epidermis defects detection based on an improved YOLOv5[J]. Biosystems engineering, 2023, 227: 19-35.
[17] INBAR O, SHAHAR M, GIDRON J, et al. Analyzing the secondary wastewater-treatment process using Faster R-CNN and YOLOv5 object detection algorithms[J]. Journal of cleaner production, 2023, 416: ID 137913.
[18] 谭厚森, 马文宏, 田原, 等. 基于改进YOLOv8n的香梨目标检测方法[J]. 农业工程学报, 2024, 40(11): 178-185.
TAN H S, MA W H, TIAN Y, et al. Improved YOLOv8n object detection of fragrant pears[J]. Transactions of the Chinese society of agricultural engineering, 2024, 40(11): 178-185.
[19] CHEN J R, KAO S H, HE H, et al. Run, don't walk: Chasing higher FLOPS for faster neural networks[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, New Jersey, USA: IEEE, 2023: 12021-12031.
[20] TERVEN J, CÓRDOVA-ESPARZA D M, ROMERO-GONZÁLEZ J A. A comprehensive review of YOLO architectures in computer vision: From YOLOv1 to YOLOv8 and YOLO-NAS[J]. Machine learning and knowledge extraction, 2023, 5(4): 1680-1716.
[21] ZHANG H, XU C, ZHANG S J. Inner-IoU: More effective intersection over union loss with auxiliary bounding box[EB/OL]. arXiv: 2311.02877, 2023.
[22] MENGHANI G. Efficient deep learning: A survey on making deep learning models smaller, faster, and better[J]. ACM computing surveys, 2023, 55(12): 1-37.
[23] ZHU L H, LIAO B C, ZHANG Q, et al. Vision mamba: Efficient visual representation learning with bidirectional state space model[EB/OL]. arXiv: 2401.09417, 2024.
[24] WANG Z Y, LI C, XU H Y, et al. Mamba YOLO: A simple baseline for object detection with state space model[EB/OL]. arXiv: 2406.05835, 2024.
[25] SHI D. TransNeXt: Robust foveal visual perception for vision transformers[C]// 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, New Jersey, USA: IEEE, 2024: 17773-17783.
[26] LIU W Z, LU H, FU H T, et al. Learning to upsample by learning to sample[C]// 2023 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway, New Jersey, USA: IEEE, 2023: 6004-6014.
[27] CHEN L W, GU L, ZHENG D Z, et al. Frequency-adaptive dilated convolution for semantic segmentation[C]// 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, New Jersey, USA: IEEE, 2024: 3414-3425.
[28] GU A, GOEL K, RÉ C. Efficiently modeling long sequences with structured state spaces[EB/OL]. arXiv: 2111.00396, 2021.
[29] LIU Y, TIAN Y J, ZHAO Y Z, et al. VMamba: Visual state space model[EB/OL]. arXiv: 2401.10166, 2024.
[30] WANG J Q, CHEN K, XU R, et al. CARAFE: Content-aware ReAssembly of FEatures[C]// 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway, New Jersey, USA: IEEE, 2019: 3007-3016.
[31] LU H, LIU W Z, FU H T, et al. FADE: Fusing the assets of decoder and encoder for task-agnostic upsampling[C]// Computer Vision-ECCV 2022. Cham, Switzerland: Springer, 2022: 231-247.
[32] LU H, LIU W Z, YE Z X, et al. SAPA: Similarity-aware point affiliation for feature upsampling[EB/OL]. arXiv: 2209.12866, 2022.