Smart Agriculture

An Embedded Fluorescence Imaging Detection System for Fruit and Vegetable Quality Deterioration Based on Improved YOLOv8

GAO Chenhong, ZHU Qibing, HUANG Min

  1. School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
  • Received: 2025-05-30; Online: 2025-09-04
  • Foundation items: National Natural Science Foundation of China (62273166)
  • Corresponding author: ZHU Qibing, E-mail:

Abstract:

[Objective] Fresh fruits and vegetables are prone to quality deterioration during storage and transportation due to microbial proliferation and changes in enzyme activity. Although traditional quality detection methods (e.g., physicochemical analysis and microbial culture) offer high accuracy, they are destructive, time-consuming, and require expert operation, making them inadequate for the modern supply chain's demand for real-time, non-destructive detection. Advanced optical detection technologies such as hyperspectral imaging are non-destructive, but the equipment is expensive, bulky, and lacks portability. This study aimed to integrate fluorescence imaging technology, embedded systems, and lightweight deep learning models into an embedded detection system for fruit and vegetable quality deterioration, addressing the bottlenecks of high cost and insufficient portability in current technologies and providing a low-cost, efficient solution for non-destructive quality detection of fruits and vegetables.

[Methods] An embedded quality detection system based on fluorescence imaging and a ZYNQ platform was developed. The system adopted the Xilinx ZYNQ XC7Z020 heterogeneous SoC as the core controller, used 365 nm, 10 W ultraviolet LED beads as the excitation light source, and employed a CMOS camera as the image acquisition sensor to capture and process fluorescence images. On the algorithm side, an improved, lightweight object detection model based on YOLOv8 was developed, in which the original YOLOv8 backbone network was replaced with MobileNetV4 to reduce computational load. To compress the model further, a channel pruning technique based on the scaling factor (γ) of the batch normalization (BN) layer was employed: L1 regularization was applied to γ during training to induce sparsity, channels with small γ values were then pruned according to a threshold (γ_threshold = 0.01), and the pruned model was fine-tuned. Finally, in line with the hardware characteristics of the ZYNQ platform, a dynamic 16-bit fixed-point quantization method was adopted to convert the model from 32-bit floating point to 16-bit fixed point, and the FPGA's parallel computing capability was used for hardware acceleration to improve inference speed.
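The abstract does not include implementation details, so the following PyTorch-style snippet is only a minimal illustrative sketch of the BN-γ channel pruning step described above, not the authors' code. The sparsity weight BN_L1_LAMBDA and the function names are assumptions; only the 0.01 pruning threshold comes from the abstract.

```python
import torch
import torch.nn as nn

BN_L1_LAMBDA = 1e-4     # assumed sparsity weight; not reported in the abstract
GAMMA_THRESHOLD = 0.01  # pruning threshold reported in the abstract

def add_bn_sparsity_grad(model: nn.Module, lam: float = BN_L1_LAMBDA) -> None:
    """Add the subgradient of lam * |gamma| to every BN scale factor.
    Called between loss.backward() and optimizer.step(), this imposes the
    L1 penalty that drives unimportant channels' gamma values toward zero."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.weight.grad is not None:
            m.weight.grad.add_(lam * torch.sign(m.weight.data))

def channels_to_keep(model: nn.Module, thr: float = GAMMA_THRESHOLD) -> dict:
    """Return, for each BN layer, a boolean mask of channels with |gamma| >= thr.
    The masks would then drive rebuilding the conv/BN layers with fewer channels,
    after which the pruned model is fine-tuned."""
    return {
        name: m.weight.data.abs() >= thr
        for name, m in model.named_modules()
        if isinstance(m, nn.BatchNorm2d)
    }
```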
[Results and Discussion] Grapes and spinach were used as experimental samples in a controlled laboratory setting (26 °C, 20%-40% humidity) over an eight-day storage experiment. Fluorescence images were collected daily, and physicochemical indices were measured simultaneously to construct ground-truth labels (spinach: chlorophyll and vitamin C content; grapes: titratable acidity and total soluble solids). K-means clustering combined with principal component analysis (PCA) was used to categorize quality into three levels ("fresh," "sub-fresh," and "spoiled") based on changes in the physicochemical indices, and the images were labeled accordingly. In terms of system performance, the improved YOLOv8-MobileNetV4 model achieved a mean average precision (mAP) of 95.91% for the three-level quality classification. Ablation results showed that using only the MobileNetV4 backbone or applying channel pruning to the original model each reduced average detection time (by 14.0% and 29.0%, respectively) but incurred some loss of accuracy. Combining the two yielded a synergistic effect: precision reached 97.04%, while recall and mAP increased to 95.24% and 95.91%, respectively. Comparative experiments indicated that the proposed model, with a parameter size of 8.98 MB, outperformed other mainstream detection models (e.g., Faster R-CNN and YOLOv8-Ghost) in mAP while also detecting faster, demonstrating an excellent balance between accuracy and efficiency.

[Conclusions] Targeting practical needs in detecting fruit and vegetable quality deterioration, this study proposed and implemented an efficient detection system based on fluorescence imaging and an embedded platform. By integrating the MobileNetV4 backbone with the YOLOv8 detection framework and introducing BN-based channel pruning, the model achieved structured compression and accelerated inference. Experimental results showed that the pruned YOLOv8-MobileNetV4 model significantly reduced model size and hardware resource consumption while maintaining detection accuracy, thereby enhancing real-time responsiveness. The system's low hardware cost, compact size, and portability make it a practical solution for rapid, non-destructive, real-time quality monitoring in fruit and vegetable supply chains. Future work will focus on expanding the sample library to cover more produce types and mixed deterioration levels and on further optimizing the algorithm to improve robustness in complex multi-target scenarios.
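As a supplementary illustration of the dynamic 16-bit fixed-point quantization mentioned in the Methods, the NumPy sketch below converts a 32-bit floating-point weight array to 16-bit fixed point, choosing the fractional bit width from the array's dynamic range. The scale-selection rule and function names are assumptions and need not match the implementation deployed on the ZYNQ platform.

```python
import numpy as np

def to_fixed16(x: np.ndarray) -> tuple:
    """Quantize a float32 array to 16-bit fixed point, picking as many
    fractional bits as the array's largest magnitude allows within int16."""
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros(x.shape, dtype=np.int16), 15
    frac_bits = max(0, min(15, int(np.floor(np.log2(32767.0 / max_abs)))))
    q = np.clip(np.round(x * (1 << frac_bits)), -32768, 32767).astype(np.int16)
    return q, frac_bits

def from_fixed16(q: np.ndarray, frac_bits: int) -> np.ndarray:
    """Dequantize back to float32, e.g. to check quantization error against
    the original 32-bit weights."""
    return q.astype(np.float32) / np.float32(1 << frac_bits)
```

Choosing the fractional width per tensor keeps quantization error small for layers whose weights span different ranges, which is the usual motivation for a "dynamic" fixed-point scheme.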

Key words: fruits and vegetables, ZYNQ development board, fluorescence imaging technology, YOLOv8, network pruning
