
Smart Agriculture


Shrimp Diseases Detection Method Based on Improved YOLOv8 and Multiple Features

XU Ruifeng1,2, WANG Yaohua1, DING Wenyong1, YU Junqi1, YAN Maocang1, CHEN Chen1

  1. Zhejiang Mariculture Research Institute, Wenzhou 325000, China
  2. Shanghai Ocean University, Shanghai 201306, China
  • Received: 2023-11-09; Online: 2024-02-29
  • Corresponding authors:
    1. YAN Maocang, E-mail: ;
    2. CHEN Chen, E-mail:
  • Supported by:
    Zhejiang Key Science and Technology Project (2021C02025); Key Scientific and Technological Innovation Projects of Wenzhou (ZN2021001); Zhejiang Province San-Nong-Jiu-Fang Science and Technology Cooperation Project (2023SNJF077); National Key Research and Development Program of China (2020YFD0900801)

Abstract:

Objective Shrimp farming holds a pivotal position within the aquaculture industry. In recent years, however, the incidence and mortality of shrimp diseases have risen steadily, with serious repercussions for shrimp farming. These diseases are characterized by rapid onset, high contagiousness, difficult control, and high mortality. As factory-scale shrimp farming continues to expand, manual inspection can no longer meet current demands, so an automated method for detecting shrimp diseases is urgently needed. The primary objective of this study is to develop a cost-effective, computer vision-based inspection method that balances cost and detection performance.

Methods An improved YOLOv8 (You Only Look Once) network combined with multiple features was employed to detect shrimp diseases. To eliminate interference from surface foam, the improved YOLOv8 network was used to detect surface shrimps and extract them as the image foreground. This target detection approach accurately identifies the objects of interest in the image, determining both their category and location, and its extraction quality surpasses that of threshold segmentation. Considering the computing-power constraints of deployment platforms in practical production, the network was lightened by reducing its parameters and computation, thereby improving detection speed and deployability. The Farneback optical flow method and the gray level co-occurrence matrix (GLCM) were then employed to extract movement and image texture features from shrimp video clips. A dataset was built from these multiple feature parameters, and a support vector machine (SVM) classifier was trained on it. Finally, the classifier was applied to the feature parameters of video clips to assess shrimp health.

Results and Discussions The improved YOLOv8 effectively enhanced detection accuracy without increasing the number of parameters and FLOPs. According to the ablation experiment, replacing the backbone with the lightweight FasterNet backbone significantly reduces the number of parameters and the amount of computation, albeit at the cost of decreased accuracy. After integrating the efficient multi-scale attention (EMA) module into the neck, however, mAP0.5 increased by 0.3% compared with YOLOv8s, while mAP0.95 decreased by only 2.1%; the parameter count decreased by 45% and FLOPs by 42%. The improved YOLOv8 ranks second only to YOLOv7 in mAP0.5 and mAP0.95, with respective gaps of 0.4% and 0.6%, while its parameter count and FLOPs are far smaller than those of YOLOv7 and match those of YOLOv5. Although YOLOv7-Tiny and YOLOv8-VanillaNet have fewer parameters and FLOPs, their accuracy lags behind that of the improved YOLOv8: the mAP0.5 and mAP0.95 of YOLOv7-Tiny and YOLOv8-VanillaNet are 22.4%, 36.2%, 2.3%, and 4.7% lower than those of the improved YOLOv8, respectively. The SVM trained on the full multi-feature dataset achieved an accuracy of 97.625%. For testing, 150 normal segments and 150 diseased segments were randomly selected; on this set of 300 samples the classifier reached a detection accuracy of 89%.
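A rough sketch of how the feature extraction and classification steps described in the Methods could be implemented is given below. It is a minimal illustration assuming OpenCV, scikit-image, and scikit-learn; the flow parameters, GLCM properties, and per-clip statistics are plausible placeholders rather than the authors' exact configuration.

```python
# Sketch of per-clip feature extraction (Farneback optical flow + GLCM texture)
# and SVM training. Parameter values and feature statistics are illustrative.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def clip_features(frames):
    """frames: list of uint8 grayscale frames from one video clip."""
    mags, angs = [], []
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Dense Farneback optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        mags.append(mag.mean())   # average movement speed in this frame pair
        angs.append(ang.mean())   # average movement direction

    # GLCM texture features of the last frame (distance 1, horizontal offset).
    glcm = graycomatrix(frames[-1], distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]

    # One feature vector per clip: motion statistics plus texture descriptors.
    return np.array([np.mean(mags), np.std(mags),
                     np.mean(angs), np.std(angs), *texture])

def train_classifier(X, y):
    """Train the SVM on per-clip feature vectors (y: 0 = healthy, 1 = diseased)."""
    return SVC(kernel="rbf").fit(X, y)
```

In practice, these statistics would be computed on the shrimp foreground extracted by the detector, so that foam and background pixels do not contribute to the motion or texture features.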
The 89% detection accuracy indicates that the combination of features extracted by the Farneback optical flow method and the GLCM effectively captures the differences in movement speed and direction between infected and healthy shrimp. Most errors stem from diseased segments being misrecognized as normal segments, which account for 88.2% of all errors. These errors fall into three main types: (1) floating foam covers part of the water surface, so that only a small number of shrimp are extracted from the image; (2) changes in water movement: nanotube aeration was used for oxygenation in this study, and the spray it generates on the water surface disturbs the movement of the shrimp; (3) low video quality: when the video's pixel count is low, the difference in optical flow between diseased and normal shrimp becomes relatively small. It is therefore advisable to adapt the collection area to the actual production environment and to improve video quality.

Conclusions The multiple features introduced in this study effectively capture the movement of shrimp and can be employed for disease detection. The improved YOLOv8 is particularly well suited to platforms with limited computational resources and is feasible to deploy in actual production settings. However, the experiment was conducted in a factory farming environment, which limits the applicability of the method to other farming environments. Overall, the method requires only consumer-grade cameras as image acquisition equipment, places low demands on the detection platform, and can provide a theoretical basis and methodological support for future applications of aquatic disease detection.
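To illustrate the lightweight deployment path described in the Conclusions, the surface-shrimp foreground extraction step might look roughly like the sketch below. It uses the public ultralytics YOLOv8 inference API as a stand-in for the improved network described in the paper, and the weights file name shrimp_yolov8.pt is hypothetical.

```python
# Sketch of shrimp foreground extraction with a YOLOv8-style detector.
# Uses the public ultralytics API; the weights file name is hypothetical.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("shrimp_yolov8.pt")  # assumed: detector trained on surface shrimp

def extract_foreground(frame):
    """Keep only detected shrimp regions; zero out foam and background pixels."""
    result = model(frame, verbose=False)[0]
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for x1, y1, x2, y2 in result.boxes.xyxy.cpu().numpy().astype(int):
        mask[y1:y2, x1:x2] = 255
    return cv2.bitwise_and(frame, frame, mask=mask)
```

The masked frames would then feed the optical flow and GLCM feature extraction sketched above.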

Key words: shrimp diseases, computer vision, YOLOv8, Farneback optical flow, gray level co-occurrence matrix, support vector machine