Smart Agriculture

A Regional Farming Pig Counting System Based on Improved Instance Segmentation Algorithm

ZHANG Yanqi1,2, ZHOU Shuo1,2, ZHANG Ning1,2, CHAI Xiujuan1,2, SUN Tan1,2

  1. Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100081, China
    2. Key Laboratory of Agriculture and Rural Affairs, Beijing 100081, China
  • Received: 2023-09-28  Online: 2024-02-28
  • Corresponding author:
    ZHANG Ning, E-mail:
  • Supported by:
    National Key R & D Program of China (2022ZD0115702); National Natural Science Foundation of China (61976219); Beijing Smart Agriculture Innovation Consortium Project (BAIC10-2024); Innovation Program of Chinese Academy of Agricultural Sciences (CAAS-ASTIP-2023-AII); Central Public-interest Scientific Institution Basal Research Fund (JBYW-AII-2022-14)

Abstract:

[Objective] The pig farming industry is increasingly moving towards intensification, and precise feeding and management based on the number of pigs in the barn are crucial for large-scale breeding operations. Currently, pig farming facilities mainly rely on manual counting to track slaughtered and stocked pigs, which is not only time-consuming and labor-intensive but also prone to counting errors caused by pig movement and potential cheating. As breeding operations expand, periodic live-asset inventories place significant strain on human, material and financial resources. Although electronic ear tags can assist in pig counting, they break and fall off easily in group-housing environments. Most existing computer vision-based counting methods require images captured from a top-down perspective, necessitating cameras installed above each pen or even the use of drones, which leads to high installation and maintenance costs. To address these challenges in the group pig counting task, an efficient and low-cost pig counting method was proposed based on an improved instance segmentation algorithm and the WeChat public platform.

[Methods] Firstly, a smartphone was used to collect pig images in the pen area from a human's eye-level perspective, and the outline of each pig in the images was annotated to build a pig counting dataset; the training set contains 606 images and the test set contains 65 images. Secondly, an efficient global attention module was proposed by improving the Convolutional Block Attention Module (CBAM). The module first performed a dimension permutation on the input feature map to capture the interaction between its channel and spatial dimensions, and the permuted features were aggregated using Global Average Pooling (GAP). A one-dimensional convolution replaced the fully connected layers in CBAM, eliminating dimensionality reduction and significantly reducing the number of model parameters. This module was integrated into the YOLOv8 single-stage instance segmentation network to build the pig counting model YOLOv8x-Ours: by adding the efficient global attention module to each C2f layer of the YOLOv8 backbone, dimensional dependencies and feature information in the image could be extracted more effectively, enabling high-accuracy pig counting. Lastly, with a focus on user experience and ease of adoption, a pig counting WeChat mini program was developed based on the WeChat public platform and the Django web framework, and the counting model was deployed to count pigs from images captured by smartphones. The mini program mainly provides the following functions: 1) login; 2) establishing the hierarchical structure of the pig farm; 3) image acquisition; 4) pig counting; 5) user interaction; and 6) historical records.

[Results and Discussions] This study experimentally demonstrated the feasibility of deep learning technology for the pig counting task. Compared with the existing methods Mask R-CNN, YOLACT (Real-time Instance Segmentation), PolarMask, SOLO and YOLOv5x, the proposed pig counting model YOLOv8x-Ours exhibited superior accuracy and stability. Notably, YOLOv8x-Ours achieved the highest counting accuracy on the test set under both the "error of fewer than 2 pigs" and "error of fewer than 3 pigs" criteria; specifically, 93.8% of the test images had counting errors of fewer than 3 pigs. Compared with the two-stage instance segmentation algorithm Mask R-CNN and a YOLOv8x model using the CBAM attention mechanism, YOLOv8x-Ours showed performance improvements of 7.6% and 3%, respectively. Moreover, owing to the single-stage, anchor-free architecture of YOLOv8, processing a single image took only 64 ms, about one-eighth of the time required by Mask R-CNN. By embedding the model into the WeChat mini program platform, pig counting was performed on smartphone images. Where the model detected pigs incorrectly, users could click on the erroneous location in the result image to adjust the count, further enhancing counting accuracy.

[Conclusions] The proposed pig counting method eliminates the need to install hardware in the breeding area of the pig farm, allowing pigs to be counted with nothing more than a smartphone. Through the segmentation visualization, users can promptly spot errors in the counting results and easily correct them. This human-machine collaboration not only reduces the demand for manpower but also ensures accurate and user-friendly counting results.
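To make the description of the efficient global attention module more concrete, the following is a minimal PyTorch sketch of one possible reading of it; the module name, kernel size and implementation details are assumptions rather than the authors' code. The key idea it illustrates is that, after global average pooling, permuting the channel dimension onto the sequence axis lets a one-dimensional convolution replace CBAM's fully connected layers without dimensionality reduction.

```python
import torch
import torch.nn as nn


class EfficientGlobalAttention(nn.Module):
    """Hypothetical sketch: GAP -> dimension permutation -> 1-D convolution
    (in place of CBAM's fully connected layers, no dimensionality reduction)."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape                      # input feature map (B, C, H, W)
        # Aggregate spatial information per channel with GAP: (B, C, 1, 1)
        y = x.mean(dim=(2, 3), keepdim=True)
        # Permute the pooled features so the channel axis becomes the sequence
        # axis of a 1-D convolution, letting neighbouring channels interact
        # without the dimensionality-reducing MLP used in CBAM: (B, 1, C)
        y = y.view(b, 1, c)
        y = self.sigmoid(self.conv(y))            # channel weights, (B, 1, C)
        # Reshape back and rescale the input feature map
        return x * y.view(b, c, 1, 1)
```

Because the output has the same shape as the input, e.g. `EfficientGlobalAttention()(torch.randn(1, 256, 40, 40))` returns a (1, 256, 40, 40) tensor, such a module could be appended after each C2f block of the backbone without changing feature-map dimensions, which matches the integration strategy described above.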
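Once a segmentation model of this kind is trained, counting reduces to running instance segmentation on a photo and taking the number of predicted instances. Below is a hedged sketch using the standard Ultralytics YOLOv8 inference API; the weight file name "yolov8x-ours.pt", the image path and the confidence threshold are illustrative assumptions, not the authors' released artifacts.

```python
from ultralytics import YOLO

# Load a YOLOv8 segmentation model; "yolov8x-ours.pt" stands in for the
# trained weights of the paper's model and is a hypothetical file name.
model = YOLO("yolov8x-ours.pt")

# Run instance segmentation on a smartphone photo of the pen; the pig count
# is simply the number of predicted instances.
results = model.predict("pen_photo.jpg", conf=0.5)
pig_count = len(results[0].boxes)
print(f"Estimated number of pigs: {pig_count}")
```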
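On the server side, the WeChat mini program needs an HTTP endpoint that accepts an uploaded photo and returns the count. The snippet below is a minimal Django view sketched under that assumption; the endpoint, the "image" field name and the count_pigs() helper are hypothetical placeholders for the deployed model (for example, the inference call shown in the previous sketch), not the authors' actual implementation.

```python
# views.py -- minimal Django endpoint a WeChat mini program could call.
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt


def count_pigs(image_bytes: bytes) -> int:
    """Placeholder for the deployed segmentation model (see previous sketch)."""
    raise NotImplementedError


@csrf_exempt
def count(request):
    # Expect a multipart POST with the photo in the "image" field.
    if request.method != "POST" or "image" not in request.FILES:
        return JsonResponse({"error": "POST an 'image' file"}, status=400)
    n = count_pigs(request.FILES["image"].read())
    return JsonResponse({"count": n})
```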

Key words: pig counting, deep learning, WeChat mini program, YOLOv8, instance segmentation