
Smart Agriculture ›› 2023, Vol. 5 ›› Issue (2): 115-125. DOI: 10.12133/j.smartag.SA202303011

• Topic--Machine Vision and Agricultural Intelligent Perception •

Classification and Recognition Method for Yak Meat Parts Based on Improved Residual Network Model

ZHU Haipeng1, ZHANG Yu'an1, LI Huanhuan1, WANG Jianwen1, YANG Yingkui2, SONG Rende3

  1. Department of Computer Technology and Application, Qinghai University, Xining 810016, China
  2. Academy of Animal Husbandry and Veterinary Sciences, Qinghai University, Xining 810016, China
  3. Animal Disease Prevention and Control Center of Yushu Prefecture, Qinghai Province, Yushu 815000, China
  • Received: 2023-03-26  Online: 2023-06-30

Abstract:

[Objective] Research on the recognition of yak meat parts can help avoid part confusion and substandard parts during the production and sale of yak meat, improve the transparency and traceability of the yak meat industry, and ensure food safety. To achieve fast and accurate recognition of different parts of yak meat, this study proposed an improved residual network model and developed smartphone-based yak meat part recognition software. [Methods] First, an original dataset of 1960 images of yak tenderloin, high rib, shank and brisket was expanded with 8 data augmentation methods: horizontal flip, vertical flip, random rotation by 30°, random rotation by 120°, random rotation by 300°, and contrast, saturation and hue adjustment. After expansion, 17,640 images of different yak meat parts were obtained. The expanded images were split at a 4:1 ratio into a training set of 14,112 images and a test set of 3528 images. Second, the convolutional block attention module (CBAM) was integrated into each residual block of the original network model to enhance the extraction of key detail features from images of different yak meat parts; introducing this mechanism achieved a larger accuracy improvement with little computational overhead and few additional parameters. In addition, the fully connected layer that directly followed the residual blocks in the original network model was replaced with global average pooling and global max pooling, which improved the accuracy of the network model, helped prevent overfitting, reduced the number of connections in subsequent network layers, accelerated model execution, and shortened the computing time when a mobile phone recognizes an image.
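The CBAM-augmented block described above can be sketched in PyTorch. This is a minimal illustration rather than the paper's exact implementation: the class names, the channel-reduction ratio of 16, and the 7×7 spatial kernel are assumptions taken from the standard CBAM design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Channel attention: a shared MLP over global average- and max-pooled features."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return torch.sigmoid(avg + mx)  # per-channel weights in (0, 1)


class SpatialAttention(nn.Module):
    """Spatial attention: a conv over channel-wise mean and max maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Applies channel then spatial attention; the tensor shape is preserved,
    so the module can be dropped into each ResNet18 residual block."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.ca(x)
        return x * self.sa(x)
```

Because the output shape matches the input, the module adds only a small number of parameters per block, which is consistent with the low-overhead claim above.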
Third, different learning rates, weight decay coefficients and optimizers were tested to verify their influence on the convergence speed and accuracy of the improved ResNet18_CBAM network model. According to the experiments, with the stochastic gradient descent (SGD) optimizer, a learning rate of 0.001 and a weight decay coefficient of 0, the improved ResNet18_CBAM network model converged fastest and achieved the highest recognition accuracy on the yak meat part datasets. Finally, the PyTorch Mobile module of the PyTorch deep learning framework was used to convert the trained ResNet18_CBAM network model into a TorchScript model saved as a *.ptl file. A yak meat part recognition App was then developed in the Android Studio environment, comprising a front-end interface and back-end processing: the front end used *.xml files to lay out the interface controls, and the back end was developed in Java. The TorchScript model in the *.ptl file was then used to identify different parts of yak meat. [Results and Discussions] In this study, four popular attention modules, CBAM, SENet, NAM and SKNet, were each integrated into the original ResNet18 network model and compared through ablation experiments. Their recognition accuracies on the yak meat part dataset were 96.31%, 94.12%, 92.51% and 93.85%, respectively. The recognition accuracy of the ResNet18_CBAM network model was significantly higher than that of the models using the other three attention modules, so CBAM was chosen as the improvement module for the original network model. The accuracy of the improved ResNet18_CBAM network model on the test set covering the 4 parts (yak tenderloin, high rib, shank and brisket) was 96.31%, which was 2.88% higher than that of the original network model.
The recognition accuracy of the improved ResNet18_CBAM network model was also compared with that of the AlexNet, VGG11, ResNet34 and ResNet18 network models on the yak meat part test set; the improved ResNet18_CBAM network model had the highest accuracy. To verify the practical performance of the improved ResNet18_CBAM network model on mobile phones, a test was conducted at the Xining beef and mutton wholesale market. In this mobile-end field test, 54, 59, 51 and 57 samples of yak tenderloin, high rib, shank and brisket, respectively, were collected, and the numbers of correctly and incorrectly identified samples were counted. The recognition accuracies for yak tenderloin, high rib, shank and brisket reached 96.30%, 94.92%, 98.04% and 96.49%, respectively. The results showed that the improved ResNet18_CBAM network model can be used in practical applications for identifying different parts of yak meat and achieved good results. [Conclusions] The research results can help ensure food quality and safety in the yak industry, raise its quality and safety level, improve trade efficiency, reduce costs, and provide technical support for the intelligent development of the yak industry in the Qinghai-Tibet Plateau region.
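As a consistency check, the reported field-test accuracies match the stated sample counts. The per-part correct-prediction counts below are inferred from those figures; the abstract itself reports only the percentages.

```python
# Sample sizes reported for the field test at the wholesale market.
samples = {"tenderloin": 54, "high rib": 59, "shank": 51, "brisket": 57}

# Correct-prediction counts consistent with the reported accuracies
# (inferred here, not stated in the abstract).
correct = {"tenderloin": 52, "high rib": 56, "shank": 50, "brisket": 55}

accuracy = {part: round(100 * correct[part] / samples[part], 2)
            for part in samples}
# tenderloin 96.3, high rib 94.92, shank 98.04, brisket 96.49
```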

Key words: image classification, attention mechanism, residual network, mobile applications, recognition of yak meat parts, transfer learning
