Topic: Machine Vision and Agricultural Intelligent Perception

Classification and Recognition Method for Yak Meat Parts Based on Improved Residual Network Model

  • ZHU Haipeng,
  • ZHANG Yu'an,
  • LI Huanhuan,
  • WANG Jianwen,
  • YANG Yingkui,
  • SONG Rende
  • 1. Department of Computer Technology and Application, Qinghai University, Xining 810016, China
  • 2. Academy of Animal Husbandry and Veterinary Sciences, Qinghai University, Xining 810016, China
  • 3. Animal Disease Prevention and Control Center of Yushu Prefecture, Qinghai Province, Yushu 815000, China
ZHU Haipeng, E-mail: 2633866477@qq.com
ZHANG Yu'an, E-mail: 2011990029@qhu.edu.cn

Received date: 2023-03-26

Online published: 2023-07-12

Supported by

Qinghai Provincial Science and Technology Plan Project (2020-QY-218); National Modern Agricultural Industry Technology System Funding (CARS-37)

Abstract

[Objective] Research on the recognition of yak meat parts can help avoid confusion of parts and substandard parts during the production and sale of yak meat, improve the transparency and traceability of the yak meat industry, and ensure food safety. To achieve fast and accurate recognition of different parts of yak meat, this study proposed an improved residual network model and developed a smartphone-based yak meat part recognition App.

[Methods] Firstly, the original dataset of 1,960 images of yak tenderloin, high rib, shank and brisket was expanded with eight data enhancement methods: horizontal flip, vertical flip, rotation by 30° in a random direction, rotation by 120° in a random direction, rotation by 300° in a random direction, contrast adjustment, saturation adjustment and hue adjustment. After expansion, 17,640 images of the different yak meat parts were obtained. The expanded images were split at a ratio of 4:1, giving 14,112 sample images in the training set and 3,528 sample images in the test set. Secondly, the convolutional block attention module (CBAM) was integrated into each residual block of the original network model to enhance the extraction of key detail features in the images of the different yak meat parts; introducing this mechanism yielded a clear accuracy improvement at the cost of little extra computation and few extra parameters. In addition, instead of connecting the fully connected layer directly after all residual blocks as in the original network model, global average pooling and global maximum pooling were placed before the fully connected layer, which improved the accuracy of the network model, helped prevent overfitting, reduced the number of connections in subsequent network layers, accelerated model execution, and shortened the computing time when a mobile phone recognized images. Thirdly, different learning rates, weight decay coefficients and optimizers were tested to examine their influence on the convergence speed and accuracy of the improved ResNet18_CBAM network model. According to the experiments, with the stochastic gradient descent (SGD) optimizer, a learning rate of 0.001 and a weight decay coefficient of 0, the improved ResNet18_CBAM network model converged fastest and achieved the highest recognition accuracy on the yak meat parts dataset. Finally, the PyTorch Mobile module of the PyTorch deep learning framework was used to convert the trained ResNet18_CBAM network model into a TorchScript model saved in *.ptl format. The yak meat part recognition App was then developed in the Android Studio environment, comprising a front-end interface and back-end processing: the front end used *.xml layouts for the interface controls, the back end was developed in Java, and the TorchScript model in *.ptl format was used to identify the different parts of yak meat.
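To make the augmentation and split step above concrete, the sketch below shows one way it could be implemented with torchvision. It is an illustrative sketch, not the authors' code: the directory name, the color-jitter strengths and the helper for rotating in a random direction are assumptions, while the 1,960 → 17,640 expansion and the 4:1 split follow the figures reported above.

```python
import random
from torchvision import transforms
from torchvision.transforms import functional as TF
from torchvision.datasets import ImageFolder

def rotate_random_direction(angle):
    # "Rotation by angle° in a random direction": rotate by +angle or -angle at random.
    return transforms.Lambda(lambda img: TF.rotate(img, angle if random.random() < 0.5 else -angle))

# The eight augmentations listed in the Methods; the color-jitter strengths are assumptions.
AUGMENTATIONS = [
    transforms.RandomHorizontalFlip(p=1.0),
    transforms.RandomVerticalFlip(p=1.0),
    rotate_random_direction(30),
    rotate_random_direction(120),
    rotate_random_direction(300),
    transforms.ColorJitter(contrast=0.5),
    transforms.ColorJitter(saturation=0.5),
    transforms.ColorJitter(hue=0.1),
]

def expand_dataset(samples):
    """Keep each original image and add its 8 augmented copies: 1,960 x 9 = 17,640 images."""
    expanded = []
    for image, label in samples:
        expanded.append((image, label))
        expanded.extend((aug(image), label) for aug in AUGMENTATIONS)
    return expanded

raw = ImageFolder("yak_parts_raw/")              # hypothetical folder: one sub-folder per meat part
samples = [raw[i] for i in range(len(raw))]      # (PIL image, class index) pairs
expanded = expand_dataset(samples)

random.shuffle(expanded)
split = int(len(expanded) * 0.8)                 # 4:1 split: 14,112 training / 3,528 test images
train_samples, test_samples = expanded[:split], expanded[split:]
```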
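The model changes described above (a CBAM module attached to each residual block of ResNet18, with global average pooling and global maximum pooling feeding the final fully connected layer) could be sketched in PyTorch roughly as follows. This is one plausible reading rather than the published implementation; the reduction ratio of 16, the 7×7 spatial-attention kernel and the pooling-concatenation head are common CBAM/ResNet conventions assumed here, and the reported training configuration (SGD, learning rate 0.001, weight decay 0) is shown at the end.

```python
import torch
import torch.nn as nn
import torchvision

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # channel descriptor from global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # channel descriptor from global max pooling
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # per-pixel average over channels
        mx, _ = x.max(dim=1, keepdim=True)   # per-pixel max over channels
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Convolutional block attention module: channel attention followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

class BlockWithCBAM(nn.Module):
    """Wrap a ResNet BasicBlock so that CBAM refines its output feature map."""
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.cbam = CBAM(block.conv2.out_channels)

    def forward(self, x):
        return self.cbam(self.block(x))

class ResNet18CBAM(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        m = torchvision.models.resnet18(weights=None)  # torchvision >= 0.13 API
        # Attach a CBAM module to every residual block in the four stages.
        for layer in (m.layer1, m.layer2, m.layer3, m.layer4):
            for i in range(len(layer)):
                layer[i] = BlockWithCBAM(layer[i])
        self.features = nn.Sequential(
            m.conv1, m.bn1, m.relu, m.maxpool,
            m.layer1, m.layer2, m.layer3, m.layer4,
        )
        # Global average pooling and global maximum pooling before the fully connected layer.
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.fc = nn.Linear(512 * 2, num_classes)

    def forward(self, x):
        f = self.features(x)
        pooled = torch.cat([self.avg_pool(f), self.max_pool(f)], dim=1)
        return self.fc(torch.flatten(pooled, 1))

# Training configuration reported above: SGD, learning rate 0.001, weight decay 0.
model = ResNet18CBAM(num_classes=4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, weight_decay=0.0)
```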
[Results and Discussions] In this study, four popular attention mechanism modules, CBAM, SENet, NAM and SKNet, were integrated into the original ResNet18 network model and compared in ablation experiments. Their recognition accuracies on the yak meat parts dataset were 96.31%, 94.12%, 92.51% and 93.85%, respectively, showing that the recognition accuracy of the ResNet18_CBAM network model was significantly higher than that of the models using the other three attention mechanism modules. Therefore, the CBAM attention mechanism module was chosen to improve the original network model. The accuracy of the improved ResNet18_CBAM network model on the test set of the four parts (yak tenderloin, high rib, shank and brisket) was 96.31%, which was 2.88 percentage points higher than that of the original network model. The improved ResNet18_CBAM network model was also compared with the AlexNet, VGG11, ResNet34 and ResNet18 network models on the yak meat parts test set and achieved the highest accuracy. To verify its performance in practice, the model was tested on mobile phones at a beef and mutton wholesale market in Xining. In this real-scenario test on the mobile end, 54, 59, 51 and 57 samples of yak tenderloin, high rib, shank and brisket were collected, respectively, and the numbers of correctly and incorrectly identified samples were counted. The recognition accuracies for yak tenderloin, high rib, shank and brisket reached 96.30%, 94.92%, 98.04% and 96.49%, respectively. These results show that the improved ResNet18_CBAM network model can be used in practical applications to identify different parts of yak meat and achieves good results.

[Conclusions] The research results can help ensure food quality and safety in the yak meat industry, raise its quality and safety level, improve yak trade efficiency, reduce costs, and provide technical support for the intelligent development of the yak industry on the Qinghai-Tibet Plateau.
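For the deployment step described in the Methods (converting the trained model into a TorchScript file in *.ptl format for the Android App that was evaluated above), a typical PyTorch Mobile export looks like the sketch below. The checkpoint and output file names and the 224×224 input size are assumptions; the lite-interpreter save call is the standard PyTorch Mobile API.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# ResNet18CBAM is the model class from the earlier sketch; the checkpoint name is a placeholder.
model = ResNet18CBAM(num_classes=4)
model.load_state_dict(torch.load("resnet18_cbam_yak.pth", map_location="cpu"))
model.eval()

# Trace the model with a dummy RGB input; 224x224 is an assumed input resolution.
example = torch.rand(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Optimize the TorchScript graph for mobile and save it for the lite interpreter (*.ptl),
# which is the format the App's Java back end loads on the phone.
mobile_model = optimize_for_mobile(scripted)
mobile_model._save_for_lite_interpreter("resnet18_cbam_yak.ptl")
```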

Cite this article

ZHU Haipeng, ZHANG Yu'an, LI Huanhuan, WANG Jianwen, YANG Yingkui, SONG Rende. Classification and Recognition Method for Yak Meat Parts Based on Improved Residual Network Model[J]. Smart Agriculture, 2023, 5(2): 115-125. DOI: 10.12133/j.smartag.SA202303011
