Welcome to Smart Agriculture
30 July 2024, Volume 6 Issue 4
Topic: Technological Innovation and Sustainable Development of Smart Animal Husbandry
Research Advances and Prospect of Intelligent Monitoring Systems for the Physiological Indicators of Beef Cattle | Open Access
ZHANG Fan, ZHOU Mengting, XIONG Benhai, YANG Zhengang, LIU Minze, FENG Wenxiao, TANG Xiangfang
2024, 6(4):  1-17.  doi:10.12133/j.smartag.SA202312001

[Significance] The beef cattle industry plays a pivotal role in the development of China's agricultural economy and the enhancement of people's dietary structure. However, there exists a substantial disparity in feeding management practices and economic efficiency between China's beef cattle industry and that of developed countries. While the beef cattle industry in China is progressing towards intensive, modern, and large-scale development, it encounters challenges such as labor shortages and rising labor costs that seriously affect its healthy development. The determination of animal physiological indicators plays an important role in monitoring animal welfare and health status. Therefore, leveraging data collected from various sensors, together with technologies such as machine learning, data mining, and modeling analysis, enables automatic acquisition of meaningful information on beef cattle physiological indicators for intelligent management of beef cattle. In this paper, the intelligent monitoring technologies for physiological indicators in the beef cattle breeding process and their application value are systematically summarized, and the existing challenges and future prospects of intelligent beef cattle breeding in China are discussed. [Progress] The methods of obtaining information on beef cattle physiological indicators include contact sensors worn on the body and non-contact sensors based on various forms of image acquisition. Monitoring the exercise behavior of beef cattle plays a crucial role in disease prevention, reproduction monitoring, and status assessment. The three-axis accelerometer sensor, which tracks the amount of time that beef cattle spend lying, walking, and standing, is a widely used technique for tracking the movement behavior of beef cattle.
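As a concrete illustration of the accelerometer-based approach described above, below is a minimal sketch of window-level behavior classification from tri-axial acceleration data; the threshold value and the axis-orientation convention are illustrative assumptions, not values from the reviewed studies.

```python
import numpy as np

def classify_window(acc_xyz, walk_sma=1.2):
    """Classify one window of tri-axial accelerometer data (in g units)
    into 'lying', 'standing', or 'walking'.

    The walk threshold and the assumption that the z-axis points up when
    the animal stands are illustrative placeholders."""
    acc = np.asarray(acc_xyz, dtype=float)   # shape (n_samples, 3)
    dyn = acc - acc.mean(axis=0)             # remove the gravity estimate
    sma = np.abs(dyn).sum(axis=1).mean()     # signal magnitude area
    if sma > walk_sma:
        return "walking"
    # posture from the static (gravity) component: z dominant => upright
    g = acc.mean(axis=0)
    return "standing" if abs(g[2]) > max(abs(g[0]), abs(g[1])) else "lying"
```

In practice such hand-set thresholds would be replaced by a classifier trained on labeled windows, but the feature pipeline is the same.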
Through machine vision analysis, individual recognition of beef cattle and identification of standing, lying, and mounting movements can also be achieved, offering non-contact, stress-free, low-cost monitoring that generates large volumes of data. Body temperature in beef cattle is associated with estrus, calving, and overall health. Sensors for monitoring body temperature include rumen temperature sensors and rectal temperature sensors, but both are inconvenient to deploy. Infrared temperature measurement technology can be utilized to detect beef cattle with abnormal temperatures by monitoring eye and ear-root temperatures, although the accuracy of the results may be influenced by environmental temperature and monitoring distance, necessitating calibration. Heart rate and respiratory rate in beef cattle are linked to animal diseases, stress, and pest attacks. Heart rate can be monitored through photoelectric volume pulse wave measurement, using infrared emitters and receivers to track changes in arterial blood flow. Respiratory rate monitoring can be achieved by identifying the different nostril temperatures during inhalation and exhalation using thermal infrared imaging technology. The ruminating behavior of beef cattle is associated with health and feed nutrition. Currently, the primary tools used to detect rumination behavior are pressure sensors and three-axis accelerometer sensors positioned at various head positions. Rumen acidosis is a major disease in the rapid fattening of beef cattle; however, due to limitations in battery life and electrode usage, real-time pH monitoring sensors placed in the rumen are still not widely utilized. Changes in animal physiology, growth, and health can result in alterations in specific components of body fluids. Therefore, monitoring body fluids or surrounding gases through biosensors can be employed to monitor the physiological status of beef cattle.
By processing and analyzing the physiological information of beef cattle, indicators such as estrus, calving, feeding, drinking, health condition, and stress level can be monitored. This will contribute to the intelligent development of the beef cattle industry and enhance management efficiency. While some progress has been made in developing technology for monitoring the physiological indicators of beef cattle, several challenges remain. Contact sensors consume more energy, which shortens their lifespan. Various sensors are susceptible to environmental interference, which affects measurement accuracy. Additionally, owing to the wide variety of beef cattle breeds, it is difficult to establish a model database for monitoring physiological indicators across different feeding conditions, breeding stages, and breeds. Furthermore, the installation cost of the various intelligent monitoring devices is relatively high, which also limits their adoption. [Conclusion and Prospects] The application of intelligent monitoring technology for beef cattle physiological indicators is of great significance in enhancing the management level of beef cattle feeding. Intelligent monitoring systems and devices are utilized to acquire physiological behavior data, which are then analyzed using corresponding data models or classified through deep learning techniques to promptly monitor subtle changes in physiological indicators. This enables timely detection of sick, estrus, and calving cattle, facilitating prompt measures by production managers, reducing personnel workload, and improving efficiency. The future development of physiological indicator monitoring technologies in beef cattle primarily focuses on the following three aspects: (1) Enhancing the lifespan of contact sensors by reducing energy consumption, decreasing data transmission frequency, and improving battery life.
(2) Integrating and analyzing various monitoring data from multiple perspectives to enhance their accuracy and utility. (3) Strengthening research on non-contact, high-precision, and automated analysis technologies to promote precise and intelligent development within the beef cattle industry.

Automatic Detection Method of Dairy Cow Lameness from Top-view Based on the Fusion of Spatiotemporal Stream Features | Open Access
DAI Xin, WANG Junhao, ZHANG Yi, WANG Xinjie, LI Yanxing, DAI Baisheng, SHEN Weizheng
2024, 6(4):  18-28.  doi:10.12133/j.smartag.SA202405025

[Objective] The detection of lameness in dairy cows is an important issue that needs to be solved urgently in the process of large-scale dairy farming. Timely detection and effective intervention can reduce the culling rate of young dairy cows, which has important practical significance for increasing the milk production of dairy cows and improving the economic benefits of pastures. Because traditional manual detection and contact sensor detection are inefficient and poorly automated, mainstream cow lameness detection methods are mainly based on computer vision. Existing computer vision-based methods mostly use a side view, but this perspective has limitations that are difficult to eliminate: in actual detection, cows block each other and deployment is difficult. A top-view method, by contrast, is not hindered by occlusion and is therefore easier to use on the farm. The aim of this study is to solve the occlusion problem of the side view. [Methods] In order to fully explore the undulating movement of the cow's trunk and the motion information in the time dimension while the cow walks, a cow lameness detection method from a top view based on fused spatiotemporal stream features was proposed. By analyzing the height changes of the lame cow in the depth video stream during movement, a spatial stream feature image sequence was constructed. By analyzing the instantaneous speed of the lame cow's body moving forward and swaying left and right when walking, optical flow was used to capture the instantaneous speed of the cow's movement, and a temporal stream feature image sequence was constructed. The spatial stream and temporal stream features were combined to construct a fused spatiotemporal stream feature image sequence.
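The two-stream construction described above can be sketched as follows; for brevity this illustration substitutes simple inter-frame differencing for the optical flow used in the paper, and the normalization scheme is an assumption.

```python
import numpy as np

def build_fused_stream(depth_frames):
    """Build a fused spatiotemporal feature sequence from top-view depth
    frames, in the spirit of the paper's method.

    Spatial stream: a per-sequence normalized back-height map.
    Temporal stream: inter-frame difference magnitude, a simple stand-in
    for the optical-flow speed images used in the paper.
    Returns an array of shape (T-1, H, W, 2)."""
    d = np.asarray(depth_frames, dtype=float)      # (T, H, W)
    spatial = (d - d.min()) / (np.ptp(d) + 1e-8)   # height proxy in [0, 1]
    temporal = np.abs(np.diff(d, axis=0))          # |frame_t - frame_{t-1}|
    temporal = temporal / (temporal.max() + 1e-8)
    return np.stack([spatial[1:], temporal], axis=-1)
```

A video action classification network would then consume this (T-1, H, W, 2) sequence in place of raw RGB frames.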
Different from traditional image classification tasks, the image sequence of cows walking includes features in both the time and space dimensions. There is a certain distinction between lame and non-lame cows in their postures and walking speeds, so analyzing video information is a feasible way to characterize lameness as a behavior. A video action classification network can effectively model the spatiotemporal information in the input image sequence and output the corresponding category as its prediction. The convolutional block attention module (CBAM) was used to improve the PP-TSMv2 video action classification network and build the Cow-TSM cow lameness detection model. The CBAM module performs channel weighting across the different modalities of cow data while also weighting individual pixels, improving the model's feature extraction capability. Finally, cow lameness experiments were conducted on different modalities, different attention mechanisms, and different video action classification networks, together with comparisons against existing methods. The dataset used for cow lameness detection included a total of 180 video streams of cows walking. Each video was decomposed into 100‒400 frames. The ratio of video segments of lame cows to normal cows was 1:1. For extracting cow lameness features from the top view, RGB images carry little extractable information, so this work mainly used depth video streams. [Results and Discussions] In this study, a total of 180 segments of cow image sequence data were acquired and processed, including 90 lame cows and 90 non-lame cows with a 1:1 ratio of video segments. The prediction accuracy of the proposed method reached 88.7%, the model size was 22 M, and the offline inference time was 0.046 s.
The prediction accuracy of the mainstream video action classification models TSM, PP-TSM, SlowFast and TimesFormer on the same dataset reached 66.7%, 84.8%, 87.1% and 85.7%, respectively; the comprehensive performance of the improved Cow-TSM model was the best. At the same time, the recognition accuracy of the fused spatiotemporal stream features was improved by 12% and 4.1% compared with the temporal mode and the spatial mode, respectively, which proved the effectiveness of spatiotemporal stream fusion in this method. Ablation experiments on the SE, SK, CA and CBAM attention mechanisms proved that the CBAM attention mechanism performs best on this data: the channel attention in CBAM works well on fused spatiotemporal stream data, and the spatial attention can also focus on the key spatial information in cow images. Finally, comparisons were made with existing lameness detection methods, including different methods from the side view and the top view. Compared with existing side-view methods, the prediction accuracy of the proposed method was slightly lower, because the side view captures more effective cow lameness characteristics. Compared with existing top-view methods, the proposed fused spatiotemporal stream feature detection method offers better performance and practicability. [Conclusions] This method avoids the occlusion problem of detecting lame cows from the side view, and at the same time improves the prediction accuracy of top-view detection.
It is of great significance for reducing the incidence of lameness in cows and improving the economic benefits of the pasture, and it meets the needs of large-scale pasture operations.
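Since CBAM is central to the Cow-TSM model, a minimal numpy sketch of its two attention stages may help; the spatial stage's 7×7 convolution is replaced here by a plain average of the pooled maps for brevity, and the weight shapes `w1`, `w2` are illustrative.

```python
import numpy as np

def cbam(feat, w1, w2):
    """Minimal sketch of CBAM on a (C, H, W) feature map.

    Channel attention: a shared two-layer MLP (w1: (C//r, C), w2: (C, C//r))
    applied to global average- and max-pooled descriptors.
    Spatial attention: CBAM's 7x7 convolution over the pooled maps is
    replaced here by a simple average of the two maps."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    relu = lambda x: np.maximum(x, 0.0)
    avg = feat.mean(axis=(1, 2))                        # (C,)
    mx = feat.max(axis=(1, 2))                          # (C,)
    ch = sig(w2 @ relu(w1 @ avg) + w2 @ relu(w1 @ mx))  # channel gate (C,)
    feat = feat * ch[:, None, None]
    sp = sig(0.5 * (feat.mean(axis=0) + feat.max(axis=0)))  # spatial gate (H, W)
    return feat * sp[None]
```

In the actual model both stages use learned parameters; this sketch only shows the data flow of channel-then-spatial gating.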

Real-Time Monitoring Method for Cow Rumination Behavior Based on Edge Computing and Improved MobileNet v3 | Open Access
ZHANG Yu, LI Xiangting, SUN Yalin, XUE Aidi, ZHANG Yi, JIANG Hailong, SHEN Weizheng
2024, 6(4):  29-41.  doi:10.12133/j.smartag.SA202405023

[Objective] Real-time monitoring of cow ruminant behavior is of paramount importance for promptly obtaining relevant information about cow health and predicting cow diseases. Various strategies have been proposed for monitoring cow ruminant behavior, including video surveillance, sound recognition, and sensor monitoring methods. However, applying these methods on edge devices gives rise to the issue of inadequate real-time performance. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method for cow ruminant behavior based on edge computing was proposed. [Methods] Autonomously designed edge devices were utilized to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for the real-time recognition of cow ruminant behavior. For the federated edge intelligence approach, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed utilizing the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For the split edge intelligence approach, a split edge intelligence model named MobileNet-LSTM was designed by integrating the MobileNet v3 network, a fused collaborative attention mechanism, and the Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average precision, recall, F1-score, specificity, and accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance.
[Conclusions] This study provides a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
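The FedAvg aggregation step used in the federated edge intelligence model can be sketched in a few lines; representing a model as a list of numpy parameter arrays is an assumption for illustration.

```python
import numpy as np

def fedavg(client_models, client_sizes):
    """FedAvg aggregation: average the clients' model parameters, weighted
    by each client's local dataset size.

    client_models: list of models, each a list of numpy parameter arrays
    client_sizes:  number of local training samples per client"""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    n_layers = len(client_models[0])
    return [sum(c * model[i] for c, model in zip(coeffs, client_models))
            for i in range(n_layers)]
```

In a full federated round, the server would broadcast the aggregated parameters back to the edge devices before the next local training pass.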

CSD-YOLOv8s: Dense Sheep Small Target Detection Model Based on UAV Images | Open Access
WENG Zhi, LIU Haixin, ZHENG Zhiqiang
2024, 6(4):  42-52.  doi:10.12133/j.smartag.SA202401004

[Objective] The monitoring of livestock grazing in natural pastures is a key aspect of the transformation and upgrading of large-scale breeding farms. In order to meet the demand of large-scale farms for accurate real-time detection of large numbers of sheep, a high-precision and easy-to-deploy small-target detection model, CSD-YOLOv8s, was proposed to realize the real-time detection of small-target individual sheep from the high-altitude view of an unmanned aerial vehicle (UAV). [Methods] Firstly, a UAV was used to acquire video data of sheep in natural grassland pastures with different backgrounds and lighting conditions, which, together with some downloaded public datasets, formed the original image data. The sheep detection dataset was generated through data cleaning and labeling. Secondly, in order to solve the detection difficulties caused by dense flocks and mutual occlusion, the SPPFCSPC module was constructed with cross-stage partial connections based on the you only look once (YOLO)v8 model. It combined the original features with the output features of the fast spatial pyramid pooling network, fully retained the feature information at different stages of the model, effectively addressed the small size and serious occlusion of the sheep targets, and improved the detection performance of the model for small sheep targets. In the Neck part of the model, the convolutional block attention module (CBAM) was introduced to enhance feature capture in both the spatial and channel dimensions, suppressing background information spatially and focusing on the sheep targets in the channel dimension, enhancing the network's anti-interference ability from both dimensions and improving the model's detection of multi-scale sheep under complex backgrounds and different illumination conditions.
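The fast spatial pyramid pooling underlying the SPPFCSPC module can be illustrated as below; the naive pooling loop and the omission of the cross-stage partial branch are simplifications for clarity.

```python
import numpy as np

def sppf(feat, k=5):
    """Sketch of fast spatial pyramid pooling (SPPF): three stacked
    max-pool passes with the same kernel emulate a 5/9/13 pyramid, and the
    outputs are concatenated with the input along the channel axis.
    (The cross-stage partial connection of SPPFCSPC is omitted here.)"""
    def maxpool(x, k):
        pad = k // 2
        xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)),
                    constant_values=-np.inf)      # same-size pooling
        C, H, W = x.shape
        out = np.empty_like(x)
        for i in range(H):
            for j in range(W):
                out[:, i, j] = xp[:, i:i + k, j:j + k].max(axis=(1, 2))
        return out
    p1 = maxpool(feat, k)
    p2 = maxpool(p1, k)
    p3 = maxpool(p2, k)
    return np.concatenate([feat, p1, p2, p3], axis=0)
```

Stacking small pools instead of running three large ones is what makes SPPF "fast" relative to classic SPP, while producing the same receptive-field pyramid.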
Finally, in order to improve the real-time performance and deployability of the model, the standard convolution of the Neck network was changed to a lightweight convolutional C2f_DS module with a changeable kernel, which adaptively selects the corresponding convolutional kernel for feature extraction according to the input features and handles input scale changes during sheep detection more flexibly; at the same time, the number of model parameters was reduced and the model speed was improved. [Results and Discussions] The improved CSD-YOLOv8s model exhibited excellent performance in the sheep detection task. Compared with YOLO, Faster R-CNN and other classical network models, the improved CSD-YOLOv8s model had higher detection accuracy at a frame rate of 87 f/s, with comparable detection speed and model size. Compared with the YOLOv8s model, precision was improved from 93.0% to 95.2% and mAP from 91.2% to 93.1%, and the model showed strong robustness to sheep targets with different degrees of occlusion and different scales, effectively addressing the serious missed and false detections in the grassland pasture UAV-to-ground sheep detection task caused by small sheep targets, large background noise, and high flock density. Validated on the PASCAL VOC 2007 open dataset, the CSD-YOLOv8s model improved the detection accuracy of 20 different object classes, including transportation vehicles and animals; in sheep detection in particular, the detection accuracy was improved by 9.7%. [Conclusions] This study establishes a sheep dataset based on drone images and proposes a model called CSD-YOLOv8s for detecting grazing sheep in natural grasslands.
The model addresses the serious issues of missed detections and false alarms in sheep detection under complex backgrounds and lighting conditions, enabling more accurate detection of grazing livestock in drone images. It achieves precise detection of targets with varying degrees of clustering and occlusion and possesses good real-time performance. This model provides an effective detection method for detecting sheep herds from the perspective of drones in natural pastures and offers technical support for large-scale livestock detection in breeding farms, with wide-ranging potential applications.

A Regional Farming Pig Counting System Based on Improved Instance Segmentation Algorithm | Open Access
ZHANG Yanqi, ZHOU Shuo, ZHANG Ning, CHAI Xiujuan, SUN Tan
2024, 6(4):  53-63.  doi:10.12133/j.smartag.SA202310001

[Objective] Currently, pig farming facilities mainly rely on manual counting for tracking slaughtered and stocked pigs. This is not only time-consuming and labor-intensive, but also prone to counting errors due to pig movement and potential cheating. As breeding operations expand, periodic live asset inventories put significant strain on human, material and financial resources. Although methods based on electronic ear tags can assist in pig counting, such tags easily break and fall off in group housing environments. Most existing computer vision methods for counting pigs require capturing images from a top-down perspective, necessitating the installation of cameras above each hogpen or even the use of drones, resulting in high installation and maintenance costs. To address these challenges in the group pig counting task, a high-efficiency and low-cost pig counting method was proposed based on an improved instance segmentation algorithm and the WeChat public platform. [Methods] Firstly, a smartphone was used to collect pig image data in the area from a human-view perspective, and each pig's outline in the images was annotated to establish a pig counting dataset. The training set contains 606 images and the test set 65 images. Secondly, an efficient global attention module was proposed by improving the convolutional block attention module (CBAM). The efficient global attention module first performed a dimension permutation operation on the input feature map to capture the interaction between its channel and spatial dimensions. The permuted features were aggregated using global average pooling (GAP). One-dimensional convolution replaced the fully connected operation in CBAM, eliminating dimensionality reduction and significantly reducing the model's parameter count. This module was integrated into the YOLOv8 single-stage instance segmentation network to build the pig counting model YOLOv8x-Ours.
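The one-dimensional-convolution channel attention described above (in the spirit of ECA-style modules that drop CBAM's fully connected layers) can be sketched as follows; the uniform kernel stands in for learned weights and is purely illustrative.

```python
import numpy as np

def conv1d_channel_attention(feat, k=3):
    """Sketch of channel attention without dimensionality reduction:
    global average pooling, then a k-wide 1-D convolution across the
    channel descriptor (replacing CBAM's fully connected layers), then
    sigmoid gating. The uniform kernel is a placeholder for learned
    weights."""
    desc = feat.mean(axis=(1, 2))                   # GAP -> (C,)
    kernel = np.full(k, 1.0 / k)                    # placeholder weights
    conv = np.convolve(desc, kernel, mode="same")   # 1-D conv over channels
    gate = 1.0 / (1.0 + np.exp(-conv))              # sigmoid
    return feat * gate[:, None, None]
```

Because the 1-D convolution has only k weights regardless of channel count, this design avoids the parameter cost of CBAM's MLP, which matches the parameter-reduction claim above.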
By adding an efficient global attention module into each C2f layer of the YOLOv8 backbone network, the dimensional dependencies and feature information in the image could be extracted more effectively, thereby achieving high-accuracy pig counting. Lastly, with a focus on user experience and outreach, a pig counting WeChat mini program was developed based on the WeChat public platform and the Django Web framework, and the counting model was deployed to count pigs using images captured by smartphones. [Results and Discussions] Compared with the existing methods Mask R-CNN, YOLACT (real-time instance segmentation), PolarMask, SOLO and YOLOv5x, the proposed pig counting model YOLOv8x-Ours exhibited superior accuracy and stability. Notably, YOLOv8x-Ours achieved the highest counting accuracy under error tolerances of both 2 and 3 pigs on the test set; specifically, 93.8% of the test images had counting errors of fewer than 3 pigs. Compared with the two-stage instance segmentation algorithm Mask R-CNN and the YOLOv8x model using the CBAM attention mechanism, YOLOv8x-Ours showed performance improvements of 7.6% and 3%, respectively. Owing to the single-stage, anchor-free design of the YOLOv8 model, the processing time for a single image was only 64 ms, 1/8 that of Mask R-CNN. By embedding the model into the WeChat mini program platform, pig counting was conducted using smartphone images. In cases where the model detected pigs incorrectly, users could click on the erroneous location in the result image to adjust the count, further enhancing accuracy. [Conclusions] The feasibility of deep learning technology for the pig counting task was demonstrated. The proposed method eliminates the need for installing hardware in the breeding area of the pig farm, enabling pig counting to be carried out effortlessly using just a smartphone.
Users can promptly spot any errors in the counting results through image segmentation visualization and easily rectify any inaccuracies. This collaborative human-machine model not only reduces the need for extensive manpower but also guarantees the precision and user-friendliness of the counting outcomes.

Automatic Measurement Method of Beef Cattle Body Size Based on Multimodal Image Information and Improved Instance Segmentation Network | Open Access
WENG Zhi, FAN Qi, ZHENG Zhiqiang
2024, 6(4):  64-75.  doi:10.12133/j.smartag.SA202310007

[Objective] The body size parameters of cattle are key indicators reflecting their physical development, and are also key factors in the cattle selection and breeding process. In order to meet the demand for measuring the body size of beef cattle in the complex environment of large-scale beef cattle ranches, an image acquisition device and an automatic body size measurement algorithm were designed. [Methods] Firstly, a walking channel for the beef cattle was established, and when a beef cattle entered the restraining device through the channel, RGB and depth images of the right side of the animal were acquired using an Intel RealSense D455 camera. Secondly, in order to avoid the influence of the complex environmental background, an improved instance segmentation network based on Mask2former was proposed, adding a CBAM module and a CA module, respectively, to improve the model's ability to extract key features from different perspectives; the foreground contour was extracted from the 2D image of the cattle, the contour was partitioned and compared with other segmentation algorithms, and curvature calculation and other mathematical methods were used to find the required body size measurement points. Thirdly, in the processing of 3D data, to solve the problem that a pixel to be measured in the 2D RGB image may map to a null value in the depth image, making it impossible to calculate the 3D coordinates of that point, a series of processing steps was performed on the point cloud data: a suitable point cloud filtering and point cloud segmentation algorithm was selected to effectively retain the point cloud data of the body region to be measured, and the null values in the depth map were then filled from their 16-neighborhoods to preserve the integrity of the point cloud in the cattle body region, so that the required measurement points could be found and mapped back to the 2D data.
Finally, an extraction algorithm was designed to combine the 2D and 3D data: the extracted 2D pixel points were projected into the 3D point cloud, and the camera parameters were used to calculate the world coordinates of the projected points, thus automatically calculating the body measurements of the beef cattle. [Results and Discussions] Firstly, in instance segmentation, compared with the classical Mask R-CNN and the recent instance segmentation networks PointRend and QueryInst, the improved network extracted higher-precision and smoother foreground images of the cattle in terms of both segmentation accuracy and segmentation effect, whether under occlusion or with multiple cattle. Secondly, in three-dimensional data processing, the proposed method effectively extracted the three-dimensional data of the target area. Thirdly, the body size measurement error was analyzed: among the four body size parameters, the smallest average relative error was for hip cross height, because the hip cross is a prominent position and the standing posture of the cattle has little influence on it, while the largest average relative error was for tube girth, owing to the large overlap of the two front legs and the higher requirements on standing posture.
Finally, automatic body measurements were carried out on 137 beef cattle on the ranch, and the automatic measurements of the four parameters were compared with manual measurements. The results showed that the average relative errors of body height, hip cross height, body slant length, and tube girth were 4.32%, 3.71%, 5.58% and 6.25%, respectively, which met the needs of the ranch. The shortcomings were that relatively few body size parameters were measured, and the error in measuring circumference-type parameters was relatively large. Later studies could use a multi-view approach to increase the number of body size parameters measured and improve the accuracy of the circumference-type parameters. [Conclusions] This article designed an automatic, contactless method for measuring beef cattle body size based on two-dimensional and three-dimensional data. Moreover, the innovatively proposed method of measuring tube girth has higher accuracy and better implementability than current research on body measurement in beef cattle. The average relative errors of the four body size parameters meet the needs of pasture measurement and provide theoretical and practical guidance for the automatic measurement of beef cattle body size.
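The 2D-to-3D step at the heart of the measurement pipeline, back-projecting a measurement pixel with its depth value through the camera intrinsics, can be sketched as below; the intrinsic values in the usage are hypothetical, not the D455's calibration.

```python
import math

def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth value (metres) into
    camera-frame 3-D coordinates using the pinhole model; fx, fy, cx, cy
    are the depth camera's intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def segment_length(p, q):
    """A body dimension such as body height is the Euclidean distance
    between two deprojected measurement points."""
    return math.dist(p, q)
```

Circumference-type parameters such as tube girth require fitting a closed curve through many such points rather than a single segment, which is one reason their error is larger.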

Pig Back Transformer: Automatic 3D Pig Body Measurement Model | Open Access
WANG Yuxiao, SHI Yuanyuan, CHEN Zhaoda, WU Zhenfang, CAI Gengyuan, ZHANG Sumin, YIN Ling
2024, 6(4):  76-90.  doi:10.12133/j.smartag.SA202401023

[Objective] Most current non-contact body size measurement studies are based on point cloud segmentation: a trained point cloud segmentation neural network is used to segment the pig's point cloud, and measurement points are then located on the segments. However, point cloud segmentation networks require large graphics processing unit (GPU) memory, and the accuracy of the located measurement points still leaves room for improvement. This study aims to design a key point generation neural network that extracts measurement key points directly from the pig's point cloud, reducing GPU memory usage while improving the quality of the measurement points, thereby improving both the efficiency and the accuracy of body size measurement. [Methods] A neural network model using an improved Transformer attention mechanism, called Pig Back Transformer, was proposed for generating the key points and back orientation points related to pig body dimensions. The first part of the network introduced an embedding structure for initial feature extraction and a Transformer encoder with edge attention, a self-attention mechanism improved from the Transformer encoder. The embedding structure used two shared multilayer perceptrons (MLPs) and a distance embedding algorithm; it took a set of points from the edge of the pig back's point cloud as input and extracted information from this edge point set. The encoder incorporated the offset distances between the edge points and the centroid, features extracted by the embedding structure described above. Additionally, a back edge point extraction algorithm was designed to extract the edge points that form the input of the neural network model. The second part of the network proposed a Transformer encoder with an improved self-attention mechanism called back attention.
In the design of back attention, an embedding structure also preceded the encoder. This embedding structure extracted features from offset values computed from the non-edge points, down-sampled by farthest point sampling (FPS), to both the relative centroid point and the global key point generated by the first part of the model. These offset values were then processed with max pooling under attention generated from the features of the points' axes, extracting more information than the original Transformer encoder could with the same number of parameters. The output part of the model generated a set of offsets for the key points and the back direction fitting points; adding these offsets to the global key point yielded the points used for pig body measurement. Finally, methods were introduced for calculating the body dimensions, namely length, height, shoulder width, abdomen width, hip width, chest circumference and abdomen circumference, from the key points and back direction fitting points. [Results and Discussions] In the task of generating key points and back direction fitting points, the improved Pig Back Transformer performed best in accuracy among the tested models of the same parameter size, and the back orientation points generated by the model were evenly distributed, a good basis for accurate body length calculation. An ablation experiment was conducted on the edge detection part, the two attention mechanisms, and the edge trimming method introduced above. When edge detection or the attention mechanisms were removed, the results were strongly affected and the model could not perform as well as before; when the edge trimming preprocessing was removed, the impact on the trained model was moderate, but the training loss became less consistent than before.
Compared with manual measurement results, the relative error of the body length was 0.63%, an improvement over other models. The relative errors of shoulder width, abdomen width, and hip width edged out other models slightly, but the improvement was not significant and can be considered negligible. The relative errors of chest circumference and abdomen circumference lagged slightly behind existing methods, because the circumference calculation was not elaborate enough to handle the edge cases in the dataset, namely point clouds with large holes in the bottom of the abdomen and chest, which strongly affected the result. [Conclusions] The improved Pig Back Transformer demonstrates higher accuracy in generating key points while being more resource-efficient, enabling more accurate pig body measurements, and provides a new perspective for non-contact livestock body size measurement.
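The back attention stage above down-samples the non-edge points with farthest point sampling (FPS) before computing offsets. As an illustration of that standard subroutine only (not code from the paper), a minimal NumPy sketch:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Iteratively pick the point farthest from all previously chosen
    samples, yielding an even coverage of the point cloud."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=int)
    # distance from each point to its nearest chosen sample so far
    dist = np.full(n, np.inf)
    chosen[0] = 0  # deterministic start; a random start is also common
    for i in range(1, k):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(np.argmax(dist))
    return points[chosen]

# toy cloud: four corners of a unit square plus the centre
cloud = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
samples = farthest_point_sampling(cloud, 3)
```

Starting from the first corner, FPS picks the opposite corner next, then another corner, never the centre point, which is why it spreads samples evenly across the back surface.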

Automatic Measurement of Mongolian Horse Body Based on Improved YOLOv8n-pose and 3D Point Cloud Analysis | Open Access
LI Minghuang, SU Lide, ZHANG Yong, ZONG Zheying, ZHANG Shun
2024, 6(4):  91-102.  doi:10.12133/j.smartag.SA202312027

[Objective] There exists a high genetic correlation among various morphological characteristics of Mongolian horses. Utilizing advanced technology to obtain body structure parameters related to athletic performance could provide data support for breeding institutions to develop scientific breeding plans and establish the groundwork for further improvement of Mongolian horse breeds. However, traditional manual measurement methods are time-consuming, labor-intensive, and may cause stress responses in horses. Therefore, precise and efficient measurement of Mongolian horse body dimensions is crucial for formulating early breeding plans. [Methods] Video images of 50 adult Mongolian horses in the suitable breeding stage at the Inner Mongolia Agricultural University Horse Breeding Technical Center were first collected. Fifty images per horse were captured to construct the training and validation sets, resulting in a total of 2 500 high-definition RGB images of Mongolian horses, with an equal ratio of images depicting horses in motion and at rest. To ensure the model's robustness, and considering issues such as angles, lighting, and image blurring during actual image capture, a series of enhancement algorithms were applied to the original dataset, expanding the Mongolian horse image dataset to 4 000 images. YOLOv8n-pose was employed as the foundational keypoint detection model. Through the design of a C2f_DCN module, deformable convolution (DCNv2) was integrated into the C2f module of the Backbone network to enhance the model's adaptability to different horse poses in real-world scenes. In addition, an SA attention module was added to the Neck network to improve the model's focus on critical features. The original loss function was replaced with SCYLLA-IoU (SIoU) to prioritize major image regions, and a cosine annealing method was employed to dynamically adjust the learning rate during model training.
The improved model was named the DSS-YOLO (DCNv2-SA-SIoU-YOLO) network model. Additionally, a test set comprising 30 RGB-D images of mature Mongolian horses was constructed for the body dimension measurement tasks. DSS-YOLO was used for keypoint detection of body dimensions. The 2D keypoint coordinates from the RGB images were fused with the corresponding depth values from the depth images to obtain 3D keypoint coordinates, which were transformed into the Mongolian horse's point cloud information. Point cloud processing and analysis were performed using pass-through filtering, random sample consensus (RANSAC) shape fitting, statistical outlier filtering, and principal component analysis (PCA) coordinate system correction. Finally, body height, body oblique length, croup height, chest circumference, and croup circumference were automatically computed from the keypoint spatial coordinates. [Results and Discussions] The proposed DSS-YOLO model had parameter and computational costs of 3.48 M and 9.1 G, respectively, with an average accuracy mAP0.5:0.95 reaching 92.5% and a dDSS of 7.2 pixels. Compared to Hourglass, HRNet, and SimCC, mAP0.5:0.95 increased by 3.6%, 2.8%, and 1.6%, respectively. Body dimensions were calculated automatically from the keypoint coordinates, with a moving least squares curve fitting method used to complete the horse's hip point cloud; experiments involving 30 Mongolian horses showed a mean absolute error (MAE) of 3.77 cm and a mean relative error (MRE) of 2.29% for the automatic measurements. [Conclusions] The results of this study show that the DSS-YOLO model, combined with three-dimensional point cloud processing methods, can achieve automatic measurement of Mongolian horse body dimensions with high accuracy. The proposed measurement method can also be extended to different breeds of horses, providing technical support for horse breeding plans and possessing practical application value.
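The fusion of 2D keypoints with depth values described above follows standard pinhole back-projection. A minimal sketch with assumed toy intrinsics (fx, fy, cx, cy) and a synthetic flat depth map, not the authors' implementation:

```python
import numpy as np

def keypoints_to_3d(keypoints_uv, depth_map, fx, fy, cx, cy):
    """Back-project 2D keypoints (pixel coordinates) into 3D camera
    coordinates using the depth at each keypoint and the pinhole model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    pts = []
    for u, v in keypoints_uv:
        z = depth_map[v, u]            # depth (metres) at pixel (u, v)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts.append((x, y, z))
    return np.asarray(pts)

# toy example: flat depth of 2 m, principal point at the image centre
depth = np.full((480, 640), 2.0)
kps = [(320, 240), (420, 240)]         # centre pixel and one 100 px right
xyz = keypoints_to_3d(kps, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Once all keypoints are in metric 3D coordinates, distances such as body height or oblique length reduce to Euclidean norms between keypoint pairs.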

Automatic Navigation and Spraying Robot in Sheep Farm | Open Access
FAN Mingshuo, ZHOU Ping, LI Miao, LI Hualong, LIU Xianwang, MA Zhirun
2024, 6(4):  103-115.  doi:10.12133/j.smartag.SA202312016

[Objective] Manual disinfection in large-scale sheep farms is laborious and time-consuming, and often results in incomplete coverage and inadequate disinfection. With the rapid development of artificial intelligence and automation technology, automatic navigation and spraying robots for livestock and poultry breeding have become a research hotspot. To maintain shed hygiene and ensure sheep health, an automatic navigation and spraying robot for sheep sheds was proposed. [Methods] The robot was designed with a focus on three aspects: hardware, the semantic segmentation model, and the control algorithm. The hardware consisted of a tracked chassis, cameras, and a collapsible spraying device. For the semantic segmentation model, enhancements were made to the lightweight semantic segmentation model ENet, including the addition of residual structures to prevent network degradation and the incorporation of a squeeze-and-excitation network (SENet) attention mechanism in the initialization module. This helped capture global features while the feature map resolution was still high, addressing precision issues. The original 6-layer ENet network was reduced to 5 layers to balance the encoder and decoder. Drawing inspiration from dilated spatial pyramid pooling, a context convolution module (CCM) was introduced to improve scene understanding. A criss-cross attention (CCA) mechanism was adapted to acquire global context features at different scales without cascading, reducing information loss. The resulting model, named double attention ENet (DAENet), achieved real-time and accurate segmentation of sheep shed surfaces. Regarding control algorithms, a method was devised to address the robot's difficulty in controlling its direction at junctions.
Lane recognition and lane center point identification algorithms were proposed to identify and mark navigation points during the robot's movement outside the sheep shed by simulating real roads. Two cameras were employed, and a camera switching algorithm was developed to enable seamless switching between them while also controlling the spraying device. Additionally, a novel offset and velocity calculation algorithm was proposed to control the speeds of the robot's left and right tracks, enabling control over the robot's movement, stopping, and turning. [Results and Discussions] The DAENet model achieved a mean intersection over union (mIoU) of 0.945 3 in image segmentation tasks, meeting the required segmentation accuracy. During testing of the camera switching algorithm, the time taken for the complete transition from camera to spraying device action did not exceed 15 s when road conditions changed. Testing of the center point and offset calculation algorithm revealed that, when processing multiple frames of video streams, the algorithm averaged 0.04 to 0.055 s per frame, achieving frame rates of 20 to 24 frames per second and meeting real-time operational requirements. In field experiments conducted on a sheep farm, the robot successfully completed automatic navigation and spraying tasks in two sheds without colliding with roadside troughs. The deviation from the road and lane centerlines did not exceed 0.3 m. Operating at a travel speed of 0.2 m/s, the liquid in the medicine tank was adequate to complete the spraying tasks for two sheds. The robot maintained an average frame rate of 22.4 frames per second during operation, meeting the experimental requirements for accurate and real-time information processing.
Observation indicated that the spraying coverage rate of the robot exceeded 90%, meeting the experimental coverage requirements. [Conclusions] The proposed automatic navigation and spraying robot, based on the DAENet semantic segmentation model and the center point recognition algorithm, combined with the hardware design and control algorithms, achieves comprehensive disinfection within sheep sheds while ensuring safety and real-time operation.
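The abstract does not detail the offset and velocity calculation, but a common differential-drive scheme it could resemble maps the lateral offset from the lane centerline to left/right track speeds. The gain and speed limit below are hypothetical; only the 0.2 m/s base speed is taken from the reported travel speed:

```python
def track_speeds(offset, base_speed=0.2, gain=0.5, max_speed=0.3):
    """Map the lateral offset from the lane centreline (metres, positive
    when the robot sits right of centre) to left/right track speeds of a
    differential-drive chassis: slow one track to steer back to centre."""
    correction = gain * offset
    left = base_speed - correction     # positive offset -> turn left
    right = base_speed + correction
    # clamp to the chassis limits so a large offset pivots in place
    clamp = lambda s: max(0.0, min(max_speed, s))
    return clamp(left), clamp(right)
```

With zero offset both tracks run at the base speed; a large offset saturates one track at zero and the other at the limit, producing a tight corrective turn at a junction.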

Overview Article
The Path of Smart Agricultural Technology Innovation Leading Development of Agricultural New Quality Productivity | Open Access
CAO Bingxue, LI Hongfei, ZHAO Chunjiang, LI Jin
2024, 6(4):  116-127.  doi:10.12133/j.smartag.SA202405004

[Significance] Building agricultural new quality productivity is of great significance. It is an advanced form of productivity that realizes the transformation, upgrading, and deep integration of substantive, penetrating, operational, and media factors, and has outstanding characteristics such as intelligence, greenness, integration, and organization. As a new technological revolution in the field of agriculture, smart agricultural technology transforms agricultural production modes by integrating agricultural biotechnology, agricultural information technology, and smart agricultural machinery and equipment, with information and knowledge as its core elements. The inherent characteristics of agricultural new quality productivity, "high-tech, high-efficiency, high-quality, and sustainable", are fully reflected in the practice of smart agricultural technology innovation, which has become an important core and engine for promoting agricultural new quality productivity. [Progress] Through literature review and theoretical analysis, this article conducts a systematic study on the practical foundation, internal logic, and challenges of smart agricultural technology innovation leading the development of agricultural new quality productivity. The conclusions show that: (1) At present, global innovation capability in smart agriculture technology is constantly improving, and significant technological breakthroughs have been made in fields such as smart breeding, agricultural information perception, agricultural big data and artificial intelligence, and smart agricultural machinery and equipment, providing a practical foundation for leading the development of agricultural new quality productivity.
Among them, the smart breeding of 'Phenotype+Genotype+Environmental type' has entered the fast lane, the technology system for sensing agricultural sky, air, and land information is gradually maturing, the research and exploration on agricultural big data and intelligent decision-making technology continue to advance, and the creation of smart agricultural machinery and equipment for different fields has achieved fruitful results; (2) Smart agricultural technology innovation provides basic resources for the development of agricultural new quality productivity through empowering agricultural factor innovation, provides sustainable driving force for the development of agricultural new quality productivity through empowering agricultural technology innovation, provides practical paradigms for the development of agricultural new quality productivity through empowering agricultural scenario innovation, provides intellectual support for the development of agricultural new quality productivity through empowering agricultural entity innovation, and provides important guidelines for the development of agricultural new quality productivity through empowering agricultural value innovation; (3) Compared to the development requirements of agricultural new quality productivity in China and the advanced level of international smart agriculture technology, China's smart agriculture technology innovation is generally in the initial stage of multi-point breakthroughs, system integration, and commercial application. It still faces major challenges such as an incomplete policy system for technology innovation, key technologies with bottlenecks, blockages and breakpoints, difficulties in the transformation and implementation of technology achievements, and incomplete support systems for technology innovation. 
[Conclusions and Prospects] Regarding the issue of technology innovation in smart agriculture, this article proposes the "Four Highs" path of smart agriculture technology innovation to fill the gaps in smart agriculture technology innovation and accelerate the formation of agricultural new quality productivity in China. The "Four Highs" path specifically includes the construction of high-energy smart agricultural technology innovation platforms, breakthroughs in high-precision, cutting-edge smart agricultural technology products, the creation of high-level smart agricultural application scenarios, and the cultivation of high-caliber smart agricultural innovation talents. Finally, this article proposes four strategic suggestions: deepening the understanding of smart agriculture technology innovation and agricultural new quality productivity, optimizing the supply of smart agriculture technology innovation policies, building national smart agriculture innovation development pilot zones, and improving the smart agriculture technology innovation ecosystem.

Information Perception and Acquisition
A Rapid Detection Method for Wheat Seedling Leaf Number in Complex Field Scenarios Based on Improved YOLOv8 | Open Access
HOU Yiting, RAO Yuan, SONG He, NIE Zhenjun, WANG Tan, HE Haoxu
2024, 6(4):  128-137.  doi:10.12133/j.smartag.SA202403019

[Objective] The number of wheat leaves is an essential indicator for evaluating the vegetative state of wheat and predicting its yield potential. Currently, wheat leaf counting in field settings is predominantly manual, which is both time-consuming and labor-intensive. Despite recent advancements, the efficiency and accuracy of existing automated detection and counting methods have yet to satisfy the stringent demands of practical agricultural applications. This study aims to develop a method for the rapid counting of wheat leaves by refining the precision of wheat leaf tip detection. [Methods] To enhance the accuracy of wheat leaf detection, an image dataset of wheat leaves across various developmental stages (seedling, tillering, and overwintering) was first constructed, covering two distinct lighting conditions and using visible-light images sourced from both mobile devices and field camera equipment. Considering the robust feature extraction and multi-scale feature fusion capabilities of the YOLOv8 network, the proposed model was based on the YOLOv8 architecture, into which a coordinate attention mechanism was integrated. To expedite the model's convergence, the loss functions were optimized. Furthermore, a dedicated small object detection layer was introduced to refine the recognition of wheat leaf tips, which are typically difficult for conventional models to discern due to their small size and resemblance to background elements. The resulting network, named YOLOv8-CSD and tailored to small targets such as wheat leaf tips, ascertains the leaf count by detecting the number of leaf tips present in an image.
A comparative analysis was conducted between the YOLOv8-CSD model, the original YOLOv8, and six other prominent network architectures, including Faster R-CNN, Mask R-CNN, YOLOv7, and SSD, within a uniform training framework, to evaluate the model's effectiveness. In parallel, the performance of both the original and YOLOv8-CSD models was assessed under challenging conditions, such as the presence of weeds, occlusions, and fluctuating lighting, to emulate complex real-world scenarios. Ultimately, the YOLOv8-CSD model was deployed for wheat leaf number detection in intricate field conditions to confirm its practical applicability and generalization potential. [Results and Discussions] The presented methodology achieved a recognition precision of 91.6% and an mAP0.5 of 85.1% for wheat leaf tips, indicative of its robust detection capabilities. The method excelled in adaptability within complex field environments, featuring an autonomous adjustment mechanism for different lighting conditions, which significantly enhanced the model's robustness. The minimal rate of missed detections when counting the leaves of wheat seedlings underscored the method's suitability for wheat leaf tip recognition in intricate field scenarios, consequently elevating the precision of wheat leaf number detection. The algorithm demonstrated a heightened capacity to discern and focus on the unique features of wheat leaf tips during detection. This capability was essential for overcoming challenges such as small target sizes, similar background textures, and the intricacies of feature extraction. The model's consistent performance across diverse conditions, including scenarios with weeds, occlusions, and fluctuating lighting, further substantiated its robustness and its readiness for real-world application.
[Conclusions] This research offers a valuable reference for accurately detecting wheat leaf numbers in intricate field conditions, as well as robust technical support for the comprehensive and high-quality assessment of wheat growth.
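Once leaf tips are detected, the counting step reduces to tallying detections above a confidence threshold. A minimal sketch of that final step; the detection tuple format and the 0.5 threshold are assumptions for illustration, not taken from the paper:

```python
def count_leaf_tips(detections, conf_threshold=0.5):
    """Count leaf tips from detector output: one tip per bounding box
    whose confidence clears the threshold.
    Each detection is (x1, y1, x2, y2, conf)."""
    return sum(1 for *box, conf in detections if conf >= conf_threshold)

# hypothetical detector output for one seedling image
dets = [
    (10, 10, 18, 18, 0.92),   # confident leaf tip
    (40, 12, 47, 20, 0.71),   # confident leaf tip
    (60, 30, 66, 38, 0.31),   # low-confidence candidate, discarded
]
leaf_count = count_leaf_tips(dets)
```

Raising the threshold trades missed tips against false positives, which is exactly the balance the detection-layer improvements above aim to shift.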

Information Processing and Decision Making
Dynamic Prediction Method for Carbon Emissions of Cold Chain Distribution Vehicle under Multi-Source Information Fusion | Open Access
YANG Lin, LIU Shuangyin, XU Longqin, HE Min, SHENG Qingfeng, HAN Jiawei
2024, 6(4):  138-148.  doi:10.12133/j.smartag.SA202403020

[Objective] The dynamic prediction of carbon emissions from cold chain distribution is an important basis for the accurate assessment of carbon emissions and the corresponding green credit grade. Since vehicle carbon emissions are affected by multiple factors, such as road condition information, driving characteristics, and refrigeration parameters, a dynamic carbon emission prediction model for refrigerated vehicles that fuses multi-source information was proposed. [Methods] The backbone feature extraction network, neck feature fusion network, and loss function of YOLOv8s were first improved. Full-dimensional dynamic convolution was introduced into the backbone feature extraction network, and a multidimensional attention mechanism was introduced to capture key contextual information and improve the model's feature extraction capability. A progressive feature pyramid network was introduced into the feature fusion network, which reduced the loss of key information by fusing features layer by layer and improved feature fusion efficiency. A road condition information recognition model based on the improved YOLOv8s was constructed to characterize road conditions in terms of the number of road vehicles and the percentage of pixel area they occupy. Pearson's correlation coefficient was used to analyze the correlation between the carbon emissions of refrigerated vehicles and the different influencing factors, verifying the necessity and importance of the selected input parameters of the carbon emission prediction model. The iTransformer temporal prediction model was then improved by introducing an external attention mechanism to enhance its feature extraction ability and reduce computational complexity.
A dynamic carbon emission prediction model for refrigerated vehicles based on the improved iTransformer was constructed, taking road condition information, driving characteristics (speed, acceleration), cargo weight, and refrigeration parameters (temperature, power) as inputs. Finally, the model was compared with other models to verify the robustness of the road condition information recognition and the accuracy of the dynamic vehicle carbon emission prediction, respectively. [Results and Discussions] The correlation analysis showed that the vehicle driving parameters were the main factor affecting vehicle carbon emission intensity, with a correlation of 0.841. The second factor was cargo weight, with a correlation of 0.807, a strong positive correlation. The road condition information had a stronger correlation with vehicle carbon emissions than the refrigeration parameters, and the correlations between the selected influencing factors and vehicle carbon emissions were all above 0.67. To further ensure the accuracy of the vehicle carbon emission prediction model, these factors were selected as the input parameters of the carbon emission prediction model. The improved YOLOv8s road information recognition model achieved 98.1%, 95.5%, and 98.4% in precision, recall, and average recognition accuracy, which were 1.2%, 3.7%, and 0.2% higher than YOLOv8s, respectively, with the number of parameters and the amount of computation reduced by 12.5% and 31.4%, and the detection speed increased by 5.4%.
This was because the cross-dimensional feature learning of the full-dimensional dynamic convolution fully captured key information and improved the model's feature extraction capability, while the progressive feature pyramid network fused information across levels step by step, fully retaining important feature information and improving the model's recognition accuracy. The predictive performance of the improved iTransformer carbon emission prediction model was better than that of other time series prediction models, and its prediction curve was closest to the real carbon emission curve, with the best fitting effect. The introduction of the external attention mechanism significantly improved the prediction accuracy: the MSE, MAE, RMSE, and R2 were 0.026 1 %VOL, 0.079 1 %VOL, 0.161 5 %VOL, and 0.940 0, respectively, which were 0.4%, 15.3%, 8.7%, and 1.3% better than iTransformer. As the degree of road congestion increased, the prediction accuracy of the constructed carbon emission prediction model increased. [Conclusions] The carbon emission prediction model for cold chain distribution under multi-source information fusion proposed in this study can realize accurate prediction of carbon emissions from refrigerated vehicles, providing a theoretical basis for rationally formulating carbon emission reduction strategies and promoting the development of low-carbon cold chain distribution.
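The factor screening above relies on Pearson's correlation coefficient. A minimal self-contained sketch of that computation; the speed and emission series below are hypothetical illustrations, not the paper's data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

speed = [30, 45, 50, 62, 70]        # hypothetical vehicle speeds (km/h)
co2 = [2.1, 2.9, 3.2, 4.0, 4.6]     # hypothetical emission intensities
r = pearson(speed, co2)             # close to 1: strong positive link
```

A coefficient near 0.841, as reported for the driving parameters, would indicate a strong positive relationship of this kind and justify keeping the factor as a model input.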

Differential Privacy-enhanced Blockchain-Based Quality Control Model for Rice | Open Access
WU Guodong, HU Quanxing, LIU Xu, QIN Hui, GAO Bowen
2024, 6(4):  149-159.  doi:10.12133/j.smartag.SA202311027

[Objective] Rice plays a crucial role in the daily diet. The rice industry involves numerous links, from paddy planting to the consumer's table, and the integrity of the quality control data chain directly affects the credibility of rice quality control and traceability information. Rice traceability also faces security issues that need immediate solutions, such as the leakage of private information. Additionally, the previous practice of uploading all information onto the blockchain leads to high storage costs and low system efficiency. To address these problems, this study proposed a differential privacy-enhanced blockchain-based quality control model for rice, providing new ideas and solutions for optimizing traditional quality regulation and traceability systems. [Methods] By combining blockchain, the InterPlanetary File System (IPFS), and differential privacy techniques, a differential privacy-enhanced blockchain-based quality control model for rice was constructed. Firstly, the data transmission process was designed to cover the whole rice industry chain, including cultivation, acquisition, processing, warehousing, and sales. Each module stored the relevant data and a unique number from the previous link, forming a reliable information chain and ensuring the continuity of the data chain for quality control. Secondly, to address the large data volume and low efficiency of blockchain storage, the key quality control data of each link in the rice industry chain were stored in IPFS; the hash value of the stored data was then returned and recorded on the blockchain. Lastly, to enhance the traceability of the quality control information, the sensitive information in the key quality control data related to the cultivation process was presented to users after undergoing differential privacy processing.
Individual data was obfuscated to increase the credibility of the quality control information while also protecting the privacy of farmers' cultivation practices. Based on this model, a differential privacy-enhanced blockchain-based quality control system for rice was designed. [Results and Discussions] The architecture of the differential privacy-enhanced blockchain-based quality control system for rice consisted of the physical layer, transport layer, storage layer, service layer, and application layer. The physical layer included sensor devices and network infrastructure, ensuring data collection from all links of the industry chain. The transport layer handled data transmission and communication, securely uploading collected data to the cloud. The storage layer utilized a combination of traditional databases, IPFS, and blockchain to efficiently store and manage key data on and off the blockchain. The traditional database was used for the management and querying of structured data. IPFS stored the key quality control data in the whole industry chain, while blockchain was employed to store the hash values returned by IPFS. This integrated storage method improved system efficiency, ensured the continuity, reliability, and traceability of quality control data, and provided consumers with reliable information. The service layer was primarily responsible for handling business logic and providing functional services. The implementation of functions in the application layer relied heavily on the design of a series of interfaces within the service layer. Positioned at the top of the system architecture, the application layer was responsible for providing user-centric functionality and interfaces. This encompassed a range of applications such as web applications and mobile applications, aiming to present data and facilitate interactive features to fulfill the requirements of both consumers and businesses. 
Based on the conducted tests, the average time required for storing data in a single link of the whole industry chain within the system was 1.125 s. The average time consumed for information traceability query was recorded as 0.691 s. Compared to conventional rice quality regulation and traceability systems, the proposed system demonstrated a reduction of 6.64% in the storage time of single-link data and a decrease of 16.44% in the time required to perform information traceability query. [Conclusions] This study proposes a differential privacy-enhanced blockchain-based quality control model for rice. The model ensures the continuity of the quality control data chain by integrating the various links of the whole industry chain of rice. By combining blockchain with IPFS storage, the model addresses the challenges of large data volume and low efficiency of blockchain storage in traditional systems. Furthermore, the model incorporates differential privacy protection to enhance traceability while safeguarding the privacy of individual farmers. This study can provide reference for the design and improvement of rice quality regulation and traceability systems.
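The storage split described above, bulk data off-chain in IPFS with only the content hash recorded on-chain, can be sketched with in-memory stand-ins. All class and field names here are illustrative, not taken from the system:

```python
import hashlib
import json
import time

class OffChainStore:
    """Stand-in for IPFS: content-addressed storage keyed by SHA-256."""
    def __init__(self):
        self._blobs = {}

    def add(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        return self._blobs[cid]

class Ledger:
    """Stand-in for the blockchain: an append-only list of records
    holding only the content hash, never the bulk data."""
    def __init__(self):
        self.records = []

    def record(self, link_name: str, cid: str):
        self.records.append({"link": link_name, "cid": cid, "ts": time.time()})

store, chain = OffChainStore(), Ledger()
cultivation = json.dumps({"field": "A-03", "pesticide": "none"}).encode()
cid = store.add(cultivation)       # bulk quality data goes off-chain
chain.record("cultivation", cid)   # only the hash goes on-chain
# integrity check: re-hash the retrieved blob against the on-chain record
assert hashlib.sha256(store.get(cid)).hexdigest() == chain.records[0]["cid"]
```

The on-chain record stays tiny regardless of how large the quality data grows, while any tampering with the off-chain blob is exposed by the hash comparison.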

Price Game Model and Competitive Strategy of Agricultural Products Retail Market in the Context of Blockchain | Open Access
XUE Bing, SUN Chuanheng, LIU Shuangyin, LUO Na, LI Jinhui
2024, 6(4):  160-173.  doi:10.12133/j.smartag.SA202309027

[Objective] In the retail market for agricultural products, consumers are increasingly concerned about the safety and health aspects of those products. Blockchain-based traceability has emerged as a crucial means of addressing these concerns. Essentially, a blockchain functions as a dynamic, distributed, shared database. When implemented in the agricultural supply chain, it not only improves product transparency, attracting more consumers, but also raises concerns about the disclosure of consumer privacy. The level of consumer apprehension regarding privacy directly influences their choice to purchase blockchain-traced agricultural products, and retailers' choices to sell blockchain-traced produce are in turn influenced by consumer privacy concerns. By analyzing the impact of blockchain technology on competitive strategies, pricing, and decision-making, agricultural retailers can develop market competition strategies suited to their market conditions to bolster their competitiveness, and the agricultural supply chain can be optimized to maximize overall benefits. [Methods] Based on Nash equilibrium and Stackelberg game theory, a market competition model was developed to analyze the interactions between an existing and a new agricultural product retailer. The competitive strategies adopted by the retailers were analyzed under the four possible combinations of whether each of the two retailers sells blockchain-traced agricultural products, covering product utility, optimal pricing, demand, and profitability for each retailer under each scenario. How consumer privacy concerns impact the pricing and profits of the two retailers, and the optimal response strategy of one retailer when its competitor makes its choice first, were also analyzed. This analysis aimed to guide agricultural product retailers in making strategic choices that safeguard their profits and market positions.
To address the cooperative game problem among agricultural product retailers in market competition, and to help retailers cooperate more effectively, blockchain smart contract technology was used. By encoding the process and outcomes of the Stackelberg game into smart contracts, retailers could input their specific variables and receive tailored strategy recommendations. Uploading the game results onto the blockchain network ensured transparency, regulated cooperative behavior among retailers, and helped maximize the overall benefits of the supply chain. [Results and Discussions] The research highlighted the significant improvement in agricultural product quality transparency brought by blockchain traceability technology. However, the consumer privacy concerns arising from this traceability could directly affect pricing, profitability, and retailers' decisions to offer blockchain-traceable items. Furthermore, an analysis of the strategic balance between the two retailers revealed that, at both low and high levels of product information transparency, both retailers were inclined to offer traceable products simultaneously. In such a scenario, blockchain traceability enhanced the utility and profitability of retail agricultural products, leading consumers to prefer purchasing these traceable products. In cases where privacy concerns and product information transparency were both moderate, the incumbent retailer was more likely to opt for blockchain-based traceable products: consumers placed higher trust in the incumbent, enabling it to bear the higher cost associated with privacy concerns. Conversely, the new retailer failed to gain a competitive advantage and eventually exited the market.
When consumer privacy concerns exceeded a certain threshold, both competing retailers found that offering blockchain-based traceable products led to a decline in their profits. [Conclusions] For agricultural product quality and safety, incorporating blockchain technology into traceability significantly improves the transparency of quality-related information. However, blockchain-based traceability is not universally suitable for all agricultural retailers. Retailers must evaluate their own circumstances and make the decisions best suited to enhancing product utility, driving sales demand, and increasing profits. Within the competitive landscape of the agricultural product retail market, nurturing a positive collaborative relationship is essential to maximize mutual benefits and optimize the overall profitability of the agricultural product supply chain.
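The leader–follower price competition described in the abstract can be sketched numerically. The linear demand form, the parameter values, and the way privacy concerns enter the model below are illustrative assumptions, not the paper's actual specification: each retailer faces linear demand in both prices, blockchain adoption raises the perceived demand intercept by a transparency gain but lowers it by a privacy-concern cost, and the incumbent (leader) commits to its price before the entrant (follower) responds.

```python
def stackelberg_prices(a1, a2, b):
    """Leader (retailer 1) commits to p1; follower (retailer 2) best-responds.

    Assumed linear demand with zero marginal cost:
        q_i = a_i - p_i + b * p_j,  0 < b < 1.
    Follower's best response: p2 = (a2 + b * p1) / 2.
    Substituting into the leader's profit p1 * q1 and solving the
    first-order condition gives p1 = (a1 + b * a2 / 2) / (2 - b**2).
    Returns (p1, p2, leader profit, follower profit).
    """
    p1 = (a1 + b * a2 / 2) / (2 - b ** 2)
    p2 = (a2 + b * p1) / 2
    q1 = a1 - p1 + b * p2
    q2 = a2 - p2 + b * p1
    return p1, p2, p1 * q1, p2 * q2


def effective_base(base, transparency_gain, privacy_cost, adopt):
    """Perceived demand intercept: blockchain adoption adds a transparency
    gain but subtracts a privacy-concern cost (illustrative parameters)."""
    if not adopt:
        return base
    return base + transparency_gain - privacy_cost


if __name__ == "__main__":
    base, b, gain = 10.0, 0.5, 3.0
    # Sweep the privacy-concern cost: past a threshold, adoption hurts profit.
    for cost in (1.0, 5.0):
        a = effective_base(base, gain, cost, adopt=True)
        _, _, pi1, pi2 = stackelberg_prices(a, a, b)
        print(f"privacy cost {cost}: leader profit {pi1:.2f}, "
              f"follower profit {pi2:.2f}")
```

Under these assumptions, a small transparency gain net of privacy cost scales both equilibrium profits up, while a privacy cost exceeding the transparency gain pushes profits below the no-adoption baseline, mirroring the threshold behavior reported in the results.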

Authority in Charge: Ministry of Agriculture and Rural Affairs of the People’s Republic of China
Sponsor: Agricultural Information Institute, Chinese Academy of Agricultural Sciences
Editor-in-Chief: Chunjiang Zhao, Academician of Chinese Academy of Engineering.
ISSN 2097-485X(Online)
ISSN 2096-8094(Print)
CN 10-1681/S
CODEN ZNZHD7
