Comprehensive Research

TIAN Fang, M.S., Associate Professor. Research interests: agricultural vision models and intelligent agricultural equipment. E-mail:

DAI Weijiao, Ph.D. candidate. Research interests: computer vision and agricultural information engineering. E-mail:


Advances and Prospects in Body-Size Measurement of Sheep: From 2D Vision to 3D Reconstruction and 2D-3D Fusion

  • DAI Weijiao , 1 ,
  • LIANG Yudongchen , 1 ,
  • ZHOU Yong 2 ,
  • YAO Chao 1 ,
  • ZHANG Cheng 1, 3 ,
  • SONG Yongjian 4 ,
  • LI Guoliang 1 ,
  • TIAN Fang , 1, 3
  • 1. College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
  • 2. Jinchang Husbandry and Veterinary Master Station, Jinchang 737100, China
  • 3. Key Laboratory of Smart Animal Farming Technology, Ministry of Agriculture and Rural Affairs, Huazhong Agricultural University, Wuhan 430070, China
  • 4. Department of Chemical Pathology, the Chinese University of Hong Kong, Hong Kong 999077, China

These authors contributed equally to this work.

DAI Weijiao, E-mail:

LIANG Yudongchen, M.S. candidate. Research interest: computer vision. E-mail:

Received date: 2025-07-28

  Online published: 2026-02-25

Supported by

Earmarked Fund for Gansu Agriculture Research System (GSARS02)


Copyright

Copyright © 2026 by the authors


Cite this article

DAI Weijiao, LIANG Yudongchen, ZHOU Yong, YAO Chao, ZHANG Cheng, SONG Yongjian, LI Guoliang, TIAN Fang. Advances and Prospects in Body-Size Measurement of Sheep: From 2D Vision to 3D Reconstruction and 2D-3D Fusion[J]. Smart Agriculture, 2026, 8(1): 120-147. DOI: 10.12133/j.smartag.SA202507028

Abstract

[Significance] In alignment with the national germplasm security strategy, current research efforts are accelerating the adoption of precision breeding in sheep. Within whole-genome selection, accurate phenotyping of body morphometrics is critical for assessing growth performance and breeding value. Traditional manual measurements are inefficient, prone to human error, and may cause stress to sheep, limiting their suitability for precision sheep management. By summarizing the applications of sheep body size measurement technologies and analyzing their development directions, this paper provides theoretical references and practical guidance for the research and application of non-contact sheep body size measurement. [Progress] This review synthesizes progress across three principal methodological paradigms: two-dimensional (2D) image-based techniques, three-dimensional (3D) point cloud-based approaches, and integrated 2D-3D fusion systems. 2D methods, employing either handcrafted geometric features or deep learning-based keypoint detection algorithms, are cost-effective and operationally simple but sensitive to variation in imaging conditions and unable to capture critical circumference metrics. 3D point-cloud approaches enable precise reconstruction of full animal morphology, supporting comprehensive body-size acquisition with higher accuracy, yet face challenges including high hardware costs, complex data workflows, and sensitivity to posture variability. Hybrid 2D-3D fusion systems combine semantic richness from RGB imagery with geometric completeness from point clouds. Having been effectively validated in other livestock species, e.g., cattle and pigs, these fusion systems have demonstrated excellent performance, providing important technical references and practical insights for sheep body size measurement.
[Conclusions and Prospects] Firstly, future research should focus on constructing large-scale, high-quality datasets for sheep body size measurement that encompass diverse breeds, growth stages, and environmental conditions, thereby enhancing model robustness and generalization. Secondly, the development of lightweight artificial intelligence models is essential. Techniques such as model compression, quantization, and algorithmic optimization can substantially reduce computational complexity and storage requirements, facilitating deployment in resource-constrained environments. Thirdly, the 3D point cloud processing pipeline should be streamlined to improve the efficiency of data acquisition, filtering, registration, and segmentation, while promoting the integration of low-cost, high-resilience vision systems into practical farming scenarios. Fourthly, specific emphasis should be placed on improving the accuracy of curved-dimensional measurements, such as chest circumference, abdominal circumference, and shank circumference, through advances in pose standardization, refined 3D segmentation strategies, and multi-modal data fusion. Finally, the cross-fertilization of sheep body size measurement technologies with analogous methods for other livestock species offers a promising pathway for mutual learning and collaborative innovation, accelerating the industrialization of automated sheep morphometric systems and supporting the development of intelligent, data-driven pasture management practices.

0 Introduction

The sheep industry is a foundational pillar of animal husbandry, underpinning economic growth and improving national livelihoods. In conventional production systems, sheep growth is primarily evaluated using two sets of metrics: body weight and body size. However, obtaining accurate body weight is challenging because sheep resist and move on the scale, introducing significant variability and error into the readings. In contrast, body size measurements provide a more refined assessment of skeletal development and breeding potential, and serve as critical inputs for weight estimation models.
Traditional measurement methods rely mainly on point-by-point manual measurement with a soft tape, quantifying the distances between specific body points of the sheep. This approach yields relatively low errors but is time-consuming. Moreover, as sheep grow, updating their body size parameters is laborious, making real-time tracking of these parameters impractical. Consequently, there is an urgent need for a simple, accurate, non-contact, and automated system for capturing body dimensions to support precision livestock farming.
Driven by this demand for automation, computer vision has emerged as a promising tool for non-contact morphometric analysis, with significant advances documented in recent literature[1, 2]. However, research specifically targeting sheep remains relatively scarce. Therefore, this paper focuses on sheep while drawing relevant insights from studies on other livestock species. It reviews the main computer vision-based approaches to body size measurement, namely two-dimensional (2D) imaging, three-dimensional (3D) point cloud analysis, and hybrid 2D-3D fusion methods, outlining the advantages, limitations, and performance of prevalent technical solutions. The limitations of these technologies as applied to sheep, as well as the applicability of body measurement methods developed for other livestock species, are also discussed, with the aim of providing a theoretical basis and technical references for smart feeding and the development of intelligent sheep body measurement systems.

1 Methods for measuring sheep body size based on computer vision

Sheep body size provides direct indicators of growth status and constitutes essential quantitative benchmarks for breed selection, fattening potential assessment, and feed management optimization. A synthesis of relevant literature, technical documentation, and field surveys yields the current classification framework for these parameters (Fig. 1).
Fig. 1 Sheep body measurement diagram

Note: BL denotes body length; BH denotes body height; CD denotes chest depth; CG denotes chest girth; AD denotes abdominal dimension; HH denotes hip height; SC denotes shank circumference; SW denotes shoulder width; CW denotes chest width; AW denotes abdominal width; HW denotes hip width.

Current mainstream computer vision-based approaches can be broadly classified into three categories: 2D image-based measurement, 3D point cloud-based measurement, and hybrid 2D-3D systems. Across all technical paradigms, measurement workflows typically follow four core stages: data acquisition, data processing, keypoint or landmark detection, and body size calculation. To enable a clearer understanding of the technical attributes associated with different data modalities, a comparative analysis of these methods is provided (Fig. 2).
Fig. 2 Overview diagram of sheep body measurement techniques

Note: SLIC denotes simple linear iterative clustering; R-CNN denotes region-based convolutional neural network; ICP denotes iterative closest point; SLAC denotes simultaneous localization and calibration; Lite-HRNet denotes lite high-resolution network; SIFT denotes scale-invariant feature transform; SVM denotes support vector machine. ROR denotes radius outlier removal; ISS3D denotes intrinsic shape signatures 3D; 4PCS denotes 4-points congruent sets; KCGATNet denotes kernel-based channel graph attention transformer network.

Following this taxonomy, the subsequent sections examine the three core methodologies in detail. Section 2 delves into 2D image-based techniques, focusing on image segmentation, geometric feature extraction from keypoints, deep learning-driven recognition, and the derivation of various body size measurements. Section 3 addresses 3D point cloud technology, encompassing essential processing steps such as filtering, registration, segmentation, keypoint identification, pose normalization, and parameter calculation. Section 4 explores 2D-3D fusion strategies, analyzing how they balance accuracy and cost through complementary advantages, as well as the transferability of body size measurement technologies from other livestock species to sheep. Section 5 discusses existing datasets, the application of deep learning methods, and large-scale deployment, providing a theoretical basis for non-contact, automated body size measurement of sheep.

2 Two-dimensional (2D) image-based body size measurement

2D camera systems are the most widely employed technologies in modern monitoring applications and research. These imaging frameworks have been progressively refined to meet specific measurement objectives, incorporating task-oriented algorithms and streamlined workflows. In sheep body size measurement, the process typically begins with preprocessing raw images to suppress background noise and extract the animal's silhouette (Fig. 3). Keypoints or anatomical landmarks are then detected to establish precise reference positions. Based on the spatial relationships of these landmarks, dedicated body size algorithms are applied to derive the target measurement parameters.
Fig. 3 2D image sheep body measurement process

a. Raw image b. Background denoising c. Contour extraction d. Key point identification e. Body measurement
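The denoising and contour-extraction stages of Fig. 3 can be sketched with a minimal numpy-only routine. This is an illustrative simplification under assumed conditions (animal brighter than background, fixed global threshold); real pipelines would use blurring, adaptive thresholding, or a learned segmentation model instead.

```python
import numpy as np

def extract_silhouette(gray, thresh=60):
    """Binarize a grayscale frame and return the foreground mask plus the
    tight bounding box (top, bottom, left, right) of the animal silhouette.
    Assumes the animal is brighter than the background after denoising."""
    mask = gray > thresh                       # crude background suppression
    ys, xs = np.nonzero(mask)
    if ys.size == 0:                           # no foreground found
        return mask, None
    return mask, (int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max()))

# Synthetic 100x100 frame with a bright 40x50 "sheep" region
frame = np.zeros((100, 100), dtype=np.uint8)
frame[20:60, 30:80] = 200
mask, box = extract_silhouette(frame)
# box -> (20, 59, 30, 79)
```

The bounding box then bounds the search region for the keypoint-detection stage that follows.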

2.1 Processing of 2D images

Table 1 presents a comparative analysis of 2D image data processing methods in animal body size measurement. Single-algorithm approaches often exhibit notable measurement errors when confronted with challenges such as low color contrast, cluttered backgrounds, or variable illumination conditions. These limitations have prompted a shift in research focus toward multi-algorithm fusion techniques, which aim to leverage the complementary strengths of different computational methods to enhance measurement robustness and accuracy.
Table 1 2D image-based body size measurement methods: applications in sheep and reference cases in other livestock
Subjects | Camera setup | Image algorithms | Individual number | Traits & accuracy | Main limitations | Year | Ref.
Sheep | Logitech C920 HD Pro webcam | Partial least squares algorithm | 27 | BL: 3.5%; BH: 3.5%; CD: 3.5% | Colour-contrast sensitive | 2014 | [3]
Sheep | CCD wide-angle camera | Background difference method | NA | BL: 2%; BH: 2% | Cluttered background leaks | 2014 | [4]
Sheep | MV-EM120C | SLIC + FCM clustering | 27 | BL: 2.03%; BH: 1.13%; CD: 4.45%; CW: 2.25%; HH: 1.54%; HW: 2.41% | Cluster number must be tuned | 2018 | [5]
Pig | Kinect v1 (RGB) | Semi-global matching + image subtraction | 200 | BL, BH, HH, HW, SW: < 2.83% | Depth not used; lighting sensitive | 2019 | [6]
Cow | Single RGB | Canny algorithm | NA | BL: 0.06%; BH: 2.28% | Edge gaps need manual repair | 2020 | [7]
Yak | Single RGB | Sobel operator + SLIC | 33 | BH: 1.95%; BL: 3.11%; CD: 4.91% | K-value manually tuned | 2021 | [8]
Cattle | Azure Kinect DK | Shadow difference method + watershed segmentation | 34 | BL: 2.7 cm; BH: 2.07 cm; AW: 1.47 cm | High-contrast uniform background required | 2022 | [9]
Sheep | Single RGB | PreciseEdge | 75 | BL: r = 0.943; BH: r = 0.931; CG: r = 0.893 | High-contrast uniform background required | 2022 | [10]
Sheep, Cattle | Intel RealSense D415 | U-Net | 55 | BL: 2.42%; BH: 1.86%; HH: 2.07%; CD: 2.72% | Heavy model, embedded-unfriendly | 2022 | [11]
Cattle | Hikvision RGB | SOLOv2 | 200 | BL: 1.36%; BH: 0.44%; CD: 2.05%; CW: 2.80% | Extreme-pose drift | 2022 | [12]
Cattle | Smartphone RGB | YOLOv5s + Lite-HRNet | 30 | BL: 7.55%; BH: 6.75%; CD: 8.00%; SC: 8.97% | Keypoint occlusion drift | 2024 | [13]
Cattle | RealSense D455 | Improved Mask2Former | 137 | BH: 4.32%; HH: 3.71%; BL: 5.58%; SC: 6.25% | High posture sensitivity; large errors in girth measurements | 2024 | [14]

Note: CCD denotes charge-coupled device; FCM denotes fuzzy C-means; NA denotes not available; BL denotes body length; BH denotes body height; CD denotes chest depth; HH denotes hip height; SW denotes shoulder width; CW denotes chest width; AW denotes abdominal width; HW denotes hip width; CG denotes chest girth; SC denotes shank circumference.

Multi-algorithm fusion capitalizes on the complementary strengths of distinct computational techniques, effectively improving the precision of object detection and body segmentation. A notable example is the combination of traditional edge detection with morphological operations. BAI et al.[15] applied boundary detection to sheep images, while SHI and ZHANG[7] utilized image segmentation for cattle; both studies verified the specific advantages of this hybrid approach, including higher accuracy in identifying animal body contours, reduced interference from complex background textures, and improved robustness to minor postural variations. In addition, depth information has been integrated to facilitate foreground extraction. For instance, SHI et al.[6] adopted semi-global matching to generate disparity maps, thereby identifying background regions in depth images, and subsequently performed image subtraction to achieve rapid contour extraction for pigs. YE et al.[9] employed a shadow difference method to distinguish the background from target objects in depth images, and then applied the watershed algorithm for target segmentation, reducing the reliance on traditional color contrast-based methods. Moreover, the integration of superpixel methods with clustering algorithms has improved segmentation efficiency and accuracy. ZHANG et al.[8] combined the Sobel operator with the simple linear iterative clustering (SLIC) superpixel algorithm to extract animal foreground images, achieving improved processing speed without compromising precision.
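The background-difference idea underlying several of these pipelines can be illustrated with a short numpy sketch. This is a generic simplification, not the exact method of any cited study: a reference frame of the empty scene is subtracted from the current frame, and pixels whose difference exceeds an assumed threshold are kept as foreground; real systems add morphological cleanup and depth cues.

```python
import numpy as np

def foreground_by_subtraction(frame, background, diff_thresh=30):
    """Background-difference method: mark pixels that deviate from an
    empty-scene reference frame by more than diff_thresh as foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > diff_thresh

# Reference frame (empty pen) and a frame containing a 30x40 animal region
background = np.full((80, 80), 50, dtype=np.uint8)
frame = background.copy()
frame[10:40, 20:60] = 180
fg = foreground_by_subtraction(frame, background)
```

The cast to int16 avoids unsigned underflow when the animal is darker than the background, a common pitfall with uint8 frames.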
Although traditional image processing methods have matured in certain aspects, they remain highly reliant on manually engineered feature extraction and parameter tuning. In contrast, deep learning models can automatically learn image features from large-scale datasets, typically offering greater generalization capability and robustness. A key research focus in this domain is on instance segmentation models and the diversification of deep learning architectures. Instance segmentation models enable precise localization and pixel-level segmentation of individual animals. The mask region-based convolutional neural network (Mask R-CNN) model, a leading representative in this field, generates target masks that perform remarkably well in sheep contour extraction[16]. QIN et al.[17] applied this model to identify sheep positions and generate mask images, thus eliminating dependence on handcrafted features. To address limitations in handling blurred boundaries and overlapping scenarios, BELLO et al.[18] proposed an enhanced Mask R-CNN to improve cattle segmentation accuracy, while ZHAO et al.[19] developed the SheepInst algorithm with a modified backbone and detection heads, achieving highly accurate segmentation of densely overlapping sheep. However, such models involve high computational complexity, impose strict hardware requirements, and are prone to overfitting in small-sample scenarios, which limits large-scale deployment in practical applications.
Besides instance segmentation, lightweight models for semantic segmentation and object detection have also been explored for sheep body size measurement and carcass segmentation tasks. XIE et al. [20] proposed a multi-scale dual-attention U-Net, achieving 93.76% precision and 86.94% mean intersection over union (MIoU) on a self-constructed sheep carcass hind leg dataset. Although the you only look once (YOLO) series demonstrates superior inference speed for sheep behavior recognition[21], its primary focus on detection rather than segmentation leads to limited contour delineation capability. This deficiency makes it difficult to meet the pixel-level localization requirements essential for identifying key body dimension measurement points. Future research should prioritize resolving the dual challenges of model lightweighting and high geometric segmentation precision, thereby facilitating the widespread application of edge-computing technology in sheep body size measurement.
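The MIoU figure quoted for the U-Net above is a standard segmentation metric. A minimal reference implementation follows; averaging only over classes present in either mask is one common convention, assumed here rather than taken from the cited work.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes that appear in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 1]])
gt   = np.array([[0, 1], [1, 1]])
miou = mean_iou(pred, gt, num_classes=2)   # class 0: 1/2, class 1: 2/3
```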

2.2 Keypoint recognition for 2D sheep body size measurement

2.2.1 Geometric feature-based keypoint recognition

In the field of 2D image analysis, geometric feature-based keypoint recognition techniques rely on explicit geometric descriptors and mathematical principles to enable the accurate extraction of sheep body size parameters[22, 23]. These methods provide a solid theoretical and computational foundation for locating anatomical landmarks that are critical to measurement accuracy. Table 2 presents a comparative analysis of different geometric approaches for keypoint identification, summarizing their respective advantages and limitations in sheep body size measurement applications.
Table 2 Comparison of geometric feature extraction methods for sheep body size measurement
Subjects | Method | Principle | Time complexity | Advantage | Disadvantage | Ref.
Sheep | Corner (inflection-point) detection | Compute curvature or angle discontinuities along the contour | O(n) | Very fast; accurate localization | Noise-sensitive | [22]
Sheep | Convex-hull analysis | Compute the convex hull of a point set and retain its vertices | O(n log n) | Parameter-free; fast execution | Keeps only "most convex" points; loses concavities | [23]
Sheep | Edge detection | Detect gray-level discontinuities via gradient/entropy/filtering | O(n) | Highly general-purpose | Fragile edges; post-processing required | [24]
Sheep | Maximum-curvature extraction | Locate curvature extrema along the edge | O(n) | Mathematically well-defined | Requires smoothing; prone to burrs | [5]
Sheep | Region-based U-chord curvature | Compute curvature with a fixed-chord sliding window | O(n) | Balances local & global shape | Extra chord-length parameter to tune | [16]
Overall, geometric feature-based analytical methods are characterized by clear theoretical foundations, relatively low computational complexity, and independence from large-scale data training. These approaches also impose modest hardware requirements and allow results to be interpreted through well-defined geometric rules. However, their heavy reliance on manually designed heuristics and expert experience makes them highly sensitive to the completeness and quality of object contours. Moreover, their generalization capability is limited: algorithmic parameters often need to be recalibrated for different sheep breeds or even for various growth stages of the same sheep. In addition, these methods exhibit poor adaptability to complex environments, as variations in illumination and background conditions can substantially degrade algorithmic performance.
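The curvature-style methods in Table 2 share one computational core: measuring how sharply the contour bends at each point. The following numpy sketch is a generic chord-angle variant, loosely analogous to U-chord curvature; the window size k is an assumed tuning parameter, illustrating exactly the recalibration burden discussed above.

```python
import numpy as np

def chord_angles(contour, k=2):
    """For each point P_i on a closed contour, return the turning angle (deg)
    between chords P_{i-k}->P_i and P_i->P_{i+k}; corners give large angles."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    idx = np.arange(n)
    v1 = pts - pts[(idx - k) % n]          # incoming chord
    v2 = pts[(idx + k) % n] - pts          # outgoing chord
    cos = (v1 * v2).sum(axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Closed square contour sampled at unit steps: corners should score ~90 degrees
square = ([(x, 0) for x in range(10)] + [(10, y) for y in range(10)]
          + [(10 - x, 10) for x in range(10)] + [(0, 10 - y) for y in range(10)])
angles = chord_angles(square, k=2)
```

Thresholding the returned angles yields candidate anatomical keypoints such as the ischial-tuberosity and scapular protrusions.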

2.2.2 Deep learning-based keypoint recognition

Deep learning models possess the ability to autonomously extract high-level features from images, and particularly with sufficient data, their performance in keypoint recognition significantly surpasses that of manually designed geometric rules. These models provide more robust solutions for anatomical landmark localization in sheep body size measurement. In recent years, researchers have proposed a variety of technical schemes tailored to different application scenarios. Table 3 compares the advantages and disadvantages of several deep learning-based approaches for livestock keypoint detection.
Table 3 Comparison of deep learning-based keypoint detection methods for sheep and related livestock body size measurement
Subjects | Method | Principle | Advantage | Disadvantage | Ref.
Sheep, Cattle | Multi-stage dense hourglass | Multi-scale fusion with iterative up/down-sampling for end-to-end keypoint detection | Highest localization accuracy and strong robustness to pose and coat variations | Large parameter count, high GPU memory usage, slow inference speed | [11]
Cattle | YOLOv5s + Lite high-resolution network (Lite-HRNet) | YOLOv5s proposes regions; Lite-HRNet refines keypoints | Excellent speed-accuracy trade-off, lightweight design, suitable for edge deployment | Relies on detection box quality; keypoints tend to drift under dense occlusion | [13]
Pig | Faster R-CNN + neural architecture search (NAS) | Faster R-CNN defines the search region; NAS refines the keypoints | High detection accuracy and strong generalizability | Slowest inference speed, difficult to meet real-time requirements, and long training and hyper-parameter tuning cycles | [25]
Overall, deep learning-based keypoint recognition methods overcome many limitations inherent to geometric feature-based techniques, such as heavy reliance on prior rules and weak generalization capability. Nevertheless, they also present inherent challenges. First, they require large-scale annotated datasets, and their performance can deteriorate sharply under data-scarce conditions. Second, certain architectures exhibit high computational complexity, limiting deployment on edge devices. Additionally, their robustness still needs improvement under challenging conditions, such as severe occlusion or extreme lighting variations.
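The top-down detectors in Table 3 share a common final step: decoding per-keypoint heatmaps predicted inside a detection box back to full-image coordinates. A minimal, framework-agnostic sketch follows; the array shapes and the half-pixel center offset are conventional assumptions, not the API of any specific model.

```python
import numpy as np

def decode_heatmaps(heatmaps, box):
    """Map the argmax of each (H, W) heatmap, predicted on a detector crop,
    back to full-image (x, y) coordinates. box = (x0, y0, x1, y1)."""
    K, H, W = heatmaps.shape
    x0, y0, x1, y1 = box
    flat = heatmaps.reshape(K, -1).argmax(axis=1)   # peak index per keypoint
    ys, xs = np.unravel_index(flat, (H, W))
    sx, sy = (x1 - x0) / W, (y1 - y0) / H           # crop-to-image scale
    return np.stack([x0 + (xs + 0.5) * sx, y0 + (ys + 0.5) * sy], axis=1)

# One 4x4 heatmap with its peak at row 1, col 2; box covers image region (0,0)-(8,8)
hm = np.zeros((1, 4, 4))
hm[0, 1, 2] = 1.0
kps = decode_heatmaps(hm, (0, 0, 8, 8))     # -> [[5.0, 3.0]]
```

This decoding step is also where the "drift under detection-box error" weakness noted in Table 3 enters: a shifted box shifts every decoded keypoint.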

2.3 Body size parameter calculation

Sheep body size parameters reflect their growth stage, health status, and breed characteristics. These parameters vary across different breeds and developmental stages. In current studies, differences in measurement methodologies have resulted in variations in the formulation and representation of body size data. In 2D image-based measurements, body dimensions are commonly computed using the Euclidean distance between two anatomical landmarks or the perpendicular distance from a point to a line. This section introduces existing approaches for calculating sheep morphometrics in 2D imagery.
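The two geometric primitives named above can be written directly. The centimeters-per-pixel scale factor would come from camera calibration or a reference object; the numeric values below are purely illustrative.

```python
import math

def euclid_cm(p, q, cm_per_px):
    """Straight-line distance between two landmarks, converted to centimeters."""
    return math.hypot(q[0] - p[0], q[1] - p[1]) * cm_per_px

def point_to_line_cm(p, a, b, cm_per_px):
    """Perpendicular distance from point p to the line through a and b
    (e.g., a body point to a ground reference line), in centimeters."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay) * cm_per_px

# Illustrative values at an assumed scale of 0.5 cm per pixel
length = euclid_cm((0, 0), (30, 40), 0.5)                      # -> 25.0
height = point_to_line_cm((20, 40), (0, 100), (50, 100), 0.5)  # -> 30.0
```

The sections below all reduce to these two operations once the landmarks and the ground reference have been located.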

2.3.1 Body length

In livestock production and breeding evaluation, body length typically refers to the straight-line distance between the cranial point of the scapula and the caudal point of the ischial tuberosity. This definition aligns with conventional livestock husbandry practices and provides a unified anatomical reference for the design of subsequent measurement methods. When visual measurement is employed instead of manual measurement, it is essential to precisely locate these two points within the image space and compensate for systematic errors caused by posture, camera angle, and breed-specific coat variations.
2D image-based body length measurement methods generally follow three technical routes: full-contour endpoint selection, maximum inscribed rectangle, and structural segmentation combined with curvature analysis. Full-contour endpoint selection (e.g., Canny-Euclidean distance method[26] as shown in Fig. 4a) is conceptually intuitive and conforms to conventional perceptions of body length measurement. It is computationally efficient and exhibits certain generalizability across sheep breeds and body conformations. However, the accuracy of landmark extraction in this approach is highly dependent on contour quality, and measurement results are susceptible to significant variation due to changes in camera angle. Maximum inscribed rectangle (as a representative of geometric shape analysis[26], Fig. 4b) determines body length by calculating the distance between the diagonal vertices of the largest rectangle that can be inscribed within the animal's contour. This approach can mitigate interference from localized irregularities but requires highly regular contour shapes. If the sheep outline is irregular, accurate extraction of the maximum inscribed rectangle becomes difficult, and its practical performance remains to be verified. Structural segmentation combined with curvature analysis (Fig. 4c): SHEN[27] divided the lateral contour of a sheep's body into four equal sections using a cross-partition method. The boundary was encoded with a 16-neighborhood chain code, and curvature values and edge inflection points were computed for each boundary point. The caudal protrusion of sheep was selected as the ischial tuberosity point, and the anterior thoracic protrusion was taken as the scapular landmark. The Euclidean distance between these points was then calculated. This method is highly targeted, leveraging structural characteristics of the sheep body to improve landmark localization accuracy. 
However, its performance is sensitive to image quality, and uneven lighting or noise can adversely affect subsequent keypoint detection.
Fig. 4 Body length measurement methods of sheep

a. Overall contour point selection method b. Maximum inscribed rectangle method c. Zone detection method

Overall, current contour-based approaches for sheep body length measurement generally follow a "contour extraction, landmark localization, distance calculation" framework. However, several common limitations remain. First, these methods depend strongly on image quality (including contour completeness and lighting conditions) and camera angle, making measurements susceptible to environmental interference. Second, landmark localization often relies on a single geometric cue (e.g., contour endpoints, rectangle vertices) and lacks adaptability to complex sheep body structures. Consequently, it is difficult to maintain stability across different postures and breeds of sheep.
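As a toy illustration of the full-contour endpoint route, body length can be approximated from the extreme contour points of a lateral silhouette. This is a deliberate simplification: real pipelines locate the scapular and ischial landmarks rather than raw horizontal extremes, and the elliptical test contour is an assumption.

```python
import numpy as np

def body_length_px(contour):
    """Approximate body length as the distance between the leftmost and
    rightmost contour points of a laterally imaged animal."""
    pts = np.asarray(contour, dtype=float)
    left = pts[pts[:, 0].argmin()]
    right = pts[pts[:, 0].argmax()]
    return float(np.linalg.norm(right - left))

# Toy elliptical "sheep trunk" contour: semi-axes 60 px (length) and 25 px (depth)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
contour = np.stack([60 * np.cos(t), 25 * np.sin(t)], axis=1)
```

Because the extremes are taken along the image x-axis, the estimate degrades as soon as the animal stands at an angle to the camera, mirroring the viewpoint sensitivity discussed above.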

2.3.2 Body height

In linear conformation assessment of sheep, withers height is defined as the vertical distance from the highest point of the scapula (the withers) to the load-bearing ground surface. It is one of the core indicators reflecting skeletal development and growth stage. In vision-based measurement, it is essential to reliably locate the 3D coordinates of the withers under non-contact conditions, accurately estimate or fit the equation of the actual ground plane, and then convert the vertical distance into a real-world metric value in centimeters to ensure direct comparability with manual measurements.
To achieve precise withers height measurement, researchers have proposed image-based solutions optimized for different application scenarios, supported by algorithmic enhancements. MENESATTI et al. [3] processed sheep body images using MATLAB software to accurately determine the spatial coordinates of the withers and the left fore hoof tip (Fig. 5a), followed by calibration of measurement data via a partial least squares regression model. The final vertical distance between the withers point and the ground plane was then computed. This method uses mathematical modeling to reduce measurement error, and exhibits a rigorous logical workflow. However, it demands substantial computational resources and longer processing time, making it more suited to research laboratory contexts (e.g., studies of sheep growth patterns) where measurement precision requirements are stringent and equipment cost and time constraints are less critical. ZHANG et al. [26] located the withers point using an image-processing algorithm and defined a ground reference line as the straight line linking the lowest points of the fore and hind hooves (Fig. 5b). The maximum vertical distance from the anterior shoulder point to this reference line was then taken as the withers height. This approach follows a simple algorithmic logic, is easy to implement, and produces rapid measurement outputs. It is therefore more appropriate for on-farm routine body size measurement, where real-time capability is a priority, measurement precision requirements are moderately relaxed, and animals are typically required to maintain a standard standing posture.
Fig. 5 Body height measurement method of sheep

a. Withers-to-left-fore-hoof connecting method b. Perpendicular distance from the withers point to the line connecting the fore and hind hooves

Overall, current image-based sheep withers height measurement technologies generally follow a "landmark localization, ground reference definition, vertical distance calculation" framework. The use of either the withers point or the anterior shoulder point to determine vertical distance to the ground forms the core measurement principle, with different implementations offering distinct advantages depending on the operational context.

2.3.3 Hip height

In linear conformation scoring and meat production performance prediction for sheep, hip height is defined as the vertical distance from the most posterior protrusion of the ischial tuberosity (i.e., the highest point of the rump) to the load-bearing ground surface. This metric reflects the degree of development of the hindquarters skeletal structure and the extension capacity of the hind limbs, and it is strongly correlated with rump muscle thickness and overall hindquarter meat yield.
In ref. [28], the rump point cloud data were projected onto the xOy plane, and a bounding box technique was applied to extract the highest skeletal point and the hind hoof point (Fig. 6a); the distance between these two points was taken as the rump height. This method achieves high precision in regional extraction, enabling accurate localization of the critical anatomical landmarks required for measurement. It is most suitable for research institutions with stringent precision requirements and access to substantial financial and technical resources, such as in-depth studies of 3D sheep morphology. ZHOU et al.[22] applied a bounding box algorithm to divide the sheep body into four regions. The lowest points of the fore and hind hooves were identified and connected to define the ground reference line. Subsequently, the rump height landmark was determined from the most posterior protrusion of the ischial tuberosity, and the vertical distance from this point to the ground was recorded as the rump height (Fig. 6b). This method offers high adaptability to variations in sheep posture and can maintain stable identification of both the measurement reference and landmark points, even under moderate changes in stance. It is therefore well suited to routine on-farm production scenarios, supporting regular rump height measurements in sheep herds.
Fig. 6 Hip height measurement method of sheep

a. Projection calculation method b. Vertical distance from hip-height point to the hoof line

Overall, current hip height measurement techniques for sheep generally follow a "region extraction, ground reference determination, landmark localization, distance computation" workflow. All methods share the core principle of determining the vertical distance from a key anatomical point on the rump to the ground surface. Nonetheless, their adaptability across diverse operational contexts is still limited. Future work should aim to balance precision and cost, developing hip height measurement solutions better aligned with the varied requirements of both research and production environments[29].
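As an illustration of this shared workflow, the following minimal NumPy sketch (not any cited author's implementation; the axis convention and the rear-region fraction are assumptions) computes hip height as the vertical distance from the highest rear-region point to the lowest hoof contact:

```python
import numpy as np

def hip_height(points: np.ndarray, rear_fraction: float = 0.3) -> float:
    """Hip-height workflow sketch: region extraction, ground reference,
    landmark localization, vertical-distance computation.

    `points` is an (N, 3) cloud in metres. Axis convention (an
    assumption): x runs along the body, z is height. The ground
    reference is the lowest point (a hoof contact); the landmark is
    the highest point within the rear `rear_fraction` of the body.
    """
    x, z = points[:, 0], points[:, 2]
    ground_z = z.min()                                     # hoof-contact reference
    x_min, x_max = x.min(), x.max()
    rear = z[x > x_max - rear_fraction * (x_max - x_min)]  # rump region
    return float(rear.max() - ground_z)
```

In practice the rear-region cut would come from the four-region bounding-box split described above rather than a fixed fraction.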

2.3.4 Chest depth

In linear conformation scoring and prediction of meat production traits in sheep, chest depth is defined as the vertical distance from the highest point of the withers to the most concave point at the lower margin of the sternum (Fig. 1). For vision-based measurement, it is necessary to simultaneously locate the 3D coordinates of the withers and of the anatomical landmark at the sternum's lower margin, and to convert the vertical distance into the world coordinate system for direct comparability with manual measurements.
For automated measurement of chest depth in sheep, researchers have proposed multiple image-processing approaches with optimized landmark detection logic, tailored to different operational contexts. LU et al.[30] proposed a pixel statistics-based approach, which employs binarized contour images and vertical pixel counting to identify the lower reference point. Its advantage lies in its ability to emphasize landmark features through well-defined contours, making it suitable for controlled environments with sufficient illumination and uniform backgrounds. However, it is highly sensitive to image quality. ZHOU et al.[22] applied an anatomy knowledge-based approach that directly selects the concave point at the posterior side of the fore hoof, located at the sternum's lower margin, as the lower reference point. Its advantages include intuitive understanding and ease of implementation, making it suitable for routine measurements in regular farm scenarios with standard standing posture. The drawback is its strong dependency on precise animal posture. ZHANG et al.[26] introduced a mathematical fitting-based approach that uses linear fitting of data points along the anterior margin of the sternum to determine the lower reference point via computed distances. Its strength is the reduction of subjective error through a data-driven process, offering the highest measurement accuracy. It is most appropriate for research environments where precision requirements are extremely stringent, but it imposes high demands on both data quality and computational resources.
Overall, existing studies share two persistent limitations. First, scene adaptability is heavily constrained by the landmark detection logic, making cross-context compatibility difficult (e.g., high-precision methods are ineffective in typical farm environments). Second, high sensitivity to external conditions such as image quality, animal posture, and data integrity reduces stability under complex field scenarios.
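The fitting-based idea behind the third approach can be illustrated with a small sketch (hypothetical landmark logic, not ZHANG et al.'s exact procedure): fit a line to the contour points along the sternum's anterior margin and take the point of maximum perpendicular distance as the concave lower reference:

```python
import numpy as np

def sternum_reference(edge_pts: np.ndarray) -> np.ndarray:
    """Locate the lower chest-depth reference on a 2D contour segment.

    A minimal sketch of the fitting-based idea: fit a straight line to
    the contour points along the sternum's anterior margin, then take
    the point with the largest perpendicular distance to that line as
    the concave reference point. `edge_pts` is (N, 2) in image or
    world coordinates; the exact landmark definition is an assumption.
    """
    a, b = np.polyfit(edge_pts[:, 0], edge_pts[:, 1], 1)  # y = a*x + b
    # Perpendicular distance of each point to the fitted line.
    d = np.abs(a * edge_pts[:, 0] - edge_pts[:, 1] + b) / np.hypot(a, 1.0)
    return edge_pts[np.argmax(d)]
```

Chest depth then follows as the vertical offset between this reference and the withers landmark, converted to world units.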

2.3.5 Chest width

In sheep linear conformation assessment, chest width refers to the maximum horizontal distance between the posterior edges of the left and right scapulae (i.e., the outermost points of the shoulders) when the animal is standing naturally. This parameter is a key body size indicator reflecting the lateral development of the body and the width of the thoracic cavity. For vision-based measurement, non-contact localization of the left and right shoulder protrusions is required, followed by conversion of the pixel-based image distance into the world coordinate system for direct comparison with manual measurements.
To achieve automated measurement of chest width in sheep, researchers have proposed three categories of technical schemes based on image morphology analysis and machine learning, which differ markedly in their localization logic, technical features, and scene adaptability. LU et al.[30] employed width-curve extrema analysis as the core method for measuring sheep chest width (Fig. 7a). The method features a simple operational logic and achieves stable outputs when the background is clean and contours are well defined, making it suitable for laboratory or small-scale controlled farm settings; however, measurement accuracy is highly sensitive to image quality. ZHANG et al.[26] proposed a "geometric morphology + axis-symmetry positioning" method (Fig. 7b), which combines geometric contour analysis with body-axis symmetry constraints. It achieves theoretically higher precision but requires complete, high-quality initial point cloud or image data, while incurring a relatively high computational cost; the approach is therefore more appropriate for research applications where sufficient computational and data processing capabilities are available. ZHANG et al.[31] adopted supervised learning with manual annotation (Fig. 7c), where a trained model autonomously identifies the horizontal line passing through the widest part of the sheep's shoulders. The intersection points of this line with the body contour are defined as the chest width landmarks, and the distance between these two points is taken as the chest width. This technique aligns precisely with the anatomical definition of the parameter and provides high measurement accuracy, yet it depends heavily on extensive manually annotated training data, and is thus most suitable for data-rich and precision-sensitive research scenarios.
Fig. 7 Chest width measurement methods of sheep

a. Contour inflection point calculation method b. Binary centroid method c. Horizontal distance maximum method

Overall, current chest width measurement techniques uniformly adopt the maximum horizontal distance between the shoulder protrusions as the core measurement principle. Nevertheless, two major limitations persist. First, scene adaptability depends heavily on the underlying technical logic: methods reliant on high-quality imagery are unsuitable for complex farm environments, while those dependent on annotated data are difficult to deploy under resource-limited conditions. Second, cross-breed generalization remains inadequate, as most existing solutions are tailored to specific breeds and lack a universal technical framework for broader applicability.
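The width-curve extrema idea can be sketched in a few lines (an illustrative sketch; the shoulder search band and any pixel-to-centimetre calibration are assumptions a real system must supply):

```python
import numpy as np

def chest_width_px(mask: np.ndarray) -> int:
    """Width-curve extrema sketch for chest width on a top-view binary mask.

    For each row (running along the body axis) count the foreground
    pixels to build a width curve, then take the maximum of that curve
    within the shoulder band. Restricting the search to the front half
    of the mask is an assumption; conversion to world units requires a
    calibrated pixel size.
    """
    widths = mask.sum(axis=1)              # per-row foreground pixel count
    band = slice(0, mask.shape[0] // 2)    # assume shoulders in the front half
    return int(widths[band].max())
```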

2.3.6 Hip width

In sheep conformation assessment, hip width refers to the maximum horizontal distance between the left and right ischial tuberosities (i.e., the outermost points of the rump) when the sheep is standing naturally. This parameter serves as a critical body measurement for evaluating the lateral development of the hindquarters and the balance of overall body structure.
ZHANG et al. [26] proposed a method combining morphological structure and curvature analysis. Their approach identifies hip width landmarks by analyzing the curvature variation of the spinal axis and the contour curves of the thoracic and pelvic regions (Fig. 8a). This biologically informed method can, in theory, achieve accurate localization of hip width points. However, it is highly dependent on image quality: Noise or indistinct contours can distort the spine fitting and curvature calculation, thereby affecting measurement accuracy. Consequently, this approach is best suited for controlled laboratory or precision research settings where image quality can be guaranteed. SHI et al. [6] adopted a contour extraction and geometric analysis method. They first employed image subtraction on depth maps to rapidly obtain body outlines (Fig. 8b) and then utilized a recursive algorithm to identify the largest inscribed rectangle based on convexity and concavity characteristics of the contour. The maximum horizontal distance of this rectangle was defined as the hip width. This method closely matches the operational logic of manual measurement and maintains a simple computational process. However, its robustness is limited: Large posture variations or irregular outlines can lead to inaccurate extraction of the inscribed rectangle, reducing measurement precision. Thus, it is more suitable for on-farm scenarios where sheep maintain a relatively regular stance and clear contour visibility.
Fig. 8 Hip width measurement methods of sheep

a. Binary centroid method b. Maximum inscribed rectangle method

Overall, existing sheep hip width measurement techniques share two common limitations. First, they exhibit high sensitivity to external conditions such as image quality and animal posture, compromising stability in complex field environments. Second, while these methods perform well for specific breeds, they generally lack cross-breed generalization, making it difficult to adapt measurement frameworks across different sheep populations.

3 Three-dimensional (3D) point cloud-based body size measurement

Because 2D image-based sheep body size measurement lacks depth information, it cannot directly obtain circumferential parameters such as abdominal girth, chest girth, or cannon bone circumference. With the advancement of 3D sensing technologies, measurements based on 3D data now enable direct estimation of such parameters. Unlike 2D imaging, 3D reconstruction typically integrates multiple views of the same animal. The raw images must undergo filtering and registration to reconstruct the complete 3D morphology of the sheep, followed by downsampling and keypoint extraction for body size measurement (Fig. 9).
Fig. 9 Three-dimensional point cloud body size measurement process of sheep

a. Original point cloud b. Filtering c. Registration d. Point cloud simplification e. Body size measurement

Although 3D point clouds are less sensitive to lighting and texture variations, high-precision reconstruction generally requires large-scale training data and significant computational resources: conditions that often conflict with the low-annotation and edge-computing environments typical of livestock farms. Recent studies on animal 3D body size measurement (summarized in Table 4) may be broadly classified into three technical pathways: handheld high-precision devices with small sample sizes, depth-camera-based medium-sample approaches, and deep-learning low-cost large-sample models. Handheld high-precision scanners (e.g., REVscan, TOF 3D) typically achieve 1%-2.5% error rates using extremely small samples (1-25 animals); however, these systems are expensive and slow, restricting their use to laboratory environments. Depth cameras (e.g., Kinect v2 / DK series) reduce per-unit cost and expand sample sizes to 50-250 animals, achieving stable accuracy between 0.7% and 3.3%. Deep learning models, including PointNet++, remain mostly within the small-sample paradigm. Such studies face three fundamental reliability concerns: insufficient statistical power, resulting in wide confidence intervals; limited diversity in body size, coat color, and environmental conditions, which restricts external validity; and a lack of independent test sets, leading to potential overfitting to specific experimental settings. Consequently, small-sample outcomes should be interpreted as the upper performance limit of algorithms rather than as field-reproducible accuracy benchmarks. To achieve trustworthy, industry-scale measurement standards, extensive validation studies spanning larger populations, multiple seasons, and different breeds are required to ensure model reliability and replication under diverse farming conditions.
Table 4 Comparison of 3D point cloud methods for sheep and related livestock body size measurement
Subjects Camera type Algorithms Animal Numbers Body traits & accuracy Year Work
Sheep REVscan_3D Octree + Delaunay triangulation 1 BL / BH / CW / HH / HW: 1.01% 2019 [32]
Cow Binocular vision Scale-invariant feature transform (SIFT) algorithm 20 BL: 1.14%; BH: 1.57%; SW: 2.24% 2020 [33]
Sheep TOF 3D camera Principal component analysis + random sample consensus + improved region growing 1 BL / BH / CD / HH: 2.36% 2020 [34]
Cattle Kinect v2 Background subtraction, ROR Algorithms, Iterative Closest Point (ICP), ISS3D 103 BL / BH / CD / HH: <3 % 2020 [35]
Pig Intel RealSense D720 Depthwise Separable Convolution 239 BL: 0.75 cm; HW: 0.38 cm; BH: 1.23 cm; SW: 0.33 cm; HH: 0.66 cm 2021 [31]
Cattle Kinect DK Five-point clustering gradient boundary recognition algorithm 10 BL: 2.3%; BH: 2.8%; CG: 2.6%; AD: 2.8%; SW: 1.6% 2022 [36]
Pig Kinect v2 Variance classification algorithms 50 BL: 0.7%; BH: 1.8%; SW: 3.3% 2022 [37]
Pig RealSense L515 DeepLabCut+EfficientNet-b6 12 BL / BH / HH / HW / SW: 1.79 cm 2023 [38]
Pig NA Improved PointNet++ 25 BL: 2.57%; BH: 2.18%; HH: 2.28%; SW: 4.56%; CG: 2.50%; AD: 3.14% 2023 [39]
Sheep Kinect v2 ICP, pass-through filtering, statistical filtering, RANSAC 2 BL / BH / HH / HW: <5 % 2024 [40]
Sheep MV-EM120C GigE YOLOv11n-Pose, CNN, ElasticNet 51 BL: 3.11%; BH: 1.93%; CW: 3.38%; CD: 2.52% 2025 [41]
Sheep Kinect v2 PointNet++ 24 BH: 1.67%; CW: 3.63%; HH: 1.14%; BL: 2.71%; CG: 3.57%; HW: 3.71% 2025 [42]
Sheep Kinect DK Improved PointNet++, CPD 120 BL: 3.34%; BH: 3.07%; HH: 3.32%; CD: 3.63%; CG: 2.81% 2025 [43]

Note: TOF denotes time of flight; ISS3D denotes intrinsic shape signatures 3D; RANSAC denotes random sample consensus; CPD denotes coherent point drift.

3.1 3D point cloud filtering

Point cloud filtering techniques have evolved from single-operator approaches to multi-algorithm fusion frameworks, yet remain constrained by the inherent precision-robustness-generalization trade-off. Pass-through filtering combined with statistical outlier removal provides high accuracy in static scenes, but may cause erroneous deletion of valid data when animals move[44]. Improved k-nearest neighbors (k-NN) denoising integrated with octree-based simplification can suppress noise effectively, yet tends to over-smooth, leading to loss of fine details[32]. Depth-color multimodal fusion augmented by morphological operations introduces semantic cues to mitigate the trade-off, but requires manual structural design tailored to specific body types, limiting cross-breed transferability[45]. Achieving both dynamic robustness and detail preservation under a zero manual parameter tuning constraint remains an open challenge. Fig. 10 presents a typical multi-algorithm fusion filtering workflow. Although this process establishes an extensible fusion framework, it still requires further investigation into automatic parameter estimation and dynamic adaptive mechanisms.
Fig. 10 Multi-algorithm fusion filtering process diagram of sheep

Note: DBSCAN denotes density-based spatial clustering of applications with noise.
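The statistical-outlier-removal operator that anchors these fusion pipelines can be sketched in pure NumPy (an O(N^2) brute-force illustration for small clouds, not a production filter; `k` and `std_ratio` are the usual tunable parameters):

```python
import numpy as np

def statistical_outlier_removal(pts: np.ndarray, k: int = 8,
                                std_ratio: float = 2.0) -> np.ndarray:
    """Statistical-filtering sketch on an (N, 3) point cloud.

    Keep points whose mean distance to their k nearest neighbours lies
    within `mean + std_ratio * std` of the cloud-wide distribution,
    mirroring the classic statistical-outlier-removal operator that is
    typically combined with pass-through filtering. Brute-force
    pairwise distances; fine for a sketch, too slow for full frames.
    """
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[mean_knn <= thresh]
```

A pass-through stage would simply pre-crop `pts` to a coordinate box before this filter runs.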

Meanwhile, target segmentation and ground removal techniques have progressed from traditional clustering approaches to semantic recognition paradigms, forming a "clustering, background separation, ground removal" workflow. Clustering and variance-based classification remain standard for object grouping[34, 35, 40], while dynamic background separation demonstrates superiority in complex environments. LIU et al.[46] used Gaussian mixture modeling with real-time parameter updates to achieve adaptive background separation for cattle. However, such models rely solely on low-order color distributions and have yet to incorporate semantic labels, falling short of fully semantic recognition.
During the process of measuring sheep body dimensions, point cloud data may change due to movement or postural variations of the sheep. Point cloud simplification can adapt to these dynamic changes by reducing data volume, thereby enhancing the stability and reliability of measurements. Current point cloud simplification techniques have evolved beyond mere dimensionality reduction (e.g., non-uniform grid methods, raster-based simplification, normal vector-based simplification, and curvature-based simplification[47]) and have shifted toward feature-preserving simplification strategies. CHENG et al.[48] proposed a local conditional information evaluation method, an innovative approach in the field of adaptive simplification that can be applied to sheep point cloud simplification. However, a core limitation of this method lies in the lack of a unified standard for setting weights in the feature scoring function, requiring weight parameters to be recalibrated for different animal subjects. Furthermore, in regions with extremely low point cloud density, such as occluded leg areas, the scoring function is prone to misjudgment, leading to the loss of key feature points.
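The baseline that these feature-preserving strategies refine, uniform voxel-grid simplification, can be sketched as follows (illustrative only; the voxel size would be tuned per sensor and animal scale):

```python
import numpy as np

def voxel_downsample(pts: np.ndarray, voxel: float) -> np.ndarray:
    """Uniform voxel-grid simplification sketch.

    Quantise each point to a voxel index and keep one centroid per
    occupied voxel: the standard baseline that feature-preserving
    schemes (curvature- or score-weighted) refine. Pure NumPy.
    """
    idx = np.floor(pts / voxel).astype(np.int64)
    uniq, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                      return_counts=True)
    inverse = np.asarray(inverse).reshape(-1)
    # Accumulate per-voxel coordinate sums, then average to centroids.
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, pts)
    return sums / counts[:, None]
```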

3.2 3D point cloud reconstruction

In sheep 3D reconstruction, stable and high-precision geometry is often established using calibration boxes with known dimensions as a shared reference. As shown in Fig. 11, sheep point clouds are captured from the left, right, and top viewpoints, and respective fitting planes are extracted to serve as constraints for initial alignment. However, during actual measurements, small device shifts caused by sheep collisions or ground vibrations can introduce spatial offsets between viewpoints, necessitating the use of registration algorithms for fine alignment after filtering.
Fig. 11 Coarse registration of sheep point cloud using a calibration box
The Iterative Closest Point (ICP) algorithm remains the most widely used approach for precise registration in 3D reconstruction, owing to its simple principle and high registration accuracy. Nevertheless, ICP is limited by its dependence on high overlap ratios and sensitivity to initial positions, which has prompted relevant research to advance toward multi-dimensional improvements. FAN et al.[40] proposed an improved scheme of "initial alignment based on camera position relations and ICP", which provides an effective solution for sheep 3D registration from the perspective of initial position optimization, but this method relies heavily on ultra-precise camera calibration. DANG et al.[49] developed a framework combining CNN-based data augmentation with simultaneous localization and calibration (SLAC) for cattle 3D registration; this framework breaks through the limitations of ICP from the perspective of data quality and offers a reference for sheep 3D reconstruction. Yet CNN-generated data risk excessive smoothing, potentially removing fine details (e.g., dorsal protrusions on sheep) that are critical for accurate body size measurement.
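A minimal point-to-point ICP iteration, the core on which these improvements build, can be sketched as follows (brute-force correspondences plus the closed-form Kabsch/SVD solve; real systems add the coarse initial alignment discussed above, without which ICP can settle in a wrong local minimum):

```python
import numpy as np

def icp(src: np.ndarray, dst: np.ndarray, iters: int = 20) -> np.ndarray:
    """Minimal point-to-point ICP sketch; returns the aligned copy of `src`.

    Each iteration matches every source point to its nearest target
    point, then solves the best rigid transform in closed form via SVD
    (the Kabsch solution) and applies it.
    """
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=-1)
        match = dst[d.argmin(axis=1)]              # nearest-neighbour pairs
        mu_s, mu_m = cur.mean(axis=0), match.mean(axis=0)
        H = (cur - mu_s).T @ (match - mu_m)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        cur = (cur - mu_s) @ R.T + mu_m
    return cur
```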
Meanwhile, sheep point cloud quality-encompassing density, uniformity, and noise level-directly impacts sheep 3D reconstruction performance. Current optimization efforts follow two primary paths for sheep point cloud processing[32,50]: structured reconstruction via triangular mesh topology, which demands high density and uniformity and struggles with dynamic point clouds; uniformization simplification through conditional voxel grids, which requires manual threshold tuning and risks removing valid points in occluded areas. Both approaches share the representation discontinuity problem, which remains a significant bottleneck for subsequent sheep point cloud registration and body size measurement.
For point cloud occlusion-induced data loss, existing studies mainly adopt local completion methods[50], with interpolation techniques favored for their simplicity and efficiency, and increasingly integrated into multi-technique frameworks. While surface reconstruction, multi-view fusion, and global optimization each offer distinct advantages, none currently deliver a unified balance of generalizability, accuracy, and efficiency. Specifically, for dynamic occlusions in sheep point clouds, effective strategies are still lacking.
Finally, workflow ordering within sheep 3D reconstruction has emerged as an important consideration for measurement performance. FAN et al.[40] compared three mainstream sequences, "filtering, registration, ground removal", "registration, filtering, ground removal", and "filtering, ground removal, registration", and confirmed that the "filtering, registration, ground removal" workflow achieves the best sheep body size measurement performance.

3.3 3D point cloud segmentation

As shown in Fig. 12, 3D segmentation plays a crucial role in sheep body size measurement by dividing the animal's body into semantic regions, such as head, torso, and limbs. This process not only enables non-contact automated measurement but also establishes a reliable data foundation for subsequent applications in growth monitoring, selective breeding assessment, and health management.
Fig. 12 Schematic of sheep body point cloud segmentation for circumference measurement

a. Point cloud registration b. Front view after segmentation c. Top view after segmentation

3.3.1 Traditional geometric feature-based segmentation of sheep body parts

Before the widespread adoption of deep learning, segmentation methods for sheep 3D point clouds primarily relied on geometric attributes of point clouds, such as curvature variation and slice distribution, to perform fine-grained extraction of specific body parts (e.g., the legs or head of the sheep). These methods gradually evolved into two major technical paths: region growing and slice-based clustering[34, 36]. Region growing depends heavily on empirically determined growth thresholds and is sensitive to noise, making it challenging to handle complex topologies (e.g., crossed sheep legs); during processing, it can produce broken or incomplete segmented regions. LI et al.[36] proposed slice-based clustering, which transforms 3D segmentation into 2D slice analysis and reduces computational complexity. This approach was initially applied to cattle leg segmentation and is principally suited to relatively regular body structures such as legs. Given the similarity in the regularity of leg structures between sheep and cattle, this method also offers a feasible technical reference for efficient sheep leg segmentation.
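The slice-based clustering idea can be illustrated with a small sketch (hypothetical thresholds; a greedy single-linkage grouping stands in for the clustering step): cut a horizontal slice below the belly, project it to the ground plane, and count the resulting 2D clusters as candidate legs:

```python
import numpy as np

def count_legs(points: np.ndarray, slice_z=(0.0, 0.2), gap: float = 0.15) -> int:
    """Slice-based clustering sketch for leg segmentation.

    Take the points whose height falls in `slice_z`, keep only their
    ground-plane (x, y) coordinates, and flood-fill groups of points
    closer than `gap`. Returns the number of clusters, i.e. visible
    legs. Both thresholds are assumptions to tune per setup.
    """
    in_slice = (points[:, 2] >= slice_z[0]) & (points[:, 2] < slice_z[1])
    sl = points[in_slice][:, :2]
    labels = -np.ones(len(sl), dtype=int)
    n = 0
    for i in range(len(sl)):
        if labels[i] >= 0:
            continue
        stack = [i]          # flood-fill one cluster from seed i
        labels[i] = n
        while stack:
            j = stack.pop()
            near = np.where((labels < 0) &
                            (np.linalg.norm(sl - sl[j], axis=1) < gap))[0]
            labels[near] = n
            stack.extend(near.tolist())
        n += 1
    return n
```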

3.3.2 Deep learning-based segmentation of sheep body

Deep learning, with its powerful feature learning capability, has overcome the reliance on empirical parameters inherent in traditional methods, showing significant advantages in complex livestock point cloud segmentation tasks. It also provides valuable insights for advancing sheep point cloud segmentation. SHI et al.[51] proposed a "kernel convolution integrated with graph attention mechanism + transfer learning" framework, which successfully established an efficient segmentation model for cattle and pigs. Its core contribution lies in designing a transfer learning framework suitable for cross-species livestock segmentation, offering a direct reference for modeling sheep point clouds. Future research on sheep could adopt the core architecture of this framework and select source-domain species, such as goats or specific sheep breeds, with body structures (e.g., torso contours and limb proportions) more similar to the target sheep, thereby effectively reducing the risk of "negative transfer". On the other hand, JIANG et al.[52] introduced an improved PointVector++ network, which enhances the handling of complex point clouds and has achieved efficient segmentation of pig body parts. Its feature extraction strategy tailored to complex topological structures offers important insights for sheep segmentation. Since sheep also exhibit irregular topologies in regions such as the torso and head, this network's feature extraction mechanism can be adapted for segmenting the corresponding sheep body parts. At the same time, its limitations, such as insufficient feature extraction from very small-scale point clouds and jagged segmentation boundaries, also highlight directions for improving sheep point cloud segmentation models. Future work could build on its core network architecture by incorporating local point cloud density enhancement modules or introducing multi-scale feature fusion mechanisms. This would strengthen feature extraction in small-scale regions of sheep (e.g., ear tips and hooves) and further improve segmentation accuracy.

3.4 Keypoint detection and recognition for 3D sheep body size measurement

3.4.1 Geometric feature-based keypoint recognition for sheep body

Keypoint detection serves as the foundation of sheep body size measurement. In the field of sheep research, keypoint extraction mainly relies on geometric feature methods such as curvature, normal vector, and spatial distribution. Table 5 compares the advantages and disadvantages of various approaches that rely on manually constructed geometric features of sheep.
Table 5 Comparison of key-point methods based on artificially constructed geometric features of sheep study
Subjects Method Principle Advantages Limitations Work
Sheep Normal-vector and curvature fusion simplification algorithm Simplify the point cloud by combining normal vectors and curvature, then automatically extract body measurement points along the animal's back Preserves key feature points, offers high extraction accuracy, and adapts to various postures Relies heavily on point cloud quality and pre-processing; high computational complexity [53]
Sheep Spatial distribution statistical method Based on spatial distribution features of the point cloud, set thresholds through pass-through filtering, combined with an extremum method and dimensionality-reduction projection to locate measurement points High computational efficiency, real-time capability, robust to wool-thickness interference Dependent on fixed postures and prior spatial-proportion knowledge; limited breed/growth-stage adaptability; sensitive to outlier point clouds [54]
Overall, geometric feature-based keypoints can provide reliable landmarks for sheep body measurement. However, they inherently face the trade-off among accuracy, robustness, and generalization that prevents simultaneous optimization of all three. Future developments should focus on modality complementarity, adaptive parameterization, and weakly supervised learning. By retaining the interpretability of geometric features while progressively reducing dependency on manual thresholds, template alignment, and synchronization precision, it will be possible to achieve truly breed-generalized and scene-robust fully automated body size measurement systems.
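A common building block of these geometric methods, the local-PCA surface-variation score used as a curvature proxy, can be sketched as follows (illustrative; the neighbourhood size is an assumption, and brute-force neighbour search limits it to small clouds):

```python
import numpy as np

def surface_variation(pts: np.ndarray, k: int = 8) -> np.ndarray:
    """Per-point curvature proxy via local PCA.

    For each point, take its k nearest neighbours, form the local
    covariance, and report lambda_min / (sum of eigenvalues): near 0
    on flat regions, larger at ridges such as the withers. Keypoint
    candidates are then the extrema of this score.
    """
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    nn = d.argsort(axis=1)[:, :k + 1]        # neighbourhood incl. the point
    scores = np.empty(len(pts))
    for i, idx in enumerate(nn):
        cov = np.cov(pts[idx].T)
        lam = np.sort(np.linalg.eigvalsh(cov))
        scores[i] = lam[0] / max(lam.sum(), 1e-12)
    return scores
```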

3.4.2 Deep learning-based keypoint detection for sheep body

Deep learning-based keypoint detection technology has become an important research direction to break through the bottleneck of manually constructed geometric keypoints, and related deep learning research in other livestock and poultry fields has accumulated transferable technical experience with significant reference potential.
HU et al.[39] proposed an enhanced PointNet++[55] framework that provides an end-to-end solution for pig body-part segmentation and keypoint localization, effectively overcoming the inability of traditional 3D point cloud segmentation methods to capture fine-grained local details. This technical route is of clear reference value for sheep body size keypoint recognition: its improved network architecture can be adopted; the local feature extraction module can be tuned to sheep-specific characteristics such as irregular trunk contours and slender limbs; perception of key regions such as the acromion and rump can be sharpened; and part segmentation can be integrated with keypoint localization. FALQUE et al.[56] introduced a supervised regression-based approach, reformulating cattle keypoint detection as a distance regression problem to enhance localization precision. This framework avoids the cascading errors caused by segmentation failures and exhibits strong robustness under mild occlusion or partial interference. Its core logic can be transferred to sheep research by designing a distance regression model suited to sheep. At the same time, the data dependence and scene adaptation limitations it exposes indicate pitfalls for sheep research to avoid: data augmentation strategies should be explored to reduce reliance on annotated samples, point cloud denoising modules should be added to improve resistance to interference, and model generalization to different sheep breeds and complex scenarios should be strengthened.

3.5 Sheep posture normalization

The accuracy of body size measurements is closely tied to sheep posture, and different posture normalization methods directly affect the measurement precision of body size parameters. Table 6 provides a comparative overview of existing sheep posture normalization methods.
Table 6 Comparison of sheep pose normalization methods
Subjects Method Pose-handling strategy Advantages Limitations Work
Sheep Global-local non-rigid warping Use dynamic position encoding with a similarity transformation group to obtain the pose, followed by template alignment No reliance on strict symmetry or complete point clouds, enabling non-rigid pose recovery Heavy computation; template library required [57]
Sheep CNN micro-pose classification + ElasticNet regression An error-correction model based on CNN micro-pose classification and ElasticNet regression Lightweight deployment Still sensitive to resolution/angle; extreme poses fail [41]
Sheep PointNet++ region segmentation and re-projected coordinate frame Multi-view local pose normalization, PointNet++ segmentation, re-projected coordinate frame Suppresses drifting landmarks under sudden poses Relies solely on geometric coordinates without fusing RGB or other modalities, resulting in limited discriminative power under occlusion or specular reflection [42]
Recent studies indicate that posture normalization is no longer limited to filtering for standard poses; rather, it has shifted toward modeling, correcting, and compensating for non-standard postures. This marks a new stage in sheep body size technology, one that aims to adapt measurement systems to the natural behavior of animals, thereby providing a viable pathway for enhancing algorithmic robustness in complex real-world environments.
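A rigid PCA-based canonicalisation, the simplest member of this family (and deliberately not one of the non-rigid methods in Table 6), can be sketched as follows; the sign convention (head toward +x) would need an extra heuristic in practice:

```python
import numpy as np

def normalize_pose(pts: np.ndarray) -> np.ndarray:
    """Rigid PCA pose-normalization sketch for an (N, 3) cloud.

    Centre the cloud and rotate it so that its principal axes map to
    x (body length), y (width), z (height). This canonicalises yaw and
    mild tilt before landmark extraction; it cannot compensate bent or
    twisted postures, which is what the non-rigid methods address.
    """
    centred = pts - pts.mean(axis=0)
    cov = np.cov(centred.T)
    eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues ascending
    axes = eigvec[:, ::-1]                 # largest-variance axis first
    if np.linalg.det(axes) < 0:            # keep a right-handed frame
        axes[:, -1] *= -1
    return centred @ axes
```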

3.6 Sheep body size parameter calculation

As shown in Fig. 1, sheep body size measurement involves a variety of indicators. In farm management, parameters such as body length, withers height, shank circumference, hip height, chest width, and hip width are commonly used to assess livestock growth and determine whether individuals meet breed or developmental standards, guiding feeding strategies and environmental adjustments. Chest girth and chest depth of sheep are closely associated with cardiopulmonary function and overall health, serving as critical references for disease prevention and selective breeding; additionally, they are key indicators linked to body weight[58].
Compared with 2D methods, 3D body size measurement for sheep offers significant advantages in spatial coordinate completeness. By redefining reference planes and optimizing geometric logic, 3D systems achieve higher measurement precision for sheep body size parameters. This section addresses two core aspects: (1) redefining reference planes or lines for each parameter to enable direct comparability with manual gold standards for sheep; and (2) applying appropriate geometric operators to extract and calculate body size metrics in line with sheep's anatomical features and practical measurement requirements.

3.6.1 Body length

3D sheep body length measurement is based on spatial point cloud data, directly localizing the 3D coordinates of the anterior edge of the shoulder joint and the posterior edge of the ischial tuberosity of the sheep in the world coordinate system, and calculating the spatial straight-line distance between these two points. This fundamentally eliminates the perspective projection error inherent in 2D imaging of the sheep body.
ZHOU et al.[32] proposed a "3D reconstruction + direct feature-point localization" method for sheep, which constructs a 3D point cloud model to accurately locate measurement points and demonstrates higher tolerance to slight postural variations than 2D techniques. SHI et al.[59] proposed a "longitudinal profile fitting + curve integration" approach for pigs, and LI et al.[60] developed a "point cloud partition slicing + DBSCAN clustering" method for cattle; both provide viable technical references for sheep body length measurement. These methods share a core principle: follow the natural torso curvature of the animal and remove outlier points. Adapted to sheep, these strategies can bring automated length measurements closer to the manual standard and enhance robustness under complex physiological conditions.
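Across these approaches, the final computation reduces to a Euclidean norm between two localized landmarks in world coordinates. A minimal sketch in Python (the function name and the landmark coordinates are illustrative, not taken from the cited studies):

```python
import numpy as np

def body_length(shoulder_pt, ischial_pt):
    # straight-line (Euclidean) distance between the anterior shoulder-joint
    # point and the posterior ischial-tuberosity point, in world coordinates
    return float(np.linalg.norm(np.asarray(shoulder_pt, dtype=float)
                                - np.asarray(ischial_pt, dtype=float)))

# e.g. two hypothetical landmarks on a sheep point cloud, in metres
length = body_length([0.10, 0.32, 0.55], [0.82, 0.30, 0.53])
```

Because the distance is computed in 3D world coordinates, it is invariant to the camera viewpoint, which is precisely what eliminates the 2D perspective-projection error.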

3.6.2 Body height

From the perspective of measurement reference and posture adaptability, 3D measurement operates directly on point cloud data, enabling the fitting of an objective ground plane without dependence on hoof positions or image contours of the sheep. Furthermore, by calculating the vertical distance from the withers point to the ground using spatial coordinates, 3D measurement fundamentally eliminates perspective errors inherent in 2D approaches.
Currently, multiple 3D techniques coexist for measuring sheep body height. Among them, ZHANG[61] proposed a "rectangular region segmentation + binocular vision" method for sheep height measurement. This approach combines the positioning accuracy of 3D data with the intuitiveness of 2D contours, using rectangular segmentation to precisely narrow the search area for the sheep's withers point, allowing faster localization of the measurement point than traditional 2D measurement. Moreover, by utilizing the 3D coordinates obtained from binocular vision, it avoids the single-viewpoint perspective bias of 2D images, improving both the accuracy and efficiency of sheep height measurement. Furthermore, the core concepts of 3D measurement techniques developed for other livestock provide valuable insights for sheep height measurement. For instance, LI et al.[60] proposed a "point cloud slicing + chest region localization" method for cattle, whose rationale of efficiently excluding non-measurement areas can be adapted to sheep to mitigate interference from the legs or head during withers-point localization. Similarly, WANG et al.[62] developed a "Euclidean clustering + principal component analysis (PCA)-based posture normalization" method for pigs, which normalizes the pig's point cloud to a standard posture via PCA and fits a corresponding reference plane to calculate the vertical distance, offering a useful posture-correction strategy. When applied to sheep, this approach can keep the measurement reference vertical even with slight body tilting, significantly enhancing the posture adaptability and robustness of sheep height measurement.
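The ground-plane reference described above can be sketched as a least-squares plane fit followed by a point-to-plane distance; the z = ax + by + c parameterization and the function names are illustrative assumptions, not any cited implementation:

```python
import numpy as np

def fit_ground_plane(ground_pts):
    """Least-squares fit of z = a*x + b*y + c to ground points (N x 3 array)."""
    g = np.asarray(ground_pts, dtype=float)
    A = np.c_[g[:, 0], g[:, 1], np.ones(len(g))]
    coeffs, *_ = np.linalg.lstsq(A, g[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def height_above_plane(point, coeffs):
    """Perpendicular distance from a point to the plane a*x + b*y - z + c = 0."""
    a, b, c = coeffs
    x, y, z = point
    return abs(a * x + b * y - z + c) / np.sqrt(a**2 + b**2 + 1.0)
```

Fitting the plane to ground points rather than relying on hoof pixels is what makes the reference objective: the withers height is then the perpendicular distance from the withers point to that plane.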

3.6.3 Shank circumference

Shank circumference (SC) in sheep is defined as the circumference at the upper one-third of the metacarpus on the left foreleg and is a key predictor of growth and body weight (Fig. 1). PAN[24] proposed a segmented-point-cloud circle-fitting method tailored to the shank's cylindrical structure in sheep, but the segmentation is sensitive to posture and noise, and its parameter dependence reduces generality.
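The circle-fitting idea can be illustrated with a generic algebraic (Kåsa) least-squares fit on a 2D cross-sectional slice of the shank; this is a textbook formulation, not PAN's exact algorithm:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D slice points (N x 2)."""
    p = np.asarray(xy, dtype=float)
    x, y = p[:, 0], p[:, 1]
    # solve 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense
    A = np.c_[2 * x, 2 * y, np.ones(len(p))]
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), r

def shank_circumference(xy):
    # circumference of the fitted circle: 2 * pi * r
    _, r = fit_circle(xy)
    return 2.0 * np.pi * r
```

In practice the slice would be cut from the leg point cloud at the defined metacarpus height; the fit's sensitivity to noisy or occluded slices mirrors the limitations noted above.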
Notably, as shown in Table 7, the shank circumference of sheep serves as a key indicator for constructing weight prediction models and plays a significant role in improving the accuracy of weight estimation. Yet constrained by the poor quality of leg point cloud data, as well as its susceptibility to occlusion and motion interference, current research on 3D measurement of sheep's shank circumference remains scarce. The maturity and application adaptability of relevant technical solutions still await further exploration and optimization.
Table 7 The relationship between body weight and body size of sheep
| Animal numbers | Body size indicators | Number of indicators | Year | Work |
| --- | --- | --- | --- | --- |
| 882 | BL, BH, CG, SC, HH, HW, etc. | 7 | 2018 | [63] |
| 94 | BL, BH, CG, SC | 4 | 2018 | [64] |
| 50 | BL, BH, CD, CG, CW, SC, HW, etc. | 9 | 2019 | [65] |
| 706 | BL, BH, CG, SC | 4 | 2020 | [66] |
| 145 | BL, BH, CD, CG, CW, HW | 6 | 2020 | [67] |
| 32 | BL, BH, CD, CG, CW, SC | 6 | 2021 | [68] |
| 136 | BL, BH, CD, CG, CW, SC, SH, HW | 8 | 2021 | [69] |
| 745 | BL, BH, CD, CG, CW, SC, HH | 7 | 2021 | [70] |
| 653 | BL, BH, CD, CG, CW, HW | 6 | 2021 | [71] |
| 408 | BL, BH, CD, CG, CW, SC, etc. | 10 | 2022 | [72] |
| 558 | BL, BH, CG | 3 | 2022 | [73] |
| 12 500 | BL, BH, CD, CG, CW, SC, HH, BW, etc. | 13 | 2022 | [74] |
| 507 | BL, BH, CG, etc. | 5 | 2022 | [75] |
| 56 | BL, BH, CG, SC, HH, HW | 6 | 2022 | [76] |
| 100 | BL, CG, HH, SH | 4 | 2022 | [77] |
| 334 | BL, BH, CG, SC | 4 | 2023 | [78] |
| 1 289 | BL, BH, CD, CG, CW, SC | 6 | 2023 | [58] |
| 150 | BL, CG, SH | 3 | 2023 | [79] |
| 210 | BL, CD, CG, CW, HH, BH, etc. | 7 | 2023 | [80] |
| 239 | BL, BH, CD, CG, CW, etc. | 6 | 2024 | [81] |
| 916 | BL, BH, CG, SC | 4 | 2024 | [82] |
| 239 | BL, BH, CG, CW, SC, etc. | 6 | 2024 | [83] |
| 100 | BL, CG, HH, HW, AD, SW, etc. | 14 | 2024 | [84] |

Note: BL denotes body length; BH, body height; CD, chest depth; CG, chest girth; CW, chest width; SC, shank circumference; HH, hip height; HW, hip width; SH, shoulder height; AD, abdominal dimension.

3.6.4 Hip height

3D measurement, based on point cloud data, can directly fit the objective ground or a preset spatial reference line for sheep without relying on their hoof contours. Moreover, 3D measurement can lock the hip region of sheep through spatial segmentation and slice analysis. Even in the presence of local occlusion, the measurement point information can be completed using neighbouring point cloud features, thereby fundamentally reducing the limitations of a single-view 2D approach.
Currently, various technical approaches have been developed for 3D hip‑height measurement. ZHANG[61] proposed a "binocular vision + rectangular segmentation" method, narrowing the search region for hip feature points in sheep through rectangular segmentation, which enhances efficiency while minimizing background interference compared to the full‑image traversal typically used in 2D methods. LI et al.[60] introduced a "point cloud segmentation + slice analysis" technique for cattle, whose core technical rationale can be adapted to sheep measurement. This approach does not depend solely on locating a single measurement point, thereby offering significantly greater tolerance to local morphological variations, such as slight hip tilt in livestock. Furthermore, ZHOU et al.[23] presented an "envelope analysis + reference line" method, accurately identifying the highest point of the sacrum in sheep via envelope analysis and computing the vertical distance using a spatial reference line. This strategy not only aligns with the definition of hip height as a vertical spatial distance but also effectively avoids 2D perspective distortion, while simultaneously improving the localization accuracy of measurement points on the sheep body.
Overall, while 3D hip height measurement addresses the key pain points of 2D methods, several challenges remain to be resolved. Firstly, certain schemes depend on extreme-point localization, which is prone to misjudgment due to hair or occlusion. Secondly, parameter settings often rely on empirical values, lacking adaptive capability. Finally, high completeness of the sheep hip point cloud is required, and sparse point clouds can easily compromise measurement accuracy. Future 3D hip height measurement research should focus on improving the robustness of measurement-point localization and enhancing parameter self-adaptation to better meet the demands of practical farming scenarios.

3.6.5 Chest depth

3D chest depth measurement for sheep, based on spatial point cloud data, can directly isolate the thoracic region through filtering and clustering, eliminating dependence on planar contours. Furthermore, it acquires chest depth data based on the vertical correspondence between the sheep's withers point and the sternum point via spatial coordinates, fundamentally eliminating the misalignment of measurement points caused by 2D perspective projection.
The study in ref. [28] introduced a "point cloud filtering + interval averaging" method: the chest-region point cloud of the sheep is extracted and evenly divided into intervals, and the vertical coordinates of the upper and lower boundaries of each interval are computed. Averaging these coordinates determines the upper and lower body-depth measurement points, and the absolute difference of their vertical coordinates gives the body depth. In comparison, MA[85] proposed a hierarchical clustering feature point detection (HCFPD) method for sheep, which autonomously partitions the point cloud into feature regions using a hierarchical clustering algorithm. It directly locates the highest point of the scapula and the measurement point at the bottom of the sternum, with the Euclidean distance between them defined as chest depth. This approach requires no manual threshold adjustment and can adaptively accommodate different thoracic morphologies of sheep based on point cloud distribution, offering greater flexibility. However, its performance relies on accurate identification of chest-depth feature points via hierarchical clustering, which may be compromised when key anatomical regions are poorly represented in the point cloud.
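The interval-averaging procedure can be sketched as follows; the axis convention (x along the body, z vertical) and the default bin count are assumptions made for illustration, not details from ref. [28]:

```python
import numpy as np

def chest_depth(chest_pts, n_bins=10):
    """Interval-averaging chest depth from a chest-region point cloud (N x 3).

    Splits the region into equal intervals along the body axis (x), takes the
    upper and lower vertical boundary (z) of each interval, averages the two
    sets of boundaries, and returns the absolute difference of the averages.
    """
    p = np.asarray(chest_pts, dtype=float)
    edges = np.linspace(p[:, 0].min(), p[:, 0].max(), n_bins + 1)
    tops, bottoms = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = p[(p[:, 0] >= lo) & (p[:, 0] <= hi)]
        if len(sel):
            tops.append(sel[:, 2].max())
            bottoms.append(sel[:, 2].min())
    return abs(float(np.mean(tops)) - float(np.mean(bottoms)))
```

Averaging over intervals rather than taking a single extreme point damps the influence of isolated outliers, though, as noted below, irregular thoracic shapes can still bias the averaged boundaries.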
Overall, although 3D methods outperform 2D approaches in regional screening efficiency and morphological adaptability, they still face new challenges. First, certain schemes rely on empirical parameters, making it difficult to adapt to breeds with large body-type variation; moreover, when the thoracic structure is irregular, the averaged measurement may deviate from actual boundaries. Second, clustering-based approaches depend on point cloud density and spatial distribution characteristics; if the thoracic point cloud is sparse, feature points may be missed or misidentified. Future 3D sheep chest depth measurement research should focus on enhancing parameter self-adaptation mechanisms and improving robustness under sparse point cloud conditions to further strengthen the universality and stability of the technology.

3.6.6 Chest width

3D chest width measurement for sheep, based on spatial point cloud data, can directly locate the spatial coordinates of the posterior edges of the scapulae on both sides of the chest without relying on planar projection, thereby fundamentally eliminating errors introduced by 2D symmetry assumptions and perspective projection.
MA[85] proposed the HCFPD method for sheep, which autonomously delineates regions within the point cloud via hierarchical clustering. It directly extracts the feature point at the posterior edge of the scapula and calculates the Euclidean distance. This approach requires no manual threshold tuning, adapts to the thoracic morphology of sheep with varying body sizes, and ensures the located measurement points conform to the biological definition of chest width. PAN[24] introduced a "top‑view point cloud projection + segment‑wise statistical analysis" method for calculating sheep chest width. Compared to traversing width curves across a full 2D image, this method more intuitively reflects the horizontal distribution characteristics of chest width and minimizes interference from background point clouds. ZHANG[61] proposed a "binocular vision + rectangular segmentation" method for sheep, defining the measurement region through rectangular segmentation. After locating the anterior edge of the scapula and the withers point of sheep, chest width is derived based on geometric parameters. This solution combines the localization accuracy of 3D data with the intuitive nature of 2D contours, thereby reducing detection difficulty of sheep chest width in occlusion‑prone conditions.
Overall, although 3D sheep chest width measurement demonstrates clear superiority over 2D approaches in morphological adaptability and occlusion resistance, several challenges remain. First, some methods depend heavily on the quality of point cloud clustering, with sparse point clouds or noise often leading to feature-point misidentification. Second, derivation schemes based on the symmetry assumption of the sheep's body can suffer accuracy loss in asymmetrical chest scenarios. Future developments in 3D sheep chest width measurement should focus on improving clustering robustness, enabling self-adaptive parameter adjustment, and overcoming the limitations of symmetry assumptions to support a wider range of measurement scenarios.

3.6.7 Hip width

3D measurement of sheep hip width calculates the maximum horizontal distance between the left and right ischial tuberosities using spatial coordinates, eliminating the need to rely on planar contours. This fundamentally removes errors caused by 2D perspective distortion and postural variation.
ZHOU et al.[32] proposed a "3D reconstruction + sacral feature point localization" method for sheep, which autonomously segments the sacral region from a complete 3D point cloud of the sheep and then selects bilateral feature points on the back for Euclidean distance calculation. Since the sacrum serves as a stable anatomical landmark for the hip with clear biological definition, this method effectively reduces interference from soft tissues, such as hair and fat, during measurement point localization. WANG et al.[62] developed a "point cloud clustering + plane fitting" method for pigs, whose core technical rationale can be adapted for sheep hip width measurement. By constructing an objective measurement reference through clustering and plane fitting, this approach enables posture correction based on the fitted reference plane even when the sheep exhibits slight body tilt, thereby effectively mitigating the impact of posture changes on hip width calculation.
In summary, 3D technology demonstrates distinct advantages in sheep hip width measurement, including precise localization, strong robustness against interference, and excellent posture adaptability. The related technical pathways also provide diverse insights for achieving accurate hip width measurement in sheep.

3.6.8 Chest girth

Chest girth is a key body size indicator reflecting thoracic volume and overall body development in sheep. It is typically defined as the circular perimeter at the widest part of the chest, usually located just below the posterior edge of the scapula, when the animal is in a natural standing posture, with the measurement plane oriented perpendicular to the direction of gravity (Fig. 1).
MA[85] proposed a "circumcircle method based on three measurement points", which constructs a triangle from three selected points on the sheep's chest circumference and approximates the chest circumference by the perimeter of the triangle's circumcircle, whose radius is derived from the side lengths and the triangle area given by Heron's formula. This approach does not require a complete thoracic point cloud, making it feasible even under partial occlusion or with sparse data, thereby overcoming the traditional dependency on a full contour. However, its accuracy relies heavily on the appropriate selection of the three points; uneven distribution can lead to significant approximation errors. PAN[24] introduced a "lateral-view point cloud projection with ellipse fitting" method for sheep, which better matches the approximately elliptical physiological shape of the sheep's thoracic cavity than circular fitting, offering higher precision and effectively excluding interference from non-thoracic regions. Nevertheless, lateral projection is sensitive to sheep posture, and incomplete point clouds can degrade the fitting performance. LI et al.[60] developed a "bidirectional tomographic slice segmentation (BTSS) + box-elliptic" method for cattle, following a progressive logic of "segmentation - rectangular constraint - ellipse fitting". This ensures that the fitted ellipse tightly encloses the thoracic contour while reducing local noise interference, demonstrating stronger adaptability for individual sheep with slight thoracic irregularities.
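The three-point circumcircle idea can be reconstructed generically: Heron's formula gives the triangle's area K, the circumradius follows as R = abc/(4K), and the girth is approximated by the circumcircle's perimeter. This is a sketch of the principle, not MA's published code:

```python
import math

def circumcircle_girth(p1, p2, p3):
    """Approximate chest girth from three chest-surface points (3D tuples)."""
    # side lengths of the triangle formed by the three measurement points
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    R = a * b * c / (4.0 * area)                        # circumradius
    return 2.0 * math.pi * R                            # circumcircle perimeter
```

The formula degenerates when the three points are nearly collinear (area approaching zero), which is the geometric reason why an uneven point selection produces large approximation errors.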
In summary, various 3D technical approaches can effectively estimate sheep chest circumference. While each method has its distinct technical strengths and suitable application scenarios, they are all constrained by factors such as point cloud quality and algorithmic parameter settings. Future optimization of 3D sheep chest girth measurement technologies should focus on addressing the dual challenges of unsupervised automatic measurement-point localization and robust fitting for sparse or incomplete point clouds, thereby enhancing measurement stability in complex scenarios and facilitating the transition of this technology from laboratory research to practical livestock production.

4 2D-3D integrated livestock body size measurement

The 2D-3D integration approach aims to leverage the high-resolution semantic information of RGB imagery together with the geometric completeness of point clouds, offering near-3D-level sheep body measurement accuracy at lower cost. Current mainstream solutions typically follow a generalized framework in which 2D images are used for keypoint detection, the corresponding 3D coordinates are obtained via back-projection, and geometric computations are subsequently performed[86]. Table 8 compares the accuracy of 2D, 3D, and 2D-3D fusion methods for body measurement in sheep and other livestock.
Table 8 Accuracy data comparison of 2D, 3D, and 2D-3D body measurement methods in sheep and other livestock
| Subjects | Methods | Animal numbers | Body traits & accuracy | Year | Work |
| --- | --- | --- | --- | --- | --- |
| Sheep | 2D | 27 | BL: 2.03%; BH: 1.13%; CD: 4.45%; CW: 2.25%; HH: 1.54%; HW: 2.41% | 2018 | [5] |
| Cattle | 2D | NA | BL: 0.06%; BH: 2.28% | 2020 | [7] |
| Sheep | 2D | 55 | BL: 2.42%; BH: 1.86%; HH: 2.07%; CD: 2.72% | 2022 | [11] |
| Cattle | 2D | 30 | BL: 7.55%; BH: 6.75%; CD: 8.00%; SC: 8.97% | 2024 | [13] |
| Sheep | 3D | 1 | BL/BH/CD/HH: 2.36% | 2020 | [34] |
| Cattle | 3D | 103 | BL/BH/CDP/HH: <3% | 2020 | [35] |
| Sheep | 3D | 239 | BL: 0.75 cm; HW: 0.38 cm; BH: 1.23 cm; SW: 0.33 cm; HH: 0.66 cm | 2021 | [31] |
| Cattle | 3D | 10 | BL: 2.3%; BH: 2.8%; CDM: 2.6%; AD: 2.8%; SW: 1.6% | 2022 | [36] |
| Sheep | 3D | 2 | BOL/BH/HH/HW: <5% | 2024 | [40] |
| Sheep | 3D | 24 | BH: 1.67%; CW: 3.63%; HH: 1.14%; BL: 2.71%; CDM: 3.57%; HW: 3.71% | 2025 | [42] |
| Cattle | 2D-3D | NA | BL: 2.14%; BH: 0.76%; HH: 0.76% | 2022 | [86] |
| Pig | 2D-3D | NA | BL: 2.33%; BH: 1.92%; SW: 1.29%; HW: 1.26% | 2021 | [87] |
| Horse | 2D-3D | 80 | BH/BL/HH/CG/AD: <2.29% | 2024 | [88] |
In summary, 2D body measurement methods offer advantages in cost and ease of use, but for sheep body measurement, their accuracy is considerably affected by imaging conditions and sheep posture variation. In contrast, 3D methods provide significantly higher accuracy for sheep body measurement, but involve greater equipment costs and computational complexity. The integration of 2D and 3D approaches strikes a balance between cost, accuracy, and operational feasibility, achieving high measurement accuracy under different conditions by combining the advantages of both methods. However, as the data compiled in Table 8 show, no 2D-3D fusion solution for sheep body dimension measurement has been reported in the literature to date. Such integrated approaches are expected to play a more significant role in the future of smart farming and automated body measurement for sheep, particularly in application scenarios that demand high accuracy and robustness.

4.1 Integration framework and key technical workflow

Most existing 2D-3D integration studies follow the steps of joint calibration, synchronized acquisition, 2D keypoint extraction, and 3D mapping[88]. Joint calibration is performed once via rigid mounting of the RGB and depth sensors to determine intrinsic and extrinsic camera parameters, ensuring the re-projection error remains within a defined pixel threshold. Temporal alignment is achieved using either hardware triggering or synchronized frame indexing to avoid motion blur-induced landmark shifts. 2D keypoints are detected in the RGB imagery; these pixel coordinates are back-projected into camera space using the intrinsic parameters and associated depth values, and then transformed into world coordinates via the extrinsic parameters, completing the 2D-3D mapping. As shown in Fig. 13, the schematic diagram illustrates the principle of mapping 2D keypoints to 3D space, drawn with reference to the 2D-3D fusion keypoint projection logic of ref. [86]. The original study applied this mapping principle to keypoint localization for pig body measurement; here, considering the morphological characteristics of sheep (such as a more compact body and limb proportions distinct from those of pigs and cattle), the keypoint selection range and the coordinate-transformation settings are adjusted to suit sheep body measurement.
Fig.13 Schematic diagram of 2D-to-3D keypoint projection for sheep body measurement

(Legend: camera viewpoint; 2D pixel point; 3D point)

After obtaining the keypoint P in the RGB image via a two-dimensional keypoint identification method, the camera's intrinsic parameters are used to establish a mapping between P and its corresponding three-dimensional point Pc in the camera space. This process serves as the critical bridge for achieving 2D-3D fusion in body size measurement[86]. The relationship between P and Pc is presented in Equation (1).
$$
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{1}
$$
Where (u, v) are the pixel coordinates of P; (c_x, c_y) are the pixel coordinates of the 2D image center, a calibrated intrinsic parameter of the camera; f is the camera's focal length; and (X_c, Y_c, Z_c) are the 3D coordinates of P_c in the camera coordinate system. Given the depth value Z_c, the remaining camera-space coordinates (X_c, Y_c) are recovered by inverting the intrinsic matrix, and the camera-space coordinates are then converted to world coordinates via the extrinsic parameters to complete the calculation of sheep body measurements.
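With the depth Z_c known from the depth sensor, Equation (1) can be inverted to recover the camera-space point; a minimal sketch (the function names and parameter values in the usage are illustrative):

```python
import numpy as np

def backproject(u, v, depth, f, cx, cy):
    """Invert Equation (1): pixel (u, v) plus depth Zc -> camera-space (Xc, Yc, Zc)."""
    Zc = depth
    Xc = (u - cx) * Zc / f
    Yc = (v - cy) * Zc / f
    return np.array([Xc, Yc, Zc])

def project(Xc, Yc, Zc, f, cx, cy):
    """Forward mapping of Equation (1): camera-space point -> pixel coordinates."""
    return f * Xc / Zc + cx, f * Yc / Zc + cy

# illustrative intrinsics: f = 500 px, image center (320, 240)
pt_cam = backproject(400, 300, 2.0, 500.0, 320.0, 240.0)
```

A further rigid transform with the calibrated extrinsic rotation and translation would then carry the camera-space point into world coordinates.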

4.2 Keypoint detection in 2D-3D integration

In 2D-3D fusion frameworks, accurate 2D keypoint extraction is the core factor influencing overall measurement precision. Researchers have proposed a variety of technical solutions tailored to different livestock species for 2D keypoint recognition. LU et al. [89] used a YOLOv4 model to detect regions of varying sizes in 2D images, training with randomly initialized parameters to identify head, rump, and body areas for pig keypoint extraction. ZHAO et al. [90] applied YOLOv5 for cattle landmark detection, followed by Canny edge detection and contour extraction to obtain the relevant anatomical outlines, then used polynomial fitting and three-point arc curvature algorithms to identify keypoints. XU et al. [87] employed convex hull and maximum external contour analysis to extract pigs' head-tail, shoulder-rump, and height-related landmarks. DU et al. [86] used manually annotated body parts of pigs to train DeepLabCut models for keypoint detection, enabling high landmark accuracy for small-sample or species-specific cases. In addition, targeting the demand for Mongolian horse body measurement keypoint detection, LI et al. [88] enhanced YOLOv8n-pose by incorporating deformable convolution in the C2f module, adding shuffle attention, optimizing the loss function, and dynamically tuning learning rates via cosine annealing. The resulting DSS-YOLO model achieved 92.5% average precision, significantly improving accuracy for specialized species and providing technical reference for less widely studied species such as sheep.
From the perspective of technological development, current 2D keypoint extraction has established a technical roadmap characterized by "object detection models as the primary approach, supplemented by traditional image processing methods". This framework aligns with the overall workflow of 2D-3D fusion and provides a foundational technical reference for understudied species such as sheep. Among the various methods, the YOLO series models (v4, v5, v8) have become the mainstream backbone architecture due to their superior object localization capabilities, offering rapid and accurate measurement point inputs for subsequent 3D mapping[88-90]. Some studies have further optimized keypoint capture under complex postures or occlusion by integrating modules such as deformable convolutions and attention mechanisms, thereby reducing the impact of keypoint drift on 3D mapping accuracy.
However, current 2D keypoint extraction methods still face several challenges that affect the practical application of 2D-3D fusion solutions in livestock body measurement. Most models are trained and optimized for specific species like pigs, cattle, or horses, lacking cross-species adaptability. When directly applied to less-studied species such as sheep, or to individuals at different growth stages, detection accuracy tends to decline significantly, which in turn compromises the accuracy of subsequent 3D mapping. Furthermore, existing methods demonstrate limited robustness against real-world disturbances. In practical farming environments-where issues such as sudden lighting changes, hair occlusion, or cluttered backgrounds are common-both the part recognition performed by object detection models and the contour extraction executed by traditional algorithms are prone to keypoint drift or misjudgment. This ultimately leads to deviations in the 2D-to-3D mapping process.

5 Challenges and prospects

5.1 Building large-scale and high-quality datasets

5.1.1 Limitations of existing datasets

Publicly available datasets for sheep body size measurement remain scarce and exhibit the following limitations.
(1) Limited sample size. Most datasets used in current studies contain only dozens to a few hundred samples, which is insufficient to represent the diversity across breeds, growth stages, and environmental conditions.
(2) Lack of diversity. Existing datasets predominantly focus on a single breed or a specific growth stage, with limited coverage of different breeds, rearing environments (e.g., indoor vs. pasture), and varying illumination or background conditions. For instance, although the dataset of ref. [17] included a relatively large sample size (350 individuals), it was restricted to a single breed and lacked cross-breed variability.
(3) Inconsistent annotation standards. The accuracy and consistency of annotations significantly influence model performance. However, most datasets are annotated independently by research teams without unified protocols. For example, WEI et al.[42] performed deep learning-based sheep body segmentation, where subjectivity in the manual annotation process may introduce slight deviations into the labels.
(4) Scarcity of open datasets. Despite references to dataset construction in the literature, very few datasets are publicly accessible. Most datasets are only used internally by research teams for model training and validation, lacking an external sharing mechanism, which restricts the verification of various innovative sheep body measurement models.

5.1.2 Impact of small-sample datasets on model generalization

The prevalence of small-sample datasets in sheep body size measurement research adversely affects model generalization in several ways.
(1) Overfitting risk. Models trained on small datasets are prone to overfitting, performing well on training data but poorly on unseen data.
(2) Limited generalization capability. Small datasets fail to capture the full variability of real-world farming conditions. As a result, models often underperform when exposed to new environments, lighting variations, or animal postures.
(3) Challenges in model optimization. Small-sample datasets restrict the optimization space of sheep body size measurement models.

5.1.3 Recommendations for future dataset development

In the future, it is necessary to construct 2D and 3D datasets for sheep, covering data of different breeds, growth stages and environmental conditions, to improve the robustness and adaptability of the models.
When collecting image data, manually measured body size, body weight, age, and other metadata should be recorded in a matched manner, forming a multimodal fusion dataset.
A standardized annotation workflow should be formulated that clarifies keypoint definitions, annotation tools, and annotation accuracy requirements, to avoid subjective annotation deviations. Automatic pre-annotation combined with manual correction is recommended: traditional geometric algorithms pre-annotate the data, and manual revision then corrects residual annotation deviations.

5.2 Application of deep learning methods

As the core foundation of sheep body size measurement, 2D keypoint detection currently adopts YOLO-series models[88-90] as the mainstream framework. To further meet the needs of practical sheep farming, it is necessary to optimize the network structure and apply model compression, pruning, quantization, and related techniques. These optimizations aim to reduce computational load and parameter counts while maintaining high accuracy, ensuring better adaptability to on-site deployment in pastures.
Current 3D point cloud filtering mainly relies on combinations of traditional algorithms, such as pass-through plus statistical filtering[44] and improved k-NN[32]. These methods require manual parameter adjustment and transfer poorly across breeds. Although some studies have applied PointNet++ to point cloud filtering[43], its potential in sheep body size measurement scenarios has not been fully exploited. In the future, optimization of local feature extraction targeted at the characteristics of sheep point clouds (e.g., sparsity and wool interference) could further improve filtering accuracy.
3D registration still primarily relies on heuristic algorithms such as ICP [40]. While these algorithms achieve relatively high accuracy, they are sensitive to initial positions, dependent on point cloud overlap, and easily affected by equipment displacement and sheep posture changes. In the future, deep learning-based registration algorithms such as robust point matching network (RPMnet) [91] and GeoTransformer[92] can be introduced. These models can realize non-rigid registration by learning point cloud features, reducing reliance on initial positions and overlap. This will enhance adaptability to sheep posture changes and local occlusions, thereby optimizing the alignment effect of multi-view point clouds.
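ICP's sensitivity to initialization stems from its alternation between nearest-neighbour correspondence and a closed-form rigid update. One iteration can be sketched with brute-force matching and a Kabsch (SVD) alignment; this is a textbook formulation, not the code of any cited system:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour matching + Kabsch rigid alignment.

    src (N x 3) and dst (M x 3) are point arrays. Returns the transformed
    source points together with the rotation R and translation t applied.
    """
    # brute-force nearest-neighbour correspondences (fine for small clouds)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matched = dst[d2.argmin(axis=1)]
    # Kabsch: optimal rotation between the centred point sets via SVD
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps R a proper rotation (det(R) = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t, R, t
```

Because the correspondence step trusts whatever nearest neighbours the current pose produces, a poor initial pose yields wrong matches and convergence to a local minimum, which is exactly the weakness that learned registration methods such as RPMnet and GeoTransformer aim to remove.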
Current mainstream algorithms for 3D body segmentation include deep learning models such as PointNet++[42] and improved PointVector++[52], as well as traditional methods like region growing[34] and slice clustering[36]. With the development of algorithms such as PointTransformer[93] and PointNeXt[94], these newer models focus more on local feature extraction, enabling better adaptation to the irregular body conditions of sheep and thus providing further insights for sheep body size measurement.

5.3 Large-scale application and deployment

(1) Develop lightweight models. To meet the needs of practical sheep farming, further in-depth research should be conducted on computer vision-based body size measurement methods with a focus on lightweight design. Through techniques such as model compression, quantization, and pruning, the computational complexity and storage requirements of the models can be reduced, enabling them to operate efficiently in resource-constrained environments.
(2) To address the scarcity of research on body size measurement for smaller sheep body parts such as shank circumference, future efforts need to enhance the ability to recognize and measure curved structures from aspects including posture standardization, 3D segmentation optimization, and multi-modal data fusion, thereby improving the measurement accuracy of curved data.
(3) In existing studies, the body size measurement technology based on 2D and 3D data fusion has been effectively verified and demonstrated excellent performance in livestock such as cattle and pigs, providing important technical references and practical insights for sheep body size measurement. In future research, it is imperative to promote the integrated application and adaptive optimization of such low-cost, high-robustness multimodal vision systems in sheep farming scenarios.
(4) Cross-species technological exchange. Sheep body size measurement can benefit from knowledge and techniques developed for other livestock, such as cattle and pigs. Future collaboration across animal measurement research communities, through data sharing, experience exchange, and joint technology development, can drive mutual progress. Adopting advanced methods from other species could help address specific challenges in sheep measurement, thereby improving both accuracy and efficiency.

6 Conclusion

This paper reviewed three major non-contact measurement approaches: 2D image-based body size measurement, 3D point cloud-based body size measurement, and integrated 2D-3D morphometry, and discussed their applications in livestock such as sheep, pigs, and cattle. The individual 2D and 3D techniques have been explored in sheep as well as in other livestock including cattle and pigs, where the mature technologies developed for those species provide valuable insights for sheep body size measurement. In contrast, although integrated 2D-3D morphometry has been validated to balance accuracy and cost-effectiveness in cattle, pigs, and horses, no reports of its application to sheep body size measurement have been found, which points to further research directions for this field. The implementation methods of 2D-3D integrated measurement were introduced, followed by a detailed comparative analysis of measurement accuracy across 2D and 3D image processing techniques, as well as landmark detection strategies for both modalities. A comprehensive comparison of existing body size measurement methods was also conducted.
Existing works remain predominantly based on small datasets, and research on measuring small body parts of sheep such as shank circumference is even scarcer. The limitations of traditional 2D image processing algorithms were analyzed, particularly their weak generalization and adaptability to new imaging scenarios or modalities, which often necessitate complete algorithm redesign. Trends toward combining deep learning with traditional methods in 2D processing were highlighted, alongside the sensitivity of conventional 3D registration algorithms to initialization and the emerging potential of deep learning for 3D reconstruction, segmentation, and landmark localization.
Challenges in constructing accurate 3D point cloud models for sheep were discussed, along with prospects for this field. As deep learning, computer vision, and 3D reconstruction technologies are increasingly applied in livestock body size measurement research, sheep body size measurement technology can draw on and co-evolve with relevant technologies for other animals. Ultimately, how to further promote the practical application and implementation of intelligent sheep farming remains a direction for future research, requiring a balanced consideration of modeling precision, algorithmic real-time performance, computational efficiency, and overall system cost in real-world scenarios.

All authors declare no competing interests.

[1]
MA W H, QI X Y, SUN Y, et al. Computer vision-based measurement techniques for livestock body dimension and weight: A review[J]. Agriculture, 2024, 14(2): 306.

[2]
QIAO Y L, KONG H, CLARK C, et al. Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation[J]. Computers and Electronics in Agriculture, 2021, 185: 106143.

[3]
MENESATTI P, COSTA C, ANTONUCCI F, et al. A low-cost stereovision system to estimate size and weight of live sheep[J]. Computers and Electronics in Agriculture, 2014, 103: 33-38.

[4]
JIANG J, ZHOU L N, LI G. Sheep body size measurement based on computer vision[J]. Journal of Computer Applications, 2014, 34(3): 846-850, 887.

[5]
ZHANG A L, WU B P, WUYUN C T, et al. Algorithm of sheep body dimension measurement and its applications based on image analysis[J]. Computers and Electronics in Agriculture, 2018, 153: 33-45.

[6]
SHI C, ZHANG J L, TENG G H. Mobile measuring system based on LabVIEW for pig body components estimation in a large-scale farm[J]. Computers and Electronics in Agriculture, 2019, 156: 399-405.

[7]
SHI W, ZHANG S Q. Application of edge-based image segmentation in cow body measurement[J]. Digital Technology & Application, 2020, 38(2): 48-51.

[8]
ZHANG Y, SUN Z J, ZHANG C, et al. Body weight estimation of yak based on cloud edge computing[J]. EURASIP Journal on Wireless Communications and Networking, 2021, 2021(1): 6.

[9]
YE W S, KANG X, HE Z J, et al. Automatic measurement of multi-posture beef cattle body size based on depth image[J]. Smart Agriculture, 2022, 4(4): 144-155.

[10]
WOODWARD-GREENE M J, KINSER J M, SONSTEGARD T S, et al. PreciseEdge raster RGB image segmentation algorithm reduces user input for livestock digital body measurements highly correlated to real-world measurements[J]. PLoS One, 2022, 17(10): e0275821.

[11]
LI K Q, TENG G F. Study on body size measurement method of goat and cattle under different background based on deep learning[J]. Electronics, 2022, 11(7): 993.

[12]
AI B, LI Q. SOLOv2-based multi-view contactless bovine body size measurement[J]. Journal of Physics: Conference Series, 2022, 2294(1): 012011.

[13]
LI R, WEN Y C, ZHANG S J, et al. Automated measurement of beef cattle body size via key point detection and monocular depth estimation[J]. Expert Systems with Applications, 2024, 244: 123042.

[14]
WENG Z, FAN Q, ZHENG Z Q. Automatic measurement method of beef cattle body size based on multimodal image information and improved instance segmentation network[J]. Smart Agriculture, 2024, 6(4): 64-75.

[15]
BAI M Y, XUE H R, JIANG X H, et al. Body size measurement of sheep based on machine vision[J]. DEStech Transactions on Computer Science and Engineering, 2018. DOI: 10.12783/dtcse/icmsie2017/18641.

[16]
QIN Q, ZHANG C Y, LAN M X, et al. Machine vision analysis of ujumqin sheep's walking posture and body size[J]. Animals, 2024, 14(14): 2080.

[17]
QIN Q, DAI D L, ZHANG C Y, et al. Identification of body size characteristic points based on the Mask R-CNN and correlation with body weight in Ujumqin sheep[J]. Frontiers in Veterinary Science, 2022, 9: 995724.

[18]
BELLO R W, MOHAMED A S A, TALIB A Z. Contour extraction of individual cattle from an image using enhanced mask R-CNN instance segmentation method[J]. IEEE Access, 2021, 9: 56984-57000.

[19]
ZHAO H, MAO R, LI M, et al. SheepInst: A high-performance instance segmentation of sheep images based on deep learning[J]. Animals, 2023, 13(8): 1338.

[20]
XIE B, JIAO W P, WEN C K, et al. Feature detection method for hind leg segmentation of sheep carcass based on multi-scale dual attention U-Net[J]. Computers and Electronics in Agriculture, 2021, 191: 106482.

[21]
WANG W, WANG F S, ZHANG W, et al. Sheep behavior recognition method based on improved YOLOv8s[J]. Transactions of the Chinese Society for Agricultural Machinery, 2024, 55(7): 325-335, 344.

[22]
ZHOU Y Q, XUE H R, BAI J, et al. Detection method of sheep body size parameters measurement points based on image[J]. Journal of Inner Mongolia Agricultural University (Natural Science Edition), 2024, 45(2): 69-77.

[23]
ZHOU Y Q, XUE H R, JIANG X H, et al. Non-contact measurement of sheep body size based on multi-scale Retinex image enhancement[J]. Journal of China Agricultural University, 2018, 23(9): 156-165.

[24]
PAN T R. Research on key technologies for non-stress three-dimensional body size measurement in sheep[D]. Chongqing: Chongqing University of Technology, 2023.

[25]
RIEKERT M, KLEIN A, ADRION F, et al. Automatically detecting pig position and posture by 2D camera imaging and deep learning[J]. Computers and Electronics in Agriculture, 2020, 174: 105391.

[26]
ZHANG A L N, WU B P, JIANG C X H, et al. Development and validation of a visual image analysis for monitoring the body size of sheep[J]. Journal of Applied Animal Research, 2018, 46(1): 1004-1015.

[27]
SHEN Y H. Research and implementation of sheep's body size parameters measurement system based on binocular stereo vision[D]. Hohhot: Inner Mongolia Agricultural University, 2019.

[28]
J M. Research on 3D reconstruction and weight estimation of sheep based on depth camera[D]. Shihezi: Shihezi University, 2023.

[29]
LI Z B, SUN H X, GUO Q N, et al. Research progress on machine vision technology for non-contact body measurement of large livestock[J]. Transactions of the Chinese Society of Agricultural Engineering, 2025, 41(7): 1-12.

[30]
LU M Z, GUANG E Y, CHEN Z K, et al. Automatic measurement method of goat body size based on double vision angle camera image[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(8): 286-295.

[31]
ZHANG J L, ZHUANG Y R, JI H Y, et al. Pig weight and body size estimation using a multiple output regression convolutional neural network: A fast and fully automatic method[J]. Sensors, 2021, 21(9): 3218.

[32]
ZHOU Y Q, XUE H R, WANG C L, et al. Reconstruction and body size detection of 3D sheep body model based on point cloud data[C]// Computer and Computing Technologies in Agriculture XI. Cham: Springer, 2019: 251-262.

[33]
ZHANG C G, LEI J H, CHEN Y, et al. Measurement application of body size parameters of dairy cow based on machine binocular vision[J]. Application of Electronic Technique, 2020, 46(6): 59-62.

[34]
MA X L, XUE H R, ZHOU Y Q, et al. Point cloud segmentation and measurement of the body size parameters of sheep based on the improved region growing method[J]. Journal of China Agricultural University, 2020, 25(3): 99-105.

[35]
RUCHAY A, KOBER V, DOROFEEV K, et al. Accurate body measurement of live cattle using three depth cameras and non-rigid 3-D shape recovery[J]. Computers and Electronics in Agriculture, 2020, 179: 105821.

[36]
LI J W, LI Q F, MA W H, et al. Key region extraction and body dimension measurement of beef cattle using 3D point clouds[J]. Agriculture, 2022, 12(7): 1012.

[37]
LI G X, LIU X L, MA Y F, et al. Body size measurement and live body weight estimation for pigs based on back surface point clouds[J]. Biosystems Engineering, 2022, 218: 10-22.

[38]
ZHAO Y L, ZENG F G, JIA N, et al. Rapid measurements of pig body size based on DeepLabCut algorithm[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(2): 249-255, 292.

[39]
HU H, YU J C, YIN L, et al. An improved PointNet++ point cloud segmentation model applied to automatic measurement method of pig body size[J]. Computers and Electronics in Agriculture, 2023, 205: 107560.

[40]
FAN C H, CHENG M, YUAN H B, et al. Reconstruction method of the 3D model for sheep based on multi-angle Kinect v2[J]. Journal of Chinese Agricultural Mechanization, 2024, 45(3): 189-197.

[41]
YU M J, ZHANG L N, WEI Y X, et al. Automatic measurement method for sheep body dimensions based on posture compensation[J]. Signal, Image and Video Processing, 2025, 19(12): 1025.

[42]
WEI Y X, ZHANG L N, YANG F, et al. Automatic measurement method of sheep body size based on 3D reconstruction and point cloud segmentation[J]. Computers and Electronics in Agriculture, 2025, 239: 110978.

[43]
DAI W J, LIANG Y D C, ZHANG J H, et al. 3D Reconstruction of dairy sheep using improved PointNet++ and local point cloud overlap[J]. Transactions of the Chinese Society of Agricultural Engineering, 2025, 41(23): 171-181.

[44]
LI J W, MA W H, LI Q F, et al. Automatic acquisition and target extraction of beef cattle 3D point cloud from complex environment[J]. Smart Agriculture, 2022, 4(2): 64-76.

[45]
NA M H, CHO W H, KIM S K, et al. Automatic weight prediction system for Korean cattle using Bayesian ridge algorithm on RGB-D image[J]. Electronics, 2022, 11(10): 1663.

[46]
LIU D, HE D J, NORTON T. Automatic estimation of dairy cattle body condition score from depth image using ensemble model[J]. Biosystems Engineering, 2020, 194: 16-27.

[47]
ZHANG X Y, LIU G, JING L, et al. Automatic extraction method of cow's back body measuring point based on simplification point cloud[J]. Transactions of the Chinese Society for Agricultural Machinery, 2019, 50(S1): 267-275.

[48]
CHENG Y Q, LI W L, JIANG C, et al. A novel point cloud simplification method using local conditional information[J]. Measurement Science and Technology, 2022, 33(12): 125203.

[49]
DANG C, CHOI T, LEE S, et al. Case study: Improving the quality of dairy cow reconstruction with a deep learning-based framework[J]. Sensors, 2022, 22(23): 9325.

[50]
LIANG J X, YUAN Z Y, LUO X H, et al. A study on the 3D reconstruction strategy of a sheep body based on a Kinect v2 depth camera array[J]. Animals, 2024, 14(17): 2457.

[51]
SHI Y Y, WANG Y X, YIN L, et al. A transfer learning-based network model integrating kernel convolution with graph attention mechanism for point cloud segmentation of livestock[J]. Computers and Electronics in Agriculture, 2024, 225: 109325.

[52]
JIANG Y H, LI Z C, CAO J S, et al. Body parts segmentation and phenotypic traits extraction of pig using an improved point cloud segmentation network with multi-LiDAR[J]. Computers and Electronics in Agriculture, 2025, 237: 110624.

[53]
ZHOU Y Q. Study on sheep's body size parameters measurement extraction and three dimension reconstruction based on binocular stereo vision [D]. Hohhot: Inner Mongolia Agricultural University, 2018.

[54]
HU Y H, LUO X Y, GAO Z C, et al. Curve skeleton extraction from incomplete point clouds of livestock and its application in posture evaluation[J]. Agriculture, 2022, 12(7): 998.

[55]
QI C R, YI L, SU H, et al. PointNet++: Deep hierarchical feature learning on point sets in a metric space[C]// Advances in Neural Information Processing Systems 30 (NIPS 2017). New York, USA: Curran Associates, Inc., 2017.

[56]
FALQUE R, VIDAL-CALLEJA T, ALEMPIJEVIC A. Semantic keypoint extraction for scanned animals using multi-depth-camera systems[C]// 2023 IEEE International Conference on Robotics and Automation (ICRA). May 29-June 2, 2023, London, United Kingdom. IEEE, 2023: 11794-11801.

[57]
HAN B B, ZHANG J R, XIANG Y Y, et al. Shapewarp: A "global-to-local" non-rigid sheep point cloud posture rectification method[J]. Expert Systems with Applications, 2025, 270: 126524.

[58]
YANG C M, ZHANG M H, HE J M, et al. Correlation and regression analyses of body weight with body size indexes of luzhong meat sheep[J]. Shandong Agricultural Sciences, 2023, 55(10): 146-151.

[59]
SHI S, YIN L, LIANG S H, et al. Research on 3D surface reconstruction and body size measurement of pigs based on multi-view RGB-D cameras[J]. Computers and Electronics in Agriculture, 2020, 175: 105543.

[60]
LI J W, MA W H, BAI Q, et al. A posture-based measurement adjustment method for improving the accuracy of beef cattle body size measurement based on point cloud data[J]. Biosystems Engineering, 2023, 230: 171-190.

[61]
ZHANG C. Development and application of individual recognition and intelligent measurement of body size traits in Hu sheep[D]. Wuhan: Huazhong Agricultural University, 2022.

[62]
WANG K, GUO H, MA Q, et al. A portable and automatic Xtion-based measurement system for pig body size[J]. Computers and Electronics in Agriculture, 2018, 148: 291-298.

[63]
CHEN H F, TAN S X, SUN H C, et al. Correlation analysis of body size and body weight of four breeds of mutton sheep based on R project[J]. Journal of Domestic Animal Ecology, 2018, 39(10): 16-20.

[64]
XU X, MA Y J, JIANG Z W. Correlation and regression analysis of body weight and body sizes in Texel sheep[J]. Journal of Gansu Agricultural University, 2018, 53(3): 15-20.

[65]
LIANG X P, ZHANG Z L, AMANKAIDI MOHAMEDIKHAN, et al. Analysis of the influence of body size index of Xinjiang yemule white sheep ram on its body weight[J]. Xinjiang Agricultural Sciences, 2019, 56(4): 740-748.

[66]
TAO L, YANG H Y, JIANG Y T, et al. Path analysis on effects of body size traits on body weight of Yunshang black goat[J]. Journal of Domestic Animal Ecology, 2020, 41(7): 18-22.

[67]
ZHANG Q L, YAN M Y, YU Z X, et al. Establishing of optimum regression model and path analysis between body sizes and body weight in adult oula sheep[J]. Chinese Qinghai Journal of Animal and Veterinary Sciences, 2020, 50(1): 10-15.

[68]
SHI H N, LIU Y T, LI S E, et al. Correlation and regression analyses of the body weight and body sizes in Australian white sheep[J]. China Cattle Science, 2021, 47(1): 77-82.

[69]
YANG S Z, ZHAO W, MENG Y H, et al. The correlation analysis between body weight and body size of Butuo black sheep[J]. China Herbivore Science, 2021, 41(6): 37-40.

[70]
LIAO Y Y, WANG Y X, LIU Y Z, et al. Correlation between body weight and body size of Ningxia Yanchi Tan sheep[J]. Acta Agriculturae Shanghai, 2021, 37(2): 62-67.

[71]
ZHUANG L, YAN M Y, YU Z X, et al. Multiple regression analysis of body weight and body measurement of Qinghai hornless oula lambs[J]. Chinese Qinghai Journal of Animal and Veterinary Sciences, 2021, 51(6): 11-15.

[72]
MIRENISA TUERSUNTUOHETI, ZHANG J H, MAIERHABA, et al. Characteristics of Cele black sheep fat tail and its correlation with body weight and body size[J]. Heilongjiang Animal Science and Veterinary Medicine, 2022(19): 59-63.

[73]
CHEN Y H. Maternal genetic effect assessment of weight and size of Duolang sheep[D]. Xinjiang: Xinjiang Agricultural University, 2022.

[74]
LV Z W. Study on linear evaluation method of body shape of Duolang sheep[D]. Xinjiang: Xinjiang Agricultural University, 2022.

[75]
XIN M X, TE R, WANG G Q, et al. Correlation analysis between body weight and body size of Sunit sheep[J]. Animal Husbandry and Feed Science, 2022, 43(6): 64-67.

[76]
FU J, HE C, HUANG W P, et al. Correlation analysis between body weight and body size of Liangshan black sheep adult ewes[J]. China Herbivore Science, 2022, 42(5): 80-82.

[77]
MOKOENA K, MOLABE K M, SEKGOTA M C, et al. Predicting body weight of Kalahari Red goats from linear body measurements using data mining algorithms[J]. Veterinary World, 2022, 15(7): 1719-1726.

[78]
TANG S, WANG K N, LIU S D. Correlation analysis between body weight and body size of kirgiz rams and selection of optimal regression equation[J]. The Chinese Livestock and Poultry Breeding, 2023, 19(11): 78-84.

[79]
JANNAH Z N, ATMOKO B A, IBRAHIM A, et al. Body weight prediction model analysis based on the body size of female Sakub sheep in Brebes District, Indonesia[J]. Biodiversitas Journal of Biological Diversity, 2023, 24(7): 3657-3664.

[80]
DELIALIOGLU R A, PEHLIVAN E, ALTAY Y. Morphological characterization of the Polatli sheep in terms of live weight using data mining algorithms[J]. Tropical Animal Health and Production, 2023, 55(6): 416.

[81]
SIQIN Q, NI N, WU Y N, et al. Correlation, path and regression analysis between body weight with body size and tail traits of Sunit sheep at different months of age[J]. Heilongjiang Animal Science and Veterinary Medicine, 2024(14): 41-48.

[82]
NUERABUDULA W, BIJIGULI S, NUERQIAOLIPAN A, et al. Analysis of the effect of body size traits on body weight in Kirgiz sheep[J]. Grass-Feeding Livestock, 2024(1): 9-18.

[83]
WANG X, HANIKIZI·TULAFU, SHI G, et al. Correlation, path, and regression analysis of body weight and body size indexes in Multiparous fine wool sheep in the Xinjiang Region[J]. Heilongjiang Animal Science and Veterinary Medicine, 2024(14): 37-40.

[84]
CHURATA-HUACANI R, WILLIAM CANAZA-CAYO A, JESUS FERNANDES T, et al. Predicting body weight from body measurements of corriedale sheep using ridge and stepwise regression models[J]. Journal of Animal Health and Production, 2024, 12(2): 182-188.

[85]
MA X L. Research on key technologies of sheep 3D reconstruction based on 3D point cloud[D]. Hohhot: Inner Mongolia Agricultural University, 2023.

[86]
DU A, GUO H, LU J, et al. Automatic livestock body measurement based on keypoint detection with multiple depth cameras[J]. Computers and Electronics in Agriculture, 2022, 198: 107059.

[87]
XU J Y, XU A J, ZHOU S Y, et al. Research on the algorithm of curved body size measurement of pig based on Kinect camera[J]. Journal of Northeast Agricultural University, 2021, 52(9): 77-85.

[88]
LI M H, SU L D, ZHANG Y, et al. Automatic measurement of Mongolian horse body based on improved YOLOv8n-pose and 3D point cloud analysis[J]. Smart Agriculture, 2024, 6(4): 91-102.

[89]
LU J, GUO H, DU A, et al. 2-D/3-D fusion-based robust pose normalisation of 3-D livestock from multiple RGB-D cameras[J]. Biosystems Engineering, 2022, 223: 129-141.

[90]
ZHAO J M, ZHAO C, XIA H G. Cattle body size measurement method based on Kinect v4[J]. Journal of Computer Applications, 2022, 42(5): 1598-1606.

[91]
YEW Z J, LEE G H. RPM-Net: Robust point matching using learned features[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, New Jersey, USA: IEEE, 2020: 11821-11830.

[92]
QIN Z, YU H, WANG C J, et al. GeoTransformer: Fast and robust point cloud registration with geometric transformer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 9806-9821.

[93]
ZHAO H S, JIANG L, JIA J Y, et al. Point transformer[C]// 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway, New Jersey, USA: IEEE, 2021: 16239-16248.

[94]
QIAN G C, LI Y C, PENG H W, et al. PointNeXt: Revisiting PointNet++ with improved training and scaling strategies[EB/OL]. arXiv: 2206.04670, 2022.
