. 2023 Mar 21:8:txad033.
doi: 10.1093/tas/txad033. eCollection 2024.

Deep learning based landmark detection for measuring hock and knee angles in sows


Ryan L Jeon et al. Transl Anim Sci.

Abstract

This paper presents a visual deep learning approach to automatically determine hock and knee angles from sow images. Lameness is the second largest reason for culling of breeding herd females and relies on human observers to provide visual scoring for detection which can be slow, subjective, and inconsistent. A deep learning model classified and detected ten and two key body landmarks from the side and rear profile images, respectively (mean average precision = 0.94). Trigonometric-based formulae were derived to calculate hock and knee angles using the features extracted from the imagery. Automated angle measurements were compared with manual results from each image (average root mean square error [RMSE] = 4.13°), where all correlation slopes (average R 2 = 0.84) were statistically different from zero (P < 0.05); all automated measurements were in statistical agreement with manually collected measurements using the Bland-Altman procedure. This approach will be of interest to animal geneticists, scientists, and practitioners for obtaining objective angle measurements that can be factored into gilt replacement criteria to optimize sow breeding units.
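The abstract describes trigonometric formulae that convert detected body landmarks into hock and knee angles. The paper's exact formulae are not reproduced here, but a minimal sketch of one standard way to compute a joint angle from three 2D landmark centroids (the arccosine of the normalized dot product at the vertex) is shown below; the function name and point layout are illustrative assumptions, not the authors' implementation.

```python
import math

def angle_at_vertex(a, v, b):
    """Angle in degrees at vertex v formed by landmark centroids a and b.

    Each point is an (x, y) centroid. This is a generic trigonometric
    formulation, not the paper's specific derivation.
    """
    ax, ay = a[0] - v[0], a[1] - v[1]
    bx, by = b[0] - v[0], b[1] - v[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

For example, three centroids forming a right angle at the vertex yield 90°.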

Keywords: algorithm; computer vision; key point detection; swine.


Figures

Figure 1.
Summary of the main components for determining hock and knee angles from the side view using the YOLO object detection algorithm. The automated measurement procedure begins with manual annotation of raw images, which form the training dataset for YOLO. The trained model localizes each detected body landmark within a boundary box. From the centroid and boundary box coordinates, a geometric algorithm determines hock and knee angles between body landmarks. These automated measurements are compared with those collected manually on the same image. Statistical tests assess whether the slope between automated and manual measurements differs from 0 and 1, and the Bland–Altman procedure tests the statistical agreement between the two measures.
Figure 2.
Example annotation for the side and rear view images. Numbers correspond to the class labels described in Table 1. There are two instances of feet in the side image, and two instances of hocks and feet in the rear image. The geometric algorithm identifies which is left and which is right based on their x and y coordinates.
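The caption above notes that duplicate detections of the same class (e.g., two feet) are disambiguated into left and right using their coordinates. A minimal sketch of such a rule is shown below, assuming (as an illustration, not from the paper) that the detection with the smaller x centroid is labeled as the left-side instance in the image.

```python
def assign_left_right(detections):
    """Label two same-class detections by image position.

    Each detection is an (x, y) centroid tuple. Hypothetical helper:
    the instance with the smaller x coordinate is treated as 'left'
    in image space.
    """
    left, right = sorted(detections, key=lambda c: c[0])
    return {"left": left, "right": right}
```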
Figure 3.
Example output from the YOLO v5 model. In a single pass of the CNN, the YOLO model predicts the labels (green box), boundary boxes (orange box), and confidence probabilities (blue box) for objects detected in the image.
Figure 4.
Parts of the YOLO v5 network. The neck serves many roles but primarily acts as an aggregation step; features such as pyramid pooling and path aggregation are located in the neck. One advantage of later YOLO versions is increased accuracy due to improvements in the neck. Finally, the head performs detection of the sow body landmarks by predicting annotations in each test image.
Figure 5.
An example depicting intersection over union (IoU) calculation. The blue box is the predicted boundary box, and the orange box is the ground truth label. The orange and blue dots are the centroids of the detected hock for the ground truth and predicted boundary boxes, respectively. The IoU metric is calculated as the ratio of the area of overlap between a predicted boundary box and the manual annotation to the area of the union of the two boundary boxes (Eq. 1).
Figure 6.
Trends between (a) recall and confidence and (b) precision and confidence for each class in the side view. The thicker blue line represents the average across all classes. Recall for all class objects decreases as the confidence threshold increases; the inverse holds for precision, which increases with confidence. The pink curve (the neck class) follows erratic patterns in later epochs, possibly due to overfitting on the training and validation datasets.
Figure 7.
Relationship between mAP and the number of epochs (a: side, b: rear). mAP plateaued around epoch 300 for the side images and around epoch 200 for the rear images. Orange triangles mark the early truncation points automatically triggered by the patience function.
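The patience function mentioned above halts training once the monitored metric stops improving. A minimal sketch of such a rule, operating on a per-epoch mAP history, is shown below; the function name and the default patience value are illustrative assumptions (YOLO v5 exposes this behavior via its `--patience` training option).

```python
def early_stop_epoch(map_per_epoch, patience=100):
    """Return the epoch at which training would halt.

    Hypothetical patience rule: stop once mAP has not improved for
    `patience` consecutive epochs; otherwise run to the last epoch.
    """
    best, best_epoch = float("-inf"), 0
    for epoch, m in enumerate(map_per_epoch):
        if m > best:
            best, best_epoch = m, epoch  # new best: reset the counter
        elif epoch - best_epoch >= patience:
            return epoch  # no improvement for `patience` epochs
    return len(map_per_epoch) - 1
```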
Figure 8.
Centroids detected by the trained YOLO model, and body angles calculated from the detected body landmarks, for the rear view image. Centroids used for the angles are in blue, and the vertices of the angles are highlighted.
Figure 9.
Centroids detected by the trained YOLO model, and body angles calculated from the detected body landmarks, for the side view image. Centroids used for the angles are in blue, and the red centroids represent the centroid of the class foot. Angle vertices are highlighted in yellow.
Figure 10.
Calculation of adjusted foot coordinates (blue) from the origin (red), with the boundary box shown in gray. Here, (xcb, ycb) are the centroid coordinates of the back foot, and (xcf, ycf) are the centroid coordinates of the front foot.
Figure 11.
Bland–Altman plots for each knee and hock angle (Hock-Left, Hock-Right, Side-Back-1, Side-Back-2, Side-Front-1, Side-Front-2). The plots visualize the difference between each pair of automated and manual measurements against the mean of that pair. The middle bold line represents the overall mean difference. The top and bottom dashed lines represent the limits of agreement, set at 1.96 standard deviations above and below the mean.
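The quantities plotted above (mean difference and the 1.96 SD limits of agreement) can be sketched as follows, assuming paired automated and manual angle measurements; the function name is an illustrative assumption, not code from the paper.

```python
import statistics

def bland_altman_limits(auto_vals, manual_vals):
    """Bland–Altman bias and 95% limits of agreement.

    Returns (lower limit, mean difference, upper limit), where the
    limits lie 1.96 sample standard deviations from the mean difference.
    """
    diffs = [a - m for a, m in zip(auto_vals, manual_vals)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias, bias + 1.96 * sd
```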
Figure 12.
Correlations between automated and manual measurements (in degrees) by knee and hock angle (Hock-Left, Hock-Right, Side-Back-1, Side-Back-2, Side-Front-1, Side-Front-2). Correlations and the corresponding angle are depicted in the upper right corner.

References

    1. Altman, D. G., and Bland, J. M. 1983. Measurement in medicine: the analysis of method comparison studies. The Statistician 32:307–317. doi: 10.2307/2987937.
    2. Bereskin, B. 1979. Genetic aspects of feet and leg soundness in swine. J. Anim. Sci. 48:1322–1328. doi: 10.2527/jas1979.4861322x.
    3. Bland, J. M., and Altman, D. G. 1986. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1:307–310.
    4. Draper, D., Rothschild, M. F., and Goedegebuure, S. 1988. Effects of divergent selection for leg weakness on angularity of joints in Duroc swine. J. Anim. Sci. 66:1636–1642. doi: 10.2527/jas1988.6671636x.
    5. Fan, B., Onteru, S. K., Mote, B. E., Serenius, T., Stalder, K. J., and Rothschild, M. F. 2009. Large-scale association study for structural soundness and leg locomotion traits in the pig. Genet. Sel. Evol. 41. doi: 10.1186/1297-9686-41-14.
