Animals (Basel). 2023 Jun 2;13(11):1861. doi: 10.3390/ani13111861.

Dead Laying Hens Detection Using TIR-NIR-Depth Images and Deep Learning on a Commercial Farm


Sheng Luo et al. Animals (Basel).

Abstract

In large-scale laying hen farming, timely detection of dead chickens helps prevent cross-infection, disease transmission, and economic loss. Dead chicken detection is still performed manually and is one of the major labor costs on commercial farms. This study proposed a new method for dead chicken detection using multi-source images and deep learning and evaluated the detection performance with different source images. We first introduced a pixel-level image registration method that used depth information to project the near-infrared (NIR) and depth images into the coordinate system of the thermal infrared (TIR) image, resulting in registered images. Then, the registered single-source (TIR, NIR, depth), dual-source (TIR-NIR, TIR-depth, NIR-depth), and multi-source (TIR-NIR-depth) images were separately used to train dead chicken detection models with object detection networks, including YOLOv8n, Deformable DETR, Cascade R-CNN, and TOOD. The results showed that, at an IoU (Intersection over Union) threshold of 0.5, the performance of these models was not entirely the same. Among them, the model using the NIR-depth image and Deformable DETR achieved the best performance, with an average precision (AP) of 99.7% (IoU = 0.5) and a recall of 99.0% (IoU = 0.5). As the IoU threshold increased, we found the following: the model with the NIR image achieved the best performance among models with single-source images, with an AP of 74.4% (IoU = 0.5:0.95) in Deformable DETR. The performance with dual-source images was higher than that with single-source images. The model with the TIR-NIR or NIR-depth image outperformed the model with the TIR-depth image, achieving an AP of 76.3% (IoU = 0.5:0.95) and 75.9% (IoU = 0.5:0.95), respectively, in Deformable DETR. The model with the multi-source image also achieved higher performance than models with single-source images; however, it showed no significant improvement over the models with the TIR-NIR or NIR-depth image, with an AP of 76.7% (IoU = 0.5:0.95) in Deformable DETR. By analyzing the detection performance with different source images, this study provides a reference for selecting and using multi-source images for detecting dead laying hens on commercial farms.
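The pixel-level registration described above, projecting a depth-camera pixel into the TIR image coordinate system via its depth value, can be sketched with a standard pinhole camera model. All intrinsic and extrinsic values below are hypothetical placeholders; in the study, such parameters would come from calibration (e.g., with a calibration board), not from these numbers.

```python
import numpy as np

# Hypothetical camera parameters (placeholders, not from the paper).
K_depth = np.array([[580.0,   0.0, 320.0],   # depth/NIR camera intrinsics
                    [  0.0, 580.0, 240.0],
                    [  0.0,   0.0,   1.0]])
K_tir = np.array([[400.0,   0.0, 160.0],     # TIR camera intrinsics
                  [  0.0, 400.0, 120.0],
                  [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # rotation, depth camera -> TIR camera
t = np.array([0.05, 0.0, 0.0])    # translation in meters (assumed baseline)

def project_depth_pixel_to_tir(u, v, depth_m):
    """Back-project pixel (u, v) to a 3D point using its depth, transform
    the point into the TIR camera frame, then project it onto the TIR
    image plane. Returns sub-pixel TIR coordinates."""
    p3d = depth_m * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    p_tir = R @ p3d + t
    uvw = K_tir @ p_tir
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Applying this per pixel (or vectorized over the whole depth map) yields NIR and depth images resampled into TIR coordinates, i.e., the registered images used for training.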

Keywords: dead laying hen detection; deep learning; depth image; image registration; large-scale farming; near-infrared image; thermal infrared image.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Flow diagram.

Figure 2. Image acquisition device.

Figure 3. Raw images. (a) TIR image. (b) NIR image. (c) Depth image. Note: the images in the first row do not contain any dead chickens, while the images in the second row contain a dead chicken.

Figure 4. Flow of the coordinate transform. Note: the variables and coordinate systems in Figure 4 are described in the transformation steps below.

Figure 5. Calibration board. (a) Photograph. (b) TIR image. (c) NIR image.

Figure 6. Registered images. (a) TIR image. (b) NIR image. (c) Depth image. (d) TND image. Note: the first row shows live chicken images, and the second row shows dead chicken images.

Figure 7. Results of dead chicken detection models. (a) AP50. (b) Recall. (c) AP75. (d) AP@50:5:95. Note: T represents the TIR image, N the NIR image, D the depth image, T + N the TIR-NIR image, T + D the TIR-depth image, N + D the NIR-depth image, and T + N + D the TIR-NIR-depth image.

Figure 8. Detection results with single-source images. Note: (a–f) are mosaic images. From left to right, each mosaic shows the TIR, NIR, and depth images. The green boxes are the annotation boxes, and the red boxes are the prediction boxes of the YOLOv8n object detection algorithm. All bounding boxes for live chickens were removed to enable clear observation of the dead chicken detection.

Figure 9. Detection results with dual-source images. Note: (a–f) are mosaic images. From left to right, each mosaic shows the TIR-NIR, TIR-depth, and NIR-depth images. The TIR, NIR, and depth images correspond to the R, G, and B channels of RGB color space, respectively; the idle channel in each dual-source image is set to zero. The green boxes are the annotation boxes, and the red boxes are the prediction boxes of the YOLOv8n object detection algorithm. All bounding boxes for live chickens were removed to enable clear observation of the dead chicken detection.

Figure 10. Detection results with the multi-source image. Note: (a–f) are TND images. The TIR, NIR, and depth images correspond to the R, G, and B channels of RGB color space, respectively. The green boxes are the annotation boxes, and the red boxes are the prediction boxes of the YOLOv8n object detection algorithm. All bounding boxes for live chickens were removed to enable clear observation of the dead chicken detection.
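The channel mapping described in the figure captions, TIR, NIR, and depth placed in the R, G, and B channels, with any idle channel zeroed for dual-source images, can be sketched as follows. The function name and image shape are illustrative assumptions, not code from the study.

```python
import numpy as np

def fuse_channels(tir=None, nir=None, depth=None, shape=(480, 640)):
    """Stack registered single-channel images into the R, G, and B
    channels of one 3-channel array. Any missing (idle) channel is
    set to zero, as in the dual-source images of Figure 9."""
    zero = np.zeros(shape, dtype=np.uint8)
    r = tir if tir is not None else zero    # R channel: TIR
    g = nir if nir is not None else zero    # G channel: NIR
    b = depth if depth is not None else zero  # B channel: depth
    return np.dstack([r, g, b])
```

For example, `fuse_channels(tir, nir, depth)` would produce the multi-source TND image, while `fuse_channels(tir, nir)` would produce the TIR-NIR dual-source image with its blue channel zeroed.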

