Tracking perching behavior of cage-free laying hens with deep learning technologies

INTRODUCTION

Domestic fowl (Gallus gallus domesticus) continue to exhibit antipredator behaviors, such as perching or roosting in trees, even in indoor housing environments, although the domestication of their ancestor, the jungle fowl (Gallus gallus), has altered these behaviors in both quantity and quality (Wood-Gush and Duncan, 1976; Newberry et al., 2001). Domestic fowl show a tendon-locking mechanism that lets them express perching behavior (PB) with minimal muscular effort (Quinn and Baumel, 1990; Trbojević Vukičević et al., 2018). Like jungle fowl, domestic hens seek elevated structures for resting at night to protect themselves from ground predators (Blokhuis, 1984; Newberry et al., 2001). In nature, hens may rest up to 10 m above the ground (Wood-Gush and Duncan, 1976), while under intensive housing conditions, perches are mostly used for roosting (Duncan et al., 1992; Olsson and Keeling, 2000a, 2002; Schrader and Müller, 2009). Hens show a strong desire to use perches for roosting (Olsson and Keeling, 2000b, 2002) and are willing to work to gain access to them. They also exhibit signs of agitation and frustration when access to perches is restricted or denied, resulting in reduced welfare (Olsson and Keeling, 2002; Fraser et al., 2013). Laying hens are highly motivated to perch, and perching has been regarded as a natural behavioral need whose fulfillment enhances animal welfare (Olsson and Keeling, 2002; Cooper and Albentosa, 2003; Weeks and Nicol, 2006; Bist et al., 2023a). The provision of perches allows birds to perform their normal perching behavior, satisfying this behavioral demand (Lay et al., 2011). Several studies over the past 4 decades have investigated the impact of perch provision on the production performance, health, and welfare of laying hens.
The United States laying hen industry is undergoing a significant shift from conventional caged (CC) systems to cage-free (CF) housing. This transition is primarily driven by growing concern for animal welfare, rising public demand, and evolving industry standards (UEP, 2017). CF housing provides laying hens with a more conducive environment, offering greater space and opportunities for natural behaviors, including dustbathing, preening, and perching. Providing perches in CF housing offers significant benefits for laying hens, such as improved bone health due to continuous movement (Hughes and Appleby, 1989; Abrahamsson et al., 1996). A study comparing CF and furnished cages found that laying hens in the CF system had stronger wing and keel bones than those in furnished cages (Rodenburg et al., 2008). Bone fragility and muscle weakness arise when birds cannot move and exercise sufficiently (Webster, 2004; Widowski et al., 2013). CF housing therefore gives birds an opportunity to move and exercise, which may improve bone strength relative to housing systems that restrict movement. However, the challenge of bone fractures in CF housing cannot be ignored, especially during the laying phase. Perch use is related not only to bone strength and leg health but also to bird behavior and welfare. A previous study reported reduced fearfulness and aggression in commercial free-range laying hens provided with perches (Donaldson and O’Connell, 2012), and similar benefits may be achievable in CF housing. Although perching provides several benefits to birds in CF housing, detecting perching behavior from an early age with high accuracy remains a challenging task for researchers and producers. Therefore, a more robust precision detection technology is needed to detect perching automatically, with less labor, in less time, and with higher precision.
For object detection, the you only look once (YOLO) model, particularly the YOLOv5 variant, has emerged as a leading approach for analyzing poultry behavior (Neethirajan, 2022). Various studies have illustrated the effectiveness of YOLO models in identifying and detecting a range of behaviors and activities within CF housing. Earlier adoptions of object detection technology in CF housing include detection of pecking, floor eggs, piling, mislaying behavior, and dead hens, egg grading and defect detection, and tracking of individual birds (Subedi et al., 2023a, 2023b; Bist et al., 2023b, 2023c, 2023d; Yang et al., 2023a, 2023b, 2024a, 2024b). In addition, recent advances have made it possible to track the locomotion of individual chickens with models such as the track anything model (TAM) (Yang et al., 2024c). Further enhancements in YOLO models, such as YOLOv6, YOLOv7, and YOLOv8, have improved their accuracy and applicability for monitoring poultry behavior (Jocher et al., 2023).
Despite the widespread integration of YOLO models in poultry research, their use for detecting PB in laying hens within CF housing systems remains largely unexplored. This study aims to bridge this gap by training, validating, and testing a deep learning-based model for detecting PB. The objectives of the study were to (1) develop and test a deep learning model for detecting perching behavior; and (2) evaluate the optimal model’s performance in detecting perching behavior of laying hens at different ages. Our aim is to enrich the understanding of applied behaviors of laying hens in CF housing and to contribute to the advancement of robust detection systems for improving animal welfare in the poultry sector.

MATERIALS AND METHODS

Experimental Setup

Four identical research poultry facilities (rooms) were used at the University of Georgia, Athens, GA research facility, and 200 Hy-Line W-36 birds were placed in each room from d 1 to d 525. To meet CF housing guidelines, the rooms were designed with perches and a litter floor. Each room was identical in dimensions (7.3 m long, 6.1 m wide, and 3 m high; Figure 1) and equipment (feeders, drinkers, lighting, perches, and nest boxes). Pine shavings (∼5 cm deep) were placed in each room as bedding material from d 1. Environmental factors such as indoor temperature, relative humidity, light duration (16 h), light intensity (12–15 lux), and ventilation rates were automatically regulated and recorded using a Chore-Tronics Model 8 controller (Chore-Time Equipment, Milford, IN). The Institutional Animal Care and Use Committee (IACUC) at the University of Georgia approved the animal use protocol.

Figure 1. Research cage-free facility housing pullets/laying hens.

Data Collection

Night-vision network cameras (PRO-1080MSB, Swann Communications USA Inc., Santa Fe Springs, CA) with a 90° viewing angle and night vision up to 100 ft (30.48 m) in total darkness and 130 ft (39.62 m) with ambient light were used to record video datasets of laying hen activities. A total of 8 cameras were placed in each room: 6 mounted on the ceiling ∼3 m above the litter floor and 2 wall cameras placed 0.5 m above the ground floor. One camera was sufficient to capture the whole perch from the top view and another from the side view, while the remaining 6 cameras covered the feeders, water lines, nest box areas, and other parts of the research room. The pictures included in this paper were taken from 1.5 m above the top of the perch (3 m above the ground floor), as the perch was 1.5 m high and we wanted to cover the whole perch from top to bottom. Camera cleaning frequency depended on bird age: while the birds were young, little dust accumulated on the cameras and weekly cleaning was sufficient, but as the birds aged, dust soiled the cameras almost daily, so cameras were cleaned every other day during the daily bird check. There was no difference in maintenance between ceiling and wall cameras. The cameras recorded video for 24 h/d, and videos from morning, afternoon, evening, and night were randomly selected for image acquisition from different days at each growth phase of the birds. The captured videos were stored in a digital video recorder (DVR-4580, Swann Communications USA Inc., Santa Fe Springs, CA) from d 1 to d 525. The video files were stored in .avi format at a resolution of 1,920 × 1,080 pixels and a sampling rate of 15 frames per second (FPS).

Image Labeling and Data Preprocessing

Video datasets collected from the research facilities were converted into individual .jpg image files using the Free Video to JPG Converter app (ver. 5.0) at a 15 FPS processing rate. The resulting images were filtered manually to retain high-quality images containing perching behavior. From these, 500 images were randomly selected within each growth phase, drawn from different days and times of day, for a total of 3,000 images. Of these, 70% were used for training, 20% for validation, and 10% for testing. Selecting images from a single day and time could limit the model’s ability to detect perching behavior across the birds’ entire life period, so images were selected randomly from different days and times within each growth phase. Image labeling was performed with the free makesense.ai website and stored in YOLO format (Subedi et al., 2023a; Bist et al., 2023b; Guo et al., 2023). Images were labeled manually by drawing a bounding box around each hen performing the targeted PB so that the box enclosed all body parts involved in perching. Perching behavior was identified as defined by Appleby et al. (2004). The detailed process of data collection, labeling, preprocessing, training, validation, testing, and implementation is shown below (Figure 2). The YOLOv7 and YOLOv8 models we trained were obtained from their publicly available GitHub repositories (Jocher et al., 2023). All YOLOv7 and YOLOv8 models used in this study were pretrained on the common objects in context (COCO) dataset and can be readily adapted to new object detection tasks through training on target datasets. Before developing the perching detector, the experimental configurations were prepared for model evaluation. Training datasets were analyzed using Oracle Cloud with the experimental configurations presented in Table 1; a hedged sketch of the corresponding training call is given after the table.
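As an illustration of this preprocessing pipeline, the sketch below extracts frames from recorded video and performs the 70/20/10 split into a YOLO-style folder layout. It is a minimal sketch only: the study used the Free Video to JPG Converter app and makesense.ai, and all paths and parameters here are hypothetical.

```python
# Hypothetical frame-extraction and 70/20/10 split; paths are illustrative.
import cv2
import random
import shutil
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n: int = 15) -> None:
    """Save every n-th frame of a video as a .jpg image."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
        idx += 1
    cap.release()

def split_dataset(image_dir: str, out_root: str, seed: int = 42) -> None:
    """Shuffle images and copy them into YOLO-style train/val/test folders (70/20/10)."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n = len(images)
    splits = {"train": images[:int(0.7 * n)],
              "val": images[int(0.7 * n):int(0.9 * n)],
              "test": images[int(0.9 * n):]}
    for split, files in splits.items():
        dst = Path(out_root) / "images" / split
        dst.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dst / f.name)
```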

Figure 2. The processes of perching detection system (i.e., data collection, labeling, training, validation, testing, and implementation).

Table 1. Data preprocessing for the YOLOv7 and YOLOv8 models, where each image may contain more than one bird performing perching behavior.

Class^a Original dataset Train (70%) Validation (20%) Test (10%)
Starter (1–6 wk) 500 350 100 50
Grower (6–12 wk) 500 350 100 50
Developer (12–15 wk) 500 350 100 50
Prelay (15–17 wk) 500 350 100 50
Peaking (17–37 wk) 500 350 100 50
Layers (37–75 wk) 500 350 100 50
^a Each class or experimental setting was run for 200 epochs with a batch size of 8.
Abbreviation: wk, week.
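For reference, a hedged sketch of a matching training launch with the Ultralytics Python API is shown below for the YOLOv8 variants. The dataset YAML name is hypothetical; the epochs and batch size follow the Table 1 footnote, and the image size is an assumed default not reported in the study.

```python
from ultralytics import YOLO

# COCO-pretrained weights, fine-tuned on the perching dataset.
model = YOLO("yolov8x.pt")  # or "yolov8s.pt" for the YOLOv8s-PB variant
model.train(
    data="perching.yaml",   # hypothetical dataset YAML (paths + single class "perching")
    epochs=200,             # per the Table 1 footnote
    batch=8,                # batch size per the Table 1 footnote
    imgsz=640,              # assumed default image size; not reported in the study
)
# The YOLOv7 variants are trained analogously through the reference
# repository's train.py CLI (e.g., --epochs 200 --batch-size 8).
```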

YOLOv7-PB Model Description

The YOLOv7-PB model was developed from the YOLOv7 network, which consists of an input, a backbone, a head, and an output. Feeding an image into YOLOv7 follows a process similar to that of YOLOv5, as explained previously (Yang et al., 2022). The YOLOv7 backbone incorporates Bconv layers, E-ELAN layers, and MP layers. The Bconv layer combines convolution, batch normalization (BN), and an activation function. The E-ELAN layer employs techniques such as expanding, shuffling, and merging cardinality, aimed at enhancing learning capability. This approach ensures that the deep network can learn and converge efficiently without disrupting the original gradient path, as mentioned previously (Wang et al., 2022; Yang et al., 2022). The MP layer consists of input and output channels, where the output dimensions are halved compared to the input, with both halves incorporating Bconv layers.
The head of YOLOv7 is similar to that of YOLOv5, with distinctions such as the replacement of the CSP module with the E-ELAN module and the transformation of the downsampling module into the MPConv layer. The head comprises an SPPCSPC layer, multiple Bconv layers, several MPConv layers, numerous Catconv layers, and RepVGG block layers that generate 3 subsequent heads, as explained by Yang et al. (2022). The SPPCSPC layer is formed through a pyramid pooling operation and a CSP structure, with the output information concatenated. The Catconv layer serves a function similar to the E-ELAN layer, facilitating efficient learning and convergence in deeper networks (Wang et al., 2022).
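As a concrete illustration of the Bconv building block described above (convolution + BN + activation), a minimal PyTorch sketch follows. The kernel size, stride, and SiLU activation are assumptions, as the exact configuration varies across positions in the network.

```python
import torch
import torch.nn as nn

class Bconv(nn.Module):
    """Convolution + batch normalization (BN) + activation, per the description above."""
    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)   # batch normalization
        self.act = nn.SiLU()              # assumed activation; variants differ

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

# Example: halve spatial resolution while doubling channels, as in an MP-style path.
x = torch.randn(1, 64, 80, 80)
print(Bconv(64, 128, k=3, s=2)(x).shape)   # torch.Size([1, 128, 40, 40])
```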

YOLOv8 Model Description

YOLOv8 is the newest addition to the YOLO series developed by Ultralytics (Jocher et al., 2023). As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of earlier versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including object detection, segmentation, pose estimation, tracking, and classification, and enables real-time object detection with strong accuracy and speed. Building on the progress of earlier YOLO models, YOLOv8 is a well-suited option for diverse object detection tasks across a broad spectrum of applications.

Model Evaluation Metrics

Precision. Precision measures how accurately the bounding box predictions match the dataset. It is calculated as the ratio of true positive predictions, such as perching, to the total positive predictions made:

$$\text{Precision} = \frac{TP}{TP + FP} \tag{a}$$

where TP, FP, and FN stand for true positive, false positive, and false negative values, respectively.

Recall. Recall indicates the model’s capability to correctly predict bounding box measurements within the dataset. It is calculated as the ratio of true positive predictions, such as perching, to all actual instances of the positive class:

$$\text{Recall} = \frac{TP}{TP + FN} \tag{b}$$

F1 score. The F1 score is a vital metric in object detection, representing the harmonic mean of precision and recall, as shown in equation (c). A higher F1 score indicates better detector performance; an F1 score of 100% signifies highly accurate detection without any negative outcomes:

$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{c}$$

Mean average precision (mAP). The mAP is a crucial evaluation metric that measures the model’s detection performance. It uses an intersection over union (IOU) threshold of 0.5 (mAP@0.50) or a broader range from 0.5 to 0.95 (mAP@0.50:0.95):

$$mAP = \frac{1}{C}\sum_{i=1}^{C} AP_i \tag{d}$$

where $AP_i$ signifies the average precision of the ith category and $C$ represents the total number of categories.
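A small Python sketch of equations (a) through (d), computed from raw TP/FP/FN counts and per-class AP values, is given below; the example numbers are illustrative only.

```python
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp) if (tp + fp) else 0.0           # equation (a)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn) if (tp + fn) else 0.0           # equation (b)

def f1_score(p: float, r: float) -> float:
    return 2 * p * r / (p + r) if (p + r) else 0.0        # equation (c)

def mean_average_precision(ap_per_class: list) -> float:
    return sum(ap_per_class) / len(ap_per_class)          # equation (d)

# Illustrative single-class ("perching") example: 95 TP, 5 FP, 5 FN.
p, r = precision(95, 5), recall(95, 5)
print(p, r, f1_score(p, r), mean_average_precision([0.976]))
```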
Loss Function. The YOLOv7-PB and YOLOv7x-PB object detection algorithms utilize a custom loss function called ‘YOLO loss’ during training and validation (Figure 3, Figure 4). This loss function combines several components to penalize incorrect predictions and enhance accuracy. The YOLOv7-PB and YOLOv7x-PB loss is a weighted sum of box loss, objectness loss, and classification loss, tuned through backpropagation and gradient descent to reduce prediction errors and improve model performance (Bist et al., 2023d).

Figure 3. Performance metrics results sample for YOLOv7-PB model, where Box, objectness, classification, val Box, val Objectness, val Classification, mAP signify the training box loss, training object loss, training classification loss, validation box loss, validation object loss, validation classification loss, and mean average precision used to detect perching behavior.

Figure 4. Performance metrics results sample for YOLOv7x-PB model, where Box, objectness, classification, val Box, val Objectness, val Classification, mAP represent the training box loss, training object loss, training classification loss, validation box loss, validation object loss, validation classification loss, and mean average precision used to detect perching.

Similarly, the YOLOv8s-PB and YOLOv8x-PB algorithms use a custom ‘YOLO loss’ during training and validation (Figure 5, Figure 6). This loss function combines several terms to penalize incorrect predictions and enhance accuracy. The YOLOv8s-PB and YOLOv8x-PB loss is a weighted sum of box loss, classification loss, and distribution focal loss (DFL), with user-defined weights determining the significance of each term.

Figure 5. Performance metrics results sample for the YOLOv8s-PB model, where train/box_loss, train/cls_loss, train/dfl_loss, val/box_loss, val/cls_loss, val/dfl_loss, and mAP represent the training box loss, training class loss, training distribution focal loss, validation box loss, validation class loss, validation distribution focal loss, and mean average precision used to detect perching.

Figure 6. Performance metrics results sample for the YOLOv8x-PB model, where train/box_loss, train/cls_loss, train/dfl_loss, val/box_loss, val/cls_loss, val/dfl_loss, and mAP represent the training box loss, training class loss, training distribution focal loss, validation box loss, validation class loss, validation distribution focal loss, and mean average precision used to detect perching.

In YOLOv7 and YOLOv8, the regression loss integrates CIoU (complete intersection over union) loss with DFL (Lou et al., 2023). DFL models box positions with a general distribution, emphasizing probability mass near the object location:

$$\mathrm{DFL}(S_i, S_{i+1}) = -\left((y_{i+1} - y)\log S_i + (y - y_i)\log S_{i+1}\right) \tag{f}$$

where $S_i$ is the sigmoid output of the network, $y_i$ and $y_{i+1}$ are the interval orders bracketing the continuous label $y$.
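A minimal numeric sketch of equation (f) is shown below for a single box coordinate; the probabilities and label value are illustrative.

```python
import math

def dfl(s_i: float, s_i1: float, y: float, y_i: int, y_i1: int) -> float:
    """Distribution focal loss for one box coordinate, equation (f).

    s_i, s_i1: network outputs for the two bins bracketing the label y;
    y_i, y_i1: the integer interval orders with y_i <= y <= y_i1.
    """
    return -((y_i1 - y) * math.log(s_i) + (y - y_i) * math.log(s_i1))

# Illustrative: label y = 4.3 lies between bins 4 and 5; the loss is small
# when probability mass sits on the bins nearest the label.
print(dfl(s_i=0.7, s_i1=0.3, y=4.3, y_i=4, y_i1=5))   # ~0.61
```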
YOLOv8 utilizes an anchor-free strategy instead of the anchor-based approach used in earlier versions and introduces a dynamic task-aligned assigner for the matching process. It computes the anchor-level alignment metric using equation (g):

$$t = S^{\alpha} \times u^{\beta} \tag{g}$$

where $S$ is the classification score, $u$ is the IoU value, and α and β are weight hyperparameters. The top $m$ anchors per instance are selected as positives and trained through the loss function. These enhancements led to YOLOv8 achieving an 11.4% mAP@0.50:0.95 improvement over YOLOv5 on the Pascal VOC2007 dataset, establishing it as one of the most accurate detectors (Lou et al., 2023).
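The sketch below illustrates equation (g) and the top-m selection for a single instance; the α, β, and m values are illustrative rather than the defaults of any specific implementation.

```python
import numpy as np

def task_aligned_assign(cls_scores: np.ndarray, ious: np.ndarray,
                        alpha: float = 1.0, beta: float = 6.0,
                        m: int = 10) -> np.ndarray:
    """Select top-m anchors per instance by t = S**alpha * u**beta (equation g).

    cls_scores, ious: arrays of shape (num_instances, num_anchors).
    """
    t = (cls_scores ** alpha) * (ious ** beta)     # anchor-level alignment
    return np.argsort(-t, axis=1)[:, :m]           # indices of the top-m positives

# Illustrative: 1 instance, 5 candidate anchors.
scores = np.array([[0.9, 0.8, 0.4, 0.7, 0.2]])
ious = np.array([[0.8, 0.9, 0.3, 0.5, 0.1]])
print(task_aligned_assign(scores, ious, m=2))      # [[1 0]]
```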

RESULTS AND DISCUSSION

Performance Metrics Comparison

The performances of the 4 object detection models used to detect PB are given in Table 2. The YOLOv8x-PB model outperformed all other examined models, with slight variations across performance metrics. A closer examination of its outcomes shows an impressive precision of 94.80% and recall of 95.10% for PB, demonstrating its capability for precise detection. The YOLOv8x-PB model’s mAP@0.50 score of 97.60% for PB detection further highlights its ability to identify PB instances with a high confidence score. All models used in this study achieved a precision of at least 94.0%, recall of at least 93.4%, mAP@0.50 of at least 96.9%, mAP@0.50:0.95 of at least 58.1%, and F1 score of at least 94.0%.

Table 2. Performance metrics results of the YOLOv7-PB and YOLOv8-PB models for detecting PB.

Models Precision (%) Recall (%) mAP@0.50 (%) mAP@0.50:0.95 (%) F1-score (%)
YOLOv7-PB 94.0 95.3 97.3 58.1 95.0
YOLOv7x-PB 94.3 94.9 97.5 58.3 95.0
YOLOv8s-PB 95.0 93.4 96.9 61.4 94.0
YOLOv8x-PB 94.8 95.1 97.6 62.6 95.0
Abbreviations: mAP, mean average precision; PB, perching behavior.
The mAP@0.50 of the YOLOv8x-PB model for detecting PB in our study was 97.60%, which surpassed the results of an earlier study that used the YOLOv8 model for small object detection and reported a highest mAP@0.50 of 83% and a lowest of 18.1% (Lou et al., 2023). In our previous study, YOLOv8x was the optimal model for detecting dustbathing behavior of CF laying hens, with a mAP@0.50 of 93.70% (Paneru et al., 2024). Another study that used an improved YOLOv8n model (E-YOLO) for detecting estrus in cows achieved an average precision of 93.90% for estrus and 95.70% for mounting (Wang et al., 2024). Because detection accuracy generally increases with object size, a comparison of the body sizes of cows and laying hens would lead one to expect higher detection accuracy for cows. However, our YOLOv8x-PB model achieved higher detection accuracy for PB in laying hens. Object size is therefore not always decisive; the larger sample size used for model training, as well as the presence of multiple birds performing perching in a single image, may have increased the accuracy of the model.
In contrast, another study that used the YOLOv8 model achieved a lowest mAP@0.50 of 47% (Wang et al., 2023), which was 50.6 percentage points lower than our current result. The lower mAP@0.50 in the Wang et al. (2023) study might be attributed to the targeted object’s greater height relative to the camera’s location, as previous studies have highlighted that factors such as camera height and image quality can significantly affect detection accuracy (Corregidor-Castro et al., 2021; Gadhwal et al., 2023). In addition, our study achieved high performance levels for PB. These individual metrics combine to yield an impressive overall F1 score of at least 94% in all models and 95% in the optimal model (YOLOv8x-PB). Analyzing these results across the 4 model variants utilized for PB detection, it is clear that the YOLOv8x-PB model achieved overall higher precision, recall, mAP@0.50, mAP@0.50:0.95, and F1 score as the number of epochs increased, as shown in Figure 7.

Figure 7. Comparative analysis results of PB detection across various YOLOv7 and YOLOv8 models. PB = perching behavior; mAP = mean average precision.

An example detection result, showing perching birds with bounding boxes and confidence scores on the same image across the 4 YOLO model variants, is presented in Figure 8. Comprehensive analysis of the tested images confirms the remarkable performance of the YOLOv7x-PB and YOLOv8x-PB models in effectively detecting PB (Figure 8). The YOLOv8x-PB model’s higher recall, mAP@0.50, mAP@0.50:0.95, and F1 scores collectively highlight its ability to detect PB within CF housing conditions. Each incremental improvement in performance metrics was important in reducing false detections by the model (Bist et al., 2023d); therefore, every marginal increase in a performance metric holds significance within the context of the detection model. These findings confirm the model’s potential as a valuable tool for prompt and accurate PB detection, enhancing the efficiency of poultry management practices not only in CF housing but potentially also in other housing conditions for broilers and laying hens. A hedged inference sketch follows the figure.

Figure 8. Comparison of PB detection on the same image using 4 variants of YOLO models, where (A) YOLOv7-PB model, (B) YOLOv7x-PB model, (C) YOLOv8s-PB model, (D) YOLOv8x-PB model.
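As a usage illustration, the hedged sketch below runs a trained detector on a single pen image and reads back the perching bounding boxes through the Ultralytics predict API; the weight and image file names are hypothetical.

```python
from ultralytics import YOLO

model = YOLO("yolov8x_pb_best.pt")                    # hypothetical best-weights file
results = model.predict("pen_frame.jpg", conf=0.25)   # hypothetical test image

for r in results:
    for box in r.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()         # bounding-box corners in pixels
        print(f"perching @ ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), "
              f"conf {float(box.conf):.2f}")
```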

A comprehensive assessment of the F1-confidence curves of the YOLOv7-PB and YOLOv8-PB models for detecting perching behavior is given in Figure 9 below.

Figure 9. F1-confidence curve analysis among (A) YOLOv7-PB, (B) YOLOv7x-PB, (C) YOLOv8s-PB, and (D) YOLOv8x-PB models.

The results show the robust performance of the YOLOv8x-PB model in correctly detecting PB. Specifically, the peak F1 scores of the YOLOv7-PB, YOLOv7x-PB, and YOLOv8x-PB models were 0.95, while that of the YOLOv8s-PB model was slightly lower (0.94). This performance highlights the YOLOv8x-PB model’s proficiency at maintaining a fine balance between precision and recall across different PB instances. The strong performance of the YOLOv8x-PB model is of particular importance, with a high F1 score of 0.95 for detecting PB; an earlier study established that the higher the F1 score, the better the model’s performance (Bist et al., 2023d). The consistency in F1 scores across the various YOLOv7-PB and YOLOv8-PB models is evidence of the architectures’ strength and efficiency in precisely classifying PB. In addition, the YOLOv8x-PB model’s ability to achieve a precisely calibrated equilibrium between precision and recall ultimately contributes to heightened accuracy in PB detection. This performance carries considerable implications, highlighting the model’s potential to identify PB reliably and precisely from an early age of the bird, providing producers and researchers a tool that can contribute significantly to improving bird health and welfare through automatic detection of perching behavior. A sketch of how such an F1-confidence curve is computed is given below.
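To make the construction of Figure 9 concrete, the sketch below sweeps the confidence threshold over a toy set of detections and computes the F1 score at each point; the peak of this curve corresponds to the reported F1-confidence scores. All values are illustrative.

```python
import numpy as np

def f1_curve(dets, n_gt, thresholds=np.linspace(0.0, 1.0, 101)):
    """F1 at each confidence threshold; dets are (confidence, is_TP) pairs."""
    f1s = []
    for t in thresholds:
        kept = [is_tp for conf, is_tp in dets if conf >= t]
        tp = sum(kept)
        fp = len(kept) - tp
        fn = n_gt - tp
        p = tp / (tp + fp) if (tp + fp) else 0.0
        r = tp / (tp + fn) if (tp + fn) else 0.0
        f1s.append(2 * p * r / (p + r) if (p + r) else 0.0)
    return thresholds, np.array(f1s)

# Illustrative detections: 4 ground-truth perching instances in total.
dets = [(0.9, True), (0.8, True), (0.6, False), (0.4, True), (0.2, False)]
ths, f1s = f1_curve(dets, n_gt=4)
print(f"peak F1 = {f1s.max():.2f} at confidence {ths[f1s.argmax()]:.2f}")
```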

Confusion Matrix Analysis

The highest true positive rate for PB prediction (0.96) was shown by the YOLOv7-PB, YOLOv7x-PB, and YOLOv8s-PB models, while the YOLOv8x-PB model displayed a true positive rate of 0.93. A more insightful view of the confusion matrix analysis is given in Figure 10, which shows the confusion matrices in normalized form with the percentage of correct predictions. The model’s effectiveness at detecting and identifying positive instances from the datasets improves as the true positive count increases (Yang et al., 2023b). These results highlight the ability of the YOLOv7-PB, YOLOv7x-PB, and YOLOv8s-PB models to accurately detect PB, with consistently robust performance. This ability has considerable implications for efficient poultry health monitoring and emphasizes the potential of these models for precise PB detection.

Figure 10. Comparative performance results of YOLOv7 and YOLOv8 models based on confusion matrices of PB detection.

Training and Validation Loss Function

The training and validation box loss and class loss decreased as the number of epochs increased from 0 to 200 during model training, as shown in Figure 11. Increasing the number of epochs leads to a decrease in the loss function because, as the model goes through training, it iteratively adjusts its parameters to better fit the training data, resulting in improved predictive accuracy (Subedi et al., 2023a; Bist et al., 2023d). The mAP@0.50:0.95 value represents the average mAP across IoU thresholds from 0.50 to 0.95, providing a meaningful evaluation of detection accuracy. Lower box loss and classification loss correspond to higher accuracy in detecting PB. Among the 4 models used for PB detection, the YOLOv7x-PB model showed the lowest training and validation box loss at the initial phase of training, and its rate of decrease in box and classification loss with increasing epochs remained relatively small compared to the other models. The YOLOv8x-PB model had higher box and classification losses during the initial phases of training and validation, but its losses decreased at a higher rate with increasing epochs than those of the other models. Therefore, compared to the initial phases of training, the YOLOv8x-PB model substantially increased its rate of loss reduction at the final stage of training and validation, demonstrating its strong performance during training and validation.

Figure 11. Comparison of (A) Train/Box-Loss, (B) Train/Class-Loss, (C) Val/Box-Loss, and (D) Val/Class-Loss across various YOLOv7 and YOLOv8 models.

Performance Metrics Comparison of Optimal Model (YOLOv8x-PB)

The analysis shows that the YOLOv8x-PB model achieved high precision (94.80%; Figure 12) for detecting PB in CF housing during all growth phases except the starter phase (87.44%). The mAP@0.50 was highest for the peaking phase (97.66%), followed by the grower, layer, prelay, and developer phases, and lowest for the starter phase (86.72%). The mAP@0.50:0.95 for detecting PB was highest during the layer phase (62.74%), followed by the peaking, prelay, developer, and grower phases, and lowest for the starter phase (43.70%). Recall was highest during the peaking phase (94.62%), followed by the grower, prelay, layer, and starter phases, and lowest for the developer phase (84.41%). The lower precision during the starter phase could be due to the smaller body size of the birds, making perching behavior harder for cameras to capture from the 3 m ceiling height; detection precision increased as the birds grew older, because a larger body appears larger in images captured from the same height. With the optimal model (YOLOv8x-PB), we achieved a precision of 87.44% to 96.85%, recall of 84.41% to 94.62%, mAP@0.50 of 82.72% to 96.92%, and mAP@0.50:0.95 of 43.70% to 62.73% across all growth phases of laying hens.

Figure 12. Comparative analysis results of the optimal model (YOLOv8x-PB) on PB detection across various growth phases of laying hens in CF housing: (A) precision, (B) recall, (C) mAP@0.50, (D) mAP@0.50:0.95. PB = perching behavior; mAP = mean average precision.

Training and Validation Loss Functions of the Optimal Model during Growth Phases

The training and validation box loss and class loss of the optimal model (YOLOv8x-PB) are shown in Figure 13. The rates of decrease of the training box loss, class loss, validation box loss, and class loss show different patterns across growth phases. Overall, the training and validation box losses reveal a similar pattern: they first decreased at a higher rate as the number of epochs increased and then decreased more slowly, reaching their lowest points at 200 epochs. The training and validation class losses resembled each other but differed from the box losses: initially, the class losses decreased sharply, but as the number of epochs increased, the decrease slowed and followed a nearly straight line until 200 epochs. The increase in epochs leads to a decrease in the loss function as the model iteratively adjusts its parameters to better fit the training data and improve its predictive accuracy (Subedi et al., 2023a; Bist et al., 2023d). In particular, lower box loss and classification loss correspond to higher accuracy in correctly detecting PB. Among the growth phases, the peaking and layer phases showed the lowest Train/Box-Loss, Train/Class-Loss, Val/Box-Loss, and Val/Class-Loss.

Figure 13. Comparison of (A) Train/Box-Loss, (B) Train/Class-Loss, (C) Val/Box-Loss, and (D) Val/Class-Loss by the optimal model (YOLOv8x-PB) on PB detection across various growth phases of laying hens in CF housing.

Perching behavior in chickens can be affected by many factors, such as environmental conditions and body weight. The body weight of laying hens in this study was 1.77 kg (±0.17 kg; n = 800 hens) during the study. In addition, hens’ bone quality declines with age after peak production. The effects of aging and bone quality on perching behaviors, such as perching frequency and height, will be investigated in the future.

CONCLUSIONS

The YOLOv8s-PB model showed the highest precision, the YOLOv7-PB model the highest recall, and the YOLOv8x-PB model the highest mAP@0.50 and mAP@0.50:0.95 for detecting PB in CF housing conditions among the models evaluated. Based on these evaluation metrics, the YOLOv8x-PB model was considered the optimal model for detecting PB in CF housing systems. However, all models achieved a precision of at least 94% in detecting PB. With the optimal model (YOLOv8x-PB), we achieved a precision of at least 88.80%, recall of at least 81.70%, mAP@0.50 of at least 87.40%, and mAP@0.50:0.95 of at least 46.50% across all growth phases of laying hens. PB detection precision was lowest at the starter phase and increased as the birds aged; the highest detection precision (97.40%) was achieved for the peaking phase with the optimal model. This study provides a reference for CF producers showing that PB can be detected automatically with a precision of at least 94% using any of the 4 YOLO model variants used in this study, and accuracy can be further increased with frequent camera cleaning. The study highlights the benefits of the newest YOLO models, that is, YOLOv8x-PB, in accurately detecting PB with high precision. These findings provide CF layer producers a valuable tool for detecting PB from an early age of the bird to advance laying hen welfare in CF housing.