Monitoring activity index and behaviors of cage-free hens with advanced deep learning technologies


Animal activities and behaviors provide significant insights into the mental and physical well-being of poultry, serving as a key indicator of their health and subjective states (Yang et al., 2024a). For instance, predicting indoor particulate matter concentration in poultry houses can be informed by broiler activity and ventilation rates (Peña Fernández et al., 2019). Additionally, locomotor activity acts as a proxy for the gait of individual broilers, facilitating gait classification (van der Sluis et al., 2021). Currently, such activity is primarily monitored through manual methods, which are labor-intensive and prone to individual bias (Mao et al., 2023). With the world’s population projected to reach 9.5 billion by 2050 and the demand for animal products like meat, eggs, and milk expected to increase by 70% from 2005 levels, developing automated, precise systems for monitoring poultry activity becomes crucial (Guo et al., 2022a). This advancement is especially important for managing health and welfare efficiently under the constraints of limited natural resources such as fresh water, feed, and land (Yang et al., 2022; Bist et al., 2023a).

Automated measurement systems are increasingly vital for monitoring and promoting good welfare within the growing livestock industry. Over recent decades, advancements in digital imaging technologies have spurred the development of automated and precise animal activity tracking systems (Horna et al., 2023). In 1997, a pioneering image analysis method was introduced to quantify the behavioral responses of animals to their micro-environment using a camera system (Panasonic NV-FS200EC) and digitizer board (Turbo Pascal5) (Bloemen et al., 1997a). This method was initially tested on broiler chickens and pigs to measure their activity under different temperatures and has since been widely applied to monitor the behaviors of pigs and individually caged chickens (Oczak et al., 2013). Similarly, the eYeNamic system detects variations in broiler chicken activities by measuring the ratio of object pixels to background pixels via top-view cameras (Kashiha et al., 2013; Peña Fernández et al., 2018a). To enhance farm animal welfare, it is crucial to consider the needs of herds and flocks as well as individual animals.

In recent years, machine vision or deep learning has been applied in animal analysis, particularly for tracking and monitoring poultry’s activities and locomotion in complex environments (Leroy et al., 2006; Guzmán et al., 2013, 2016; Yang et al., 2024a, 2024b). As traditional methods often struggle with issues like occlusion, where chickens are obscured by objects, or background clutter that camouflages them, deep learning offers a robust solution. Using advanced neural networks, such as convolutional neural networks (CNN), these systems can learn from raw sensor data to detect, track, and classify chicken movements with minimal preprocessing. This capability is essential for accurately monitoring chicken behavior and welfare in dynamic and crowded settings, where changes in the environment or in the animals’ appearance can complicate tracking efforts. Based on CNN architectures, the you only look once (YOLO) + Multilayer Residual Module (MRM) algorithm was developed for detecting broiler stunning states, demonstrating the capability to process up to 180,000 broilers per hour with a high accuracy rate of 96.77% (Ye et al., 2020b). Additionally, other CNN networks such as Visual Geometry Group Network (VGGNet) and Residual Network (ResNet) have been employed to identify 4 common chicken diseases: Avian Pox, Infectious Laryngotracheitis, Newcastle Disease, and Marek’s Disease, distinguishing them from healthy birds (Quach et al., 2020). Exceptionally, when CNNs are combined with a Kinect sensor to recognize flock behavior, the accuracy reaches an impressive 99.17% (Pu et al., 2018). Therefore, the integration of CNNs into the poultry industry significantly enhances the automation of poultry welfare and health monitoring (Bist et al., 2023b).

However, previous research has not established a connection between the chicken flock activity index and welfare indicators such as lameness and piling. The objectives of this study were to (1) develop a poultry flock activity and behavior (i.e., piling and smothering) tracker using deep learning models; (2) compare the performance of popular CNN-based models, including YOLOv5, YOLOv8, ByteTrack, DeepSORT, and StrongSORT, in tracking chickens’ behaviors; and (3) evaluate the best model’s performance in potential applications such as detecting piling and smothering behaviors and monitoring flock activity, which could be indicative of footpad health.

MATERIALS AND METHODS

Experimental Design

In this study, a year-long experiment was conducted at the University of Georgia (UGA)’s Poultry Research Center (Latitude 33°54′23.0″N, Longitude 83°22′43.9″W), specifically within 4 cage-free facilities (labeled 13 to 16), each with dimensions of 7.3 m in length, 6.1 m in width, and 3 m in height, to rear 800 Hy-Line W-36 laying hens (200 in each facility). These hens were approximately 1 day old at the beginning of the experiment. The spatial design within each room was standardized, equipped with 6 suspended feeders and 2 watering kits. Additionally, a single portable A-frame hen perch was introduced to promote natural perching behavior and to reduce instances of piling by providing increased interactional space (Figure 1) (Gray et al., 2020). Pine shavings (5 cm depth) were spread on the floor as bedding. Nutrition was provided in the form of a soy-corn feed, freshly prepared bi-monthly at the UGA feed mill to ensure optimal quality and to prevent mildew. Water and feed were provided ad libitum. Videos were recorded when the birds were 16 to 24 wk of age. Environmental conditions were controlled adhering to the Hy-Line W-36 commercial layer management guidelines. Parameters maintained included a relative humidity range of 40% to 60%, an air temperature bracket of 21°C to 23°C, a light intensity of 20 lux, and a photoperiod of 19 h of light to 5 h of darkness (19L:5D). Routine monitoring of the hens’ growth and environmental conditions was carried out daily in accordance with the UGA Poultry Research Center Standard Operating Procedure. This project received approval from the Institutional Animal Care and Use Committee (IACUC) at the University of Georgia, ensuring adherence to ethical guidelines for animal use and management.

Data Collection

To systematically document the hens’ activity and evaluate animal behaviors, each room was outfitted with 6 waterproof HD cameras (Model PRO-1080MSFB, Swann Communications, Santa Fe Springs, CA). The devices were strategically positioned to maximize coverage, with 5 mounted on the ceiling at 90-degree angles and one on the side wall at a 45-degree angle. Each device covered approximately one-fourth of the entire floor area. The cameras were set to record at a resolution of 1,440 × 1,080 pixels and a frame rate of 18 FPS. The installation height was standardized at 3.05 m for ceiling cameras and 1.68 m for side wall cameras. Each continuous video recording consisted of 15-minute segments, captured 24 h a day. To ensure the integrity of the recorded data, the camera lenses were maintained with a lens cleaning cloth to remove dust and other particulates, with cleaning scheduled on a weekly basis. Captured footage was initially saved to a video recorder (Swann Company, Santa Fe Springs, CA) located on-site. After recording, the data was regularly transferred to high-capacity hard disk drives (Western Digital Corporation, San Jose, CA) and stored securely at the data hub within the Department of Poultry Science at the UGA.

Data Labelling

Video selection was conducted manually to ensure the inclusion of diverse activity levels and various piling behaviors of the chickens. This approach aimed to validate the analytical method’s applicability across different developmental stages, specifically for both hens and pullets. Random images were obtained from hard disk drives containing both wall and ceiling video recordings; avian imagery was randomly extracted and converted into JPG format using the Free Video to JPG Converter software. The “total function” option was employed during the conversion process to facilitate the acquisition of random images. After conversion, blurry images were discarded. The remaining 1,500 clear photographs underwent a labeling process using Labeling Windows_v1.6.1 software, with 900 images allocated for training, 300 for validation, and 300 for testing. For annotation, only chickens with at least one-third of their body visible within the frame were delineated with bounding boxes. Chickens in different postures were all annotated with the single class “chicken,” without consideration of their postures; an example of the label format is sketched below.
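
For reference, YOLO-family detectors train on text labels that store one normalized bounding box per line. The conversion below is a minimal sketch with hypothetical box coordinates, not values from the study’s dataset; only the 1,440 × 1,080 image size is taken from the paper.

# Illustrative conversion of a pixel-space bounding box to a YOLO label line
img_w, img_h = 1440, 1080  # camera resolution reported in the study
x1, y1, x2, y2 = 700, 410, 823, 540  # hypothetical chicken bounding box (pixels)
cx, cy = (x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h  # normalized center
w, h = (x2 - x1) / img_w, (y2 - y1) / img_h  # normalized width and height
print(f"0 {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")  # class 0 = chicken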

Model Innovations for Chicken Tracking

Model Selection

In chicken detection, deep learning represents the forefront of technology, employing advanced CNNs and anchor-based detection strategies to identify regions of interest with high precision (Kim and Lee, 2020). The introduction of the YOLO framework was a game-changer, propelling the use of regression methods for chicken detection. YOLO architectures simplify the process by integrating feature extraction and object classification into a single, streamlined regression network (Figure 2). This unified approach is composed of 3 main elements: the backbone, neck, and head (Jocher, 2020).

Figure 2. The network structure of YOLO.

The backbone serves as the foundational axis of the network. It comprises various sub-models designed to extract fundamental features from the input data. The innovation of YOLO includes the FOCUS mechanism, which consolidates information about the width and height of the features into the channel space (Elmessery et al., 2023). This addition, along with the enhanced mosaic data augmentation technique, contributes to the improved accuracy of the YOLO model, especially in the detection of smaller objects (Dadboud et al., 2021).

The neck of the model extends the capability of the backbone by incorporating a feature pyramid module known as PAFPN (Zhang et al., 2021b). This component optimizes the processing of different scales and sizes of objects, ensuring consistent recognition across varying dimensions.

The head of the model, through a series of convolutional layers, maintains the essential features of the target, moderating the increase in the number of feature maps and thus streamlining computational demands (Liang et al., 2022).

YOLOv5 and YOLOv8 are the most prevalent versions in the YOLO series (Hussain, 2023). Consequently, this study employs these iterations as foundational models for chicken detection, subsequently integrating them with various tracking algorithms.
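
As an illustration of how such a detector is invoked at inference time, the sketch below uses the ultralytics Python package with stock YOLOv8 weights; the weight file, frame name, and confidence threshold are assumptions for the example rather than the study’s training configuration.

from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # stock weights; the study fine-tuned on 1,500 labeled images
results = model.predict('chicken_frame.jpg', conf=0.25)  # hypothetical video frame
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    center = ((x1 + x2) / 2, (y1 + y2) / 2)  # bounding-box center used downstream for distance analysis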

Advanced Algorithms for Chicken Multiobject Tracking

Multiobject tracking (MOT) is a critical component of computer vision, tasked with following multiple objects across recorded footage or in live streams (Guo et al., 2022b). For this study, 3 advanced MOT algorithms were chosen: deep simple online and real-time tracking (DeepSORT), StrongSORT, and ByteTrack, each known for its precision and robustness. These algorithms leverage deep learning to detect objects and track their movements effectively, providing reliable performance in dynamic and cluttered settings. Figure 3 presents the workflow of MOT.

Figure 3. Chicken tracking workflow: detection and multiobject tracking.

DeepSORT: An advanced computer vision tracking algorithm that extends simple online and real-time tracking (SORT) by integrating a deep learning-based appearance descriptor. This enhancement minimizes identity switches, improving the tracking of objects such as chickens by maintaining a unique identifier for each. SORT itself employs Kalman filters and the Hungarian algorithm for tracking, encompassing detection, state estimation, data association, and the management of track identities (Wojke et al., 2017).

StrongSORT: StrongSORT enhances tracking capabilities by employing a bag of tricks (BoT) appearance model with a ResNeSt50 backbone, pretrained on the DukeMTMC-reID dataset, for robust appearance feature extraction. This approach, superior to the simpler CNN used in DeepSORT, excels in differentiating individual features, which is crucial for the precise tracking of chickens. Additionally, StrongSORT incorporates an enhanced correlation coefficient (ECC) correction algorithm to adapt to varying camera viewpoints (Du et al., 2023).

ByteTrack: ByteTrack distinguishes itself in object tracking by treating every detection as a fundamental unit, akin to a byte in programming, rather than limiting attention to high-scoring boxes. This facilitates a more robust association, recovering objects with low detection scores through a secondary association pass, as sketched below. ByteTrack’s simplicity is a key advantage: it eschews complex appearance and motion models, relying instead on precise detections and byte-level association, which proves particularly effective in chicken tracking scenarios. Its efficient design allows for real-time tracking performance on a single GPU, enhancing its practicality for live monitoring (Zhang et al., 2021a).
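
The two-pass association at the heart of ByteTrack can be illustrated with the following minimal sketch; the IoU helper, score threshold, and box representation are assumptions for the example, not the published implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match(tracks, dets, iou_thresh=0.3):
    # Hungarian assignment on an IoU cost matrix; returns matched pairs
    # plus the indices of unmatched tracks and detections
    if not tracks or not dets:
        return [], list(range(len(tracks))), list(range(len(dets)))
    cost = np.array([[1.0 - iou(t, d) for d in dets] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_thresh]
    um_t = [i for i in range(len(tracks)) if i not in {r for r, _ in pairs}]
    um_d = [j for j in range(len(dets)) if j not in {c for _, c in pairs}]
    return pairs, um_t, um_d

def byte_associate(tracks, boxes, scores, high=0.6):
    # Pass 1: associate existing tracks with high-confidence detections.
    # Pass 2: leftover tracks claim low-confidence detections, recovering
    # birds partially occluded by feeders or flockmates.
    high_dets = [b for b, s in zip(boxes, scores) if s >= high]
    low_dets = [b for b, s in zip(boxes, scores) if s < high]
    first, um_tracks, _ = match(tracks, high_dets)
    second, _, _ = match([tracks[i] for i in um_tracks], low_dets)
    return first, second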

Table 1. Algorithm 1-distance calculation summary.

Algorithm 1-distance calculation and visualization for chicken flock activity

import math
import matplotlib.pyplot as plt

def euclidean_distance(pos1, pos2):
    # Straight-line distance between two bounding-box centers
    return math.hypot(pos1[0] - pos2[0], pos1[1] - pos2[1])

def calculate_average_distance(chicken_positions):
    # Mean pairwise Euclidean distance over all tracked chickens in a frame
    total_distance, pair_count = 0, 0
    for id1, pos1 in chicken_positions.items():
        for id2, pos2 in chicken_positions.items():
            if id1 != id2:
                total_distance += euclidean_distance(pos1, pos2)
                pair_count += 1
    return total_distance / pair_count if pair_count else 0

def visualize_activity(times, average_distances):
    plt.figure(figsize=(10, 6))
    plt.plot(times, average_distances, marker='o', linestyle='-', color='blue')
    plt.title('Average distance between chickens over time')
    plt.xlabel('Time (s)')
    plt.ylabel('Average distance (pixels)')
    plt.grid(True)
    plt.savefig('./average_distance_over_time.png')
    plt.show()
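
As a usage sketch, the per-frame input to Algorithm 1 is a dictionary mapping tracking IDs to bounding-box centers; the positions below are hypothetical values for illustration.

chicken_positions = {1: (120.0, 340.5), 2: (410.2, 88.0), 3: (395.7, 512.3)}  # track ID -> box center (pixels)
print(calculate_average_distance(chicken_positions))  # mean of the pairwise distances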

These experiments, which trained for 300 epochs, ran on a system equipped with an Intel i7-8750 CPU at 2.20 GHz, an NVIDIA RTX 3090 GPU, and Python 3.7.8, using PyTorch 1.13.0. The software environment included OpenCV 4.3.0 and CUDA 11.1, with PyCharm as the coding environment, all operating on Ubuntu 20.04.

Defining Average Distance

The average distance in this study refers to the mean Euclidean distance (i.e., the distance between 2 points mathematically) between each pair of chickens within a flock (birds in the image), measured in pixels. It is calculated across consecutive video frames to assess the spatial dynamics and activity level of the flock over time. This metric is crucial for understanding social behaviors, space utilization, and overall activity within the open litter environment. Table 1 outlines the detailed operations (Guzmán et al., 2013).

Euclidean distance calculation: The Euclidean distance between the center points P1(x1, y1) and P2(x2, y2) of 2 bounding boxes in a 2-dimensional space is calculated using the following equation:

$D = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$ (1)

where D is the shortest distance between the 2 points in a video frame, x is the horizontal coordinate, and y is the vertical coordinate.

Average distance calculation: Given a set of n distances D = {d1, d2, …, dn}, the average distance Davg is computed as:

$D_{avg} = \frac{1}{n} \sum_{i=1}^{n} d_i$ (2)

where Davg represents the average distance (the mean of all distances being considered), and the summation from i = 1 to n begins with the first distance d1 and continues through the nth distance dn.

In this study, ‘speed’ refers to the absolute change in the average distance between detected chickens from one second of video to the next, measured in pixels per second. This serves as a proxy for the activity level of the chickens, where larger changes in distance imply more activity. Algorithm 2 presents this procedure (Table 2).

Table 2. Algorithm 2-speed calculation summary.

Algorithm 2-speed calculation and visualization

import numpy as np
import matplotlib.pyplot as plt

def calculate_speed(average_distances_per_second):
    # Absolute second-to-second change in average distance (pixels/second)
    return [abs(average_distances_per_second[i] - average_distances_per_second[i - 1])
            for i in range(1, len(average_distances_per_second))]

# distances_per_second maps each second of video to its list of pairwise distances
average_distances_per_second = [np.mean(distances) if distances else 0
                                for distances in distances_per_second.values()]
times = list(distances_per_second.keys())
speeds = calculate_speed(average_distances_per_second)

plt.figure(figsize=(10, 6))
plt.plot(times[1:], speeds, marker='o', linestyle='-', color='green', label='Speed (pixels/second)')
plt.title('Change in Average Distance as a Proxy for Speed')
plt.xlabel('Time (s)')
plt.ylabel('Speed (pixels/second)')
plt.grid(True)
plt.legend()
plt.show()

Average distance for a second: Let Et be the set of all pairwise average distances at second t. The average distance at second t, denoted Ēt, is calculated as:

$\bar{E}_t = \frac{1}{|E_t|} \sum_{d \in E_t} d$ (3)

where |Et| is the number of distance measurements included in the set for second t, and the summation runs over all distances d in the set Et.

Speed calculation: The speed S between 2 consecutive seconds t and t - 1, where t > 0, is calculated as the absolute change in average distances:

$S_t = |\bar{E}_t - \bar{E}_{t-1}|$ (4)

where Ēt represents the average distance between the chickens at second t, and Ēt-1 represents the average distance between the chickens at second t - 1.

In our study, we also used average distance to quantify piling and smothering behaviors. Piling and smothering are abnormal behaviors in cage-free laying hens: piling occurs when birds crowd together, while smothering, which often leads to chicken deaths, includes stress-induced panic smothering from acute disturbances, preference-based smothering in preferred spaces, and recurring smothering in which birds cluster without apparent cause. For manual observation of piling and smothering behavior, we checked our video records for scenarios in which a group of chickens aggregated together for more than 1 min. Outliers in the average distance measurements of the flock were identified using a 15% boundary: a value was considered a normal average distance if it fell within 15% of the mean of its neighboring data points. We used 1-s intervals because standard speed metrics are expressed per unit of time (e.g., meters per second); attempts to use per-frame intervals produced noisy data with unacceptable deviations. Each data point was compared to its neighboring points within the same time frame, and points exhibiting extreme fluctuations exceeding 15% were marked as outliers. This threshold was determined by testing various boundaries (e.g., 5%, 10%) and finding that the 15% threshold effectively eliminated outliers. The primary cause of these outliers was the high-speed movement of some chickens, such as flying or running, which introduced significant deviations from the average (Guzmán et al., 2016).

Performance Evaluation

To effectively assess the performance of the chicken tracking models, the primary metrics used were multiple object tracking accuracy (MOTA), identification F1 score (IDF1), identity switches (IDS), and frames per second (FPS).

MOTA consolidates false positives (FP), false negatives (FN), and identity switches into a single accuracy score. IDF1 gauges the correct identification of individual chickens, considering both detection and identification precision and recall. IDS counts the number of times a tracked chicken is incorrectly identified after being previously correctly identified, which is crucial for understanding the consistency of tracking. Lastly, FPS measures the processing speed, indicating how many frames of video the model can handle per second, which is vital for real-time tracking applications. By focusing on these 4 metrics, the model’s performance in terms of accuracy, identification robustness, tracking consistency, and operational speed can be evaluated.

$\mathrm{MOTA} = 1 - \frac{\mathrm{FN} + \mathrm{FP} + \mathrm{IDS}}{\mathrm{GT}}$ (5)

$\mathrm{IDF1} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ (6)

where GT is the total number of ground truth objects across all frames, precision is defined as the ratio of the number of correctly identified trajectories to the number of identified trajectories, and recall is defined as the ratio of the number of correctly identified trajectories to the total number of ground truth trajectories.

$\mathrm{FPS} = \frac{N_{frames}}{T_{processing}}$ (7)

where Nframes is the number of frames processed and Tprocessing is the total processing time in seconds.
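
As a worked sketch of these definitions (the counts below are invented for illustration, not results from the study):

def mota(fn, fp, ids, gt):
    # Eq. (5): tracking errors normalized by total ground-truth objects
    return 1 - (fn + fp + ids) / gt

def idf1(precision, recall):
    # Eq. (6): harmonic mean of identification precision and recall
    return 2 * precision * recall / (precision + recall)

print(mota(fn=120, fp=95, ids=41, gt=4000))  # ~0.936, i.e., about 94% MOTA
print(idf1(precision=0.91, recall=0.89))     # ~0.90 IDF1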

RESULTS AND DISCUSSION

Model Comparison in Tracking Chickens’ Activities

Six individual experiments on chicken tracking were conducted to discover the optimal tracker for monitoring birds’ activity. The evaluation compared models based on the YOLO architecture, enhanced with tracking algorithms such as DeepSORT, StrongSORT, and ByteTrack (Table 3). The YOLOv5 models, combined with DeepSORT, StrongSORT, and ByteTrack, displayed commendable results (Figure 4). The YOLOv5+DeepSORT model achieved a MOTA of 80%, an IDF1 score of 78%, processed at 25 FPS, and recorded 50 identity switches. YOLOv5+StrongSORT improved on these results with a MOTA of 85%, an IDF1 score of 83%, fewer identity switches (35), and a slightly lower FPS of 22. The integration of ByteTrack further adjusted the trade-off, with YOLOv5+ByteTrack exhibiting an 83% MOTA, 81% IDF1, 40 identity switches, and the highest FPS of 28 among the YOLOv5 series.

Table 3. The summary of model comparison.

Model               MOTA  IDF1  IDS  FPS
YOLOv5+DeepSORT     80%   78%   50   25
YOLOv5+StrongSORT   85%   83%   35   22
YOLOv5+ByteTrack    83%   81%   40   28
YOLOv8+DeepSORT     94%   90%   41   27
YOLOv8+StrongSORT   89%   85%   38   23
YOLOv8+ByteTrack    84%   88%   43   24

Figure 4. Comparison of models using radar chart.

A notable advancement was seen with the introduction of YOLOv8+DeepSORT, which significantly elevated the MOTA to 94% and IDF1 to 90%, with a marginal increase in identity switches to 41, while maintaining a good FPS rate of 27. The combination of YOLOv8 with StrongSORT also performed well, achieving a MOTA of 89%, an IDF1 of 85%, 38 identity switches, and 23 FPS. The YOLOv8+ByteTrack variant showed a unique outcome, with an exceptional IDF1 score of 88%, albeit with a slightly lower MOTA of 84%, 43 identity switches, and an FPS of 24 (Figure 5, Figure 6, Figure 7).

Figure 5. MOTA comparison results for different trackers based on deep learning.

Figure 6. IDF1 comparison results for different trackers based on deep learning.

Figure 7. ID switches (IDS) comparison results for different trackers based on deep learning.

These results suggest that the newer YOLOv8 models, when combined with advanced tracking algorithms, offer superior tracking and identification accuracy with a reasonable processing speed, marking a significant improvement in the field of chicken tracking.

The central goal of this study was to identify an ideal tracking model that would enable precise monitoring of bird activity in cage-free poultry farms, which present unique challenges due to the birds’ unrestricted movement patterns (Neethirajan, 2022). The assessment of the YOLO architecture, enhanced with advanced tracking algorithms DeepSORT, StrongSORT, and ByteTrack, revealed that each combination has merits that could be practical for real-world applications.

Beginning with the YOLOv5 series, each model’s integration with tracking algorithms showcased promising results. The YOLOv5+StrongSORT variant exhibited enhanced performance with a higher MOTA and IDF1, coupled with fewer identity switches, suggesting a more stable tracking under varied activity levels. However, it compromised on the processing speed, a trade-off that requires consideration depending on real-time monitoring needs. YOLOv5+ByteTrack balanced performance and speed, offering an improved FPS rate, thus aligning more closely with the requirements for real-time application without a substantial drop in accuracy. The advent of YOLOv8 models brought significant improvements. The YOLOv8+DeepSORT pairing reached impressive heights in accuracy with MOTA of 94% and IDF1 of 90%, with a manageable increase in identity switches, which is indicative of the model’s robustness in distinguishing individual birds effectively. This aspect is particularly important in monitoring the welfare of poultry, as flock tracking allows for the detection of flock activity patterns over time. On the other hand, YOLOv8+StrongSORT and YOLOv8+ByteTrack offered a nuanced view of the trade-offs between accuracy, identity retention, and processing speed, reflecting the complex balance that tracking algorithms must strike in dynamic environments such as cage-free poultry farms.

Despite these technological advancements, challenges in precision tracking persist. For instance, false detections can occur when chickens are occluded by feeders, drinking equipment, and other installations, a complication previously observed in broiler chicken houses (Guo et al., 2020). Additionally, the dusty conditions within cage-free poultry houses contribute to this issue, as dust particles in the air and on the camera lenses can lead to inaccuracies (Yang et al., 2023). Furthermore, the camera view poses another issue, as hens frequently enter and leave the field of view, which may prevent the model from detecting all chickens in the house consistently.

In summary, the sophisticated interplay between tracking algorithm development and environmental management is crucial for advancing the analysis of flock activity indices. Maintaining a low-dust environment and simplifying the infrastructure in cage-free houses can significantly enhance the accuracy of detection systems.

Average Distance Detection Within a Chicken Flock

To rigorously evaluate the advanced YOLOv8+DeepSORT model’s capability in monitoring cage-free poultry environments, we conducted an analysis using a 73-s video to ascertain the model’s efficacy in determining the average distance among the flock. In this video, the model successfully identified a maximum of 91 chickens. The calculated average distance ranged between 590.75 and 892.95 pixels, as illustrated in Figure 8. During the initial 49 s of the footage, the chickens exhibited aggregation behavior, resulting in an increasing average distance. After this period, most of the chickens moved out of the camera’s field of view. The remaining chickens, distributed near corners or feeders, showed a decrease in average distance. This dynamic was captured in the visualized results, and corresponding images highlighting detection overlays are presented in Figure 9.

Figure 8. Temporal variations in the spatial distribution of chickens in a cage-free environment.

Figure 9. Integrated approach to poultry behavior study: original video (a), flock network analysis (b), and individual tracking with temporal distance data (c). The number following “ID” represents the tracking ID, and the number following “bird” indicates the confidence score.

In the analysis of average distance measurements within a chicken flock, outliers were identified within each second, marked in red in Figure 10. These outliers were flagged for extreme fluctuations exceeding 15% relative to their neighboring data points within the same second (Kwak and Kim, 2017). Outliers can significantly skew the calculated average distances, leading to erroneous estimations in subsequent speed detection analyses; they mainly stem from high-speed chickens, such as flying or running birds. To mitigate this issue, the algorithm described in Table 4 was employed to filter out these outliers and reduce noise in the distance measurements. After the removal of outliers, the detection graph becomes smoother and less variable: the range of values narrows from 590.75–892.95 pixels to 641.72–892.95 pixels. The minimum value shifts by 7.94%, while the maximum value remains unchanged, providing a more accurate and reliable estimation of the distances within the flock per second.

Figure 10. Temporal variations in the spatial distribution of chickens in a cage-free environment (filtered).

Table 4. Algorithm 3-distance filtering summary.

Algorithm 3-distance filtering for noise reduction

def filter_distances(average_distances, times):
    # Keep points within ±15% of the mean of their two temporal neighbors;
    # everything else is reported as an excluded outlier
    filtered_distances, filtered_times = [], []
    excluded_distances, excluded_times = [], []
    for i in range(1, len(average_distances) - 1):
        prev_dist = average_distances[i - 1]
        next_dist = average_distances[i + 1]
        current_dist = average_distances[i]
        avg_neighbor_dist = (prev_dist + next_dist) / 2
        margin = 0.15 * avg_neighbor_dist
        if abs(current_dist - avg_neighbor_dist) <= margin:
            filtered_distances.append(current_dist)
            filtered_times.append(times[i])
        else:
            excluded_distances.append(current_dist)
            excluded_times.append(times[i])
    return filtered_distances, filtered_times, excluded_distances, excluded_times
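
Applied to the per-second series produced by Algorithms 1 and 2 (variable names carried over from those listings), a usage sketch is:

filtered_d, filtered_t, excluded_d, excluded_t = filter_distances(average_distances_per_second, times)
visualize_activity(filtered_t, filtered_d)  # re-plot the smoothed average-distance series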

Average Speed Detection Within a Chicken Flock

In terms of the flock’s locomotion, the detected average speed within the chicken population ranged from 0 to 97 pixels per second. Observation indicated a stable velocity pattern during periods of flock aggregation. Notably, after the 49-s mark, an incremental increase in speed was observed as chickens started to move out of the camera’s field of view. The entire analysis is shown in Figure 11.

Figure 11. Fluctuations in the velocity of a free-range chicken flock over time.

For the detection of average speed, red points represent data points excluded as noise during the per-second summation. After removing outliers, the overall speed profile became more stable. For instance, in Figure 11, around 50 s there was a drastic fluctuation within 3 s in the unfiltered data; this fluctuation was stabilized after filtering, representing a more natural progression in the movement of a large group of chickens, with reduced aggression within the flock. Additionally, from 0 to 10 s, the highest speed detection points were lowered and moved closer to the other data points within this time range, which more accurately reflects the movement speed of chickens (Figure 12). This enhancement in the speed detection graph better aids in identifying movement patterns within chicken flocks.

Figure 12. Fluctuations in the velocity of a free-range chicken flock over time (filtered).

Comparison With Related Research

Our study can be compared to prior work on flock activity estimation using 4 criteria: (1) accuracy across different postures, (2) potential for automation, (3) the level of detail in detection, and (4) capability to assess flock activity. Table 5 delineates how each method fares against these standards. Regarding posture variation, the methods of Bloemen et al. (1997a) and Sherlock et al. (2010) rely on calculating the pixels attributed to ‘animals’ within predefined gridlines, leading to inaccuracies when chickens change from dust bathing to walking due to the significant reduction in visible body size (Bloemen et al., 1997a; Sherlock et al., 2010). In contrast, our method, along with that of Neethirajan (2022), employs deep learning detection bounding boxes to track the central point of each chicken, thereby mitigating errors associated with pose changes (Neethirajan, 2022).

Table 5. The summary of comparable experiments.

Study                   Activity index  Unaffected by posture  Automation  Identification level  Flock activity
Bloemen et al. (1997a)  Yes             No                     No          3 mm                  Yes
Sherlock et al. (2010)  Yes             No                     No          Individual chicken    No
Neethirajan (2022)      Yes             Yes                    Yes         Pixel                 No
This study              Yes             Yes                    Yes         Pixel                 Yes

Concerning automation, the first 2 studies in Table 5 require manual oversight to count birds or differentiate between animal pixels and their surroundings. However, the latter 2 studies utilize pretrained deep learning models that independently track chickens, allowing for automated activity calculation.

In terms of identification precision, Bloemen et al. (1997a) achieved a 3-mm accuracy in detecting chickens within images. Building on this method, a dynamic mathematical model was developed to predict how broilers respond to step-wise light intensity changes in terms of activity (Kristensen et al., 2006). Additionally, individual chicken activity was correlated with leg health. However, manual observation is limited by the inability of human analysts to discern minute changes over extended periods, restricting identification to no more than 15 subjects. Conversely, the deep learning models in our study can track over 48 chickens simultaneously with pixel-level accuracy. While both studies can detect such detail, our research extends up to 72 s of continuous tracking, surpassing the 40-s tracking at free-range farms reported by Neethirajan (2022). While our approach provides detailed tracking, it should be noted that extremely high spatial accuracy may have limited biological relevance; even minor pixel-level displacements of the birds could register as significant movements in the context of their behavior. Therefore, while our method provides valuable insights, its practical application should consider these limitations.

As for flock activity, although Bloemen et al. (1997a) could calculate an activity index, it depends on human observation, which cannot ensure precision due to potential observational errors. Our study, on the other hand, can monitor changes in distance and velocity, visualizing the activity index based on the average distance within the entire flock and concurrent speed variations, all while maintaining high precision. In conclusion, our research underscores deep learning models as the most effective method for analyzing the activity of cage-free chickens across a variety of postures and behaviors. These computational approaches maintain high precision in tracking and analyzing activity regardless of posture because we track the middle of each chicken based on the center of its bounding box. This technological advancement represents a significant leap over traditional manual methods, which are limited by human observational constraints. Our methodology sets a new standard in poultry activity index research, providing a scalable and reliable framework for future studies.

Detection of Piling and Smothering Behaviors

Piling and smothering are abnormal behaviors in cage-free laying hens: piling happens when birds crowd together, and smothering is the serious condition that results from prolonged crowding, often leading to the death of chickens (Gray et al., 2020). Three types of smothering are most common among laying hens. Stress-induced panic smothering occurs due to acute disturbances that trigger fear, causing hens to pack densely together and leading to high mortality rates (Barrett et al., 2014). Preference-based smothering, in nest boxes or warm places, reflects the hens’ choice to occupy the same preferred space, causing varying levels of mortality depending on the degree of overcrowding in preferred locations. Seemingly reasonless, recurring smothering involves slow-moving groups of birds that cluster together without apparent cause, persisting throughout the laying period and resulting in lower mortality rates (Winter et al., 2021). The most apparent sign of piling and smothering is therefore the crowded posture, which corresponds to a lower average distance between chickens. In our study, for example, piling and smothering occurred most frequently when strangers entered the chicken house to collect eggs; the chickens piled up due to their fear of unfamiliar people (Figure 13A), and they also gathered in preferred resting places at night (Figure 13B). Under these 2 piling and smothering scenarios, the average distance between chickens was 255.99 pixels and 456.78 pixels, respectively, at least 28.82% lower than in normal situations (the normal range of average distance within the flock is 641.72 to 892.95 pixels, as discussed above; we compared the minimum normal value, 641.72 pixels, with 456.78 pixels to obtain the 28.82% reduction, based on our 73-s continuous monitoring of normal activities). A detection sketch based on this threshold follows below. Additionally, when piling and smothering occur in only a portion of the camera view, region-based detection can identify these behaviors in high-risk areas such as side walls, doors, and nesting boxes (Ye et al., 2020a; Bist et al., 2023b). Moreover, in the absence of clear fear-inducing elements in the camera footage, environmental data can be combined to investigate potential causes of piling and smothering. According to the Hy-Line W-36 Commercial Layers Management Guide, chicks are prone to cluster together in response to cold temperatures, and uneven ventilation or light distribution within the enclosure may prompt chicks to congregate in specific areas, seeking to avoid drafts or noise (Hy-Line, 2020).
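
A minimal sketch of such a threshold-based piling detector follows. The 641.72-pixel normal minimum, 28.82% drop, and 1-min duration rule are taken from the text; the function name and the sweep logic are assumptions for illustration, not the study’s implementation.

def detect_piling_events(avg_distances, times, normal_min=641.72, drop=0.2882, min_duration=60):
    # Flag sustained spans where the flock's average distance stays below
    # the normal minimum by at least the observed piling margin (28.82%)
    threshold = normal_min * (1 - drop)  # about 456.8 pixels
    events, start = [], None
    for t, d in zip(times, avg_distances):
        if d <= threshold:
            if start is None:
                start = t
        else:
            if start is not None and t - start >= min_duration:
                events.append((start, t))  # aggregation lasted more than 1 min
            start = None
    if start is not None and times and times[-1] - start >= min_duration:
        events.append((start, times[-1]))  # event still ongoing at video end
    return events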

Figure 13. The detection of piling behavior in cage-free houses: A shows piling due to stressful conditions, while B depicts piling resulting from birds gathering in a preferred location.

Compared with other observation methods, such as manual observation and labeling piling behavior to train CNN models, our method can serve as a real-time detector without human observation, providing a summary of the location, frequency, and duration of piling incidents (Campbell et al., 2016; Jensen et al., 2024). Moreover, our approach overcomes the shortcomings of labeling groups of chickens as exhibiting piling behavior, which may cause errors when the entire flock is densely packed, making the whole flock appear crowded.

Measuring Activity Levels in Poultry Flocks

According to the speed detection data presented in Figure 12, activity levels can be classified into 3 categories: low (0–30 pixels/s), medium (30–50 pixels/s), and high (50–80 pixels/s); a classification sketch is shown below. These levels correspond to the different flock activity levels observed throughout the detection period and are consistent with manual observations (Figure 14). Continuous activity levels of chickens can serve as indicators of the overall health status of the flock. For example, research from the UK has demonstrated that turkey poults with high levels of footpad dermatitis (FPD) exhibit reduced walking speeds (Wyneken et al., 2015). Similarly, studies indicate that broiler chickens with lameness walk significantly slower compared to their healthy counterparts. Observed improvements in gait speed following the administration of carprofen, an analgesic, further corroborate this finding (McGeown et al., 1999). Consequently, walking speed can serve as a reliable indicator of foot issues. However, there is a limitation when chickens jump up and down from perches: the horizontal movement distance is sometimes small, requiring an additional depth sensor on the camera to monitor such changes. By utilizing technology to detect activity levels, we can identify flocks with consistently lower walking speeds, suggesting potential FPD issues. Additionally, deviations in broiler activity patterns have been correlated with hock burns (R² = 0.70) (Peña Fernández et al., 2018b). Thus, deviations in activity levels in our study could also be used to monitor hock burns by calculating variations in activity levels. This approach not only enhances our understanding of avian health dynamics but also enables proactive management practices to mitigate potential welfare issues within poultry flocks.
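
A minimal sketch of the banding, with cut-offs taken from the categories above (the function name is illustrative):

def classify_activity(speed_px_per_s):
    # Bands follow the study's categories: low, medium, high
    if speed_px_per_s < 30:
        return 'low'
    if speed_px_per_s < 50:
        return 'medium'
    return 'high'

levels = [classify_activity(s) for s in speeds]  # 'speeds' as produced by Algorithm 2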

Figure 14. Visualization of activity levels in cage-free houses, where the blue bar represents the range of low activity level, the green bar represents the range of moderate activity level, and the red bar represents the range of high activity level.

CONCLUSIONS

In this study, multiple hen trackers were developed using YOLO networks combined with ByteTrack, StrongSORT, and DeepSORT to monitor the activity index of cage-free hens. Results indicate that the MOTA for the trackers ranged from 80% to 94%, with the highest performance observed in the YOLOv8+DeepSORT configuration. This tracker can detect piling and smothering behaviors, as well as footpad issues, in cage-free hen environments. The trackers successfully mitigate errors stemming from changes in hen postures and can quantify flock activity by calculating average distance and speed within a chicken flock. The findings offer a practical and visual approach for monitoring the activity index of cage-free hens without the need for human observation. This study is among the pioneers in integrating deep learning technologies with statistical methods to monitor the activity level of cage-free hens, providing valuable information for assessing animal health and welfare.
