METHOD AND SYSTEM FOR DETECTING SOW ESTRUS UTILIZING MACHINE VISION
20250318917 · 2025-10-16
Inventors
CPC classification
G06V10/12
PHYSICS
G06V10/26
PHYSICS
A61D17/002
HUMAN NECESSITIES
International classification
A61D17/00
HUMAN NECESSITIES
G06V10/12
PHYSICS
G06V40/10
PHYSICS
Abstract
Accurate estrus detection of sows is critical to achieving a high farrowing rate and maintaining good reproductive performance. However, the conventional method of estrus detection is a back pressure test performed by farmers, which is time-consuming and labor-intensive with a significant degree of error. This disclosure describes an automated estrus detection method that monitors the change in vulva swelling around estrus using a three-dimensional measurement device, e.g., a LiDAR camera, which includes an RGB camera and a depth camera. This sow estrus detection improves accuracy and efficiency, reduces labor and cost, and improves the sustainability of swine production through a data-driven decision-making system based on a robotic cyber-physical system (CPS) that can utilize detection based on a deep learning model.
Claims
1. A system for detecting sow physical change around estrus comprising: a control unit including at least one processor and at least one memory; at least one three-dimensional measurement device; and a motorized movable mechanism attached to the at least one three-dimensional measurement device, wherein the control unit directs the motorized movable mechanism to obtain physical aspects of a sow on a periodic basis with images from the at least one three-dimensional measurement device.
2. The system for detecting sow physical change around estrus according to claim 1, wherein the physical aspects of the sow are selected from the group consisting of vulva volume, vulva width, vulva length, vulva height, vulva surface area, vulva base area, or vulva color.
3. The system for detecting sow physical change around estrus according to claim 1, wherein the physical aspects of the sow include abdomen movement that is converted to a respiratory rate.
4. The system for detecting sow physical change around estrus according to claim 1, wherein the at least one three-dimensional measurement device includes a 3D camera.
5. The system for detecting sow physical change around estrus according to claim 1, wherein the motorized movable mechanism includes at least one motor electrically connected to at least one driver in electronic communication with the control unit.
6. The system for detecting sow physical change around estrus according to claim 1, wherein the control unit includes a wireless module for transmitting sow physical data for analysis.
7. The system for detecting sow physical change around estrus according to claim 5, wherein the motorized movable mechanism moves between a plurality of sow stalls to measure sow vulva volume for a plurality of sows located within the plurality of sow stalls with the at least one three-dimensional measurement device.
8. The system for detecting sow physical change around estrus according to claim 7, wherein the motorized movable mechanism includes a motorized trolley mounted within an overhead rail track and a retractable arm attached to the movable motorized trolley and the at least one three-dimensional measurement device.
9. The system for detecting sow physical change around estrus according to claim 8, wherein the overhead rail track is in a loop.
10. The system for detecting sow physical change around estrus according to claim 7, wherein the control unit initializes the at least one three-dimensional measurement device, moves the motorized movable mechanism to take images of sow vulva volume, and then transmits sow vulva data for analysis.
11. The system for detecting sow physical change around estrus according to claim 1, wherein the at least one three-dimensional measurement device provides posture recognition information to the control unit to determine a physical position of the sow.
12. The system for detecting sow physical change around estrus according to claim 11, wherein after the determination of a sow being in a standing position, the control unit electrically accesses a deep learning model to ascertain a physical condition of the sow.
13. The system for detecting sow physical change around estrus according to claim 12, wherein after the control unit electrically accesses the deep learning model to ascertain the physical condition of a sow, the control unit electrically accesses the deep learning model to ascertain a vulvar condition of the sow.
14. The system for detecting sow physical change around estrus according to claim 13, wherein after the control unit electrically accesses the deep learning model to ascertain the physical condition and the deep learning model to ascertain the vulvar condition of the sow, existing data and historical records are combined with the physical condition and the vulvar condition to provide a treatment recommendation of the sow.
15. The system for detecting sow physical change around estrus according to claim 14, wherein the physical condition, the vulvar condition, the existing data, and historical records of the sow are electronically transmitted to output selected from the group consisting of an electronic display and a webpage.
16. The system for detecting sow physical change around estrus according to claim 14, wherein the physical condition and the vulvar condition within a predetermined time period of one to two days is concatenated with categorical data that includes at least one of time from weaning, parity number, BCS, and sow breed to generate an output based on at least one activation function to determine if estrus is taking place for the sow utilizing a multivariate deep learning model.
17. A system for detecting sow vulva change around estrus and providing a data image pipeline comprising: a control unit including at least one processor and at least one memory; and at least one three-dimensional measurement device, wherein the at least one three-dimensional measurement device provides posture recognition information to the control unit to determine if a sow is in a standing position, which is followed by the control unit filtering standing images of sows to find images that provide a full view of a sow's vulva, which is then followed by the control unit electrically accessing a deep learning model control to identify and segment at least one image of the sow vulva region and generate a vulva volume value that is utilized to determine if a sow is in estrus.
18. The system for detecting sow vulva change around estrus and providing a data image pipeline according to claim 17, wherein the control system verifies a shape of the sow vulva in the identified and segmented image to verify that the image can be utilized to determine if the sow is in estrus.
19. A method for detecting sow vulva change around estrus comprising: obtaining measurements of sow vulva volume on a periodic basis with images from at least one three-dimensional measurement device that is attached to a motorized movable mechanism that is commanded by a control unit having at least one processor and at least one memory; and electronically accessing a deep learning model with the control unit to ascertain physical condition of at least one sow and electronically accessing a deep learning model to ascertain a vulvar condition of the at least one sow.
20. The method for detecting sow vulva change around estrus according to claim 19, further comprising: obtaining from the at least one three-dimensional measurement device posture recognition information; providing posture recognition information to the control unit to determine if a sow is in a standing position; filtering standing images of sows to find images that provide a full view of a sow's vulva with the control unit; and accessing a deep learning model control to identify and segment at least one image of the sow vulva region and generate a vulva volume value that is utilized to determine if a sow is in estrus with the control unit.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] Several embodiments in which the present invention can be practiced are illustrated and described in detail, wherein like reference characters represent like components throughout the several views. The drawings are presented for exemplary purposes and may not be to scale unless otherwise indicated.
[0073] An artisan of ordinary skill in the art need not view, within the isolated figure(s), the near-infinite number of distinct permutations of features described in the following detailed description to facilitate an understanding of the present invention.
DETAILED DESCRIPTION
[0074] The present disclosure is not to be limited to that described herein. Mechanical, electrical, chemical, procedural, and/or other changes can be made without departing from the spirit and scope of the present invention. No features shown or described are essential to permit the basic operation of the present invention unless otherwise indicated.
[0075] Referring again to the Figures, a three-dimensional measurement device is generally indicated by the numeral 12 in
[0076] The LiDAR camera is more accurate than cameras based on stereo vision, e.g., the Intel RealSense D415 camera. The depth aspect of the Intel RealSense LiDAR Camera L515 has a field of view of 70°×55° and a depth resolution of 640×480 pixels with a measurement accuracy of less than five millimeters when an object is placed around one meter away from the sensor under indoor conditions. The RGB aspect of the Intel RealSense LiDAR Camera L515 has a resolution set to 1280×720 pixels, and the RGB images were aligned with the LiDAR images. The Intel RealSense LiDAR Camera L515 was connected to a laptop (not shown). A wide variety of laptops may suffice, with an illustrative, but non-limiting, example being a DELL LATITUDE 5480 laptop manufactured by Dell, Inc., having a place of business at One Dell Way, Round Rock, Texas 78682. The camera was controlled via an electrical cable, e.g., USB 3.0, and firmware, e.g., Intel RealSense Viewer SDK 2.0, manufactured by the Intel Corporation, having a place of business at 2200 Mission College Blvd., Santa Clara, California 95054.
[0077] Before using the three-dimensional measurement device 12, e.g., the LiDAR camera, on sows, the three-dimensional measurement device 12 is set up for accuracy through a default setup program.
[0078] Images of the sows' vulva regions were preferably collected with the three-dimensional measurement device 12, e.g., LiDAR camera, at a regular time each day. Because the vulva might become swollen immediately after artificial insemination, imagery data were collected at least five hours after completion of the artificial insemination. While the sows were standing, the three-dimensional measurement device 12, e.g., LiDAR camera, was pointed horizontally at the hip of the sows from a distance of 0.7 to 1.0 meters to acquire imagery data. The three-dimensional measurement device 12, e.g., LiDAR camera, took both RGB and depth image frames at a rate of thirty frames per second for about two minutes for each sow.
[0079] A Python script was built to access the recorded data and save each frame as a point cloud object using the Intel RealSense Python package. Five point-cloud frames were randomly selected from the three-dimensional measurement device 12, e.g., LiDAR camera, recordings for each sow on each day for further processing to evaluate the sows' vulva size (swelling).
[0080] An open-source software CloudCompare (Version 2.11.1) was used to manually segment the three-dimensional (3D) point cloud of all sows into rectangular regions that contained the sows' vulva region in the center as shown in
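The manual CloudCompare step above reduces each point cloud to a rectangular region centered on the sow's vulva. A minimal sketch of that cropping operation, assuming an axis-aligned bounding box in camera coordinates (the box limits and points below are hypothetical, not measured sow data):

```python
import numpy as np

def crop_box(points, lo, hi):
    """Keep only points inside the axis-aligned box [lo, hi].
    points: (N, 3) array of x, y, z coordinates in meters."""
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Hypothetical cloud of 5 points; keep those near the region of interest
cloud = np.array([[0.0, 0.0, 0.80],
                  [0.1, 0.1, 0.90],
                  [0.5, 0.5, 1.50],
                  [-0.1, 0.0, 0.85],
                  [0.0, 0.3, 0.90]])
roi = crop_box(cloud,
               lo=np.array([-0.1, -0.1, 0.7]),
               hi=np.array([0.1, 0.1, 1.0]))
print(len(roi))  # 3
```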
[0081] Referring now to
[0082] Referring to
[0083] The height of the vulva region was determined based on the maximum height found in the Vulva Only Surface 90. After fitting an ellipse shape to the vulva region, the vulva's width and length were determined based on the ellipse's major axis and minor axis length.
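The width/length extraction above fits an ellipse to the vulva region and reads off the major and minor axes. One way to sketch this, assuming a moment-based fit (the eigenvalues of the pixel-coordinate covariance give the equivalent ellipse axes; the disk mask below is a synthetic stand-in, not sow imagery):

```python
import numpy as np

def ellipse_axes(mask):
    """Approximate major/minor axis lengths of a binary region from the
    eigenvalues of its pixel-coordinate covariance (moment-based fit)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    cov = np.cov(pts, rowvar=False)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # Full axis length of the equivalent ellipse is 4 * sqrt(eigenvalue)
    major, minor = 4 * np.sqrt(evals)
    return major, minor

# Synthetic mask: a filled disk of radius 20 px -> both axes near 40 px
yy, xx = np.mgrid[0:100, 0:100]
disk = (xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2
major, minor = ellipse_axes(disk)
print(major, minor)
```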
[0084] Referring now to
where SA is the surface area, i.e., the integration of the depth (height) pixels over the 300×300 depth map (f), i.e., the Vulva Only Surface; and dx dy is the projected area of each element in f. The base area (BA) is calculated as the total number of values in f that are greater than zero.
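The integral over f and the base-area count described above can be sketched as follows, assuming a height map f in meters and projected pixel sizes dx, dy (the plateau-shaped map below is a hypothetical placeholder for a real Vulva Only Surface):

```python
import numpy as np

def vulva_features(f, dx, dy):
    """Integrate the height map f over the pixel grid and count its base.
    SA-style integral: sum of heights times projected pixel area dx*dy.
    BA: number of pixels with height > 0."""
    sa = float(np.sum(f) * dx * dy)    # integral of f over the grid
    ba = int(np.count_nonzero(f > 0))  # base-area pixel count
    return sa, ba

# Hypothetical 300x300 height map: a 100x100 plateau of height 0.02 m
f = np.zeros((300, 300))
f[100:200, 100:200] = 0.02
sa, ba = vulva_features(f, dx=0.001, dy=0.001)
print(sa, ba)
```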
[0085] Three of the three-dimensional (3D) features, including volume (V) and cubic volume (CV) of a vulva, as well as the maximum percentage of increase in volume (PIV) observed in each sow, are defined in Equations 5-7 below and shown in
[0086] As an illustrative, but nonlimiting, example, RStudio (Version 1.2.5033) was used for all statistical analyses (R Version 3.6.2); RStudio has a place of business at 250 Northern Avenue, Boston, Massachusetts 02210. A two-way ANOVA test was conducted to examine the effect of distance and angle on the measurement accuracy of the three-dimensional measurement device 12, e.g., LiDAR camera. A correlation analysis was conducted to evaluate the correlation between all image features and vulva volume. It is expected that the vulva volume could be represented by the width, length, and height, which are easy to measure. Linear and polynomial regression models were developed to describe the relationship between the calculated vulva volume and the two-dimensional (2D) and three-dimensional (3D) image features. The statistics of all vulva features, including daily means, were calculated. A Student's t-test (t.test) was conducted to determine the significance of the difference in vulva size (volume and HRA) on different days relative to the records from the previous three days. The significance level was set at 0.05. This technology is not restricted to vulva volume but can also apply vulva width, vulva length, vulva height, vulva surface area, vulva base area, and vulva color to determine estrus.
[0087] Regarding vulva size evaluation, there are a number of correlations among the extracted image features. The regression analysis results among the two-dimensional (2D) and three-dimensional (3D) features are shown in
[0088] Regarding the change in vulva size around the estrus, a farm technician identified all sows' estrus. Results indicate that all sows showed estrus within ten days (7.25±1.75 days) after the last Matrix feeding. Matrix is a product of Merck & Co., Inc. having a place of business at 351 N. Sumneytown Pike, North Wales, Pennsylvania 19454.
[0089] The estrous period lasted three days for the gilt and two days for the sows. The detected estrus data were used to evaluate the performance of the three-dimensional measurement device 12, e.g., LiDAR, in detecting estrus.
[0090] The daily values of the two-dimensional (2D) features (SA, BA, HRA, VRA) and three-dimensional (3D) features (volume, CV) of each sow throughout the experiment are shown in
[0091] Among the two-dimensional (2D) features, the vulva width and length are relatively easy to measure manually. Therefore, the HRA was selected as a representative of the two-dimensional (2D) features, and its change around the estrus was evaluated. The results of the t-test indicate that there was a significant increase in HRA (p-value <0.01) within days prior to the estrus for all sows except Sow 4.
[0092] To illustrate the vulva CV that represented the vulva volume, CV was linearly transformed with coefficients shown in
[0093] The percentage of increase in volume between two days around the estrus was calculated for each sow. The relation between the maximum percentage increase and its minimum recorded vulva volume during the experiment is shown in
[0094] To evaluate vulva size automatically, an image processing pipeline based on a deep learning neural network model (U-Net) has been developed to automatically identify the tail and anus and segment the vulva region from three-dimensional (3D) images. U-Net is a convolutional neural network that was developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg. The network is based on the fully convolutional network, and its architecture was modified and extended to work with fewer training images and to yield more precise segmentation. Segmentation of a 512×512 image takes less than a second on a modern GPU. U-Net is only one illustrative, but nonlimiting, example of deep learning tools that can be utilized with the present invention. Numerous other tools, like VGG16, MobileNet, Xception, and DenseNet121, can be utilized. Based on the experimental analysis, there appears to be no significant difference in model performance when using different types of input images for posture recognition. However, results show that VGG16 took significantly more time (p<0.01) than the other models tested to process each image and yielded significantly lower validation and test accuracy (p<0.01).
[0095] Meanwhile, MobileNet took significantly less time (p<0.01) than the other models tested to process each image, and there was no significant difference in performance for recognizing standing and sitting postures compared to the rest of the models (p>0.1). Although the Xception model took more time (p<0.01) to process each image frame than MobileNet and DenseNet, it had significantly higher test accuracy and F1 scores for lateral lying and sternal lying postures (p<0.05). The overall performance of DenseNet was between MobileNet and Xception. Although DenseNet took more time to process each image compared to MobileNet, no significant improvement in test accuracy or F1 scores for different posture classes was observed. MobileNet should therefore be used to monitor a sow's activity level at a high frame rate, i.e., video feed, and Xception should be selected when accurately distinguishing different lying postures (sternal and lateral). The results also indicated that the image type has no significant impact on the posture recognition models' performance; Xception has the best accuracy but requires a longer processing time than MobileNet and DenseNet121. Using the posture recognition model to monitor an individual sow's behavior patterns after weaning, a significant increase in daily activity and semi-idle level and a significant decrease in daily idle level were found on the day of onset of estrus. No distinct behavior pattern was observed around the expected return estrus.
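The encoder-decoder-with-skip-connections structure that distinguishes U-Net can be sketched, with the convolutions omitted, as pure shape bookkeeping (a toy illustration of the architecture, not the trained segmentation model):

```python
import numpy as np

def pool2(x):
    """2x2 average pooling: one downsampling stage of the encoder."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(x):
    """Nearest-neighbour 2x upsampling: one decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet(img):
    """U-Net-style pass: encode, decode, and fuse skip connections.
    Convolutions are omitted; only the shape bookkeeping is shown."""
    e1 = img            # encoder level 1 (full resolution)
    e2 = pool2(e1)      # encoder level 2 (1/2 resolution)
    e3 = pool2(e2)      # bottleneck (1/4 resolution)
    d2 = up2(e3) + e2   # decoder level 2 + skip connection from e2
    d1 = up2(d2) + e1   # decoder level 1 + skip connection from e1
    return d1           # same spatial size as the input

mask = toy_unet(np.ones((512, 512)))
print(mask.shape)  # (512, 512)
```

The skip connections are why U-Net can localize fine boundaries, such as the vulva edge, that would otherwise be lost in the bottleneck.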
[0097] Referring to
[0098] The robotic camera system includes a platform controlled by a RASPBERRY PI (where RASPBERRY PI is a federally registered trademark of the Raspberry Pi Foundation private limited company of the United Kingdom located at 30 Station Road, Cambridge, United Kingdom CB12JH), an RGB and an infrared camera to collect rear-view images of individually housed sows during a predetermined time period, e.g., every ten minutes, as shown in
[0099] This low-cost robotic cyber-physical system (CPS) includes a physical system consisting of a robotic imaging system to acquire images of individual sows that will be processed and analyzed by a cyber system based on edge/cloud computing for decision making. The proposed robotic CPS system can be potentially integrated with on-farm automation systems, such as electrical sow feeders (ESF), to automatically adjust feed quota for individual sows. The robotic CPS system aims to optimize sow breeding management with or without needing human input. The CPS system will provide real-time data acquisition, analysis, and decision-making for sow estrus, an optimum time window for artificial insemination, feed quota for each sow, activity pattern, and body structure.
[0100] This system can include a robotic imaging system, edge computing devices, AI-enabled data processing and analytic pipelines, and a cloud-based control and management system. The system will preferably utilize core CPS technologies, including emerging sensors, IoT, edge/cloud computing, and control, to monitor sow estrus by automatically assessing multiple estrus signs, activity level, and body conditions.
[0101] A robotic imaging system of the present invention is generally indicated by the numeral 250 in
[0102] Referring now to
[0103] As shown in
[0104] The at least one three-dimensional measurement device 12, e.g., LiDAR camera (preferably two), can include, but is not limited to, an INTEL RealSense LiDAR Camera L515. The Intel Corporation has a place of business at 2200 Mission College Blvd., Santa Clara, California 95054-1549.
[0105] The at least one three-dimensional measurement device 12, e.g., a LiDAR camera (preferably two), can be used to take back-view images of individual sows 262. In addition, the three-dimensional measurement device 12 can acquire red-green-blue (RGB) color, infrared, and depth images simultaneously, and infrared and depth images can be collected in low-light conditions, e.g., nighttime conditions. Each three-dimensional measurement device 12 will be connected to the control unit 254, which preferably includes an edge computing unit 274, through electronic communication, a nonlimiting example being USB 3.2, for camera control, data acquisition, processing, analysis, and wireless communication. Preferably, but not necessarily, the wireless communication 276 is through a cloud platform, e.g., AMAZON AWS, owned by Amazon Technologies, Inc., having a place of business at 410 Terry Avenue, Seattle, Washington 98109.
[0106] A control program for the at least one three-dimensional measurement device 12, based on a Python script and the Intel RealSense Viewer SDK 2.0, manufactured by the Intel Corporation, having a place of business at 2200 Mission College Blvd., Santa Clara, California 95054, initializes the at least one three-dimensional measurement device 12 and takes images on demand.
[0107] An electronic touch screen display 278, shown in
[0108] The robotic imaging system 250 will work in patrol mode to conduct routine data collection or manual mode as needed. Limit switches (not shown) on the overhead rail track 256 will instruct the motorized trolley 258 to stop at an accurate location behind a sow 262 and take images at an ideal angle. Images are preferably taken at predetermined intervals, e.g., ten minutes, to quantify activity patterns. In experimentation, it currently requires about three seconds to acquire images for each sow, so four hundred sows can be imaged in ten minutes using two of the three-dimensional (3D) measurement devices 12.
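The stated throughput can be sanity-checked with simple arithmetic: at about three seconds per sow, two devices working in parallel cover four hundred sows in ten minutes:

```python
seconds_per_sow = 3   # stated image acquisition time per sow
devices = 2           # two 3D measurement devices working in parallel
sows = 400

# Each device handles sows / devices animals sequentially
total_minutes = sows / devices * seconds_per_sow / 60
print(total_minutes)  # 10.0
```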
[0109] The patrol mode working process is generally indicated by the numeral 300 and illustrated in
[0110] Collected images of each sow 262 will be processed in real-time to extract different image features that will be used to assess the activity, body condition, and estrus status. The image processing and analysis pipeline will include different modules that are extendable, including posture recognition, vulva assessment, and body condition assessment. As illustrated in
[0111] The image processing and analysis process is generally indicated by the numeral 350 and illustrated in
[0112] If the sow is standing in step <360>, then a deep learning model is utilized to assess body condition <364>. This information is provided to the database <362>. The next process step is to utilize a deep learning model to assess vulvar condition <366>. This is an ongoing process where the next step is to make comparisons to existing data and historical records <368>. Based on this analysis, decisions on artificial insemination and other decisions involving the sow 262 can be made <370>. This information can be visualized on a wide variety of electronic devices, webpages, and mobile platforms <372>. The end of this process is found in step <374>.
[0113] An important tool that can be utilized when the sow 262 is sleeping in a lateral lying position is to evaluate the respiratory rate of the sow 262 based on the movement of the abdomen of the sow 262 captured by a three-dimensional measurement device(s) 12. An illustrative, but nonlimiting, video capture rate is twenty frames per second. Referring now to
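One way to sketch the respiratory-rate estimation described above is to take the dominant frequency of the mean-removed abdomen depth signal at the stated twenty frames per second; the 0.3 Hz synthetic trace below is illustrative, not a recorded signal:

```python
import numpy as np

def respiratory_rate_bpm(signal, fps):
    """Estimate breaths per minute as the dominant FFT frequency of the
    mean-removed abdomen depth signal, converted from Hz to per-minute."""
    x = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum[0] = 0.0                       # ignore the DC component
    return freqs[np.argmax(spectrum)] * 60

# Synthetic 60 s abdomen trace at 20 fps with 0.3 Hz breathing motion
# (0.3 Hz corresponds to 18 breaths per minute)
fps, seconds = 20, 60
t = np.arange(fps * seconds) / fps
depth = 0.9 + 0.005 * np.sin(2 * np.pi * 0.3 * t)
print(respiratory_rate_bpm(depth, fps))
```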
[0114] The activity patterns refer to the time length of different activities that a sow 262 maintains. Activity patterns will be quantified by monitoring sow postures (sleeping, sitting, and standing). Activity patterns can be used as a physical sign of estrus and health conditions. For example, sows and gilts approaching estrus have higher activity levels and restlessness. Continuous monitoring of individual sows 262 will acquire baseline information when they are in normal conditions and improve estrus detection accuracy. Sow postures will be identified using a convolutional neural network (CNN) model based on infrared images that are available in low-light conditions, including nighttime. In a preliminary study, a CNN model was able to correctly classify the sow posture into standing, sitting, and lying with an accuracy of 100%. This model takes 0.097 seconds to process each image on an edge computing unit, e.g., a RASPBERRY PI (where RASPBERRY PI is a federally registered trademark of the Raspberry Pi Foundation, a private limited company of the United Kingdom, located at 30 Station Road, Cambridge, United Kingdom CB12JH). This is an illustrative example; similar models can be utilized to identify the posture of sows 262.
[0115] Assessment of vulva conditions will include vulva size (swelling), redness, and mucous discharge, which are common biological signs of approaching estrus. Compared to signs from activity patterns, vulva conditions are independent of sexual behaviors and more dependable in detecting estrus. The data processing in the present invention includes vulva region recognition, vulva segmentation, discharge recognition, and size and color quantification. A deep learning model, U-Net, which is widely used to segment images, such as finding brain tumors in MRI images, can be utilized to successfully identify a sow's tail, rectal, and vulva regions from IR images in 0.9 seconds using the RASPBERRY PI (where RASPBERRY PI is a federally registered trademark of the Raspberry Pi Foundation, a private limited company of the United Kingdom, located at 30 Station Road, Cambridge, United Kingdom CB12JH). In addition, the model's performance can be improved by testing different object detection algorithms, e.g., Single Shot Box Detector, by developing an automated image processing pipeline to calculate vulva volume in real-time, and by developing deep learning models to quantify vulva redness level and mucous discharge. Combining IR, RGB, and depth images can improve accuracy and identify reliable signs for estrus detection and other reproduction performance.
[0116] A sow 262's body condition is usually quantified as a body condition score (BCS) with five levels (one through five) based on the sow's back-fat thickness, which is measured by an ultrasound machine or a caliper. The present invention utilizes a deep learning model to quantify the BCS of each sow automatically. A mixed CNN will be used to process imagery data, and a multilayer perceptron network to manage numerical and categorical data, i.e., age, parity number, and/or breed, which will be configured in parallel. Finally, the learned features will be concatenated and fed to a subsequent network to assess body conditions.
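The parallel CNN/MLP fusion described above can be sketched at the feature level: learned image features and metadata features are concatenated and fed to a final layer that scores the five BCS levels (all dimensions and weights below are hypothetical placeholders for the trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned vectors: 64 image features from the CNN branch and
# 8 features from the MLP branch on numerical/categorical data
# (age, parity number, breed one-hot, ...)
img_features = rng.normal(size=64)
meta_features = rng.normal(size=8)

# Concatenate the two branches into a single fused feature vector
fused = np.concatenate([img_features, meta_features])  # shape (72,)

# Final dense layer mapping the fused vector to 5 BCS levels (softmax)
W = rng.normal(size=(5, 72))
logits = W @ fused
bcs_probs = np.exp(logits - logits.max())
bcs_probs /= bcs_probs.sum()
print(fused.shape, bcs_probs)
```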
[0117] Referring now to
[0118] Locomotive disorder is one of the leading causes of sow replacement at early parity. It is found that the structural soundness is strongly associated with the productive lifetime of a sow. In practice, trained workers evaluate the structural soundness and rank the severity of structural disorder of sows or gilts by visually observing their rear legs, which is time-consuming and subjective.
[0119] Referring now to
[0120] Some symptoms of a sow with poor structural soundness include a large ankle angle (A) 402, a small feet distance (F-F) 400, and a significant difference between the feet distance and the ankle distance (H-H) 404. Using the extracted features from the key points, machine learning models, such as KNN, random forest, and multilayer perceptron neural networks, will be evaluated to identify sows with rear leg structural disorders. A scale of ten levels, i.e., 1-10, can be assigned to indicate the severity level.
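A minimal sketch of the KNN option named above, using the three extracted features (ankle angle A, feet distance F-F, and the feet/ankle distance difference H-H); the training values below are invented for illustration only, not measured sow data:

```python
import numpy as np

def knn_predict(x, X_train, y_train, k=3):
    """k-nearest-neighbour majority vote on Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# Hypothetical training set: [ankle angle (deg), feet distance (cm),
# |feet distance - ankle distance| (cm)]; label 1 = structural disorder
X = np.array([[150., 20., 2.], [145., 22., 3.], [148., 21., 2.],
              [175., 8., 12.], [170., 10., 10.], [172., 9., 11.]])
y = np.array([0, 0, 0, 1, 1, 1])

# A sow with a large ankle angle and small feet distance is flagged
print(knn_predict(np.array([171., 9., 11.]), X, y))  # 1
```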
[0121] A robotic imagery platform can be utilized to monitor sows with automated image processing and analysis pipelines based on edge computing, with image features of sows stored for future analysis using post-processing methods or cloud-computing platforms.
[0122] It is believed that biological signs of vulva conditions, including swelling, redness, and discharge, are reliable indicators of estrus. These biological signs are caused by the rise in estrogen level, independent of the sow's body condition or its sexual interest towards boars. However, visual evaluation of change in vulva conditions can be inaccurate, inconsistent, and difficult to implement in practice by workers. The acquired data from the proposed robotic imaging system can be used to develop a decision support system for the identification of standing estrus and the optimum time for artificial insemination.
[0123] As shown in
[0124] The architecture of the LSTM model is generally indicated by the numeral 410 in
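The recurrent core of the LSTM model 410 can be sketched as a single numpy time step applied over a week of daily feature vectors; the dimensions and the six-feature daily input are assumptions for illustration, not the deployed model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gate order in the stacked weights: i, f, o, g."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g           # cell state update
    h_new = o * np.tanh(c_new)      # hidden state (output)
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid = 6, 16                 # e.g., 6 daily features, 16 hidden units
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for day in range(7):                # one week of daily feature vectors
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)  # (16,)
```

The final hidden state h would then be concatenated with categorical data (e.g., parity number, breed) before the output layer, as described above.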
[0125] All processed data and results can be uploaded in real-time to a cloud platform, e.g., AMAZON AWS, owned by Amazon Technologies, Inc., having a place of business at 410 Terry Avenue, Seattle, Washington 98109. Basic information about each sow/gilt, including ear ID (electronic ID tag), breed, age, and reproductive information, will be established when they are added to the system and will keep updating. All data generated from this CPS system, management data (e.g., feeding and drinking, stall location), and reproductive data, e.g., KPIs, weaning date, parity number, will be associated with each sow (ID). User interfaces for websites and mobile devices will be developed to visualize data, monitor information of sows, and make management plans.
[0126] One illustrative, but non-limiting, example of an interface is shown in
[0127] Ovulation usually happens at two-thirds of a standing estrus period.
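Under the two-thirds rule stated above, the expected ovulation time after the onset of standing estrus follows directly (the durations below are taken from the estrous periods reported earlier, roughly two days for sows and three days for the gilt):

```python
def ovulation_hours_after_onset(estrus_hours):
    """Ovulation is assumed to occur at two-thirds of the standing
    estrus period, measured from the onset of standing estrus."""
    return estrus_hours * 2 / 3

print(ovulation_hours_after_onset(48))  # 32.0 (sow, ~2-day estrus)
print(ovulation_hours_after_onset(72))  # 48.0 (gilt, ~3-day estrus)
```

Timing artificial insemination shortly before this point is the rationale for the optimum insemination window the system estimates.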
[0128] An AI-enabled model of the present invention can accurately monitor estrus status to identify the optimum time window for artificial insemination and should reduce labor input for estrus detection by more than 50% and save 50% of semen usage.
[0129] Data-driven decisions made through the present invention will be more efficient and will improve reproductive performance more than the current standard management procedure. The management decisions in sow farms typically include estrus checks, artificial insemination, pregnancy checks, daily feed quota, and replacement (or cull) decisions. A sow's reproductive performance is quantified by the KPIs, e.g., litter size, farrowing rate, PW/MS/Y, piglet survival rate, and non-production days, which will be used as golden criteria to evaluate the performance of the data-driven decisions. If a sow's body condition deviates from the target range during the gestation period, the CPS system will timely adjust feed accordingly to avoid overfeeding or underfeeding throughout the gestation stage. Sows with structural disorder symptoms usually have a high potential of pregnancy failure, which is an important factor for culling or replacement. In addition, an abnormal activity level that deviates from its normal range (baseline) will be a good sign to alert farmers for further examination for potential sicknesses such as lameness and fever. If such a phenomenon is detected at a herd level, farmers could reach out for veterinary assistance.
[0130] In this invention, rear-view three-dimensional (3D) models of sows were acquired using a three-dimensional measurement device 12, e.g., LiDAR camera, which shows the capability of detecting the variation of sows' vulva volume around estrus. The increased blood flow due to the rise of estrogen level during estrous events may cause an increase in vulva size that can be used as an indicator of estrous events. The present invention shows that sows with larger vulva volume had a smaller percentage increase in volume around estrus, which explains the lower sensitivity of vulva swelling in detecting estrus for older sows, as previously described. In addition, the duration of swelling also varies significantly. Since vulva swelling is due to increased blood flow in the vulva region, such an increase should also lead to increased vulva surface temperature and intra-vaginal temperature. It is believed that vulva temperature would increase and then decrease prior to the onset of estrus. Capturing vulva volume data using more than one LiDAR camera while sows are being fed is believed to yield more consistent volume estimations. Another source of the variance is that the area of the removed depth information is larger than the actual vulva size. Accurately detecting the edge of the vulva region might further improve the accuracy of vulva volume estimation. In the present study, vulva volume data were collected around the third estrus after weaning. Therefore, it is believed that the changes in vulvar size around the third estrus cycle can be captured using the at least one three-dimensional measurement device 12, e.g., LiDAR camera.
[0131] Estrus should occur four to nine days after the last day of a Matrix feeding. For the two sows that came into heat before vulva volume reached its peak value, the significant increase in vulva volume was not detected until Days 8 and 9 after the synchronization removal. Therefore, in the early phase of the estrous cycle, producers should check for estrus when the vulva volume reaches peak value. If no significant increase in vulva volume is detected within seven days from the last day of a synchronizer feeding, the producer should check for estrus starting on the day when a significant increase in vulvar volume is detected by the three-dimensional measurement device 12, e.g., LiDAR camera. Since the significant change in vulva volume was detected in all sows before/on the first day of estrus, it can help avoid missing an estrus.
[0132] The estrus checking started on the third day after the synchronization removal, and estrus detection was performed fifty-one times in total for the eight test sows. By following the suggested estrus checking guide based on the vulva volume change, producers would only need to perform estrus checking twenty-five times, saving about 50% of the labor input. Sows that do not become pregnant would be expected to return to estrus about twenty-one days later. Detection of that returned estrus is particularly inefficient on farms with high conception rates (low return to estrus), many of which do not check for returns but instead identify non-pregnant sows late in gestation. The use of the technology of the present invention could identify these sows considerably earlier and reduce the number of non-productive days related to conception failure.
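The checking schedule suggested above reduces to a simple rule: begin back pressure tests on the first day a significant rise over the baseline volume is observed. A minimal sketch, in which the 15% rise threshold and the function name are illustrative assumptions, not values from the disclosure:

```python
def estrus_check_start(daily_volume, baseline, rise_threshold=0.15):
    """Return the first day index (post synchronization removal) on which
    the daily vulva volume shows a significant increase over baseline,
    i.e., the day to begin estrus checking; None if no rise is seen.

    rise_threshold is an illustrative fraction, not a value from the
    disclosure.
    """
    for day, volume in enumerate(daily_volume):
        if volume >= baseline * (1.0 + rise_threshold):
            return day
    return None
```

Producers would then perform the back pressure test only from the returned day onward, which is the source of the roughly 50% labor saving reported above.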
[0133] This present invention provides a novel method that uses a three-dimensional measurement device 12, e.g., LiDAR camera, to evaluate vulva swelling around the estrus. The findings demonstrate that two-dimensional (2D) and three-dimensional (3D) features from a three-dimensional measurement device 12, e.g., LiDAR camera, could detect the significant change in vulva size around the third estrus cycle. Vulvar size can thus be evaluated without subjective judgment, and the change in vulvar size shows potential for identifying estrus in sows. Results also indicate that vulva volume (three-dimensional (3D) features) showed higher accuracy and reliability in detecting upcoming estrus. Swelling duration and intensity vary among different sows. Although sows with larger vulva volume had a smaller percentage increase in vulvar volume around estrus, a significant change in vulvar volume was still detected prior to the onset of the estrus event. No sow was found to be in estrus before a significant change in vulva volume was detected. Most of the sows showed the onset of an estrus event at or after vulva volume reached peak value. Detecting a significant increase in vulva volume can help accurately detect estrus of sows, reduce the number of estrus checks, and thus save labor and improve production efficiency.
[0134] An image processing pipeline was developed to compute the vulva volume of sows using the collected imagery data, which is generally indicated by the numeral 550 in
[0135] However, as shown in
[0136] The selected images were used to extract the vulva region using an image segmentation model, i.e., the vulva region recognition model (VRR model) 576. All image pixels corresponding to the vulva region of sows from the selected IR images were identified and segmented 578. Because each IR image is physically aligned with its corresponding 3D point cloud (captured simultaneously) 580, vulva regions in the 3D point cloud were extracted automatically. Each IR image is stored in 8-bit unsigned integer format (70 kilobytes/frame), and each 3D point cloud is stored in 32-bit float format (5 megabytes per frame). To reduce computing demand, the 3D point clouds were used only for evaluating the volume of the identified vulva region.
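Because the IR frame and the 3D point cloud are pixel-aligned, lifting a 2D segmentation mask into 3D reduces to indexing the point cloud with the mask. A minimal NumPy sketch, under the assumption that the point cloud is stored as an H×W×3 array of XYZ values:

```python
import numpy as np

def extract_vulva_points(point_cloud, mask):
    """Select the 3D points whose pixels the segmentation model marked
    as vulva. This works because the IR image and point cloud are
    captured simultaneously and share the same pixel grid.

    point_cloud: (H, W, 3) float32 array of XYZ coordinates (assumed layout).
    mask: (H, W) boolean array, True where the pixel is vulva.
    Returns an (N, 3) array of vulva-region points.
    """
    return point_cloud[mask]
```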
[0137] Vulva volume estimation 582 can then be performed. This can be implemented with MATLAB (R2020b, MathWorks, Natick, MA, USA). After identifying the vulva region from the IR image 584, a segmentation box (padded 20 pixels in the horizontal direction and 10 pixels in the vertical direction) is applied to both the IR frames and the 3D point cloud to zoom in to the vulva region. Next, the segmented mask and 3D point cloud were resized to 300×300 pixels. The resulting 3D surface would be a 3×300×300 matrix that contains the spatial information of the region of interest in the XYZ domain. Next, the spatial information inside of the vulva mask was removed and replaced with new values by interpolating the nearby spatial information. The Extracted vulva surface was obtained by subtracting the No Vulva Surface from the Original Surface. As shown in
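The interpolation-and-subtraction step described above can also be sketched outside MATLAB. The Python version below uses SciPy's griddata to rebuild the No Vulva Surface across the masked region and integrates the difference; the per-pixel area parameter and the sign convention (depth measured as distance from the camera, so the protruding vulva has smaller depth) are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import griddata

def vulva_volume(depth, mask, px_area=1.0):
    """Estimate vulva volume by surface subtraction: remove depth values
    inside the vulva mask, interpolate the surrounding surface across the
    hole ("No Vulva Surface"), and integrate the difference.

    depth: (H, W) array of depth values (distance from camera, assumed).
    mask: (H, W) boolean array, True inside the vulva region.
    px_area: real-world area covered by one pixel (illustrative value).
    """
    out_y, out_x = np.nonzero(~mask)      # background pixels
    in_y, in_x = np.nonzero(mask)         # masked (vulva) pixels
    # Rebuild the background surface across the masked region
    filled = griddata((out_y, out_x), depth[~mask], (in_y, in_x),
                      method="linear")
    no_vulva = depth.copy()
    no_vulva[mask] = filled
    # Protrusion toward the camera reduces depth, so the interpolated
    # surface minus the original gives a positive height map
    height = no_vulva - depth
    return float(np.sum(height[mask]) * px_area)
```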
[0138] Finally, after extracting the 3D vulva surface, a classification model, i.e., the vulva shape verification (VSV) model 586, was used to detect and exclude incorrectly segmented 3D vulva surfaces, i.e., those in which a portion of the vulva region was left out from the segmentation. For image labeling and model training, imagery data from six sows was used as a training dataset, and data from two sows was used as a testing dataset for the standing posture filtering (SPF) and vulva region recognition (VRR) models.
[0139] The standing posture filtering (SPF) model 561 was developed to remove defective images with sow postures unfit for vulva volume evaluation from the datasets, as shown in
The classification performance was evaluated using precision, recall, and the F1 score:

Precision=TP/(TP+FP)

Recall=TP/(TP+FN)

F1 score=2×Precision×Recall/(Precision+Recall)

where true positive (TP) is the number of correctly classified images, false negative (FN) is the number of misclassified images, and false positive (FP) is the number of negative images that were misclassified.
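The TP, FN, and FP counts defined above determine these scores directly; a minimal sketch:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from the image counts defined
    above: TP (correctly classified), FN (misclassified positives),
    FP (misclassified negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```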
[0140] The kept images 564 are processed by the vulva region recognition (VRR) model. The images classified by the standing posture filtering (SPF) model 561 (with IR images) were visually examined to discard images that are not suited for vulva volume evaluation. In one illustrative, but nonlimiting, experiment, there were 1674 captured images from the eight sows that had suitable standing postures (labeled as KEEP for the SPF model) for vulva volume evaluation. A vulva region recognition (VRR) model 576 was developed to identify the vulva regions in the images. The vulva region of each sow was labeled using the image labeling platform APEER (ZEISS, Germany), based on the visible images (RGB images directly from the LiDAR camera) that were captured when the indoor light was on. Because it was difficult to draw a clear boundary between the sow's vulva region and rectal region, manually labeled vulva masks might contain a part of the sow's rectal region. In addition, the labeled vulva masks were slightly larger than the actual vulva region (i.e., a small margin at the edge of the vulva region). A vulva mask of 480×480 pixels with values of zeros was built to select the region of interest, where the labeled vulva region was set as 1. One of the advantages of the U-Net network is the large number of feature channels, which allows contextual information to propagate through the model. A U-Net neural network architecture was implemented on Google Colaboratory to classify each pixel into one of two classes (i.e., 0: background, 1: vulva) for each imagery type (i.e., IR, DI, DI3, and DIIR). A total of 857 images from six sows were labeled as a training dataset. The labeled masks and the corresponding raw images (i.e., IR, DI, DI3, and DIIR) were augmented by flipping horizontally to increase the training dataset sample size (n=1,714). The dataset was divided into 80% training and 20% validation. Each model was trained with 100 epochs, and the batch size was set to 16.
The trained models were then tested on a testing dataset that consisted of 399 images (images with suitable standing posture captured during the experiment) from the other two sows.
[0141] Referring now to
[0142] The vulva shape verification (VSV) model was developed to determine whether the computed vulva volume should be discarded. In scenarios where the vulva region was not correctly extracted, the computed volume should not be recorded. Images of the extracted vulva region and the background were saved during the computation of the vulva volume. The correctly extracted vulva region and the incorrectly extracted vulva region were labeled into two classes during the evaluation of the vulva region recognition (VRR) model's performance. Image augmentation, i.e., flip, distortion, scale, and so forth, is an effective strategy to improve a trained model's generalizability when handling a limited dataset. From the masks that were generated by the vulva region recognition (VRR) model using DI images as input, all of the incorrectly extracted vulva shape images of the eight sows (n=129) were augmented by flipping horizontally, vertically stretching by 20%, horizontally stretching by 20%, scaling up by 20%, and scaling down by 20% using OpenCV. The augmentation was performed to increase the variation and the size of the training dataset. Images of the correctly extracted vulva region were downsampled to handle class imbalance. The dataset (Correct: n=774, False: n=774) was divided into 80% training, 10% validation, and 10% testing. Two types of images (1: No vulva surface (NV), 2: Extracted vulva surface (EV)) were tested as input for the VSV model. The NV images were used to determine if the vulva region was entirely segmented out by the VRR model because a portion of the vulva region was difficult to visually identify from the extracted vulva shape (EV image). Another common reason for the incorrect vulva shape extraction was the vulva mask containing part of the spatial information of the tail.
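The five augmentations listed above (horizontal flip, ±20% stretches, and ±20% scaling) were performed with OpenCV; the dependency-free sketch below reproduces them with a nearest-neighbor resize in NumPy standing in for cv2.resize, purely for illustration:

```python
import numpy as np

def resize_nn(img, fy, fx):
    """Nearest-neighbor resize by scale factors (stand-in for cv2.resize)."""
    h, w = img.shape[:2]
    nh, nw = int(round(h * fy)), int(round(w * fx))
    rows = np.minimum((np.arange(nh) / fy).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / fx).astype(int), w - 1)
    return img[rows][:, cols]

def augment(img):
    """Return the five augmented variants described above."""
    return [
        img[:, ::-1],               # horizontal flip
        resize_nn(img, 1.2, 1.0),   # vertical stretch by 20%
        resize_nn(img, 1.0, 1.2),   # horizontal stretch by 20%
        resize_nn(img, 1.2, 1.2),   # scale up by 20%
        resize_nn(img, 0.8, 0.8),   # scale down by 20%
    ]
```

Applied to the 129 incorrectly extracted images, this quintuples the minority class, bringing it close to the n=774 balance reported above.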
[0143] Three pre-trained (based on ImageNet) deep learning architectures (MobileNet, Xception, and DenseNet) were implemented on Google Colaboratory and tested on each type of image as illustrative and nonlimiting examples. Each model was trained with 100 epochs, and the batch size was set as 32. The performance of the models was evaluated using accuracy and F1 scores in the equations shown below:

Accuracy=(TP+TN)/(TP+TN+FP+FN)

F1 score=2TP/(2TP+FP+FN)

where true negative (TN) is the number of correctly classified negative images.
[0144] In the vulva volume quantification step, the vulva volume of each sow was calculated using the vulva shape extracted by the mask generated from the DIIR images. Vulva volumes computed from the incorrectly extracted vulva shapes (n=81) were discarded. The daily vulva volume (V) of each sow was calculated as the mean of the vulva volume values recorded within 24 hours (from 0:00 to 24:00). Day from the onset of estrus (DFO) was defined as the number of days from the first day (DFO=0) when a sow was first identified as having the onset of estrus by breeding technicians using the BPT method. In this study, the average of the three smallest daily vulva volumes observed from weaning was considered the minimum (normal) vulva volume (MV) of the sow. MV represents the volume of the vulva region under normal conditions, showing no sign of swelling or redness. The daily percentage increase in vulva volume (ΔV) and the maximum increase in vulva volume (ΔVm) around the onset of estrus were defined by the following Equations:

ΔV(Day i)=(V(Day i)−MV)/MV×100%

ΔVm=the maximum value of ΔV observed around the onset of estrus
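Under the definitions above, MV and the daily percentage increase follow directly from the series of daily volumes; a short sketch (computing the percentage relative to MV, consistent with MV's role as the normal baseline):

```python
import numpy as np

def daily_increase(daily_volume):
    """Compute MV (mean of the three smallest daily vulva volumes
    observed from weaning) and the percentage increase of each day's
    volume relative to MV."""
    volumes = np.asarray(daily_volume, dtype=float)
    mv = np.sort(volumes)[:3].mean()
    return mv, (volumes - mv) / mv * 100.0
```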
[0145] Referring now to
[0146] In the next step, a posture recognition model <654> was developed to extract the behavior patterns of sows. This posture information is sent to create a behavior record <656> that forms part of the estrus detection model <672>. This process also includes evaluating a daily standing duration <658> (STA24: portion of standing posture in a 24-hour window evaluated at 12 PM, noon) and a daily idle duration (LL24: portion of lateral lying posture in a 24-hour window evaluated at 12 PM, noon); images unfit for evaluation are filtered out <660>. Finally, a determination is made as to whether the sow is in a good standing pose <662>. Also calculated were DLL and DSTA, the daily differences in LL24 and STA24 (i.e., DLL(Day i)=LL24(Day i)−LL24(Day i−1)). RLL and RSTA were the daily ratios in LL24 and STA24 (i.e., RLL(Day i)=LL24(Day i)/LL24(Day i−1)). The sow's vulvar region was automatically identified using a deep learning model, e.g., U-Net, and segmented <664>.
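The day-to-day difference and ratio features (DLL/DSTA and RLL/RSTA) defined above can be computed the same way for any daily series; a minimal sketch:

```python
def daily_diff_and_ratio(series):
    """Given a daily series (e.g., LL24 or STA24 indexed by day),
    return the day-to-day differences (DLL/DSTA analog) and ratios
    (RLL/RSTA analog), each one element shorter than the input."""
    diffs = [series[i] - series[i - 1] for i in range(1, len(series))]
    ratios = [series[i] / series[i - 1] for i in range(1, len(series))]
    return diffs, ratios
```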
[0147] The vulva volume <666> is computed using the method described above, with input received from a 3D depth map <668>. Daily vulvar volume (VA24) was defined as the average value of the captured vulvar volume within a 24-hour window evaluated at 12 PM. In addition, DV is the daily difference between two consecutive days' VA24 values, and PV is the daily percentage change in VA24 (PV(Day i)=(VA24(Day i)−VA24(Day i−1))/VA24(Day i−1)). Day from weaning (DFW) is considered 0 for the first day the sow was moved into the gestation stall and increments by 1 for each following day. Data from the second day after weaning to the day when the onset of estrus was detected for twenty sows were used to train an estrus detection model using a support vector machine (RStudio, 1.2.5033). The response variable, onset of estrus (OE), is set as 0 (class weight=1) for each day and set as 1 (class weight=3) for the day when the onset of estrus was detected. Data from the other six sows were used as test samples. This forms a biological vulvar size record <670> that also forms part of the estrus detection model <672>.
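The disclosure trained the support vector machine in R (RStudio); the sketch below reproduces the weighted-class setup in Python with scikit-learn on synthetic stand-in features (the PV, DLL, and DSTA columns and all numeric values are illustrative, not data from the study):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the per-day feature table described above:
# columns ~ [PV, DLL, DSTA]; label OE = 1 on the estrus-onset day, else 0.
rng = np.random.default_rng(0)
ordinary_days = rng.normal([0.0, 0.0, 0.0], 0.05, size=(60, 3))
onset_days = rng.normal([0.25, -0.2, 0.2], 0.05, size=(20, 3))
X = np.vstack([ordinary_days, onset_days])
y = np.array([0] * 60 + [1] * 20)

# Class weight 3 on the onset class, matching the weighting described above
clf = SVC(kernel="rbf", class_weight={0: 1, 1: 3}).fit(X, y)
```

The higher weight on the OE=1 class penalizes missed onset days more heavily, which is appropriate here because a missed estrus is far more costly than an extra back pressure test.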
[0148] This method uses a robotic imaging system to automatically monitor a sow's behavior and vulvar size. The daily changes in the sow's vulvar size, standing duration, and lateral lying duration can be used to identify the onset of estrus with 95.4% training accuracy and 93.1% testing accuracy. However, behavior patterns may not be a reliable indicator of returned estrus. The presented robotic imaging system can also identify vulvar swollenness around the returned estrus if a sow failed to conceive from the artificial insemination in the previous estrus cycle, and it therefore has the potential to significantly reduce labor devoted to estrus detection and pregnancy testing and to reduce non-production days.
[0149] Consequently, the present invention provides powerful tools to determine estrus detection resulting in more productive and efficient sow production. From the foregoing, it can be seen that the present invention accomplishes at least all of the stated objectives.
Glossary
[0150] Unless defined otherwise, all technical and scientific terms used above have the same meaning as commonly understood by one of ordinary skill in the art to which embodiments of the present invention pertain.
[0151] The terms "a," "an," and "the" include both singular and plural referents.
[0152] The term "or" is synonymous with "and/or" and means any one member or combination of members of a particular list.
[0153] The terms "invention" or "present invention" are not intended to refer to any single embodiment of the particular invention but encompass all possible embodiments as described in the specification and the claims.
[0154] The term "about" as used herein refers to slight variations in numerical quantities with respect to any quantifiable variable. An inadvertent error can occur, for example, through the use of typical measuring techniques or equipment or from differences in the manufacture, source, or purity of components.
[0155] The term "substantially" refers to a great or significant extent. "Substantially" can thus refer to a plurality, majority, and/or a supermajority of said quantifiable variable, given proper context.
[0156] The term "generally" encompasses both "about" and "substantially."
[0157] The term "configured" describes a structure capable of performing a task or adopting a particular configuration. The term "configured" can be used interchangeably with other similar phrases, such as "constructed," "arranged," "adapted," "manufactured," and the like.
[0158] Terms characterizing sequential order, a position, and/or an orientation are not limiting and are only referenced according to the views presented.
[0159] The scope of the present invention is defined by the appended claims, along with the full scope of equivalents to which such claims are entitled. The scope of the invention is further qualified as including any possible modification to any of the aspects and/or embodiments disclosed herein which would result in other embodiments, combinations, subcombinations, or the like that would be obvious to those skilled in the art.