Robot with perception capability of livestock and poultry information and mapping approach based on autonomous navigation
11892855 · 2024-02-06
Assignee
Inventors
- Tao LIN (Zhejiang, CN)
- Guoqiang Ren (Zhejiang, CN)
- Zhixian LIN (Zhejiang, CN)
- Jinfan XU (Zhejiang, CN)
- Huanyu Jiang (Zhejiang, CN)
- K. C Ting (Zhejiang, CN)
- Yibin YING (Zhejiang, CN)
CPC classification
- H04W 4/44 (Electricity)
- G05D 1/247 (Physics)
- G01S 17/87 (Physics)
- G01S 17/86 (Physics)
- G06N 7/01 (Physics)
- G01S 17/42 (Physics)
- G01D 21/02 (Physics)
- G05D 1/644 (Physics)
- G05D 1/246 (Physics)
- G05D 1/0246 (Physics)
- G01C 22/00 (Physics)
- G01S 17/894 (Physics)
International classification
- G05D 1/246 (Physics)
- G05D 1/644 (Physics)
- G06N 7/00 (Physics)
- G06N 7/01 (Physics)
Abstract
A robot with the perception capability of livestock and poultry information and a mapping approach based on autonomous navigation are disclosed. The robot includes a four-wheeled vehicle (3), an autonomous navigation system, a motion module and an information acquisition module. The autonomous navigation system includes a LiDAR (10), an RGB-D camera (9), an inertial measurement unit (5), an odometer and a main control module (12). The information acquisition module includes two thermal imagers (6), an environmental detection sensor module (8) and a wireless transmission module (11). The robot is controlled in an indoor working environment while the autonomous navigation system simultaneously obtains information about the surrounding environment during movement; positioning locations are obtained through data processing and a global map is constructed, which improves positioning accuracy, reduces dependence on breeders, and realizes automatic environment detection. The present invention has the advantages of high efficiency, high economic benefits, and wide applicability.
Claims
1. A robot with perception capability of livestock and poultry information based on autonomous navigation, the robot comprising a four-wheeled vehicle (3), an elevating platform (7) located above the four-wheeled vehicle (3), and an autonomous navigation system, a motion module and an information acquisition module, three of which are installed on the four-wheeled vehicle (3), wherein: the autonomous navigation system comprises a light detection and ranging (LiDAR) (10) for obtaining information about surrounding environment, a red-green-blue-depth (RGB-D) camera (9), an inertial measurement unit (5), an odometer and a main control module (12) for information processing and control, wherein: the LiDAR (10), the RGB-D camera (9), the inertial measurement unit (5), the odometer and the main control module (12) are fixed to the four-wheeled vehicle (3); the LiDAR (10), the RGB-D camera (9), the inertial measurement unit (5) and the odometer are connected with the main control module (12); the motion module comprises a direct current (DC) motor unit (1) for controlling the robot to move forward, a push-rod motor unit (4) for controlling up and down movement of the elevating platform (7), and a microcontroller (2) for controlling a rotational speed and a rotational direction of the DC motor unit (1) and the push-rod motor unit (4), wherein: the DC motor unit (1) comprises four DC motors (17) installed at four corners of a bottom portion of the four-wheeled vehicle (3), respectively; four output shafts (16) of the four DC motors (17) are connected with four wheels (15) at the four corners of the bottom portion of the four-wheeled vehicle (3), respectively; the push-rod motor unit (4) comprises two push-rod motors (14) fixed to a top of the four-wheeled vehicle (3); two output ends of the push-rod motor unit (4) are connected with the elevating platform (7) for driving the elevating platform (7) to move up and down; the information acquisition module comprises two thermal imagers (6) for collecting animal behavior information, an environmental detection sensor module (8) for collecting environmental information and a wireless transmission module (11) for data transmission, wherein the two thermal imagers (6) are disposed at two sides of a bottom of the elevating platform (7), respectively; the environmental detection sensor module (8) is disposed on the elevating platform (7).
2. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, further comprising a power management module (13) which is connected with the LiDAR (10), the RGB-D camera (9), the inertial measurement unit (5), the odometer, the main control module (12), the DC motor unit (1), the push-rod motor unit (4), and the microcontroller (2).
3. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, wherein the DC motor unit (1) and the push-rod motor unit (4) are both connected with the microcontroller (2); the microcontroller (2) is connected with the main control module (12); the main control module (12) is configured to receive the information about the surrounding environment from the LiDAR for controlling the DC motor unit (1) and the push-rod motor unit (4), so as to further drive the four-wheeled vehicle (3) and the elevating platform (7) to move; and autonomous map construction and navigation are performed by simultaneous localization and mapping (SLAM).
4. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, wherein the two thermal imagers (6) and the environmental detection sensor module (8) are all electrically connected with the wireless transmission module (11), the wireless transmission module (11) is connected with an external wireless receiver; the external wireless receiver is configured to receive environmental perception information collected by the thermal imagers (6) and the environmental detection sensor module (8) for storing and processing.
5. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, wherein the environmental detection sensor module (8) comprises multiple sensors, including but not limited to a sound sensor, a temperature sensor, a humidity sensor, a light intensity sensor, a hydrogen sulfide sensor, an ammonia sensor and a carbon dioxide sensor.
6. An autonomous mapping method for the robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, the method comprising the steps of: (S1) controlling the robot to move in an indoor working environment, and simultaneously obtaining information about the surrounding environment during movement with the LiDAR, the RGB-D camera, the inertial measurement unit and the odometer, wherein the information comprises obstacle distance information, image and depth information, pose information and odometer information in a local coordinate system, the pose information comprises first real-time global coordinates, and the odometer information comprises second real-time global coordinates, speed, heading angle and wheel angular velocity; (S2) receiving and processing the information about the surrounding environment through the main control module, and obtaining positioning global coordinates, speed, heading angle and wheel angular velocity of the robot in a world coordinate system by coordinate transformation; (S3) defining the positioning global coordinates, speed, heading angle and wheel angular velocity of the robot in the world coordinate system to be a state vector of a Kalman filter, wherein the positioning global coordinates in the world coordinate system are obtained through processing the first real-time global coordinates and the second real-time global coordinates in the local coordinate system; (S4) constructing a state model of the Kalman filter according to the state vector, constructing an observation model of the Kalman filter according to an observation model of the odometer, an observation model of the inertial measurement unit and an observation model of the LiDAR, solving the state model and the observation model of the Kalman filter according to the Kalman filter algorithm, and obtaining an optimal solution of the state vector at time t; (S5) determining the optimal solution of the state vector under the state model and the observation model of the Kalman filter in the step (S4) in combination with image information collected by the RGB-D camera and a Monte Carlo real-time positioning and mapping algorithm, which comprises: (S501) the robot moving in an area to be constructed on a map, judging whether the robot turns and whether there are obstacles during movement through the obstacle distance information collected by the LiDAR, judging whether characteristic road markings are captured through the image information collected by the RGB-D camera, performing feature matching on information collected by the LiDAR, the inertial measurement unit and the odometer in the area to be constructed on the map, and obtaining poses in the world coordinate system, wherein the poses are the global coordinates of the robot in the world coordinate system; and (S502) when the robot does not turn, no obstacles are detected and no characteristic road markings are captured by the RGB-D camera during the movement of the robot, defining control vectors of the state model of the Kalman filter to be the poses in the world coordinate system; when the robot turns, obstacles are detected or characteristic road markings are captured by the RGB-D camera during the movement of the robot, defining the control vectors of the state model of the Kalman filter to be the optimal solution of the state vector; and (S6) iteratively solving the state model and the observation model of the Kalman filter, and obtaining locations, so that a global map is constructed.
7. The autonomous mapping method according to claim 6, wherein the step (S4) specifically comprises: (S401) at time t, constructing the state vector X_c(t) = [x_t, y_t, θ_t, v_t, ω_t]^T, wherein x_t and y_t are the positioning global coordinates of the robot in the world coordinate system, θ_t is the heading angle of the robot in the world coordinate system, v_t is the speed of the robot in the world coordinate system, ω_t is the wheel angular velocity of the robot in the world coordinate system, and T denotes matrix transposition; (S402) constructing the state model of the Kalman filter according to a formula of

X_c(t+1) = f(X_c(t)) + W_t;

and processing the covariance Q(t) of the process noise W_t and the estimated error covariance P(t) for the first sub-filter and the second sub-filter of the Kalman filter through formulas of

Q_1(t) = β_1^{-1} Q(t),

P_1(t) = β_1^{-1} P(t),

Q_2(t) = β_2^{-1} Q(t),

P_2(t) = β_2^{-1} P(t),

wherein β_1 and β_2 are information-sharing coefficients of the first sub-filter and the second sub-filter, respectively.
8. The autonomous mapping method according to claim 6, wherein the step (S6) specifically comprises: (S601) converting a motion observation model to a likelihood function; (S602) evenly dividing the area to be constructed on the map into multiple grids, scanning the area to be constructed on the map with the LiDAR, setting a grid with obstacles to 1, setting a grid without obstacles to 0, and obtaining a local grid map which is regarded as an initial global map; (S603) creating particles based on Monte Carlo algorithm, taking positions of the particles as possible locations of the robot, performing weight fusion on the second real-time global coordinates obtained by the odometer and the first real-time global coordinates obtained by the inertial measurement unit in the world coordinate system, and obtaining new locations of the robot, which is expressed by formulas of
X_k = Σ_{i=1}^{n} (ω_k^i · P^i),

here, n is a total number of the particles, P^i is a position of the i-th particle after weighting and fusion; and (S606) according to the normalized updated particle weight, abandoning particles with smaller weight ω_k^i and retaining particles with larger weight, which comprises: (S6061) performing polynomial resampling on the updated weights of all particles, and constructing a discrete cumulative distribution function by a formula of

F(i) = Σ_{m=1}^{i} ω_k^m,

here, F(i) is a cumulative weight of the i-th particle; (S6062) generating a uniformly distributed random number set {u_j} on [0,1], wherein u_j is the j-th random number in the set, and judging the weight ω_k^i, wherein the weight ω_k^i is regarded as smaller if F(i) ≤ u_j, and the weight ω_k^i is regarded as larger if F(i) > u_j, in which case a current particle is copied as a new particle and a weight of the new particle is set to 1/N; and (S6063) repeating the polynomial resampling in the step (S6062) for N times, obtaining N new particles, completing the particle update, and using positions of the finally updated particles as locations of the robot in the world coordinate system.
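The state vector recited in claims 6 and 7 and the prediction step X_c(t+1) = f(X_c(t)) + W_t can be illustrated with a minimal Python sketch. The unicycle-style transition function used below is an illustrative assumption; the claims only state that f(·) is nonlinear and do not give its form.

```python
import numpy as np

def predict_state(X_c, dt):
    """One prediction step for the state X_c = [x, y, theta, v, omega]^T.

    The unicycle-style transition below is only an illustrative guess at the
    nonlinear function f(.) named in claim 7; the patent does not spell it out.
    """
    x, y, theta, v, omega = X_c
    return np.array([
        x + v * dt * np.cos(theta),   # new global x coordinate
        y + v * dt * np.sin(theta),   # new global y coordinate
        theta + omega * dt,           # new heading angle
        v,                            # speed assumed constant over dt
        omega,                        # wheel angular velocity assumed constant
    ])

# X_c(t+1) = f(X_c(t)) + W_t, with W_t ~ N(0, Q) process noise added separately
X_next = predict_state(np.array([0.0, 0.0, 0.0, 0.4, 0.1]), dt=0.1)
```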
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(6) In the drawings, 1: direct current (DC) motor unit; 2: microcontroller; 3: four-wheeled vehicle; 4: push-rod motor unit; 5: inertial measurement unit; 6: thermal imager; 7: elevating platform; 8: environmental detection sensor module; 9: red-green-blue-depth (RGB-D) camera; 10: light detection and ranging (LiDAR); 11: wireless transmission module; 12: main control module; 13: power management module; 14: push-rod motor; 15: wheel; 16: output shaft; 17: DC motor.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
(7) In order to make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions, principles and features of the present invention are described below with reference to the accompanying drawings. The cited embodiments are only used to explain the present invention, not to limit its scope. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present invention.
(8) Referring to
(9) As shown in
(10) As shown in
(11) The elevating platform 7 and the four-wheeled vehicle 3 are the main components of the robot. The elevating platform 7, having an elevating function, is installed to the four-wheeled vehicle 3. Two thermal imagers 6 and an environmental detection sensor module 8 are installed to the elevating platform 7, so that information from different locations can be collected according to task requirements; the platform is not limited to realizing information perception functions only.
(12) As shown in
(13) Preferably, the robot further comprises a power management module 13 which is connected with the LiDAR 10, the RGB-D camera 9, the inertial measurement unit 5, the odometer, the main control module 12, the DC motor unit 1, the push-rod motor unit 4, and the microcontroller 2, wherein the power management module 13 is configured to provide components of the robot with power required for normal work.
(14) The elevating platform 7 drives the two thermal imagers 6 and the environmental detection sensor module 8 to move up and down for perceiving and collecting environmental information required for the cages at different heights.
(15) The LiDAR 10 is configured to measure a distance between the robot and surrounding obstacles in real time; the RGB-D camera 9 is configured to obtain RGB images and depth information of the surrounding environment of the robot; the inertial measurement unit 5 is configured to obtain pose information of the robot; the odometer is configured to obtain odometer information of the robot. Based on robot operating system (ROS), the main control module 12 is able to realize autonomous mapping and path planning in combination with Kalman filter algorithm and particle filter algorithm.
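Paragraph (15) states that the main control module runs on the robot operating system (ROS) and fuses LiDAR, RGB-D camera, inertial measurement unit and odometer data. A minimal sketch of how such inputs might be collected in a ROS 1 node is shown below; the rospy API choice and the topic names /scan, /imu and /odom are assumptions for illustration and are not specified in the patent.

```python
import rospy
from sensor_msgs.msg import LaserScan, Imu
from nav_msgs.msg import Odometry

latest = {"scan": None, "imu": None, "odom": None}  # most recent message of each type

def on_scan(msg):   # obstacle distance information from the LiDAR
    latest["scan"] = msg

def on_imu(msg):    # pose information from the inertial measurement unit
    latest["imu"] = msg

def on_odom(msg):   # odometer information (position, speed, heading)
    latest["odom"] = msg

if __name__ == "__main__":
    rospy.init_node("perception_robot_inputs")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.Subscriber("/imu", Imu, on_imu)
    rospy.Subscriber("/odom", Odometry, on_odom)
    rospy.spin()
```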
(16) The environmental detection sensor module 8 comprises multiple sensors, including but not limited to a sound sensor, a temperature sensor, a humidity sensor, a light intensity sensor, a hydrogen sulfide sensor, an ammonia sensor and a carbon dioxide sensor.
(18) The autonomous navigation method comprises two steps of autonomous mapping and path planning.
(19) The step of autonomous mapping comprises: (S1) controlling the robot to move in an indoor working environment based on ROS, and simultaneously obtaining information about the surrounding environment during movement with the LiDAR, the RGB-D camera, the inertial measurement unit and the odometer, wherein the information comprises obstacle distance information, image and depth information, pose information and odometer information in a local coordinate system, the pose information comprises first real-time global coordinates, and the odometer information comprises second real-time global coordinates, speed, heading angle and wheel angular velocity; (S2) receiving and processing the information about the surrounding environment through the main control module, and obtaining positioning global coordinates, speed, heading angle and wheel angular velocity of the robot in a world coordinate system by coordinate transformation; (S3) defining the positioning global coordinates, speed, heading angle and wheel angular velocity of the robot in the world coordinate system to be a state vector of a Kalman filter, wherein the positioning global coordinates in the world coordinate system are obtained through processing the first real-time global coordinates and the second real-time global coordinates in the local coordinate system; (S4) constructing a state model of the Kalman filter according to the state vector, constructing an observation model of the Kalman filter according to an observation model of the odometer, an observation model of the inertial measurement unit and an observation model of the LiDAR, solving the state model and the observation model of the Kalman filter according to the Kalman filter algorithm, and obtaining an optimal solution of the state vector, wherein the step (S4) specifically comprises: (S401) at time t, constructing the state vector X_c(t) = [x_t, y_t, θ_t, v_t, ω_t]^T, wherein x_t and y_t are the positioning global coordinates of the robot in the world coordinate system, θ_t is the heading angle of the robot in the world coordinate system, v_t is the speed of the robot in the world coordinate system, ω_t is the wheel angular velocity of the robot in the world coordinate system, and T denotes matrix transposition; (S402) constructing the state model of the Kalman filter according to formulas of
(20) X_c(t+1) = f(X_c(t)) + W_t,
wherein X_c(t+1) is the state vector at time t+1, f(X_c(t)) is a nonlinear state transition function applied to the state vector X_c(t) at time t, W_t is process noise of the Kalman filter, and Δt is the time interval between two adjacent moments; (S403) dividing the Kalman filter into a first sub-filter and a second sub-filter which are independent from each other and run in parallel, wherein:
(21) an observation model of the first sub-filter is expressed by Z_1(t+1) = h_1 X_c(t) + W_1(t), which is specifically
(22) Z_1(t+1) = [Z_Las, Z_IMU]^T = h_1 X_c(t) + W_1(t),
here, Z_Las is the observation model of the LiDAR, Z_IMU is the observation model of the inertial measurement unit, W_1(t) is a sum of noise of the LiDAR and the inertial measurement unit, and h_1 is an observation matrix of the first sub-filter;
(23) an observation model of the second sub-filter is expressed by Z_2(t+1) = h_2 X_c(t) + W_2(t), which is specifically
(24) Z_2(t+1) = Z_odom = h_2 X_c(t) + W_2(t),
here, Z_odom is the observation model of the odometer, W_2(t) is a sum of noise of the odometer and the inertial measurement unit, and h_2 is an observation matrix of the second sub-filter; and (S404) processing the covariance Q(t) of the process noise W_t of the Kalman filter and the estimated error covariance P(t) through formulas of
Q_1(t) = β_1^{-1} Q(t),

P_1(t) = β_1^{-1} P(t),

Q_2(t) = β_2^{-1} Q(t),

P_2(t) = β_2^{-1} P(t),

wherein β_1 and β_2 are information-sharing coefficients of the first sub-filter and the second sub-filter, respectively.
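Read as a federated Kalman filter, the formulas of step (S404) allocate the process-noise covariance Q(t) and the estimated error covariance P(t) to the two sub-filters through the coefficients β_1 and β_2. The sketch below assumes β_1 + β_2 = 1, which is the usual federated-filter convention rather than something stated explicitly in the text.

```python
import numpy as np

def share_covariances(Q, P, beta1=0.5, beta2=0.5):
    """Split the master covariances between the two sub-filters.

    Q_i(t) = beta_i^{-1} Q(t) and P_i(t) = beta_i^{-1} P(t); the constraint
    beta1 + beta2 = 1 is the conventional federated-filter choice and is an
    assumption here.
    """
    assert abs(beta1 + beta2 - 1.0) < 1e-9
    Q1, P1 = Q / beta1, P / beta1   # LiDAR + IMU sub-filter
    Q2, P2 = Q / beta2, P / beta2   # odometer sub-filter
    return (Q1, P1), (Q2, P2)

(Q1, P1), (Q2, P2) = share_covariances(np.eye(5) * 0.01, np.eye(5) * 0.1)
```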
(25) P = ω_t · P_odom + (1 − ω_t) · P_IMU,
here, P is the position after weighting and fusion, P_odom is the second real-time global coordinates obtained by the odometer in the world coordinate system, P_IMU is the first real-time global coordinates obtained by the inertial measurement unit in the world coordinate system, ω_t is the weight of the odometer, t is the location duration time, τ is the stable location duration time that the second real-time global coordinates obtained by the odometer in the world coordinate system reach, and n is a time index parameter which depends on actual situations and is generally 3; (S604) describing a particle weight update method by a Gaussian distribution with a mean value of 0 and a variance of σ², and updating a particle weight of the Monte Carlo algorithm, wherein the updated particle weight is expressed by a formula of
(26) ω_k^i = exp{−[(x_k^i − x_0^i)² + (y_k^i − y_0^i)²] / (2σ²)},
here, (x_k^i, y_k^i) is the position of the i-th particle at time k, e is the natural constant, x_0^i and y_0^i are the initial positions of the i-th particle, ω_k^i is the weight of the i-th particle at time k, and k is the time index; the updated particle weights are then normalized; (S605) calculating a current location of the robot according to the normalized updated particle weights through a formula of
X_k = Σ_{i=1}^{n} (ω_k^i · P^i),
here, n is a total number of the particles, P^i is a position of the i-th particle after weighting and fusion; and (S606) according to the normalized updated particle weight, abandoning particles with smaller weight ω_k^i and retaining particles with larger weight, which comprises: (S6061) performing polynomial resampling on the updated weights of all particles, and constructing a discrete cumulative distribution function by a formula of
F(i) = Σ_{m=1}^{i} ω_k^m,
here, F(i) is a cumulative weight of the i-th particle; (S6062) generating a uniformly distributed random number set {u_j} on [0,1], wherein u_j is the j-th random number in the set, and judging the weight ω_k^i, wherein the weight ω_k^i is regarded as smaller if F(i) ≤ u_j, and the weight ω_k^i is regarded as larger if F(i) > u_j, in which case a current particle is copied as a new particle and a weight of the new particle is set to 1/N; and (S6063) repeating the polynomial resampling in the step (S6062) for N times, obtaining N new particles, completing the particle update, and using positions of the finally updated particles as locations of the robot in the world coordinate system.
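Steps (S604) through (S6063) amount to a weight update, a weighted location estimate and a polynomial (multinomial) resampling cycle. The sketch below follows that structure; the exact displacement-based form of the Gaussian weight is an assumption.

```python
import numpy as np

def update_weights(particles, origins, sigma):
    """Gaussian weight of each particle based on its displacement (S604)."""
    d2 = np.sum((particles - origins) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / np.sum(w)                       # normalized weights

def estimate_pose(particles, weights):
    """Weighted mean of particle positions, X_k = sum_i w_k^i * P^i (S605)."""
    return np.sum(weights[:, None] * particles, axis=0)

def multinomial_resample(particles, weights, rng=np.random.default_rng()):
    """Resample N particles from the cumulative weights F(i) (S606)."""
    n = len(particles)
    cumulative = np.cumsum(weights)            # discrete CDF F(i)
    u = rng.uniform(0.0, 1.0, size=n)          # random numbers u_j on [0, 1)
    idx = np.searchsorted(cumulative, u)       # keep particles with F(i) > u_j
    new_particles = particles[idx]
    new_weights = np.full(n, 1.0 / n)          # every surviving copy gets weight 1/N
    return new_particles, new_weights

particles = np.random.default_rng(0).normal(size=(200, 2))  # (x, y) hypotheses
weights = update_weights(particles, origins=np.zeros((200, 2)), sigma=0.5)
pose = estimate_pose(particles, weights)
particles, weights = multinomial_resample(particles, weights)
```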
(27) In the embodiment, the characteristic road-sign image information is also integrated. If the image from the RGB-D camera shows no livestock and poultry house information, and the distance between the current positioning global coordinate and the edge obstacle scanned by the laser is less than a threshold, the autonomous navigation system issues a turn signal to the motion module; the robot then moves forward to the next intersection and the turn signal is issued again, until the navigation mode is turned off.
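The turn decision in paragraph (27) reduces to a simple rule. The helper below is a minimal sketch in which the function name and the 0.8 m threshold are hypothetical, since the patent does not give a threshold value.

```python
def should_turn(image_has_house_features, distance_to_edge_obstacle_m,
                threshold_m=0.8):
    """Issue a turn signal when the RGB-D image shows no livestock-house
    features and the laser-scanned edge obstacle is closer than the threshold.
    The 0.8 m default is a placeholder; the patent specifies no value."""
    return (not image_has_house_features) and (distance_to_edge_obstacle_m < threshold_m)
```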
(28) As shown in
(29) In the local real-time planning, the dynamic window approach is used to plan and adjust the route in each cycle according to the map information and possible nearby obstacles, to comprehensively evaluate time, collision risk and other conditions in order to select the optimal route, and to calculate the linear velocity and angular velocity for the travel cycle, so as to achieve real-time obstacle avoidance.
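Paragraph (29) refers to the dynamic window approach (DWA). A generic sampling-and-scoring sketch of that idea is given below; the velocity ranges, sample count and the example scoring weights are illustrative assumptions, not values from the invention.

```python
import numpy as np

def dwa_select(v_range, w_range, score_fn, n_samples=11):
    """Sample (v, w) pairs inside the dynamic window and keep the best-scored pair.

    score_fn(v, w) should combine heading, clearance (collision) and velocity
    terms; how those terms are weighted is left to the caller.
    """
    best, best_score = None, -np.inf
    for v in np.linspace(*v_range, n_samples):
        for w in np.linspace(*w_range, n_samples):
            s = score_fn(v, w)
            if s > best_score:
                best, best_score = (v, w), s
    return best

# Example scoring: prefer forward motion while penalizing large turning rates.
v_cmd, w_cmd = dwa_select((0.0, 0.5), (-1.0, 1.0),
                          score_fn=lambda v, w: v - 0.2 * abs(w))
```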