POSITION-DETERMINING DEVICE
20220087208 · 2022-03-24
CPC classification
G06V10/145
PHYSICS
G06V40/10
PHYSICS
G06V20/647
PHYSICS
A01J5/007
HUMAN NECESSITIES
G01S17/894
PHYSICS
G06T7/521
PHYSICS
International classification
A01J5/007
HUMAN NECESSITIES
G01S17/894
PHYSICS
G06T7/521
PHYSICS
G06V40/10
PHYSICS
Abstract
A position-determining device, and a milking device, that determines a relative position of an object and includes a 3D time-of-flight camera with a 2D arrangement of pixels configured to repeatedly record an image of a space. A control unit is connected to the camera and includes an image-processing device. The 3D time-of-flight camera has a controllable light source and is configured to record a 2D image by means of reflected emitted light and to collect distance information. The image-processing device is configured to recognize a possible object in the 2D image using image-processing criteria and to determine distance information and a relative position by analysing the 2D image and the distance information. Because distance information, which is often much noisier than 2D brightness information, can be determined with far fewer image points, the position is determined more quickly and reliably.
Claims
1. A position-determining device configured to repeatedly determine a position of an object in a space with respect to the position-determining device, comprising: a 3D time-of-flight camera with a 2D arrangement of pixels configured to repeatedly record an image of the space, and a control unit which is connected to the camera and comprises an image-processing device for processing a recorded image, the 3D time-of-flight camera comprising a light source for emitting light which is controllable by the control unit, and being configured both to record a 2D image of the space and to collect distance information for one or more pixels by means of reflected emitted light, wherein the image-processing device is configured: to recognise a possible object in the 2D image on the basis of one or more image-processing criteria, wherein the possible object comprises a first set of the pixels of the image; to determine distance information for a subset of the first set of pixels which is smaller than the first set; and to determine a position with respect to the camera by means of analysis of the 2D image and of the distance information of the subset of pixels.
2. The position-determining device according to claim 1, wherein the image-processing device is configured to determine a subset of the possible object in the 2D image as only 1 pixel.
3. The position-determining device according to claim 1, wherein the image-processing device is configured to determine the subset of the possible object in the 2D image as precisely 2 or 3 pixels.
4. The position-determining device according to claim 1, wherein the 2D arrangement of pixels also comprises colour pixels, wherein the 3D time-of-flight camera is configured to record a colour image of the space.
5. The position-determining device according to claim 1, wherein, if the image-processing device recognises several different sets of pixels of the recorded image as possible objects, the image-processing device is configured to determine distance information for an associated subset of pixels for each of the possible objects, and is furthermore configured to classify the possible objects on the basis of the determined distance information according to a predetermined classification criterion.
6. A position-determining device for repeatedly determining a position of an object in a space with respect to the position-determining device, comprising: a 2D camera with a 2D arrangement of pixels, configured to repeatedly record an image of the space, a height-determining device for determining a height of an object in the space, and a control unit which is connected to the camera and the height-determining device and comprises an image-processing device for processing a recorded image, wherein the image-processing device is configured: to recognise a possible object in a 2D image on the basis of one or more image-processing criteria, and to determine the position with respect to the camera by analysis of the 2D image and a determined height.
7. The position-determining device according to claim 6, wherein the height-determining device comprises a laser distance meter.
8. A milking device for milking a dairy animal, comprising milking cups, a robot arm for attaching the milking cups to teats of the dairy animal, as well as a robot control unit for controlling the robot arm, wherein the robot control unit comprises the position-determining device according to claim 6.
9. The milking device according to claim 8, wherein the position-determining device is configured to determine the position of the teats of the dairy animal.
10. The milking device according to claim 8, wherein the milking device comprises a milking stall for milking the dairy animal, and wherein the position-determining device is configured to determine whether a dairy animal is present in the milking stall and/or to determine a position of the dairy animal in the stall.
11. The milking device according to claim 10, wherein the robot control unit is configured to control the robot arm on the basis of the position of the dairy animal in the milking stall determined by the position-determining device.
12. The milking device according to claim 8, provided with the position-determining device, wherein the height-determining device comprises an animal-identification device for identifying the dairy animal to be milked, as well as a database which is operatively connected to the milking device and contains a height for each dairy animal, and wherein the height-determining device determines the height of the dairy animal by retrieving height data associated with the identified dairy animal from the database.
13. The position-determining device according to claim 6, wherein the height-determining device comprises a series of photoelectric cells.
14. The position-determining device according to claim 2, wherein the image-processing device is configured to determine the subset of the possible object in the 2D image as only 1 pixel surrounded on all sides by other pixels of the first set.
15. The position-determining device according to claim 2, wherein the image-processing device is configured to determine a geometric centre of gravity of the possible object.
16. The position-determining device according to claim 4, wherein the 2D arrangement of pixels also comprises RGB pixels.
Description
[0023] The invention will now be explained with reference to the drawing.
[0029] A dairy animal, in this case a cow, is denoted by reference numeral 9 and has teats 10. ToF camera 5 has an emitted ray 11, whereas reference numeral 12 denotes a line or direction towards a point P exactly above the centre between the teats 10. The position of the base of the tail is denoted by the letter S. Finally, reference numeral 13 denotes a control unit. For the sake of clarity, the usual components have not been illustrated or denoted, such as a light source in the 3D ToF cameras 5, 5′, and an image-processing device which is incorporated in the latter or in the control unit 13.
[0030] Such a milking device 1 automatically attaches milking cups 2 to the teats 10. To this end, the teat-detecting camera 4 determines the position of the teats 10 with respect to the camera. The camera 4 is preferably placed close to the teats before their positions are determined. Here, use is often made of the fact that the position of the udder, and thus also of the teats, is relatively stable with respect to a fixed point of the cow 9, apart from the slow growth of the cow. It is then advantageous to know a relation between, for example, the position of the base of the tail S and the position of the teats, either in the form of a point P exactly above the centre of the teats or as determined after attaching the milking cups 2 to the teats 10. Such a relation can be stored in the control unit 13. During a subsequent milking operation, it then suffices to detect the same point S (or, if desired, another, previously determined point) and to determine via the relation the associated estimated position of point P and/or of the teats directly, in order to have a good base position for the robot arm 3 to swing in and for placing the teat-detecting camera 4.
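As a minimal sketch of how such a stored relation could be applied, the snippet below keeps, per animal, the learned offset from the tail-base point S to point P and reuses it at the next milking. The data structure and all names are illustrative assumptions, not part of the patent:

```python
# Illustrative sketch: the control unit 13 stores, per animal, the
# learned offset from the tail-base point S to point P above the teats.
stored_relations = {}  # animal id -> (dx, dy, dz) offset from S to P

def learn_relation(animal_id, point_s, point_p):
    """Store the S-to-P offset once the milking cups have been attached."""
    stored_relations[animal_id] = tuple(p - s for s, p in zip(point_s, point_p))

def estimate_point_p(animal_id, point_s):
    """Estimate point P from a newly detected tail-base position S."""
    offset = stored_relations[animal_id]
    return tuple(s + o for s, o in zip(point_s, offset))
```

In practice the offset would be refreshed from time to time, since the relation changes slowly as the animal grows.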
[0031] It is known per se to determine the position of point S using a 3D camera, by producing a 3D representation of the cow 9 and analysing it in 3D by looking at, for example, edge detection where the height of the cow drops quickly, etc. According to the present invention, such a 3D analysis is not required and it suffices to measure a single distance to the camera 5 (or 5′), such as to point P. This will be explained later.
[0032] The illustrated 3D ToF camera 5 has a relatively wide emitted light ray 11, so that there is a good chance that the cow 9 will appear sufficiently clearly in an image and the relevant one or more points can be recorded by the camera 5. Alternatively, it is also possible to mount, via a connecting arm 6, a 3D ToF camera 5′ onto a trolley 7 which is displaceable along a rod 8 of the milking device 1. Thus, the camera 5′ can always be positioned in an optimum manner, in which case it is advantageous that the ray of the camera 5′ can be narrower, which improves the reliability, accuracy and luminosity of the 3D ToF camera 5′.
[0033] For example, animal recognition is used when placing the trolley, but also when trying to find the above-described relation between udder/teat position and position of the base of the tail S. In every automatic milking device, the cows are recognisable by means of a tag and tag-reading system which is not shown separately here for the sake of simplicity.
[0035] In the case of the milking device 1 according to
[0036] It is also possible that several candidate objects are recognised in the image. The control unit can then analyse the object candidates by measuring a distance for every candidate and not only compare it to the reference distance, but also to an expected value of this distance, for example on the basis of a determined animal identity or general knowledge of the cows 9 which enter the milking device.
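The candidate check described in this paragraph can be sketched as a simple distance filter. The function names and the use of a symmetric tolerance band are illustrative assumptions:

```python
def plausible_candidates(candidates, measure_distance, expected_distance, tolerance):
    """Keep only the object candidates whose single measured distance lies
    within `tolerance` of the distance expected for this animal ([0036]).
    `measure_distance` stands in for a per-candidate ToF distance query
    on one pixel of the candidate."""
    return [c for c in candidates
            if abs(measure_distance(c) - expected_distance) <= tolerance]
```

The expected distance could come from the determined animal identity or from general knowledge of the cows entering the milking device, as the paragraph describes.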
[0037] The outline 20 is then analysed by the control unit 13. Some useful analysis steps are worked out below, but it is emphasized that other steps are also possible.
[0038] First, a centre line 21 in the image of the cow is determined by determining a longitudinal direction of the outline and dividing this in two. In principle, this centre line 21 represents the position of the spine of the cow, as well as the line of highest points on the back of the cow. In principle, this centre line may also be determined from the centre between parallel tangent lines R1 and R2 to the outline.
[0039] Subsequently, a tangent line R3 to the outline 20 is determined at right angles to the centre line 21. The intersection 22 of the centre line 21 and the tangent line R3 is in this case determined as the fixed point by the control unit. This point 22 corresponds to the position of the base of the tail of the cow. The expected position of the teats with respect to this base of the tail can be determined by the control unit, for example as the position of a point P in the centre between the four teats, which point is situated at a distance D along the centre line 21 in a forward direction. In principle, this distance D is a fixed value which will only change as the cow grows, but it can easily be determined once the milking cups have been attached. Furthermore, a reference position with respect to said point P could be determined for each individual teat 10, so that a subsequent robot arm can take the milking cup to a more precise starting position.
[0040] Finally, a height of the cow is determined in a point on the centre line, for example in a point which corresponds to a projection of point P on the centre line, and in the figure this obviously corresponds to point P. For the pixel in the image which corresponds to that point, the height is determined by means of the 3D properties of the camera. The position thereof can, as such, be determined satisfactorily from a ratio between the (reference) distance D and the total length of the cow. This ratio may also be updated for a growing cow from time to time. It should be noted that the height of point P in principle corresponds well to the height of point 22, the position of the base of the tail which is to be determined. Of course, this point 22 can also be used to measure the distance and to determine the height or even the coordinates therefrom. However, the height of point P can be determined more easily and accurately, because point P is situated on a more or less flat part of the cow and point 22, by contrast, is situated on an edge, so that the distance determination by means of the 3D ToF camera for point P is more reliable. In general, it is therefore advantageous if the point for determining the height of the animal is not situated on the outline 20, but, by contrast, is surrounded on all sides by other points of the object in the image. It is also possible to use the geometric centre of gravity of the outline 20 as an alternative point, for example. Due to the relative flatness of the back of the cow, the height thereof still corresponds well, so that deviations will still be acceptable.
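The analysis steps of paragraphs [0038]–[0040] can be sketched as follows. This is only an illustration: the use of a principal-axis fit for the centre line, and all names, are assumptions, and the sign of the "forward" direction would still have to be fixed, e.g. from the known geometry of the milking stall.

```python
import numpy as np

def tail_base_and_point_p(outline, distance_d):
    """Sketch of [0038]-[0039]: fit the centre line 21 as the long axis
    of the outline 20, take the tangent point at the rear extreme as the
    tail base (point 22), and place point P a distance D further along
    the centre line."""
    outline = np.asarray(outline, dtype=float)
    centre = outline.mean(axis=0)
    # Long axis of the outline via singular value decomposition
    # (principal component); this plays the role of centre line 21.
    _, _, vt = np.linalg.svd(outline - centre)
    axis = vt[0]                              # unit vector along the spine
    proj = (outline - centre) @ axis
    tail_base = centre + proj.min() * axis    # rearmost tangent point 22
    point_p = tail_base + distance_d * axis   # distance D along line 21
    return tail_base, point_p
```

Whether `proj.min()` or `proj.max()` marks the tail end depends on how the animal enters the stall; the height at point P would then be read out for the single corresponding ToF pixel, as paragraph [0040] describes.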
[0042] The “real” x and y coordinates can thus not be determined without knowing the height under the camera or the height above the floor surface. According to the invention, those real coordinates can actually be accurately determined if more information about the height is known, as is illustrated by means of
[0043] With d the measured distance from the camera to point P and α_P the associated viewing angle, the height of point P below the camera follows as:

h_P = d·cos(α_P).
[0044] This height equals the corresponding height of point 22. After some trigonometric calculations, the coordinates of point 22 in the space can then be determined as (in each case up to sign):
x_22 = d·(sin(α_P)/tan(α_22))·cos(β_22)
y_22 = d·(sin(α_P)/tan(α_22))·sin(β_22)
h_22 = h_P = d·cos(α_P)
Incidentally, a 3D ToF camera often already comprises or is supplied with a program which automatically determines the coordinates of a measured pixel, so that it is not necessary to carry out the entire abovementioned method. However, it should be noted that the point whose coordinates are thus determined still has to be determined by the user or automatically by the control unit.
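The formulas above transcribe directly into code. The snippet below is only that transcription; the exact angle conventions depend on the patent's figure and are assumed here:

```python
import math

def coordinates_of_point_22(d, alpha_p, alpha_22, beta_22):
    """Transcription of the formulas in [0044]: from the single measured
    distance d to point P and the pixel angles, the height and the x/y
    coordinates of point 22 follow (in each case up to sign)."""
    h_p = d * math.cos(alpha_p)                      # height of point P
    r = d * math.sin(alpha_p) / math.tan(alpha_22)   # horizontal range of 22
    x_22 = r * math.cos(beta_22)
    y_22 = r * math.sin(beta_22)
    h_22 = h_p   # the back is nearly flat, so both heights agree
    return x_22, y_22, h_22
```

As the text notes, a 3D ToF camera is often supplied with a program that performs this conversion for a measured pixel automatically; the point whose coordinates are wanted still has to be selected by the user or by the control unit.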
[0045] Another important note is that this example has used the fact that the orientation of the animal is parallel to the line y=0. Should the animal be at an angle, the orientation will also have to be determined, for example by 2D analysis of the determined outline 20, in order to take into account the relation between the position of point S and the position of the teats/point P.
[0046] It will be clear that variations are possible with regard to the choice of, for example, the point for determining the height and the way of calculating the coordinates. It is emphasized again that the present invention provides advantages because it analyses only a two-dimensional image and, by means of a single distance measurement, greatly improves the accuracy of the position determination from that two-dimensional image, without requiring a three-dimensional image analysis which demands a large amount of computing power.
[0047] It should also be noted that it is possible to measure the distance for two or three points of the cow and to determine the coordinates for these. In this way, it is possible to better take into account the form properties of the cow or another dairy animal, but still without having to analyse a complicated 3D representation. The 2 or 3 points for example determine a line or a plane, respectively, so that, for example, the orientation of the cow becomes even clearer. In turn, the relation between base of the tail position and teat position can consequently be used in an optimum manner.
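For example, with two measured points the orientation of the animal in the horizontal plane follows from a single arctangent. This computation is an illustration and is not prescribed by the patent:

```python
import math

def heading_from_two_points(point_a, point_b):
    """Orientation (radians, in the horizontal plane) of the line through
    two measured 3D points on the animal's back, e.g. the tail base and a
    point further along the spine ([0047])."""
    dx = point_b[0] - point_a[0]
    dy = point_b[1] - point_a[1]
    return math.atan2(dy, dx)
```

With a third point, a plane could be fitted instead, giving the form properties of the animal's back as the paragraph suggests.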
[0048] Another alternative relates to a 3D ToF camera with additional RGB pixels, preferably provided between the ToF pixels. This makes it possible to collect additional colour information in the image, which may assist the control unit in analysing the 2D image, for example by the fact that determined object candidates can be excluded or by the fact that it is easy to determine that two or more candidates belong together due to matching colour or the like.
[0049] Finally, it is pointed out here that the position-determining device in the illustrated example is intended to determine the animal's position. However, it is also possible to determine the position of other objects. This may generally concern a stall or enclosure in which an entity (object, animal, human) with freedom of movement may be present. Consideration may be given to (trap) cages in a zoo or in the wild, to spaces in which people may end up, etc.
[0050] A specific example which is mentioned here relates to the milking device as illustrated in
[0051] The described embodiment and the alternatives mentioned are not intended to be limitative. The scope of protection is determined by the attached claims.