TOOL-PICKUP SYSTEM, METHOD, COMPUTER PROGRAM AND NON-VOLATILE DATA CARRIER
20220015326 · 2022-01-20
CPC classification
G06T1/0014
PHYSICS
B25J15/0052
PERFORMING OPERATIONS; TRANSPORTING
A01J5/007
HUMAN NECESSITIES
B25J11/0045
PERFORMING OPERATIONS; TRANSPORTING
G06T7/30
PHYSICS
A01J7/02
HUMAN NECESSITIES
H04N23/695
ELECTRICITY
International classification
A01J5/007
HUMAN NECESSITIES
A01J7/02
HUMAN NECESSITIES
B25J11/00
PERFORMING OPERATIONS; TRANSPORTING
B25J13/08
PERFORMING OPERATIONS; TRANSPORTING
G06T7/30
PHYSICS
Abstract
Tools in an automatic milking arrangement are picked up by using a robotic arm (110). The robotic arm (110) moves a camera (130) to an origin location (PC) from which the camera (130) registers three-dimensional image data (Dimg3D) of at least one tool (141, 142, 143, 144). The three-dimensional image data are processed using an image-based object identification algorithm to identify objects in the form of tools and hoses (152). In response to identifying at least one tool, a respective tool position (PT1, PT3, PT4) is determined for each identified tool based on the origin location (PC) and the three-dimensional image data. Then, a grip device (115) is exclusively controlled to one or more of the respective tool positions (PT1, PT3, PT4) to perform a pick-up operation. Thus, futile attempts to pick up non-existing or blocked tools are avoided.
Claims
1. A tool-pickup system for an automatic milking arrangement, the tool-pickup system comprising: a robotic arm (110) provided with a grip device (115) configured to pick up tools (141, 142, 143, 144), and a camera (130) configured to register three-dimensional image data (Dimg3D); and a control unit (120) operatively connected to the robotic arm, the control unit (120) configured to: control the robotic arm (110) to move the camera (130) to an origin location (PC) from which at least one tool of the tools (141, 142, 143, 144) is expected to be visible within a view field (VF) of the camera (130), obtain three-dimensional image data (Dimg3D) registered by the camera (130) at the origin location (PC), process the three-dimensional image data (Dimg3D) using an image-based object identification algorithm to identify objects in the form of the tools (141, 143, 144) and/or hoses (152), and in response to identifying at least one of the tools (141, 143, 144): i) determine a respective tool position (PT1, PT3, PT4) for each identified tool (141, 143, 144) based on the origin location (PC) and the three-dimensional image data (Dimg3D), and ii) exclusively control the grip device (115) to one or more of the respective tool positions (PT1, PT3, PT4) to perform a pick-up operation.
2. The tool-pickup system according to claim 1, wherein the control unit (120) is further configured to produce an alert (A) in response to identifying at least one hose (152) at a position where, in a current stage of a procedure executed by the automatic milking arrangement, one of said tools (141, 142, 143, 144) should be present.
3. The tool-pickup system according to claim 1, wherein the control unit (120) is configured to process the three-dimensional image data (Dimg3D) by searching for the tools (141, 142, 143, 144) in at least one predefined volume (L, R; 441, 442) within the view field (VF).
4. The tool-pickup system according to claim 3, wherein the at least one predefined volume comprises a respective line (L) or arc for each of the tools (141, 142, 143, 144) along which respective line (L) expected tool positions (PET1, PET2, PET3, PET4) are defined within a range (R).
5. The tool-pickup system according to claim 3, wherein the at least one predefined volume comprises a respective area (441; 442) for each of the tools (141, 142) within which respective area (441; 442) expected tool positions (PET1, PET2) are defined.
6. The tool-pickup system according to claim 3, wherein, after having controlled the grip device (115) to perform a pick-up operation at a particular one of said tool positions (PT1, PT3, PT4), the control unit (120) is configured to exclude the predefined volume for said particular one tool position from a subsequent search for at least one remaining tool of said tools in the three-dimensional image data (Dimg3D).
7. The tool-pickup system according to claim 1, wherein the tools comprise at least one of: one or more teatcups and one or more cleaning cups.
8. A method for picking up tools in an automatic milking arrangement, the method comprising: controlling a robotic arm (110) to move a camera (130) arranged on the robotic arm (110) to an origin location (PC) from which at least one tool of the tools (141, 142, 143, 144) is expected to be visible within a view field (VF) of the camera (130); using the camera (130) to register three-dimensional image data (Dimg3D) within the view field (VF) of the camera (130); obtaining the three-dimensional image data (Dimg3D) registered by the camera (130) at the origin location (PC); processing the three-dimensional image data (Dimg3D) using an image-based object identification algorithm to identify objects in the form of tools (141, 143, 144) and/or hoses (152), and in response to identifying at least one tool of the tools (141, 143, 144): i) determining a respective tool position (PT1, PT3, PT4) for each identified tool (141, 143, 144) based on the origin location (PC) and the three-dimensional image data (Dimg3D), and ii) exclusively controlling a grip device (115) on the robotic arm (110) to one or more of the respective tool positions (PT1, PT3, PT4) to perform a pick-up operation of each respective identified tool (141, 143, 144).
9. The method according to claim 8, comprising: producing an alert (A) in response to identifying at least one hose (152) at a position where, in a current stage of a procedure executed by the automatic milking arrangement, one of said tools (141, 142, 143, 144) should be present.
10. The method according to claim 8, comprising: processing the three-dimensional image data (Dimg3D) by searching for each of the tools (141, 142, 143, 144) in at least one predefined volume (L, R; 441, 442) within the view field (VF).
11. The method according to claim 10, wherein the at least one predefined volume is represented by a respective line (L) or arc for each of the tools (141, 142, 143, 144) along which respective line (L) expected tool positions (PET1, PET2, PET3, PET4) are defined within a range (R).
12. The method according to claim 10, wherein the at least one predefined volume is represented by a respective area (441; 442) for each of the tools (141, 142) within which area (441; 442) expected tool positions (PET1, PET2) are defined.
13. The method according to claim 10, wherein, after having controlled the grip device (115) to perform a pick-up operation at a particular one of said tool positions (PT1, PT3, PT4), the method comprises excluding the predefined volume for said particular one tool position from a subsequent search for at least one remaining tool of said tools in the three-dimensional image data (Dimg3D).
14. The method according to claim 8, wherein the tools comprise at least one of: one or more teatcups and one or more cleaning cups.
15. A non-transitory data carrier (126) containing a computer program (127) loadable into a processing unit (125), the computer program (127) comprising software which, when executed by the processing unit (125), causes the processing unit (125) to perform the method according to claim 8.
16. (canceled)
17. The tool-pickup system according to claim 4, wherein, after having controlled the grip device (115) to perform a pick-up operation at a particular one of said tool positions (PT1, PT3, PT4), the control unit (120) is configured to exclude the predefined volume for said particular one tool position from a subsequent search for at least one remaining tool of said tools in the three-dimensional image data (Dimg3D).
18. The tool-pickup system according to claim 5, wherein, after having controlled the grip device (115) to perform a pick-up operation at a particular one of said tool positions (PT1, PT3, PT4), the control unit (120) is configured to exclude the predefined volume for said particular one tool position from a subsequent search for at least one remaining tool of said tools in the three-dimensional image data (Dimg3D).
19. The tool-pickup system according to claim 1, wherein the tools comprise at least one teatcup and at least one cleaning cup.
20. The method according to claim 11, wherein, after having controlled the grip device (115) to perform a pick-up operation at a particular one of said tool positions (PT1, PT3, PT4), the method comprises excluding the predefined volume for said particular one tool position from a subsequent search for at least one remaining tool of said tools in the three-dimensional image data (Dimg3D).
21. The method according to claim 12, wherein, after having controlled the grip device (115) to perform a pick-up operation at a particular one of said tool positions (PT1, PT3, PT4), the method comprises excluding the predefined volume for said particular one tool position from a subsequent search for at least one remaining tool of said tools in the three-dimensional image data (Dimg3D).
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The invention is now to be explained more closely by means of preferred embodiments, which are disclosed as examples, and with reference to the attached drawings.
DETAILED DESCRIPTION
[0020] The tool-pickup system contains a robotic arm 110 and a control unit 120. The robotic arm 110, in turn, is provided with a grip device 115 configured to pick up tools, and a camera 130 configured to register three-dimensional image data D.sub.img3D.
[0021] The control unit 120 is arranged to control the robotic arm 110 to move the camera 130 to an origin location P.sub.C from which at least one tool is expected to be visible within a view field VF of the camera 130. In each point in time, the control unit 120 has accurate information about the exact location of the origin location P.sub.C, e.g. via a control system for the robotic arm 110.
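The requirement that the control unit always has accurate information about the origin location P.sub.C can be illustrated with a minimal Python sketch. This is not from the patent; the class and attribute names (`Pose`, `ControlUnit`, `move_camera_to`) are hypothetical, and a real system would command the robotic arm's joints rather than simply record a pose.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Camera origin location P_C expressed in the arm's base frame."""
    x: float
    y: float
    z: float

class ControlUnit:
    def __init__(self) -> None:
        # The control unit tracks the camera pose at every point in time.
        self.camera_pose = Pose(0.0, 0.0, 0.0)

    def move_camera_to(self, target: Pose) -> Pose:
        # In a real arrangement this would drive the robotic arm via its
        # control system; here we only record the commanded pose so that
        # P_C is always exactly known to the control unit.
        self.camera_pose = target
        return self.camera_pose
```

The key design point is that P.sub.C is obtained from the arm's own control system rather than estimated, so tool positions derived from it inherit the arm's positioning accuracy.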
[0023] Instead, according to one embodiment of the invention, in response to identifying the hose 152 at a position where a tool should be present in a current stage of the procedure being executed by the automatic milking arrangement, the control unit 120 is configured to produce an alert A.
[0025] The expected tool positions P.sub.ET1, P.sub.ET2, P.sub.ET3 and P.sub.ET4 may be represented by the space coordinates for a particular point on the tool in question, i.e. 141, 142, 143 and 144 respectively. The particular point is preferably a well-defined point on the tool, such as an intersection between a symmetry center C1, C3 or C4 of a teatcup body and a liner's edge to the teatcup body. The position for the particular point may be calculated based on the origin location P.sub.C and data, e.g. a space vector, expressing a distance in three dimensions from the origin location P.sub.C to the particular point.
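The position calculation in paragraph [0025], where a tool's reference point is the origin location P.sub.C plus a three-dimensional displacement (a space vector) derived from the image data, can be sketched as follows. This is an illustrative Python fragment, not the patented implementation; the function name and argument names are assumptions.

```python
import numpy as np

def tool_position(origin_pc, offset_vector):
    """Compute a tool's reference point P_T as the camera origin P_C
    plus a 3-D displacement (space vector) measured from the depth
    image, both expressed in the same coordinate frame."""
    return np.asarray(origin_pc, dtype=float) + np.asarray(offset_vector, dtype=float)
```

For example, a camera at `[1.0, 2.0, 3.0]` observing a teatcup reference point displaced by `[0.1, -0.2, 0.5]` would yield a tool position of `[1.1, 1.8, 3.5]`.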
[0026] Preferably, the control unit 120 is configured to process the three-dimensional image data D.sub.img3D by searching for the tools 141, 142, 143 and 144 in at least one predefined volume within the view field VF of the camera 130.
[0027] Here, the at least one predefined volume contains a respective line L for each of the tools 141, 142, 143 and 144, along which respective line L the expected tool positions P.sub.ET1, P.sub.ET2, P.sub.ET3 and P.sub.ET4 are defined within a range R from a closest expected tool position to a furthest expected tool position. This definition of the at least one predefined volume is advantageous if the tools 141, 142, 143 and 144 are placed in a milking stall on a rotary milking parlor. Namely, in such a case, the lateral position may vary somewhat in a linear manner depending on where the milking parlor stops in relation to the milking robot and its arm 110. In fact, in the rotary-milking-parlor case, the variation will be along an arc of very long radius. This is, of course, also true if the milking parlor never stops, i.e. rotates continuously. However, in practice, the arc shape can often be approximated to the straight line L.
[0029] For improved efficiency, after having controlled the grip device 115 to perform a pick-up operation at a particular one of the tool positions P.sub.ET1, P.sub.ET2, P.sub.ET3 and P.sub.ET4, the control unit 120 is preferably configured to exclude the predefined volume for that particular tool position from a subsequent search for at least one remaining tool in the three-dimensional image data D.sub.img3D. Namely, after a certain tool has been removed, for example from the rack 150, the corresponding tool position in the rack 150 should be empty, and it is therefore meaningless to search for tools there.
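The exclusion of already-emptied search volumes described in paragraph [0029] is, in essence, bookkeeping over a collection of per-tool volumes. A minimal Python sketch follows; the data layout (a dict keyed by hypothetical tool identifiers) is an assumption for illustration only.

```python
def remaining_search_volumes(volumes, picked_tool_ids):
    """Drop the predefined volume of every tool that has already been
    picked up, so that subsequent searches in the 3-D image data skip
    the now-empty rack positions."""
    return {tool_id: vol for tool_id, vol in volumes.items()
            if tool_id not in picked_tool_ids}
```

For example, after tool `"141"` has been picked up, only the volumes for the remaining tools are searched in the next pass.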
[0030] However, any detected hoses, e.g. 152, at a position from which a tool has already been removed may serve as a reference object confirming the fact that the tool in question has indeed been picked up by the grip device 115.
[0031] It is generally advantageous if the control unit 120 is configured to effect the above-described procedure automatically by executing a computer program 127. Therefore, the control unit 120 may include a memory unit 126, i.e. a non-volatile data carrier, storing the computer program 127, which, in turn, contains software that causes processing circuitry in the form of at least one processor 125 in the control unit 120 to execute the above-described actions when the computer program 127 is run on the at least one processor 125.
[0032] To sum up, the general procedure is now described with reference to the flow diagram.
[0033] In a first step 510, three-dimensional image data are obtained, which have been registered by a camera at an origin location P.sub.C to which the camera has been controlled by a robotic arm. At the origin location, at least one tool is expected to be visible within a view field of the camera.
[0034] In a subsequent step 520, the three-dimensional image data are processed using an image-based object identification algorithm to identify objects in the form of tools and/or hoses.
[0035] Thereafter, a step 530 checks whether at least one tool has been identified in the three-dimensional image data. If so, a step 540 follows; otherwise, the procedure loops back to step 510 to obtain updated data.
[0036] In step 540, a respective tool position is determined for each identified tool based on the origin location and the three-dimensional image data. Here, the respective tool position may be represented by the space coordinates for a particular point on the tool in question. The position for the particular point can for example be calculated based on the origin location and data, e.g. a space vector, expressing a distance in three dimensions from the origin location to the particular point. The particular point, in turn, is preferably a well-defined point on the tool, such as an intersection between a symmetry center of a teatcup body and a liner's edge to the teatcup body.
[0037] Subsequently, in a step 550, a grip device on the robotic arm is controlled to perform a pick-up operation at the respective tool position(s) where tool(s) has/have been identified. However, the grip device is not controlled to any other positions to perform any pick-up operations.
[0038] Then, the procedure loops back to step 510.
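Steps 510 through 550 can be sketched as a single pass of a control loop. The Python fragment below is an illustrative outline only, not the patented implementation: the callback names, the `"kind"` field, and the object representation are all hypothetical, and the real system would act on the robotic arm rather than plain Python functions.

```python
def pickup_cycle(register_image, identify_objects, locate_tool, pick_up):
    """One pass of steps 510-550: obtain 3-D image data, identify tools
    and/or hoses, determine a position for each identified tool, and
    perform pick-up operations exclusively at those positions.
    Returns the list of positions where a pick-up was performed."""
    image = register_image()                             # step 510
    objects = identify_objects(image)                    # step 520
    tools = [o for o in objects if o["kind"] == "tool"]
    if not tools:                                        # step 530: none found,
        return []                                        # caller loops back to 510
    positions = [locate_tool(t, image) for t in tools]   # step 540
    for pos in positions:                                # step 550: pick up only
        pick_up(pos)                                     # at identified positions
    return positions
```

Note that the grip device is driven only to positions where a tool was actually identified, which is precisely how futile pick-up attempts at empty or blocked positions are avoided.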
[0039] All of the process steps, as well as any sub-sequence of steps, described above may be performed automatically by the at least one processor 125 executing the computer program 127.
[0040] The term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components. However, the term does not preclude the presence or addition of one or more additional features, integers, steps or components or groups thereof.
[0041] The invention is not restricted to the described embodiments in the figures, but may be varied freely within the scope of the claims.