Method for checking for completeness
10928184 · 2021-02-23
CPC classification
G06T3/40
PHYSICS
International classification
G06T3/40
PHYSICS
B07C5/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method for checking the completeness of a container provided with a plurality of objects by means of a 3D camera includes providing a three-dimensional image of the objects and a multi-ROI. The multi-ROI includes a plurality of ROIs which have a user-settable shape and are arranged within the multi-ROI in a user-settable number of rows and columns and a user-settable raster type. The method includes a learning mode and a temporally subsequent working mode.
Claims
1. A method for completeness check of a container provided with a plurality of objects by means of a 3D camera which provides a three-dimensional image of the objects and a multi-region of interest, wherein the multi-region of interest includes a plurality of regions of interest comprising a user-settable shape and arranged within the multi-region of interest in a user-settable number of rows, columns and raster type, including a learning mode and a temporally subsequent working mode, wherein the learning mode comprises the following steps: capturing an image of the objects in the form of a three-dimensional pixel matrix by means of the 3D camera, wherein the pixel matrix in one dimension comprises distance values of the detected objects with respect to the 3D camera and in the two other dimensions comprises position values in respective planes perpendicular thereto; reproducing the captured image as a two-dimensional image with the position values in a first image area; displaying input fields for the shape or the number of rows, columns and raster type of the regions of interest; displaying a two-dimensional multi-region of interest with a user-entered number of rows, columns and raster type of the regions of interest and a user-entered shape of the regions of interest in the first image area; adjusting the size, position and skew of the multi-region of interest in response to at least one user input; adjusting the size of the regions of interest in response to at least one user input; displaying height values derived from the distance values for each region of interest in a second image area which is different from the first image area; displaying at least one of a lower limit and an upper limit for the height values in the second image area; adjusting at least one of the lower limit and the upper limit for the height values in response to at least one user input; and switching into the working mode, wherein the working mode comprises the following step: indicating a state of a respective region of interest, wherein the state of the respective region of interest is at least one of the states overfilled, underfilled and good, and wherein the state overfilled is indicated when the height value of the respective region of interest is above the upper limit, the state underfilled is indicated when the height value of the respective region of interest is below the lower limit, and the state good is indicated when the height value is not outside an intended limit.
2. The method according to claim 1, wherein the height values derived from the distance values for each region of interest in the second image area are displayed as lines with lengths corresponding to the respective height value.
3. The method according to claim 2, wherein the lower limit and the upper limit of the height values are displayed as boundaries extending perpendicular to the lines.
4. The method according to claim 3, wherein the boundaries can be shifted by the user by means of a cursor or a finger gesture.
5. The method according to claim 1, wherein in the learning mode when reproducing the captured image as a two-dimensional image with the position values in the first image area the distance data are indicated coded as colors or gray values.
6. The method according to claim 1, wherein in the learning mode the two-dimensional multi-region of interest is automatically generated based on the number of rows and columns input by the user such that all rows have the same row width and all columns have the same column width.
7. The method according to claim 1, wherein the shape of the regions of interest is the same for all regions of interest.
8. The method according to claim 1, wherein in the working mode the states of the respective regions of interest in the first image area are indicated coded as colors or gray values.
9. The method according to claim 1, wherein in the working mode the states of the respective regions of interest are displayed in an image area different from the first image area, preferably in the form of a list.
10. The method according to claim 1, wherein a time of flight camera is used as a 3D camera.
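The working-mode check of claim 1 reduces to a simple per-ROI comparison of the derived height value against the lower and upper limits. The following Python sketch illustrates that logic; all function names are hypothetical, and the patent does not prescribe any particular implementation:

```python
def classify_roi(height: float, lower: float, upper: float) -> str:
    """Classify one region of interest by its measured height value.

    Returns "overfilled" when the height exceeds the upper limit,
    "underfilled" when it falls below the lower limit, and "good"
    when it lies within the intended limits (cf. claim 1).
    """
    if height > upper:
        return "overfilled"
    if height < lower:
        return "underfilled"
    return "good"


def check_container(heights, lower, upper):
    """Map every ROI height value to its state, one entry per ROI."""
    return [classify_roi(h, lower, upper) for h in heights]


# Example: three ROIs of a multi-ROI, limits 8.0 and 11.0.
states = check_container([12.0, 7.5, 10.1], lower=8.0, upper=11.0)
# states == ["overfilled", "underfilled", "good"]
```

The per-ROI states could then be color-coded in the first image area or listed in a separate image area, as claims 8 and 9 describe.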
Description
DRAWINGS
(1) The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
(2) In the drawings:
(15) Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
DETAILED DESCRIPTION
(16) Example embodiments will now be described more fully with reference to the accompanying drawings.
(18) By means of a 3D camera 4, which is arranged above the conveyor belt 1, a respective container 2 provided with objects 3 can be optically detected. Here, the 3D camera 4 provides a three-dimensional image in the form of a three-dimensional pixel matrix, wherein the pixel matrix includes distance values of the detected objects 3 with respect to the 3D camera 4 in one dimension and position values in respective planes perpendicular thereto in the other two dimensions. This three-dimensional image captured by the 3D camera 4 is transmitted to a display device 5 in the form of a screen and can be displayed there, as explained in detail below. According to the preferred embodiment of the disclosure described herein, a keyboard 6 and a pointing device 7, such as a mouse, are connected to the display device 5 in order to enable a user to make inputs. Alternatively, the display device 5 may be configured as a touch display, which makes a keyboard and a mouse dispensable.
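Since the pixel matrix stores camera-to-object distances rather than heights, a height value per region of interest can be derived by subtracting each distance from the mounting height of the 3D camera 4 above the conveyor belt 1. The sketch below illustrates this under assumptions not stated in the patent (a flat belt as reference plane, rectangular ROIs, and the mean as aggregation); all names are hypothetical:

```python
def heights_from_distances(distance_map, mount_height):
    """Convert a 2D matrix of camera-to-object distances into heights.

    distance_map: row-major 2D list of distance values as delivered by
    the 3D camera; the two list axes are the position values, the stored
    value is the distance along the optical axis.
    """
    return [[mount_height - d for d in row] for row in distance_map]


def roi_height(height_map, roi):
    """Aggregate one rectangular ROI (top, left, bottom, right) to a
    single height value; the mean is one plausible choice."""
    top, left, bottom, right = roi
    cells = [v for row in height_map[top:bottom] for v in row[left:right]]
    return sum(cells) / len(cells)


# Example: camera mounted 1.0 m above the belt, 4x4 distance image
# in which every object surface is 0.75 m away from the camera.
distances = [[0.75] * 4 for _ in range(4)]
heights = heights_from_distances(distances, 1.0)
print(roi_height(heights, (0, 0, 2, 2)))  # 0.25
```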
(19) The sequence of a method according to the presently described preferred embodiment of the disclosure is as follows:
(20) First, the learning mode of the method is carried out. For this purpose, in a state where the conveyor belt is switched off, that is to say when the container 2 is stationary, an image 9 of the objects 3 and the container 2, which are located below the 3D camera 4, is captured. The captured image 9 of the objects 3 and the container 2 is then displayed in a first image area 8 on the display device 5, as shown in
(22) In the next step, which can be extracted from
(23) In the next step of the method shown in
(24) In the subsequent step, which is shown in
(25) It can now be seen from
(26) In the present case, which can be seen in
(27) As finally shown in
(32) The remaining elements correspond to those already presented in
(33) The selection options for the raster type are of course not restricted to the examples shown. It is also conceivable to provide other arrangements in the selection menu. In particular, it is also conceivable to provide an "arbitrary" option which, for example, in further method steps, offers the possibility to place the ROIs individually.
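Generating the two-dimensional multi-ROI from the user inputs (number of rows and columns, raster type) can be sketched as follows. The "rectangular" and "offset" raster types, and all names, are illustrative assumptions; the patent only requires equal row and column widths for the automatically generated grid:

```python
def make_multi_roi(x, y, width, height, rows, cols, raster="rectangular"):
    """Return a list of equally sized ROI rectangles (x, y, w, h)
    arranged inside the multi-ROI bounding box.

    raster "rectangular": plain grid with equal row and column widths.
    raster "offset": every other row is shifted by half a column width,
    giving a staggered arrangement.
    """
    cell_w = width / cols
    cell_h = height / rows
    rois = []
    for r in range(rows):
        shift = cell_w / 2 if raster == "offset" and r % 2 else 0.0
        for c in range(cols):
            rois.append((x + c * cell_w + shift, y + r * cell_h,
                         cell_w, cell_h))
    return rois


# Example: a 2x3 multi-ROI spanning a 60 x 40 area.
grid = make_multi_roi(0, 0, 60, 40, rows=2, cols=3)
# 6 ROIs of size 20 x 20 in a plain 2x3 grid
```

Subsequent user inputs adjusting the size, position and skew of the multi-ROI would then transform all generated rectangles together.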
(34) The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.