Detecting a Moving Stream of Objects
20220327798 · 2022-10-13
Inventors
CPC classification
G01B11/04
PHYSICS
H04N23/959
ELECTRICITY
G01S17/86
PHYSICS
G01P3/00
PHYSICS
G06V10/25
PHYSICS
H04N23/90
ELECTRICITY
G01S7/4865
PHYSICS
International classification
G06V10/25
PHYSICS
G01P3/00
PHYSICS
G01S17/86
PHYSICS
G01S7/4865
PHYSICS
G06K7/14
PHYSICS
Abstract
A camera device for detecting a stream of objects moved relative to the camera device is provided that has an image sensor for recording image data of the objects, a geometry detection sensor for measuring the objects, and a control and evaluation unit that is configured to determine at least one region of interest using measured data of the geometry detection sensor to restrict the evaluation of the image data to the region of interest. In this respect, the image sensor has a configuration unit to enable the reading of only a settable portion of the respectively recorded image data; and the control and evaluation unit is configured only to read a portion of the image data from the image sensor that is determined with reference to the region of interest.
Claims
1. A camera device for detecting a stream of objects moved relative to the camera device, wherein the camera device has an image sensor for recording image data of the objects, a geometry detection sensor for measuring the objects, and a control and evaluation unit that is configured to determine at least one region of interest using measured data of the geometry detection sensor to restrict the evaluation of the image data to the region of interest, wherein the image sensor has a configuration unit to enable the reading of only a settable portion of the respectively recorded image data; and with the control and evaluation unit being configured only to read a portion of the image data from the image sensor that is determined with reference to the region of interest.
2. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to adapt at least one of the at least one region of interest and the read portion of the image data between the recordings.
3. The camera device in accordance with claim 1, wherein the control and evaluation unit has a pre-processing unit to read and pre-process image data from the image sensor, with the pre-processing unit being configured such that the reading and pre-processing of the complete image data of a recording of the image sensor require a complete pre-processing time and with the image sensor being operated at a recording frequency that leaves less time between two recordings than the complete pre-processing time.
4. The camera device in accordance with claim 1, wherein the recording frequency is a flexible recording frequency.
5. The camera device in accordance with claim 1, wherein the configuration unit is configured for a selection of image lines.
6. The camera device in accordance with claim 1, wherein the configuration unit is configured for the selection of a rectangular partial region.
7. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured only to read that portion of image data from the image sensor that has been recorded with reference to a region of interest within a depth of field zone of the image sensor.
8. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to determine a suitable depth of field zone for a region of interest outside the depth of field zone and to refocus to the suitable depth of field zone for a following recording.
9. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to determine the at least one region of interest with reference to a depth of field zone of the image sensor.
10. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to identify code regions in the image data and to read their code content.
11. The camera device in accordance with claim 1, wherein the geometry detection sensor is configured as a distance sensor.
12. The camera device in accordance with claim 11, wherein the distance sensor is an optoelectronic distance sensor in accordance with the principle of the time of flight method.
13. The camera device in accordance with claim 1, wherein the geometry detection sensor is arranged integrated with the image sensor in a camera or externally and disposed upstream of the image sensor against the flow to measure the objects before the recording of the image data.
14. The camera device in accordance with claim 1 that has a speed sensor for a determination of the speed of the stream.
15. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to determine the speed of the stream with reference to the measured data of the geometry detection sensor and/or the image data.
16. The camera device in accordance with claim 1, that is installed stationary at a conveying device that conveys the objects in a conveying direction.
17. The camera device in accordance with claim 1 that has at least one image sensor for a recording of the stream from above.
18. The camera device in accordance with claim 1 that has at least one image sensor for a recording of the stream from the side.
19. A method of detecting a moving stream of objects, wherein image data of the objects are recorded by an image sensor, the objects are measured by a geometry detection sensor, and at least one region of interest is determined with reference to measured data of the geometry detection sensor to restrict the evaluation of the image data to the region of interest, wherein only a portion of the image data, determined with reference to the region of interest, is read from the image sensor.
Description
[0031] The invention will be explained in more detail in the following, also with respect to further features and advantages, by way of example with reference to embodiments and to the enclosed drawing.
[0043] To illuminate the detection zone 14 with transmitted light 24 during a recording of the camera 10, the camera 10 comprises an optional illumination unit 26 that is shown in
[0044] In addition to the actual image sensor 20 for detecting image data, the camera 10 has an optoelectronic distance sensor 28 that measures distances from objects in the detection zone 14 using a time of flight (TOF) process. The distance sensor 28 comprises a TOF light transmitter 30 having a TOF transmission optics 32 and a TOF light receiver 34 having a TOF reception optics 36. A TOF light signal 38 is thus transmitted and received again. A time of flight measurement unit 40 determines the time of flight of the TOF light signal 38 and determines from this the distance from an object at which the TOF light signal 38 was reflected back.
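The pulse time of flight principle used by the distance sensor 28 can be illustrated with a minimal sketch (illustrative only and not part of the disclosure; the function name and SI units are assumptions):

```python
# Illustrative model of pulse time-of-flight ranging: the TOF light signal 38
# travels to the object and back, so the distance is half the round-trip
# path covered at the speed of light.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Distance in metres derived from the measured round-trip time."""
    return 0.5 * C * round_trip_time_s
```

A pulse that returns after roughly 6.7 ns thus corresponds to a distance of about one metre.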
[0045] The TOF light receiver 34 preferably has a plurality of light reception elements 34a or pixels and is then spatially resolved. It is therefore not a single distance value that is detected, but rather a spatially resolved height profile (depth map, 3D image). Only a relatively small number of light reception elements 34a and thus a small lateral resolution of the height profile is preferably provided in this process. 2×2 pixels or even only 1×2 pixels can already be sufficient. A more highly laterally resolved height profile having n×m pixels, n, m>2, naturally allows more complex and more accurate evaluations. The number of pixels of the TOF light receiver 34, however, remains comparatively small with, for example, some tens, hundreds, or thousands of pixels or n, m≤10, n, m≤20, n, m≤50, or n, m≤100, far removed from typical megapixel resolutions of the image sensor 20.
[0046] The design and technology of the distance sensor 28 are purely by way of example. In the further description, the distance sensor 28 is treated as an encapsulated module for the geometry measurement that, for example, provides measured data such as a distance value or a height profile cyclically, on detecting an object, or on request. Further measured data are conceivable here, in particular a measurement of the intensity. The optoelectronic distance measurement by means of time of flight processes is known and will therefore not be explained in detail. Two exemplary measurement processes are photomixing detection using a periodically modulated TOF light signal 38 and pulse time of flight measurement using a pulse modulated TOF light signal 38. There are also highly integrated solutions here in which the TOF light receiver 34 is accommodated on a common chip with the time of flight measurement unit 40, or at least parts thereof, for instance TDCs (time to digital converters) for time of flight measurements. In particular, a TOF light receiver 34 that is designed as a matrix of SPAD (single photon avalanche diode) light reception elements 34a is suitable for this purpose. The TOF optics 32, 36 are shown only symbolically as respective individual lenses representative of any desired optics such as a microlens field.
[0047] A control and evaluation unit 42 is connected to the focus adjustment 18, to the image sensor 20 and to its configuration unit 22, to the illumination unit 26, and to the distance sensor 28 and is responsible for the control work, the evaluation work, and for other coordination work in the camera 10. It determines regions of interest using the measured data of the distance sensor 28 and configures the image sensor 20 via its configuration unit 22 in correspondence with the regions of interest. It reads image data of the partial regions configured in this manner from the image sensor 20 and subjects them to further image processing steps. The control and evaluation unit 42 is preferably able to localize and decode code regions in the image data so that the camera 10 becomes a camera-based code reader.
[0048] The reading and first pre-processing steps such as equalization, segmentation, binarization, and the like preferably take place in a pre-processing unit 44 that, for example, comprises at least one FPGA (field programmable gate array). Alternatively, the preferably at least pre-processed image data are output via an interface 46 and the further image processing steps take place in a higher ranking control and evaluation unit, with practically any desired work distributions being conceivable. Further functions can be controlled using the measured data of the distance sensor 28, in particular a desired focus position for the focus adjustment 18 or a trigger time for the image recording can be derived.
[0049] The camera 10 is protected by a housing 48 that is terminated by a front screen 50 in the front region where the received light 12 is incident.
[0051] The field of view of the camera 10 preferably covers the stream of objects 54 in full width and over a certain length. Alternatively, additional cameras 10 are used whose fields of view complement one another to reach the full width. At most a small overlap is preferably provided here. The perspective from above shown is particularly suitable in a number of cases. Alternatively, or in particular in order to also better detect laterally applied code regions 60, additional cameras 10, not shown, are preferably used from different perspectives. Lateral perspectives, but also mixed perspectives obliquely from above or from the side, are possible.
[0052] An encoder, not shown, can be provided at the conveyor belt 52 for determining the advance or the speed. Alternatively, the conveyor belt reliably moves with a known movement profile; corresponding information is transferred from a higher ranking control or the control and evaluation unit determines the speed itself by tracking certain geometrical structures or image features. Geometry information or image data recorded at different points in time and in different conveying positions can be assembled in the conveying direction and associated with each other using the speed information. An association between read code information and the object 54 bearing associated code 58, 60 in particular preferably thus also takes place.
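The association of geometry information recorded upstream with later image recordings by way of the speed information can be sketched as follows (a simplified model, not part of the disclosure; the function names, units, and row-based geometry are assumptions for illustration):

```python
def conveying_offset(speed_m_per_s, dt_s, px_per_m):
    """Image rows by which a structure on the conveyor has advanced
    between two points in time, given the conveying speed."""
    return round(speed_m_per_s * dt_s * px_per_m)

def shift_roi(roi, speed_m_per_s, dt_s, px_per_m, sensor_rows):
    """Displace a (first_row, last_row) region of interest in the
    conveying direction and clip it to the image sensor."""
    off = conveying_offset(speed_m_per_s, dt_s, px_per_m)
    first, last = roi
    return (min(first + off, sensor_rows - 1), min(last + off, sensor_rows - 1))
```

At 2 m/s and 1000 pixels per metre, for example, a region of interest advances by 20 rows within 10 ms.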
[0055] To reduce the image data to be processed from the start, the control and evaluation unit 42 divides its detection zone 14 and corresponding thereto regions of the image sensor 20 into relevant and non-relevant portions using the measured data of the distance sensor 28. A relevant portion here corresponds to a region of interest (ROI). Two differently shaded partial fields of view 70, 72 are shown in
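The division into relevant and non-relevant portions using a coarse height profile can be sketched as follows (illustrative only; the threshold value and the cell-to-row mapping are assumptions, not part of the disclosure):

```python
def roi_from_height_profile(height_profile_m, rows_per_cell, min_height_m=0.01):
    """Derive a contiguous (first_row, last_row) read-out window on the
    image sensor from a coarse height profile: every profile cell whose
    measured height exceeds min_height_m counts as object, everything
    else as background that need not be read at all."""
    hit = [i for i, h in enumerate(height_profile_m) if h > min_height_m]
    if not hit:
        return None  # no object detected: no image data need to be read
    return (hit[0] * rows_per_cell, (hit[-1] + 1) * rows_per_cell - 1)
```

With a five-cell profile in which only the middle cells carry an object, only the corresponding band of image rows is configured for read-out.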
[0057] An exemplary evaluation by which the pixels of the region of interest 74 are located from the measured data of the distance sensor 28 will in turn be explained with reference to
[0058] In the course of the conveying movement, the relevant partial region 70 is displaced in accordance with
[0061] Due to the measured data of the distance sensor 28 and the information on the depth of field zone at a focal position 78, the control and evaluation unit 42 is thus aware of how the focal position 78 has to be changed to record one of the objects 54a-b in focus. As long as there is a focal position 78 with a depth of field zone suitable for all the objects 54a-b, the number of lines to be read can be increased for two or more objects 54 by means of the configuration unit 22 or a further region of interest 74 to be read can be provided on the image sensor 20. A single recording is then possibly sufficient for a plurality of objects 54a-b, with a repeat recording remaining possible just like separate recordings for every object 54a-b.
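The decision of whether a single focal position 78 suffices for a plurality of objects 54a-b can be sketched as follows (a simplified model, not part of the disclosure, in which the depth of field zone is assumed to span ±15% of the focal distance; real optics behave differently):

```python
def depth_of_field(focus_m, rel_zone=0.15):
    """Assumed depth of field zone around a focal position (±15 % model)."""
    return (focus_m * (1.0 - rel_zone), focus_m * (1.0 + rel_zone))

def common_focus(distances_m, rel_zone=0.15):
    """A single focal position whose depth of field covers all object
    distances, or None if a refocused second recording is required."""
    lo, hi = min(distances_m), max(distances_m)
    mid = (lo + hi) / 2.0
    near, far = depth_of_field(mid, rel_zone)
    return mid if near <= lo and hi <= far else None
```

Two objects at 1.0 m and 1.1 m share a focal position under this model, whereas objects at 0.5 m and 1.5 m require separate recordings.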
[0062] In the situation of
[0065] It is thus possible to generate a first recording with a focal position 78 for the higher object 54b and directly thereafter, at least as long as the lower object 54a is still in the detection zone 14, a second recording after a refocusing and thus adaptation of the focal position 78 to the lower objects 54a. The control and evaluation unit 42 is even informed in good time of the described conflict situation due to the measured data of the distance sensor 28 and can plan in advance.
[0066] In an alternative embodiment, the focal position 78 is cyclically changed, for example by a step function or by an oscillation. A plurality of recordings are generated so that the depth of field zones overall cover the total possible distance zone, preferably excluding the conveying plane itself, provided that very flat objects 54 are also not to be expected. Respective regions of interest 74 that are recorded in focus in the current focal position 78 are configured using the measured data of the distance sensor 28. The respective focal position 78 thus determines the regions of interest 74. It is ensured that every structure is recorded in focus and blurry image data are not read at all.
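The configuration of regions of interest matching the current focal position 78 during such a focus sweep can be sketched as follows (illustrative only; the ±15% depth of field model and the mounting of the camera above a flat conveying plane are assumptions, not part of the disclosure):

```python
def in_focus_cells(height_profile_m, camera_height_m, focus_m, rel_zone=0.15):
    """Indices of height profile cells whose object distance from the
    camera lies inside the depth of field of the current focal position;
    only these cells are configured as regions of interest and read out
    in focus."""
    near, far = focus_m * (1.0 - rel_zone), focus_m * (1.0 + rel_zone)
    return [i for i, h in enumerate(height_profile_m)
            if near <= camera_height_m - h <= far]
```

Stepping the focal position through its range and taking the union of the selected cells ensures that every structure is recorded in focus exactly once.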
[0067] In accordance with the invention, the advantages of a large image sensor 20 are thus implemented without thereby triggering a flood of data that can no longer be managed. At the same time, the problem of blurry image data of a plurality of sequential objects 54a-b of greatly different heights in a single large recording is solved. It thus becomes possible to cover the stream of objects 54 solely by an image sensor 20, at least with respect to its perspective, for instance from above or from the side, or at least to cover a portion that is as large as possible.
[0068] Conventionally, in contrast, the pre-processing unit would have to read all the image data and only then, where necessary, discard the image data outside of regions of interest. The pre-processing conventionally already takes place on the fly in a pipeline structure during the reading, so that reading and pre-processing practically cannot be considered separately in their time demands. For an image sensor 20 of high resolution, this requires a processing time of 25 ms, for example, and thus limits the recording frequency or frame rate to 40 Hz. The situation becomes worse with more complex image processing steps; the possible recording frequency drops further. If two objects of very different heights now closely follow one another, a second recording after refocusing may come too late. In accordance with the invention, in contrast, the quantity of image data is reduced from the start so that only relevant image data are read. The reduced data load is already an advantage in itself since resources are thus saved or can be used in a more targeted manner. The camera 10 thus selectively becomes less expensive or more powerful. In addition, the strict limitation of the recording frequency corresponding to a processing time for the complete images is dispensed with. The recording frequency can therefore be flexibly increased, in total or even from case to case. A second recording in good time after refocusing thus also becomes possible in the situation of two objects 54a-b of greatly different heights following closely on one another, as explained with respect to
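The gain in recording frequency from reading only a portion of the image data can be estimated with a simple model (an assumption, not part of the disclosure, that read-out and pipelined pre-processing time scale with the fraction of rows actually read; real sensors add fixed overheads):

```python
def max_frame_rate(full_readout_s, rows_read, total_rows):
    """Achievable recording frequency if read-out and pipelined
    pre-processing time scale with the fraction of rows actually read
    (a simplifying assumption; real sensors add fixed overhead)."""
    return 1.0 / (full_readout_s * rows_read / total_rows)
```

With the 25 ms example from the description, a full-frame read-out limits the frame rate to 40 Hz, while reading only a quarter of the lines would allow roughly 160 Hz under this model.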