Detecting a Moving Stream of Objects

20220327798 · 2022-10-13

    Abstract

    A camera device for detecting a stream of objects moved relative to the camera device is provided that has an image sensor for recording image data of the objects, a geometry detection sensor for measuring the objects, and a control and evaluation unit that is configured to determine at least one region of interest using measured data of the geometry detection sensor to restrict the evaluation of the image data to the region of interest. In this respect, the image sensor has a configuration unit to enable the reading of only a settable portion of the respectively recorded image data; and the control and evaluation unit is configured only to read a portion of the image data from the image sensor that is determined with reference to the region of interest.

    Claims

    1. A camera device for detecting a stream of objects moved relative to the camera device, wherein the camera device has an image sensor for recording image data of the objects, a geometry detection sensor for measuring the objects, and a control and evaluation unit that is configured to determine at least one region of interest using measured data of the geometry detection sensor to restrict the evaluation of the image data to the region of interest, wherein the image sensor has a configuration unit to enable the reading of only a settable portion of the respectively recorded image data; and with the control and evaluation unit being configured only to read a portion of the image data from the image sensor that is determined with reference to the region of interest.

    2. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to adapt at least one of the at least one region of interest and the read portion of the image data between the recordings.

    3. The camera device in accordance with claim 1, wherein the control and evaluation unit has a pre-processing unit to read and pre-process image data from the image sensor, with the pre-processing unit being configured such that the reading and pre-processing of the complete image data of a recording of the image sensor require a complete pre-processing time and with the image sensor being operated at a recording frequency that leaves less time between two recordings than the complete pre-processing time.

    4. The camera device in accordance with claim 3, wherein the recording frequency is a flexible recording frequency.

    5. The camera device in accordance with claim 1, wherein the configuration unit is configured for a selection of image lines.

    6. The camera device in accordance with claim 1, wherein the configuration unit is configured for the selection of a rectangular partial region.

    7. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured only to read that portion of image data from the image sensor that has been recorded with reference to a region of interest within a depth of field zone of the image sensor.

    8. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to determine a suitable depth of field zone for a region of interest outside the depth of field zone and to refocus to the suitable depth of field zone for a following recording.

    9. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to determine the at least one region of interest with reference to a depth of field zone of the image sensor.

    10. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to identify code regions in the image data and to read their code content.

    11. The camera device in accordance with claim 1, wherein the geometry detection sensor is configured as a distance sensor.

    12. The camera device in accordance with claim 11, wherein the distance sensor is an optoelectronic distance sensor in accordance with the principle of the time of flight method.

    13. The camera device in accordance with claim 1, wherein the geometry detection sensor is integrated with the image sensor in a camera or is arranged externally and disposed upstream of the image sensor with respect to the stream to measure the objects before the recording of the image data.

    14. The camera device in accordance with claim 1 that has a speed sensor for a determination of the speed of the stream.

    15. The camera device in accordance with claim 1, wherein the control and evaluation unit is configured to determine the speed of the stream with reference to the measured data of the geometry detection sensor and/or the image data.

    16. The camera device in accordance with claim 1, that is installed stationary at a conveying device that conveys the objects in a conveying direction.

    17. The camera device in accordance with claim 1 that has at least one image sensor for a recording of the stream from above.

    18. The camera device in accordance with claim 1 that has at least one image sensor for a recording of the stream from the side.

    19. A method of detecting a moving stream of objects, wherein image data of the objects are recorded by an image sensor, the objects are measured by a geometry detection sensor, and at least one region of interest is determined with reference to measured data of the geometry detection sensor to restrict the evaluation of the image data to the region of interest, wherein the image sensor is configured such that only a portion of the image data determined with reference to the region of interest is read from the image sensor.

    Description

    [0031] The invention will be explained in more detail in the following, also with respect to further features and advantages, by way of example with reference to embodiments and to the enclosed drawing. The figures of the drawing show:

    [0032] FIG. 1 a schematic sectional representation of a camera with an integrated distance sensor;

    [0033] FIG. 2 a three-dimensional view of an exemplary use of the camera in an installation at a conveyor belt;

    [0034] FIG. 3 a three-dimensional view of an alternative embodiment with an installation of a camera and an external distance sensor at a conveyor belt;

    [0035] FIG. 4 a schematic sectional representation of fields of vision of the distance sensor and of the camera;

    [0036] FIG. 5 a schematic representation of an image sensor with image lines corresponding to a region of interest configured for reading;

    [0037] FIG. 6 a schematic representation similar to FIG. 5 with an additional configuration of pixels to be read also within image lines;

    [0038] FIG. 7 a schematic sectional representation of the detection of two objects with a limited depth of field zone;

    [0039] FIG. 8 a schematic representation of an image sensor with a partial region corresponding to a region of interest configured for reading in the depth of field zone in accordance with FIG. 7;

    [0040] FIG. 9 a schematic sectional representation of the detection of two objects with a depth of field zone changed with respect to FIG. 7; and

    [0041] FIG. 10 a schematic representation similar to FIG. 8, but now with a configured partial region corresponding to the region of interest disposed in the depth of field zone in FIG. 9.

    [0042] FIG. 1 shows a schematic sectional representation of a camera 10. Received light 12 from a detection zone 14 is incident on a reception optics 16 with a focus adjustment 18 that conducts the received light 12 to an image sensor 20. The optical elements of the reception optics 16 are preferably configured as an objective composed of a plurality of lenses and other optical elements such as diaphragms, prisms, and the like, but here only represented by a lens for reasons of simplicity. The focus adjustment 18 is only shown purely schematically and can, for example, be implemented by a mechanical movement of elements of the reception optics 16 or of the image sensor 20, a moving deflection mirror, or a liquid lens. An actuator system is based, for example, on a motor, a moving coil, or a piezoelectric element. The image sensor 20 preferably has a matrix arrangement of pixel elements having a high resolution in the order of magnitude of megapixels, for example twelve megapixels. A configuration unit 22 enables a configuration of the reading logic of the image sensor 20 and thus a dynamically adjustable selection of pixel lines or pixel zones that are read from the image sensor 20.

    [0043] To illuminate the detection zone 14 with transmitted light 24 during a recording of the camera 10, the camera 10 comprises an optional illumination unit 26 that is shown in FIG. 1 in the form of a simple light source and without a transmission optics. In other embodiments, a plurality of light sources such as LEDs or laser diodes are arranged around the reception path, in ring form, for example, and can also be multi-color and controllable in groups or individually to adapt parameters of the illumination unit 26 such as its color, intensity, and direction.

    [0044] In addition to the actual image sensor 20 for detecting image data, the camera 10 has an optoelectronic distance sensor 28 that measures distances from objects in the detection zone 14 using a time of flight (TOF) process. The distance sensor 28 comprises a TOF light transmitter 30 having a TOF transmission optics 32 and a TOF light receiver 34 having a TOF reception optics 36. A TOF light signal 38 is thus transmitted and received again. A time of flight measurement unit 40 determines the time of flight of the TOF light signal 38 and from it the distance from an object at which the TOF light signal 38 was reflected back.
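
    The underlying relation is simple and can be illustrated by a minimal Python sketch; the names and the example value are illustrative only and not taken from the description:

```python
# Minimal sketch of a pulse time of flight measurement, assuming the
# round-trip time "tof_seconds" has already been measured, e.g. by a TDC.
C = 299_792_458.0  # speed of light in m/s

def distance_from_tof(tof_seconds: float) -> float:
    # The light travels to the object and back, so the one-way
    # distance is half the round-trip path.
    return C * tof_seconds / 2.0

# A round-trip time of 10 ns corresponds to roughly 1.5 m.
print(distance_from_tof(10e-9))
```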

    [0045] The TOF light receiver 34 preferably has a plurality of light reception elements 34a or pixels and is then spatially resolved. It is therefore not a single distance value that is detected, but rather a spatially resolved height profile (depth map, 3D image). Only a relatively small number of light reception elements 34a and thus a small lateral resolution of the height profile is preferably provided in this process. 2×2 pixels or even only 1×2 pixels can already be sufficient. A more highly laterally resolved height profile having n×m pixels, n, m>2, naturally allows more complex and more accurate evaluations. The number of pixels of the TOF light receiver 34, however, remains comparatively small with, for example, some tens, hundreds, or thousands of pixels or n, m≤10, n, m≤20, n, m≤50, or n, m≤100, far removed from typical megapixel resolutions of the image sensor 20.

    [0046] The design and technology of the distance sensor 28 are purely by way of example. In the further description, the distance sensor 28 is treated as an encapsulated module for the geometry measurement that, for example, provides measured data such as a distance value or a height profile cyclically, on detecting an object, or on request. Further measured data are conceivable here, in particular a measurement of the intensity. The optoelectronic distance measurement by means of time of flight processes is known and will therefore not be explained in detail. Two exemplary measurement processes are photomixing detection using a periodically modulated TOF light signal 38 and pulse time of flight measurement using a pulse modulated TOF light signal 38. There are also highly integrated solutions here in which the TOF light receiver 34 is accommodated on a common chip with the time of flight measurement unit 40 or at least parts thereof, for instance TDCs (time to digital converters) for time of flight measurements. In particular a TOF light receiver 34 is suitable for this purpose that is designed as a matrix of SPAD (single photon avalanche diode) light reception elements 34a. The TOF optics 32, 36 are shown only symbolically as respective individual lenses representative of any desired optics such as a microlens field.

    [0047] A control and evaluation unit 42 is connected to the focus adjustment 18, to the image sensor 20 and to its configuration unit 22, to the illumination unit 26, and to the distance sensor 28 and is responsible for the control work, the evaluation work, and for other coordination work in the camera 10. It determines regions of interest using the measured data of the distance sensor 28 and configures the image sensor 20 via its configuration unit 22 corresponding to the regions of interest. It reads image data of the partial regions configured in this manner from the image sensor 20 and subjects them to further image processing steps. The control and evaluation unit 42 is preferably able to localize and decode code regions in the image data so that the camera 10 becomes a camera-based code reader.
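
    This interplay can be summarized, purely for illustration, in the following Python sketch; every class, method, and helper name is a hypothetical stand-in for the roles of the components described above, not an actual API:

```python
# Illustrative orchestration only; all names here are hypothetical.
def rows_of_interest(height_profile, row_scale=100, min_height=0.01):
    # Map occupied bins of a coarse height profile to image rows; the linear
    # bin-to-row scale stands in for the real trigonometric conversion and
    # an object is assumed to be present.
    occupied = [i for i, h in enumerate(height_profile) if h > min_height]
    return occupied[0] * row_scale, (occupied[-1] + 1) * row_scale - 1

def process_frame(distance_sensor, configuration_unit, image_sensor, decoder):
    profile = distance_sensor.latest_measurement()       # geometry data
    first_row, last_row = rows_of_interest(profile)      # region of interest
    configuration_unit.select_rows(first_row, last_row)  # restrict readout
    image = image_sensor.read_configured_region()        # only ROI pixels
    return [code.content for code in decoder.find_codes(image)]
```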

    [0048] The reading and first pre-processing steps such as equalization, segmentation, binarization, and the like preferably take place in a pre-processing unit 44 that, for example, comprises at least one FPGA (field programmable gate array). Alternatively, the preferably at least pre-processed image data are output via an interface 46 and the further image processing steps take place in a higher ranking control and evaluation unit, with practically any desired work distributions being conceivable. Further functions can be controlled using the measured data of the distance sensor 28, in particular a desired focus position for the focus adjustment 18 or a trigger time for the image recording can be derived.

    [0049] The camera 10 is protected by a housing 48 that is terminated by a front screen 50 in the front region where the received light 12 is incident.

    [0050] FIG. 2 shows a possible use of the camera 10 in an installation at a conveyor belt 52. The camera 10 is shown here and in the following only as a symbol and no longer with its structure already explained with reference to FIG. 1; only the distance sensor 28 is still shown as a functional block. The conveyor belt 52 conveys objects 54, as indicated by the arrow 56, through the detection zone 14 of the camera 10. The objects 54 can bear code regions 58 on their outer surfaces. It is the object of the camera 10 to detect properties of the objects 54 and, in a preferred use as a code reader, to recognize the code regions 58, to read and decode the codes affixed there, and to associate them with the respective associated object 54.

    [0051] The field of view of the camera 10 preferably covers the stream of objects 54 in full width and over a certain length. Alternatively, additional cameras 10 are used whose fields of view complement one another to reach the full width. At most a small overlap is preferably provided here. The perspective from above shown is particularly suitable in a number of cases. Alternatively, or in particular also in order to better detect laterally applied code regions 60, additional cameras 10, not shown, are preferably used from different perspectives. Lateral perspectives, but also mixed perspectives obliquely from above or from the side, are possible.

    [0052] An encoder, not shown, can be provided at the conveyor belt 52 for determining the advance or the speed. Alternatively, the conveyor belt reliably moves with a known movement profile, corresponding information is transferred from a higher ranking control, or the control and evaluation unit 42 determines the speed itself by tracking certain geometrical structures or image features. Geometry information or image data recorded at different points in time and in different conveying positions can be assembled in the conveying direction and associated with each other using the speed information. In particular an association between read code information and the object 54 bearing the associated code 58, 60 preferably also takes place in this manner.
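
    The conversion of a position measured upstream into the camera's frame of reference using the speed information might look as follows; this is a minimal sketch with assumed names, not the patented implementation:

```python
# A small sketch of the conversion described above; "sensor_offset" is the
# fixed distance between geometry sensor and camera along the belt and,
# like all names here, an assumption for illustration.
def position_at_camera(x_measured: float, t_measured: float, t_now: float,
                       speed: float, sensor_offset: float) -> float:
    """Advance a position measured upstream at t_measured to the camera's
    coordinate frame at t_now using the conveying speed."""
    return x_measured + speed * (t_now - t_measured) - sensor_offset
```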

    [0053] FIG. 3 shows a three-dimensional view of an alternative embodiment of a device with the camera 10 at a conveyor belt 52. Instead of an internal distance sensor 28 or complementary thereto, an external geometry detection sensor 62, for example a laser scanner, arranged upstream against the conveying direction is provided here. As already explained, the measured data of the geometry detection sensor 62 can be converted to the position of the camera 10 on the basis of speed information. The now following description with reference to an internal distance sensor 28 can therefore be transferred to the situation with an external geometry detection sensor 62 without this having to be mentioned separately.

    [0054] FIG. 4 shows a schematic sectional view of the camera 10 above an object stream that is only represented by a single object 54 here. The optical axis 64 of the distance sensor 28 is at an angle to the optical axis 66 of the camera 10. The field of view 68 of the distance sensor 28 is therefore arranged upstream of the field of view or detection zone 14 of the camera 10. The distance sensor 28 thus perceives the objects 54 a little earlier and its measured data are already available at the time of the recording.

    [0055] To reduce the image data to be processed from the start, the control and evaluation unit 42 divides its detection zone 14, and corresponding thereto regions of the image sensor 20, into relevant and non-relevant portions using the measured data of the distance sensor 28. A relevant portion here corresponds to a region of interest (ROI). Two differently shaded partial fields of view 70, 72 are shown in FIG. 4: a darker relevant partial field of view 70 with the object 54 and a brighter non-relevant partial field of view 72, here in two parts, without an object 54.

    [0056] FIG. 5 shows the corresponding division in a schematic plan view of the image sensor 20. The pixels in the boldly framed lines of the region of interest 74 correspond to the relevant partial field of view 70; the other pixels of the regions 76 of no interest correspond to the non-relevant partial field of view 72. The lines belonging to the region of interest 74 are selected by the control and evaluation unit 42 via the configuration unit 22 and only the image data of these pixels are read and further processed.

    [0057] An exemplary evaluation by which the pixels of the region of interest 74 are located from the measured data of the distance sensor 28 will in turn be explained with reference to FIG. 4. The distance sensor 28 detects the object 54 having the height h for a first time at a time t. A trigger time t1 at which the object 54 will have moved into the detection zone 14 is determined using the relative position and pose of the distance sensor 28 with respect to the image sensor 20 and the conveying speed. The length of the object 54 is determined from the measured data of the distance sensor 28 up to the time t1. The distance sensor 28 is, for example, operated at a repetition rate f; the length of the object 54 then corresponds to the number of detections at this repetition rate. It must be remembered that positions and lengths in the conveying direction can be directly converted into lines via the conveying speed. With knowledge of the respective positions and poses of the camera 10 and of the distance sensor 28 or of the external geometry detection sensor 62, the object length determined by means of the distance sensor 28 can thus be trigonometrically converted into associated image lines on the image sensor 20 while taking the object height into consideration.
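
    A hedged sketch of these two conversions, with a simple pinhole model looking straight down and invented parameter values that are not taken from the patent, could look like this:

```python
# Hedged sketch: a pinhole camera looks straight down from height H; all
# parameter names and example values are assumptions for illustration.
def object_length(n_detections: int, repetition_rate_hz: float,
                  speed_m_per_s: float) -> float:
    # Each detection of the distance sensor covers speed / f metres of
    # belt travel, so the detection count yields the object length.
    return n_detections * speed_m_per_s / repetition_rate_hz

def edge_to_image_row(x_offset_m: float, object_height_m: float,
                      camera_height_m: float, focal_px: float,
                      center_row: int) -> int:
    # An edge at height h and horizontal offset x from the optical axis
    # projects to a row offset of f * x / (H - h) pixels.
    return center_row + round(focal_px * x_offset_m /
                              (camera_height_m - object_height_m))

# Example: a 0.3 m high object whose front edge is 0.1 m and whose rear
# edge is 0.5 m ahead of the optical axis, camera 2 m above the belt.
r_front = edge_to_image_row(0.1, 0.3, 2.0, 3000.0, 1500)
r_rear = edge_to_image_row(0.5, 0.3, 2.0, 3000.0, 1500)
print(r_front, r_rear)  # image rows spanned by the object top
```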

    [0058] In the course of the conveying movement, the relevant partial field of view 70 is displaced against the conveying direction in accordance with FIG. 4, or the region of interest 74 moves downward on the image sensor 20 in FIG. 5. A first recording can be coordinated in time such that the front edge of the object 54 lies at the margin of the detection zone 14 and thus in the uppermost line of the image sensor 20. The image sensor 20 can be repeatedly dynamically reconfigured to record the object 54 or any other structure of interest such as a code region 58, 60 on the object multiple times.

    [0059] FIG. 6 shows an alternative division of the pixels of the image sensor 20 into regions to be read and not to be read. Unlike FIG. 5, the condition that the region of interest 74 may only comprise whole lines is dispensed with here. The image data to be read and processed are thereby reduced still further. The configuration unit 22 is accordingly more flexible and also allows the exclusion of pixels within the lines. This preferably still does not mean an individual pixel selection, which would lead to too high a switching effort, but rather the possibility of selecting rectangular partial regions as shown. To be able to make a sensible selection of pixels within the line, and thus transversely to the stream of objects 54, the distance sensor 28 should preferably provide a lateral resolution so that a contour of the objects 54 resolved in the conveying direction and transversely thereto is successively available.
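
    A minimal sketch of such a column selection from a coarse lateral height profile, under the assumption of a hypothetical linear bin-to-column scale, might look like this:

```python
# Sketch under stated assumptions: the distance sensor delivers a coarse
# lateral height profile, and an assumed linear scale maps profile bins to
# pixel columns so that only occupied columns are read.
def rectangular_roi(profile, min_height=0.01, px_per_bin=200):
    """Return (first_col, last_col) of the pixel columns to read, or None
    if no bin of the height profile shows an object."""
    occupied = [i for i, h in enumerate(profile) if h > min_height]
    if not occupied:
        return None
    return occupied[0] * px_per_bin, (occupied[-1] + 1) * px_per_bin - 1

print(rectangular_roi([0.0, 0.0, 0.25, 0.25, 0.0, 0.0]))  # (400, 799)
```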

    [0060] FIG. 7 shows a situation in a schematic sectional representation in which there are two objects 54a-b of different heights in the detection zone 14. A depth of field zone enclosed by an upper and a lower DOF (depth of field) boundary 80a-b can be displaced by setting the focal position 78 by means of the focus adjustment 18. The depth of field zone in the respective focal position 78 depends on different factors, in particular on the reception optics 16, but also, for example, on the decoding method since, for code reading, whether the code is readable is decisive for what counts as a sufficient image focus. The control and evaluation unit 42 can, for example, access a look-up table with depth of field zones determined in advance by simulation, modeling, or empirically.
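
    Such a look-up table could, purely as an illustration with invented placeholder values, look as follows:

```python
# Minimal sketch of such a look-up table; the focal positions and depth of
# field limits below are invented placeholders, not values from the patent.
DOF_TABLE = {
    # focal position (m): (near limit, far limit) in metres
    0.8: (0.72, 0.90),
    1.2: (1.05, 1.40),
    1.8: (1.50, 2.20),
}

def in_focus(object_distance_m: float, focal_position_m: float) -> bool:
    near, far = DOF_TABLE[focal_position_m]
    return near <= object_distance_m <= far

print(in_focus(1.3, 1.2))  # True: 1.3 m lies inside (1.05, 1.40)
```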

    [0061] Due to the measured data of the distance sensor 28 and the information on the depth of field zone at a focal position 78, the control and evaluation unit 42 is thus aware of how the focal position 78 has to be changed to record one of the objects 54a-b in focus. As long as there is a focal position 78 with a depth of field zone suitable for all the objects 54a-b, the number of lines to be read can be increased for two or more objects 54 by means of the configuration unit 22 or a further region of interest 74 to be read can be provided on the image sensor 20. A single recording is then possibly sufficient for a plurality of objects 54a-b, with a repeat recording remaining possible just like separate recordings for every object 54a-b.
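
    The decision whether one focal position suffices for all objects amounts to an interval check; a hedged sketch, reusing a table of the shape shown above, follows:

```python
# Hedged sketch of that decision: search for one focal position whose
# depth of field zone contains all object distances; dof_table has the
# shape of the hypothetical DOF_TABLE above.
def common_focal_position(object_distances, dof_table):
    for focal_position, (near, far) in dof_table.items():
        if all(near <= d <= far for d in object_distances):
            return focal_position
    return None  # separate recordings with refocusing are needed

print(common_focal_position([1.1, 1.35], {1.2: (1.05, 1.40)}))  # 1.2
```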

    [0062] In the situation of FIG. 7, however, the heights of the objects 54a-b are too different; there is no focal position 78 at which the two objects 54a-b would lie within the depth of field zone. The control and evaluation unit 42 has to make a decision and initially focuses on the higher object 54b.

    [0063] FIG. 8 shows a schematic plan view of the image sensor 20 set for this situation by means of the configuration unit 22. Only the region of interest 74 corresponding to the higher object 54b recorded in focus is read. As an alternative to a rectangular partial region, the complete image lines, expanded to the left and to the right, could be configured and read. There is per se a further region of interest 82 corresponding to the lower object 54a, and the control and evaluation unit 42 is aware of this from evaluating the measured data of the distance sensor 28. Since, however, no sufficiently focused image data are to be expected in the further region of interest 82 anyway, it is treated as a region 76 of no interest and is not read. With a smaller height difference at which the lower object 54a is still in the depth of field zone, two regions of interest 74, 82 could be configured and read provided that the configuration unit 22 provides this function or both regions of interest 74, 82 are surrounded by a common region of interest.

    [0064] FIGS. 9 and 10 show a situation complementary to FIGS. 7 and 8. The focal position 78 and the associated depth of field zone are now set to the lower object 54a. Its image data are correspondingly read and those of the higher object 54b are discarded together with the regions between and next to the objects 54a-b directly in the image sensor 20.

    [0065] It is thus possible to generate a first recording with a focal position 78 for the higher object 54b and directly thereafter, at least as long as the lower object 54a is still in the detection zone 14, a second recording after a refocusing and thus an adaptation of the focal position 78 to the lower object 54a. The control and evaluation unit 42 is even informed of the described conflict situation in good time due to the measured data of the distance sensor 28 and can plan ahead.

    [0066] In an alternative embodiment, the focal position 78 is cyclically changed, for example by a step function or by an oscillation. A plurality of recordings are generated so that the depth of field zones overall cover the total possible distance range, preferably excluding the conveying plane itself provided that very flat objects 54 do not have to be expected. Respective regions of interest 74 that are recorded in focus in the current focal position 78 are configured using the measured data of the distance sensor 28. The respective focal position 78 thus determines the regions of interest 74. It is ensured that every structure is recorded in focus and blurry image data are not read at all.
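
    A sketch of this cyclic operation, with invented step values and table entries, might look like this:

```python
# Illustrative sketch of the cyclic variant: the focal position steps
# through a fixed sequence whose depth of field zones together cover the
# distance range; per step, only the regions currently in focus are read.
from itertools import cycle, islice

def cyclic_recordings(focus_steps, dof_table, regions):
    """regions: list of (region_id, distance) from the geometry sensor;
    yields (focal_position, ids of regions recorded in focus)."""
    for focal_position in cycle(focus_steps):
        near, far = dof_table[focal_position]
        yield focal_position, [rid for rid, d in regions if near <= d <= far]

dof = {0.8: (0.7, 1.0), 1.5: (1.0, 2.0)}
regions = [("a", 0.9), ("b", 1.6)]
print(list(islice(cyclic_recordings([0.8, 1.5], dof, regions), 2)))
```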

    [0067] In accordance with the invention, the advantages of a large image sensor 20 are thus achieved without triggering a flood of data that can no longer be managed. At the same time, the problem of blurry image data of a plurality of sequential objects 54a-b of greatly different heights in a single large recording is solved. It thus becomes possible to cover the stream of objects 54 by a single image sensor 20, at least with respect to its perspective, for instance from above or from the side, or at least to cover a portion that is as large as possible.

    [0068] Conventionally, in contrast, the pre-processing unit would have to read all the image data and only then, where necessary, discard image data outside of regions of interest. The pre-processing conventionally already takes place on the fly in a pipeline structure during the reading so that the reading and pre-processing practically cannot be considered separately in terms of their time demands. For an image sensor 20 of high resolution this requires a processing time of 25 ms, for example, and thus limits the recording frequency or frame rate to 40 Hz. The situation becomes worse due to more complex image processing steps; the possible recording frequency drops further. If two objects of very different heights now closely follow one another, a second recording after refocusing may possibly come too late. In accordance with the invention, in contrast, the image data quantity is reduced from the start so that only relevant image data are read. The reduced data load is already an advantage in itself since resources are thus saved or can be used in a more targeted manner. The camera 10 thus becomes, as desired, less expensive or more powerful. In addition, the strict limitation of the recording frequency corresponding to a processing time for the complete images is dispensed with. The recording frequency can therefore be flexibly increased in total or even from case to case. A second recording in good time after refocusing thus also becomes possible in the situation of two objects 54a-b of greatly different heights following closely on one another, as explained with respect to FIGS. 7 to 10.
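
    The arithmetic behind this argument can be made explicit in a short sketch; the 25 ms and 40 Hz figures come from the text above, while the row count of the sensor is an assumption:

```python
# Back-of-the-envelope sketch of the frame rate argument, assuming read-out
# and pre-processing time scale roughly with the number of rows read.
# 25 ms per full frame (40 Hz) comes from the text; 3000 rows is assumed.
FULL_FRAME_TIME_S = 0.025
TOTAL_ROWS = 3000

def max_recording_frequency(rows_read: int) -> float:
    return TOTAL_ROWS / (FULL_FRAME_TIME_S * rows_read)

print(max_recording_frequency(3000))  # 40 Hz for the complete image
print(max_recording_frequency(600))   # 200 Hz when a fifth of the rows is read
```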