Detection of image data of a moving object

11375102 · 2022-06-28

Abstract

A camera for detecting an object moved through a detection zone is provided that has an image sensor for recording image data, a reception optics having a focus adjustment unit for setting a focal position, a distance sensor for measuring a distance value from the object, and a control and evaluation unit connected to the distance sensor and the focus adjustment unit to set a focal position in dependence on the distance value, and to trigger a recording of image data at a focal position at which there is a focus deviation from a focal position that is ideal in accordance with the measured distance value, with the focus deviation remaining small enough for a required image definition of the image data.

Claims

1. A camera for detecting an object moved through a detection zone, the camera comprising: an image sensor for recording image data; a reception optics having a focus adjustment unit for setting a focal position; a distance sensor for measuring a distance value from the object; and a control and evaluation unit connected to the distance sensor and the focus adjustment unit to set a focal position in dependence on the distance value, wherein the control and evaluation unit is configured to trigger a recording of image data at a focal position at which there is a focus deviation from a focal position that is ideal in accordance with the measured distance value, with the focus deviation remaining small enough for a required image definition of the image data, wherein the control and evaluation unit is configured to determine a required refocusing time from the instantaneous focal position and the focal position that is ideal in accordance with the measured distance value.

2. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to determine an available focusing time from the point in time at which the object will reach the recording position.

3. The camera in accordance with claim 2, wherein the distance sensor is configured to measure the speed of the movement of the object.

4. The camera in accordance with claim 2, wherein the control and evaluation unit is configured to compare the available focusing time with the required refocusing time and only to record image data having a focal deviation when the available focusing time is not sufficient.

5. The camera in accordance with claim 1, wherein an association rule between adjustments from a first focal position into a second focal position and a refocusing time required for this is stored in the control and evaluation unit.

6. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to perform a focus adjustment to the ideal focal position, but already to record image data as soon as the focus deviation has become small enough for a required image definition.

7. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to not perform a focus adjustment up to the ideal focal position, but only up to the focus deviation.

8. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to delay the recording of image data beyond an available focusing time if a focal position having a focus deviation can only then be achieved that is small enough for a required image definition of the image data.

9. The camera in accordance with claim 1, wherein a distance measurement field of view of the distance sensor at least partly overlaps the detection zone.

10. The camera in accordance with claim 1, wherein the distance sensor is integrated in the camera.

11. The camera in accordance with claim 9, wherein the distance measurement field of view is oriented such that an object is detected before it enters into the detection zone.

12. The camera in accordance with claim 1, wherein the distance sensor is configured as an optoelectronic distance sensor.

13. The camera in accordance with claim 12, wherein the optoelectronic distance sensor operates in accordance with the principle of the time of flight process.

14. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to evaluate the focus deviation as small enough for a required image definition when the object is still in a depth of field range according to the distance measurement value on a triggering of the recording of the image data in the set focal position.

15. The camera in accordance with claim 14, wherein the depth of field range is a depth of field range determined from optical properties and/or from application-specific demands.

16. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to read a code content of a code on the object using the image data.

17. The camera in accordance with claim 16, wherein the control and evaluation unit is configured to evaluate the focus deviation as small enough for a required image definition of the image data if the image definition is sufficient to read a recorded code.

18. The camera in accordance with claim 17, wherein whether the image definition is sufficient to read a recorded code is dependent on at least one of a code type, a module size, and a decoding process.

19. The camera in accordance with claim 1, that is installed in a stationary manner at a conveying device that guides objects to be detected in a direction of conveying through the detection zone.

20. A method of detecting image data of an object moved through a detection zone, in which a distance value from the object is measured by a distance sensor and a focal position of a reception optics is set in dependence on the distance value, wherein a recording of image data is triggered at a focal position at which there is a focus deviation from a focal position that is ideal in accordance with the measured distance value, with the focus deviation remaining small enough for a required image definition of the image data, and wherein a required refocusing time is determined from the instantaneous focal position and the focal position that is ideal in accordance with the measured distance value.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The invention will be explained in more detail in the following also with respect to further features and advantages by way of example with reference to embodiments and to the enclosed drawing. The Figures of the drawing show in:

(2) FIG. 1 a schematic sectional representation of a camera with a distance sensor;

(3) FIG. 2 a three-dimensional view of an exemplary use of the camera in an installation at a conveyor belt;

(4) FIG. 3 a representation of a camera and of an object that is moved into its detection zone to explain a focusing method;

(5) FIG. 4 a representation similar to FIG. 3, now with two moving objects of different heights moving after one another; and

(6) FIG. 5 a representation of successful and unsuccessful reading attempts of a code on an object at different focal positions (X axis) and object distances (Y axis).

DETAILED DESCRIPTION

(7) FIG. 1 shows a schematic sectional representation of a camera 10. Received light 12 from a detection zone 14 is incident on a reception optics 16 that conducts the received light 12 to an image sensor 18. The optical elements of the reception optics 16 are preferably configured as an objective composed of a plurality of lenses and other optical elements such as diaphragms, prisms, and the like, but are represented here only by a lens for reasons of simplicity. The reception optics 16 can be set to different focal positions by means of a focus adjustment 17 to record objects in focus at different distances. The most varied functional principles are conceivable for this purpose, for instance a change of the focal distance by a stepper motor or a moving coil actuator, but also a change of the focal length, for instance by a liquid lens or gel lens.

(8) To illuminate the detection zone 14 with transmitted light 20 during a recording of the camera 10, the camera 10 comprises an optional illumination unit 22 that is shown in FIG. 1 in the form of a simple light source and without a transmission optics. In other embodiments, a plurality of light sources such as LEDs or laser diodes are arranged around the reception path, in ring form, for example, and can also be multi-color and controllable in groups or individually to adapt parameters of the illumination unit 22 such as its color, intensity, and direction.

(9) In addition to the actual image sensor 18 for detecting image data, the camera 10 has an optoelectronic distance sensor 24 that measures distances from objects in the detection zone 14 using a time of flight (TOF) process. The distance sensor 24 comprises a TOF light transmitter 26 having a TOF transmission optics 28 and a TOF light receiver 30 having a TOF reception optics 32. A TOF light signal 34 is thus transmitted and received again. A time of flight measurement unit 36 determines the time of flight of the TOF light signal 34 and determines from this the distance from an object at which the TOF light signal 34 was reflected back.

(10) The TOF light receiver 30 in the embodiment shown has a plurality of light reception elements 30a or pixels and can thus even detect a spatially resolved height profile. Alternatively, the TOF light receiver 30 only has one light reception element 30a or combines a plurality of measurement values of the light reception elements 30a into one distance value. The design of the distance sensor 24 is purely exemplary and other optoelectronic distance measurements without time of flight processes and non-optical distance measurements are also conceivable. The optoelectronic distance measurement by means of time of flight processes is known and will therefore not be explained in detail. Two exemplary measurement processes are photomixing detection using a periodically modulated TOF light signal 34 and pulse time of flight measurement using a pulse modulated TOF light signal 34. There are also highly integrated solutions here in which the TOF light receiver 30 is accommodated on a common chip with the time of flight measurement unit 36 or at least parts thereof, for instance TDCs (time to digital converters) for time of flight measurements. In particular a TOF light receiver 30 is suitable for this purpose that is designed as a matrix of SPAD (single photon avalanche diode) light reception elements 30a. For such a SPAD-based distance measurement, a plurality of light reception elements 30a are particularly advantageous that are not used for a spatially resolved measurement, but rather for a statistical multiple measurement with which a more exact distance value is determined. The TOF optics 28, 32 are shown only symbolically as respective individual lenses representative of any desired optics such as a microlens field.
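As an illustration of the pulse time of flight measurement mentioned above, the following Python sketch (not part of the embodiment; values and helper names are hypothetical) converts a measured round-trip time of the TOF light signal 34 into a distance and averages a plurality of individual measurements in the sense of a statistical multiple measurement:

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(t_round_trip_s: float) -> float:
    # Half the light path travelled during the round trip is the object distance.
    return 0.5 * SPEED_OF_LIGHT * t_round_trip_s

def averaged_distance(round_trip_times_s: list[float]) -> float:
    # Statistical multiple measurement over many individual events, e.g. SPAD events.
    distances = [distance_from_round_trip(t) for t in round_trip_times_s]
    return sum(distances) / len(distances)

# Example: round-trip times of roughly 8 ns correspond to approximately 1.2 m.
print(averaged_distance([8.0e-9, 8.1e-9, 7.9e-9]))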

(11) A control and evaluation unit 38 is connected to the focus adjustment 17, to the illumination unit 22, to the image sensor 18, and to the distance sensor 24 and is responsible for the control work, the evaluation work, and for other coordination work in the camera 10. It therefore controls the focus adjustment 17 with a focal position corresponding to the distance value of the distance sensor 24 and reads image data of the image sensor 18 to store them or to output them to an interface 40. The control and evaluation unit 38 is preferably able to localize and decode code zones in the image data so that the camera 10 becomes a camera-based code reader. A plurality of modules can be provided for the different control and evaluation work, for example to perform the focus adaptations in a separate module or to perform pre-processing of the image data on a separate FPGA.

(12) The camera 10 is protected by a housing 42 that is terminated by a front screen 44 in the front region where the received light 12 is incident.

(13) FIG. 2 shows a possible use of the camera 10 in an installation at a conveyor belt 46. The camera 10 is here shown only as a symbol and no longer with its structure already explained with reference to FIG. 1. The conveyor belt 46 conveys objects 48, as indicated by the arrow 50, through the detection zone 14 of the camera 10. The objects 48 can bear code zones 52 at their outer surfaces. It is the object of the camera 10 to detect properties of the objects 48 and, in a preferred use as a code reader, to recognize the code zones 52, to read and decode the codes affixed there, and to associate them with the respective associated object 48. In order also to detect object sides and in particular laterally applied code zones 54, additional cameras 10, not shown, are preferably used from different perspectives. In addition, a plurality of cameras 10 can be arranged next to one another to together cover a wider detection zone 14.

(14) FIG. 3 shows a camera 10 having a downwardly directed detection zone 14 as in the situation of FIG. 2. A distance measurement field of view 56 of the distance sensor 24 is larger than the detection zone 14 in this example and includes it. Deviating, overlapping and non-overlapping configurations of the detection zone 14 and the distance measurement field of view 56 are, however, also conceivable. A distance measurement field of view 56 disposed at least partly upstream has the advantage that a distance measurement value is available earlier.

(15) An object 48 to be recorded moves at a velocity v into the detection zone 14. The velocity v, known as a parameter of a conveying device, can be measured by an external sensor such as an encoder, be reconstructed from early image recordings, or can be determined by the distance sensor 24. In the latter case, the distance sensor 24 preferably has a plurality of reception zones of light reception elements 30a into which the object 48 successively enters so that a conclusion can be drawn on the velocity v from the temporal sequence and the measured distances.
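The velocity estimate from a plurality of reception zones can be illustrated by the following sketch (hypothetical, assuming the projected positions of the reception zones along the conveying direction are known from the fixed configuration): the velocity v is the slope of a line fitted through the entry times of the object front edge into the individual zones.

def estimate_velocity(zone_positions_m: list[float], entry_times_s: list[float]) -> float:
    # Least-squares slope of position over time, i.e. the conveying velocity v.
    n = len(zone_positions_m)
    mean_t = sum(entry_times_s) / n
    mean_x = sum(zone_positions_m) / n
    num = sum((t - mean_t) * (x - mean_x) for t, x in zip(entry_times_s, zone_positions_m))
    den = sum((t - mean_t) ** 2 for t in entry_times_s)
    return num / den

# Example: zones spaced 20 mm apart entered every 50 ms give v = 0.4 m/s.
print(estimate_velocity([0.00, 0.02, 0.04], [0.00, 0.05, 0.10]))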

(16) The object 48 is detected on entry into the distance measurement field of view 56. The recording should be triggered when it is located at the center of the detection zone 14. The distance d.sub.1 has to be covered for this purpose and the time up to this point is given by t.sub.1=d.sub.1/v. The distance d.sub.1 still depends on the distance h.sub.1 since objects 48 of different heights are detected for the first time at different positions. The distance h.sub.1 is in turn measured by the distance sensor 24 and itself has to be converted from the distance value h.sub.m1 measured obliquely instead of straight by means of h.sub.1=h.sub.m1 cos α. Under the assumption that h.sub.m1 is measured immediately on entry into the distance measurement field of view 56, the angle α in the configuration shown corresponds to half the viewing angle of the distance sensor 24 and is at least known from the fixed configuration. d.sub.1=h.sub.1 tan α can now also be calculated using these values.
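The geometry just described can be summarized in a short sketch (illustrative only, all quantities in SI units): the obliquely measured distance value h.sub.m1 is converted into the distance h.sub.1, from which the distance d.sub.1 still to be covered and the time t.sub.1 until the recording position is reached follow.

import math

def time_to_recording_position(h_m1: float, alpha_rad: float, v: float) -> float:
    h_1 = h_m1 * math.cos(alpha_rad)   # distance from the camera, measured straight down
    d_1 = h_1 * math.tan(alpha_rad)    # path to the centre of the detection zone
    return d_1 / v                     # available time t_1 = d_1 / v

# Example: h_m1 = 1.3 m, half viewing angle of 20 degrees, v = 0.5 m/s.
print(time_to_recording_position(1.3, math.radians(20.0), 0.5))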

(17) The geometry shown in FIG. 3 and the time behavior are thus known. The camera thus has an available focusing time dt=t.sub.1 in which to set the focal position to the height h.sub.1. A deduction for inertias and, conversely, a supplement can be taken into account in the available focusing time dt since it is not the front edge of the object 48 that is to be recorded, but rather the object center.

(18) It can conversely be determined which refocusing time DT is required to refocus from the current focal position to an ideal focal position in accordance with the measured distance h.sub.1. This can be achieved, for example, by a precalibration of the focus adjustment 17. The most varied focus adjustments from a value h.sub.1 to a value h.sub.2 are therefore carried out and in so doing the time until the new focal position has been adopted is determined. A theoretical system observation or a simulation can also be used instead. There is as a result at least a function or lookup table that associates a required refocusing time DT with a pair (h.sub.1, h.sub.2). An exemplary value for a maximum adjustment from a minimal focal position h.sub.1 to a maximum focal position h.sub.2 or vice versa is 50 ms. The required refocusing time DT for the situation of FIG. 3 is calculated from the pair (h.sub.0, h.sub.1), where h.sub.0 is the currently set focal position. In a position of rest, the focus can be moved to a central location h.sub.0 to limit the required refocusing time DT for the next object 48.
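Such an association rule can be held, for example, as a small lookup table with interpolation; the following sketch uses purely hypothetical calibration values keyed by the absolute adjustment travel rather than by the pair (h.sub.1, h.sub.2) itself.

import bisect

# Calibration points (hypothetical): adjustment travel in metres -> refocusing time in seconds.
_TRAVEL = [0.0, 0.2, 0.5, 1.0, 2.0]
_TIME_S = [0.0, 0.010, 0.020, 0.035, 0.050]  # 50 ms for the maximum adjustment

def required_refocusing_time(h_from: float, h_to: float) -> float:
    # Linear interpolation of the calibrated refocusing time DT over the travel.
    travel = abs(h_to - h_from)
    i = bisect.bisect_left(_TRAVEL, travel)
    if i == 0:
        return _TIME_S[0]
    if i >= len(_TRAVEL):
        return _TIME_S[-1]
    frac = (travel - _TRAVEL[i - 1]) / (_TRAVEL[i] - _TRAVEL[i - 1])
    return _TIME_S[i - 1] + frac * (_TIME_S[i] - _TIME_S[i - 1])

print(required_refocusing_time(0.4, 1.1))  # DT for an adjustment travel of 0.7 m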

(19) If the available focusing time dt is sufficient in comparison with the required refocusing time DT, that is dt≥DT, the ideal focal position is then set and a recording that is ideally in focus within the framework of the possibilities of the camera 10 is triggered as soon as the object 48 is in the recording position. The problematic case is that the available focusing time dt is not sufficient. A compensation strategy is then applied. An image is not recorded at an ideal focal position, but rather at a focal position that can be reached faster. A certain blur is thereby accepted that is, however, well-defined and furthermore makes it possible to achieve the desired purpose with the image recording, for example to read a code 52. It will be explained later with reference to FIG. 5 how a still permitted focus deviation can in particular be fixed using a depth of field range associated with the respective distance.
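The comparison itself is simple and can be sketched as follows (illustrative helper, not the claimed implementation); the compensation strategies referred to are described in the following paragraphs.

def focus_plan(dt: float, DT: float) -> str:
    # dt: available focusing time, DT: required refocusing time.
    if dt >= DT:
        return "set ideal focal position and trigger in focus at the recording position"
    return "apply a compensation strategy and record with a still permitted focus deviation"

print(focus_plan(dt=0.040, DT=0.025))  # ideal focal position is reachable in time
print(focus_plan(dt=0.015, DT=0.025))  # compensation is necessary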

(20) There are now a plurality of possible compensation strategies that can be applied individually or in combination when the available focusing time dt is not sufficient and it is known which focus deviation can still be tolerated. Combining compensation strategies can also mean triggering a plurality of image recordings in order, for example, to record both a somewhat blurred image at an ideal object location and a focused image in an object position that is no longer fully ideal.

(21) An image recording can take place with the still tolerated focus deviation at a focal position h.sub.1′ that is closer to the instantaneous focal position than h.sub.1 and that is accordingly reached faster. There is then a possibility of nevertheless adjusting the focal position to the ideal focal position h.sub.1 even though it is clear that this focus adjustment will not be carried out to the end in sufficient time. An image recording is then triggered prematurely as soon as at least the focal position h.sub.1′ has been reached. The refocusing time DT′<DT required for this purpose can be determined in advance and triggering takes place after DT′. The image recording can be triggered directly at the focal position h.sub.1′ or the available focusing time dt is made use of and an image recording is then triggered at a focal position h.sub.1″ between h.sub.1 and h.sub.1′.

(22) A further possibility is to set, instead of the ideal focal position, the focal position h.sub.1′ at the closer margin of the tolerance framework or depth of field range given by the still permitted focus deviation, or a focal position h.sub.1″ between h.sub.1 and h.sub.1′ that can just still be reached in the available focusing time dt. This is only possible when the available focusing time dt is at least sufficient for this adjustment, for which purpose a new required refocusing time DT′ can be determined. It would otherwise, however, also be conceivable to make a setting to said focal position h.sub.1′ at the margin of the depth of field range and only then to trigger the image recording. The object 48 has then moved a little too far, but unlike with an image with a known insufficient image definition, an image recorded a little too late can absolutely still be usable, for example still include the code 52. The object offset is at least smaller than if one were to wait until the focal position actually corresponds to the ideal focal position h.sub.1, with an image recording also being conceivable at that even later point in time, in particular for an additional image.
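Two of these strategies can be sketched as follows (hypothetical helper functions with a simple linear model of the refocusing time; not the claimed implementation): the focal position h.sub.1′ at the closer margin of the permitted focus deviation is determined and the refocusing time DT′ up to this position is calculated, after which the recording may already be triggered.

def margin_position(h_current: float, h_ideal: float, permitted_deviation: float) -> float:
    # Focal position h_1' at the margin of the permitted deviation closest to the current position.
    if h_ideal >= h_current:
        return max(h_current, h_ideal - permitted_deviation)
    return min(h_current, h_ideal + permitted_deviation)

def refocus_time(travel_m: float) -> float:
    # Hypothetical linear model: 50 ms for the maximum adjustment travel of 2 m.
    return 0.050 * min(travel_m, 2.0) / 2.0

def premature_trigger_time(h_current: float, h_ideal: float, permitted_deviation: float) -> float:
    # Refocusing time DT' after which the recording may already be triggered.
    h_margin = margin_position(h_current, h_ideal, permitted_deviation)
    return refocus_time(abs(h_margin - h_current))

# Example: current focus 0.6 m, ideal focus 1.2 m, permitted deviation 50 mm.
print(premature_trigger_time(0.6, 1.2, 0.05))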

(23) FIG. 4 again shows a camera 10 having a downwardly directed detection zone 14 similar to FIG. 3. Here, however, a further object 48a follows at a short distance whose height h.sub.2 differs considerably from the height h.sub.1 of the first object 48. A recording of the further object 48a should be produced at a time t.sub.2 that is calculated from the distance d.sub.2 to be covered. The values d.sub.2, h.sub.2 and t.sub.2 are calculated in an analogous manner to the values d.sub.1, h.sub.1 and t.sub.1, but can naturally only be determined when the distance sensor 24 detects the further object 48a for the first time.

(24) The available focusing time is now dt=t.sub.2−t.sub.1 and refocusing has to take place from h.sub.1 to h.sub.2 for the recording of the further object 48a after the recording of the object 48; the required refocusing time DT results from this. With these values, the explanations on FIG. 3 apply analogously in order, with an available focusing time dt that is too short, to produce a recording at a focal position that at most has a still tolerable focus deviation. The situation of FIG. 4 is possibly even more critical than that of FIG. 3 in dependence on the object distance d.sub.2−d.sub.1 and the height difference h.sub.2−h.sub.1 and therefore in particular profits from the described refocusing that where necessary does not fully reach the ideal focal position h.sub.2 and is in turn faster.

(25) Up to now, the question as to which focus deviations can still be tolerated has only been briefly considered and should now finally be looked at more exactly. In this respect, a distinction can be made between purely optical or physical demands and application-specific demands. One possibility is to consider a focus deviation as still small enough if the difference between the set and the ideal focal positions still remains in the depth of field range, with the extent of the depth of field range in turn having a dependency on the respective focal position or on the respective object distance.

(26) A physical depth of field range DOF.sub.p(h) can be approximated by the formula DOF.sub.p(h)˜2h.sup.2Nc/f.sup.2. Here, h is the distance between the camera 10 and the object 48; N is the numerical aperture of the objective of the reception optics 16 and is thus dependent on the f-number f.sub.num; c is the circle of confusion and corresponds to the degree of permitted blur of, for example, one pixel on the image sensor 18; and f is the focal length of the objective. A number of these are accordingly parameters of the objective that are known and fixed. Further influences on the depth of field range such as the f-number or the exposure can be largely precluded by fixing or by optimum setting.
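The approximation can be written directly as a small function (illustrative; the numerical values in the example are hypothetical):

def dof_physical(h: float, f_number: float, circle_of_confusion: float, focal_length: float) -> float:
    # DOF_p(h) ~ 2 h^2 N c / f^2, all quantities in metres.
    return 2.0 * h**2 * f_number * circle_of_confusion / focal_length**2

# Example: h = 1 m, f-number 5.6, c = 5 µm, f = 16 mm gives roughly 0.22 m.
print(dof_physical(1.0, 5.6, 5e-6, 0.016))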

(27) However, specific demands of the application are not taken into account in the physical depth of field range DOF.sub.p(h). This becomes clear for the example of code reading: It is ultimately not a question of whether images satisfy physical contrast criteria, but rather whether the code can be read. In some cases, this application-specific depth of field range DOF.sub.app can be modeled by a factor κ that depends on application-specific parameters: DOF.sub.app(h)=κ DOF.sub.p(h). Typical application-specific parameters are here the module size, for example measured in pixels per module, the code type, and last but not least the decoding algorithm used. If this cannot be modeled by a simple factor κ, the possibility at least remains of determining DOF.sub.app by simulation or experiment.
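Building on the preceding sketch, the application-specific depth of field range can be modeled as follows, with a purely hypothetical factor κ:

def dof_application(h: float, kappa: float, f_number: float, circle_of_confusion: float, focal_length: float) -> float:
    # DOF_app(h) = kappa * DOF_p(h); kappa bundles the influence of module size, code type and decoder.
    return kappa * dof_physical(h, f_number, circle_of_confusion, focal_length)

# A robust decoder on a code with large modules may tolerate kappa > 1,
# a code with small modules may require kappa < 1.
print(dof_application(1.0, 0.7, 5.6, 5e-6, 0.016))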

(28) FIG. 5 shows a representation of reading attempts of a code 52 on an object 48 at different focal positions and object distances. Light dots 58 designate successful reading attempts (GoodReads) and dark dots 60 unsuccessful reading attempts (NoReads). The two lines 62 follow the border between them and the spacing between the two lines designates the required application-specific depth of field range DOF.sub.app(d) in dependence on the focal position or on the object distance.

(29) Such a diagram can be produced by measurement or simulation for specific conditions with respect to said parameters such as the code type, module size, decoding process, exposure. An association rule in the form of a function or table (lookup table, LUT) is thereby produced from which the control and evaluation unit 38 can read, with a given provisional distance value, a depth of field range and thus a still permitted focus deviation with which it is still ensured that a code will be readable. There can be a plurality of association rules for different conditions so that the suitable still permitted focus deviation is then determined in a situation and application related manner, for example in dependence on the code type, module size, exposure, and the decoder used.
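Such an association rule can be held, for example, as a simple table from which the still permitted focus deviation is read out for a measured distance value; the following sketch uses hypothetical values and a nearest-neighbour lookup.

# Hypothetical table: distance in metres -> still permitted focus deviation in metres,
# as it would be obtained from measurements or simulations like those of FIG. 5.
PERMITTED_DEVIATION_BY_DISTANCE = {
    0.5: 0.02,
    1.0: 0.05,
    1.5: 0.10,
    2.0: 0.17,
}

def permitted_focus_deviation(distance_m: float) -> float:
    # The nearest tabulated distance decides the permitted deviation.
    nearest = min(PERMITTED_DEVIATION_BY_DISTANCE, key=lambda d: abs(d - distance_m))
    return PERMITTED_DEVIATION_BY_DISTANCE[nearest]

print(permitted_focus_deviation(1.2))  # -> 0.05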