Detection of image data of a moving object
20210321042 · 2021-10-14
CPC classification
H04N23/671
ELECTRICITY
G03B13/20
PHYSICS
H04N23/959
ELECTRICITY
Abstract
A camera for detecting an object moved through a detection zone is provided that has an image sensor for recording image data, a reception optics having a focus adjustment unit for setting a focal position, a distance sensor for measuring a distance value from the object, and a control and evaluation unit connected to the distance sensor and the focus adjustment unit to set a focal position in dependence on the distance value, and to trigger a recording of image data at a focal position at which there is a focus deviation from a focal position that is ideal in accordance with the measured distance value, with the focus deviation remaining small enough for a required image definition of the image data.
Claims
1. A camera for detecting an object moved through a detection zone, the camera comprising: an image sensor for recording image data, a reception optics having a focus adjustment unit for setting a focal position, a distance sensor for measuring a distance value from the object, and a control and evaluation unit connected to the distance sensor and the focus adjustment unit to set a focal position in dependence on the distance value, wherein the control and evaluation unit is configured to trigger a recording of image data at a focal position at which there is a focus deviation from a focal position that is ideal in accordance with the measured distance value, with the focus deviation remaining small enough for a required image definition of the image data.
2. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to determine an available focusing time up to the point in time at which the object will reach the recording position.
3. The camera in accordance with claim 2, wherein the distance sensor is configured to measure the speed of the movement of the object.
4. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to determine a required refocusing time from the instantaneous focal position and the focal position that is ideal in accordance with the measured distance value.
5. The camera in accordance with claim 4, wherein an association rule between adjustments from a first focal position into a second focal position and a refocusing time required for this is stored in the control and evaluation unit.
6. The camera in accordance with claim 2, wherein the control and evaluation unit is configured to compare the available focusing time with the required refocusing time and only to record image data having a focus deviation when the available focusing time is not sufficient for the required refocusing time.
7. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to perform a focus adjustment to the ideal focal position, but already to record image data as soon as the focus deviation has become small enough for a required image definition.
8. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to not perform a focus adjustment up to the ideal focal position, but only up to the focus deviation.
9. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to delay the recording of image data beyond an available focusing time if a focal position having a focus deviation can only then be achieved that is small enough for a required image definition of the image data.
10. The camera in accordance with claim 1, wherein a distance measurement field of view of the distance sensor at least partly overlaps the detection zone.
11. The camera in accordance with claim 1, wherein the distance sensor is integrated in the camera.
12. The camera in accordance with claim 10, wherein the distance measurement field of view is oriented such that an object is detected before it enters into the detection zone.
13. The camera in accordance with claim 1, wherein the distance sensor is configured as an optoelectronic distance sensor.
14. The camera in accordance with claim 13, wherein the optoelectronic distance sensor operates in accordance with the principle of a time of flight process.
15. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to evaluate the focus deviation as small enough for a required image definition when the object is still in a depth of field range according to the distance measurement value on a triggering of the recording of the image data in the set focal position.
16. The camera in accordance with claim 15, wherein the depth of field range is a depth of field range determined from optical properties and/or from application-specific demands.
17. The camera in accordance with claim 1, wherein the control and evaluation unit is configured to read a code content of a code on the object using the image data.
18. The camera in accordance with claim 17, wherein the control and evaluation unit is configured to evaluate the focus deviation as small enough for a required image definition of the image data if the image definition is sufficient to read a recorded code.
19. The camera in accordance with claim 18, wherein whether the image definition is sufficient to read a recorded code is dependent on at least one of a code type, a module size, and a decoding process.
20. The camera in accordance with claim 1, that is installed in a stationary manner at a conveying device that guides objects to be detected in a direction of conveying through the detection zone.
21. A method of detecting image data of an object moved through a detection zone, in which a distance value from the object is measured by a distance sensor and a focal position of a reception optics is set in dependence on the distance value, wherein a recording of image data is triggered at a focal position at which there is a focus deviation from a focal position that is ideal in accordance with the measured distance value, with the focus deviation remaining small enough for a required image definition of the image data.
Description
[0030] The invention will be explained in more detail in the following, also with respect to further features and advantages, by way of example with reference to embodiments and to the enclosed drawing.
[0037] To illuminate the detection zone 14 with transmitted light 20 during a recording of the camera 10, the camera 10 comprises an optional illumination unit 22 that is shown in
[0038] In addition to the actual image sensor 18 for detecting image data, the camera 10 has an optoelectronic distance sensor 24 that measures distances from objects in the detection zone 14 using a time of flight (TOF) process. The distance sensor 24 comprises a TOF light transmitter 26 having a TOF transmission optics 28 and a TOF light receiver 30 having a TOF reception optics 32. A TOF light signal 34 is thus transmitted and received again. A time of flight measurement unit 36 determines the time of flight of the TOF light signal 34 and determines from this the distance from an object at which the TOF light signal 34 was reflected back.
[0039] The TOF light receiver 30 in the embodiment shown has a plurality of light reception elements 30a or pixels and can thus even detect a spatially resolved height profile. Alternatively, the TOF light receiver 30 only has one light reception element 30a or combines a plurality of measurement values of the light reception elements 30a into one distance value. The design of the distance sensor 24 is purely exemplary, and other optoelectronic distance measurements without time of flight processes as well as non-optical distance measurements are also conceivable. Optoelectronic distance measurement by means of time of flight processes is known and will therefore not be explained in detail. Two exemplary measurement processes are photomixing detection using a periodically modulated TOF light signal 34 and pulse time of flight measurement using a pulse modulated TOF light signal 34. There are also highly integrated solutions here in which the TOF light receiver 30 is accommodated on a common chip with the time of flight measurement unit 36 or at least parts thereof, for instance TDCs (time to digital converters) for time of flight measurements. A TOF light receiver 30 that is designed as a matrix of SPAD (single photon avalanche diode) light reception elements 30a is particularly suitable for this purpose. For such a SPAD-based distance measurement, a plurality of light reception elements 30a are particularly advantageous; they are then not used for a spatially resolved measurement, but rather for a statistical multiple measurement with which a more exact distance value is determined. The TOF optics 28, 32 are shown only symbolically as respective individual lenses representative of any desired optics such as a microlens field.
[0040] A control and evaluation unit 38 is connected to the focus adjustment 17, to the illumination unit 22, to the image sensor 18, and to the distance sensor 24 and is responsible for the control work, the evaluation work, and for other coordination work in the camera 10. It therefore controls the focus adjustment 17 with a focal position corresponding to the distance value of the distance sensor 24 and reads image data of the image sensor 18 to store them or to output them at an interface 40. The control and evaluation unit 38 is preferably able to localize and decode code zones in the image data so that the camera 10 becomes a camera-based code reader. A plurality of modules can be provided for the different control and evaluation work, for example to perform the focus adaptations in a separate module or to perform pre-processing of the image data on a separate FPGA.
[0041] The camera 10 is protected by a housing 42 that is terminated by a front screen 44 in the front region where the received light 12 is incident.
[0044] An object 48 to be recorded moves at a velocity v into the detection zone 14. The velocity v, known as a parameter of a conveying device, can be measured by an external sensor such as an encoder, be reconstructed from early image recordings, or can be determined by the distance sensor 24. In the latter case, the distance sensor 24 preferably has a plurality of reception zones of light reception elements 30a into which the object 48 successively enters so that a conclusion can be drawn on the velocity v from the temporal sequence and the measured distances.
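The velocity determination from successive reception zones described above can be sketched as follows; the function name, the parameter names, and the assumption of a known zone spacing along the conveying direction are illustrative and not taken from the patent text:

```python
def velocity_from_zones(t_enter_zone1, t_enter_zone2, zone_spacing):
    """Estimate the conveying velocity v from the points in time at which
    the object successively enters two reception zones of the distance
    sensor that are separated by zone_spacing along the conveying direction.
    """
    return zone_spacing / (t_enter_zone2 - t_enter_zone1)
```

The same temporal sequence also yields the measured distances per zone, so height and velocity can be obtained from the same sensor.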
[0045] The object 48 is detected on entry into the distance measurement field of view 56. The recording should be triggered when it is located at the center of the detection zone 14. The distance d.sub.1 has to be covered for this purpose and the time up to this point is given by t.sub.1=d.sub.1/v. The distance d.sub.1 still depends on the distance h.sub.1 since objects 48 of different heights are detected for the first time at different positions. The distance h.sub.1 is in turn measured by the distance sensor 24 and itself has to be converted from the distance value h.sub.m1 measured obliquely instead of straight by means of h.sub.1=h.sub.m1 cos α. Under the assumption that h.sub.m1 is measured immediately on entry into the distance measurement field of view 56, the angle α in the configuration shown corresponds to half the viewing angle of the distance sensor 24 and is at least known from the fixed configuration. d.sub.1=h.sub.1 tan α can now also be calculated using these values.
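The geometry of this paragraph can be sketched numerically; the function and parameter names are illustrative, and the relations h.sub.1 = h.sub.m1 cos α, d.sub.1 = h.sub.1 tan α, and t.sub.1 = d.sub.1/v are taken directly from the text:

```python
import math

def time_to_recording_position(h_m1, alpha_deg, v):
    """Estimate when an object reaches the recording position.

    h_m1:      distance measured obliquely on entry into the distance
               measurement field of view
    alpha_deg: half the viewing angle of the distance sensor (known
               from the fixed configuration)
    v:         conveying velocity
    """
    alpha = math.radians(alpha_deg)
    h1 = h_m1 * math.cos(alpha)  # vertical object distance h1 = h_m1 * cos(alpha)
    d1 = h1 * math.tan(alpha)    # path still to be covered to the recording position
    t1 = d1 / v                  # available travel time t1 = d1 / v
    return h1, d1, t1
```

For example, with h.sub.m1 = 2 m, α = 30° and v = 1 m/s the object reaches the recording position after 1 s.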
[0046] The geometry shown in
[0047] It can conversely be determined which refocusing time DT is required to refocus from the current focal position to a focal position that is ideal in accordance with the measured distance h.sub.1. This can be achieved, for example, by a precalibration of the focus adjustment 17. A wide variety of focus adjustments from a value h.sub.1 to a value h.sub.2 are carried out for this purpose, and in each case the time until the new focal position has been adopted is determined. A theoretical system observation or a simulation can also be used instead. The result is at least a function or lookup table that associates a required refocusing time DT with a pair (h.sub.1, h.sub.2). An exemplary value for a maximum adjustment from a minimal focal position h.sub.1 to a maximum focal position h.sub.2 or vice versa is 50 ms. The required refocusing time DT for the situation of
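The precalibration described here can be sketched as building such a lookup table; `focus_drive` is a hypothetical callable standing in for the timed adjustment measurement of the focus adjustment 17:

```python
def build_refocus_lut(focus_drive, positions):
    """Precalibrate refocusing times: for every ordered pair (h1, h2) of
    focal positions, record the time the focus adjustment needs to move
    from h1 to h2.

    focus_drive(h1, h2) is assumed to perform the adjustment and return
    the measured adjustment time in seconds.
    """
    lut = {}
    for h1 in positions:
        for h2 in positions:
            if h1 != h2:
                lut[(h1, h2)] = focus_drive(h1, h2)
    return lut
```

At run time the control and evaluation unit then looks up DT = lut[(h.sub.1, h.sub.2)], interpolating between calibrated positions if necessary.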
[0048] If the available focusing time dt is sufficient in comparison with the required refocusing time DT, that is dt≥DT, the ideal focal position is set and a recording that is ideally in focus within the framework of the possibilities of the camera 10 is triggered as soon as the object 48 is in the recording position. The problematic case is that the available focusing time dt is not sufficient. A compensation strategy is then applied: an image is not recorded at the ideal focal position, but rather at a focal position that can be reached faster. A certain blur is thereby accepted that is, however, well-defined and that furthermore still makes it possible to achieve the desired purpose of the image recording, for example to read a code 52. It will be explained later with reference to
[0049] There are now a plurality of possible compensation strategies that can be applied individually or in combination when the available focusing time dt is not sufficient and which focus deviation could still be tolerated is known. Combining compensation strategies can also mean triggering a plurality of image recordings in order, for example, to record both a somewhat blurred image at an ideal object location and a focused image in an object position that is no longer fully ideal.
[0050] An image recording can take place with the still tolerated focus deviation at a focal position h.sub.1′ that is closer to the instantaneous focal position than h.sub.1 and that is accordingly reached faster. There is then a possibility of nevertheless adjusting the focal position to the ideal focal position h.sub.1 even though it is clear that this focus adjustment will not be carried out to the end in sufficient time. An image recording is then triggered prematurely as soon as at least the focal position h.sub.1′ has been reached. The refocusing time DT′<DT required for this purpose can be determined in advance and triggering takes place after DT′. The image recording can be triggered directly at the focal position h.sub.1′ or the available focusing time dt is made use of and an image recording is then triggered at a focal position h.sub.1″ between h.sub.1 and h.sub.1′.
[0051] A further possibility is to set, instead of the ideal focal position, the focal position h.sub.1′ at the closer margin of the tolerance framework or depth of field range given by the still permitted focus deviation, or a focal position h.sub.1″ between h.sub.1 and h.sub.1′ that can just still be reached in the available focusing time dt. This is only possible when the available focusing time dt is at least sufficient for this adjustment, for which purpose a new required refocusing time DT′ can be determined. It would otherwise, however, also be conceivable to make a setting to said focal position h.sub.1′ at the margin of the depth of field range and only then to trigger the image recording. The object 48 has then moved a little too far, but unlike an image with a known insufficient image definition, an image recorded a little too late can absolutely still be usable, for example still include the code 52. The object offset is at least smaller than if one were to wait until the focal position actually corresponds to the ideal focal position h.sub.1, with an image recording also being conceivable at that even later point in time, in particular for an additional image.
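The choice between these compensation strategies can be sketched as a small decision function. This is only an illustration under the simplifying assumption of a focus drive with a constant adjustment rate; the parameter names and the `speed` model are not from the patent text:

```python
import math

def plan_focus(dt, h_current, h_ideal, dof_half_width, speed):
    """Decide at which focal position to trigger the recording.

    dt:             available focusing time
    h_current:      instantaneous focal position
    h_ideal:        focal position that is ideal per the measured distance
    dof_half_width: still permitted focus deviation (half the depth of
                    field range around h_ideal)
    speed:          assumed constant focus adjustment rate (units of
                    focal position per second)
    Returns (focal position to trigger at, strategy label).
    """
    gap = h_ideal - h_current
    reachable_shift = speed * dt
    if abs(gap) <= reachable_shift:
        return h_ideal, "ideal"          # dt >= DT: fully refocus first
    h_reached = h_current + math.copysign(reachable_shift, gap)
    if abs(h_ideal - h_reached) <= dof_half_width:
        return h_reached, "within_dof"   # trigger early inside the DOF range
    # otherwise delay the recording until the DOF margin is reached
    return h_ideal - math.copysign(dof_half_width, gap), "delayed"
```

The "delayed" branch corresponds to triggering beyond the available focusing time once the margin of the depth of field range has been reached, accepting a small object offset instead of a known insufficient image definition.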
[0053] The available focusing time is now dt=t.sub.1−t.sub.2, and refocusing has to take place from h.sub.1 to h.sub.2 for the recording of the further object 48a after the recording of the object 48; the required refocusing time DT results from this. With these values, the explanations on
[0054] Up to now, the question as to which focus deviations can still be tolerated has only been briefly touched on and will now finally be looked at more exactly. In this respect, a distinction can be made between purely optical or physical demands and application-specific demands. One possibility is to consider a focus deviation as still small enough if the difference between the set and the ideal focal positions remains within the depth of field range, with the extent of the depth of field range in turn depending on the respective focal position or on the respective object distance.
[0055] A physical depth of field range DOF.sub.p(h) can be approximated by the formula DOF.sub.p(h)˜2h.sup.2Nc/f.sup.2. Here, h is the distance between the camera 10 and the object 48; N is the numerical aperture f.sub.num of the objective of the reception optics 16 and is thus f-number dependent; c is the circle of confusion and corresponds to the degree of permitted blur of, for example, one pixel on the image sensor 18; and f is the focal length of the objective. A number of these are accordingly parameters of the objective that are known and fixed. Further influences on the depth of field range such as the f-number or the exposure can be largely precluded by fixing them or by an optimum setting.
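The approximation DOF.sub.p(h) ≈ 2h²Nc/f² from this paragraph can be written directly as a small helper; the function and parameter names are illustrative:

```python
def dof_physical(h, f_num, c, f):
    """Approximate physical depth of field range DOF_p(h) = 2 h^2 N c / f^2.

    h:     distance between camera and object
    f_num: f-number N of the objective
    c:     circle of confusion, e.g. one pixel pitch on the image sensor
    f:     focal length of the objective
    All lengths in the same unit; the result is in that unit as well.
    """
    return 2.0 * h**2 * f_num * c / f**2
```

For example, h = 1 m, N = 4, c = 5 µm and f = 20 mm give a depth of field range of roughly 0.1 m, so a focus deviation of up to about ±5 cm would still be tolerated at this distance.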
[0056] However, specific demands of the application are not taken into account in the physical depth of field range DOF.sub.p(h). This becomes clear for the example of code reading: it is ultimately not a question of whether images satisfy physical contrast criteria, but rather of whether the code can be read. In some cases, this application-specific depth of field range DOF.sub.app can be modeled by a factor κ that depends on application-specific parameters: DOF.sub.app(h)=κ DOF.sub.p(h). Typical application-specific parameters here are the module size, for example measured in pixels per module, the code type, and last but not least the decoding algorithm used. If this cannot be modeled by a simple factor κ, the possibility at least remains of determining DOF.sub.app by simulation or experiment.
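The model DOF.sub.app(h) = κ·DOF.sub.p(h) can be sketched accordingly; κ is an assumed, empirically determined factor bundling code type, module size, and decoder influence, and the names are again illustrative:

```python
def dof_application(h, f_num, c, f, kappa):
    """Application-specific depth of field DOF_app(h) = kappa * DOF_p(h),
    with DOF_p(h) = 2 h^2 N c / f^2 as before. kappa > 1 means the
    decoder tolerates more blur than the physical contrast criterion,
    kappa < 1 means it tolerates less.
    """
    dof_p = 2.0 * h**2 * f_num * c / f**2
    return kappa * dof_p
```

A robust decoder reading large modules might, for instance, be calibrated to κ ≈ 2, doubling the still permitted focus deviation relative to the purely physical estimate.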
[0058] Such a diagram can be produced by measurement or simulation for specific conditions with respect to said parameters such as the code type, module size, decoding process, exposure. An association rule in the form of a function or table (lookup table, LUT) is thereby produced from which the control and evaluation unit 38 can read, with a given provisional distance value, a depth of field range and thus a still permitted focus deviation with which it is still ensured that a code will be readable. There can be a plurality of association rules for different conditions so that the suitable still permitted focus deviation is then determined in a situation and application related manner, for example in dependence on the code type, module size, exposure, and the decoder used.