METHOD FOR ADJUSTING A STEREOSCOPIC IMAGING DEVICE

20190104298 · 2019-04-04

    Abstract

    Method for adjusting a stereoscopic imaging device including, for each sensor, tagging, in an image arising from the sensor, one and the same reference object, and evaluating a quantity representative of a focus of the optical system associated with the sensor on the reference object, in order to adjust settings of each of the optical systems associated with a respective sensor of the stereoscopic imaging device.

    Claims

    1. A method for adjusting a stereoscopic imaging device with several detection blocks, each made up of a sensor and an optical system, the method including, for each sensor of the device, a step for tagging, in an image arising from the sensor, one and the same reference object, and a step for evaluating a quantity representative of a focus of the optical system associated with the sensor on the reference object, in order to adjust settings of each of the optical systems associated with a respective sensor of the stereoscopic imaging device.

    2. The method for adjusting a stereoscopic imaging device according to claim 1, wherein the settings of each of the optical systems are adjusted in order to optimize, independently, the quantities representative of the focus.

    3. The method for adjusting a stereoscopic imaging device according to claim 1, wherein the settings of each of the optical systems associated with a respective sensor are adjusted in order to obtain a same value for the quantities representative of the focus that are respectively associated with the optical systems.

    4. The method for adjusting a stereoscopic imaging device according to claim 1, wherein the tagging of a reference object is done by looking for a color frame, a predetermined shape, or specific tags.

    5. The method for adjusting a stereoscopic imaging device according to claim 1, wherein the evaluation of the quantity representative of the focus includes calculating a sharpness quantity.

    6. A stereoscopic imaging device with several detection blocks, each made up of a sensor and an optical system, the device including, for each sensor, means for tagging, in an image arising from the sensor, one and the same reference object, and means for evaluating a quantity representative of a focus of the optical system associated with the sensor on the reference object, as well as means for adjusting the settings of each of the optical systems associated with a respective sensor of the stereoscopic imaging device based on representative focus quantities.

    7. The method for adjusting a stereoscopic imaging device according to claim 2, wherein the tagging of a reference object is done by looking for a color frame, a predetermined shape, or specific tags.

    8. The method for adjusting a stereoscopic imaging device according to claim 3, wherein the tagging of a reference object is done by looking for a color frame, a predetermined shape, or specific tags.

    9. The method for adjusting a stereoscopic imaging device according to claim 2, wherein the evaluation of the quantity representative of the focus includes calculating a sharpness quantity.

    10. The method for adjusting a stereoscopic imaging device according to claim 3, wherein the evaluation of the quantity representative of the focus includes calculating a sharpness quantity.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0029] FIG. 1 shows alternative reference objects according to one aspect of the disclosure.

    [0030] FIG. 2 shows a scene seen by an imaging device with several sensors (in this example, three sensors).

    [0031] FIG. 3 shows the three images of the scene of FIG. 2 with or without implementation of the disclosure.

    DETAILED DESCRIPTION

    [0032] The described method for adjusting the focus of a stereoscopic imaging device with several sensors is based on the use of a reference object. This reference object is an object that can be identified and demarcated unambiguously in an image in most environments. Color is generally a reliable means of performing this type of demarcation, but a specific shape or specific tags are other solutions for identifying and demarcating an object in an image.

    [0033] FIG. 1 provides several examples of reference objects. In all cases, the principle is the same: the reference object offers visible characteristics that allow an automatic system to identify the object in an image with a high success rate and to demarcate a region of interest in the image essentially corresponding to the object.

    [0034] The reference object may have a visible colored contour 1. It may alternatively have a particular visible outer shape 2, or have specific visible tags 3.

    [0035] This object is placed in an environment, images of which are captured using an imaging device with several sensors for which one wishes to adjust the focus. Here, the images are rectangular, as is often the case in imaging.

    [0036] Each detection block of the imaging device captures images with a unique point of view slightly offset from those of the other blocks. One therefore obtains as many different images as there are detection blocks, as shown in FIG. 2. The scene, shown in the upper part of FIG. 2, contains three objects 10, 11, 12, one of which (referenced 11) is the reference object used to set the focus. The imaging device is referenced 20, and here includes three detection blocks 21, 22 and 23.

    [0037] In the bottom part of FIG. 2 are the three images arising from the detection blocks. Each of these images has a different point of view of the scene, resulting in the difference of position of the objects in the images, visible in the figure.

    [0038] These images have similarities, and in particular each image contains the reference object, but in a different position, with respect to the corners (or edges) of the image.

    [0039] One then uses a detection algorithm to detect the reference object in the images.

    [0040] This algorithm requires adaptation based on the type of reference object used. It may for example be an algorithm including a step in which the image is binarized, point by point, based on whether or not a color is recognized, followed by a contour detection step on the binarized image.

    [0041] It may also involve an algorithm by which one looks for a contour, analyzes the shape of the contour and compares it with the expected shape of the reference object.

    [0042] It may also involve an algorithm by which one looks for tags for the reference object in the image, then demarcates it in the image using the position of these tags. This type of algorithm is known in the field of image processing.
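The tag-based variant of paragraph [0042] could be sketched as follows, assuming the tag positions have already been found by an existing image-processing routine (for instance fiducial markers of the ArUco type); the helper name and margin parameter are illustrative assumptions:

```python
import numpy as np

def roi_from_tags(tag_positions, margin=0):
    """Demarcate the reference object from the (x, y) positions of its
    tags, e.g. markers placed near the object's corners. Tag detection
    itself is assumed to be done upstream."""
    pts = np.asarray(tag_positions)
    x0, y0 = pts.min(axis=0) - margin
    x1, y1 = pts.max(axis=0) + margin
    return int(x0), int(y0), int(x1), int(y1)

# Four tags roughly at the corners of the object, in image coordinates.
roi = roi_from_tags([(40, 30), (120, 32), (41, 95), (118, 96)])
```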

    [0043] At the end of this step, one has a region of interest demarcating the reference object for each image, even if the reference object does not appear in the same position in the different images.

    [0044] The upper part of FIG. 3 shows an imaging device that does not implement the disclosure. The object 11 has been removed, and no reference object is sought in the scene. The rectangle R indicates, in each image, the region where the focus adjustment is done. Since there is no reference object, these regions are for example chosen in the same location in each image, at the same distances from the corners of the image.

    [0045] The bottom part of FIG. 3 shows an imaging device implementing the disclosure. The three objects appear in the images arising from the detection blocks. The rectangle indicates, in each image, the region of interest where the focus adjustment is done. Since there is a reference object in the images, the regions are located on the reference object, even though the position of the object changes from one image to the next, with respect to the corners of the image.

    [0046] Once the reference object has been identified in each image and the corresponding region of interest has been demarcated, the quantity representative of the focus is calculated. Generally, this quantity corresponds to the sharpness or blur of the considered region and is calculated using a calculating method by gradients or by spatial frequency analysis.

    [0047] A sharpness quantity may be expressed as a contrast quantity over the image region where the object is present.

    [0048] In order to calculate this contrast, one method is to detect all of the contours in this image region and quantify them in terms of intensity. Indeed, if the object is blurry (i.e., the focus is not on the object), the detectable contours and edges of the objects present in this image region will be difficult to see or detect, or their intensity will be low.

    [0049] As a metric, a contour filter is therefore used, for example a Sobel filter, a Prewitt filter, or a Canny filter, which provides a gray-level image corresponding to the contrast of each pixel.

    [0050] By summing this image over the image region, then dividing by the area of the region, one obtains an average sharpness value of the image region that can be compared across the equivalent regions of the cameras.
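As a non-limiting sketch of the metric of paragraphs [0049] and [0050], the Sobel-based average sharpness could be computed as follows; the filtering helper is written out in plain NumPy so the example is self-contained, and all names are illustrative assumptions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """Minimal 'valid'-mode 2-D cross-correlation (no external deps)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def average_sharpness(gray, roi):
    """Sobel gradient magnitude summed over the region of interest,
    divided by its area: the average sharpness value of [0050]."""
    r0, r1, c0, c1 = roi
    region = gray[r0:r1, c0:c1].astype(float)
    gx = filter2d(region, SOBEL_X)
    gy = filter2d(region, SOBEL_Y)
    magnitude = np.hypot(gx, gy)  # per-pixel contrast (gray-level image)
    return magnitude.sum() / magnitude.size

# A sharp step edge scores higher than a uniform (defocused) region.
sharp = np.zeros((10, 10)); sharp[:, 5:] = 255.0
flat = np.full((10, 10), 128.0)
```

In practice a library routine (for instance OpenCV's Sobel filtering) would replace the explicit loop; the per-region average is what makes the values comparable across the equivalent regions of the different cameras.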

    [0051] On a single object detected in the scene, it is possible to calculate a focus difference (without knowing how to adjust the focus of one camera relative to the other). In the case of several equivalent objects detected in the scene at different distances, the evolution of the sharpness value for each equivalent region, coupled with the distance from the identifiable object in this region, makes it possible to determine which detection block has a further or closer focus distance.

    [0052] The metrics having been calculated for each image on the regions of interest demarcated by the reference object, they all correspond to a same object plane, a same depth of the scene.

    [0053] One therefore has an indication of the setting of the focus to a same observation depth for each of the detection blocks.

    [0054] Lastly, one analyzes all of the calculated metrics in two different ways.

    [0055] A first method comprises maximizing (or minimizing, depending on the selected metric) the quantity representative of the focus in order to have the best possible settings. Here, each detection block is adjusted independently by the user or by an automated or non-automated mechanical system. This method guarantees that the adjustment of the focus will be optimal for each detection block at the depth at which the reference object has been placed, as well as in the entire depth of field in the object space of the optical systems of the detection blocks.
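The first method can be sketched as an independent sweep of each detection block's focus setting, retaining the setting that maximizes the representative quantity. The block interface (set_focus, capture_roi) and the simulated block are hypothetical placeholders standing in for the device's actual adjustment mechanism:

```python
def optimize_focus(block, settings, metric):
    """Independently adjust one detection block: try each candidate
    focus setting, evaluate the representative sharpness quantity on
    the reference-object region, and keep the best setting."""
    best_setting, best_value = None, float("-inf")
    for s in settings:
        block.set_focus(s)
        value = metric(block.capture_roi())
        if value > best_value:
            best_setting, best_value = s, value
    block.set_focus(best_setting)
    return best_setting, best_value

class SimulatedBlock:
    """Toy stand-in for a detection block whose sharpness peaks at an
    (arbitrarily chosen) focus setting of 0.4."""
    def __init__(self):
        self.focus = 0.0
    def set_focus(self, s):
        self.focus = s
    def capture_roi(self):
        return self.focus  # the toy metric below reads this directly

settings = [0.0, 0.2, 0.4, 0.6, 0.8]
best, value = optimize_focus(SimulatedBlock(), settings,
                             metric=lambda roi: -abs(roi - 0.4))
```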

    [0056] A second method comprises comparing the calculated quantities. Indeed, the quality of the optical systems may vary slightly and introduce differences in sharpness in the image, even if each detection block is set optimally via the first method.

    [0057] It is therefore necessary to perform this comparison and adjust all of the detection blocks to an equal quantity in order to obtain detection blocks with identical and compatible focus settings.
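The comparison of the second method amounts to checking that the per-block quantities agree to within a tolerance, and adjusting until they do. A minimal sketch, in which the block labels, values, and tolerance are hypothetical:

```python
def focus_spread(metrics, tolerance):
    """Compare the sharpness quantities of all detection blocks; the
    blocks are considered matched when the spread between the best and
    worst block falls within the given tolerance."""
    spread = max(metrics.values()) - min(metrics.values())
    return spread, spread <= tolerance

# Hypothetical per-block sharpness values after independent optimization.
spread, matched = focus_spread({"block_21": 0.92, "block_22": 0.88,
                                "block_23": 0.90}, tolerance=0.05)
```

When the blocks are not matched, the settings are nudged and the metrics recomputed, which is the repetition described in paragraph [0058].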

    [0058] The above steps are repeated in order to update the metrics over the course of the adjustment.

    [0059] The disclosure is not limited to the described embodiments, but encompasses all alternatives within the scope of the claims.