IMAGE ACQUISITION METHOD FOR TIME OF FLIGHT CAMERA

20230010725 · 2023-01-12


    Abstract

    A method of reducing the impact of noise on a depth image produced using a Time Of Flight (TOF) camera uses an infrared image, produced from one or more phase-specific images captured by the TOF camera, to determine whether to move pixels in the depth image from one phase section to another.

    Claims

    1. An image acquisition method for a Time Of Flight (TOF) camera, the image acquisition method comprising: acquiring a first image using a first frequency; acquiring a second image using a second frequency; producing an initial unwrapped image based on the first and second images; selecting a partial area from the unwrapped image; distinguishing one or more erroneously unwrapped pixels of the partial area from remaining pixels of the partial area; and moving an erroneously unwrapped pixel from a phase section to which it has been initially unwrapped to another phase section.

    2. The image acquisition method of claim 1, further comprising: processing the partial area using a mode filter before moving the erroneously unwrapped pixel.

    3. The image acquisition method of claim 1, wherein distinguishing the one or more erroneously unwrapped pixels includes comparing the one or more erroneously unwrapped pixels with the remaining pixels.

    4. The image acquisition method of claim 1, wherein distinguishing the one or more erroneously unwrapped pixels includes performing a mathematical process for analyzing unwrapped values of pixels belonging to the partial area.

    5. The image acquisition method of claim 4, wherein the mathematical process includes creating a histogram.

    6. The image acquisition method of claim 4, wherein the mathematical process includes determining a dispersion value.

    7. The image acquisition method of claim 4, wherein the mathematical process is for distinguishing the erroneously unwrapped pixels.

    8. The image acquisition method of claim 1, wherein moving the erroneously unwrapped pixel is based on a value selected from one period of the first frequency and one period of the second frequency.

    9. An image acquisition method for a TOF camera, the image acquisition method comprising: acquiring an initial image from a TOF camera; unwrapping the initial image to produce an unwrapped initial image; acquiring an infrared image from the initial image; filtering the initial image to produce a reliability evaluation image; correcting the unwrapped initial image to produce a corrected unwrapped image; and performing denoising based on the infrared image, the reliability evaluation image, and the corrected unwrapped image.

    10. The image acquisition method of claim 9, wherein the correcting the unwrapped initial image includes comparing some pixels with remaining pixels.

    11. The image acquisition method of claim 9, wherein the correcting the unwrapped initial image includes performing a mathematical process for analyzing unwrapped values of some pixels.

    12. The image acquisition method of claim 11, wherein the mathematical process includes creating a histogram.

    13. The image acquisition method of claim 11, wherein the mathematical process includes determining a dispersion value.

    14. The image acquisition method of claim 9, wherein performing denoising includes using the infrared image as a reference image.

    15. The image acquisition method of claim 9, wherein filtering the initial image is based on a dispersion value among pieces of information of the initial image.

    16. The image acquisition method of claim 9, wherein the reliability evaluation image is used for evaluating reliability of the initial image.

    17. The image acquisition method of claim 16, wherein when evaluating reliability of the initial image, more noise in the initial image corresponds to a lower evaluation.

    18. The image acquisition method of claim 9, wherein the infrared image is acquired using amplitude information among pieces of information of the initial image.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0016] FIG. 1 illustrates an operation principle of a camera using an indirect TOF method.

    [0017] FIG. 2 illustrates a principle of restoring an image captured by a TOF camera.

    [0018] FIG. 3 illustrates a process of restoring an unwrapped image from images acquired using a plurality of frequencies.

    [0019] FIG. 4 shows a pixel area exhibiting an error due to noise introduced in a process of restoring an image by unwrapping according to an embodiment.

    [0020] FIG. 5 illustrates a process for correcting unwrapping by skipping one period of a frequency.

    [0021] FIG. 6 illustrates an erroneously unwrapped pixel due to involved noise.

    [0022] FIG. 7 is a flowchart of a process of moving a pixel section by correction unwrapping according to an embodiment.

    [0023] FIG. 8 illustrates a pixel area where a lot of noise is involved.

    [0024] FIG. 9 is a flowchart of a process of properly denoising a pixel area where a lot of noise is involved according to an embodiment.

    [0025] FIG. 10 illustrates a reliability evaluation image according to an embodiment.

    [0026] FIG. 11 is a step-by-step view of images derived during a denoising process according to an embodiment.

    DETAILED DESCRIPTION

    [0027] Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings such that the present disclosure can be easily carried out by those skilled in the art to which the present disclosure pertains. The same reference numerals among the reference numerals in each drawing indicate the same members.

    [0028] In the description of the present disclosure, when it is determined that detailed descriptions of related publicly-known technologies may obscure the subject matter of the present disclosure, the detailed descriptions thereof will be omitted.

    [0029] The terms such as first and second may be used to describe various components, but the components are not limited by the terms, and the terms are used only to distinguish one component from another component.

    [0030] Hereinafter, the present disclosure will be described with reference to the related drawings.

    [0031] A process of acquiring an unwrapped image from multi-frequency images as illustrated in FIG. 3 will be described as an example. When a pulse signal with multiple frequencies, for example, a first frequency of 80.32 MHz and a second frequency of 60.24 MHz, is used, the phases of the first frequency and the second frequency coincide with each other again after four periods and three periods, respectively, from the same start point, and this behavior is repeated. Hereinafter, for convenience of description, such a period in which multiple frequencies are repeated is referred to as a ‘repetition period’ and is equal to the inverse of the greatest common divisor (GCD) of the first and second frequencies. Here, that GCD is 20.08 MHz, and accordingly, in the example of FIG. 3, the repetition period is 4.98 nanoseconds. Since the first frequency and the second frequency are pulses having different phases, the image may be divided into sections in which the phases change relative to each other within one repetition period. For example, in FIG. 3, in which a dashed line in the graph indicates a distance measurement produced using the first frequency of 80.32 MHz and a solid line indicates a distance measurement produced using the second frequency of 60.24 MHz, section {circle around (1)} is a section in which the first frequency and the second frequency overlap each other (that is, have identical phase angles and therefore produce images with identical distance values). Section {circle around (2)} is a section in which the first frequency transitions into a new cycle first, and the first frequency therefore produces a distance much less than the distance produced by the second frequency. Section {circle around (3)} is a section in which the second frequency transitions into a new cycle subsequent to the transition of the first frequency, and the first frequency therefore produces a distance greater than the distance produced by the second frequency. Section {circle around (4)}, section {circle around (5)}, and section {circle around (6)} may be described in a similar way.

    [0032] Therefore, in FIG. 3, the image with the first frequency is an image acquired during one repetition period, that is, while the pulse of 80.32 MHz repeats four times, and the image with the second frequency is an image acquired while the frequency of 60.24 MHz repeats three times. When one repetition period (here, 4.98 nanoseconds) is converted into a distance that can be measured without ambiguity by multiplying it by one-half the speed of light, the distance is 7.46 meters. For the first frequency of 80.32 MHz, the distance corresponding to one period is 1.87 meters, and for the second frequency of 60.24 MHz, the distance corresponding to one period is 2.49 meters.
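For illustration, the frequency-to-distance arithmetic of this paragraph can be sketched in Python (the function name `unambiguous_range` is a hypothetical label, not taken from the disclosure):

```python
# Sketch of the example values above: converting modulation frequencies
# to unambiguous measurement distances.
from math import gcd

C = 299_792_458.0  # speed of light, m/s

f1 = 80_320_000  # first frequency, Hz
f2 = 60_240_000  # second frequency, Hz

# The repetition period is the inverse of the GCD of the two frequencies.
f_gcd = gcd(f1, f2)  # 20,080,000 Hz = 20.08 MHz
t_rep = 1.0 / f_gcd  # ~4.98 nanoseconds

# A round-trip time t corresponds to a one-way distance of t * c / 2,
# so one period of a frequency f spans c / (2 * f) of depth.
def unambiguous_range(freq_hz: float) -> float:
    return C / (2.0 * freq_hz)

print(round(unambiguous_range(f_gcd), 2))  # 7.46 m (combined range)
print(round(unambiguous_range(f1), 2))     # 1.87 m (one period of f1)
print(round(unambiguous_range(f2), 2))     # 2.49 m (one period of f2)
```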

    [0033] When frequency information is properly used, the image acquired using the first frequency and the image acquired using the second frequency during the repetition period, or an image overlapped by a combination thereof may be restored through proper unwrapping, and the resulting image is referred to as an unwrapped image as illustrated in FIG. 3. In other words, for the two frequencies used in this example, six sections correspond to six cases, and from the information in the images acquired using the two frequencies, it is possible to know which of the cases a specific pixel corresponds to, so that it becomes possible to restore a proper image having proper distances by unwrapping an image.
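As an illustrative sketch of this idea, candidate wrap counts for the two frequencies can be compared and the pair whose candidate distances agree best can be selected; the brute-force rule below is one possible unwrapping scheme, not necessarily the one used by the disclosure:

```python
# Sketch of multi-frequency unwrapping: for each candidate pair of wrap
# counts (n1, n2), form candidate distances from both frequencies and
# keep the pair that agrees best.
R1 = 1.87  # distance of one period of the first frequency, meters
R2 = 2.49  # distance of one period of the second frequency, meters

def unwrap(d1_wrapped: float, d2_wrapped: float) -> float:
    """d1_wrapped lies in [0, R1); d2_wrapped lies in [0, R2)."""
    best = None
    for n1 in range(4):      # f1 repeats four times per repetition period
        for n2 in range(3):  # f2 repeats three times per repetition period
            c1 = d1_wrapped + n1 * R1
            c2 = d2_wrapped + n2 * R2
            err = abs(c1 - c2)
            if best is None or err < best[0]:
                best = (err, (c1 + c2) / 2.0)
    return best[1]

# A target at ~4.0 m wraps to 4.0 - 2*1.87 = 0.26 m (first frequency)
# and 4.0 - 2.49 = 1.51 m (second frequency); unwrapping recovers ~4.0 m.
print(round(unwrap(0.26, 1.51), 2))  # 4.0
```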

    [0034] However, there is a possibility that a pixel error will occur due to a problem such as noise introduced when an unwrapping operation is performed, noise in the image acquired using the first frequency, or noise in the image acquired using the second frequency. The example of FIG. 4 illustrates a pixel error that occurs when pixel data that, in the absence of the error, would belong to section {circle around (3)} is instead determined to belong to a section one period before or after due to the error. That is, the example of FIG. 4 illustrates a result in which a difference of 2πN, an integer multiple of one period, occurs due to erroneous unwrapping of the pixel data. Therefore, in an embodiment, if a first unwrapping (referred to as an ‘initial unwrapping’) is wrong, then when the pixel data is adjusted by 2πN through comparison with surrounding values, the corrected unwrapping (referred to as a ‘correction unwrapping’) result illustrated in FIG. 5 may be acquired.

    [0035] One embodiment of a process of performing correction unwrapping will be described with reference to the steps illustrated in FIG. 7. First, an image produced using a first frequency and an image produced using a second frequency are acquired with a TOF camera (steps S10 and S11). An unwrapped image is acquired from these images (step S20). Pixels in a certain area including erroneously unwrapped pixels, for example, the pixels of a 3×3 patch, are selected from the unwrapped image (step S30). Next, when the unwrapping sections of the pixels belonging to the area are expressed using a mathematical tool, for example, a histogram, the pixels are divided into properly unwrapped pixels and erroneously unwrapped pixels as illustrated in FIG. 6 (step S40). Most of the pixels in the patch are unwrapped to belong to section {circle around (3)}, but some pixels are unwrapped to belong to section {circle around (5)}, which is taken as an indication that the pixels in section {circle around (5)} are erroneously unwrapped. Then, the erroneously unwrapped pixels are compared with adjacent pixels, and it is determined whether to move the erroneously unwrapped pixels to the section to which most of the adjacent pixels belong (step S50). The erroneously unwrapped pixels are moved to section {circle around (3)} according to the comparison result (step S60). The movement is effectively performed by comparing the values of the erroneously unwrapped pixels with surrounding values; for this purpose, a mode filter is preferably used, and a 3×3 mode filter matching the 3×3 patch is suitable. In this way, a corrected unwrapping result may be acquired. This series of steps is illustrated in the flowchart of FIG. 7.
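A minimal sketch of the mode-filter portion of steps S40 to S60, assuming each pixel already carries a section label (the helper name is hypothetical):

```python
# Sketch of correction unwrapping with a 3x3 mode filter: each pixel's
# section label is replaced by the most common label in its 3x3 patch,
# so an isolated erroneous label is pulled back to the majority section.
from collections import Counter

def mode_filter_3x3(sections):
    """sections: 2-D list of section labels (e.g. 1..6 for six sections)."""
    h, w = len(sections), len(sections[0])
    out = [row[:] for row in sections]
    for y in range(h):
        for x in range(w):
            patch = [sections[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = Counter(patch).most_common(1)[0][0]
    return out

# One pixel erroneously unwrapped into section 5 amid section-3 neighbors
# is moved back to section 3.
labels = [[3, 3, 3],
          [3, 5, 3],
          [3, 3, 3]]
print(mode_filter_3x3(labels)[1][1])  # 3
```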

    [0036] In another embodiment, an area of an unwrapped image that includes a lot of noise is corrected. For example, an area including a lot of noise, such as the area illustrated in FIG. 8, may be corrected by using a mode filter or, in another embodiment, without using the mode filter. As illustrated in the histogram of FIG. 8, the properly unwrapped value belongs to section {circle around (2)} but is less prominent than the values erroneously unwrapped and dispersed into other sections due to involved noise. That is, in this example, the dispersion value is large. A large dispersion value means that the reliability of the image is reduced, and because of this characteristic, correction through comparison with neighboring pixels becomes more difficult. When such a case occurs, embodiments may remove noise based on an infrared intensity value calculated by the TOF camera. This process is based on the principle that when the pixel intensity value of an erroneously unwrapped pixel is similar to that of a neighboring pixel, the two pixels are highly likely to image the same subject, and therefore their depth values are also highly likely to be similar.

    [0037] Another embodiment of the present disclosure will be described below in more detail with reference to FIG. 9. An initial image is acquired from the TOF camera (step S110). In order to acquire the initial image, a calculation process using a phase restoration algorithm may be used as needed. An image is acquired by unwrapping the initial image (step S120), and a corrected unwrapped image is acquired by correcting that image (step S130). The process of acquiring the corrected unwrapped image may be the embodiment of the present disclosure described above. An image whose reliability may be evaluated through dispersion-based filtering is acquired from the initial image (step S220). In such a case, it is preferable to calculate a dispersion value as a reliability value, or to derive the reliability value from the dispersion value by using appropriate scaling. In the reliability evaluation, the reliability value of an area including a lot of noise must be set to be small. It is empirically known that a lot of noise is involved when a subject surface is black or when the orientation direction of the subject surface is changing. For reference, since the pixel values of an area including a lot of involved noise change significantly, the dispersion value thereof is also increased. When the dispersion value is low, a high reliability value is set. This is illustrated in FIG. 10, which shows an example of applying a process in which the reliability of an area including a lot of noise is set to be low based on a dispersion value in the image with the first frequency of 80.32 MHz. In particular, FIG. 10 illustrates a case where a lot of noise is involved in an area where the orientation direction of a subject surface changes.
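The dispersion-based reliability evaluation of step S220 can be sketched as follows; the `1 / (1 + variance)` mapping is one possible choice of the "appropriate scaling" mentioned above, not a formula given in the disclosure:

```python
# Sketch of a per-pixel reliability map: reliability is low where the
# local dispersion (variance) of the image is high, and high where the
# local dispersion is low.
import statistics

def reliability_map(image):
    h, w = len(image), len(image[0])
    rel = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = [image[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            var = statistics.pvariance(patch)
            # Low dispersion -> reliability near 1; high dispersion -> near 0.
            rel[y][x] = 1.0 / (1.0 + var)
    return rel

flat = [[1.0] * 3 for _ in range(3)]  # noiseless area: variance 0
noisy = [[1.0, 5.0, 1.0],
         [5.0, 1.0, 5.0],
         [1.0, 5.0, 1.0]]             # noisy area: large variance
print(reliability_map(flat)[1][1])    # 1.0
```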

    [0038] Next, an infrared intensity image is separately acquired from the initial image (step S320). The infrared intensity image is determined using the amplitude information of the initial image data for all of the phases; for example, in an embodiment, the four images respectively corresponding to phases Q1, Q2, Q3, and Q4 shown in FIG. 2 may be combined by summation to produce the infrared intensity image. For reference, the aforementioned reliability image uses a phase value or a frequency value of the initial image, whereas the infrared image uses the amplitude information. By performing a denoising step (S140) of removing noise from the corrected unwrapped image using the dispersion-based reliability image and the infrared intensity image, a resultant image is acquired. FIG. 11 illustrates the images acquired in each step for reference.
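The summation of the four phase-specific images described above can be sketched as (function name hypothetical):

```python
# Sketch: forming an infrared intensity image by summing the four
# phase-specific images Q1..Q4 pixel-wise.
def infrared_intensity(q1, q2, q3, q4):
    h, w = len(q1), len(q1[0])
    return [[q1[y][x] + q2[y][x] + q3[y][x] + q4[y][x] for x in range(w)]
            for y in range(h)]

# A single pixel with phase amplitudes 1, 2, 3, 4 sums to intensity 10.
print(infrared_intensity([[1.0]], [[2.0]], [[3.0]], [[4.0]])[0][0])  # 10.0
```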

    [0039] In the denoising step (S140), the intensity value of the infrared intensity image is used as a guide image for reference. Furthermore, in the processing of the initial image acquired from the TOF camera or in the denoising step, a well-known algorithm such as the Bilateral Solver may be used, or other algorithms may also be used. When the Bilateral Solver is applied, a denoised depth image may be acquired by substituting the pixel values of a high-reliability surrounding area for the pixel values of a low-reliability area, using the guide image and the reliability image to determine when to perform such substitutions.
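The substitution principle can be illustrated with a deliberately simplified single-pixel sketch; this is not the actual Bilateral Solver, and the threshold and sigma values below are arbitrary assumptions:

```python
# Simplified sketch of guide-image denoising: the depth of a
# low-reliability pixel is replaced by a weighted average of its
# neighbors, weighting each neighbor by its reliability and by its
# similarity in the infrared guide image (similar intensity suggests
# the same subject, hence a similar depth).
import math

def denoise_pixel(depth, rel, guide, y, x, thresh=0.5, sigma=10.0):
    if rel[y][x] >= thresh:  # already reliable: keep the value as-is
        return depth[y][x]
    h, w = len(depth), len(depth[0])
    num = den = 0.0
    for j in range(max(0, y - 1), min(h, y + 2)):
        for i in range(max(0, x - 1), min(w, x + 2)):
            if (j, i) == (y, x):
                continue
            wgt = rel[j][i] * math.exp(-((guide[j][i] - guide[y][x]) ** 2)
                                       / (2.0 * sigma ** 2))
            num += wgt * depth[j][i]
            den += wgt
    return num / den if den > 0 else depth[y][x]

depth = [[2.0, 2.0, 2.0],
         [2.0, 9.0, 2.0],   # noisy center value
         [2.0, 2.0, 2.0]]
rel = [[1.0, 1.0, 1.0],
       [1.0, 0.1, 1.0],    # low reliability at the noisy center
       [1.0, 1.0, 1.0]]
guide = [[5.0] * 3 for _ in range(3)]  # uniform infrared intensity
print(denoise_pixel(depth, rel, guide, 1, 1))  # 2.0
```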

    [0040] Although the present disclosure has been described with reference to the embodiments illustrated in the drawings, these are for illustrative purposes only, and those skilled in the art will appreciate that various modifications and other equivalent embodiments are possible from the embodiments. Thus, the true technical scope of the present disclosure should be defined according to the appended claims.