APPARATUS AND METHOD FOR GENERATING DEPTH MAP USING MONOCULAR IMAGE

20220351399 · 2022-11-03

Abstract

Disclosed is an apparatus for generating a depth map using a monocular image. The apparatus includes: a deep convolution neural network (DCNN) optimized based on an encoder and decoder architecture. The encoder extracts one or more features from the monocular image according to the number of provided feature layers, and the decoder calculates displacements of mismatched pixels from the features extracted from different feature layers, and generates the depth map for the monocular image.

Claims

1. An apparatus for generating a depth map using a monocular image, the apparatus comprising: a deep convolution neural network (DCNN) optimized based on an encoder and decoder architecture, wherein the encoder extracts one or more features from the monocular image according to the number of provided feature layers, and the decoder calculates displacements of mismatched pixels from the features extracted from different feature layers, and generates the depth map for the monocular image.

2. The apparatus according to claim 1, wherein the encoder is based on MobileNetV2 to be mounted on a drone or a small robot for fast computation.

3. The apparatus according to claim 1, wherein the decoder includes an encoder SE block for generating first channel information using the features extracted from different feature layers to enable channel attention, and outputting first major channel information from the first channel information.

4. The apparatus according to claim 3, wherein the decoder further includes: a high-density block for learning the features according to the number of density layers and a growth rate, and outputting a feature set; an up-sampling block for performing a Nearest Neighbor Interpolation operation using double scaling on the feature set to improve a resolution of the depth map; a decoder SE block for generating second channel information from the feature set up-sampled by the up-sampling block to enable channel attention, and outputting second major channel information from the second channel information; and a disparity convolution block for reactivating a weighting value of the second major channel information output from the decoder SE block using 3×3 convolution and a Sigmoid function, wherein the decoder performs decoding using all of the feature extracted from an arbitrary feature layer provided in the encoder, the first major channel information, and the second major channel information.

5. The apparatus according to claim 4, wherein the decoder SE block is skip-connected to the encoder SE block.

6. The apparatus according to claim 1, wherein the deep convolution neural network (DCNN) includes a pose estimation network (PoseNet) and a depth estimation network (DepthNet) to learn data sets on the basis of unsupervised learning, and estimates a shape of an object in the monocular image.

7. A method of generating a depth map using a monocular image, the method comprising the steps of: extracting one or more features from the monocular image according to the number of provided feature layers, by an encoder; and calculating displacements of mismatched pixels from the features extracted from different feature layers and generating the depth map for the monocular image, by a decoder.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is a view showing the configuration of an apparatus for generating a depth map using a monocular image of the present invention.

[0012] FIG. 2 is a view showing an encoder SE block according to an embodiment of the present invention.

[0013] FIG. 3 is a view showing (a) a high-density block and (b) a disparity convolution block according to an embodiment of the present invention.

DETAILED DESCRIPTION

[0014] Although general terms widely used at present are selected as terms used in this specification as much as possible considering the functions of the present invention, this may vary according to the intention of those skilled in the art, precedents, or emergence of new techniques. In addition, in specific cases, there are terms arbitrarily selected by an applicant, and in this case, the meaning of the terms will be described in detail in the corresponding description of the present invention. Therefore, the terms used in the present invention should be defined based on the meaning of the terms and the overall contents of the present invention, not by the simple names of the terms.

[0015] Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by those skilled in the art. The terms such as those defined in a commonly used dictionary should be interpreted as having a meaning consistent with the meaning in the context of related techniques, and should not be interpreted as an ideal or excessively formal meaning unless clearly defined in this application.

[0016] Apparatus for generating depth map using monocular image

[0017] Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings. FIG. 1 is a view showing the configuration of an apparatus for generating a depth map using a monocular image of the present invention. FIG. 2 is a view showing an encoder SE block 121 according to an embodiment of the present invention. FIG. 3 is a view showing (a) a high-density block 122 and (b) a disparity convolution block 125 according to an embodiment of the present invention.

[0018] First, referring to FIG. 1, a depth map generation apparatus using a monocular image of the present invention includes a deep convolution neural network (DCNN) 100 optimized based on an encoder 110 and decoder 120 architecture.

[0019] In addition, since the deep convolution neural network (DCNN) 100 is based on pixels when it estimates a depth from a monocular image, it should acquire semantic features and spatial information of an object to estimate the boundary of the object. Therefore, most preferably, the deep convolution neural network (DCNN) 100 may further include a pose estimation network (PoseNet) and a depth estimation network (DepthNet) to learn data sets on the basis of unsupervised learning.

[0020] First, since the deep convolution neural network (DCNN) 100 learns data sets on the basis of unsupervised learning, a data set including a Ground Truth Depth, i.e., a separate correct answer value, is not required, and accordingly, there is an effect of reducing the cost for providing a data set having a correct answer value.

[0021] In addition, the pose estimation network (PoseNet) may regress transformations between adjacent frames used for reconstructing a monocular image in the data set. For example, a 6 Degrees of Freedom (DoF) pose may be predicted based on the monocular image, where the first three dimensions represent a translation vector and the next three dimensions represent Euler angles. The depth estimation network (DepthNet) may calculate a loss that occurs during the unsupervised learning based on the output of the pose estimation network (PoseNet), and then estimate a depth map for each monocular image in the data set.
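The 6-DoF interpretation above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name and vector layout are assumptions consistent with the description (first three components: translation, last three: Euler angles).

```python
# Hypothetical sketch: interpreting a 6-DoF pose vector predicted by a
# pose estimation network. The first three components are read as a
# translation vector and the last three as Euler angles, as described above.

def split_pose(pose_6dof):
    """Split a 6-DoF pose vector into translation and Euler angles."""
    if len(pose_6dof) != 6:
        raise ValueError("expected a 6-DoF pose vector")
    translation = pose_6dof[:3]   # (tx, ty, tz)
    euler_angles = pose_6dof[3:]  # (roll, pitch, yaw)
    return translation, euler_angles

t, r = split_pose([0.1, 0.0, -0.2, 0.01, 0.02, 0.0])
print(t)  # [0.1, 0.0, -0.2]
print(r)  # [0.01, 0.02, 0.0]
```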

[0022] Next, in the architecture of the optimized deep convolution neural network (DCNN) 100, the encoder 110 extracts one or more features X.sub.i (i=1,2,3,4 . . . n) from the monocular image according to the number of provided feature layers. The number of features X.sub.i may be the same as the number of provided feature layers.

[0023] Most preferably, the encoder 110 may be based on MobileNetV2 to be mounted on a drone or a small robot for fast computation.

[0024] Conventionally, there are various CNN architectures such as SqueezeNet, MobileNet, MobileNetV2, and MobileNetV3 suitable for the encoder 110. All of these neural networks may classify objects in an image without the need for complex calculations and may be easily deployed to embedded systems in real time. However, SqueezeNet and MobileNet classify images with only a small amount of information from the input images and thus have a disadvantage of low accuracy. In addition, although MobileNetV3 is faster than MobileNetV2 in classifying images, it has a disadvantage of low accuracy in tasks such as image segmentation or object detection that require more pixel-based information. Therefore, most preferably, the encoder 110 of the present invention may be based on MobileNetV2 trained in advance using the ImageNet data set.

[0025] In addition, most preferably, the encoder 110 may have a first feature layer FL.sub.1 to a fifth feature layer FL.sub.5 provided with 16, 24, 32, 96, and 160 channels at the scales of ½, ¼, ⅛, 1/16, and 1/32, respectively. In addition, a first feature X.sub.1 to a fifth feature X.sub.5 may be extracted from the feature layers, respectively.
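The feature-layer geometry described above can be tabulated with a short sketch. This is illustrative only; the channel counts and scales are taken from the description, while the function name and the example input size are assumptions.

```python
# Hypothetical sketch of the encoder's feature-layer geometry: five
# layers with 16, 24, 32, 96, and 160 channels at scales 1/2 ... 1/32
# of the input resolution, as described above.

CHANNELS = [16, 24, 32, 96, 160]

def feature_shapes(height, width):
    """Return (channels, H_i, W_i) for feature layers FL_1..FL_5."""
    shapes = []
    for i, ch in enumerate(CHANNELS, start=1):
        scale = 2 ** i  # 2, 4, 8, 16, 32
        shapes.append((ch, height // scale, width // scale))
    return shapes

for ch, h, w in feature_shapes(192, 640):
    print(ch, h, w)
# e.g. feature layer 5 yields 160 channels at 6 x 20 for a 192 x 640 input
```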

[0026] Next, the decoder 120 calculates displacements of mismatched pixels from the features X.sub.i and X.sub.i+1 extracted from different feature layers, and generates a depth map for the monocular image.

[0027] The decoder 120 may include an encoder SE block 121, a high-density block 122, an up-sampling block 123, a decoder SE block 124, and a disparity convolution block 125 to generate a depth map for the monocular image.

[0028] More specifically, the encoder SE block 121 may generate first channel information C.sub.1 using the features X.sub.i and X.sub.i+1 extracted from different feature layers to enable channel attention, and output first major channel information CA.sub.1 from the first channel information C.sub.1.

[0029] Referring to FIG. 2, for example, the encoder SE block 121 is denoted by x.sub.n, and one or more encoder SE blocks may be provided. The encoder SE block 121 may receive a feature X.sub.i extracted from the i-th feature layer FL.sub.i of the encoder 110 and a feature X.sub.i+1 extracted from the i+1-th feature layer FL.sub.i+1 (i=1,2, . . . n). Then, the encoder SE block 121 may perform a global pooling process of generating the first channel information C.sub.1 by averaging and compressing the two features X.sub.i and X.sub.i+1.

[0030] In addition, the encoder SE block 121 may determine the first major channel information CA.sub.1 from the first channel information C.sub.1 using a fully-connected (FC) function, and activate the first major channel information CA.sub.1 with a higher weighting value using a ReLU function. This series of processes may be referred to as a Squeeze process.

[0031] In addition, the encoder SE block 121 may perform 1×1 convolution after expanding the compressed first major channel information CA.sub.1 using a fully-connected (FC) function and a Sigmoid function and then scaling the size. This series of processes may be referred to as an excitation process. Here, the 1×1 convolution may reduce the number of parameters for the entire operation by reducing the channels using a filter having a size of 1×1.
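The squeeze and excitation steps described in the last two paragraphs can be sketched in miniature. This is a minimal pure-Python illustration, not the patent's implementation: a real SE block uses learned fully-connected weights, whereas identity weights are assumed here purely to show the data flow (global pooling, ReLU gate, Sigmoid gate, channel-wise rescaling).

```python
import math

# Minimal sketch of squeeze-and-excitation: global average pooling
# compresses each channel map to one value (the channel information),
# a ReLU/Sigmoid gate derives a per-channel weighting, and each channel
# is rescaled by its gate. Learned FC weights are replaced by the
# identity here, purely for illustration.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_rescale(feature_maps):
    """feature_maps: list of 2-D channel maps (lists of rows)."""
    # Squeeze: average each channel down to a single scalar.
    channel_info = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                    for ch in feature_maps]
    # Excitation: gate each channel with ReLU then Sigmoid.
    gates = [sigmoid(max(0.0, c)) for c in channel_info]
    # Scale: reweight every pixel of each channel by its gate.
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gates)]
```

For example, a single-pixel channel holding the value 2.0 is rescaled by sigmoid(2.0), so channels with larger average activations retain proportionally more weight.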

[0032] Accordingly, since the encoder SE block 121 may extract one or more features X.sub.i only from a monocular image captured by one camera, not a stereo image captured by two or more cameras, there is an effect of remarkably reducing the number of stored or processed images. In addition, since the first major channel information CA.sub.1 may be output using the features X.sub.i and X.sub.i+1 extracted from two different feature layers FL.sub.i and FL.sub.i+1 in the encoder 110, the operation parameters of the decoder 120 may be reduced remarkably, and therefore, there is a remarkable effect of reducing the operation delay time.

[0033] Next, the high-density block 122 may learn the features according to the number of density layers DL and a growth rate, and then output a feature set XC.sub.i. In addition, the high-density block 122 may include a plurality of density layers DL.sub.i, and an arbitrary density layer DL.sub.i may receive a feature set XC.sub.i−1 obtained from a previous density layer DL.sub.i−1. In addition, the arbitrary density layer DL.sub.i may output a feature set XC.sub.i by adding the learned features to the feature set XC.sub.i−1 obtained from the previous density layer DL.sub.i−1.

[0034] Referring to FIG. 3(a), most preferably, the high-density block 122 may include a first density layer DL.sub.1 to a fourth density layer DL.sub.4 and may include a plurality of channels between the density layers, and the growth rate may be 32. Here, the channels may be classified into one input channel and a plurality of output channels. That is, for each of the density layers DL.sub.i (i=1, 2, 3, 4), the number of output channels of the high-density block 122 may increase by 32 according to the growth rate. Accordingly, the high-density block 122 may finally output a feature set XC.sub.i in the form of high-density collective knowledge.

[0035] Meanwhile, the high-density block 122 may further include 1×1 convolution to fuse the input channel and reduce the number of parameters for calculation. Accordingly, the high-density block 122 has an effect of alleviating a gradient loss problem, enhancing feature propagation, and enabling feature reuse.
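The channel bookkeeping of the high-density block can be sketched as follows. This is an illustrative sketch under the DenseNet-style scheme described above (each density layer concatenates its newly learned features onto those it received); the function name and the example input channel count are assumptions.

```python
# Hypothetical sketch: channel counts through a dense block with four
# density layers and a growth rate of 32. Each density layer appends
# 32 newly learned channels to the feature set from the previous layer.

GROWTH_RATE = 32
NUM_DENSITY_LAYERS = 4

def dense_block_channels(in_channels):
    """Channel count entering each density layer and leaving the block."""
    counts = [in_channels]
    for _ in range(NUM_DENSITY_LAYERS):
        counts.append(counts[-1] + GROWTH_RATE)  # concatenation adds 32 channels
    return counts

print(dense_block_channels(160))  # [160, 192, 224, 256, 288]
```

A 1×1 convolution can then fuse these accumulated channels back down, as the paragraph above notes, reducing the parameter count.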

[0036] Next, the up-sampling block 123 may perform a Nearest Neighbor Interpolation operation using double scaling for the feature set XC.sub.i to improve the resolution of the depth map.

[0037] Meanwhile, the up-sampling block 123 may perform up-sampling on the first major channel information CA.sub.1 output from the encoder SE block 121, as well as on the feature set XC.sub.i. In addition, the up-sampling block 123 may include 3×3 convolution and perform its operation by expanding the feature set XC.sub.i on which the 1×1 convolution has been performed by the high-density block 122.
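The nearest-neighbor interpolation with double scaling described above can be sketched in pure Python. This is an illustration of the interpolation rule only (each pixel fills a 2×2 neighborhood in the output); the function name is an assumption.

```python
# Sketch of Nearest Neighbor Interpolation with 2x (double) scaling:
# every input pixel is repeated to fill a 2 x 2 block in the output,
# doubling both the height and the width of the feature map.

def upsample_nearest_2x(channel):
    """channel: 2-D list of rows; returns the 2x-upsampled map."""
    out = []
    for row in channel:
        wide = [v for v in row for _ in range(2)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                     # duplicate the row
    return out

print(upsample_nearest_2x([[1, 2],
                           [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```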

[0038] Next, the decoder SE block 124 may generate second channel information C.sub.2 from the feature set XC.sub.i up-sampled by the up-sampling block 123 to enable channel attention, and output second major channel information CA.sub.2 from the second channel information C.sub.2.

[0039] For example, first, the decoder SE block 124 may receive the up-sampled feature set XC.sub.i from the up-sampling block 123. Then, the decoder SE block 124 may perform a global pooling process of generating the second channel information C.sub.2 by averaging and compressing the features aggregated in the feature set XC.sub.i at a high density.

[0040] In addition, the decoder SE block 124 may determine the second major channel information CA.sub.2 from the second channel information C.sub.2 using a fully-connected (FC) function, and activate the second major channel information CA.sub.2 with a higher weighting value using a ReLU function. This series of processes may be referred to as a Squeeze process.

[0041] In addition, the decoder SE block 124 may perform 1×1 convolution after expanding the compressed second major channel information CA.sub.2 using a fully-connected (FC) function and a Sigmoid function and then scaling the size. This series of processes may be referred to as an excitation process. Here, the 1×1 convolution may reduce the number of parameters for the entire operation by reducing the channels using a filter having a size of 1×1.

[0042] Therefore, since the decoder SE block 124 may output the second major channel information CA.sub.2 using the features aggregated in the feature set XC.sub.i at a high density, the operation parameters of the decoder 120 may be reduced remarkably, and therefore, there is a remarkable effect of reducing the operation delay time.

[0043] Next, referring to FIG. 3(b), the disparity convolution block 125 may reactivate the weighting value of the second major channel information CA.sub.2 output from the decoder SE block 124 using 3×3 convolution and a Sigmoid function.

[0044] Therefore, most preferably, the decoder 120 may perform decoding using all of the feature X.sub.i extracted from an arbitrary feature layer FL.sub.i provided in the encoder 110, the first major channel information CA.sub.1, and the second major channel information CA.sub.2 to generate the depth map.

[0045] Next, the decoder SE block 124 may be skip-connected to the encoder SE block 121. This skip-connection serves to obtain more semantic information from the monocular image. That is, the present invention may finally generate a depth map of further improved resolution by combining strong features of low resolution and weak features of high resolution through skip-connections between corresponding blocks.

[0046] Therefore, as the present invention has a deep convolution neural network (DCNN) 100 optimized based on an encoder and decoder architecture, although the depth map is generated using as few as about 4.1 million parameters, there is a remarkable effect of being applicable to a drone or a small robot by achieving a low delay time together with high accuracy and high resolution compared to the prior art.

[0047] Method of generating depth map using monocular image

[0048] Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings. The method of generating a depth map using a monocular image of the present invention includes an encoding step (S100) and a decoding step (S200). The overall layer configuration is as shown in [Table 1] and will be described below.

TABLE 1

  Layer  Description                                   Channels  Input dimension             Output dimension  Note
  #0     Input RGB image                                 3       H × W                       —

  Encoder layers: MobileNetV2
  #1     Feature layer 1                                3/16     H × W                       H/2 × W/2
  #2     Feature layer 2                               16/24     H/2 × W/2                   H/4 × W/4
  #3     Feature layer 3                               24/32     H/4 × W/4                   H/8 × W/8
  #4     Feature layer 4                               32/96     H/8 × W/8                   H/16 × W/16
  #5     Feature layer 5                               96/160    H/16 × W/16                 H/32 × W/32

  Decoder layers
  #6     eSE layer 4                                  256/160    H/32 × W/32 ⊕ H/16 × W/16   H/16 × W/16       U(#5) ⊕ #4
  #7     eSE layer 3                                  128/96     H/16 × W/16 ⊕ H/8 × W/8     H/8 × W/8         U(#4) ⊕ #3
  #8     eSE layer 2                                   56/32     H/8 × W/8 ⊕ H/4 × W/4       H/4 × W/4         U(#3) ⊕ #2
  #9     eSE layer 1                                   40/24     H/4 × W/4 ⊕ H/2 × W/2       H/2 × W/2         U(#2) ⊕ #1
  #10    Dense block 4 + Upsampling 4                 100/160    H/32 × W/32                 H/16 × W/16       Layer #5
  #11    Conv3 × 3 + dSE + dense block 4              416/160    H/16 × W/16                 H/16 × W/16       #10 ⊕ #6 ⊕ #4
  #12    Upsampling 3                                  96/96     H/16 × W/16                 H/8 × W/8         Upsampling #11
  #13    Conv3 × 3 + dSE + dense block 3              288/96     H/8 × W/8                   H/8 × W/8         #12 ⊕ #7 ⊕ #3
  #14    Upsampling 2                                  96/96     H/8 × W/8                   H/4 × W/4         Upsampling #13
  #15    Conv3 × 3 + dSE + dense block 2              152/32     H/4 × W/4                   H/4 × W/4         #14 ⊕ #8 ⊕ #2
  #16    Upsampling 1                                  32/32     H/4 × W/4                   H/2 × W/2         Upsampling #15
  #17    Dense block + Upsampling + Conv3 × 3 + dSE 1  80/24     H/2 × W/2                   H × W             #16 ⊕ #9 ⊕ #1

[0049] First, at the encoding step (S100), one or more features are extracted from a monocular image according to the number of provided feature layers, by the encoder 110.

[0050] According to the embodiment of [Table 1] and FIG. 2, at the encoding step (S100), a monocular image of an RGB format may be input. Here, the number of channels may be 3, and the input dimension may be H×W. In addition, at the encoding step (S100), a first feature X.sub.1 to a fifth feature X.sub.5 may be extracted from a first feature layer FL.sub.1 to a fifth feature layer FL.sub.5 provided in the encoder 110.

[0051] The first feature layer FL.sub.1 may include 16 channels, and scale the input dimension of H×W by ½ to output an output dimension of (H/2)×(W/2). Likewise, the second feature layer FL.sub.2 to the fifth feature layer FL.sub.5 may include 24, 32, 96, and 160 channels, respectively, and scale the input dimension of H×W by ¼, ⅛, 1/16, and 1/32, so that the fifth feature layer FL.sub.5 finally outputs an output dimension of (H/32)×(W/32).

[0052] Next, at the decoding step (S200), displacements of mismatched pixels are calculated from the features extracted from different feature layers, and a depth map is generated for the monocular image, by the decoder 120.

[0053] The decoding step (S200) may include an encoder SE step (S210), a high-density step (S220), an up-sampling step (S230), a decoder SE step (S240), and a disparity convolution step (S250) to generate a depth map for the monocular image.

[0054] First, at the encoder SE step (S210), first channel information C.sub.1 may be generated using the features X.sub.i and X.sub.i+1 extracted from different feature layers to enable channel attention, and first major channel information CA.sub.1 may be output from the first channel information C.sub.1, by the encoder SE block 121 in the decoder 120.

[0055] In other words, the encoder SE step (S210) may receive a feature X.sub.i extracted from the i-th feature layer FL.sub.i of the encoder 110 and a feature X.sub.i+1 extracted from the i+1-th feature layer FL.sub.i+1. Then, the encoder SE step (S210) may perform a global pooling process of generating the first channel information C.sub.1 by averaging and compressing the two features X.sub.i and X.sub.i+1.

[0056] Most preferably, the encoder SE step (S210) may be performed starting from the last feature layer. Referring to [Table 1], for example, at the encoder SE step (S210), the fifth feature X.sub.5 extracted from the fifth feature layer FL.sub.5 and the fourth feature X.sub.4 extracted from the fourth feature layer FL.sub.4 may be input into the encoder SE block 121, and the fourth feature X.sub.4 extracted from the fourth feature layer FL.sub.4 and the third feature X.sub.3 extracted from the third feature layer FL.sub.3 may be input into the encoder SE block 121, and this process may be performed by each encoder SE block 121. Finally, the second feature X.sub.2 extracted from the second feature layer FL.sub.2 and the first feature X.sub.1 extracted from the first feature layer FL.sub.1 may be input into the encoder SE block 121.
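The pairing order described above, starting from the last feature layer, can be sketched as follows. This is illustrative only: feature tensors are represented by name strings, and the function name is an assumption.

```python
# Sketch of the eSE input order described above: encoder SE blocks
# consume adjacent feature pairs starting from the deepest feature
# layer, i.e. (X5, X4), (X4, X3), (X3, X2), and finally (X2, X1).

features = ["X1", "X2", "X3", "X4", "X5"]

def ese_input_pairs(feats):
    """Yield (X_{i+1}, X_i) pairs from deepest to shallowest."""
    return [(feats[i], feats[i - 1]) for i in range(len(feats) - 1, 0, -1)]

print(ese_input_pairs(features))
# [('X5', 'X4'), ('X4', 'X3'), ('X3', 'X2'), ('X2', 'X1')]
```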

[0057] In addition, at the encoder SE step (S210), a global pooling process of generating the first channel information C.sub.1 may be performed by averaging and compressing the two features X.sub.i and X.sub.i+1.

[0058] In addition, at the encoder SE step (S210), the first major channel information CA.sub.1 may be determined from the first channel information C.sub.1 using a fully-connected (FC) function, and the first major channel information CA.sub.1 may be activated with a higher weighting value using a ReLU function. This series of processes may be referred to as a Squeeze process.

[0059] In addition, the encoder SE step (S210) may perform 1×1 convolution after expanding the compressed first major channel information CA.sub.1 using a fully-connected (FC) function and a Sigmoid function and then scaling the size. This series of processes may be referred to as an excitation process. Here, the 1×1 convolution may reduce the number of parameters for the entire operation by reducing the channels using a filter having a size of 1×1.

[0060] Accordingly, since one or more features X.sub.i may be extracted only from a monocular image captured by one camera, not a stereo image captured by two or more cameras, at the encoder SE step (S210), there is an effect of remarkably reducing the number of stored or processed images. In addition, since the first major channel information CA.sub.1 may be output using the features X.sub.i and X.sub.i+1 extracted from two different feature layers FL.sub.i and FL.sub.i+1 in the encoder 110, the operation parameters at the decoding step (S200) may be remarkably reduced, and therefore, there is a remarkable effect of reducing the operation delay time.

[0061] Next, at the high-density step (S220), a feature set XC.sub.i may be output after learning the features according to the number of density layers DL and a growth rate, by the high-density block 122 in the decoder 120. That is, at the high-density step (S220), an arbitrary density layer DL.sub.i may output a feature set XC.sub.i by adding the learned features to the feature set XC.sub.i−1 obtained from the previous density layer DL.sub.i−1.

[0062] Most preferably, at the high-density step (S220), the number of output channels of the high-density block 122 may be increased by 32 according to the growth rate for each density layer DL.sub.i (i=1, 2, 3, 4), by the high-density block 122 including a first density layer DL.sub.1 to a fourth density layer DL.sub.4. Accordingly, at the high-density step (S220), a feature set XC.sub.i may be finally output in the form of high-density collective knowledge.

[0063] Meanwhile, at the high-density step (S220), 1×1 convolution may be performed on the feature set XC.sub.i to reduce the number of parameters for calculation. Accordingly, there is an effect of alleviating a gradient loss problem, enhancing feature propagation, and enabling feature reuse.

[0064] Next, at the up-sampling step (S230), a Nearest Neighbor Interpolation operation may be performed for the feature set XC.sub.i using double scaling by the up-sampling block 123 to improve the resolution of the depth map.

[0065] Meanwhile, at the up-sampling step (S230), 3×3 convolution may be performed on the feature set XC.sub.i so that the feature set XC.sub.i on which the 1×1 convolution is performed at the high-density step (S220) may be expanded and calculated.

[0066] Next, at the decoder SE step (S240), second channel information C.sub.2 may be generated from the feature set XC.sub.i up-sampled at the up-sampling step (S230) to enable channel attention, and second major channel information CA.sub.2 may be output from the second channel information C.sub.2, by the decoder SE block 124 in the decoder 120.

[0067] For example, at the decoder SE step (S240), first, the up-sampled feature set XC.sub.i may be input from the up-sampling block 123. Then, at the decoder SE step (S240), a global pooling process of generating the second channel information C.sub.2 by averaging and compressing the features aggregated in the feature set XC.sub.i at a high density may be performed.

[0068] In addition, at the decoder SE step (S240), the second major channel information CA.sub.2 may be determined from the second channel information C.sub.2 using a fully-connected (FC) function, and the second major channel information CA.sub.2 may be activated with a higher weighting value using a ReLU function. This series of processes may be referred to as a Squeeze process.

[0069] In addition, the decoder SE step (S240) may perform 1×1 convolution after expanding the compressed second major channel information CA.sub.2 using a fully-connected (FC) function and a Sigmoid function and then scaling the size. This series of processes may be referred to as an excitation process. Here, the 1×1 convolution may reduce the number of parameters for the entire operation by reducing the channels using a filter having a size of 1×1.

[0070] Therefore, since the second major channel information CA.sub.2 may be output at the decoder SE step (S240) using the features aggregated in the feature set XC.sub.i at a high density, the operation parameters of the decoder 120 may be reduced remarkably, and therefore, there is a remarkable effect of also reducing the operation delay time.

[0071] Next, referring to FIG. 3(b), at the disparity convolution step (S250), the weighting value of the second major channel information CA.sub.2 output from the decoder SE block 124 may be reactivated using 3×3 convolution and a Sigmoid function, by the disparity convolution block 125 in the decoder 120.

[0072] Accordingly, most preferably, at the decoding step (S200), decoding may be performed using all of the feature X.sub.i extracted from an arbitrary feature layer FL.sub.i provided in the encoder 110, the first major channel information CA.sub.1, and the second major channel information CA.sub.2 to generate the depth map.

[0073] Therefore, as the method of generating a depth map using a monocular image of the present invention is provided with an encoding step (S100) and a decoding step (S200), although the depth map is generated using as few as about 4.1 million parameters, there is a remarkable effect of being applicable to a drone or a small robot that should mount lightweight software and hardware, as high accuracy and high resolution may be output and a low delay time is also achieved compared to the prior art.

[0074] As described above, although the embodiments have been described with reference to the limited embodiments and drawings, those skilled in the art may make various changes and modifications from the above descriptions. For example, even if the described techniques are performed in an order different from that of the described method, and/or components such as the systems, structures, devices, circuits, and the like described above are coupled or combined in a form different from that of the described method, or replaced or substituted by other components or equivalents, an appropriate result can be achieved.

[0075] Therefore, other implementations, other embodiments, and those equivalent to the claims also fall within the scope of the claims described below.

[0076] According to the present invention as described above, as an encoder for extracting one or more features from a monocular image according to the number of feature layers and a decoder for calculating displacements of mismatched pixels from the features extracted from different feature layers and generating a depth map for the monocular image are provided, there is an effect of applying the apparatus to a drone or a small robot that should mount lightweight software and hardware, by achieving a low delay time together with high accuracy and high resolution compared to the prior art, although the depth map is generated with only a small number of parameters.