INFORMATION PROCESSING APPARATUS, CORRECTION METHOD, AND PROGRAM
20220390577 · 2022-12-08
Inventors
CPC classification
G01S7/4868 (PHYSICS)
G01S17/894 (PHYSICS)
International classification
G01S7/4865 (PHYSICS)
Abstract
An information processing apparatus (10a) according to the present disclosure includes a control unit (60). The control unit (60) detects a saturation region of light reception image information generated based on a pixel signal output from a light receiving sensor, the light receiving sensor being configured to receive reflected light that is a reflection, by a measurement object, of projection light projected from a light source. The pixel signal is used to calculate a distance to the measurement object. The saturation region is a region, of the light reception image information generated based on the pixel signal, in which the pixel signal is saturated. The control unit (60) corrects the light reception image information of the saturation region based on the pixel signal.
Claims
1. An information processing apparatus comprising a control unit configured to execute processes including: detecting a saturation region of light reception image information generated based on a pixel signal output from a light receiving sensor, the light receiving sensor being configured to receive reflected light being reflection, by a measurement object, of projection light projected from a light source, the pixel signal being configured to be used to calculate a distance to the measurement object, the saturation region being a region of the light reception image information generated based on the pixel signal which is saturated; and correcting the light reception image information of the saturation region based on the pixel signal.
2. The information processing apparatus according to claim 1, wherein the light reception image information is image information generated in accordance with a component of the reflected light contained in the pixel signal.
3. The information processing apparatus according to claim 1, wherein the light reception image information is image information generated in accordance with a component of the reflected light and a component of ambient light, contained in the pixel signal.
4. The information processing apparatus according to claim 2, wherein the control unit corrects the pixel value of the saturation region based on a pixel value of the light reception image information adjacent to the saturation region in a non-saturation region where the pixel signal is not saturated.
5. The information processing apparatus according to claim 4, wherein the control unit corrects the pixel value in the saturation region using a correction value calculated based on an average value of the pixel values of the light reception image information located in surroundings of the saturation region in the non-saturation region where the pixel signal is not saturated.
6. The information processing apparatus according to claim 5, wherein the correction value is a value larger than the average value.
7. The information processing apparatus according to claim 4, wherein the control unit corrects the pixel value in the saturation region in accordance with a change rate of a reception light value calculated based on the component of the reflected light and the component of the ambient light, contained in the pixel signal.
8. A correction method comprising: detecting a saturation region of light reception image information generated based on a pixel signal output from a light receiving sensor, the light receiving sensor being configured to receive reflected light being reflection, by a measurement object, of projection light projected from a light source, the pixel signal being configured to be used to calculate a distance to the measurement object, the saturation region being a region of the light reception image information generated based on the pixel signal which is saturated; and correcting the light reception image information of the saturation region based on the pixel signal.
9. A program for causing a computer to function as a control unit that executes processes comprising: detecting a saturation region of light reception image information generated based on a pixel signal output from a light receiving sensor, the light receiving sensor being configured to receive reflected light being reflection, by a measurement object, of projection light projected from a light source, the pixel signal being configured to be used to calculate a distance to the measurement object, the saturation region being a region of the light reception image information generated based on the pixel signal which is saturated; and correcting the light reception image information of the saturation region based on the pixel signal.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0036] Embodiments of the present disclosure will be described below in detail with reference to the drawings. In each of the following embodiments, the same parts are denoted by the same reference symbols, and a repetitive description thereof will be omitted.
[0037] The present disclosure will be described in the following order.
[0038] 1. Introduction
[0039] 1.1. Configuration common to each embodiment
[0040] 1.2. Distance measurement by indirect ToF method applied to each embodiment
[0041] 1.3. Configuration applied to each embodiment
[0042] 2. First Embodiment
[0043] 2.1. Outline of correction process
[0044] 2.2. Configuration example of distance measuring device
[0045] 2.3. Correction process in distance measuring device
[0046] 3. Second Embodiment
[0047] 3.1. Configuration example of distance measuring device
[0048] 3.2. Correction process in distance measuring device
[0049] 4. Third Embodiment
[0050] 4.1. Configuration example of distance measuring device
[0051] 4.2. Correction process in distance measuring device
[0052] 5. Modification
[0053] 6. Conclusion
1. INTRODUCTION
1.1. Configuration Common to Each Embodiment
[0054] The present disclosure is suitable for use in a technique of performing distance measurement using light. Prior to the description of the embodiment of the present disclosure, an indirect time of flight (ToF) method will be described as one of distance measurement methods applied to the embodiment in order to facilitate understanding. The indirect ToF method is a technique of irradiating a measurement object with light from a light source (for example, laser light in an infrared region) modulated by, for example, pulse width modulation (PWM), receiving the reflected light by a light receiving element, and measuring a distance to the measurement object based on a phase difference in the received reflected light.
[0056] The distance measuring device 10 includes a light source unit 11, a light receiving unit 12, and a distance measurement processing unit 13. The light source unit 11 includes, for example: a light emitting element that emits light having a wavelength in the infrared region; and a drive circuit that drives the light emitting element to emit light. For example, a light emitting diode (LED) may be applied as the light emitting element included in the light source unit 11. The light emitting element is not limited thereto, and a vertical cavity surface emitting laser (VCSEL) in which a plurality of light emitting elements is formed in an array may also be applied as the light emitting element included in the light source unit 11. Hereinafter, unless otherwise specified, “the light emitting element of the light source unit 11 emits light” will be described as “the light source unit 11 emits light” or the like.
[0057] The light receiving unit 12 includes: a light receiving element that detects light having a wavelength in an infrared region; and a signal processing circuit that outputs a pixel signal corresponding to the light detected by the light receiving element, for example. A photodiode may be applied as the light receiving element included in the light receiving unit 12. Hereinafter, unless otherwise specified, “the light receiving element included in the light receiving unit 12 receives light” will be described as “the light receiving unit 12 receives light” or the like.
[0058] The distance measurement processing unit 13 executes a distance measurement process in the distance measuring device 10 in response to a distance measurement instruction from the application unit 20, for example. For example, the distance measurement processing unit 13 generates a light source control signal for driving the light source unit 11 and supplies the generated light source control signal to the light source unit 11. Furthermore, the distance measurement processing unit 13 controls light reception by the light receiving unit 12 in synchronization with a light source control signal supplied to the light source unit 11. For example, the distance measurement processing unit 13 generates an exposure control signal that controls an exposure period in the light receiving unit 12 in synchronization with the light source control signal, and supplies the generated signal to the light receiving unit 12. The light receiving unit 12 outputs a valid pixel signal within the exposure period indicated by the exposure control signal.
[0059] The distance measurement processing unit 13 calculates distance information based on the pixel signal output from the light receiving unit 12 in accordance with light reception. Furthermore, the distance measurement processing unit 13 may generate predetermined image information based on the pixel signal. The distance measurement processing unit 13 passes, to the application unit 20, the distance information and the image information calculated and generated based on the pixel signal.
[0060] In such a configuration, the distance measurement processing unit 13 generates a light source control signal for driving the light source unit 11 in accordance with an instruction to execute distance measurement from the application unit 20, for example, and supplies the generated light source control signal to the light source unit 11. Here, the distance measurement processing unit 13 generates a light source control signal modulated into a rectangular wave having a predetermined duty by PWM, and supplies the light source control signal to the light source unit 11. At the same time, the distance measurement processing unit 13 controls light reception by the light receiving unit 12 based on an exposure control signal synchronized with the light source control signal.
[0061] In the distance measuring device 10, the light source unit 11 emits light modulated in accordance with the light source control signal generated by the distance measurement processing unit 13. In the example of
[0062] The distance measurement processing unit 13 executes light reception by the light receiving unit 12 a plurality of times at different phases for each light receiving element. The distance measurement processing unit 13 calculates a distance D to the measurement object based on a difference between pixel signals due to light reception at different phases. Furthermore, the distance measurement processing unit 13 calculates: first image information obtained by extracting the component of the reflected light 32 based on the difference between the pixel signals; and second image information including the component of the reflected light 32 and the component of the ambient light. Hereinafter, the first image information is referred to as reflected light image information, and a value of each pixel of the reflected light image information is referred to as a pixel value Confidence (or a Confidence value). In addition, the second image information is referred to as IR image information, and a value of each pixel of the IR image information is referred to as a pixel value IR (or IR value). In addition, the reflected light image information and the IR image information are collectively referred to as light reception image information.
1.2. Distance Measurement by Indirect ToF Method Applied to Each Embodiment
[0063] Next, distance measurement by the indirect ToF method applied to each embodiment will be described.
[0064] The distance measurement processing unit 13 performs sampling a plurality of times for each of the phases on the pixel signal that has received the reflected light 32, and acquires a light amount value (pixel signal value) indicating the light amount for each sampling. In the example of
[0065] A method of calculating distance information in the indirect ToF method will be described more specifically with reference to
[0067] In the example of
[0068] On the other hand, in accordance with the exposure control signal from the distance measurement processing unit 13, the light receiving unit 12 starts an exposure period with phase 0° in synchronization with time point t.sub.0 of the projection timing of the projection light 30 in the light source unit 11. Similarly, the light receiving unit 12 starts exposure periods with the phase 90°, the phase 180°, and the phase 270° in accordance with the exposure control signal from the distance measurement processing unit 13. Here, the exposure period in each phase follows the duty of the projection light 30. Although the example of
[0069] In the example of
[0070] Also for the phase 90° and the phase 270°, which has a phase difference of 180° from the phase 90°, the integral value of the received light amount in the period in which the reflected light 32 arrives within each exposure period is acquired as light amount values C.sub.90 and C.sub.270, similarly to the case of the phases 0° and 180° described above.
[0071] Among these light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270, as shown in the following Formulas (1) and (2), a difference I and a difference Q are obtained based on a combination of light amount values having a phase difference of 180°.
I=C.sub.0−C.sub.180 (1)
Q=C.sub.90−C.sub.270 (2)
[0072] Based on these differences I and Q, the phase difference (phase) is calculated by the following Formula (3). In the Formula (3), the phase difference (phase) is defined in a range of (0≤phase<2π).
phase=tan.sup.−1(Q/I) (3)
[0073] The distance information Depth is calculated by the following Formula (4) using the phase difference (phase) and a predetermined coefficient (range).
Depth=(phase×range)/(2π) (4)
[0074] Furthermore, based on the differences I and Q, the component of the reflected light 32 (pixel value Confidence of the reflected light image information) can be extracted from the component of the light received by the light receiving unit 12. The pixel value Confidence of the reflected light image information is calculated by the following Formula (5) using absolute values of the differences I and Q.
Confidence=|I|+|Q| (5)
[0075] In this manner, one pixel of the reflected light image information is calculated from the light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270 in the four phases of the light receiving unit 12. The light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270 of each phase are acquired from the corresponding light receiving element of the light receiving unit 12.
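As an illustration only, Formulas (1) to (5) above can be sketched in Python (the function and variable names are hypothetical and do not appear in the disclosure):

```python
import math

def depth_from_phases(c0, c90, c180, c270, range_coeff):
    """Compute distance information and confidence from four phase samples.

    Implements Formulas (1)-(5): I and Q are differences of light amount
    values 180 degrees apart; the phase difference is folded into
    [0, 2*pi); depth scales that phase by range_coeff / (2*pi).
    """
    i = c0 - c180                                 # Formula (1)
    q = c90 - c270                                # Formula (2)
    phase = math.atan2(q, i) % (2 * math.pi)      # Formula (3), 0 <= phase < 2*pi
    depth = (phase * range_coeff) / (2 * math.pi) # Formula (4)
    confidence = abs(i) + abs(q)                  # Formula (5)
    return depth, confidence
```

For example, light amount values (10, 10, 0, 0) give I = Q = 10, a phase difference of π/4, and a confidence of 20.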
[0077] Therefore, the amount of light received by the light receiving unit 12 is the sum of the amount of the directly reflected light, the amount of the ambient light, and the dark noise. Calculating the above-described Formulas (1) to (3) and (5) cancels the components of the ambient light and the dark noise, thereby extracting the component of the directly reflected light.
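This cancellation can be checked numerically; in the following sketch (hypothetical values, not taken from the disclosure), a constant ambient-light and dark-noise offset added to every phase sample leaves the differences I and Q unchanged:

```python
# A constant offset (ambient light + dark noise) present in every phase
# cancels in the differences I = C0 - C180 and Q = C90 - C270.
def iq(c0, c90, c180, c270):
    return c0 - c180, c90 - c270

samples = (12.0, 7.0, 3.0, 8.0)      # hypothetical light amount values
offset = 5.0                         # ambient light + dark noise, same in each phase
shifted = tuple(s + offset for s in samples)

assert iq(*samples) == iq(*shifted)  # both yield (9.0, -1.0)
```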
[0078] Next, a method of acquiring each of the light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270 of each of the phases and a method of calculating the distance information and the pixel value Confidence of the reflected light image information will be described more specifically with reference to
[0079] (First Method)
[0081] At time point t.sub.18 after a predetermined time sandwiched between time point t.sub.17 and time point t.sub.18, the operation from time point t.sub.10 described above is executed again.
[0082] Here, a sequence of performing exposure with each phase is assumed to be one μFrame. In the example of
[0083] The distance measurement processing unit 13 stores, in memory, for example, the light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270 sequentially acquired in each phase acquired within a period of one μFrame. The distance measurement processing unit 13 calculates the distance information Depth and the pixel value Confidence of the reflected light image information based on each of the light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270 stored in the memory.
[0084] In this case, the differences I and Q, the phase difference (phase), and the distance information Depth are calculated by the above-described Formulas (1) to (4). Furthermore, here, the pixel value Confidence of the reflected light image information is calculated using the following Formula (6).
Confidence=(I.sup.2+Q.sup.2).sup.1/2 (6)
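As a minimal sketch (Python; names are illustrative), Formula (6) expresses the confidence as the magnitude of the (I, Q) vector rather than the sum of absolute values used in Formula (5):

```python
import math

def confidence_amplitude(i, q):
    """Formula (6): confidence as the magnitude of the (I, Q) vector."""
    return math.hypot(i, q)  # sqrt(I**2 + Q**2)
```

For instance, I = 3 and Q = 4 give a confidence of 5, whereas Formula (5) would give 7 for the same inputs.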
[0085] (Second Method)
[0087] A method of calculating distance information using the 2-tap method will be described more specifically with reference to
[0089] In the example of
[0090] On the other hand, in accordance with the exposure control signal (DIMIX_A) from the distance measurement processing unit 13, the light receiving unit 12 starts an exposure period in synchronization with time point t.sub.10 of the projection timing of the projection light 30 in the light source unit 11. Similarly, in accordance with the exposure control signal (DIMIX_B) from the distance measurement processing unit 13, the light receiving unit 12 starts an exposure period in synchronization with time point t.sub.12 having a phase difference 180° from DIMIX_A. With this operation, the light receiving unit 12 acquires the light amount values (pixel signals) A.sub.0 and B.sub.0 for each of the taps A and B in phase 0°.
[0091] In the example of
[0094] For example, DIMIX_A of phase 90° is an exposure control signal at a phase shifted by 90° from the projection timing of the projection light 30, while DIMIX_B of phase 90° is an exposure control signal having a phase difference 180° from DIMIX_A of phase 90°. In addition, DIMIX_A of phase 180° is an exposure control signal having a phase shifted by 180° from the projection timing of the projection light 30, while DIMIX_B of phase 180° is an exposure control signal having a phase difference of 180° from DIMIX_A of phase 180°. DIMIX_A of phase 270° is an exposure control signal having a phase shifted by 270° from the projection timing of the projection light 30, while DIMIX_B of phase 270° is an exposure control signal having a phase difference of 180° from DIMIX_A of phase 270°. Here, the exposure period in each phase follows the duty of the projection light 30.
[0095] The phase difference of readout by the tap A and the tap B in the light receiving unit 12 will be described with reference to
[0096] In
[0098] That is, in the example of
[0099] Similarly, exposure at phase 180° is performed in a period from time point t.sub.24 to time point t.sub.25 after a predetermined time sandwiched between time point t.sub.23 and time point t.sub.24. The distance measurement processing unit 13 obtains the light amount value A.sub.180 and the light amount value B.sub.180 based on the pixel signals read by the tap A and the tap B, respectively. Furthermore, the light receiving unit 12 performs exposure at phase 270° in a period of time point t.sub.26 to time point t.sub.27 after a predetermined time sandwiched between time point t.sub.25 and time point t.sub.26. The distance measurement processing unit 13 obtains the light amount value A.sub.270 and the light amount value B.sub.270 based on the pixel signals read by the tap A and the tap B, respectively.
[0100] At a time point t.sub.28 after a predetermined time sandwiched between time point t.sub.27 and time point t.sub.28, the operation from time point t.sub.20 described above is executed again.
[0101] The method of sequentially executing the readout by the taps A and B for the phases 0°, 90°, 180°, and 270° and obtaining the light amount values based on the readout by the taps A and B for individual phases illustrated in
[0102] In the case of this second method, the differences I and Q are respectively calculated by the following Formulas (7) and (8) using the individual light amount values A.sub.0 and B.sub.0, A.sub.90 and B.sub.90, A.sub.180 and B.sub.180, and A.sub.270 and B.sub.270.
I=C.sub.0−C.sub.180=(A.sub.0−B.sub.0)−(A.sub.180−B.sub.180) (7)
Q=C.sub.90−C.sub.270=(A.sub.90−B.sub.90)−(A.sub.270−B.sub.270) (8)
[0103] The phase difference (phase), the distance information Depth, and the pixel value Confidence of the reflected light image information are calculated by the above-described Formulas (3), (4), and (6) using the differences I and Q respectively calculated by the Formulas (7) and (8).
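A hedged sketch of Formulas (7) and (8) in Python (the dictionary-based interface and all names are assumptions for illustration, not part of the disclosure):

```python
def iq_two_tap_four_phase(a, b):
    """Formulas (7) and (8): I and Q from the 2-tap (4 phase) readout.

    `a` and `b` map each phase (0, 90, 180, 270) to the light amount
    value read by tap A and tap B in that phase; each single-tap light
    amount value is recovered as C_phase = A_phase - B_phase.
    """
    i = (a[0] - b[0]) - (a[180] - b[180])    # Formula (7): I = C0 - C180
    q = (a[90] - b[90]) - (a[270] - b[270])  # Formula (8): Q = C90 - C270
    return i, q
```

The resulting I and Q feed into Formulas (3), (4), and (6) exactly as in the first method.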
[0104] In the 2-tap method (4 phase) illustrated in
[0105] (Third Method)
[0107] As illustrated in
[0108] In the case of
[0109] At a time point t.sub.34 after a predetermined time sandwiched between time point t.sub.33 and time point t.sub.34, the operation from time point t.sub.30 described above is executed again.
[0110] The method of sequentially executing the readout by the taps A and B for the phases 0° and 90° and obtaining the light amount values based on the readout of the taps A and B for the phases 0° and 90° illustrated in
[0111] As described above, the exposure control signals DIMIX_A and DIMIX_B in the tap A and the tap B of each phase are signals having inverted phases. Therefore, DIMIX_A of phase 0° and DIMIX_B of phase 180° are signals having the same phase. Similarly, DIMIX_B of phase 0° and DIMIX_A of phase 180° are signals having the same phase. In addition, DIMIX_A of phase 90° and DIMIX_B of phase 270° are signals having the same phase, and DIMIX_B of phase 90° and DIMIX_A of phase 270° are signals having the same phase.
[0112] Therefore, the light amount value B.sub.0 is the same as the readout value of the light receiving unit 12 at the phase 180°, while the light amount value B.sub.90 is the same as the readout value of the light receiving unit 12 at the phase 270°. In other words, readout at phase 0° is equivalent to readout at both phase 0° and phase 180°, which has a phase difference of 180° from phase 0°. Similarly, readout at phase 90° is equivalent to readout at both phase 90° and phase 270°, which has a phase difference of 180° from phase 90°.
[0113] That is, for example, it can be said that the exposure period of the tap B at phase 0° is the exposure period at phase 180°. It can also be said that the exposure period of tap B at phase 90° is the exposure period at phase 270°. Accordingly, in the case of the third method, the differences I and Q are respectively calculated by the following Formulas (9) and (10) using the light amount values A.sub.0 and B.sub.0, and A.sub.90 and B.sub.90.
I=C.sub.0−C.sub.180=(A.sub.0−B.sub.0) (9)
Q=C.sub.90−C.sub.270=(A.sub.90−B.sub.90) (10)
[0114] The phase difference (phase), the distance information Depth, and the pixel value Confidence of the reflected light image information can be calculated by the above-described Formulas (3), (4), and (6) using the differences I and Q respectively calculated by the Formulas (9) and (10).
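Under the third method, Formulas (9) and (10) reduce to single tap differences, as in this illustrative Python sketch (names are hypothetical):

```python
def iq_two_tap_two_phase(a0, b0, a90, b90):
    """Formulas (9) and (10): with 2-tap (2 phase) readout, tap B of
    phase 0 doubles as the phase-180 sample and tap B of phase 90 as
    the phase-270 sample, so only two phases need to be exposed."""
    return a0 - b0, a90 - b90  # (I, Q)
```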
[0115] In this manner, two readout circuits (the tap A and the tap B) are provided for one light receiving element, and readout using the tap A and the tap B is sequentially executed. With this configuration, exposure periods having a phase difference of 180° can be implemented in one phase (for example, phase 0°). Therefore, in the 2-tap method (2 phase) illustrated in
[0116] Here, an example of a method of calculating the pixel value IR of IR image information will be described. As described above, the IR image information is image information including the component of the reflected light 32 and the component of the ambient light. On the other hand, the light received by the light receiving unit 12 includes a DC component such as dark current (dark noise) in addition to the component of the reflected light 32 and the component of the ambient light. Therefore, the IR image information is calculated by subtracting the DC component from the pixel signal output from the light receiving unit 12. Specifically, the pixel value IR of the IR image information is calculated using the following Formula (11).
IR=C.sub.0−C.sub.FPN=(A.sub.0−A.sub.FPN)+(B.sub.0−B.sub.FPN) (11)
[0117] Here, C.sub.FPN, A.sub.FPN, and B.sub.FPN are DC components such as dark current (dark noise), and are fixed pattern noise. It is assumed that C.sub.FPN, A.sub.FPN, and B.sub.FPN are obtained in advance by experiments, simulations, or the like.
[0118] Alternatively, C.sub.FPN, A.sub.FPN, and B.sub.FPN may be, for example, pixel signals output from the light receiving unit 12 when the light receiving unit 12 does not receive light. In this case, for example, it is assumed that such a pixel signal is acquired by the distance measuring device 10 acquiring a signal output from the light receiving unit 12 before the light source unit 11 projects the projection light 30.
[0119] Although Formula (11) is a case of calculating the pixel value IR of the IR image information at phase 0°, the pixel value IR of the IR image information may be calculated in a similar manner for other phases (phase 90°, 180° and 270°). In this case, for example, an average value of the pixel values IR calculated for each phase may be used as the pixel value IR of the IR image information calculated from the reflected light 32.
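A minimal Python sketch of Formula (11) applied per phase and then averaged, as paragraph [0119] suggests (the dictionary interface and names are assumptions for illustration):

```python
def ir_pixel(a, b, a_fpn, b_fpn):
    """Pixel value IR per Formula (11), averaged over phases.

    `a` and `b` map each phase to the tap-A/tap-B light amount values;
    a_fpn and b_fpn are the fixed pattern noise (dark current) components
    subtracted from each tap before summing.
    """
    per_phase = [(a[p] - a_fpn) + (b[p] - b_fpn) for p in a]
    return sum(per_phase) / len(per_phase)
```

With hypothetical values a = {0: 10, 90: 12}, b = {0: 6, 90: 8}, and a fixed pattern noise of 1 per tap, the per-phase IR values are 14 and 18, averaging to 16.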
1.3. Configuration Applied to Each Embodiment
[0120] Next, an example of a configuration applied to each embodiment will be described.
[0121] The electronic device 1 illustrated in
[0122] The storage 103 is a nonvolatile storage medium such as flash memory or a hard disk drive. The storage 103 can store various data and programs needed for the CPU 100 to operate. In addition, the storage 103 can store an application program (hereinafter, abbreviated as an application) for implementing the application unit 20 described with reference to
[0123] The CPU 100 operates using the RAM 102 as work memory in accordance with the program stored in the storage 103 or the ROM 101 so as to control the entire operation of the electronic device 1.
[0124] The UI unit 104 includes various operators needed for operating the electronic device 1, a display element for displaying the state of the electronic device 1, and the like. The UI unit 104 may further include a display that displays an image captured by the sensor unit 111 described below. In addition, this display may be a touch panel integrating a display device and an input device, and various operators may be formed by components displayed on the touch panel.
[0125] The light source unit 110 includes a light emitting element such as an LED or a VCSEL, and a driver needed for driving the light emitting element. In the light source unit 110, the driver generates a drive signal having a predetermined duty in response to an instruction from the CPU 100. The light emitting element emits light in accordance with the drive signal generated by the driver, and projects light modulated by PWM as projection light 30.
[0126] The sensor unit 111 includes: a pixel array unit having a plurality of light receiving elements arranged in an array; and a drive circuit that drives the plurality of light receiving elements arranged in the pixel array unit and outputs a pixel signal read from each light receiving element. The pixel signal output from the sensor unit 111 is supplied to the CPU 100.
[0127] Next, the sensor unit 111 applied to each embodiment will be described with reference to
[0129] A pixel area 1111 includes a plurality of pixels 1112 arranged in an array on the sensor chip 1110. For example, an image signal of one frame is formed based on pixel signals output from the plurality of pixels 1112 included in the pixel area 1111. Each of the pixels 1112 arranged in the pixel area 1111 can receive infrared light, performs photoelectric conversion based on the received infrared light, and outputs an analog pixel signal, for example. Each of the pixels 1112 included in the pixel area 1111 is connected to two vertical signal lines, namely, vertical signal lines VSL.sub.1 and VSL.sub.2.
[0130] The sensor unit 111 further includes a vertical drive circuit 1121, a column signal processing unit 1122, a timing control circuit 1123, and an output circuit 1124 arranged on the circuit chip 1120.
[0131] The timing control circuit 1123 controls the drive timing of the vertical drive circuit 1121 in accordance with an element control signal supplied from the outside via a control line 150. Furthermore, the timing control circuit 1123 generates a vertical synchronization signal based on the element control signal. The column signal processing unit 1122 and the output circuit 1124 execute individual processes in synchronization with the vertical synchronization signal generated by the timing control circuit 1123.
[0132] The vertical signal lines VSL.sub.1 and VSL.sub.2 are wired in the vertical direction in
[0133] The vertical signal line VSL.sub.1 is used to output a pixel signal AIN.sub.P1 that is an analog pixel signal based on the electric charge of the tap A of the pixel 1112 in the corresponding pixel column. The vertical signal line VSL.sub.2 is used to output a pixel signal AIN.sub.P2 that is an analog pixel signal based on the charge of the tap B of the pixel 1112 in the corresponding pixel column.
[0134] Under the timing control of the timing control circuit 1123, the vertical drive circuit 1121 drives each of the pixels 1112 included in the pixel area 1111 in units of pixel rows and outputs the pixel signals AIN.sub.P1 and AIN.sub.P2. The pixel signals AIN.sub.P1 and AIN.sub.P2 output from the respective pixels 1112 are supplied to the column signal processing unit 1122 via the vertical signal lines VSL.sub.1 and VSL.sub.2 of the respective columns.
[0135] The column signal processing unit 1122 includes a plurality of AD converters provided for each pixel column corresponding to the pixel column of the pixel area 1111, for example. Each AD converter included in the column signal processing unit 1122 performs AD conversion on the pixel signals AIN.sub.P1 and AIN.sub.P2 supplied via the vertical signal lines VSL.sub.1 and VSL.sub.2, and supplies the pixel signals AIN.sub.P1 and AIN.sub.P2 converted into digital signals to the output circuit 1124.
[0136] The output circuit 1124 performs signal processing such as correlated double sampling (CDS) processing on the pixel signals AIN.sub.P1 and AIN.sub.P2 converted into digital signals and output from the column signal processing unit 1122, and outputs the pixel signals AIN.sub.P1 and AIN.sub.P2 subjected to the signal processing to the outside of the sensor unit 111 via an output line 51 as a pixel signal read from the tap A and a pixel signal read from the tap B, respectively.
[0138] The photodiode 231 is a light receiving element that photoelectrically converts received light to generate a charge. When the surface of the semiconductor substrate on which the circuit is disposed is defined as the front surface, the photodiode 231 is disposed on the back surface of the substrate. A solid-state imaging element like this is referred to as a back-illuminated solid-state imaging element. Instead of the back-illuminated type, it is also possible to use a front-illuminated configuration in which the photodiode 231 is arranged on the front surface.
[0139] An overflow transistor 242 is connected between a cathode electrode of the photodiode 231 and a power supply line VDD, and has a function of resetting the photodiode 231. That is, the overflow transistor 242 is turned on in response to the overflow gate signal OFG supplied from the vertical drive circuit 1121, thereby sequentially discharging the charge of the photodiode 231 to the power supply line VDD.
[0140] The transfer transistor 232 is connected between the cathode of the photodiode 231 and the floating diffusion layer 234. Furthermore, the transfer transistor 237 is connected between the cathode of the photodiode 231 and the floating diffusion layer 239. The transfer transistors 232 and 237 sequentially transfer the charges generated by the photodiode 231 to the floating diffusion layers 234 and 239, respectively, in accordance with a transfer signal TRG supplied from the vertical drive circuit 1121.
[0141] The floating diffusion layers 234 and 239 corresponding to the taps A and B accumulate the charges transferred from the photodiode 231, convert the charges into voltage signals of voltage values corresponding to the accumulated charge amounts, and respectively generate pixel signals AIN.sub.P1 and AIN.sub.P2 which are analog pixel signals.
[0142] In addition, the two reset transistors 233 and 238 are connected between the power supply line VDD and each of the floating diffusion layers 234 and 239. The reset transistors 233 and 238 are turned on in accordance with reset signals RST and RST.sub.p supplied from the vertical drive circuit 1121, thereby extracting charges from the floating diffusion layers 234 and 239, respectively, and initializing the floating diffusion layers 234 and 239.
[0143] The two amplification transistors 235 and 240 are connected between the power supply line VDD and each of the selection transistors 236 and 241. The amplification transistors 235 and 240 each amplify a voltage signal obtained by converting a charge into a voltage in each of the floating diffusion layers 234 and 239.
[0144] The selection transistor 236 is connected between the amplification transistor 235 and the vertical signal line VSL.sub.1. In addition, the selection transistor 241 is connected between the amplification transistor 240 and the vertical signal line VSL.sub.2. The selection transistors 236 and 241 are turned on in accordance with the selection signals SEL and SEL.sub.p supplied from the vertical drive circuit 1121, thereby outputting the pixel signals AIN.sub.P1 and AIN.sub.P2 amplified by the amplification transistors 235 and 240 to the vertical signal line VSL.sub.1 and the vertical signal line VSL.sub.2, respectively.
[0145] The vertical signal line VSL.sub.1 and the vertical signal line VSL.sub.2 connected to the pixel 1112 are connected to an input end of one AD converter included in the column signal processing unit 1122 for each pixel column. The vertical signal line VSL.sub.1 and the vertical signal line VSL.sub.2 supply the pixel signals AIN.sub.P1 and AIN.sub.P2 output from the pixels 1112 to the AD converters included in the column signal processing unit 1122 for each pixel column.
[0146] The stacked structure of the sensor unit 111 will be schematically described with reference to
[0147] As an example, the sensor unit 111 is formed with a two-layer structure in which semiconductor chips are stacked in two layers.
[0148] The circuit unit includes, for example, the vertical drive circuit 1121, the column signal processing unit 1122, the timing control circuit 1123, and the output circuit 1124. Note that the sensor chip 1110 may include the pixel area 1111 and the vertical drive circuit 1121, for example. As illustrated on the right side of
[0149] As another example, the sensor unit 111 is formed by a three-layer structure in which semiconductor chips are stacked in three layers.
2. FIRST EMBODIMENT
2.1. Outline of Correction Process
[0150] Next, a first embodiment of the present disclosure will be described. The distance measuring device according to the present embodiment generates reflected light image information in addition to the distance D to the measurement object based on the reflected light 32 received by the light receiving unit 12. At this time, for example, when the reflected light 32 received by the light receiving unit 12 has high intensity and the light intensity is saturated, reflected light image information might be generated with degraded accuracy. Hereinafter, with reference to
[0151] As described above, the reflected light 32 includes ambient light and dark noise in addition to the direct reflected light reflected by the measurement object 31. For example, in a case where the ambient light in particular has high intensity, the reflected light 32 received by the light receiving unit 12 might have high intensity, leading to a possibility of saturation of the light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270. Likewise, when the projection light has high intensity, when the measurement object 31 has high reflectance, or when the distance D to the measurement object 31 is short, the intensity of the reflected light 32 received by the light receiving unit 12 might increase, leading to the possibility of saturation of the light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270. Here, as illustrated in graph G1 of
[0152] Here, the pixel value Confidence of the reflected light image information is calculated by the above-described Formulas (5) to (8). When the light amount values C.sub.0, C.sub.90, C.sub.180, and C.sub.270 are saturated at the light amount value C.sub.max, the I and Q components become zero, and the pixel value Confidence of the reflected light image information also becomes zero. In this manner, when the reflected light 32 received by the light receiving unit 12 has high intensity and the light receiving element is saturated, for example, as illustrated in an image 12 of
[0153] In this manner, occurrence of discontinuity in the reflected light image information might lead to a problem in processes in the application unit 20. For example, when the application unit 20 recognizes the saturation region R.sub.sa of the reflected light image information as a feature, an error might occur in the recognition result of the reflected light image information. For example, in a case where the application unit 20 performs face recognition using reflected light image information, recognition of the saturation region R.sub.sa as a facial feature (for example, a mole) can lead to a possibility of a failure in correct execution of face recognition.
[0154] In view of this, the first embodiment of the present disclosure corrects the pixel value Confidence in the saturation region R.sub.sa of the reflected light image information, thereby canceling the discontinuity of the reflected light image information. This makes it possible to suppress a degradation in the generation accuracy of the reflected light image information, leading to suppression of the occurrence of a problem in the application unit 20.
[0155] Specifically, the correction method according to the first embodiment of the present disclosure corrects the pixel value Confidence in the saturation region R.sub.sa of the reflected light image information from zero to a predetermined value. In the example illustrated in
[0156] With this configuration, as illustrated in an image 13 of
2.2. Configuration Example of Distance Measuring Device
[0157]
[0158] In the following, for the sake of explanation, it is assumed that a 2-tap method (4 phase) is applied to the acquisition of each light amount value and the calculation of each piece of information at each phase of 0°, 90°, 180°, and 270° in the light receiving unit 12. Note that acquisition of each light amount value and calculation of each piece of information may be performed using a method other than the 2-tap method (4 phase).
[0159] The control unit 40 generates a light source control signal and supplies the generated signal to the light source unit 11. The light source control signal includes, for example, information that designates a duty in PWM modulation, intensity of light emitted by the light source unit 11, light emission timing, and the like. The light source unit 11 projects the projection light 30 (refer to
[0160] Furthermore, the control unit 40 generates an exposure control signal and supplies the generated signal to the light receiving unit 12. The exposure control signal includes information to control the light receiving unit 12 to perform exposure with an exposure length based on the duty of the light source unit 11 in each of different phases. Furthermore, the exposure control signal further includes information for controlling the exposure amount in the light receiving unit 12.
[0161] The pixel signal of each of phases output from the light receiving unit 12 is supplied to the distance measuring unit 50. The distance measuring unit 50 calculates the distance information Depth and the pixel value Confidence of the reflected light image information based on the pixel signal of each of phases supplied from the light receiving unit 12. The distance measuring unit 50 passes the calculated distance information Depth and the pixel value Confidence of the reflected light image information to the application unit 20, for example.
[0162] Here, the pixel value Confidence of the reflected light image information will be described with reference to
[0163] As illustrated in
I=(A.sub.0−B.sub.0)−(A.sub.180−B.sub.180) (7)
Q=(A.sub.90−B.sub.90)−(A.sub.270−B.sub.270) (8)
Confidence=|I|+|Q| (5)
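The calculation in Formulas (7), (8), and (5) above can be sketched as follows. This is a minimal illustration assuming the 2-tap, 4-phase light amount values are held per phase; the function and variable names are hypothetical, not from the patent.

```python
def confidence_from_taps(a, b):
    """Compute the reflected-light pixel value Confidence from 2-tap,
    4-phase readings. a[phi] and b[phi] are the tap-A and tap-B light
    amount values at phase phi (0, 90, 180, 270 degrees). Names are
    illustrative, not from the patent."""
    i = (a[0] - b[0]) - (a[180] - b[180])      # Formula (7)
    q = (a[90] - b[90]) - (a[270] - b[270])    # Formula (8)
    return abs(i) + abs(q)                     # Formula (5)

# When every tap value clips at the same saturation level C_max,
# the differences I and Q both vanish, so Confidence collapses to zero.
c_max = 1023
saturated = {p: c_max for p in (0, 90, 180, 270)}
confidence_from_taps(saturated, saturated)  # -> 0
```

This collapse to zero is exactly the discontinuity (saturation region R.sub.sa) that the correction process targets.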
[0164] As illustrated in
[0165] Returning to
[0166] That is, more specifically, referring to the above-described Formulas (1) and (2), there is a possibility that the differences I and Q cannot be appropriately calculated in a case where one or more pixel signals among the pixel signals corresponding to the respective phases are saturated or at a predetermined level or less. In this case, the distance information Depth calculated based on the differences I and Q in the distance measuring unit 50 has low reliability.
[0167] To handle this, the control unit 40 obtains a control signal to control each light amount value based on each pixel signal of each of phases to a value within an appropriate range. Based on the obtained control signal, the control unit 40 controls the gain and the exposure time in the light receiving unit 12 and the duty and intensity of light emission in the light source unit 11 so as to adjust the amount of light received by the light receiving unit 12 to be appropriate.
[0168] As an example, in a case where the reflectance of the measurement object 31 is low, or in a case where the distance indicated by the distance information Depth calculated by the distance measuring unit 50 is a predetermined value or more, the S/N of the calculated distance information Depth becomes low, and the accuracy of the distance information Depth decreases. In this case, in order to maintain the S/N of the distance information Depth calculated by the distance measuring unit 50, the control unit 40 generates a control signal to control the light receiving unit 12 so as to prolong the exposure time by the light receiving unit 12.
[0169] The control unit 40 stores the generated control signal in a register (not illustrated) or the like. The control unit 40 executes light emission in the light source unit 11 and light reception by the light receiving unit 12 for each frame of a predetermined cycle. The control unit 40 performs processing for one frame based on the control information stored in the register, obtains a control signal based on a result of the processing, and updates the control signal stored in the register.
[0170] The correction unit 60 corrects the pixel value Confidence of the reflected light image information by using each pixel signal of each of phases. The correction unit 60 includes a saturation region detection unit 61, a saturation value estimation unit 62, and a saturation region compensation unit 63.
[0171] The saturation region detection unit 61 detects the saturation region R.sub.sa of the reflected light image information. The pixel signal output from each light receiving element of the light receiving unit 12 includes saturation information indicating whether the pixel signal is saturated. The saturation region detection unit 61 detects the saturation region R.sub.sa by detecting, based on the saturation information, the light receiving elements whose pixel signals are saturated. Alternatively, the saturation region detection unit 61 may detect the saturated light receiving elements, that is, the saturation region R.sub.sa, by determining whether each pixel signal has a value indicating pixel signal saturation. Alternatively, the saturation region detection unit 61 may detect the saturation region R.sub.sa by determining whether the pixel value Confidence of the reflected light image information has a value indicating saturation (for example, the pixel value Confidence is zero).
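The detection options described above can be sketched as follows. This is a minimal illustration assuming the image arrives as a 2-D list of pixel values; the function name and the flag layout are assumptions, not from the patent.

```python
def detect_saturation_region(confidence, sat_flags=None):
    """Return a boolean mask (2-D list) marking the saturation region R_sa.

    confidence: 2-D list of Confidence pixel values.
    sat_flags:  optional 2-D list of the per-pixel saturation information
                carried in the pixel signal (truthy = saturated).
    """
    if sat_flags is not None:
        # detection from the saturation information in the pixel signal
        return [[bool(f) for f in row] for row in sat_flags]
    # fallback: a Confidence of zero indicates a saturated pixel
    return [[c == 0 for c in row] for row in confidence]
```

Either path yields the same kind of mask, which the downstream estimation and compensation steps consume.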
[0172] The saturation value estimation unit 62 estimates a correction value used by the saturation region compensation unit 63 to correct the pixel value Confidence of the reflected light image information. The saturation value estimation unit 62 estimates the correction value based on the pixel value Confidence of the non-saturation region R.sub.nsa surrounding the saturation region R.sub.sa, that is, on the non-saturated pixel signals surrounding the saturation region R.sub.sa.
[0173] The following describes, for example, the correction value estimated by the saturation value estimation unit 62 when the saturation region detection unit 61 has detected a first saturation region R.sub.sa1 and a second saturation region R.sub.sa2 from reflected light image information 14 illustrated in
[0174] The saturation value estimation unit 62 estimates the correction value based on, for example, an average value of the pixel values Confidence of the non-saturation region R.sub.nsa (the region indicated by the black line in
[0175] Here, in the first saturation region R.sub.sa1, the value of the pixel signal is saturated. Therefore, the actual pixel value Confidence of the first saturation region R.sub.sa1, that is, the pixel value Confidence in a case where the value of the pixel signal is not saturated is considered to be higher than the pixel value Confidence of the surrounding non-saturation region R.sub.nsa. In view of this, the saturation value estimation unit 62 estimates, as the correction value, a value obtained by adding a constant value to the average value of the pixel values Confidence of the non-saturation region R.sub.nsa (region indicated by the white line in
[0176] The saturation region compensation unit 63 corrects the pixel value Confidence of the saturation region R.sub.sa detected by the saturation region detection unit 61 by using the correction value estimated by the saturation value estimation unit 62. As illustrated in
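The estimation and compensation described above can be sketched together as follows. This is a minimal sketch assuming 2-D lists and a 4-neighbourhood border; the function name and the magnitude of the constant `offset` are assumptions, not values from the patent.

```python
def correct_saturation_region(confidence, sat_mask, offset=8):
    """Estimate a correction value as the average Confidence of the
    non-saturated pixels bordering the saturation region plus a constant
    ('offset' is an assumed tuning value), then write that value into
    every saturated pixel. Assumes the region does not cover the whole
    image, so at least one border pixel exists."""
    h, w = len(confidence), len(confidence[0])
    border = []
    for y in range(h):
        for x in range(w):
            if sat_mask[y][x]:
                continue
            # non-saturated pixel adjacent (4-neighbourhood) to the region?
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and sat_mask[ny][nx]:
                    border.append(confidence[y][x])
                    break
    correction = sum(border) / len(border) + offset
    return [[correction if sat_mask[y][x] else confidence[y][x]
             for x in range(w)] for y in range(h)]
```

Because Confidence is zero inside the region, writing the correction value in place is equivalent to adding it, matching the description of step S105 later in the text.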
[0177] Note that
[0178]
[0179] Therefore, in the first embodiment of the present disclosure, as described above, the saturation region compensation unit 63 corrects the pixel value Confidence in the saturation region R.sub.sa of the reflected light image information.
[0180] In the case of face authentication, the authentication accuracy decreases at the occurrence of a discontinuity as illustrated in
2.3. Correction Process in Distance Measuring Device
[0182]
[0183] First, based on the control signal stored in the register, the control unit 40 of the distance measuring device 10a controls the light source unit 11 and the light receiving unit 12 to perform imaging (step S101). The pixel signal of each of phases obtained by the imaging is passed from the light receiving unit 12 to the control unit 40, the distance measuring unit 50, and the correction unit 60.
[0184] The distance measuring unit 50 of the distance measuring device 10a calculates the distance information Depth and the pixel value Confidence of the reflected light image information based on an imaging result obtained in step S101 (step S102). The distance measuring unit 50 of the distance measuring device 10a outputs the calculated distance information Depth to the application unit 20, for example, and outputs the pixel value Confidence of the reflected light image information to the application unit 20 and the correction unit 60.
[0185] Next, the saturation region detection unit 61 of the distance measuring device 10a calculates the saturation region R.sub.sa of the reflected light image information based on the imaging result obtained in step S101 (step S103). By detecting the light receiving element including the saturated pixel signal, the saturation region detection unit 61 calculates the saturation region R.sub.sa of the reflected light image information.
[0186] The saturation value estimation unit 62 of the distance measuring device 10a calculates a correction value based on the saturation region R.sub.sa calculated in step S103 and the pixel value Confidence of the reflected light image information calculated in step S102 (step S104). More specifically, the saturation value estimation unit 62 estimates, as the correction value, a value obtained by adding a predetermined value to the average value of the pixel values Confidence of the reflected light image information of the non-saturation region R.sub.nsa in the surroundings of the saturation region R.sub.sa.
[0187] The saturation region compensation unit 63 of the distance measuring device 10a corrects the pixel value Confidence of the reflected light image information of the saturation region R.sub.sa based on the correction value estimated by the saturation value estimation unit 62 in step S104 (step S105). The saturation region compensation unit 63 adds the calculated correction value to the pixel value Confidence of the reflected light image information of the saturation region R.sub.sa; since the pixel value Confidence in the saturation region R.sub.sa is zero, this addition effectively replaces the pixel value Confidence with the correction value.
[0188] Based on each pixel signal of each phase captured in step S101, the control unit 40 of the distance measuring device 10a obtains a control signal to control the light source unit 11 and the light receiving unit 12 (step S106). The control unit 40 stores the obtained control signal in a register or the like.
[0189] The distance measuring device 10a determines whether the imaging has been completed (step S107). For example, in a case where the distance measuring device 10a has received an imaging end instruction that instructs end of imaging from the application unit 20, the distance measuring device determines that the imaging has ended (step S107, “Yes”). In this case, the distance measuring device 10a ends the correction process.
[0190] In contrast, in a case where the distance measuring device 10a has not received the imaging end instruction from the application unit 20 and determines that the imaging has not ended (step S107, “No”), the process returns to step S101. The processes of steps S101 to S107 are repeated, for example, in units of one frame.
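The per-frame flow of steps S101 to S107 can be sketched as the loop below. Every callable is an illustrative stand-in for a unit of the distance measuring device; the sketch fixes only the order of operations, not their contents.

```python
def correction_loop(capture, measure, detect, estimate, compensate,
                    update_control, imaging_done):
    """Run steps S101-S107 once per frame until the application issues
    an imaging-end instruction, then return the last results. All
    arguments are hypothetical stand-ins for the device's units."""
    depth = confidence = None
    while True:
        raw = capture()                                     # S101: imaging
        depth, confidence = measure(raw)                    # S102: Depth / Confidence
        region = detect(raw)                                # S103: saturation region
        value = estimate(region, confidence)                # S104: correction value
        confidence = compensate(confidence, region, value)  # S105: correct Confidence
        update_control(raw)                                 # S106: update register
        if imaging_done():                                  # S107: end instruction?
            return depth, confidence
```

In the device itself the loop repeats in units of one frame of a predetermined cycle, with the control signal carried across iterations through the register.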
[0191] In this manner, the distance measuring device 10a (an example of an information processing apparatus) according to the first embodiment includes the correction unit 60 (an example of a control unit). The correction unit 60 detects the saturation region R.sub.sa of the reflected light image information (an example of the light reception image information) generated based on the pixel signal output from the light receiving unit 12 (an example of the light receiving sensor) that receives the reflected light 32, which is a reflection, by the measurement object 31, of the projection light projected from the light source unit 11 (an example of the light source). The pixel signal is used to calculate the distance to the measurement object 31. The saturation region R.sub.sa is a region of the reflected light image information generated based on a saturated pixel signal. The correction unit 60 corrects the reflected light image information of the saturation region R.sub.sa based on the pixel signal.
[0192] This makes it possible to improve discontinuity of the light reception image information (the reflected light image information in the first embodiment), leading to suppression of degradation in accuracy of the light reception image information.
3. SECOND EMBODIMENT
[0193] Next, a second embodiment of the present disclosure will be described. The distance measuring device according to the second embodiment corrects the saturation region R.sub.sa of the reflected light image information by using IR image information.
3.1. Configuration Example of Distance Measuring Device
[0194]
[0195] The IR calculation unit 64 calculates IR image information based on the pixel signal output from the light receiving unit 12. Here, the IR image information is calculated based on Formula (11) or Formula (12) described above, that is, by subtracting a DC component such as dark current (dark noise) from the pixel signal. Therefore, even in the saturation region R.sub.sa, the pixel value IR of the IR image information does not become zero, and the IR image information maintains continuity even when the saturation region R.sub.sa occurs.
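Since Formulas (11) and (12) are not reproduced in this excerpt, the sketch below assumes a representative form in which the pixel value IR is the mean of the tap readings minus a dark (DC) offset; the exact expression in the patent may differ. What the sketch illustrates is the continuity property: the DC subtraction does not zero out a saturated reading, so IR stays nonzero in the saturation region where Confidence collapses to zero.

```python
def ir_pixel(a, b, dark):
    """Assumed IR form: mean of the tap-A and tap-B readings over the
    four phases, minus the dark (DC) component. Illustrative only; the
    patent's Formulas (11)/(12) are not reproduced in this excerpt."""
    taps = [a[p] for p in (0, 90, 180, 270)] + [b[p] for p in (0, 90, 180, 270)]
    return sum(taps) / len(taps) - dark

c_max = 1023
saturated = {p: c_max for p in (0, 90, 180, 270)}
# Unlike Confidence, IR does not collapse to zero under saturation:
ir_pixel(saturated, saturated, dark=20)  # -> 1003.0
```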
[0196] The saturation region compensation unit 63b corrects the reflected light image information of the saturation region R.sub.sa based on the reflected light image information and the IR image information. The saturation region compensation unit 63b corrects the reflected light image information in accordance with the gradient (change rate) of the IR image information in the saturation region R.sub.sa. This correction will be described in detail with reference to
[0197]
[0198] The upper graph on the left side of
[0199] A lower graph on the left side of
[0200] As described above, the IR image information is information including a component of the direct reflected light and a component of the ambient light, whereas the reflected light image information is information including a component of the direct reflected light. Within the same frame, the components of the ambient light are considered to be the same. Accordingly, the component contributing to the change in the pixel value IR of the IR image information and the component contributing to the change in the pixel value Confidence of the reflected light image information are considered to be the same, namely, the component of the direct reflected light, and therefore to have the same change rate.
[0201] In view of this, the saturation region compensation unit 63b according to the present embodiment corrects the pixel value Confidence of the reflected light image information in the saturation region R.sub.sa in accordance with the gradient (change rate) of the pixel value IR of the IR image information. Specifically, the correction value of a pixel to be corrected (hereinafter also referred to as a correction pixel) is calculated by multiplying the pixel value of the pixel adjacent to the correction pixel by the change rate of the pixel value IR of the IR image information corresponding to the correction pixel. The saturation region compensation unit 63b corrects the pixel value Confidence of the correction pixel using the calculated correction value.
[0202] For example, the saturation region compensation unit 63b calculates the correction value sequentially from the pixel in the saturation region R.sub.sa adjacent to the non-saturation region R.sub.nsa, and calculates the correction values for all the pixels included in the saturation region R.sub.sa while sequentially scanning the correction target pixel in the horizontal direction or the vertical direction.
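A one-dimensional sketch of this scan, assuming a single image row scanned left to right; the function name is illustrative. Each saturated pixel inherits the value of its left neighbour (non-saturated, or already corrected) scaled by the local change rate of IR, which is why the scan must start from a pixel adjacent to the non-saturation region.

```python
def propagate_row(confidence, ir, sat_mask):
    """Correct one row of Confidence inside the saturation region by
    multiplying the adjacent (already corrected) pixel value by the
    change rate of the corresponding IR pixel values. A real
    implementation would also scan from the opposite side and in the
    vertical direction."""
    out = list(confidence)
    for x in range(1, len(out)):
        if sat_mask[x]:
            out[x] = out[x - 1] * (ir[x] / ir[x - 1])
    return out

conf = [10.0, 0.0, 0.0, 9.0]     # two saturated pixels in the middle
ir   = [50.0, 55.0, 60.5, 58.0]  # IR stays continuous through saturation
mask = [False, True, True, False]
propagate_row(conf, ir, mask)    # approximately [10.0, 11.0, 12.1, 9.0]
```

The two corrected pixels follow the 10% per-pixel rise in IR, reproducing the gradient of the direct reflected light instead of a flat fill value.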
[0203] The graph on the right side of
[0204] In this manner, by correcting the reflected light image information in accordance with the gradient (change rate) of the IR image information by the saturation region compensation unit 63b, it is possible to perform correction according to an actual change in the component of the direct reflected light, leading to further suppression of degradation of accuracy of the reflected light image information.
[0205] Although this is a case where the saturation region compensation unit 63b corrects the reflected light image information for each row or column, the correction method is not limited thereto. For example, the saturation region compensation unit 63b may calculate the correction value of the reflected light image information for each row and column. In this case, two correction values corresponding to the row and column directions are calculated for one correction pixel. The saturation region compensation unit 63b may correct the correction pixel using an average value of two correction values, for example.
3.2. Correction Process in Distance Measuring Device
[0206]
[0207] In the flowchart of
[0208] The IR calculation unit 64 of the distance measuring device 10b calculates IR image information based on the imaging result obtained in step S101 (step S201). The IR calculation unit 64 outputs the calculated IR image information to the saturation region compensation unit 63b. Alternatively, the IR calculation unit 64 may output the calculated IR image information to the application unit 20.
[0209] The saturation region compensation unit 63b of the distance measuring device 10b corrects the reflected light image information of the saturation region R.sub.sa based on the gradient of the IR image information calculated by the IR calculation unit 64 in step S201 (step S202). The saturation region compensation unit 63b corrects the correction pixel by multiplying the pixel value Confidence of the pixel adjacent to the correction pixel by the change rate of the pixel value IR of the IR image information corresponding to the correction pixel.
[0210] The control unit 40 of the distance measuring device 10b obtains a control signal to control the light source unit 11 and the light receiving unit 12 based on each pixel signal of each of phases captured in step S101 (step S106). The control unit 40 stores the obtained control signal in a register or the like.
[0211] The distance measuring device 10b determines whether imaging has been completed (step S107). For example, in a case where the distance measuring device 10b has received an imaging end instruction that instructs end of imaging from the application unit 20, the distance measuring device determines that the imaging has ended (Step S107, “Yes”). In this case, the distance measuring device 10b ends the correction process.
[0212] In contrast, in a case where the distance measuring device 10b has not received the imaging end instruction from the application unit 20 and determines that the imaging has not ended (step S107, “No”), the process returns to step S101. The processes of steps S101 to S107 are repeated, for example, in units of one frame.
[0213] In this manner, the distance measuring device 10b (an example of an information processing apparatus) according to the second embodiment includes the correction unit 60b (an example of a control unit). The correction unit 60b corrects the pixel value Confidence in the saturation region of the reflected light image information in accordance with the gradient (change rate) of the pixel value IR of the IR image information. This makes it possible to improve discontinuity of the light reception image information (the reflected light image information in the second embodiment), leading to suppression of degradation in accuracy of the light reception image information.
4. THIRD EMBODIMENT
[0214] A third embodiment of the present disclosure will be described. A distance measuring device according to the third embodiment corrects the saturation region R.sub.sa of the IR image information.
4.1. Configuration Example of Distance Measuring Device
[0215]
[0216] The saturation value estimation unit 62c estimates a correction value of the pixel value IR in the saturation region R.sub.sa of the IR image information. The saturation value estimation unit 62c estimates a predetermined value as a correction value, for example. Alternatively, the saturation value estimation unit 62c may estimate the correction value based on the average value of the pixel values IR of the non-saturation region R.sub.nsa located in the surroundings of the saturation region R.sub.sa in the IR image information. For example, the saturation value estimation unit 62c may estimate the correction value by adding a predetermined value to the average value.
[0217] As described above, the IR image information is not discontinuous even in the presence of the saturation region R.sub.sa. However, even in the IR image information, the pixel value IR in the saturation region R.sub.sa is calculated based on a saturated pixel signal. Therefore, the pixel value IR in the saturation region R.sub.sa is not a correct value but a saturated value (a value clipped to a predetermined value). In view of this, the present embodiment corrects the pixel value IR of the saturation region R.sub.sa of the IR image information, thereby suppressing degradation in accuracy of the IR image information.
[0218] Although this is a case where the saturation region detection unit 61 detects the saturation region R.sub.sa of the corresponding IR image information by detecting the saturation region R.sub.sa of the reflected light image information, the detection method is not limited thereto. For example, the saturation region detection unit 61 may detect the saturation region R.sub.sa of the IR image information by determining whether the pixel value IR of the IR image information has a value indicating that the pixel value IR is saturated.
[0219] Furthermore, although this is a case where the correction unit 60c corrects the IR image information, the correction unit 60c may correct the reflected light image information in addition to the IR image information. Since the correction of the reflected light image information is similar to the case of the first and second embodiments, the description thereof will be omitted.
4.2. Correction Process in Distance Measuring Device
[0220]
[0221] In the flowchart of
[0222] The saturation value estimation unit 62c of the distance measuring device 10c calculates a correction value based on the saturation region R.sub.sa calculated in step S103 and the IR image information calculated in step S201 (step S301).
[0223] A saturation region compensation unit 63c of the distance measuring device 10c corrects the IR image information of the saturation region R.sub.sa based on the correction value calculated by the saturation value estimation unit 62c in step S301 (step S302).
[0224] The control unit 40 of the distance measuring device 10c obtains a control signal to control the light source unit 11 and the light receiving unit 12 based on each pixel signal of each of phases captured in step S101 (step S106). The control unit 40 stores the obtained control signal in a register or the like.
[0225] The distance measuring device 10c determines whether imaging has been completed (step S107). For example, in a case where the distance measuring device 10c has received an imaging end instruction that instructs the end of imaging from the application unit 20, the distance measuring device 10c determines that the imaging has ended (step S107, “Yes”). In this case, the distance measuring device 10c ends the correction process.
[0226] In contrast, in a case where the distance measuring device 10c has not received the imaging end instruction from the application unit 20 and determines that the imaging has not ended (step S107, “No”), the process returns to step S101. The processes of steps S101 to S107 are repeated, for example, in units of one frame.
[0227] In this manner, the distance measuring device 10c (an example of an information processing apparatus) according to the third embodiment includes the correction unit 60c (an example of a control unit). The correction unit 60c corrects a pixel value in a saturation region of IR image information (an example of light reception image information). This makes it possible to suppress degradation in accuracy of the light reception image information (IR image information in the third embodiment).
5. MODIFICATION
[0228] Although the first embodiment has been described as a case where the distance measuring device 10a is configured as a hardware device by the electronic device 1 including the CPU 100, the ROM 101, the RAM 102, the UI unit 104, the storage 103, the I/F 105, and the like, the configuration is not limited to this example. For example, it is also possible to incorporate the distance measuring device 10a including the control unit 40, the distance measuring unit 50, and the correction unit 60 illustrated in
[0229] Furthermore, although the above embodiments have been described assuming that the pixel value Confidence of the reflected light image information is zero in the saturation region R.sub.sa, the operation is not limited thereto. For example, in a case where the pixel signal in each of phases of the light receiving unit 12 is only partially saturated, the pixel value Confidence of the reflected light image information might not be zero. However, even in this case, the pixel value Confidence of the reflected light image information is calculated based on the saturated pixel signal, and thus the pixel value Confidence includes an error, leading to the possibility of discontinuous reflected light image information. Therefore, even in a case where a part of the pixel signal in each phase of the light receiving unit 12 is saturated as described above, the correction process by the correction units 60 and 60b may be performed.
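The partial-saturation check described above can be sketched as follows: a pixel is flagged for correction whenever any of its per-phase samples is saturated, even if the Confidence computed from them is nonzero. The variable names and the 12-bit full-scale level are illustrative assumptions, not values from the actual device.

```python
import numpy as np

SAT_LEVEL = 4095  # e.g. full scale of a hypothetical 12-bit pixel readout

def needs_correction(phase_signals):
    """phase_signals: array of shape (num_phases, H, W) of raw pixel signals.

    Returns an (H, W) boolean mask that is True wherever at least one phase
    sample is saturated, so the Confidence derived from it is unreliable."""
    return (np.asarray(phase_signals) >= SAT_LEVEL).any(axis=0)
```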
[0230] Moreover, although in the above embodiments the correction of the light reception image information is performed by the correction units 60, 60b, and 60c, the correction method is not limited thereto. For example, the application unit 20 may correct the light reception image information. In this case, the electronic device 1 of
[0231] Alternatively, the correction units 60, 60b, and 60c of the above embodiments may be implemented by a dedicated computer system or a general-purpose computer system.
[0232] For example, a program for executing the above-described operations of the correction process is stored in a computer-readable recording medium such as an optical disk, a semiconductor memory, a magnetic tape, a flexible disk, or a hard disk and distributed. The program is installed on a computer, and the above processes are executed to achieve the configuration of an information processing apparatus including the correction unit 60. At this time, the information processing apparatus may be an external device (for example, a personal computer) of the electronic device 1. Furthermore, the information processing apparatus may be a device (for example, the control unit 40) inside the electronic device 1.
[0233] Furthermore, the above-described program may be stored in a disk device included in a server device on a network such as the Internet so as to be downloadable to a computer, for example. Furthermore, the functions described above may be implemented by an operating system (OS) and application software operating in cooperation. In this case, the sections other than the OS may be stored in a medium for distribution, or the sections other than the OS may be stored in a server device so as to be downloadable to a computer, for example.
[0234] Furthermore, among the processes described in the above embodiments, all or a part of the processes described as being performed automatically may be performed manually, and the processes described as being performed manually may be performed automatically by known methods. In addition, the processing procedures, specific names, and information including various data and parameters illustrated in the above description or drawings can be arbitrarily altered unless otherwise specified. For example, various types of information illustrated in each of the drawings are not limited to the information illustrated.
[0235] In addition, each of the components of each of the illustrated devices is provided as a functional and conceptual illustration and thus does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution/integration of each of the devices is not limited to those illustrated in the drawings, and all or a part thereof may be functionally or physically distributed or integrated into arbitrary units according to various loads and use conditions.
6. CONCLUSION
[0236] The embodiments of the present disclosure have been described above. However, the technical scope of the present disclosure is not limited to the above-described embodiments, and various modifications can be made without departing from the scope of the present disclosure. Moreover, it is allowable to combine components across different embodiments and modifications as appropriate.
[0237] The effects described in individual embodiments of the present specification are merely examples, and thus, there may be other effects, not limited to the exemplified effects.
[0238] Note that the present technology can also have the following configurations.
(1)
[0239] An information processing apparatus comprising a control unit configured to execute processes including:
[0240] detecting a saturation region of light reception image information generated based on a pixel signal output from a light receiving sensor, the light receiving sensor being configured to receive reflected light being reflection, by a measurement object, of projection light projected from a light source, the pixel signal being configured to be used to calculate a distance to the measurement object, the saturation region being a region of the light reception image information generated based on the pixel signal which is saturated; and
[0241] correcting the light reception image information of the saturation region based on the pixel signal.
(2)
[0242] The information processing apparatus according to (1),
[0243] wherein the light reception image information is image information generated in accordance with a component of the reflected light contained in the pixel signal.
(3)
[0244] The information processing apparatus according to (1),
[0245] wherein the light reception image information is image information generated in accordance with a component of the reflected light and a component of ambient light, contained in the pixel signal.
(4)
[0246] The information processing apparatus according to (2) or (3),
[0247] wherein the control unit corrects the pixel value of the saturation region based on a pixel value of the light reception image information adjacent to the saturation region in a non-saturation region where the pixel signal is not saturated.
(5)
[0248] The information processing apparatus according to (4),
[0249] wherein the control unit corrects the pixel value in the saturation region using a correction value calculated based on an average value of the pixel values of the light reception image information located in surroundings of the saturation region in the non-saturation region where the pixel signal is not saturated.
(6)
[0250] The information processing apparatus according to (5),
[0251] wherein the correction value is a value larger than the average value.
(7)
[0252] The information processing apparatus according to (4),
[0253] wherein the control unit corrects the pixel value in the saturation region in accordance with a change rate of a reception light value calculated based on the component of the reflected light and the component of the ambient light, contained in the pixel signal.
(8)
[0254] A correction method comprising:
[0255] detecting a saturation region of light reception image information generated based on a pixel signal output from a light receiving sensor, the light receiving sensor being configured to receive reflected light being reflection, by a measurement object, of projection light projected from a light source, the pixel signal being configured to be used to calculate a distance to the measurement object, the saturation region being a region of the light reception image information generated based on the pixel signal which is saturated; and
[0256] correcting the light reception image information of the saturation region based on the pixel signal.
(9)
[0257] A program for causing a computer to function as a control unit that executes processes comprising:
[0258] detecting a saturation region of light reception image information generated based on a pixel signal output from a light receiving sensor, the light receiving sensor being configured to receive reflected light being reflection, by a measurement object, of projection light projected from a light source, the pixel signal being configured to be used to calculate a distance to the measurement object, the saturation region being a region of the light reception image information generated based on the pixel signal which is saturated; and
[0259] correcting the light reception image information of the saturation region based on the pixel signal.
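As a supplement to configuration (7) above, the correction using the change rate of the reception light value can be sketched in one dimension as follows: the slope of the values in the adjacent non-saturated region is extended into the saturation region. This is an illustrative sketch only; the two-sample slope estimate and the linear extrapolation are assumptions, not the claimed implementation.

```python
import numpy as np

def correct_by_change_rate(values, sat_mask):
    """Extrapolate into the saturation region from the change rate (slope)
    of the reception light values just before it."""
    values = np.asarray(values, dtype=float).copy()
    idx = np.flatnonzero(sat_mask)
    if idx.size == 0 or idx[0] < 2:
        return values  # nothing to correct, or no room to estimate a slope
    start = idx[0]
    # Change rate estimated from the last two non-saturated samples.
    rate = values[start - 1] - values[start - 2]
    for k, i in enumerate(idx, start=1):
        values[i] = values[start - 1] + rate * k
    return values
```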
REFERENCE SIGNS LIST
[0260] 1 ELECTRONIC DEVICE [0261] 10, 10a, 10b, 10c DISTANCE MEASURING DEVICE [0262] 11 LIGHT SOURCE UNIT [0263] 12 LIGHT RECEIVING UNIT [0264] 13 DISTANCE MEASUREMENT PROCESSING UNIT [0265] 20 APPLICATION UNIT [0266] 40 CONTROL UNIT [0267] 50 DISTANCE MEASURING UNIT [0268] 60, 60b, 60c CORRECTION UNIT [0269] 61 SATURATION REGION DETECTION UNIT [0270] 62, 62c SATURATION VALUE ESTIMATION UNIT [0271] 63, 63b, 63c SATURATION REGION COMPENSATION UNIT [0272] 64 IR CALCULATION UNIT