Depth sensor, image capture method, and image processing system using depth sensor
10171790 · 2019-01-01
Assignee
Inventors
- Kyung Il Kim (Anyang-si, KR)
- Dong Wook Kwon (Suwon-si, KR)
- Min Ho Kim (Seongnam-si, KR)
- Gi Sang Lee (Suwon-si, KR)
- Sang Bo Lee (Yongin-si, KR)
- Jin Kyung Lee (Suwon-si, KR)
- Young Gu Jin (Osan-si, KR)
- Jin Wuk Choi (Seoul, KR)
Cpc classification
H04N13/10
ELECTRICITY
H04N13/254
ELECTRICITY
International classification
H04N13/00
ELECTRICITY
H04N13/254
ELECTRICITY
Abstract
An image capture method performed by a depth sensor includes: emitting a first source signal having a first amplitude towards a scene, and thereafter emitting a second source signal having a second amplitude different from the first amplitude towards the scene; capturing a first image in response to the first source signal and capturing a second image in response to the second source signal; and interpolating the first and second images to generate a final image.
Claims
1. An image capture method performed by a depth sensor, the method comprising: emitting a first source signal having a first amplitude towards a scene, and thereafter, emitting a second source signal having a second amplitude different from the first amplitude towards the scene; receiving, as a first reflected signal, a reflected portion of the first source signal; receiving, as a second reflected signal, a reflected portion of the second source signal; demodulating the first reflected signal with an N-times sampling operation to generate a first image; demodulating the second reflected signal with another N-times sampling operation to generate a second image, wherein N is an integer greater than one; and interpolating the first and second images to generate a final image, wherein: the second amplitude is greater than the first amplitude, the first source signal is used to capture a first point of the scene that is relatively close to the depth sensor, and the second source signal is used to capture a second point of the scene that is relatively far from the depth sensor.
2. The image capture method of claim 1, wherein a first object in the first image relatively far from the depth sensor is distorted, and a second object in the second image relatively close to the depth sensor is distorted.
3. The image capture method of claim 2, wherein: the first reflected signal is received by an array of depth pixels in the depth sensor; and the second reflected signal is received by the array of depth pixels in the depth sensor after receiving the first reflected signal.
4. The image capture method of claim 3, wherein receiving the first reflected signal comprises focusing the first reflected signal through a lens module, and receiving the second reflected signal comprises focusing the second reflected signal through the lens module after focusing the first reflected signal through the lens module.
5. The image capture method of claim 3, wherein each one of the depth pixels is one of a 1-tap depth pixel or a 2-tap depth pixel.
6. The image capture method of claim 5, wherein the depth sensor is part of a three-dimensional image sensor including at least one of a red pixel, a green pixel, a blue pixel, a magenta pixel, a cyan pixel, and a yellow pixel.
7. The image capture method of claim 1, further comprising: storing image data corresponding to the final image in a memory; communicating the image data via an interface to a display; and generating a displayed image on the display in accordance with the image data.
8. A depth sensor comprising: a light source that emits a first source signal having a first amplitude towards a scene, and thereafter, emits a second source signal having a second amplitude different from the first amplitude towards the scene; and a depth pixel that generates a first reflected signal in response to a reflected portion of the first source signal, and thereafter, generates a second reflected signal in response to a reflected portion of the second source signal; and an image signal processor that: demodulates the first reflected signal with an N-times sampling operation to generate a first image, demodulates the second reflected signal with another N-times sampling operation to generate a second image, wherein N is an integer greater than one, and interpolates the first and second images to generate a final image, wherein: the second amplitude is greater than the first amplitude, the first source signal is used to capture a first point of the scene that is relatively close to the depth sensor, and the second source signal is used to capture a second point of the scene that is relatively far from the depth sensor.
9. The depth sensor of claim 8, wherein the light source is an infrared diode or a laser diode.
10. The depth sensor of claim 8, wherein a first object in the first image relatively far from the depth sensor is distorted, and a second object in the second image relatively close to the depth sensor is distorted.
11. The depth sensor of claim 8, further comprising: a lens module that focuses the first reflected signal and the second reflected signal on the depth pixel; and a correlated double sampling circuit operating with an analog-to-digital converter to convert the first reflected signal into first pixel signals and to convert the second reflected signal into second pixel signals.
12. The depth sensor of claim 8, wherein the depth pixel is one of a 1-tap depth pixel or a 2-tap depth pixel.
13. A three-dimensional (3D) sensor, comprising: a light source that emits a first source signal having a first amplitude towards a scene, and thereafter, emits a second source signal having a second amplitude different from the first amplitude towards the scene; a depth pixel configured to generate a first reflected signal in response to a reflected portion of the first source signal, and thereafter to generate a second reflected signal in response to a reflected portion of the second source signal; and an image signal processor that: demodulates the first reflected signal with an N-times sampling operation to generate a first image, demodulates the second reflected signal with another N-times sampling operation to generate a second image, wherein N is an integer greater than one, and interpolates the first and second images to generate a final image, wherein: the second amplitude is greater than the first amplitude, the first source signal is used to capture a first point of the scene that is relatively close to the 3D sensor, and the second source signal is used to capture a second point of the scene that is relatively far from the 3D sensor.
14. The 3D sensor of claim 13, wherein the light source is an infrared diode or a laser diode.
15. The 3D sensor of claim 13, wherein a first object in the first image relatively far from the depth pixel is distorted, and a second object in the second image relatively close to the depth pixel is distorted.
16. The 3D sensor of claim 13, wherein the depth pixel is one of a 1-tap depth pixel or a 2-tap depth pixel.
17. The 3D sensor of claim 13, wherein the depth pixel comprises a red pixel, a green pixel, a blue pixel, a magenta pixel, a cyan pixel, or a yellow pixel.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(18) As noted above, figure (
(19) Referring collectively to
(20) The 1-tap depth pixels 23 are arranged in a two-dimensional matrix to form the array 22. Each 1-tap depth pixel includes a photo gate 110 and a plurality of transistors for signal processing.
(21) A row decoder 24 may be used to select one of a plurality of rows in response to a row address provided by a timing controller (T/C) 26. Each row is a particular arrangement of 1-tap depth pixels in an arbitrarily defined direction (e.g., an X-direction) within the array 22.
(22) A photo gate controller (TG CON) 28 may be used to generate first, second, third, and fourth photo gate control signals (Ga, Gb, Gc, and Gd) and supply same to the array 22 under the control of the timing controller 26.
(23) As shown in
(24) A light source driver 30 may be used to generate a clock signal (MLS) capable of driving the light source 32 under the control of the timing controller 26.
(25) The light source 32 emits a modulated source signal (EL) towards a scene 40 in response to the clock signal. The scene 40 may generally include one or more target object(s). The modulated source signal may have different amplitudes according to driving of the light source driver 30. As conceptually illustrated in
(26) The light source 32 may be one or more of a light emitting diode (LED), an organic light emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), and a laser diode. The clock signal applied to the light source 32 and/or the modulated source signal transmitted by the light source 32 may have a sine waveform or a square waveform.
(27) The light source driver 30 supplies the clock signal and/or information derived from the clock signal to the photo gate controller 28. Accordingly, the photo gate controller 28 may be used to generate the first photo gate control signal (Ga) in phase with the clock signal, and the second photo gate control signal (Gb) having a 180° phase difference with respect to the clock signal. The photo gate controller 28 may also be used to generate the third photo gate control signal (Gc) having a 90° phase difference with respect to the clock signal, and the fourth photo gate control signal (Gd) having a 270° phase difference with respect to the clock signal. That is, in certain embodiments of the inventive concept, the photo gate controller 28 and the light source driver 30 may be operated synchronously.
(28) The photo gate 110 may be formed of transparent polysilicon. In certain embodiments of the inventive concept, the photo gate 110 may be formed of indium tin oxide (ITO), indium zinc oxide (IZO), and/or zinc oxide (ZnO). The photo gate 110 may be used to transmit near-infrared wavelengths received via the lens module 34.
(29) The modulated source signal provided by the light source 32 will be reflected in various portions by object(s) in the scene 40. It is assumed for purposes of explanation that the scene 40 of
(30) Where the modulated source signal is assumed to have a waveform cos(ωt), and a reflected portion of the source signal (hereafter, the reflected signal) (RL) received by the 1-tap depth pixel 23 is further assumed to be cos(ωt+θ), where θ is a phase shift or phase difference, a TOF calculation may be made using Equation 1:
θ=2*ω*Z/C=2*(2πf)*Z/C  (Equation 1),
where C is the velocity of light.
(31) Then, the distance Z between the depth sensor 10 and an object in the scene 40 may be calculated using Equation 2:
Z=θ*C/(2*ω)=θ*C/(2*(2πf))  (Equation 2).
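Equations 1 and 2 can be checked with a short sketch; the 20 MHz modulation frequency in the usage note is an assumed example, not a value stated in the patent:

```python
import math

C = 299_792_458.0  # velocity of light, in m/s


def phase_shift(z_m: float, f_mod_hz: float) -> float:
    # Equation 1: theta = 2 * omega * Z / C, with omega = 2*pi*f
    return 2.0 * (2.0 * math.pi * f_mod_hz) * z_m / C


def distance(theta_rad: float, f_mod_hz: float) -> float:
    # Equation 2: Z = theta * C / (2 * omega)
    return theta_rad * C / (2.0 * (2.0 * math.pi * f_mod_hz))
```

Round-tripping a 5 m target through both equations recovers 5 m. Note that in a real sensor the measured phase wraps modulo 2π, so the unambiguous range is bounded by C/(2·f), about 7.5 m at 20 MHz modulation.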
(32) The reflected signal may be focused upon (or made incident to) the array 22 using the lens module 34. The lens module 34 may be variously implemented as a unit including one or more lens and one or more optical filters (e.g., an infrared pass filter).
(33) In certain embodiments, the depth sensor 10 may include a plurality of light sources arranged in a pattern (e.g., a circle) around the lens module 34. However, the descriptive embodiments presented here assume a single light source 32 for convenience of explanation.
(34) The reflected signal returned to the array 22 via the lens module 34 may be demodulated by performing an N-times sampling operation (e.g., a 4-times sampling operation). In this manner, the sampling operation may generate (or detect) sampled pixel signals (e.g., pixel samples A0, A1, A2, and A3 of
(35) An estimate of the phase shift θ of Equation 1 between the modulated source signal (EL) and the reflected signal (RL) may be calculated from the sampled pixel signals using Equation 3:
(36) θ=arctan[(A1−A3)/(A0−A2)]  (Equation 3),
where an amplitude A of the reflected signal (RL) may be expressed by Equation 4:
(37) A=√[(A1−A3)²+(A0−A2)²]/2  (Equation 4).
(38) Thus, an amplitude A of the reflected signal (RL) may be determined by the amplitude of the modulated source signal (EL) in the illustrated example of
(39) Then, an offset B for the reflected signal (RL) may be expressed by Equation 5:
(40) B=(A0+A1+A2+A3)/4  (Equation 5).
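A minimal sketch of the four-phase demodulation described by Equations 3, 4, and 5, assuming the samples A0, A1, A2, and A3 correspond to the 0°, 90°, 180°, and 270° gate signals respectively (the k-to-phase mapping of paragraph (52)). `atan2` is used in place of a bare arctangent so the recovered phase lands in the correct quadrant:

```python
import math


def demodulate(a0: float, a1: float, a2: float, a3: float):
    """Recover the phase shift (Eq. 3), amplitude A (Eq. 4), and
    offset B (Eq. 5) from four 90-degree-spaced pixel samples."""
    theta = math.atan2(a1 - a3, a0 - a2)                       # Equation 3
    amplitude = math.sqrt((a1 - a3) ** 2 + (a0 - a2) ** 2) / 2.0  # Equation 4
    offset = (a0 + a1 + a2 + a3) / 4.0                         # Equation 5
    return theta, amplitude, offset
```

Feeding in synthetic samples of the form A·cos(θ − k·90°) + B returns the original θ, A, and B, which is a quick way to sanity-check the sample ordering.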
(41) Referring to
(42) A silicon oxide layer is formed on the P-type substrate 100, the photo gate 110 is formed on the silicon oxide layer, and a transfer transistor 112 is also formed on the silicon oxide layer. The P-type substrate 100 may be a P-doped epitaxial substrate.
(43) The first photo gate control signal (Ga) is supplied to the photo gate 110 during an integration interval. This is referred to as a charge collection operation. A transfer control signal (TX) that controls the transfer of photo-charge generated within a region of the P-type substrate 100 below the photo gate 110 to the floating diffusion region 114 is supplied to a gate of the transfer transistor 112. This is referred to as a charge transfer operation.
(44) According to certain embodiments of the inventive concept, a bridging diffusion region 116 may be further formed within a region of the P-type substrate 100 between regions of the P-type substrate 100 below the photo gate 110 and the transfer transistor 112. The bridging diffusion region 116 may be doped with N-type impurities. The photo-charge is generated by source signals incident into the P-type substrate 100 via the photo gate 110.
(45) When the transfer control signal (TX) having a low level (e.g., 1.0 V) is supplied to the gate of the transfer transistor 112 and the first photo gate control signal (Ga) having a high level (e.g., 3.3 V) is supplied to the photo gate 110, photo-charge generated within the P-type substrate 100 are concentrated in the region of the P-type substrate 100 below the photo gate 110, and this concentrated photo-charge may then be transferred to the floating diffusion region 114 (e.g., when the bridging diffusion region 116 is not formed) or to the floating diffusion region 114 via the bridging diffusion region 116 (e.g., when the bridging diffusion region 116 is formed).
(47) When a low transfer control signal (TX) is supplied to the gate of the transfer transistor 112 and a low first photo gate control signal (Ga) is supplied to the photo gate 110, photo-charge is generated within the region of the P-type substrate 100 below the photo gate 110, but the generated photo-charge is not transferred to the floating diffusion region 114.
(48) A charge collection operation and a charge transfer operation performed when each of the second, third, and fourth photo gate control signals Gb, Gc, and Gd is supplied to the photo gate 110 are similar to those when the first photo gate control signal Ga is supplied to the photo gate 110.
(49) Although the 1-tap depth pixel 23 illustrated in
(50) The 1-tap depth pixel 23 accumulates photo-charge during a defined period of time, for example, during an integration time, and outputs corresponding pixel signals A0, A1, A2, and A3 generated according to a result of this accumulation. A pixel signal (A.sub.k) generated by each of the 1-tap depth pixels 23 may be expressed by Equation 6:
(51) A.sub.k=Σ.sub.n=1.sup.N a.sub.k,n  (Equation 6).
(52) When the first photo gate control signal (Ga) is input to the photo gate 110 of the 1-tap depth pixel 23, k in Equation 6 has a value of 0. When the third photo gate control signal (Gc) is input to the photo gate 110 of the 1-tap depth pixel 23, k in Equation 6 will have a value of 1. When the second photo gate control signal (Gb) is input to the photo gate 110 of the 1-tap depth pixel 23, k in Equation 6 will have a value of 2, and when the fourth photo gate control signal (Gd), having a 270° phase difference with respect to the clock signal (MLS), is input to the photo gate 110, k in Equation 6 will have a value of 3.
(53) Thus, in Equation 6, the term a.sub.k,n denotes a quantity of photo-charge generated by the 1-tap depth pixel 23 when the n-th gate signal is applied with a phase difference corresponding to the variable k, and the natural number N is equal to fm*Tint, where fm is the frequency of the modulated source signal (EL) and Tint is the integration time period.
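Equation 6 is a per-phase accumulation over the N modulation cycles of the integration period. As a sketch, with the list-of-lists layout being an assumption for illustration:

```python
def accumulate_pixel_signals(charges):
    """Equation 6: each pixel signal A_k is the sum of the photo-charge
    quantities a_{k,n} over n = 1..N; one inner list per phase index k."""
    return [sum(per_cycle) for per_cycle in charges]


def cycle_count(f_mod_hz: float, t_int_s: float) -> int:
    # N = fm * Tint, truncated to a natural number
    return int(f_mod_hz * t_int_s)
```

For example, a 20 MHz modulation frequency and a 1 ms integration time give N = 20,000 accumulation cycles per pixel signal.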
(54) Referring to
(55) According to the embodiment illustrated in
(57) Referring to
(58) As before, the depth sensor 23-1 of
(59) Each of the 2-tap depth pixels 23-1, which may be arranged two-dimensionally in the array 22, includes a first photo gate 110 and a second photo gate 120. Each of the 2-tap depth pixels 23-1 also includes a plurality of transistors for signal processing.
(60) Since the depth sensor 10 of
(61) During a first integration interval, the first photo gate control signal (Ga) is supplied to the first photo gate 110, and the second photo gate control signal (Gb) is supplied to the second photo gate 120. During a second integration interval, the third photo gate control signal (Gc) is supplied to the first photo gate 110, and the fourth photo gate control signal (Gd) is supplied to the second photo gate 120.
(62) Referring now to
(63) The first floating diffusion region 114 is connected to the gate of a first driving transistor S/F_A (not shown), and the second floating diffusion region 124 is connected to the gate of a second driving transistor S/F_B (not shown). Each of the first and second driving transistors S/F_A and S/F_B may perform the function of a source follower. Each of the first and second floating diffusion regions 114 and 124 may be doped with N-type impurities.
(64) A silicon oxide layer is formed on the P-type substrate 100, the first and second photo gates 110 and 120 are formed on the silicon oxide layer, and first and second transfer transistors 112 and 122 are also formed on the silicon oxide layer. An isolation region 130 for preventing photo-charge generated within the P-type substrate 100 by the first photo gate 110 from interfering with photo-charge generated within the P-type substrate 100 by the second photo gate 120 may be formed within the P-type substrate 100.
(65) The P-type substrate 100 may be a P-doped epitaxial substrate, and the isolation region 130 may be a P+-doped region. According to certain embodiments of the inventive concept, the isolation region 130 may be formed by shallow trench isolation (STI) or local oxidation of silicon (LOCOS).
(66) During a first integration interval, the first photo gate control signal (Ga) is supplied to the first photo gate 110, and the second photo gate control signal (Gb) is supplied to the second photo gate 120. A first transfer control signal (TX_A) controlling the transfer of photo-charge generated within a region of the P-type substrate 100 below the first photo gate 110 to the first floating diffusion region 114 is supplied to the gate of the first transfer transistor 112. A second transfer control signal (TX_B) controlling the transfer of the photo-charge generated within a region of the P-type substrate 100 below the second photo gate 120 to the second floating diffusion region 124 is supplied to a gate of the second transfer transistor 122.
(67) According to the illustrated embodiment of
(68) The photo-charge are generated by source signals incident into the P-type substrate 100 via each of the first and second photo gates 110 and 120.
(69) When a low first transfer control signal (TX_A) is supplied to the gate of the first transfer transistor 112 and a high first photo gate control signal (Ga) is supplied to the first photo gate 110, photo-charge generated within the P-type substrate 100 are concentrated in the region of the P-type substrate 100 below the first photo gate 110, and the concentrated photo-charge are transferred to the first floating diffusion region 114 (e.g., when the first bridging diffusion region 116 is not formed) or to the first floating diffusion region 114 via the first bridging diffusion region 116 (e.g., when the first bridging diffusion region 116 is formed).
(70) Simultaneously, when a low second transfer control signal (TX_B) is supplied to the gate of the second transfer transistor 122 and a low second photo gate control signal (Gb) is supplied to the second photo gate 120, photo-charge are generated within the region of the P-type substrate 100 below the second photo gate 120, but the generated photo-charge are not transferred to the second floating diffusion region 124. This operation is referred to as a charge collection operation.
(71) In
(72) When a low first transfer control signal (TX_A) is supplied to the gate of the first transfer transistor 112 and a low first photo gate control signal (Ga) is supplied to the first photo gate 110, photo-charge is generated within the region of the P-type substrate 100 below the first photo gate 110, but the generated photo-charge are not transferred to the first floating diffusion region 114.
(73) Simultaneously, when a low second transfer control signal (TX_B) is supplied to the gate of the second transfer transistor 122 and a high second photo gate control signal (Gb) is supplied to the second photo gate 120, photo-charge generated within the P-type substrate 100 are concentrated in the region of the P-type substrate 100 below the second photo gate 120, and the concentrated charge are transferred to the second floating diffusion region 124 (e.g., when the second bridging diffusion region 126 is not formed) or to the second floating diffusion region 124 via the second bridging diffusion region 126 (e.g., when the second bridging diffusion region 126 is formed). This operation is referred to as a charge transfer operation.
(74) In
(75) A charge collection operation and a charge transfer operation performed when the third photo gate control signal (Gc) is supplied to the first photo gate 110 are similar to those performed when the first photo gate control signal (Ga) is supplied to the first photo gate 110. And a charge collection operation and a charge transfer operation performed when the fourth photo gate control signal (Gd) is supplied to the second photo gate 120 are similar to those performed when the second photo gate control signal (Gb) is supplied to the second photo gate 120.
(77) Referring to
(78) Then, the depth sensor 10 emits a second modulated source signal EL′ having a second amplitude (different from the first amplitude) towards the object in the scene 40 in response to the clock signal at a second point of time T2. Accordingly, a second reflected signal RL′ is returned from the object of the scene 40 to the depth sensor 10. Here, in the illustrated example of
(79) The first modulated source signal EL having the first amplitude may be used to capture a near point (for example, Z1) of the scene 40 that is relatively close to the depth sensor 10, whereas the second modulated source signal EL′ having the second amplitude may be used to capture a distal point (for example, Z3) of the scene 40 that is relatively far from the depth sensor 10.
(80) The pixel signals A0, A1, A2, and A3 having different pixel values are detected by the depth pixel 23 by sequentially emitting the first and second source signals EL and EL′ having different amplitudes towards the scene 40. For example, pixel signals A0, A1, A2, and A3 each having a first pixel value are detected by performing a sampling operation on the first reflected signal RL four times, and pixel signals A0, A1, A2, and A3 each having a second pixel value are detected by performing a sampling operation on the second reflected signal RL′ four times.
(81) The first pixel value of the first pixel signal A0 detected from the first reflected signal RL may be different from the second pixel value of the first pixel signal A0 detected from the second reflected signal RL′. Thus, the ISP 39 generates a first image using the pixel signals A0, A1, A2, and A3 each having the first pixel value, and generates a second image using the pixel signals A0, A1, A2, and A3 each having the second pixel value. The first image is generated in accordance with the first source signal EL having the first amplitude, whereas the second image is generated in accordance with the second source signal EL′ having the second amplitude. A first point (or first object) in the first image that is relatively far from the depth sensor 10 may be distorted (or defocused), by noise for example, whereas a second point (or second object) in the second image that is relatively close to the depth sensor 10 may be distorted (or defocused), by noise for example. Accordingly, the ISP 39 generates both the first and second images, and then interpolates the first and second images to generate a final image. The final image is characterized by significantly improved quality over either one of the first and second images. In other words, the accuracy of distance information used to generate the final image is improved.
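The patent does not specify the interpolation algorithm used to combine the two captures. One plausible per-pixel sketch blends them by range, taking near pixels from the low-amplitude (first) capture and far pixels from the high-amplitude (second) capture; the depth threshold and the list-of-lists image layout are assumptions for illustration:

```python
def merge_captures(img_near, img_far, depth_map, z_threshold):
    """Combine a low-amplitude capture (reliable close to the sensor)
    with a high-amplitude capture (reliable far from the sensor) into
    one final image. Pixels nearer than z_threshold are taken from
    img_near; the remaining pixels are taken from img_far."""
    return [
        [n if z < z_threshold else f
         for n, f, z in zip(row_n, row_f, row_z)]
        for row_n, row_f, row_z in zip(img_near, img_far, depth_map)
    ]
```

A production implementation would more likely weight the two captures smoothly by estimated signal amplitude or confidence rather than switching at a hard threshold, but the hard switch keeps the sketch minimal.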
(83) Referring to
(84) Then, the ISP 39 may be used to capture first and second images respectively corresponding to the first and second reflected signals RL and RL′ returned from the scene 40 (S20). That is, the depth pixel 23 detects first and second pixel signals having different pixel values in accordance with the first and second reflected signals RL and RL′. The ISP 39 captures the first and second images using the first and second pixel signals.
(85) Then, the ISP 39 may be used to generate a single (final) image by interpolating the first and second images (S30).
(87) The red pixel R generates a red pixel signal corresponding to the wavelengths belonging to a red region of a visible light region, the green pixel G generates a green pixel signal corresponding to the wavelengths belonging to a green region of the visible light region, and the blue pixel B generates a blue pixel signal corresponding to the wavelengths belonging to a blue region of the visible light region. The depth pixel D generates a depth pixel signal corresponding to the wavelengths belonging to an infrared region.
(89) The unit pixel arrays 522-1 and 522-2 of
(91) Referring to
(92) The operations and functions of the row decoder 524, the timing controller 526, the photo gate controller 528, the light source driver 530, the CDS/ADC circuit 536, the memory 538, and the ISP 539 of
(93) According to an embodiment, the 3D image sensor 500 may further include a column decoder (not shown). The column decoder may decode column addresses output by the timing controller 526 to output column selection signals.
(94) The row decoder 524 may generate control signals for controlling an operation of each pixel included in the pixel array 522, for example, operations of the pixels R, G, B, and D of
(95) The pixel array 522 includes the unit pixel array 522-1 or 522-2 of
(97) The processor 210 may control an operation of the 3D image sensor 500. For example, the processor 210 may store a program for controlling the operation of the 3D image sensor 500. According to an embodiment, the processor 210 may access a memory (not shown) in which the program for controlling the operation of the 3D image sensor 500 is stored, in order to execute the program stored in the memory.
(98) The 3D image sensor 500 may generate 3D image information based on each digital pixel signal (for example, color information or depth information), under the control of the processor 210. The 3D image information may be displayed on a display (not shown) connected to an interface (I/F) 230. The 3D image information generated by the 3D image sensor 500 may be stored in a memory 220 via a bus 201 under the control of the processor 210. The memory 220 may be implemented by using a non-volatile memory.
(99) The I/F 230 may be implemented by using an interface for receiving and outputting 3D image information. According to an embodiment, the I/F 230 may be implemented by using a wireless interface.
(101) Although the depth sensor 10 or 10′ is physically separated from the color image sensor 310 in
(102) The color image sensor 310 may denote an image sensor that includes no depth pixels and includes a pixel array including a red pixel, a green pixel, and a blue pixel. Accordingly, the processor 210 may generate 3D image information based on depth information predicted (or calculated) by the depth sensor 10 or 10′ and color information (for example, at least one of red information, green information, blue information, magenta information, cyan information, or yellow information) output by the color image sensor 310, and may display the 3D image information on a display. The 3D image information generated by the processor 210 may be stored in a memory 220 via a bus 301.
(104) The processor 210 may calculate distance information or depth information respectively representing a distance or a depth between the signal processing system 800 and a subject (or a target object), based on pixel signals output by the depth sensor 10 or 10′. In this case, the depth sensor 10 or 10′ may not include the ISP 39. The distance information or the depth information measured by the processor 210 may be stored in a memory 220 via a bus 401.
(105) An I/F 410 may be implemented for receiving and outputting depth information. According to an embodiment, the I/F 410 may be implemented by using a wireless interface.
(106) The image processing system 600, 700, or 800 of
(108) The image processing system 1200 includes an application processor 1210, an image sensor 1220, and a display 1230.
(109) A camera serial interface (CSI) host 1212 implemented in the application processor 1210 may serially communicate with a CSI device 1221 of the image sensor 1220 via a CSI. According to an embodiment, a deserializer DES may be implemented in the CSI host 1212, and a serializer SER may be implemented in the CSI device 1221. The image sensor 1220 may be the depth sensor 10 of
(110) A display serial interface (DSI) host 1211 implemented in the application processor 1210 may serially communicate with a DSI device 1231 of the display 1230 via a DSI. According to an embodiment, a serializer SER may be implemented in the DSI host 1211, and a deserializer DES may be implemented in the DSI device 1231.
(111) The image processing system 1200 may further include a radio frequency (RF) chip 1240 capable of communicating with the application processor 1210. A PHY (physical layer) 1213 of the application processor 1210 and a PHY 1241 of the RF chip 1240 may transmit and receive data to and from each other via MIPI DigRF.
(112) The image processing system 1200 may further include a global positioning system (GPS) 1250, a memory 1252 such as a dynamic random access memory (DRAM), a data storage device 1254 implemented by using a non-volatile memory such as a NAND flash memory, a microphone (MIC) 1256, or a speaker 1258.
(113) The image processing system 1200 may communicate with an external apparatus by using at least one communication protocol (or communication standard), for example, ultra-wideband (UWB) 1260, wireless local area network (WLAN) 1262, worldwide interoperability for microwave access (WiMAX) 1264, or long term evolution (LTE) (not shown).
(114) In a depth sensor according to an embodiment of the present inventive concept, and an image capture method performed by the depth sensor, multiple source signals having respectively different amplitudes may be sequentially emitted towards a scene and advantageously used to increase the accuracy of distance information in a final image of the scene.
(115) While the inventive concept has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the scope of the following claims.