IMAGING SYSTEM AND DETECTION METHOD

20220357434 · 2022-11-10

    Abstract

    In at least one embodiment, an imaging system comprises a light source, a detector array and a synchronization circuit. The detector array comprises pixels, which have a built-in modulation function. The synchronization circuit is operable to synchronize acquisition performed by the detector array with emission by the light source.

    Claims

    1. An imaging system comprising: a light source, a detector array which comprises pixels, wherein the pixels have a built-in modulation function, and a synchronization circuit to synchronize the acquisition performed by the detector array with the light source.

    2. The imaging system according to claim 1, wherein at least the detector array and the synchronization circuit are integrated into a same chip and/or the imaging system comprises a sensor package, which encloses the detector array and the synchronization circuit integrated into the same chip as well as the light source.

    3. The imaging system according to claim 1, where the modulation function is achieved by a modulating element.

    4. The imaging system according to claim 3, where the modulating element introduces a leakage current linear to an applied voltage.

    5. The imaging system according to claim 1, where the pixels have a polarizing function.

    6. The imaging system according to claim 5, where adjacent pixels have orthogonal polarization functions.

    7. The imaging system according to claim 4, where a leakage current that flows through the modulating element has a first value at a start of a frame and a second value at the end of the frame, wherein the first value is higher than the second value.

    8. The imaging system according to claim 4, where a leakage current that flows through the modulating element monotonically decreases from a first value to a second value during a frame B.

    9. The imaging system according to claim 4, where the modulating element is a transistor, such as a leakage control transistor.

    10. The imaging system according to claim 1, where the emission wavelength of the light source is larger than 800 nm and smaller than 10000 nm.

    11. The imaging system according to claim 1, where the emission wavelength of the light source is in between 840 nm and 1610 nm.

    12. A vehicle comprising: an imaging system according to claim 1, and board electronics embedded in the vehicle, wherein the imaging system is arranged to provide an output signal to the board electronics.

    13. A detection method where a scene is illuminated with: a first light pulse in order to acquire a first image by a detector array (DA) having a constant sensitivity, and a second light pulse in order to acquire a second image by the detector array having a sensitivity increasing with time.

    14. The detection method according to claim 13, where the distance of objects of the scene is inferred from the ratio of the second image to the first image.

    15. The detection method according to claim 13, wherein the scene is illuminated by a light source and the first and the second light pulse have identical duration and pulse height.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0045] FIG. 1 shows an example of the imaging system,

    [0046] FIG. 2 shows an example embodiment of a detector array with polarization function,

    [0047] FIG. 3 shows a cross section of an example detector array with a high-contrast grating polarizer,

    [0048] FIG. 4 shows an example embodiment of a modulation element,

    [0049] FIG. 5 shows another example embodiment of a modulation element,

    [0050] FIG. 6 shows another example embodiment of a modulation element,

    [0051] FIG. 7 shows another example embodiment of a modulation element,

    [0052] FIG. 8 shows another example embodiment of a modulation element,

    [0053] FIG. 9 shows an example embodiment of a detection method,

    [0054] FIG. 10 shows an example embodiment of a detection method,

    [0055] FIG. 11 shows an example timing diagram of the light source,

    [0056] FIG. 12 shows an example embodiment of a prior art LIDAR detection method, and

    [0057] FIG. 13 shows another example embodiment of a prior art LIDAR detection method.

    DETAILED DESCRIPTION

    [0058] FIG. 1 shows an example imaging system. The imaging system comprises a light source LS, a detector array DA and a synchronization circuit SC, which are arranged contiguous with and electrically coupled to a carrier CA. For example, the carrier comprises a substrate to provide electrical connectivity and mechanical support. The detector array and the synchronization circuit are integrated into a same chip CH, which constitutes a common integrated circuit. Typically, the light source and the common integrated circuit are arranged on the carrier and electrically contacted to each other via the carrier. The components of the imaging system are embedded in a sensor package (not shown). Further components, such as a processing unit, e.g. a processor or microprocessor, to execute the detection method, ADCs, etc., are also arranged in the sensor package and may be integrated into the same integrated circuit.

    [0059] The light source LS comprises a light emitter such as a surface emitting laser, e.g., a vertical-cavity surface-emitting laser, or VCSEL. The light emitter has one or more characteristic emission wavelengths. For example, an emission wavelength of the light emitter lies in the near infrared, NIR, e.g. larger than 800 nm and smaller than 10000 nm. LIDAR applications may rely on an emission wavelength of the light emitter between 840 nm and 1610 nm, which results in robust emission and detection. This range can be offered by a VCSEL. A light source such as a VCSEL may generate polarized light. An external target or object may change or rotate the polarization of this light. Thus, an array with pixels having different polarizations is able to generate more information about the object and to increase the accuracy of the distance measurement.

    [0060] The detector array DA comprises one or more photodetectors, or pixels. The array of pixels forms an image sensor. Each photodetector may be a photodiode, e.g. a pinned photodiode, a PIN photodiode or another photodiode. The detector array comprises an array of detecting elements such as an array of photodiodes. The pixels are polarization sensitive. Adjacent pixels of the image sensor are polarization sensitive, each having an orthogonal state of polarization, arranged in a checker-board pattern. This will be discussed in more detail below. The synchronization circuit SC is arranged in the same sensor package and, in fact, integrated in the common integrated circuit. The synchronization circuit SC is arranged to synchronize emission of light by means of the light emitter and/or detection by means of the detector array, e.g. as frames A and B.

    [0061] FIG. 2 shows an example embodiment of a detector array with polarization function. The detector array DA, or image sensor, comprises pixels which are arranged in a pixel map as shown. The image sensor can be characterized in that adjacent pixels have different states of polarization. The drawing shows a detector array with on-chip polarizers associated with the pixels, respectively. Such a structure may be integrated using CMOS technology. Adjacent pixels have orthogonal polarization functions, e.g., pixels PxH with horizontal polarization and pixels PxV with vertical polarization. Embodiments of said detector array are disclosed in EP 3261130 A1, which is hereby incorporated by reference.
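    The checker-board arrangement described above can be illustrated with a minimal sketch. The function name and the 'H'/'V' labels are illustrative assumptions, not identifiers from this disclosure.

```python
# Sketch of a checker-board polarizer layout: adjacent pixels alternate
# between horizontal ('H', e.g. PxH) and vertical ('V', e.g. PxV)
# polarization, so any two neighbors are orthogonal.

def polarizer_map(rows: int, cols: int):
    """Return a checker-board of 'H'/'V' polarizer orientations."""
    return [['H' if (r + c) % 2 == 0 else 'V' for c in range(cols)]
            for r in range(rows)]

pm = polarizer_map(2, 2)
# pm == [['H', 'V'], ['V', 'H']]
```

    Any two horizontally or vertically adjacent entries differ, matching the "adjacent pixels have orthogonal polarization functions" property of claim 6.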

    [0062] FIG. 3 shows a cross section of an example detector array with a high-contrast grating polarizer. This example corresponds to FIG. 1 of EP 3261130 A1 and is cited here for easy reference. The remaining embodiments of detector arrays in EP 3261130 A1 are not excluded but rather incorporated by reference.

    [0063] The photodetector device, detector array, shown in FIG. 3 comprises a substrate 1 of semiconductor material, which may be silicon, for instance. The photodetectors, or pixels, of the array are suitable for detecting electromagnetic radiation, especially light within a specified range of wavelengths, such as NIR, and are arranged in the substrate 1, e.g. in the common integrated circuit. The detector array may comprise any conventional photodetector structure and is therefore only schematically represented in FIG. 3 by a sensor region 2 in the substrate 1. The sensor region 2 may extend continuously as a layer of the substrate 1, or it may be divided into sections according to a photodetector array.

    [0064] The substrate 1 may be doped for electric conductivity at least in a region adjacent to the sensor region 2, and the sensor region 2 may be doped, either entirely or in separate sections, for the opposite type of electric conductivity. If the substrate 1 has p-type conductivity the sensor region 2 has n-type conductivity, and vice versa. Thus a pn-junction 8 or a plurality of pn-junctions 8 is formed at the boundary of the sensor region 2 and can be operated as a photodiode or array of photodiodes by applying a suitable voltage. This is only an example, and the photodetector array may comprise different structures.

    [0065] A contact region 10 or a plurality of contact regions 10 comprising an electric conductivity that is higher than the conductivity of the adjacent semiconductor material may be provided in the substrate 1 outside the sensor region 2, especially by a higher doping concentration. A further contact region 20 or a plurality of further contact regions 20 comprising an electric conductivity that is higher than the conductivity of the sensor region 2 may be arranged in the substrate 1 contiguous to the sensor region 2 or a section of the sensor region 2. An electric contact 11 can be applied on each contact region 10 and a further electric contact 21 can be applied on each further contact region 20 for external electric connections.

    [0066] An isolation region 3 may be formed above the sensor region 2. The isolation region 3 is transparent or at least partially transparent to the electromagnetic radiation that is to be detected and has a refractive index for the relevant wavelengths of interest. The isolation region 3 comprises a dielectric material like a field oxide, for instance. If the semiconductor material is silicon, the field oxide can be produced at the surface of the substrate 1 by local oxidation of silicon (LOCOS). As the volume of the material increases during oxidation, the field oxide protrudes from the plane of the substrate surface as shown in FIG. 3.

    [0067] Grid elements 4 are arranged at a distance d from one another on the surface 13 of the isolation region 3 above the sensor region 2. For example, the grid elements 4 can be arranged immediately on the surface 13 of the isolation region 3. The grid elements 4 may have the same width w, and the distance d may be the same between any two adjacent grid elements 4. The sum of the width w and the distance d is the pitch p, which is a minimal period of the regular lattice formed by the grid elements 4. The length l of the grid elements 4, which is perpendicular to their width w, is indicated in FIG. 3 for one of the grid elements 4 in a perspective view showing the hidden contours by broken lines.

    [0068] The grid elements 4 are transparent or at least partially transparent to the electromagnetic radiation that is to be detected and have a refractive index for the relevant wavelengths. The grid elements 4 may comprise polysilicon, silicon nitride or niobium pentoxide, for instance. The use of polysilicon for the grid elements 4 has the advantage that the grid elements 4 can be formed in a CMOS process together with the formation of polysilicon electrodes or the like. The refractive index of the isolation region 3 is lower than the refractive index of the grid elements 4. The isolation region 3 is an example of the region of lower refractive index recited in the claims.

    [0069] The grid elements 4 are covered by a further region of lower refractive index. In the photodetector device according to FIG. 3, the grid elements 4 are covered by a dielectric layer 5 comprising a refractive index that is lower than the refractive index of the grid elements 4. The dielectric layer 5 may especially comprise borophosphosilicate glass (BPSG), for instance, or silicon dioxide, which is employed in a CMOS process to form inter-metal dielectric layers of the wiring. The grid elements 4 are thus embedded in material of lower refractive index and form a high-contrast grating polarizer.

    [0070] An antireflective coating 7 may be applied on the grid elements 4. It may be formed by removing the dielectric layer 5 above the grid elements 4, depositing a material that is suitable for the antireflective coating 7, and filling the openings with the dielectric material of the dielectric layer 5. The antireflective coating 7 may especially be provided to match the phase of the incident radiation to its propagation constant in the substrate 1. For example, if the substrate 1 comprises silicon, the refractive index of the antireflective coating 7 may be at least approximately the square root of the refractive index of silicon. Silicon nitride may be used for the antireflective coating 7, for instance.

    [0071] The array of grid elements 4 forms a high-contrast grating, which is comparable to a resonator comprising a high quality-factor. For the vector component of the electric field vector that is parallel to the longitudinal extension of the grid elements 4, i.e., perpendicular to the plane of the cross sections shown in FIG. 3, the high-contrast grating constitutes a reflector. Owing to the difference between the refractive indices, the optical path length of an incident electromagnetic wave is different in the grid elements 4 and in the sections of the further region of lower refractive index 5, 15 located between the grid elements 4. Hence an incident electromagnetic wave reaches the surface 13, 16 of the region of lower refractive index 3, 6, which forms the base of the high-contrast grating, with a phase shift between the portions that have passed a grid element 4 and the portions that have propagated between the grid elements 4. The high-contrast grating can be designed to make the phase shift π or 180° for a specified wavelength, so that the portions in question cancel each other. The high-contrast grating thus constitutes a reflector for a specified wavelength and polarization.
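    The 180° condition above can be sketched with a simplified plane-wave model; this is an idealization, and the symbols h (grid-element thickness), n_high and n_low are assumed notation rather than values from this disclosure. The phase difference accumulated over the thickness h between light traversing a grid element and light passing between grid elements is:

```latex
\Delta\varphi = \frac{2\pi h}{\lambda_0}\,\bigl(n_{\text{high}} - n_{\text{low}}\bigr),
\qquad
\Delta\varphi = \pi
\;\Longrightarrow\;
h = \frac{\lambda_0}{2\,\bigl(n_{\text{high}} - n_{\text{low}}\bigr)}.
```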

    [0072] When the vector component of the electric field vector is transverse to the longitudinal extension of the grid elements 4, the electromagnetic wave passes the grid elements 4 essentially undisturbed and is absorbed within the substrate 1 underneath. Thus electron-hole pairs are generated in the semiconductor material. The charge carriers generated by the incident radiation produce an electric current, by which the radiation is detected. Optionally, a voltage is applied to the pn-junction 8 in the reverse direction.

    [0073] The grid elements 4 may comprise a constant width w, and the distance d between adjacent grid elements 4 may also be constant, so that the high-contrast grating forms a regular lattice. The pitch p of such a grating, which defines a shortest period of the lattice, is the sum of the width w of one grid element 4 and the distance d. For the application of the array of grid elements 4 as a high-contrast grating polarizer, the pitch p is typically smaller than the wavelength of the electromagnetic radiation in the material of the region of lower refractive index n_low1 and/or in the further region of lower refractive index n_low2, or even smaller than the wavelength in the grid elements 4. In the region of lower refractive index n_low1, the vacuum wavelength λ_0 of the electromagnetic radiation to be detected becomes λ_1 = λ_0/n_low1. In the further region of lower refractive index n_low2, the wavelength becomes λ_2 = λ_0/n_low2. If n_high is the refractive index of the grid elements 4, the wavelength λ_0 becomes λ_3 = λ_0/n_high in the grid elements 4, with λ_3 < λ_0/n_low1 and λ_3 < λ_0/n_low2. This dimensioning distinguishes the high-contrast grating used as a polarizer in the photodetector device described above from a conventional diffraction grating.

    [0074] The pitch p may be larger than a quarter wavelength of the electromagnetic radiation in the grid elements 4. If the wavelength of the electromagnetic radiation to be detected is λ_0 in vacuum, p > λ_3/4 = λ_0/(4·n_high). This distinguishes the high-contrast grating used as a polarizer in the detector array described above from deep-subwavelength gratings. The length l of the grid elements 4 is optionally larger than the wavelength λ_3 = λ_0/n_high of the electromagnetic radiation in the grid elements 4.
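    The pitch bounds above can be checked numerically. The material values in this sketch (polysilicon grid in oxide at a 940 nm NIR wavelength) are illustrative assumptions and not taken from this disclosure.

```python
# Check whether a candidate grating pitch lies in the high-contrast-grating
# regime described above: larger than a quarter wavelength in the grid
# elements (p > λ3/4) but smaller than the wavelength in the low-index
# regions (p < λ1). Refractive indices are illustrative assumptions.

def hcg_pitch_window(lam0_nm: float, n_high: float, n_low: float):
    """Return (lower, upper) pitch bounds in nm for the HCG regime."""
    lam_in_grid = lam0_nm / n_high   # λ3 = λ0 / n_high
    lam_in_low = lam0_nm / n_low     # λ1 = λ0 / n_low
    return lam_in_grid / 4.0, lam_in_low

# Example: 940 nm NIR emitter, polysilicon grid (~3.6) in oxide (~1.46)
lo, hi = hcg_pitch_window(940.0, 3.6, 1.46)
pitch = 400.0  # candidate pitch in nm
print(f"allowed pitch window: {lo:.0f} nm .. {hi:.0f} nm")
print("pitch OK" if lo < pitch < hi else "pitch outside HCG regime")
```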

    [0075] Therefore, the high-contrast grating based polarizer alleviates the drawbacks of diffraction gratings, namely tight fabrication tolerances and layer thickness control, and of deep sub-wavelength gratings, namely the necessity of very small structures and thus very advanced lithography. The detector array with high-contrast grating polarizer can be used for a broad range of applications. Further advantages include an improved extinction coefficient for states of polarization that are to be excluded and an enhanced responsivity for the desired state of polarization.

    [0076] FIG. 4 shows an example embodiment of a modulation element. The drawing shows a circuit layout of a pixel architecture, or 4T pixel cell, of the detector array DA implemented as an image sensor. The pixel Px is connected as a 4T pixel architecture. The 4T pixel architecture comprises a transfer transistor Tt with transfer gate TG, a first reset transistor Tr1 connected to a first floating diffusion Fd1, a source follower Sf, and a column select transistor Cs, which provides an output terminal Out. The pixel Px is further connected to the modulation element ME, which comprises a leakage control element LC, a second reset transistor Tr2 and a second floating diffusion Fd2. Vdd indicates the supply rail.

    [0077] The modulation element ME comprises a leakage control element LC, which is responsible for re-routing a certain amount of charge per unit time to a position different from the floating diffusion. This re-routing can be controlled using the gate of the leakage control element LC. The floating diffusion of the 4T pixel cell holds the relevant charge information. That charge is intentionally re-routed so that the responsivity is reduced. In fact, a leakage path is introduced to the second floating diffusion Fd2, reducing the responsivity of the photodetector Px. The modulation shall proceed monotonically during the modulation frame B, as shown in FIG. 11, for example.

    [0078] FIG. 5 shows another example embodiment of a modulation element. The drawing shows a circuit layout of a pixel architecture of the detector array DA implemented as an image sensor. The 4T pixel cell is that of FIG. 4, but modified. The pixel Px is connected to the modulation element ME, which comprises the leakage control element LC but not a second reset transistor Tr2 or a second floating diffusion Fd2. Instead, the leakage control element LC is connected to the supply rail Vdd.

    [0079] The leakage control element LC redirects charge to the supply rails (e.g. Vdd) during the acquisition of a frame. In fact, a leakage path is introduced to the supply rails, reducing the responsivity of the photodetector. The leakage control can be monotonic such that a frame starts with a reduced sensitivity which then increases monotonically. This concept enables objects further away from the imaging system to contribute more signal.

    [0080] FIG. 6 shows another example embodiment of a modulation element. The drawing shows the circuit layout of FIG. 5. However, the leakage control element LC is connected to ground instead of the supply rail Vdd.

    [0081] In the embodiments mentioned above, the leakage control element is optionally linear. This means that the leakage current is proportional to the control voltage applied to this element, e.g. via its gate. Alternatively, the leakage control element can be non-linear. Thus, the sensitivity of the photodetector during a frame may rise linearly or non-linearly. In both cases, the change of sensitivity during a given frame is monotonic, i.e. either monotonically increasing or monotonically decreasing.

    [0082] FIG. 7 shows another example embodiment of a modulation element. The drawing shows a circuit layout of a pixel architecture similar to that of FIG. 4: the 4T pixel cell is unchanged, but the modulation element ME is modified. The pixel Px is connected to the modulation element ME, which comprises the leakage control element LC but not a second reset transistor Tr2 or a second floating diffusion Fd2. Instead, the leakage control element LC is connected to the supply rail Vdd via a voltage source VS to apply a control voltage Vleak. The voltage between the leakage control element and the supply rail (Vdd) can be varied, e.g. linearly or non-linearly, during acquisition of a frame. During this frame, the leakage control element (i.e., a MOSFET) is operated in weak inversion and behaves like a resistor. By linearly varying the voltage Vleak in time, the leakage current Ileak is linearly varied in time also. By non-linearly varying the voltage Vleak in time, the leakage current Ileak is non-linearly varied in time.
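    The resistor-like weak-inversion behavior described above can be sketched as follows. The effective resistance R_LEAK and the ramp values are illustrative assumptions; the disclosure only states that a linear Vleak ramp yields a linear Ileak ramp.

```python
# Sketch of the Vleak ramp: under the resistor approximation from the text,
# Ileak(t) = Vleak(t) / R. A linear voltage ramp therefore produces a linear
# current ramp. Decreasing Ileak over the frame corresponds to increasing
# pixel sensitivity (cf. claims 7 and 8).

R_LEAK = 1e9  # assumed effective resistance of the leakage element, ohms

def vleak(t: float, t_frame: float, v_start: float, v_end: float) -> float:
    """Linear Vleak ramp over one acquisition frame of duration t_frame."""
    return v_start + (v_end - v_start) * (t / t_frame)

def ileak(t: float, t_frame: float, v_start: float, v_end: float) -> float:
    """Leakage current under the resistor approximation."""
    return vleak(t, t_frame, v_start, v_end) / R_LEAK

# leakage current decreases linearly from 1 nA to 0 during a 1 ms frame,
# so the pixel sensitivity increases as the frame progresses
i_start = ileak(0.0, 1e-3, 1.0, 0.0)   # 1e-9 A at start of frame
i_end = ileak(1e-3, 1e-3, 1.0, 0.0)    # 0.0 A at end of frame
```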

    [0083] FIG. 8 shows another example embodiment of a modulation element. In this embodiment there is no additional leakage control element; rather, the transfer gate Tg of the transfer transistor Tt of the 4T pixel cell is operated as a leakage control element during part of the acquisition of a frame. During a frame, the transfer gate can be kept slightly open, thus introducing a time-dependent leakage path to the first floating diffusion Fd1, which is reset by the first reset transistor Tr1 before the actual charge transfer. During read-out, the transfer gate Tg is operated conventionally as in a 4T pixel cell, namely it isolates the photodetector from the first floating diffusion Fd1 so as not to alter the conversion gain, e.g. of a read-out circuit.

    [0084] FIG. 9 shows an example embodiment of a detection method, e.g. a LIDAR detection method. The light source, e.g. a VCSEL, LED, other laser or flash lamp, emits an unmodulated light pulse. The light pulse is synchronized with the photodetector. After being emitted, the light pulse travels until it is reflected or scattered at one or more objects of the scenery. The reflected or scattered light returns to the imaging system, where the detector array eventually detects the returning light. The backward travelling path is shown in FIG. 10.

    [0085] FIG. 10 shows an example embodiment of a detection method, e.g. a LIDAR detection method. This drawing shows the backward path of the LIDAR system. A light pulse emitted by the light emitter travels until it is reflected or scattered at one or more objects of the scenery. The reflected or scattered light returns to the imaging system where the detector array eventually detects the returning light. The reflected or scattered light pulse is modulated within a photodetector of the detector array during detection.

    [0086] The modulation function is integrated in the pixel. The modulation affects the sensitivity, or responsivity, of a pixel. For example, the sensitivity during a frame is monotonically increasing or monotonically decreasing as a function of time. The modulated sensitivity in the drawing starts at low sensitivity and increases to high sensitivity.

    [0087] The imaging system, or LIDAR system, due to its arrangement, e.g. in a compact sensor package which may also include dedicated optics, allows for observing a complete field of view (FOV) at once; this is called a Flash system. Flash typically works well for short to mid-range (0-100 m). By capturing a complete scene at once, several objects, as well as objects with high relative speeds, can be detected properly. The synchronization circuit controls a delay between emission of light and a time frame for detection, e.g. the delay between emission of the pulses and a time frame for detection. The delay between an end of the pulse and a beginning of detection may be set, e.g., depending on a distance or distance range to be detected.
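    The delay setting described above follows directly from the round-trip time of flight. This is a minimal sketch under the assumption that detection should begin when light reflected from the nearest distance of interest, d_min, can arrive; the function name is illustrative.

```python
# Sketch of choosing the emission-to-detection delay for a distance range:
# light reflected at distance d_min returns after the round-trip time
# 2 * d_min / c, so detection can be delayed until then.

C = 299_792_458.0  # speed of light in vacuum, m/s

def detection_delay(d_min_m: float) -> float:
    """Delay (s) between end of pulse and start of detection for d_min."""
    return 2.0 * d_min_m / C

# e.g. ignoring returns closer than 15 m requires roughly a 100 ns delay
print(f"{detection_delay(15.0) * 1e9:.1f} ns")
```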

    [0088] The detection may involve this example sequence of operation:
    [0089] 1) Emission of a light pulse with constant irradiance,
    [0090] 2) Acquisition of a “constant image”, e.g. as first image during a first frame,
    [0091] 3) Emission of another light pulse with constant irradiance,
    [0092] 4) Acquisition of a “modulated image”, e.g. as second image during a second frame,
    [0093] 5) Calculation of a “LIDAR image” by division of the “modulated image” by the “constant image”.
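    The final calculation step of the sequence above can be sketched as a pixelwise division. Array contents, shapes and the epsilon guard are illustrative assumptions, not values from this disclosure.

```python
# Minimal sketch of computing a "LIDAR image" as the pixelwise ratio of the
# modulated image (frame B) to the constant image (frame A). Each ratio
# encodes the pixel sensitivity at pulse arrival, i.e. the object distance.

def lidar_image(modulated, constant, eps=1e-9):
    """Pixelwise ratio; eps guards against division by zero in dark pixels."""
    return [[m / (c + eps) for m, c in zip(mr, cr)]
            for mr, cr in zip(modulated, constant)]

constant = [[4.0, 8.0], [2.0, 1.0]]    # frame A: distance-independent signal
modulated = [[1.0, 6.0], [1.5, 0.25]]  # frame B: scaled by sensitivity at arrival
ratio = lidar_image(modulated, constant)
```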

    [0094] FIG. 11 shows an example timing diagram of the light source. The drawing shows leakage voltage, leakage current and sensitivity as functions of time, respectively. During a first integration of a first frame A, a steady-state image, or “constant image”, is acquired. The purpose is to acquire a steady-state or DC image. Such an image may be grayscale or color depending on the application. In a second integration of a second frame B, the “modulated image” is acquired. During the modulation, a voltage, e.g. the leakage voltage Vleak, is applied to the leakage control element LC as discussed above. In the second frame, frame B, the sensitivity of the pixels is modulated, for example monotonically, i.e. the sensitivity increases or decreases linearly or non-linearly. The distance image, or “LIDAR image”, is then composed by dividing the modulated image from frame B by the non-modulated image from frame A, or by performing another mathematical calculation. In the “LIDAR image”, items further away in the scene show a different intensity than close objects.

    [0095] In one example, the light source generates the light pulse with a duration that is shorter than the duration of frame A and the duration of frame B. For example, the duration of the light pulse may be less than 2 ns, less than 1 ns or less than 0.5 ns. The duration of frame B may be longer than 1 ms, longer than 5 ms, longer than 10 ms or longer than 20 ms. The photodetector measures (e.g. integrates) during frame B. The photodetector may detect light during the complete duration of frame B or during most of the duration of frame B. The light source generates the light pulse at the start of frame B or shortly after the start of frame B. The object reflects the light pulse. The point in time at which the reflected light pulse reaches the photodetector depends on the distance of the object from the photodetector. During frame B, the photodetector has a sensitivity that increases with time. In case of a short distance, the photodetector has a low sensitivity when the reflected light pulse hits the photodetector. In case of a long distance, the photodetector has a high sensitivity when the reflected light pulse hits the photodetector. Thus, the value of the signal generated by the photodetector at the end of frame B depends on the amount of light in the reflected light pulse and the point in time at which the reflected light pulse arrives at the photodetector.

    [0096] During frame A and frame A′, the photodetector has a constant sensitivity. Thus, the value of the signal generated by the photodetector at the end of frame A depends on the amount of light in the reflected light pulse and is independent of the point in time at which the reflected light pulse arrives at the photodetector. The distance of the object from the photodetector can be calculated by using the value of the signal generated by the photodetector at the end of frame B and the value of the signal generated by the photodetector at the end of frame A or A′.
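    The distance calculation described above can be sketched under a simplifying assumption not stated in this disclosure: that the frame-B sensitivity rises linearly from 0 to 1 over a ramp of duration T. The frame-B/frame-A signal ratio then equals the sensitivity at pulse arrival, and the arrival time gives the round-trip distance. The function name and the 1 µs ramp are illustrative.

```python
# Sketch: infer distance from the ratio of frame B (ramped sensitivity) to
# frame A (constant sensitivity) signals. With a linear 0-to-1 ramp over
# duration T: ratio r = t_arrival / T, and t_arrival = 2 * d / c.

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_ratio(signal_b: float, signal_a: float, ramp_t: float) -> float:
    """Object distance (m) from the frame B / frame A signal ratio."""
    r = signal_b / signal_a     # sensitivity at arrival, in [0, 1]
    t_arrival = r * ramp_t      # time of flight, s
    return C * t_arrival / 2.0  # halve for the round trip

# Example: ratio 0.5 within an illustrative 1 µs ramp
d = distance_from_ratio(0.5, 1.0, 1e-6)
print(f"{d:.1f} m")  # → 74.9 m
```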

    [0097] The timing discussed above can also be applied to the embodiment of FIG. 8, i.e. there is no additional leakage control element, but rather the transfer gate Tg of the transfer transistor Tt of the 4T pixel cell is operated as a leakage control element in part of the acquisition of a frame. In this case the leakage voltage corresponds to a control voltage to be applied at the transfer gate Tg of the transfer transistor Tt.

    [0098] Emission of pulses and detection by means of the detector, with modulated sensitivity or not, e.g., as frames A or B, is synchronized by means of a synchronization circuit. In fact, light emitter, detector array and synchronization circuit may all be arranged in a same sensor package, wherein at least the detector array and synchronization circuit are integrated in the same integrated circuit. Further components such as a microprocessor to execute the detection method and ADCs, etc. may also be arranged in a same sensor package and integrated into the same integrated circuit.

    [0099] The embodiments shown in FIGS. 1 to 11 as stated above represent example embodiments of the improved imaging system and detection method, therefore they do not constitute a complete list of all embodiments according to the improved imaging system and detection method. An actual imaging system and detection method may vary from the embodiments shown in terms of circuit parts, shape, size and materials, for example.

    [0100] In one embodiment, the leakage control element comprises a transistor with a relatively long gate length.

    [0101] It should be noted that the examples above are non-exhaustive. This approach can be applied all the way from 3T pixel cells up to high-end HDR 8-transistor pixel arrays. The examples given above are all derived from the 4T pixel architecture. Also, the polarization property of the photodetectors does not need to be implemented in low-cost applications where size and cost are critical.

    [0102] The applications of the imaging system may e.g. comprise:
    [0103] Automotive: autonomous driving, collision prevention
    [0104] Security and surveillance
    [0105] Industry and automation
    [0106] Consumer electronics

    [0107] This imaging system can be used for LIDAR and TOF systems; rather than operating from a point-cloud perspective, it enables high-resolution LIDAR systems suitable for various applications, in particular automotive, autonomous driving, robotics, drones and industrial applications.

    [0108] The technology described in this disclosure provides advantages over systems working with single laser beam deflection, whose point clouds comprise only several hundred points.

    [0109] The imaging system realizes control of the VCSEL power, which can be implemented in CMOS image sensor designs.

    [0110] The detection method may be implemented by the imaging system described above. The light source may also be referred to as the light emitter, and the image sensor as the detector array.