MULTISPECTRAL IMAGE SENSOR AND METHOD FOR FABRICATION OF AN IMAGE SENSOR

20220199673 · 2022-06-23


    Abstract

    The present invention relates to a multispectral image sensor having a pixel array for detecting images with light components in different wavelength ranges, comprising a plurality of imaging layers each embedded in a semiconductor substrate, wherein in each of the imaging layers an array of photodetecting regions is provided, wherein the photodetecting regions are configured with different absorption characteristics, wherein the imaging layers are stacked so that the photodetecting regions of the arrays are aligned, wherein the absorption characteristics allow a preferred absorption of light components of at least one predetermined wavelength range.

    Claims

    1. Multispectral image sensor having a pixel array for detecting images with light components in different wavelength ranges, comprising a plurality of imaging layers each embedded in a semiconductor substrate, wherein in each of the imaging layers an array of photodetecting regions is provided, wherein the photodetecting regions are configured with different absorption characteristics, wherein the imaging layers are stacked so that the photodetecting regions of the arrays are aligned, wherein the absorption characteristics define a preferred absorption of light components of at least one predetermined wavelength range.

    2. Image sensor according to claim 1, wherein the photodetecting regions of at least the upper imaging layers have absorption characteristics which allow a portion of light to transmit to the photodetecting regions of one of the lower imaging layers.

    3. Image sensor according to claim 1, wherein the photodetecting regions of each of the imaging layers have different thicknesses with respect to a direction perpendicular to the main surface of the respective imaging layer.

    4. Image sensor according to claim 3, wherein the aligned photodetecting regions of the plurality of imaging layers have an increasing thickness of the photodetecting regions from the upper imaging layer, which serves as a light impinging surface, down to the lowest imaging layer.

    5. Image sensor according to claim 1, wherein the imaging layers are formed in a semiconductor substrate made of the same semiconductor material, such as silicon, or of at least two different semiconductor materials.

    6. Image sensor according to claim 1, wherein at least one of the imaging layers is carried on a light transparent substrate, particularly made of glass.

    7. Image sensor according to claim 6, wherein the at least one imaging layer is bonded to the light transparent substrate, particularly by means of wafer bonding.

    8. Image sensor according to claim 1, wherein each imaging layer has a light receiving surface which is provided with a micro-lens arrangement including micro-lenses each aligned to at least a part of the photodetecting regions.

    9. Image sensor according to claim 8, wherein at least one micro-lens arrangement on one of the imaging layers is in contact with a light transparent substrate carrying a neighboring one of the stacked imaging layers.

    10. Image sensor according to claim 8, wherein a fully transparent medium is provided between the micro-lenses and the associated photodetecting regions.

    11. Image sensor according to claim 1, wherein three imaging layers are stacked so that an upper imaging layer is configured with absorption characteristics to mainly absorb light up to wavelengths of between 450 nm and 550 nm, particularly to 500 nm, a middle imaging layer is configured with absorption characteristics to mainly absorb light up to wavelengths of between 550 nm and 650 nm, particularly to 600 nm, and a lower imaging layer is configured with absorption characteristics to mainly absorb light up to wavelengths of between 700 nm and 800 nm, particularly to 750 nm.

    12. Image sensor according to claim 1, wherein an upper imaging layer has photodetecting regions with a thickness of 1.5-3 μm, a further imaging layer has photodetecting regions with a thickness of 3-8 μm, and a lower imaging layer has photodetecting regions with a thickness of more than 9 μm, particularly more than 10 μm.

    13. Image sensor device comprising an image sensor according to claim 1 and a control unit configured to detect the light intensity of each pixel in each of the imaging layers, wherein the light components for different wavelength ranges for each pixel are determined based on the detected light intensities for each pixel and on the absorption characteristics of the photodetecting regions of each imaging layer.

    14. Method for fabricating an image sensor having a pixel array for detecting images in light components of different wavelength ranges, comprising: providing separate imaging layers with arrays of photodetecting regions forming pixels, wherein the photodetecting regions have differing absorption characteristics, wherein the absorption characteristics define a preferred absorption of light components of at least one predetermined wavelength range; and stacking the imaging layers so that the photodetecting regions of the imaging layers are aligned.

    15. Method according to claim 14, wherein the providing of the imaging layers includes bonding a semiconductor layer to a transparent layer.

    16. Method according to claim 15, wherein the semiconductor layer bonded to the transparent layer is thinned by an etching or polishing process.

    17. Multispectral optical sensor having a pixel array for detecting light components in different wavelength ranges, comprising a plurality of layers each embedded in a semiconductor substrate, wherein in each of the layers an array of photodetecting regions is provided, wherein the photodetecting regions are configured with different absorption characteristics, wherein the layers are stacked so that the photodetecting regions of the arrays are aligned, wherein the absorption characteristics define a preferred absorption of light components of at least one predetermined wavelength range.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0033] Embodiments are described in more detail in conjunction with the accompanying drawings in which:

    [0034] FIG. 1 shows a schematic cross-sectional view of the multispectral image sensor according to an embodiment of the present invention;

    [0035] FIG. 2 schematically shows a top view onto a substrate layer of the multispectral image sensor;

    [0036] FIG. 3 shows a diagram illustrating the absorption depth in silicon as a function of the wavelength;

    [0037] FIG. 4 shows a diagram illustrating the photon intensity as a function of the depth in silicon for blue, green and red;

    [0038] FIGS. 5a to 5g show the process steps for fabricating a multispectral image sensor according to the present invention;

    [0039] FIG. 6 shows a schematic cross-sectional view of the multispectral image sensor according to another embodiment of the present invention; and

    [0040] FIG. 7 shows a packaged imaging sensor.

    DESCRIPTION OF EMBODIMENTS

    [0041] FIG. 1 schematically shows a cross-sectional view through a portion of multispectral image sensor 1 with three stacked layers 2 including a first, second and third imaging layer L1, L2, L3. Each of the imaging layers L1, L2, L3 has an array 3 of neighboring pixels 31 spaced so that the arrays 3 of pixels 31 of the layers 2 have identical grids.

    [0042] The stacked layers 2 are integrated in or formed by a semiconductor substrate. Many different types of semiconductor materials are possible for the semiconductor substrate. For ease of description the invention is further described with silicon as a preferred semiconductor material, while other semiconductor materials which are suitable for photon detection can be applied for implementation of the present invention as well. Usage of silicon has the advantage that it can be processed by well-known technology processes such as a CMOS process.

    [0043] Each pixel of each layer 2 provides a photosensitive region 4 which is configured to preferably absorb photons with a wavelength in a dedicated wavelength range and to preferably transmit photons with longer wavelengths. The photosensitive region 4 may include a pn junction, a PIN-diode or the like, wherein an absorbed photon likely generates an electron-hole pair. On absorption, the built-in field of the pn junction separates the electron and the hole of a generated electron-hole pair, resulting in an electrical potential to be measured by a sensing circuitry.

    [0044] The imaging layers 2, L1, L2, L3 are stacked so that the arrays 3 of pixels and the photodetecting regions 4 are aligned along a direction substantially vertical to the surfaces of the layers, i.e. the photosensitive regions 4 of each layer 2 are aligned to each other. So, each of the photons impinging substantially perpendicularly on top of the upper first imaging layer L1 onto a pixel 31 is either absorbed in the respective photosensitive region 4 of the first imaging layer L1 or passed through towards the photosensitive region 4 of the second imaging layer L2. Each of the passing photons is then either absorbed in the respective photosensitive region 4 of the second imaging layer L2 or passed through towards the photosensitive region 4 of the third imaging layer L3. The respective photosensitive region 4 of the third imaging layer L3 may be configured to absorb each of the remaining photons.

    [0045] The above arrangement results in the effect that each of the photons impinging onto the pixel of the image sensor 1 will be absorbed in one of the photodetecting regions 4, thereby generating an electrical signal in one of the layers L1, L2, L3. Each of the photodetecting regions 4 of the different layers has predetermined absorption characteristics, so that likelihood and wavelength of the absorption of photons are known.

    [0046] Each array 3 of pixels 31 of each imaging layer 2 (L1, L2, L3) may have a micro-lens arrangement 5. The micro-lens arrangement 5 has micro-lenses 51 which are aligned to a respective (associated) photosensitive region 4 so that a photon impinging on the pixel area of the respective imaging layer L1, L2, L3 is directed to the associated photosensitive region 4. The micro-lenses 51 may be arranged with a specified distance from the photodetecting region 4, wherein between the micro-lenses 51 and the associated photodetecting regions 4 a fully light transmitting medium such as SiN, SiO.sub.2 or the like is included. The micro-lenses 51 may be configured with a focal length which corresponds to the distance between the micro-lens and the respective photodetecting region 4.

    [0047] FIG. 2 schematically shows a top view on one of the imaging layers 2 to illustrate the grid of the array 3 of pixels 31. Between the pixels 31, select lines SL are located for selecting one row of pixels for reading out with sense amplifiers via data lines DL. Circuitry 10 for selecting the rows and for reading out data is arranged beside the array 3, as commonly known in the art. Each of the layers L1, L2, L3 is designed for detecting a part of the photons, which are selectively detected according to a wavelength range and a given likelihood of absorption.

    [0048] The thickness of the photosensitive regions 4 in each imaging layer L1, L2, L3 is configured depending on the absorption depth in silicon as a function of the wavelength of the respective photon. The absorption depth indicates the depth, measured from the surface on which the photon impinges, at which the light intensity has fallen to 36% (1/e) of its original value. That means that the absorption likelihood of a photon within that depth is about 64% (1−1/e). An absorption depth of, for example, 1 μm means that at a depth of 1 μm the light intensity has fallen to 36% (1/e) of its original value.

    [0049] In the diagram of FIG. 3, the absorption depth in silicon, as an exemplary semiconductor, is shown as a function of the wavelength. It can be seen that the characteristics of photon absorption strongly depend on the wavelength of the impinging photons: the longer the wavelength, the larger the absorption depth (with respect to the surface on which the photon impinges). Vice versa, the shorter the wavelength, the smaller the absorption depth in silicon.

    [0050] This effect can also be illustrated by the diagram shown in FIG. 4, wherein the photon intensity as a function of the depth in silicon for blue, green and red light (photons) is shown. Particularly, FIG. 4 shows the relative intensity over the depth in micrometers in silicon. Here, it can be seen that the absorption of photons at shallow depths of the photosensitive region 4 is higher for shorter wavelengths.

    [0051] Substantially, the light absorption in silicon is described by the Beer-Lambert law wherein the light intensity at a depth L in silicon corresponds to


    I(L)=I.sub.0e.sup.−α(λ)L

    wherein I(L) is the remaining intensity at depth L of light impinging with an intensity I.sub.0, and 1/α(λ) is the absorption depth in silicon for a wavelength λ.
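    The Beer-Lambert relation above can be sketched as a short computation. This is an illustrative helper only; the function name and the absorption-depth values are not taken from the specification:

```python
import math

def remaining_intensity(depth_um: float, absorption_depth_um: float, i0: float = 1.0) -> float:
    """Beer-Lambert law: I(L) = I0 * exp(-alpha(lambda) * L),
    where 1/alpha(lambda) is the wavelength-dependent absorption depth."""
    alpha = 1.0 / absorption_depth_um
    return i0 * math.exp(-alpha * depth_um)

# At a depth equal to one absorption depth the intensity has fallen to 1/e (~36.8%),
# so the absorption likelihood within that depth is about 63.2% (1 - 1/e).
print(round(remaining_intensity(1.0, 1.0), 3))  # 0.368
```

    Shorter wavelengths have smaller absorption depths, so for a fixed layer thickness the same call returns a smaller remaining (i.e. transmitted) fraction, which is exactly the property the stacked layers exploit.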

    [0052] The photosensitive regions 4 of the different layers 2 of the pixel arrays are configured with different thicknesses to mainly absorb photons of different wavelength ranges. Therefore, based on the light absorption properties of silicon, a vertical stacking of pixels with wisely chosen thicknesses of the photodetecting regions 4 can be an efficient way to perform color imaging or multispectral imaging in general. By matching the thicknesses of the photosensitive regions 4 of the different layers 2 to the wavelength-dependent absorption depth of impinging light, photons of different colors can be selectively (preferentially) absorbed in different layers 2 of the image sensor 1.

    [0053] In an example of three layers 2, the thickness of the photosensitive region 4 of the upper first layer L1 can be chosen as 2 μm, corresponding to a wavelength range of blue light, the thickness of the photosensitive region 4 of the second layer L2 as 4 μm, corresponding to a wavelength range of green light, and the thickness of the photosensitive region 4 of the third layer L3 can be selected as more than 10 μm, corresponding to a wavelength range of red light. According to the following table, which indicates the absorption ratios of light in the specified wavelength ranges, it can be seen that most of the blue component B of the photons gets absorbed in the upper first layer L1 (having a thickness of 2 μm of the photosensitive region), while the absorption of the green component G of the photons is mainly split between the photosensitive regions 4 of the first and the second imaging layer L1, L2. Although some portion of the green component G of the photons is absorbed in the first and third imaging layers L1, L3, the largest portion of the light arriving at the second layer L2 (having a thickness of 4 μm of the photosensitive region) is the green component. Although some portion of the red component R of the photons is absorbed in the first and second imaging layers L1, L2, the remaining half of the red component R is absorbed in the lowest third imaging layer L3 (having a thickness of 10 μm of the photosensitive region).

    TABLE-US-00001

      Thickness                    2 μm    4 μm    >10 μm
      Red component (~700 nm)      0.2 R   0.3 R   0.5 R
      Green component (~546 nm)    0.5 G   0.4 G   0.1 G
      Blue component (~436 nm)     0.9 B   0.1 B   —

    [0054] By knowing the absorption ratios R and the absolute intensities of light detected in each of the imaging layers L1, L2, L3 it is possible to calculate an intensity of each component R, G, B corresponding to wavelength ranges of the three imaging layers L1, L2, L3. In other words, by solving the linear equations of


    I(L1)=0.2R+0.5G+0.9B


    I(L2)=0.3R+0.4G+0.1B


    I(L3)=0.5R+0.1G

    [0055] with I the total intensity of light detected in the given layer L1, L2, L3 the blue, green and red component B, G, R can be determined.
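    The determination described above amounts to solving a 3×3 linear system built from the absorption ratios of the table. A minimal sketch, using the table's coefficients; the detected per-layer intensities are hypothetical values chosen for illustration (here the row sums, so the solution is R = G = B = 1):

```python
import numpy as np

# Absorption-ratio matrix: rows are the imaging layers L1..L3,
# columns are the red, green and blue components.
A = np.array([
    [0.2, 0.5, 0.9],  # I(L1) = 0.2R + 0.5G + 0.9B
    [0.3, 0.4, 0.1],  # I(L2) = 0.3R + 0.4G + 0.1B
    [0.5, 0.1, 0.0],  # I(L3) = 0.5R + 0.1G
])

# Hypothetical detected intensities per layer (chosen as the row sums
# of A, so that the recovered components come out as R = G = B = 1).
I = np.array([1.6, 0.8, 0.6])

R, G, B = np.linalg.solve(A, I)
print(round(R, 3), round(G, 3), round(B, 3))  # 1.0 1.0 1.0
```

    A control unit as in claim 13 could apply such a solve per pixel; the matrix depends only on the layer thicknesses and can therefore be inverted once and reused for the whole array.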

    [0056] In FIG. 5, a process for fabricating a single substrate layer 2 with an array 3 of pixels 31 is illustrated. The substrate layer is fabricated with pixels each formed by a thinned photodetecting region 4.

    [0057] As shown in FIG. 5a, a transparent substrate 11, such as SiO.sub.2, and a semiconductor substrate 12, which may be a p-silicon substrate, are provided. The transparent substrate 11 may be provided with a thickness/stability so that the transparent substrate 11 can serve as a carrier for the semiconductor substrate 12, as the semiconductor substrate 12 will be provided with a very low thickness of less than 10 μm.

    [0058] As shown in FIG. 5b, the substrates are cleaned and bonded; for example, a well-known wafer-bonding process can be used in a way that does not introduce any intermediate layer, keeping the interface between the substrates fully transparent to light. In this way a silicon-on-glass wafer is obtained.

    [0059] FIG. 5c illustrates a thinning process wherein the semiconductor layer 12 (silicon) is thinned to reach the desired silicon thickness. Thinning can be carried out by standard anisotropic etching processes, polishing processes or the like. It becomes apparent that the transparent layer 11 serves as a carrier, as the low mechanical stability of the thinned semiconductor layer 12 does not allow further handling by itself. Therefore, bonding the semiconductor layer 12 to the transparent layer 11 increases the mechanical stability of the thinned semiconductor layer 12 and allows silicon thinning without having an ultra-thin wafer. Further, the transparent layer 11 does not block any photons transmitted through photodetecting regions 4 of upper imaging layers L1, L2 from reaching the photodetecting regions 4 of lower layers L2, L3.

    [0060] As shown in FIG. 5d, the thinned silicon-on-glass wafer is then processed to implement photodetecting regions 4 of the array 3 of pixels 31 and electronic circuitry as shown in FIG. 2, as well as contact pads 11 for electrically connecting the respective layer in a conventional manner which is well known from standard processing of image sensors. Further, micro-lenses can optionally be arranged on top of all imaging layers L1, L2, L3. The micro-lenses are made of silicon oxide covering the metal wiring of the imaging layers L1, L2, L3.

    [0061] FIG. 5e shows that multiple silicon-on-glass substrate imaging layers can be processed with different imaging layer thicknesses. Possible thicknesses are indicated above.

    [0062] As shown in FIG. 5f, these layers 2 can be stacked to form a stacked multiple layer image sensor for color imaging or multispectral light sensing in general. The stacking is performed so that the photodetecting regions 4 and the array of pixels are aligned.

    [0063] The aligning is performed so that an impinging photon can pass through the layer stack down to the photodetecting region 4 of the lowest layer L3.

    [0064] In FIG. 5g edge parts of layers are etched to make contact pads of lower imaging layers in the stack accessible.

    [0065] FIG. 6 shows an alternative multispectral image sensor wherein micro-lenses are only provided on top of the stacked multiple layer image sensor. The micro-lenses are made of silicon oxide covering the metal wiring of the upper imaging layer, while the arrangement of micro-lenses on the other layers in the step of FIG. 5d is omitted.

    [0066] Substantially, the bonding pads of the layers are arranged close to the edge of the layers. The layers are provided with varying sizes so that, when stacked, a pyramid-like structure is achieved, with the layer area decreasing towards the upper layer, allowing free access to the bonding pads.

    [0067] FIG. 7 shows an example of the image sensor 1 which is wire-bonded by bonding wires 21 in a package 20.