Imager integrated circuit and stereoscopic image capture device
09793308 · 2017-10-17
CPC classification
G02B30/32 (PHYSICS)
H04N3/155 (ELECTRICITY)
H04N23/55 (ELECTRICITY)
H04N23/16 (ELECTRICITY)
H01L27/14625 (ELECTRICITY)
H04N13/282 (ELECTRICITY)
H04N7/18 (ELECTRICITY)
G01J5/20 (PHYSICS)
International classification
H04N7/18 (ELECTRICITY)
Abstract
An imager integrated circuit intended to cooperate with an optical system configured to direct light rays from a scene to an inlet face of the circuit, the circuit being configured to perform a simultaneous stereoscopic capture of N images corresponding to N distinct views of the scene, each of the N images corresponding to light rays directed by a portion of the optical system which is different from those directing the rays corresponding to the N−1 other images, including: N subsets of pixels made on a same substrate, each of the N subsets of pixels being intended to perform the capture of one of the N associated images, means interposed between each of the N subsets of pixels and the inlet face of the circuit, and configured to pass the rays corresponding to the image associated with said subset of pixels and block the other rays.
Claims
1. An imager integrated circuit intended to cooperate with an optical system configured to direct light rays from a scene to an inlet face of the imager integrated circuit, said imager integrated circuit being configured to perform a simultaneous stereoscopic capture of N images corresponding to N distinct views of the scene, N being an integer higher than 1, each of the N images corresponding to light rays directed by a portion of the optical system which is different from those directing the light rays corresponding to the N−1 other images, the imager integrated circuit including: N subsets of pixels made on a same substrate, each of the N subsets of pixels being intended to perform the capture of one of the N images associated therewith, means interposed between each of the N subsets of pixels and the inlet face of the imager integrated circuit, and configured to pass the light rays corresponding to the image associated with said subset of pixels and block the other light rays directed from the optical system to said subset of pixels, and wherein said means include: at least two opaque layers superimposed one above the other with a space therebetween, provided between the pixels and the inlet face of the imager integrated circuit, both opaque layers having, passing therethrough, a plurality of holes forming, towards each pixel, at least one pair of superimposed diaphragms, formed by the alignment of the holes in one of the at least two opaque layers being different than the alignment of the holes in another of the at least two opaque layers, configured to pass a part of the light rays corresponding to the image associated with the subset of pixels of which said pixel is part and configured to block other light rays directed from the optical system to said pixel and corresponding to the other images, and wherein the optical system is facing the two opaque layers, wherein said means is configured to pass light rays to at least one first subset of pixels from a portion of a right half of the optical system, pass light rays to at least one second subset of pixels from a portion of a left half of the optical system, block light rays to the at least one first subset of pixels from the portion of the left half of the optical system, block light rays to the at least one second subset of pixels from the portion of the right half of the optical system, and pass light rays from a middle point of the optical system to the first and second subsets of pixels, wherein between the opaque layers and the subsets of pixels, light rays that are not blocked by the opaque layers pass only through a dielectric layer disposed between the opaque layers and the subsets of pixels.
2. The imager integrated circuit according to claim 1, wherein the number of holes passing through each of both opaque layers is equal to the total number of pixels of the N subsets of pixels.
3. The imager integrated circuit according to claim 1, wherein, when N equals two and said portion of the optical system corresponds to one half of the optical system, a distance H between the pixels and a second one of both opaque layers, a first of both opaque layers being provided between the pixels and the second one of both opaque layers, is H ≤ 1.5 × p·O·n, with: p: pitch of the pixels; O: numerical aperture of the optical system; n: optical index of a transparent material provided between both opaque layers.
4. The imager integrated circuit according to claim 1, wherein the number of holes passing through a first one of both opaque layers is equal to the total number of pixels of the N subsets of pixels, and the number of holes passing through a second one of both opaque layers is equal to (M_pix/N) ± 1, with M_pix corresponding to said total number of pixels, said first one of both opaque layers being provided between the pixels and the second one of both opaque layers.
5. The imager integrated circuit according to claim 4, wherein the distance H between the pixels and the second one of both opaque layers is
6. The imager integrated circuit according to claim 1, wherein at least one of the opaque layers is formed by electric interconnection lines electrically connected to the pixels.
7. The imager integrated circuit according to claim 1, wherein the pixels are provided between the inlet face of the imager integrated circuit and electric interconnection lines electrically connected to the pixels.
8. The imager integrated circuit according to claim 1, wherein the holes formed in the opaque layers form trenches aligned side by side, or wherein the holes are provided in staggered rows.
9. The imager integrated circuit according to claim 1, wherein each pixel includes non-photosensitive electric or electronic elements masked by the opaque layers.
10. The imager integrated circuit according to claim 1, wherein both opaque layers are spaced apart from each other by at least one of the following elements: air, SiO₂, porous SiO₂, a resin optically transparent to light rays intended to be captured by the pixels.
11. The imager integrated circuit according to claim 1, wherein both opaque layers are composed of metal, resin, or metal and resin.
12. The imager integrated circuit according to claim 1, wherein both opaque layers are covered with at least one antireflection layer.
13. The imager integrated circuit according to claim 1, wherein the pixels are configured to capture images in the visible region, or in the infrared region, or in both visible and infrared regions.
14. A stereoscopic image capture device including at least one imager integrated circuit according to claim 1 and at least one optical system configured to direct light rays from a scene to the imager integrated circuit.
15. The image capture device according to claim 14, wherein the pixels of the imager integrated circuit are configured to capture images in the infrared region, said device being a bolometer.
16. An imager integrated circuit intended to cooperate with an optical system configured to direct light rays from a scene to an inlet face of the imager integrated circuit, said imager integrated circuit being configured to perform a simultaneous stereoscopic capture of N images corresponding to N distinct views of the scene, N being an integer higher than 1, each of the N images corresponding to light rays directed by a portion of the optical system which is different from those directing the light rays corresponding to the N−1 other images, the imager integrated circuit including: N subsets of pixels made on a same substrate, each of the N subsets of pixels being intended to perform the capture of one of the N images associated therewith, means interposed between each of the N subsets of pixels and the inlet face of the imager integrated circuit, and configured to pass the light rays corresponding to the image associated with said subset of pixels and block the other light rays directed from the optical system to said subset of pixels, and wherein said means include: at least two opaque layers superimposed one above the other with a space therebetween, provided between the pixels and the inlet face of the imager integrated circuit, both opaque layers having, passing therethrough, a plurality of holes forming, towards each pixel, at least one pair of superimposed diaphragms, formed by the alignment of the holes in one of the at least two opaque layers being different than the alignment of the holes in another of the at least two opaque layers, configured to pass a part of the light rays corresponding to the image associated with the subset of pixels of which said pixel is part and configured to block other light rays directed from the optical system to said pixel and corresponding to the other images, wherein, when N equals two and said portion of the optical system corresponds to one half of the optical system, a distance H between the pixels and a second one of 
both opaque layers, a first of both opaque layers being provided between the pixels and the second one of both opaque layers, is H ≤ 1.5 × p·O·n, with: p: pitch of the pixels; O: numerical aperture of the optical system; n: optical index of a transparent material provided between both opaque layers.
17. An imager integrated circuit intended to cooperate with an optical system configured to direct light rays from a scene to an inlet face of the imager integrated circuit, said imager integrated circuit being configured to perform a simultaneous stereoscopic capture of N images corresponding to N distinct views of the scene, N being an integer higher than 1, each of the N images corresponding to light rays directed by a portion of the optical system which is different from those directing the light rays corresponding to the N−1 other images, the imager integrated circuit including: N subsets of pixels made on a same substrate, each of the N subsets of pixels being intended to perform the capture of one of the N images associated therewith, means interposed between each of the N subsets of pixels and the inlet face of the imager integrated circuit, and configured to pass the light rays corresponding to the image associated with said subset of pixels and block the other light rays directed from the optical system to said subset of pixels, and wherein said means include: at least two opaque layers superimposed one above the other with a space therebetween, provided between the pixels and the inlet face of the imager integrated circuit, both opaque layers having, passing therethrough, a plurality of holes forming, towards each pixel, at least one pair of superimposed diaphragms, formed by the alignment of the holes in one of the at least two opaque layers being different than the alignment of the holes in another of the at least two opaque layers, configured to pass a part of the light rays corresponding to the image associated with the subset of pixels of which said pixel is part and configured to block other light rays directed from the optical system to said pixel and corresponding to the other images, wherein the number of holes passing through a first one of both opaque layers is equal to the total number of pixels of the N subsets of pixels, and the number of 
holes passing through a second one of both opaque layers is equal to (M_pix/N) ± 1, with M_pix corresponding to said total number of pixels, said first one of both opaque layers being provided between the pixels and the second one of both opaque layers, wherein the distance H between the pixels and the second one of both opaque layers is
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present invention will be better understood upon reading the description of exemplary embodiments, given purely by way of illustration and in no way limiting, with reference to the appended drawings, wherein:
(9) Identical, similar or equivalent parts of the different figures described herein below have the same reference numerals so as to facilitate switching from one figure to another.
(10) The different parts represented in the figures are not necessarily drawn to a uniform scale, for a better understanding of the figures.
(11) The different possibilities (alternatives and embodiments) should be understood as being not mutually exclusive and can be combined with each other.
DETAILED DESCRIPTION OF PARTICULAR EMBODIMENTS
(13) In this first embodiment, the device 1000 enables a stereoscopic image capture to be performed with two images, or two views, and corresponds for example to a camera or video camera. The device 1000 also includes other elements, and in particular an optical system 10 comprising one or several lenses and corresponding, for example, to a fixed focal length objective lens.
(14) The imager integrated circuit 100, called hereinafter image sensor, includes a plurality of pixels 102 each including at least one photodetector, for example a photodiode or any other equivalent means for transforming received optical information (amount of photons) into an electric signal. The light rays directed by the optical system 10 and arriving onto the image sensor 100 at an inlet face 101 correspond to the images intended to be captured by the image sensor 100. A first subset of pixels referenced 102a is intended to capture a first part of the light rays directed to the image sensor 100. A second subset of pixels referenced 102b is intended to capture a second part of the light rays directed to the image sensor 100. In the example of
(15) In order to select the rays that should be captured by either subset of pixels 102a, 102b, the image sensor 100 is provided with a mask formed by two opaque layers 104a, 104b, that is, layers non-transparent to the light rays received by the image sensor 100, superimposed and interposed between the pixels 102 and the optical system 10. These opaque layers 104a, 104b are for example composed of metal. Each of the opaque layers 104a, 104b includes several apertures 105a, 105b, or holes, such that, for each pixel, each opaque layer forms a diaphragm. Thus, for each pixel, one of the apertures 105a formed in the layer 104a forms a lower diaphragm on which is superimposed an upper diaphragm formed by one of the apertures 105b made in the layer 104b.
(16) In the example represented in
(17) The rays from the right half 10a of the optical system 10 do not reach the pixels 102b because they are blocked by either or both opaque layers 104a, 104b. Moreover, the light rays from the left half 10b of the optical system 10 do not reach the pixels 102a because they are blocked by either or both opaque layers 104a, 104b.
(18) A first image is thus obtained from the signals delivered by the pixels 102a and a second image is obtained from the signals delivered by the pixels 102b. The light rays directed by one of the halves 10a, 10b of the optics 10 reach one pixel out of two of the image sensor 100, whereas those from the other half are stopped by the diaphragms for these same pixels but reach the other pixels, which are inaccessible to the rays of the first half because of the presence of the diaphragms.
(19) The optical indexes of the materials used are selected such that the refraction in the materials has a very low impact on the size and position of the apertures 105a, 105b formed in the opaque layers 104a, 104b.
(20) The pixels 102 of the image sensor 100 can be spaced apart from each other by a pitch which is regular or not. The maximization of the number of pixels in an image sensor is generally obtained by spacing the pixels by a regular pitch, the dark area between two photodetectors being equal to the width of a photodetector in the case of a stereoscopic capture with two images. For each pixel, the elements other than the photodetector (interconnection lines, transistors, etc.) are advantageously provided facing opaque parts of the layers 104a, 104b.
(21) In the example of
(22) The opaque layers 104a, 104b are relatively close to the pixels 102. In the image sensor 100, the opaque layers 104a and 104b include a same number of holes 105a, 105b. Moreover, the diameter of the optics is much larger than the pitch between two pixels. For the image sensor 100, the distance H between the opaque layer 104b, forming the upper diaphragms, and the pixels 102 can be lower than or equal to about 1.5 × p·O·n, with:
(23) p: pitch between two pixels;
(24) N: number of captured views (here N=m=2);
(25) O: numerical aperture of the optics 10 (equal to the ratio of the focal length F to the diameter D);
(26) n: optical index of the transparent material of the layers 106a, 106b.
(27) In an advantageous alternative, the second opaque layer 104b forming the upper diaphragms can be provided at a particular distance from the pixels 102, this distance being approximately equal to 2·p·O·n when the pitch of the pixels is very small as compared to the diameter D of the optics 10, N=2 and the surface of the material in the apertures of the diaphragms is planar. In this case, the number of apertures formed in the second opaque layer 104b, forming the upper diaphragms, is equal to half the number of pixels, more or less one. Such an example is represented in
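The two spacing rules above can be illustrated numerically; in the sketch below the values of p, O and n are hypothetical, chosen only to show orders of magnitude, and are not taken from the patent:

```python
def h_max_equal_holes(p, O, n):
    """Upper bound H <= 1.5 * p * O * n on the pixel-to-upper-diaphragm
    distance when both opaque layers carry one hole per pixel (paragraph (22))."""
    return 1.5 * p * O * n

def h_half_holes(p, O, n):
    """Preferred distance H ~ 2 * p * O * n when the upper layer carries one
    hole per two pixels (paragraph (27), N = 2)."""
    return 2.0 * p * O * n

# Hypothetical values: 5 um pixel pitch, O = F/D = 2.5, n = 1.46 (SiO2 spacer).
print(h_max_equal_holes(5.0, 2.5, 1.46))  # upper bound, in micrometres
print(h_half_holes(5.0, 2.5, 1.46))       # preferred distance, in micrometres
```

With these numbers the bound is about 27.4 µm and the preferred distance about 36.5 µm, i.e. the upper diaphragms sit a few pixel pitches above the photodetectors.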
(28) Generally, the closer the first opaque layer 104a is to the pixels 102, the larger the apertures 105a formed in this first opaque layer 104a are, their dimensions approaching the size of the photodetectors of the pixels 102.
(29) In the configuration represented in
(30) Since the pixels 102 only receive the rays from either half of the optics 10, the pixels 102a perform an integration of the light from the half 10a of the optics 10 and the pixels 102b perform an integration of the light from the half 10b of the optics 10.
(31) Thus, unlike an image captured by a conventional sensor the average point of view of which corresponds to the centre of the optics, the image captured by the pixels 102a corresponds to an image the average point of view 12a of which is located approximately in the middle of the right half 10a of the optics 10, and the image captured by the pixels 102b corresponds to an image the average point of view 12b of which is located approximately in the middle of the left half 10b of the optics 10.
(32) In view of the mask formed by the opaque layers 104a, 104b above the pixels 102, about one quarter of the rays from the optics 10 reaches at least one of the pixels 102, which enables two truly distinct images to be obtained by capturing rays from the entire surface of the optics, with the two points of view spaced as far apart as possible (the spacing is substantially equal to half the width of the optics). The lit width of each pixel 102 is substantially equal to the width darkened by the mask in the plane of the pixels 102.
(33) In an advantageous alternative embodiment, and when the imager integrated circuit 100 is a "front-side" type sensor (light rays entering through the front face of the image sensor), the opaque layers 104a, 104b of the mask can be formed by the electric interconnection lines, generally composed of metal, forming electric connections between electric/electronic elements of the image sensor. The pattern of these electric lines therefore meets both the constraints imposed to obtain the desired electric wiring in the sensor and those imposed to form the diaphragms.
(34) Part of the imager integrated circuit 100 made according to this alternative embodiment is represented in
(35) Indeed, for each pixel, the upper diaphragm combined with the lower diaphragm formed facing this pixel form together an aperture directed along an axis and enable the light rays intended to be received by the pixel to be selected. In the example of
(36) The interconnection layers 104a, 104b also prevent the CMOS transistors, as well as the other elements of the pixels not forming the photosensitive areas, from being lit by the light rays. Although not represented, the image sensor 100 includes other elements such as microlenses and colour filters formed above the electric interconnection layers 104 and the dielectric layer 106, the microlenses enabling the refraction to be reduced and the light between the metal tracks to be more concentrated. In this alternative, with respect to a standard non-stereoscopic image sensor, the optical axis, or line of sight, of the diaphragms formed by the metal tracks is tilted and is not perpendicular to the face of the substrate 110 on which the pixels are formed. The microlenses can also be used to correct the optical aberrations and/or to minimize the optical reflections in the sensor.
(37) If the thickness of the interconnection layers (thickness of the layers themselves+space between these layers) is insufficient to form the mask for selecting the rays intended to be received by the pixels, it is possible to add, above the interconnection levels, a further opaque layer intended to form the upper diaphragms. Such a configuration is represented in
(38) The opaque layer 116 is provided on a transparent support 118 covering the dielectric material 106 wherein the interconnection layers 104a, 104b are made. The layer 116 is covered with a passivation layer 120 on which lenses 122 are provided.
(39) In the above described examples, the image sensor 100 can be a colour sensor. In this case, the sensor 100 includes colour filters provided in a mosaic, under the microlenses, forming for example a Bayer filter. When two neighbouring pixels are intended to form two images, or views, distinct from a same stereoscopic image, each coloured filter will be able to filter the rays intended to be captured by these two neighbouring pixels.
(40) In all the exemplary embodiments described, the photodetectors can be wider than the diffraction pattern, making the diffraction effect almost negligible.
(41) The stereoscopy is obtained from one or more images taken at different points of view along a direction in space. Generally, the mask forming the diaphragms can thus include either apertures forming long lines covering all the pixels aligned along a direction perpendicular to the direction of alignment of the points of view of the different images, or apertures, for example having rectangular shapes, forming a chequerboard pattern, in order to align the pixels of the different images or, on the contrary, to provide them in staggered rows.
(42) Only the electronics that sorts the resulting information to reconstitute images is adapted according to the arrangements of apertures and maskings, in order to use and process the signals delivered by the pixels to reconstitute the different views and construct stereoscopic images.
(43) When the image sensor includes colour filters, it is possible to make these filters so that they also form the mask enabling the rays intended to be received by the pixels to be selected. Such a configuration is particularly interesting in the case of a "backside" sensor, that is, a sensor intended to receive light through its back face, and thus wherein the mask cannot be formed by the electric interconnection layers.
(44) Actually, two superimposed colour filters can act as two superimposed diaphragms such as described above because if their spectra have no or little common part (filters having different colours), then no light passes through these two superimposed filters. Thus, depending on whether a light ray directed towards a pixel should pass through two colour filters having the same colour or two colour filters having different colours, the light will reach the pixel or not.
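The pass/block logic of such stacked filters can be sketched as follows; the filter set and the passband values are illustrative assumptions, not taken from the patent:

```python
# A ray reaches the pixel only if the two stacked colour filters on its
# path have overlapping passbands (i.e. "the same colour").
FILTER_BANDS_NM = {          # hypothetical passbands, in nanometres
    "blue": (450, 495),
    "red": (620, 700),
}

def bands_overlap(band_a, band_b):
    """True if the two wavelength intervals share a common part."""
    return min(band_a[1], band_b[1]) > max(band_a[0], band_b[0])

def ray_passes(lower_filter, upper_filter):
    """True if light can traverse both superimposed filters."""
    return bands_overlap(FILTER_BANDS_NM[lower_filter],
                         FILTER_BANDS_NM[upper_filter])
```

Here `ray_passes("red", "red")` is True while `ray_passes("red", "blue")` is False, so a pair of mismatched filters behaves like an opaque diaphragm for that ray.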
(46) The stereoscopic imager integrated circuit 200 uses superimposed colour filters to form the selection mask of light rays received by the pixels 102 of the sensor 200.
(47) Lower colour filters 202 are provided just above the pixels 102. For simplicity, the filters 202 are considered as having two colours, that is, red filters 202a and blue filters 202b alternately provided side by side. Upper red 204a and blue 204b colour filters are provided above the lower colour filters 202. The upper colour filters 204 are made in an opaque layer 206 which enables some light rays to be blocked. The lower colour filters 202 are separated from the upper colour filters 204 by a transparent material.
(48) The distance between the upper colour filters 204 and the pixels can correspond to the distance separating the upper diaphragms and the pixels from the image capture device 1000 previously described. The lower colour filters 202 can be provided against the photodetectors of the pixels 102.
(49) The double colour filtering performed enables the blue or red filtered pixels to be alternately pointed towards the right half or the left half of the optics. Two neighbouring pixels provided under filters having different colours will be herein intended to form the same image. In
(50) The lower filters 202 have the same function as the above described lower diaphragms. On the other hand, the upper filters 204 are coupled to opaque parts of the layer 206 in order to fulfil the function of the above described upper diaphragms.
(51) In this second embodiment, each pixel 102 only receives light rays passing through two superimposed filters having similar colours. Thus, the other rays directed towards this pixel but which have on their path two filters having different colours or an opaque part of the layer 206 are blocked before reaching this pixel.
(52) The red and blue colours of the filters 202 and 204 are only given by way of illustration. In order to form, towards a pixel, a pair of colour filters intended to block undesirable light rays, it is possible to use any pair of colour filters, provided that their spectra do not overlap, or overlap only sparsely. It is for example possible to use and combine the red, green and blue colours of the filters of a Bayer matrix to form these superimposed colour filters.
(53) The colour filtering enabling the colour to be determined needs at least three colours which are homogeneously distributed throughout the surface of the sensor. This arrangement of the filters only relates to the direction of the plane wherein the stereoscopic effect is desired, but all the colours should be affected by this arrangement so that the right and left views can then be reconstructed through colour demosaicing.
(54) By using the colour filters to form the masks for selecting light rays, these filters thereby fulfil two functions, that is the reproduction of the image colours and the separation of the points of view of the stereoscopic images. In comparison with the first embodiment wherein the selection of light rays reaching the pixels is performed by the opaque layers including apertures, this second embodiment enables a larger quantity of light to be captured because the pixels as well as the photodetectors can herein be adjoining. A better coverage of the sensitive surface of the image sensor can thereby be obtained.
(55) In the example of
(56) However, it is possible for the upper filters to be closer to the pixels. In this case, the number of upper filters 204 is equal to the number of lower filters 202.
(57) In another alternative, when the image sensor is the “back-side” type, it is possible that the mask is not formed by colour filters, but by opaque layers as previously described in connection with FIGS. 1 and 2, these layers being in this case made on the back-face side of the sensor.
(58) In view of the very high number of pixels the image sensor can include (for example between about 10 and 20 million pixels), a single sensor such as described above can be used to perform a stereoscopic image capture with N images, or N views, with N for example between 2 and 40. The sensor can in particular be used to perform a stereoscopic image capture with 8 images such as required for the relief TV standard called "Alioscopy".
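As a quick arithmetic illustration of the resolution trade-off described above (the 16-Mpixel sensor size is a hypothetical value within the range quoted):

```python
def pixels_per_view(total_pixels, n_views):
    """Resolution budget left to each of the N simultaneously captured views."""
    return total_pixels // n_views

# A hypothetical 16-Mpixel sensor driving the 8-view "Alioscopy" format
# leaves 2 Mpixels per view:
print(pixels_per_view(16_000_000, 8))  # 2000000
```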
(59) In the above described examples, the cone of vision of each pixel defines an average point of view at the inlet of the optical system. In the case of a non-stereoscopic imager integrated circuit not including means enabling the light rays received by the pixels to be selected, the average point of view is located on the optical axis, in the middle of the optics, and is the same for all the pixels of the sensor. In the image sensor according to the invention, the points of view of the different captured images are differentiated for each subset of pixels and are generally located out of the optical axis of the optical system. The distance between two average points of view corresponds to the stereoscopic base. In the examples previously described, the different optical fields sensed by the pixels are not superimposed (the parts 10a and 10b are not superimposed).
(60) In an alternative, it is possible for these optical fields to be superimposed, as represented in
(61) Thus, in this configuration represented in
(63) The greater the number of images simultaneously captured by the capture device, the smaller the quantity of light received by each subset of pixels to capture one of the images. Moreover, the greater the number of images simultaneously captured by the capture device, the smaller the stereoscopic base between two neighbouring points of view. However, the stereoscopic base can also be adapted depending on the optical system used.
(64) The focus position of the image capture device according to the invention does not have exactly the same effect as in a non-stereoscopic image capture. The system is generally designed for a sharp image between infinity and moderately close planes, with a fixed optical aperture. It is for example possible to select a fixed focus in the hyperfocal position.
(65) When the focus is made on close planes, since the imager integrated circuit moves back with respect to the optical system, the cones defining the points of view are superimposed. Since the diameter of the optics does not change, the points of view are moved closer and the stereoscopic base is reduced.
(66) For a backward movement equal to twice the focal length (that is, for an image having a size equal to that of the object), there is no stereoscopic effect any longer with a sensor intended to operate at infinity.
(67) Consequently, for macrophotography, the stereoscopic integrated circuit is specially defined for this application and is accordingly restricted in relief depth, in particular for endoscopic applications.
(68) The optical system of the imager integrated circuit according to the invention can include so-called catadioptric mirror objective lenses that can have large diameters. Because the stereoscopic base is only extended in a single direction, the optical system can be constructed with a large dimension in the direction of the stereoscopic base and a small dimension in the direction perpendicular to it. The optical system could include two (or more) periscopes arranged head-to-foot, conveying the peripheral images in front of the different cones of vision of the image sensor.
(69) It will now be described, in connection with
(70) It is considered that the object is at infinity and that the image is sharp, and thus that the plane of the pixels 102 is at the focal length F.
(71) There is then:
(72) D: (back) diameter of the optical system 10;
(73) D/m: width seen by each pixel, corresponding to the optical field or even to the dimension of one of the parts (referenced 10a, 10b or 10c in
(74) N: number of images, or views, forming a stereoscopic image (three in the example of
(75) H: position of the plane of the upper diaphragms, that is the distance between the upper diaphragms (or upper colour filters) and pixels;
(76) L: width of an aperture of upper diaphragm (corresponding to the dimension of one side or the diameter of one of the holes 105b);
(77) p: pitch of the pixels, that is the distance between two pixels 102 (for example equal to 5 μm, but that can for example range from 1 μm to 50 μm depending on the application);
(78) l: width of a photodetector in a pixel 102;
(79) a: intersecting height of the end rays (rays bounding the cone of vision of the pixels) for each pixel;
(80) b: intersecting height of end rays for the selected number of views (rays bounding the cone of view of N neighbouring pixels);
(81) n: index of the dielectric material separating the opaque layers 104 (or the colour filters).
(82) It can be seen in
incidence angle = n × refraction angle
(83) Moreover, since each width L and the group of N associated pixels have very small dimensions in comparison with the dimension of the whole device 1000, it can be assumed that all the rays passing through an aperture deviate only very slightly from one another; they are thus, to a first approximation, refracted by the same amount. To simplify, let O = F/D (called the "aperture of the optic") and px = (N − 1)·p + l.
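The approximation of paragraph (83), that all rays crossing a given aperture are refracted by essentially the same amount, rests on the small-angle form of Snell's law, sin(i) = n·sin(r) ≈ i = n·r. The short sketch below (with an illustrative index value, not one taken from the text) shows how small the error of this linearisation is for near-paraxial rays:

```python
import math

n = 1.48  # illustrative refractive index of the dielectric between the opaque layers

for i_deg in (1.0, 3.0, 5.0):                 # small incidence angles, in degrees
    i = math.radians(i_deg)
    r_exact = math.asin(math.sin(i) / n)      # exact refraction angle from Snell's law
    r_approx = i / n                          # linearised form: incidence = n * refraction
    rel_err = abs(r_exact - r_approx) / r_exact
    print(f"i = {i_deg:3.0f} deg, relative error of the linearised form: {rel_err:.1e}")
```

Even at 5° the linearised relation deviates from the exact one by well under 1%, which supports treating all rays through one aperture as refracted by the same value.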
(84) O is generally between 2 and 3 and has a fixed value.
(85) The values of H and L are sought, with a and b as unknowns. The other variables are parameters of the optical structure.
(86) The relationships of the triangles formed by the light rays give:
(F − b)/D = b/((N − 1)·p + l) = b/px
(H − b)/L = b/px
(F − a)/(D/m) = a/l
(H − a)/L = a/l
(87) First, a and b are determined:
b = O·D·px/(D + px)
b = H·px/(px + L)
a = m·D·O·l/(m·l + D)
a = l·H/(L + l)
(88) Therefore, it is obtained:
O·D/(D + px) = H/(px + L)
m·D·O/(D + m·l) = H/(L + l)
(89) Calculation of L:
(L + px)/(D + px) = m·(L + l)/(D + m·l)
(L + px)·(D + m·l) = m·(L + l)·(D + px)
L·(D + m·l − m·D − m·px) = m·l·D + m·l·px − D·px − m·l·px
L = (m·l − px)/((1 − m) + (m/D)·(l − px))
(90) By reexpressing px, it is obtained:
L = (l·(m − 1) − (N − 1)·p)/((1 − m) − (m/D)·(N − 1)·p)
(91) In the particular case where p=2.Math.l and m=N, there is:
L = (N − 1)·p/(−(N − 1)·(N·p + D)/D)
(92) If D >> p, then L = p.
(93) Thus, it can be seen that the width L of the apertures 105b depends only weakly on the optical system.
(94) Calculation of H:
H = m·D·O·(L + l)/(m·l + D)
H = [m·D·O/(m·l + D)]·[(l·(m − 1) − (N − 1)·p) + l·((1 − m) − (m/D)·(N − 1)·p)]/((1 − m) − (m/D)·(N − 1)·p)
H = [m·D·O/(m·l + D)]·[−(N − 1)·p − l·(m/D)·(N − 1)·p]/((1 − m) − (m/D)·(N − 1)·p)
H = [m·D·O/(m·l + D)]·[−(N − 1)·p·(1 + l·m/D)]/((1 − m) − (m/D)·(N − 1)·p)
H = m·O·(N − 1)·p/((m − 1) + (m/D)·(N − 1)·p)
(95) (In the case of a material having an index n≠1 between the layers 104a, 104b, the H value is multiplied by n)
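As a sanity check of the derivation above, the closed-form expressions for L (paragraph (90)) and H (paragraph (94)) can be substituted back into the four triangle relations of paragraph (86). The sketch below does this with arbitrary, purely illustrative parameter values (and with n = 1, i.e. without the index correction of paragraph (95)):

```python
# Illustrative parameters (arbitrary units); none of these values come from the text.
D = 1000.0   # (back) diameter of the optical system
O = 3.0      # aperture of the optic, O = F/D
m = 3.0      # ratio such that each pixel sees a width D/m
N = 3        # number of views
p = 5.0      # pitch of the pixels
l = 2.5      # width of a photodetector

F = O * D
px = (N - 1) * p + l

# Closed forms of paragraphs (90) and (94):
L = (l * (m - 1) - (N - 1) * p) / ((1 - m) - (m / D) * (N - 1) * p)
H = m * O * (N - 1) * p / ((m - 1) + (m / D) * (N - 1) * p)

# b and a recovered from each pair of triangle relations of paragraph (86):
b1 = O * D * px / (D + px)         # from (F - b)/D = b/px
b2 = H * px / (px + L)             # from (H - b)/L = b/px
a1 = m * D * O * l / (m * l + D)   # from (F - a)/(D/m) = a/l
a2 = l * H / (L + l)               # from (H - a)/L = a/l

assert abs(b1 - b2) < 1e-9 * b1 and abs(a1 - a2) < 1e-9 * a1
print(f"L = {L:.3f}, H = {H:.3f}, b = {b1:.3f}, a = {a1:.3f}")
```

Both recovered values of b (and of a) coincide, confirming that the closed forms satisfy the original system.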
(96) Thus, in the case of
(97) If D >> p, then H = 3·O·p·…
(98) In the particular case where m=N, there is:
H = N·O·p/(1 + N·p/D)
(99) If D >> p, then H = N·O·p.
(100) Thus, it can be seen that the height H is dependent on the aperture F/D of the optical system.
(101) The height H can have a value between about 10 μm and 100 μm, and for example equal to about 20 μm.
(102) Given herein below are two numerical examples:
(103) If O=2; p=3 μm; n=1.48; N=2
(104) There is then H=17.76 μm.
(105) If O=3; p=5 μm; n=1.66; N=3
(106) There is then H=74.7 μm.
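Both numerical examples follow directly from the D >> p limit of paragraph (99), H = N·O·p, multiplied by the index n as indicated in paragraph (95). A minimal check, working directly in micrometres:

```python
def h_far(O, p_um, n, N):
    """H (in the same unit as p) in the limit D >> p: H = n * N * O * p,
    i.e. the result of paragraph (99) corrected by the index n of paragraph (95)."""
    return n * N * O * p_um

print(round(h_far(O=2, p_um=3, n=1.48, N=2), 2))  # 17.76 (micrometres)
print(round(h_far(O=3, p_um=5, n=1.66, N=3), 1))  # 74.7
```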
(107) Under these conditions, the width of each slot-shaped (or square) diaphragm 105b in the upper masks 104b is very close to the pixel pitch p. Its arrangement (for example the curvature of the slots) also depends on the index n of the material filling the space.
(108) These equations follow from construction rules of the imager integrated circuit which are met when it is designed:
(109) as represented in
(110)
(111)
(112) Let i be the angle formed by the light ray 150 and the optical axis 152 of the optical system 10 between the optical system 10 and the plane of the upper diaphragms 104b, and r the angle formed between the light ray 150 and the optical axis 152 of the optical system 10 in the material 106.
(113) According to the Descartes law (Snell's law), there is:
sin(i) = n·sin(r)
(114) But sin(i) = (x − d)/e and sin(r) = d/c,
(115) with e corresponding to the length of the light ray 150 between the optical system 10 and the plane of the upper diaphragms 104b, and c corresponding to the length of the light ray 150 in the dielectric material 106.
(116) Besides, c² = d² + h² and e² = (x − d)² + (F − h)².
(117) Therefore, it can be written that (x − d)·c = n·d·e,
(118) that is (x − d)²·(d² + h²) = n²·d²·((x − d)² + (F − h)²).
(119) In this equation, d is the only unknown. A polynomial equation in d can thus be developed, whose solutions give d:
d⁴·(1 − n²) − 2d³·x·(1 − n²) + d²·(x²·(1 − n²) + h² − n²·(F − h)²) − 2d·x·h² + h²·x² = 0
(120) Or even:
d⁴ − 2d³·x + d²·(x² + h² − n²·F·(F − 2h)/(1 − n²)) − 2d·x·h²/(1 − n²) + h²·x²/(1 − n²) = 0
(121) The d values, being the solutions of this polynomial, are then retrieved by known numerical methods.
(122) In the case where n is substantially equal to 1, this equation simplifies and gives the displacement d = h·x/F.
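Rather than solving the quartic symbolically, the displacement d can also be obtained numerically from the relation of paragraph (117), (x − d)·c = n·d·e, for example by bisection. A sketch under illustrative geometry values (F, h and x below are arbitrary and not taken from the text):

```python
import math

def solve_d(x, h, F, n, tol=1e-12):
    """Find d in [0, x] satisfying (x - d)*c = n*d*e (paragraph (117)),
    with c = sqrt(d^2 + h^2) and e = sqrt((x - d)^2 + (F - h)^2)."""
    def f(d):
        c = math.sqrt(d * d + h * h)
        e = math.sqrt((x - d) ** 2 + (F - h) ** 2)
        return (x - d) * c - n * d * e

    lo, hi = 0.0, x               # f(0) = x*h > 0 and f(x) < 0: a sign change exists
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

F, h, x = 20.0, 5.0, 10.0         # illustrative geometry (arbitrary units)
print(solve_d(x, h, F, n=1.48))   # displacement for a dielectric of index 1.48

# Check of paragraph (122): for n equal to 1 the solution is h*x/F.
assert math.isclose(solve_d(x, h, F, n=1.0), h * x / F, rel_tol=1e-6)
```

Bisection avoids any concern about picking the physical root among the four solutions of the quartic, since the bracket [0, x] contains the geometrically meaningful one.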
(123) For a given index n, the position of the apertures is thus dependent on the focal length F of the optical system.
(124) In the previously described exemplary embodiments, the apertures made in a same opaque layer (and in particular the apertures 105a forming the lower diaphragms) have substantially identical dimensions. Generally, however, the dimensions of the apertures made in a same opaque layer can differ from one another. In this case, the different photodetectors will not provide identical electric signals for a same incident light excitation.
(125)
(126) The apertures 105b formed in the mask are represented by lines arranged above a grid representing the pixels 102. The curvature of the apertures 105b results from the offset d with respect to the alignment of the pixels. In the example represented in
(127) When the means enabling the light rays to be selected are not made by colour filters or by existing electric interconnection layers, but include dedicated opaque layers, these layers are made during or after the construction of the elements of the imager integrated circuit. Making them involves no steps beyond those implemented for making a conventional imager integrated circuit: deposition of uniform layers, etching of these layers through photolithography, etc.
(128) The opaque layers can be made from metal. In an advantageous embodiment, each opaque layer can be formed by a layer of opaque material provided between one or more layers of antireflection material, for example composed of metal or metal oxide. For example, when the opaque layers are composed of aluminium, the antireflection layers can be composed of titanium or TiO₂, or of any other material whose refractive index n corresponds to the square root of (n of the metal × n of the medium around the metal).
(129) The thickness of each opaque layer can be substantially equal to a quarter of the average working wavelength of the image capture device.
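As a purely numerical illustration of the two rules above (all material values below are assumptions chosen for the example, not values given in the text):

```python
import math

# Paragraph (129): opaque-layer thickness of about a quarter of the average
# working wavelength, here for an assumed visible-band device.
avg_wavelength_nm = 550.0
print(avg_wavelength_nm / 4)       # 137.5 (nm)

# Paragraph (128): antireflection index rule n_AR = sqrt(n_metal * n_medium).
n_metal = 1.37     # illustrative value only, not taken from the text
n_medium = 1.48    # illustrative index of the surrounding dielectric
print(round(math.sqrt(n_metal * n_medium), 3))   # 1.424
```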