CODED APERTURE IMAGING SYSTEM AND METHOD
20240137656 · 2024-04-25
Inventors
CPC classification
H04N23/55
ELECTRICITY
H04N23/54
ELECTRICITY
International classification
H04N23/20
ELECTRICITY
H04N23/54
ELECTRICITY
Abstract
An optical system includes a spatial encoding arrangement for generating spatially encoded light with an initial spatial distribution; a coded aperture defining a mask pattern based on the initial spatial distribution of the spatially encoded light; and an image sensor. The spatial encoding arrangement directs the spatially encoded light onto the object, and the object reflects at least a portion of the spatially encoded light to form reflected light. The reflected light is directed through the coded aperture to form spatially decoded light. The spatially decoded light is directed onto the image sensor to form an image thereon, and the image sensor detects the image. The spatial encoding arrangement includes optical emitters spatially arranged, defining the initial spatial pattern of the spatially encoded light. The mask pattern is the inverse of the initial spatial pattern of the spatially encoded light defined by the spatial arrangement of the optical emitters.
Claims
1. An optical system for imaging an object, the optical system comprising: a spatial encoding arrangement for generating spatially encoded light with an initial spatial distribution; a coded aperture defining a mask pattern which is based on the initial spatial distribution of the spatially encoded light; and an image sensor, wherein the optical system is configured such that, in use, the spatial encoding arrangement directs the spatially encoded light onto the object so that the object reflects at least a portion of the spatially encoded light to form reflected light, the reflected light is directed through the coded aperture to form spatially decoded light, the spatially decoded light is directed onto the image sensor so as to form an image thereon, and the image sensor detects the image, and wherein the spatial encoding arrangement comprises a plurality of optical emitters, wherein the plurality of optical emitters are spatially arranged so as to define the initial spatial pattern of the spatially encoded light, and wherein the mask pattern defined by the coded aperture is the inverse of the initial spatial pattern of the spatially encoded light defined by the spatial arrangement of the plurality of optical emitters.
2. The optical system as claimed in claim 1, wherein the image detected by the image sensor comprises at least one of: an image of the object; a sharp or focused image of the object; a blurred image of the object; a scaled image of the object; or a product of a scaled image of the object with a function which is independent of the mask pattern defined by the coded aperture.
3. The optical system as claimed in claim 1, wherein the mask pattern comprises a binary mask pattern and/or wherein the coded aperture comprises a plurality of opaque regions and a plurality of transparent regions or apertures which together define the mask pattern.
4. The optical system as claimed in claim 1, wherein the coded aperture is diffractive and/or wherein the coded aperture comprises a phase mask.
5. The optical system as claimed in claim 1, wherein the mask pattern is configured such that an auto-correlation of the mask pattern is equal to, or resembles, a Kronecker delta function δ which includes a central peak or lobe, but which includes no secondary peaks or side-lobes, or which includes a central peak or lobe and one or more secondary peaks or side-lobes which have an amplitude which is less than 1/10 an amplitude of the central peak or lobe, which is less than 1/100 an amplitude of the central peak or lobe, or which is less than 1/1000 an amplitude of the central peak or lobe.
6. The optical system as claimed in claim 1, wherein the mask pattern is a Uniformly Redundant Array (URA) mask pattern or a Modified Uniformly Redundant Array (MURA) mask pattern.
7. The optical system as claimed in claim 1, wherein the coded aperture is reconfigurable, and wherein the coded aperture is formed from, or comprises, an LCD array.
8. The optical system as claimed in claim 1, wherein at least one of: the optical system is configured to reject or block ambient light reflected from the object; the optical system comprises an optical filter in front of the image sensor for rejecting or blocking ambient light reflected from the object; the image sensor is configured to detect infrared light; the image sensor is configured to have a lower sensitivity to visible light and a higher sensitivity to infrared light; or the spatially encoded light and the spatially decoded light both comprise, or are formed from, infrared light, the image sensor is configured to detect infrared light, and the optical system comprises an optical filter in front of the image sensor for rejecting or blocking visible light and for transmitting infrared light.
9. The optical system as claimed in claim 1, wherein at least one of: each optical emitter comprises a Lambertian optical emitter or a non-Lambertian optical emitter; each optical emitter comprises an LED; or each optical emitter is configured to emit infrared light.
10. The optical system as claimed in claim 1, wherein each optical emitter of the plurality of optical emitters is operable independently of the one or more other optical emitters of the plurality of optical emitters.
11. The optical system as claimed in claim 1, wherein the one or more optical emitters and the image sensor are co-planar, and wherein, for example, the one or more optical emitters and the image sensor are mounted or formed on the same substrate.
12. The optical system as claimed in claim 1, wherein each optical emitter of the plurality of optical emitters absorbs or blocks the reflected light so that each optical emitter of the plurality of optical emitters defines a corresponding opaque or blocking region of the coded aperture, and wherein each optical emitter of the plurality of optical emitters is mounted, or formed, on a substrate which is transparent to the light emitted by the plurality of optical emitters.
13. The optical system as claimed in claim 1, wherein the spatial encoding arrangement comprises a plurality of lens elements, wherein each lens element is aligned in front of a corresponding optical emitter so as to at least partially focus or collimate light emitted by the corresponding optical emitter.
14. An electronic device comprising the optical system as claimed in claim 1, wherein the electronic device comprises a mobile electronic device, wherein the mobile electronic device comprises a user interface accessible by way of a display or a touchscreen, wherein the one or more optical emitters and the image sensor are located behind the user interface and the object is located in front of the user interface.
15. An optical system for imaging an object, the optical system comprising: a spatial encoding arrangement for generating spatially encoded light with an initial spatial distribution; a coded aperture defining a mask pattern which is based on the initial spatial distribution of the spatially encoded light; and an image sensor, wherein the optical system is configured such that, in use, the spatial encoding arrangement directs the spatially encoded light onto the object so that the object reflects at least a portion of the spatially encoded light to form reflected light, the reflected light is directed through the coded aperture to form spatially decoded light, the spatially decoded light is directed onto the image sensor so as to form an image thereon, and the image sensor detects the image, wherein the spatial encoding arrangement comprises the coded aperture, and wherein the optical system is configured such that, in use, the one or more optical emitters emit light which passes through the coded aperture to form the spatially encoded light such that the initial spatial pattern of the spatially encoded light is defined by the mask pattern of the coded aperture.
16. A method for imaging an object, the method comprising: directing spatially encoded light with an initial spatial distribution onto an object so that the object reflects at least a portion of the spatially encoded light to form reflected light; directing the reflected light through a coded aperture to form spatially decoded light, wherein the coded aperture defines a mask pattern which is based on the initial spatial distribution of the spatially encoded light; and directing the spatially decoded light onto an image sensor, wherein the spatially decoded light forms an image on the image sensor and the image sensor detects the image, and wherein the initial spatial pattern of the spatially encoded light is defined by a spatial arrangement of a plurality of optical emitters, and wherein the mask pattern defined by the coded aperture is the inverse of the initial spatial pattern of the spatially encoded light defined by the spatial arrangement of the plurality of optical emitters.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0083] An optical system and method will now be described by way of non-limiting example only with reference to the accompanying drawings.
DETAILED DESCRIPTION OF THE DRAWINGS
[0092] Referring initially to
[0093] The spatial encoding arrangement 20 includes a Lambertian illuminator generally designated 30 in the form of one or more infrared LEDs 32 which are integrated with, and distributed across, a light sensitive area of the image sensor 28. Moreover, in the particular optical system 10 of
[0094] In use, the illuminator 30 emits infrared light which passes through the optical filter 27 and the coded aperture 26 to form the spatially encoded light 22 with the initial spatial distribution 24. The optical system 10 is configured such that, in use, the spatial encoding arrangement 20 directs the spatially encoded light 22 onto the object 12 and the object 12 reflects at least a portion of the spatially encoded light 22 to form reflected light which is directed back through the coded aperture 26 to form spatially decoded light which is transmitted through the optical filter 27 and is incident on the image sensor 28. The spatially decoded light forms an image on the image sensor 28 and the image sensor 28 detects the image. For the reasons explained in detail below, the image formed on the image sensor 28 more closely resembles the object 12 compared with the images of objects formed using prior art coded aperture imaging systems. Consequently, use of the optical system 10 for imaging eliminates, or at least reduces, the complexity of the processing required to reconstruct an image of the object 12 compared with prior art coded aperture imaging techniques performed using prior art coded aperture imaging systems.
[0095] By way of a simplified explanation of the principle of operation of the optical system 10, the irradiance distribution I_O of the spatially encoded light 22 which illuminates the object 12 may be considered to be a convolution of a radiant exitance distribution I_E generated by the Lambertian illuminator 30 and the transmission function A of the mask pattern defined by the coded aperture 26, i.e. the irradiance distribution I_O of the spatially encoded light 22 which illuminates the object 12 may be expressed as I_O = I_E * A, where * represents the convolution operation. If R_O is the reflectivity of the object 12 as a function of a transverse position across the object, the irradiance reflected by the object 12 is given by I_O · R_O = (I_E * A) · R_O. The image formed on the image sensor 28 by the spatially decoded light after the reflected irradiance (I_E * A) · R_O passes back through the coded aperture 26 may be considered to be a convolution of the reflected irradiance (I_E * A) · R_O and the transmission function A of the mask pattern, i.e. the spatially decoded light incident on the image sensor 28 is given by [(I_E * A) · R_O] * A = I_E * A * A · R_O. The transmission function A defined by the MURA mask pattern of the coded aperture 26 is configured such that A * A = δ, i.e. such that the auto-correlation of the transmission function A of the MURA mask pattern is, or at least approximates, a two-dimensional Kronecker delta function. Moreover, for the special case where the radiant exitance distribution I_E generated by the Lambertian illuminator 30 is, or resembles, a point source, I_E may also be approximated by a two-dimensional Kronecker delta function, i.e. I_E ≈ δ. Consequently, the spatial distribution of the spatially decoded light incident on the image sensor 28 when the Lambertian illuminator 30 is, or resembles, a point source, is given by R_O, i.e. for the special case where the radiant exitance distribution I_E generated by the Lambertian illuminator 30 is, or resembles, a point source, the spatially decoded light incident on the image sensor 28 forms an image which approximates the reflectivity of the object 12 as a function of a transverse position across the object 12. In other words, the spatially decoded light incident on the image sensor 28 forms an image of the object 12 on the image sensor 28 which is independent of the MURA mask pattern of the coded aperture 26. One of skill in the art will understand that the foregoing simplified explanation of the principle of operation of the optical system 10 applies to the special case where the radiant exitance distribution I_E generated by the Lambertian illuminator 30 is, or resembles, a point source, and does not take into account blur or magnification of the illuminator 30 or the transparent regions or apertures of the MURA mask pattern. A more general, more detailed explanation of the principle of operation of the optical system 10 is provided below.
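The delta-like correlation property on which this explanation relies can be illustrated numerically. The Python sketch below is not taken from the patent: it builds a MURA mask using the standard Gottesman-Fenimore construction together with its ±1 decoding array G, and the mask order p, the toy reflectivity and the FFT-based circular correlations are illustrative assumptions. (For a binary mask the exact delta property holds for the correlation of A with G; the description's A * A = δ may be read as the idealised statement of this property.)

import numpy as np

def mura(p):
    # Binary MURA mask of prime order p (standard Gottesman-Fenimore construction).
    c = -np.ones(p, dtype=int)
    c[np.unique(np.arange(1, p) ** 2 % p)] = 1       # c[i] = +1 iff i is a quadratic residue mod p
    A = np.zeros((p, p), dtype=int)
    A[1:, 0] = 1                                     # first column open, except the corner element
    A[1:, 1:] = (np.outer(c[1:], c[1:]) + 1) // 2    # open where c[i] * c[j] = +1
    return A

def decoding(A):
    # +/-1 decoding array G paired with a binary MURA mask A.
    G = 2 * A - 1
    G[0, 0] = 1
    return G

def correlate(x, y):
    # Periodic (circular) cross-correlation via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y))))

p = 101                                              # illustrative prime mask order
A = mura(p)
G = decoding(A)

ac = correlate(A, G)                                 # delta-like: peak at (0, 0), side-lobes ~ 0
print(ac[0, 0], np.abs(np.delete(ac.ravel(), 0)).max())

scene = np.zeros((p, p)); scene[30:60, 40:50] = 1.0  # toy reflectivity R_O
encoded = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(A)))  # scene convolved with A
recovered = correlate(encoded, G) / ac[0, 0]         # correlating with G undoes the encoding
print(np.allclose(recovered, scene))                 # True, up to FFT rounding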
[0096] In general, the irradiance distribution I_O of the spatially encoded light 22 which illuminates the object 12 is given by:
I_O = m_tE(I_E) * m_tA(A) * b_t (Equation 1)
where m_tE represents a magnified irradiance distribution associated with the projection of the radiant exitance distribution I_E generated by the Lambertian illuminator 30 from the plane 40 of the illuminator 30 through the mask pattern defined by the coded aperture 26 onto the object plane 44, m_tA represents a magnification function associated with the projection of one of the transparent regions or apertures of the transmission function A of the MURA mask pattern from the coded aperture plane 42 onto the object plane 44, b_t represents a blur function associated with the projection of a point source in the plane 40 of the illuminator 30 onto the object plane 44, and * represents the convolution operation.
[0097] Specifically, if r is the transverse position in the plane 40 of the Lambertian illuminator 30, and the radiant exitance distribution I_E generated by the Lambertian illuminator 30 as a function of the transverse position in the plane 40 of the Lambertian illuminator 30 is represented by I_E(r), m_tE is defined by:
m_tE[I_E(r)] = I_E(M_tE · r) (Equation 2)
where M_tE represents a magnification factor associated with the magnification of the irradiance distribution I_E(r) generated by the Lambertian illuminator 30 from the plane 40 of the Lambertian illuminator 30 through a pin-hole in the coded aperture plane 42 onto the object plane 44. Specifically, from simple pin-hole camera theory, the magnification factor M_tE is given by:
[0098] Similarly, if r is the transverse position in the coded aperture plane 42, and the transmission of the MURA mask pattern defined by the coded aperture 26 as a function of the transverse position in the coded aperture plane 42 is represented by A(r), m_tA is defined by:
m_tA[A(r)] = A(M_tA · r) (Equation 4)
where M_tA represents a magnification factor associated with the projection of one of the transparent regions or apertures of the MURA mask pattern having a diameter w from the coded aperture plane 42 to a diameter W in the object plane 44 as illustrated in
which may be re-arranged to give the desired magnification factor M_tA according to:
[0099] The blur function b_t accounts for the projection or blurring of a point source in the plane 40 of the illuminator 30 onto a spot in the object plane 44. Specifically, with reference to
where W represents a dimension of the spot formed in the object plane 44 as a consequence of the projection of light emitted from a point source in the plane 40 of the illuminator 30 through a transparent region or aperture of the coded aperture 26 having a dimension of w.
[0100] The blur factor B_t may also include a contribution of the diffraction blur given by:
where λ is the wavelength of the light emitted by the Lambertian illuminator 30.
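Equations 3 and 5 to 8 did not survive reproduction here. For orientation only, the following LaTeX block sketches plausible reconstructions from the simple pin-hole geometry described above, assuming the illuminator plane 40 and the coded aperture plane 42 are separated by a distance d and the object plane 44 lies at a distance z from the coded aperture plane; the original conventions (signs, image inversion, exact diffraction estimate) may differ.

% Hedged reconstruction, not the original equations; pin-hole geometry with
% illuminator-to-aperture spacing d and aperture-to-object distance z assumed.
\begin{align*}
M_{tE} &\approx \frac{d}{z} && \text{(cf. Equation 3: pin-hole projection of $I_E$ onto the object plane)}\\
W &\approx w\,\frac{d+z}{d} && \text{(cf. Equation 5: projected size of an aperture of size $w$)}\\
M_{tA} &= \frac{w}{W} \approx \frac{d}{d+z} && \text{(cf. Equation 6: re-arrangement of Equation 5)}\\
B_t &= \frac{W}{w} \approx \frac{d+z}{d} && \text{(cf. Equation 7: geometric blur factor)}\\
W_{\mathrm{diff}} &\sim \frac{\lambda z}{w} && \text{(cf. Equation 8: far-field diffraction spread)}
\end{align*}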
[0101] If R_O is the reflectivity of the object 12 as a function of a transverse position in the object plane 44, the irradiance reflected by the object 12 is given by:
I_O · R_O = [m_tE(I_E) * m_tA(A) * b_t] · R_O (Equation 9)
[0102] The irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 28 after transmission of the reflected irradiance I_O · R_O back through the coded aperture 26 is then given by:
I_S = m_rO(I_O · R_O) * m_rA(A) * b_r (Equation 10)
where m_rO represents a magnified irradiance distribution associated with the projection of the reflected irradiance I_O · R_O reflected by the object 12 back through the mask pattern defined by the coded aperture 26 onto the image sensor plane 40, m_rA represents a magnification function associated with the projection of one of the transparent regions or apertures of the transmission function A of the mask pattern from the coded aperture plane 42 onto the image sensor plane 40, b_r represents a blur function associated with the projection of a point source in the object plane 44 onto the image sensor plane 40, and * represents the convolution operation.
[0103] Specifically, if r is the transverse position in the object plane 44, and the irradiance I_O R_O reflected by the object 12 as a function of the transverse position in the object plane 44 is represented by I_O R_O(r), m_rO is defined by:
m_rO[I_O R_O(r)] = I_O R_O(M_rO · r) (Equation 11)
where M_rO represents a magnification factor associated with the magnification of the reflected irradiance distribution I_O R_O(r) from the object plane 44 through a pin-hole in the coded aperture plane 42 onto the image sensor plane 40. Specifically, from simple pin-hole camera theory, the magnification factor M_rO is given by:
[0104] Similarly, if r is the transverse position in the coded aperture plane 42, and the transmission of the mask pattern defined by the coded aperture 26 as a function of the transverse position in the coded aperture plane 42 is represented by A(r), m_rA is defined by:
m_rA[A(r)] = A(M_rA · r) (Equation 13)
where M_rA represents a constant magnification factor associated with the projection of one of the transparent regions or apertures of the mask pattern having a diameter w from the coded aperture plane 42 to a diameter W in the image sensor plane 40. Specifically, the projection W of one of the transparent regions or apertures of the mask pattern having a diameter w onto the image sensor plane 40 is given by:
which may be re-arranged to give the desired magnification factor M_rA according to:
[0105] The blur function b_r accounts for the projection or blurring of a point source in the object plane 44 onto the image sensor plane 40. Specifically, the degree of blur experienced by a point source in the object plane 44 is given by a blur factor B_r defined by the relation:
where W represents a dimension of the spot formed in the image sensor plane 40 as a consequence of the projection of light emitted from a point source in the object plane 44 through a transparent region or aperture of the coded aperture 26 having a dimension of w.
[0106] The blur factor B_r may also include a contribution of the diffraction blur given by:
where λ is the wavelength of the light emitted by the Lambertian illuminator 30.
[0107] The irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 28 is given by:
[0108] For the case where z>>d:
[0109] Since the transmission function A defined by the MURA mask pattern of the coded aperture 26 is configured such that A * A = δ, i.e. such that the auto-correlation of the transmission function A of the MURA mask pattern is, or at least approximates, a two-dimensional Kronecker delta function, the irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 28 is given by:
I_S ≈ [I_E * m_rO(b_t)] · m_rO(R_O) (Equation 28)
[0110] In other words, the spatially decoded light detected by the image sensor 28 forms an image which is a product of the magnified reflectivity distribution m_rO(R_O) of the object 12 with a predictable function I_E * m_rO(b_t) which is independent of the transmission function A defined by the MURA mask pattern of the coded aperture 26. Thus, for the general case of a Lambertian illuminator 30 which is distributed (i.e. not a point source), the image R_O of the object 12 may be obtained from knowledge of the irradiance distribution I_E generated by the Lambertian illuminator 30 and of the blur function b_t, but without any knowledge of the transmission function A of the MURA mask pattern of the coded aperture 26. In effect, this means that the image R_O of the object 12 may be obtained more rapidly and with less energy than with the deconvolution operations associated with prior art coded aperture imaging techniques.
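As a concrete illustration of this correction (a sketch, not the patent's method), the predictable envelope I_E * m_rO(b_t) of Equation 28 can simply be divided out of the detected image. The use of scipy's fftconvolve, the assumption that the magnification m_rO is already folded into the supplied blur kernel, and all names are illustrative, with I_E and b_t obtained as described in the following paragraphs.

import numpy as np
from scipy.signal import fftconvolve

def recover_reflectivity(I_S, I_E, b_t):
    # Divide the detected image I_S by the predictable envelope I_E * m_rO(b_t)
    # of Equation 28; a small floor avoids division by zero outside the envelope.
    env = fftconvolve(I_E, b_t, mode="same")
    return I_S / np.maximum(env, 1e-9)    # approximately m_rO(R_O), up to scale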
[0111] The irradiance distribution I_E generated by the Lambertian illuminator 30 may be estimated from knowledge of the design or construction of the Lambertian illuminator 30. Additionally or alternatively, the irradiance distribution I_E generated by the Lambertian illuminator 30 may be measured, for example before the Lambertian illuminator 30 is assembled with the coded aperture 26 and/or before the Lambertian illuminator 30 is fitted into the smartphone 2.
[0112] The blur function b_t may be determined using an iterative trial and error approach so as to minimise the degree of blur of the obtained image R_O of the object 12. Alternatively, the blur function b_t may be calculated using the blur factor:
where w is known from the design or construction of the coded aperture 26, d is known from the design or construction of the spatial encoding arrangement 20 of the optical system 10, and z may be estimated. Alternatively, z may be measured, for example using a phase- or frequency-shift or time of flight measurement method.
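As an illustration of the iterative trial-and-error approach described above, the following minimal Python sketch (not from the patent) searches for the blur width that maximises image sharpness after deblurring; the Gaussian blur model, the Wiener deconvolution and the gradient-energy sharpness score are all illustrative assumptions.

import numpy as np

def gaussian_psf(shape, sigma):
    # Centred two-dimensional Gaussian point-spread function with unit sum.
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deblur(image, psf, k=1e-3):
    # Wiener deconvolution of image by psf with regularisation constant k.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(H) / (np.abs(H) ** 2 + k)))

def sharpness(image):
    # Gradient-energy focus metric: larger means sharper.
    gy, gx = np.gradient(image)
    return float(np.sum(gy ** 2 + gx ** 2))

def estimate_blur_width(image, sigmas):
    # Trial-and-error search: deblur with each candidate width, keep the sharpest result.
    return max((sharpness(wiener_deblur(image, gaussian_psf(image.shape, s))), s) for s in sigmas)[1]

# sigma_hat = estimate_blur_width(detected_image, np.linspace(0.5, 8.0, 16))
# detected_image is a hypothetical sensor frame; the candidate range is illustrative.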
[0113] For the special case where the radiant exitance distribution I_E generated by the Lambertian illuminator 30 is, or resembles, a point source, the radiant exitance distribution I_E may be considered to be a two-dimensional Kronecker delta function such that the irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 28 is given by:
[0114] In other words, for the special case where the radiant exitance distribution I_E generated by the Lambertian illuminator 30 is, or resembles, a point source, the spatially decoded light detected by the image sensor 28 is a product of the reflectivity R_O of the object 12 as a function of a transverse position across the object 12 and a predictable function m_rO(b_t) which depends on blur but which is independent of the transmission function A of the MURA mask pattern of the coded aperture 26. Thus, for a point source Lambertian illuminator 30, the image R_O of the object 12 may be obtained from knowledge of the blur function b_t without any knowledge of the transmission function A of the MURA mask pattern of the coded aperture 26, where the blur function b_t may be determined using any of the methods described above.
[0115] In contrast to prior art coded aperture imaging systems which rely upon computational techniques to reconstruct an image of the object from spatially encoded light received from the object, the image formed on the image sensor 28 more closely resembles the object 12 compared with the images of objects formed using prior art coded aperture imaging systems. Consequently, use of the optical system 10 eliminates, or at least reduces, the complexity of the image processing required to reconstruct an image of the object 12 compared with prior art coded aperture imaging techniques performed using prior art coded aperture imaging systems. Accordingly, such an optical system 10 may eliminate, or at least reduce, the computational burden associated with imaging relative to prior art coded aperture imaging systems thereby reducing imaging time/improving responsiveness and/or reducing the energy consumption for imaging relative to prior art coded aperture imaging systems.
[0116] The smartphone 2 may be configured to process the image detected by the image sensor 28 to thereby recognise one or more features of the object 12. For example, the smartphone 2 may include a processing resource (not shown) which is configured to process the image detected by the image sensor 28 to thereby recognise one or more features of the object 12. The object 12 may comprise at least part of a person's finger or thumb and the processing resource may be configured to process the image detected by the image sensor 28 for the purposes of recognising or determining the proximity of the person's finger or thumb to the optical system 10 from the detected image of at least part of the person's finger or thumb. This may be particularly advantageous for proximity touchscreens for small displays in which a user interacts with the touchscreen on an elevated virtual plane in the vicinity of the touchscreen to avoid or at least reduce the extent to which a user's finger or thumb obscures the display.
[0117] The object 12 may comprise a person or part of a person and the smartphone 2 may be configured to process the image detected by the image sensor 28 for the purposes of recognising or identifying the person from the image of the object 12. For example, the object 12 may comprise at least part of a person's face and the smartphone 2 may include a processing resource (not shown) which is configured to process the image detected by the image sensor 28 for the purposes of facial recognition. The object 12 may comprise a person's finger print or thumb print and the processing resource may be configured to process the image detected by the image sensor 28 for the purposes of recognising or identifying the person from the image of the person's finger print or thumb print.
[0118] The smartphone 2 may be configured to process a plurality of images of a moving object for the recognition of one or more predetermined movements of the object. For example, the moving object may comprise at least part of a person and the smartphone 2 may include a processing resource (not shown) which is configured to process a plurality of images detected by the image sensor 28 for the purposes of recognising or identifying one or more predetermined movements of at least part of the person e.g. a gesture.
[0119] Referring to
[0120] The smartphone 102 may be configured to process the image detected by the image sensor 128 to thereby recognise one or more features of the object 112. For example, the smartphone 102 may include a processing resource (not shown) which is configured to process the image detected by the image sensor 128 to thereby recognise one or more features of the object 112. The object 112 may comprise at least part of a person's finger or thumb and the processing resource may be configured to process the image detected by the image sensor 128 for the purposes of recognising or determining the proximity of the person's finger or thumb to the optical system 110 from the detected image of at least part of the person's finger or thumb. This may be particularly advantageous for proximity touchscreens for small displays in which a user interacts with the touchscreen on an elevated virtual plane in the vicinity of the touchscreen to avoid or at least reduce the extent to which a user's finger or thumb obscures the display.
[0121] The object 112 may comprise a person or part of a person and the smartphone 102 may be configured to process the image detected by the image sensor 128 for the purposes of recognising or identifying the person from the image of the object 112. For example, the object 112 may comprise at least part of a person's face and the smartphone 102 may include a processing resource (not shown) which is configured to process the image detected by the image sensor 128 for the purposes of facial recognition. The object 112 may comprise a person's finger print or thumb print and the processing resource may be configured to process the image of the person's finger print or thumb print for the purposes of recognising or identifying the person from the image of the person's finger print or thumb print.
[0122] The smartphone 102 may be configured to process a plurality of images of a moving object for the recognition of one or more predetermined movements of the object. For example, the moving object may comprise at least part of a person and the smartphone 102 may include a processing resource (not shown) which is configured to process a plurality of images detected by the image sensor 128 for the purposes of recognising or identifying one or more predetermined movements of at least part of the person e.g. a gesture.
[0123] Referring to
[0124] The spatial encoding arrangement 220 includes a Lambertian illuminator generally designated 230 in the form of one or more infrared LEDs 232. However, unlike the spatial encoding arrangement 20 of the optical system 10 of
[0125] In a variant of the optical system 210 of the smartphone 202 of
[0126] Referring now to
[0127] Like the coded aperture 26 of the optical system 10 of
[0128] In use, the LEDs 332 emit infrared light with a spatial distribution which is the inverse of the MURA mask pattern to form the spatially encoded light 322 with the initial spatial distribution 324. The optical system 310 is configured such that, in use, the spatial encoding arrangement 320 directs the spatially encoded light 322 onto the object 312 and the object 312 reflects at least a portion of the spatially encoded light 322 to form reflected light which is directed back through the transparent regions of the MURA mask pattern defined by the gaps between the LEDs 332 to form spatially decoded light which is transmitted through the optical filter 327 and is incident on the image sensor 328. The spatially decoded light forms an image on the image sensor 328 and the image sensor 328 detects the image. For the reasons explained in detail below, the image formed on the image sensor 328 more closely resembles the object 312 compared with the images of objects formed using prior art coded aperture imaging systems. Consequently, use of the optical system 310 for imaging eliminates, or at least reduces, the complexity of the processing required to reconstruct an image of the object 312 compared with prior art coded aperture imaging techniques performed using prior art coded aperture imaging systems.
[0129] In general, the irradiance distribution I_O of the spatially encoded light 322 which illuminates the object 312 is given by:
I_O ≈ m_t[I · (1 − A)] (Equation 31)
where it is assumed that each of the LEDs 332 of the Lambertian illuminator 330 generates the same radiant exitance I, A represents the transmission function of the MURA mask pattern of the coded aperture 326 defined by the LEDs 332, and m_t is a magnification function which accounts for the divergence of light from a point source in the plane 342 of the illuminator 330 onto a spot in the object plane 344. Specifically, with reference to
[0130] If R_O is the reflectivity of the object 312 as a function of a transverse position in the object plane 344, the irradiance reflected by the object 312 is given by:
I_O · R_O = m_t[I · (1 − A)] · R_O (Equation 33)
[0131] The irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 328 after transmission of the reflected irradiance I_O · R_O through the coded aperture 326 is then given by:
I_S = m_rO(I_O · R_O) * m_rA(A) * b_r (Equation 34)
where m_rO represents a magnified irradiance distribution associated with the projection of the reflected irradiance I_O · R_O reflected by the object 312 through the mask pattern defined by the coded aperture 326 onto the image sensor plane 340, m_rA represents a magnification function associated with the projection of one of the transparent regions or apertures of the mask pattern from the coded aperture plane 342 onto the image sensor plane 340, b_r represents a blur function associated with the projection of a point source in the object plane 344 onto the image sensor plane 340, and * represents the convolution operation.
[0132] Specifically, if r is the transverse position in the object plane 344, and the irradiance I_O R_O reflected by the object 312 as a function of the transverse position in the object plane 344 is represented by I_O R_O(r), m_rO is defined by:
m_rO[I_O R_O(r)] = I_O R_O(M_rO · r) (Equation 35)
where M_rO represents a magnification factor associated with the magnification of the reflected irradiance distribution I_O R_O(r) from the object plane 344 through a pin-hole in the coded aperture plane 342 onto the image sensor plane 340. Specifically, from simple pin-hole camera theory, the magnification factor M_rO is given by:
[0133] Similarly, if r is the transverse position in the coded aperture plane 342, and the mask pattern defined by the coded aperture 326 as a function of the transverse position in the coded aperture plane 342 is represented by A(r), m_rA is defined by:
m_rA[A(r)] = A(M_rA · r) (Equation 37)
where M_rA represents a constant magnification factor associated with the projection of one of the transparent regions or apertures of the mask pattern A(r) having a diameter w from the coded aperture plane 342 to a diameter W in the image sensor plane 340. Specifically, the projection W of one of the transparent regions or apertures of the mask pattern A(r) having a diameter w onto the image sensor plane 340 is given by:
which may be re-arranged to give the desired magnification factor M_rA according to:
The blur function b_r accounts for the projection or blurring of a point source in the object plane 344 onto the image sensor plane 340. Specifically, the degree of blur experienced by a point source in the object plane 344 is given by a blur factor B_r defined by the relation:
where W represents a dimension of the spot formed in the image sensor plane 340 as a consequence of the projection of light emitted from a point source in the object plane 344 through a transparent region or aperture of the coded aperture 326 having a dimension of w.
[0134] The irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 328 is given by:
I_S = m_rO{m_t[I · (1 − A)] · R_O} * m_rA(A) * b_r (Equation 41)
Assuming that an image of the object 312 is projected onto the image sensor 328, then from pin-hole camera theory based on the geometry shown
For the case where z>>d:
[0135] Since the transmission function A of the MURA mask pattern defined by the coded aperture 326 is configured such that A * A = δ, i.e. such that the auto-correlation of the transmission function A of the MURA mask pattern is, or at least approximates, a two-dimensional Kronecker delta function, the irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 328 is given by:
I_S ≈ I · (g − δ) · m_rO(R_O) (Equation 49)
where g = 1 * A is a pyramid function as shown in
[0136] In other words, the spatially decoded light detected by the image sensor 328 forms an image which is a product of the magnified reflectivity distribution m_rO(R_O) of the object 312 with a predictable function (g − δ) which is independent of the radiant exitance distribution associated with the LEDs 332 and the transmission function A of the MURA mask pattern of the coded aperture 326. Thus, the image R_O of the object 312 may be obtained from knowledge of the function (g − δ), but without any knowledge of the radiant exitance distribution associated with the LEDs 332 and without any knowledge of the transmission function A of the MURA mask pattern of the coded aperture 326. In effect, this means that the image R_O of the object 312 may be obtained more rapidly and with less energy than with the deconvolution operations associated with prior art coded aperture imaging techniques.
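As an illustration, the following minimal Python sketch (not from the patent) computes the envelope g = 1 * A for a finite mask and divides it out of a detected frame. The reading of (g − δ) as a per-pixel envelope with the Kronecker delta subtracted at the zero-shift pixel, the use of scipy's fftconvolve, and all names are assumptions.

import numpy as np
from scipy.signal import fftconvolve

def envelope(A, frame_shape):
    # g = 1 * A: convolution of an all-ones window with the mask transmission
    # function; for a finite mask this envelope is pyramid-shaped.
    g = fftconvolve(np.ones(frame_shape), A, mode="same")
    g[tuple(s // 2 for s in frame_shape)] -= 1.0   # Kronecker delta term (assumed placement)
    return g

# reflectivity ~ detected_frame / np.maximum(envelope(A, detected_frame.shape), 1e-9)
# detected_frame (a sensor frame) and A (the binary mask defined by the LEDs) are hypothetical inputs.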
[0137] In contrast to prior art coded aperture imaging systems which rely upon computational techniques to reconstruct an image of the object from spatially encoded light received from the object, the image formed on the image sensor 328 more closely resembles the object 312 compared with the images of objects formed using prior art coded aperture imaging systems. Consequently, use of the optical system 310 eliminates, or at least reduces, the complexity of the image processing required to reconstruct an image of the object 312 compared with prior art coded aperture imaging techniques performed using prior art coded aperture imaging systems. Accordingly, such an optical system 310 may eliminate, or at least reduce, the computational burden associated with imaging relative to prior art coded aperture imaging systems thereby reducing imaging time/improving responsiveness and/or reducing the energy consumption for imaging relative to prior art coded aperture imaging systems.
[0138] Although preferred embodiments of the disclosure have been described in terms as set forth above, it should be understood that these embodiments are illustrative only and that the claims are not limited to those embodiments. Those skilled in the art will understand that various modifications may be made to the described embodiments without departing from the scope of the appended claims. For example, those skilled in the art will understand that if a non-Lambertian illuminator were used in place of the Lambertian illuminator 30 in the optical system 10 of the smartphone 2 of
I_S ≈ [I_E · A * A * m_rO(b_t)] · m_rO(R_O) (Equation 50)
[0139] Since the transmission function A of the MURA mask pattern defined by the coded aperture 26 is configured such that A * A = δ, i.e. such that the auto-correlation of the transmission function A of the MURA mask pattern is, or at least approximates, a two-dimensional Kronecker delta function, the irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 28 is given by:
I_S ≈ [I_E · m_rO(b_t)] · m_rO(R_O) (Equation 51)
[0140] For the special case where the radiant exitance distribution I_E generated by the non-Lambertian illuminator is, or resembles, a point source, the radiant exitance distribution I_E may be considered to be a two-dimensional Kronecker delta function such that the irradiance distribution I_S of the spatially decoded light which is incident on the image sensor 28 is given by:
I_S ≈ m_rO(b_t) · m_rO(R_O) (Equation 52)
[0141] In other words, the spatially decoded light incident on the image sensor 28 forms an image which is a blurred, magnified version of the reflectivity R_O of the object 12 as a function of a transverse position across the object 12, but which is independent of the transmission function A of the MURA mask pattern of the coded aperture 26. Thus, for a point source non-Lambertian illuminator, the image R_O of the object 12 may be obtained from knowledge of the blur function b_t without any knowledge of the transmission function A of the MURA mask pattern of the coded aperture 26. The blur function b_t may be determined using any of the methods described above.
[0142] Although the mask pattern defined by the coded apertures 26, 126, 226 is a MURA mask pattern, other mask patterns may be possible, provided the mask pattern is configured such that an auto-correlation of the transmission function of the mask pattern is, or resembles, the Kronecker delta function δ which includes a central peak or lobe, but which includes no secondary peaks or side-lobes, or which includes a central peak or lobe and one or more secondary peaks or side-lobes which have an amplitude which is less than 1/10 an amplitude of the central peak or lobe, which is less than 1/100 an amplitude of the central peak or lobe, or which is less than 1/1000 an amplitude of the central peak or lobe.
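The side-lobe criterion above can be checked numerically for a candidate mask pattern. The following minimal sketch (not from the patent) computes the ratio of the largest auto-correlation side-lobe to the central peak; periodic auto-correlation and removal of the binary mask's flat background are illustrative assumptions.

import numpy as np

def sidelobe_ratio(mask):
    # Ratio of the largest auto-correlation side-lobe to the central peak;
    # the text requires this to be zero or below 1/10, 1/100 or 1/1000.
    m = mask - mask.mean()                                   # remove the flat background
    ac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(m)) ** 2))  # periodic auto-correlation
    return np.abs(np.delete(ac.ravel(), 0)).max() / ac.flat[0]

# A candidate pattern satisfies the strictest criterion above when
# sidelobe_ratio(mask) < 1/1000.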
[0143] The mask pattern may be a Uniformly Redundant Array (URA) mask pattern.
[0144] The coded aperture may comprise a phase mask.
[0145] The coded aperture may be diffractive.
[0146] The coded aperture 26, 126, 226 may be reconfigurable. For example, the coded aperture 26, 126, 226 may be formed from, or comprise, a plurality of reconfigurable elements for this purpose, wherein each element is reconfigurable between a transparent state in which light from the Lambertian illuminator 30, 130, 230 can pass through the element and a blocking or absorbing state in which light from the Lambertian illuminator 30, 130, 230 is blocked or absorbed. For example, the coded aperture 26, 126, 226 may be formed from, or comprise, an LCD array.
[0147] The optical filter 27, 127, 227 may comprise, or be formed from, a dye-based polymer material. The optical filter 27, 127, 227 may comprise, or be formed from, antimony doped tin oxide.
[0148] The image sensor 28, 128, 228, 328 may be configured to detect infrared light.
[0149] The image sensor 28, 128, 228, 328 may be configured to have a lower sensitivity to ambient visible light reflected from the object and a higher sensitivity to infrared light reflected from the object.
[0150] The spatially encoded light 22, 322 may be modulated temporally with a pre-defined temporal modulation so that the spatially decoded light is also modulated temporally with the pre-defined temporal modulation and wherein the image sensor 28, 328 is configured to distinguish between the temporally modulated spatially decoded light and light which is temporally unmodulated and/or light which is modulated temporally with a temporal modulation which is different to the pre-defined temporal modulation.
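A minimal sketch of one way the image sensor 28, 328 could distinguish the temporally modulated spatially decoded light from temporally unmodulated ambient light, by synchronous (lock-in) demodulation of a stack of frames; the square-wave reference, the frame-stack interface and the function names are illustrative assumptions, not part of the patent.

import numpy as np

def lockin_demodulate(frames, reference):
    # frames: (N, H, W) stack captured while the emitters are driven with the
    # pre-defined temporal modulation; reference: length-N copy of that drive.
    # Light correlated with the reference is retained; unmodulated ambient light
    # (and light with a different modulation) averages towards zero.
    ref = reference - reference.mean()
    return np.tensordot(ref, frames, axes=(0, 0)) / len(ref)

# n = 64
# reference = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)   # illustrative square wave
# image = lockin_demodulate(frame_stack, reference)        # frame_stack is hypothetical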
[0151] Rather than using one or more infrared LEDs 32, 132, 232, 332 as illuminator, other types of optical emitters may be used.
[0152] Each optical emitter may be modulated temporally with a pre-defined temporal modulation so that the spatially decoded light is also modulated temporally with the pre-defined temporal modulation.
[0153] The spatial encoding arrangement 20, 120, 220, 320 may comprise a plurality of optical emitters and different optical emitters of the plurality of optical emitters may be operated at different times so as to illuminate an object from different directions and/or to illuminate different parts of an object at different times to thereby form different images of the object when viewed from different directions and/or to thereby form images of different parts of the object.
[0154] The optical system 10, 110, 210, 310 may be incorporated into an electronic device of any kind.
[0155] The optical system 10, 110, 210, 310 may be incorporated into a mobile electronic device.
[0156] The optical system 10, 110, 210, 310 may be incorporated into a mobile phone, a cell phone, a smartphone, a tablet, a laptop, or a wearable electronic device such as an electronic watch or an electronic wristband.
[0157] Each feature disclosed or illustrated in the present specification may be incorporated in any embodiment, either alone, or in any appropriate combination with any other feature disclosed or illustrated herein. In particular, one of ordinary skill in the art will understand that one or more of the features of the embodiments of the present disclosure described above with reference to the drawings may produce effects or provide advantages when used in isolation from one or more of the other features of the embodiments of the present disclosure and that different combinations of the features are possible other than the specific combinations of the features of the embodiments of the present disclosure described above.
[0158] The skilled person will understand that in the preceding description and appended claims, positional terms such as above, along, side, etc. are made with reference to conceptual illustrations, such as those shown in the appended drawings. These terms are used for ease of reference but are not intended to be of limiting nature. These terms are therefore to be understood as referring to an object when in an orientation as shown in the accompanying drawings.
[0159] Use of the term comprising when used in relation to a feature of an embodiment of the present disclosure does not exclude other features or steps. Use of the term a or an when used in relation to a feature of an embodiment of the present disclosure does not exclude the possibility that the embodiment may include a plurality of such features.
[0160] The use of reference signs in the claims should not be construed as limiting the scope of the claims.
LIST OF REFERENCE NUMERALS
[0161] 2 smartphone; [0162] 10 optical system; [0163] 12 object; [0164] 20 spatial encoding arrangement; [0165] 22 spatially encoded light; [0166] 24 initial spatial distribution of the spatially encoded light; [0167] 26 coded aperture; [0168] 27 optical filter; [0169] 28 image sensor; [0170] 30 illuminator; [0171] 32 LED; [0172] 40 illuminator/image sensor plane; [0173] 42 coded aperture plane; [0174] 44 object plane; [0175] 102 smartphone; [0176] 110 optical system; [0177] 120 spatial encoding arrangement; [0178] 126 coded aperture; [0179] 127 optical filter; [0180] 128 image sensor; [0181] 130 illuminator; [0182] 132 LED; [0183] 150 touchscreen; [0184] 202 smartphone; [0185] 210 optical system; [0186] 220 spatial encoding arrangement; [0187] 226a coded aperture; [0188] 226b further coded aperture; [0189] 227 optical filter; [0190] 228 image sensor; [0191] 230 illuminator; [0192] 232 LED; [0193] 302 smartphone; [0194] 310 optical system; [0195] 312 object; [0196] 320 spatial encoding arrangement; [0197] 322 spatially encoded light; [0198] 324 initial spatial distribution of the spatially encoded light; [0199] 326 coded aperture; [0200] 327 optical filter; [0201] 328 image sensor; [0202] 330 illuminator; [0203] 332 LED; [0204] 340 illuminator/image sensor plane; [0205] 342 coded aperture plane; [0206] 344 object plane; and [0207] 360 lens elements.