Monolithic, multi-mode, CMOS-compatible imager with active, per-pixel drive and sense circuitry for transducers
12212870 · 2025-01-28
CPC classification
H04N23/11
ELECTRICITY
H04N25/77
ELECTRICITY
H10F39/803
ELECTRICITY
H10F39/18
ELECTRICITY
International classification
H04N25/77
ELECTRICITY
H04N23/11
ELECTRICITY
Abstract
A single-chip solution for multi-modal imaging in which every pixel is capable of electrostatic, ultrasonic and optical imaging. The device can be configured in as many modality configurations as the types of transducers included in the system allow.
Claims
1. A device, comprising a substrate and a focal plane array of pixels, a. wherein the pixels comprise one or more transducers in one or more of the ultrasonic, electrical, optical, and thermal domains, b. wherein the ultrasonic transducers are adapted to perform one or more of the following functions: ultrasonic transmit, ultrasonic receive, electrical transmit, electrical receive, GND electrode, by connecting their electrodes to relevant circuits using electrical switches, c. wherein the pixels are implemented using a complementary metal oxide semiconductor (CMOS) process, d. wherein the top of the substrate includes CMOS electronics, e. wherein the optical transducer consists of one or more of a photodetector element using a semiconductor junction, or an array thereof, or a light source incident on top of the substrate, or a light source incident from the bottom of the substrate, f. wherein the thermal transducers on the pixel consist of a temperature sensor and a local heater element.
2. The device of claim 1, wherein the substrate is made of silicon, glass, sapphire or other semiconductor materials.
3. The device of claim 1, wherein the ultrasonic transducer can be used to obtain an ultrasonic image of the sample.
4. The device of claim 1, wherein the electrical transmit and receive circuits can be used to obtain electrical impedance and capacitive images of the sample.
5. The device of claim 1, wherein the device is adapted to image the samples on the top side of the substrate in a manner where transmit and receive modalities are each picked from one or more of the following: ultrasonic, electrical, optical, thermal.
6. The device of claim 1, wherein the device is adapted to image the samples on the bottom side of the substrate in a manner where transmit and receive modalities are each picked from one or more of the following: ultrasonic, optical, thermal.
7. The device of claim 1, wherein the samples can be placed on top of the substrate by temporarily removing or folding the top light source.
8. The device of claim 1, wherein the collected data can be transferred to other electronic peripherals or to a central controller wirelessly or using wires.
9. The device of claim 1, wherein any external light source or any of the pixel transducers and their configuration can be controlled from a central unit either on the device, or off the device.
10. The device of claim 1, wherein the ultrasonic transducer electrodes are shared with electrical and capacitive transduction electrodes to allow tighter integration within the pixel or plurality of pixels.
11. The device of claim 1, wherein a sample can be viewed from a camera on the bottom side of the substrate facing the sample.
12. The device of claim 1, wherein a second substrate made of an optically transparent material is bonded on the back side of the substrate.
13. The device of claim 12, wherein the optical output of the LED light sources couples to the transparent second substrate and is transmitted to the sample to image it.
14. An apparatus, comprising a substrate and first and second focal plane arrays of pixels, wherein in each: a. pixels comprise one or more transducers in one or more of the ultrasonic, electrical, optical, and thermal domains, b. ultrasonic transducers are adapted to perform at least one of the following functions by connecting their electrodes to relevant circuits using electrical switches: ultrasonic transmit, ultrasonic receive, electrical transmit, electrical receive, GND electrode, c. the pixels are implemented using a complementary metal oxide semiconductor (CMOS) process, d. the top of the substrate includes CMOS electronics, e. the optical transducer consists of one or more of a photodetector element using a semiconductor junction, or an array thereof, or a light source incident on top of the substrate, or a light source incident from the bottom of the substrate, f. wherein the thermal transducers on the pixel consist of a temperature sensor and a local heater element, g. wherein the first and second focal plane arrays of pixels are positioned on top of each other with their imaging surfaces facing each other, with an adjustable gap that can be used to adjust the normal force applied to any sample to be imaged between the two devices.
15. The apparatus of claim 14, wherein all the pixels and the modality of transducers can be controlled from a single controller on or off the apparatus.
16. The apparatus of claim 14, wherein the collected sensor data can be transferred to other electronic peripherals or to a central controller wirelessly or using wires.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present invention will be more fully understood and appreciated by reading the following Detailed Description in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION OF EMBODIMENTS
(11) The present disclosure describes a monolithic, multi-mode, CMOS-compatible imager with active, per-pixel drive and sense circuitry for transducers.
The First Embodiment
(12) A single-chip solution for multi-modal imaging in which every pixel is capable of electrostatic, ultrasonic and optical imaging. Note that the approach does not merely sum different modalities; rather, each modality is made to work efficiently with and/or without the others, simultaneously or in a time-multiplexed manner, also supporting cross-domain imaging if desired. Further, the standard motivations for microsystems, namely small size, weight, area, low power and low cost (SWAP-C), fully apply.
(13) An overarching goal is to realize a system that can be configured in as many modality configurations as possible based on the types of transducers included in the system. For instance, if the number of modalities is N_mode = 4, one should be able to operate in N_mode² = 16 configurations for single-transmit (TX), single-receive (RX) operation. Purely ultrasonic imaging corresponds to ultrasonic TX and RX, and purely optical operation entails optical TX and RX. Picking different operating modes for TX and RX enables cross-domain imaging, such as acousto-optic imaging with ultrasound as TX and optics as RX [Laudereau15], or photo-acoustic microscopy [Beard11, Steinberg19] with optics as TX and ultrasound as RX. Other example applications are fluorescence imaging [Mezil20], dielectrostriction measurements, thermal microcalorimetry, thermal imaging of (light-induced) chemical reactions, acoustic heating, etc. It might also be possible to operate with multiple simultaneous TX or RX modes in certain scenarios, but this would complicate the system design more than it would potentially improve the scan speed. Sequential TX-RX operation of imaging modality pairs is therefore the preferred approach.
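The single-TX/single-RX mode count above can be enumerated directly. The sketch below (the mode labels are illustrative, not taken from the disclosure) lists all N_mode² = 16 pairings and marks which are pure versus cross-domain:

```python
from itertools import product

# Illustrative labels for the four transducer domains of each pixel.
MODES = ("ultrasonic", "electrical", "optical", "thermal")

# Every single-TX / single-RX pairing: N_mode**2 = 16 configurations.
configs = list(product(MODES, MODES))
assert len(configs) == len(MODES) ** 2  # 16

for tx, rx in configs:
    # Same-domain pairs are "pure" imaging (e.g. ultrasonic TX + RX);
    # mixed pairs are cross-domain (e.g. optical TX + ultrasonic RX is
    # photo-acoustic, ultrasonic TX + optical RX is acousto-optic).
    kind = "pure" if tx == rx else "cross-domain"
    print(f"TX={tx:<10} RX={rx:<10} {kind}")
```

Of the 16 pairings, 4 are pure and 12 are cross-domain, which is why sequential pairing of modalities already covers a rich scan space.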
(14) The main parts of the proposed device are pictured in
(15) Ultrasonic Transduction
(16) Using CMOS compatible piezoelectric materials such as AlN, one can realize either solidly mounted [Kuo21, Abdelmejeed19] or released Piezoelectric Micromachined Ultrasonic Transducers (PMUT). One can also implement released actuators using Capacitive Micromachined Ultrasonic Transducers (CMUTs), unless using relatively large DC Voltages is a concern.
(17) In the embodiment described here, AlN is chosen due to its CMOS compatibility, high transduction efficiency, and low-voltage operation. Each sensor is designed to operate in the thickness mode, which can be modeled relatively accurately using transmission-line-based Mason, Redwood or KLM models [Cobbold06:Ch6]. These models can guide the material choice along the acoustic path and in the piezoelectric stack-up to keep the acoustic loss to a minimum.
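As a sketch of how such transmission-line models guide material choice, the input acoustic impedance of a single lossless layer terminated by a load is Z_in = Z0·(Z_L + j·Z0·tan(kd))/(Z0 + j·Z_L·tan(kd)); a full Mason/KLM model cascades such sections with the piezoelectric port. The impedance and sound-speed values below are assumed textbook numbers, not values from the disclosure:

```python
import cmath
import math

def layer_input_impedance(z0, z_load, freq_hz, thickness_m, c_m_s):
    """Input acoustic impedance of one lossless layer (transmission-line model).

    z0: characteristic acoustic impedance of the layer [Rayl]
    z_load: terminating (load) acoustic impedance [Rayl]
    """
    kd = 2 * math.pi * freq_hz / c_m_s * thickness_m  # phase across the layer
    t = cmath.tan(kd)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

Z_SI = 19.8e6   # silicon, assumed ~19.8 MRayl
Z_H2O = 1.48e6  # water, assumed ~1.48 MRayl
C_SI = 8433.0   # assumed longitudinal sound speed in silicon [m/s]

# Matched termination: the layer is acoustically invisible, Z_in == Z0.
z_matched = layer_input_impedance(Z_SI, Z_SI, 1.0e9, 500e-6, C_SI)
assert abs(z_matched - Z_SI) / Z_SI < 1e-6

# Water-loaded silicon: |Z_in| stays between Z_H2O and Z_SI**2/Z_H2O
# as frequency varies, which is what produces the substrate resonances
# discussed for the impedance measurement mode.
z_water = layer_input_impedance(Z_SI, Z_H2O, 1.0e9, 500e-6, C_SI)
print(abs(z_water) / 1e6, "MRayl")
```

Cascading one such section per material in the acoustic path gives the loss estimate used to pick the stack-up.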
(18) In the first embodiment shown in
(19) There may be cases that warrant imaging on the backside. The extra packaging layer on the top side can increase the z-profile height of the overall system, and repeated exposure to chemical solutions may reduce the lifetime of the device due to infiltration of the packaging by chemicals the device may be exposed to when imaging in high/low-pH and reactive liquids. Furthermore, in some cases several chips may be placed together side by side to increase the imaging field. In these situations, when the object to be imaged is larger than the area of one chip, the surrounding space required for wiring on the top surface may limit the spatial density of chips and imaging. The fourth embodiment, presented later in this disclosure, allows imaging on the backside while still offering many of the benefits cited for the first embodiment.
(20) The ultrasonic transducers can be operated both in pulse-echo and electrical impedance measurement mode as described next:
(21) In pulse-echo mode, the transducers are excited with a pulse, burst signal, or wavelet. The wave travels through the silicon substrate, reflects off the free substrate-air interface and travels back to the originating or neighboring AlN transducers. The amplitude and phase of the reflected pulse constitute the signal, which is received after the ultrasonic pulse travels twice the substrate thickness. When a sample is present, its acoustic impedance modulates the amplitude/phase of the originating pulse, and hence also the received signal. Therefore, the received signal amplitude/phase constitutes the imaging signal, just as in the backside imaging approaches mentioned earlier [Kuo21, Abdelmejeed19, Hoople14]. Phased drive approaches are also possible [Hoople14] to improve SNR, as done routinely in ultrasonic imaging.
(22) In the impedance mode, the magnitude/phase of the input current of the AlN transducer is measured with a forced AC voltage close to the resonance frequency of the stack or, likewise, by measuring the voltage across the transducer with a forced AC current drive. The ratio of voltage to current in phasor form then gives the impedance of the AlN, which is a function of the sample's acoustic impedance since the sample constitutes a termination/boundary condition. If measured in steady state, impedance measurements of this sort capture higher-order modes of the bulk silicon thickness originating from resonance due to multiple reflections off the substrate boundaries. The mean value of these oscillations can be shown to correlate with pulse-echo results. Continuous-wave interrogation can lead to undesired and/or spurious signals due to the addition of multiple reflections from the entirety of the silicon imager chip.
(23) Given the peaks and dips in the impedance-vs-frequency characteristics, which are due to substrate reflections, the impedance measurement approach is more susceptible to resonance frequency variations caused by temperature, array uniformity and other factors. As such, pulse-echo approaches are used more commonly and are also the method of choice for the ultrasonic mode of operation in the multi-modal imagers presented here.
(24) Reflection Mode for Ultrasonic Microscopy
(25) Ultrasonic imaging shown in
(26) Capacitive/Electrical Impedance/Potential Sensing
(27) Capacitive/electrical impedance sensors can be realized using electrodes in close proximity to, or in contact with, samples; hence they are easy to implement in standard CMOS technologies. While both the electrical impedance (magnitude and phase) of the sample and the electrical potential of the sample [Tokuda06] with respect to another reference electrode can be measured, our implementations focus on impedance/capacitance sensing due to its amenability to lock-in approaches and the resulting higher SNRs. Many electrical sensing applications, such as capacitive touch or fingerprint sensors, use a dielectric between the samples and electrodes for passivation and protection from wear. Sensing is usually done at sub-MHz frequencies due to bandwidth considerations and the higher responsivity of biological samples at lower frequencies.
(28) In the described embodiment, the electrodes are on the same metal as the top electrode of the AlN transducers, because it is the metal layer closest to the sample and will have the highest sensitivity. As each pixel has transducers for four different domains, squeezing all four, with their respective row-column switching and readout electronics, within the tight pixel area is challenging. This is done by using the top electrodes of the piezoelectric transducers also as the electrodes for capacitive sensing. One approach to implement this concept is to use dedicated analog multiplexers for different modalities at top and bottom electrodes and is described later.
(29) Optical Sensing
(30) The first embodiment depicted in
(31) Photodiodes (PD) in camera pixels are often envisioned as capturing the incident light; hence they are designed to absorb as much of the incident light as possible based on the wavelength of the light and solid-state characteristics of the junction such as the energy band diagram, doping profiles, junction depths and quantum efficiency. For instance, standard silicon CMOS PD-based imagers, with a bandgap energy of 1.12 eV, do not work well as IR cameras, especially in the far-IR region (3-100 μm), because silicon is mostly transparent to radiation in this band as the photon energy is significantly less than the bandgap energy. Instead, monolithic CMOS-compatible IR cameras on silicon substrates rely on detecting the incident radiation from the temperature change it causes, via special coatings and/or suspended structures, as done in microbolometers or thermopile focal plane arrays [Akin17].
(32) The absorption coefficient of intrinsic silicon from Green et al. is given in
(33) Implementing CMOS on transparent substrates (e.g., Silicon-on-Sapphire (SOS)) is clearly a great solution to avoid the tradeoff mentioned above [Andreou01]. However, fabs offering this technology are not as common as those offering the traditional bulk or SOI CMOS technologies. Therefore, it needs to be shown that the device of the first embodiment is still feasible to implement with these bulk or SOI CMOS technologies. The following discussion addresses this need, using data and specs from the literature, and shows why near-IR wavelengths around λ = 1050 nm present a feasible solution for illumination from the backside and light detection from the CMOS side, as depicted in the first embodiment in
(34) TABLE 2: Photodiode Sensitivity From Backside at Various Illuminations: Sea-Level Solar and IR LED Illumination Cases (values at 1000 / 1050 / 1100 nm)
Wavelength: 1000 / 1050 / 1100 nm (design choice)
Silicon thickness: 500 μm (typical silicon thickness; if a thicker substrate is used, it can be polished to this thickness)
Absorption coefficient: 64 / 16.3 / 3.5 cm⁻¹ [Green95, PVWeb]
CMOS silicon photodiode sensitivity: 0.05 / 0.02 / 0.01 A/W (see, for example, [Hamamatsu21])
Loss factor in silicon: 27.795 / 7.079 / 1.520 dB, i.e. 20·log10(exp(Si thickness × absorption coefficient))
Loss factor in silicon (linear): 0.041 / 0.443 / 0.839, i.e. exp(−Si thickness × absorption coefficient)
Photodiode sensitivity from backside: 2.038 / 8.853 / 8.395 mA/W (photodiode sensitivity resulting in the case of backside illumination)
Solar radiation (direct + circumsolar, sea level): 0.69159 / 0.61802 / 0.46113 W·m⁻²·nm⁻¹ [NREL_ASTMG173]
Photodiode area per pixel: 100 μm² (4% of the cell size for a typical 50 μm × 50 μm cell)
PD response to solar radiation from back side: 140.954 / 547.120 / 387.099 fA/nm (PD response to sea-level solar radiation from the back side per nm of wavelength)
IR LED illumination output power: 5 mW (rated power is 7 mW; 1 mm distance from LED, backside) [IRLED_5010]
IR LED spectral bandwidth: 100 nm [IRLED_5010]
IR LED viewing half angle: 45 degrees (half viewing angle from the LED, assuming a point source) [IRLED_5010]
LED to CMOS PD vertical distance: 1 mm (0.5 mm is the silicon thickness and 0.5 mm is reserved for spacing between the LED and the backside silicon surface)
IR LED illumination power density on CMOS PD (excluding diode loss): 1.25 mW/mm² (based on the viewing angle and assuming 5 mW of the rated 7 mW power at 1 mm away from the CMOS PD [IRLED_5010]; to cover a chip area of 10 mm × 10 mm one would need 5 LEDs under similar operating conditions, i.e. 50 mW total output)
PD response to IR LED radiation from back side: 2.548 / 11.066 / 10.493 pA/nm (PD output current per unit spectral line width)
(35) Table 2 summarizes the main parameters involved in the calculation of the CMOS PD response for both sea-level solar radiation [NREL_ASTMG173] and an IR LED [IRLED_5010] when illuminated from the back of the substrate. The calculation is carried out at three wavelengths across the 100 nm spectral range centered around λ = 1050 nm, using silicon absorption coefficients from
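The backside-sensitivity rows of Table 2 follow from the one-way absorption loss through the wafer. A short sketch, reusing the table's own numbers, reproduces them:

```python
import math

# Reproduces the backside-sensitivity rows of Table 2: 500 um silicon,
# absorption coefficients from [Green95], frontside sensitivities in A/W.
d_cm = 500e-4  # silicon thickness, 500 um expressed in cm

rows = {  # wavelength [nm]: (absorption coeff [1/cm], frontside sensitivity [A/W])
    1000: (64.0, 0.05),
    1050: (16.3, 0.02),
    1100: (3.5, 0.01),
}

backside = {}
for wl, (alpha, s_front) in rows.items():
    loss_db = 20 * math.log10(math.exp(alpha * d_cm))   # one-way loss through wafer
    s_back_mA_W = s_front * math.exp(-alpha * d_cm) * 1e3  # effective backside sensitivity
    backside[wl] = (loss_db, s_back_mA_W)
    print(f"{wl} nm: loss {loss_db:.3f} dB, backside sensitivity {s_back_mA_W:.3f} mA/W")
```

The numbers match Table 2 (27.795 / 7.079 / 1.520 dB and 2.038 / 8.853 / 8.395 mA/W), and they show why 1050 nm is the sweet spot: shorter wavelengths are absorbed by the wafer before reaching the PD, while longer ones are barely absorbed by the PD itself.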
(36) One can take the above calculation a step further to calculate the SNR using state-of-the-art CMOS TIA input-referred noise for a 60-frame-per-second imager. Table 3 lists the parameters used in this calculation. It can be seen that by using advanced low-noise TIA architectures [Salvia09], one can reach theoretical signal-to-noise ratios in excess of 35 dB and 60 dB using solar light and IR LEDs, respectively.
(37) It should be emphasized that the concern for optical sensitivity and the calculations presented above are only valid for the case of backside illumination through opaque or high-absorption substrates like silicon. In the case of transparent substrates such as glass or sapphire, the design is much simpler, since the UV, visible, and IR spectrum will have favorable passbands as allowed by the substrate's transmission characteristic.
(38) TABLE 3: SNR calculation based on solar radiation and IR LED illumination from the backside
Average PD response per unit wavelength to solar radiation (1000 nm < λ < 1100 nm): 358.391 fA/nm (see Table 2)
Average PD response per unit wavelength to IR LED radiation (1000 nm < λ < 1100 nm): 8.036 pA/nm (see Table 2)
Spectral bandwidth: 100 nm (sensitivity and input power assumed uniform)
Frame rate: 60 Hz (design choice)
CMOS TIA input-referred current noise spectral density: 65 fA/√Hz [Salvia09] (the TIA used here occupies 280 μm × 180 μm in 180 nm CMOS; it may be necessary to share TIAs among multiple cells, which might reduce the final frame rate)
CMOS TIA input-referred current noise, integrated: 503.488 fA
SNR in the case of solar radiance: 37.047 dB (see Table 2)
SNR in the case of 1050 nm IR LED illumination: 64.061 dB (see Table 2)
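The SNR entries in Table 3 can be reproduced from its listed parameters: the TIA noise is integrated over the 60 Hz frame bandwidth and the PD signal over the 100 nm spectral bandwidth. A minimal check:

```python
import math

# Reproduces the SNR rows of Table 3 from its listed parameters.
bandwidth_nm = 100.0      # spectral bandwidth, assumed uniform response
frame_rate_hz = 60.0      # frame rate = noise integration bandwidth
noise_density_fA = 65.0   # TIA input-referred noise [Salvia09], fA/sqrt(Hz)

noise_fA = noise_density_fA * math.sqrt(frame_rate_hz)  # integrated noise, ~503.5 fA

def snr_db(signal_fA_per_nm):
    """SNR in dB for a PD current density [fA/nm] integrated over the bandwidth."""
    return 20 * math.log10(signal_fA_per_nm * bandwidth_nm / noise_fA)

snr_solar = snr_db(358.391)   # average solar response from Table 3
snr_led = snr_db(8.036e3)     # 1050 nm IR LED response, 8.036 pA/nm
print(f"solar: {snr_solar:.3f} dB, IR LED: {snr_led:.3f} dB")  # ~37.0 dB / ~64.1 dB
```

This matches the 37.047 dB and 64.061 dB entries, confirming the >35 dB / >60 dB claims in the text.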
Transmission vs Reflectance Mode for Optical Imaging
(39) Optical imaging for the first embodiment, unlike ultrasonic imaging, supports both transmission and reflection, as mentioned before. When the sample is illuminated from the top, the sample is imaged in transmission mode; when the sample is illuminated from the bottom, the sample is imaged mostly in reflection mode, although, depending on the transparency of the sample and the reflection characteristic of the backlight module on top, there can also be a transmission component. It is also possible to turn on both light sources, which should then result in a superposition of both modes.
(40) The layout of the PD detectors requires attention, as they should be able to accept light from both the top and bottom of the substrate. Therefore, one should eliminate metal routing under the PDs to avoid reflections in the case of backside illumination. Ray tracing simulations can help estimate the response of the PD based on the optical parameters of materials, the topography of samples, the gap between the sample and the imager surface, and the PD response characteristics.
(41) Thermal Transduction
(42) CMOS enables nW-to-μW temperature sensors with different tradeoffs in the power-vs-resolution space [Jeong14]. Note that silicon is a good thermal conductor; therefore, without suspended structures [Akin17] or custom SOI wafers, thermal performance is limited. In other words, the thermal imaging sensitivity of the proposed embodiment will be limited by the thermal conductance of the substrate unless more complicated approaches are pursued in the fabrication of the CMOS die. As described in Section 2, glass substrates, with typically two orders of magnitude lower thermal conductivity than silicon, present a great option to improve the thermal response. Thermal actuation is possible using resistor elements at the pixel level.
(43) For the embodiments in this disclosure, we have chosen ultrasonic, capacitive, and optical as the high-priority imaging modes and thermal imaging as secondary. In addition, we opted for a silicon substrate to support easy and inexpensive manufacturability despite the large absorption and the resulting difficulties in optical sensing explained in the previous section. Thermal isolation is yet another functionality for which silicon substrates are poorly suited. This can be acceptable if thermal imaging is considered a lower-priority modality, as in our case. Otherwise, one should investigate less-common post-CMOS implementations to release structures, or use different substrate materials. None of these advanced fab approaches or less common substrates are detailed with respect to our embodiment of
The Second Embodiment
(44) Given the multi-modal functionality introduced by the first embodiment, there might be cases where useful information can be obtained by imaging both sides of the sample. This is especially true for high frequency ultrasound where penetration depth in water may be limited to only a few tens of microns due to the large acoustic loss.
(45) Given that each imager has N_mode = 4 modalities, the combined imager of the second embodiment offers (2·N_mode)² = 64 operation modes involving the TX and RX configuration for each of the four modalities on top and bottom. While some of these modes may not be very useful, the ultrasonic mode in particular expands the capability of the first embodiment significantly. For instance, for soft biological samples, by controlling the thickness of the sample with an external adjustable z-force (orthogonal to the FPA surface) from the hinge/spring structure, transmission-mode imaging of the sample is also possible. Recall that this mode of imaging is not enabled by the first embodiment. Another example application is the characterization of the thermal properties of the sample by heating the sample locally using per-pixel resistor elements, or by the light sources on top and bottom (if the substrate is transparent). This can yield relative information about the local thermal conductivity and resistance of the sample after the response due to thermal diffusion in the substrate is subtracted.
(46) An important advantage of the second embodiment is the ability to enhance the capacitive imaging capabilities. The first embodiment relied on measuring inter-electrode capacitance among lateral structures, meaning that the electric fields involved in capacitance-contrast images are mostly parallel to the imager surface. The second embodiment, on the other hand, allows interrogation of parallel-plate capacitances (with the sample as the dielectric) via electric fields perpendicular to the imager surface. Therefore, the second embodiment paves the way to investigating anisotropic dielectric properties of samples. This is in addition to the time-of-flight and sample speed-of-sound measurement capability introduced by transmission-mode ultrasonic imaging, thanks again to the dual-imager arrangement of the second embodiment.
(47) The second embodiment also helps with the calibration of the imager. By operating the imager with only water as the sample, one can calibrate the incident light by driving one light source and measuring with photodetectors (PD's) on either side. As the optical transmission of water is well known across a wide range of frequencies, the optical loss due to the sample can be measured with a reasonable accuracy. For the acoustic mode, ultrasonic radiation intensity emitted by one side of the imager can be picked up by the ultrasonic transducers on the other side to calibrate the 1-D models with improved accuracy.
The Third Embodiment
(48) One can argue that having the delicate CMOS circuits and wire-bonds on the imaging side increases wear and tear on the passivation or enhances the risk of ESD failures if the SoC is not properly micro-packaged. There is also the argument about the wire bonds causing protrusion around the imaging chip due to finite loop height.
(49) While all the above issues have solutions as proven by millions of capacitive touch sensors in the market today, SoC packaging costs can potentially be lower if the acoustic imaging is done on the backside. To that end, a simpler imager system that images the sample on the back of the CMOS substrate, the third embodiment, is shown in
(50) There are two light sources in
(51) The second light source, on the bottom of the PCBs, consists of LEDs that couple their light mostly into the glass layer bonded on the back side of the imager, between the substrate and the sample. This additional glass layer functions as a light waveguide that carries the light through total internal reflection. When a sample with a significantly higher index of refraction than air, such as water, a finger, or other biological samples, is present, the light will couple out of the glass and illuminate the sample. An external camera facing the sample will be able to gather images using this light. Note that the side LED can be in the visible or IR spectrum, since this modality of imaging only requires glass and air as the transmission medium. However, if a transparent substrate is used and/or IR light is used with a silicon substrate, i.e. if the substrate is transparent for the band of light used, PD detectors on the CMOS side will also be able to pick up images of the sample, and bonding of the glass layer may not be necessary.
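The waveguide outcoupling described above follows Snell's law: light stays trapped in the glass only at incidence angles beyond the critical angle, and raising the outside index (water or tissue instead of air) raises that angle, so previously trapped rays escape exactly where the sample touches. A small sketch with assumed indices (the glass index n ≈ 1.5 is a typical value, not specified in the disclosure):

```python
import math

def critical_angle_deg(n_guide, n_outside):
    """Critical angle for total internal reflection at a guide/outside interface.

    Returns None when the outside index is not lower than the guide index,
    in which case there is no TIR and light always refracts out.
    """
    if n_outside >= n_guide:
        return None
    return math.degrees(math.asin(n_outside / n_guide))

N_GLASS = 1.5  # assumed typical glass index
print(critical_angle_deg(N_GLASS, 1.00))  # vs air:   ~41.8 deg
print(critical_angle_deg(N_GLASS, 1.33))  # vs water: ~62.5 deg
```

Rays hitting the surface between roughly 41.8° and 62.5° are totally reflected against air but leak out wherever water or tissue contacts the glass, which is the selective illumination mechanism described.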
(52) The acoustic modality of the third embodiment works in pulse-echo mode in a similar fashion to the current Geegah imager [Kuo21, Baskota22]. However, as described in Section 2.1.A, it is also possible to operate in the impedance mode, with the associated shortcomings.
The Fourth Embodiment
(53) The acoustic pulses that reach the sample (medium or object) can not only be used to image the sample surface and the imager boundary through reflections, but can also be used to probe bulk properties of the sample in transmission mode through pulses that penetrate the sampled tissue or liquid. This is sometimes desired, for example to measure the speed of sound in a liquid or solid. As a specific example, as one goes deeper into the ocean, the speed of sound changes due to variations in pressure, temperature, and salinity. In the case of liquids, the addition of the sample can sometimes lead to biochemical events such as corrosion or biofilm formation. There is therefore a need to monitor the formation of any thin surface layer while at the same time measuring the time of flight in the sample. Any thin-layer thickness measurement and its two-dimensional growth history can be used to characterize the thin-film formation. Furthermore, parts of the imager itself, such as the backlight unit and/or LED light sources, can be used as reflectors to perform time-of-flight (TOF) measurements.
(54)
(55) In addition to the above functionalities, there may be cases where the light source can serve to modify or treat the sample or target medium. For example, in the case of speed of sound measurement in ocean, it may be necessary to limit or stop the formation of the biofilm on the imager surfaces, potentially using light exposure from the UV LEDs on both sides of the imager. Hence, the UV light sources on top or below the multi-modal imaging chip can be used to provide UV radiation to prevent formation of or remove the cells that might adhere to the imager or imaged surfaces. This UV light can act as an anti-fouling agent to keep the surfaces clean over time. This removal of the thin film controlled by activation of the UV-light source can be used to monitor the presence of biofilm forming bacteria and fungi in the liquid that is being imaged. As explained above, the surface of the same UV light source can be used also as the acoustic reflector for acoustic imaging in the transmission mode and ToF measurements.
(56) As in earlier embodiments, multiple transducers per pixel, or per plurality of pixels, can be used for cross-domain imaging such as photoacoustic or acousto-optic imaging. Thermal or electrical/capacitive transducers on the CMOS die may have inferior sensitivity due to their distance from the sample or thermal leakage in the substrate. Alternative solutions involve implementing certain types of transducers on the backside of the CMOS die and using through-silicon via (TSV) interconnects. Note, however, that this approach limits the density of transducers and increases complexity and cost significantly. A separately fabricated layer of electrodes can also be hybrid-bonded on the backside of the silicon die for electrical/capacitive imaging. The density of interconnects, either through the silicon die itself or through PCB vias and wire bonds, may be a limiting factor for these approaches. For these reasons, the cost per imaged unit area of sample can be significantly higher, making these alternative solutions less attractive.
OTHER FEATURES
(57) Transparent Substrate Option
(58) All the embodiments described in this disclosure are designed to work with the ubiquitous CMOS substrate, silicon, by a careful choice of the center wavelength, around λ = 1050 nm. As highlighted multiple times, the optical imaging modes described in this disclosure can benefit immensely from transparent substrates, since they allow operation in the visible range and a much wider selection of light sources to be used as backlight modules, LEDs or VCSELs. Those who are experts in the field can come up with other spectral ranges if substrates other than silicon are used, without changing the scope of this disclosure. Likewise, more sensitive electronics can also enable higher SNR at a given wavelength, hence extending the range of the operational electromagnetic spectrum.
(59) Electrode sharing for Ultrasonic and Capacitive Transduction
(60) Trying to pack more and more transducers into the unit-pixel of a focal plane array (FPA), with their own switching/drive/readout electronics is one of the most challenging parts of the design of a multi-modal imager. A sample subarray of such an array is shown in
(61) For a radial ultrasonic transducer of radius a, the full-width-half-maximum (FWHM) is given as:
(63) The discussion above shows that high-density FPAs with pixel sizes on the order of 50 μm already push the limits of what can fit in a given pixel area for either ultrasonic or capacitive transduction, let alone fitting both within the same pixel. To ease this tradeoff, we propose using the electrodes of the piezoelectric transducers also for capacitive transduction, namely as TX and RX channels for capacitive sensing.
(64) The basic architecture for the electrical connections to use the same electrodes of the piezoelectric device also for capacitive sensing is shown in
(65) Multiplexing can be done via low-resistance complementary transmission gates or other methods well known in the art. Note that there is a switch that shorts the top and bottom electrodes to make sure the piezoelectric device is not excited while its electrodes are being used for electrostatic TX or RX functionality. The control signals used to set the modality of each pixel (i.e. the gating signals for the multiplexing switches) can be global signals along the rows and columns of the focal plane array (FPA) shown in
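One way to picture the shared-electrode multiplexing is as a per-pixel lookup from modality to switch settings. The sketch below is purely illustrative; the node and mode names (tx_driver, rx_amp, US_TX, etc.) are hypothetical, not from the disclosure, and the shorting entry mirrors the switch described above that ties the electrodes together during electrostatic operation:

```python
# Hypothetical per-pixel switch configuration for the shared-electrode scheme.
# Each mode maps to (top-electrode node, bottom-electrode node, short top-bottom?).
# The short keeps the piezoelectric layer unexcited while its electrodes are
# borrowed for electrostatic (capacitive) TX/RX or used as a GND plane.
PIXEL_MODES = {
    #  mode      top electrode  bottom electrode  short?
    "US_TX":   ("tx_driver",    "gnd",            False),
    "US_RX":   ("rx_amp",       "gnd",            False),
    "ES_TX":   ("tx_driver",    "tx_driver",      True),
    "ES_RX":   ("rx_amp",       "rx_amp",         True),
    "GND":     ("gnd",          "gnd",            True),
}

def configure_pixel(mode):
    """Return the switch settings for one pixel; in silicon these would be
    gate signals driving complementary transmission gates."""
    top, bottom, short = PIXEL_MODES[mode]
    return {"top": top, "bottom": bottom, "short_tb": short}

cfg = configure_pixel("ES_RX")
assert cfg["short_tb"]  # piezo shorted while the electrodes sense capacitively
```

Broadcasting such mode words along FPA rows and columns is what lets global control lines set the modality of each pixel.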
(66) For cross-domain imaging, such as acoustic excitation & capacitive pickup, or electrostatic excitation & acoustic pickup, it may be necessary for RX pixels to sense at the same time the TX pixels are transmitting. For these cases, one can conceive of situations in which one pixel transmits acoustically while the neighboring pixels sense in capacitive mode for multi-modal imaging.
(67) Note that during purely acoustic imaging, there is a time delay between the excitation of the pulse during TX and receiving after reflection from the back of the silicon. This is the standard time-of-flight (ToF) delay determined approximately by the thickness of the substrate and the speed of sound in the substrate. It is therefore possible to use the same pixel for both TX and RX by connecting the top and bottom electrodes to proper TX and RX electronics, respectively, using the switches in
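The ToF window that permits the same pixel to switch from TX to RX can be estimated directly from the substrate thickness and the sound speed. The sketch below assumes a longitudinal sound speed of about 8433 m/s in silicon and an example 500 μm substrate; both numbers are illustrative assumptions.

```python
# Round-trip pulse-echo delay through the substrate (illustrative values).
C_SILICON = 8433.0  # m/s, approximate longitudinal sound speed in silicon

def round_trip_tof(thickness_m: float) -> float:
    """Pulse-echo delay: down through the substrate and back."""
    return 2.0 * thickness_m / C_SILICON

delay = round_trip_tof(500e-6)  # ~119 ns for an example 500 um substrate
```

A delay on this order leaves time for the per-pixel switches to reconfigure the electrodes from the TX driver to the RX electronics before the echo returns.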
(68) Another feature offered by
(69) Some of the US and ES TX and RX circuits that can be used with the above architecture are exemplified in
(70) In
(71) TX-RX Configurability: SNR, Frame Rate Optimization, Multi-modal Scan Configurations, and Circuit Sharing among Pixels
(72) Many current capacitive touch sensors that use mutual and/or self-capacitance detection rely on a fixed configuration of TX & RX electrodes. Since most capacitive touch sensors are designed to interact with a finger over large areas such as laptop or cell-phone screens, the pixel size is in the mm scale and does not need to change during the lifetime of the device. As for the channel count of TX and RX electrodes, many standard off-the-shelf touch controllers allow for a few dozen TX and RX channels. For example, one of the high-node-count (product of the number of TX and RX channels) advanced touch controllers supports 32 TX and 52 RX lines [Microchip1665XT]. On the other hand, for a general-purpose imager described in this disclosure, with potentially more than 100 pixels on each side (>10.sup.4 TX/RX channels), it might be necessary to experiment with different TX/RX configurations to improve SNR, or to change the effective scan area and location to increase frame rate.
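The node-count comparison above is simply the product of TX and RX line counts. The one-line helper below checks the scaling; the 128×128 FPA size is an assumed example, not a figure from the disclosure.

```python
def node_count(tx_lines: int, rx_lines: int) -> int:
    """Mutual-capacitance node count = product of TX and RX line counts."""
    return tx_lines * rx_lines

touch_controller = node_count(32, 52)    # 1664 nodes for the cited touch controller
imager_fpa = node_count(128, 128)        # 16384 nodes for an example 128x128 FPA
```

The roughly tenfold jump in node count over a high-end touch controller is what motivates the reconfigurable, per-pixel TX/RX architecture rather than fixed global TX/RX lines.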
(73)
(74) The final example in
(75) Scan Plans
(76) Sequencing of the multi-modal imaging, and synchronization among the different TX and RX cycles for each mode, is critical. Any CMOS implementation will either need an on-chip scan controller or will need to work with an external FPGA or microcontroller to manage the scan. While this disclosure will not detail the specifics of scan plans, any implementation needs to support both simultaneous and sequenced TX-RX operation among the different modalities while also controlling the per-pixel switches shown in
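A scan plan of the kind described above can be represented as a simple sequence of steps, each naming a TX modality, an RX modality, the pixels involved, and whether TX and RX run simultaneously. The sketch below is purely hypothetical; field names and modality labels are illustrative, not from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ScanStep:
    tx_mode: str                       # e.g. "US", "ES", "OPT", "THERM"
    rx_mode: str
    tx_pixels: List[Tuple[int, int]]   # (row, col) addresses driven this step
    rx_pixels: List[Tuple[int, int]]   # (row, col) addresses sensing this step
    simultaneous: bool = True          # RX senses while TX drives (cross-domain)

@dataclass
class ScanPlan:
    steps: List[ScanStep] = field(default_factory=list)

# Example: one pixel transmits acoustically while its four neighbors
# sense capacitively at the same time (cross-domain imaging).
plan = ScanPlan(steps=[
    ScanStep("US", "ES",
             tx_pixels=[(4, 4)],
             rx_pixels=[(3, 4), (5, 4), (4, 3), (4, 5)]),
])
```

An on-chip scan controller, FPGA, or microcontroller would walk such a list, translating each step into the row/column gating signals for the per-pixel switches.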
(77) Spectroscopy and Time-Resolved Imaging
(78) Both US and ES methods are sinusoidal-drive in nature and can be carried out over a range of excitation frequencies, as supported by the bandwidth of the system. For the configurations shown in
(79) Heterodyne Techniques and Filtering to Improve SNR
(80) In addition to changing the excitation frequency, heterodyne techniques that modulate the drive amplitude of the TX excitation source and then down-convert using the carrier frequency to extract the in-phase and out-of-phase components can be employed at TX/RX. As is well known to those experienced in the field, this allows narrowband operation in the presence of background interferers or noise and improves the SNR. Similarly, one can also use various filters to reject out-of-band noise in the different imaging modalities. It should also be emphasized that modulation of the RX or TX fields is inherent in many imaging modalities, such as acousto-optic imaging, where some of the light is modulated by an ultrasonic wave inside the biological tissue and carries the ultrasonic frequency. [Yao2000]
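The I/Q extraction described above can be sketched numerically: multiply the received signal by quadrature copies of the carrier and average over full cycles (an ideal low-pass), which recovers the amplitude and phase of the tone. The signal parameters below are arbitrary example values.

```python
import math

def iq_demodulate(samples, fs, f_carrier):
    """Mix with quadrature carriers and average (ideal low-pass) to get I/Q."""
    n = len(samples)
    i_acc = q_acc = 0.0
    for k, x in enumerate(samples):
        t = k / fs
        i_acc += x * math.cos(2 * math.pi * f_carrier * t)
        q_acc += x * math.sin(2 * math.pi * f_carrier * t)
    # Factor 2/n recovers the tone amplitude from the mixer products.
    return 2 * i_acc / n, 2 * q_acc / n

# Synthetic RX signal: 1 kHz carrier, amplitude 0.5, phase 30 degrees.
fs, fc, amp, phase = 100_000.0, 1_000.0, 0.5, math.radians(30)
sig = [amp * math.cos(2 * math.pi * fc * k / fs + phase) for k in range(10_000)]
i, q = iq_demodulate(sig, fs, fc)
amplitude = math.hypot(i, q)         # ~0.5
recovered_phase = math.atan2(-q, i)  # ~30 degrees
```

Averaging over an integer number of carrier cycles rejects the 2f mixer product, which is the narrowband noise-rejection property noted in the text; out-of-band interferers average toward zero in the same way.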
(81) Some features of the described multi-imager embodiments are summarized below:
A compact, single-chip solution with per-pixel transduction circuitry for multi-modal imaging that can be used for excitation and sensing across different domains as well as within a single domain
Cross-domain imaging: excite in one domain and image in the other
Acoustic, electrical, optical, and thermal sensors per pixel for multi-domain imaging
No need for registration among different modalities, as compared to multi-chip solutions, where registration errors between modalities such as ultrasound and optical can be >1 mm and require additional calibration
Sharing of the same readout electronics, such as ADC, mixed-signal processing, row-column decoders, and multiplexer circuits, across different modalities to save power and reduce the transistor count
High value of innovation per mm.sup.2 of CMOS: N.sub.mode.sup.2 microscopes in a single device for the case of the first imager
Single-chip, cost-effective solution
Using an FPA for ultrasound and other modalities eliminates the motorized position control and slow mechanical scans used in some other implementations, such as surface acoustic microscopes with a single transducer
Visible or fluorescence microscopy possible without optical fibers
Spectroscopy measurements for each of the modalities by changing the frequency of excitation and/or detection for TX and RX, respectively
Flexible drive electronics allow sharing of transducer electrodes among different modalities, allowing more compact pixel layouts
(82) While various embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, embodiments may be practiced otherwise than as specifically described and claimed. Embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
(83) The above-described embodiments of the described subject matter can be implemented in any of numerous ways. For example, some embodiments may be implemented using hardware, software or a combination thereof. When any aspect of an embodiment is implemented at least in part in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single device or computer or distributed among multiple devices/computers.
6. References Incorporated by Reference
(84)
[Huang15] Huang, Xiwei, et al. "A dual-mode large-arrayed CMOS ISFET sensor for accurate and high-throughput pH sensing in biomedical diagnosis." IEEE Transactions on Biomedical Engineering 62.9 (2015): 2224-2233.
[Huang14] Huang, Xiwei. "CMOS Multimodal Sensor Based Lab-on-a-Chip System for Personalized Bio-Imaging Diagnosis." Ph.D. Thesis, 2014.
[Tokuda06] Tokuda, Takashi, et al. "A CMOS image sensor with optical and potential dual imaging function for on-chip bioscientific applications." Sensors and Actuators A: Physical 125.2 (2006): 273-280.
[Mela21] Mela, Christopher, Francis Papay, and Yang Liu. "Novel multimodal, multiscale imaging system with augmented reality." Diagnostics 11.3 (2021): 441.
[Mezil20] Mezil, Sylvain, et al. "Single-shot hybrid photoacoustic-fluorescent microendoscopy through a multimode fiber with wavefront shaping." Biomedical Optics Express 11.10 (2020): 5717-5727.
[Yao2000] Yao, Gang, and Lihong V. Wang. "Theoretical and experimental studies of ultrasound-modulated optical tomography in biological tissue." Applied Optics 39.4 (2000): 659-664.
[Akin17] Tankut, Firat, et al. "An 80×80 microbolometer type thermal imaging sensor using the LWIR-band CMOS infrared (CIR) technology." Infrared Technology and Applications XLIII. Vol. 10177. International Society for Optics and Photonics, 2017.
[Green08] Green, Martin A. "Self-consistent optical parameters of intrinsic silicon at 300 K including temperature coefficients." Solar Energy Materials and Solar Cells 92.11 (2008): 1305-1310.
[Kuo21] Kuo, Justin, et al. "Gigahertz Ultrasonic Imaging of Nematodes in Liquids, Soil, and Air." 2021 IEEE International Ultrasonics Symposium (IUS). IEEE, 2021.
[Abdelmejeed19] Abdelmejeed, Mamdouh, et al. "Monolithic 180 nm CMOS Controlled GHz Ultrasonic Impedance Sensing and Imaging." 2019 IEEE International Electron Devices Meeting (IEDM). IEEE, 2019.
[Hoople14] J. Hoople, J. Kuo, S. Ardanuç and A. Lal. "Chip-scale reconfigurable phased-array sonic communication." 2014 IEEE International Ultrasonics Symposium, 2014, pp. 479-482, doi: 10.1109/ULTSYM.2014.0119.
[Baskota22] Baskota, Anuj, Justin Kuo, and Amit Lal. "Gigahertz Ultrasonic Multi-Imaging of Soil Temperature, Morphology, Moisture, and Nematodes." 2022 IEEE 35th International Conference on Micro Electro Mechanical Systems (MEMS). IEEE, 2022.
[Cobbold06] Cobbold, Richard S. C. Foundations of Biomedical Ultrasound. Oxford University Press, 2006.
[Hamamatsu21] S15908-512Q CMOS Linear Image Sensor Datasheet, Hamamatsu. https://www.hamamatsu.com/content/dam/hamamatsu-photonics/site/documents/99_SALES_LIBRARY/ssd/s15908-512q_etc_kmpd1239e.pdf
[NREL_ASTMG173] 2000 ASTM Standard Extraterrestrial Spectrum Reference E-490-00. https://www.nrel.gov/grid/solar-resource/spectra-astm-e490.html
[IRLED_5010] MTE5010-995-IR Infrared Emitter Datasheet, Marktech Optoelectronics. https://marktechopto.com/pdf/products/datasheet/MTE5010-995-IR,.pdf
[Salvia09] Salvia, James, et al. "A 56MΩ CMOS TIA for MEMS applications." 2009 IEEE Custom Integrated Circuits Conference. IEEE, 2009.
[Andreou01] A. G. Andreou et al. "Silicon on sapphire CMOS for optoelectronic microsystems." IEEE Circuits and Systems Magazine, vol. 1, no. 3, pp. 22-30, 2001, doi: 10.1109/7384.963464.
[Jeong14] S. Jeong, Z. Foo, Y. Lee, J. Sim, D. Blaauw and D. Sylvester. "A Fully-Integrated 71 nW CMOS Temperature Sensor for Low Power Wireless Sensor Nodes." IEEE Journal of Solid-State Circuits, vol. 49, no. 8, pp. 1682-1693, Aug. 2014, doi: 10.1109/JSSC.2014.2325574.
[Laudereau15] Laudereau, Jean-Baptiste, et al. "Multi-modal acousto-optic/ultrasound imaging of ex vivo liver tumors at 790 nm using a Sn2P2S6 wavefront adaptive holographic setup." Journal of Biophotonics 8.5 (2015): 429-436.
[Beard11] Beard, Paul. "Biomedical photoacoustic imaging." Interface Focus 1.4 (2011): 602-631.
[Steinberg19] Steinberg, Idan, et al. "Photoacoustic clinical imaging." Photoacoustics 14 (2019): 77-98.
[Microchip1665XT] maXTouch 1664-node Touchscreen Controller Product Brief, Microchip. https://ww1.microchip.com/downloads/en/DeviceDoc/40001956A.pdf