ULTRASONIC IMAGING DEVICE AND METHOD FOR IMAGE ACQUISITION IN THE ULTRASONIC DEVICE
20220237940 · 2022-07-28
Assignee
Inventors
CPC classification
G06F3/0436
PHYSICS
G01S7/52077
PHYSICS
G06V10/25
PHYSICS
G01S15/8995
PHYSICS
A61B8/0858
HUMAN NECESSITIES
G01S15/8927
PHYSICS
International classification
Abstract
Method for image acquisition in an ultrasonic biometric imaging device, the device comprising a plurality of ultrasonic transducers arranged at a periphery of a touch surface along one side of the touch surface, the method comprising: determining a target area of the touch surface; identifying a blocking feature preventing ultrasonic wave propagation in the touch surface such that the blocking feature creates a blocked region in the touch surface where image acquisition is not possible; determining that the target area at least partially overlaps the blocked region; dividing the transducers into a first subset and a second subset, the first and second subset being defined in that ultrasonic waves emitted by the respective subset reach the target area on a first and second side of the blocking feature; and capturing an image of the biometric object using transmit and receive beamforming.
Claims
1. A method for image acquisition in an ultrasonic biometric imaging device, the device comprising a plurality of ultrasonic transducers arranged at a periphery of a touch surface along one side of the touch surface, the method comprising: determining a target area of the touch surface; identifying a blocking feature preventing ultrasonic wave propagation in the touch surface such that the blocking feature creates a blocked region in the touch surface where image acquisition is not possible; determining that the target area at least partially overlaps the blocked region; dividing the plurality of transducers into a first subset and a second subset, the first subset being defined in that ultrasonic waves emitted by the first subset reach the target area on a first side of the blocking feature and the second subset being defined in that ultrasonic waves emitted by the second subset reach the target area on a second side of the blocking feature; controlling the first and second subset of transducers to emit a first and a second ultrasonic beam towards the target area using transmit beamforming, the ultrasonic beams being defocused or unfocused ultrasonic beams; by the ultrasonic transducers, receiving reflected ultrasonic echo signals defined by received radio frequency data (RF-data), the reflected ultrasonic echo signals resulting from interactions with an object in contact with the touch surface at the target area; subtracting background RF-data from the received RF-data to form a clean image; performing receive side beamforming to form a reconstructed image from the clean image; and for a plurality of reconstructed images resulting from a plurality of emitted ultrasonic beams for a given target area, adding the plurality of reconstructed images to form a summed image.
2. The method according to claim 1, wherein forming a defocused beam comprises performing transmit beamforming to form a virtual point source located behind the transducers and outside of the touch surface.
3. The method according to claim 1, further comprising emitting a respective first and second directional defocused beam by the first and second subset of transducers such that the blocked region is minimized.
4. The method according to claim 1, further comprising emitting a respective first and second directional defocused beam by the first and second subset of transducers, wherein the first and second directional defocused beam has the same shape.
5. The method according to claim 1, further comprising controlling the ultrasonic transducers to emit a defocused beam or an unfocused beam based on a speed of sound in the touch surface.
6. The method according to claim 1, wherein the touch surface is a surface of a display panel and the blocking feature is an opening in the display panel.
7. The method according to claim 1, wherein identifying a blocking feature comprises retrieving stored information describing properties of the blocking feature.
8. The method according to claim 1, wherein identifying a blocking feature comprises forming an image of at least a portion of the touch surface, detecting a blocking feature in the formed image and determining properties of the blocking feature based on the formed image.
9. The method according to claim 1, wherein emitting a first and a second ultrasonic beam towards the target area using transmit beamforming comprises emitting a first and a second ultrasonic beam having the largest possible angles in relation to the blocking feature.
10. The method according to claim 1, wherein determining the target area comprises receiving information describing the target area from a touch sensing arrangement configured to detect a location of an object in contact with the touch surface.
11. An ultrasonic biometric imaging device comprising: a cover structure comprising a touch surface; a plurality of ultrasonic transducers arranged at a periphery of the touch surface, the plurality of ultrasonic transducers being configured to emit a defocused or unfocused ultrasonic beam towards a target area using transmit beamforming and to receive reflected ultrasonic echo signals defined by received radio frequency data (RF-data), the reflected ultrasonic echo signals resulting from reflections by an object in contact with the touch surface at the target area; and a biometric imaging control unit configured to: determine a target area of the touch surface; identify a blocking feature preventing ultrasonic wave propagation in or at the touch surface such that the blocking feature creates a blocked region in the touch surface where image acquisition is not possible; determine that the target area at least partially overlaps the blocked region; divide the plurality of transducers into a first subset and a second subset, the first subset being defined in that ultrasonic waves emitted by the first subset reach the target area on a first side of the blocking feature and the second subset being defined in that ultrasonic waves emitted by the second subset reach the target area on a second side of the blocking feature; control the first and second subset of transducers to emit a first and a second ultrasonic beam towards the target area using transmit beamforming, the ultrasonic beams being defocused or unfocused ultrasonic beams; by the ultrasonic transducers, receive reflected ultrasonic echo signals defined by received RF-data, the reflected ultrasonic echo signals resulting from interactions with an object in contact with the touch surface at the target area; subtract background RF-data from the received RF-data to form a clean image; perform receive side beamforming to form a reconstructed image from the clean image; and for a plurality of reconstructed images resulting from a plurality of emitted ultrasonic beams for a given target area, add the plurality of reconstructed images to form a summed image.
12. The ultrasonic imaging device according to claim 11, wherein the blocking feature preventing ultrasonic wave propagation is a cutout in the cover structure located at the edge of the cover structure, and wherein the first subset of ultrasonic transducers is located at a first side of the cutout and the second subset of ultrasonic transducers is located at a second side of the cutout, opposite the first side.
13. The ultrasonic imaging device according to claim 11, wherein the blocking feature preventing ultrasonic wave propagation is an opening in the cover structure located at the edge of the cover structure.
14. The ultrasonic imaging device according to claim 11, wherein the blocking feature preventing ultrasonic wave propagation is a crack in the cover structure located at the edge of the cover structure.
15. The ultrasonic imaging device according to claim 11, wherein the cover structure is a display glass.
16. The ultrasonic imaging device according to claim 11, wherein the plurality of transducers are arranged in a single row on a single side of the touch surface.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0033] These and other aspects of the present invention will now be described in more detail, with reference to the appended drawings showing an example embodiment of the invention.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0039] In the present detailed description, various embodiments of the system and method according to the present invention are mainly described with reference to a biometric imaging device adapted to form an image of a finger placed on a display glass of a smartphone. It should however be noted that the described technology may be implemented in a range of different applications.
[0041] The display arrangement further comprises a plurality of ultrasonic transducers 106 connected to the cover structure 102 and located at the periphery of the cover structure 102. Accordingly, the ultrasonic transducers 106 are here illustrated as being non-overlapping with an active sensing area of the biometric imaging device formed by the ultrasonic transducers 106 and the cover structure 102. However, the ultrasonic transducers 106 may also be arranged and configured such that they overlap an active sensing area.
[0042] The distribution of transducers may for example be selected based on the size of the desired area. For a typical display in a smartphone or the like, it may for example be sufficient to arrange transducers along the top and bottom edges of the display to achieve full area coverage.
[0044] The pitch of the transducers may be between half the wavelength of the emitted signal and 1.5 times the wavelength, where the wavelength of the transducer is related to the size of the transducer. For an application where it is known that beam steering will be required, the pitch may preferably be half the wavelength so that grating lobes are located outside of an active imaging area. A pitch approximately equal to the wavelength of the emitted signal may be well suited for applications where no beam steering is required since the grating lobes will be close to the main lobe. The wavelength of the transducer should be approximately equal to the size of the features that are to be detected, which in the case of fingerprint imaging means using a wavelength in the range of 50-300 μm. An ultrasonic transducer 106 can have different configurations depending on the type of transducer and also depending on the specific transducer package used. Accordingly, the size and shape of the transducer as well as electrode configurations may vary. It is furthermore possible to use other types of devices for the generation of ultrasonic signals such as micromachined ultrasonic transducers (MUTs), including both capacitive (cMUTs) and piezoelectric types (pMUTs).
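As a rough illustration of the pitch and wavelength relationships above, the pitch window and the operating frequency implied by a chosen wavelength can be sketched as follows (the wave speed of 3000 m/s in the cover glass is an assumed value for illustration, not taken from the description):

```python
# Illustrative pitch/frequency calculation for the transducer array.
# Assumed: wave speed c in display glass; wavelength chosen within the
# 50-300 um range suitable for fingerprint features.

def pitch_bounds_um(wavelength_um: float) -> tuple[float, float]:
    """Pitch window of lambda/2 .. 1.5*lambda described above, in micrometres."""
    return 0.5 * wavelength_um, 1.5 * wavelength_um

c = 3000.0             # assumed wave speed in the cover glass [m/s]
wavelength_um = 100.0  # within the 50-300 um range for fingerprint imaging

frequency_mhz = c / (wavelength_um * 1e-6) / 1e6
lo, hi = pitch_bounds_um(wavelength_um)
print(f"frequency ~ {frequency_mhz:.0f} MHz, pitch between {lo:.0f} and {hi:.0f} um")
```

A pitch at the lower bound (half the wavelength) would be the choice for beam-steering applications, per the paragraph above.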
[0045] Moreover, suitable control circuitry 114 is required for controlling the transducer to emit an acoustic signal having the required properties with respect to e.g. amplitude, pulse shape and timing. However, such control circuitry for ultrasonic transducers is well known to the skilled person and will not be discussed in detail herein.
[0046] Each ultrasonic transducer 106 is configured to transmit an acoustic signal S.sub.T propagating in the cover structure 102 and to receive a reflected ultrasonic signal S.sub.R having been influenced by an object 105, here represented by a finger 105, in contact with the sensing surface 104.
[0047] The acoustic interaction signals S.sub.R are presently believed to mainly be due to so-called contact scattering at the contact area between the cover structure 102 and the skin of the user (finger 105). The acoustic interaction at the point of contact between the finger 105 and the cover plate 103 may also give rise to refraction, diffraction, dispersion and dissipation of the acoustic transmit signal S.sub.T. Accordingly, the interaction signals S.sub.R are advantageously analyzed based on the described interaction phenomena to determine properties of the finger 105 based on the received ultrasonic signal. For simplicity, the received ultrasonic interaction signals S.sub.R will henceforth be referred to as reflected ultrasonic echo signals S.sub.R.
[0048] Accordingly, the ultrasonic transducers 106 and associated control circuitry 114 are configured to determine properties of the object based on the received ultrasonic echo signal S.sub.R. The plurality of ultrasonic transducers 106 are connected to and controlled by ultrasonic transducer control circuitry 114. The control circuitry 114 for controlling the transducers 106 may be embodied in many different ways. The control circuitry 114 may for example be one central control unit 114 responsible for determining the properties of the acoustic signals S.sub.T to be transmitted, and for analyzing the subsequent interaction signals S.sub.R. Moreover, each transducer 106 may additionally comprise control circuitry for performing specified actions based on a received command.
[0049] The control unit 114 may include a microprocessor, microcontroller, programmable digital signal processor or another programmable device. The control unit 114 may also, or instead, include an application specific integrated circuit, a programmable gate array or programmable array logic, a programmable logic device, or a digital signal processor. Where the control unit 114 includes a programmable device such as the microprocessor, microcontroller or programmable digital signal processor mentioned above, the processor may further include computer executable code that controls operation of the programmable device. The functionality of the control circuitry 114 may also be integrated in control circuitry used for controlling the display panel or other features of the smartphone 100.
[0051] The first step comprises determining 200 a target area 107 of the touch surface 104. Determining the target area 107 may comprise receiving information describing the target area 107 from a touch sensing arrangement configured to detect a location of an object in contact with the touch surface. The touch sensing arrangement may for example be a capacitive touch panel in a display panel or it may be formed by the ultrasonic transducers.
[0052] The following step comprises identifying 202 a blocking feature 302 preventing ultrasonic wave propagation in the touch surface 104 such that the blocking feature 302 creates a blocked region 304 in the touch surface 104 where image acquisition is not possible. The blocked region is thus not a region empty of ultrasonic waves; rather, it is defined as the region where the resolution of the resulting image is insufficient for accurately determining the sought biometric properties, such as ridges and valleys of a fingerprint. Accordingly, the extent of the blocked region 304 may vary depending on the resolution requirement for a given application.
[0053] In
[0054] Once the properties of the blocking feature 302 have been determined, it is determined 204 that the target area 107 at least partially overlaps the blocked region 304. If there is no overlap, there is no need for adjusting the emitted ultrasonic beam or beams based on the blocking feature. However, biometric imaging in general may advantageously use the described method comprising transmit and receive beamforming.
[0055] If it is determined that there is an overlap between the blocked region 304 and the target area 107 as illustrated in
[0056] The next step, illustrated in
[0057] By means of the transmit beamforming, one or more virtual point sources 314, 316 are formed outside of the cover glass 102 and behind the respective rows of transducers 306, 308. Thereby, defocused ultrasonic beams 310, 312 having a conical shape are formed. As a result, diffraction of the two ultrasonic beams 310, 312 takes place in a region which is not directly in line of sight from the transducers, effectively reducing the size of the blocked region.
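A minimal sketch of how such a virtual point source behind the transducer row can be realised through per-element transmit delays (the array geometry, wave speed and 2 mm source offset are illustrative assumptions, not values from the description):

```python
import numpy as np

# Transmit delays for a defocused (diverging) beam: each element fires with a
# delay proportional to its distance from a virtual point source placed behind
# the row, so the emitted wavefront appears to originate at that source.

def diverging_tx_delays(elem_x: np.ndarray, virtual_src: np.ndarray, c: float) -> np.ndarray:
    """Per-element firing delays [s] for a virtual source behind the array."""
    elem_pos = np.column_stack([elem_x, np.zeros_like(elem_x)])  # elements at y = 0
    dist = np.linalg.norm(elem_pos - virtual_src, axis=1)
    # the element closest to the virtual source fires first (zero delay)
    return (dist - dist.min()) / c

c = 3000.0                                      # assumed wave speed [m/s]
elem_x = np.arange(16) * 50e-6                  # 16 elements, 50 um pitch
virtual_src = np.array([elem_x.mean(), -2e-3])  # 2 mm behind the row
delays = diverging_tx_delays(elem_x, virtual_src, c)
```

With the virtual source centred behind the row, the delay profile is symmetric and the outermost elements fire last, producing the conical diverging beam described above.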
[0058] The directionality of the ultrasonic beam is limited by the opening angles of the ultrasonic transducers. The opening angle is inversely proportional to the operating frequency of the transducers such that a higher frequency of the emitted ultrasonic wave leads to a narrower opening angle.
[0059] Next, the ultrasonic transducers receive 210 reflected ultrasonic echo signals defined by the received RF-data. As discussed above, the reflected ultrasonic echo signals S.sub.R result from interactions with an object in contact with the touch surface at the target area.
[0060] In order to more clearly distinguish the echo signal S.sub.R in the received RF-data, background RF-data is subtracted 212 from the received RF-data to form what is here referred to as a clean image. The subtraction of the background RF-data from the acquired RF-data can be done either in the raw RF-data or after a receive side beamforming procedure which will be described in further detail below. For subtraction of background RF-data in the RF-data domain, the response of each individual transducer element is stored and a corresponding background measurement for each transducer element is subtracted from the acquired RF-data. It should be noted that all operations are performed in the digital domain, meaning that AD-conversion is performed before subtraction of the background RF-data, and that the background RF-data needs to be available in digital form. The resulting image after subtraction of background RF-data is herein referred to as a clean image.
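The RF-domain background subtraction described above can be sketched as follows (the array shapes and the synthetic echo are illustrative assumptions; in practice one stored background trace per transducer element is subtracted from the corresponding acquired trace):

```python
import numpy as np

# Background subtraction in the raw RF-data domain: all data is already
# digitised, and the stored per-element background measurement is subtracted
# sample-by-sample from the acquired RF-data to form the "clean image".

def subtract_background(rf: np.ndarray, background: np.ndarray) -> np.ndarray:
    """rf, background: (n_elements, n_samples) arrays of digitised RF-data."""
    if rf.shape != background.shape:
        raise ValueError("background must match acquired RF-data shape")
    return rf - background

rng = np.random.default_rng(0)
background = rng.normal(size=(8, 256))        # stored per-element background
echo = np.zeros((8, 256)); echo[:, 100] = 1.0  # synthetic finger echo
clean = subtract_background(background + echo, background)
```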
[0061] The background RF-data may be acquired in different ways. The background data may for example be acquired by capturing an image of the entire touch surface either at regular intervals or when it is anticipated that a finger will be placed on the touch surface, for example if prompted by an application in the device. However, capturing an image of the touch surface requires acquiring and storing large amounts of data and if possible, it is desirable to only acquire background data of a subarea of the touch surface corresponding to the target area. This in turn requires prior knowledge of where on the touch surface the finger will be placed.
[0062] In a device comprising a capacitive touch screen, it can be possible to use a so-called hover mode of the capacitive touch screen to determine the target are before the actual contact takes place. In the hover mode, the proximity of a finger can be detected, the target area can be anticipated and background RF-data for the anticipated target are can be acquired prior to image acquisition. It would however in principle also be possible to acquire the background noise after the touch has taken place, i.e. when the user removes the finger, even though this may limit the possible implementations of the image acquisition device.
[0063] Receive side beamforming to form a reconstructed image from the clean image can be performed 214 either before or after the subtraction of background RF-data described above. The receive side beamforming is performed dynamically by adjusting the delay values of the received echo signals so that they are “focused” at each imaging pixel; this focusing is repeated for every imaging point until a full image is generated. In general, an example implementation of receive side beamforming referred to as delay-and-sum beamforming can be described by three steps:
[0064] 1) The delay from the focal point to each imaging point, and back from the imaging point to each receiving element, is estimated.
[0065] 2) The estimated delay is used in an interpolation step to estimate the RF-data value. The interpolation is used since the delay might fall between two samples. For example, spline interpolation may be used.
[0066] 3) The RF amplitudes are summed across all receive channels.
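The three steps can be sketched in NumPy as follows (geometry and sampling parameters are assumed for illustration, and linear interpolation stands in for the spline interpolation mentioned above):

```python
import numpy as np

# Delay-and-sum receive beamforming: for each imaging pixel, estimate the
# round-trip delay, interpolate each channel's RF trace at that delay, and
# sum the amplitudes across all receive channels.

def delay_and_sum(rf, elem_x, pixels, src, c, fs):
    """rf: (n_elem, n_samp) RF-data; elem_x: element x-positions (y = 0);
    pixels: (n_pix, 2) imaging points; src: virtual source (x, y)."""
    n_elem, n_samp = rf.shape
    t = np.arange(n_samp) / fs
    image = np.zeros(len(pixels))
    for p, (px, py) in enumerate(pixels):
        # 1) delay: virtual source -> pixel, then pixel -> each element
        tx = np.hypot(px - src[0], py - src[1]) / c
        rx = np.hypot(px - elem_x, py) / c
        # 2) interpolate each channel's trace at its total delay
        samples = [np.interp(tx + rx[i], t, rf[i]) for i in range(n_elem)]
        # 3) sum amplitudes across all receive channels
        image[p] = np.sum(samples)
    return image
```

Focusing at a pixel where a scatterer is present makes the per-channel contributions add up, while mismatched delays at empty pixels sum to values near zero.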
[0067] The method further comprises adding 216 a plurality of reconstructed images resulting from a plurality of emitted ultrasonic beams for a given target area to form a summed image. The number of transmit events required for capturing the target area can be estimated based on the relation between the width of the transmitted beam at the target area and the width of the target area. Accordingly, for a focused emitted beam, a larger number of emitted beams is typically required compared to when using an unfocused or defocused beam, assuming that the width of the transmitted beam at the target area is smaller than the width of the target area.
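The transmit-event estimate above can be sketched as a simple ceiling ratio (a simplification that ignores overlap between adjacent beams; the example widths are assumptions):

```python
import math

# Number of transmit events needed to cover a target area, given the beam
# width at the target depth: a narrow focused beam requires many events,
# a wide defocused/unfocused beam only a few.

def n_transmit_events(target_width_mm: float, beam_width_mm: float) -> int:
    """Transmit events needed when each beam covers beam_width at the target."""
    return math.ceil(target_width_mm / beam_width_mm)

print(n_transmit_events(8.0, 0.5))  # narrow focused beam -> many events
print(n_transmit_events(8.0, 6.0))  # wide defocused beam -> few events
```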
[0068] The reconstructed images for each transmit event may be added together either coherently or incoherently, i.e. in-phase or out-of-phase, depending on whether there is a need to reduce the noise in the image (achieved by in-phase addition) or it is desirable to increase the contrast of the image (achieved by out-of-phase addition).
[0069] In-phase addition of the reconstructed images can be achieved by converting the received RF-data into in-phase quadrature complex data (IQ-data), thereby making the phase information available. Reconstructed images represented by IQ-data will subsequently be added in-phase (coherently). If the reconstructed images are instead to be added out-of-phase (incoherently), IQ-data is not needed.
[0070] Out-of-phase combining can help to increase the contrast by making sure that the impulse values are always added together without their phase information, i.e. regardless of whether they are positive or negative values.
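Coherent versus incoherent compounding of the reconstructed images can be sketched as follows, assuming complex IQ pixel values are available (the two-pixel example images are illustrative):

```python
import numpy as np

# Compounding of reconstructed images across transmit events:
# coherent  -> sum complex IQ values first (phase preserved, noise suppressed),
# incoherent -> sum magnitudes only (phase discarded, contrast-oriented).

def compound(images: list[np.ndarray], coherent: bool) -> np.ndarray:
    """images: reconstructed IQ images (complex arrays), one per transmit event."""
    if coherent:
        return np.abs(np.sum(images, axis=0))
    return np.sum([np.abs(im) for im in images], axis=0)

a = np.array([1 + 1j, 1 - 1j])   # image from first transmit event
b = np.array([1 + 1j, -1 + 1j])  # image from second transmit event
print(compound([a, b], coherent=True))   # phase-sensitive: opposing phases cancel
print(compound([a, b], coherent=False))  # magnitude sum: nothing cancels
```

In the second pixel the two events are out of phase, so coherent compounding cancels them while incoherent compounding preserves the full magnitude, illustrating the noise-versus-contrast trade-off described above.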
[0071] A final image is formed 218 by taking the envelope of the summed image. The final values for every imaging pixel can be either positive or negative due to the nature of the RF-values, whereas it is preferred to show the final image in terms of brightness. In the RF-data, large positive and negative values both represent strong reflectivity, and values close to zero represent low reflectivity. Accordingly, envelope detection can be used to convert the original representation into values only in the positive range. It should however be noted that the step of taking the envelope of the image is optional, and in some applications it is possible to derive sufficient information directly from the summed image.
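Envelope detection on a summed RF line can be sketched with an FFT-based Hilbert transform (the windowed test burst is an illustrative signal, not from the description):

```python
import numpy as np

# Envelope detection: build the analytic signal via an FFT-based Hilbert
# transform; its magnitude maps both strong positive and strong negative
# RF values to large positive brightness values.

def envelope(rf_line: np.ndarray) -> np.ndarray:
    """Magnitude of the analytic signal of a real-valued RF line."""
    n = len(rf_line)
    spectrum = np.fft.fft(rf_line)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0   # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    return np.abs(analytic)

t = np.linspace(0, 1, 500, endpoint=False)
burst = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.5) ** 2) / 0.002)
env = envelope(burst)  # non-negative envelope, peaking at the burst centre
```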
[0076] The spatial resolution of the system refers to the ability to resolve points that are very close to each other. In the described system, the lateral resolution (x-axis) and the axial resolution (y-axis) are preferably the same. This ensures that the total resolution is uniform and symmetrical in both directions. The spatial resolution can be represented by a point spread function (PSF), and in the present case the PSF will be substantially circular. Biometric image acquisition requires a spatial resolution which is sufficiently high to resolve the features of the biometric object, e.g. the ridges and valleys of a fingerprint. However, the described method and system may also be used in applications where a much lower resolution is required, e.g. in a touch detection system.
[0077] In summary, the described method and system are useful for improving area coverage of an ultrasonic biometric imaging system in applications where blocking features limit the propagation paths of the emitted ultrasonic signals.
[0078] The described method and system can also be useful for expanding the sensing area if there are cracks, scratches or other damage to the surface that influence the imaging properties.
[0079] Moreover, the described method and system may advantageously be used in applications which do not comprise a display. In particular, the described method may be used in an application where the touch surface comprises a plurality of openings or other types of blocking features which may not be present in a display screen.
[0080] Even though the invention has been described with reference to specific exemplifying embodiments thereof, many different alterations, modifications and the like will become apparent for those skilled in the art. Also, it should be noted that parts of the method and system may be omitted, interchanged or arranged in various ways, the method and system yet being able to perform the functionality of the present invention.
[0081] Additionally, variations to the disclosed embodiments can be understood and effected by the skilled person in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.