Passive ultrasound imaging with sparse transducer arrays
11260247 · 2022-03-01
CPC classification
G01S15/8925
PHYSICS
G01S7/52047
PHYSICS
A61B8/4494
HUMAN NECESSITIES
G10K11/34
PHYSICS
A61B8/5207
HUMAN NECESSITIES
International classification
A61B8/00
HUMAN NECESSITIES
Abstract
A passive compression wave imaging system comprises an array of sensor elements arranged in a sparse array and a processor arranged to: store a plurality of samples of the output from each of the sensor elements over a sample period; and derive from the stored samples a value for each of a set of image pixels; wherein for each of the image pixels the processor is arranged to: define a plurality of different sets of weights for the elements of the sparse array; calculate a component of a pixel value from each of the sets of weights and the stored samples; and sum the components of the pixel value to produce a final pixel value.
Claims
1. A passive imaging system for compression wave imaging, said imaging system arranged to generate an image by passive imaging, said image having a set of pixels, the system comprising an array of sensor elements arranged in a sparse array and each arranged to produce an output, and a processor arranged to: store a plurality of samples of the output from each of the sensor elements over a sample period; derive from the stored pluralities of samples a value for each of the set of pixels; wherein for each of the set of pixels the processor is configured to: a. define a plurality of different sets of weights for the sensor elements of the sparse array; b. calculate a component of a pixel value from each of the sets of weights and the stored pluralities of samples; and c. sum the components of the pixel value to produce a final pixel value, wherein the sparse array has a coarray, and, for each of the set of pixels, the weights are defined by a weight mask defined over the coarray, the weight mask has a centre, and the processor is configured to position the weight mask such that during calculation of the components of the pixel value of one of the set of pixels, the centre of the weight mask is at a point in the coarray, and to shift the weight mask such that during calculation of the components of the pixel value of each of the other pixels of the set of pixels, the centre of the weight mask is at a different respective point in the coarray.
2. An imaging system according to claim 1 wherein each of the set of pixels is arranged to represent a point, and for each of the set of pixels the processor is configured to calculate, for each of the sensor elements, a delay corresponding to a time the compression waves will take to travel from the point represented by the pixel to the sensor element.
3. An imaging system according to claim 2 wherein the processor is configured, for each of the set of pixels, to shift the stored plurality of samples from at least one of the sensor elements on the basis of the delays.
4. An imaging system according to claim 1 wherein for each of the set of pixels, the weights are related to each other by a weight mask function, and the weight mask function has a centre which is located at a different respective point in the coarray for each of the set of pixels.
5. An imaging system according to claim 4 wherein the weight mask function has a main area that corresponds to an area of the coarray, and a boundary area covering parts of the coarray not covered by the main area, the boundary area being of a single constant value.
6. An imaging system according to claim 1 wherein the processor is configured, in determining each of the sets of weights, to determine from the coarray weight mask an element space matrix having matrix elements corresponding to pairs of the array elements.
7. An imaging system according to claim 6 wherein the processor is configured to derive, from the element space matrix, at least one pair of weight vectors each having a weight for each of the array elements, and to derive from the pair of weight vectors a value for the pixel of the image.
8. An imaging system according to claim 7 wherein the processor is configured to derive, from the element space matrix, a plurality of pairs of weight vectors.
9. An imaging system according to claim 8 wherein the processor is configured to select from said plurality of pairs of weight vectors, a group of the most significant pairs, and to derive a pixel value only from each of said group.
10. A method of passive compression wave imaging, wherein said imaging produces a compression wave image comprising a set of pixels, the method comprising: providing an array of sensor elements arranged in a sparse array and each arranged to produce an output; storing a plurality of samples of the output from each of the sensor elements over a sample period; deriving from the stored pluralities of samples a value for each of the set of image pixels; wherein for each of the set of image pixels the method includes: defining a plurality of different sets of weights for the sensor elements of the sparse array; calculating a component of a pixel value from each of the sets of weights and the stored pluralities of samples; and summing the components of the pixel value to produce a final pixel value, wherein the sparse array has a coarray, and for each of the set of pixels, the weights are related to each other by a weight mask defined over the coarray, wherein the weight mask has a centre, and is positioned such that, during calculation of the components of the pixel value of one of the set of pixels, the centre of the weight mask is at a point in the coarray, and the weight mask is shifted such that during calculation of the components of the pixel value of each of the other pixels of the set of pixels, the centre of the weight mask is at a different respective point in the coarray.
11. A method according to claim 10 wherein each of the set of pixels is arranged to represent a point, the method including, for each of the set of image pixels, calculating for each of the sensor elements, a delay corresponding to the time the compression waves will take to travel from the point represented by the pixel to the sensor element.
12. A method according to claim 11 including, for each of the set of pixels, shifting the stored plurality of samples from at least one of the sensor elements on the basis of the delays.
13. A method according to claim 10 including, for each of the set of pixels, deriving the weights using a weight mask function, and shifting the weight mask function for each of the set of pixels.
14. A method according to claim 13 wherein the weight mask function has a main area that corresponds to the area of the coarray, the method including, identifying elements of the coarray not covered by the main area of the weight mask function and allocating a single constant value to those points.
15. A method according to claim 14 wherein determining each set of weights includes determining from the coarray weight mask an element space matrix having matrix elements corresponding to pairs of the array elements.
16. A method according to claim 15 including deriving, from the element space matrix, at least one pair of weight vectors each having a weight for each of the array elements, and deriving from the pair of weight vectors a value for the pixel of the image.
17. A method according to claim 16 including deriving from the element space matrix, a plurality of pairs of weight vectors.
18. A method according to claim 17 including selecting from said plurality of pairs of weight vectors, a group of the most significant pairs, and deriving a pixel value only from each of said group.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF EMBODIMENTS OF THE INVENTION
(10) As one example of the application of the present invention is in the mapping of cavitation during ultrasound therapies, a basic method of mapping such cavitation will first be described. Ultrasound therapies are emerging as competitive options in the treatment of various cancer types because of their ability to tightly focus energy deep into the body, completely noninvasively and with few adverse side effects. One of several key phenomena that can be exploited and monitored during ultrasound therapy is that of acoustic cavitation, or the nucleation and oscillation of microscopic bubbles caused by acoustic waves travelling in the body. Cavitation has the potential to enhance the local heating rate during high-intensity focused ultrasound (HIFU) ablation and can also augment desired mechanical bioeffects of ultrasound. This is useful in applications such as the dissolution of blood clots (thrombolysis), tissue homogenization (histotripsy), and aiding cell permeability for drug transport. Of the two general regimes of cavitation, inertial cavitation, which is the rapid expansion and collapse of a bubble, is of primary concern in the context of monitoring therapy. The bubble collapse, being an impulse-like event in the time domain, generates broadband acoustic emissions which, in addition to their helpful bioeffects, can also be used to monitor the spatial extent of the therapy.
(11) Assume that we have a sensor array with elements located at positions defined in a coordinate system by x_j = (x_j, y_j, z_j)^T for j = 1, 2, . . . , 4(N−1) sensors. Let A = {x_j : j = 1, 2, . . . , 4(N−1)} be the set of sensor array element locations. The source strength measured by the array of sensors at a field position x is given as
(12) q(x, t) = (4π/(4(N−1)α)) Σ_{j=1}^{4(N−1)} d_j(x) s_j(t + d_j(x)/c)  (1)
where s_j(t) is the signal recorded on channel j, α is a conversion factor from pressure to voltage, and the distance from sensor element to source field position is
d_j(x) = √((x_j − x)² + (y_j − y)² + (z_j − z)²)  (2)
The source power Ψ(x) can then be found by time-averaging the source strength q(x,t)
(13) Ψ(x) = (1/(4πρ₀cT)) ∫₀ᵀ q²(x, t) dt  (3)
where ρ₀ is the density of the medium and c is the speed of propagation. Although in conventional passive acoustic mapping, apodization or weighting functions are not applied to the power estimates, it is known to apply weights to shape the receive beam to reduce the effect of noise and unwanted reflections. Weighting based on optimal beamforming criteria has also been demonstrated previously (for example C. Coviello, "Robust Capon beamforming for passive cavitation mapping during HIFU therapy," in 160th Meeting of the Acoustical Society of America, abstract published in J. Acoust. Soc. Am. (2010)).
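As an illustration of equations (1) to (3), conventional passive acoustic mapping at a single field point can be sketched in numpy. This is a minimal sketch, not the patented method: the sensor geometry, pulse shape, and function names are illustrative assumptions, presteering is rounded to the nearest sample, and `np.roll` is used for the delay shift purely for brevity.

```python
import numpy as np

def pam_power(pixel, sensors, sig, fs, c=1500.0, rho0=1000.0, alpha=1.0):
    """Conventional passive-acoustic-map power at one field point (eqs. (1)-(3))."""
    M, T = sig.shape
    d = np.linalg.norm(sensors - pixel, axis=1)       # d_j(x), eq. (2)
    tau = np.round(d / c * fs).astype(int)            # propagation delays in samples
    # Presteer each channel by its delay: s_j(t + d_j(x)/c).  np.roll wraps
    # around, which is acceptable here because the record ends are quiet.
    q = sum(d[j] * np.roll(sig[j], -tau[j]) for j in range(M))
    q *= 4 * np.pi / (M * alpha)                      # eq. (1)
    return np.mean(q ** 2) / (4 * np.pi * rho0 * c)   # time-averaged power, eq. (3)

# Simulated point source at `src` emitting a short Gaussian pulse with 1/d spreading.
fs, c = 10e6, 1500.0
sensors = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.01, 0.01, 8)])
src = np.array([0.0, 0.0, 0.03])
t = np.arange(1024) / fs
dist = np.linalg.norm(sensors - src, axis=1)
sig = np.array([np.exp(-((t - d / c - 10e-6) ** 2) / (2 * (0.3e-6) ** 2)) / d
                for d in dist])

p_on = pam_power(src, sensors, sig, fs)                                  # at the source
p_off = pam_power(src + np.array([0.01, 0.0, 0.0]), sensors, sig, fs)    # 10 mm off-axis
```

Because the channels only add coherently when the delays match the true source position, the mapped power peaks at the source.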
(14) In an embodiment of the invention, a square array of detector elements is provided, each element arranged to output a signal that varies with the intensity of the ultrasound incident upon it, over a broadband frequency range. The signals are input to a processor which is arranged to sample each of the signals, or channels, repeatedly at a sampling frequency, and the values for each channel are stored as a one-dimensional array, or vector. The processor is then arranged to perform a number of steps to generate an image data set comprising a value for each pixel of the image.
(15) Then, to form an image of an imaged area, a value for each pixel of the image has to be derived from the stored channel vectors. For each pixel, a delay is calculated for each element in the sensor array, which is the time taken for the ultrasound to travel from the point being imaged in the pixel to the element of the sensor array. The data in each of the channels is then shifted by an amount dependent on the respective delay, so as to temporally align the channels. The data from each of the channels is then weighted, using a weighting algorithm. The weighting algorithm defines a plurality of sets of sub-image weightings for the square array elements. These weightings are then applied to the stored channel vectors, and the data combined to obtain a sub-image value for the pixel, as will be described in more detail below. Several sub-image values are derived for each pixel, using the different weightings for the data, and summed linearly to derive a total value for the pixel. This process is repeated for each pixel in the image, in each case re-calculating the delays and using different weightings specific to that pixel, thereby generating a complete image data set that can then be used to generate an image.
(16) In one embodiment of the invention, in order to simulate an image formed from a filled square array, two components of the source strength, or pixel value, in (1) are computed using the sampled values of the signals from the sensor array, with different weights w_1,j and w_2,j applied to each array element:
(17) q_k(x, t) = β Σ_{j=1}^{4(N−1)} w_{k,j} d_j(x) s_j(t + d_j(x)/c)  (4)
for k = 1, 2, and where β = (4π)/(4(N−1)α). Strictly speaking, in near-field imaging, the weight vector w_{k,j} will have a range dependence, although this is omitted from (4). The source power estimate Ψ(x) in (3) is then generalized with weighting as
(18) Ψ(x) = (1/(4πρ₀cT)) ∫₀ᵀ q_1(x, t) q_2*(x, t) dt  (5)
where * denotes the complex conjugate.
(19) Utilizing the concept of the coarray, weights may be chosen for a sparse array that allow higher-quality passive imaging than the same sparse array used with conventional passive acoustic mapping. The concept of the coarray, and an example of how the weights can be calculated, will be described below.
(20) Referring to the drawings, the difference coarray of the array is the set of all pairwise differences of the element locations,
C_d = {y | y = x_i − x_j, ∀ x_i, x_j ∈ A}  (6)
where i and j are separate indices over the elements of the array.
(21) The difference coarray for the N=5 boundary array is shown in the drawings.
(22) γ(y), ∀ y ∈ C_d  (7)
which is called the coarray weight mask.
(23) The fundamental point to consider is that the difference coarray of a fully-filled rectangular array and that of a sparse rectangular boundary array are equivalent, meaning that the sparse array should be able to achieve imaging performance comparable to that of the fully-filled array. However, it is necessary to determine the weighting to be applied to the array, specifically to each element of the array, so as to limit the effect of side lobes. A boundary array, having fewer elements than a fully-filled array, offers a more limited number of combinations of individual element weightings than could be applied in a fully-filled array. Linear imaging, i.e. the linear addition of a number of intermediate or sub-images, can however be used to allow multiple sets of weights to be applied to the sparse array elements, enabling the construction of multiple intermediate images which are then coherently combined to yield the final image. This provides variability in weighting combinations equivalent to that of a fully-filled array.
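The coarray equivalence described above can be checked numerically. In the sketch below, element positions are placed on an integer grid purely for illustration, and the difference coarray of equation (6) is built for a filled N×N array and for its boundary-only counterpart:

```python
import numpy as np

def coarray(positions):
    """Difference coarray C_d = {x_i - x_j : x_i, x_j in A} (eq. (6))."""
    return {tuple(p - q) for p in positions for q in positions}

N = 5
full = [np.array((i, j)) for i in range(N) for j in range(N)]   # filled N x N grid
boundary = [p for p in full if 0 in p or N - 1 in p]            # perimeter elements only

full_coarray = coarray(full)
boundary_coarray = coarray(boundary)
```

Although the boundary array has only 4(N−1) = 16 elements against 25 for the filled array, the two difference coarrays are the same set of (2N−1)² offsets, which is the equivalence the weight synthesis relies on.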
(24) Referring to the drawings.
(25) The goal of the weight synthesis algorithm is to decompose C into a sum of matrices, each of which is the outer product of two weight vectors, each of size 4(N−1)×1, with each outer product being used to make an intermediate image. To do so, we need a mapping from the coarray weighting mask, represented by C, to a 4(N−1)×4(N−1) matrix C_w of element-pair weights, shown in the drawings.
(26) Now, with C_w in hand, we then decompose it using the singular value decomposition (SVD) so as to derive a pair of 4(N−1)×1 weight vectors, each having one weight for each of the sparse array elements, for each sub-image value for the pixel:
(27) C_w = UΣV^H = Σ_{i=1}^{4(N−1)} σ_i u_i v_i^H  (8)
where:
H denotes the conjugate-transpose;
u_i are the left singular vectors;
σ_i are the singular values; and
v_i are the right singular vectors.
(28) Let l be the index over sub-images, and let w_1(l), w_2(l) be the two 4(N−1)×1 weight vectors used to form q_1(x, t; l) and q_2(x, t; l) using (4). Then assign w_1(l) = u_l σ_l^{1/2} and w_2(l) = v_l σ_l^{1/2} for l = 1, . . . , 4(N−1), so that we have the weight vector pairs for each sub-image, and the sub-image value for the pixel is computed using (5):
(29) Ψ(x; l) = (1/(4πρ₀cT)) ∫₀ᵀ q_1(x, t; l) q_2*(x, t; l) dt  (9)
(30) Finally, all of the sub-images are summed together to form the output image:
(31) Ψ(x) = Σ_{l=1}^{N_subimage} Ψ(x; l)  (10)
(32) The complete algorithm description is shown in Algorithm 1 (see Appendix 1).
(33) A variable of this process is how many intermediate images to use, with the maximum being 4(N−1). In practice, for each pixel of the final image, only a few singular values are significant, so the image addition in (10) may be truncated with negligible effect on image quality.
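The weight synthesis of equations (8) to (10), including the truncation just described, can be sketched numerically. In this sketch the element-pair matrix C_w is a random placeholder (the real one comes from the shifted coarray mask), so only the decomposition mechanics are illustrated:

```python
import numpy as np

def synthesize_weights(Cw):
    """Decompose C_w = U S V^H (eq. (8)) into columns of paired weight vectors
    w_1(l) = u_l * sigma_l**0.5 and w_2(l) = v_l * sigma_l**0.5."""
    U, s, Vh = np.linalg.svd(Cw)
    return U * np.sqrt(s), Vh.conj().T * np.sqrt(s)

rng = np.random.default_rng(0)
M = 16                                   # 4(N-1) elements for an N = 5 boundary array
Cw = rng.standard_normal((M, M))         # placeholder element-pair weight matrix
w1, w2 = synthesize_weights(Cw)

# Summing the rank-1 products w_1(l) w_2(l)^H over all l recovers C_w exactly,
# which is why summing the corresponding sub-images recovers the fully
# weighted image; truncating to the largest singular values approximates it.
recon = sum(np.outer(w1[:, l], w2[:, l].conj()) for l in range(M))
err_full = np.linalg.norm(Cw - recon)
```

Because `np.linalg.svd` returns the singular values in descending order, keeping only the first few columns of `w1` and `w2` implements the truncation of paragraph (33).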
(34) For efficiency, rather than computing the weights for each image pixel as described above, the weights can be pre-computed for all ranges once the sensor array is fixed. For a rectangular boundary array, this would require storage of a (N×N×2) matrix of weights per pixel position. Depending on the processing system to be used, this gives a trade-off of memory size and access versus computation speed. It can also be noted that Algorithm 1 is very parallelizable for potential implementation on graphical processing units (GPUs) or other multiprocessor systems.
(35) A means of assessing the potential performance of an imaging array is the point spread function (PSF), which allows the determination of the possible imaging resolution. For a 2-D sparse array and a point source at position x_s = (x_s, y_s, z_s)^T, let us start by assuming that the array is situated in the XY-plane (z_j = 0, ∀j); then, the PSF as a function of frequency f and position x can be written as
(36) P(x, f) = |Σ_{j=1}^{4(N−1)} w_j exp{jΦ(x_j)}|²  (11)
with phase term Φ(x_j) = k(d_j(x) − d_j(x_s))  (12)
where w_j is the weighting on array element j, and k = (2πf)/c is the wavenumber. Note that this PSF is formulated for the case in which the same weights w_1,j = w_2,j = w_j are used to compute q_1(x, t) and q_2(x, t) in (4). This equation can be simplified to find the spatial resolution for both the transverse and axial imaging planes by assuming w to be a smooth function. This allows us to write the PSF as
P(x, f) = |∫_{−∞}^{∞} w(l) exp{jΦ(l)} dl|²  (13)
For the transverse plane, z = z_s, and letting δx = x − x_s and δy = y − y_s, the PSF can be shown to be
(37) P(x, f) = |W(k(δx + δy)/z_s)|²  (14)
which is the Fourier transform of the aperture weighting function, evaluated at the scaled frequency (k(δx + δy))/z_s. For uniform weighting, and ignoring grating lobes, this would simply be the square of the sinc function, or
(38) P(x, f) = sinc²(kL(δx + δy)/(2z_s))  (15)
where L = √(L_x² + L_y²) is the aperture size. Similarly, to find the axial resolution, assuming a smooth, uniform weighting function and axisymmetry (an approximation for noncircular arrays), the Fresnel integral allows us to represent the PSF as
(39)
where F(⋅) is the Fresnel integral. Plotting both the axial and transverse PSF versus depth (z_s) allows the determination of the resolutions in both planes from the full-width at half-maximum (FWHM) of the PSF peak.
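The transverse resolution predicted by equation (15) can be evaluated numerically. The sketch below assumes uniform weighting and reads the FWHM off a sampled PSF; the aperture, depth, and frequency values are chosen purely for illustration:

```python
import numpy as np

def transverse_psf(dx, L, zs, f, c=1500.0):
    """Uniform-weighting transverse PSF: the squared sinc of eq. (15).
    np.sinc(u) = sin(pi u)/(pi u), so the argument is k L dx / (2 pi z_s)."""
    k = 2 * np.pi * f / c
    return np.sinc(k * L * dx / (2 * np.pi * zs)) ** 2

def fwhm(x, p):
    """Full width at half maximum of a sampled single-peaked curve."""
    above = x[p >= 0.5 * p.max()]
    return above.max() - above.min()

# Illustrative numbers: 20 mm aperture, 50 mm depth, 5 MHz.
L, zs, f, c = 0.02, 0.05, 5e6, 1500.0
dx = np.linspace(-2e-3, 2e-3, 4001)
res = fwhm(dx, transverse_psf(dx, L, zs, f))
# res lands close to the classical diffraction limit ~0.886 * lambda * z_s / L.
```

The same approach, with the Fresnel-integral PSF of (16) in place of the sinc, would yield the axial resolution versus depth.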
Sensor Array
(40) PVDF and its copolymers have been used in many applications involving acoustic sensors because of their low acoustic impedance, large receive bandwidth, and the material flexibility that facilitates custom nonrigid sensors. In fact, many of the widely reported uses of PVDF and P(VDF-TrFE) are in hydrophones, both single-element (needles and membranes) and arrays. Because the acoustic emissions in ultrasound therapy tend to have both harmonic and broadband components, a material with a large bandwidth and high receive efficiency is highly desirable. Further, the relative cost in construction of an array made from prepoled PVDF film can be much less than for conventional piezoceramic materials, such as PZT, because of the high manufacturing cost of dicing, wirebonding, and the associated backing and matching layers. Referring to the drawings.
(41) Experimental Setup
(42) Two separate experiments were run to test the imaging quality of the sparse boundary array. First, a scattering experiment was run to determine the experimental PSF and compare it to the theoretical resolutions obtained previously. Second, the array was tested using a tissue-mimicking material (TMM) loaded with cavitation-nucleating particles to determine the passive acoustic mapping performance versus predictions.
(43) Referring to the drawings.
(44) Referring to the drawings.
(45) Results
(46) To determine the experimental PSF, the sparse rectangular boundary array recorded the scattered signals from the needle hydrophone with a record length of 35 μs. The received bandwidth of the pulse at the array is approximately 3.75 MHz, as shown in the drawings.
(47) One difficulty in assessing the accuracy of passive acoustic maps is the lack of alternative validated experimental techniques that enable accurate sizing of regions of inertial cavitation activity. It is possible, however, to predict the size of the region using knowledge of the cavitation threshold for the material and the transducer calibration. The cavitation threshold of this type of TMM has been found to be 1.42 MPa. Using an accurate calibration of the HIFU transducer, the predicted cavitation region should coincide with the size of the HIFU beam above the cavitation threshold at a particular insonating amplitude, as shown in the drawings.
(48) While the examples above relate to monitoring cavitation using broadband detection, the invention is also applicable to the monitoring of other effects, such as the mapping of harmonic emissions which can arise in a HIFU field from the scattering of the incident non-linear wave by tissue that lies within the focal volume.
(49) TABLE-US-00001 Appendix
Algorithm 1: Sparse Aperture Weighted Passive Beamforming.
Given: i indexes the pixels, j indexes sensor positions, x = (x, y, z)^T, s_j(t) is the received sensor data for sensor j.
for i = 1 → M_pixels do
  Compute delays: {τ_ij = d_ij/c : ∀j}
  Presteer data: for j = 1 : 4(N−1) do s_j(t) ← s_j(t + τ_ij) end for
  Shift weight mask: C = γ(y) ← γ(y + x_i)
  Map weights: C_w ← Transform{C}
  Compute SVD: UΣV^H ← SVD(C_w)
  Threshold: N_subimage ← threshold(Σ)
  for l = 1 : N_subimage do
    Form weights: w_1(l) ← u_l σ_l^{1/2}; w_2(l) ← v_l σ_l^{1/2}
    Apply weights: q_1(t; l) ← β Σ_{j=1}^{4(N−1)} w_1,j(l) d_ij s_j(t); q_2(t; l) ← β Σ_{j=1}^{4(N−1)} w_2,j(l) d_ij s_j(t)
    Compute pixel values: Ψ(x_i; l) ← (1/(4πρ₀cT)) ∫₀ᵀ q_1(t; l) q_2*(t; l) dt
  end for
  Sum sub-images: Ψ(x_i) ← Σ_{l=1}^{N_subimage} Ψ(x_i; l)
end for
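Algorithm 1 can be sketched end-to-end in numpy. This is an illustrative reimplementation under simplifying assumptions, not the patented implementation: the mapping from the shifted coarray mask to element-pair weights is abstracted behind a caller-supplied `pair_weight` function, presteering is rounded to the nearest sample, and normalisation constants are dropped so the output is a relative power map.

```python
import numpy as np

def sparse_weighted_pam(pixels, sensors, sig, fs, pair_weight, c=1500.0, n_sub=None):
    """Sketch of Algorithm 1.  pair_weight(x) must return the (M, M) element-pair
    weight matrix C_w for pixel x; n_sub optionally truncates the sub-image sum."""
    M, T = sig.shape
    out = np.zeros(len(pixels))
    for i, x in enumerate(pixels):
        d = np.linalg.norm(sensors - x, axis=1)                    # d_ij
        tau = np.round(d / c * fs).astype(int)                     # tau_ij = d_ij / c
        steered = np.stack([np.roll(sig[j], -tau[j]) for j in range(M)])  # presteer
        U, s, Vh = np.linalg.svd(pair_weight(x))                   # C_w = U S V^H
        k = len(s) if n_sub is None else n_sub                     # optional truncation
        psi = 0.0
        for l in range(k):                                         # one sub-image per l
            w1 = U[:, l] * np.sqrt(s[l])                           # w_1(l) = u_l sigma_l^1/2
            w2 = Vh[l].conj() * np.sqrt(s[l])                      # w_2(l) = v_l sigma_l^1/2
            q1 = (w1 * d) @ steered                                # eq. (4), k = 1
            q2 = (w2 * d) @ steered                                # eq. (4), k = 2
            psi += np.mean(q1 * np.conj(q2)).real                  # eqs. (9), (10)
        out[i] = psi
    return out

# Simulated point source, as before: Gaussian pulse with 1/d spreading.
fs, c = 10e6, 1500.0
sensors = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.01, 0.01, 8)])
src = np.array([0.0, 0.0, 0.03])
t = np.arange(1024) / fs
dist = np.linalg.norm(sensors - src, axis=1)
sig = np.array([np.exp(-((t - d / c - 10e-6) ** 2) / (2 * (0.3e-6) ** 2)) / d
                for d in dist])

def uniform_mask(x):
    # All-ones C_w is rank one, so the SVD yields a single uniform weight pair
    # and the algorithm reduces to conventional delay-and-sum passive mapping.
    return np.ones((8, 8))

pixels = [src, src + np.array([0.01, 0.0, 0.0])]
img = sparse_weighted_pam(pixels, sensors, sig, fs, uniform_mask)
```

Swapping `uniform_mask` for a function that shifts and maps a genuine coarray weight mask per pixel, as in Algorithm 1, would give the weighted sparse-array image.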