METHOD AND APPARATUS FOR ADAPTIVE BEAMFORMING
20220155439 · 2022-05-19
CPC classification: G01S15/8995 (PHYSICS)
Abstract
In a method of imaging, a first transmission is carried out in a first direction. The reflected signals are received using a plurality of receiving devices. For each device, a two- or three-dimensional data set is formed. The first dimension (26b) represents the depth or range and the second dimension (26a) represents lateral distance. The optional third dimension (26c) represents an orthogonal lateral distance. The data set is formed by calculating times of flight for each pixel within a grid; the data value at the corresponding receive time is then assigned to each pixel. A data set is generated for each receiver, which results in a three- or four-dimensional data set from the first transmission of signals. A second transmission of signals is made in a different direction or from a different position, and the signals received from the second transmission are received in the same way as those from the first transmission. The signals are first summed across the transmit dimension to form a single data set, so that the data from the various transmissions is combined. Adaptive beamforming is then carried out on this data set, resulting in a single adaptive image.
Claims
1. A method of imaging a target region, the method comprising: i) carrying out a first transmission of signals in a first direction into the target region using one or a plurality of transmitting devices located at a first position; ii) receiving the signals reflected from the target region using a plurality of receiving devices; iii) for each receiving device, forming a data set made up of the received signals wherein the data set has at least two dimensions, wherein the first dimension represents the depth or range within the region and the second dimension represents lateral distance within the region and optionally wherein the data comprises a third dimension and wherein the third dimension represents an orthogonal lateral distance within the region, wherein the data set is formed by first calculating times of flight for each pixel within a two-dimensional grid, or optionally a three-dimensional grid, and then assigning to each pixel in the grid the data value of the corresponding time of the received signal, thereby generating a two-dimensional, or optionally three-dimensional, data set for each receiver and therefore a three-dimensional, or optionally four-dimensional, data set resulting from the first transmission of signals; iv) making a second transmission of signals into the region, wherein the second transmission is in a second direction and/or made from a second position, distinct from the first direction or first position; v) repeating steps ii) and iii) for the signals received from the second transmission; vi) for each receiving device, summing the data acquired from each of the at least two transmissions, thereby producing a two-dimensional, or optionally three-dimensional receiving device data set corresponding to each receiving device; vii) forming a three-dimensional, or optionally four-dimensional, data set made up of receiving device data sets, and subsequently carrying out adaptive beamforming on said three-dimensional, or four-dimensional, 
data set to combine the receiving device data sets so as to produce a single adaptive two-dimensional, or optionally three-dimensional, image of the region; viii) and storing or displaying said image.
2. A method as claimed in claim 1, wherein the transmitted signal is a sound wave, optionally an ultrasound wave.
3. A method as claimed in claim 1, wherein the transmitted signal is an electromagnetic wave.
4. A method as claimed in claim 1, wherein the first transmission and/or the second transmission is carried out using a majority of the plurality of transmitting devices.
5. A method as claimed in claim 1, wherein the plurality of transmitters are arranged so that the first transmission and/or the second transmission originates from a virtual source located behind the transmitters.
6. A method as claimed in claim 1, wherein the first transmission and/or the second transmission is a focused-wave waveform.
7. A method as claimed in claim 1, wherein the first transmission and/or the second transmission is an omni-directional wave and wherein the first transmission and the second transmission are made from different positions.
8. A method as claimed in claim 1, wherein the second direction of the second transmission is at a distinct angle to the first direction of the first transmission, wherein said distinct angles are each in a range from a minimum angle value −α.sub.max to a maximum angle value α.sub.max, and the value of α.sub.max is determined to be α.sub.max≈1/(2f#), wherein f# is a selected ratio between the depth of a pixel and the size of a receiving aperture.
9. An imaging device, comprising: one or a plurality of transmitting devices, for carrying out a first transmission of signals in a first direction into a target region from a first position and for carrying out a second transmission of signals in a second direction from a second position, wherein the second direction is distinct from the first direction and/or the second position is distinct from the first position, into the target region; a plurality of receiving devices, for receiving the signals reflected from a target region using a plurality of receiving devices; a processing unit, configured to form a first data set made up of the received signals from the first transmission, and a second data set made up of the received signals from the second transmission, wherein the first data set and the second data set comprise two dimensions, wherein the first dimension represents the depth or range within the region and the second dimension represents lateral distance within the region, and optionally wherein the first data set and the second data set each comprise a third dimension and wherein the third dimension represents an orthogonal lateral distance within the region; wherein the data set is formed by first calculating times of flight for each pixel within a two-dimensional, or optionally three-dimensional, grid and then assigning to each pixel in the grid the data value of the corresponding time of the received signal, thereby generating a two-dimensional, or optionally three-dimensional, data set for each receiver and therefore a three-dimensional, or optionally four-dimensional, data set resulting from the first transmission of signals; the processing unit further configured, for each receiving device, to sum the data acquired from each of the at least two transmissions, thereby producing a two-dimensional, or optionally three-dimensional receiving device data set corresponding to each receiving device; the processing unit further configured to form a 
three-dimensional, or optionally four-dimensional, data set made up of receiving device data sets, and subsequently carry out adaptive beamforming on said three-dimensional, or optionally four dimensional, data set to combine the receiving device data sets so as to produce a single adaptive image of the region; and a storage unit; for storing or displaying said image.
10. An imaging device as claimed in claim 9, wherein the transmitted signal is a sound wave, optionally an ultrasound wave, or wherein the transmitted signal is an electromagnetic wave.
11. An imaging device as claimed in claim 9, wherein the first transmission and/or the second transmission is carried out using a majority of the plurality of transmitting devices.
12. An imaging device as claimed in claim 9, wherein the plurality of transmitters are arranged so that the first transmission and/or the second transmission originates from a virtual source located behind the transmitters.
13. An imaging device as claimed in claim 9, wherein the first transmission and/or the second transmission is a focused-wave waveform.
14. An imaging device as claimed in claim 9, wherein the first transmission and/or the second transmission is an omni-directional wave and wherein the first transmission and the second transmission are made from different positions.
15. An imaging device as claimed in claim 9, wherein the second direction of the second transmission is at a distinct angle to the first direction of the first transmission, wherein said distinct angles are each in a range from a minimum angle value −α.sub.max to a maximum angle value α.sub.max, and the value of α.sub.max is determined to be α.sub.max≈1/(2f#), wherein f# is a selected ratio between the depth of a pixel and the size of a receiving aperture.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] Certain preferred embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings.
DETAILED DESCRIPTION
[0048] The following examples are given with reference to a two-dimensional imaging system and method, however it will be readily understood by the skilled person that the same disclosure and teaching applies equally in the context of three-dimensional imaging systems and methods.
[0050] A particular transducer element m receives a signal h.sub.m,a(t) following a particular transmit a. In the example case of
Δt=(T+R)/c.sub.0,
[0051] The receive distance R is independent of the type of transmit which is made. It is calculated as:
R(z, x, m)=√{square root over (z.sup.2+(x−m).sup.2 )}
[0052] The transmit distance T depends on the type of beam which is transmitted from the array 2. Some specific examples are considered below, for explanatory purposes:
[0053] In the case of Coherent Plane Wave Compounding one or multiple planar transmit beams are transmitted into a domain at different transmit angles α. In this case the transmit distance T, is:
T(z, x, α)=(z cos(α)+x sin(α))
[0054] In the case of a Diverging Wave, originating from a virtual source (z.sub.s,x.sub.s) behind the transducer, the transmit distance T is given by:
T(z, x, z.sub.s, x.sub.s)=√{square root over ((x−x.sub.s).sup.2+(z−z.sub.s).sup.2)}
[0055] In the case of a focused wave, which can be either converging or diverging, one can calculate the transmit distance T assuming a spherical virtual source model. In this model a virtual source {right arrow over (v)}.sub.s=(z.sub.s, x.sub.s) is placed in the focus of the transmission, with the centre of the transmission originating from {right arrow over (p)}.sub.c=(z.sub.c, x.sub.c) and the transmit distance T is given by:
T(z, x, z.sub.s, x.sub.s)=(|{right arrow over (v)}.sub.s−{right arrow over (p)}.sub.c|+|{right arrow over (x)}−{right arrow over (v)}.sub.s|)
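The three transmit-distance models above, together with the receive distance R, can be sketched as follows. This is a minimal illustrative sketch, not part of the disclosure; the function names, coordinate values and the tissue sound speed are assumptions chosen for the example.

```python
import numpy as np

def receive_distance(z, x, x_m):
    """R(z, x, m): distance from pixel (z, x) back to the element at x_m."""
    return np.sqrt(z**2 + (x - x_m)**2)

def transmit_distance_plane(z, x, alpha):
    """Plane wave steered at angle alpha (radians): T = z cos(a) + x sin(a)."""
    return z * np.cos(alpha) + x * np.sin(alpha)

def transmit_distance_diverging(z, x, z_s, x_s):
    """Diverging wave from a virtual source (z_s, x_s) behind the transducer."""
    return np.sqrt((x - x_s)**2 + (z - z_s)**2)

def transmit_distance_focused(z, x, z_s, x_s, z_c, x_c):
    """Focused wave: virtual source at the focus (z_s, x_s), beam centre at
    (z_c, x_c); T = |v_s - p_c| + |x - v_s|."""
    return np.hypot(z_s - z_c, x_s - x_c) + np.hypot(z - z_s, x - x_s)

# Time of flight for one pixel, one element, one plane-wave transmit:
c0 = 1540.0                      # assumed speed of sound in tissue, m/s
z, x, x_m = 0.03, 0.005, 0.002   # assumed pixel and element positions, m
dt = (transmit_distance_plane(z, x, np.deg2rad(5.0))
      + receive_distance(z, x, x_m)) / c0
```

In a real imager these functions would be evaluated over the whole pixel grid at once (the arguments broadcast as NumPy arrays), giving the per-pixel delay tables used in the next step.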
[0056] This “time of flight” value, Δt, is used to assign a signal value s.sub.m,a, also known as a pixel value, to a particular pixel corresponding to a point in the imaged region, for a particular transducer element m and a particular transmission a. So that:
s.sub.m,a=h.sub.m,a(t)|.sub.t=Δt
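The assignment s.sub.m,a=h.sub.m,a(Δt) can be sketched as follows. The sampling rate, the stand-in channel signal and the use of linear interpolation between samples are illustrative assumptions (a real system might use other interpolation schemes).

```python
import numpy as np

fs = 50e6                                  # assumed sampling rate, Hz
t_axis = np.arange(2000) / fs              # sample times of h_{m,a}(t)
h_ma = np.sin(2 * np.pi * 5e6 * t_axis)    # stand-in received channel signal

def pixel_value(h, t_axis, delta_t):
    """s_{m,a}: the channel signal evaluated at t = delta_t, with linear
    interpolation between samples and zero outside the recorded interval."""
    return np.interp(delta_t, t_axis, h, left=0.0, right=0.0)

# Assign a value to one pixel whose computed time of flight is 12.3 us:
s_ma = pixel_value(h_ma, t_axis, delta_t=12.3e-6)
```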
[0057] Conventionally, these pixel values for different transducer elements and for different transmissions are combined using “Delay-and-Sum” beamforming. In this approach an image b.sub.DAS is formed by coherently combining the pixel values as received by all elements M from all transmits N.sub.a. Giving:
b.sub.DAS=Σ.sub.aΣ.sub.m w.sub.a.sup.Tx w.sub.m.sup.Rx s.sub.m,a
[0058] Here w.sub.m.sup.Rx is the receive apodization with dimensions [N.sub.z, N.sub.x, M] while w.sub.a.sup.Tx is the transmit apodization with dimensions [N.sub.z, N.sub.x, N.sub.a]. An apodization function, or tapering function (also known as a window function), is a mathematical function that is zero-valued outside of some chosen interval, normally symmetric around the middle of the interval, usually near a maximum in the middle, and usually tapering away from the middle. One such window function is known as a Boxcar window. For a uniform Boxcar window the receive apodization w.sub.m.sup.Rx can be calculated by
w.sub.m.sup.Rx(z, x)=1 if |x−x.sub.m|≤z/(2f#), and 0 otherwise,
[0059] where (z, x) is the pixel position, x.sub.m is the position of the receiving element, and f# is the selected ratio between the pixel depth and the size of the receiving aperture. Other window functions such as, but not limited to, Hamming and Tukey can also be used. For the transmit apodization w.sub.a.sup.Tx the apodization is dependent on the type of transmitted wave and the area into which the wave is transmitted.
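A Boxcar receive apodization of this kind can be sketched as follows; it keeps an element only when it lies inside the receive aperture of size z/f# centred on the pixel. The element-position values and the f-number are illustrative assumptions.

```python
import numpy as np

def boxcar_rx_apodization(z, x, x_m, f_number):
    """w_m^Rx(z, x): 1 where |x - x_m| <= z / (2 f#), else 0."""
    return np.where(np.abs(x - x_m) <= z / (2.0 * f_number), 1.0, 0.0)

# A pixel at 30 mm depth with f# = 2 accepts elements within +/- 7.5 mm:
x_elements = np.linspace(-0.02, 0.02, 128)   # assumed element positions, m
w = boxcar_rx_apodization(z=0.03, x=0.0, x_m=x_elements, f_number=2.0)
```

A Hamming or Tukey window would replace the hard 0/1 step with a smooth taper over the same aperture.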
[0060] For simplicity the spatial coordinates can be dropped, the sum over each transmit a can be defined as the sum over the transmit dimension T.sub.x, and the sum over each transducer element m can be defined as the sum over the receive dimension R.sub.x:
b.sub.DAS=Σ.sub.T.sub.x w.sup.Tx Σ.sub.R.sub.x w.sup.Rx s
[0061] This sum shows the conventional way of implementing “Delay-and-Sum”. Once data is received for all of the M elements, the sum over the receive dimension R.sub.x is carried out during the imaging process, whilst further transmissions are being made.
[0062] In particular, in the above equation, the data for each transmission a can be considered as a three dimensional cube, with dimensions of z, x, and M, the number of receive elements. Thus the pixel values s.sub.m,1 have dimensions [N.sub.z, N.sub.x, M], and there will be N.sub.a of these data cubes, one for each transmit a. This is shown in
[0064] In the “Delay-and-Sum” approach described in the above equation, the sum over the receive dimension 26c is carried out first, producing a single transmission image (22a, 22b, 22c, 22d, 22e) corresponding to each transmission. The images produced from each transmission are then summed to produce a final image 24, referred to above as b.sub.DAS.
[0065] However, the Applicant has appreciated that using software beamforming opens up the possibility of storing the signals received on individual transducer elements (the channel data) for multiple different transmissions, and therefore of processing the data differently.
[0066] Considering again the equation:
b.sub.DAS=Σ.sub.a w.sub.a.sup.Tx(Σ.sub.m w.sub.m.sup.Rx s.sub.m,a)=Σ.sub.a w.sub.a.sup.Tx b.sub.a.sup.Rx
[0067] Where
b.sub.a.sup.Rx=Σ.sub.m w.sub.m.sup.Rx s.sub.m,a
is the result of the coherent combination of the signals over the receive elements M.
[0068] Therefore, equivalently:
b.sub.DAS=Σ.sub.m w.sub.m.sup.Rx(Σ.sub.a w.sub.a.sup.Tx s.sub.m,a)=Σ.sub.m w.sub.m.sup.Rx b.sub.m.sup.Tx
[0069] Where
b.sub.m.sup.Tx=Σ.sub.a w.sub.a.sup.Tx s.sub.m,a
is likewise the coherent combination of the signal over the transmit dimension.
[0070] This equation therefore shows the process as represented in
[0071] A sum is commutative and therefore
Σ.sub.aΣ.sub.m w.sub.a.sup.Tx w.sub.m.sup.Rx s.sub.m,a=Σ.sub.mΣ.sub.a w.sub.a.sup.Tx w.sub.m.sup.Rx s.sub.m,a
[0072] A key insight of the present invention is therefore to carry out this sum in a different order, to sum firstly on the transmit dimension T.sub.x, to arrive at a three-dimensional data set, and then to sum on the receive dimension R.sub.x to arrive at a final image. This method is represented schematically in
[0073] This process is represented schematically in
[0074] Thus the described embodiment of the present invention processes received data in two stages: the first stage 40 sums the data across the transmit dimension, whilst the second stage 42 sums, or combines, the data cube produced by the first stage across the receive dimension, giving a single adaptive image, which may be a “high-quality” image. Thus far the only method of summing the received data that has been described is the conventional delay-and-sum approach, as denoted by the subscript ‘DAS’. However, it is known in the art to process such received data using a method known as “adaptive beamforming”.
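The two-stage reordering can be demonstrated numerically as follows. With plain summation in both stages the reordered result is identical to conventional Delay-and-Sum, which is the commutativity observation above; the point of the reordering is that stage two can instead be replaced by an adaptive beamformer. The array shapes and random channel data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Nz, Nx, M, Na = 64, 48, 32, 8              # depth, lateral, elements, transmits
s = rng.standard_normal((Nz, Nx, M, Na))   # delayed channel data s_{m,a}

# Stage 1 of the described method: sum across the transmit dimension Tx,
# leaving a single (Nz, Nx, M) receiving-device data cube.
b_tx = s.sum(axis=3)

# Stage 2: combine across the receive dimension Rx (plain DAS here; an
# adaptive beamformer would act on b_tx at this step instead).
img_new_order = b_tx.sum(axis=2)

# Conventional order: Rx sum first (one image per transmit), then compound.
img_conventional = s.sum(axis=2).sum(axis=2)

# The two orders agree exactly when both stages are plain sums.
assert np.allclose(img_new_order, img_conventional)
```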
[0075] There are many different adaptive beamformers which are known in the art. The dimension reduction discussed above is general and can be implemented using most known adaptive beamformers. For illustration purposes the dimension reduction strategy will be considered in more detail below using two of the most popular adaptive beamformers—Capon's Minimum Variance beamformer and the Short Lag Spatial Coherence.
Capon's Minimum Variance (MV)
[0076] Capon's Minimum Variance (MV) technique calculates a data dependent set of weights w while maintaining unity gain in the steering direction. This is posed as a minimization problem by
min.sub.wE{|b|.sup.2}=w.sup.HRw
subject to w.sup.Ha=1
[0077] where R≡E{tt.sup.H} is the spatial covariance matrix, E is the expected value operator and the steering vector a=1, because it is assumed that all signals are already delayed.
[0078] This equation can be solved using the method of Lagrange multipliers. This gives:
w=(R.sup.−1a)/(a.sup.HR.sup.−1a)
[0079] The spatial covariance matrix R is unknown, but assuming a linear array it can be estimated for point (z, x) by:
{circumflex over (R)}(z, x)=1/((2K+1)(M−L+1))Σ.sub.k=−K.sup.KΣ.sub.l=1.sup.M−L+1{tilde over (t)}.sub.l(z+k, x){tilde over (t)}.sub.l.sup.H(z+k, x)
[0080] where (2K+1) is the number of axial samples, L is the length of the subarray, and {tilde over (t)}.sub.l=[t.sub.l, t.sub.l+1, . . . , t.sub.l+L−1].sup.T is the l-th subarray of the delayed element signals t.
[0081] The subarray averaging improves robustness. To further improve robustness, and numerical stability, diagonal loading is added to the estimated covariance matrix by {tilde over (R)}(z, x)={circumflex over (R)}(z, x)+∈I, where I is the identity matrix, and
∈=Δ·tr{{circumflex over (R)}(z, x)}/L
[0082] where tr{ } is the trace operator and Δ is a small loading constant.
[0083] The adaptive weights are then applied as:
b.sub.MV(z, x)=1/(M−L+1)Σ.sub.l=1.sup.M−L+1 w.sup.H(z, x){tilde over (t)}.sub.l(z, x)
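The minimum variance calculation for a single pixel can be sketched as follows. This is a minimal sketch, not the patented implementation: axial averaging is omitted (K=0) for brevity, the subarray length L and the loading constant Delta are assumed parameters, and the steering vector is a=1 because the data are already delayed.

```python
import numpy as np

def mv_pixel(t, L, delta=1e-2):
    """MV pixel value for one delayed data vector t of length M, using
    subarray averaging (length L) and diagonal loading (constant delta)."""
    M = t.shape[0]
    n_sub = M - L + 1
    # Subarrays t~_l and the subarray-averaged covariance estimate R^.
    subs = np.stack([t[l:l + L] for l in range(n_sub)])          # (n_sub, L)
    R = (subs[:, :, None] * subs[:, None, :].conj()).mean(axis=0)
    # Diagonal loading: R~ = R^ + (delta * tr{R^} / L) * I.
    R = R + delta * np.trace(R) / L * np.eye(L)
    # w = R~^-1 a / (a^H R~^-1 a), with steering vector a = 1.
    a = np.ones(L)
    Ri_a = np.linalg.solve(R, a)
    w = Ri_a / (a.conj() @ Ri_a)
    # Apply w^H to each subarray and average over the subarrays.
    return np.mean(subs @ w.conj())
```

Because of the unity-gain constraint, a perfectly coherent (constant) input vector passes through with its amplitude preserved, which is a convenient sanity check on any implementation.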
[0084] Conventionally, the minimum variance (MV) weight set is calculated for the signals received on the M elements from one transmit a, and thus the t in the equation for {circumflex over (R)}(z, x) above is t=s.sub.m,a.
[0085] This means that we are adaptively calculating the receive-dimension combination
[0086] b.sub.a.sup.Rx,
[0087] as defined in the case of Delay-and-Sum above, and substituting it with b.sub.MV as given above.
[0088] The resulting images, where the MV was applied over the R.sub.x dimension, are denoted by
Notice that we will have one such image for each transmit a, and thus we can do conventional coherent compounding over the transmit dimension
[0089] One can also first do a conventional Delay-and-Sum over the receive elements as described above, and then set t=b.sub.a.sup.Rx
[0090] with the equation for b.sub.MV as given above. Let's denote the resulting image, where the MV was applied over the T.sub.x dimension of the multiple
images, as
[0091] For the CPWC case, this means that the coherent compounding is adaptive.
[0092] Alternatively, one can do both—meaning that we apply MV first over the receive dimension, and then over the transmit dimension. Thus we are substituting both
[0093] with the equation for b.sub.MV as given above. Let's denote the resulting image as
[0094] For the CPWC case, this approach has elsewhere been referred to as Double Adaptive Plane-Wave Imaging.
[0095] All of the above-described methods are ways in which adaptive beamforming can be implemented within conventional data processing as described with reference to
[0096] However, a significant development provided in accordance with the present invention is the realization that the novel data processing method in which data is summed first on the transmit dimension and then combined, or summed, on the receive dimension (as shown in
[0097] Here b.sub.m.sup.Tx can be substituted, by setting t=b.sub.m.sup.Tx, with the equation for b.sub.MV as given above, so that minimum variance adaptive beamforming is done on the R.sub.x dimension. Let's denote the resulting image as
[0103] This method has been referred to as Double Adaptive Plane-Wave imaging. Such an image is denoted as
[0105] These Figures clearly show an improvement in image resolution for all of the images.
[0107] Interestingly, the computation time for
Short Lag Spatial Coherence
[0108] The invention may also be applied to the short lag spatial coherence (SLSC) algorithm. The spatial correlation can be calculated as
{circumflex over (R)}(m)=1/(M−m)Σ.sub.i=1.sup.M−m(Σ.sub.n=n.sub.1.sup.n.sup.2 p.sub.i(n)p.sub.i+m(n))/√{square root over (Σ.sub.n p.sub.i(n).sup.2Σ.sub.n p.sub.i+m(n).sup.2)}
[0109] where p is the delayed signal, n is the depth sample index, and m is the distance, or lag, in number of elements between two points on the aperture. The sum over n results in a correlation over a given kernel size, n.sub.2−n.sub.1, of pixels. The short lag spatial coherence is calculated as the sum over the first M lags:
b.sub.SLSC=Σ.sub.m=1.sup.M{circumflex over (R)}(m)
[0110] Thus, notice that b.sub.SLSC is an image of the coherence and not of the backscattered signal amplitude as with DAS and MV. The SLSC is a visualization of the spatial coherence of backscattered ultrasound waves, building upon the theoretical prediction of the van Cittert-Zernike (VCZ) theorem. Thus, the SLSC is applied on the R.sub.x dimension. Conventionally, this is done by setting p in the equation above to p=s.sub.m,a so that we get one SLSC image
from each transmit a, which again can be coherently compounded so that we get the final SLSC image.
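An SLSC computation of this kind can be sketched as follows: a normalized correlation between element pairs at each lag m, averaged over the pairs and over an axial kernel, then summed over the first few lags. The kernel bounds, lag count and channel layout are illustrative assumptions.

```python
import numpy as np

def spatial_correlation(p, m, n1, n2):
    """R_hat(m): mean normalized correlation at lag m over kernel [n1, n2).
    p is delayed channel data of shape (n_depth, M)."""
    M = p.shape[1]
    corr = []
    for i in range(M - m):
        a = p[n1:n2, i]
        b = p[n1:n2, i + m]
        corr.append(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
    return np.mean(corr)

def slsc_pixel(p, n1, n2, max_lag):
    """b_SLSC: sum of R_hat over the first max_lag lags."""
    return sum(spatial_correlation(p, m, n1, n2) for m in range(1, max_lag + 1))

# Perfectly coherent data (identical signal on every element) gives
# R_hat(m) = 1 at every lag, so b_SLSC equals the number of lags summed.
sig = np.sin(np.linspace(0.0, 10.0, 200))
p_coh = np.tile(sig[:, None], (1, 16))
val = slsc_pixel(p_coh, n1=0, n2=200, max_lag=5)   # -> 5.0
```

Note that the output is a coherence value, not an amplitude, matching the remark above that SLSC images visualize coherence rather than backscatter strength.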
[0111] We can also exploit the invention: we can sum over the T.sub.x dimension first, and then do SLSC over the R.sub.x dimension. Thus, we set p=b.sub.m.sup.Tx,
resulting in the final image
[0113] It can be appreciated that the region of high coherence from the speckle background is extended beyond only the focal region.
[0114] The invention is, however, not limited to the adaptive beamforming techniques of SLSC and MV.
[0115] Although the present invention has been described in the context of ultrasound imaging it is also suitable for use in other imaging systems, such as radar and sonar imaging systems.
[0116] In synthetic sonar and radar, the transmitter is often simpler than in the case of Coherent Plane Wave Compounding imaging. The transmitter often consists of one or a few elements, while the receiver is an array comprising many receiver elements.
[0117] In synthetic sonar and radar, a small number of transmitters (or possibly a single transmitter) is used to transmit a signal at a first location, and a large number of receivers are then used to receive the reflected signal. The small number of transmitters is then moved a short distance, and used to carry out a second transmission, which is again received by the array of receivers. This effectively forms what is known as a “synthetic long array” which is able to image a large region, as though a large array of transmitters were used, by using only a small number of transmitters but moving them many times. The above description is somewhat simplified, since typically in this kind of imaging the arrays are moved during the transmission and reception processes, not simply between each transmission-reception cycle.
[0118] The long receiver array can be treated mathematically as though it is a long array of transceivers. It can be treated as though a small region of the transceiver array, a “sub-array”, has been used to make a first transmission, and has then been used to receive the reflected signal. It can then be treated as though a second region of the transceiver has been used to make a second transmission into the same region (this corresponds to the transmission by the transmitters after they have been moved to a different position), and this data has been received with a different array of receivers (an array of the same size as the array used for the first transmission).
[0119] This process is then repeated at many transmitter locations (which can be treated as many different “sub-arrays”). Of course there is no long physical transceiver array, so these “sub-arrays” are just mathematical tools. In synthetic aperture processing (SAR and SAS), they might better be termed a “translated receiver array”. Notably, the shorter arrays into which the synthetic receiver array is divided do not need to be the same size as the physical transmitter array. A “sub-array” could be shorter than the physical array, dividing it into two, or it could be longer, combining two successive transmit-receive locations of the physical receiver array into a section twice as long as the physical array. This is likewise true for the technique used in the context of ultrasound imaging: although there is a physical transceiver array, in processing the image that array can be artificially divided into shorter sub-arrays, and the data cubes formed from these can be summed.
[0120] Ordinarily a transmission from a single position into a large region, using an unfocused beam, would form a low quality image. Adaptive beamforming algorithms, specifically adaptive beamforming based on adaptive element weightings, do not produce very good results when applied to such images. The technique described herein is useful as it allows data from a number of transmissions to be summed, and then synthetically formed into a focused transmit beam. Adaptive beamforming algorithms can then be applied very effectively to the image data.
[0121] Instead of just summing all of the received data together, as if it were a long (synthetic) array, cubes are formed for each transmission (or for every two transmissions/receptions, etc.) and summed, as described above with reference to ultrasound imaging, to give a single data cube. Summing the acquired cube along the receiver dimension gives an image with a resolution of the order of, for example, centimetres or decimetres, as if conventional DAS beamforming had been used. Alternatively, adaptive beamforming along the receiver dimension can be applied to form an adaptive image of the same scene.
[0122] It will be appreciated by those skilled in the art that the invention has been illustrated by describing one or more specific embodiments thereof, but is not limited to these embodiments; many variations and modifications are possible, within the scope of the accompanying claims.