OBJECT IMAGING WITHIN STRUCTURES
20240134041 · 2024-04-25
Assignee
Inventors
CPC classification
G01S7/53 (PHYSICS)
G01S7/539 (PHYSICS)
G01S2015/465 (PHYSICS)
G01S15/42 (PHYSICS)
International classification
Abstract
A method and system of imaging at least one passive object within a surrounding structure is provided. The surrounding structure has multiple surfaces. The method includes: transmitting an ultrasonic signal into the surrounding structure using an array of ultrasonic transmitters and receiving reflections from the passive object using an array of ultrasonic receivers. The method also includes steering the ultrasonic signal such that it includes at least one reflection off a surrounding structure surface using stored data relating to a position of at least one of the surfaces.
Claims
1. A method of imaging at least one passive object within a surrounding structure having a plurality of surfaces, the method comprising: transmitting an ultrasonic signal into the surrounding structure using an array of ultrasonic transmitters; receiving reflections from the passive object using an array of ultrasonic receivers; steering the ultrasonic signal such that it includes at least one reflection off a surrounding structure surface using stored data relating to a position of at least one of said surfaces.
2-3. (canceled)
4. The method of claim 1, comprising using the ultrasonic transmitter array and/or the ultrasonic receiver array to estimate position(s) of the surrounding structure surface(s) prior to the steering.
5. The method of claim 1, comprising updating the surrounding structure surface information during imaging or between episodes of imaging.
6. (canceled)
7. The method of claim 1, comprising simulating, for one or more reflections of the ultrasonic signal from the array, an estimated received signal for the passive object and comparing the reflections which comprise an actual received signal against the estimated received signal.
8. The method of claim 7, comprising basing the estimated received signal on a simulated image from past characteristics of the surrounding structure, a past image of the passive object in the surrounding structure, or a preliminary image of the passive object.
9. The method of claim 7, comprising determining the accuracy of the estimated signal by comparing the estimated received signal with the actual received signal.
10. The method of claim 7, comprising performing a gradient search to compare the estimated received signal with the actual received signal.
11. The method of claim 1, comprising steering the transmitted signal based on characteristics of the passive object.
12. (canceled)
13. The method of claim 1, comprising imaging using a single steered ultrasonic signal.
14. The method of claim 1, comprising using the array to transmit multiple ultrasonic signals in different directions.
15. The method of claim 14, comprising transmitting the multiple ultrasonic signals simultaneously.
16. The method of claim 1, comprising modifying the shape of the transmitted ultrasonic signal to match that of the passive object by focussing the energy of the beam predominantly onto the passive object.
17-18. (canceled)
19. The method of claim 1, comprising creating a visual representation of the passive object.
20. (canceled)
21. The method of claim 1, comprising using compressed sensing and/or sparsity methods.
22. The method of claim 1, comprising calculating a Doppler shift of the ultrasonic signal and using said Doppler shift for said imaging.
23. A system arranged to image at least one passive object within a surrounding structure having a plurality of surfaces, the system comprising: an array of ultrasonic transmitters arranged to transmit an ultrasonic signal into the surrounding structure; and an array of ultrasonic receivers arranged to receive reflections from the passive object; wherein the system is arranged to steer the ultrasonic signal using stored data relating to a position of at least one of said surfaces such that the ultrasonic signal includes at least one reflection off a surrounding structure surface.
24. The system of claim 23, having a single array comprising separate transmitters and receivers therein.
25-26. (canceled)
27. The system of claim 23, wherein the receiver array comprises Micro-Electro-Mechanical System microphones.
28. (canceled)
29. The system of claim 23, wherein the receiver array is a microphone array having a peak response in the audible frequency range; and the transmitter array has a spacing between the transmitters equivalent to a half-wavelength of a sound wave in the ultrasonic frequency range.
30. A device for imaging at least one passive object, the device comprising: an array of ultrasonic transmitters, arranged to transmit an ultrasonic signal, wherein a pair of adjacent transmitters of said array has a spacing equivalent to a half-wavelength of a sound wave in the ultrasonic frequency range; an array of microphones arranged to receive reflections from the passive object, wherein the microphones have a peak response in the audible frequency range; wherein the device is arranged to determine an image of said object using said reflections.
Description
[0071] Certain embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings.
[0097] The imaging system 2 comprises an ultrasonic array 4. The ultrasonic array 4 comprises a plurality of piezoelectric micro machined ultrasonic transducers (PMUTs) 6; the array 4 is shown in further detail in
[0098] The imaging system 2 may, for example, be affixed to a wall of a room, and the ultrasonic array 4 configured to transmit an ultrasonic signal into the room using the PMUTs 6. As will be explained in further detail below, the ultrasonic array 4 will receive reflections from any objects in the room. The ultrasonic array 4 may then steer the ultrasonic beam to ensure the reflections include at least one reflection off a wall of the room, when the locations of the walls are known.
[0100] The transmitter 16 is circular and located in the centre of the die. The receiver 18 is much smaller than the transmitter 16 and is located in the unused space in each corner of the die. Other numbers of receivers may be provided; they could be located elsewhere or more than one could be located in each corner. The transmitter could be differently shaped or located and/or multiple transmitters could be provided.
[0101] The individual dies 14 are tessellated together in a mutually abutting relationship on a common substrate (not shown) to form the array. The dies 14 are half a wavelength wide, such that the centre-centre spacings 20 of the transmitters 16 in both the X and Y directions are also half a wavelength. The receivers 18 in the respective corners of adjacent dies form respective 2×2 mini arrays 22. These mini arrays 22 are also separated by half a wavelength.
[0102] Although only six dies 14 are shown in
[0103] In operation, the ultrasonic array 4 emits a steered ultrasonic beam. Determined phase adjustments are applied to the signals from respective transmitters 16 or receivers 18 to allow them to act as a coherent array, e.g. for beamforming. Beam steering may be used on either the transmitted ultrasonic signal, the reflected ultrasonic signal, or both. In order to steer the transmitted ultrasonic signal, the determined phase adjustments are added to the signal transmitted by each transmitter 16 in the array 4 such that the resultant transmitted ultrasonic signals undergo interference, resulting in an overall signal which is transmitted in a desired direction. The received, reflected ultrasonic signal may be steered in a similar way. Determined phase adjustments may be applied to the received signals from all directions to determine the reflected signal from a single direction in the surrounding structure.
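By way of illustration only, the per-element delays (and equivalent phase adjustments) for steering a uniform linear transmitter array can be sketched as below. The element count, 40 kHz carrier, speed of sound and steering angle are all invented for the example, not taken from the embodiment:

```python
import numpy as np

# Hypothetical sketch of determined phase adjustments for a uniform linear
# array with half-wavelength spacing, as in the tessellated dies described.
def steering_delays(n_elements, spacing, angle_rad, c=343.0):
    """Per-element time delays (s) so that the emitted wavefronts interfere
    constructively in the direction angle_rad, measured from broadside."""
    positions = np.arange(n_elements) * spacing
    return positions * np.sin(angle_rad) / c

f = 40e3                                  # assumed 40 kHz ultrasonic carrier
wavelength = 343.0 / f                    # roughly 8.6 mm in air
delays = steering_delays(8, wavelength / 2, np.deg2rad(30.0))
phases = 2 * np.pi * f * delays           # equivalent phase adjustments (rad)
```

Applying `-delays` to the received channels before summing steers the receive beam in the same way.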
[0104] Most standard beamforming algorithms benefit from half-wavelength spacing of the ultrasonic elements 16, 18, as this enables each incoming wave front to be discernible from other incoming wave fronts with a different angle or wavenumber, in turn preventing the problem of grating lobes. Classical beamforming methods that benefit from half-wavelength (or tighter) spacing include (weighted) delay-and-sum beamformers, adaptive beamformers such as MVDR/Capon, direction-finding methods like MUSIC and ESPRIT, and blind source separation approaches like DUET, as well as wireless communication methods and ultrasonic imaging methods with additional constraints such as entropy or information maximisation.
[0106] The locations of the walls 28 may be determined using LiDAR scanning, or a CAD drawing of the room which is input to a CPU. Alternatively, the array 4 is used to determine the locations of the walls 28 when the room 26 is empty. The ultrasonic transmitters 16 in the array 4 emit ultrasonic signals which are reflected by the walls 28 of the room 26. These reflected signals are received by the receivers 18 in the array. The CPU then processes the data relating to the transmitted and reflected signals to determine the locations of the walls 28 from which the signals were reflected.
[0107] Once the location of the walls 28 have been determined, the imaging system 2 is used to image the object 24 in the room. A first beam 30 is directed into the near field and reflects off the object 24. The reflected beam 30 is a band limited Dirac pulse 32 which is received by the receivers 18 in the array 4, and provides limited information about the portion of the object which is in the line of sight of the transmitters and receivers in the array. Other signals, such as chirps/frequency sweeps, or other coded signals could be used, combined with suitable processing post-reception, such as pulse-compression techniques.
[0108] In order to gain further information about the dimensions and location of the object 24, a second beam 34 is then directed towards a wall 28a of the room 26. This beam 34 is reflected off the first wall 28a towards the back wall 28b. The beam 34 is then reflected towards the object 24, and the beam 34 is then further reflected off the object 24 back to the array 4. As with the first beam 30, the reflected second beam 34 is a band limited Dirac pulse 36 which is also received by the receivers 18 in the array 4.
[0109] As shown in the time-domain signal traces on the right of
[0110] The calculations below provide further detail on the processing performed by the CPU 8 on the received signals 32, 36 in order to determine the location of the object 24.
[0111] Firstly, consider the hypothetical and simplified scenario where there is a single reflector 74, a transmitter 70 and a receiver 72, as shown in
y(t) = α·l_1·δ(t − τ_1)
[0112] Where α represents the reflective strength of the target at the specified grid position, l_1 is the path loss (the longer the path, the larger the loss), and δ(t − τ_1) is the originally transmitted Dirac pulse, time-delayed by the delay factor τ_1. The path loss l_1 can be explicitly computed based on the wave propagation model, i.e. for a spherical wave in 3D it will typically be 1 divided by the travel distance squared.
[0113] The received samples y(t) can be put into a vector of length L (i.e. containing L samples) according to the equation:
assuming the signals have been sampled at the receiver from points t to t + L − 1.
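The single-reflector model above can be made concrete with a small numerical sketch. The reflectivity, range, sample rate and window length below are invented for illustration; the band-limited Dirac pulse is idealised as a single spike:

```python
import numpy as np

# One reflector at range r: the received vector y has a single non-zero
# sample at the round-trip delay tau_1, scaled by the reflectivity alpha
# and the spherical path loss l_1 = 1/r^2 (per the 3D propagation model).
c, fs = 343.0, 100_000                    # speed of sound, sample rate (assumed)
alpha, r = 0.7, 2.0                       # reflectivity and path length (assumed)
l1 = 1.0 / r**2                           # spherical-wave path loss
tau1 = r / c                              # propagation delay in seconds
L = 800                                   # window length L (number of samples)
y = np.zeros(L)
y[int(round(tau1 * fs))] = alpha * l1     # ideal Dirac modelled as a spike
```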
[0114] Multiple reflective paths are shown in
y(t) = α·l_1·δ(t − τ_1) + α·l_2·δ(t − τ_2)
where there are now two different path losses, l_1 and l_2.
[0115] More generally, there may be several different echoic paths, as illustrated by
which may also be represented as
[0116] S is a set of path index integers, e.g. S = {1, 2, 3, 4, 5, …}, representing the varying echoic path indices, sorted in order of path length. S is the echoic index set.
[0117] Next, if there are several transmitters 70 and receivers 72, as shown in
[0118] The number of hypothetical reflective grid points P may then be increased, as shown in
[0119] α_k is the strength of the k'th hypothetical reflector, for the 1st to the P'th reflector under consideration. The path lengths l_ijrk, the time delays τ_ijrk and the echoic index now depend on the positions of the transmitters 70, receivers 72, reflectors 74 and the echoic path number. This may be rewritten in matrix/vector form by defining:
[0120] And using the definition
[0121] The matrix D.sub.ij(t) is then defined as
[0122] Where L is a suitable window length for the number of samples in the vector y.sub.ij(t), and also the number of rows in D.sub.ij(t). This therefore gives the set of equations
y_ij(t) = D_ij(t)·α
[0123] Where i = 1, …, N and j = 1, …, Q, where N is the number of transmitters 70, and Q the number of receivers 72. Multiple transmit-receive pairs 70, 72 can be used to better estimate the vector of reflective coefficients α, by stacking these equations and removing the time dependence temporarily for notational convenience:
[0124] Or more generally, y = Dα, or, if additive noise is to be incorporated, y = Dα + n, where n is a vector of additive noise. It should be clear from the above that the more echoic paths there are, the taller each subblock D_ij becomes, and therefore the better conditioned the equation system becomes. In other terms, the echoic multipath situation helps improve the solvability of the equation and, in the presence of noise, improves the SNR. This equation set can be solved in any number of suitable ways, including least squares, weighted least squares, and various techniques incorporating knowledge of the noise characteristics, such as its spatio-temporal distributions.
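A toy instance of solving y = Dα + n by least squares can be sketched as below. The dictionary D, the delays and the reflectivities are invented for the example; each column of D holds the transmitted pulse (idealised as a unit spike) delayed by one hypothetical grid point's travel time:

```python
import numpy as np

rng = np.random.default_rng(0)
L, P = 64, 5                               # samples per trace, grid points
D = np.zeros((L, P))
for k, d in enumerate([5, 12, 20, 33, 47]):
    D[d, k] = 1.0                          # delayed unit spike per grid point
alpha_true = np.array([0.0, 0.8, 0.0, 0.3, 0.0])   # mostly empty scene
y = D @ alpha_true + 0.01 * rng.standard_normal(L) # y = D alpha + n
alpha_hat, *_ = np.linalg.lstsq(D, y, rcond=None)  # least-squares solve
```

With real pulses the columns of D overlap, which is where the better conditioning from extra echoic paths pays off.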
[0126] The array 4 emits a steered ultrasonic beam 40 which is focused in the near field 42. The beam 40 may also be steered in post-processing of the reflected signal to obtain a steered received signal. The beam 40 is reflected from the front of the complex object 38 back towards the array 4 where the reflected beam is received. However, this only provides information about the side of the object which is close to, and facing the array 4.
[0127] Once sufficient data has been gathered using direct reflections from the object 38, in order to image the remainder of the object, the array 4 steers the ultrasonic beam towards the walls 28 of the room 26, away from the shortest path as shown in
[0129] The beam 44 will reflect from the object 38 along a different path (not shown) towards the wall 28, and from there back to the ultrasonic array 4. The time delay in this beam being reflected back to the array 4, along with the predetermined locations of the walls, is used by the CPU to gain further information about the size, shape and location of the object 38.
[0130] In open acoustic scenes, such as that of
min_α ‖y − Dα‖₂²
[0131] This is one way of solving the above-mentioned problem of an open acoustic scene. However, given the dimensions of D (assuming it is made up from a tightly spaced grid of hypothetical reflectors), there will typically also be infinitely many such solutions α, and so it makes sense to try to pin down the most physically likely of those. One approach for this is the compressive sensing approach, where one instead tries to solve
min_α ‖α‖₁ subject to y = Dα (Eq. A)
i.e. to find the solution to the problem that has the smallest L1-norm. This is frequently a good approximation to the best L0-norm solution, which is the solution with the fewest non-zero coefficients. A high number of zeros reflects the previously known underlying hypothesis that the scene is largely full of zeros, or non-reflective points, i.e. empty space.
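One standard way to approach Eq. A numerically is iterative soft-thresholding (ISTA) on the Lagrangian relaxation min_α ½‖y − Dα‖₂² + λ‖α‖₁. The sketch below is illustrative only; the dictionary size, sparsity pattern and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
M, P = 30, 50                               # fewer measurements than unknowns
D = rng.standard_normal((M, P)) / np.sqrt(M)
alpha_true = np.zeros(P)
alpha_true[[7, 23, 41]] = [1.0, -0.8, 0.6]  # sparse: scene mostly empty
y = D @ alpha_true

def ista(D, y, lam, n_iter):
    """Iterative soft-thresholding for min_a 0.5*||y - D a||^2 + lam*||a||_1."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # safe step (inverse Lipschitz)
    for _ in range(n_iter):
        g = a + step * D.T @ (y - D @ a)    # gradient step on the data term
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return a

alpha_hat = ista(D, y, lam=0.005, n_iter=5000)
```

Despite the system being underdetermined, the L1 penalty recovers the sparse scene, which is the point of Eq. A.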
[0132] More generally, the dimensions of the equation system can be such that the number of coefficients in α representing the entire acoustic scene can be in the hundreds of thousands or more, so any dimension reduction will drastically save compute time and complexity. To this end, if some of the coefficients in α can be known or computed before others, using means simpler than a general, large inversion, both time/CPU resource consumption and accuracy can be improved. The equation may be subdivided into
y = Dα = D_u·α_u + D_k·α_k
[0133] Where
y − D_k·α_k = D_u·α_u
[0134] Which can be solved for α_u, which will have fewer dimensions than the original problem in which the whole of α was to be estimated. Approaches to (easily) obtaining some coefficients involve the following: first, in
[0135] Next, referring to
[0136] Now, utilizing the previously known α_k samples, a new equation system can be created with 29 unknowns, and those can be estimated. Then, there are 29+22=51 known/estimated samples that can be utilized as more samples are obtained in the cut-off approach. Overall, a sequence of estimators is driven, each with lower dimensions than the full imaging problem, to gradually create a full image of the scene. Any estimation step can utilize any of the aforementioned techniques, including compressed sensing, to obtain physically plausible estimates of the acoustic scene.
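The dimension reduction of [0132]–[0134] can be illustrated numerically: subtract the contribution of the known coefficients α_k, then solve a smaller system for α_u only. All matrices, indices and coefficients below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.standard_normal((40, 10))          # full dictionary (invented)
alpha = rng.standard_normal(10)            # full coefficient vector
y = D @ alpha                              # noiseless observation

known = [0, 1, 2, 3]                       # indices already known/estimated
unknown = [i for i in range(10) if i not in known]
D_k, D_u = D[:, known], D[:, unknown]
residual = y - D_k @ alpha[known]          # y - D_k alpha_k = D_u alpha_u
alpha_u, *_ = np.linalg.lstsq(D_u, residual, rcond=None)
```

The reduced solve recovers the remaining coefficients while inverting a matrix with fewer columns than the full problem.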
[0137] Of course, it is not essential to use Eq. A above. Any other suitable method utilizing the sparsity of the scene could be employed, using other norms than L1/L0, and other norms or measures of sparsity, such as information-theoretic approaches optimizing properties like the distribution of coefficients, e.g. super-Gaussian distribution properties. Bayesian approaches, such as Bayesian sparse regression, could also be employed; see e.g. https://arxiv.org/abs/1403.0735.
[0138] The direct path reflections shown in
[0139] Therefore, in order to image the occluded object, an ultrasonic beam 50 is directed towards a wall 28, the location of which is known. The beam 50 is reflected off the wall 28 directly towards the occluded object 46, without being reflected off the first object 38. The beam 50 will therefore be reflected from the occluded object 46 back to the wall 28 along a different path (not shown), and to the array 4, where the received echoes are analysed by the CPU 8 to image the objects 38, 46. The indirect ultrasonic reflections therefore allow for imaging of objects in the room which are occluded from line of sight imaging from the array by other objects in the room.
[0140] The calculations below provide further modifications on the processing described above which is performed by the CPU 8 on the received signals 40, 44, 50 in order to determine the location of the objects 38, 46. These modified calculations remove the occluded paths 50 from the data set in order to reduce the computational load on the CPU 8.
[0141] The general model, y = Dα, does not incorporate effects such as occlusions; it simply assumes that sound propagates unhindered through all the reflective voxels. Referring back to the equation
this problem can be managed by using knowledge of the first pixels/reflectors to effectively rule out potential echoic paths in the set S_ij. Referring now to
[0142] Finally, knowledge of previously (sequentially) estimated reflectors can be used to steer the acoustic beam in certain directions and away from others. In
[0144] The reflected beams 40, 44, which may be described as band-limited Dirac pulses, are input into the equations described above, and the equation y = Dα is inverted to determine α, which describes the reflectivity at all grid points and can therefore be used to provide an image of the object 38.
[0145] At step 56, this inverse equation is modified to remove blocked paths, such as path 48 shown in
[0147] At step 64, the equation y = Dα is solved for the nearfield reflected beams 40, 44. This gives information relating to the location of the object 38, and the beam steering is therefore modified in order to further image the object 38. Through an iterative procedure of steering a beam, receiving the reflected signal, determining information about the object 38, and modifying the direction of the beam, extensive information about the object 38 location and shape may be obtained.
[0148] As with the method described in
[0149] Referring back to
[0150] Referring to
[0151] where c (or v) is the speed of sound.
[0152] The microphones 76 can be placed anywhere in the room. The location of microphones 76 can be computed using any suitable means. The ultrasonic array 75 may be used to determine the position of the speaker 78, and/or microphones 76 using ultrasound.
[0153] Assuming the target person 78 is the only active audio source in the room, the received signals y.sub.1(t), y.sub.2(t), y.sub.3(t), . . . y.sub.N(t) can be expressed as
y_i(t) = s(t − τ_i) + n(t)
[0154] Where s(t) is the spoken word, i.e. the sound produced by the target person, and n(t) is the sensor noise. An alternative way of expressing this is
y_i(t) = s(t) * δ(t − τ_i) + n(t)
[0155] Where δ(t) is the Dirac delta function. Both equations essentially say that each microphone receives an appropriately time-delayed version of the sound output from the target person. For simplicity of explanation, no attenuation terms have been included, but they can be readily incorporated as will be appreciated by those skilled in the art.
[0156] A straightforward way to recover the signal-of-interest s(t) is by delaying and summing, i.e.
ŝ(t) = Σ_i y_i(t + τ_i) = N·s(t) + Σ_i n(t + τ_i)
[0157] Where the first part becomes an amplification of the source s(t) (added up N times), and the second part becomes a sum of incoherent noise components, i.e. the parts of the noise component that do not sum up constructively. The overall result is an amplification of the signal-to-noise ratio via delay-and-sum beamforming. In the frequency domain, this could be expressed as:
Y_i(ω) = S(ω)·D_i(ω) + N(ω)
[0158] Where D_i(ω) is the phase delay associated with the time delay τ_i for the specific frequency ω. Note that D_i(ω) has unit modulus (i.e. it only phase-delays the signal; it does not amplify or attenuate it, in accordance with the assumption explained above). In the frequency domain, the delay-and-sum recovery strategy thus becomes:
Ŝ(ω) = Σ_i D_i(ω)*·Y_i(ω)
[0159] Where the effect of D_i(ω)* is to cancel out the effect of D_i(ω), to once again get an amplification of the signal relative to the noise. This gives rise to the term phased array, i.e. the phase information in some or all frequency bands is used constructively to recover the signal of interest. Note also that, in the case of an interfering signal being added to the mix, i.e.
Y_i(ω) = S(ω)·D_i(ω) + Z(ω)·F_i(ω) + N(ω)
[0160] If Z(ω) is the interfering signal, originating at some other location q and being delayed towards each of the microphones 76 via the individual time delays represented as F_i(ω), then the same delay-and-sum strategy would also serve to reduce the effect of the interfering signal in the output result relative to the signal of interest, i.e. the strategy would use the phase knowledge to improve the signal-to-noise-and-interference ratio.
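A minimal time-domain sketch of the delay-and-sum recovery described above, with invented integer-sample delays and noise levels (np.roll stands in for the ideal delay operator):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 8, 500
s = rng.standard_normal(L)                 # stand-in for the speech s(t)
delays = [0, 3, 7, 12, 4, 9, 2, 6]         # tau_i in samples (assumed known)
# Each microphone sees a delayed copy of s plus independent sensor noise
mics = [np.roll(np.concatenate([s, np.zeros(20)]), d)
        + rng.standard_normal(L + 20) for d in delays]
# Undo each tau_i and average: s adds coherently, the noise incoherently
est = np.mean([np.roll(m, -d) for m, d in zip(mics, delays)], axis=0)[:L]

def snr(x, ref):
    return float(np.var(ref) / np.var(x - ref))
```

The averaged output should show roughly an N-fold SNR improvement over a single microphone, as the text explains.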
[0161] Other, more sophisticated techniques exist for signal source enhancement. Some take into account the positions and/or statistical acoustic properties of an interfering source, rather than simply smearing it out to reduce its impact, as in the above example. Minimum Variance Distortionless Response (MVDR), or Capon, beamforming is but one example.
[0162] Moreover, if the acoustic transfer functions, or impulse responses, from each source 78 to each microphone 76 are known, better results may be obtained, because impulse responses can take into account not merely the direct path of the sound from the person 78 towards each of the microphones 76, but also any subsequent echo coming from the sound impinging on a wall 82, ceiling or other object. Letting H_ij(ω) denote, in the frequency domain, the frequency response from source j, j = 1, … Q, to microphone number i, then we have, assuming S_j(ω) to be the source signal from the j'th source:
Y_i(ω) = Σ_j H_ij(ω)·S_j(ω) + N_i(ω)
[0163] This can be put into vector-matrix notation by stacking the successive microphone inputs in a vector as:
Y(ω) = H(ω)S(ω) + N(ω)
[0164] Here H(ω) = {H_ij(ω)}, S(ω) = [S_1(ω), … S_Q(ω)]^T, Y(ω) = [Y_1(ω), … Y_N(ω)]^T and N(ω) = [N_1(ω), … N_N(ω)]^T. A similar formulation exists in the time domain, where the time-domain impulse responses h_ij(t) (which are convolved with the source signals s_j(t)) build up a block Toeplitz matrix system.
[0165] One can now compute an estimate of the sources as:
Ŝ(ω) = H^+(ω)Y(ω)
[0166] Where H^+(ω) is a suitable inverse matrix of H(ω). This could be a Moore-Penrose inverse, a regularised inverse to match the noise level, such as Tikhonov regularization, or a generalized inverse utilizing knowledge of the noise characteristics, such as a Bayesian estimator. Whether used in the time or frequency domain, any of the following techniques can equally well be used:
[0167] Minimum Mean Square Error (MMSE) receiver strategies, Blind Source Separation or Independent Component Analysis, Blind Source Separation approaches utilizing statistical properties related to the signal-of-interest, Sparse methods such as Bayesian models with Gaussian Mixture Models or L1-based regularization methods such as in compressed sensing, or any other suitable technique that utilizes phase information.
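As an illustration of one such "suitable inverse", the sketch below forms a Tikhonov-regularised inverse, H^+ = (Hᴴ H + λI)⁻¹ Hᴴ, at a single frequency bin. The 4-microphone, 2-source channel matrix and the source spectra are invented; the observation is noiseless for clarity:

```python
import numpy as np

rng = np.random.default_rng(7)
# Invented complex channel matrix H(w) for 4 microphones and 2 sources
H = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
S = np.array([1.0 + 0.5j, -0.3 + 0.2j])    # true source spectra at this bin
Y = H @ S                                   # Y(w) = H(w) S(w), no noise

def tikhonov_inverse(H, lam):
    """H^+ = (H^H H + lam*I)^-1 H^H; lam trades noise gain for fidelity."""
    n = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + lam * np.eye(n), H.conj().T)

S_hat = tikhonov_inverse(H, 1e-9) @ Y       # lam -> 0 approaches Moore-Penrose
```

With noisy observations, λ would be raised to match the noise level, at the cost of some bias.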
[0168] In practice, this means that, in accordance with embodiments of the invention, audio capture can be improved in two important ways: first, the location of the person 78 in the room 80, i.e. the position p, can be estimated. Moreover, a statistical map of his or her range of movements and likely positions can be computed, even if he or she is not speaking, so that the audio signal processing can be optimized for this purpose. Secondly, the location of the walls 82 and ceilings can be used to compute the impulse response functions H(ω) above, which is what enables the sound to be focused using the ceilings, walls 82 and/or other reflective items. So the information captured in the ultrasound domain can usefully be employed in the audio domain.
[0169] Turning now to transmission, such as in a directed hi-fi sound reproduction system as shown in
s_j(t) = s(t) * δ(t + τ_j)
[0170] In which case, the signal received at the position of the target person 78 would be:
i.e. an amplification of the signal at the focus point p where the person 78 is. If the person 78 moved to another location p′, then there would not be the same amplification, because the terms δ(t + τ_j) * δ(t − τ_j) would be replaced by δ(t + τ_j) * δ(t − τ′_j) for some τ′_j, which would generally not combine to become δ(t).
[0171] Instead, the effect would be a smearing out of the outputs and effective lowering of the N-time amplification observed at p. A parallel argument can be made in the frequency domain, making it apparent that the system is relying on phase delays of the transmit signals to obtain the local focussing effect.
[0172] Also on the transmit side, it is possible to utilize detailed knowledge of the impulse response function to create even better focussing utilizing reflectors like walls 82 and ceiling or other large objects. For instance, if h.sub.ij(t) is the impulse response between each transmitter j and each target i, then the sound received at each target i can be jointly modelled as:
[0173] Or
y=Hs
[0174] The matrices H_ij are the aforementioned Toeplitz matrices containing the impulse responses h_ij(t) as their shifted rows, s_j(t) is the sampled vector of samples output from speaker j, and y_i(t) the sound that is received at the i'th target location, for i = 1, … Q.
[0175] The speakers 84 can be placed anywhere in the room. The location of speakers 84 can be computed using any suitable means. The ultrasonic array 75 may be used to determine the position of the user 78, and/or speakers 84 using ultrasound as previously described herein.
[0176] It is now possible to select transmit signals {s_j(t)} so that the received signals become the desired ones, i.e. such that a specific sound is observed at some location i and an entirely different sound at location j, even though the original transmit signals {s_j(t)} all contain mixes of those specific sounds. One straightforward example is to let s = H^+y, where H^+ denotes the Moore-Penrose inverse of H. More sophisticated techniques capable of dealing with noise robustness can be envisaged too, as explained above for the receive/sound capture scenario. Note that in the above, the entire impulse response, i.e. not just the direct time-of-flight path, can be utilized for audio focussing.
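A minimal numerical sketch of the s = H^+y idea follows. The sizes are invented, and a single-sample snapshot per target stands in for full stacked waveforms; with more speakers than target points, H has full row rank and the Moore-Penrose inverse reproduces the desired sounds exactly at the targets:

```python
import numpy as np

rng = np.random.default_rng(5)
Q, N = 3, 6                                # 3 target locations, 6 speakers
H = rng.standard_normal((Q, N))            # invented stacked channel model
y_target = np.array([1.0, 0.0, -0.5])      # desired sound at each target:
                                           # silence enforced at target 2
s = np.linalg.pinv(H) @ y_target           # transmit signals s = H^+ y
```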
[0177] In some situations the exact position of the person 78 onto which the sound is to be focused may not be known, i.e. there is uncertainty connected with the position p for that person 78, or there may be multiple persons 78 present. In the receive scenario of
[0178] Again, as with the audio receiving situation of
[0179] The imaging approach where ultrasound is used to map an environment by utilizing reflections from the enclosure 86 is shown in
[0180] Referring back to the equation y = Hs, the (stacked) transmit signals held in s may be chosen in such a way that a desired signal set in the (stacked) vector y is at least approximately obtained. The problem of choosing the sources s may be reformulated as:
[0181] Where H_k denotes the k'th block row of the matrix H, i.e. H_k = [H_k1, …, H_kN]. Weightings can be introduced to the right-hand term, i.e. to create a weighted cost function
[0182] Where the matrices {W_k} are typically diagonal matrices with positive entries. By choosing these weight matrices carefully, certain points in time and space can be set where there isn't any energy. For instance, for a specific hypothetical point k with an associated target signal y_k, set y_k = 0 and the associated W_k = μ·I, where μ is a large positive number.
[0183] At the same time, another vector y_l can be chosen, for l ≠ k, which is a zero-padded spike or sinc signal, with a suitable weight matrix W_l = ν·I. It may also be desirable to take less account of energy that arrives at a certain point after a given time, but to take greater account of the fact that there is no energy at this point or other points early on. This is equivalent to steering energy away from an object, see
[0187] The Applicant has also appreciated that the received signals in accordance with any of the foregoing aspects or embodiments of the invention can be processed to take into account Doppler information. This may enhance imaging performance even further.
[0188] There are several ways in which Doppler information can be used to enhance imaging performance. The following mathematics illustrates one way in which Doppler can explicitly be accounted for during processing.
[0189] Returning to the equation:
y(t) = α·l_1·δ(t − τ_1)
[0190] Where it is assumed that a Dirac pulse had been transmitted and has been received at a receiver as a time-series y(t).
[0191] More typically and as mentioned earlier in this application, coded signals may be used. Let x(t) be the bandlimited, linear output signal, which may for instance be a chirp signal.
[0192] Then y(t) can be obtained through the following:
y_0(t) = h(t) * x(t) + n(t),
y(t) = x(−t) * y_0(t) = x(−t) * h(t) * x(t) + x(−t) * n(t)
[0193] If x(−t) * n(t) = n_2(t):
y(t) = h(t) * x(−t) * x(t) + n_2(t)
[0194] If x(−t) * x(t) = δ_B(t), where δ_B(t) is the bandlimited version of the Dirac impulse, within the frequency band B defined by the signal x(t):
y(t) = h(t) * δ_B(t) + n_2(t)
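The pulse-compression property x(−t) * x(t) = δ_B(t) can be checked numerically: correlating a linear chirp with itself concentrates its energy into a narrow band-limited spike. The chirp parameters below are illustrative assumptions only:

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.005, 1 / fs)            # 5 ms transmit window, 500 samples
f0, f1 = 5_000.0, 20_000.0                 # swept band B (assumed)
rate = (f1 - f0) / 0.005                   # linear chirp rate, Hz/s
x = np.sin(2 * np.pi * (f0 * t + 0.5 * rate * t**2))

# Convolving with x(-t) is correlation with x: matched filtering
acf = np.correlate(x, x, mode="full")      # x(-t) * x(t)
peak = int(np.argmax(np.abs(acf)))         # should sit at zero lag
```

The sharp zero-lag peak with low sidelobes is the δ_B(t) approximation that the imaging model relies on.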
[0195] Now, if a signal x(t) is transmitted and it bounces off a moving object, a major effect will be that of effectively stretching or compressing the transmit signal x(t) upon reception. This can be thought of in a slightly different way: the object staying still, but the transmit signal x(t) being stretched, or scaled in time so that it is now x(kt), where k is a constant positive number, typically close to 1.
[0196] This gives:
y_0(t) = h(t) * x(kt) + n(t),
y(t) = x(−t) * y_0(t) = x(−t) * h(t) * x(kt) + x(−t) * n(t) = h(t) * x(−t) * x(kt) + n_2(t)
[0197] However, the property x(−t) * x(t) = δ_B(t) is now missing. This mismatch can be taken advantage of, to construct a set of x(−t) replacements which will focus the signal processing and subsequent image generation process onto objects with a specific Doppler shift only.
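This mismatch is easy to observe numerically. The sketch below uses an illustrative chirp and a hypothetical 5% Doppler stretch, comparing the matched-filter peak for a static echo against a time-scaled one:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 0.1, 1 / fs)
f0, f1 = 50.0, 200.0
rate = (f1 - f0) / t[-1]             # chirp rate of the transmit signal x(t)

def scaled_chirp(k):
    """x(k*t): the transmit chirp stretched/compressed by Doppler scale k."""
    tk = k * t
    return np.sin(2 * np.pi * (f0 * tk + 0.5 * rate * tk ** 2))

def compressed_peak(k):
    """Peak of x(-t) * x(k*t): the pulse-compressed response at scale k."""
    corr = np.convolve(scaled_chirp(1.0)[::-1], scaled_chirp(k))
    return float(np.max(np.abs(corr)))

static = compressed_peak(1.0)        # matched: full autocorrelation peak
moving = compressed_peak(1.05)       # 5% stretch: the compressed peak degrades
```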
[0198] Now, to filter out and separate objects with a certain Doppler shift, a family of functions {x̃_l(t)} can be designed which approximately satisfies the criterion:

x̃_l(−t) * x(k·t) ≈ δ_B(t) for k = l, and ≈ 0 otherwise  (*)
[0199] Then, a single slice of the imaging problem can be created by pre-convolving the received signal with any of the signals in the family. For instance:
ỹ_l(t) = x̃_l(−t) * y_0(t) = x̃_l(−t) * h(t) * x(k·t) + x̃_l(−t) * n(t)
[0200] If x̃_l(−t) * n(t) = n_3(t):

ỹ_l(t) = h(t) * x̃_l(−t) * x(k·t) + n_3(t)

with x̃_l(−t) * x(k·t) = δ_B(t) for k = l, and 0 otherwise.
[0201] By picking the right Doppler-speed-related function x̃_k(−t), the objects in the scene with a specific Doppler shift can effectively be captured, while the others are filtered out. Imaging can then continue, assuming that the output driving signal was in fact the bandlimited Dirac signal δ_B(t).
[0202] The family of functions defined in Equation (*) can be derived in any number of ways. One specific way is to (a) resample the function x_i(k·t) with different values of k to generate a family of vectors x_k, momentarily dropping the index i, which is constant when there is only a single transmitter; and (b) use each of those vectors to generate an associated Toeplitz matrix X_k with the vector x_k as its (flipped) elements.
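Steps (a) and (b) can be sketched as follows. The chirp parameters, the Doppler-scale grid and the filter length are hypothetical, and the convolution-matrix helper is a straightforward Toeplitz construction chosen for illustration:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 0.05, 1 / fs)
f0, f1 = 50.0, 200.0

def chirp(k):
    """(a) Resample: evaluate the transmit chirp at scaled times k*t."""
    tk = k * t
    return np.sin(2 * np.pi * (f0 * tk + 0.5 * (f1 - f0) / t[-1] * tk ** 2))

def conv_matrix(v, n):
    """(b) Toeplitz matrix X with (flipped) copies of v in its columns,
    so that X @ h equals the full convolution of v with a length-n h."""
    X = np.zeros((len(v) + n - 1, n))
    for j in range(n):
        X[j:j + len(v), j] = v
    return X

scales = [0.95, 1.0, 1.05]           # hypothetical Doppler-speed grid
X = {k: conv_matrix(chirp(k), 32) for k in scales}
```

Multiplying X_k by a candidate filter vector then reproduces the convolution of x_k with that filter, which is what the filter-design requirements that follow operate on.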
[0203] Then vector approximations of the filters x̃_k(−t) can be computed as vectors h_k. This is achieved by setting up the requirements:

X_k h_k ≈ d, and X_r h_k ≈ 0 for r ≠ k
[0204] Where d is a vector of zeros with the exception of the centre element, which is 1, or alternatively d represents a sampled, bandlimited version of the Dirac function limited to the frequency band of interest. More specifically, the following function can be minimised over h_k:

Σ_{r=1}^{K} ‖X_r h_k − w_rk‖²  (+)
[0205] Where w_rk is 0 if r ≠ k, a sampled Dirac vector if r = k, and K is the number of relevant Doppler speed indices. There are also separation strategies other than filtering: for instance, deconvolution approaches could be used, the optimization problem above could be solved using other norms, or deep-learning approaches could be used to design optimal filters.
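As a concrete, purely illustrative sketch, the minimisation can be solved as one stacked least-squares problem. All dimensions, the chirp, and the three-point Doppler grid are assumptions made for this demonstration:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 0.05, 1 / fs)
f0, f1 = 50.0, 200.0

def chirp(k):
    tk = k * t
    return np.sin(2 * np.pi * (f0 * tk + 0.5 * (f1 - f0) / t[-1] * tk ** 2))

def conv_matrix(v, n):
    X = np.zeros((len(v) + n - 1, n))
    for j in range(n):
        X[j:j + len(v), j] = v
    return X

n = 64                                # filter length (illustrative)
scales = [0.9, 1.0, 1.1]              # K = 3 Doppler speed indices
Xs = [conv_matrix(chirp(k), n) for k in scales]
m = Xs[0].shape[0]

k_idx = 1                             # design h_k for the k = 1.0 scale
w = [np.zeros(m) for _ in scales]     # w_rk = 0 for r != k ...
w[k_idx][m // 2] = 1.0                # ... and a centred spike for r = k

# Minimise sum_r ||X_r h_k - w_rk||^2 by stacking into one lstsq call.
A = np.vstack(Xs)
b = np.concatenate(w)
h_k, *_ = np.linalg.lstsq(A, b, rcond=None)

# The designed filter responds to its own Doppler scale more strongly
# than to the neighbouring scales it was asked to suppress.
resp = [float(np.max(np.abs(Xr @ h_k))) for Xr in Xs]
```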
[0206] More sophisticated filtering or deconvolution strategies could also be employed by assuming that only a few Doppler shifts are present at the same time, for example that most objects are static and only a few are moving at relatively high and known speeds. This eases the pressure on the criterion (+), because the filter h_k no longer has to be orthogonal to all the other filters in the family, only to those whose speeds match the objects actually present, i.e. a specific subset of the filter family.
[0207] The following problem could then be solved, minimising over h_k:

Σ_{r∈S} ‖X_r h_k − w_rk‖²
[0208] Where S is a subset of relevant speed indices, with |S|<K. The criterion will then be better fulfilled, getting closer to the design goal set up in (*).
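Restricting the stack to the subset S can only lower the attainable residual on those rows. A small numerical check, with random matrices standing in for the X_r and arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
K, m, n = 8, 40, 24                    # speed indices, rows per block, filter length
Xs = [rng.standard_normal((m, n)) for _ in range(K)]   # stand-ins for the X_r

S = [0, 3, 5]                          # assumed-present speed indices, |S| < K
k_idx = 3                              # design for a speed index inside S

def target(r):
    w = np.zeros(m)
    if r == k_idx:
        w[m // 2] = 1.0                # spike only for the matching index
    return w

# Solve only over the subset S of speed indices.
A_S = np.vstack([Xs[r] for r in S])
b_S = np.concatenate([target(r) for r in S])
h_S, *_ = np.linalg.lstsq(A_S, b_S, rcond=None)
res_S = float(np.sum((A_S @ h_S - b_S) ** 2))

# Solve over all K indices for comparison.
A_all = np.vstack(Xs)
b_all = np.concatenate([target(r) for r in range(K)])
h_all, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
```

The subset solution h_S fits the rows in S at least as well as the full-stack solution h_all does, reflecting the relaxed orthogonality burden.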
[0209] Multiple other strategies for steering both transmit and receive beams exist in the literature, see e.g. Demi, L., Practical guide to ultrasound beam forming: beam pattern and image reconstruction analysis, Applied Sciences, 2018, 8, 1544.
[0210] It will be appreciated by those skilled in the art that the invention has been illustrated by describing one or more specific embodiments thereof, but is not limited to these embodiments; many variations and modifications are possible, within the scope of the accompanying claims. For example, the CPU may not be local to the imaging system and may instead be an external hub used for work-sharing, with data sent between the imaging system and hub via Bluetooth signals.